What you can make on charactr, what you can't, and how we enforce it. Strict on real-person likenesses; clear on satire.
Effective: April 29, 2026 · Last updated: April 29, 2026
This AI & Content Policy ("Policy") sets the rules for what you can create on the charactr Service (the iOS App, the web app at charactr.xyz, and related products and APIs). It is part of, and incorporated into, our Terms of Service.
charactr is an AI film studio for talking AI characters. Our position on real-person likenesses, deceptive content, and harm to minors is unambiguous and stricter than the legal floor. Reports go to rights@charactr.com.
You may not create or use AI characters that depict real, identifiable people without their documented consent. The prohibition covers images, voices, and combined audiovisual likenesses.
Legal frameworks that may apply: state right-of-publicity statutes (including Cal. Civ. Code §3344 and Cal. Civ. Code §3344.1 (post-mortem); N.Y. Civ. Rights Law §§50–51 and §50-f (post-mortem); Tex. Prop. Code §26.001; 765 ILCS 1075; Fla. Stat. §540.08); the Tennessee ELVIS Act (Tenn. Code Ann. §47-25-1101 et seq., effective July 1, 2024) prohibiting unauthorized AI replicas of voice and likeness; the federal Lanham Act §43(a) (15 U.S.C. §1125(a)) for false endorsement; and proposed federal "NO FAKES" legislation. We act on credible reports of likeness misuse without requiring a registered trademark.
You may not create or use AI characters that depict minors in sexual, sexualized, romantic, or violent contexts. You may not create content that sexualizes real or apparent minors, ever, regardless of narrative framing. Apparent age is judged by what the output depicts, not by what you tell us the character's age is.
Legal frameworks that may apply: 18 U.S.C. §2251 et seq. (sexual exploitation of children); 18 U.S.C. §2256 (definitions, including 18 U.S.C. §2256(8)(B)–(D) covering computer-generated sexual depictions of minors); 18 U.S.C. §1466A (obscene visual representations of the sexual abuse of children); the federal PROTECT Act; the proposed Kids Online Safety Act (KOSA); state child-protection laws; and analogous laws in other jurisdictions. Violations are reported to the National Center for Missing & Exploited Children (NCMEC) where required by 18 U.S.C. §2258A.
You may not generate sexual content depicting any real, identifiable person without their documented consent, including deepfake-style intimate imagery. This applies to "revenge"-style content and to non-consensual sexualized depictions of public figures.
Legal frameworks that may apply: state non-consensual intimate-imagery laws; the federal Take It Down Act (passed 2025) requiring removal of non-consensual intimate imagery, including AI-generated content; civil tort claims for invasion of privacy and intentional infliction of emotional distress.
You may not generate content that implies a real person, brand, or organization is endorsing, sponsoring, recommending, or affiliated with anything they have not authorized. This includes fake "ad reads," fake testimonials, synthetic spokesperson content, and deepfake brand mascots.
Legal frameworks that may apply: Lanham Act §43(a) (15 U.S.C. §1125(a)) (false endorsement and false advertising); FTC Act §5 (unfair or deceptive acts); FTC Endorsement Guides (16 C.F.R. Part 255); state consumer-protection statutes; the EU Unfair Commercial Practices Directive (Directive 2005/29/EC).
You may not generate content designed to make viewers believe a real, identifiable person said or did something they did not. This includes fabricated news footage and synthetic political speech attributed to real candidates or officials.
Legal frameworks that may apply: California AB 2655 (large-online-platform removal and labeling of election deepfakes); California AB 2839 (prohibitions on distributing materially deceptive election content); Texas SB 751; Minnesota's deepfake election law; the EU AI Act Article 50; analogous state and national election-integrity laws.
You may not use the Service to dehumanize people based on protected characteristics; to target individuals or groups for harassment, threats, defamation, or stalking; to instruct or facilitate serious physical harm, weapons manufacture, attacks on infrastructure, or self-harm; or to produce illegal content of any kind.
Voice is a protected aspect of identity. You may not clone or imitate the voice of an identifiable real person without documented consent.
Legal frameworks that may apply: the Tennessee ELVIS Act (specifically protecting voice as a property right); state right-of-publicity statutes that include voice; California AB 2602 (digital-replica consent for performers); union collective-bargaining provisions (e.g., SAG-AFTRA digital-replica clauses).
charactr labels every output as AI-generated within the Service and embeds provenance metadata in shareable exports. This labeling supports our compliance with AI-transparency and disclosure requirements, including the EU AI Act Article 50.
You may not strip, alter, or obscure provenance markers on charactr outputs.
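To make the no-stripping rule concrete, here is a minimal sketch of how provenance metadata can be cryptographically bound to an export. The field names and function names are hypothetical, for illustration only; they are not charactr's actual schema, and real provenance systems use richer, signed standards such as C2PA Content Credentials.

```python
import hashlib

def build_provenance_manifest(export_bytes: bytes, model_id: str) -> dict:
    # Hypothetical field names; a real manifest (e.g. C2PA) is richer,
    # standardized, and cryptographically signed.
    return {
        "generator": "charactr",
        "ai_generated": True,                 # explicit AI-content disclosure
        "model_id": model_id,
        "content_sha256": hashlib.sha256(export_bytes).hexdigest(),
    }

def verify_manifest(export_bytes: bytes, manifest: dict) -> bool:
    # The hash binds the manifest to the exact bytes: altering the export
    # (or the manifest) breaks the binding and verification fails.
    return manifest["content_sha256"] == hashlib.sha256(export_bytes).hexdigest()

clip = b"\x00illustrative-export-bytes"
manifest = build_provenance_manifest(clip, "model-v1")
print(verify_manifest(clip, manifest))        # intact export verifies
print(verify_manifest(clip + b"x", manifest)) # tampered export does not
```

The point of the binding is that stripping or editing provenance is detectable after the fact, which is why removal of the markers is itself a Policy violation.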
We maintain an evolving reserved list of names and identifiers that the Service will not generate.
The reserved list is not a comprehensive enumeration of what is prohibited — content can violate this Policy even when no specific name is blocked. Attempts to evade the list (misspellings, descriptors, "the [profession] from [country]" prompts) are themselves a violation.
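As a rough illustration of why trivial misspellings do not evade the list, the sketch below normalizes a prompt and fuzzy-matches it against a hypothetical blocklist. The names and threshold are invented for the example; charactr's actual list and matching logic are private and more sophisticated.

```python
import difflib
import re

RESERVED = {"jane example", "john sample"}  # hypothetical entries only

def normalize(name: str) -> str:
    # Lowercase and strip punctuation/digits so cosmetic tweaks
    # don't defeat an exact comparison.
    return re.sub(r"[^a-z ]", "", name.lower()).strip()

def is_blocked(prompt_name: str) -> bool:
    candidate = normalize(prompt_name)
    # Close fuzzy matches (e.g. one-letter misspellings) are treated the
    # same as exact hits, mirroring the Policy's anti-evasion rule.
    return bool(difflib.get_close_matches(candidate, RESERVED, n=1, cutoff=0.85))

print(is_blocked("Jane Example"))  # exact hit
print(is_blocked("Jan Example"))   # misspelling still caught
print(is_blocked("Alex Nobody"))   # not on the list
```

Descriptor-based evasion ("the [profession] from [country]") cannot be caught by string matching at all, which is why the Policy treats the attempt itself, not just a list hit, as a violation.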
Satire and parody are welcome on charactr. They are not impersonation, and the line between the two is audience perception: if a reasonable viewer could be fooled into believing the content is genuine, it is not satire. It is deception, and likely actionable as false endorsement under the Lanham Act §43(a) and analogous laws.
To report content that violates this Policy, email rights@charactr.com with the report type in the subject line — LIKENESS, TRADEMARK, COPYRIGHT, MINOR-SAFETY, NCII (non-consensual intimate imagery), VOICE, ELECTION, or OTHER — and a link or in-app identifier for the content. We aim to review reports within 7 business days. Minor-safety, NCII, and election-deepfake reports are prioritized and reviewed urgently.
For DMCA copyright takedowns specifically, see the DMCA & IP Reporting page.
charactr terminates the accounts of users who repeatedly infringe the rights of others, in accordance with 17 U.S.C. §512(i). We also terminate accounts on a single violation when the conduct is severe enough — including any minor-safety violation, malicious deepfake of a real person, NCII, or coordinated abuse of the platform.
We may remove content, restrict features, suspend accounts, or terminate access for violations of this Policy. Where the law requires it (including 18 U.S.C. §2258A), we report violations to law enforcement and to NCMEC.
We update this Policy as the platform, the law, and the threat landscape evolve. Material changes are announced in-app and reflected here.