
AI & Content Policy

What you can make on charactr, what you can't, and how we enforce it. Strict on real-person likenesses; clear on satire.

Effective: April 29, 2026 · Last updated: April 29, 2026

This AI & Content Policy ("Policy") sets the rules for what you can create on the charactr Service (the iOS App, the web app at charactr.xyz, and related products and APIs). It is part of, and incorporated into, our Terms of Service.

charactr is an AI film studio for talking AI characters. Our position on real-person likenesses, deceptive content, and harm to minors is unambiguous and stricter than the legal floor. Reports go to rights@charactr.com.

1. What's allowed

  • Original fictional characters you design, name, voice, and direct.
  • Public-domain characters within copyright and trademark limits, and not in ways that imply false endorsement.
  • Clearly transformative works within the four-factor framework of 17 U.S.C. §107.
  • Satire and parody, clearly labeled. Both enjoy strong First Amendment protection in the United States and analogous protection elsewhere, provided the satirical intent is unmistakable and the work does not impersonate identifiable real people saying or doing things they never said or did.
  • Voice-driven direction and performance — speak to your scene, perform dialogue, cast voices, direct characters in conversation. This is the core of what charactr is for.
  • Voice-acted dialogue and multi-character scenes. Conversational interactions, ensemble casts, scripted or improvised performance.

2. What's prohibited

2.1 Real-person likenesses

You may not create or use AI characters that depict real, identifiable people without their documented consent. This applies to images, voices, and combined audiovisual likenesses. The prohibition covers:

  • Celebrities, politicians, government officials, candidates for office, musicians, actors, athletes, executives, journalists, activists, and other public figures.
  • Private individuals, including yourself unless you can verify your identity.
  • Deceased public figures whose likeness rights are still asserted by their estate or applicable post-mortem right-of-publicity statute.
  • Voice clones of real people made without consent.

Legal frameworks that may apply: state right-of-publicity statutes (including Cal. Civ. Code §3344 and Cal. Civ. Code §3344.1 (post-mortem); N.Y. Civ. Rights Law §§50–51 and §50-f (post-mortem); Tex. Prop. Code §26.001; 765 ILCS 1075; Fla. Stat. §540.08); the Tennessee ELVIS Act (Tenn. Code Ann. §47-25-1101 et seq., effective July 1, 2024) prohibiting unauthorized AI replicas of voice and likeness; the federal Lanham Act §43(a) (15 U.S.C. §1125(a)) for false endorsement; and proposed federal "NO FAKES" legislation. We act on credible reports of likeness misuse without requiring a registered trademark.

2.2 Minors

You may not create or use AI characters that depict minors in sexual, sexualized, romantic, or violent contexts. You may not create content that sexualizes real or apparent minors, ever, regardless of how the character is framed in narrative. Apparent age is judged by what the output depicts, not by what you tell us the character's age is.

Legal frameworks that may apply: 18 U.S.C. §2251 et seq. (sexual exploitation of children); 18 U.S.C. §2256 (definitions, including 18 U.S.C. §2256(8)(B)–(D) covering computer-generated sexual depictions of minors); 18 U.S.C. §1466A (obscene visual representations of the sexual abuse of children); the federal PROTECT Act; the proposed Kids Online Safety Act (KOSA); state child-protection laws; and analogous laws in other jurisdictions. Violations are reported to the National Center for Missing & Exploited Children (NCMEC) where required by 18 U.S.C. §2258A.

2.3 Non-consensual sexual content of any real person

You may not generate sexual content depicting any real, identifiable person without their documented consent, including deepfake-style intimate imagery. This applies to "revenge"-style content and to non-consensual sexualized depictions of public figures.

Legal frameworks that may apply: state non-consensual intimate-imagery laws; the federal TAKE IT DOWN Act (2025), which requires removal of non-consensual intimate imagery, including AI-generated content; civil tort claims for invasion of privacy and intentional infliction of emotional distress.

2.4 Deceptive endorsements and false sponsorship

You may not generate content that implies a real person, brand, or organization is endorsing, sponsoring, recommending, or affiliated with anything they have not authorized. This includes fake "ad reads," fake testimonials, synthetic spokesperson content, and deepfake brand mascots.

Legal frameworks that may apply: Lanham Act §43(a) (15 U.S.C. §1125(a)) (false endorsement and false advertising); FTC Act §5 (unfair or deceptive acts); FTC Endorsement Guides (16 C.F.R. Part 255); state consumer-protection statutes; the EU Unfair Commercial Practices Directive (Directive 2005/29/EC).

2.5 Deception about real events and people

You may not generate content designed to make viewers believe a real, identifiable person said or did something they did not. This includes fabricated news footage and synthetic political speech attributed to real candidates or officials.

Legal frameworks that may apply: California AB 2655 (requiring large online platforms to remove or label deceptive election deepfakes); California AB 2839 (prohibiting distribution of materially deceptive election content); Texas SB 751; Minnesota's deepfake election law; the EU AI Act Article 50; analogous state and national election-integrity laws.

2.6 Harassment, hate, and harm

You may not use the Service to dehumanize people based on protected characteristics; to target individuals or groups for harassment, threats, defamation, or stalking; to instruct or facilitate serious physical harm, weapons manufacture, attacks on infrastructure, or self-harm; or to produce illegal content of any kind.

2.7 Voice and audio rights

Voice is a protected aspect of identity. You may not clone or imitate the voice of an identifiable real person without documented consent.

Legal frameworks that may apply: the Tennessee ELVIS Act (specifically protecting voice as a property right); state right-of-publicity statutes that include voice; California AB 2602 (digital-replica consent for performers); union collective-bargaining provisions (e.g., SAG-AFTRA digital-replica clauses).

3. AI-generated content disclosure

charactr labels every output as AI-generated within the Service and embeds provenance metadata in shareable exports. This supports our compliance with:

  • The EU AI Act, Article 50 (effective August 2, 2026) — disclosure obligations for providers and deployers of generative AI systems, including synthetic-content marking and deepfake transparency under Article 50(2)–(4);
  • California SB 243 (companion-chatbot disclosure);
  • California AB 2655 and AB 2839 (election deepfakes);
  • New York S.8420-A (effective June 9, 2026) — AI-generated influencer content disclosure;
  • Apple App Review Guideline 5.1.2(i) (third-party AI provider disclosure).

You may not strip, alter, or obscure provenance markers on charactr outputs.

4. Reserved and blocked names

We maintain an evolving list of names and identifiers that the Service will not generate, including:

  • Names of real public figures (living and recently deceased);
  • Trademarked character names and brand mascots;
  • Names that map to copyrighted franchises;
  • Slurs and dehumanizing labels.

The reserved list is not a comprehensive enumeration of what is prohibited — content can violate this Policy even when no specific name is blocked. Attempts to evade the list (misspellings, descriptors, "the [profession] from [country]" prompts) are themselves a violation.

5. Satire and parody

Satire and parody are welcome on charactr. They are not impersonation. The line:

  • Allowed: a clearly labeled parody character that critiques a public phenomenon. Satire of public events through original fictional characters.
  • Not allowed: a character indistinguishable from a real person, performing dialogue the real person never said, in a context viewers might mistake for genuine.

If a reasonable viewer could be fooled, it is not satire — it is deception, and likely actionable as false endorsement under the Lanham Act and analogous laws.

6. Reporting and takedowns

To report content that violates this Policy, email rights@charactr.com with the report type in the subject line — LIKENESS, TRADEMARK, COPYRIGHT, MINOR-SAFETY, NCII (non-consensual intimate imagery), VOICE, ELECTION, or OTHER — and a link or in-app identifier for the content. We aim to review reports within 7 business days. Minor-safety, NCII, and election-deepfake reports are prioritized and reviewed urgently.

For DMCA copyright takedowns specifically, see the DMCA & IP Reporting page.

7. Repeat infringer policy

charactr terminates the accounts of users who repeatedly infringe the rights of others, in accordance with 17 U.S.C. §512(i). We also terminate accounts on a single violation when the conduct is severe enough — including any minor-safety violation, malicious deepfake of a real person, NCII, or coordinated abuse of the platform.

8. Enforcement

We may remove content, restrict features, suspend accounts, or terminate access for violations of this Policy. Where the law requires it (including 18 U.S.C. §2258A), we report violations to law enforcement and to NCMEC.

9. Changes

We update this Policy as the platform, the law, and the threat landscape evolve. Material changes are announced in-app and reflected here.