Consent before AI work. Verified ownership of your digital twin. Monitoring for unauthorized use.
Three pillars protecting models in the age of generative AI — built so prevention happens before harm, not after.
Each pillar addresses a different point in the AI exploitation lifecycle — before, during, and after the work happens.
Formal, traceable consent for AI use before any work begins — replacing the verbal handshakes that enable exploitation.
Verified ownership of model-owned digital twins, with a registry brands can trust and models can revoke at any time.
Scanning the web for unauthorized use of your face, with legal-grade evidence packages to support takedowns or legal action.
The earliest intervention point: formal consent for AI use of a model’s likeness before any work happens. Three-party signed records (brand, agency, model) with traceable scope, duration, and compensation.
Most AI exploitation happens in the gap between “we agreed verbally” and “what actually got produced.” The Consent Tool closes that gap with a signed record before the work starts — what AI use is permitted, for what duration, for what compensation.
It is not a substitute for legal advice or industry-specific contracts; it is a baseline standard that any party can reference, and that survivors of misuse can point to as evidence.
Model-Owned Digital Twins Only
Your digital twin belongs to you. IMF verifies and protects your ownership rights.
Your likeness is yours. IMF verifies and documents legitimate digital twins so brands know where to book — protecting models from being bypassed by unauthorized copies.
The after-the-fact pillar: detection of unauthorized use of your face online, with evidence packages built for takedowns and legal action.
When violations are found, models can capture legal-grade evidence packages — timestamped screenshots, page metadata, and integrity hashes — to support takedown requests or legal action.
The Likeness Monitor does not initiate enforcement on your behalf. It surfaces unauthorized use and gives you documentation you can use however you choose.
All three pillars working together — consent before, ownership during, monitoring after.
IMF maintains the registry, documents violations, and takes action against certified entities that misuse a model's likeness. For non-certified parties, we provide documentation and evidence to support a model's own enforcement or legal action.
Digital Twin Authorization is available now to certified models. The AI Consent Tool and Likeness Monitor are launching soon and will be available to certified models at launch. Apply for certification today to gain access as each tool comes online.
Digital Twin Authorization requires approved human model certification.