OpenAI is quietly rolling out a behavioral age-prediction system for ChatGPT, one that could determine whether a user needs to verify their identity—all without ever asking for a birthday.
The system evaluates patterns in account activity, including conversation topics and usage timing, to estimate age. If the model flags an account as likely belonging to someone under 18, it restricts access to certain content, including adult-oriented material, violent roleplay, and content promoting extreme beauty standards. Users classified as adults bypass these filters, setting the stage for expanded functionality later this year.
Even if a user provided their birthdate during sign-up, OpenAI will still assess their behavior to confirm eligibility for unrestricted access. Those who fail automated checks can verify through Persona, a third-party service that handles government ID and live selfie submissions. OpenAI emphasizes that Persona does not retain verification details beyond confirming age, though the process still requires sharing sensitive information.
Why This Matters
This isn’t an isolated shift. Discord recently adopted a nearly identical system, using behavioral signals to estimate user age before enforcing ID checks. Both platforms argue the approach minimizes friction for compliant users while tightening controls over younger audiences. Yet critics question the implications: If AI can infer age from usage habits, what else might it deduce?
Key Details
- Behavioral Signals: Topics discussed, time of day, and interaction patterns.
- Verification Threshold: In Italy, flagged accounts must complete verification within 60 days of being detected as underage.
- Restricted Content: Graphic violence, harmful challenges, sexual/violent roleplay, and extreme beauty standards.
- Adult Access: Unrestricted features pending verification.
- Persona Integration: Government ID + live selfie for manual confirmation.
- Privacy Note: OpenAI claims no retention of Persona data beyond age confirmation.
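OpenAI has not published how these signals are combined, but the idea of turning behavioral signals into an age flag can be illustrated with a toy heuristic. The sketch below is purely hypothetical: the `AccountActivity` fields, the topic list, the thresholds, and the scoring rule are all invented for illustration and do not reflect OpenAI's actual model.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    """Hypothetical behavioral signals of the kind described above."""
    topics: list[str]           # conversation topics discussed
    active_hours: list[int]     # hours of day (0-23) the account is active
    avg_session_minutes: float  # typical session length

# Invented topic set, for illustration only.
SCHOOL_TOPICS = {"homework", "exams", "teachers"}

def likely_minor(activity: AccountActivity) -> bool:
    """Toy heuristic: flag accounts whose signals resemble a typical minor's.

    This is NOT OpenAI's method -- just a sketch of how several weak
    behavioral signals could feed a threshold-based decision.
    """
    score = 0
    # Signal 1: topics commonly associated with school-age users
    if SCHOOL_TOPICS & set(activity.topics):
        score += 1
    # Signal 2: activity clustered in after-school hours (15:00-22:00)
    after_school = sum(1 for h in activity.active_hours if 15 <= h <= 22)
    if activity.active_hours and after_school / len(activity.active_hours) > 0.8:
        score += 1
    # Signal 3: short, frequent sessions
    if activity.avg_session_minutes < 10:
        score += 1
    # Flag when two or more signals agree; a flagged account would then
    # face restricted content or a verification prompt.
    return score >= 2
```

In a real system the individual signals would be far noisier and combined by a trained model rather than a hand-tuned score, which is exactly why borderline accounts get routed to an explicit verification step like Persona instead of being blocked outright.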
The system reflects a broader trend: AI platforms increasingly rely on indirect signals to enforce policies, balancing safety with user convenience. For now, failure to verify doesn’t block access—it simply triggers stricter content filters. But as more services adopt similar methods, the debate over digital privacy and behavioral tracking will only intensify.
