Keeping teens safe on Sora
- REAL School
- Oct 9
- 4 min read
Updated: Oct 10

OpenAI’s new video-generation app, Sora, can turn a simple description – or even your child’s image – into an ultra-realistic video. It reached #1 on Apple’s App Store in just 48 hours. While this technology is an exciting creative leap, it also brings serious concerns. Fortunately, OpenAI has introduced parental controls, accessed via ChatGPT, that help guardians manage how teens use Sora. Below is a guide to setting them up.
First, let's be clear
REAL School strongly advises against giving children or teens personal mobile devices or access to social media. Research shows that early and unregulated exposure can affect attention, emotional wellbeing, and social development.
However, as technology and AI tools become increasingly present and unavoidable in young people’s lives, it is essential for parents to stay informed. This short guide will help you:
Understand what Sora is — OpenAI’s new deepfake video app,
Recognise the risks, especially around the “cameo” mode, and
Learn how to protect your teen through parental controls and healthy digital habits.
Important to know
Children under 13 should not use social media apps at all: under privacy regulations such as the EU’s GDPR and the US COPPA, they cannot validly consent to the data collection these apps rely on. (*)
Teens aged 13–15 can only use such platforms with explicit parental consent and supervision. (*)
Children and teens should never share their likeness (face, voice, or image) via deepfake or AI video apps such as Sora. (*)

Why Sora and “cameo” mode are risky
Sora’s “cameo” feature lets users insert a person’s face or likeness into AI-generated videos. While it may look fun or creative, this function carries serious risks for young people.
Cyberbullying and harassment: Deepfake videos can be used to embarrass, exclude, or threaten teens. Victims of deepfake bullying experience long-term emotional and reputational harm (*).
Loss of privacy and identity misuse: Once a likeness is uploaded, it can be reused or remixed beyond a teen’s control, sometimes for harmful or non-consensual content (*).
Mental health and social pressure: Manipulated images and videos can distort self-image and increase anxiety or depression linked to comparison and social media exposure (*).
Sextortion and blackmail: Deepfakes are increasingly used for coercion and reputational threats, a growing risk among teens.
What can parents do?
OpenAI now offers parental controls through ChatGPT that link a parent’s and a teen’s accounts, giving adults oversight of how Sora is used. (*) Here is how to set this up, step by step.
1. Linking your teen’s account & accessing controls
Why this matters
Linking accounts is the foundation: it allows parents to adjust how Sora behaves for their teen and enables safety filters.
How to do it
In ChatGPT, tap your profile icon, then go to Settings → Parental controls.
Select “+ Add family member” and invite your teen via email or phone.
Once the teen accepts, their name appears under Family Members, and you can manage their settings.
If the teen unlinks later, you will be notified.
Source: OpenAI
2. Content & interaction controls for Sora
Once linked, parents get toggles to limit Sora features and exposures. Here are the key settings.
| Feature | What parents can do | Why it helps |
| --- | --- | --- |
| Personalised feed (algorithmic recommendations) | Turn it off so videos aren’t tailored based on teen activity (*) | Reduces exposure to content escalated by algorithmic “rabbit holes” |
| Continuous scroll / unlimited feed | Turn it off to limit endless scrolling | Reduces passive consumption |
| Direct messaging | Block or restrict the ability to send/receive messages (*) | Prevents unmoderated one-on-one contact with unknown users |
Together, these controls limit what teens see in Sora and who can contact them there.
3. Safety, moderation & content filtering
Beyond parental toggles, Sora builds in several safety features to reduce misuse:
Automatic filters and guardrails block harmful or inappropriate content (e.g. sexual, violent, extremist) before video generation. (*)
Human moderation oversees content and reviews flagged items. (*)
Invisible watermarks and metadata are added to videos, aiding tracing and authenticity. (*)
Audio safeguards scan generated speech to prevent imitation of copyrighted works. (*)
These mechanisms help guard against harmful or unauthorised uses of deepfake technology, though none of them is foolproof.
4. Additional parental features & limitations
Parents cannot view a teen’s private conversations or chat transcripts—privacy is preserved. (*)
In rare cases, if serious safety risks (e.g. self-harm signals) are detected in ChatGPT, a parent may receive alerts (without full conversation content). (*)
Parents can set quiet hours, times when Sora (and ChatGPT) cannot be used. (*)
Children can’t override or change parental settings while linked. (*)
These controls are not infallible; determined users may find workarounds. (*)

5. Best practices beyond tech controls
Controls help, but they’re only one piece of a safe approach. You can:
Talk openly about AI, deepfakes, and consent. Explain how misusing likenesses or creating non-consensual videos is harmful.
Set clear household rules (e.g., “no creating deepfakes of others without permission,” “limit Sora to certain times”).
Review device-level app permissions (camera, microphone, storage) to ensure apps can’t access more than they need.
Co-review creations together to see what the child is making and sharing, and talk through potential issues.
Stay informed: Sora and ChatGPT evolve, and new safety updates will roll out.
6. Sample parental guidance workflow
Here’s how a parent might put this into practice:
Set up parent and teen ChatGPT accounts.
Send an invite and link the accounts via settings.
In parental controls, disable personalised feed, disable continuous scroll, and block messaging.
Set quiet hours (for instance, 21:00–8:00).
Enable sensitive-content filters and safety notifications.
Talk with your child about how they’ll use Sora (what they can and cannot do).
Check in weekly: review videos, check settings, update limits if needed.
7. Conclusion & caution
OpenAI’s parental controls for Sora, accessed through ChatGPT, give parents a solid starting point for steering their teen’s experience. These features, from feed restrictions to moderation and alerts, help reduce risks. (*)
However, no system is perfect. Continued dialogue, supervision, and education remain essential. AI tools evolve rapidly; the safest approach is to stay active and involved in your child’s digital life.
