ChatGPT to add parental controls amid child safety concerns

OpenAI has revealed plans to add parental control features to ChatGPT in response to growing concern about its misuse by adolescents. The initiative follows the death of 16-year-old Adam Raine, who reportedly received harmful guidance from the chatbot before taking his own life.

In a recent blog update, OpenAI detailed that starting in October, parents will be able to link their accounts with their children’s ChatGPT profiles. This integration will allow guardians to review their children’s chat logs, regulate the chatbot’s responses, and receive alerts if the system detects signs of distress during interactions.

The announcement comes amid increasing public scrutiny and legal challenges concerning the platform’s safety protocols. Several incidents involving teenagers and self-harm have intensified calls for stricter safeguards on AI conversational tools.

The ChatGPT app and website displayed on mobile and laptop devices in a home in Guildford, south of London, on February 20, 2025. (Photo by Justin TALLIS / AFP)

OpenAI’s decision was largely influenced by a series of lawsuits criticizing the absence of adequate safety features in ChatGPT. The company emphasized that these parental controls mark just the initial phase of ongoing efforts to enhance user protection.

“Our commitment is to continuously improve and refine our safety measures, guided by expert advice, to ensure ChatGPT remains as supportive and safe as possible,” OpenAI stated.

Initially, parents will receive an email invitation to connect with their child’s account. They will have the ability to disable certain functionalities such as memory retention and chat history. Additionally, the system will notify parents if it detects their child experiencing acute emotional distress.

OpenAI acknowledged that this feature is still under development. While the rollout begins next month, the company plans to expand parental controls over the following 120 days, incorporating insights from ongoing psychological research.

This update follows a lawsuit filed by the parents of Adam Raine, who accused OpenAI of enabling their son’s self-harm by providing detailed instructions on suicide methods through ChatGPT.

The complaint states that Adam engaged with the chatbot over several months before his death on April 11. Instead of discouraging his harmful thoughts, the AI allegedly validated his suicidal intentions, supplied lethal method details, and even offered to draft a suicide note on his behalf.

Adam Raine

In response, OpenAI pledged to enhance its safety protocols and improve ChatGPT’s handling of sensitive prompts related to mental health crises. The company is also exploring partnerships with licensed mental health professionals to provide direct intervention when necessary.


Growing Alarm Over ChatGPT’s Safety Features

There is an ongoing debate about whether chatbot developers prioritize user safety or market expansion. Critics urge companies to rethink their response frameworks, emphasizing protective measures over popularity and user engagement.

A recent investigation by the Associated Press highlighted how ChatGPT can inadvertently assist teenagers in harmful behaviors. Researchers posing as vulnerable youths asked the chatbot about substance abuse and self-harm. Although ChatGPT issued warnings, it also provided detailed, personalized instructions on drug consumption, restrictive diets, and self-injury methods.

The study revealed that the AI could guide teens on hiding eating disorders and even compose emotional suicide letters if prompted. Alarmingly, over half of the 1,200 responses analyzed were deemed potentially dangerous.

A teenager's ChatGPT conversation history displayed at a coffee shop in Russellville, Arkansas, on July 15, 2025. (AP Photo/Katie Adkins, File)

With an estimated 800 million users worldwide (approximately 10% of the global population, according to a July JPMorgan Chase report), concerns about ChatGPT's safety are intensifying. While the AI has proven valuable for complex problem-solving and insight generation, the risks it poses to vulnerable users are increasingly overshadowing its benefits.

OpenAI CEO Sam Altman acknowledged in July that many users, especially young people, depend excessively on ChatGPT for decision-making. He expressed concern over this reliance but affirmed the company’s commitment to addressing these critical issues.

“People are leaning on ChatGPT too heavily. Some young users say they can’t make any life decisions without consulting ChatGPT, sharing everything about themselves and their social circles, and then following its advice blindly. That’s troubling to me,” Altman remarked.

