ChatGPT adds monitoring feature: Parents actively accompany their children

21/09/2025

One of the most notable features of the new monitoring mode is that parents can become direct companions in their children's ChatGPT journey.


As artificial intelligence becomes increasingly woven into everyday life, the safety of young users has become a top concern. ChatGPT has quickly become a familiar tool not only for office workers, software engineers and researchers but also for students. As young people spend more and more time chatting, searching for information and even sharing their emotions with the chatbot, concerns have grown about the risks of dependence, misinterpreted information and exposure to inappropriate content. Under pressure from public opinion, regulators and parents, OpenAI recently announced an important step forward: a safe monitoring mode for parents.

The tool, scheduled to reach users within the next month, marks a major effort by OpenAI to make the AI experience friendlier, safer and healthier for teenagers. It is not just a technical update; it also reflects the company's responsible approach to shaping the future of digital technology.

1. Parents proactively manage how their children use ChatGPT

The highlight of the new monitoring mode is that parents can directly accompany their children as they use ChatGPT. Instead of leaving children to chat freely with the chatbot, parents can now link their own accounts to their children's accounts. This creates a transparent bridge that lets parents see how ChatGPT is responding and make adjustments when necessary.

According to OpenAI, parents will have several important controls, including the ability to turn off conversation memory and prevent sensitive content from being saved. For example, if a child has shared personal information or confided something difficult to talk about, the system will not automatically store it, easing parents' concerns about privacy. Parents can also shape how the chatbot responds to teens, for instance by limiting misleading advice or filtering out content that is not age-appropriate.
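To make these controls concrete, here is a minimal illustrative sketch in Python. It is not an OpenAI API; the names `ParentalControls` and `link_teen_account` are hypothetical and simply model the settings described above (linked accounts, memory turned off, sensitive content not saved, age-appropriate responses).

```python
from dataclasses import dataclass, field

@dataclass
class ParentalControls:
    """Hypothetical container for the settings described in this section.

    The fields mirror the controls OpenAI has announced at a high level;
    the names and structure here are illustrative only, not a real API.
    """
    parent_account_id: str
    teen_account_id: str
    conversation_memory_enabled: bool = False   # parents can turn memory off
    save_sensitive_content: bool = False        # confided details are not stored
    age_appropriate_mode: bool = True           # shape responses for teenagers
    blocked_topics: list[str] = field(default_factory=list)

def link_teen_account(parent_id: str, teen_id: str) -> ParentalControls:
    """Simulate linking a parent's account to a teen's account."""
    return ParentalControls(parent_account_id=parent_id, teen_account_id=teen_id)

if __name__ == "__main__":
    controls = link_teen_account("parent-123", "teen-456")
    controls.blocked_topics.append("graphic violence")
    print(controls)
```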

Another notable feature is alerting. If ChatGPT detects that a young user is showing signs of serious distress or repeatedly asking negative questions, the system will send a notification to the parent. This gives parents the chance to intervene in time, instead of letting the child get lost in a long conversation with an AI that cannot replace human understanding and care.
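The alert flow can be pictured with a toy sketch like the one below. It is purely illustrative: a real system would rely on trained safety classifiers rather than keyword matching, and the `maybe_alert_parent` helper and its threshold are assumptions made for demonstration.

```python
# Illustrative only: a toy version of the alert flow described above.
# Real systems would use trained safety classifiers, not keyword matching.

DISTRESS_MARKERS = {"hopeless", "can't go on", "nobody cares", "hurt myself"}
ALERT_THRESHOLD = 2  # assumed: alert after repeated negative signals


def count_distress_signals(messages: list[str]) -> int:
    """Count user messages that contain any distress marker."""
    return sum(
        any(marker in message.lower() for marker in DISTRESS_MARKERS)
        for message in messages
    )


def maybe_alert_parent(messages: list[str], parent_contact: str) -> bool:
    """Send a (simulated) notification to the parent if the threshold is met."""
    if count_distress_signals(messages) >= ALERT_THRESHOLD:
        print(f"[alert] notifying {parent_contact}: repeated distress signals detected")
        return True
    return False


if __name__ == "__main__":
    chat = ["I feel hopeless lately", "school is fine", "nobody cares about me"]
    maybe_alert_parent(chat, "parent@example.com")
```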

OpenAI emphasizes that this is just the beginning. The company will continue to gather feedback from parents, educators, and the community to refine its approach. The goal is not just to manage safety, but to help create a healthy digital environment where teens can reap the benefits of AI without sacrificing their mental and emotional development.

2. Stronger protection in sensitive conversations

Alongside the parental control tools, OpenAI is adding a layer of protection for special situations. Instead of letting the chatbot continue in its normal mode, conversations that show signs of tension or high sensitivity will be routed to a specialized reasoning model.

This model is designed to strengthen risk management, making AI responses more consistent and safer. For example, when children ask questions about complex psychological issues, the AI will not respond casually but will give considered answers and direct users to trusted support resources. This is an important step forward, because chatbots have previously been able to pull conversations in a negative direction, unintentionally causing young users to sink deeper into emotions they cannot control.
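Conceptually, this routing can be sketched as a small dispatcher that scores a message and chooses between two model endpoints. The classifier and the endpoint names `default_model` and `reasoning_model` are hypothetical placeholders, not OpenAI's actual implementation.

```python
# Illustrative router: send sensitive conversations to a more careful model.
# The classifier and model names below are hypothetical placeholders.

def sensitivity_score(message: str) -> float:
    """Stand-in for a trained classifier that returns a risk score in [0, 1]."""
    risky_terms = ("self-harm", "panic", "depressed", "abuse")
    hits = sum(term in message.lower() for term in risky_terms)
    return min(1.0, hits / 2)


def route_message(message: str, threshold: float = 0.5) -> str:
    """Pick a model endpoint based on the estimated sensitivity of the message."""
    if sensitivity_score(message) >= threshold:
        return "reasoning_model"   # slower, more deliberate, safety-tuned
    return "default_model"         # fast general-purpose model


if __name__ == "__main__":
    for text in ["Help me plan a study schedule", "I feel depressed and panic a lot"]:
        print(f"{text!r} -> {route_message(text)}")
```

The design point is that the more deliberate model is only invoked when the estimated risk crosses a threshold, so everyday conversations keep their normal speed.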

To ensure effectiveness, OpenAI collaborated with experts from a variety of fields. Youth development researchers provided insights into the unique behaviors and needs of this age group. Mental health professionals provided recommendations on how to have safe conversations that avoid psychological harm.

The ultimate goal is to build a multi-layered protection system where ChatGPT is not only useful but also completely safe for young users. OpenAI is open about the fact that the ultimate responsibility lies with the company itself, regardless of whether there is an expert advisory board. This attitude builds trust with the community and affirms OpenAI’s long-term commitment to protecting users.

3. ChatGPT under growing pressure over safety

Behind the move to add a monitoring mode is the considerable pressure OpenAI faces. With more than 700 million weekly active users, ChatGPT has become the most popular conversational AI platform in the world. That popularity, however, comes with a heavy responsibility to ensure safety, especially for teenagers, the most vulnerable user group.

Lawmakers and civil society groups have been pressing hard in recent months. In July 2025, several US senators sent a formal letter asking OpenAI to provide a detailed report on its protections for young people. In April of that year, Common Sense Media called for a ban on AI chatbots being used as companions for people under 18, citing concerns that children could develop an emotional attachment to a virtual tool.

Pressure also comes from mixed reactions within the user community. Some previous updates sparked controversy, such as a model version criticized for being overly flattering, or GPT-5 being accused of lacking personality, which forced OpenAI to quickly restore the option to switch back to the older model. These incidents show that no matter how capable AI becomes, when it serves a mass audience the line between usefulness and risk is very thin.

The media and the public have also repeatedly raised warnings. More and more teenagers tend to share their feelings and seek advice from ChatGPT rather than from family or friends. Over-reliance on a virtual tool for personal problems can distort perception, weaken social skills and cause young people to miss more practical sources of support such as psychologists. It is this context that has pushed OpenAI to act more decisively to build trust.

4. Implementation plan for this year

According to OpenAI representatives, the parental control mode is just the first step in a larger protection strategy. Over the next 120 days, the company plans to roll out more new safety features and upgrade its technical infrastructure to improve its ability to recognize sensitive situations. Beyond just “blocking risks,” OpenAI wants the system to be able to assess the full context and provide safe responses while maintaining the naturalness of the conversation.

To achieve this, engineering teams are focusing on training models to distinguish between ordinary conversations and signals that a user needs timely support. The system will also automatically route complex situations to specialized models, reducing the chance of incorrect or misleading responses. This is considered a big step forward, giving ChatGPT a multi-layered protection mechanism instead of relying on a single filter.
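Such a layered mechanism can be imagined as a short pipeline of independent checks, each able to reroute or block a request before a response is generated. Again, this is a conceptual sketch under assumed names (`age_filter`, `distress_check`), not a description of OpenAI's internal systems.

```python
# Conceptual sketch of a multi-layered safety pipeline: each layer inspects
# the request and can reroute it, instead of relying on a single filter.

from typing import Callable, List, Optional

Layer = Callable[[str], Optional[str]]  # returns a route name, or None to pass through


def age_filter(message: str) -> Optional[str]:
    """Block content that is clearly not age-appropriate (toy heuristic)."""
    return "blocked" if "explicit" in message.lower() else None


def distress_check(message: str) -> Optional[str]:
    """Escalate messages showing distress to a safety-tuned model (toy heuristic)."""
    return "reasoning_model" if "hurt myself" in message.lower() else None


def run_pipeline(message: str, layers: List[Layer]) -> str:
    """Apply each layer in order; the first one that triggers decides the route."""
    for layer in layers:
        route = layer(message)
        if route is not None:
            return route
    return "default_model"


if __name__ == "__main__":
    layers = [age_filter, distress_check]
    print(run_pipeline("Can you explain photosynthesis?", layers))
    print(run_pipeline("I want to hurt myself", layers))
```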

Alongside the technology, OpenAI is also focusing on the human element. The company has expanded its feedback channels to make it easier for parents to report problems, and is working with education and mental health experts to fine-tune its approach. Each piece of feedback becomes valuable data for improving the model, helping ChatGPT adapt quickly to real-world needs.

OpenAI representatives affirmed that this effort will not be limited to the next few months but will be maintained and expanded for years. The ultimate goal is a safe, sustainable AI ecosystem in which users, especially teenagers, can access advanced technology without trading away their mental and emotional safety. It is also the company's way of showing the community and regulators that it is willing to combine social responsibility with technological progress, setting a new standard for the AI industry.

5. Summary

Adding parental control to ChatGPT is a timely and appropriate move by OpenAI as AI becomes increasingly popular among young people. With the new feature, parents can proactively manage and ensure their children access AI in a healthier and safer way. At the same time, the multi-layered protection system and cooperation with experts help ChatGPT minimize the risk of negative impacts while maintaining its inherent usefulness.

While pressure from the public, regulators, and the community remains high, these efforts show that OpenAI has chosen to take the long road, putting safety first. This is not just a technological step forward, but also an important milestone in the journey to shaping a civilized digital environment where AI can assist without overshadowing the value of human connection. With parental controls, OpenAI has sent a clear message: the future of AI is not only intelligent, but also safe, humane, and responsible.
