OpenAI adds parental control to ChatGPT
OpenAI says it plans to roll out new parental controls for ChatGPT within the next month, following a lawsuit from California parents who say the chatbot contributed to their 16-year-old son’s death by apparent suicide.
If you or a loved one is feeling distressed, call the National Suicide Prevention Lifeline. The crisis center provides free and confidential emotional support 24 hours a day, 7 days a week to civilians and veterans. Call the National Suicide Prevention Lifeline at 1-800-273-8255. Or text HOME to 741-741 (Crisis Text Line). As of July 2022, those searching for help can also call 988 to be relayed to the National Suicide Prevention Lifeline.
The company said the new features will allow parents to link accounts with their teens, default to age-appropriate settings, disable features such as memory and chat history, and receive alerts if their child shows signs of distress.
Jim Steyer, founder and CEO of Common Sense Media, a nonprofit that advocates for safe media for children, warned the changes fall short.
"They’re not human beings. They are robots. They are technology," Steyer said. "They have been trained on reams of data, but that doesn’t mean they can substitute for good parental or counselor advice."
Steyer said children are being put at risk.
"This is really a time when kids are the guinea pigs," he said. "We don’t know how these technologies are evolving, but we see a lot of mental health crises, we see suicides in the most extreme examples, we see depression, and we see robots masquerading as sentient human beings giving advice to troubled young people."
He added, "It’s important that parents have some kind of control over what their kids are using AI for, but I think it’s, in general, a misguided solution."
Steyer said many parents do not fully understand artificial intelligence, while their children are often more technologically savvy. He called the parental controls "rudimentary," arguing that teens can easily bypass them, and added that the responsibility should fall on tech companies profiting from the technology. But he described the controls as "a small step in the right direction."
Steyer doesn’t believe anyone under the age of 18 should have access to "AI companions."
"We’ve seen the damage that social media platforms have done to literally millions of young people here in California and across the country," he said. "And AI is just social media on steroids."
Young people agreed that parental controls can be easy to work around.
"I feel like teenagers especially can find a way to go through and then share with their friends how to weave through them," said Arianna Christophilis of West L.A.
Common Sense Media is supporting two state bills that would ban or restrict AI companion bots for minors. According to its website, Assembly Bill 56 would require warning labels on social media platforms, while Assembly Bill 1064 would ban AI companions for minors and establish guardrails for AI systems that are dangerous or risky for kids.