WASHINGTON, Sept 3 (Bernama-WAM) — OpenAI is introducing new safety measures for young users of its chatbot, ChatGPT, following a lawsuit filed by the parents of a teenager who died by suicide.
According to Emirates News Agency (WAM), the company said that within the next month it will implement parental controls that can notify parents if the platform detects their child is in “acute distress.”
“We will soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT.
“We’re also exploring making it possible for teens (with parental oversight) to designate a trusted emergency contact.
“That way, in moments of acute distress, ChatGPT can do more than point to resources: it can help connect teens directly to someone who can step in,” the company said in a statement.
— BERNAMA-WAM