
5 avenues to continue reporting on AI chatbots and mental health

Megan Garcia testifies to the U.S. Senate Committee on the Judiciary on Sept. 16 about how a chatbot caused her son to die by suicide. Screenshot of public domain video captured on Nov. 24, 2025

One of the biggest health IT stories of the past few months has been the fallout from people using artificial intelligence-driven chatbots for companionship and mental health services. Several families across the country have filed lawsuits against AI developers, arguing that the chatbots are designed to be addictive and drove some users to suicide. In September, parents of teenagers who died by suicide testified to Congress about the role of chatbots in the deaths.

And in November, a U.S. Food and Drug Administration advisory committee agreed that robust regulation of AI chatbots for mental health care is needed, according to Mintz. New legislation and new policies from chatbot companies have since emerged to try to regulate their use.

Here are a few ways to find fresh angles on this evolving story. 

What’s happening in the courts

In October 2024, a Florida parent sued Character.AI, saying the company’s chatbots initiated “abusive and sexual interactions” with her teenage son and encouraged him to take his life, NBC News reported. In August 2025, the parents of a 16-year-old boy sued OpenAI and its CEO, claiming that ChatGPT contributed to their son’s suicide. The lawsuit alleged that the technology advised the teen on methods he could use and offered to write the first draft of a suicide note, CNN and other news outlets reported.

In November, an additional seven lawsuits were filed in California against OpenAI on behalf of six adults and one teenager, alleging wrongful death, assisted suicide, involuntary manslaughter and negligence, according to the Associated Press and others.

  • Story ideas: See if any additional lawsuits have been filed. Talk to families who have filed suit to get their stories. Talk to legal experts and law professors about the legitimacy of these claims and how these lawsuits could affect the industry.

What the FDA is doing 

The FDA’s Digital Health Advisory Committee met virtually on Nov. 6 to discuss details about regulating therapy chatbots and other mental health devices using generative AI. “Because the technologies can pose novel risks, generative AI developers have expressed confusion about which use cases require FDA review and what evidence is necessary to get the agency’s approval,” Mario Aguilar reported for STAT. The agency has cleared fewer than 20 digital mental health devices, including smartphone apps that deliver cognitive behavioral therapy, but none of them use generative AI, he said. 


The meeting covered a range of topics including regulatory considerations for digital mental health diagnostics and therapeutics for children and adults, lessons learned from other fields of AI, and considerations for payers and providers, Regulatory Focus reported. One discussion concerned the benefits and risks of a hypothetical device that provides automated therapy.

  • Story ideas: Talk with committee members and people who presented their concerns regarding mental health chatbots. For example, the committee said devices should be developed for each age group and may need to offer different functions as a child moves from one developmental stage to another, according to an executive summary of the meeting. Full materials from the meeting, including a webcast, are available on the FDA’s website. Or speak with device manufacturers about what materials and evidence they are compiling for FDA review.

What legislators are doing

The Senate on Sept. 16 held a hearing examining the harm of AI chatbots, with testimony from parents of impacted children, a psychologist and the senior director of AI programs at Common Sense Media, a nonprofit organization that reviews media for adults and children. Then, in October, Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) introduced a bill that would ban platforms from offering character chatbots to minors, NBC News reported.

The House of Representatives on Nov. 18 held a hearing on the advantages and disadvantages of AI chatbots, featuring testimony from psychiatrists, U.S. representatives and a fellow from the Stanford Institute for Human-Centered AI.

In California, Gov. Gavin Newsom in October signed multiple AI safety bills, one of which requires chatbot operators to have procedures to prevent the production of suicide or self-harm content and to put in guardrails such as referring users to a suicide hotline or crisis text line, the Los Angeles Times reported. The legislation also requires chatbot operators to remind minor users every three hours to take a break and that the chatbot is not human, and to take measures to prevent chatbots from generating sexually explicit content.

  • Story ideas: Speak with congressional leaders who held these hearings – what did they learn? What actions do they plan to take? Follow the proposed legislation from Sens. Hawley and Blumenthal – what kind of support is it generating? Interview families in California about the new legislation there. Interview companies about how they are adjusting their programming to comply with the California legislation.

I reported earlier this year for AHCJ about how Illinois and several other states are cracking down on the use of AI for mental health or therapeutic decision-making without oversight by licensed clinicians, and regulating the use of AI in other areas of health care. Journalists could also follow up on or report on any of these state efforts.

Flip the story: Talk to people who like mental health chatbots or find them helpful 

While many have complaints about the technology, there are people who like it, including (perhaps surprisingly) some licensed therapists. Some who tested the technology think it’s helpful in limited cases and sometimes turn to chatbots to help with their own mental health needs, a recent Washington Post story noted. A study from Dartmouth College researchers, reported in the New York Times, found that chatting with a generative AI therapist for eight weeks “meaningfully reduced psychological symptoms among users with depression, anxiety or an eating disorder.” The work was published in NEJM AI.

  • Story ideas: Talk to people (laypeople, psychiatrists and psychologists, the authors of the NEJM AI paper) about the helpful role this technology could play in our society, especially given the nationwide shortage of mental health professionals.

Look into tech angles 

OpenAI, the company behind ChatGPT, announced last month that it was updating its default model “to better recognize and support people in moments of distress.” Working with mental health experts, the company said it has taught the model to better recognize distress, de-escalate conversations and guide people toward professional care when appropriate. The company also said it has expanded access to crisis hotlines, re-routed sensitive conversations originating from other models to safer models and added gentle reminders to take breaks during long sessions. OpenAI also claims the models can distinguish “between healthy engagement and concerning patterns of use.”


Also in October, Character.AI, a platform that has been popular with young people, announced a two-hour daily limit on users under age 18 until Nov. 25, at which point it would ban those under 18 from engaging in open-ended chats with its character-based chatbots, USA Today and other news outlets reported. The company also said it was partnering with a third-party vendor to help with age verification and to establish an AI Safety Lab for future research. 

  • Story ideas: Talk to companies about how these new programming features work in real life. Talk to users to see if they notice a difference (or try a simulated conversation yourself to see what happens). Talk to psychiatrists or other mental health experts to see whether these types of changes are likely to make a meaningful difference. Also consider what might be next: Some companies are working on more narrowly focused, purpose-specific interfaces rather than an open-ended chat window, according to an opinion piece in Bloomberg.
