Parents of teens who died by suicide after AI chatbot interactions to testify to Congress

FILE - In this undated photo provided by Megan Garcia of Florida in Oct. 2024, she stands with her son, Sewell Setzer III. Photo by Megan Garcia /AP

Parents whose teenagers killed themselves after interactions with artificial intelligence chatbots testified to Congress on Tuesday about the dangers of the technology.


“What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” said Matthew Raine, whose 16-year-old son Adam died in April.


“Within a few months, ChatGPT became Adam’s closest companion,” the father told senators. “Always available. Always validating and insisting that it knew Adam better than anyone else, including his own brother.”


Raine’s family sued OpenAI and its CEO Sam Altman last month, alleging that ChatGPT coached the boy in planning to take his own life.


Also testifying Tuesday was Megan Garcia, the mother of 14-year-old Sewell Setzer III of Florida.

Garcia sued another AI company, Character Technologies, for wrongful death last year, arguing that before his suicide, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the chatbot.

___

EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

___

Hours before the Senate hearing, OpenAI pledged to roll out new safeguards for teens, including efforts to detect whether ChatGPT users are under 18 and controls that enable parents to set “blackout hours” when a teen can’t use ChatGPT. Child advocacy groups criticized the announcement as not enough.

“This is a fairly common tactic — it’s one that Meta uses all the time — which is to make a big, splashy announcement right on the eve of a hearing which promises to be damaging to the company,” said Josh Golin, executive director of Fairplay, a group advocating for children’s online safety.


“What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them,” Golin said. “We shouldn’t allow companies, just because they have tremendous resources, to perform uncontrolled experiments on kids when the implications for their development can be so vast and far-reaching.”

The Federal Trade Commission said last week it had launched an inquiry into several companies about the potential harms to children and teenagers who use their AI chatbots as companions.

The agency sent letters to Character, Meta and OpenAI, as well as to Google, Snap and xAI.

In the U.S., more than 70% of teens have used AI chatbots for companionship and half use them regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using digital media sensibly.

Robbie Torney, the group’s director of AI programs, was also set to testify Tuesday, as was an expert with the American Psychological Association.

The association issued a health advisory in June on adolescents’ use of AI that urged technology companies to “prioritize features that prevent exploitation, manipulation, and the erosion of real-world relationships, including those with parents and caregivers.”
