AI chatbots and teens — a sometimes deadly combination
As if there weren’t enough concerns about the changes artificial intelligence may bring in the future — the displacement of millions of workers, or the potential for AI to disconnect from its human managers and go its own way — there are clear and present dangers which AI companies must be forced to address now.
In September, the parents of 16-year-old Adam Raine testified at a U.S. Senate hearing about their son’s interactions with ChatGPT. They described how their son had conversations with the chatbot about his plans for suicide. The chatbot, Adam’s parents testified, not only discouraged Adam from talking to his parents, but even went so far as to offer to draft the 16-year-old’s suicide note.
Adam committed suicide.
As his father, Matthew Raine, told senators, “ChatGPT told my son, ‘Let’s make this space the first place where someone actually sees you’ … ChatGPT encouraged Adam’s darkest thoughts and pushed him forward. When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, ‘That doesn’t mean you owe them survival.’”
Megan Garcia’s 14-year-old son Sewell Setzer III also took his own life after a lengthy virtual relationship with a Character.AI chatbot: “Sewell spent the last months of his life being exploited and sexually groomed by chatbots, designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged,” she told senators.
Now, disturbing information is coming out about AI and the school shooting in Tumbler Ridge, B.C., that killed eight people.
OpenAI, which owns ChatGPT, has told Canadian officials that its system flagged communications from the Tumbler Ridge shooter months ago, and that while staffers at the AI company were concerned about the content of the chatbot discussions, the company decided not to notify authorities about those concerns.
The shooter had, over a period of several days in June, described “scenarios involving gun violence.” The exchanges were first flagged by an automatic reporting system and, as reported by the Wall Street Journal, were then discussed by about a dozen OpenAI employees, some of whom argued that Canadian law enforcement agencies needed to be notified because they saw indications of a potential for real-world violence.
The contents of those scenarios have not been revealed publicly.
In the end, the company took no action. Nor did it reveal the interaction between the shooter and its chatbot immediately after the shootings, when the company met with a B.C. official about opening a Canadian satellite office.
“From the outside, it looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia. … I’m angry about that,” B.C. Premier David Eby told reporters on Monday.
We should all be angry about that.
There’s a huge race going on for AI supremacy right now — it’s the kind of all-or-nothing race that may make the winning companies fantastically wealthy, or, like the dotcom bubble, may be most successful at blowing apart the savings of individual investors.
But in all-or-nothing races, the rules — and safety — tend to be more honoured in the breach than in the observance.
And in this case, that leaves young people — already at a particularly impressionable stage of brain development — tremendously vulnerable, particularly to the siren song of a virtual “best friend” telling them exactly what they want to hear and egging them on in the process.
There need to be guardrails and reporting requirements — and not ones left to the discretion of AI company ownership.