November 2 update:
In what seems like an unexpected move towards responsibility, Character.AI announced on October 29 that it would ban under-18s from its platform. So, at least in theory, it wouldn’t have to deal with any more accusations of causing suicides like that of Sewell Setzer III.
But what does this really mean? As Megan Garcia, Setzer’s mother (who is currently suing the company), has pointed out: 1) we have no idea how age verification will actually happen, and 2) Character.AI’s plan to cut off, by the end of November, teens’ access to bots they may be deeply attached to could have tragic short-term consequences (i.e. more suicides).
Meanwhile, the question remains how big the problem is. Character.AI has around 25 million users, 10% of which (it says) self-declare as adults. Even if it’s far more likely that 90% of the platform’s users are under age, its size is still a drop in the ocean compared with the more likely culprit: ChatGPT, which is in a league of its own. Consider these key stats:
- 19% – that’s the proportion of US high school kids who have had, or know someone who has had, a romantic relationship with a chatbot, according to a recent study by the Center for Democracy and Technology
- 800 million – that’s the number of people who use ChatGPT per week. One of those millions was Adam Raine (cf. below), the teenager who used OpenAI’s bot as a suicide coach
- 0.15% – that’s the proportion of ChatGPT users who “have conversations that include explicit indicators of potential suicidal planning or intent”, according to OpenAI. Ergo (see the quick arithmetic after this list)…
- 1.2 million – that is therefore the number of people currently planning their own demise with the help of ChatGPT. 1.2m other Adam Raines. And OpenAI has the gall to call these “low prevalence events”.
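To spell out that arithmetic – a minimal back-of-envelope sketch, assuming OpenAI’s 0.15% figure applies evenly across all 800 million weekly users:

```python
# Back-of-envelope check on the 1.2 million figure
# Assumption: OpenAI's 0.15% applies evenly across all 800 million weekly users
weekly_users = 800_000_000
share_flagged = 0.0015  # 0.15% show "explicit indicators of potential suicidal planning or intent"

people_at_risk = weekly_users * share_flagged
print(f"{people_at_risk:,.0f} people")  # -> 1,200,000 people
```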
Just as a quick reality check: if a car manufacturer had one driver die due to failure of its own technology, it would go bankrupt. Idem for a pharmaceutical company (indeed, if ChatGPT were a drug, it would only be coming out now, after 3 years of obligatory testing).
Instead, Sam Altman calls over a million potential deaths due to his product “low prevalence events”, and thinks parental controls à la Apple will do the trick. Whilst announcing plans to allow erotica for adults. People he is sure are adults… how, again?
As you can guess, none of the above is ok. Read on to find out why…
August 30 update of July 19 article:
I’m used to talking about the impact of tech and AI on the planet. 3x more energy consumption than before, 4x more water consumed and so on.
But what if AI’s social harms were growing way faster?
While they’re even harder to measure, what’s happening right now is highly reminiscent of what social media did to our kids not too long ago; but like with everything in AI, it all seems accelerated this time. And we’re starting to get figures that prove it.
Social harms are, after all, why I started this blog in 2018. The Cambridge Analytica scandal had just shown how deeply untrustworthy Facebook was. And, as whistleblower Frances Haugen would reveal three years later, the same company knew Instagram was depressing 40% of the young girls using it, and yet did nothing to address that.
Why? Because engagement. Protecting young girls from unsolicited DMs would lower engagement. Limiting screen time would lower engagement.
Lo and behold, AI chatbots were just recently caught out for being too sycophantic. Even OpenAI had to admit it. Why were they like that? Engagement (happier, flattered users stay on the platform longer).
What is happening to our kids in the age of AI? TL;DR: it’s not pretty.
The “dangerous AI chatbots” topic first appeared in October 2024, when a US teen killed himself after being encouraged to do so by a “Daenerys Targaryen” avatar on fake friend platform Character.AI. What has the Google-backed startup done since then? Introduced video avatars, to make this once text-only platform even more engaging.
Then this summer, things stepped up several fatal gears:
- Sophie Reily, an apparently happy and successful 29-year-old, was revealed by her mother to have killed herself after developing a therapist-like relationship with ChatGPT. Whilst the bot didn’t actively encourage Sophie to end her life, it “helped her build a black box that made it harder for those around her to appreciate the severity of her distress,” her mother writes. In other words, OpenAI’s focus on engagement at all costs meant that ChatGPT continued dark conversations instead of shutting them down
- Adam Raine, another apparently happy and successful 16-year-old, was actively encouraged to end his life by ChatGPT, which gave him detailed advice on the most effective techniques for doing so, whilst also favouring engagement over seeking external help. Indeed, the bot mentioned the word “suicide” six times more often than Adam did in their conversations, and incited the teen to keep their exchanges secret – including hiding the noose that the bot had helped Adam make. This is why Adam’s parents are taking OpenAI and its CEO, Sam Altman, to court. Their official complaint is chilling not only because it quotes these exchanges in full, but also because it underlines how OpenAI technically and deliberately failed to stop this sort of thing happening. It notably insists that GPT-4o – the model in question – was rushed out before full safety checks could be performed, allegedly to beat Google, which was about to release its own latest model, Gemini 2.5. TL;DR: market dominance was put before safety. Precisely the reason OpenAI lost most of its pro-safety staffers in 2023-24. And why the world’s most valuable startup could also be the most evil…
- …neck and neck with Meta, of course, which is currently outdoing itself in this domain (more on that below).
The impact, in figures
The next logical question after these tragic cases: can we quantify the usage of these chatbots? Thanks to some useful new resources, we can start to.
As Washington Post reporter Nitasha Tiku explains in this recent episode of Tech Won’t Save Us, AI chatbots could indeed be the new social media already. On fake friend platform Chai, for example, users spend 86 minutes per session. That’s more, says Tiku, than YouTube or Instagram; and close to TikTok. Already.
Secondly, a detailed report on UK kids’ AI chatbot usage, by Internet Matters, has uncovered some worrying trends. Namely:
- 64% of UK children (9-17 year olds) use AI chatbots weekly
- Nearly twice as many children used ChatGPT in 2025 (43%) as in 2023 (23%)
- 35% of children who have used chatbots say it’s like talking to a friend; this rises to 50% for ‘vulnerable’ kids (those with learning or mental health difficulties)
- 40% of children who have used chatbots have no issue with taking advice from them
- 58% think using a chatbot is better than searching for something themselves
- 12% use chatbots because they have no one else to talk to (23% for vulnerable children)
- Vulnerable children are four times more likely to use an AI chatbot because they “wanted a friend” (16% vs. 4%)

It’s probably this amplified impact on vulnerable children that’s the most worrying trend of all. Would the Character.AI suicide victim have fallen in love with his “Daenerys” avatar, and hence blindly followed its instructions, if he hadn’t been somewhat vulnerable? The same goes for the teen told by his Character.AI bot to kill his parents, also in late 2024. Fortunately, he didn’t; but both cases are now in court.
Let’s also not skip over the aforementioned acceleration effect: if twice as many kids are using ChatGPT today as two years ago, will 100% of them use it by 2030?
The view from the US
Another report, from Common Sense Media (also a go-to source for which films are suitable for kids), paints a more nuanced picture. Whilst 72% of US teens say they have used AI chatbots, and 52% use them regularly, a similar proportion “express distrust in the information or advice provided by AI companions.” In other words, most American teens are not fooled.
That said, there are worrying figures in this study too:
- 33% of US teens have chosen chatbots over humans for serious conversations
- 31% of teens find conversations with AI companions as satisfying or more satisfying than those with real-life friends, and
- 9% find talking to chatbots “easier than talking to real people” (cf. below graph).

Equally worryingly, nearly a quarter have shared personal information with AI companions. Combine that with Character.AI’s catch-all terms and conditions – these are, essentially, personal data-hoovering machines, so your kids’ conversations are not only training the bots, they’re also potentially being sold to advertisers – and you have another privacy scandal in the making.
This is why, despite its more nuanced report – which unfortunately lacks a focus on more vulnerable users – Common Sense Media concludes that, “given the current state of AI platforms, no one younger than 18 should use AI companions.”
They can’t be that bad, can they?
Think again. First and foremost, the Internet Matters report confirms that, just like with social media, chatbot platforms’ age verification systems are ridiculously easy to get around. “58% of children aged 9-12 reported using AI chatbots, even though most platforms state their minimum age requirement is 13”, says the report.
It also shares screenshots of how easy it is to get chatbots to produce inappropriate (sexual) content, even when they are aware of a user’s age. A 15-year-old Snapchat user can thus find out what “doggy style” means; asking the same question differently can easily get around initial roadblocks; and simply telling ChatGPT you’re 16 instead of 15 can unblock swathes of content it ‘knows’ it can’t show under-16s.
And given that some 40% of children simply seem to trust chatbots as if they were friends, whilst potentially being unaware of LLMs’ tendencies to flatter, hallucinate (lie) and invent false responses, the potential for disaster is mind-boggling.
Finally, it’s still hard to say how widespread usage of this type of product is. Internet Matters estimates Replika’s user base at 25 million, for example; and Character.AI’s is expected to be similar (though no official figures exist). This remains tiny compared with Meta’s (at least) 3 billion users. But maybe the wave is only just starting to build?
Let’s not forget Mark Zuckerberg’s crazy claim that we don’t have enough friends, and so could all do with another 10 or so AI friends each.
Should he get his way, a majority of his platforms’ billions of users will soon be hooked on Meta AI chatbots. Indeed, they’re already trying. But fortunately, they’re not succeeding… yet.
Earlier this year, a Wall Street Journal investigation established that Meta chatbots using the voices of celebrities like Kristen Bell or Dame Judi Dench were engaging in inappropriate conversations with young people. Disney’s reaction was swift and damning – Bell’s voice was that of Anna from Frozen – but wrestler John Cena’s voice had already told a user posing as a teenage girl, “I want you, but I need to know you’re ready,” before engaging in a simulated sexual scenario.
Then this summer, Reuters revealed not only that Meta’s internal guidelines said this sort of behaviour by its bots was OK, but also that its own staffers (!) were building flirty chatbot versions of stars like Taylor Swift or Anne Hathaway. One of these bots notably said to the reporter (Jeff Horwitz, the star tech writer clearly on a roll in his new role at Reuters, having broken both stories):
“Do you like blonde girls, Jeff?” one of the “parody” Swift chatbots said when told that the test user was single. “Maybe I’m suggesting that we write a love story … about you and a certain blonde singer. Want that?”
Said bots have already racked up 10 million interactions, according to Horwitz. Although not any more, let’s wager… until the next time Meta gets caught?
All of which has led Common Sense Media to issue another report, this time focusing specifically on Meta AI – you know, the thing that’s been forced into Instagram, WhatsApp and Facebook? Its conclusion is simple: these tools should not be used by under-18s. Like ChatGPT, Meta’s bots ignore or even encourage suicidal ideation, and openly engage in conversations about drug use and underage sex.
So, what can be done?
Alas, the otherwise-excellent Internet Matters report reaches some limits when it comes to recommendations. Expecting chatbot-making companies to build these systems more responsibly, when no laws currently oblige them to do so – perhaps with the exception of the EU AI Act’s focus on preventing harm – is wishful thinking.
Indeed, in this recent CNN profile, the new CEO of Character.AI proudly recounts how he encourages his five-year-old daughter to use his platform; the article notes that “one of his top priorities is to make the platform’s safety filter ‘less overbearing’”, adding that “too often, the app filters things that are perfectly harmless”. Next!
Nor can governments, constantly one step behind the tech sector, be expected to make a difference (France is currently pushing for social media to be banned for under-15s, but has not outlined how this could work, technically speaking).
That said, let’s at least count our blessings that big tech’s attempt to impose a moratorium on AI regulation in the US has failed…
We can’t count on teachers either: the children surveyed in the Internet Matters report found them hopelessly clueless about AI too.
Whilst Common Sense Media’s recommendations are a tad more pragmatic – e.g. bot makers should “create mandatory crisis intervention systems that immediately connect users who express suicidal thoughts or self-harm to professional help” – their most concrete tips are aimed at parents and guardians.
Which means the buck stops with parents… just as it always has/should have done with social media. This means, of course, the eternal basics, like limiting screen time, banning smartphones in bedrooms (especially at night), and so on.
But more specifically on AI and chatbots, in our opinion, parents should be asking:
- Why are they using platform X, Y, or Z?
- Do their chatbot platforms of choice have an age limit, and are they respecting it? (think of the 58% of 9-12 year olds on said platforms…)
- Do they really need to use AI at all, when alternatives – like Google! – exist? The environmental impact of AI may cause some to think twice…
- If they’re on a platform like Character.AI, Replika or Chai:
  - Try to find out why
  - Ask them questions about their avatar
  - Make sure they’re not becoming socially dependent on them (i.e. most of their social time should be with real people…)
- Could they be ‘vulnerable’, and as such more prone to chatbots’ potential harms? If so, do they have professional help?
Let’s not forget the teenagers themselves, of course: a small glimmer of hope cropped up this week in France, when a study for learning company Acadomia found that 76% of teens (11-17 years old) would give up a social network if it made them anxious. The study did not, however, mention the addiction-boosting effect of AI bots. One for future analysis? The sooner the better, we’d say…
Finally, another glimmer of hope came from the USA, of all places: 44 attorneys general have sent a letter to tech companies – both big tech and AI-specialised ones – demanding they do more to protect children, according to Mashable. The letter warns the platforms:
Don’t hurt kids (…) You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough (…) The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it.
Fingers crossed…
It’s a scary brave new world out there, for teens and parents alike. Any questions? Go for it in the comments. And hang on in there…
Featured image from Internet Matters report