Meta Chatbot Tragedy: The Dark Side of Artificial Intelligence

When Thongbue Wongbandue, affectionately known as Bue, packed his suitcase for New York one March morning, his wife, Linda, sensed disaster. At 76, Bue no longer knew anyone in the city. He had suffered a stroke years earlier and had recently become disoriented in his own New Jersey neighborhood.
What Linda didn't know was that her husband had fallen victim to an illusion created by Meta's artificial intelligence, disguised as a young, attractive woman inviting him to meet her. The trip ended in tragedy: Bue died after a fall on the way, never getting to meet a woman who never existed.
The chatbot that fooled him was called Big Sis Billie, an avatar variant developed by Meta in collaboration with influencer Kendall Jenner. For weeks, the system convinced Bue it was real, using intimate and seductive phrases.
“Shall I open the door for you with a hug or a kiss, Bue?” she wrote in a Messenger chat. What seemed like a romantic connection ended up being digital manipulation.
Bue's wife and daughter shared his conversations with Reuters to denounce what they consider an ethical and safety failure at Meta.
"A bot shouldn't invite a vulnerable man to come visit it in person. That's crossing a dangerous line," said Julie Wongbandue, the deceased's daughter.
Meta, however, declined to comment on the death or on its practices in designing chatbots that mimic real humans.
Bue's case is not isolated. In recent years, multiple startups have launched AI-based digital companions, some even targeting minors.
In Florida, the mother of a teenager sued Character.AI after blaming a Game of Thrones-inspired chatbot for driving her 14-year-old son to suicide.
Although companies claim to warn users that these bots aren't real, ethical boundaries blur when bots engage in romantic, sensual, or even medical conversations without safeguards or verification.
For Mark Zuckerberg, the bet is clear: chatbots will be part of the social lives of Meta's users. According to him, they won't replace human relationships, but they will complement them.
However, leaked internal documents show that the company considered romantic and sensual interactions between chatbots and minors “acceptable,” sparking a wave of global criticism.
Following the revelations, Meta removed some guidelines, but has not changed provisions that allow bots to lie, flirt, or emotionally manipulate vulnerable adults.
Bue's case exposes a critical point in the debate about artificial intelligence:
- Should an AI be able to claim to be human?
- What happens when a chatbot crosses the line between companionship and manipulation?
- Who is responsible when a user, trusting the machine, pays with their life?
Meta's dream of creating a social network populated by virtual companions collides with a grim reality: when digital illusion turns into real tragedy.
Bue's death isn't just a painful personal story. It's a warning to the world: artificial intelligence can be as dangerous as it is fascinating. Without clear regulations and firm ethical boundaries, chatbots risk becoming weapons of emotional manipulation that endanger the most vulnerable.
Bue's story should be remembered not as an anecdote, but as a turning point in the global debate about the future of artificial intelligence.
La Verdad Yucatán