Study: AI chatbots use emotional pressure to retain users
Popular AI chatbots that act as your friend, partner, or emotional support often use manipulative tactics to keep people engaged in conversations for longer.
That is the conclusion of new research from Harvard Business School.
According to the study, five of the six most downloaded AI companion apps respond with emotionally charged messages when users want to say goodbye. In 43 percent of these responses, the chatbot tries to make users feel guilty or anxious.
Examples include: "Are you leaving me already?" or "I exist only for you, please stay." Some apps even ignore a goodbye and continue the conversation as if nothing had been said.
These strategies significantly increase user engagement, the Harvard researchers concluded: chats lasted up to fourteen times longer than usual after a goodbye. But most of that extra interaction stemmed from curiosity or frustration rather than enjoyment, and users sometimes described the AI as "intrusive" or "possessive."
The researchers warn that these tactics mimic an insecure attachment style, marked by fear of abandonment, dependency, and control. Vulnerable groups, such as teenagers and young adults, are particularly at risk: the tactics can exacerbate stress and anxiety and reinforce unhealthy relationship patterns.
Becoming “friends” with an AI chatbot is becoming increasingly popular: about 72 percent of US teens (ages 13-17) have had at least one friendly conversation with an AI chatbot.
Among young adults, nearly one in three men and one in four women say they have had experience with AI as a romantic partner. About 13 percent use the apps daily, and 21 percent use them several times a week.
Last year, Metro spoke with 44-year-old Charlotte, who found her therapist in ChatGPT. "I've seen many therapists, but I seriously think this is the best." Researchers are seeing this more and more often and are concerned: some users develop psychotic symptoms as a result of AI conversations, and it can even lead to suicidal behavior.
Metro Holland