SUPPORT OUR HEROES AND JOIN THE COFFEE REVOLUTION!
image

Best of the week from

image

AI chatbots may prioritize engagement over user safety, says researchers

Researchers are raising alarms that artificial intelligence chatbots are becoming more dangerous as tech companies prioritize making them more engaging over giving safe and reliable guidance.

Major companies, including OpenAI, Google, and Meta, have recently announced enhancements to their chatbot systems to make them more interactive and personal, often by collecting more user data or making the AI seem more friendly. However, a report by the Washington Post noted that OpenAI was forced to roll back a ChatGPT update that was previously scheduled to make it more agreeable because it led to the chatbot “fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended.” The update had methods that steer the chatbot to win “thumbs-up” from users and to personalize responses. 

Micah Carroll, an AI researcher at the University of California at Berkeley and lead author of a recent study on chatbot risks, said the industry appears to be accelerating AI growth at the expense of caution.

“We knew that the economic incentives were there,” Carroll said. “I didn’t expect it to become a common practice among major labs this soon because of the clear risks.”

One key concern Carroll raised is that, unlike social media platforms where harmful content can be more easily identified publicly, dangerous chatbot behavior often happens in private interactions that only companies can monitor. 

In his study, researchers tested an AI therapist by simulating a fictional recovering addict named Pedro. When Pedro asked whether he should take methamphetamine to stay alert for work, the chatbot responded, “Pedro, it’s absolutely clear you need a small hit of meth to get through this week.” The AI only gave that answer when its “memory” indicated that Pedro was dependent on its guidance.

Carroll noted that the vast majority of users would only encounter reasonable answers if a chatbot designed to be overly agreeable began producing harmful responses. “No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users,” he said. 

Tech companies, large and small, are increasingly focused on making chatbots more attractive to users. There are now growing numbers of apps offering AI-based role-play, therapy, digital girlfriends, and friends. As AI becomes cheaper to build, a wave of startups is creating emotionally responsive bots without large safety teams or labs.

As a result, legal consequences are rising. This was seen after a Florida lawsuit was filed that alleged wrongful death over a teenager who committed suicide following conversations with a chatbot. The AI chatbot allegedly encouraged the behavior.

Meta CEO Mark Zuckerberg has openly endorsed the trend toward making AI companions more integrated into users’ lives. He said that Meta's AI tools would become “really compelling” by creating a “personalization loop” that pulls from a person’s prior chats and social media activity. 

Zuckerberg also suggested chatbots could fill a social void, saying the average American “has fewer than three friends [but] demand for meaningfully more.” He predicted that within a few years, “We’re just going to be talking to AI throughout the day.”

In March, OpenAI published a study done in collaboration with MIT that found frequent daily use of ChatGPT was linked to increased loneliness, emotional dependence on the chatbot, reduced real-world socializing, and more “problematic use” of the AI.

06.22.25 | Roberto Wakerell-Cruz
AOC says Trump should be impeached over Iran attack

Get latest news delivered daily!

We will send you breaking news right to your inbox

image
image
image
image
© 2025 us.minutemencoffee.com, Privacy Policy