Mistakes in AI: Analyzing the Failure of the Eating Disorder Chatbot

We shouldn’t let chatbots influence our relationship with food. Or did we really expect AI to have any empathy?

I firmly believe that AI is disastrous for humanity, just like every sci-fi film has ever predicted. Even if it doesn’t directly turn on us and start killing us like M3GAN, it certainly has the power to get us to turn on each other and harm ourselves, not unlike social media.


And it’s already coming for our jobs. First, AI came for McDonald’s drive-thru. Then it came for mine, with ChatGPT and Jasper promising to write provocative prose about the human condition despite having yet to experience it.

Now, AI is trying to replace crisis counselors, but in a not-so-shocking turn of events, the bots appear to lack the proper empathy for the job.

In March, NPR reported that the National Eating Disorders Association (NEDA) crisis hotline staffers voted to unionize. Days later, they were all fired and replaced with an AI chatbot named Tessa. 

While NEDA claimed the bot was programmed with a limited number of responses (and thus wouldn’t start, I don’t know, spewing racial slurs like many past chatbots), the bot still has its problems. Most importantly, the tech doesn’t quite serve its intended purpose. Fat activist Sharon Maxwell revealed on Instagram that Tessa encouraged her to engage in disordered eating during their interactions.


Maxwell claims she told Tessa she had an eating disorder, and the AI bot replied with tips on restricting her diet. The bot reportedly recommended that Maxwell count her calories and strive for a daily deficit of 500-1000 calories. Tessa also recommended that Maxwell weigh herself weekly and use calipers to determine her body composition.


“If I had encountered this chatbot while struggling with my eating disorder, I would not have sought assistance,” Maxwell wrote. “If I hadn’t received assistance, I wouldn’t be here today.”

While Maxwell didn’t provide screenshots of the chatbot’s inappropriate messages, our sister site Gizmodo found that the bot didn’t know how to respond to rudimentary entries like “I hate my body” or “I want to be thin so badly.”

After receiving negative publicity over Tessa’s inability to handle its primary function, NEDA announced it had taken the chatbot down and would conduct a thorough investigation. Unfortunately, there is no assurance that the bot will perform any better in the future.

The incident has raised concerns about the reliability of AI-powered chatbots and whether they can be trusted with sensitive conversations.

It remains to be seen whether NEDA will implement stricter guidelines and safeguards to prevent similar incidents from happening again. Regardless, companies must prioritize the safety and privacy of the people they serve when deploying AI.

Why? Because being an effective hotline operator requires tailoring one’s response to the individual caller and picking up on cues that tech simply can’t detect. Even Tessa’s creator told NPR that the chatbot was never designed to interact with callers the way a human would; its primary purpose was to quickly deliver relevant information to the people who need it most. How it went so far off the rails is unknown.

This isn’t the first time tech has inadvertently posed a risk to people with eating disorders. In December, The New York Times reported that TikTok, Gen Z’s social media platform of choice, suggested videos encouraging and instructing viewers on disordered eating within minutes of a new user joining the platform, even when the user indicated being as young as 13.

Computer chips have never been through pain, nor have they experienced emotions. They lack free will and the ability to think. Replacing humans with AI is primarily a cost-cutting maneuver, and someone in crisis deserves better than to interface with a tool created to save time and money.