OpenAI Employee Discovers Eliza Effect, Gets Emotional

Designing a program that can truly convince someone another human is on the other side of the screen has been a goal of AI developers since the concept took its first steps toward reality. Research company OpenAI recently announced that its flagship product ChatGPT would be getting eyes, ears, and a voice in its quest to appear more human. Now, an AI safety engineer at OpenAI says she got “quite emotional” after using the chatbot’s voice mode to have an impromptu therapy session.

“Just had a quite emotional, personal conversation w/ ChatGPT in voice mode, talking about stress, work-life balance,” said OpenAI’s head of safety systems Lilian Weng in a tweet posted yesterday. “Interestingly I felt heard & warm. Never tried therapy before but this is probably it? Try it especially if you usually just use it as a productivity tool.”

Weng’s experience as an OpenAI employee touting the benefits of an OpenAI product obviously needs to be taken with a huge grain of salt, but it speaks to Silicon Valley’s latest push to work AI into every nook and cranny of our plebeian lives. It also speaks to the everything-old-is-new-again vibe of this moment in the rise of AI.

The technological optimism of the 1960s bred some of the earliest experiments with “AI,” which took the form of attempts to mimic human thought processes with a computer. One of those experiments was a natural language processing program known as Eliza, developed by Joseph Weizenbaum at the Massachusetts Institute of Technology.

Eliza ran a script called Doctor, modeled as a parody of the non-directive style of psychotherapist Carl Rogers. Instead of feeling stigmatized and sitting in a stuffy shrink’s office, people could sit at an equally stuffy computer terminal for help with their deepest issues. Except Eliza wasn’t all that smart: the script simply latched onto certain keywords and phrases and reflected them back at the user in an incredibly simplistic manner, much the way a Rogerian therapist would. In a bizarre twist, Weizenbaum began to notice that Eliza’s users were getting emotionally attached to the program’s rudimentary outputs. You could say that they felt “heard & warm,” to use Weng’s own words.
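
The mechanics behind that illusion are easy to reproduce. The sketch below is a hypothetical, stripped-down illustration of the keyword-and-reflection trick, not Weizenbaum’s original Doctor script: it pattern-matches a handful of phrases, swaps the pronouns, and hands the user’s own words back as a question.

```python
import re

# A hypothetical, minimal Eliza-style responder (not Weizenbaum's original code).
# Each rule pairs a keyword pattern with a template that hands the user's own
# words back to them as a question.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "mine": "yours", "am": "are"}

RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
FALLBACK = "Please, go on."  # used when nothing matches

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones so the echo points back at the user."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching template, filled in with the reflected fragment."""
    for pattern, template in RULES:
        match = pattern.search(utterance.rstrip(".!?"))
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACK

print(respond("I am stressed about my work-life balance"))
# -> How long have you been stressed about your work-life balance?
```

That is essentially the whole trick in the sketch: pattern matching and string substitution, with no understanding behind it, which is what made the attachment Weizenbaum observed so unsettling.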

“What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people,” Weizenbaum later wrote in his 1976 book Computer Power and Human Reason.

To say that more recent experiments in AI therapy have also crashed and burned would be putting it mildly. Peer-to-peer mental health app Koko decided to experiment with an artificial intelligence posing as a counselor for 4,000 of the platform’s users. Company co-founder Rob Morris told Gizmodo earlier this year that “this is going to be the future.” Users in the role of counselors could generate responses using Koko Bot, a tool built on OpenAI’s GPT-3, and then edit them, send them as-is, or reject them altogether. 30,000 messages were reportedly created with the tool and received positive responses, but Koko pulled the plug because the chatbot’s replies felt sterile. When Morris shared the experience on Twitter (now known as X), the public backlash was overwhelming.
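
Koko has not published how its integration worked, so the sketch below is an assumption-laden illustration of that draft-then-review pattern rather than Koko’s actual code: the prompt, the model name, and the draft_reply and review_and_send helpers are all hypothetical, written against the current OpenAI Python client.

```python
from openai import OpenAI  # assumes the official openai Python package, v1 or later

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_reply(user_message: str) -> str:
    """Ask a model for a draft supportive reply. Prompt and model name are illustrative."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; Koko's actual model and prompt are not public
        messages=[
            {"role": "system", "content": "Draft a brief, supportive peer-counseling reply."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

def review_and_send(user_message: str) -> str | None:
    """A human counselor sees the draft and can send it, edit it, or reject it outright."""
    draft = draft_reply(user_message)
    print(f"Suggested reply:\n{draft}\n")
    choice = input("[s]end / [e]dit / [r]eject: ").strip().lower()
    if choice == "s":
        return draft
    if choice == "e":
        return input("Edited reply: ")
    return None  # rejected: the counselor writes their own reply from scratch
```

The point of the pattern is that the model only ever proposes text; a human decides whether any of it reaches the person in distress.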

On the darker side of things, earlier this year, a Belgian man’s widow said her husband died by suicide after he became engrossed in conversations with an AI that encouraged him to kill himself.

This past May, the National Eating Disorders Association made the bold move of dissolving its eating disorder helpline, which those in crisis could call for help. In its place, NEDA installed a chatbot named Tessa. The mass firing came only four days after the helpline’s employees unionized; before that, staff reportedly felt under-resourced and overworked, which is especially jarring for people working so closely with an at-risk population. Less than a week after Tessa took over, NEDA shuttered the chatbot. According to a post on the nonprofit’s Instagram page, Tessa “may have given information that was harmful and unrelated to the program.”

In short, if you’ve never been to therapy and are thinking of trying out a chatbot as an alternative, don’t.
