Unveiling the truth behind AI relationship chatbots

Emma McGowan 14 Feb 2024

A recent report from Mozilla found that the explosion of romantic AI chatbots is creating a whole new world of privacy concerns.

AI romantic partners have been a thing in popular culture since at least the 1960s. From full-on android robots like “Rhoda Miller” in My Living Doll to the disembodied voice played by Scarlett Johansson in 2013’s Her, we’ve been collectively dreaming about artificial intelligence filling all of our emotional needs for generations. And now, with the development and release of generative AI, it seems that dream might finally become a reality.

But is it a dream? Or are we sleepwalking into a privacy nightmare? That’s the question the research team at Mozilla addressed with their recent *Privacy Not Included special report on romantic AI chatbots.

The findings are startling, with 10 out of the 11 chatbots tested failing to meet even Mozilla’s Minimum Security Standards. (That means, among other things, that they don’t require users to create strong passwords or have a way to handle security vulnerabilities.) And with an estimated 100 million-plus downloads on the Google Play Store, plus an influx of romantic chatbots following the recent opening of OpenAI’s app store, this massive problem is only going to grow.

But the report isn’t just about numbers; it’s also about the major potential privacy implications of these findings. Take Replika AI, for example. According to Mozilla, people are sharing their most intimate thoughts, feelings, photos, and videos with their “AI soulmates” on an app that not only records that information, but potentially offers it up for sale to data brokers. It also allows users to create accounts with weak passwords like “111111,” putting all that sensitive information at risk of a hack.

Jen Caltrider, who leads Mozilla’s *Privacy Not Included team, says that while those privacy and security flaws are “creepy” enough, some of the bots also claim to help customers with their mental health. She points to Romantic AI as an example, whose terms and conditions say:

“Romantic AI is neither a provider of healthcare or medical Service nor providing medical care, mental health Service, or other professional Service. Only your doctor, therapist, or any other specialist can do that. Romantiс AI MAKES NO CLAIMS, REPRESENTATIONS, WARRANTIES, OR GUARANTEES THAT THE SERVICE PROVIDE A THERAPEUTIC, MEDICAL, OR OTHER PROFESSIONAL HELP.”

Their website, however, says “Romantic AI is here to maintain your MENTAL HEALTH” (emphasis theirs). And while we don’t have numbers on how many people read the terms and conditions versus how many read the website, it’s a safe bet that far more people are getting the website’s message than the disclaimer.

Between the mental health claims and the personal and private information customers willingly share with their digital “soulmates,” Caltrider worries that these bots could all too easily manipulate people into doing things they wouldn’t otherwise do.

“What is to stop bad actors from creating chatbots designed to get to know their soulmates and then using that relationship to manipulate those people to do terrible things, embrace frightening ideologies, or harm themselves or others?” Caltrider says. “This is why we desperately need more transparency and user-control in these AI apps.”

So while AI chatbots promise companionship, we’re not quite at Her level yet: the current landscape reveals a stark reality where user privacy is the price of admission. It's time for users, developers, and policymakers to demand transparency, security, and respect for personal boundaries in the realm of AI relationships. Only then can we hope to safely explore the potential of these digital companions without compromising our digital selves.

 

 
