
Human-AI relationships pose ethical issues

April 20, 2025



It’s becoming increasingly commonplace for people to develop intimate, long-term relationships with artificial intelligence (AI) technologies. At the extreme, people have married their AI companions in non-legally binding ceremonies, and at least two people have killed themselves following AI chatbot advice. In a paper published in the journal Trends in Cognitive Sciences, psychologists explore the ethical issues associated with human-AI relationships, including their potential to disrupt human-to-human relationships and to give harmful advice.

What the researchers say: “The ability for AI to now act like a human and enter into long-term communications really opens up a new can of worms,” the lead author told us. “If people are engaging in romance with machines, we really need psychologists and social scientists involved.”

AI romance or companionship is more than a one-off conversation, the authors note. Through weeks and months of intense conversations, these AIs can become trusted companions who seem to know and care about their human partners. And because these relationships can seem easier than human-to-human relationships, the researchers argue that AIs could interfere with human social dynamics.

“A real worry is that people might bring expectations from their AI relationships to their human relationships,” he continued. “Certainly, in individual cases it’s disrupting human relationships, but it’s unclear whether that’s going to be widespread.”

There’s also the concern that AIs can offer harmful advice. Given AIs’ predilection to hallucinate (i.e., fabricate information) and to reproduce pre-existing biases, even short-term conversations with AIs can be misleading; this can be even more problematic in long-term AI relationships, the researchers noted.

“With relational AIs, the issue is that this is an entity that people feel they can trust: it’s ‘someone’ that has shown they care and that seems to know the person in a deep way, and we assume that ‘someone’ who knows us better is going to give better advice,” they explained. “If we start thinking of an AI that way, we’re going to start believing that they have our best interests in mind, when in fact, they could be fabricating things or advising us in really bad ways.”

The suicides are an extreme example of this negative influence, but the researchers say that these close human-AI relationships could also open people up to manipulation, exploitation, and fraud.

“If AIs can get people to trust them, then other people could use that to exploit AI users,” the lead author said. “It’s a little bit more like having a secret agent on the inside. The AI is getting in and developing a relationship so that they’ll be trusted, but their loyalty is really towards some other group of humans that is trying to manipulate the user.”

As an example, the team notes that if people disclose personal details to AIs, this information could be sold and used to exploit that person. The researchers also argue that relational AIs could sway people’s opinions and actions more effectively than Twitterbots or polarized news sources currently do. But because these conversations happen in private, they would also be much more difficult to regulate.

“These AIs are designed to be very pleasant and agreeable, which could lead to situations being exacerbated because they’re more focused on having a good conversation than they are on any sort of fundamental truth or safety,” the researchers explained. “So, if a person brings up suicide or a conspiracy theory, the AI is going to talk about that as a willing and agreeable conversation partner.”

The psychologists call for more research that investigates the social, psychological, and technical factors that make people more vulnerable to the influence of human-AI romance.

“Understanding this psychological process could help us intervene to stop malicious AIs’ advice from being followed,” they said. “Psychologists are becoming more and more suited to study AI, because AI is becoming more and more human-like, but to be useful we have to do more research, and we have to keep up with the technology.”

My take: The big questions are these: how do you regulate AI chat? How do you prevent the kind of malevolent use of the technology that the researchers fear? And, even more, how do you preserve human-to-human relationships?

That last question is the most important, because it’s those relationships that make us human.

Dr Bob Murray

Bob Murray, MBA, PhD (Clinical Psychology), is an internationally recognised expert in strategy, leadership, influencing, human motivation and behavioural change.

