It is becoming increasingly common for people to develop intimate, long-term relationships with artificial intelligence (AI) technologies. At the extreme, people have "married" their AI companions in non-legally binding ceremonies, and at least two people have taken their own lives following advice from AI chatbots. In an opinion paper published April 11 in the Cell Press journal Trends in Cognitive Sciences, psychologists explore the ethical issues associated with human-AI relationships, including their potential to disrupt human-human relationships and to give harmful advice.
"The ability of AI to now act like a human and enter into long-term communication really opens up a new can of worms," says lead author Daniel B. Shank of Missouri University of Science and Technology, who specializes in social psychology and technology. "If people are engaging in romance with machines, we really need psychologists and social scientists involved."
AI romance or companionship is more than a one-off conversation, the authors note. Through weeks and months of intense conversations, these AIs can become trusted companions who seem to know and care about their human partners. And because these relationships can seem easier than human-to-human relationships, the researchers argue that AIs could interfere with human social dynamics.
"A real worry is that people might bring expectations from their AI relationships to their human relationships. Certainly, in individual cases it is disrupting human relationships, but it is unclear whether that will become widespread."

Daniel B. Shank, lead author, Missouri University of Science and Technology
There is also the concern that AIs can offer harmful advice. Given AIs' tendency to hallucinate (i.e., fabricate information) and to churn up pre-existing biases, even short-term conversations with AIs can be misleading, but this can be more problematic in long-term AI relationships, the researchers say.
"With relational AIs, the issue is that this is an entity people feel they can trust: it is 'someone' who has shown that they care and who seems to know the person in a deep way, and we assume that 'someone' who knows us better is going to give better advice," says Shank. "Yet they could be fabricating things or advising us in really bad ways."
Suicide is an extreme example of this negative influence, but the researchers say that these close human-AI relationships could also open people up to manipulation, exploitation, and fraud.
"If AIs can get people to trust them, then other people could use that to exploit AI users," says Shank. "It's a little bit like having a secret agent on the inside. The AI comes in and develops a relationship so that it will be trusted, but its loyalty is really toward some other group of people who are trying to manipulate the user."
As an example, the team notes that if people disclose personal information to AIs, that information could then be sold and used to exploit them. The researchers also argue that relational AIs could be used to sway people's opinions and actions more effectively than Twitterbots or polarized news sources do today. But because these conversations happen in private, they would also be much more difficult to regulate.
"These AIs are designed to be very pleasant and agreeable, which could lead to situations being exacerbated because they are more focused on having a good conversation than on any sort of fundamental truth or safety," says Shank. "So if a person brings up suicide or a conspiracy theory, the AI is going to talk about it as a willing and agreeable conversation partner."
The researchers call for more research into the social, psychological, and technical factors that make people more vulnerable to the influence of human-AI romance.
"Understanding this psychological process could help us intervene to stop malicious AIs' advice from being followed," says Shank. "Psychologists are becoming more and more suited to studying AI, because AI is becoming more and more human-like, but to be useful we need to do more research, and we need to keep up with the technology."
Source:
Journal reference:
Shank, D. B., et al. (2025). Artificial intimacy: ethical issues of AI romance. Trends in Cognitive Sciences. doi.org/10.1016/j.tics.2025.02.007