My friend, the robot – Can friendship be programmed?

© Good Studio/stock.adobe.com & ML2R

Friendships with machines are a recurring theme in literature and pop culture. As early as the mid-20th century, Isaac Asimov described a friendship between the girl Gloria and her caretaker robot Robbie, and recently, Nobel laureate Kazuo Ishiguro envisioned a similar scenario with the “Artificial Friend” Klara. However, artificial friendships are more than science fiction: one of the pioneers of social robotics, MIT professor Cynthia Breazeal, argued twenty years ago that we could be friends with robots. In fact, commercial friendship bots, known as “Artificial Companions,” have already garnered significant scientific and media interest.

Artificial friendships are not as far-fetched as they may initially seem. The human tendency to anthropomorphize machines, or attribute human qualities to them, has been well-documented. We’ve all talked to a crashed computer or threatened an old car with the junkyard. Even minimal stimuli can lead us to humanize inanimate objects. This is especially true for robots designed to mimic human appearance and behavior: many people are willing to confide intimate aspects of their lives to a robot. Numerous studies show that people feel sympathy for robots, take care of their apparent needs, or attribute humor to them. Given this context, it’s not surprising that some people consider their robots as friends: they rave about the caring nature of their virtual friend or claim they can’t imagine life without their robot.

Are robots better friends?

These research findings also have an ethical dimension. Aristotle believed that friendships are a necessary component of a happy life, and empirical studies show that close social relationships significantly contribute to personal life satisfaction. Loneliness, on the other hand, negatively affects both physical and mental health. Surprisingly, interacting with a robot can also reduce feelings of loneliness. It seems that artificial friendships could improve the quality of life.

Moreover, new technologies, especially Artificial Intelligence (AI) in the form of Machine Learning, already surpass humans in many areas of life. Can we, therefore, create the perfect friend for everyone through AI? At first glance, many things suggest this: robots don’t spill secrets and are always available. They are never impatient or distracted by their own worries. Instead of painstakingly searching for a soulmate, the “personality” of a robot can be tailored to one’s own character. Indeed, some people prefer interactions with a robot over human connections.

Human friendships are valuable

On the other hand, a world in which no human friendships exist and we spend all our time with robot friends sounds unappealing. What do human friendships have that artificial friendships lack? The answer lies in the value of friendly relationships: friendships are intrinsically valuable because we behave morally within them. We support a friend not because it benefits us but because we wish them well, and we sometimes subordinate our interests to theirs. At the same time, friendships also have instrumental value for us because they serve a purpose. For instance, we can pursue shared hobbies with our friends, they help us move, and they make us feel lovable.

A robot friend can also be useful. It can (apparently) listen to us, provide comfort, make us laugh, and perhaps even help us move one day. However, what makes friendships intrinsically valuable is lost in artificial friendships, as such relationships inevitably serve only our interests. This is not extraordinary; even Aristotle describes friendships that endure solely because the friends derive a certain benefit from their relationship. But even such utilitarian friendships are impossible with robots: a central feature of friendships is reciprocity. Anyone who has ever been unhappily in love knows: love remains love, even if it is not reciprocated. Friendship, however, can never be one-sided. You can like a robot, tell it intimate secrets, care for it, and even react empathetically to it. But as long as no strong AI with emotions and consciousness exists, robots will only be able to simulate friendly behavior.


The illusion of friendship

Returning to anthropomorphization: we know that no computer wishes us harm, yet we plead with it to respond to our keyboard inputs. Media psychology suggests that humanizing inanimate objects is an involuntary process, largely beyond our control. It is thus easy to engage with a robot’s simulated friendship, even against our better judgment. This involves some self-deception, since we genuinely want to be liked, not merely to trick ourselves into believing we are. But what is wrong with indulging in the illusion of friendship if it feels good?

Perhaps the problem is that artificial friendships feel too good. In human friendships, we learn to set aside our own needs, make compromises, and handle conflicts. A close friend’s criticism, for instance, prompts us to reflect on ourselves and our behavior. This can be exhausting, but it teaches us a lot about ourselves and others. Artificial friends, however, are subject to the commercial interests of their manufacturer, who aims to make interaction with the product as pleasant as possible. Such a conflict-free, purely need-oriented artificial friendship could make human friendships seem less desirable by comparison.

Artificial friendships carry risks

To prevent the benefits of artificial friendship from turning into harms, regulation of the companies that manufacture such products is necessary. The unconditional appreciation offered by a robot friend carries the risk of psychological dependency and increased social isolation. Vulnerable individuals in particular must be protected from becoming too deeply immersed in the illusion on which artificial friendships are based. Under no circumstances should the human tendency to anthropomorphize be exploited for the commercial interests of a company or the security agenda of a government.

Our friends want what’s best for us. We must ensure that this is also true for artificial friendships.

Here is a video that shows what an artificial friendship can look like:

Sara Mann

Sara Mann is a research assistant in the Explainable Intelligent Systems project at TU Dortmund University. She is writing her doctoral thesis on epistemological and philosophy-of-science aspects of explainable Artificial Intelligence.
