Issues Magazine

The Existential Robot: Living with Robots May Teach Us to Be Better Humans

By Julie Carpenter

Human–robot interaction researcher

We are among the first humans to be regularly living and working with robots. What do we expect from our new robot companions?

Some people interact with robots every day, but many of us have never seen a robot in person. In the near future, people will be more likely to interact daily with many kinds of robots in roles such as personal caregiver, household helper, and assistant in firefighting, security and healthcare tasks. Socially, people may interact with robots in many different ways.

What Is a Robot?

Robots sometimes have a purely machine-like shape, the kind typically seen in controlled industrial settings doing repetitive tasks, like in factories. However, it is often practical from a design perspective to model some robots on human-like or animal-like shapes so they can efficiently accomplish certain tasks. There are many advantages to two-legged (biped) or four-legged (quadruped) robots, such as the ability to move with agility over all different sorts of terrain. Similarly, a robot that can use natural language to communicate with people will potentially be easier to use for a broader range of people, not just those with specialised training.

In my research, I think of a robot as the embodied physical presence of a mechanical system in our physical and social space.

Social Robots

The term social robotics sometimes refers to the process of making robots modelled on human–human interactions. Developing robots with social interaction capabilities may allow us to understand their intent better when we work together with them. Robot social cues might include things like conversational turn-taking, motion, speech or other human-like signals that communicate the robot’s intentions to people. These social aspects also help people to understand and interact with robots in an efficient way, building on our understanding of human–human communication.

A robot that makes eye contact when it speaks to us at a doctor’s office may be interpreted as friendly and put us at ease, while a smiling robot used for security purposes could have a disturbing or creepy effect on whoever comes in contact with it. But if a robot is imbued with the capacity to adapt its social cues for its primary users, in a way similar to how we learn to interact as humans, the robot can also generalise what it knows and apply some subjective experience to other situations. Thus, the social aspects of robot learning go beyond reflecting our expectations of interaction, or even human-like imitation for communication purposes. Through generalisation of subjective experience, a robot may be able to respond to many different kinds of situations without being explicitly told what to do by the user.

The “New Other”: Why We Treat Robots Differently from Other Technologies

Humans search for ways to make sense of unfamiliar things in order to overcome uncertainty. One way we attempt to reduce our uncertainty about new technology systems is to base our interactions with these new things, in part, on existing communication models familiar to us. Two models we often use as building blocks to scaffold our understanding of a new way to communicate with something social are our understanding of human–human and human–animal models of interaction.

Thus, our past experiences help us create mental models, and these help us decode new technologies like robots. Building a familiar set of design cues into some robots is an effective way for developers to leverage end-user expectations of how robots work, drawing on people’s tendency to make connections and link knowledge through familiar communication models. In other words, if the people who use robots, sometimes referred to as users, can infer an understanding of robot roles and behaviours from the design, they will be able to work with the robot efficiently and effectively.

Users are not the only ones whose mental models shape how we participate with robots in our space. Roboticists, the people who design robots, may have additional reasons to develop a robot that looks or acts like a human or animal. If the robot will move in a space created for humans, like a house or an urban setting, a human-like shape may make it more nimble. Sometimes robots are designed to move efficiently in natural and human-made environments, and so designers may model a robot’s body and movements on one or more real animals with those abilities, like a cheetah, a dog, or even a snake.

Often the cues we recognise in robots as lifelike, such as legs, arms, a head, or other natural characteristics, are purposefully included in robot design to make the robots more effective at their tasks. Other times, we look for and find these familiar characteristics even when the robot was not designed to resemble a living thing. Still other robots that people use every day are obviously recognisable as machines and do not look like anything we would easily recognise as living, yet they move and carry out some tasks with little prompting.

This type of semi-autonomous behaviour implies a sort of purpose or intent, even though the robot really may not be very intelligent without a human operator to guide it. However, people tend to assign these robots a type of independence we usually reserve for something living in order to communicate with it through a familiar method.

Is It OK for People to Be Attached to a Robot?

One way to look at emotional attachment is as an affectional tie. This tie or attachment occurs over a period of time, and causes us emotional distress when the bond is broken. We tend to become attached to people or animals or things when we share mutual experiences. A sense of attachment can create a sense of safety and comfort. In a human–human attachment model, people are responsive to each other’s needs. Research on attachment has historically looked at models such as parent–child, romantic couples, work teammates, human–animal, and even human–product.

Caregiving, romantic, and peer or teammate human–robot roles may lead naturally to some level of human attachment. I believe that from a design perspective it will be desirable to develop some robots to encourage the people who use them to become emotionally invested to a degree. For example, I can imagine therapeutic situations where a robot is used as a temporary stand-in or surrogate for a human so that a user/patient can practice healthy and successful social-emotional models of communication. This work could be done under some level of supervision by a human guide/coach/counsellor who would help someone work through therapies using a robot as a tool or medium for practice. A person’s therapeutic use of a robot for companionship or caregiving could be extremely helpful in making their life better at some level.

Problems could arise when this attachment interferes with people living in a healthy way, covering a broad list of what “healthy” might be. It’s easy to imagine scenarios where participating socially or emotionally with robots could be considered harmful.

I can also envision scenarios where humans direct their highly charged emotional energy to a technology system like a robot that cannot return real affection or emotion. That type of outcome could be considered an extreme behaviour that may limit healthy human–human interactions, or is otherwise not good for a person’s mental health.

Whether or not an individual feels their human–human interactions in life are sufficient may play a role in their emotional vulnerability when participating with robots in a way that is somehow unsafe. Many people seek some level of social fulfillment and stimulation, and this can leave them vulnerable to dependence, enmeshment, or over-reliance on any social outlet, organic or artificial. The bottom line is that human–robot interactions are transactions, not reciprocal exchanges, and therefore probably not fulfilling enough for people to rely on as a long-term substitute for organic two-way affectionate bonds, or as a surrogate for a human–human reciprocal relationship.

Again, there will be similar scenarios of human–robot interactions where context changes slightly, and the social and emotional outcomes for the users will be very different. If a robot is teleoperated by a human as an avatar (as in a long-distance relationship), that presents a different use context and different design challenges than a therapeutic robot used with a human therapist present. There could be some social and geography-bridging advantages for using an avatar robot to communicate with people on behalf of a human, but there will still be a level of self-deception taking place from all users regarding embodied presence. In other words, the idea of emotional relationships with robots is and will be a complex set of topics and questions.

My research for the past few years has focused on people who use field robots in the military every day, and what their human–robot interactions are like (The Quiet Professional: An Investigation of US Military Explosive Ordnance Disposal Personnel Interactions with Everyday Field Robots, University of Washington, 2013). The people I spoke with who use these robots are very aware that these machines are primarily intended to be tools. At the same time, that has not prevented some people from engaging in a level of pseudo-social interaction with the robots. Moreover, robot operators sometimes claimed to have a sense of self-extension of their physical and emotional selves into the robot.

This type of psychological merging with technology is not unusual; there have been similar findings with video game players and their game avatars. The operators explain that they view the robots they use as an extension of “their hands” because the operator controls the robot from a distance. They also describe how they sometimes blame themselves when a robot fails to carry out a task successfully, even if it is a mechanical or technical failure and no fault of the operator. A certain comfort level with a critical tool like a robot can be a positive thing for the operator because they recognise the robot’s capabilities and limitations, sometimes called tool characteristics.

There are different types of affection, such as romantic affection or friendly affection or parental affection. I claim that this idea of self-extension into a robot may be classified as a type of attachment, or affectional tie. When we view a technology as an extension of our physical and psychological selves, our relationship to the technology is immediately different than how we interact with some other tools, products or machines.

What Is Our Responsibility to Ourselves When We Live with Robots?

Robots are machines, currently not very intelligent and without sentience or self-awareness. Yet, sometimes we regard them as something with these uniquely lifelike qualities, or even as an extension of ourselves. As humans and technology users, we are still trying to figure out how to regard robot “others” in our space, but I anticipate there will be a time when norms for human–robot interactions are developed within many different cultures and contexts. In a sense, we are all the ones influencing and inventing and designing the robots, whether it is through our robot-related research and jobs, or our everyday preferences about how we use robots. In addition to formal laws and policies, we will all decide individually and collectively how to treat robots in our homes and in public spaces.

Over time we change our expectations and norms about how to work and live with new technologies. How we interacted with stationary telephones 100 years ago is not how we interact with mobile smartphones now; we have adapted to the changing technology and how we use it. As smart and responsive technologies become more integrated into our daily lives, I believe there will be a spectrum of emotional responses toward these things depending on the technologies’ roles and individual user tendencies.

In a social sense, we can already get a glimpse of how society might currently perceive human emotional attachment to a robot when we read about or know people who have similar models of human–technology attachment or affection, such as otaku superfandom for manga or anime characters, or people who create strong social connections via virtual contexts like Second Life or World of Warcraft. Contemporary movies like Her, A.I., and Lars and the Real Girl explore outcomes of human–artificial-human relationships. Worldwide, there are historically many variations of stories about human–artificial life relationships, and these are important cultural touchstones and ways we share ideas about how we participate in a world interacting with people and others.

In addition to the development of the robotic hardware and technology, there are a lot of ethical and cultural issues to consider as we move forward. Everything we do with robots and think about robots will continue to evolve – not just how we use them, but our expectations of how to use them and interact with them.

I believe living with robots is not just about discovering the roles robots will play in our lives, or even about making better robots. We have the opportunity to carefully consider how we treat robots via new policies, law and everyday attitudes that will inform our paths of robot design and perhaps give us new insights about what it means to be human or not human. The ways we experience and think about robots may make us reflect on ourselves and what we really want from each other, as well as consider our treatment of the complex machines who will participate so intimately in our lives.