A Gradual Shift in Human Attitudes Toward Emotional Interaction with Robots?

Sherry Turkle is an MIT professor who studies human-robot psychological and social interactions. She has been documenting the attitudes of humans toward emotional relationships and affective interactions with robots over time, and notes a gradual shift toward seeing such interactions favorably. She recently presented at the annual meeting of the American Association for the Advancement of Science (AAAS); her talk was covered by LiveScience (Clara Moskowitz, "Human Robot Relations: Why We Should Worry," LiveScience, 18 February 2013, HT Insta). LiveScience is a popularizer of science, of course, and Turkle's academic research is sober, restrained, and much more sophisticated than a general-interest site can easily convey, but the article captures well some important points. First, attitudes are in fact shifting in the United States:

Turkle studies people’s thoughts and feelings about robots, and has found a culture shift over time. Where subjects in her studies used to say, in the 1980s and ’90s, that love and friendship are connections that can occur only between humans, people now often say robots could fill these roles …

Turkle interviewed a teenage boy in 1983, asking him whom he would turn to, to talk about dating problems. The boy said he would talk to his dad, but wouldn’t consider talking to a robot, because machines could never truly understand human relationships.  In 2008, Turkle interviewed another boy of the same age, from the same neighborhood as the first. This time, the boy said he would prefer to talk to a robot, which could be programmed with a large database of knowledge about relationship patterns, rather than talk to his dad, who might give bad advice.

Within the specialist community, however, Turkle is particularly well known for her concern that increasingly positive feelings toward machines as companions and replacements for human interaction are not a good thing over time. She worries (as she said at the AAAS meeting) that humans might come to see machines as perfect, safe companions – preferable to fallible and much more complicated actual human beings:

Turkle worries about this drive to replace human caretakers with robots. "It's not just that older people are supposed to be talking. Younger people are supposed to be listening," she said. "We are showing very little interest in what our elders have to say. We are building the machines that will literally let their stories fall on deaf ears."

Children, in turn, play with more and more robotic and electronic toys. Many, like the Tamagotchi digital pets of the 1990s, and the later robotic dog Aibo, require nurturing, which encourages kids to take care of them, and therefore, to care about them. Some kids say they prefer these pets to real dogs and cats that can grow old and die.  “People used to buy pets to teach their children about life and death and loss,” Turkle said. We are now teaching kids that real living creatures are risky, while robots are safe.

Turkle's worries are important, and I would go further and worry that, over time, we might be building a culture and society that rewards those who interact best with machines and worst with humans. That recognized, however, it is also important not to sentimentalize human interactions in settings in which the machine might turn out to do a much, much better job.

In the case of elder-care robots, for example, dealing with people with serious dementia or Alzheimer's – patients who might ask the same question over and over again – a cuddly robot that can be programmed to respond patiently, without getting frustrated or angry, is a blessing, not a curse. There is no reason it can't be programmed to give the clinically best form of response – e.g., answering the question a number of times, but then gently shifting the conversation away and out of the loop.
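To make that response policy concrete, here is a minimal sketch in Python of the "answer patiently a few times, then gently redirect" behavior described above. The class name, the repetition threshold, and the redirect phrase are all hypothetical illustrations, not any actual care robot's design:

```python
# Minimal sketch (purely illustrative, not any real product's code) of the
# "respond patiently, then gently redirect" policy described above.

class PatientResponder:
    """Answers repeated questions calmly, then gently changes the subject."""

    def __init__(self, max_repeats=3):
        # How many times to answer the same question before redirecting;
        # the threshold of 3 is an assumption, not a clinical recommendation.
        self.max_repeats = max_repeats
        self.counts = {}  # question -> number of times it has been asked

    def respond(self, question, answer,
                redirect="Shall we look at your photo album?"):
        # Count this occurrence of the question.
        self.counts[question] = self.counts.get(question, 0) + 1
        if self.counts[question] <= self.max_repeats:
            # Answer patiently, with no trace of frustration, every time.
            return answer
        # After enough repetitions, gently steer the conversation elsewhere.
        return redirect


responder = PatientResponder()
for _ in range(5):
    print(responder.respond("What day is it?", "It's Tuesday, dear."))
# Prints the answer three times, then the gentle redirect twice.
```

The point of the sketch is simply that the machine's patience is a design parameter: it never tires of answering, and the shift away from the loop can be tuned to whatever clinicians consider best practice.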

These are matters where the emotional connection is comforting precisely because it is safe, secure, and dependable for an elderly, confused person. There are other kinds of robots that (I certainly hope) will be developed for elder-care or nursing facilities that are not supposed to develop emotional ties – machines intended to free up nurses for more complex tasks that require human skill, judgment, emotions, and capacities. For example, many elderly people would likely prefer a machine – one that is purely an "appliance" – to help them with intimate functions such as toilet care. The point of a machine in that case is that it is an extension of oneself and a projection of one's own independence; one is not looking for "humanity" here. What matters is that the machine performs well, is reliable, and so on – but it is intended to be thought of as an appliance.

In some respects, then, certain aspects of robots that make them emotionally most helpful to people with serious illnesses such as dementia or Alzheimer's are precisely the ones that might make them harmful, in excess, to ordinary people without those difficulties, in the ways Turkle identifies. In my next posts, I want to take up two related robot-human interaction issues. One is robot sex: the assumption has always been that human sex will find a way and that robot sex will always be the vinyl blow-up sex doll, but technological advances and Turkle's insights suggest this might not always be so. The other is a more prosaic question about human attentiveness and self-driving cars. Stay tuned.
