Paranoid about androids

David Smith explores how the European Commission’s Robotics4EU project aims to boost the adoption of responsible robotics throughout Europe…

Pessimistic assessments of our robotic future conjure up visions of high unemployment and alienating interactions with humanoids in healthcare settings. To avoid creating such a dystopian world, the European Commission (EC) wants the public to play a role in forging the future. One of the principal aims of the EC’s €3m Robotics4EU Project, which began in January 2021 and will end in December 2023, is to gather citizens’ views about non-technological barriers to the wider adoption of robots, especially in healthcare, infrastructure maintenance, agri-food and agile production.

“We have the knowledge to develop a lot of beneficial technology for the future of humanity and economic growth, but we have to think about the implications for society and how people can be engaged in technology development,” says Civitta Estonia’s Anneli Roose, Robotics4EU project coordinator. “We want people’s opinions about, for example, whether they’re happy to deal with a robot in a hospital instead of a human. Are they comfortable with humans being replaced by machines in some contexts?”

One interesting finding from preliminary analysis of a recent large-scale consultation with private citizens was that involvement in the discussions raised anxiety levels about robots. In October and November 2021, Robotics4EU organised 141 workshops involving 742 participants from some 12 countries, mainly in the EU but also South Korea and the USA. In these self-organised meetings of between four and nine people, participants watched videos and looked at pictures of robots, discussed them and answered questions. At the outset of the consultation, known as GlobalSay, 75% felt positive about robots; by the end, the figure had dropped to 63%.

But Mette Vinggaard Hellerung, from the Danish Board of Technology (DBT), which is analysing the data, says the fall was not a surprise, as citizens were asked to reflect on potentially negative impacts. Hellerung also stressed that the findings were preliminary and may change before the final report in January. Nevertheless, it was clear that reactions to robots ranged widely.

“It’s rare to see what citizens really think, so I was eager to see responses from French participants and it’s striking there are no strict opinions,” says Agnes Delaborde, from LNE, France’s national metrology and testing laboratory and a project participant. “Some people are completely relaxed about having robots around in the future and others highlight a lot of potential problems. It’s a very human response and I have mixed feelings, too. I work with robots, but I also hate them! I don’t want to have an idealistic view of how fantastically clever they are. At the same time, I have to acknowledge that in healthcare, they can be very effective, for example when interacting with children with autism.”

Delaborde says the media often showcase intelligent robots that are not yet widely deployed in society, such as the high-profile successes of DeepMind. More realistic scenarios are discussed far less, she says, including healthcare robots in development in laboratories worldwide. “There’s a discrepancy between press coverage and what’s really going on in industry. In the discussions we highlight a lot of things in development. But that’s precisely why it’s vital to engage with the public about realistic scenarios before it’s too late for them to give their opinions,” she says.

Societal and ethical impacts
The results of the study are complex and still being analysed by the DBT. Responses were gathered in five categories – socio-economic, data, legal, education and engagement. Once prompted, participants expressed a number of fears. A common worry was creating a society with less social engagement between humans. A US interviewee said: “Many people are afraid of being replaced by robots. I feel we would lose social interaction.”

Meanwhile, several citizens were wary of engineers and designers. “Distrust – maybe not so much toward robotics itself, but toward the corporations that make them,” said a participant from Slovakia. More than 75% felt engineers and robot designers should be held accountable for their creations.

When asked about the impact on society and jobs, a majority felt it would be positive. An interviewee from Malta spoke about the benefits of “reducing dangerous and repetitive work” and an interviewee from Portugal felt “the balance seems extremely positive in an ageing society where robots’ ability to support and interact with humans will be indispensable”.

However, when asked what could happen if robots became capable of performing many jobs currently done by humans, 60% felt it would lead to more inequality. Even here, responses were mixed, with some participants confident robots could solve staffing challenges in healthcare and others pointing to the need to guarantee social security for displaced workers.

Participants commented on a number of potential social dilemmas. One significant finding was that citizens found it acceptable for robots to care for elders and patients with cognitive conditions such as dementia, but not for children. Despite this finding, there was a general sentiment that “robots can never replace human interaction”.

A further area of consensus was that robots should not be given rights, even if they develop human intelligence. Asked about rights similar to those of animals, 40% strongly disagreed; asked about rights similar to those of humans, 46% strongly disagreed. The preliminary report from the DBT says the aversion pointed toward a “disposition to see robots as mere things” regardless of how they develop. Meanwhile, 60% agreed they would be fearful if robots developed feelings. The report said the overwhelmingly negative attitude could be influenced by depictions in popular culture.

Participants had strong views about certain ethical issues, such as the use of robotics in the military: more than 90% wanted regulatory limitations. There were also forceful opinions about the misuse of data and the vulnerability of internet-connected robots to cyber-attacks. Specifically, some participants pointed to Amazon’s Alexa or Echo devices as examples of (ro)bots capable of gathering or stealing personal data, or being hacked. Despite the overall shift to a more negative viewpoint, 85% of interviewees felt it was important to consider citizens’ viewpoints. One Danish participant commented, with irony, “We know more about robots now – and also about the fact that the EU would like to manipulate us into liking them.”

Perception is reality
The highly complex nature of the citizen responses arises because robots are “cultural artefacts”, as well as technical machines, according to Dr. Roger A. Søraa, from the Norwegian University of Science and Technology in Trondheim, who is tasked with analysing societal impact. “We have had different images of robots in the West, compared with Japan and South Korea, where they’re perceived in a friendlier way.”

Research on robots, he says, often produces unanticipated results because of the perception of robots as social. He points to a study of AGVs (automated guided vehicles) at St Olavs Hospital in Norway, revealing that humans gave the robots personalities. The robots, which transported trolleys, were given distinctive voices with strong local accents. They were a bit pushy, even rude, but fun. One parent with a gravely ill child found solace in the robots’ mindless battles as they vainly ordered inanimate objects such as walls to get out of the way.

“Although they weren’t designed to be social robots, humans anthropomorphised them by giving them social qualities,” says Søraa. “They found the accents charming. This is the type of complex reaction that shows the value of the Robotics4EU project. We need to go deep and examine how people feel about robots in different circumstances. A robot in a nursing home can provoke different reactions to an industrial robot that doesn’t interact.”

According to Søraa, the Robotics4EU research was especially urgent given the burdens on healthcare worldwide, especially during the pandemic. “We are facing waves of elderly deaths from Covid-19 and demographic changes. This is creating an extreme healthcare crisis. Technology can play an important role. But it requires social negotiations as well as technical development.”

The DBT is planning the next citizen engagement activity for May to July 2022. In this online consultation, up to 12 AI-based robotics applications, provided by businesses, will be selected for assessment. Public responses will be provided to the companies for consideration. “The goal is for tech developers to take into account public perceptions when designing robots,” says Roose.

Ensuring citizen acceptance by consulting the public is one of the four main aims of the Robotics4EU project. The other three are: 1) Developing a robotics maturity assessment model that is a practical tool to help developers to consider legal, societal and ethical aspects; 2) Empowering the robotics community by organising capacity building events in healthcare, agri-food, agile production and infrastructure; and 3) Reaching out to policymakers by compiling a responsible robotics advocacy report and organising a high-level policy debate.

Delaborde is helping to develop the maturity model using desk research and consultations. The concept of ‘maturity’ is rather complex, she says, involving legal compliance, safety, lack of bias, and efficiency. But a maturity tool could be useful for manufacturers, supervisory bodies, or end users. “Essentially, we want to measure whether the robot is ready to enter society. We’re creating a checklist to assess a number of factors. For example, one of the common fears is losing jobs and we will have questions assessing how likely that is to happen with particular robots. Designers, too, need to know exactly what they’re creating. Using the checklist, a robot will be given a global ‘maturity’ score.”
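The project has not published the internals of the maturity model, but the idea of collapsing several assessed factors into a single “maturity” score can be sketched in a few lines of code. Everything below is an illustrative assumption, not the project’s actual checklist: the criteria, the weights and the 0–5 rating scale are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One hypothetical checklist item. All fields are illustrative assumptions."""
    name: str
    weight: float  # relative importance of this criterion
    score: int     # assessor's rating on an assumed 0-5 scale

def maturity_score(criteria: list[Criterion]) -> float:
    """Collapse weighted checklist ratings into one 0-100 'maturity' figure."""
    total_weight = sum(c.weight for c in criteria)
    weighted_avg = sum(c.weight * c.score for c in criteria) / total_weight
    return round(weighted_avg / 5 * 100, 1)  # rescale 0-5 average to 0-100

# Hypothetical assessment of one robot, echoing factors named in the article.
checklist = [
    Criterion("legal compliance", weight=0.3, score=4),
    Criterion("safety", weight=0.3, score=5),
    Criterion("absence of bias", weight=0.2, score=3),
    Criterion("job-displacement risk", weight=0.2, score=2),
]

print(maturity_score(checklist))  # 74.0
```

A weighted average is only one plausible design; a real assessment tool might instead require minimum thresholds on critical criteria such as safety, so that a single failing item cannot be masked by strong scores elsewhere.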

Dirty, dangerous and demeaning
In its public consultations, Robotics4EU is keen to emphasise less well-publicised, highly technical aspects of industrial robotics. Anneli Roose of Civitta Estonia points to the example of inspection robots, increasingly common in monitoring major infrastructure like bridges. “This is the type of work that is impractical and risky for humans, which means robots can bring real social benefits,” she says.

In Genoa, Italy, the Ponte Morandi bridge collapsed in 2018, making headlines around the world. The replacement bridge, the Genoa-Saint George Viaduct, is now monitored by robots installed on the side of the new structure, which also process data to look for anomalies. Designed by the Istituto Italiano di Tecnologia and built by Camozzi Group, it is claimed to be the world’s first automated robotic bridge-inspection system. “That’s a very good example of robots doing tasks that are deemed to be too dirty, dangerous or demeaning for humans,” says Dr. Roger A. Søraa, from the Norwegian University of Science and Technology in Trondheim.

This article originally appeared in the January 2022 issue of Robotics & Innovation Magazine
