Robots are commonly used as everyday tools in fields ranging from construction and engineering to medical care. However, as most tools transition from needs to wants, the demand for “companion” robots has increased due to a growing desire for sociability.
Researchers at the University of Lincoln in the United Kingdom recently sought to determine how robots can best communicate with people. Their study was accepted in early October at the 2015 International Conference on Intelligent Robots and Systems in Hamburg, Germany.
“We found that people prefer the robot if it makes mistakes like humans,” said Mriganka Biswas, a Ph.D. student at the University of Lincoln’s School of Computer Science and lead researcher of the study.
The study was split into two experiments that each tested a cognitive bias: misattribution bias and the empathy gap. According to the study, a cognitive bias can influence the way people think and cause errors in judgment.
The first experiment tested “misattribution bias,” or misremembering details about an individual after meeting them. Thirty participants met the Emotional Robot with Intelligent Networks, or ERWIN, gave it information about themselves and then met it again at a later date. At the second meeting, ERWIN recalled the information accurately for half of the participants and misremembered it for the other half.
The experimental group, the scientists reported, was somewhat surprised because its members expected flawless memorization from ERWIN. Even so, the researchers said, these participants stated that they “enjoyed the biased interactions.”
The second experiment tested the empathy gap, an underestimated or overestimated emotional or physical response to a certain stimulus. In this case, MyKeepon, a tiny, “adorable” robot, responded to clapping and touch by making sounds and moving. Thirty participants interacted with the toy robot. For the first half of the group, the robot bounced and made noises that matched the number of claps. For the second half, MyKeepon became “overly happy, overly sad or unresponsive,” responding inaccurately to the number of claps.
As in the first experiment, the group that saw the inaccuracies in the robot’s performance was surprised, but the toy’s mistakes made for a more playful and lighthearted interaction.
“When it comes to making mistakes in counting, MyKeepon becomes sad [by] pointing its head to the ground and making sad noises – the whole situation gets very appealing to the participants,” the scientists noted in the study.
“At this stage our aim was to study how general people, who have a very limited idea about social robots, respond in these interactions,” Biswas said. These reactions to what people initially assume to be “perfect” beings or tools reveal a great deal about ourselves as well.
Although not directly studied in this research, Biswas said companion robots could be helpful as caretakers for people who need constant monitoring, especially those who cannot communicate well on their own.
“Social companion robots can help people to overcome their difficulties,” he said. “If we want robots to stay at our home and help us in our daily tasks, then there needs to be some form of relationship with it.”
Companionship technology has taken many forms to help those in need while remaining useful to anyone. Though not yet seen in robots, voice recognition has already fostered interesting “relationships” for people as well.
“I had heard about an autistic boy who improved [socially] because he was talking to Siri,” said Barbara Di Eugenio, a professor of computer science at the University of Illinois at Chicago. “Siri doesn’t get bored, he could just keep asking the same question over and over again. By interacting with this ‘fictional character,’ he actually improved his social interaction with other real people.”
Di Eugenio focuses more on the software that goes into smart devices and interactive robots. She is an expert in natural language processing, an area of computer science that builds on linguistics and understanding human languages through programming.
Di Eugenio and Biswas both said that these interactions are valuable to people who have autism, dementia, Alzheimer’s disease and other cognitive disorders. This field in robotics presents interesting scenarios in which researchers not only apply human interaction to robots, but robots can also “teach or reteach” effective communication skills to people with disabilities.
With these results leading the way toward potential therapies and more accurate human-robot interaction, Biswas said the next step in his research will involve more human-like machines. He has already begun work with the Multi-Actuated Robotic Companion, or MARC, which will test the same cognitive biases examined in this first study. He aims to see how imperfect robots fare when they begin to look less “cute” and more like real people.
“So what if our robot presents something similar but in a more 'human-like' way?” Biswas asked. “Our current humanoid robot experiment should tell us the differences between all three experiments, and we should be able to find out how a robot's shape, size and appearance affect the human-robot interaction.”