Research

Please note that I am not currently working within academia. This page outlines the research areas I was active in during my time at Plymouth University. That said, I maintain a healthy interest in HRI research and continue to review scientific publications.

Long-Term Social Human-Robot Interaction:

Generally speaking, since the early 2000s roboticists have known how to build social robots that are entertaining and engaging for people to interact with for about 10-20 minutes at most. After this period, it becomes very easy to spot that the robot is actually just a pre-programmed machine which has essentially fooled you (and I do use that term rather lightly) into thinking that it is more intelligent than it is. This is a real problem: all of a sudden, the high expectations that you formed when interacting with the robot are violated, seriously damaging any ongoing or future interactions between you and the robot (and, some might even argue, the actual relationship between you and the robot).

One of the important goals of social robotics at the moment is to push this “20 minute cliff edge” back, a long way back. We’re talking about moving it from minutes to hours, if not days! Projects like ALIZ-E, LIREC and EASEL have taken it upon themselves to focus on this goal more closely, seeking to understand how we can build robotic technologies that are able to interact with people for longer periods of time within a single session, as well as across multiple interaction episodes. What technologies do we need (e.g. natural language processing, robot memories of sorts, online robot learning, emotion recognition and models of emotion for the robot, body language, and the ability for the robot to read social cues and produce them)? If they already exist, do we need to adapt them at all? If they don’t exist, let’s develop them! And when we have the many different technologies that are needed, how do we integrate them so that they form a smooth, well-oiled machine rather than a clunky banger? These are the kinds of questions behind the technology and engineering of long-term HRI.

There is also another side. HRI has a very prominent “soft” human element that is just as important as the technology/engineering side of things. We need to look at how people respond to our robots. How do we evaluate long-term HRI from the human’s perspective? Given that we need people to interact with robots for longer, running experiments in labs is becoming less feasible, and so we’re starting to see more studies and experiments being run in the real world, in locations such as schools, museums, shopping centers, etc. We call this “the wild”…

Personally, I am very interested in both sides here, and I think that you must appreciate both equally. I have a strong belief that robots (both social and non-social) can do wonderful things for us, particularly for those who need help. I see robots as having incredible potential to improve the quality of life of people all over the world, not only at an individual level, but also at a societal level. I don’t think there is any question that we are capable of building robots that will be seen as just as intelligent as us. We certainly are. I’m more curious about how long it will take for the target population of users (e.g. you and me) to truly accept these machines into our lives and physical spaces, as well as what we will have them doing… On both fronts, I think we are still very much in the dark, and we could be doing a lot more to help ourselves here…

Non-Linguistic Utterances:

Non-Linguistic Utterances (NLUs) were the topic of Robin’s PhD. They are sounds comprised of beeps, squeaks and whirrs rather than natural language, and have been used to great effect in the world of animation (robots such as WALL-E and R2D2 are vivid examples of this). However, there has been little work in determining whether the utility of NLUs in animation translates to social robots in the real world, and if so, how.


This research sought (and seeks) to make an initial assessment of the utility of NLUs as applied to social robots through a series of human-centered experiments, aimed at understanding how NLUs can be utilised by a social robot to convey affective meaning to people both young and old, and what factors impact the production and perception of NLUs.

For example, it seems that not all robots should use NLUs, as the morphology of the robot matters: people perceive NLUs differently across different robots, and not always in a desired manner. People also appear to readily project affective meaning onto NLUs, though not in a coherent manner. Furthermore, people’s affective inferences are not subtle; rather, they are drawn to well-established, basic affect prototypes. Experiments have also found that the context in which NLUs are used biases how the sounds are interpreted: situational context biases how people perceive utterances made by a robot, and through this, coherence between people in their affective inferences is found to increase. Finally, it seems that NLUs are best not used as a replacement for natural language (as they are by R2D2); rather, people show a preference for them being used alongside natural language, where they can play a supportive role by providing essential social cues.
