non-linguistic utterances

Quirky Sounds help Animate Aldebaran’s Pepper Robot.


So I finally sat down to watch the recording of Aldebaran’s press event with SoftBank, where they presented their joint venture, in the form of the new “emotional” robot “Pepper”, to the world. Aldebaran have the full video on their YouTube channel here.


Now, the first thing that struck me, and what I wasn’t expecting, was that as the robot was “waking up” (to the world, I guess), it was animated using a huge variety of quirky sounds. Basically, Non-Linguistic Utterances were used all over the place, and they worked very well. Mixed with some very smooth choreography and motion execution, this led to a very visually appealing display. Very impressive, and just what the press likes to see.

I think this just goes to show that there is huge potential in the use of NLUs, and that they seem to work well for a broad variety of robots. The world of animation and film clearly has many, many playful ideas to offer the world of HRI. It would be great to see more of these sounds finding their way into robots’ behaviour. I shall keep a curious eye on Aldebaran (and their jobs website)…


My Research Featured on “The Academic Minute”


So a little while ago (after HRI’14, and the New Scientist piece on robots and the Uncanny Valley) I was approached by a producer, Matthew Pryce, at WAMC Northeast Public Radio in New York, who invited me to record a short audio essay for a segment of their programming called “The Academic Minute”. I took Matthew up on his exciting offer and eagerly set about writing and recording the essay. After about 50 takes, I had a recording that I was happy with, and I’m pleased to say that it aired on the 28th of May 2014.

You can listen to the piece here.

Enjoy! 😉

Talks in the Universities of Lincoln and Sheffield


Next week I’ll be in the UK Midlands giving two talks on my PhD research into Non-Linguistic Utterances: one at the University of Lincoln (on the 7th of May), where I will be hosted by Dr. Marc Hanheide, and then one at the University of Sheffield (on the 8th), hosted by Prof. Roger Moore.

It should be a good networking opportunity, and perhaps even bring work prospects, as I’m looking for a new job (my contract with Plymouth University finishes at the end of August this year). Should be good fun! 😉

NLUs featured in the New Scientist magazine


I recently attended (and presented at) the HRI’14 conference in Bielefeld, which exhibited many of the latest and greatest developments in the field of HRI from across the world. HRI is a highly selective conference (around a 23% acceptance rate), and while getting a full paper into the conference seems to carry more weight in the US than in Europe, it’s always a good venue to meet new people and talk about the interesting research that is going on.

It turns out that this year there was a New Scientist reporter, Paul Marks, attending and keeping a close eye on the presentations, and he’s written a nice article about some of the little tricks that we roboticists can use to make our robots that little bit more life-like. He draws upon some of the work on robot gaze presented by Sean Andrist, AJung Moon, and Anca Dragan, as well as the work that I published/presented on NLUs, where I found that people’s interpretations of NLUs are heavily guided by the context in which the NLUs are used.

Essentially, what I found was that if a robot uses the same NLU in a variety of different situations, people use cues from within the context to direct their interpretation of what the NLU actually meant. Moreover, if the robot uses a variety of different NLUs in a single given context, people interpret these different NLUs in a similar way. To put it plainly, it would seem that people are less sensitive to the properties of the NLUs themselves, and instead “build a picture” of what an NLU means based upon how it has been used. This has a useful “cheap and dirty” implication for the robot designer: if you know when the robot needs to make an utterance/NLU, you don’t necessarily have to craft an NLU that is tailored to that specific context. You might well be able to get away with using any old NLU, safe in the knowledge that the human is more likely to utilise cues from the situation to guide their interpretation, rather than specific acoustic cues inherent to the NLU. This is of course not a universal truth, but I propose that it is a reasonable rule of thumb to use during basic HRI… However, I might change my view on this in the future with further experimentation… 😉
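For the robot designers among you, the “cheap and dirty” rule of thumb above can be sketched in a few lines of Python. Note this is just an illustration of the idea, not code from my experiments: the sound file names and event labels are entirely made up, and the point is simply that the utterance is chosen independently of the situation, letting context do the interpretive work.

```python
import random

# Hypothetical pool of NLU sound files; the names are invented for this sketch.
NLU_POOL = ["beep_rising.wav", "chirp_double.wav", "warble_short.wav"]

def pick_nlu(event: str) -> tuple[str, str]:
    """Pair an interaction event with an NLU.

    The sound is drawn from the shared pool regardless of the event:
    per the rule of thumb, the situational context (the event itself)
    guides the human's interpretation, not the acoustics of the sound.
    """
    return (event, random.choice(NLU_POOL))

# Example: the same pool serves quite different situations.
for event in ["greeting", "task_complete", "low_battery"]:
    print(pick_nlu(event))
```

The design choice worth noting is what is *absent*: there is no mapping from event to a specially authored sound, which is exactly the authoring effort the finding suggests you may be able to skip.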

Anyway, it’s an interesting little read and well worth the 5 minutes if you have them spare…

HRI’14 in Bielefeld


Robin will be giving a full paper presentation on the impact that situational context has upon the interpretation of Non-Linguistic Utterances at the 9th ACM/IEEE Human-Robot Interaction Conference in Bielefeld in March. He will also be showing a poster in the poster session. Give his sleeve a tug at the conference if you have any questions!