
Quirky Sounds Help Animate Aldebaran’s Pepper Robot.


So I finally sat down to watch the recording of Aldebaran’s press event with SoftBank, where they presented their joint venture, in the form of the new “emotional” robot “Pepper”, to the world. Aldebaran have the full video on their YouTube channel here.


Now, the first thing that struck me, and one I wasn’t expecting, was that as the robot was “waking up” (to the world, I guess), it was animated using a huge variety of quirky sounds. Non-Linguistic Utterances (NLUs) were used all over the place, and they worked very well. Combined with some very smooth choreography and motion execution, this led to a very visually appealing display. Very impressive, and just what the press likes to see.

I think this just goes to show that there is huge potential in the use of NLUs, and that they seem to work well for a broad variety of different robots. The world of animation and film clearly has many, many playful ideas to offer the world of HRI. It would be great to see more of these sounds finding their way into robots’ behaviour. I shall keep a curious eye on Aldebaran (and their jobs website)…

ALIZ-E at the Natural History Museum and on London LIVE TV!


So, to follow on from my last post about Plymouth showcasing our Sandtray setup, today we had some media interest in the system we have been developing. To give a little background: we are about to roll out an experiment in a local school, and are using this week as a serious test of whether our system can run intensively for hours on end. We plan to have it running in a local school for two weeks, starting in two weeks…

I was the lucky chap to get some TV air time today, and you can see the clip on London LIVE’s website here.

Showcasing ALIZ-E at the Natural History Museum in London.


From the 9th to the 13th of June 2014, the Natural History Museum is hosting “Universities Week”, where many UK universities will be showing their research to the general public. Along with my colleagues Paul Baxter, Joachim De Greeff, James Kennedy, Emily Ashurst and Tony Belpaeme, I will be demonstrating our “Sandtray setup” with the Nao. This is one of the main components of the ALIZ-E integrated system.

Sandtray at the Natural History Museum

Do pop along if you can and see what the wonderful world of robotics (and ALIZ-E, of course) has to offer! 😀

P.S. Thursday the 12th will be a late evening, open until 22:30.


My Research Featured on “The Academic Minute”


So, a little while ago (after HRI’14, and the New Scientist piece on robots and the Uncanny Valley), I was approached by a producer, Matthew Pryce, at WAMC Northeast Public Radio in New York, who invited me to record a short audio essay for a segment of their programming called “The Academic Minute”. I took Matthew up on his exciting offer and eagerly set about writing and recording the essay. After about 50 takes, I had a recording I was satisfied with, and I’m happy to say that it aired on the 28th of May 2014.

You can listen to the piece here.

Enjoy! 😉

NLUs featured in the New Scientist magazine


I recently attended (and presented at) the HRI’14 conference in Bielefeld, which exhibited many of the latest and greatest developments in the field of HRI from across the world. HRI is a highly selective conference (an acceptance rate of around 23%), and while getting a full paper into the conference seems to carry more weight in the US than in Europe, it is always a good venue to meet new people and talk about the interesting research that is going on.

It turns out that this year a New Scientist reporter, Paul Marks, was attending and keeping a close eye on the presentations, and he has written a nice article about some of the little tricks that we roboticists can use to make our robots that little bit more life-like. He draws upon some of the work on robot gaze presented by Sean Andrist, AJung Moon, and Anca Dragan, and also the work that I published/presented on NLUs, where I found that people’s interpretations of NLUs are heavily guided by the context in which they are used.

Essentially, what I found was that if a robot uses the same NLU in a variety of different situations, people use cues from within the context to direct their interpretation of what the NLU actually meant. Moreover, if the robot uses a variety of different NLUs in a single given context, people interpret these different NLUs in a similar way. To put it plainly, it would seem that people are less sensitive to the properties of the NLUs themselves, and “build a picture” of what an NLU means based upon how it has been used.

This has a useful “cheap and dirty” implication for the robot designer: if you know when the robot needs to make an utterance/NLU, you don’t necessarily have to craft an NLU specifically tailored to that context. You may well be able to get away with using any old NLU, safe in the knowledge that the human is more likely to draw on cues from the situation to guide their interpretation than on acoustic cues inherent to the NLU. This is of course not a universal truth, but I propose it as a reasonable rule of thumb for basic HRI… though I might change my view in the future with further experimentation… 😉
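To make that rule of thumb concrete, here is a minimal sketch of what such a “context-blind” NLU selector might look like. This is purely illustrative: the sound file names, the play_sound() placeholder and the emit_nlu() helper are my own assumptions, not part of any real robot API. The point is simply that the selection step never looks at the event type; the surrounding situation is left to carry the meaning.

import random

# A generic pool of NLUs; none is tailored to any particular situation.
# (The file names here are hypothetical.)
GENERIC_NLUS = ["nlu_01.wav", "nlu_02.wav", "nlu_03.wav"]

def play_sound(path):
    """Stand-in for whatever audio playback call your robot platform provides."""
    print(f"[robot] playing {path}")

def emit_nlu(event):
    # 'event' is deliberately ignored when choosing the sound: the same pool
    # serves successes, failures, greetings and so on. The human is expected
    # to read the meaning from the situation itself.
    play_sound(random.choice(GENERIC_NLUS))

emit_nlu("task_success")   # listeners tend to hear this as positive...
emit_nlu("task_failure")   # ...and the very same sounds as negative here.

The inverse design, with a hand-tailored sound per event, would of course still work; the finding above just suggests you may not need it.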

Anyway, it’s an interesting little read and well worth the 5 minutes if you have them spare…