ALIZ-E

Sandtray Deployed to a local School!


Over the last few months the Plymouth Node of ALIZ-E has been working hard toward a rather large experiment that got underway at the start of the week. Last month we went to the Natural History Museum to showcase some of Plymouth University’s robotics research for Universities Week. That was used as a chance to give our system a hard beta test, which generally went very well. We identified some things that needed to be changed and made more robust, but all in all, we were very happy with the outcome.

After two more weeks of development and ironing out creases in the code, we finally deployed our two systems to a local Plymouth primary school on Monday morning this week; they will stay in two different classrooms for two weeks. You can see the setup below.

Paul playing with the Sandtray

So, the system consists of a 27-inch all-in-one touchscreen computer, which essentially provides a shared, malleable input space for both the human and the robot. The Nao stands or kneels on one side of the screen, while the human is located on the other side, facing the robot. Behind the Nao is a Kinect sensor that faces the human and can see just over the top of the Nao’s head.

Basically, on the touchscreen we run software that allows the children to play a non-competitive game with the robot. This is a categorisation task: icons on the screen (e.g. numbers, planets, colours) have to be moved to a particular location in order to be categorised in some way (e.g. odd or even numbers, numbers that are or are not part of the 4 times table, planets with and without moons). By keeping the basic underlying task simple, it is easy to change the icons and the categories to produce a wide repertoire of different games that are easily accessible to young children. As part of this game setup, each child in the class has their own “saved account” (which they choose by selecting their name on screen when prompted by the robot), which allows the robot to track their progress through the various games and, in turn, pitch the level of difficulty accordingly. The robot takes part in the game by moving icons on screen as well, and as the game is very open-ended, turn taking tends to emerge naturally from the process.
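To make the structure of the game a little more concrete, here is a minimal sketch in Python of how such a categorisation task and the per-child “saved accounts” could be represented. The class and field names are hypothetical illustrations, not the actual ALIZ-E code.

```python
# Minimal sketch of a categorisation game and child profile
# (hypothetical names; not the actual ALIZ-E implementation).
from dataclasses import dataclass, field


@dataclass
class Icon:
    label: str       # e.g. "7", "Mars", "red"
    category: str    # the category it belongs to, e.g. "odd"


@dataclass
class CategorisationGame:
    name: str
    categories: list   # target zones on the screen, e.g. ["odd", "even"]
    icons: list        # icons to be sorted

    def is_correct(self, icon: Icon, chosen_category: str) -> bool:
        """Check whether an icon was dragged into the right zone."""
        return icon.category == chosen_category


@dataclass
class ChildProfile:
    """A child's 'saved account': progress is tracked across games so that
    the difficulty of later games can be pitched accordingly."""
    name: str
    scores: dict = field(default_factory=dict)  # game name -> fraction correct

    def record(self, game: CategorisationGame, correct: int, total: int):
        self.scores[game.name] = correct / total

    def suggested_difficulty(self) -> str:
        avg = sum(self.scores.values()) / len(self.scores) if self.scores else 0.5
        return "harder" if avg > 0.8 else "easier" if avg < 0.4 else "same"


# Example: an odd/even game with a handful of number icons.
odd_even = CategorisationGame(
    name="odd or even",
    categories=["odd", "even"],
    icons=[Icon("3", "odd"), Icon("8", "even"), Icon("11", "odd")],
)
```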

The rationale for using a touchscreen is that it provides an interactive medium which is simultaneously intuitive for the human and for the robot. The human can manipulate objects on screen in a natural manner (select them, drag and drop, etc.), and can use the same kinds of cues as on tablets and similar devices (e.g. swipe, pinch to zoom). On the robot’s side, since the touchscreen is itself a computer, communication between it and the robot is very simple. Furthermore, when something is presented on screen, the robot can gain a great deal of knowledge about the objects being manipulated (and how) that would otherwise be very difficult to obtain in the physical world. By providing a shared, malleable space that is virtual rather than physical, we overcome many of the limitations in the Nao’s ability to manipulate real objects, as well as its perceptual limitations.
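As a rough illustration of why this makes the robot’s “perception” so easy, the screen application can simply stream structured events to the robot controller whenever an icon is moved. The message format below is purely illustrative (the real ALIZ-E protocol is not shown here):

```python
# Illustrative only: a touch-event message the screen application might send
# to the robot controller (not the actual ALIZ-E message format).
import json
import time


def make_move_event(icon_id: str, x: float, y: float, actor: str) -> str:
    """Describe a drag of an icon to normalised screen coordinates (0-1)."""
    event = {
        "type": "icon_moved",
        "icon": icon_id,
        "position": {"x": x, "y": y},
        "actor": actor,          # "child" or "robot"
        "timestamp": time.time(),
    }
    return json.dumps(event)


# The robot side can parse the same message and immediately know which object
# was manipulated and where it ended up -- information that would be hard to
# recover from the Nao's own cameras in the physical world.
print(make_move_event("icon_7", 0.82, 0.35, actor="child"))
```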

We use the Kinect for a similar reason. The Kinect is used to estimate head pose, and thus to determine whether the human is looking at the screen, at the robot, or somewhere else. This allows us to, in part, drive the gaze behaviour of the robot. For example, if the human is looking at the screen, the robot can establish joint attention by doing the same, and if the human is looking at the robot, the robot knows that it can make eye contact. It is useful to have the Kinect act as a “fly in the corner” because these estimates are difficult to obtain using the Nao’s built-in cameras, particularly as they move around with the head while the robot gazes about.
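Sketched roughly in code, the kind of gaze policy described above could look something like the following. This is a simplification for illustration; the function name, the target labels, and the fallback behaviour for “elsewhere” are my hypothetical choices rather than the actual ALIZ-E gaze controller.

```python
# Rough sketch of a head-pose-driven gaze policy (hypothetical; not the
# actual ALIZ-E gaze controller).

def choose_robot_gaze(human_gaze_target: str) -> str:
    """Map the Kinect's estimate of where the human is looking to a gaze
    target for the robot."""
    if human_gaze_target == "screen":
        # Follow the human to the screen to establish joint attention.
        return "screen"
    elif human_gaze_target == "robot":
        # The human is looking at the robot, so it can make eye contact.
        return "human_face"
    else:
        # Looking elsewhere: one plausible (assumed) choice is to look at
        # the human in an attempt to re-engage them.
        return "human_face"


for target in ["screen", "robot", "elsewhere"]:
    print(target, "->", choose_robot_gaze(target))
```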

We’re keeping our fingers crossed that we aren’t bitten too hard by the “Demo Effect” (whereby a system that works fully in the lab decides to have a catastrophic breakdown as soon as it is shown to real people…). Doubtless we will seek to publish our results from the experiment in due course (so sorry for not detailing the experiment itself here), so watch this space… 😉


ALIZ-E at the Natural History Museum and on London LIVE TV!


So, to follow on from my last post about Plymouth showcasing our Sandtray setup, today we had some media interest in the system we have been developing. To give a little background, we are in fact about to roll out an experiment in a local school, and we are using this week as a serious test of whether our system can run intensively for hours on end. We plan to have it running in a local school for two weeks, starting in two weeks…

I was the lucky chap to get some TV air time today, and you can see the clip on London LIVE’s website here.

Showcasing ALIZ-E at the Natural History Museum in London.


From the 9th to the 13th of June 2014 the Natural History Museum is hosting “Universities Week”, where many UK universities will be showing their research to the general public. Along with my colleagues Paul Baxter, Joachim De Greeff, James Kennedy, Emily Ashurst and Tony Belpaeme, I will be demonstrating our “Sandtray setup” with the Nao. This is one of the main components of the ALIZ-E integrated system.

Sandtray at the Natural History Museum

Do pop along if you can and see what the wonderful world of robotics (and ALIZ-E, of course) has to offer! 😀

P.S. Thursday the 12th will be a late evening, open until 22:30.

 

NLUs featured in the New Scientist magazine


I recently attended (and presented at) the HRI’14 conference in Bielefeld, which exhibited lots of the latest and greatest developments in the field of HRI from across the world. HRI is a highly selective conference (around a 23% acceptance rate), and while getting a full paper into the conference seems to carry more weight in the US than in Europe, it’s always a good venue to meet new people and talk about the interesting research that is going on.

It turns out that this year a New Scientist reporter, Paul Marks, was attending and keeping a close eye on the presentations, and he has written a nice article about some of the little tricks that we roboticists can use to make our robots that little bit more life-like. He draws upon some of the work on robot gaze presented by Sean Andrist, Ajung Moon and Anca Dragan, and also on the work that I published/presented on NLUs, where I found that people’s interpretation of NLUs is heavily guided by the context in which they are used.

Essentially, what I found was that if a robot uses the same NLU in a variety of different situations, people use cues from within the context to help direct their interpretation of what the NLU actually meant. Moreover, if the robot uses a variety of different NLUs in a single given context, people interpret these different NLUs in a similar way. To put it plainly, it would seem that people are less sensitive to the properties of the NLUs themselves, and instead “build a picture” of what an NLU means based upon how it has been used. This has a useful “cheap and dirty” implication for the robot designer: if you know when the robot needs to make an utterance/NLU, you don’t necessarily have to make an NLU that is specifically tailored to that context. You might well be able to get away with using any old NLU, safe in the knowledge that the human is more likely to use cues from the situation to guide their interpretation than specific acoustic cues inherent to the NLU. This is of course not a universal truth, but I propose that it is a reasonable rule of thumb to use during basic HRI… However, I might change my view on this in the future with further experimentation… 😉
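As a toy illustration of that rule of thumb, an NLU could simply be drawn from a small library whenever the interaction calls for one, rather than being tailored to the event. This is only a sketch of the design implication; the file names and function are made up and this is not the system used in the study.

```python
# Toy illustration of the "cheap and dirty" rule of thumb: play any NLU from
# a small library when the situation calls for one, and rely on the
# situational context to carry the meaning (hypothetical; not the study code).
import random

NLU_LIBRARY = ["beep_rising.wav", "chirp_double.wav", "warble_short.wav"]


def utter_nlu(event: str) -> str:
    """Return an NLU to play in response to an interaction event.

    The NLU is not tailored to the event; the surrounding context
    (e.g. the child just sorted an icon correctly) guides interpretation.
    """
    nlu = random.choice(NLU_LIBRARY)
    print(f"event={event!r}: playing {nlu}")
    return nlu


utter_nlu("correct_categorisation")
utter_nlu("robot_takes_turn")
```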

Anyway, it’s an interesting little read and well worth the 5 minutes if you have them spare…

HRI’14 in Bielefeld.


Robin will be giving a full paper presentation on the impact that situational context has upon the interpretation of Non-Linguistic Utterances at the 9th ACM/IEEE Human-Robot Interaction Conference in Bielefeld in March. He will also be showing a poster in the poster session. Give his sleeve a tug at the conference if you have any questions!