
Do socially animated robots just waste energy?


So today I stumbled across this IEEE Spectrum post on the Care-O-bot 4. I hadn’t actually come across the previous iterations of the Care-O-bot, but I must say, looking at the video of the first attempt, I doubt I was missing out on much. Sorry Fraunhofer…

There is a link to a rather fun little promotional video of the robot, and as I watched it, I certainly felt a light-hearted joy. The music was upbeat and gently fun, the basic unfolding of the plot was touching, and there was great attention to detail in the animation of the robot. HRI lessons have been picked up well here.

The robot design is simple at heart: slick white paint, some LEDs to provide robot-specific feedback channels, and two simple dots for eyes. There is a whole host of HRI research demonstrating that simple, when used properly, is very effective.

I particularly liked the arm movements when the robot rushes around to get the rose. They certainly help us empathize with the robot, no questions there. However, there is a price to pay for this: battery cell technology is certainly coming along, but for robots, energy is still a very precious resource.

So, here is my question. It’s a philosophical one, and it depends on how much you buy into the H in HRI, and of course on what function the robot serves. Those arm movements clearly conveyed a robot rushing around to achieve a goal, and in turn you might infer that the goal has quite some importance. They also probably required a fair bit of power. Were they value for watts? Was it worth expending all that energy for the robot to rush around? Did they really bring something truly extra to the robot?

I don’t think that there is a right or a wrong answer, but I think that this does remind us that from a practical perspective (at least for now), energy is precious to robots, and we as robot designers must contemplate whether we are using it most effectively.

Happy musings!

SociBot has arrived and a little about Retro-Projected Faces!


Our lab got a new companion today in the form of a SociBot, a hand-made/assembled 3-DoF robot head with a Retro-Projected Face (RPF) sitting on top of a static torso, produced by Engineered Arts Ltd in Cornwall, UK. Setting you back about £10,000, it’s an interesting piece of kit and could well be a milestone in the development toward truly convincing, facially expressive robots.

SociBot

So, how do these things work? Well, it’s actually really simple. You take a projector, locate it in the back of the robot’s head and project a face onto a translucent material which has the profile of a face. This way you as the user can’t see into the head, but you can see the face. It’s cheap, it’s simple and it’s becoming more popular at the moment. Retro-projected faces are not exactly a new idea in the world of social robotics, however (and they have an even longer history in the world of theater, I’ve come to learn). There have been a few universities exploring them in Europe, with a notable early example coming from Fred Delaunay, who did his PhD work at Plymouth a couple of years ago on his LightHead robot (Fred has since gone on to pursue a startup company, Syntheligence, in Paris to commercialise his efforts, as the idea seems to have caught on quite a bit). That said, retro-projected faces do have many useful things to offer the world of robotics.

For example, the face has no moving parts, and thus actuator noise is less of an issue (which isn’t altogether true: projectors get hot and their fans do have a fun time whirring away). Faces can also be rendered in real time (thus noiseless eye saccades, blinking and even pupil dilations are possible), and are very flexible to alterations such as blushing. You can put any face you want on SociBot, with animation possibilities where the sky’s the limit. The folks who put together the software for the face system (amusingly called “InYaFace”) have also made a model of Paul Ekman’s influential Facial Action Coding System (FACS), which allows the robot to make all kinds of facial expressions and lets it tie in very well with research that uses FACS as a foundation.
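I won’t reproduce the InYaFace internals here (I haven’t dug into them), but as a purely illustrative sketch of what a FACS-style expression layer tends to look like, here is a small Python example. The AU numbers are standard FACS action units; everything else (the dictionaries, blend() and the send_to_face() stub) is hypothetical and not the actual SociBot software.

```python
# Purely illustrative sketch: maps FACS action units (AUs) to an expression.
# The AU numbers come from Ekman's FACS; everything else (the expression
# dictionaries, blend(), send_to_face()) is hypothetical, not the InYaFace API.

# A "smile" in FACS terms: AU6 (cheek raiser) + AU12 (lip corner puller)
SMILE = {6: 0.8, 12: 1.0}

# "Surprise": AU1 + AU2 (brow raisers), AU5 (upper lid raiser), AU26 (jaw drop)
SURPRISE = {1: 1.0, 2: 1.0, 5: 0.7, 26: 0.6}

def blend(expr_a, expr_b, weight):
    """Linearly blend two AU dictionaries; weight=0 gives expr_a, 1 gives expr_b."""
    aus = set(expr_a) | set(expr_b)
    return {au: (1 - weight) * expr_a.get(au, 0.0) + weight * expr_b.get(au, 0.0)
            for au in aus}

def send_to_face(aus):
    """Hypothetical stub: a real system would turn AU intensities into
    blendshape/texture parameters for the projected face renderer."""
    print("Rendering AUs:", aus)

if __name__ == "__main__":
    send_to_face(blend(SMILE, SURPRISE, 0.3))  # mostly smiling, a hint of surprise
```

The nice thing about working at the AU level is exactly the point above: anything published in FACS terms maps straight onto the robot’s expression space.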

The robot itself runs Ubuntu Linux and uses lots of off-the-shelf software, such as OpenNI with the Asus Xtion Pro for body tracking and gesture recognition, plus face/age/emotion detection software using the head-mounted camera as the feed source. TTS is also built in, as is speech recognition (I think), but there is no built-in mic, only a mic input port. Also, we (by that I mean Tony) bought the SociBot Mini, but there is also a larger version where you can add a touch screen/kiosk as a joint virtual space that both the robot and the user can manipulate.
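To give a flavour of the OpenNI side of that stack, here is a minimal sketch that grabs depth frames using the primesense openni2 Python bindings. This is my own illustration of an OpenNI pipeline, not SociBot’s actual tracking code, and the skeleton/gesture layer (e.g. NiTE) would sit on top of something like this.

```python
# Minimal OpenNI2 depth-capture sketch (not SociBot's own code).
# Assumes the `primesense` Python bindings and an OpenNI2-compatible
# sensor such as the Asus Xtion Pro are installed and plugged in.
import numpy as np
from primesense import openni2

openni2.initialize()                  # load the OpenNI2 runtime
device = openni2.Device.open_any()    # first available depth sensor
depth_stream = device.create_depth_stream()
depth_stream.start()

try:
    frame = depth_stream.read_frame()
    # Raw buffer of 16-bit depth values (millimetres), reshaped into an image
    depth = np.frombuffer(frame.get_buffer_as_uint16(), dtype=np.uint16)
    depth = depth.reshape((frame.height, frame.width))
    print("Closest point in view: %d mm" % depth[depth > 0].min())
finally:
    depth_stream.stop()
    openni2.unload()
```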

SociBot can be programmed using Python, and it has a growing API which is constantly under development and looks promising. You can also program the robot via a web-based GUI, as Engineered Arts seem keen to have the robots be an “open” platform that you can log in to remotely and control.
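Since the API is still changing, I won’t quote it here, but the kind of scripting loop you end up writing for a head like this looks roughly as follows. Every class and method name below is made up for illustration; none of it is taken from Engineered Arts’ documentation.

```python
# Hypothetical sketch only: none of these names are the real SociBot API.
# It just illustrates scripting a projected-face head with neck DoFs,
# built-in TTS and person tracking.

class FakeSociBot:
    """Stand-in for a robot client object that a vendor SDK would provide."""
    def set_face(self, face_name):          # choose a projected face texture
        print("face ->", face_name)
    def look_at(self, pan_deg, tilt_deg):   # drive the 3 neck DoFs
        print("look at", pan_deg, tilt_deg)
    def say(self, text):                    # built-in TTS
        print("say:", text)

def greet_new_person(robot, person_bearing_deg):
    """Turn towards a newly detected person and greet them."""
    robot.look_at(person_bearing_deg, 0.0)
    robot.set_face("default_friendly")
    robot.say("Hello! Welcome to the lab.")

if __name__ == "__main__":
    greet_new_person(FakeSociBot(), person_bearing_deg=-20.0)
```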

Unboxing the robot

Given my recent post about the unveiling of Pepper by Aldebaran and the new Nao Evo that we got a little while ago, I was curious to see what Engineered Arts had put into the robot as a default behaviour. I must admit that I was rather pleasantly surprised. The robot appeared to be doing human tracking and seemed to look at new people when they came into view.

To make a comparison with Aldebaran, I think what really stands out here is that it took very little effort to start playing with the different aspects of the robot. When you get a Nao, it really doesn’t do that much when turned on for the first time, and there is quite a bit of setup required before you get to see some interesting stuff happen. Through the web interface, however, we were quickly able to play with the different faces and expressions that SociBot has to offer, as well as the little entertaining programs, such as the robot singing a David Bowie number and blasting out “Singing in the Rain”. Lots of laughs were had very quickly, and it was good fun exploring what you can do with the robot once it arrives.

From a product design / UX perspective, Engineered Arts have got this spot on. When you open the box of a robot, this is perhaps the most excited you will ever be about that robot, and making it easy to play with the thing straight away leaves a very good impression. Some things are a bit rough around the edges, but the fact that this is a hand-made robot tells you everything you need to know about Engineered Arts: they really do like to build robots in-house.

Drawbacks to retro-projected faces

Now, as this post is in part an overview of the current state of the art in RPF technology, I can certainly see room for improvement, as there are some notable shortcomings. Let’s start with some low-hanging fruit, which comes in the form of the volume used. Because the face is projected, most of the head has to be empty space in order to avoid treating yourself to a little shadow puppet show. This use of space is inefficient in my view, as it puts quite some limitations on where you can locate cameras. Both LightHead and SociBot have a camera mounted at the very top of the forehead, which means you lose out on potential sensor placement and video quality. Robots like iCub and Romeo have cameras mounted in actuated eyes, which allows them to converge (interesting from a Developmental Robotics point of view), but also to compensate for the robot’s head and body movements, providing a more stable video feed. Perhaps this is a minor point, as these robots are generally static in the grand scheme of things, but when they start to walk, I can see things changing quickly (in fact, this is exactly why Romeo has cameras in the eyes).

Another problem is to do with luminosity and energy dispersion. As the “screen ratio” (the throw and aspect ratio) of a modern off-the-shelf projector isn’t really suitable for covering a full face at the distance required, these systems turn to specialised optics to increase the projection field of view. However, this comes at the cost of spreading the projected light over a larger area, which results in lower luminosity for any given patch of the face. This is compounded when the projection hits the translucent material: the light refracts and scatters further, so you lose more luminosity and the image loses some sharpness. Of course, the obvious solution is to put in a more powerful projector, but it will then likely get hotter and need more fan cooling, and with that fan running in a hollow head, the sound echoes and reverberates.
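To make the trade-off concrete, here is a rough back-of-envelope sketch in Python. The numbers are made-up assumptions, not measurements of SociBot; the point is simply that illuminance falls with the square of the linear spread of the image, before you even account for losses in the translucent screen.

```python
# Back-of-envelope illustration of luminosity vs. projection spread.
# All numbers are assumptions for illustration, not measurements.

luminous_flux_lm = 200.0        # assumed projector output in lumens
base_area_m2 = 0.02             # assumed image area without wide-angle optics
spread_factor = 2.0             # optics double the linear size of the image
screen_transmission = 0.6       # assumed fraction of light surviving the
                                # translucent face material (rest scattered/absorbed)

area_m2 = base_area_m2 * spread_factor ** 2      # area grows quadratically
illuminance_lux = luminous_flux_lm / area_m2     # lux = lumens per square metre
perceived_lux = illuminance_lux * screen_transmission

print(f"Projected area: {area_m2:.3f} m^2")
print(f"Illuminance on the face: {illuminance_lux:.0f} lux")
print(f"After screen losses:     {perceived_lux:.0f} lux")
```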

Personally, I’m still waiting for flexible electronic screen technology to develop, as this would likely overcome most of these issues. If you can produce a screen that takes the shape of a face, you suddenly no longer need a projector at all: you gain back the space lost in the head, lose the noisy, echoing fan, and luminosity becomes less of an issue. Marry this with touch-screen technology, and perhaps actuation mounted under the screen, and I think you have a very versatile piece of kit indeed!

What should robots do “out of the box” in the future?


So today, after some waiting, we got our Nao Evolution robot. As you might expect, it took very little time for the scissors to come out and open the box, revealing the nice new shiny Nao robot, which looks surprisingly like our V4 Nao (it’s even got the same fiery orange body “armour”). I took a little time to glance around looking for visible enhancements to the design, which seem to amount only to the new layout of the directional microphones in the head. It would seem that the rest of the improvements lie underneath the plastic shell. So, time to hit the power button and fire up the robot…

This is where I paid far more attention. I wanted to see what software/programs/apps Aldebaran have added to the “fresh out of the box” experience. I think this is actually really important, because when you’re opening your new £5000 robot (and it doesn’t have to be a robot), the last thing you want is for the excitement and wow factor to die as soon as you realise the thing doesn’t actually do anything when you turn it on. That’s a real anti-climax! Booooo!

I have to say that today, when we turned on Nao Evolution, I was rather pleasantly surprised. Nao’s Life was running by default, and it seemed that the robot was doing both face tracking and sound localisation out of the box. Basically, the robot looked at you and followed you with its gaze, as well as responding to sounds. However, we didn’t hear anything verbal, nor any robotic sounds (unlike Pepper’s awakening). Still, it is basic social behaviour from the robot, and already it had our roboticists enthused. Clearly Aldebaran have gotten something right! That said, there was still computer setup to do (giving the robot a name, a username, a password, a wifi/internet connection, etc). In the future, it would be nice to see some of that migrate to the social interface that the robot affords.

All of this did get me thinking, though. Nao has an app store, which is a bit sparse at the moment but which I predict will become more and more populated, given that Aldebaran have also introduced their Atelier program. Furthermore, it reminded me of a conversation I had at HRI’14 with Angelica Lim (who is now at Aldebaran), where we were musing about how you might get the robot to interface with the NaoStore autonomously and suggest apps for users to try. An interesting line of thought in my view.

Today I found myself pondering this a little further. The NaoStore and app arrangement for the Nao seems very much like the Apple App Store and Google Play services. However, I wondered what form the apps would take. Would they be very much stand-alone pieces of software, or would they need a certain degree of inherent integration with the other vital pieces of software on the robot (for example, user models)? Remember, we have a social robot here, which in the future will likely have a personal social bond with you (and you with it). What might the implications be for how we design apps for social robots?

Should robot apps really take the form of individual pieces of software that act and behave very differently, and thus might change the personality/character of the robot? Should we even be able to start/stop/update apps, or should app management be something that we as users are oblivious to? The latter seems to be the setup with AskNAO at the moment, as teachers/carers have to set up a personal robot routine for each child, but it is unlikely that the child knows this is happening in the background. To them, I suspect, it is all the same robot making the decisions. The magic spell remains intact (but child-robot interaction is nice that way)…

What happens with grown-ups, though? Somehow I can see that in a perfect future, the robot would have a base “personality” or “character” of sorts that makes it distinct from other robots (at least in the eyes of the users), and that it alone manages the apps that then run. You as the user could still explicitly ask for apps to be installed and query the NaoStore, but I can imagine that this would be secondary to the robot being able to recognise that downloading a certain app might be useful without explicitly being told to do so (though I recognise that app management would be critical in this case; we don’t need dormant apps taking up space). Perhaps something comes up in conversation with your robot, and it decides it would be worth getting an appropriate app (for example, you like telling and hearing jokes, so Nao downloads a jokes app so that it can spontaneously tell you jokes in the future; see the toy sketch below). This is probably a long way off, and certainly needs some very clever AI and cognition on the robot’s part, not to mention many, many creases ironed out. Thus, I suspect that for the time being we will be using technology such as laptops and tablets/phones as the in-between media through which we manage our robots. Sadly, this sounds like our robots will be more like our phones and computers, rather than different entities altogether.
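To make that idea slightly more concrete, here is the toy sketch mentioned above. Everything in it is hypothetical (the app names, the keyword matching, the notion of a store client); a real system would need proper dialogue understanding and user modelling rather than keyword spotting, which is exactly the clever AI and cognition I’m saying we don’t have yet.

```python
# Toy sketch of a robot suggesting apps based on conversation.
# Entirely hypothetical: the app names and keyword matching are illustrations,
# not the NaoStore API or any real recommendation logic.

INTEREST_TO_APP = {
    "joke": "jokes-app",
    "weather": "weather-briefing",
    "music": "sing-along",
}

def detect_interests(utterance):
    """Crude keyword spotting standing in for real dialogue understanding."""
    return {topic for topic in INTEREST_TO_APP if topic in utterance.lower()}

def suggest_apps(conversation, installed):
    """Return apps worth fetching, given what the user talks about."""
    wanted = set()
    for utterance in conversation:
        for topic in detect_interests(utterance):
            app = INTEREST_TO_APP[topic]
            if app not in installed:
                wanted.add(app)
    return wanted

if __name__ == "__main__":
    chat = ["I love a good joke after dinner", "What's the weather doing tomorrow?"]
    print(suggest_apps(chat, installed={"weather-briefing"}))
    # -> {'jokes-app'}: the robot might quietly fetch this for later use
```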

To sum this all up, I guess that I am generally hypothesising that people’s perception of and attitudes towards robots that have an app store behind them might differ depending upon how apps are managed (managed by users themselves, or by the robot autonomously and unbeknownst to the user) and whether people even know of the existence of the app store… Could be some interesting experiments in there somewhere…