What should robots do “out of the box” in the future?


So today, after some waiting, we got our Nao Evolution robot. As you might expect, it took very little time for the scissors to come out and open the box, revealing the shiny new Nao robot, which looks surprisingly like our V4 Nao (it’s even got the same fiery orange body “armour”). I took a little time to look for visible enhancements to the design, and the only one seems to be the new layout of the directional microphones in the head. It would seem that the rest of the improvements lie underneath the plastic shell. So, time to hit the power button and fire up the robot…

This is where I paid far more attention. I wanted to see what software/programs/apps Aldebaran have added to the “fresh out of the box” experience. I think that this is actually really important as when you’re opening your new £5000 robot (and it doesn’t have to be a robot), you really don’t expect the excitement and wow factor to die as soon as you realise the thing doesn’t actually do anything when you turn it on. That’s a real anti-climax! Booooo!

I have to say that today when we turned on Nao Evolution, I was rather pleasantly surprised. Nao’s Life was running by default, and the robot was doing both face tracking and sound localisation out of the box. Basically, the robot looked at you and followed you with its gaze, as well as responding to sounds. However, we didn’t hear anything verbal, and no robotic sounds either (unlike Pepper’s awakening). Still, it is basic social behaviour from the robot, and already it had our roboticists enthused. Clearly Aldebaran have gotten something right! That said, there was still computer setup to do (giving the robot a name, a username, password, Wi-Fi/internet connection, etc.). In the future, it would be nice to see some of that migrate to the social interface that the robot affords.
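For anyone curious, that out-of-the-box behaviour is also exposed programmatically. Here is a minimal sketch of how the same awareness behaviours can be switched on via the NAOqi Python SDK (assuming NAOqi 2.x; the robot address is a placeholder, and exact method availability may vary between versions):

```python
# Minimal sketch: enable the "aware" behaviours Nao Evolution shows out of the box.
# Assumes the NAOqi 2.x Python SDK; ROBOT_IP is a placeholder for your robot's address.
from naoqi import ALProxy

ROBOT_IP = "nao.local"  # placeholder
PORT = 9559             # default NAOqi port

# Autonomous Life drives the default attentive state the robot boots into.
life = ALProxy("ALAutonomousLife", ROBOT_IP, PORT)
print("Life state: %s" % life.getState())  # e.g. "solitary" or "interactive"

# Basic Awareness handles looking at people and turning towards sounds.
awareness = ALProxy("ALBasicAwareness", ROBOT_IP, PORT)
awareness.setStimulusDetectionEnabled("People", True)  # face tracking
awareness.setStimulusDetectionEnabled("Sound", True)   # sound localisation
awareness.setEngagementMode("FullyEngaged")            # keep gaze on the person once found
awareness.startAwareness()
```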

All of this did get me thinking, though. Nao has an app store, which is a bit sparse at the moment, but I predict it will become more and more populated given that Aldebaran have also introduced their Atelier program. Furthermore, it reminded me of a conversation that I had at HRI’14 with Angelica Lim (who is now at Aldebaran) where we were musing about how you might get the robot to interface with the NaoStore autonomously and suggest apps for users to try. An interesting line of thought in my view.

Today I found myself pondering this a little further. The NaoStore and app arrangement for the Nao seems very much like the Apple App Store and Google Play. However, I wondered what form the apps would take. Would they be standalone pieces of software, or would they need a certain degree of integration with the other vital pieces of software on the robot (for example, user models)? Remember, we have a social robot here, one that in the future will likely have a personal social bond with you (and you with it). What might be the implications for how we design apps for social robots?

Should robot apps really take the form of individual pieces of software that act and behave very differently, and thus might change the personality/character of the robot? Should we even be able to start/stop/update apps, or should app management be something that we as users are oblivious to? The latter seems to be the setup with AskNAO at the moment, as teachers/carers have to set up a personal robot routine for each child, but it is unlikely that the child knows that this is happening in the background. To them, I suspect, it is all the same robot making the decisions. The magic spell remains intact (but child-robot interaction is nice that way)…

What happens with grown-ups though? Somehow I can see that in a perfect future, the robot would have a base “personality” or “character” of sorts that makes it unique from other robots (at least in the eyes of the users), and that it alone manages the apps that then run. You as the user could still explicitly ask for apps to be installed and query the NaoStore, but I can imagine that this would be secondary to the robot being able to recognise that downloading a certain app might be useful without explicitly being told to do so (though I recognise that app management will be critical in this case; we don’t need dormant apps taking up space). Perhaps something comes up in conversation with your robot, and it decides it would be worth getting an appropriate app (for example, you like telling and hearing jokes, so Nao downloads a jokes app so that it can spontaneously tell you jokes in the future). This is probably a long way off, and certainly needs some very clever AI and cognition on the robot’s part, not to mention many, many creases ironed out. Thus, I suspect that for the time being we will be using technology such as laptops and tablets/phones as the in-between mediums through which we manage our robots. Sadly, this sounds like our robots will be more like our phones and computers, rather than different entities altogether.
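Just to make the idea a little more concrete, here is a purely hypothetical sketch of what such a robot-driven app loop might look like. Nothing in it is a real NaoStore API; the StoreClient, detect_topics() and the app names are all invented for illustration:

```python
# Purely hypothetical sketch of the autonomous app management described above.
# StoreClient, detect_topics() and the app catalogue are invented for illustration.

def detect_topics(utterances):
    """Toy stand-in for the 'clever AI' step: keyword-spot conversation topics."""
    keywords = {"joke": "jokes", "weather": "weather"}
    return {topic for u in utterances
            for word, topic in keywords.items() if word in u.lower()}


class StoreClient(object):
    """Imagined client the robot itself would use to query and manage apps."""

    CATALOGUE = {"jokes": "jokes-pack", "weather": "weather-briefing"}

    def search(self, topic):
        return [self.CATALOGUE[topic]] if topic in self.CATALOGUE else []

    def install(self, app_id):
        print("Installing %s without being asked..." % app_id)

    def remove_dormant(self, installed, recently_used):
        # App hygiene: dormant apps shouldn't sit around taking up space.
        for app_id in installed - recently_used:
            print("Removing dormant app %s" % app_id)
            installed.discard(app_id)


def after_conversation(utterances, store, installed, recently_used):
    """The robot, not the user, decides whether a new app is worth having."""
    for topic in detect_topics(utterances):
        for app_id in store.search(topic):
            if app_id not in installed:
                store.install(app_id)
                installed.add(app_id)
    store.remove_dormant(installed, recently_used)


# Example: the user keeps telling jokes, so the robot quietly fetches a jokes app.
after_conversation(["Want to hear a joke?"], StoreClient(), set(), set())
```

The interesting design choice is that the install and clean-up decisions sit with the robot rather than the user, which is exactly where the perception questions below come in.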

To sum this all up, I guess that I am generally hypothesising that people’s perception of and attitudes towards robots that have an app store behind them might differ depending upon how apps are managed (managed by users themselves, or by the robot autonomously and unbeknownst to the user) and whether people even know of the existence of the app store… Could be some interesting experiments in there somewhere…
