
Do socially animated robots just waste energy?


So today I stumbled across this IEEE Spectrum post on Care-O-bot, mark 4. I hadn't actually come across the previous iterations of the Care-O-bot, but I must say, looking at the video of the first attempt, I doubt I'm missing out on much. Sorry Fraunhofer…

There is a link to a rather fun little promotional video of the robot, and as I watched it, I certainly felt a light-hearted joy. The music was upbeat and gently fun, the basic unfolding of the plot was touching, and there is great attention to detail in the animation of the robot. HRI lessons have been picked up well here.

The robot design is simple at heart. Slick white paint, some LEDs to provide robot-specific feedback channels, and two simple dots for eyes. There is a whole host of research from HRI demonstrating that simple, when used properly, is very effective.

I particularly liked the arm movements when the robot is rushing around to get the rose. They certainly help us empathize with the robot, no question there. However, there is a price to pay for this. Battery cell technology is certainly coming along, but for robots energy is still a very precious resource.

So, my question, and it's a philosophical one that depends on how much you buy into the H in HRI, and of course on what function the robot serves: those rushing-around arm movements clearly conveyed a robot hurrying to achieve a goal, and in turn you might infer that the goal has quite some importance. They also probably required a fair bit of power. Were they value for watts? Was it worth expending all that energy for the robot to rush around? Did they really bring something truly extra to the robot?

I don’t think that there is a right or a wrong answer, but I think that this does remind us that from a practical perspective (at least for now), energy is precious to robots, and we as robot designers must contemplate whether we are using it most effectively.

Happy musings!


Moving to Dyson, and thoughts on academic research and industry


So, it's been about 10 weeks since I joined Dyson (makers of the 360 Eye robot vacuum), having left Engineered Arts over the summer, and while the move itself was perhaps not the most opportune thing to happen at that particular point in time, I think that it has been a very good outcome in the grand scheme of things. I'm back in research, which, after a stint in a pure development role, I now realize is where my heart is. Also, I'm in a job where I can apply more of my broad skill set. Generally, I'd say that I'm happier with the direction of my life and career.

It has been about a year since I decided to move out of academia into industry, and while it has been a bit of a rollercoaster ride, I've had lots of experiences and insights that have been good food for thought and reflection. Though my time at EA was short, and rather stressful at times, I learnt some very important things about the differences in the internal functioning of small (and now larger) companies and universities, as well as the differences between robotics development and robotics research.

Running a small company is clearly very difficult, and I take my hat off to anyone who has the guts and endurance to give it a sustained go. I don't think that I have those guts (at least not now). Also, in a small company resources are stretched and managing those resources is difficult, and when you are a resource, that can be a very rocky journey indeed. That is something that I didn't really experience in academia, so getting used to it was quite a learning curve.

I think that one of the most important things that I learnt at EA was to do with software development and management. I always suspected that software development in a PhD environment followed some “bad practices”, and when I look back at how I was managing software during my PhD (it was all via Dropbox, with no version control!) I was lucky nothing went too badly wrong. What I saw at EA was far more extreme than anything I had seen previously, and it was a big eye-opener. I also paid a lot of attention to the style of (Python) coding that I saw at EA, as I was working with a couple of professional and very experienced software developers. Needless to say, I learnt a lot about software development in my time there.

Dyson is a completely different kettle of fish. For a start, it is a much larger company than EA, but still a small fish in a big ocean. Also, it is quite widely known that it is a very secretive company, and as such, I can't say much about it. However, that is part of what makes it such an exciting place to work, and also very different to the academic environment. This secrecy is a strange thing coming from academia, which traditionally is very open. It takes a little getting used to, but it is a very important aspect of the company (again, I can't say much about work).

Overall, I'm quite glad that I made the change to industry, as I've learnt a lot, and I think that I have become a better engineer and a better roboticist as a result, which is generally my goal. I'm also happy to be working on robots that are truly going into the “wild”, as I feel that I am closer to helping robots make a meaningful impact on the world – I can see the fruits of my labour in the hands of real people/users. That gives me a lot of job satisfaction.

I've always had an uneasy feeling that there is a disconnect between academic robotics research and the trajectory that it is trying to depict/push – this “all singing and dancing” robot that is inevitably coming – and how we are actually going to get there given the current state and trajectory of the (social) robotics industry. I strongly believe that we need the population to get used to the idea of sharing the world around us (physical and perhaps cyber space) with autonomous robots ASAP, before we unveil these “all singing and dancing” robots.

From what I have seen, I think that this is vital in order to promote uptake of smarter future robots (the kind that academia has in mind): if we are uneasy with robots around us, we will never accept these future robots (particularly as they will generally be larger). With that, I generally feel that there is a lack of academic HRI research that addresses issues that will impact (and help) industry in the next 5 years or so. This is the kind of time frame that will help companies move toward building the robots that academia is aiming for. Make no mistake, companies like Aldebaran, Dyson, iRobot, Samsung, Honda, EA, etc., are at the forefront and cutting edge of shaping the uptake and wide-scale perception of robots in the present, and they are holding the steering wheel that will direct the trajectory of the kinds of robots we will see in the future (based upon how people react now, not in 10 years' time).

I guess that there is perhaps a little message in all of this: if you're an academic and asked me for research advice, I'd encourage you to tackle practical issues and provide solutions that companies can pick up and run with in a fairly short time frame. The alternative is work that stays “hot and alive” in a research lab but has far less utility outside the lab space. In essence it could be collecting dust until industry is in a position to actually apply it (if it remembers and/or finds that the work was ever done).

I’m stopping here, as I’m not sure whether I’m drifting off topic from what I had in my head when I started writing this post. I do think that it captures some of my thoughts on academic research and how it applies to industry. I’ll probably mull it over a bit more, and dump my thoughts here at a later date as this is a topic I have been thinking about for a while. However, if you have an opinion on this, I’d love to hear it! Perhaps it’s a topic for the HRI conference panel?

What’s the Future of Aldebaran Robotics?


Today I came across this Rude Baguette post which discusses the current goings-on at Aldebaran Robotics, which seem to have been spurred on by the 2012 acquisition by SoftBank.

By all signs over the years, Aldebaran has been a prosperous company, gaining a firm foothold in the academic sphere of social HRI. Just look at how many research labs across the globe have adopted the Nao as the “go to” platform for their scientific endeavours. By all measures, Nao is a well-suited platform for a broad range of social HRI and Cognitive Science research. Furthermore, beyond the scientific arena, they haven't done a bad job of bringing social robots to potentially fruitful application domains such as education and autism therapy. Very noble endeavours indeed, as robots have shown considerable promise in both.

Romeo has been a weird development. It had a reasonable bit of airtime early on, but nothing ever really came out of it. Perhaps this is what Rude Baguette are really referring to when they talk of R&D that never really transformed into the game-changing vision that was behind it (and behind Aldebaran). Notably, there is also no mention of it in the blog post. I suspect Romeo is dying a quiet death, as Nao and Pepper have been the far more successful ventures and gain considerably more attention.

Then we had Pepper, and what an interesting development that was. A slick, aesthetically pleasing robot that is (was) built upon the maturing software products that have been developed over the years (NaoQi and Choregraphe) and unveiled to the world in an equally stylish manner. I even know people at some universities who were starting to plan Pepper into their research programs. Again, another bright horizon for the company, at least from an academic perspective.

However, for a little while there have been rumours about some odd goings-on in the company, accompanied by some odd external observations. For example, about a year ago there were a number of job adverts (about 30) on their website, but more recently (about 6 months ago) they all suddenly disappeared, and I understood that there was a hold on all job applications. Hiring had completely stopped (and the job site was quite slow to reflect this). This was followed by whispers to the external world that fairly recent newcomers to the company were being laid off. These whispers went further, to the point of suggesting a 20% cut in employees (apparently it was actually 25%). Something is clearly up in the company, and the Rude Baguette piece seems to confirm this on a few fronts.

So, what's happening? Is SoftBank slowly shutting Aldebaran down? Is this a case where the company has been bought purely for its assets (i.e. the people building the robots and, vitally, the technology itself) and will now have everything moved to Japan, or is it just that there is a serious misalignment in the desired future directions of the two parties? It's hard to tell, and it is all simple speculation at the moment.

For the academics at least, there is concern at the notion that Nao might disappear from the shelves in the near future! In my view (as someone who studied HRI scientifically), Nao has been a very fruitful and worthwhile tool for HRI. Not only does it provide a value-for-money, well-equipped platform, but given the numbers it has sold in, it has provided a considerable degree of standardization for researchers. Scientific findings can be replicated and insights can be utilized in a more meaningful way as a result. All very good things, generally speaking. Is this enough of an industry to keep the company afloat? I'm not business savvy enough to know that (yet), but I've heard that the answer is “no”.

Another perk has been that Nao is an attractive platform that draws in children. I consider this a vital attribute of any robot, as I argue strongly that it is in the best interests of the HRI community (and here I mean both industry and academia) that we introduce children to social robotic technology at an early stage. There are two main reasons for this. Firstly, early exposure to this kind of technology will likely go a long way to easing future integration and applications (and I'm talking 10+ years from now, when the then young adults and parents of the future will recall their experience with robots). Secondly, children's standards are low and they expect far less from a £6000 robot than adults do. Basically, child-oriented applications provide a testing ground where current baby-step advancements in social HRI technology can be explored, evaluated and matured in the slow and careful manner that is required. I consider this a stepping stone toward developing the technologies required to impress and engage adults on a few (adults) to one (robot) basis (I think that many-to-one interactions are a different kettle of fish entirely). We as a community are still working out what is what in terms of technologies and where real potential applications lie, but we clearly see that there is appeal from both adults and children in child-oriented applications. Nao (and Aldebaran) has played an important role in uncovering and establishing this and in trying to work toward making it a reality (look at the AskNAO venture).

So, for me, the news coming from Aldebaran is sad. We might be losing an important company that is seriously helping our exploration and understanding of social HRI from a scientific perspective. I also know people in the company and hope that they are coping. It all sounds rather unpleasant from the outside, so it certainly won't be pleasant on the inside!

Perhaps things will change in the wake of this rather public news, but only time will tell…

Moving on from Plymouth


So, after completing my PhD and a post-doc at Plymouth, I have decided to move on to new and exciting challenges, and I've opted to make a move to industry. From the 24th of November I will join Engineered Arts Ltd in Cornwall, where I will be working to develop the social aspects of their robots RoboThespian and SociBot.

I’ve very much enjoyed my time here in Plymouth, and I have been very fortunate and am very proud to have worked with and learned from world-class scientists. But after 6 years here, I feel that it is time to move on.

This is not the end of my links to academia, though. Engineered Arts frequent the main Human-Robot Interaction conferences to show their robots, and so I hope to attend these conferences as part of the company and catch up with friends and my network in the HRI community. So, I hope to catch you at HRI'15 in Portland!

SociBot has arrived and a little about Retro-Projected Faces!


Our lab got a new companion today in the form of a SociBot, a hand-made/assembled 3-DoF robot head with a Retro-Projected Face (RPF) sitting on top of a static torso, produced by Engineered Arts Ltd in Cornwall, UK. Setting you back about £10,000, it's an interesting piece of kit and could well be a milestone in the development toward very convincing, facially expressive robots.

SociBot

So, how do these things work? Well, it's actually really simple. You take a projector, locate it in the back of the robot's head, and project a face onto a translucent material which has the profile of a face. This way you as the user can't see into the head, but you can see the face. It's cheap, it's simple and it's becoming more popular at the moment. Retro-projected faces are not exactly a new idea in the world of social robotics, however (and, I've come to learn, they have an even longer history in the world of theatre). A few universities in Europe have explored them, with a notable early example coming from Fred Delaunay, who did his PhD work at Plymouth a couple of years ago on his LightHead robot (Fred has since gone on to pursue a startup company, Syntheligence, in Paris to commercialise his efforts, as the idea seems to have caught on quite a bit). That said, they do have many useful things to offer the world of robotics.

For example, the face has no moving parts, and thus actuator noise is less of an issue (which isn't altogether true: projectors get hot and do have a fan whirring away). Faces can also be rendered in real time (thus noiseless eye saccades, blinking and even pupil dilations are possible), and are very flexible to alterations such as blushing. You can put any face you want on SociBot, with animation possibilities where the sky's the limit. The folks who put together the software for the face system (amusingly called “InYaFace”) have also made a model of Paul Ekman's influential Facial Action Coding System (FACS), which allows the robot to make all kinds of facial expressions and lets it tie in very well with research output that uses FACS as a foundation.
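To give a flavour of what a FACS-driven face layer involves, here is a minimal sketch of the idea. The action unit numbers follow Ekman's coding, but the blend-shape names and the render call are hypothetical placeholders, not the actual InYaFace interface.

```python
# A minimal sketch of driving a rendered face through a FACS-style layer.
# The action unit (AU) numbers follow Ekman's coding, but the blend-shape
# names and the render call are hypothetical placeholders, not the actual
# InYaFace API.

AU_TO_BLENDSHAPES = {
    "AU1":  {"inner_brow_raise": 1.0},
    "AU4":  {"brow_lower": 1.0},
    "AU6":  {"cheek_raise": 1.0},
    "AU12": {"lip_corner_pull": 1.0},
    "AU15": {"lip_corner_depress": 1.0},
}

def expression_to_blendshapes(action_units):
    """Combine weighted AUs, e.g. {"AU6": 0.8, "AU12": 1.0} for a smile,
    into a single dictionary of blend-shape weights for the face renderer."""
    blend = {}
    for au, intensity in action_units.items():
        for shape, weight in AU_TO_BLENDSHAPES.get(au, {}).items():
            blend[shape] = min(1.0, blend.get(shape, 0.0) + weight * intensity)
    return blend

# A Duchenne smile is roughly AU6 + AU12.
smile = expression_to_blendshapes({"AU6": 0.8, "AU12": 1.0})
# render_face(smile)  # hypothetical call into the projection pipeline
```

The appeal of this kind of arrangement is that expressions described in the FACS literature map more or less directly onto something the face renderer can show.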

The robot itself runs Ubuntu Linux and uses lots of off-the-shelf software, such as OpenNI with the Asus Xtion Pro for body tracking and gesture recognition, and face/age/emotion detection software using the head-mounted camera as the feed source. TTS is also built in, as is speech recognition (I think), but there is no built-in mic, only a mic input port. Also, we (by that I mean Tony) bought the SociBot Mini, but there is also a larger version where you can add a touch screen/kiosk as a joint virtual space that both the robot and the user can manipulate.

SociBot can be programmed using Python and has a growing API which is constantly under development and looks promising. You also have the ability to program the robot via a web-based GUI, as Engineered Arts seem keen for the robots to be an “open” platform where you can log in remotely to control them.
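Just to illustrate the kind of workflow a remotely controllable, web-exposed robot enables, here is a rough sketch of what driving such a head over HTTP might look like. The host, endpoints and JSON fields are hypothetical placeholders, not the actual Engineered Arts API.

```python
# Rough sketch of remotely driving a web-exposed robot head over HTTP.
# The host, endpoints and JSON fields are hypothetical placeholders for
# illustration; they are not the actual Engineered Arts API.
import requests

ROBOT = "http://socibot.local:8080"  # hypothetical address of the robot's web service

def set_expression(name, intensity=1.0):
    """Ask the face renderer to show a named expression."""
    requests.post(ROBOT + "/face/expression",
                  json={"name": name, "intensity": intensity},
                  timeout=2.0)

def look_at(pan_deg, tilt_deg):
    """Point the 3-DoF head at a pan/tilt target (degrees)."""
    requests.post(ROBOT + "/head/look_at",
                  json={"pan": pan_deg, "tilt": tilt_deg},
                  timeout=2.0)

set_expression("smile", 0.8)
look_at(15.0, -5.0)
```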

Unboxing the robot

Given my recent post about the unveiling of Pepper by Aldebaran and the new Nao Evo that we got a little while ago, I was curious to see what Engineered Arts had put into the robot as a default behaviour. I must admit that I was rather pleasantly surprised. The robot appeared to be doing human tracking and seemed to look at new people when they came into view.

To make a comparison with Aldebaran, I think that what really stands out here is that it took very little effort to start playing with the different aspects of the robot. When you get a Nao, it really doesn't do that much when turned on for the first time, and there is quite a bit of setup required before you get to see some interesting stuff happen. Through the web interface, however, we were quickly able to play with the different faces and expressions that SociBot has to offer, as well as the little entertainment programs, such as the robot singing a David Bowie number and blasting out “Singing in the Rain”. Lots of laughs were had very quickly, and it was good fun exploring what you can do with the robot once it arrives.

From a product design/UX perspective, Engineered Arts have got this spot on. When you open the box of a robot, this is perhaps the most excited you will ever be about that robot, and making it easy to play with the thing straight away leaves a very good impression. Overall, some things are a bit rough around the edges, but the fact that this is a hand-made robot tells you everything that you need to know about Engineered Arts: they really do like to build robots in-house.

Drawbacks to retro-projected faces

Now, as this post is in part an overview of the current state of the art in RPF technology, I can certainly see room for improvement, as there are some notable shortcomings. Let's start with some low-hanging fruit, which comes in the form of the volume used inside the head. As the face is projected, most of the head has to be empty space in order to avoid having yourself a little shadow puppet show. This use of space is inefficient in my view, as it puts quite some limitations on where you can locate cameras. Both LightHead and SociBot have a camera mounted at the very top of the forehead. This means that you lose out on potential video quality and sensing. Robots like iCub and Romeo have cameras mounted in actuated eyes, which allows them to converge (interesting from a Developmental Robotics point of view), but also to compensate for the robot's head and body movements, providing a more stable video feed. Perhaps this is a slightly minor point as these robots are generally static in the grand scheme of things, but when they start to walk, I can see things changing quickly (in fact, this is exactly why Romeo has cameras in its eyes).

Another problem is to do with luminosity and energy dispersion. As the “screen ratio” of a modern off-the-shelf projector isn't really suitable to cover a full face at the distance required, these systems turn to specialised optics in order to increase the projection field of view. However, this comes at the cost of spreading the energy in the projection over a larger area, which results in lower overall luminosity over any given area. This is compounded when the projection hits the translucent material: the energy refracts and scatters even more, so you lose more luminosity and the image loses some sharpness. Of course, the obvious solution is to put in a more powerful projector, but this has the drawback that it will likely get hotter, thus needing more fan cooling, and with that fan running in a hollow head, the sound echoes and reverberates.
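To put rough numbers on that dispersion argument, here is a back-of-the-envelope sketch. The figures are purely illustrative guesses, not measurements from SociBot, LightHead or any particular projector.

```python
# Back-of-the-envelope illustration of why wider projection optics dim the image.
# All figures are made-up guesses for illustration; they are not measurements
# from SociBot, LightHead or any particular projector.

lumens = 200.0        # light output of a small projector (illustrative)
area_narrow = 0.01    # m^2 covered without widening optics (~10 cm x 10 cm patch)
area_face = 0.04      # m^2 once the optics spread the image over a full face mask
transmission = 0.6    # guessed fraction of light surviving the translucent material

lux_narrow = lumens / area_narrow               # ~20000 lux on the small patch
lux_face = transmission * lumens / area_face    # ~3000 lux after spreading and scattering

print("Brightness drops by a factor of about %.1f" % (lux_narrow / lux_face))
```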

Personally, I'm still waiting for flexible electronic screen technology to develop, as this will likely overcome most of these issues. If you can produce a screen that takes the shape of a face, you suddenly no longer need a projector at all. You gain back the space lost in the head, lose the currently noisy fan that echoes, and luminosity becomes less of an issue. Marry this with touch-screen technology, and perhaps actuation mounted under the screen, and I think that you have a very versatile piece of kit indeed!

Giving Nao some visual attention.


Ever since reading Cynthia Breazeal's book, “Designing Sociable Robots”, I've had this constant itch to implement her visual attention model on a robot, mainly the Nao, as there are four of them lying around in the lab these days. So, suffice to say that I've finally gotten around to scratching this particular itch, and boy does it feel good! 🙂

So, if you haven't already read this book (and if you work in social robotics, shame on you), I highly recommend it! It's full of lots of interesting insights and thoughts, and it is a must-read for any new MSc/PhD students who might be embarking on their research journeys.

To get to the point, in one of the chapters Breazeal describes the vision system running on Kismet. This is actually something that was developed by Brian Scassellati (whilst working on “Cog”, if I recall), and I must say, I think that it is a little gem (hence why I wanted to see it run on the Nao). The model is intended to make the robot attend to things that it can see in the environment (e.g. things that move, people, objects, colours, etc.) using basic visual features. It is basically a bottom-up approach to visual processing: take lots of basic, simple features and combine them “upwards” into something that is more complex.
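For the curious, the heart of the bottom-up combination is very compact. Here is a minimal sketch of the idea (my own simplification in Python, not the Kismet code): normalise each feature map, weight it, and sum everything into a single saliency map.

```python
# Minimal sketch of the bottom-up combination step (my own simplification,
# not the Kismet implementation): normalise each feature map, weight it,
# and sum everything into a single saliency map. Feature maps are assumed
# to be 2D numpy arrays (e.g. straight out of OpenCV).
import numpy as np

def combine_feature_maps(feature_maps, weights):
    """feature_maps: dict of name -> 2D float array (colour, motion, skin, ...).
    weights: dict of name -> float gain. Returns a saliency map in [0, 1]."""
    saliency = None
    for name, fmap in feature_maps.items():
        fmap = fmap.astype(np.float32)
        span = fmap.max() - fmap.min()
        if span > 0:
            fmap = (fmap - fmap.min()) / span          # normalise each map to [0, 1]
        weighted = weights.get(name, 1.0) * fmap
        saliency = weighted if saliency is None else saliency + weighted
    if saliency is not None and saliency.max() > 0:
        saliency /= saliency.max()                     # keep the result in [0, 1]
    return saliency

# e.g. saliency = combine_feature_maps(
#          {"colour": colour_map, "motion": motion_map, "face": face_map},
#          {"colour": 0.5, "motion": 1.0, "face": 1.5})
```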

I've finally implemented the model from scratch and made it run using either a desktop webcam or an Aldebaran Nao. This little personal project also has a more serious utility: I'm now beginning to build an online portfolio of my coding skills, as I have seen some employers request example code recently (and I'm currently on a job hunt). I've made two YouTube videos of the model. The first shows it running on my desktop machine in the lab, where I talk through the model and the parameters that drive it. In the second video I show the slightly adapted version running with a Nao. Here are those two videos:

Part #1

Part #2

I have to admit that there is certainly room for improvement and fine-tuning in the parameter settings, as well as for some nice extensions. For example, I had a bit of trouble as there is quite a lot of red in our office and the robot was immediately drawn to it. Either I need to change the method for attention point selection, or I need to take distance into account in some way (but there isn't an RGB-D sensor on the Nao at the moment). Currently, for attention point selection I am finding all the pixels that share the same maximum value in the saliency map and taking the centre of mass of the largest connected region of these. Alas, in the videos this was sometimes a background item…
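In code, that selection step looks roughly like the following sketch (using OpenCV's connected components for brevity, rather than my exact implementation):

```python
# Sketch of the attention point selection described above (not my exact
# implementation): take the maximally salient pixels, find the largest
# connected region among them, and return its centre of mass.
import cv2
import numpy as np

def select_attention_point(saliency):
    """saliency: 2D float array in [0, 1]. Returns (x, y) pixel coordinates or None."""
    mask = (saliency >= saliency.max()).astype(np.uint8)   # all maximally salient pixels
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n_labels <= 1:
        return None                                        # nothing salient at all
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])   # label 0 is the background
    cx, cy = centroids[largest]
    return int(cx), int(cy)
```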

Talking of possible extensions, I certainly see a lot of room for an adaptive mechanism that provides “top-down”, task-oriented control of (at least) the feature weights, as was done with Kismet. There is a small subset of parameters driving the model, and finding values that work can be a little tricky. Furthermore, I suspect that as soon as you change the setting, you will need to tweak the parameters again.

Coding this system up also made me think about the blog post I wrote about what a robot should do out of the box. I recall that the Nao was doing at least face detection and tracking. I pondered whether this kind of model would work as an out-of-the-box program. Rather than having fixed weights, the robot could have some pre-set modes (as Kismet did) and just cycle through these at different intervals. Perhaps the biggest problem would be the onboard processing that would need to happen. My program is multi-threaded (each feature map is computed in its own thread, as is the Nao motor control) and isn't exactly computationally cheap, so I can see it using quite a bit of the processing resources.
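As a flavour of that structure, here is a simplified Python sketch of the threading arrangement (my actual program uses Qt and the NaoQi C++ SDK, so this is only illustrative): each feature map is computed on its own worker thread, then the results are merged.

```python
# Simplified Python sketch of the threading structure (my actual program is
# C++/Qt with the NaoQi SDK, so this is only illustrative): each feature map
# is computed on its own worker thread, then the results are merged.
from concurrent.futures import ThreadPoolExecutor

def compute_saliency_threaded(frame, extractors, weights):
    """extractors: dict of name -> function(frame) returning a 2D feature map."""
    with ThreadPoolExecutor(max_workers=len(extractors)) as pool:
        futures = {name: pool.submit(fn, frame) for name, fn in extractors.items()}
        feature_maps = {name: fut.result() for name, fut in futures.items()}
    # combine_feature_maps is the helper from the earlier sketch
    return combine_feature_maps(feature_maps, weights)
```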

Anyway, there are lots of possibilities with this model with respect to tweaking it, extending it, and merging it with other “modules” that do other things. As such, I've made the code available to download:

Desktop + webcam version (needs Qt SDK, OpenCV libs and ArUco libs): Link

Version for the Nao (needs Qt SDK, OpenCV libs, ArUco libs and NaoQi C++ SDK, v 1.14.5 in my case): Link

Note: the NaoQi SDK isn't free. You need to be a developer, and I have access through the research projects at Plymouth University. I can't provide you with the SDK as this would go against the agreement we have with Aldebaran… Sorry… 😦

What should robots do “out of the box” in the future?


So today, after some waiting, we got our Nao Evolution robot. As you might expect, it took very little time for the scissors to come out and open the box, revealing the nice new shiny Nao robot, which looks surprisingly like our V4 Nao (it's even got the same fiery orange body “armour”). I took a little time to glance around looking for the new visible enhancements to the design, which seem to amount only to the new layout of the directional microphones in the head. It would seem that the rest of the improvements lie underneath the plastic shell. So, time to hit the power button and fire up the robot…

This is where I paid far more attention. I wanted to see what software/programs/apps Aldebaran have added to the “fresh out of the box” experience. I think that this is actually really important: when you're opening your new £5000 robot (and it doesn't have to be a robot), you really don't want the excitement and wow factor to die as soon as you realise the thing doesn't actually do anything when you turn it on. That's a real anti-climax! Booooo!

I have to say that today, when we turned on Nao Evolution, I was rather pleasantly surprised. Nao's Life was running by default, and it seemed that the robot was doing both face tracking and sound localisation out of the box. Basically, the robot looked at you and followed you with its gaze, as well as responding to sounds. However, we didn't hear anything verbal, and no robotic sounds (unlike Pepper's awakening). Still, it is basic social behaviour from the robot, and already it had our roboticists enthused. Clearly Aldebaran have gotten something right! That said, there was still computer-based setup to do (giving the robot a name, a username, password, wifi/internet connection, etc.). In the future, it would be nice to see some of that migrate to the social interface that the robot affords.

All of this did get me thinking, though. Nao has an app store, which is a bit sparse at the moment but which I predict will become more and more populated, given that Aldebaran have also introduced their Atelier program. Furthermore, it reminded me of a conversation that I had at HRI'14 with Angelica Lim (who is now at Aldebaran), where we were musing about how you might get the robot to interface with the NaoStore autonomously and suggest apps for users to try. An interesting line of thought in my view.

Today I found myself pondering this a little further. The NaoStore and app arrangement for the Nao seems very much like the Apple App Store and Google Play services. However, I wondered what form the apps would take. Would they be very stand-alone pieces of software, or would they need a certain degree of inherent integration with the other vital pieces of software on the robot (for example, user models)? Remember, we have a social robot here, one which in the future will likely have a personal social bond with you (and you with it). What might be the implications for how we design apps for social robots?

Should robot apps really take the form of individual pieces of software that act and behave very differently, and thus might change the personality/character of the robot? Should we even be able to start/stop/update apps, or should app management be something that we as users are oblivious to? The latter seems to be the setup with AskNAO at the moment, as teachers/carers have to set up a personal robot routine for each child, but it is unlikely that the child knows that this is happening in the background. To them, I suspect that it is all the same robot making the decisions. The magic spell remains intact (but child-robot interaction is nice that way)…

What happens with grown-ups, though? Somehow I can see that, in a perfect future, the robot would have a base “personality” or “character” of sorts that makes it unique among other robots (at least in the eyes of the users), and that it alone manages the apps that then run. You as the user could still explicitly ask for apps to be installed and query the NaoStore, but I can imagine that this would be secondary to the robot being able to recognise that downloading a certain app might be useful without explicitly being told to do so (though I recognise that app management will be critical in this case; we don't need dormant apps taking up space). Perhaps something comes up in conversation with your robot, and it decides it would be worth getting an appropriate app (for example, you like telling and hearing jokes, so Nao downloads a jokes app so that it can spontaneously tell you jokes in the future). This is probably a long way off, and certainly needs some very clever AI and cognition on the robot's part, not to mention many, many creases ironed out. Thus, I suspect that for the time being we will be using technology such as laptops and tablets/phones as the intermediaries through which we manage our robots. Sadly, this sounds like our robots will be more like our phones and computers, rather than different entities altogether.

To sum this all up, I guess that I am generally hypothesising that people’s perception of and attitudes towards robots that have an app store behind them might differ depending upon how apps are managed (managed by users themselves, or by the robot autonomously and unbeknownst to the user) and whether people even know of the existence of the app store… Could be some interesting experiments in there somewhere…