robots in the wild

Do socially animated robots just waste energy?


So today I stumbled across this IEEE Spectrum post on the Care-O-bot 4. I hadn’t actually come across the previous iterations of the Care-O-bot, but I must say, looking at the video of the first attempt, I doubt I’m missing out on much. Sorry Fraunhofer…

There is a link to a rather fun little promotional video of the robot, and as I watched it, I certainly felt a light-hearted joy. The music is upbeat and gently fun, the basic unfolding of the plot is touching, and there is great attention to detail in the animation of the robot. HRI lessons have been picked up well here.

The robot design is simple at heart: slick white paint, some LEDs to provide robot-specific feedback channels, and two simple dots for eyes. There is a whole host of HRI research demonstrating that simple, when used properly, is very effective.

I particularly liked the arm movements when the robot is rushing around to get the rose. They certainly help us empathize with the robot, no questions there. However, there is a price to pay for this. Battery cell technology is certainly coming along, but for robots, energy is still a very precious resource.
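To put some rough numbers on that, here is a back-of-envelope sketch. Every figure below is my own illustrative guess, not a Care-O-bot specification:

```python
# Back-of-envelope: what do expressive arm gestures cost in battery terms?
# All numbers below are illustrative guesses, NOT Care-O-bot specifications.

battery_capacity_wh = 400.0   # assumed on-board battery capacity (Wh)
gesture_power_w = 60.0        # assumed extra draw of animated arm motion (W)
gesture_duration_s = 5.0      # assumed length of one "rushing around" gesture

energy_per_gesture_wh = gesture_power_w * gesture_duration_s / 3600.0
gestures_per_charge = battery_capacity_wh / energy_per_gesture_wh

print(f"Energy per gesture: {energy_per_gesture_wh:.3f} Wh")
print(f"Gestures per full charge: {gestures_per_charge:.0f}")
```

Under these made-up numbers a single gesture looks cheap, but a robot that animates everything, all day long, soon eats a meaningful slice of its charge. That is exactly the trade-off in question.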

So, here is my question; it’s a philosophical one, and the answer depends on how much you buy into the H in HRI, and of course on what function the robot serves. Those rushing-around arm movements clearly conveyed a robot hurrying to achieve a goal, and in turn you might infer that the goal has quite some importance. They also probably required a fair bit of power. Were they value for watts? Was it worth expending all that energy for the robot to rush around? Did they really bring something truly extra to the robot?

I don’t think that there is a right or a wrong answer, but I think that this does remind us that from a practical perspective (at least for now), energy is precious to robots, and we as robot designers must contemplate whether we are using it most effectively.

Happy musings!


IET Talk on the Dyson 360 Eye


Here’s an interesting video from an IET event held in Bristol in early October. Mike Aldred talks through the story and development behind the 360 Eye, and highlights lots of interesting things about developing robots from a company perspective.

Mike does a great job of illustrating some of the differences between the conceptual problems that academia addresses and the real challenges faced when applying some of those solutions to real-world problems. It makes me want to point to my last post again. Ask yourself: how can you apply your research to the real world? It is worth thinking about!

Watch the video here.

What’s the Future of Aldebaran Robotics?


Today I came across this Rude Baguette post discussing the current goings-on at Aldebaran Robotics, which seem to have been spurred on by the 2012 acquisition by SoftBank.

By all signs over the years, Aldebaran has been a prosperous company, gaining a firm foothold in the academic sphere of social HRI. Just look at how many research labs across the globe have adopted the Nao as the “go to” platform for their scientific endeavours. By all measures, Nao is a well-suited platform for a broad range of social HRI and Cognitive Science research. Furthermore, beyond the scientific arena, they haven’t done a bad job of having a stab at bringing social robots to potentially fruitful application domains such as Education and Autism Therapy. Very noble endeavours indeed, as robots have shown considerable promise in both.

Romeo has been a weird development. It had a reasonable amount of air time early on, but nothing ever really came of it. Perhaps this is what Rude Baguette really refer to when they mention R&D that never transformed into the game-changing vision behind it (and behind Aldebaran). Notably, there is also no mention of it in the blog post. I suspect Romeo is dying a quiet death, as Nao and Pepper have been the far more successful ventures and gain considerably more attention.

Then we had Pepper, and what an interesting development that was. A slick, aesthetically pleasing robot that is (was) built upon the maturing software products developed over the years (NAOqi and Choregraphe) and unveiled to the world in an equally stylish manner. I even know people at some universities who were starting to plan Pepper into their research programmes. Again, another bright horizon for the company, at least from an academic perspective.

However, for a little while there have been rumours of odd goings-on in the company, accompanied by some odd external observations. For example, about a year ago there were a number of job adverts (about 30) on their website, but recently (about 6 months ago) they all suddenly disappeared, and I understood that there was a hold on all job applications. Hiring has completely stopped (and the job site was quite slow to reflect this). This was followed by whispers to the outside world that fairly recent newcomers to the company were being laid off. The rumours went further, to the point of a 20% cut in employees (apparently it was actually 25%). Something is clearly up in the company, and the Rude Baguette post seems to confirm this on a few fronts.

So, what’s happening? Is SoftBank slowly shutting Aldebaran down? Is this a case where the company has been bought purely for its assets (i.e. the people building the robots and, vitally, the technology itself) and everything will now move to Japan, or is there simply a serious misalignment in the desired future directions of the two parties? It’s hard to tell, and a matter of pure speculation at the moment.

For the academics at least, there is concern in the notion that Nao may disappear from the shelves in the near future! In my view (as someone who has studied HRI scientifically), Nao has been a very fruitful and worthwhile tool for HRI. Not only does it provide a value-for-money, well-equipped platform, but given the numbers it has sold in, it has provided a considerable degree of standardization for researchers. Scientific findings can be replicated and insights can be utilized in a more meaningful way as a result. All very good things, generally speaking. Is this enough of an industry to keep the company afloat? I’m not business-savvy enough to know that (yet), but I’ve heard that the answer is “no”.

Another perk has been that Nao is an attractive platform that draws in children. I consider this a vital attribute of any robot, as I argue strongly that it is in the best interests of the HRI community (and here I mean both industry and academia) that we introduce children to social robotic technology at an early stage. There are two main reasons for this. Firstly, early exposure to this kind of technology will likely go a long way to easing future integration and applications (and I’m talking 10+ years ahead, when the then young adults and parents of the future will recall their experience with robots). Secondly, children’s standards are low: they expect far less from a £6000 robot than adults do.

Basically, child-oriented applications provide a testing ground where current baby-step advancements in social HRI technology can be explored, evaluated and matured in the slow and careful manner that is required. I consider this a stepping stone toward developing the technologies required to impress and engage adults on a few (adults) to one (robot) basis (I think that many-to-one interactions are a different kettle of fish entirely). We as a community are still working out what is what in terms of technologies and where real potential applications lie, but we clearly see that there is appeal from both adults and children in child-oriented applications. Nao (and Aldebaran) has played an important role in uncovering and establishing this, and in trying to work toward making it a reality (look at the AskNAO venture).

So, for me, the news coming from Aldebaran is sad. We might be losing an important company that is seriously helping our exploration and understanding of social HRI from a scientific perspective. I also know people in the company and hope that they are coping. It all sounds rather unpleasant from the outside, so it certainly won’t be pleasant on the inside!

Perhaps things will change in the wake of this rather public news, but only time will tell…

Sandtray Deployed to a Local School!


Over the last few months the Plymouth Node of ALIZ-E has been working hard toward a rather large experiment that got underway at the start of the week. Last month we went to the Natural History Museum to showcase some of Plymouth University’s robotics research for Universities Week. That was used as a chance to give our system a hard beta test, which generally went very well. We identified some things that needed to be changed and made more robust, but all in all, we were very happy with the outcome.

After two more weeks of development and ironing out creases in the code, we finally deployed our two systems to a local Plymouth primary school on Monday morning this week, where they will stay in two different classrooms for two weeks. You can see the setup below.

Paul playing with the Sandtray

So, the system consists of a 27-inch all-in-one touchscreen computer, which essentially provides a shared, malleable input space for both the human and the robot. The Nao stands/kneels on one side of the screen, while the human is located on the other side, facing the robot. Behind the Nao is a Kinect sensor that faces the human and can see just over the top of the Nao’s head.

Basically, on the touchscreen we run programs that allow the children to play a non-competitive game with the robot. This is a categorisation task, where icons on screen (e.g. numbers, planets, colours) have to be moved to a particular location on the screen in order to be categorised in some way (e.g. odd or even numbers, numbers that are/are not part of the 4 times table, planets with and without moons). By keeping the basic underlying task simple, it is easy to change the icons and the categories to produce a wide repertoire of different games that are easily accessible to young children. As part of this game experience, each child in the class has their own “saved account” (selected by choosing their name on screen when prompted by the robot), which allows the robot to track their progress through the various games and in turn pitch the level of difficulty accordingly. The robot takes part by moving icons on screen too, and as the game is very open-ended, turn-taking tends to emerge naturally from the process.
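To make that structure concrete, here is a minimal sketch of how such a game and the per-child accounts might be represented. The class names, fields and difficulty rule are my own inventions for illustration, not the actual ALIZ-E code:

```python
# Minimal sketch of a categorisation game and per-child "saved account".
# Class names, fields and the difficulty rule are illustrative inventions,
# not the actual ALIZ-E code.
from dataclasses import dataclass, field

@dataclass
class Icon:
    label: str      # what the child sees, e.g. "7" or "Mars"
    category: str   # the correct bin, e.g. "odd" or "has_moons"

@dataclass
class Game:
    name: str
    categories: list  # the bins shown on screen, e.g. ["odd", "even"]
    icons: list       # the icons to be sorted

@dataclass
class ChildProfile:
    name: str
    level: int = 1
    history: list = field(default_factory=list)  # (game name, score) pairs

    def record_result(self, game_name: str, score: float) -> None:
        """Log a game result and nudge the difficulty level up or down."""
        self.history.append((game_name, score))
        if score > 0.8:
            self.level += 1
        elif score < 0.4 and self.level > 1:
            self.level -= 1

# Swapping the icon set yields a new game on the same underlying task:
odd_even = Game(
    name="odd_or_even",
    categories=["odd", "even"],
    icons=[Icon(str(n), "even" if n % 2 == 0 else "odd") for n in range(1, 10)],
)
```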

The rationale for using a touchscreen is that it provides an interactive medium which is simultaneously intuitive for the human and the robot. In essence, the human is able to manipulate objects on screen in a natural manner (select them, drag and drop, etc.) as well as using similar gestures to those used with tablets and such devices (e.g. swipe, pinch to zoom, etc.). On the robot’s side, since the touchscreen is itself a computer, communication between it and the robot is very simple. Furthermore, when you present something on screen, the robot is able to gain lots of knowledge about the objects being manipulated (and how) that might otherwise be very difficult to obtain in the physical world. By providing a shared, malleable space that is virtual rather than physical, you overcome many of the limitations in both the Nao’s ability to manipulate real objects and its perceptual abilities.
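As an illustration of how simple that screen-to-robot communication can be, here is a sketch in which the touchscreen application streams icon-manipulation events to the robot controller. The post doesn’t describe the actual middleware, so the JSON-over-UDP scheme, the address and the message fields are all assumptions:

```python
# Sketch: the touchscreen application tells the robot controller what is
# being manipulated on screen. The real middleware isn't named in the post,
# so the JSON-over-UDP scheme, address and message fields are assumptions.
import json
import socket

ROBOT_CONTROLLER = ("127.0.0.1", 5005)  # hypothetical address of the robot side

def send_icon_event(sock: socket.socket, icon: str, x: float, y: float, event: str) -> None:
    """Report that an on-screen icon was touched, dragged or dropped."""
    message = {"icon": icon, "x": x, "y": y, "event": event}
    sock.sendto(json.dumps(message).encode("utf-8"), ROBOT_CONTROLLER)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_icon_event(sock, "Mars", 0.42, 0.17, "drag")  # normalised screen coordinates
send_icon_event(sock, "Mars", 0.90, 0.15, "drop")  # dropped into a category bin
```

Because every manipulation arrives as an explicit event, the robot knows which object moved, where, and how, with none of the vision problems it would face tracking physical objects.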

We use a Kinect for a similar reason. The Kinect is used to estimate head pose, and thus to determine whether the human is looking at the screen, at the robot, or somewhere else. This allows us, in part, to drive the gaze behaviour of the robot. For example, if the human is looking at the screen, then the robot can establish joint attention by doing the same, and if the human is looking at the robot, then the robot knows that it can make eye contact. It is useful to have the Kinect act as a “fly in the corner” because some of these estimates are difficult to obtain using the built-in cameras on the Nao, particularly as they move as the head gazes around.
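A minimal sketch of such a gaze policy might look like the following; the angle thresholds and target names are illustrative assumptions rather than our actual implementation:

```python
# Sketch of a gaze policy driven by Kinect head-pose estimates.
# Angle thresholds and target names are illustrative assumptions.

SCREEN, ROBOT, ELSEWHERE = "screen", "robot", "elsewhere"

def classify_attention(yaw_deg: float, pitch_deg: float) -> str:
    """Crudely map the child's estimated head pose to an attention target."""
    if pitch_deg < -20.0:                              # head tilted down at the screen
        return SCREEN
    if abs(yaw_deg) < 15.0 and abs(pitch_deg) < 10.0:  # roughly facing the robot
        return ROBOT
    return ELSEWHERE

def choose_gaze_target(attention: str) -> str:
    """Pick where the robot should look given the child's attention."""
    if attention == SCREEN:
        return "screen"   # look at the screen too: joint attention
    return "child_face"   # eye contact, or an attempt to re-engage

print(choose_gaze_target(classify_attention(yaw_deg=3.0, pitch_deg=-30.0)))  # -> screen
```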

We’re keeping our fingers crossed that we aren’t bitten too hard by the “Demo Effect” (whereby a system that works perfectly in the lab decides to break down catastrophically as soon as it is shown to real people…). Doubtless we will seek to publish our results from the experiment in due course (so sorry for not detailing the experiment itself here), so watch this space… 😉

Showcasing ALIZ-E at the Natural History Museum in London.


From the 9th to the 13th of June 2014, the Natural History Museum is hosting “Universities Week”, where many UK universities will be showing their research to the general public. Along with my colleagues Paul Baxter, Joachim De Greeff, James Kennedy, Emily Ashurst and Tony Belpaeme, I will be demonstrating our “Sandtray setup” with the Nao. This is one of the main components of the ALIZ-E integrated system.

Sandtray at the Natural History Museum

Do pop along if you can and see what the wonderful world of robotics (and ALIZ-E, of course) has to offer! 😀

P.S. Thursday the 12th will be a late evening, open until 22:30.