
Moving to Dyson, and thoughts on academic research and industry


So, it’s been about 10 weeks since I joined Dyson (makers of the 360 Eye robot vacuum), having left Engineered Arts over the summer, and while the move itself was perhaps not the most opportune thing to happen at that particular point in time, I think that it has been a very good outcome in the grand scheme of things. I’m back in Research, which, after a stint in a pure development role, I now realize is where my heart is. Also, I’m in a job where I can apply more of my broad skill set. Generally, I’d say that I’m happier with the direction of my life and career.

It has been about a year since I decided to move out of academia into industry, and while it has been a bit of a rollercoaster ride, I’ve had lots of experiences and insights that have been good food for thought and reflection. Though my time at EA was short, and rather stressful at times, I learnt some very important things about the differences in the internal functioning of small (and now larger) companies and universities, as well as the differences between robotics development and robotics research.

Running a small company is clearly very difficult, and I take my hat off to anyone who has the guts and endurance to give it a sustained go. I don’t think that I have those guts (at least not now). Also, in a small company resources are stretched and managing those resources is difficult, and when you are a resource, that can be a very rocky journey indeed. That is something I never really experienced in academia, so getting used to it was quite a learning curve.

I think that one of the most important things I learnt at EA was to do with software development and management. I always suspected that software development in a PhD environment followed some “bad practices”, and when I look back at how I was managing software during my PhD (it was all via Dropbox, with no version control!) I was lucky nothing went too badly wrong. What I saw at EA was far more extreme than anything I had seen previously, and it was a big eye opener. I also paid a lot of attention to the style of (Python) coding I saw at EA, as I was working with a couple of professional and very experienced software developers. Needless to say, I learnt a lot about software development in my time there.

Dyson is a completely different kettle of fish. For a start, it is a much larger company than EA, but still a small fish in a big ocean. It is also quite widely known to be a very secretive company, and as such, I can’t say much about it. However, that is part of what makes it such an exciting place to work, and also very different to the academic environment. The secrecy is a strange thing coming from academia, which traditionally is very open. It takes a little getting used to, but it is a very important aspect of the company (again, I can’t say much about work).

Overall, I’m quite glad that I made the change to industry, as I’ve learnt a lot, and I think that I have become a better engineer and a better roboticist as a result, which is generally my goal. I’m also happy to be working with robots that are truly going into the “wild”, as I feel that I am closer to helping robots make a meaningful impact on the world – I can see the fruits of my labour in the hands of real people/users. That gives me a lot of job satisfaction.

I’ve always had an uneasy feeling that there is a disconnect between academic robotics research and the trajectory it is trying to depict/push – this “all singing and dancing” robot that is inevitably coming – and how we are actually going to get there given the current state of the (social) robotics industry and its current trajectory. I strongly believe that we need the population to get used to the idea of sharing the world around us (physical and perhaps cyber space) with autonomous robots ASAP, before we unveil these “all singing and dancing” robots.

From what I have seen, I think that this is vital in order to promote uptake of smarter future robots (the kind that academia has in mind) – if we are uneasy with robots around us, we will never accept these future robots (particularly as they will generally be larger). With that, I feel that there is a lack of academic HRI research addressing issues that will impact (and help) industry in the next 5 years or so. This is the kind of time frame that will help companies move toward building the robots that academia is aiming for. Make no mistake, companies like Aldebaran, Dyson, iRobot, Samsung, Honda, EA, etc., are at the cutting edge of shaping the uptake and wide-scale perception of robots in the present, and they are holding the steering wheel that will direct the trajectory of the kinds of robots we will see in the future (based upon how people react now, not in 10 years’ time).

I guess there is perhaps a little message in all of this – if you’re an academic and asked me for research advice, I’d encourage you to tackle practical issues and provide solutions that companies can pick up and run with in a fairly short time frame. The alternative is work that stays “hot and alive” in a research lab, but has far less utility outside the lab space. In essence it could be collecting dust until industry is in a position to actually apply it (if anyone remembers, or finds out, that the work was ever done).

I’m stopping here, as I’m not sure whether I’m drifting off topic from what I had in my head when I started writing this post. I do think that it captures some of my thoughts on academic research and how it applies to industry. I’ll probably mull it over a bit more, and dump my thoughts here at a later date as this is a topic I have been thinking about for a while. However, if you have an opinion on this, I’d love to hear it! Perhaps it’s a topic for the HRI conference panel?


Moving on from Plymouth


So, after completing my PhD and a Post-Doc at Plymouth, I have decided to move on to new and exciting challenges, and I’ve opted to make the move to industry. From the 24th of November I will join Engineered Arts Ltd in Cornwall, where I will be working to develop the social aspects of their robots, RoboThespian and SociBot.

I’ve very much enjoyed my time here in Plymouth, and I have been very fortunate and am very proud to have worked with and learned from world-class scientists. But after 6 years here, I feel that it is time to move on.

This is not the end of my links to academia, though. Engineered Arts frequent the main Human-Robot Interaction conferences to show their robots, so I hope to attend these conferences as part of the company and catch up with friends and my network in the HRI community. So, hope to catch you guys at HRI’15 in Portland!

SociBot has arrived and a little about Retro-Projected Faces!


Our lab got a new companion today in the form of a SociBot, a hand-made/assembled 3-DoF robot head with a Retro-Projected Face (RPF) sitting on top of a static torso, produced by Engineered Arts Ltd in Cornwall, UK. Setting you back about £10,000, it’s an interesting piece of kit and could well be a milestone on the path toward very convincing, facially expressive robots.

SociBot

So, how do these things work? Well, it’s actually really simple. You take a projector, locate it in the back of the robot’s head and project a face onto a translucent material which has the profile of a face. This way you as the user can’t see into the head, but you can see the face. It’s cheap, it’s simple and it’s becoming more popular at the moment. Retro-projected faces are not exactly a new idea in the world of social robotics, however (and they have an even longer history in the world of theatre, I’ve come to learn). A few universities in Europe have been exploring them, with a notable early example coming from Fred Delaunay, who did his PhD work at Plymouth a couple of years ago on his LightHead robot (Fred has since gone on to found a startup company, Syntheligence, in Paris to commercialise his efforts, as the idea seems to have caught on quite a bit). That said, they do have many useful things to offer the world of robotics.

For example, the face has no moving parts, and thus actuator noise is less of an issue (which isn’t altogether true; projectors get hot and do have a fan whirring away). Faces can also be rendered in real time (thus noiseless eye saccades, blinking and even pupil dilation are possible), and are very flexible to alterations such as blushing. You can put any face you want on SociBot, with animation possibilities where the sky’s the limit. The folks who put together the software for the face system (amusingly called “InYaFace”) have also made a model of Paul Ekman’s influential Facial Action Coding System (FACS), which allows the robot to make all kinds of facial expressions and ties in very well with research output that uses FACS as a foundation.
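To give a feel for what a FACS-driven face interface involves, here is a minimal sketch of blending action-unit intensities into expression frames. The InYaFace software isn’t public, so every name and number below is illustrative rather than Engineered Arts’ actual API.

```python
# Hypothetical sketch of a FACS-style face interface; the real InYaFace API
# is not public, so all names here are illustrative, not Engineered Arts' own.

# A facial expression as a dict of FACS Action Units -> intensity (0.0 to 1.0)
SURPRISE = {
    "AU1": 0.8,   # inner brow raiser
    "AU2": 0.7,   # outer brow raiser
    "AU5": 0.6,   # upper lid raiser
    "AU26": 0.5,  # jaw drop
}

def blend(expr_a, expr_b, t):
    """Linearly interpolate between two expressions, t in [0, 1]."""
    units = set(expr_a) | set(expr_b)
    return {u: (1 - t) * expr_a.get(u, 0.0) + t * expr_b.get(u, 0.0) for u in units}

# Fade from a neutral face into surprise over a few animation steps
neutral = {}
for step in range(5):
    frame = blend(neutral, SURPRISE, step / 4)
    print(frame)  # in practice this frame would be handed to the face renderer
```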

The robot itself runs Ubuntu Linux and uses lots of off-the-shelf software, such as OpenNI with the Asus Xtion Pro to do body tracking and gesture recognition, and face/age/emotion detection software using the head-mounted camera as the feed source. TTS is also built in, as is speech recognition (I think), but there is no built-in mic, only a mic input port. Also, we (by that I mean Tony) bought the SociBot Mini, but there is also a larger version where you can add a touch screen/kiosk as a joint virtual space that both the robot and the user can manipulate.
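SociBot’s own tracking pipeline isn’t exposed, but for a sense of the raw feed that this kind of body tracking consumes, here is a minimal sketch of grabbing depth frames from an Asus Xtion using the OpenNI2 Python bindings (the primesense package) – this is not SociBot’s code, just an assumption-laden illustration.

```python
# Minimal sketch of grabbing depth frames from an Asus Xtion via OpenNI2
# (pip install primesense; OpenNI2 runtime must be installed). Not SociBot's
# own tracking code, just the kind of sensor feed a body tracker consumes.
from primesense import openni2

openni2.initialize()                      # load the OpenNI2 runtime
dev = openni2.Device.open_any()           # first attached depth camera
depth_stream = dev.create_depth_stream()
depth_stream.start()

try:
    frame = depth_stream.read_frame()     # one depth frame
    data = frame.get_buffer_as_uint16()   # raw depth values in millimetres
    print(frame.width, frame.height, len(data))
finally:
    depth_stream.stop()
    openni2.unload()
```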

SociBot can be programmed using Python and has a growing API that is constantly under development and looks promising. You can also program the robot via a web-based GUI, as Engineered Arts seem keen to have the robots be an “open” platform where you can log in remotely to control them.
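Just to illustrate the shape of that “log in remotely and control it” idea, here is a purely hypothetical sketch of driving a robot over HTTP from Python. None of the endpoints, fields or addresses below come from SociBot’s actual API; they are assumptions for illustration only.

```python
# Purely hypothetical sketch of remote control over a web interface; the
# actual SociBot endpoints and payloads are not documented here, so every
# URL and field below is an assumption for illustration only.
import requests

ROBOT = "http://socibot.local:8080"   # hypothetical address of the robot

def say(text):
    """Ask the robot to speak via a hypothetical TTS endpoint."""
    requests.post(f"{ROBOT}/api/tts", json={"text": text}, timeout=5)

def set_expression(name, intensity=1.0):
    """Switch the projected face to a named expression (hypothetical endpoint)."""
    requests.post(f"{ROBOT}/api/face/expression",
                  json={"name": name, "intensity": intensity}, timeout=5)

if __name__ == "__main__":
    set_expression("smile", 0.8)
    say("Hello, welcome to the lab!")
```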

Unboxing the robot

Given my recent post about the unveiling of Pepper by Aldebaran and the new Nao Evo that we got a little while ago, I was curious to see what Engineered Arts had put into the robot as a default behaviour. I must admit that I was rather pleasantly surprised. The robot appeared to be doing human tracking and seemed to look at new people when they came into view.

To make a comparison with Aldebaran, I think what really stands out here is that it took very little effort to start playing with the different aspects of the robot. When you get a Nao, it really doesn’t do much when turned on for the first time, and there is quite a bit of setup required before you get to see some interesting stuff happen. Through the web interface, however, we were quickly able to play with the different faces and expressions that SociBot has to offer, as well as the little entertainment programs, such as the robot singing a David Bowie number and blasting out “Singin’ in the Rain”. Lots of laughs were had very quickly, and it was good fun exploring what you can do with the robot once it arrives.

From a product design / UX perspective, Engineered Arts have got this spot on. When you open the box of a robot, this is perhaps the most excited you will ever be about that robot, and making it easy to play with the thing right away leaves a very good impression. Some things are a bit rough around the edges, but the fact that this is a hand-made robot tells you everything you need to know about Engineered Arts: they really do like to build robots in house.

Drawbacks to retro-projected faces

Now that this post is in part an overview of the current state of the art in RPF technology, I can certainly see room for improvement, as there are some notable shortcomings. Let’s start with some low-hanging fruit, which comes in the form of the volume used. As the face is projected, most of the head has to be empty space in order to avoid giving yourself a little shadow puppet show. This use of space is inefficient in my view, as it puts quite a few limitations on where you can locate cameras. Both LightHead and SociBot have a camera mounted at the very top of the forehead. This means that you lose out on potential video quality and sensing. Robots like iCub and Romeo have cameras mounted in actuated eyes, which allows them to converge (interesting from a Developmental Robotics point of view), but also to compensate for the robot’s head and body movements, providing a more stable video feed. Perhaps this is a slightly minor point as these robots are generally static, but in the grand scheme of things, when they start to walk, I can see things changing quickly (in fact, this is exactly why Romeo has cameras in the eyes).

Another problem is to do with luminosity and energy dispersion. As the “screen ratio” of a modern off-the-shelf projector isn’t really suitable to cover a full face at the distance required, these systems turn to specialised optics in order to increase the projection field of view. However, this comes at the cost of spreading the energy of the projection over a larger area, which results in a lower luminosity over any given patch. This worsens further when the projection hits the translucent material: the energy refracts and scatters even more, which means you lose more luminosity and the image loses some sharpness. Of course the obvious solution is to put in a more powerful projector, but this has the drawback that it will likely get hotter, thus needing more fan cooling, and with that fan running in a hollow head, the sound echoes and reverberates.
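To make the energy-spreading point concrete, here is a back-of-the-envelope sketch with made-up numbers (neither the projector’s lumen output nor the projection sizes are specified anywhere, so both are assumptions): average illuminance is just luminous flux divided by projected area.

```python
# Back-of-the-envelope illustration (made-up numbers) of why widening the
# projection spreads the light thinner: illuminance = luminous flux / area.
import math

lumens = 200.0          # assumed projector output in lumens

def illuminance(lumens, radius_m):
    """Average illuminance (lux) over a circular projection of given radius."""
    area = math.pi * radius_m ** 2
    return lumens / area

narrow = illuminance(lumens, 0.05)   # ~10 cm wide patch, no wide-angle optics
wide = illuminance(lumens, 0.10)     # optics doubling the radius to cover a face

print(f"narrow: {narrow:.0f} lux, wide: {wide:.0f} lux")
# Doubling the linear size quarters the brightness per unit area,
# before any further losses from scattering in the translucent mask.
```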

Personally, I’m still waiting for flexible electronic screen technology to mature, as this would likely overcome most of these issues. If you can produce a screen that takes the shape of a face, you suddenly no longer need a projector at all. You gain back the space lost in the head, lose the noisy fan that echoes, and luminosity becomes less of an issue. Marry this with touch-screen technology, and perhaps actuation mounted under the screen, and I think you have a very versatile piece of kit indeed!