Nuts and Bolts

A Model of Visual Attention for Social Robots (Code Available):
This is a from-scratch implementation of the model of Visual Attention for Social Robots developed and described by Cynthia Breazeal in her book, Designing Sociable Robots. I developed this with the Aldebaran Nao as the target platform in mind; however, I have made two implementations: one for a standard PC/laptop, which uses only free, open-source tools, and a second that is specific to the Nao, as it makes use of the NaoQi SDK (v.1.14.5), but is essentially a slightly extended version of the first piece of code.

I’ve put together two videos that talk through the model. In the first video I talk about the motivation for the model, why Visual Attention is important, and the parameters that drive it. In the second video I show the model running with the Nao and demonstrate some of the functionality and potential it has.

Video #1: Model running on a standard PC.

Video #2: Model running on a standard PC using the Nao as a camera source.

Now for some technical information, as this page is part of my CV portfolio. These programs are written completely in C++ and from scratch. I use the Qt SDK to handle the GUI side of things, the multi-threading and the data flow (which I do with the signals/slots functionality). The OpenCV libraries are used to grab images and do the vision processing. The ArUco library is used to facilitate the AR code tracking. Finally, the NaoQi SDK is used to allow the Nao to be used as a camera source and to control its motors. Each feature map runs in its own thread, which means that all of the features are computed in parallel as soon as a new image is received from the camera. Once that is done, producing the Saliency Map and selecting the Attention Point is a serial process. Finally, the motor position is updated if need be.
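
To give a flavour of how this fits together, here is a minimal, illustrative sketch (not the actual project code) of the threading pattern just described: a feature-map worker living in its own QThread, fed frames through queued signal/slot connections, and a combiner that performs the serial saliency and attention-point stage. The class names, the choice of feature, and the weighting are all placeholders; in the full model there would be one such worker per feature map, all connected to the same combiner.

    #include <QCoreApplication>
    #include <QThread>
    #include <QObject>
    #include <opencv2/opencv.hpp>

    Q_DECLARE_METATYPE(cv::Mat)

    class FeatureMapWorker : public QObject {
        Q_OBJECT
    public slots:
        // Runs in this worker's own thread whenever a new camera frame arrives.
        void onNewFrame(const cv::Mat &frame) {
            cv::Mat map;
            cv::cvtColor(frame, map, cv::COLOR_BGR2GRAY); // stand-in for a real feature computation
            emit mapReady(map);
        }
    signals:
        void mapReady(const cv::Mat &featureMap);
    };

    class SaliencyCombiner : public QObject {
        Q_OBJECT
    public slots:
        // Serial stage: accumulate weighted feature maps, then pick the attention point.
        void onMapReady(const cv::Mat &featureMap) {
            cv::Mat f;
            featureMap.convertTo(f, CV_32F);
            if (saliency_.empty())
                saliency_ = cv::Mat::zeros(f.size(), CV_32F);
            saliency_ += 0.5 * f; // the per-map gain would be one of the model's parameters

            double maxVal = 0.0;
            cv::Point attentionPoint;
            cv::minMaxLoc(saliency_, 0, &maxVal, 0, &attentionPoint);
            // attentionPoint would then drive the gaze (simulated, or the Nao's head motors).
        }
    private:
        cv::Mat saliency_;
    };

    int main(int argc, char *argv[]) {
        QCoreApplication app(argc, argv);
        qRegisterMetaType<cv::Mat>("cv::Mat"); // lets cv::Mat cross thread boundaries in queued connections

        QThread workerThread;
        FeatureMapWorker worker;
        SaliencyCombiner combiner;
        worker.moveToThread(&workerThread);
        QObject::connect(&worker, SIGNAL(mapReady(cv::Mat)),
                         &combiner, SLOT(onMapReady(cv::Mat)));
        workerThread.start();

        // A camera grabber (cv::VideoCapture, or the NaoQi video proxy in the Nao
        // version) would feed frames into FeatureMapWorker::onNewFrame from here.
        return app.exec();
    }
    // Note: with Q_OBJECT classes in a single .cpp, moc must be run on this file.

Because the worker lives in its own thread and the combiner lives in the main thread, the connection above is queued, which is what gives the parallel-then-serial behaviour described in the paragraph before it.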

You can download the two versions of the code:
Desktop version
Nao-specific version

Note: I also wrote a little blog post about this model and implementation.

Creating an online Nao ChatBot:
Here is a little video of a simple ChatBot prototype that I hacked together for the Nao using Urbi (running on the Nao) and Python (running on a remote PC). The Urbi code was used to capture the utterances made by the human, and the Python code on the laptop was used to pass the utterances to Google ASR, obtain the recognised text and pass that to an online ChatBot (IzarFree). The response from the ChatBot was then sent back to the Nao and vocalised using the onboard TTS engine (again via Urbi).
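
Purely to make the message flow concrete, here is a rough sketch of one turn of the interaction loop. The real prototype was split between an Urbi script on the robot and a Python script on the PC; this stand-alone C++ skeleton is only illustrative, and every function in it is a hypothetical placeholder rather than a real API (Google's ASR and IzarFree were reached over the network, and the Nao side was handled by Urbi).

    #include <iostream>
    #include <string>

    // Placeholder stubs -- in the real system each of these was a network call or Urbi message.
    std::string receiveUtteranceFromNao() { return "recorded audio"; }                   // Urbi captures the utterance
    std::string googleSpeechToText(const std::string &audio) { return "hello robot"; }   // Google ASR
    std::string chatBotReply(const std::string &text) { return "hello human"; }          // online ChatBot (IzarFree)
    void sendToNaoForTTS(const std::string &reply) { std::cout << reply << std::endl; }  // onboard TTS via Urbi

    int main() {
        // One turn of the interaction loop: listen, recognise, reply, speak.
        const std::string audio = receiveUtteranceFromNao();
        const std::string text  = googleSpeechToText(audio);
        const std::string reply = chatBotReply(text);
        sendToNaoForTTS(reply);
        return 0;
    }

Each stage is a simple, replaceable step, which is what made it easy to see that the weak link was the ChatBot stage rather than the speech recognition.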

The idea behind the system was to have something simple that could sustain a spoken interaction and explore how vocal interjections and vocalisations made by the robot impact the overall interaction. Alas, while I was impressed with the overall performance of the speech recognition (which worked well even in rather noisy environments), the ChatBot did not work well and often resulted in short, “circular” conversations (often changing subject or simply not making sense) rather than longer “linear” conversations where the focus could remain on a single subject for a few minutes, if not longer.

Conclusion: the ASR wasn’t bad at all, but the Dialogue Management, Natural Language Understanding and Natural Language Generation were missing. For this experiment, Wizard-of-Oz is apparently the way forward…

Gripper for a Wheelchair-Mounted Robotic Arm:
This was actually my undergraduate project while I was at Middlesex University studying Product Design and Engineering. I designed and developed a gripper that exploits its mechanical design to expand the range of ways in which a wide variety of everyday objects can be manipulated.

Even today, most robot grippers that we see in the world, and especially those used with assistive/rehabilitation robotic technologies, are still rather crude in their design. Most commonly they consist of a single pair of parallel plates used to essentially “pinch” objects. There are many drawbacks to this kind of approach, and it is certainly not how humans grasp objects (and, by extension, the objects we wish to grasp are not designed to be grasped in this way). We occasionally see rather more complex grippers with more fingers, and while they provide increased dexterity, they come at the cost of more complex control. I wanted to find a solution that gained the former without needing the latter.

Here’s a little video of what I came up with. It’s made up of the slides and videos that I presented at the 20th annual Medical Engineering Student Project Competition (2008), run by the Institution of Mechanical Engineers (IMechE), where I won the category for the “Best Design and Development of a Medical Device”. A proud moment for me.
