
NLUs featured in the New Scientist magazine


I recently attended (and presented at) the HRI’14 conference in Bielefeld, which showcased many of the latest and greatest developments in the field of HRI from across the world. HRI is a highly selective conference (around a 23% acceptance rate), and while getting a full paper accepted seems to carry more weight in the US than in Europe, it’s always a good venue to meet new people and talk about the interesting research that is going on.

It turns out that this year there was a New Scientist reporter, Paul Marks, attending and keeping a close eye on the presentations, and he’s written a nice article about some of the little tricks that we roboticists can use to make our robots that little bit more life-like. He draws upon some of the work that was presented by Sean Andrist, AJung Moon and Anca Dragan on robot gaze, and also the work that I published/presented on NLUs (Non-Linguistic Utterances), where I found that people’s interpretations of NLUs are heavily guided by the context in which they are used.

Essentially, what I found was that if a robot uses the same NLU in a variety of different situations, people use the cues from within the context to help direct their interpretation of what the NLU actually meant. Moreover, if the robot were to use a variety of different NLUs in a single given context, people interpret these different NLUs in a similar way. To put it plainly, it would seem that people are less sensitive to the properties of the NLUs themselves, and instead “build a picture” of what an NLU means based upon how it has been used. This has a useful “cheap and dirty” implication for the robot designer: if you know when the robot needs to make an utterance/NLU, you don’t necessarily have to make an NLU that is specifically tailored to that context. You might well be able to get away with making any old NLU, safe in the knowledge that the human is more likely to utilise cues from the situation to guide their interpretation, rather than specific acoustic cues inherent to the NLU. This is of course not a universal truth, but I propose that it is a reasonable rule of thumb to use during basic HRI… However, I might change my view on this in the future with further experimentation… 😉

Anyway, it’s an interesting little read and well worth the 5 minutes if you have them spare…

Pimp my graph, yo!


Have you ever opened up a Journal article that you’ve been really excited to read, but after a quick flick through it been left with a slight feeling of anti-climax? Ever been to a conference and been disappointed by the talk that sounded like it was right up your street? Perhaps you have submitted a paper or article that has come back with below-average reviews? I know I have, and there is no denying that there are many, many reasons for this in each case. However, I recently stumbled upon a little strategy that can help, particularly if you are the author of an article or the person giving a presentation. I’ve unofficially coined it graph pimping, but it is actually something that I saw done by Bilge Mutlu and the students in his lab.

While I think that we like to believe that scientific research and peer review is a process that is unbiased when it comes to how things look visually, I tend to get the feeling that this is not strictly the case. In fact, it seems that people tend to pay attention and react favourably to things that are visually appealing: people like eye candy. It gives a good initial impression of things, and depending on who you’re dealing with, this may lead to a make-or-break situation, particularly if you work in Marketing. And let’s face it, as a researcher there is quite a serious element of self-promotion and marketing involved.

[Figure: the original MatLab bar plot (left) and the Adobe Illustrator “pimped” plot (right)]

So, as for pimping a graph, take the two graphs above as an example. They both show exactly the same information, however I think that most people would agree that the graph on the right is far more visually appealing than the one on the left (which is a standard MatLab bar plot). What I’ve done is take the MatLab plot, run it through Adobe Illustrator and given it a serious makeover. What is astonishing is that the effort is minimal, while the return is rather substantial in my view. By simply adding a gradient background, tracing over some lines, using a different font and adding a splash of colour, you can turn a MatLab monstrosity into a rather nice looking graph.
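
If you fancy trying this yourself, the only MatLab-side step is getting the figure out in a vector format that Illustrator can actually edit. The snippet below is a minimal sketch of that step; it’s my own workflow rather than anything from the article, and the data values and file name are purely made up.

```matlab
% Minimal sketch: export a MatLab bar plot as editable vector graphics
% so it can be restyled in Adobe Illustrator. Data and file name are
% purely illustrative.
data = [3.2 4.1 2.8 5.0];                          % hypothetical values
figure;
bar(data);
set(gca, 'Box', 'off', 'FontName', 'Helvetica');   % a little clean-up before export
saveas(gcf, 'barplot_for_illustrator', 'epsc');    % EPS keeps bars, lines and text as editable vectors
```

From there, everything else (the gradient background, the tracing, the fonts, the colour) happens in Illustrator.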

Now, I’ve not seen any scientific study that shows that this can directly impact the chances of your next Journal article getting published, but I do certainly think that pimping those figures can’t hurt your chances when it comes to review time. Same goes for your PhD Thesis… 😉

Should we trust “p-values” blindly?


This subject was brought up at a lab meeting that we had at the end of last year, and my oh my, did that talk cause a stir!

In essence, the debate that we had about p values revolves around the issue of reproducibility. There are many studies and exciting results, particularly in the field of Experimental Psychology (which much of the HRI community subscribes to), which the general research community has struggled to replicate. This is a serious cause for concern as it then becomes difficult to distinguish between results that are down to the “luck of the draw” and results that are caused by a real effect.

Recently there was an article published in Nature that sought to highlight the issues with p values. While providing an entertaining and insightful overview of the different attitudes toward p values and hypothesis testing, it seems that the main message of this article is to encourage researchers to think a little more critically about the statistical methods that they employ to test their hypotheses. Moreover, and perhaps more importantly, the article stresses that researchers should also cast a critical eye on the plausibility of the hypotheses that are actually being tested, as this can have an important impact on the statistical results that are obtained.

Prof Geoff Cumming made a very interesting YouTube video which outlines the problems with effect replication and p values. It’s well worth 10 minutes of your day (and spend another 30 minutes writing a MatLab script to replicate his results, you might need the sanity check). In what he calls “The Dance of the P Values”, Prof Cumming demonstrates that when you repeatedly sample data from two normal distributions with different means and check whether the means of those samples are significantly different, you tend to find inconsistent results regarding “significance” (the threshold for which we have rather arbitrarily set at p < 0.05). In fact, his simulations show that more than 40% of the time, your data sample will fail a significance test! Oh dear!
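
If you do fancy the sanity check, a quick reconstruction of the simulation might look something like the sketch below. This is my own rough version rather than Prof Cumming’s script, and the population means, standard deviation and sample size are assumed values chosen to give a similar picture (it needs the Statistics Toolbox for normrnd and ttest2).

```matlab
% Rough sketch of the "dance of the p values": two populations with a
% genuine difference in means, sampled and t-tested over and over.
mu1 = 50; mu2 = 60; sigma = 20;    % assumed population parameters (real effect of 10)
n = 32;                            % participants per group in each simulated experiment
nSims = 1000;
pvals = zeros(nSims, 1);
for i = 1:nSims
    groupA = normrnd(mu1, sigma, n, 1);
    groupB = normrnd(mu2, sigma, n, 1);
    [~, pvals(i)] = ttest2(groupA, groupB);   % two-sample t-test
end
fprintf('Simulated experiments with p >= 0.05: %.1f%%\n', 100 * mean(pvals >= 0.05));
```

Even though the effect is real in every single run, a large chunk of the simulated experiments come back “non-significant”, and the p values themselves bounce around wildly from run to run.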

So what is to be done? Well, the suggestions are firstly to think critically about your hypothesis. Is it a plausible hypothesis in the first place? Secondly, alter slightly the statistical results that you report: rather than standard deviations, report 95% confidence intervals, as these tell a broader story. Thirdly, when you interpret your significance tests, do not use them as the sole bit of evidence that you base your conclusions upon. Rather, treat them as just another piece of evidence to help gain insight. Finally, you might also consider changing the actual test that you use. There are other approaches to data analysis (e.g. Bayesian methods) that might provide you with an equally strong piece of evidence. Perhaps even use both.
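
On the second point, the confidence interval usually comes along for free with the test you are already running. A minimal sketch, with hypothetical data and again assuming the Statistics Toolbox:

```matlab
% Sketch: report the 95% confidence interval on the mean difference
% alongside the p value, rather than the p value on its own.
groupA = normrnd(50, 20, 32, 1);       % hypothetical data for two groups
groupB = normrnd(60, 20, 32, 1);
[~, p, ci] = ttest2(groupB, groupA);   % ci covers mean(groupB) - mean(groupA)
fprintf('Mean difference %.1f, 95%% CI [%.1f, %.1f], p = %.3f\n', ...
        mean(groupB) - mean(groupA), ci(1), ci(2), p);
```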

I think the main message is clear: do not blindly follow the result that a single significance test gives you. There is a reasonable chance that you were just lucky (or unlucky) with your population sample. Just think of all those little experiments that you decided not to follow up or publish just because your pilot study didn’t bear any significant fruit…

HRI’14 in Bielefeld.


Robin will be giving a full paper presentation on the impact that situational context has upon the interpretation of Non-Linguistic Utterances at the 9th ACM/IEEE Human-Robot Interaction Conference in Bielefeld in March. He will also be showing a poster in the poster session. Give his sleeve a tug at the conference if you have any questions!