
Tuesday, 18 March 2008

First off the production line



Here is a sneak peek of the first of many Naos to come off the production line.

View the MOV or WMV version.

Enjoy!

Saturday, 7 July 2007

Aldebaran Nao MSRS Video


For the last month I have had the pleasure of working with Aldebaran Robotics on bringing their Nao humanoid robot into the Microsoft Robotics Studio simulation engine.

The last few days of work were focused on creating a video of the Nao performing the Haka.

From the outset it was a pleasure working with such a pretty robot and a great team.

I also take my hat off to the MSRS team, who have made such a great environment for robotic simulation. Although the learning curve can be pretty steep, the end results are well worth the effort.

I hope that in the coming years Nao will play a big part in robotic soccer and many other realms.

Enjoy,

Wednesday, 4 April 2007

Humanoid Robotics

Last week I went to the JNRH (Journées Nationales de la Robotique Humanoïde) at the LIRMM in Montpellier, France, where many of the French humanoid robotics community presented their latest research. Many of the groups are involved in the JRL (Joint Robotics Laboratory), a French-Japanese effort to build control software for the HRP-2 humanoid from Kawada and the HOAP-3 from Fujitsu (pictured).

Having seen videos of ASIMO and QRIO, I had thought that the basics of locomotion were quite well understood, but the reality is far from it. In fact, most of the time the researchers work in simulation, dead scared of running their algorithms on their rare and expensive real robots. When they do work on the real robots, there is always someone holding a safety harness just in case. The videos you often see on the web are mostly pre-scripted actions resulting from extensive offline optimization. That is why ASIMO fell down the stairs - it was unable to react in real time to the situation.

There was a quick demonstration of the HOAP-3 standing on one leg. Amusingly, it had to be powered down to get it to return to a natural position, as they had only optimised the movement in one direction.

Many of the researchers are working on rather narrow aspects of control, such as controlling the ZMP (Zero Moment Point) (Pierre-Brice Wieber, PDF), landing after a jump, controlling contact forces (Christine Chevalereau, PDF), kicking, moving a box by pivoting, etc. The French approach seems to concentrate on finding theoretical perfection, which may be where an ultimate solution will be found, if one exists. But do humans really work like this? Do they really pre-plan perfection, or do they learn by trial and error and react to feedback?
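
For the curious, the ZMP intuition is easy to state: it is the point on the ground where the net moment of the contact forces has no horizontal component, and it must stay inside the foot's support polygon or the robot tips over. Here is a minimal sketch of the relation in the standard cart-table simplification (my own Python illustration, not code from any of the presentations):

# Cart-table model: the robot's CoM is treated as a point mass at
# roughly constant height z_c, so x_zmp = x_com - (z_c / g) * a_com.
# All names below are my own, for illustration only.

G = 9.81  # gravity, m/s^2

def zmp_x(x_com, a_com, z_c):
    """ZMP position along x in the cart-table approximation."""
    return x_com - (z_c / G) * a_com

# CoM 5 cm ahead of the ankle, accelerating forward at 1 m/s^2,
# hip height 0.8 m: the ZMP lands about 3 cm *behind* the ankle.
print(zmp_x(0.05, 1.0, 0.8))  # ~ -0.0315

Accelerate the CoM too hard and the ZMP slides out of the foot, which is exactly the situation a pre-scripted trajectory cannot recover from.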

The heavy theoretical approach contrasted somewhat with the work of Oussama Khatib from Stanford who, rather than pre-planning an optimal path, simply applies an external force and, as if by magic, his control system incrementally pulls the robot along a very natural path by exploiting the prime law of human effort, namely laziness. He demoed this in a simulation running in real time by moving around where he wanted the hand to be (the primary task); the movement obeyed all of the constraints (joint limits, balancing, contact points, collision avoidance) and moved towards the goal while using any available redundancy to minimize effort, maintain body symmetry, etc. (MPG) (PDF) You could see that some of the crowd were exasperated by the apparent simplicity - there were groans of 'if only we had the budget' and so on. One of the problems he mentioned is that most humanoid platforms control movement and give feedback in terms of joint angle rather than angular effort, which is what his laziness concept needs to work.
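
To make the contrast a little more concrete, here is a toy sketch of the classic redundancy-resolution idea his demo builds on - my own simplification at the velocity level (Khatib's actual controller works at the torque level in operational space). The primary task pins the hand via the Jacobian pseudoinverse, while a secondary 'laziness' term is projected into the nullspace so it relaxes the posture without disturbing the hand:

import numpy as np

def lazy_step(J, x_err, q, q_rest, k_task=1.0, k_rest=0.1):
    """One velocity-resolution step for a redundant arm.

    J      -- task Jacobian (m x n, with n > m joints)
    x_err  -- task-space error (goal hand position - current)
    q_rest -- a comfortable low-effort posture to drift toward
    """
    J_pinv = np.linalg.pinv(J)
    dq_task = J_pinv @ (k_task * x_err)    # primary: move the hand
    N = np.eye(J.shape[1]) - J_pinv @ J    # nullspace projector
    dq_lazy = N @ (k_rest * (q_rest - q))  # secondary: relax posture
    return dq_task + dq_lazy               # nullspace term never moves the hand

# Toy example: a 1-D task with 3 joints, so 2 degrees of redundancy.
J = np.array([[1.0, 0.5, 0.2]])
dq = lazy_step(J, x_err=np.array([0.1]),
               q=np.zeros(3), q_rest=np.array([0.0, 0.3, -0.3]))

The point of the projector N is that any posture preference, effort minimization or symmetry term can be dropped in without re-planning the primary task - which is why the demo looked so effortless.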


David Gouaillier from Aldebaran Robotics gave a presentation of their forthcoming Nao humanoid (not pictured) - a very pretty 23-DOF humanoid with a camera, speakers, microphones and WiFi. (See minute 36 of the MPG.) It is likely to be able to do face tracking, play music, localize sounds and download new behaviors from the web. They will be making some hand-crafted models for sale in the near future and will move to industrial production thereafter, with an expected retail price of €2000-€3000.
Although they are open to making interfaces to several robot programming environments, their initial effort is centered around the URBI (Universal Real-time Behavior Interface) language from Gostai. In contrast to MSRS, URBI uses some very simple language constructs which let you do pretty much the same thing without explicitly using ports and interleaves.
A typical little behavior (for an Aibo) might be:

whenever (sensorFront.val < threshold)
{
  robot.stop();
  neck.val = 10 time:450ms &
  leg.val = -45 speed:7.5 &
  tail.val = 14 sin:4s ampli:45;
};
'whenever' creates a persistent background task that checks for the condition and triggers the action. In this case the action stops the robot and *simultaneously* moves the neck, leg and tail according to the time, speed and oscillation parameters supplied. Not bad for just a few lines.
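
If you want a feel for what 'whenever' does under the hood, here is a rough Python/asyncio analogue of that snippet - entirely my own sketch, with hypothetical stand-in objects rather than a real Aibo or URBI API:

import asyncio

class Stub:
    """Hypothetical stand-in for a URBI device object (not a real API)."""
    def __init__(self, name):
        self.name = name
        self.val = 0.0
    async def move(self, target, seconds):
        await asyncio.sleep(seconds)  # pretend the motion takes this long
        self.val = target
        print(f"{self.name} -> {target}")

async def whenever(condition, action, poll=0.05):
    """Persistent background watcher, like URBI's 'whenever'."""
    while True:
        if condition():
            await action()
        await asyncio.sleep(poll)

sensor_front, neck, leg, tail = (Stub(n) for n in
                                 ("sensorFront", "neck", "leg", "tail"))

async def on_obstacle():
    print("robot.stop()")
    # URBI's '&' composes actions in parallel; gather() is the analogue.
    await asyncio.gather(neck.move(10, 0.45),
                         leg.move(-45, 0.30),
                         tail.move(14, 4.0))

# Runs forever, firing on_obstacle whenever the reading drops:
# asyncio.run(whenever(lambda: sensor_front.val < 0.2, on_obstacle))

The gather() call plays the role of URBI's '&': all three motions run in parallel and the handler resumes once they all finish. URBI gives you this composition as plain language syntax, which is exactly the appeal over wiring up ports and interleaves by hand.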