Friday 31 August 2007

Devantech Drivers for MSRS 1.5

After an excessive delay, I have finally put up the Devantech drivers for MSRS 1.5 on CodePlex at http://www.codeplex.com/DevantechMSRS.

Not many code changes: mostly cleaning up the code and moving it to live in /Apps instead of /Samples/Platforms, to keep it separate from the other MS samples.

One interesting change was made to SonarPan so that it can deal with multiple sonars (or pretend sonars): it takes the initial pose presented by the Generic Sonar contract, alters it on the fly relative to the servo that controls the panning, and passes this on to any consuming service. This was done so that my SLAM services can accept the now well-posed sonar readings and chuck them straight into the maps.

I've also put all my SLAM code up on CodePlex and will move it out of setup mode in the next couple of weeks. The basic algorithm and test app work fine, but a fair amount of work is still needed to make the MSRS service which wraps the DLL a pleasure to use. So far I've tested it with the Traxster's IR using the DLL, and can get nice non-SLAM maps from the simulator with the Pioneer's laser.

I have a heavy workload for the near future, so I won't be able to devote much time to continuing this - if you think you might be interested in helping finish the port of the SLAM code to MSRS, please get in touch.

Thursday 23 August 2007

Kuka Educational Framework

Kuka has released a very thorough collection of tutorials about controlling their industrial robotic arm using MSRS.

It covers many real-world robotics subjects including: forward and inverse kinematics of articulated chains, synchronized velocity / acceleration control, sensor feedback, mobile platforms and task execution architectures.

This is probably the most advanced collection of services for MSRS so far, complete with clear, detailed documentation. Well done Kuka.

Tuesday 14 August 2007

Aldebaran's Nao selected for Robocup Standard League

As reported on the Robocup 4 legged league site, the Nao has been selected for the Standard League Robocup 2008 humanoid football competition. I look forward to watching it!

And for full disclosure, I can now happily say that I have taken work with Aldebaran and hope to play a part in bringing the Nao to next year's games. May the fun begin...

Saturday 7 July 2007

Aldebaran Nao MSRS Video


For the last month I have had the pleasure to work with Aldebaran Robotics on bringing their Nao humanoid robot into the Microsoft Robotics Studio simulation engine.

The last few days of work were focused on creating a video of the Nao doing the Haka.

From the outset it was a pleasure working with such a pretty robot and a great team.

I also take my hat off to the MSRS team, who have made such a great environment for robotic simulation. Although the learning curve can be pretty steep, the end results are well worth the effort.

I hope that in the coming years Nao will play a big part in robotic soccer and many other realms.

Enjoy,

Saturday 26 May 2007

Elephant in the cupboard

The Register has a good article called "Why do robot experts build such lousy robots?" which explores some of the less talked-about aspects of this "cusp of a revolution", such as why "making and playing with robots evidently beats using and ignoring them" and Thrun's "apparently simple challenge: how a computer vision system might detect moving objects and predict their motion, a task at which frogs are still in the lead".

Most of it is bang on, but there are some hopeful signs out there in machine learning (most of which fall into the overflowing "soon to be released" category):

Sentience stereo vision SLAM
Braintech Volts-IQ *** IT'S ALIVE! ***
SRI SLAM as a web service
Incremental learning of Linear Model trees PDF


[*** Update June 12th 2007] The Volts-IQ™ Community Technical Preview is released. Looking forward to checking it out...

Wednesday 16 May 2007

Map tests with a Traxster using IR

I think most researchers would laugh at the idea. Why bother trying IR when you could use a lidar? Budget.

The Traxster is a rugged tracked robot from RoboticsConnection that often comes with a pan-tilt head containing three Sharp IR sensors that have a max range of 80cm.

The image on the left shows a 0.5cm-per-pixel map built by the Traxster panning its head, recording the distances from the three IRs and sending them to an occupancy grid using a cone model 0.1 radians wide with an obstacle depth of 2cm.
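For anyone curious what that update looks like in code, here is a minimal C# sketch of a cone-model write into a log-odds occupancy grid. The method name and update constants are mine, not the actual service's, and scanning the whole grid per reading is only to keep the example short.

// Minimal cone-model update for a log-odds occupancy grid (my own sketch -
// the real service's code and constants will differ).
static void ApplyConeReading(double[,] logOdds, double cellSize,
                             double sensorX, double sensorY, double sensorTheta,
                             double range)
{
    const double coneWidth = 0.1;       // beam width in radians
    const double obstacleDepth = 0.02;  // obstacle band in metres (2cm)
    double maxRange = range + obstacleDepth;

    // Visiting every cell is wasteful; real code would only touch cells near the cone.
    for (int gx = 0; gx < logOdds.GetLength(0); gx++)
    for (int gy = 0; gy < logOdds.GetLength(1); gy++)
    {
        double wx = (gx + 0.5) * cellSize - sensorX;   // cell centre relative to sensor
        double wy = (gy + 0.5) * cellSize - sensorY;
        double r = Math.Sqrt(wx * wx + wy * wy);
        double bearing = Math.Atan2(wy, wx) - sensorTheta;
        bearing = Math.Atan2(Math.Sin(bearing), Math.Cos(bearing)); // normalise to [-pi, pi]

        if (r > maxRange || Math.Abs(bearing) > coneWidth / 2)
            continue;                                   // cell is outside the cone

        if (r < range)
            logOdds[gx, gy] -= 0.4;                     // before the reading: probably free
        else
            logOdds[gx, gy] += 0.9;                     // within the obstacle band: occupied
    }
}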

As none of the IRs is located above the center of rotation, a little work is needed to place them correctly so that when they pan round, the three sets of data point to the right places in the map. Fortunately, I already had all that code, so it was just a question of saying distanceSensor[i].AttachToServo(x, y, theta), where x, y and theta are the offsets from the servo's center of rotation. You can see the space left by the head as it turns as a grey area within the robot's rectangle.
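The pose composition behind that call is simple enough. This is a sketch of what an AttachToServo-style offset has to compute (variable names are mine; the real implementation may differ):

// Given a sensor mounted at (offX, offY, offTheta) relative to the servo's centre
// of rotation, and the servo mounted at (servoX, servoY) on the robot, the sensor's
// pose in the robot frame for a given pan angle is just a rotation plus an offset.
static void SensorPoseInRobotFrame(
    double servoX, double servoY, double servoAngle,    // servo mount + current pan angle
    double offX, double offY, double offTheta,          // sensor offset from servo centre
    out double sensorX, out double sensorY, out double sensorTheta)
{
    double c = Math.Cos(servoAngle), s = Math.Sin(servoAngle);
    sensorX = servoX + c * offX - s * offY;
    sensorY = servoY + s * offX + c * offY;
    sensorTheta = servoAngle + offTheta;
}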

However, as with all tracked vehicles with skid steer, the odometry can be pretty misleading when skidding on the spot. The image on the left shows a 2cm per pixel map. The odometry path is shown as a blue line. You can see that odometry would have made the robot go through an obstacle, but once fed into grid slam, the estimated true position has diverged away from the odometry leading to a slightly better map.

The robot was controlled from within Microsoft Robotics Studio using a joystick, sending PID velocity commands. Only by driving very slowly to avoid skidding, and by panning the head continually, was it possible to make a consistent map. It might be possible to correct rotational error by fusing the odometry with compass data, but even this is unlikely to work on large-scale maps where the IRs often read max range. My next experiments will be with sonar.

Friday 4 May 2007

How to build a COTS PC Robot - Part 1

This first part will deal with how to fit the physical and electrical parts together to make a pile of hardware that can be called a robot because it can move, not because it will actually be intelligent or autonomous.

At the end of this part, you will have a platform that, with the help of part 2 (software / drivers), can be used as an expensive remote-controlled two-wheel car that can report on its sensors, and be in a position to have intelligence and autonomy programmed into it. It will be similar in hardware composition to the Whitebox 914, but will have cost you $4000 less. It will of course be missing a beautiful shell and the software to make it do anything. (No disrespect to Whitebox – the hardware part of this project was completed before the Whitebox, but turns out to be very similar. Their product certainly has a beautiful shell and probably has some good software. If you want a finished system, check them out.)


What makes a robot?
For the purposes of this article we will say that to be called a robot it must:
1) Be able to move using wheels or legs
2) Be able to sense its surroundings, at least a bit.
3) Be able to respond to its senses and choose a move.

There are many ways to accomplish the above, and a $10 BEAM robot may in many cases perform them more reliably and with greater fluidity. This article assumes that you want a full PC as the brain, either an existing desktop machine or a small PC that will move with the robot.

Advantages:
1) You can program with familiar languages.
2) You have access to large memory and storage.
3) You can use web cams and other commodity parts.
4) You can have access to wifi and the power of the network.

Disadvantages:
1) Complex system with many parts
2) Slow start up time.
3) Only as reliable as your software.
4) Expensive.

How do you make a PC move and sense?
These days there are many devices that can plug directly into a PC using serial or USB and give you access to a vast range of actuators and sensors. Popular examples include Phidgets, the Serializer and I2C devices. This article will concentrate on I2C devices from a UK company called Devantech, mainly because they are what I have most experience with. Although I personally think they are a good option on price, performance and ease, many other choices are possible, and your own research may lead you in another direction.

Easy option: the RD01 Drive system from Devantech. £96.91
It contains the I2C-controlled MD23 dual 3A motor controller and two gear motors with 360-step encoders, complete with wheels and mounting brackets. You could just about screw these directly onto a laptop and you would nearly have a robot. We won't be following that route.

How do you talk to it?
We will follow two possible routes:
1) The USBI2C adaptor from Devantech. £16.56
Plug one end into a USB port and plug some I2C wires into the other side. Done.
2) The RF04/CM02 from Devantech £79.88
Plug the RF04 into a desktop PC, and plug the CM02 via some I2C wires into your MD23. The RF04 speaks over radio to the CM02. This way you can use your desktop PC as the robot's brain and avoid having to buy another computer for the robot. This will be much cheaper, but communication will have more latency and it won't be as easy to stick a webcam on your robot.

Both of the above will let you speak to I2C devices through a virtual COM port. You simply send a few bytes to the COM port to tell the device to do something, or send another few bytes asking the device to send you something back. Higher level drivers exist which can do all of this low level communication for you and give you access to the devices via a web page or a service based architecture (more details in part 2).
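As a flavour of that plumbing, here is a hedged C# sketch using the standard SerialPort class. The port name, baud settings and command bytes are placeholders; take the real values from the adaptor's and device's documentation.

using System;
using System.IO.Ports;

// Open the adaptor's virtual COM port and exchange a few raw bytes.
// All values below are placeholders - check the adaptor and device datasheets.
using (var port = new SerialPort("COM4", 19200, Parity.None, 8, StopBits.Two))
{
    port.ReadTimeout = 500;
    port.Open();

    byte[] request = { 0x55, 0xE0, 0x00, 0x01 };   // placeholder command sequence
    port.Write(request, 0, request.Length);

    byte[] reply = new byte[1];
    port.Read(reply, 0, reply.Length);
    Console.WriteLine("Device replied: 0x{0:X2}", reply[0]);
}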

Other sensors:
Once you have established access from your PC to the I2C bus, you can keep on adding sensors and actuators - up to about 122 devices in theory, and even more if you use several USBI2C adaptors or I2C bus splitters, but that is plenty already.

SD21 Servo Driver £21.99. This lets you control up to 21 servos, easily enough for a pan-tilt head, a couple of arms, a simple walker, or some other custom mechanisms that you may dream up.

If you are tight on budget and have chosen not to carry the PC brain around with you, you could even use a couple of continuous rotation servos (or hacked servos) as wheels.





PC
Beware: this is a fairly full PC that, with an added screen and speakers, would make a fine home entertainment system. As the software relies on XP, we have steered away from using compact flash as the hard disk. If you already have a CD drive lying around, you could use it while installing the motherboard drivers and enough of XP to get network access, and use terminal services thereafter.

The exact part listing is not important. This list is for a mini-itx system, but an old laptop would be just as good. There are other variants with dual cores, pico-itx, or much lower power consumption – the choice is yours.



Motherboard / CPU: EPIA CN 13000G C7 Mini-ITX Board £109
Ram: 512MB PC4200 DDR2 533 DIMM £45
Hard Disk: 40GB Western Digital Scorpio 2.5" IDE 5400RPM Hard Drive - 8MB Cache £39
Slimline 24x CDROM Drive £25
Netgear 54Mbps Wireless PCI Card £27
M2-ATX 160W Vehicle DC-DC Power Supply £59
Windows XP Pro £89



Power
This depends on the autonomy you require. Obviously the more battery power you have, the longer it will be able to live without recharging, but that power comes at the cost of weight, which will slow down your robot. You need 12V for the ATX, but you may also want 6V-7.2V for powering the servos and 5V for some of the sensors. To avoid having another battery for the servos, we will use two 6V batteries in series to produce a total of 12V, and take 6V from the middle for the servos. In the end, another two batteries and an extra 5V regulator were added to supply sufficient noise-free power to all the sensors.

Chassis
You can build the chassis out of just about anything, with wood and plastic being among the easiest. Basically all you need is a platform to put the PC parts on, with a sturdy frame on which to attach the wheels. Later on you may want to attach bumpers and a higher level for more sensors.

The chassis in the pictures was made out of aluminium profiles bought from a hardware store. It can be cut with a metal bladed hacksaw with a bit of patience and the sharp corners rounded off with a file. You will also need a small drill with some metal bits and a vice to hold things in place while sawing or drilling. Many designs are possible.

The pictures show the wheels in the middle which makes it a little easier to turn on the spot, but the castors at front and back could easily make it get stuck on a small edge of carpet. Many commercial designs sensibly use a circular frame and three wheels which is much less likely to get stuck on corners of furniture and can get over bumps without becoming grounded.




All pretty obvious stuff. The chassis and the wiring take a good deal of patience, so hopefully there will be commercially available cheap kits made by someone in the near future.

Is it finished?
No, not really. I need to make some much shorter USB cables to connect the web cam and the USBI2C. I'll be adding a second webcam for some stereo vision tests, and an arm or two to make use of all the spare SD21 channels. The servos go a bit jumpy while the PC starts, so I'll need to add a few capacitors or re-route power a bit. Also the plastic head is not pretty - I'll probably make a prettier metal head when the second cam arrives.

What software is it running?
XP with MSRS and Devantech Drivers

Is it autonomous?
No, not yet. It can be controlled wirelessly via a joystick and web pages using MSRS, and can report on all its sensors. Autonomy is planned: see Grid SLAM Explorer.

Can it actually do anything?
Ha! No, not really. It could push something around, I guess. It does telepresence and could warn me about a fire or intruder, or speak RSS feeds if I really wanted it to. For the moment though I'll be concentrating on mapping then give it an arm or two.

Wednesday 2 May 2007

c# Grid SLAM explorer

This c# application was made as a test ground for a grid SLAM algorithm that I'm preparing for a Microsoft Robotics Studio service. It lets you tweak most of the parameters of grid SLAM, including motion errors, the number of sensors, their placement, and beam and cone models for each sensor. It was designed for sonars but includes presets for infrared and laser.







If you want to spend some time wondering:

"will it close the loop, will it close the loop .... "


Download the application...
Diversity Grid Slam Explorer Beta - release notes

enjoy!

Wednesday 4 April 2007

Humanoid Robotics

Last week I went to the JNRH (Journees Nationales de la Robotique Humanoide) at the LIRMM in Montpellier France where many of the French Humanoid robotics community presented their latest research. Many of the group are involved in the JRL (Joint Robotics Laboratory), a French - Japanese effort at making control software for the HRP-2 humanoid from Kawada and the HOAP-3 from Fujitsu (pictured).

From having seen videos of ASIMO and QRIO, I had thought that the basics of locomotion were quite well understood, but the reality is far from this. In fact, most of the time the researchers work in simulation, dead scared of running their algorithms on their rare and expensive real robots. When they do work on the real robots, there is always someone holding a safety harness just in case. The videos you often see on the web are mostly pre-scripted actions resulting from extensive offline optimization. That is why the ASIMO fell down the stairs - unable to react in real time to the situation.

There was a quick demonstration of the HOAP-3 standing on one leg. Amusingly, it had to be powered down to get it to return to a natural position as they had only optimised the movement in one direction.




Many of the researchers are working on rather narrow aspects of control such as controlling the ZMP (Zero Moment Point) (Pierre-Brice Wieber PDF), landing after a jump, controlling contact forces (Christine Chevalereau PDF), kicking, moving a box by pivoting, and so on. The French approach seems to concentrate on finding theoretical perfection, which may be where an ultimate solution will be found, if one is available. But do humans really work like this? Do they really pre-plan perfection, or do they learn by error and react to feedback?

The heavy theoretical approach contrasted somewhat with the work of Oussama Khatib from Stanford who, rather than pre-planning an optimal path, just applies an external force and, as if by magic, his control system incrementally pulls the robot along a very natural path by exploiting the prime law of human effort, namely laziness. He demoed this in simulation running in real time by moving around where he wanted the hand to be (the primary task); the movement obeyed all of the constraints (joint limits, balancing, contact points, collision avoidance) and moved towards the goal while using any available redundancy to minimize effort, maintain body symmetry and so on. (MPG) (PDF) You could see that some of the crowd were exasperated by the apparent simplicity - there were groans of 'if only we had the budget' etc. One of the problems he mentioned is that most humanoid platforms control movement and give feedback in terms of joint angle rather than angular effort, which is what his laziness concept needs to work.


David Gouaillier from Aldebaran Robotics gave a presentation of their forthcoming Nao humanoid (not pictured) - a very pretty 23-DOF humanoid with a camera, speakers, microphones and wifi. (See minute 36 of the MPG.) It is likely to be able to do face tracking, play music, localize sounds and download new behaviors from the web. They will be making some hand-crafted models for sale in the near future and will move to industrial production thereafter, with an expected retail price of €2000-€3000.
Although they are open to making interfaces to several robot programming environments, their initial effort is centered around the URBI (Universal Real-time Behavior Interface) language from Gostai. In contrast to MSRS, URBI uses some very simple language constructs which let you do pretty much the same thing without explicitly using ports and interleaves.
A typical little behavior (for an Aibo) might be:

whenever (sensorFront.val < threshold)
{
  robot.stop();
  neck.val = 10 time:450ms &
  leg.val = -45 speed:7.5 &
  tail.val = 14 sin:4s ampli:45;
};
'whenever' creates a persistent background task that checks for the condition and triggers the action. In this case the action stops the robot and *simultaneously* moves the neck, leg and tail according to the time, speed and oscillation parameters supplied. Not bad for just a few lines.

Tuesday 27 March 2007

Rapidly Exploring Random Trees

So you have a map, then what?

Although maps are pretty in themselves and are proof of a certain level of awareness, there comes a point when you might actually want your robot to go somewhere, and eventually do something!

There are many ways to plan a path, and some of the best results often come from the simplest approaches like 'try a straight line' or A* search, but sometimes this isn't enough to solve the problem at hand. Typical problems like a car stuck in a dead end, or a 'beetle trap', are problematic for greedy approaches, and sometimes your search space is too large to explore exhaustively.
Here, I'll look at one approach called RRT, which searches the space very quickly and can incorporate the motion constraints of the robot.
















The simplest form is a 'sharp' tree made of straight line segments.

At each step:
  • A random point is chosen in free space.
  • Find the closest existing point.
  • If the two can be connected, then connect.
  • Otherwise, move the new point to furthest safe place and connect.
  • Test for solution.
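A rough C# sketch of that loop follows. The map is abstracted into two caller-supplied delegates, isFree and canConnect, and everything else (names, back-off granularity) is illustrative rather than taken from any particular implementation.

// Basic RRT loop: grow a tree of straight-line segments towards random free points.
static List<(double x, double y, int parent)> BuildRrt(
    double startX, double startY, double goalX, double goalY,
    double mapWidth, double mapHeight, int maxIterations,
    Func<double, double, bool> isFree,
    Func<double, double, double, double, bool> canConnect)
{
    var rng = new Random();
    var tree = new List<(double x, double y, int parent)> { (startX, startY, -1) };

    for (int i = 0; i < maxIterations; i++)
    {
        // 1. Random point in free space.
        double rx, ry;
        do { rx = rng.NextDouble() * mapWidth; ry = rng.NextDouble() * mapHeight; }
        while (!isFree(rx, ry));

        // 2. Closest existing node (squared distance is enough for the comparison).
        int nearest = 0; double best = double.MaxValue;
        for (int n = 0; n < tree.Count; n++)
        {
            double dx = tree[n].x - rx, dy = tree[n].y - ry, d = dx * dx + dy * dy;
            if (d < best) { best = d; nearest = n; }
        }
        double nx = tree[nearest].x, ny = tree[nearest].y;

        // 3/4. Connect directly, or back off along the segment to the furthest safe point.
        double tx = rx, ty = ry, f = 1.0;
        while (f > 0 && !canConnect(nx, ny, tx, ty))
        {
            f -= 0.1;
            tx = nx + f * (rx - nx);
            ty = ny + f * (ry - ny);
        }
        if (f <= 0.05) continue;            // no safe segment at all - try another sample
        tree.Add((tx, ty, nearest));

        // 5. Test for solution: stop once the new node can see the goal.
        if (canConnect(tx, ty, goalX, goalY)) break;
    }
    return tree;
}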

The advantage of this type of search is that it quickly explores free space without searching everywhere, and progressively increases its coverage. Additional constraints can be added to keep the minimum distance between points above a given resolution, and to make the definition of free space include knowledge about the size of the robot.

The downside is that the result is likely to be far from optimal. To get round this, it can be iterated a few times to see if a shorter path can be found, and then smoothed by looking for shorter lines or curves between points in the path.

However, many robots can't turn sharply, or prefer smoother paths. The random search can be adapted to this with a slightly different set of steps:


  • Choose a random existing point in the tree.
  • Choose a random control. (e.g. max speed, gentle left, for 1 second)
  • If the path is safe, add this to the tree.
  • Otherwise add the longest safe path.
  • Test for solution
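The only real change from the first sketch is the expansion step. Here is an illustrative version for a differential-drive robot using a simple unicycle model; as before, isFree stands in for the map's free-space test, and none of this is from a specific library.

// Control-space expansion: pick a random node, roll a random control forward,
// and keep the longest collision-free prefix of the resulting path.
static bool TryExpand(List<(double x, double y, double theta, int parent)> tree,
                      Random rng, double maxSpeed, double maxTurnRate,
                      Func<double, double, bool> isFree)
{
    int parent = rng.Next(tree.Count);                    // random existing point
    var (x, y, theta, _) = tree[parent];

    double v = maxSpeed * rng.NextDouble();               // random forward speed
    double w = maxTurnRate * (2 * rng.NextDouble() - 1);  // random turn rate (e.g. gentle left)

    // Integrate a simple unicycle model in small steps for one second.
    double dt = 0.05, duration = 1.0;
    double safeX = x, safeY = y, safeTheta = theta;
    bool anySafe = false;
    for (double t = 0; t < duration; t += dt)
    {
        x += v * Math.Cos(theta) * dt;
        y += v * Math.Sin(theta) * dt;
        theta += w * dt;
        if (!isFree(x, y)) break;
        safeX = x; safeY = y; safeTheta = theta;
        anySafe = true;
    }

    if (anySafe)
        tree.Add((safeX, safeY, safeTheta, parent));      // add the longest safe path
    return anySafe;
}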


This method can easily be adapted to any control space, and subtle optimisations can be made, such as only allowing a certain granularity of control and keeping the tree points a certain distance apart.

The great thing about this method is that you know that the path is possible in terms of the controls of your robot. Still, the path is far from optimal, and may well need iterating, and smoothing of the path and acceleration.


By tweaking the control space to allow reverse, it can solve three-point turns and reversing out of tight corridors.


Again, using reverse frequently is not optimal, so it would be best to attach some cost to reversing and choose a less expensive path if available.

One optimisation that seems to work well, is to grow two trees at the same time, one from your current location and another from the destination.


This often succeeds at finding a solution in many fewer iterations and can help avoid the search getting stuck in beetle traps when the search resolution is too low and points near the exit of the trap are overlooked.

The same technique has been used to solve much more complex problems, where brute force is far too expensive to try.

Related links:

Animations of RRT

Overview

Steven LaValle's free book

Thursday 22 March 2007

Grid SLAM animation


This is an animation of the best aggregate map from 200 particles doing grid SLAM. It wiggles when another particle becomes the best aggregate particle. The map is 10cm per pixel, with a robot motion of 30cm per second. It is just dumbly wandering around, using a Vector Field Histogram to avoid walls - no purposeful entropy reduction.
It is using 24 simulated sonars modeled as fairly short range (200cm) and narrow beam, with medium motion noise and a beam probability model whose results are trusted to the power of 0.1 (i.e. not much, so as to encourage a smooth distribution without heavy particle duplication upon resampling - typically only 5%).
When the particles are moving into unknown territory, they diverge. When they see parts of the map a second time, they converge. As all of the particles' poses are overlaid on the current best particle's map, it can look as if they go through walls - if you were to see their own maps, you would see that they think they are in the middle of the corridor, and that the corridor is in a different place.
You can see that the particle diversity is only just enough to close the loops.
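For readers wondering what 'trusted to the power of 0.1' means in practice: each particle's weight gets multiplied by the beam model's probability raised to that exponent, which flattens the weight distribution. A paraphrase rather than the exact code:

// trust = 1.0 would mean fully believing the beam model; 0.1 barely nudges the
// weights, keeping the distribution smooth so resampling only duplicates a small
// fraction of the particles.
static double UpdateWeight(double currentWeight, double beamProbability)
{
    const double trust = 0.1;
    return currentWeight * Math.Pow(beamProbability, trust);
}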

Wednesday 21 March 2007

Grid SLAM

Lately I've been playing with a c# implementation of GridSLAM, with the aim of getting a system that could work with poor sonar data. In simulation this is starting to get good results when the model parameters are tweaked just so.


The basic idea of SLAM (Simultaneous Localization and Mapping) is, as the name implies, to make a map while guesstimating where you are in the map. The output you hope for is a good map and a good estimate of where you are (x,y,theta). The inputs you get are noisy sonar data (range, angle), and poor odometry information (bad x,y,theta from dead reckoning using wheel encoder ticks).

If you were to try to make a map using just the raw data you might get something like this:


In GridSLAM, you try to model the uncertainty of your odometry information and the uncertainty of the ranging data, and use the two together to make an estimate of your state (x, y, theta) from the correspondence between new data and your existing map.






The major tool is called a particle filter: rather than model just the best guess, you model hundreds of possible states, each with its own map. At each step, each particle is moved according to the odometry data plus added Gaussian noise, supposedly sufficient to cover the expected range of error in the odometry data.


The image above left shows 1000 particles moving to the right, with a small clockwise rotation. You will notice that the bulk of the particles stay near the middle of the distribution. This is due to an essential but dangerous step of the particle filter called re-sampling, whose aim is to keep the distribution of particles closely related to the overall probability of being true. It does this by assigning a weight (0 to 1) to each particle based on the probability of the latest move - i.e. how close it was to perfect motion. It then picks particles to be promoted into the next generation with a probability related to the weight: less likely particles die, while more likely particles are duplicated to keep the total number of particles the same.
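Stripped of the per-particle mapping, those two steps (noisy motion update, then weight-proportional resampling) look roughly like this. The structure, noise values and simple O(n²) resampling loop are mine, chosen for clarity rather than taken from the real implementation:

class Particle
{
    public double X, Y, Theta, Weight;
    public Particle Clone() => new Particle { X = X, Y = Y, Theta = Theta, Weight = Weight };
}

// One predict + resample cycle. sensorLikelihood scores how well a particle's
// pose explains the latest sensor readings.
static List<Particle> Step(List<Particle> particles, double distance, double turn,
                           Func<Particle, double> sensorLikelihood, Random rng)
{
    // Predict: move every particle by the odometry step plus Gaussian noise.
    foreach (var p in particles)
    {
        p.Theta += turn + Gauss(rng, 0.05);              // rotation noise (rad)
        double d = distance + Gauss(rng, 0.02);          // translation noise (m)
        p.X += d * Math.Cos(p.Theta);
        p.Y += d * Math.Sin(p.Theta);
        p.Weight = sensorLikelihood(p);
    }

    // Resample: promote particles with probability proportional to weight
    // (likely particles get duplicated, unlikely ones die).
    double total = particles.Sum(p => p.Weight);
    var next = new List<Particle>(particles.Count);
    for (int i = 0; i < particles.Count; i++)
    {
        double pick = rng.NextDouble() * total, acc = 0;
        foreach (var p in particles)
        {
            acc += p.Weight;
            if (acc >= pick) { next.Add(p.Clone()); break; }
        }
    }
    return next;
}

// Box-Muller draw from a zero-mean Gaussian with the given standard deviation.
static double Gauss(Random rng, double sigma) =>
    sigma * Math.Sqrt(-2 * Math.Log(1 - rng.NextDouble())) *
    Math.Cos(2 * Math.PI * rng.NextDouble());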

This is fine if there are enough particles to cover the space, but when there are too few, the resampling can favour particles that happened to move well in the last step but aren't good representations of the longer-term movement.


The image on the left shows what can happen when using only 30 particles. The resampling has killed the true distribution by duplicating the 'wrong' particles, ending up in what is called 'particle deprivation', where too many duplicates followed the wrong path and coverage of the model they were trying to approximate has been lost.
Without re-sampling (left) the distribution would have been closer to the truth. Unfortunately, because GridSLAM updates a map for each particle at each step, with limited memory and processor power it is difficult to use thousands of particles. 100-200 is more typical, so great care has to be applied in choosing when to resample.


Each time a particle is duplicated, part of the history of possible movements is destroyed along with the lost particle. In a maze with multiple levels of loops, a small loop can exaggerate confidence, losing the particle diversity needed to later close the large loop.


Much recent research has been done on ways to avoid this problem, some of which are:
  • Don't re-sample if the distribution is too focused
  • Compress the diversity of weights, so resampling is less vicious
  • Remember previous diversity and revert to it after a small loop
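One standard recipe in the grid SLAM literature for deciding when resampling is worthwhile is the 'effective sample size' test; it isn't necessarily exactly what is meant above, but it captures the same spirit of resampling sparingly. A small sketch (threshold and names are illustrative):

// Effective number of particles: if a handful of particles carry nearly all the
// weight, Neff is small and resampling is worthwhile; if the weights are still
// fairly even, skip resampling and preserve diversity.
static bool ShouldResample(IList<double> weights, double thresholdFraction = 0.5)
{
    double total = weights.Sum();
    double sumSquares = weights.Sum(w => (w / total) * (w / total));
    double nEff = 1.0 / sumSquares;
    return nEff < thresholdFraction * weights.Count;
}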

As an example, consider this robot path. It starts at the top left (1), moves to the right and around the first loop (2). Just after (2) it starts to see places it has seen before, so the uncertainty about the movement in the small loop is reduced. In this case it closed the small loop pretty well, and continues back across the top, past where it started (3) and down to the bottom left (4). From (2) to (3) it was reducing its error; then from (3) to (4) more error creeps into the distribution in preparation for closing the larger loop at (5).



The particles are shown as a red mess near (5). The graph shows how the root mean square error of the entire particle set away from perfect odometry changes over that time. Just before (5) it is high, and the current aggregate best estimate is somewhat mis-aligned, but recoverable.

A little later, it managed to recover to a decent map, but only because all the model parameters had been tweaked to make this possible. With more motion error, fewer particles or more trust in the sensor data, it would have failed spectacularly.

Many of the published papers work well due to massive overlap between the current sensor data and the existing map. By using a long-range laser with 180 data points per scan, a fairly accurate match is possible that removes much of the rotational uncertainty. Poorer systems with just a few highly noisy, wide-beam, short-range sonars are far less likely to be good at closing large loops. I'm hoping that inserting rotation estimates from image tracking and a digital compass will help alleviate some of these problems.

My next step is to port this code from a c# test app into an MSRS service so that it can be evaluated with real-world data.

Image Tracking

For the last few weeks I've been playing with image tracking algorithms. After searching around a bit I decided to start with Edward Rosten's FAST corner detector (PDF).

It seems to produce good results and has been used by Andrew Davison to do real-time monocular SLAM (i.e. with one web cam). Although Bob Mottram has already done a great c# port of Davison's code (MonoSLAM), it relies on seeing a dark rectangle of known size before starting, which didn't seem appropriate for my usage on a robot, and I wanted to get to know the algorithms at first hand.

Finding corners is pretty easy; the hard part is tracking them. The first phase is to find corners in the next frame that look similar, but this also generates a good deal of randomly placed matches.

The first method I attempted created a mean move from all the moving corners, then fine-tuned it by iteratively re-calculating the mean move, weighting each move by the probability of it being within a Gaussian distribution of the last mean move. This has the effect of ignoring outliers and often converging on the true move. In retrospect, the initial mean should have been calculated using weights based on how similar the corners looked, but it didn't seem to matter too much.
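Reconstructed in C#, the idea is just a mean that gets recomputed a few times with Gaussian weights centred on the previous estimate, so outliers fade away. This is my own reconstruction, not the original code:

// Iteratively re-weighted mean of corner displacements: moves far from the
// current mean get tiny Gaussian weights and stop influencing the estimate.
static (double dx, double dy) EstimateMeanMove(
    IList<(double dx, double dy)> moves, double sigma = 3.0, int iterations = 5)
{
    double mx = moves.Average(m => m.dx);
    double my = moves.Average(m => m.dy);

    for (int i = 0; i < iterations; i++)
    {
        double wSum = 0, sx = 0, sy = 0;
        foreach (var (dx, dy) in moves)
        {
            double d2 = (dx - mx) * (dx - mx) + (dy - my) * (dy - my);
            double w = Math.Exp(-d2 / (2 * sigma * sigma));   // Gaussian weight around the mean
            wSum += w; sx += w * dx; sy += w * dy;
        }
        if (wSum > 0) { mx = sx / wSum; my = sy / wSum; }
    }
    return (mx, my);
}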

Although this worked OK-ish, unless the camera is perfectly horizontal and all the objects in the scene are far away, the result of panning a camera really wants to be part of a sphere. My actual robot is likely to have a fixed camera, so I should have been content with only optimising the x displacement (rotation) and leaving it at that, but alas curiosity got the better of me, so I decided to look to higher dimensions. Bad move.

Really, there is no escaping it - full 3D is the only proper representation that can account for the movement in the pictures. Alas this isn't just an average or weighted average of pixel movement. It requires simultaneous gradient descent in all the dimensions of the model, and projection of corners into the real world. This involves some hard maths that I wasn't quite prepared for and am far from mastering...