
Team Waterloo Research Paper on SRR

Team Waterloo published a paper about their work on a robot for the 2012 and 2013 NASA Sample Return Robot Centennial Challenges.
Mapping, Planning, and Sample Detection Strategies for Autonomous Exploration
This paper presents algorithmic advances and field trial results for autonomous exploration and proposes a solution to perform simultaneous localization and mapping (SLAM), complete coverage, and object detection without relying on GPS or magnetometer data. We demonstrate an integrated approach to the exploration problem, and we make specific contributions in terms of mapping, planning, and sample detection strategies that run in real-time on our custom platform. Field tests demonstrate reliable performance for each of these three main components of the system individually, and high-fidelity simulation based on recorded data playback demonstrates the viability of the complete solution as applied to the 2013 NASA Sample Return Robot Challenge.
It appears in the Journal of Field Robotics, in the Wiley Online Library. I could, and probably will, spend a lot of time with the back issues of the Journal.
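
The abstract packs three hard problems (SLAM, complete coverage, and object detection) into one sentence, so a small illustration may help. Below is a minimal, generic frontier-detection sketch in Python. It is a textbook exploration building block, not Team Waterloo's algorithm, and the grid convention and function name are my own. Planners in this family repeatedly drive toward free cells that border unknown space until none remain, which yields complete coverage of the reachable area.

import numpy as np

# Occupancy grid convention for this sketch only (not from the paper)
FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def find_frontiers(grid):
    """Return (row, col) of free cells that border unknown space.

    Frontier cells are natural exploration goals: driving to one and
    sensing there extends the map, and repeating until no frontiers
    remain covers all reachable area.
    """
    rows, cols = grid.shape
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # 4-connected neighbors; any unknown neighbor marks a frontier
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

# A 4x4 partly explored map: left side mapped, right side still unknown
grid = np.array([[ 0,  0, -1, -1],
                 [ 0,  1, -1, -1],
                 [ 0,  0,  0, -1],
                 [-1, -1, -1, -1]])
print(find_frontiers(grid))  # [(0, 1), (2, 0), (2, 1), (2, 2)]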



I was blown away by the paper. Calling it a paper is misleading: it is 32 pages of details on mapping, planning, and vision processing. In all my web searches I have not found anything that compares to this material. It is intensive and extensive.

Can you tell I liked it? It is just incredible. Reading it boosted my own ego for attempting the competition. Everyone who competes in the SRR should give themselves a big smile of congratulations the next time they are in front of a mirror.

This work is not just abstract. It provided tangible results during the competition.

They were the first team to successfully collect the Phase 1 pre-cached sample and return it to the starting platform. This was on the second Phase 1 day of the 2013 SRR. The only thing they missed was stopping in time: the sample ended up just beyond the rear of the platform, so it did not qualify. (Fortunately, in one sense, they are a demonstration team, since they are Canadian and thus not eligible for prize money. It would have been heartbreaking to miss collecting the $5000 from NASA by, in "honour" of Canada, a few centimeters.) It is somewhat amusing that, for all the work reported in the paper, they did not report on how the robot determined when to stop.

The robot was guided by a suitcase on the Home Beacon. The suitcase was a clever idea since it added no additional weight to transport, and they were going to have a number of them available anyway. If the suitcase had been positioned just a few centimeters closer to the platform, the sample would have been properly positioned.

Excellent work all around by Team Waterloo.


