07 September 2015

Navigating into Robot Navigation & Waterloo Paper

I've been waiting for the announcement of the 2016 NASA Sample Return Centennial Challenge since early August, but so far there has been no information. This is unusual since they typically make the announcement in early or mid-August. I've hesitated to start working on my robots until I know about the challenge.

During August I was visiting family in Canada (drove by Waterloo on the way!), so I started re-reading the Waterloo paper I mentioned back in January 2014. Unfortunately it is no longer available for free, although I downloaded it while it was. No, I won't distribute copies since it is a copyrighted work, although IMO the charges for academic papers are absurd. I'm glad I grabbed it while I could.


Patting myself on the back a bit, I was struck while reading the introduction by the parallels with my original analysis. You can find my material on my web page. My material is not written in academic terms, so it takes some translation to see the similarities. The specific sentence that struck me mentioned the "sensor footprint". My original analysis estimated the area a web camera could see and worked from there to check the speed and number of rovers needed to perform the search. As I skimmed back through the paper's introduction I saw more similarities.

But back to what I am going to do now. I read Section 2, Mapping, which discussed the problems with exploring an area such as the park used for the SRR. As I read it, I realized that it was a nicely contained problem I could work on without getting into all the other details of the SRR. One specific topic it avoided, for the most part, was vision processing.

Vision is my weakest area. One might expect that is where I should focus my efforts first. Without vision I'd have no chance of success with the SRR. But, if I focused on vision and there were no SRR I'd not have much to show. So navigation it is.

This works out nicely because I was planning on using Raspberry Pis to offload processing from the main processor. The areas I contemplated the Pi handling were motor control and scanning with a lidar.

I will be using Linux and Robot Operating System (ROS) for my robotics work. The Pi runs Raspbian Linux and a version of ROS. With a network connection, the ROS nodes running on each computer can exchange commands and data.
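
To make that concrete, here is a rough sketch of the Pi's side of such an exchange: a node that listens for drive commands published from the main processor. The 'cmd_vel' topic and the Twist message type are placeholders I picked for illustration, not a settled design.

    #!/usr/bin/env python
    # Sketch: the Pi subscribes to drive commands published on another
    # computer and would hand them to the motor control code. The topic
    # name and message type are assumptions, not a final design.
    import rospy
    from geometry_msgs.msg import Twist

    def cmd_callback(cmd):
        # Placeholder: translate the requested speeds into commands for
        # the motor controllers here.
        rospy.loginfo("forward %.2f m/s, turn %.2f rad/s",
                      cmd.linear.x, cmd.angular.z)

    if __name__ == '__main__':
        rospy.init_node('drive_listener')
        rospy.Subscriber('cmd_vel', Twist, cmd_callback)
        rospy.spin()

Both machines just need to point ROS_MASTER_URI at whichever computer is running the ROS master.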

The lidar I have is from the Neato vacuum cleaner. It scans in a plane and returns a point cloud of the surrounding area. It appeared that the West Virginia Mountaineers' robot used a lidar mounted on a tilting platform to scan the area in front of their robot during movement. That would provide a means of obstacle detection that is probably simpler and more reliable than vision. Controlling the tilt scan and reading the lidar is another application for the Pi. The Pi can determine from the point cloud whether there is an obstacle and report this to the main processor.
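
Here is a minimal sketch of what that obstacle check might look like as a node on the Pi. The 'scan' topic name, the half-meter threshold, and the 60-degree forward arc are all assumptions for illustration.

    #!/usr/bin/env python
    # Sketch: watch the lidar scan and publish True when something is
    # closer than a threshold in the forward arc. Topic names, threshold,
    # and arc width are assumptions.
    import math
    import rospy
    from sensor_msgs.msg import LaserScan
    from std_msgs.msg import Bool

    OBSTACLE_RANGE = 0.5          # meters; assumed threshold
    HALF_ARC = math.radians(30)   # look +/- 30 degrees of straight ahead

    def scan_callback(scan):
        obstacle = False
        angle = scan.angle_min
        for r in scan.ranges:
            # Wrap the beam angle into [-pi, pi] so the forward-arc check
            # works whether the scan starts at 0 or at -pi.
            wrapped = math.atan2(math.sin(angle), math.cos(angle))
            if abs(wrapped) <= HALF_ARC and scan.range_min < r < OBSTACLE_RANGE:
                obstacle = True
                break
            angle += scan.angle_increment
        pub.publish(Bool(data=obstacle))

    if __name__ == '__main__':
        rospy.init_node('obstacle_monitor')
        pub = rospy.Publisher('obstacle_detected', Bool, queue_size=1)
        rospy.Subscriber('scan', LaserScan, scan_callback)
        rospy.spin()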

A Pi 2 should be able to handle this scanning and the motor control without any problems. It's worth a try, at least.

The overall concept now is to take one of my Wild Thumper chassis and mount the Pi, a servo-controlled tilt mechanism (like I used for picking up samples), and the lidar. To control this I need to use the Pololu Maestro servo controller and the Simple Motor Controllers I used previously.
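
For the tilt servo, the Maestro takes a simple "set target" command over its USB virtual serial port. A rough sketch of nudging one channel from the Pi, where the port name, channel number, and pulse width are only placeholders:

    #!/usr/bin/env python
    # Sketch: aim the lidar tilt servo with the Maestro's compact serial
    # protocol (0x84 = set target, in quarter-microseconds). The port
    # name, channel, and pulse width are assumptions.
    import serial

    def set_target(port, channel, microseconds):
        target = int(microseconds * 4)       # quarter-microsecond units
        port.write(bytearray([0x84, channel,
                              target & 0x7F, (target >> 7) & 0x7F]))

    with serial.Serial('/dev/ttyACM0', 9600) as maestro:   # assumed port
        set_target(maestro, 0, 1500)   # center the tilt servo on channel 0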

Today I re-examined the Pololu controller capabilities and connections since I've not worked with them in a while. Early in August I had the lidar working, so it was fresh in my mind. The lidar as configured now needs a USB port, and the Pololu controllers can be connected to either USB or a serial port. I'll need to daisy-chain the controllers, but that is not a problem. With the 2013 rovers each controller had its own USB port.
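
For what it's worth, here is a rough sketch of how the daisy-chained setup might look from the Pi's side. It uses the same commands as above, but wrapped in the Pololu protocol, where each command is prefixed with 0xAA and a device number so only the addressed controller responds. The device numbers, serial port, and speeds below are assumptions for illustration.

    #!/usr/bin/env python
    # Sketch: one serial line shared by a Maestro and a Simple Motor
    # Controller using the Pololu protocol (0xAA, device number, command
    # byte with its high bit cleared). Device numbers, port, and speeds
    # are assumptions.
    import serial

    MAESTRO_ID = 12   # assumed device number
    SMC_ID     = 13   # assumed device number

    def maestro_set_target(port, channel, microseconds):
        target = int(microseconds * 4)                 # quarter-microseconds
        port.write(bytearray([0xAA, MAESTRO_ID, 0x04,  # 0x84 minus high bit
                              channel, target & 0x7F, (target >> 7) & 0x7F]))

    def smc_forward(port, speed):
        # speed is 0..3200; exit safe start first, then command forward.
        port.write(bytearray([0xAA, SMC_ID, 0x03]))    # exit safe start
        port.write(bytearray([0xAA, SMC_ID, 0x05,      # motor forward
                              speed % 32, speed // 32]))

    with serial.Serial('/dev/ttyAMA0', 9600) as bus:   # assumed Pi serial port
        maestro_set_target(bus, 0, 1200)               # tilt the lidar down
        smc_forward(bus, 1600)                         # about half speed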

All the parts and pieces of the concept came together nicely. Just enough work to get the hardware in place to make it fun, but not so much as to make it frustrating.