
01 July 2017

The Autonomous Roboticist

Since September 2016 I've been competing in the NASA Space Robotics Centennial Challenge (SRC). The challenge had a qualifying period followed by the final competition, and I was one of twenty teams from an international pool who qualified for the final. In mid-June the competitors ran their entries on a simulation in the cloud. The competition was capped on June 28-30 with a celebration at Space Center Houston, an education and entertainment facility next to the NASA Johnson Space Center.

On Thursday, the 29th, teams were invited to give presentations to the other teams, the NASA people who organized the challenge, and other attendees. I used the opportunity not only to speak about my approach to the competition but also to raise the question of how an amateur roboticist, like myself, can make a meaningful contribution to robotics.

Two existing ways are competitions like this one and contributing software to the Robot Operating System (ROS). But there isn't always a competition to work on, and ROS contributions don't quite fulfill that desire. In part, ROS misses the mark because before I can add a new, usable package, I first need to develop something new and useful. Perhaps there are existing topics that need software and ROS packaging, but how do I learn about them?

An underlying issue for the amateur is knowing the state of the art in academia and industry. Current academic material is often behind paywalls, and the amateur also lacks the background that led to the current work.

One of the reasons for this entry, and for a possible new blog with that title, is to see if a third way can be found or created.

18 September 2012

DARPA LAGR Project

Today I was looking through some robotics papers applicable to Sample Return that I had previously found on the web. One mentioned using a DARPA LAGR robot, so I looked up what that was. Carnegie Mellon was involved in producing the standardized robots. The idea was to provide identical robots to different researchers, have them develop navigation software, and have them compete on real-world runs to see which approach worked better. The robots had only vision, GPS, and bumper sensors. The outcome of this project seems very applicable to the SRR competition.

One of the researchers at NYU has a long list of papers on navigation.

17 September 2012

Sample Return Robot Challenge

The focus of the blog is changing. I mentioned that I rotate through projects. It is time to focus more on robotics, specifically the NASA Sample Return Robot Centennial Challenge.

In June 2012 NASA ran a Centennial Challenge competition at Worcester Polytechnic Institute. The concept for the competition was a robot on the Moon or Mars retrieving samples. Its tasks were:
  • Obtain a pre-cached sample
  • Search for other interesting samples
  • Return all samples to a landing platform
I considered entering but abandoned the effort for personal and technical reasons. I am going to use the competition guidelines in the development of a robot. I believe the challenge will be repeated and am working now to overcome the technical issues. I will be sharing the effort on my web site. I am starting with a high-level analysis and dropping down to more details as that proceeds.

On this blog I will keep some notes on what has been updated on the project and provide some running commentary on the effort. 

05 February 2010

RoboRealm Vision Processing - Wrapper Classes

I've been working with RoboRealm over the last week. It is a vision processing application, and one of its nice features is that it can be accessed from another program. You can let it do the heavy lifting of extracting information from a web cam image, and your program just gets a few important data points for analysis.

The module I've been working with is Center of Gravity, which locates a blob in the image and reports its size and location. In particular, I'm looking for a red circle.

The interface I've used is RR_API, which is XML over a socket connection. Reading a single variable is straightforward, but reading multiple variables with one request is a lot of detail chasing. I hate chasing details over and over again; that is why subroutines were originally created and, more recently, classes. So I wrote some classes to wrap the read-variable routines. I haven't needed to write variables yet, so that will wait until needed.

The files are in Google Code.

Individual variables are handled through the RoboRealmVar class and its base class RoboRealmVarBase. The base class is needed to provide an interface for reading multiple variables. More on that below.

RoboRealmVar is a template class to allow handling different data types. One of the details with the RR interface is that all data is returned as a char string, so it has to be converted to the correct data type. The class handles that automatically. The header file has instances of the template for int, float, and string. Other types could be added but may need a char* to data type conversion routine. See the string instantiation for how that is done.
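
For anyone who doesn't want to pull the files from Google Code just to see the shape of the classes, here is a rough sketch. I'm recalling the RR_API call name (getVariable) from memory, so check RR_API.h for the exact name and signature:

#include <cstdlib>        // for atoi
#include "RR_API.h"       // RoboRealm's API class, from the RoboRealm download

// Sketch only - not the actual code from Google Code.
class RoboRealmVarBase
{
public:
    virtual ~RoboRealmVarBase() {}
    virtual const char* name() const = 0;     // variable name known to RoboRealm
    virtual void set(const char* text) = 0;   // store a raw reply string
};

template <typename T>
class RoboRealmVar : public RoboRealmVarBase
{
public:
    RoboRealmVar(const char* name, RR_API& rr) : mName(name), mRR(rr), mValue() {}

    const char* name() const { return mName; }

    // RoboRealm returns everything as a char string; convert to the real type.
    void set(const char* text) { mValue = fromString(text); }

    // Read the variable from RoboRealm and return the converted value.
    T operator()()
    {
        char buffer[64];
        if (mRR.getVariable(const_cast<char*>(mName), buffer, sizeof(buffer)))
            set(buffer);
        return mValue;
    }

private:
    static T fromString(const char* text);    // specialized per supported type

    const char* mName;
    RR_API&     mRR;
    T           mValue;
};

// The int conversion; float and string versions would follow the same pattern.
template <> inline int RoboRealmVar<int>::fromString(const char* text)
{
    return std::atoi(text);
}

typedef RoboRealmVar<int> rrIntVar;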

Variables are declared by:

rrIntVar mCogX;
rrIntVar mCogBoxSize;
rrIntVar mImageWidth;

The examples are all class members, hence the prefix 'm' on their names.

Initialize the variables with the instance of the RR_API class. In the example, mRoboRealm is the RR_API instance used to open the connection:

mCogX("COG_X", mRoboRealm),
mImageWidth("IMAGE_WIDTH", mRoboRealm),
mCogBoxSize("COG_BOX_SIZE", mRoboRealm),


and then read them using an overload of operator():

int cogx = mCogX();

Multiple variables are read using the RoboRealmVars class. Declare it and instantiate it with:

RoboRealmVars mCogVars;
mCogVars(mRoboRealm)

Again, my examples are from inside a class.

Then add the individual variables to the list by:

mCogVars.add(mCogX);
mCogVars.add(mImageWidth);
mCogVars.add(mCogBoxSize);

then read them through:

mCogVars();

You can access their values just as shown above through the individual variables.
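
For reference, here is a rough sketch of how RoboRealmVars hangs together. It is from memory, and the batch request itself is stubbed out, so the files on Google Code remain the authority:

#include <string>
#include <vector>

// Sketch only - the batch request is the part I won't guess at; the real class
// builds one request for all the names and parses the combined reply.
class RoboRealmVars
{
public:
    explicit RoboRealmVars(RR_API& rr) : mRR(rr) {}

    // Register a wrapped variable to be fetched on each read.
    void add(RoboRealmVarBase& var) { mVars.push_back(&var); }

    // Read all registered variables with one request and update each wrapper.
    void operator()()
    {
        std::string names;                        // comma-separated name list
        for (size_t i = 0; i < mVars.size(); ++i)
        {
            if (i) names += ",";
            names += mVars[i]->name();
        }

        std::vector<std::string> replies = requestAll(names);
        for (size_t i = 0; i < mVars.size() && i < replies.size(); ++i)
            mVars[i]->set(replies[i].c_str());    // each wrapper converts its own value
    }

private:
    // Placeholder: the real version sends the request through mRR and splits
    // the reply into one string per variable, in the order the names were sent.
    std::vector<std::string> requestAll(const std::string& names)
    {
        (void)names;
        return std::vector<std::string>(mVars.size());
    }

    RR_API& mRR;
    std::vector<RoboRealmVarBase*> mVars;
};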

Hopefully this will be useful to others.

03 February 2010

Create Fun with Grandson

Last weekend two grandkids were here. The girl, Dorian, is a teenager. The boy, Kade, is six. Just before Christmas, when they visited, I was working on the Fit PC Slim to iRobot Create interface. He had his nose up close, asking when it would be done. He asked the same thing on another visit since then, and I had to reply that it wasn't done but I was working on it.

So this visit I just had to have something working. I got the basic wander and bump routines working with the Slim - a reproduction of the Create demo 1 behavior. I figured that would be good for about two minutes of interest, so I needed more.

Since this project will be using a web camera for vision, I used some velcro to plunk the camera onto the Create just behind the IR sensor. The velcro raised it enough to see over the top of the sensor. I brought up RoboRealm and set up its built-in web page viewer, which lets you see the camera's images. I pointed the laptop's browser at that page to display what the Create was seeing.

That was good for about 20 minutes before we got called for lunch and told to put the robot away. Awwww!!!

Next visit is in a couple of weeks - they are visiting here pretty regularly now. The goal is to have the Create follow a "leash" - a red dot mounted on the end of a stick. That means getting the software to talk with RR to get the center of gravity (COG) of the red dot and send drive commands to the Create to center on the dot and drive toward it. It should stop when it gets a little way from the dot, and even back up if the dot gets closer.
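
To make that concrete, here is the sort of control loop I have in mind, using the COG variables from the RoboRealm wrapper post above. The driveDirect() stand-in, the steering gain, and the size thresholds are all placeholder guesses, not working robot code yet:

#include <cstdio>

// Stand-in for the Create's drive command (left and right wheel speeds, mm/s).
// The real version writes the serial opcode to the robot; this one just prints.
static void driveDirect(int leftMmPerSec, int rightMmPerSec)
{
    std::printf("drive left=%d right=%d\n", leftMmPerSec, rightMmPerSec);
}

// Sketch of the leash follower. COG_X, IMAGE_WIDTH, and COG_BOX_SIZE come from
// RoboRealm; a bigger COG box is used as a stand-in for "the dot is closer".
void followLeash(int cogX, int imageWidth, int cogBoxSize)
{
    if (cogBoxSize == 0)                  // no red dot in view: stay put
    {
        driveDirect(0, 0);
        return;
    }

    // Steer so the dot moves toward the image center.
    int error = cogX - imageWidth / 2;    // negative means the dot is to the left
    int turn  = error / 4;                // crude proportional steering gain

    // Pick a forward speed from the dot size.
    int speed = 0;
    if (cogBoxSize < 40)       speed =  150;   // far away: approach
    else if (cogBoxSize > 80)  speed = -100;   // too close: back away
                                               // otherwise: hold position
    driveDirect(speed + turn, speed - turn);
}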

28 January 2010

Thinking Aloud - Long Ago Neural AI

I got thinking about way back in 1972-74 in undergrad school. I was doing some work in AI, albeit within the psych department. This was before the heyday of neural networks, although there was some activity in the area. I ran across the book Intelligence: Its Organization and Development by Michael Cunningham. He proposed a rigorous, testable way in which intelligence organizes in the infant. I guess it didn't work out, since it didn't make the front page of the New York Times as a major breakthrough sometime in the intervening decades.

Interestingly, a web search turns up little information beyond citations. None of the titles in the citations indicate a successful implementation or breakthrough based on the work.

I still have a paper I wrote about the book and a description of a FORTRAN implementation that never got finished.

One of the challenges back then, and it remains so somewhat today, is that testing ideas like this requires a simulation environment that can be as complex to produce as the actual ideas you want to test. But I realized that today I do have a physical device, my Create robot, that could be used for testing.

I'm not going to lay out all the details of Cunningham's proposal since he took a book to develop and describe the idea. I won't even list the roughly two dozen specific assumptions in the model. What I am going to do is walk through some thoughts on how a project might proceed, to see if it is worth pursuing.

You start with input and output elements - sensors and actuators in today's robot parlance. There are some reflex connections between these elements. For example, a pain reflex: if an infant's hand touches something hot, it jerks away. Or if the side of the mouth touches something, the head turns in that direction in an attempt to suckle.

Jumping over the start-up process (which is always a pain), let's assume the robot is moving forward and hits a wall. The bumper switch closes but there is no reflex to shut down the motors. The motors keep turning and you get an overload reading. There is a reflex for this and it stops the motors. Now the motors are stopped and the bump switch is still triggered.

There would be a number of elements. Each sensor input on the Create could have an input element, and each actuator would have an output element. As indicated, the over-current input element could be connected to an output element that stops the motors. (A point to consider: there might be output elements that don't directly connect to actuators but instead inhibit them.) Continuing the thought, there might need to be backup, stop, and forward elements for the motors. In the situation described, these elements would have high levels of activity. Other elements, like a push button, would have no activity. The Cunningham model proposes that the elements with high activity are connected through a new memory element: the inputs connect to the input side of the memory and the outputs to the output side. What might happen is that a connection is created between the bump switch, the over-current, and the motor-stop elements through the new memory element. In the future, a bump switch closure would stop the motor.

I now recall one result from my work with the FORTRAN implementation: the need to have multiple elements to represent the state of input and output elements. My note above reflects this. For example, the bump switch needs two elements - open and closed. The motor needs forward, reverse, and stopped. It may need even more to indicate speed, although I would first try relating the element's activity level to the speed.

The activity level of an element decays if it is not triggered. So the bump switch closing triggers activity that decays over time. The motor activity decreases until the motor stops. An issue would be keeping the bump-switch-closed activity going long enough for the over-current activity to shut down the motor and get the new memory element built. (Note: maybe an input triggers again after a period of time?)
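
Just to pin the mechanics down for myself, here is how a first cut at the elements might look in code. This is my reading of the idea, not anything taken from the book, and the constants are guesses:

#include <string>
#include <vector>

// Speculative sketch of the element/activity idea - my interpretation only.
struct Element
{
    std::string name;        // e.g. "bump_closed", "motor_stop"
    double      activity;    // current activity level, decays over time

    void trigger()           { activity = 1.0; }
    void decay(double rate)  { activity *= (1.0 - rate); }
};

struct MemoryElement
{
    std::vector<Element*> inputs;    // input elements active when the memory formed
    std::vector<Element*> outputs;   // output elements active when the memory formed
};

// When several elements are highly active at once, tie them together through
// a new memory element so the inputs can drive the outputs in the future.
MemoryElement formMemory(std::vector<Element*>& inputElems,
                         std::vector<Element*>& outputElems,
                         double threshold)
{
    MemoryElement mem;
    for (size_t i = 0; i < inputElems.size(); ++i)
        if (inputElems[i]->activity > threshold)
            mem.inputs.push_back(inputElems[i]);
    for (size_t i = 0; i < outputElems.size(); ++i)
        if (outputElems[i]->activity > threshold)
            mem.outputs.push_back(outputElems[i]);
    return mem;
}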

How do we get the bump switch open? The only way is by getting the motor to reverse. Infants in a situation like this flail: they randomly move. Sometimes they do this happily while cooing and sometimes angrily while crying. It appears to be a natural reaction to try something, anything, to make things different. (A really ugly phenomenon in an adult, but you still see it, if not physically then at least mentally. Ever had a boss whose reaction was, "Don't just stand there! Do SOMETHING."?) I don't recall the model addressing this situation. (I did find used copies of the book and have one ordered so I can refresh my thinking.)

Somehow some general level of activity has to increase, which can generate activity at outputs. Sometimes this would come through inputs; for an infant this could be sound, pressure on the skin, internal senses, and vision. I dislike simply generating random activity levels to cause something to happen. Maybe the general inputs of the Create - power levels, current readings, etc. - are sufficient to generate activity.

Clearly, a dropping charge level in the battery could be tied to a "hunger" reaction which sends the robot searching for its charger. That brings in using the IR sensor to control the drive for the docking station. That probably requires external guidance to train the IR / motor control coordination to execute the docking maneuver. That opens up an entirely different set of thoughts.

Which is enough for today... No conclusion on trying to implement this. But no conclusion not to do so, either.

19 January 2010

Subsumption Architecture - Introduction

The brain of a robot is the software. The software has to take in the sensor data, interpret it, and generate commands to the actuators. One architecture for robot software is called subsumption. It came out of MIT and Prof. Rodney Brooks, a founder of iRobot, the company that makes my Create robot. The idea is to break the robot's activities into small pieces.

Let me build up the concept by example. A fundamental activity of a robot is to cruise around. If nothing else is to be done, just let the robot drive straight ahead. So we create an activity called Drive. It simply actuates the motors to go straight at a safe speed. It is easy to write and test.

After driving straight ahead for a while, the robot bumps into something. And continues to try to go straight ahead. This is not good for the robot, or for the cat or furniture it bumped into. So we write a Bump activity using the sensors on the robot - the Create has a right and a left bump sensor on the front that wrap around a bit toward the sides. Bump determines whether a bump sensor was triggered, in which case the robot should stop.

How do Bump and Drive get put together in the software? This is the subsumption architecture part. Initially it's easy: first call Bump. If it doesn't activate, call Drive. If Bump does activate, don't call Drive. Let this run. The robot goes merrily off, bumps into a cat, and stops. The cat gets up, the robot continues straight ahead, hits a chair, stops, and stops and stops and stops. Not very interesting.

What we'd like is for the robot to back up a little bit, turn, and then continue straight ahead. Hopefully that will clear the obstacle, and it will if the robot just brushed a wall. But even if it doesn't, repeating that behavior will eventually turn the robot toward an open area. (Well, many times it will. More later...) This new behavior is somewhat different from Bump and Drive because we want it to do the back-up and turn without interruption. The term used for this is a ballistic behavior.

Do we add this to Bump or Drive, or create a new behavior? The texts I've read added it to Bump. But based on my experience I created a new behavior called Flee. Flee works with what could be called an internal sensor. This internal sensor tells Flee how much to back up and turn. So Bump sets this internal sensor to back up a little bit (40 mm) and turn a little (20 degrees). Since Bump can tell whether the left or right bump sensor (or both) was hit, it also sets the direction of the turn so the robot will turn away from the bump.

Now the activities are called in the order: Flee, Bump, Drive. Remember that if Flee is active the later activities aren't called. If Bump is active, Drive is not called. So the robot Drives ahead, Bumps into something, the internal flee sensor is set, and Flee backs up and turns the robot. Then, with both Flee and Bump inactive, Drive engages and the robot moves ahead.

Just for completeness, I have another activity called Trapped. It is added between Flee and Bump. Every time Trapped is called it records the time and distance moved. If the robot has not moved very far (80 mm) in a certain period of time (10 seconds), then Trapped sets the flee sensor to back up a little bit and turn 180 degrees. The idea is that by turning 180 degrees the robot can get out of a bad situation, such as the legs of a rolling desk chair or a corner.

With these behaviors my Create wanders around the house pretty well. The actual implementation needs a couple more details.
Here is some pseudo code:

preempt = false;
preempt = Flee(preempt);
preempt = Trapped(preempt);
preempt = Bump(preempt);
preempt = Drive(preempt);

The variable preempt is set to true by an activity if it is active. If Bump senses a bump then it sets preempt. When Drive sees preempt is set, it does not activate, i.e. does not drive forward. If Bump sees preempt is set, it does not bother checking the bump sensors, because presumably Flee is now handling the bump condition. Why bother calling activities if they are preempted? Look back at how Trapped works. It is monitoring the distance traveled. If it is not called because Flee is active... And there I'm going to let it hang, because I don't remember why. But I'm going to publish this now, leave it as is, and resume in another posting. This is software development as it is, folks. There are a few possibilities here:
  • I simply don't recall the reason, so I have to remember it or rethink it. That is why you should document things.
  • There was a valid reason that is no longer valid. Boy, that happens all the time in development. A good habit to develop is to revisit assumptions regularly to see how they've changed.
  • I simply blew it when writing the code many months ago.
  • ...or some totally different situation that I can't think of right now.

Those are the basics of subsumption, though. A good book on robot programming that covers subsumption is Robot Programming: A Practical Guide to Behavior-Based Robotics by Joseph L. Jones.
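
For anyone who wants to see the shape of the loop in real code rather than pseudo code, here is a minimal C++ sketch. The activity bodies are stubs, and the flee-sensor structure is my guess at reconstructing my own implementation from memory:

// Minimal sketch of the subsumption loop - stubs only; the real activities
// read the Create sensors and send drive commands over the serial link.

struct FleeSensor            // the "internal sensor" set by Bump or Trapped
{
    int  backupMm;
    int  turnDegrees;        // sign gives the turn direction
    bool active;
};

static FleeSensor fleeSensor = { 0, 0, false };

// Each activity receives the preempt flag and returns the updated flag.
// Returning true tells the later (lower priority) activities to stand down.

bool Flee(bool preempt)
{
    if (preempt || !fleeSensor.active) return preempt;
    // ...back up fleeSensor.backupMm and turn fleeSensor.turnDegrees here...
    fleeSensor.active = false;           // ballistic move finished
    return true;
}

bool Trapped(bool preempt)
{
    // ...record time and distance even when preempted...
    if (preempt) return preempt;
    // ...if less than 80 mm moved in 10 s, set fleeSensor for a 180 degree turn...
    return preempt;
}

bool Bump(bool preempt)
{
    if (preempt) return preempt;
    bool bumped = false;                 // ...read the Create bump sensors here...
    if (bumped)
    {
        fleeSensor.backupMm    = 40;
        fleeSensor.turnDegrees = 20;     // sign chosen from which bumper was hit
        fleeSensor.active      = true;
        return true;
    }
    return preempt;
}

bool Drive(bool preempt)
{
    if (preempt) return preempt;
    // ...drive straight ahead at a safe speed...
    return true;
}

int main()
{
    for (;;)                             // the arbitration loop, highest priority first
    {
        bool preempt = false;
        preempt = Flee(preempt);
        preempt = Trapped(preempt);
        preempt = Bump(preempt);
        preempt = Drive(preempt);
        // sleep one control period here, e.g. 50 ms
    }
}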

...sine die
