I finally released the free, Lite version of Zap GPS to the Android market. If it is at all well received, I will do a paid version. If you have tried the Lite version and have ideas on how to improve or enhance it for a paid version, please feel free to comment.
The Lite version omits one capability that the ADC2 version contained: cloaking. I felt that was an interesting capability better saved for a paid version. Cloaking is implemented by hiding a Sentinel when its GPS (Global Positioning System) signal drops below a threshold. The player still needs to damage the Sentinel to proceed to the next round, but it is more difficult since the Sentinel cannot be seen. When both Cloaking and Command & Control (sequential destruction) are active it gets harder still, since a cloaked Sentinel may also drop off the Sentinel list when its signal weakens and is lost. But it may reappear if the signal increases. If it is the lowest sequential number, the player has a real challenge.
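In code the rule is tiny. A minimal sketch, with made-up names and threshold:

    // Sketch of the cloaking rule - the name and threshold are hypothetical.
    const float kCloakThreshold = 20.0f;   // signal strength cutoff (made up)

    // A cloaked Sentinel stays in play but is not drawn.
    bool sentinelVisible(float gpsSignalStrength)
    {
        return gpsSignalStrength >= kCloakThreshold;
    }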
One additional feature would be timing the player: the player would be allowed a period of time to damage the Sentinels in the list, after which damaged Sentinels would be "repaired" and added back to the list. That isn't implemented but would be easy to do.
The big feature to add would be competition among users. That would require setting up a web presence to record scores and show the best players for, say, the day, week, and month. The effort for that would be about the same as developing the game up until now. Not sure I want to expend that much effort for little or no reward.
Any other ideas for Zap GPS? Any ideas for a different Galactic Guardian game?
16 July 2010
05 February 2010
RoboRealm Vision Processing - Wrappers Classes
I've been working with RoboRealm over the last week. It is a vision processing application. One of its nice features is being able to access it from another program. You can let it do the heavy lifting of extracting information from a web cam image and then your program just gets a few important data points for analysis.
The module I've been working with is Center of Gravity which locates a blob in the image and reports its size and location. In particular, I'm looking for a red circle.
The interface I've used is the RR_API, which sends XML over a socket connection. Reading a single variable is straightforward, but reading multiple variables with one request is a lot of detail chasing. I hate chasing details over and over again; that is why they originally created subroutines and, more recently, classes. So I wrote some classes to wrap the read variable routines. I haven't needed to write information yet, so that will wait until needed.
The files are in Google Code.
Individual variables are handled through the RoboRealmVar class and its base class RoboRealmVarBase. The base class is needed to provide an interface for reading multiple variables. More on that below.
RoboRealmVar is a template class to allow for handling different data types. One of the details with the RR interface is that all data is returned as a char string, so it has to be converted to the correct data type. The class handles that automatically. The header file has instances of the template for int, float, and string. Other types could be added but may need a char* to data type conversion routine. See the string instantiation for how that is done.
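To give a feel for the shape of the classes, here is a minimal sketch. The real code is in the Google Code repository; the bodies below are an illustrative reconstruction, and the RR_API header name and getVariable signature are assumptions:

    #include <cstdlib>   // std::atoi, std::atof
    #include "RR_API.h"  // RoboRealm's C++ API (assumed header name)

    // Base class so a batch reader can hold a mixed list of typed variables.
    class RoboRealmVarBase
    {
    public:
        RoboRealmVarBase(const char* name) : mName(name) {}
        virtual ~RoboRealmVarBase() {}
        const char* name() const { return mName; }
        virtual void set(const char* text) = 0;  // accept RR's raw char string
    private:
        const char* mName;
    };

    template <typename T>
    class RoboRealmVar : public RoboRealmVarBase
    {
    public:
        RoboRealmVar(const char* name, RR_API& rr)
            : RoboRealmVarBase(name), mRr(rr), mValue(T()) {}

        // RR returns everything as text; convert to the target type.
        virtual void set(const char* text) { mValue = convert(text); }

        // Single-variable read: fetch, convert, cache, return. (The real
        // class presumably returns the cached value after a batch read.)
        T operator()()
        {
            char buffer[64];
            mRr.getVariable(name(), buffer, sizeof(buffer));  // assumed signature
            set(buffer);
            return mValue;
        }

    private:
        static T convert(const char* text);  // specialized per type
        RR_API& mRr;
        T mValue;
    };

    // char* to data type conversions for the instantiated types.
    template <> inline int   RoboRealmVar<int>::convert(const char* t)   { return std::atoi(t); }
    template <> inline float RoboRealmVar<float>::convert(const char* t) { return (float)std::atof(t); }

    typedef RoboRealmVar<int>   rrIntVar;
    typedef RoboRealmVar<float> rrFloatVar;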
Variables are declared by:
rrIntVar mCogX;
rrIntVar mCogBoxSize;
rrIntVar mImageWidth;
The examples are all class members, hence the prefix 'm' on their names.
Initialize the variables with the instance of the RR class, typically in the constructor's initializer list. In the example mRoboRealm is the instance of RR opened through RR_API:
mCogX("COG_X", mRoboRealm),
mImageWidth("IMAGE_WIDTH", mRoboRealm),
mCogBoxSize("COG_BOX_SIZE", mRoboRealm),
and then read them using an overload of operator():
int cogx = mCogX();
Multiple variables are read using the RoboRealmVars class. Declare it and initialize it (again in the constructor's initializer list) with:
RoboRealmVars mCogVars;
mCogVars(mRoboRealm)
Again, my examples are from inside a class.
Then add the individual variables to the list by:
mCogVars.add(mCogX);
mCogVars.add(mImageWidth);
mCogVars.add(mCogBoxSize);
then read them through:
mCogVars();
You can access their values just as shown above through the individual variables.
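For completeness, here is a sketch of how the batch class and a client could fit together. The real RoboRealmVars packs all of the names into a single request; this simplified version just loops, and the CogTracker scaffolding is my own illustration:

    #include <vector>

    // Sketch of the batch reader. The real class sends one multi-variable
    // request; for brevity this version fetches the variables one at a time.
    class RoboRealmVars
    {
    public:
        RoboRealmVars(RR_API& rr) : mRr(rr) {}
        void add(RoboRealmVarBase& var) { mVars.push_back(&var); }
        void operator()()
        {
            for (size_t i = 0; i < mVars.size(); ++i)
            {
                char buffer[64];
                mRr.getVariable(mVars[i]->name(), buffer, sizeof(buffer)); // assumed
                mVars[i]->set(buffer);
            }
        }
    private:
        RR_API& mRr;
        std::vector<RoboRealmVarBase*> mVars;
    };

    // A client class pulling together the snippets shown above.
    class CogTracker
    {
    public:
        CogTracker()
            : mCogX("COG_X", mRoboRealm),
              mImageWidth("IMAGE_WIDTH", mRoboRealm),
              mCogBoxSize("COG_BOX_SIZE", mRoboRealm),
              mCogVars(mRoboRealm)
        {
            mCogVars.add(mCogX);
            mCogVars.add(mImageWidth);
            mCogVars.add(mCogBoxSize);
        }

        void update()
        {
            mCogVars();                                 // refresh all three
            int offset = mCogX() - mImageWidth() / 2;   // blob offset from center
            // ... act on offset and mCogBoxSize() ...
        }

    private:
        RR_API mRoboRealm;   // assume the connection is opened elsewhere
        rrIntVar mCogX, mCogBoxSize, mImageWidth;
        RoboRealmVars mCogVars;
    };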
Hopefully this will be useful to others.
03 February 2010
Create Fun with Grandson
Last weekend two grandkids were here. The girl, Dorian, is a teenager. The boy, Kade, is six. Just before Christmas, when they visited, I was working on the Fit PC Slim to iRobot Create interface. He had his nose up close, asking when it would be done. He asked the same thing on another visit since then. I had to reply that it was not done but I was working on it.
So this visit I just had to have something working. I got the basic wander and bump routines working with the Slim - a reproduction of the Create demo 1 behavior. I figured that would be good for about 2 minutes of interest, so I needed more.
Since this project will be using a web camera for vision, I used some velcro to plunk the camera onto the Create just behind the IR sensor. The velcro raised it enough to see over the top of the sensor. I brought up RoboRealm and set up its built-in web page viewer, which lets you see the camera's images. I pointed the laptop at the web page to display what the Create was seeing.
That was good for about 20 minutes, and then we got called for lunch and told to put the robot away. Awwww!!!
Next visit is in a couple weeks - they are visiting here pretty regularly now. The goal is to have the Create follow a "leash" - a red dot mounted on the end of a stick. That means getting the software to talk with RR to get the center of gravity (COG) of the red dot, and sending drive commands to the Create to center on the dot and drive toward it. It should stop when it gets a little bit away from the dot, and even back up if the dot gets closer.
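A first cut at the steering decision might look like this sketch, where the inputs come from the COG wrapper variables in the post above and the thresholds and driveCreate helper are placeholders rather than working values:

    // Leash behavior sketch - thresholds and drive wrapper are hypothetical.
    const int kTooClose = 120;   // COG box size in pixels - a rough guess
    const int kTooFar   = 60;    // rough guess

    void driveCreate(int velocityMmPerSec, int turn);   // placeholder drive wrapper

    void leashStep(int cogX, int imageWidth, int cogBoxSize)
    {
        int error = cogX - imageWidth / 2;    // >0 means the dot is right of center

        int velocity = 0;                     // hold position by default
        if (cogBoxSize > kTooClose)
            velocity = -100;                  // dot too close: back up
        else if (cogBoxSize < kTooFar)
            velocity = 200;                   // dot far away: close the gap

        driveCreate(velocity, -error);        // turn toward the dot, gain of 1
    }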
28 January 2010
Thinking Aloud - Long Ago Neural AI
I got to thinking about way back in 1972-74 in undergrad school, when I was doing some work in AI, albeit within the psych department. This was before the heyday of neural networks, although there was some activity in the area. I ran across the book, Intelligence: Its Organization and Development by Michael Cunningham. He proposed a rigorous, testable way in which intelligence organizes in the infant. I guess it didn't work out, since it didn't make the front page of the New York Times as a major breakthrough sometime in the intervening decades.
Interestingly, a web search turns up little information beyond citations. None of the titles in the citations indicate a successful implementation or breakthrough based on the work.
I still have a paper I wrote about the book and a description of a FORTRAN implementation that never got finished.
One of the challenges back then, and somewhat still today, is that testing ideas like this requires a simulation environment that can be as complex to produce as the actual ideas you want to test. But I realized that today I do have a physical device, my Create robot, that could be used for testing.
I'm not going to lay out all the details of Cunningham's proposal, since it took him a book to develop and describe the idea. I won't even list the roughly two dozen specific assumptions in the model. What I am going to do is walk through some thoughts on how a project might proceed, to see if it is worth pursuing.
You start with input and output elements - sensors and actuators in today's robot parlance. There are some reflex connections between these elements. For example, a pain reflex: if an infant's hand touches something hot, it jerks away. Or if the side of the mouth touches something, the head turns in that direction in an attempt to suckle.
Jumping over the start-up process (which is always a pain), let's assume the robot is moving forward and hits a wall. The bumper switch closes, but there is no reflex to shut down the motors. The motors keep turning and you get an overload reading. There is a reflex for this, and it stops the motors. Now the motors are stopped and the bump switch is still triggered.
There would be a number of elements. Each sensor input on the Create could have an input element. Each actuator would have an output element. As indicated, the over-current input element could be connected to an output element that stops the motors. (A point to consider: there might be output elements that don't directly connect to actuators but instead inhibit them. Continuing the thought, there might need to be backup, stop, and forward elements for the motors.) In the situation described, these elements would have high levels of activity. Other elements, like a push button, would have no activity. The Cunningham model proposes that those elements with high activity are connected through a new memory element: the inputs to the input side of the memory and the outputs to the output side. What might happen is that a connection is created between the bump switch, the over-current, and the motor-stop elements through the new memory element. In the future, a bump switch closure would stop the motor.
I now recall one result from my work with the FORTRAN implementation: the need to have multiple elements to represent the state of input and output elements. My note above reflects this. For example, the bump switch needs two elements - open and closed. The motor needs forward, reverse, and stopped. It may need even more to indicate speed, although I would first try relating the element activity level to the speed.
The activity level of an element decays if it is not triggered. So the bump switch closing triggers activity that decays over time. The motor activity decreases until the motor stops. An issue would be keeping the bump-switch-closed activity going long enough for the over-current activity to shut down the motor and get the new memory element built. Note: maybe an input triggers again after a period of time?
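To make the decay and wiring ideas concrete, here is a toy sketch. The decay rate, the activity threshold, and the wiring rule are my placeholders, not Cunningham's formulation:

    #include <vector>

    // Toy sketch only - constants and rules are guesses for illustration.
    struct Element
    {
        float activity;                        // current activation level
        Element() : activity(0.0f) {}
        void trigger() { activity = 1.0f; }
        void decay()   { activity *= 0.95f; } // exponential decay each tick
    };

    struct MemoryElement
    {
        std::vector<Element*> inputs;          // e.g. bump-closed, over-current
        std::vector<Element*> outputs;         // e.g. motor-stop

        // When every input is active, drive the outputs - replaying a
        // previously learned pattern.
        void step(float threshold)
        {
            for (size_t i = 0; i < inputs.size(); ++i)
                if (inputs[i]->activity < threshold)
                    return;
            for (size_t i = 0; i < outputs.size(); ++i)
                outputs[i]->trigger();
        }
    };

    // Wiring rule: if all the candidate elements are highly active at the
    // same time, connect them through a new memory element.
    MemoryElement* maybeFormMemory(const std::vector<Element*>& in,
                                   const std::vector<Element*>& out,
                                   float threshold)
    {
        for (size_t i = 0; i < in.size(); ++i)
            if (in[i]->activity < threshold)
                return 0;
        MemoryElement* m = new MemoryElement;
        m->inputs = in;
        m->outputs = out;
        return m;
    }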
How do we get the bump switch open? The only way is by getting the motor to reverse. Infants in a situation like this flail. They randomly move. Sometimes they do this happily while cooing and sometimes angrily while crying. It appears to be a natural reaction to try something, anything, to make things different. (A really ugly phenomenon in an adult, but you still see it - if not physically, at least mentally. Ever had a boss whose reaction was, "Don't just stand there! Do SOMETHING.") I don't recall the model addressing this situation. (I did find used copies of the book and have one ordered so I can refresh my thinking.)
Somehow some general level of activity has to increase, which can generate activity at the outputs. Sometimes this would come through inputs. For an infant this could be sound, pressure on skin, internal senses, and vision. I dislike simply generating random activity levels to cause something to happen. Maybe the general inputs of the Create - power levels, current readings, etc. - are sufficient to generate activity.
Clearly, a dropping charge level in the battery could be tied to a "hunger" reaction which sends the robot searching for its charger. That brings in using the IR sensor to control the drive for the docking station. That probably requires external guidance to train the IR / motor control coordination to execute the docking maneuver. That opens up an entirely different set of thoughts.
Which is enough for today... No conclusion on trying to implement this. But no conclusion not to do so, either.
25 January 2010
Robot Components
Time to explain the components of the robot a bit more. The diagram provides an overview.
The main platform is the iRobot Create. It is an autonomous robot by itself, but it provides control through a serial port connection using a protocol called the Open Interface (OI). The OI can read the sensors and control the actuators of the Create.
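For a feel of what the OI looks like on the wire, here is a sketch. The opcodes (128 Start, 131 Safe, 142 Sensors) and sensor packet 7 (bumps and wheel drops) are from the OI specification; the serial helpers are placeholders for whatever serial layer is used:

    // OI exchange sketch - serialWrite/serialRead are placeholders.
    void serialWrite(const unsigned char* bytes, int count);  // placeholder
    void serialRead(unsigned char* bytes, int count);         // placeholder

    void pollBumpers(bool& bumpLeft, bool& bumpRight)
    {
        const unsigned char start = 128;            // Start: enter the OI
        const unsigned char safe  = 131;            // Safe mode: accept commands
        const unsigned char query[] = { 142, 7 };   // Sensors: request packet 7

        serialWrite(&start, 1);
        serialWrite(&safe, 1);
        serialWrite(query, 2);

        unsigned char bumps = 0;
        serialRead(&bumps, 1);          // bit 0 = right bumper, bit 1 = left
        bumpRight = (bumps & 0x01) != 0;
        bumpLeft  = (bumps & 0x02) != 0;
    }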
The Fit PC Slim is a compact, low-power PC with 3 USB ports and Wifi, plus the usual PC components. It is powered from the Create through a voltage regulator on the Interface Board (IB). The IB also carries the USB interfaces for the serial port and I2C.
I2C is a standard 2 wire bus for controlling actuators and accessing sensor input. I'm not totally sure what is going to be on the bus. I expect a compass module, at least, to provide orientation. I have sonar and IR distance sensors working on I2C but am not sure which to use. These would be backup for detecting obstacles via vision processing. A main goal is for the robot to move around without bumping into obstacles. I also have a digital I/O board that could be used to provide LED indicators of what the robot is doing.
The reason for the Wifi on the Slim is to download software and to allow monitoring from the desktop or laptop, especially in the field.
RoboRealm (RR) is a software package whose main purpose is vision processing. It also has a lot of robot control capability, including a plug-in for the Create. I decided not to use that plug-in after some issues figuring out exactly how it worked. That may have been a mistake. My other concern was the latency of sensor information being collected by RR and then fetched from RR by the control program. RR will be used to handle the camera and vision processing.
Create - Initialization Processing
Developing software is as much a research project as an engineering process. In part this is because a developer is not usually a domain knowledge expert - an accountant, say. The developer thus has to learn a fair amount about the domain in order to proceed. Still, some of the learning only occurs as the project proceeds. You have to learn what you don't know.
The first lesson is that the Create doesn't provide unswitched power sufficient to run the Fit PC Slim; there is sufficient power only when the Create is turned on. This means the Slim can't turn the Create's power off, because the Slim would lose power, too.
Ideally the Create can stay on all the time. It has a docking station - its home base - for recharging. The built-in processing can find the base and drive onto it to charge, or my software could replicate that processing.
The first minor glitch is that the Create stops responding to the Slim's commands when it docks. It turns out there is an undocumented soft reset command (a '7') that puts it back into a mode where it will accept commands. Once back in this mode, the Slim can monitor the charging process and determine when it is safe to leave the dock.
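In code the recovery is just a few bytes. A sketch, with placeholder serial and delay helpers and a guessed reboot delay:

    // Dock recovery sketch - helpers and the delay value are placeholders.
    void serialWrite(const unsigned char* bytes, int count);  // placeholder
    void sleepMs(int ms);                                     // placeholder

    void recoverFromDock()
    {
        const unsigned char softReset = 7;       // the undocumented soft reset
        serialWrite(&softReset, 1);
        sleepMs(3000);                           // guess at the reboot time
        const unsigned char reenter[] = { 128, 131 };  // Start, then Safe mode
        serialWrite(reenter, 2);
    }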
While the Create is charging is a good time for doing work on the interface board or the Slim. What happens when you reconnect everything to a charging Create?
The (re)learned lesson is that startup and shutdown processing are often the most challenging parts of a software project. Once everything is up and running a software process is usually straightforward, albeit with a lot of details to chase. You just nibble away at them one at a time until they are all resolved. Then the process is just doing the same thing over and over again.
Startup is a big discontinuity. What was the robot doing before it shut down? What has changed since then? What is the current state now?
This all was triggered when I realized the robot had to determine whether it was charging or in the wild when it started up. It takes different actions depending on where it is. If docked it continues until charged, backs off the dock, and switches to "wild" mode of operation.
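The decision itself is small once the charging state can be read. A sketch, assuming a wrapper around the OI sensor query for packet 21 (charging state, where 0 means not charging):

    // Startup mode sketch - readChargingState is a hypothetical wrapper
    // around a Sensors (142) query for packet 21.
    enum Mode { DOCKED, WILD };

    int readChargingState();   // hypothetical OI query helper

    Mode determineStartupMode()
    {
        return (readChargingState() == 0) ? WILD : DOCKED;
    }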
Okay, who said, "Rud, as an experienced developer you should have dealt with this already"? Guilty as charged, but this has been a casual, hobby project up until now. This was my wakeup call to start applying a more formal approach and thinking through some of these issues. I'm still not going to go fully formal, since I'm more interested in having fun, so don't expect a six-week hiatus while I produce a formal analysis and design. I am going to do some of my thinking in blog posts, using them for my documentation.
Android - Not Abandoned
I'm not abandoning the Android work. I still want to get Galactic Guardian on the market in both a free, Lite version and a paid version. I'll just bounce between the robot and Android projects as the spirit moves me.
The report from the ADC2 was that 50% of the testers in phase one liked the game so it seems well worth the effort to submit it to the market.