Vision Logic Open File Report

Chapter 2: One-Eye Vision - Euglena AI Demonstration

Updated 4/25/08

 For our first step in developing a usable robotics vision system, I will demonstrate that the VL Robot is capable of matching the intelligence of the organism on the right, Euglena. This one-celled animal has but one eyespot and no brain. It navigates purely by chemical reflex, and since it is photosynthetic, it will travel toward a bright light in the water to maximize its sunlight-gathering ability.

In this first experiment, we will demonstrate the following behavior:

Home in on a bright point source of light, then stop upon contact.

As I discovered, this is no easy feat for an animal with just one eye, since it must actually sway back and forth to tell whether it is heading in the right direction toward the light.

 
   At the left is the configuration for the demonstration. On the very top of the robot, hanging over the red connector, is a single-eye photodetector mounted on a flexible stalk. It is detailed schematically below and has an acceptance angle of roughly 90 degrees, comparable to the range of the eyespot on Euglena. Using this single sensor and a behavioral programming approach, we have succeeded in our task of homing in on the bright lamp mounted at the end of the robot arena. Since the robot can now talk, it will announce what it sees with both beeps and vocalizations. We'll show you a small movie clip later on how the experiment went.

 The Schematic Diagram of a "One-Pixel Eye"

The photocells are crucial to the vision logic experiments. They have a spectral response close to the visible range, are very inexpensive, and come in a very convenient T-1 LED-style package. They are Panasonic PNA1801L parts, available from Digikey. The op amps are TS954 quad rail-to-rail amplifiers with a low-to-medium frequency response, also from Digikey. The 1 uF filter cap forms a low-pass filter to remove any high-frequency flicker from the fluorescent lamps and AC lighting. The amps are set for a unity-gain (K=1) configuration, and the circuit provides a 0-5 V output into the A/D converter on the microcontroller.
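For reference, here is a minimal sketch, in plain C, of how a 10-bit A/D count (0-1023, as on the 16F877A) maps back to the eye circuit's 0-5 V output. The helper name is hypothetical, invented for illustration:

    /* adc.c - convert a 10-bit A/D count of the 0-5 V eye signal
       into millivolts. Name and usage are illustrative only. */
    #include <stdio.h>

    static unsigned adc_to_millivolts(unsigned count)  /* count: 0-1023 */
    {
        return (unsigned)((unsigned long)count * 5000UL / 1023UL);
    }

    int main(void)
    {
        printf("count 512 -> %u mV\n", adc_to_millivolts(512)); /* ~2502 mV */
        return 0;
    }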

 

  At less than a dollar each, you can buy bags of these wondrous devices for all of your robotic vision needs!

 Behavioral Programming Description.

There are many ways to program a single-chip microcontroller or microprocessor to control a robot. Since controller code runs serially in a processor like ours, a Microchip 16F877A, the program is usually organized in one of two ways: in distinct subroutine-like states, called "Finite State Machine" programming, or in a way that approximates how simple living animals think, known as "Subsumption Architecture". While we and the more advanced animals think in terms of separate moods or states, such as "eat mode" or "sleep mode", we can switch in an instant to a different state of mind. A good approximation of this type of thinking is finite state machine programming; most of my robots have used this method and can perform quite advanced functions. But simple organisms and most insects (with the exception of the wasp family) think in a layered behavioral way. Their brains are set to automatic, programmed so to speak by billions of years of evolution, one cell at a time, to act by reflex alone: Stimulus --> Response. No thinking. Just react. Programming a processor to act like this is not a very straightforward task. This framework or organization of the program is called the AI, or Artificial Intelligence, matrix.
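For contrast, here is a bare-bones finite state machine skeleton in plain C, the kind of "separate moods" structure described above. The states and transitions are invented purely for illustration, not taken from any of my robots:

    /* fsm.c - a toy finite state machine: one distinct "mode" runs at a
       time, with explicit transitions between modes. Illustrative only. */
    #include <stdio.h>

    enum state { EAT_MODE, SLEEP_MODE };

    int main(void)
    {
        enum state mode = EAT_MODE;
        for (int tick = 0; tick < 4; tick++) {
            switch (mode) {
            case EAT_MODE:
                puts("eating");
                mode = SLEEP_MODE;  /* explicit jump to the next state */
                break;
            case SLEEP_MODE:
                puts("sleeping");
                mode = EAT_MODE;
                break;
            }
        }
        return 0;
    }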

To program the Subsumption architecture in a serial processor requires a method known as "Priority Arbitration Architecture". One of my first robots, PAAMI, was programmed this way, and while it was complex electronically, it had a very simple thinking pattern. Here's how it works. The robot is programmed in layers of behaviors, or reflexes, with the most important at the top and the least important at the bottom. A very simple behavior might be to wander about aimlessly; a more complex, higher behavior would be to seek out the light. Insects think this way: they react to only the highest-order stimulus. Feeding, for example, overrides sleeping. Our Euglena program has four layers of behaviors:

BRIGHT IMPACT BEHAVIOR
     AVOID BEHAVIOR
            HOMING BEHAVIOR
                   RANDOM WANDER BEHAVIOR

Each layer can control the motors, voice, beeper, and LCD display, but only the highest active behavior actually gets to drive them. All the sensors feed into every layer, so each behavior has full access to the sensory input. The layers are triggered by sensor events such as a threshold voltage, a brightness level, or the bumpers hitting something, and in turn each layer provides output to the motors. Keeping all the layers in check, and granting motor access only to the highest active layer, is the job of the Arbiter (not shown here): the part of the program that scans the sensors and then hands the motors to only the highest triggered behavior. A sketch of such an arbiter loop appears below. I know all of this sounds complex, but it is very close to how lesser life forms think.
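Here is a rough sketch of what such an arbiter loop looks like, written in plain desktop C so it can be compiled and run anywhere. The sensor variables, behavior routines, and thresholds are all stand-in assumptions for illustration; the real robot does this on the 16F877A:

    /* arbiter.c - a minimal sketch of a Priority Arbitration loop.
       All names and thresholds are illustrative stand-ins. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool bumper_hit;   /* did the bumpers touch something?    */
    static int  light_level;  /* 0-1023 count from the one-pixel eye */

    enum { HOMING_THRESHOLD = 600, BRIGHT_THRESHOLD = 900 }; /* assumed */

    static void read_sensors(void) { /* poll the bumpers and A/D here */ }
    static void stop_motors(void)         { puts("BRIGHT IMPACT: stop");    }
    static void evade(void)               { puts("AVOID: back up, turn");   }
    static void waddle_toward_light(void) { puts("HOMING: waddle inward");  }
    static void wander(void)              { puts("WANDER: roam at random"); }

    int main(void)
    {
        for (int pass = 0; pass < 10; pass++) {  /* the robot loops forever */
            read_sensors();
            /* Only the highest triggered behavior gets the motors. */
            if (bumper_hit && light_level > BRIGHT_THRESHOLD)
                stop_motors();          /* layer 1: BRIGHT IMPACT  */
            else if (bumper_hit)
                evade();                /* layer 2: AVOID          */
            else if (light_level > HOMING_THRESHOLD)
                waddle_toward_light();  /* layer 3: HOMING         */
            else
                wander();               /* layer 4: RANDOM WANDER  */
        }
        return 0;
    }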

Behavior descriptions:

Bright Impact: The highest-level behavior stops the robot when it has a very bright light in its sights and also hits something. It overrides all other behaviors.

Avoid: If the robot hits something, this behavior overrides Wander and Homing and takes evasive action.

Homing: Triggered by a light level that indicates the robot is near a bright source. It seeks the light by making an S-curve into the source. Here's how the Homing behavior works: the robot waddles back and forth, effectively scanning its forward-pointing, stalk-mounted sensor, and takes a reading after every waddle. If the light level got brighter, it keeps waddling in the same direction; if the level dropped, it reverses the waddle direction. The real animal does the same: the brainless Euglena, with its rigid body, wavers back and forth seeking the light, and if the light gets dimmer its reaction is to change direction and see if it gets brighter. The poor hapless Euglena will merely circle endlessly if it cannot find a brighter direction. The robot will do this as well!
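Here is a toy sketch of that waddle-and-compare reflex in plain C. The one-line light model and the routine names are invented for illustration; notice how, once the simulated robot reaches peak brightness, it oscillates in place much like the circling Euglena:

    /* waddle.c - sketch of the waddle-homing reflex. A toy light model
       stands in for the real eye; names are illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>

    static int heading = 40;  /* toy model: light is brightest at heading 0 */

    static unsigned read_light(void)   { return 1000 - abs(heading); }
    static void     waddle_step(int d) { heading += 5 * d; }

    int main(void)
    {
        int      dir  = +1;              /* +1 waddle right, -1 waddle left */
        unsigned last = read_light();

        for (int step = 0; step < 20; step++) {
            waddle_step(dir);
            unsigned now = read_light();
            if (now < last)
                dir = -dir;              /* got dimmer: reverse the waddle */
            last = now;                  /* got brighter: keep the same arc */
            printf("step %2d heading %4d light %4u\n", step, heading, now);
        }
        return 0;
    }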

Random Wander: The robot will wander at random until it hits something.

We could also add a "Do Nothing" behavior; after all, what does a bug do when it's not wandering? But hey, how exciting would that be to watch...

 Running the Experiment:

All tests are run in the new Robot Arena, which was built last year as part of the PicBot series of experiments. The arena contains the robot in a controlled way and allows the insertion of both obstacles and targets of interest into the robot's constrained world. Each level of the robot's behavior was developed one at a time. Each time I added the next level, I tested it in the arena to validate the response. Finally, when the last behavior, Bright Impact, was added, the final experiment could be done. The movie clip below will show that the robot has easily achieved the Euglena level of artificial intelligence.

The movie clip description: This is a small 160-pixel MPEG clip for the web showing the robot's final Euglena response experiment. The movie starts with me turning on the robot in the arena. The robot speaks "Booting" and then identifies itself as the Vision Logic Robot. It then runs the Priority Arbitration Architecture: it first wanders off toward the wall, where the AVOID behavior takes evasive action. After wandering a few more inches, the HOMING response activates when it sees the minimum threshold brightness for being near a light source. It then moves in toward the light, swaying back and forth to find the brightest path. Upon impact with the lamp, the highest behavior, BRIGHT IMPACT, kicks in and stops the robot. The robot proclaims at this point that it has found the light.

 MOVIE CLIP HERE

Be sure to turn your volume up on your computer!

The movie clip description: This is another demonstration of AI. Here, with the Bright Impact behavior removed, the robot hovers around the light source. A very simple change in behaviors can result in a completely new level of visual response: the robot drives toward the light until the Avoid behavior subsumes it on impact, bounces back away, and then drops back into the Homing behavior. This repeats over and over, producing a hovering behavior.

 MOVIE CLIP HERE

Be sure to turn your volume up on your computer!

The movie clip description: This third demonstration of visual AI shows what happens if the robot is in the Homing state - the moth flying toward the light - and I then shut the light off: the Wander behavior resumes. Similarly, when the single-celled organism is heading for a light that goes out or dims significantly, say by sunset or shade, the Euglena goes back to wandering to seek a new light source. Again, no brains - just react in layers.

 MOVIE CLIP HERE

Be sure to turn your volume up on your computer!

 Conclusion:

Here we have demonstrated a level of intelligence similar to that of the single-celled animal known as Euglena. Next, onward to Cyclops...
