For our next step, we have the robot moving and talking while using its 9 pixel horizontal image scanner to navigate by vision alone. Here I'll show you some small movies (mpeg format) that illustrate the visual capabilities, however limited, that can be achieved with a very low resolution visual setup. Sometimes the robot is hard to understand, so I'll key you in on what it is saying in each movie clip.
Basic Concept
In the arena, when the robot is far from the black target, this is what its 9 pixel vision sees after histogram clipping: a 180 degree field of white walls with the distant black target only one pixel wide. The range here is about 2 feet.
When the robot is much closer, say less than six inches, this is what it sees: a much larger black target filling its field of view.
The point to be made here is that if you merely count the number of continuous black pixels in each frame, you can tell how close you are to the target. When the robot is far away, you may only record one black pixel, but when close the count will fill a maximum of 6 or 7, since a flat target can't be picked up in the 0 and 180 degree views.
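To make the idea concrete, here's a minimal Python sketch of that width count, assuming the thresholded scan arrives as a list of nine bits (1 = white wall, 0 = black target); the function name and frame format are my own illustration, not the robot's actual firmware:

```python
def target_width(frame):
    """Longest run of continuous black (0) pixels in a 9-pixel frame.

    A width of 1 means the target is far away; 6 or 7 means the robot
    is nearly touching it (a flat target never fills the 0 and 180
    degree edge pixels).
    """
    best = run = 0
    for bit in frame:
        run = run + 1 if bit == 0 else 0
        best = max(best, run)
    return best

# Far away: one black pixel.  Up close: six continuous black pixels.
print(target_width([1, 1, 1, 1, 0, 1, 1, 1, 1]))  # -> 1
print(target_width([1, 0, 0, 0, 0, 0, 0, 1, 1]))  # -> 6
```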
Movie 1
We'll start with a bright light source and move it in front of the robot at different angles. The robot sees the light and responds by reading out the angle of the light source. (If it's the same as last time, it says nothing.) At each new position the robot will say "Target at" followed by a three digit angle in degrees. For example, left is zero degrees and right is 180. The position read is the center of each of the 9 pixel bins.
To calculate the angle, the robot takes the white bits from the histogram-sliced image and does a centroid calculation: it finds the center of mass of the white pixels, and that position is the angle reported.
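A sketch of that centroid-to-angle step, with one assumption of mine that isn't stated on the page: pixel 0 looks left at 0 degrees and pixel 8 looks right at 180, so neighboring pixels sit 22.5 degrees apart:

```python
def centroid_angle(frame):
    """Angle of a bright target: center of mass of the white (1) pixels.

    Pixel index i is assumed to map to i * 22.5 degrees, with 0 = left
    and 180 = right. Returns None when no white pixels are lit.
    """
    lit = [i for i, bit in enumerate(frame) if bit == 1]
    if not lit:
        return None
    return (sum(lit) / len(lit)) * 22.5

# Lamp lighting pixels 3-5: centroid at pixel 4 -> 90 degrees, dead ahead.
print(centroid_angle([0, 0, 0, 1, 1, 1, 0, 0, 0]))  # -> 90.0
```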
Movie 2
Putting this into action, the robot drives directly toward the light, first orienting its body toward the light, then making corrections at periodic intervals to re-aim. Scanning is very slow and time consuming, but you get the idea. The robot is saying "Target Dead Ahead"...
After the robot makes a centroid calculation on the image, it measures the angle and knows how much to rotate to point at the target, using a simple calibration I made: I found the time it takes the robot to turn 180 degrees with the motors on, and from that the robot calculates partial angles, and thus the time to rotate to the target's measured position.
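That one-point calibration scales linearly with the angle error; a sketch, where the 3-second half-turn time is a made-up placeholder rather than the robot's measured value:

```python
HALF_TURN_TIME_S = 3.0  # hypothetical: measured motor-on time for a 180 degree spin

def turn_time(target_angle_deg, heading_deg=90.0):
    """Seconds of motor-on time needed to face the target.

    Scales the single 180-degree calibration down to the partial angle:
    positive means turn toward 180 (right), negative toward 0 (left).
    """
    error = target_angle_deg - heading_deg
    return (error / 180.0) * HALF_TURN_TIME_S

print(turn_time(135.0))  # target 45 degrees right of dead ahead -> 0.75 s
```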
Movie 3
This is a very important step. In this last white-light demonstration, the robot drives toward a non-point source, calculates its center, and moves toward that centroid. THEN, when the lamp is at a specific distance, which equates to the number of white pixels in its view, it stops just before it hits. This is a huge step in demonstrating the advantage of multi-pixel vision over just a two pixel light seeker. The robot knows how close it is to the lamp and can aim right at the center no matter how big the light is. You can't do THAT with a two pixel eye!
In this instance above, the robot will stop when it sees the glowing globe fill about 4 pixels. This is about 2 inches.
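Putting the pieces together, a sketch of that approach-and-stop loop, reusing the helpers above; scan(), rotate_to(), and step_forward() are hypothetical stand-ins for the robot's sensor and motor layer, which isn't shown on this page:

```python
STOP_WIDTH = 4  # lamp filling ~4 of 9 pixels equates to roughly 2 inches

def approach_lamp(scan, rotate_to, step_forward):
    """Steer toward the lamp's centroid and stop just before contact."""
    while True:
        frame = scan()  # 9-bit thresholded scan, 1 = bright
        # Lamp width = longest run of white pixels; invert the bits so
        # target_width (which counts runs of 0s) can measure it.
        if target_width([1 - b for b in frame]) >= STOP_WIDTH:
            return  # close enough -- stop before hitting the lamp
        angle = centroid_angle(frame)
        if angle is not None:
            rotate_to(angle)  # re-aim at the centroid
        step_forward()        # creep closer, then rescan
```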
Movie 4
Now here's a new feat for our vision-guided robot: to drive to a BLACK target of finite width, and stop when it is close without hitting it. No glowing lamps this time.
The centroid calculation is the same, but now we are concerned with dark pixels. The white walls read as 1's and the black target reads as a series of 0's in the robot's sliced histogram vision. Again, when the robot reads a total of 4 or more continuous black pixels, it assumes it is at the target. If we push the robot right against the target, it reads about 7 pixels...
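Switching from light-seeking to dark-seeking only changes which bit value feeds the centroid; a sketch continuing the helpers above:

```python
def dark_centroid_angle(frame):
    """Same center-of-mass idea as centroid_angle, but over the
    black (0) pixels, since the target is now darker than the walls."""
    dark = [i for i, bit in enumerate(frame) if bit == 0]
    if not dark:
        return None
    return (sum(dark) / len(dark)) * 22.5

# Black target spanning pixels 3-6 against white walls:
frame = [1, 1, 1, 0, 0, 0, 0, 1, 1]
print(dark_centroid_angle(frame))  # -> 101.25 degrees
print(target_width(frame))         # -> 4: wide enough to stop
```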
Movie 5
Here I've added more information on what the robot reports. As the robot nears the target, it reads out the width of the target each time. You can see that when it is far, it will read 1 pixel wide, but when close that value goes up, until it stops just before hitting. The robot is saying "Target width 1 pixels" or whatever width it sees.
Movie 6
Finally, the icing on the cake, so to speak: after driving to the target, the robot docks with the black foam board and stops, similar to what it might do if connecting to a battery charger. But here we dock with 9 pixel vision ONLY and don't use IR beacons to guide us in!
The robot reports its range with pixel widths, and when it is in front of the target, it says "I am in front of the black target", then "Now docking with target", and finally, after docking, which is determined here by impact with the front central bumper plate, it proclaims "I am docked with black target".
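The whole docking sequence reads like a small state machine; a sketch following the narration above, where say(), drive_forward(), and bumper_pressed() are hypothetical stand-ins for the robot's speech, motor, and bumper interfaces (steering corrections omitted for brevity):

```python
def dock(scan, say, drive_forward, bumper_pressed):
    """Vision-only docking with the black target, per Movie 6."""
    # Approach until the target fills ~4 of the 9 pixels.
    while target_width(scan()) < STOP_WIDTH:
        drive_forward()
    say("I am in front of the black target")
    say("Now docking with target")
    # Creep in until the front central bumper plate makes contact.
    while not bumper_pressed():
        drive_forward()
    say("I am docked with black target")
```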
Conclusion:
For this series of experiments, we were able to demonstrate some of the capabilities of having a visual field of 180 degrees at a very low resolution of only 9 pixels wide. In a biomimetic sense, this shows that increasing the number of photo sensors with some crude directionality, from the one or two light spots of simple organisms to what can best be described as primitive arthropod vision (wide angle and one dimensional), confers a big advantage. As can be seen here with the robot's similar visual acuity, our primitive arthropod would have had more information about its environment and an advantage in escaping predators, able to determine not only which side the dark cave was on for cover, but at what approximate angle. The animal could then have rotated directly toward the cave and made its rapid escape. The poor hapless worm with only one or two eye spots would be slow to find cover and become food for the predator.
You can see the evolutionary push here: the more pixels in your eyes, the better you can survive. This trend of course continued, and after hundreds of millions of years of evolution, trilobites had thousands of facets in their crude eyes. The trend continues today. Dragonflies have the most "pixels" of any insect, about 750,000 per eye. You can imagine the changes in their tiny brains needed to process all of that. Blows my mind...
In our home robotics realm, being able to seek and dock with black or white targets is a big advantage as well, and this additional sensory improvement may make the difference between shutting down with a drained battery and finding the docking port.
In our next experiments, we will have the robot count the targets in its view, size them up, and perhaps select the larger one to drive to. Beyond that, a whole series of avoidance maneuvers using 9 pixel scanning vision while moving is planned.