Vision Logic Open File Report

Chapter 1: Base Robot Overview

Updated 1/4/09



The project robot consists of an octagonal platform with three decks, driven by modified servo motors similar to those used in model airplanes. An omnidirectional caster ball on the back forms the third point of contact with the ground and allows the robot to rotate effortlessly in any direction. This arrangement is called a differential drive and is the most common type in home robotics today. I chose this style because my goal is to provide the home robotics community with vision on a platform type they are most accustomed to using.
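For readers new to differential drives, the steering idea can be sketched in a few lines: the robot turns by running its two wheels at different speeds, and spins in place when they run equal and opposite. The function name and wheel-base figure below are illustrative assumptions, not values from this report.

```python
def wheel_speeds(v, omega, wheel_base):
    """Convert a desired forward velocity v (m/s) and turn rate omega
    (rad/s) into left/right wheel speeds for a differential drive.
    wheel_base is the distance between the two wheels in meters."""
    left = v - omega * wheel_base / 2.0
    right = v + omega * wheel_base / 2.0
    return left, right

# Driving straight: both wheels run at the same speed.
print(wheel_speeds(0.2, 0.0, 0.15))   # (0.2, 0.2)
# Spinning in place: wheels run equal and opposite.
print(wheel_speeds(0.0, 1.0, 0.15))   # (-0.075, 0.075)
```

The caster ball does no steering at all; it simply keeps the platform level while the two driven wheels set both heading and speed.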

The three decks

Each deck performs a specific function. The lowermost deck has the wheel drive motors on the bottom and the battery pack on top. The batteries are NiMH rechargeables, and connect through a simple charging circuit to the front of the robot for vision-guided docking experiments. The middle deck houses my custom-made robot board, one of three produced in the previous SweepBot project. It contains a PIC16F876A processor and two 12F629 sub-processors to drive the servos. This board takes up the entire deck and is very capable of interfacing with the vision experimental apparatus. This level also contains a new addition I have not used before: a speech synthesizer, so that the robot can talk and describe what it is seeing as it navigates. The top deck is a blank platform that will hold the visual experiment in progress. Here I will mount home-brew digital robot "eyes" and some small machine vision cameras that interface with the deck below. The top deck will also have a back-illuminated LCD display for non-verbal data and voltage readings.
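The sub-processors drive the modified (continuous-rotation) servos by varying the standard hobby-servo pulse width: roughly 1.5 ms stops the motor, with shorter and longer pulses commanding reverse and forward rotation. A minimal sketch of that mapping follows; the function name and the exact pulse-width endpoints are assumptions for illustration, as individual servos need their neutral point trimmed.

```python
def servo_pulse_us(speed, neutral=1500, span=500):
    """Map a speed command in [-1.0, 1.0] to a pulse width in
    microseconds for a continuous-rotation hobby servo:
    ~1500 us = stop, ~1000 us = full reverse, ~2000 us = full forward."""
    speed = max(-1.0, min(1.0, speed))  # clamp out-of-range commands
    return int(neutral + speed * span)

print(servo_pulse_us(0.0))    # 1500 (stopped)
print(servo_pulse_us(1.0))    # 2000 (full forward)
print(servo_pulse_us(-0.5))   # 1250 (half reverse)
```

Offloading this pulse generation to the two 12F629s frees the main PIC16F876A from the tight timing loop, since the pulses must be refreshed roughly every 20 ms.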

Why speech?

A key component of this robot is a Devantech text-to-speech synthesizer. The robot will spend most of its time performing in a large arena, attempting tasks or mazes using vision only. Rather than hover over the robot (creating shadows to boot) to read a tiny LCD display on its back, I've decided it will be easiest - and of course very cool - to have the robot simply talk to us and tell us what's going on internally. An example might be announcing the recognition of a shape or target, or relaying a light intensity reading in real time.

Additional sensors

Because this robot is to perform primarily under vision guidance, I am keeping the other usual sensors to a minimum. I plan to add a Sony IR distance sensor to the front of the robot to alleviate the need for stereoscopic vision during target acquisition, and for bland object detection. Two music-wire front whiskers complete the frontal sensing to keep the robot out of trouble during vision algorithm development. I haven't yet decided what charging plate arrangement I'll be using for docking experiments.
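If the IR ranger is of the common analog hobby variety, its output voltage falls off roughly inversely with distance, so a simple inverse model can turn an ADC reading into centimeters. The constants below are placeholders and would need calibration against the actual sensor; this is a sketch of the technique, not this sensor's datasheet formula.

```python
def ir_distance_cm(voltage, k=27.0, offset=0.1):
    """Rough inverse model d = k / (V - offset) for an analog IR
    ranger whose output voltage decreases with distance.
    k and offset are hypothetical calibration constants."""
    if voltage <= offset:
        return float('inf')  # below the usable range; treat as "nothing seen"
    return k / (voltage - offset)

print(ir_distance_cm(1.0))   # 30.0 cm with these placeholder constants
```

In practice one would tabulate a few known distances against measured voltages and fit k and offset, since these sensors are also nonlinear at very close range.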