MGC Project Abstract

The Mini Grand Challenge is an outdoor autonomous vehicle navigation competition. It requires participants to design and build a vehicle that can autonomously drive across multiple terrains (a cement sidewalk and a grassy field). The competition has certain requirements and limitations that make this more difficult. Please read the Mini Grand Challenge article for detailed information on the competition.

= Solution Overview =

The 2009/2010 solution is based on fusing laser and vision data with forward vehicle projection. The vehicle has a camera at waist level, facing forward. Below it, a laser-range finder is mounted, pitched slightly downward so that its collision data points fall within the camera's field of view. This lets us apply laser/vision fusion to gather the color of the possibly drivable surface.

Laser collision data will be used to find what the robot may use as a drivable surface, as well as to detect general obstacles. It is merged with camera image data to produce a color range that the robot considers a "drivable surface". The image is then segmented using this color range, producing a collision map (a black-and-white image of drivable and non-drivable surfaces). A projection transform is applied to help choose the best forward path for the vehicle. This last component finds the optimal path over all near-future vehicle positions, acting as the "main controller" of the vehicle.
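
The segmentation step described above can be sketched as a per-pixel color-range test. This is a minimal pure-Python sketch; the color bounds here are hypothetical, and the real system would derive them from the laser/vision fusion:

```python
def segment_drivable(image, lower, upper):
    """Return a black/white collision map: 255 where the pixel color falls
    inside [lower, upper] on every channel (drivable), 0 elsewhere."""
    return [[255 if all(lo <= ch <= hi
                        for ch, lo, hi in zip(px, lower, upper)) else 0
             for px in row]
            for row in image]

# Tiny 2x2 RGB image: one gray "sidewalk" pixel, three green "grass" pixels.
img = [[(120, 120, 120), (40, 160, 40)],
       [(50, 170, 45), (45, 150, 50)]]
mask = segment_drivable(img, lower=(100, 100, 100), upper=(140, 140, 140))
# mask -> [[255, 0], [0, 0]]
```

A production version would operate on full camera frames (e.g. with NumPy or OpenCV), but the per-channel range test is the same idea.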

During the development of our robot platform, we will have several long-term goals in mind:


 * Portability: Strive to keep our code and algorithms modular, so that they are easily transferred between platforms and projects.
 * Simplicity: Keeping a simple design allows for stable and efficient code.
 * Stability: Having a stable robotics system is critical to this challenge.
 * Efficiency: We strive for high efficiency and performance for all code and algorithms.
 * Openness: We are trying to advance the community by using open-source tools and freely publishing our work.

= 2009/2010 Solution =

Our 2010 solution is based on laser/vision fusion. This choice originates from past experience with the Mini Grand Challenge and IGVC competitions. In the past, we experimented with a vision-only approach, but the vision data did not adapt well to scene variance (shadows, overcast weather, etc.). For the 2009 IGVC competition, we experimented with a global mapping system, only to find that it never reached acceptable run-time speeds. Laser/vision fusion is a proven method, used in several competitions and famous robots. Stanley, the Stanford robot that won the 2005 DARPA Grand Challenge, used a similar approach.

The general solution is to have the system find a road using computer vision, but dynamically choose the correct color ranges rather than hard-coding them. The color ranges are selected by looking at flat surfaces detected by a laser-range finder, which is pitched toward the ground at roughly a 45-degree angle. A simple forward vehicle projection system then attempts to find the best wheel angle and velocity for the vehicle to use when navigating this path.
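
The "flat surface" test follows from the geometry of the downward-pitched beam: on flat ground, a beam pitched at angle θ from a sensor at height h should report a range of h / sin(θ), so returns near that value can be treated as ground. A minimal sketch, assuming a hypothetical sensor height of 0.5 m (the actual mounting height is not specified here):

```python
import math

def expected_flat_range(sensor_height_m, pitch_deg):
    """Range a downward-pitched beam should report when it hits flat ground."""
    return sensor_height_m / math.sin(math.radians(pitch_deg))

def is_flat(measured_range_m, sensor_height_m, pitch_deg, tol_m=0.05):
    """Flag a laser return as 'flat ground' if its range is close to expected.

    Shorter ranges suggest an obstacle; longer ranges suggest a drop-off.
    """
    expected = expected_flat_range(sensor_height_m, pitch_deg)
    return abs(measured_range_m - expected) <= tol_m

# Sensor 0.5 m up, pitched 45 degrees: flat ground reads ~0.707 m.
```

Pixels at image locations corresponding to "flat" returns would then feed the color-range selection.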

The final "product" of our solution will be both a hardware platform and a series of programs. When these programs are configured and run, the robot will autonomously complete all tasks for this competition. Many of the components mentioned in the general solution will be stand-alone programs that use inter-process communication (IPC) to pass component data. For example, the camera interface will pass images through IPC to the first image-processing component.

A module may be a series of functions, a class, or a series of classes. Each module communicates using a wrapper of the CMU IPC library, passing strings of data between components. Based on a configuration file, each component knows where and what each module is. These modules are as follows:

== Component List ==

 * Arduino Vehicle Interface
   * Client: Sends commands to and receives data from the MGC Vehicle
   * Server: Sends data to and receives commands from the on-board computer
 * Observer Client: A graphical user interface application that allows remote viewing and configuration of the system software
 * GPS Waypoint manager: Manages waypoint information and positional data
 * LIDAR Interface: A laser-range finder interface
 * Collision Interface: Takes the collision data and returns a warning message if a possible collision is detected
 * Laser Vision Merge: Takes collision data and a camera picture and returns the range of colors that makes up the road
 * Road Detection: Takes a range of colors for the road and returns a black-and-white image of what the road is
 * Projection Transform: Takes a given image and applies a projection transform
 * Forward Vehicle Projection: Given a top-down view of the road, predicts the best path for the vehicle to follow over time
 * Main Controller: A high-level manager of all components

Each component is discussed in detail in the Mini Grand Challenge Architecture article.
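
The Forward Vehicle Projection component can be sketched as scoring candidate wheel angles against the top-down collision map: simulate each arc forward for a short horizon and keep the angle that stays on drivable cells the longest. This is an illustrative sketch with a deliberately crude steering model, not the actual implementation:

```python
import math

def best_wheel_angle(drivable, x, y, heading, angles_deg, step=1.0, horizon=10):
    """Pick the steering angle whose projected arc stays drivable the longest.

    drivable: 2D list of 0/1 cells (top-down collision map), indexed [row][col].
    """
    best, best_score = None, -1
    for a in angles_deg:
        px, py, h = float(x), float(y), heading
        score = 0
        for _ in range(horizon):
            h += math.radians(a) * 0.1          # crude steering model
            px += step * math.cos(h)
            py += step * math.sin(h)
            r, c = int(round(py)), int(round(px))
            if 0 <= r < len(drivable) and 0 <= c < len(drivable[0]) and drivable[r][c]:
                score += 1
            else:
                break                           # left the drivable surface
        if score > best_score:
            best, best_score = a, score
    return best

# A straight drivable corridor along row 2: steering straight (0) wins.
grid = [[0] * 12 for _ in range(5)]
for c in range(12):
    grid[2][c] = 1
angle = best_wheel_angle(grid, x=0, y=2, heading=0.0, angles_deg=[-20, 0, 20])
```

The real component would also weigh velocity and smoothness, but the core idea is the same forward simulation over candidate controls.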

== Run-Time ==
The following is a diagram of the entire software system at run-time, showing how data passes and is manipulated. Do not confuse this diagram with how/when objects are created. Note how the Main Controller component (in the top-right) is the start and end point of this counter-clockwise cycle. Each major component in the diagram is a component listed in the Mini Grand Challenge Architecture. Note that certain lines, such as from the GPS interface to the main controller, are not explicitly drawn.



== Component Model ==
Each component is a stand-alone executable that runs in its own process. The reasoning for not using a threading system is code simplicity, clarity, and performance. This also allows us to deploy the processes across a varying number of computers without much difficulty. The CMU IPC library allows for efficient inter-process communication: it detects whether two communicating processes are on the same or different computers, and optimizes for each case. If a process were to die, the failure remains encapsulated, with an "observer" process noting the failure and then attempting to relaunch the process. This method of "resurrecting" components can help with run-time failures and allows us to continue without completely failing.

Another reason for stand-alone executables is parallelism in the development process. Each component has a specific input/output protocol definition, outlined per component in the Mini Grand Challenge Architecture. This design approach also allows us to test each component individually, rather than having to test the system as a whole.

Each component will have to derive from a base class, MGC Base Component, that wraps all initialization and communication protocols. This is done so that the component developer can focus on the component's task rather than boiler-plate code. The communication methods within this base class also maintain internal input and output buffers. These are used so that data does not have to be transferred with explicit get/set calls. Instead, data is "trickled" over time, such that a component's input holds the last n outputs from a dependency, reducing transfer times and increasing performance. The same is done with output: when new data is posted, any receiving process may choose at its own discretion when to take the data.
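
The "last n outputs" buffering can be sketched with a bounded deque. This is a minimal illustration of the idea, not the actual MGC Base Component internals:

```python
from collections import deque

class ComponentBuffer:
    """Keep the last n outputs of a dependency so readers never block writers."""

    def __init__(self, n=5):
        self.buf = deque(maxlen=n)   # old entries fall off automatically

    def post(self, data):
        self.buf.append(data)        # the producer trickles data in

    def latest(self):
        return self.buf[-1] if self.buf else None

    def history(self):
        return list(self.buf)        # last n outputs, oldest first
```

A consumer reads `latest()` (or the full `history()`) whenever it is ready, which is exactly the "take the data at your own discretion" behavior described above.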

Each component will also have to fulfill a performance requirement: complete the component's task within a certain amount of time. Once the task is complete, the process sleeps for the remainder of the allocated time. Most of this is managed by the MGC Base Component base class through overriding several critical virtual functions, namely the Update function stored in MGC Base Component. This is a major optimization; if all processes were to run as fast as possible, the entire system would respond too slowly to produce real-time solutions. There is also an issue of information redundancy, in which some processes re-calculate for an entire cycle without generating any new data. By setting these performance requirements, processes that do need the extra time are given priority, while processes that update infrequently are put to sleep.
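
The sleep-away-the-remainder pattern can be sketched as a fixed-period update loop. The class and method names are illustrative stand-ins for the MGC Base Component and its Update function:

```python
import time

class RateLimitedComponent:
    """Run update() at a fixed period, sleeping away any leftover time."""

    def __init__(self, period_s):
        self.period_s = period_s

    def update(self):
        raise NotImplementedError    # overridden by each concrete component

    def run_once(self):
        start = time.monotonic()
        self.update()                            # do the component's real work
        elapsed = time.monotonic() - start
        if elapsed < self.period_s:
            time.sleep(self.period_s - elapsed)  # yield CPU to busier processes
        return time.monotonic() - start          # total cycle time
```

A fast component therefore consumes almost no CPU, leaving the scheduler free to prioritize components that genuinely need their full time budget.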



== Component Communication ==
Component communication is fully managed using the MGC Base Component class. This base class provides initialization and deallocation routines, such that they register (or release) with the main controller, as well as obtain the relevant module interfaces. The actual communication is done through the CMU IPC library, which is wrapped by the MGC Base Component class.

Upon initialization, each component automatically registers itself with the main controller. This is done within the base component's constructor, through the generalized communication class. The component's name and input/output connections are sent. The IP address and port number of the main controller are predefined in a configuration file that the base component class automatically reads. Once the component is registered, a specific IP and port is returned for each of its dependencies, so that components can talk directly to each other rather than using the main controller as a "middle-man".
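
The registration handshake can be sketched with a toy in-memory registry. The class, method names, and addresses below are illustrative only; the actual system performs this over the CMU IPC library:

```python
class MainController:
    """Toy registry: components register and learn where their inputs live."""

    def __init__(self):
        self.registry = {}           # component name -> (ip, port)
        self.next_port = 9000        # hypothetical port assignment scheme

    def register(self, name, inputs):
        addr = ("127.0.0.1", self.next_port)
        self.next_port += 1
        self.registry[name] = addr
        # Return the direct address of each dependency, so components can
        # talk peer-to-peer instead of routing through this controller.
        return {dep: self.registry[dep] for dep in inputs if dep in self.registry}
```

For example, a Road Detection component that depends on the LIDAR Interface would receive the LIDAR Interface's address at registration time and open a direct connection to it.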

Over time, all components will have some form of input/output communication. All input/output is managed by the generalized component class and buffered until needed. When asking for input, the component requests data from the communication class based on a string format. When posting output, the same general method is used. It is important to note that a single component can have multiple input and output connections. The system can almost be seen as a continuous "engine" of data: each component asks for data, calculates, and outputs, even if no one has yet requested its data.