MGC Road Detection

= General Description =

This component detects the road in a given camera image using known road colors. It returns a black-and-white image in which the white area is valid road and black is any obstacle (e.g., a person standing on the road should appear black). Small islands of misclassified pixels may appear either on the road or in collision space, but this noise should be cleaned up within this component.

== Algorithm ==

 * 1) Input: Accept a list of points that are on the road. These may not arrive every cycle; when new points arrive, they overwrite the old ones.
 * 2) Input: An image to be segmented.
 * 3) CHOICE (one of the following two):
 * 4) Grab the pixel values at the road points when the points come in, and take the mean and covariance of these values.
 * 5) Grab the pixel values of the new image at the same positions as the pixels we were told were on the road, and take the mean and covariance of these values.
 * 6) Use the covariance to calculate the Mahalanobis distance (using OpenCV functions) between every pixel of the input image and the mean. Store these values in a single-channel 32-bit float (CV_32FC1) image.
 * 7) Threshold this CV_32FC1 image. The threshold can be chosen empirically, or by splitting the "training set" of pixels into one set for estimating the mean/covariance and another for choosing the threshold given that mean/covariance.
 * 8) Return the thresholded image.
 * 9) OPTIONAL: Apply open/close morphological operations to clean up "islands" of black in the "sea" of white.
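The steps above can be sketched in Python with NumPy. This is a hedged sketch, not the component's actual implementation; the function and parameter names are invented for illustration. In the real component, OpenCV routines such as `cv2.Mahalanobis`, `cv2.threshold`, and `cv2.morphologyEx` would handle steps 6, 7, and 9; here the distance is computed directly so the sketch is self-contained:

```python
import numpy as np

def detect_road(image, road_points, threshold):
    """Segment `image` into road (white, 255) and not-road (black, 0).

    image       -- H x W x 3 uint8 array (the camera frame)
    road_points -- (x, y) pixel coordinates known to lie on the road
    threshold   -- empirically chosen Mahalanobis-distance cutoff
    """
    # Steps 4/5: sample pixel values at the known road points and
    # compute their mean and inverse covariance.
    samples = np.array([image[y, x] for (x, y) in road_points], dtype=np.float64)
    mean = samples.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(samples, rowvar=False))

    # Step 6: Mahalanobis distance of every pixel from the road mean,
    # stored as a single-channel 32-bit float image (the "CV_32FC1" image).
    diff = image.astype(np.float64) - mean
    dist = np.sqrt(np.einsum('hwi,ij,hwj->hw', diff, inv_cov, diff)).astype(np.float32)

    # Step 7: pixels close to the road colour distribution become white.
    return np.where(dist <= threshold, 255, 0).astype(np.uint8)
```

Step 9's optional clean-up would follow as a morphological open then close on the returned mask (e.g. `cv2.morphologyEx` with `cv2.MORPH_OPEN` / `cv2.MORPH_CLOSE`).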

== Owners ==

 * Adam.brockett
 * Matt Jones

= Performance Requirements =

This component must complete a cycle of road detection within half a second.

= Input =


 * From LaserVisionFusion
 ** List of points on the image that correspond to the road
 ** Data type name: laserVisionRoad
 ** Actual type: an int count of the number of points included in this message, followed by (x,y) pairs of coordinates of pixels on the road
 ** Note: these points are (x,y) pairs relative to the current OpenCV camera image
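Assuming the message is laid out as 32-bit little-endian integers (an assumption; match the real IPC definition), the laserVisionRoad format of an int count followed by (x,y) pairs can be packed and unpacked like this:

```python
import struct

def pack_road_points(points):
    """Serialize a laserVisionRoad message: an int count followed by
    (x, y) integer pairs. The wire layout (32-bit little-endian ints)
    is an assumption -- match it to the actual IPC definition."""
    payload = struct.pack('<i', len(points))
    for x, y in points:
        payload += struct.pack('<ii', x, y)
    return payload

def unpack_road_points(payload):
    """Inverse of pack_road_points: returns the list of (x, y) pairs."""
    (count,) = struct.unpack_from('<i', payload, 0)
    return [struct.unpack_from('<ii', payload, 4 + 8 * i) for i in range(count)]
```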

= Output =


 * A black-and-white image showing what is flat/drivable road and what is not
 * Data type name: FusionImage
 * Actual type: an OpenCV image in which drivable area is white and non-drivable area is black

= Related Links =
 * Road segmentation paper
 * Example using OpenCV. It appears to be optimized for tracking color despite changes in lighting, which makes it robust to changes in gray scale. Unfortunately, those are exactly the details we want to track.
 * Plain-English description of the Mahalanobis distance, and what makes it different from the Euclidean distance (which is essentially what we are using when we subtract the mean from each pixel directly and threshold the result).
 * Paper suggesting an advanced form of the Mahalanobis distance specifically for hard-to-follow paths such as forest trails. I used the standard Mahalanobis distance algorithm outlined here. Pay particular attention to the results (note how much better Mahalanobis performs) and the way the "training set" is segmented.
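The Euclidean-vs-Mahalanobis difference matters for road pixels. In the invented example below, road samples vary a lot in overall brightness but little across it; a brighter road pixel is then far from the mean in Euclidean terms but close in Mahalanobis terms, while an off-colour pixel of similar brightness is the reverse:

```python
import numpy as np

# Hypothetical road samples: strong variation along brightness, little
# across it. All values are invented for illustration.
samples = np.array([[ 80,  82,  78],
                    [100, 101, 100],
                    [120, 119, 121],
                    [ 90,  91,  89],
                    [110, 108, 112]], dtype=np.float64)
mean = samples.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(samples, rowvar=False))

def euclidean(p):
    return float(np.linalg.norm(p - mean))

def mahalanobis(p):
    d = p - mean
    return float(np.sqrt(d @ inv_cov @ d))

bright_road = np.array([130.0, 129.0, 131.0])  # same colour, just brighter
off_colour  = np.array([100.0, 120.0,  80.0])  # similar brightness, wrong hue

# Euclidean ranks off_colour as the closer of the two; Mahalanobis,
# having learned the brightness direction from the samples, correctly
# ranks bright_road as the more road-like pixel.
```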

= Developer Discussion =
 * There is a consistency issue which comes from using the three-step image process: How do we keep using the same image? Do we have to explicitly pass it between each component, and what's the overhead on that? JBridon 07:00, 27 January 2010 (UTC)
 * Agreed on overhead. We can't use the same image: with the three discrete steps we currently have, we must keep 3 images in memory because the steps will run concurrently. From there we have two choices. The first is to explicitly pass the images around using the IPC mechanisms. The second is to have a 3*sizeof(Image)-sized buffer shared between all three components, pass around pointers into this buffer, and edit the images in place. That avoids the overhead of copying, but setting up the shared memory and keeping everything sane requires more thought. I'd say go with the first unless it proves too resource-intensive, which brings us to your last question. I can't really say, as you have more insight into how the IPC mechanism works, but it's past time to test that out. There is some testing code in ProjectionInterface that should provide a good base. Adam.brockett 09:08, 27 January 2010 (UTC)
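The second option above (one shared buffer edited in place, with only small slot indices passed over IPC) can be sketched with Python's standard shared_memory module. The image shape, slot count, and function names are assumptions for illustration, not the project's actual layout:

```python
import numpy as np
from multiprocessing import shared_memory

IMAGE_SHAPE = (480, 640, 3)             # assumed camera resolution
N_IMAGES = 3                            # one slot per pipeline stage
SLOT_BYTES = int(np.prod(IMAGE_SHAPE))  # bytes per uint8 image

def create_image_buffer():
    """Allocate the 3*sizeof(Image) shared buffer proposed above."""
    return shared_memory.SharedMemory(create=True, size=N_IMAGES * SLOT_BYTES)

def image_slot(shm, index):
    """A zero-copy NumPy view onto slot `index`; edits happen in place,
    so only the slot index needs to travel over IPC."""
    start = index * SLOT_BYTES
    return np.ndarray(IMAGE_SHAPE, dtype=np.uint8,
                      buffer=shm.buf[start:start + SLOT_BYTES])
```

Another component attaches with `shared_memory.SharedMemory(name=...)` and sees the same pixels; cleanup (close/unlink) and synchronizing writers are the parts that still need the extra thought mentioned above.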