Hausdorff-Based Image Comparison

We have developed a fast, reliable method for comparing binary images based on the generalized Hausdorff measure. The generalized Hausdorff measure determines the resemblance of one point set to another by examining the fraction of points in one set that lie near points in the other set (and perhaps vice versa). Two parameters decide whether two point sets resemble one another under this measure: (i) the maximum distance by which points can be separated and still be considered close together, and (ii) the fraction of points in one set that must be at most this distance away from points of the other set. Hausdorff-based distance measures differ from correspondence-based matching techniques, such as point matching methods and binary correlation, in that there is no pairing of points in the two sets being compared. Often in matching and recognition problems, the two images are allowed to undergo some kind of geometric transformation in the matching process. In this case we are concerned with finding the transformations of one image that produce good matches to the other image. We have developed efficient search techniques for the case where the transformation is a translation, or a translation and a scaling.
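To make the measure concrete, here is a minimal C sketch of the directed version: the fraction of points in one set whose nearest neighbor in the other set lies within distance d, which is then compared against the fraction threshold. The point representation and brute-force scan are illustrative assumptions; an efficient implementation would typically precompute a distance transform of the second set rather than scanning all pairs.

    #include <math.h>
    #include <stddef.h>

    typedef struct { double x, y; } Point;

    /* Directed match score: the fraction of points in a[] whose nearest
     * point in b[] lies within distance d.  The sets are judged to
     * resemble one another when this fraction meets a chosen threshold.
     * Brute force, O(na * nb), for clarity only. */
    double match_fraction(const Point *a, size_t na,
                          const Point *b, size_t nb, double d)
    {
        size_t near_count = 0;
        for (size_t i = 0; i < na; i++) {
            double best = INFINITY;
            for (size_t j = 0; j < nb; j++) {
                double dist = hypot(a[i].x - b[j].x, a[i].y - b[j].y);
                if (dist < best)
                    best = dist;
            }
            if (best <= d)
                near_count++;
        }
        return na > 0 ? (double)near_count / (double)na : 0.0;
    }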

For more information on the generalized Hausdorff measure, there is a brief introduction, as well as our paper "A Multi-Resolution Technique for Comparing Images Using the Hausdorff Distance". A C implementation of Hausdorff matching (for matching with translation or with translation and scaling) is available as a tar file via ftp.

We have used the generalized Hausdorff measure to search for a two-dimensional model (a point set represented as a bitmap) in a bitmap image (usually the intensity edges from some image) under various transformations. For example:

The first image is a two-dimensional bitmap that serves as a model view of an object (Kevin sitting on a couch).

The second image is a two-dimensional bitmap (intensity edges) in which we want to locate this model, under the transformation of two-dimensional translation and scaling (that is, the model is allowed to move in x and y, and also to scale separately in each of these dimensions, for a total of four transformation parameters). The edge bitmap was computed from the underlying intensity image using the Canny edge detector.
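To make the four transformation parameters concrete, the sketch below applies a translation (tx, ty) and per-axis scaling (sx, sy) to a model point, and scans a discretized transformation space for the best-scoring transformation. The ranges, step sizes, and the score callback are illustrative assumptions only; the efficient search techniques mentioned above avoid this kind of exhaustive enumeration.

    typedef struct { double x, y; } Point;
    typedef struct { double tx, ty, sx, sy; } Xform;

    /* Apply the four-parameter transformation: scale each axis
     * independently, then translate. */
    Point apply_xform(Xform t, Point p)
    {
        Point q = { t.sx * p.x + t.tx, t.sy * p.y + t.ty };
        return q;
    }

    /* Exhaustive scan of a discretized transformation space, keeping
     * the best-scoring transformation.  score() stands in for the
     * Hausdorff match fraction; all ranges and steps are placeholders. */
    Xform best_xform(double score(Xform))
    {
        Xform best = { 0.0, 0.0, 1.0, 1.0 };
        double best_score = -1.0;
        for (double sx = 0.5; sx <= 2.0; sx += 0.1)
            for (double sy = 0.5; sy <= 2.0; sy += 0.1)
                for (double tx = 0.0; tx < 320.0; tx += 1.0)
                    for (double ty = 0.0; ty < 240.0; ty += 1.0) {
                        Xform t = { tx, ty, sx, sy };
                        double s = score(t);
                        if (s > best_score) {
                            best_score = s;
                            best = t;
                        }
                    }
        return best;
    }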

The third image shows the best transformation (translation and scaling) of the model with respect to the image, in the sense that it maximizes the fraction of model edge points that lie near image edge points (within 1 pixel, including diagonally). The green points are image edges, the red points are transformed model edges, and the yellow points are locations where an image edge and a transformed model edge coincide. Note that there are many red locations adjacent to green ones; these near matches would not be detected by a method such as binary correlation.
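Under the "within 1 pixel diagonally" criterion, a transformed model point counts as matched when any pixel in its 3x3 neighborhood is an image edge. Here is a minimal sketch of that test for a model under pure translation; the bitmap dimensions and array layout are hypothetical.

    #include <stddef.h>

    #define W 320
    #define H 240

    /* Count how many model edge points land within one pixel (including
     * diagonal neighbors) of an image edge point, for a model translated
     * by (tx, ty).  img is a binary edge bitmap; (mx[i], my[i]) are the
     * model edge points. */
    size_t count_near(const unsigned char img[H][W],
                      const int mx[], const int my[], size_t n,
                      int tx, int ty)
    {
        size_t hits = 0;
        for (size_t i = 0; i < n; i++) {
            int x = mx[i] + tx, y = my[i] + ty;
            int found = 0;
            /* scan the 3x3 neighborhood of the transformed point */
            for (int dy = -1; dy <= 1 && !found; dy++)
                for (int dx = -1; dx <= 1 && !found; dx++) {
                    int xx = x + dx, yy = y + dy;
                    if (xx >= 0 && xx < W && yy >= 0 && yy < H && img[yy][xx])
                        found = 1;
                }
            if (found)
                hits++;
        }
        return hits;
    }

Dividing hits by n gives the match fraction that the search maximizes.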
