Image Matching using Local Symmetry Features
by Daniel Cabrini Hauagge and Noah Snavely
Abstract
We present a new technique for extracting local features from images of architectural scenes, based on detecting and representing local symmetries. These new features are motivated by the fact that local symmetries, at different scales, are a fundamental characteristic of many urban images, and are potentially more invariant to large appearance changes than lower-level features such as SIFT. Hence, we apply these features to the problem of matching challenging pairs of photos of urban scenes. Our features are based on simple measures of local bilateral and rotational symmetries computed using local image operations. These measures are used both for feature detection and for computing descriptors. We demonstrate our method on a challenging new dataset containing image pairs exhibiting a range of dramatic variations in lighting, age, and rendering style, and show that our features can improve matching performance for this difficult task.
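To make the general idea of a local symmetry score concrete, the toy sketch below rates how well a small window around each pixel matches its own left-right mirror image. This is only an illustration of scoring local bilateral symmetry; it is not the SYM-I or SYM-G measures described in the paper, and the function name, window radius, and scoring formula are illustrative assumptions.

```python
# Toy illustration of a local bilateral-symmetry score (not the paper's
# SYM-I / SYM-G measures): compare each local window with its horizontal
# mirror about the center pixel. Only numpy is assumed.
import numpy as np

def horizontal_symmetry_score(image, radius=8):
    """Return a per-pixel score in (0, 1]; higher means the window around
    that pixel is more similar to its left-right mirror."""
    h, w = image.shape
    scores = np.zeros((h, w), dtype=np.float32)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            window = image[y - radius:y + radius + 1, x - radius:x + radius + 1]
            mirrored = window[:, ::-1]                 # reflect about the vertical axis
            diff = np.abs(window - mirrored).mean()    # mean absolute asymmetry
            scores[y, x] = 1.0 / (1.0 + diff)          # 1 = perfectly symmetric
    return scores

if __name__ == "__main__":
    img = np.random.rand(64, 64).astype(np.float32)
    s = horizontal_symmetry_score(img)
    print(s.shape, s.max())
```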
Downloads
Results
Keypoints. The top keypoints, ranked by scale and by score, for all images in our dataset and for three detectors: Sym-I, Sym-G, and SIFT. Figures 5 and 6 of the paper present similar results.
Deteval. Detector repeatability, measured as the fraction of matched top keypoints. The set of top keypoints is obtained by sorting keypoints by scale or by score. Results for two image pairs are shown in Figure 7, and a table summarizing results for the whole dataset is given in Table 1. For more details on how these curves were obtained, refer to Subsection 6.1 of the paper; a simplified sketch of this kind of measure appears below.
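The sketch below computes a repeatability-style number: the fraction of one image's top-K keypoints that land near some top-K keypoint of the other image after mapping through a known correspondence. The function name, the homography input, the value of K, and the pixel tolerance are illustrative assumptions, not the exact protocol of Subsection 6.1.

```python
# Hedged sketch of detector repeatability under a known homography.
import numpy as np

def repeatability(kp_a, kp_b, H, top_k=100, pixel_tol=5.0):
    """kp_a, kp_b: arrays of shape (N, 3) with columns (x, y, score).
    H: 3x3 homography mapping image A coordinates into image B."""
    top_a = kp_a[np.argsort(-kp_a[:, 2])][:top_k]   # sort by score, keep top K
    top_b = kp_b[np.argsort(-kp_b[:, 2])][:top_k]

    # Project A's top keypoints into B's frame.
    pts = np.hstack([top_a[:, :2], np.ones((len(top_a), 1))])
    proj = (H @ pts.T).T
    proj = proj[:, :2] / proj[:, 2:3]

    # A keypoint counts as repeated if some top keypoint of B lies within pixel_tol.
    dists = np.linalg.norm(proj[:, None, :] - top_b[None, :, :2], axis=2)
    matched = (dists.min(axis=1) <= pixel_tol).sum()
    return matched / float(len(top_a))
```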
Desceval. Precision-recall curves evaluating the performance of our local-symmetry-based descriptor on an image matching task. In total, 9 plots from this experiment are included in Figure 8 of the paper, and the mean average precision for the full dataset is reported in Table 2. Here we show the precision-recall curves for all image pairs. For more details on how these curves were obtained, refer to Subsection 6.2 of the paper; a simplified sketch of this evaluation follows.
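The sketch below shows one common way to obtain such curves: given descriptor-match distances and ground-truth correctness labels for one image pair, sweep a distance threshold to trace precision versus recall and report average precision. Averaging the per-pair values gives a mean average precision. This is a simplification using scikit-learn, not the evaluation code behind Figure 8 or Table 2, and the data in the usage example is synthetic.

```python
# Illustrative precision-recall / average-precision computation for
# descriptor matching (assumes numpy and scikit-learn are available).
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

def descriptor_pr(distances, is_correct):
    """distances: 1D array, smaller = more confident match.
    is_correct: 1D boolean array, True where the match is a true correspondence."""
    scores = -np.asarray(distances)          # flip sign so higher score = better match
    precision, recall, _ = precision_recall_curve(is_correct, scores)
    ap = average_precision_score(is_correct, scores)
    return precision, recall, ap

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = rng.random(200)                                        # synthetic match distances
    labels = (d + 0.1 * rng.standard_normal(200)) < 0.3        # synthetic ground truth
    p, r, ap = descriptor_pr(d, labels)
    print(f"average precision: {ap:.3f}")
```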
Acknowledgements. This work is supported by the National Science Foundation under grant IIS-0964027 and by the Intel Science and Technology Center for Visual Computing.