CS6670 Project 1 Report

Bennett Rummel - bwr38

Custom Feature Descriptor

The custom feature descriptor I chose is a modified version of the MOPS descriptor from the lecture notes. Instead of converting the image to greyscale before computing the descriptor, I handle each color channel independently, and each channel's descriptor is intensity-normalized independently of the others. It should retain many of the same properties as the MOPS descriptor but could be helpful in environments where color is more uniform; I was hoping, for example, that it would not match two features that are different colors but appear the same in greyscale. Because all three channels are kept, each feature takes up 3x as much data, which can add up and may become a problem for large feature sets. Additionally, it may be difficult to match images taken with different cameras, since some cameras may handle color differently. In general, this descriptor did not perform as well as I had hoped and may be useful only in special circumstances: it did better than the simple descriptor but not as well as the standard MOPS descriptor.
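The per-channel handling described above can be sketched as follows. This is a minimal illustration, not the project code: the helper names (normalizeChannel, colorDescriptor) are hypothetical, and the patches are represented as flat vectors of samples rather than the skeleton's image types.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Normalize one channel's samples to zero mean and unit variance.
// (Hypothetical helper; not a name from the project skeleton.)
std::vector<double> normalizeChannel(const std::vector<double>& samples) {
    double mean = 0.0;
    for (double s : samples) mean += s;
    mean /= samples.size();

    double var = 0.0;
    for (double s : samples) var += (s - mean) * (s - mean);
    double stdev = std::sqrt(var / samples.size());
    if (stdev < 1e-8) stdev = 1.0;  // avoid dividing by zero on flat patches

    std::vector<double> out(samples.size());
    for (std::size_t i = 0; i < samples.size(); ++i)
        out[i] = (samples[i] - mean) / stdev;
    return out;
}

// Build the color descriptor: normalize R, G, and B independently and
// concatenate them, producing a descriptor 3x the greyscale length.
std::vector<double> colorDescriptor(const std::vector<double>& r,
                                    const std::vector<double>& g,
                                    const std::vector<double>& b) {
    std::vector<double> desc;
    for (const std::vector<double>* ch : {&r, &g, &b}) {
        std::vector<double> n = normalizeChannel(*ch);
        desc.insert(desc.end(), n.begin(), n.end());
    }
    return desc;
}
```

Because each channel is normalized on its own, two patches that differ only in overall color balance still produce different descriptors, which is the behavior motivating this design.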

Design Choices

I didn't change much of the existing skeleton code besides where I implemented my own functions. The only change I can think of is that I added 2 additional bands to the harrisImage object in ComputeHarrisFeatures(). This was done so I could calculate the angle of the corner in ComputeHarrisValues(): in my version of the code, the angle of the eigenvector corresponding to the larger eigenvalue of the Harris matrix is calculated for every point in the image and stored in the 2nd band of harrisImage. This is not an elegant solution, but I did not want to go back and recompute the Harris matrix for each feature, so I reused the existing values. In practice, this does not seem to cause any significant slowdown; images still complete on the order of milliseconds. I also found a corner strength function threshold of about 0.015 to be a suitable middle ground between the number of features and noise.
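The angle stored in the extra band can be computed in closed form for the 2x2 Harris matrix H = [[a, b], [b, c]]: the larger eigenvalue is (a+c)/2 + sqrt(((a-c)/2)^2 + b^2), and (b, lmax - a) is a corresponding eigenvector when b is nonzero. The sketch below shows that calculation on scalar inputs; the function name is hypothetical, and the real code operates on image bands rather than individual values.

```cpp
#include <cmath>

// Angle (in radians) of the eigenvector belonging to the larger eigenvalue
// of the symmetric 2x2 matrix H = [[a, b], [b, c]].
double dominantEigenvectorAngle(double a, double b, double c) {
    const double PI = std::acos(-1.0);
    double half = 0.5 * (a - c);
    double lmax = 0.5 * (a + c) + std::sqrt(half * half + b * b);
    // When b == 0 the matrix is diagonal and the axes are its eigenvectors.
    if (b == 0.0)
        return (a >= c) ? 0.0 : PI / 2.0;
    // Otherwise (b, lmax - a) is an eigenvector for lmax, since
    // H * (b, lmax - a) = lmax * (b, lmax - a) by the characteristic equation.
    return std::atan2(lmax - a, b);
}
```

For example, H = [[1, 1], [1, 1]] has dominant eigenvector (1, 1), giving an angle of pi/4.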

Performance Metrics

Please note: for some reason, the code segfaults during the matching phase when loading any of the "bikes" images. I have been unable to determine why, as all of the other images work correctly. This occurs even with DummyComputeFeatures() and SSD matching. For this reason, those results have been omitted from this document.

ROC Curves

Yosemite Images
ROC, Yosemite, Simple Descriptor
ROC Curve with Simple Descriptor


Threshold, Yosemite, Simple Descriptor
Threshold with Simple Descriptor


ROC, Yosemite, Custom Descriptor
ROC Curve with Custom Descriptor


Threshold, Yosemite, Custom Descriptor
Threshold with Custom Descriptor


Graf Images
ROC, graf, Simple Descriptor
ROC Curve with Simple Descriptor


Threshold, graf, Simple Descriptor
Threshold with Simple Descriptor


ROC, graf, Custom Descriptor
ROC Curve with Custom Descriptor


Threshold, graf, Custom Descriptor
Threshold with Custom Descriptor


Harris Images

Yosemite1.jpg Harris Image
Harris Image for Yosemite1.jpg


graf/img1.ppm Harris Image
Harris Image for graf/img1.ppm

AUC Scores

Image Set   Simple Descriptor         Custom Descriptor
            SSD        Ratio Test     SSD        Ratio Test
graf        0.524902   0.445869       0.560375   0.568705
leuven      0.410283   0.551652       0.538362   0.598124
bikes       ?          ?              ?          ?
wall        0.568017   0.442771       0.560919   0.592459

Sample Images

I do not have a camera, but here are some screenshots of the program's GUI showing the features found in some of the sample images.

Bikes Screenshot
The image here is "bikes/img1.ppm". For some reason very few features were detected in this image.


Leuven Screenshot
The image here is "leuven/img1.ppm". There are features throughout the shot, but they seem to be clustered around trees. This is seen elsewhere as well.