Testing
I'll be straight with you: many students find this assignment pretty frustrating. Why? Because there are so many ways to get something just a little bit wrong. That's the thing with image operations, though: tiny errors can accumulate quickly, and what looks invisible to us could be enough to, for example, bias what a neural network learns and prevent it from generalizing to new data. So testing your code will be important. Here are some tips.
Most errors are a result of rounding too soon or incorrectly. To make your life easier, double-check the following:
- You understand the difference between int() and round() (see the sketch after this list)
- Make sure to truncate/round as late as possible. Doing these operations too early will cost precision unnecessarily.
- Check that you are handling boundary/edge cases accurately
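To make the first two points concrete, here is a quick, self-contained illustration (the numbers are arbitrary, not assignment values) of how int() and round() differ in Python 3 and how rounding intermediate values compounds error:

```python
# int() truncates toward zero; round() rounds to the nearest value,
# and Python 3 rounds exact halves to the nearest even number.
print(int(2.7))    # 2  (truncated)
print(round(2.7))  # 3  (rounded)
print(int(-2.7))   # -2 (truncates toward zero, not downward)
print(round(2.5))  # 2  (banker's rounding: ties go to the even side)
print(round(3.5))  # 4

# Rounding too early compounds error. Compare rounding every
# intermediate value against rounding only the final result:
x = 10.4
early = round(round(x * 0.5) * 0.5)  # round(5.2) -> 5, round(2.5) -> 2
late = round(x * 0.5 * 0.5)          # round(2.6) -> 3
print(early, late)                   # 2 3 -- already off by one
```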
Rather than provide tests for A3, we are giving you reference images that your outputs should match. There are a few ways to check that your implementation is correct and that your outputs match the reference images (or at least come extremely close).
Checking Output and Reference are Equal
The first is provided for you: many of the cells already contain code that looks something like this:
print("Image is same as reference: ", np.array_equal(our_ref_image, your_output_image))
display(Image.fromarray(your_output_image))
If your output matches the reference image exactly, this will print True. If even one channel of one pixel is off by 1, it will print False.
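If that check prints False, a couple of quick sanity checks can narrow things down before you start hunting for pixel-level bugs. Here is a minimal sketch, assuming the variable names from the cell above and uint8 image arrays:

```python
import numpy as np

# np.array_equal is False whenever shapes differ, and float outputs
# that were never cast back to uint8 will rarely match exactly.
print("shapes:", our_ref_image.shape, your_output_image.shape)
print("dtypes:", our_ref_image.dtype, your_output_image.dtype)

# Cast to a signed type before subtracting: uint8 arithmetic wraps
# around (0 - 1 becomes 255), which exaggerates small differences.
diff = your_output_image.astype(np.int32) - our_ref_image.astype(np.int32)
print("max absolute difference:", np.abs(diff).max())
```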
Debugging
Unfortunately, your output may not exactly match the reference even when the two look the same. If the errors are almost imperceptible and can't be found by visually inspecting your output image, we recommend you move on to the rest of the assignment and return later. For example, the reference images distortion-rotate-reference.png and distortion-rotate-reference-wrong.png look identical, but the Comparing Images section of the NumPy_Tutorial notebook shows that they are actually different.
Don't spend too much time debugging if your output and the reference look the same but are not 100% equal. We had a few rare instances last year where different systems behaved very slightly differently, so some students with correct implementations got slightly different results when they ran their code on their own machines. Using the recommended installation should help avoid this, but if you find yourself spending a long time trying to fix an extremely small error, let us know, move on, and come back to it later. If this happens again we will try to announce an error threshold or figure out a way to compute alternative reference results.
Speaking of the "Comparing Images" section in the NumPy_Tutorial notebook, please use what we've given you to figure out what is wrong with your code. By examining the differences between your output and the reference, you can work backward to where the bugs are. In that section, we've provided code to:
- Check the max pixel difference across channels
- Check max and average pixel difference across the entire image
- Create an interactive plot that shows a color map of pixel differences and lets you hover over each pixel to see its value
A good idea is to create a testing notebook with functions that take in reference and output images and run a series of comparisons. The ones listed in the NumPy_Tutorial notebook will get you started, but feel free to add other methods; one helpful addition is to take the difference of two images and display it, as in the sketch below.
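As a starting point, here is a minimal sketch of such a helper. The function name and the use of matplotlib are our own choices for illustration, not part of the assignment scaffolding:

```python
import numpy as np
import matplotlib.pyplot as plt

def compare_images(ref, out):
    """Print summary statistics and display a heatmap of differences.

    Assumes ref and out are same-shape uint8 image arrays.
    """
    # Signed subtraction avoids uint8 wraparound.
    diff = np.abs(out.astype(np.int32) - ref.astype(np.int32))
    print("exact match:    ", np.array_equal(ref, out))
    print("max difference: ", diff.max())
    print("mean difference:", diff.mean())

    # Collapse channels so each pixel gets its worst-case error.
    per_pixel = diff.max(axis=-1) if diff.ndim == 3 else diff
    plt.imshow(per_pixel, cmap="hot")
    plt.colorbar(label="max abs difference per pixel")
    plt.title("Output vs. reference")
    plt.show()
```

Where the errors live is often as informative as their size: differences concentrated along the image border usually point at boundary handling, while a uniform haze of off-by-one pixels usually points at rounding.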
I've added helperfuncs.py and HelperFunctionExamples.ipynb as examples of how to write and use helper functions across different notebooks. Please do not include any of these functions in your imaging.py file, though, as we will not import it when we test your code.
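For example, a testing notebook sitting in the same directory as helperfuncs.py could then do something like this (compare_images is a hypothetical helper standing in for whatever you write):

```python
# helperfuncs.py lives next to the notebook, so it can be imported
# directly; compare_images is a hypothetical example helper.
from helperfuncs import compare_images

compare_images(our_ref_image, your_output_image)
```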