Your task in this milestone is to complete your third and final sprint, and to submit official peer evaluations.
Build and Test
There won’t be any other programming assignments during this sprint. You have a full 14 days (not counting Thanksgiving Break) to work on it. You can and should take Thanksgiving Break off.
Demos, progress reports, and submission will all work the same as in MS1.
Total: 50 points.
10 points: demo, source code, progress report. Same as MS1 and MS2.
10 points: code quality and documentation. This will be graded much like it has been throughout the semester. You must provide a working make docs command that the grader can run to extract HTML documentation (an example of the kind of documentation comments involved appears after this list). A detailed rubric can be found here.
10 points: testing. Your source code must include an OUnit test suite that the grader can run with a make test command that you provide. You are also encouraged to use Bisect, but that is not required. At the top of your test file, which should be named test.ml or something very similar so that the grader can find it, please write a (potentially lengthy) comment describing your approach to testing: what you tested, anything you omitted testing, and why you believe that your test suite demonstrates the correctness of your system (a sketch of such a file appears after this list). A detailed rubric can be found here.
20 points: system size. With over 100 projects being built, it’s quite difficult to compare the effort expended by teams or the functionality they achieved. So we will instead use an objective metric that is a proxy for effort and achievement: physical lines of code (LOC), which means non-blank, non-comment lines. It includes any testing code you have written. You can easily measure LOC with a tool called cloc: just run cloc . in your source directory, and look at the count reported for OCaml. But first, run make clean, so that the code generated in _build is not included in the count.
We expect there to be about 500 physical lines of OCaml code per person on your team. (Why 500? It’s approximately the median measurement for past projects submitted in this course, before we instituted this rubric.) We will compare your LOC/person to other teams, including this semester and previous semesters. Your score will be a sigmoid function of those data. Systems that are undersized will lose points: the penalties will be extremely small at first (so, the 500 number is not a hard requirement), but as system size decreases, the penalties will increase. Oversized systems will not receive any bonus points, though they could be excellent entries in your portfolio to show off to potential employers.
LOC is a metric that could be gamed: a team could artificially increase its LOC count by adding some “dead code” that isn’t really needed in the system. Bear in mind that graders will be reading and evaluating your source code for this milestone. If evidence were discovered that a team had unethically inflated its LOC, it would likely result in an Academic Integrity case.
You are welcome to add a file named LOC.txt to your submission to explain anything you think we should know about the measurement of your system or how you think it should be interpreted. These files will be read before we make any large deductions.
Update (12/16/19): Here is a graph showing the scoring function we used. The shape of the curve was partly determined by statistics of all the projects submitted this semester.
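For intuition only, a generic logistic (sigmoid) curve has the form sketched below; the symbols s_max, k, and x_0 are placeholders for illustration, not the parameters actually used for grading.

```latex
% Illustrative only: a generic logistic (sigmoid) shape, not the actual
% scoring function. Requires amsmath for \text.
\[
  \mathrm{score}(x) \;=\; \frac{s_{\max}}{1 + e^{-k\,(x - x_0)}},
  \qquad x = \text{LOC per person}.
\]
% A curve of this shape levels off near s_max for large x (no bonus for
% oversized systems) and, as x decreases, at first drops only slightly
% and then falls off more steeply.
```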
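For reference, here is a small, purely hypothetical sketch of the kind of ocamldoc-style documentation comments a make docs command would extract into HTML; the module and function names are invented for illustration, and your own target can invoke whatever tool (e.g., ocamldoc or odoc) your build already uses.

```ocaml
(* Illustration only: the module and functions below are made up; what
   matters is the documentation-comment style that make docs would extract. *)

(** [Inventory] tracks the items held by a player. *)
module type Inventory = sig
  (** The abstract type representing an inventory. *)
  type t

  (** [empty] is the inventory containing no items. *)
  val empty : t

  (** [add item inv] is [inv] with one copy of [item] added.
      Requires: [item] is a nonempty string. *)
  val add : string -> t -> t

  (** [size inv] is the number of items in [inv]. *)
  val size : t -> int
end
```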
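Likewise, here is a hypothetical sketch of what a test.ml with the requested header comment might look like; the module under test, the stubbed functions, and the test cases are invented for the example, and your make test command simply needs to build and run your real suite.

```ocaml
(* Test plan (illustrative only): this header comment is the kind of thing
   the grader expects to find at the top of test.ml.

   Approach to testing: the inventory functions were tested with OUnit using
   black-box tests developed from their specifications; the command-line
   interface was tested manually. We believe this demonstrates correctness
   because every exposed function is exercised on typical and boundary
   inputs. *)

open OUnit2

(* Hypothetical functions under test, stubbed here so the sketch is
   self-contained; a real suite would test your project's modules instead. *)
let add x lst = x :: lst
let size lst = List.length lst

let tests =
  "inventory test suite" >::: [
    "empty has size zero" >:: (fun _ -> assert_equal 0 (size []));
    "add increases size"  >:: (fun _ -> assert_equal 1 (size (add "sword" [])));
  ]

let _ = run_test_tt_main tests
```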
See this page for how peer evaluations will work.