Reviewers do not all assign scores to papers on the same scale: some tend to give lower scores and some higher. Inconsistent scoring can affect which papers are discussed in program committee meetings and which papers are advocated for, and how strongly.
This tool is intended to help program committee chairs compensate for scoring bias. It suggests a compensation to normalize review scores, and plots average score versus normalized score for each paper.
All processing is done locally in the browser: your data will not be transmitted elsewhere.
Paper and reviewer names may be any strings. Scores are numeric but need not be integral or positive. [example file] [How to extract this from HotCRP]
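To illustrate the idea of compensating for scoring bias, here is a minimal sketch in JavaScript of one simple approach: estimate each reviewer's bias as their average deviation from the per-paper mean score, then subtract it. This is only an illustration of the concept; the tool's actual latent-factor model may differ, and the function names and input format below are assumptions, not the tool's API.

```javascript
// Illustrative sketch (not the tool's actual model): input is an array of
// (paper, reviewer, score) triples, matching the data format described above.
function reviewerBiases(reviews) {
  // Compute each paper's mean score.
  const sums = {}, counts = {};
  for (const { paper, score } of reviews) {
    sums[paper] = (sums[paper] || 0) + score;
    counts[paper] = (counts[paper] || 0) + 1;
  }
  const paperMean = {};
  for (const p in sums) paperMean[p] = sums[p] / counts[p];

  // A reviewer's bias: average deviation of their scores from paper means.
  const dev = {}, n = {};
  for (const { paper, reviewer, score } of reviews) {
    dev[reviewer] = (dev[reviewer] || 0) + (score - paperMean[paper]);
    n[reviewer] = (n[reviewer] || 0) + 1;
  }
  const bias = {};
  for (const r in dev) bias[r] = dev[r] / n[r];
  return bias;
}

// Normalized score = raw score minus the reviewer's estimated bias.
function normalize(reviews) {
  const bias = reviewerBiases(reviews);
  return reviews.map(rv => ({ ...rv, normalized: rv.score - bias[rv.reviewer] }));
}
```

For example, if reviewer r1 scores two papers one point below their co-reviewer r2, this sketch estimates biases of roughly -1 and +1 and brings the two reviewers' normalized scores for each paper into agreement.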
The Review Normalizer was created by Andrew Myers at Cornell University. Thorsten Joachims gave some key pointers on ML and latent factors.
Thanks also to the creators of numeric.js, which is used for numerical computation.