Richard Lanas Phillips


My research interests are in how we make decisions around ML algorithms. How do we calibrate our decisions around ML predictions? How do model decisions affect community development on social media platforms? How can algorithms amplify or augment our existing biases? I'm currently using tools from network science, algorithmic transparency and interpretability, and mechanism design to explore these questions.




Disentangling Influence: Using Disentangled Representations to Audit Model Predictions

We showed how disentangled representations can be used for auditing. This is useful for finding proxy features and for detecting bias introduced through correlated features. Our method allows explicit computation of a feature's influence on either individual or aggregate-level outcomes.
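
As a rough sketch of the idea (not the paper's code: the `encode`/`decode` pair, the `neutral_c` reference value, and the `model` interface are all assumed here), influence can be measured as the change in predictions when each point is re-synthesized with the audited feature's disentangled code held at a neutral value:

```python
# Hypothetical sketch of intervention-based influence via a disentangler.
import numpy as np

def feature_influence(model, encode, decode, X, neutral_c):
    """Estimate the influence of one disentangled feature on predictions.

    encode(X) -> (Z, C): Z holds the representation with the audited
        feature removed; C isolates the audited feature.
    decode(Z, C) -> X_hat: approximate inverse of encode.
    neutral_c: reference value for C (e.g., the dataset mean of C).
    """
    preds = model.predict(X)
    Z, C = encode(X)
    # Re-synthesize each point with the audited feature "switched off",
    # then measure how the model's predictions move.
    X_neutral = decode(Z, np.broadcast_to(neutral_c, C.shape))
    preds_neutral = model.predict(X_neutral)
    individual = preds - preds_neutral      # per-point influence
    aggregate = np.abs(individual).mean()   # aggregate-level influence
    return individual, aggregate
```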




Interpretable Active Learning

This project proposed a way to explain to a domain expert why particular active learning queries are being suggested. We built it to batch active learning queries so that chemists could design natural experiments. We also explored auditing and fairness applications of the proposed framework.
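
As a rough illustration (a hypothetical sketch under assumptions, not the paper's implementation; `model`, `pool`, and the perturbation `scale` are invented here), one way to explain an uncertainty-sampling query is to fit a local linear surrogate to the learner's uncertainty around the queried point:

```python
# Hypothetical sketch: explain an uncertainty-sampling query with a
# local linear surrogate fit to the model's uncertainty surface.
import numpy as np
from sklearn.linear_model import Ridge

def explain_query(model, pool, n_samples=500, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Uncertainty = 1 - max class probability; query the most uncertain point.
    proba = model.predict_proba(pool)
    uncertainty = 1.0 - proba.max(axis=1)
    query = pool[uncertainty.argmax()]
    # Perturb around the query and refit on the local uncertainty values.
    noise = rng.normal(scale=scale, size=(n_samples, pool.shape[1]))
    neighbors = query + noise
    u_local = 1.0 - model.predict_proba(neighbors).max(axis=1)
    surrogate = Ridge().fit(neighbors, u_local)
    # Coefficients indicate which features drive uncertainty at this point,
    # i.e., "why" this query was suggested.
    return query, surrogate.coef_
```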




Dark Reactions Project

At Haverford College, I worked on the Dark Reactions Project team. The goal of this project was to take thousands of unused, "failed" reactions and leverage them as training data for a machine learning model. This was successful, and the system helped expose human biases in the exploratory synthesis pipeline.
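
In spirit (a toy sketch with invented synthetic descriptors, not the project's pipeline, which used curated reaction features from archived lab notebooks), the approach treats "failed" reactions as informative negative examples rather than discarding them:

```python
# Toy illustration: "failed" reactions become negative training examples.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical 8-dimensional reaction descriptors.
X_success = rng.normal(1.0, 1.0, size=(200, 8))   # reactions that worked
X_failed = rng.normal(-1.0, 1.0, size=(800, 8))   # "failed" reactions, kept as negatives
X = np.vstack([X_success, X_failed])
y = np.array([1] * 200 + [0] * 800)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # a simple kernel classifier stands in here
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```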