Program analysis tools help developers ensure the safety and robustness of software systems by automatically reasoning about program behavior. An important barrier to adoption is that these "automatic" tools often require costly input from the user of the analysis. For example, the user must annotate missing code (e.g., dynamically loaded or binary code) and additionally provide a specification encoding the desired program behaviors. The focus of this research is to minimize such manual effort using techniques from machine learning. First, in the verification setting (where the goal is to prove a given correctness property), I describe an algorithm that interacts with the user to identify all annotations for missing code that are relevant to the property, and show empirically that the required manual effort is substantially reduced. Second, in the bug-finding setting, I describe an algorithm that improves the effectiveness of random testing by automatically inferring the program's input language, and show that it generates far higher-quality valid test cases than a baseline.
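To illustrate the second idea, here is a minimal sketch of grammar-based random test generation, assuming the input language has already been inferred as a context-free grammar. This is not the algorithm from the talk; the toy grammar for arithmetic expressions and the `generate` function are illustrative assumptions.

```python
import random

# Hypothetical inferred grammar: nonterminals map to lists of productions.
# In practice, such a grammar would be learned from example program inputs.
GRAMMAR = {
    "<expr>": [["<term>", "+", "<expr>"], ["<term>"]],
    "<term>": [["(", "<expr>", ")"], ["<digit>"]],
    "<digit>": [[d] for d in "0123456789"],
}

def generate(symbol="<expr>", depth=0, max_depth=8):
    """Expand a nonterminal by choosing a random production; past
    max_depth, pick the shortest production to force termination."""
    if symbol not in GRAMMAR:
        return symbol  # terminal symbol: emit it as-is
    productions = GRAMMAR[symbol]
    if depth >= max_depth:
        production = min(productions, key=len)
    else:
        production = random.choice(productions)
    return "".join(generate(s, depth + 1, max_depth) for s in production)
```

Every string this sampler produces is syntactically valid by construction, which is what lets grammar-aware random testing exercise code paths past the parser, unlike purely random byte-level inputs.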

Bio:
Osbert Bastani is a Ph.D. student in Computer Science at Stanford University, advised by Alex Aiken. He is interested in improving the automation of program analysis tools using techniques from machine learning, artificial intelligence, and program synthesis. His work is motivated by applications to building secure systems and to analyzing software systems that rely on machine learning models.