Leveraging the Bimodality of Software (via Zoom)

Abstract: After discovering, back in 2011, that Language Models are useful for modeling repetitive patterns in source code (cf. the "Naturalness" of software), and exploring some applications thereof, our group at UC Davis has more recently (since about 2019) focused on the observation that software, as usually written, is bimodal: it admits both the well-known formal, deterministic semantics (mostly for machines) and a probabilistic, noisy semantics (for humans). This bimodality affords both new approaches to software tool construction (using machine learning) and new ways of studying how humans read code. In this talk, I will describe some of the projects we have undertaken in this domain.

We are grateful for NSF support for this work, via grants 1247280, 1414172, and 2107592.

Bio: Prem Devanbu holds a B.Tech. from IIT Madras and a Ph.D. from Rutgers University. After nearly 20 years at Bell Labs, he joined UC Davis, where he is now a Distinguished Professor of Computer Science. He works in the area of Empirical Software Engineering, specifically on the use of "big data" available in public software repositories to help software developers and improve software processes. He is a winner of the ACM SIGSOFT Outstanding Research Award (2021), the Alexander von Humboldt Research Award (2022), and several best-paper and most-influential-paper awards. He is a Fellow of the ACM.