Representative recent work
Broadly, I’m motivated by the goal of building machine learning systems that generalize strongly (extrapolating rather than interpolating), require less data (greater sample efficiency), and acquire interpretable knowledge that humans can understand and build on. I draw on ideas and techniques from machine learning, artificial intelligence, programming languages, and cognitive science. More specifically, I’ve investigated the hypothesis that some progress on these fronts can come from program induction: program induction systems represent knowledge in the form of symbolic code, and treat learning as a kind of program synthesis. Useful resources are here. The main themes of my research are:
A model infers a high-level LaTeX-style graphics program from a hand drawing, and can auto-extrapolate visual patterns (paper).
A model infers CAD code from voxels (paper).
Programs and perception: Human vision is rich: we infer shape, objects, parts of objects, and relations between objects. Vision is also abstract: we can perceive the radial symmetry of a spiral staircase, see the forest for the trees, and also the recursion within the trees. Much of this abstract perceptual structure can be modeled and automatically recovered through different kinds of graphics program synthesis, from hand drawings to 2D and 3D geometric models.
Learning a generative model over LOGO/Turtle graphics programs. Shown are renders of randomly generated programs sampled from the learned prior. The system acquires this prior by learning to draw these images. The system is described here.
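To make the turtle-program setting concrete, here is a minimal self-contained sketch. The DSL (forward/turn/repeat commands) and the hand-set sampling weights are my own toy illustration, not the grammar or learned prior from the system above: programs are sampled from a simple prior and executed into line segments that a renderer could draw.

```python
import math
import random

def sample_program(rng, depth=0):
    """Sample a random turtle program from a simple hand-set prior.
    A program is a list of ("forward", dist), ("turn", degrees), or
    ("repeat", n, sub_program) commands."""
    cmds = []
    for _ in range(rng.randint(2, 5)):
        r = rng.random()
        if r < 0.4:
            cmds.append(("forward", rng.choice([1, 2, 3])))
        elif r < 0.8:
            cmds.append(("turn", rng.choice([30, 45, 90, 120])))
        elif depth < 2:
            # Nested loop: repeat a recursively sampled sub-program.
            cmds.append(("repeat", rng.randint(2, 6), sample_program(rng, depth + 1)))
    return cmds

def execute(program, state=(0.0, 0.0, 0.0)):
    """Run a program; return the drawn line segments and the final
    (x, y, heading-in-degrees) turtle state."""
    x, y, heading = state
    segments = []
    for cmd in program:
        if cmd[0] == "forward":
            nx = x + cmd[1] * math.cos(math.radians(heading))
            ny = y + cmd[1] * math.sin(math.radians(heading))
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif cmd[0] == "turn":
            heading = (heading + cmd[1]) % 360
        elif cmd[0] == "repeat":
            for _ in range(cmd[1]):
                segs, (x, y, heading) = execute(cmd[2], (x, y, heading))
                segments.extend(segs)
    return segments, (x, y, heading)

# Render of one random draw from the prior (here, just its segments).
rng = random.Random(0)
segments, _ = execute(sample_program(rng))
```

Renders like those shown come from handing such segment lists to a rasterizer; a learned prior replaces the hand-set weights above with ones tuned to reproduce target images.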
Learning to program: Program induction presents two intertwined challenges. First, the space of all programs is infinite, so we need a well-tuned inductive bias to organize the space of program hypotheses. Second, even given this inductive bias, efficiently homing in on the most plausible programs is, in general, intractable. Roughly, these correspond to the sample complexity and the computational complexity of program induction. We’ve found ways of learning both the inductive bias and the search algorithm that efficiently finds the right programs.
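As a toy illustration of how an inductive bias guides search (my own minimal example; the DSL of unary integer functions and its hand-set weights are invented for this sketch, and a real system would learn them): candidate programs are expanded best-first in order of prior probability, so a better prior means the correct program is reached after fewer expansions.

```python
import heapq
import itertools
import math

# Tiny hypothetical DSL: unary integer functions built by wrapping the
# identity "x" with these operators. Each operator carries a prior
# probability; in a real system these weights would be learned.
OPS = {
    "inc":    (0.40, lambda f: (lambda x: f(x) + 1)),
    "double": (0.35, lambda f: (lambda x: f(x) * 2)),
    "square": (0.25, lambda f: (lambda x: f(x) ** 2)),
}

def induce(examples, max_expansions=10000):
    """Best-first search over programs ordered by negative log prior.
    Returns the source of the first program consistent with all
    (input, output) examples, or None if the budget runs out."""
    tie = itertools.count()  # tie-breaker so the heap never compares lambdas
    frontier = [(0.0, next(tie), "x", lambda x: x)]
    for _ in range(max_expansions):
        if not frontier:
            break
        cost, _, src, fn = heapq.heappop(frontier)
        if all(fn(x) == y for x, y in examples):
            return src
        for name, (p, wrap) in OPS.items():
            heapq.heappush(
                frontier, (cost - math.log(p), next(tie), f"{name}({src})", wrap(fn))
            )
    return None

# f(x) = 2*(x+1) fits both examples; the search finds it as double(inc(x)).
print(induce([(1, 4), (2, 6)]))
```

Learning the inductive bias corresponds to fitting the operator weights to past solutions; learning the search algorithm corresponds to replacing this uniform best-first expansion with a policy that proposes likely sub-programs conditioned on the examples.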
Theory induction as program synthesis: It’s not just scientists who build theories to understand and model the world: during the first dozen years of life, children learn ‘intuitive theories’ of number, kinship, taxonomy, physics, social interaction, and many other domains. Intelligent machines will similarly need to organize and represent their knowledge of the world in terms of modular, causal, and interpretable theories. A new direction for my research is to explore algorithms for synthesizing causal theories, with the hypothesis that generative programs are the right representational substrate on which to build a theory inducer. My collaborators and I are starting with theories of human language (work in prep), with the longer-term goal of building systems that can infer theories of the physical world.
Website template taken from Mathias Sablé-Meyer