Abstract: Recent advances in brain recording technologies are transforming neuroscience, offering unprecedented abilities to measure neural activity and the behavior it gives rise to. These recordings have the potential to offer new insights into biological computation, but their size and complexity pose a serious challenge. In parallel, advances in machine learning have risen to meet this challenge, giving us new tools for modeling complex data-generating processes and fitting these models at scale. I will present two examples of our recent work on discovering structure in large-scale neural and behavioral data with new machine learning methods, which we call point process latent variable models. First, we consider the problem of inferring latent cell types and the functional networks that connect them, given access to only the spike times of a population of neurons. Such data are naturally modeled as a point process, but standard point process models fail to capture this underlying structure. We blend latent variable models of random networks with Hawkes processes (a class of interacting point processes) to build a generative model of neural spike trains, and we develop Bayesian inference algorithms to reason about the underlying cell types and networks. In the second example, we use similar methods to study how exogenous sensory inputs and internal states jointly determine the natural behavior of larval zebrafish, an important model organism in neuroscience. Fit to twenty hours of video recordings, our model reveals interpretable classes of swim bouts and shows how prey locations combine with a latent hunger state to predict future behavior. These examples illustrate the potential for bringing new machine learning techniques to bear on complex neural and behavioral datasets.

Bio: Scott is a Postdoctoral Fellow in the labs of David Blei and Liam Paninski at Columbia University. He completed his Ph.D. in Computer Science at Harvard with Ryan Adams and Leslie Valiant. He received his B.S. in Electrical and Computer Engineering from Cornell (Go Big Red!) and is thrilled to be returning to his alma mater for this talk. His research focuses on machine learning, computational neuroscience, and the general question of how computer science and statistics can help us decipher neural computation. Next summer, Scott will join Stanford University as an Assistant Professor in the Statistics Department and the Stanford Neurosciences Institute.