I am Yixuan Li, a PhD candidate at Cornell University, advised by John E. Hopcroft. My thesis committee members are Kilian Weinberger and Thorsten Joachims. The goal of my thesis research is to develop computational foundations and practical advances for scaling machine learning methods to web-scale data. I have pursued research on both principled and applied aspects of machine perception, learning, and reasoning.
I am particularly interested in large-scale machine learning for the web, with topics including scalable semi-supervised learning, deep representation learning for vision tasks, user modeling in social media, and visual-attention-based personalization. A key focus of my recent work has been deep learning. Projects include convergent learning in deep neural networks, reducing the computational cost of training neural networks, and adversarial training of deep generative models.
Prior to coming to Cornell, I graduated from Shanghai Jiao Tong University with a B.Eng. in Information Engineering in 2013. I spent the summers of 2015 and 2016 at Google Research in Mountain View.
I travel and occasionally take photos. Here is my pictorial Travel Memo.
Update (6/8/2017): Paper on a principled out-of-distribution detector is now available on arXiv.
Update (6/6/2017): Paper accepted for publication in Transactions on Knowledge Discovery from Data (TKDD).
Update (5/16/2017): I will be speaking in the Artificial Intelligence track at the Grace Hopper Conference (GHC) in October 2017.
Update (3/12/2017): Received ICLR 2017 Student Travel Award.
Update (2/27/2017): Paper on StackedGAN has been accepted to CVPR 2017.
Update (2/6/2017): Paper on Snapshot Ensembles has been accepted to ICLR 2017.
Update (12/20/2016): My summer internship paper at Google Research has been invited to the industrial track of WWW 2017.
Update (2/5/2016): I will be interning with the Machine Intelligence team at Google Research (Mountain View) this summer. I am very excited about it!
Update (2/4/2016): Paper on Convergent Learning has been accepted for oral presentation (5.7%) at ICLR 2016! (Check out the preprint here.)