We propose Yum-me, a personalized nutrient-based meal recommender system designed to meet individuals' nutritional expectations, dietary restrictions, and fine-grained food preferences. Yum-me enables a simple and accurate food preference profiling procedure via a visual quiz-based user interface, and projects the learned profile into the domain of nutritionally appropriate food options to find ones that will appeal to the user.

Users of software applications generate vast amounts of unstructured log-trace data. These traces contain hidden clues to the intentions and interests of those users, but service providers may find it difficult to uncover and exploit those clues. In this paper, we propose a framework for personalizing software and web services by leveraging such unstructured traces.

In this work, we study the connection between metric learning and collaborative filtering. We propose Collaborative Metric Learning (CML), which learns a joint metric space that encodes not only users' preferences but also user-user and item-item similarity. The proposed algorithm outperforms state-of-the-art collaborative filtering algorithms on a wide range of recommendation tasks and uncovers the underlying spectrum of users' fine-grained preferences. CML also achieves significant speedups for Top-K recommendation tasks using off-the-shelf approximate nearest-neighbor search, with negligible loss in accuracy.
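The speedup claim rests on a simple property: once users and items live in one metric space, Top-K recommendation reduces to a nearest-neighbor query over item embeddings. The sketch below illustrates that reduction with random stand-in embeddings (not learned CML factors) and a brute-force Euclidean search; in practice an off-the-shelf approximate nearest-neighbor index would replace the exhaustive scan.

```python
import numpy as np

# Illustrative only: random embeddings stand in for learned CML factors.
rng = np.random.default_rng(0)
user_vecs = rng.normal(size=(100, 32))   # 100 users in a 32-d joint metric space
item_vecs = rng.normal(size=(1000, 32))  # 1000 items in the same space

def top_k_items(user_id, k=10):
    """Return the k items closest to the user in Euclidean distance."""
    dists = np.linalg.norm(item_vecs - user_vecs[user_id], axis=1)
    return np.argsort(dists)[:k]

recs = top_k_items(user_id=7, k=10)
```

Because recommendation becomes a pure distance query, any approximate nearest-neighbor library can serve the Top-K step without touching the trained model.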

We propose a new user-centric recommendation model, called Immersive Recommendation, that incorporates cross-platform and diverse personal digital traces into recommendations.

We introduce an approach called YADL (Your Activities of Daily Living) that uses images of ADLs and personalization to improve survey efficiency and the patient experience.

We present GroupLink, a group event recommendation system that suggests events to promote group members' face-to-face interactions in non-work settings.

In this paper, we propose PlateClick, a novel system that bootstraps food preference learning using a simple, visual quiz-based user interface.

In this paper, we explore the possibility of learning a user's latent visual preferences directly from image content.

Our research investigates the extent to which easily available sensory information may be used by external service providers to make occupancy-related inferences. In particular, we focus on inferences from two different sources: motion sensors, which are installed and monitored by security companies, and smart electric meters, which are deployed by electric companies for billing and demand-response management.


Research Internships:

  • 2016, Research intern, Adobe Research
  • 2014, Research intern, Microsoft Research Asia
  • 2013, Research intern, UCLA-CSST


Teaching Assistant:

  • CS5785 Applied Machine Learning
  • CS5300 The Architecture of Large-Scale Information Systems
  • CS5434 Defending Computer Networks