Eugene Bagdasaryan

Bio

I am a CS PhD candidate at Cornell Tech and an Apple AI/ML Scholar, working on privacy and security in machine learning, advised by Deborah Estrin and Vitaly Shmatikov.

My research goal is to build ethical, safe, and private machine learning systems. In our work, we demonstrate security drawbacks of Federated Learning (AISTATS'20) and fairness implications of Differentially Private Deep Learning (NeurIPS'19). Recently, we proposed a framework for backdoor attacks and defenses (USENIX'21).

Earlier, I worked on Ancile, a framework for language-level control over data usage, and OpenRec, a modular library for deep recommender systems. Amazon and Google Research hosted me for summer internships. Before starting my PhD, I received an engineering degree from Baumanka and worked at Cisco on OpenStack networking.

In my free time I play water polo and (used to...) travel.

Research papers
  • Blind Backdoors in Deep Learning Models

    We propose a novel attack that injects complex, semantic backdoors without access to the training data or the model, and that evades all known defenses.

    [USENIX, 2021], [Code].
  • How To Backdoor Federated Learning

    We introduce constrain-and-scale, a model-poisoning attack that lets a single participant stealthily inject a backdoor into the joint model during one round of Federated Learning training. The attack evades proposed defenses, and the central server then distributes the compromised model to all other participants (a minimal sketch of the scaling step appears after this list).

    [AISTATS, 2020], [Code].
  • Salvaging Federated Learning by Local Adaptation

    We show how local adaptation of the global model lets participants recover accuracy on their own data when Federated Learning is trained with robustness and privacy techniques.

    [Paper], [Code].
  • Ancile: Enhancing Privacy for Ubiquitous Computing with Use-Based Privacy

    A novel platform that controls applications' data usage with language-level policies, implementing use-based privacy.

    [WPES'19], [Code], [Slides].
  • Differential Privacy Has Disparate Impact on Model Accuracy

    This project discusses a new trade-off between privacy and fairness. We observe that training a Machine Learning model with Differential Privacy disproportionately reduces accuracy on underrepresented groups (see the DP-SGD sketch after this list).

    [NeurIPS, 2019], [Code].
  • OpenRec: A Modular Framework for Extensible and Adaptable Recommendation Algorithms

    An open and modular Python framework that supports extensible and adaptable research in recommender systems.

    [WSDM, 2018], [Code].
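
For readers curious how constrain-and-scale overrides federated averaging, below is a minimal sketch of its scaling step. It assumes plain FedAvg with num_clients participants and a server learning rate server_lr; the function and parameter names are hypothetical, and the real attack also constrains the attacker's training loss to evade anomaly detection.

```python
# Hypothetical sketch: scaling a backdoored local model so that federated
# averaging approximately replaces the global model with it.
# Under FedAvg the server computes
#   G' = G + (server_lr / n) * sum_i (L_i - G),
# so a single attacker submitting L = G + (n / server_lr) * (X - G)
# drives G' toward its backdoored model X.
import copy
import torch

def scale_backdoored_update(global_model, backdoored_model,
                            num_clients, server_lr=1.0):
    submitted = copy.deepcopy(backdoored_model)
    gamma = num_clients / server_lr  # boost factor n / server_lr
    for p_sub, p_glob, p_bad in zip(submitted.parameters(),
                                    global_model.parameters(),
                                    backdoored_model.parameters()):
        # Replace each parameter with the scaled-up malicious update.
        p_sub.data = p_glob.data + gamma * (p_bad.data - p_glob.data)
    return submitted
```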
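Similarly, the disparate-impact result is easiest to see against the mechanics of DP-SGD (per-example gradient clipping plus Gaussian noise, as in Abadi et al.). The sketch below is illustrative only, with hypothetical names; examples that produce large gradients, often those from underrepresented groups, lose the most signal to clipping and noise.

```python
# Illustrative DP-SGD step: clip each example's gradient to clip_norm,
# add Gaussian noise, average, and apply an SGD update.
import torch

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1):
    # per_example_grads: one list of tensors (matching params) per example.
    summed = [torch.zeros_like(p) for p in params]
    for grads in per_example_grads:
        # Clip each example's total gradient norm to clip_norm; examples
        # with large gradients are scaled down the most.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (float(total_norm) + 1e-12))
        for s, g in zip(summed, grads):
            s.add_(g, alpha=scale)
    batch_size = len(per_example_grads)
    with torch.no_grad():
        for p, s in zip(params, summed):
            # Gaussian noise calibrated to the clipping bound.
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p.add_((s + noise) / batch_size, alpha=-lr)
```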
Recent news
  • Aug 2021, our blind backdoors paper was covered on ZDNet.
  • Apr 2021, received the Apple AI/ML PhD fellowship.
  • Feb 2021, paper on backdoors was accepted to USENIX Security'21.
  • Jan 2021, presented our work at Microsoft Research.
  • Nov 2020, open-sourced our new framework for research on backdoors in deep learning.
  • Jul 2020, presented our work on local adaptation for Federated Learning at Google.
  • Jun 2020, the Ancile project was covered in the Cornell Chronicle.
  • Summer 2020, interned at Google Research with Marco Gruteser and Kaylee Bonawitz, focused on Federated Learning and Analytics.
  • Jan 2020, our attack on federated learning was accepted to AISTATS'20!
  • Nov 2019, passed A exam (pre-candidacy): "Evaluating privacy preserving techniques in machine learning."
  • Sep 2019, our paper on the disparate impact of differential privacy on model fairness was accepted to NeurIPS'19.
  • Aug 2019, our work on the use-based privacy system Ancile was accepted to CCS WPES'19.
  • Aug 2019, presented on contextual recommendation sharing at the Contextual Integrity Symposium.
  • Jun 2019, selected as a Digital Life Initiative fellow for 2019-2020.
  • Summer 2018, interned at Amazon Research with Pawel Matykiewicz and Amber Roy Chowdhury.
  • Sep 2017, selected as a Bloomberg Data for Good fellow.