My research goal is to build ethical, safe, and private machine learning systems. In our work, we demonstrated security drawbacks of Federated Learning (AISTATS'20) and the fairness implications of Differentially Private Deep Learning (NeurIPS'19). Recently, we proposed a framework for backdoor attacks and defenses (USENIX'21).
Earlier, I worked on Ancile – a framework for language-level control over data usage – and OpenRec – a modular library for deep recommender systems. I spent summer internships at Amazon and Google Research. Before starting my PhD, I received an engineering degree from Baumanka and worked at Cisco on OpenStack networking.
In my free time I play water polo and (used to...) travel.
We introduce a constrain-and-scale attack, a form of model poisoning, that can stealthily inject a backdoor into the joint model during a single round of Federated Learning training. The attack evades proposed defenses, and the central server then distributes the compromised model to all other participants. [AISTATS, 2020], [Code]
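To give a flavor of why a single round suffices: because the server averages participant updates, an attacker who controls one participant can scale its update so that the average lands near an arbitrary target model. Below is a minimal NumPy sketch of this model-replacement idea; the toy weights, round size, and variable names are illustrative assumptions, not the paper's actual implementation (which additionally constrains the update to stay stealthy).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                       # participants selected this round (toy value)
global_w = np.zeros(4)       # current joint model G (toy weights)

# Benign updates are small and roughly cancel out in the average.
benign = [rng.normal(scale=0.01, size=4) for _ in range(n - 1)]

# Attacker's backdoored target model X (illustrative values).
target = np.array([1.0, -1.0, 0.5, 2.0])

# Model replacement: submit n * (X - G) so that averaging yields ~X.
malicious = n * (target - global_w)

# Server-side FedAvg step: G + mean of all submitted updates.
new_global = global_w + np.mean(benign + [malicious], axis=0)
```

After this round, `new_global` is approximately equal to the attacker's `target`, since the benign updates contribute only noise of order 0.01/n per coordinate.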