Manipulation Attacks in Local Differential Privacy

Abstract: Local differential privacy is a widely studied restriction on distributed algorithms that collect aggregates about sensitive user data, and is now deployed in several large systems. We initiate a systematic study of a fundamental limitation of locally differentially private protocols: they are highly vulnerable to adversarial manipulation. While any algorithm can be manipulated by adversaries who lie about their inputs, we show that any non-interactive locally differentially private protocol can be manipulated to a much greater extent. Namely, when the privacy level is high or the input domain is large, an attacker who controls a small fraction of the users in the protocol can completely obscure the distribution of the users' inputs. We also show that existing protocols differ greatly in their resistance to manipulation, even when they offer the same accuracy guarantee under honest execution. Our results suggest caution when deploying local differential privacy and reinforce the importance of efficient cryptographic techniques for emulating mechanisms from central differential privacy in distributed settings.
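The amplification effect the abstract describes can be seen even in the simplest local protocol, binary randomized response. The sketch below is illustrative and not taken from the talk; the parameter values, the 1% attacker fraction, and the attack of always reporting 1 are assumptions chosen for the demonstration. Because the aggregator must rescale the noisy reports by roughly 1/ε to debias them, each malicious report shifts the final estimate by much more than 1/n.

```python
# Illustrative sketch (not from the talk): manipulating binary randomized
# response. Honest users report their bit truthfully with probability
# e^eps / (1 + e^eps); malicious users ignore the protocol entirely.
import math
import random


def randomized_response(bit, eps, rng):
    # Report the true bit with probability p = e^eps / (1 + e^eps),
    # otherwise report the flipped bit.
    p = math.exp(eps) / (1 + math.exp(eps))
    return bit if rng.random() < p else 1 - bit


def estimate_mean(reports, eps):
    # Debias: E[report] = q + (p - q) * true_mean, with q = 1 - p.
    # The 1 / (p - q) ~ 1/eps rescaling is what amplifies manipulation.
    p = math.exp(eps) / (1 + math.exp(eps))
    q = 1 - p
    return (sum(reports) / len(reports) - q) / (p - q)


rng = random.Random(0)
n, eps, true_mean = 100_000, 0.5, 0.2

# Honest execution: the debiased estimate lands near the true mean.
honest = [randomized_response(int(rng.random() < true_mean), eps, rng)
          for _ in range(n)]

# Attack: 1% of users deviate from the protocol and always report 1.
# Their effect on the estimate is roughly (m/n) / (p - q), i.e. several
# times larger than the m/n shift an honest-world lie could cause.
m = n // 100
corrupted = honest[:-m] + [1] * m

print("honest estimate:   ", round(estimate_mean(honest, eps), 3))
print("corrupted estimate:", round(estimate_mean(corrupted, eps), 3))
```

As ε shrinks (stronger privacy), the rescaling factor 1/(p - q) grows, so the same 1% of corrupted users produces a proportionally larger bias, matching the abstract's claim that the attack worsens when the privacy level is high.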

Joint work with Albert Cheu and Adam Smith.

Bio: Jonathan Ullman is an assistant professor in the Khoury College of Computer Sciences at Northeastern University, and a member of the Cybersecurity & Privacy Institute. His work in theoretical computer science studies problems at the intersection of machine learning, algorithms, and cryptography, with a focus on data privacy. He earned his BSE from Princeton University in 2008 and his PhD from Harvard University in 2013. His work has been recognized with an NSF CAREER Award and a Google Research Award.