About
I am an Assistant Professor at UMass Amherst CICS. In our group, we study security and privacy attack vectors for AI systems deployed in real life. I am also part-time at Google, working on privacy-conscious agents.
I completed my PhD at Cornell Tech, advised by Vitaly Shmatikov and Deborah Estrin. My research was recognized by Apple Scholars in AI/ML and Digital Life Initiative fellowships, and by a Usenix Security Distinguished Paper Award. At UMass, our group received a Schmidt Sciences AI Safety Grant.
I received an engineering degree from Baumanka and worked at Cisco as a software engineer before going to grad school. I grew up in Tashkent and play water polo.
Announcement 1: I am looking for PhD students (apply) and postdocs to work on attacks on LLM agents and generative models. Please reach out over email and fill out the form!
Announcement 2: We are running a seminar on Privacy and Security for GenAI. Please sign up if you are interested.
Research
Security: We worked on backdoor attacks in federated learning and proposed the Backdoors101 and Mithridates frameworks, as well as an attack on generative language models covered by VentureBeat and The Economist. We studied vulnerabilities in multi-modal and agentic systems: self-interpreting images, adversarial illusions, and backdooring bias into diffusion models. We recently proposed the OverThink attack on reasoning models.
Privacy: We proposed AirGapAgent for privacy protection in agentic applications, leveraging Contextual Integrity. Previously, I worked on aspects of differential privacy, including fairness trade-offs, applications to location heatmaps, and tokenization methods for private federated learning. Additionally, we built the Ancile system, which enforces use-based privacy of user data.
PhD student(s): Abhinav Kumar.
Recent collaborators: Amir Houmansadr, Shlomo Zilberstein, Brian Levine, Kyle Wray, Ali Naseh, Jaechul Roh, Dzung Pham, June Jeong, and many others.
Teaching
Courses
CS 360: Intro to Security, SP'25.
CS 692PA: Seminar on Privacy and Security for GenAI models, FA'24, SP'25, FA'25.
CS 690: Trustworthy and Responsible AI, FA'25 (link TBD)
Service
Academic Service
Program Committees: ACM CCS'24/'25, ICLR'25, IEEE S&P'25.
Workshop Organizer:
Broadening Participation
Eugene co-organizes PLAIR (Pioneer Leaders in AI and Robotics), an outreach program that introduces high school students across Western Massachusetts to the world of robotics and AI safety. Please reach out if you are interested in joining.