AI-Powered Access: Intelligent Interactive Systems to Support People with Visual Impairments

Abstract: As artificial intelligence advances, it presents opportunities to address human needs in new ways. I aim to leverage advances in AI to solve problems of equity for people with diverse abilities. The US Census Bureau estimates that about 20 percent of Americans have a disability; many face significant barriers in their daily lives because their needs and abilities differ from what is typically considered “mainstream.” In my research, I conduct studies to understand these specific barriers and design intelligent interactive systems that help people overcome them. In my talk, I will describe several recent projects involving people with visual impairments, both blind and low vision. The projects aim to help people with visual impairments learn STEM concepts, navigate, and engage with others on social networking sites. I will conclude with open questions for the community on how to ensure that advances in AI empower (instead of further marginalize) all people, regardless of (dis)ability.

Bio: Shiri Azenkot is an Assistant Professor of Information Science at Cornell Tech, the new Cornell University campus in New York City. Her research lies at the intersection of technology, disability, and interaction. She likes building things and discussing their sociocultural implications. In particular, Shiri’s research focuses on designing intelligent interactive systems for people with visual impairments. She has published at top-tier human-computer interaction and accessibility venues such as ACM CHI, ACM ASSETS, and ACM UIST, receiving multiple best paper awards and nominations. Shiri is also the founder of the XR Access Initiative, a broad academic-industry partnership to make augmented and virtual reality accessible from the ground up. She received her PhD in computer science from the University of Washington and her BA, also in computer science, from Pomona College.