CS 4732: Ethical and Social Issues in AI
The course will meet for 8 weeks, 7:30-9:55 PM on Monday evenings, starting Feb. 27. Each meeting will consist of a one-hour public lecture (open to anyone), followed by a discussion limited to the students registered in the course. The course will be pass/fail, but in order to pass you will have to attend all classes and participate actively. There will be short readings associated with each presentation. Here is the current schedule of presentations:
- 2/27 - Bart Selman, Cornell: The Future of AI: Benefits vs. Risks
- 3/6 - Jon Kleinberg, Cornell:
Inherent Trade-Offs in Algorithmic Fairness
- Abstract: Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for such a classification to be fair to different groups. We consider several of the key fairness conditions that lie at the heart of these debates, and discuss recent research establishing that except in highly constrained special cases, there is no method that can satisfy all of these conditions simultaneously. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
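For reference, the fairness conditions at issue in this line of work (e.g., Kleinberg, Mullainathan, and Raghavan, "Inherent Trade-Offs in the Fair Determination of Risk Scores") can be sketched roughly as follows; the notation here is illustrative, not taken from the lecture itself.

```latex
% Setup: a risk score s(x) \in [0,1], a true label y \in \{0,1\},
% and two groups g \in \{1,2\}.

% Calibration within groups: the score means what it says in each group.
\Pr\bigl[\,y = 1 \mid s(x) = v,\; g\,\bigr] = v \qquad \text{for all } v,\, g

% Balance for the negative class: negatives receive the same average score.
\mathbb{E}\bigl[\,s(x) \mid y = 0,\; g = 1\,\bigr]
  = \mathbb{E}\bigl[\,s(x) \mid y = 0,\; g = 2\,\bigr]

% Balance for the positive class: positives receive the same average score.
\mathbb{E}\bigl[\,s(x) \mid y = 1,\; g = 1\,\bigr]
  = \mathbb{E}\bigl[\,s(x) \mid y = 1,\; g = 2\,\bigr]
```

The impossibility result says, roughly, that all three conditions can hold simultaneously only in degenerate cases: when prediction is perfect, or when the two groups have equal base rates.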
- 3/13 - Kilian Weinberger, Cornell: Interpretable Machine Learning: What are
the limits and is it necessary?
Recent years have seen a revival of deep neural networks in machine learning. Although this has led to impressive reductions in error rates on some prominent machine learning tasks, it also raises concerns about the interpretability of machine learning algorithms: can we understand or explain what they are doing? In this talk I will describe the basics of deep learning algorithms and explain their basic building blocks. I will show that these are easy to understand. I will also try to shed some light on what these networks learn at a higher level, and hopefully convince the audience that these are not "black box" algorithms, as they are often described ... maybe instead gray, or dark gray at most.
Reading: Christian Szegedy et al., "Intriguing properties of neural networks"
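As a preview of the "basic building blocks" the abstract mentions, here is a minimal sketch of the core unit of a deep network: an affine map followed by a simple nonlinearity, composed layer by layer. All names and sizes below are illustrative, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_relu(x, W, b):
    """One building block: linear map x @ W + b, then elementwise max(0, .)."""
    return np.maximum(0.0, x @ W + b)

# A two-layer network is just a composition of these blocks.
x = rng.normal(size=(4, 8))              # batch of 4 inputs, 8 features each
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

h = dense_relu(x, W1, b1)                # hidden representation (never negative)
out = h @ W2 + b2                        # final linear layer, 3 outputs per input
print(out.shape)                         # (4, 3)
```

Each layer in isolation is this easy to state; the interpretability question in the talk is about what the composition of many such layers ends up representing.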
- 3/20 - Dan Weld, U. Washington: Computational Ethics for AI
Stephen Hawking, Bill Gates, and other luminaries warn that an “intelligence explosion” may lead to the extinction of humanity at the hands of rampant robots. At the same time, many pundits see a prosperous future in which self-driving cars reduce highway fatalities, while AI advisors improve medical care and minimize malpractice. Weld argues that the utopian outcome is more likely, but only if we address several key social and technical challenges.
- 3/27 - Moshe Vardi, Rice: Humans, Machines, and Work: The Future
Automation, driven by technological progress, has been increasing
inexorably for the past several decades. Two schools of economic
thinking have for many years been engaged in a debate about the
potential effects of automation on jobs: will new technology spawn
mass unemployment, as the robots take jobs away from humans? Or will
the jobs robots take over create demand for new human jobs?
I will present data that demonstrate that the concerns about automation
are valid. In fact, technology has been hurting working Americans for
the past 40 years. The discussion about humans, machines and work
tends to be a discussion about some undetermined point in the far
future. But it is time to face reality. The future is now.
- 4/3 - spring break
- 4/10 - no lecture
- 4/17 - Karen Levy, Cornell: Working With and Against AI: Lessons from Low-Wag
- 4/24 - Ross Knepper, Cornell: Autonomy, Embodiment, and Anthropomorphism:
the Ethics of Robotics
Ross A. Knepper is an Assistant Professor in the Department of Computer Science at Cornell University whose research focuses on the theory, algorithms, and mechanisms of automated assembly. Ross is the recipient of a Young Investigator Award from AFOSR, and he received the Best Paper award at the Robotics: Science and Systems conference in 2014.
- 5/1 - Joe Halpern, Cornell: Moral Responsibility, Blameworthiness, and
Intention: In Search of Formal Definitions