Date Posted: 2/05/2019

In her MIT Technology Review article "Giving Algorithms a Sense of Uncertainty Could Make Them More Ethical" (January 18, 2019), Karen Hao reached out to Cornell CS Professor Carla Gomes to ask whether Peter Eckersley, of the Partnership on AI, is onto something in his approach: considering partial orders of solutions with respect to multiple, often conflicting, objectives, and possibly introducing uncertainty into AI systems, especially those that address decision making and moral dilemmas. Eckersley says: "We as humans want multiple incompatible things. There are many high-stakes situations where it's actually inappropriate—perhaps dangerous—to program in a single objective function that tries to describe your ethics." Gomes agrees: "The overall problem is very complex. It will take a body of research to address all issues, but Peter's approach is making an important step in the right direction."
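
To make the partial-order idea concrete: under Pareto dominance, one solution counts as better than another only if it is at least as good on every objective and strictly better on at least one; otherwise the two are incomparable, which is exactly what makes the order partial rather than total. The minimal sketch below illustrates this with made-up scores; the function name and the two objectives are our own illustration, not anything taken from Eckersley's or Gomes's work.

```python
from typing import Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if `a` Pareto-dominates `b`: at least as good on every
    objective and strictly better on at least one (higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))

# Two hypothetical options scored on two conflicting objectives:
a = (0.9, 0.4)
b = (0.5, 0.8)

# Neither dominates the other, so the partial order cannot rank them;
# a single objective function would be forced to pick one anyway.
print(dominates(a, b), dominates(b, a))  # -> False False
```

Because incomparable pairs like this exist, no single scalar score can reproduce the ordering without silently resolving the trade-off on our behalf.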

Gomes, who works in AI and increasingly in computational sustainability, aims to develop, with Alex Flecker (of Cornell's Department of Ecology and Evolutionary Biology) and other collaborators, an approach to assess the impact of more than 500 proposed hydroelectric dams in the Amazon River basin. The proposed dams would provide energy, but they would also transform segments of the river and harm natural ecosystems. Eckersley's proposal to consider partial orders and to introduce uncertainty into AI systems would not have AI solve our ethical conundrums for us; instead, it would clarify our moral options so that we can make the final call. As he puts it: "Have the system be explicitly unsure and hand the dilemma back to humans." In light of her research on multi-objective optimization, with applications to hydropower dam planning, and of Eckersley's work, Gomes says: "[t]his is a completely different scenario from autonomous cars or other [commonly referenced ethical dilemmas], but it's another setting where these problems are real. There are two conflicting objectives, so what should you do?"
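
As a rough illustration of what "handing the dilemma back to humans" could look like in a setting like dam planning, the sketch below scores a few invented dam portfolios under a range of plausible trade-off weights rather than one fixed objective function; if different weights favor different portfolios, the system reports its uncertainty instead of choosing. All names, numbers, and the scoring function are hypothetical, not taken from Gomes's actual models.

```python
# Each option is (name, energy in GWh/yr, ecological impact score);
# energy is good, impact is bad. Values are invented for illustration.
options = [
    ("A", 1200, 0.9),
    ("B", 900, 0.4),
    ("D", 1500, 1.0),
]

def score(option, w):
    """Scalarized utility for one assumed trade-off weight `w`
    (how much ecological impact one unit of energy is worth)."""
    _, energy, impact = option
    return energy - w * 1000 * impact

# Instead of committing to a single weight, sweep a plausible range.
weights = [0.2, 0.5, 0.8, 1.1]
winners = {max(options, key=lambda o: score(o, w))[0] for w in weights}

if len(winners) == 1:
    print(f"Robust choice under all assumed weights: {winners.pop()}")
else:
    # The system is explicitly unsure: different defensible trade-offs
    # pick different dams, so the dilemma goes back to humans.
    print(f"No single winner; handing back options {sorted(winners)}")
```

Here the high-energy portfolio wins under weights that discount ecological impact, while the low-impact one wins when impact is weighted heavily, so the sketch surfaces both rather than committing to either.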

According to Eckersley, this is a question humans should still answer, even if we increasingly rely on AI to articulate our options.