Ethical decision-making is already part of AI systems. Life-or-death decisions made by a self-driving car in an unavoidable accident are an example of the ethical challenges these systems may encounter. In such cases, the “correct” action is defined by the aggregate societal preference among deterministic alternatives. The vision for AI autonomy is far more ambitious, however, and includes situations in which an algorithm must be selected from a portfolio of algorithms to carry out a task.
As our most powerful algorithms for many computational problems are typically randomized, decision-making goes far beyond selecting among deterministic alternatives: the crucial decision is now which lottery over alternatives to select. How should a group of experts (playing the role of a society) make such decisions, particularly when ethical issues arise? How should an autonomous system act so as to reflect societal beliefs? And how should our legal system evaluate these actions and determine the liability of autonomous systems?
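To make the notion of selecting among lotteries concrete, the following minimal Python sketch (our illustration, not a rule advocated in the talk) compares lotteries over alternatives by the total expected utility they induce for a group of voters, assuming each voter reports cardinal utilities over the alternatives; the names and the utilitarian aggregation rule are illustrative assumptions.

```python
from typing import Dict, List

Alternative = str
Lottery = Dict[Alternative, float]    # alternative -> probability (sums to 1)
Utilities = Dict[Alternative, float]  # one voter's cardinal utilities

def expected_utility(lottery: Lottery, utilities: Utilities) -> float:
    """Expected utility a single voter derives from a lottery."""
    return sum(p * utilities.get(alt, 0.0) for alt, p in lottery.items())

def aggregate_choice(lotteries: List[Lottery], voters: List[Utilities]) -> Lottery:
    """Pick the lottery maximizing total expected utility across voters.

    This utilitarian rule is only one possible aggregation; computational
    social choice studies many alternatives (e.g., maximal lotteries).
    """
    return max(lotteries,
               key=lambda lot: sum(expected_utility(lot, v) for v in voters))

# Hypothetical example: two randomized "algorithms" inducing different
# outcome lotteries in an unavoidable-accident scenario.
swerve = {"hit_wall": 0.9, "safe": 0.1}
brake = {"hit_pedestrian": 0.3, "safe": 0.7}
voters = [
    {"hit_wall": -10, "hit_pedestrian": -100, "safe": 0},
    {"hit_wall": -20, "hit_pedestrian": -80, "safe": 0},
]
print(aggregate_choice([swerve, brake], voters))  # -> the "swerve" lottery
```

Even this toy example shows why the question is subtle: a different aggregation rule over the same lotteries and utilities can select a different action, which is precisely where social choice considerations enter.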
These are some of the questions we discuss in this talk, mainly from a computational social choice viewpoint.