The Alignment Problem
Machine Learning and Human Values
Finalist for the Los Angeles Times Book Prize

Today’s “machine-learning” systems, trained by data, are so effective that we’ve invited them to see and hear for us—and to make decisions on our behalf. But alarm bells are ringing. Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole—and…
A jaw-dropping exploration of everything that goes wrong when we build AI systems and the movement to fix them.