Book Review: Meredith Broussard, More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech (MIT Press 2023). 248 pages. Available from MIT Press, Barnes & Noble, and Amazon.
More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech is AI expert Meredith Broussard’s latest book in the field of artificial intelligence. In the span of 188 pages that even a technology Luddite can understand, More than a Glitch introduces the reader to the intersection of the hottest emerging technology and various forms of social bias. More than just a guide, More than a Glitch inspires readers to think critically about the risks that lie behind AI’s promises.
Chapter 1, Introduction, offers a concept that will become clearer throughout the book. “Mathematical truth and social truth are fundamentally different systems.” This quote prepares the reader for the definition and discussion of “technochauvinism,” which is a kind of bias that considers computational solutions to be superior to all other solutions. Companions to technochauvinism are the notions that “algorithms are unbiased” and “computers make neutral decisions because their decisions are based on math.” After introducing these concepts, Broussard introduces herself as a writer, a computer scientist who also works in the field of AI ethics, and as a Black woman, in that order.
Chapter 2, Understanding Machine Bias, advises readers that they may skim this chapter if they have already read her earlier book, entitled Artificial Unintelligence. All other readers are advised to “buckle in.” Despite the blunt warning to the uninitiated, the chapter is an easy read. With some helpful graphs and comprehensible narratives, this chapter covers concepts in both the social sciences and technology. For example, what we call “machine learning” is in fact just computational statistics. The machine detects patterns in the data in a way that is very different from the way humans process information. This point leads to the cautionary observation that humanity has not entered a new evolutionary phase simply because of the math that we can do with the machines that process the variables. Instead, since it is humans who produce the data that is fed into the machines for machine “learning,” human biases shape the inputs and, inevitably, the outputs. If race is a social construct, but computer scientists plug it into their computational systems as if it were a scientific fact, one begins to see technochauvinism in action. Broussard concludes this chapter by inviting readers to talk more about the “why” of tech biases to effect the changes that can render tech less biased.
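The point that machine “learning” is computational statistics, and that biased inputs yield biased outputs, can be made concrete in a few lines. The following toy sketch is this reviewer’s illustration, not an example from the book: the “model” is nothing more than an average computed from historical data, so any disparity in that data passes straight through to the predictions.

```python
def train(examples):
    """'Learn' a score by averaging historical outcomes per group.

    This is plain statistics; there is no insight here beyond the data.
    """
    totals, counts = {}, {}
    for group, outcome in examples:
        totals[group] = totals.get(group, 0) + outcome
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

def predict(model, group):
    """Predict by looking up the learned average for the group."""
    return model[group]

# Hypothetical historical data in which group "B" was under-served:
history = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
model = train(history)

# The model faithfully reproduces the disparity baked into its inputs.
print(predict(model, "A"))  # higher score, because group A fared better historically
print(predict(model, "B"))  # lower score, for no reason other than past data
```

Nothing in the code is unfair on its face; the unfairness arrives entirely through the training data, which is precisely Broussard’s point.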
Chapters 3 Through 6. These four chapters are essentially self-contained, in the sense that one could read them in any order without losing the narrative thread. This review will nevertheless cover the chapters in order.
The third chapter, Recognizing Bias in Facial Recognition, makes the point that a facial recognition system never declares a definite match; it merely computes similarity based on the data available to it. This point elegantly confirms the cliché of “garbage in, garbage out.” At a minimum, this uncomfortable revelation about facial recognition will ruin all future movies that feature facial “matches.”
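To see why a “match” is really a score, consider the following toy sketch (this reviewer’s illustration, not Broussard’s): faces are reduced to numeric vectors, and the system ranks gallery entries by cosine similarity against a threshold. The embeddings and threshold below are invented for illustration; a real system derives the vectors from images.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_candidates(probe, gallery, threshold=0.8):
    """Return (name, score) pairs above the threshold, best first.

    Note what is NOT returned: a certainty. Only scores, whose meaning
    depends entirely on the gallery data the system happens to have.
    """
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    candidates = [(name, score) for name, score in scores if score >= threshold]
    return sorted(candidates, key=lambda item: item[1], reverse=True)

# Hypothetical face embeddings:
gallery = {"alice": [0.9, 0.1, 0.4], "bob": [0.1, 0.95, 0.2]}
probe = [0.88, 0.15, 0.38]
print(rank_candidates(probe, gallery))  # a ranked list of scores, never a verdict
```

If the gallery is unrepresentative, the scores are skewed from the start: garbage in, garbage out.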
The fourth chapter, Machine Fairness and the Justice System, reveals shortcomings in the use of statistics to predict future crimes. After presenting readers with disturbing examples of tech run amok, Broussard notes,
We need human checks on computational decisions, we need computational checks on human decisions and we need better safety nets plus the flexibility to change and adapt toward a better world.
Broussard could have placed this quote in any of these middle chapters, and the argument would have been equally persuasive.
By the fifth chapter, Real Students, Imaginary Grades, readers may think they are prepared for the examples, but the real events described herein may be the most shocking ones of all. Without ruining the shock value by revealing too much here, one should note the force of the following statement:
We know that algorithmic systems are going to fail and discriminate; we should be prepared to mitigate the shortcomings immediately.
Note the choice of words here. Algorithmic failure is not a possibility. It is an inevitability. All the remaining chapters deal with ways the adverse effects of inevitable algorithmic failure can be mitigated.
The sixth chapter, entitled Ability and Technology, shines a light on ableism, the “ism” that garners less attention than racism and sexism. A discussion of digital accessibility, or the lack thereof, may make readers begin to question their own biases. Returning to the term “technochauvinism,” Broussard explains how “[t]echnochauvinism depends on a perception that people in computer science are special, have more skills, and are smarter or more capable than others.” The reality is that the educational curriculum for computer scientists, mathematicians, and engineers fails to include ethical considerations in the training of these so-called experts.
Chapter 7, Gender Rights and Databases. This is the first and only chapter where the low-tech reader may have to read more slowly and focus a bit more intently. Rest assured that the extra effort will be amply rewarded in the “aha” moment when the flaws of the TSA full-body scanners are explained.
Chapters 8 (Diagnosing Racism) and 9 (An AI Told Me I Had Cancer) both discuss the medical community’s use of technology to diagnose and treat patients. Experienced human beings can detect when something instinctively seems “off.” Algorithms lack this ability. These chapters underscore the rigidity of computer models as compared to the flexibility of the human mind.
Chapter 10, Creating Public Interest in Technology. In this chapter, Broussard pivots from quasi-journalistic revelations to proposals for improvement. Specifically, “[a]uditing is a way to make sure that the public interest is being preserved in and around algorithms.” Audits could start with two questions: (1) asking what it means for a given algorithm to work; and (2) asking what it means for a given algorithm to fail and for whom. Broussard goes on to outline what a good organizational process for responsible AI would look like. She ends the chapter by offering a key observation.
Finally, diversifying the landscape of technology creators will help, so that there are more people in the room who can bring more viewpoints and can raise awareness of potential issues that will need to be audited.
This point may seem obvious to some readers, especially those who work in the audit community, but less so to others, and least of all to anyone who leans toward technochauvinism.
Chapter 11, Potential Reboot. Some readers like to read the last chapter first and then proceed to read the rest of the book. That is not the best approach with this book. The legislative suggestions and the strong concluding statements will invigorate readers who have saved this chapter for last.
The Bottom Line: More than a Glitch puts a spotlight on a critical contemporary societal issue. It is essential reading for anyone concerned about avoiding some of the worst potential failures of artificial intelligence.