⚡️ More Than a Glitch ft. Meredith Broussard
THURSDAY 3/23: Dismantling the doctrine of techno-chauvinism
Our guest for this Thursday, March 23 from 5-5:30 pm PT is Meredith Broussard, data journalist, professor at New York University, and author of Artificial Unintelligence: How Computers Misunderstand the World. Her new book, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, argues that the problem with tech stems not just from techno-solutionism, but from techno-chauvinism.
RSVP to our online Q&A with her at this link, or keep reading for Rishi’s review.
📖 dismantling techno-chauvinism
By Rishi Balakrishnan
Edited by Jake Gaughan
GPT-4 was released last week, accompanied by a full-court press of fanfare from OpenAI: artificial general intelligence is coming soon! Lately, the noise about the new age of artificial intelligence has been deafening. Not everyone, however, is convinced by its marketed revolutionary promise.
Meredith Broussard, a professor at NYU and author of Artificial Unintelligence: How Computers Misunderstand the World, analyzes the current hype cycle in her newest book, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. Through examples of software systems gone wrong, Broussard takes issue with the technical mindset that reduces political, social, and historical complexity to an optimization problem and applies the blunt force of technology to solve it. Departing from critiques of “techno-solutionism”, Broussard describes this approach as “techno-chauvinism”, a term that frames modern technology not as forward-looking but as recreating the injustices of the past. This grounds her two main arguments: that technology cannot be divorced from the society that produced it, and that over-reliance on technology leads to its application in areas that require care and nuanced decision-making.
The reasons given for deferring decision-making to computers often rest on the assumption that computers beat people at two things: objectivity and efficiency. Humans lie, but data does not. Data reduces complex situations with varying contexts to a clear set of metrics, making accuracy simply a measure of how well a system represents a complex world. Claims about the accuracy of machine learning systems are backed by mathematical and statistical principles, and this veneer of science redirects trust from messy human decision-making to impartial statistical systems. But it ignores historical patterns of injustice. Broussard argues it should be no surprise that predictive policing algorithms recommend over-policing neighborhoods of color, or that loan underwriting algorithms approve Black homeowners at rates far below white ones. All of this is encoded in our supposedly ‘objective’ data.

What further increases the appeal of algorithmic decision-making is speed. This isn’t an accident: efficiency sits at the core of much computer science education and research. There’s a special joy in making a computer program run as fast as possible, but speed comes at the cost of understanding complexity and nuance. When computer scientists focus exclusively on building the fastest systems, the shortcuts they take often reflect pre-existing biases in society. A one-character variable for gender (‘M’ or ‘F’) saves precious memory in a database, but disregards the experience of everyone who doesn’t fit neatly into either category. By its nature, computing reduces the complex, nuanced human experience to binary representations.
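To make the one-character gender field concrete, here is a minimal, hypothetical sketch (my illustration, not an example from the book) of how such a schema discards anyone who doesn’t fit its two codes:

```python
# Hypothetical schema: gender stored as a single character to "save space".
# The space saving comes at the cost of erasing everyone outside two codes.

VALID_CODES = {"M", "F"}  # the only values the column can hold

def store_gender(value: str) -> str:
    """Coerce input into the one-character schema, discarding nuance."""
    code = value.strip().upper()[:1]
    if code not in VALID_CODES:
        # A real system might raise, silently default, or drop the record;
        # each choice misrepresents the person's actual identity.
        raise ValueError(f"schema cannot represent gender {value!r}")
    return code

print(store_gender("Female"))  # -> F
try:
    store_gender("nonbinary")
except ValueError as err:
    print(err)  # the schema has no way to record this person
```

The bias here isn’t in any single line; it’s in the data model itself, decided long before any user’s input arrives.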
While computing technology is new, Broussard argues that the techno-chauvinist focus on “objective”, scientific categorization isn’t. She traces this line of thinking back to Carl Linnaeus, the Enlightenment naturalist credited with developing taxonomies of plants and animals, who applied those same ideas to create racial classifications of humans. When techno-chauvinism is the starting point of our evaluation, tech is not even neutral by default: it’s expected to discriminate. This behavior is more than just a glitch.
But Broussard’s analysis is not the prevailing sentiment in the industry. Investment in systems that neatly solve messy social problems invests those systems with authority they don’t deserve. Treating a machine learning system as a crystal ball leads to its use in situations that require human consideration, not statistical analysis. Broussard recounts the International Baccalaureate program’s process for algorithmically assigning student grades at the height of the COVID-19 pandemic: instead of evaluating actual student work, grades were determined from historical data on a school’s performance, discriminating against hard-working students at under-resourced schools. In criminal justice, Broussard points to the Chicago Police Department’s use of software that predicted which specific people would be involved in a shooting. Out of blind faith in the software, police officers repeatedly paid visits to Robert McDaniel, a Chicago resident with a relatively clean record, despite his many protests. Neighbors grew wary, and McDaniel was later shot on suspicion of being a snitch. The software’s predictions, and the department’s trust in them, resulted in the shooting of an innocent man. It’s absurd to think a system can accurately foretell a person’s future from a couple of data points in a police database.
So what’s the solution? Defeating techno-chauvinism requires a commitment to the public interest: centering the public good in conversations around technology instead of assuming that new technology will automatically get us there. Broussard highlights two specific manifestations of public interest technology: investigative journalism and algorithmic auditing. Stories like ProPublica’s investigation of the COMPAS recidivism algorithm or The Markup’s exposé of Facebook’s discriminatory advertising practices not only called public attention to unfair algorithms but also forced discussions about solutions and redressing harm. Algorithmic auditing, especially when supported by regulation, explores when (and for whom) a model can fail and takes preventative measures to ensure new models don’t reproduce old injustices.
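One basic step in such an audit can be sketched in a few lines. This is my own toy illustration (not Broussard’s method, and the data is invented): comparing a model’s false-positive rate across demographic groups, the kind of disparity ProPublica measured in the COMPAS risk scores.

```python
# Toy disparity audit: how often does the model wrongly flag people,
# broken down by group? Records are (group, predicted_high_risk, reoffended).
from collections import defaultdict

def false_positive_rates(records):
    fp = defaultdict(int)   # flagged high-risk but did not reoffend
    neg = defaultdict(int)  # everyone who did not reoffend
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

# Invented data in which group "B" is wrongly flagged twice as often as "A".
data = [
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", False, False),
]
rates = false_positive_rates(data)
print(rates)  # {'A': 0.25, 'B': 0.5}
```

A real audit goes much further, but even this simple breakdown makes visible a harm that a single aggregate accuracy number hides.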
Overall, Broussard provides an incredibly accessible primer on the problem of blindly applying technology to complex social questions. I find her description of the technical mindset behind this solutionism both accurate and compelling, and I agree with her call for a renewed focus on the public interest when creating technology.
However, underlying the ideology of techno-chauvinism is the fact that automating judgment is incredibly profitable. Broussard dismantles the philosophical underpinnings of techno-chauvinism, but the structure of venture capital and Big Tech means it likely isn’t disappearing any time soon. These material incentives drive the hype cycles around AI and should make us wary of how efforts to regulate AI are often co-opted by those they seek to regulate. Algorithmic auditing and investigative journalism are necessary to limit the damage of current technology, but I would also have loved an exploration of an alternative politics of software design. Broussard briefly touches on such alternatives: universal design principles drawn from disability studies, juries that involve community members in the software development process, and movements like Afrofuturism that integrate cultural history into science fiction.
While I would have appreciated a deeper development of a politics that imagines liberatory, collectively designed technology, the book makes an important contribution by providing a framework for analyzing technology. We cannot assume that new technology solves familiar injustices; we have to build technology with an eye toward equity and fairness. The creators of technology must be held accountable to their promises, and Broussard shows us how.
Rishi Balakrishnan is a software engineer based in Berkeley.
Reboot publishes essays and interviews reimagining tech’s future every week. If you liked this and want to keep up, subscribe below ⚡️
The WSJ wins award for the all-time worst explainer of the SVB crisis.
My favorite contrarian ridicules the popular narrative of tech company masterminds hacking user psychology or national politics, when in reality his ex-employer Facebook struggled endlessly even to "make users post more" or "make people like Facebook."
This essay on “putdownable prose” laments what is lost when we ask all our books to be page-turners, where the central aim is to entertain the reader rather than to challenge them.
The most fun thing I read this week was this post on Flaco the owl.
💝 closing note
From the community recently:
Daniel Bashir talked to writer Ken Liu about how sci-fi can help technologists for a podcast.
Alexa Jakob reflects in a blog post on why DEI alone won’t make AI ethical.
I was interviewed about Reboot and reimagining techno-optimism. Watch the video or read the transcript here.
Toward the public interest,
Jasmine & Reboot team