THURSDAY ⚡️ Resisting AI ft. Dan McQuillan
An argument for the importance of human self-determination and consciousness of broader structural conditions.
Dan McQuillan is a Lecturer in Creative and Social Computing, but he’s not just an ivory tower academic — highlights from his bio include founding a citizen science project for young people in Kosovo with open-source hardware sensors, creating the first digital directorate for Amnesty International, and participating in the 2001 Genoa protests against the G8. In other words, McQuillan understands how to get things done in the real, physical world — which makes his academic work all the more interesting.
📖 Resisting AI ft. Dan McQuillan
Dan’s new book, Resisting AI: An Anti-fascist Approach to Artificial Intelligence, comes out in just a few days.
Join us next Thursday, August 25, for a conversation on AI, its politics, and how to move past the surface-level observation that “AI has politics.” Note this is 4pm PT, an hour earlier than usual! Thank you to Dan for being very accommodating of US time zones from the UK :)
🤖 our take: a resonant critique
By McKane Andrus and joice tang
As AI — whether a complex deep learning model or a simple automated decision-making tool — is repeatedly turned to by decision makers in the hopes of resolving any number of social issues, we hear of countless cases of harm, intended and unintended: thousands of Australians wrongly forced to repay social benefits they had received, Black activists targeted with facial recognition and other AI-enabled surveillance tools, minoritized groups censored online by “hate speech” content moderation systems. Why do our social institutions keep turning to AI, and, with the amount of work and money being poured into “Ethical and Responsible AI,” why does AI keep failing us?
Much of the existing work critiquing AI has pointed to specific problems with AI-centric systems: bias, non-inclusiveness, exploitativeness, environmental destructiveness, opacity and non-contestability, or any of a number of other valid, seemingly discrete concerns. When these issues are presented independently, however, one can be left with the feeling that they are resolvable in isolation from each other. During McKane’s two and a half years at the Partnership on AI, an organization deeply entrenched in the pursuit of “Responsible AI,” they very often felt the need for a more comprehensive theory of AI harm (why AI ends up hurting people and society), one that doesn’t just encourage playing whack-a-mole with the “ethical” issues that arise with AI in deployment1. Goldsmiths, University of London lecturer Dan McQuillan’s new book Resisting AI: An Anti-fascist Approach to Artificial Intelligence aims to provide such a theoretical framework.
Within the responsible AI industry, there are two archetypal answers to the question of why AI causes harm. From one point of view, harm is seen as stemming from a lack of understanding within institutions — and within the discipline of AI more broadly — of how to produce AI ethically and responsibly. Under this perspective, research and education are a reasonable path towards developing socially beneficial AI. From the other point of view — the one we tend to align with — harm is a result of the business models and institutional motivations behind deploying AI systems, meaning that systemic social changes are needed to actually eliminate AI harms. It can be difficult, however, to know how to go deeper than the somewhat simplistic, albeit accurate, critique that “the problem is capitalism and capitalist production of AI.” This is where Resisting AI makes its strongest contribution: McQuillan illustrates how the root of modern problems with AI is its applicability to and immediate alignment with neoliberal and far-right ideologies — what he calls the “resonances” between AI and fascism.
McQuillan’s conceptual tool of resonances between technology and social practices ends up being incredibly helpful for understanding and interrogating the politics of AI. It is related to Langdon Winner’s notion that some technologies are “inherently political” because of the social arrangements they require to function (e.g. a nuclear plant requires extensive centralization and administration). But McQuillan takes the assessment even further. By also looking at how AI becomes a lens through which we view the world, one that is itself more amenable to certain ways of thinking (e.g. an us-vs.-them mentality), McQuillan’s “resonances” make clear that it is the technical attributes of AI, not just the politics of its developers, that shape what direction the tech pushes us in and who chooses to mobilize it.
One of the recurring “resonances” throughout the book that stood out to us, given our prior work, is AI’s applicability to tasks of segregation and discrimination. In many cases, algorithmic decision-making systems exist solely to discriminate — between the deserving and the undeserving, the risky and the secure. Instead of looking into why differences might exist between groups, the typical application of AI encourages the reproduction and reification of the boundaries between them. While prior work has delved into this propensity of AI2, McQuillan makes the broader case that this is a significant reason institutions turn to AI — it provides a pseudo-scientific basis for ailing social institutions to reduce costs and direct limited resources “objectively,” in a way that is difficult to challenge. As an increasing number of institutions adopt the ideology that the problems our society faces stem from individual failings rather than systemic ones, AI allows them to construct systems that “objectively” identify these “problem” individuals, shielding from view the actual causes of social strain. In their own experience with this research space, McKane has seen how a majority of “ethical AI” work does not question whether largely unaccountable tech multinationals should be building tools that segregate and discriminate on such a massive scale in the first place, leaving us to focus on how to more accurately label and discriminate between individuals as a path towards social justice.
Another resonance McQuillan examines is AI’s potential to reframe correlative relationships as scientific understanding of the world. Because AI merely uncovers trends in historical data, deploying an AI system tells one almost nothing about why those trends exist. This in turn allows far-right and fascist-inspired actors and institutions to freely project their otherwise unsubstantiated views of the world onto the patterns AI settles on, imbuing those views with a veneer of scientism and objectivity. As McQuillan writes, “AI takes sides not simply by being a tool used by the powerful but by its inherent reinforcement of rigid dualisms and representations” (109).
Ultimately, McQuillan considers technology a secondary solution to collective action, solidarity, and self-determination, a perspective we both strongly agree with and hope to further in our own research and community-based work. By rooting his anti-fascist approach to AI in mutual aid, solidarity, and a political, Luddite-esque resistance to exploitative tech, McQuillan makes an argument for the importance of human self-determination and consciousness of broader structural conditions. However, while we appreciated his avoidance of techno-solutionism, the paths forward were vaguer than we had both hoped. To put his approach into action, he argues for self-organized workers’ councils, pointing to examples of how such groups have built counter-power and taking the time to differentiate them from traditional citizens’ juries and workers’ unions. McQuillan helps us understand how changing the social conditions of AI production would likely mitigate AI’s resonances with fascism, but he does not explore in much depth what role, if any, AI or its builders should play in the process of articulating social change.3 He argues for a prefigurative politics of anti-fascist AI, but does not spend much time discussing how we might use, create, and interact with technology in an anti-fascist manner beyond resisting it or changing material conditions entirely.4 Although we hesitate to imagine technologies without first centering affected communities, it felt like McQuillan, with his acute understanding of the fascist potential of AI, would be an appropriate guide in exploring what some prototypical anti-fascist AI systems could be.
Taking the social organization strategies McQuillan offers to heart, however, reminds us that the role of the technologist is not more valuable than any other role in society. We can and should speculate about what tech-involved futures could be, but we should do so as members of these communities, not as removed, “objective” engineers and designers. Pointing to feminist and new materialist theory, McQuillan encourages the reader to question what “objectivity” is in the first place, reminding us of both the validity and necessity of alternative ways of knowing. This should remind us, as technologists, that our specific brand of technical knowledge is but one way to understand technology and its role in the social world. Taking an emergent, secondary stance to technological development means we must prioritize social relations, letting communities figure out the role of technology in society as we figure out what resonates with a just and equitable future for everyone.
McKane Andrus is an incoming PhD student at UW HCDE and former researcher at the Partnership on AI.
joice tang is an incoming PhD student at UW HCDE.
Reboot publishes free essays on tech, humanity, and power every week. If you liked this or want to keep up with the community, subscribe below ⚡️
🌀 microdoses
Some topic-relevant recommendations c/o McKane and joice:
If you are looking for a little more insight into how technology should/could be improved on the path to collective and self-organized decision-making, Divya Siddarth writes in WIRED on how to make technology more democratic, and what we need to demand of “democracy”
Studying Up, an academic paper that explores how to invert AI to interrogate power instead of acquiescing to it
The Hammer of Ludd episode of the podcast This Machine Kills is a great description of the kind of political Luddism that Dan suggests in his book: break and resist the technologies that don’t allow for the social arrangements we actually want to see.
Regularly scheduled shitposts c/o Reboot Team:
A bit late to this and also not really a shitpost but this thread is incredible
Can’t believe my favorite didn’t make the top 3 :(
💝 closing note
If you haven’t already, come hang out with us on Threads! See last week’s post for more info (tldr get the app).
Towards spicier discourse and more chaotic posts,
Reboot team
1. This certainly isn’t a unique experience, and it’s why we have efforts such as the Radical AI podcast and the Radical AI Network (unrelated to each other) that attempt to “grasp at the root” of AI harms.
2. See especially Chelsea Barabas et al.’s Interventions over Predictions.
3. Combining some of the suggestions in the paper Roles for Computing in Social Change with McQuillan’s anti-fascist lens might be a good place to start, though.
4. We like Prefigurative Design as a Method for Research Justice or Design Justice (and the Reboot community’s review of it) for ideas on how to be more anti-fascist in your work. But also… Unionize and take part in community efforts for broader change!!