Today, we’re sharing both the full text and the audio version of a Kernel piece by Archana Ahlawat, to all subscribers (free and paid). Most other audio versions are paywalled — if you’d like access to audio versions for other essays but aren’t able to pay, email email@example.com and we can set you up with a complimentary subscription, no questions asked.
In this essay, Archana brings her extensive organizing and nonprofit experience in conversation with the tech industry’s vision of progress. It’s worth a read (or listen) in full.
From the Non-Ideal to the Ideal: what technologists can learn from activists
By Archana Ahlawat (she/her)
“When organizers imagine a world without poverty, without war, without borders or prisons—that's science fiction. They're moving beyond the boundaries of what is possible or realistic, into the realm of what we are told is impossible. Being able to collectively dream those new worlds means that we can begin to create those new worlds here.”
— Walidah Imarisha (Williams, 2015)
The tech industry, with all its science fiction roots and hopes, is supposed to embody this imaginative worldview. At its boldest, the culture of techno-optimism certainly appears to call us to a similar practice of unconstrained visioning and optimism for the future. Yet the bombastic revolutionary rhetoric, from well-meaning young technologists to venture capital moguls, falls short.
It is true that technology companies continue to disrupt and reform major aspects of society. And yes, many in the tech industry at least endeavor to “change the world,” as the old tech mantra asserts. But even those who agree we need to build for material change, to build for good, often lack a robust theory of change and a strategy informed by a grounded, pluralistic approach.
When technologists think about changing the world, we should adopt the more militantly optimistic, visionary, and grounded practices of grassroots organizing. This is not as radical a shift as one might think. The relentless drive to make the imagined real, the desire to question assumptions to arrive at better solutions, and a sense of high agency are foundational aspects of both organizing and the tech industry’s dominant modes of thinking.
But how do we want to change the world? This is a question of our individual and collective imaginations. Activist visioning and building processes, which are fundamentally rooted in communities and context, are instructive for anyone seeking to make change. They also provide a useful contrast to the tech industry’s often individualistic and universalizing methods.
Even when armed with an imaginative vision for the future and democratic methods, it can still be challenging to identify how to navigate towards this envisioned bold ideal when we live in a nonideal world. The way that activists balance long-term, ideal visions with immediate gains is instructive here.
By connecting and contrasting the tech industry’s rhetoric and implementation of change with strategies employed by activists, specifically grassroots organizers, we further expand our visions for what is possible, ensure we brainstorm and build in participatory ways, and strive towards ideals more carefully.
Both technologists and organizers change the world. Small startup teams can disrupt establishment companies and practices and build technologies that quickly become part of the fabric of everyday life. Grassroots organizers in every era catalyze fundamental changes to institutions and norms that govern society, politics, and markets.
The lofty goals of both often emerge from taking the speculative seriously. What do we want to see in the world that does not currently exist? For technologists, the answers might evoke the inventions of golden age science fiction, with its flying cars and space cities. Those oriented towards social change might focus on the invention of new institutions and norms, like a noncoercive economic system that does not require everyone to labor for survival.
What worlds do we want to live in? Many futures are possible, some more liberatory, fulfilling, caring, and just than others. The question of possibility prefigures our desires for the world — the extent of our social and political imagination constrains or expands our visions for the future. What is possible may just be what we allow ourselves to think is possible.
The classic philosophical discussion around ideal and nonideal theory is useful for illustrating why this question matters. Ideal theory is concerned with what ought to be, our most ideal vision for the world, while nonideal theory takes our current world and theorizes immediate steps we can take to improve it. Those focused on ideal theory show how it can be used to formulate an overall long-term strategy that informs the short-term tactics we can take given our nonideal world. However, critics of this approach point out that the ideal may be so radically different from our current society that it is impossible or merely highly improbable that we reach that point. In this case, they argue it is better to advocate for incrementalist reform that takes where we are and seeks to marginally improve it.
Though tempting in the face of uncertainty and inevitably incomplete information, incrementalism is akin to a greedy hill-climbing algorithm. Without better alternative solutions, we are bound to local-maxima worlds (Green, 2019). In this analogy, a mountain range represents the full range of worlds that could exist, with our current world being one point. We might have some idea of what the highest peak looks like, but we don’t know exactly where it is or how to get there from our coordinates. This challenge is no excuse to mindlessly follow the path of highest marginal improvement without considering whether that path is the best one to be on.
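The hill-climbing analogy can be made concrete with a minimal sketch. The landscape function, step size, and peak locations below are illustrative assumptions, not anything from the essay; the point is only that a greedy climber that accepts every marginal improvement settles on the nearest small peak and never finds the higher one elsewhere in the range.

```python
# A minimal sketch of greedy hill climbing on a hypothetical 1-D "landscape"
# of possible worlds. The function and its peaks are invented for illustration.

def landscape(x: float) -> float:
    """Two peaks: a small local maximum near x=1, a higher one near x=4."""
    return max(0.0, 1 - (x - 1) ** 2) + max(0.0, 3 - (x - 4) ** 2)

def greedy_climb(x: float, step: float = 0.1) -> float:
    """Move to whichever neighboring point improves the current value;
    stop when no neighbor is better. Distant, higher peaks are invisible."""
    while True:
        best = max([x - step, x, x + step], key=landscape)
        if landscape(best) <= landscape(x):
            return x
        x = best

# Starting near the small peak, the climber converges to it and stays there,
# even though a much higher peak exists around x = 4.
final = greedy_climb(0.0)
print(round(final, 1), round(landscape(final), 1), round(landscape(4.0), 1))
```

Starting at 0, the climber stops near the local peak at height 1, while the peak at height 3 goes unreached; only a different starting point or a non-greedy strategy would find it.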
For the most part, thought and action directed at changing the world are dominated by the nonideal stance.
When guided by short-term thinking combined with a drive for profit maximization, it is easy to take current trends and double down on them with little consideration for what more is possible. The view of technology as a universal tool to solve any problem lends itself to this, as it can lead to a narrow optimization that loses sight of externalities and second-order effects. The rapid proliferation of surveillance technology, especially for crime and policing, is a prime example.
Whether embedded in smart cities or accessed through apps, surveillance technology claims to make people’s lives more seamless and safe. Take Citizen, for instance, an app that provides real-time incident reporting via video and text as well as paid protection services. Taken at face value, it wants to empower individuals to monitor and protect themselves and their communities from crime, disaster, or perceived risky situations. In this framing, the core problem is individuals’ quick access to information and safe environments, and protection is achieved through hiring private security agents (Robertson, 2021). This narrow framing suggests that the most effective and profitable way to create a safer world is through privatized, crowdsourced mass surveillance.
In a vacuum and at a micro level, Citizen might allow an individual to feel marginally safer. In fact, at surface level this solution looks like a natural follow-up to the proliferation of crowdsourced videos of natural disasters, potentially dangerous encounters, and police brutality. It is one step up a particular peak in our problem landscape. However, choosing this path mistakes tactics for strategy and fails to consider what else is possible.
Disseminating information through social media can be a useful method for calling attention to an issue and receiving help, especially when public resources are overwhelmed or absent, as in the case of natural disasters. In the specific case of police brutality and misconduct, sousveillance, in which citizens monitor authority figures to hold them accountable, can be an effective tactic to raise awareness and alarm. It can help connect community members to one another. But digital information sharing is not an ideal end in itself, especially given how social media platforms ultimately thrive on extreme content and extreme emotions.
As a for-profit social network company, Citizen benefits from fear-driven usage, stokes user anxieties, and encourages vigilantism (Ashworth, 2021). The ACLU (American Civil Liberties Union) offers a comparable sousveillance app, Mobile Justice, which seeks to empower individuals by allowing them to record from the app, streaming to selected close contacts and the local ACLU. While the two apps are similar, Mobile Justice is directed at unveiling abuses of power and is framed as a tactic for harm reduction, not as the ultimate solution to danger. It also does not provide options to privately summon protection agents. Mobile Justice’s overall purpose is connected to an overarching strategy of holding public institutions accountable and pushing to fundamentally restructure them.
Mobile Justice was developed from an understanding of the broader history and context around crime, policing, and neighborhood safety. Surveillance apps and tools like Citizen, Nextdoor, and Amazon Ring assume that the way our society has defined crime and safety is correct and static. But even a rudimentary understanding of history would reveal how notions of crime and public safety have been produced and deputized for the maintenance of racial and economic hierarchies for centuries. Reporting on these apps has repeatedly shown they amplify racism and lead people to become more irrationally fearful. Taking the history of policing seriously would mean anticipating some of these impacts.
By taking the nonideal route, these surveillance technology solutions lead to worse worlds, with less democratic accountability, less trust amongst community members, and an even greater culture of fear and suspicion. In the end, tools like Citizen have not been effective at increasing public safety (Lustbader, 2019).
Often, these nonideal solutions follow naturally from a tech-first mindset of changing the world, particularly when dealing with political and social issues. Consider our mountain range analogy again: where we are situated in the terrain fundamentally impacts our point of view, understanding of what problems exist, and where the highest peak is. Even with an expansive, open-minded imagination, actually choosing between a set of possible worlds involves subjective value judgments informed by our backgrounds.
As we know well, the tech industry has plenty of diversity and inclusivity problems. This of course influences what sorts of problems technologists have personal experience with and consider significant. However, even with increasing representation of various identities and backgrounds, the culture of the industry can be intellectually homogeneous, heavily influenced by Silicon Valley and its iconic entrepreneurs and investors, as well as the sheer amount of wealth flowing in. The jokes about startups inventing problems that don’t exist, or that exist only in the cities where tech workers live, reflect this.
It is easy to understand how surveillance technologies are seen as the key solutions to problems of public safety when they are typically developed from the tech industry’s universalizing vantage point. The convenience of these digital resources matters most, never mind how they actually impact different types of people and communities. If the groups creating these solutions have historically not been targeted by the police and the surveillance state, or are protected by their class status, the full impacts of such technology may be overlooked or simply ignored.
The Silicon Valley founder mythology is another prime example of this individualistic mentality. From coverage of single entrepreneurs like Steve Jobs and Elon Musk to the slew of resources on developing what it takes to be a good founder, the emphasis is on the agency, ability, and intellect of one person or sometimes a small group of people. We repeatedly uplift an individual’s dreams and values at the expense of more democratic visions.
It is essential for technologists to instead develop a pluralistic practice of identifying problems and developing solutions. Only then can we have a clearer, fuller view of the terrain. Let us look to organizers for insight here.
Similar to the individual technologist or founder, activists must cultivate an indefatigable sense of agency to chip away at their missions. Significantly, the activist’s personal agency must always be connected to that of fellow community members. It is not enough to believe merely in one’s individual high agency. A sense of our collective agency and collective ability to change ourselves and our institutions is integral to organizers’ work. This requires that brainstorming new worlds and worldbuilding itself be participatory, agonistic processes.
Organizers take a relational approach to defining and building out futures. This means that their analysis and action plans come from the grassroots, from countless conversations with community members, and not as prescriptions from above. This comes from a view that “people are not passive cognizers that contemplate and grasp the world in abstract forms from a distance; instead, knowledge and understanding emerge from concrete lived experiences.” We are individually situated in the world and can collectively become more clear-eyed through dialogue and disagreement.
Participatory design and development techniques are nascent practices in industry that adopt this kind of relational approach by involving community stakeholders in all phases of a project. Co-creating with people situated all across the mountain range will yield more sustainable and useful projects. Aspects of participatory design fit into the broader approach of design justice, which draws on key tenets of activist work, especially the call to center the experiences and needs of the most marginalized and impacted communities (“Design Justice Network”).
Instead of adopting an illusory “view from nowhere,” organizers believe the most powerful and true visions of liberation will emerge from those who have historically been most unfree. This community-centered analysis leads to a theory and practice grounded in first commitments. Sarah T. Hamid, an abolitionist organizer and researcher focused on combating carceral technologies, discusses how defining first commitments — the specific values and visions you and your organization are committed to — helps guide your actions and overall strategy, including the nature of compromises or incremental steps.
Relational ethics and first commitments together form a political orientation that can guide us in evaluating the short- and long-term benefits of all possible reforms. When tackling thorny political and social problems, defining one’s orientation can help avoid nonideal solutions.
Facial recognition and surveillance offer a particularly useful example. Technologists who are worried about the proliferation and abuse of these technologies are prone to first looking for a techno-fix. After the seminal Gender Shades study, many technical interventions focused on making facial recognition systems more inclusive, so that they can better identify people of color, especially women of color. While these developments can be technically helpful, they obviously do not curb the use of facial recognition for surveillance and policing. Making these technologies more inclusive also makes their abuses more efficient.
When technologists instead collaborate with organizers and those from other disciplines, such as in spaces like the recent Beyond Fairness computer vision workshop, a wider and more rigorous array of interventions can emerge (“Beyond Fairness”). The workshop sessions highlighted research on diversity in training datasets alongside conversations with activists and legal scholars focused on surveillance technology and racial justice.
We have seen how the incrementalist or nonideal approach is often insufficient. But most radical big ideas still require a series of steps that bring the world closer and closer to their full manifestation. How do we distinguish between short-term-oriented reforms that cement existing trends and reforms that take us a step closer to the ideal? This is difficult to parse when many different reforms can improve the material situation, yet also foreclose other paths we haven’t considered.
Grassroots organizers focused on structural social change are well-trained in this. To advocate and organize for radical change is to develop an intimate understanding of the materiality of living conditions, with particular focus on the failures of the world — inequality, injustice, abuse, etc. — while also centering a broad analysis of the societal structures, institutions, and norms that facilitate and enable these failures. It is necessary to orient towards long-term change, towards ideal visions for the world, while also tactically working on harm reduction, which constitutes marginal improvement.
In practice, this means that many social change activists aggressively promote and work on radical ideas and policy solutions at the same time as working with or within a system. Having an eye towards the ideal is no excuse to escape responsibility and action in the present.
As an example, abolitionists like Mariame Kaba have been organizing their communities and spreading their ideas for decades, even without any mainstream traction. Kaba has written and spoken frequently for the last twenty years on how she aligns her work with a belief in long-term abolition. She has developed a variety of nonprofit campaigns, research projects, and organizing projects on police brutality, transformative justice, youth decarceration and leadership, and violence against women. These projects have clear material impacts, for instance, on the young people she supports. They also serve to educate more and more people on abolition and the harms of the prison-industrial complex. This political education work helps upend the narrative around policing, which then expands the scope of what popular changes could look like.
The types of incrementalist steps organizers choose not to fight for are just as important to understand, especially since many of those are tech-oriented solutions. Some liberal reformers have advocated for digital versions of carceral apparatuses, like a digital border wall or electronic monitoring systems embedded in incarcerated people’s ankle bracelets. One recent suggestion from a major prison contracting company is a controlled virtual reality system that allows inmates to experience outside life and virtual visits in lieu of in-person interactions (Cox, 2021).
These are all examples of systems that may at first appear to be more humane reforms to a flawed carceral system. However, anyone with an eye to long-term abolition, or even just decarceration and transformative justice practices, should be able to see that these digital solutions simply funnel more resources to a bloated institution, enable frictionless expansion of the incarcerated population, and further reduce their freedoms. All of these trends make abolition, decarceration, or any fundamental transformation of this system even more unreachable. The long-term perspective is essential, or else we may lose out on other paths and better worlds.
By adopting the activist’s values-based, grounded, relational, and critical approach, we can better understand how to question, disrupt, and build more responsibly and rigorously. The tech industry should not endeavor merely to tinker at the margins of legacy infrastructures, or to rewire insufficient and inequitable norms and institutions without questioning underlying power differentials and their implications. We should not dismiss the genuine desire to drive progress and prosperity with technology, but must redirect this energy and reorient our thinking.
Calls from technologists and activists to think bigger, to question all assumptions, and to change norms are often met with disbelief and derision. But this is no excuse to constrain our visions for the world to local maxima. We must have the commitment and discipline to move beyond technologists’ “purely gestural kind of courage,” so that we may imagine and enact better, more just worlds (Daub, 2020).
The digital home and full reference list for this piece, as well as for all other Kernel pieces, is at kernelmag.io.
Archana Ahlawat is an engineer, researcher, and writer. She was most recently Founding Engineer at Color of Change, and previously researched surveillance tech and data brokers with Just Futures Law.
💝 closing note
A lot happened in tech news this week. Here are a few items that are particularly relevant to Archana’s essay:
News broke that “tech for good” darling Crisis Text Line was selling conversation data to a for-profit spinoff. They’ve since stopped, but the breach of trust remains. I enjoyed this thoughtful thread from a former academic collaborator with CTL.
More notable departures from Google Ethical AI to join Dr. Timnit Gebru’s Distributed AI Research Institute. Read this essay from Alex Hanna, DAIR’s new Director of Research.
A commentary on a popular mathematical method in economics, and its limitations in building towards an ideal world (see thread)
Hoping everyone is staying safe, warm, and imagining ideal futures.