⚡️ Open Source and Politics
A conversation with programmer and activist Coraline Ada Ehmke
Another week, another weak move by aggrieved Twitter CEO Elon Musk. After first mysteriously banning links to Mastodon, Twitter’s federated competitor, Twitter issued an official policy prohibiting links to Facebook, Instagram, Truth Social, Nostr, Tribel, and Post.1 Amusingly, it’s entirely plausible that public interest in and use of Mastodon went up after this call—its energy is now far more “cool rebel platform” than “Twitter with HOAs” (is Elon the HOA president to rule them all?).
Anyway, Mastodon has been around since long before Elon made it a thoughtcrime to reference. And free and open source software (FOSS), of which Mastodon is a prime example, has been around even longer. So I thought it a good time to share this conversation from Kernel Magazine Issue Two—in which open source developers Jacky Zhao and Coraline Ada Ehmke discuss not only the broad campaign for a decentralized web, but also Coraline’s fight for social justice within open source.
Open Source and Politics: A conversation with Coraline Ada Ehmke
This interview has been edited for length and clarity. The original interview can be read at kernelmag.io.
How did we get here?
Jacky: I’m curious about how you first got involved with the tech justice movement and open source.
Coraline: I was a software engineer for 26 years. When I started my gender transition in 2013, I started experiencing firsthand some of the pervasive problems in open source and the tech industry that I had only been aware of intellectually before. It woke something up in me. It was around that time that I became more interested in issues of justice and equity in technology.
I remember back in the early 2010s, when tech conferences started becoming popular, there was a big fight to get tech conferences to have codes of conduct. It’s something that seems so normal and natural today, but it was actually a very, very difficult fight.
2014 was also when I wrote version 1.0 of Contributor Covenant, which was the first code of conduct for open source communities. I feel like over the years — eight years now — that Contributor Covenant has been around, we’ve seen codes of conduct become more normalized in open source communities. Today, it’s hard to count the number of adoptions. It’s in the tens of thousands.
One of the things I worry about with that normalization is that we don’t recognize our history. People in tech have very, very, very short memories, which is part of why we keep reinventing the same stuff over and over.
People in tech have very, very, very short memories, which is part of why we keep reinventing the same stuff over and over.
J: In my free time, I do a lot of hacking and open source projects. I initially started posting a lot of my projects on GitHub and never expected any real usage, so I never really thought about being “a maintainer.”
It wasn’t until my very first project started getting real usage that I realized there had never been any real introduction to what it means to be a good maintainer. I started getting contributors who were like, “Hey, you actually don’t have a license or a code of conduct in your repository.” Then I realized, wait, I actually don’t know much about how we got here. That kicked off a personal learning journey.
C: Yes. So that takes us to 2018. An activist group called Mijente was in the midst of their No Tech for ICE campaign. They were posting the names of companies that had contracts with CBP or with ICE. One company that got called out was Chef, which plays a large role in infrastructure and server deployments and was necessary for a lot of large-scale internet operations.
Seth Vargo, who had previously worked on open source tooling at Chef, saw that Chef was profiting from human rights abuses at the border. In an act of conscience, he pulled all of his open source code that was part of the Chef ecosystem and made a statement about why. But within two hours, both GitHub and RubyGems forcefully restored the code he took down because it was affecting the many companies around the world who depended on those libraries.
The open source establishment, or what I call “open source traditionalists,” saw this and said to Seth, “No, you can’t do that because open source is neutral.” I saw that as an epic moral failure on the part of the establishment. So I wrote version 1.0 of the Hippocratic License, which was not intended to be a viable license, but rather a lightning rod for broader discussion around the neutrality of open source tech in general.
J: That’s really interesting. At school, I study computer science and philosophy. One of the foundational pieces that I read that really shaped my thinking was “Do Artifacts Have Politics?” That paper was really influential in terms of making me think, “Actually, this technology that we spend so long claiming to be neutral actually has political implications as well.” A lot of people working in tech spend a lot of time trying to deny the fact that it does.
C: A couple of years back, I gave a talk called “The Rising Ethical Storm in Open Source.” I traced that illusion, or, honestly, that lie, that computer technology is neutral.
J: I watched that talk. One of your quotes that really resonated with me was: “I believe that as technologists, we have a moral imperative to prevent our work from being used to harm others. Responsibility is about impact and not intent.”
One model I often use to think about tech is to treat it as a multiplicative tool instead of something that’s purely additive.2 Multiplicative in the sense that it will only exacerbate the existing discrepancies in distributions of power, right? Some people will obviously be way better off, and then there are some who will be disproportionately harmed. I feel like some people hold a very strong belief that technology is purely additive, that it will just raise the baseline for everyone who uses it. But there’s always a hidden tradeoff or downside.
I wonder whether the developers of these technologies should be responsible for anticipating how their tools will be used. If you build an open source project, it’s hard to tell what kind of people will end up using it. At what point do developers have to start thinking about these tradeoffs? For example, who will my end users be, and what is acceptable use? Even as someone who is educated about these things, how do you weigh the potential upsides and downsides?
I came across the concept of the Collingridge Dilemma. It says that any effort to influence or control the further development of technology faces a double bind of two irreconcilable problems. One is an information problem: you can’t really predict what impact your technology will have until it is extensively developed and widely used. But two, you also run into a power problem: by the time you’ve extensively developed the technology and put it into wide use, change becomes too difficult because it has already become so entrenched in society. And at any point, it is incredibly difficult to evaluate impact. So what is even the ideal place to start thinking about this impact?
C: Continually. You have to do it continually. You have to do it after deployment, you have to do it after it’s widespread. You have to do it continuously.
In academia, if you’re in any of the social sciences, you have to go before an institutional review board when you’re planning a research or development project. One requirement is having an effective plan not only for preventing harm, but for mitigating it when someone actually is harmed. We don’t see that same principle applied to the hard sciences, and we don’t see it applied to engineering. Why not?
But to your point, it is very difficult to predict. One of the instruments we’re developing at the Organization for Ethical Source is a “priority of constituencies,” which comes from one of the W3C specs for HTML. There’s one sentence in the spec that says, in case of conflict, “consider users over authors over implementors over specifiers over theoretical purity.”
When you draw a line like that, what you’re explicitly saying is that even if this makes it inconvenient for adopters, even if it makes it inconvenient for developers, even if it makes it inconvenient for end users, we have to make that decision based on the most vulnerable and work upward. It may be uncomfortable, but that cuts through a lot of the ethical gray areas.
J: I want to poke from the opposing side a little bit. There’s a non-negligible number of people who argue that increased consideration for ethics, even in the medical industry, where I think a lot of this regulation is important, makes regulation too tight to enable innovation at a speed that is continually beneficial to progress.3 It keeps people from trying new things and developing technologies that could have far greater upsides than we could have predicted.
C: Every technology for the entire duration of human history has been modulated by the understanding that it exists in an increasingly complicated sociotechnical space. So if that stifles innovation, if that means a given technology is five years late, isn’t it worth being careful?
Regulations are imperfect. But they are a way of codifying constraints or guardrails. If we’re gonna reduce harm, maybe it’s okay to slow down a bit.
What Can We Do?
J: One take I have been seeing is that top-down regulation is bad and we should “decentralize.” I don’t think decentralization is objectively a good thing [on its own], but rather something that can return agency to users. A lot of my research this summer has been figuring out how to apply, for example, the net neutrality debate to data. The net neutrality debate was very much about separating content from providers; similarly, how do we separate data from applications?
A lot of our modern centralized providers are incredibly successful and have such large network effects. They have huge data moats that make it practically impossible to move from Facebook, for example, to some other provider. So a lot of it has been asking how we reclaim agency for people to choose how their data is used and stored.
I think the convincing case for decentralization is in terms of enabling agency for people to choose what types of frameworks they want, rather than having to be locked in.
C: Twenty-some years later, we’re never going to have an internet that’s 100 percent open source. And hopefully, we’ll never have an internet that’s 100 percent closed or proprietary either. What we have to recognize is the reality of where we are — that we need more than data portability. We need data autonomy, and we need permeability between closed and open systems. I believe permeability, as opposed to mobility, is an important distinction.
J: How would you differentiate permeability and mobility?
C: Mobility means you can export all of your tweets. Okay, well, what are you going to do with that? Can you import them into Mastodon? No. So when you withdraw your data, it becomes valueless. It’s not in a form that you can ever reuse. So is that really ownership of your data? No.
J: Yeah, one common theme I’m noticing in a lot of retrospectives of older peer-to-peer projects that have been around for a while but haven’t really garnered any major usage is that they’ve thought about all these ways to create a shiny new platform, but no one has really thought about how to off-ramp easily from existing systems.
C: I think the way we guarantee that kind of permeability between closed and open systems is through standards.
But the problem is that the representatives in these standards bodies are primarily private corporations. Logically, they veto things that will impact their business models. Amnesty International wrote a paper in 2018 where they flat-out said that for platforms like Google and Facebook, their entire business models are predicated on human rights abuses, harvesting data, and surveillance capitalism. Those are the companies that have not only the economic power, but also power in standards bodies and governing bodies.
J: I agree. At some point, being able to download the source code isn’t enough. Governance and accountability are critically important too.
What do you think the first step to reclaiming standards bodies even looks like? I read the Decentralized Identifiers (DID) specification, and I remember that of all the W3C members, only three had formal objections to the proposal: Google, Apple, and Mozilla. When you propose new standards that undermine the business models of these large companies, it feels incredibly difficult to get them pushed through.
Also, a lot of these standards and processes are illegible to people new to this space. These Internet Engineering Task Force (IETF) proposals are ridiculously long at times, often almost 100 pages. It feels like some of them require decades of experience to have a voice.
If you want to create a widely accepted standard, is there any way to do that without dismantling these original systems?
C: I don’t know the answer to that. But I think we have a lot of power. There’s a growing trend: More people are asking whether tech companies should pay for the tech that open source contributors are freely giving them today.
As participants in good standing in the open source community, we should find ways to hold large players accountable, both by threatening their primacy through the development of alternatives and by figuring out how we can pressure them to make their governance as intentional, equitable, and diverse as we’ve made our own developer communities.
It’s sort of a radiating effect, right? We’ve normalized codes of conduct. Now let’s normalize representative and equitable governance of open source projects. Let’s go a step beyond that and talk about standards and enforceable standards. And then we have the legal aspects.
I think of an InfoSec metaphor. You have a server on the internet. It is not secure, because it’s impossible to secure any resource on the internet 100 percent. But we add layers of protection, privacy, and security to make breaching the system not impossible, but so involved and so expensive that it’s no longer worth someone’s while.
That’s an approach we can take with the development of ethical and equitable technology as well. If we make it difficult for Google to sweep issues like accessibility standards under the rug, then we’re incentivizing them to do the right thing. Because if they don’t, they will lose status, and they will lose their ability to draw employees. We as developers have the ability to exercise moral authority.
Where Do We Go from Here?
J: How do you think we can bring about these changes to build more ethical and equitable technology? The logical path forward seems to be either finding ways to empower a more diverse set of people, or regulating how open source is used.
C: To be clear, ethical source is not about licensing. Ethical source is not about the Hippocratic License. It is not solely about legal instruments for trying to protect the vulnerable, marginalized, and under-represented. It’s bigger than that. It’s about codes of conduct. It’s about governance. It’s about social contracts and rights.
J: It’s about building those layers, right?
C: Exactly, that layered approach.
It’s not a fire-and-forget thing either. It’s not a problem that we solve at the beginning and then we’re good forever. These systems that can cause harm and perpetuate systemic inequities — they’re not static. Just as consent isn’t an event but a process, harm reduction is not an event, it’s a process.
We have to start normalizing those processes [of harm reduction at all of these different layers], if we want any chance of allowing the internet to be the incredible force for good that it has the potential to be.4
Part of that is being interdisciplinary, transdisciplinary, and multidisciplinary: bringing together people who have different areas of expertise, whether technological or social in nature, because a lot of these problems have happened before.
J: I think a big part of this is moving beyond the individualistic perception of open source as a lone hacker in the basement.
Based on personal experience, I think building a visible community around synchronous interaction with actual users makes such a big difference when it comes to maintenance. Most maintainers or creators of a software library are like, “I just want to make new features and do whatever works best for me.”
How do we enable people who weren’t those you initially had in mind to be able to use the tools you make?
But there are so many edge cases and bugs that break things for a lot of users of your library. For example, users from Saudi Arabia said that they would really appreciate right-to-left support in Quartz. How do we enable people who weren’t those you initially had in mind to use the tools you make?5
I think we can do it by curating a community that is open and accepting of more types of people and getting them to suggest and contribute.
C: Absolutely. And I think a lot of that comes down to ingroup-outgroup biases. Just look at how much open source technology is simply tooling for people just like us.
J: This is a great call for more diversity in the space as well. A friend and I noticed that the people who build developer tooling do that because that’s the only problem they’ve really known! By bringing in people who’ve had experience in other fields, you start getting useful applications of technology in those areas.
C: And we do that not through consultation, but through meaningful empowerment: getting people who are not like us into positions of power by yielding power and distributing agency.
Coraline Ada Ehmke is an acclaimed speaker, writer, engineer, and activist with over 25 years of experience in software and almost 20 years in open source. She works to promote diversity, equity, and justice in open source communities and the tech industry as a whole. She created the Contributor Covenant, the very first code of conduct for open source communities, as well as the Hippocratic License, which legally prohibits an open source project from being used for human rights violations.
Jacky Zhao is studying computer science and philosophy at the University of British Columbia. He thinks a lot about how we can better incentivize public goods funding, support better interactions with computers and data, and be more responsible stewards of technology. Currently, he spends his time working with Hypha Coop on Distributed Press and doing independent research focused on better ways of relating and coexisting on the web.
We publish essays and interviews reimagining tech’s future every week. If you liked this and want to keep up, subscribe below ⚡️
🌀 microdoses
From Jacky:
- His essay on data neutrality: a vision for the web based on personal data ownership, broad interoperability, and peer-to-peer applications.
- On web2’s “bad emperor problem” and the necessary criteria for “credible exit.”
I recently used Movetodon to mass follow a bunch of my Twitter follows on Mastodon (find me here). I’d also recommend making a personal newsletter for the sake of collecting emails, even if you don’t write (here’s a Substack referral link). Protocols over platforms, baby!
One of my most used apps, Readwise Reader, is finally in public beta. Save, highlight, and annotate everything, everywhere, all of the time.
💝 closing note
A quick highlight from the community: Klaudia Jaźwińska, Matthew Sun, Sayash Kapoor, and Mona Wang have a piece on remote work and tech worker organizing in the latest issue of Logic Magazine.
Since so many people are looking for somewhere off Twitter to share links and shitpost, we’ll let a bunch of y’all into the Reboot Discord next week. We do book clubs, IRL meetups, contributor opportunities, and more! Add your email here to join.
See you off Twitter,
Jasmine & Reboot team
Join Reboot’s Discord! Adding new people next week.
This model emerged after reading “The Ones Who Walk Away From Omelas” by Ursula K. Le Guin for the second time. The story presents a classic utilitarian problem: is it morally justifiable to inflict suffering on one person in the service of others’ happiness (and a potential utopia)? Is it then morally just to develop technology to benefit others knowing that it will exacerbate the suffering of marginalized groups? Is progress for one person necessarily progress for the collective?
Do we care more about technological progress or social progress? Historically, Silicon Valley has valued ‘moving fast and breaking things,’ but progress implies direction. What is progress toward? Who decides that? The relatively new field of Progress Studies attempts to critically take apart and answer this question, including by looking at the potential drawbacks of progress and how to mitigate its risks.
Systems of feedback and regulation are incredibly important if we want to prevent absurd and tragic events from happening on the internet. As Ali Alkhatib states in his work To Live in Their Utopia: “Absurdity follows when algorithmic systems deny the people they mistreat the status to lodge complaints, let alone the power to repair, resist, or escape the world that these systems create.”
Many treat algorithmic systems as ‘mathematically pure’ objects, taking only pure inputs and producing pure outputs. To these engineers, human lives are ‘externalities’ that spoil that purity. But impacts on humans should come first and foremost. To quote Rúnar Bjarnason from his post on the future of software: “Why does a computer even exist? The reality is that computers exist solely for the purpose of executing programs. The machine is not a metaphysical primary. Reality has primacy, a program is a description, an abstraction, a proof of some hypothesis about an aspect of reality, and the computer exists to deduce the implications of that fact for the pursuit of human values.”