Can Tech Really Help World Peace?
What if a VPN was a better tool to build empathy in a conflict zone than a VR headset?
We rarely get pitches from non-technologists, but I loved learning from Laura how her work in conflict mediation has been impacted by tech. It’s radical to realize that we have all the tools we need for peace-building; it’s entirely a choice on how we choose to deploy them.
—Shohini Gupta, Reboot Editorial Board

Last year, the United Nations equipped its peacebuilding efforts with an unexpected tool: virtual reality headsets. The idea was to "help conflict parties step into their opponents' shoes and enhance reciprocal understanding of local communities." The initiative is part of a nascent but quickly growing field within peacebuilding: peacetech. Peacetech is as broad-reaching as the name suggests; the field examines how technology can be developed and used in peacebuilding efforts. It sits within the wider field of "peacebuilding," the industry dedicated to supporting peace processes and resolving armed conflict.
Peacetech can be as simple as mediated dialogues over Zoom, smartphones for cross-border communication, or social media literacy campaigns around polarization and community-building. But there are also far more technically complex streams of peacetech, like software aiming to predict spikes in wartime violence or the virtual reality exercise hosted by the UN. The UN’s VR initiative describes its approach as "cautious," noting that "its application remains limited, making it difficult to fully grasp its efficacy," but insisting that it’s "not just a gimmick." But in armed conflict, with mass media blackouts and human security concerns at their most severe, are VR headsets really life-saving tools, or even usable in peacebuilding settings in 2025 (or the near future)?
Western donors have sometimes encouraged and funded research that’s disconnected from the needs of residents, activists, and peacebuilders in conflict-affected areas, creating two often-separate worlds of "research" and "practice" in the peacebuilding/conflict resolution space that many peacebuilders are trying to bridge. The prioritization of ‘neutral third-party’ mediation has downplayed the inevitable roles that positionality, emotion, and personal bias play in all peacebuilding efforts, further widening the gap between academic research (which can more easily cling to the label of ‘neutrality’) and practice. The longstanding academic division between conflict studies and STEM disciplines makes this worse: most peacebuilders have little to no experience with tech, putting peacetech in a precarious position. While it’s still establishing its foundations, peacetech runs the risk of becoming a playground for utopian developments responding to Western funder interests, instead of focusing on the (albeit potentially less glamorous) needs of conflict-affected communities.
An overcorrection from tech pessimism?
I was first introduced to the intersection of tech and peacebuilding through the nuclear disarmament world. I was supporting conflict resolution sessions on a nuclear-armed conflict and joined a professional forum called Young Pugwash. Pugwash was co-founded by a dissenting Manhattan Project physicist, Joseph Rotblat, who sought to bridge the work of scientists, technologists, and peacebuilders. The organization focused on nuclear, chemical, and biological weapon disarmament. To this day, disarmament remains a primary overlap between those three vocations.
Keeping this history in mind, it’s easy to understand the widespread tech pessimism in conflict resolution spaces: most peacebuilding researchers and practitioners (like me) are introduced to technology through the lens of arms races, lethal autonomous weapons, authoritarian surveillance equipment, and the ambiguous threats of "emerging technologies." Tech has essentially been positioned as a threat to peace and a tool of war, much as many climate justice communities have only ever seen tech growth hurt the environment. That is, until peacetech gained steam.
In the past five years, the idea that tech could be good unleashed a wave of excitement from donors and researchers, who were ready to inject a sense of long-overdue tech optimism into peacebuilding. Even during earlier tech booms, technology was still largely seen as peripheral or simply irrelevant to peacebuilding. But with the pandemic requiring many organizations to hold online dialogues and really examine things like cybersecurity and access, technology began to enter the forefront of peacebuilding research.
Major universities have funding partnerships for peacetech research with NGOs or institutional donors - like Notre Dame’s multi-million dollar Kroc Institute, the Congress-backed PeaceTech Lab and Drexel University, and the Lucerne and Northeastern-partnered Global PeaceTech Hub. Within these initiatives, the remit is huge - some projects look very promising, like some of the programming on digital hate speech and cybersecurity. The peacebuilding space has only recently begun looking at how everyday tech and narratives (as opposed to government policies or other high-level interventions) can impact peace, despite local realities showing how things like hate speech and censorship hugely warp daily life in conflict-affected communities. This is something that younger professionals in the field (like me!) can’t afford to ignore as our work moves forward. Put differently, funding "local" initiatives is beginning to be seen as just as important as funding high-level meetings.
However, local organizations (a loaded term largely used to describe peacebuilders from or in the conflict-affected area at hand) haven’t been included in the "tech craze" as readily as Western, highly-resourced researchers. Combing through the funding pipelines of peacetech, I can't find many grassroots organizations - particularly in the Global South - receiving much of this funding. I can’t speak to any widespread consensus among local practitioners, but the reality that funding often flows West-to-West, combined with precarious digital rights and a need for practical applications in conflict settings, makes peacetech a field that’s more accessible to Western academia than to local practitioners.
The idea that peacetech is always "good" because the premise is positive is just as reductionist as the notion that tech is always "scary" because of its past uses by the military. There’s always an opportunity cost: by funding research for future tech applications instead of the "now," or the needs of local practitioners today, we lose out on the peacetech that people may need right now.
Discipline Divide
So why is there a lag between the tech that’s needed now and the tech explored in some of these hugely-funded research programs? Developers and the intended users of peacetech seem to have different starting points. Engineers are not trained in global conflict resolution and peace processes, and are more interested in - or incentivized to prioritize - the technological complexity of their tools over practical applications. On top of this, the intended users of peacetech are further divided: some sit in Western academic spaces or large NGOs, while others are local, grassroots peace organizations with entirely different funding landscapes and access points to peace processes.
Professional peacebuilders have backgrounds in fields like diplomacy, political science, and law. At any peacebuilding conference, most attendees come from an academic background. It’s only in the past few years that local practitioners have been prioritized in several of these spaces, instead of being referenced only as "local expertise" or "sources" for research programming (i.e., an incredibly extractivist approach to peacebuilding that’s thankfully changing).
This discipline divide creates an echo chamber where social scientists pay little attention to tech issues, and technologists don’t focus on applications. I can’t think of more than two (aforementioned) forums for positive, practice-focussed interactions between disciplines. Despite social and mainstream media often intensely fuelling polarization and fake news in conflict zones, even more "sociological" concepts like tech literacy - i.e., an understanding of the good and bad uses and implications of various tools - have not been an academic priority for the conflict resolution field.
Some Western academic institutions have developed programs for engineers to focus on the ethical implications of their work, like those at the University of Waterloo and the aforementioned Drexel University. These programs are early, and there’s no information on whether their graduates have gone on to actually work in peacebuilding settings, or what tech they’ve built - but there’s certainly promise that an interdisciplinary approach may be the future of peacebuilding. The key is foregrounding the needs of those with lived experience of conflict, and prioritizing funding and other support for technologists and peacebuilders in these settings, who know the peacetech needs of their context. If the tools these researchers are building aren't usable in conflict settings in the near future, the work becomes more of a creative exercise than a peacebuilding project.
The myth of neutrality
Both technologists and mediators often share a utopian vision of being completely "neutral," acting only as facilitators who don’t take any overtly political or partisan stance. This has led to a longstanding division between peacebuilding and "activism" spaces - i.e., the realm of advocacy for particular issues, which implies partiality. Activists are sometimes viewed by peacebuilders as having the potential to isolate or derail existing peace processes if they drum up too much opposition from governments, funders, or other official parties. Because of this, when conflict resolution dialogues are being held, mediators often don’t work with activists, prioritizing things like risk mitigation and ‘neutrality.’ As such, when peacetech is brought up, mediators tend to focus on "neutral" technology applications, like online communication for facilitating dialogue. Things like VPNs are framed merely as tools for secure dialogue, rather than as tools of resistance, even in places like Myanmar where VPNs are illegal. In an ideal world, mediators would understand their deployment of VPNs as part of a wider fight for equitable digital rights, and would work more closely with activists.
A persisting issue is that “mediators” and “activists” rarely convene. It took me five years to attend a workshop that included both mediators and activists, and I wasn’t the only first-timer there. Earlier in my (still early) career, I was told that if I wanted to be a mediator, I could damage those chances by writing on human rights in Palestine. This is common in our industry because there are security risks for dialogue attendees if they become associated with an activist group that opposes official policy. But it furthers the illusion that it’s helpful for peacebuilders to avoid engaging with activists for the sake of this "neutrality."
As tech enters conflict resolution, prioritizing neutrality becomes even more dangerous, because peacetech developers and proponents can't afford not to think of themselves as collaborators if they wish to hold sustainable dialogue(s). For example, since Myanmar’s 2021 military coup, the military has repeatedly shut down the internet - 37 shutdowns in 2023 alone (the "iron curtain" of Myanmar) - accompanied by constant surveillance and censorship. Governments in Kashmir, Sudan, and Hong Kong - all hotbeds for dialogue efforts - also actively weaponize internet access and surveil the public.
Peacetech forces the hand of peacebuilders. If you want your tools to have practical applications, you must engage with inequities in tech access and cybersecurity. Take the example of VR headsets or even digital dialogues on Zoom or Teams - if peacebuilders wish to convene dialogue, we must first work with tech activists to ensure community members can safely use and access secure communication platforms. If not, peacetech developments risk becoming a bleed-out of opportunity cost, funneling funds to Western thought experiments instead of supporting sustainable conflict resolution. Moreover, risk mitigation efforts become muddled: if a participant in Myanmar requires a VPN to access your online meeting, they are required to violate the laws of Myanmar’s military government, putting both themselves and your dialogue at risk.

Seeds of Optimism
Several peacebuilding NGOs and researchers have recognized the need for more clarity on peacetech "protocol." The map of peacetech programming is diverse, ranging from the experiments above (VR headsets and AI conflict analysis), to promoting smartphones as citizen journalism tools in conflict-affected areas like Iraq, to anti-hate-speech and peacebuilding projects online, to supporting dialogue efforts in Afghanistan through accessible online software. Projects like these, which have already entered the practical or "field" stage, seem both useful and impactful for the activist and peacebuilding spaces (which are becoming more and more closely linked). Not only are the tools low-cost and easy to distribute, but the very idea of promoting citizen journalism within peace programming represents a shift towards embracing digital rights and access as part of peacebuilding. Successful peacetech programs recognize the need for digital rights promotion and equitable internet access.
Things like low-cost cell phones, VPNs, and secure online meeting platforms are far more realistic peacetech tools that respond to the direct, material needs of conflict-affected areas than high-tech projects. In a 2019 report, the Centre for Humanitarian Dialogue described the need for the "field" (i.e., peacebuilding and conflict resolution) to embrace online dialogue platforms, writing that "while significant opportunities are discerned, the integration of digital technologies into the conflict management and mediation toolbox also requires a risk management approach guided by the ‘do no harm’ principle," and prescribing increased digital literacy and cybersecurity skills amongst dialogue practitioners. They go on to note that "[digital technologies] do not in themselves bring revolutionary change to the practice of mediation, which remains a human-intensive endeavour."
The report doesn't talk about "peacetech" as understood by donors, instead focussing on the role of things like social media and bugged communication tools in dialogue. In the six years since its publication, the report’s point still rings true for the more ambitious "peacetech" endeavours. Whether we like it or not, technology will have an impact on peace processes (even if it’s a primarily logistical one, like not being able to access a Zoom call). We should harness technology’s potential for practical, immediate, and locally-informed efforts. It’s not the technology itself that is revolutionary to peacebuilding, it’s how it’s used. I hope that all the tools that fall into the very broad "peacetech" label are accessible and equitable, responding to the needs of conflict-affected communities in crises and not an exercise of throwing spaghetti at the wall to see what sticks.
Technology is inanimate: the applications and values we place on it are what bring it to life. But for decades, "technology" was a living, breathing beast to the world of conflict resolution and disarmament practice, evoking Joseph Rotblat’s calls for a moratorium on atomic physics and the dissenters of military-led weapons research.
When peacetech was introduced, so were new waves of optimism around technology for peacebuilders: hope was renewed, and funding reflected this. Right now, there isn’t a consensus around the ethics of peacetech and its future trajectories, which has materialized as seemingly disorganized priorities around which tech gets funded. Peacetech has no shared founding principles or "manifesto." It’s at risk of falling into the longstanding traps of Western-led peacebuilding: having little ownership by local, conflict-affected experts, being far too academic to have practical implications, and divorcing itself from the activist spaces it needs. If peacetech is to work, the few successful examples so far show us that engaging local experts, responding to practical needs, and advocating for digital rights are all necessary for any kind of ethical application.
Laura O'Connor is a London-based writer with five years of experience working in gender and peacebuilding. Her Instagram is @apostrophereads
Reboot publishes essays on tech, humanity, and power every week. If you want to keep up with the community, subscribe below ⚡️
🌀 microdoses
PEPFAR is very effective (did EA know about this?)
Cutting Medicaid is really unpopular and hasn’t been finalized yet, but big insurance companies like United Healthcare are already cutting jobs and programs that deliver patient care in anticipation
I went to a city with no public transit recently (link)
💝 closing note
We’re always interested in pitches about tech and all the messy and beautiful ways it intersects with the rest of the world.
Write for Reboot by sending us a pitch here.
—Shohini & Reboot team
Yeah, EA definitely knew PEPFAR is effective! It’s often used as an example of a successful government program in introductions to EA. Shortly after, they usually emphasise how little of your taxes actually go to it :( Indeed, one of the reasons EA doesn’t talk about AIDS all that much is exactly because PEPFAR exists and is already kinda decent—there is usually more impact to be had in more neglected global health issues. If PEPFAR ends up truly defunded, I think we’ll be talking about AIDS a lot more though. (Also the person who led pepfarreport.org founded Stanford EA, and is one of the most well-known and influential effective altruists.)