⚡ Towards Data Neutrality
A decentralized, empowering vision of the internet (NOT crypto!)
As someone who feels quite tied to the institution-flavored version of capital-a Academia, I’m always curious about how else we might go about ~knowledge production~. That’s exactly what my friend Jacky Zhao is doing: he’s spending this summer doing full-time independent research, and as someone who is very much a builder, his work is the opposite of ivory tower.
Today, I’m so so excited to share this piece from him, a mix of research proposal, manifesto, and essay. In it, Jacky shares a vision for a decentralized, empowering internet; explains why peer-to-peer and blockchain-based solutions fall short; and finally, introduces Rhizome, his name for the technical artifacts he plans to produce. (Jacky’s work is entirely self- and supporter-funded; if you’d like to contribute, you can do so here.)
🌳 Towards Data Neutrality
By Jacky Zhao
To borrow words from Robin Sloan: I want to underscore that I am sharing this idea with the hope that you will really think about it! At such a primordial stage, a proposal like this doesn’t need diffuse, drive-by attention. It needs, instead, close consideration and generous imagination.
The competitive advantage of the vast majority of today’s centralized platforms lies in their data moats and network effects. Services like Facebook, Twitter, and Reddit conceptually aren’t difficult to replicate – in fact, your average computer science graduate could probably recreate the functionality of these apps without much difficulty. Rather, the major reason these platforms remain so dominant is their data and users: Facebook has all of our childhood friends, Twitter is the go-to place for unhinged humour and political discourse, and Reddit has millions of niche micro-communities found nowhere else on the internet.
These platforms, especially aggregators, are incentivized to resist decentralization and interoperability. After all, ‘data is the new oil’. These services almost entirely depend on making sure that only they have access to that valuable data. Interoperability, on the other hand, means you no longer have a data moat, or a privileged hub position in the network.
As a result, apps have become inseparable from data. They are extractive, asking for ever-increasing access to our personal lives. We willingly sign over the ability to control our data, blindly scrolling past miles and miles of Terms of Service agreements because we know that, at the end of the day, we have no power to change what they want from us. You can’t choose which parts you like; you either use the platform and sign away all of your rights, or you don’t use it at all. Privacy and security in this world mostly mean “which company do you trust with your safety?” The answer is often the one with the largest walls and deepest moats.
Clearly, this leaves much to be desired. We spend so much of our lives on the internet that it is worthwhile to explore better ways of existing online.
An Ideal World
Decentralization is not the solution for everything, but it has value in empowering people to act decisively within their social context. Decentralizing the Web means that people gain the ability to store their data wherever they want while still getting the services they need. Decentralization is about agency: we get choice about where we store our data, who we give access to which parts of that data, which services we want on top of it, and how we pay for those.
In an ideal world, instead of being forced to accept package deals we cannot customize, we get modular, interoperable, local-first software which we can stack to a global scale. Apps and platforms in this model follow the Unix philosophy: expect the output of every program to become the input to another, as yet unknown, program. Like the Lego “dot” that is the universal connector between all Lego pieces, there exists a universal API that enables all software in this model to freely interoperate. With such an API, n interoperable tools yield up to n*(n-1) possible pairwise compositions and workflows – all without developers needing to write the transformations between each pair.
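As a minimal sketch of that universal connector – assuming a hypothetical shared Resource type, with every name below invented purely for illustration – composition could look something like this:

```typescript
// Sketch only: a hypothetical universal data shape that every tool reads
// and writes. Nothing here is an existing API.
interface Resource {
  type: string;                  // e.g. "note", "task", "event"
  data: Record<string, unknown>; // the actual content
}

// Every tool is just a transformation over Resources...
type Tool = (input: Resource[]) => Resource[];

// ...so two independently written, single-purpose tools...
const extractTasks: Tool = (resources) =>
  resources
    .filter((r) => r.type === "note")
    .map((r) => ({ type: "task", data: { text: r.data.body } }));

const scheduleTasks: Tool = (resources) =>
  resources
    .filter((r) => r.type === "task")
    .map((r) => ({ type: "event", data: { title: r.data.text } }));

// ...compose with no bespoke adapter between them, Unix-pipe style.
const notesToCalendar = (notes: Resource[]) =>
  scheduleTasks(extractTasks(notes));
```

Each new tool that speaks Resource immediately composes with every existing one – no pairwise adapters required.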
In an ideal world, there is data neutrality. Much like how the Net Neutrality debate strives to maintain the separation of the content and connectivity markets, data neutrality strives to maintain the separation of the data and application markets. Our current market competes on data ownership when it could compete on service quality instead. If we conceive of a decentralized approach as a way to enable data and platform neutrality, application platforms and data providers can mix and match, much like how you can browse the many websites of the web on any Internet provider.
In an ideal world, we focus on local-first software that works independently of large platforms – at the end of the day, platforms should exist to support efficient collaboration at scale, not to gate users from moving their data for the sake of retention.
Peer-to-Peer
Peer-to-peer technology has existed for a while and, in theory, gets quite close to realizing this ideal world on its own. Federated open source software means anyone can run their own local instances and customize them to their liking. Organizations like the IETF and W3C standardize open data formats that act as universal formats to store data in and convert between. Yet most platforms do only the legal minimum required to maintain interoperability and data neutrality.
So why does it fall short?
1. Running Your Own Infrastructure
Even nerds do not want to run their own servers at this point. Even organizations building software full time do not want to run their own servers at this point. If there’s one thing I hope we’ve learned about the world, it’s that people do not want to run their own servers.
— Moxie in My first impressions of Web3
Running your own infrastructure and servers is hard. Maintenance and upkeep of your software is hard. There is a reason that companies which offered to do that for others were so successful.
If one company figured out a good way to do x, it was incentivized to offer x as a service (hence the explosion of SaaS startups) instead of making it easy for competitors to do the same. Over time, companies specialized at getting really good at doing x and became known as the go-to people for that thing. This centralization-over-time of knowledge leads to the monopolies we see today.
Centralizing this knowledge in open standards and public, forkable code rather than data moats and proprietary technology is a great start, but it isn’t enough if the general public doesn’t know how to use it. Just as you wouldn’t expect the average homeowner to set up their own internet connection, we shouldn’t expect the average person to run their own infrastructure.
It should be easy for people to create competing yet interoperable platform providers, and it should be easy to switch between them just as one can switch between internet providers today.
2. Data Availability and Durability
The vast majority of peer-to-peer applications have yet to solve the data availability problem. In short, all connections are ephemeral – there is no persistent state. Imagine if everybody you shared a Google Doc with had to be online at the same time every time you wanted to edit it, or if all 3 billion users of Facebook had to have the app open to even use it. Imagine if your Twitter account were deleted every time you closed your browser window.
This means that asynchronous collaboration isn’t possible in most peer-to-peer apps. Platforms usually get around this by storing a user’s state on one of their many servers, which make it available on your behalf; peer-to-peer apps do not have this luxury – most people do not have a device that is “always-on” the way a server is.
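A toy model – plain TypeScript, not tied to any real protocol – makes the difference concrete:

```typescript
// In a pure peer-to-peer world, a message only arrives if the recipient
// is online at the moment it is sent.
type Peer = { name: string; online: boolean; inbox: string[] };

function send(to: Peer, message: string): boolean {
  if (!to.online) return false; // no always-on party to hold the message
  to.inbox.push(message);
  return true;
}

// An always-on replica (a server today, or a personal data pod) fixes this
// by buffering state until the recipient reconnects.
type Replica = { pending: Map<string, string[]> };

function sendViaReplica(replica: Replica, to: Peer, message: string): void {
  const queue = replica.pending.get(to.name) ?? [];
  queue.push(message);
  replica.pending.set(to.name, queue);
}

function reconnect(replica: Replica, peer: Peer): void {
  peer.online = true;
  peer.inbox.push(...(replica.pending.get(peer.name) ?? []));
  replica.pending.delete(peer.name);
}
```

From this angle, much of the value of a platform’s servers is simply that they are the always-on replica.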
3. Existing Network Effects
Migrating data off of existing platforms is extremely difficult, as this is something large platforms are disincentivized from supporting. Even when platforms offer ‘export’ tools, they are the worst they can be while still meeting GDPR requirements. New platforms almost never have ‘import’ tools because each platform has its own data format and that format changes unpredictably. This creates an n-to-n problem where every app needs to know every other app’s API to even begin to interoperate: with just ten apps, full interoperability means up to 10 × 9 = 90 bespoke import paths, and each new app makes it worse.
“But usually you don’t want a dead snapshot; you want to “use this data elsewhere”—which requires repeatedly exporting & reconciling.”
This means that, even if an alternative platform offers a better service, switching is often impossible.
The important question is: can applications on top of decentralized data behave the same way as centralized apps? Can we still aggregate information into feeds and present a cohesive user experience even if all of our friends’ data is stored in different places?
Why Not Blockchain?
I admit that blockchain might solve most of these problems. Blockchains address both identity and availability through a combination of wallet addresses and token incentive mechanisms. Yet they solve these problems in a way that leaves much to be desired.
Blockchains introduce a whole new set of problems that make them quite cumbersome to build on top of. Some of the core problems I have personally run into include:
Lack of ability to store large files on-chain in a cost-effective manner
Massively reduced speed and efficiency (the global Ethereum computer operates at roughly the speed of a Raspberry Pi)
High latency for transactions and finality (not to mention transaction + gas fees but I am assuming these will be negligible at some point down the line)
All of these make it infeasible to build data-intensive or real-time applications (e.g. file sharing, games, collaborative text editing) without aggressive application of blockchain scaling ideas. Of course, there are applications that benefit from the unique properties blockchains possess – namely, strong guarantees about consistency and message ordering in the presence of Byzantine actors – which makes the tradeoffs worthwhile for things like cryptocurrencies. But for most applications, these tradeoffs make adoption hard for end users.
Blockchain is suitable for a very small subset of use cases. Is there a more general-purpose technology that still addresses these main problems?
Rhizome: The Personal Cloud
My main research question is how we can create technical infrastructure to enable data neutrality on a web dominated by data moats. I’m tentatively calling it Rhizome: a data-persistence and identity layer for the distributed web.
It is made up of two layers:
Root: a personal data pod that you own. Think iCloud or Dropbox but you have agency over how much storage you want, who has access to it, and what you want to do with it.
Trunk: a framework for easily developing cohesive peer-to-peer applications on top of data from Root.
As a whole, it forms the basis for a new model of the internet where first and foremost, people own their own data. This enables entirely new dimensions of computation and collaboration on the web.
Single purpose apps backed by general-purpose data. Apps in this new model are now just views on top of data rather than a tight coupling of data and logic. If two apps are views on the same data, any change to the underlying data will instantly update both apps.
Applications ask for access rather than store their own data. Instead of maintaining a separate log-in for each app, you give apps permission to read or write specific parts of your data.
Local-first means interaction times are measured in microseconds, not seconds, resulting in more responsive-feeling applications and no loading spinners.
Two users can collaborate by simply ‘inviting’ another to temporarily synchronize a subset of their data. Developers no longer have to worry about building out separate infrastructure for live editing or collaboration.
As there are separate markets for data and applications, competition is based on service quality rather than on data ownership. (For a concrete feel for all of this, see the sketch below.)
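To make these points concrete, here’s a purely illustrative sketch of what building on Trunk might feel like. None of this exists yet – the rhizome module and every function below are hypothetical names standing in for the ideas above:

```typescript
// Hypothetical sketch – not a real API. Assumes an imagined "rhizome"
// client library with connect/requestAccess/write/subscribe/invite.
import { connect } from "rhizome";

async function main() {
  // The user, not the app, decides where their Root pod lives.
  const pod = await connect("alice.example");

  // The app asks for access to a slice of data instead of keeping its own copy.
  const notes = await pod.requestAccess("notes/", { read: true, write: true });

  // Local-first: writes land on a local replica, so there are no loading spinners.
  await notes.write("notes/ideas.md", "data neutrality > data moats");

  // Any other app granted access to "notes/" is just another view over the
  // same data, and sees this change as soon as it syncs.
  notes.subscribe((change: { path: string }) =>
    console.log("note changed:", change.path)
  );

  // Collaboration is an invitation to temporarily synchronize a subset of the pod.
  await notes.invite("bob.example", { paths: ["notes/ideas.md"] });
}

main();
```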
With Rhizome, we get the convenience of a single centralized platform without the lack of agency that comes with it. You can find more technical details here, and rough notes from my day-to-day here.
I encourage you to imagine with me what a world like this could look like. I miss when we would dream of worlds to come, filled with exciting possibilities and hopeful futures. I love the concept of skyhooks: dream about what you can build as if the sky had hooks you could hang your creations from (a metaphor for unlimited resources) and most of the time, you can still create that with scarce resources.
Think of this project as a skyhook: a dream about a future so that we may build towards it.
Thank you to Anson, Sebastien, Jamie, Raymond, Morgan, David, Aadil, JZ, Nishant, Anh, Farzaa, Amir, Aaron, Spencer, Rishi, Jasmine, and Pranav for your continued support. This project wouldn't be possible without all of you :)
Reboot publishes free essays on tech, humanity, and power every week. If you liked this or want to keep up with the community, subscribe below ⚡️
Jacky Zhao is a student studying computer science and philosophy at the University of British Columbia. He thinks a lot about how we can better incentivize public goods funding, support better interactions with computers, and be more responsible stewards of technology. He has built and maintains many widely used open source projects like Quartz, which enables users to host their digital gardens online for free, and cursor-chat, which is a library to add Figma-like cursor chat to websites.
If you’d like to support Jacky’s work, you can sponsor him on GitHub here.
This proposal has seen a lot of evolution. How does this current version differ from your initial proposal? What shaped your ideas?
The early proposal contained very lofty goals for the future but I don’t think it critically thought about how to get there or why people would want a future like it. It had very scattered technical ideas and lacked conviction in the vision.
I found it really odd that this type of technology had existed for a while, yet there were no widespread examples of usage. I spent about a month and a bit looking at retrospectives of peer-to-peer protocols and applications to build a good foundation of what’s been done and what hasn’t.
This literature review era led me to revise the proposal to go a lot more in-depth about the why of the research, placing heavier emphasis on adoption and why the average internet user should care. The internal metric I had was whether my Mom, who doesn’t work in technology and mostly uses the internet to look at articles, would be able to understand why what I was working on was important.
What’s something that surprised you in your work so far this summer?
People say independent research is a lonely journey. I disagree. I don’t think being independent means you need to do it alone. Rather, it means doing it without being attached to any particular institution or label.
I’ve been very surprised by just how many people have been thought partners, collaborators, and supporters throughout the whole process.
Rhizome is very much in early stages right now from a research perspective. Let’s say the research phase fully succeeds — what do you think would come next? What would it take for this to shape how individuals interact with the internet?
I honestly don’t think the technical part or the research will be the hard part of this project! Getting people to build on top of it and to imagine possible futures using this technology is the hard part.
Building distributed systems and peer-to-peer tech shouldn’t require a PhD. I think a large part of the next step, if the research is successful, is to build demo apps with real users to show what’s possible with it.
I want your average computer science student to be able to build apps for their friends and family using this. I want people to actively play with this new framework, this new mental model of computing, and then to build the apps they wish they could have themselves.
What have you been reading recently?
I recently finished Envisioning Information by Edward R. Tufte and The Book of Form and Emptiness by Ruth Ozeki. Currently, I’m working my way through Tools for Conviviality by Ivan Illich. I also really enjoyed reading Gordon Brander’s piece on Soulbinding Like a State.
🌀 microdoses
Carolyn Chen interviewed in Guernica! Read our review of Work Pray Code!
I’ve said for the longest time that I’ll read any genre other than whatever tf Sapiens is, and this essay, on Harari’s inaccuracies, sensationalism, and why they make his writing dangerous, is the best distillation I’ve seen of why I can’t stand his work. (I’m honestly surprised this essay is just getting published now!)
WHERE IS THE LIE @ all the growth bros
Best or worst job ???
Probably something insightful or funny to be said about product design here idk
💝 closing note
If you haven’t registered yet to hear from Paris Marx on the past & future of transportation, it’s not too late! We’re chatting with them tonight at 5p pacific.
Thinking about skyhooks & all the things we can do to get there!
Reboot team
This was super interesting! One thing it left me wondering is if you've thought about the complexities of how moving data between contexts changes its meaning and that individual control over data may not match up well with the relational information encoded in data (see e.g. Kimberly Christen's "Does Information Really Want to be Free?" and Salomé Viljoen's "A Relational Theory of Data Governance"). (To be clear, I absolutely agree that companies owning your data is the worst option of all.) Do you think that Rhizome can be constructed to help engage with these issues?