Peer-to-peer is simpler than client-server
Since joining DXOS, I've been reflecting on the intrinsic architectural characteristics of peer-to-peer, local-first software systems. My brain has lived in client-server mode for a few decades; I've internalized the pleasure and pain of working in that architecture. Peer-to-peer is a different beast. There are no servers. Everything is a peer (albeit with different capabilities). How does that change the developer's experience? Does that make for a better user experience?
My tentative conclusion is that peer-to-peer, local-first apps are simpler to build than client-server apps due to a simpler architecture.
My working mental model is that a well-architected peer-to-peer framework such as DXOS solves the distributed systems problem inherent in client-server apps and bakes the solution into the framework layer. Developers simply build client-side apps and treat state as if it's local. The platform solves for replication and synchronization behind the scenes.
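To make that concrete, here's a toy sketch of what that developer experience looks like. The LocalSpace class below is an invented, in-memory stand-in for a framework's data layer (it is not the real DXOS API); the point is that application code reads and writes objects as if they were plain local state, and a real framework would replicate those writes to other devices behind the scenes.

```typescript
// A toy stand-in for a local-first framework's data layer.
// Names and shapes are illustrative, not DXOS's actual API.
type Task = { title: string; done: boolean };
type Listener = (tasks: Task[]) => void;

class LocalSpace {
  private tasks: Task[] = [];
  private listeners: Listener[] = [];

  // Writes land in local storage immediately; there is no server round-trip to await.
  // In a real framework, the sync layer would replicate this change in the background.
  add(task: Task) {
    this.tasks.push(task);
    this.notify();
  }

  // Reads are local and reactive: the callback fires again whenever the
  // sync layer merges in changes from another peer (simulated here by add()).
  subscribe(listener: Listener) {
    this.listeners.push(listener);
    listener(this.tasks);
  }

  private notify() {
    this.listeners.forEach((l) => l(this.tasks));
  }
}

// Application code treats state as plain local data.
const space = new LocalSpace();
space.subscribe((tasks) => console.log('open tasks:', tasks.filter((t) => !t.done)));
space.add({ title: 'Buy milk', done: false });
```

Notice what's missing: no request lifecycle, no loading state, no error path for the write.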
Let's dig a little deeper into the intrinsic shape of client-server versus peer-to-peer apps.
The typical client-server architecture in use by most web applications today forces the developer to deal with a startling degree of complexity just to build an application. To borrow the image of "an app can be a home-cooked meal" from Robin Sloan, building client-server web apps forces you into an industrial kitchen to cook a simple meal. Sometimes, you just want a pot and a burner to make some mac-n-cheese.
Let's look at some of the concepts inherent in building a client-server application (various web development frameworks do their best to hide or minimize this complexity, but it remains nonetheless):
State is distributed across client and server, and the developer is in charge of ensuring that state remains synchronized across the two systems.
Typically, the developer has to set up, configure, and maintain a database.
The user interface has to account for this distributed state. Since a state change cannot be confirmed locally and requires a server round-trip to determine whether it succeeded, the UI code must handle both outcomes.
Techniques such as optimistic updates and more complicated state logic are required (see the sketch after this list).
The end-user UI must also communicate this ambiguity to the user through the use of loading states, spinners, "pending" states, etc.
Ultimately, this makes the UI far more complicated.
Authentication to the centralized server and its resources must be handled by the developer.
Authorization to various resources must also be handled by the developer and stored on a per-user basis. Since all users are accessing the same server, care must be taken to ensure privacy is maintained and access control is kept up-to-date.
Centralization of user data in a single logical data store creates a significant honeypot for attackers. Securing the network against motivated attackers becomes a significant concern since the developer is responsible for keeping user data safe and private.
Centralizing network resources so that clients can access data from the central store creates a more complex networking infrastructure (load balancers, failovers, network security, etc.) and increases bandwidth costs, which accrue to the developer.
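Here's a rough sketch of the kind of code that accumulates around a single state change in a client-server UI. The endpoint, types, and rerender callback are invented for illustration; what matters is the shape of it: loading and pending states, an optimistic write, a server round-trip, and a rollback path.

```typescript
// Sketch of client-side handling for toggling a task in a client-server app.
// The /api/tasks endpoint and the rerender callback are hypothetical.
type Task = { id: string; done: boolean };

type UiState =
  | { status: 'loading' }
  | { status: 'error'; message: string }
  | { status: 'ready'; tasks: Task[]; pendingIds: Set<string> };

async function toggleTask(state: UiState, id: string, rerender: (s: UiState) => void) {
  if (state.status !== 'ready') return;

  // Optimistic update: flip the flag locally and mark the row "pending"
  // so the UI can show a spinner next to it.
  const optimistic: UiState = {
    status: 'ready',
    tasks: state.tasks.map((t) => (t.id === id ? { ...t, done: !t.done } : t)),
    pendingIds: new Set(state.pendingIds).add(id),
  };
  rerender(optimistic);

  try {
    // The change isn't real until the server confirms it.
    const res = await fetch(`/api/tasks/${id}/toggle`, { method: 'POST' });
    if (!res.ok) throw new Error(`server responded with ${res.status}`);
    const confirmed: Task = await res.json();

    // Reconcile with the server's answer and clear the pending marker.
    const pendingIds = new Set(optimistic.pendingIds);
    pendingIds.delete(id);
    rerender({
      status: 'ready',
      tasks: optimistic.tasks.map((t) => (t.id === id ? confirmed : t)),
      pendingIds,
    });
  } catch {
    // Roll back the optimistic change and surface the failure to the user.
    rerender(state);
  }
}
```

None of this logic is about the application's actual behavior; it exists only because the authoritative copy of the state lives somewhere else.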
While we have spent the last few decades developing techniques, tools, and services to minimize these challenges, they are inherent to the client-server architecture and cannot be fully dispensed with when building software.
How are peer-to-peer systems simpler? In short, we can think of client-server applications as a distributed systems problem. Architected carefully, a peer-to-peer software system starts with the assumption that the software is a distributed system and moves the problem out of the developer's realm and into the infrastructure layer.
Here's how a peer-to-peer system is different:
Application state is stored locally on each device by default.
The user interface can therefore be reactively updated based on local state.
Each user's data is stored on their own device in a local data store, which simplifies the authentication and authorization problem down to controlling access to a data store running on their local device.
Optionally, a user's data can be synchronized across all of their devices automatically.
Conflicts in state synchronization can be handled automatically by the sync layer through the use of specialized data structures like CRDTs, if desired (a small example follows this list).
Network activity largely takes place between peers rather than accessing centralized services, reducing the burden of infrastructure cost and maintenance.
Because user data is stored on-device, the developer is not responsible for securing a centralized store or keeping different users' data adequately partitioned from one another.
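To make the conflict-handling point concrete, here is a minimal sketch using Automerge, one popular CRDT library (a local-first framework typically wires something like this into its sync layer, so the application developer rarely touches it directly). The document shape and edits are invented for illustration.

```typescript
// Two peers edit the same document while disconnected; the CRDT merges their
// histories deterministically, with no hand-written conflict resolution code.
import * as Automerge from '@automerge/automerge';

type TodoList = { title: string; tasks: string[] };

// One peer creates the document...
let mine = Automerge.change(Automerge.init<TodoList>(), (d) => {
  d.title = 'Groceries';
  d.tasks = [];
});

// ...and another peer gets a copy (normally delivered by the sync protocol).
let theirs = Automerge.clone(mine);

// Both peers now edit concurrently, offline from each other.
mine = Automerge.change(mine, (d) => {
  d.tasks.push('milk');
});
theirs = Automerge.change(theirs, (d) => {
  d.title = 'Groceries (weekend)';
  d.tasks.push('eggs');
});

// When the peers reconnect, the sync layer merges the two histories.
const merged = Automerge.merge(mine, theirs);
console.log(merged.title); // 'Groceries (weekend)'
console.log(merged.tasks); // contains both 'milk' and 'eggs', in a deterministic order
```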
The architectural shape of peer-to-peer systems makes application development simpler and has intrinsic advantages over client-server architecture. This architecture is not without its challenges, but I'll address those in a future note.
I keep a copy of this note up-to-date in my Research Notebook.
ICYMI: Stuff I've worked on
Do you think Computers Can Be Better? Me, too. I've started thinking about a "research agenda" for better computers.
I've been thinking about communicating online and offline states in peer-to-peer systems and I realized that for peer-to-peer software, "online" and "offline" are about access to specific peers and services.
I've been customizing my office for ten years; I posted some photos and thoughts on the experience.
Glimpses of the future
If you want to prototype a futuristic augmented reality experience you could drop $3500 on an Apple Vision Pro (good luck getting your hands on one!)... or you could drop $350 on the Monocle, an AR device you wear over normal glasses. It's very steampunk, but also very open source. While the Vision Pro is certainly a visionary device, it feels like a shepherd's crook that drags you further into Apple's carefully manicured walled garden. Monocle has the vibes I want from our augmented reality future: open, hackable, inexpensive, and relatively unobtrusive. And it has the price to match. If you're in the Brooklyn area, I know someone who has one in their possession: DM me for an intro. 😉
I had a chance to chat with James Addison, Silicon Jungle on twitter, about his research into collaborative, malleable software applications. His work has spanned the whole stack, from writing his own JSON-powered CRDT to prototyping the UX of various interactions. I'm especially impressed by the way he's distilled his learnings into principles. Check out these short clips of his UX explorations on twitter to get a feel for what he's up to.