The following article is a summary of this presentation given at ETHBarcelona.
Public goods are shared resources that benefit everyone without subtracting from others' access. How we distribute and pay for public goods helps determine whether we live in a fair and just society. Because public goods are available to all people (non-excludable), they typically suffer from the free-rider problem, where some members of the community use them without contributing to their maintenance or development. There are also bad actors looking to game the system to their benefit.
Open source code, which is how we develop our protocols at Logos, is a digital public good and thus subject to the same free-rider and bad-actor problems. For example, anyone can see and fork our code to use it however they choose, without contributing in any way to the original protocol. But that's not all: anyone may also participate in the apps built with our technology as an ally or an adversary. Since using censorship to quell adversaries is against our principles, we must innovate new ways to mitigate the influence of bad actors wishing to game the system.
Bad actors are normal
Something unfortunate that we have to accept is that every community has bad actors who choose to act for themselves to the detriment of others. As a community grows, the likelihood of bad actors appearing also increases. They may present as thieves looking to scam other community members, or simply as disruptive trolls. Perhaps they act out to draw attention to themselves, which disproportionately empowers them within the group. Without the proper guardrails, bad actors with power can exploit systemic deficiencies to their advantage, hurting innocent community members in the process.
Google's original guiding principle, 'Don't be evil', was meant to inspire the company to do the right thing. But we all recognise that Google has long since abandoned it. There was no technical or legal protection to stop Google's leadership from giving in to financial and political pressure, so the company has been captured by special interests. In this way, 'Don't be evil' was little more than a suggestion, never powerful enough to stop bad actors looking to exploit public goods.
Instead, we must create protocols where bad actors 'can't be evil'. This concept was introduced by Dr Muneeb Ali, and it's exactly what we're trying to build into our technology infrastructure. Unfortunately, if there is an opportunity to be evil, people may take it if it benefits them. That's why we must make it impossible for them to choose evil in the first place, so that only actions that benefit the system are profitable. As Milton Friedman stated eloquently, "The way you solve things is by making it politically profitable for the wrong people to do the right things."
Consider the case of Ethereum right now. In the wake of the sanctions imposed on Tornado Cash, the Ethereum ecosystem has witnessed a significant shift in transaction dynamics in favour of Office of Foreign Assets Control (OFAC) compliance. Protocol-level censorship happens through MEV-Boost relays that filter transactions to comply with OFAC sanctions. As per MEV Watch, close to 50% of Ethereum blocks produced in the last 30-day period were OFAC compliant: this is OFAC sanctions impacting what we thought was censorship-resistant technology.
Ethereum is operating on a ‘Don’t be evil’ basis which inherently allows space for censorship. We need privacy and censorship resistance at lower and lower protocol layers to eliminate such opportunities for bad actors.
We believe the principles that guide our technology development at the Logos Collective are ideal for building public goods infrastructure in part because they are applied to every layer of the stack. These are:
- Liberty
- Censorship resistance
- Security
- Privacy
- Inclusivity
In some cases our principles conflict with each other. For example, liberty and security often sit at opposite ends of a continuum: as security increases, liberty decreases. But overall they serve incredibly well to guide how we think about trade-offs when implementing and developing technologies. Another concern is how to overcome bias in the technology we're creating; I'd be lying if I said we've completely figured that out yet.
Understanding the message
I find myself frequently returning to this quote from Marshall McLuhan for guidance on how to remain true to our principles and where to focus resources:
‘The medium is the message.’
The technology we use as the medium to broadcast a message has a drastic impact on our ability to convey its meaning to those who receive it. The medium acts like a filter, changing the message on its way to the receiver.
Using centralised technology as the medium homogenises communications. It removes the complexity of human interactions and churns out messages in a uniform way by removing our optionality almost entirely.
As Douglas Hofstadter described in his 1979 book Gödel, Escher, Bach, the medium is just one of three layers of every message that we share. Reviewing these layers lets us better understand the subtle ways in which evil can be enacted if we're not vigilant.
- Frame message
This is knowing there is a message to be decoded. The frame message is implicitly conveyed by the structure of the medium, such as a book: as long as the recipient knows what a book is, they understand there is a message on its pages for them to decode.
- Outer message
This is the medium by which a message is sent and delivered, such as the written text inside a book. It's of great importance that the medium does not bias the interpretation of the inner message; otherwise, the receiver will not fully understand the intent of the sender. The receiver must be able to decode the outer message to be sure of its intent.
- Inner message
The inner message is the initial intent of the content being conveyed in the first place. When you have read and understood a book, you have extracted the meaning that was intended by the author.
While this is an overly simplistic model of what a message is, it gives us a framework to see where problems can be introduced and how to begin mitigating them.
Blockchain as the medium
Blockchain networks are a coordination mechanism. We use them to mitigate message distortion as best we can, and to let us trust an incorruptible protocol (rather than a corruptible human) to convey those interactions. There is a downside: blockchains whose tokens have real-world value attract greedy bad actors. We have created these beautiful cross-border, cross-jurisdictional, censorship-resistant coordination mechanisms, but we must harden them against those who would harm the system.
Using the framework laid out above for thinking about what a message is, it's quite difficult to apply neutrality to a blockchain network because of its many layers. Let's take the most basic stack, demonstrated below.
Networking sits at the bottom. It determines how messages get shared so that everyone contributing to the network receives all the correct network data. Contributors then go through the process of validating all these messages to make sure they're well formed and to catch issues such as double spends. As a defensive measure, a problematic message is dropped instead of being included in a block. The message is then passed through the consensus mechanism, where everyone in the distributed system comes to agreement on the data.
We then have to find some way of extracting that data from the blockchain at the retrieval stage. In order to scale, this stack would need to process a massive number of messages using different mechanisms, all of which possess outer messages that need appropriate interpretation. A network participant's ability to understand the outer message, combined with the powers they hold throughout the stack, presents an opportunity for them to change or censor that message.
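The stack described above can be sketched as a simple pipeline. This is a toy illustration only, not any real protocol's API: the message format, the double-spend rule, and the sort-based stand-in for consensus are all invented for the example. It shows the key defensive behaviour discussed above: a malformed or double-spending message is dropped at validation rather than included in a block.

```python
# Toy sketch of the four-layer stack: networking -> validation -> consensus -> retrieval.
# All names and rules here are illustrative assumptions, not a real protocol.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    payload: str  # stands in for a coin being spent

def networking(inbox):
    """Networking: assume gossip delivers every message to every contributor."""
    return list(inbox)

def validation(messages, spent):
    """Validation: drop malformed messages and double spends."""
    valid = []
    for msg in messages:
        if not msg.payload:        # malformed: empty payload
            continue
        if msg.payload in spent:   # double spend: coin already used
            continue
        spent.add(msg.payload)
        valid.append(msg)
    return valid

def consensus(valid_messages):
    """Consensus: every node derives the same ordered block (sorting as a stand-in)."""
    return sorted(valid_messages, key=lambda m: (m.sender, m.payload))

def retrieval(block, sender):
    """Retrieval: extract a participant's data back out of the chain."""
    return [m for m in block if m.sender == sender]

inbox = [
    Message("alice", "coin-1"),
    Message("bob", "coin-1"),  # attempts to double-spend alice's coin
    Message("carol", ""),      # malformed
]
block = consensus(validation(networking(inbox), spent=set()))
```

Running this leaves only Alice's message in the block; Bob's double spend and Carol's malformed message are silently dropped at the validation layer, which is exactly where a participant with power over that layer could also choose to drop legitimate messages.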
Generally, when we think about adding security or privacy to the stack, we consider changes to the validation and consensus layers, which essentially means ensuring they operate at their optimal level. What we increasingly see, however, is a focus on privacy at the retrieval level. When issues like censorship strike, a lot of time is consumed addressing the retrieval layer because censorship prevention wasn't fully considered at the bottom of the stack.
From Whisper to Waku
As Ethereum has grown and ossified, it has become clear that the network doesn't provide the self-sovereignty, privacy, and censorship resistance that many of us believed it inherently could. I see this as a failure to incorporate robust principles at the lowest layers of the stack.
The diagram above shows what the decentralised stack looked like with Whisper, the Ethereum communications protocol. Built as a tool for enabling dynamic peer-to-peer communications, Whisper's development has always included obfuscating the route between message sender and receiver. This was completely aligned with our ethics and with what we were looking for in the communications layer of our Status app. We wanted to push it even further, though, to ensure resource-constrained users could also rely on it, in line with our principle of inclusivity.
So, perhaps naively, we integrated Whisper as developed by the Ethereum ecosystem, using its existing proof-of-work anti-spam mechanism. In testing we experienced major issues: we killed batteries and maxed out testers' data plans. We tried to deploy it in extreme environments, but the network succumbed to the high churn of mobile devices dropping in and out of service or temporarily losing functionality. It was impossible to get good message reliability.
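To see why a proof-of-work anti-spam scheme is hard on resource-constrained devices, consider a minimal hash-based sketch. This is not Whisper's actual algorithm; the difficulty parameter and message format are invented for illustration. The point is that the sender must burn CPU (and therefore battery) on every single message, and the expected work doubles with each extra bit of difficulty.

```python
# Toy hash-based proof-of-work anti-spam check (illustrative only, not Whisper's scheme).
# The sender brute-forces a nonce so the message hash has `difficulty` leading zero bits.
import hashlib

def meets_difficulty(digest: bytes, difficulty: int) -> bool:
    """True if the digest's top `difficulty` bits are all zero."""
    value = int.from_bytes(digest, "big")
    return value >> (len(digest) * 8 - difficulty) == 0

def mine(message: bytes, difficulty: int) -> int:
    """Brute-force a nonce; expected cost is ~2**difficulty hash evaluations."""
    nonce = 0
    while True:
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if meets_difficulty(digest, difficulty):
            return nonce
        nonce += 1

# Sender side: pay the work for one message (~2**12 hashes on average here).
nonce = mine(b"hello", difficulty=12)
# Receiver side: verifying costs just one hash.
digest = hashlib.sha256(b"hello" + nonce.to_bytes(8, "big")).digest()
```

Verification is cheap (one hash), but the asymmetry cuts the wrong way for a phone: a desktop spammer can afford the work far more easily than a battery-powered device sending legitimate messages.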
We could have offered up a number of centralised solutions to combat spam, for example banning devices based on their ISP or running messages through our servers and censoring them, but these would be in direct contradiction to our core values.
Scaling was another big issue. We realised soon after taking on the project that Whisper was built in a fragile way that simply couldn't scale. It needed a tremendous amount of work to fix.
We saw only one viable option for transforming Whisper into the public good that we knew it could be: developing it ourselves.
We forked Whisper to create our own protocol called Waku. Our initial objective was to provide Status users with an acceptable user experience for communications without relying on centralisation. Vac was formed as a separate organisation to focus on the research required to grow Waku and other public goods technologies we planned to build. Their research led to the development of Nomos for building modular blockchains, and Codex for censorship-resistant storage.
Because we develop open source, anyone can access our technologies. All of our research and findings are published at vac.dev, open to all viewers and contributors.
Improving Waku without compromising on our principles meant we were slower to address some of the user issues that came to light. We stuck to our principles and didn't compromise on our goal of building a truly decentralised tech stack. While it was painful, we arrived at a unique solution that we might never have found had we been willing to compromise.
Key takeaways for building public good infrastructure
- Publish openly. You can't build in the dark, especially when it's a digital permissionless system. You can't expect others to have trust in your protocol if they don't know how it works. Building public goods should be community-based.
- Innovate. Using old tools will lead to the same problems we see in existing solutions. If we had compromised on our principles and used existing technology to scale at speed, we would have just ended up with the same type of centralised product that already exists, rather than creating censorship-resistant public goods.
- Reflect interpersonal relationships. We should conform technology to the interpersonal relationships we're trying to have, and not the other way around.
- Can’t be evil. Build technology that doesn’t allow bad actors to manipulate the outer message so that it can’t be censored or changed in any way. That means building censorship resistance into the bottom of the stack.
The Logos Collective is provisioning a public goods infrastructure that will enable eventual Logos citizens to access greater freedom, transparency, and stability through voluntary participation in the network state.
If you'd like to continue the conversation with me, you can find me on the Status app at corpetty.
If our principles and approach speak to you, take a look at our open contributor positions.