
By Maria Climent-Pommeret

Consensus algorithms form a large family. Whether choosing an existing algorithm for your blockchain or designing a new one, there are many properties to consider. In this article, we will explore just a few of these properties and consider their usefulness and tradeoffs.

Censorship resistance

Many blockchain users place great value on a consensus algorithm’s censorship resistance. An algorithm is censorship resistant if any node can send any operation to the network without being vetoed by another node.

When a system is censorship resistant, small parties can communicate and exchange without being dominated or silenced by the already-powerful. This in turn gives space for innovation and cultural dissidence, which has long been the lynchpin of democracy.

On the other hand, lack of any censorship can be dangerous. If any message is allowed, malicious parties will inevitably spam a community with false, offensive, or harmful content, effectively robbing good-faith users of the use of the system and eroding trust for all.

When choosing or designing a consensus algorithm, one must carefully balance the risks that censorship creates against the risks posed by bad-faith users. It’s likely that many blockchains and internet systems will rise and fall before society figures out “sane defaults” for moderation and censorship of online communities.

Permissioned versus permissionless

Imagine a blockchain that anyone can join at any time, dynamically, without prior permission from anyone. This configuration is called a permissionless blockchain. Thinking outside the blockchain scope, the internet used to be a good example of a permissionless system. However, this setting has a major downside: nothing prevents an attacker from creating a large number of virtual participants until the network is overwhelmed and controlled by the attacker’s nodes. This is called a Sybil attack. Permissioned blockchains mitigate that risk by requiring prior configuration and clearance before a node is allowed to participate in the network.
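
As a toy illustration of the difference, here is a minimal Python sketch; the node identities and helper names are invented for this example and not taken from any real blockchain. A permissioned network checks a joining node against a pre-cleared allowlist, while a permissionless one accepts anyone and must resist Sybil attacks by other means.

```python
# Illustrative sketch of the permissioned/permissionless distinction.
# The identities and names below are invented for this example.

ALLOWED_NODES = {"node-pubkey-A", "node-pubkey-B", "node-pubkey-C"}

def accept_peer(peer_pubkey: str, permissioned: bool) -> bool:
    """Decide whether a joining node may take part in consensus."""
    if not permissioned:
        # Permissionless: anyone can join, so nothing stops an attacker
        # from spawning thousands of identities (a Sybil attack);
        # resistance must come from elsewhere (proof of work, stake, ...).
        return True
    # Permissioned: only pre-cleared identities may participate.
    return peer_pubkey in ALLOWED_NODES
```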

While this property seems trivial, it has profound implications for the whole blockchain. Indeed, with a permissionless consensus, having a distributed and decentralized network is of paramount importance, since it makes it harder for a single entity to take the system down or tamper with it. Choosing a permissioned consensus changes that perspective: while decentralization remains important, control is not evenly distributed among nodes and depends on managed and identified entities. In permissioned systems, the governance model takes precedence over decentralization.

In a permissioned network, governance is decided and agreed upon by the members of the blockchain rather than enforced through cryptoeconomics: control of the network rests on preexisting business relationships among members. Permissionless networks, by contrast, use a range of governance models, such as off-chain governance[1] (Bitcoin) or on-chain governance[2] (Tezos).

Moreover, in a permissionless network, participants need some way to trust the system in the absence of a central authority, which is why transparency of every operation (transaction ordering, block creation, etc.) is a key component of such networks: it is directly tied to their cryptoeconomics. Since most permissioned blockchains run on previously defined business relationships, incentives matter less and may not be necessary at all, making cryptoeconomics and transparency less important.

Permissioned and permissionless networks embody very distinct philosophies, and this choice has consequences for many other properties, such as (pseudo)anonymity, privacy[3], and scalability.

Objectivity

The objectivity/subjectivity criterion is not easily defined but is critical, as it is linked to the ability to support light clients[4]: objective and weakly subjective blockchains can have light clients, while strongly subjective blockchains cannot.

In an objective system, nodes have a deterministic rule for deciding which is the true active chain. Under the assumption of instant finality in an objective system, nodes can witness and retrace the past up to the present day and will never disagree on the current state of the chain, since there is always an objective proof as to which of two candidate chains is the true one, if any.
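
A longest-chain rule with a deterministic tie-break is one example of such an objective rule. The sketch below is purely illustrative (it treats a chain as a bare list of block hashes and ignores validity checks); the point is that every node running it on the same two candidate chains reaches the same verdict, with no appeal to anything outside the data itself.

```python
import hashlib

def tip_hash(chain: list[bytes]) -> str:
    """Hash of the chain's latest block, used as a deterministic tie-break."""
    return hashlib.sha256(chain[-1]).hexdigest()

def choose_chain(a: list[bytes], b: list[bytes]) -> list[bytes]:
    """Objective fork choice: the longer chain wins; ties are broken
    by the lexicographically smaller tip hash, so all nodes agree."""
    if len(a) != len(b):
        return a if len(a) > len(b) else b
    return a if tip_hash(a) <= tip_hash(b) else b
```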

By contrast, in a subjective system, nodes can disagree on a question and no simple deterministic rule exists to make them agree. An observer placed outside the subjective system cannot know who is right and who is wrong. Moreover, the past remains forever subjective, as someone in the network might not have seen every step of the system’s history. However, the current moment can either be:

  • subjective (the system is then strongly subjective)
  • objective (the system is then weakly subjective).

Synchronicity

The synchronicity level of a distributed model, also known as its communication model, is strongly linked to the adversarial model in use and directly impacts resistance to network partitions[5]. Indeed, the communication model defines the limits on how long an adversary can delay messages on the network.

A synchronous model, in which an adversary can delay a message by at most Δ, appears at first to be good enough. However, picking the Δ bound can become a real issue[6], and the assumption begins to seem unrealistic. Removing all assumptions on message delays yields an asynchronous model, which leads to very robust protocols that adapt to actual latency and in which message delays cause no safety violations.
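
To make the role of Δ concrete, here is a rough sketch of how a synchronous protocol typically uses the bound; the `DELTA` value and the `inbox.poll` helper are invented for illustration. Silence beyond Δ is interpreted as a fault, which is exactly where a badly chosen Δ hurts.

```python
import time

DELTA = 2.0  # assumed upper bound (seconds) on message delay; illustrative value

def await_message(inbox, sender, start: float):
    """Wait up to DELTA for a message from `sender` (illustrative inbox API).

    Under the synchrony assumption, a missing message after DELTA can only
    mean the sender is faulty. If the real network is slower than DELTA,
    honest senders get misclassified as faulty; if DELTA is set very large
    to stay safe, every such decision waits that long.
    """
    while time.monotonic() - start < DELTA:
        msg = inbox.poll(sender)   # hypothetical non-blocking receive
        if msg is not None:
            return msg
        time.sleep(0.01)
    return None  # sender considered faulty under the synchrony assumption
```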

While tricky to reason about, asynchronous models appear to be the best solution until one considers the theoretical gap between synchronous and asynchronous models. For instance, Fischer, Lynch and Paterson proved in 1985 that in an asynchronous model, every protocol for the consensus problem admits the possibility of non-termination, even with only one faulty process. By contrast, solutions exist in the synchronous model. Moreover, in a synchronous model, Byzantine agreement is possible when at least 2/3rds of the nodes are honest, while asynchronous models require at least a 3/4th majority of honest nodes[7].

The partial synchronicity model was introduced in 1988 by Dwork, Lynch and Stockmeyer as a middle ground that describes something closer to reality: systems that are usually synchronous but experience asynchronous episodes. This model provides a mental framework for building protocols that remain safe during asynchronous episodes and achieve liveness and termination when the system is synchronous.
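
One common way to exploit partial synchronicity is to grow the round timeout each time a round fails to decide, so that once the network becomes synchronous again the timeout eventually exceeds the true message delay. The sketch below is illustrative only; `try_round` stands in for one attempt of whatever underlying protocol is used, and safety is assumed never to depend on the timeout.

```python
def run_rounds(try_round, initial_timeout: float = 1.0, max_rounds: int = 20) -> bool:
    """Retry consensus rounds with an exponentially growing timeout.

    `try_round(timeout)` is an assumed callback that attempts one round of
    the protocol and returns True if it decided within `timeout` seconds.
    The timeout only gates when we give up on a round and start the next,
    so asynchronous episodes cost liveness, not correctness.
    """
    timeout = initial_timeout
    for _ in range(max_rounds):
        if try_round(timeout):
            return True        # decided: the network was synchronous enough
        timeout *= 2           # assume the network was slow; wait longer next round
    return False               # still no decision (network still asynchronous)
```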

This article covers only a small number of consensus properties and leaves out many critical ones, such as network overhead, instant finality, and the use of leaders (all of which have profound implications for algorithm design). In a previous blogpost we reached the unsatisfying conclusion that a blockchain is a function of an unidentified number of interdependent parameters. Unfortunately[8], we must reach a similar conclusion about the core properties of consensus algorithms.

  1. In an off-chain governance model, decisions for managing and implementing changes to the blockchain are made by a few people “in the human world”. ↩︎
  2. “On-chain governance is a system for managing and implementing changes to cryptocurrency blockchains. In this type of governance, rules for instituting changes are encoded into the blockchain protocol. Developers propose changes through code updates and each node votes on whether to accept or reject the proposed change”, Frankenfield (2018) ↩︎
  3. privacy as in data privacy and not as in blockchain privacy. A permissioned blockchain does not mean a private blockchain! ↩︎
  4. A node/client that relies on other nodes to interact with a blockchain. A light client isn’t connected to the blockchain at all times. ↩︎
  5. Network partitioning occurs when a blockchain is split into two (or more) pools of nodes that can’t communicate with one another. By creating a network partition, an attacker forces the creation of several parallel blockchains. ↩︎
  6. When the value of Δ is too small, it ceases to adequately describe the real world, leading to all sorts of protocol violations. Setting Δ too large degrades performance. ↩︎
  7. See this interesting blog post ↩︎
  8. Maybe the correct word is “fortunately”: we still have a lot of fun discoveries ahead! ↩︎

If you want to know more about Marigold, please follow us on social media (Twitter, Reddit, Linkedin)!
