The Ouroboros path to decentralization
The protocol that powers Cardano and its design philosophy
23 June 2020 6 mins read
Designing and deploying a distributed ledger is a technically challenging task. What is expected of a ledger is the promise of a consistent view to all participants as well as a guarantee of responsiveness to the continuous flow of events that result from their actions. These two properties, sometimes referred to as persistence and liveness, are the hallmark of distributed ledger systems.
Achieving persistence and liveness in a centralized system is a well-studied and fairly straightforward task; unfortunately, the ledger that emerges is precariously brittle because the server that supports the ledger becomes a single point of failure. As a result, hacking the server can lead to the instant violation of both properties. Even if the server is not hacked, the interests of the server’s operators may not align with the continuous assurance of these properties. For this reason, decentralization has been advanced as an essential remedy.
Informally, decentralization refers to a system architecture that calls for many entities to act individually in such a way that the ledger’s properties emerge from the convergence of their actions. In exchange for this increase in complexity, a well-designed system can continue to function even if some parties deviate from proper operation. Moreover, in the case of more significant deviations, even if some disruption is unavoidable, the system should still be capable of returning to normal operation and containing the damage.
How does one design a robust decentralized system? The world is a complicated place and decentralization is not a characteristic that can be hard-coded or demonstrated via testing – the potential configurations that might arise are infinite. To counter this, one must develop models that systematically encompass all the different threats the system may encounter and demonstrate rigorously that the two basic properties of persistence and liveness are upheld.
The strongest arguments for the reliability of a decentralized system combine formal guarantees against a broad portfolio of different classes of failure and attack models. The first important class is that of powerful Byzantine models. In this setting, it should be guaranteed that even if a subset of participants arbitrarily deviate from the rules, the two fundamental properties are retained. The second important class is models of rationality. Here, participants are assumed to be rational utility maximizers and the objective is to show that the ledger properties arise from their efforts to pursue their self-interest.
Ouroboros is a decentralized ledger protocol that is analyzed in the context of both Byzantine and rational behavior. What makes the protocol unique is the combination of the following design elements.
- It uses stake as the fundamental resource to identify the participants’ leverage in the system. No physical resource is wasted in the process of ledger maintenance, which is shown to be robust despite ‘costless simulation’ and ‘nothing at stake’ attacks that were previously thought to be fundamental barriers to stake-based ledgers. This makes Ouroboros distinctly more appealing than proof-of-work protocols, which require prodigious energy expenditure to maintain consensus.
- It is proven to be resilient even if arbitrarily large subsets of participants, in terms of stake, abstain from ledger maintenance. This guarantee of dynamic availability ensures liveness even under arbitrary, and unpredictable, levels of engagement. At the same time, of those participants who are active, barely more than half need to follow the protocol – the rest can arbitrarily deviate; in fact, even temporary spikes above the 50% threshold can be tolerated. Thus Ouroboros is distinctly more resilient and adaptable than classical Byzantine fault tolerance protocols (as well as all their modern adaptations), which have to predict with relative certainty the level of expected participation and may stop operating when the prediction is false.
- The process of joining and participating in the protocol execution is trustless in the sense that it does not require the availability of any special shared resource such as a recent checkpoint or a common clock. Engaging in the protocol requires merely the public genesis block of the chain, and access to the network. This makes Ouroboros free of the trust assumptions common in other consensus protocols whose security collapses when trusted shared resources are subverted or unavailable.
- Ouroboros incorporates a reward-sharing mechanism to incentivize participants to organize themselves in operational nodes, known as stake pools, that can offer a good quality of service independently of how stake is distributed among the user population. In this way, all stakeholders contribute to the system’s operation – ensuring robustness and democratic representation – while the cost of ledger maintenance is efficiently distributed across the user population. At the same time, the mechanism comes with countermeasures that disincentivize centralization. This makes Ouroboros fundamentally more inclusive and decentralized compared with other protocols that either end up with just a handful of actors responsible for ledger maintenance or provide no incentives to stakeholders to participate and offer a good quality of service.
These design elements of Ouroboros are not put forward as self-evident appeals to the common sense of the protocol user. Instead, they were delivered with meticulous documentation in papers that have undergone peer review and appeared in top-tier conferences and publications in the area of cybersecurity and cryptography. Indeed, it is fair to say that no other consensus research effort is represented so comprehensively in these circles. Each paper is explicit about the specific type of model used to analyze the protocol, and the results derived are laid out in concrete terms. The papers are open-access, patent-free, and include all the technical details needed to allow anyone with the relevant expertise to convince themselves of the veracity of the claims made about performance, security, and functionality.
Building an inclusive, fair and resilient infrastructure for financial and social applications on a global scale is the grand challenge of information technology today. Ouroboros contributes, not just as a protocol with unique characteristics, but also in presenting a design methodology that highlights first principles, careful modeling and rigorous analysis. Its modular and adaptable architecture also lends itself to continuous improvement, adaptation and enrichment with additional elements (such as parallelization to improve scalability or zero-knowledge proofs to improve privacy, to name two examples), which is a befitting characteristic to meet the ever-evolving needs and complexities of the real world.
Further reading
To delve deeper into the Ouroboros protocol, from its inception to recent new features, follow these links:
- Ouroboros (Classic): the first provably secure proof-of-stake blockchain protocol.
- Ouroboros Praos: removes the need for a rigid round structure and improves resilience against ‘adaptive’ attackers.
- Ouroboros Genesis: how to avoid the need for a recent checkpoint and prove the protocol is secure under dynamic availability for trustless joining and participating.
- Ouroboros Chronos: removes the need for a common clock.
- Reward sharing schemes for stake pools.
- Account management and maximizing participation in stake pools.
- Optimizing transaction throughput with proof-of-stake protocols.
- Fast settlement using ledger combiners.
- Ouroboros Crypsinous: a privacy-preserving proof-of-stake protocol.
- Kachina: a unified security model for private smart contracts.
- Hydra: an off-chain scalability architecture for high transaction throughput with low latency, and minimal storage per node.
Integrating and advancing with Adrestia
Taking the challenge out of fast-paced blockchain development
15 June 2020 4 mins read
For exchanges and developer partners, integrating with any blockchain can be challenging. The technology often moves so quickly that keeping up with the pace of change can be unrealistic. Cardano’s development and release process is now driving things forward apace. Managing parallel software development workstreams moving at different speeds can feel a bit like changing the tires on a truck while it’s driving at 60 miles per hour.
Cardano’s vision is to provide unparalleled security and sustainability to decentralized applications, systems, and societies. It has been created to be the most technologically advanced and environmentally sustainable blockchain platform, offering a secure, transparent, and scalable template for how we work, interact, and create, as individuals, businesses, and societies.
In line with these ambitions, we needed to devise a way that our partners could swiftly, easily and reliably integrate with Cardano, regardless of what was going on under the hood. Whatever the pace and cadence of future rollouts, we wanted to develop a consistent method by which all updates to the core node could be easily adopted by everyone.
In order to make that integration and interaction with Cardano easier and faster, IOHK engineers formed the Adrestia team, to take responsibility for building all the web APIs and libraries that make Cardano accessible to developers and application builders. Developments to the node can then focus on performance and scalability, while users will always be able to interact with it effortlessly. The name Adrestia was chosen after the goddess of revolt because with these new interfaces we expect everyone to be able to integrate with Cardano, creating a ‘revolution’ in accessibility.
Enabling developers to keep pace with change
The goal of the Adrestia team is to provide – via Web APIs – a consistent integration experience so that developers know what to expect between Cardano roadmap releases. Whether they are wallet developers or exchanges, users can flexibly explore the chain, make transactions, and more.
The APIs are as follows:
- cardano-wallet: HTTP REST API for managing UTXOs, and much more.
- cardano-submit-api: HTTP API for submitting signed transactions.
- cardano-graphql: HTTP GraphQL API for exploring the blockchain.
The SDK consists of several low-level libraries:
- cardano-addresses: Address generation, derivation & mnemonic manipulation.
- cardano-coin-selection: Algorithms for coin selection and fee balancing.
- cardano-transactions: Utilities for constructing and signing transactions.
- bech32: Haskell implementation of the Bech32 address format (BIP 0173).
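To give a flavor of what integration looks like in practice, here is a minimal sketch (ours, for illustration only) that queries a wallet server's network information endpoint. It assumes a local cardano-wallet instance listening on its default port 8090, and the http-conduit package:

module Main where

import qualified Data.ByteString.Lazy.Char8 as L8
import Network.HTTP.Simple (getResponseBody, httpLBS, parseRequest_)

-- Ask the wallet server how far it has synchronized with the chain.
main :: IO ()
main = do
  response <- httpLBS (parseRequest_ "http://localhost:8090/v2/network/information")
  L8.putStrLn (getResponseBody response)

The same pattern applies to the other endpoints: a consistent HTTP interface, regardless of what is happening in the node underneath.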
In addition to providing a flexible and productive way to integrate with Cardano, this approach also makes maintenance easier. Because the interface stays consistent, updating an integration between releases takes less time, and this familiarity reduces maintenance costs. New software can then be deployed in days rather than weeks. Ultimately, anyone can keep pace with change.
Get Started
The results are now live in the Byron era of Cardano. Exchanges or third-party wallets using Cardano-SL should now be integrating with the new Byron wallet and preparing to upgrade to the Shelley wallet; these steps need to happen consecutively to avoid any outages. Full details have been added to the Adrestia team repo, and we continue to work with our partners to ensure there is no interruption in service for ada holders keeping their funds on exchanges or in third-party wallets. The chart below shows the difference between the Cardano-SL node and the upcoming Shelley node. The components in red are not Shelley compatible and will break after the hard fork, while the other components are Shelley compatible and will be supported during and after the hard fork.
Consistency is key in creating a blockchain network that works for everyone. Cardano is not being built for the next five or ten years, but for the next fifty. Change to the system is inevitable in that time but Adrestia was made to ensure that everyone can connect with the Cardano node. To get started, check out the Adrestia project repo and read the user guide.
Why we are joining Hyperledger
Collaboration is central to fostering industry development and enterprise blockchain adoption.
11 June 2020 3 mins read
A community's overall success largely depends on its ability to collaborate. How its members interact with each other, how they share knowledge to find new avenues of development, and how open they are to embrace innovation and novel technologies will determine the community's long-term viability.
One of IOHK’s founding principles is its belief in nurturing a collaborative ecosystem for blockchain development. Our commitment to knowledge-sharing and to our deeply held open-source principles is the rationale behind becoming a member of the Hyperledger community.
IOHK builds its own blockchain technologies while simultaneously partnering with other enterprises - all driven by the shared goal of advancing blockchain adoption. We want to share what we have learned, build partnerships, and combine expertise to solve enterprise challenges across multiple industries for the benefit of all.
Becoming part of the Hyperledger Community: What it means for IOHK
Blockchain is a rapidly developing technology, and its sheer versatility has demonstrated unparalleled potential for innovation across multiple sectors. We are now entering the next phase of maturity with the creation of enterprise ecosystems to support novel projects that leverage decentralization.
Hyperledger focuses on developing a suite of robust frameworks, tools, and libraries for enterprise-grade blockchain deployments, serving as a collaborative platform for various distributed ledger frameworks.
IOHK is joining Hyperledger to contribute to its growing suite of blockchain projects while employing the Hyperledger umbrella to provide visibility and share components of IOHK's interoperable framework. Through this collaborative effort, we want to foster and accelerate collective innovation and industrial adoption of blockchain technologies.
Why we are joining Hyperledger
Hyperledger includes more than 250 member companies, including many industry leaders in finance, banking, technology, and other sectors. Becoming a part of this consortium enables us to do two things: (1) collaborate and build synergies with others in this community, and (2) refine our products for further interoperability. By doing so, we hope to encourage the co-development of our protocols and products, in line with our open-source development philosophy.
We are keen to gain a deeper understanding of specific enterprise requirements from all these projects, so we can construct better products and attune our offering more accurately. We want to help shape the future of the enterprise blockchain ecosystem by leveraging blockchain technology for a more decentralized economy - both through the products and solutions we engineer, and everything that this framework represents. We believe that this collaborative spirit will drive innovation and adoption of blockchain over the next decade, and beyond.
Looking to the future of Haskell and JavaScript for Plutus
Smart contracts use the GHCJS cross-compiler to translate off-chain code
4 June 2020 8 mins read
This is the second of the Developer Deep Dive technical posts from our Haskell team. This occasional series offers a candid glimpse into the core elements of the Cardano platform and protocols, and gives insights into the engineering choices that have been made. Here, we outline some of the work that has been going on to improve the libraries and developer tools for Plutus, Cardano’s smart contract platform.
Introduction
At IOHK we are developing the Plutus smart contract platform for the Cardano blockchain. A Plutus contract is a Haskell program that is partly compiled to on-chain Plutus Core code and partly to off-chain code. On-chain code is run by Cardano network nodes using the interpreter for Plutus Core, the smart contract language embedded in the Cardano ledger. This is how the network verifies transactions. Off-chain code is for tasks such as setting up the contract and for user interaction. It runs inside the wallet of each user of the contract, on a node.js runtime.
Compiling the off-chain part of a Plutus contract therefore involves translating the off-chain code to JavaScript. To accomplish this, we use GHCJS, the Glasgow Haskell to JavaScript cross-compiler.
In the past year, we haven't made many changes to the GHCJS code generator. Instead, we restructured the compiler to make compiling things with GHCJS more reliable and predictable, added support for Windows, and made use of the most recent Cabal features. This post gives an overview of what has happened, and a brief look at what's in store for this year.
Cabal support
When installing a package with GHCJS, you probably use the `--ghcjs` command line flag or include `compiler: ghcjs` in your configuration file. This activates the `ghcjs` compiler flavor in Cabal. The `ghcjs` flavor is based on the `ghc` flavor and adds features such as support for running set-up scripts built by GHCJS, and for adding JavaScript sources.
Cabal has introduced many features in recent years, including support for Backpack, Nix-style local builds, multiple (named) libraries per package, and per-component build plans. Unfortunately, the new features resulted in many changes to the code base, and maintenance of the `ghcjs` flavor fell behind for some time. We have brought GHCJS support up to date again in Cabal 3.0. If you want to use the new-style build features, make sure that you use cabal-install version 3 or later.
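For example (a sketch rather than official documentation, assuming GHCJS is installed and on your PATH), a project can opt in through a cabal.project.local file:

-- cabal.project.local: build this project with GHCJS
with-compiler: ghcjs

after which an ordinary cabal build produces JavaScript output.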
The differences between the `ghcjs` and `ghc` compiler flavors are minor, and cross-compilation support in Cabal has been improving. Therefore, we hope that eventually we will be able to drop the `ghcjs` compiler flavor altogether; the extensions would instead be added as platform-specific behaviour in the `ghc` flavor.
Compiler plug-ins
GHC allows the compiler to be extended with plug-ins, which can change aspects of the compilation pipeline. For example, plug-ins can introduce new optimization features or extend the typechecker.
Unlike Template Haskell, which is separated from the compiler through the `Quasi` typeclass abstraction, plug-ins can directly use the whole GHC API. This makes the ‘external interpreter’ approach that GHCJS introduced for running Template Haskell in a cross-compiler unsuitable for compiler plug-ins. Instead, plug-ins need to be built for the build platform (which runs GHC).
In 2016, GHCJS introduced experimental support for compiler plug-ins. This relied on looking up the plug-in in the GHCJS package database and then trying to find a close match for the plug-in package and module in the GHC package database. We have now added a new flag to point GHCJS to packages runnable on the build system. This makes plug-ins usable again with new-style builds and other ‘exotic’ package database configurations.
In principle, our new flag can make plug-ins work on any GHC cross-compiler, but the requirement to also build the plug-in with the cross-compiler is quite ugly. We are working on removing this requirement, followed by merging plug-in support for cross-compilers into upstream GHC (see ticket 14335 and ticket 17957).
Emscripten toolchain
Long, long ago, GHCJS worked on Windows. One or two brave souls might have actually used it! Its boot packages (the packages built by `ghcjs-boot`) would include the Win32 package on the Windows build platform. The reason for this was the Cabal configuration with GHCJS: Cabal’s `os(win32)` flag would be set if the build platform was Windows. At the time it was easiest to just patch the package to build without errors with GHCJS, and include it in the boot packages. However, the `Win32` package didn’t really work, and keeping it up to date was a maintenance burden. At some point it fell behind and GHCJS didn’t work on Windows any more.
The boot packages having to include `Win32` on Windows was indicative of poor separation between the build platform (which runs the compiler) and the host platform (which runs the executable produced by the compiler). This was caused by the lack of a complete C toolchain for GHCJS. Many packages don’t just have Haskell code, but also have files produced by a C toolchain, for example via an Autotools `configure` script or `hsc2hs`.
The GHCJS approach was to include some pre-generated files, and use the build platform C toolchain (the same that GHC would use) for everything else, hoping that it wouldn't break. If it did break, we would patch the package.
In recent years, the web browser as a compilation target has steadily been gaining more traction. Emscripten has been providing a C toolchain for many years, and has recently switched from its own compiler backend to the Clang compiler with the standard LLVM backend.
Clang has been supported by GHC as a C toolchain for a while, and Emscripten can output asm.js and WebAssembly code that runs directly in the browser. Unfortunately, users of the compiler cannot yet directly interact with compiled C code through the C FFI (`foreign import ccall`) in GHCJS. But having a C toolchain allows packages that depend on `configure` scripts or `hsc2hs` to compile much more reliably. This fixes some long-standing build problems and allows us to support Windows again. We think this alone is worth the additional dependency.
A variant of GHCJS 8.6 using the Emscripten toolchain is available in the `ghc-8.6-emscripten` branch, which can be installed on Windows. This time around, the set of boot packages is the same on every build platform. Emscripten is planned to be the standard toolchain from GHCJS 8.8 onwards.
Code organization
At some point in the early days, GHCJS was implemented as a patch to GHC. It generated JavaScript from the STG intermediate code that a regular GHC installation produced. This made it easy to install packages with GHCJS. Just install normally and get JavaScript for free. Even Template Haskell worked!
The downside of this approach was that the build platform very much affected the generated code. If you built on a 64-bit Linux machine, all platform constants came from the Linux platform, and the code was built with the assumption that `Int` is 64 bits wide, which is not very efficient to implement in JavaScript. Moreover, Cabal would not be aware that JavaScript was being generated at all.
Later, we switched to using `ghc` as a library, introducing `Hooks` to change the compilation pipeline where needed. This made it possible to make the GHCJS platform word size independent of the build platform, introduce the `JavaScriptFFI` extension, and run Template Haskell in a separate process with node.js.
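As a brief illustration of what the `JavaScriptFFI` extension enables (our own sketch, assuming the ghcjs-base package for the `JSString` type), Haskell code can bind JavaScript functions directly:

{-# LANGUAGE JavaScriptFFI #-}
module Main where

import Data.JSString (JSString, pack)

-- Bind the browser/node.js console.log function as a Haskell action.
foreign import javascript unsafe
  "console.log($1);"
  js_log :: JSString -> IO ()

main :: IO ()
main = js_log (pack "Hello from GHCJS!")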
Unfortunately, it turned out to be hard to keep up with changes in the upstream `ghc` library. In addition, modifying the existing `ghc` through `Hooks` encouraged engineers to work around issues instead of directly fixing them upstream.
In early 2018, we decided to build a custom `ghc` library for GHCJS, installed as `ghc-api-ghcjs`, allowing us to work around serious issues before they were merged upstream. Recently, we dropped the separate library, and now build both the GHC and GHCJS source code in one library: `ghcjs`.
Although we cannot build GHCJS with the GHC build system yet, we are using the upstream GHC source tree much more directly again. Are we going back to the past? Perhaps, but this time we have our own platform with a toolchain and build tools, avoiding the pitfalls that made this approach so problematic the first time.
Outlook
In 2019, we saw improvements to make Cabal work again, to bring back Windows support, and to improve the C toolchain. We hope that the underlying changes in the GHCJS code base will make it easier to merge more upstream and to make libraries compatible with fewer changes. By continuing down this path of turning GHCJS into a proper cross-compiler with a proper toolchain, as well as reducing its custom tools, we intend to further align and eventually merge GHCJS with GHC. While there will always be platform-specific libraries that simply cannot be cross-compiled, our ultimate goal is to provide GHC with a JavaScript target, alongside its x86_64, Arm, AArch64, and other code generation targets.
The abstract nature of the Cardano consensus layer
This new series of technical posts by IOHK's engineers considers the choices being made
28 May 2020 21 mins read
As we head towards Shelley on the Cardano mainnet, we’re keen to keep you up to date with the latest news. We also want to highlight the tireless work going on behind the scenes, and the team of engineers responsible.
In this Developer Deep Dive series of occasional technical blogs, we open the floor to IOHK's engineers. Over the months ahead, our Haskell developers will offer a candid glimpse into the core elements of the Cardano platform and protocols, and share insights into the choices that have been made.
Here, in the first of the series, we consider the use of abstraction in the consensus layer.
Introduction
The Cardano consensus layer has two important responsibilities:
- It runs the blockchain consensus protocol. In the context of a blockchain, consensus - that is, 'majority of opinion' - means everyone involved in running the blockchain agrees on what the one true chain is. This means that the consensus layer is in charge of adopting blocks, choosing between competing chains if there are any, and deciding when to produce blocks of its own.
- It is responsible for maintaining all the state required to make these decisions. To decide whether or not to adopt a block, the protocol needs to validate that block with respect to the state of the ledger. If it decides to switch to a different chain (a different tine of a fork in the chain), it must keep enough history to be able to reconstruct the ledger state on that chain. To be able to produce blocks, it must maintain a mempool of transactions to be inserted into those blocks.
The consensus layer mediates between the network layer below it, which deals with concerns such as communication protocols and peer selection, and the ledger layer above it, which specifies what the state of the ledger looks like and how it must be updated with each new block. The ledger layer is entirely stateless, and consists exclusively of pure functions. In turn, the consensus layer does not need to know the exact nature of the ledger state, or indeed the contents of the blocks (apart from some header fields required to run the consensus protocol).
Extensive use is made of abstraction in the consensus layer. This is important for many reasons:
- It allows programmers to inject failures when running tests. For example, we abstract the underlying file system, and use this to stress-test the storage layer while simulating all manner of disk faults. Similarly, we abstract over time, and use this to check what happens to a node when a user's system clock jumps forward or backwards.
- It allows us to instantiate the consensus layer with many different kinds of ledgers. Currently we use this to run the consensus layer with the Byron ledger (the ledger that's currently on the Cardano blockchain) as well as the Shelley ledger for the upcoming Shelley Haskell testnet. In addition, we use it to run the consensus layer with various kinds of ledgers specifically designed for testing, typically much simpler than 'real' ledgers, so that we can focus our tests on the consensus layer itself.
- It improves compositionality, allowing us to build larger components from smaller components. For example, the Shelley testnet ledger only contains the Shelley ledger, but once Shelley is released the real chain will contain the Byron ledger up to the hard fork point, and the Shelley ledger thereafter. This means we need a ledger layer that switches between two ledgers at a specific point. Rather than defining a new ledger, we can instead define a hard fork combinator that implements precisely this, and only this, functionality. This improves reusability of the code (we will not need to reimplement hard fork functionality when the next hard fork comes around), as well as separation of concerns: the development and testing of the hard fork combinator does not depend on the specifics of the ledgers it switches between.
- Closely related to the last point, the use of abstraction improves testability. We can define combinators that define minor variations of consensus protocols that allow us to focus on specific aspects. For example, we have a combinator that takes an existing consensus protocol and overrides only the check whether we should produce a block. We can then use this to generate testing scenarios in which many nodes produce a block in a given slot or, conversely, no nodes at all, and verify that the consensus layer does the right thing under such circumstances. Without a combinator to override this aspect of the consensus layer, it would be left to chance whether such scenarios arise. Although we can control 'chance' in testing (by choosing a particular starting seed for the pseudorandom number generator), it would be impossible to set up specific scenarios, or indeed automatically shrink failing test cases to a minimal test case. With these testing consensus combinators, the test can literally just use a schedule specifying which nodes should produce blocks in which slots. We can then generate such schedules randomly, run them, and if there is a bug, shrink them to a minimal test case that still triggers the bug. Such a minimal failing schedule is much easier to debug, understand and work with than merely a choice of initial random seed that causes... something... to happen at... some point.
To achieve all of this, however, the consensus layer needs to make use of some sophisticated Haskell types. In this blog post we will develop a 'mini consensus protocol', explain exactly how we abstract various aspects, and how we can make use of those abstractions to define various combinators. The remainder of this blog post assumes that the reader is familiar with Haskell.
Preliminaries
The Cardano consensus layer is designed to support the Ouroboros family of consensus protocols, all of which make a fundamental assumption that time is split into slots, where the blockchain can have at most one block per slot. In this article, we define a slot number to just be an `Int`:
type SlotNo = Int
We will also need a minimal model of public-key cryptography, specifically:
data PrivateKey
data PublicKey
data Signature
verifySig :: Signature -> PublicKey -> Bool
The only language extension we will need in this blog post is `TypeFamilies`, although the real implementation uses far more. If you want to follow along, download the full source code.
Consensus protocol
As mentioned at the start of this post, the consensus protocol has three main responsibilities:
- Leader check (should we produce a block?)
- Chain selection
- Block verification
The protocol is intended to be independent from a concrete choice of block, as well as a concrete choice of ledger, so that a single protocol can be run with different kinds of blocks and/or ledgers. Therefore, each of these three responsibilities defines its own 'view' on the data it requires.
Leader check
The leader check runs at every slot, and must determine if the node should produce a block. In general, the leader check might require some information extracted from the ledger state:
type family LedgerView p :: *
For example, in the Ouroboros Praos consensus protocol that will initially power Shelley, a node's probability of being elected a leader (that is, being allowed to produce a block) depends on its stake; the node's stake, of course, comes from the ledger state.
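To make this concrete, a highly simplified instantiation for a Praos-like protocol could expose each key's relative stake (purely our illustration, assuming `Data.Map` in scope and an `Ord` instance for `PublicKey`; the real definition is richer):

data Praos -- type-level tag, in the style used below
type instance LedgerView Praos = Map PublicKey Rational -- relative stake per key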
Chain selection
'Chain selection' refers to the process of choosing between two competing chains. The main criterion here is chain length, but some protocols may have additional requirements. For example, blocks are typically signed by a 'hot' key that exists on the server, which in turn is generated by a 'cold' key that never exists on any networked device. When the hot key is compromised, the node operator can generate a new one from the cold key, and 'delegate' to that new key. So, in a choice between two chains of equal
length, both of which have a tip signed by the same cold key but a different hot key, a consensus protocol will prefer the newer hot key. To allow specific consensus protocols to state requirements such as this, we therefore introduce a SelectView
, specifying what information the protocol expects to be present in the block:
type family SelectView p :: *
Header validation
Block validation is mostly a ledger concern; checks such as verifying that all inputs to a transaction are available to avoid double-spending are defined in the ledger layer. The consensus layer is mostly unaware of what's inside the blocks; indeed, it might not even be a cryptocurrency, but a different application of blockchain technology.
Blocks (more precisely, block headers) also contain a few things specifically to support the consensus layer, however. To stick with the Praos example, Praos expects various cryptographic proofs to be present related to deriving entropy from the blockchain. Verifying these is the consensus layer's responsibility, and so we define a third and final view of the fields that consensus must validate:
type family ValidateView p :: *
Protocol definition
We need one more type family*: each protocol might require some static information to run, such as keys to sign blocks, or its own identity for the leader check. We call this the ‘node configuration’, and define it as
data family NodeConfig p :: *
We can now tie everything together in the abstract definition of a consensus protocol. The three methods of the class correspond to the three responsibilities we mentioned; each method is passed the static node configuration, as well as the specific view required for that method. Note that `p` here is simply a type-level tag identifying a specific protocol; it never exists at the value level.
class ConsensusProtocol p where
-- | Chain selection
selectChain :: NodeConfig p -> SelectView p -> SelectView p -> Ordering
-- | Header validation
validateHdr :: NodeConfig p -> ValidateView p -> Bool
-- | Leader check
--
-- NOTE: In the real implementation this is additionally parameterized over
-- some kind of 'proof' that we are indeed the leader. We ignore that here.
checkLeader :: NodeConfig p -> SlotNo -> LedgerView p -> Bool
Example: Permissive BFT (PBFT)
Permissive BFT is a simple consensus protocol where nodes produce blocks according to a round-robin schedule: first node A, then B, then C, then back to A. We call the index of a node in that schedule the ‘NodeId’. This is a concept for PBFT only, not something the rest of the consensus layer needs to be aware of:
type NodeId = Int
The node configuration required by PBFT is the node's own identity, as well as the public keys of all nodes that can appear in the schedule:
data PBft -- The type level tag identifying this protocol
data instance NodeConfig PBft = CfgPBft {
-- | The node's ID
pbftId :: NodeId
-- | The verification keys of all nodes
, pbftKeys :: [PublicKey]
}
Since PBFT is a simple round-robin schedule, it does not need any information from the ledger:
type instance LedgerView PBft = ()
Chain selection just looks at the chain length; for the sake of simplicity, we will assume here that longer chains will end on a larger slot number**, and so chain selection just needs to know the slot number of the tip:
type instance SelectView PBft = SlotNo
Finally, when validating blocks, we must check that they are signed by one of the nodes that are allowed to produce blocks (that is, by any node that can appear in the schedule):
type instance ValidateView PBft = Signature
Defining the protocol is now simple:
instance ConsensusProtocol PBft where
selectChain _cfg = compare
validateHdr cfg sig = any (verifySig sig) (pbftKeys cfg)
checkLeader cfg slot () = slot `mod` length (pbftKeys cfg) == pbftId cfg
Chain selection just compares slot numbers; validation checks if the key is one of the allowed slot leaders, and leader selection does some modular arithmetic to determine if it's that node's turn.
Note that the round-robin schedule is used only in the check whether or not we are a leader; for historical blocks (that is, when doing validation), we merely check that the block was signed by any of the allowed leaders, not in which order. This is the 'permissive' part of PBFT; the real implementation does do one additional check (that there isn't any single leader signing more than its share), which we omit from this blog post.
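For example, with three keys in `pbftKeys`, the node with `pbftId` 1 is the slot leader in slots 1, 4, 7, and so on: exactly those slots whose number modulo 3 equals 1.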
Protocol combinator example: explicit leader schedule
As mentioned in the introduction, for testing it is useful to be able to explicitly decide the order in which nodes sign blocks. To do this, we can define a protocol combinator that takes a protocol and just overrides the is-leader check to check a fixed schedule instead:
data OverrideSchedule p -- As before, just a type-level tag
The configuration required to run such a protocol is the schedule and the node's ID within this schedule, in addition to whatever configuration the underlying protocol `p` requires:
data instance NodeConfig (OverrideSchedule p) = CfgOverride {
-- | The explicit schedule of nodes to sign blocks
schedule :: Map SlotNo [NodeId]
-- | Our own identifier in the schedule
-- (PBFT has such an identifier also, but many protocols don't)
, scheduleId :: NodeId
-- | Config of the underlying protocol
, scheduleP :: NodeConfig p
}
Chain selection is unchanged, and hence so is the view on blocks required for chain selection:
type instance SelectView (OverrideSchedule p) = SelectView p
However, we need to override header validation (indeed, disable it altogether), because if the underlying protocol verifies that the blocks are signed by the right node, then obviously such a check would be incompatible with this override. We therefore don't need anything from the block to validate it (since we don't do any validation):
type instance ValidateView (OverrideSchedule p) = ()
Similarly, since the schedule is fixed ahead of time, the leader check needs no information from the ledger:
type instance LedgerView (OverrideSchedule p) = ()
Defining the protocol is now trivial:
instance ConsensusProtocol p => ConsensusProtocol (OverrideSchedule p) where
selectChain cfg = selectChain (scheduleP cfg)
validateHdr _cfg () = True
checkLeader cfg s () = scheduleId cfg `elem` (schedule cfg) Map.! s
Chain selection just uses the chain selection of the underlying protocol `p`, block validation does nothing, and the is-leader check refers to the static schedule.
From blocks to protocols
Just like the consensus protocol, a specific choice of block may also come with its own required configuration data:
data family BlockConfig b :: *
Many blocks can use the same protocol, but a specific type of block is designed for a specific kind of protocol. We therefore introduce a type family mapping a block type to its associated protocol:
type family BlockProtocol b :: *
We then say a block supports its protocol if we can construct the views on that block required by its specific protocol:
class BlockSupportsProtocol b where
selectView :: BlockConfig b -> b -> SelectView (BlockProtocol b)
validateView :: BlockConfig b -> b -> ValidateView (BlockProtocol b)
Example: (Simplified) Byron blocks
Stripped to their bare bones, blocks on the Byron blockchain look something like
type Epoch = Int
type RelSlot = Int
data ByronBlock = ByronBlock {
bbSignature :: Signature
, bbEpoch :: Epoch
, bbRelSlot :: RelSlot
}
We mentioned above that the Ouroboros consensus protocol family assumes that time is split into slots. In addition, these protocols also assume that the slots are grouped into epochs. Byron blocks do not contain an absolute slot number, but instead an epoch number and a relative slot number within that epoch. The number of slots per epoch on the Byron chain is fixed at 10k, where k is the security parameter, but for testing it is useful to be able to vary k, and so the epoch length is configurable; this is (part of) the configuration necessary to work with Byron blocks:
data instance BlockConfig ByronBlock = ByronConfig {
bcEpochLen :: Int
}
Given the Byron configuration, it is then a simple matter of converting the pair of an epoch number and a relative slot into an absolute slot number:
bbSlotNo :: BlockConfig ByronBlock -> ByronBlock -> SlotNo
bbSlotNo cfg blk = bbEpoch blk * bcEpochLen cfg + bbRelSlot blk
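For example, with `bcEpochLen` set to 21600 (the epoch length used on the Byron mainnet), a block in epoch 2 at relative slot 100 has absolute slot number 2 * 21600 + 100 = 43300.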
What about the protocol? Well, the Byron chain runs PBFT, and so we can define
type instance BlockProtocol ByronBlock = PBft
Proving that the Byron block supports this protocol is easy:
instance BlockSupportsProtocol ByronBlock where
selectView = bbSlotNo
validateView = const bbSignature
The ledger state
The consensus layer not only runs the consensus protocol, it also manages the ledger state. It doesn't, however, care much what the ledger state looks like specifically; instead, it merely assumes that some type of ledger state is associated with a block type:
data family LedgerState b :: *
We will need one additional type family. When we apply a block to the ledger state, we might get an error if the block is invalid. The specific type of ledger errors is defined in the ledger layer, and is, of course, highly ledger specific. For example, the Shelley ledger will have errors related to staking, whereas the Byron ledger does not, because it does not support staking; and ledgers that aren't cryptocurrencies will have very different kinds of errors.
data family LedgerError b :: *
We now define two type classes. The first merely describes the interface to the ledger layer, saying that we must be able to apply a block to the ledger state and either get an error (if the block was invalid) or the updated ledger state:
class UpdateLedger b where
applyBlk :: b -> LedgerState b -> Either (LedgerError b) (LedgerState b)
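To see how this fits together, here is a toy instance (our own illustration, not from the consensus code base): a block that carries only its slot number, over a ledger state that records the slot of the most recently applied block and rejects blocks whose slot does not increase:

data ToyBlock = ToyBlock { tbSlot :: SlotNo }

data instance LedgerState ToyBlock = ToyLedger { tlLastSlot :: SlotNo }
data instance LedgerError ToyBlock = ToySlotNotIncreasing

instance UpdateLedger ToyBlock where
  applyBlk blk st
    | tbSlot blk > tlLastSlot st = Right (ToyLedger (tbSlot blk))
    | otherwise                  = Left ToySlotNotIncreasing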
Second, we will define a type class that connects the ledger associated with a block to the consensus protocol associated with that block. Just like `BlockSupportsProtocol` provides functionality to derive views from the block required by the consensus protocol, `LedgerSupportsProtocol` similarly provides functionality to derive views from the ledger state required by the consensus protocol:
class LedgerSupportsProtocol b where
-- | Extract the relevant information from the ledger state
ledgerView :: LedgerState b -> LedgerView (BlockProtocol b)
We will see in the next section why it is useful to separate the integration with the ledger (`UpdateLedger`) from its connection to the consensus protocol (`LedgerSupportsProtocol`).
Block combinators
As a final example of the power of combinators, we will show that we can define a combinator on blocks and their associated ledgers. As an example of where this is useful, the Byron blockchain comes with an implementation as well as an executable specification. It is useful to instantiate the consensus layer with both of these ledgers, so we can verify that the implementation agrees with the specification at all points along the way. This means that blocks in this 'dual ledger' set-up are, in fact, a pair of blocks***.
data DualBlock main aux = DualBlock {
dbMain :: main
, dbAux :: aux
}
The definition of the dual ledger state and dual ledger error are similar:
data instance LedgerState (DualBlock main aux) = DualLedger {
dlMain :: LedgerState main
, dlAux :: LedgerState aux
}
data instance LedgerError (DualBlock main aux) = DualError {
deMain :: LedgerError main
, deAux :: LedgerError aux
}
To apply a dual block to the dual ledger state, we simply apply each block to its associated state. This particular combinator assumes that the two ledgers must always agree whether or not a particular block is valid, which is suitable for comparing an implementation and a specification; other choices (other combinators) are also possible:
instance ( UpdateLedger main
, UpdateLedger aux
) => UpdateLedger (DualBlock main aux) where
applyBlk (DualBlock dbMain dbAux)
(DualLedger dlMain dlAux) =
case (applyBlk dbMain dlMain, applyBlk dbAux dlAux) of
(Left deMain , Left deAux ) -> Left $ DualError deMain deAux
(Right dlMain' , Right dlAux') -> Right $ DualLedger dlMain' dlAux'
_otherwise -> error "ledgers disagree"
Since the intention of the dual ledger is to compare two ledger implementations, it will suffice to have all consensus concerns be driven by the first (‘main’) block; we don't need an instance of `LedgerSupportsProtocol` for the auxiliary block and, indeed, in general might not be able to give one. This means that the block protocol of the dual block is the block protocol of the main block:
type instance BlockProtocol (DualBlock main aux) = BlockProtocol main
instance LedgerSupportsProtocol main
=> LedgerSupportsProtocol (DualBlock main aux) where
ledgerView = ledgerView . dlMain
The block configuration we need is the block configuration of both blocks:
data instance BlockConfig (DualBlock main aux) = DualConfig {
dcMain :: BlockConfig main
, dcAux :: BlockConfig aux
}
We can now easily show that the dual block also supports its protocol:
instance BlockSupportsProtocol main => BlockSupportsProtocol (DualBlock main aux) where
selectView cfg = selectView (dcMain cfg) . dbMain
validateView cfg = validateView (dcMain cfg) . dbMain
Conclusions
The Cardano consensus layer was initially designed for the Cardano blockchain, currently running Byron and soon running Shelley. You might argue that this means IOHK's engineers should design for that specific blockchain initially, and generalize only later when using the consensus layer for other blockchains. However, doing so would have important downsides:
- It would hamper our ability to do testing. We would not be able to override the schedule of which nodes produce blocks when, we would not be able to run with a dual ledger, etc.
- We would entangle things that are logically independent. With the abstract approach, the Shelley ledger consists of three parts: a Byron chain, a Shelley chain, and a hard fork combinator mediating between these two. Without abstractions in place, such separation of concerns would be harder to achieve, leading to code that is more difficult to understand and maintain.
- Abstract code is less prone to bugs. As a simple example, since the dual ledger combinator is polymorphic in the two ledgers it combines, and they have different types, we couldn't write type-correct code that nonetheless tries to apply, say, the main block to the auxiliary ledger.
- When the time comes and we want to instantiate the consensus layer for a new blockchain (as it inevitably will), writing it in an abstract manner from the start forces us to think carefully about the design and avoid coupling things that shouldn't be coupled, or making assumptions that happen to be justified for the actual blockchain under consideration but may not be true in general. Fixing such problems after the design is in place can be difficult.
Of course, all of this requires a programming language with excellent abstraction abilities; fortunately, Haskell fits that bill perfectly.
This is the first in a series of Deep Dive posts aimed at developers
*Unlike the various views, `NodeConfig` is a data family; since the node configuration is passed to all functions, having `NodeConfig` be a data family helps type inference, as it determines `p`. Leaving the rest as type families can be convenient and avoid unnecessary wrapping and unwrapping.
** Slot numbers are a proxy for chain length only if every slot contains a block. This is true for PBFT only in the absence of temporary network partitions, and not true at all for probabilistic consensus algorithms such as Praos. Put another way, a lower density chain may well have a higher slot number at its tip than another shorter but higher density chain.
*** The real `DualBlock` combinator keeps a third piece of information that relates the two blocks (things like transaction IDs). We have omitted it from this post for simplicity.