Stablefees and the Decentralized Reserve System
Exploring a new mechanism to help make fees fair, stable, and more predictable over time
10 June 2021 7 mins read
Facilitating transactions in cryptocurrency platforms is complicated by the dual utility of the platform’s underlying asset. On the one hand, users can hold and trade it as part of their investment portfolios. On the other hand, it supplies the necessary “fuel” for processing transactions. This duality suggests that the system should have a mechanism for adjusting transaction costs so that they remain competitive and reasonable. The bounded throughput of decentralized platforms per unit of time introduces another hurdle: the system should also allow users to discover the correct price for timely transaction processing, depending on their individual needs.
Why not drop transaction fees altogether? Three reasons: One, transaction processing incurs costs on the system’s side (in terms of computation and storage). It is reasonable to allow transaction processors (stake pool operators, in the case of Cardano) to offset their costs. Two, even with a theoretically infinite capacity, it is important to prevent transaction issuers from saturating the network with worthless transactions. Three, it is appropriate to incentivize transaction processors to provide quality of service. A surge in demand should influence their payoffs accordingly.
Adding a fee to each transaction can address the above considerations.
Bitcoin and beyond
Bitcoin set out the first mechanism for pricing transactions in distributed ledger platforms. This mechanism resembles a first-price auction: transactions bid for a place in a block naming a specific reward, and block producers select the transactions that they prefer to include. Block producers also get rewarded with the right to mint new coins, i.e., their operation is subsidized by the whole community via inflation of the total coin supply. Inflation drops geometrically over time, and transaction fees become increasingly dominant in the rewards. This mechanism, while enabling Bitcoin to run for well over a decade, has been criticized for its inefficiency. Transaction costs have also risen over time.
In this blog post, we explore a new mechanism that builds on Cardano's approach to ledger rules and system assets, and complements the Babel fees concept. The objective is making fees fair, stable, and predictable over time. We describe the mechanism in the context of Cardano. However, it can be adapted to any other cryptocurrency with similar characteristics.
Introducing 'Stablefees'
The core idea behind Stablefees is to have a base price for transactions through pegging to a basket of commodities or currencies. Stablefees includes a native "decentralized reserve" contract that issues and manages a stablecoin pegged to the basket. A comparison in the fiat world might be the International Monetary Fund’s SDR, established in 1969 and valued on the basis of a basket of five currencies: the US dollar, the euro, the Chinese renminbi, the Japanese yen, and the British pound sterling. The stablecoin, which we will call the "Basket Equivalent Coin" (BEC), is the currency used for paying transaction fees (and for all other real-world pricing needs of the platform, e.g., SPO costs).
In this system, ada will play a dual role: the reserve asset of the decentralized reserve, and the reward currency for staking. It will also be the fallback currency in extreme scenarios where the reserve contract is in a liquidity crunch. Before issuing a transaction, the issuer will have to obtain BECs, either via third parties or directly by sending ada to the decentralized reserve contract. On what basis will the reserve issue BECs? The reserve contract will also issue equity shares, which we will call decentralized equity coins (DECs), in exchange for ada. Leveraging the value of DECs, the decentralized reserve will adjust the value of BEC as needed so that it remains pegged to the underlying basket. In other words, DECs will absorb the fluctuations of ada against the basket to ensure that the real-world value of BECs remains stable (cf. the AgeUSD stablecoin design that has already been deployed and used on Ergo).
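To make the reserve mechanics more concrete, here is a minimal, illustrative sketch of a reserve of this kind, loosely in the spirit of the AgeUSD design mentioned above. The names (Reserve, mint_bec, redeem_bec) and the pricing rules are assumptions made for illustration only, not the actual contract.

```python
# Minimal, illustrative sketch (assumed design): a reserve that issues a
# basket-pegged stablecoin (BEC) and equity shares (DEC) against ada.

class Reserve:
    def __init__(self, dec_supply: float, ada_reserve: float):
        self.ada_reserve = ada_reserve   # ada held by the reserve contract
        self.bec_supply = 0.0            # stablecoins in circulation
        self.dec_supply = dec_supply     # equity coins in circulation

    def bec_liability_in_ada(self, basket_price_ada: float) -> float:
        # ada needed to redeem every outstanding BEC at the oracle price
        return self.bec_supply * basket_price_ada

    def dec_price_in_ada(self, basket_price_ada: float) -> float:
        # DEC holders own whatever remains after BEC liabilities are covered,
        # so DECs absorb ada/basket fluctuations and keep BEC stable.
        equity = self.ada_reserve - self.bec_liability_in_ada(basket_price_ada)
        return max(equity, 0.0) / self.dec_supply

    def mint_bec(self, ada_in: float, basket_price_ada: float) -> float:
        # A user sends ada and receives BECs at the oracle-reported peg.
        bec_out = ada_in / basket_price_ada
        self.ada_reserve += ada_in
        self.bec_supply += bec_out
        return bec_out

    def redeem_bec(self, bec_in: float, basket_price_ada: float) -> float:
        # In this simplified sketch, BECs are always redeemable at the peg.
        ada_out = bec_in * basket_price_ada
        self.bec_supply -= bec_in
        self.ada_reserve -= ada_out
        return ada_out


reserve = Reserve(dec_supply=1_000_000, ada_reserve=5_000_000)
fees_in_bec = reserve.mint_bec(ada_in=100, basket_price_ada=2.0)   # -> 50 BEC
print(fees_in_bec, reserve.dec_price_in_ada(basket_price_ada=2.5))
```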
This trinity of coinage, issued natively by the system, will attract different cohorts. BECs’ stability and liquidity might be attractive to risk-averse, transaction-intensive holders. DECs will offer the highest rewards if ada goes up, but will also take the most significant hit when ada goes down; long-term holders may find DECs more attractive. Also, since the decentralized reserve prices these coins in ada, both BECs and DECs can facilitate participation in staking and governance. Returns can be issued at different rates, reflecting the different nature of each coin. Ultimately, rewards will always be denominated and payable in ada, which will remain the most versatile of the three coins.
Oracles
The centerpiece of this mechanism is an on-chain oracle that determines the price of the basket in ada. The oracle can be implemented in a decentralized manner by the SPOs, with the reserve offering extra rewards to oracle contributors out of the fees collected during BEC/DEC issuance. This allows for thousands of geographically diverse contributors, with the ledger rules synthesizing an exchange rate in a canonical way (for example, through a weighted median across all price submissions in an epoch). Contributors who attempt to manipulate their submissions can be held accountable by tracking their reputation and performance on-chain.
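As a sketch of the aggregation step, the snippet below computes a stake-weighted median over the price submissions collected in an epoch. The stake weighting and the tie-breaking choices are assumptions; the ledger rules could adopt any canonical variant.

```python
# Illustrative sketch: synthesize one basket-price-in-ada from many oracle
# submissions by taking a stake-weighted median (assumed aggregation rule).

def weighted_median(submissions):
    """submissions: list of (price_in_ada, weight) pairs, weight > 0."""
    total = sum(w for _, w in submissions)
    acc = 0.0
    for price, w in sorted(submissions):          # sort by submitted price
        acc += w
        if acc >= total / 2:                      # first price crossing half
            return price
    raise ValueError("empty submission list")

# Three contributors; the outlier (100.0) barely moves the result because it
# carries little weight, which is what makes manipulation expensive.
epoch_prices = [(2.00, 0.45), (2.05, 0.40), (100.0, 0.15)]
print(weighted_median(epoch_prices))   # -> 2.05
```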
The pricing mechanism
How would one price transactions and reward block producers? Following the current approach in Cardano, each transaction will be deterministically mapped to a precise value denominated in BECs, using a formula determined by the ledger rules. The formula will take into account both the transaction’s size and its computational requirements, and may also incorporate runtime metrics (such as the average system load). The resulting value will be the base fee, guaranteeing that the transaction will be processed by the system. Given the base fee, end users will be able to apply a multiplier if they wish (a value of at least 1, e.g., 1.5x, 3x, etc.) to increase the fee and accelerate processing. This will become relevant at times of surging demand.
This approach has one clear advantage over the first-price auction model: the pricing mechanism is continuously stabilized to a reasonable default value. Users perform price discovery in one direction only, to accelerate processing if required. Also, transaction issuers can store BECs to secure their future transaction-issuing ability without being affected by ada price volatility.
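A minimal sketch of such a fee rule is shown below. The coefficients, the linear functional form, and the function names are assumptions made for illustration; the actual formula would be fixed by the ledger rules as protocol parameters.

```python
# Illustrative sketch of the deterministic base-fee rule described above.
# Coefficients and the linear form are assumptions, not ledger parameters.

def base_fee_bec(tx_size_bytes: int, exec_units: int,
                 per_byte: float = 0.00005, per_unit: float = 0.00001,
                 constant: float = 0.05) -> float:
    """Base fee in BEC, covering size and computational requirements."""
    return constant + per_byte * tx_size_bytes + per_unit * exec_units

def total_fee_bec(tx_size_bytes: int, exec_units: int,
                  multiplier: float = 1.0) -> float:
    """Users may scale the base fee by a multiplier >= 1 to accelerate
    processing at times of surging demand."""
    if multiplier < 1.0:
        raise ValueError("the multiplier can never go below 1")
    return multiplier * base_fee_bec(tx_size_bytes, exec_units)

print(total_fee_bec(300, 1000))          # the guaranteed base fee
print(total_fee_bec(300, 1000, 1.5))     # 1.5x to jump ahead of the queue
```

The only price discovery left to the user is the choice of the multiplier, and only when demand surges.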
Stablefees and Babel fees
The Stablefees mechanism can be considered a natural extension of Babel fees: the decentralized reserve performs spot conversion of BECs into ada. Both mechanisms complement (and are compatible with) each other. Babel fees can be deployed together with Stablefees with just one change: BECs, instead of ada, are used to cover Babel fee liabilities. This also means that fees will always be payable in ada (via a Babel fee liability convertible into ada on the spot). Hence, the whole mechanism is backwards compatible: it won’t affect occasional users who just hold ada and do not wish to obtain BECs.
A final point about diversity. While the above narrative identifies a unique and global BEC, the same mechanism can be used to issue regional BECs pegged to different baskets of commodities, which could possibly be weighted differently. Such “regional” BECs will be able to increase system inclusivity, while enabling SPOs to have more fine-grained policies in terms of transaction inclusion.
Stablefees 'lite'
The above mechanism requires a decentralized reserve contract and the issuance of BECs and DECs by the contract to buyers. A “lite” version avoids the reserve contract and directly adjusts the fee formula by pegging it to the agreed basket of commodities through the price oracle. The resulting system denominates transaction fees nominally in BECs and immediately converts them into ada; the payable amount fluctuates, depending on the value of BEC. The mechanism is otherwise identical, also facilitating unidirectional price discovery through the multiplier. The only disadvantage is that a prospective transaction issuer has no access to a native token that enables transaction processing predictably; transaction issuers must pay fees in ada. Still, the fees will continuously adjust and remain stable with respect to the basket via the pegging mechanism. As a result, a transaction issuer will be able to organize their off-chain asset portfolio to meet their transaction needs effectively.
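For illustration, and under the same assumptions as the earlier fee sketch, the ‘lite’ variant amounts to one extra step: quoting the fee in BECs but collecting it in ada at the oracle rate.

```python
# Illustrative sketch of the 'lite' variant (assumed names): the fee is
# quoted in BEC but collected in ada at the oracle-reported rate, so the ada
# amount fluctuates while the basket-denominated cost stays stable.

def payable_fee_ada(fee_bec: float, basket_price_ada: float,
                    multiplier: float = 1.0) -> float:
    return max(multiplier, 1.0) * fee_bec * basket_price_ada   # multiplier >= 1

fee = 0.075                                          # base fee, quoted in BEC
print(payable_fee_ada(fee, basket_price_ada=2.0))    # ada cheap vs. the basket
print(payable_fee_ada(fee, basket_price_ada=1.6))    # ada has appreciated
```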
The road ahead
Our team is currently researching the granular details of the Stablefees mechanism. Once this research is complete, Stablefees can be integrated into Cardano to offer fair and predictable transaction pricing. Moreover, the price oracle and the global BEC (and regional variants, if included) will undoubtedly find uses beyond paying transaction fees, expanding the capabilities of decentralized applications in the Cardano ecosystem.
Babel fees - denominating transaction costs in native tokens
Introducing a novel mechanism that allows the payment of transaction fees in user-defined tokens on Cardano
25 February 2021 8 mins read
In Douglas Adams' classic The Hitchhiker's Guide to the Galaxy, a Babel fish is a creature that allows you to hear any language translated into your own. This fantasy of universal translation ensures meaningful interaction despite the myriad different languages in the galaxy.
In the cryptocurrency space, smart contract platforms enable the development of a myriad of custom tokens. Is it possible to interact with the platform using your preferred token? If only there were a “Babel fees” mechanism to translate the token you use into the one that the platform requires for posting a transaction.
Common wisdom in blockchain systems suggests that posting a valid transaction must incur a cost to the sender. The argument is that, without such a constraint, there is nothing to stop anyone from overloading the system with trivial transactions, saturating its capacity and rendering it unusable. Given the above tenet, a frequently made corollary is that in any blockchain system that supports user-defined tokens, paying transaction fees in such tokens should be prohibited. Instead, transactions should carry a fee in the native token of the platform, which all participants accept as valuable. Arguably, such a restriction is undesirable. But how is it possible to circumvent the ensuing, and seemingly inevitable, vulnerability?
The art of the possible
Cryptography and game theory have been known to make possible what seemed impossible. Celebrated examples include key exchange over a public channel (Merkle’s puzzles) and auctions where being truthful is the rational thing to do (Vickrey auctions). And so it also turns out in this case.
First, let us recall how native assets work in Cardano: Tokens can be created according to a minting policy and they are treated natively in the ledger along with ada. Cardano's ledger adopts the Extended UTXO (EUTXO) model, and issuing a valid transaction requires consuming one or more UTXOs. A UTXO in Cardano may carry not just ada but in fact a token bundle that can contain multiple different tokens, both fungible and non-fungible. In this way it is possible to write transactions that transfer multiple different tokens with a single UTXO.
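Conceptually, a token bundle is a map from asset identifiers to quantities, carried by a single output. The sketch below is a simplified illustration and does not reproduce the actual ledger types.

```python
# Illustrative sketch of a token bundle: one UTXO value mapping asset
# identifiers (a minting policy ID plus an asset name, with ada treated as a
# special case) to integer quantities. Types are simplified assumptions.

from typing import Dict, Tuple

AssetId = Tuple[str, str]        # (minting policy ID, asset name)
ADA: AssetId = ("", "ada")       # ada needs no minting policy

TokenBundle = Dict[AssetId, int]

def add_bundles(a: TokenBundle, b: TokenBundle) -> TokenBundle:
    """Pointwise addition, as used when summing the values moved by a tx."""
    out = dict(a)
    for asset, qty in b.items():
        out[asset] = out.get(asset, 0) + qty
    return out

# A single UTXO carrying ada, a fungible token, and a non-fungible token.
utxo_value: TokenBundle = {ADA: 10_000_000,               # 10 ada, in lovelace
                           ("policy1", "tokenX"): 250,
                           ("policy2", "concertTicket42"): 1}
```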
Transaction fees in the ledger are denominated in ada according to a function fixed as a ledger parameter. A powerful feature of Cardano's EUTXO model is that the fees required for a valid transaction can be predicted precisely prior to posting it. This is a unique feature that is not enjoyed by other ledger arrangements (such as the account-based model used in Ethereum). Indeed, in this latter case the fees needed for a transaction may change during the time it takes for the transaction to settle, since other transactions may affect the ledger's state in between and influence the required cost for processing the transaction.
A thought experiment
Let’s consider the following thought experiment to help us move closer towards our objective of Babel fees. Imagine that it is possible to issue a transaction that declares a liability denominated in ada, equal to the amount of fees that the transaction issuer is supposed to pay. Such a transaction would not be admissible to the ledger. However, it can be perceived as an open offer that asks for the liability to be covered. Why would anyone respond to such an offer? To entice a response, using the token bundle concept already present in Cardano, the transaction can offer some amount of token(s) to whoever covers the liability. This amounts to a spot trade between ada and the offered token(s) at a certain exchange rate. Consider now a block producer that sees such a transaction. The block producer can create a matching transaction that absorbs the liability, covering it with ada, and claims the tokens on offer.
By suitably extending the ledger rules, the transaction with the liability, together with its matching transaction, becomes admissible to the ledger as a group. Because the liability is absorbed, the pair of transactions is properly priced in ada as a whole and hence does not break the ledger’s bookkeeping rules in terms of ada fees. As a result, the transaction with the liability settles, and we have achieved our objective. Users can submit transactions priced in any token(s) they possess and, provided a block producer is willing to take them up on the spot trade, have them settle in the ledger as regular transactions!
A concrete example
The mechanism is, of course, conditioned on the presence of liquidity providers that possess ada and are willing to issue matching transactions. In fact, the mechanism creates a market for such liquidity providers. For instance, a stake pool operator (SPO) can publish exchange rates for specific tokens they consider acceptable. An SPO can declare, say, that they will accept tokenX at an exchange rate of 3:1 over ada. It follows that if a transaction costs, say, ₳0.16, the transaction can declare a liability of ₳0.16 and offer 0.48 of tokenX. In the native asset model of Cardano this can be implemented as a single UTXO carrying a token bundle with the following specification: (Ada → -0.16, tokenX → 0.48). Note the negative sign signifying the liability.
Suppose now that the SPO is about to produce a block. She recovers the liability transaction from the mempool and issues a matching transaction consuming the UTXO with the liability. The matching transaction transfers 0.48 of tokenX to a new output owned by the SPO. The resulting block contains the two transactions in sequence. The matching transaction provides the missing ₳0.16 in addition to the fees needed for itself. In fact, multiple transactions can be batched together and have their fees covered by a single matching transaction.
Figure. Alice sends a quantity of 9 tokens of type X to Bob with the assistance of Stacy, an SPO, who covers Alice's transaction liability and receives tokens of type X in exchange. The implied exchange rate between X and Ada is 3:1.
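The sketch below replays the numbers from this example under a deliberately simplified model (plain dictionaries standing in for token bundles; this is not the actual ledger rule): a liability transaction is inadmissible on its own, but becomes admissible once grouped with a matching transaction that covers the negative ada quantity.

```python
# Illustrative sketch of the Babel-fees example above (not the ledger rule):
# Alice's transaction declares a 0.16 ada liability and offers 0.48 tokenX;
# Stacy's matching transaction covers the ada and claims the tokens.

from typing import Dict, List

Bundle = Dict[str, float]   # asset name -> net quantity (negative = liability)

def group_value(txs: List[Bundle]) -> Bundle:
    total: Bundle = {}
    for tx in txs:
        for asset, qty in tx.items():
            total[asset] = total.get(asset, 0.0) + qty
    return total

def admissible_as_group(txs: List[Bundle]) -> bool:
    # A group settles only if no asset is left with a negative net quantity,
    # i.e., every declared liability has been absorbed by someone.
    return all(qty >= 0 for qty in group_value(txs).values())

alice_tx = {"ada": -0.16, "tokenX": 0.48}    # liability plus spot-trade offer
stacy_tx = {"ada": 0.16, "tokenX": -0.48}    # covers the ada, claims tokenX

print(admissible_as_group([alice_tx]))             # False: liability uncovered
print(admissible_as_group([alice_tx, stacy_tx]))   # True: settles as a group
```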
New measures of value
The above process is entirely opt-in for SPOs. Each one can determine their own policy and exchange rate, and decide to change the exchange rate for the various tokens they accept on the spot. Moreover, there is no need for agreement between SPOs about the value of a specific token. In fact, different SPOs may offer different exchange rates for the same token, and a user issuing a liability transaction can offer an amount of tokens corresponding to the minimum, average, or even maximum of the posted exchange rates in the network. In this way, a natural trade-off arises between the settlement time of liability transactions and the market value of the tokens they offer.
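As a small illustration of that trade-off, the sketch below (with assumed names and rates) computes how much of a token to offer for a given ada liability, depending on whether the issuer targets the lowest or the highest posted rate.

```python
# Illustrative sketch (assumed names and rates): choosing how much tokenX to
# offer for an ada liability, given the rates posted by different SPOs.
# Offering at the highest posted rate is acceptable to every SPO and should
# settle fastest; offering at the lowest rate is cheaper in tokens but only
# a subset of SPOs will take the trade.

posted_rates = {"SPO-A": 3.0, "SPO-B": 3.5, "SPO-C": 4.0}   # tokenX per ada

def offer_for(liability_ada: float, rate: float) -> float:
    return liability_ada * rate

liability = 0.16
print(offer_for(liability, min(posted_rates.values())))   # 0.48: slow lane
print(offer_for(liability, max(posted_rates.values())))   # 0.64: fast lane
```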
This illustrates how native assets, the EUTXO model, and the simple but powerful tweak of introducing liabilities in the form of negative values in token bundles can accommodate Babel fees, empowering users to price transactions in any token supported natively by the system. It also shows the unique advantage of being an SPO in such a system. It should be noted that SPOs need not be the only entities in the network offering to cover liabilities. In fact, an SPO can readily partner, if they wish, with an external liquidity provider who issues the matching transactions. In addition, third-party providers can also act on the network independently and issue matching transactions. Nevertheless, the benefit will remain with the block producers; SPOs can always front-run matching transactions and substitute their own, if they so wish. This is a case where front-running is a feature: it makes it feasible for SPOs to be paid in the tokens they prefer for their transaction-processing services.
The mechanism of negative quantities in token bundles can be implemented in the basic ledger rules of Cardano at some point following the introduction of native assets with the Mary Hard Fork. Beyond Babel fees, the mechanism allows a variety of other interesting applications, such as atomic swaps for spot trades, that we will cover in a future blog post. It is yet another illustration of the power of Cardano's approach and its ability to support a diverse and entrepreneurial community of users and stake pool operators.
I am grateful to Manuel Chakravarty, Michael Peyton Jones, Nikos Karagiannidis, Chad Nester and Polina Vinogradova for helpful discussions, suggestions and comments related to the concept of Babel fees and its implementation in the Cardano ledger. We also have a video whiteboard walkthrough covering this topic.
Blockchain reward sharing - a comparative systematization from first principles
Navigating the diverse landscape of reward-sharing schemes and the choices we have made in the design of Cardano’s reward-sharing scheme
30 November 2020 10 mins read
In the previous article, we identified the objectives of the reward scheme in Cardano, and we gave general guidelines regarding engaging with the system.
Taking a more high-level view, we will examine, from first principles, the general problem of reward sharing in blockchain systems. To recall, the two overarching objectives of any resource-based consensus system are to incentivize the following.
High engagement. Resource-based consensus protocols are more secure the more resources are engaged with protocol maintenance. The problem, of course, is that the underlying resources are useful for a wide variety of other things too (e.g., electricity and computational power in the case of proof of work, or stake for engaging in decentralized apps in the case of proof of stake), so resource holders should be incentivized to commit resources for protocol maintenance.
Low leverage. Leverage relates to decentralization. Take a group of 10 people; if there is a leader and the group follows the leader’s wishes all the time, the leader’s leverage is 10 while everyone else’s is zero. If, on the other hand, everyone’s opinion matters the same, everyone’s leverage is 1. These are two extremes, but it should be fairly obvious which types of leverage align better with decentralization. From an economic viewpoint, however, a “benevolent dictatorship” is always more efficient; as a result, decentralization will come at a cost (exactly as democracy does), and hence it also has to be properly incentivized.
Given the above objectives, let us now examine some approaches that have been considered in consensus systems and systematize them in terms of how they address the above objectives. An important first categorization we will introduce is between unimodal and multimodal reward schemes.
Unimodal
In a unimodal scheme, there is only one way to engage in the consensus protocol with your resources. We examine two sub-categories of unimodal schemes.
- Linear Unimodal
This is the simplest approach and is followed by many systems, notably Bitcoin, the original proof-of-work-based Ethereum, and Algorand. The idea is simple: if an entity commands x% of the resources, then the system will attempt to provide it with x% of the rewards, at least in expectation. This might seem fair, until one observes the serious downsides that come with it.
First, consider someone who has x% of the resources and for whom x% of the rewards in expectation falls below their individual cost of operating a node. They will either not engage (lowering the engagement rate of the system) or, more likely, actively seek others with whom to combine resources and create a node. Even two resource holders with x% of resources each, and a viable individual cost c when running as separate nodes, will fare better by combining resources into a single node of 2x% resources, because the resulting cost will typically be less than 2c. This can result in a strong trend to centralize and lead to high leverage, since the combined pool of resources will (typically) be run by one entity.
In practice, a single dictatorially operated node is unlikely to emerge. This is due to various reasons such as friction in coordination between parties, fear of the potential drop in the exchange rate of the system’s underlying token if the centralization trend becomes noticeable, as well as the occasional use of complex protocols to jointly run pools. Even so, it is clear that unimodal linear rewards can hurt decentralization.
One does not need to go much further than looking at Bitcoin and its current, fairly centralized, mining pool lineup. It is worth noting that if stake (rather than hashing power) is used as a resource, the centralization pressure will be less – since the expenditure to operate a node is smaller. But the same problems apply in principle.
An additional disadvantage of the above setting is that the ensuing “off-chain” resource pooling will be completely opaque from the ledger’s perspective, and hence more difficult for the community to monitor and react to. In summary, the linear unimodal approach has the advantage of being simple, but it is precarious, both in terms of increasing engagement and in keeping leverage low.
- Quantized Linear Unimodal
This approach is the same as the linear rewards approach, but it quantizes the underlying resource. That is, if your resources are below a certain threshold, you may be completely unable to participate; you can only participate in fixed quanta. Notably, this approach is taken in ETH2.0, where 32 ether must be pledged in order to acquire a validator identity. It should be clear that this quantized approach shares the same problems with the linear unimodal approach in terms of participation and leverage. Despite this, it has been considered for two primary reasons. First, the quantized approach makes it possible to retrofit traditional BFT-style protocol design elements (e.g., those that require counting identities) in a resource-based consensus setting. The resulting system is less elegant than true resource-based consensus, but this is unavoidable since traditional BFT-style protocols do not work very well when more than a few hundred nodes are involved. The second reason, specific to the proof-of-stake setting, is the desire to impose penalties on participants as a means of ensuring compliance with the protocol. Imposing quantized collateral pledges makes penalties for protocol infractions more substantial and painful.
Multimodal
We next turn to multimodal schemes. This broad category includes Cosmos, Tezos, Polkadot, and EOS. It also includes Cardano. In a multimodal scheme, a resource holder may take different roles in the protocol; being a fully active node in the consensus protocol is just one of the options. The advantage of a multimodal scheme is that offering multiple ways to engage (with correspondingly different rates of return) within the protocol itself can accommodate higher engagement, as well as limit off-chain resource pooling. For instance, if the potential rewards received by individuals engaging with all their resources fall below the operational cost of running a node, they can still choose to engage via a different mode in the protocol. In this way, the tendency to combine resources off-chain is eased and the system, if designed properly, may translate this higher engagement into increased resilience.
We will distinguish between a number of different multimodal schemes.
- Representative bimodal without leverage control. The representative approach is inspired by representative democracy: the system is run by a number of elected operators. The approach is bimodal as it enables parties to (1) advertise themselves as operators in the ledger and/or (2) “vote” for operators with their resources. The set of representative operators has a fixed size and is updated on a rolling basis typically with fixed terms using some election function that selects representatives based on the votes they received. Rewards are distributed evenly between representatives, possibly taking into account performance data and adjusting accordingly. Allowing rewards to flow to voters using a smart contract can incentivize higher engagement in voting since resource holders get paid for voting for good representatives (note that this is not necessarily followed by all schemes in this category). The disadvantage of this approach is the lack of leverage control, beyond, possibly, the existence of a very large upper bound, which suggests that the system may end up with a set of very highly leveraged operators. This is the approach that is broadly followed by Cosmos, EOS, and Polkadot.
An alternative to the representative approach is the delegative approach. In general, this approach is closer to direct democracy, as it allows resource holders the option to engage directly with the protocol with the resources they have. However, they are also free to delegate their resources to others, as in liquid (or delegative) democracy, from which the term is derived. This results in a community-selected operator configuration that does not have a predetermined number of representatives. As in the representative approach, user engagement is bimodal. Resource holders can advertise themselves as operators and/or delegate their resources to existing operators. The rewards provided are proportional to the amount of delegated resources, and delegators can be paid via an on-chain smart contract, perhaps at various different rates. Within the delegative approach we will further distinguish two subcategories.
- Delegative bimodal with pledge-based capped rewards. What typifies this particular delegative approach is that a resource pool’s rewards have a bound determined by the amount of pledge committed to the pool by its operator. In this way, the total leverage of an operator can be controlled and fixed to a constant. Unfortunately, this leverage-control feature has the negative side effect of implicitly imposing the same bound on all resource holders, small and large. On the one hand, in a population of small resource holders, engagement will be constrained by the little pledge that operators are able to commit. On the other hand, a few large whale resource holders may end up influencing the consensus protocol in a very significant manner, possibly even beyond its security threshold. In terms of leverage control, it should be clear that one size does not fit all! Among existing systems, this is the approach that is (in essence) followed by Tezos.
It is worth noting that all the specific approaches we have seen so far come with downsides, either in terms of maximizing engagement, controlling leverage, or both. With this in mind, let us now fit the approach of the reward-sharing scheme used in Cardano into our systematization.
- Delegative bimodal with capped rewards and incentivized pledging. In this delegative system (introduced in our reward-sharing scheme paper), the rewards provided to each pool follow a piecewise function of the pool’s size. The function is initially monotonically increasing and then becomes constant at a certain “cap” level, which is a configurable system parameter (in Cardano this is determined by the parameter k). This cap limits the incentive to grow individual resource pools. At the same time, pledging resources to a pool is incentivized, with higher-pledged pools receiving more rewards. As a result, lowering one’s leverage becomes incentive-driven: resource pools have bounded size and operators have an incentive to pledge all the resources they can afford into the smallest number of pools possible. In particular, whale resource holders are incentivized to keep their leverage low. The benefit of the approach is that high engagement is reinforced, while leverage is kept in check by incentivizing the community to (i) pledge as much as possible, and (ii) use all the remaining unpledged resources as part of a crowdsourced filtering mechanism that translates stake into voting power and supports exactly those operators that contribute the most materially to the system’s goals.
The above systematization puts into perspective the choices we have made in the design of the reward-sharing scheme used in Cardano vis-a-vis other systems. In summary, the Cardano reward system materially promotes, through incentives and community stake-based voting, the best possible outcome: low leverage and high engagement. And this is accomplished while still allowing for a very high degree of heterogeneity in the input behavior of stakeholders.
As a final point, it is important to stress that while considerable progress has been made since the introduction of the Bitcoin blockchain, research in reward sharing for collaborative projects is still an extremely active and growing domain. Our team continuously evaluates various aspects of reward-sharing schemes and actively explores the whole design space in a first-principles manner. In this way, we can ensure that any research advances will be disseminated widely for the benefit of the whole community.
I am grateful to Christian Badertscher, Sandro Coretti-Drayton, Matthias Fitzi, and Peter Gaži, for their help in the review of other systems and their placement in the systematization of this article.
The general perspective on staking in Cardano
Advice for Stakeholders - Delegators and Stake Pool Operators.
13 November 2020 13 mins read
Decentralization remains arguably the most important and fundamental goal for the Cardano project. Protocols and parameters provide the foundations for any blockchain. Last week, we outlined some of the planned changes around Cardano parameters, how these will impact the staking ecosystem, and how they will accelerate our decentralization mission.
Yet the community itself – how it sees itself, how it behaves, and how it sets common standards – is a key factor in the pace of this success. Cardano has been very carefully engineered to provide “by design” all the necessary properties for a blockchain system to operate successfully. However, Cardano is also a social construct, and as such, observance, interpretation, and social norms play a crucial role in shaping its resilience and longevity.
So in anticipation of the k-parameter adjustment on December 6th, I would like to give a broader perspective on staking, highlighting some of the innovative features of the rewards sharing scheme used in Cardano.
Principles & practical intent
As well as outlining some of the key principles, this piece has a clear practical intent: to provide guidance and some recommendations to stakeholders so that they engage meaningfully with the mechanism and support the project’s longer-term strategic goals through their actions.
Consensus based on a resource that is dispersed somehow across a population of users – as opposed to identity-based participation – has been the hallmark of the blockchain space since the launch of the Bitcoin blockchain. In this domain, proof-of-stake systems are distinguished in the sense that they use a virtual resource, stake, which is recorded in the blockchain itself.
Pooling resources for participation is something that is inevitable; some level of pooling is typically beneficial in the economic sense and hence resource holders will find a way to make it happen. Given this inevitability, the question arises: how does a system prevent a dictatorship or an oligarchy from emerging?
The objectives of the reward sharing scheme
Contrary to other blockchain systems, Cardano uses a reward sharing scheme that (1) facilitates staking with minimum friction and (2) incentivizes pooling resources in a way that system-wide decentralization emerges naturally from the rational engagement of the resource holders.
The mechanism has the following two broad objectives:
- Engage all stakeholders - This is important since the more stakeholders are engaged in the system, the more secure the distributed ledger will be. This also means that the system should have no barriers to participation, nor should it impose friction by requiring off-chain coordination between stakeholders to engage with the mechanism.
- Keep the leverage of individual stakeholders low - Pooling resources leads to increased leverage for some stakeholders. Pool operators exert an influence in the system proportional to the resources controlled by their pool, not to their own resources. Without pooling, all resource holders have a leverage of exactly 1; contrast this with, e.g., a pool operator owning, say, 100K ada who controls a pool with a total delegated stake of 200M ada; that operator has a leverage of 2,000. The higher the leverage in the system, the worse its security (to see this, consider that with leverage above 50, launching a 51% attack requires a mere 1% of the total resources; the short sketch below makes this arithmetic explicit).
It should also be stressed that a disproportionately large pool size is not the only reason for increased leverage; stakeholders creating multiple pools, either openly or covertly (what is known as a Sybil attack) can also lead to increased leverage. The lower the leverage of a blockchain system, the higher its degree of decentralization.
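Here is the leverage arithmetic from the examples above, written out as a short sketch.

```python
# Illustrative sketch of the leverage arithmetic used above.

def leverage(controlled_stake: float, own_stake: float) -> float:
    """Stake influenced in the consensus protocol / stake actually owned."""
    return controlled_stake / own_stake

# The operator from the example: owns 100K ada, controls a 200M ada pool.
print(leverage(200_000_000, 100_000))    # -> 2000.0

# To influence 51% of the engaged stake while owning only 1% of it, a
# leverage of just 51 suffices.
print(leverage(0.51, 0.01))              # -> 51.0
```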
Putting this into practice
So how does the reward sharing scheme used in Cardano meet the above objectives? Staking via our scheme facilitates two different paths: pledging and delegating. Pledging applies to stake pool operators; pledged stake is committed to a stake pool and is supposed to stay put for as long as the pool is operating. Think of pledge as a ‘commitment’ to the network: ‘locking up’ a certain amount of stake in order to help safeguard and secure the protocol. Delegating, on the other hand, is for those who do not wish to be involved as operators. Instead, they are invited to assess the offerings that stake pool operators provide and delegate their stake to one or more pools that, in their opinion, best serve their interests and the interest of the community at large. Given that delegation does not require locking up funds, there is no reason to abstain from staking in Cardano; all stakeholders can, and are encouraged to, engage in staking.
Central to the mechanism’s behavior are two parameters: k and a0. The k-parameter caps the rewards of pools to 1/k of the total available. The a0 parameter creates a benefit for pledging more stake into a single pool; adding X amount of pledge to a pool increases its rewards additively by up to a0*X. This is not to the detriment of other pools; any rewards left unclaimed due to insufficient pledging will be returned to Cardano’s reserves and allocated in the future.
Beyond deciding on an amount to pledge, creating a stake pool requires that operators declare their profit margin and operational costs. When the pool rewards are allocated at the end of each epoch, the operational costs are withheld first, ensuring that stake pools remain viable. Subsequently, the operator’s profit margin is withheld, and all pool delegators are rewarded in ada proportionally to their stake.
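For concreteness, here is a minimal sketch of the per-pool reward calculation and its split, based on the formula in the reward-sharing scheme design. The example numbers are arbitrary, the pool performance factor is omitted, and treasury/reserve flows are ignored, so treat it as an illustration rather than the exact ledger computation.

```python
# Illustrative sketch of the pool-level reward calculation and split,
# following the reward-sharing scheme design (simplified: no performance
# factor, no treasury/reserve handling; example numbers are arbitrary).

def pool_reward(total_rewards: float, k: int, a0: float,
                pool_stake: float, pledge: float, total_stake: float) -> float:
    z0 = 1.0 / k                                   # saturation point
    sigma = min(pool_stake / total_stake, z0)      # capped pool stake share
    s = min(pledge / total_stake, z0)              # capped pledge share
    return (total_rewards / (1 + a0)) * (
        sigma + s * a0 * (sigma - s * (z0 - sigma) / z0) / z0
    )

def split_reward(reward: float, cost: float, margin: float,
                 delegated: float, pledge: float, pool_stake: float):
    """Costs are withheld first, then the operator margin; the remainder is
    shared in proportion to stake (the pledge counts as the operator's)."""
    after_cost = max(reward - cost, 0.0)
    operator_margin = margin * after_cost
    member_pot = after_cost - operator_margin
    operator = cost + operator_margin + member_pot * (pledge / pool_stake)
    delegators = member_pot * (delegated / pool_stake)
    return operator, delegators

R, k, a0, total = 30_000_000, 500, 0.3, 23_000_000_000
reward = pool_reward(R, k, a0, pool_stake=46_000_000, pledge=1_000_000,
                     total_stake=total)
print(split_reward(reward, cost=340, margin=0.03,
                   delegated=45_000_000, pledge=1_000_000,
                   pool_stake=46_000_000))
```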
Paired with the assessment of stake pools performed by the delegators, this mechanism provides the right set of constraints for the system to converge to a configuration of k equal-size pools with the maximum amount of pledge possible. The equilibrium point has the property that delegator rewards are equalized (so it doesn’t matter which pool they delegate to!), while stake pool operators are rewarded appropriately for their performance, their cost efficiency, and their general contributions to the ecosystem.
For the above to happen, it is necessary to engage with the mechanism in a meaningful and rational manner. To assist stakeholders in understanding the mechanism, here are some points of advice.
Guidance for delegators
- Know your pool(s) - Investigate the pools’ available data and information. What is the operators’ web-presence? What kind of information do they provide about their operation? Are they descriptive about their costs? Are the costs reasonably based on geographic location and other aspects of their operation? Do they update their costs regularly to account for the fluctuation of ada? Do they include the costs for their personal time? Remember that maintaining a high-performance pool requires commitment and effort, so those committed operators deserve compensation.
- Think bigger - Consider your choice holistically, not based on just a single dimension. Consider the longer term value your choices bring to the network. Think of your delegation as a ‘vote of confidence’, or a way to show your support to a pool's mission or goals. Opt for professionalism and demonstrated long-term commitment to the system’s goals. Recognize community members who have been helping to lay down the foundations for the ecosystem, either with their community presence or by helping to build things. The long-term wellbeing of the ecosystem is crucially affected by your delegation choice. A more decentralized network is a more resilient and long-lived network.
- Be wary of ‘pool splitters’ - Pool operators that run multiple pools with small pledge hurt delegators and smaller operators. They hurt their delegators because they could have provided a higher amount of rewards by concentrating their pledge into a single pool; by not doing so, they leave rewards unclaimed. They hurt smaller and new operators because they force them to remain without delegators, making their operation unviable; without delegators a pool may be forced to close. So avoid pool operators that run multiple pools with pledge below the saturation level. Note that there are legitimate reasons for large stakeholders to accept delegators and run a public pool (e.g., they are delegating some of their stake to other pools to support the community); consult any public statements such operators make about their delegation strategy and their leverage. It is fine to delegate to them, provided they keep their leverage low and support the community.
- Be wary of highly leveraged operators - Be mindful of stake pool operators’ leverage (see below for more details on how to calculate leverage). When comparing pools of the same size, a higher pledge means lower leverage; high leverage is indicative of a stake pool operator with very little “skin in the game.” Stake pool operators may prove to have skin in the game in ways other than pledging stake, of course; e.g., they can be very professional and contribute to the community in different ways. You should be the judge of this: high leverage in itself is not a reason to avoid delegating to a particular pool, but it is a strong indication that you should proceed with caution and carefully evaluate the people behind the operation.
- Shop around - Do take into account the information provided by your wallet software (or by recognized community resources such as adapools or pooltool) in terms of a pool’s ranking and its performance factor. Remember though, while the ranking is important, it should not be the sole factor behind your delegation choice. Think holistically: you may want to consider pools fulfilling a mission you agree with, or trying to add value to the wider community through podcasts or social activity, even if they do not offer the highest possible returns.
- Be involved - A pool with no performance data on display may have attractive characteristics; it could provide better rewards in the best-case scenario, but it is also a high-risk delegation choice since its performance may turn out to be suboptimal. Delegate according to your ‘risk profile’ and the frequency with which you are willing to re-delegate your stake. Do check the pool’s performance and updates regularly to ensure that your choice and assessment remain the best possible.
Guidance for pool operators
- Be transparent - Choose your pool’s operational cost as accurately as possible. Do include the personal effort (priced at a reasonable rate) that you and your partners put into the pool operation! You are a pillar of Cardano and so you have every right to be compensated by the community. Be upfront about your costs and include them in your pool’s website. Educate your prospective delegates about where the pool costs are going. Always remember that it is important to charge for the time you invest in maintaining your pool. In the short term, you may be prepared to invest your time and energy ‘for free’ (or after hosting costs, at an effective loss) but remember that this is not a sustainable model for the network over the medium and longer term.
- Don’t split your pool - With the coming changes in k (commencing with the move to k=500 on December 6th), we are already seeing pool operators splitting their pools in order to retain delegators without becoming saturated. Do not engage in pool splitting unless you can saturate a pool completely with your own stake. If you are a whale (relative stake > 1/k), you can create multiple pools, but you should keep your leverage as close to 1 as possible, or below. Pool splitting that increases your leverage hurts your delegators’ rewards and, more importantly, it hurts the decentralization of the Cardano ecosystem, which is detrimental to everyone. If you run and control multiple pools under different tickers, make a public statement about it and explain the steps you take to control your leverage. Creating multiple pools while trying to conceal the fact that you control them is akin to a Sybil attack against Cardano, and this behavior should be condemned by the community. You can calculate and publicize your leverage using the following formula:
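leverage = (total stake of all the pools you operate, including your own pledge) ÷ (the stake that you own)

Following the earlier example, an operator who owns 100K ada and controls pools holding a combined stake of 200M ada has a leverage of 2,000, whereas an operator who pledges everything they own to a single pool and attracts no delegation has a leverage of 1.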
Exchanges are a special kind of whale stakeholder, since they collectively manage other people’s stake. One strategy for an exchange is to avoid leverage altogether and delegate the stake they control to community pools. If an exchange becomes a pool operator, they can maintain their leverage below 1 by using a mixed pledging and delegation strategy.
- Set your profit margin wisely - Select the margin to make your pool competitive. Remember that if everyone delegates their stake and is rational, you only have to beat the (k+1)-th pool in the rankings offered by the Daedalus wallet. If your pool offers other advantages that can attract delegation (e.g., you are contributing to a charitable cause you feel others may wish to support), or you have acquired exceptional equipment that promises notable uptime/performance, make sure you promote this widely. When you offer such benefits, you should consider setting a higher profit margin.
- Keep your pool data updated - Regularly update the cost and margin to accommodate fluctuations in ada price. Give assurances to your delegators and update them about the stake pool operational details. In case of mishaps and downtimes, be upfront and inform your delegators via your website and/or other communication channels you maintain with them.
- Pledge as much as you are able to - Increase the amount of pledge as much as you comfortably can and not more. Beyond using your own stake, you can also partner with other stakeholders to increase the pledge of your pool. A high pledge signals long-term commitment and reduced leverage, and it unlocks additional rewards every epoch as dictated by the a0 term in the rewards sharing scheme calculation. As a result, it does make your pool more desirable to prospective delegators. On the other hand, remember that pledge is not the only factor that makes a pool attractive. Spend time on your web and social media presence and be sure to advertise all the ways that you contribute to the Cardano ecosystem.
If you are a Cardano stakeholder, we hope that you find the above advice informative and helpful in your efforts to engage in staking. As in many other respects, Cardano brings a novel and heavily researched mechanism to its blockchain design. The rewards scheme is mathematically proven to offer an equilibrium that meets the set of objectives set out in the beginning of this document. Ultimately though, the math is not enough; it is only the people that can make it happen.
Cardano’s future is in the hands of the community.
The opinions expressed in the blogpost are for educational purposes only and are not intended to provide any form of financial advice.
The Ouroboros path to decentralization
The protocol that powers Cardano and its design philosophy
23 June 2020 6 mins read
Designing and deploying a distributed ledger is a technically challenging task. What is expected of a ledger is the promise of a consistent view to all participants as well as a guarantee of responsiveness to the continuous flow of events that result from their actions. These two properties, sometimes referred to as persistence and liveness, are the hallmark of distributed ledger systems.
Achieving persistence and liveness in a centralized system is a well-studied and fairly straightforward task; unfortunately, the ledger that emerges is precariously brittle because the server that supports the ledger becomes a single point of failure. As a result, hacking the server can lead to the instant violation of both properties. Even if the server is not hacked, the interests of the server’s operators may not align with the continuous assurance of these properties. For this reason, decentralization has been advanced as an essential remedy.
Informally, decentralization refers to a system architecture that calls for many entities to act individually in such a way that the ledger’s properties emerge from the convergence of their actions. In exchange for this increase in complexity, a well-designed system can continue to function even if some parties deviate from proper operation. Moreover, in the case of more significant deviations, even if some disruption is unavoidable, the system should still be capable of returning to normal operation and contain the damage.
How does one design a robust decentralized system? The world is a complicated place and decentralization is not a characteristic that can be hard-coded or demonstrated via testing – the potential configurations that might arise are infinite. To counter this, one must develop models that systematically encompass all the different threats the system may encounter and demonstrate rigorously that the two basic properties of persistence and liveness are upheld.
The strongest arguments for the reliability of a decentralized system combine formal guarantees against a broad portfolio of different classes of failure and attack models. The first important class is that of powerful Byzantine models. In this setting, it should be guaranteed that even if a subset of participants arbitrarily deviate from the rules, the two fundamental properties are retained. The second important class is models of rationality. Here, participants are assumed to be rational utility maximizers and the objective is to show that the ledger properties arise from their efforts to pursue their self interest.
Ouroboros is a decentralized ledger protocol that is analyzed in the context of both Byzantine and rational behavior. What makes the protocol unique is the combination of the following design elements.
- It uses stake as the fundamental resource to identify the participants’ leverage in the system. No physical resource is wasted in the process of ledger maintenance, which is shown to be robust despite ‘costless simulation’ and ‘nothing at stake’ attacks that were previously thought to be fundamental barriers to stake-based ledgers. This makes Ouroboros distinctly more appealing than proof-of-work protocols, which require prodigious energy expenditure to maintain consensus.
- It is proven to be resilient even if arbitrarily large subsets of participants, in terms of stake, abstain from ledger maintenance. This guarantee of dynamic availability ensures liveness even under arbitrary, and unpredictable, levels of engagement. At the same time, of those participants who are active, barely more than half need to follow the protocol – the rest can arbitrarily deviate; in fact, even temporary spikes above the 50% threshold can be tolerated. Thus Ouroboros is distinctly more resilient and adaptable than classical Byzantine fault tolerance protocols (as well as all their modern adaptations), which have to predict with relative certainty the level of expected participation and may stop operating when the prediction is false.
- The process of joining and participating in the protocol execution is trustless in the sense that it does not require the availability of any special shared resource such as a recent checkpoint or a common clock. Engaging in the protocol requires merely the public genesis block of the chain, and access to the network. This makes Ouroboros free of the trust assumptions common in other consensus protocols whose security collapses when trusted shared resources are subverted or unavailable.
- Ouroboros incorporates a reward-sharing mechanism to incentivize participants to organize themselves in operational nodes, known as stake pools, that can offer a good quality of service independently of how stake is distributed among the user population. In this way, all stakeholders contribute to the system’s operation – ensuring robustness and democratic representation – while the cost of ledger maintenance is efficiently distributed across the user population. At the same time, the mechanism comes with countermeasures that de-incentivize centralization. This makes Ouroboros fundamentally more inclusive and decentralized compared with other protocols that either end up with just a handful of actors responsible for ledger maintenance or provide no incentives to stakeholders to participate and offer a good quality of service.
These design elements of Ouroboros are not supposed to be self-evident appeals to the common sense of the protocol user. Instead, they were delivered with meticulous documentation in papers that have undergone peer review and appeared in top-tier conferences and publications in the area of cybersecurity and cryptography. Indeed, it is fair to say that no other consensus research effort is represented so comprehensively in these circles. Each paper is explicit about the specific type of model that is used to analyze the protocol and the results derived are laid out in concrete terms. The papers are open-access, patent-free, and include all technical details to allow anyone, with the relevant technical expertise, to convince themselves of the veracity of the claims made about performance, security, and functionality.
Building an inclusive, fair and resilient infrastructure for financial and social applications on a global scale is the grand challenge of information technology today. Ouroboros contributes, not just as a protocol with unique characteristics, but also in presenting a design methodology that highlights first principles, careful modeling and rigorous analysis. Its modular and adaptable architecture also lends itself to continuous improvement, adaptation and enrichment with additional elements (such as parallelization to improve scalability or zero-knowledge proofs to improve privacy, to name two examples), which is a befitting characteristic to meet the ever-evolving needs and complexities of the real world.
Further reading
To delve deeper into the Ouroboros protocol, from its inception to recent new features, follow these links:
- Ouroboros (Classic): the first provably secure proof-of-stake blockchain protocol.
- Ouroboros Praos: removes the need for a rigid round structure and improves resilience against ‘adaptive’ attackers.
- Ouroboros Genesis: how to avoid the need for a recent checkpoint and prove the protocol is secure under dynamic availability for trustless joining and participating.
- Ouroboros Chronos: removes the need for a common clock.
- Reward sharing schemes for stake pools.
- Account management and maximizing participation in stake pools.
- Optimizing transaction throughput with proof-of-stake protocols.
- Fast settlement using ledger combiners.
- Ouroboros Crypsinous: a privacy-preserving proof-of-stake protocol.
- Kachina: a unified security model for private smart contracts.
- Hydra: an off-chain scalability architecture for high transaction throughput with low latency, and minimal storage per node.