Simplicity and Michelson

A programming language that is too simple

9 December 2017 Prof Philip Wadler 10 mins read


Simplicity

Only once in my life have I encountered a programming language that was too simple to use. That was Lispkit Lisp, developed by Peter Henderson, Geraint Jones, and Simon Jones, which I saw while serving as a postdoc at Oxford, 1983–87, and which despite its simplicity was used to implement an entire operating system. It is an indictment of the field of programming languages that I have not since encountered another system that I consider too simple. Until today. I can now add a second system to the list of those that are too simple: the appropriately titled Simplicity, developed by Russell O'Connor of Blockstream. It is described by a paper here and a website here.

The core of Simplicity consists of just nine combinators: three for products (pair, take, and drop), three for sums (injl, injr, and case), one for unit (unit), and two for plumbing (iden and comp). It is thoroughly grounded in ideas from the functional programming, programming language, and formal methods communities.
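To give a flavour of how small the core is, here is a rough Haskell sketch of the nine combinators' semantics, in an untyped encoding of values (my own toy rendering, not code from the paper, which uses a typed presentation that rules out the error cases statically):

```haskell
-- Values built from unit, sums, and products only.
data Value = VUnit | VLeft Value | VRight Value | VPair Value Value
  deriving (Eq, Show)

-- Each combinator denotes a function on values.
iden :: Value -> Value
iden = id

comp :: (Value -> Value) -> (Value -> Value) -> Value -> Value
comp s t = t . s

unit :: Value -> Value
unit _ = VUnit

injl, injr :: (Value -> Value) -> Value -> Value
injl t v = VLeft  (t v)
injr t v = VRight (t v)

caseSimp :: (Value -> Value) -> (Value -> Value) -> Value -> Value
caseSimp s _ (VPair (VLeft  a) c) = s (VPair a c)
caseSimp _ t (VPair (VRight b) c) = t (VPair b c)
caseSimp _ _ _                    = error "case: ill-typed input"

pair :: (Value -> Value) -> (Value -> Value) -> Value -> Value
pair s t v = VPair (s v) (t v)

takeSimp, dropSimp :: (Value -> Value) -> Value -> Value
takeSimp t (VPair a _) = t a
takeSimp _ _           = error "take: ill-typed input"
dropSimp t (VPair _ b) = t b
dropSimp _ _           = error "drop: ill-typed input"

-- Cobbling together boolean not (False = VLeft VUnit, True = VRight VUnit):
notSimp :: Value -> Value
notSimp = comp (pair iden unit) (caseSimp (injr unit) (injl unit))
```

Running `notSimp` on the encoding of False yields the encoding of True, and vice versa: everything is plumbing over units, sums, and products.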

When I call Simplicity too simple it is intended as a compliment. It is delightful to see full adders and cryptographic hash functions cobbled together using just products, sums, and units. It is eye-opening to see how far one can get without recursion or iteration, and how this enables simple analyses of the time and space required to execute a program. It is a confirmation to see a system with foundations in category theory and sequent calculus. Now I know what to say when developers respond to my talk "Categories for the Working Hacker" by asking "But how can we use this in practice?"

The system is accompanied by a proof of its correctness in Coq, which sets a high bar for competing systems. O'Connor even claims to have a proof in Coq that the Simplicity implementation of SHA-256 matches the reference specification provided by Andrew Appel's Verified Software Toolchain project (VST), which VST proved corresponds to the OpenSSL implementation of SHA-256 in C.

At IOHK, I have been involved in the design of Plutus Core, our own smart contract scripting language, working with Darryl McAdams, Duncan Coutts, Simon Thompson, Pablo Lamela Seijas, and Grigore Rosu and his semantics team. We have a formal specification which we are preparing for release. O'Connor's work on Simplicity has caused us to rethink our own work: what can we do to make it simpler? Thank you, Russell!

That said, Simplicity is still too simple, and despite its emphasis on rigour there are some gaps in its description.

Jets

A 256-bit full adder is expressed with 27,348 combinators, meaning addition in Simplicity requires several orders of magnitude more work than the four 64-bit addition instructions one would normally use. Simplicity proposes a solution: any commonly used sequence of instructions may be abbreviated as a "jet", and implemented in any equivalent manner. Hence, the 27,348 combinators for the 256-bit full adder can be ignored, and replaced by the equivalent four 64-bit additions.
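For a sense of scale, the native operation such a jet would dispatch to is just four 64-bit additions with carry propagation; a sketch in Haskell (the least-significant-limb-first layout is my assumption for illustration):

```haskell
import Data.Word (Word64)

-- One 64-bit limb addition with carry in and carry out.
addWithCarry :: Word64 -> Word64 -> Bool -> (Word64, Bool)
addWithCarry a b cin =
  let s    = a + b + (if cin then 1 else 0)  -- wraps modulo 2^64
      cout = s < a || (cin && s == a)        -- wrapped iff carry out
  in (s, cout)

-- A 256-bit add over four limbs, least significant first.
add256 :: [Word64] -> [Word64] -> (Bool, [Word64])
add256 xs ys = go xs ys False
  where
    go []     []     c = (c, [])
    go (a:as) (b:bs) c =
      let (s, c')      = addWithCarry a b c
          (cout, rest) = go as bs c'
      in (cout, s : rest)
    go _ _ _ = error "add256: expects four limbs on each side"
```

For example, `add256 [maxBound, 0, 0, 0] [1, 0, 0, 0]` carries out of the low limb and gives `(False, [0, 1, 0, 0])` – four machine additions in place of 27,348 combinators.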

All well and good, but this is where it gets too simple. No one can afford to be inefficient by several orders of magnitude. Hence, any programmer will need to know what jets exist and to exploit them whenever possible. In this sense, Simplicity is misleadingly simple. It would be clearer and cleaner to define each jet as an opcode. Each opcode could still be specified by its equivalent in the other combinators of Simplicity, but programs would be more compact, faster to execute, and—most important—easier to read, understand, and analyse accurately. If one ignores jets, the analyses of time and space required to execute a program, given toward the end of the paper, will be useless—off by orders of magnitude. The list of defined jets is given nowhere in the paper. Nor could I spot additional information on Simplicity linked to from its web page or findable by a web search. More needs to be done before Simplicity can be used in practice.

Gaps

It's not just the definition of jets which is absent from the paper, and cannot be found elsewhere on the web. Lots more remains to be supplied.

  • Sections 2.4, 2.5, 3.2 claim proofs in Coq, but apart from defining the semantics of the nine combinators in Appendix A, no Coq code is available for scrutiny.
  • Section 2.5 claims a representation of Simplicity terms as a dag, but it is not specified. Lacking this, there is no standard way to exchange code written in Simplicity.
  • Section 4.4 defines an extended semantics for Simplicity that can read the signature of the current transaction, support Merklised abstract syntax trees, and fail when a transaction does not validate. It also lifts meanings of core (unextended) Simplicity programs to the extended semantics. However, it says nothing about how the seven combinators that combine smaller Simplicity programs into bigger ones act in the extended semantics! It's not hard to guess the intended definitions, but worrying that they were omitted from a paper that aims for rigour.
  • Section 3 provides a Bit Machine to model the space and time required to execute Simplicity. The model is of limited use, since it ignores the several orders of magnitude improvement offered by jets. Further, the Bit Machine has ten instructions, enumerated on pages 10–12, but the list omits the vital "case" instruction which appears in Figure 2. Again, it's not hard to guess, but worrying it was omitted.

Michelson

A second language for scripting blockchains is Michelson. It is described by a paper here and a website here. (Oddly, the website fails to link to the paper.)

I will offer just one word on Michelson. The word is: "Why?" Michelson takes many ideas from the functional programming community, including higher-order functions, data structures such as lists and maps, and static type safety.

Currently, it is also much more thoroughly described and documented than Simplicity. All of this is to be commended.

But Michelson is an inexplicably low-level language, requiring the programmer to explicitly manipulate a stack. Perhaps this was done so that there is an obvious machine model, but Simplicity offers a far superior solution: a high-level model for programming, which compiles to a low-level model (the Bit Machine) to explicate time and space costs.

Or perhaps Michelson is low-level to improve efficiency. Most of the cost of evaluating a smart contract is in cryptographic primitives. The rest is cheap, whether compiled or interpreted. Saving a few pennies of electricity by adopting an error-prone language—where there is a risk of losing millions of dollars in an exploit—is a false economy indeed. Premature optimisation is the root of all evil.

The language looks a bit like all the bad parts of Forth and Lisp, without the unity that makes each of those languages a classic. Lisp idioms such as CAAR and CDADAR are retained, with new ones like DUUP, DIIIIP, and PAAIAIAAIR thrown in.

There is a fair set of built-in datatypes, including strings, signed and unsigned integers, unit, product, sum, options, lists, sets, maps, and higher-order functions. But there is no way for users to define their own data types. There is no way to name a variable or a routine; everything must be accessed by navigating a data structure on the stack.

Some operations are specified formally, but others are left informal. For lists, we are given formal rewriting rules for the first three operators (CONS, NIL, IF_CONS) but not the last two (MAP, REDUCE). Type rules are given in detail, but the process of type inference is not described, leaving me with some questions about which programs are well typed and which are not. It reminds me of a standard problem one sees in early work by students—the easy parts are thoroughly described, but the hard parts are glossed over.
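Presumably the two missing operators follow the standard equations for map and fold; in Haskell terms (my guess at the intent, not Michelson's official semantics):

```haskell
-- MAP: apply a function to every element, preserving order.
mapL :: (a -> b) -> [a] -> [b]
mapL _ []     = []
mapL f (x:xs) = f x : mapL f xs

-- REDUCE: fold the elements into an accumulator.
reduceL :: (a -> b -> b) -> b -> [a] -> b
reduceL _ z []     = z
reduceL f z (x:xs) = f x (reduceL f z xs)
```

If these are the intended rules, writing them down would have taken half a page; their absence is exactly the kind of gap a rigorous specification should not have.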

If I have understood correctly, the inference rules assign types that are monomorphic, meaning each term has exactly one type. This omits one of the most successful ideas in functional programming, polymorphic routines that act on many types. It means back to the bad old days of Pascal, where one has to write one routine to sort a list of integers and a different routine to sort a list of strings.
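The cost of monomorphism is easy to see in Haskell terms: one polymorphic routine covers what monomorphic typing forces you to duplicate.

```haskell
import Data.List (sort)

-- One polymorphic routine, usable at every ordered element type.
sortAny :: Ord a => [a] -> [a]
sortAny = sort

-- What a monomorphic type system forces instead: one copy per type.
sortInts :: [Int] -> [Int]
sortInts = sort

sortStrings :: [String] -> [String]
sortStrings = sort
```

In a polymorphic language `sortAny` alone suffices; in a monomorphic one, every new element type means another routine.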

Several of these shortcomings are also shared by Simplicity. But whereas Simplicity is intended as a compilation target, not to be read by humans, the Michelson documentation includes a large collection of examples suggesting it is intended for humans to write and read.

Here is one of the simpler examples from the paper.

{ DUP ; CDAAR ; # T
  NOW ;
  COMPARE ; LE ;
  IF { DUP ; CDADR ; # N
       BALANCE ;
       COMPARE ; LE ;
       IF { CDR ; UNIT ; PAIR }
          { DUP ; CDDDR ; # B
            BALANCE ; UNIT ;
            DIIIP { CDR } ;
            TRANSFER_TOKENS ;
            PAIR } }
     { DUP ; CDDAR ; # A
       BALANCE ;
       UNIT ;
       DIIIP { CDR } ;
       TRANSFER_TOKENS ;
       PAIR } }

The comment # T is inserted as a reminder that CDAAR extracts variable T, and similarly for the other variables N, B, and A. This isn't the 1950s. Why don't we write T when we mean T, instead of CDAAR? WHY ARE WE WRITING IN ALL CAPS?

In short, Michelson is a bizarre mix of some of the best and worst of computing.

Conclusion

It is exciting to see ideas from the functional programming, programming languages, and formal methods communities gaining traction among cryptocurrencies and blockchains. While there are shortcomings, it is fantastic to see an appreciation of how these techniques can be applied to increase reliability—something which the multi-million dollar exploits against Ethereum show is badly needed. I look forward to participating in the conversations that ensue!

Postscript

The conversation has begun! Tezos have put up a page to explain Why Michelson. I've also learned there is a higher-level language intended to compile into Michelson, called Liquidity.

What is our release strategy for Cardano?

A new development approach from IOHK engineers will bring benefits to users

1 December 2017 Jeremy Wood 4 mins read


Since the release of mainnet, IOHK engineers have not only started work planning the new features of Shelley, the next big release, but have also been evolving the way they work. The biggest change is that the team developing Cardano will follow a new release cycle where updates are more regularly made to the live software. This is now possible because we have the foundation of Byron to build on. This approach will benefit the community because it means we can deliver features to users sooner, and also has other advantages, such as allowing us to design very detailed tests for the code. Considerable planning has been under way to help the team shift to this approach, and we are confident it will bring good results. This blog post explains what is happening behind the scenes.

Cardano involves collaboration from several development teams working towards a common vision following the same roadmap. The development teams collaborating on the project include Core SL (Settlement Layer), Networking, Cardano Blockchain Explorer, Wallet Backend, and Daedalus.

Development follows the agile software development approach, which allows us to continuously inspect, adapt and improve the way we build and ship Cardano. Agile is an iterative approach where software is built incrementally from the start of the project, instead of delivering it all at once near the end, as we did with Byron, the launch of mainnet.

The development teams working on Shelley now follow synchronized two-week sprints – a set period of time during which specific work has to be completed and made ready for review. There are daily meetings to coordinate the teams and resolve any issues. The product backlog – the prioritized list of features, essentially all the work that needs to go into developing Cardano – is managed in an issue tracker tool. Individual teams have their own sprints that take their work from this shared product backlog.

After a two-week sprint, we will deliver a testnet release. After a number of testnet releases there will be a mainnet release based on feedback from the community and internal quality assurance testing.

Our goal is to reduce the time between mainnet deployments. To release early and often, we’re adopting time-based releases as opposed to feature-based releases. Time-based release strategies are used in many commercial and open source projects, for example, Linux. Moving to time-based releases means no more waiting for features to be ready; instead we merge only the features that are ready at the time of the release. This doesn’t apply to patches – updates that improve the software or fix bugs – which will continue to be released as soon as they are ready.

One of the problems with the "big bang" release approach we followed for Byron is that it creates a bottleneck due to problems coordinating all the features slated for release at the same time. Decoupling features from releases, where possible, means that in future we will not have to delay a release because we are waiting for one feature to be ready. It also means that we can focus on getting smaller features out of the door and into the hands of our community quicker. The shorter feedback loops allow us to gather valuable feedback from the community on the direction that we are taking Cardano. It allows users, contributors and stakeholders to prepare and plan around a predictable timeframe.

Release Strategy Diagram

However, scaling this agile approach across several teams working on the same product brings merging and integration challenges. We are using git-flow, a well-known and successful git branching model, to help manage this.

Once a development sprint is finished, our DevOps (development and operations) team branches the code off from the develop branch into a release branch. Changes are deployed to internal staging and, once testing has approved them, they are redeployed to the testnet. Once we decide to release to mainnet, the release branch is merged into the master branch and tagged. The master branch is therefore always in a stable state, matching mainnet. We’re working to align our processes to enable moving towards more frequent, scheduled releases. Once a suitable release schedule is decided it will be shared with the community. The process described here is not an end state. We strive to continuously improve, so updates to our release process can be expected as we fine-tune our software development processes.

Artwork: Mike Beeple, Creative Commons

Ethereum Classic community comes together at successful Hong Kong summit

Principles and plans for growth unite participants at event organised by Grayscale

22 November 2017 Jeremy Wood 10 mins read


Since day one, Ethereum Classic has been driven by principles. What began as a disparate group of people who came together after the DAO hack and the consequent hard fork on Ethereum has now become a strong and growing community, united by a belief in immutability and that code is law. So it was tremendously exciting to see everyone gather at the first Ethereum Classic Summit held last week and watch the community move beyond the realm of instant messaging to interacting in person and making new friends. There were many highlights of the two-day summit in Hong Kong. It was a moment to reflect on how much Ethereum Classic has achieved in a little over a year, developing its identity, building its technology and securing its future. Grayscale, which provides Bitcoin and Ethereum Classic investment trusts, gave the summit much momentum by organising the event.

Barry Silbert, founder, explained that Grayscale will put a third of its ETC fees for three years into supporting the growth of ETC through development, marketing and the community, and the ETC Cooperative has been established to do this. Barry told IOHK: “We wanted to do our part and help support increased communication, collaboration and bring together all the various stakeholders at least once a year. I’ve been super excited to see the participation, the level of engagement, the quality of the speakers and participants and think this year has been a huge success.”

Charles Hoskinson giving his keynote speech

What might come as a surprise to some is that Ethereum Classic is not a direct competitor to Ethereum. In his keynote speech on the first day of the summit, Charles Hoskinson, founder and CEO of IOHK, and former CEO of Ethereum, said: “Frankly, we compete far more with Bitcoin. To me, Ethereum Classic is basically a digital commodity, it’s digital gold. It’s really exciting because it can give us a very interesting design space that we can explore that’s totally independent of the design space that Ethereum is pursuing. We don’t have to embrace proof of stake, Plasma or the fork of the week. Instead we can start with principles and we can grow from principles.”

Charles argued that two separate philosophies had always sat uncomfortably together in Ethereum, even before the community split. “We actually had two Ethereums from the beginning but we didn’t know that. We had two philosophies at the same time. One was this notion that code is law, that code can’t care, it doesn’t matter what you’re running and it doesn’t matter about the consequences. But there was this other philosophy, the notion of a world computer, the idea that you could have universal infrastructure that would be a beautiful computation layer that could be added to the internet. But the fundamental problem is that these things are philosophically so different they can’t coexist. When you’re the world computer you have to fork, you have to embrace mutability and have a lot of flexibility in your platform.”

Now more than a year after the split, Ethereum Classic has developed its own identity. It had a great asset in its principled and warm-spirited community, Charles pointed out, and he championed the development work being done. “We see a lot of great things coming down the pipeline. It’s got to a point where we can prove beyond reasonable doubt that we can keep up with Ethereum Foundation and in some cases we’re writing better software. As a comparison, Bitcoin Core has about 110,000 to 120,000 lines of C++ code. The Mantis client does more and it’s 10,000 lines of code. It’s twelve times more concise. The Sputnik VM, pulling the VM out and having it as a standalone, is a phenomenal idea, it’s a great piece of engineering. It’s far better than anything coming out of the Ethereum Foundation.”

A highlight of the conference was hearing about the steady pace of development of the ETC protocol. The eight engineers who make up the ETCDEV team are working on implementing the monetary policy, a flexible software developer kit that can be used to build applications on Ethereum Classic, and performance and reliability improvements to the Ethereum Virtual Machine. Isaac Ardis, Go developer at ETCDEV, outlined the direction and purpose of the Emerald Project, which is not just a wallet but a library and suite of tools for third-party developers. Licensing projects under Apache 2.0 makes them commercial-friendly and easy to implement.

Igor Artamonov, CTO of ETCDEV, told IOHK that he believed Ethereum Classic would rise to be in the top three cryptocurrencies. “We’ll continue our work on open tools for developers who will build apps on top of ETC blockchain. Our goal is to set up development processes and help with building various tools for engineers and open source libraries around ETC.” A theme of the conference was formal verification, and on that topic Igor said: “It will be cool to have. We are not experts in formal verification, but if someone will decide to introduce it to SputnikVM, we’ll appreciate that.” On monetary policy, Matt Mazur, advisor to the ETCDEV team, gave a clearly explained presentation on ECIP 1017, the proposal that contains changes such as capping the amount of ETC that can be created, in contrast to Ethereum’s uncapped supply.

Alan Verbner speaking at ETC Summit
The [ETC monetary policy](/blog/ethereum-classic/a-joint-statement-on-ethereum-classics-monetary-policy/ "ETC monetary policy, iohk.io") is modelled on Bitcoin’s monetary policy, and is designed to reduce risk, create simplicity, and encourage investors. Every 5 million blocks, there is a 20 per cent reduction in block rewards. The question of how Ethereum Classic will scale has always been an issue of debate. Igor shared insights on how sidechains could be deployed, making use of public and private sidechains and having the main chain for security. Cody Burns, ETCDEV adviser, talked about cross-chain interoperability and how [cross-chain atomic swaps](https://medium.com/@DontPanicBurns/ethereum-cross-chain-atomic-swaps-5a91adca4f43 "Cody Burns, Medium") could allow people to exchange coins in a trustless manner.

There was news from the Grothendieck team, the IOHK developers that are dedicated to ETC and who have over the past year built the Mantis client for Ethereum Classic. Alan McSherry, team lead, gave details about integration with the Daedalus wallet. It will be fantastic for the community to have another option for a wallet, where they can securely store their ETC. “The Daedalus release is going to happen hopefully in December. That’s taking the Daedalus wallet, the UI that was built for Cardano, and putting it on top of our Mantis client, our ETC node, so you’re able to use a familiar UI with Ethereum Classic.”

Alan McSherry also offered a look at what the team are thinking about in terms of Mantis development in 2018. With regard to K Framework, the important research to create formal semantics of the EVM done by Professor Grigore Rosu at the University of Illinois, and funded by IOHK, he said: “With any luck, that will result in the possibility of writing Scala contracts that can be compiled to run on the EVM, or a slightly different version of the EVM, and we’ll have the formal verification and it’ll be a question of trying to create formal proofs around those contracts.” IOHK research into sidechains could allow the possibility of creating contracts to allow interactions between ETC and Ethereum and ETC and Bitcoin, while other features on the horizon for ETC include zero knowledge proofs and anonymous addresses, he said.

And Alan Verbner, developer on the team, gave a reminder of why the team use Scala. “Functional programming allows you to create less and more secure code and that’s why we are trying to build in Mantis compact code that is safe and you can check its correctness. Our client has 10,000 lines of code, it’s easier to read and easier to access. We are building open source software so we like people to look into it and analyse it to look for bugs and security issues. You have fewer lines of code to test.”

The theme of principles arose again when it came to hearing from Ethereum Classic miners. We heard how mining pools chose ETC because of the core principles of the community, proving the attractiveness of the code is law philosophy. There was good news from the 91Pool, the first group that ever mined on ETC, when Meicy Mei announced the group has purchased a building in Shanghai to serve as a dedicated hub for the ETC community in China, with a cafe and bar that will host meet ups.

ETC Summit key stakeholders on stage
A final reminder of the principles that brought the Ethereum Classic community together came during an impassioned keynote speech from Meredith Patterson, an expert in programming language security. She recounted the scorn with which Ethereum had responded when formal methods of software development were suggested as a good idea. She argued that Ethereum Classic still had the chance to adopt formal methods – an approach that IOHK strongly advocates and actively pursues during its development process. For code to be law, she said, the meaning of the code must be unambiguous, and users must have the same understanding of it that machines do. Steps to get there include adopting a rigorous formal semantics, like KEVM. Deficits in test suites should be found and the problems corrected. Solidity should be replaced.

The importance of coming together as a community to debate these issues cannot be overestimated. The results of these discussions will drive the development of Ethereum Classic. Sharing this view, Barry told IOHK: “I think in the digital currency space there’s a little too much interaction done through social media and through anonymous forums and it’s important to get people together to meet each other, challenge each other, and do it in a way that is respectful and that fosters communication and collaboration. So I do think it’s important to have summits like this. I absolutely expect that we’ll do it again next year and hopefully the year after that will become an annual event that everybody in the community looks forward to.”

Writing a High-Assurance Blockchain Implementation

Guest blog from Edsko de Vries, who is working on the high assurance implementation of the Ouroboros blockchain protocol

3 November 2017 Edsko de Vries 9 mins read


In our previous blog post Cryptocurrencies need a safeguard to prevent another DAO disaster we discussed the need for high assurance development of cryptocurrencies and their underlying blockchain protocols. We also sketched one way in which one might go about this, but we did not give much detail. In this follow-up blog post we delve a bit more into computer science theory, and in particular we take a closer look at process algebras. We will still have no choice but to omit a lot of detail, but hopefully we can develop some intuition and provide a starting point for further reading.

Compositionality

When we want to study computer programs as mathematical objects that we can manipulate and analyze, we need to write them in a programming language for which we have a well understood mathematical theory. If this mathematical theory inherits much of what we know from elementary mathematics, that's an additional benefit. For example, we learn in school that:

for every number n, n + n = 2 * n

Shockingly, in many programming languages even this most basic of equations does not hold. For example, in the language C, when we define

#include <stdio.h>

int f() {
  return 5;              /* pure: always returns 5 */
}

int g() {
  int x;
  scanf("%d", &x);       /* side effect: reads a number from the user */
  return x;
}

we have that while f() + f() is the same as 2 * f() (both equate to 10), g() + g() is certainly not the same as 2 * g(): the former asks the user for two numbers and adds them together, the latter asks the user for a single number and multiplies it by two.

In the language Haskell this distinction between an honest-to-goodness number and a program that returns a number is explicit (this is what we mean when we say that Haskell is a pure language), and for numbers basic mathematical equations such as the one above hold true. Knowing that for every number n, n + n = 2 * n in Haskell would still not be particularly useful without another property of Haskell: when we see the expression n + n anywhere in a program, we can (almost) always replace it with 2 * n and know that our program still behaves in the same way. We say that Haskell is referentially transparent. This is key: it means that we can analyze bits of our program at a time and know that our conclusions are also correct in the wider context of the whole program. In other words, it means that we can do compositional reasoning: we can understand the larger program by understanding its components individually. This is essential when dealing with anything but toy examples.
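The distinction shows up directly in Haskell's types; a small sketch (using readLn where the C example used scanf):

```haskell
-- A number: f + f can always be rewritten as 2 * f.
f :: Int
f = 5

-- A program that returns a number; the type IO Int makes the effect visible.
g :: IO Int
g = readLn

-- Effectful programs must be combined explicitly, so the two behaviours
-- described above are two visibly different programs:
askTwice :: IO Int
askTwice = (+) <$> g <*> g      -- reads two numbers and adds them

askOnceDoubled :: IO Int
askOnceDoubled = (2 *) <$> g    -- reads one number and doubles it
```

Replacing `g + g` with `2 * g` is not even well typed, which is exactly the point: the type system separates values from effectful computations.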

Process Algebra

Of course, even in Haskell we can read from files, write to network sockets, and so on, and while the type system makes sure that such programs are distinguished from pure values, reasoning about such programs is nonetheless more difficult because we do not have a good mathematical theory that covers all of these "effects". It is therefore useful to identify a subset of effects for which we do have a mathematical theory. Different kinds of applications need different subsets of effects, and for some applications a process algebra is a good match. A process algebra is a programming language in which we can express systems of concurrent programs that communicate with each other, along with a mathematical theory. There are many different kinds of process algebras, and they differ in the details. For example, some provide synchronous point-to-point communication, others asynchronous; a few provide broadcast communication.

So what would be an analogue to a law like n + n = 2 * n in such a language? There are multiple answers, but one is bisimilarity, which intuitively allows us to say that "two processes behave the same". For example, consider the following two processes

p1 m = forever $ do
        bcast m
        bcast m
p2 m = forever $ do
        bcast m

Both p1 and p2 consist of an infinite loop; p1 broadcasts message m twice within that loop, and p2 only once. While these two processes may look different, they really aren't: both broadcast message m indefinitely. To make "they aren't really different" more precise, we can try to prove that p1 and p2 are bisimilar. Intuitively, this means that we have to show that any action that p1 can take, p2 can also take, and they are still bisimilar after taking those actions. The proof for p1 and p2 would look something like this, where a red line indicates "can do the same action here":

Diagram: the bisimulation proof for p1 and p2

So why is this useful? When two processes are bisimilar, you know three things: they can do the same actions, they will continue to be able to do the same actions, and most importantly, this is compositional: if p1 is bisimilar to p2 then we can replace p1 with p2 in (most) contexts. As before, this is essential: it means we can understand and analyze a larger program by understanding and analyzing its components.
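To make this concrete, here is a naive on-the-fly bisimilarity check for finite labelled transition systems, with a toy encoding of p1 and p2 (my own illustration, not code from our development):

```haskell
type State = Int
type Label = String
type LTS   = [(State, Label, State)]   -- (source, action, target)

succs :: LTS -> State -> [(Label, State)]
succs lts s = [ (l, t) | (s', l, t) <- lts, s' == s ]

-- Naive coinductive check: a pair already under consideration is assumed
-- bisimilar; the visited set of a successful run forms a bisimulation.
bisimilar :: LTS -> LTS -> State -> State -> Bool
bisimilar l1 l2 = go []
  where
    go seen p q
      | (p, q) `elem` seen = True
      | otherwise =
          let seen' = (p, q) : seen
          in  all (\(a, p') -> any (\(b, q') -> a == b && go seen' p' q')
                                   (succs l2 q)) (succs l1 p)
           && all (\(b, q') -> any (\(a, p') -> a == b && go seen' p' q')
                                   (succs l1 p)) (succs l2 q)

-- p1: broadcast m twice per loop; p2: broadcast m once per loop.
lts1, lts2 :: LTS
lts1 = [(0, "bcast m", 1), (1, "bcast m", 0)]
lts2 = [(0, "bcast m", 0)]
```

Here `bisimilar lts1 lts2 0 0` holds, mirroring the proof sketched above: every action one process can take, the other can match, forever.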

Psi-calculus

Process algebra is a large field in computer science, and many different kinds of algebras have been studied and are being studied. Some of the earliest and perhaps most well-known are the Calculus of Communicating Systems (CCS) and Communicating Sequential Processes (CSP). These days one of the most important process algebras is the pi calculus; arguably, the pi calculus is to concurrent programming what the lambda calculus is to functional programming. While the lambda calculus forms the theoretical basis for functional programming, nobody actually writes any code in the lambda calculus; it would be akin to writing in assembly language. Instead we write programs in a much more sophisticated functional language like Haskell. Similarly, the pi calculus is a "core" calculus, good for foundational research but not so good for actually writing programs. Instead, we will use the psi calculus, which is a generalization of the pi calculus with more advanced features, but still with a well understood theory.

Let's consider an example of bisimilarity in the psi calculus, similar to the previous example but slightly more interesting. Do you think we should regard processes p3 and p4 as "behaving the same"?

p3 m = forever $ do
        k  <- newKey ; bcast (encrypt k  m)
        k' <- newKey ; bcast (encrypt k' m)

p4 m = forever $ do
        k  <- newKey ; bcast (encrypt k m)

What about processes p5 and p4?

p5 m = forever $ do
        k <- newKey
        bcast (encrypt k m)
        bcast (encrypt k m)

It turns out that processes p3 and p4 are bisimilar, and the proof is almost the same as before:

Both send out an infinite number of messages, each encrypted with a new key. Although p3 and p4 will generate different keys, from the outside we cannot tell them apart, because we know nothing about those keys. Processes p5 and p4, however, are not bisimilar.

After p5 sends out a message encrypted with some key k, it then sends that message again encrypted with the same key; p4 cannot follow.
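We can make the distinguishing observation concrete. In the toy Haskell model below (all names are ours, and "encryption" is stubbed out as mere pairing just to make key reuse visible), an observer who cannot read keys can still compare ciphertexts for equality, and that equality test separates p5 from p4:

```haskell
-- Toy model: keys are Ints and "encryption" is stubbed as pairing, so
-- two ciphertexts are equal exactly when their key and message coincide.
type Key    = Int
type Cipher = (Key, String)

encrypt :: Key -> String -> Cipher
encrypt k m = (k, m)

-- p4 draws a fresh key for every broadcast; p5 reuses each key twice.
broadcastsP4 :: String -> [Cipher]
broadcastsP4 m = [ encrypt k m | k <- [0 ..] ]

broadcastsP5 :: String -> [Cipher]
broadcastsP5 m = concat [ [encrypt k m, encrypt k m] | k <- [0 ..] ]

-- An observer who cannot decrypt can still test ciphertexts for
-- equality; two consecutive equal ciphertexts betray a reused key.
seesRepeat :: [Cipher] -> Bool
seesRepeat (c1 : c2 : _) = c1 == c2
seesRepeat _             = False

main :: IO ()
main = do
  print (seesRepeat (broadcastsP4 "hello"))  -- False
  print (seesRepeat (broadcastsP5 "hello"))  -- True: p5 is distinguished
```

This is exactly the move p4 cannot match: no run of p4 ever produces the same ciphertext twice in a row.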

Ouroboros Praos

One of the nice things about the psi calculus is that we can embed it as a domain-specific language (DSL) in Haskell. We use this to define a model of the Ouroboros Praos blockchain protocol, which we can both execute (it's just Haskell, after all) and test using QuickCheck (Haskell's randomized testing tool), and which moreover is amenable to formal verification using the psi calculus metatheory.
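To give a flavour of what such an embedding might look like (this is our own toy sketch, not IOHK's actual development, and every constructor name here is hypothetical), a process language can be represented as a Haskell data type together with an interpreter that runs a process for a bounded number of steps and collects its observable outputs:

```haskell
-- A toy process DSL (hypothetical constructors; the real embedding
-- used for Praos is considerably richer).
data Process
  = Stop                     -- do nothing
  | NewKey (Int -> Process)  -- bind a fresh key, then continue
  | Bcast String Process     -- broadcast a message, then continue

-- One possible interpreter: run for at most `fuel` steps, drawing
-- fresh keys from a counter, and collect the broadcast trace.
trace :: Int -> Int -> Process -> [String]
trace fuel _ _ | fuel <= 0    = []
trace _    _    Stop          = []
trace fuel next (NewKey k)    = trace (fuel - 1) (next + 1) (k next)
trace fuel next (Bcast msg p) = msg : trace (fuel - 1) next p

-- A p4-style process: a fresh key per broadcast ("encryption" is
-- stubbed as tagging the message with the key).
p4 :: String -> Process
p4 m = NewKey (\k -> Bcast (m ++ "#" ++ show k) (p4 m))

main :: IO ()
main = print (trace 6 0 (p4 "msg"))  -- ["msg#0","msg#1","msg#2"]
```

Because the model is plain Haskell, properties of such traces can be tested with QuickCheck, while the psi calculus metatheory applies to the very same terms.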

We start by expressing the algorithm as it is defined in the Ouroboros Praos paper in our Haskell psi calculus embedding. Although we cannot formally verify that this algorithm matches the one in the paper, the correspondence is close enough to be checked by hand. We can run this implementation and experiment with it. This first implementation will do various things that, while implementable, we would not want to do in a real implementation. So we then make a number of small, incremental adjustments that get us closer and closer to an actual high performance implementation. Each individual adjustment should be small enough that a human reader can understand how we get from one implementation to the next and convince themselves that nothing fundamental has changed.

Each of these algorithms can be run, so we can test that the adjustments we make to the original algorithm don't change any fundamental properties. Since we express the algorithm in terms of a well-understood process calculus, we can also take some of these adjustments and formally prove that nothing fundamental has changed. Bisimulation is but one tool in the theoretical toolbox here; there are many others. The end result is a bridge from the high level specification of the algorithm in the academic literature to the low level implementation we actually run, with high assurance that what runs in production indeed corresponds to the algorithm defined and proved correct by the cryptographers.
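The claim that an adjustment "changes nothing fundamental" can itself be turned into an executable check: run both versions on the same inputs and compare their observable results. A deliberately tiny illustration of the pattern (both functions are made up for this example; the real refinement steps are far richer):

```haskell
-- A "reference" computation and an "adjusted" version that computes
-- the same values a different way, mimicking a refinement step.
reference :: Int -> [Int]
reference n = map (\i -> 2 * i + 1) [1 .. n]

adjusted :: Int -> [Int]
adjusted n = [ i + i + 1 | i <- [1 .. n] ]

-- A QuickCheck-style property: the two versions agree on every input.
prop_agree :: Int -> Bool
prop_agree n = reference n == adjusted n

main :: IO ()
main = print (all prop_agree [0 .. 100])  -- True
```

In the actual development the compared observables are process traces, and the stronger tool of bisimulation proof replaces testing where resources allow.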

Download and Contributing

If you want to follow the work in progress, you can find the psi calculus development of the Praos blockchain protocol at Praos Formalization. One way in which you could contribute would be to read the document, compare it to the Praos algorithm in the literature, and convince yourself that indeed they do the same thing; after all, this is a step we cannot formally prove. Although the subsequent adjustments are in principle formally provable, in practice we may only have resources to do that for a few steps; so here too the more people read the document and try to poke holes in the arguments, the better. If you want to do more than verify what we have done, we would be happy to discuss the possibilities; contact us here. We look forward to hearing from you!

Introducing the Cardano roadmap

Next steps of development are charted ahead of further information releases

31 October 2017 Charles Hoskinson 9 mins read

Cardano has been an incredibly challenging and fun project to work on involving many teams throughout the world with different skills and opinions on design, process and quality. Our core technology team consists of Well Typed, Serokell, Runtime Verification, Predictable Network Solutions and ATIX, with IOHK leading them. Then we have external auditors such as Grimm, RPI Sec and FP Complete ensuring quality and holding us accountable to delivering what we have promised. When dealing with a consortium, it’s important to be aligned not only on day to day affairs, but on the broader engineering philosophy of why and how things should be constructed, as well as the pace of development. I’d like to spend a few paragraphs exploring the principles that guide our roadmap.

First, we can only grow as fast as our community. At the moment, more than half of our community has never used a cryptocurrency before. This reality means that while it would be awesome to introduce complex features like stake pools, blockchain based voting and subscribable checkpoints, they would be underutilized and therefore ineffective until the community catches up. To this end, we’ve provisioned resources towards a help desk tour, a dedicated support channel, and interacting with our community as much as possible as it grows. This is tremendously time consuming, but it really does force our software to become simpler and more usable.

Cardano help desk tour, Tokyo sessions.

This effort challenges design assumptions, such as what the capabilities of the average node in our system should be. Throughout the next few months, a large part of our focus will be on answering questions, debugging software, improving the user experience and community education. The output will be that more people use Daedalus comfortably and back up their Ada, are able to use exchanges according to best practices, and understand why Cardano is a special project. It also means our software will slowly become more bug free and compatible with different configurations.

Second, we believe firmly in the vision of Satoshi that these networks must be resilient and distributed. Resilience means we cannot optimize around a collection of trusted or usually reliable nodes to maintain the network. Distributed means that wherever possible, every node contributes to propagating the network and its DNA. This creates tremendous design challenges. One is at the mercy of the weakest links when replication is the tool used for resilience because it dramatically slows the network as more users come online. While we feel that a protocol like Ouroboros is perfectly suited to properly balance these conflicting demands, we admit that general network and database capacities aren’t quite ready for this task. The sole – and temporary – advantage that Cardano enjoys here is that we are new and won’t feel the pain of scale for the next year.

Yet we must remain vigilant in our efforts to address these concerns systematically, as we have done with Ouroboros. Therefore, many of the most significant network improvements will be scheduled for deployment in late 2018 and throughout 2019.

Third, there is a difficult balance between research and deployment. We have great protocols and protocol developers. We also have a unique advantage in being able to quickly submit papers for peer review in order to promptly validate them or fail fast, instead of enduring difficult lessons of hindsight that can result in the loss of funds. However, we cannot allow the urge to have something ready soon for commercial advantage to drown out proper process. With Ouroboros, for example, we have a serious and ongoing formal specification effort using psi calculus to model and remove all ambiguity from our consensus protocol. This effort has been fruitful in dramatically increasing our understanding of all the benefits and sins of Ouroboros, but will not yield working code until the second half of 2018.

Ouroboros Praos researchers at Crypto 2017, left to right: Bernardo David, Alexander Russell, Aggelos Kiayias, Peter Gaži

We deployed a rigorously built protocol for Byron, but one that does not enjoy a formal specification. There is always the chance we did something wrong, as with all software projects. There is always the chance we haven’t followed the intent of the scientists, despite our best engineering efforts. The broader point is that as our protocols increase in complexity, interdependence and with the use of more exotic cryptographic primitives, finding a balance will become more challenging. The saving grace we have is that many of our security assumptions are somewhat composable, we use great methods like Red-Black teams, peer review and formal specs, and using Haskell forces deeper conversations about intent.

The question of when to introduce smart contracts has become the most difficult issue of our roadmap. IOHK has brought some of the best minds from the programming language theory world, such as Professor Rosu and Professor Wadler, into the area of smart contracts. They have watched the world evolve from simple computers measuring performance in the megahertz and kilobytes of RAM to the internet age of big data, AI on demand and nearly unlimited processing power.

It’s an extraordinary honor and privilege to work with people with so much wisdom, hindsight and proven contributions to the fabric of computer science. Yet we also face the demands of Ethereum and other systems that, while lacking rigorous foundations, have proven able to attract a large following and much mindshare. We won’t accept that the model of smart contracts is a closed matter and that these issues are now just a matter of optimization. We won’t accept that society is anywhere close to mass adoption of this new paradigm. This is just the beginning, and it would be foolish to say Ethereum’s model is the one to back. Yet we acknowledge the need for something like that model. Therefore, we’ve decided to increase our allocation of resources towards two parallel tracks of research.

One is focused on fixing the issues that Ethereum’s hastily driven design process has yielded to the despair of those who have lost funds as a result. The other is focused on the ontology of smart contracts in general as well as different computational models that achieve similar ends without necessarily involving the new complexity or cost that Ethereum introduces. Professor Rosu’s team at the University of Illinois and the partnership with Runtime Verification is focused on the former effort. The work of Professor Wadler’s team is focused on the future and broader theory. We hope to establish more foundational research in the field of smart contracts, as Professor Aggelos Kiayias did with the GKL model and Ouroboros.

Next, there is the interplay between good partners like Bittrex, Ledger and others who are consumers of our APIs and have to deploy our software for thousands to use. The reality is that all interfaces are best guesses until they are used, and then must change to conform with reality, not best intentions. We’ve already seen lots of room for improvement in our middleware layer and Daedalus’s design, as well as new features that would make integration simpler and more cost effective. To this end, we’d like our first light clients to come from third parties instead of directly from IOHK, to force our documentation to improve and our software to become less arcane. Much of the effort towards Shelley (Cardano’s next major release) will be focused on these types of improvements. They create a wonderful feedback loop that helps us make better decisions that benefit larger groups of users.

Finally, the Cardano roadmap isn’t the property of IOHK, the Cardano Foundation or Emurgo. It belongs to the community. When a cryptocurrency is new, it needs good shepherds to guide and steer, but we cannot allow the ecosystem to be ruled by a beneficent dictator or oligarchy.

This requirement creates some issues about scope. We have a clear idea of what needs to be done over the next 18 months to realize some of the technical requirements of Cardano from scalability to interoperability, but we cannot operate as a dictatorship pushing along and telling everyone to accept it. Cardano belongs to those who hold Ada. Thus we’ve decided to execute with three parallel paths. First, Shelley is our path to the full decentralization of Cardano. This must be done and we have decided to publish in full what Shelley entails. Next, there is the issue of Cardano Improvement Proposals (CIPs) and gathering consent for moving forward on whichever one is decided upon. We have invested a great deal of effort into building a solid voting system and will be releasing it for public review soon. This includes a standard for CIPs.

After Shelley, we will be proposing all changes to Cardano as CIPs and adding increasingly better democratic mechanisms to gather consent for these changes. The burden should be on us to inform and rally the community behind a direction. We cannot allow a technocracy to form alongside a cult of personality around good leaders. Cardano must have the legitimacy of consent from its users. Finally, the roadmap needs to be reviewed and updated often. It cannot be a static document, so we are going to run a monthly clock and update it frequently. We are also going to create a channel for people to propose alternative roadmaps. As our users become more sophisticated and we accrue more enterprise partnerships, we fully expect divergent paths to be proposed, and our hope is that they are better than our own.

As a last thought, a cryptocurrency is only as good as the community behind it. We’ve been humbled by how amazing, patient and helpful our community has been. Our hope is that the roadmap is something we can build together over time and that it becomes one of our strongest pillars.

To learn more about the project see the recent whiteboard video. Filmed on location at IOHK's Blockchain technology lab, Tokyo Institute of Technology - 東京工業大学