In this blog post, I'm going to discuss the overall structure of a proof refinement system. Such systems can be used for implementing automatic theorem provers, proof assistants, and type checkers for programming languages. The proof refinement literature is either old or hard to understand, so this post, and subsequent ones on the same topic, will present it in a more casual and current way. Work on proof refinement as a methodology for building type checkers and interactive proof systems has a long history, going back to LCF, HOL, Nuprl, and Coq. These techniques never really penetrated into the mainstream of programming language implementation, but perhaps this may change.
The GitHub repo for this project can be found here.
Prologue
As part of my work for IOHK, I've been designing and implementing a programming language called Plutus, which we use as the scripting language for our blockchains. Because IOHK cares deeply about the correctness of its systems, Plutus needs to be held to a very high standard, and needs to enable easy reasoning about its behavior. For that reason, I chose to make Plutus a pure, functional language with a static type system. Importantly, the language has a formal type theoretic specification.
To implement the type checker for the language, I used a fairly standard technique. However, recently I built a new framework for implementing programming languages which, while different from the usual way, has a more direct connection to the formal specification. This greater similarity makes it easier to check that the implementation is correct.
The framework also makes a number of bugs easier to eliminate. One class of bugs that arose a number of times in the first implementation was that of metavariables and unification. Because the language supports a certain amount of unification-driven type inference, it's necessary to propagate updates to the information state of the type checking algorithm, so that everything is maximally informative and the algorithm can run correctly. Propagating this information by hand is error prone, and the new framework makes it possible to do it once and for all, in a bug-free way.
The framework additionally makes some features easier to program. For example, good error reporting is extremely important, but good error messages need certain information about where the error occurs: not just where in the source code, but where in the logical structure of the program. All sorts of important information about the nature of the error, and the possible solutions, depend on that. The framework I developed makes this incredibly easy to implement, so much so that it's built right into the system, and any language implemented using it can take advantage of it.
Proofs and Proof Systems
In the context of modern proof theory and type theory, a proof can be seen as a tree structure with labeled nodes, where the class of valid proof trees is defined by a set of rules that specify how a node in a tree may relate to its child nodes. The labels, in the context of proofs, are usually hypothetical judgments, and the rules are inference rules, but you can use the same conceptual framework for many other things as well.
An example of a rule: for any choice of A and B, if you can show that A is true and that B is true, then it's permissible to conclude that A∧B is true. This is usually written with the notation

A is true    B is true
----------------------
      A∧B is true

This rule explains one way that it's ok to have a node labeled A∧B is true: namely, when it has two child nodes labeled A is true and B is true. So this rule justifies the circled node in the following tree.

When used as a tool for constructing a proof, however, we think about rule use in a directional fashion. We start from the conclusion — the parent node — because this is what we'd like to show, and we work backwards to the premises of the rule — the child nodes — to justify the conclusion. We think of this as a dynamic process of building a proof tree, not simply having one. From that perspective, the rule says that to build a proof tree with the root labeled A∧B is true, we can expand that node to have two child nodes A is true and B is true, and then try to build those trees.
In the dynamic setting, then, this rule means that we may turn the first tree below into the second:
In this setting, though, it's important to be aware of a distinction between two kinds of nodes: nodes which may appear in finished proof trees as is, and nodes which must still be expanded before they can appear in a finished proof tree. In the above trees, we indicated this by the fact that the nodes which may appear in finished proof trees have a bar above them labeled by the name of the rule that justified them. Nodes which must still be expanded have no such bar, and we call them goals.
Building a Proof Refinement System
Let's now build a proof refinement system for a toy logic, namely equality (x == y) and addition (x + y = z) as relations among natural numbers. We'll implement it in both Haskell and JavaScript to give a sense of how things work. We'll start by just making the proof refinement system itself, with no interactivity and no automatic driving of the proof process; we'll rely on the REPL to manage these things for us.
So the first thing we need is a definition of the naturals. In Haskell, we'll just use a normal algebraic data type, but in JavaScript, we'll use an object with a mandatory tag key and an optional arg key, together with helpers:
-- Haskell
data Nat = Zero | Suc Nat
deriving (Show)
// JavaScript
var Zero = { tag: "Zero" };
function Suc(n) {
return { tag: "Suc", arg: n };
}
Similarly, we'll define the judgments in question:
-- Haskell
data Judgment = Equal Nat Nat | Plus Nat Nat Nat
deriving (Show)
// JavaScript
function Equal(x,y) {
return { tag: "Equal", args: [x,y] };
}
function Plus(x,y,z) {
return { tag: "Plus", args: [x,y,z] };
}
We'll next need some way to decompose a statement such as Plus (Suc Zero) (Suc Zero) Zero into subproblems that would justify it. We'll do this by defining a function that performs the decomposition, returning a list of new goals. But because the decomposition might fail, we'll wrap the result in a Maybe.
-- Haskell
decomposeEqual :: Nat -> Nat -> Maybe [Judgment]
decomposeEqual Zero Zero = Just []
decomposeEqual (Suc x) (Suc y) = Just [Equal x y]
decomposeEqual _ _ = Nothing
decomposePlus :: Nat -> Nat -> Nat -> Maybe [Judgment]
decomposePlus Zero y z = Just [Equal y z]
decomposePlus (Suc x) y (Suc z) = Just [Plus x y z]
decomposePlus _ _ _ = Nothing
// JavaScript
var Nothing = { tag: "Nothing" };
function Just(x) {
return { tag: "Just", arg: x };
}
function decomposeEqual(x,y) {
if (x.tag === "Zero" && y.tag == "Zero") {
return Just([]);
} else if (x.tag === "Suc" && y.tag === "Suc") {
return Just([Equal(x.arg, y.arg)]);
} else {
return Nothing;
}
}
function decomposePlus(x,y,z) {
if (x.tag === "Zero") {
return Just([Equal(y,z)]);
} else if (x.tag === "Suc" && z.tag === "Suc") {
return Just([Plus(x.arg, y, z.arg)]);
} else {
return Nothing;
}
}
If we explore what these mean when used, we can build some intuitions. If we want to decompose a judgment of the form Plus (Suc Zero) (Suc Zero) (Suc (Suc (Suc Zero))) in Haskell, or Plus(Suc(Zero), Suc(Zero), Suc(Suc(Suc(Zero)))) in JavaScript, we simply apply the decomposePlus function to the three arguments of the judgment:
-- Haskell
decomposePlus (Suc Zero) (Suc Zero) (Suc (Suc (Suc Zero)))
= Just [Plus Zero (Suc Zero) (Suc (Suc Zero))]
// JavaScript
decomposePlus(Suc(Zero), Suc(Zero), Suc(Suc(Suc(Zero))))
= Just([Plus(Zero, Suc(Zero), Suc(Suc(Zero)))])
We'll define a general decompose function that dispatches on the form of the judgment:
-- Haskell
decompose :: Judgment -> Maybe [Judgment]
decompose (Equal x y) = decomposeEqual x y
decompose (Plus x y z) = decomposePlus x y z
// JavaScript
function decompose(j) {
if (j.tag === "Equal") {
return decomposeEqual(j.args[0], j.args[1]);
} else if (j.tag === "Plus") {
return decomposePlus(j.args[0], j.args[1], j.args[2]);
}
}
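To get a feel for the dispatcher, here are two one-step decompositions (a sketch using the Haskell version, with the expected result shown after each call):

-- Haskell
decompose (Equal (Suc Zero) (Suc Zero))
= Just [Equal Zero Zero]

decompose (Plus Zero (Suc Zero) (Suc Zero))
= Just [Equal (Suc Zero) (Suc Zero)]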
Now let's define what constitutes a proof tree:
-- Haskell
data ProofTree = ProofTree Judgment [ProofTree]
deriving (Show)
// JavaScript
function ProofTree(c,ps) {
return { tag: "ProofTree", args: [c,ps] };
}
The first argument of the constructor ProofTree is the label for the node, which is the conclusion of the inference that justifies the node. The second argument is the list of sub-proofs that prove each of the premises of that inference. So, for example, the proof tree for suc(suc(zero)) + suc(zero) = suc(suc(suc(zero))) would be represented by
-- Haskell
ProofTree (Plus (Suc (Suc Zero)) (Suc Zero) (Suc (Suc (Suc Zero))))
[ ProofTree (Plus (Suc Zero) (Suc Zero) (Suc (Suc Zero)))
[ ProofTree (Plus Zero (Suc Zero) (Suc Zero))
[ ProofTree (Equal (Suc Zero) (Suc Zero))
[ ProofTree (Equal Zero Zero)
[]
]
]
]
]
// JavaScript
ProofTree(Plus(Suc(Suc(Zero)), Suc(Zero), Suc(Suc(Suc(Zero)))),
[ ProofTree(Plus(Suc(Zero), Suc(Zero), Suc(Suc(Zero))),
[ ProofTree(Plus(Zero, Suc(Zero), Suc(Zero)),
[ ProofTree(Equal(Suc(Zero), Suc(Zero)),
[ ProofTree(Equal(Zero,Zero),
[])
])
])
])
])
We can now build a function, findProof, which will use decompose to try to find a proof for any judgment that it's given:
-- Haskell
findProof :: Judgment -> Maybe ProofTree
findProof j =
case decompose j of
Nothing -> Nothing
Just js -> case sequence (map findProof js) of
Nothing -> Nothing
Just ts -> Just (ProofTree j ts)
// JavaScript
function sequence(mxs) {
var xs = [];
for (var i = 0; i < mxs.length; i++) {
if (mxs[i].tag === "Nothing") {
return Nothing;
} else if (mxs[i].tag === "Just") {
xs.push(mxs[i].arg);
}
}
return Just(xs);
}
function findProof(j) {
var mjs = decompose(j);
if (mjs.tag === "Nothing") {
return Nothing;
} else if (mjs.tag === "Just") {
var mts = sequence(mjs.arg.map(j => findProof(j)));
if (mts.tag === "Nothing") {
return Nothing;
} else if (mts.tag === "Just") {
return Just(ProofTree(j, mts.arg));
}
}
}
If we now run this on some example triples of numbers, we find that it returns Justs of proof trees exactly when the statement is true, and the trees show just how the statement is true. Try it on the above example of suc(suc(zero)) + suc(zero) = suc(suc(suc(zero))).
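For instance, running findProof on a true statement and on a false one gives the following (a sketch; the first result is the full tree shown above, abbreviated here):

-- Haskell
findProof (Plus (Suc (Suc Zero)) (Suc Zero) (Suc (Suc (Suc Zero))))
= Just (ProofTree (Plus (Suc (Suc Zero)) (Suc Zero) (Suc (Suc (Suc Zero)))) [...])

findProof (Plus (Suc Zero) (Suc Zero) Zero)
= Nothing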
The choice above to use the Maybe type operator reflects the idea that there is at most one proof of a judgment, but possibly also none. If there can be many, then we need to use List instead, which gives us alternative definitions of the various functions:
-- Haskell
decomposeEqual :: Nat -> Nat -> [[Judgment]]
decomposeEqual Zero Zero = [[]]
decomposeEqual (Suc x) (Suc y) = [[Equal x y]]
decomposeEqual _ _ = []
decomposePlus :: Nat -> Nat -> Nat -> [[Judgment]]
decomposePlus Zero y z = [[Equal y z]]
decomposePlus (Suc x) y (Suc z) = [[Plus x y z]]
decomposePlus _ _ _ = []
decompose :: Judgment -> [[Judgment]]
decompose (Equal x y) = decomposeEqual x y
decompose (Plus x y z) = decomposePlus x y z
findProof :: Judgment -> [ProofTree]
findProof j =
do js <- decompose j
ts <- sequence (map findProof js)
return (ProofTree j ts)
// JavaScript
function bind(xs,f) {
if (xs.length === 0) {
return [];
} else {
return f(xs[0]).concat(bind(xs.slice(1), f))
}
}
function sequence(xss) {
if (xss.length === 0) {
return [[]];
} else {
return bind(xss[0], x =>
bind(sequence(xss.slice(1)), xs2 =>
[[x].concat(xs2)]));
}
}
function findProof(j) {
return bind(decompose(j), js =>
bind(sequence(js.map(j2 => findProof(j2))), ts =>
[ProofTree(j,ts)]));
}
In fact, Haskell lets us abstract over the choice of Maybe vs List, provided that the type operator is a monad.
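As a rough sketch of that abstraction (the names here are mine, not from the post): if we pass the decompose function in explicitly, the same search works for any monad, and instantiating it with the Maybe-based or List-based decompose recovers the two versions above.

-- Haskell
findProofM :: Monad m => (Judgment -> m [Judgment]) -> Judgment -> m ProofTree
findProofM decomp j =
  do js <- decomp j
     ts <- mapM (findProofM decomp) js
     return (ProofTree j ts)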
A Type Checker with Proof Refinement
Let's now move on to building a type checker. We'll first define the types we're going to use. We'll have natural numbers, product types (pairs), and arrow types (functions). We'll also use a BNF definition as a reference:
<type> ::= "Nat" | <type> "×" <type> | <type> "→" <type>
-- Haskell
data Type = Nat | Prod Type Type | Arr Type Type
  deriving (Show, Eq)  -- Eq is needed to compare types in decomposeHasType below
// JavaScript
var Nat = { tag: "Nat" };
function Prod(a,b) {
return { tag: "Prod", args: [a,b] };
}
function Arr(a,b) {
return { tag: "Arr", args: [a,b] };
}
Next we must specify what qualifies as a program, which we'll be type checking:
<program> ::= <name> | "zero" | "suc" <program>
| "〈" <program> "," <program> "〉"
| "fst" <program> | "snd" <program>
| "λ" <name> "." <program> | <program> <program>
-- Haskell
data Program = Var String
| Zero | Suc Program
| Pair Program Program | Fst Type Program | Snd Type Program
| Lam String Program | App Type Program Program
deriving (Show)
// JavaScript
function Var(x) {
return { tag: "Var", arg: x };
}
var Zero = { tag: "Zero" };
function Suc(m) {
return { tag: "Suc", arg: m };
}
function Pair(m,n) {
return { tag: "Pair", args: [m,n] };
}
function Fst(a,m) {
return { tag: "Fst", args: [a,m] };
}
function Snd(a,m) {
return { tag: "Snd", args: [a,m] };
}
function Lam(x,m) {
return { tag: "Lam", args: [x,m] };
}
function App(a,m,n) {
return { tag: "App", args: [a,m,n] };
}
The programs built with Fst, Snd, and App have more arguments than usual because we want to do only type checking, so we need to supply information that's not present in the types when working in a bottom-up fashion. We'll see how we can avoid doing this later.
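For instance, here is what the extra annotation looks like on an application node (a hypothetical example, not from the post): the argument type has to be recorded on the App node so that bottom-up checking knows what type to check the argument against. Checking this program against Nat then decomposes into checking λx.x against Nat → Nat and zero against Nat.

-- Haskell
identityAppliedToZero :: Program
identityAppliedToZero = App Nat (Lam "x" (Var "x")) Zero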
We'll also make use of a judgment HasType G M A, where G is a type context that assigns Types to variables (represented by a list of string-type pairs), M is some program, and A is some type.
-- Haskell
data Judgment = HasType [(String,Type)] Program Type
deriving (Show)
// JavaScript
function HasType(g,m,a) {
return { tag: "HasType", args: [g,m,a] };
}
We can now define the decompose functions for these. We'll just use the Maybe type for the decomposers here, because we don't need non-deterministic solutions: type checkers either succeed or fail, and that's all.
-- Haskell
decomposeHasType
:: [(String,Type)] -> Program -> Type -> Maybe [Judgment]
decomposeHasType g (Var x) a =
case lookup x g of
Nothing -> Nothing
Just a2 -> if a == a2
then Just []
else Nothing
decomposeHasType g Zero Nat =
Just []
decomposeHasType g (Suc m) Nat =
Just [HasType g m Nat]
decomposeHasType g (Pair m n) (Prod a b) =
Just [HasType g m a, HasType g n b]
decomposeHasType g (Fst b p) a =
Just [HasType g p (Prod a b)]
decomposeHasType g (Snd a p) b =
Just [HasType g p (Prod a b)]
decomposeHasType g (Lam x m) (Arr a b) =
Just [HasType ((x,a):g) m b]
decomposeHasType g (App a m n) b =
Just [HasType g m (Arr a b), HasType g n a]
decomposeHasType _ _ _ =
Nothing
decompose :: Judgment -> Maybe [Judgment]
decompose (HasType g m a) = decomposeHasType g m a
// JavaScript
function lookup(x,xys) {
for (var i = 0; i < xys.length; i++) {
if (xys[i][0] === x) {
return Just(xys[i][1]);
}
}
return Nothing;
}
// Structural equality on types; decomposeHasType below assumes
// an eq helper like this (a minimal version, assuming the tagged
// objects Nat, Prod, and Arr defined above).
function eq(a,b) {
if (a.tag !== b.tag) { return false; }
if (a.tag === "Nat") { return true; }
return eq(a.args[0], b.args[0]) && eq(a.args[1], b.args[1]);
}
function decomposeHasType(g,m,a) {
if (m.tag === "Var") {
var ma2 = lookup(m.arg, g);
if (ma2.tag === "Nothing") {
return Nothing;
} else if (ma2.tag === "Just") {
return eq(a,ma2.arg) ? Just([]) : Nothing;
}
} else if (m.tag === "Zero" && a.tag === "Nat") {
return Just([]);
} else if (m.tag === "Suc" && a.tag === "Nat") {
return Just([HasType(g, m.arg, Nat)]);
} else if (m.tag === "Pair" && a.tag === "Prod") {
return Just([HasType(g, m.args[0], a.args[0]),
HasType(g, m.args[1], a.args[1])])
} else if (m.tag === "Fst") {
return Just([HasType(g, m.args[1], Prod(a,m.args[0]))]);
} else if (m.tag === "Snd") {
return Just([HasType(g, m.args[1], Prod(m.args[0],a))]);
} else if (m.tag === "Lam" && a.tag === "Arr") {
return Just([HasType([[m.args[0],a.args[0]]].concat(g),
m.args[1],
a.args[1])]);
} else if (m.tag === "App") {
return Just([HasType(g, m.args[1], Arr(m.args[0],a)),
HasType(g, m.args[2], m.args[0])]);
} else {
return Nothing;
}
}
function decompose(j) {
if (j.tag === "HasType") {
return decomposeHasType(j.args[0], j.args[1], j.args[2]);
}
}
Using the same definitions of proof trees, sequencing, and proof finding that we had earlier for Maybe, we can now perform some type checks on some programs. So for example, we can find that the program λx.x does indeed have the type Nat → Nat, and that it does not have the type (Nat × Nat) → Nat.
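Concretely, with the Maybe-based findProof those two checks look like this (a sketch of the calls and their results):

-- Haskell
findProof (HasType [] (Lam "x" (Var "x")) (Arr Nat Nat))
= Just (ProofTree (HasType [] (Lam "x" (Var "x")) (Arr Nat Nat))
    [ProofTree (HasType [("x",Nat)] (Var "x") Nat) []])

findProof (HasType [] (Lam "x" (Var "x")) (Arr (Prod Nat Nat) Nat))
= Nothing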
Conclusion
That about wraps it up for the basics of proof refinement. One thing to observe about the above system is that it's unidirectional: information flows from the bottom to the top, justifying expansions of goals into subtrees. No information flows back down from subtrees to parent nodes. In my next blog post, I'll discuss how we might go about adding bidirectional information flow to such a system. This will allow us to perform type checking computations where a judgment might return some information to be used for determining if a node expansion is justified. The practical consequence is that our programs become less cluttered with extraneous information.
If you have comments or questions, get in touch. I'm @psygnisfive on Twitter, augur on freenode (in #languagengine and #haskell).
This post is the first part of a three part series, the second part is here, and the third part is here.
Thoughts on an ontology of smart contracts
6 March 2017
The concept of smart contracts has grown considerably since the birth of Ethereum. We've seen an explosion of interdisciplinary research and experimentation bundling legal, social, economic, cryptographic and even philosophical concerns into a rather strange milieu of tokenized intellect. Yet despite this digital Cambrian explosion of thought, there seems to be a lack of a unified ontology for smart contracts. What exactly is an ontology? Eschewing the philosophical sense of the word, an ontology is simply a framework connecting concepts or groups, along with their properties, to the relationships between them. It's a foundational word, generally an attempt at bedrock for a topic. For example, it's meaningful to discuss the ontology of democracy or the ontology of mathematics.
Why would one want to develop an ontology for smart contracts? What is gained from this exercise? Is it mostly an academic exercise or is there a prescriptive value to it? I suppose there are more questions to glean, but let's take a stab at the why.
Smart contracts are essentially two concepts mashed together. One is the notion of software. Cold, austere code that does as it is written and executes for the world to see. The other is the idea of an agreement between parties. Both have semantical demands that humans have traditionally had issues with and both have connections to worlds beyond the scope in which the contract lives.
Much of the focus of our current platforms, such as Ethereum, is on performance or security, yet abstracting to a more ontological viewpoint, one ought to ask about semantics and scope.
From a semantical perspective, we are trying to establish what the authors and users of smart contracts believe to be the purpose of the contract. Here we have consent, potential for non est factum style circumstances, a hierarchy of enforceability and other factors that have challenged contract law. What about cultural and linguistic barriers? Ambiguity is also king in this land.
Where normal contracts tend to pragmatically bind to a particular jurisdiction and set of interpretations with the escape hatch of arbitration or courts to parse purposeful ambiguity, decentralized machines have no such luxury. For better or worse, there is a pipeline with smart contracts that amplifies the semantical gap and then encapsulates the extracted consensus into code that again suffers from its own gap (Loi Luu demonstrated this recently using Oyente).
Then these structures presume dominion over something of value, whether that dominion be over data, tokens or markers that represent real-life commitments or things such as deeds or titles. For the last category, like software giving recommendations to act on something in the physical world, the program can tell one what to do, but someone still has to do it.
So we have an object that combines software and agreements that has a deep semantic and scope concern, but one could add more dimensions. There is the question of establishing facts and events. The relationship with time. The levels of interpretation for any given agreement. Should everything be strictly speaking parsed by machines? Is there room for human judgement in this model (see Nick Szabo, Wet and dry code and this presentation)?
One could make a fair argument that one of the core pieces of complexity behind protocols like Ethereum is that it actually isn't just flirting with self-enforcing smart contracts. There are inherited notions from the Bitcoin ecosystem such as maximizing decentralization, maintaining a certain level of privacy, the use of a blockchain to order facts and events. Let's not even explore the native unit of account.
These concepts and utilities are fascinating, but contaminate attempts at a reasonable ontology that could be constructive. A less opinionated effort has come from the fintech world with both Christopher Clack's work on Smart Contract Templates and Willi Brammertz’s work on Project ACTUS. Here we don't need immutability or blockchains. The execution environment doesn't matter as much. It's more about consensus on intent and evaluation to optimize processes.
What about the relationship of smart contracts with other smart contracts? In the cryptocurrency space, we tend to be blockchain focused, yet this concept actually obfuscates that there are three data domains in a system that uses smart contracts.
The blockchain accounts for facts, events and value. There is a graph of smart contracts in relation to each other. Then there is a social graph of nodes or things that can interact with smart contracts. These are all incredibly different actors. Adding relays into the mix, one could even discuss the internet of smart contract systems.
Perhaps where an ontology could be most useful is on this last point. There seems to be economic value in marrying code to law for at least the purpose of standardization and efficiency, yet the hundreds of implicit assumptions and conditions upon which these systems are built need to be modelled explicitly for interoperability.
For example, if one takes a smart contract hosted on Rootstock and then via a relay communicates with a contract hosted on Ethereum and then connects to a data feed from a service such as Bloomberg offers, then what's the trust model? What assumptions has one made about the enforceability of this agreement, the actors who can influence it and the risk to the value contained? Like using dozens of software libraries with different licenses, one is creating a digital mess.
To wrap up some of my brief thoughts, I think we need to do the following. First, decouple smart contracts conceptually from blockchains and their associated principles. Second, come to grips with the semantic gap and also scope of enforcement. Third, model the relationships of smart contracts with each other, the actors who use them and related systems. Fourth, extract some patterns, standards and common use practices from already deployed contracts to see what we can infer. Finally, come up with better ways of making assumptions explicit.
The benefits of this approach seem to be making preparations for sorting out how one will design systems that host smart contracts and how such systems will relate to each other. There seems to be a profound lack of metadata for smart contracts floating around. Perhaps an ontology could provide a coherent way of labeling things?
Thanks for reading,
Charles
This article is inspired by my recent visit to a blockchain technology conference and my discussions with colleagues about ideas to improve blockchain. Most of the conference speakers were from big Russian banks and their talks were about blockchain use cases, mainly as databases or smart contract platforms. However, none of the speakers were able to answer the question, 'why do they really need blockchain?'. The correct answer to this question recently came from the R3 CEV consortium: "no blockchain because we don't need one". Blockchain is not needed for banks; it is needed instead of banks. It is required for decentralized systems only: applications with a trusted party will always be more efficient, simpler, and so on. The meaning of decentralization has been widely discussed (see, for example, this post from Vitalik Buterin) and it is the only real purpose of blockchains. In this blog post I'm going to discuss the degree of centralization of existing cryptocurrencies and the reasons leading to it.
Governance and development centralization
Let's start with a non-technical topic. It is nice to think that no-one controls a blockchain, ie that network participants (miners) act as a decentralized community and choose the direction of development. In fact, everything is much worse.
The first source of centralization here is the process of protocol improvement. Only a small group of core developers is able to accept changes to the source code or even understand some protocol improvement proposals. No-one works for free, and the organization that pays the core team in fact controls the cryptocurrency's source code. For example, Bitcoin development is controlled by Blockstream, and this organization has its own interests. A treasury system like the one for Dash, or the one proposed for Ethereum Classic, may be the solution here. However, a lot of questions are still open (for example, the 78-page ETC treasury proposal is quite complicated, while the Dash treasury system was developed without any documentation).
The next centralization risk in governance is the cult of personality. While Vitalik tells us in his post that no-one controls cryptocurrencies, his opinion is so important to the Ethereum community that most of its members accepted the bailout of the DAO, which breaks the basic immutability principle of cryptocurrencies.
Finally, there are a lot of interested parties behind cryptocurrencies, and the opinion of some of them (for example the end users of the currency) is usually ignored. In any case, the development of cryptocurrencies is a social consensus, and it is good to have a manifesto declaring its purpose from the start.
Services centralization
One of the biggest problems with existing cryptocurrencies is the centralization of services. Blockchain processing is heavy (eg Ethereum processing from the genesis block may take weeks) and regular users that just want to send some coins have to trust centralized services. Most Bitcoin users trust blockchain.info, Ethereum users trust myetherwallet and so on. If these popular wallets were to be compromised, users’ funds would be lost.
Moreover, most users trust blockchain explorers and never check that the blocks shown in them are correct. What is the meaning of the "decentralized" social network Steemit, if almost none of its users download the blockchain and instead believe that the data on Steemit.com is correct? Or imagine that blockchain.info was compromised: an attacker could steal all the users' money from their wallets, hide the criminal transactions, show user-created transactions in the block explorer, and the attack could go unnoticed for a long time. Thus, trust in centralized services produces a single point of failure, allows censorship and puts user coins in jeopardy.
Mining centralization
With popular cryptocurrencies, hardware requirements are high even just for blockchain validation. Even if you own modern hardware able to process blocks quickly, your network channel may not be wide enough to download new blocks fast enough. This leads to a situation where only a few high-end computers are able to create new blocks, which leads to mining centralization. Despite being open by design, Bitcoin mining power is now concentrated in a limited group of miners, who could easily meet and agree to perform a 51% attack (or just censor selected transactions or blocks). Mining pools worsen the situation: in Bitcoin, for example, just five mining pools control more than 50% of the hash rate (at least if you believe blockchain.info). Another option for miners is to skip transaction processing and produce empty blocks, which would also make the blockchain meaningless.
Proof-of-Stake is usually regarded as more hardware friendly; however, a really popular blockchain requires a wide network channel to synchronize the network anyway. Also, it is usually unprofitable to keep a mining node online in PoS, so just a small percentage of coins are online, which makes the network vulnerable. This is usually fixed by delegating mining power to someone else, which also leads to a small number of mining nodes.
Centralization as a solution
The scariest part of all this is that, more and more often, centralization is regarded as a solution to some of the problems of cryptocurrencies. To fix scalability issues, cryptocurrencies propose to use a limited number of trusted "masternodes", "witnesses", "delegates", "federations" and so on to "fix" the too-large number of mining nodes in the network. The number of these trusted nodes may vary, but by using this method to fix scalability issues, developers also destroy the decentralized nature of the currency. Eventually this would lead to a cryptocurrency with only one performing node, which processes transactions very efficiently, without confirmation delays and forks, but then suddenly a blockchain is not needed, as in R3's case.
Unfortunately, most users are not able to spot the lie in cryptocurrencies and like these centralized blockchains more and more, because, for sure, the centralized way is (and will always be) simpler and more user-friendly.
Conclusion
We are going to see more centralized cryptocurrencies, and that will inevitably lead to mass disappointment in blockchain technology, because it is not needed for centralized solutions. It is still a user's choice whether to believe a beautiful and fast web interface or to use trustless but cumbersome software that requires you to download and process blockchain data.
Most centralization risks may be fixed if trustless full nodes, wallets and explorers are cheap to launch and easy to use. I'm not going to propose a solution in this paper, but I hope one is coming soon!
Scotland and Japan launch IOHK's research network
28 February 2017
L-R: Nikos Bentenitis, IOHK Chief Operating Officer; Aggelos Kiayias, IOHK Chief Scientist; Charles Hoskinson, IOHK Chief Executive Officer and Co-Founder; Johanna Moore, Head of the School of Informatics, University of Edinburgh; Jon Oberlander, Assistant Principal for Data Technology, University of Edinburgh

Research is at the core of what IOHK does, so I am extremely proud of the company's achievement in launching blockchain research centres at two world-class universities in February. This is a recognition of the pioneering work that IOHK is doing in advancing the science of cryptocurrencies, producing research that will all be open source and patent-free and progress the industry as a whole.
The lab will provide a direct connection between developers and researchers, helping to get projects live faster and will pursue outreach projects with entrepreneurs in Edinburgh’s vibrant local technology community. Recruiting and outreach will begin immediately, and the full facility will be operational from summer 2017, located in the School of Informatics’ newly refurbished Appleton Tower.
Professor Kiayias says: “We are very excited regarding this collaboration on blockchain technology between the School of Informatics and IOHK. Distributed ledgers is an upcoming disruptive technology that can scale information services to a global level. The academic and industry connection forged by this collaboration puts the Blockchain Technology Lab at Edinburgh at the forefront of innovation in blockchain systems.”
Two of our top researchers, Mario Larangeira and Bernardo David, will be embedded into a team led by Professor Tanaka at Tokyo Tech's main site, the Ookayama campus. The team, along with professors and graduate students, will tackle industry challenges in this rapidly developing area of research into cryptocurrencies and provide education for the Japanese market. The partnership also includes support for students and researchers to attend international conferences.
Tokyo Tech President, Yoshinao Mishima, says: “This agreement is important because Tokyo Tech is seeking to enhance the collaboration with industries and universities in Japan and abroad by producing groundbreaking results in research and engineering which will be published in internationally renowned scientific journals and conferences.”
These launches are just the beginning of a global network of research centres that IOHK is building, to drive collaboration between the world’s best cryptographers and developers to create the cutting edge blockchain technology that will revolutionise the world’s financial services. Further centres are planned in the US, Europe and beyond. Expect more news this year and in 2018.
A Joint Statement on Ethereum Classic’s Monetary Policy
27 February 2017
I'd like to thank everyone from the Ethereum Classic community for their support. I'm really excited to present the following statement which is in reference to monetary policy for the ETC platform. The Ethereum Classic community has grown substantially since its inception in July of 2016. In order to further Ethereum Classic’s vision, the community needs to adopt a monetary policy that balances the long-term interests of investors, developers, and business operators.
This statement represents a collaborative show of support for the monetary policy proposed in ECIP 1017. As industry stakeholders, ETC community members, exchange operators, mining pool operators, miners, wallet providers, developers, the Distributed Autonomous Coalition Asia (DACA) and the China Ethereum Classic Consortium (ECC), and other industry participants; we are committed to jointly planning and implementing a road map to effect the future ETC monetary policy.
We have agreed on the following points:
We understand that a monetary policy has been proposed that establishes an upper bound on the total number of ETC that will ever be issued, and that this policy was the result of extensive discussions within the community. The policy also defines a method of reducing the block reward over time. It is available in English here and in Chinese here.
The new monetary policy sets a limit for the total ETC issuance. The block reward will be reduced by 20% at block number 5,000,000, and another 20% every 5,000,000 blocks thereafter. Uncle block rewards will also be reduced. Due to variations in the reward rate of ETC, we anticipate the total supply to be approximately 210 million ETC, not to exceed 230 million ETC.
We will continue to work with the Ethereum Classic protocol development community to develop, in public, a safe hard fork procedure based on the proposed monetary policy. ETCDEV Team will provide an implementation of the monetary policy for both the geth and parity clients after the hard fork procedure is agreed upon.
We will run consensus systems that are compatible with Ethereum Classic clients, which will eventually contain the ECIP 1017 monetary policy and the hard-fork, in production.
Based on the above points, the timeline will likely follow the below dates.
Geth and Parity clients that include the updated monetary policy are expected to be released in June 2017.
If there is strong network-level support, the hard-fork activation will likely happen around Fall 2017.
If the hard fork is activated, the first reduction in block reward will happen around December 2017.
Our common goal is to make Ethereum Classic a success. We believe that the community can unite and work together to achieve a sustainable global platform.
Together, we are:
- 白洪日, CEO, 币创网bichuang
- 陈刚, CEO, ETCWin & 91pool
- 顾颖(初夏虎) Eric Gu, CEO, ViewFin & szzc.com
- 郭宏才 Chandler Guo, Miner & Investor
- 韩锋博士, Chairman, DACA区块链协会
- 黄天威, CEO, 比特时代BTC38
- 胡洪杰, Vice Chairman, DACA区块链协会
- 李大伟, CEO, CHBTC
- 毛世行, CEO, F2Pool
- 徐刚 博士, Developer
- 许子敬 Ryan Xu, Miner & Investor
- 张淞皓, CEO, BTC123
- 赵千捷, Vice President, BTCC
- 邹来辉Roy Zou, Chairman, 以太坊原链协会 (Ethereum Classic Consortium)
- Igor Artamonov, CTO, ETCDEV Team
- Charles Hoskinson, CEO, IOHK
- Michael Moro, CEO, Genesis Global Trading
- Yates Randall, CTO, epool.io
- Barry Silbert, Founder & CEO, Digital Currency Group & Grayscale Investments
- Cory Tselikis, Miner Investor and Pool Operator, etc.minerhub.io & bec.minerhub.io