Blog > 2021
Building native tokens on Cardano for pleasure and profit
New capabilities will allow users to choose simple and powerful tools to bring their assets to life on Cardano
18 February 2021 9 mins read
With the ‘Mary’ protocol upgrade, which will be implemented using our hard fork combinator technology, native tokens and multi-asset capability are coming to Cardano.
On February 3, we upgraded the Cardano public testnet to ‘Mary’ for final testing. We plan to deploy the Cardano update proposal to mainnet on February 24, which would therefore deploy ahead of the boundary of epoch 250 and take effect on March 1. If we need a few more days of testing, we'll deploy ‘Mary’ the following epoch instead, which will take a five-day period required for updates to take effect. ‘Mary’ has been successfully running on our testing environments for several weeks, so our confidence level remains high. As always, however, we’ll follow a strict process (developed and honed over the previous Shelley and Allegra HFC events) to get this right.
Once the code is successfully deployed to mainnet, we’ll release a new Daedalus Flight version for user testing, which will be our first Cardano wallet with integrated multi-asset capability. Once we are happy with wallet performance and usability, we’ll deliver the Daedalus mainnet release bringing the full-fat native token experience to every Cardano user.
Why native tokens?
Native tokens will bring multi-asset support to Cardano, allowing users to create uniquely defined (custom) tokens and carry out transactions with them directly on the Cardano blockchain.
The use of tokens for financial operations is becoming ever more popular. It can cut costs at the same time as improving transparency, enhancing liquidity, and, of course, being independent of centralized entities such as big banks. Tokenization is the process of representing real assets (eg, fiat currencies, stocks, precious metals, and property) in a digital form, which can be used to create financial instruments for commercial activities.
Cardano will provide many tokenization options. With the ‘Mary’ upgrade, the ledger’s accounting infrastructure will process not only ada transactions but also transactions that simultaneously carry several asset types. Native support grants distinct advantages for developers as there is no need to create smart contracts to handle custom token creation or transactions. This means that the accounting ledger will track the ownership and transfer of assets instead, removing extra complexity and potential for manual errors, while ensuring significant cost efficiency.
Future and utility
Developers, businesses, and applications can create general purpose (fungible) or specialized (non-fungible) tokens to achieve commercial or business objectives. These might include the creation of custom payment tokens or rewards for decentralized applications; stablecoins pegged to other currencies; or unique assets that represent intellectual property. All these assets can then be traded, exchanged, or used as payment for products or services.
Unlike ERC-20 tokens that are based on Ethereum smart contracts, the tracking and accounting of custom tokens on Cardano is supported by the ledger natively. Because native tokens do not require smart contracts to transfer their value, users will be able to send, receive, and burn their tokens without paying the transaction fees required for a smart contract or adding event-handling logic to track transactions.
Working with native tokens on Cardano
In creating an environment for native tokens, we have focused on simplicity of working, affordability, and, of course, security.
Depending on their preferences and technical expertise, users will be able to choose from three ways to create, distribute, exchange and store tokens:
- Cardano command-line interface (CLI). Advanced users can currently access the CLI via a dedicated testing environment. We will deploy the CLI on the mainnet when we hard fork.
- A ‘token builder’ graphical user interface (GUI). This will follow the native token CLI launch, providing an easier way for creating tokens.
- The Daedalus wallet. Daedalus will provide support for sending and receiving custom-created tokens. Daedalus Flight will test native token functionality in March, which will be shortly followed by the mainnet release.
Let’s dig down a little into each option.
Working with Cardano CLI
Advanced developers can use the native tokens testing environment to create (mint) assets and send test transactions to different addresses.
The nature of working with the CLI assumes that someone is familiar with setting up and operating the Cardano node, and has experience in working with transactions and managing addresses and values. To create native tokens using Cardano CLI, one would need to:
- Set up and start the Cardano node
- Configure a relay node to connect to the native tokens testing environment
- Start interaction with the network (prompt Cardano CLI)
- Construct a monetary policy script
- Create tokens using the monetary policy script
- Finally, submit and sign transactions to transfer tokens between addresses.
Native token tutorials and exercises are available on our Cardano documentation site to help developers mint tokens, create monetary policies, and learn how to execute multi-asset transactions.
We are already seeing particular interest from stake pool operators for this. So far, hundreds of test tokens have been created, and we continue to improve the CLI based on feedback. We welcome your comments and encourage community testing.
Token builder: a user-friendly GUI for token creation
The CLI requires a certain level of development prowess. So we have devised other ways for less technically proficient users to create tokens. To achieve this, we plan to launch a token builder after the mainnet CLI launch.
The token builder is a graphical user interface that makes token creation easier. If you’re interested in creating tokens for your decentralized application, wish to tokenize your property, create NFT collector cards represented as specialized assets, or want to create a stablecoin pegged to the value of other currencies, the token builder can help with that.
To create a token you would just need to fill in:
- The token name (eg, Hello World)
- The token symbol (eg, HEW)
- The token icon (generated automatically)
- Amount to create (eg, 1,000)
- Cardano wallet address (your address to host newly created tokens).
The token builder generates the monetary policy automatically – you won’t need to define it yourself. This streamlines the token creation and simplifies it for a non-technical user.
Figure 1. The prototype token builder dashboard
Initially, the token builder will be supporting only fungible token creation (while non-fungible tokens can be created using Cardano CLI). In time, we’ll extend the functionality to allow creating non-fungible tokens and changing the monetary policy according to specific preferences. This means that users will be able to specify the conditions under which tokens are minted (or burned), or who has control over the asset supply, for example.
Finally, when tokens are minted, it will be possible to mint more by clicking the ‘Mint more’ button. This can be done based on the same policy to create more tokens of the same kind, or you can create other tokens that represent different values based on a different policy. For example, you can create more Hello World tokens, or, starting from scratch, you can create 500 ‘test’ tokens that will be used for other purposes (these will have a different minting policy).
The token builder aims to reduce the complexity of token creation and also focuses on the enhancement and visual presentation of functional processes. As an outcome, we aim to provide visibility around all the tokens created, their values, quantity, and addresses between which they are being transferred – all in one place.
Daedalus
Those users who do not wish to create their own tokens but who want to use existing ones for payments, purchases or exchange, will be able to use such wallets as Daedalus, and later Yoroi.
The Daedalus team continues to work on integrating the wallet backend with the user interface to support the native token functionality. Users will be then able to hold native tokens in their wallets, send and receive them as they would do with ada.
Native tokens are uniquely identified by two hexadecimal numbers stored on-chain ‒ the Policy ID and the Asset Name. Considering that these numbers are not 'human-friendly', we have created fingerprints for easier identification of native tokens by users. Fingerprints are 44 character long alphanumeric strings beginning with the prefix 'token'.
Additional token data displayed in the wallet UI (name, description, and acronym) will be provided by the Cardano token registry, administered initially by the Cardano Foundation.
Figure 2. Daedalus native tokens UI
Native token lifecycle
When all the necessary components are deployed, the native token lifecycle will be complete. It consists of five phases:
- minting
- issuing
- using
- redeeming
- burning.
Figure 3. Native token lifecycle phases
During these phases, asset controllers will be able to define the policy for the asset class and authorize token issuers to mint or burn tokens. Token issuers can then mint tokens (for applications, for instance), maintain their circulation, and issue them to token holders. Finally, token holders (eg, individual users or exchanges) will be able to send tokens to others, use them for payment, or redeem them when they have finished using them.
What’s next?
We launched the testing environment in December 2020, laying the foundation for native token development. We also added a staging environment to enable initial testing by exchanges and stake pool operators. It features a faucet and allows a network of nodes to be built while connecting to the relays.
Follow our Cardano status updates to see our weekly progress. As we expand the capabilities of the native tokens, and add tools and interfaces, we’ll be providing documentation and tutorials to encourage people to get involved. Naturally, the codebase is open source and we have already seen a number of interesting community projects emerge (around digital collectibles, for example).
So a lot will be happening in late February and early March, from final testing and the HFC event, to native tokens on Cardano within a brand new Daedalus wallet experience. Exciting times ahead!
Find out more by joining other community members to discuss native tokens in the Cardano Forum's dedicated native token section. And don't forget to sign up for our devnets program.
Additional technical input by Olga Hryniuk.
Our million-dollar baby: Project Catalyst
The next Catalyst funding round will be our most accessible and ambitious round of funding yet
12 February 2021 3 mins read
We launched Project Catalyst six months ago as a series of experiments to advance on-chain governance and accelerate community-driven innovation on Cardano. The project seeks to achieve the highest levels of community collaboration and to seed the best ideas with development funding via a community-moderated process. Community, innovation, funding, value, growth – Catalyst creates powerful synergies, and ultimately a self-sustaining engine of growth for Cardano’s future.
Each funding round has grown in its scope, level of funding, and community engagement. We already have 7,000 members on the IdeaScale innovation platform with 1,800 active voters. Adoption is growing by 10% every week and we have only just begun.
Fund4 will be our most accessible and ambitious round yet and our first million dollar round – that’s the size of the ada pot to fund development projects on Cardano. Proposal teams will use these funds to develop tooling, build decentralized applications, launch education and training initiatives for developers, and so much more. Every fresh contribution adds fresh value to the ecosystem. And since the community is at the core of Catalyst, 20% of treasury funds are set aside to reward and incentivize community advisers, referrers and participating voters for their contribution.
Throughout 2021, we will continue to encourage engagement with the project across the Cardano community by making it more accessible. In Fund3, voter registration has been significantly improved. Registration is now fully integrated with the Daedalus wallet, within a new registration center. This replaces a separate user-unfriendly and time-consuming process we had to use in Fund2 for technical reasons, now addressed. For Yoroi light wallet users, a browser extension provides easy registration. Voters will then use a dedicated mobile voting app – downloadable on iOS or Android – to complete the process. In a future Daedalus release, users will ultimately be able to register and vote from the wallet. To participate in voting you need to meet a threshold currently set at 3,000 ada - a threshold set to help protect the voting system from malicious attacks. To get a Cardano wallet, make sure to download Daedalus only from its official site or use the official Yoroi browser extension.
In less than half a year, Project Catalyst has grown to become the world’s largest decentralized autonomous organization (DAO). It is a fulcrum of future development and sustainable innovation, driven by the Cardano community, for the Cardano community. This latest fund is a huge step up for the proposers, advisers, and voters collaborating already. We want to encourage everyone to become part of bringing on-chain governance to Cardano.
If you are an ada holder and you want to influence and contribute to the future direction for Cardano, then bring your ideas and join us at Project Catalyst.
*Please note, due to an editing error, a previous version of this blog erroneously stated that voter registration and voting would be included in the forthcoming Daedalus release. Our apologies for any confusion.
Decentralizing social media: a conversation with Ben Goertzel and Charles Hoskinson
The minds behind SingularityNET and Cardano come together to explore a vision of the future of decentralization, AI, and social media.
5 February 2021 57 mins read
At the end of 2020, we announced our collaboration with SingularityNET, in an exclusive fireside chat between Charles Hoskinson and SingularityNET founder & CEO, Ben Goertzel.
SingularityNET recently shared further information on the partnership when they announced their exciting Phase Two initiative, which includes a shift from Ethereum to Cardano, to achieve new network functionalities and launching a Stream of New AGI-ADA Tokens.
Last week, Charles and Ben sat down again together in a special SingularityNET podcast. In a wide-ranging discussion, the pair explore decentralized social media, the Cardano collaboration, and how a benevolent general AI technology might help a healthier social discourse.
Here, in this exclusive long read, we have transcribed the whole conversation for you to enjoy and savour.
Ben Goertzel: Alright. Pleasure to be chatting once more Charles. And I thought it'll be amazing to have an on air discussion on the topic that's been in so many people's minds recently, which is the perhaps of critical importance of decentralization for social media and social networks, because this is something we both been thinking about deeply for quite a long time and have been both moving toward action on for quite a long time in our own ways, maybe the AI spin and you with Cardano and blockchain. But now things seem to be coming to a head and the world seems to suddenly be concerned that a few large corporations are programming everyone's brains in bizarre ways. So, yeah, maybe it is cool to start out just by hearing your overview of the topic.
Charles Hoskinson: Yeah, it's an interesting situation. So I'm kind of conflicted. So, I'm a big libertarian and the libertarian guys say, "Hey, let the market decide. So when someone gets de-platformed, we say, "Hey, it's a private company. They can do whatever they want." But the issue is collusion and so the watershed moment for me wasn't the de-platforming of Trump. I said, yeah, okay the guy violated the end user license agreement probably 9 million times. At some point you have to throw the guy out. The issue was the de-platforming of Parler, because that was a very different animal.
So the whole argument was, well, if you don't like Twitter, go compete with it, build your own social network. That's exactly what Parler did. And they had different moderation standards. But then what occurred was that all of Silicon Valley got together and they colluded and they basically jointly decided to completely de-platform Parler. So Amazon took them down, Apple took them down, Google took them down. And if you're put in a market position where 100% of the mobile market and most of the web market is basically blacklisting you and you have no way to be on a cell phone for an average consumer, no way to have a website for an average consumer without going to extraordinary lengths and it's almost like the pirate bay. You have to host servers in Afghanistan or something to escape it. That's very problematic. It feels like a standard oil controlling the shipping prices of oil back in the 19th century.
BG: The appeal to ethics seems so disingenuous, right? It's like you can search Qanon on garbage on Google just fine. So then why is it so unethical for there to be Qanon garbage on Parler as some of the content, right?
The idea that these big tech companies are acting out of a moral necessity to save everyone's lives. I mean, it rings very hollow, right? And I mean, there's no doubt some people in those companies really are thinking that way. But the alignment of these marginal ethical arguments with obvious corporate profit interests as being advanced by explicit collusion among these big players. It makes it hard to take the ethical aspect one hundred percent seriously.
CH: It's almost become like an ethical tautology in a certain respect. They say 'Don't be evil, except for the times you have to be.' It's a crazy, crazy statement where these companies say, well, we're trying to be moral. And I say, 'Okay, but no one elected you. And why are you guys in charge of the totality and curation flow of all information?' I very firmly believe what needs to happen is we need to split the protocols that carry the information from the interfaces that curate that information. And that feels to be a much more natural thing. The problem we have right now is the stack is vertically completely controlled by a company.
So, Google doesn't just curate what you see in the search engine. They also control the underlying engine. And so as a consequence, they can make a decision on pretty much anything and exclude people laterally. And it's the same for the app stores. It's the same for social networks. The level of collusion is very problematic. I mean, you can't tell me that they didn't talk to each other if they all de-platformed someone the same day in the same hour. It'd be one thing if it was a gradual process where maybe Google and two weeks later, Amazon, something like that. But if it's all exactly at the same time, then it means they picked up the phone and they called each other and say, well, we just decided that this is no good for you.
The problem is that decentralization doesn't solve the underlying problem that they're complaining about, which is radicalization. The issue is that the way information is being presented, it's manipulating our cognitive biases. We're creatures of cognitive biases. No matter how smart we are, we have availability bias, and selection bias and confirmation biases. There's hundreds of them and social scientists, psychologists and neuroscientists, they think about these things and quantify them. And if you digitize those biases and you build algorithms to exploit them, then what ends up happening is you create echo chambers. So you create these silos. Each and every one of those silos they are incapable of getting out of it. There's no idea flow between them. So all you do when you decentralize that, if you don't solve that underlying problem is you make the silos more resilient.
BG: I mean there's a problem when you're applying AIs to learn to win in games or video games, which is both a problem and a benefit is that the AI will learn to do what you asked it to do. So if you're asking it to get maximum points in this game, and there's a way to do it by hacking around the rules of the game in some weird way no human would ever think of, the AI will explore various options. And if it's working well, will find some route to achieve the objective function without taking into account whatever implicit constraints you had about what's the artful way to do it.
I think something similar exists with social media companies. They have certain metrics and objectives they're working toward. Often very, systemically internally, right? I mean, they want people to be looking at their site as long as possible, for example, or they want them to be spending as much as possible clicking on ads. And they'll put a lot of human and algorithmic effort into optimizing toward that goal. And then we can't be very surprised that these groups of brilliant people make cool software build systems that are optimizing toward that goal, like via whatever hacks they can find. And those hacks include exploiting human cognitive biases and exploiting dynamics of addiction in the human brain and all sorts of human, emotional patterns. Exploiting human angst and the desperation and existential confusion. I mean the algorithms and the corporate systems will exploit whatever they can to achieve the goals they're given.
And as you say, it's organized so that these corporate organisms, which are now hybrid human and digital computing process organisms. These corporate organisms are almost like a parasite on modern society and they're achieving their own goal pretty effectively. If you took a bird's eye view of human society and where we want to be and where we want to go during the next few years, and maybe leading towards the singularity and creation of AGI and all that. A situation where these corporate human/computer networks orient toward maximizing shareholder value by getting you to buy stuff online and stare at their website as long as possible.
I mean, these sorts of organizations having that much power is not the optimal dynamic for shaping the collective mind, heart and soul of humanity, right? I mean it's pretty far off from where we want to be. You'd imagine that extremism and siloeing and tribalism, which we're seeing online and in real life, I think that's probably the only scratching the surface of the screwed up patterns that are being fostered. That's the surface layer where it's easy to see how screwed up it is. And there's so many other screwed up individual and collective dynamics that are happening. I wouldn't say all caused by this organization of social media in the tech industry, but certainly co-evolving with it and codependent on it.
CH: Well, it's an interesting thing. So I tend to agree with Max Tegmark in this respect where you invent the car first and then you invent the safety belt. With new technology or new processes, there's a lack of wisdom in the safety components of it until after you've suffered the consequences. So, we looked at the oil and gas industry in the 19th century, they started drilling all these wells and only after they started doing that, did we start thinking about environmentalism. And we said, well, maybe it's not such a good idea just to have unrestricted oil well drilling. Maybe we need to think carefully about what this is actually doing to the environment.
Well, the oil of the 21st century is really the attention economy and the data economy. And we have all this surveillance capitalism and we have all these early pioneering firms and they're effectively mining that. And they're creating a social environmental damage by this process, to use an analogy where these algorithms are built and these platforms were built away to exacerbate human nature. So to your point that they didn't cause it, but I'd certainly say that they're exacerbating it and-
BG: I always think of everything in human society from the end game of legacy humanity. Like we're working on creating AGI. If we can create a benevolent AGI, I mean, this is going to make our current problems seem so archaic and silly. Of course, things won't be perfect. There will be new problems we can't imagine there. But this is certainly the biggest threshold event in the history of humanity, perhaps of life on earth. We could be a few decades from that even less. If there's even a decent odds that this singularitarian view is true, I mean then how the collective mind of humanity is shaped is insanely important, right?
Because the first AGI probably isn't going to be just a stupid human, stupid mind in a box, totally separate from human society. The way things are going it's more likely to come out of the interaction of multiple different AI components made by multiple parties, serving useful economic functions in the world at large. If the first AGI, which triggers this singularity is coming out of the whole mess of the tech ecosystem and people using the technology to do useful things, I mean then how messy that mess is, is an extremely important thing. And that right now, the direction does not look like the internet AI tech ecosystem is evolving in a great configuration for spotting a benevolent super AGI, 5, 10 to 20 years from now, right? Maybe some redirection if some of the sub-networks in there, like the ones we're involved with could affect it. Some redirection would be highly beneficial.
CH: Well, the problem with AGI is that that's kind of like the Deus ex Machina situation where you're saying, well, we could solve this problem if we have this insanely powerful tool. And it's like, well, yeah, but maybe we don't actually need a tool that powerful to make meaningful progress towards this problem.
BG: Decentralized social networks you don't need AGI. Absolutely not. You can do a lot with blockchain networks.
CH: Hang on. So I think an AI solution does provide a lot of value, but I look at it more like a cognitive crutch. So if you injure your leg you get on crutches or you walk with a cane or something like that. I recently had a gout attack and for two weeks I was on a cane. So it's kind of funny. We physically think about this, but for the mental stuff, we don't really think we need it. We say, oh, our brains are perfectly well functions. Like no, we're dopamine addicts. We're constantly manipulated by digital devices and we're in a situation where we're not acting rationally or objectively most of the time.
BG: With access to our hardware and software. We can't fix the bugs in the direct way.
CH: So the question is, what would be the simplest possible agent, intelligent agent that could be constructed that could act as a cognitive crutch to alert us if we are being manipulated or our behavior is exhibiting patterns that have been propagandized. That feels like it would be a massive step forward.
BG: Now we're getting it. Some of this stuff that I'm hoping we will be able to build together with a SingularityNET on the Cardano network over the next few years. I mean, if you look at intelligent virtual assistants now like an Alexa or Google assistant, I mean, A: these things are very stupid in many senses, right? I mean, I have a Google Home Max. I used to play music in my house and the system still hasn't realized I never listen to music with vocals during the day. I mean, it doesn't have that metadata there. It hasn't recognized that very simple pattern, so repeatedly throw stuff at me. I won't listen to it. It's not even able to understand extremely simple repeated patterns in human behavior, which would help them make more money, even by showing me more stuff I want to listen to, right?
So these systems are optimized very narrowly to serve certain functions and their functions certainly are not to help us navigate the universe of the internet and media, in a way that's optimal for our own growth and self understanding, achieving our own goals and optimizing the collective intelligence of humanity. Very, far from it. So one could envision a personal assistant that had a bit more general intelligence. So it understood at least a little bit of what we actually want and are doing, but also was not controlled by a mega corporation with the primary goal of making them money, but was controlled by us who were being assisted by the personal assistant, right?
I mean, I don't want the human personal assistant working for me, helping me do things whose main goal is to make some other corporation money, right? I want the human personal assistant working for me whose goal is to help me because I hired them to help me, right?
And we should have digital assistants like that and they're going to be building machine learning models of everything we're doing like a human assistant builds their own biological model of what their employer is doing. And we should be better than the human assistant. We should be able to explicitly inspect what that model is and edit and correct it if we didn't like it and delete that model if we want to, right? So, I mean, we need among other things, we need intelligent virtual agents to help guide our navigation of the whole internet information sphere, which are secure and decentralized and explainable to us. The thing is we can do that without AGI. We can do that with technology we have right now, and this technology can help along the path toward AGI.
CH: Where do we get the training data from? That was the one thing I was thinking about is how do I train an agent like that?
BG: I mean it's going into smartphones that we use all day, right? So the training day that Google and Amazon and so forth are using, where does it come from? It comes from all of us. In principle, you can download most of what Google is basing its training data on you on, but very few of us are doing it. We're not using it, right? So, I mean, clearly you need all the data that you're using to interact with devices and with people all day. I mean you need that data to be in a secure data world that's owned and controlled by you where you're confident it's being managed and secure. Yeah, but we got to get a little deeper. I mean, it's not just interaction use. You'd have to clearly show an example of confirmation bias to an extent that an ML model would be able to understand that. And so how do you do that in an unsupervised way?
BG: We show it all the time, right? And I mean if the AI has a view of a lot of people, I mean, even those of us who are especially clever in some ways and our basic human social, emotional interactions, there's a lot that we do, which is the same.
Emotional interactions. There's a lot that we do, which is the same as a lot of other people are doing, right? Like in how you interact with an employee versus a romantic partner or a friend or someone who's arguing with you. I think the sort of dialogue meta games and the inner dialogue meta games that people are playing, they're within the scope of current advanced neuro AI tech to recognize it's just, that's not what's being focused on. What's being focused on is recognizing subtle patterns and who's going to click on what ad. And I mean, you don't need to tell that to predict who's going to click on what ad in the most concise and effective way. I mean, you don't care. Right?
It's just a principle problem that the tech industry is not currently trying very hard to solve, but yeah, you're right. You focus on the AI part and I focus on the blockchain part. But in reality, I mean, you need them because I guess the other guy's part is harder because we understand how to solve our part. But I mean, you need both of them. I mean, you need the secure, scalable data ecosystem, respecting data sovereignty and you need that to fuel intelligent virtual assistants that really serve the person that they're assisting is the prime directive. Plus this massive scale data analytics that really understands what's going on with each of us in a way that lets it genuinely help us.
Because what is giving a person what they want? Does it mean gratifying their most intense short-term impulse at each moment? Or does it mean giving them what they want in a sort of balance along multiple timescales? Which is at multiple levels of our being, which is what we try to do with our family and our human friends. And AI's, they're laughably far from making an effort to give us what we want and in the more profound sense at the moment.
CH: Right? Well, the reason why I was focusing on the AI part is the biggest part, the blockchain part, the incentives engineering relies very heavily upon the users and the agents inside the system. And so we say, "Okay, how do we incentivize people to supervise and curate data and agents in a way that we get more dialogue and we get a great moderation?" The ideal form would be, if you take clique's that are disjoint and you put them in the system, then idea flow starts occurring between them. And over time they'll converge into kind of a great moderated middle.
So you can take a very extremist person and either the system acts like it has an immune system and it kind of kicks them out or that node over time, moderates. The incentives in the system have to be designed that way. The reason we have so many problems in my view with Facebook and Twitter is that it actually has the opposite incentive. You get more clicks and more interaction with the more polarized people become. So the system is built in a way to polarize people as much as possible and thus divide them as much as possible. Because it's actually boosting revenue.
BG: I think that's an easier problem to solve. Righteous indignation and the glorious feeling being approved by others in your ingroup and jointly indignant of the guys in the next group. This is a really easy emotion to manipulate with people. It's sort of a low hanging fruit. And to an extent these networks implicitly got stuck in manipulating this low-hanging fruit because it was the easiest way to keep people staring at their app. I mean, just as the internet settled on porn with love, it's been with. Because that's a really low algorithmic complexity way to keep people staring at something, is to show them naked bodies. So, if something would give greater benefit and even get people to start their site longer in the long run, but it isn't quite as simple of a problem, it sort of gets bypassed in the loop of trying to incrementally achieve these metrics more and more each month.
And what's interesting is that the thought that rearranging sort of the configuration of the tech stack as you suggest in the beginning of the conversation, so like rearranging the tech stack so that the protocols are separate from the applications and then the AI models and tools used to create the AI models and inspect the AI models, they're also separate from the applications. I mean, reorganizing things in this way, then it sort of opens up the dynamics of the whole ecosystem in a way that I believe has decent odds of leading to the evolution of social media tools that they give people what they want in a more profound sense. And in doing so, they're creating communication networks among people that are not focused entirely on sort of immediate gratification of the ego and soaking of inter tribal rivalries and so forth.
Because all these good and beautiful things we're alluding to, exist on the internet right now. They exist on the internet right now. There's love, there's compassion, there's true connection between people with rival political views or from different historical tribes and so forth. It's not that we're not capable of that or that it isn't there. People are capable of amazing deep connections with other people and have incredible self-awareness and uplifting of their own consciousness. It's just, you need networks that foster this rather than trying to squash it and channel you into tribalism and immediate ego gratification. And of course neither you or I nor our teams are going to build all the systems that solve this problem. So you're going to create the ecosystem and tool set in which the solutions are going to emerge.
CH: Right. Well, that's the point of incentives engineering is that it's the initial push. And because you don't have friction to slow you down, you tend to accelerate and eventually you get to a great place. I mean, Bitcoin obviously got their incentives engineering right. And they went from a single miner to warehouses of miners all around the world. And now this colossal system. We can argue about the power consumption, but that model was quite competitive to a point that it created a trillion dollar ecosystem. So I often think, "Well, what incentives do we need?" And we kind of have three sets of distinct things we need to accomplish at the same time if the network is going to be sustainable and useful to society.
So one thing is that you would like information to be curated, where it can clearly separate objective reality from the subjective analysis of it and give people a diverse set of viewpoints and understand that stuff is nuanced. So if they get that, then you kind of get rid of the fake news. You also get some consensus in the network of baseline facts. Because right now we live in a reality where people can't even agree to basic things. Some people think coronavirus is a hoax. Some people think vaccines are poisoned, et cetera, et cetera. So there's just disjoint realities that people are in. It used to be we would have one set of facts. We'd agree on that. But then our interpretation of what those means-
BG: It's true. A lot of people really believed Donald Trump had the most people at his inauguration ever, and the New Yorker doctored those images. And of course, sometimes the mainstream media may have distorted something about Trump, but the thing is, that's like an image, right? And people didn't believe the photograph, they believed the photograph was fake. And when you're at that point where people don't believe the photographs, then it's very hard. Then you have to be on the ground there, observing it in a sort of very clear state of mind to believe anything.
So, I mean, I'm not even a realist or materialist fundamentally. I don't know if there is an objective reality. But what people are doing is they're not thinking in a clear and coordinated way about this belief they have or this thing they'd been told. What evidence is that grounded in? What's the process of grounding the abstraction or the claim in the evidence? That process is broken. And it's partly because of AI and advanced informatics tools. Because you can make a deep fake. I mean, it's actually hard to tell if this video is Goertzel and Hoskinson or is this video a deep fake of Goertzel and Hoskinson put up by someone to troll all of us. It's not immediate seeing is believing to tell that you have to think.
CH: Oh yeah. Like the Collider, George Lucas, deep fakes are extraordinary. And that's last generation technology. Where they're going in a few years is going to be socially very damaging because you'll have these perfect simulacrums of major figures and there'll be saying and doing terrible things. So that's the first part, the curation, go ahead?
BG: You need the social network to tell bullshit from reality. So if the social network is broken, then you can't tell because you can't tell by looking, you can only tell by what you read and what others are saying, right?
CH: Right. And I think that's why they're proactively de-platforming people and controlling flow of information because there's a political terror about the consequences of deep fakes and what they're going to do to dialogue.
BG: Yeah, the point they're going to come to.
CH: Yeah. Put a pin in that because there's two more points. So, as I mentioned, the first is just the curation of the information itself. And putting it in a way that it promotes instead of siloing idea flow, idea quality, separation of objective reality from subjective reality. And then when you're looking at the subjective to give you a spectrum of viewpoints, almost like a next-generation Nolan chart to show you different viewpoints.
Okay, so then second there's clearly a data economy that exists. And surveillance capitalism is not just a nice term. It's a multi-billion perhaps trillion dollar economy. It's very valuable to society in certain respects. It allows you to micro target people. It allows you to have more friction-free commerce. You get the right products to the right people. So there's a huge advertising model and that shouldn't go away, but it should respect the privacy of the individual.
So there's been a lot of attempts to explore better ad models like with Brave and BATs, for example. I think whatever social network you create, you have to move in that particular direction where people are able to monetize their data and preserve their privacy, and actually get a share of the profit from the interactions that they have. And then the third design goal has to be the infrastructure itself is horrendously expensive to maintain. I mean, you're talking about petabytes of data. All these systems have N squared plus interactions. And so as your social network gets to a billion people, that quadratic complexity becomes very difficult to curate and manage. So the computational cost of that infrastructure, there's a reason why Google is so big and Amazon is so big and Facebook is so big.
So you somehow have to figure out how you subsidize a decentralized distributed system to curate and store all of that information. And you actually have to make data and users an economic actor, or they get pruned out if they don't contribute enough to the system. And we haven't quite figured out how to do that in a much simpler sense with just smart contracts and these big systems.
I mean, you see things like IPFS and Gollum and other attempts to distribute network and data and storage. But if those protocols are imperfect, and when you talk about a social network, you talk about people posting videos, every day, 4k videos. You talk about people posting pictures every day, sometimes 100s of them, millions of meaningful interactions, even a small clique. If you take an extended family, that's going to be over a month's time, probably a million plus interactions of various things from likes and thumbs up. And then you're adding these intelligent agents that also have to do an enormous amount of processing on a regular basis. And those agents are only going to get more sophisticated and be interacted with a lot. So you have to have a lot of that be handled by the edges, the end user.
BG: Yeah, yeah, yeah, absolutely. And that's hard. I mean, we've been working on that with a project called NewNET, which is spun off of SingularityNET. And I think we understand a lot about the architecture that has to be there and about how this sort of split up machine learning algorithms for this sort of a hardware infrastructure. But there's a lot of work to be done there. There's a lot of avenues for inter-operation of NewNET, SingularityNET, and Cardano there. But I mean, it's hard. It's hard to do computer science and software engineering. And on the other hand, obviously Google and Amazon and Microsoft have solved a lot of really hard large-scale software engineering problems, different ones. But I mean, I think with a fraction of the effort that they put in, I think we can solve that problem.
CH: Yeah, they're cheating because they always have a trusted third party. And so that massively simplifies your protocol. Their problem is easier. This is a harder problem. But on the other hand, computer science and hardware have both advanced a lot since they started doing what they're doing. But yeah, the incentive engineering aspect, incentive design aspect is quite critical and quite fascinating and exists for end users and also just within the developer community. Because I mean, what you see now is the significant majority of AI PGS, and we're going to work for these big tech companies. Or start a startup, which then gets acquired by one of these big tech companies. So the incentive structures of end users and of developers have sort of been channeled. They've been channeled around these large tech companies, which is an amazing achievement. I would be proud if I created one of them. On the other hand, it's not optimal, but it's doing the course of society.
And I mean, this is one thing that interests me, in our own collaboration over the next few years. I've been working with my team in SingularityNET to architect a five-year tentative plan for how to roll out and grow SingularityNET on Cardano platform. I mean, part of this involves the AGI token, the new AGI on the ada token that we're working to launch as a new version of Singularity AGI token. Because we need the AGI token to be the right sort of incentive mechanism, largely on the backend. For AI algorithm developers and for AI application developers who are building these applications backending on the AI, you need the incentivization there to work right in order to create the systems that will be creating the right incentive structures for end users.
BG: And I think things like the Catalyst Program within Cardano or a very interesting step there. I mean, where in Catalyst community members democratically vote with some liquid democracy mechanisms that they vote on, which Cardano projects should get some tokens. And I've been watching and participating now, and then on the Catalyst discussions. And I want to do something that's a lot like that with some added dimensions, for SingularityNET on Cardano for fostering the community and expanding the community to build AI applications on our shared decentralized network. Because you need the right incentive structures on all these different levels and they need to coordinate together, which is hard. But I mean, there I think Tokenomics sort of gives you an advantage over what the big tech has because it's more scriptable and it's more flexible than the money and stock options and the incentive mechanisms they have.
CH: Well, what's so cool about Catalyst is there's at the end of this year, going to be at least probably a $100 million worth of value that's available to the community. And the partnership with IdeaScale is just the beginning. We keep adding more and more firms to assist us with figuring out how to build a productive voting community, because it's not just the raw participation. So we say, "Hey, I think about two, 3% of ada holders are right now in idea scale, Because it's still kind of in a beta form. Our goal is to get that to 50% before the end of the year." But then we were trying to identify what meaningful participation means?
Because I would argue the American election system is not meaningful at all. You just show up and vote, but whether you spent hours thinking carefully about it, or you just voted randomly, it doesn't really matter. And the system doesn't differentiate that. So you end up with very poor outcomes and rational ignorance and a race to the bottom, effectively. So, meaningful participation is something we're definitely very interested in. And our hypothesis is that's going to lead to significantly better funding outcomes. So our return on intention is quite good for the system.
It gives you this M & M thing. It feels so empty without M & M, maintenance and moonshots. So maintenance means that you can maintain the system as it is and iterate and refine and evolve, and moonshots means that you have enough money to go pursue a high risk, high return research. And most great societies do this through some vehicle. It can be the Horizons program, the European Union, or it can be DARPA in the United States where they say, "All right, we're going to throw a bunch of money at some crazy stuff." And the odds are, it's probably not all going to work out. In fact, we seldom get exactly what we want, but then every now and then, we get fiber optic cables and satellites and the internet, and we get self-driving cars, and we get CALO and these other cool things.
The value to any DApp that comes over to Cardano is that you get to reuse the catalyst stack at some point, and then you can start entertaining, "Well, what does a treasury system look like within our ecosystem?" So, let's look three, five years out into the future, and let's say SingularityNET's gotten a lot of adoption. There's tons of transaction volume. You could put a slight tax on each transaction that can go into a treasury system for all the AGI holders. And then suddenly, you now have a mini catalyst just for AGI, and you can follow your own M & M strategy. So one part can say, "Hey, we just want to add more agents and more capabilities," and the other part can say, "Let's go tackle a super hard problem in the AI space." And it's really risky to go chase that problem. It may be the Holy Grail AGI, or it could be a subset of that or a compositional subset where you can decompose that problem to a collection of subproblems, and you're solving one of them. And if you fail, it's okay. And if you succeed, that solution lives in the open domain, and it's not controlled by a company. It's controlled by a protocol, so it's ubiquitously accessible.
BG: So with what we're planning out now with a certain amount of AGI ADA tokens, I think we can do something catalyst based that can help get AI developers on the SingularityNET on Cardano platform and can help build toward both applied narrow AI in domains from social media to medicine, to DeFi as well as other components toward AGI. But there's also much bigger things. Like if you think about it, we're competing with these trillion-dollar companies, right, so I mean, eventually, we need custom hardware for decentralized AGI. If there is enough usage, as you say, a modest fee on usage can, can drive catalyst-based funding of research. And I mean, you could fund the design and prototyping of decentralized AGI chips, right?
I mean, ultimately, we need to be seeding these exponential economic growth processes to the point where there's more wealth in the decentralized AI ecosystem than there is in the centralized AI ecosystem, which sounds very fanciful now. But I mean, I'm a lot older than you. I'm old enough to remember the computer
companies were like Honeywell right? No one believed that PC companies were going to supply them, let alone internet companies like online ad agencies. Right? But this is how things go. And I mean, in the same way, the potential for network effects and exponential growth based on the right incentive mechanisms on multiple layers... The potential is there for a decentralized AI ecosystem to grow much bigger than the current trillion dollar companies. I mean, you just need to see the right growth processes in place. And I think, between our communities and codebases, we're able to see what those are right now, but of course, getting that seeding to work involves an endless number of difficult subproblems, both technological and human.
CH: Right. Well, that's the value of trade. Bob makes the spear, and Alice makes the rope. So one of the things we're trying to focus on in Cardano is abstracting the toolsets and capabilities of the protocol so that each DApp that comes can reuse that, and they don't have to be a domain expert.
BG: That's what got me to fall in love with Cardano in the first place. It's like, this is actually a reasonable software architecture, right? I mean, you're using functional programming. You're breaking things down into pieces. So if I want to take some AI algorithm and make it do homomorphic encryption or multi-party computing, so it runs in a secure and scalable way, I don't need to write all that code myself. There's actually tools within the blockchain infrastructure that are useful as code when you're on the AI level. I mean, Ethereum is super cool. Launching smart contracts into the world was a landmark thing, but I mean, the Ethereum codebase is not like that. There's nothing in there you're going to reference or use within your secure AI layer.
CH: Well, the computation model is just wrong. It's got a global state, and so you can't grow beyond a certain amount.
BG: It's supposed to be a world computer, but you cannot build a functional world computer that way.
CH: No. You have to go from global to local. And then you just have so many problems in that model. In fact, we just had a lecture this morning with Manuel Chavravarty talking about the differences with the extended UTX cell model to the Ethereum style accounts model. And we'll publish that video probably next week, but it just becomes so obviously self-evident that while it's a great proof of concept, the system... First, it can't scale. And second, the use of other utilities comes at the same resources for everything. So whether you're using a voting system, or you're using a stablecoin or a DEX, it all comes from one pool of finite resources. So if one of those resources gets over consumed by a Crypto Kitties, it makes all the other resources in the system more expensive. And that's a bizarre and asinine model. If a catalyst, for example, runs as a side chain of Cardano's... So let's say we have tons of DApps bombarding that, using that for the voting systems for their DApp, that will have no impact at all on the main chain performance.
BG: A hundred US dollars in gas for you to swap transactions.
CH: I know.
BG: And how can you obsolete Wall Street that way? I mean, it's going to be tough, right? But on the other hand, I think the foundational algorithms to get around those problems are there in Cardano. And then, in SingularityNET, we have foundational algorithms for distributing and decentralizing secure AI. So, I mean, I think ingredients are there for what needs to be done. On the downside, none of us has the war chest that Google and Amazon and Apple and Microsoft do, so we have to work around that by being cleverer than them and designing the right incentive mechanisms so that you get positive feedback effects and network effects, and things can really grow. And I think that this year is going to be pivotal actually, but we're going to... I mean, you've got native assets coming out, and we'll be putting AGI token as a native asset, and then a few other SingularityNET spin offs as native assets.
But I mean, we're going to get to a flourishing native asset ecosystem in Cardano, and then SingularityDAO, which is a DeFi system we're building on SingularityNET, I mean, we can use to help coordinate getting liquidity into all these Cardano native assets. I'm super psyched about that coming out publicly because not many people are thinking about what you can do when you have a real programming language as a smart contract framework, which security by design is built in. So, I mean, I think we're really providing stuff that is prepared to explode in an incredible way in 2021.
CH: Yeah. So first about the treasury management, Tesla 2008 was a day away from bankruptcy, and now it's worth more than Toyota, Honda, Nissan, and Ford and GM combined. I mean, it's just crazy how fast they grew. So treasuries can grow exponentially if you get to a certain... It's almost like a standing ovation model where a few people stand up and clap, and then eventually you hit this point, and then everybody just gets up and claps. And it's the same thing, I think, with capital and companies. There's a few pivotal moments that you have where you're just right at this explosive growth, and then boom, the hockey stick happens, and then suddenly you have a lot there. And I think that's happening in the crypto industry. I remember when we hit a billion dollars with Bitcoin, and I was like, "Wow, this is incredible." We could never fathom a trillion dollars. It was a crazy concept, and that had happened within eight years of that point. It took nearly five years for it to get to a billion. So it's extraordinary how quickly things can grow.
Then in terms of the collaboration, getting to that, Plutus is coming very soon, and we have this test net coming out. What we're doing is we're going to beat the hell out of it. So we'd love for your guys to beat the hell out of it with the SingularityDAO.
BG: Beat the hell out of it. That's right. Yeah.
CH: We have it a little easier because we have the hard fork combinator, but your mistakes tend to sit around forever. We made a lot of protocol design mistakes with Byron, and we still have to support them, and we found a really nice way of doing that. But when we release version one of Plutus and the extended UTXO model and the native asset standard, it's probably not going to be perfect, because nothing is. As an engineer, you know version one won't be perfect, yet we have to be backwards compatible: when you go to version two, you still have to support version one. So to me, it's super important that we get as many people as possible, as quickly as possible, to beat the hell out of the native asset standard, and especially Plutus, before we do the next hard fork to bring that in, because I would rather not be backwards compatible with obviously wrong things, as we are with Byron.
So it's great to have you guys around. I know that the code you're going to write is very novel, and it's also going to push the system to its limits. You're going to create a very strong demand for performance and scale, I think. And I can already see several areas where we would like to use AI, for example, transaction fees. We have this fee parameter, and right now that's set with the update system, so the minimum transaction fee is a DDoS parameter. It'd be so cool, once we have oracles and DEXs within the system and some notion of the value of ada relative to the US dollar, to create an automated transaction monetary policy that can take those data points, compare them to other networks in real time, and then try to make sure that we always have a compatible-
BG: This is actually a subtle point that we've been discussing between the SingularityNET platform team and the Cardano platform team, right? Because, I mean, the transaction framework for Cardano now, and what's planned for the coming native assets, is fine for what we're doing with SingularityNET at this moment. But if we want to go to a swarm AI or microservices model, where you have a whole bunch of little AIs and, within a second, one AI is consulting others or creating others... I mean, if you really want to get AI via this dynamic microservices architecture, you want to have this using the blockchain rather than all off on the side. You need a way for some sub-networks to have substantially lower transaction fees, but then you need some system that's intelligent in some sense to regulate and moderate that, because you still need to protect against DDoS attacks and all sorts of other things, right? So there are a lot of areas like that where some machine learning participating in the infrastructure can help a lot. And one of the things it can help with is to make the system better able to manifest the emergence of higher levels of intelligence and learning, so you've got a lot of positive cycles there.
CH: Yeah, and you want it to be deterministic yet dynamic. And you would also like it to be globally aware of competition. So you'd like the agents to be able to parse all the competing blockchains and look at their monetary policies, look at their transaction policies or transaction rates and their relative values to each other, and then be able to pull that into Cardano and form a transaction policy based on that.
BG: It is there, right? I mean, the data is there online. You can download it into your AI, and I think that's quite feasible. So, yeah, going back to decentralized social networks, where we started. I mean, there's been, as you know, and you've looked at this in more depth than me even... There have been loads and loads of attempts to make decentralized social networks. There are dozens of cool projects started by smart, well-intentioned people with the right vision. Obviously, none of them has yet become the next Facebook or Twitter. Some, like Minds.com from Bill Ottman, I think are really cool, but I don't yet log on there even as often as I log onto Facebook, which is not that often, right? I mean, Minds is great. It just doesn't have a critical mass of people yet, although it's done a way, way better job than the vast majority of decentralized social networks, right?
So how do we get Minds and Everipedia and dozens of other decentralized social network platforms, and the new ones that haven't been heard of yet... How do we get these to really take off? And I think we share the conclusion that a lot of what's needed there is to make the underlying stack more amenable to lower-cost, larger-scale operations of the needed kinds, both in data storage and processing distribution, and then the distributed AI also. It's interesting: Jack Dorsey at Twitter has seen this too, and they're looking at making a decentralized protocol and reorganizing the Twitter stack. The question there is, can you really make that work with the incentive structures that are implicit in Twitter as the company that it is?
CH: That's why I separate the base protocol from the interface, like what Steem did. They had the Steem protocol and then Steemit as the interface, and their problem was that they didn't have a full end-to-end monetary policy, so they had value leakage. There was no incentive to buy the token, but they used the token to curate information. Had they solved that problem, it would still be around and much larger. But I think that Twitter can survive with a decentralized social network protocol, because it would just be a very popular, curated interface to it, and they'd still have their network effect. It's just that the customers and the data would be ephemeral; they could flow from one interface to another interface and get that same experience. The problem right now is you have to rebuild the network effect every time you launch a new one of these things. Every time we want to do an internet application, we have to completely rebuild the internet underneath it. It's a preposterous thing, right? Yeah.
BG: It makes sense, and I think it's visionary of Jack Dorsey to even entertain the notion, right? I mean, not many corporations of that scale are willing to-
CH: Well, it's a proactive solution to a big problem he has, because if he plays censor-in-chief and he has to de-platform people from the protocol, then he can never win.
BG: I wouldn't want that job either because, I mean, you've got people that are clearly colluding to kill someone. Fine. You ban them. You have people who are saying stuff that's nasty but not yet criminal, and I don't want to be in the job of deciding what's too nasty and what's okay. Court systems aren't perfect at that, but they've been honed for it over significant periods of time, and you don't want to have to do that at fast speed and large scale as part of operating your tech company. None of these tech companies actually wants that job, right? That's not why they got into the business: how can I censor people's political speech? So, of course, if things can be reorganized so that that job is done by the community, for the community, rather than having to be done by the CEO, that's far, far better. And the community won't do it perfectly, but actually, it will do a better job than these centralized authorities. And it's completely possible to do that.
We did a lot of simulations of SingularityNET's machine learning-moderated reputation system over the last couple of years. You can make decentralized, AI-guided rating and reputation systems, and you can tune them: if you tune it one way, you get information silos; if you tune it another way, you just get trolls and spammers and so forth. If you tune it in a different way, you get a system that is self-policing and fosters a healthy level of interaction. And you can do this to get networks that self-regulate without anyone exerting top-level control. If this is operating within the current global political systems, which I have my issues with too, as I'm sure you do, then you still have top-level control over things that are clear crimes according to the nation states people's bodies are sitting in, but you don't need top-level control for anything else.
And I think that would not just avoid the kind of garbage where networks like Minds end up as proud homes for the de-platformed. It would also create something that's a breeding ground for positive and creative and beneficial content, in which people's minds are being nudged toward positive growth rather than channeled into 'visit this site and click on this ad'. I think the potential is there to do that. What's a little scary is that a handful of us in the decentralized AI space, the two of us included, probably understand more about how to achieve this than anyone else on the planet. It's actually a very big and significant problem, both in terms of setting the stage for a positive singularity and just making life less shitty for humanity on the planet at this moment.
CH: The one thing I've always learned from being a cryptocurrency guy is that incentives are king, and it's always been an incentives problem. How many people were, in 1990, being paid to think about social networks? You'd probably be in the sociology department at Harvard or something like that, or toying around in an MIT AI group, but it wasn't a real job and nobody would understand it. How many people who are experts in how to build effective social networks are floating around now? There are thousands of them. They're fabulously wealthy. So if you show that in a free market system you can achieve great wealth, or at least the prospect of great wealth, by building a system of a certain design, then you'll end up getting a lot of that.
The cryptocurrency space was exactly the same. How many people were experts in Bitcoin-like systems in 2010? Very few. Now, in 2021, the incoming chairman of the Securities and Exchange Commission, Gensler, was lecturing at MIT on cryptocurrencies. That's how far we've gone in such a short period of time, because the incentives are right. So when I look at this problem, I say, "Well, how do we get the incentives right to encourage a large clique of people to come in and actually start applying serious, hardcore brainpower to these types of problems?" It's a first-mover situation. Now, as for Minds and these other guys, to that earlier point you brought up, I look at them almost like mechanical horses. When we were first thinking about how to build a better horse, it was as if we said, let's build a robot horse, or a steam-powered horse, or something like that. Well, now there's this automobile idea that we've been toying around with. Maybe that's just a fundamentally more competitive, better model.
Or similarly, when people were thinking about vacuum tubes: you can certainly optimize them, and I'm sure you could build a much better vacuum tube today than they were building back in the 1940s, but obviously that was superseded by the transistor. So when you look at social networks, we have to ask, what is our automobile moment to replace the horse? And Minds is not it. I think that even if those things took off, they'd actually just be worse than Facebook or Twitter; they'd get far more siloed. The three problems I outlined are the great moderation problem, the incentive models being aligned so that people can actually make money and do useful things with the system, and the infrastructure funding problem.
You have to solve all three of those with one protocol design and one incentives design. And if you do that, then it's going to be this massive beacon that will attract tons of people to come in, start working on the system, and evolve it. And it doesn't matter if it starts very small. It'll go viral and eventually get to that Tesla-style hockey stick, the moment Tesla figured out the entire model. There were plenty of battery-powered cars before, but their particular model was the one where everything came together, and then it had exponential growth.
BG: In terms of tokenomics systems, it's quite interesting, because having a unified scheme and dynamic for promoting the right incentives doesn't mean just one token. So you're sculpting multiple tokens in a multi-token ecosystem where they interoperate. Say we have ada, and we have the AGI token on Cardano; then a decentralized social network running on Cardano and leveraging SingularityNET AI could potentially involve a different token for a certain purpose within that network. You have to think through the interoperation of these different tokens and networks. And I think this is one of the things I'm most excited about in the collaboration between the two of us and between SingularityNET and Cardano. I think you guys have done very well in thinking through incentive structures and how they boil down into tokenomic structures, and I look forward to some cognitive synergy among us on that.
CH: We learned how much we don't know. We started this program at Oxford with Elias, and he's an algorithmic game theorist. He won the Gödel Prize and all these things. He's a really good guy, and he's got some really good graduate students at Oxford too. So we said, "Okay, between him and his graduate students, we're done. Put a fork in it. We should easily be able to tackle all these consensus incentives problems in Ouroboros." It took two years to refine the entire incentive model just for a consensus algorithm, and now we're talking about incentives for the curation of information. So it's going to be fun to collaborate. I agree there. It's such a hard problem.
BG: And curation of information that's being created by decentralized AI algorithms, not just curation of existing information.
CH: Yeah, because you need to create demand for a token, and you need to be able to use that token, because it's demanded and it's valuable, to incentivize a certain collection of human behavior. You also need to be able to use it to incentivize people to interact with agents in a way that trains them to become good cognitive crutches that reinforce the network. And then that token also has to incentivize the hosting of decentralized infrastructure that can eventually scale to petabyte-scale storage, huge network capacity, and massive computational capacity. It's a tall order. It's a lot of incentive engineering, and that's why I don't think these networks exist yet.
BG: They don't. As you say, once it's gotten to a certain level, the potential to gain personal wealth and the potential to promote broad benefit to a huge degree are both there in a very clear way, which I think can cause a rush of talent into the space of decentralized AI and decentralized, AI-guided social networks. We're at a pivotal moment now, I think, in terms of both the readiness, and even eagerness, of the world for these technologies and the existence of the needed tools, or at least a significant fraction of the needed tools, to create them. This conversation is occurring at a quite interesting time.
CH: But the good news is that there have been a lot of almost-right attempts. Before the creation of Bitcoin, we had Hashcash and bit gold and DigiCash. They were wrong, but they were wrong in the right direction, so you just had to pull them along enough and eventually it all fell into place. So you have things like BAT, which I mentioned before, and suddenly you've created demand for a token. Steem had enormous growth, but the problem was there was no demand for the token; there was good payment for content creation and curation, so they got a lot of users, but they had too much value leakage, so they couldn't sustain network value, and then the system fell apart.
I almost feel that if you could combine BAT and Steem together, you'd create a feedback loop where the system would sustain itself and continue to grow at a very rapid rate. However, they had to use the token to subsidize the actual running of the infrastructure; they didn't have a sustainable model there. So even though Steem was the protocol and Steemit was just the company, the Steemit company had all the power and control, because they were the ones who could afford to run that protocol.
BG: We've got Hive now, right? I mean, that's the beauty of open source code and decentralized communities.
CH: It's a Pareto problem, where a small group runs the vast majority of everything and there's no economic diversity there. With Cardano, we spent five years on Ouroboros because we wanted a system that would get naturally more decentralized over time. So as the price of ada increases, the k parameter increases, and then suddenly you go from 1,000 to 10,000 stake pools, and then 100,000. And all the infrastructure is federated with those stake pools, so suddenly you have 10,000 Hydra channels, 10,000 oracle entry points, et cetera, et cetera. So the system, when we get to Bitcoin scale, could have 100,000 stake pool operators running it, and that scales quite nicely.
BG: I'm thinking ahead to the growth of SingularityNET during the next phase. I think that the platform as we've built it now does something good: it lets you create multiple AI agents all over the place that collaborate and cooperate to solve hard problems. But we need to architect the next stages of development in a way that will incentivize a massive increase in utilization of the platform using AGI-ADA, but also ensure that increasing decentralized control of the network happens along with this massively increasing utilization. I think we can do it, and I think a lot of the thinking you guys have put into growing Cardano will actually be helpful there, in ways that we probably don't have time to explore in this podcast.
CH: Well, you get the democracy stack for free with Catalyst, and you also get the decentralized infrastructure for free. One thing we'd love to do is see if we can get outsourceable computation. I've been following that for God knows how many years: Pinocchio and Geppetto over at MSR. Can you do the computation on an untrusted computer, but then provide a proof that the computation was done correctly? Then you know that whatever result was given is right, regardless of who did it.
BG: That's there on the computer science level, but it's not yet there on the scalable, usable software level.
CH: We have some proofs that these algorithms work, but a lot of them are exponential time.
BG: One of the things I've been doing with my non-existent spare time is going through all the core cognitive algorithms of OpenCog, which is the AI architecture I'm working on, and expressing them in terms of Galois connections over metagraphs and chronomorphisms and stuff, so you get the right elegant formalization of your core cognitive algorithms. And once you've done that, you can deploy the kind of math you're describing, so that this core AGI computation could be done by outsourced computing. So the math and CS are there for a lot of these different things, but there are a number of stages yet to go through before that kind of thing is rolled out scalably.
CH: That's an interesting mathematical expression. Do you deal with a dependent type system?
BG: It's an independent pair of consistent probabilistic type systems, so yeah.
CH: That's a mouthful. But can you prove anything interesting? Can you show certain things are isomorphic to each other, or what are you looking for with those?
BG: We are working on that right now, actually. But this would probably lead us too deep down some unusually interesting rabbit holes for a broad-audience podcast.
CH: Okay, fair enough. All right. Well, Ben, this has been so much fun. I have another meeting I got to jump into, but I really enjoyed our time.
BG: Yeah, this is fantastic. It's both broad and deep. I think decentralized social networking is both really important on its own, and I think we can work together to solve it, but it also highlights a bunch of other, more general points, both about bringing SingularityNET and Cardano together and about what we need blockchain and AI together to do. So yeah, very cool. Looking forward to the next one.
CH: I guess a closing point is that platforms tend to get defined by the killer apps on them, and I'm very glad that one of the most meaningful and significant applications on our platform is SingularityNET. I would hate to see us be defined by CryptoKitties or something like that. It's great to have you guys around. I think this collaboration is going to result in an enormous amount of evolution of our own platform, and an acid testing of things in a way that's very productive for everybody. And my hope is that you guys become one of the most successful pieces of infrastructure on top of Cardano, and that it leads to a lot of user growth. And we're not just collaborating technologically; I think we're going to share some office space at some point in Ethiopia.
BG: The space has been found, actually. So our Addis team and your Addis team will co-locate.
CH: John was very excited about it, so I imagine the office is quite nice.
BG: It's in Bole, which is a great neighborhood. It was a very pleasant and surprising coincidence that we actually both had flourishing teams in Addis Ababa contributing to the development of our various platforms. Very cool that maybe the next time we meet face to face will be over some injera in Addis.
CH: That'd be a lot of fun. We'd just have to get rid of the civil war and COVID first, but those are minor technical details. All right. Thank you so much, Ben.
BG: Great. Thanks a lot.
Native tokens to bring new utility to life on Cardano
Users will soon be able to create their own on-chain tokens for transactions on Cardano
4 February 2021 5 mins read
Portrait of Mary Shelley by Richard Rothwell (1800-1868)
The Goguen rollout continues with another key building block in Cardano’s evolution into a decentralized, multi-asset (MA) smart contract platform. The Goguen ‘Mary’ update – named after author Mary Shelley – introduces the ability to create user-defined tokens. These custom tokens will be ‘native’, so they can be transacted directly on the blockchain, just like ada. While ada will remain Cardano’s principal currency, Cardano will transform into a multi-asset blockchain, opening up a constellation of possibilities. This MA capability will become a fresh development fulcrum for developers worldwide, further widening Cardano's reach and potential.
Another hard fork?
Yesterday, using what was effectively a hard fork, we successfully deployed the Mary update to the Cardano public testnet for final testing prior to mainnet deployment. This forking event is a crucial step in the process, as the testnet is as close an environment to the mainnet as we can get. Once we deploy all the elements on the testnet, invite devs to dive in, and monitor the results, we can accurately ascertain how the mainnet will behave.
Hard forks tend to be disruptive events because the history of the pre-forked blockchain is no longer available. Without careful planning, testing, and execution, there can be unintended consequences; earlier blocks can be lost when the protocol rules are altered, for example.
However, Cardano handles hard fork events differently. We use a hard fork combinator to combine protocols without triggering service interruptions or a network restart – and, crucially, the combinator maintains the history of the previous blocks.
Cardano has undergone several development stages, and the quest is far from over. Goguen is happening now. We’re seeing the early steps toward Voltaire with Project Catalyst, and Basho will follow. Each stage brings Cardano's journey closer to its ultimate destination: true decentralization and scalability, utility, and sustainable governance. And each stage will use the combinator, a tried and tested technology, to power the transition. We first used it for the Byron to Shelley upgrade, proving the combinator's effectiveness in achieving a seamless transition. Allegra, which introduced token-locking in December, used it too, as will Cardano’s next development stages.
How we got to Mary
The advent of token-locking with Allegra, though in itself a relatively small technical change to the Ouroboros protocol, laid the groundwork for Cardano's multi-asset strategy and the network's future as a whole. The change readied the platform for smart contracts and for the support of native assets other than ada.
Allegra laid down the foundations for Mary with the introduction of production-ready code so engineers could start testing. This work covered features such as defining a monetary script; minting, redeeming, and burning tokens; and sending tokens in a transaction.
Just before the holiday break, a programming interface (a command line interface, or CLI) was added for the wallet backend. Since then, updates to that wallet backend and interface, along with explorer support for multi-currency blocks, have been underway.
We are now finalizing the integration of the completed wallet backend with the metadata registry, and the Rosetta API (a common interface for exchanges to interact with the Cardano blockchain) will be updated to support multi-assets.
The metadata registry
The concept of metadata is worth explaining here. In Cardano, metadata is a human-readable description of native assets. The assets themselves are stored on-chain using identifiers that are not human-readable. The readable version of this information is stored off the blockchain, in public token registries. These registries – initially managed by IOG – will ultimately be owned and configurable by the community, enabling another layer of Cardano's decentralization goal. By empowering the community to own and configure these registries, we ensure that the community can fully trust the datasets: the users themselves are the owners of the data, so it's in their best interest to act honestly.
Mary is almost here
The Mary codebase is due to be deployed on mainnet by the end of February, assuming all final testing goes as planned during the month. Mary's arrival is the first in a series of evolutionary stages that will enable the community to benefit from these new capabilities:
- Yesterday, we successfully deployed the Goguen ‘Mary’ code onto the Cardano testnet. The SPO community and internal teams are now carrying out final user acceptance testing (UAT) on this.
- The Cardano explorer (the tool that retrieves and presents blockchain and transaction information from the Cardano network) was also updated and released for quality assurance testing yesterday.
- We also deployed a basic version of the Daedalus wallet, for testing the wallet backend.
- During February, the Daedalus wallet will be updated to include support for sending, receiving, and viewing multiple tokens, including integration with the new backend interface.
- The metadata registries (GitHub repos that store user-submitted metadata) will come online a little later this month.
- From the testnet phase onward, there will be support from our Technical Support Desk (TSD), a specific testnet wallet to view and transact tokens, and use of the registry to add metadata to tokens. There is also a dedicated dev support program run by our community team to support developers who want to get involved.
The deployment of Goguen ‘Mary’ marks a significant stage in Cardano’s journey. When Mary turns her crypto key within the network, we will unlock the mechanism for users to create their own tokens for myriad applications: decentralized finance (DeFi) and countless other business use cases.
Next week, we’ll publish a blog post digging a little deeper into core native token functionality and what users can expect. Remember to follow us on Twitter and subscribe to our YouTube channel to get the very freshest updates as we continue the Goguen rollout.
Plutus Tx: compiling Haskell into Plutus Core
Get to the heart of writing smart contract applications on Cardano
2 February 2021 9 mins read
Last week saw the release of the refreshed version of the Plutus Playground. This is our showcase for the Plutus Platform, at the heart of which is the ability to write smart contract applications in a single, high-level language: Haskell.
Our toolchain allows a single Haskell program to produce not only an executable file that users can run on their own computers, but also the code that runs on the Cardano blockchain. This gives users a battle-tested, high-quality programming language, and makes use of standard tooling and library support. Nobody wants to learn a proprietary programming language and half-baked tools if they don’t have to!
The technology that powers this is called Plutus Tx, and is, in essence, a compiler from Haskell to Plutus Core – the language that runs on the chain – provided as a GHC plug-in. In this post we’ll dive into how this works, and some of the technical challenges.
Boiling down Haskell
Isn’t Haskell an old, complicated language? Notoriously, it has dozens of sophisticated extensions that change the language in far-reaching ways. How are we possibly going to support all this?
Fortunately, the design of GHC, the primary Haskell compiler, makes this possible. GHC has a very simple representation of Haskell programs called GHC Core. After the initial typechecking phase, all of the complex surface language is desugared away into GHC Core, and the rest of the pipeline doesn’t need to know about it. This works for us too: we can operate on GHC Core, and get support for the much larger Haskell surface language for free.
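To make the idea concrete, here is an illustrative sketch (written in ordinary Haskell rather than real GHC Core syntax, with an invented classify function) of how surface features boil down to the small core language:

-- Surface Haskell: guards and a 'where' clause.
classify :: Int -> String
classify n
  | n > 0     = msg "positive"
  | otherwise = msg "non-positive"
  where
    msg s = "The number is " ++ s

-- Roughly the shape of the desugared result: GHC Core has no guards
-- or 'where', only lambdas, lets, applications, and case expressions.
classifyCore :: Int -> String
classifyCore = \n ->
  let msg = \s -> (++) "The number is " s
  in case (>) n 0 of
       True  -> msg "positive"
       False -> msg "non-positive"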
The other complexity of Haskell is its type system. This is much harder to avoid. However, we have the luxury of choosing what type system we want to use for our target language, and so we use a system that is a subset of Haskell’s – fortunately Haskell’s type system is pretty good!
In the end, it turns out that we don’t want to support all of Haskell. Some features are niche, inapplicable (nobody needs a C FFI on the blockchain), or, honestly, just a real pain to implement. So for now the Plutus Tx compiler will give you a helpful error if you use a feature it doesn’t support. Most ‘simple’ Haskell is supported (although there are a few things that look simple, but are annoyingly complicated in practice).
Down the tube
What do we compile Haskell into? At the end of the day we have to produce Plutus Core, but it is ancient compiler wisdom to break down big compilation pipelines like this by introducing ‘intermediate languages’, or an intermediate representation (IR). This ensures that no one step is too large, and that the steps can be tested independently.
Our compilation pipeline looks like this:
- GHC: Haskell -> GHC Core
- Plutus Tx compiler: GHC Core -> Plutus IR
- Plutus IR compiler: Plutus IR -> Typed Plutus Core
- Type eraser: Typed Plutus Core -> Untyped Plutus Core
As you can see, there are quite a few stages after GHC Core, but I just want to highlight Plutus IR. This is an extension of Plutus Core designed to be close to GHC Core. So, strictly speaking, the Plutus Tx compiler doesn’t target Plutus Core: it targets Plutus IR, and then we invoke the rest of the pipeline to get the rest of the way.
This reduces the amount of logic that has to live in the plug-in itself. It can focus on dealing with the idiosyncrasies of GHC, and leave well-defined (but difficult) problems such as handling data types and recursion to the Plutus IR compiler, where they can be tested without having to run a GHC plug-in!
Having Plutus IR in the pipeline gives us other advantages too. We don’t have total control over how GHC generates GHC Core, but we do control how Plutus IR gets turned into Plutus Core. So if users want to ensure total reproducibility of their on-chain code, they can save the Plutus IR and get a (comparatively) readable dump that they can reload later.
Sneaking into GHC
How do we actually get the GHC Core in the first place? GHC Core is part of GHC’s compilation pipeline. We’d have to somehow insert ourselves into the middle of GHC’s compilation process, intercept the part of the program that we want to compile to Plutus Core (remember: we only compile some of the program to on-chain code), compile it, and then do something useful with the result.
Fortunately, GHC provides the tools for this in the form of GHC plug-ins. A GHC plug-in gets to run during the GHC compilation process, and is able to modify the program that GHC is compiling however it likes. This is exactly what we want!
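For a flavor of what that looks like, here is a minimal sketch against the GHC 8.x plug-in API (not the actual Plutus Tx plug-in): a Core plug-in is just a module exporting a value called plugin.

module ExamplePlugin (plugin) where

import GhcPlugins

plugin :: Plugin
plugin = defaultPlugin { installCoreToDos = install }

-- Put our pass at the front of GHC's Core-to-Core pipeline, so it
-- sees the program shortly after desugaring.
install :: [CommandLineOption] -> [CoreToDo] -> CoreM [CoreToDo]
install _ todos = pure (CoreDoPluginPass "example-pass" pass : todos)

-- A do-nothing pass that just prints the top-level binders it sees;
-- a real plug-in (like Plutus Tx) would rewrite parts of the program.
pass :: ModGuts -> CoreM ModGuts
pass guts = do
  putMsgS (showSDocUnsafe (ppr (bindersOfBinds (mg_binds guts))))
  pure guts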
Because we are able to modify the program GHC is compiling, we have an obvious place to put the output of the Plutus Tx compiler – back into the main Haskell program! That’s the right place for it, because the rest of the Haskell program is responsible for submitting transactions containing Plutus Core scripts. But from the point of view of the rest of the program, Plutus Core is opaque, so we can get away with just providing it as a blob of bytes ready to go into a transaction.
This suggests that we want to implement a function like this:
compile :: forall a . a -> CompiledCode a
From the user’s perspective, this takes any Haskell expression and replaces it with an opaque value representing that expression, but compiled into a Plutus Core program (or rather a bytestring containing a serialized representation of that program). The real version is a little more complicated, but, conceptually, it’s the same.
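For illustration, with the simplified signature above and an invented validate function, usage would look something like this (in the shipped interface the call is wrapped in Template Haskell quotes, roughly $$(compile [|| validate ||]), but the idea is the same):

-- An ordinary Haskell function we want to run on the chain.
validate :: Integer -> Bool
validate n = n > 10

-- The plug-in replaces this application of 'compile' with a
-- serialized Plutus Core program at compile time.
validateCode :: CompiledCode (Integer -> Bool)
validateCode = compile validate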
However, we don’t want to try and implement this as a normal Haskell function. A normal Haskell function with the signature of compile would take a value of type a, and turn it into a Plutus Core program at run time. We want to take the syntax tree for the expression of type a and turn it into a Plutus Core program at compile time.
The switcheroo
Here’s the trick: we don’t actually implement compile as a function; instead, our plug-in trawls through the program to find applications of compile to an argument, and then replaces the whole application with the compiled code.
So, for example, we turn
compile 1
into
<bytestring containing the serialized Plutus Core program ‘(con integer 1)’>
This means that the program continues to be type-correct throughout. Before the plug-in runs, the expression compile 1 has type CompiledCode, and the same is true afterwards – but now we have an actual program!
Finding the source
Compilers work with the source of programs, and the Plutus Tx compiler is no different. We process the GHC Core syntax tree for programs. But what happens when a program calls a function from another module? Haskell is separately compiled: typically modules only see the types of functions in other modules, and the object code is linked together later. So we don’t have the source!
This is, in fact, extremely annoying, and in the long run we plan to implement support in GHC for reliably storing the GHC Core for modules inside the interface files that it generates. This would enable us to do something more like ‘separate compilation’ for Plutus Tx. Until then, however, we have a workaround using ‘unfoldings’.
Unfoldings are the copies of functions that GHC uses to enable cross-module inlining. We piggyback on these as a way of getting the source of functions. Consequently, functions that are used transitively by Plutus Tx code must be marked as INLINABLE, which ensures that unfoldings are present.
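For example (a hypothetical helper, just to show the shape of the annotation):

-- This helper lives in its own module but is called from on-chain
-- code elsewhere, so it must keep its unfolding around for the
-- Plutus Tx compiler to find.
{-# INLINABLE aboveThreshold #-}
aboveThreshold :: Integer -> Bool
aboveThreshold n = n > 42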
Run time matters too
This all sounds fine, until you realise that you usually want to create different versions of a Plutus Core program based on decisions at run time. If I’m writing a contract that implements an atomic trade, I don’t want to have to recompile my program to change the participants or the amount!
But as we said before, it’s tricky to write a function of type a -> CompiledCode a that actually works at run time. Rather than looking at the GHC Core syntax tree representing the expression in the Haskell program, we instead need to deal with values that the program computes.
We can do this in typical Haskell fashion by defining a pair of typeclasses:
- Typeable, which tells us how to turn a Haskell type into a Plutus Core type
- Lift, which tells us how to turn a Haskell value into a Plutus Core value
For those familiar with Haskell, these deliberately parallel the Typeable and Lift classes that GHC provides for turning Haskell types and values into representations useful for more typical Haskell metaprogramming.
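As a much-simplified sketch of the shape of these classes (PlcType and PlcTerm are invented stand-ins for the real Plutus Core syntax types, and the real classes carry more structure):

-- Invented stand-ins for Plutus Core types and terms.
data PlcType = PlcInteger | PlcBool
data PlcTerm = PlcIntConst Integer | PlcBoolConst Bool

-- How to turn a Haskell type into a Plutus Core type.
class Typeable a where
  typeRep :: proxy a -> PlcType

-- How to turn a Haskell value into a Plutus Core value.
class Typeable a => Lift a where
  lift :: a -> PlcTerm

instance Typeable Integer where
  typeRep _ = PlcInteger

instance Lift Integer where
  lift = PlcIntConst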
We can’t write instances of these type classes for all types. For example, when we’re looking at the GHC Core we can inspect the GHC Core for \x -> 1 and see that it is a lambda, and what the body is. But when the code is run, a function can be a compiled blob of native code, and we can’t do this any more. So, unfortunately, we can’t lift functions at run time.
Ultimately, this means you can typically lift data at run time, like an integer, or a complicated data type representing a fee schedule. You can then pass the lifted data to a function that you compiled at compile time with a little helper: applyCode :: CompiledCode (a -> b) -> CompiledCode a -> CompiledCode b.
This is a nice instance of a functional architecture paying off for us: we can handle these tricky dependencies between compile time and run time with simple functions and arguments!
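Putting those pieces together, a hedged sketch (the fee-check example is invented, and liftCode stands for a helper, built from the Lift class, that turns a value into a CompiledCode):

-- Compiled once, at compile time: a check parameterized by the fee.
checkFeeCode :: CompiledCode (Integer -> Bool)
checkFeeCode = compile (\fee -> fee > 0)

-- Built at run time: lift the fee the user actually chose, then apply
-- the precompiled function to it, entirely at the Plutus Core level.
scriptFor :: Integer -> CompiledCode Bool
scriptFor fee = checkFeeCode `applyCode` liftCode fee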
Getting out of the way
The goal of Plutus Tx is to allow you to freely write Haskell and seamlessly use it in both on-chain and off-chain code. We’ve made a lot of progress towards that goal, and we look forward to polishing off the remaining warts as we go.
Postscript: show me the money!
How can you actually use Plutus Tx? The compiler will be released with Plutus Foundation in one of the upcoming Cardano updates supporting Goguen’s capabilities. This will include support for Plutus Core on the Cardano chain. At that point we’ll release the libraries into the wild. Until then, you can cheer us along on GitHub, and let us know how you get on with the new Plutus Playground.