Arguably, the core feature of tokenized ecosystems, aka public blockchains, is getting people to do stuff. Incentives are powerful. But similar to Artificial Intelligence (AI) / optimizer design, getting incentives right is hard. Blockchains can even be framed as life. In this context, what if we end up with a rogue life form sucking the life energy out of the planet? More pointedly: has Bitcoin gone rogue? This article explores these questions, in the first installment of a broader series aimed at improving the token design process.
We’ll start from the perspective of optimization and AI, and wind our way back to blockchains and incentives.
AI Whack-a-Mole
For several years I worked on creative AI, making technology to synthesize analog circuits like amplifiers from scratch.
I’d do a synthesis run and find an issue, like “there are random dangling wires”. Then I’d add a constraint, codified in computer science terms, such as “each node must connect to more than one edge” to fix the danglers.
Then I’d run again, and find another issue, like “current in this wire is 100x larger than sanity and will blow up the circuit”. I’d fix it, and repeat the process.
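To make that loop concrete, here’s a minimal sketch of what those hand-added checks looked like in spirit. The toy circuit representation and helper names are hypothetical, not the actual synthesis tool’s code:

```python
# Minimal sketch of the constraint whack-a-mole loop, assuming a toy
# circuit representation (dicts of nodes and wires). All names here
# are hypothetical, not the actual synthesis tool's API.

def check_constraints(circuit, max_current=1.0):
    """Return a list of constraint violations for a candidate circuit."""
    violations = []

    # Constraint 1: no dangling wires -- every node must connect to >1 edge.
    for node in circuit["nodes"]:
        degree = sum(node in wire["endpoints"] for wire in circuit["wires"])
        if degree <= 1:
            violations.append(f"dangling node: {node}")

    # Constraint 2: no current so large it would blow up the circuit.
    for wire in circuit["wires"]:
        if abs(wire["current"]) > max_current:
            violations.append(f"over-current on wire: {wire['name']}")

    # Constraint 3, 4, 5, ... each new synthesis run grew this list.
    return violations

circuit = {
    "nodes": ["n1", "n2", "n3"],
    "wires": [
        {"name": "w1", "endpoints": ("n1", "n2"), "current": 0.2},
        {"name": "w2", "endpoints": ("n2", "n3"), "current": 100.0},
    ],
}
print(check_constraints(circuit))
# ['dangling node: n1', 'dangling node: n3', 'over-current on wire: w2']
```

Each run surfaced a new failure mode, and each failure mode meant appending one more check: exactly the whack-a-mole the heading describes.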
This was fine for the first few constraints. But after a dozen or so it became very tiresome.
After much pain, I took a different approach: implicitly capturing intent.
Just like software debugging, it’s a challenge to bridge the gap between my intent and what the machine thinks I want. What I learned: setting the objective function and constraints is hard.
The Paperclip Maximizer
Communicating intent is hard. This idea underpins Nick Bostrom’s Paperclip Maximizer:
“Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”
In this scenario, the humans communicated the main goal — maximize paperclips — but missed a key constraint, namely, don’t destroy humanity. But then how do you specify the latter constraint? Yep: it’s hard.
It doesn’t even matter if the AI is really stupid. As long as it has access to the resources to keep growing, our world could end up overrun with paperclips. Optimizers and AIs don’t care about your intent; they just happily maximize away. Some people say “well, just unplug that AI”. That would work for a centralized AI, but not a decentralized one (an AI DAO).
Blockchains as Trust Machines
We’ve given context to optimizers. Let’s now give context to blockchains. After that, we’ll merge the contexts.
Blockchains have several great features above and beyond traditional distributed systems: they’re decentralized (no single entity owns or controls them), they’re immutable (once you’ve written to a blockchain, it’s like it’s cast into stone), and they make it easy to issue & transfer assets. This frames blockchains as trust machines. These features unlock higher-level capabilities like smart contracts.
Blockchains as Incentive Machines
The blockchain community understands that blockchains can help align incentives among a tribe of token holders. Each token holder has skin in the game. But the benefit is actually more general than simply aligning incentives: you can design incentives of your choosing, and implement them as block rewards. Put another way: you can get people to do stuff by rewarding them with tokens. Blockchains are incentive machines.
I see this as a superpower. The block rewards function defines what the network pays participants to do. So the question becomes: what do you want people in your network to do? And it has a crucial corollary: how well can you communicate that intent to the machines? This is a devilish detail. Do we really know how to design incentives?
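As an illustration, here’s a toy block-reward function. The network and numbers are made up rather than any real chain’s logic, but the core point holds: whatever quantity the function measures and pays for is what participants will maximize.

```python
# Toy block-reward function for a hypothetical token network.
# Whatever `contributions` measures (hashes, storage proofs, bandwidth,
# curation...) is exactly what participants are incentivized to maximize.

BLOCK_REWARD = 50.0  # tokens minted per block (illustrative)

def distribute_rewards(contributions):
    """Split the block reward pro-rata by each participant's measured work."""
    total = sum(contributions.values())
    if total == 0:
        return {}
    return {participant: BLOCK_REWARD * work / total
            for participant, work in contributions.items()}

# If the measured quantity is hash power, people maximize hash power.
print(distribute_rewards({"alice": 70, "bob": 30}))
# {'alice': 35.0, 'bob': 15.0}
```

The hard part isn’t the payout mechanics; it’s whether the measured quantity actually matches your intent.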
Blockchains as Life?
In his treatise “What is Life?”, Erwin Schrödinger framed life as mere physical processes. More recently, physicist Jeremy England has given it a thermodynamic framing: it’s all about entropy. Carbon is not a deity.
The Artificial Life (A-Life) community acknowledges that the definition of “life” is contentious. Some things clearly aren’t life, like a hammer; others clearly are, like a puppy. But there are shades of gray in between. We can think of it as a checklist of, say, 20 items. Autonomous mobility? Check. Self-replication? Check. Decision-making? Check. And so on. Check enough boxes and “it’s life”.
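A toy sketch of that checklist view, with illustrative properties and an arbitrary threshold (there is no canonical list):

```python
# Toy "lifeness" checklist, in the spirit of the A-Life framing above.
# The items and the threshold are illustrative, not a real taxonomy.

LIFE_CHECKLIST = [
    "autonomous_mobility",
    "self_replication",
    "decision_making",
    "metabolism",
    "responds_to_environment",
    # ... and so on, up to ~20 items
]

def lifeness(properties, threshold=0.6):
    """Return the fraction of checklist items satisfied, and a verdict."""
    score = sum(p in properties for p in LIFE_CHECKLIST) / len(LIFE_CHECKLIST)
    return score, score >= threshold

print(lifeness(set()))                                  # hammer: (0.0, False)
print(lifeness({"autonomous_mobility", "self_replication",
                "decision_making", "metabolism",
                "responds_to_environment"}))            # puppy: (1.0, True)
```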
Ralph Merkle wrote about Bitcoin as life: “Bitcoin is the first example of a new form of life. It lives and breathes on the internet. It lives because it can pay people to keep it alive. It lives because it performs a useful service that people will pay it to perform. … It can’t be stopped. It can’t even be interrupted. If nuclear war destroyed half of our planet, it would continue to live, uncorrupted.”
Bitcoin Gone Rogue?
To recap the last few sections:
Designing objectives & constraints for an optimizer / AI is hard.
An AI that has access to vast resources but bad objectives & constraints could end badly for humanity (the paperclip maximizer).
Bitcoin can be seen as a life form, or a super-stupid AI. It’s nearly impossible to stop.
Let’s bring these together. Recall Bitcoin’s block rewards function (aka objective function): maximize security, by maximizing hash rate, by maximizing electricity usage. And it’s optimizing against that objective remarkably “well.” So well that its electricity consumption is on track to overtake that of the entire USA by July 2019. And energy is perhaps the most important resource on earth. It’s the thing we humans start wars over. (Oil, remember?)
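To make that chain concrete, here’s a back-of-envelope model of mining economics. It’s a standard simplification rather than anything precise: miners keep adding hash power until electricity costs eat the whole block reward, so energy use scales with token price. All the numbers below are illustrative.

```python
# Back-of-envelope mining equilibrium: miners add hash power while
# revenue exceeds electricity cost, so at equilibrium (in this
# simplified model) the whole daily block reward is spent on power.
# Hardware efficiency cancels out: better chips just buy more hashes
# for the same energy bill, not less energy.

def equilibrium_power_draw(daily_reward_usd, usd_per_kwh):
    """Network power draw (watts) at the zero-profit equilibrium."""
    daily_kwh = daily_reward_usd / usd_per_kwh
    return daily_kwh * 1000 / 24  # kWh/day -> watts

# ~144 blocks/day at 12.5 BTC/block; prices and $0.05/kWh are illustrative.
for btc_price_usd in (4_000, 8_000, 16_000):
    daily_reward_usd = 144 * 12.5 * btc_price_usd
    gigawatts = equilibrium_power_draw(daily_reward_usd, 0.05) / 1e9
    print(f"BTC at ${btc_price_usd}: ~{gigawatts:.0f} GW")
# BTC at $4000: ~6 GW; at $8000: ~12 GW; at $16000: ~24 GW
```

Under this toy model, every rise in token price recruits more hardware and more electricity, which is part of why “just unplug it” doesn’t apply.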
In short: “We have a life form that we basically can’t stop, which is optimizing maniacally for that most precious resource — energy. This life form is called Bitcoin.”
How’s that for the power of incentives? Which means: we need to get incentives right when we build these tokenized ecosystems.
Conclusion
Satoshi almost certainly didn’t mean to suck the life force out of the planet. Objective function design aka incentive design is hard. But we have to try! To do a good job, we need solid engineering theory, practice, and tools. That is, token engineering. The next article in this series explores this further.