ETH Shanghai 2023 | Yi Sun: Adding reflection to the EVM with Axiom

Mask Network
Jul 17, 2023

At the ETH Shanghai 2023 event, Yi Sun, founder of Axiom, gave a speech titled "Adding Reflection to the EVM with Axiom."

Below is the transcription of the speech "Adding Reflection to the EVM with Axiom":

Thanks for the invitation to speak here, and I’m especially honored because I was actually born in Shanghai. So today I decided to tell everyone about how we’re adding reflection to the EVM using Axiom.

So to start, I want to talk through the user journey of actually accessing information on Ethereum. When you first use Ethereum, the way you actually receive information about what happened on chain is through a JSON-RPC call to an archive node. The purpose of this JSON-RPC API is to show information about the history of the chain to the human eyeball. And though you may not realize this, essentially all information that you see about the blockchain is taken from one of these API calls and put on a website for your human consumption.

Now, as users have gotten more sophisticated in interacting with the blockchain, they've started demanding increasingly sophisticated views of the chain. So different types of archive nodes are being developed for different user tradeoffs: there's Geth, Erigon, Nethermind, and now Reth. You can choose your favorite archive node from that menu.

And if you’re not satisfied with the pure JSON-RPC API, you can choose an indexer that will apply post-processing as well as trace your transactions, and for different applications you might be interested in the data that’s returned from The Graph or Covalent.

And more recently, there are wallets and other products coming out that offer transaction simulation on top of an archive node. What that means is that you can see what will happen counterfactually after you apply a transaction you’re about to submit. So all in all, the story here is that as end users, we interact with Ethereum in increasingly complex ways that use more computation on top of the data that we’re reading.

Now, let’s think from the perspective not of a user but of a smart contract on Ethereum. Contracts, of course, also want to access data and compute on that data, but it’s much more challenging. In fact, if you go to OpenSea and look at a listing for a CryptoPunk, you’ll find that of all the information on the page, only a very small fraction is accessible to a smart contract on chain today.

In fact, I believe for the CryptoPunk listing, that information is simply the current owner. Of course, there’s a lot of other information on the page for a good reason, but everything pertaining to historical transfers, historical prices, and historical owners is not actually accessible to smart contracts, because it belongs to the history of Ethereum. Of course, that constitutes on-chain information, but it’s not available to Ethereum smart contracts, because we need to avoid forcing every full node of Ethereum to maintain this information in its random-access state to validate transactions.

So this is both a practical restriction on Ethereum smart contracts, as well as a theoretical limitation on all blockchain computers.

Moreover, any blockchain developer can tell you that running computations on chain is pretty expensive. Although Ethereum has relatively efficient VM operations and has made certain types of operations cheaper through precompiles (for example, elliptic curve operations on the BN254 curve), there are many applications where you want different computations: types of cryptography or signature schemes that aren’t supported cheaply on Ethereum, maybe some numerical computations, or even machine learning.

And so for these application-specific computations, the EVM is a pretty expensive place to run your application today.

Again, this is a fundamental tradeoff.

When you design a blockchain virtual machine, you have to choose an inherently small set of operations that are carefully metered in order to make sure that every node can actually validate transactions in a consistent amount of time. And moreover, you have to think about the worst-case security and consensus stability in these cases. So the challenge here is how you can actually get application-specific scaling for on-chain applications.

So the problem that Axiom is being built to solve is to scale data access and compute for smart contracts in a way that actually respects the different types of scaling that each application needs.

So what we’re building is something we’re calling a ZK Coprocessor for Ethereum.

The way that works is we allow smart contracts to trustlessly delegate the two operations we’ve discussed so far to our off-chain system: they can delegate data reads and verifiable computation to Axiom.

To make a query to Axiom, a smart contract can send a transaction to our on-chain system. Our off-chain nodes will pick up that transaction, read the query, fetch the data from the history of Ethereum it’s looking for, and generate the result accompanied by a zero-knowledge proof that the result is actually correct.

Finally, we verify the results on chain and give them trustlessly to downstream smart contracts.

So this is very similar to how a CPU on your computer will delegate computation to a GPU and get it back when the result is known. So back in the day, these things were called coprocessors, and on the slide, I’ve shown an image of an advanced math coprocessor from the early 90s, so this is the analogy that we’re making to our product today.

So let me dive a bit deeper into what types of operations are possible with Axiom.

Every query in Axiom breaks into three pieces. First is the read piece: we can trustlessly read historic on-chain data, so that’s always the input into a query to Axiom.

The second piece is that we can run verified computations on top of that data. That might range from basic analytics, like taking the sum, max, or min of some numbers, to more complex computations coming from cryptography, maybe some signature aggregation or verification, to even things in ZK machine learning, maybe verifying the run of some reputation algorithm on on-chain social data or some machine learning algorithm for financial applications.

And eventually, we’re going to offer programmable composition of these computations via a virtual machine.

Finally, after the read and compute steps are done, we come to a result, and we always pair that result with a zero-knowledge proof that the result was computed in a valid way. So we verify that proof on chain in an Ethereum smart contract and then deposit the result for your contract to use.

So because all of these results from Axiom are actually verified by a zero-knowledge proof, everything that Axiom returns has security that’s cryptographically equivalent to Ethereum itself. And our philosophy is that we don’t want to impose any additional cryptographic assumptions on our users beyond what they’re already assuming by using Ethereum.

So I’ve told you a lot about what Axiom can do, and I now want to talk through how it’s even possible. And that’s going to get to the concept of reflection that is mentioned in the title of the talk.

So the core principle that enables all of this is that every block of Ethereum, and in fact of any blockchain, actually commits to the entire history of the chain.

So for Ethereum, the way that this works is that the current block contains the hash of the previous block, and the previous block contains the hash of the block before that, and so on all the way back to genesis. In this way, any block of Ethereum commits to the block headers of all previous blocks.
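To illustrate this commitment structure, here is a minimal Python sketch of verifying such a hash chain. It is a toy model: real Ethereum headers are RLP-encoded structures hashed with Keccak-256, whereas this sketch uses SHA-256 over a simplified (parent hash, payload) pair.

```python
import hashlib


def header_hash(parent_hash: bytes, payload: bytes) -> bytes:
    # Simplified stand-in for hashing a block header.
    # Real Ethereum headers are RLP-encoded and hashed with Keccak-256.
    return hashlib.sha256(parent_hash + payload).digest()


def verify_chain(headers, trusted_tip_hash: bytes) -> bool:
    """Check that a list of (parent_hash, payload) headers links
    back from a trusted tip hash to the oldest header given."""
    expected = trusted_tip_hash
    # Walk from the newest header down to the oldest, checking that
    # each header's hash matches the next header's parent pointer.
    for parent_hash, payload in reversed(headers):
        if header_hash(parent_hash, payload) != expected:
            return False
        expected = parent_hash
    return True


# Build a toy three-block chain.
genesis = (b"\x00" * 32, b"genesis")
h0 = header_hash(*genesis)
block1 = (h0, b"block 1")
h1 = header_hash(*block1)
block2 = (h1, b"block 2")
h2 = header_hash(*block2)

# Knowing only h2 (the current block hash), we can check the whole history.
assert verify_chain([genesis, block1, block2], h2)
```

The key point the sketch shows is that a single trusted recent hash transitively authenticates every earlier header, which is exactly the property reflection relies on.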

Now, any specific block contains a commitment to all of the data pertaining to Ethereum at that block: all the account information, all the contract storage, and all the transactions and receipts in that block. And so this is kind of a special property, and I like to think of it as blockchains having a sense of cryptographic memory.

So what does that enable? It’s an operation that I’m calling reflection.

So what we can do is take a current block of Ethereum and go backward to a previous block that we’re interested in. If we take the block headers between the past block and the current block, then we can actually trace the commitment of the past block into the current block by verifying the hash chain of block headers in between.

Then, if we’re interested in some piece of information in that past block, we can give an inclusion proof of that information inside the block’s commitment. In this case, it would concretely be a Merkle-Patricia trie proof into either the state trie, the transactions trie, or the receipts trie.
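As a simplified illustration of an inclusion proof, here is a sketch using a plain binary Merkle tree with SHA-256. Ethereum's actual commitments use Merkle-Patricia tries with Keccak-256, which are more involved, but the verification principle is the same: hash your way from the leaf up and compare against a known root.

```python
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves):
    """Root of a simple binary Merkle tree (power-of-two leaf count)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


def merkle_proof(leaves, index):
    """Collect sibling hashes from a leaf up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        proof.append(level[index ^ 1])  # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof


def verify_inclusion(leaf, index, proof, root):
    """Recompute the root from a leaf and its sibling path."""
    acc = h(leaf)
    for sibling in proof:
        # Concatenation order depends on left/right position.
        acc = h(acc + sibling) if index % 2 == 0 else h(sibling + acc)
        index //= 2
    return acc == root


leaves = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 2)
assert verify_inclusion(b"tx2", 2, proof, root)
```

Note that the verifier only needs the root (the block's commitment) and a logarithmic-size sibling path, not the whole data set.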

So, at least in principle, it’s possible in the EVM, just by knowledge of a recent block hash, to access any past piece of information on the chain.

Unfortunately, doing this in the EVM is going to be prohibitively expensive. As I mentioned earlier, you have to verify this hash chain of all block headers and this Merkle proof, and that involves doing a lot of Keccak hashes over a pretty large amount of data. So once you go far into the past, it’s pretty prohibitive.

So what we’re doing is enabling this operation of reflection in the EVM by wrapping this proof in ZK. Instead of putting all past block headers and all of these Merkle proofs on chain and then verifying them, we check in zero-knowledge that there exists a chain of past block headers and there exist some proofs that verify.

So this has two advantages. First, it allows us to not put all that very heavy proof data in calldata. And second, it lets us aggregate proofs to a level that would be unimaginable without using ZK. The idea here is that the gas cost of verifying any amount of computation in ZK on Ethereum is fixed, so as a result we can put a truly large number of these historic data accesses behind a single ZK proof.


Let me talk a little bit about the trade-offs of this ZK-based notion of reflection.

So there are two ways to access data. First is the way that you already know about: you can directly access data on Ethereum from your smart contract. And that has one very big advantage, which is that the access is synchronous. You directly call a smart contract read from your contract and get the present value.

And so, of course, that’s very powerful. For example, when you’re trading on Uniswap, you need this type of synchronous access.

Unfortunately, it has many limitations. Your computational power is really limited by gas costs, and you can’t access any of the history.

On the other hand, if you want to reflect into Ethereum using the power of ZK, then because you have to generate a zero-knowledge proof that the witness values which prove your access was correct actually validate, there’s no way to do this in a synchronous fashion. So it’s actually impossible to access the exact current on-chain state, just because you have to target a past state and make a proof about it.

On the other hand, if you allow yourself to access historic data in an asynchronous way, then you can apply essentially unlimited computation to it and also access a large amount of it. And so by relaxing this notion of synchrony, ZK-based, reflection-based access to data can be scaled up substantially. So let me just talk really quickly about how we actually achieve this reflection with Axiom.

There are two key components. The first is that we actually have to maintain a cache of all previous block hashes in our smart contract.

In the EVM, the last 256 block hashes are available natively. What we can do is prove, in batches of 1024, that the block hash of the last block in that batch is committed to in the next block, the block hash of the second-to-last block in that batch is committed to in the last block, and so on. So we can traverse this hash chain of headers and prove in zero-knowledge that the hash chain is valid.

So this lets us cache the block hashes starting from a recent block all the way back to genesis. And we’ve actually done this in our smart contract on mainnet today, and it holds the cached Merkle roots of every 1024 block hashes all the way back to genesis.

An additional feature that we’re adding is a Merkle mountain range on top of this cache of Merkle roots. This is a data structure that lets us reference every block hash in Ethereum with a small, constant-size commitment.
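A Merkle mountain range can be sketched as an append-only list of perfect-binary-tree "peaks" that merge as new leaves arrive, much like carrying in binary addition. The sketch below is a generic illustration of the data structure, not Axiom's actual implementation, and the hashing choices are assumptions for demonstration.

```python
import hashlib


def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()


class MerkleMountainRange:
    """Append-only accumulator kept as a list of perfect-tree peaks.
    Appending is amortized O(1), and the whole history is summarized
    by O(log n) peak hashes that can be bagged into a single root."""

    def __init__(self):
        self.peaks = []  # list of (height, hash), left to right

    def append(self, leaf_hash: bytes):
        height, node = 0, leaf_hash
        # Merge with the previous peak whenever heights match,
        # like carrying a bit in binary addition.
        while self.peaks and self.peaks[-1][0] == height:
            _, left = self.peaks.pop()
            node = h(left, node)
            height += 1
        self.peaks.append((height, node))

    def root(self) -> bytes:
        # "Bag" the peaks right-to-left into one commitment.
        acc = self.peaks[-1][1]
        for _, peak in reversed(self.peaks[:-1]):
            acc = h(peak, acc)
        return acc


mmr = MerkleMountainRange()
for i in range(5):
    mmr.append(hashlib.sha256(f"block {i}".encode()).digest())

# 5 leaves = binary 101, so we expect peaks of heights 2 and 0.
assert [height for height, _ in mmr.peaks] == [2, 0]
```

The appeal for caching block hashes is that the accumulator only grows by one commitment, yet any historic leaf remains provable against it.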

Now, once we’re done forming this cache, we can query into Axiom by verifying our queries against the block hashes in the cache. To do this, we have to prove, first, that every piece of data from the history of Ethereum we’re trying to access is actually committed to in a block hash, and second, that all the computations we’re doing on top of this query were actually done correctly.

To check this on chain, we check that our zero-knowledge proof is valid, and also that it actually relates to the information that we’ve remembered on chain. We’re always rooting our trust in our cache of block hashes, and we’re also matching the information from those block hashes to the public information in our zero-knowledge proof.

Okay, so I’ve talked a lot about this new operation of reflection and the ZK coprocessor we’re building to achieve it. But let’s now talk about the applications that we envision it making possible.

So I like to think of all applications you can do on a blockchain along two axes. On the horizontal axis we have the data complexity: how much data you need to access to actually make the application happen. And on the vertical axis is the compute complexity: very roughly, how much compute you have to apply to actually do the thing.

So the first set of applications that Axiom, or any type of reflection, can make possible are actually possible to do today on Ethereum, but they’re just a little bit expensive.

A couple of examples include reading the consensus-level randomness from the Ethereum consensus layer that’s actually made available in the block header, verifying your historic account age, or reading different types of oracles built from historic price information. That could be reading historic prices off Uniswap, constructing volatility oracles, or constructing an average price oracle, for example, between Uniswap and SushiSwap. For all of these, there are various workarounds you can do in the EVM today, but they can be made much more efficient by putting those workarounds in ZK.
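To make the oracle examples concrete, here is a sketch of the kind of computation such an oracle might prove over historic price samples: a time-weighted average price and a realized-volatility estimate. The function names and sample data are purely hypothetical; in the scheme described above, this computation would run off-chain and be accompanied by a ZK proof that it was applied to real historic prices.

```python
import math


def time_weighted_average_price(prices, timestamps):
    """TWAP over historic samples: each price is weighted by how
    long it was in effect before the next observation."""
    assert len(prices) == len(timestamps) >= 2
    weighted = sum(
        prices[i] * (timestamps[i + 1] - timestamps[i])
        for i in range(len(prices) - 1)
    )
    return weighted / (timestamps[-1] - timestamps[0])


def realized_volatility(prices):
    """Sample standard deviation of log returns between observations."""
    returns = [
        math.log(prices[i + 1] / prices[i]) for i in range(len(prices) - 1)
    ]
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(var)


# Hypothetical historic samples (price, unix-second timestamp).
prices = [100.0, 101.0, 99.5, 102.0]
timestamps = [0, 60, 120, 180]

twap = time_weighted_average_price(prices, timestamps)
vol = realized_volatility(prices)
```

With equal 60-second spacing, the TWAP here is just the mean of the first three prices, roughly 100.17; unequal spacing is where the time weighting matters.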

Now, there’s another class of applications that, roughly speaking, requires more data access and therefore more compute. And these are at a scale where I don’t think it’s really possible to do them without using a ZK coprocessor.

So, just to talk through a few of them, one interesting application is to allow cross-rollup data reads. What that means is to allow a rollup on Ethereum to read, either from the base layer or from the state of another rollup, in a trustless fashion using ZK.

One particularly powerful form of this is to allow a rollup to read a complete checkpoint of the balances of an ERC20 token.

If we move away from storage and look more at the transaction history of an account, we can imagine forming a trustless reputation, identity, or credit score by aggregating the complete history of an address on Ethereum and giving it semantic meaning in your application. That might be for a credit score, for getting you access to some type of on-chain DAO, or maybe for getting you access to a custom NFT mint.

There’s another class of applications where reading from the history of the chain is used to actually govern protocols. So I like to call this protocol accounting.

The idea here is that protocols on Ethereum exist to coordinate the behavior of their participants, and the fundamental primitive of coordination is the ability to either reward or punish actions that are desirable or undesirable. If you look at a lot of protocols on Ethereum, the record of what participants have done is actually fully on chain. And so using Axiom, we can imagine systems where, as a function of the complete set of actions of participants in the protocol, the protocol can determine the payout structure or even some type of objective slashing of participants. We think this can really open up the design space for what types of protocol applications are possible.

Finally, if we really ramp up the level of compute that’s possible, we think it could be really interesting to use machine learning models to adjust parameters on chain. If you think about traditional financial applications, there’s really sophisticated modeling of what future parameters should be, based on a lot of historic data.

You know, price data, economic data, and so on. Whereas if you look at DeFi today, we’re very far away from that. Now, I don’t think that DeFi should work the same way as traditional finance, but we do think that injecting some of these historical-data-based and machine-learning-based models and information might be helpful for creating more dynamic DeFi protocols.

So these are just a few ideas of what reflection can achieve for blockchains, and we’re curious to see what the community comes up with.

Finally, to close, I want to talk through where we are today with achieving reflection via Axiom and where we’re headed.

So we’re going to launch on mainnet soon a solution that will allow any smart contract on Ethereum to read historic block headers, accounts, and account storage.

On top of that, we will allow these contracts to do computations via custom-written ZK circuits.

So the flow there would be that a contract can request these data reads, then apply compute, and verify all of it trustlessly on Ethereum.

Our vision for the ultimate form of zk-based reflection on the chain is a little bit more complicated.

First, we’re in progress with adding all historic on-chain data: on top of block headers, accounts, and account storage, that would add transactions and receipts. Second, we think that on top of this data, users should be able to actually simulate the execution of different view functions by executing a ZK VM. So imagine taking a view function from your smart contract and being able to verifiably execute it on any historic state and use the results on chain. Those first two pieces together would constitute putting an entire archive node and indexer inside ZK. So you can imagine that any piece of information from Etherscan or The Graph could be used in your smart contract without additional trust assumptions.

Finally, we think that a piece that will make sense on top of this is to allow users to do arbitrary computations on the downstream outputs of this whole system in a more ZK-native virtual machine. I think this is important for greater efficiency and for actually letting developers do richer operations.

So we’re still in the early days of building this whole system, but we think that the final output could be a very powerful way for developers to use reflection in an on-chain way.

So to wrap up, we are a ZK coprocessor for Ethereum, and we’re coming to mainnet soon. I want to invite everyone to try our demos, and if you’re looking to work in ZK or enabling crypto applications more generally, you can check out our jobs page. Our mission is to empower developers to build powerful, expressive, trust-minimized applications by providing application-specific ZK scaling solutions.