ETHShanghai 2023 | ZK Technology: Layer 2 and Beyond

Mask Network
Jul 17, 2023


At the ETH Shanghai 2023 event, a remarkable panel titled “ZK Technology: Layer 2 and Beyond” captivated the audience with an engaging discussion on leveraging zero-knowledge (ZK) technology to tackle privacy challenges in the blockchain domain.

Elias Tazartes, CEO of Kakarot; Shumo Chu, Co-founder of Poseidon ZKP; Carter Jack Feldman, Founder and CEO of Open Asset Standards (OAS); and Dirk Xu, Director of BD & Strategy at ConsenSys, converged on the panel to unravel the potential of ZK technology in addressing privacy issues, specifically within the realm of Layer 2 scaling solutions and beyond.

Elias Tazartes: In 2018, 2019, I was in Singapore for an exchange abroad. A friend of mine was very into Bitcoin, and he convinced me to get into this industry. Then I became a software engineer. A year later, when I was looking into the Ethereum ecosystem a lot, my friends introduced me to zero-knowledge technology. Now I'm building full time, exclusively in the Starknet ecosystem, and I've ended up with Starknet.

Carter Jack Feldman: I’m the founder of OAS. We’re a layer 2, like our other panelists. We have our own smart contract protocol. So we’re not using EVM. We’re actually a ZK native protocol. So you can imagine all of our contract functions are actually compiled directly into unique circuits. And my crypto journey started when I was a bit younger around 2015 or so. Primarily because I was accepting payments via crypto due to the nature of some of the work that I used to do.

Dirk Xu: I’m now a director at ConsenSys doing business strategy. Before joining ConsenSys I did traditional finance for quite a while: three years of investment banking and four years of private equity investment at large funds. As for crypto, I started investing personally approximately five years ago, and professionally approximately two years ago, with a primary focus on infra.

That was when I started paying attention to zero knowledge, and after joining ConsenSys I was amazed by Linea’s vision of becoming the best high-performance rollup solution on Ethereum, so I raised my hand to become part of the Linea building effort.

Shumo Chu: I had a pretty similar experience but from a different background — a technical and academic one. I got into crypto while doing a PhD in computer science at the University of Washington, Seattle, where I actually organized our first cryptocurrency and blockchain seminar back in 2018.

Then in 2019 I graduated from the PhD program, and I decided that the best way to learn a thing is actually doing the thing. My first job was at Algorand, a cryptocurrency company located in Boston. I spent one year there, learning a lot of cryptography and hanging out with a lot of great cryptographers, including “the father of zero-knowledge proofs,” Silvio Micali himself, and I just went down the ZK rabbit hole.

And a brief intro to Poseidon: we’re building ZK aggregation infrastructure on Ethereum. Think of it this way: we take a batch of any kind of ZK proofs you can imagine — STARKs, SNARKs, we don’t care — and we aggregate that batch of proofs into a single proof. So you get much better throughput and latency, and, more importantly, much cheaper gas costs.

You can think of us as a ZK aggregation engine for Ethereum. That’s what we’re building right now.

Host: First, I’d like to ask Kakarot and Linea. There are already solutions such as zkSync and Polygon. As relative newcomers, how do you intend to compete?

Dirk Xu: Just like in the previous cycle, where other layer ones coexisted for quite a long time, we don’t think layer-two solutions will be engaged in a zero-sum competition in the next bull round. Everybody should work together to better address the scalability issues of Ethereum dApps; we just need to grow the pie through a joint effort. That’s the short answer.

As for Linea itself, I’m obviously willing to share a few crystal-clear highlights, especially if we put ourselves in developers’ shoes. Number one, Linea is backed by all those ConsenSys family tools you’re very familiar with, such as Infura and Truffle, and few service providers have a stronger track record of handling the highest concurrency in the blockchain world historically.

Therefore, with the support of Infura, Linea will absolutely not encounter the serious congestion problems that some others in the space carry today. Number two, just a reminder that ConsenSys also has another product called MetaMask, a wallet that will be progressively and deeply integrated into all aspects of Linea as well. So for all the projects on Linea, this provides an unparalleled opportunity to distribute your products to over 30 million users today. Number three, there is the ecosystem that we initially built during the testnet phase; when you have time, feel free to take a look at our portal, which includes all the well-known DeFi OG projects and a lot of super exciting, innovative Linea-native new dApps. Ultimately we hope to form a long-term attraction for real end users.

Elias Tazartes: I guess the roll-up space is really exciting right now. I think everyone is in Alpha, but adoption is still quite low. Optimistic rollups still have the biggest share of the pie because they are fast, they work well, they’re cheap, and so they’re what users want right now.

The considerations of tech and ZK maxis — that optimistic rollups aren’t safe, whether their fraud proofs are live or not — whatever they claim right now, it’s not relevant to end users. So when I think about ZK rollups, I think the game is still on. Everyone is in Alpha, and everyone is improving every week — improving infra and the sequencer. Right now Polygon’s sequencer is very slow, even Starkware’s sequencer is slow, and everything is going to get way better within the next year. So I think the real game is played within the next 300 days, maybe 600, and the space still needs to stabilize. That’s kind of my take. And there’s no better way to improve things than testing in prod, I guess. So they did a very smart thing: they went to prod while not really ready and called it Alpha so that people don’t shoot the TVL up too high, and then they improve with real data — because when you have a testnet for a year, those aren’t real users and it’s not real money. So everyone needs to go to prod, everyone needs to advertise that this is really, really Alpha, and then everyone needs to stabilize their ZK tech, and that’s good for the space. The more the merrier.

As Dirk said, we grow the pie together and we collaborate, so I think that’s great. I look forward to us — Linea, Kakarot, all the ZK EVMs — going to prod early next year, or maybe even before then. It’s going to be great.

Shumo: I just want to pull people out of this a little bit and think about the bigger picture. I think there are two kinds of scaling, right? One is horizontal scaling, which is just building another layer: you basically replicate what the EVM does on layer one in a layer two, a layer three — there’s kind of an infinite tower of layers you can build.

What we’re building is what we call vertical scaling. We think that in the years to come, ZK verification will be a huge bottleneck for the entire Ethereum ecosystem — there will be tens of thousands of ZK proofs that need to be verified on-chain. So we focus on this specific problem; we want to do one thing, and do it well.

Basically, we scale ZK verification through proof aggregation on Ethereum, and we see huge demand for that. If you are a ZK DID builder and you want to attest people’s identity on-chain, every single person needs to post a ZK proof. Currently, roughly speaking, that costs about 200k to 400k gas, and unfortunately, if you’re using a STARK, it’s worse — right now it’s about 3 million to 5 million gas.

So if you do the calculation with today’s gwei price, you can see how much ETH you need to pay. We did a rough calculation: at that scale you pay something like $20 in gas fees per proof, and for all the ZK DID projects, that’s your user acquisition cost. We think that by providing ZK aggregation technology and building an infrastructure middle layer — which we can deploy on Ethereum layer one or any layer two — we can bring on-chain ZK to the masses. That’s what we’re building. And I’m glad I’m not competing with the other layer twos; that’s also one of the good things about what we’re building right now.
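As a rough illustration of where that $20 figure comes from, here is a back-of-the-envelope calculation. The gas numbers are the ranges quoted above; the gas price and ETH price are assumed values for illustration only.

```python
# Back-of-the-envelope cost of verifying one ZK proof on L1.
# Gas figures are the ranges quoted in the discussion; gas price and ETH price are assumptions.
GWEI_IN_ETH = 1e-9
gas_price_gwei = 30      # assumed gas price
eth_price_usd = 1_900    # assumed ETH price

def verification_cost_usd(gas_used: int) -> float:
    return gas_used * gas_price_gwei * GWEI_IN_ETH * eth_price_usd

print(f"SNARK (~300k gas): ${verification_cost_usd(300_000):.2f}")    # roughly $17 per proof
print(f"STARK (~4M gas):   ${verification_cost_usd(4_000_000):.2f}")  # roughly $230 per proof
```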

Carter Jack Feldman: I guess I would first say that your witness generation needs to be scalable in order for your proving system to be scalable, and fundamentally, with the EVM, that’s just not going to happen. Within a block I could deploy a contract in one transaction and then have another transaction in the same block call back into that contract, so there has to be a clear, ordered execution.

So when we talk about scaling, I think it’s very important to ground ourselves in what we think scaling actually is. Are we talking about 1,000 TPS, 2,000 TPS? For the vast majority of web-scale use cases, that’s not very much. So we have a fundamentally different state model. I think there are definitely use cases for the EVM, but within the ZK space I would say there are also two different groups. Right now we’re the only member of group one: we have a global state and we can also scale horizontally, so we can guarantee that transactions within a single block cannot conflict with each other and cannot affect each other’s execution.

This is just something that doesn’t work with the EVM, because within any transaction I could potentially read or write to any storage slot on-chain, I can make external calls, or whatever. The only way to decide where state is going to conflict — as we’ve seen with projects like Aptos — is just by running the transactions, and when you do that and try to back off, it kills your scaling.

They claimed 100,000 TPS, and the max they’ve ever seen is something like seventy. I think it’s very interesting, especially as recursive proving gets a lot faster; it’ll be interesting to see where the limit is. We’ve done a lot of this ourselves, and we see there’s an attraction point that forms, because you can always reduce the degree of the proof by recursively proving over and over again.

But there’s a kind of minimum number of constraints you need for any proof system. So it’ll be very interesting to see. I’m very glad to hear that there are more ZK infra startups coming around — I guess that’s what I would say.

Host: On programming languages — if I understand correctly, Elias, you’re using the CairoVM. I’m wondering what’s behind that decision.

Elias Tazartes: Essentially, I guess there are two ways to build a ZK EVM. You can build one by rewriting all the opcodes as circuits and then proving execution by taking every transaction and proving it — Scroll is doing this, and that’s what Linea is doing.

So you build a backend around the EVM that allows you to prove any transaction that’s been executed. For Kakarot, what we’ve been doing is emulating the EVM on a zero-knowledge virtual machine, the CairoVM. The CairoVM is a virtual machine that outputs provable execution traces: whatever you write in the Cairo programming language, you can afterwards prove with STARKs.

So we wrote the EVM in Cairo, and if you write an EVM in Cairo, you have a ZK EVM. Either you build a ZK virtual machine and then build the EVM on top of it, and you have a ZK EVM; or you take the EVM and build a way to prove everything that happens inside it, and you’ve also got a ZK EVM. That’s where we’re at. Our choice was to add a new approach to the ZK EVM space — the more the merrier.

It means more research and applied research, and we also truly believe the CairoVM is one of the fastest, if not the fastest, and most promising ZK virtual machines. In that sense, we’re looking to scale our solution and hopefully get the best results.

Carter: A quick question: why use a ZK VM at all? For instance, what we do is just generate circuits for each contract function; then you can identify which proof you’re verifying by hashing the verifier data, and you get much more succinct proofs with way fewer constraints than a massive VM circuit.

I’m just curious, because it seems like you’ve taken the opposite approach — going more abstract, building on top of even more layers — and there may be a good reason behind it. Why do you think a VM is the right solution rather than, say, exposing delta Merkle proofs — start and end roots — in the public inputs of the proof?

So why use a ZK VM? For any kind of transaction, essentially what we’re trying to do is prove that a state transition was legal. We have a start and an end, like a Merkle proof, and we can use constraints to connect the two, and we’ll only be able to generate a proof if our delta Merkle proof’s values correspond to what is acceptable based on the logic of the circuit. So why do we need to emulate virtual machines when we can just go symbolic? What we do is symbolically evaluate the code of each contract function and build a specific circuit for it. And because recursive verification is now very fast, we can just recursively verify — prove that this verifier data exists in the contract’s whitelisted Merkle tree — and we’re in business. So I’m just curious about the cost-benefit analysis you’re making: is the argument that recursion will forever be slower than the larger circuits?

Elias Tazartes: No, I don’t think so. So maybe there are some subtleties that I have not explained.

You could prove every opcode — as you said, every function — then evaluate which ones a transaction uses and generate a proof. The CairoVM is actually quite slim, and you can also do recursive proving. There’s no circuitry per se, and we already use recursive proofs. For example, currently a batch is approximately 100 blocks, a leaf is one block, and we recurse over them. We do recursive STARK proofs, so hundreds of leaves amount to just one STARK proof of the whole batch of 100 blocks, and then we submit that to L1.
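A minimal sketch of that batching shape, assuming hypothetical `prove_block` and `aggregate` functions (stubbed here with hashes) in place of a real STARK prover and recursive verifier circuit:

```python
import hashlib

# One "leaf" proof per block, combined pairwise until a single proof
# covers the whole batch. Hashes stand in for real proofs to show the tree shape.
def prove_block(block: bytes) -> bytes:
    return hashlib.sha256(b"leaf:" + block).digest()

def aggregate(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"node:" + left + right).digest()

def prove_batch(blocks: list[bytes]) -> bytes:
    layer = [prove_block(b) for b in blocks]
    while len(layer) > 1:
        if len(layer) % 2:                       # pad odd layers
            layer.append(layer[-1])
        layer = [aggregate(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]                              # one proof for the whole batch, posted to L1

batch_proof = prove_batch([f"block {i}".encode() for i in range(100)])
```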

Carter: So what I was going to say is not that we turn opcodes into gates. It wouldn’t be one circuit per opcode, and we’re not going to do a recursive verification for each one — that would take very, very long; that would be ridiculous. We do a symbolic evaluation of the code, so you can imagine that you don’t actually need to implement it as a virtual machine: a zero-knowledge proof — a zero-knowledge circuit, rather — is not a Turing machine.

It is a vast vector of constraints, usually some kind of polynomial constraints. So you can imagine constraining the public inputs by symbolically evaluating the code — you can build something like a binary syntax tree. For some cases you can do that directly; for others you have to use special tricks. But for some pure functions it’s very easy to evaluate: in the very base case, I can generate an expression for what the outputs of a function should be just by symbolically evaluating it.

So in those cases my circuit is just a bunch of constraints. I don’t have all the opcodes, and I don’t have to do Merkle proofs whenever I’m accessing something from RAM. Anyway, I guess that’s the difference in strategy. But it’s very interesting, because you guys have gone the abstraction route, which I think makes a lot of sense — in the future you can just kind of pick the winner of the ZK wars and deploy to them.

So I think that makes a lot of sense. It’s a very, very interesting approach.

Shumo: One question for Carter. What do you mean by symbolic evaluation? Are you guys generating the circuit on the fly?

Carter: We generate the circuit at compile time. When you compile a contract, there are a bunch of different contract functions, right? We generate the circuit for each function, hash the verifier data, and make a Merkle tree. The Merkle root gets deployed on-chain, and whenever you call a contract function, during the recursive verification you also have to prove that the verifier data you input exists in that contract’s tree.
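A minimal sketch of that whitelisting idea: hash the verifier data of each per-function circuit, commit to a Merkle root, and later prove membership. The function names and leaf contents below are hypothetical.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    layer = leaves[:]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])          # duplicate last leaf on odd layers
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# Hypothetical verifier data, one entry per compiled contract function.
verifier_data = [b"transfer-circuit-vk", b"mint-circuit-vk", b"burn-circuit-vk"]
leaves = [h(vd) for vd in verifier_data]

onchain_root = merkle_root(leaves)           # deployed with the contract
# Calling `transfer` later means proving, inside the recursive verifier,
# that h(b"transfer-circuit-vk") is a leaf under `onchain_root`.
```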

Elias Tazartes: Maybe that’s what I want to understand: when you compile a Solidity contract, you essentially end up with bytecode, and that bytecode represents opcodes.

Carter: So we don’t compile Solidity — first of all, we support JavaScript contracts — and we are able to do symbolic evaluation. There are certain cases where you have indeterminacy in the number of constraints per circuit; however, because we have a Merkle tree of all the whitelisted circuits, we can just generate different permutations of the circuit and then fall back to recursive verification — say, if you have a very large amount of logic inside a loop that we can’t automatically vectorize. There are also tools — I don’t know if you’re familiar with SMT-LIB or Z3 — which let you generally condense or simplify this kind of logic. It’s also pretty good at vectorizing now, because we can compile to SMT-LIB, compile to LLVM, then run LLVM opt — which will vectorize and has the best optimizations — and then lift back up to Z3 and back up to our constraint system.
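For a flavor of that “condense or simplify this kind of logic” step, here is a tiny example using the z3-solver Python bindings. The actual OAS pipeline through SMT-LIB and LLVM is not public, so this only sketches the general idea of simplifying an unrolled expression and checking that the cheaper form is equivalent.

```python
from z3 import BitVec, Solver, simplify, unsat

x = BitVec("x", 32)

unrolled  = ((x + 1) + 1) + 1 + 1     # a loop body written out term by term
condensed = x + 4                     # the form you would rather constrain

print(simplify(unrolled))             # constant folding collapses the chain of additions

s = Solver()
s.add(unrolled != condensed)          # search for any input where the two forms differ
assert s.check() == unsat             # none exists: safe to emit the cheaper form
```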

So that pipeline works pretty well. We’re not generating assembly, though; we use frameworks like Triton. I come from the security and reverse-engineering community, and we’ve had symbolic evaluation for x86 and ARM for a long time: you give it an x86 routine and you can generate an AST (abstract syntax tree) for a register.

Then you can ask: what is EAX? Give it to me symbolically, in terms of my virtualized memory. So you can generate an AST for any execution you want, and once you have an AST it’s of course very easy to generate constraints for it — you just do a depth-first evaluation of the tree. So it’s not so bad. We’re not dealing with the EVM, though.
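A toy depth-first pass from an expression AST to arithmetic constraints, in the spirit of that last point. The AST node types and the constraint format are illustrative only, not OAS’s actual intermediate representation.

```python
import itertools
from dataclasses import dataclass

@dataclass
class Var:   name: str
@dataclass
class Const: value: int
@dataclass
class Add:   left: object; right: object
@dataclass
class Mul:   left: object; right: object

def compile_expr(root):
    """Depth-first walk of the AST, emitting one constraint per operation."""
    constraints, counter = [], itertools.count(1)

    def walk(node):
        if isinstance(node, Var):
            return node.name
        if isinstance(node, Const):
            return str(node.value)
        left, right = walk(node.left), walk(node.right)
        out = f"w{next(counter)}"            # fresh wire for this node's result
        op = "+" if isinstance(node, Add) else "*"
        constraints.append(f"{out} = {left} {op} {right}")
        return out

    walk(root)
    return constraints

# f(x, y) = (x + 3) * y  ->  ['w1 = x + 3', 'w2 = w1 * y']
print(compile_expr(Mul(Add(Var("x"), Const(3)), Var("y"))))
```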

We don’t believe in using registers or constraint numbers that are larger than our field size because it makes everything horrible and slow. But that may just be our humble opinion.

Shumo: That makes sense. This reminds me that I spent roughly six months tuning Z3 performance at a certain point in my life. My take is that the devil is in the details. Z3 is this kind of magic solver: when everything works well, it works well, and sometimes you have to deal with very unpredictable solving times during compilation.

Carter: We only do it for larger circuits, because you’re absolutely right — it can be very unpredictable; the time is not clearly bounded. But if you have any suggestions, they’re welcome. Please help.

Shumo: I think one thing to consider here is the other kind of symbolic execution engine — for example, LLVM-based ones. They’re less powerful, and maybe the result isn’t as good, but you get a more predictable runtime — a more predictable solving time — and a symbolic execution engine that’s more closely coupled to the compilation pipeline if you’re using LLVM.

Have you guys thought about this kind of approach?

Carter: We’re actually not using LLVM directly — we’re not like LLVM or KVM. We just use a tool called Triton; it has one-to-one parity from SMT-LIB to LLVM. It’s just the laziest way possible to get good vectorization for loops, which is very important, and it lets us do arithmetic very cheaply. You can imagine a case where I’m adding one in a for loop a bunch of times.

I can actually just constrain that to be a multiplication, and LLVM just makes that easier. It’s kind of a hack. For simple contract functions we actually don’t even run any optimization, because proving takes four milliseconds. But I think it’s a long-term decision we made as a project: it’s better to have 10% better proving time and 50% longer compilation time, because you only have to compile once. We’ve been looking at some other solutions — we use Triton — but if other simplification strategies come along, please let us know. We’re always looking.
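A toy illustration of that cost argument: constraining a loop that adds one per iteration the naive way costs one constraint per iteration, while the condensed form costs a single constraint regardless of the trip count. The constraint strings are illustrative only.

```python
N = 1_000  # loop trip count

# Naive: one addition constraint per iteration of the loop.
naive = [f"acc_{i + 1} = acc_{i} + 1" for i in range(N)]

# Condensed: the whole loop collapses to a single constraint.
condensed = [f"acc_{N} = acc_0 + {N}"]

print(len(naive), "constraints vs", len(condensed))   # 1000 constraints vs 1
```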

Host: I know there’s this battle of SNARKs vs. STARKs. What does everyone think about it?

Shumo: Maybe I can open the conversation. In my view it really doesn’t matter, for two reasons. Firstly, in the modern zkSNARK community we have developed a modular way of thinking about how a zero-knowledge system in general should work. You can roughly divide it into a three-stage pipeline. The first stage is arithmetization: you can have R1CS, you can have PLONKish — used by PLONK, Halo2, and a lot of zkEVM teams — and you can have STARK-ish arithmetizations; but the arithmetization is really not tied to the rest of the stack. Then you have the polynomial IOP — think of this as a compiler that compiles these arithmetizations into polynomials — and there is the PLONK IOP and other kinds of IOPs. And then you have a polynomial commitment scheme, for example FRI, KZG, and others.

So for example, if you are in a PLONKish system and you just replace KZG with FRI, you get something like Plonky2. You can call that a STARK, because the entire security relies on the Fiat-Shamir transformation — the random oracle model — and hash functions, so essentially it is a STARK. If you want a different tradeoff point, then depending on the circuit you accept a longer proving time but get constant verification time and a smaller proof — that’s the SNARK side. In my view these are all just technologies; you should use each one in the right place, and it really doesn’t matter to end users.
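A conceptual sketch of that three-layer modularity, with component names taken from the discussion; the labels are descriptive only, not a real API.

```python
from dataclasses import dataclass

@dataclass
class ProofSystem:
    arithmetization: str   # how the computation is expressed (R1CS, PLONKish, AIR/STARK-ish)
    poly_iop: str          # how it is compiled into polynomial identities
    commitment: str        # how those polynomials are committed to (KZG, FRI, ...)

# A PLONKish system with a pairing-based commitment (small, constant-size proofs).
plonk_kzg = ProofSystem("PLONKish", "PLONK IOP", "KZG")

# Swap only the commitment scheme for a hash-based one and you get a
# Plonky2-style construction, which is STARK-like in its security assumptions.
plonky2_like = ProofSystem("PLONKish", "PLONK IOP", "FRI")
```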

But from a long-term point of view, quantum resilience is a consideration. I’m also very bullish on SNARKs eventually getting quantum resilience, because there are new tricks. So I don’t think quantum resilience, in either the short term or the long term, will be a super big issue for either the SNARK or the STARK community, in my view.

Carter: Plonky3 will support SNARKs and STARKs and a bunch of different IOPs — basically, pick your favorite tradeoff. In the future it’s probably just going to be that people use one library to define an abstracted circuit, and then you can target whichever scheme you like.

Elias Tazartes: Yeah, I agree with those views. We don’t really care. I guess SNARKs got heavily used because the proof size is very small, and you essentially want that because gas costs are very high, so it’s always nicer to verify SNARKs on L1. I guess they also arrived a bit sooner.

And there are many contingent things: SNARKs got a lot of hype because some of the libraries around them were the first to go kind of mainstream, so people learned them. Then they became these names that encompass a lot of things, and since there are two of them, we kind of invented the fight between them.

But overall, even with STARKs you can trade things off — increase the proving time to reduce the proof size; you can do many things. For example, for Starknet and Kakarot, when we do recursion over STARK proofs, we have a small proving time but a large proof size for the leaves, and then a long proving time for the final step, so that we reduce the proof size as much as possible.

We chose not to have a SNARK at the last step. For those who don’t know, Polygon’s zkEVM, for instance, proves their EVM and transactions with STARKs, but at the last step of the recursion they actually wrap their STARK in a SNARK, so that the proof size is very small, and then they verify that on mainnet. So with that example, we can see that it’s very modular.

You can mix and match, as Carter said; you can pick your tradeoffs between proving time, proof size, commitment scheme, quantum resilience, and the ability to upgrade. For example, if it commits to L1 mainnet, you would want to avoid upgrading it every week so as not to hurt your credibility with users. It’s the all-boring software-engineering thing: pick your tradeoffs and know in advance what you’re getting into. But it’s all about picking a technology, using it as a tool, and really bearing in mind that we’re doing everything for end users.

Dirk Xu: Totally agree — it can be super modular, because obviously both tech routes have their pros and cons. For Linea, we’re quite relaxed about this, because we actually have our own in-house proof technology that is different from other ZK EVMs in terms of the arithmetization scheme: Linea basically relies on lattice-based cryptography and error-correcting codes. We think it can technically be a little bit faster than the popular elliptic-curve cryptography, and slightly more resilient to quantum computing attacks, things like that. Most importantly, we hope that by putting together the right components we don’t have to face the usual trade-off: with traditional hash functions, you have to choose between being fast to run and being SNARK-friendly.

That’s our take: we have our in-house technology, and hopefully it can deliver the same results for us all.

Elias Tazartes: Do you mean you run it as a SNARK or a STARK, or neither?

Dirk Xu: It’s neither.

Carter Jack Feldman: In general, though, I think maybe there will be a winner, because right now we’re mostly doing CPU-based proving. It may be the case — I don’t know very much about hardware; maybe you guys know more than me and can tell me I’m wrong — that certain strategies are easier to implement as ASICs or FPGAs, and that because of the energy and cost-saving benefits, those just grow to dominate the space. Also, shout out to Polymer DAO for publishing their Plonky2 Ethereum verifier — that was very helpful for us in our early days.

Elias Tazartes: Maybe the hardware acceleration firm that produces the cheapest chips will make one proving scheme really fast and cheap — a hardware-induced win. That would be fun. Actually, there’s no way to predict it. I guess there will be a winner, but there’s no way to predict which; it’s not as if we already know. Even Nova is in its infancy, and people are saying, “Oh, folding schemes are going to take over,” but we don’t know. So I think it’s very exciting, and it also goes back to the roll-up discussion: everything is still up for grabs, in a good way — not in an exclusive way — everything is still applied research. It’s good that we have an application, because usually research takes tens of years precisely because there is no application. Now there are lots of funds pouring into crypto, and that money essentially turbo-charges the research. That’s kind of my take. I feel very grateful to blockchain because it’s accelerating the mathematical research of cryptography.

Shumo: I do have some slightly opinionated views on folding vs. STARKs. I think one thing people need to realize is that at the end of the day, a ZK proof can work like magic.

If you think about it, the magic is that it’s just a very small number of bytes: you can compress so much computation, no matter how enormous the computation is. The STARK side of the argument is: hey, we’re using a smaller field, which fits the L1 and L2 caches on a CPU very well, so you get a lot of acceleration. On the SNARK side, the argument is that we have this big field and a pairing-friendly curve, so we have these amazing homomorphic properties — you can essentially keep adding things into this magic elliptic curve group, it doesn’t explode, it’s still a single group element — so you get something very dense. I think both sides are valid, but at the end of the day, if you’re thinking from the point of view of the entire system, I think everyone should realize the bottleneck of the entire system is bandwidth. That’s my view, because bandwidth is really limited by the speed of light, and also limited if we assume everything in the blockchain space is going to build on this universal base layer, ETH. So I think the fundamental limit is going to be bandwidth, and I wouldn’t say which side — STARK or SNARK — will win. But we are very firmly on the SNARK side, purely because we want to solve the last-mile problem: we want to build proof aggregation, and we’re very happy to compress STARK proofs to bring more savings to STARK projects via SNARKs.

Elias Tazartes: When you say bandwidth, do you mean Ethereum throughput, like TPS, or do you mean bandwidth like WiFi?

Shumo: I mean general consensus bandwidth, because consensus requires synchronization of global state, and that is inherently limited by the communication bandwidth. There is an information-theoretic limit and a physical limit to this.

Carter Jack Feldman: For each block, you can imagine a pseudo-random number generator assigning transactions to people, and we can prove the transactions we assign to them in parallel — you could imagine them as different parts of a tree. We haven’t released our white paper yet, so I don’t want to say too much, but because of recursive verification, the actual amount of data per second that each proving node has to receive stays manageable even for our chain as a whole to hit hundreds of thousands of transactions per second — which we have done on AWS, at great expense. So the solution to the data bandwidth problem is log base two; that solves a lot of these issues.

Because the one good thing, as you mentioned, with SNARKs is that you put a lot in and you get something small out. As long as we’ve got a lot of those logarithmic compressions occurring in the data stream, we can fight the speed of light with exponentially smaller amounts of data coming from an exponentially growing number of nodes.

Host: Dirk, what do you think about that solution?

Dirk Xu: For a rollup solution, the tech route itself is just one lever; there are also many, many other levers we can pull to guarantee the ideal developer experience and user experience. From Linea’s perspective, the tech route is just one focus area, and we’re also trying to maximize our efforts in other areas, such as liquidity bootstrapping for DeFi protocols and the usability experience for all the open entertainment and open social apps. The tech-route discussion in our house is just one area where people spend time, and we do think there are many, many other levers that should attract people’s attention.

I just mentioned two examples. For all the DEX protocols, given the lack of visibility around a Linea native token, we have to inject liquidity from both external liquidity providers and our in-house provider; and for all those open entertainment dApps, whether gaming or social, we have to use other tech tools and dev tools to make sure the experience is very consistent and very stable. Those are more on the operational front, which we can see. As for the tech route itself, I’m obviously not the best person to speak on its behalf, but we have come up with an innovative solution that is neither SNARK nor STARK.

Host: On ZK-enabled privacy — what do people think about this?

Carter Jack Feldman: We can do a KDF in the circuit. I can also have local proving — because we can do things in parallel, I can prove locally — so sensitive data is only processed on the user’s machine. Then we can have a per-block or per-update KDF: we encrypt inside the circuit, and what’s published on the chain is encrypted. So we are very bullish on privacy.
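A toy sketch of that client-side flow: a per-update key is derived on the user’s machine and the data is encrypted before anything is published; in the real system, a locally generated ZK proof would attest that the encrypted update was computed correctly. The key derivation and toy stream cipher below are illustrative only, not production cryptography.

```python
import hashlib
import secrets

def derive_key(user_secret: bytes, update_index: int) -> bytes:
    # Per-update key derivation ("a per block or per update KDF").
    return hashlib.pbkdf2_hmac("sha256", user_secret,
                               update_index.to_bytes(8, "big"), 100_000)

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy stream cipher: XOR against a SHA-256 keystream (illustration only).
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

user_secret = secrets.token_bytes(32)            # never leaves the user's machine
ciphertext = encrypt(derive_key(user_secret, 42), b"sensitive state update")
# Only `ciphertext` (plus a locally generated proof) would be published on-chain.
```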

Shumo: I think fundamentally what we will do is scale ZKP verification, and a side effect is that your ZKP verification cost will be much lower. We’re actually imagining orders-of-magnitude drops in ZK verification costs, and this is great news for privacy-focused projects, especially in DIDs.

That verification cost is basically what users pay for each ZK transaction that happens on-chain. So we do think it’s kind of a chicken-and-egg problem: people keep saying privacy isn’t there yet, but we think that’s partially because we’re lacking the infrastructure to truly lower the ZK verification cost on-chain. That’s exactly what we’re working on.

Dirk Xu: When we think about the privacy benefits that ZK brings, it’s actually exactly one of the reasons why ZK solutions can potentially be differentiated from the optimistic approach. The use cases can be implemented at a very specific level. For instance, NFT owners can verify their ownership without disclosing too much sensitive information or compromising their identity.

So in the long run this can mitigate the risk of fraud and identity theft, and ultimately foster trust and confidence among ecosystem participants. And to take it one step further, this kind of value proposition can really translate into a very disruptive marketplace user experience in the long run — that’s our view. Secondly, another example would be anti-counterfeiting across a lot of use cases.

For gateways, for identity systems, for multiple asset classes in the crypto world — that can also be one of the value propositions. Overall, we think ZK is arguably one of the most important tech approaches for bringing the security of the entire crypto space to new heights in the long run. But we’re just in the middle of that, and fingers crossed for the long-term outlook.

Elias Tazartes: I’d like to add something to what we’re saying. The CEO of Zama once said something cool that I hadn’t realized before.

If you’re using ZK for privacy, the prover — a centralized program — is still going to access all the data. This is something we have to bear in mind, unless you’re doing fully homomorphic encryption.

Unless you can prove it doesn’t — okay, Carter, maybe you have something to say, so I’d love to hear it. Local proving is very cool. It’s not there yet, but it’s going to be.

So maybe let’s take the point of view of a rollup. For a rollup, you don’t have local proving, and there is a centralized program that’s going to access all your data — unless the field of fully homomorphic encryption stabilizes and matures, which I guess should take a couple of years; I’m very bullish on it. But your encryption only goes so far when the prover has access to all your data while it’s proving things.

Carter: Local proving does work, and recursion is fast, so we don’t have to wait for homomorphic encryption. It works.

Shumo: I think a lot of people have the impression that cloud machines must be much faster than your local machine; it’s actually exactly the opposite. I’m an engineer and I actually did the benchmark: more specifically, on AWS xlarge machines the proving will be slower than on your M1 MacBook if you actually run the numbers. I think the bottleneck right now is actually the browser, because currently, if you are doing the proving in MetaMask, or even in a Snap, the browser is using this horrendously slow virtual machine called WASM. Folks are bringing WebGPU into the browser, and I do think this will be a game changer for local proving, by quite some margin. It’s an engineering problem — I would say a pretty difficult engineering problem — but people know how to solve it. If we can get more attention from the industry and more resources put into this client-side proving space, I do think a lot of these problems can be solved.

Elias Tazartes: But if you have to aggregate all those proofs, there may also be a problem of, say, networking, because for a rollup, if you do local proving, there’s at least a problem of standardization — of everyone adopting a standard together. The idea would be that someone who wants to do a transaction proves it locally, then submits the transaction they want together with the proof, and it’s accepted. But currently, this is not how it works.

Carter Jack Feldman: I’ll send you our white paper next week and you can read all about it. Yeah, you have a standard. We have everything end to end, and it’s formally verified, so we have everything expressed as constraints for all the different sub-proofs. Really fast recursion is everything, and we’re very excited that there are people like Shumo working on solutions that can help recursion get faster and proof compression get faster.

Also, on the WebGPU stuff: there have been some experiments, but I haven’t seen or heard of anything successful yet — there have only been slow provers so far. But who knows? Maybe someone will make a breakthrough.

Elias Tazartes: I fully agree. Either fully homomorphic encryption arrives in the coming years, or, in months, your local proving solution does. What I’m saying is essentially that we need to see it in production. And I think there’s a decent amount of time between the white paper and production — widely used by users, with a standard that all the browsers support and local proving that is fast. That’s why I’m saying we’re not there yet.

And that’s also why most chains have chosen the scalability side of ZK rather than the privacy side — partly for that reason. There are many challenges in doing an encrypted chain, as you mentioned, so I think it’s definitely going to come, in a couple of years.

Carter Jack Feldman: Two months — also, it’s not so hard. I’ll give you an ETH if there’s any chance we don’t launch by then. We built the platform first and then wrote the white paper, so we’re already in a good place. I think we only have one significant disadvantage: we’re not compatible with the rest of the ecosystem, so we’re focusing more on bringing in new applications, new use cases, and new developers rather than trying to cannibalize the existing ones, because we don’t support the EVM — parallel systems like ours can’t support the EVM. So I think ourselves and Kakarot and Scroll and all of these other great projects are just happy to work with each other. I don’t think it’s necessarily going to be a cage match over scalability or a use case.

There are probably going to be technical limitations for different use cases, and trade-offs.

Elias Tazartes: Coming from Starknet, I feel you. The CairoVM is not EVM-compatible, and we’ve had to grow an ecosystem of builders who learn Cairo, learn a whole new toolbox and toolchain, and a different way of thinking as well. So good luck, and we can talk about it, because the network effect is both an overrated and an underrated thing. When you think about the L1 wars, everyone kind of lost because of developer tooling — Algorand included, because I used to be there — and everyone says there are only 15,000 Solidity devs, so surely it can’t be that hard to rebuild an ecosystem. So I’m happy to see it. Starknet is doing a good job. We’re doing a good job.

Carter Jack Feldman: Why would you rebuild it, though? Why not just do what we do and support vanilla ES6 JavaScript?

So all the JavaScript tooling that doesn’t require features beyond, I guess, maybe the Proxy object — weird JavaScript features introduced after ES6 — all of that prior tooling just works. Why did you guys make a brand-new language? I mean Cairo — what was the point of that? Why did you do that?

Elias Tazartes: I think it’s essentially a language that’s fully featured and thought out for the CairoVM, which is like a Turing-complete CPU that is fully provable. We could have done two things: either build a compiler from Rust or any other language down to Cairo assembly, or just write Cairo. We started with Cairo assembly, which is the lowest level of that CPU; now we’ve added layers on top of it, so there’s an intermediate representation.

So in the end, as for why we did it: because it was the first thing we did.

Carter Jack Feldman: I don’t mean to throw shade, but I do want to call attention to this. Starknet has some of the greatest mathematical minds in the field of zero knowledge, but perhaps it’s more of a MATLAB kind of crowd versus a hardcore engineering kind of crowd.

Elias Tazartes: The research team is very math-heavy, but all the builders are actually engineering-heavy, and they’re the ones directing some of the choices linked directly to blockchain matters. So that’s the end of it — it works as a cool symbiosis, as we call it. When we think Cairo is not a practical language, we call it out, and the fixes do come in. One of the fixes is actually this intermediate representation, where we’re essentially going to be able to do Rust to Starknet and Solidity to Starknet itself.

Carter Jack Feldman: Makes sense.

Host: What do you think is the biggest problem in the ZK scene right now, and where do you think we’ll be with it in five years’ time?

Elias Tazartes: It goes back to what I said: everyone is on Optimism, Arbitrum, and Polygon PoS because it just works — it’s low cost, somewhat fast TPS — and people don’t yet care about security, in some weird sense, which is fine. So the engineer’s mindset that says the best tech will win is not true. I think one of the biggest problems is the reality check: the best tech will not win; the best user experience, and kind of narrative, will win. And it will take a year or two to stabilize. That’s kind of my take, so we are now in the applied-research-to-prod era. In five years it’s going to be money time for ZK: it’s going to be everywhere, even in web2. ZK is going to take over because, as Shumo said, it’s this incredible magic of compression, and it has to be used everywhere, as long as it’s battle-tested and we trust that you cannot break a ZK proof. Then it’s going to be the new way to just zip computation in a really meaningful way, and it’s going to add so much composability to the web and to blockchain. So it might take one or two years to go into full power mode and then take over — and as for optimistic rollups:

The optimistic way of thinking will not be a part of the world anymore. I kind of suspect Optimism — they’ve already started to do it, hinting at giving the OP Stack the ability to do zero-knowledge proofs — and Arbitrum will just switch to ZK proof systems. So I think everyone is just going to switch to ZK, and it’s going to be great.

Dirk Xu: From our observations, the largest headwind or challenge is that we just don’t have the first-mover advantage anymore, right? As was mentioned, developer advocacy is a really meaningful challenge here. And our take would be — well, anyway, I think all the participants in this conference have found that the consensus around Ethereum itself is strengthening rather than weakening.

Rollups holistically matter a lot, but the ZK approach got a later start and slower development. If we look from the perspective of developer friendliness, we believe the emergence of zkEVMs at this point can truly make developers’ lives easier: everyone can continue to use the tools and services they used when developing smart contracts on the ETH mainnet. The only difference now is that we can technically achieve better throughput and user experience. I also want to mention that, looking at the mid to long term, continuing to use the ETH token rather than an L2 token to coordinate production relationships on the rollup matters.

We believe that might be the most sustainable way in terms of governance in an uncertain market, and we have already come up with a lot of good ideas to solve the incentive problems during the bootstrapping stage. That’s our observation.

Shumo: I just want to mention one number here — a number I truly care about: the gap between native execution, say running a program on a CPU or GPU, and how efficiently you can run a prover for the same computation. Over the past five to ten years that gap has kept shrinking, and I think we just need another one, two, or three years to reach the tipping point, where proving is only, let’s say, two or three orders of magnitude slower than native execution. Then we get a lot of adoption for the entire ZK space. That’s essentially why so many people are so hyped about folding, and for good reason.

With folding, the recursion overhead is about 4k constraints — think about that, 4k constraints. It’s truly amazing; that’s why so many people are excited about folding. So for us, the biggest challenge is that, as Elias said, we want to be production-ready. We want to ship our v1 pretty soon, actually, on testnet in the coming months, but we also want to keep up with this super high-speed train of ZK technology development. There’s a kind of balance between adopting new research and shipping code. And I don’t think the best tech will win — I think the best tech actually shipped will win. You have to ship code; there’s no other way.

Carter Jack Feldman: ZK is going to make the internet a fairer place for users: users will be less transparent with their data to platforms, and platforms will be more transparent with their logic and the way they operate — the platforms we all rely on every day.

I think ZK is going to eat the part of the world’s compute that is very valuable — probably the succinct stuff, or the simple but important transactions, first, and move up from there. There’s also one number I want to draw everyone’s attention to: the price of an NGINX zero-day versus the amount of money that can be gained by exploiting the Arbitrum One sequencer.

Once there is a crossover in that number, an attacker can start flooding all of the watchtowers with as many transactions as they want, at a rate higher than they can process them, until that queue reaches one day.

Then ordering is gone, and I could be submitting transactions on behalf of others, especially on Nova. On Nova — I don’t know if anyone knew this, but people with a reasonable botnet can take down the PlayStation Network for 20 days and hold it offline with their infrastructure. Do we really think these optimistic rollups have the infrastructure to deal with an actual attack, from someone who’s trying to benefit from stealing hundreds of millions of dollars?

I’m not very optimistic about their future. Friends don’t let friends keep assets on optimistic rollups.
