Starknet Decentralized Protocol IV - Proofs in the Protocol

In continuation of post I (introduction), post II (leader elections), and post III (consensus), this post outlines our thoughts and questions surrounding proofs in the decentralized Starknet protocol. The exposition is divided into five parts:

  1. Proof fundamentals
  2. Goals of prover decentralization
  3. Proof production (division and designation of labor)
  4. L1 state updates
  5. Pricing and transaction fees

Proof fundamentals

Given a sudoku puzzle, verifying a solution is easier than solving from scratch. If our goal is to convince people of the statement “this puzzle has been solved”, we can save a lot of computation by having one person compute a solution and then propagate it for others to verify. In this strategy, each computation of a solution becomes a one-time event which does not require replication by society. In a similar vein, Starknet scales Ethereum by replacing heavy L1 computation with lighter (hence cheaper!) L1 verification using STARK proofs computed off-chain. In this section, we outline high-level information about these proofs which directs protocol design.

Complexity: proof vs verification

The STARK proof complexity of a program execution of n steps is n\log n, while the verification complexity is \operatorname{polylog} n.

  • It is slightly more computationally efficient to prove k small jobs of length n than a single large job of length kn, because k \times n\log n \leq kn \times \log(kn).
  • It is much more computationally efficient to verify a proof of a large job of length kn than proofs of k small jobs of length n, because \operatorname{polylog}(kn) \ll k \times \operatorname{polylog}(n).

The benefit from the proof inequality is logarithmic and therefore minor. In contrast, the benefit from the verification inequality is substantial: it reduces a linear dependence on the number of jobs to a logarithmic dependence.
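To make the asymmetry concrete, here is a back-of-the-envelope sketch with purely illustrative cost models (n\log n for proving, \log^2 n for verification), comparing k small jobs with one job of length kn:

```python
import math

def proving_work(steps: int) -> float:
    """Illustrative proving cost model: n * log2(n)."""
    return steps * math.log2(steps)

def verification_work(steps: int) -> float:
    """Illustrative polylog verification cost model: log2(n) ** 2."""
    return math.log2(steps) ** 2

k, n = 1_000, 100_000  # 1000 jobs of 100K steps vs. one job of 100M steps

# Proving: splitting into k small jobs is only marginally cheaper (a log factor).
print(k * proving_work(n) / proving_work(k * n))              # ~0.63

# Verification: one root proof is dramatically cheaper than k separate proofs.
print(verification_work(k * n) / (k * verification_work(n)))  # ~0.003
```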

Recursion

By a recursive proof we mean a proof of verifications of other proofs. Efficient recursion allows us to reap the benefits of logarithmic verification without the drawback of computing enormous proofs. A recursive Cairo verifier has been in production for several months. Recursive proofs can be described by a forest of trees whose leaves are applicative jobs and whose internal nodes are recursive merge jobs. Recursion offers several benefits:

  1. Compression – verification complexity of the root is polylogarithmic in the total length of the leaves. Consequently, submitting a root proof of a tree with many leaves makes L1 verification more economically efficient than verifying each leaf separately.
  2. Concurrency – a queue of jobs can be divided into concurrent queues of smaller jobs which are handled separately and later merged.
  3. Less waiting – no need to wait for the last job in a train to begin proving.
  4. Accessibility – the absence of huge proofs allows smaller machines to contribute to the recursion tree.

More details on benefits of recursion may be found in this lucid post by StarkWare’s head of core engineering.
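As a toy illustration of the forest structure described above (a sketch only, not the actual Cairo recursion pipeline), leaf nodes stand for applicative jobs and internal nodes for recursive merge jobs whose proofs attest to the verification of their children:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ProofJob:
    """A node in the recursion tree: an applicative leaf or a recursive merge node."""
    steps: int                                  # execution steps covered by this proof
    children: Optional[List["ProofJob"]] = None

def merge(children: List[ProofJob]) -> ProofJob:
    """A merge job proves the verification of its children's proofs,
    so the resulting proof covers the sum of their steps."""
    return ProofJob(steps=sum(c.steps for c in children), children=children)

# Four applicative leaves merged pairwise, then into a single root proof.
leaves = [ProofJob(steps=1_000_000) for _ in range(4)]
root = merge([merge(leaves[:2]), merge(leaves[2:])])
print(root.steps)  # 4000000 -- L1 verifies only the root, with cost polylog in this total
```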

Goals of prover decentralization

Having outlined the fundamentals of proofs, we ask ourselves: What is the goal of decentralizing the proving layer on Starknet? By design, performing an L1 state update to a state S will require evidence of consensus on S to prevent hijacking the L1 rollup contract. Since proofs are sound, provers will therefore be restricted to proving the ledger output by consensus. Hence, the only malicious prover behavior is a liveness attack by veto on consensus. The primary goal of decentralizing the prover layer is to ensure (user confidence in) its liveness.

We observe two orthogonal protocol properties that facilitate decentralization:

  1. Permissionless (censorship resistant) – prover participation cannot be censored by stakers
  2. Accessible – the entry barrier for participation is low enough to prevent a corporate cartel

If the entry barrier involves computational capabilities (latency, throughput) or economic efficiency (proof price), then there is a trade-off between accessibility and performance. For example, PoW is more accessible than racing for the fastest proof, since the latter will be completely dominated by specialized corporations (more on this comparison later).

An exclusive protocol is susceptible to liveness failures ordered by some executive decree, e.g an ill-disposed CEO of the dominant proving conglomerate. In such a scenario, sufficiently profitable permissionless protocols will eventually incentivize competing corporations and recover. However, eventual recovery raises an obvious question: what is the expected recovery period? As long as the proof protocol stalls, users cannot perform L1/L2 interaction nor finalize high-valued transactions. Even if provers are highly performant, delayed recovery periods may deter many users.

A theoretical method to cut extended recovery periods is gradual opening: while proofs stall, the entry barrier is lowered until liveness is recovered. In practice, this opening may not improve on the eventual recovery scenario outlined above. True improvement requires people and small companies to maintain proving infrastructure which can be used as a fallback. Unfortunately, we don’t see how to incentivize such maintenance if it is unprofitable in the happy flow, where the dominant provers are live.

What is a good sweet spot in the trade-off between accessibility and performance of the proof protocol? How can it be achieved in practice?

Proof production

With the motivation to decentralize the proving layer in place, we turn our focus to the proof production process, which is necessary for L1 state updates. We analyze the production process through its stages:

  1. Division of labor – what are the jobs?
  2. Designation of labor – who performs each job?

While designation of labor is coupled with decentralization, division of labor may be treated in the centralized setting. This observation suggests a centralized starting point. Before diving in, we’d like to acknowledge the Mina protocol whose division of labor and designation of labor are used in production.

Division of labor

Following the above, let us begin with a centralized proving layer operated by a single entity. The prover must handle the stream of blocks output by consensus (likely recorded as L1 checkpoints).

The prover will determine jobs and their scheduling to optimize for some parameter (compression, latency, price) constrained by its computational limitations and the protocol itself. The optimization favored by the prover will depend on incentives. For example, low proving rewards will cause the prover to spin up cheaper servers to optimize for costs at the expense of latency and/or throughput.

Here’s a representative division of labor problem. Assume a consensus throughput of 100K transfers/sec. Further assume the prover can spin up at most 256 proving machines, each at a fixed hourly rate with a throughput of 1K transfers/sec. Lastly, assume a root proof encompassing 100M leaf steps is profitable for an L1 state update. Given a maximal hourly budget, describe which leaf size and tree structure minimize latency for a profitable root proof. It will only be natural to decide on a division of labor once we have decided which performance parameters we wish to optimize.
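To illustrate how such a problem might be attacked, here is a rough latency estimate under the numbers above plus an assumed per-merge time; it ignores the hourly budget constraint and the pipelining of merges with leaf proving, so it is a sketch rather than a real scheduler:

```python
import math

MACHINES = 256           # maximal number of concurrent proving machines
MACHINE_RATE = 1_000     # transfers/sec proven by a single machine
TARGET = 100_000_000     # transfers a root proof must cover to be profitable (assumed)
MERGE_TIME = 60.0        # assumed seconds per recursive merge level (illustrative)

def root_latency(leaf_size: int) -> float:
    """Crude latency model: prove all leaves in waves of MACHINES machines,
    then merge the leaves pairwise up to a single root."""
    leaves = math.ceil(TARGET / leaf_size)
    leaf_time = leaf_size / MACHINE_RATE          # seconds to prove one leaf
    waves = math.ceil(leaves / MACHINES)          # sequential proving waves
    merge_levels = max(math.ceil(math.log2(leaves)), 0)
    return waves * leaf_time + merge_levels * MERGE_TIME

for leaf_size in (100_000, 500_000, 1_000_000):
    print(leaf_size, round(root_latency(leaf_size)))  # compare candidate leaf sizes
```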

Designation of labor

A decentralized proving layer requires the protocol to designate which provers are eligible for the rewards of a particular job. We dissect the designation process through a series of questions.

  1. Which qualities are sought by the designation process?
  2. Competition or turns (or a combination)?
  3. Which designation process satisfies the above criteria?

Example qualities are proving speed, cost efficiency, and amount of stake. A competition for proving speed takes the form of a race; for cost efficiency, an auction; and for stake, a vote.

Competition has two main advantages: it discovers the best candidate and also incentivizes improvement. Disadvantages depend on the distribution of winners:

  • The “best always wins” model can lead to centralization by disincentivizing participation. Specifically, the concern is that A is ε “better” than B, but always wins. This is especially risky when participation requires effort that will likely go unrewarded, e.g in a deterministic computation race.
  • The “fair share” model (winning chance roughly equals relative performance) is more compatible with decentralization (e.g PoW) but introduces redundancy which increases operation costs. Specifically, if A is ε “better” than B, both have roughly equal winning chances, so both expend effort to participate (the two models are contrasted in the sketch below).
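The sketch below uses hypothetical performance scores and is only meant to show how a small edge translates into winning probability under each model:

```python
import random

provers = {"A": 1.00, "B": 0.95, "C": 0.90}   # hypothetical performance scores

def best_always_wins(scores: dict) -> str:
    """Deterministic selection: A wins every round, however small its edge."""
    return max(scores, key=scores.get)

def fair_share(scores: dict) -> str:
    """Winning chance proportional to relative performance (PoW-like)."""
    names, weights = zip(*scores.items())
    return random.choices(names, weights=weights)[0]

print(best_always_wins(provers))                # always 'A'
print([fair_share(provers) for _ in range(5)])  # mixed winners, roughly 1/3 each
```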

How to best enjoy the benefits of competition without risking centralization?

In contrast to competition, turn-based monopolies resolve incentive problems well, but may not discover the best candidate for each job or incentivize improvement. A glaring question is: what are the turns based on? Stake-based turns allow stakers to choose provers. This does not seem problematic in principle, but the “classical” role of stake is to provide Sybil resistance only for the consensus layer. What are some other interesting ways of realizing turn-based monopolies? How to incentivize improvement in such a model?

A turn-based model can also be competitive, e.g a turn-based auction. For example, Ethereum’s designation of labor for sequencers is essentially a turn-based auction, with the winner being the first sequencer willing to sell block space to a transaction in exchange for its fee.

L1 state updates

The last part of the Starknet protocol is the L1 state update, which finally achieves L1 security by L1 proof verification. To incentivize provers to actually converge (i.e merge instead of only proving leaves), it seems natural to issue prover rewards only upon L1 state updates. Furthermore, L1 logic can regulate the entire recursive tree associated with the submitted root proof. For example L1 logic may check prover identities, issue individual rewards according to proof size, and reject overpriced proofs.

A key distinction between proof production and L1 state updates is the non-computational nature of the latter. Indeed L1 state updaters merely observe the forest of proofs output by the proof production process and follow their own protocol to perform L1 state updates by “copy-pasting” proofs to L1. The absence of computation precludes wasted computation in case of failed L1 state updates. Furthermore, it renders computational capacity irrelevant and consequently averts centralization around strong machines. These benefits suggest a “best always wins” competitive model for L1 state updates. Specifically, an open race where only the first person to perform an L1 state update receives a reward.

Unfortunately, in our current L1 state update mechanism the submission of proofs for L1 verification also includes state-diffs that require a lot of expensive calldata. Since reverted transactions still pay for all of their data, an open race would inflict non-negligible losses on the losers, disincentivizing participation. Is there a clever way to have an open race protocol for L1 state updates in which the losers’ losses are negligible? One idea is to use a commit-reveal scheme where the first committer deposits collateral in exchange for a brief monopoly. To defend against extended DoS, we may exponentially increase the required collateral.
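A minimal sketch of the commit-reveal idea follows, with hypothetical parameters (BASE_COLLATERAL, MONOPOLY_WINDOW) and the actual L1 verification and state-diff logic abstracted away:

```python
import time

BASE_COLLATERAL = 1.0       # hypothetical collateral (in ETH) for the first commitment
MONOPOLY_WINDOW = 15 * 60   # hypothetical seconds of exclusivity granted to a committer

class StateUpdateCommitments:
    """Toy model: commit to a state root by depositing collateral, then reveal
    (submit the proof and state diff) within a brief monopoly window.
    Each expired commitment doubles the collateral required next, to deter DoS."""

    def __init__(self):
        self.failures = 0
        self.active = None   # (committer, state_root, deadline)

    def required_collateral(self) -> float:
        return BASE_COLLATERAL * (2 ** self.failures)

    def commit(self, committer: str, state_root: str, deposit: float) -> None:
        assert self.active is None, "another committer currently holds the monopoly"
        assert deposit >= self.required_collateral(), "insufficient collateral"
        self.active = (committer, state_root, time.time() + MONOPOLY_WINDOW)

    def reveal(self, committer: str, proof: bytes) -> None:
        assert self.active is not None, "nothing committed"
        holder, state_root, deadline = self.active
        if committer == holder and time.time() <= deadline:
            # verify proof against state_root, apply state diff, refund collateral, pay reward
            self.failures, self.active = 0, None
        elif time.time() > deadline:
            # collateral is slashed; reopen commitments at double the price
            self.failures, self.active = self.failures + 1, None
```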

An open race for L1 state updates can refuse proofs unless submitted by the designated prover as described above. However, it is also interesting to only restrict issuance of rewards to designated provers, without rejecting the proofs themselves. This would allow incentivized parties to perform L1 state updates at personal expense using their own proofs. Note that we can always fall back to a stake-based leader schedule for L1 state updates. In this case we simply lose the benefit of updating at the rate of the first incentivized entity.

Pricing and transaction fees

This section discusses the following pair of questions.

  1. Division of funds – how are transaction fees divided between sequencers and provers?
  2. Fee calculation – how do users calculate transaction fees without severely under/overpaying?

Starknet transactions inflict distinct computational burdens on the sequencer and prover layers. Since users send their transactions directly to sequencers, it is natural to facilitate free trade via sequencer transaction fees. Specifically, the sequencer sells block space to users in exchange for transaction fees.

The situation is different for provers, whose jobs are not determined by direct interaction with users but rather restricted to the ledger output by consensus. Hence requiring users to pay a separate proving fee for each transaction feels somewhat contrived. It seems more natural to abstract the division of funds between sequencer and prover away from the user. To this end, the user may specify a single transaction fee for the combined computation inflicted on the protocol, with division of funds occurring at a later stage outside user experience.

This approach is taken by the Mina protocol: transaction fees are paid to sequencers, who subsequently use them to purchase proofs. The protocol is not involved in pricing and incentivizes sequencers to purchase proofs merely by coupling block production with proof production. In this architecture, transaction fees become correlated to the proving complexity of previous transactions rather than their own. It is possible to uphold the correlation between a transaction’s fee and its proof complexity by moving proof production into the critical path, i.e. requiring each sequencer to append to its block a proof of its execution. This architecture further supports a flexible division of labor, while designation of labor is via “best always wins” competition for cost efficiency. It is also possible to move to a “fair share” competition by introducing some non-determinism into proof generation, i.e. forgoing completeness. This idea is explored by this paper and serves as the foundation for consensus in the Aleo blockchain.

An alternative approach to division of funds is to have the users separately specify fees for their transactions’ marginal execution and proof complexities.

Both approaches to division of funds face the fee calculation problem, which exists regardless of proofs in the protocol. Ethereum resolves this problem for sequencing using the EIP1559 method of protocol-level price discovery. Since Ethereum block time is several seconds, the responsive base fee algorithm discovers price changes quickly, promoting a pleasant user experience. If proofs are frequently propagated as part of the protocol, an EIP1559-like method will supply responsive posted-prices regardless of the underlying designation of labor. For example, the turn-based auctions for sequencing can be replaced with open auctions, with a collateral deposit required for every bid. The extent to which resulting prices reflect the market depends on accessibility of the protocol. If proofs are infrequently propagated, an EIP1559-like algorithm will be insufficiently responsive and it may be better to record auctions on L1 for quick censorship-resistant price discovery.
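For concreteness, a minimal EIP1559-style base fee update (illustrative parameters, applied per proof-bearing block rather than per Ethereum block) could look roughly like this; as noted above, its responsiveness depends on how frequently proofs are propagated:

```python
TARGET_STEPS = 50_000_000   # target proven steps per block (illustrative)
MAX_CHANGE = 1 / 8          # EIP1559-style maximal relative base fee change per block

def next_base_fee(base_fee: float, proven_steps: int) -> float:
    """Raise the posted price when blocks are fuller than the target, lower it otherwise."""
    utilization = (proven_steps - TARGET_STEPS) / TARGET_STEPS
    return base_fee * (1 + MAX_CHANGE * max(-1.0, min(1.0, utilization)))

fee = 10.0
for steps in (80_000_000, 80_000_000, 20_000_000):  # two full blocks, then a light one
    fee = next_base_fee(fee, steps)
    print(round(fee, 3))
```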

Summary

The abundant literature on consensus and the proliferation of blockchains are a great guide for designing the sequencing layer of the protocol. For proofs, less is established. We have presented our thoughts and questions about a decentralized proving layer for Starknet and hope to receive insightful feedback. A future post will discuss how to use checkpoints for a high level of fast finality, and subsequent posts will outline concrete protocol suggestions. See you there!

66 Likes

Hey @ilia, thanks a lot for this great post as always

My take on this is to favor turns over competition for the following reasons:

  • In the long run, Starknet will serve as a settlement layer for other rollups that offer more scalability and advantages such as privacy. Therefore, we should focus on building a truly decentralized Layer 2 network that offers resilience and decentralization, making it the perfect settlement layer for Layer 3s and other applications.
  • In terms of culture and ethos, Starknet would benefit in the long run from being as decentralized as possible. Ethereum won the L1 race by being decentralized and open, the community on Starknet should promote the same ethos (and I think it’s already the case).

Overall, I think that Starknet was built to be the most scalable and efficient Layer 2 network. Its vision was to create Cairo instead of trying to achieve EVM compatibility, and to focus on performance and research instead of bullshit marketing. With the incredible tools provided by Cairo and the soon-to-be-open-sourced STARK prover, we have the opportunity to build a great ecosystem. Starknet can serve as a resilient and decentralized network, with applications and other networks settling on it for its decentralization.

Definitely, the way to go in terms of UX

25 Likes

Dear @matteo, thank you for this feedback on turns vs competition! I don’t have much to say now, but you have given us some food for thought.

Regarding the fee structure: I agree that only having one fee dimension is most convenient for users. However, if the user pays for consumption of distinct resources, each of them will need to be priced by the user (or wallet) somehow, in which case it seems we have not really spared the user any hassle. A different avenue is to consider the user not paying for particular resources and instead subsidizing them at the protocol level through inflation. We will look into such approaches if, for example, the total computational costs of proving turn out to be sufficiently low. What do you think?

16 Likes

For both the sequencing and the proving?

At a very high level, I think that inflation to cover proving costs can be an elegant solution. I particularly like EIP1559 as it decouples the price required for validators to include transactions from the market price of including a transaction (what I mean here is how much a user is willing to pay to include a tx). I think that the user should pay depending on the market value of including a transaction, ofc it depends on the proving cost but most importantly on the congestion and demand to include txs, and the protocol should handle the incentivization through its token issuance.

We could make inflation vary with proving demand (less demand → fewer incentives for provers → more token inflation required, and vice versa), but I feel that having a mechanism more similar to EIP1559 with fee burning is better, as we can have deflation if there’s enough demand.

Moreover, I think this solution legitimates having a token. If users are paying gas as a function of sequencing and proving costs, why use $STARK? I’m paying for a service, so I could use a stablecoin (it would also remove some complexity in building the incentives). Now, if the protocol itself balances the incentives based on utilization, having a token makes sense, as it’s not possible to do the same thing with any other coin (it’s not possible to program its issuance as we wish). That way we’re truly creating a network supported by an asset; in the end, tokens are so powerful because they’re programmable, right?

Just throwing out a random thought I had while thinking about this as well; I’m not even sure if it makes sense at all. While the sequencing layer is bound to Starknet, maybe the proving layer can be more general. We know that we’ll have a bunch of L3s and app chains built on top of Starknet; what if those rollups could use the same proving layer? That way those rollups can benefit from decentralized and performant proving, and the STARK token and ecosystem can accrue value horizontally across all the different rollups built on its tech stack.

12 Likes

Dear @matteo thanks for the additional food for thought.

  • I was only referring to the prover resource.

  • I also think it’s best to have users pay the market price. Beyond that, I am undecided on whether provers should usually be paid the market price, or constantly overpaid. I briefly touched on this in the buffer problem post (including a simplified 1559-type mechanism). There is also the factual matter of actual proving costs in relation to the remaining operation costs of the protocol.

  • As far as the token goes, I agree the ability to control monetary policy is a powerful argument in favor of a native token.

  • We have toyed around with the idea of a “universal proving layer”. Perhaps this is a somewhat premature discussion, but let us at least distinguish between proving as an off-chain service and an actual protocol that explicitly involves designated “provers” and possibly enforces on-chain logic. I haven’t given much thought as to when and why the latter is necessary. Do you have any further thoughts?

13 Likes

From what I’ve seen, protocols choose to overpay at the beginning to bootstrap the network when there’s low demand, but the major problem is to find the right balance, so that it doesn’t harm the token in terms of long-term value and centralization. I think it’s also tricky to determine the right incentive: the token will be volatile, so it’s hard to hard-code a predetermined inflation for the token that will always satisfy provers. Do you have any idea how to price the proving cost? Also, do you think that proving cost will follow some kind of Moore’s law, with regular improvements in cost efficiency that would lead to a scheduled decrease of incentives?

I was thinking about the latter, yes. I’ll explore this idea more when I have more time this weekend, but again, as a very high-level overview, I think that any team wishing to build a zk-rollup leveraging the Starknet stack could benefit from having a proving layer that is already decentralized, secure, audited, etc. That would enable Stark to accrue value from its whole ecosystem, and it could be possible to get more composability between the different chains of the ecosystem (my question here is: do we have to wait for proofs to be posted on L1 for messaging if the same proving layer is shared?). I’m missing a lot of complexity here, especially since some rollups might settle on L1, others on Starknet, and maybe on upper layers, but sharing the proving layer might strengthen the interoperability between these chains, and allow Stark to directly accrue value from this multi-rollup design.

7 Likes

@matteo valid points!

  • As far as pricing goes, I’m really out of my depth here, so I’ll just share my primitive intuition. In my opinion the most natural foundation for pricing provers is a market mechanism of price discovery, regardless of whether or not provers will be overpaid. Perhaps a reasonable approximation to such a mechanism is an algorithmic base fee, which may in turn be fed into a minting mechanism to overpay provers. Beyond that, it seems reasonable that proving costs will follow Moore’s law, but my instinct is to pay provers according to demand (i.e market price) as opposed to operation cost.

  • I would love to understand some concrete examples of benefits you have in mind! Also, your mention of “secure, audited” provers raises a question: do you think provers should be protocol-level players with some sort of on-chain reputation system, or at least an on-chain commitment to open-source prover software?

12 Likes

If it’s a market price discovery then I tend to think no need to overpay, since those actors will want to do the job anyway, and it’s a fair price for everyone. The only reason that I think would justify overpaying here may be to have more geographic diversity since provers will have to move to regions with low-cost energy as the market matures (not sure that’s the priority tho).

Just found this article while trying to put together a state of the art on the subject. I think he’s in the forum, so just quoting him: @stonecoldpat

I’ll put someone from our research team at Empiric on this next week, it’s such an interesting design space

Yes, for the former I think that it’s the best way to set the balance between decentralization and performance. For the latter, maybe heavily integrating this factor into the overall reputation system would make sense.

8 Likes
  • Disregarding geographic diversity, there is a concern that fluctuating exchange rates between the token and fiat will disturb the market prices enough to cause unprofitable blocks. See this post.

  • I’ll read the article. Thanks.

  • I am not as convinced by the first statement, so I’d love to hear/read more: why do you think involving provers at the protocol layer achieves a finer balance between performance and decentralization than e.g a free market? Do you suggest somehow enforcing the use of diverse proving software at the protocol level? If not, what prevents all the protocol-level provers from purchasing all proofs from the same off-chain proof provider?

6 Likes

Hey @ilia! Sorry for the late answer,

I understand the buffer problem if the fees paid by the user are directly the fees used to pay the prover (and sequencer). But my take, as I explained before, is that users pay the market price for Starknet’s utilization (this price depends on how congested the network is), and separately the provers are paid based on the proving market price. Those two prices are somewhat correlated, but sometimes a user might pay less than the actual proving price to include a transaction if there’s low demand for Starknet, and that difference is covered by the token’s inflation (and vice versa).
However, this system raises a lot of questions if the proving layer is general purpose.

If we have a reputation system involved, then we can favor things outside the scope of performance and cost. With a free market, each prover will optimize its setup for the limit of performance required by the protocol, in order to have the cheapest costs possible. With a reputation system, we can require a minimum performance, but we can also favor ethos such as open source, as you said, or prover diversity, and maybe other parameters, for instance involvement in governance or relative improvement of a prover’s performance.

11 Likes

Dear @matteo thank you for the many insights! Are you familiar with any promising ideas for reputation systems that are practically applicable in blockchain?

4 Likes

Hey @ilia

That’s a good question; at the protocol layer I think that’s somewhat hard to find.
I’ve seen an interesting approach on a Layer 1 a while ago, but I can’t remember the name nor the paper. I’ll try to find it.

3 Likes


Is there any possibility of making the recursive proof size automatically adjust to the proof cost? Because user transaction fees may end up higher than what users can afford if the cost of the prover and sequencer is huge.

I’m not sure about automatic, however:

  1. We expect the cost of off-chain proof computation to be negligible compared to the gas cost of L1 state updates.

  2. L1 state updates will not be forced, and the updaters will be able to choose when to submit an L1 state update (taking into account several factors, including the amortization of L1 gas over L2 transactions). I think this will provide sufficient adjustment.
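For intuition on the second point, here is a back-of-the-envelope profitability check (all constants hypothetical) of how the fixed L1 cost gets amortized over batched L2 transactions:

```python
UPDATE_OVERHEAD_GAS = 2_000_000   # hypothetical fixed L1 gas per state update (verification etc.)
CALLDATA_GAS_PER_TX = 500         # hypothetical calldata gas per L2 transaction's state diff
FEE_PER_TX = 3_000                # hypothetical gas-denominated fee collected per L2 transaction

def update_is_profitable(batched_txs: int) -> bool:
    """An updater would wait until the batched fees cover the fixed overhead plus calldata."""
    revenue = batched_txs * FEE_PER_TX
    cost = UPDATE_OVERHEAD_GAS + batched_txs * CALLDATA_GAS_PER_TX
    return revenue >= cost

print(update_is_profitable(500))    # False: the fixed overhead is not yet amortized
print(update_is_profitable(1_000))  # True: waiting for a bigger batch pays off
```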

What do you think?

Yeah, the second point is what I mean: the updaters will adjust the recursive proof size according to the market price.
Is the cost of off-chain proof computation really negligible? As far as I know, proof generation is computationally intensive work; is there any benchmark to see the real cost?

@F.F I don’t have any data off the top of my head. Our rough estimates are that proof computation is around 1/100 of the capital costs incurred by stakers. Since we plan to couple block production with proof production, it seems reasonable to consider the compute cost as (at least) a second order consideration.

We recently participated in the Taiko testnet for provers, and it had a competitive proving system: whoever got the proof on chain first won the reward. The results were disappointing for most of the participants, as two addresses ended up gathering most of the proofs, and a third picked up what was left by those two.

It also seems that those three provers used gaming laptops, as they went offline during the night, and that was the time for others to pick up 1-3 proofs.

Moreover, most of the time the proof was completed almost simultaneously, and the last step was to make a transaction on chain. But only one proof can be counted, so one transaction was successful and the others wasted their gas on transactions that eventually failed. So in testnet (Sepolia) it was ok, but when it comes to real ETH spending, I don’t think there will be a lot of people willing to waste electricity, hardware, and ETH on failed attempts.

So, in my opinion, competitive proving will only lead to centralization, as in deterministic computations, like ZK proof generation, the fastest will always win.

Eventually, provers will have to constantly upgrade their hardware, as the fastest always wins. And that is not the PoW situation with probabilistic computation, where the second-best hardware is still capable of mining blocks. No. As soon as there’s better equipment out there, you can throw your server away. At the pace of tech development, this will lead to the need to switch hardware approximately 1-2 times a year. Obviously, the costs of constant upgrades are impossible to cover with the proof fees, as we want to lower them for end users.

That is why I vouch for a turn-based rotation of proof generation with some minimal proving-time threshold that would satisfy quality of service. It is possible to add some kind of slashing mechanism for violating the QoS. The threshold can be defined by the average proving time. It is also possible to introduce bigger rewards for faster proving to incentivize hardware upgrades.

I think that it is better to give the block to some randomly selected small set of N provers, and give most of the reward to the winner and the rest to the losers to keep them in the game. Then another subset is randomly selected, thus making sure that at least some provers will get the job done if others fail. A mechanism like that can be seen in Polygon PoS, where validators take turns and, if the chosen validator messes up, the others get a chance to pick up the block. Or the BNB chain, where they have block producers and candidates.
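A rough sketch of that committee idea (hypothetical subset size and reward split, with the randomness source and the "first valid proof" detection left abstract):

```python
import random

SUBSET_SIZE = 5       # N provers drawn per block (hypothetical)
WINNER_SHARE = 0.7    # winner takes most; the rest is split to keep the losers in the game

def assign_and_reward(provers: list, reward: float, seed: int) -> dict:
    """Randomly draw a small committee for the block, pay the winner most,
    and split the remainder among the other committee members."""
    rng = random.Random(seed)              # in practice: an unbiasable on-chain seed
    committee = rng.sample(provers, SUBSET_SIZE)
    winner = committee[0]                  # stand-in for 'first valid proof submitted'
    payouts = {p: reward * (1 - WINNER_SHARE) / (SUBSET_SIZE - 1) for p in committee}
    payouts[winner] = reward * WINNER_SHARE
    return payouts

print(assign_and_reward([f"prover{i}" for i in range(20)], reward=100.0, seed=7))
```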

Less than a year ago, Ethereum went green overnight by switching from PoW to PoS, but now ZK rollups are bringing back those bad features, such as wasted electricity consumption. To a smaller degree, as everything in L2 is supposed to be smaller than in L1.
Avoiding competitive proving will help avoid all those unnecessary costs for provers and for the environment.

Great experience from Taiko, winner-take-all won’t be a good solution.

@Anton_Gaev_p2p.org thank you for the useful feedback! We are strongly inclined in favor of the turn-based approach, at least at the beginning.