Decentralization: simplest suggestion

Problem Definition: In common blockchains each block is produced by a potentially different miner, and the time between two blocks is relatively short (on the order of seconds to minutes). This property provides reasonable decentralization (and thus censorship resistance) and a good user experience (after at most a few minutes, one has fairly high certainty that their transaction has been added to the chain and will stay there).
It is challenging to find a robust construction with similar properties for L2 solutions using validity proofs, as generating a validity proof might take a long time (on the order of tens of minutes to hours).

Suggestion Scope: This suggestion provides some level of decentralization while optimizing for simplicity and robustness over UX and time-critical censorship resistance. We start by formally describing the suggestion, and then analyze its properties.


Suggestion:

  1. The StarkNet core smart contract (on L1) manages the leader election (which could be PoW-based, PoS-based, or another popular scheme).
  2. Each leader controls a timeslot (on the order of a few hours; in particular, longer than the time it takes to generate a proof) during which only they can invoke state_update on the StarkNet contract. They can make as many such invocations as they choose (similar to producing several blocks in the same timeslot).
  3. The validity of each state_update invocation, and its consistency with previous invocations, is enforced by the StarkNet core contract (as is done today with a single sequencer).
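As a rough illustration, the timeslot and validity checks of steps 2-3 can be modeled in a few lines of Python. All names here (`StarkNetCore`, the 4-hour slot, the hash-based "proof") are hypothetical placeholders, not the real contract: an actual core contract runs a STARK verifier and a concrete leader-election scheme.

```python
import hashlib

SLOT_DURATION = 4 * 60 * 60  # assumed: a few hours, longer than proof generation


class StarkNetCore:
    """Toy model of the L1 core contract's leader/timeslot logic."""

    def __init__(self, leaders):
        self.leaders = leaders          # output of some leader-election scheme
        self.genesis_time = 0
        self.state_root = b"\x00" * 32

    def current_leader(self, now):
        slot = (now - self.genesis_time) // SLOT_DURATION
        return self.leaders[slot % len(self.leaders)]

    def state_update(self, sender, now, prev_root, new_root, proof):
        # Only the slot's leader may update; a leader may call this many times.
        assert sender == self.current_leader(now), "not the current leader"
        # Consistency with previous invocations.
        assert prev_root == self.state_root, "stale previous state"
        # Validity: verify the proof (stubbed with a hash check here).
        assert self.verify_proof(prev_root, new_root, proof), "invalid proof"
        self.state_root = new_root

    def verify_proof(self, prev_root, new_root, proof):
        # Placeholder: a real contract runs a STARK verifier instead.
        return proof == hashlib.sha256(prev_root + new_root).digest()
```

The point of the sketch is that all coordination lives in the contract: a sequencer only needs to read `state_root` from L1, with no p2p protocol between sequencers.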


Pros:

  • Simplicity: no p2p protocol is required to synchronize the sequencers. It is sufficient for a sequencer to read the state from L1 (and, if applicable, from an alternative data-availability location).
  • Robustness: no need to design or analyze any protocol to establish one sequencer's trust in the other counterparties.
  • High Throughput: assuming the time to sequence transactions, split them into blocks, and transmit the proofs to L1 is negligible compared to the time it takes to generate a proof, it is easy (given enough servers) to distribute the proving load across many servers, thus proving many blocks in parallel with a total latency of ~1 proof. We assume that after this latency there is enough time left to submit all those proofs to L1 (each proof is ~5M gas, the current Ethereum gas limit per block is ~30M, and the block time is ~13 sec, so one can submit proofs at a throughput of ~27 proofs/min).
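The gas arithmetic behind that throughput claim can be checked directly (the 5M / 30M / 13 s figures are the estimates quoted above):

```python
PROOF_GAS = 5_000_000          # estimated gas to verify one proof on L1
BLOCK_GAS_LIMIT = 30_000_000   # Ethereum block gas limit (as quoted above)
BLOCK_TIME_SEC = 13            # average Ethereum block time (as quoted above)

proofs_per_block = BLOCK_GAS_LIMIT // PROOF_GAS            # 6 proofs fit per block
blocks_per_minute = 60 / BLOCK_TIME_SEC                    # ~4.6 blocks per minute
proofs_per_minute = proofs_per_block * blocks_per_minute   # ~27.7

print(int(proofs_per_minute))  # -> 27, matching the ~27 proofs/min figure
```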


Cons:

  • Censorship Resistance: a malicious leader can censor chosen transactions for an entire slot, which is relatively long (on the order of hours).
  • UX: although a trusted leader can report in real time which blocks they are proving, this protocol gives users no guarantee that this is actually the case. In particular, a user needs to wait a relatively long time (the proof latency, which can be hours) before they are sure their transaction was indeed executed.
  • UX even with trusted data: a transaction submitted close to the end of a leader's timeslot (closer than the time it takes to generate a proof) might not be picked up by that leader (who will not start a proof that cannot be ready within their timeslot), and might be picked up by the next sequencer only in the next timeslot (as the next sequencer cannot know whether the transaction was served by the current sequencer until the current timeslot ends). This results in "dead time zones" (which can last for tens of minutes or even hours) during which a user cannot know whether their transaction will be served, even if they trust both the current and the next sequencer.
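To make the dead-time-zone effect concrete, here is a toy timing model with assumed numbers (a 4-hour slot and a 1-hour proving latency — both hypothetical; the text above only says "hours" and "tens of minutes"):

```python
# Hypothetical numbers: a 4-hour slot and a 1-hour proving latency.
SLOT_HOURS = 4.0
PROOF_HOURS = 1.0

def earliest_confirmation(t):
    """Hours until a tx submitted t hours into the slot can be confirmed.

    A tx arriving later than SLOT_HOURS - PROOF_HOURS falls in the dead
    zone: the current leader won't start a proof that can't finish in
    their slot, and the next leader only picks the tx up next slot.
    """
    if t <= SLOT_HOURS - PROOF_HOURS:
        return PROOF_HOURS                    # current leader proves it
    return (SLOT_HOURS - t) + PROOF_HOURS     # wait for next slot + proving

print(earliest_confirmation(1.0))  # -> 1.0 (normal case)
print(earliest_confirmation(3.5))  # -> 1.5 (submitted in the dead zone)
```

Note that in the dead-zone branch the user has no visibility at all until the current slot ends, even if they trust both sequencers.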

Future Improvements:

  • There is work in progress to reduce the proving latency. This might improve this solution significantly, as all the issues raised stem from today's long proof-generation latency.

Regarding the 3rd con: can't we have the current leader declare (commit to) the last txn they included in the batch, even before they prove it, and let the next sequencer pick up from there?

It still relies on trust, but it might improve the UX in this case.

I agree. It just shifts the complexity a bit, and it isn't the "simplest" anymore.
For example, trust in whom?
Say the first sequencer states that they did not include transaction t, and the next one, based on this statement, commits to include it, and the user now trusts this will indeed be the case.

Now let's assume the first sequencer did not keep their promise and actually did include transaction t, so the next sequencer won't include it. Who would you say let the user down? Was it the first sequencer, for actually including t, or the second, for not including it as promised?

This is not a very big issue IMO, and it is actually applicable in the real world. But IMO it is better not to think of it as part of the "simplest solution"; it can be implemented as a heuristic on top of it (with no proof of security/robustness).


Could crLists be a solution to mitigate some of the cons of this approach? See "PBS censorship-resistance alternatives" on HackMD.

It definitely adds complexity, but a simplified version with one lead sequencer and only a handful of attesters creating a crList could be a valuable addition. Side benefit: the attesters can also double as reserve sequencers if the lead fails, or maybe even help parallelize proving - though these are more complex to add later.


Maybe this could be solved by having the StarkNet core contract switch provers in the case of censorship. A prover gets a time slot of X unless they commit some kind of foul, and then it switches to the next prover. The question is how you know a foul was committed. I think this could work only in extreme cases of censorship. It brings you back to the sort of problem Optimistic rollups solve with staking, the challenge period, and all that. And it will be very hard to detect things like MEV (e.g., reordering transactions in the prover's personal interest).

MEV will always be an issue in this design when there is only one prover per timeslot, as they can reorder transactions, and things become very discretionary. But I'm guessing the motivation for MEV decreases as well, since holding a large enough timeslot can be quite lucrative financially, so there is less need for other sources of income.

Just an imaginative thought - it could be cool if we could adjust each prover's time slot according to their level of service (e.g., a first-class prover gets 5 hours, a 2-hour premium over the 3-hour default slot). The community could then examine the QoS each prover provides and vote on the best provers, who would get better economic conditions (might be a little complicated :))

Anyway, the simplicity of this suggestion sounds like a big advantage. I would imagine StarkNet starting with this mechanism as a delegated-prover mechanism, where each prover is initially chosen and voted in by the protocol, and can also be blacklisted for bad behavior or low quality of service.

Introducing p2p and parallel proving might be a step down the road once the proving latency is significantly optimized.