StarkNet Account Abstraction Model - Part 1

It is mentioned above that the validation function implemented by an account contract will be limited to some number of steps, but I couldn't find any exact description of this. In what way is this function limited?

For instance, suppose we have an account contract that implements the IAccount interface as well as the IERC721 interface. We could then mint an NFT to the public key that initialises the account contract, and is_valid_signature could check whether the signing public key is the current owner of that NFT. This could turn an account into a more general blockchain primitive where account ownership can be transferred independently of the wallet; transferring ownership of the account would become as simple as transferring the NFT. (Transferring ownership could also be accomplished by simply changing the stored public key in the account contract, but I'm trying to understand whether the NFT approach is feasible.) Is this right, or have I misunderstood the concept?
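
To make the idea concrete, here is a minimal Python sketch of the validation logic, assuming a hypothetical `TokenGatedAccount` whose signer must own a specific NFT. The class and method names are illustrative stand-ins for the Cairo contract code, not actual StarkNet APIs:

```python
# Sketch: an account whose signer is whoever currently owns a specific NFT.
# Plain Python modelling the contract logic; not StarkNet / Cairo code.

class MockERC721:
    """Minimal NFT registry: token_id -> owner public key."""
    def __init__(self):
        self.owners = {}

    def mint(self, token_id: int, owner: int) -> None:
        self.owners[token_id] = owner

    def transfer(self, token_id: int, new_owner: int) -> None:
        self.owners[token_id] = new_owner

    def owner_of(self, token_id: int) -> int:
        return self.owners[token_id]


class TokenGatedAccount:
    """Validation checks NFT ownership instead of a fixed stored public key."""
    OWNERSHIP_TOKEN_ID = 0

    def __init__(self, nft: MockERC721, initial_owner_pubkey: int):
        self.nft = nft
        # On initialization, mint the "ownership" NFT to the initial public key.
        self.nft.mint(self.OWNERSHIP_TOKEN_ID, initial_owner_pubkey)

    def is_valid_signature(self, signer_pubkey: int) -> bool:
        # Signature verification elided; assume signer_pubkey was recovered
        # from the transaction signature. The check is pure ownership.
        return self.nft.owner_of(self.OWNERSHIP_TOKEN_ID) == signer_pubkey


# Transferring the NFT transfers control of the account.
nft = MockERC721()
account = TokenGatedAccount(nft, initial_owner_pubkey=0xA11CE)
assert account.is_valid_signature(0xA11CE)

nft.transfer(TokenGatedAccount.OWNERSHIP_TOKEN_ID, new_owner=0xB0B)
assert not account.is_valid_signature(0xA11CE)
assert account.is_valid_signature(0xB0B)
```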

27 Likes

I guess you could do something like that.
But do you see a problem here with the validate function being limited in steps?

23 Likes

No problem with limiting validation steps; in fact I think having this constraint is actually a good decision. I went ahead and hacked together a simple version of a token-gated account here. I have another question though: how do you intend to collect the fees? Is the sequencer going to call some function in the account contract to collect them (I am assuming the fee will be an ERC20)? I was thinking of an implementation of sponsored accounts, and knowing the fee collection mechanism would help a lot.

23 Likes

What are the restrictions preventing nonce abstraction from being implemented at the account contract level (Nonce abstraction · Issue #354 · OpenZeppelin/cairo-contracts · GitHub)? I think this would make certain transactions much more efficient compared to the current naive way nonces are implemented in account contracts.

23 Likes

Not sure what you mean by restrictions.
It’s a design choice, to simplify things like guaranteeing transaction uniqueness.

24 Likes

Is it possible to use another token for fee payment after the alpha phase? Is that one of your overall goals?

24 Likes

That is one of the goals for the paymaster mechanism.
You would need to have a paymaster contract that supports this of course.

25 Likes

Hello,

I’ll make a case for nonce abstraction, or at least for the possibility of more flexibility in nonce management.
Nonce management is hard when you need to send a large number of transactions from a single account, and you can’t multicall them.

  • At least four projects I know have faced this issue (the StarkNet Edu team, the Empiric team, the Rules team and the Snapshot team). And in the Ethereum world it is a widespread problem 1 2 3.
  • If your backend has various processes that need to access the same resources / wallet, it is quite hard to figure out which nonce to use. Using an incorrect nonce will get your transaction rejected or stalled.
  • The choice is basically “have all your workload executed sequentially, or have multiple wallets”. But having more wallets, dealing with more keys and adding more permissions to your smart contracts is not necessarily a good thing.

I think nonce abstraction will be very useful in the future, but it is even more so today:

  • There is no notion of a mempool, so if a transaction is waiting for inclusion but hasn’t been included yet, it is impossible to detect it.
  • There is an implicit guarantee of ordering when I send transactions to the feeder, but any glitch on StarkWare’s end will mess up the ordering and invalidate all the transactions I sent. This will get worse with optimistic parallelization.
  • A transaction with an incorrect nonce is not only bounced back, it is prohibited from being re-sent. Meaning that if I craft a batch of 1k txs, send them at once, and a glitch / error happens at tx 5, all 995 remaining transactions will be discarded AND I have to craft them again and change their payload.
  • This means that currently it is not safe to send more than 1 transaction per block using the same account. That’s not much.

A relatively simple solution would be to have the nonce be a two-dimensional object. Instead of using [nonce], the wallet/OS would use [index, nonce] to check the uniqueness of a transaction (see the sketch after the list below).

  • The OS can still validate that, for a given [index], [nonce] is sequentially incremented.
  • It allows operators that need to fire a large number of transactions in parallel to assign indexes to queues and deal with nonces separately.
  • It comes at no cost for regular users. Most users will use index 0 and increment the same nonce, using no extra storage in their contract compared to a single nonce.
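
A minimal Python sketch of this two-dimensional check (the class and method names are purely illustrative, not actual StarkNet OS code):

```python
# Sketch: two-dimensional [index, nonce] uniqueness check.
# Each index behaves as an independent, sequentially incremented nonce queue.

class TwoDimNonceValidator:
    def __init__(self):
        # index -> next expected nonce; an index costs no storage until first used.
        self.next_nonce = {}

    def validate_and_bump(self, index: int, nonce: int) -> None:
        expected = self.next_nonce.get(index, 0)
        if nonce != expected:
            raise ValueError(f"index {index}: expected nonce {expected}, got {nonce}")
        self.next_nonce[index] = expected + 1


validator = TwoDimNonceValidator()
validator.validate_and_bump(0, 0)  # a regular user: index 0, sequential nonces
validator.validate_and_bump(0, 1)
validator.validate_and_bump(7, 0)  # a separate queue progresses independently
```
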
30 Likes

Thanks for posting this, Henri. I completely agree and want to continue the conversation by:

  1. adding a bit more color on why nonces are used in general and which of their properties might be relaxed
  2. describing our use case
  3. describing a different potential solution (not mutually exclusive).

Why use nonces?

As I think about it, there are two reasons to use nonces: (A) they prevent replay attacks, such that resending a payload that has already been sent will not result in a valid transaction; (B) they guarantee ordering and completeness of transactions, i.e. sometimes we want to make sure that a second transaction we send can only be accepted if the previous transaction has already been included.

Our use case (at Empiric)

In our case at Empiric (oracle data feeds), we only need property A but not property B. We have many data publishers that sign their own data and then publish it directly on-chain. This data is published at a high frequency and by many different entities, each running on distributed, highly redundant systems.

We need the replay attack guarantee, because otherwise resending past valid data update transactions could lead to draining the funds from our data partners via recurring gas fees.

We do not however need the ordering and completeness guarantees, in fact we actively don’t want them. As Henri describes, it is difficult to ascertain the correct nonce to use given potentially pending transactions and the possibility that transactions sent a few seconds ago may or may not be valid. For instance, a transaction may seem valid locally, but fail because of insufficient gas if the fee changes from the estimate_fee call to posting the transaction (this has happened multiple times). Older data is automatically excluded by logic in our contracts, so we are not worried about old transactions being resent.

Potential suggestion

We currently use timestamps as nonces and simply check that the last stored nonce is less than the nonce of the new transaction being validated. This guarantees transaction uniqueness but allows many transactions to be sent simultaneously and only the most recent transaction will be included.

If I understand the proposed nonce validation by the sequencer properly, checking a strict inequality (new_nonce > stored_nonce) rather than new_nonce == stored_nonce + 1 would have identical complexity, and the former (proposed here) is strictly more flexible. Contracts that want the ordering and completeness guarantee could still check it (similar to the way account contracts check both that and uniqueness today).
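
A minimal sketch of this inequality-based check in Python (illustrative only, not Empiric's actual contract code):

```python
import time

# Sketch: timestamp-as-nonce validation. Accept any nonce strictly greater than
# the last stored one; ordering of concurrently submitted txs is deliberately
# not enforced, so only the newest transaction of a batch needs to land.

class TimestampNonceValidator:
    def __init__(self):
        self.last_nonce = 0  # single storage slot, overwritten on every update

    def validate_and_update(self, nonce: int) -> None:
        if nonce <= self.last_nonce:
            raise ValueError("rejected: nonce is not strictly greater than the stored one")
        self.last_nonce = nonce


validator = TimestampNonceValidator()
validator.validate_and_update(int(time.time()))           # accepted
try:
    validator.validate_and_update(int(time.time()) - 60)  # replay / stale data
except ValueError as err:
    print(err)
```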

PS: Henri’s multi-dimensional nonce structure would also work in theory, in that we could just use the index as a timestamp and ignore the second dimension of the nonce. However, it would be suboptimal in that we wouldn’t overwrite a single timestamp nonce but would instead add a new storage slot at every update, which would be quite expensive over the long run.

22 Likes

I like the timestamp solution a lot, but how do you ensure the timestamp sent by the user is correct?
What if “by mistake” the user sends a timestamp 2h in the future? Does that mean their account is then “blocked” for two hours?
And if you rely on the get_block_timestamp available in the library, it shouldn’t be used, since (at the moment) it can be changed (on purpose?) by the sequencer.
Or do you just assume the user is sending the correct timestamp?

26 Likes

You are correct in that currently we do not have any checks on the timestamp. For now we just assume the user sends the correct timestamp (we need this assumption for the oracle anyway as all data our partners send is timestamped itself, regardless of transaction validation), with two caveats:

First: If a user sent a timestamp 2 hours in the future (not an unlikely scenario given timezones etc.), the account would in practice be blocked, but in theory you could just move to using timestamp + 2 hours as the nonce and keep going (blocked in practice because SDKs and wallets might not make this easy for most users without resorting to manipulating raw transactions).

Second: My understanding is that (quoting @Ohad-StarkWare): “when StarkNet is decentralized, timestamps in the frequency of seconds will be enforced at the L2-consensus level”. This was from a conversation 1.5 months ago; I would be curious to know if this is still the plan.

22 Likes

Hi all,

We worked on some schemes to help improve the “nonce” approach:

It involved two protocols:

Multi Nonce: it requires two values and creates an index → nonce mapping. It allows you to define a set of queues and enforce sequential ordering within each queue. It is handy if you want to maintain some ordering but still support concurrent transactions.

Bitflip: it also requires two values. Again it is an index → nonce approach, but the goal is to “flip the bits” of the stored nonce. So you can send X concurrent transactions per queue while requiring minimal storage updates.
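
A minimal Python sketch of the bitflip idea, assuming each index maps to one storage word used as a bitmap and each transaction claims a previously unset bit (illustrative, not the actual protocol code):

```python
# Sketch: bitflip nonces. Each index maps to one storage word used as a bitmap;
# a transaction supplies (index, bit) and is valid only if that bit is unset.

class BitflipNonceValidator:
    def __init__(self, word_bits: int = 251):  # e.g. roughly one field element
        self.word_bits = word_bits
        self.bitmaps = {}  # index -> int used as a bitmap

    def validate_and_flip(self, index: int, bit: int) -> None:
        if not 0 <= bit < self.word_bits:
            raise ValueError("bit out of range")
        word = self.bitmaps.get(index, 0)
        mask = 1 << bit
        if word & mask:
            raise ValueError("replay: this bit was already flipped")
        # One storage update per transaction, and no ordering is imposed.
        self.bitmaps[index] = word | mask


validator = BitflipNonceValidator()
validator.validate_and_flip(0, 3)  # transactions in the same queue can land
validator.validate_and_flip(0, 7)  # in any order, each consuming one bit
```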

I’d recommend, if possible, abstracting away the nonce concept to allow people to experiment with different approaches. You could implement a basic “sequential nonce” that is plug-and-play, but allow others to implement different schemes (like the above) to really benefit from it.

25 Likes

Another approach to nonce abstraction might be to have the system maintain, per (account) contract, the last T nonces accepted (call it LastNonces).

Then, when a transaction is validated, we assert that the nonce in the transaction is higher than the minimum of LastNonces.
Of course, a successful transaction execution will update LastNonces, so it’s always at most T nonces long.

So now we can send several transactions, at most T, to be accepted in parallel.

The size of LastNonces, i.e. value of T, can be decided per account.
And we can even have API in the account contract to allow the owner of the account to change it (with some built-in limitation for safety), allowing for more transactions to be executed in parallel.

This is a bit less robust than the structured nonce approach (2 numbers). For example, I can’t separate transactions into different queues.
On the other hand, it’s simpler, and keeps the transaction API the same.

thoughts?

21 Likes

Will it prevent replay? If T is 3 and, say, LastNonces is currently 1,2,3 and I submit 4 transactions with nonces 4,5,6,7, then the sequencer can pick any of them. Say it picks them as 6 (LastNonces becomes 2,3,6), 5 (3, 6, 5), 4 (6, 5, 4) and 7 (5, 4, 7). Now 6 can be replayed. I think that keeping the ‘max’ nonces instead of the ‘last’ nonces solves this.
Who pays for the T storage cells? Can T be 1M?

22 Likes

We should still maintain order (like today), so 5 and 4 can’t be executed after 6.

I think that’s what you meant by “max”, but it’s not instead, it’s in addition to the last nonces.

In other words, the suggestion is to add a predicate to today’s logic (which asserts that the executed nonces are monotonically increasing).
Today it simply looks at a single number (i.e. T = 1); with this we’ll keep some history.
So the logic can be: accept only if transaction.nonce > min(LastNonces) and transaction.nonce is not in LastNonces.

Maybe another way to look at it is that each contract has T “available slots” to allow for transactions to execute, and while a transaction is not included, it’s “holding” a slot, and when included in a block (by some definition of finality), the slot is “freed”.

24 Likes

After thinking about it some more, I think this needs to be refined a bit.
In this suggestion we keep T last largest nonces per account (T configurable per account).

And the condition to check for a given transaction’s nonce (n) is whether n > min(LastNonces) and n is not in LastNonces.
This, combined with the maintenance of the T largest nonces, should give us a safe solution.
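
A minimal Python sketch of this refined rule (illustrative only), replaying the scenario from the earlier comment to show that keeping the T largest nonces blocks the replay:

```python
import heapq

# Sketch: LastNonces = the T largest nonces accepted so far for an account.
# A transaction is valid iff its nonce is not in LastNonces and, once the
# window holds T entries, is greater than min(LastNonces).

class LastNoncesValidator:
    def __init__(self, t: int):
        self.t = t             # window size, configurable per account
        self.last_nonces = []  # min-heap holding the T largest accepted nonces

    def validate_and_update(self, nonce: int) -> None:
        if nonce in self.last_nonces:
            raise ValueError("replay: nonce already accepted")
        if len(self.last_nonces) >= self.t and nonce <= self.last_nonces[0]:
            raise ValueError("too old: nonce not greater than min(LastNonces)")
        heapq.heappush(self.last_nonces, nonce)
        if len(self.last_nonces) > self.t:
            heapq.heappop(self.last_nonces)  # drop the smallest, keep the T largest


validator = LastNoncesValidator(t=3)
for n in (1, 2, 3, 6, 5, 4, 7):        # up to T transactions can be in flight
    validator.validate_and_update(n)
try:
    validator.validate_and_update(6)   # the replay attempt from the example above
except ValueError as err:
    print(err)                         # rejected: 6 is still inside the window
```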

21 Likes

Understanding the rationale behind this limitation, could it still be used to spam the network with txs that can’t be validated?

22 Likes

What limitation exactly are you referring to?

20 Likes

Regarding all these suggestions, I would like to mention that when nonces are not “continuous”, sequencers will have more opportunity for MEV, since they can censor some transactions and insert others, and there will be no way to invalidate that.
In the case of [index, nonce], this can happen if there are some transactions with index 0 and some with index 1. The sequencer can ignore a tail of the transactions with index 0 and still insert those with index 1.
In the case of the largest-nonces approach, ignoring transactions is even easier.
Contrast this with a single sequentially incremented nonce: a sequencer cannot include a tx with nonce n without also including all txs in the pool with nonce less than n.

26 Likes

I believe the benefits of nonce abstraction outweigh its side effects. One use case I feel would benefit a lot from a 2D nonce is session keys. A session key → nonce mapping would allow multiple authorised entities to call the wallet in parallel without waiting to get the correct nonce.

I feel that leaving the nonce implementation to the account can open up interesting nonce variations, while sequencers pick up the transactions that are most beneficial for them.

23 Likes