Starknet Standard Interface Detection

Here is the SNIP Draft’s first version:

cc @sgc-code @yoga_Braavos

What do you mean by consistent? What does the interface id type definition have to do with selectors?

In ERC-165 they define the interface id as:

the XOR of all function selectors in the interface

Since selectors are bytes4 in EVM world, that makes the interface id a bytes4 too.

Following the same logic, I think the interface id should be a felt252 in Starknet.
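For reference, the ERC-165 scheme from the EVM world is reproducible in a few lines. This sketch XORs the nine well-known ERC-721 function selectors (each the first 4 bytes of keccak256 of the signature) and recovers ERC-721's published interface id:

```python
# ERC-165: the interface id is the XOR of all function selectors.
# ERC-721's registered interface id is 0x80ac58cd.
from functools import reduce

ERC721_SELECTORS = [
    0x70a08231,  # balanceOf(address)
    0x6352211e,  # ownerOf(uint256)
    0x42842e0e,  # safeTransferFrom(address,address,uint256)
    0xb88d4fde,  # safeTransferFrom(address,address,uint256,bytes)
    0x23b872dd,  # transferFrom(address,address,uint256)
    0x095ea7b3,  # approve(address,uint256)
    0xa22cb465,  # setApprovalForAll(address,bool)
    0x081812fc,  # getApproved(uint256)
    0xe985e9c5,  # isApprovedForAll(address,address)
]

interface_id = reduce(lambda a, b: a ^ b, ERC721_SELECTORS)
print(hex(interface_id))  # 0x80ac58cd
```

Since selectors are bytes4, the XOR is also bytes4; the analogous construction on Starknet would naturally produce a felt252.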

Here is an update on the elements we’ve been discussing in other channels (the Draft has been updated accordingly):

  • Generic types have been removed from the standard because they can’t be part of the external API of a contract.
  • Arguments of external functions must use a default serializer for the contract to be SNIP-5 compliant. This supports easier interoperability, allowing calling contracts to trust that two different targets exposing the same interface behave compatibly regarding serialization. The standard for Tuples, Structs, and Enums is the same one we get from using the derive attribute, which is: the concatenation of the serialized fields for Structs and Tuples, and the concatenation of a felt252 acting as a variant identifier and the serialized value for Enums.
  • Types defined in corelib as extern types will be treated as core types, while structs and enums defined in corelib will be treated as structs and enums (e.g. u256 is represented as (u128,u128)).
  • The interface id is the XOR of the Extended Function Selectors of the interface, where each extended selector is the starknet_keccak of the function signature (ASCII encoded).
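The serialization rules above can be modeled in a few lines. This is an illustrative Python sketch of the described behavior, not the actual corelib serializer: structs and tuples serialize to the concatenation of their serialized fields, and enums to a felt252 variant identifier followed by the serialized value.

```python
# Illustrative model of the default Cairo serializer described above
# (not the actual corelib implementation).

def serialize_u256(value: int) -> list[int]:
    # u256 is a corelib struct { low: u128, high: u128 }, so it serializes
    # as the concatenation of its fields: low 128 bits, then high 128 bits.
    return [value & ((1 << 128) - 1), value >> 128]

def serialize_enum(variant_index: int, payload: list[int]) -> list[int]:
    # A felt252 acting as the variant identifier, then the serialized value.
    return [variant_index] + payload

# u256 value 2**128 + 5 -> low = 5, high = 1
assert serialize_u256((1 << 128) + 5) == [5, 1]
# An enum's first variant carrying the value 42 serializes as [0, 42]
assert serialize_enum(0, [42]) == [0, 42]
```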

Also, in ERC-165, a part of the specification is that the supports_interface method must return false for the interface 0xffffffff. As far as I understand, this was included for backward compatibility, to determine that contracts predating the standard don't implement the interface. This is not included in the SNIP because it seems unnecessary for the current Starknet state.

With the latest changes about the evolving syntax:
Do you think it is useful to add self in the computation?

In my opinion, yes. Since the self type is not always the same (@ContractState or ref ContractState), I think it should be part of the signature like any other parameter. Having the signature represent whether the function can modify storage or not looks like a nice extra feature to me. Interested in others' thoughts.

Shahar also mentioned that the Account interface could be updated to something that looks like:

trait Account<State, TxInp, TxOut> {
  fn __validate__(ref self: State, inp: TxInp) -> bool;
  fn __execute__(ref self: State, inp: TxInp) -> TxOut;
}
I guess this would also have an impact and needs to be agreed on.

This is interesting, I actually removed the Generics section of the SNIP-5 after some discussion, because generic types weren’t allowed in external functions for smart contracts. I think this behavior remains, so this generic Account trait would not actually be representing the interface, but a blueprint of interfaces that will be defined after implementing the trait with specific types.

Not sure if we want to allow an interface (for SNIP-5 interoperability) with these generic types.

The main goal of SRC5 is to support interoperability, allowing contracts to interact with each other with consistency and simplicity, knowing what behavior to expect from the target.

With the New Syntax interfaces, we have an important issue if we try to support ids of interfaces containing generic types (besides the TContractState that is compiler-generated):

If we have this interface:

trait IMyContract<TContractState, TNumber> {
    fn foo(self: @TContractState, some: TNumber) -> felt252;
}
A contract implementing it like this:

impl IMyContractImpl of IMyContract<ContractState, felt252> {
    fn foo(self: @ContractState, some: felt252) -> felt252 {
        // ...
    }
}
Would break if someone tries to call the external foo method with a u256 instead of a felt252 as the second param, because the actual external method of the contract is not generic, but specific to the type set in the Impl block. Even worse, the call might not break if the calldata is serde-compatible, with potentially unexpected bad outcomes.
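The "serde-compatible" failure mode can be sketched concretely. In this hypothetical Python model (the helper names are illustrative, not SNIP tooling), the same calldata deserializes without error under two different concrete types, so caller and contract silently disagree on its meaning:

```python
# Hypothetical sketch: a caller believes the interface takes a u256, while
# the contract's external fn was actually compiled for two felt252 params.
# Both deserializations consume the same two felts, so nothing breaks.

def deserialize_u256(calldata: list[int]) -> int:
    low, high = calldata
    return low + (high << 128)

def deserialize_felt_pair(calldata: list[int]) -> tuple[int, int]:
    return (calldata[0], calldata[1])

calldata = [5, 1]  # the caller meant the u256 value 2**128 + 5
assert deserialize_u256(calldata) == (1 << 128) + 5
assert deserialize_felt_pair(calldata) == (5, 1)  # contract sees two felts
# No error is raised, but the two sides interpret the calldata differently.
```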

With this in mind, I think we should NOT ALLOW these interfaces in the standard, so even when they are language supported, they won’t be SRC5 compliant.

For the SNIP terminology, I will call these interfaces (with generic types) Interface Blueprints, and the ones without generics just Interfaces, because an Interface Blueprint can translate into multiple interfaces for different contracts depending on the input types that the actual external methods of the contract accept (which can't be generic at the time being). This naming convention is mostly for updating the SNIP specification.

Summary: I think Interface Blueprints should be NOT ALLOWED, while Interfaces SHOULD.

Interested in opinions and potential other solutions/thoughts around this.

I updated the SNIP to reflect that the Blueprint is not the actual interface used for computing the id, and interfaces don't include the ContractState type, as that is not part of the inputs the real external function expects (it is just a facade for handling self storage and is not encoded in calldata).

Last SNIP update here.

With the latest changes about the evolving syntax:
Do you think it is useful to add self in the computation?

On second thought, I'm not sure yet whether I would include this in the interface id computation, because I don't know if we need to encode self somehow in calldata when using call_contract_syscall to call a contract implementing this interface. If it doesn't need to be encoded, I wouldn't add the self type to the interface id. Need to check with the compiler team.

Quick update:

After some discussion, it seems worth having the function output as part of the signature when computing the interface id, and this was added in the last update of the SNIP-5 accordingly.

That's nice work @ericnordelo and a very crucial part of ensuring contract interop.

About the new syntax, it could make sense to include self to know whether it's a reference or not. That way, for two interfaces that may be close, but where one requires a ref and the other doesn't, we can tell the difference. What do you think?

Also, can you explain E((),()) representing a bool?

Thanks a lot, very interesting stuff.

As far as I understand, there's no encoding of self, so there is no need to represent it in the interface, as it doesn't affect interoperation.


But in that case there is no way to differentiate a view fn from an external fn (or even a pure fn when that'll be supported)?

The rule is to represent base types (extern types in corelib) as they are, but enums and structs as defined in the SNIP, even when they are located in corelib. bool is an enum defined in corelib, and that's why it is represented as such.

If enums or structs in corelib are modified for any reason (improvements, refactors, removals, etc.), having them represented as enums and not as core types allows the SNIP to automatically “acknowledge” the incompatibility of the new interface, by requiring a change of the function signature.

If they are treated as core types (which are, arguably, less likely to be modified), then the new interface would be represented the same even when it is not compatible. bool may not be the best example, but there are other enums and structs that can potentially change, and that's basically why they are treated this way (we could add exceptions like bool to the rule, but this would be an unnecessary overhead IMO, because this signature is just an intermediate state for the id computation).
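To make the E((),()) notation concrete: since bool is a corelib enum with two unit variants (false and true), the intermediate signature representation spells it out as an enum of two unit types. This tiny sketch (the helper is hypothetical, not part of the SNIP tooling) builds that string:

```python
# Hypothetical helper mirroring the intermediate signature notation:
# enums are written as E(<variant type>,<variant type>,...).

def enum_repr(variant_types: list[str]) -> str:
    return "E(" + ",".join(variant_types) + ")"

# bool = enum { false: (), true: () } -> both variants carry no data
bool_repr = enum_repr(["()", "()"])
assert bool_repr == "E((),())"
```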

About the new syntax, it could make sense to include self to know whether it's a reference or not. That way, for two interfaces that may be close, but where one requires a ref and the other doesn't, we can tell the difference. What do you think?

This standard is meant to ensure that a caller contract A knows what to expect from the interface of a called contract B, but the implementation of the called method of B may differ, and the standard is not meant to avoid that.

The self param is a mechanism that restricts how a contract can implement its methods, but it doesn't affect the interface of the method at all (when called externally). The external interface is basically what you need to pass to call_contract_syscall (and the output), and this is what we mean by encoded data.

Even when self says something about the method, it affects just the local implementation, not the external interface. The call to B is exactly the same whether we have self as a ref or as a snapshot.

But in that case there is no way to differentiate a view fn from an external fn (or even a pure fn when that'll be supported)?

Those are implementation restrictions for the contract that don't modify the public interface, so it is not the goal of the standard to differentiate them. For the caller contract A, it makes no difference whether the method is pure or not from the interface perspective, because the call it needs to execute is the same (same function name, param types, and output type). Even if the protocol treats view and external methods differently (like Solidity's STATICCALL vs CALL opcodes), the interface is the same, and so it should be represented the same.

I created this tool for computing interface ids (SRC-5 compliant) directly from Cairo source code, to save the time of manual signature translation and avoid potential human mistakes in the process (as I made a couple). Sharing it here for visibility, as I think it may be helpful:
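The pipeline such a tool implements can be sketched as follows. Note the hedge: starknet_keccak is legacy Keccak-256 masked to the low 250 bits, and Python's hashlib has no legacy Keccak, so sha3_256 stands in here purely to illustrate the structure; the ids this sketch produces are NOT real SRC-5 ids.

```python
# Sketch of the id computation: extended selector per signature, then XOR.
# NOTE: sha3_256 is a stand-in for legacy Keccak-256 (not available in
# hashlib), so the resulting ids are illustrative only.
import hashlib
from functools import reduce

MASK_250 = (1 << 250) - 1

def pseudo_starknet_keccak(data: bytes) -> int:
    # Real starknet_keccak: int(keccak256(data)) & MASK_250
    return int.from_bytes(hashlib.sha3_256(data).digest(), "big") & MASK_250

def interface_id(signatures: list[str]) -> int:
    # One extended selector per function signature (ASCII encoded),
    # XOR-ed together into a single felt252-sized id.
    selectors = [pseudo_starknet_keccak(s.encode("ascii")) for s in signatures]
    return reduce(lambda a, b: a ^ b, selectors)

sigs = ["supports_interface(felt252)->E((),())"]
assert interface_id(sigs) < (1 << 250)  # fits in a felt252
# XOR is commutative, so the id doesn't depend on function ordering:
assert interface_id(["a()", "b()"]) == interface_id(["b()", "a()"])
```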

This SNIP is well established and has already been adopted by several implementations, but I would like to open an idea for discussion.

What I find missing in this interface detection mechanism is the ability to detect a single function, or a subset of an interface.

In some contexts, it would be useful to know if a contract I'm interacting with has a specific function I'd like to call. This function might be part of a standard interface formalized by a SNIP, or might not be.
Only if the contract uses SRC-5, and I know exactly which interface to look for, will I be able to find out if a specific function I need is supported.

If a contract wants to declare support for any subset of its interface, it would have to compute and store the XOR for all subsets of its extended selectors.

One idea is to use Bloom filters: store an aggregation of all extended selectors and, upon query, get a definite negative if a function is not in the interface, with some chance of a false-positive result.
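A minimal sketch of that idea (sizes and hash choice are illustrative, not a proposal for concrete parameters): all selectors are folded into a fixed-size bit array, membership queries return a definite "no" or a probabilistic "yes".

```python
# Minimal Bloom-filter sketch: aggregate extended selectors into one bit
# array; absence answers are definitive, presence answers are probabilistic.
import hashlib

M_BITS = 1024   # filter size in bits (illustrative)
K_HASHES = 3    # hash functions per element (illustrative)

def _positions(selector: int) -> list[int]:
    # Derive K_HASHES bit positions from the selector.
    data = selector.to_bytes(32, "big")
    return [
        int.from_bytes(hashlib.sha256(data + bytes([i])).digest(), "big") % M_BITS
        for i in range(K_HASHES)
    ]

def add(filter_bits: int, selector: int) -> int:
    for pos in _positions(selector):
        filter_bits |= 1 << pos
    return filter_bits

def might_contain(filter_bits: int, selector: int) -> bool:
    return all(filter_bits >> pos & 1 for pos in _positions(selector))

bits = 0
for sel in [0x1234, 0x5678]:
    bits = add(bits, sel)

assert might_contain(bits, 0x1234)  # inserted selectors always answer True
# A False answer for any selector is a guaranteed negative; a True answer
# for a non-inserted selector is possible (false positive).
```

The storage cost is a single fixed-size value regardless of interface size, traded against the false-positive rate, which grows with the number of inserted selectors.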

Appreciate your input, and whether this should be moved to a separate discussion.