Cairo v2.11.0 is out!

Cairo 2.11.0 was just released. This version only involves high-level compiler (Cairo→Sierra) changes and thus can be used to deploy contracts on Starknet v0.13.4. This means that Cairo 2.11.0 is usable on Testnet without delay and will be usable on Mainnet later this month.

Below we discuss a few of the notable updates in Cairo v2.11.0 (for an exhaustive list of changes, see the release notes):

In addition to changes in Cairo itself, the language server continues to improve and deliver a smoother dev experience; you can read about the recent and upcoming changes in this tweet thread.

Another notable change is Scarb’s support for procedural macros. While procedural macros are already available in v2.11.0, the compiler↔macro interface is expected to change in the next version. Given the scope of the feature, procedural macros will get their own post in the coming weeks, alongside the new interface.

Breaking changes

While not a breaking change, we’d like to use this opportunity to remind developers that Cairo ≥ 2.10.0 requires starknet-foundry ≥ 0.38.0 to function properly.

The new change introduced with Cairo 2.11.0 is that if you depend on snforge_std < 0.38.0 (note that we distinguish snforge_std, the library, from foundry, the testing engine, as one can use new foundry versions with older snforge_std libraries), then for your tests to compile you need to add the following to the dev dependencies section of your Scarb.toml:

snforge_std = "x.y.z"
snforge_scarb_plugin = "x.y.z"

That is, whenever you depend on snforge_std, add a dependency on snforge_scarb_plugin with the same version.

Note that this is NOT required when using snforge_std ≥ 0.38.0.
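As a concrete sketch, the dev dependencies section might look like the following (0.37.0 is a hypothetical placeholder version, used here only because it is below 0.38.0; substitute the version you actually depend on):

```toml
[dev-dependencies]
# Both entries must use the same version when snforge_std < 0.38.0.
snforge_std = "0.37.0"
snforge_scarb_plugin = "0.37.0"
```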

Corelib updates

Iterator trait

In Cairo v2.11.0 the Iterator trait is much richer and includes the following functions:

  • count
  • last
  • advance_by
  • nth
  • map
  • enumerate
  • fold
  • any
  • all
  • find
  • filter
  • zip
  • collect
  • peekable
  • take
  • sum
  • product
  • chain

You can find a few usage examples below; for more examples, see iter_test.cairo in the corelib:

let mut iter = array![1, 2, 3].into_iter().map(|x| 2 * x);
assert_eq!(iter.next(), Some(2));
assert_eq!(iter.next(), Some(4));
assert_eq!(iter.next(), Some(6));
assert_eq!(iter.next(), None);

let arr = array![10, 20, 30];
for (index, elem) in arr.into_iter().enumerate() {
    // index runs 0, 1, 2 while elem runs 10, 20, 30
}

let mut iter = array![0_u32, 1, 2].into_iter().filter(|x| *x > 0);
assert_eq!(iter.next(), Some(1));
assert_eq!(iter.next(), Some(2));
assert_eq!(iter.next(), None);

let mut iter = array![1, 2, 3].into_iter();
let sum = iter.fold(0, |acc, x| acc + x);
assert_eq!(sum, 6);
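A few of the other listed functions, sketched in the same style (these are illustrative and assume Cairo's Rust-like iterator semantics; see iter_test.cairo for the authoritative usage):

```cairo
// `zip` pairs up elements from two iterators, stopping at the shorter one.
let mut iter = array![1, 2].into_iter().zip(array![10, 20].into_iter());
assert_eq!(iter.next(), Some((1, 10)));
assert_eq!(iter.next(), Some((2, 20)));
assert_eq!(iter.next(), None);

// `chain` exhausts the first iterator, then continues with the second.
let mut iter = array![1_u8].into_iter().chain(array![2].into_iter());
assert_eq!(iter.next(), Some(1));
assert_eq!(iter.next(), Some(2));
assert_eq!(iter.next(), None);

// `sum` consumes the iterator and adds up its elements.
let total: u32 = array![1_u32, 2, 3].into_iter().sum();
assert_eq!(total, 6);
```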

Option trait

The following functions are added to OptionTrait:

  • ok_or_else
  • and
  • and_then
  • or
  • or_else
  • xor
  • is_some_and
  • is_none_or
  • unwrap_or_else
  • map
  • map_or
  • map_or_else
  • take
  • filter
  • flatten

You can find a few usage examples below; for more examples, see option_test.cairo in the corelib:

let maybe_some_string: Option<ByteArray> = Some("Hello, World!");
let maybe_some_len = maybe_some_string.map(|s| s.len());
assert!(maybe_some_len == Some(13));

let k = 21;
let mut x = Some("foo");
assert_eq!(x.map_or_else(|| 2 * k, |v: ByteArray| v.len()), 3);
x = None;
assert_eq!(x.map_or_else(|| 2 * k, |v: ByteArray| v.len()), 42);

let option: Option<felt252> = None;
assert_eq!(option.ok_or_else(|| 0), Err(0));

let x: Option<Option<u32>> = Some(Some(6));
assert_eq!(Some(6), x.flatten());
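A couple of the chaining functions, sketched in the same style (illustrative sketches; see option_test.cairo for the authoritative usage):

```cairo
// `and_then` chains a computation that may itself fail.
let x: Option<u32> = Some(2);
assert_eq!(x.and_then(|v| if v > 1 { Some(10 * v) } else { None }), Some(20));

// `is_some_and` applies a predicate without unwrapping.
let y: Option<u32> = Some(2);
assert!(y.is_some_and(|v| v == 2));

// `unwrap_or_else` lazily computes a fallback for the None case.
let z: Option<u32> = None;
assert_eq!(z.unwrap_or_else(|| 7), 7);
```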

Using enum variants

You can now import concrete enum variants, and refer to them without the full path:

mod definitions {
    pub enum MyEnum {
        Var1,
        Var2,
    }
}

use definitions::MyEnum::{Var1, Var2};

let my_enum = Var1;
match my_enum {
    Var1 => 1,
    Var2 => 2,
}

In particular, the prelude now includes Option::Some and Option::None, so you can write the following without using the Option:: prefix:

let something = Some(3);
let nothing = None;

Consts updates

Const ContractAddress and ClassHash

You can now instantiate constants of the ContractAddress and ClassHash types; the generic functions contract_address_const and class_hash_const are no longer needed:

use starknet::{ContractAddress, ClassHash};

const class_hash: ClassHash = 0x123.try_into().unwrap();
const contract_address: ContractAddress = 0x123.try_into().unwrap();

Note that we don’t have dedicated literals for those types, hence conversions are necessary.

Const functions

Functions that can be evaluated at compile time can now be marked as const via the const fn syntax, similarly to Rust:

use core::num::traits::Pow;

const mask: u32 = 2_u32.pow(20);

Several functions in the corelib are now marked as const; in fact, this was used in the previous section, where we applied try_into and unwrap in a const expression.
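As a sketch, you can also define your own const function, assuming its body only uses operations that are themselves const-evaluable:

```cairo
const fn square(x: u32) -> u32 {
    x * x
}

// Evaluated at compile time; no runtime cost for computing 25.
const AREA: u32 = square(5);
```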

Storage scrubbing

The Store trait now includes the additional scrub function:

fn scrub(
    address_domain: u32, base: StorageBaseAddress, offset: u8,
) -> SyscallResult<()>

Since this function has a default implementation, this is a non-breaking change. With scrubbing, you can “remove” a value from storage by writing zeros over it.

💡 Note that scrubbing storage is an expensive operation: scrubbing will cost 32*TYPE_SIZE L1 blob gas. For example, a struct with 2 u256 members will cost 32*4=128 blob gas to scrub.
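As an illustrative sketch of calling scrub directly through the Store trait (the 0x42 base address is arbitrary and purely for illustration; in practice scrub is mostly invoked indirectly, e.g. by Vec’s pop):

```cairo
use starknet::SyscallResultTrait;
use starknet::Store;
use starknet::storage_access::storage_base_address_from_felt252;

fn scrub_example() {
    // Arbitrary base address, for illustration only.
    let base = storage_base_address_from_felt252(0x42);
    // Zero out the slots that a u256 occupies (address_domain 0, offset 0).
    Store::<u256>::scrub(0, base, 0).unwrap_syscall();
}
```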

New Vec interface

The MutableVecTrait now includes the following functions:

fn allocate(self: T) -> StoragePath<Mutable<Self::ElementType>>;

fn push<+Drop<Self::ElementType>, +starknet::Store<Self::ElementType>>(
    self: T, value: Self::ElementType,
);

fn pop<+Drop<Self::ElementType>, +starknet::Store<Self::ElementType>>(
    self: T,
) -> Option<Self::ElementType>;

Following community feedback, we’re adding the more intuitive push and pop interface to Vec, which replaces the append and write flow. For backward compatibility purposes, the old functions still exist in the Vec trait, but are marked as deprecated.

Below you can find an example of interaction with the new Vec interface:

use starknet::storage::{
   StoragePointerReadAccess,
   StoragePointerWriteAccess,
   Vec,
   MutableVecTrait
};

#[storage]
struct Storage {
    my_vec: Vec<u8>
}

...

self.my_vec.push(1);
self.my_vec.push(2);
self.my_vec.push(3);

assert_eq!(self.my_vec[0].read(), 1);
assert_eq!(self.my_vec[1].read(), 2);
assert_eq!(self.my_vec[2].read(), 3);

// pop
assert_eq!(self.my_vec.pop(), Some(3));
assert_eq!(self.my_vec.len(), 2);

💡 Note that the pop function is costly, as it involves a call to scrub, which performs multiple storage writes (depending on the underlying type’s size).

The append function is now deprecated, and using it will emit a warning (which can be silenced by enabling the starknet-storage-deprecation feature). When you have a Vec of types that do not implement the Store trait, e.g. a vector of storage nodes, you’ll need to use the new allocate function, which is similar to the deprecated append (in fact, they have exactly the same implementation):

use starknet::storage::{
    Vec,
    MutableVecTrait,
    Map,
    StorageMapReadAccess,
    StorageMapWriteAccess,
    StoragePointerReadAccess,
    StoragePointerWriteAccess
};

#[starknet::storage_node]
struct node {
    map: Map<u8,u8>
}

#[storage]
struct Storage {
    my_vec: Vec<node>
}

...

self.my_vec.allocate().map.write(1, 1);

To add a new node to my_vec, we must use the allocate function since node itself cannot be instantiated (it only serves to indicate storage paths via its members) and hence can never be an argument for push.

Early return and error propagation in loops

We can now have return statements and use the ? operator inside loops:

fn foo() -> Result<u8, ByteArray> {
    // error propagation
    for i in 1..10_u64 {
        let _converted: u8 = i.try_into().ok_or("fail to convert u64 into u8")?;
    }

    // early return
    for i in 1..100_u64 {
        if (i == 42) {
            return Ok(42);
        }
    }

    Ok(42)
}

No more ; after curly brackets

A semicolon is no longer required after a loop block; i.e., the following is out:

for i in 1..10_u8 {
  println!("{}", i);
};

This is in:

for i in 1..10_u8 {
  println!("{}", i)
}

Deref is extended to methods

In Cairo v2.7.0 we introduced the Deref and DerefMut traits, which allow transparent access to the members of type Dest from an instance of type Target whenever an impl of Deref<Target, Dest> is in the current context. This mechanism is now extended to methods that operate on the Dest type, as demonstrated by the code below:

use core::ops::Deref;

struct MySource {
    pub data: u8
}

struct MyTarget {
    pub data: u8
}

#[generate_trait]
impl TargetMutImpl of TargetTrait {
    fn foo(ref self: MyTarget) -> u8 {
        self.data
    }
}

impl SourceDeref of Deref<MySource> {
    type Target = MyTarget;
    fn deref(self: MySource) -> MyTarget {
        MyTarget { 
            data: self.data 
        }
    }
}

Thanks to the Deref impl, we can call foo on MySource:

let mut source = MySource { data: 5 };

let res = source.foo();

I will surely try the new version

Cairo v2.11.0 is out with major high-level compiler updates and is compatible with Starknet v0.13.4 (Testnet-ready, Mainnet soon). It adds richer Iterator and Option traits, enum variant imports, const support, a revamped Vec interface with push/pop, early returns in loops, and method access via Deref. Developers using snforge_std < 0.38.0 must now add snforge_scarb_plugin to Scarb.toml. Also, Scarb now supports procedural macros, though interface changes are expected. This release greatly improves dev ergonomics and corelib capabilities.

Hi there, this topic is old now. I won’t comment on the features, which were and still are pretty dope to me, but I retrieved a few tests I made previously on optimizations (execution and number of steps especially, rather than bytecode size), so I’m leaving them here, as it might help people looking for hard-to-find in-depth details and save them the pain of doing it themselves (at the time, I wish I had quick access to this data).
I haven’t run any version comparison (just noticed that the generated contract class bytecode is heavier), but focused on the iterator features, as arrays are pretty common in any development, so in my opinion it’s worth digging in.

Iterator trait

Setup

scarb 2.12.2 (dc0dbfd50 2025-09-15)
cairo: 2.12.2
sierra: 1.7.0

snforge_std = "0.49.0"

comparative

iter.next() vs arr[i]

single call

In this case, the results are exactly equal.

#[test]
fn test_array() {
    let arr = array![1, 2, 3];

    assert_eq!(*arr[0], 1);
}

#[test]
fn test_iter() {
    let mut iter = array![1, 2, 3].into_iter();

    assert_eq!(iter.next().unwrap(), 1);
}

metrics

[PASS] testing::test::test_iter (l1_gas: ~0, l1_data_gas: ~0, l2_gas: ~40000)
steps: 83
memory holes: 0
builtins: (range_check: 3)
syscalls: ()

[PASS] testing::test::test_array (l1_gas: ~0, l1_data_gas: ~0, l2_gas: ~40000)
steps: 83
memory holes: 0
builtins: (range_check: 3)
syscalls: ()

consecutive calls

It is interesting to note that the iterator’s next() is more efficient (in number of steps) than array or span index-based access when used consecutively or run in a loop.

#[test]
fn test_array() {
    let arr = array![
        1,
        2,
        3,
        4,
        5
    ];

    for i in 0..5_u32 {
        assert!(*arr[i] > 0_u32);
    }
}

#[test]
fn test_iter() {
    let mut iter = array![
        1,
        2,
        3,
        4,
        5
    ].into_iter();

    for _ in 0..5_u32 {
        assert!(iter.next().unwrap() > 0_u32);
    }
}

metrics

[PASS] testing::test::test_array (l1_gas: ~0, l1_data_gas: ~0, l2_gas: ~40000)
steps: 227
memory holes: 1
builtins: (range_check: 20)
syscalls: ()

[PASS] testing::test::test_iter (l1_gas: ~0, l1_data_gas: ~0, l2_gas: ~40000)
steps: 209
memory holes: 1
builtins: (range_check: 15)
syscalls: ()

iter.count() vs arr.len()

In this case, array.len() still seems more efficient in steps and range checks.

#[test]
fn test_array() {
    let arr = array![1, 2, 3];

    assert_eq!(arr.len(), 3);
}

#[test]
fn test_iter() {
    let mut iter = array![1, 2, 3].into_iter();

    assert_eq!(iter.count(), 3);
}

metrics

[PASS] testing::test::test_array (l1_gas: ~0, l1_data_gas: ~0, l2_gas: ~40000)
steps: 59
memory holes: 0
builtins: (range_check: 3)
syscalls: ()

[PASS] testing::test::test_iter (l1_gas: ~0, l1_data_gas: ~0, l2_gas: ~40000)
steps: 139
memory holes: 0
builtins: (range_check: 7)
syscalls: ()

iter.last() vs arr

Here, array index access remains cheaper, with iterator last() costing twice the number of steps.

#[test]
fn test_array() {
    let arr = array![1, 2, 3];

    assert_eq!(*arr[2], 3);
}

#[test]
fn test_iter() {
    let mut iter = array![1, 2, 3].into_iter();

    assert_eq!(iter.last().unwrap(), 3);
}

metrics

[PASS] testing::test::test_array (l1_gas: ~0, l1_data_gas: ~0, l2_gas: ~40000)
steps: 75
memory holes: 0
builtins: (range_check: 3)
syscalls: ()

[PASS] testing::test::test_iter (l1_gas: ~0, l1_data_gas: ~0, l2_gas: ~40000)
steps: 150
memory holes: 0
builtins: (range_check: 7)
syscalls: ()

iter.map() vs manual arr reconstruct

Even with the slight overhead of iterator next() vs array index access, the step delta is still significant here, and map is clearly the way to go.

#[test]
fn test_array() {
    let arr = array![1, 2, 3];
    let mut new = ArrayTrait::new();

    for i in 0..3_u32 {
        new.append(*arr[i] * 2);
    }

    assert_eq!(*new[0], 2);
    assert_eq!(*new[1], 4);
    assert_eq!(*new[2], 6);
}

#[test]
fn test_iter() {
    let mut iter = array![1, 2, 3].into_iter();
    let mut new = iter.map(|x| 2 * x);

    assert_eq!(new.next().unwrap(), 2);
    assert_eq!(new.next().unwrap(), 4);
    assert_eq!(new.next().unwrap(), 6);
}

metrics

[PASS] testing::test::test_array (l1_gas: ~0, l1_data_gas: ~0, l2_gas: ~40000)
steps: 203
memory holes: 1
builtins: (range_check: 13)
syscalls: ()

[PASS] testing::test::test_iter (l1_gas: ~0, l1_data_gas: ~0, l2_gas: ~40000)
steps: 102
memory holes: 0
builtins: (range_check: 3)
syscalls: ()

iter.nth(i) vs arr[i] + inc

Results here may depend on your use case / constraints, as the two behave slightly differently; given the very small step overhead, the array equivalent of nth has been simplified here.
In addition, we can note that iter.next() is slightly more efficient than iter.nth(0).

#[test]
fn test_array() {
    let arr = array![1, 2, 3];
    let mut i = 0_u32;

    assert_eq!(*arr[i], 1);

    i += 1;
}

#[test]
fn test_iter() {
    let mut iter = array![1, 2, 3].into_iter();

    assert_eq!(iter.nth(0).unwrap(), 1);
}

metrics

[PASS] testing::test::test_array (l1_gas: ~0, l1_data_gas: ~0, l2_gas: ~40000)
steps: 83
memory holes: 0
builtins: (range_check: 3)
syscalls: ()

[PASS] testing::test::test_iter (l1_gas: ~0, l1_data_gas: ~0, l2_gas: ~40000)
steps: 100
memory holes: 0
builtins: (range_check: 4)
syscalls: ()

iter.filter() vs manual arr filter

No surprises, same as iterator map() here, pretty OP.

#[test]
fn test_array() {
    let arr = array![0_u8, 1, 2, 3];
    let mut even = ArrayTrait::new();

    for i in 0..4_u32 {
        let elt = *arr[i];
        if elt % 2 == 0 {
            even.append(elt);
        }
    }

    assert_eq!(*even[0], 0);
    assert_eq!(*even[1], 2);
}

#[test]
fn test_iter() {
    let mut iter = array![0_u8, 1, 2, 3].into_iter();
    let mut even = iter.filter(|x| *x % 2 == 0);

    assert_eq!(even.next().unwrap(), 0);
    assert_eq!(even.next().unwrap(), 2);
}

metrics

[PASS] falcon512_integrationtest::falcon512_tests_0::test_array (l1_gas: ~0, l1_data_gas: ~0, l2_gas: ~80000)
steps: 263
memory holes: 1
builtins: (range_check: 26)
syscalls: ()

[PASS] falcon512_integrationtest::falcon512_tests_0::test_iter (l1_gas: ~0, l1_data_gas: ~0, l2_gas: ~40000)
steps: 162
memory holes: 0
builtins: (range_check: 15)
syscalls: ()

conclusion

Overall, it seems really worth using iterators, especially for heavy ops or perf-critical requirements, except for a few rare cases like single element access, with the single inconvenience of caching arr.len() at some point in order to avoid iter.count() calls.
Some other functions haven’t been tested, like find, sum, product, any, all, … but there’s a high chance they end up as efficient as the map and filter implementations.