Giza x S-two: Powering verifiable ML with LuminAIR
Published on: June 18, 2025

This guest post is part of Building with S-two, a series showcasing what teams are building with the fastest prover in the world.

By Raphael Doukhan, zkML Lead at Giza.

Giza is pioneering autonomous finance in Web3 by enabling non-custodial algorithmic agents that execute sophisticated on-chain strategies. These agents continuously monitor on-chain and off-chain data, orchestrating complex, multi-step processes with minimal human oversight.

To maintain the trustlessness of this paradigm, Giza has architected a trust-minimized pipeline: from self-custodied session keys in account-abstraction wallets, through decentralized agent execution governed by the Giza Protocol, to cryptographic proofs of decision logic.

Central to this effort is LuminAIR, an open-source machine learning framework that compiles models into custom Algebraic Intermediate Representations (AIRs) and leverages Circle STARKs and S-two to ensure the integrity of computational graphs.

What is LuminAIR?

LuminAIR acts as a bridge, connecting Luminal, a Tinygrad-inspired Rust ML framework, with S-two’s robust constraint system. It takes high-level ML computation graphs and transforms them into Algebraic Intermediate Representations (AIRs).

An AIR is essentially a set of polynomial equations that describe a computation. If all these equations hold true for a given set of inputs and outputs, it mathematically proves the computation was performed correctly. STARKs are proof systems designed to efficiently verify that such an AIR is satisfied, without re-executing the entire computation. In the context of LuminAIR, this allows provers to cryptographically demonstrate that a computational graph has been executed correctly, while verifiers can validate these proofs with significantly fewer resources than re-executing the graph.
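To make this concrete, here is a minimal sketch in plain Rust (not the S-two API; `AddRow`, `add_constraint`, and `air_satisfied` are illustrative names) of what "a set of polynomial equations over a trace" means for a single Add operator:

```rust
// Illustrative sketch, not S-two code: the AIR for element-wise addition
// is the single constraint c - (a + b) = 0, which must vanish on every
// row of the execution trace. Values are field elements below P.

/// One trace row for an Add operation: inputs `a`, `b` and output `c`.
struct AddRow { a: u64, b: u64, c: u64 }

/// Field modulus used throughout: the Mersenne prime M31 = 2^31 - 1.
const P: u64 = (1 << 31) - 1;

/// Evaluate the Add constraint on one row; a valid row yields 0.
fn add_constraint(row: &AddRow) -> u64 {
    // c - (a + b) mod P, kept non-negative by adding 2P before subtracting.
    (row.c + 2 * P - (row.a + row.b) % P) % P
}

/// A trace satisfies the AIR iff every constraint vanishes on every row.
fn air_satisfied(trace: &[AddRow]) -> bool {
    trace.iter().all(|row| add_constraint(row) == 0)
}

fn main() {
    let good = vec![AddRow { a: 3, b: 4, c: 7 }, AddRow { a: 10, b: 20, c: 30 }];
    let bad = vec![AddRow { a: 3, b: 4, c: 8 }];
    println!("{} {}", air_satisfied(&good), air_satisfied(&bad)); // true false
}
```

A real STARK never checks rows one by one like this; it commits to the trace as polynomials and proves that the constraint polynomial vanishes everywhere, which is what makes verification cheaper than re-execution.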

With LuminAIR, developers can design, compile, run, prove, and verify computational graphs. For instance, consider a simple graph involving matrix operations:

use luminair_graph::{graph::LuminairGraph, StwoCompiler};
use luminal::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
   let mut cx = Graph::new();

   // Define tensors (e.g., inputs 'a' and 'b', weights 'w')
   let a = cx.tensor((2, 2)).set(vec![1.0, 2.0, 3.0, 4.0]);
   let b = cx.tensor((2, 2)).set(vec![10.0, 20.0, 30.0, 40.0]);
   let w = cx.tensor((2, 2)).set(vec![-1.0, -1.0, -1.0, -1.0]);

   // Build the computational graph: (a * b) + w
   let c = a * b;                  // Element-wise multiplication
   let mut d = (c + w).retrieve(); // Element-wise addition, marked for output

   // Compile the graph for STARK proving with Stwo
   cx.compile(<(GenericCompiler, StwoCompiler)>::default(), &mut d);

   // Derive the circuit settings shared by prover and verifier
   let settings = cx.gen_circuit_settings();

   // Execute the graph to generate an execution trace (witness)
   let trace = cx.gen_trace()?;

   // Generate a STARK proof of the computation
   let proof = cx.prove(trace, settings.clone())?;

   // Verify the proof
   cx.verify(proof, settings)?;

   Ok(())
}

This code defines a graph where a and b are multiplied, the result is added to w, and then a proof of this entire process is generated and verified.
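For intuition, the arithmetic the graph performs can be checked directly in plain Rust, independent of LuminAIR (`mul_add` is just an illustrative helper):

```rust
// Plain-Rust check of what the LuminAIR graph above computes:
// d = (a * b) + w, element-wise, on 2x2 tensors stored row-major.

/// Element-wise multiply of `a` and `b`, then element-wise add of `w`.
fn mul_add(a: &[f64], b: &[f64], w: &[f64]) -> Vec<f64> {
    a.iter().zip(b).zip(w).map(|((a, b), w)| a * b + w).collect()
}

fn main() {
    let a = [1.0, 2.0, 3.0, 4.0];
    let b = [10.0, 20.0, 30.0, 40.0];
    let w = [-1.0, -1.0, -1.0, -1.0];
    let d = mul_add(&a, &b, &w);
    println!("{:?}", d); // [9.0, 39.0, 89.0, 159.0]
}
```

What LuminAIR adds on top of this arithmetic is the execution trace, the STARK proof, and the cheap verification of that proof.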

Core components of LuminAIR

  1. Computational Graph & Lazy Execution: Operations like a*b don’t compute immediately. Instead, they are added to a directed acyclic graph. The actual computation and trace generation only occur when gen_trace() is explicitly invoked.
  2. Modular Compilers:
    • GenericCompiler: Performs backend-agnostic optimizations on the graph.
    • StwoCompiler: This is where the magic happens for zkML. It translates each ML operator into its corresponding AIR component definition, ready for S-two. It currently includes the PrimitiveCompiler for this mapping, with future plans for a FuseOpCompiler to create more optimized, combined AIR components.
  3. AIR Components & Lookup Arguments: Each operator (e.g., Add, Mul, Sin) becomes an AIR component with its own set of local algebraic constraints. To ensure data flows correctly between these components (i.e., the output of one operation is correctly used as the input to the next), LuminAIR employs the LogUp protocol.
    The LogUp protocol is a cryptographic technique that efficiently proves relationships between different sets of values. In LuminAIR, it serves two main purposes:

    • Dataflow Integrity: It acts like a set of “plumbing checks,” ensuring that a value produced by one AIR component (e.g., Mul) is precisely the value consumed by the next (e.g., Add). This is done by creating “running sums” that must balance out across the entire computation.
    • Lookup Tables: For non-linear functions (like sin(x) or exp2(x)) that are hard to express with simple polynomials, LogUp allows us to prove that the (input, output) pair for the function comes from a pre-computed, committed table of correct values.
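A toy version of the running-sum idea can be sketched in plain Rust (illustrative only; `logup_sum`, `inv`, and the challenge value are not S-two's implementation). Each side accumulates Σ 1/(α + v) over the M31 field at a random challenge α; the sums balance exactly when the produced and consumed multisets match:

```rust
// Toy illustration of the LogUp balancing idea, not the S-two protocol:
// two multisets of field elements are equal (up to order) iff their
// running sums of 1/(alpha + v) agree at a random challenge alpha.
const P: u64 = (1 << 31) - 1; // M31 Mersenne prime

/// Modular exponentiation by squaring in the M31 field.
fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1u64;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 { acc = acc * base % P; }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

/// Multiplicative inverse via Fermat's little theorem: x^(P-2) mod P.
fn inv(x: u64) -> u64 { pow_mod(x, P - 2) }

/// Running LogUp sum over a list of values at challenge `alpha`.
fn logup_sum(values: &[u64], alpha: u64) -> u64 {
    values.iter().fold(0, |s, &v| (s + inv((alpha + v) % P)) % P)
}

fn main() {
    let alpha = 0x1234_5678; // stand-in for a verifier-supplied challenge
    let produced = [9, 39, 89, 159]; // outputs of one component (e.g., Mul)
    let consumed = [159, 9, 89, 39]; // inputs of the next (e.g., Add)
    // Same multiset in a different order: the running sums balance.
    assert_eq!(logup_sum(&produced, alpha), logup_sum(&consumed, alpha));
    // A single tampered value breaks the balance.
    assert_ne!(logup_sum(&produced, alpha), logup_sum(&[9, 39, 89, 158], alpha));
    println!("LogUp sums balance");
}
```

The same mechanism powers the lookup-table use case: proving each (input, output) pair of sin(x) appears in a committed table is again a multiset-membership check expressed through these sums.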

How LuminAIR leverages S-two: A perfect match

Developing LuminAIR required a proof system that offered flexibility in defining custom AIRs with minimal boilerplate. S-two, being Rust-native and designed for exactly this, was the ideal choice. Here’s why:

  • Intuitive AIR Construction: S-two provides a clean, expressive library for defining algebraic constraints. This makes it straightforward to translate the mathematical logic of operations into the polynomial equations S-two understands.
  • Modular AIR Composition: Real-world ML models involve many different operations. S-two’s design allows LuminAIR to define individual AIR “fragments” for each operator (Add, Mul, Sin, etc.) and then seamlessly compose them into a single, cohesive STARK proof for the entire computation.
  • First-Class LogUp Support: As mentioned, LogUp is critical for LuminAIR. S-two provides powerful, built-in abstractions for the LogUp protocol, making it trivial to implement both the dataflow integrity checks and the lookup arguments for non-linear functions.
  • Blazing-Fast Execution: S-two performs most of its prover arithmetic in the M31 prime field (modulo $2^{31}-1$). This Mersenne prime is exceptionally friendly for modern CPUs, leading to excellent performance, as early public benchmarks show.
  • Flexible, Parallel Backends: S-two’s architecture supports SIMD acceleration, and can even target GPUs via Ingonyama’s icicle-stwo implementation.
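The CPU-friendliness of M31 comes from the Mersenne structure: because $2^{31} \equiv 1 \pmod{2^{31}-1}$, reduction needs only shifts, masks, and adds, never a division. A minimal sketch (illustrative helper names, not S-two internals):

```rust
// Why M31 (P = 2^31 - 1) is fast on CPUs: reducing a product of two
// 31-bit field elements modulo P takes a shift, a mask, and an add,
// because 2^31 is congruent to 1 mod P. Sketch, not S-two's code.
const P: u64 = (1 << 31) - 1;

/// Reduce x < 2^62 modulo P = 2^31 - 1 without division.
fn m31_reduce(x: u64) -> u64 {
    // Split x = hi * 2^31 + lo; since 2^31 ≡ 1 (mod P), x ≡ hi + lo.
    let folded = (x >> 31) + (x & P);
    // One more fold absorbs the carry, then a conditional subtract.
    let folded = (folded >> 31) + (folded & P);
    if folded >= P { folded - P } else { folded }
}

/// Field multiplication built on the shift-and-add reduction.
fn m31_mul(a: u64, b: u64) -> u64 { m31_reduce(a * b) }

fn main() {
    assert_eq!(m31_reduce(P), 0);         // P ≡ 0
    assert_eq!(m31_mul(1 << 30, 4), 2);   // 2^32 = 2 * 2^31 ≡ 2
    assert_eq!(m31_mul(P - 1, P - 1), 1); // (-1) * (-1) ≡ 1
    println!("M31 arithmetic checks pass");
}
```

Compare this with a general prime, where every multiplication drags in a Montgomery or Barrett reduction; the savings compound across the billions of field operations a prover performs.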

Unlocking verifiable AI: Potential use cases

With LuminAIR, Giza plans to unlock a new class of zkML-powered applications. The first project in development is a recommender system for Rekt News, a leading DeFi journalism platform dedicated to delivering transparent, trustworthy insights. In today’s information ecosystem, where opaque algorithms decide which articles surface, verifiability is paramount. By integrating LuminAIR, Giza is building a recommendation engine whose outputs can be cryptographically proven.

Beyond recommendation systems, LuminAIR will power verifiable Autonomous Agents for DeFi protocols. Imagine agents that:

  1. Continuously monitor AMM pool parameters and on-chain oracles.
  2. Execute liquidity-management strategies (e.g., dynamic range orders) with zero trust assumptions: every action is backed by a STARK proof.

Giza invites developers, researchers, and DeFi teams to explore LuminAIR. Whether you’re building your own zkML application, integrating verifiable agents into existing protocols, or pioneering entirely new use cases, LuminAIR provides the tools to build with integrity.

LuminAIR development roadmap

LuminAIR’s development is progressing in three strategic phases, ensuring a balance between delivering early utility and achieving long-term scalability for complex models.

Phase 1: Core Primitives (Current)

This foundational stage focuses on implementing a core set of 11 primitive operators. This set is sufficient to express a wide range of common ML models, including linear regression, Convolutional Neural Networks (CNNs), and even components of Transformers.

Operators: Log2, Exp2, Sin, Sqrt, Recip, Add, Mul, Mod, LessThan, SumReduce, MaxReduce, Contiguous.

Phase 2: Fused Operators & Enhanced Developer Experience

With the primitives validated, the next phase will focus on performance and ease of use:

  • Compiler Fusion: The StwoCompiler will be enhanced to group sequences of primitive operations into more complex, “fused” operators (e.g., Mul + SumReduce can be fused into a highly optimized MatMul AIR component). This reduces proof overhead and improves prover performance.
  • Python SDK: A user-friendly Python API will be developed, wrapping the Rust core. This will empower data scientists and ML engineers to define, train (using standard tools), and then prove their models with LuminAIR, without needing deep Rust expertise.
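The fusion idea behind the compiler work above can be illustrated in plain Rust (hypothetical helper names, independent of LuminAIR's actual StwoCompiler): each entry of a matrix product is exactly an element-wise Mul followed by a SumReduce, so the two primitive components can collapse into one fused operator that never materializes the intermediate tensor.

```rust
// Sketch of operator fusion: Mul followed by SumReduce over one row/column
// pair is a dot product, i.e., one entry of a MatMul. The fused version
// does one pass and commits to no intermediate tensor.

/// Unfused pipeline: materialize the Mul output, then reduce it.
fn mul_then_sum_reduce(row: &[f64], col: &[f64]) -> f64 {
    let products: Vec<f64> = row.iter().zip(col).map(|(r, c)| r * c).collect();
    products.iter().sum()
}

/// Fused operator: one pass, no intermediate buffer.
fn fused_dot(row: &[f64], col: &[f64]) -> f64 {
    row.iter().zip(col).map(|(r, c)| r * c).sum()
}

fn main() {
    // One entry of a 2x2 matrix product: row [1, 2] · column [10, 30].
    let (row, col) = ([1.0, 2.0], [10.0, 30.0]);
    assert_eq!(mul_then_sum_reduce(&row, &col), fused_dot(&row, &col));
    assert_eq!(fused_dot(&row, &col), 70.0);
    println!("fused and unfused agree");
}
```

In the proving context the win is larger than in plain execution: the unfused pipeline forces the prover to commit to the intermediate Mul tensor as extra trace columns, while a fused MatMul component constrains the dot product directly.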

Phase 3: On-Chain Verification & GPU Acceleration

The final phase aims to bring zkML proofs into live Web3 ecosystems and dramatically scale proving capabilities:

  • Smart Contract Verifier: The LuminAIR verifier will be implemented as a Cairo program, allowing STARK proofs of ML computations to be efficiently verified on Starknet.
  • GPU-Powered Proving: Integration with GPU-accelerated backends for S-two (like Ingonyama’s Icicle-Stwo) will be prioritized to leverage the massive parallelism of GPUs, drastically reducing proof generation times for large-scale models.

Conclusion

LuminAIR, powered by the robust and flexible S-two prover framework, is a significant step towards making AI computations verifiable. By combining Giza’s expertise in zkML with StarkWare’s pioneering work in STARK technology, we are building the infrastructure for a new era of decentralized, autonomous systems where integrity is not just promised but mathematically proven.

Learn More About LuminAIR
