🌀 : Demo phase
⭕ : To start
✅ : Completed
Prover | Language/Library | Arithmetization | Status |
---|---|---|---|
Stone | Cairo | AIR | 🌀 |
Miden | Polylang (TypeScript-like) | - | 🌀 |
RISC Zero zkVM | Rust, C, C++ | - | 🌀 |
Boojum (zkSync) | Rust, C, C++ | - | ⭕ |
Prover | Language/Library | Arithmetization | Status |
---|---|---|---|
Plonk | Noir | - | ⭕ |
Aleo | Leo | - | ⭕ |
Groth16 | Bellman (Rust) | R1CS | ⭕ |
Groth16 | Circom | R1CS | ⭕ |
Marlin/Groth16 | ZoKrates | R1CS | ⭕ |
Language |
---|
MASM |
RISC-V |
- Groth16
- Plonk
- Marlin/Marlin'
- STARK
- Gnark
- Rapidsnark
- Arkworks
- Snarkjs
- Bellman
- ZoKrates
- Libsnark
- Plonky2
- Halo2
- Aztec (Implementation of Plonk)
- Hercules (Rust-based with Plonk support)
- inv
- mul
- sub
- exp
- add
- g1-scalar-multiplication
- g2-multi-scalar-multiplication
- pairing
- g2-scalar-multiplication
- g1-multi-scalar-multiplication
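The field operations above (`add`, `sub`, `mul`, `inv`, `exp`) can be micro-benchmarked per backend. As a minimal sketch, assuming the BN254 scalar-field modulus (swap in whichever field the prover under test actually uses), a Python timing harness:

```python
import random
import time

# Illustrative modulus: the BN254 scalar-field prime (an assumption;
# replace with the field used by the prover being benchmarked).
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617

OPS = {
    "add": lambda a, b: (a + b) % P,
    "sub": lambda a, b: (a - b) % P,
    "mul": lambda a, b: (a * b) % P,
    "exp": lambda a, b: pow(a, b, P),
    "inv": lambda a, _b: pow(a, -1, P),  # modular inverse (Python 3.8+)
}

def bench(op_name, iters=10_000):
    """Time `iters` runs of one field operation; return operations per second."""
    fn = OPS[op_name]
    xs = [random.randrange(1, P) for _ in range(iters)]
    ys = [random.randrange(1, P) for _ in range(iters)]
    start = time.perf_counter()
    for a, b in zip(xs, ys):
        fn(a, b)
    elapsed = time.perf_counter() - start
    return iters / elapsed

if __name__ == "__main__":
    for name in OPS:
        print(f"{name}: {bench(name):,.0f} ops/s")
```

The curve operations (G1/G2 scalar multiplication, MSM, pairing) would plug into the same harness but need a curve library; they are omitted here to keep the sketch self-contained.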
- Independent of proving scheme limitations: Some proving systems may have limitations or optimizations that can skew the understanding of a DSL's capabilities. Comparing DSLs independently allows for an evaluation that is not influenced by such factors.
- Circuit design efficiency: Comparing DSLs independently of specific proving systems lets the evaluation focus on how well each DSL facilitates the creation of efficient, optimized circuits.
- Language features, learning curve
- Analysis under heavy load
- Tooling and ecosystem support
- Prover performance
- Verifier performance
- Proof size
- Proof Generation Time (including witness generation time)
- Peak Memory usage during proof generation
- Average CPU utilization % during proof generation (reflects degree of parallelization)
- Proof cost (Dependent on field and curve efficiency, proof techniques, and computation model)
- EVM Verifier
- External libraries support
- Ease of Use: Learning curve and user-friendliness of each DSL
- Security Features: Built-in security measures of each DSL
- Community and Ecosystem: Community size, resources, documentation, and support
- Version Tracking: Include version numbers of DSLs for updates and improvements
- Parallelization and Scalability: Support for parallel computations and scaling
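Several of the metrics above (generation time, peak memory, average CPU utilization) can be captured with one wrapper around the prover CLI. A Unix-only sketch, where the command passed in is whatever prover invocation is being benchmarked (hypothetical here):

```python
import os
import resource
import subprocess
import time

def profile_prover(cmd):
    """Run a prover command as a child process and report wall-clock time,
    peak memory, and average CPU utilization (Unix-only sketch)."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    wall = time.perf_counter() - start
    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    cpu_s = usage.ru_utime + usage.ru_stime
    # ru_maxrss is KiB on Linux but bytes on macOS -- normalize per platform.
    peak_kib = usage.ru_maxrss if os.uname().sysname == "Linux" else usage.ru_maxrss // 1024
    return {
        "wall_s": wall,
        "peak_mem_mib": peak_kib / 1024,
        "cpu_util_pct": 100 * cpu_s / wall / os.cpu_count(),
    }
```

A utilization near 100% × core count indicates the prover parallelizes well; near a single core's share, it is effectively serial.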
- Modular Arithmetic Focus: Prioritize modular multiplications per second (MMOPS) as a key metric, offering a concrete, comparable measure across different systems.
- Field-Specific Benchmarks: Include benchmarks for different field sizes (e.g., 256-bit, 384-bit) to capture performance nuances across various cryptographic fields.
- MMOPS/Watt Metric: Adopt a standardized MMOPS/Watt metric for a direct comparison of power efficiency across different hardware setups.
- Total Cost of Ownership (TCO): Include a detailed TCO analysis, factoring in hardware costs, operational expenses, and potential resale value, for a holistic view of economic efficiency.
- Diverse Hardware Testing: Test ZKP systems on a variety of hardware, including CPUs, GPUs, FPGAs, and ASICs, to understand performance across different computational platforms.
- System Scalability Analysis: Assess how systems scale with increased complexity and workload, providing insight into real-world applicability.
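The MMOPS and MMOPS/Watt metrics reduce to simple arithmetic once the multiplication count, elapsed time, and average power draw are measured. A sketch, with illustrative numbers:

```python
def mmops(mod_mults, elapsed_s):
    """Modular multiplications per second, in millions (MMOPS)."""
    return mod_mults / elapsed_s / 1e6

def mmops_per_watt(mod_mults, elapsed_s, avg_watts):
    """Power efficiency: MMOPS divided by average metered power draw."""
    return mmops(mod_mults, elapsed_s) / avg_watts

# Example (hypothetical measurements): 5e9 256-bit modular
# multiplications in 20 s at an average draw of 300 W
# -> 250 MMOPS, ~0.83 MMOPS/W.
```

Counting `mod_mults` requires instrumenting the prover or estimating from the circuit; `avg_watts` comes from an external power meter, which is why a standardized measurement setup matters for cross-hardware comparison.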
- Complexity addition via advanced constraints (hashing algorithms, arrays, booleans, data structures, recursion)
- Linux server: 20 cores @ 2.3 GHz, 384 GB memory
- MacBook M1 Pro: 10 cores @ 3.2 GHz, 16 GB memory
- Icicle: (TBD)
- DSL frameworks without proving systems
- Compute on Icicle
- Benchmarking sequencers
- Benchmarking different zkVMs (e.g., Scroll, Polygon zkEVM, ConsenSys zkEVM, zkSync, RISC Zero, zkWasm)
- Benchmarking IR compiler frameworks (e.g., zkLLVM)