Quickstart
Get a working example running quickly.
From Scratch (Complete Setup)
If you’re starting from zero, here’s everything you need:
# 1. Install Rust nightly
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup default nightly
# 2. Clone the repository
git clone https://github.com/logos-blockchain/logos-blockchain-testing.git
cd logos-blockchain-testing
# 3. Run your first scenario (downloads dependencies automatically)
POL_PROOF_DEV_MODE=true scripts/run/run-examples.sh -t 60 -v 1 -e 1 host
The first run takes 5-10 minutes (it downloads ~120 MB of circuit assets and builds binaries).
Windows users: Use WSL2 (Windows Subsystem for Linux). Native Windows is not supported.
Prerequisites
If you already have the repository cloned:
- Rust toolchain (nightly)
- Unix-like system (tested on Linux and macOS)
- For Docker Compose examples: Docker daemon running
- For Docker Desktop on Apple silicon (compose/k8s): set NOMOS_BUNDLE_DOCKER_PLATFORM=linux/arm64 to avoid slow/fragile amd64 emulation builds
- versions.env file at repository root (defines VERSION, NOMOS_NODE_REV, NOMOS_BUNDLE_VERSION)
Note: nomos-node binaries are built automatically on demand or can be provided via prebuilt bundles.
Important: The versions.env file is required by helper scripts. If missing, the scripts will fail with an error. The file should already exist in the repository root.
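As an illustration, the shape of versions.env is roughly the following. The values below are hypothetical placeholders (only the v0.3.1 tag appears elsewhere in this guide); rely on the file that ships in the repository root rather than writing your own:

```shell
# versions.env -- illustrative placeholders only; the real file ships
# with the repository and should not need editing
VERSION=v0.3.1                # framework/bundle version tag (assumed)
NOMOS_NODE_REV=<commit-sha>   # nomos-node revision to build/fetch (placeholder)
NOMOS_BUNDLE_VERSION=v0.3.1   # prebuilt binary bundle version (assumed)
```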
Your First Test
The framework ships with runnable example binaries in examples/src/bin/.
Recommended: Use the convenience script:
# From the logos-blockchain-testing directory
scripts/run/run-examples.sh -t 60 -v 1 -e 1 host
This handles circuit setup and binary building, then runs a complete scenario: 1 validator + 1 executor, a mixed transaction + DA workload (5 tx/block + 1 channel + 1 blob), 60s duration.
Note: The DA workload attaches DaWorkloadExpectation, and channel/blob publishing is slower than tx submission. If you see DaWorkloadExpectation failures, rerun with a longer duration (e.g., -t 120), especially on CI or slower machines.
Alternative: Direct cargo run (requires manual setup):
# Requires circuits in place and NOMOS_NODE_BIN/NOMOS_EXECUTOR_BIN set
POL_PROOF_DEV_MODE=true cargo run -p runner-examples --bin local_runner
Core API Pattern (simplified example):
use std::time::Duration;
use anyhow::Result;
use testing_framework_core::scenario::{Deployer, ScenarioBuilder};
use testing_framework_runner_local::LocalDeployer;
use testing_framework_workflows::ScenarioBuilderExt;
pub async fn run_local_demo() -> Result<()> {
// Define the scenario (1 validator + 1 executor, tx + DA workload)
let mut plan = ScenarioBuilder::topology_with(|t| t.network_star().validators(1).executors(1))
.wallets(1_000)
.transactions_with(|txs| {
txs.rate(5) // 5 transactions per block
.users(500) // use 500 of the seeded wallets
})
.da_with(|da| {
da.channel_rate(1) // 1 channel
.blob_rate(1) // target 1 blob per block
.headroom_percent(20) // default headroom when sizing channels
})
.expect_consensus_liveness()
.with_run_duration(Duration::from_secs(60))
.build();
// Deploy and run
let deployer = LocalDeployer::default();
let runner = deployer.deploy(&plan).await?;
let _handle = runner.run(&mut plan).await?;
Ok(())
}
Note: The examples are binaries with #[tokio::main], not test functions. If you want to write integration tests, wrap this pattern in #[tokio::test] functions in your own test suite.
Important: POL_PROOF_DEV_MODE=true disables expensive Groth16 zero-knowledge proof generation for leader election. Without it, proof generation is CPU-intensive and tests will time out. This is required for all runners (local, compose, k8s) for practical testing. Never use it in production.
What you should see:
- Nodes spawn as local processes
- Consensus starts producing blocks
- Scenario runs for the configured duration
- Node state/logs are written under a temporary per-run directory in the current working directory (removed after the run unless NOMOS_TESTS_KEEP_LOGS=1)
- To write per-node log files to a stable location, set NOMOS_LOG_DIR=/path/to/logs (files will have a prefix like nomos-node-0* and may include timestamps)
What Just Happened?
Let’s unpack the code:
1. Topology Configuration
use testing_framework_core::scenario::ScenarioBuilder;
pub fn step_1_topology() -> testing_framework_core::scenario::Builder<()> {
ScenarioBuilder::topology_with(|t| {
t.network_star() // Star topology: all nodes connect to seed
.validators(1) // 1 validator node
.executors(1) // 1 executor node (validator + DA dispersal)
})
}
This defines what your test network looks like.
2. Wallet Seeding
use testing_framework_core::scenario::ScenarioBuilder;
use testing_framework_workflows::ScenarioBuilderExt;
pub fn step_2_wallets() -> testing_framework_core::scenario::Builder<()> {
ScenarioBuilder::with_node_counts(1, 1).wallets(1_000) // Seed 1,000 funded wallet accounts
}
Provides funded accounts for transaction submission.
3. Workloads
use testing_framework_core::scenario::ScenarioBuilder;
use testing_framework_workflows::ScenarioBuilderExt;
pub fn step_3_workloads() -> testing_framework_core::scenario::Builder<()> {
ScenarioBuilder::with_node_counts(1, 1)
.wallets(1_000)
.transactions_with(|txs| {
txs.rate(5) // 5 transactions per block
.users(500) // Use 500 of the 1,000 wallets
})
.da_with(|da| {
da.channel_rate(1) // 1 DA channel (more spawned with headroom)
.blob_rate(1) // target 1 blob per block
.headroom_percent(20) // default headroom when sizing channels
})
}
Generates both transaction and DA traffic to stress both subsystems.
4. Expectation
use testing_framework_core::scenario::ScenarioBuilder;
use testing_framework_workflows::ScenarioBuilderExt;
pub fn step_4_expectation() -> testing_framework_core::scenario::Builder<()> {
ScenarioBuilder::with_node_counts(1, 1).expect_consensus_liveness()
}
This says what success means: blocks must be produced continuously.
5. Run Duration
use std::time::Duration;
use testing_framework_core::scenario::ScenarioBuilder;
pub fn step_5_run_duration() -> testing_framework_core::scenario::Builder<()> {
ScenarioBuilder::with_node_counts(1, 1).with_run_duration(Duration::from_secs(60))
}
Run for 60 seconds (~27 blocks with the default 2s slots and 0.9 active-slot coefficient). The framework ensures this duration is at least 2× the consensus slot duration. Adjust consensus timing via CONSENSUS_SLOT_TIME and CONSENSUS_ACTIVE_SLOT_COEFF.
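As a rough sanity check, the ~27-block figure falls directly out of those defaults. A minimal sketch in plain Rust (assuming the 2s slot time and 0.9 coefficient mentioned above; the function name is ours, not the framework's):

```rust
// Sketch: expected block count for a run, assuming the default 2s slot
// time (CONSENSUS_SLOT_TIME) and 0.9 active-slot coefficient
// (CONSENSUS_ACTIVE_SLOT_COEFF).
fn expected_blocks(run_secs: f64, slot_secs: f64, active_slot_coeff: f64) -> u32 {
    // Each slot produces a block with probability `active_slot_coeff`,
    // so the expected count is (slots elapsed) * coefficient.
    (run_secs / slot_secs * active_slot_coeff) as u32
}

fn main() {
    // 60s run / 2s slots * 0.9 ≈ 27 blocks
    println!("{}", expected_blocks(60.0, 2.0, 0.9));
}
```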
6. Deploy and Execute
use anyhow::Result;
use testing_framework_core::scenario::{Deployer, ScenarioBuilder};
use testing_framework_runner_local::LocalDeployer;
pub async fn step_6_deploy_and_execute() -> Result<()> {
let mut plan = ScenarioBuilder::with_node_counts(1, 1).build();
let deployer = LocalDeployer::default(); // Use local process deployer
let runner = deployer.deploy(&plan).await?; // Provision infrastructure
let _handle = runner.run(&mut plan).await?; // Execute workloads & expectations
Ok(())
}
Deployer provisions the infrastructure. Runner orchestrates execution.
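To make that division of labor concrete, here is a conceptual sketch with hypothetical minimal types (not the framework's real API): the deployer only provisions, and the runner only executes.

```rust
// Conceptual sketch only -- these types are stand-ins, not the
// framework's real Plan/Deployer/Runner API.
struct Plan {
    validators: u32,
    executors: u32,
}

struct Runner {
    nodes: u32, // nodes provisioned by the deployer
}

struct LocalDeployer;

impl LocalDeployer {
    // Provision infrastructure for the plan, hand back a runner.
    fn deploy(&self, plan: &Plan) -> Runner {
        Runner { nodes: plan.validators + plan.executors }
    }
}

impl Runner {
    // Execute workloads and check expectations against provisioned nodes.
    fn run(&self, _plan: &mut Plan) -> Result<(), String> {
        if self.nodes == 0 {
            return Err("nothing provisioned".into());
        }
        Ok(())
    }
}

fn main() {
    let mut plan = Plan { validators: 1, executors: 1 };
    let runner = LocalDeployer.deploy(&plan);
    assert!(runner.run(&mut plan).is_ok());
}
```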
Adjust the Topology
With run-examples.sh (recommended):
# Scale up to 3 validators + 2 executors, run for 2 minutes
scripts/run/run-examples.sh -t 120 -v 3 -e 2 host
With direct cargo run:
# Uses NOMOS_DEMO_* env vars (or legacy *_DEMO_* vars)
NOMOS_DEMO_VALIDATORS=3 \
NOMOS_DEMO_EXECUTORS=2 \
NOMOS_DEMO_RUN_SECS=120 \
POL_PROOF_DEV_MODE=true \
cargo run -p runner-examples --bin local_runner
Try Docker Compose
Use the same API with a different deployer for a reproducible containerized environment.
Recommended: Use the convenience script (handles everything):
scripts/run/run-examples.sh -t 60 -v 1 -e 1 compose
This automatically:
- Fetches circuit assets (to testing-framework/assets/stack/kzgrs_test_params/kzgrs_test_params)
- Builds binaries or uses prebuilt ones (via NOMOS_BINARIES_TAR if available)
- Builds the Docker image
- Runs the compose scenario
Alternative: Direct cargo run with manual setup:
# Option 1: Use prebuilt bundle (recommended for compose/k8s)
scripts/build/build-bundle.sh --platform linux # Creates .tmp/nomos-binaries-linux-v0.3.1.tar.gz
export NOMOS_BINARIES_TAR=.tmp/nomos-binaries-linux-v0.3.1.tar.gz
# Option 2: Manual circuit/image setup (rebuilds during image build)
scripts/setup/setup-nomos-circuits.sh v0.3.1 /tmp/nomos-circuits
cp -r /tmp/nomos-circuits/* testing-framework/assets/stack/kzgrs_test_params/
scripts/build/build_test_image.sh
# Run with Compose
NOMOS_TESTNET_IMAGE=logos-blockchain-testing:local \
POL_PROOF_DEV_MODE=true \
cargo run -p runner-examples --bin compose_runner
Benefit: Reproducible containerized environment (Dockerized nodes, repeatable deployments).
Optional: Prometheus + Grafana
The runner can integrate with external observability endpoints. For a ready-to-run local stack:
scripts/setup/setup-observability.sh compose up
eval "$(scripts/setup/setup-observability.sh compose env)"
Then run your compose scenario as usual (the environment variables enable PromQL querying and node OTLP metrics export).
Note: Compose expects KZG parameters at /kzgrs_test_params/kzgrs_test_params inside containers (the directory name is repeated as the filename).
In code: Just swap the deployer:
use anyhow::Result;
use testing_framework_core::scenario::{Deployer, ScenarioBuilder};
use testing_framework_runner_compose::ComposeDeployer;
pub async fn run_with_compose_deployer() -> Result<()> {
// ... same scenario definition ...
let mut plan = ScenarioBuilder::with_node_counts(1, 1).build();
let deployer = ComposeDeployer::default(); // Use Docker Compose
let runner = deployer.deploy(&plan).await?;
let _handle = runner.run(&mut plan).await?;
Ok(())
}
Next Steps
Now that you have a working test:
- Understand the philosophy: Testing Philosophy
- Learn the architecture: Architecture Overview
- See more examples: Examples
- API reference: Builder API Quick Reference
- Debug failures: Troubleshooting