# Running Scenarios
This page focuses on how scenarios are executed (deploy → capture → execute → evaluate → cleanup), what artifacts you get back, and how that differs across runners.
For “just run something that works” commands, see Running Examples.
## Execution Flow (High Level)
When you run a built scenario via a deployer, every run follows the same shape:
```mermaid
flowchart TD
    Build[Scenario built] --> Deploy[Deploy]
    Deploy --> Capture[Capture]
    Capture --> Execute[Execute]
    Execute --> Evaluate[Evaluate]
    Evaluate --> Cleanup[Cleanup]
```
- Deploy: provision infrastructure and start nodes (processes/containers/pods)
- Capture: establish clients/observability and capture initial state
- Execute: run workloads for the configured wall-clock duration
- Evaluate: run expectations (after the execution window ends)
- Cleanup: stop resources and finalize artifacts
## The Core API
```rust
use std::time::Duration;

use testing_framework_core::scenario::{Deployer as _, ScenarioBuilder};
use testing_framework_runner_local::LocalDeployer;
use testing_framework_workflows::ScenarioBuilderExt;

async fn run_once() -> anyhow::Result<()> {
    // Describe the scenario: topology, wallets, transaction workload, and expectations.
    let mut scenario = ScenarioBuilder::topology_with(|t| t.network_star().validators(3).executors(1))
        .wallets(20)
        .transactions_with(|tx| tx.rate(1).users(5))
        .expect_consensus_liveness()
        .with_run_duration(Duration::from_secs(60))
        .build()?;

    // Deploy, then drive the run through capture → execute → evaluate → cleanup.
    let runner = LocalDeployer::default().deploy(&scenario).await?;
    runner.run(&mut scenario).await?;
    Ok(())
}
```
Notes:

- `with_run_duration(...)` is wall-clock time, not "number of blocks".
- `.transactions_with(...)` rates are per block.
- Most users should run scenarios via `scripts/run/run-examples.sh` unless they are embedding the framework in their own test crate (see the sketch below).
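If you are embedding the framework, the same deploy-and-run sequence drops into an async test. A minimal sketch, assuming your test crate depends on `tokio` (the test name and the `#[tokio::test]` attribute are illustrative, not part of the framework's API):

```rust
use std::time::Duration;

use testing_framework_core::scenario::{Deployer as _, ScenarioBuilder};
use testing_framework_runner_local::LocalDeployer;
use testing_framework_workflows::ScenarioBuilderExt;

// Hypothetical embedding in a test crate: same flow as `run_once` above,
// driven by a tokio test runtime. Any error from deploy/run fails the test.
#[tokio::test]
async fn consensus_stays_live_under_light_load() -> anyhow::Result<()> {
    let mut scenario = ScenarioBuilder::topology_with(|t| t.network_star().validators(3).executors(1))
        .wallets(20)
        .transactions_with(|tx| tx.rate(1).users(5))
        .expect_consensus_liveness()
        .with_run_duration(Duration::from_secs(60))
        .build()?;

    let runner = LocalDeployer::default().deploy(&scenario).await?;
    runner.run(&mut scenario).await?;
    Ok(())
}
```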
## Runner Differences
### Local (Host) Runner
- Best for: fast iteration and debugging
- Logs/state: stored under a temporary run directory unless you set `NOMOS_TESTS_KEEP_LOGS=1` and/or `NOMOS_LOG_DIR=...`
- Limitations: no node-control capability (chaos workflows that require node control won't work here)
Run the built-in local examples:
```bash
POL_PROOF_DEV_MODE=true \
scripts/run/run-examples.sh -t 60 -v 3 -e 1 host
```
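To keep artifacts for post-mortem inspection, add the log-retention variables to the same command. A sketch; `/tmp/nomos-host-logs` is a hypothetical directory, not a framework default:

```bash
# Keep the temporary run directory and write node logs to a fixed location.
# /tmp/nomos-host-logs is illustrative; any writable directory works.
NOMOS_TESTS_KEEP_LOGS=1 \
NOMOS_LOG_DIR=/tmp/nomos-host-logs \
POL_PROOF_DEV_MODE=true \
scripts/run/run-examples.sh -t 60 -v 3 -e 1 host
```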
### Compose Runner
- Best for: reproducible multi-node environments and node control
- Logs: primarily via `docker compose logs` (plus any node-level log configuration you apply)
- Debugging: set `COMPOSE_RUNNER_PRESERVE=1` to keep the environment up after a run
Run the built-in compose examples:
```bash
POL_PROOF_DEV_MODE=true \
scripts/run/run-examples.sh -t 60 -v 3 -e 1 compose
```
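When debugging, preserve the environment and inspect it with the usual Compose tooling. A sketch; where you run `docker compose` from, and the service names it reports, depend on the compose file the runner generates:

```bash
# Keep the containers up after the scenario finishes.
COMPOSE_RUNNER_PRESERVE=1 \
POL_PROOF_DEV_MODE=true \
scripts/run/run-examples.sh -t 60 -v 3 -e 1 compose

# Then, from the directory containing the generated compose file:
docker compose ps               # list the preserved services
docker compose logs <service>   # <service> is a placeholder; take names from `ps`
```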
### K8s Runner
- Best for: production-like behavior, cluster scheduling/networking
- Logs: `kubectl logs ...`
- Debugging: set `K8S_RUNNER_PRESERVE=1` and `K8S_RUNNER_NAMESPACE=...` to keep resources around
Run the built-in k8s examples:
```bash
POL_PROOF_DEV_MODE=true \
scripts/run/run-examples.sh -t 60 -v 3 -e 1 k8s
```
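To inspect a preserved deployment afterwards, pin the namespace so you know where to look. A sketch; `nomos-debug` is a hypothetical namespace name, and pod names come from `kubectl get pods`:

```bash
# Keep the namespace and its resources after the run (namespace name is illustrative).
K8S_RUNNER_PRESERVE=1 \
K8S_RUNNER_NAMESPACE=nomos-debug \
POL_PROOF_DEV_MODE=true \
scripts/run/run-examples.sh -t 60 -v 3 -e 1 k8s

# Inspect what was left behind.
kubectl get pods -n nomos-debug
kubectl logs -n nomos-debug <pod-name>
```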
## Artifacts & Where to Look
- Node logs: configure via `NOMOS_LOG_DIR`, `NOMOS_LOG_LEVEL`, `NOMOS_LOG_FILTER` (see Logging & Observability)
- Runner logs: controlled by `RUST_LOG` (runner process only)
- Keep run directories: set `NOMOS_TESTS_KEEP_LOGS=1`
- Compose environment preservation: set `COMPOSE_RUNNER_PRESERVE=1`
- K8s environment preservation: set `K8S_RUNNER_PRESERVE=1`
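These variables compose. A hedged example of a verbose local debug run; the `debug` values and the log directory are illustrative choices, not documented defaults:

```bash
# Verbose runner output plus retained node logs.
# Values after `=` are illustrative; see Logging & Observability for accepted levels.
RUST_LOG=debug \
NOMOS_LOG_LEVEL=debug \
NOMOS_LOG_DIR=/tmp/nomos-run-logs \
NOMOS_TESTS_KEEP_LOGS=1 \
POL_PROOF_DEV_MODE=true \
scripts/run/run-examples.sh -t 60 -v 3 -e 1 host
```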