Core Content: Workloads & Expectations
Workloads describe the activity a scenario generates; expectations describe the signals that must hold when that activity completes. This page is the canonical reference for all built-in workloads and expectations, including configuration knobs, defaults, prerequisites, and debugging guidance.
Overview
flowchart TD
I[Inputs<br/>topology + wallets + rates] --> Init[Workload init]
Init --> Drive[Drive traffic]
Drive --> Collect[Collect signals]
Collect --> Eval[Expectations evaluate]
Key concepts:
- Workloads run during the execution phase (generate traffic)
- Expectations run during the evaluation phase (check health signals)
- Each workload can attach its own expectations automatically
- Expectations can also be added explicitly
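A minimal sketch of how these pieces combine in the builder DSL (all of the calls below are covered in detail in the sections that follow):
use std::time::Duration;
use testing_framework_core::scenario::ScenarioBuilder;
use testing_framework_workflows::ScenarioBuilderExt;

ScenarioBuilder::topology_with(|t| t.network_star().validators(3).executors(1))
    .wallets(10)                                  // seed wallet accounts for the workload
    .transactions_with(|tx| tx.rate(5))           // workload; attaches TxInclusionExpectation automatically
    .expect_consensus_liveness()                  // explicitly added expectation
    .with_run_duration(Duration::from_secs(60))   // execution window, after which expectations evaluate
    .build();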
Built-in Workloads
1. Transaction Workload
Submits user-level transactions at a configurable rate to exercise transaction processing and inclusion paths.
Import:
use testing_framework_workflows::workloads::transaction::Workload;
Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| rate | u64 | Required | Transactions per block (not per second!) |
| users | Option<usize> | All wallets | Number of distinct wallet accounts to use |
DSL Usage
use testing_framework_workflows::ScenarioBuilderExt;
ScenarioBuilder::topology_with(|t| t.network_star().validators(3).executors(1))
    .wallets(20)            // Seed 20 wallet accounts
    .transactions_with(|tx| {
        tx.rate(10)         // 10 transactions per block
            .users(5)       // Use only 5 of the 20 wallets
    })
    .with_run_duration(Duration::from_secs(60))
    .build();
Direct Instantiation
use testing_framework_workflows::workloads::transaction;
let tx_workload = transaction::Workload::with_rate(10)
    .expect("transaction rate must be non-zero");

ScenarioBuilder::topology_with(|t| t.network_star().validators(3).executors(1))
    .wallets(20)
    .with_workload(tx_workload)
    .with_run_duration(Duration::from_secs(60))
    .build();
Prerequisites
- Wallet accounts must be seeded: call .wallets(N) before .transactions_with(). The workload will fail during init() if no wallets are configured.
- Proof generation must be fast: export POL_PROOF_DEV_MODE=true. Without this, proof generation takes ~30-60 seconds per transaction, causing timeouts.
- Circuit artifacts must be available:
  - Automatically staged by scripts/run/run-examples.sh
  - Or manually via scripts/setup/setup-circuits-stack.sh (recommended) or scripts/setup/setup-nomos-circuits.sh
Attached Expectation
TxInclusionExpectation — Verifies that submitted transactions were included in blocks.
What it checks:
- At least N transactions were included on-chain (where N = rate × user count × expected block count)
- Uses BlockFeed to count transactions across all observed blocks
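As a rough worked example (mirroring the failure output below): at rate(10) over roughly 60 expected blocks, the threshold works out to 10 × 60 = 600 transactions; the configured user count determines how that traffic is spread across wallets.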
Failure modes:
- “Expected >= X transactions, observed Y” (Y < X)
- Common causes: proof generation timeouts, node crashes, insufficient duration
What Failure Looks Like
Error: Expectation failed: TxInclusionExpectation
Expected: >= 600 transactions (10 tx/block × 60 blocks)
Observed: 127 transactions
Possible causes:
- POL_PROOF_DEV_MODE not set (proof generation too slow)
- Duration too short (nodes still syncing)
- Node crashes (check logs for panics/OOM)
- Wallet accounts not seeded (check topology config)
How to debug:
- Check logs for proof generation timing: grep "proof generation" $NOMOS_LOG_DIR/executor-0/*.log
- Verify POL_PROOF_DEV_MODE=true was set
- Increase duration: .with_run_duration(Duration::from_secs(120))
- Reduce rate: .rate(5) instead of .rate(10)
2. Data Availability (DA) Workload
Drives blob and channel activity to exercise data availability paths and storage.
Import:
use testing_framework_workflows::workloads::da::Workload;
Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| blob_rate_per_block | NonZeroU64 | Required | Blobs to publish per block |
| channel_rate_per_block | NonZeroU64 | Required | Channels to create per block |
| headroom_percent | u64 | 20 | Extra capacity for channel planning (avoids saturation) |
DSL Usage
use testing_framework_workflows::ScenarioBuilderExt;
ScenarioBuilder::topology_with(|t| t.network_star().validators(3).executors(2))
    .da_with(|da| {
        da.channel_rate(2)  // 2 channels per block
            .blob_rate(4)   // 4 blobs per block
    })
    .with_run_duration(Duration::from_secs(120))
    .build();
Direct Instantiation
use std::num::NonZeroU64;
use testing_framework_workflows::workloads::da;
let da_workload = da::Workload::with_rate(
    NonZeroU64::new(4).unwrap(),  // blob_rate_per_block
    NonZeroU64::new(2).unwrap(),  // channel_rate_per_block
    20,                           // headroom_percent
);

ScenarioBuilder::topology_with(|t| t.network_star().validators(3).executors(2))
    .with_workload(da_workload)
    .with_run_duration(Duration::from_secs(120))
    .build();
Prerequisites
- Executors must be present: .executors(N) (at least 1 executor). The DA workload requires executor nodes to handle blob publishing.
- Sufficient duration: Channel creation and blob publishing are slower than transaction submission. Allow 120+ seconds.
- Circuit artifacts: Same as the transaction workload (POL_PROOF_DEV_MODE, circuits staged).
Attached Expectation
DaWorkloadExpectation — Verifies blobs and channels were created and published.
What it checks:
- At least N channels were created (where N = channel_rate × expected blocks)
- At least M blobs were published (where M = blob_rate × expected blocks × headroom)
- Uses BlockFeed and the executor API to verify
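As a rough worked example (mirroring the failure output below): with channel_rate(2) over roughly 30 expected blocks, the channel threshold works out to 2 × 30 = 60 channels; the blob threshold is derived the same way from the blob rate, with headroom_percent folded in as extra planning capacity.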
Failure modes:
- “Expected >= X channels, observed Y” (Y < X)
- “Expected >= X blobs, observed Y” (Y < X)
- Common causes: executor crashes, insufficient duration, DA saturation
What Failure Looks Like
Error: Expectation failed: DaWorkloadExpectation
Expected: >= 60 channels (2 channels/block × 30 blocks)
Observed: 23 channels
Possible causes:
- Executors crashed or restarted (check executor logs)
- Duration too short (channels still being created)
- Blob publishing failed (check executor API errors)
- Network issues (check validator/executor connectivity)
How to debug:
- Check executor logs: grep "channel\|blob" $NOMOS_LOG_DIR/executor-0/*.log
- Verify executors stayed running: grep "panic\|killed" $NOMOS_LOG_DIR/executor-*/*.log
- Increase duration: .with_run_duration(Duration::from_secs(180))
- Reduce rates: .channel_rate(1).blob_rate(2)
3. Chaos Workload (Random Restart)
Triggers controlled node restarts to test resilience and recovery behaviors.
Import:
use testing_framework_workflows::workloads::chaos::RandomRestartWorkload;
Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| min_delay | Duration | Required | Minimum time between restart attempts |
| max_delay | Duration | Required | Maximum time between restart attempts |
| target_cooldown | Duration | Required | Minimum time before restarting the same node again |
| include_validators | bool | Required | Whether to restart validators |
| include_executors | bool | Required | Whether to restart executors |
Usage
use std::time::Duration;
use testing_framework_core::scenario::ScenarioBuilder;
use testing_framework_workflows::{ScenarioBuilderExt, workloads::chaos::RandomRestartWorkload};
let scenario = ScenarioBuilder::topology_with(|t| {
    t.network_star().validators(3).executors(2)
})
.enable_node_control()  // REQUIRED for chaos
.with_workload(RandomRestartWorkload::new(
    Duration::from_secs(45),   // min_delay
    Duration::from_secs(75),   // max_delay
    Duration::from_secs(120),  // target_cooldown
    true,                      // include_validators
    true,                      // include_executors
))
.expect_consensus_liveness()
.with_run_duration(Duration::from_secs(180))
.build();
Prerequisites
- Node control must be enabled: .enable_node_control() adds NodeControlCapability to the scenario.
- Runner must support node control:
  - Compose runner: Supported
  - Local runner: Not supported
  - K8s runner: Not yet implemented
- Sufficient topology:
  - For validators: need >1 validator (the workload skips restarts if only 1 is present)
  - For executors: all executors can be restarted
- Realistic timing:
  - Total duration should be 2-3× the max_delay + cooldown
  - Example: max_delay=75s, cooldown=120s → duration >= 180s
Attached Expectation
None. You must explicitly add expectations (typically .expect_consensus_liveness()).
Why? Chaos workloads are about testing recovery under disruption. The appropriate expectation depends on what you’re testing:
- Consensus survives restarts → .expect_consensus_liveness()
- Height converges after chaos → a custom expectation checking BlockFeed (see the sketch after this list)
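For the second case, here is a minimal sketch of a custom BlockFeed-based expectation, modeled on the Custom Expectation pattern later on this page. The type name MinProgressAfterChaos and the threshold are illustrative, and it checks overall block progress rather than strict per-node height convergence (which would require querying each node's height):
use async_trait::async_trait;
use testing_framework_core::scenario::Expectation;
// RunContext and DynError are provided by the framework's core crate
// (see the Custom Expectation pattern below for the same usage).

// Illustrative expectation: require a minimum number of blocks observed during the run.
struct MinProgressAfterChaos {
    min_blocks: usize,
}

#[async_trait]
impl Expectation for MinProgressAfterChaos {
    async fn evaluate(&self, ctx: &RunContext) -> Result<(), DynError> {
        // Count blocks observed via BlockFeed over the execution window.
        let observed = ctx.block_feed()?.count();
        if observed < self.min_blocks {
            return Err(format!(
                "expected >= {} blocks despite restarts, observed {}",
                self.min_blocks, observed
            )
            .into());
        }
        Ok(())
    }
}
Attach it alongside the chaos workload with .with_expectation(MinProgressAfterChaos { min_blocks: 20 }).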
What Failure Looks Like
Error: Workload failed: chaos_restart
Cause: NodeControlHandle not available
Possible causes:
- Forgot .enable_node_control() in scenario builder
- Using local runner (doesn't support node control)
- Using k8s runner (doesn't support node control)
Or:
Error: Expectation failed: ConsensusLiveness
Expected: >= 20 blocks
Observed: 8 blocks
Possible causes:
- Restart frequency too high (nodes can't recover)
- Consensus timing too slow (increase duration)
- Too many validators restarted simultaneously
- Nodes crashed after restart (check logs)
How to debug:
- Check restart events in logs: grep "restarting\|restart complete" $NOMOS_LOG_DIR/*/*.log
- Verify node control is enabled: grep "NodeControlHandle" $NOMOS_LOG_DIR/*/*.log
- Increase cooldown: Duration::from_secs(180)
- Reduce restart scope: include_validators = false (test executors only)
- Increase duration: .with_run_duration(Duration::from_secs(300))
Built-in Expectations
1. Consensus Liveness
Verifies the system continues to produce blocks during the execution window.
Import:
use testing_framework_workflows::ScenarioBuilderExt;
DSL Usage
ScenarioBuilder::topology_with(|t| t.network_star().validators(3).executors(1))
    .expect_consensus_liveness()
    .with_run_duration(Duration::from_secs(60))
    .build();
What It Checks
- At least N blocks were produced (where N = duration / expected_block_time)
- Uses BlockFeed to count observed blocks
- Compares against a minimum threshold (typically 50% of theoretical max)
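As a rough worked example: assuming an expected block time of about 1 second (illustrative only; actual timing depends on CONSENSUS_SLOT_TIME and related settings), a 60-second run has a theoretical maximum of roughly 60 blocks, and a 50% threshold yields the ">= 30 blocks" figure shown in the failure output below.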
Failure Modes
Error: Expectation failed: ConsensusLiveness
Expected: >= 30 blocks
Observed: 3 blocks
Possible causes:
- Nodes crashed or never started (check logs)
- Consensus timing misconfigured (CONSENSUS_SLOT_TIME too high)
- Insufficient validators (need >= 2 for BFT consensus)
- Duration too short (nodes still syncing)
How to Debug
- Check if nodes started: grep "node started\|listening on" $NOMOS_LOG_DIR/*/*.log
- Check block production: grep "block.*height" $NOMOS_LOG_DIR/validator-*/*.log
- Check consensus participation: grep "consensus.*slot\|proposal" $NOMOS_LOG_DIR/validator-*/*.log
- Increase duration: .with_run_duration(Duration::from_secs(120))
- Check env vars: echo $CONSENSUS_SLOT_TIME $CONSENSUS_ACTIVE_SLOT_COEFF
2. Workload-Specific Expectations
Each workload automatically attaches its own expectation:
| Workload | Expectation | What It Checks |
|---|---|---|
| Transaction | TxInclusionExpectation | Transactions were included in blocks |
| DA | DaWorkloadExpectation | Blobs and channels were created/published |
| Chaos | (None) | Add .expect_consensus_liveness() explicitly |
These expectations are added automatically when using the DSL (.transactions_with(), .da_with()).
Configuration Quick Reference
Transaction Workload
.wallets(20)
.transactions_with(|tx| tx.rate(10).users(5))
| What | Value | Unit |
|---|---|---|
| Rate | 10 | tx/block |
| Users | 5 | wallet accounts |
| Wallets | 20 | total seeded |
DA Workload
.da_with(|da| da.channel_rate(2).blob_rate(4))
| What | Value | Unit |
|---|---|---|
| Channel rate | 2 | channels/block |
| Blob rate | 4 | blobs/block |
| Headroom | 20 | percent |
Chaos Workload
.enable_node_control()
.with_workload(RandomRestartWorkload::new(
    Duration::from_secs(45),   // min
    Duration::from_secs(75),   // max
    Duration::from_secs(120),  // cooldown
    true,                      // validators
    true,                      // executors
))
Common Patterns
Pattern 1: Multiple Workloads
ScenarioBuilder::topology_with(|t| t.network_star().validators(3).executors(2))
    .wallets(20)
    .transactions_with(|tx| tx.rate(5).users(10))
    .da_with(|da| da.channel_rate(2).blob_rate(2))
    .expect_consensus_liveness()
    .with_run_duration(Duration::from_secs(120))
    .build();
All workloads run concurrently. Expectations for each workload run after the execution window ends.
Pattern 2: Custom Expectation
use async_trait::async_trait;
use testing_framework_core::scenario::Expectation;
// RunContext and DynError also come from the framework's core crate; check the
// crate docs for the current re-exports.

struct MyCustomExpectation;

#[async_trait]
impl Expectation for MyCustomExpectation {
    async fn evaluate(&self, ctx: &RunContext) -> Result<(), DynError> {
        // Access BlockFeed, metrics, topology, etc.
        let block_count = ctx.block_feed()?.count();
        if block_count < 10 {
            return Err("Not enough blocks".into());
        }
        Ok(())
    }
}
ScenarioBuilder::topology_with(|t| t.network_star().validators(3).executors(1))
    .with_expectation(MyCustomExpectation)
    .with_run_duration(Duration::from_secs(60))
    .build();
Debugging Checklist
When a workload or expectation fails:
- Check logs: $NOMOS_LOG_DIR/*/ or docker compose logs or kubectl logs
- Verify environment variables: POL_PROOF_DEV_MODE, NOMOS_NODE_BIN, etc.
- Check prerequisites: wallets, executors, node control, circuits
- Increase duration: Double the run duration and retry
- Reduce rates: Halve the traffic rates and retry
- Check metrics: Prometheus queries for block height, tx count, DA stats
- Reproduce locally: Use local runner for faster iteration
See Also
- Authoring Scenarios — Step-by-step tutorial for building scenarios
- RunContext: BlockFeed & Node Control — Learn how to use BlockFeed in expectations and access node control
- Examples — Concrete scenario patterns combining workloads and expectations
- Extending the Framework — Implement custom workloads and expectations
- Troubleshooting — Common failure scenarios and fixes