Core Content: Workloads & Expectations
Workloads describe the activity a scenario generates; expectations describe the signals that must hold when that activity completes. This page is the canonical reference for all built-in workloads and expectations, including configuration knobs, defaults, prerequisites, and debugging guidance.
Overview
```mermaid
flowchart TD
    I[Inputs<br/>topology + wallets + rates] --> Init[Workload init]
    Init --> Drive[Drive traffic]
    Drive --> Collect[Collect signals]
    Collect --> Eval[Expectations evaluate]
```
Key concepts:
- Workloads run during the execution phase (generate traffic)
- Expectations run during the evaluation phase (check health signals)
- Each workload can attach its own expectations automatically
- Expectations can also be added explicitly
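As a minimal sketch of how these pieces compose (using only builder calls documented on this page), a scenario seeds its inputs, drives a workload during execution, and evaluates expectations once the run window closes:

```rust
use std::time::Duration;

use testing_framework_core::scenario::ScenarioBuilder;
use testing_framework_workflows::ScenarioBuilderExt;

ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .wallets(10)                                // input: seed wallet accounts
    .transactions_with(|tx| tx.rate(5))         // execution phase: drive traffic
    .expect_consensus_liveness()                // evaluation phase: explicit check
    .with_run_duration(Duration::from_secs(60))
    .build();
```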
Built-in Workloads
1. Transaction Workload
Submits user-level transactions at a configurable rate to exercise transaction processing and inclusion paths.
Import:
```rust
use testing_framework_workflows::workloads::transaction::Workload;
```
Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| `rate` | `u64` | Required | Transactions per block (not per second!) |
| `users` | `Option<usize>` | All wallets | Number of distinct wallet accounts to use |
DSL Usage
```rust
use std::time::Duration;

use testing_framework_core::scenario::ScenarioBuilder;
use testing_framework_workflows::ScenarioBuilderExt;

ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .wallets(20) // Seed 20 wallet accounts
    .transactions_with(|tx| {
        tx.rate(10) // 10 transactions per block
            .users(5) // Use only 5 of the 20 wallets
    })
    .with_run_duration(Duration::from_secs(60))
    .build();
```
Direct Instantiation
```rust
use std::time::Duration;

use testing_framework_core::scenario::ScenarioBuilder;
use testing_framework_workflows::{ScenarioBuilderExt, workloads::transaction};

let tx_workload = transaction::Workload::with_rate(10)
    .expect("transaction rate must be non-zero");

ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .wallets(20)
    .with_workload(tx_workload)
    .with_run_duration(Duration::from_secs(60))
    .build();
```
Prerequisites
- Wallet accounts must be seeded: call `.wallets(N)` before `.transactions_with()`. The workload will fail during `init()` if no wallets are configured.
- Circuit artifacts must be available:
  - Automatically staged by `scripts/run/run-examples.sh`
  - Or manually via `scripts/setup/setup-logos-blockchain-circuits.sh` (recommended)
Attached Expectation
TxInclusionExpectation — Verifies that submitted transactions were included in blocks.
What it checks:
- At least `N` transactions were included on-chain (where N = rate × user count × expected block count)
- Uses BlockFeed to count transactions across all observed blocks
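The threshold arithmetic above is easy to reproduce as a back-of-the-envelope check. This is an illustrative sketch with hypothetical names, not the expectation's real internals:

```rust
/// Hypothetical helper mirroring the formula above. With a single shared
/// user pool the user factor is effectively 1, which is how the failure
/// example below arrives at 10 tx/block × 60 blocks = 600.
fn expected_min_inclusions(rate_per_block: u64, user_factor: u64, expected_blocks: u64) -> u64 {
    rate_per_block * user_factor * expected_blocks
}
```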
Failure modes:
- “Expected >= X transactions, observed Y” (Y < X)
- Common causes: proof generation timeouts, node crashes, insufficient duration
What Failure Looks Like
```
Error: Expectation failed: TxInclusionExpectation
Expected: >= 600 transactions (10 tx/block × 60 blocks)
Observed: 127 transactions

Possible causes:
- Duration too short (nodes still syncing)
- Node crashes (check logs for panics/OOM)
- Wallet accounts not seeded (check topology config)
```
How to debug:
- Check logs for proof generation timing:
grep "proof generation" $LOGOS_BLOCKCHAIN_LOG_DIR/*/*.log - Increase duration:
.with_run_duration(Duration::from_secs(120)) - Reduce rate:
.rate(5)instead of.rate(10)
2. Chaos Workload (Random Restart)
Triggers controlled node restarts to test resilience and recovery behaviors.
Import:
```rust
use testing_framework_workflows::workloads::chaos::RandomRestartWorkload;
```
Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| `min_delay` | `Duration` | Required | Minimum time between restart attempts |
| `max_delay` | `Duration` | Required | Maximum time between restart attempts |
| `target_cooldown` | `Duration` | Required | Minimum time before restarting the same node again |
| `include_nodes` | `bool` | Required | Whether to restart nodes |
Usage
```rust
use std::time::Duration;

use testing_framework_core::scenario::ScenarioBuilder;
use testing_framework_workflows::{ScenarioBuilderExt, workloads::chaos::RandomRestartWorkload};

let scenario = ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .enable_node_control() // REQUIRED for chaos
    .with_workload(RandomRestartWorkload::new(
        Duration::from_secs(45),  // min_delay
        Duration::from_secs(75),  // max_delay
        Duration::from_secs(120), // target_cooldown
        true,                     // include_nodes
    ))
    .expect_consensus_liveness()
    .with_run_duration(Duration::from_secs(180))
    .build();
```
Prerequisites
- Node control must be enabled: call `.enable_node_control()`. This adds `NodeControlCapability` to the scenario.
- Runner must support node control:
  - Compose runner: Supported
  - Local runner: Not supported
  - K8s runner: Not yet implemented
- Sufficient topology: need more than one node (the workload skips restarts if only one node is available).
- Realistic timing:
  - Total duration should be 2-3× the max_delay + cooldown
  - Example: max_delay=75s, cooldown=120s → duration >= 180s (a quick sanity check is sketched below)
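As a pre-run sanity check on these numbers, the following hypothetical helper (not a framework API; the multiplier is one plausible reading of the guideline above) verifies the run leaves room for a couple of restart windows:

```rust
use std::time::Duration;

/// Does the run leave room for at least `restarts` full restart windows?
/// e.g. run=180s, max_delay=75s allows two windows, matching the usage
/// example above.
fn duration_allows_restarts(run: Duration, max_delay: Duration, restarts: u32) -> bool {
    run >= max_delay * restarts
}
```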
Attached Expectation
None. You must explicitly add expectations (typically `.expect_consensus_liveness()`).
Why? Chaos workloads are about testing recovery under disruption. The appropriate expectation depends on what you’re testing:
- Consensus survives restarts → `.expect_consensus_liveness()`
- Height converges after chaos → custom expectation checking BlockFeed (see the sketch below)
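For the height-convergence case, a custom expectation in the style of Pattern 2 below could compare observed node heights after the chaos window. This is a sketch only: `max_height_spread()` is a hypothetical accessor, since only `block_feed()` and `count()` are documented on this page.

```rust
use async_trait::async_trait;
// RunContext and DynError are assumed to live alongside Expectation.
use testing_framework_core::scenario::{DynError, Expectation, RunContext};

/// Sketch: fail if node heights diverge by more than `max_spread` blocks.
struct HeightConvergence {
    max_spread: u64,
}

#[async_trait]
impl Expectation for HeightConvergence {
    async fn evaluate(&self, ctx: &RunContext) -> Result<(), DynError> {
        let spread = ctx.block_feed()?.max_height_spread(); // hypothetical accessor
        if spread > self.max_spread {
            return Err(format!("heights diverged by {spread} blocks").into());
        }
        Ok(())
    }
}
```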
What Failure Looks Like
```
Error: Workload failed: chaos_restart
Cause: NodeControlHandle not available

Possible causes:
- Forgot .enable_node_control() in scenario builder
- Using local runner (doesn't support node control)
- Using k8s runner (doesn't support node control)
```
Or:
```
Error: Expectation failed: ConsensusLiveness
Expected: >= 20 blocks
Observed: 8 blocks

Possible causes:
- Restart frequency too high (nodes can't recover)
- Consensus timing too slow (increase duration)
- Too many nodes restarted simultaneously
- Nodes crashed after restart (check logs)
```
How to debug:
- Check restart events in logs:
grep "restarting\|restart complete" $LOGOS_BLOCKCHAIN_LOG_DIR/*/*.log - Verify node control is enabled:
grep "NodeControlHandle" $LOGOS_BLOCKCHAIN_LOG_DIR/*/*.log - Increase cooldown:
Duration::from_secs(180) - Increase duration:
.with_run_duration(Duration::from_secs(300))
Built-in Expectations
1. Consensus Liveness
Verifies the system continues to produce blocks during the execution window.
Import:
```rust
use testing_framework_workflows::ScenarioBuilderExt;
```
DSL Usage
```rust
use std::time::Duration;

use testing_framework_core::scenario::ScenarioBuilder;
use testing_framework_workflows::ScenarioBuilderExt;

ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .expect_consensus_liveness()
    .with_run_duration(Duration::from_secs(60))
    .build();
```
What It Checks
- At least `N` blocks were produced (where N = duration / expected_block_time)
- Uses BlockFeed to count observed blocks
- Compares against a minimum threshold (typically 50% of the theoretical maximum)
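Illustratively, the threshold works out as below (a hypothetical helper, not the expectation's real internals):

```rust
/// ~50% of the theoretical maximum block count, per the note above.
/// e.g. a 60s run at 1 block/s gives a theoretical max of 60 and a
/// threshold of 30, matching the failure example below.
fn min_expected_blocks(duration_secs: u64, block_time_secs: u64) -> u64 {
    (duration_secs / block_time_secs) / 2
}
```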
Failure Modes
```
Error: Expectation failed: ConsensusLiveness
Expected: >= 30 blocks
Observed: 3 blocks

Possible causes:
- Nodes crashed or never started (check logs)
- Consensus timing misconfigured (CONSENSUS_SLOT_TIME too high)
- Insufficient nodes (need >= 2 for BFT consensus)
- Duration too short (nodes still syncing)
```
How to Debug
- Check if nodes started:
grep "node started\|listening on" $LOGOS_BLOCKCHAIN_LOG_DIR/*/*.log - Check block production:
grep "block.*height" $LOGOS_BLOCKCHAIN_LOG_DIR/node-*/*.log - Check consensus participation:
grep "consensus.*slot\|proposal" $LOGOS_BLOCKCHAIN_LOG_DIR/node-*/*.log - Increase duration:
.with_run_duration(Duration::from_secs(120)) - Check env vars:
echo $CONSENSUS_SLOT_TIME $CONSENSUS_ACTIVE_SLOT_COEFF
2. Workload-Specific Expectations
Each workload automatically attaches its own expectation:
| Workload | Expectation | What It Checks |
|---|---|---|
| Transaction | TxInclusionExpectation | Transactions were included in blocks |
| Chaos | (None) | Add .expect_consensus_liveness() explicitly |
These expectations are added automatically when using the DSL (.transactions_with()).
Configuration Quick Reference
Transaction Workload
```rust
.wallets(20)
.transactions_with(|tx| tx.rate(10).users(5))
```
| What | Value | Unit |
|---|---|---|
| Rate | 10 | tx/block |
| Users | 5 | wallet accounts |
| Wallets | 20 | total seeded |
Chaos Workload
```rust
.enable_node_control()
.with_workload(RandomRestartWorkload::new(
    Duration::from_secs(45),  // min
    Duration::from_secs(75),  // max
    Duration::from_secs(120), // cooldown
    true,                     // nodes
))
```
Common Patterns
Pattern 1: Multiple Workloads
```rust
ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .wallets(20)
    .transactions_with(|tx| tx.rate(5).users(10))
    .expect_consensus_liveness()
    .with_run_duration(Duration::from_secs(120))
    .build();
```
All workloads run concurrently. Expectations for each workload run after the execution window ends.
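For a scenario that genuinely runs two workloads side by side, the transaction and chaos workloads from this page can be combined; the call order below is a sketch, assuming the builder accepts workloads and capabilities in any order:

```rust
use std::time::Duration;

use testing_framework_core::scenario::ScenarioBuilder;
use testing_framework_workflows::{ScenarioBuilderExt, workloads::chaos::RandomRestartWorkload};

ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .wallets(20)
    .transactions_with(|tx| tx.rate(5).users(10)) // workload 1: transactions
    .enable_node_control()                        // required for chaos
    .with_workload(RandomRestartWorkload::new(    // workload 2: random restarts
        Duration::from_secs(45),
        Duration::from_secs(75),
        Duration::from_secs(120),
        true,
    ))
    .expect_consensus_liveness()
    .with_run_duration(Duration::from_secs(300))
    .build();
```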
Pattern 2: Custom Expectation
```rust
use std::time::Duration;

use async_trait::async_trait;
// RunContext and DynError are assumed to live alongside Expectation.
use testing_framework_core::scenario::{DynError, Expectation, RunContext, ScenarioBuilder};

struct MyCustomExpectation;

#[async_trait]
impl Expectation for MyCustomExpectation {
    async fn evaluate(&self, ctx: &RunContext) -> Result<(), DynError> {
        // Access BlockFeed, metrics, topology, etc.
        let block_count = ctx.block_feed()?.count();
        if block_count < 10 {
            return Err("Not enough blocks".into());
        }
        Ok(())
    }
}

ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .with_expectation(MyCustomExpectation)
    .with_run_duration(Duration::from_secs(60))
    .build();
```
Debugging Checklist
When a workload or expectation fails:
- Check logs: `$LOGOS_BLOCKCHAIN_LOG_DIR/*/`, or `docker compose logs`, or `kubectl logs`
- Check prerequisites: wallets, node control, circuits
- Increase duration: Double the run duration and retry
- Reduce rates: Halve the traffic rates and retry
- Check metrics: Prometheus queries for block height and tx count
- Reproduce locally: Use the local runner for faster iteration
See Also
- Authoring Scenarios — Step-by-step tutorial for building scenarios
- RunContext: BlockFeed & Node Control — Learn how to use BlockFeed in expectations and access node control
- Examples — Concrete scenario patterns combining workloads and expectations
- Extending the Framework — Implement custom workloads and expectations
- Troubleshooting — Common failure scenarios and fixes