Core Content: Workloads & Expectations

Workloads describe the activity a scenario generates; expectations describe the signals that must hold when that activity completes. This page is the canonical reference for all built-in workloads and expectations, including configuration knobs, defaults, prerequisites, and debugging guidance.


Overview

flowchart TD
    I[Inputs<br/>topology + wallets + rates] --> Init[Workload init]
    Init --> Drive[Drive traffic]
    Drive --> Collect[Collect signals]
    Collect --> Eval[Expectations evaluate]

Key concepts:

  • Workloads run during the execution phase (generate traffic)
  • Expectations run during the evaluation phase (check health signals)
  • Each workload can attach its own expectations automatically
  • Expectations can also be added explicitly (see the sketch below)
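
For example, a single scenario can exercise both phases: the DSL workload attaches TxInclusionExpectation automatically, while consensus liveness is added explicitly. This is a sketch assembled from the examples later on this page:

use std::time::Duration;

use testing_framework_core::scenario::ScenarioBuilder;
use testing_framework_workflows::ScenarioBuilderExt;

// Execution phase: transactions drive traffic (and auto-attach their
// inclusion expectation); evaluation phase: consensus liveness is
// checked explicitly after the run window ends.
ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .wallets(20)
    .transactions_with(|tx| tx.rate(10).users(5))
    .expect_consensus_liveness()
    .with_run_duration(Duration::from_secs(60))
    .build();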

Built-in Workloads

1. Transaction Workload

Submits user-level transactions at a configurable rate to exercise transaction processing and inclusion paths.

Import:

use testing_framework_workflows::workloads::transaction::Workload;

Configuration

Parameter | Type          | Default     | Description
rate      | u64           | Required    | Transactions per block (not per second!)
users     | Option<usize> | All wallets | Number of distinct wallet accounts to use

DSL Usage

use testing_framework_workflows::ScenarioBuilderExt;

ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .wallets(20)  // Seed 20 wallet accounts
    .transactions_with(|tx| {
        tx.rate(10)   // 10 transactions per block
          .users(5)   // Use only 5 of the 20 wallets
    })
    .with_run_duration(Duration::from_secs(60))
    .build();

Direct Instantiation

use testing_framework_workflows::workloads::transaction;

let tx_workload = transaction::Workload::with_rate(10)
    .expect("transaction rate must be non-zero");

ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .wallets(20)
    .with_workload(tx_workload)
    .with_run_duration(Duration::from_secs(60))
    .build();

Prerequisites

  1. Wallet accounts must be seeded:

    .wallets(N)  // Before .transactions_with()

    The workload will fail during init() if no wallets are configured.

  2. Circuit artifacts must be available:

    • Automatically staged by scripts/run/run-examples.sh
    • Or manually via scripts/setup/setup-logos-blockchain-circuits.sh

Attached Expectation

TxInclusionExpectation — Verifies that submitted transactions were included in blocks.

What it checks:

  • At least N transactions were included on-chain (where N = rate × expected block count; see the arithmetic sketch below)
  • Uses BlockFeed to count transactions across all observed blocks
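
As a back-of-the-envelope check, the threshold arithmetic looks like this hypothetical helper (the names are illustrative, not the framework's API):

// Expected inclusions for a run: rate is per block, so multiply by the
// number of blocks the run should produce. E.g. 10 tx/block × 60 blocks
// = 600, matching the failure example below.
fn expected_tx_count(rate_per_block: u64, expected_blocks: u64) -> u64 {
    rate_per_block * expected_blocks
}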

Failure modes:

  • “Expected >= X transactions, observed Y” (Y < X)
  • Common causes: proof generation timeouts, node crashes, insufficient duration

What Failure Looks Like

Error: Expectation failed: TxInclusionExpectation
  Expected: >= 600 transactions (10 tx/block × 60 blocks)
  Observed: 127 transactions
  
  Possible causes:
  - Duration too short (nodes still syncing)
  - Node crashes (check logs for panics/OOM)
  - Wallet accounts not seeded (check topology config)

How to debug:

  1. Check logs for proof generation timing:
    grep "proof generation" $LOGOS_BLOCKCHAIN_LOG_DIR/*/*.log
    
  2. Increase duration: .with_run_duration(Duration::from_secs(120))
  3. Reduce rate: .rate(5) instead of .rate(10)

2. Chaos Workload (Random Restart)

Triggers controlled node restarts to test resilience and recovery behaviors.

Import:

use testing_framework_workflows::workloads::chaos::RandomRestartWorkload;

Configuration

Parameter       | Type     | Default  | Description
min_delay       | Duration | Required | Minimum time between restart attempts
max_delay       | Duration | Required | Maximum time between restart attempts
target_cooldown | Duration | Required | Minimum time before restarting the same node again
include_nodes   | bool     | Required | Whether to restart nodes

Usage

use std::time::Duration;

use testing_framework_core::scenario::ScenarioBuilder;
use testing_framework_workflows::{ScenarioBuilderExt, workloads::chaos::RandomRestartWorkload};

let scenario = ScenarioBuilder::topology_with(|t| {
    t.network_star().nodes(3)
})
.enable_node_control()  // REQUIRED for chaos
.with_workload(RandomRestartWorkload::new(
    Duration::from_secs(45),   // min_delay
    Duration::from_secs(75),   // max_delay
    Duration::from_secs(120),  // target_cooldown
    true,                      // include_nodes
))
.expect_consensus_liveness()
.with_run_duration(Duration::from_secs(180))
.build();

Prerequisites

  1. Node control must be enabled:

    .enable_node_control()

    This adds NodeControlCapability to the scenario.

  2. Runner must support node control:

    • Compose runner: Supported
    • Local runner: Not supported
    • K8s runner: Not yet implemented
  3. Sufficient topology:

    • For nodes: Need >1 node (workload skips if only 1)
  4. Realistic timing:

    • Total duration should allow several restart cycles: roughly 2-3× the max_delay, plus time for nodes to clear the cooldown
    • Example: max_delay=75s, cooldown=120s → duration >= 180s

Attached Expectation

None. You must explicitly add expectations (typically .expect_consensus_liveness()).

Why? Chaos workloads are about testing recovery under disruption. The appropriate expectation depends on what you’re testing:

  • Consensus survives restarts → .expect_consensus_liveness()
  • Height converges after chaos → Custom expectation checking BlockFeed (see the sketch below)
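
Such a check might look like the following sketch, which reuses the Expectation skeleton from Pattern 2 below and treats "enough total blocks" as a rough recovery proxy. RunContext, DynError, and block_feed() are used as shown there; the threshold field is an assumption:

use async_trait::async_trait;
use testing_framework_core::scenario::Expectation;
// RunContext and DynError as in Pattern 2 below (import paths assumed).

/// Hypothetical post-chaos check: fail if fewer than `min_blocks`
/// blocks were observed over the whole run.
struct PostChaosProgress {
    min_blocks: usize,
}

#[async_trait]
impl Expectation for PostChaosProgress {
    async fn evaluate(&self, ctx: &RunContext) -> Result<(), DynError> {
        let observed = ctx.block_feed()?.count();
        if observed < self.min_blocks {
            return Err(format!(
                "expected >= {} blocks after chaos, observed {}",
                self.min_blocks, observed
            )
            .into());
        }
        Ok(())
    }
}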

What Failure Looks Like

Error: Workload failed: chaos_restart
  Cause: NodeControlHandle not available
  
  Possible causes:
  - Forgot .enable_node_control() in scenario builder
  - Using local runner (doesn't support node control)
  - Using k8s runner (doesn't support node control)

Or:

Error: Expectation failed: ConsensusLiveness
  Expected: >= 20 blocks
  Observed: 8 blocks
  
  Possible causes:
  - Restart frequency too high (nodes can't recover)
  - Consensus timing too slow (increase duration)
  - Too many nodes restarted simultaneously
  - Nodes crashed after restart (check logs)

How to debug:

  1. Check restart events in logs:
    grep "restarting\|restart complete" $LOGOS_BLOCKCHAIN_LOG_DIR/*/*.log
    
  2. Verify node control is enabled:
    grep "NodeControlHandle" $LOGOS_BLOCKCHAIN_LOG_DIR/*/*.log
    
  3. Increase cooldown: Duration::from_secs(180)
  4. Increase duration: .with_run_duration(Duration::from_secs(300))

Built-in Expectations

1. Consensus Liveness

Verifies the system continues to produce blocks during the execution window.

Import:

use testing_framework_workflows::ScenarioBuilderExt;

DSL Usage

ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .expect_consensus_liveness()
    .with_run_duration(Duration::from_secs(60))
    .build();

What It Checks

  • At least N blocks were produced, where N is derived from duration / expected_block_time
  • Uses BlockFeed to count observed blocks
  • Compares against a minimum threshold (typically 50% of the theoretical max; see the sketch below)
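
In round numbers, the threshold resembles this hypothetical helper (the real logic lives in the framework; the 50% factor is the typical value mentioned above):

// Theoretical max = duration / block time; the liveness check accepts
// roughly half of that to tolerate startup and sync time.
fn min_expected_blocks(duration_secs: u64, expected_block_time_secs: u64) -> u64 {
    let theoretical_max = duration_secs / expected_block_time_secs;
    theoretical_max / 2
}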

Failure Modes

Error: Expectation failed: ConsensusLiveness
  Expected: >= 30 blocks
  Observed: 3 blocks
  
  Possible causes:
  - Nodes crashed or never started (check logs)
  - Consensus timing misconfigured (CONSENSUS_SLOT_TIME too high)
  - Insufficient nodes (need >= 2 for BFT consensus)
  - Duration too short (nodes still syncing)

How to Debug

  1. Check if nodes started:
    grep "node started\|listening on" $LOGOS_BLOCKCHAIN_LOG_DIR/*/*.log
    
  2. Check block production:
    grep "block.*height" $LOGOS_BLOCKCHAIN_LOG_DIR/node-*/*.log
    
  3. Check consensus participation:
    grep "consensus.*slot\|proposal" $LOGOS_BLOCKCHAIN_LOG_DIR/node-*/*.log
    
  4. Increase duration: .with_run_duration(Duration::from_secs(120))
  5. Check env vars: echo $CONSENSUS_SLOT_TIME $CONSENSUS_ACTIVE_SLOT_COEFF

2. Workload-Specific Expectations

Each workload automatically attaches its own expectation:

Workload    | Expectation            | What It Checks
Transaction | TxInclusionExpectation | Transactions were included in blocks
Chaos       | (None)                 | Add .expect_consensus_liveness() explicitly

These expectations are added automatically when using the DSL (.transactions_with()).


Configuration Quick Reference

Transaction Workload

.wallets(20)
.transactions_with(|tx| tx.rate(10).users(5))

What    | Value | Unit
Rate    | 10    | tx/block
Users   | 5     | wallet accounts
Wallets | 20    | total seeded

Chaos Workload

.enable_node_control()
.with_workload(RandomRestartWorkload::new(
    Duration::from_secs(45),   // min
    Duration::from_secs(75),   // max
    Duration::from_secs(120),  // cooldown
    true,  // nodes
))

Common Patterns

Pattern 1: Multiple Workloads

ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .wallets(20)
    .transactions_with(|tx| tx.rate(5).users(10))
    .expect_consensus_liveness()
    .with_run_duration(Duration::from_secs(120))
    .build();

All workloads run concurrently. Expectations for each workload run after the execution window ends.
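
For instance, the transaction and chaos workloads from earlier sections can run in the same scenario (a sketch assembled from those examples):

use std::time::Duration;

use testing_framework_core::scenario::ScenarioBuilder;
use testing_framework_workflows::{ScenarioBuilderExt, workloads::chaos::RandomRestartWorkload};

ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .wallets(20)
    .transactions_with(|tx| tx.rate(5).users(10))  // traffic + TxInclusionExpectation
    .enable_node_control()                         // required for the chaos workload
    .with_workload(RandomRestartWorkload::new(
        Duration::from_secs(45),   // min_delay
        Duration::from_secs(75),   // max_delay
        Duration::from_secs(120),  // target_cooldown
        true,                      // include_nodes
    ))
    .expect_consensus_liveness()
    .with_run_duration(Duration::from_secs(180))
    .build();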

Pattern 2: Custom Expectation

use async_trait::async_trait;
use testing_framework_core::scenario::Expectation;
// RunContext and DynError import paths are assumed; adjust to your crate layout.

struct MyCustomExpectation;

#[async_trait]
impl Expectation for MyCustomExpectation {
    async fn evaluate(&self, ctx: &RunContext) -> Result<(), DynError> {
        // Access BlockFeed, metrics, topology, etc.
        let block_count = ctx.block_feed()?.count();
        if block_count < 10 {
            return Err("Not enough blocks".into());
        }
        Ok(())
    }
}

ScenarioBuilder::topology_with(|t| t.network_star().nodes(3))
    .with_expectation(MyCustomExpectation)
    .with_run_duration(Duration::from_secs(60))
    .build();

Debugging Checklist

When a workload or expectation fails:

  1. Check logs: $LOGOS_BLOCKCHAIN_LOG_DIR/*/ or docker compose logs or kubectl logs
  2. Check prerequisites: wallets, node control, circuits
  3. Increase duration: Double the run duration and retry
  4. Reduce rates: Halve the traffic rates and retry
  5. Check metrics: Prometheus queries for block height and tx count
  6. Reproduce locally: Use local runner for faster iteration

See Also