# Parallel Processing
Synode offers two parallelism strategies: lanes for concurrent async execution in the main thread, and worker threads for multi-core parallelism.
## Lanes
Lanes split users across concurrent async tasks within a single thread. Each lane gets an equal share of users and processes them independently with full context isolation.
```ts
import { generate } from '@synode/core';

await generate(journey, {
  users: 10000,
  lanes: 4, // 4 concurrent lanes, ~2500 users each
});
```

Lanes use `Promise.all` internally. All lanes share the same event loop and interleave their I/O operations, which makes them a good fit for I/O-bound workloads (file/HTTP adapters).
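The fan-out can be sketched independently of Synode's internals: split the user range into contiguous per-lane slices and run one async task per slice with `Promise.all`. The names `splitIntoLanes` and `runLanes` are illustrative, not Synode APIs.

```ts
// Split a user count into contiguous per-lane ranges, spreading any
// remainder across the first lanes (illustrative, not Synode's code).
function splitIntoLanes(users: number, lanes: number): Array<{ start: number; end: number }> {
  const base = Math.floor(users / lanes);
  const extra = users % lanes;
  const ranges: Array<{ start: number; end: number }> = [];
  let cursor = 0;
  for (let i = 0; i < lanes; i++) {
    const size = base + (i < extra ? 1 : 0);
    ranges.push({ start: cursor, end: cursor + size });
    cursor += size;
  }
  return ranges;
}

// Each lane walks its slice sequentially; the lanes themselves run
// concurrently on the same event loop via Promise.all.
async function runLanes(
  users: number,
  lanes: number,
  processUser: (id: number) => Promise<void>,
): Promise<void> {
  await Promise.all(
    splitIntoLanes(users, lanes).map(async ({ start, end }) => {
      for (let id = start; id < end; id++) await processUser(id);
    }),
  );
}
```

Because the slices are plain awaited loops, a slow I/O call in one lane yields the event loop to the others rather than blocking them.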
### When to Use Lanes
- Moderate user counts (1k-100k)
- I/O-bound adapters (file writes, HTTP calls)
- Simple setup -- no separate module needed
- Datasets shared in-memory across all lanes
## Worker Threads
Worker threads spawn separate V8 isolates for true multi-core parallelism. Each worker loads journeys from a module file and processes its assigned user range.
```ts
await generate([], {
  users: 100000,
  workerModule: './config.ts',
  workers: 4,
});
```

When `workerModule` is set, the journey array passed to `generate` is ignored -- workers load journeys from the module.
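The underlying mechanics can be sketched with Node's `worker_threads`: spawn one worker per core and hand each a contiguous user range via `workerData`. This is a standalone illustration of the pattern, not Synode's implementation -- `spawnWorkers` and the inline worker body are ours.

```ts
import { Worker } from 'node:worker_threads';

// Fan a user range out across worker threads; each worker reports back
// how many users it was assigned (a stand-in for real processing).
function spawnWorkers(users: number, workers: number): Promise<number[]> {
  const per = Math.ceil(users / workers);
  const jobs: Promise<number>[] = [];
  for (let i = 0; i < workers; i++) {
    const start = i * per;
    const end = Math.min(start + per, users);
    jobs.push(
      new Promise((resolve, reject) => {
        const w = new Worker(
          // Worker body runs in a separate V8 isolate.
          `const { workerData, parentPort } = require('node:worker_threads');
           parentPort.postMessage(workerData.end - workerData.start);`,
          { eval: true, workerData: { start, end } },
        );
        w.once('message', resolve);
        w.once('error', reject);
      }),
    );
  }
  return Promise.all(jobs);
}
```

Because each worker is a separate isolate, it cannot see closures from the main thread -- which is why Synode requires journeys to come from a loadable module file rather than the in-memory journey array.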
### Worker Module Contract
The worker module must export a `journeys` array. All other exports are optional.
```ts
// config.ts
import {
  defineJourney,
  defineAdventure,
  defineAction,
  definePersona,
  weighted,
} from '@synode/core';
import type { Journey, PersonaDefinition } from '@synode/core';

export const persona: PersonaDefinition = definePersona({
  id: 'user',
  name: 'User',
  attributes: { locale: weighted({ en: 0.7, de: 0.3 }) },
});

export const journeys: Journey[] = [
  defineJourney({
    id: 'browse',
    name: 'Browse',
    adventures: [
      defineAdventure({
        id: 'view',
        name: 'View Products',
        actions: [
          defineAction({ id: 'page-view', name: 'page_view', fields: { url: '/products' } }),
        ],
      }),
    ],
  }),
];
```

### Worker Module Exports
| Export | Type | Required | Description |
|---|---|---|---|
| `journeys` | `Journey[]` | Yes | Journey definitions to execute |
| `persona` | `PersonaDefinition` | No | Persona for user generation |
| `datasets` | `DatasetDefinition[]` | No | Dataset definitions to generate per worker |
| `preloadedDatasets` | `Dataset[]` | No | Pre-built datasets to inject |
### Worker Count
The default is `os.cpus().length`. Override it with `workers`. Maximum: 1024.
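The defaulting and clamping rule reads as a one-liner; `resolveWorkerCount` and `MAX_WORKERS` below are illustrative names, not Synode internals.

```ts
import * as os from 'node:os';

// Documented ceiling on the worker count.
const MAX_WORKERS = 1024;

// Resolve the effective worker count: explicit value if given,
// otherwise the machine's CPU count, clamped to [1, MAX_WORKERS].
function resolveWorkerCount(requested?: number): number {
  const count = requested ?? os.cpus().length;
  return Math.min(Math.max(1, count), MAX_WORKERS);
}
```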
```ts
await generate([], {
  users: 50000,
  workerModule: './config.ts',
  workers: 8,
});
```

### Dataset Handling with Workers
Datasets passed via `generate({ datasets })` are pre-generated in the main thread and serialized to each worker. Datasets exported from the worker module are generated independently in each worker.
For large shared datasets, pre-generate in the main thread:
```ts
await generate([], {
  users: 50000,
  workerModule: './config.ts',
  workers: 8,
  datasets: [largeProductCatalog], // generated once, shared with all workers
});
```

## Date Ranges
Assign each user a random start time within a date range. All event timestamps for that user flow forward from their start time.
```ts
await generate(journey, {
  users: 10000,
  lanes: 4,
  startDate: new Date('2026-01-01'),
  endDate: new Date('2026-03-31'),
});
```

`startDate` and `endDate` must be provided together, and `startDate` must be before `endDate`.
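The per-user assignment described above can be sketched as drawing one uniform random instant from the range; `randomStartTime` is an illustrative helper, not part of Synode's API.

```ts
// Pick a uniformly random start time in [startDate, endDate) for one user;
// that user's event timestamps then advance forward from this instant.
function randomStartTime(startDate: Date, endDate: Date): Date {
  if (startDate >= endDate) throw new Error('startDate must be before endDate');
  const span = endDate.getTime() - startDate.getTime();
  return new Date(startDate.getTime() + Math.floor(Math.random() * span));
}
```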
## Debug Telemetry
Enable `debug` to collect detailed metrics about the generation run. A JSON report is saved to `telemetryPath`.
```ts
await generate(journey, {
  users: 5000,
  lanes: 4,
  debug: true,
  telemetryPath: './telemetry.json',
});
```

The default telemetry path is `./telemetry-report.json`.
The telemetry report includes:
- Total events generated
- Users started/completed
- Duration and throughput
- Event validation summary (if schemas configured)
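One way to type the report on the consuming side is an interface mirroring the bullets above. The field names here are hypothetical -- check an actual report file for the exact shape Synode writes.

```ts
// Hypothetical shape of the telemetry JSON; field names are ours, not
// guaranteed to match Synode's output.
interface TelemetryReport {
  totalEvents: number;
  usersStarted: number;
  usersCompleted: number;
  durationMs: number;
  eventsPerSecond: number;
  // Present only when event schemas are configured.
  validation?: { valid: number; invalid: number };
}

const example: TelemetryReport = {
  totalEvents: 125000,
  usersStarted: 5000,
  usersCompleted: 5000,
  durationMs: 8000,
  eventsPerSecond: 15625,
};
```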
## Choosing a Strategy
| Scenario | Strategy | Config |
|---|---|---|
| < 10k users, simple setup | Sequential | `lanes: 1` (default) |
| 10k-100k users, I/O-bound | Lanes | `lanes: 4-8` |
| > 100k users, CPU-bound | Workers | `workerModule` + `workers: N` |
| Large datasets, many users | Workers + shared datasets | `workerModule` + `datasets` |
