
Streaming Inference Configuration

This page is the cross-SDK configuration overview for streaming inference. It explains the shared concepts: when to stream, how to choose chunk sizes, and how to aggregate chunk predictions.
For implementation-specific APIs, see the SDK pages listed under Next Read below.

Streaming vs Batch

Use batch inference when full trials are already available and you are doing offline evaluation, model validation, or analytics. Use streaming inference when feature chunks arrive over time and the application needs low-latency updates before a complete trial has finished.
EEG stream -> preprocessing -> feature chunks -> streaming session -> chunk posteriors -> final decision

Core Configuration

Every streaming setup needs:
  • sampling_rate: acquisition rate in Hz.
  • chunk_size: number of samples per chunk.
  • paradigm: task type such as motor imagery, P300, or SSVEP.
  • feature_type: feature representation such as CSP, bandpower, or ERP amplitude.
  • n_features: feature count per chunk.
  • n_classes: number of output classes.
  • temporal_aggregation: how to reduce feature time structure when required.
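The settings above can be collected into a simple configuration object. The `StreamingConfig` class below is an illustrative sketch, not a Nimbus SDK type; the field names mirror the list above, and the example values are assumptions for a 250 Hz motor-imagery setup.

```python
from dataclasses import dataclass

@dataclass
class StreamingConfig:
    # Hypothetical container for the core streaming settings listed above.
    sampling_rate: int         # acquisition rate in Hz
    chunk_size: int            # samples per chunk
    paradigm: str              # e.g. "motor_imagery", "p300", "ssvep"
    feature_type: str          # e.g. "csp", "bandpower", "erp_amplitude"
    n_features: int            # feature count per chunk
    n_classes: int             # number of output classes
    temporal_aggregation: str  # e.g. "mean" over the chunk's time axis

# Example: 250 Hz acquisition with 0.5 s chunks (125 samples per chunk).
config = StreamingConfig(
    sampling_rate=250,
    chunk_size=125,
    paradigm="motor_imagery",
    feature_type="csp",
    n_features=8,
    n_classes=2,
    temporal_aggregation="mean",
)
```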

Chunk Size Guidelines

Chunk duration | Typical use                                  | Trade-off
0.25-0.5 s     | Fast feedback, games, exploratory interfaces | Lower latency, less evidence per chunk
0.5-1.0 s      | Most real-time BCI systems                   | Balanced responsiveness and confidence
1.0-2.0 s      | Medical or high-stakes decisions             | Higher confidence, slower feedback

Start with 0.5-1.0 s chunks for motor imagery and adjust based on confidence, latency, and user experience.
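Since chunk_size is specified in samples, a chunk duration from the table above must be converted using the sampling rate. A minimal sketch (the helper name is illustrative, not an SDK function):

```python
def chunk_samples(sampling_rate_hz: float, chunk_duration_s: float) -> int:
    """Convert a chunk duration in seconds to a sample count."""
    return round(sampling_rate_hz * chunk_duration_s)

# At 250 Hz, the recommended 0.5-1.0 s starting range maps to:
assert chunk_samples(250, 0.5) == 125
assert chunk_samples(250, 1.0) == 250
```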

Aggregation Methods

Streaming produces one posterior per chunk. Trial-level decisions combine those chunk posteriors.
Method         | Use when
weighted_vote  | You want confidence-weighted decisions across chunks
posterior_mean | You want smooth posterior averaging across chunks
max_confidence | You trust the single most confident chunk
unanimous      | You need conservative decisions only when chunks agree
Use weighted_vote as the default. Switch to posterior_mean when you want smoother probabilities, or max_confidence when the task has short high-signal windows.
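The four strategies can be sketched over a matrix of chunk posteriors. The method names follow the table above, but the implementation below is an illustrative reference, not the SDK's internal logic:

```python
import numpy as np

def aggregate(posteriors, method="weighted_vote"):
    """Combine per-chunk posteriors (n_chunks x n_classes) into a
    trial-level class index; `unanimous` returns None on disagreement."""
    p = np.asarray(posteriors, dtype=float)
    if method == "posterior_mean":
        # Smooth averaging: mean posterior across chunks, then argmax.
        return int(p.mean(axis=0).argmax())
    if method == "max_confidence":
        # Decision of the single most confident chunk.
        return int(p[p.max(axis=1).argmax()].argmax())
    if method == "weighted_vote":
        # Each chunk votes for its argmax, weighted by its confidence.
        weights = p.max(axis=1)
        votes = p.argmax(axis=1)
        scores = np.bincount(votes, weights=weights, minlength=p.shape[1])
        return int(scores.argmax())
    if method == "unanimous":
        votes = p.argmax(axis=1)
        return int(votes[0]) if np.all(votes == votes[0]) else None
    raise ValueError(f"unknown method: {method}")

# Three chunks, two classes: chunks 1-2 favor class 0, chunk 3 class 1.
chunks = [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]
```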

Quality Gates

Streaming systems should monitor:
  • confidence (max posterior probability)
  • entropy (prediction uncertainty)
  • class balance over recent trials
  • rejection rate
  • per-chunk and per-trial latency
For Python rejection and quality APIs, see Python SDK API Reference. For end-to-end real-time setup, see Real-Time BCI Setup.
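The first two gates, confidence and entropy, can be checked per chunk as follows. This is a sketch: the function name and the threshold values are illustrative placeholders, not SDK defaults.

```python
import math

def passes_quality_gate(posterior, min_confidence=0.6, max_entropy_bits=0.9):
    """Accept a chunk posterior only if it is confident enough
    (max probability) and certain enough (low Shannon entropy)."""
    confidence = max(posterior)
    entropy = -sum(p * math.log2(p) for p in posterior if p > 0.0)
    return confidence >= min_confidence and entropy <= max_entropy_bits

assert passes_quality_gate([0.9, 0.1])      # confident, ~0.47 bits
assert not passes_quality_gate([0.5, 0.5])  # uniform: low confidence, 1 bit
```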

Next Read

Python Streaming Inference

Python StreamingSession examples and STS state handling.

Julia Streaming Inference

Julia streaming API for local chunk processing.

Real-Time BCI Setup

Hardware, LSL, BrainFlow, and acquisition-loop guidance.

Batch Processing

Offline trial processing and diagnostics.