# Streaming Inference Configuration
This page is the cross-SDK configuration overview for streaming inference. It explains the shared concepts: when to stream, how to choose chunk sizes, and how to aggregate chunk predictions. For implementation-specific APIs, use the SDK pages:

- Python: Python Streaming Inference documents `StreamingSession` and `StreamingSessionSTS`.
- Julia: Julia Streaming Inference documents `init_streaming`, `process_chunk`, and `finalize_trial`.
## Streaming vs Batch
Use batch inference when full trials are already available and you are doing offline evaluation, model validation, or analytics. Use streaming inference when feature chunks arrive over time and the application needs low-latency updates before a complete trial has finished.

## Core Configuration

Every streaming setup needs:

- `sampling_rate`: acquisition rate in Hz.
- `chunk_size`: number of samples per chunk.
- `paradigm`: task type such as motor imagery, P300, or SSVEP.
- `feature_type`: feature representation such as CSP, bandpower, or ERP amplitude.
- `n_features`: feature count per chunk.
- `n_classes`: number of output classes.
- `temporal_aggregation`: how to reduce feature time structure when required.
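As a concrete sketch, the fields above can be collected into a plain configuration dict. The field names come from this page; the specific values and the dict representation are illustrative, not an SDK API:

```python
# Hypothetical streaming configuration; field names follow the list above.
config = {
    "sampling_rate": 250,         # acquisition rate in Hz
    "chunk_size": 125,            # samples per chunk (0.5 s at 250 Hz)
    "paradigm": "motor_imagery",
    "feature_type": "csp",
    "n_features": 8,
    "n_classes": 2,
    "temporal_aggregation": "mean",
}

# Derived chunk duration, useful for sanity-checking against the table below.
chunk_seconds = config["chunk_size"] / config["sampling_rate"]
print(chunk_seconds)  # 0.5
```

Keeping `chunk_size` in samples while reasoning in seconds is a common source of off-by-a-factor bugs, so deriving one from the other in one place is worth the extra line.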
## Chunk Size Guidelines
| Chunk duration | Typical use | Trade-off |
|---|---|---|
| 0.25-0.5s | Fast feedback, games, exploratory interfaces | Lower latency, less evidence per chunk |
| 0.5-1.0s | Most real-time BCI systems | Balanced responsiveness and confidence |
| 1.0-2.0s | Medical or high-stakes decisions | Higher confidence, slower feedback |
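A minimal helper (illustrative, not part of either SDK) for converting a target chunk duration from the table into a sample count at a given sampling rate:

```python
def chunk_samples(sampling_rate_hz: float, chunk_duration_s: float) -> int:
    """Samples per chunk for a target duration, rounded to the nearest sample."""
    return round(sampling_rate_hz * chunk_duration_s)

# 0.5 s chunks at 250 Hz, 1.0 s chunks at 512 Hz.
print(chunk_samples(250, 0.5))  # 125
print(chunk_samples(512, 1.0))  # 512
```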
## Aggregation Methods
Streaming produces one posterior per chunk. Trial-level decisions combine those chunk posteriors.

| Method | Use when |
|---|---|
| `weighted_vote` | You want confidence-weighted decisions across chunks |
| `posterior_mean` | You want smooth posterior averaging across chunks |
| `max_confidence` | You trust the single most confident chunk |
| `unanimous` | You need conservative decisions only when chunks agree |
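The four methods can be sketched in plain Python over a list of per-chunk posteriors (each a list of class probabilities). These are standalone illustrations of the aggregation rules named above, not the SDK implementations:

```python
def posterior_mean(posteriors):
    """Element-wise average of class posteriors across chunks."""
    n = len(posteriors)
    return [sum(p[i] for p in posteriors) / n for i in range(len(posteriors[0]))]

def weighted_vote(posteriors):
    """Each chunk votes for its argmax class, weighted by its confidence."""
    scores = [0.0] * len(posteriors[0])
    for p in posteriors:
        conf = max(p)
        scores[p.index(conf)] += conf
    return scores.index(max(scores))

def max_confidence(posteriors):
    """Decide from the single most confident chunk."""
    best = max(posteriors, key=max)
    return best.index(max(best))

def unanimous(posteriors):
    """Return the class only if every chunk agrees; None means reject."""
    votes = {p.index(max(p)) for p in posteriors}
    return votes.pop() if len(votes) == 1 else None

chunks = [[0.7, 0.3], [0.6, 0.4], [0.9, 0.1]]
print(weighted_vote(chunks))   # 0
print(max_confidence(chunks))  # 0
print(unanimous(chunks))       # 0
```

Note the differing failure modes: `max_confidence` is vulnerable to a single overconfident artifact chunk, while `unanimous` trades throughput (more rejections) for reliability.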
## Quality Gates
Streaming systems should monitor:

- confidence (max posterior probability)
- entropy (prediction uncertainty)
- class balance over recent trials
- rejection rate
- per-chunk and per-trial latency
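The first two gates can be computed directly from a chunk posterior. A minimal sketch using only the standard library (the function names are for illustration):

```python
import math

def confidence(posterior):
    """Max posterior probability: how strongly the chunk favors one class."""
    return max(posterior)

def entropy(posterior):
    """Shannon entropy in bits; higher means a more uncertain prediction."""
    return -sum(p * math.log2(p) for p in posterior if p > 0)

p = [0.9, 0.05, 0.05]
print(confidence(p))         # 0.9
print(round(entropy(p), 3))  # 0.569
```

A typical gate rejects a chunk when confidence falls below a threshold or entropy rises above one, then feeds the rejection rate back into the monitoring above.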
## Next Read

- Python Streaming Inference: `StreamingSession` examples and STS state handling.
- Julia Streaming Inference: Julia streaming API for local chunk processing.
- Real-Time BCI Setup: hardware, LSL, BrainFlow, and acquisition-loop guidance.
- Batch Processing: offline trial processing and diagnostics.