Julia Streaming Inference
Streaming inference processes preprocessed feature chunks as they arrive from your BCI pipeline. It runs locally in NimbusSDK.jl; no network call is made during chunk inference.
For Python streaming, see Python SDK Streaming Inference. For cross-SDK configuration guidance, see Streaming Inference Configuration.
When To Use It
Use Julia streaming when your application needs low-latency feedback before a full trial is complete:
- Real-time motor imagery control
- Neurofeedback and training applications
- Assistive interfaces with confidence-based rejection
- Long-running monitoring sessions
Streaming inference is supported for the NimbusLDA, NimbusQDA, and NimbusProbit model types.
Streaming Flow
Each trial is processed incrementally: create a streaming session, feed chunks to process_chunk() as they arrive, then finalize the trial to obtain an aggregated prediction.
Basic Setup
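A minimal setup sketch. This page does not show the loading and session API, so the names load_model and StreamingSession (and their arguments) are illustrative assumptions; consult the Julia SDK API Reference for the actual signatures.

```julia
using NimbusSDK

# Load a calibrated model (NimbusLDA, NimbusQDA, or NimbusProbit).
# `load_model` is an assumed name, for illustration only.
model = load_model("motor_imagery_lda.nimbus")

# Create a streaming session that accumulates evidence across chunks.
# `StreamingSession` and its keywords are likewise illustrative.
session = StreamingSession(model;
    n_features = 16,              # must match the calibrated feature dimension
    aggregation = :weighted_vote)
```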
Process Chunks
Each chunk should be shaped (n_features, chunk_size).
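A sketch of the per-chunk loop. process_chunk is named in this page, but its exact signature and the result fields used below (prediction, confidence) are assumptions for illustration:

```julia
# Feed chunks as they arrive from the preprocessing pipeline.
# Shapes follow the (n_features, chunk_size) convention above.
for chunk in chunk_stream                  # any iterable of Float32 matrices
    @assert size(chunk, 1) == 16           # validate shape before inference
    result = process_chunk(session, chunk) # local, incremental inference
    # Assumed result fields, shown for illustration:
    println("chunk prediction: ", result.prediction,
            " (confidence ", result.confidence, ")")
end
```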
Finalize A Trial
After enough chunks have been processed for a trial, aggregate them into a final prediction. Four aggregation strategies are available:
- :weighted_vote: weight chunk predictions by confidence.
- :max_confidence: use the prediction from the most confident chunk.
- :posterior_mean: average chunk-level posterior distributions.
- :unanimous: require all chunks to agree, with fallback behavior when they do not.
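To make the :weighted_vote semantics concrete, here is a small pure-Julia sketch of that strategy (not the SDK's internal implementation): each chunk votes for its predicted class with a weight equal to its confidence, and the class with the largest total weight wins.

```julia
# Each chunk's prediction votes with weight = its confidence;
# the class with the highest summed confidence wins the trial.
function weighted_vote(predictions::Vector{Int}, confidences::Vector{Float64})
    totals = Dict{Int,Float64}()
    for (p, c) in zip(predictions, confidences)
        totals[p] = get(totals, p, 0.0) + c
    end
    return argmax(totals)   # key (class) with the largest value (Julia ≥ 1.7)
end

weighted_vote([1, 2, 1], [0.9, 0.6, 0.8])   # class 1 wins: 1.7 vs 0.6
```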
Chunk Size Guidance
| Paradigm | Typical Chunk Size | Notes |
|---|---|---|
| Motor imagery | 250-500 samples | Balances latency and accuracy at 250 Hz. |
| P300 | 100-200 samples | Short windows for event-related responses. |
| SSVEP | 500-1000 samples | Longer windows help frequency estimates. |
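The chunk sizes above translate directly into per-chunk latency: a chunk of chunk_size samples at sampling rate fs spans chunk_size / fs seconds, which is the minimum delay before that chunk can contribute feedback. A one-line helper makes the tradeoff explicit:

```julia
# Duration of one chunk in seconds: the floor on feedback latency it adds.
chunk_latency_s(chunk_size, fs) = chunk_size / fs

chunk_latency_s(250, 250)   # motor imagery at 250 Hz -> 1.0 s
chunk_latency_s(100, 250)   # short P300 window       -> 0.4 s
```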
Production Checklist
- Validate each chunk shape before calling process_chunk().
- Warm up the model with a dummy chunk before the user session.
- Use a confidence threshold for high-stakes actions.
- Keep preprocessing and feature extraction deterministic between calibration and deployment.
- Reset or recreate sessions between independent trials when you do not want chunk history to carry over.
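The first two checklist items can be sketched as follows. safe_process_chunk is a hypothetical wrapper (only process_chunk comes from this page; everything else is illustrative), and the warm-up call matters in Julia because the first invocation of a method pays JIT compilation cost:

```julia
# Validate chunk shape before inference; fail fast on mismatches.
function safe_process_chunk(session, chunk::AbstractMatrix, n_features::Int)
    size(chunk, 1) == n_features ||
        error("expected $n_features features, got $(size(chunk, 1))")
    return process_chunk(session, chunk)
end

# Warm-up: run one dummy chunk before the user session starts so the
# first real chunk does not pay Julia's first-call compilation latency.
safe_process_chunk(session, zeros(Float32, 16, 250), 16)
```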
Next Read
Real-time Setup
Acquisition and hardware setup guidance.
Julia SDK API Reference
Complete Julia SDK function reference.
Batch Processing
Offline trial-level inference patterns.
Preprocessing Requirements
Feature preparation requirements before inference.