Real-Time BCI Setup
This page covers hardware and acquisition setup: how EEG gets from an amplifier into the feature chunks that Nimbus can classify. For SDK streaming APIs, see Streaming Inference Configuration, Python Streaming Inference, and Julia Streaming Inference.
Real-Time Pipeline
The pipeline runs amplifier → acquisition buffer → preprocessing and feature extraction → Nimbus inference → application command; the Latency Budget section below breaks these stages down.
Acquisition Options
| Option | Best for | Notes |
|---|---|---|
| Lab Streaming Layer (LSL) | Research labs and multi-device synchronization | Common for EEG streams and markers |
| BrainFlow | Consumer and research EEG hardware | Unified API for many boards |
| Vendor SDK | Production devices | Lowest integration overhead when the vendor API is stable |
| File replay | Testing and demos | Useful for deterministic latency tests; see the sketch below |
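For file replay, a small driver that feeds a recorded array at the amplifier's pace is enough to make latency tests deterministic. A minimal sketch, assuming the recording is an (n_channels, n_samples) array saved with np.save; the file name, sampling rate, and chunk size are placeholders, and `process` stands in for your own preprocessing-plus-inference callback:

```python
import time
import numpy as np

def replay_file(path, sampling_rate, chunk_size, process):
    """Replay a recorded (n_channels, n_samples) array at real-time pace."""
    data = np.load(path)
    chunk_period = chunk_size / sampling_rate  # seconds of signal per chunk
    for start in range(0, data.shape[1] - chunk_size + 1, chunk_size):
        t0 = time.perf_counter()
        process(data[:, start:start + chunk_size])  # hand the chunk to the pipeline
        # Sleep off the remainder of the chunk period to mimic amplifier pacing.
        time.sleep(max(0.0, chunk_period - (time.perf_counter() - t0)))

# Example: replay a 250 Hz recording in 500 ms chunks through a stub pipeline.
# replay_file("calibration.npy", sampling_rate=250, chunk_size=125, process=lambda c: None)
```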
Setup Checklist
- Confirm amplifier sampling rate and channel order.
- Synchronize event markers with EEG samples.
- Apply artifact handling and feature extraction in the same way as calibration.
- Emit chunks with a stable shape: `(n_features, chunk_size)`.
- Use the same normalization parameters from training (see the sketch after this checklist).
- Measure latency around preprocessing and SDK inference separately.
- Add rejection thresholds for low-confidence or high-entropy decisions.
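A minimal sketch of the chunk-shaping, normalization, and rejection items above, assuming samples arrive as (chunk_size, n_features) arrays and that per-feature mean and standard deviation were saved during calibration; the helper names and threshold values are placeholders, not Nimbus defaults:

```python
import numpy as np

def shape_chunk(samples, n_features):
    """Transpose pulled samples into the stable (n_features, chunk_size) layout."""
    chunk = np.asarray(samples, dtype=np.float64).T
    assert chunk.shape[0] == n_features, "channel count or order mismatch"
    return chunk

def normalize(chunk, train_mean, train_std):
    """Apply the per-feature normalization parameters saved during calibration."""
    return (chunk - train_mean[:, None]) / train_std[:, None]

def should_reject(probs, min_confidence=0.6, max_entropy=0.9):
    """Reject low-confidence or high-entropy decisions (thresholds are placeholders)."""
    probs = np.asarray(probs, dtype=np.float64)
    entropy = -np.sum(probs * np.log(probs + 1e-12)) / np.log(len(probs))  # normalized to [0, 1]
    return probs.max() < min_confidence or entropy > max_entropy
```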
LSL Pattern
Use LSL when you need synchronized EEG and marker streams.
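A minimal pylsl sketch, assuming an EEG stream and a marker stream are already published on the network under the conventional "EEG" and "Markers" stream types; the chunk size and timeouts are placeholders:

```python
import numpy as np
from pylsl import StreamInlet, resolve_byprop

# Resolve the EEG and marker streams by type (raises IndexError if none are found in time).
eeg_inlet = StreamInlet(resolve_byprop("type", "EEG", timeout=10.0)[0])
marker_inlet = StreamInlet(resolve_byprop("type", "Markers", timeout=10.0)[0])

# Clock offset between this machine and the EEG source, for aligning markers to samples.
offset = eeg_inlet.time_correction()

while True:
    # Pull a fixed-size chunk of EEG samples along with their LSL timestamps.
    samples, timestamps = eeg_inlet.pull_chunk(timeout=1.0, max_samples=32)
    # Drain any markers that arrived since the last pull (non-blocking).
    markers, marker_times = marker_inlet.pull_chunk(timeout=0.0)
    if samples:
        chunk = np.asarray(samples).T  # (n_channels, chunk_size)
        # ...preprocess, normalize, and pass the chunk to Nimbus inference here
```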
BrainFlow Pattern
Use BrainFlow when you want one API across supported EEG boards.
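A minimal BrainFlow sketch using the synthetic board, which streams generated data and is handy for testing without hardware; for a real device, substitute your board ID and fill in the connection fields on `BrainFlowInputParams`:

```python
import time
from brainflow.board_shim import BoardShim, BoardIds, BrainFlowInputParams

board_id = BoardIds.SYNTHETIC_BOARD  # replace with your board's ID
params = BrainFlowInputParams()      # set serial_port, ip_address, etc. for real hardware

eeg_channels = BoardShim.get_eeg_channels(board_id)
sampling_rate = BoardShim.get_sampling_rate(board_id)

board = BoardShim(board_id, params)
board.prepare_session()
board.start_stream()
try:
    time.sleep(1.0)  # let the ring buffer fill before reading
    # Grab the most recent second of data without removing it from the buffer.
    data = board.get_current_board_data(sampling_rate)
    eeg = data[eeg_channels, :]  # rows are channels, columns are samples
finally:
    board.stop_stream()
    board.release_session()
```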
Latency Budget
| Stage | Target latency |
|---|---|
| Acquisition buffer | 1-10 ms |
| Preprocessing and feature extraction | 5-50 ms, depending on feature type |
| Nimbus inference | 10-25 ms typical |
| Application command | 1-10 ms |
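To check a session against this budget, time the stages separately rather than end to end. A minimal sketch, where `preprocess` and `infer` are placeholders for your feature extraction and the Nimbus SDK call:

```python
import time

def timed(fn, *args):
    """Run fn and return its result plus elapsed wall-clock time in milliseconds."""
    t0 = time.perf_counter()
    result = fn(*args)
    return result, (time.perf_counter() - t0) * 1000.0

# Per chunk:
# features, preprocess_ms = timed(preprocess, raw_chunk)
# decision, inference_ms = timed(infer, features)
# then log both numbers alongside the decision (see Production Guardrails below)
```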
Production Guardrails
- Warm up the model and preprocessing code before the session starts (see the sketch after this list).
- Use fixed-size buffers to avoid unbounded memory growth.
- Reset streaming sessions between trials.
- Log per-stage latency, confidence, entropy, and rejection decisions.
- Keep calibration preprocessing and online preprocessing identical.
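A sketch of the warm-up and fixed-size-buffer guardrails in plain Python; `preprocess` and `infer` are placeholders for your own pipeline and the Nimbus SDK call, and the buffer sizes are arbitrary:

```python
import collections
import numpy as np

def warm_up(preprocess, infer, n_features, chunk_size, n_passes=5):
    """Push a few zero chunks through the full pipeline so caches are hot before trial 1."""
    dummy = np.zeros((n_features, chunk_size))
    for _ in range(n_passes):
        infer(preprocess(dummy))

# Fixed-size buffers: old entries are dropped automatically, so memory stays bounded.
recent_decisions = collections.deque(maxlen=256)
chunk_log = collections.deque(maxlen=1024)

def log_chunk(decision, confidence, entropy, rejected, preprocess_ms, inference_ms):
    """Record the per-chunk quantities the guardrails ask you to log."""
    recent_decisions.append(decision)
    chunk_log.append({
        "decision": decision,
        "confidence": confidence,
        "entropy": entropy,
        "rejected": rejected,
        "preprocess_ms": preprocess_ms,
        "inference_ms": inference_ms,
    })
```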
Next Read
- Streaming Inference Configuration: choose chunk sizes and aggregation methods.
- Python Streaming Inference: use `StreamingSession` in Python.
- Julia Streaming Inference: use `init_streaming`, `process_chunk`, and `finalize_trial` in Julia.
- Preprocessing Requirements: prepare feature-space inputs correctly.