Streaming Inference for BCI
Streaming inference is essential for real-time BCI applications. Both the Python and Julia SDKs support chunk-by-chunk processing for responsive brain-computer interfaces with sub-25 ms latency per chunk.

Both SDKs support the Bayesian models Bayesian LDA, Bayesian QDA, and Bayesian Softmax; Bayesian STS is available in the Python SDK only. All of them support streaming inference with similar latencies.

Python SDK users: see Python Streaming Inference for Python-specific detailed examples and patterns.
Streaming vs Batch Processing
Batch Processing
Process complete trials offline, scoring every trial in a single pass (a minimal sketch follows the list below). Use batch processing for:
- Offline analysis
- Model training/validation
- Research studies
- Multiple trials available upfront
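As a rough illustration of the batch pattern (not the SDK's actual API; `predict_proba` and the random data are stand-ins for a trained decoder and recorded trials), all trials are scored in one offline call:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_proba(trials):
    # Stand-in for a trained decoder's batch call (placeholder, not the SDK's API)
    logits = rng.normal(size=(len(trials), 4))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

trials = rng.normal(size=(32, 8, 500))   # 32 complete trials: 8 channels x 500 samples
probs = predict_proba(trials)            # score everything in one offline pass
print(probs.argmax(axis=1))              # one predicted class per trial
```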
Streaming Processing
Process chunks as they arrive in real time (a minimal sketch follows the list below). Use streaming for:
- Real-time BCI control
- Online feedback
- Interactive applications
- Gaming and communication
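By contrast, a streaming loop updates a running estimate as each chunk arrives and can commit to a decision early. The sketch below is illustrative; `predict_proba_chunk` is a stand-in for the SDK's per-chunk inference call:

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_proba_chunk(chunk):
    # Stand-in for the SDK's per-chunk inference call (placeholder)
    p = np.abs(rng.normal(size=4)) + 1e-6
    return p / p.sum()

posterior = np.ones(4) / 4                    # running posterior over 4 classes
for _ in range(10):                           # chunks arriving in real time
    chunk = rng.normal(size=(8, 25))          # 8 channels x 25 samples (~100 ms at 250 Hz)
    posterior = posterior * predict_proba_chunk(chunk)
    posterior /= posterior.sum()
    if posterior.max() > 0.8:                 # act as soon as evidence is strong
        print("decision:", int(posterior.argmax()))
        break
```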
Setting Up Streaming Inference
Basic Streaming Setup
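Since the SDK's exact session interface is not reproduced here, the sketch below uses a hypothetical `StreamingSession` wrapper to show the shape of a basic setup: hold a running posterior, update it with each chunk, and reset between trials.

```python
import numpy as np

class StreamingSession:
    """Minimal sketch of a streaming session; the real SDK's class and
    method names may differ, so treat this as illustrative only."""

    def __init__(self, model, n_classes=4):
        self.model = model
        self.posterior = np.ones(n_classes) / n_classes

    def update(self, chunk):
        p = self.model(chunk)               # per-chunk class probabilities
        self.posterior *= p                 # accumulate evidence
        self.posterior /= self.posterior.sum()
        return self.posterior

    def reset(self):
        self.posterior[:] = 1.0 / self.posterior.size

rng = np.random.default_rng(2)

def dummy_model(chunk):
    p = np.abs(rng.normal(size=4)) + 1e-6   # stub for a trained decoder
    return p / p.sum()

session = StreamingSession(dummy_model)
for _ in range(5):
    posterior = session.update(rng.normal(size=(8, 25)))
print("posterior after 5 chunks:", posterior)
```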
Chunk Size Selection
Choose the chunk size based on your application: smaller chunks cut decision latency but give noisier per-chunk estimates, while larger chunks are steadier but slower to react.
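A minimal sketch of converting a latency target into samples per chunk (the millisecond guidelines in the comments are illustrative starting points, not SDK recommendations):

```python
SAMPLE_RATE_HZ = 250                     # set to your amplifier's sampling rate

# Illustrative starting points (tune for your paradigm):
#    50 ms  -> lowest latency, noisiest per-chunk estimates
#   100 ms  -> reasonable default for continuous control
#   250 ms+ -> steadier estimates, more sluggish feedback
chunk_ms = 100
chunk_samples = int(SAMPLE_RATE_HZ * chunk_ms / 1000)
print(chunk_samples)                     # 25 samples per chunk at 250 Hz
```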
Aggregation Methods
Weighted Vote
Weight each chunk's prediction by its confidence (recommended), so uncertain chunks contribute less to the final decision.
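A minimal weighted-vote sketch, using each chunk's maximum probability as its confidence (pure NumPy, independent of any SDK):

```python
import numpy as np

# Per-chunk class probabilities; confidence = each chunk's max probability.
chunk_probs = np.array([
    [0.60, 0.25, 0.10, 0.05],
    [0.40, 0.35, 0.15, 0.10],
    [0.70, 0.15, 0.10, 0.05],
])
weights = chunk_probs.max(axis=1)
aggregated = (chunk_probs * weights[:, None]).sum(axis=0) / weights.sum()
print("weighted-vote prediction:", int(aggregated.argmax()))   # class 0
```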
Majority Vote
Take a simple majority vote over the per-chunk labels.
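A minimal sketch:

```python
from collections import Counter

chunk_labels = [0, 2, 0, 0, 1]                     # per-chunk predicted classes
label, votes = Counter(chunk_labels).most_common(1)[0]
print(f"majority-vote prediction: {label} ({votes}/{len(chunk_labels)} chunks)")
```

Majority voting ignores how confident each chunk was, which is why the weighted vote above is usually preferred.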
Latest Chunk
Use only the most recent chunk, discarding earlier evidence.
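Aggregation then reduces to indexing; it reacts fastest but throws away accumulated evidence:

```python
chunk_labels = [0, 2, 0, 1]
prediction = chunk_labels[-1]   # trust only the newest evidence
print(prediction)               # 1
```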
Real-time BCI Example
Motor Imagery Control
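A sketch of a motor-imagery control loop under assumed settings: 4 classes, 8 channels, 100 ms chunks, a hypothetical `COMMANDS` mapping, and a stub decoder in place of the SDK's model.

```python
import numpy as np

rng = np.random.default_rng(3)
COMMANDS = {0: "left", 1: "right", 2: "forward", 3: "rest"}   # hypothetical mapping

def chunk_proba(chunk):
    # Placeholder for the SDK's per-chunk motor-imagery decoder
    p = np.abs(rng.normal(size=4)) + 1e-6
    return p / p.sum()

posterior = np.ones(4) / 4
for _ in range(20):                               # roughly 2 s of 100 ms chunks
    posterior *= chunk_proba(rng.normal(size=(8, 25)))
    posterior /= posterior.sum()
    if posterior.max() > 0.9:                     # high bar before moving anything
        print("command:", COMMANDS[int(posterior.argmax())])
        posterior = np.ones(4) / 4                # reset for the next decision
```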
P300 Speller
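A P300 speller aggregates evidence across row and column flashes rather than accumulating a single posterior. The sketch below assumes a standard 6x6 grid and a stub `p300_score` in place of a trained target/non-target classifier:

```python
import numpy as np

rng = np.random.default_rng(4)
ROWS, COLS = 6, 6                          # standard 6x6 speller grid

def p300_score(epoch):
    # Placeholder: probability that this flash epoch contains a P300
    return float(rng.random())

row_scores = np.zeros(ROWS)
col_scores = np.zeros(COLS)
for flash in range(10 * (ROWS + COLS)):    # 10 flashes per row and column
    idx = flash % (ROWS + COLS)
    epoch = rng.normal(size=(8, 200))      # e.g., 800 ms post-flash at 250 Hz
    if idx < ROWS:
        row_scores[idx] += p300_score(epoch)
    else:
        col_scores[idx - ROWS] += p300_score(epoch)
print("selected cell:", int(row_scores.argmax()), int(col_scores.argmax()))
```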
Performance Monitoring
Real-time Metrics
Track streaming performance, particularly per-chunk latency against the sub-25 ms budget and the confidence of each update.
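A minimal sketch of per-chunk metric logging (the stub decoder stands in for your actual update call):

```python
import time
import numpy as np

latencies_ms, confidences = [], []

def record_metrics(update_fn, chunk):
    """Wrap a per-chunk update with latency and confidence logging."""
    t0 = time.perf_counter()
    posterior = update_fn(chunk)
    latencies_ms.append((time.perf_counter() - t0) * 1000.0)
    confidences.append(float(np.max(posterior)))
    return posterior

stub_decoder = lambda chunk: np.array([0.7, 0.1, 0.1, 0.1])   # stand-in model
for _ in range(50):
    record_metrics(stub_decoder, np.zeros((8, 25)))
print(f"p95 latency: {np.percentile(latencies_ms, 95):.3f} ms")
print(f"mean confidence: {np.mean(confidences):.2f}")
```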
Quality Monitoring
Monitor signal quality during streaming so that artifact-laden chunks do not corrupt the running estimate.
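A crude amplitude-and-flatline gate as a sketch; the thresholds are illustrative and depend on your hardware and units (the values here assume microvolts):

```python
import numpy as np

def chunk_is_clean(chunk, max_abs=100.0, min_std=0.1):
    """Crude quality gate; thresholds are illustrative, tune per setup."""
    if np.abs(chunk).max() > max_abs:        # likely movement/EMG artifact
        return False
    if chunk.std(axis=1).min() < min_std:    # a channel went flat or disconnected
        return False
    return True

chunk = np.random.default_rng(5).normal(scale=10.0, size=(8, 25))
if not chunk_is_clean(chunk):
    pass   # skip or down-weight this chunk instead of updating the posterior
```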
Advanced Streaming Features
Adaptive Confidence Thresholds
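One simple scheme, sketched below under assumed bounds and step sizes, raises the decision threshold after errors and relaxes it after successes:

```python
class AdaptiveThreshold:
    """Sketch: demand stronger evidence after errors, relax after successes.
    The update rule and bounds are illustrative, not from the SDK."""

    def __init__(self, start=0.80, lo=0.60, hi=0.95, step=0.02):
        self.value, self.lo, self.hi, self.step = start, lo, hi, step

    def feedback(self, was_correct):
        delta = -self.step if was_correct else 2 * self.step
        self.value = min(self.hi, max(self.lo, self.value + delta))

threshold = AdaptiveThreshold()
threshold.feedback(was_correct=False)   # user rejected the last command
print(threshold.value)                  # 0.84: require stronger evidence now
```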
Multi-trial Context
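If your task has structure across trials, recent decisions can inform the prior for the next trial. Whether the SDK exposes priors this way is an assumption; the blending below is purely illustrative:

```python
import numpy as np

def contextual_prior(recent_labels, n_classes=4, strength=0.2):
    """Sketch: blend a uniform prior with the empirical distribution of
    recent decisions (illustrative; adapt to your SDK's actual interface)."""
    counts = np.bincount(recent_labels, minlength=n_classes).astype(float)
    if counts.sum() == 0:
        return np.ones(n_classes) / n_classes
    empirical = counts / counts.sum()
    uniform = np.ones(n_classes) / n_classes
    return (1 - strength) * uniform + strength * empirical

print(contextual_prior([0, 0, 2]))   # slightly favors classes 0 and 2
```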
Best Practices
Session Management
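Reset (or recreate) the session between trials so evidence from one trial never leaks into the next. A toy sketch of the pattern (the `Session` class is a stand-in, not the SDK's object):

```python
import numpy as np

rng = np.random.default_rng(6)

class Session:
    # Toy stand-in for a streaming-session object (not the SDK's actual class)
    def reset(self):
        self.posterior = np.ones(4) / 4

    def update(self, chunk):
        self.posterior /= self.posterior.sum()
        return self.posterior

session = Session()
for trial in range(3):
    session.reset()          # reset per trial so evidence never leaks across trials
    for _ in range(10):
        session.update(rng.normal(size=(8, 25)))
```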
Error Handling
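Wrap per-chunk updates so a single bad chunk cannot crash the control loop; log it and fall back to a neutral result. The exception types below are assumptions to adjust to your SDK:

```python
import numpy as np

neutral = np.ones(4) / 4     # fallback posterior when a chunk cannot be decoded

def decode_chunk(chunk):
    # Stand-in decoder; raises like a real one might on malformed input
    if np.isnan(chunk).any():
        raise ValueError("NaNs in chunk (dropped samples?)")
    return neutral

def safe_update(chunk):
    """Keep the loop alive when one chunk fails: log and fall back rather
    than crashing mid-trial. Adjust the exception types to your SDK."""
    try:
        return decode_chunk(chunk)
    except (ValueError, RuntimeError) as exc:
        print(f"chunk skipped: {exc}")
        return neutral

print(safe_update(np.full((8, 25), np.nan)))   # logs and returns the fallback
```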
Memory Management
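Streaming sessions can run for a long time, so bound every per-chunk buffer. A sketch using `collections.deque` with `maxlen`:

```python
from collections import deque
import numpy as np

# Bound every per-chunk buffer so long sessions cannot grow without limit.
MAX_CHUNKS = 200                          # about 20 s of history at 100 ms chunks
recent_chunks = deque(maxlen=MAX_CHUNKS)  # old entries are dropped automatically
recent_confidences = deque(maxlen=MAX_CHUNKS)

for _ in range(1000):                     # simulate a long session
    recent_chunks.append(np.zeros((8, 25)))
    recent_confidences.append(0.8)
print(len(recent_chunks))                 # stays capped at 200
```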
Troubleshooting
Low Confidence Issues
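Persistently low confidence usually traces back to poor signal quality, chunks too short to accumulate evidence, or a model that needs recalibration. Looking at the distribution of logged confidences helps separate these causes (the values below are illustrative):

```python
import numpy as np

confidences = np.array([0.40, 0.50, 0.45, 0.90, 0.42])   # logged per-chunk values

# Uniformly low values point at signal quality or a stale model;
# rare spikes suggest chunks are too short to accumulate evidence.
print(f"median confidence: {np.median(confidences):.2f}")
print(f"fraction above 0.8: {(confidences > 0.8).mean():.2f}")
```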
Latency Issues
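If per-chunk latency exceeds the budget, time each pipeline stage to see where the milliseconds go. The stages below are placeholders for your actual preprocessing and inference:

```python
import time
import numpy as np

def profile_stage(name, fn, *args):
    """Time one pipeline stage; print where the per-chunk budget goes."""
    t0 = time.perf_counter()
    out = fn(*args)
    print(f"{name}: {(time.perf_counter() - t0) * 1000:.2f} ms")
    return out

chunk = np.random.default_rng(7).normal(size=(8, 25))
filtered = profile_stage("filtering", lambda c: c - c.mean(axis=1, keepdims=True), chunk)
features = profile_stage("features", lambda c: c.var(axis=1), filtered)
```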
Next Steps
- Batch Processing: learn about offline batch inference
- Real-time Setup: configure your real-time BCI system
- Julia SDK: complete SDK reference
- Examples: working streaming examples
Next: Configure your real-time BCI setup for optimal streaming performance.