> **Documentation Index:** fetch the complete documentation index at https://docs.nimbusbci.com/llms.txt and use it to discover all available pages before exploring further.
# Batch Inference Configuration

Batch inference processes complete trials offline. Use it for calibration, validation, model comparison, research studies, and quality review. Use Streaming Inference Configuration when predictions must update during a live trial.

## When To Use Batch
- Train or calibrate a model on labeled trials.
- Evaluate held-out sessions or subjects.
- Compare model families and hyperparameters.
- Run preprocessing diagnostics before deployment.
- Analyze confidence, entropy, accuracy, and ITR after a session.
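The ITR mentioned above is conventionally computed with the Wolpaw formula. A minimal sketch (the function name `wolpaw_itr` is illustrative, not part of the Nimbus SDK):

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, trial_seconds: float) -> float:
    """Wolpaw information transfer rate in bits/minute.

    Assumes equiprobable classes and uniformly distributed errors.
    """
    if accuracy <= 1.0 / n_classes:
        return 0.0  # at or below chance: no information transferred
    bits = math.log2(n_classes)
    if accuracy < 1.0:
        bits += accuracy * math.log2(accuracy)
        bits += (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_classes - 1))
    return bits * (60.0 / trial_seconds)

# 4-class task, 85% accuracy, 3-second trials -> roughly 23 bits/min
print(round(wolpaw_itr(4, 0.85, 3.0), 2))
```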
## Data Contracts
| SDK | Typical Batch Shape | Notes |
|---|---|---|
| Python classifiers | (n_trials, n_features) | sklearn-compatible estimator API. |
| Python BCIData utilities | (n_features, n_samples, n_trials) | Used by lower-level inference helpers. |
| Julia BCIData | (n_features, n_samples, n_trials) | Labels are typically 1-indexed integers. |
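The two layouts in the table differ only in axis order once the time axis is reduced. A small numpy sketch with assumed dimensions (the mean-over-samples reduction is a placeholder for real feature extraction):

```python
import numpy as np

# Illustrative dimensions only.
n_features, n_samples, n_trials = 8, 250, 120

# BCIData-style layout: (n_features, n_samples, n_trials)
trials = np.zeros((n_features, n_samples, n_trials))

# Reduce the time axis, then transpose into the sklearn-style
# (n_trials, n_features) layout expected by the classifiers.
X = trials.mean(axis=1).T
print(X.shape)  # (120, 8)
```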
## Basic Pattern

Fit on labeled training trials, then predict complete held-out trials in a single call; the same pattern applies in both the Python and Julia SDKs.
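A minimal Python sketch of the batch pattern on synthetic data. A plain sklearn LDA stands in for a Nimbus classifier here, since the data-contracts table states that the Python classifiers follow the sklearn estimator API; the exact Nimbus import path is not shown in this excerpt:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 8))     # (n_trials, n_features)
y_train = rng.integers(0, 4, size=100)  # 4-class labels
X_test = rng.normal(size=(20, 8))

clf = LinearDiscriminantAnalysis()
clf.fit(X_train, y_train)               # calibrate on labeled trials
pred = clf.predict(X_test)              # one label per held-out trial
proba = clf.predict_proba(X_test)       # posteriors for confidence analysis
print(pred.shape, proba.shape)
```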
## Evaluation Checklist
- Split by session or subject when testing generalization.
- Estimate normalization parameters on training folds only.
- Report accuracy with confidence and rejection rate.
- Inspect posterior entropy for uncertain trials.
- Compare against a simple `NimbusLDA` baseline before tuning.
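The first two checklist items can be sketched with scikit-learn's grouped cross-validation. The data is synthetic and a plain sklearn LDA stands in for a Nimbus model; the point is the structure: splits respect session boundaries, and the scaler is refit per fold so normalization parameters come from training trials only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GroupKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 8))           # (n_trials, n_features)
y = rng.integers(0, 2, size=120)        # binary labels
sessions = np.repeat(np.arange(6), 20)  # 6 sessions, 20 trials each

# Pipeline refits the scaler inside every fold, so normalization is
# estimated from the training trials only.
model = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())

scores = []
for train_idx, test_idx in GroupKFold(n_splits=6).split(X, y, groups=sessions):
    model.fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))
print(f"mean accuracy: {np.mean(scores):.2f}")
```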
## Batch Diagnostics

Run diagnostics when batch accuracy is unexpectedly low.

## Batch vs Streaming
| Need | Use |
|---|---|
| Offline calibration | Batch |
| Cross-validation | Batch |
| Research analysis | Batch |
| Live feedback | Streaming |
| Chunk-by-chunk command updates | Streaming |
## Next Read

- **Feature Normalization**: Keep train/test/deployment scales consistent.
- **Streaming Inference**: Configure live chunk processing.
- **Basic Examples**: Compact batch and streaming recipes.
- **Model Specification**: Choose a model before evaluation.