# Development Guide

This guide covers project organization and production workflow. It does not repeat SDK installation or full examples; use the linked pages for those details:

- Python setup: Python SDK Installation
- Julia setup: Julia SDK Quickstart
- Recipes: Basic Examples and Advanced Applications
## Recommended Project Structure

Keep acquisition, preprocessing, inference, and application logic separate. That makes it easier to test each stage and swap SDKs or models later.
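One possible layout, with illustrative directory names (the SDK does not prescribe any particular structure):

```text
bci_project/
├── acquisition/     # device drivers and stream readers
├── preprocessing/   # filtering, epoching, feature extraction
├── inference/       # model loading, prediction, confidence policy
├── application/     # command mapping, UI, device control
├── models/          # saved model bundles and metadata
└── tests/           # contract tests for each stage
```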
## Development Workflow

- Define the BCI paradigm: motor imagery, P300, SSVEP, or another task.
- Lock preprocessing: feature type, sampling rate, channels, temporal window, normalization.
- Train a baseline: start with `NimbusLDA`, then compare alternatives only if the baseline fails (see the sketch after this list).
- Evaluate offline: use cross-validation and held-out sessions before streaming.
- Add streaming: process feature chunks and aggregate trial decisions.
- Add safety policy: reject low confidence trials and log uncertainty.
- Deploy with monitoring: track latency, confidence, rejection rate, and drift.
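A minimal offline-evaluation sketch for the baseline step. The exact `NimbusLDA` API is not shown on this page, so scikit-learn's `LinearDiscriminantAnalysis` stands in to illustrate the cross-validation workflow; swap in the SDK model once your features are ready:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# X: (n_trials, n_features) extracted features, y: (n_trials,) labels.
# Random placeholder data; replace with your own feature matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))
y = rng.integers(0, 2, size=120)

# Cross-validate the baseline before any streaming work.
baseline = LinearDiscriminantAnalysis()
scores = cross_val_score(baseline, X, y, cv=5)
print(f"baseline accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```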
## Data Contracts

Document these invariants in your project:

| Field | Why It Matters |
|---|---|
| Sampling rate | Required for reproducible windows and chunk sizes. |
| Feature type | Models trained on CSP should not receive ERP features. |
| Feature count | Must match the model and metadata. |
| Label encoding | Python workflows can use sklearn-style labels; Julia examples typically use 1-indexed labels. |
| Normalization params | Must be estimated on training data and reused for deployment. |
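One way to make these invariants machine-checkable is to store them next to the model and verify them at load time. A minimal sketch; the field names are illustrative, not an SDK schema:

```python
import json
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class DataContract:
    sampling_rate_hz: float  # needed for reproducible windows and chunks
    feature_type: str        # e.g. "csp"; a CSP model must not get ERP features
    feature_count: int       # must match the model and its metadata
    label_encoding: str      # e.g. "0-indexed" (Python) vs "1-indexed" (Julia)
    norm_mean: list          # estimated on training data, reused at deployment
    norm_std: list

contract = DataContract(250.0, "csp", 4, "0-indexed",
                        norm_mean=[0.0] * 4, norm_std=[1.0] * 4)

# Save with the model; reload and assert against incoming data at startup.
with open("data_contract.json", "w") as f:
    json.dump(asdict(contract), f, indent=2)
```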
## Model Lifecycle

Treat a deployed model as more than weights. Save:

- model object or model identifier
- SDK version
- feature extraction settings
- normalization parameters
- class labels and label mapping
- training session metadata
- validation metrics
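A minimal sketch of persisting such a bundle, using joblib for the model object and JSON for everything else (the layout and field names are illustrative, not an SDK requirement):

```python
import json
from pathlib import Path

import joblib

def save_model_bundle(model, metadata: dict, out_dir: str) -> None:
    """Persist the model plus everything needed to reproduce inference."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    joblib.dump(model, out / "model.joblib")
    (out / "metadata.json").write_text(json.dumps(metadata, indent=2))

metadata = {
    "sdk_version": "<record the installed version>",
    "feature_settings": {"type": "csp", "n_features": 4},
    "normalization": {"mean": [0.0] * 4, "std": [1.0] * 4},
    "labels": {0: "left_hand", 1: "right_hand"},
    "training_sessions": ["<session ids>"],
    "validation_metrics": {"cv_accuracy": "<fill from evaluation>"},
}
```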
## Testing Strategy

Prioritize small tests around contracts and failure modes (a sketch follows this list):

- Preprocessing output has expected shape and finite values.
- Labels match the number of trials.
- Normalization params are reused, not recomputed on test data.
- Model rejects incompatible feature dimensions.
- Streaming chunks match `chunk_size`.
- Low-confidence predictions trigger the expected safety path.
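A minimal pytest sketch covering two of these contracts. `preprocess` is a hypothetical stand-in for your own pipeline; the point is the assertions:

```python
import numpy as np

def preprocess(raw, norm_mean, norm_std):
    """Hypothetical stand-in: extract four features per trial, then normalize."""
    features = raw.var(axis=2)[:, :4]          # e.g. per-channel variance
    return (features - norm_mean) / norm_std

def test_output_shape_and_finite_values():
    raw = np.random.randn(10, 8, 250)          # trials x channels x samples
    out = preprocess(raw, np.zeros(4), np.ones(4))
    assert out.shape == (10, 4)
    assert np.isfinite(out).all()

def test_training_normalization_is_reused():
    raw = np.random.randn(10, 8, 250)
    train_mean, train_std = np.full(4, 0.5), np.full(4, 2.0)
    out = preprocess(raw, train_mean, train_std)
    # If params were recomputed on test data, the output mean would be ~0.
    assert not np.allclose(out.mean(axis=0), 0.0, atol=1e-6)
```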
## Debugging Checklist

When accuracy or confidence is poor:

- Confirm data shape and label encoding.
- Check for NaN, Inf, constant features, or raw EEG accidentally passed as features.
- Verify the preprocessing band and time window match the paradigm.
- Reuse training normalization parameters on test data.
- Compare against `NimbusLDA` as a baseline.
- Inspect confidence and posterior entropy, not only accuracy.
- Run preprocessing diagnostics before tuning model hyperparameters.
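A quick diagnostic sketch for the NaN/Inf/constant-feature checks above; the thresholds are illustrative and worth tuning to your feature scale:

```python
import numpy as np

def diagnose_features(X: np.ndarray) -> list[str]:
    """Return warnings for common feature-matrix problems (X: trials x features)."""
    issues = []
    if np.isnan(X).any():
        issues.append("NaN values present")
    if np.isinf(X).any():
        issues.append("Inf values present")
    constant = np.std(X, axis=0) < 1e-12
    if constant.any():
        issues.append(f"constant features at indices {np.flatnonzero(constant)}")
    if np.abs(X).max() > 1e3:
        issues.append("very large magnitudes: raw EEG passed as features?")
    return issues

for issue in diagnose_features(np.random.randn(50, 8)):
    print("WARNING:", issue)
```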
## Performance Guidance
- Warm up streaming inference before a live session.
- Preallocate buffers in real-time loops.
- Keep filtering and feature extraction outside hot model code when possible.
- Prefer batch inference for offline evaluation.
- Log per-stage latency: acquisition, preprocessing, inference, and application action.
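A minimal per-stage latency logger using `time.perf_counter`; the stage names are illustrative:

```python
import time
from contextlib import contextmanager

latencies: dict = {}

@contextmanager
def timed(stage: str):
    """Accumulate wall-clock latency per pipeline stage."""
    start = time.perf_counter()
    try:
        yield
    finally:
        latencies.setdefault(stage, []).append(time.perf_counter() - start)

# Inside the real-time loop:
with timed("preprocessing"):
    pass  # filter and extract features here
with timed("inference"):
    pass  # run the model here

for stage, times in latencies.items():
    print(f"{stage}: {1e3 * sum(times) / len(times):.2f} ms avg")
```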
## Production Guardrails

Production BCI systems should include the following; a minimal confidence-gating sketch follows the list:

- confidence thresholds by action risk
- trial rejection and retry flows
- session-level health monitoring
- model/version audit logs
- fallback behavior when the stream drops
- clear separation between prediction and command execution
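A minimal sketch of risk-based confidence gating that keeps prediction separate from command execution; the action names and thresholds are illustrative:

```python
def gate_prediction(action: str, confidence: float,
                    thresholds: dict) -> tuple:
    """Return (execute, reason); unknown actions never execute."""
    threshold = thresholds.get(action, 1.1)  # above any possible confidence
    if confidence < threshold:
        return False, f"confidence {confidence:.2f} below threshold {threshold}"
    return True, "accepted"

# Higher-risk actions demand higher confidence.
THRESHOLDS = {"move_cursor": 0.6, "wheelchair_forward": 0.9}

execute, reason = gate_prediction("wheelchair_forward", 0.72, THRESHOLDS)
if execute:
    pass  # hand off to the command layer, never act directly from the model
else:
    print("rejected trial:", reason)  # log uncertainty and trigger retry flow
```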
## Next Read

- External Preprocessing Integration: export/import handoff from MNE, EEGLAB, OpenViBE, or MATLAB.
- Feature Normalization: cross-session scaling strategy.
- Streaming Configuration: chunking, aggregation, and quality gates.
- Model Specification: choose the right model family.