Development Guide

This guide covers project organization and production workflow. It does not repeat SDK installation or full examples; use the linked pages for those details. Keep acquisition, preprocessing, inference, and application logic separate so that each stage can be tested in isolation and SDKs or models can be swapped later. A typical project layout:
my-bci-project/
  src/
    acquisition/       # LSL, BrainFlow, hardware adapters
    preprocessing/     # filtering, artifact handling, feature extraction
    models/            # training, loading, normalization params
    inference/         # batch and streaming inference
    app/               # UI, control, feedback, safety policy
  data/
    raw/
    processed/
    calibration/
  tests/
  notebooks/
  README.md

Development Workflow

  1. Define the BCI paradigm: motor imagery, P300, SSVEP, or another task.
  2. Lock preprocessing: feature type, sampling rate, channels, temporal window, normalization.
  3. Train a baseline: start with NimbusLDA, then compare alternatives only if the baseline fails.
  4. Evaluate offline: use cross-validation and held-out sessions before streaming.
  5. Add streaming: process feature chunks and aggregate trial decisions.
  6. Add safety policy: reject low-confidence trials and log uncertainty (both steps are sketched after this list).
  7. Deploy with monitoring: track latency, confidence, rejection rate, and drift.
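
Steps 5 and 6 can be combined into a small aggregation-and-rejection loop. A minimal sketch, assuming a classifier with a scikit-learn-style predict_proba and per-chunk feature vectors coming from your preprocessing stage (the function name and the 0.7 threshold are illustrative, not SDK API):

import numpy as np

CONFIDENCE_THRESHOLD = 0.7  # illustrative; tune per action risk

def classify_trial(model, feature_chunks):
    """Aggregate per-chunk posteriors into one trial decision.

    feature_chunks: iterable of (n_features,) arrays from streaming
    preprocessing. `model` is any classifier exposing a
    scikit-learn-style predict_proba.
    """
    posteriors = [model.predict_proba(chunk.reshape(1, -1))[0]
                  for chunk in feature_chunks]
    mean_posterior = np.mean(posteriors, axis=0)
    label = int(np.argmax(mean_posterior))
    confidence = float(mean_posterior[label])

    if confidence < CONFIDENCE_THRESHOLD:
        # Safety policy: reject the trial and log the uncertainty
        # instead of forwarding a low-confidence command.
        return {"decision": None, "confidence": confidence, "rejected": True}
    return {"decision": label, "confidence": confidence, "rejected": False}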

Data Contracts

Document these invariants in your project:
  • Sampling rate: required for reproducible windows and chunk sizes.
  • Feature type: models trained on CSP should not receive ERP features.
  • Feature count: must match the model and metadata.
  • Label encoding: Python workflows can use sklearn-style labels; Julia examples typically use 1-indexed labels.
  • Normalization params: must be estimated on training data and reused for deployment.
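
These invariants are easy to enforce mechanically. A minimal sketch, assuming the contract lives in a plain dict next to the model (field names mirror the list above; nothing here is SDK API):

import numpy as np

# Hypothetical contract dict mirroring the documented invariants.
DATA_CONTRACT = {
    "sampling_rate": 250,
    "feature_type": "csp",
    "n_features": 8,
    "labels": [0, 1],          # sklearn-style, 0-indexed
    "normalization": "zscore",
}

def check_features(X, contract=DATA_CONTRACT):
    """Fail fast when a feature matrix violates the documented contract."""
    X = np.asarray(X)
    if X.ndim != 2 or X.shape[1] != contract["n_features"]:
        raise ValueError(
            f"expected (n_trials, {contract['n_features']}), got {X.shape}")
    if not np.all(np.isfinite(X)):
        raise ValueError("features contain NaN or Inf")
    return X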

Model Lifecycle

Treat a deployed model as more than weights. Save:
  • model object or model identifier
  • SDK version
  • feature extraction settings
  • normalization parameters
  • class labels and label mapping
  • training session metadata
  • validation metrics
Example metadata:
{
  "model": "NimbusLDA",
  "sdk": "nimbus-bci",
  "feature_type": "csp",
  "sampling_rate": 250,
  "n_features": 8,
  "normalization": "zscore",
  "trained_on": "2026-04-25"
}
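
One way to keep the model and this metadata together — a minimal sketch assuming a scikit-learn-style model object persisted with joblib (the file naming scheme is illustrative):

import json
import joblib

def save_bundle(model, metadata, path_prefix):
    """Persist the model and its metadata side by side."""
    joblib.dump(model, f"{path_prefix}.joblib")
    with open(f"{path_prefix}.json", "w") as f:
        json.dump(metadata, f, indent=2)

def load_bundle(path_prefix):
    """Reload both; refuse to run if the metadata file is missing."""
    model = joblib.load(f"{path_prefix}.joblib")
    with open(f"{path_prefix}.json") as f:
        metadata = json.load(f)
    return model, metadata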

Testing Strategy

Prioritize small tests around contracts and failure modes:
  • Preprocessing output has expected shape and finite values.
  • Labels match the number of trials.
  • Normalization params are reused, not recomputed on test data.
  • Model rejects incompatible feature dimensions.
  • Streaming chunks match chunk_size.
  • Low-confidence predictions trigger the expected safety path.
For numerical tests, avoid hardcoding exact posteriors unless the model is fully deterministic. Prefer shape, range, monotonicity, and threshold checks.
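
For example, a few pytest-style checks along these lines, using scikit-learn stand-ins (StandardScaler and LinearDiscriminantAnalysis substitute here for your own preprocessing and model code):

import numpy as np
import pytest
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 8))   # stand-in for CSP features
y_train = np.tile([0, 1], 20)        # two balanced classes
X_test = rng.normal(size=(10, 8))

def test_features_are_finite():
    assert X_train.ndim == 2 and np.all(np.isfinite(X_train))

def test_normalization_params_are_reused():
    scaler = StandardScaler().fit(X_train)   # fit on training data only
    mean_before = scaler.mean_.copy()
    scaler.transform(X_test)                 # transform, never refit
    assert np.allclose(scaler.mean_, mean_before)

def test_incompatible_feature_dim_is_rejected():
    model = LinearDiscriminantAnalysis().fit(X_train, y_train)
    with pytest.raises(ValueError):
        model.predict(X_test[:, :5])         # wrong n_features must fail loudly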

Debugging Checklist

When accuracy or confidence is poor:
  1. Confirm data shape and label encoding.
  2. Check for NaN, Inf, constant features, or raw EEG accidentally passed as features (see the snippet after this list).
  3. Verify the preprocessing band and time window match the paradigm.
  4. Reuse training normalization parameters on test data.
  5. Compare against NimbusLDA as a baseline.
  6. Inspect confidence and posterior entropy, not only accuracy.
  7. Run preprocessing diagnostics before tuning model hyperparameters.
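
A quick sanity pass covering steps 1 and 2 — a sketch assuming features arrive as a NumPy array (the report function and its warning heuristic are illustrative):

import numpy as np

def feature_sanity_report(X, y):
    """Print the checks from steps 1-2 before touching hyperparameters."""
    X, y = np.asarray(X), np.asarray(y)
    print("shape:", X.shape, "| labels:", np.unique(y))
    print("NaN:", np.isnan(X).any(), "| Inf:", np.isinf(X).any())
    constant = np.where(np.ptp(X, axis=0) == 0)[0]
    print("constant feature columns:", constant)
    # A suspiciously large feature count often means raw EEG samples
    # were passed instead of extracted features.
    if X.shape[1] > 1000:
        print("warning: n_features > 1000 - raw EEG passed as features?")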

Performance Guidance

  • Warm up streaming inference before a live session.
  • Preallocate buffers in real-time loops.
  • Keep filtering and feature extraction outside hot model code when possible.
  • Prefer batch inference for offline evaluation.
  • Log per-stage latency: acquisition, preprocessing, inference, and application action.
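
One way to log per-stage latency with only the standard library (stage names and logger configuration are illustrative):

import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bci.latency")

@contextmanager
def timed(stage):
    """Log wall-clock latency for one pipeline stage."""
    start = time.perf_counter()
    try:
        yield
    finally:
        log.info("%s: %.1f ms", stage, (time.perf_counter() - start) * 1e3)

# Usage inside the streaming loop:
# with timed("preprocessing"):
#     features = preprocess(chunk)
# with timed("inference"):
#     posterior = model.predict_proba(features)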

Production Guardrails

Production BCI systems should include:
  • confidence thresholds by action risk (sketched below)
  • trial rejection and retry flows
  • session-level health monitoring
  • model/version audit logs
  • fallback behavior when the stream drops
  • clear separation between prediction and command execution
See Error Handling for a focused checklist.
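
A sketch of risk-scaled confidence thresholds and the prediction/command split (action names, threshold values, and the execute callback are illustrative, not SDK API):

# Higher-risk actions demand higher confidence before execution.
RISK_THRESHOLDS = {
    "cursor_move": 0.60,         # low risk: frequent, easily reversed
    "select_item": 0.75,
    "wheelchair_forward": 0.90,  # high risk: physical consequence
}

def gate_command(action, confidence, execute):
    """Separate prediction from execution: only act above the risk threshold."""
    threshold = RISK_THRESHOLDS.get(action, 0.95)  # unknown actions: strictest
    if confidence >= threshold:
        execute(action)
        return True
    # Rejected trials fall through to the retry / feedback flow.
    return False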

Next Read

  • External Preprocessing Integration: export/import handoff from MNE, EEGLAB, OpenViBE, or MATLAB.
  • Feature Normalization: cross-session scaling strategy.
  • Streaming Configuration: chunking, aggregation, and quality gates.
  • Model Specification: choose the right model family.