Documentation Index

Fetch the complete documentation index at: https://docs.nimbusbci.com/llms.txt

Use this file to discover all available pages before exploring further.

Batch Inference Configuration

Batch inference processes complete trials offline. Use it for calibration, validation, model comparison, research studies, and quality review. Use Streaming Inference Configuration when predictions must update during a live trial.

When To Use Batch

  • Train or calibrate a model on labeled trials.
  • Evaluate held-out sessions or subjects.
  • Compare model families and hyperparameters.
  • Run preprocessing diagnostics before deployment.
  • Analyze confidence, entropy, accuracy, and ITR after a session.
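The ITR in the last bullet is the information transfer rate. A minimal sketch of the standard Wolpaw ITR formula in bits per trial (the helper name `itr_bits_per_trial` is illustrative, not part of the SDK):

```python
import math

def itr_bits_per_trial(accuracy: float, n_classes: int) -> float:
    """Wolpaw information transfer rate in bits per trial."""
    if n_classes < 2:
        raise ValueError("ITR requires at least two classes")
    p = accuracy
    if p <= 1.0 / n_classes:
        return 0.0  # at or below chance: no information transferred
    bits = math.log2(n_classes)
    if p < 1.0:  # guard against log2(0) at perfect accuracy
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits

# Example: 85% accuracy on a 4-class task, ~1.15 bits per trial.
bits = itr_bits_per_trial(0.85, 4)
# Multiply by trials per minute (e.g. 10) to report bits/min.
itr_per_min = bits * 10
```

Report ITR alongside accuracy, since a slower paradigm with higher accuracy can still transfer less information per minute.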

Data Contracts

SDK                      | Typical Batch Shape               | Notes
Python classifiers       | (n_trials, n_features)            | sklearn-compatible estimator API.
Python BCIData utilities | (n_features, n_samples, n_trials) | Used by lower-level inference helpers.
Julia BCIData            | (n_features, n_samples, n_trials) | Labels are typically 1-indexed integers.
All batch workflows should use preprocessed features, not raw EEG.
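Converting between the two Python layouts above can be sketched as follows; the mean-over-samples reduction is a placeholder for whatever feature extraction you actually use, and `to_trials_by_features` is an illustrative helper, not an SDK function:

```python
import numpy as np

def to_trials_by_features(data: np.ndarray) -> np.ndarray:
    """Collapse a (n_features, n_samples, n_trials) array to the
    (n_trials, n_features) layout expected by the classifiers.

    Averaging over the sample axis is a placeholder reduction;
    substitute your real feature extraction step here.
    """
    if data.ndim != 3:
        raise ValueError(f"expected a 3-D array, got {data.ndim}-D")
    # mean over samples -> (n_features, n_trials); transpose -> (n_trials, n_features)
    return data.mean(axis=1).T

# Example: 8 features, 250 samples per trial, 40 trials.
raw = np.zeros((8, 250, 40))
X = to_trials_by_features(raw)  # X.shape == (40, 8)
```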

Basic Pattern

from nimbus_bci import NimbusLDA
from sklearn.model_selection import cross_val_score

X = load_features()  # (n_trials, n_features)
y = load_labels()    # one label per trial

clf = NimbusLDA()
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy

clf.fit(X, y)                         # fit on the full training set
probabilities = clf.predict_proba(X)  # per-class posterior probabilities
predictions = clf.predict(X)          # hard class labels

Evaluation Checklist

  • Split by session or subject when testing generalization.
  • Estimate normalization parameters on training folds only.
  • Report accuracy with confidence and rejection rate.
  • Inspect posterior entropy for uncertain trials.
  • Compare against a simple NimbusLDA baseline before tuning.
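The first, second, and fourth checklist items can be sketched with standard scikit-learn tooling. This example uses synthetic data and substitutes scikit-learn's `LinearDiscriminantAnalysis` for `NimbusLDA`; the `sessions` array is an assumed per-trial session label:

```python
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in data: 60 trials, 8 features, 3 recording sessions.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = rng.integers(0, 2, size=60)
sessions = np.repeat([0, 1, 2], 20)  # one group label per trial

# The pipeline fits scaler statistics on training folds only;
# GroupKFold holds out whole sessions, testing cross-session generalization.
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=3), groups=sessions)

# Posterior entropy per trial: high entropy flags uncertain predictions.
probs = clf.fit(X, y).predict_proba(X)
entropy = -(probs * np.log(probs.clip(1e-12))).sum(axis=1)
```

The same pattern applies with `NimbusLDA` in place of the scikit-learn estimator, since the classifiers expose an sklearn-compatible API.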

Batch Diagnostics

Run diagnostics when batch accuracy is unexpectedly low (Julia shown):
report = diagnose_preprocessing(data)

if !isempty(report.errors)
    error("Invalid batch data: $(report.errors)")
end
Common issues include:

  • Wrong feature shape.
  • Raw EEG passed as features.
  • Inconsistent label encoding.
  • Missing normalization.
  • Mismatched model metadata.
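On the Python side, a hypothetical pre-flight check along these lines can catch several of these issues before inference (not an SDK function; the feature-count heuristic is illustrative):

```python
import numpy as np

def check_batch_inputs(X: np.ndarray, y: np.ndarray) -> list[str]:
    """Return a list of problems found in a batch feature matrix and labels."""
    problems = []
    if X.ndim != 2:
        problems.append(f"X must be (n_trials, n_features), got {X.ndim}-D")
    elif X.shape[1] > X.shape[0] * 10:
        # Illustrative heuristic: far more columns than trials often means
        # raw EEG samples were passed instead of extracted features.
        problems.append("suspiciously many features; raw EEG passed as features?")
    if len(y) != len(X):
        problems.append(f"{len(X)} trials but {len(y)} labels")
    if not np.isfinite(X).all():
        problems.append("non-finite values in X (missing normalization or cleaning?)")
    return problems
```

An empty list means the basic contracts hold; anything else should be fixed before fitting or scoring.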

Batch vs Streaming

Need                           | Use
Offline calibration            | Batch
Cross-validation               | Batch
Research analysis              | Batch
Live feedback                  | Streaming
Chunk-by-chunk command updates | Streaming
Most production BCI systems use both: batch for calibration and validation, streaming for live operation.

Next Read

Feature Normalization

Keep train/test/deployment scales consistent.

Streaming Inference

Configure live chunk processing.

Basic Examples

Compact batch and streaming recipes.

Model Specification

Choose a model before evaluation.