

Advanced BCI Applications

This page summarizes advanced BCI patterns without duplicating full SDK walkthroughs. Use it as an implementation checklist, then follow the linked SDK guides for exact APIs.
For Python-specific streaming and sklearn patterns, see Python Streaming Inference and sklearn Integration. For Julia streaming, see Julia Streaming Inference.

Cross-Subject Training

Cross-subject models combine calibration data from multiple users and then adapt to a new user with a small amount of labeled data. Core workflow:
  1. Extract the same feature type for every subject.
  2. Normalize using training subjects only.
  3. Train a conservative baseline model.
  4. Evaluate with subject-wise splits, not random trial splits.
  5. Personalize with a small calibration set for the target subject.
A leave-one-subject-out evaluation loop (step 4):
from nimbus_bci import NimbusLDA, estimate_normalization_params, apply_normalization
from sklearn.model_selection import LeaveOneGroupOut

X, y, groups = load_multi_subject_features()

logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups):
    # Estimate normalization on the training subjects only (step 2).
    norm = estimate_normalization_params(X[train_idx], method="zscore")
    X_train = apply_normalization(X[train_idx], norm)
    X_test = apply_normalization(X[test_idx], norm)

    clf = NimbusLDA(mu_scale=5.0)
    clf.fit(X_train, y[train_idx])
    scores.append(clf.score(X_test, y[test_idx]))  # collect per-subject scores
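Step 5 (personalization) can be sketched with plain NumPy, using a nearest-class-mean classifier as a stand-in for the SDK model. The simulated data, the blend weight `w`, and all helper names here are illustrative assumptions, not SDK behavior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated pooled features from training subjects: two classes, 4 features.
X_pool = np.vstack([rng.normal(c, 1.0, size=(40, 4)) for c in (0.0, 2.0)])
y_pool = np.repeat([0, 1], 40)

# Simulated target subject whose class means are shifted by 0.8.
X_cal = np.vstack([rng.normal(c + 0.8, 1.0, size=(5, 4)) for c in (0.0, 2.0)])
y_cal = np.repeat([0, 1], 5)

# Normalize with training-subject statistics only (step 2).
mu, sd = X_pool.mean(axis=0), X_pool.std(axis=0)

def normalize(X):
    return (X - mu) / sd

# Nearest-class-mean baseline as a stand-in for the SDK classifier (step 3).
class_means = np.stack([normalize(X_pool)[y_pool == k].mean(axis=0) for k in (0, 1)])

# Personalize (step 5): blend pooled means toward the small calibration set.
w = 0.5  # calibration weight; an illustrative choice, not an SDK default
cal_means = np.stack([normalize(X_cal)[y_cal == k].mean(axis=0) for k in (0, 1)])
class_means = (1 - w) * class_means + w * cal_means

def predict(X):
    # Assign each trial to the nearest (personalized) class mean.
    dists = ((normalize(X)[:, None, :] - class_means[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)
```

The blend weight trades off the pooled prior against the target subject's small calibration set; with more calibration trials, `w` can be pushed higher.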

Hybrid BCI

Hybrid BCIs combine evidence from multiple paradigms, such as motor imagery plus P300. Recommended pattern:
  • Keep one model per paradigm.
  • Normalize each feature family separately.
  • Combine posterior probabilities or confidence-weighted decisions.
  • Log disagreement between paradigms for later review.
A simple weighted fusion of per-paradigm posteriors:
mi_proba = mi_model.predict_proba(mi_features)
p300_proba = p300_model.predict_proba(p300_features)

combined = 0.6 * mi_proba + 0.4 * p300_proba  # weights are application-specific
prediction = combined.argmax(axis=1)
confidence = combined.max(axis=1)
Use conservative thresholds for high-stakes actions and require confirmation when paradigms disagree.
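Disagreement between paradigms can be detected from the individual posteriors and routed to a confirmation step; `fuse_paradigms` below is an illustrative helper, not an SDK function:

```python
import numpy as np

def fuse_paradigms(mi_proba, p300_proba, w_mi=0.6):
    """Weighted fusion of per-paradigm posteriors with a disagreement flag.

    Rows are trials, columns are classes. Weights mirror the example above.
    """
    combined = w_mi * mi_proba + (1.0 - w_mi) * p300_proba
    prediction = combined.argmax(axis=1)
    confidence = combined.max(axis=1)
    # Paradigms disagree when their individual argmax decisions differ.
    disagree = mi_proba.argmax(axis=1) != p300_proba.argmax(axis=1)
    return prediction, confidence, disagree
```

Trials flagged in `disagree` are natural candidates for the confirmation path, and logging them satisfies the "log disagreement for later review" recommendation.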

Continuous Control

Continuous control maps repeated predictions into a smoothed command stream.
An exponential moving average keeps the command stream stable:
alpha = 0.3  # lower alpha = heavier smoothing, more latency
smoothed = None

for posterior in posterior_stream:
    command = posterior_to_velocity(posterior)
    smoothed = command if smoothed is None else alpha * command + (1 - alpha) * smoothed
    send_control(smoothed)
Keep smoothing outside the classifier. The model should report uncertainty; the control layer should decide how aggressively to act on it.
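One way for the control layer to decide how aggressively to act is to scale the smoothed command by a confidence-dependent gain. The `control_gain` helper and its thresholds are illustrative assumptions, not SDK defaults:

```python
def control_gain(confidence, floor=0.55, full=0.9):
    """Map classifier confidence to a velocity gain in [0, 1].

    Below `floor` the command is suppressed entirely; between `floor` and
    `full` the gain ramps linearly; above `full` it saturates at 1.
    """
    if confidence <= floor:
        return 0.0
    if confidence >= full:
        return 1.0
    return (confidence - floor) / (full - floor)
```

In the loop above this would become `send_control(smoothed * control_gain(confidence))`, keeping the uncertainty policy in the control layer rather than the classifier.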

Adaptive Learning

Adaptation is useful when signal distributions drift during long sessions or across days. Choose the adaptation mechanism by feedback type:
Feedback available | Pattern
Immediate labels | partial_fit or retraining on recent labeled trials.
Delayed labels | Buffer predictions and update when labels arrive.
No labels | Use active learning or calibration sufficiency checks.
Non-stationary state | Use NimbusSTS in Python.
With immediate labels, update the model in place:
prediction = clf.predict(trial_features)

if label_available:
    clf.partial_fit(trial_features, [true_label])
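For the delayed-labels case, a small buffer can hold trial features until their labels arrive; `DelayedLabelBuffer` is an illustrative sketch, not an SDK class:

```python
from collections import OrderedDict

class DelayedLabelBuffer:
    """Hold features for trials whose labels arrive later.

    When a label arrives, the matching features are released so the caller
    can run partial_fit. Oldest unlabeled trials are dropped at capacity.
    """

    def __init__(self, maxlen=256):
        self.maxlen = maxlen
        self.pending = OrderedDict()  # trial_id -> features

    def add(self, trial_id, features):
        self.pending[trial_id] = features
        while len(self.pending) > self.maxlen:
            self.pending.popitem(last=False)  # evict the oldest entry

    def resolve(self, trial_id, label):
        """Return (features, label) if the trial is still buffered, else None."""
        features = self.pending.pop(trial_id, None)
        return (features, label) if features is not None else None
```

A typical loop calls `buf.add(trial_id, trial_features)` at prediction time, then on label arrival does `update = buf.resolve(trial_id, true_label)` and, if `update` is not None, passes it to `partial_fit`.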
For Python active learning, see Active Learning.

Multi-Session Experiments

For experiments that span days or weeks:
  • Use a stable preprocessing pipeline and save its parameters.
  • Save normalization parameters with each trained model.
  • Track hardware, montage, impedance, session time, and participant state.
  • Evaluate same-session and cross-session performance separately.
  • Compare adaptation against a no-adaptation baseline.
Suggested metadata to log:
{
  "subject_id": "S001",
  "session_id": "2026-04-25-AM",
  "paradigm": "motor_imagery",
  "feature_type": "csp",
  "model": "NimbusLDA",
  "normalization": "zscore",
  "sampling_rate": 250
}

Robust Deployment

Production systems should separate classification, control, and safety policy:
  1. Classifier returns prediction, posterior, and confidence.
  2. Quality gate accepts, rejects, or asks for confirmation.
  3. Control layer maps accepted predictions to actions.
  4. Monitoring logs confidence, latency, rejection rate, and drift.
Use fallback behavior when confidence is low:
if confidence >= 0.9:
    execute(prediction)
elif confidence >= 0.7:
    request_confirmation(prediction)
else:
    reject_trial("low confidence")
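The monitoring layer (step 4 above) can start as a plain in-memory log of per-trial records; `log_decision` and `rejection_rate` are illustrative helpers, not SDK APIs:

```python
import time

def log_decision(log, prediction, confidence, started_at, action):
    """Append one monitoring record: prediction, confidence, latency, outcome."""
    log.append({
        "prediction": int(prediction),
        "confidence": float(confidence),
        "latency_ms": (time.monotonic() - started_at) * 1000.0,
        "action": action,  # "execute" | "confirm" | "reject"
    })

def rejection_rate(log):
    """Fraction of trials rejected; a rising value can signal drift."""
    return sum(r["action"] == "reject" for r in log) / max(len(log), 1)
```

Watching the rejection rate over a sliding window gives an early, label-free drift signal that can trigger recalibration before accuracy visibly degrades.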
For detailed safeguards, see Error Handling.

Next Read

Basic Examples

Compact starter recipes.

Model Specification

Compare models before choosing an architecture.

Python Active Learning

Query strategy and calibration stopping guidance.

Streaming Configuration

Chunking and aggregation decisions.