
BCI Model Examples

Note: This page describes conceptual probabilistic models for educational purposes. NimbusSDK currently implements RxLDA and RxGMM models. The examples below illustrate the theoretical foundations that inform these implementations. For working code, see the Working Examples section at the end of this page.

Motor Imagery: Theoretical Model

Conceptual Probabilistic Model

Motor imagery BCI can be understood through this probabilistic lens:

$$p(\text{class}, \text{features}) = p(\text{class}) \cdot p(\text{features}|\text{class})$$

Components:
  1. Prior: $p(\text{class})$ - Equal probability for each motor imagery class (left, right, feet, tongue)
  2. Likelihood: $p(\text{features}|\text{class})$ - Gaussian distribution of EEG features given the class
  3. Posterior: $p(\text{class}|\text{features})$ - Computed via Bayes' rule
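The three components combine via Bayes' rule in a few lines of arithmetic. As a language-agnostic sketch (Python, with a 1-D feature and invented class means; real CSP features are 16-dimensional):

```python
import numpy as np

# Bayes' rule with Gaussian likelihoods; a 1-D feature and invented class
# means keep the arithmetic visible (real CSP features are 16-D).
def posterior(x, means, var, prior):
    """p(class | x) ∝ p(class) · N(x | mu_c, var), normalized over classes."""
    lik = np.exp(-(x - means) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    joint = prior * lik
    return joint / joint.sum()

means = np.array([-2.0, -0.5, 0.5, 2.0])   # one invented mean per MI class
prior = np.full(4, 0.25)                   # equal prior over the 4 classes
post = posterior(1.9, means, var=1.0, prior=prior)   # favors the 4th class
```

With equal priors and equal variances, the posterior simply peaks at the class whose mean is closest to the observed feature.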

RxLDA Implementation

NimbusSDK implements this using RxLDA with shared covariance:
using NimbusSDK

# Load pre-trained RxLDA motor imagery model
model = load_model(RxLDAModel, "motor_imagery_4class_v1")

# Or train from scratch
metadata = BCIMetadata(
    sampling_rate = 250.0,
    paradigm = :motor_imagery,
    feature_type = :csp,
    n_features = 16,
    n_classes = 4
)

features = randn(16, 250, 100)  # 100 trials
labels = rand(1:4, 100)  # 4 classes
training_data = BCIData(features, metadata, labels)

# Train model - learns μ₁, μ₂, μ₃, μ₄ and shared Σ
trained_model = train_model(RxLDAModel, training_data; iterations=50)

# Inference
test_data = BCIData(test_features, metadata)
results = predict_batch(trained_model, test_data)
Mathematical Model:

$$p(x|c) = \mathcal{N}(x \,|\, \mu_c, \Sigma)$$

Where:
  • $x$: CSP-extracted features (16-dimensional)
  • $c \in \{1, 2, 3, 4\}$: Motor imagery class
  • $\mu_c$: Class-specific mean (learned from data)
  • $\Sigma$: Shared covariance matrix (learned from data)
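The shared-covariance model can be sketched without any SDK. The Python snippet below (synthetic 2-D features standing in for the 16-D CSP features) fits per-class means and one pooled covariance, then classifies with the resulting linear discriminant; it illustrates the math, not the RxLDA implementation itself.

```python
import numpy as np

# Shared-covariance Gaussian classifier (the model above), fit on
# synthetic 2-D features standing in for 16-D CSP features.
rng = np.random.default_rng(0)
true_mus = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]])
X = np.vstack([mu + rng.normal(size=(50, 2)) for mu in true_mus])
y = np.repeat(np.arange(4), 50)

# Learn per-class means mu_c and ONE pooled covariance Sigma.
mus = np.array([X[y == c].mean(axis=0) for c in range(4)])
resid = X - mus[y]
Sigma = resid.T @ resid / (len(X) - 4)
Sigma_inv = np.linalg.inv(Sigma)

def classify(x):
    # A shared Sigma makes the quadratic term common to all classes, so the
    # discriminant is linear: x' Sigma^-1 mu_c - 0.5 mu_c' Sigma^-1 mu_c.
    scores = [x @ Sigma_inv @ mu - 0.5 * mu @ Sigma_inv @ mu for mu in mus]
    return int(np.argmax(scores))

train_acc = np.mean([classify(x) == c for x, c in zip(X, y)])
```

Because every class shares Σ, the decision boundaries between classes are hyperplanes, which is exactly what makes LDA-style models fast and data-efficient for calibration.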

Performance Expectations

Motor Imagery with RxLDA

Typical Accuracy: 70-85% for 4-class MI
Calibration Time: 5-10 minutes (50-100 trials)
Inference Latency: 10-20ms per trial
ITR: 15-25 bits/minute (4-second trials)

P300 Detection: Theoretical Model

Conceptual Probabilistic Model

P300 detection distinguishes target from non-target stimuli:

$$p(\text{target}, \text{ERP features}) = p(\text{target}) \cdot p(\text{ERP}|\text{target})$$

Components:
  1. Prior: $p(\text{target})$ - Low probability (e.g., 1/6 for a 6×6 speller)
  2. Likelihood: $p(\text{ERP}|\text{target})$ - Gaussian distribution with P300 component
  3. Posterior: $p(\text{target}|\text{ERP})$ - Target probability given the ERP
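The low prior matters: identical ERP evidence yields a much lower target posterior under a 1/6 prior than under a flat one. A minimal Python sketch (likelihood values are invented for illustration):

```python
# Bayes' rule for binary target detection; likelihood values are invented.
def target_posterior(lik_target, lik_nontarget, prior_target=1/6):
    num = prior_target * lik_target
    return num / (num + (1 - prior_target) * lik_nontarget)

# ERP evidence favouring "target" by a likelihood ratio of 3:
p_speller = target_posterior(3.0, 1.0)                  # 1/6 speller prior
p_flat = target_posterior(3.0, 1.0, prior_target=0.5)   # flat prior
```

Here the 3:1 likelihood ratio gives only a 0.375 target posterior under the speller prior, versus 0.75 under a flat prior, which is why spellers typically aggregate evidence over repeated flashes.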

RxGMM Implementation

NimbusSDK uses RxGMM for flexible P300 detection:
using NimbusSDK

# Load pre-trained P300 model
model = load_model(RxGMMModel, "p300_binary_v1")

# Or train from scratch
metadata = BCIMetadata(
    sampling_rate = 250.0,
    paradigm = :p300,
    feature_type = :erp,
    n_features = 12,  # ERP amplitudes from 12 channels
    n_classes = 2  # target vs non-target
)

# Training data: post-stimulus ERP features
features = randn(12, 200, 150)  # 150 epochs, 0.8s post-stimulus
labels = rand(1:2, 150)  # 1=target, 2=non-target
training_data = BCIData(features, metadata, labels)

# Train - learns μ₁, Σ₁ (target) and μ₂, Σ₂ (non-target)
trained_model = train_model(RxGMMModel, training_data; iterations=50)

# Inference
test_data = BCIData(test_features, metadata)
results = predict_batch(trained_model, test_data)
Mathematical Model:

$$p(x|\text{target}) = \mathcal{N}(x \,|\, \mu_{\text{target}}, \Sigma_{\text{target}})$$
$$p(x|\text{non-target}) = \mathcal{N}(x \,|\, \mu_{\text{non-target}}, \Sigma_{\text{non-target}})$$

Where:
  • $x$: ERP features (12-dimensional)
  • Class-specific means and covariances capture P300 morphology
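Because each class gets its own covariance, classification compares two full Gaussian log-likelihoods rather than a linear discriminant. A minimal Python sketch with invented 2-D parameters:

```python
import numpy as np

# Class-specific means AND covariances (unlike the shared-covariance LDA
# model), so the decision boundary can be curved; all numbers are invented.
def log_gauss(x, mu, Sigma):
    """log N(x | mu, Sigma) for a full covariance matrix."""
    d = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (logdet + d @ np.linalg.solve(Sigma, d)
                   + len(x) * np.log(2 * np.pi))

mu_t = np.array([2.0, 1.0]); Sigma_t = np.array([[1.0, 0.3], [0.3, 2.0]])
mu_n = np.array([0.0, 0.0]); Sigma_n = np.array([[0.5, 0.0], [0.0, 0.5]])

x = np.array([1.8, 0.9])                      # epoch near the target mean
is_target = log_gauss(x, mu_t, Sigma_t) > log_gauss(x, mu_n, Sigma_n)
```

Allowing Σ to differ per class is what lets the model capture the distinct variability of target versus non-target ERPs, at the cost of more parameters to estimate.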

Performance Expectations

P300 Detection with RxGMM

Typical Accuracy: 85-95% binary detection
Calibration Time: 3-5 minutes (100-150 epochs)
Inference Latency: 15-25ms per epoch
ITR: 10-20 bits/minute (with 10 repetitions)

Temporal Dynamics (Conceptual)

Hidden Markov Models (Future)

Coming Soon

HMM for BCI would model temporal sequences:

$$p(s_t | s_{t-1}) \quad \text{(State transitions)}$$
$$p(x_t | s_t) \quad \text{(Observations)}$$

Use cases:
  • Continuous tracking of brain states
  • Sequence classification (e.g., gesture recognition)
  • Adaptive BCI with state-dependent feedback
Status: Not yet implemented in NimbusSDK
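Since HMMs are not yet in the SDK, the sketch below is a generic textbook forward filter (Python, 2 states, discrete observations, invented matrices), showing how $p(s_t | x_{1:t})$ would be tracked from the two distributions above:

```python
import numpy as np

# Textbook forward filter for a 2-state HMM with two observation symbols.
# The matrices below are invented for illustration; this is not an SDK API.
A = np.array([[0.9, 0.1],   # p(s_t | s_{t-1}): states tend to persist
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],   # p(x_t | s_t) for observation symbols 0 and 1
              [0.3, 0.7]])
pi = np.array([0.5, 0.5])   # initial state distribution

def forward_filter(obs):
    """Return p(s_t | x_1..x_t) for each t (filtered state beliefs)."""
    belief = pi * B[:, obs[0]]
    belief = belief / belief.sum()
    beliefs = [belief]
    for o in obs[1:]:
        belief = (A.T @ belief) * B[:, o]   # predict, then weight by evidence
        belief = belief / belief.sum()
        beliefs.append(belief)
    return np.array(beliefs)

beliefs = forward_filter([0, 0, 1, 1, 1])   # evidence shifts toward state 1
```

The sticky transition matrix smooths the belief over time, which is the property that makes HMMs attractive for continuous brain-state tracking.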

Kalman Filtering (Conceptual)

Future Work

Kalman filters for continuous state estimation:

$$s_t = A s_{t-1} + w_t \quad \text{(State dynamics)}$$
$$x_t = H s_t + v_t \quad \text{(Observations)}$$

Use cases:
  • Smooth cursor control
  • Continuous movement decoding
  • Real-time adaptation
Status: Planned for future release
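The predict/update cycle implied by these equations fits in a few lines. The 1-D Python sketch below uses illustrative values for A, H, the noise variances, and the measurement stream:

```python
# 1-D Kalman filter for s_t = A s_{t-1} + w_t, x_t = H s_t + v_t.
# A, H, Q, R, and the measurement stream are illustrative values.
A, H = 1.0, 1.0
Q, R = 0.01, 0.25          # process / measurement noise variances

def kalman_step(s, P, x):
    # Predict the next state and its variance
    s_pred, P_pred = A * s, A * P * A + Q
    # Update with measurement x via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    return s_pred + K * (x - H * s_pred), (1 - K * H) * P_pred

s, P = 0.0, 1.0            # initial estimate and its variance
for x in [0.9, 1.1, 1.0, 0.95, 1.05]:   # noisy readings of a state near 1
    s, P = kalman_step(s, P, x)
```

Each step blends the model's prediction with the new measurement in proportion to their uncertainties, which is what produces the smooth cursor trajectories listed above.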

Multi-Modal Fusion (Conceptual)

EEG + EMG Integration (Future)

Combining cortical (EEG) and peripheral (EMG) signals:

$$p(\text{intent} | \text{EEG}, \text{EMG}) \propto p(\text{EEG}|\text{intent}) \cdot p(\text{EMG}|\text{intent}) \cdot p(\text{intent})$$

Potential benefits:
  • Robustness to noise
  • Complementary information
  • Improved accuracy
Status: Not yet implemented - future research direction
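Under the conditional-independence assumption baked into the factorization above, fusion reduces to adding log-likelihoods. A toy Python sketch with invented probability values:

```python
import numpy as np

# Naive-Bayes fusion: if EEG and EMG are conditionally independent given
# intent, log-posteriors simply add. All probability values are invented.
log_prior = np.log(np.array([0.5, 0.5]))       # two candidate intents
loglik_eeg = np.log(np.array([0.6, 0.4]))      # weak EEG evidence for intent 0
loglik_emg = np.log(np.array([0.2, 0.8]))      # stronger EMG evidence for intent 1

logpost = log_prior + loglik_eeg + loglik_emg
post = np.exp(logpost - logpost.max())
post = post / post.sum()                       # fused p(intent | EEG, EMG)
```

Here the stronger EMG evidence overrules the weak EEG preference, illustrating how a clean modality can compensate for a noisy one.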

Adaptive Models (Conceptual)

Online Learning (Future)

Continuously updating models during use:

$$\theta_{t+1} = \theta_t + \eta \nabla_\theta \log p(x_t | \theta_t)$$

Potential benefits:
  • Adapt to non-stationarity
  • Personalization over time
  • Reduced calibration burden
Status: Basic calibration available; full online adaptation planned
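For a unit-variance Gaussian, $\nabla_\mu \log \mathcal{N}(x | \mu, 1) = (x - \mu)$, so the update rule above reduces to an exponential moving average of incoming samples. A toy Python sketch on a synthetic data stream:

```python
import numpy as np

# grad_mu log N(x | mu, 1) = (x - mu), so the update rule reduces to an
# exponential moving average of the incoming samples (synthetic here).
rng = np.random.default_rng(1)
mu, eta = 0.0, 0.05
for _ in range(2000):
    x = rng.normal(loc=3.0)      # stream drawn from the "true" distribution
    mu += eta * (x - mu)         # theta <- theta + eta * grad log p(x|theta)
```

The learning rate η trades tracking speed against noise: a larger η adapts faster to non-stationarity but jitters more around the true parameter.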

Current SDK Capabilities

✅ RxLDA Model

Implemented: Motor imagery, P300, and general classification tasks

✅ RxGMM Model

Implemented: Flexible classification with class-specific covariances

✅ Model Training

Implemented: Supervised training on labeled data

✅ Model Calibration

Implemented: Subject-specific adaptation

⏳ HMM Models

Planned: Temporal sequence modeling

⏳ Kalman Filters

Planned: Continuous state tracking

⏳ Multi-Modal

Future: EEG + EMG fusion

⏳ Online Learning

Future: Continuous adaptation

Working Examples

For actual working code with RxLDA and RxGMM, see the examples and API reference.
Purpose of This Page: This page provides conceptual foundations for understanding probabilistic BCI models. For practical implementation, use the working RxLDA and RxGMM models documented in the examples and API reference.