Python SDK API Reference

Complete reference for the nimbus-bci Python library.

Classifiers

NimbusLDA

Bayesian Linear Discriminant Analysis with a single covariance matrix shared across classes.
from nimbus_bci import NimbusLDA

clf = NimbusLDA(
    mu_loc=0.0,
    mu_scale=3.0,
    wishart_df=None,
    class_prior_alpha=1.0
)
Parameters:
  • mu_loc (float, default=0.0): Prior mean location for class means
  • mu_scale (float, default=3.0): Prior scale for class means (> 0)
  • wishart_df (float or None, default=None): Wishart degrees of freedom. If None, set to n_features + 2
  • class_prior_alpha (float, default=1.0): Dirichlet smoothing for class priors (≥ 0)
Methods:
  • fit(X, y): Fit the model
  • predict(X): Predict class labels
  • predict_proba(X): Predict class probabilities
  • partial_fit(X, y, classes=None): Incremental learning
  • score(X, y): Return accuracy score
Attributes:
  • classes_: Unique class labels
  • n_classes_: Number of classes
  • n_features_in_: Number of features
  • model_: Underlying Nimbus model
Example:
from nimbus_bci import NimbusLDA
import numpy as np

# Synthetic data: 100 trials, 16 features, 4 classes
X_train = np.random.randn(100, 16)
y_train = np.random.randint(0, 4, size=100)
X_test = np.random.randn(20, 16)

# Create and fit
clf = NimbusLDA(mu_scale=5.0)
clf.fit(X_train, y_train)

# Predict
predictions = clf.predict(X_test)
probabilities = clf.predict_proba(X_test)

# Online learning with a new batch
X_new = np.random.randn(10, 16)
y_new = np.random.randint(0, 4, size=10)
clf.partial_fit(X_new, y_new)

NimbusQDA

Bayesian Quadratic Discriminant Analysis (QDA) with class-specific covariance matrices.
from nimbus_bci import NimbusQDA

clf = NimbusQDA(
    mu_loc=0.0,
    mu_scale=3.0,
    wishart_df=None,
    class_prior_alpha=1.0
)
Parameters:
  • Same as NimbusLDA
Methods:
  • Same as NimbusLDA
Example:
from nimbus_bci import NimbusQDA

# Better for overlapping distributions (e.g., P300)
clf = NimbusQDA()
clf.fit(X_train, y_train)
probs = clf.predict_proba(X_test)

NimbusSoftmax

Bayesian Multinomial Logistic Regression fitted with Pólya-Gamma variational inference (VI).
from nimbus_bci import NimbusSoftmax

clf = NimbusSoftmax(
    w_loc=0.0,
    w_scale=1.0,
    class_prior_alpha=1.0
)
Parameters:
  • w_loc (float, default=0.0): Prior mean for weights
  • w_scale (float, default=1.0): Prior scale for weights
  • class_prior_alpha (float, default=1.0): Dirichlet smoothing
Methods:
  • Same as NimbusLDA
Example:
from nimbus_bci import NimbusSoftmax

# For non-Gaussian decision boundaries
clf = NimbusSoftmax()
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)

NimbusSTS

Bayesian Structural Time Series classifier with an Extended Kalman Filter (EKF), designed for non-stationary data.
from nimbus_bci import NimbusSTS

clf = NimbusSTS(
    state_dim=None,
    w_loc=0.0,
    w_scale=1.0,
    transition_cov=None,
    observation_cov=1.0,
    transition_matrix=None,
    learning_rate=0.1,
    num_steps=50,
    rng_seed=0,
    verbose=False
)
Parameters:
  • state_dim (int or None, default=None): Dimension of latent state. If None, set to n_classes - 1
  • w_loc (float, default=0.0): Prior mean for feature weights
  • w_scale (float, default=1.0): Prior scale for feature weights
  • transition_cov (float or None, default=None): Process noise covariance Q (controls drift speed). If None, auto-estimated. Typical values:
    • 0.001: Very slow drift (multi-day stability)
    • 0.01: Moderate drift (within-session adaptation)
    • 0.1: Fast drift (rapid environmental changes)
  • observation_cov (float, default=1.0): Observation noise covariance R
  • transition_matrix (ndarray or None, default=None): State transition matrix A. If None, uses identity (random walk)
  • learning_rate (float, default=0.1): Step size for parameter updates
  • num_steps (int, default=50): Number of learning iterations
  • rng_seed (int, default=0): Random seed for reproducibility
  • verbose (bool, default=False): Print convergence diagnostics during training
Methods:
  • fit(X, y): Fit the model
  • predict(X): Predict class labels (stateless)
  • predict_proba(X): Predict class probabilities (stateless)
  • partial_fit(X, y, classes=None): Incremental learning with EKF update
  • score(X, y): Return accuracy score
  • propagate_state(n_steps=1): Advance latent state using prior dynamics only
  • reset_state(): Reset latent state to initial values from training
  • get_latent_state(): Get current latent state (z_mean, z_cov)
  • set_latent_state(z_mean, z_cov=None): Set latent state manually
Attributes:
  • classes_: Unique class labels
  • n_classes_: Number of classes
  • n_features_in_: Number of features
  • model_: Underlying Nimbus model with state parameters
Example - Basic Usage:
from nimbus_bci import NimbusSTS

# Create and fit
clf = NimbusSTS(transition_cov=0.05, num_steps=100)
clf.fit(X_train, y_train)

# Standard prediction (stateless)
predictions = clf.predict(X_test)
probabilities = clf.predict_proba(X_test)
Example - Stateful Prediction:
from nimbus_bci import NimbusSTS

# Train model
clf = NimbusSTS(transition_cov=0.05)
clf.fit(X_train, y_train)

# Time-ordered prediction with state propagation
for x_t in X_stream:
    clf.propagate_state()  # Advance time
    pred = clf.predict(x_t.reshape(1, -1))[0]
    print(f"Prediction: {pred}")
Example - Online Learning with Delayed Feedback:
from nimbus_bci import NimbusSTS

# Initial training
clf = NimbusSTS(transition_cov=0.05, learning_rate=0.1)
clf.fit(X_calibration, y_calibration)

# Online session
for trial in online_session:
    # 1. Advance time (no measurement)
    clf.propagate_state()
    
    # 2. Predict using current state
    prediction = clf.predict(trial.features.reshape(1, -1))[0]
    
    # 3. Execute action
    execute_action(prediction)
    
    # 4. Get feedback after action completes
    true_label = wait_for_feedback()
    
    # 5. Update state with measurement
    clf.partial_fit(trial.features.reshape(1, -1), [true_label])
Example - State Inspection and Transfer:
from nimbus_bci import NimbusSTS

# Day 1: Train and save state
clf_day1 = NimbusSTS()
clf_day1.fit(X_day1, y_day1)
z_final, P_final = clf_day1.get_latent_state()

# Day 2: Transfer state with increased uncertainty
clf_day2 = NimbusSTS()
clf_day2.fit(X_day2_calib, y_day2_calib)  # Minimal calibration
clf_day2.set_latent_state(z_final * 0.5, P_final * 2.0)

# Use with transferred state
predictions = clf_day2.predict(X_day2_test)
Key Differences from Other Classifiers:
  • Stateful: Maintains and evolves latent state over time
  • Non-stationary: Designed for data with temporal drift
  • State Management: Explicit API for time propagation and state control
  • Use case: Long sessions, cross-day transfer, adaptive BCI
See the Bayesian STS Documentation for the complete usage guide.

Data Structures

BCIData

Container for BCI features, metadata, and labels.
from nimbus_bci.data import BCIData

data = BCIData(
    features,      # (n_features, n_samples, n_trials)
    metadata,      # BCIMetadata instance
    labels=None    # Optional labels
)
Parameters:
  • features (np.ndarray): Feature array of shape (n_features, n_samples, n_trials)
  • metadata (BCIMetadata): Metadata describing the data
  • labels (np.ndarray, optional): Trial labels
Attributes:
  • features: Feature array
  • metadata: Metadata object
  • labels: Labels (if provided)
  • n_trials: Number of trials
  • n_features: Number of features
  • n_samples: Number of samples per trial

BCIMetadata

Metadata for BCI experiments.
from nimbus_bci.data import BCIMetadata

metadata = BCIMetadata(
    sampling_rate=250.0,
    paradigm="motor_imagery",
    feature_type="csp",
    n_features=16,
    n_classes=4,
    chunk_size=None,
    temporal_aggregation=None
)
Parameters:
  • sampling_rate (float): Sampling rate in Hz
  • paradigm (str): BCI paradigm ("motor_imagery", "p300", "ssvep")
  • feature_type (str): Feature type ("csp", "bandpower", "erp")
  • n_features (int): Number of features
  • n_classes (int): Number of classes
  • chunk_size (int, optional): Chunk size for streaming
  • temporal_aggregation (str, optional): Aggregation method ("mean", "logvar", "rms")
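Example:
The two containers are typically constructed together. A minimal sketch (the random features below are placeholders):
import numpy as np
from nimbus_bci.data import BCIData, BCIMetadata

metadata = BCIMetadata(
    sampling_rate=250.0,
    paradigm="motor_imagery",
    feature_type="csp",
    n_features=16,
    n_classes=4
)

# Placeholder features with shape (n_features, n_samples, n_trials)
features = np.random.randn(16, 250, 40)
labels = np.random.randint(0, 4, size=40)

data = BCIData(features, metadata, labels=labels)
print(data.n_trials, data.n_features, data.n_samples)  # 40 16 250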

Inference

predict_batch()

Batch inference with comprehensive diagnostics.
from nimbus_bci import predict_batch

result = predict_batch(
    model,           # Trained model
    data,            # BCIData instance
    return_probs=True,
    return_entropy=True,
    return_diagnostics=True
)
Parameters:
  • model (NimbusModel): Trained Nimbus model
  • data (BCIData): Data to predict on
  • return_probs (bool, default=True): Return probabilities
  • return_entropy (bool, default=True): Return entropy
  • return_diagnostics (bool, default=True): Return diagnostics
Returns:
  • BatchResult: Result object with predictions, probabilities, entropy, and diagnostics
Example:
from nimbus_bci import predict_batch, NimbusLDA
from nimbus_bci.data import BCIData, BCIMetadata

clf = NimbusLDA()
clf.fit(X_train, y_train)

metadata = BCIMetadata(
    sampling_rate=250.0,
    paradigm="motor_imagery",
    feature_type="csp",
    n_features=16,
    n_classes=4
)
data = BCIData(features, metadata, labels=labels)

result = predict_batch(clf.model_, data)
print(f"Accuracy: {(result.predictions == labels).mean():.2%}")
print(f"Mean entropy: {result.mean_entropy:.2f} bits")

StreamingSession

Real-time chunk-by-chunk processing.
from nimbus_bci import StreamingSession

session = StreamingSession(
    model,      # Trained model
    metadata    # BCIMetadata with chunk_size
)
Methods:
  • process_chunk(chunk): Process one chunk, returns ChunkResult
  • finalize_trial(method="weighted_vote"): Finalize trial, returns StreamingResult
  • reset(): Reset session for new trial
Example:
from nimbus_bci import NimbusLDA, StreamingSession
from nimbus_bci.data import BCIMetadata

clf = NimbusLDA()
clf.fit(X_train, y_train)

metadata = BCIMetadata(
    sampling_rate=250.0,
    paradigm="motor_imagery",
    feature_type="csp",
    n_features=16,
    n_classes=4,
    chunk_size=125,  # 500ms chunks
    temporal_aggregation="logvar"
)

session = StreamingSession(clf.model_, metadata)

# Process chunks
for chunk in stream:
    result = session.process_chunk(chunk)
    print(f"Chunk: class {result.prediction} ({result.confidence:.2%})")

# Finalize
final = session.finalize_trial(method="weighted_vote")
print(f"Final: class {final.prediction}")

ChunkResult

Result from processing a single chunk. Attributes:
  • prediction (int): Predicted class
  • probabilities (np.ndarray): Class probabilities
  • confidence (float): Confidence (max probability)
  • entropy (float): Entropy in bits

StreamingResult

Result from finalizing a trial. Attributes:
  • prediction (int): Final predicted class
  • probabilities (np.ndarray): Aggregated probabilities
  • confidence (float): Final confidence
  • entropy (float): Final entropy
  • chunk_predictions (list): Predictions from each chunk
  • aggregation_method (str): Method used for aggregation

BatchResult

Result from batch inference. Attributes:
  • predictions (np.ndarray): Predicted classes
  • probabilities (np.ndarray): Class probabilities
  • entropy (np.ndarray): Entropy per trial
  • mean_entropy (float): Mean entropy
  • balance (float): Class balance
  • latency_ms (float): Inference latency
  • calibration (CalibrationMetrics): Calibration metrics

Metrics

compute_entropy()

Compute Shannon entropy from probabilities.
from nimbus_bci import compute_entropy

entropy = compute_entropy(probabilities)  # bits
Parameters:
  • probabilities (np.ndarray): Probability distributions
Returns:
  • np.ndarray: Entropy in bits
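Example (the probability rows below are illustrative):
import numpy as np
from nimbus_bci import compute_entropy

probs = np.array([
    [0.25, 0.25, 0.25, 0.25],  # maximally uncertain over 4 classes
    [0.90, 0.05, 0.03, 0.02]   # confident prediction
])
entropy = compute_entropy(probs)
# Expected: ~2.0 bits for the first row, ~0.62 bits for the second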

compute_calibration_metrics()

Compute Expected Calibration Error (ECE) and Maximum Calibration Error (MCE).
from nimbus_bci import compute_calibration_metrics

metrics = compute_calibration_metrics(
    predictions,
    confidences,
    labels,
    n_bins=10
)
Parameters:
  • predictions (np.ndarray): Predicted classes
  • confidences (np.ndarray): Confidence scores
  • labels (np.ndarray): True labels
  • n_bins (int, default=10): Number of bins
Returns:
  • CalibrationMetrics: Object with ece and mce attributes
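Example (a sketch assuming a fitted classifier clf and held-out X_test, y_test):
from nimbus_bci import compute_calibration_metrics

predictions = clf.predict(X_test)
confidences = clf.predict_proba(X_test).max(axis=1)

metrics = compute_calibration_metrics(predictions, confidences, y_test, n_bins=10)
print(f"ECE: {metrics.ece:.3f}, MCE: {metrics.mce:.3f}")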

calculate_itr()

Calculate Information Transfer Rate.
from nimbus_bci import calculate_itr

itr = calculate_itr(
    accuracy=0.85,
    n_classes=4,
    trial_duration=4.0
)
Parameters:
  • accuracy (float): Classification accuracy (0-1)
  • n_classes (int): Number of classes
  • trial_duration (float): Trial duration in seconds
Returns:
  • float: ITR in bits/minute
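For reference, ITR in bits/minute is conventionally given by the Wolpaw formula. The sketch below states that formula; it is an assumption that calculate_itr uses the same definition, and the library may differ in edge-case handling:
import numpy as np

def wolpaw_itr(accuracy, n_classes, trial_duration):
    """Reference Wolpaw ITR in bits/minute (valid for 0 < accuracy < 1)."""
    p, n = accuracy, n_classes
    bits_per_trial = (np.log2(n)
                      + p * np.log2(p)
                      + (1 - p) * np.log2((1 - p) / (n - 1)))
    return bits_per_trial * (60.0 / trial_duration)

# wolpaw_itr(0.85, 4, 4.0) ≈ 17.3 bits/minute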

assess_trial_quality()

Assess the quality of trial-level predictions.
from nimbus_bci import assess_trial_quality

quality = assess_trial_quality(
    probabilities,
    entropy,
    confidence_threshold=0.7
)
Parameters:
  • probabilities (np.ndarray): Class probabilities
  • entropy (np.ndarray): Entropy values
  • confidence_threshold (float, default=0.7): Threshold for quality
Returns:
  • TrialQuality: Object with quality metrics

should_reject_trial()

Determine whether a trial should be rejected based on its confidence.
from nimbus_bci import should_reject_trial

reject = should_reject_trial(confidence, threshold=0.7)
Parameters:
  • confidence (float): Confidence score
  • threshold (float, default=0.7): Rejection threshold
Returns:
  • bool: True if trial should be rejected
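Example (a sketch assuming a fitted classifier clf; act_on is a hypothetical downstream callback):
from nimbus_bci import should_reject_trial

probs = clf.predict_proba(X_test)
for p in probs:
    confidence = float(p.max())
    if should_reject_trial(confidence, threshold=0.7):
        continue  # abstain on low-confidence trials
    act_on(int(p.argmax()))  # hypothetical: act only on confident predictions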

Utilities

estimate_normalization_params()

Estimate normalization parameters from data.
from nimbus_bci import estimate_normalization_params

params = estimate_normalization_params(
    X,
    method="zscore",  # or "minmax", "robust"
    axis=0
)
Parameters:
  • X (np.ndarray): Data array
  • method (str): Normalization method
  • axis (int): Axis to normalize along
Returns:
  • NormalizationParams: Parameters for normalization

apply_normalization()

Apply normalization to data.
from nimbus_bci import apply_normalization

X_normalized = apply_normalization(X, params)
Parameters:
  • X (np.ndarray): Data to normalize
  • params (NormalizationParams): Normalization parameters
Returns:
  • np.ndarray: Normalized data
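Example (parameters are estimated on training data only and reused at test time to avoid leakage):
from nimbus_bci import estimate_normalization_params, apply_normalization

params = estimate_normalization_params(X_train, method="zscore", axis=0)
X_train_norm = apply_normalization(X_train, params)
X_test_norm = apply_normalization(X_test, params)  # reuse training statistics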

diagnose_preprocessing()

Diagnose preprocessing quality.
from nimbus_bci import diagnose_preprocessing

report = diagnose_preprocessing(
    features,
    labels,
    sampling_rate=250.0
)
Parameters:
  • features (np.ndarray): Feature array
  • labels (np.ndarray): Labels
  • sampling_rate (float): Sampling rate
Returns:
  • PreprocessingReport: Diagnostic report

compute_fisher_score()

Compute Fisher score for feature discriminability.
from nimbus_bci import compute_fisher_score

scores = compute_fisher_score(X, y)
Parameters:
  • X (np.ndarray): Features
  • y (np.ndarray): Labels
Returns:
  • np.ndarray: Fisher scores per feature

rank_features_by_discriminability()

Rank features by discriminability.
from nimbus_bci import rank_features_by_discriminability

ranking = rank_features_by_discriminability(X, y)
Parameters:
  • X (np.ndarray): Features
  • y (np.ndarray): Labels
Returns:
  • np.ndarray: Feature indices sorted by discriminability
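Example (a simple selection sketch; keeping 8 features is arbitrary, and this assumes the returned indices are sorted best-first):
from nimbus_bci import rank_features_by_discriminability

ranking = rank_features_by_discriminability(X_train, y_train)
top_features = ranking[:8]  # assumption: best-first ordering
X_train_sel = X_train[:, top_features]
X_test_sel = X_test[:, top_features]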

MNE Integration

from_mne_epochs()

Convert MNE Epochs to BCIData.
from nimbus_bci.compat import from_mne_epochs

data = from_mne_epochs(
    epochs,
    paradigm="motor_imagery",
    feature_type="raw"
)
Parameters:
  • epochs (mne.Epochs): MNE Epochs object
  • paradigm (str): BCI paradigm
  • feature_type (str): Feature type
Returns:
  • BCIData: Converted data

extract_csp_features()

Extract CSP features from MNE Epochs.
from nimbus_bci.compat import extract_csp_features

features, csp = extract_csp_features(
    epochs,
    n_components=8,
    log=True
)
Parameters:
  • epochs (mne.Epochs): MNE Epochs object
  • n_components (int): Number of CSP components
  • log (bool): Apply log transform
Returns:
  • features (np.ndarray): CSP features
  • csp (mne.decoding.CSP): Fitted CSP object

extract_bandpower_features()

Extract bandpower features from MNE Epochs.
from nimbus_bci.compat import extract_bandpower_features

features = extract_bandpower_features(
    epochs,
    bands={"mu": (8, 12), "beta": (13, 30)},
    method="welch"
)
Parameters:
  • epochs (mne.Epochs): MNE Epochs object
  • bands (dict): Frequency bands
  • method (str): Method ("welch" or "multitaper")
Returns:
  • np.ndarray: Bandpower features

create_bci_pipeline()

Create a complete BCI pipeline combining MNE preprocessing and nimbus-bci classification.
from nimbus_bci.compat import create_bci_pipeline

pipeline = create_bci_pipeline(
    classifier="lda",
    n_csp_components=8,
    bands=(8, 30)
)
Parameters:
  • classifier (str): Classifier type ("lda", "gmm", "softmax")
  • n_csp_components (int): Number of CSP components
  • bands (tuple): Frequency band (low, high)
Returns:
  • sklearn.pipeline.Pipeline: Complete pipeline
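Example (a sketch; it assumes the pipeline consumes epoched arrays of shape (n_trials, n_channels, n_samples)):
from nimbus_bci.compat import create_bci_pipeline

pipeline = create_bci_pipeline(
    classifier="lda",
    n_csp_components=8,
    bands=(8, 30)
)
pipeline.fit(X_epochs_train, y_train)          # assumed shape: (n_trials, n_channels, n_samples)
predictions = pipeline.predict(X_epochs_test)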

Functional API (Backward Compatible)

LDA Functions

from nimbus_bci import (
    nimbus_lda_fit,
    nimbus_lda_predict,
    nimbus_lda_predict_proba,
    nimbus_lda_update
)

# Fit
model = nimbus_lda_fit(X, y, n_classes=4, label_base=0)

# Predict
probs = nimbus_lda_predict_proba(model, X_test)
preds = nimbus_lda_predict(model, X_test)

# Update
model = nimbus_lda_update(model, X_new, y_new)

GMM Functions

from nimbus_bci import (
    nimbus_gmm_fit,
    nimbus_gmm_predict,
    nimbus_gmm_predict_proba,
    nimbus_gmm_update
)

# Same API as LDA functions
model = nimbus_gmm_fit(X, y, n_classes=4, label_base=0)
probs = nimbus_gmm_predict_proba(model, X_test)

Softmax Functions

from nimbus_bci import (
    nimbus_softmax_fit,
    nimbus_softmax_predict,
    nimbus_softmax_predict_proba,
    nimbus_softmax_update
)

# Same API as LDA functions
model = nimbus_softmax_fit(X, y, n_classes=4, label_base=0)
probs = nimbus_softmax_predict_proba(model, X_test)

STS Functions

from nimbus_bci import (
    nimbus_sts_fit,
    nimbus_sts_predict,
    nimbus_sts_predict_proba,
    nimbus_sts_update
)

# Fit
model = nimbus_sts_fit(
    X, y,
    n_classes=4,
    label_base=0,
    state_dim=None,
    transition_cov=0.05,
    observation_cov=1.0,
    learning_rate=0.1,
    num_steps=100,
    rng_seed=0,
    verbose=False
)

# Predict (with optional state evolution)
probs = nimbus_sts_predict_proba(model, X_test, evolve_state=False)
preds = nimbus_sts_predict(model, X_test)

# Update (online learning)
model = nimbus_sts_update(model, X_new, y_new, learning_rate=0.1)
Note: The functional API for STS provides lower-level control. For most use cases, prefer the NimbusSTS class with its state management methods.

Model I/O

from nimbus_bci import nimbus_save, nimbus_load

# Save model
nimbus_save(model, "model.npz")

# Load model
model = nimbus_load("model.npz")

Type Hints

All functions and classes include type hints for better IDE support:
from nimbus_bci import NimbusLDA
import numpy as np
from numpy.typing import NDArray

def train_classifier(
    X: NDArray[np.float64],
    y: NDArray[np.int64]
) -> NimbusLDA:
    clf = NimbusLDA()
    clf.fit(X, y)
    return clf