Python SDK API Reference
Complete reference for the nimbus-bci Python library.

Start Here
Python SDK Quickstart
Build and run your first classifier before diving into full API details.
Model Selection
Compare NimbusLDA, NimbusQDA, NimbusSoftmax, and NimbusSTS use cases.
Streaming Inference
Move from batch workflows to real-time BCI inference patterns.
Classifiers
NimbusLDA
Bayesian Linear Discriminant Analysis with shared covariance.

Parameters:
- mu_loc (float, default=0.0): Prior mean location for class means
- mu_scale (float, default=3.0): Prior scale for class means (> 0)
- wishart_df (float or None, default=None): Wishart degrees of freedom. If None, set to n_features + 2
- class_prior_alpha (float, default=1.0): Dirichlet smoothing for class priors (≥ 0)
Methods:
- fit(X, y): Fit the model
- predict(X): Predict class labels
- predict_proba(X): Predict class probabilities
- partial_fit(X, y, classes=None): Incremental learning
- score(X, y): Return accuracy score
Attributes:
- classes_: Unique class labels
- n_classes_: Number of classes
- n_features_in_: Number of features
- model_: Underlying Nimbus model
NimbusQDA
Bayesian QDA with class-specific covariances.

Parameters: same as NimbusLDA
Methods and attributes: same as NimbusLDA
NimbusSoftmax
Bayesian Multinomial Logistic Regression (Polya-Gamma VI).

Parameters:
- w_loc (float, default=0.0): Prior mean for weights
- w_scale (float, default=1.0): Prior scale for weights
- class_prior_alpha (float, default=1.0): Dirichlet smoothing

Methods and attributes: same as NimbusLDA
NimbusSTS
Bayesian Structural Time Series classifier with an Extended Kalman Filter for non-stationary data.

Parameters:
- state_dim (int or None, default=None): Dimension of latent state. If None, set to n_classes - 1
- w_loc (float, default=0.0): Prior mean for feature weights
- w_scale (float, default=1.0): Prior scale for feature weights
- transition_cov (float or None, default=None): Process noise covariance Q (controls drift speed). If None, auto-estimated. Typical values:
  - 0.001: Very slow drift (multi-day stability)
  - 0.01: Moderate drift (within-session adaptation)
  - 0.1: Fast drift (rapid environmental changes)
- observation_cov (float, default=1.0): Observation noise covariance R
- transition_matrix (ndarray or None, default=None): State transition matrix A. If None, uses identity (random walk)
- learning_rate (float, default=0.1): Step size for parameter updates
- num_steps (int, default=50): Number of learning iterations
- rng_seed (int, default=0): Random seed for reproducibility
- verbose (bool, default=False): Print convergence diagnostics during training
Methods:
- fit(X, y): Fit the model
- predict(X): Predict class labels (stateless)
- predict_proba(X): Predict class probabilities (stateless)
- partial_fit(X, y, classes=None): Incremental learning with EKF update
- score(X, y): Return accuracy score
- propagate_state(n_steps=1): Advance latent state using prior dynamics only
- reset_state(): Reset latent state to initial values from training
- get_latent_state(): Get current latent state (z_mean, z_cov)
- set_latent_state(z_mean, z_cov=None): Set latent state manually
Attributes:
- classes_: Unique class labels
- n_classes_: Number of classes
- n_features_in_: Number of features
- model_: Underlying Nimbus model with state parameters
Key Differences from Other Classifiers:
- Stateful: Maintains and evolves latent state over time
- Non-stationary: Designed for data with temporal drift
- State Management: Explicit API for time propagation and state control
- Use case: Long sessions, cross-day transfer, adaptive BCI
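The time-propagation behavior above can be sketched in plain numpy: under the default identity dynamics (transition_matrix=None), the state mean is unchanged while uncertainty grows by the process noise Q each step. This is an illustration of the random-walk model family, not the library's implementation; the function name and the specific Q value are made up for the sketch.

```python
import numpy as np

def propagate_random_walk(z_mean, z_cov, Q, n_steps=1):
    """Propagate a Gaussian latent state under identity (random-walk)
    dynamics: the mean is unchanged and the covariance grows by Q per step."""
    A = np.eye(len(z_mean))              # transition_matrix=None -> identity
    for _ in range(n_steps):
        z_mean = A @ z_mean              # mean propagation
        z_cov = A @ z_cov @ A.T + Q      # uncertainty accumulates
    return z_mean, z_cov

z_mean = np.zeros(2)                     # e.g. state_dim = n_classes - 1 = 2
z_cov = 0.5 * np.eye(2)
Q = 0.01 * np.eye(2)                     # cf. transition_cov=0.01 (moderate drift)
m, P = propagate_random_walk(z_mean, z_cov, Q, n_steps=10)
# mean unchanged; each variance grows from 0.5 to 0.5 + 10 * 0.01 = 0.6
```

This is why a larger transition_cov makes the classifier adapt faster: predictions made after idle periods rely on a wider prior, so new observations pull the state more.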
Data Structures
BCIData
Container for BCI features, metadata, and labels.

Parameters:
- features (np.ndarray): Feature array of shape (n_features, n_samples, n_trials)
- metadata (BCIMetadata): Metadata describing the data
- labels (np.ndarray, optional): Trial labels
Attributes:
- features: Feature array
- metadata: Metadata object
- labels: Labels (if provided)
- n_trials: Number of trials
- n_features: Number of features
- n_samples: Number of samples per trial
BCIMetadata
Metadata for BCI experiments.

Parameters:
- sampling_rate (float): Sampling rate in Hz
- paradigm (str): BCI paradigm ("motor_imagery", "p300", "ssvep")
- feature_type (str): Feature type ("csp", "bandpower", "erp")
- n_features (int): Number of features
- n_classes (int): Number of classes
- chunk_size (int, optional): Chunk size for streaming
- temporal_aggregation (str, optional): Aggregation method ("mean", "logvar", "rms")
Inference
predict_batch()
Batch inference with comprehensive diagnostics.

Parameters:
- model (NimbusModel): Trained Nimbus model
- data (BCIData): Data to predict on
- return_probs (bool, default=True): Return probabilities
- return_entropy (bool, default=True): Return entropy
- return_diagnostics (bool, default=True): Return diagnostics

Returns:
- BatchResult: Result object with predictions, probabilities, entropy, and diagnostics
StreamingSession
Real-time chunk-by-chunk processing.

Methods:
- process_chunk(chunk): Process one chunk, returns ChunkResult
- finalize_trial(method="weighted_vote"): Finalize trial, returns StreamingResult
- reset(): Reset session for new trial
ChunkResult
Result from processing a single chunk.

Attributes:
- prediction (int): Predicted class
- probabilities (np.ndarray): Class probabilities
- confidence (float): Confidence (max probability)
- entropy (float): Entropy in bits
StreamingResult
Result from finalizing a trial.

Attributes:
- prediction (int): Final predicted class
- probabilities (np.ndarray): Aggregated probabilities
- confidence (float): Final confidence
- entropy (float): Final entropy
- chunk_predictions (list): Predictions from each chunk
- aggregation_method (str): Method used for aggregation
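One plausible reading of method="weighted_vote" is to weight each chunk's probability vector by that chunk's confidence before summing. The library's exact weighting scheme is not specified here, so treat this numpy sketch as illustrative only:

```python
import numpy as np

def weighted_vote(chunk_probs):
    """Aggregate per-chunk probability vectors, weighting each chunk by its
    confidence (max probability), then renormalize to a distribution."""
    chunk_probs = np.asarray(chunk_probs, dtype=float)  # (n_chunks, n_classes)
    weights = chunk_probs.max(axis=1)                   # confidence per chunk
    agg = (weights[:, None] * chunk_probs).sum(axis=0)
    agg /= agg.sum()
    return int(agg.argmax()), agg

# A confident chunk (0.9) outvotes a weak dissenting chunk (0.55)
pred, probs = weighted_vote([[0.6, 0.4], [0.9, 0.1], [0.45, 0.55]])
```

Confidence weighting of this kind downweights ambiguous chunks, which is why it typically beats a plain majority vote on noisy EEG trials.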
BatchResult
Result from batch inference.

Attributes:
- predictions (np.ndarray): Predicted classes
- probabilities (np.ndarray): Class probabilities
- entropy (np.ndarray): Entropy per trial
- mean_entropy (float): Mean entropy
- balance (float): Class balance
- latency_ms (float): Inference latency
- calibration (CalibrationMetrics): Calibration metrics
Metrics
compute_entropy()
Compute Shannon entropy from probabilities.

Parameters:
- probabilities (np.ndarray): Probability distributions

Returns:
- np.ndarray: Entropy in bits
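Shannon entropy in bits has a standard definition, so a minimal numpy equivalent is easy to write (the function name here is hypothetical; use compute_entropy from the library in practice):

```python
import numpy as np

def shannon_entropy_bits(probabilities):
    """Shannon entropy in bits along the last axis, with 0*log2(0) := 0."""
    p = np.asarray(probabilities, dtype=float)
    safe = np.where(p > 0, p, 1.0)           # avoid log2(0) warnings
    logp = np.where(p > 0, np.log2(safe), 0.0)
    return -(p * logp).sum(axis=-1)

# Uniform over 4 classes -> 2 bits; a certain prediction -> 0 bits
h = shannon_entropy_bits([[0.25, 0.25, 0.25, 0.25], [1.0, 0.0, 0.0, 0.0]])
```

High entropy (near log2(n_classes)) flags trials where the classifier is effectively guessing, which is what the quality functions below build on.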
compute_calibration_metrics()
Compute Expected Calibration Error (ECE) and Maximum Calibration Error (MCE).

Parameters:
- predictions (np.ndarray): Predicted classes
- confidences (np.ndarray): Confidence scores
- labels (np.ndarray): True labels
- n_bins (int, default=10): Number of bins

Returns:
- CalibrationMetrics: Object with ece and mce attributes
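ECE has a standard binned definition, which this function presumably follows: bin trials by confidence, then average the per-bin gap between accuracy and mean confidence, weighted by bin size (MCE takes the maximum gap instead). A numpy sketch with assumed bin edges:

```python
import numpy as np

def expected_calibration_error(predictions, confidences, labels, n_bins=10):
    """Binned ECE: sum over bins of (bin weight) * |accuracy - confidence|."""
    predictions = np.asarray(predictions)
    confidences = np.asarray(confidences)
    labels = np.asarray(labels)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = (predictions[mask] == labels[mask]).mean()
            conf = confidences[mask].mean()
            ece += mask.mean() * abs(acc - conf)   # weight by bin occupancy
    return ece
```

A well-calibrated model that says "95% confident" should be right about 95% of the time; large ECE means reported confidences should not drive trial rejection directly.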
calculate_itr()
Calculate Information Transfer Rate.

Parameters:
- accuracy (float): Classification accuracy (0-1)
- n_classes (int): Number of classes
- trial_duration (float): Trial duration in seconds

Returns:
- float: ITR in bits/minute
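The conventional definition is the Wolpaw formula, which calculate_itr presumably implements: bits per trial = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)), scaled to bits per minute. A self-contained version (the function name and the clipping at zero accuracy are assumptions, not necessarily the library's choices):

```python
import math

def wolpaw_itr(accuracy, n_classes, trial_duration):
    """Wolpaw ITR in bits/minute for an N-class selection every
    trial_duration seconds at the given accuracy."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    elif p == 0:
        bits = 0.0   # clip at/below chance rather than report negative ITR
    return bits * 60.0 / trial_duration

itr = wolpaw_itr(accuracy=0.9, n_classes=4, trial_duration=4.0)  # ~20.6 bits/min
```

Note that ITR rewards shorter trials as well as higher accuracy, which is the usual argument for early stopping in streaming inference.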
assess_trial_quality()
Assess quality of predictions.

Parameters:
- probabilities (np.ndarray): Class probabilities
- entropy (np.ndarray): Entropy values
- confidence_threshold (float, default=0.7): Threshold for quality

Returns:
- TrialQuality: Object with quality metrics
should_reject_trial()
Determine whether a trial should be rejected based on confidence.

Parameters:
- confidence (float): Confidence score
- threshold (float, default=0.7): Rejection threshold

Returns:
- bool: True if the trial should be rejected
Utilities
estimate_normalization_params()
Estimate normalization parameters from data.

Parameters:
- X (np.ndarray): Data array
- method (str): Normalization method
- axis (int): Axis to normalize along

Returns:
- NormalizationParams: Parameters for normalization
apply_normalization()
Apply normalization to data.

Parameters:
- X (np.ndarray): Data to normalize
- params (NormalizationParams): Normalization parameters

Returns:
- np.ndarray: Normalized data
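The estimate/apply split exists so that statistics fitted on training data can be reused unchanged on later trials or streaming chunks. A minimal z-score sketch, assuming one of the normalization methods is a per-feature mean/std transform (the helper names and dict-based parameter container are hypothetical):

```python
import numpy as np

def estimate_zscore_params(X, axis=0):
    """Estimate per-feature mean and std from training data."""
    return {"mean": X.mean(axis=axis), "std": X.std(axis=axis)}

def apply_zscore(X, params):
    """Apply stored statistics; the epsilon guards against zero variance."""
    return (X - params["mean"]) / (params["std"] + 1e-12)

rng = np.random.default_rng(0)
X_train = rng.normal(5.0, 2.0, size=(100, 8))   # 100 trials, 8 features
params = estimate_zscore_params(X_train)
X_norm = apply_zscore(X_train, params)          # ~zero mean, ~unit std per feature
```

Re-estimating parameters on test data would leak statistics across the train/test boundary; always apply the training-set parameters.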
diagnose_preprocessing()
Diagnose preprocessing quality.

Parameters:
- features (np.ndarray): Feature array
- labels (np.ndarray): Labels
- sampling_rate (float): Sampling rate

Returns:
- PreprocessingReport: Diagnostic report
compute_fisher_score()
Compute Fisher score for feature discriminability.

Parameters:
- X (np.ndarray): Features
- y (np.ndarray): Labels

Returns:
- np.ndarray: Fisher scores per feature
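The Fisher score has a standard form: between-class scatter of the feature means divided by within-class variance, per feature. A numpy sketch consistent with that definition (the exact weighting the library uses may differ):

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature Fisher score: class-weighted between-class scatter over
    class-weighted within-class variance. Higher = more discriminative."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)

# Feature 0 separates the classes; feature 1 is noise
X = np.array([[0.0, 0.1], [0.1, -0.2], [1.0, 0.0], [1.1, 0.1]])
y = np.array([0, 0, 1, 1])
scores = fisher_score(X, y)
```

Sorting features by this score descending is essentially what rank_features_by_discriminability (below) returns as indices.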
rank_features_by_discriminability()
Rank features by discriminability.

Parameters:
- X (np.ndarray): Features
- y (np.ndarray): Labels

Returns:
- np.ndarray: Feature indices sorted by discriminability
MNE Integration
from_mne_epochs()
Convert MNE Epochs to BCIData.

Parameters:
- epochs (mne.Epochs): MNE Epochs object
- paradigm (str): BCI paradigm
- feature_type (str): Feature type

Returns:
- BCIData: Converted data
extract_csp_features()
Extract CSP features from MNE Epochs.

Parameters:
- epochs (mne.Epochs): MNE Epochs object
- n_components (int): Number of CSP components
- log (bool): Apply log transform

Returns:
- features (np.ndarray): CSP features
- csp (mne.decoding.CSP): Fitted CSP object
extract_bandpower_features()
Extract bandpower features from MNE Epochs.

Parameters:
- epochs (mne.Epochs): MNE Epochs object
- bands (dict): Frequency bands
- method (str): Method ("welch" or "multitaper")

Returns:
- np.ndarray: Bandpower features
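Band power is the integral of the power spectral density over a frequency band. A dependency-free numpy sketch using a raw periodogram illustrates the idea (the library's "welch" option would instead average over windowed segments for a lower-variance estimate):

```python
import numpy as np

def bandpower(signal, sampling_rate, band):
    """Approximate power in [band[0], band[1]] Hz from a one-sided periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sampling_rate)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    lo, hi = band
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

# A 10 Hz sinusoid should carry most of its power in the alpha band (8-12 Hz)
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10.0 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
alpha = bandpower(x, fs, (8.0, 12.0))
beta = bandpower(x, fs, (13.0, 30.0))
```

For motor imagery, band-limited power in mu/alpha and beta over sensorimotor channels is the typical feature set fed to the classifiers above.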
create_bci_pipeline()
Create a complete BCI pipeline with MNE and nimbus-bci.

Parameters:
- classifier (str): Classifier type ("lda", "gmm", "softmax")
- n_csp_components (int): Number of CSP components
- bands (tuple): Frequency band (low, high)

Returns:
- sklearn.pipeline.Pipeline: Complete pipeline
Functional API (Backward Compatible)
LDA Functions
GMM Functions
Softmax Functions
STS Functions
See the NimbusSTS class and its state management methods.
Model I/O
Type Hints
All functions and classes include type hints for better IDE support.

API FAQ
Which classifier should I start with?
Start with NimbusLDA for fast baselines, especially motor imagery. Use NimbusQDA for overlapping distributions and NimbusSTS for non-stationary sessions.
When should I use StreamingSession instead of predict_batch?
Use predict_batch for offline trials and evaluation. Use StreamingSession for chunk-by-chunk real-time inference where latency and incremental decisions matter.
Do I need MNE-Python to use nimbus-bci?
No. MNE integration is optional. You can use nimbus-bci with any preprocessing pipeline as long as you provide correctly shaped feature arrays.

Next Read
sklearn Integration
Advanced sklearn patterns and best practices
Streaming Inference
Real-time BCI with chunk processing
MNE Integration
Complete EEG preprocessing pipeline
Examples
Working code examples