Documentation Index
Fetch the complete documentation index at: https://docs.nimbusbci.com/llms.txt
Use this file to discover all available pages before exploring further.
Python SDK API Reference
Complete reference for the nimbus-bci Python library.
Start Here
Python SDK Quickstart
Build and run your first classifier before diving into full API details.
Model Selection
Compare NimbusLDA, NimbusQDA, NimbusSoftmax, and NimbusSTS use cases.
Streaming Inference
Move from batch workflows to real-time BCI inference patterns.
Classifiers
NimbusLDA
Bayesian Linear Discriminant Analysis with shared covariance.
Parameters:
- mu_loc (float, default=0.0): Prior mean location for class means
- mu_scale (float, default=3.0): Prior scale for class means (> 0)
- wishart_df (float or None, default=None): Wishart degrees of freedom. If None, set to n_features + 2
- class_prior_alpha (float, default=1.0): Dirichlet smoothing for class priors (≥ 0)
Methods:
- fit(X, y): Fit the model
- predict(X): Predict class labels
- predict_proba(X): Predict class probabilities
- partial_fit(X, y, classes=None): Incremental learning
- score(X, y): Return accuracy score
Attributes:
- classes_: Unique class labels
- n_classes_: Number of classes
- n_features_in_: Number of features
- model_: Underlying Nimbus model
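A minimal fit/predict sketch. The from nimbus_bci import path is an assumption; the parameters and methods follow the reference above.

```python
# Minimal sketch; the top-level import path is an assumption.
import numpy as np
from nimbus_bci import NimbusLDA

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))        # 120 trials x 8 features
y = rng.integers(0, 2, size=120)     # binary labels

clf = NimbusLDA(mu_loc=0.0, mu_scale=3.0, class_prior_alpha=1.0)
clf.fit(X, y)
proba = clf.predict_proba(X)         # (120, 2) class posteriors
print(clf.score(X, y), clf.classes_, clf.n_features_in_)

# Incremental update with a new batch of labeled trials
clf.partial_fit(rng.normal(size=(10, 8)), rng.integers(0, 2, size=10))
```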
NimbusQDA
Bayesian QDA with class-specific covariances.
Parameters: Same as NimbusLDA
Methods and attributes: Same as NimbusLDA
NimbusSoftmax
Bayesian Multinomial Logistic Regression (Polya-Gamma VI). Install the optional softmax extra before using this model (e.g. pip install nimbus-bci[softmax]).
Parameters:
- w_loc (float, default=0.0): Prior mean for weights
- w_scale (float, default=1.0): Prior scale for weights
- b_loc (float, default=0.0): Prior mean for biases
- b_scale (float, default=1.0): Prior scale for biases
- learning_rate (float, default=0.2): Damping factor for variational updates
- num_steps (int, default=50): Number of variational update sweeps
- num_posterior_samples (int, default=50): Number of posterior samples for prediction
- rng_seed (int, default=0): Random seed for reproducibility
Methods and attributes: Same as NimbusLDA
NimbusSTS
Bayesian Structural Time Series classifier with Extended Kalman Filter for non-stationary data.
Parameters:
- state_dim (int or None, default=None): Dimension of latent state. If None, set to n_classes - 1
- w_loc (float, default=0.0): Prior mean for feature weights
- w_scale (float, default=1.0): Prior scale for feature weights
- transition_cov (float or None, default=None): Process noise covariance Q (controls drift speed). If None, auto-estimated. Typical values:
  - 0.001: Very slow drift (multi-day stability)
  - 0.01: Moderate drift (within-session adaptation)
  - 0.1: Fast drift (rapid environmental changes)
- observation_cov (float, default=1.0): Observation noise covariance R
- transition_matrix (ndarray or None, default=None): State transition matrix A. If None, uses identity (random walk)
- learning_rate (float, default=0.1): Step size for parameter updates
- num_steps (int, default=50): Number of learning iterations
- rng_seed (int, default=0): Random seed for reproducibility
- verbose (bool, default=False): Print convergence diagnostics during training
Methods:
- fit(X, y): Fit the model
- predict(X): Predict class labels (stateless)
- predict_proba(X): Predict class probabilities (stateless)
- partial_fit(X, y, classes=None): Incremental learning with EKF update
- score(X, y): Return accuracy score
- propagate_state(n_steps=1): Advance latent state using prior dynamics only
- reset_state(): Reset latent state to initial values from training
- get_latent_state(): Get current latent state (z_mean, z_cov)
- set_latent_state(z_mean, z_cov=None): Set latent state manually
Attributes:
- classes_: Unique class labels
- n_classes_: Number of classes
- n_features_in_: Number of features
- model_: Underlying Nimbus model with state parameters
Key Differences from Other Classifiers:
- Stateful: Maintains and evolves latent state over time
- Non-stationary: Designed for data with temporal drift
- State Management: Explicit API for time propagation and state control
- Use case: Long sessions, cross-day transfer, adaptive BCI
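A sketch of the stateful workflow, under the same assumed import path; the online loop pairs stateless prediction with EKF updates, and the state calls mirror the method list above.

```python
# Stateful adaptation sketch; the import path is an assumption.
import numpy as np
from nimbus_bci import NimbusSTS

rng = np.random.default_rng(0)
X_cal = rng.normal(size=(60, 8))
y_cal = rng.integers(0, 2, size=60)

sts = NimbusSTS(transition_cov=0.01)   # moderate within-session drift
sts.fit(X_cal, y_cal)

# Online loop: predict, then update the EKF state with the true label
for x_t, y_t in zip(rng.normal(size=(10, 8)), rng.integers(0, 2, size=10)):
    pred = sts.predict(x_t.reshape(1, -1))
    sts.partial_fit(x_t.reshape(1, -1), np.array([y_t]))

sts.propagate_state(n_steps=5)         # coast through an unlabeled gap
z_mean, z_cov = sts.get_latent_state()
sts.reset_state()                      # back to the post-training state
```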
Data Structures
BCIData
Container for BCI features, metadata, and labels.
Parameters:
- features (np.ndarray): Feature array of shape (n_features, n_samples, n_trials) for multiple trials or (n_features, n_samples) for one trial
- metadata (BCIMetadata): Metadata describing the data
- labels (np.ndarray, optional): Trial labels
Attributes:
- features: Feature array
- metadata: Metadata object
- labels: Labels (if provided)
- n_trials: Number of trials
- n_samples: Number of samples per trial
BCIMetadata
Metadata for BCI experiments.
Parameters:
- sampling_rate (float): Sampling rate in Hz
- paradigm (str): BCI paradigm ("motor_imagery", "p300", "ssvep", "erp", or "custom")
- feature_type (str): Feature type ("raw", "csp", "bandpower", "erp_amplitude", or "custom")
- n_features (int): Number of features
- n_classes (int): Number of classes
- chunk_size (int, optional): Chunk size for streaming
- temporal_aggregation (str, default="mean"): Aggregation method ("mean", "logvar", "last", "max", "median", "var", or "std")
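A construction sketch. The keyword-argument constructors and the top-level import path are assumptions; the field names come from the reference above.

```python
# Construction sketch; keyword constructors and import path are assumptions.
import numpy as np
from nimbus_bci import BCIData, BCIMetadata

meta = BCIMetadata(
    sampling_rate=250.0,
    paradigm="motor_imagery",
    feature_type="csp",
    n_features=8,
    n_classes=2,
    temporal_aggregation="logvar",
)
features = np.random.randn(8, 125, 40)   # (n_features, n_samples, n_trials)
labels = np.random.randint(0, 2, size=40)
data = BCIData(features=features, metadata=meta, labels=labels)
print(data.n_trials, data.n_samples)     # 40 125
```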
Inference
predict_batch()
Batch inference with comprehensive diagnostics.
Parameters:
- model (NimbusModel): Trained Nimbus model
- data (BCIData): Data to predict on
- num_posterior_samples (int, default=50): Posterior samples for softmax models
- rng_seed (int, default=0): Random seed for softmax prediction
Returns:
- BatchResult: Result object with predictions, posteriors, entropy, and diagnostics
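A usage sketch; the import path and the single-sample data layout are assumptions (see BCIData above for the feature shape).

```python
# Batch inference sketch; import path and data layout are assumptions.
import numpy as np
from nimbus_bci import NimbusLDA, BCIData, BCIMetadata, predict_batch

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))
y = rng.integers(0, 2, size=40)
clf = NimbusLDA()
clf.fit(X, y)

meta = BCIMetadata(sampling_rate=250.0, paradigm="motor_imagery",
                   feature_type="csp", n_features=8, n_classes=2)
data = BCIData(features=X.T[:, np.newaxis, :],   # (8, 1, 40)
               metadata=meta, labels=y)

result = predict_batch(model=clf.model_, data=data)
print(result.predictions.shape, result.mean_entropy, result.balance)
```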
StreamingSession
Real-time chunk-by-chunk processing.
Methods:
- process_chunk(chunk): Process one chunk, returns ChunkResult
- finalize_trial(method="weighted_vote"): Finalize trial, returns StreamingResult
- reset(): Reset session for new trial
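A real-time loop sketch. The reference documents only the session methods, so the constructor arguments here are assumptions.

```python
# Streaming loop sketch; the StreamingSession constructor signature is an
# assumption -- only the methods above are documented. `clf` and `meta` are
# the fitted classifier and BCIMetadata from the earlier examples.
import numpy as np
from nimbus_bci import StreamingSession

session = StreamingSession(model=clf.model_, metadata=meta)  # assumed ctor

for chunk in np.random.randn(4, 8, 25):    # 4 chunks of (n_features, n_samples)
    res = session.process_chunk(chunk)     # -> ChunkResult
    print(res.prediction, res.confidence, res.latency_ms)

final = session.finalize_trial(method="weighted_vote")   # -> StreamingResult
print(final.prediction, final.n_chunks)
session.reset()                            # ready for the next trial
```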
ChunkResult
Result from processing a single chunk.
Attributes:
- prediction (int): Predicted class
- confidence (float): Confidence (max probability)
- posterior (np.ndarray): Class posterior probabilities
- latency_ms (float): Processing latency in milliseconds
StreamingResult
Result from finalizing a trial.
Attributes:
- prediction (int): Final predicted class
- confidence (float): Final confidence
- posterior (np.ndarray): Aggregated posterior probabilities
- chunk_posteriors (list): Posterior from each chunk
- entropy (float): Final entropy
- aggregation_method (str): Method used for aggregation
- n_chunks (int): Number of chunks processed
- latency_ms (float): Total trial inference latency
- chunk_latencies_ms (list): Latency for each chunk
- balance (float): Class balance across chunks
- calibration (CalibrationMetrics or None): Calibration metrics if a label was provided
BatchResult
Result from batch inference.
Attributes:
- predictions (np.ndarray): Predicted classes
- confidences (np.ndarray): Maximum posterior probability per trial
- posteriors (np.ndarray): Class posterior probabilities
- entropy (np.ndarray): Entropy per trial
- mean_entropy (float): Mean entropy
- mahalanobis_distances (np.ndarray): Distance to each class center
- outlier_scores (np.ndarray): Outlier score per trial
- balance (float): Class balance
- latency_ms (float): Inference latency
- per_trial_latency_ms (np.ndarray): Estimated latency per trial
- calibration (CalibrationMetrics or None): Calibration metrics if labels were provided
Metrics
compute_entropy()
Compute Shannon entropy from probabilities.
Parameters:
- probabilities (np.ndarray): Probability distributions
Returns:
- float: Entropy in bits. For a 2D probability matrix, this is the mean entropy across rows.
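For example (import path assumed):

```python
import numpy as np
from nimbus_bci import compute_entropy   # import path is an assumption

p = np.array([[0.5, 0.5],     # 1 bit
              [1.0, 0.0]])    # 0 bits
print(compute_entropy(p))     # mean across rows: 0.5
```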
compute_calibration_metrics()
Compute Expected Calibration Error (ECE) and Maximum Calibration Error (MCE).
Parameters:
- predictions (np.ndarray): Predicted classes
- confidences (np.ndarray): Confidence scores
- labels (np.ndarray): True labels
- n_bins (int, default=10): Number of bins
Returns:
- CalibrationMetrics: Object with ece and mce attributes
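For example (import path assumed):

```python
import numpy as np
from nimbus_bci import compute_calibration_metrics  # import path assumed

preds  = np.array([0, 1, 1, 0])
confs  = np.array([0.9, 0.8, 0.6, 0.7])
labels = np.array([0, 1, 0, 0])
cal = compute_calibration_metrics(preds, confs, labels, n_bins=10)
print(cal.ece, cal.mce)
```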
calculate_itr()
Calculate Information Transfer Rate.
Parameters:
- accuracy (float): Classification accuracy (0-1)
- n_classes (int): Number of classes
- trial_duration (float): Trial duration in seconds
Returns:
- float: ITR in bits/minute
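For example (import path assumed):

```python
from nimbus_bci import calculate_itr   # import path is an assumption

# 80% accuracy on a 4-class task with 3-second trials
print(calculate_itr(accuracy=0.8, n_classes=4, trial_duration=3.0))
```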
assess_trial_quality()
Assess quality of predictions.
Parameters:
- features (np.ndarray): Trial features to check for NaN/Inf artifacts
- confidence (float): Prediction confidence in [0, 1]
- confidence_threshold (float, default=0.6): Minimum confidence for accepting prediction
- outlier_threshold (float, default=5.0): Maximum outlier score for accepting prediction
- entropy (float, optional): Prediction entropy in bits
- outlier_score (float, optional): Mahalanobis-based outlier score
- entropy_threshold (float, default=1.5): Maximum entropy for accepting prediction
Returns:
- TrialQuality: Object with quality metrics
should_reject_trial()
Determine if trial should be rejected based on confidence.
Parameters:
- confidence (float): Confidence score
- threshold (float, default=0.7): Rejection threshold
Returns:
- bool: True if trial should be rejected
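A gating sketch combining both quality helpers (import paths assumed):

```python
import numpy as np
from nimbus_bci import assess_trial_quality, should_reject_trial  # paths assumed

features = np.random.randn(8)            # one trial's feature vector
quality = assess_trial_quality(features=features, confidence=0.55,
                               entropy=1.2, outlier_score=2.0)
if should_reject_trial(confidence=0.55, threshold=0.7):
    print("trial rejected")              # low-confidence trial is dropped
```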
Utilities
estimate_normalization_params()
Estimate normalization parameters from data.
Parameters:
- X (np.ndarray): Data array
- method (str): Normalization method
Returns:
- NormalizationParams: Parameters for normalization
apply_normalization()
Apply normalization to data.
Parameters:
- X (np.ndarray): Data to normalize
- params (NormalizationParams): Normalization parameters
Returns:
- np.ndarray: Normalized data
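A sketch pairing the two helpers; the import path and the "standard" method string (borrowed from the pipeline preprocessor names below) are assumptions.

```python
import numpy as np
from nimbus_bci import estimate_normalization_params, apply_normalization

X_train = np.random.randn(100, 8)
X_test  = np.random.randn(20, 8)
# "standard" mirrors the preprocessor names used elsewhere in this
# reference; the accepted method strings are an assumption.
params = estimate_normalization_params(X_train, method="standard")
X_train_n = apply_normalization(X_train, params)
X_test_n  = apply_normalization(X_test, params)   # reuse training statistics
```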
diagnose_preprocessing()
Diagnose preprocessing quality.
Parameters:
- data (BCIData): Data to diagnose
Returns:
- PreprocessingReport: Diagnostic report
compute_fisher_score()
Compute Fisher score for feature discriminability.
Parameters:
- X (np.ndarray): Features
- y (np.ndarray): Labels
Returns:
- np.ndarray: Fisher scores per feature
rank_features_by_discriminability()
Rank features by discriminability.
Parameters:
- X (np.ndarray): Features
- y (np.ndarray): Labels
Returns:
- np.ndarray: Feature indices sorted by discriminability
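A feature-selection sketch using both functions (import paths assumed):

```python
import numpy as np
from nimbus_bci import compute_fisher_score, rank_features_by_discriminability

X = np.random.randn(100, 8)
y = np.random.randint(0, 2, size=100)
scores = compute_fisher_score(X, y)               # one score per feature
order  = rank_features_by_discriminability(X, y)  # best features first
X_top  = X[:, order[:4]]                          # keep the top 4 features
```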
MNE Integration
from_mne_epochs()
Convert MNE Epochs to BCIData.
Parameters:
- epochs (mne.Epochs): MNE Epochs object
- paradigm (str): BCI paradigm
- feature_type (str): Feature type
Returns:
- BCIData: Converted data
extract_csp_features()
Extract CSP features from MNE Epochs.
Parameters:
- epochs (mne.Epochs): MNE Epochs object
- n_components (int): Number of CSP components
Returns:
- features (np.ndarray): CSP features
- csp (mne.decoding.CSP): Fitted CSP object
extract_bandpower_features()
Extract bandpower features from MNE Epochs.
Parameters:
- epochs (mne.Epochs): MNE Epochs object
- bands (dict): Frequency bands
- log_transform (bool, default=True): Apply log transform to band powers
Returns:
- tuple[np.ndarray, list[str]]: Bandpower features and band names
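A sketch assuming an existing labeled mne.Epochs object named epochs; the import paths and the bands dict format (name to (low, high) in Hz) are assumptions.

```python
# Assumes `epochs` is a labeled mne.Epochs object; import paths and the
# bands dict format are assumptions.
from nimbus_bci import (from_mne_epochs, extract_csp_features,
                        extract_bandpower_features)

data = from_mne_epochs(epochs, paradigm="motor_imagery", feature_type="raw")
features, csp = extract_csp_features(epochs, n_components=8)
bp, band_names = extract_bandpower_features(
    epochs, bands={"mu": (8, 12), "beta": (13, 30)})
```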
create_bci_pipeline()
Create complete BCI pipeline with MNE and nimbus-bci.
Parameters:
- model_class (class): Classifier class (NimbusLDA, NimbusQDA, or NimbusSoftmax)
- preprocessor (str, default="standard"): Preprocessing method ("standard", "robust", or None)
- feature_extraction (str, optional): Feature extraction method ("csp" or None)
- n_csp_components (int, default=8): Number of CSP components
- **model_kwargs: Additional arguments passed to the classifier
Returns:
- sklearn.pipeline.Pipeline: Complete pipeline
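A pipeline sketch. The import path is an assumption, and with feature_extraction="csp" the input is assumed to be epoch arrays of shape (n_trials, n_channels, n_samples), as mne.decoding.CSP expects.

```python
# Pipeline sketch; import path and input shape are assumptions.
import numpy as np
from nimbus_bci import create_bci_pipeline, NimbusLDA

X_epochs = np.random.randn(40, 16, 250)   # trials x channels x samples
y = np.random.randint(0, 2, size=40)

pipe = create_bci_pipeline(model_class=NimbusLDA, preprocessor="standard",
                           feature_extraction="csp", n_csp_components=8,
                           mu_scale=3.0)   # extra kwargs go to the classifier
pipe.fit(X_epochs, y)                      # standard sklearn Pipeline API
print(pipe.predict(X_epochs[:5]))
```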
Functional API (Backward Compatible)
LDA Functions
QDA Functions
Softmax Functions
These functions require the optional softmax extra.
STS Functions
Use the NimbusSTS class with its state management methods.
Model I/O
Active Learning
Active learning helpers reduce calibration cost by ranking unlabeled feature rows, deciding whether streaming trials are worth labeling, and stopping calibration when the model posterior stabilizes. Each helper accepts either a fitted Nimbus classifier (NimbusLDA, NimbusQDA, NimbusSoftmax, NimbusSTS) or a raw NimbusModel snapshot.
Active learning expects preprocessed features. Use X_pool shaped (n_pool, n_features) for pool-based ranking and stopping, and x_new shaped (n_features,) or (1, n_features) for streaming query decisions.
suggest_next_trial()
Rank an unlabeled feature pool by informativeness and return the top n candidates.
Parameters:
- model (NimbusModel or fitted Nimbus classifier): Model used to score candidates
- X_pool (np.ndarray): Unlabeled feature rows with shape (n_pool, n_features)
- strategy ("entropy", "margin", "least_confidence", or "bald", default="bald"): Informativeness criterion
- n (int, default=1): Number of candidates to return
- num_posterior_samples (int, default=256): Posterior samples for bald; also forwarded to NimbusSoftmax probability estimates
- rng_seed (int, default=0): Deterministic seed for posterior sampling
QueryResult
Attributes:
- indices: Top-n indices into X_pool, sorted from most to least informative
- scores: Raw informativeness score for each row in X_pool
- strategy: Strategy used
- n_posterior_samples: Posterior samples used (1 for cheap strategies)
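A pool-ranking sketch; the import path is an assumption and clf stands for any fitted Nimbus classifier from the examples above.

```python
import numpy as np
from nimbus_bci import suggest_next_trial   # import path is an assumption

# `clf` is a fitted Nimbus classifier (see the classifier examples above).
X_pool = np.random.randn(200, 8)            # unlabeled candidate trials
q = suggest_next_trial(model=clf, X_pool=X_pool, strategy="bald", n=5)
next_batch = X_pool[q.indices]              # most informative trials first
print(q.strategy, q.scores[q.indices])
```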
strategy="bald" is supported for NimbusLDA, NimbusQDA, and NimbusSoftmax. NimbusSTS supports only the cheap strategies in this release.
should_query()
Decide whether a single arriving trial is informative enough to label.
Parameters:
- model (NimbusModel or fitted Nimbus classifier): Model used to score the trial
- x_new (np.ndarray): Single feature row with shape (n_features,) or (1, n_features)
- strategy ("entropy", "margin", or "least_confidence", default="entropy"): Cheap informativeness strategy
- threshold (float): Query cutoff
- num_posterior_samples (int, default=50): Forwarded to NimbusSoftmax probability estimates
- rng_seed (int, default=0): Deterministic seed for NimbusSoftmax
StreamingQueryDecision
Attributes:
- should_query: Whether the trial should be labeled
- score: Raw informativeness score
- threshold: Threshold used
- strategy: Strategy used
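A streaming query sketch; the import path, the 0.8-bit threshold, and the labeling helper are assumptions.

```python
import numpy as np
from nimbus_bci import should_query   # import path is an assumption

# `clf` is a fitted Nimbus classifier; the 0.8-bit entropy cutoff is
# illustrative, not a library default.
x_new = np.random.randn(8)            # one arriving trial's features
decision = should_query(model=clf, x_new=x_new,
                        strategy="entropy", threshold=0.8)
if decision.should_query:
    queue_for_labeling(x_new)         # hypothetical labeling step
```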
calibration_sufficient()
Decide whether calibration can stop based on a label-free criterion evaluated over the same unlabeled pool.
Parameters:
- model (NimbusModel or fitted Nimbus classifier): Current model snapshot
- X_pool (np.ndarray): Unlabeled feature rows with shape (n_pool, n_features)
- criterion ("posterior_stability" or "expected_info_gain", default="posterior_stability"): Stopping signal
- previous (NimbusModel or fitted Nimbus classifier, optional): Previous model snapshot, required for posterior_stability
- threshold (float): Stop when the signal is below this value
- num_posterior_samples (int, default=64): Posterior samples for expected_info_gain; forwarded to NimbusSoftmax
- rng_seed (int, default=0): Deterministic seed
CalibrationStatus
Attributes:
- is_sufficient: True when the stopping signal is below threshold
- signal: Mean total variation for posterior_stability, or mean BALD in bits for expected_info_gain
- threshold: Threshold used
- criterion: Criterion used
- details: Criterion-specific diagnostics such as max_tv, min_tv, max_bald, or min_bald
posterior_stability works for every Nimbus head, including NimbusSTS. expected_info_gain uses BALD and is supported for NimbusLDA, NimbusQDA, and NimbusSoftmax.
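A stopping-rule sketch; the import path, the snapshot names, and the 0.02 threshold are illustrative assumptions.

```python
from nimbus_bci import calibration_sufficient   # import path is an assumption

# `model_now` / `model_prev` are consecutive calibration snapshots;
# the 0.02 total-variation threshold is an illustrative choice, not a default.
status = calibration_sufficient(model=model_now, X_pool=X_pool,
                                criterion="posterior_stability",
                                previous=model_prev, threshold=0.02)
if status.is_sufficient:
    print("stop calibration:", status.signal, status.details)
```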
Strategy Units
| Quantity | Range | Used by |
|---|---|---|
| entropy | [0, log2(n_classes)] bits | suggest_next_trial(), should_query() |
| bald | [0, log2(n_classes)] bits | suggest_next_trial(), calibration_sufficient(criterion="expected_info_gain") |
| margin | [0, 1] probability gap | suggest_next_trial(), should_query() |
| least_confidence | [0, 1 - 1/n_classes] | suggest_next_trial(), should_query() |
| posterior_stability | [0, 1] mean total variation | calibration_sufficient() |
Type Hints
All functions and classes include type hints for better IDE support.
API FAQ
Which classifier should I start with?
Start with NimbusLDA for fast baselines, especially motor imagery. Use NimbusQDA for overlapping distributions and NimbusSTS for non-stationary sessions.
When should I use StreamingSession instead of predict_batch?
Use predict_batch for offline trials and evaluation. Use StreamingSession for chunk-by-chunk real-time inference where latency and incremental decisions matter.
Do I need MNE-Python to use nimbus-bci?
No. MNE integration is optional. You can use nimbus-bci with any preprocessing pipeline as long as you provide correctly shaped feature arrays.
Next Read
sklearn Integration
Advanced sklearn patterns and best practices
Streaming Inference
Real-time BCI with chunk processing
MNE Integration
Complete EEG preprocessing pipeline
Examples
Working code examples