Python: NimbusQDA | Julia: NimbusQDA
Mathematical Model: Heteroscedastic Gaussian Classifier (HGC)
NimbusQDA is a Bayesian classification model with class-specific covariance matrices, making it more flexible than NimbusLDA for modeling complex class distributions.
Available in Both SDKs:
Python SDK: NimbusQDA class (sklearn-compatible)
Julia SDK: NimbusQDA (RxInfer.jl-based)
Both implementations provide class-specific covariances for flexible modeling.
Bayesian QDA extends beyond traditional Gaussian classifiers by allowing each class to have its own covariance structure:
✅ Class-specific covariances (unlike Bayesian LDA’s shared covariance)
✅ More flexible modeling of heterogeneous distributions
✅ Posterior probability distributions with uncertainty quantification
✅ Fast inference (~15-25ms per trial)
✅ Training and calibration support
✅ Batch and streaming inference modes
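The discriminant behind the HGC can be sketched in plain NumPy, independently of the SDK: each class k is scored by log N(x; μ_k, Σ_k) + log π_k, with its own covariance Σ_k (the values below are made up for illustration):

```python
import numpy as np

def qda_log_posterior(x, means, covs, priors):
    """Unnormalized log-posterior per class: log N(x; mu_k, Sigma_k) + log pi_k."""
    scores = []
    for mu, cov, pi in zip(means, covs, priors):
        diff = x - mu
        _, logdet = np.linalg.slogdet(cov)
        ll = -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet + len(x) * np.log(2 * np.pi))
        scores.append(ll + np.log(pi))
    return np.array(scores)

# Two classes with different covariance structure -- the case QDA
# handles and a shared-covariance (LDA) model cannot
means = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
covs = [np.eye(2), np.array([[3.0, 0.0], [0.0, 0.3]])]
priors = [0.5, 0.5]

scores = qda_log_posterior(np.array([2.0, 2.0]), means, covs, priors)
print(scores.argmax())  # a point at class 1's mean -> class 1
```

Because each class carries its own Σ_k, the decision boundary is quadratic rather than linear.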
```python
from nimbus_bci import NimbusQDA
import numpy as np

# Create and fit classifier
clf = NimbusQDA(mu_scale=3.0)
clf.fit(X_train, y_train)

# Predict with uncertainty
predictions = clf.predict(X_test)
probabilities = clf.predict_proba(X_test)

# Better for P300 and overlapping distributions
```
```julia
struct NimbusQDA <: BCIModel
    mean_posteriors::Vector         # MvNormal posteriors for class means
    precision_posteriors::Vector    # Wishart posteriors for class precisions
    priors::Vector{Float64}         # Empirical class priors
    metadata::ModelMetadata         # Model info
    dof_offset::Int                 # Degrees of freedom offset (training)
    mean_prior_precision::Float64   # Mean prior precision (training)
end
```
```julia
@model function RxGMM_learning_model(y, labels, n_features, n_classes)
    # Priors on class means
    for k in 1:n_classes
        m[k] ~ MvNormal(0, 10*I)
    end
    # Priors on class-specific precisions
    for k in 1:n_classes
        W[k] ~ Wishart(n_features + 5, I)  # Each class has its own W
    end
    # Likelihood
    for i in eachindex(y)
        k = labels[i]
        y[i] ~ MvNormal(m[k], inv(W[k]))  # Class-specific precision
    end
end
```
Pro Tip: Bayesian QDA’s class-specific covariances can overfit to noise when training data is scarce or low-quality. When in doubt, start with the defaults and increase regularization (dof_offset=3, mean_prior_precision=0.03) if you see overfitting.
Important: Always set predictive_dof_offset to match dof_offset for consistency between training and inference phases.
Minimum: 40 trials per class (80 total for 2-class)
Recommended: 80+ trials per class
For calibration: 10-20 trials per class
Bayesian QDA requires at least 2 observations per class to estimate class-specific statistics. Training will fail if any class has fewer than 2 observations.
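A quick pre-flight check along these lines can catch undersized classes before training fails. The helper below is illustrative, not part of the SDK:

```python
import numpy as np

def check_min_trials(y, min_per_class=2):
    """Illustrative pre-flight check: QDA needs >= 2 trials per class
    to estimate class-specific statistics (this helper is not an SDK API)."""
    classes, counts = np.unique(y, return_counts=True)
    too_few = classes[counts < min_per_class]
    if too_few.size:
        raise ValueError(f"Classes {too_few.tolist()} have fewer than {min_per_class} trials")
    return dict(zip(classes.tolist(), counts.tolist()))

print(check_min_trials(np.array([0, 0, 1, 1, 1])))  # {0: 2, 1: 3}
```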
Critical for cross-session BCI performance!
Normalize your features before training for a 15-30% accuracy improvement across sessions.
Python
Julia
```python
from sklearn.preprocessing import StandardScaler
import pickle

# Estimate normalization from training data
scaler = StandardScaler()
X_train_norm = scaler.fit_transform(X_train)

# Train with normalized features
clf = NimbusQDA()
clf.fit(X_train_norm, y_train)

# Save model and scaler together
with open("model_with_scaler.pkl", "wb") as f:
    pickle.dump({'model': clf, 'scaler': scaler}, f)

# Later: Apply same normalization to test data
X_test_norm = scaler.transform(X_test)
predictions = clf.predict(X_test_norm)
```
```julia
# Estimate normalization from training data
norm_params = estimate_normalization_params(train_features; method=:zscore)
train_norm = apply_normalization(train_features, norm_params)

# Train with normalized features
train_data = BCIData(train_norm, metadata, labels)
model = train_model(NimbusQDA, train_data)

# Save params with model
@save "model.jld2" model norm_params

# Later: Apply same params to test data
test_norm = apply_normalization(test_features, norm_params)
```
Bayesian QDA typically provides 2-5% higher accuracy than Bayesian LDA when class covariances differ significantly, at the cost of ~5-10ms additional latency.
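The accuracy gap is easy to reproduce on synthetic data. The sketch below (plain NumPy, independent of the SDK) fits a shared covariance versus class-specific covariances on two classes that share a mean but differ strongly in spread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data whose covariances differ strongly
X0 = rng.multivariate_normal([0, 0], [[1, 0], [0, 1]], 500)
X1 = rng.multivariate_normal([0, 0], [[8, 0], [0, 0.1]], 500)
X = np.vstack([X0, X1])
y = np.array([0] * 500 + [1] * 500)

def gaussian_predict(X, means, covs):
    """Pick the class with the higher Gaussian log-density (equal priors)."""
    scores = []
    for mu, cov in zip(means, covs):
        d = X - mu
        _, logdet = np.linalg.slogdet(cov)
        maha = np.einsum('ij,ij->i', d @ np.linalg.inv(cov), d)
        scores.append(-0.5 * (maha + logdet))
    return np.argmax(scores, axis=0)

means = [X0.mean(0), X1.mean(0)]
shared = [np.cov(X.T)] * 2                  # LDA-style: one pooled covariance
per_class = [np.cov(X0.T), np.cov(X1.T)]    # QDA-style: class-specific

acc_lda = (gaussian_predict(X, means, shared) == y).mean()
acc_qda = (gaussian_predict(X, means, per_class) == y).mean()
print(f"shared-cov accuracy: {acc_lda:.2f}, class-specific accuracy: {acc_qda:.2f}")
```

With identical class means, a shared covariance carries almost no discriminative information, while the class-specific model separates the classes by their spread; real BCI data shows a smaller but analogous effect.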
```python
import numpy as np

# Class means
print("Class means:")
for k, class_label in enumerate(clf.classes_):
    print(f"  Class {class_label}: {clf.model_['means'][k]}")

# Class-specific covariance matrices
print("\nClass-specific covariance matrices:")
for k, class_label in enumerate(clf.classes_):
    print(f"  Class {class_label} (first 3x3):")
    print(clf.model_['covariances'][k][:3, :3])

# Compare covariances across classes
print("\nCovariance structure comparison:")
for k, class_label in enumerate(clf.classes_):
    cov_k = clf.model_['covariances'][k]
    print(f"  Class {class_label} variance (diagonal): {np.diag(cov_k)}")

# Class priors
print("\nClass priors:")
for k, class_label in enumerate(clf.classes_):
    print(f"  Class {class_label}: {clf.model_['priors'][k]:.3f}")
```
```julia
# Extract point estimates from posterior distributions
using Distributions
using LinearAlgebra  # for inv and diag

# Class means (extract from posterior distributions)
println("Class means:")
for (k, mean_posterior) in enumerate(model.mean_posteriors)
    mean_point = mean(mean_posterior)  # Extract point estimate
    println("  Class $k: ", mean_point)
end

# Class-specific precisions (extract from posterior distributions)
println("\nClass-specific precision matrices:")
for (k, precision_posterior) in enumerate(model.precision_posteriors)
    prec_point = mean(precision_posterior)  # Extract point estimate
    println("  Class $k (first 3x3):")
    println(prec_point[1:3, 1:3])
end

# Compare covariances across classes
println("\nCovariance structure comparison:")
for k in 1:length(model.precision_posteriors)
    prec_point = mean(model.precision_posteriors[k])
    cov_k = inv(prec_point)  # Convert precision to covariance
    println("  Class $k variance (diagonal): ", diag(cov_k))
end

# Class priors
println("\nClass priors:")
for (k, prior) in enumerate(model.priors)
    println("  Class $k: ", prior)
end
```
Accessing model parameters: The SDK stores full posterior distributions (not just point estimates) for proper Bayesian inference. To get point estimates, use mean(posterior) to extract the mean of the posterior distribution. For precision matrices, use mean(precision_posterior) to get the expected precision matrix.
```julia
using Plots
using Distributions
using LinearAlgebra  # for inv

# Compare class covariances
for k in 1:length(model.precision_posteriors)
    prec_point = mean(model.precision_posteriors[k])  # Extract point estimate
    cov_k = inv(prec_point)                           # Convert precision to covariance
    display(heatmap(cov_k, title="Class $k Covariance", colorbar=true))  # display() so each plot shows inside the loop
end
```
✅ Flexible Modeling: Each class has its own covariance
✅ Better for Complex Data: Handles heterogeneous distributions
✅ Higher Accuracy: 2-5% improvement when classes differ significantly
✅ Uncertainty Quantification: Full Bayesian posteriors
✅ Production-Ready: Battle-tested in P300 applications
❌ More Parameters: Requires more training data than NimbusLDA
❌ Slower Inference: ~15-25ms vs ~10-15ms for NimbusLDA
❌ Higher Memory: Stores n_classes precision matrices
❌ More Complex: Longer training time
```
Is your data non-stationary (drift over time)?
├─ Yes → Use Bayesian STS (adaptive, handles temporal dynamics)
└─ No → Is speed critical (<15ms)?
    ├─ Yes → Use Bayesian LDA
    └─ No → Do classes have different spreads / covariances?
        ├─ Yes → Use Bayesian QDA (more accurate)
        └─ No → Is the decision boundary complex / non-Gaussian?
            ├─ Yes → Use Bayesian Softmax (most flexible)
            └─ No → Use Bayesian LDA (simplest)
```
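For reference, the flowchart above can be encoded as a small helper (illustrative only, not an SDK API):

```python
def pick_model(non_stationary: bool, speed_critical: bool,
               different_covariances: bool, complex_boundary: bool) -> str:
    """Encodes the model-selection flowchart; question order matches the tree."""
    if non_stationary:
        return "Bayesian STS"      # adaptive, handles temporal dynamics
    if speed_critical:
        return "Bayesian LDA"      # fastest inference
    if different_covariances:
        return "Bayesian QDA"      # more accurate for heterogeneous spreads
    if complex_boundary:
        return "Bayesian Softmax"  # most flexible decision boundary
    return "Bayesian LDA"          # simplest default

print(pick_model(False, False, True, False))  # -> Bayesian QDA
```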