
NimbusProbit — Bayesian Multinomial Probit Regression

Julia: NimbusProbit | Python equivalent: NimbusSoftmax
Mathematical model: Bayesian multinomial probit regression
NimbusProbit is the Julia SDK’s flexible non-Gaussian static classifier. Compared with Gaussian models (NimbusLDA, NimbusQDA), it can represent more complex decision boundaries while still returning posterior probabilities and uncertainty metrics.
Availability
  • Julia SDK: ✅ NimbusProbit
  • Python SDK: ❌ Use NimbusSoftmax for Python’s non-Gaussian static classifier

Quick Start

using NimbusSDK

# One-time setup
NimbusSDK.install_core("nbci_live_your_key")

# Train (train_data is a BCIData; construction shown below)
model = train_model(
    NimbusProbit,
    train_data;
    iterations = 50
)

# Batch inference
results = predict_batch(model, test_data)
accuracy = sum(results.predictions .== test_labels) / length(test_labels)
println("Accuracy: $(round(accuracy * 100, digits=1))%")

When to Use NimbusProbit

  • You are using the Julia SDK and need a flexible static classifier.
  • NimbusLDA / NimbusQDA plateau on complex multi-class data.
  • Class boundaries are non-Gaussian or not well modeled by class-conditional Gaussians.
  • You need calibrated posterior probabilities from a probabilistic model.

When Not to Use It

  • If latency must be minimized: start with NimbusLDA, then NimbusQDA.
  • If class centers and Mahalanobis distance are important: use NimbusLDA or NimbusQDA.
  • If the task is non-stationary or drifting: use NimbusSTS in the Python SDK.
  • If you are using Python: use NimbusSoftmax.

Model Architecture

NimbusProbit is implemented with RxInfer and models a latent multinomial probit representation.
struct NimbusProbit <: BCIModel
    B_posterior                        # Learned regression coefficient posterior
    W_posterior                        # Learned precision matrix posterior
    metadata::ModelMetadata            # Model info
    N::Int                             # Trials per observation
    ξβ::Vector{Float64}                # Prior mean for B
    Wβ::Matrix{Float64}                # Prior precision for B
    W_df::Float64                      # Wishart degrees of freedom
    W_scale::Matrix{Float64}           # Wishart scale matrix
end
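
The prior dimensions follow B_dim = (n_classes - 1) * n_features, the same quantity used in the tuning example below. A quick sanity check on a trained model, assuming the field layout above:

n_classes, n_features = 4, 16
B_dim = (n_classes - 1) * n_features   # 48 coefficients for this setup

# ξβ is the prior mean vector for B; Wβ is its prior precision matrix.
@assert length(model.ξβ) == B_dim
@assert size(model.Wβ) == (B_dim, B_dim)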

RxInfer Learning Model

@model function NimbusProbit_learning_model(obs, N, X, n_classes, n_features,
                                            ξβ, Wβ, W_df, W_scale)
    # Priors: Gaussian (weighted-mean / precision parameterization) over the
    # regression coefficients B, and a Wishart prior over the precision W.
    B ~ MvNormalWeightedMeanPrecision(ξβ, Wβ)
    W ~ Wishart(W_df, W_scale)

    for i in eachindex(obs)
        # Latent linear predictor for trial i, tied to the observed counts
        # through a Polya-augmented multinomial likelihood.
        Ψ[i] ~ ContinuousTransition(X[i], B, W)
        obs[i] ~ MultinomialPolya(N, Ψ[i])
    end
end
The public API hides the factor-graph details behind train_model() and predict_batch().
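
For orientation, here is a minimal sketch of how a model like this can be run through RxInfer's public infer entry point. This is an illustration only, assuming hypothetical X_train / obs_train arrays and the priors above; the actual train_model() internals are not exposed and would typically also specify factorization constraints and an @initialization block.

using RxInfer

# Sketch only — not the actual NimbusSDK internals.
# X_train: per-trial design matrices; obs_train: per-trial count vectors (assumed names).
inference_result = infer(
    model = NimbusProbit_learning_model(
        N = 1, X = X_train, n_classes = 4, n_features = 16,
        ξβ = ξβ, Wβ = Wβ, W_df = W_df, W_scale = W_scale
    ),
    data = (obs = obs_train,),
    iterations = 50
)

B_posterior = inference_result.posteriors[:B]   # marginal posterior over coefficients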

Hyperparameters

Parameter      Default           Description
iterations     50                Variational inference iterations
showprogress   false             Display training progress
N              1                 Number of trials per observation
ξβ             auto-configured   Prior mean for regression coefficients
Wβ             auto-configured   Prior precision for regression coefficients
W_df           auto-configured   Wishart degrees of freedom
W_scale        auto-configured   Wishart scale matrix

Train a Custom Model

using NimbusSDK

metadata = BCIMetadata(
    sampling_rate = 250.0,
    paradigm = :motor_imagery,
    feature_type = :csp,
    n_features = 16,
    n_classes = 4,
    chunk_size = nothing
)

train_data = BCIData(train_features, metadata, train_labels)

model = train_model(
    NimbusProbit,
    train_data;
    iterations = 50,
    showprogress = true,
    name = "motor_imagery_probit",
    description = "4-class MI classifier with NimbusProbit"
)

Tune Hyperparameters

Use stronger priors for noisy or limited data, and weaker priors for clean datasets with many trials.
B_dim = (n_classes - 1) * n_features

model = train_model(
    NimbusProbit,
    train_data;
    iterations = 50,
    N = 1,
    Wβ = 1e-5 * diageye(B_dim),
    W_df = Float64(n_classes + 5),
    showprogress = true
)
Scenario                 Wβ scale       W_df offset   Notes
Excellent data quality   1e-6           2             Minimal regularization
Good data quality        1e-5           5             Balanced default
Moderate data quality    1e-5 to 1e-4   5-8           Slight regularization
Poor data quality        1e-4           10            Stronger regularization
Very limited trials      1e-4           15            Maximum regularization
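
The table translates into prior settings as follows. This is an illustrative sketch: probit_priors is a hypothetical name, not a NimbusSDK API, and the W_df offset is added to n_classes exactly as in the W_df = Float64(n_classes + 5) example above.

# Hypothetical helper — maps a row of the table above to prior settings.
# Assumes diageye is available (exported by RxInfer).
function probit_priors(n_classes, n_features; scale = 1e-5, df_offset = 5)
    B_dim = (n_classes - 1) * n_features
    return (Wβ = scale * diageye(B_dim), W_df = Float64(n_classes + df_offset))
end

# "Poor data quality" row: stronger regularization.
model = train_model(NimbusProbit, train_data; iterations = 50,
                    probit_priors(4, 16; scale = 1e-4, df_offset = 10)...)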

Batch Inference

test_data = BCIData(test_features, metadata, test_labels)
results = predict_batch(model, test_data; iterations = 10)

println("Predictions: ", results.predictions)
println("Mean confidence: ", mean(results.confidences))

accuracy = sum(results.predictions .== test_labels) / length(test_labels)
itr = calculate_ITR(accuracy, 4, 4.0)

println("Accuracy: $(round(accuracy * 100, digits=1))%")
println("ITR: $(round(itr, digits=1)) bits/minute")

Streaming Inference
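
Streaming requires metadata with chunk_size set (it was nothing for batch training above). A minimal sketch, reusing the BCIMetadata fields from the training example; the chunk_size value here is an assumed illustration:

metadata_with_chunk_size = BCIMetadata(
    sampling_rate = 250.0,
    paradigm = :motor_imagery,
    feature_type = :csp,
    n_features = 16,
    n_classes = 4,
    chunk_size = 125   # assumed example: 0.5 s of samples at 250 Hz
)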

session = init_streaming(model, metadata_with_chunk_size)

for chunk in eeg_feature_stream
    result = process_chunk(session, chunk; iterations = 10)
    println("Chunk: pred=$(result.prediction), conf=$(round(result.confidence, digits=3))")
end

final_result = finalize_trial(session; method = :weighted_vote)
println("Final: pred=$(final_result.prediction), conf=$(round(final_result.confidence, digits=3))")

Training Requirements

  • Use preprocessed features, not raw EEG.
  • Normalize features before training for cross-session stability (see the sketch after this list).
  • Use enough trials for a flexible multinomial model; start with NimbusLDA / NimbusQDA for small datasets.
  • Keep labels aligned with the Julia SDK’s class convention for the dataset you are using.
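
A minimal z-score normalization sketch (plain Julia, not a NimbusSDK API), assuming a features × trials matrix layout; training-set statistics are reused at test time so both sessions share one scale:

using Statistics

# Standardize each feature using training-set statistics only.
μ = mean(train_features, dims = 2)
σ = std(train_features, dims = 2)
train_features_norm = (train_features .- μ) ./ σ
test_features_norm  = (test_features  .- μ) ./ σ   # reuse training μ, σ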

Model Inspection

println("B posterior:")
println("  Type: ", typeof(model.B_posterior))
println("  Mean: ", mean(model.B_posterior))

println("\nW posterior:")
println("  Type: ", typeof(model.W_posterior))
println("  Mean: ", mean(model.W_posterior))

println("\nHyperparameters:")
println("  N: ", model.N)
println("  ξβ dimensions: ", size(model.ξβ))
println("  Wβ dimensions: ", size(model.Wβ))
println("  W_df: ", model.W_df)
println("  W_scale dimensions: ", size(model.W_scale))

Model Selection Context

Use NimbusProbit when you are in Julia and need a flexible non-Gaussian static classifier. If you need faster inference or explicit class-center diagnostics, start with NimbusLDA or NimbusQDA. If you are using Python, the analogous non-Gaussian static model is NimbusSoftmax. For the canonical side-by-side comparison, see Model Specification.

Next Read

NimbusSoftmax (Python)

Python’s non-Gaussian static classifier.

Julia SDK API Reference

Full Julia model and inference API.

Bayesian LDA

Faster static model with shared covariance.

Bayesian QDA

Static model with class-specific covariance.

References

Implementation: RxInfer (variational inference via reactive message passing)

Theory:
  • Bayesian multinomial probit regression
  • Variational inference with reactive message passing
  • Continuous-transition latent variable models for multinomial classification