NimbusProbit — Bayesian Multinomial Probit Regression
Julia: NimbusProbit | Python equivalent: NimbusSoftmax | Mathematical model: Bayesian multinomial probit regression
NimbusProbit is the Julia SDK’s flexible non-Gaussian static classifier. Compared with Gaussian models (NimbusLDA, NimbusQDA), it can represent more complex decision boundaries while still returning posterior probabilities and uncertainty metrics.
Availability
- Julia SDK: ✅ NimbusProbit
- Python SDK: ❌ Use NimbusSoftmax for Python's non-Gaussian static classifier
Quick Start
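A minimal end-to-end sketch, assuming the `train_model()`/`predict_batch()` entry points named later on this page; the package name, data-loading helpers, and keyword names are illustrative, not authoritative — consult the Julia SDK API Reference for exact signatures.

```julia
using NimbusSDK   # assumed package name; see the Julia SDK API Reference

# X: preprocessed feature matrix (trials × features), y: integer class labels.
# load_features()/load_test_features() are placeholders for your own data loading.
X_train, y_train = load_features()
X_test = load_test_features()

# Train a Bayesian multinomial probit classifier
# (keyword defaults per the hyperparameter table below)
model = train_model(NimbusProbit, X_train, y_train;
                    iterations = 50, showprogress = false)

# Batch prediction: assumed to return posterior class probabilities per trial
probs = predict_batch(model, X_test)
ŷ = map(argmax, eachrow(probs))
```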
When to Use NimbusProbit
- You are using the Julia SDK and need a flexible static classifier.
- NimbusLDA/NimbusQDA plateau on complex multi-class data.
- Class boundaries are non-Gaussian or not well modeled by class-conditional Gaussians.
- You need calibrated posterior probabilities from a probabilistic model.
When Not to Use It
- If latency must be minimized: start with NimbusLDA, then NimbusQDA.
- If class centers and Mahalanobis distance are important: use NimbusLDA or NimbusQDA.
- If the task is non-stationary or drifting: use NimbusSTS in the Python SDK.
- If you are using Python: use NimbusSoftmax.
Model Architecture
NimbusProbit is implemented with RxInfer and models a latent multinomial probit representation.
RxInfer Learning Model
Training and prediction are exposed through train_model() and predict_batch().
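At a high level, a Bayesian multinomial probit model of this kind can be written as follows. This is a sketch whose symbols match the prior hyperparameters (ξβ, Wβ, W_df, W_scale) in the table below; the exact RxInfer factorization used by NimbusProbit may differ.

```latex
% Priors on regression coefficients and on the latent-noise precision
\beta \sim \mathcal{N}\!\left(\xi_\beta,\; W_\beta^{-1}\right), \qquad
W \sim \mathrm{Wishart}\!\left(W_{\mathrm{df}},\; W_{\mathrm{scale}}\right)

% Latent per-class utilities with correlated Gaussian noise,
% where B is the coefficient matrix formed from \beta
z \mid x, \beta, W \;\sim\; \mathcal{N}\!\left(B^{\top} x,\; W^{-1}\right)

% The observed label is the class with the largest latent utility
y = \arg\max_{k}\; z_k
```

Variational inference (the `iterations` hyperparameter below) approximates the posterior over β and W rather than computing it exactly.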
Hyperparameters
| Parameter | Default | Description |
|---|---|---|
| iterations | 50 | Variational inference iterations |
| showprogress | false | Display training progress |
| N | 1 | Number of trials per observation |
| ξβ | auto-configured | Prior mean for regression coefficients |
| Wβ | auto-configured | Prior precision for regression coefficients |
| W_df | auto-configured | Wishart degrees of freedom |
| W_scale | auto-configured | Wishart scale matrix |
Train a Custom Model
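As a sketch, a custom model might override the auto-configured priors. Keyword names are taken from the hyperparameter table above; the coefficient dimensions and exact types are assumptions — check the Julia SDK API Reference.

```julia
using NimbusSDK, LinearAlgebra   # NimbusSDK is the assumed package name

d = size(X_train, 2)   # feature dimension
K = 4                  # number of classes (example value)
p = d * (K - 1)        # assumed coefficient-vector length; verify against the API docs

model = train_model(NimbusProbit, X_train, y_train;
                    iterations   = 100,            # more VI iterations for harder problems
                    showprogress = true,
                    ξβ      = zeros(p),            # prior mean for regression coefficients
                    Wβ      = 1e-5 * I(p),         # prior precision (see tuning table below)
                    W_df    = K + 5,               # Wishart degrees of freedom
                    W_scale = Matrix{Float64}(I, K - 1, K - 1))  # Wishart scale matrix
```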
Tune Hyperparameters
Use stronger priors for noisy or limited data, and weaker priors for clean datasets with many trials.

| Scenario | Wβ scale | W_df offset | Notes |
|---|---|---|---|
| Excellent data quality | 1e-6 | 2 | Minimal regularization |
| Good data quality | 1e-5 | 5 | Balanced default |
| Moderate data quality | 1e-5 to 1e-4 | 5-8 | Slight regularization |
| Poor data quality | 1e-4 | 10 | Stronger regularization |
| Very limited trials | 1e-4 | 15 | Maximum regularization |
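Applied to the table, a "poor data quality" configuration might look like the sketch below. It assumes the Wβ scale multiplies an identity prior precision and the W_df offset is added to a base degrees-of-freedom value; `n_coef` and `base_df` are illustrative names for quantities the SDK configures automatically.

```julia
# Poor data quality row: Wβ scale = 1e-4, W_df offset = 10
model = train_model(NimbusProbit, X_train, y_train;
                    Wβ   = 1e-4 * I(n_coef),   # stronger shrinkage toward the prior mean
                    W_df = base_df + 10)       # stiffer Wishart prior on the noise precision
```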
Batch Inference
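A hedged sketch of batch inference with `predict_batch()`; the (trials × classes) return shape is an assumption.

```julia
probs = predict_batch(model, X_test)   # assumed: (trials × classes) posterior probabilities

# Hard decisions plus a simple per-trial confidence readout
ŷ    = map(argmax,  eachrow(probs))
conf = map(maximum, eachrow(probs))    # low values flag uncertain trials
```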
Streaming Inference
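This page does not name a dedicated streaming entry point; until you confirm one in the Julia SDK API Reference, single-trial calls to `predict_batch()` give the same effect, as sketched here.

```julia
for trial in eachrow(X_stream)               # one preprocessed feature vector at a time
    p = predict_batch(model, reshape(trial, 1, :))
    k = argmax(vec(p))
    # act on class k, e.g. forward the decision to the BCI control loop
end
```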
Training Requirements
- Use preprocessed features, not raw EEG.
- Normalize features before training for cross-session stability.
- Use enough trials for a flexible multinomial model; start with NimbusLDA/NimbusQDA for small datasets.
- Keep labels aligned with the Julia SDK's class convention for the dataset you are using.
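Feature normalization (second bullet above) can be done with per-feature z-scoring; the key point for cross-session stability is to fit the statistics on the training set and reuse them at test time.

```julia
using Statistics

μ = mean(X_train, dims = 1)
σ = std(X_train, dims = 1)

X_train_n = (X_train .- μ) ./ σ   # fit statistics on training data only
X_test_n  = (X_test  .- μ) ./ σ   # apply the same transform at test time
```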
Model Inspection
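A hypothetical sketch of posterior inspection — the `posterior` accessor and field names here are invented for illustration; check the Julia SDK API Reference for the real ones.

```julia
post = posterior(model)   # hypothetical accessor for the variational posteriors

mean_β = mean(post.β)     # posterior mean of the regression coefficients
cov_β  = cov(post.β)      # posterior covariance (uncertainty about β)
E_W    = mean(post.W)     # expected noise precision from the Wishart posterior
```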
Model Selection Context
Use NimbusProbit when you are in Julia and need a flexible non-Gaussian static classifier. If you need faster inference or explicit class-center diagnostics, start with NimbusLDA or NimbusQDA. If you are using Python, the analogous non-Gaussian static model is NimbusSoftmax.
For the canonical side-by-side comparison, see Model Specification.
Next Read
NimbusSoftmax (Python)
Python’s non-Gaussian static classifier.
Julia SDK API Reference
Full Julia model and inference API.
Bayesian LDA
Faster static model with shared covariance.
Bayesian QDA
Static model with class-specific covariance.
References
Implementation:
- RxInfer.jl: https://rxinfer.com/
- Julia source code: src/models/nimbus_probit/ in NimbusSDKCore
- Bayesian multinomial probit regression
- Variational inference with reactive message passing
- Continuous-transition latent variable models for multinomial classification