Documentation Index
Fetch the complete documentation index at: https://docs.nimbusbci.com/llms.txt
Use this file to discover all available pages before exploring further.
Probabilistic AI & Uncertainty
Brain-computer interfaces operate in an uncertain environment. Neural signals are noisy, brain states change, and calibration data is often limited. Nimbus uses Bayesian inference to expose that uncertainty instead of hiding it behind a single overconfident prediction.

Sources Of Uncertainty
- Signal-to-noise ratio: Neural signals often have poor SNR, especially in non-invasive recordings
- Biological artifacts: Eye blinks, muscle activity, cardiac signals, and movement can contaminate recordings
- Cognitive state: Attention, fatigue, learning, and task engagement change signal patterns
- Limited calibration: Small training sets increase model uncertainty for new users and sessions
- Temporal drift: Brain and electrode conditions change during long sessions
- Environmental factors: electrical line noise and other ambient interference can contaminate recordings
Bayesian Inference For BCI
Both SDKs use Bayesian inference to model uncertainty explicitly, with production-ready models: Bayesian LDA, Bayesian QDA, Bayesian Softmax, and Bayesian STS (Python SDK). In both the Python and Julia SDKs, these models:
- Return the full posterior probability distribution, not just a single prediction
- Provide confidence scores for each prediction
- Can identify uncertain trials and request clarification
- Gracefully handle poor signal quality
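The properties above can be sketched in plain NumPy. This is an illustration of the underlying idea, not the Nimbus API: Bayes' rule over Gaussian class-conditional likelihoods yields a full posterior, from which a confidence score and a clarification flag follow. The class names, threshold, and feature value are all illustrative.

```python
import numpy as np

def gaussian_posterior(x, means, variances, priors):
    """Posterior over classes for a 1-D feature under Gaussian
    class-conditional likelihoods (Bayes' rule)."""
    # Unnormalized log-posteriors: log prior + Gaussian log likelihood
    log_post = (np.log(priors)
                - 0.5 * np.log(2 * np.pi * variances)
                - 0.5 * (x - means) ** 2 / variances)
    log_post -= log_post.max()          # numerical stability
    post = np.exp(log_post)
    return post / post.sum()            # full posterior, sums to 1

# Two hypothetical brain states ("left", "right") with equal priors
means = np.array([-1.0, 1.0])
variances = np.array([1.0, 1.0])
priors = np.array([0.5, 0.5])

posterior = gaussian_posterior(x=0.9, means=means,
                               variances=variances, priors=priors)
confidence = posterior.max()            # confidence of the top class

# A trial near the decision boundary can be flagged for
# clarification instead of being acted on.
needs_clarification = confidence < 0.7
```

Because the whole posterior is returned, downstream code can choose its own policy: act on the top class, ask the user to confirm, or discard the trial.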
What You Can Do With Uncertainty
Uncertainty Quantification
Know when the system is confident vs uncertain about predictions
Adaptive Responses
Adjust behavior based on signal quality and confidence levels
Robust Performance
Graceful degradation when conditions change or signal quality drops
Explainable Decisions
Understand why the system made specific predictions
Types Of Uncertainty
Aleatoric uncertainty is irreducible data uncertainty caused by noisy recordings, biological artifacts, or ambiguous signals. It is handled with quality checks, rejection thresholds, and better preprocessing.

Epistemic uncertainty is reducible model uncertainty caused by limited calibration data or unfamiliar sessions. It is reduced by collecting more labels, adapting online, or using active learning.

Confidence Measures
Nimbus workflows commonly use:
- Maximum posterior probability: confidence of the most likely class.
- Posterior entropy: how spread out the class distribution is.
- Trial quality reports: signal and preprocessing diagnostics.
- Calibration checks: whether more labels are likely to improve the model.
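The first two measures are straightforward to compute from a posterior vector; a minimal sketch (plain NumPy, not a Nimbus API):

```python
import numpy as np

def max_posterior(p):
    """Confidence of the most likely class."""
    return float(np.max(p))

def posterior_entropy(p):
    """Shannon entropy in bits: 0 for a certain prediction,
    log2(K) for a uniform (maximally uncertain) one."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                      # 0 * log(0) is treated as 0
    return float(-(nz * np.log2(nz)).sum())

sharp = [0.9, 0.05, 0.05]   # confident trial
flat = [0.4, 0.3, 0.3]      # uncertain trial
```

Entropy complements the maximum posterior: two trials can share the same top-class probability while differing in how the remaining mass is spread.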
Thresholds By Application
- Safety-critical control: require very high confidence and confirmation.
- Communication aids: balance speed with rejection and correction flows.
- Gaming and feedback: lower thresholds can improve responsiveness.
- Research: log posterior distributions and confidence trends for analysis.
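One way to encode such per-application policies is a small decision function. The threshold values and application names below are illustrative assumptions, not Nimbus defaults:

```python
# Hypothetical per-application confidence thresholds (illustrative only)
THRESHOLDS = {
    "safety_critical": 0.99,
    "communication": 0.85,
    "gaming": 0.60,
}

def decide(application, confidence):
    """Accept, reject, or require confirmation based on confidence."""
    threshold = THRESHOLDS[application]
    if confidence < threshold:
        return "reject"                 # below threshold: discard trial
    if application == "safety_critical":
        return "confirm"                # high confidence still needs confirmation
    return "accept"
```

The key point from the list above is that the same posterior supports different policies: a gaming interface accepts at 0.60 for responsiveness, while safety-critical control rejects anything below 0.99 and still asks for confirmation.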
For medical and assistive applications, uncertainty logging is part of system reliability: store confidence scores, quality flags, model versions, and rejected trials alongside predictions.
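A sketch of such an audit record, with illustrative field names (the schema is an assumption, not a Nimbus format):

```python
import json
import time

def log_prediction(prediction, posterior, quality_flags,
                   model_version, accepted):
    """Serialize one audit record per trial: the prediction plus the
    uncertainty context needed to review it later."""
    record = {
        "timestamp": time.time(),
        "prediction": prediction,
        "posterior": list(posterior),        # full distribution, not just argmax
        "confidence": max(posterior),
        "quality_flags": quality_flags,      # e.g. artifact or SNR warnings
        "model_version": model_version,
        "accepted": accepted,                # rejected trials are logged too
    }
    return json.dumps(record)
```

Storing the full posterior and the rejected trials, not only the accepted predictions, is what makes later reliability analysis possible.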
Model Families
Nimbus exposes probabilistic model families through SDK-native APIs:
- NimbusLDA: fast Bayesian pooled Gaussian classifier.
- NimbusQDA: class-specific covariance model for more complex distributions.
- NimbusSoftmax: Python Bayesian softmax model for non-Gaussian boundaries.
- NimbusProbit: Julia Bayesian multinomial probit model.
- NimbusSTS: Python state-space model for non-stationary data.
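The LDA/QDA distinction above comes down to the covariance assumption. A minimal NumPy sketch (not the Nimbus implementation) of the two score functions: with a pooled covariance the log scores are linear in the feature vector, while class-specific covariances make them quadratic.

```python
import numpy as np

def lda_log_scores(x, means, pooled_cov, priors):
    """Pooled-covariance (LDA-style) log class scores, linear in x.
    The shared quadratic term -0.5 * x' S^-1 x is dropped because it
    is identical for every class."""
    inv = np.linalg.inv(pooled_cov)
    return np.array([
        np.log(p) + m @ inv @ x - 0.5 * m @ inv @ m
        for m, p in zip(means, priors)
    ])

def qda_log_scores(x, means, covs, priors):
    """Class-specific covariances (QDA-style): scores quadratic in x,
    so decision boundaries can curve."""
    scores = []
    for m, cov, p in zip(means, covs, priors):
        d = x - m
        _, logdet = np.linalg.slogdet(cov)
        scores.append(np.log(p) - 0.5 * logdet
                      - 0.5 * d @ np.linalg.inv(cov) @ d)
    return np.array(scores)
```

Exponentiating and normalizing either score vector gives a class posterior; QDA pays for its flexibility with more parameters, which matters when calibration data is limited.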
How It Is Implemented
The Julia SDK uses RxInfer.jl for efficient Bayesian inference through reactive message passing:
- Factor graphs represent probabilistic relationships between neural features and brain states
- Variational message passing propagates information efficiently through the graph
- Reactive updates handle streaming data in real time with minimal latency
- Automatic inference: RxInfer generates efficient inference algorithms automatically
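To convey the streaming-update idea in a language-neutral way (this is not RxInfer code, just the simplest instance of sequential Bayesian filtering, written in Python): each new sample updates a Gaussian belief over a drifting latent state in constant time, with no reprocessing of earlier data.

```python
def streaming_update(prior_mean, prior_var, y,
                     obs_var=1.0, drift_var=0.01):
    """One reactive update of a Gaussian belief over a drifting latent
    state (1-D random-walk model, Kalman-filter algebra). Illustrative
    noise parameters; O(1) per incoming sample."""
    # Predict: the latent state may have drifted since the last sample
    pred_var = prior_var + drift_var
    # Update: fuse the prediction with the new observation
    gain = pred_var / (pred_var + obs_var)
    mean = prior_mean + gain * (y - prior_mean)
    var = (1.0 - gain) * pred_var
    return mean, var

# Belief tracks a stream sample by sample
mean, var = 0.0, 1.0
for y in [0.9, 1.1, 1.0, 0.95]:
    mean, var = streaming_update(mean, var, y)
```

The drift term is what handles non-stationarity: because the predicted variance never collapses to zero, the belief keeps adapting when brain or electrode conditions change mid-session.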
Next Read
Model Selection
Compare available probabilistic models.
Real-Time Setup
Configure acquisition and low-latency inference.
Error Handling
Build confidence-aware safeguards.
Examples
See uncertainty-aware BCI recipes.