
Documentation Index

Fetch the complete documentation index at: https://docs.nimbusbci.com/llms.txt

Use this file to discover all available pages before exploring further.

Probabilistic AI & Uncertainty

Brain-computer interfaces operate in an uncertain environment. Neural signals are noisy, brain states change, and calibration data is often limited. Nimbus uses Bayesian inference to expose that uncertainty instead of hiding it behind a single overconfident prediction.

Sources Of Uncertainty

  • Signal-to-noise ratio: Neural signals often have poor SNR, especially in non-invasive recordings
  • Biological artifacts: Eye blinks, muscle activity, cardiac signals, and movement can contaminate recordings
  • Cognitive state: Attention, fatigue, learning, and task engagement change signal patterns
  • Limited calibration: Small training sets increase model uncertainty for new users and sessions
  • Temporal drift: Brain and electrode conditions change during long sessions
  • Environmental factors: line noise, electrode impedance changes, and ambient interference degrade recordings
Traditional deterministic BCI systems assume clean, consistent signals. When these assumptions break down, the systems fail catastrophically with no indication of confidence or reliability.
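To make the contrast concrete, here is a minimal NumPy sketch of what a probabilistic system can do that an argmax-only system cannot. The posterior vectors are made up for illustration and are not SDK output:

```python
import numpy as np

# Two hypothetical class posteriors for a 4-class task.
clear_trial = np.array([0.92, 0.04, 0.02, 0.02])      # clean signal
ambiguous_trial = np.array([0.31, 0.29, 0.22, 0.18])  # noisy signal

for posterior in (clear_trial, ambiguous_trial):
    label = int(np.argmax(posterior))    # all a deterministic system reports
    confidence = float(posterior.max())  # what a probabilistic system adds
    if confidence < 0.5:
        print(f"class {label}: low confidence ({confidence:.2f}), reject trial")
    else:
        print(f"class {label}: confident ({confidence:.2f}), act on it")
```

A deterministic pipeline would emit "class 0" for both trials; the posterior reveals that the second is barely better than a guess.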

Bayesian Inference For BCI

Both SDKs use Bayesian inference to model uncertainty explicitly, with production-ready models: Bayesian LDA and Bayesian QDA in both SDKs, Bayesian Softmax and Bayesian STS in the Python SDK, and a Bayesian multinomial probit in the Julia SDK.
from nimbus_bci import NimbusLDA, compute_entropy

# Train classifier
clf = NimbusLDA()
clf.fit(X_train, y_train)

# Run inference
predictions = clf.predict(X_test)
probabilities = clf.predict_proba(X_test)  # Full posterior over classes

# Access predictions with confidence
for i, pred in enumerate(predictions):
    confidence = probabilities[i].max()
    posterior = probabilities[i]  # Full distribution over classes
    entropy = compute_entropy(probabilities[i:i+1])[0]
    
    print(f"Prediction: {pred}")
    print(f"Confidence: {confidence:.3f}")
    print(f"Posterior distribution: {posterior}")
    print(f"Entropy: {entropy:.2f} bits")

    # Make confidence-based decisions
    if confidence < 0.7:
        print("Low confidence - consider rejecting this trial")
Key advantages:
  • Returns full posterior probability distribution, not just a single prediction
  • Confidence scores for each prediction
  • Can identify uncertain trials and request clarification
  • Gracefully handles poor signal quality
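The `compute_entropy` helper above is SDK-specific, but the same quantities can be computed with plain NumPy. A sketch of entropy-based trial rejection, with illustrative posteriors and thresholds:

```python
import numpy as np

def posterior_entropy(probs, eps=1e-12):
    """Shannon entropy in bits of each row of a (n_trials, n_classes) array."""
    p = np.clip(probs, eps, 1.0)
    return -(p * np.log2(p)).sum(axis=1)

# Hypothetical posteriors for three trials of a 3-class task.
probs = np.array([
    [0.90, 0.05, 0.05],   # confident
    [0.50, 0.30, 0.20],   # borderline
    [0.34, 0.33, 0.33],   # near-uniform: maximally uncertain
])

entropy = posterior_entropy(probs)
confidence = probs.max(axis=1)

# Reject trials that are low-confidence or high-entropy.
reject = (confidence < 0.7) | (entropy > 1.0)
print(reject)  # [False  True  True]
```

Low maximum probability and high entropy usually agree, but entropy additionally distinguishes "two strong candidates" from "no idea at all", which matters when deciding whether to ask the user for confirmation.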

What You Can Do With Uncertainty

Uncertainty Quantification

Know when the system is confident vs uncertain about predictions

Adaptive Responses

Adjust behavior based on signal quality and confidence levels

Robust Performance

Graceful degradation when conditions change or signal quality drops

Explainable Decisions

Understand why the system made specific predictions

Types Of Uncertainty

Aleatoric uncertainty is irreducible data uncertainty caused by noisy recordings, biological artifacts, or ambiguous signals. It is handled with quality checks, rejection thresholds, and better preprocessing. Epistemic uncertainty is reducible model uncertainty caused by limited calibration data or unfamiliar sessions. It is reduced by collecting more labels, adapting online, or using active learning.
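When a model yields multiple posterior samples (for example, classifiers drawn from the model posterior), the two kinds of uncertainty can be separated with the standard entropy decomposition: total predictive entropy = expected entropy (aleatoric) + mutual information (epistemic). A plain-NumPy sketch with made-up sample probabilities, not a Nimbus API:

```python
import numpy as np

def entropy_bits(p, axis=-1, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return -(p * np.log2(p)).sum(axis=axis)

# Hypothetical class probabilities from 4 posterior samples for one trial.
# The samples disagree (two favor class 0, two favor class 1), which
# signals epistemic uncertainty: more calibration data would help.
samples = np.array([
    [0.70, 0.20, 0.10],
    [0.10, 0.75, 0.15],
    [0.65, 0.25, 0.10],
    [0.15, 0.70, 0.15],
])

total = entropy_bits(samples.mean(axis=0))        # predictive entropy
aleatoric = entropy_bits(samples, axis=1).mean()  # expected entropy
epistemic = total - aleatoric                     # mutual information
```

High epistemic uncertainty suggests collecting more labels or adapting online; high aleatoric uncertainty suggests rejecting the trial or improving signal quality.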

Confidence Measures

Nimbus workflows commonly use:
  • Maximum posterior probability: confidence of the most likely class.
  • Posterior entropy: how spread out the class distribution is.
  • Trial quality reports: signal and preprocessing diagnostics.
  • Calibration checks: whether more labels are likely to improve the model.
Example confidence-based decision:
using NimbusSDK

confidence = results.confidences[end]
posterior = results.posteriors[:, end]

if confidence > 0.9
    execute_command(results.predictions[end])
elseif confidence > 0.7
    show_confirmation_dialog(results.predictions[end], posterior)
else
    request_better_signal()
end

Thresholds By Application

  • Safety-critical control: require very high confidence and confirmation.
  • Communication aids: balance speed with rejection and correction flows.
  • Gaming and feedback: lower thresholds can improve responsiveness.
  • Research: log posterior distributions and confidence trends for analysis.
For medical and assistive applications, uncertainty logging is part of system reliability: store confidence scores, quality flags, model versions, and rejected trials alongside predictions.
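One way to implement such a log, sketched with the Python standard library. All field names, the threshold, and the model-version string are illustrative, not a Nimbus API:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TrialRecord:
    prediction: int
    confidence: float
    posterior: list
    quality_ok: bool
    rejected: bool
    model_version: str
    timestamp: str

def log_trial(path, prediction, posterior, quality_ok,
              threshold=0.7, model_version="lda-2025-01"):
    confidence = max(posterior)
    record = TrialRecord(
        prediction=prediction,
        confidence=confidence,
        posterior=list(posterior),
        quality_ok=quality_ok,
        rejected=(confidence < threshold or not quality_ok),
        model_version=model_version,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open(path, "a") as f:  # append-only JSON Lines log
        f.write(json.dumps(asdict(record)) + "\n")
    return record

rec = log_trial("trials.jsonl", prediction=2,
                posterior=[0.1, 0.2, 0.7], quality_ok=True)
```

An append-only JSON Lines file keeps each trial self-describing, so rejected trials and confidence trends can be audited later without access to the live system.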

Model Families

Nimbus exposes probabilistic model families through SDK-native APIs:
  • NimbusLDA: fast Bayesian pooled Gaussian classifier.
  • NimbusQDA: class-specific covariance model for more complex distributions.
  • NimbusSoftmax: Python Bayesian softmax model for non-Gaussian boundaries.
  • NimbusProbit: Julia Bayesian multinomial probit model.
  • NimbusSTS: Python state-space model for non-stationary data.

How It Is Implemented

The Julia SDK uses RxInfer.jl for efficient Bayesian inference through reactive message passing:
  1. Factor graphs represent probabilistic relationships between neural features and brain states
  2. Variational message passing propagates information efficiently through the graph
  3. Reactive updates handle streaming data in real-time with minimal latency
  4. Automatic inference - RxInfer generates efficient inference algorithms automatically
The mathematical details are handled by SDK APIs. Use model pages for algorithm-specific details and API references for exact function signatures.

Next Read

Model Selection

Compare available probabilistic models.

Real-Time Setup

Configure acquisition and low-latency inference.

Error Handling

Build confidence-aware safeguards.

Examples

See uncertainty-aware BCI recipes.