
Message Passing Architecture

Nimbus uses reactive message passing on factor graphs to perform efficient Bayesian inference for BCI applications. This architecture enables real-time probabilistic reasoning while maintaining scalability and flexibility. Understanding this foundation helps you build more effective BCI systems.

Factor Graphs for BCI

What are Factor Graphs?

Factor graphs are a mathematical framework for representing probabilistic models. They consist of:
  • Variable nodes: Represent unknown quantities (brain states, intentions, etc.)
  • Factor nodes: Represent probabilistic relationships between variables
  • Edges: Connect variables to factors that depend on them
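Formally, a factor graph encodes how a joint distribution over all variables factorizes into local terms:

$$ p(x_1, \dots, x_n) = \frac{1}{Z} \prod_a f_a(\mathbf{x}_{\partial a}) $$

where each factor $f_a$ touches only its neighboring variables $\mathbf{x}_{\partial a}$ and $Z$ is a normalization constant. Inference algorithms exploit this locality instead of manipulating the full joint distribution.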

BCI Example: Motor Imagery

Consider a simple motor imagery BCI:
EEG Signals → Neural Features → Brain State → Motor Intention → Cursor Movement
     |              |             |              |              |
  [Factor]      [Factor]      [Factor]      [Factor]      [Factor]
Each factor represents a probabilistic relationship:
  • Signal model: How EEG relates to neural features
  • Feature model: How features relate to brain state
  • State model: How brain state relates to intention
  • Motor model: How intention relates to movement
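As a concrete sketch, part of this chain can be written as a generative model in RxInfer.jl. The variable names, distributions, and noise values below are illustrative assumptions, not the actual NimbusSDK model definitions:

```julia
using RxInfer

# Hypothetical simplification of the motor imagery chain: one latent
# brain state explains an observed neural feature and a motor intention.
@model function motor_imagery_chain(feature)
    # State model: prior belief over the latent brain state
    state ~ Normal(mean = 0.0, variance = 1.0)
    # Feature model: the observed neural feature given the brain state
    feature ~ Normal(mean = state, variance = 0.1)
    # Motor model: the motor intention given the brain state
    intention ~ Normal(mean = state, variance = 0.5)
end

# Condition on an observed feature; message passing yields posterior
# beliefs over every latent variable in the chain.
result = infer(model = motor_imagery_chain(), data = (feature = 0.8,))
println(mean(result.posteriors[:intention]))
```

Because every relationship is an explicit factor, swapping the feature model or adding a second observation changes only the affected lines, not the inference call.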

Advantages for BCI

Modular Design

Each component can be developed and tested independently

Flexible Architecture

Easy to add new sensors, features, or output modalities

Uncertainty Propagation

Uncertainty flows naturally through the entire system

Efficient Inference

Exploit sparsity and local structure for fast computation

Message Passing Inference

How Message Passing Works

Instead of computing the full joint probability distribution, which is computationally expensive, message passing exchanges local messages between connected nodes:
  1. Forward pass: Messages flow from observations to hidden variables
  2. Backward pass: Messages flow from priors to observations
  3. Marginal computation: Combine messages to get final beliefs
  4. Reactive updates: Only recompute affected messages when data arrives
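Steps 1-3 are instances of the standard sum-product rules. In generic form, the factor-to-variable and variable-to-factor messages, and the marginal they produce, are:

$$ \mu_{f \to x}(x) = \sum_{\mathbf{x}_{\partial f \setminus x}} f(\mathbf{x}_{\partial f}) \prod_{y \in \partial f \setminus x} \mu_{y \to f}(y) \qquad \mu_{x \to f}(x) = \prod_{g \in \partial x \setminus f} \mu_{g \to x}(x) $$

$$ p(x \mid \text{data}) \propto \prod_{f \in \partial x} \mu_{f \to x}(x) $$

Here $\partial f$ denotes the variables attached to factor $f$ and $\partial x$ the factors attached to variable $x$. Every message is local to one factor and its neighbors, which is exactly what makes the selective recomputation in step 4 possible.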

Real-time Updates

Traditional batch inference recomputes everything when new data arrives. RxInfer.jl’s reactive message passing instead enables incremental updates.
Traditional batch approach:
  • Recompute entire posterior when new data arrives
  • High latency (100ms+)
  • Wasteful computation
RxInfer reactive approach:
  • Only update affected parts of the factor graph
  • Low latency (10-20ms)
  • Efficient incremental computation
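A minimal sketch of this incremental pattern using RxInfer.jl's streaming inference interface. The model (a one-dimensional random walk) and all noise values are assumptions for illustration, not NimbusSDK's models:

```julia
using RxInfer

# One time step of a random-walk state-space model: the previous
# posterior enters as the prior for the current step.
@model function online_state(y, prev_mean, prev_var)
    x ~ Normal(mean = prev_mean, variance = prev_var + 0.1)  # state drift
    y ~ Normal(mean = x, variance = 0.5)                     # observation noise
end

# Feed the posterior of x back in as the next prior, so each new sample
# triggers only a local update instead of a full recomputation.
autoupdates = @autoupdates begin
    prev_mean, prev_var = mean_var(q(x))
end

init = @initialization begin
    q(x) = NormalMeanVariance(0.0, 1.0)
end

# Wrap incoming samples as named tuples (Rocket.jl operators,
# re-exported by RxInfer); a live EEG stream would replace randn(200).
datastream = from(randn(200)) |> map(NamedTuple{(:y,), Tuple{Float64}}, v -> (y = v,))

engine = infer(
    model          = online_state(),
    datastream     = datastream,
    autoupdates    = autoupdates,
    initialization = init,
    returnvars     = (:x,),
    keephistory    = 200,
    autostart      = true,
)
```

Each arriving sample updates only the messages that depend on it, which is where the latency gap between the batch and reactive approaches above comes from.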

Reactive Programming

NimbusSDK builds on RxInfer.jl’s reactive programming principles:
  • Event-driven: Computation triggered by new data
  • Asynchronous: Non-blocking message updates
  • Efficient: Only update what changed
  • Robust: Handle varying data rates gracefully

BCI-Specific Optimizations

Temporal Models

BCI signals have strong temporal structure, and NimbusSDK exploits this.
State Space Models: Model how brain states evolve over time (see the sketch after the benefits list below):
Brain State[t-1] → Brain State[t] → Brain State[t+1]
       |                |                |
   EEG[t-1]         EEG[t]         EEG[t+1]
Benefits:
  • Prediction: Anticipate future brain states
  • Smoothing: Reduce noise by considering temporal context
  • Missing data: Interpolate when signals are corrupted
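The sketch promised above: a linear-Gaussian state-space model in RxInfer.jl, with random-walk dynamics and assumed noise levels standing in for a real signal model:

```julia
using RxInfer

# Brain state evolves as a random walk; y[t] is a noisy observation
# of the state at time t (all parameters are illustrative).
@model function brain_state_ssm(y)
    x_prev ~ Normal(mean = 0.0, variance = 1.0)       # initial state prior
    for t in eachindex(y)
        x[t] ~ Normal(mean = x_prev, variance = 0.1)  # state transition
        y[t] ~ Normal(mean = x[t], variance = 0.5)    # observation model
        x_prev = x[t]
    end
end

result = infer(model = brain_state_ssm(), data = (y = randn(100),))
smoothed = result.posteriors[:x]  # per-timestep smoothed beliefs
```

Because each x[t] receives messages from both of its temporal neighbors, the posterior at time t is informed by past and future samples, which is the smoothing benefit listed above; a corrupted y[t] simply removes one message rather than breaking the model.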
Multi-scale Dynamics: Different neural processes operate at different timescales
  • Slow Dynamics (seconds): Attention, arousal
  • Medium Dynamics (100ms): Motor planning
  • Fast Dynamics (10ms): Neural oscillations
Bayesian LDA (RxLDA) and Bayesian GMM (RxGMM) handle this temporal structure by aggregating features over time and performing inference at the trial level.

Multi-modal Integration

Modern BCIs combine multiple signal types. Factor graphs naturally handle this:
    EEG → Neural State ← EMG
     |        |         |
   P300    Attention   Muscle
  Speller    Level    Activity
     |        |         |
     └── Intention ──────┘
            |
      Final Command
Each modality provides complementary information that improves overall performance.
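A sketch of this fusion in model form, with two hypothetical scalar features (one per modality) constraining a single latent intention; names and noise values are illustrative:

```julia
using RxInfer

# Two modalities observe the same latent intention through
# independent noise channels (illustrative values).
@model function fusion(eeg_feature, emg_feature)
    intention ~ Normal(mean = 0.0, variance = 1.0)
    eeg_feature ~ Normal(mean = intention, variance = 0.3)  # EEG channel
    emg_feature ~ Normal(mean = intention, variance = 0.2)  # EMG channel
end

result = infer(model = fusion(), data = (eeg_feature = 0.7, emg_feature = 0.9))
# The posterior combines one message per modality, so its variance
# is smaller than either observation noise alone.
println(var(result.posteriors[:intention]))
```

Adding a third modality is one more observation line; the inference call does not change.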

Hierarchical Processing

Brain activity operates at multiple levels. Nimbus models this hierarchy:
Global Brain State (Attention, Arousal)
        |
Regional Activity (Motor Cortex, Visual Cortex)  
        |
Local Populations (Individual Electrodes)
        |
Raw Signals (EEG, EMG, EOG)
Higher levels provide context for lower levels, improving inference quality.

Implementation Details

Efficient Message Computation

RxInfer.jl’s message passing engine provides optimizations that suit BCI workloads:
  • Sparse updates: Only compute messages for affected nodes
  • Caching: Reuse previous computations when possible
  • Topological sorting: Optimal message update order
  • Variational inference: Closed-form message updates (no sampling)
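The last point refers to variational message passing, where each outgoing message has a closed form:

$$ \mu_{f \to x}(x) \propto \exp\left( \mathbb{E}_{q(\mathbf{x}_{\partial f \setminus x})}\left[ \ln f(\mathbf{x}_{\partial f}) \right] \right) $$

For conjugate exponential-family factors this expectation is available analytically, so messages are computed by updating natural parameters rather than by sampling.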

Memory Management

Real-time systems require careful memory management:
  • Message caching: Reuse previous computations
  • Bounded memory: Fixed memory usage regardless of runtime
  • Efficient GC: Minimal garbage collection pressure

Numerical Stability

BCI signals can have extreme values. RxInfer handles this robustly:
  • Log-space computation: Avoid numerical underflow
  • Adaptive precision: Use appropriate numerical types
  • Regularization: Prevent degenerate solutions
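As an example of log-space computation, normalizing class log-probabilities with the log-sum-exp trick (a generic sketch, not NimbusSDK internals):

```julia
# Normalize log-probabilities without underflow: shift by the maximum
# so the largest term becomes exp(0) = 1 before exponentiating.
function normalize_logprobs(logp::Vector{Float64})
    m = maximum(logp)
    w = exp.(logp .- m)
    return w ./ sum(w)
end

# Direct exponentiation of values like -800 underflows to 0.0 and
# produces NaN after normalization; the shifted version is exact.
normalize_logprobs([-800.0, -805.0, -810.0])  # ≈ [0.9933, 0.0067, 4.5e-5]
```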

Bayesian LDA and Bayesian GMM Models

Model Structure

Both Bayesian LDA (RxLDA) and Bayesian GMM (RxGMM) use factor graph representations.
RxLDA: Linear Discriminant Analysis
  • Gaussian observations with shared precision matrix
  • Fast inference due to shared covariance
  • Optimal for well-separated classes
RxGMM: Gaussian Mixture Model
  • Class-specific covariance matrices
  • More flexible for overlapping distributions
  • Handles complex class structures
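The difference shows up directly in the class-conditional likelihoods:

$$ \text{RxLDA:} \quad p(\mathbf{x} \mid c) = \mathcal{N}\left(\mathbf{x};\, \boldsymbol{\mu}_c,\, \mathbf{W}^{-1}\right) \qquad \text{RxGMM:} \quad p(\mathbf{x} \mid c) = \mathcal{N}\left(\mathbf{x};\, \boldsymbol{\mu}_c,\, \mathbf{W}_c^{-1}\right) $$

RxLDA shares one precision matrix $\mathbf{W}$ across all classes, which yields linear decision boundaries and cheaper updates; RxGMM gives each class its own $\mathbf{W}_c$, which yields quadratic boundaries at extra computational cost.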

Inference Process

  1. Observation: Neural features from preprocessed EEG
  2. Message passing: Update beliefs using variational inference
  3. Marginalization: Compute posterior over classes
  4. Prediction: Select class with highest posterior probability
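Steps 3 and 4 amount to Bayes' rule over the class variable:

$$ p(c \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid c)\, p(c)}{\sum_{c'} p(\mathbf{x} \mid c')\, p(c')} \qquad \hat{c} = \arg\max_{c}\, p(c \mid \mathbf{x}) $$

The full posterior is retained alongside the point prediction, so downstream components can act on the classifier's confidence rather than only its argmax.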

Best Practices

Model Design

Start simple and add complexity gradually. A simple model that works reliably is better than a complex model that fails unpredictably.
  1. Begin with linear models: Add nonlinearity only when needed
  2. Use domain knowledge: Incorporate known neural principles
  3. Validate incrementally: Test each component separately
  4. Monitor performance: Track inference speed and accuracy

Scalability

  1. Exploit sparsity: Most neural connections are sparse
  2. Use hierarchical models: Process at multiple resolutions
  3. Cache computations: Reuse expensive calculations
  4. Profile regularly: Identify and fix bottlenecks

Getting Started

Ready to build with message passing?
Next: Learn how to configure real-time inference for your BCI application.