Message Passing Architecture
Nimbus uses reactive message passing on factor graphs to perform efficient Bayesian inference for BCI applications. This architecture enables real-time probabilistic reasoning while maintaining scalability and flexibility. Understanding this foundation helps you build more effective BCI systems.
Factor Graphs for BCI
What are Factor Graphs?
Factor graphs are a mathematical framework for representing probabilistic models. They consist of:
- Variable nodes: Represent unknown quantities (brain states, intentions, etc.)
- Factor nodes: Represent probabilistic relationships between variables
- Edges: Connect variables to factors that depend on them
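The bipartite structure above can be sketched in a few lines of code. This is an illustrative Python sketch of the data structure only, not the NimbusSDK or RxInfer.jl API (which is Julia); all names here are hypothetical.

```python
# Illustrative sketch: a factor graph is a bipartite graph whose edges
# connect each factor to the variables it depends on.
from dataclasses import dataclass, field

@dataclass
class Variable:
    name: str                                   # e.g. "brain_state"
    factors: list = field(default_factory=list) # factors touching this variable

@dataclass
class Factor:
    name: str                                   # e.g. "p(intention | brain_state)"
    variables: list = field(default_factory=list)

def connect(factor, *variables):
    """Add an edge between a factor and each variable it depends on."""
    for v in variables:
        factor.variables.append(v)
        v.factors.append(factor)

state = Variable("brain_state")
intent = Variable("intention")
motor = Factor("p(intention | brain_state)")
connect(motor, state, intent)
```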
BCI Example: Motor Imagery
Consider a simple motor imagery BCI:
- Signal model: How EEG relates to neural features
- Feature model: How features relate to brain state
- State model: How brain state relates to intention
- Motor model: How intention relates to movement
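Read generatively, these four models form a chain from intention to sensor. A hypothetical sampling sketch (Python for illustration; the parameters and noise levels are made up, not NimbusSDK defaults):

```python
# Ancestral sampling through the motor imagery chain:
# intention -> brain state -> neural feature -> observed EEG.
import random

def sample_trial(seed=0):
    rng = random.Random(seed)
    intention = rng.choice(["left", "right"])          # motor/state side of the chain
    brain_state = 1.0 if intention == "left" else -1.0 # state model
    feature = brain_state + rng.gauss(0, 0.5)          # feature model (neural noise)
    eeg = feature + rng.gauss(0, 1.0)                  # signal model (sensor noise)
    return intention, eeg

intention, eeg = sample_trial(seed=42)
```

Inference runs this chain in reverse: given `eeg`, message passing recovers a posterior over `intention`.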
Advantages for BCI
Modular Design
Each component can be developed and tested independently
Flexible Architecture
Easy to add new sensors, features, or output modalities
Uncertainty Propagation
Uncertainty flows naturally through the entire system
Efficient Inference
Exploit sparsity and local structure for fast computation
Message Passing Inference
How Message Passing Works
Instead of computing the full joint probability distribution (computationally expensive), message passing computes local messages between connected nodes:
- Forward pass: Messages flow from observations to hidden variables
- Backward pass: Messages flow from priors to observations
- Marginal computation: Combine messages to get final beliefs
- Reactive updates: Only recompute affected messages when data arrives
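The forward/backward/marginal steps above can be made concrete on a tiny two-state chain (an HMM-like model). This is a self-contained Python sketch of sum-product message passing with made-up numbers, not RxInfer.jl code:

```python
# Sum-product message passing on a two-state chain with three observations.
T = [[0.9, 0.1], [0.1, 0.9]]   # transition factor between consecutive states
E = [[0.8, 0.2], [0.2, 0.8]]   # observation factor (state -> symbol)
obs = [0, 0, 1]
prior = [0.5, 0.5]

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

# Forward pass: messages carry evidence from past observations.
fwd = [normalize([prior[s] * E[s][obs[0]] for s in range(2)])]
for t in range(1, len(obs)):
    fwd.append(normalize([
        E[s][obs[t]] * sum(fwd[-1][p] * T[p][s] for p in range(2))
        for s in range(2)]))

# Backward pass: messages carry evidence from future observations.
bwd = [[1.0, 1.0]]
for t in range(len(obs) - 2, -1, -1):
    bwd.insert(0, normalize([
        sum(T[s][n] * E[n][obs[t + 1]] * bwd[0][n] for n in range(2))
        for s in range(2)]))

# Marginal computation: combine the two messages at each variable node.
marginals = [normalize([f * b for f, b in zip(fwd[t], bwd[t])])
             for t in range(len(obs))]
```

Each marginal is a local product of incoming messages, which is why new data only requires recomputing the messages it touches.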
Real-time Updates
Traditional batch inference recomputes everything when new data arrives. RxInfer.jl’s reactive message passing enables incremental updates:
Traditional batch approach:
- Recompute entire posterior when new data arrives
- High latency (100ms+)
- Wasteful computation
Reactive approach:
- Only update affected parts of the factor graph
- Low latency (10-20ms)
- Efficient incremental computation
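For conjugate models, the incremental route loses nothing: updating the posterior one sample at a time gives exactly the batch answer. A minimal sketch for a Gaussian mean with known noise precision (illustrative Python, hypothetical parameter values):

```python
# Batch vs. incremental posterior for the mean of Gaussian observations.
# Conjugacy makes the two routes agree exactly, but the incremental
# update costs O(1) per new sample instead of O(n).

def batch_posterior(data, prior_mean=0.0, prior_prec=1.0, noise_prec=4.0):
    prec = prior_prec + noise_prec * len(data)
    mean = (prior_prec * prior_mean + noise_prec * sum(data)) / prec
    return mean, prec

def incremental_posterior(data, prior_mean=0.0, prior_prec=1.0, noise_prec=4.0):
    mean, prec = prior_mean, prior_prec
    for y in data:                        # one cheap update per arriving sample
        new_prec = prec + noise_prec
        mean = (prec * mean + noise_prec * y) / new_prec
        prec = new_prec
    return mean, prec

data = [0.2, -0.1, 0.4]
```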
Reactive Programming
NimbusSDK uses RxInfer.jl’s reactive programming principles:
- Event-driven: Computation triggered by new data
- Asynchronous: Non-blocking message updates
- Efficient: Only update what changed
- Robust: Handle varying data rates gracefully
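The event-driven principle reduces to a simple pattern: consumers subscribe to a data stream, and computation fires only when a sample is pushed. A minimal Python sketch of that pattern (not the RxInfer.jl/Rocket.jl API; the update rule is a stand-in):

```python
# Minimal event-driven stream: callbacks run only when new data arrives,
# so there is no polling and no wasted recomputation.

class Stream:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def push(self, sample):              # a new EEG sample arrives
        for cb in self._subscribers:
            cb(sample)

beliefs = []
eeg = Stream()
eeg.subscribe(lambda x: beliefs.append(x * 0.5))  # stand-in for a belief update
for sample in [1.0, 2.0]:
    eeg.push(sample)
```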
BCI-Specific Optimizations
Temporal Models
BCI signals have strong temporal structure. NimbusSDK exploits this:
State Space Models: Model how brain states evolve over time
- Prediction: Anticipate future brain states
- Smoothing: Reduce noise by considering temporal context
- Missing data: Interpolate when signals are corrupted
Multiple Timescales: Brain dynamics unfold at several characteristic rates
- Slow Dynamics (seconds): Attention, arousal
- Medium Dynamics (100ms): Motor planning
- Fast Dynamics (10ms): Neural oscillations
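Prediction, smoothing, and missing-data handling all fall out of one filtering recursion. A scalar Kalman-style sketch for a random-walk brain state (illustrative Python with made-up noise parameters; the real models are defined in RxInfer.jl):

```python
# A random-walk state tracked by a scalar Kalman filter.
# q: process noise (how fast the state drifts), r: observation noise.

def kalman_step(mean, var, obs, q=0.01, r=0.25):
    # Prediction: the state evolves, so uncertainty grows.
    mean_pred, var_pred = mean, var + q
    if obs is None:                      # missing/corrupted sample:
        return mean_pred, var_pred       # fall back on the prediction
    # Correction: weigh the observation by its relative precision.
    k = var_pred / (var_pred + r)
    return mean_pred + k * (obs - mean_pred), (1 - k) * var_pred

mean, var = 0.0, 1.0                     # broad prior over the state
for obs in [0.9, 1.1, None, 1.0]:        # third sample was corrupted
    mean, var = kalman_step(mean, var, obs)
```

Each observation sharpens the belief; the `None` step shows temporal context carrying the estimate across a gap.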
Multi-modal Integration
Modern BCIs combine multiple signal types. Factor graphs naturally handle this: each modality contributes its own likelihood factor, and their evidence is fused at shared latent variables.
Hierarchical Processing
Brain activity operates at multiple levels. Nimbus models this hierarchy with latent variables at each level, so higher-level beliefs constrain lower-level inference.
Implementation Details
Efficient Message Computation
RxInfer.jl optimizes message passing for BCI workloads:
- Sparse updates: Only compute messages for affected nodes
- Caching: Reuse previous computations when possible
- Topological sorting: Optimal message update order
- Variational inference: Closed-form message updates (no sampling)
Memory Management
Real-time systems require careful memory management:
- Message caching: Reuse previous computations
- Bounded memory: Fixed memory usage regardless of runtime
- Efficient GC: Minimal garbage collection pressure
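Bounded memory typically means keeping only a fixed window of recent samples. A minimal sketch of that idea using a ring buffer (illustrative Python; `SampleBuffer` is a hypothetical name, not a NimbusSDK type):

```python
# Fixed-capacity ring buffer: memory use is constant regardless of how
# long the session runs, because the oldest samples are dropped.
from collections import deque

class SampleBuffer:
    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)  # deque drops old items itself

    def push(self, sample):
        self._buf.append(sample)

    def window(self):
        return list(self._buf)

buf = SampleBuffer(capacity=3)
for s in range(5):
    buf.push(s)
```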
Numerical Stability
BCI signals can have extreme values. RxInfer handles this robustly:
- Log-space computation: Avoid numerical underflow
- Adaptive precision: Use appropriate numerical types
- Regularization: Prevent degenerate solutions
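The log-space point is worth seeing concretely: normalizing raw likelihoods underflows for extreme values, while the standard log-sum-exp trick stays stable. A self-contained Python sketch:

```python
# Log-space normalization via the log-sum-exp trick.
import math

def log_normalize(log_weights):
    m = max(log_weights)                       # shift so the largest term is exp(0)
    log_z = m + math.log(sum(math.exp(w - m) for w in log_weights))
    return [w - log_z for w in log_weights]

# exp(-1000) underflows to 0.0 in double precision, so naive
# normalization would divide 0 by 0; log-space recovers the answer.
log_probs = log_normalize([-1000.0, -1001.0])
probs = [math.exp(w) for w in log_probs]
```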
Bayesian LDA and Bayesian GMM Models
Model Structure
Both Bayesian LDA (RxLDA) and Bayesian GMM (RxGMM) use factor graph representations:
RxLDA: Linear Discriminant Analysis
- Gaussian observations with shared precision matrix
- Fast inference due to shared covariance
- Optimal for well-separated classes
RxGMM: Gaussian Mixture Model
- Class-specific covariance matrices
- More flexible for overlapping distributions
- Handles complex class structures
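The shared vs. class-specific covariance distinction is easy to see in one dimension: with per-class variances, the class posterior shifts even when the means are unchanged. An illustrative Python sketch with made-up parameters (not the RxLDA/RxGMM implementations, which use variational message passing over full factor graphs):

```python
# Class posteriors under shared (LDA-style) vs. per-class (GMM-style)
# Gaussian likelihoods, computed stably in log space.
import math

def gauss_logpdf(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def class_posterior(x, means, variances, priors=(0.5, 0.5)):
    log_joint = [math.log(p) + gauss_logpdf(x, m, v)
                 for p, m, v in zip(priors, means, variances)]
    m = max(log_joint)
    log_z = m + math.log(sum(math.exp(l - m) for l in log_joint))
    return [math.exp(l - z) for l, z in zip(log_joint, [log_z] * 2)]

x = 1.5                                  # a neural feature between the classes
lda = class_posterior(x, means=(0.0, 2.0), variances=(1.0, 1.0))   # shared
gmm = class_posterior(x, means=(0.0, 2.0), variances=(1.0, 0.25))  # per-class
```

Here the tighter class-1 variance pulls extra posterior mass toward class 1, which is exactly the flexibility RxGMM buys for overlapping distributions.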
Inference Process
1. Observation: Neural features from preprocessed EEG
2. Message passing: Update beliefs using variational inference
3. Marginalization: Compute posterior over classes
4. Prediction: Select class with highest posterior probability
Best Practices
Model Design
Start simple and add complexity gradually. A simple model that works reliably is better than a complex model that fails unpredictably.
- Begin with linear models: Add nonlinearity only when needed
- Use domain knowledge: Incorporate known neural principles
- Validate incrementally: Test each component separately
- Monitor performance: Track inference speed and accuracy
Scalability
- Exploit sparsity: Most neural connections are sparse
- Use hierarchical models: Process at multiple resolutions
- Cache computations: Reuse expensive calculations
- Profile regularly: Identify and fix bottlenecks
Getting Started
Ready to build with message passing?
- Quick Start: Build your first factor graph BCI
- API Reference: Explore the factor graph API
- Examples: See message passing in action
- Advanced Topics: Create custom factor nodes
Next: Learn how to configure real-time inference for your BCI application.