Why Nimbus BCI
Brain-Computer Interfaces face unique challenges: noisy neural signals, inherent uncertainty, real-time requirements, and the need for explainability in medical applications.
Current BCI Challenges:
- High Latency: Standard processing takes 200ms+ for trial classification
- No Uncertainty Quantification: Deterministic outputs without confidence measures
- Limited Adaptability: Cannot handle changing brain states or signal quality
- Black Box Models: Deep learning lacks explainability for FDA approval
How Nimbus Addresses Them:
- ✅ Fast Inference: 10-20ms per trial with Bayesian models
- ✅ Uncertainty Quantification: Full posterior distributions, not just point estimates
- ✅ Training & Calibration: Subject-specific personalization in minutes
- ✅ Explainable: White-box probabilistic models for medical compliance
- ✅ Production-Ready: Batch and streaming modes, quality assessment, performance tracking
Get started with Nimbus
Build your first BCI application in 10 minutes with Julia and NimbusSDK.jl
Core Features
Fast Bayesian Inference
Bayesian LDA and Bayesian GMM models with 10-20ms inference latency using RxInfer.jl reactive message passing
Training & Calibration
Train custom models on your data or calibrate pre-trained models with 10-20 trials per subject
Real-time Streaming
Process EEG chunks as they arrive with chunk-by-chunk inference and weighted aggregation
Uncertainty Quantification
Full posterior distributions with confidence scores for quality assessment and rejection
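The streaming and uncertainty features work together: each incoming chunk yields a class posterior, and chunk posteriors are combined with confidence-derived weights. A minimal sketch of that idea in plain Julia (the entropy-based weighting here is illustrative, not the SDK's exact internals):

```julia
# Combine per-chunk class posteriors into one trial-level posterior,
# weighting each chunk by its confidence (1 - normalized entropy).

# Normalized entropy of a discrete distribution: 0 = certain, 1 = uniform.
function normalized_entropy(p::Vector{Float64})
    h = -sum(x -> x > 0 ? x * log(x) : 0.0, p)
    return h / log(length(p))
end

# Confidence-weighted aggregation of chunk posteriors.
function aggregate_posteriors(chunks::Vector{Vector{Float64}})
    weights = [1.0 - normalized_entropy(p) for p in chunks]
    combined = sum(w .* p for (w, p) in zip(weights, chunks))
    return combined ./ sum(combined)   # renormalize to a distribution
end

# Three chunks of a 2-class trial: two confident, one ambiguous.
chunks = [[0.9, 0.1], [0.85, 0.15], [0.5, 0.5]]
posterior = aggregate_posteriors(chunks)
```

The fully ambiguous chunk gets zero weight, so it cannot dilute the confident chunks: this is the intuition behind rejecting low-quality segments.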
Implemented Models
NimbusSDK.jl currently provides two production-ready Bayesian models:
Bayesian LDA (RxLDA)
Bayesian Linear Discriminant Analysis with shared covariance. Fast (10-15ms), efficient, ideal for well-separated classes like motor imagery. API name: RxLDAModel.
Bayesian GMM (RxGMM)
Bayesian Gaussian Mixture Model with class-specific covariances. More flexible (15-25ms), better suited to overlapping distributions like P300. API name: RxGMMModel.
BCI Paradigms Supported
Motor Imagery
2-4 class classification
- Left/right hand
- Hands/feet/tongue
- 70-90% accuracy
- RxLDA recommended
P300
Binary classification
- Target/non-target
- Speller applications
- 80-95% accuracy
- RxGMM recommended
SSVEP
Multi-class frequency classification
- 2-6 target frequencies
- High accuracy (85-98%)
- Works with both models
Quick Example
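The SDK's exact call sequence is covered in the Julia SDK Reference; as a self-contained stand-in, here is the core computation an RxLDA-style classifier performs: scoring a feature vector under per-class Gaussians with a shared covariance and returning a full posterior rather than a hard label. The class means, covariance, and priors below are made-up illustrative values, not trained parameters:

```julia
using LinearAlgebra

# Shared-covariance Gaussian discriminant (the generative model behind LDA):
# posterior(class k | x) ∝ prior_k * N(x; μ_k, Σ)

# Log-density of a multivariate Gaussian with shared covariance Σ.
function log_gauss(x, μ, Σ)
    d = x .- μ
    return -0.5 * (dot(d, Σ \ d) + logdet(Σ) + length(x) * log(2π))
end

# Posterior over classes for feature vector x.
function lda_posterior(x, μs, Σ, priors)
    logp = [log(priors[k]) + log_gauss(x, μs[k], Σ) for k in eachindex(μs)]
    logp .-= maximum(logp)            # stabilize before exponentiating
    p = exp.(logp)
    return p ./ sum(p)
end

# Two motor-imagery classes in a 2-D feature space (illustrative numbers).
μs = [[1.0, 0.0], [-1.0, 0.0]]        # class means (e.g. left vs right hand)
Σ  = [0.5 0.0; 0.0 0.5]               # shared covariance
priors = [0.5, 0.5]

x = [0.8, 0.1]                        # preprocessed trial features
posterior = lda_posterior(x, μs, Σ, priors)
confidence = maximum(posterior)       # usable for reject-on-low-confidence
```

Because the output is a distribution rather than a label, downstream code can reject trials whose confidence falls below a threshold instead of acting on a guess.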
Use Cases
Assistive Technologies
Wheelchair control, prosthetic limbs, communication devices with explainable, FDA-ready probabilistic models
Research Platforms
Academic research with training capabilities, subject-specific calibration, and comprehensive performance metrics
Neurofeedback
Real-time brain state monitoring with streaming inference and confidence-based quality control
Gaming & Wellness
Low-latency brain control for immersive experiences with adaptive difficulty based on confidence
SDK Architecture
- Local Inference: No API calls during inference - all processing on your machine
- Privacy: Your EEG data never leaves your computer
- Speed: No network latency, consistent <20ms performance
- Offline Capable: Works without internet after initial setup
A lightweight cloud API handles only:
- Authentication and licensing
- Pre-trained model distribution
- Optional analytics logging
Getting Started
1
Install Julia SDK
Add NimbusSDK.jl to your Julia environment with the package manager
2
Get API Key
Contact hello@nimbusbci.com for your API key and license tier
3
Preprocess EEG
Use MNE-Python, EEGLAB, or your tool to extract features (CSP, bandpower, ERP)
4
Run Inference
Load model, process features, get predictions with confidence scores
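Step 3 normally happens outside the SDK (MNE-Python, EEGLAB, or BrainFlow, as noted above), but for readers who want to stay in Julia, a bandpower feature needs nothing beyond a discrete Fourier transform. The naive DFT below is for illustration only; a real pipeline would use FFTW.jl or DSP.jl:

```julia
# Bandpower of a single EEG channel via a naive DFT (illustration only;
# use FFTW.jl / DSP.jl for real data).

# One-sided power spectrum of a real signal; bin resolution is fs/N Hz.
function power_spectrum(x::Vector{Float64})
    N = length(x)
    return [abs2(sum(x[n+1] * cis(-2π * k * n / N) for n in 0:N-1)) / N
            for k in 0:N÷2]
end

# Sum of spectral power inside [flo, fhi] Hz.
function bandpower(x, fs, flo, fhi)
    spec = power_spectrum(x)
    df = fs / length(x)                       # bin width in Hz
    return sum(spec[k+1] for k in 0:length(spec)-1 if flo <= k * df <= fhi)
end

fs = 128.0                                    # sampling rate (Hz)
t  = (0:127) ./ fs                            # one second of samples
x  = sin.(2π .* 10.0 .* t)                    # pure 10 Hz "alpha" oscillation

alpha = bandpower(x, fs, 8.0, 12.0)           # mu/alpha band
beta  = bandpower(x, fs, 13.0, 30.0)          # beta band
```

With a clean 10 Hz oscillation, essentially all power lands in the alpha band and the beta band stays near zero; such per-band powers form the feature vector handed to the classifier in step 4.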
Documentation
Quickstart
10-minute guide to your first BCI application
Julia SDK Reference
Complete API documentation
Model Training
Train custom models on your data
Preprocessing Guide
Required EEG preprocessing steps
Code Examples
Working Julia code examples
API Reference
Authentication and model registry
Performance
| Metric | RxLDA | RxGMM |
|---|---|---|
| Inference Latency | 10-15ms | 15-25ms |
| Training Time | 10-30s | 15-40s |
| Calibration Time | 5-15s | 8-20s |
| Memory Usage | Low | Moderate |
| Accuracy | 70-90% (MI) | 80-95% (P300) |
Technology Stack
- Core: Julia 1.9+
- Inference Engine: RxInfer.jl - Reactive message passing for Bayesian inference
- Models: RxLDA (shared covariance), RxGMM (class-specific covariances)
- Preprocessing: MNE-Python, EEGLAB, BrainFlow (external)
- API: TypeScript/Vercel serverless (authentication, model registry)
Roadmap
Currently Implemented:
- ✅ RxLDA and RxGMM models
- ✅ Batch and streaming inference
- ✅ Training and calibration
- ✅ Quality assessment and ITR calculation
- ✅ API authentication and model registry
In Development:
- 🚧 Autoregressive (AR) models for rhythm analysis
- 🚧 Hidden Markov Models (HMM) for state detection
- 🚧 Kalman filtering for signal processing
- 🚧 POMDP decision-making frameworks
- 🚧 Multi-modal sensor fusion
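The ITR calculation listed under Currently Implemented conventionally follows Wolpaw's formula: bits per trial B = log2(N) + P·log2(P) + (1-P)·log2((1-P)/(N-1)), scaled to bits per minute by 60/T. A minimal Julia version (the function name is ours, not the SDK's):

```julia
# Wolpaw information transfer rate (ITR) in bits per minute.
# N: number of classes, P: classification accuracy, T: seconds per trial.
function wolpaw_itr(N::Int, P::Float64, T::Float64)
    P <= 1 / N && return 0.0           # at or below chance: no information
    bits = log2(N)
    if P < 1                           # the P == 1 terms vanish (0·log 0 = 0)
        bits += P * log2(P) + (1 - P) * log2((1 - P) / (N - 1))
    end
    return bits * 60 / T
end

# 4-class motor imagery at 90% accuracy, one decision every 3 seconds:
itr = wolpaw_itr(4, 0.9, 3.0)
```

The guard clauses matter in practice: at-chance accuracy carries zero information, and perfect accuracy reduces to log2(N) bits per trial.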
Support
Email Support
hello@nimbusbci.com - Technical support and inquiries
Book a Demo
See Nimbus in action with live demonstration
API Status
Check API availability and version
GitHub
NimbusSDK.jl source code, issues, and examples
License
NimbusSDK.jl is commercial software with tiered licensing:
- Free: 10K monthly inferences, basic models
- Research: 50K monthly inferences, all features
- Commercial: 500K monthly inferences, priority support
- Enterprise: Unlimited, custom models, on-premise deployment
Built with ❤️ for the neurotechnology community
Nimbus BCI Engine - Bringing Bayesian inference to Brain-Computer Interfaces