Error Handling For BCI Systems
BCI applications should fail predictably. The most important safeguards are input validation, confidence-aware decisions, and clear fallback behavior when signal quality or streaming health degrades.

Python SDK workflows are local and focus on validation, inference, and preprocessing errors. Julia SDK workflows also include API key and core installation errors.
Common Failure Modes
| Category | Typical Symptom | First Check |
|---|---|---|
| Authentication | Julia core install or model access fails | API key, network, cached core state |
| Data shape | Dimension mismatch | Expected feature and trial shape |
| Invalid values | NaN, Inf, or unstable predictions | Preprocessing output and export path |
| Model mismatch | Low confidence or feature errors | Feature count, class count, model family |
| Poor preprocessing | Near-chance accuracy | Frequency band, artifacts, normalization |
| Streaming drift | Confidence falls over time | Session state, electrode quality, fatigue |
Validate Before Inference
Check the data contract before calling model APIs:
- Features are finite and non-constant.
- Feature count matches metadata and model expectations.
- Labels match trial count and class encoding.
- Test data uses training normalization parameters.
- Streaming chunks match `(n_features, chunk_size)`.
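The checks above can be collected into one pre-inference gate. This is a minimal sketch, not a Nimbus SDK API: `validate_features` and its argument names are illustrative, and the trial layout is assumed to be `(n_trials, n_features)`.

```python
import numpy as np

def validate_features(X, n_features_expected, labels=None, n_classes=None):
    """Check the data contract before inference.

    Hypothetical helper: adapt names, layout, and thresholds to your pipeline.
    X is assumed to be shaped (n_trials, n_features).
    """
    X = np.asarray(X, dtype=float)
    errors = []
    if X.ndim != 2 or X.shape[1] != n_features_expected:
        errors.append(f"expected (n_trials, {n_features_expected}), got {X.shape}")
        return errors  # remaining checks assume a valid 2-D shape
    if not np.all(np.isfinite(X)):
        errors.append("features contain NaN or Inf")
    elif np.any(np.std(X, axis=0) == 0):
        errors.append("one or more features are constant")
    if labels is not None:
        if len(labels) != X.shape[0]:
            errors.append(f"{len(labels)} labels for {X.shape[0]} trials")
        elif n_classes is not None:
            unseen = set(labels) - set(range(n_classes))
            if unseen:
                errors.append(f"labels outside 0..{n_classes - 1}: {sorted(unseen)}")
    return errors  # an empty list means the contract holds

good = validate_features(np.random.randn(10, 4), n_features_expected=4)
bad = validate_features(np.full((10, 4), np.nan), n_features_expected=4)
```

Returning a list of violations, rather than raising on the first one, lets a calibration step report every contract problem at once.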
Confidence Gates
Do not map every prediction directly to an action. Use thresholds that match application risk.

| Confidence | Suggested Action |
|---|---|
| >= 0.9 | Execute low-risk command or accept trial. |
| 0.7-0.9 | Ask for confirmation or show alternatives. |
| 0.5-0.7 | Reject or request a clearer trial. |
| < 0.5 | Stop, recalibrate, or run diagnostics. |
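A minimal sketch of such a gate, assuming a classifier that returns a label plus a posterior confidence. The function name is illustrative, and the thresholds mirror the table; tune both to your application's risk profile.

```python
def gate_decision(label, confidence):
    """Map a (label, confidence) pair to an action tier.

    Illustrative only; thresholds follow the table above.
    """
    if confidence >= 0.9:
        return ("execute", label)   # low-risk command or accepted trial
    if confidence >= 0.7:
        return ("confirm", label)   # ask for confirmation, show alternatives
    if confidence >= 0.5:
        return ("reject", None)     # request a clearer trial
    return ("diagnose", None)       # stop, recalibrate, or run diagnostics

print(gate_decision("left_hand", 0.93))  # ('execute', 'left_hand')
print(gate_decision("left_hand", 0.55))  # ('reject', None)
```

Returning a tier instead of executing directly keeps classifier output separated from command execution, which the production checklist below also requires.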
Preprocessing Diagnostics
When confidence is unexpectedly low, check data quality before tuning model hyperparameters.
Streaming Recovery
Streaming systems should skip bad chunks, report error rates, and stop when consecutive errors exceed a safe limit.
Julia Setup Errors
For Julia SDK deployments:
- Run `NimbusSDK.install_core(api_key)` during setup, not inside a hot inference loop.
- Cache credentials and the core installation where appropriate.
- Handle missing or invalid API keys before starting acquisition.
- Keep offline inference assumptions explicit in deployment docs.
Production Checklist
- Validate every calibration file before training.
- Save model metadata, normalization parameters, and SDK versions.
- Log prediction, confidence, posterior summary, latency, and rejection reason.
- Separate classifier output from command execution.
- Use stricter thresholds for safety-critical actions.
- Monitor session-level confidence trends and rejection rates.
- Provide a fallback path for stream loss, repeated bad chunks, or recalibration needs.
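One way to satisfy the logging and separation items above is to build a structured record for every inference before any command runs. This is a sketch, not a Nimbus SDK format; all field names are illustrative.

```python
import json
import time

def make_decision_record(prediction, confidence, posterior,
                         latency_ms, rejection_reason=None):
    """Build one structured log record per inference (illustrative fields)."""
    return {
        "timestamp": time.time(),
        "prediction": prediction,
        "confidence": round(confidence, 4),
        "posterior": [round(p, 4) for p in posterior],  # posterior summary
        "latency_ms": latency_ms,
        "rejection_reason": rejection_reason,  # None when the trial is accepted
    }

record = make_decision_record("left_hand", 0.87, [0.87, 0.09, 0.04],
                              latency_ms=12.5)
print(json.dumps(record))  # log first; command execution happens elsewhere
```

Logging the full record before execution makes session-level confidence trends and rejection rates queryable after the fact.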
Troubleshooting Quick Reference
| Issue | Likely Cause | Fix |
|---|---|---|
| Accuracy near chance | Raw EEG or wrong feature band | Revisit preprocessing and feature extraction. |
| Confidence always low | Poor normalization or noisy trials | Run diagnostics and reuse training normalization parameters. |
| Dimension mismatch | Feature count differs from metadata | Check export shape and model metadata. |
| Works offline, fails streaming | Chunk shape or state handling | Validate chunks and reset sessions between trials. |
| Cross-session drop | Electrode/session scale shift | Use saved normalization params. |
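The streaming fixes above (validating chunk shape, skipping bad chunks, stopping after repeated failures) can be sketched as a small wrapper. `predict` stands in for whatever inference callable your SDK provides; the shape and error limit are assumptions to adapt.

```python
import numpy as np

def process_stream(chunks, predict, n_features, chunk_size,
                   max_consecutive_errors=5):
    """Skip invalid chunks, count errors, and stop after repeated failures.

    Illustrates the recovery policy only, not a real streaming API.
    """
    results, errors, consecutive = [], 0, 0
    for chunk in chunks:
        chunk = np.asarray(chunk)
        if chunk.shape != (n_features, chunk_size) or not np.all(np.isfinite(chunk)):
            errors += 1
            consecutive += 1
            if consecutive >= max_consecutive_errors:
                raise RuntimeError("too many consecutive bad chunks; recalibrate")
            continue  # skip the bad chunk instead of failing the session
        consecutive = 0
        results.append(predict(chunk))
    return results, errors

# One valid chunk, one wrong shape, one non-finite chunk:
chunks = [np.zeros((4, 8)), np.zeros((4, 7)), np.full((4, 8), np.inf)]
preds, n_bad = process_stream(chunks, predict=lambda c: "rest",
                              n_features=4, chunk_size=8)
# preds == ["rest"], n_bad == 2
```

Resetting the consecutive-error counter on every good chunk means the hard stop fires only on sustained failure, while isolated glitches just raise the reported error rate.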
Next Read
- Development Workflow: project structure and production guardrails.
- Streaming Configuration: chunking, aggregation, and quality gates.
- Preprocessing Requirements: prevent data-quality failures upstream.
- Basic Examples: compact recipes for common workflows.