

Error Handling for BCI Systems

BCI applications should fail predictably. The most important safeguards are input validation, confidence-aware decisions, and clear fallback behavior when signal quality or streaming health degrades.
Python SDK workflows are local and focus on validation, inference, and preprocessing errors. Julia SDK workflows also include API key and core installation errors.

Common Failure Modes

| Category | Typical Symptom | First Check |
| --- | --- | --- |
| Authentication | Julia core install or model access fails | API key, network, cached core state |
| Data shape | Dimension mismatch | Expected feature and trial shape |
| Invalid values | NaN, Inf, or unstable predictions | Preprocessing output and export path |
| Model mismatch | Low confidence or feature errors | Feature count, class count, model family |
| Poor preprocessing | Near-chance accuracy | Frequency band, artifacts, normalization |
| Streaming drift | Confidence falls over time | Session state, electrode quality, fatigue |

Validate Before Inference

Check the data contract before calling model APIs:
  • Features are finite and non-constant.
  • Feature count matches metadata and model expectations.
  • Labels match trial count and class encoding.
  • Test data uses training normalization parameters.
  • Streaming chunks match (n_features, chunk_size).
```python
import numpy as np

def validate_features(X, expected_features=None):
    """Reject non-finite or constant features before inference."""
    if not np.isfinite(X).all():
        raise ValueError("Features contain NaN or Inf")
    # Constant features carry no information and break normalization.
    if (X.std(axis=0) == 0).any():
        raise ValueError("One or more features are constant")
    if expected_features is not None and X.shape[-1] != expected_features:
        raise ValueError("Feature count mismatch")
```

```julia
function validate_chunk(chunk, metadata)
    size(chunk, 1) == metadata.n_features || error("Feature count mismatch")
    all(isfinite, chunk) || error("Chunk contains NaN or Inf")
    return true
end
```
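One checklist item above, reusing training normalization parameters on test and streaming data, is easy to get wrong. A minimal NumPy sketch (the helper names are illustrative, not SDK API):

```python
import numpy as np

def fit_normalization(X_train):
    """Estimate per-feature mean/std once, on training data only."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant features
    return mu, sigma

def apply_normalization(X, mu, sigma):
    """Reuse the saved training parameters; never refit on test data."""
    return (X - mu) / sigma
```

Persist `mu` and `sigma` alongside model metadata so cross-session inference uses the same scaling as training.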

Confidence Gates

Do not map every prediction directly to an action. Use thresholds that match application risk.
| Confidence | Suggested Action |
| --- | --- |
| >= 0.9 | Execute low-risk command or accept trial. |
| 0.7-0.9 | Ask for confirmation or show alternatives. |
| 0.5-0.7 | Reject or request a clearer trial. |
| < 0.5 | Stop, recalibrate, or run diagnostics. |
```python
if confidence >= 0.9:
    execute(prediction)
elif confidence >= 0.7:
    request_confirmation(prediction)
elif confidence >= 0.5:
    reject_trial("low confidence")
else:
    stop_and_recalibrate()
```

Preprocessing Diagnostics

When confidence is unexpectedly low, check data quality before tuning model hyperparameters.
```julia
report = diagnose_preprocessing(bci_data)

if !isempty(report.errors)
    error("Preprocessing failed: $(report.errors)")
end

if report.quality_score < 0.7
    @warn "Low preprocessing quality" score=report.quality_score
end
```
See Preprocessing Requirements and Feature Normalization for upstream fixes.
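On the Python side, a comparable pre-flight check can be done with plain NumPy before touching model hyperparameters. The function and thresholds below are an illustrative sketch, not SDK API:

```python
import numpy as np

def diagnose_features(X):
    """Return a list of data-quality problems found in a feature array."""
    problems = []
    if not np.isfinite(X).all():
        problems.append("non-finite values (NaN/Inf)")
        return problems  # further stats are meaningless with NaN/Inf
    if (X.var(axis=0) == 0).any():
        problems.append("constant (zero-variance) features")
    # A very large dynamic range often indicates missed normalization.
    if np.abs(X).max() > 1e3:
        problems.append("unnormalized scale (|x| > 1e3)")
    return problems
```

An empty list means the basics look sane; anything else points back at preprocessing rather than the model.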

Streaming Recovery

Streaming systems should skip bad chunks, report error rates, and stop when consecutive errors exceed a safe limit.
```julia
consecutive_errors = 0
max_errors = 10

for chunk in stream
    try
        validate_chunk(chunk, metadata)
        result = process_chunk(session, chunk)
        consecutive_errors = 0
    catch err
        consecutive_errors += 1
        @warn "Skipping invalid chunk" exception=err

        if consecutive_errors >= max_errors
            error("Stopping stream after repeated chunk failures")
        end
    end
end
```
For chunk sizing and aggregation decisions, see Streaming Inference Configuration.
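The skip-report-stop policy can also be packaged as a small state object for local testing. The class below is an illustrative Python sketch, not SDK API; it captures only the error-counting and error-rate reporting logic:

```python
class ChunkErrorTracker:
    """Track consecutive and total chunk failures for a stream."""

    def __init__(self, max_consecutive=10):
        self.max_consecutive = max_consecutive
        self.consecutive = 0
        self.total = 0
        self.seen = 0

    def ok(self):
        """Record a successfully processed chunk."""
        self.seen += 1
        self.consecutive = 0

    def fail(self):
        """Record a failed chunk; stop after too many in a row."""
        self.seen += 1
        self.total += 1
        self.consecutive += 1
        if self.consecutive >= self.max_consecutive:
            raise RuntimeError("Stopping stream after repeated chunk failures")

    @property
    def error_rate(self):
        return self.total / self.seen if self.seen else 0.0
```

Reporting `error_rate` per session makes gradual degradation visible long before the consecutive-error limit trips.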

Julia Setup Errors

For Julia SDK deployments:
  • Run NimbusSDK.install_core(api_key) during setup, not inside a hot inference loop.
  • Cache credentials/core installation where appropriate.
  • Handle missing or invalid API keys before starting acquisition.
  • Keep offline inference assumptions explicit in deployment docs.
```julia
using NimbusSDK

if !NimbusSDK.is_core_installed()
    NimbusSDK.install_core(ENV["NIMBUS_API_KEY"])
end
```

Production Checklist

  • Validate every calibration file before training.
  • Save model metadata, normalization parameters, and SDK versions.
  • Log prediction, confidence, posterior summary, latency, and rejection reason.
  • Separate classifier output from command execution.
  • Use stricter thresholds for safety-critical actions.
  • Monitor session-level confidence trends and rejection rates.
  • Provide a fallback path for stream loss, repeated bad chunks, or recalibration needs.
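The logging item in the checklist can be as simple as one structured record per trial, emitted as JSON lines. Field names below are illustrative, not a prescribed schema:

```python
import json
import time

def log_trial(prediction, confidence, latency_ms,
              rejected_reason=None, posterior=None):
    """Build one structured JSON record per trial."""
    record = {
        "timestamp": time.time(),
        "prediction": prediction,
        "confidence": round(confidence, 3),
        "latency_ms": latency_ms,
        "rejected_reason": rejected_reason,
        "posterior": posterior,
    }
    return json.dumps(record)
```

Appending these records to a per-session file is enough to chart confidence trends and rejection rates later.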

Troubleshooting Quick Reference

| Issue | Likely Cause | Fix |
| --- | --- | --- |
| Accuracy near chance | Raw EEG or wrong feature band | Revisit preprocessing and feature extraction. |
| Confidence always low | Poor normalization or noisy trial | Run diagnostics and reuse train normalization. |
| Dimension mismatch | Feature count differs from metadata | Check export shape and model metadata. |
| Works offline, fails streaming | Chunk shape or state handling | Validate chunks and reset sessions between trials. |
| Cross-session drop | Electrode/session scale shift | Use saved normalization params. |

Next Read

Development Workflow

Project structure and production guardrails.

Streaming Configuration

Chunking, aggregation, and quality gates.

Preprocessing Requirements

Prevent data-quality failures upstream.

Basic Examples

Compact recipes for common workflows.