Error Handling for BCI Systems

Robust error handling is critical for BCI applications, especially in medical and assistive technology contexts where system failures can have serious consequences. NimbusSDK provides comprehensive validation and error detection to ensure reliable operation.

Error Categories

Authentication Errors

Common installation and setup issues:
using NimbusSDK

# 1. Invalid API Key Format
try
    NimbusSDK.install_core("short_key")  # Too short
catch e
    # Error: "Invalid API key format. API keys must be at least 20 characters."
    println("Installation failed: Please check your API key")
end

# 2. Core Already Installed
# The core only needs to be installed once per machine.
# Check before installing to avoid redundant work:
if NimbusSDK.is_core_installed()
    println("Core already installed - no need to reinstall")
else
    NimbusSDK.install_core("your-api-key")
end

# 3. Network Issues During Installation
try
    NimbusSDK.install_core("your-api-key")
catch e
    # Error: "Cannot connect to Nimbus API. Please check your internet connection."
    println("Please check your internet connection and try again")
end
Best Practice: Handle Installation Errors Gracefully
function safe_install_core(api_key::String; max_retries::Int=3)
    # Check if already installed
    if NimbusSDK.is_core_installed()
        println("✓ Core already installed")
        return true
    end
    
    for attempt in 1:max_retries
        try
            NimbusSDK.install_core(api_key)
            println("✓ Core installed successfully")
            return true
        catch e
            @warn "Installation attempt $attempt failed" exception=e
            if attempt < max_retries
                sleep_time = 2^attempt  # Exponential backoff
                println("Retrying in $sleep_time seconds...")
                sleep(sleep_time)
            else
                @error "Installation failed after $max_retries attempts"
                return false
            end
        end
    end
    return false
end

# Usage
if !safe_install_core("your-api-key")
    println("❌ Cannot proceed without core installation")
    exit(1)
end

Data Validation Errors

NimbusSDK validates data before inference to catch common issues:
using NimbusSDK

# 1. NaN or Inf values
metadata = BCIMetadata(sampling_rate = 250.0, n_features = 16, n_classes = 4)
bad_features = randn(16, 250, 10)
bad_features[5, 100, 3] = NaN  # Inject NaN

try
    data = BCIData(bad_features, metadata)
    # Error: "Data contains NaN values. Please clean your data before inference."
catch e
    println("Data validation failed: Clean your data")
    # Fix: remove or interpolate NaN values, e.g. with a helper you define
    clean_features = replace_nan_with_mean(bad_features)
end

# 2. Dimension mismatches
wrong_dim_features = randn(16, 250, 10, 2)  # 4D instead of 3D

try
    data = BCIData(wrong_dim_features, metadata)
    # Error: "Features must be 2D or 3D array, got 4D"
catch e
    println("Dimension error: Expected 2D or 3D array")
end

# 3. Feature dimension mismatch
metadata = BCIMetadata(
    sampling_rate = 250.0,
    n_features = 16,  # Expects 16 features
    n_classes = 4
)
features = randn(8, 250, 10)  # Only 8 features!

try
    data = BCIData(features, metadata)
    model = load_model(RxLDAModel, "motor_imagery_4class_v1")
    results = predict_batch(model, data)
    # Error: "Data has 8 features but model expects 16 features"
catch e
    println("Feature dimension mismatch: Check your preprocessing")
end
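The fix in the first example references replace_nan_with_mean, which is not an SDK function; a minimal sketch of such a helper, replacing each NaN with the mean of the remaining values, might look like this:

```julia
using Statistics

# Hypothetical helper: replace NaN entries with the mean of the non-NaN values.
function replace_nan_with_mean(features::AbstractArray)
    cleaned = copy(features)
    valid = filter(!isnan, vec(cleaned))
    isempty(valid) && error("All values are NaN - nothing to interpolate from")
    fill_value = mean(valid)
    cleaned[isnan.(cleaned)] .= fill_value  # logical indexing over NaN positions
    return cleaned
end
```

In practice, per-channel interpolation is usually preferable to a single global mean, but the principle is the same.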
Best Practice: Pre-validation Function
using Statistics  # for std

function validate_bci_data(features, metadata)
    errors = String[]
    warnings = String[]
    
    # Check for NaN/Inf
    if any(isnan.(features))
        push!(errors, "Data contains NaN values")
    end
    if any(isinf.(features))
        push!(errors, "Data contains Inf values")
    end
    
    # Check dimensions
    if ndims(features) < 2 || ndims(features) > 3
        push!(errors, "Features must be 2D or 3D, got $(ndims(features))D")
    end
    
    # Check feature dimension
    actual_features = size(features, 1)
    if actual_features != metadata.n_features
        push!(errors, "Feature count mismatch: $actual_features vs $(metadata.n_features)")
    end
    
    # Check for suspicious statistics
    feature_std = std(features)
    if feature_std < 1e-10
        push!(warnings, "Very small std dev ($feature_std) - data may not be scaled")
    end
    if feature_std > 1e6
        push!(warnings, "Very large std dev ($feature_std) - consider normalization")
    end
    
    # Report issues
    if !isempty(errors)
        println("❌ Validation Errors:")
        for err in errors
            println("  • $err")
        end
        return false
    end
    
    if !isempty(warnings)
        println("⚠️  Validation Warnings:")
        for warn in warnings
            println("  • $warn")
        end
    end
    
    return true
end

# Usage
if validate_bci_data(features, metadata)
    data = BCIData(features, metadata)
    results = predict_batch(model, data)
else
    println("Fix data issues before inference")
end

Model Compatibility Errors

using NimbusSDK

# Load a 4-class motor imagery model
model = load_model(RxLDAModel, "motor_imagery_4class_v1")

# But provide 2-class data
metadata = BCIMetadata(
    sampling_rate = 250.0,
    n_features = 16,
    n_classes = 2  # Wrong!
)

try
    results = predict_batch(model, data)
    # Error: "Data has 2 classes but model expects 4 classes"
catch e
    println("Class count mismatch: Use the correct model for your data")
end
Best Practice: Check Compatibility Before Inference
function check_compatibility(data::BCIData, model)
    # Get model requirements
    model_features = get_n_features(model)
    model_classes = get_n_classes(model)
    
    # Compare
    compatible = true
    
    if data.metadata.n_features != model_features
        @error "Feature mismatch" data=data.metadata.n_features model=model_features
        compatible = false
    end
    
    if data.metadata.n_classes != model_classes
        @error "Class count mismatch" data=data.metadata.n_classes model=model_classes
        compatible = false
    end
    
    return compatible
end

# Usage
if check_compatibility(data, model)
    results = predict_batch(model, data)
else
    println("Model and data are incompatible")
end

Training Errors

using NimbusSDK

# 1. Missing labels
features = randn(16, 250, 50)
metadata = BCIMetadata(...)
data = BCIData(features, metadata)  # No labels!

try
    model = train_model(RxLDAModel, data)
    # Error: "Training requires labeled data. Please provide labels in BCIData."
catch e
    println("Training requires labeled data")
end

# Fix: Provide labels
labels = rand(1:4, 50)
data_with_labels = BCIData(features, metadata, labels)
model = train_model(RxLDAModel, data_with_labels)  # ✓ Works
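Beyond simply providing labels, it is worth confirming that every class is actually represented before training; a small pre-training check (not part of the SDK, names are illustrative) could look like:

```julia
# Hypothetical pre-training check: ensure every expected class appears in the labels.
function check_label_coverage(labels::AbstractVector{<:Integer}, n_classes::Int)
    counts = zeros(Int, n_classes)
    for label in labels
        # Labels are 1-indexed, matching the SDK's convention
        1 <= label <= n_classes || error("Label $label outside 1:$n_classes")
        counts[label] += 1
    end
    missing_classes = findall(iszero, counts)
    if !isempty(missing_classes)
        @warn "Some classes have no training examples" missing_classes
        return false
    end
    return true
end
```

Calling check_label_coverage(labels, 4) before train_model catches a class with zero examples early, rather than discovering it as a degenerate class model later.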

Streaming Errors

using NimbusSDK

# 1. Chunk size mismatch
metadata = BCIMetadata(
    chunk_size = 250,  # Expects 250 samples per chunk
    # ... other params
)

session = init_streaming(model, metadata)

# But provide wrong-sized chunk
wrong_chunk = randn(16, 125)  # Only 125 samples!

# A chunk size mismatch is logged as a warning; processing continues
result = process_chunk(session, wrong_chunk)
# Warning: "Chunk size mismatch" expected=250 actual=125

# 2. Feature dimension mismatch
wrong_features_chunk = randn(8, 250)  # Only 8 features instead of 16!

try
    result = process_chunk(session, wrong_features_chunk)
    # Error: "Chunk feature dimension (8) does not match metadata (16)"
catch e
    println("Feature dimension mismatch in chunk")
end
Best Practice: Validate Chunks Before Processing
function safe_process_chunk(session, chunk, metadata)
    # Validate chunk dimensions
    n_features, n_samples = size(chunk)
    
    if n_features != metadata.n_features
        @error "Feature dimension mismatch" expected=metadata.n_features actual=n_features
        return nothing
    end
    
    if !isnothing(metadata.chunk_size) && n_samples != metadata.chunk_size
        @warn "Chunk size mismatch" expected=metadata.chunk_size actual=n_samples
    end
    
    # Check for invalid values
    if any(isnan.(chunk)) || any(isinf.(chunk))
        @error "Chunk contains NaN or Inf values"
        return nothing
    end
    
    # Process chunk
    try
        result = process_chunk(session, chunk)
        return result
    catch e
        @error "Chunk processing failed" exception=e
        return nothing
    end
end

# Usage in streaming loop
for raw_chunk in eeg_stream
    result = safe_process_chunk(session, raw_chunk, metadata)
    
    if !isnothing(result)
        handle_prediction(result)
    else
        # Handle failed chunk
        @warn "Skipping invalid chunk"
    end
end

Warnings vs Errors

NimbusSDK distinguishes between critical errors, which throw exceptions and halt execution, and warnings, which are logged while execution continues:

Errors (Execution Stops)

  • NaN/Inf in data: Cannot perform inference
  • Dimension mismatches: Incompatible with model
  • Missing labels: Cannot train without labels
  • Invalid API key format: Authentication fails

Warnings (Execution Continues)

  • Low sampling rate: Below recommended 250 Hz
  • Very high sampling rate: Unusually high (>10 kHz)
  • Small/large feature std dev: Potentially poor scaling
  • Chunk size mismatch: Unexpected chunk size
  • Paradigm mismatch: Data paradigm doesn’t match model
  • Large feature range: Recommend standardization
Example: Handling Warnings
using NimbusSDK
using Logging

# Configure logging to capture warnings
logger = ConsoleLogger(stdout, Logging.Warn)

with_logger(logger) do
    # Low sampling rate triggers warning but continues
    metadata = BCIMetadata(
        sampling_rate = 128.0,  # Low!
        n_features = 16,
        n_classes = 4
    )
    # Warning: "Low sampling rate detected (128.0 Hz). Recommend ≥ 250 Hz..."
    
    data = BCIData(features, metadata)
    results = predict_batch(model, data)  # Still works, but accuracy may suffer
end

Preprocessing Diagnostics

Use built-in diagnostics to identify data quality issues:
using NimbusSDK

# Collect a diagnostic trial
features = collect_your_eeg_trial()  # Your acquisition code
metadata = BCIMetadata(...)
data = BCIData(features, metadata)

# Run diagnostics
report = diagnose_preprocessing(data)

# Check quality
println("Quality Score: $(round(report.quality_score * 100, digits=1))%")

if report.quality_score < 0.7
    println("\n⚠️  Data quality issues detected:")
    
    # Show warnings
    if !isempty(report.warnings)
        for warning in report.warnings
            println("  • $warning")
        end
    end
    
    # Show recommendations
    if !isempty(report.recommendations)
        println("\nRecommendations:")
        for rec in report.recommendations
            println("  • $rec")
        end
    end
    
    # Decide whether to proceed
    if report.quality_score < 0.5
        println("\n❌ Quality too low - cannot proceed")
        exit(1)
    else
        println("\n⚠️  Proceeding with caution")
    end
else
    println("✓ Data quality is acceptable")
end

Production Error Handling Pattern

Complete error handling for production systems:
using NimbusSDK
using Logging
using Statistics  # for mean

function production_bci_inference(api_key::String, model_name::String, 
                                  features::Array, metadata::BCIMetadata)
    # 1. Authentication
    try
        NimbusSDK.install_core(api_key)
    catch e
        @error "Authentication failed" exception=e
        return nothing
    end
    
    # 2. Load model
    local model
    try
        model = load_model(RxLDAModel, model_name)
    catch e
        @error "Failed to load model" model_name exception=e
        return nothing
    end
    
    # 3. Validate data
    if !validate_bci_data(features, metadata)
        @error "Data validation failed"
        return nothing
    end
    
    # 4. Create BCIData
    local data
    try
        data = BCIData(features, metadata)
    catch e
        @error "Failed to create BCIData" exception=e
        return nothing
    end
    
    # 5. Check compatibility
    if !check_compatibility(data, model)
        @error "Model-data incompatibility"
        return nothing
    end
    
    # 6. Run inference
    local results
    try
        results = predict_batch(model, data)
    catch e
        @error "Inference failed" exception=e
        return nothing
    end
    
    # 7. Validate results
    if mean(results.confidences) < 0.5
        @warn "Low average confidence" mean_conf=mean(results.confidences)
    end
    
    return results
end

# Usage
results = production_bci_inference(
    "your-api-key",
    "motor_imagery_4class_v1",
    features,
    metadata
)

if !isnothing(results)
    println("✓ Inference successful")
    println("Accuracy: $(evaluate_accuracy(results))")
else
    println("❌ Inference pipeline failed - check logs")
end

Streaming Error Recovery

Robust streaming with automatic recovery:
using NimbusSDK

function robust_streaming_session(model, metadata, eeg_stream; 
                                  max_consecutive_errors::Int=10)
    session = init_streaming(model, metadata)
    error_count = 0
    total_chunks = 0
    successful_chunks = 0
    
    for (i, raw_chunk) in enumerate(eeg_stream)
        total_chunks += 1
        
        try
            # Validate chunk dimensions
            if size(raw_chunk, 1) != metadata.n_features
                throw(DimensionMismatch("Feature count mismatch"))
            end
            
            # Process chunk
            result = process_chunk(session, raw_chunk)
            
            # Success - reset error counter
            error_count = 0
            successful_chunks += 1
            
            # Handle result
            handle_bci_command(result)
            
        catch e
            error_count += 1
            @warn "Chunk processing failed" chunk=i exception=e
            
            if error_count >= max_consecutive_errors
                @error "Too many consecutive errors - aborting session" 
                break
            end
            
            # Skip this chunk and continue
            continue
        end
        
        # Report progress periodically
        if total_chunks % 100 == 0
            success_rate = successful_chunks / total_chunks
            println("Processed $total_chunks chunks ($(round(success_rate*100, digits=1))% success)")
        end
    end
    
    # Final report
    println("\n=== Session Complete ===")
    println("Total chunks: $total_chunks")
    println("Successful: $successful_chunks")
    println("Failed: $(total_chunks - successful_chunks)")
end

# Usage
robust_streaming_session(model, metadata, eeg_stream)

Logging Configuration

Configure logging for different environments:
using Logging

# Development: Verbose logging
dev_logger = ConsoleLogger(stdout, Logging.Debug)

# Production: Errors and warnings only
prod_logger = ConsoleLogger(stdout, Logging.Warn)

# Custom: Log to file
# Note: the file handle stays open for the logger's lifetime;
# close it when the session ends so buffered output is flushed.
function create_file_logger(path::String)
    io = open(path, "w")
    return ConsoleLogger(io, Logging.Info)
end

# Use logger
global_logger(prod_logger)

# Or use for specific blocks
with_logger(dev_logger) do
    # Detailed logging for this section
    results = predict_batch(model, data)
end
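To confirm a logging configuration behaves as intended, you can route a block's log records into an in-memory buffer using only the standard library:

```julia
using Logging

# Capture everything at Info level and above into an in-memory buffer.
buffer = IOBuffer()
buffer_logger = SimpleLogger(buffer, Logging.Info)

with_logger(buffer_logger) do
    @info "inference started"
    @debug "this is filtered out"   # below the Info threshold
end

captured = String(take!(buffer))
println(captured)
```

The same pattern is useful in tests: run a pipeline under a buffer-backed logger and assert on the captured warnings.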

Troubleshooting Guide

Common Issues and Solutions

Issue | Symptom | Solution
------|---------|---------
NaN in predictions | results.predictions contains NaN | Check input data for NaN/Inf values
Very low confidence | All confidences < 0.3 | Run diagnose_preprocessing() - likely a feature extraction issue
Dimension errors | "Feature dimension mismatch" | Verify n_features matches your preprocessing output
Class mismatch | "Model expects X classes" | Use the correct model or verify your label encoding (1-indexed!)
Slow first inference | First prediction takes seconds | Normal JIT compilation - run warmup inferences
Authentication fails | Cannot validate API key | Check internet connection or use offline mode
Chunk size warnings | "Chunk size mismatch" | Ensure all chunks have a consistent size matching chunk_size
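The "slow first inference" row refers to Julia's just-in-time compilation; a warmup sketch, assuming the model, metadata, and SDK functions from the earlier sections are in scope, might be:

```julia
# Build a dummy batch with the same shape as real data (features x samples x trials).
make_dummy_features(n_features::Int, n_samples::Int, n_trials::Int) =
    randn(n_features, n_samples, n_trials)

# Warm up JIT compilation before real-time use.
# Assumes NimbusSDK is loaded and `model` and `metadata` are in scope.
function warmup_inference(model, metadata; n_samples::Int=250, n_trials::Int=2)
    dummy = make_dummy_features(metadata.n_features, n_samples, n_trials)
    data = BCIData(dummy, metadata)
    predict_batch(model, data)  # discard the result; only compilation matters
    return nothing
end
```

Run this once at startup so the first real prediction does not pay the compilation cost.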

Best Practices Summary

Error Handling Checklist

  • Always validate data before inference
  • Check compatibility between model and data
  • Handle authentication errors gracefully with offline fallback
  • Pre-allocate buffers in real-time loops to avoid errors
  • Use try-catch around critical operations
  • Log warnings but continue execution when safe
  • Implement retry logic for transient failures
  • Run preprocessing diagnostics when confidence is low
  • Monitor error rates in production systems
  • Test error paths during development
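The pre-allocation item can be sketched as follows; the 16-channel, 250-sample chunk shape is an assumption carried over from the earlier examples:

```julia
# Pre-allocate the chunk buffer once, outside the real-time loop,
# so each iteration reuses memory instead of allocating.
const N_FEATURES = 16
const CHUNK_SIZE = 250

chunk_buffer = Matrix{Float64}(undef, N_FEATURES, CHUNK_SIZE)

function fill_chunk!(buffer::Matrix{Float64}, raw_samples::AbstractMatrix)
    size(raw_samples) == size(buffer) ||
        throw(DimensionMismatch("unexpected chunk shape $(size(raw_samples))"))
    copyto!(buffer, raw_samples)  # in-place copy, no new allocation
    return buffer
end
```

Inside the streaming loop, fill_chunk!(chunk_buffer, raw_chunk) then hands the reused buffer to process_chunk, keeping per-chunk allocations (and GC pauses) out of the real-time path.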

Tip: Use diagnose_preprocessing() as your first debugging step when predictions are unexpectedly poor or confidence is low.