Basic BCI Examples

This section provides fundamental examples of using NimbusSDK for brain-computer interface applications. All examples use the actual Bayesian LDA (RxLDA) and Bayesian GMM (RxGMM) models with real Julia SDK code.
For detailed model information, see Bayesian LDA (RxLDA) and Bayesian GMM (RxGMM) documentation.

Motor Imagery Classification

Motor imagery BCI allows users to control devices by imagining movements. This is one of the most common BCI paradigms.

Basic 2-Class Motor Imagery

Classify left vs right hand motor imagery:
using NimbusSDK
using Statistics  # for mean()

# 1. Setup (one-time)
NimbusSDK.install_core("your-api-key")

# 2. Load pre-trained 2-class model
model = load_model(RxLDAModel, "motor_imagery_2class_v1")

# 3. Prepare your preprocessed CSP features
# Features must be CSP-transformed EEG (8-30 Hz bandpass)
# Shape: (n_features × n_samples × n_trials)
csp_features = load_your_csp_features()  # (8, 250, 100) - 8 CSP features, 1s trials, 100 trials
labels = [1, 2, 1, 2, ...]  # 1 = left hand, 2 = right hand
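
# Optional sanity checks (plain Julia, not SDK calls): confirm the
# (n_features × n_samples × n_trials) layout described above
@assert size(csp_features, 1) == 8               # n_features, must match metadata below
@assert size(csp_features, 3) == length(labels)  # one label per trial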

metadata = BCIMetadata(
    sampling_rate = 250.0,
    paradigm = :motor_imagery,
    feature_type = :csp,
    n_features = 8,
    n_classes = 2,
    chunk_size = nothing
)

data = BCIData(csp_features, metadata, labels)

# 4. Run batch inference
results = predict_batch(model, data; iterations=10)

# 5. Analyze results
accuracy = sum(results.predictions .== labels) / length(labels)
itr = calculate_ITR(accuracy, 2, 4.0)  # 2 classes, 4-second trials

println("Accuracy: $(round(accuracy * 100, digits=1))%")
println("ITR: $(round(itr, digits=1)) bits/minute")
println("Mean confidence: $(round(mean(results.confidences), digits=3))")

4-Class Motor Imagery

Classify left hand, right hand, feet, and tongue movements:
using NimbusSDK

# Authenticate and load 4-class model
NimbusSDK.install_core("your-api-key")
model = load_model(RxLDAModel, "motor_imagery_4class_v1")

# Prepare 4-class data
# Classes: 1=left hand, 2=right hand, 3=feet, 4=tongue
csp_features, labels = load_your_csp_features()  # your own loading function, as in the 2-class example
metadata = BCIMetadata(
    sampling_rate = 250.0,
    paradigm = :motor_imagery,
    feature_type = :csp,
    n_features = 16,  # More features for 4-class problem
    n_classes = 4
)

data = BCIData(csp_features, metadata, labels)

# Run inference
results = predict_batch(model, data)

# Per-class analysis
for class_id in 1:4
    class_mask = labels .== class_id
    class_acc = sum(results.predictions[class_mask] .== class_id) / sum(class_mask)
    println("Class $class_id accuracy: $(round(class_acc * 100, digits=1))%")
end

# Overall metrics
accuracy = sum(results.predictions .== labels) / length(labels)
itr = calculate_ITR(accuracy, 4, 4.0)
println("\nOverall accuracy: $(round(accuracy * 100, digits=1))%")
println("ITR: $(round(itr, digits=1)) bits/minute")

Training Custom Motor Imagery Model

Train a model on your own data:
using NimbusSDK

NimbusSDK.install_core("your-api-key")

# Collect training data
# Minimum: 50 trials per class recommended
# Optimal: 200+ trials per class
train_features, train_labels = collect_training_data()
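
# Optional: check class balance before training (plain Julia, not an SDK call);
# 50+ trials per class recommended, 200+ ideal, as noted above
for c in 1:4
    n_c = count(==(c), train_labels)
    n_c < 50 && @warn "Class $c has only $n_c trials; consider collecting more"
end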

train_data = BCIData(
    train_features,
    BCIMetadata(
        sampling_rate = 250.0,
        paradigm = :motor_imagery,
        feature_type = :csp,
        n_features = 16,
        n_classes = 4
    ),
    train_labels
)

# Train RxLDA model (faster, shared covariance)
rxlda_model = train_model(
    RxLDAModel,
    train_data;
    iterations = 50,
    showprogress = true,
    name = "my_motor_imagery",
    description = "Custom 4-class motor imagery"
)

# Or train RxGMM model (more flexible, class-specific covariances)
rxgmm_model = train_model(
    RxGMMModel,
    train_data;
    iterations = 50,
    showprogress = true,
    name = "my_motor_imagery_gmm"
)

# Save trained model
save_model(rxlda_model, "my_motor_imagery.jld2")

# Evaluate on held-out test data (test_data and test_labels prepared the same way as train_data)
test_results = predict_batch(rxlda_model, test_data)
test_accuracy = sum(test_results.predictions .== test_labels) / length(test_labels)
println("Test accuracy: $(round(test_accuracy * 100, digits=1))%")

P300 Classification

P300 BCIs detect attention-related brain responses to visual stimuli.

Binary P300 Classification

Detect target vs non-target stimuli:
using NimbusSDK
using Statistics  # for mean()

# Setup and load P300 model
NimbusSDK.install_core("your-api-key")
model = load_model(RxLDAModel, "p300_binary_v1")

# Prepare P300 ERP features
# Features extracted from 0.2-0.8s post-stimulus window
# Typical: Bandpower or ERP amplitudes from Pz, Cz, Fz
erp_features = load_p300_features()  # (12, 150, 200) - 12 features, 0.6s @ 250Hz, 200 trials
labels = [1, 2, 2, 2, 1, 2, ...]  # 1=target, 2=non-target

metadata = BCIMetadata(
    sampling_rate = 250.0,
    paradigm = :p300,
    feature_type = :erp,
    n_features = 12,
    n_classes = 2
)

data = BCIData(erp_features, metadata, labels)

# Run inference
results = predict_batch(model, data)

# Analyze target detection
target_mask = labels .== 1
target_detected = sum(results.predictions[target_mask] .== 1)
target_total = sum(target_mask)

println("Target detection rate: $(round(100 * target_detected / target_total, digits=1))%")
println("Mean target confidence: $(round(mean(results.confidences[target_mask]), digits=3))")

P300 Speller Application

Implement a P300-based speller:
using NimbusSDK
using Statistics  # for mean()

NimbusSDK.install_core("your-api-key")
model = load_model(RxLDAModel, "p300_speller_v1")

# Spelling matrix (6x6 for 36 characters: A-Z and 0-9)
matrix = reshape(vcat(collect('A':'Z'), collect('0':'9')), 6, 6)

# ERP metadata for the speller features (values mirror the binary P300 example above;
# adjust n_features to match your own feature extraction)
metadata = BCIMetadata(
    sampling_rate = 250.0,
    paradigm = :p300,
    feature_type = :erp,
    n_features = 12,
    n_classes = 2
)

"""
    spell_character(row_col_trials)

Spell one character using the row/column paradigm.
`row_col_trials` is a Dict with `:rows` and `:cols` feature arrays, each holding
6 rows/columns × 5 repetitions = 30 trials, grouped consecutively by row/column.
"""
function spell_character(row_col_trials)
    # Analyze row flashes: average the target-class posterior over each row's repetitions
    row_data = BCIData(row_col_trials[:rows], metadata)
    row_results = predict_batch(model, row_data)
    row_scores = [mean(row_results.posteriors[1, i:i+4]) for i in 1:5:30]
    target_row = argmax(row_scores)

    # Analyze column flashes
    col_data = BCIData(row_col_trials[:cols], metadata)
    col_results = predict_batch(model, col_data)
    col_scores = [mean(col_results.posteriors[1, i:i+4]) for i in 1:5:30]
    target_col = argmax(col_scores)

    # Character at the intersection; confidence is the mean of the two winning scores
    character = matrix[target_row, target_col]
    confidence = (row_scores[target_row] + col_scores[target_col]) / 2

    return (character=character, confidence=confidence)
end

# Spell a word (collect characters in a vector to avoid reassigning a global inside the loop)
spelled = Char[]
for _ in 1:5  # Spell 5 characters
    trials = collect_p300_trials()  # Your acquisition function
    result = spell_character(trials)

    if result.confidence > 0.7
        push!(spelled, result.character)
        println("Spelled: $(result.character) (confidence: $(round(result.confidence, digits=2)))")
    else
        println("Low confidence - retry")
    end
end

println("\nSpelled word: $(String(spelled))")

Real-Time Streaming Examples

Streaming Motor Imagery Control

Real-time cursor control with streaming inference:
using NimbusSDK

# Setup
NimbusSDK.install_core("your-api-key")
model = load_model(RxLDAModel, "motor_imagery_4class_v1")

metadata = BCIMetadata(
    sampling_rate = 250.0,
    paradigm = :motor_imagery,
    feature_type = :csp,
    n_features = 16,
    n_classes = 4,
    chunk_size = 250  # 1-second chunks
)

# Initialize streaming session
session = init_streaming(model, metadata)

# Process real-time EEG chunks
cursor_position = [0.0, 0.0]  # [x, y]
movement_speed = 0.05

for chunk in eeg_stream
    # Process chunk (16 features × 250 samples)
    chunk_result = process_chunk(session, chunk)
    
    # Update cursor based on prediction
    if chunk_result.confidence > 0.7
        if chunk_result.prediction == 1  # Left hand
            cursor_position[1] -= movement_speed
        elseif chunk_result.prediction == 2  # Right hand
            cursor_position[1] += movement_speed
        elseif chunk_result.prediction == 3  # Feet
            cursor_position[2] -= movement_speed
        elseif chunk_result.prediction == 4  # Tongue
            cursor_position[2] += movement_speed
        end
        
        update_cursor_display(cursor_position)
    end
end
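
The loop above iterates over eeg_stream, which is assumed to yield (16 × 250) feature chunks. One minimal way to build such an iterator from a continuous feature matrix (a plain-Julia sketch; chunk_stream and my_session_features are illustrative names, not part of the SDK):

# Turn a continuous (16 × N) feature matrix into 1-second (16 × 250) chunks
function chunk_stream(features::AbstractMatrix; chunk_size::Int = 250)
    n_chunks = size(features, 2) ÷ chunk_size
    return (features[:, (i-1)*chunk_size+1 : i*chunk_size] for i in 1:n_chunks)
end

eeg_stream = chunk_stream(my_session_features)  # my_session_features: 16 × N feature matrix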

Adaptive Threshold Control

Adjust control sensitivity based on performance:
using NimbusSDK

# Setup streaming
session = init_streaming(model, metadata)
tracker = OnlinePerformanceTracker(window_size=20)

# Adaptive parameters
confidence_threshold = 0.7
update_interval = 10

trial_count = 0

for trial in trials
    # `global` is needed when this loop runs at the top level of a script
    global trial_count, confidence_threshold
    trial_count += 1
    
    # Process trial
    for chunk in collect_trial_chunks()
        result = process_chunk(session, chunk)
        
        # Execute command if confidence exceeds threshold
        if result.confidence > confidence_threshold
            execute_command(result.prediction)
        end
    end
    
    final_result = finalize_trial(session)
    
    # Update tracker with ground truth (if available)
    if !isnothing(ground_truth)
        metrics = update_and_report!(tracker, final_result.prediction, ground_truth, final_result.confidence)
        
        # Adapt threshold every 10 trials
        if trial_count % update_interval == 0
            if metrics.accuracy > 0.85
                # High accuracy - can lower threshold for faster response
                confidence_threshold = max(0.6, confidence_threshold * 0.95)
                println("Lowering threshold to $(round(confidence_threshold, digits=2))")
            elseif metrics.accuracy < 0.65
                # Low accuracy - raise threshold for reliability
                confidence_threshold = min(0.85, confidence_threshold * 1.05)
                println("Raising threshold to $(round(confidence_threshold, digits=2))")
            end
        end
    end
end

Subject-Specific Calibration

Quick Calibration for New Users

Personalize a model with minimal calibration data:
using NimbusSDK

NimbusSDK.install_core("your-api-key")

# Load baseline model
baseline_model = load_model(RxLDAModel, "motor_imagery_baseline_v1")

# Collect quick calibration (10-20 trials per class)
println("Collecting calibration data...")
calib_features, calib_labels = collect_calibration_trials(trials_per_class=15)

calib_data = BCIData(
    calib_features,
    BCIMetadata(
        sampling_rate = 250.0,
        paradigm = :motor_imagery,
        feature_type = :csp,
        n_features = 16,
        n_classes = 4
    ),
    calib_labels
)

# Calibrate model (much faster than training from scratch)
println("Calibrating model...")
personalized_model = calibrate_model(
    baseline_model,
    calib_data;
    iterations = 20  # Fewer iterations needed
)

# Save personalized model
save_model(personalized_model, "subject_123_personalized.jld2")

# Test improvement
println("\nTesting on validation data...")
baseline_results = predict_batch(baseline_model, validation_data)
personalized_results = predict_batch(personalized_model, validation_data)

baseline_acc = sum(baseline_results.predictions .== val_labels) / length(val_labels)
personalized_acc = sum(personalized_results.predictions .== val_labels) / length(val_labels)

println("Baseline accuracy: $(round(baseline_acc * 100, digits=1))%")
println("Personalized accuracy: $(round(personalized_acc * 100, digits=1))%")
println("Improvement: $(round((personalized_acc - baseline_acc) * 100, digits=1))%")

Quality Assessment and Diagnostics

Identify Poor Quality Trials

using NimbusSDK

# Run inference
results = predict_batch(model, data)

# Identify low-confidence trials
low_conf_threshold = 0.65
low_conf_indices = findall(results.confidences .< low_conf_threshold)

println("Low confidence trials: $(length(low_conf_indices))/$(length(results.predictions))")
println("Indices: $low_conf_indices")

# Analyze patterns in low-confidence trials
if !isempty(low_conf_indices)
    println("\nLow confidence trial analysis:")
    for (class_id, class_name) in enumerate(["Left", "Right", "Feet", "Tongue"])
        class_low_conf = sum(labels[low_conf_indices] .== class_id)
        println("  $class_name: $class_low_conf trials")
    end
end

# Overall quality assessment
quality = assess_trial_quality(results)
println("\nOverall quality score: $(round(quality.overall_score, digits=2))")
println("Recommendation: $(quality.recommendation)")

Preprocessing Quality Check

using NimbusSDK

# Diagnose preprocessing quality
report = diagnose_preprocessing(data)

println("Preprocessing Quality Report")
println("="^50)
println("Overall score: $(round(report.quality_score * 100, digits=1))%")

if !isempty(report.errors)
    println("\n⚠️  ERRORS:")
    for error in report.errors
        println("  • $error")
    end
end

if !isempty(report.warnings)
    println("\n⚠️  WARNINGS:")
    for warning in report.warnings
        println("  • $warning")
    end
end

if !isempty(report.recommendations)
    println("\n💡 RECOMMENDATIONS:")
    for rec in report.recommendations
        println("  • $rec")
    end
end

# Feature discriminability analysis
if hasfield(typeof(report), :feature_scores)
    println("\nTop discriminative features:")
    top_features = sortperm(report.feature_scores, rev=true)[1:5]
    for (rank, feat_idx) in enumerate(top_features)
        println("  $rank. Feature $feat_idx (score: $(round(report.feature_scores[feat_idx], digits=3)))")
    end
end

Performance Comparison

RxLDA vs RxGMM

Compare the two models:
using NimbusSDK
using Statistics  # for mean()

NimbusSDK.install_core("your-api-key")

# Train both models on same data
println("Training RxLDA...")
rxlda = train_model(RxLDAModel, train_data; iterations=50, showprogress=true)

println("\nTraining RxGMM...")
rxgmm = train_model(RxGMMModel, train_data; iterations=50, showprogress=true)

# Test both models
println("\n" * "="^50)
println("Model Comparison")
println("="^50)

for (model, name) in [(rxlda, "RxLDA"), (rxgmm, "RxGMM")]
    results = predict_batch(model, test_data)
    accuracy = sum(results.predictions .== test_labels) / length(test_labels)
    mean_conf = mean(results.confidences)
    itr = calculate_ITR(accuracy, 4, 4.0)
    
    println("\n$name:")
    println("  Accuracy: $(round(accuracy * 100, digits=1))%")
    println("  Mean confidence: $(round(mean_conf, digits=3))")
    println("  ITR: $(round(itr, digits=1)) bits/min")
end

println("\nRecommendation:")
println("  • RxLDA: Faster, good for well-separated classes")
println("  • RxGMM: More flexible, better for overlapping distributions")

Next Steps

For details on how each model works, see the Bayesian LDA (RxLDA) and Bayesian GMM (RxGMM) documentation.

All examples use actual NimbusSDK.jl functions and the real RxLDA/RxGMM models. No mocked or simulated code.