
NimbusSDK.jl - Julia SDK Reference

Python SDK Users: Looking for Python documentation? See the Python SDK API Reference. This page documents the Julia SDK (NimbusSDK.jl).
The NimbusSDK.jl Julia package provides production-ready Bayesian inference for Brain-Computer Interface (BCI) applications. Built on RxInfer.jl, it offers three models (Bayesian LDA, Bayesian GMM, and Bayesian MPR) with batch and streaming inference capabilities.

Installation

NimbusSDK.jl is now available in the public Julia General Registry:
using Pkg

# Install the public wrapper (registered package)
Pkg.add("NimbusSDK")
After installing the wrapper, install the proprietary core with your license key:
using NimbusSDK

# Install the commercial core (one-time setup)
NimbusSDK.install_core("your-api-key-here")
Get your API key at: nimbusbci.com/dashboard

Requirements

  • Julia ≥ 1.9
  • Valid NimbusSDK license key
  • Preprocessed EEG features (CSP, bandpower, etc.) - not raw EEG
What changed? NimbusSDK.jl is now a public wrapper package in the Julia General Registry. The proprietary inference core (NimbusSDKCore) is automatically installed when you provide your API key. No more private registry setup needed!

Quick Start

using NimbusSDK

# One-time setup: Install core with your API key
NimbusSDK.install_core("your-api-key")

# Load model
model = load_model(RxLDAModel, "motor_imagery_4class_v1")

# Prepare data
data = BCIData(features, metadata, labels)

# Run inference
results = predict_batch(model, data)

Setup

install_core()

Install the proprietary NimbusSDKCore with your API key. This is a one-time setup that downloads and configures the commercial inference engine.
install_core(api_key::String) -> Bool
Parameters:
  • api_key::String - Your NimbusSDK API key (format: nbci_live_... or nbci_test_...)
Returns: true if installation successful
Example:
using NimbusSDK

# One-time setup (downloads and installs core)
NimbusSDK.install_core("nbci_live_...")

# After installation, you can use the SDK in any project
using NimbusSDK
model = load_model(RxLDAModel, "motor_imagery_4class_v1")
The core installation is persistent. You only need to run install_core() once per machine. After that, simply using NimbusSDK will work in any Julia project.

check_installation()

Verify that the core is installed and working correctly.
check_installation() -> Bool
Returns: true if core is installed and operational
Note: check_installation() is provided by the NimbusSDK wrapper package. For direct NimbusSDKCore usage, check authentication status using NimbusSDKCore.AUTH_STATE[].
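Example (a minimal sketch using the documented boolean return value):
using NimbusSDK

if NimbusSDK.check_installation()
    println("NimbusSDKCore is installed and operational")
else
    @warn "Core not found - run NimbusSDK.install_core(api_key) first"
end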

Authentication

For most users: Authentication is handled automatically by NimbusSDK.install_core(). The functions below are for advanced users working directly with NimbusSDKCore.

authenticate()

Authenticate with NimbusSDKCore using an API key. This function validates the key remotely and caches credentials locally.
authenticate(api_key::String; offline_mode::Bool=false) -> AuthSession
Parameters:
  • api_key::String - Your NimbusSDK API key (format: nbci_live_... or nbci_test_...)
  • offline_mode::Bool - If true, skip remote validation and use cached credentials (default: false)
Returns: AuthSession object containing authentication state
Example:
using NimbusSDKCore

# Authenticate online (validates with API)
session = NimbusSDKCore.authenticate("nbci_live_your_key_here")

# Authenticate offline (uses cached credentials)
session = NimbusSDKCore.authenticate("nbci_live_your_key_here"; offline_mode=true)

invalidate_session()

Clear the current authentication session and cached credentials.
invalidate_session() -> Nothing
Example:
using NimbusSDKCore

# Clear authentication
NimbusSDKCore.invalidate_session()

AuthSession

Authentication session object containing license information and API state.
struct AuthSession
    api_key::String
    user_id::String
    license_type::Symbol  # :trial, :research, :commercial, :enterprise
    expires_at::DateTime
    features_enabled::Vector{Symbol}  # [:batch_inference, :streaming, :training, ...]
    usage_quota::Union{Int, Nothing}  # Remaining quota (nothing = unlimited)
    usage_quota_max::Union{Int, Nothing}  # Maximum quota
    refresh_token::Union{String, Nothing}  # Refresh token for renewing access
end

Key Management Functions

save_api_key()

Save an API key to local storage for later use.
save_api_key(api_key::String) -> Nothing

get_stored_api_key()

Retrieve a previously saved API key.
get_stored_api_key() -> Union{String, Nothing}
Returns: API key string or nothing if no key is stored

delete_stored_api_key()

Delete a stored API key from local storage.
delete_stored_api_key() -> Nothing

authenticate_from_storage()

Authenticate using a previously saved API key.
authenticate_from_storage() -> AuthSession
Returns: AuthSession if authentication successful
Throws: Error if no stored key or authentication fails
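Example (a minimal sketch combining the key-management helpers above; the key value is a placeholder):
using NimbusSDKCore

# Save the key once
save_api_key("nbci_live_your_key_here")

# Later sessions: authenticate without re-entering the key
session = authenticate_from_storage()

# Remove the cached key when it is no longer needed
delete_stored_api_key()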

Quota Management

refresh_quota()

Refresh API quota information from the server.
refresh_quota() -> AuthSession
Returns: Updated AuthSession with refreshed quota information

check_quota_and_refresh()

Check quota status and refresh if needed.
check_quota_and_refresh() -> Bool
Returns: true if quota is available, false if exhausted
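Example (a minimal sketch; assumes an authenticated session):
using NimbusSDKCore

# Pull the latest quota information from the server
session = refresh_quota()

# Guard inference on remaining quota
if check_quota_and_refresh()
    results = predict_batch(model, data)
else
    @warn "API quota exhausted"
end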

get_quota_status()

Get current quota status without refreshing.
get_quota_status() -> NamedTuple
Returns: Named tuple with remaining::Int, monthly_limit::Int, usage_percentage::Float64
Example:
using NimbusSDKCore

# Check quota
status = get_quota_status()
println("Quota remaining: $(status.remaining) / $(status.monthly_limit)")
println("Usage: $(round(status.usage_percentage, digits=1))%")

Models

NimbusSDK provides three Bayesian inference models:

RxLDAModel

Primary Name: Bayesian LDA (Bayesian Linear Discriminant Analysis)
API Name: RxLDAModel
Mathematical Model: Pooled Gaussian Classifier (PGC)
Linear Discriminant Analysis with shared precision matrix. Fast inference with good performance for well-separated classes.
Fields:
  • mean_posteriors::Vector - Full posterior distributions for class means (MvNormal objects, one per class)
  • precision_posterior - Full posterior distribution for shared precision matrix (Wishart object, shared across all classes)
  • priors::Vector{Float64} - Empirical class priors from training data (must sum to 1.0)
  • metadata::ModelMetadata - Model metadata
  • dof_offset::Int - Degrees of freedom offset used during training (default: 2)
  • mean_prior_precision::Float64 - Mean prior precision strength used during training (default: 0.01)
Accessing model parameters: To get point estimates from posterior distributions, use mean(model.mean_posteriors[k]) for class means and mean(model.precision_posterior) for the precision matrix. The SDK stores full posterior distributions (not just point estimates) for proper Bayesian inference.

RxGMMModel

Primary Name: Bayesian GMM (Bayesian Gaussian Mixture Model)
API Name: RxGMMModel
Mathematical Model: Heteroscedastic Gaussian Classifier (HGC)
Gaussian Mixture Model with class-specific covariance matrices. More flexible; handles overlapping distributions.
Fields:
  • mean_posteriors::Vector - Full posterior distributions for class means (MvNormal objects, one per class)
  • precision_posteriors::Vector - Full posterior distributions for precision matrices (Wishart objects, one per class)
  • priors::Vector{Float64} - Empirical class priors from training data (must sum to 1.0)
  • metadata::ModelMetadata - Model metadata
  • dof_offset::Int - Degrees of freedom offset used during training (default: 2)
  • mean_prior_precision::Float64 - Mean prior precision strength used during training (default: 0.01)
Accessing model parameters: To get point estimates from posterior distributions, use mean(model.mean_posteriors[k]) for class means and mean(model.precision_posteriors[k]) for class-specific precision matrices. The SDK stores full posterior distributions (not just point estimates) for proper Bayesian inference.

RxPolyaModel

Primary Name: Bayesian MPR (Bayesian Multinomial Probit Regression)
API Name: RxPolyaModel
Mathematical Model: Bayesian Multinomial Probit Regression
Bayesian multinomial probit regression with continuous transitions. Most flexible for complex multinomial classification tasks.
Fields:
  • B_posterior - Learned regression coefficients posterior
  • W_posterior - Learned precision matrix posterior
  • metadata::ModelMetadata - Model metadata
  • N::Int - Number of trials per observation
  • Hyperparameters: ξβ, Wβ, W_df, W_scale

load_model()

Load a pre-trained or custom model.
load_model(ModelType, model_name::String) -> Model
load_model(ModelType, filepath::String) -> Model
Parameters:
  • ModelType - RxLDAModel, RxGMMModel, or RxPolyaModel
  • model_name::String - Model identifier or filepath
Example:
# Load from Nimbus model zoo
model = load_model(RxLDAModel, "motor_imagery_4class_v1")

# Load custom model
model = load_model(RxLDAModel, "my_model.jld2")

save_model()

Save a trained or loaded model to disk.
save_model(model, filepath::String)
Parameters:
  • model - RxLDAModel, RxGMMModel, or RxPolyaModel
  • filepath::String - File path to save model (typically .jld2 format)
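Example (file name is illustrative):
model = train_model(RxLDAModel, train_data; iterations = 50)
save_model(model, "my_motor_imagery_model.jld2")

# Reload the saved model later
model = load_model(RxLDAModel, "my_motor_imagery_model.jld2")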

train_model()

Train a new model on labeled data.
train_model(ModelType, train_data::BCIData; kwargs...) -> Model
Parameters (common to all models):
  • ModelType - RxLDAModel, RxGMMModel, or RxPolyaModel
  • train_data::BCIData - Labeled training data (must include labels!)
  • iterations::Int - Number of inference iterations (default: 50)
  • showprogress::Bool - Show training progress (default: false)
  • name::String - Model name (default: "untitled_model")
  • description::String - Model description (default: "")
Model-specific hyperparameters:
  • RxLDAModel / RxGMMModel
    • dof_offset::Int – Degrees of freedom offset for Wishart priors during training
      • Default: 2, range: [1, 5]
    • mean_prior_precision::Float64 – Prior precision for class means
      • Default: 0.01, range: [0.001, 0.1]
  • RxPolyaModel
    • N::Int – Trials per observation (default: 1)
    • ξβ::Union{Nothing, Vector{Float64}} – Prior mean for regression coefficients (nothing → auto)
    • Wβ::Union{Nothing, Matrix{Float64}} – Prior precision for regression coefficients (nothing → auto)
    • W_df::Union{Nothing, Float64} – Wishart degrees of freedom (nothing → auto)
    • W_scale::Union{Nothing, Matrix{Float64}} – Wishart scale matrix (nothing → auto)
Hyperparameters for each model type are documented in more detail on the corresponding model pages: /models/rxlda, /models/rxgmm, and /models/rxpolya.
Example (RxLDA):
# Train with default hyperparameters
model = train_model(
    RxLDAModel,
    train_data;
    iterations = 50,
    showprogress = true,
    name = "my_motor_imagery_model",
    description = "4-class motor imagery with CSP"
)

# Train with custom hyperparameters
model = train_model(
    RxLDAModel,
    train_data;
    iterations = 50,
    showprogress = true,
    name = "my_tuned_model",
    description = "4-class MI with tuned hyperparameters",
    dof_offset = 3,              # More regularization for noisy data
    mean_prior_precision = 0.05  # Stronger prior
)

calibrate_model()

Fine-tune a pre-trained model with subject-specific data (faster than training from scratch).
calibrate_model(
    base_model,
    calib_data::BCIData;
    iterations::Int = 20
) -> Model
Parameters:
  • base_model - Pre-trained model to calibrate
  • calib_data::BCIData - Calibration data with labels
  • iterations::Int - Number of calibration iterations (default: 20)
Hyperparameters preserved (v0.2.0+): calibrate_model() automatically uses the same hyperparameters (dof_offset, mean_prior_precision, etc.) as the base model. You cannot override them during calibration.
Example:
base_model = load_model(RxLDAModel, "motor_imagery_baseline_v1")
personalized_model = calibrate_model(base_model, calib_data; iterations=20)

# The personalized model inherits all hyperparameters from base_model

Data Structures

BCIData

Container for BCI data with features, metadata, and optional labels.
struct BCIData
    features::Array{Float64, 3}  # (n_features × n_samples × n_trials)
    metadata::BCIMetadata
    labels::Union{Nothing, Vector{Int}}  # Optional labels for training/evaluation
end
Example:
data = BCIData(
    csp_features,  # (16, 250, 20) - 16 features, 250 samples, 20 trials
    BCIMetadata(...),
    labels  # [1, 2, 3, 4, 1, 2, ...] - trial labels (1-indexed)
)

BCIMetadata

Metadata describing the BCI data properties.
struct BCIMetadata
    sampling_rate::Float64       # Hz (e.g., 250.0)
    paradigm::Symbol             # :motor_imagery, :p300, :ssvep, :erp
    feature_type::Symbol         # :csp, :bandpower, :time_domain, :frequency_domain, :raw_filtered
    n_features::Int              # Number of features (e.g., 16 for CSP)
    n_classes::Int               # Number of classes (e.g., 4)
    chunk_size::Union{Nothing, Int}  # For streaming: samples per chunk
    temporal_aggregation::Symbol # :mean, :median, :logvar, :none
end
Example:
metadata = BCIMetadata(
    sampling_rate = 250.0,
    paradigm = :motor_imagery,
    feature_type = :csp,
    n_features = 16,
    n_classes = 4,
    chunk_size = nothing,  # Batch mode
    temporal_aggregation = :logvar  # For CSP features in MI
)

Inference Functions

predict_batch()

Perform batch inference on multiple trials.
predict_batch(
    model,
    data::BCIData;
    iterations::Int = 10
) -> BatchResult
Parameters:
  • model - RxLDAModel, RxGMMModel, or RxPolyaModel
  • data::BCIData - Data to predict (labels optional)
  • iterations::Int - Number of inference iterations (default: 10)
Returns:
struct BatchResult
    predictions::Vector{Int}              # Predicted class for each trial
    confidences::Vector{Float64}          # Confidence (max posterior) for each trial
    posteriors::Matrix{Float64}           # Full posterior distributions (n_trials × n_classes)
    free_energy::Union{Float64, Nothing}  # Mean RxInfer free energy if available
    entropy::Vector{Float64}              # Shannon entropy per trial (bits)
    mean_entropy::Float64                 # Average entropy across trials
    mahalanobis_distances::Matrix{Float64}  # Distances to each class center (n_trials × n_classes)
    outlier_scores::Vector{Float64}       # Minimum distance to any class (per trial)
    latency_ms::Int                       # Total batch latency in milliseconds
    per_trial_latency_ms::Vector{Float64} # Latency per trial in milliseconds
    balance::Float64                      # Class distribution balance (0–1)
    confidence_calibration::Union{CalibrationMetrics, Nothing}  # Calibration metrics if labels available
end
Example:
results = predict_batch(model, data)

println("Predictions: ", results.predictions)
println("Mean confidence: ", mean(results.confidences))

# Calculate accuracy if labels available
accuracy = sum(results.predictions .== data.labels) / length(data.labels)
println("Accuracy: $(round(accuracy * 100, digits=1))%")

init_streaming()

Initialize a streaming session for chunk-by-chunk inference.
init_streaming(model, metadata::BCIMetadata) -> StreamingSession
Parameters:
  • model - Loaded model
  • metadata::BCIMetadata - Metadata with chunk_size set
Returns: StreamingSession for processing chunks
Example:
metadata = BCIMetadata(
    sampling_rate = 250.0,
    paradigm = :motor_imagery,
    feature_type = :csp,
    n_features = 16,
    n_classes = 4,
    chunk_size = 250  # 1 second chunks at 250 Hz
)

session = init_streaming(model, metadata)

process_chunk()

Process a single chunk of data during streaming.
process_chunk(session::StreamingSession, chunk::Array{Float64, 2}; iterations::Int = 10) -> ChunkResult
Parameters:
  • session::StreamingSession - Active streaming session
  • chunk::Array{Float64, 2} - Chunk data (n_features × chunk_size)
  • iterations::Int - Number of inference iterations for this chunk (default: 10)
Returns: ChunkResult
struct ChunkResult
  prediction::Int              # Predicted class for this chunk
  confidence::Float64          # Confidence for this chunk
  posterior::Vector{Float64}   # Posterior distribution for this chunk
  latency_ms::Float64          # Processing time for this chunk (ms)
end
Example:
for chunk in eeg_stream
    result = process_chunk(session, chunk)
    println("Prediction: $(result.prediction), Confidence: $(result.confidence)")
end

finalize_trial()

Finalize a trial by aggregating results from all processed chunks.
finalize_trial(
    session::StreamingSession;
    method::Symbol = :weighted_vote,
    temporal_weighting::Bool = true
) -> StreamingResult
Parameters:
  • session::StreamingSession - Active streaming session
  • method::Symbol - Aggregation method (:weighted_vote, :max_confidence, :unanimous)
  • temporal_weighting::Bool - Apply paradigm-specific temporal weights (default: true)
Returns: StreamingResult with final prediction and diagnostics
struct StreamingResult
  prediction::Int                      # Aggregated prediction
  confidence::Float64                  # Aggregated confidence
  posterior::Vector{Float64}           # Aggregated posterior
  entropy::Float64                     # Entropy of final posterior (bits)
  aggregation_method::Symbol           # Aggregation method used
  n_chunks::Int                        # Number of chunks in trial
  latency_ms::Float64                  # Total latency (ms)
  chunk_latencies_ms::Vector{Float64}  # Latency per chunk
  balance::Float64                     # Class distribution balance across chunks
  confidence_calibration::Union{CalibrationMetrics, Nothing}  # Calibration metrics if label provided
end
Example:
# Process trial
for chunk in trial_chunks
    process_chunk(session, chunk)
end

# Get final prediction
final_result = finalize_trial(session; method=:weighted_vote, temporal_weighting=true)
println("Final prediction: $(final_result.prediction)")
println("Confidence: $(final_result.confidence)")
println("Aggregation method: $(final_result.aggregation_method)")

Utility Functions

calculate_ITR()

Calculate Information Transfer Rate (ITR) in bits/minute.
calculate_ITR(
  accuracy::Float64,
  n_classes::Int,
  trial_duration::Float64;
  clip_negative::Bool = false
) -> Float64
Parameters:
  • accuracy::Float64 - Classification accuracy (0.0 to 1.0)
  • n_classes::Int - Number of classes
  • trial_duration::Float64 - Trial duration in seconds
  • clip_negative::Bool - If true, negative ITR values (below chance) are clipped to 0.0
Example:
accuracy = 0.85
n_classes = 4
trial_duration = 4.0  # seconds

itr = calculate_ITR(accuracy, n_classes, trial_duration)
println("ITR: $(round(itr, digits=1)) bits/minute")

should_reject_trial()

Check if a trial should be rejected based on confidence threshold.
should_reject_trial(confidence::Float64, threshold::Float64 = 0.7) -> Bool
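Example (using the default 0.7 threshold):
results = predict_batch(model, data)

for (i, conf) in enumerate(results.confidences)
    if should_reject_trial(conf, 0.7)
        println("Trial $i rejected (confidence $(round(conf, digits=2)))")
    end
end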

assess_trial_quality()

Assess the quality of inference results.
assess_trial_quality(result::BatchResult) -> TrialQuality
Returns:
struct TrialQuality
    overall_score::Float64
    confidence_acceptable::Bool
    recommendation::String
end
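Example (field names as documented above):
results = predict_batch(model, data)
quality = assess_trial_quality(results)

println("Quality score: $(round(quality.overall_score, digits=2))")
println("Recommendation: ", quality.recommendation)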

diagnose_preprocessing()

Diagnose preprocessing quality and provide recommendations.
diagnose_preprocessing(data::BCIData) -> PreprocessingReport
Returns:
struct PreprocessingReport
    quality_score::Float64
    errors::Vector{String}
    warnings::Vector{String}
    recommendations::Vector{String}
end
Example:
report = diagnose_preprocessing(data)

if !isempty(report.errors)
    @error "Preprocessing issues: $(report.errors)"
end

println("Quality score: $(round(report.quality_score * 100, digits=1))%")

compute_fisher_score()

Compute Fisher discriminant scores for feature selection.
compute_fisher_score(features::Matrix{Float64}, labels::Vector{Int}) -> Vector{Float64}
Parameters:
  • features::Matrix{Float64} - Feature matrix (n_features × n_samples)
  • labels::Vector{Int} - Class labels
Returns: Fisher scores for each feature (higher scores indicate better discriminability)
Example:
# Compute discriminability of each feature
fisher_scores = compute_fisher_score(features, labels)

# Find most discriminative features
for (i, score) in enumerate(fisher_scores)
    println("Feature $i: Fisher score = $(round(score, digits=3))")
end

rank_features_by_discriminability()

Rank features by their discriminability using Fisher scores.
rank_features_by_discriminability(features::Matrix{Float64}, labels::Vector{Int}) -> Vector{Int}
Parameters:
  • features::Matrix{Float64} - Feature matrix (n_features × n_samples)
  • labels::Vector{Int} - Class labels
Returns: Indices of features sorted by discriminability (most discriminative first)
Example:
# Get feature ranking
ranked_indices = rank_features_by_discriminability(features, labels)

println("Most discriminative features:")
for (i, idx) in enumerate(ranked_indices[1:5])
    println("$i. Feature $idx (Fisher score: $(round(fisher_scores[idx], digits=3)))")
end

aggregate_chunks()

Aggregate chunk-level predictions and confidences into a final trial result.
aggregate_chunks(
    predictions::Vector{Int},
    confidences::Vector{Float64},
    n_classes::Int;
    method::Symbol = :weighted_vote
) -> NamedTuple
Parameters:
  • predictions::Vector{Int} - Predictions from each chunk
  • confidences::Vector{Float64} - Confidences from each chunk
  • n_classes::Int - Number of classes
  • method::Symbol - Aggregation method (:weighted_vote, :majority_vote, :max_confidence)
Returns: Named tuple with prediction, confidence, and posterior
Example:
# Aggregate chunk results
final_result = aggregate_chunks(
    chunk_predictions,
    chunk_confidences,
    4;  # 4 classes
    method = :weighted_vote
)

println("Final prediction: $(final_result.prediction)")
println("Final confidence: $(round(final_result.confidence, digits=3))")

get_paradigm_defaults()

Get default parameters for a specific BCI paradigm.
get_paradigm_defaults(paradigm::Symbol) -> NamedTuple
Parameters:
  • paradigm::Symbol: Paradigm name (:motor_imagery, :p300, :ssvep, :erp)
Returns: Named tuple with default parameters for the paradigm
Example:
# Get motor imagery defaults
defaults = get_paradigm_defaults(:motor_imagery)
println("Chunk size: $(defaults.chunk_size) samples")
println("Confidence threshold: $(defaults.confidence_threshold)")
println("Aggregation method: $(defaults.aggregation_method)")

# Use in metadata
metadata = BCIMetadata(
    sampling_rate = 250.0,
    paradigm = :motor_imagery,
    feature_type = :csp,
    n_features = 16,
    n_classes = 4,
    chunk_size = defaults.chunk_size,
    temporal_aggregation = :logvar
)

Model Utility Functions

get_n_features()

Get the number of features expected by a model.
get_n_features(model::BCIModel) -> Int
Parameters:
  • model - RxLDAModel, RxGMMModel, or RxPolyaModel
Returns: Number of features expected by the model
Example:
model = load_model(RxLDAModel, "motor_imagery_4class_v1")
n_features = get_n_features(model)  # e.g., 16

get_n_classes()

Get the number of classes a model can classify.
get_n_classes(model::BCIModel) -> Int
Parameters:
  • model - RxLDAModel, RxGMMModel, or RxPolyaModel
Returns: Number of classes the model can classify
Example:
model = load_model(RxLDAModel, "motor_imagery_4class_v1")
n_classes = get_n_classes(model)  # e.g., 4

get_paradigm()

Get the target paradigm for a model.
get_paradigm(model::BCIModel) -> Union{Symbol, Nothing}
Parameters:
  • model - RxLDAModel, RxGMMModel, or RxPolyaModel
Returns: Paradigm symbol (:motor_imagery, :p300, etc.) or nothing if paradigm-agnostic
Example:
model = load_model(RxLDAModel, "motor_imagery_4class_v1")
paradigm = get_paradigm(model)  # :motor_imagery

list_available_models()

List available models from the Nimbus model registry based on your license.
list_available_models(; paradigm=nothing, model_type=nothing) -> Vector{Dict}
Parameters:
  • paradigm - Optional filter by BCI paradigm (:motor_imagery, :p300, :ssvep)
  • model_type - Optional filter by model type (:RxLDA, :RxGMM, :RxPolya)
Returns: Vector of model information dictionaries with keys: name, version, type, paradigm, n_features, n_classes, requires_license
Example:
using NimbusSDKCore

# List all available models
all_models = list_available_models()

# Filter by paradigm
mi_models = list_available_models(paradigm=:motor_imagery)

# Filter by model type
lda_models = list_available_models(model_type=:RxLDA)

# Print model information
for model in all_models
    println("$(model.name): $(model.type) - $(model.paradigm)")
end

get_model_info()

Get detailed information about a specific model from the registry.
get_model_info(model_name::String) -> Union{Dict, Nothing}
Parameters:
  • model_name::String - Name of the model to look up
Returns: Model information dictionary or nothing if not found
Example:
info = get_model_info("motor_imagery_4class_v1")
if !isnothing(info)
    println("Model: $(info.name)")
    println("Features: $(info.n_features)")
    println("Classes: $(info.n_classes)")
end

check_model_license_compatibility()

Check if your current license allows access to a specific model.
check_model_license_compatibility(model_name::String) -> Bool
Parameters:
  • model_name::String - Name of the model to check
Returns: true if your license allows access, false otherwise
Example:
if check_model_license_compatibility("motor_imagery_4class_v1")
    model = load_model(RxLDAModel, "motor_imagery_4class_v1")
else
    @warn "Your license does not allow access to this model"
end

Data Validation

validate_data()

Validate BCI data for common issues before inference.
validate_data(data::BCIData) -> Bool
Description: Validates data for NaN/Inf values and correct dimensions, and provides warnings for suspicious data patterns.
Returns: true if validation passes
Throws: Error if validation fails
Example:
# Validate data before inference
try
    validate_data(data)
    println("✓ Data validation passed")
    results = predict_batch(model, data)
catch e
    if isa(e, DataValidationError)
        @error "Data validation failed: $(error_msg(e))"
    else
        rethrow(e)
    end
end

validate_chunk()

Validate a single chunk for streaming inference.
validate_chunk(chunk::Matrix{Float64}, metadata::BCIMetadata) -> Bool
Description: Validates chunk dimensions and data quality for streaming inference.
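Example (a minimal sketch that skips chunks failing validation; assumes an active streaming session):
for chunk in eeg_stream
    validate_chunk(chunk, metadata) || continue
    result = process_chunk(session, chunk)
end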

check_model_compatibility()

Check if a model is compatible with your data.
check_model_compatibility(model::BCIModel, data::BCIData) -> Bool
Parameters:
  • model - RxLDAModel, RxGMMModel, or RxPolyaModel
  • data::BCIData - Data to check compatibility with
Returns: true if model and data are compatible
Example:
model = load_model(RxLDAModel, "motor_imagery_4class_v1")
data = BCIData(features, metadata, labels)

if check_model_compatibility(model, data)
    results = predict_batch(model, data)
else
    @error "Model and data are incompatible"
end

Feature Normalization

Critical for cross-session BCI! EEG amplitude varies 50-200% across sessions. Proper normalization improves accuracy by 15-30%.
Feature normalization is essential for BCI models used across different sessions or subjects. NimbusSDK provides comprehensive normalization utilities.

NormalizationParams

Storage for normalization parameters.
struct NormalizationParams
    method::Symbol
    means::Vector{Float64}
    stds::Vector{Float64}
    mins::Vector{Float64}
    maxs::Vector{Float64}
    medians::Vector{Float64}
    mads::Vector{Float64}
    computed_from_n_trials::Int
end
Fields:
  • method - Normalization method (:zscore, :minmax, :robust, :none)
  • means, stds - Per-feature statistics for z-score normalization
  • mins, maxs - Per-feature statistics for min-max normalization
  • medians, mads - Per-feature statistics for robust normalization
  • computed_from_n_trials - Number of trials used to compute statistics

estimate_normalization_params()

Estimate normalization parameters from training data.
estimate_normalization_params(
    features::Array{Float64, 3};
    method::Symbol = :zscore
) -> NormalizationParams
Parameters:
  • features::Array{Float64, 3} - Training features (n_features × n_samples × n_trials)
  • method::Symbol - Normalization method
    • :zscore - Z-score normalization (mean=0, std=1) [default, recommended]
    • :minmax - Min-max scaling to [0, 1]
    • :robust - Robust normalization using median and MAD (outlier-resistant)
    • :none - No normalization
Returns: NormalizationParams object with computed statistics
Example:
# Compute normalization params from training data
train_features = randn(16, 250, 100)
norm_params = estimate_normalization_params(train_features; method=:zscore)

# Save params with the model for consistent test-time normalization
using JLD2  # provides @save / @load
@save "model.jld2" model norm_params
Normalization should be computed AFTER feature extraction but BEFORE model training. The same normalization parameters must be applied to test data.

apply_normalization()

Apply pre-computed normalization to features.
apply_normalization(
    features::Array{Float64, 3},
    params::NormalizationParams
) -> Array{Float64, 3}
Parameters:
  • features::Array{Float64, 3} - Features to normalize (n_features × n_samples × n_trials)
  • params::NormalizationParams - Pre-computed normalization parameters
Returns: Normalized features with same shape as input
Example:
# Training phase
train_features = randn(16, 250, 100)
norm_params = estimate_normalization_params(train_features; method=:zscore)
train_normalized = apply_normalization(train_features, norm_params)

# Test phase (using same parameters)
test_features = randn(16, 250, 20)
test_normalized = apply_normalization(test_features, norm_params)

normalize_features()

Convenience function to estimate and apply normalization in one call.
normalize_features(
    features::Array{Float64, 3};
    method::Symbol = :zscore
) -> Array{Float64, 3}
Parameters:
  • features::Array{Float64, 3} - Features with shape (n_features × n_samples × n_trials)
  • method::Symbol - Normalization method (:zscore, :minmax, :robust, :none)
Returns: Normalized features with same shape as input
This function computes normalization parameters from the input data itself. For proper train/test separation, use estimate_normalization_params() and apply_normalization() separately.
Example:
features = randn(16, 250, 100)
normalized = normalize_features(features; method=:zscore)

check_normalization_status()

Check if features appear normalized and get recommendations.
check_normalization_status(
    features::Array{Float64, 3};
    tolerance::Float64 = 0.1
) -> NamedTuple
Parameters:
  • features::Array{Float64, 3} - Features to check (n_features × n_samples × n_trials)
  • tolerance::Float64 - Tolerance for detecting unnormalized data
Returns: Named tuple with:
  • appears_normalized::Bool - Whether data appears normalized
  • mean_abs_mean::Float64 - Mean absolute value of per-feature means
  • mean_std::Float64 - Mean of per-feature standard deviations
  • recommendations::Vector{String} - Suggested actions
Example:
features = randn(16, 250, 100)
status = check_normalization_status(features)

println("Normalized: ", status.appears_normalized)
if !status.appears_normalized
    println("Recommendations:")
    for rec in status.recommendations
        println("  • ", rec)
    end
end

Normalization Best Practices

Correct Workflow:
# 1. Estimate params from TRAINING data only
train_features = csp_features_train  # (16 × 250 × 80)
norm_params = estimate_normalization_params(train_features; method=:zscore)

# 2. Apply to BOTH training and test data
train_norm = apply_normalization(train_features, norm_params)
test_norm = apply_normalization(test_features, norm_params)

# 3. Save params with your model (requires JLD2 for @save / @load)
using JLD2
@save "model_with_norm.jld2" model norm_params

# 4. Later: Load and apply same params
@load "model_with_norm.jld2" model norm_params
new_data_norm = apply_normalization(new_data, norm_params)
Common Pitfalls:
  • Never normalize train and test data separately
  • Never normalize raw EEG - normalize after feature extraction
  • Never forget to save normalization params alongside your model
See the complete Feature Normalization guide for detailed documentation.

Performance Metrics

BCIPerformanceMetrics

Container for BCI performance metrics.
struct BCIPerformanceMetrics
    accuracy::Float64                 # Classification accuracy (0–1)
    information_transfer_rate::Float64  # ITR in bits/minute
    false_positive_rate::Float64      # Average FPR across classes
    false_negative_rate::Float64      # Average FNR across classes
    mean_confidence::Float64          # Average confidence across trials
    mean_trial_duration::Float64      # Trial duration in seconds
    selection_rate::Float64           # Successful selections per minute
end

OnlinePerformanceTracker

Track performance metrics in real-time with a sliding window.
struct OnlinePerformanceTracker
    predictions::Vector{Int}
    true_labels::Vector{Int}
    confidences::Vector{Float64}
    timestamps::Vector{DateTime}
    window_size::Int
end

OnlinePerformanceTracker(window_size::Int = 100) -> tracker
Parameters:
  • window_size::Int - Number of recent trials to include in metrics (default: 100)
Example:
tracker = OnlinePerformanceTracker(50)  # keep metrics over the 50 most recent trials

for (pred, true_lbl, conf) in zip(results.predictions, data.labels, results.confidences)
    metrics = update_and_report!(tracker, pred, true_lbl, conf)
    println("Running accuracy: $(round(metrics.accuracy * 100, digits=1))%")
end

full_metrics = get_metrics(tracker, 4, 4.0)  # n_classes = 4, trial_duration = 4.0 s
println("ITR: $(full_metrics.information_transfer_rate) bits/min")

update_and_report!()

Update the tracker with a new prediction and return current metrics.
update_and_report!(
    tracker::OnlinePerformanceTracker,
    prediction::Int,
    true_label::Int,
    confidence::Float64
) -> BCIPerformanceMetrics
Parameters:
  • tracker::OnlinePerformanceTracker - Tracker to update
  • prediction::Int - Predicted class label
  • true_label::Int - True class label
  • confidence::Float64 - Prediction confidence
Returns: BCIPerformanceMetrics computed over the sliding window

get_metrics()

Get comprehensive performance metrics from the tracker.
get_metrics(
    tracker::OnlinePerformanceTracker,
    n_classes::Int,
    trial_duration::Float64
) -> BCIPerformanceMetrics
Parameters:
  • tracker::OnlinePerformanceTracker - Tracker to query
  • n_classes::Int - Number of classes
  • trial_duration::Float64 - Trial duration in seconds
Returns: BCIPerformanceMetrics with all computed metrics

Calibration Metrics

compute_balance()

Compute class distribution balance (how evenly distributed classes are).
compute_balance(predictions::Vector{Int}, n_classes::Int) -> Float64
Parameters:
  • predictions::Vector{Int} - Predicted class labels
  • n_classes::Int - Number of classes
Returns: Balance score (0.0 = completely imbalanced, 1.0 = perfectly balanced)
Example:
balance = compute_balance(results.predictions, 4)
println("Class balance: $(round(balance, digits=3))")

compute_calibration_metrics()

Compute confidence calibration metrics (ECE/MCE).
compute_calibration_metrics(
    confidences::Vector{Float64},
    predictions::Vector{Int},
    true_labels::Vector{Int};
    n_bins::Int = 10
) -> CalibrationMetrics
Parameters:
  • confidences::Vector{Float64} - Prediction confidences
  • predictions::Vector{Int} - Predicted class labels
  • true_labels::Vector{Int} - True class labels
  • n_bins::Int - Number of bins for calibration (default: 10)
Returns: CalibrationMetrics struct with ece, mce, n_bins
Example:
cal_metrics = compute_calibration_metrics(
    results.confidences,
    results.predictions,
    data.labels
)
println("ECE: $(round(cal_metrics.ece, digits=3))")
println("MCE: $(round(cal_metrics.mce, digits=3))")

CalibrationMetrics

Container for calibration metrics.
struct CalibrationMetrics
    ece::Float64      # Expected Calibration Error (0-1, lower is better)
    mce::Float64      # Maximum Calibration Error (0-1, lower is better)
    n_bins::Int       # Number of bins used
end

Enhanced Diagnostic Functions

compute_entropy()

Compute Shannon entropy for a single probability distribution.
compute_entropy(probabilities::Vector{Float64}) -> Float64
Parameters:
  • probabilities::Vector{Float64} - Probability distribution (must sum to 1.0)
Returns: Entropy in bits (0 = certain, log₂(n_classes) = maximum uncertainty)
Example:
# Compute entropy for a single trial
entropy = compute_entropy(results.posteriors[1, :])
println("Trial entropy: $(round(entropy, digits=3)) bits")

compute_mean_entropy()

Compute mean entropy across multiple probability distributions.
compute_mean_entropy(posteriors::Matrix{Float64}) -> Float64
Parameters:
  • posteriors::Matrix{Float64} - Posterior distributions (n_trials × n_classes)
Returns: Mean entropy in bits
Example:
mean_entropy = compute_mean_entropy(results.posteriors)
println("Mean entropy: $(round(mean_entropy, digits=3)) bits")

compute_mahalanobis_distances()

Compute Mahalanobis distances from each trial to each class center.
compute_mahalanobis_distances(
    model::BCIModel,
    features::Array{Float64, 3}
) -> Matrix{Float64}
Parameters:
  • model - RxLDAModel or RxGMMModel (RxPolya returns zeros)
  • features::Array{Float64, 3} - Feature array (n_features × n_samples × n_trials)
Returns: Distance matrix (n_trials × n_classes)
Example:
distances = compute_mahalanobis_distances(model, data.features)
println("Distances shape: $(size(distances))")

compute_outlier_scores()

Compute outlier scores (minimum distance to any class) for each trial.
compute_outlier_scores(
    model::BCIModel,
    features::Array{Float64, 3}
) -> Vector{Float64}
Parameters:
  • model - RxLDAModel or RxGMMModel
  • features::Array{Float64, 3} - Feature array (n_features × n_samples × n_trials)
Returns: Outlier scores (higher = more outlier-like)
Example:
outlier_scores = compute_outlier_scores(model, data.features)
outliers = findall(outlier_scores .> 5.0)
println("Outlier trials: $outliers")

Error Handling

Common exceptions:
  • AuthenticationError - Invalid or expired API key
  • DataValidationError - Invalid data format or dimensions
  • ModelCompatibilityError - Model incompatible with data
  • QuotaExceededError - API quota limit exceeded
try
    results = predict_batch(model, data)
catch e
    if isa(e, DataValidationError)
        @error "Data validation failed" error_msg(e)
    elseif isa(e, AuthenticationError)
        @error "Authentication failed - check API key"
    else
        @error "Inference failed" e
    end
end

Configuration

Model Compatibility

Check if a model is compatible with your data:
check_model_compatibility(model::BCIModel, data::BCIData) -> Bool
Note: See the Data Validation section for details on check_model_compatibility().

Next Steps

Support