NimbusSDK.jl - Julia SDK Reference
Python SDK Users: Looking for Python documentation? See the Python SDK API Reference. This page documents the Julia SDK (NimbusSDK.jl).
Installation
NimbusSDK.jl is now available in the public Julia General Registry.
Requirements
- Julia ≥ 1.9
- Valid NimbusSDK license key
- Preprocessed EEG features (CSP, bandpower, etc.) - not raw EEG
What changed? NimbusSDK.jl is now a public wrapper package in the Julia General Registry. The proprietary inference core (NimbusSDKCore) is automatically installed when you provide your API key. No more private registry setup needed!
Quick Start
Setup
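A minimal setup sketch, assuming only the standard Pkg workflow (install_core() is documented below):

```julia
using Pkg
Pkg.add("NimbusSDK")   # public wrapper package from the Julia General Registry

using NimbusSDK        # core installation is handled by install_core() below
```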
install_core()
Install the proprietary NimbusSDKCore with your API key. This is a one-time setup that downloads and configures the commercial inference engine.
Arguments:
- api_key::String - Your NimbusSDK API key (format: nbci_live_... or nbci_test_...)
Returns: true if installation is successful
Example:
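For example (the key shown is a placeholder):

```julia
using NimbusSDK
install_core("nbci_live_...")   # one-time: downloads and configures NimbusSDKCore
```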
The core installation is persistent. You only need to run install_core() once per machine. After that, simply using NimbusSDK will work in any Julia project.
check_installation()
Verify that the core is installed and working correctly.
Returns: true if the core is installed and operational
Note:
check_installation() is provided by the NimbusSDK wrapper package. For direct NimbusSDKCore usage, check authentication status using NimbusSDKCore.AUTH_STATE[].
Authentication
For most users: Authentication is handled automatically by NimbusSDK.install_core(). The functions below are for advanced users working directly with NimbusSDKCore.
authenticate()
Authenticate with NimbusSDKCore using an API key. This function validates the key remotely and caches credentials locally.
Arguments:
- api_key::String - Your NimbusSDK API key (format: nbci_live_... or nbci_test_...)
- offline_mode::Bool - If true, skip remote validation and use cached credentials (default: false)
Returns: AuthSession object containing authentication state
Example:
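A sketch for direct core use, assuming offline_mode is a keyword argument (key is a placeholder):

```julia
using NimbusSDKCore

session = NimbusSDKCore.authenticate("nbci_live_...")   # validates remotely, caches locally

# Later, e.g. without network access, reuse the cached credentials:
session = NimbusSDKCore.authenticate("nbci_live_..."; offline_mode=true)
```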
invalidate_session()
Clear the current authentication session and cached credentials.
AuthSession
Authentication session object containing license information and API state.
Key Management Functions
save_api_key()
Save an API key to local storage for later use.
get_stored_api_key()
Retrieve a previously saved API key.
Returns: the stored API key, or nothing if no key is stored
delete_stored_api_key()
Delete a stored API key from local storage.
authenticate_from_storage()
Authenticate using a previously saved API key.
Returns: AuthSession if authentication is successful
Throws: an error if no key is stored or authentication fails
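A storage-workflow sketch using the four functions above (key is a placeholder):

```julia
save_api_key("nbci_live_...")       # persist the key locally
key = get_stored_api_key()          # the saved key, or nothing

session = authenticate_from_storage()   # authenticate with the saved key

delete_stored_api_key()             # remove the key when no longer needed
```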
Quota Management
refresh_quota()
Refresh API quota information from the server.
Returns: AuthSession with refreshed quota information
check_quota_and_refresh()
Check quota status and refresh if needed.
Returns: true if quota is available, false if exhausted
get_quota_status()
Get current quota status without refreshing.
Returns: remaining::Int, monthly_limit::Int, usage_percentage::Float64
Example:
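A quota-check sketch, assuming the three documented values are accessible as fields of the returned value:

```julia
status = get_quota_status()
println("Remaining: ", status.remaining, " of ", status.monthly_limit,
        " (", status.usage_percentage, "% used)")

check_quota_and_refresh() || @warn "API quota exhausted"
```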
Models
NimbusSDK provides three Bayesian inference models:
RxLDAModel
Primary Name: Bayesian LDA (Bayesian Linear Discriminant Analysis)
API Name: RxLDAModel
Mathematical Model: Pooled Gaussian Classifier (PGC). Linear Discriminant Analysis with a shared precision matrix; fast inference with good performance for well-separated classes.
Fields:
- mean_posteriors::Vector - Full posterior distributions for class means (MvNormal objects, one per class)
- precision_posterior - Full posterior distribution for the shared precision matrix (Wishart object, shared across all classes)
- priors::Vector{Float64} - Empirical class priors from training data (must sum to 1.0)
- metadata::ModelMetadata - Model metadata
- dof_offset::Int - Degrees of freedom offset used during training (default: 2)
- mean_prior_precision::Float64 - Mean prior precision strength used during training (default: 0.01)
Accessing model parameters: To get point estimates from posterior distributions, use mean(model.mean_posteriors[k]) for class means and mean(model.precision_posterior) for the precision matrix. The SDK stores full posterior distributions (not just point estimates) for proper Bayesian inference.
RxGMMModel
Primary Name: Bayesian GMM (Bayesian Gaussian Mixture Model)
API Name: RxGMMModel
Mathematical Model: Heteroscedastic Gaussian Classifier (HGC). Gaussian Mixture Model with class-specific covariance matrices; more flexible, and handles overlapping distributions.
Fields:
- mean_posteriors::Vector - Full posterior distributions for class means (MvNormal objects, one per class)
- precision_posteriors::Vector - Full posterior distributions for precision matrices (Wishart objects, one per class)
- priors::Vector{Float64} - Empirical class priors from training data (must sum to 1.0)
- metadata::ModelMetadata - Model metadata
- dof_offset::Int - Degrees of freedom offset used during training (default: 2)
- mean_prior_precision::Float64 - Mean prior precision strength used during training (default: 0.01)
Accessing model parameters: To get point estimates from posterior distributions, use mean(model.mean_posteriors[k]) for class means and mean(model.precision_posteriors[k]) for class-specific precision matrices. The SDK stores full posterior distributions (not just point estimates) for proper Bayesian inference.
RxPolyaModel
Primary Name: Bayesian MPR (Bayesian Multinomial Probit Regression)
API Name: RxPolyaModel
Mathematical Model: Bayesian multinomial probit regression with continuous transitions; the most flexible option for complex multinomial classification tasks.
Fields:
- B_posterior - Learned regression coefficients posterior
- W_posterior - Learned precision matrix posterior
- metadata::ModelMetadata - Model metadata
- N::Int - Number of trials per observation
- Hyperparameters: ξβ, Wβ, W_df, W_scale
load_model()
Load a pre-trained or custom model.
Arguments:
- ModelType - RxLDAModel, RxGMMModel, or RxPolyaModel
- model_name::String - Model identifier or filepath
save_model()
Save a trained or loaded model to disk.
Arguments:
- model - RxLDAModel, RxGMMModel, or RxPolyaModel
- filepath::String - File path to save the model (typically .jld2 format)
train_model()
Train a new model on labeled data.
Arguments:
- ModelType - RxLDAModel, RxGMMModel, or RxPolyaModel
- train_data::BCIData - Labeled training data (must include labels!)
- iterations::Int - Number of inference iterations (default: 50)
- showprogress::Bool - Show training progress (default: false)
- name::String - Model name (default: "untitled_model")
- description::String - Model description (default: "")
Hyperparameters (see the sketch below):
- RxLDAModel / RxGMMModel:
  - dof_offset::Int - Degrees of freedom offset for Wishart priors during training (default: 2, range: [1, 5])
  - mean_prior_precision::Float64 - Prior precision for class means (default: 0.01, range: [0.001, 0.1])
- RxPolyaModel:
  - N::Int - Trials per observation (default: 1)
  - ξβ::Union{Nothing, Vector{Float64}} - Prior mean for regression coefficients (nothing → auto)
  - Wβ::Union{Nothing, Matrix{Float64}} - Prior precision for regression coefficients (nothing → auto)
  - W_df::Union{Nothing, Float64} - Wishart degrees of freedom (nothing → auto)
  - W_scale::Union{Nothing, Matrix{Float64}} - Wishart scale matrix (nothing → auto)
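A training-and-calibration sketch, assuming train_data and calib_data are labeled BCIData objects and that the optional parameters are keyword arguments; the model name is hypothetical:

```julia
model = train_model(RxLDAModel, train_data;
                    iterations=50,
                    dof_offset=2,               # Wishart degrees-of-freedom offset
                    mean_prior_precision=0.01,  # prior precision for class means
                    name="mi_subject01")        # hypothetical model name

# Fine-tune on subject-specific data; hyperparameters are inherited from `model`
calibrated = calibrate_model(model, calib_data; iterations=20)
save_model(calibrated, "mi_subject01.jld2")
```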
Hyperparameters for each model type are documented in more detail on the corresponding model pages:
/models/rxlda, /models/rxgmm, and /models/rxpolya.
calibrate_model()
Fine-tune a pre-trained model with subject-specific data (faster than training from scratch).
Arguments:
- base_model - Pre-trained model to calibrate
- calib_data::BCIData - Calibration data with labels
- iterations::Int - Number of calibration iterations (default: 20)
Hyperparameters preserved (v0.2.0+):
calibrate_model() automatically uses the same hyperparameters (dof_offset, mean_prior_precision, etc.) as the base model. You cannot override them during calibration.
Data Structures
BCIData
Container for BCI data with features, metadata, and optional labels.
BCIMetadata
Metadata describing the BCI data properties.
Inference Functions
predict_batch()
Perform batch inference on multiple trials.
Arguments:
- model - RxLDAModel, RxGMMModel, or RxPolyaModel
- data::BCIData - Data to predict (labels optional)
- iterations::Int - Number of inference iterations (default: 10)
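A batch-inference sketch, assuming model and data were prepared as above and iterations is a keyword argument:

```julia
results = predict_batch(model, data; iterations=10)   # one result per trial
```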
init_streaming()
Initialize a streaming session for chunk-by-chunk inference.
Arguments:
- model - Loaded model
- metadata::BCIMetadata - Metadata with chunk_size set
Returns: StreamingSession for processing chunks
Example:
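A chunk-by-chunk sketch, assuming chunks is a vector of (n_features × chunk_size) matrices and that method and temporal_weighting are keyword arguments:

```julia
session = init_streaming(model, metadata)   # metadata must have chunk_size set

for chunk in chunks
    process_chunk(session, chunk; iterations=10)   # returns a per-chunk ChunkResult
end

result = finalize_trial(session; method=:weighted_vote, temporal_weighting=true)
```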
process_chunk()
Process a single chunk of data during streaming.
Arguments:
- session::StreamingSession - Active streaming session
- chunk::Array{Float64, 2} - Chunk data (n_features × chunk_size)
- iterations::Int - Number of inference iterations for this chunk (default: 10)
Returns: ChunkResult
finalize_trial()
Finalize a trial by aggregating results from all processed chunks.
Arguments:
- session::StreamingSession - Active streaming session
- method::Symbol - Aggregation method (:weighted_vote, :max_confidence, :unanimous)
- temporal_weighting::Bool - Apply paradigm-specific temporal weights (default: true)
Returns: StreamingResult with final prediction and diagnostics
Utility Functions
calculate_ITR()
Calculate Information Transfer Rate (ITR) in bits/minute.
Arguments:
- accuracy::Float64 - Classification accuracy (0.0 to 1.0)
- n_classes::Int - Number of classes
- trial_duration::Float64 - Trial duration in seconds
- clip_negative::Bool - If true, negative ITR values (below chance) are clipped to 0.0
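For reference, the standard Wolpaw ITR (which this presumably implements) is B = log2(N) + P·log2(P) + (1−P)·log2((1−P)/(N−1)) bits per trial, scaled to bits/minute by 60/trial_duration. A usage sketch, assuming positional arguments in the documented order:

```julia
itr = calculate_ITR(0.85, 4, 2.0)   # 85% accuracy, 4 classes, 2-second trials
```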
should_reject_trial()
Check if a trial should be rejected based on a confidence threshold.
assess_trial_quality()
Assess the quality of inference results.
diagnose_preprocessing()
Diagnose preprocessing quality and provide recommendations.
compute_fisher_score()
Compute Fisher discriminant scores for feature selection.
Arguments:
- features::Matrix{Float64} - Feature matrix (n_features × n_samples)
- labels::Vector{Int} - Class labels
rank_features_by_discriminability()
Rank features by their discriminability using Fisher scores.
Arguments:
- features::Matrix{Float64} - Feature matrix (n_features × n_samples)
- labels::Vector{Int} - Class labels
aggregate_chunks()
Aggregate chunk-level predictions and confidences into a final trial result.
Arguments:
- predictions::Vector{Int} - Predictions from each chunk
- confidences::Vector{Float64} - Confidences from each chunk
- n_classes::Int - Number of classes
- method::Symbol - Aggregation method (:weighted_vote, :majority_vote, :max_confidence)
Returns: prediction, confidence, and posterior
Example:
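A sketch, assuming the three documented values are returned in order and method is a keyword argument:

```julia
predictions = [1, 1, 2, 1]                # per-chunk class labels
confidences = [0.90, 0.80, 0.55, 0.85]    # per-chunk confidences

prediction, confidence, posterior =
    aggregate_chunks(predictions, confidences, 4; method=:weighted_vote)
```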
get_paradigm_defaults()
Get default parameters for a specific BCI paradigm.
Arguments:
- paradigm::Symbol - Paradigm name (:motor_imagery, :p300, :ssvep, :erp)
Model Utility Functions
get_n_features()
Get the number of features expected by a model.
Arguments:
- model - RxLDAModel, RxGMMModel, or RxPolyaModel
get_n_classes()
Get the number of classes a model can classify.
Arguments:
- model - RxLDAModel, RxGMMModel, or RxPolyaModel
get_paradigm()
Get the target paradigm for a model.
Arguments:
- model - RxLDAModel, RxGMMModel, or RxPolyaModel
Returns: a paradigm symbol (:motor_imagery, :p300, etc.) or nothing if paradigm-agnostic
Example:
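For example:

```julia
n_features = get_n_features(model)   # e.g. 16 for a 16-feature CSP pipeline
n_classes  = get_n_classes(model)    # e.g. 4 for 4-class motor imagery
paradigm   = get_paradigm(model)     # e.g. :motor_imagery, or nothing
```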
list_available_models()
List available models from the Nimbus model registry based on your license.
Arguments:
- paradigm - Optional filter by BCI paradigm (:motor_imagery, :p300, :ssvep)
- model_type - Optional filter by model type (:RxLDA, :RxGMM, :RxPolya)
Returns: model entries with name, version, type, paradigm, n_features, n_classes, requires_license
Example:
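A registry-query sketch, assuming keyword filters and field access on the returned entries:

```julia
models = list_available_models(paradigm=:motor_imagery, model_type=:RxLDA)
for m in models
    println(m.name, " v", m.version, " (", m.n_classes, " classes)")
end
```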
get_model_info()
Get detailed information about a specific model from the registry.
Arguments:
- model_name::String - Name of the model to look up
Returns: model information, or nothing if not found
Example:
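A lookup sketch (the model name is hypothetical):

```julia
info = get_model_info("motor_imagery_4class")   # hypothetical model name
info === nothing && @warn "Model not found in registry"
```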
check_model_license_compatibility()
Check if your current license allows access to a specific model.
Arguments:
- model_name::String - Name of the model to check
Returns: true if your license allows access, false otherwise
Example:
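A license-gate sketch (hypothetical model name):

```julia
if check_model_license_compatibility("motor_imagery_4class")   # hypothetical name
    model = load_model(RxLDAModel, "motor_imagery_4class")
else
    @warn "Current license does not cover this model"
end
```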
Data Validation
validate_data()
Validate BCI data for common issues before inference.
Returns: true if validation passes
Throws: Error if validation fails
Example:
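A validation sketch, assuming validate_data takes the BCIData object directly:

```julia
try
    validate_data(data)   # true on success; throws on bad dimensions or format
catch err
    @error "Data validation failed" err
end
```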
validate_chunk()
Validate a single chunk for streaming inference.
check_model_compatibility()
Check if a model is compatible with your data.
Arguments:
- model - RxLDAModel, RxGMMModel, or RxPolyaModel
- data::BCIData - Data to check compatibility with
Returns: true if model and data are compatible
Example:
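For example:

```julia
check_model_compatibility(model, data) ||
    error("Model and data disagree on feature count, classes, or paradigm")
```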
Feature Normalization
Feature normalization is essential for BCI models used across different sessions or subjects. NimbusSDK provides comprehensive normalization utilities.
NormalizationParams
Storage for normalization parameters.
Fields:
- method - Normalization method (:zscore, :minmax, :robust, :none)
- means, stds - Per-feature statistics for z-score normalization
- mins, maxs - Per-feature statistics for min-max normalization
- medians, mads - Per-feature statistics for robust normalization
- computed_from_n_trials - Number of trials used to compute statistics
estimate_normalization_params()
Estimate normalization parameters from training data.
Arguments:
- features::Array{Float64, 3} - Training features (n_features × n_samples × n_trials)
- method::Symbol - Normalization method:
  - :zscore - Z-score normalization (mean=0, std=1) [default, recommended]
  - :minmax - Min-max scaling to [0, 1]
  - :robust - Robust normalization using median and MAD (outlier-resistant)
  - :none - No normalization
Returns: NormalizationParams object with computed statistics
Example:
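A typical sketch: estimate on training features only, then reuse the same parameters everywhere (positional arguments assumed):

```julia
# train_features, test_features: Array{Float64, 3} (n_features × n_samples × n_trials)
params = estimate_normalization_params(train_features, :zscore)

train_norm = apply_normalization(train_features, params)
test_norm  = apply_normalization(test_features, params)   # same params, never re-estimated
```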
Normalization should be computed AFTER feature extraction but BEFORE model training. The same normalization parameters must be applied to test data.
apply_normalization()
Apply pre-computed normalization to features.
Arguments:
- features::Array{Float64, 3} - Features to normalize (n_features × n_samples × n_trials)
- params::NormalizationParams - Pre-computed normalization parameters
normalize_features()
Convenience function to estimate and apply normalization in one call.
Arguments:
- features::Array{Float64, 3} - Features with shape (n_features × n_samples × n_trials)
- method::Symbol - Normalization method (:zscore, :minmax, :robust, :none)
check_normalization_status()
Check if features appear normalized and get recommendations.
Arguments:
- features::Array{Float64, 3} - Features to check (n_features × n_samples × n_trials)
- tolerance::Float64 - Tolerance for detecting unnormalized data
Returns:
- appears_normalized::Bool - Whether data appears normalized
- mean_abs_mean::Float64 - Mean absolute value of per-feature means
- mean_std::Float64 - Mean of per-feature standard deviations
- recommendations::Vector{String} - Suggested actions
Normalization Best Practices
Correct Workflow: extract features from raw EEG first, estimate normalization parameters on training data only, then apply those same parameters to all later data.
❌ Never normalize raw EEG (normalize after feature extraction)
❌ Never forget to save your normalization params
See the complete Feature Normalization guide for detailed documentation.
Performance Metrics
BCIPerformanceMetrics
Container for BCI performance metrics.
OnlinePerformanceTracker
Track performance metrics in real-time with a sliding window.
Arguments:
- window_size::Int - Number of recent trials to include in metrics (default: 100)
update_and_report!()
Update the tracker with a new prediction and return current metrics.
Arguments:
- tracker::OnlinePerformanceTracker - Tracker to update
- prediction::Int - Predicted class label
- true_label::Int - True class label
- confidence::Float64 - Prediction confidence
Returns: BCIPerformanceMetrics computed over the sliding window
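An online-tracking sketch, assuming window_size is a keyword argument of the constructor:

```julia
tracker = OnlinePerformanceTracker(window_size=100)

# After each trial:
metrics = update_and_report!(tracker, prediction, true_label, confidence)
```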
get_metrics()
Get comprehensive performance metrics from the tracker.
Arguments:
- tracker::OnlinePerformanceTracker - Tracker to query
- n_classes::Int - Number of classes
- trial_duration::Float64 - Trial duration in seconds
Returns: BCIPerformanceMetrics with all computed metrics
Calibration Metrics
compute_balance()
Compute class distribution balance (how evenly distributed the classes are).
Arguments:
- predictions::Vector{Int} - Predicted class labels
- n_classes::Int - Number of classes
compute_calibration_metrics()
Compute confidence calibration metrics (ECE/MCE).
Arguments:
- confidences::Vector{Float64} - Prediction confidences
- predictions::Vector{Int} - Predicted class labels
- true_labels::Vector{Int} - True class labels
- n_bins::Int - Number of bins for calibration (default: 10)
Returns: CalibrationMetrics struct with ece, mce, n_bins
Example:
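A sketch, assuming field access on the returned struct:

```julia
cal = compute_calibration_metrics(confidences, predictions, true_labels; n_bins=10)
println("ECE = ", cal.ece, ", MCE = ", cal.mce)
```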
CalibrationMetrics
Container for calibration metrics.
Enhanced Diagnostic Functions
compute_entropy()
Compute Shannon entropy for a single probability distribution.
Arguments:
- probabilities::Vector{Float64} - Probability distribution (must sum to 1.0)
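Shannon entropy is H(p) = −Σᵢ pᵢ log pᵢ (low entropy indicates a confident posterior; the log base used here is not specified). A quick sketch:

```julia
H = compute_entropy([0.7, 0.2, 0.1])   # peaked distribution → low entropy
```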
compute_mean_entropy()
Compute mean entropy across multiple probability distributions.
Arguments:
- posteriors::Matrix{Float64} - Posterior distributions (n_trials × n_classes)
compute_mahalanobis_distances()
Compute Mahalanobis distances from each trial to each class center.
Arguments:
- model - RxLDAModel or RxGMMModel (RxPolya returns zeros)
- features::Array{Float64, 3} - Feature array (n_features × n_samples × n_trials)
compute_outlier_scores()
Compute outlier scores (minimum distance to any class) for each trial.
Arguments:
- model - RxLDAModel or RxGMMModel
- features::Array{Float64, 3} - Feature array (n_features × n_samples × n_trials)
Error Handling
Common exceptions:
- AuthenticationError - Invalid or expired API key
- DataValidationError - Invalid data format or dimensions
- ModelCompatibilityError - Model incompatible with data
- QuotaExceededError - API quota limit exceeded
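A defensive-inference sketch, assuming these exception types are exported:

```julia
try
    results = predict_batch(model, data)
catch err
    if err isa QuotaExceededError
        refresh_quota()                 # or wait for the quota window to reset
    elseif err isa DataValidationError
        @error "Check feature dimensions and metadata" err
    else
        rethrow()
    end
end
```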
Configuration
Model Compatibility
Check if a model is compatible with your data using check_model_compatibility().
Next Steps
Bayesian LDA (RxLDA)
Detailed documentation for Bayesian LDA
Bayesian GMM (RxGMM)
Detailed documentation for Bayesian GMM
Bayesian MPR (RxPolya)
Detailed documentation for Bayesian MPR
Preprocessing Guide
How to prepare your EEG data
Batch Processing
Efficient offline batch inference
Streaming Inference
Real-time chunk-by-chunk processing
Code Examples
Complete working examples
Support
- Email: [email protected]
- Documentation: https://docs.nimbusbci.com
- GitHub: https://github.com/nimbusbci/NimbusSDK.jl
- Issues: https://github.com/nimbusbci/NimbusSDK.jl/issues