Documentation Index
Fetch the complete documentation index at: https://docs.nimbusbci.com/llms.txt
Use this file to discover all available pages before exploring further.
Basic BCI Examples
Use these recipes after installation and preprocessing are complete. They show the smallest useful pattern for each workflow and link to deeper SDK-specific guides instead of repeating full setup instructions.
Motor Imagery
Motor imagery workflows usually use CSP or bandpower features from 8-30 Hz EEG.
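Bandpower features, mentioned above, can be computed without the SDK. A minimal NumPy periodogram sketch, where the function name and array shapes are illustrative rather than SDK API:

```python
import numpy as np

def bandpower_features(epochs_data, sfreq, band=(8.0, 30.0)):
    """Mean periodogram power per channel inside `band`.

    epochs_data: (n_trials, n_channels, n_samples) array.
    Returns an (n_trials, n_channels) feature matrix.
    """
    n = epochs_data.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    psd = np.abs(np.fft.rfft(epochs_data, axis=-1)) ** 2 / (sfreq * n)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=-1)

# Synthetic example: 20 trials, 4 channels, 1 s at 250 Hz
rng = np.random.default_rng(0)
data = rng.standard_normal((20, 4, 250))
X = bandpower_features(data, sfreq=250.0)
print(X.shape)  # (20, 4)
```

For real data you would pass filtered epoch arrays; a Welch estimate with averaging over segments is usually preferred over a raw periodogram for short trials.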
```python
from nimbus_bci import NimbusLDA
from nimbus_bci.compat import extract_csp_features
from sklearn.model_selection import cross_val_score

# epochs: MNE Epochs object, already filtered and artifact-cleaned
X, csp = extract_csp_features(epochs, n_components=8)
y = epochs.events[:, 2]

clf = NimbusLDA()
scores = cross_val_score(clf, X, y, cv=5)
clf.fit(X, y)
print(f"CV accuracy: {scores.mean():.2%}")
```
```julia
using NimbusSDK
using Statistics

NimbusSDK.install_core("your-api-key")

features = load_csp_features()  # (n_features, n_samples, n_trials)
labels = load_labels()          # 1-indexed labels

metadata = BCIMetadata(
    sampling_rate=250.0,
    paradigm=:motor_imagery,
    feature_type=:csp,
    n_features=size(features, 1),
    n_classes=length(unique(labels)),
    chunk_size=nothing
)

data = BCIData(features, metadata, labels)
model = train_model(NimbusLDA, data; iterations=50)
results = predict_batch(model, data)
accuracy = mean(results.predictions .== labels)
println("Accuracy: $(round(accuracy * 100, digits=1))%")
```
Use NimbusLDA first for a fast baseline. Try NimbusQDA when class covariances differ substantially, or when P300-like target/non-target distributions overlap.
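Whether covariances differ "substantially" can be eyeballed with a quick check. A NumPy sketch comparing per-class covariance matrices for the two-class case; the ratio and any threshold you pick are illustrative, not an SDK heuristic:

```python
import numpy as np

def covariance_gap(X, y):
    """Relative Frobenius distance between per-class covariance
    matrices (two-class case). X: (n_trials, n_features)."""
    classes = np.unique(y)
    covs = [np.cov(X[y == c], rowvar=False) for c in classes]
    gap = np.linalg.norm(covs[0] - covs[1], "fro")
    scale = 0.5 * (np.linalg.norm(covs[0], "fro") + np.linalg.norm(covs[1], "fro"))
    return gap / scale

# Synthetic example: class 1 has a much larger covariance than class 0
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((100, 8)),
               rng.standard_normal((100, 8)) * 3.0])
y = np.array([0] * 100 + [1] * 100)
print(covariance_gap(X, y))  # large value suggests QDA is worth trying
```

Values near zero mean a shared-covariance model (LDA) already fits; large values suggest per-class covariances (QDA) may pay off.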
P300 Detection
P300 examples usually use ERP amplitude features from a post-stimulus time window.
```python
from nimbus_bci import NimbusQDA

# epochs: target/non-target MNE epochs
sfreq = epochs.info["sfreq"]
start = int(0.3 * sfreq)
stop = int(0.5 * sfreq)
X = epochs.get_data()[:, :, start:stop].mean(axis=2)
y = epochs.events[:, 2]

clf = NimbusQDA()
clf.fit(X, y)
proba = clf.predict_proba(X[:10])
predictions = clf.predict(X[:10])
```
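The per-flash probabilities can then drive an item-level decision, as in a P300 speller: each flash's target probability is credited to the item it highlighted, and the best-scoring item wins. A NumPy sketch with made-up numbers; the grouping and scoring are illustrative, not SDK behavior:

```python
import numpy as np

# Hypothetical per-flash output of predict_proba: columns are
# [P(non-target), P(target)]. Values are invented for illustration.
proba = np.array([
    [0.90, 0.10],
    [0.30, 0.70],
    [0.80, 0.20],
    [0.85, 0.15],
    [0.25, 0.75],
    [0.70, 0.30],
])
stimulus_ids = np.array([0, 1, 2, 0, 1, 2])  # item each flash highlighted

target_p = proba[:, 1]
items = np.unique(stimulus_ids)
scores = np.array([target_p[stimulus_ids == s].mean() for s in items])
chosen = items[scores.argmax()]
print(chosen)  # item 1: highest mean target probability
```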
```julia
using NimbusSDK

features = load_erp_features()  # (n_channels, n_samples, n_trials)
labels = load_labels()

metadata = BCIMetadata(
    sampling_rate=250.0,
    paradigm=:p300,
    feature_type=:erp_amplitude,
    n_features=size(features, 1),
    n_classes=length(unique(labels)),
    chunk_size=nothing
)

data = BCIData(features, metadata, labels)
model = train_model(NimbusQDA, data; iterations=50)
results = predict_batch(model, data)
```
Streaming
Streaming uses short feature chunks and aggregates chunk predictions into a trial decision.
```python
from nimbus_bci import NimbusLDA, StreamingSession, BCIMetadata

clf = NimbusLDA().fit(X_train, y_train)

metadata = BCIMetadata(
    sampling_rate=250.0,
    paradigm="motor_imagery",
    feature_type="csp",
    n_features=X_train.shape[1],
    n_classes=len(set(y_train)),
    chunk_size=250,
)

session = StreamingSession(clf.model_, metadata)
for chunk in feature_stream:
    chunk_result = session.process_chunk(chunk)
final = session.finalize_trial(method="weighted_vote")
```
```julia
session = init_streaming(model, metadata)
for chunk in feature_stream
    result = process_chunk(session, chunk)
end
final = finalize_trial(session; method=:weighted_vote)
```
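The idea behind a weighted vote can be shown in plain NumPy. This sketch weights each chunk's vote by its own confidence; it illustrates one plausible scheme, not necessarily the exact rule `finalize_trial` uses:

```python
import numpy as np

# Hypothetical per-chunk class probabilities for one trial
# (4 chunks, 3 classes); each row sums to 1.
chunk_proba = np.array([
    [0.50, 0.30, 0.20],
    [0.60, 0.20, 0.20],
    [0.20, 0.70, 0.10],
    [0.55, 0.35, 0.10],
])

votes = chunk_proba.argmax(axis=1)   # each chunk's predicted class
weights = chunk_proba.max(axis=1)    # confidence of that prediction

tally = np.zeros(chunk_proba.shape[1])
for v, w in zip(votes, weights):
    tally[v] += w
decision = tally.argmax()
print(decision)  # class 0 accumulates the most weighted votes
```

Confidence weighting lets a few high-certainty chunks outvote many ambiguous ones, which is why it usually beats a plain majority vote on short chunks.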
For detailed streaming APIs, see Python Streaming Inference, Julia Streaming Inference, and Streaming Inference Configuration.
Calibration And Normalization
Estimate normalization parameters from calibration or training data only, then reuse them for test and deployment data.
```python
from nimbus_bci import estimate_normalization_params, apply_normalization

norm = estimate_normalization_params(X_train, method="zscore")
X_train_norm = apply_normalization(X_train, norm)
X_test_norm = apply_normalization(X_test, norm)
```
```julia
norm = estimate_normalization_params(train_features; method=:zscore)
train_norm = apply_normalization(train_features, norm)
test_norm = apply_normalization(test_features, norm)
```
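For the zscore method, the estimate/apply split boils down to storing training statistics and reusing them verbatim on later data. A minimal NumPy sketch of that idea, not the SDK's implementation:

```python
import numpy as np

def estimate_zscore(X):
    # Per-feature statistics from training/calibration data only.
    return {"mean": X.mean(axis=0), "std": X.std(axis=0) + 1e-12}

def apply_zscore(X, params):
    # Reuse the stored statistics; never re-estimate on test data.
    return (X - params["mean"]) / params["std"]

rng = np.random.default_rng(2)
X_train = rng.normal(5.0, 2.0, size=(200, 8))
X_test = rng.normal(5.0, 2.0, size=(50, 8))

params = estimate_zscore(X_train)
X_train_n = apply_zscore(X_train, params)
X_test_n = apply_zscore(X_test, params)
# Training features are exactly zero-mean/unit-variance by construction;
# test features only approximately so, which is the intended behavior.
```

Re-estimating statistics on test data would leak information and hide session-to-session drift, which is exactly what cross-session evaluation needs to expose.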
See Feature Normalization for cross-session details.
Diagnostics
Run diagnostics when confidence is unexpectedly low or accuracy drops across sessions.
```julia
report = diagnose_preprocessing(bci_data)
if !isempty(report.errors)
    error("Preprocessing errors: $(report.errors)")
end
println("Quality score: $(round(report.quality_score * 100, digits=1))%")
```
For Python trial-level quality checks, see the Python SDK API Reference .
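Independent of the SDK, a quick hand-rolled screen can flag trials whose feature variance is an extreme outlier, a common symptom of motion or electrode artifacts. All names here are illustrative:

```python
import numpy as np

def flag_outlier_trials(X, z_thresh=3.0):
    """X: (n_trials, n_features). Returns a boolean mask of suspect
    trials, based on a robust z-score of per-trial variance."""
    trial_var = X.var(axis=1)
    med = np.median(trial_var)
    mad = np.median(np.abs(trial_var - med)) + 1e-12
    robust_z = 0.6745 * (trial_var - med) / mad  # MAD-scaled z-score
    return np.abs(robust_z) > z_thresh

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 16))
X[7] *= 10.0  # simulate one artifact-laden trial
print(np.flatnonzero(flag_outlier_trials(X)))  # includes trial 7
```

Median/MAD statistics are used instead of mean/std so that the artifact trials being hunted do not inflate the threshold that is supposed to catch them.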
Choosing A Model
| Model | Use First When | Notes |
| --- | --- | --- |
| NimbusLDA | You need a fast baseline | Best default for many motor imagery workflows. |
| NimbusQDA | Classes have different covariance | Often useful for P300 and overlapping distributions. |
| NimbusSoftmax | Python, nonlinear class boundaries | Requires the optional softmax extra. |
| NimbusProbit | Julia, multinomial regression | Best for Julia multinomial workflows. |
| NimbusSTS | Python, non-stationary sessions | Tracks latent state over time. |
For a full comparison, see Model Specification .
Next Read
- Python SDK Quickstart: First Python workflow from install to inference.
- Julia SDK Quickstart: First Julia workflow with NimbusSDK.jl.
- Advanced Applications: Higher-level deployment patterns.
- Error Handling: Production safeguards and failure modes.