External Preprocessing Integration
Nimbus expects preprocessed features, not raw EEG. This page covers the handoff between external EEG tools and Nimbus SDKs: feature shape, label conventions, export formats, and validation.
What This Page Owns
Use this page when you preprocess in a different tool or language than the SDK that will run inference:
- MNE-Python features exported for Julia
- EEGLAB or MATLAB features exported for Python or Julia
- OpenViBE feature streams saved to CSV or MAT files
- Cross-tool shape and label validation
Target Data Shapes
| Target | Expected shape | Notes |
| --- | --- | --- |
| Python classifiers | (n_trials, n_features) | Standard sklearn-style tabular features. |
| Python BCIData utilities | (n_features, n_samples, n_trials) | Used by lower-level batch/streaming utilities. |
| Julia BCIData | (n_features, n_samples, n_trials) | Labels are usually 1-indexed integers. |
For streaming, chunks should be shaped (n_features, chunk_size).
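As a concrete illustration of the streaming shape, a trial can be split along its sample axis into fixed-width chunks. This is a minimal sketch with made-up dimensions (8 features, 250 samples, chunk size 50); any trailing partial chunk is simply dropped here:

```python
import numpy as np

# Hypothetical trial: 8 features x 250 samples
trial = np.random.randn(8, 250)
chunk_size = 50

# Split along the sample axis into (n_features, chunk_size) chunks;
# a trailing partial chunk is dropped for simplicity.
n_chunks = trial.shape[1] // chunk_size
chunks = [trial[:, i * chunk_size:(i + 1) * chunk_size] for i in range(n_chunks)]

print(len(chunks), chunks[0].shape)
```

Each element of `chunks` has the (n_features, chunk_size) shape expected by streaming consumers.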
MNE-Python To Julia
```python
import numpy as np
from scipy.io import savemat

# `csp`, `epochs`, and `labels` come from an earlier MNE pipeline step.
# Example MNE/CSP output: (n_trials, n_components, n_times)
X_csp = csp.fit_transform(epochs.get_data(), labels)

# Julia expects: (n_features, n_samples, n_trials)
features_julia = np.transpose(X_csp, (1, 2, 0))

savemat("features_for_julia.mat", {
    "features": features_julia.astype("float64"),
    "labels": labels.astype("int64"),
    "sampling_rate": float(epochs.info["sfreq"]),
})
```
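Before handing the file to Julia, a quick round-trip check on the Python side can catch shape or dtype surprises. This is a self-contained sketch with synthetic stand-in arrays (4 features, 100 samples, 10 trials); note that savemat writes 1-D arrays back as 2-D (1, n):

```python
import numpy as np
from scipy.io import savemat, loadmat

# Synthetic stand-ins for the exported arrays
features_julia = np.random.randn(4, 100, 10)
labels = np.arange(10) % 2 + 1

savemat("roundtrip_check.mat", {"features": features_julia, "labels": labels})

reloaded = loadmat("roundtrip_check.mat")
assert reloaded["features"].shape == (4, 100, 10)
# savemat stores 1-D arrays as 2-D (1, n); flatten before comparing
assert reloaded["labels"].ravel().tolist() == labels.tolist()
```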
Load the exported file on the Julia side and wrap it in BCIData:

```julia
using MAT
using NimbusSDK

data = matread("features_for_julia.mat")
features = Float64.(data["features"])
labels = Int.(vec(data["labels"]))

metadata = BCIMetadata(
    sampling_rate = Float64(data["sampling_rate"]),
    paradigm = :motor_imagery,
    feature_type = :csp,
    n_features = size(features, 1),
    n_classes = length(unique(labels)),
    chunk_size = nothing
)

bci_data = BCIData(features, metadata, labels)
```
EEGLAB Or MATLAB Export
MATLAB arrays often already use (channels/features, samples, trials), which matches Julia BCIData.
```matlab
% features: n_features x n_samples x n_trials
% labels:   1 x n_trials, preferably 1-indexed
save('features_for_nimbus.mat', 'features', 'labels', 'sampling_rate');
```
```julia
using MAT

data = matread("features_for_nimbus.mat")
features = Float64.(data["features"])
labels = Int.(vec(data["labels"]))
```
If labels come from a zero-indexed pipeline, convert them before training or evaluation:
```julia
if minimum(labels) == 0
    labels .+= 1
end
```
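If you prefer to fix indexing on the exporting (Python) side instead, the same shift applies. A minimal sketch with made-up labels:

```python
import numpy as np

labels = np.array([0, 1, 0, 1, 1])

# Shift zero-indexed labels to the 1-indexed convention Julia BCIData expects
if labels.min() == 0:
    labels = labels + 1

print(labels.tolist())
```

Either side works; just do the conversion exactly once.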
OpenViBE CSV Export
OpenViBE often exports time-series rows. Segment the stream into trials before passing data to Nimbus.
```python
import numpy as np
import pandas as pd
from scipy.io import savemat

df = pd.read_csv("openvibe_features.csv")
# Example columns: timestamp, f1, f2, ..., label, trial_id
feature_cols = [c for c in df.columns if c.startswith("f")]

trials = []
labels = []
for trial_id in df["trial_id"].unique():
    trial = df[df["trial_id"] == trial_id]
    trials.append(trial[feature_cols].to_numpy().T)  # (n_features, n_samples)
    labels.append(int(trial["label"].iloc[-1]))

features = np.stack(trials, axis=2)  # (n_features, n_samples, n_trials)
savemat("openvibe_for_nimbus.mat", {
    "features": features.astype("float64"),
    "labels": np.asarray(labels, dtype="int64"),
})
```
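Note that np.stack requires every trial to have the same number of samples. If OpenViBE trials vary in length, one common workaround is truncating all trials to the shortest one before stacking. A sketch with synthetic arrays of unequal length:

```python
import numpy as np

# Hypothetical trials with unequal sample counts, each (n_features, n_samples)
trials = [np.random.randn(4, 120), np.random.randn(4, 100), np.random.randn(4, 110)]

# Truncate every trial to the shortest length so np.stack succeeds
min_len = min(t.shape[1] for t in trials)
features = np.stack([t[:, :min_len] for t in trials], axis=2)

print(features.shape)
```

Truncation discards data; if trial lengths differ by more than a few samples, fix the segmentation upstream instead.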
Handoff Validation
Run these checks before loading exported data into an SDK:
```python
import numpy as np

def validate_export(features, labels):
    assert features.ndim == 3, "Expected (features, samples, trials)"
    assert features.shape[2] == len(labels), "Labels must match trials"
    assert np.isfinite(features).all(), "Features contain NaN or Inf"
    assert len(np.unique(labels)) >= 2, "Need at least two classes"
```
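A usage sketch with synthetic data; the validator is repeated here so the snippet runs standalone:

```python
import numpy as np

def validate_export(features, labels):
    assert features.ndim == 3, "Expected (features, samples, trials)"
    assert features.shape[2] == len(labels), "Labels must match trials"
    assert np.isfinite(features).all(), "Features contain NaN or Inf"
    assert len(np.unique(labels)) >= 2, "Need at least two classes"

# Synthetic export: 4 features x 100 samples x 10 trials, two classes
features = np.random.randn(4, 100, 10)
labels = np.tile([1, 2], 5)
validate_export(features, labels)  # passes silently

# A single NaN should trip the finiteness check
bad = features.copy()
bad[0, 0, 0] = np.nan
try:
    validate_export(bad, labels)
    msg = ""
except AssertionError as e:
    msg = str(e)
print(msg)
```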
After loading in Julia, the SDK's diagnostic helper performs equivalent checks:

```julia
using NimbusSDK

report = diagnose_preprocessing(bci_data)
if !isempty(report.errors)
    error("Preprocessing export failed validation: $(report.errors)")
end
```
Normalization Handoff
Estimate normalization parameters on training data only, then save them with the model or exported feature bundle.
```julia
norm_params = estimate_normalization_params(train_features; method = :zscore)
train_norm = apply_normalization(train_features, norm_params)
test_norm = apply_normalization(test_features, norm_params)
```
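If normalization happens on the exporting (Python) side instead, the same train-only rule applies. A z-score sketch with synthetic (n_trials, n_features) matrices — plain NumPy, not SDK API:

```python
import numpy as np

# Hypothetical train/test feature matrices, (n_trials, n_features)
train = np.random.randn(50, 8) * 3 + 1
test = np.random.randn(20, 8) * 3 + 1

# Estimate z-score parameters on training data ONLY
mu = train.mean(axis=0)
sigma = train.std(axis=0)
sigma[sigma == 0] = 1.0  # guard against constant features

train_norm = (train - mu) / sigma
test_norm = (test - mu) / sigma  # reuse the training statistics
```

Reusing the training statistics on the test split avoids leaking test-set information into the model.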
See Feature Normalization for the full workflow.
Next Read
- Preprocessing Requirements: feature extraction requirements and paradigm guidance.
- Python MNE Integration: native Python SDK workflows with MNE.
- Julia SDK Quickstart: load exported features into Julia workflows.
- Feature Normalization: keep feature scales consistent across sessions.