Documentation Index

Fetch the complete documentation index at: https://docs.nimbusbci.com/llms.txt

Use this file to discover all available pages before exploring further.

Real-Time BCI Setup

This page owns hardware and acquisition setup: how EEG gets from an amplifier into feature chunks that Nimbus can classify. For SDK streaming APIs, see Streaming Inference Configuration, Python Streaming Inference, and Julia Streaming Inference.

Real-Time Pipeline

EEG amplifier -> acquisition transport -> preprocessing -> feature chunks -> Nimbus streaming session -> application command
Nimbus expects feature chunks, not raw EEG. Your acquisition loop should filter, clean, epoch/window, and extract features before calling the SDK.
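
As an illustrative sketch of what "feature chunks" means in practice, the toy extractor below turns one raw EEG window into per-channel band-power features with NumPy. The band choices, shapes, and bare-FFT approach are assumptions for illustration only; a real pipeline would filter and clean the signal first (see Preprocessing Requirements).

```python
import numpy as np

def bandpower_features(eeg, fs, bands=((8, 12), (13, 30))):
    """Toy feature extractor: per-channel band power via the FFT.

    eeg has shape (n_channels, n_samples); returns a flat vector of
    length n_channels * n_bands. Illustrative only -- production
    pipelines typically filter, re-reference, and clean artifacts first.
    """
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))
    return np.concatenate(feats)

fs = 250
eeg_window = np.random.randn(8, fs)            # 8 channels, 1 s of data
features = bandpower_features(eeg_window, fs)  # 8 channels x 2 bands
```

Stacking successive feature vectors like this as columns yields the (n_features, chunk_size) chunks described below.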

Acquisition Options

| Option | Best for | Notes |
| --- | --- | --- |
| Lab Streaming Layer (LSL) | Research labs and multi-device synchronization | Common for EEG streams and markers |
| BrainFlow | Consumer and research EEG hardware | Unified API for many boards |
| Vendor SDK | Production devices | Lowest integration overhead when the vendor API is stable |
| File replay | Testing and demos | Useful for deterministic latency tests |
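
File replay is straightforward to sketch: the generator below paces fixed-size chunks from a recorded array at the amplifier's sampling rate, which keeps latency tests repeatable. `replay_chunks` and its pacing-by-sleep approach are illustrative, not a Nimbus or vendor API.

```python
import time
import numpy as np

def replay_chunks(recording, fs, chunk_size):
    """Yield fixed-size chunks from a recording at real-time pace.

    recording has shape (n_channels, n_samples). Sleeping one chunk
    period between yields mimics the cadence of a live amplifier.
    """
    period = chunk_size / fs
    n_samples = recording.shape[1]
    for start in range(0, n_samples - chunk_size + 1, chunk_size):
        yield recording[:, start:start + chunk_size]
        time.sleep(period)  # pace playback like the live device

recording = np.random.randn(8, 500)  # e.g. 8 channels, 2 s at 250 Hz
chunks = list(replay_chunks(recording, fs=250, chunk_size=50))
```

Because the recording is fixed, any latency variation you measure comes from your pipeline, not the signal source.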

Setup Checklist

  1. Confirm amplifier sampling rate and channel order.
  2. Synchronize event markers with EEG samples.
  3. Apply artifact handling and feature extraction in the same way as calibration.
  4. Emit chunks with stable shape: (n_features, chunk_size).
  5. Use the same normalization parameters from training.
  6. Measure latency around preprocessing and SDK inference separately.
  7. Add rejection thresholds for low-confidence or high-entropy decisions.
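
Step 5 is a common source of silent accuracy loss. A minimal sketch, assuming you saved per-feature mean and standard deviation at calibration time (`normalize_chunk` is a hypothetical helper, not part of the SDK):

```python
import numpy as np

def normalize_chunk(chunk, train_mean, train_std, eps=1e-8):
    """Apply normalization parameters saved during calibration.

    chunk: (n_features, chunk_size); train_mean/train_std: (n_features,).
    Re-estimating statistics online would shift the feature distribution
    away from what the model saw during training.
    """
    return (chunk - train_mean[:, None]) / (train_std[:, None] + eps)

train_mean = np.array([1.0, -2.0])   # loaded from calibration, not recomputed
train_std = np.array([2.0, 0.5])
chunk = np.ones((2, 4))
normalized = normalize_chunk(chunk, train_mean, train_std)
```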

LSL Pattern

Use LSL when you need synchronized EEG and marker streams.
from pylsl import StreamInlet, resolve_stream

# Resolve the first EEG and marker streams on the network.
# (Newer pylsl versions prefer resolve_byprop over resolve_stream.)
eeg_stream = resolve_stream("type", "EEG")[0]
marker_stream = resolve_stream("type", "Markers")[0]

eeg_inlet = StreamInlet(eeg_stream)
marker_inlet = StreamInlet(marker_stream)

while True:
    # Block briefly on EEG so the loop doesn't spin; poll markers non-blocking.
    samples, timestamps = eeg_inlet.pull_chunk(timeout=1.0)
    markers, marker_times = marker_inlet.pull_chunk(timeout=0.0)

    if samples:
        feature_chunk = preprocess_to_features(samples, markers)
        result = session.process_chunk(feature_chunk)
        handle_prediction(result.prediction, result.confidence)

BrainFlow Pattern

Use BrainFlow when you want one API across supported EEG boards.
import time

from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

params = BrainFlowInputParams()
board = BoardShim(BoardIds.CYTON_BOARD.value, params)

board.prepare_session()
board.start_stream()

try:
    while True:
        # Snapshot the most recent 250 samples (1 s at the Cyton's 250 Hz).
        data = board.get_current_board_data(250)
        feature_chunk = preprocess_to_features(data)
        result = session.process_chunk(feature_chunk)
        handle_prediction(result.prediction, result.confidence)
        time.sleep(0.05)  # avoid busy-waiting; tune to your chunk cadence
finally:
    board.stop_stream()
    board.release_session()

Latency Budget

| Stage | Target |
| --- | --- |
| Acquisition buffer | 1-10 ms |
| Preprocessing and feature extraction | 5-50 ms, depending on feature type |
| Nimbus inference | 10-25 ms typical |
| Application command | 1-10 ms |
Profile each stage separately. If latency is high, the bottleneck is often acquisition buffering or feature extraction rather than model inference.
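
One way to profile each stage separately is to wrap every callable in a small timer. `StageTimer` below is an illustrative helper, not part of the SDK; the stage names are placeholders:

```python
import time

class StageTimer:
    """Collect per-stage wall-clock latency samples in milliseconds."""

    def __init__(self):
        self.samples = {}

    def measure(self, stage, fn, *args, **kwargs):
        # Time a single call and file the result under the stage name.
        t0 = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - t0) * 1000.0
        self.samples.setdefault(stage, []).append(elapsed_ms)
        return result

timer = StageTimer()
total = timer.measure("preprocess", sum, [1, 2, 3])
# In the acquisition loop you would wrap each stage the same way, e.g.:
#   feature_chunk = timer.measure("preprocess", preprocess_to_features, samples)
#   result = timer.measure("inference", session.process_chunk, feature_chunk)
```

Summarize percentiles (not just means) per stage; tail latency is what users feel.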

Production Guardrails

  • Warm up model and preprocessing code before the session starts.
  • Use fixed-size buffers to avoid unbounded memory growth.
  • Reset streaming sessions between trials.
  • Log per-stage latency, confidence, entropy, and rejection decisions.
  • Keep calibration preprocessing and online preprocessing identical.
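
The rejection thresholds from the setup checklist can be sketched as a simple rule combining a confidence floor with an entropy ceiling. The threshold values here are placeholders; tune them on held-out calibration trials rather than hard-coding numbers like these:

```python
import math

def should_reject(probs, confidence_floor=0.6, entropy_ceiling=1.0):
    """Reject a decision when the top probability is low or entropy is high.

    probs is a per-class probability distribution. Both thresholds are
    illustrative defaults, not recommended values.
    """
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return max(probs) < confidence_floor or entropy > entropy_ceiling
```

Log every rejection alongside the confidence and entropy that triggered it, so thresholds can be revisited offline.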

Next Read

Streaming Inference Configuration

Choose chunk sizes and aggregation methods.

Python Streaming Inference

Use StreamingSession in Python.

Julia Streaming Inference

Use init_streaming, process_chunk, and finalize_trial in Julia.

Preprocessing Requirements

Prepare feature-space inputs correctly.