Home
ElectroCUDA – robust preprocessing & analysis for electrophysiology. Core features include noise-resistant signal processing, robust statistics & extensive hardware acceleration (GPU & CPU).
ElectroCUDA is intended for any multichannel field potential recordings (LFP/EEG/MEG), but development has focused on intracranial EEG (ECoG/sEEG) thus far.
Code was developed using data from 100+ neurosurgical patients (ECoG/sEEG) & 25+ scalp EEG subjects. See Tan et al., 2022 for peer-reviewed ECoG results obtained with a very early version of electroCUDA.
Robust statistical & signal processing principles are central to electroCUDA. These principles mitigate noise, outliers & overfitting while improving SNR, feature extraction, statistical inference & hypothesis testing.
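As a minimal, self-contained illustration of the robust-statistics idea (a generic MATLAB sketch, not electroCUDA's actual API), the snippet below flags a contaminated channel with a median/MAD-based z-score, which a single bad channel distorts far less than the classical mean/SD version:

```matlab
% Hypothetical illustration (generic MATLAB, not electroCUDA's API): flag a
% noisy channel with a robust z-score built from the median and the median
% absolute deviation (MAD) instead of the mean and standard deviation.
rng(0);                                   % reproducible demo data
data = randn(64, 10000);                  % 64 channels x 10000 samples
data(7, :) = data(7, :) + 50;             % simulate one grossly contaminated channel

chanPower = mean(data.^2, 2);             % per-channel mean power

% Classical z-score: the bad channel inflates the very mean & SD it is
% judged against, so its own score is capped near sqrt(N-1) ~ 7.9 here.
zClassic = (chanPower - mean(chanPower)) ./ std(chanPower);

% Robust z-score: the median & scaled MAD barely move, so the outlier stands out.
med  = median(chanPower);
madS = 1.4826 * median(abs(chanPower - med));   % scaled to match SD for Gaussian data
zRobust = (chanPower - med) ./ madS;

badChannels = find(abs(zRobust) > 5)      % threshold chosen only for illustration; returns 7
```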
ElectroCUDA and its dependencies follow open-science principles, with the notable exception of MATLAB & CUDA (free-ish but proprietary). I hope to go fully open-source by switching to Julia as it matures.
Modular programming principles improve electroCUDA's performance & external interoperability. Many functions can be mixed and matched with other electrophysiology packages with minimal editing (caution: this may reduce validity & performance). Routines use modular call stacks with layered hardware acceleration that combines some or all of: compiled CUDA binaries, GPU vectorization, CPU vectorization, CPU thread parallelization & CPU process parallelization.
- Main page: Setup and usage
The code is compute-intensive & designed for workstations or HPC nodes (gaming PCs might suffice). GPU acceleration requires an Nvidia CUDA GPU and can be disabled in most functions.
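A minimal sketch of this layered, GPU-optional pattern (function and flag names are hypothetical, not electroCUDA's actual API; requires the Parallel Computing Toolbox for the GPU path): try the GPU first, fall back to vectorized CPU code, and let the caller switch the GPU off entirely.

```matlab
% Hypothetical sketch of the layered-acceleration pattern described above;
% names are illustrative, not electroCUDA's actual API.
function y = robustReference(x, useGPU)
% ROBUSTREFERENCE  Subtract the across-channel median from every sample.
%   x : samples-by-channels matrix
    if nargin < 2
        useGPU = true;                       % GPU acceleration is optional
    end
    if useGPU && gpuDeviceCount > 0          % needs Parallel Computing Toolbox
        xg = gpuArray(x);                    % GPU vectorization layer
        y  = gather(xg - median(xg, 2));     % compute on GPU, return to host memory
    else
        y = x - median(x, 2);                % CPU vectorization fallback
    end
    % Heavier routines could add further layers here, e.g. compiled CUDA
    % kernels or a parfor loop over recording blocks.
end
```

Calling `robustReference(x, false)` forces the CPU path, e.g. on machines without an Nvidia GPU.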
Routines are highly automated and avoid manual workflows aside from exploration & testing; manual interaction can introduce human error/bias in preprocessing & analysis.
- Main page: Visualization
Currently documented are functions for plotting cortical meshes, electrodes, and summary statistics. More to come.
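For example, a generic MATLAB sketch of this kind of plot (not electroCUDA's own plotting functions; a unit sphere and a few arbitrary points stand in for a real cortical mesh and electrode coordinates):

```matlab
% Generic MATLAB sketch (not electroCUDA's plotting API): render a triangulated
% surface with electrode positions overlaid. In practice you would load your
% own vertices, faces and electrode coordinates.
[X, Y, Z] = sphere(60);                          % stand-in "cortex"
mesh3d   = surf2patch(X, Y, Z, 'triangles');     % convert to faces/vertices
theta    = linspace(0, pi, 8)';                  % 8 stand-in electrodes
elecXYZ  = [cos(theta) sin(theta) zeros(8, 1)];  % placed along one great circle

figure('Color', 'w');
patch('Vertices', mesh3d.vertices, 'Faces', mesh3d.faces, ...
      'FaceColor', [0.85 0.85 0.85], 'EdgeColor', 'none', ...
      'FaceLighting', 'gouraud');
camlight headlight; material dull;               % simple surface shading
hold on;
scatter3(elecXYZ(:,1), elecXYZ(:,2), elecXYZ(:,3), 60, 'r', 'filled');
axis equal vis3d off; view(3);
```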
ElectroCUDA uses code and algorithmic concepts from the following colleagues. I owe them many thanks:
- Stanford – Laboratory of Behavioral and Cognitive Neuroscience
- Carnegie Mellon – TarrLab
  - Ying Yang, Daniel Leeds, John Pyles, Kegan Landfair & Michael Tarr
- UCSD – Schwartz Center for Computational Neuroscience
  - Jason Palmer, Makoto Miyakoshi & EEGLAB team
- École normale supérieure – Laboratoire des Systèmes Perceptifs
  - Alain de Cheveigné & NoiseTools team
ElectroCUDA was developed using data from:
This work was supported by National Science Foundation Graduate Research Fellowship DGE-1650604 and Department of Defense Grant 13RSA281.
electroCUDA is free and open-source under the GNU GPL 3.0 license.
This code is for research purposes only and is not intended for clinical or medical use.
Use this code at your own risk. Users assume full responsibility for any eventuality related to this content.
USE AND DISTRIBUTION OF THIS SOFTWARE MAY BE SUBJECT TO UNIVERSITY OF CALIFORNIA INTELLECTUAL PROPERTY RIGHTS AND UNITED STATES MANDATES FOR FEDERALLY-FUNDED RESEARCH.
THE CONTENT HEREIN IS PROVIDED "AS IS" WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES. IN NO EVENT SHALL THE AUTHORS AND CONTRIBUTORS OF CONTENT HEREIN BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES AND/OR ADVERSE OUTCOMES RELATED IN ANY WAY TO THE USE OF THIS CONTENT. ANY USE OF THIS CONTENT IMPLIES ACCEPTANCE OF THESE TERMS.