
Code for the paper "Removing Noise from Extracellular Neural Recordings Using Fully Convolutional Denoising Autoencoders"

Paper: https://ieeexplore.ieee.org/abstract/document/9630585

Cite as:

@INPROCEEDINGS{9630585,  
  author={Kechris, Christodoulos and Delitzas, Alexandros and Matsoukas, Vasileios and Petrantonakis, Panagiotis C.},  
  booktitle={2021 43rd Annual International Conference of the IEEE Engineering in Medicine \& Biology Society (EMBC)},   
  title={Removing Noise from Extracellular Neural Recordings Using Fully Convolutional Denoising Autoencoders},   
  year={2021},  
  volume={},  
  number={},  
  pages={890-893},  
  doi={10.1109/EMBC46164.2021.9630585}
}

Abstract

Extracellular recordings are severely contaminated by numerous noise sources, rendering denoising an extremely challenging task that must be tackled for efficient spike sorting. To this end, we propose an end-to-end deep learning approach to the problem, utilizing a Fully Convolutional Denoising Autoencoder, which learns to produce a clean neuronal activity signal from a noisy multichannel input. The experimental results on simulated data show that our proposed method can significantly improve the quality of noise-corrupted neural signals, outperforming widely used wavelet denoising techniques.


Requirements

  • Python (tested with v3.8): Used for data generation and for developing the network

  • Matlab (tested with R2020b): Used to implement the wavelet denoising methods against which the network's performance is compared

In order to install the necessary Python libraries, run the following command:

pip install -r requirements.txt

Note: To run the dataset generation scripts, you should also install the MEArec Python library; installation instructions are provided in the MEArec documentation.


Dataset

The extracellular recordings used for training and evaluation are available in two formats: .mat and .tfrecord.

.
|-- data/
|   |-- mat/
|   |-- TFRecord/
.

Data are organized as follows:

e_mix33_n<L>_iter<K>.{mat,tfrecord}

where

  • <L> is the noise level in μV (<L> = [7, 9, 15, 20]) and
  • <K> is the recording number (<K> = [0, 1, ..., 9])
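For example, here is a minimal Python sketch (assuming scipy is installed) that iterates over the .mat recordings following this naming scheme. The variable names stored inside each .mat file are not documented here, so the sketch only lists them:

```python
# Minimal sketch: iterate over the .mat recordings following the naming scheme
# e_mix33_n<L>_iter<K>.mat and list the variables stored in each file.
from pathlib import Path
from scipy.io import loadmat

DATA_DIR = Path("data/mat")

for noise_level in [7, 9, 15, 20]:          # <L>: noise level in μV
    for iteration in range(10):             # <K>: recording number
        path = DATA_DIR / f"e_mix33_n{noise_level}_iter{iteration}.mat"
        if not path.exists():
            continue
        contents = loadmat(path)
        # Keys starting with "__" are metadata added by loadmat, not data.
        variables = [k for k in contents if not k.startswith("__")]
        print(path.name, variables)
```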

How to run

Fully Convolutional Denoising Autoencoder

cd fcdae_network

For Training & Evaluation:

Run

python fc_dae_model_train_and_test.py
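For orientation, below is a minimal sketch of a 1-D fully convolutional denoising autoencoder in Keras. The layer widths, kernel sizes and input shape are illustrative assumptions; the architecture actually used in the paper is the one defined in fc_dae_model_train_and_test.py.

```python
# Illustrative sketch of a 1-D fully convolutional denoising autoencoder.
# Layer widths, kernel sizes and the input shape are hypothetical; see
# fc_dae_model_train_and_test.py for the architecture used in the paper.
import tensorflow as tf
from tensorflow.keras import layers


def build_fcdae(n_samples=640, n_channels=4):
    noisy = tf.keras.Input(shape=(n_samples, n_channels))
    # Encoder: strided 1-D convolutions compress the noisy multichannel signal.
    x = layers.Conv1D(16, 9, strides=2, padding="same", activation="relu")(noisy)
    x = layers.Conv1D(32, 9, strides=2, padding="same", activation="relu")(x)
    # Decoder: transposed convolutions restore the original length.
    x = layers.Conv1DTranspose(32, 9, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv1DTranspose(16, 9, strides=2, padding="same", activation="relu")(x)
    clean = layers.Conv1D(n_channels, 9, padding="same")(x)
    model = tf.keras.Model(noisy, clean)
    model.compile(optimizer="adam", loss="mse")  # trained to reconstruct the clean signal
    return model


if __name__ == "__main__":
    build_fcdae().summary()
```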

Wavelet denoising methods DWT and SWT

cd wavelet_denoising

For:

  • Discrete Wavelet Transform (DWT): Run dwt_denoising.m
  • Stationary Wavelet Transform (SWT): Run swt_denoising.m
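If a Matlab license is not available, the following is a rough Python sketch of the same idea using PyWavelets. It is not the repository's implementation; the wavelet, decomposition level and thresholding rule are assumptions chosen for illustration.

```python
# Rough Python sketch of wavelet-threshold denoising (not the repository's
# Matlab implementation). Wavelet, level and threshold rule are assumptions.
import numpy as np
import pywt


def dwt_denoise(signal, wavelet="sym7", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise estimate from the finest detail coefficients (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Universal threshold, applied softly to all detail coefficients.
    threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]
```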

Data generation

If you want to recreate a dataset, follow the steps below:

cd data_generation

Step 1 (Optional): Run the following command to generate the extracellular templates

python generate_templates.py

The templates will be saved in the folder data_generation/templates/ in .h5 format. Step 1 can be completely omitted by using the existing templates located in the aforementioned folder.
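To inspect a generated template file, the minimal sketch below uses h5py; the file name is illustrative and the dataset names inside depend on MEArec, so the sketch only lists them:

```python
# Minimal sketch: list the datasets stored in a generated template file.
# The file name below is illustrative; use the actual .h5 file produced in
# data_generation/templates/.
import h5py

with h5py.File("data_generation/templates/templates.h5", "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```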

Step 2: Run the following command to create the extracellular recordings

python generate_recordings.py

The recordings will be saved in the folder data/mat/ in .mat format and in the folder data_generation/h5_recordings/ in .h5 format. If you want to alter the generation settings of the recordings, change the parameters in the .yaml files located at data_generation/recording_settings/.
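For example, here is a minimal sketch (assuming PyYAML is installed) that lists the parameters available in each settings file without assuming particular key names:

```python
# Minimal sketch: print the top-level parameters of each recording-settings
# file. The keys themselves depend on the repository and MEArec, so none are
# hard-coded here.
from pathlib import Path
import yaml

for settings_file in sorted(Path("data_generation/recording_settings").glob("*.yaml")):
    with open(settings_file) as f:
        params = yaml.safe_load(f)
    print(settings_file.name, list(params))
```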

Step 3: Run the following command to convert the extracellular recordings to .tfrecord format

python generate_tfrecords.py

After the conversion, the recordings will be saved in the folder data/TFRecord/ in .tfrecord format. These files are used for the autoencoder's training.
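As a quick sanity check, the schema-agnostic sketch below verifies that a generated file is readable; the feature specification used for training is defined in the fcdae_network code and is not reproduced here.

```python
# Minimal, schema-agnostic sketch: count the serialized examples in one of the
# generated .tfrecord files without parsing its features.
import tensorflow as tf

dataset = tf.data.TFRecordDataset("data/TFRecord/e_mix33_n9_iter0.tfrecord")
print("records:", sum(1 for _ in dataset))
```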


Results

Per-channel SNR Improvement


Fully Convolutional Denoising Autoencoder (FCDAE)
| Input Noise Level (μV / (min dB, max dB)) | SNR Improvement CH1 (dB) | SNR Improvement CH2 (dB) | SNR Improvement CH3 (dB) | SNR Improvement CH4 (dB) |
| --- | --- | --- | --- | --- |
| 7 / (0.60, 2.87) | 8.184 | 7.672 | 6.207 | 8.429 |
| 9 / (-1.36, 0.77) | 10.012 | 7.901 | 7.241 | 9.455 |
| 15 / (-5.29, -3.59) | 9.997 | 9.888 | 9.506 | 11.223 |
| 20 / (-7.02, -5.97) | 10.579 | 10.460 | 11.029 | 11.499 |

Discrete Wavelet Transform (DWT)
| Input Noise Level (μV / (min dB, max dB)) | SNR Improvement CH1 (dB) | SNR Improvement CH2 (dB) | SNR Improvement CH3 (dB) | SNR Improvement CH4 (dB) |
| --- | --- | --- | --- | --- |
| 7 / (0.60, 2.87) | 1.898 | 4.095 | 2.866 | 3.061 |
| 9 / (-1.36, 0.77) | 4.105 | 4.117 | 3.424 | 3.757 |
| 15 / (-5.29, -3.59) | 5.970 | 6.675 | 6.016 | 6.422 |
| 20 / (-7.02, -5.97) | 6.963 | 6.961 | 7.104 | 7.285 |

Stationary Wavelet Transform (SWT)
| Input Noise Level (μV / (min dB, max dB)) | SNR Improvement CH1 (dB) | SNR Improvement CH2 (dB) | SNR Improvement CH3 (dB) | SNR Improvement CH4 (dB) |
| --- | --- | --- | --- | --- |
| 7 / (0.60, 2.87) | 1.899 | 4.557 | 3.380 | 3.219 |
| 9 / (-1.36, 0.77) | 3.680 | 4.021 | 3.649 | 3.692 |
| 15 / (-5.29, -3.59) | 6.415 | 7.503 | 6.770 | 6.946 |
| 20 / (-7.02, -5.97) | 7.569 | 7.627 | 7.673 | 7.836 |

Per-channel RMSE


Fully Convolutional Denoising Autoencoder (FCDAE)
| Input Noise Level (μV / (min dB, max dB)) | RMSE CH1 | RMSE CH2 | RMSE CH3 | RMSE CH4 |
| --- | --- | --- | --- | --- |
| 7 / (0.60, 2.87) | 0.02321 | 0.02344 | 0.02341 | 0.02363 |
| 9 / (-1.36, 0.77) | 0.02339 | 0.02332 | 0.02342 | 0.02342 |
| 15 / (-5.29, -3.59) | 0.03237 | 0.03233 | 0.03240 | 0.03235 |
| 20 / (-7.02, -5.97) | 0.03673 | 0.03664 | 0.03662 | 0.03667 |


Discrete Wavelet Transform (DWT)
| Input Noise Level (μV / (min dB, max dB)) | RMSE CH1 | RMSE CH2 | RMSE CH3 | RMSE CH4 |
| --- | --- | --- | --- | --- |
| 7 / (0.60, 2.87) | 0.04727 | 0.03507 | 0.03402 | 0.04321 |
| 9 / (-1.36, 0.77) | 0.04649 | 0.03689 | 0.03657 | 0.04553 |
| 15 / (-5.29, -3.59) | 0.05209 | 0.04758 | 0.04767 | 0.05496 |
| 20 / (-7.02, -5.97) | 0.05535 | 0.05442 | 0.05636 | 0.05825 |

Stationary Wavelet Transform (SWT)
| Input Noise Level (μV / (min dB, max dB)) | RMSE CH1 | RMSE CH2 | RMSE CH3 | RMSE CH4 |
| --- | --- | --- | --- | --- |
| 7 / (0.60, 2.87) | 0.04731 | 0.03344 | 0.03221 | 0.04279 |
| 9 / (-1.36, 0.77) | 0.04891 | 0.03823 | 0.03608 | 0.04638 |
| 15 / (-5.29, -3.59) | 0.04998 | 0.04389 | 0.04393 | 0.05199 |
| 20 / (-7.02, -5.97) | 0.05209 | 0.05092 | 0.05312 | 0.05500 |