Commit
update for WS2024
behinger committed Oct 10, 2024
1 parent d1f7d71 commit 6e6688d
Showing 13 changed files with 552 additions and 391 deletions.
2 changes: 0 additions & 2 deletions _quarto.yml
@@ -27,8 +27,6 @@ website:
text: 🎯 Milestones
- section: "Exercises"
contents:
- href: exercises_overview.qmd
text: ℹ️ Organisation
- href: exercises/exercises.qmd
text: 🏋️‍♂️ Exercise sheets
- section: "Resources"
13 changes: 12 additions & 1 deletion bibliography.qmd
@@ -5,6 +5,7 @@ editor:
---

## Text books
You should be able to get these books via the library. If not, please contact me.

#### Hari & Puce - MEG/EEG Primer
A good book with a general overview of many topics discussed in the course.
@@ -15,4 +16,14 @@ The standard book for ERP analyses


#### Analyzing Neural Time Series Data: Theory and Practice
The standard book for time-frequency analysis (a bit on the older side, but still an excellent read).


## Websites
### Learning EEG
This is a really nice website with beautiful images.
[https://www.learningeeg.com/](https://www.learningeeg.com/)

### Steve Luck's ERPLab tutorial
This website/book is very applied and unfortunately uses ERPLab and Matlab throughout. Nevertheless, it is a very nice resource accompanying the course.
[Applied Event-related Potential Data Analysis](https://socialsci.libretexts.org/Bookshelves/Psychology/Biological_Psychology/Applied_Event-Related_Potential_Data_Analysis_(Luck))
4 changes: 2 additions & 2 deletions course.qmd
@@ -1,6 +1,6 @@
## Course Philosophy

You are here because you want to learn something. My job is to make that easier and provide opportunities. I will provide lectures whenever they are ready. Homeworks and all solutions are already available at the start of the course. In principle, you can decide to not join most of the weekly "Discussions" and teach yourself - but the discussions will be fun and helpful as you will see. Attending the milestones is mandatory
You are here because you want to learn something. My job is to make that easier and to provide opportunities. I will provide lectures whenever they are ready. Homeworks and solutions are already available at the start of the course. In principle, you can decide not to join most of the weekly "Discussions" and teach yourself - but the discussions will be fun and helpful, as you will see. Attending the milestones is mandatory, though.

At the end of the course, you should be able to analyse a complete EEG dataset in different ways. This requires conceptual knowledge and implementation practice. Importantly, this course requires you to actually want to learn! I will give you lots of flexibility, but in the end, you must make the time and effort to learn the content.

@@ -10,7 +10,7 @@ I will support, but I will not enforce.
The formal requirement to pass the course is to get a grade >=4.0 in the semester project and to present at each milestone (ungraded). There is no requirement to join the other seminar sessions, to do the homeworks, or to watch the lectures. All of these things will be extremely helpful though, and I recommend watching the lectures early in the semester (you have to watch them anyway!)

## Exercises
The homeworks are voluntarily, but **HIGHLY** recommended. The semester project is based on the exercises. If you went through them and understood the content, the backbone of the semester project will be finished already.
The homeworks are voluntary, but **HIGHLY** recommended. The semester project is based on the exercises: if you have gone through them and understood the content, the semester project will be easier to tackle.

A note on solutions: I provide solutions so that you can check yourself in case you get stuck. This is dangerous from a learning perspective, because easy access to solutions also means you might not challenge yourself enough. I highly recommend trying out various approaches to a problem first, and only going for the solution if you do not succeed after 10-20 minutes.

76 changes: 50 additions & 26 deletions exercises/ex1_overview.md
@@ -8,35 +8,33 @@ run `pip install mne`
or if you like conda:
`conda install -c conda-forge mne`

Similarly, install mne-bids (`pip install mne-bids`), a tool to load the EEG data easily.

Test the installation by
`import mne`

## Load data & plot continuous EEG
We will be using a typical P3 oddball dataset. We expect a positive response over parietal/central electrodes (Cz/Pz) starting at 300-400ms. Something like this: https://www.neurobs.com/manager/content/docs/psychlab101_experiments/Oddball%20Task%20(Visual)/description.html
If you want to read the details you can find it here (the dataset is also part of the semester project): https://psyarxiv.com/4azqm/. You can also investigate the `sub-002_task-P3_eeg.json` for a task description which is automatically downloaded.
## Download data
We will be using a typical P3 oddball dataset. We expect a positive response over parietal/central electrodes (Cz/Pz) starting at 300-400ms, [something like this](https://www.neurobs.com/manager/content/docs/psychlab101_experiments/Oddball%20Task%20(Visual)/description.html).

If you want to read the details, you can find them [here](https://psyarxiv.com/4azqm/). You can also investigate `sub-030_task-P3_eeg.json` for a task description; it is one of the files you will download next:

Please download the data from [https://osf.io/9cnmx/](https://osf.io/9cnmx/).


Next, you need the [ccs_eeg_utils.py](ccs_eeg_utils.py) file, which you can make importable from your Python code, e.g. via
```python
import sys
sys.path.insert(0,'..')
sys.path.insert(0,'.')
```

For the ccs_eeg_utils, you will need mne-bids - a tool to manage multi-subject/session EEG data.
```
pip install mne-bids
```

The actual data you can download from https://osf.io/thsqg/
`["channels.tsv","events.tsv","eeg.fdt","eeg.json","eeg.set"]`
and put them into `../local/bids/sub-002/ses-P3/eeg/sub-002_ses-P3_task-P3_XYZ` with `XZY` being the filename.

## Load data & plot continuous EEG
```python
# Load the data
from mne_bids import (BIDSPath,read_raw_bids)

# path where the datasets are stored
bids_root = "../local/bids"
subject_id = '002' # recommend subject 2 for now
subject_id = '030' # recommend subject 30 for now


bids_path = BIDSPath(subject=subject_id,task="P3",session="P3",
@@ -50,20 +48,39 @@ ccs_eeg_utils.read_annotations_core(bids_path,raw)

```
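
The collapsed part of the diff hides the rest of this block. A typical way such a block continues with mne-bids (a sketch under assumptions - not necessarily the repository's exact code):

```python
# hypothetical completion of the collapsed lines above
bids_path = BIDSPath(subject=subject_id, task="P3", session="P3",
                     datatype="eeg", suffix="eeg", root=bids_root)

raw = read_raw_bids(bids_path)  # lazily loads the EEG recording

import ccs_eeg_utils
ccs_eeg_utils.read_annotations_core(bids_path, raw)  # helper from the course's utility file
```
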

**T:** Extract a single channel and plot the whole timeseries. You can directly interact with the `raw` object, e.g. `raw[1:10,1:5000]` extracts the first 10 channels and 2000 samples.
**Q:** What is the unit/scale of the data (in sense of "range" of data)?
**Task:** Extract a single channel and plot the whole timeseries.

::: callout-note

You can directly interact with the `raw` object, e.g. `raw[:10,:5000]`, which extracts the first 10 channels and the first 5000 samples.

You can also use `raw.get_data()` to get the whole data as a numpy array.

:::

::: callout-tip

For now we can use simple matplotlib to visualize the data, e.g.:
```python
from matplotlib import pyplot as plt
plt.plot(raw[10,:][0].T)
```

:::

**Question:** What is the range of the data (in the sense of min to max y-values, in µV)?

**T:** Have a look at `raw.info` and note down what the sampling frequency is (how many EEG-samples per second)
**Task:** Have a look at `raw.info` and note down what the sampling frequency is (how many EEG samples per second).
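
For instance (a quick look, assuming `raw` is loaded as above):

```python
print(raw.info)           # overview of the recording (channels, filters, sampling rate, ...)
print(raw.info["sfreq"])  # the sampling frequency in Hz
```
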

## Epoching

**T:** We will epoch the data now. Formost we will cut the raw data to one channel using `raw.pick_channels(["Cz"])` - note that this will permanently change the "raw" object and "deletes" alle other channels from memory. If you want rather a copy you could use `raw_subselect = raw.copy().pick_channels(["Cz"]))`.
**Task:** We will epoch the data now. First, we will cut the raw data down to one channel using `raw.pick(["Cz"])` - note that this permanently changes the `raw` object and **removes** all other channels from memory. If you would rather work on a copy, you could use `raw_subselect = raw.copy().pick(["Cz"])`.


**T** Let's investigate the annotation markers. Have a look at raw.annotations. These values reflect the values in the bids `*_events.tsv` file (have a look at the files in `../local/bids/sub-002/sub-002_task-P3_events.tsv`). BIDS is a new standard to share neuroimaging and other physiological data. It is not really a fileformat, but more of a folder & filename structure with some additional json files. I highly recommend to put your data into bids-format as soon as possible. It helps you stay organized and on top of things!
**Task** Let's investigate the annotation markers. Have a look at `raw.annotations`. These values reflect the values in the BIDS `*_events.tsv` file (have a look at this file via `../local/bids/sub-030/sub-030_task-P3_events.tsv`). BIDS is a standard to share neuroimaging and other physiological data. It is not really a file format, but more of a folder & filename structure with some additional JSON files. I highly recommend putting your data into BIDS format as soon as possible. It helps you stay organized and on top of things!


**T** MNE-speciality: We have to convert annotations to events with `evts,evts_dict = mne.events_from_annotations(raw)`. Have a look at evts - it shows you the sample, the duration and event-id (with the look-up table evts_dict). In this case we only want to look at stimulus evoked responses, so we subset the event table (note: this could be done after epoching too)
**Task** An MNE speciality: we have to convert annotations to events with `evts,evts_dict = mne.events_from_annotations(raw)`. Have a look at `evts` - it shows you the sample of each event, a middle column (unused here), and the event-id (with the look-up table `evts_dict`). In this case we only want to look at stimulus-evoked responses, so we subset the event table (note: this could be done after epoching too).

```python
# get all keys which contain "stimulus"
@@ -72,19 +89,26 @@ wanted_keys = [e for e in evts_dict.keys() if "stimulus" in e]
evts_dict_stim=dict((k, evts_dict[k]) for k in wanted_keys if k in evts_dict)
```

**T** Epoch the data with `epochs = mne.Epochs(raw,evts,evts_dict_stim,tmin=-0.1,tmax=1)`
**Task** Epoch the data with `epochs = mne.Epochs(raw,evts,evts_dict_stim,tmin=-0.1,tmax=1)`

**Task** Now that we have the epochs, we should plot them. Plot all trials 'manually' (without using MNE's plotting functionality), using `epochs.get_data()`; see the sketch after the note below.

::: callout-note

You should plot one line per trial.

:::
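
One possible way to do this (a sketch, assuming the single-channel `epochs` from above):

```python
import matplotlib.pyplot as plt

data = epochs.get_data()  # shape: (n_trials, n_channels, n_samples)

# one line per trial, first (and only remaining) channel
plt.plot(epochs.times, data[:, 0, :].T, color="k", alpha=0.2)
plt.xlabel("time [s]")
plt.ylabel("amplitude")
plt.show()
```
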

**T** Now that we have the epochs we should plot them. Plot all trials 'manually', (without using mne's functionality) (`epochs.get_data()`).
**Q** What is the unit/scale of the data now?
**Question** What is the scale (range) of the epoched data now?

## My first ERP

**T** But which epochs belong to targets and which to distractors? This is hidden in the event-description. Using the following lines you can find out which indices belong to which trial-types
**Task** But which epochs belong to targets and which to distractors? This is hidden in the event description. Using the following lines, you can find out which indices belong to which trial types:
```python
target = ["stimulus:{}{}".format(k,k) for k in [1,2,3,4,5]]
distractor = ["stimulus:{}{}".format(k,j) for k in [1,2,3,4,5] for j in [1,2,3,4,5] if k!=j]
```
Now index the epochs and average them, e.g. `evoked = epochs[index].average()`. You can then plot them either via `evoked.plot()` or with `mne.viz.plot_compare_evokeds([evokedA,evokedB])`.
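
For instance (a sketch with the hypothetical names `evoked_target`/`evoked_distractor`; it assumes `epochs`, `target`, and `distractor` from above):

```python
import mne

# indexing epochs with a list of condition names selects all matching epochs
evoked_target = epochs[target].average()
evoked_distractor = epochs[distractor].average()

# overlay both ERPs in one figure
mne.viz.plot_compare_evokeds({"target": evoked_target, "distractor": evoked_distractor})
```
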

**Q** What is the unit/scale of the data now? Set it into context to the other two scales you reported (**Q**'s higher up).
**Question** What is the unit/scale of the data now? Put it into context with the other two scales you reported before.

2 changes: 1 addition & 1 deletion exercises/ex3_filter.qmd
@@ -89,7 +89,7 @@ import importlib
import ccs_eeg_utils

bids_root = "../local/bids"
subject_id = '002'
subject_id = '030'


bids_path = BIDSPath(subject=subject_id,task="P3",session="P3",
2 changes: 1 addition & 1 deletion exercises/ex4_cleaning.qmd
@@ -1,6 +1,6 @@
# Signal processing and analysis of human brain potentials (EEG) [Exercise 4]
## Cleaning Data
**T:** Download the `P3` dataset for Subject `30` (before 2020-11-20 12.53 this stated subject 9, sorry!) from the [ERPcore](https://osf.io/thsqg/).
We will re-use the dataset we used in the first exercise.

**T:** Go through the dataset using the MNE explorer and clean it. You can use `raw.plot()` for this. If you are working from a Jupyter notebook, try `%matplotlib qt` for better support of the cleaning window. To get an understanding of how the tool works, press `help` or type `?` in the window. (Hint: you first have to add a new annotation by pressing `a`.)
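
A minimal sketch of that workflow (assuming `raw` is loaded as in the first exercise):

```python
# %matplotlib qt   # in a Jupyter notebook, run this first for an interactive window

raw.plot(block=True)    # interactive browser; press 'a' to add annotations, '?' for help
print(raw.annotations)  # the segments you marked end up here
```
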

2 changes: 1 addition & 1 deletion exercises/ex5_ICA.qmd
@@ -38,7 +38,7 @@ you can call the infomax algorithm using `mne.preprocessing.infomax(x.T,verbose=
from mne_bids import (BIDSPath,read_raw_bids)
import mne_bids
bids_root = "../local/bids"
subject_id = '002'
subject_id = '030'


bids_path = BIDSPath(subject=subject_id,task="P3",session="P3",
4 changes: 2 additions & 2 deletions exercises/exercises.qmd
@@ -1,9 +1,9 @@
## General remarks on the exercises
These exercises are not intendend to be particularly difficult or very involved - this is by design. They are designed to help you think in different ways about the lecture material. The exercises are highly recommended but not mandatory. It is best if you generate a jupyter notebook for each exercise and upload it every week until Wednesday 12.00 (some exercises are over two weeks and specified as such in the course-overview)
These exercises are not intended to be brain teasers (they can be, nevertheless 🙈) - this is by design. They are designed to help you think in different ways about the lecture material. The exercises are highly recommended but not mandatory. It is best if you create a Jupyter notebook for each exercise and upload it before the discussion session.

You are encouraged to work in groups of 2.

Exercises are only graded pass/failed and if requested I will try to provide feedback. Again: Exercises are not mandatory to pass the course. The course-grade is solely based on the semesterproject.
Exercises are only graded pass/fail, and if requested via e-mail I will provide feedback. Again: exercises are not mandatory to pass the course. The course grade is based solely on the semester project.

## The exercises
|||