Music and prediction study from Mantegna and Ostarek
https://github.com/palday/arpeggio

Analysis for decoding predicted tones with Francesco and Markus

You will find separate files for each of the three parts of the experiment (i.e. music, language, decoding).

- a description of your trigger codes (see the sketch after the list below)

Music & Language & Decoding:
- 1 to 6: button press

Music & Language:
- 7, 8, 9: conditions (congruent, intermediate, mismatch)
- 11: first sentence/sequence onset
- 18: prime onset
- 21: second sentence/sequence onset
- 27: target onset

Music:
- 31 to 54: trial identifier

Language:
- 31 to 165: trial identifier

Decoding:
- 101 to 106: note identifier

Practice:
- above 200 (we place these triggers only as a sanity check that everything is working as it should)
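
To make the codes above concrete, here is a minimal sketch of how they might be mapped onto events in MNE-Python. The file name, the BrainVision format, and the epoching parameters are assumptions for illustration, not taken from this repository.

```python
# Minimal sketch (assumptions marked below): map the trigger codes above onto
# MNE-Python event IDs for one recording.
import mne

# Condition and onset triggers shared by the music and language parts
event_id = {
    "condition/congruent": 7,
    "condition/intermediate": 8,
    "condition/mismatch": 9,
    "onset/first_sequence": 11,
    "onset/prime": 18,
    "onset/second_sequence": 21,
    "onset/target": 27,
}

# "sub-01_music.vhdr" and the BrainVision format are assumptions, not the
# repository's actual file layout.
raw = mne.io.read_raw_brainvision("sub-01_music.vhdr", preload=True)
events, _ = mne.events_from_annotations(raw)

# Practice trials use codes above 200; they are simply not listed in event_id,
# so they drop out of the epoching below.
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=1.0, baseline=(None, 0), preload=True)
```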

- what time windows you care about in the EEG (or what components you care about, e.g. MMN, N200, N400, etc.)

After the onset of the target (i.e. trigger 27), we would probably like to look at the P300, but our main focus will be on the N400 (according to the literature, this component may be shifted by 100 or 200 ms in the music part).
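
As a rough illustration of the windows of interest, the sketch below continues from the epoching sketch above: it averages the target-locked epochs and extracts the mean amplitude in a canonical N400 window. The exact boundaries (and the assumed 100 to 200 ms shift for music) are placeholders to be refined against the literature.

```python
# Continues from the epoching sketch above; window boundaries are assumptions.
import numpy as np

n400_window = (0.300, 0.500)        # typical N400 window for language
n400_window_music = (0.400, 0.700)  # assumed ~100-200 ms later for the music part

evoked = epochs["onset/target"].average()
mask = (evoked.times >= n400_window[0]) & (evoked.times <= n400_window[1])
mean_amp = evoked.data[:, mask].mean(axis=1)  # mean amplitude per channel, in volts
```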

We want to do an oscillatory analysis (comparing the first sentence/sequence against the second sentence/sequence) and temporal decoding for the music part as well.
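
A hedged sketch of both steps with MNE-Python follows, continuing from the epochs above; the frequency range, classifier, labels, and cross-validation scheme are placeholders rather than the study's actual choices.

```python
# Oscillatory analysis: time-frequency power for the first vs. second
# sentence/sequence (frequency range is a placeholder).
import numpy as np
from mne.time_frequency import tfr_morlet

freqs = np.arange(4.0, 31.0, 1.0)
power_first = tfr_morlet(epochs["onset/first_sequence"], freqs=freqs,
                         n_cycles=freqs / 2.0, return_itc=False)
power_second = tfr_morlet(epochs["onset/second_sequence"], freqs=freqs,
                          n_cycles=freqs / 2.0, return_itc=False)

# Temporal decoding for the music part: train a classifier at every time point.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from mne.decoding import SlidingEstimator, cross_val_multiscore

X = epochs.get_data()    # trials x channels x times
y = epochs.events[:, 2]  # placeholder labels; in the decoding part the real
                         # labels would come from the note identifiers (101 to 106)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
time_decoder = SlidingEstimator(clf, scoring="accuracy", n_jobs=1)
scores = cross_val_multiscore(time_decoder, X, y, cv=5)  # folds x time points
```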

- which caps you used (the equidistant caps or the 10-20 caps)

MPI equidistant montage with 64 electrodes.
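
For reference, a sketch of attaching an equidistant montage in MNE-Python; whether the built-in "easycap-M10" positions actually match the MPI 64-channel equidistant cap is an assumption that should be checked against the lab's own electrode position file.

```python
# Assumption: the MPI equidistant cap corresponds to MNE's built-in
# "easycap-M10" layout; verify against the lab's electrode positions.
import mne

montage = mne.channels.make_standard_montage("easycap-M10")
raw.set_montage(montage, on_missing="warn")  # channel names must match the montage
```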

Some notes about participants:
- Subject 2: I accidentally swapped the two electrode sets for the music part, so we have to rename the electrodes to make them consistent with the other recordings (see the sketch below).
- Subject 6: we will probably reject this participant; she could not sit still, so the data are very noisy.
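
For subject 2, the fix would be a channel renaming along these lines. The mapping shown is a placeholder (swapping the first and second 32-channel sets); the actual sets that were swapped are not recorded in this README and must come from the recording notes.

```python
# Placeholder sketch for subject 2's music recording: if the two 32-channel
# electrode sets were plugged into the wrong inputs, renaming swaps the halves.
# The real mapping must come from the recording notes, not from this sketch.
names = raw.ch_names  # `raw` as in the sketch above, here subject 2's music file
swap_map = dict(zip(names, names[32:] + names[:32]))
raw.rename_channels(swap_map)
```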