= MEG visual tutorial: Group analysis =
''Authors: Francois Tadel, Elizabeth Bock.''

The aim of this tutorial is to reproduce in the Brainstorm environment the analysis described in the SPM tutorial "[[ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/Publications/SPM12_manual_chapter.pdf|Multimodal, Multisubject data fusion]]". The data processed here consists of simultaneous MEG/EEG recordings of 19 subjects performing a simple visual task on a large number of famous, unfamiliar and scrambled faces. This tutorial follows another page that explains how to process [[Tutorials/VisualSingle|one single subject]] in detail.

<<TableOfContents>>

== License ==
This dataset was obtained from the OpenfMRI project (http://www.openfmri.org), accession #ds117. It is made available under the Creative Commons Attribution 4.0 International Public License. Please cite the following reference if you use these data: <<BR>>
Wakeman DG, Henson RN, [[http://www.nature.com/articles/sdata20151|A multi-subject, multi-modal human neuroimaging dataset]], Scientific Data (2015)

For any questions, please contact: rik.henson@mrc-cbu.cam.ac.uk

== Download and installation ==
First, make sure you have enough space on your hard drive, at least '''100GB''':

 * The Brainstorm database with all the data imported, downloaded from this website: '''50GB'''
 * The database after processing: '''100GB'''

You can follow this tutorial after processing the recordings for the 19 subjects (6 runs per subject) as illustrated in the [[Tutorials/VisualSingle|single subject tutorial]]. Otherwise, we provide a Brainstorm protocol that includes all the imported data, downsampled to 275Hz:

 * Go to the [[http://neuroimage.usc.edu/bst/download.php|Download]] page, download the file '''TutorialGroup.zip'''.
 * Unzip this file in your Brainstorm database folder (brainstorm_db).
 * In Brainstorm, menu File > Load protocol > Load from folder > Select '''brainstorm_db/TutorialGroup''' (or select the protocol from a script, as sketched after this list).

The database you need in order to follow this tutorial should contain the following:

 * The individual anatomy imported from FreeSurfer or BrainSuite for each subject (19 subjects).
 * The sensor level averages (MEG+EEG) for each run (downsampled to 275Hz or not).
 * The forward model, noise covariance and inverse models for each subject and each run. <<BR>><<BR>> {{attachment:db_tutorialgroup.gif||height="346",width="422"}}
 * This protocol TutorialGroup is produced from the single subject protocol TutorialVisual with the script: '''brainstorm3/toolbox/script/tutorial_visual_copy.m'''
 * If you are not using the package TutorialGroup.zip, make sure you exclude the bad subjects from the analysis: sub001, sub005, sub016. You would also need to add some selection processes to select only the averages (and not the continuous recordings or the trials).
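For script-oriented users, here is a minimal sketch of how to make this protocol the active one from the MATLAB command line. It assumes brainstorm3 is in your MATLAB path and that the protocol was already loaded once in your database:

{{{
% Minimal sketch: select the TutorialGroup protocol from a script.
% Assumes brainstorm3 is in the MATLAB path and the protocol is in brainstorm_db.
if ~brainstorm('status')
    brainstorm nogui                                 % start Brainstorm without the GUI
end
iProtocol = bst_get('Protocol', 'TutorialGroup');    % look up the protocol by name
if isempty(iProtocol)
    error('Load the protocol once with: File > Load protocol > Load from folder');
end
gui_brainstorm('SetCurrentProtocol', iProtocol);     % make it the current protocol
}}}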
== Overview of the analysis ==
=== Coregistration of the acquisition runs ===
For each subject, all the runs have been registered to a common head position with MaxFilter. To verify this, select all the channel files within one subject, right-click > Display sensors > MEG (all). The surfaces representing the MEG helmet are perfectly overlapping for all the runs. When the runs are not aligned, it looks like [[http://neuroimage.usc.edu/brainstorm/Tutorials/ChannelFile#Multiple_runs_and_head_positions|this]].

 . {{attachment:run_coreg.gif||height="163",width="486"}}

This means that we can safely average or compare the MEG sensor values across runs within one subject. However, it is not reliable to average MEG recordings across subjects, because of the [[http://neuroimage.usc.edu/brainstorm/Tutorials/Workflows#MEG_recordings|anatomical differences between subjects]].

This doesn't mean that we can estimate the sources only once per subject. We have computed different SSP projectors and different bad channels for each acquisition run. To be able to use this information efficiently, we should estimate the sources for the trial averages of each run separately, then average the sources across runs. The forward model is the same for all the runs within one subject, therefore it can be computed for the first run and copied to all the other runs.

=== Objectives ===
The objectives for this tutorial are to reproduce the analysis presented in the following documents:

 * Wakeman DG, Henson RN, [[http://www.nature.com/articles/sdata20151|A multi-subject, multi-modal human neuroimaging dataset]], Scientific Data (2015)
 * Wakeman DG, Henson RN, [[ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/Publications/WakemanHenson_biomag2010_poster.pdf|Functional and structural connectivity in face processing: MEG, EEG, fMRI, MRI and DWI data]], Biomag 2010
 * SPM12 manual, chapter 41: [[ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/Publications/SPM12_manual_chapter.pdf|Multimodal, Multisubject data fusion]]

The methodology that we will follow for computing the averages and the other statistics is described in the tutorial "[[Tutorials/Workflows|Workflows]]".

== Subject averages ==
We will start by computing the subject-level averages for all the data types we have: sensor-level recordings, source maps and time-frequency results. We will use a weighted average to group the results we have for each run (weighted by the number of good trials), and we will compute the averages for each experimental condition separately (famous, unfamiliar, scrambled).

Since the files are already selected in the interface, we will also compute a grand average across subjects. In this context, we consider that each subject has the same weight in the group average (option "Weighted" not selected). Note that it is not accurate to average MEG recordings across subjects, but we can do it just to get a general idea ([[http://neuroimage.usc.edu/brainstorm/Tutorials/Workflows#MEG_recordings|more information]]).
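To make the distinction concrete, here is a toy MATLAB illustration of the two options. This is plain MATLAB, not the Brainstorm API, and the trial counts are made up:

{{{
% Toy illustration of weighted vs unweighted averaging (not the Brainstorm API).
x = rand(3, 100);             % three hypothetical run averages [nRuns x nTime]
n = [38; 40; 35];             % hypothetical number of good trials in each run
wavg = (n' * x) ./ sum(n);    % weighted average: sum_i(n_i * x_i) / sum_i(n_i)
uavg = mean(x, 1);            % unweighted average: each run counts equally
}}}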
=== Sensor averages ===
 * Drag and drop all the subjects in Process1. Select button '''[Process recordings]'''.
 * Select process '''Average > Average files''': '''By trial group (subject average)''', <<BR>>Arithmetic average, '''Weighted'''.
 * Add process '''Average > Average files''': '''By trial group (grand average)''', <<BR>>Arithmetic average, '''__Not__ weighted'''.
 . {{attachment:average_process_data.gif||height="500",width="492"}}

 * In output, you get three group averages in the "Group analysis/Inter-subject" folder (top), and three subject averages per subject in the "Intra-subject" folders (bottom). A script equivalent of these two averaging steps is sketched below.

 . {{attachment:average_files_data.gif||height="273",width="653"}}
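For reference, a sketch of the same two averaging steps as script calls, in the spirit of tutorial_visual_group.m. The numeric option values are assumptions based on the options of the "Average files" process; verify them by exporting your own pipeline as a script (Process1 > pipeline menu > Generate .m script):

{{{
% Sketch: the two averaging steps as script calls (option values are
% assumptions; verify by exporting your own pipeline as a .m script).
% sFiles: the run-level sensor averages selected in Process1.
sAvgSubj = bst_process('CallProcess', 'process_average', sFiles, [], ...
    'avgtype',  6, ...    % By trial group (subject average)
    'avg_func', 1, ...    % Arithmetic average: mean(x)
    'weighted', 1);       % weighted by the number of good trials
sAvgGrand = bst_process('CallProcess', 'process_average', sFiles, [], ...
    'avgtype',  7, ...    % By trial group (grand average)
    'avg_func', 1, ...    % Arithmetic average: mean(x)
    'weighted', 0);       % not weighted: each subject counts equally
}}}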
=== Source averages ===
 * Keep all the subjects selected in Process1. Select button '''[Process sources]'''.
 * Run process '''Average > Average files''': '''By trial group (subject average)''', <<BR>>Arithmetic average, '''Weighted'''. <<BR>><<BR>> {{attachment:average_process_sources.gif}}
 * This time, we cannot compute a group average directly because the source spaces used to estimate the brain activity do not match between subjects. We will have to go through an additional step of projection on a template anatomy (see the sketch after this list).
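As a preview of that projection step, here is a hedged script sketch using the "Sources > Project on default anatomy" process from the Brainstorm distribution; the option value is an assumption, and sAvgSrc stands for your subject-level source averages:

{{{
% Sketch: project the subject-level source averages on the template anatomy.
% sAvgSrc: the subject source averages (hypothetical variable);
% the 'headmodeltype' value is an assumption for cortical surface maps.
sAvgProj = bst_process('CallProcess', 'process_project_sources', sAvgSrc, [], ...
    'headmodeltype', 'surface');    % project surface-based source maps
}}}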
=== Time-frequency averages ===
 * Keep all the subjects selected in Process1. Select button '''[Process time-freq]'''.
 * Run process '''Average > Average files''': '''By trial group (subject average)''', <<BR>>Arithmetic average, '''Weighted'''.

=== Results ===
 . (Screenshots of the subject-level averages for each of the 19 subjects: sub001 to sub019.)

== Group averages ==

== Scripting ==
The following script from the Brainstorm distribution reproduces the analysis presented in this tutorial page: '''brainstorm3/toolbox/script/tutorial_visual_group.m'''
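If you write your own version of this analysis, the Brainstorm tutorial scripts all follow the same report pattern, sketched here with standard Brainstorm calls; the processing steps themselves are the bst_process calls shown in the sections above:

{{{
% Common skeleton of the Brainstorm tutorial scripts (sketch).
bst_report('Start', []);                % start logging all the process calls
% ... bst_process('CallProcess', ...) steps, as sketched above ...
ReportFile = bst_report('Save', []);    % save the execution report
bst_report('Open', ReportFile);         % review what was executed
}}}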