Phase-amplitude coupling
This tutorial introduces the concept of phase-amplitude coupling (PAC) and the metrics used in Brainstorm to estimate it. Those tools are illustrated on three types of data: simulated recordings, rat intra-cranial recordings and MEG signals.
Phase-amplitude coupling
Illustrated introduction and mathematical background.
Simulated recordings
Step-by-step instructions with as many screen captures as possible: generation and analysis of the signals.
Rat recordings
How to download the data.
Step-by-step instructions to analyze the recordings.
MEG recordings (CURRENTLY BEING UPDATED - UNFINISHED)
Step-by-step instructions to analyze the wMNE source signals for Phase Amplitude Coupling.
In order to do this part of the tutorial you will need to get the file sample_resting.zip from the Download page.
Preparation of the anatomy, basic pre-processing and source modeling will only be mentioned briefly; they are similar to the continuous recordings tutorials found here: Continuous Recordings Tutorial
Step 1: Pre-processing
The first steps include importing the anatomy and the functional data and projecting the sources. If you are unsure how to do this, the detailed steps can be found in the Continuous Recordings tutorial or within the '12 Easy steps for Brainstorm' tutorials, all of which are available from this page: Tutorials
Before doing the PAC analysis we need the pre-processed files to analyze. Start a new protocol (or at least a new subject), import the data and create links to the MEG data, do some basic pre-processing and project sources as described briefly here in order to get the same results.
Anatomy
- Import the freesurfer anatomy folder and define the fiducial points.
- The MRI coordinates should be (+/- a few millimeters):
- NAS: x=128, y=225, z=135
- LPA: x=54, y=115, z=107
- RPA: x=204, y=115, z=99
- AC: x=133, y=137, z=152
- PC: x=132, y=108, z=150
- IH: x=133, y=163, z=196 (anywhere on the midsagittal plane)
Functional data
- The sample_resting download contains two 10-minute resting state runs. We are going to use the first one, labelled 'subj002_spontaneous_20111102_02_AUX.ds'.
- Use 'Review raw file' to access this file through the Brainstorm interface. Click yes to refine the registration with the head points.
Pre-Processing
- All data should be pre-processed and checked for artifacts prior to doing analyses such as PAC (including marking bad segments, and correcting for artifacts such as eye blinks and heartbeats with SSPs).
- In the channel file for this data set the ECG channel is called 'EEG057' and the VEOG channel is called 'EEG058'.
- Use the detect eye blinks and detect heartbeat functions from the SSP option in the event tab with these channel names to detect and mark events for the heartbeats and eye blinks.
- Then use 'Compute SSP: Eyeblinks' and 'Compute SSP: Heartbeats' to project away these artifacts from the data. Make sure 'MEG' is written in the 'Sensor Types or Names' option box if it is not already. For consistency with this tutorial, use only the first component for each SSP.
- For the purposes of this tutorial, the data were not stringently checked and no bad sections were marked, but this is an important step for real analysis in which excessive noise can interfere with the PAC metrics.
For more information regarding dealing with artifacts and SSPs, view this tutorial: Artifact Tutorial
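The idea behind an SSP projection can be sketched in a few lines of linear algebra: the selected component defines a spatial direction (the artifact topography), and the projector removes that direction from every time sample. This is a minimal NumPy sketch, with a made-up random topography vector standing in for the real blink component; it is not Brainstorm's implementation.

```python
import numpy as np

# Hypothetical data: 10 channels x 1000 samples (values are random stand-ins).
rng = np.random.default_rng(0)
data = rng.standard_normal((10, 1000))

# u: the spatial topography of the artifact (in practice, the first principal
# component of the averaged artifact epochs). Here it is a made-up unit vector.
u = rng.standard_normal((10, 1))
u /= np.linalg.norm(u)

# SSP projector: identity minus the outer product of the artifact topography.
P = np.eye(10) - u @ u.T

clean = P @ data            # the artifact direction is projected out of every sample
residual = u.T @ clean      # projection of the cleaned data onto u: all zeros
```

Selecting more components simply stacks more columns into `u`; each one removes an additional spatial dimension from the data, which is why using only the necessary components (here, the first) is recommended.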
Importing
- Once the data is pre-processed and ready for further analysis, we will import it into Brainstorm, project the sources and run the PAC analysis.
- To import the data, right-click on the 'Link to raw file' and click on the first option, 'Import in database'.
PIC HERE
- You should get the option box shown above. We are going to leave all these options at their defaults. When we click 'Import', Brainstorm will create a new file containing the data with our SSPs applied and the DC offset removed.
- Note: importing this long recording will create a large new file (~3 GB) and may take a couple of minutes. Due to the way Brainstorm reads raw files vs. imported files, it is more computationally demanding to open the imported file (try opening the 'Link to raw file' vs. opening the imported file), which is why we did all the steps that require scrolling through the data with the link to the raw file.
Project Sources
- The imported file should have been saved as a new condition in the Brainstorm database tree. At this point we still have the sensor data and now want to project it into source space. To do this we will need a head model and a noise covariance matrix (as well as the imported anatomy).
- To compute the head model, right-click on the newly imported file (which should be labelled 'Raw(0.00s,600.00s)') and click on 'Compute head model'. Use the overlapping spheres model and keep all of the options at their default values.
In the original zip download folder there is an empty room recording from when this resting data was collected. Right-click on the subject in the database, click 'Review raw file' and select this file. Then right-click on the 'Link to raw file' of the noise recording, go to the 'Noise covariance' submenu and click on 'Compute from recordings'. An option box will pop up; keep all the default values. Click OK to create the noise covariance matrix. This new file should now be available in the tree. Right-click on the noise covariance file and click 'Copy to other conditions' to copy it to all the other conditions where we need it.
Further information as well as the importance and relevance of noise covariance is described here: Noise Covariance Tutorial
- You should now have a condition with imported data, a head model and noise covariance that looks like this:
PIC HERE
- Right click on the raw file again and click 'compute sources'. Use the Minimum norm estimate (wMNE) and keep all the default settings.
You should now have the first 10-minute run from the sample_resting download projected onto the individual anatomy, with one cardiac SSP and one blink SSP active. We are now ready to run the PAC analysis.
Step 2: Using the PAC Function
Once you have the sources projected onto the anatomy proceed with the following instructions to use the PAC function on the source data.
The Function
- The function for Phase Amplitude Coupling analysis is found in the frequency menu in the process selection menu.
- Drag and drop the source file into the file box in the Process1 tab. Click on Run, go to Frequency and click on 'Phase-amplitude coupling'.
Process Options
Once you click on 'Phase-amplitude coupling' you should get a pop-up box with the following options.
Time Window: The time segment of the input file to be analyzed for PAC.
Nesting Frequency Band (low): The frequency band of interest for the frequency for phase (the low, nesting frequencies).
- This can be a wide exploratory range (e.g. 2 - 30 Hz) or a much smaller, specific range (e.g. theta: 4 - 8 Hz)
Nested Frequency Band (high): The frequency band of interest for the frequencies for amplitude (the high, nested frequencies).
- This can be a wide exploratory range (e.g. 40 - 250 Hz) or a smaller, specific range (e.g. low gamma: 40 - 80 Hz)
- Note: The nested frequency can only be as high as your sampling rate allows: it must stay below the Nyquist frequency (half the sampling rate), and in practice somewhat lower than that.
Processing Options (These options should be left at default options unless you know how to use them)
Parallel processing toolbox: The PAC analysis of each time series (of a vertex or sensor) is independent of the PAC analysis of every other time series. The computation is done with a loop in which each iteration is independent of the previous one. The parallel processing toolbox uses a parfor loop so that multiple time series can be processed in parallel.
Use Mex files: Mex files are available for running this process and contribute to speeding up the computation time.
Number of signals to process at once: The file that is given to the function is processed in blocks, and this option signifies how many time series are in each block.
Output Options
Save average PAC across trials: If this box is selected and multiple files have been given to the process, then the process will save the average PAC across all trials in a new file.
Save the full PAC maps: If this option is selected, the full PAC comodulogram will be saved for each time series (i.e. the process saves the values for each and every frequency pairing of each and every time series). If this option is not selected, the process only saves the values at the maxPAC - the frequency pairing with the strongest coupling.
Saving the full PAC maps entails saving a matrix of [NumberOfTimeSeries x 1 x HighFreqs x LowFreqs]. This will very quickly create very large files (in source space it will save a matrix of roughly [15000 x 1 x 39 x 12]). Only select this option if you do want to use or look at the maps.
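As a rough back-of-the-envelope check, the matrix dimensions quoted above translate into file size as follows (the 8-byte double-precision storage is our assumption, not something stated by the tutorial):

```python
# Approximate size of a full PAC map in source space, using the dimensions
# [NumberOfTimeSeries x 1 x HighFreqs x LowFreqs] quoted in the text and
# assuming each value is stored as an 8-byte double.
n_sources, n_time, n_high, n_low = 15000, 1, 39, 12

n_values = n_sources * n_time * n_high * n_low   # total PAC values stored
size_mb = n_values * 8 / 1e6                     # megabytes at 8 bytes/value

print(n_values, round(size_mb, 1))
```

Around 7 million values per file adds up quickly if the process is repeated over many conditions or subjects, which is why the option is off by default.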
We will first test the process by computing the PAC for a single vertex. This will allow us to examine what the PAC process does and to visualize the result.
- In the PAC option box, if the options are not already filled out, fill in time window as the full length of the file (0 - 599.9996) and the frequency options as the wide bands of low: 2 - 30 and high: 40 - 150.
- We are going to arbitrarily compute this on source #65. (Vertices in the source data are labelled with numbers). Write '65' in the source indices option.
- Click on run.
- Computed PAC files are saved under the file containing the time series from which they were computed, under the name 'MaxPAC'. It should look like this in Brainstorm.
- Double-click on the 'MaxPAC' file to open the PAC map - the comodulogram. You should see a figure which looks like this.
- We have this map available because we selected 'Save full PAC maps', and can therefore now visualize all the frequency pairings and their PAC strengths for each time series.
- The small white circle indicates the frequency pairing with the strongest coupling, and it is the results for this pairing that are displayed at the top of the comodulogram.
- You can click on any frequency pairing and the corresponding values will be printed at the bottom of the comodulogram. This allows you to explore the values represented in the comodulogram. Here I clicked on PAC at around a 9 Hz nesting frequency and can check that the coupling strength is similar to the maxPAC value at 11.47 Hz.
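To make the comodulogram values less abstract, here is a minimal sketch of a classic PAC estimator, the mean vector length of Canolty et al. (2006): filter at the low frequency to get the phase, filter at the high frequency to get the amplitude envelope, and measure how strongly the envelope is locked to the phase. This is an illustration of the general idea on a simulated signal, not Brainstorm's exact maxPAC implementation; the function names and filter bandwidths are our own choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    """Zero-phase band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def mvl_pac(x, fs, f_phase, f_amp):
    """Mean vector length PAC (Canolty et al., 2006) for one frequency pair:
    the magnitude of the time-average of amplitude * exp(i * phase)."""
    phase = np.angle(hilbert(bandpass(x, fs, f_phase - 1, f_phase + 1)))
    amp = np.abs(hilbert(bandpass(x, fs, f_amp - 10, f_amp + 10)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Simulated signal: an 8 Hz rhythm whose phase modulates an 80 Hz amplitude.
fs = 1000
t = np.arange(0, 20, 1 / fs)
low = np.sin(2 * np.pi * 8 * t)
x = low + 0.5 * (1 + low) * np.sin(2 * np.pi * 80 * t) \
    + 0.1 * np.random.default_rng(0).standard_normal(t.size)

coupled = mvl_pac(x, fs, 8, 80)      # large: 80 Hz amplitude is locked to 8 Hz phase
uncoupled = mvl_pac(x, fs, 8, 120)   # small: nothing at 120 Hz follows the 8 Hz phase
```

A comodulogram is simply this quantity evaluated over a grid of (f_phase, f_amp) pairs, and maxPAC is the pair where it peaks.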
Step 3: Verifying with Canolty Maps
Canolty maps are a type of time-frequency decomposition that offers another way to visualize the data, and serves as a complementary tool to visualize and assess phase-amplitude coupling.
DESCRIPTION OF THE PROCESS
The Function
The Canolty maps function is also found in the Frequency menu of the process selection.
There are two ways to use Canolty maps: you can manually input a low frequency of interest, or you can give it the maxPAC file and it will take the low frequency at the maxPAC value.
- Process 1 tab - Drop a file of time series into the process one tab and manually select the low-frequency of interest.
- Process 2 tab - Drop a file of time series into FileA and the maxPAC file (computed from the file in A) into FileB. This process will make the Canolty maps by finding (for each vertex) the low frequency defined in the maxPAC file and using it to create the Canolty map.
We will continue with the Process2 version to complement our maxPAC results.
Click on the Process2 tab. In the FileA box drop the original time series (the source data file). In the FileB box drop the maxPAC file that we just created for that source.
(IMAGE HERE)
When you click on the Canolty maps (Process2) function you should get an options box like this.
Process Options
Time Window: the time segment from the input file to be used to compute the Canolty map.
Epoch Time: The length of the epochs used.
Source Indices: which time series from the given file to compute the Canolty Maps for.
Number of signals to process at once: This process is also done in blocks and this option allows for setting the block size (can be left at the default value).
Save averaged low frequency signals: In order to create the Canolty maps, Brainstorm filters the input time series at the low frequency of interest. This option saves that filtered signal, which can be useful for visualization.
The only difference in the Process1 version of Canolty Maps is the additional required field of Nesting Frequency. In this case you can enter in any low frequency of interest with which to compute the Canolty Map(s).
- First we will run the Canolty map for the single-vertex PAC we computed on vertex 224.
- To do this using the Process2 version with the maxPAC file, click on the Process2 button at the bottom of the window and drop the source data into FileA and the maxPAC file for vertex 224 into FileB.
- When in the Process2 format, clicking on run will only show the processes available. Canolty maps is still under the frequency tab, but the drop down menu will look a bit different.
- Click Run and the option box for the Canolty process will pop up. When the given maxPAC file does not include PAC values for all the time series in FileA (as in this case, where the maxPAC file only contains the PAC values for one of the vertices in FileA), Brainstorm cannot automatically determine which PAC information it has, so this information has to be given.
- In the 'Source indices' option write '224' so that the process uses the maxPAC for this vertex to compute the Canolty map.
- Canolty maps are saved under the time series file from which they were computed - similar to the maxPAC files. If the low-frequency signal was saved, the link to this file is saved below it. Your Brainstorm database should look something like this.
- Double-click on the 'Canolty map' file to open it. You should see the following image. At the top of the image the vertex number and low frequency are written.
- Here we can see that the Canolty map corroborates what was represented in the maxPAC file. With the data plotted as a function of the phase of the low frequency (here 11.47 Hz), we can see that the amplitude of the gamma (indicated by the colour) varies systematically with that phase.
- We can also verify the frequencies of the nested frequencies. As in our PAC comodulogram, we can see in the Canolty map that the nested frequency is predominantly around 80 Hz, but that weaker PAC occurs at other frequencies as well, such as around 120 Hz.
- The Canolty map itself contains no representation of the low frequency used. It can be useful to visualize the low frequency, especially if the Canolty maps look a bit 'messier'. The low frequency is accessible through the other link in the Brainstorm database. Double-click on the file named 'Canolty ERP'. This will open the time series filtered at the low frequency of interest that was used to compute the Canolty map.
- TEXT
- By arranging the Canolty map and the low frequency file side by side you can get a sense of how the low-frequency oscillation and the high-frequency amplitude relate to each other. The time selection in the two figures is synchronized, so try clicking at particular parts of the low frequency file and examining the power in the Canolty map. You should notice that the gamma amplitude is low at the peak and trough of the oscillation and high between them.
- There are other, more quantitative ways of verifying the phase of the coupling (such as using the value extracted by the maxPAC function). This is for visualization purposes, as a sanity check and converging evidence of the coupling.
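The construction behind a Canolty-style map can be sketched as: filter at the low frequency, find its troughs, epoch the high-frequency amplitude envelope around each trough, and average. The following is a minimal sketch of that idea on a simulated signal; the function name, filter settings and trough-locking convention are our own illustrative choices, not Brainstorm's actual implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def canolty_map_sketch(x, fs, f_low, win=0.5):
    """Average the high-frequency amplitude envelope time-locked to troughs
    of the low-frequency oscillation (the idea behind a Canolty map)."""
    nyq = fs / 2

    # Band-pass around the low frequency of interest and find its troughs.
    b, a = butter(4, [(f_low - 1) / nyq, (f_low + 1) / nyq], btype="band")
    slow = filtfilt(b, a, x)
    troughs = np.where((slow[1:-1] < slow[:-2]) & (slow[1:-1] < slow[2:]))[0] + 1

    # High-pass the signal and take its amplitude envelope.
    b2, a2 = butter(4, 40 / nyq, btype="high")
    amp = np.abs(hilbert(filtfilt(b2, a2, x)))

    # Epoch the envelope around each trough and average across epochs.
    half = int(win * fs)
    epochs = [amp[i - half:i + half] for i in troughs if half <= i < x.size - half]
    return np.mean(epochs, axis=0)
```

For a signal whose gamma bursts ride the peaks of the slow rhythm, this averaged envelope dips at the center (the trough) and rises half a cycle away, which is the banding pattern you see in the Canolty map figures.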
- You may remember that in the PAC comodulogram for vertex 224 the maxPAC value was at 11.47 Hz, but that there were also other areas of high PAC, including an almost equally strong coupling at 8.3 Hz.
- Canolty maps only portray information relevant to the low frequency used to create the map - therefore we cannot draw any conclusions about PAC at lowFreq = 8.3 Hz from the Canolty map we made with lowFreq = 11.47 Hz.
- We will now examine the PAC at lowFreq = 8.3 Hz with a new Canolty map, using the Process1 version.
- Since 8.3 Hz is not the low frequency of the maxPAC pairing, we cannot examine it by giving the maxPAC file; we must manually specify it as the low frequency of interest.
- Go back to Process1 and drop the source time series.
- Press run, go to frequency and click on the Canolty maps process
- We need to specify the vertex of interest (224) and the low frequency of interest (8.3 Hz). Fill out the option box as follows.
- Press run
- Open the resultant file and you should see the following Canolty map
- Again, we can see that when the data is filtered based on the low frequency of 8.3 Hz, the high-frequency amplitude rises and falls in a consistent manner, supporting that there is indeed PAC.
- We can again visualize the relation by using the saved low-frequency filtered signal.