Connectivity computation approach

Hey @Marc.Lalancette, this is the new thread that @Raymundo.Cassani requested about the connectivity computation.

Below is the code I used to run lagged coherence on unconstrained sources with PCA. Please let me know if you'd like more info. Thanks again for the help.

%process CCC with the intended settings
sFiles = bst_process('CallProcess', 'process_cohere1n', sFiles, [], ...
    'timewindow',    [a, b], ...
    'scouts',        {'scouts', {'Premotor_L', 'Premotor_R', 'S1_L', 'S1_R', 'M1_L', 'M1_R', 'SMA_L', 'SMA_R'}}, ...
    'flatten',       0, ...
    'scouttime',     'after', ...  % After connectivity metric
    'scoutfunc',     'mean', ...  % Mean
    'scoutfuncaft',  'mean', ...  % Mean
    'pcaedit',       struct(...
        'Method',         'pcaa', ...
        'Baseline',       [-2, 0], ...
        'DataTimeWindow', [-5, 5], ...
        'RemoveDcOffset', 'file'), ...
    'removeevoked',  0, ...
    'cohmeasure',    'lcohere2019', ...  % Lagged coherence / Corrected imaginary coherence:  LC = |imag(C)|/sqrt(1-real(C)^2)
    'tfmeasure',     'morlet', ...  % Morlet wavelets
    'tfedit',        struct(...
        'Comment',         'Complex,1-60Hz', ...
        'TimeBands',       [], ...
        'Freqs',           8:50, ...  % 8 to 50 Hz in 1 Hz steps (equivalent to the explicit list)
        'MorletFc',        1, ...
        'MorletFwhmTc',    3, ...
        'ClusterFuncTime', 'none', ...
        'Measure',         'none', ...
        'Output',          'all', ...
        'RemoveEvoked',    0, ...
        'SaveKernel',      0), ...
    'timeres',       'none', ...  % Full (requires epochs)
    'avgwinlength',  1, ...
    'avgwinoverlap', 50, ...
    'outputmode',    'input');  % separately for each file

Some text from the original thread, to restore context for the question:

This script does not use PCA at any point:

  1. The parameter flatten is set to 0, so the unconstrained sources are not flattened.
  2. The parameter scouttime is set to 'after', which means the scout function is applied after the connectivity metric is computed.
  3. The parameter scoutfuncaft is set to 'mean'; this is the scout function applied after the connectivity metric.

What was your goal in using PCA? Flattening? Scout aggregation?

The required RAM increases quickly when combining the scout function applied after, unconstrained sources, multiple frequencies, and metrics (such as coherence) that require storing complex numbers for their computation. With the provided parameters, the cross-spectrum alone, G_xy*(f), which needs to be kept in memory, has size:

[nSources, nSources, nFreqs] of complex doubles (16 bytes per number), so the total size in bytes is:
(6513 * 6513 * 43) * (16 bytes) ≈ 29 GB
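
This arithmetic can be checked with a short MATLAB snippet; the flattened count below assumes flattening collapses the 3 orientations per vertex into 1 (6513 / 3 = 2171):

```matlab
nSources        = 6513;  % unconstrained sources (3 orientations per vertex)
nFreqs          = 43;    % frequency bins, 8-50 Hz
bytesPerComplex = 16;    % complex double: 8 bytes real + 8 bytes imaginary

% Cross-spectrum G_xy(f): [nSources, nSources, nFreqs] complex doubles
gxyBytes = nSources^2 * nFreqs * bytesPerComplex;
fprintf('Unconstrained: %.1f GB\n', gxyBytes / 1e9);   % ~29.2 GB

% After flattening (one source per vertex): 6513/3 = 2171 sources
gxyFlatBytes = (nSources/3)^2 * nFreqs * bytesPerComplex;
fprintf('Flattened:     %.1f GB\n', gxyFlatBytes / 1e9);   % ~3.2 GB
```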

Raymundo,
Oh man! I missed that. Thank you for pointing that out; that was my issue. I updated the script so flatten = 1, and it's running better now (2171x2171 instead of 6500x6500). My goal with PCA is flattening.

Per subject, we have 4 'runs' of 30 events x 3 sessions. As the script runs now, flattening occurs for each run of 30 events separately (each run of 30 has its own folder: session 1 / run 1, session 1 / run 2, etc.).
Marc says in the tutorial here (https://neuroimage.usc.edu/brainstorm/Tutorials/PCA#PCA_options) that everything for a participant needs to be PCA-flattened at one time.
First, is flattening each run of 30 events on its own okay? It seems not to agree with Marc's recommendation, which I read as meaning that all data of one participant must be flattened at the same time. Is that correct?

If the above is correct, how would I flatten sources across folders in a script? Within the GUI, it seems I can load all the files linked to sources into the Process window at the bottom, click the Results tab, then Run → Sources → Unconstrained to flat map.
If that is correct, I can just generate an .m script from it. Is this the right approach?

Adam

Hi Adam,

Sorry for the delayed reply. Yes, you understood the tutorial correctly: it is recommended to run PCA on all the data for one subject at once. You could therefore run the flattening process after selecting all the subject's runs, and even sessions (if there are multiple sessions). But it is not required to run this separately from the connectivity process: since you selected the output mode 'input' (separately for each file), the connectivity would be saved separately for each epoch, but the PCA computed on everything selected at once. @Raymundo.Cassani please confirm if I remember this correctly.
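
As a rough sketch of what that could look like in a script (the process and option names below are assumptions, not verified; the safest route is to set everything up in the GUI, as you suggested, and use "Generate .m script" to get the exact call):

```matlab
% Hypothetical sketch: select all source files of one subject across
% runs/sessions, then flatten them together. Process and option names
% are assumptions -- generate the exact call from the pipeline editor.
sFiles = bst_process('CallProcess', 'process_select_files_results', [], [], ...
    'subjectname', 'Subject01', ...   % your subject name
    'condition',   '');               % empty: all conditions (all runs)

% Flatten unconstrained sources with PCA, all selected files at once
sFlat = bst_process('CallProcess', 'process_unconstrained', sFiles, [], ...
    'method', 'pca');                 % assumed option: PCA flattening
```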

That is correct if and only if, in the PCA options, the option 'across all epochs/files' is selected.

Check this tutorial page on flattening with PCA, where all the options in the panel are explained:
https://neuroimage.usc.edu/brainstorm/Tutorials/PCA
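
For scripting, my understanding (an assumption worth verifying against that tutorial page) is that this panel choice maps to the Method field of the pcaedit struct used in the script at the top of this thread:

```matlab
% Assumed mapping of the PCA panel options to the script fields:
pcaOptions = struct( ...
    'Method',         'pcaa', ...    % assumed: PCA across all epochs/files
    'Baseline',       [-2, 0], ...   % baseline window (s), as in the script above
    'DataTimeWindow', [-5, 5], ...
    'RemoveDcOffset', 'file');
```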