= Tutorial 9: Processes =
This tutorial uses Sabine Meunier's somatotopy experiment, called ''TutorialCTF'' in your Brainstorm database. The ''Processes'' tab in the Brainstorm window is a very generic tool that can be used to apply a function to, or extract data from, any set of data files (recordings or sources). Its main applications are: pre-processing, averaging, and data extraction.

== Files to process ==
First, you have to select the files you want to process. You can drag'n'drop any node that contains recordings or sources from the database explorer to the ''Processes'' list.

 * Click on the sources file: Subject01 / Right / ERF / MN:MEG(Kernel). Drag'n'drop it to the empty list in the ''Processes'' tab.
 * Click on the condition: Subject01 / Left. Add it to the list.
 * Click alternatively on the ''Sources'' and ''Recordings'' buttons in the ''Processes'' tab. They indicate whether you are going to process sources or recordings files. Observe that the number of files (the numbers indicated in brackets) changes according to the modality selected.
{{attachment:panelProcesses.gif}}

 * Note that you can put almost anything in this list: recordings, sources, conditions, subjects, even the whole protocol. Brainstorm will recursively take all the recordings or sources files that are contained in each node.

== Process selection ==
 * Select ''Sources''. Click on ''Run''. This big window appears:
{{attachment:panelStatRun.gif}}

 * '''Processes selection''': list of processes available for the selected data.
  * __Comment__: String that will be displayed in the database explorer. A " | " prefix means that the following string (e.g. "zscore") will be appended to the original file comment.
  * __Process files__: One input file => one output file
   * '''Z-score noise normalization''': For each input file, subtract the baseline average and divide by the baseline standard deviation (a minimal sketch of this computation is shown after this list).
   * '''Remove baseline mean (DC offset)''': For each input file, subtract the average of the baseline.
   * '''Bandpass filtering''': High-pass, low-pass, and band-pass filtering using the Signal Processing Toolbox.
   * '''Spatial smoothing''': Smooth the source activations on the cortex surface. Useful before comparing different runs or different subjects, to correct possible small localization errors.
   * '''Opposite values''': Just replace the input values with their opposite.
  * __Averaging__: Many input files => a few averaged output files
   * '''Average by condition''': One output for each condition (i.e. grand average)
   * '''Average by subject''': One output for each subject
   * '''Average everything''': One output file only
  * __Extract data__: Get a structured subset of the input data (sources, scouts, clusters, ...)
   * '''Average over a time window''': Select a block of data in each input file, and average over the time dimension.
   * '''Variance over a time window''': Same, but get the variance instead of the mean.
   * '''Extract data block''': Select a block of data in each input file.
 * '''Time windows''':
  * __Time window__: Define the time instants that will be processed.
  * __Baseline__: Some processes require the definition of a baseline / pre-stimulus time window for all the input files.
 * '''Sources''':
  * Use absolute values of source activations: if checked, the process will be applied to the absolute values of the input files, instead of the initial values.<<BR>>This is usually indispensable when processing source amplitudes, as the relative values given by the minimum norm estimation are not really meaningful.
 * '''Scouts''':
  * When scouts are available in the "Scout" tab of the Brainstorm interface, they also appear here.
  * Check "Use only selected scouts" if you want to compute only the scouts time series instead of extracting all the sources time series.
  * Check "Merge selected scouts" if you want the selected scouts to be concatenated and considered as only one big scout.
 * '''Output''':
  * __Brainstorm database__: The results of the processes are stored in new files that will appear in the Brainstorm database explorer.<<BR>>This option is not always available, because some processes produce outputs that are not compatible with the Brainstorm database structure. Example: if you extract the average of a scout over a certain time window, this cannot be stored in the database.
   * '''Overwrite initial files''': Check this option if you are applying pre-processing functions to recordings and you don't want to keep the original files.
   * '''Select output condition''': A meaningful default path (Subject/Condition) to store the output files is usually selected by the program, according to the process and the files processed.<<BR>>Check this option if you want to override Brainstorm's default and enter your own output path.
  * __User defined file__: Store all the output information in one single file, which has to be located ''__outside__'' of the Brainstorm database folder.
   * Possible output formats: Matlab .mat, or ASCII matrices.
   * This option is usually useful if you are extracting blocks of information that you want to process later with another software (R, Excel, Statistica), or with your own Matlab scripts.
   * For example: to extract 5 time windows x 3 scouts x 10 conditions x 60 subjects, you would only have to run the process 5 times and create 5 different files.
  * __Export to a Matlab variable__: Produces exactly the same output as the previous option, but exports it directly into the current Matlab workspace instead of saving it to a file.
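If you want to understand what the simplest "Process files" entries compute, here is a minimal Matlab sketch of the ''Z-score noise normalization'' and ''Remove baseline mean (DC offset)'' operations applied to a hypothetical [channels x time] matrix. It is an illustration written for this tutorial, not Brainstorm's actual code, and the variable names (F, iBaseline) are made up.

{{{
% Minimal sketch (not Brainstorm code): F is a hypothetical [nChannels x nTime]
% matrix of recordings or source amplitudes, iBaseline the time samples that
% fall inside the baseline window.
F         = randn(151, 375);     % fake data, for illustration only
iBaseline = 1:100;               % fake baseline samples, for illustration only

% Remove baseline mean (DC offset): subtract the baseline average of each row
meanBase = mean(F(:, iBaseline), 2);
F_dc     = F - repmat(meanBase, 1, size(F,2));

% Z-score noise normalization: also divide by the baseline standard deviation
stdBase  = std(F(:, iBaseline), 0, 2);
F_zscore = (F - repmat(meanBase, 1, size(F,2))) ./ repmat(stdBase, 1, size(F,2));
}}}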
== Examples ==
=== Z-score noise normalization ===
 * Keep the file selection done at the beginning of this tutorial. Select the "Z-score noise normalization" process.
 * Set the baseline to [-49.6, -12]ms (after -12ms, the values are linear interpolations due to the electric stimulation artifact removal).
 * Leave the other options at their default values. Click on Run. Wait until the progress bar disappears.
 * Have a look at the database explorer: two new files appeared, one for each input file.

{{attachment:ex1_db.gif}}

 * For Right: open the ERF time series, the original sources, and the zscore.

{{attachment:ex1_ts.gif}} {{attachment:ex1_sources.gif}} {{attachment:ex1_zscore.gif}}

 * Notice that the two cortical views are displayed with different colorbars.
  * The initial minimum norm result is shown in pA.m (amplitude range around 10e-12), with the "sources" colormap.
  * The z-score values are shown without any unit, with the "stat" colormap. At a given time, a value of 20 for a given source means that its amplitude is 20 baseline standard deviations above its average amplitude during the baseline (see the sketch below).
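As a minimal illustration of what these z-score values represent, the sketch below selects the baseline samples of this example ([-49.6, -12] ms) from a time vector and z-scores one source time series. Again, this is plain Matlab written for this tutorial with made-up variable names, not Brainstorm code.

{{{
% Minimal sketch (illustration only): Time is a hypothetical time vector in
% seconds, src one source time series.
Time = linspace(-0.0496, 0.150, 501);          % fake time vector
src  = randn(1, length(Time)) * 1e-11;         % fake source amplitudes (A.m)

iBaseline = find(Time >= -0.0496 & Time <= -0.012);   % baseline: [-49.6, -12] ms
z = (src - mean(src(iBaseline))) ./ std(src(iBaseline));

% A value z(t) = 20 means that, at time t, the source amplitude is
% 20 baseline standard deviations above its baseline mean.
}}}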
=== Scout time series ===
 * We will now extract the average value around the main response peak (41ms - 50ms), for both scouts (LeftSS, RightSS), for both conditions (Left, Right).
 * Close all the windows.
 * Remove the previous file selection in the Processes tab (right-click > Clear sample files).
 * Drag'n'drop the whole subject node into the Processes list (node "Subject01"), to process all the sources from Subject01.
 * Click on Run. There are different types of "sources" files available for Subject01, so you will be asked which files you want to process (only regular minimum norm results, zscore files, or everything). Select "Normal".
 * Edit the process options:
  * '''Process type''': Average over a time window
  * '''Time window''': 40.80ms - 49.60ms
  * '''Sources''': Keep the option "Use absolute values" checked (because we are doing a time average of sources; averaging relative values might mask some sources)
  * '''Scouts''': Check "Use only selected scouts", and select both scouts (LeftSS, RightSS)
  * '''Output''': Matlab variable
 * Click on Run. Enter "peak_values" as the output variable name.
 * In the Matlab command window, type "peak_values" to see the content of the output structure:

{{{
>> peak_values

peak_values =

      ImagingKernel: []
       ImageGridAmp: [2x2 double]
           Whitener: [151x151 double]
        nComponents: 1
            Comment: 'MN: MEG | timemean(41ms,50ms)'
           Function: 'minnorm'
               Time: [114 125]
      ImageGridTime: [0.0408 0.0496]
        ChannelFlag: [182x1 double]
        GoodChannel: [1x151 double]
      HeadModelFile: '/Subject01/Right/headmodel_surf_os_meg_CD.mat'
        SurfaceFile: '/Subject01/tess_cortex_concat.mat'
           DataFile: '/Subject01/Right/data_...mat'
       DescFileName: {'/Subject01/Right/results_...mat'  '/Subject01/Left/results_...mat'}
        DescCluster: {2x2 cell}
    DescProcessType: 'timemean'
}}}

 * All the fields but the last three are fields that were present in the sources files that you processed. Among them:
  * '''!ImageGridAmp''': [Nscouts x Nfiles] matrix.<<BR>>The function that is used to combine the different sources into a single scout value is the function selected in the Scout tab, menu View > Time series options. Cf. previous tutorial.
  * '''!ImageGridTime''': Time window that was used.
 * Three fields were added to describe where the data came from, i.e. the meaning of the !ImageGridAmp rows and columns.<<BR>>They all have the same number of rows and/or the same number of columns as the !ImageGridAmp matrix.
  * '''!DescFileName''': [1 x Nfiles]. Indicates which file each column of !ImageGridAmp comes from.
  * '''!DescCluster''': [Nscouts x Nfiles]. Indicates which cluster / scout each value in !ImageGridAmp comes from.
  * '''!DescProcessType''': Name of the process that was applied.
 * By looking at these fields, we can understand that the [2 x 2] matrix in !ImageGridAmp represents the average values between 40.8ms and 49.6ms for:

{{{
[ Right / LeftSS,  Left / LeftSS;
  Right / RightSS, Left / RightSS ]
}}}
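If you want to use this exported structure in your own scripts, the sketch below shows one way to read the [2 x 2] matrix together with its description fields. It is plain Matlab written for this tutorial (not a Brainstorm function), and it assumes that the !DescCluster entries are the scout labels stored as strings.

{{{
% Minimal sketch: print each value of ImageGridAmp together with the file and
% scout it comes from, using the Desc* fields described above.
% Assumes the "peak_values" structure exported above is in the workspace.
A = peak_values.ImageGridAmp;                           % [Nscouts x Nfiles]
for iFile = 1:size(A, 2)
    for iScout = 1:size(A, 1)
        fprintf('%-40s  %-10s  %g\n', ...
            peak_values.DescFileName{iFile}, ...        % which input file (column)
            peak_values.DescCluster{iScout, iFile}, ... % which scout (row)
            A(iScout, iFile));
    end
end
}}}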
=== Average recordings ===
 * Let's now average all the recordings files we have for Subject01, even though it doesn't make any sense. It is just to illustrate the averaging process.
 * Select the files to process:
  * Keep the "Subject01" node in the Processes list (from the previous example).
  * Select the "Recordings" button in the Processes tab. The "[4]" you see after Subject01 informs you that you are about to process four files (Right/ERF, Right/Std, Left/ERF, Left/Std).
 * Click on Run. Edit the process options:
  * Process: Average by subject, or Average everything (both will lead to the same result, as there is only one subject).
  * Keep the defaults for the other options.
 * Click on Run.
 * What happens in the database:
  * As you average recordings from two different conditions, the result cannot be stored in any of the existing conditions.
  * A new condition appears in the tree, ''(Intra-subject)'', to store data that is no longer attached to any specific condition or run, but that still depends only on ''Subject01''.
  * This ''(Intra-subject)'' node, like all the other ''"condition"'' nodes in the Brainstorm database, needs a "channel file" that describes the sensors. But by default there is no channel file in this new node.
  * What Brainstorm offers as a very simple fix is to create a new channel file by averaging the positions of the sensors from the conditions involved in the average. This is why you see this dialog box:

{{attachment:combineChannel.gif}}

 * Click on Yes. It will average the channel files from the Left and Right conditions to create the channel file in ''(Intra-subject)''.
{{attachment:intraSubject.gif}}

 * Do not try to understand the meaning of the values in this new averaged file. We averaged recordings and variances from two different conditions: it doesn't make any sense.
 * '''__Warning__''': Averaging the positions of the sensors is not an accurate way to average recordings from different runs. If the position of the head within the sensor array changed a lot between the two runs, this method can introduce an important error, because you average fields that do not come from the same position in space.
 * If you really want to average MEG recordings coming from different runs properly, you have to register them with dedicated tools that transform the recorded fields and make them spatially compatible.
 * Brainstorm will be able to perform this operation soon, but for the moment you still have to do it with another program.
 * This "channel file average" is just a quick and inaccurate tool that allows users to compute an average when the head of the subject did not move too much between the runs (a minimal sketch of what it amounts to is shown below).
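To make clear what this quick fix does (and does not do), here is a minimal Matlab sketch of "averaging two channel files", assuming a simplified channel structure where each sensor only has a Loc field holding its 3D position. The structure and field names are illustrative and do not necessarily match Brainstorm's actual channel file format. Note that only the sensor positions are averaged; the recorded fields themselves are left untouched, which is exactly why the method is inaccurate when the head moved.

{{{
% Minimal sketch (illustration only): average the sensor positions of two runs.
% chanA and chanB are hypothetical structure arrays, one entry per sensor,
% with a field Loc containing the [3 x 1] sensor position.
for i = 1:151
    chanA(i).Loc = randn(3, 1);                          % fake positions, run 1
    chanB(i).Loc = chanA(i).Loc + 0.005 * randn(3, 1);   % small head displacement, run 2
end

chanAvg = chanA;                                         % start from the first run
for i = 1:length(chanA)
    % Average the position of sensor i across the two runs
    chanAvg(i).Loc = (chanA(i).Loc + chanB(i).Loc) / 2;
end
% The recordings themselves are NOT modified: if the head moved between runs,
% the averaged recordings no longer correspond to any real sensor position.
}}}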
 * To check the difference between the sensor positions in two runs: select all the channel files at the same time, right-click, menu Display sensors.

{{attachment:comparePositions.gif}} {{attachment:comparePositionsChan.gif}}

 * If you want to avoid the problems related to the position of the head: '''__work in source space only__'''.

== Next ==
Comparisons between different conditions with the [[Tutorials/TutStat|Statistics]] tab.