Tutorial 9: Processes

The main window includes a graphical batching interface that directly benefits from the database explorer: files are organized as a tree of subjects and conditions, and simple drag-and-drop operations readily select files for subsequent batch processing. Most of the Brainstorm features are (or will be) available through this interface, including pre-processing of the recordings, averaging, time-frequency decompositions, and computing statistics. A full analysis pipeline can be created in a few minutes, saved in the user's preferences, reloaded in one click, executed directly, or exported as a Matlab script.
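As an illustration of the last point, here is a minimal sketch of what such an exported pipeline could look like as a Matlab script, assuming the bst_process scripting interface of recent Brainstorm versions. The file names and option values below are placeholders, not part of this tutorial's data:

{{{
% Minimal sketch of an exported pipeline script, assuming the bst_process
% scripting interface of recent Brainstorm versions. File names, process
% names and option values are placeholders for illustration only.

% Files to process: database-relative paths (placeholders)
sFiles = {'Subject01/StimRightThumb/results_MN_MEG_KERNEL.mat', ...
          'Subject01/StimLeftThumb/results_MN_MEG_KERNEL.mat'};

% Z-score noise normalization with respect to the pre-stimulus baseline (in seconds)
sFiles = bst_process('CallProcess', 'process_zscore', sFiles, [], ...
    'baseline', [-0.0496, -0.012]);
}}}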
This tutorial uses Sabine Meunier's somatotopy experiment, called ''TutorialCTF'' in your Brainstorm database.
The ''Processes'' tab in the Brainstorm window is a very generic tool that can be used to apply a function to, or extract data from, any set of data files (recordings or sources). Its main applications are pre-processing, averaging, and data extraction.

== Selecting files to process ==
First, you have to select the files you want to process. You can drag and drop any node that contains recordings or sources from the database explorer to the ''Processes'' list.

 * Click on the sources file: Subject01 / StimRightThumb / ERP / MN: MEG(Kernel). Drag and drop it to the empty list in the ''Processes'' tab.
 * Click on the condition: Subject01 / StimLeftThumb. Add it to the list.
 * Click alternately on the ''Sources'' and ''Recordings'' buttons in the ''Processes'' tab. These buttons indicate whether you are going to process sources files or recordings files. Observe that the number of files (indicated in brackets) changes according to the selected modality. {{attachment:panelProcesses.gif}}
 * Note that you can put almost anything in this list: recordings, sources, conditions, subjects, even the whole protocol. Brainstorm recursively takes all the recordings or sources files contained in each node.

== Creating a pipeline ==
 * Select ''Sources'' and click on ''Run''. This window appears: {{attachment:panelStatRun.gif}}
 * '''Process selection''': the list of processes available for the selected data type:
  * __Process files__: one input file => one output file
   * '''Z-score noise normalization''': for each input file, subtract the mean of the baseline and divide by the standard deviation of the baseline.
   * '''Remove baseline mean (DC offset)''': for each input file, subtract the mean of the baseline.
   * '''Band-pass filtering''': high-pass, low-pass or band-pass filtering, using the Signal Processing Toolbox (a short stand-alone sketch of this kind of filter is given after this section).
   * '''Spatial smoothing''': smooth the source activations on the cortex surface. Useful before comparing different runs or different subjects, to correct for small localization errors.
   * '''Opposite values''': replace the input values with their opposites.
  * __Averaging__: many input files => a few averaged output files
   * '''Average by condition''': one output file for each condition (i.e. a grand average)
   * '''Average by subject''': one output file for each subject
   * '''Average everything''': one output file only
  * __Extract data__: get a structured subset of the input data (sources, scouts, clusters, ...)
   * '''Average over a time window''': select a block of data in each input file, and average over the time dimension.
   * '''Variance over a time window''': same, but compute the variance instead of the mean.
   * '''Extract data block''': select a block of data in each input file.
 * '''Time windows''':
  * __Time window__: defines the time instants that will be processed.
  * __Baseline__: some processes require the definition of a baseline / pre-stimulus time window for all the input files.
 * '''Sources''':
  * __Use absolute values of source activations__: if checked, the process is applied to the absolute values of the input files instead of the initial values. This is usually indispensable when processing source amplitudes, as the signed values produced by the minimum norm estimation are not really meaningful.
 * '''Scouts''':
  * When scouts are available in the ''Scout'' tab of the Brainstorm interface, they also appear here.
  * Check "Use only selected scouts" if you want to compute only the scout time series instead of extracting all the source time series.
  * Check "Merge selected scouts" if you want the selected scouts to be concatenated and considered as one single scout.
 * '''Output''':
  * __Brainstorm database__: the results of the processes are stored in new files that appear in the Brainstorm database explorer. This option is not always available, because some processes produce outputs that are not compatible with the Brainstorm database structure. For example, the average of a scout over a given time window cannot be stored in the database.
   * '''Overwrite initial files''': check this option if you are applying pre-processing functions to recordings and do not want to keep the original files.
   * '''Select output condition''': a meaningful default path (Subject/Condition) for storing the output files is usually selected by the program, according to the process and the files processed. Check this option if you want to override Brainstorm's default and enter your own output path.
  * __User defined file__: store all the output information in one single file, which has to be located ''outside'' of the Brainstorm database folder.
   * Possible output formats: Matlab .mat file, or ASCII matrices.
   * This option is useful if you are extracting blocks of information that you want to process later with other software (R, Excel, Statistica) or with your own Matlab scripts.
   * For example, to extract 5 time windows x 3 scouts x 10 conditions x 60 subjects, you would only have to run the process 5 times (once per time window), creating 5 files.
  * __Export to a Matlab variable__: produces exactly the same output as the previous option, but exports it directly into the current Matlab workspace instead of saving it to a file.
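The band-pass filter mentioned in the list above can be illustrated in a few lines of plain Matlab. This is only a sketch of the principle, with an assumed sampling rate and pass-band; it is not the exact implementation used internally by Brainstorm:

{{{
% Stand-alone sketch of a band-pass filter using the Signal Processing Toolbox.
% The sampling rate, pass-band and test signal are assumed values.
Fs     = 600;                        % sampling frequency in Hz (assumption)
band   = [1 40];                     % pass-band in Hz (assumption)
x      = randn(1, 10*Fs);            % 10 seconds of fake single-channel recordings
[b, a] = butter(4, band / (Fs/2));   % 4th-order Butterworth, normalized cutoffs
y      = filtfilt(b, a, x);          % zero-phase filtering (no time shift)
}}}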
* Check the "Merge selected scouts" if you want the selected scouts to be concatenated and considered as only one big scout. * '''Output''': * __Brainstorm database__: The results of the processes are stored in new files that will appear in the Brainstorm database explorer. <<BR>>This option is not always available, because some processes produce output that are not compatible with Brainstorm database structure. Example: If you extract the average of a scout over a certain time window, this cannot be stored in the database. * '''Overwrite initial files''': Check this option if you are applying pre-processing functions on recordings and if you don't want to keep the original files. * '''Select output condition''': A meaningful default path (Subject/Condition) to store the output files is usually selected by the program, according to the process and files processed. <<BR>>Check this option if you want to override Brainstorm's default, and enter you own output path. * __User defined file__: Store all the output information in one unique file, that has to be located ''__outside__'' of the Brainstorm database folder. * Possible output formats: Matlab .mat, or ASCII matrices * This options is usually useful if you are are extracting blocks of information that you want to process later with another software (R, Excel, Statistica), or with your own Matlab scripts. * For example: Extract 5 time windows x 3 scouts x 10 conditions x 60 subjects => You would have to start 5 times only the process and create 5 different files. * __Export to a Matlab variable__: Produce exactly the same output as the previous option, but export it directly into the current Matlab workspace instead of saving it to a file. == Examples == === Z-Score noise normalization === * Keep the file selection done at the beginning of this tutorial. Select the "z-score noise normalization" process. * Set the baseline to [-49.6, -12]ms (after 12ms, the values are linear interpolations due to the electric stimulation artifact removal) * Leave the other options to their default values. Click on Run. Wait until the progress bar disappears. * Have a look to the database explorer: Two new files appeared, one for each input file.<<BR>> {{attachment:ex1_db.gif}} * For !StimRightThumb: open the ERP time series, the original sources, and the zscore.<<BR>> {{attachment:ex1_ts.gif}} {{attachment:ex1_sources.gif}} {{attachment:ex1_zscore.gif}} * Notice that the two cortical views are displayed with different colorbars. * The initial minimum norm result is shown in pA.m (amplitude range around 10e-12), with the the "sources" colormap. * The z-score values are shown without any unit, with the "stat" colormap. At a given time, a value of 20 for a given source means that its amplitude is 20 times superior to its average amplitude during the baseline. === Scout time series === * We will now extract the average value around the main response peak (41ms - 50ms), for both scouts (LeftSS, RightSS), for both conditions (StimLeftThumb, StimRightThumb). * Close all the windows * Remove the previous file selection in Processes tab (right click > Clear sample files) * Drag'n'drop the whole subject node into the Processes list (node "Subject01"), to process all the sources from Subject01 * Click on Run. There are different types of "sources" files available for Subject01, so you will be asked what files you want to process (only regular minimum norm results, zscore files, or everything). Select "Normal". 
=== Scout time series ===
 * We will now extract the average value around the main response peak (41 ms - 50 ms), for both scouts (LeftSS, RightSS) and both conditions (StimLeftThumb, StimRightThumb).
 * Close all the windows.
 * Remove the previous file selection in the Processes tab (right-click > Clear sample files).
 * Drag and drop the whole subject node ("Subject01") into the Processes list, to process all the sources from Subject01.
 * Click on Run. There are different types of "sources" files available for Subject01, so you will be asked which files you want to process (only the regular minimum norm results, the z-score files, or everything). Select "Normal".
 * Edit the process options:
  * Process type: Average over a time window
  * Sources: check the option "Use absolute values"
  * Output: Matlab variable
 * Click on Run. Enter "peak_values" as the output variable name.
 * In the Matlab command window, type "peak_values" to see the content of the output structure:
{{{
>> peak_values

peak_values = 

      ImagingKernel: []
       ImageGridAmp: [15010x2 double]
           Whitener: [151x151 double]
        nComponents: 1
            Comment: 'MN: MEG | timemean(-50ms,250ms)'
           Function: 'minnorm'
               Time: [1 375]
      ImageGridTime: [-0.0496 0.2496]
        ChannelFlag: [182x1 double]
        GoodChannel: [1x151 double]
      HeadModelFile: '/Subject01/StimRightThumb/headmodel_surf_os_meg_CD.mat'
        SurfaceFile: '/Subject01/tess_cortex_concat.mat'
           DataFile: '/Subject01/StimRightThumb/data_....mat'
       DescFileName: {'/Subject01/StimRightThumb/results_....mat'  '/Subject01/StimLeftThumb/results__....mat'}
        DescCluster: {}
    DescProcessType: 'timemean'
}}}
 * All the fields except the last three were already present in the source files that you processed:
  * '''ImageGridAmp''': one row per extracted signal, two columns. There is no real time dimension in this file, as it is an average over time; but in order to keep a time extension for these values, the static value is repeated at the two time instants that delimit the processed window.
 * The last three fields describe what the process did:
  * '''DescFileName''': the list of input files that were processed.
  * '''DescCluster''': the scouts or clusters used, if any (empty here).
  * '''DescProcessType''': the name of the process that was applied ('timemean').

You can then manipulate this structure directly from the Matlab command line, as in the sketch below.
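As a quick sanity check, the structure can be inspected directly in Matlab. This is a hypothetical example; it assumes, as described above, that the two columns of ImageGridAmp hold the same static values repeated over the two time instants:

{{{
% Hypothetical inspection of the exported structure (field names as displayed above).
vals = peak_values.ImageGridAmp(:, 1);    % one copy of the time-averaged values
fprintf('Number of extracted signals: %d\n', size(vals, 1));
fprintf('Maximum averaged amplitude:  %g\n', max(vals));
}}}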
== Next ==
This is the end of this first series of tutorials, based on CTF MEG recordings. Other important notions are going to be introduced in a tutorial based on EEG data: clusters of electrodes, and statistical analysis. Before moving on, please go again through all the importation and anatomy definition steps.
== Plugin structure ==
The available processes are organized in a plug-in structure: any Matlab script that is added to the plug-in folder and has the right format is automatically detected and made available in the GUI. This mechanism makes it very easy for other developers to contribute new processes to Brainstorm.
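To give an idea of what "the right format" means, here is a hypothetical skeleton in the style of recent Brainstorm process functions. The exact format expected by your version may differ, so copy an existing script from the plug-in folder rather than this sketch; all names and values below are illustrative:

{{{
function varargout = process_example( varargin )
% PROCESS_EXAMPLE: Hypothetical skeleton of a process plug-in (illustrative only).
eval(macro_method);
end

%% ===== PROCESS DESCRIPTION =====
function sProcess = GetDescription() %#ok<DEFNU>
    % How the process appears in the GUI
    sProcess.Comment     = 'Example: Opposite values';
    sProcess.Category    = 'Filter';              % applied file by file
    sProcess.SubGroup    = 'Examples';
    sProcess.Index       = 1000;                  % position in the process list
    sProcess.InputTypes  = {'data', 'results'};   % recordings and sources
    sProcess.OutputTypes = {'data', 'results'};
    sProcess.nInputs     = 1;
    sProcess.nMinFiles   = 1;
end

%% ===== RUN =====
function sInput = Run(sProcess, sInput) %#ok<DEFNU>
    % Replace the values with their opposites, and tag the output file
    sInput.A = -sInput.A;
    sInput.CommentTag = 'opposite';
end
}}}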