= Tutorial 10: Graphical scripting =
''Authors: Francois Tadel, Sylvain Baillet''

The main window includes a graphical batching interface that directly benefits from the database explorer: files are organized as a tree of subjects and conditions, and simple drag-and-drop operations readily select files for subsequent batch processing. Most of the Brainstorm features are available through this interface, including pre-processing of the recordings, averaging, time-frequency decompositions, and computing statistics. A full analysis pipeline can be created in a few minutes, saved in the user's preferences, reloaded in one click, and either executed directly or exported as a Matlab script.

The available processes are organized in a plug-in structure. Any Matlab script that is added to the plug-in folder (brainstorm3/toolbox/process/functions/) and has the right format will be automatically detected and made available in the GUI. This mechanism makes it very easy for other developers to contribute to Brainstorm.

<>

== Selecting files to process ==
The first thing to do is to define the files you are going to process. This is done easily by picking files or folders in the database explorer and dropping them in the empty list of the Process1 tab.

 1. Drag and drop the following nodes in the Process1 list: Right/ERF (recordings), Right (condition), and Subject01 (subject).
  . {{attachment:files1.gif}}
  * The number in brackets next to each node represents the number of data files that were found in it. The node ERF "contains" only itself (1), Subject01/Right contains the ERF and Std files (2), and Subject01 contains 2 conditions x 2 recordings (4).
  * The total number of files, i.e. the sum of all those values, appears in the title of the panel: "Files to process [7]".
 1. The buttons on the left side allow you to select which type of file you want to process: recordings, sources, time-frequency, or other. Now select the second button, "Sources". All the counts are updated and now reflect the number of source files found for each node.
  . {{attachment:files2.gif}}
 1. If you select the third button, "Time-frequency", you see "0" everywhere, because there are no time-frequency decompositions in the database yet.
  . {{attachment:files3.gif}}
 1. Now clear the list of all the files. You may either right-click on the list (popup menu ''Clear list''), or select all the nodes (holding the ''Shift'' or ''Ctrl'' key) and then press the ''Delete'' key.
 1. Select both files Left/ERF and Right/ERF in the tree (holding the ''Ctrl'' key), and put them in the Process1 list. We are going to apply some functions to those two files. You cannot distinguish them after they are dropped in the list, because they are both referred to as "ERF". If at some point you need to know what is in the list, just leave your mouse over a node for a few seconds, and a tooltip will give you information about it, just like in the database explorer.
  . {{attachment:files4.gif}}

== Creating a pipeline ==
=== List of processes ===
 1. Click on Run. The Process selection window appears, with which you can create an analysis pipeline (i.e. a list of processes that are applied to the selected files one after the other). The first button in the toolbar shows the list of processes that are currently available. If you click on a menu, it is added to the list.
  . {{attachment:pipeline1.gif}}
 1. Some menus appear in grey (example: Sources > Spatial smoothing). This means that they are not meant to be applied to the type of data that you have in input, or at the end of the current pipeline. The "spatial smoothing" process may only be run on source files.
 1. When you select a process, a list of options specific to this process is shown in the window.
  * To delete a process: select it and press the ''Delete'' key, or click the big cross in the toolbar.
  * With the "up arrow" and "down arrow" buttons in the toolbar, you can move a process up or down in the pipeline.
 1. Now add the following processes, and set their options:
  * '''Pre-process > Band-pass filter''': 2Hz - 30Hz
   * In some processes, you can specify the type(s) of sensors on which you want to apply the process. This way you can, for instance, apply different filters to the EEG and the MEG, if you have both in the same files.
  * '''Extract > Extract time''': 40.00ms - 49.60ms, overwrite input file
   * This will extract from each file a small time window around the main response peak.
   * Selecting the overwrite option replaces the previous file (bandpass) with the output of this process (bandpass+extract). This option is usually unselected for the first process in the list, then selected automatically.
  * '''Average > Average time''': Overwrite input file
   * Computes the average over this small time window.
  . {{attachment:pipeline2.gif}}
 1. Save your pipeline: click on the last button in the toolbar > Save > New... > Type "process_avg45".

=== Saving/exporting a pipeline ===
The last button in the toolbar offers a list of menus to save, load and export the pipelines.

 . {{attachment:pipeline3.gif}}

 * '''Load''': List of the pipelines that are saved in the user preferences.
 * '''Load from file''': Import a pipeline from a pipeline_...mat file (previously saved with the menu "Save as Matlab matrix").
 * '''Save''': Save the pipeline in the user preferences, so that you can access it again quickly later.
 * '''Save as Matlab matrix''': Export the pipeline as a Matlab structure in a .mat file. This allows different users to exchange their analysis pipelines (or a single user to move them between computers).
 * '''Generate .m script''': Generate a human-readable Matlab script that can be re-used for other purposes or modified.
 * '''Delete''': Remove a pipeline that is saved in the user preferences.
 * '''Reset options''': Brainstorm automatically saves the options of all the processes for each user. This menu removes all the saved options and sets them back to their default values.

Here is the Matlab script that is generated automatically for this pipeline.

 . {{attachment:script.gif}}

Click on [Run] in the pipeline window. After a few seconds, you will see two new files in the database, and the "Report viewer" window.

=== Report viewer ===
Each time the pipeline editor is used to run a list of processes, a report is generated and saved in the user home folder (/home/username/reports/). The report viewer shows, as an HTML page, some of the information saved in this report structure: the date and duration of the execution, the list of processes, and the input and output files. It also reports all the warnings and errors that occurred during the execution.
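A generated pipeline script follows a pattern similar to the sketch below. The functions bst_report and bst_process are the actual Brainstorm entry points, but the input file names and the process option fields shown here are illustrative and may differ between Brainstorm versions; always start from a script generated by your own installation.

```matlab
% Illustrative input files (database-relative paths, hypothetical names)
sFiles = {'Subject01/Left/data_ERF.mat', 'Subject01/Right/data_ERF.mat'};

% Start a new report: everything executed below is logged into it
bst_report('Start', sFiles);

% Band-pass filter: 2Hz - 30Hz (option names are illustrative)
sFiles = bst_process('CallProcess', 'process_bandpass', sFiles, [], ...
    'highpass', 2, 'lowpass', 30, 'sensortypes', 'MEG');

% Extract time: 40.00ms - 49.60ms, overwriting the filtered files
sFiles = bst_process('CallProcess', 'process_extract_time', sFiles, [], ...
    'timewindow', [0.0400, 0.0496], 'overwrite', 1);

% Average over the extracted time window, overwriting again
sFiles = bst_process('CallProcess', 'process_average_time', sFiles, [], ...
    'overwrite', 1);

% Save the report and show it in the report viewer
ReportFile = bst_report('Save', sFiles);
bst_report('Open', ReportFile);
```

Each call returns the list of files it produced, so the output of one process feeds directly into the next, exactly like the pipeline built in the GUI.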
The report viewer does not necessarily appear automatically at the end of the last process: it is shown only when more than one process was executed, or when any of the processes returned an error or a warning. When running processes manually from a script, the calls bst_report('Start'), bst_report('Save') and bst_report('Open') explicitly indicate when the logging of the events should start and stop. You can add images to the reports for quality control using the process "File > Save snapshot".

 . {{attachment:report.gif|output.gif}}

After you close the report window, you can re-open the last report with the main menu of the Brainstorm window: '''File > Report viewer'''. With the buttons in the toolbar, you can go back to the previous reports saved for the same protocol.

=== Reviewing the results ===
 1. The file names carry a list of tags that were appended to represent all the processes that were applied to them.
  . {{attachment:output.gif}}
 1. Quick reminder: the file history records all this information too (right-click > File > View file history), so you are free to rename those files to something shorter (example: "ERF_avg45").
  . {{attachment:history.gif}}
 1. Now double-click on the processed file in the Right condition.
  * You see only horizontal lines. This is not a bug: you averaged the data in time, so there is no temporal dimension anymore.
  * Press Ctrl+T to bring up a "2D sensor cap" view. It represents the average topography between 40ms and 50ms, filtered between 2Hz and 30Hz.
  . {{attachment:fig_avg.gif}}
 1. Now have a look at the contents of the files:
  * Select both new files (holding the Shift or Ctrl key).
  * Right-click on one of them > File > Export to Matlab > "avg45_".
  * This creates two new variables in your Matlab workspace, "avg45_1" and "avg45_2". Type "avg45_1" in the command window to see the fields in the structure:
  . {{attachment:avg45.gif}}
  * This is the exact same structure as the processed file in the condition Left. All those fields were documented in tutorial #2. Just notice the dimensions of the recordings matrix F, [182 x 2], and of the Time vector, [0.0400 0.0496].
  * In general, averages in time are represented in Brainstorm by two identical columns, one for the beginning and one for the end of the time window they represent.
 1. Delete the two processed files; they are of no use later.

== Another example: Z-score and scouts ==
 1. Clear the files list, then drag and drop "Subject01".
 1. Select the "Sources" button of the Process1 panel; it should indicate two files. Click on Run.
 1. Create the following pipeline:
  * '''Standardize > Compute Z-score''': baseline = [-50ms,-1ms]
  * '''Extract > Scouts time series''': select both scouts, and select "Concatenate output in one matrix"
  . {{attachment:pipeline_zscore.gif}}
 1. Run the pipeline. It creates three new files in the database:
  . {{attachment:zscore_output.gif}}
 1. Let's have a detailed look at each of those files:
  * '''Left / ERF / MN | zscore''': Output of the Z-score process for the Left sources.
   * Double-click on it to display it, and display the original sources at the same time.
  . {{attachment:zscore_cortex.gif}}
   * The Z-score operation subtracts the average of the baseline and divides by its standard deviation: z = (x - mean(baseline)) / std(baseline). The goal is to give less importance to the sources that have a strong variability in the baseline, because they are considered as noise, and this way to increase the visibility of the significant effects.
   * The maps look relatively similar in terms of localization, except that for the Z-score we do not observe any pattern of activity in the poles of the temporal lobes. The variance in those areas was high during the baseline, so they have been "removed" by the Z-score process.
   * Note that the units changed: the Z-score values are unitless (the minimum norm values were divided by their standard deviation).
  * '''Right / ERF / MN | zscore''': Output of the Z-score process for the Right sources. Same thing.
  * '''Scouts: LeftSS RightSS [2 files]''': Scouts time series for the two Z-score files.
   * This file was saved in a new node in the database: "Intra-subject". This is where the results of processes that take input files from different conditions within the same subject are stored.
   * It contains the concatenated output of 2 scouts x 2 conditions, in a single matrix. If we had not selected the "Concatenate" option, two files would have been saved, one in the condition Left and one in the condition Right.
   * This is a type of file that has not been introduced yet in these tutorials. It is simply called "matrix", and may contain any type of data. Because we do not know exactly what such files contain, the operations that can be performed on them are very limited.
  . {{attachment:matrix_popup.gif}} {{attachment:matrix_view.gif}}

Last thing to do before going to the next tutorial: delete those three files.

== Next ==
[[Tutorials/TutStat|Statistics: Process two sets of data]].

<>