Tutorial 10: Graphical scripting

Authors: Francois Tadel, Sylvain Baillet

The main window includes a graphical batching interface that directly benefits from the database explorer: files are organized as a tree of subjects and conditions, and simple drag-and-drop operations readily select files for subsequent batch processing. Most of the Brainstorm features are available through this interface, including pre-processing of the recordings, averaging, time-frequency decompositions, and computing statistics. A full analysis pipeline can be created in a few minutes, saved in the user’s preferences and reloaded in one click, executed directly or exported as a Matlab script.

The available processes are organized in a plug-in structure. Any Matlab script that is added to the plug-in folder (brainstorm3/toolbox/process/functions/) and has the right format is automatically detected and made available in the GUI. This mechanism makes it very easy for other developers to contribute to Brainstorm.

Selecting files to process

The first thing to do is to define the files you are going to process. This is easily done by picking files or folders in the database explorer and dropping them into the empty list of the Process1 tab.

  1. Drag and drop the following nodes in the Process1 list: Right/ERF (recordings), Right (condition), and Subject01 (subject)

    files1.gif

    • The number in brackets next to each node represents the number of data files that were found in it. The node ERF "contains" only itself (1), Subject01/Right contains the ERF and Std files (2), and Subject01 contains 2 conditions x 2 recordings (4).
    • The total number of files, i.e. the sum of all those values, appears in the title of the panel: "Files to process [7]".
  2. The buttons on the left side let you select which type of files you want to process: recordings, sources, time-frequency, or other. Now select the second button, "Sources". All the counts are updated and now reflect the number of source files found for each node.

    files2.gif

  3. If you selected the third button, "Time-frequency", you would see "0" everywhere, because there are no time-frequency decompositions in the database yet.

    files3.gif

  4. Now clear all the files from the list. You can either right-click on the list (popup menu Clear list), or select all the nodes (holding the Shift or Ctrl key) and then press the Delete key.

  5. Select both the Left/ERF and Right/ERF files in the tree (holding the Ctrl key), and put them in the Process1 list. We are going to apply some functions to those two files. You cannot distinguish them after they are dropped in the list, because they are both labeled "ERF". If at some point you need to know what is in the list, just leave your mouse over a node for a few seconds, and a tooltip will give you information about it, just like in the database explorer.

    files4.gif

Creating a pipeline

List of processes

  1. Click on Run. The Process selection window appears, with which you can create an analysis pipeline (i.e. a list of processes that are applied to the selected files one after the other). The first button in the toolbar shows the list of processes that are currently available. If you click on a menu, it is added to the list.

    pipeline1.gif

  2. Some menus appear in grey (example: Sources > Spatial smoothing). This means that they cannot be applied to the type of data you have in input, or to the output of the current pipeline. The "spatial smoothing" process can only be run on source files.

  3. When you select a process, a list of options specific to this process is shown in the window.
    • To delete a process: Select it and press the Delete key, or the big cross in the toolbar.

    • With the "up arrow" and "down arrow" buttons in the toolbar, you can move up/down a process in the pipeline.
  4. Now add the following processes, and set their options:
    • Pre-process > Band-pass filter: 2Hz - 30Hz

      • In some processes, you can specify the type(s) of sensors on which you want to apply the process. This way you can for instance apply different filters on the EEG and the MEG, if you have both in the same files.
    • Extract > Extract time: 40.00ms - 49.60ms, overwrite initial file

      • This will extract from each file a small time window around the main response peak.
      • Selecting the overwrite option replaces the previous file (bandpass) with the output of this process (bandpass+extract). This option is usually unselected for the first process in the list, then selected automatically for the following ones.
    • Average > Average over time: Overwrite initial file

      • Compute the average over this small time window.

        pipeline2.gif

  5. Save your pipeline: Click on the last button in the toolbar > Save > New... > Type "process_avg45".

Saving/exporting a pipeline

The last button in the toolbar offers a list of menus to save, load and export the pipelines.

A Matlab script equivalent to this pipeline can also be generated automatically from the same menu.
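As a reference, a generated script for the pipeline above typically looks like the sketch below. It follows the standard Brainstorm scripting convention (bst_process('CallProcess', ...) and bst_report), but the exact process names ('process_bandpass', 'process_extract_time', 'process_average_time'), their option names, and the file paths shown here are assumptions that may differ slightly between Brainstorm versions:

```matlab
% Input files: the two averaged recordings selected in Process1
% (hypothetical relative paths within the protocol)
sFiles = {'Subject01/Left/data_ERF.mat', ...
          'Subject01/Right/data_ERF.mat'};

% Start a new report (collects messages and snapshots)
bst_report('Start', sFiles);

% Process: Band-pass filter 2Hz - 30Hz
sFiles = bst_process('CallProcess', 'process_bandpass', sFiles, [], ...
    'highpass',    2, ...
    'lowpass',     30, ...
    'sensortypes', 'MEG, EEG');

% Process: Extract time window 40.00ms - 49.60ms (overwrite input file)
sFiles = bst_process('CallProcess', 'process_extract_time', sFiles, [], ...
    'timewindow', [0.0400, 0.0496], ...
    'overwrite',  1);

% Process: Average over time (overwrite input file)
sFiles = bst_process('CallProcess', 'process_average_time', sFiles, [], ...
    'overwrite', 1);

% Save the report and display it
ReportFile = bst_report('Save', sFiles);
bst_report('Open', ReportFile);
```

Such a script is meant to be run from the Matlab command window while Brainstorm is running, with the right protocol selected.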

Click on Ok in the pipeline window. After a few seconds, you will see two new files in the database, and the "Report viewer" window appears.

Report viewer

Each time the pipeline editor is used to run a list of processes, a report is generated and saved in the user home folder (/home/username/reports/). The report viewer shows, as an HTML page, some of the information saved in this report structure: the date and duration of execution, the list of processes, and the input and output files. It also reports all the warnings and errors that occurred during the execution.

The report viewer does not necessarily appear automatically at the end of the last process: it is shown only when more than one process was executed, or when one of the processes returned an error or a warning.

When running processes manually from a script, the calls bst_report('Start'), bst_report('Save') and bst_report('Open') explicitly indicate when the logging of events should start and stop, and when to display the report.

You can add images to the reports for quality control using the process "File > Save snapshot".

output.gif

After you close the report window, you can re-open the last report with the main menu of the Brainstorm window: File > Report viewer.

With the buttons in the toolbar, you can go back to the previous reports saved from the same protocol.

Reviewing the results

  1. The file names now include a list of tags that were appended to represent all the processes that were applied to them.

    output.gif

  2. Quick reminder: the file history records all this information too (right-click > File > View file history), so you are free to rename those files to something shorter (example: "ERF_avg45").

    history.gif

  3. Now double-click on the processed file in Right condition.
    • You see only horizontal lines. This is not a bug: you averaged the data in time, so there is no temporal dimension anymore.
    • Press Ctrl+T to bring up a "2D sensor cap" view. It represents the average topography between 40ms and 50ms, filtered between 2Hz and 30Hz.

      fig_avg.gif

  4. Now have a look at the contents of the files:
    • Select both new files (holding the Shift or Ctrl key)
    • Right-click on one of them > File > Export to Matlab > "avg45_".

    • It creates two new variables in your Matlab workspace, "avg45_1" and "avg45_2". Type "avg45_1" in the command window to see the fields in the structure:

      avg45.gif

    • This is the exact same structure as the processed file in condition Left. All those fields were documented in tutorial #2. Just notice the dimensions of the recordings matrix F, [182 x 2], and the Time vector, [0.0400 0.0496].
    • More generally, all time averages are represented in Brainstorm by two identical columns, one for the beginning and one for the end of the time window they represent.
  5. Delete the two processed files, as they will not be used afterwards.
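Before deleting the files, the two-column representation described above can be checked directly from the Matlab command window. The variable names come from the export step above, the field names from the standard Brainstorm data structure, and the dimensions from this tutorial's data:

```matlab
% The recordings matrix keeps two time samples for a time average:
size(avg45_1.F)      % [182 2]: 182 channels x 2 time samples

% The two columns are identical copies of the averaged values
isequal(avg45_1.F(:,1), avg45_1.F(:,2))   % returns 1 (true)

% The Time vector stores the start and end of the averaged window
avg45_1.Time         % [0.0400 0.0496], in seconds
```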

Another example: Z-score and scouts

  1. Clear the files list, drag and drop "Subject01".
  2. Select the "Sources" button of the Process1 panel; it should indicate two files. Click on Run.
  3. Create the following pipeline:
    • Standardize > Compute Z-score: baseline = [-50ms,-1ms]

    • Extract > Scouts time series: select both scouts, and select "Concatenate output in one matrix"

      pipeline_zscore.gif

  4. Run the pipeline. It creates three new files in the database:

    zscore_output.gif
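This second pipeline can be scripted in the same way. Below is a sketch assuming the same calling convention as before; the process names ('process_zscore', 'process_extract_scout'), their option names, and the scout names are assumptions that may differ between Brainstorm versions and protocols:

```matlab
% sFiles: the two source files of Subject01 selected in Process1

% Z-score standardization against the baseline:
%   z(t) = (x(t) - mean(baseline)) / std(baseline)
sFiles = bst_process('CallProcess', 'process_zscore', sFiles, [], ...
    'baseline', [-0.050, -0.001]);   % baseline window, in seconds

% Extract the scouts time series, concatenated in one matrix
sFiles = bst_process('CallProcess', 'process_extract_scout', sFiles, [], ...
    'scouts',      {'LeftSS', 'RightSS'}, ...  % hypothetical scout names
    'concatenate', 1);
```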

Let's have a detailed look at each of those files:

Last thing to do before going to the next tutorial: delete those three files.

Next

Statistics: Process two sets of data.

Tutorials/TutProcesses (last edited 2014-04-18 16:26:58 by 69)