Tutorial 10: Graphical scripting
Authors: Francois Tadel, Sylvain Baillet
The main window includes a graphical batching interface that directly benefits from the database explorer: files are organized as a tree of subjects and conditions, and simple drag-and-drop operations readily select files for subsequent batch processing. Most of the Brainstorm features are available through this interface, including pre-processing of the recordings, averaging, time-frequency decompositions, and computing statistics. A full analysis pipeline can be created in a few minutes, saved in the user’s preferences and reloaded in one click, executed directly or exported as a Matlab script.
The available processes are organized in a plug-in structure. Any Matlab script that is added to the plug-in folder (brainstorm3/toolbox/process/functions/) and has the right format will automatically be detected and made available in the GUI. This mechanism makes the contribution from other developers to Brainstorm very easy.
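As a very rough sketch, this format follows the convention used by the existing functions in that folder: one Matlab file that exposes at least a GetDescription and a Run subfunction through the macro_method dispatcher. The field names below are indicative assumptions; check any existing process in brainstorm3/toolbox/process/functions/ for the exact format expected by your version.

    function varargout = process_example( varargin )
    % PROCESS_EXAMPLE: Minimal sketch of a Brainstorm process plug-in.
    eval(macro_method);  % Brainstorm macro: routes calls to the subfunctions below

    function sProcess = GetDescription() %#ok<DEFNU>
        % Description of the process: menu label, accepted file types
        sProcess.Comment     = 'Example process';  % Label shown in the menus
        sProcess.SubGroup    = 'Examples';         % Sub-menu where it appears
        sProcess.Index       = 1000;               % Position in the menus
        sProcess.InputTypes  = {'data'};           % Accepts recordings as input
        sProcess.OutputTypes = {'data'};
        sProcess.nInputs     = 1;                  % One list of input files (Process1)
        sProcess.nMinFiles   = 1;

    function OutputFiles = Run(sProcess, sInputs) %#ok<DEFNU>
        % Called at execution: process the input files and return
        % the list of files that were created or modified.
        OutputFiles = {sInputs.FileName};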
Selecting files to process
The first thing to do is to define the files you are going to process. This is done easily by picking files or folders in the database explorer and dropping them into the empty list of the Process1 tab.
Drag and drop the following nodes in the Process1 list: Right/ERF (recordings), Right (condition), and Subject01 (subject)
- The number in brackets next to each node represents the number of data files found in it. The node ERF "contains" only itself (1), Subject01/Right contains the ERF and Std files (2), and Subject01 contains 2 conditions x 2 recordings (4).
- The total number of files, i.e. the sum of all these values, appears in the title of the panel "Files to process [7]".
The buttons on the left side allow you to select what type of file you want to process: recordings, sources, time-frequency, other. Now select the second button, "Sources". All the counts are updated and now reflect the number of source files found for each node.
If you select the third button, "Time-frequency", you will see "0" everywhere, because there are no time-frequency decompositions in the database yet.
Now clear the list of all the files. You may either right-click on the list (popup menu Clear list), or select all the nodes (holding the Shift or Ctrl key) and then press the Delete key.
Select both files Left/ERF and Right/ERF in the tree (holding the Ctrl key), and put them in the Process1 list. We are going to apply some functions to these two files. You cannot distinguish them after they are dropped in the list, because they are both referred to as "ERF". If at some point you need to know what is in the list, just leave your mouse over a node for a few seconds, and a tooltip will give you information about it, just like in the database explorer.
Creating a pipeline
List of processes
Click on Run. The Process selection window appears, with which you can create an analysis pipeline (i.e. a list of processes that are applied to the selected files one after the other). The first button in the toolbar shows the list of processes that are currently available. If you click on a menu, it's added to the list.
Some menus appear in grey (example: Sources > Spatial smoothing). This means that they are not meant to be applied to the type of data that you have as input, or at the end of the current pipeline. The "spatial smoothing" process may only be run on source files.
- When you select a process, a list of options specific to this process is shown in the window.
To delete a process: select it and press the Delete key, or click the big cross in the toolbar.
- With the "up arrow" and "down arrow" buttons in the toolbar, you can move up/down a process in the pipeline.
- Now add the following processes, and set their options:
Pre-process > Band-pass filter: 2Hz - 30Hz
- In some processes, you can specify the type(s) of sensors on which you want to apply the process. This way you can for instance apply different filters on the EEG and the MEG, if you have both in the same files.
Extract > Extract time: 40.00ms - 49.60ms, overwrite input file
- This will extract from each file a small time window around the main response peak.
- Selecting the overwrite option will replace the previous file (bandpass) with the output of this process (bandpass+extract). This option is usually unselected for the first process in the list, then selected automatically for the following ones.
Average > Average time: Overwrite input file
Compute the average over this small time window.
Save your pipeline: Click on the last button in the toolbar > Save > New... > Type "process_avg45".
Saving/exporting a pipeline
The last button in the toolbar offers a list of menus to save, load and export the pipelines.
Load: List of processes that are saved in the user preferences
Load from file: Import a pipeline from a pipeline_...mat file (previously saved with the menu "Save as Matlab matrix")
Save: Save the pipeline in the user preferences, so you can access it quickly later.
Save as Matlab matrix: Exports the pipeline as a Matlab structure in a .mat file. Allows different users to exchange their analysis pipelines (or a single user between different computers).
Generate .m script: This option generates a human-readable Matlab script that can be re-used for other purposes or modified.
Delete: Remove a pipeline that is saved in the user preferences.
Reset options: Brainstorm automatically saves the options of all the processes for each user. This menu removes all the saved options and sets them back to their default values.
Here is the Matlab script that is automatically generated for this pipeline.
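The listing below is a sketch of what this generated script looks like, written against the bst_process / bst_report calls mentioned in this tutorial; the input file paths are placeholders, and the process and option names (process_bandpass, process_extract_time, process_average_time) are assumptions that may differ slightly with your Brainstorm version.

    % Input files (placeholder paths, relative to the protocol folder)
    sFiles = {...
        'Subject01/Left/data_ERF.mat', ...
        'Subject01/Right/data_ERF.mat'};

    % Start a new report (logging of the execution)
    bst_report('Start', sFiles);

    % Process: Band-pass filter: 2Hz - 30Hz
    sFiles = bst_process('CallProcess', 'process_bandpass', sFiles, [], ...
        'highpass', 2, ...
        'lowpass', 30, ...
        'sensortypes', 'MEG');

    % Process: Extract time: 40.00ms - 49.60ms, overwrite input file
    sFiles = bst_process('CallProcess', 'process_extract_time', sFiles, [], ...
        'timewindow', [0.0400, 0.0496], ...
        'overwrite', 1);

    % Process: Average time, overwrite input file
    sFiles = bst_process('CallProcess', 'process_average_time', sFiles, [], ...
        'overwrite', 1);

    % Save the report and open the report viewer
    ReportFile = bst_report('Save', sFiles);
    bst_report('Open', ReportFile);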
Click on [Run] in the pipeline window. After a few seconds, you will see two new files in the database, and the "Report viewer" window.
Report viewer
Each time the pipeline editor is used to run a list of processes, a report is generated and saved in the user home folder (/home/username/reports/). The report viewer shows, as an HTML page, some of the information saved in this report structure: the date and duration of execution, the list of processes, the input and output files. It also reports all the warnings and errors that happened during the execution.
The report viewer does not necessarily appear at the end of the last process: it is shown only when more than one process was executed, or when any of the processes returned an error or a warning.
When running processes manually from a script, the calls to bst_report ('Start', 'Save', 'Open') explicitly indicate when the logging of the events should start and stop, as in the script sketch above.
You can add images to the reports for quality control using the process "File > Save snapshot".
After you close the report window, you can re-open the last report with the main menu of the Brainstorm window: File > Report viewer.
With the buttons in the toolbar, you can go back to the previous reports saved from the same protocol.
Reviewing the results
The file names have a list of tags appended to them, representing all the processes that were applied.
Quick reminder: the file history records all this information too (right-click > File > View file history), so you are free to rename these files to something shorter (example: "ERF_avg45").
- Now double-click on the processed file in the Right condition.
- You see only horizontal lines. It is not a bug: you averaged the data in time, so there is no temporal dimension anymore.
Press Ctrl+T to bring up a "2D sensor cap" view. It represents the average topography between 40ms and 50ms, filtered between 2Hz and 30Hz.
- Now have a look at the contents of the files:
- Select both new files (holding the Shift or Ctrl key)
Right-click on one of them > File > Export to Matlab > "avg45_".
It creates two new variables in your Matlab workspace, "avg45_1" and "avg45_2". Type "avg45_1" in the command window to see the fields in the structure:
- This is the exact same structure as the processed file in the Left condition. All these fields were documented in tutorial #2. Just notice the dimensions of the recordings matrix F: [182 x 2], and the Time vector: [0.0400 0.0496].
- In a general way, all the averages in time are represented in Brainstorm by two identical columns, one for the beginning and one for the end of the time window they represent (a quick check is sketched below).
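A quick way to verify this convention from the Matlab command window, using the variables exported above:

    % The two columns of F are identical: they mark the start and the end
    % of the time window that the average represents
    size(avg45_1.F)                           % -> [182 2]
    avg45_1.Time                              % -> [0.0400 0.0496]
    isequal(avg45_1.F(:,1), avg45_1.F(:,2))   % -> 1 (true)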
- Delete the two processed files, we will not need them anymore.
Another example: Z-score and scouts
- Clear the files list, drag and drop "Subject01".
- Select the "Sources" button of the Process1 panel, it should indicate two files. Click on Run.
- Create the following pipeline:
Standardize > Compute Z-score: baseline = [-50ms,-1ms]
Extract > Scouts time series: select both scouts, and select "Concatenate output in one matrix"
- Run the pipeline. It creates three new files in the database:
Let's have a detailed look at each of these files:
Left / ERF / MN | zscore: Output of the Z-score process for the Left sources
Double-click on it to display it. Display the original sources at the same time for comparison.
- The Z-score operation subtracts the average of the baseline and divides by its standard deviation, separately for each source signal. The goal is to give less importance to the sources that have a strong variability in the baseline, because they are considered as noise, and this way to increase the visibility of the significant effects (a conceptual sketch is given after this list).
- The maps look relatively similar in terms of localization, except that for the Z-score, we do not observe any pattern of activity in the poles of the temporal lobes. The variance in these areas was high during the baseline, they have been "removed" by the Z-score process.
- Note that the units changed. The values of the Z-score are without units (the minimum norm values were divided by their standard deviation)
Right / ERF / MN | zscore: Output of the Z-score process for the Right sources. Same thing.
Scouts: LeftSS RightSS [2 files]: Scout time series for the two Z-score files
- This file was saved in a new node in the database: "Intra-subject". This is where the results of processes that take as input files from different conditions within the same subject are stored.
- It contains the concatenated output of 2 scouts x 2 conditions, in one single matrix. If we had not selected the "Concatenate" option, two files would have been saved: one in condition Left, one in condition Right.
- This is a type of file that has not been introduced yet in these tutorials. It is simply called "matrix", and may contain any type of data. Because we don't know exactly what they contain, the operations that can be performed on them are very limited.
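As a conceptual sketch only (not Brainstorm's actual code), the Z-score standardization applied above can be written as follows, assuming a sources matrix F [nSources x nTime], its time vector Time in seconds, and the [-50ms,-1ms] baseline:

    % Conceptual Z-score: standardize each source by its baseline statistics
    iBaseline = (Time >= -0.050) & (Time <= -0.001);  % baseline samples
    mu    = mean(F(:,iBaseline), 2);                  % baseline mean, per source
    sigma = std(F(:,iBaseline), 0, 2);                % baseline standard deviation
    Fz    = bsxfun(@rdivide, bsxfun(@minus, F, mu), sigma);  % z-scored sources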
Last thing to do before going to the next tutorial: delete these three files.
Next
Statistics: Process two sets of data.