= Tutorial 28: Scripting =
'''[TUTORIAL UNDER DEVELOPMENT: NOT READY FOR PUBLIC USE] '''

''Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet''

The previous tutorials explained how to use Brainstorm in an interactive way to process one subject with two acquisition runs. In the context of a typical neuroimaging study, you may have tens or hundreds of subjects to process in the same way, and it is unrealistic to do everything manually. Some parts of the analysis can be processed in batches with no direct supervision, while others require more attention. This tutorial introduces tools and tricks that will help you assemble an efficient analysis pipeline.

Requirements: You need a license for the Matlab environment in order to use these tools and execute custom scripts. If you are running the compiled version of Brainstorm with the MCR library, the only custom code you can run is through the menu File > Matlab console and the process "Run Matlab command".

== Starting a new script ==
The easiest way to get started with a new Brainstorm script is to use the script generator, already introduced in the tutorial [[http://neuroimage.usc.edu/brainstorm/Tutorials/PipelineEditor#Saving_a_pipeline|Select files and run processes]]. Select some files in the Process1 or Process2 tabs, select a list of processes, and use the menu '''Generate .m script'''. The example below should work on the protocol "TutorialIntroduction" created during the introduction tutorials.

 * In the Process1 tab, leave the selection box empty and click on [Run]. Instead of selecting the files from the Brainstorm interface, we will select them directly from the database using a script.
 * Select process '''File > Select files: Recordings''': <<BR>>Subject='''Subject01''', Condition='''[Empty]''', File comment='''Avg: deviant''' (the space is important). <<BR>>This will select the averages of the deviant condition for both runs (total of 2 files).

 [SCREEN CAPTURE]
 * Add process '''Pre-process > Band-pass filter''': Lower cutoff='''0Hz''', Upper cutoff='''30Hz''', Mirror. <<BR>>Add process '''File > Save snapshot''': Recordings time series, Sensor type=MEG. <<BR>>This will apply a low-pass filter at 30Hz and save a screen capture of the filtered signals in the report.

 [SCREEN CAPTURE]
 * Do not run the pipeline; instead, select the menu '''Generate .m script'''. It saves a new .m file and opens it in the Matlab editor. Close the pipeline editor window and look at the script.

 [SCREEN CAPTURE x2]
=== Script header ===
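The generated script starts with a header that defines the input files and opens a new report with bst_report. A minimal sketch of what this block typically looks like for the example above (the values come from this tutorial's protocol; your generated script may differ slightly):

{{{
% Header of the generated script (sketch): define the inputs and start a new report
% The values below correspond to the example above; adapt them to your own database.
sFiles = [];                     % Input files: empty here, they are selected by the first process
SubjectNames = {'Subject01'};    % Subject names referenced by the processes below

% Start a new report: all the messages and screen captures produced by the
% processes will be saved in it
bst_report('Start', sFiles);
}}}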
=== Script body ===
You will find one block per process you selected. They all have the same syntax:

output_files = '''bst_process'''('CallProcess', process_name, input_files_A, input_files_B, options_list);

 * '''process_name''': String identifying the process function to execute. To find the path to the process function from the pipeline editor, hover your mouse over the selected process, as illustrated in [[http://neuroimage.usc.edu/brainstorm/Tutorials/PipelineEditor#Saving_a_pipeline|this tutorial]].
 * '''input_files_A''': List of input files in Process1, or FilesA in Process2. It can be a cell array of file names (full path, or relative path from the protocol folder), or an array of structures describing the files in the database (returned by a previous call to bst_process).
 * '''input_files_B''': Empty for Process1, or FilesB in Process2. Cell array of strings or array of structures.
 * '''options_list''': Pairs of (option_name, option_value), one pair for each option of the process.
 * '''output_files''': Array of structures describing the files in output of the process. If the process created new files, this variable contains the new files. If the process did not create new files, or modified existing files, it usually contains the same files as the input list.

{{{
% Process: Select data files in: Subject01/*/Avg: deviant
sFiles = bst_process('CallProcess', 'process_select_files_data', sFiles, [], ...
    'subjectname', SubjectNames{1}, ...
    'condition', '', ...
    'tag', 'Avg: deviant', ...
    'includebad', 0, ...
    'includeintra', 0, ...
    'includecommon', 0);
% Process: Low-pass:30Hz
sFiles = bst_process('CallProcess', 'process_bandpass', sFiles, [], ...
    'highpass', 0, ...
    'lowpass', 30, ...
    'mirror', 1, ...
    'sensortypes', 'MEG, EEG', ...
    'overwrite', 0);
% Process: Snapshot: Recordings time series
sFiles = bst_process('CallProcess', 'process_snapshot', sFiles, [], ...
    'target', 5, ...  % Recordings time series
    'modality', 1, ...  % MEG (All)
    'orient', 4, ...  % bottom
    'time', 0.11, ...
    'contact_time', [0, 0.1], ...
    'contact_nimage', 12, ...
    'threshold', 20, ...
    'Comment', 'Run');
}}}

You can edit this section manually, for instance to change option values or to modify the inputs and outputs of each process. The options are easy to read and understand.

=== Script footer ===
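The footer of the generated script saves the report and opens it in the report viewer. A minimal sketch of what this block typically looks like (the export folder in the commented line is an arbitrary example):

{{{
% Footer of the generated script (sketch): save and display the execution report
% sFiles contains the files returned by the last process of the pipeline
ReportFile = bst_report('Save', sFiles);
bst_report('Open', ReportFile);

% To save the report as HTML instead of opening it in the report viewer:
% bst_report('Export', ReportFile, '/path/to/report_folder');
}}}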
== Starting Brainstorm ==
A few operations you may need at the top of a script (see the sketch below):
 * Starting Brainstorm, with or without the graphical interface (gui / nogui)
 * Selecting the current protocol
 * Deleting an existing protocol
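A sketch of these operations, following the pattern used in the scripts distributed with Brainstorm. The protocol name is the one used in this tutorial; note that deleting a protocol removes its database folder, so use it only in scripts that rebuild everything from scratch:

{{{
% Start Brainstorm without its graphical interface (use "brainstorm" to get the full GUI)
if ~brainstorm('status')
    brainstorm nogui
end

ProtocolName = 'TutorialIntroduction';

% Delete the existing protocol with this name, then create it again from scratch
% (WARNING: this deletes the protocol folder and everything it contains)
gui_brainstorm('DeleteProtocol', ProtocolName);
gui_brainstorm('CreateProtocol', ProtocolName, 0, 0);   % 0,0 = no default anatomy, no default channel file

% To work with an existing protocol instead, select it as the current protocol:
% iProtocol = bst_get('Protocol', ProtocolName);
% gui_brainstorm('SetCurrentProtocol', iProtocol);
}}}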
== Selecting files ==
Different ways to define the inputs of a pipeline (see the sketch below):
 * Passing the inputs and outputs explicitly (lists of file names or structures)
 * Using the selection processes (File > Select files)
 * Adding tags to the files, to help with the file selection later
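A sketch combining a selection process with a tag, using the files of this tutorial's protocol. The tag value "analysis_01" is an arbitrary example, and the "Add tag" process may offer more options than shown here: generate the call once with the pipeline editor to check the exact option names.

{{{
% Select the imported recordings of Subject01 with a comment containing "Avg: deviant"
sAvg = bst_process('CallProcess', 'process_select_files_data', [], [], ...
    'subjectname', 'Subject01', ...
    'condition',   '', ...              % Empty = all the conditions
    'tag',         'Avg: deviant');

% Add a tag to these files, so that they are easy to select again later
% ("analysis_01" is an arbitrary example)
sAvg = bst_process('CallProcess', 'process_add_tag', sAvg, [], ...
    'tag', 'analysis_01');
}}}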
== File manipulation ==
 * Modify a structure manually: Export to Matlab / Import from Matlab
 * File name manipulation: file_short, file_fullpath, in_bst_*... (illustrated below)
 * Documentation of all the file structures: refer to the corresponding introduction tutorials
 * Select files from the database (with bst_get and the selection processes)
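A sketch of a few of these functions, applied to the first file returned by a previous call to bst_process. The functions are part of the Brainstorm distribution; the modification made to the Comment field is just an example:

{{{
% Relative file name (as stored in the database) and full path on disk
RelativeFile = sFiles(1).FileName;
FullFile     = file_fullpath(RelativeFile);   % Absolute path of the file
ShortFile    = file_short(FullFile);          % Back to the path relative to the protocol folder

% Read the contents of a recordings file
% (other file types have their own readers: in_bst_results, in_bst_timefreq, ...)
DataMat = in_bst_data(RelativeFile);

% Modify a field and save the structure back to the same file
DataMat.Comment = [DataMat.Comment, ' | edited'];
bst_save(FullFile, DataMat, 'v6');

% Query the database
ProtocolInfo = bst_get('ProtocolInfo');               % Paths and name of the current protocol
[sStudy, iStudy] = bst_get('AnyFile', RelativeFile);  % Study that contains this file
% db_reload_studies(iStudy);                          % Reload the study to update the database explorer
}}}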
== Loop over subjects and runs ==
Creating loops is not supported by the script generator yet, but it is relatively easy to add one to a generated script without knowing much about Matlab programming (see the sketch below):

 1. Fill the cell array SubjectNames with all your subject names, with the same dimensions as the list of input raw files (sFiles).
 1. Add a "for" loop that includes all the bst_process() calls (leave the bst_report() calls and the input definition outside of the loop).
 1. Inside the loop, replace SubjectNames with SubjectNames{i} and sFiles with sFiles(i).
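A sketch of these three steps applied to the script generated at the beginning of this tutorial. Here the input files are selected by subject name inside the loop, so only SubjectNames needs to be indexed; the process calls are the ones from the generated script, shortened to their main options:

{{{
% 1) One entry per subject (example list: use your own subject names)
SubjectNames = {'Subject01', 'Subject02'};

% Start a new report (kept outside of the loop)
bst_report('Start', []);

% 2) Loop over the subjects
for i = 1:length(SubjectNames)
    % 3) Inside the loop, use SubjectNames{i} instead of SubjectNames{1}
    % Process: Select data files in: SubjectNames{i}/*/Avg: deviant
    sFiles = bst_process('CallProcess', 'process_select_files_data', [], [], ...
        'subjectname', SubjectNames{i}, ...
        'condition',   '', ...
        'tag',         'Avg: deviant');
    % Process: Low-pass:30Hz
    sFiles = bst_process('CallProcess', 'process_bandpass', sFiles, [], ...
        'highpass',    0, ...
        'lowpass',     30, ...
        'mirror',      1, ...
        'sensortypes', 'MEG, EEG', ...
        'overwrite',   0);
    % ... other processes of the pipeline ...
end

% Save and display the report (kept outside of the loop)
ReportFile = bst_report('Save', []);
bst_report('Open', ReportFile);
}}}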
== How to process many subjects ==
This section proposes a standard workflow for processing a full group study with Brainstorm. It contains the same analysis steps as the introduction tutorials, but separates what can be done automatically from what should be done manually. This workflow can be adapted to most ERP studies (stimulus-based).
 * '''Prototype''': Start by processing one or two subjects completely '''interactively''' (exactly like in the introduction tutorials). Use the few pilot subjects that you have for your study to prototype the analysis pipeline and check manually all the intermediate stages. Take notes of what you are doing along the way, so that you can later write a script that reproduces the same operations.
 * '''Anatomical fiducials''': Set NAS/LPA/RPA and compute the MNI transformation for each subject.
  * '''Segmentation''': Run FreeSurfer/BrainSuite to get surfaces and atlases for all the subjects.
  * '''File > Batch MRI fiducials''': This menu prompts for the selection of the fiducials for all the subjects and saves a file __fiducials.m__ in each segmentation folder. You will not have to redo this even if you have to start your analysis over from the beginning.
  * '''Script''': Write a loop that calls the process "Import anatomy folder" for all the subjects (see the sketch after this workflow).
  * '''Alternatives''': Create and import the subjects one by one and set the fiducials at import time. Or use the default anatomy for all the subjects (or use [[Tutorials/TutWarping|warped templates]]).
 * '''Script #1''': Pre-processing: Loop on the subjects and the acquisition runs.
  * '''Create link to raw files''': Link the subject and noise recordings to the database.
  * '''Event markers''': Read and group triggers from digital and analog channels, fix stimulation delays.
  * '''Evaluation''': Power spectrum density of the recordings, to evaluate their quality.
  * '''Pre-processing''': Notch filter, sinusoid removal, band-pass filter.
  * '''Evaluation''': Power spectrum density of the recordings, to make sure the filters worked well.
  * '''Cleanup''': Delete the links to the original files (the filtered ones are copied in the database).
  * '''Detect artifacts''': Detect heartbeats, Detect eye blinks, Remove simultaneous.
  * '''Compute SSP''': Heartbeats, Blinks (this selects the first component of each decomposition).
  * '''Compute ICA''': If you have some artifacts you would like to remove with ICA (no default selection).
  * '''Screenshots''': Check the MRI/sensors registration, PSD before and after corrections, SSP.
  * '''Export the report''': One report per subject, or one report for all the subjects, saved in HTML.
 * '''Manual inspection #1''':
  * '''Check the reports''': Information messages (number of events, errors and warnings) and screen captures (registration problems, obvious noisy channels, incorrect SSP topographies).
  * '''Mark bad channels''': Open the recordings, select the channels and mark them as bad. Or use the process "Set bad channels" to mark the same bad channels in multiple files.
  * '''Fix the SSP/ICA''': For the suspicious runs: open the file, adjust the list of blink and cardiac events, remove and recompute the SSP decompositions, manually select the components.
  * '''Detect other artifacts''': Run the process on all the runs of all the subjects at once (select all the files in Process1 and run the process, or generate the equivalent script).
  * '''Mark bad segments''': Review the artifacts detected in 1-7Hz and 40-240Hz, keep only the ones you really want to remove, then mark the event categories as bad. Review quickly the rest of the file and check that there are no other important artifacts.
  * '''Additional SSP''': If you find one type of artifact that repeats (typically saccades and SQUID jumps), you can create additional SSP projectors, either with the process "SSP: Generic" or directly from a topography figure (right-click on the figure > Snapshot > Use as SSP projector).
 * '''Script #2''': Subject-level analysis: Epoching, averaging, sources, time-frequency.
  * '''Importing''': Process "Import MEG/EEG: Events" and "Pre-process > Remove DC offset".
  * '''Averaging''': Average trials by run, average runs by subject (registration problem in MEG).
  * '''Noise covariance''': Compute from empty room or resting recordings, copy to the other folders.
  * '''Head model''': Compute for each run, or compute once and copy if the runs are co-registered.
  * '''Sources''': Compute for each run, average across runs and subjects in source space for MEG.
  * '''Time-frequency''': Computation with Hilbert transform or Morlet wavelets, then normalize.
  * '''Screenshots''': Check the quality of all the averages (time series, topographies, sources).
  * '''Export the report''': One report per subject, or one report for all the subjects, saved in HTML.
 * '''Manual inspection #2''':
  * '''Check the reports''': Check the number of epochs imported and averaged in each condition, check the screen captures of the averages (all the primary responses should be clearly visible).
  * '''Regions of interest''': If not using predefined regions from an atlas, define the scouts on the anatomy of each subject (or on the template, and then project them to the subjects).
 * '''Script #3''': Group analysis, ROI-based analysis, etc.
  * '''Averaging''': Group averages for the sensor data, the sources and the time-frequency maps.
  * '''Statistics''': Contrast between conditions or groups of subjects.
  * '''Regions of interest''': Any operation that involves scouts.
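As an example of the '''Script''' item above (anatomical fiducials step), here is a hypothetical sketch of a loop importing the FreeSurfer output of every subject. The subject names and folders are placeholders, and the option names of the "Import anatomy folder" process may differ between Brainstorm versions: generate the exact call once with the pipeline editor before using it in a loop.

{{{
% Hypothetical example: segmentation folders produced by FreeSurfer for each subject
SubjectNames = {'Subject01', 'Subject02'};
AnatFolders  = {'/data/freesurfer/Subject01', '/data/freesurfer/Subject02'};

for i = 1:length(SubjectNames)
    % Process: Import anatomy folder (the fiducials saved by "Batch MRI fiducials" are reused)
    bst_process('CallProcess', 'process_import_anatomy', [], [], ...
        'subjectname', SubjectNames{i}, ...
        'mrifile',     {AnatFolders{i}, 'FreeSurfer'}, ...
        'nvertices',   15000);
end
}}}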
== Final script ==
The following script from the Brainstorm distribution reproduces the introduction tutorials ("Get started"): '''brainstorm3/toolbox/script/tutorial_introduction.m'''

For an example of a script illustrating how to create loops, look at the tutorial [[Tutorials/VisualSingle|MEG visual: single subject]]: '''brainstorm3/toolbox/script/tutorial_visual_single.m'''
== Report viewer ==
Click on Run to start the script. As this pipeline takes screen captures, do not use your computer for anything else while it is running: if another window covers the Brainstorm figures, the snapshots will not capture the right images. At the end, the report viewer opens to show the status of all the processes, the information messages, the list of input and output files, and the screen captures. The report is saved in your home folder ($HOME/.brainstorm/reports). If you close this window, you can get it back with the menu File > Report viewer.