Tutorial 1: Create a new protocol

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

How to read these tutorials

The goal of these introduction tutorials is to guide you through most of the features of the software. All the pages use the same example dataset. The results of one section are usually needed in the following one, so please read these pages in order.

Advanced

Some pages may contain more details than your level of interest or expertise requires. The sections marked as [Advanced] are not required to follow the tutorials to the end. You can skip them the first time you go through the documentation and come back to the theory later if you need to.

Please first follow these tutorials with the data we provide: this way you will be able to focus on learning how to use the software, starting with data that is easy to analyze. After going through all the tutorials, you should be comfortable enough with the software to start analyzing your own recordings.

You will observe minor differences between the screen captures presented in these pages and what you obtain on your computer: different colormaps, different values, etc. The software is constantly improved, so some results have changed since we produced the illustrations. When the changes are minor and the interpretation of the figures remains the same, we do not necessarily update the images in the tutorial.

If you are only interested in EEG or intracranial recordings, do not assume that a MEG-based tutorial is not suited to you: most practical aspects of the data manipulation are very similar in EEG and MEG. Start by reading these introduction tutorials with the MEG example dataset provided; then, once you are familiar with the software, go through the tutorial "EEG and Epilepsy" for details about the processing steps that are specific to EEG, or read one of the SEEG/ECOG tutorials available in the section "Other analysis scenarios".

Presentation of the experiment

All the introduction tutorials are based on a simple auditory oddball experiment:

  • One subject, two acquisition runs of 6 minutes each.
  • Subject stimulated binaurally with intra-aural earphones.
  • Each run contains 200 regular beeps and 40 easy deviant beeps.
  • Recordings with a CTF MEG system with 275 axial gradiometers.
  • Anatomy of the subject: 1.5T MRI, processed with FreeSurfer 5.3.

  • More details will be given about this dataset along the process.
  • Full dataset description available on this page: Introduction dataset.

Brainstorm folders

Brainstorm needs different directories to work properly. If you put everything in the same folder, you will run into many problems. Make sure you understand this organization before creating a new database.


1. Program directory: "brainstorm3"

  • Contains all the program files: Matlab scripts, compiled binaries, templates, etc.
  • There is no user data in this folder.
  • You can delete it and replace it with a newer version at any time: your data will be safe.
  • Recommended location:
    • Windows: Documents\brainstorm3

    • Linux: /home/username/brainstorm3

    • MacOS: Documents/brainstorm3


2. Database directory: "brainstorm_db"

  • Created by user.
  • Contains all the Brainstorm database files.
  • Managed by the application: do not move, delete or add files by yourself.
  • Recommended location:
    • Windows: Documents\brainstorm_db

    • Linux: /home/username/brainstorm_db

    • MacOS: Documents/brainstorm_db


3. User directory: ".brainstorm"

  • Created at Brainstorm startup. Typical location:

    • Windows: C:\Users\username\.brainstorm

    • Linux: /home/username/.brainstorm

    • MacOS: /Users/username/.brainstorm

  • Contains:
    • brainstorm.mat: Brainstorm user preferences.

    • defaults/: Anatomy templates downloaded by Brainstorm.

    • mex/: Some mex files that have to be recompiled.

    • plugins/: Plugins downloaded by Brainstorm (see tutorial Plugins).

    • process/: Personal process folder (see tutorial How to write your own process).

    • reports/: Execution reports (see tutorial Run processes).

    • tmp/: Temporary folder, emptied with user confirmation every time Brainstorm is started.
      You may have to change the location of the temporary folder if you have limited storage or a limited quota in your home folder (see below).

Be sure that the paths to the Program, Database, and User directories do not contain special characters. See related forum post.


4. Original data files:

  • Recordings you acquired and want to process with Brainstorm.
  • Put them wherever you want, but not in any of the previous folders.

Starting Brainstorm for the first time

  1. If you haven't read the installation instructions, do it now: Installation.

  2. Start Brainstorm from Matlab or with the compiled executable.

    BST> Starting Brainstorm:
    BST> =================================
    BST> Version: 28-Jan-2015
    BST> Checking internet connectivity... ok
    BST> Compiling main interface files...
    BST> Emptying temporary directory...
    BST> Deleting old process reports...
    BST> Loading configuration file...
    BST> Initializing user interface...
    BST> Starting OpenGL engine... hardware
    BST> Reading plugins folder...
    BST> Loading current protocol...
    BST> =================================
  3. Read and accept the license file.
  4. Select your Brainstorm database directory (brainstorm_db).
  5. If you do something wrong and don't know how to go back, you can always re-initialize Brainstorm by typing "brainstorm reset" in the Matlab command window, or by clicking on [Reset] in the software preferences (menu File > Edit preferences).
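For reference, here is what these operations look like from the Matlab command window (the install path below is only an example, adjust it to your own installation):

    % Go to the program folder and start the Brainstorm interface
    cd('/home/username/brainstorm3');   % example location, adjust to your own
    brainstorm
    % Re-initialize Brainstorm if needed (same as the [Reset] button in File > Edit preferences)
    brainstorm reset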

Main interface window

The Brainstorm window described below is designed to remain on one side of the screen. All the space of the desktop that is not covered by this window will be used for opening other figures.

Do not try to maximize this window, or the automatic management of the data figures might not work correctly. Keep it on one side of your screen, just large enough so you can read the file names in the database explorer.

main_window.gif

The text is too small

If you have a high-resolution screen, the text and icons in the Brainstorm window may not scale properly, making the interface impossible to use. Select the menu File > Edit preferences: the slider at the bottom of the options window lets you increase the scaling ratio of the Brainstorm interface. If it doesn't help, try changing the scaling options in your operating system preferences.

preferences.gif

Database structure

Brainstorm allows you to organize your recordings and analysis with three levels of definition:

  • Protocol

    • Group of datasets that have to be processed or displayed together.
    • A protocol can include one or several subjects.
    • Some people may prefer to call this an experiment or a study.

    • You can only open one protocol at a time.
    • Your Brainstorm database is a collection of protocols.
  • Subject

    • A person who participated in a given protocol.
    • A subject contains two categories of information: anatomy and functional data.
    • Anatomy: Includes at least an MRI volume and some surfaces extracted from the MRI.

    • Functional data: Everything that is related to the MEG/EEG acquisition.

    • For each subject, it is possible to use either the actual MRI of the person or one of the anatomy templates available in Brainstorm.
  • Sub-folders

    • For each subject, the functional files can be organized in different sub-folders.
    • These folders can represent different recording sessions (aka acquisition runs) or different experimental conditions.
    • The current structure of the database does not allow more than one level of sub-folders for each subject. It is not possible to organize the files by session AND by condition.

      db.gif

Database files

  • The database folder "brainstorm_db" is managed completely from the graphical user interface (GUI).
  • All the files in the database have to be imported through the GUI. Do not copy files into the brainstorm_db folder yourself: it won't work.
  • Everything in this folder is stored in Matlab .mat format, with the following architecture:
    • Anatomy data: brainstorm_db/protocol_name/anat/subject_name

    • Functional data: brainstorm_db/protocol_name/data/subject_name/subfolder/

  • Most of the files you see in the database explorer in Brainstorm correspond to files on the hard drive, but there is no one-to-one correspondence. There is extra information stored in each directory, to save properties, comments, default data, links between different items, etc. This is one of the reasons why you should not manipulate the files in the Brainstorm database directory directly.
  • The structure of the database is saved in the user preferences, so when you start the program or change protocol, there is no need to read all the files on the hard drive again.
  • If Brainstorm or Matlab crashes before the database structure is correctly saved, the files that are displayed in the Brainstorm database explorer may differ from what is actually on the disk. When this happens, you can force Brainstorm to rebuild the structure from the files on the hard drive: right-click on a folder > Reload.

Create your first protocol

  1. Menu File > New protocol.

    menuFile.gif

  2. Edit the protocol name and enter: "TutorialIntroduction".
    This will automatically update the anatomy and datasets paths. Do not edit these paths manually, unless you work with a non-standard database organization and know exactly what you are doing.

  3. Default properties for the subjects: These are the default settings that are used when creating new subjects. It is then possible to override these settings for each subject individually.
    • Default anatomy: (MRI and surfaces)

      • No, use individual anatomy:
        Select when you have individual MRI scans for all the participants of your study.

      • Yes, use default anatomy:
        Select when you do not have individual scans for the participants, and you would like to use one of the anatomy templates available in Brainstorm.

    • Default channel file: (Sensors names and positions)

      • No, use one channel file per acquisition run: Default for all studies
        Different head positions: Select this if you may have different head positions for one subject. This is usually not the case in EEG, where the electrodes stay in place for the whole experiment. In MEG, this is a common situation: one recording session is split into multiple acquisition runs, and the position of the subject's head in the MEG might differ between two runs.
        Different number of channels: Another use case is when you have multiple recordings for the same subject that do not have the same number of channels. You cannot share the channel file if the list of channels is not strictly the same for all the files.
        Different cleaning procedures: If you are cleaning artifacts from each acquisition run separately using SSP or ICA projectors, you cannot share the channel file between them (the projectors are saved in the channel file).

      • Yes, use one channel file per subject: Use with caution
        This setting can be appropriate for EEG: the electrodes are in the same position for all the files recorded on one subject, and the number of channels is the same for all the files. However, to use this option, you should either not use SSP/ICA projectors on the recordings, or compute them for all the files at once. This may lead to confusion and manipulation errors, which is why we do not recommend this setting.

      • Yes, use only one global channel file: Not recommended
        This is never a recommended setting. It could be used in the case of an EEG study where you use only standard EEG positions on a standard anatomy, but only if you are not doing any advanced source reconstruction. If you share the position of the electrodes between all the subjects, it will also share the source models, which depend on the quality of the recordings for each subject. This is difficult to grasp at this point; it will make more sense later in the tutorials.

  4. In the context of this study, we are going to use the following settings:

    • No, use individual anatomy: Because we have access to a T1 MRI scan of the subject.

    • No, use one channel file per condition/run: The typical MEG setup.

  5. Click on [Create].

    createNewProtocol.gif

Protocol exploration

The protocol is created and you can now see it in the database explorer. It is represented by the top-most node in the tree.

  • You can switch between anatomy and functional data with the three buttons just above the database explorer. Read the tooltips of the buttons to see which one does what.
  • In the Anatomy view, there is a Default anatomy node. It contains the ICBM152 anatomy, distributed by the Montreal Neurological Institute (MNI), which is one of the template anatomy folders that are distributed with Brainstorm.

  • The Default anatomy node contains the MRI and the surfaces that are used for the subjects without an individual anatomy, or for registering multiple subjects to the same anatomy for group analysis.

  • There are no subjects in the database yet, so the Functional data views are empty.

  • Everything you can do with an object in the database explorer (anatomy files, subjects, protocol) is accessible by right-clicking on it.

  • treeNewProtocol.gif

Set up a backup

As with any computer work, your Brainstorm database is always at risk. Software bugs, computer or network crashes and manipulation errors can cause the loss of months of data curation and computation. If the database structure gets corrupted, or if you accidentally delete or modify some files, you might not be able to get your data back. There is no undo button!

You have created your database; now take some time to make it safe. If you are not familiar with backup systems, watch some online tutorials explaining how to set up an automatic daily or weekly backup of your sensitive data. It might seem tedious and useless now, but it could save you weeks in the future.

Changing the temporary folder

Some processes need to create temporary files on the hard drive. For example, when epoching MEG/EEG recordings, Brainstorm first creates a temporary folder "import_yymmdd_hhmmss", stores all the epochs in it, then moves them to the database when the epoching process is completed. The name of the temporary folder indicates its creation time (year/month/day_hour_minute_second). At the end of the process, all the temporary files should be deleted automatically.
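For illustration only (this is not Brainstorm's actual code), the naming convention above corresponds to a Matlab timestamp such as:

    % Example of a folder name following the import_yymmdd_hhmmss convention
    tmpName = ['import_' datestr(now, 'yymmdd_HHMMSS')];
    % => e.g. 'import_240115_143205' for January 15, 2024 at 14:32:05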

The default folder where Brainstorm stores its temporary files is located in the user folder ($HOME/.brainstorm/tmp/), so before importing recordings or calculating large models, you have to make sure you have enough storage space available.

If you work on a centralized network where all the computers are sharing the same resources, the system admins may impose limited disk quotas to all users and encourage them to use local hard drives instead of the limited and shared user folder. In such context, Brainstorm may quickly fill up your limited quota and at some point block your user account.

If the amount of storage space you have for your user folder is limited (less than 10Gb), you may have to change the temporary folder used by Brainstorm. Select the menu File > Edit preferences and set the temporary directory to a folder that is local to your computer, in which you won't suffer from any storage limitation.

If a process crashes or is killed before it deletes its temporary files, they remain in the temporary folder until explicitly deleted. When starting Brainstorm, you will always be offered the option to delete the leftover temporary files: always agree, unless they correspond to files generated by another session running simultaneously. Alternatively, you can delete these temporary files by clicking on the Empty button in the same preferences window. More information in the Scripting tutorial.

preferences.gif

Summary

  • Different folders for:
    • the program (brainstorm3).

    • the database (brainstorm_db).

    • your original recordings.

  • Never modify the contents of the database folder by yourself.
  • Do not put the original recordings in any of the Brainstorm folders, import them with the interface.
  • Do not try to maximize the Brainstorm window: keep it small on one side of your screen.

Roadmap

The workflow described in these introduction tutorials includes the following steps:

  • Importing the anatomy of the subjects, the definition of the sensors and the recordings.
  • Pre-processing, cleaning, epoching and averaging the EEG/MEG recordings.
  • Estimating sources from the imported recordings.
  • Computing measures from the brain signals of interest in sensor space or source space.

workflow.gif

Advanced

Moving a database

If you are running out of disk space or need to share your database with someone else, you may need to copy or move your protocols to a different folder or drive. Each protocol is handled independently by Brainstorm; therefore, in order to move the entire database folder (brainstorm_db), you need to repeat the operations below for each protocol in your database.

Copy the raw files

The original continuous files are not saved in the Brainstorm database. The "links to raw files" depend on static paths on your local computer and cannot be moved easily to a new computer. You can copy the original files inside the database before moving it to a different computer/hard drive using the menu: File > Export protocol > Copy raw files to database. This makes local copies in .bst format of all your original files. The resulting protocol is larger but portable. This can also be done file by file: right-click > File > Copy to database.

Export a protocol

The easiest option to share a protocol with a collaborator is to export it as a zip file.

  • Export: Use the menu File > Export protocol > Export as zip file.
    Avoid using spaces and special characters in the zip file name.

  • Import: Use the menu File > Load protocol > Load from zip file.
    The name of the protocol created in the brainstorm_db folder is the name of the zip file. If there is already a protocol with this label, Brainstorm returns an error. To import the protocol under a different name, simply rename the zip file before importing it.

  • Size limitation: This solution is limited to smaller databases: creating zip files larger than a few Gb can take a lot of time or even crash. For larger databases, prefer the other options below.

Export a subject

Similar to the protocol export, but extracts only the files needed for a single subject.

  • Export: Right-click on the subject > File > Export subject.

  • Import as new protocol: Use the menu File > Load protocol > Load from zip file.

  • Import in an existing protocol: Use the menu File > Load protocol > Import subject from zip.

Move a protocol

To move a protocol to a different location:

  1. [Optional] Set up a backup of your entire brainstorm_db folder if you haven't done it yet. There will be no undo button to press if something bad happens.
  2. [Optional] Copy the raw files to the database (see above)
  3. Unload: Menu File > Delete protocol > Only detach from database.

  4. Move: Move the entire protocol folder to a different location. Remember that a protocol folder should be located in the "brainstorm_db" folder and should contain only two subfolders, "data" and "anat". Never move or copy a single subject manually.

  5. Load: Menu File > Load protocol > Load from folder > Select the new location of the protocol

  6. If you want to move the entire "brainstorm_db" folder at once, make sure you detach all the protocols in your Brainstorm installation first.

Duplicate a protocol

To duplicate a protocol in the same computer:

  • Copy: Make a full copy of the protocol to duplicate in the brainstorm_db folder, e.g. TutorialIntroduction => TutorialIntroduction_copy. Avoid using any space or special character in the new folder name.

  • Load: Menu File > Load protocol > Load from folder > Select the new protocol folder



Tutorial 2: Import the subject anatomy

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet, Chinmay Chinara

Brainstorm organizes most of its database and processing stream around handling anatomical information together with the MEG/EEG recordings, because its primary focus is estimating brain sources from MEG/EEG, which ideally requires accurate spatial modeling of the head and sensors.

If you don't have anatomical scans of your subjects, or are not interested in any spatial display, various solutions will be presented along the tutorials, starting from the last section of this page. Be patient and follow everything as instructed: you will get to the information you need.

Download

The dataset we will use for the introduction tutorials is available online.

  • Go to the Download page of this website, and download the file: sample_introduction.zip

  • Unzip it in a folder that is not in any of the Brainstorm folders (program folder or database folder).
  • It is really important that you always keep your original data files in a separate folder: the program folder can be deleted when updating the software, and the contents of the database folder are supposed to be manipulated only by the program itself.

Create a new subject

The protocol is currently empty. You need to add a new subject before you can start importing data.

  1. Switch to the anatomy view (first button just above the database explorer).
  2. Right-click on the top folder TutorialIntroduction > New subject.
    Alternatively: Use the menu File > New subject.

    create_subject.gif

  3. The window that opens lets you edit the subject name and settings. It offers again the same options for the default anatomy and channel file: if you need to, you can override for one subject the default values set at the protocol level. See the previous tutorial for help.

    create_subject_options.gif

  4. Keep all the default settings and click on [Save].

Right-click doesn't work

If the right-click doesn't work anywhere in the Brainstorm interface and you cannot see the popup menus in the database explorer, try connecting a standard external two-button mouse. Some Apple pointing devices do not interact very well with Java/Matlab.

Alternatively, try to change the configuration of your trackpad in the system preferences.

Import the anatomy

For estimating the brain sources of the MEG/EEG signals, the anatomy of the subject must include at least three files: a T1-weighted MRI volume, the envelope of the cortex and the surface of the head.

Brainstorm cannot extract the cortex envelope from the MRI: you have to run this operation with an external program of your choice. The results of the MRI segmentation obtained with the following programs can be automatically imported: FreeSurfer, BrainSuite, BrainVISA, CAT12, and CIVET. CAT12 is the only application fully interfaced with Brainstorm, and is available for download as a Brainstorm plugin. However, FreeSurfer is generally considered a reference in this domain, therefore this is the solution we decided to demonstrate in these tutorials.

The anatomical information of this study was acquired with a 1.5T MRI scanner; the subject had a marker placed on the left cheek. The MRI volume was processed with FreeSurfer 7.1, and the result of this automatic segmentation is available in the downloaded folder sample_introduction/anatomy.

  1. Make sure that you are still in the anatomy view for your protocol.
  2. Right-click on the subject folder > Import anatomy folder:

    • Set the file format: FreeSurfer + Volume atlases

    • Select the folder: sample_introduction/anatomy

    • Click on [Open]
  3. Number of vertices of the cortex surface: 15000 (default value).
    This option defines the number of points that will be used to represent the cortex envelope. It will also be the number of electric dipoles we will use to model the activity of the brain. This default value of 15000 was chosen empirically as a good balance between the spatial accuracy of the models and the computation speed. More details later in the tutorials.

    import_anat_menu.gif

  4. The MRI views should be correct (axial/coronal/sagittal); you just need to make sure that the marker on the cheek is really on the left of the MRI. Then you can proceed with the fiducial selection.

    import_anat_marker.gif

Using the MRI Viewer

To help define these fiducial points, let's start with a brief description of the MRI Viewer:

  • Navigate in the volume:

    • Click anywhere on the MRI slices to move the cursor.
    • Use the sliders below the views.
    • Use the mouse wheel to scroll through slices (after clicking on the view to select it).
    • On a MacBook trackpad, use a two-finger swipe up/down to scroll.

  • Zoom: Use the magnifying glass buttons at the bottom of the figure, or the corresponding shortcuts (keyboard [+]/[-], or [CTRL]+mouse wheel).

  • Image contrast: Click and hold the right mouse button on one image, then move up and down.

  • Select a point: Place the cursor at the spot you want and click on the corresponding [Set] button.

  • Display the head surface: Click on the button "View 3D head surface" to compute and display the head surface. Click on the surface to move the cursor in the MRI Viewer figure. When the fiducials are not defined yet, they appear floating a few centimeters away from the head.

    head_fiducials.gif

  • More information about all the coordinates displayed in this figure: CoordinateSystems

Fiducial points

Brainstorm uses a few reference points defined in the MRI to align the different files:

  • Required: Three points to define the Subject Coordinate System (SCS):

    • Nasion (NAS), Left ear (LPA), Right ear (RPA)
    • This is used to register the MEG/EEG sensors on the MRI.
  • Optional: Three additional anatomical landmarks (NCS):

    • Anterior commissure (AC), Posterior commissure (PC) and any interhemispheric point (IH).
    • Computing the MNI normalization sets these points automatically (see below), therefore setting them manually is not required.
  • For instructions on finding these points, read the following page: CoordinateSystems.

Nasion (NAS)

  • In this study, we used the real nasion position instead of the CTF coil position.

    import_anat_nas.gif
    MRI coordinates: 127, 213, 139

Left ear (LPA)

  • In this study, we used the connection points between the tragus and the helix (red dot on the CoordinateSystems page) instead of the CTF coil position or the left and right preauricular points.

    import_anat_lpa.gif
    MRI coordinates: 52, 113, 96

Right ear (RPA)

  • import_anat_rpa.gif
    MRI coordinates: 202, 113, 91

Anterior commissure (AC)

  • import_anat_ac.gif
    MRI coordinates: 127, 119, 149

Posterior commissure (PC)

  • import_anat_pc.gif
    MRI coordinates: 128, 93, 141

Inter-hemispheric point (IH)

  • This point can be anywhere in the mid-sagittal plane; these coordinates are just an example.

    import_anat_ih.gif
    MRI coordinates: 131, 114, 206

Type the coordinates

  • If you already have the coordinates of the fiducials written down somewhere, you can type or copy-paste them instead of pointing at them with the cursor. Right-click on the figure > Edit fiducials positions > MRI coordinates.

    import_anat_typefid.gif

Validation

  • Once you are done with the fiducial selection, click on the [Save] button, at the bottom-right corner of the MRI Viewer figure.
  • The automatic import of the FreeSurfer folder resumes. At the end you get many new files in the database and a 3D view of the cortex and scalp surface. Here again you can note that the marker is visible on the left cheek, as expected.

    import_anat_files.gif

  • The next tutorial will describe these files and explore the various visualization options.
  • Close all figures and clear memory: Use this button in the toolbar of the Brainstorm window to close all the open figures at once and to clear from memory all the temporary data that the program keeps for faster display.

    close_all.gif

Graphic bugs

If you do not see the cortex surface through the head surface, or if you observe any other issue with the 3D display, there might be a problem with the OpenGL drivers. You may try the following options:

  • Update the drivers for your graphics card.
  • Upgrade your version of Matlab.
  • Run the compiled version of Brainstorm (see Installation).

  • Turn off the OpenGL hardware acceleration: Menu File > Edit preferences > Software or Disabled.

  • Send a bug report to MathWorks.

For Linux users with both an integrated GPU and an NVIDIA GPU: if you experience the problems above, or slow navigation in the 3D display (usually with 2 or more surfaces), verify that you are using the NVIDIA GPU as the primary GPU. More information depending on your distribution: Ubuntu, Debian and Arch Linux.

MNI normalization

For comparing results with the literature or with other imaging modalities, the normalized MNI coordinate system is often used. To be able to get "MNI coordinates" for individual brains, an extra step of normalization is required.

To compute a transformation between the individual MRI and the ICBM152 space, you have two options; use the one of your choice:

  • In the MRI Viewer: Click on the link "Click here to compute MNI normalization".

  • In the database explorer: Right-click on the MRI > MNI normalization.

Select the first option maff8: This method is embedded in Brainstorm and does not require the installation of SPM12. However, it requires the automatic download of the file SPM12 Tissue Probability Maps. If you do not have access to internet, see the instructions on the Installation page.

It is based on an affine co-registration with the MNI ICBM152 template from the SPM software, described in the following article: Ashburner J, Friston KJ, Unified segmentation, NeuroImage 2005.

mni_norm.gif

Note that this normalization does not modify the anatomy: it just saves a transformation that enables the conversion between Brainstorm coordinates and MNI coordinates. After computing this transformation, a new line of information is available in the MRI Viewer.

  • mni_coordinates.gif

This operation also automatically sets some anatomical points (AC, PC, IH) if they are not defined yet. After the computation, make sure they are correctly positioned. You can run this computation while importing the anatomy, when the MRI viewer is displayed for the first time: this will save you the trouble of marking the AC/PC/IH points manually.

MacOS troubleshooting

Error "mexmaci64 cannot be opened because the developer cannot be verified":

Alternatives

The head surface looks bad: You can try computing another one with different properties.

No individual anatomy: If you do not have access to an individual MR scan of the subject, or if its quality is too low to be processed with FreeSurfer, you have other options.

Other options for importing the FreeSurfer anatomical segmentation:

  • Automated import: We selected the menu Import anatomy folder for a semi-manual import, in order to manually select the position of the anatomical fiducials and the number of points of the cortex surface. If you are not interested in setting the positions of the fiducials accurately, you can use the menu Import anatomy folder (auto): it computes the linear MNI normalization first, uses default fiducials defined in MNI space, and automatically uses 15000 vertices for the cortex.

  • FreeSurfer options: We selected the file format FreeSurfer + Volume atlases for importing the ASEG parcellation in the database. This slows down the import and increases the size on the hard drive. If you know you won't use it, select FreeSurfer instead. A third menu is available to also import the cortical thickness as source files in the database.



Tutorial 3: Display the anatomy

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

Anatomy folder

The anatomy of the subject "Subject01" should now contain all the files Brainstorm could import from the FreeSurfer segmentation results:

  • MRI: T1-weighted MRI, resampled and re-aligned by FreeSurfer.

  • ASEG / DKT / Desikan-Killiany / Destrieux: Volume parcellations (including subcortical regions)

  • Head mask: Head surface, generated by Brainstorm.
    If this doesn't look good for your subject, you can recalculate another head surface using different parameters: right-click on the subject folder > Generate head surface.

  • Cortex_336231V: High-resolution pial envelope generated by FreeSurfer.

  • Cortex_15002V: Low-resolution pial envelope, downsampled from the original one by Brainstorm.

  • Cortex_cereb_17005V: Low-res pial envelope + cerebellum surface extracted from ASEG

  • White_*: White matter envelope, high and low resolution.

  • Mid_*: Surface that represents the mid-point between the white and cortex envelopes.

  • Subcortical: Same FreeSurfer subcortical regions as in the ASEG volume, but tessellated as surfaces.

  • For more information about the files generated by FreeSurfer, read the FreeSurfer page.

    import_anat_files.gif

Default surfaces

  • There are four possible surface types: cortex, inner skull, outer skull, head.
  • For each type of surface, one file is selected as the one to use by default for all the operations.
  • This selected surface is displayed in green.

  • Here, there is only one "head" surface, which is selected.
  • The mid, cortex and white surfaces can all be used as "cortex" surfaces, but only one can be selected at a time. By default, the low-resolution cortex should be selected and displayed in green.

  • To select a different cortex surface, you can double-click on it or right-click > Set as default.

MRI Viewer

Right-click on the MRI to get the list of the available display menus:

  • mri_popup.gif

Open the MRI Viewer. This interface was already introduced in the previous tutorial. It corresponds to the default display menu if you double-click on the MRI from the database explorer. Description of the window:

  • MIP Anatomy: Maximum Intensity Projection. When this option is selected, the MRI viewer shows the maximum intensity value across all the slices in each direction. This maximum does not depend on the selected slice, therefore if you move the cursor, the image stays the same.

  • Neurological/Radiological: There are two standard orientations for displaying medical scans. In the neurological orientation, the left hemisphere is on the left of the image; in the radiological orientation, the left hemisphere is on the right of the image.

  • Coordinates: Position of the cursor in different coordinate systems. See: CoordinateSystems

  • Colormap: Click on the colorbar and move up/down (brightness) or left/right (contrast)

  • Popup menu: All the figures have additional options available in a popup menu, accessible with a right-click on the figure. The colormap options will be described later in the tutorials, you can test the other options by yourself.

    mri_viewer.gif mri_viewer_mip.gif

MRI contact sheets

You can get collections of slices in any direction (axial, coronal or sagittal) with the popup menus in the database explorer or the MRI Viewer figure.

  • Zoom: mouse wheel (or two-finger swipe on a MacBook trackpad)

  • Move in zoomed image: click + move

  • Adjust contrast: right click + move up/down

    mri_axial.gif

MRI in 3D

Right-click on the MRI file in the database explorer > Display > 3D orthogonal slices.

  • mri_3d.gif

  • Simple mouse operations:

    • Rotate: Click + move. Note that two different types of rotations are available: at the center of the figure, the object follows your mouse; on the sides, it does a 2D rotation of the image.

    • Zoom: Mouse wheel, or two-finger swipe on a MacBook trackpad.

    • Move: Left+right click + move (or middle-click + move).

    • Colormap: Click on the colorbar and move up/down (brightness) or left/right (contrast).

    • Reset view: Double click anywhere on the figure.

    • Reset colormap: Double-click on the colorbar.

    • Move slices: Right-click on the slice you want to move, then drag.
      (or use the Resect panel in the Surface tab)

  • Popup operations (right-click on the figure):
    • Colormap: Edit the colormap, detailed in another tutorial.

    • MRI display: For now, contains mostly the MIP option (Maximum Intensity Projection).

    • Get coordinates: Pick a point in any 3D view and get its coordinates.

    • Snapshots: Save images or movies from this figure.

    • Figure: Change some of the figure options or edit it using the Matlab tools.

    • Views: Set one of the predefined orientations.

    • Note the indications in the right part of the popup menu: they indicate the keyboard shortcut for each entry.
  • Keyboard shortcuts:
    • Views shortcuts (0,1,2...9 and [=]): Remember them: they will be very useful when exploring the cortical sources. To switch from left to right, it is much faster to press a key than to rotate the brain with the mouse.

    • Zoom: Keys [+] and [-] for zooming in and out.

    • Move slices: [x]=Sagittal, [y]=Coronal, [z]=Axial, hold [shift] for reverse direction.

  • Surface tab (in the main Brainstorm window, right of the database explorer):
    • This panel is primarily dedicated to the display of the surfaces, but some controls can also be useful for the 3D MRI view.
    • Transparency: Changes the transparency of the slices.

    • Smooth: Changes the background threshold applied to the MRI slices. If you set it to zero, you will see the full slices, as extracted from the volume.

    • Resect: Changes the position of the slices in the three directions.

Surfaces

To display a surface you can either double-click on it or right-click > Display. The tab "Surface" contains buttons and sliders to control the display of the surfaces.

  • The mouse and keyboard operations described for the 3D MRI view also apply here.
  • Smooth: Inflates the surface to make all the parts of the cortex envelope visible.
    This is just a display option; it does not actually modify the surface.

  • Color: Changes the color of the surface.

  • Sulci: Shows the bottom of the cortical folds with a darker color. We recommend keeping this option selected for the cortex: it helps with the interpretation of source locations on smoothed brains.

  • Edge: Displays the faces of the surface tessellation.

  • Resect: The sliders and the buttons Left/Right/Struct at the bottom of the panel allow you to cut the surface or reorganize the anatomical structures in various ways.

  • Multiple surfaces: If you open two surfaces from the same subject, they will be displayed on the same figure. Then you need to select the surface you want to edit before changing its properties. The list of available surfaces is displayed at the top of the Surface tab.

  • At the bottom of the Surface tab, you can read the number of vertices and faces in the tessellation.

    surface_options.gif surface_mesh.gif

Get coordinates

  • Close all the figures. Open the cortex surface again.
  • Right-click on the 3D figure, select "Get coordinates".
  • Click anywhere on the cortex surface: a yellow cross appears and the coordinates of the point are displayed in all the available coordinates systems.
  • You can click on [View/MRI] to see where this point is located in the MRI, using the MRI Viewer.

    surface_coordinates.gif

Subcortical regions: Volume

The standard FreeSurfer segmentation pipeline generates multiple volume parcellations of anatomical regions, all including the ASEG subcortical parcellation. Double-click on a volume parcellation to open it for display. This opens the MRI Viewer with two volumes: the T1 MRI as the background, and the parcellation as a semi-transparent overlay.

  • Adjust the transparency of the overlay from the Surface tab, slider Transp.
  • The name of the region under the cursor appears at the top-right corner. The integer before this name is the label of the ROI, i.e. the integer value of the voxel under the cursor in the parcellation volume.

    aseg_volume.gif

  • Close the MRI viewer.
  • Double-click again on the subject's MRI to open it in the MRI viewer.
  • Observe that the anatomical label is also present at the top-right corner of this figure; in this case, the integer represents the voxel value of the displayed MRI. This label information comes from the ASEG file: whenever there are volume parcellations available for the subject, one of them is loaded in the MRI Viewer by default. The name of the selected parcellation is displayed in the figure title bar.
  • You can change the selected parcellation with the right-click popup menu Anatomical atlas. You can change the parcellation scheme, disable its use to make the MRI Viewer faster, or show the parcellation volume as an overlay (menu Show atlas). More information in the tutorial Using anatomy templates.

    aseg_label.gif

Subcortical regions: Surface

Brainstorm reads the ASEG volume labels and tessellates some of these regions, then groups all the meshes into a large surface file where the regions are identified in an atlas called "Structures". It identifies 8 bilateral structures (accumbens, amygdala, caudate, hippocampus, pallidum, putamen, thalamus, cerebellum) and 1 central structure (brainstem).

These structures can be useful for advanced source modeling, but will not be used in the introduction tutorials. Please refer to the advanced tutorials for more information: Volume source estimation and Deep cerebral structures.

  • aseg.gif

With the button [Struct] at the bottom of the Surface tab, you can see the structures separately.

  • resect_struct.gif

Registration MRI/surfaces

The MRI and the surfaces are represented using different coordinate systems and could be misregistered for various reasons. If you are using the automated segmentation pipeline from FreeSurfer or BrainSuite, you should never have any problem; but if something goes wrong, or in the case of more manual import procedures, it is always good to check that the MRI and the surfaces are correctly aligned.

  • Right-click on the low-res cortex > MRI Registration > Check MRI/surface registration

  • The calculation of the interpolation between the MRI and the cortex surface takes a few seconds, but the result is then saved in the database and will be reused later.
  • The yellow lines represent the re-interpolation of the surface in the MRI volume.

    surface_register.gif

Advanced

Interaction with the file system

For most manipulations, it is not necessary to know exactly what is going on at the level of the file system, in the Brainstorm database directory. However, many things are not accessible from the Brainstorm interface, so you may sometimes find it useful to manipulate some pieces of data directly from the Matlab command window.

Where are the files?

  • Hover your mouse over any node in the database explorer for a few seconds: a tooltip will appear with the name and path of the corresponding file on the hard drive.
  • Paths are relative to the current protocol path (brainstorm_db/TutorialIntroduction). What is displayed in the Brainstorm window is a comment and may have nothing to do with the real file name. For instance, the file name corresponding to "head mask" is Subject01/tess_head_mask.mat.

  • Almost all the files in the database are in Matlab .mat format. You can load and edit them easily in the Matlab environment, where they appear as structures with several fields (see the example below).

    file_tooltip.gif
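For example, since these are regular .mat files, you can inspect one from the Matlab command window. A minimal sketch; adjust the brainstorm_db location to your own installation:

    % Path of the "head mask" file mentioned above, relative to your brainstorm_db folder
    dbDir    = fullfile(getenv('HOME'), 'brainstorm_db');    % adjust to your own database location
    surfFile = fullfile(dbDir, 'TutorialIntroduction', 'anat', 'Subject01', 'tess_head_mask.mat');
    % Load the file as a structure and list its fields
    sHead = load(surfFile);
    disp(fieldnames(sHead));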

Popup menu: File

Right-click on a surface file: many menus can lead you to the files and their contents.

  • file_menu.gif

  • View file contents: Display all the fields in the Matlab .mat file.

    file_contents.gif

  • View file history: Review the History field in the file, which records all the operations that were performed on the file since it was imported into Brainstorm.

    file_history.gif

  • Export to file: Export to one of the supported mesh file formats.

  • Export to Matlab: Load the contents of the .mat file in the Matlab base workspace. It is then accessible from the Matlab command window.

  • Import from Matlab: Replace the selected file with the content of a variable from the Matlab base workspace. Useful to save back in the database a structure that was exported and modified manually with the Matlab command window.

  • Copy / Cut / Paste: Allow you to copy/move files in the database explorer. Keyboard shortcuts for these menus are the standard Windows shortcuts (Ctrl+C, Ctrl+X, Ctrl+V). The database explorer also supports drag-and-drop operations for moving files between different folders.

  • Delete: Delete a file. Keyboard shortcuts: Delete key.

  • Rename: Change the Comment field in the file. It "renames" the file in the database explorer, but does not change the actual file name on the hard drive. Keyboard shortcut: F2

  • Copy file path to clipboard: Copies the full file name into the system clipboard, so that you can paste it in any other window (Ctrl+V or Paste menu)

  • Go to this directory (Matlab): Change the current Matlab path, so that you can access the file from the Matlab Command window or the Matlab Current directory window

  • Show in file explorer: Open a file explorer window in this directory.

  • Open terminal in this folder: Start a system console in the file directory (Linux and MacOS only).

What are all these other files?

  • If you look in brainstorm_db/TutorialIntroduction with the file explorer of your operating system, you'll find many other directories and files that are not visible in the database explorer.

    file_disk.gif

  • The protocol TutorialIntroduction is divided into Anatomy and Datasets directories:

    • Each subject in anat is described by an extra file: brainstormsubject.mat

    • Each folder in data is described by an extra file: brainstormstudy.mat

  • anat/@default_subject: Contains the files of the default anatomy (Default anatomy)

  • data/@default_study: Files shared between different subjects (Global common files)

  • data/@inter: Results of inter-subject analysis

  • data/Subject01/@default_study: Files shared between different folders in Subject01

  • data/Subject01/@intra: Results of intra-subject analysis (across different folders)

Advanced

On the hard drive: MRI

Right-click on the MRI > File > View file contents:

  • contents_mri.gif

Structure of the MRI files: subjectimage_*.mat

  • Comment: String displayed in the database explorer to represent the file.

  • Cube: [Nsagittal x Ncoronal x Naxial] full MRI volume. Cube(1,1,1) is in the left, posterior, inferior corner.

  • Voxsize: Size of one voxel in millimeters (sagittal, coronal, axial).

  • SCS: Defines the Subject Coordinate System. Points below are in MRI (millimeters) coordinates.

    • NAS: (x,y,z) coordinates of the nasion fiducial.

    • LPA: (x,y,z) coordinates of the left ear fiducial.

    • RPA: (x,y,z) coordinates of the right ear fiducial.

    • R: [3x3] rotation matrix from MRI coordinates to SCS coordinates.

    • T: [3x1] translation matrix from MRI coordinates to SCS coordinates.

    • Origin: MRI coordinates of the point with SCS coordinates (0,0,0).

  • NCS: Defines the MNI coordinate system, either with a linear or a non-linear transformation.

    • AC: (x,y,z) coordinates of the Anterior Commissure.

    • PC: (x,y,z) coordinates of the Posterior Commissure.

    • IH: (x,y,z) coordinates of an Inter-Hemispheric point.

    • (Linear transformation)
      • R: [3x3] rotation matrix from MRI coordinates to MNI coordinates.

      • T: [3x1] translation matrix from MRI coordinates to MNI coordinates.

    • (Non-linear transformation)
      • iy: 3D floating point matrix: Inverse MNI deformation field, as in SPM naming conventions. Same size as the Cube matrix, it gives for each voxel its coordinates in the MNI space, and is therefore used to convert from MRI coordinates to MNI coordinates.

      • y: 3D floating point matrix: Forward MNI deformation field, as in SPM naming conventions. For given MNI coordinates, it gives their correspondence in the original MRI space. To be interpreted, it has to be used together with the matrix y_vox2ras.

      • y_vox2ras: [4x4 double], transformation matrix that converts from voxel coordinates of the y volume to MNI coordinates.

      • y_method: Algorithm used for computing the normalization ('segment'=SPM12 Segment)

    • Origin: MRI coordinates of the point with NCS coordinates (0,0,0).

  • Header: Header from the original file format (.nii, .mgz, ...)

  • Histogram: Result of the internal analysis of the MRI histogram, mainly to detect background level.

  • InitTransf: [Ntransform x 2] cell-matrix: Transformations that are applied to the MRI before importing the surfaces. Example: {'vox2ras', [4x4 double]}

  • Labels: [Nlabels x 3] cell-matrix: For anatomical parcellations, this field contains the names and RGB colors associated with each integer label in the volume. Example:
    {0, 'Background',   [  0   0   0]}
    {1, 'Precentral L', [203 142 203]}

  • History: List of operations performed on this file (menu File > View file history).
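As an illustration of the SCS and NCS fields above, here is a minimal sketch of how the saved linear transformations could be applied manually, assuming (as the field descriptions suggest) that R and T operate on MRI coordinates expressed in millimeters. The file path is hypothetical; get the real one from the tooltip in the database explorer.

    % Load an imported MRI file as a plain Matlab structure (hypothetical path)
    sMri = load(fullfile('brainstorm_db', 'TutorialIntroduction', 'anat', 'Subject01', 'subjectimage_T1.mat'));
    % Convert the nasion from MRI coordinates (mm) to SCS coordinates: P_scs = R * P_mri + T
    P_mri = sMri.SCS.NAS(:);                    % [3x1] point in MRI coordinates (mm)
    P_scs = sMri.SCS.R * P_mri + sMri.SCS.T;    % rotation + translation
    % The linear MNI transformation works the same way, with NCS.R and NCS.T
    P_mni = sMri.NCS.R * P_mri + sMri.NCS.T;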

Useful functions

  • /toolbox/io/in_mri_bst(MriFile): Read a Brainstorm MRI file and compute the missing fields.

  • /toolbox/io/in_mri(MriFile, FileFormat=[]): Read a MRI file (format is auto-detected).

  • /toolbox/io/in_mri_*.m: Low-level functions for reading all the file formats.

  • /toolbox/anatomy/mri_*.m: Routines for manipulating MRI volumes.

  • /toolbox/gui/view_mri(MriFile, ...): Display an imported MRI in the MRI viewer.

  • /toolbox/gui/view_mri_3d(MriFile, ...): Display an imported MRI in a 3D figure.
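A short usage sketch of these helpers, with Brainstorm running and a hypothetical file path (get the real relative path from the database explorer tooltip):

    % Hypothetical path, relative to the protocol "anat" folder
    MriFile = 'Subject01/subjectimage_T1.mat';
    sMri = in_mri_bst(MriFile);    % read the MRI structure and compute the missing fields
    view_mri(MriFile);             % open it in the MRI viewer
    view_mri_3d(MriFile);          % open it as 3D orthogonal slices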

Advanced

On the hard drive: Surface

Right-click on any cortex surface > File > View file contents:

  • file_contents.gif

Structure of the surface files: tess_*.mat

  • Atlas: Array of structures, each entry is one menu in the drop-down list in the Scout tab.

    • Name: Label of the atlas (reserved names: "User scouts", "Structures", "Source model")

    • Scouts: List of regions of interest in this atlas, see the Scout tutorial.

  • Comment: String displayed in the database explorer to represent the file.

  • Curvature: [Nvertices x 1], curvature value at each point.

  • Faces: [Nfaces x 3], triangles constituting the surface mesh.

  • History: List of operations performed on this file (menu File > View file history).

  • iAtlas: Index of the atlas that is currently selected for this surface.

  • Reg: Structure with registration information, used to interpolate the subject's maps on a template.

  • SulciMap: [Nvertices x 1], binary mask marking the bottom of the sulci (1=displayed as darker).

  • tess2mri_interp: [Nvoxels x Nvertices] sparse interpolation matrix MRI<=>surface.

  • VertConn: [Nvertices x Nvertices] Sparse adjacency matrix, VertConn(i,j)=1 if i and j are neighbors.

  • Vertices: [Nvertices x 3], coordinates (x,y,z) of all the points of the surface, in SCS coordinates.

  • VertNormals: [Nvertices x 3], direction (x,y,z) of the normal to the surface at each vertex.
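A minimal sketch showing how these documented fields can be used directly from the Matlab command window (the file path is hypothetical; get the real one from the database explorer tooltip):

    % Load a surface file as a plain Matlab structure (hypothetical path)
    sSurf = load(fullfile('brainstorm_db', 'TutorialIntroduction', 'anat', 'Subject01', 'tess_cortex_pial_low.mat'));
    % Render the mesh with standard Matlab graphics
    figure;
    patch('Vertices', sSurf.Vertices, 'Faces', sSurf.Faces, ...
          'FaceColor', [.8 .7 .6], 'EdgeColor', 'none');
    axis equal; camlight; lighting gouraud;
    % List the neighbors of vertex #1, using the sparse adjacency matrix
    iNeighbors = find(sSurf.VertConn(1,:));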

Useful functions

  • /toolbox/io/in_tess_bst(SurfaceFile): Read a Brainstorm surface file and compute the missing fields.

  • /toolbox/io/in_tess(TessFile, FileFormat=[], sMri=[]): Read a surface file (format is auto-detected).

  • /toolbox/io/in_tess_*.m: Low-level functions for reading all the file formats.

  • /toolbox/anatomy/tess_*.m: Routines for manipulating surfaces.

  • /toolbox/gui/view_surface(SurfaceFile, ...): Display an imported surface in a 3D figure.

  • /toolbox/gui/view_surface_data(SurfaceFile, OverlayFile, ...): Display a surface with a source map.

  • /toolbox/gui/view_surface_matrix(Vertices, Faces, ...): Display a mesh in a 3D figure.
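And a short usage sketch of these helpers, with Brainstorm running and a hypothetical relative path:

    SurfaceFile = 'Subject01/tess_cortex_pial_low.mat';   % hypothetical path, relative to the protocol "anat" folder
    sSurf = in_tess_bst(SurfaceFile);   % read the surface structure and compute the missing fields
    view_surface(SurfaceFile);          % display it in a 3D figure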



Tutorial 4: Channel file / MEG-MRI coregistration

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

The anatomy of your subject is ready. Before we can start looking at the MEG/EEG recordings, we need to make sure that the sensors (electrodes, magnetometers or gradiometers) are properly aligned with the MRI and the surfaces of the subject.

In this tutorial, we will start with a detailed description of the experiment and the files that were recorded, then we will link the original CTF files to the database in order to get access to the sensor positions, and finally we will explore the various options for aligning these sensors with the head of the subject.

License

This dataset (MEG and MRI data) was collected by the MEG Unit Lab, McConnell Brain Imaging Center, Montreal Neurological Institute, McGill University, Canada. The original purpose was to serve as a tutorial data example for the Brainstorm software project. It is presently released in the Public Domain, and is not subject to copyright in any jurisdiction.

We would nevertheless appreciate it if you referenced this dataset in your publications: please acknowledge its authors (Elizabeth Bock, Peter Donhauser, Francois Tadel and Sylvain Baillet) and cite the Brainstorm project's seminal publication.

Presentation of the experiment

Experiment

  • One subject, two acquisition runs of 6 minutes each.
  • Subject stimulated binaurally with intra-aural earphones (air tubes+transducers), eyes open and looking at a fixation cross on a screen.
  • Each run contains:
    • 200 regular beeps (440Hz).
    • 40 easy deviant beeps (554.4Hz, 4 semitones higher).
  • Random inter-stimulus interval: between 0.7s and 1.7s, uniformly distributed.
  • The subject presses a button with the right index finger when detecting a deviant.
  • Auditory stimuli generated with the Matlab Psychophysics toolbox.
  • The specifications of this dataset were discussed initially on the FieldTrip bug tracker:
    http://bugzilla.fieldtriptoolbox.org/show_bug.cgi?id=2300.

MEG acquisition

  • Acquisition at 2400Hz, with a CTF 275 system, subject in sitting position

  • Recorded at the Montreal Neurological Institute in December 2013
  • Anti-aliasing low-pass filter at 600Hz, files saved with the 3rd order gradient
  • Downsampled to a lower sampling rate, from 2400Hz to 600Hz: the only purpose of this resampling is to make the introduction tutorials easier to follow on a regular computer.

  • Recorded channels (340):
    • 1 Stim channel indicating the presentation times of the audio stimuli: UPPT001 (#1)
    • 1 Audio signal sent to the subject: UADC001 (#316)
    • 1 Response channel recording the finger taps in response to the deviants: UDIO001 (#2)
    • 26 MEG reference sensors (#5-#30)
    • 274 MEG axial gradiometers (#31-#304)
    • 2 EEG electrodes: Cz, Pz (#305 and #306)
    • 1 ECG bipolar (#307)
    • 2 EOG bipolar (vertical #308, horizontal #309)
    • 12 Head tracking channels: Nasion XYZ, Left XYZ, Right XYZ, Error N/L/R (#317-#328)
    • 20 Unused channels (#3, #4, #310-#315, #329-#340)
  • 3 datasets:
    • S01_AEF_20131218_01_600Hz.ds: Run #1, 360s, 200 standard + 40 deviants

    • S01_AEF_20131218_02_600Hz.ds: Run #2, 360s, 200 standard + 40 deviants

    • S01_Noise_20131218_02_600Hz.ds: Empty room recordings, 30s long

  • Average reaction times for the button press after a deviant tone:
    • Run #1: 515ms +/- 108ms

    • Run #2: 596ms +/- 134ms

Head shape and fiducial points

  • 3D digitization using a Polhemus Fastrak device driven by Brainstorm (S01_20131218_01.pos)

  • More information: Digitize EEG electrodes and head shape

  • The output file is copied to each .ds folder and contains the following entries:
    • The positions of the centers of the CTF coils.
    • The positions of the anatomical references we use in Brainstorm:
      the nasion and the tragus/helix connection points, as illustrated here.

    • Around 150 head points distributed on the hard parts of the head (no soft tissues).

  • Switch to the "functional data" view.

    view_functional.gif

  • Right-click on the subject folder > Review raw file

    • Select the file format: "MEG/EEG: CTF (*.ds...)"

    • Select all the .ds folders in: sample_introduction/data

    • In the CTF file format, each session of recordings is saved in a folder with the extension "ds". The different types of information collected during each session are saved as different files in this folder (event markers, sensor definitions, bad segments, MEG recordings).
  • review_menu.gif

  • Refine registration now? YES
    This operation is detailed in the next section.

    review_refine.gif

  • Percentage of head points to ignore: 0
    If you have some points that were not digitized correctly and that appear far from the head surface, you should increase this value in order to exclude them from the fit.
    review_outliers.gif

Automatic registration

The registration between the MRI and the MEG (or EEG) is done in two steps. We start with a first approximation based on three reference points, then we refine it with the full head shape of the subject.

Step 1: Fiducials

  • The initial registration is based on the three fiducial points that define the Subject Coordinate System (SCS): nasion, left ear, right ear. You have marked these three points in the MRI viewer in the previous tutorial.

  • These same three points have also been marked before the acquisition of the MEG recordings. The person who recorded this subject digitized their positions with a tracking device (such as a Polhemus Fastrak or Patriot). The positions of these points are saved in the dataset.

  • When we bring the MEG recordings into the Brainstorm database, we align them on the MRI using these fiducial points: we match the NAS/LPA/RPA points digitized with the ones we located in the MRI Viewer.
  • This registration method gives approximate results. It can be good enough in some cases, but not always because of the imprecision of the measures. The tracking system is not always very precise, the points are not always easy to identify on the MRI slices, and the very definition of these points does not offer a millimeter precision. All this combined, it is easy to end up with a registration error of 1cm or more.

  • The quality of the source analysis we will perform later is highly dependent on the quality of the registration between the sensors and the anatomy. If we start with a 1cm error, this error will be propagated everywhere in the analysis.

    polhemus_setup.gif polhemus_beth.jpg

Step 2: Head shape

  • To improve this registration, we recommend our users to always digitize additional points on the head of the subjects: around 100 points uniformly distributed on the hard parts of the head (skull from nasion to inion, eyebrows, ear contour, nose crest). Avoid marking points on the softer parts (cheeks or neck) because they may have a different shape when the subject is seated on the Polhemus chair or lying down in the MRI. More information on digitizing head points.

  • We have two versions of the full head shape of the subject: one coming from the MRI (the head surface, represented in grey in the figures below) and one coming from the Polhemus digitizer at the time of the MEG/EEG acquisition (represented as green dots).
  • The algorithm that is executed when you choose the option "Refine registration with head points" is an iterative algorithm that tries to find a better fit between the two head shapes (grey surface and green dots), to improve the initial NAS/LPA/RPA registration. This technique usually improves significantly the registration between the MRI and the MEG/EEG sensors (a minimal sketch of this idea is shown at the end of this section).

  • Tolerance: If you enter a percentage of head points to ignore greater than zero, the fit is performed once with all the points, then the head points that are the most distant from the scalp surface are removed, and the fit is executed a second time with the remaining head points.
  • The two pictures below represent the registration before and after this automatic head shape registration (left=step 1, right=step 2). The yellow surface represents the MEG helmet: the solid plastic surface in which the subject places his/her head. If you ever see the grey head surface intersecting this yellow helmet surface, there is obviously something wrong with the registration.
  • At the end of the import process, you can close the figure that shows the final registration.

    refine_before.gif refine_after.gif

  • A window reporting the distance between the scalp and the head points is displayed. You can use these values as a reference to estimate whether you can trust the automatic registration or not. Whether the distances are correct or abnormal depends on your digitization setup.

    refine_outliers.gif
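
To give a more concrete idea of what this refinement does, here is a minimal Matlab sketch of this kind of iterative surface fit. This is not the actual Brainstorm implementation, just an illustration: the variables headPts (digitized head points, [N x 3]) and scalpVert (vertices of the MRI head surface, [M x 3]) are assumptions, and are supposed to be already expressed in the same coordinate system after the initial NAS/LPA/RPA alignment.

  tolerance = 0;                      % percentage of head points to ignore
  R = eye(3);  T = zeros(3,1);        % accumulated rigid transformation
  pts = headPts;
  for iter = 1:50
      % 1) Pair each head point with its closest vertex on the scalp surface
      iNear  = dsearchn(scalpVert, pts);
      target = scalpVert(iNear, :);
      % 2) Optionally ignore the most distant points (the "tolerance" option)
      d  = sqrt(sum((pts - target).^2, 2));
      ds = sort(d);
      nKeep = max(1, round(numel(d) * (100 - tolerance) / 100));
      keep  = (d <= ds(nKeep));
      % 3) Best rigid transformation for the kept pairs (Kabsch/SVD solution)
      cP = mean(pts(keep,:), 1);    cT = mean(target(keep,:), 1);
      [U, ~, V] = svd((pts(keep,:) - cP)' * (target(keep,:) - cT));
      Ri = V * diag([1, 1, sign(det(V * U'))]) * U';
      Ti = cT' - Ri * cP';
      % 4) Apply it and accumulate it into the total transformation
      pts = (Ri * pts' + Ti)';
      R = Ri * R;    T = Ri * T + Ti;
  end
  % [R,T] now maps the digitized head points onto the scalp surface; applying the
  % same transformation to the sensor positions is what refines the registration.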

Defaced volumes

When processing your own datasets, if your MRI images are defaced, you might need to proceed in a slightly different way. The de-identification procedures remove the nose and other facial features from the MRI. If your digitized head shape includes points on the missing parts of the head, this may cause a significant bias in the automatic registration. In this case it is advised to remove the head points below the nasion before proceeding to the automatic registration, as illustrated in this tutorial.

New files and folders

Many new files are now visible in the database explorer:

  • Three folders representing the three MEG datasets that we linked to the database. Note the tag "raw" in the icon of the folders: it means that these files are considered as continuous (raw) files.
  • S01_AEF_20131218_01_600Hz: Subject01, Auditory Evoked Field, 18-Dec-2013, run #01

  • S01_AEF_20131218_02_600Hz: Subject01, Auditory Evoked Field, 18-Dec-2013, run #02

  • S01_Noise_20131218_02_600Hz: Subject01, Noise recordings (no subject in the MEG)

  • All three have been downsampled from 2400Hz to 600Hz.

Each of these new folders show two elements:

  • Channel file: Defines the types and names of channels that were recorded, the position of the sensors, the head shape and other various details. This information has been read from the MEG datasets and saved as a new file in the database. The total number of data channels recorded in the file is indicated in parentheses (340).

  • Link to raw file: Link to the original file that you imported. All the relevant meta-data was read from the MEG dataset and copied inside the link itself (sampling rate, number of samples, event markers and other details about the acquisition session). But no MEG/EEG recordings were read or copied to the database. If we open this file, the values are read directly from the original files in the .ds folder.

    review_tree.gif

Review vs Import

When trying to bring external data into the Brainstorm environment, a common source of confusion is the difference between the two popup menus Review and Import:

  • Review raw file: Allows you to create a link to your original continuous data file. It reads the header and sensor information from the file but does not copy the recordings in the database. Most of the artifact cleaning should be done directly using these links.

  • Import MEG/EEG: Extracts segments of recordings (epochs) from an external file and saves copies of them in the Brainstorm database. You should not use this menu until you have fully pre-processed your recordings, unless you are importing files that are already epoched or averaged.

Display the sensors

Right-click on the CTF channels file and try all the display menus:

  • CTF Helmet: Shows a surface that represents the inner surface of the MEG helmet.

  • CTF coils (MEG): Displays the MEG head coils of this CTF system: they are all axial gradiometers, and only the coils close to the head are represented. The small squares do not represent the real shape of the sensors (the CTF coils are circular loops) but an approximation made in the forward model computation.

  • CTF coils (ALL): Displays all the MEG sensors, including the reference magnetometers and gradiometers. The orientation of the coils is represented with a red segment.

  • MEG: MEG sensors are represented as small white dots and can be selected by clicking on them.

  • ECG / EOG: Ignore these menus, we do not have proper positions for these electrodes.

  • Misc: Shows the approximate positions of the EEG electrodes (Cz and Pz).

  • Use the [Close all] button to close all the figures when you are done.

    channel_menu.gif

channel_display.gif

Advanced

Sensor map

Here is a map with the full list of sensor names for this CTF system; it can be useful for navigating in the recordings. Click on the image for a larger version.

snap_3conditions.jpg

Advanced

Manual registration

If the registration you get with the automatic alignment is incorrect, or if there was an issue when you digitized the positions of the fiducials or the head shape, you may have to manually realign the sensors with the head. Right-click on the channel file > MRI Registration:

  • Check: Show all the possible information that may help to verify the registration.

  • Edit: Opens a window where you can move manually the MEG helmet relative to the head.
    Read the tooltips of the buttons in the toolbar to see what is available, select an operation and then right-click+move up/down to apply it. From a scientific point of view this is not exactly a rigorous operation, but sometimes it is much better than using wrong default positions.
    IMPORTANT: this refinement can only be used to better align the head shape with the digitized points - it cannot be used to correct for a subject who is poorly positioned in the helmet (i.e. you cannot move the helmet closer to the subject's head if they were not seated that way to begin with!)

    channel_manual.gif

  • Refine using head points: Runs the automatic registration described earlier.

  • In the 3D views, the head points can be color-coded to represent the distance to the scalp. Right-click on the figure > Channels > Color head points by distance (shortcut CTRL+H). The colorbar indicates the distance of each point to the scalp in millimeters, as computed by bst_surfdist.m.

    refine_dist.png

There is nothing to change here, but remember to always check the registration scalp/sensors.

Advanced

Multiple runs and head positions

Between two acquisition runs, the subject may move inside the MEG helmet, so the relative position of the MEG sensors with respect to the head surface changes. At the beginning of each MEG run, the positions of the head localization coils are detected and used to update the position of the MEG sensors.

  • The two AEF runs 01 and 02 were acquired successively. The position of the subject's head in the MEG helmet was estimated twice, once at the beginning of each run.
  • To evaluate the displacement between the two runs visually, select at the same time all the channel files you want to compare (the ones for runs 01 and 02), then right-click > Display sensors > MEG.

    channel_multiple.gif

  • Typically, we would like to group the trials coming from multiple acquisition runs. However, because of the subject's movements between runs, it is usually not possible to directly compare the MEG values between runs. The sensors may not capture the activity coming from the same regions of the brain.
  • You have three options if you consider grouping information from multiple runs:
    • Method 1: Process all the runs separately and average between runs at the source level: This is the most accurate option, but it requires more work, computation time and storage.

    • Method 2: Ignore movements between runs: This can be acceptable if the displacements are really minimal, less accurate but much faster to process and easier to manipulate.

    • Method 3: Co-register properly the runs using the process Standardize > Co-register MEG runs: Can be a good option for displacements under 2cm.
      Warning: This method has not been fully evaluated on our side, use it at your own risk. Also, it does not work correctly if you have different SSP projectors calculated for multiple runs.

  • In this tutorial, we will illustrate only method 1: runs are not co-registered.

Advanced

Edit the channel file

Display a table with all the information about the individual channels. You can edit all the values.

  • Right-click on the channel of the first folder (AEF#01) > Edit channel file:

    channel_edit.gif

  • Index: Index of the channel in the data matrix. Can be edited to reorder the channels.

  • Name: Name that was given to the channel by the acquisition device.

  • Type: Type of information recorded (MEG, EEG, EOG, ECG, EMG, Stim, Other, "Delete", etc.)

    • You may have to change the Type for some channels. For instance if an EOG channel was saved as a regular EEG channel, you have to change its type to prevent it from being used in the source estimation.
    • To delete a channel from this file: select "(Delete)" in the type column.
  • Group: Used to define sub-groups of channels of the same type.

    • SEEG/ECOG: Each group of contacts can represent a depth electrode or a grid, and it can be plotted separately. A separate average reference montage is calculated for each group.
    • MEG/EEG: Not used.
  • Comment: Additional description of the channel.

    • MEG sensors: Do not edit this information if it is not empty.
  • Loc: Position of the sensor (x,y,z) in SCS coordinates. Do not modify this from the interface.
    One column per coil and per integration point (information useful for the forward modeling).

  • Orient: Orientation of the MEG coils (x,y,z) in SCS coordinates. One column per Loc column.

  • Weight: When there is more than one coil or integration point, the Weight field indicates the multiplication factor to apply to each of these points.

  • To edit the type or the comment for multiple sensors at once, select them all then right-click.
  • Close this figure, do not save the modifications if you made any.
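
For reference, the Loc, Orient and Weight columns are typically combined in the forward model as a weighted sum: the field is evaluated at each coil/integration point Loc(:,k), projected on the corresponding orientation Orient(:,k), multiplied by Weight(k), and the contributions are summed to approximate the value measured by the sensor. For a CTF axial gradiometer, for instance, the weights of the two coils have opposite signs.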

Advanced

On the hard drive

Some other fields are present in the channel file that cannot be accessed with the Channel editor window. You can explore these other fields with the File menu, selecting View file contents or Export to Matlab, as presented in the previous tutorial.

  • channel_contents.gif

Structure of the channel files: channel_*.mat

  • Comment : String that is displayed in the Brainstorm database explorer.

  • MegRefCoef: Noise compensation matrix for CTF and 4D MEG recordings, based on some other sensors that are located far away from the head.

  • Projector: SSP/ICA projectors used for artifact cleaning purposes. See the SSP tutorial.

  • TransfMeg / TransfMegLabel: Transformations that were applied to the positions of the MEG sensors to bring them in the Brainstorm coordinate system.

  • TransfEeg / TransfEegLabel: Same for the position of the EEG electrodes.

  • HeadPoints: Extra head points that were digitized with a tracking system.

  • Channel: An array that defines each channel individually (see previous section).

  • Clusters: An array of structures that defines channels of clusters, with the following fields:

    • Sensors: Cell-array of channel names

    • Label: String, name of the cluster

    • Color: RGB values between 0 and 1 [R,G,B]

    • Function: String, cluster function name (default: 'Mean')

  • History: Describes all the operations that were performed with Brainstorm on this file. To get a better view of this piece of information, use the menu File > View file history.

  • IntraElectrodes: Definition of iEEG devices, documented in the SEEG tutorial.
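
As an illustration, here is a minimal Matlab sketch that reads a channel file and inspects some of these fields from a script. It assumes a running Brainstorm session, and the database file path used below is hypothetical:

  % Hypothetical relative path of a channel file inside the Brainstorm database
  ChannelFile = 'Subject01/S01_AEF_20131218_01_600Hz/channel_ctf.mat';
  ChannelMat  = in_bst_channel(ChannelFile);   % read the channel_*.mat structure
  disp({ChannelMat.Channel.Name}');            % names of the 340 channels
  iMeg = find(strcmp({ChannelMat.Channel.Type}, 'MEG'));       % 274 axial gradiometers
  iRef = find(strcmp({ChannelMat.Channel.Type}, 'MEG REF'));   % 26 reference sensors
  % MegRefCoef is the [nMEG x nREF] noise compensation matrix: roughly speaking,
  % the compensated MEG signals are data(iMeg,:) - MegRefCoef * data(iRef,:)
  size(ChannelMat.MegRefCoef)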

Useful functions

  • /toolbox/io/import_channel.m: Read a channel file and save it in the database.

  • /toolbox/io/in_channel_*.m: Low-level functions for reading all the file formats.

  • /toolbox/io/in_bst_channel.m: Read a channel file saved in the database.

  • /toolbox/sensors/channel_*.m: Routines for manipulating channel files.

  • /toolbox/gui/view_channels(ChannelFile, Modality, ...): Display the sensors in a 3D figure.
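
For example, the last function above could be used to display the MEG sensors of this channel file in a 3D figure (again, the file path is hypothetical):

  view_channels('Subject01/S01_AEF_20131218_01_600Hz/channel_ctf.mat', 'MEG');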

Advanced

Additional documentation



Tutorial 5: Review continuous recordings

Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Sylvain Baillet

Open the recordings

Let's look at the first file in the list: AEF#01.
Right-click on the Link to raw file. Below the first two menus, you have the list of channel types:

  • MEG: 274 axial gradiometers

  • ECG: 1 electrocardiogram, bipolar electrode across the chest

  • EOG: 2 electrooculograms (vertical and horizontal)

  • Misc: EEG electrodes Cz and Pz

  • ADC A: Unused

  • ADC V: Auditory signal sent to the subject

  • DAC: Unused

  • FitErr: Fitting error when trying to localize the three head localization coils (NAS, LPA, RPA)

  • HLU: Head Localizing Unit, displacements in the three directions (x,y,z) for the three coils

  • MEG REF: 26 reference sensors used for removing the environmental noise

  • Other: Unused

  • Stim: Stimulation channel, records the stim triggers generated by the Psychophysics toolbox and other input channels, such as button presses generated by the subject

  • SysClock: System clock, unused

Select MEG > Display time series (or double-click on the file).

  • link_menu.gif

It will open a new figure and enable many controls in the Brainstorm window.

review_epoch.gif

Navigate in time

The files we have imported here are shown the way they were saved by the CTF MEG system: as contiguous epochs of 1 second each. These epochs are not related to the stimulus triggers or the subject's responses; they are just a way of storing the files. We will first explore the recordings in this epoched mode before switching to the continuous mode.
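For instance, each 6-minute run, downsampled to 600Hz, is stored as 360 contiguous 1-second epochs of 600 samples each.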

From the time series figure

  • Click: Click on the white or grey parts of the figure to move the time cursor (red vertical line).
    If you click on the signals, it selects the corresponding channels. Click again to unselect.

  • Shortcuts: See the tooltips in the time panel for important keyboard shortcuts:
    Left arrow, right arrow, page up, page down, F3, Shift+F3, etc...

  • Bottom bar: The red square in the bottom bar represents the portion of the current file or epoch that is displayed. Right now we show all of epoch #1. This will be more useful in the continuous mode.

  • Zoom: Scroll to zoom horizontally around the time cursor (mouse wheel or two-finger up/down).

  • [<<<] and [>>>]: Previous/next epoch or page

From the time panel

  • Time: [0, 998]ms is the time segment over which the first epoch is defined.

  • Sampling: We downsampled these files to 600Hz for easier processing in the tutorials.

  • Text box: Current time, can be edited manually.

  • [<] and [>]: Previous/next time sample - Read the tooltip for details and shortcuts

  • [<<] and [>>]: Previous/next time sample (x10) - Read the tooltip for details and shortcuts

  • [<<<] and [>>>]: Previous/next epoch or page - Read the tooltip for details and shortcuts

From the page settings

  • Epoch: Selects the current time block that is displayed in the time series figure.

  • Start: Starting point of the time segment displayed in the figure. Useful in continuous mode only.

  • Duration: Length of this time segment. Useful in continuous mode only.

Time selection

  • In the time series figure, click and drag your mouse for selecting a time segment.
  • At the bottom of the figure, you will see the duration of the selected block, and min/max values.
  • Useful for quickly estimating the latencies between two events, or the period of an oscillation.
  • To zoom into the selection: Shift+left click, middle click, or right-click > Time selection > Zoom into.

  • Click anywhere on the figure to cancel this time selection.

    review_timesel.gif

Epoched vs. continuous

  • The CTF MEG system can save two types of files: epoched (.ds) or continuous (_AUX.ds).
  • Here we have an intermediate storage type: continuous recordings saved in "epoched" files. The files are saved as small blocks of recordings of a constant time length (1 second in this case). All these time blocks are contiguous, there is no gap between them.
  • Brainstorm can consider this file either as a continuous or an epoched file. By default it imports the regular .ds folders as epoched, but we need to change this manually.
  • Right-click on the "Link to raw file" for AEF#01 > Switch epoched/continuous
    You should get a message: "File converted to: continuous".

  • Double-click on the "Link to raw file" again. Now you can navigate in the file without interruptions. The box "Epoch" is disabled and all the events in the file are displayed at once.

    review_continuous.gif

  • With the red square at the bottom of the figure, you can navigate in time (click in the middle and drag with the mouse) or change the size of the current page (click on the left or right edge of the red square and move your mouse).

    resize_page.gif

  • Increase the duration of the displayed window to 3 seconds (Page settings > Duration).

    review_setpage.gif

  • Close the figure.
  • Repeat this operation with the other files to convert them all to a continuous mode.
    • AEF#02 > Switch epoched/continuous

    • Noise > Switch epoched/continuous

Display mode: Butterfly/Column

  • Close all the figures.
  • Double-click on the AEF#01 Link to raw file to open the MEG recordings.
  • What we see are all the traces of the 274 sensors overlaid on top of each other.
  • Click on the "Display mode" button in the toolbar of the Record tab.

    review_switch.gif

  • All the signals are now displayed, one below the other, but because we have 274 MEG channels the figure is still unreadable. We need to select only a subset of these sensors.

    review_column.jpg

Montage selection

  • You can use the montage menu to select a group of sensors. This menu is accessible in two ways:
    • Record toolbar > Drop-down menu.

    • Figure popup menu > Right-click on the figure > Montage

  • Pre-defined groups of channels are available for some common MEG and EEG systems.
    Notice the keyboard shortcut on the right for All channels (Shift+A). You can define your own (Shift+B, C...) if you go to Edit montages.

  • You can also use this menu to create your own sensor selections or more complex montages.
    A separate tutorial is dedicated to the montage editor.

  • Select the group: CTF LT (Left Temporal).

    review_montage.gif

  • More information about the Montage editor.

Channel selection

If you click on the white or grey areas of the figure, it changes the current time.
If you click on the lines representing the recorded signals instead, it selects the corresponding channels.

  • When some channels are selected, an additional menu "Channels" is visible in the figure popup.
  • Select "View selected" or press [Enter] to open the selected channels in a separate window.
  • The management of the bad channels will be introduced in a separate tutorial.

    channel_select.gif

Amplitude scale

A variety of display options allows you to adjust the amplitude scale for the recordings (vertical axis). Most of these options are available in the right part of the time series figure, some are repeated in the Record tab of the Brainstorm window.

  • review_scale.gif

  • Increase/decrease gain: Buttons [+] and [-] on the right side of the figure. The shortcuts for these buttons are indicated in the tooltips (leave the mouse for a short while over a button): right-click and move your mouse, hold the Shift key and scroll, or use the keys "+" and "-".

  • Auto-scale amplitude: Button [AS] in the figure.
    Selected: the vertical scale is adapted to the new maximum amplitude when you scroll in the file.
    Not selected: The vertical scale is fixed, scrolling in the file does not affect the axis resolution.

  • Flip Y axis: Exchange the direction of the Y axis, to have the peaks of negative values pointing up. Useful mostly for clinical EEG.

  • Set amplitude scale: Opens a window to enter the amplitude scale manually. The value corresponds to the space between two horizontal lines in this figure.

  • Set axis resolution: See section "Time and amplitude resolution" below.

  • Remove DC offset: Button [DC] in the Record tab. When selected, the average value over the entire current time window is subtracted from each channel. This means that if you change the length of the time window, the value that is removed from each channel may change. Always keep this option selected for unprocessed MEG recordings, unless you use a high-pass filter.

  • Normalize signals: Divide each signal by its maximal amplitude in the displayed time window. The signals displayed with this normalization are unitless.

  • Apply CTF compensation: Button [CTF] in the Record tab. Enable/disable the CTF noise correction based on the reference sensors, when it is not already applied in the file. In the current file, the CTF 3rd order gradient compensation is already applied, therefore this option is not available.

  • Vertical zoom: Use the zoom/scroll buttons on the right of the figure or your mouse (CTRL+Mouse wheel to zoom, middle-click+move to scroll) in order to look at specific channels without having to change the montage.

  • review_vscroll.gif

  • Uniform amplitude scales: Force all the time series figures to use the same amplitude scale. Option available in the Record tab with the button uniform_button.gif or from the figure options menu when at least two time series figures are visible. More details.

Advanced

Time and amplitude resolution

In the Brainstorm interface, the axis resolution is usually set implicitly: you can set the size of the window, the duration of recordings reviewed at once and the maximum amplitude to show in the figure. These parameters are convenient for exploring the recordings interactively but don't allow us to have reproducible displays with constant time and amplitude resolutions.

However, some applications are very sensitive to the horizontal and vertical scaling, such as the visual detection of epileptic spikes. The shapes of the traces that epileptologists try to identify are altered by the axis resolution. This is detailed in the tutorial EEG and Epilepsy.

For this reason, we also added an option to set the figure resolution explicitly. The distance unit on a screen is the pixel: we can set precisely how much time is represented by one pixel horizontally and how much amplitude is represented by one pixel vertically.
Display menu in the right part of the figure > Amplitude > Set axis resolution (shortcut: CTRL+O)

Note that this interface does not store the input values, it just modifies the other parameters (figure size, time window, max amplitude) to fit the resolution objectives. If you modify these parameters after setting the resolution (resize the figure, leave the button [AS] selected and scroll in time, etc) the resolution is lost, you have to set it again manually.
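
For instance, if you request 10ms of signal per pixel, a figure that is 600 pixels wide displays 6 seconds of data; Brainstorm then adjusts the time window or the figure size to honor this ratio.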

review_resolution.gif

Filters for visualization

With the Filter tab, you can apply a band-pass filter to the recordings, or remove a set of specific frequencies (example: the 50Hz or 60Hz power lines contamination and their harmonics). The filters are applied only to the time window that is currently loaded. If the segment is too short for the required filters, the results might be inaccurate.

These visualization filters provide a quick estimate for visualization only, the results are not saved anywhere. To filter properly the continuous files, please use the Process1 tab (see tutorial #10).
The option "Filter all results" is not useful for now, it will be described later.

After testing the high-pass, low-pass and notch filters, uncheck them. Otherwise you may forget about them: they stay on until you restart Brainstorm. Note that as long as there are visualization filters applied, the title of the Filter tab remains red.

  • review_filter.gif

Mouse and keyboard shortcuts

Keyboard shortcuts

  • Left / right arrows:

    • Change current time, sample by sample
    • +Control key: Jump to previous/next epoch or page (same as [<<<] and [>>>])

    • +Shift key: Jump to previous/next event (you need to have one event selected)

    • MacOS: These shortcuts are different, please read the tooltips for [>], [>>] and [>>>]

  • Page-up / page-down:

    • Change current time, 10 samples at a time
    • +Control key: Jump to the next/previous epoch or page, 10x faster

  • F3/Shift+F3: Jump to the next/previous epoch or page (10% overlap between 2 pages)

  • F4/Shift+F4: Jump to the next/previous half-page (50% overlap)

  • F6/Shift+F6: Jump to the next/previous page with no overlap (0% overlap)

  • Plus / minus: Adjust the vertical scale of the time series

  • Shift + Letter: Changes the montage

  • Control + B: Mark selected time segment as bad

  • Control + D: Dock figure

  • E: Add / delete event marker

  • Control + E: Add / delete event marker for the selected channels

  • Control + F: Open a copy of the figure, not managed by the Brainstorm window manager

  • Control + H: Hide/show selected event group

  • Control + I: Save figure as image

  • Control + J: Open a copy of the figure as an image

  • Control + O: Set axes resolution

  • Control + L: Change display mode of events (dots, lines or hidden)

  • Control + T: Open a 2D topography window at the current time

  • Enter: Display the selected channels in a separate figure

  • Escape: Unselect all the selected channels

  • Delete: Mark the selected channels as bad

  • 1 2 3 4 5 6 7 8 9: User-defined shortcuts for new events (tutorial #7)

Mouse shortcuts

  • Click on a channel: Select the channel

  • Click: Change current time

  • Shift + click: Force the selection of the current time (even when clicking on a channel)

  • Click + move: Select time range

  • Right-click: Display popup menu

  • Right-click + move: Adjust the vertical scale of the time series

  • Scroll: Zoom around current time

  • Shift + scroll: Adjust the vertical scale of the time series

  • Control + scroll: Zoom vertically

  • Central click + move: Move in a zoomed figure

  • Double click: Restore initial zoom settings (or edit the notes associated to the clicked event)



Tutorial 6: Multiple windows

Authors: Francois Tadel

General organization

This tutorial is a parenthesis to explain how the figures are positioned on the screen and how you can organize your workspace more efficiently. One interesting feature of the Brainstorm interface is the ability to open easily multiple views or multiple datasets simultaneously.

The buttons in the menu "Window layout options" can help you organize all the opened figures in an efficient way. There are four options for the automatic placement of the figures on the screen and you have the possibility to save your own specific working environment.

Remember that the Brainstorm window is designed to remain on one side of the screen. All the space of the desktop that is not covered by this window will be used for opening other figures. This available space is designated in the menus below as "Full area". Do not try to maximize the Brainstorm window, or the automatic management of the data figures might not work correctly.

  • toolbarWindows.gif

Automatic figure positioning

  • Layout options: Defines how the figures are positioned on the screen

    • Tiled: All the figures have similar sizes.

    • Weighted: Some figures containing more information are given more space on the screen. This mode is mostly useful when reviewing continuous recordings.

    • Full area: Each figure takes all the space available for figures.

    • None: The new figures are displayed at the default Matlab position, always at the same place, and never re-organized after. Selecting this option can be useful if the auto-arrangement does not work well on your system or if you want to organize your windows by yourself. It is also automatically selected when using "user setups" (see below).

  • One screen / two screens: If you have multiple monitors, Brainstorm can try to place the database window on one screen and all the other figures on the other screen. If you force Brainstorm to use only one screen, all the figures should stay on the same screen.

  • Full screen: If selected, the figures are set to their maximum size, covering the Brainstorm window

  • Show all figures: If you have many figures hidden by some other fullscreen window (Matlab, Firefox to read this tutorial, etc), you don't have to click on all of them in the taskbar to get them back. Just make the Brainstorm window visible and click on this button: it will bring all the figures back (this does not work with some Linux window managers).

  • User setups: You can save a combination of figures currently opened on your desktop and re-use it later on a different dataset. It can be very useful for reviewing long continuous files.

  • Close all figures: Last button in the toolbar. Close everything and free the allocated memory.

Example

  • Double-click on AEF#01 Link to raw file to open the MEG sensors.

  • Open the EOG signals for the same file: Right-click on the file > EOG > Display time series.

  • Open a 2D topography for the MEG sensors: Right-click on the file > MEG > 2D sensor cap.
    This view represents the values of all the MEG sensors at the current time point. This type of figure will be described in another tutorial.

  • Cycle through the options: Tiled, Weighted, Full area.

    layout_tiled.gif layout_weighted.gif

  • Select the option None, close all the figures (using the [Close all] button), and re-open them.
    Notice that now the position of the figures is not managed by Brainstorm anymore.

  • Select again Weighted: the figures are automatically re-arranged again.

  • Test the option Full screen.

  • If you have two screens connected, you can try the options One screen / Two screens.

Advanced

Multiple views of the same data

  • Keep all the existing figures: MEG, EOG, 2D topography.
  • Open another time series view of the same file, same MEG sensors.
    • Note that if you double-click again on the file, it just selects the existing figure.
    • To force opening another view: Right-click on the file > MEG > Display time series.

    • Only the first view that was opened on this file shows the events bar and the time navigation bar at the bottom of the figure. If you want the two MEG figures displayed in the exact same way, you can close everything, then start by opening the EOG view, then the two MEG views.
  • Re-arrange the figures in a nicer way.
  • Select montage "CTF LT" for one figure, and montage "CTF RF" for the other.

    • You can change individually the display properties of each figure.
    • When creating a new figure, it re-uses the last display properties that were used.
    • To change the properties of one figure, you have first to select this figure. Clicking on the title bar of the figure is not enough, you have to click inside the figure (this is due to some limitations of the Matlab figures implementation).

    • When the new figure is selected, the controls in the Record tab are updated, and you can change the display properties for this figure.
  • There is currently a limitation with the continuous file viewer: it is not possible to review two continuous datasets at the same time. This is usually not a problem because we typically review the continuous files one after the other. It will be possible to open multiple data files after we import them in the database, which is what is really useful.

    layout_user.gif

Advanced

User setups

  • Keep the four figures previously created (MEG LT, MEG RF, EOG, 2D sensor).
  • In the menu "Window layout options" > User setups > New setup > "Test".

  • Close all the figures (using the Close all button).
  • Double-click again on the Link to raw file to open MEG sensors.
  • In the menu "Window layout options" > User setups > Test.
    It should restore your desktop exactly like it was when you saved it.

  • Note that the layout None is now selected. Using custom window configurations disables the automatic arrangement of the windows on the screen.

  • This feature is interesting for users who need to review numerous files everyday in a very specific way, for instance in the case of visual inspection of epilepsy recordings. It can save them a substantial amount of time to load their reviewing environment in just one click.

Advanced

Uniform amplitude scales

  • Set the display mode "butterfly" for the two MEG time series figures:
    Uncheck the first button in the Record tab.

  • With the button Uniform amplitude scale uniform_button.gif , in the Record tab, you can change the way the amplitude of multiple time series figures is scaled.

  • Selected: All the time series figures with similar units have the same y-axis scale, you can compare visually the amplitudes between two datasets.

    uniform_yes.gif

  • Not selected: Each figure is scaled independently to its own maximum amplitude.

    uniform_no.gif

Graphic bugs

If you observe any graphic problem with these displays, there might be an issue with the OpenGL drivers. You may try the following options:

  • Update the drivers for your graphics card.
  • Upgrade your version of Matlab.
  • Run the compiled version of Brainstorm (see Installation).

  • Turn off the OpenGL hardware acceleration: Menu File > Edit preferences > Software or Disabled.

  • Send a bug report to the Mathworks.


Tutorial 7: Event markers

Authors: Francois Tadel, Elizabeth Bock, John C Mosher

Lists of events

You probably noticed colored dots on top of the recordings in the MEG figure. They represent the event markers saved in this dataset. In this documentation, they may be referred to interchangeably as events, markers or triggers. Some are stimulus triggers that were generated by the stimulation computer (Psychtoolbox-3), others are the subject responses recorded from a button box. This tutorial shows how to manipulate these markers.

  • Open the MEG recordings for file AEF#01.

  • Make sure it is configured as presented here: Montage "CTF LT", [DC] button selected, 3s pages.
  • All the markers available in the file are listed in the Events section of the Record tab.
  • On the left, you have the groups of events and the number of occurrences for each group:
    • 200 standard audio stimulations

    • 40 deviant audio stimulations

    • 40 button responses: The subject presses a button with the right index finger when a deviant is presented. This is a very easy task, so all the deviants are detected.

  • On the right, you have the list of the time instants at which the selected event occurs.
  • These two lists are interactive. If you click on an event group (left list), it shows the corresponding occurrences in the right list. If you click on one particular event in the right list, the file viewer jumps to it. It works the other way as well: if you click on a dot representing an event in the MEG figure, the corresponding event group and occurrence are selected in the Record tab.

    events_list.gif

Adding events

The markers can represent either stimulation triggers or subject responses that were recorded during the acquisition. It can also be useful to add new markers during the analysis of the recordings, to identify events of interest that are not detected at the time of the recordings, such as artifacts (eye movements, heartbeats, subject movements) or specific patterns of brain activity (epileptic spikes).

  • Create a new group of events "Test" with the menu Events > Add group.

  • Click on this new category to select it. It contains no occurrences at the beginning (x0).

    events_newgroup.gif

  • Then place the time cursor (red vertical bar) where you want to add a new marker "Test".
  • Add a few occurrences with any of the three methods:
    • In the Record tab: Select the menu Events > Add / delete event

    • In the time series figure: Right-click > Add / delete event

    • In the time series figure: Press key E

  • If the display is too dense, it can be difficult to set the current time instead of selecting a channel. Note that you can click outside of the white area to select the time (on top of the figure), or use the shortcut Shift+click.

    events_newmarker.gif

  • Remove all the event occurrences in "Test", but not the group itself. Use any of the three methods:
    • In the Record tab: Select one or more event occurrences, press the Delete key.

    • In the time series figure: Click on an event dot and right-click > Add / delete event.

    • In the time series figure: Click on an event dot and press key E.

Extended events

You can also use this interface to create events that have a temporal extension, i.e., that last for more than one time sample. This can be used to define bad segments in the recordings.

  • In the time series window, select a time range (click + move).
  • Add an event: menus or key E.

  • The first occurrence you add in an event group defines its type: single time point (simple events), or time range (extended events). You cannot mix different types of events in a group. You get an error when you try to add a time segment in an event category that already contains a simple event.

    events_extended.gif

  • Remove the event group "Test": Click on it in the list and press the Delete key.

Bad segments

It is very common to have portions of the recordings heavily contaminated by events coming from the subject (eye blinks, movements, heartbeats, teeth clenching) or from the environment (stimulation equipment, elevators, cars, trains, building vibrations...). Some of them are well defined and can be removed efficiently; others are too complex to be modeled properly. For this last category, it is usually safer to mark the noisy segments as bad, and ignore them for the rest of the analysis.

To mark a segment of recordings as bad, the procedure is the same as for defining an extended event: select a time window, and then tag it as bad with one of the following methods.

  • In the Record tab: Select the menu Events > Reject time segment,

  • In the time series figure: Right-click > Reject time segment,

  • In the time series figure: Press Ctrl + B

It creates a new event group BAD, and adds an extended event to it. Later, when epoching this file (extracting time blocks and saving them in the database), the trials that contain a bad segment will be imported but tagged as bad, and ignored in the rest of the analysis.

You can create multiple groups of bad segments, for instance to identify different types of artifacts. Any event group that contains the tag "BAD" will be considered as indicating bad segments.

events_bad.gif

Advanced

Hide event groups

When you have too many events in the viewer, seeing the ones you are interested in can be difficult. This will be the case, for instance, after we detect the heartbeats in the signal: we will have one event every second, which is not always interesting to see. Each event category can be selectively hidden.

  • In the record tab, select the group of events you want to hide.
  • Use the menu Events > Show/Hide group, or press the shortcut Ctrl+H.

  • The event group is greyed out, and the corresponding markers disappear from the viewer.

    events_hide.gif

Advanced

Channel events

Some events can be attached to only one or a few channels. This is useful for instance for reviewing clinical EEG recordings, where neurologists are tagging epileptic activity only on a subset of the channels.

  • First select the channels of interest by clicking on them (the signals should turn red).
  • Place the time cursor where you want to create the event (click on the white or grey areas of the figure, or use the shortcut Shift+Click).
  • Right-click anywhere on the figure > Add/delete channel event, or shortcut Ctrl+E.

  • The event marker appears directly on the selected channel, and the name of the channel appears in the list of event times (in the Brainstorm window).

    events_channel.gif

  • Then you can deselect the channel (click again on it) or press the Escape key before creating a new event attached to a different channel.

  • If no channel is selected, you can proceed in this alternate way: position the time cursor where you want to create the event, right-click directly on the channel to which you want to attach the event, and select "Add/delete channel event".

Advanced

Notes

Additional comments can be added to an event, in case extra details should be displayed in the file viewer. This is also mostly useful for reviewing clinical recordings.

  • Right-click on any event marker or event text (or double-click on it) > Edit notes.

  • Enter the text to display next to the marker.

    events_note.gif

  • Alternatively, you can double-click on the event in the list of event times (in the Brainstorm window).

Advanced

Display modes

Three display modes are available for the event markers: dots, lines or hidden. Select the corresponding menu in the display options, or press CTRL+L multiple times.

events_dots.gif events_lines.gif

Advanced

Custom shortcuts

When reviewing long recordings and manually adding lots of events (e.g. when marking epileptic spikes), using the menus presented above is not convenient because they require many mouse clicks.

Using the menu Events > Edit keyboard shortcuts, you can associate custom events to the keys 1 to 9 of the keyboard. Define the name of the event type to create for each key, and then simply press the corresponding key to add/delete a marker at the current time position. Three options are available for each event type:

  • Simple: Create a simple event where the red time cursor is placed.

  • Full page: Create an extended event including the entire page of recordings, then move to the next page of recordings. This option was added for a specific application (sleep staging) that consists in labelling blocks of 30s through the entire file.

  • Extended: Create an extended event with the time window indicated on the right of the panel around the time cursor.

events_shortcuts.gif

Saving modifications

Now you can delete all the event groups that you've just created and leave only the initial ones (button, standard, deviant): select the event groups and press Delete, or use the menu Events > Delete group.

When you close the continuous file viewer, or the last figure that shows a part of the raw file, the dataset is unloaded, the file is released and the memory is freed.

If you edited the events for this file, you are asked whether to save the modifications or not. If you answer "Yes", the modifications are saved only in the database link (Link to raw file), not in the original file itself. Therefore, you would see the changes the next time you double-click on the "link to raw file" again, but not if you open the original .ds file in another protocol or with an external program.

events_save.gif

Note that the events you edit are not automatically saved until that moment. As you would do with any other type of computer work, save your work regularly to limit the damage caused by a program or computer crash. In the Record tab, use the menu File > Save modifications.

Advanced

Other menus

events_menus.gif

File

  • Import in database: Import blocks of the current continuous file into the database. Equivalent to a right click on the "Link to raw file" in the database explorer > Import in database.

  • Save modifications: Save the modifications made to the events in the database link.

  • Add events from file: Import events from an external file. Many file formats are supported.

  • Read events from channel: Read the information saved during the acquisition in a digital auxiliary channel (eg. a stimulus channel) and generate events.

  • Detect analog triggers: Detect transition events in an external analog channel, such as the voltage of a photodiode exposed to light or a microphone recording a sound.

  • Export all events: Save all the events in an external file.

  • Export selected events: Same as "Export all events" but exports only the selected events.

Events

  • Rename group: Rename the selected group of events. Shortcut: double-click.

  • Set color: Change the color associated with an event group.

  • Mark group as bad: Add a tag "bad" in the event name, so that it is considered as bad segment.

  • Sort groups: Reorders the event groups by name, or by time of the first occurrence.

  • Merge groups: Merge two event groups into a new group. Initial groups are deleted. To keep them, duplicate them before merging.

  • Duplicate groups: Make a copy of the selected event groups.

  • Convert to simple events: Convert a group of extended events (several time points for each event), to simple events (one time point). An option window asks you whether to keep the first, middle, last sample or every sample of the extended events.

  • Convert to extended events: Convert simple events to segments of a fixed length.

  • Combine stim/response: Create new groups of events based on stim/response logic.
    Example: Stimulus A can be followed by response B or C. Use this process to split the group A in two groups: AB, followed by B; and AC, followed by C.

  • Detect multiple responses: Finds the multiple responses (events that are too close to each other)

  • Group by name: Combine different event groups by name.

  • Group by time: Combine simultaneous events and creates new event groups.

  • Add time offset: Adds a constant time to all the events in a group, to compensate for a delay.

  • Edit keyboard shortcuts: Custom associations between keys 1..9 and events

  • Reject time segment: Mark the current time selection as bad.

  • Jump to previous/next event: Convenient way of browsing through all the markers in a group.
    Shortcut: Shift + left/right

Advanced

On the hard drive

The nodes "Link to raw file" you see in the database explorer are represented by .mat files on the hard drive. They contain all the header information extracted from the raw files, but do not contain a full copy of the recordings.

All the additional information created from the Brainstorm interface (event markers, bad channels, SSP projectors) is not saved back to the original raw files; it is only saved in the "Link to raw file". The names of these files start with the tag data_0raw_, and they share the same structure as all the imported epochs (introduced later in the tutorials).

To explore the contents of these link files, right-click on them and use the popup menus File > View file contents or File > Export to Matlab.

link_contents.gif

  • F: sFile structure, documents completely the continuous raw file, described below.
    (for imported epochs, .F contains directly the MEG/EEG recordings [Nchannels x Ntime])

  • Comment: String used to represent the file in the database explorer.

  • Time: First and last time points recorded in the continuous file.

  • ChannelFlag: [Nchannels x 1] list of good/bad channels (good=1, bad=-1)

  • DataType: Type of data stored in this file.

    • 'raw' = Link to a continuous raw file
    • 'recordings' = Imported epoch
  • Device: Acquisition system that recorded the dataset.

  • Events: Not used in the case of links.

  • Leff: Effective number of averages = Number of input files averaged to produce this file.

  • History: Describes all the operations that were performed with Brainstorm on this file. To get a better view of this piece of information, use the menu File > View file history.


sFile structure: This structure is passed directly to all the input/output functions on continuous files.

  • filename: Full path to the continuous raw file.

  • format: Format of the continuous raw file.

  • device: Acquisition system that recorded the dataset. Same as Link.Device.

  • condition: Name of the folder in which this file is supposed to be displayed.

  • comment: Original file comment.

  • byteorder: Endianness, 'l' = Little Endian, 'b' = Big Endian

  • prop: Structure, basic properties of the recordings

    • times: First and last time points recorded in the continuous file.

    • sfreq: Sampling frequency

    • Leff: Number of files that were averaged to produce this file.

    • currCtfComp: Level of CTF compensation currently applied.

    • destCtfComp: Level of CTF compensation in which we want to view the file (usually: 3)

  • epochs: Array of structures used only in the case of continuous recordings saved as "epochs"

  • events: Array of structures describing the event markers in the file, one structure per event group:

    • label: Name of the event group

    • color: [r,g,b] Color used to represent the event group, in Matlab format

    • epochs: [1 x Nevt] Indicate in which epoch the event is located (index in the sFile.epochs array), or 1 everywhere for files that are not saved in "epoched" mode.
      Nevt = number of occurrences of the event = number of markers in this group

    • times: [1 x Nevt] Time in seconds of each marker in this group (times = samples / sfreq), aligned on exact sample instants (times = round(times*sfreq)/sfreq).
      For extended events: [2 x Nevt], first row = start, second row = end

    • reactTimes: Not used anymore

    • select: Indicates if the event group should be displayed in the viewer.

    • channels: {1 x Nevt} Cell array of cell-arrays of strings. Each event occurrence can be associated with one or more channels, by setting .channels{iEvt} to a cell-array of channel names.

    • notes: {1 x Nevt} Cell-array of strings: additional comments for each event occurrence

  • header: Structure describing additional header information, depending on the original file format.

  • channelflag: List of good/bad channels, same information as Link.ChannelFlag.
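
As an illustration of the events sub-structure, a simple event group with three occurrences could look like the following hand-written sketch (the values are made up; at the 600Hz sampling rate used here, they all fall exactly on sample instants):

  evt.label      = 'Test';
  evt.color      = [0, 0.8, 0];
  evt.epochs     = [1, 1, 1];              % continuous file: always 1
  evt.times      = [1.000, 2.500, 4.200];  % simple events: [1 x Nevt], in seconds
  evt.reactTimes = [];
  evt.select     = 1;
  evt.channels   = {{}, {}, {}};           % no channel attached to any occurrence
  evt.notes      = {'', '', ''};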

Useful functions

  • in_bst_data(DataFile, FieldsList): Read the structure for a data file.
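
As a short example of how these structures can be accessed from a script (the file name below is hypothetical), this reads a link file and lists its event groups:

  LinkFile = 'Subject01/S01_AEF_20131218_01_600Hz/data_0raw_S01_AEF_20131218_01_600Hz.mat';
  DataMat  = in_bst_data(LinkFile);        % read the structure described above
  sFile    = DataMat.F;
  for iEvt = 1:length(sFile.events)
      fprintf('%-10s : %d occurrences\n', ...
          sFile.events(iEvt).label, size(sFile.events(iEvt).times, 2));
  end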



Tutorial 8: Stimulation delays

Authors: Francois Tadel, Elizabeth Bock

The event markers that are saved in the data files might have delays. In most cases, the stimulation triggers saved by the acquisition system indicate when the stimulation computer requested a stimulus to be presented. After this request, the equipment used to deliver the stimulus to the subject (projector, screen, sound card, electric or tactile device) always introduces some delays. Therefore, the stimulus triggers are saved before the instant when the subject actually receives the stimulus.

For accurate timing of the brain responses, it is very important to estimate these delays precisely and if possible to account for them in the analysis. This tutorial explains how to correct for the different types of delays in the case of an auditory study, if the output of the sound card is saved together with the MEG/EEG recordings. A similar approach can be used in visual experiments using a photodiode.

Note for beginners

This entire tutorial can be considered as advanced. It is very important to correct for the stimulation delays in your experiments, but if you are not using any stimulation device, you do not need this information. However, if you skip the entire tutorial, you will have uncorrected delays and it will be more difficult to follow along the rest of the tutorials. Just go quickly through the actions that are required and skip all the explanations.

Advanced

Documented delays

Reminder: The full description of this auditory dataset is available on this page: Introduction dataset.

Delay #1: Production of the sound

  • The stimulation software generates the request to play a sound, the corresponding trigger is recorded in the stim channel by the MEG acquisition software.
  • Then this request goes through different software layers (operating system, sound card drivers) and the sound card electronics. The sound card produces an analog sound signal that is sent at the same time to the subject and to the MEG acquisition system. The acquisition software saves a copy of it in an audio channel together with the MEG recordings and the stim channel.
  • The delay can be measured from the recorded files by comparing the triggers in the stim channel and the actual sound in the audio channel. We measured delays between 11.5ms and 12.8ms (std = 0.3ms). These delays are not constant, we should adjust for them. Jitters in the stimulus triggers cause the different trials to be aligned incorrectly in time, hence "blurred" averaged responses.
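
For reference, if stimTimes are the trigger times read from the stim channel and soundTimes the corresponding sound onsets detected on the audio channel (both in seconds, one value per beep, hypothetical variables), this delay and its jitter can be estimated with a simple subtraction:

  delays = soundTimes - stimTimes;
  fprintf('Delay #1: %.1f +/- %.1f ms\n', 1000 * mean(delays), 1000 * std(delays));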

Delay #2: Transmission of the sound

  • The sound card plays the sound, the audio signal is sent with a cable to two transducers located in the MEG room, close to the subject. This causes no observable delay.
  • The transducers convert the analog audio signal into a sound (air vibration). Then this sound is delivered to the subject's ears through air tubes. These two operations cause a small delay.
  • This delay cannot be estimated from the recorded signals: before the acquisition, we placed a sound meter at the extremity of the tubes to record when the sound is delivered. We measured delays between 4.8ms and 5.0ms (std = 0.08ms). At a sampling rate of 2400Hz, this delay can be considered constant, so we will not compensate for it.

Delay #3: Recording of the signals

  • The CTF MEG systems have a constant delay of 4 samples between the analog channels (MEG/EEG, auditory, etc) and the digital channels (stim, buttons, etc), because of an anti-aliasing filter that is applied to the former and not the latter. At 2400Hz, 4 samples translate to a constant 'negative' delay of about 1.7ms, meaning that the analog channels are delayed compared to the stim channels.

  • Many acquisition devices (EEG and MEG) have similar hidden features: read the documentation of your hardware carefully before analyzing your recordings.


delays_sketch_small.gif


Evaluation of the delay

Let's display simultaneously the stimulus channel and the audio signal.

  • Right-click AEF#01 link > Stim > Display time series: The stim channel is UPPT001.

  • Right-click AEF#01 link > ADC V > Display time series: The audio channel is UADC001.

  • In the Record tab, set the duration of display window to 0.200s.

  • Jump to the third event in the "standard" category.
  • We can observe that there is a delay of about 13ms between the time when the stimulus trigger is generated by the stimulation computer and the moment when the sound is actually played by its sound card (delay #1).

    delays_evaluate.gif

  • What we want to do is to discard the existing triggers and replace them with new, more accurate ones created based on the audio signal. We need to detect the beginning of the sound on analog channel UADC001.
  • Note that the oscillation of the sound tone is poorly represented here. The frequency of this standard tone is 440Hz. It was correctly captured by the original recordings at 2400Hz, but not in the downsampled version we use in the introduction tutorials. It is still good enough for detecting the onset of the stimulus.

Detection of the analog triggers

Detecting the standard triggers

Run the detection of the "standard" audio triggers on channel UADC001 for file AEF#01.

  • Keep the same windows open as previously.
  • In the Record tab, select the menu File > Detect analog triggers.

  • This opens the Pipeline editor window with the process Events > Detect analog triggers selected. This window will be introduced later, for now we will just use it to configure the process options. Configure it as illustrated below:

    delays_detect.gif

Advanced


Explanation of the options (for future reference, you can skip this now):

  • Event name: Name of the new event category created to store the detected triggers.
    We can start with the event "standard", and call the corrected triggers "standard_fix".

  • Channel name: Channel on which to perform the detection (audio channel UADC001).

  • Time window: Time segment on which you want to detect analog triggers.
    Leave the default time window or check the box "All file", it will do the same thing.

  • Amplitude threshold: A trigger is created whenever the amplitude of the signal rises above X times the standard deviation of the signal over the entire file. Increase this value if you want the detection to be less sensitive (a simplified sketch of this detection logic is given after this list).

  • Min duration between two events: If the event we want to detect is an oscillation, we don't want to detect a trigger at each cycle of this oscillation. After we detect one, we stop the detection for a short time. Use a value that is always between the duration of the stimulus (here 100ms) and the inter-stimulus interval (here > 700ms).

  • Apply band-pass filter before the detection: Use this option if the effect you are trying to detect is more visible in a specific frequency band. In our case, the effect is obvious in the broadband signal, we don't need to apply any filter.

  • Reference: If you have an approximation of the triggers timing, you can specify it here. Here we have the events "standard" and we want to detect a trigger in their neighborhood.
    If we do not use this option, the process creates only one new event group with all the detected tones, without distinction between the standard and deviant tones.

  • Detect falling edge (instead of rising edge): Detects the end of the tone instead of the beginning.

  • Remove DC offset: If the signal on which we perform the detection does not oscillate around zero or has a high continuous component, removing the average of the signal can improve the detection. This should be selected when using a photodiode with a pull-up resistor.

  • Enable classification: Tries to automatically classify the different types of events that are detected based on the morphology of the signal in the neighborhood of the trigger.
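
For intuition, the core of this detection can be sketched in a few lines of Matlab. This is only a simplified illustration of the threshold and refractory logic described above, not the actual Brainstorm process: it ignores the band-pass, reference and classification options, and the threshold factor and minimum delay below are arbitrary example values. The same logic applies to a photodiode channel in a visual experiment.

    % Simplified sketch of the analog trigger detection (illustration only)
    % x  : analog channel (e.g. UADC001), [1 x nTime]
    % Fs : sampling frequency in Hz
    x = x - mean(x);                        % optional: remove the DC offset
    threshold = 4 * std(x);                 % X times the std over the whole file
    minDelay  = round(0.5 * Fs);            % min duration between two events (0.5s)
    iDetected = [];
    iLast     = -minDelay;
    for i = 1:length(x)
        if (abs(x(i)) > threshold) && (i - iLast >= minDelay)
            iDetected(end+1) = i;           % onset of a new analog trigger
            iLast = i;                      % start the refractory period
        end
    end
    eventTimes = (iDetected - 1) / Fs;      % trigger latencies in seconds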

Results of the detection

  • Navigate through a few of the new "standard_fix" events to evaluate if the result is correct. You can observe that the corrected triggers are consistently detected after the rising portion of the audio signal, two samples after the last sample where the signal was flat.
  • This means that we are over-compensating delay #1 by about 3.3ms (2 samples at 600Hz). But at least this delay is constant and will not affect the analysis. We can count it as a constant delay of -3.3ms.

    delays_results.gif

Detecting the deviant triggers

  • Repeat the same operation for the deviant tones.
  • In the Record tab, select the menu File > Detect analog triggers.

    delays_deviant.gif

Some cleaning

  • We will use only the corrected triggers, so we can delete the original ones to avoid any confusion.
  • Delete the event groups "deviant" and "standard" (select them and press the Delete key).

  • Rename the group "deviant_fix" into "deviant" (double-click on the group name).
  • Rename the group "standard_fix" into "standard".
  • Close all: Answer YES to save the modifications.

    delays_final.gif

Repeat on acquisition run #02

Repeat all the exact same operations on the link to file AEF#02:

  • Right-click AEF#02 link > Stim > Display time series: The stim channel is UPPT001.

  • Right-click AEF#02 link > ADC V > Display time series: The audio channel is UADC001.

  • In the Record tab, select menu File > Detect analog triggers: standard_fix

  • In the Record tab, select menu File > Detect analog triggers: deviant_fix

  • Check that the events are correctly detected.
  • Delete the event groups "deviant" and "standard" (select them and press the Delete key).

  • Rename the group "deviant_fix" into "deviant" (double-click on the group name).
  • Rename the group "standard_fix" into "standard".
  • Close all: Answer YES to save the modifications.

Delays after this correction

We compensated for the jittered delays (delay #1), but not for the hardware delays (delay #2). Note that delay #3 is no longer an issue since we are not using the original stim markers anymore, but the more accurate audio signal. The final delay between the "standard_fix" triggers and the moment when the subject receives the stimulus is now the combination of delay #2 and the over-compensation.

Final constant delay: 4.9 - 3.3 = 1.6ms

We decide not to compensate for this delay because it is very short and does not introduce any jitter in the responses. It is not going to change anything in the interpretation of the data.

Advanced

Detection of the button responses

The subject presses a button with the right index finger when a deviant is presented. We don't really need to correct this category of events because it is already correct. You can skip this section if you are not interested in parsing digital channels.

The digital channel Stim/UDIO001 contains the inputs from the response button box (an optical device, with negligible delay). Each bit of the integer value on this channel corresponds to the activation of one button. We can read this channel directly to get accurate timing for the button presses.
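
Since each button is wired to one bit of this integer value, the channel values can be decoded with simple bit operations. A minimal illustration (the value below is just an example):

    % Decode which buttons are active from one value of a digital channel
    value    = 64;                           % example value read on UDIO001
    iPressed = find(bitget(value, 1:16))     % indices of the active bits
    % 64 = 2^6, so bit #7 is set: the button wired to bit 7 was pressed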

  • Right-click AEF#01 link > Stim > Display time series: The response channel is UDIO001.

  • In the Record tab: Set the page duration to 3 seconds.

    delays_button.gif

  • Note on the DC removal: You may see the base value of the UDIO001 channel "below" the zero line. This is an effect of the DC correction that is applied on the fly to the recordings: the average of the signals over the current page is subtracted from them. To restore the real values, you can uncheck the [DC] button in the Record tab. Alternatively, just remember that the reference line of a channel does not necessarily mean "zero" when the DC removal option is on.

  • In the Record tab, select menu File > Read events from channel: UDIO001 / Value

    delays_button_detect.gif

  • Reject events shorter than X samples: This option is not needed here, but can become useful when the transitions between the values of the stim channels are not as clean as the sharp steps in this tutorial dataset. For example:

    • The individual bits are not changing exactly at the same time,
    • Values added (bit-wise) at transitions between 2 non-zero values when downsampling (e.g. CTF),
    • "Button bouncing": non-ideal behavior of any switch which generates multiple transitions from a single input.
  • You get a new event category "64", which is the value of UDIO001 at the detected transitions. There are 40 of them, one for each button press. We can use this as a replacement for the original button category.

    delays_button_fix.gif

  • To make things clearer: delete the button group and rename 64 into button.

    delays_button_final.gif

  • Close all: Answer YES to save the modifications.
  • Optionally, you can repeat the same operation for the other run, AEF#02. But we will not use the "button" markers in the analysis, so it is not very useful.
  • Note that these events will have delay #3 (when compared to MEG/EEG) since they are recorded on a digital channel.

Advanced

Another example: visual experiments

We have discussed here how to compensate for the delays introduced in an auditory experiment, using a copy of the audio signal saved in the recordings. A similar approach can be used for other types of experiments. Another typical example is the use of a photodiode in visual experiments.

When sending images to the subject using a screen or a projector, we usually have jittered delays coming from the stimulation computer (software and hardware) and due to the refresh rate of the device. These delays are difficult to account for in the analysis.

To detect accurately when the stimulus is presented to the subject, we can place a photodiode in the MEG room. The diode produces a change in voltage when presented with a change in light input, for example black to white on the screen. This is typically managed with a small square in the corner of the stimulus screen, which turns white when the stimulus appears and stays black at all other times. The signal coming from this photodiode can be recorded together with the MEG/EEG signals, just like we did here for the audio signal. Depending on the photodiode, it is recommended to use a pull-up resistor when recording the signal. Then we can detect the triggers on the photodiode output channel using the menu "Detect analog triggers", including the 'Remove DC offset' option.

delays_video.png



Tutorial 9: Select files and run processes

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

The Brainstorm window includes a graphical batching interface. With the two tabs Process1 and Process2 in the lower part of the window, you can select files from the database explorer and assemble a processing pipeline. Most of the operations available in the interface can also be executed this way, including everything we've been doing with Brainstorm so far.

On the other hand, some features are only available this way. This is the case for the frequency filters we will need for the pre-processing of our auditory recordings. This tutorial is a parenthesis explaining how to select files and run processes; we will resume the cleaning of the recordings in the next tutorial.

Selecting files to process

The tab Process1 contains an empty box in which you can drag and drop any number of files or folders from the database explorer. The easiest way to understand how it works is to try it.

  • Try to drag and drop, in Process1, all the nodes you currently have in your database explorer.
  • You will see that it accepts all the folders and all the recordings, but not the channel files.
  • When you add a new node, the interface counts the files of the selected type that each node contains. The number in brackets next to each node represents the number of data files found in it.
  • On top of the list, a comment shows the total number of files that are currently selected.

    process1_example.gif

  • The buttons on the left side allow you to select what type of file you want to process: Recordings, sources, time-frequency, other. When you select another button, all the counts are updated to reflect the number of files of the selected type that are found for each node.
  • Right now, if you select another file type, it would show only "0" everywhere because there are no sources or time-frequency decompositions available in the database yet.

    process1_sources.gif

  • To remove files from the Process1 list:
    • Select the nodes to remove (holding Shift or Ctrl key) and press the Delete key.

    • Right-click on the list > Clear list

Filter by name

When you have lots of files in a folder, like multiple source reconstructions or time-frequency files for each trial, it is difficult to grab just the ones you are interested in. After selecting your folders in the Process1 box, you can refine the selection with the Filter search box at the bottom-right corner of the window.

  • The example below shows how to select the data files corresponding to the noise recordings: by typing "Noise" in the search box and selecting the option "Search file paths". We cannot perform the search "by name" because all the data files have the same name "Link to raw file".

    process1_search.gif

  • Reminder: To see the file name corresponding to a node in the database, leave your mouse over it for a few seconds. You can do this both in the database explorer and the Process1 list.

    process_filename.gif

The options offered in the Filter menu are:

  • Search file paths: Look for the string in the full file paths (including their relative path).

  • Search names: Look for the string in the names of the files, ie. what is displayed in the database explorer to represent them (the .Comment field).

  • Search parent names: Extends the search to the name of the parent files (applicable only to source and time-frequency files, which can depend on a data file).

  • Select files: Only the files that contain the string are selected.

  • Exclude files: Only the files that DO NOT contain the string are selected.

  • Reset filters: Removes the current file filters applied on Process1 and Process2.

  • Case insensitive: Note that the search is not sensitive to case.

  • Boolean logic: You can combine different keywords to make a more precise search using advanced search queries. See the following section for more information.

Selecting processes

  • Clear the file list and the search filters.
  • Select all three datasets we have linked to our protocol.
    You can select the three "link to raw file" nodes, the three folders or the entire subject node.

  • Click on the [Run] button at the bottom-left corner of the Process1 tab.

  • The Pipeline editor window appears. You can use it to create an analysis pipeline, i.e., a list of processes that are applied to the selected files one after the other. The first button in the toolbar shows the list of processes that are currently available. If you click on a menu, it's added to the list.

  • Some menus appear in grey. This means that they are not designed to be applied to the type of data that you have in input, or at the end of the current pipeline.
  • In the current example, we have a file with the type "continuous raw recordings", so we have access mostly to menus to manipulate event markers, run cleaning procedures and import data blocks. You can recognize a few operations that we executed in the previous tutorials: "Event > Read from channel" and "Event > Detect analog triggers".

    process1_addprocess.gif

  • When you select a process, a list of options specific to this process is shown in the window.
  • To delete a process: Select it and press the Delete key, or use the [X] button in the toolbar.

  • After selecting a first process, you can add another one. The output of the first process will be passed to the second process without giving back the control to the user. This is how you can build a full analysis pipeline with the interface.
  • After adding a few processes, you can move a process up or down in the pipeline with the [up arrow] and [down arrow] buttons in the toolbar. Click on a process in the pipeline to edit its options.
  • Select and delete a few processes to understand how this interface works. Just do not click on RUN.

Plugin structure

All the menus available in the pipeline editor are actually plugins for Brainstorm. The processes are functions that are independent from each other and automatically detected when starting Brainstorm.

Any Matlab script that is added to the plugin folder (brainstorm3/toolbox/process/functions/) and has the right format will automatically be detected and made available in the GUI. This mechanism makes it easy for external contributors to develop their own code and integrate it in the interface.
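
As an indication only, a process function has roughly the following shape. This is a stripped-down sketch loosely based on the processes distributed with Brainstorm: the exact fields, options and categories to define are described in the tutorial linked below, and the safest starting point is to copy an existing process_*.m file rather than typing this from scratch.

    function varargout = process_example( varargin )
    % PROCESS_EXAMPLE: Skeleton of a user-defined process (illustration only).
    eval(macro_method);
    end

    %% ===== GET DESCRIPTION =====
    function sProcess = GetDescription()
        % Description of the process, used to build the menus and the options panel
        sProcess.Comment     = 'My custom process';
        sProcess.Category    = 'Custom';
        sProcess.SubGroup    = 'Examples';
        sProcess.Index       = 1000;
        sProcess.InputTypes  = {'data'};
        sProcess.OutputTypes = {'data'};
        sProcess.nInputs     = 1;
        sProcess.nMinFiles   = 1;
    end

    %% ===== RUN =====
    function OutputFiles = Run(sProcess, sInputs)
        % Called when the pipeline is executed: process the input files here
        OutputFiles = {};
    end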

More information: How to write your own process

To see where the function corresponding to a process is on the hard drive: select the process in the pipeline editor, then leave your mouse for a few seconds over its title.

  • process_path.gif

Note for beginners

Everything below is advanced documentation, you can skip it for now.




Advanced

Search Database

Sometimes when working with huge protocols, you can get lost in your database tree. Filtering from the process box, as introduced in the previous section, is one way to select the files you are looking for, but there is also a more straightforward way to search for files in your database. At the right, below the protocol selection dropdown, you can click on the magnifying glass to open the search dialog.

  • search_new.png

From there, you can create a new search query from the GUI, or type / paste an existing search query string (see the following section for more details). Let's select "New Search" to create a new query from the GUI.

  • search_dialog.png

From this menu, you can create a search query to apply on your active protocol. It has different options:

  • Search by: The file metadata to use for the search.

    • Name: Name of the file in Brainstorm
    • File type: Type of the file, see dropdown when selected for possible values
    • File path: Path of the file in the Brainstorm database folder
    • Parent name: Name of any parent file in the database tree (e.g. Subject or Folder)
  • Equality: Type of equality to apply.

    • Contains: File metadata contains the entered value
    • Contains (case): Same as contains, but case sensitive
    • Equals: Exact equality, the file metadata is equal to the entered value
    • Equals (case): Same as equals, but case sensitive
  • Not: Whether to invert the selected equality, e.g. DOES NOT CONTAIN vs CONTAINS.

  • Search for: The value to search for.

  • Remove: To remove the search row if not needed anymore.

  • + and: To add a search row with the AND boolean logic. If you have two rows A and B, the returned files will match both searches A and B.

  • + or: To add a search row with the OR boolean logic. If you have two rows A and B, the returned files will match either search A or search B (or both).

In the above example, we are looking for raw files (File type = Raw data) whose parent name contains the word "noise". This allows us to search for raw noise recordings.

  • search_noise_result.png

Notice that you now have multiple tabs in your Brainstorm database. The "Database" tab contains all files in your protocol, whereas the "noise" tab only contains the files that pass the search and their parents. You can have multiple searches/tabs active so that you can easily create pipelines by dragging and dropping different search results in the process box. Do keep in mind that if you drag and drop a parent object in the process box (e.g. Subject01) with an active search, only files that pass the active search will be processed by the pipeline.

Once a search is created, you can interact with it in different ways. You can right click on the tab and Edit the search on the fly from the GUI, Copy the search to clipboard as a query string to use it in a script, or Close the search.

  • search_right_click_tab.png

You can also click on the magnifying glass when a search is active to get more options such as Saving the search for later use and Generating a process call to apply this search in a script.

  • search_edit_options.png

If you click Generate process call, a line of script will be generated for you to use your search query as a process in a script. It will also be copied to clipboard.

  • search_process.png

Notice that your search was converted to a query string:

  • ([parent CONTAINS "noise"] AND [type EQUALS "RawData"])

This advanced query syntax is described in the following section.

Advanced search queries

For advanced users, you can write more complex search queries that can combine multiple keywords and types of keywords using boolean logic. You can do this using the Brainstorm search GUI and then copy your search as text to re-use later. These queries work for both database searches and process filters. The syntax is rigid such that the order of the commands is important, so we recommend you use the search GUI whenever possible to avoid errors. Search queries can contain the following types of elements:

  • Search parameters: These are simple searches that are on a specific type of value. They need to be written in [square brackets]. They look like the following:

    • [searchFor EQUALITY NOT "value"]
    • searchFor: Which field of the file metadata to search. It can have the following values, in lower case:

      • Name: Searches using the file name in Brainstorm
      • Type: Searches using the file type in Brainstorm
      • Path: Searches using the file path in the Brainstorm database folder
      • Parent: Searches using the name of the parent files in the Brainstorm database tree
    • Equality: The type of equality you want to use to compare the file value to the searched value. It can have the following values, in upper case:

      • CONTAINS: Whether the searchFor field contains the text "value"
      • CONTAINS_CASE: Same as CONTAINS, but case sensitive
      • EQUALS: Whether the searchFor field exactly equals the text "value"
      • EQUALS_CASE: Same as EQUALS, but case sensitive
    • NOT: (optional) add this reserved keyword to return the opposite results of the search, so for example, all files that do NOT CONTAIN the text "value".

    • "value": the text you want to search for, in double quotes.

  • Boolean operators: These are used to group together search parameters and search blocks using boolean logic. Considering search parameters a, b and c, the following will return files that pass both searches a and b, or that do not pass search c:

    • (a AND b) OR NOT c
    • AND: This combines search parameters and blocks such that both conditions have to be met.

    • OR: This combines search parameters and blocks such that either condition has to be met.

    • NOT: This precedes a search block or parameter such that the condition result is reversed. So if a condition had to be met, it now has to not be met.

    • Important note: AND and OR operators cannot be mixed together (you cannot have both in the same search block), because this would create ambiguities.

  • Search blocks: These are combinations of search parameters and boolean operators, wrapped in (round brackets). You cannot have different boolean operators in the same block.

Example

(([name CONTAINS "test1"] AND [type EQUALS "Matrix"]) OR NOT [parent CONTAINS "test2"])

Effect: This will match all matrix files whose name contains the text "test1", as well as all files whose parent does not contain the text "test2".

Limitations of the GUI

The GUI does not support multiple nested search blocks. It only allows for one OR block followed by one AND block. If your query is more advanced than this, you will not be able to edit it with the search GUI. We recommend you use the process filter box instead.

Advanced

Saving a pipeline

After preparing your analysis pipeline by listing all the operations to run on your input files, you can either click on the [Run] button, or save/export your pipeline. The last button in the toolbar offers a list of menus to save, load and export the pipelines.

  • pipeline_example.gif

  • Load: List of pipelines that are saved in the user preferences on this computer.

  • Load from .mat file: Import a pipeline from a pipeline_...mat file.

  • Save: Save the pipeline in the user preferences.

  • Save as .mat matrix: Exports the pipeline as a Matlab structure in a .mat file. This allows different users to exchange their analysis pipelines, or a single user to transfer them between computers.

  • Generate .m script: This option generates a Matlab script.

  • Delete: Remove a pipeline that is saved in the user preferences.

  • Reset options: Brainstorm automatically saves the options of all the processes in the user preferences. This menu removes all the saved options and sets them back to the default values.

Advanced

Automatic script generation

Here is the Matlab script that is generated for this pipeline.

pipeline_script.gif

Reading this script is easy: input files at the top, one block per process, one line per option. You can also modify it to add personal code, loops or tests. Many features are still missing in the pipeline editor, but the generated scripts are easy enough for users with basic Matlab knowledge to edit and improve.
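
As an indication, the generated scripts typically follow the structure sketched below. The file path, process name and option names here are illustrative placeholders: the generator fills in the real ones corresponding to your pipeline.

    % Input files (placeholder path, filled in from the Process1 selection)
    sFiles = {'Subject01/AEF_run01/data_0raw_run01.mat'};
    % Start a new report
    bst_report('Start', sFiles);
    % One block per process, one line per option (placeholder process and options)
    sFiles = bst_process('CallProcess', 'process_select_tag', sFiles, [], ...
        'tag',    'Noise', ...
        'search', 1, ...
        'select', 1);
    % Save and display the execution report
    ReportFile = bst_report('Save', sFiles);
    bst_report('Open', ReportFile);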

Running this script from Matlab or clicking on the [Run] button of the pipeline editor produces exactly the same results. In both cases there is no interaction with the script: it can be executed without any direct supervision. You just get a report at the end that describes everything that happened during the execution.

These scripts cannot be reloaded in the pipeline editor window after being generated. If you work on a long analysis pipeline, save it in your user preferences before generating the corresponding Matlab script.

Advanced

Process: Select files with tag

Since we are discussing the file selection and the pipeline execution, we can explore a few more available options. We have seen how to filter the files in the Process1 box using the Filter search box. We can get to the exact same result by using the process File > Select files: By tag before the process you want to execute, to keep only a subset of the files that were placed in the Process1 list.

It is less convenient in interactive mode because you don't immediately see the effect of your file filter, but it can be very useful when writing scripts. You can also combine search constraints by adding the same process multiple times in your pipeline, which is not possible with the search box.

  • Make sure you still have the three datasets selected in the Process1 list.
  • Select the process: File > Select files: By tag

  • Select the options: Search: "Noise", Search the file names, Select only the files with the tag.
  • Click on [Run] to execute the process.

    process1_select.gif

  • This process is useless if it is not immediately followed by another process that does something with the selected files. It does nothing but select the files; still, we can verify with the report viewer that the operation was actually executed.

Advanced

Report viewer

Every time the pipeline editor is used to run a list of processes, a report is created that logs all the messages generated during the execution. These reports are saved in the user home folder: $HOME/.brainstorm/reports/.

The report viewer shows, as an HTML page, some of the information saved in this report structure: the date and duration of execution, the list of processes, and the input and output files. It reports all the warnings and errors that occurred during the execution.

The report is displayed at the end of the execution only if more than one process was executed, or if an error or a warning was reported. Here, a single process was executed without any error or warning, so nothing is displayed.

You can always explicitly open the report viewer to show the last reports: File > Report viewer.

reports.gif

When running processes manually from a script, the calls to bst_report explicitly indicate when the logging of the events should start and stop.

You can add images to the reports for quality control using the process File > Save snapshot, and send the final reports by email with the process File > Send report by email.

With the buttons in the toolbar, you can go back to the previous reports saved from the same protocol.

More information: Scripting tutorial

Advanced

Error management

  • Select the same files and same process: File > Select files: By tag

  • Note that the options you used during the previous call are now selected by default.
  • Instead of "Noise", now search for a string that doesn't exist in the file name, such as "XXXX".

    search_xxxx.gif

  • Click on [Run] to execute the process. You will get the following error.

    search_error.gif

  • If you open the report viewer, it should look like this.

    search_report.gif

Advanced

Control the output file names

If you run two processes that have different parameters but produce files with exactly the same paths and names, you would not be able to tell the outputs apart with this selection process. However, immediately after calling any process, you can add the process File > Add tag to tag one specific set of files, so that you can easily re-select them later.

Example: if you run the time-frequency decomposition twice on the same files with different options, tag each set of output files with a different tag right after computing it.

  • addTag.gif

Additional documentation



Tutorial 10: Power spectrum and frequency filters

Authors: Hossein Shahabi, Francois Tadel, Elizabeth Bock, John C Mosher, Richard Leahy, Sylvain Baillet

We are now going to process our continuous recordings to remove the main sources of noise. Typically, we can expect contaminations coming from the environment (power lines, stimulation equipment, building vibrations) and from the subject (movements, blinks, heartbeats, breathing, teeth clenching, muscle tension, metal in the mouth or the body). In this tutorial, we will focus first on the noise patterns that occur continuously, at specific frequencies.

We can correct for these artifacts using frequency filters. Usually we prefer to run these notch and band-pass filters before any other type of correction, on the continuous files. They can be applied to the recordings without much supervision, but they may create important artifacts at the beginning and the end of the signals. Processing the entire continuous recordings at once instead of the imported epochs avoids adding these edge effects to all the trials.

Evaluation of the noise level

Before running any type of cleaning procedure on MEG/EEG recordings, we always recommend starting with a quick evaluation of the noise level. An easy way to do this is to estimate the power spectrum of all the signals over the entire recordings.

  • Clear the list of files in the Process1 tab.
  • Select the three datasets we have linked to our protocol.
    You can select the three "link to raw file" nodes, the three folders or the entire subject node.

    psd_select_files.gif

  • Click on [Run] to open the pipeline editor window.
  • Select the process "Frequency > Power spectrum density (Welch)"

  • This process evaluates the power of the MEG/EEG signals at different frequencies, using Welch's method (see Wikipedia or MathWorks). It splits the signals into overlapping windows of a given length, calculates the Fourier transform (FFT) of each of these short segments, and averages the power of the FFT coefficients over all the overlapping windows.

    psd_options.png

  • Set the options as follows (click on [Edit] for the additional options):
    • Time window: [All file]
      Portion of the input file you want to use for estimating the spectrum.
      It is common to observe huge artifacts at the beginning or the end of the recordings, in this case you should exclude these segments from the calculation of the PSD.
      In practice, using just the first 100s or 200s of the file can give you a good enough impression of the quality of the recordings.

    • Window length: 4 seconds
      Estimator length = length of the overlapping time windows for which we calculate the FFT. The number of time samples in the estimator is the same as the number of frequency bins in the output file. Increasing this parameter increases the output frequency resolution (distance between two frequency bins) but degrades the stability of the estimator, as it also decreases the total number of averaged time windows. A Hamming window is applied to each estimator window before the computation of the FFT. See forum post: Effect of window length on the PSD

    • Overlap: 50%
      How much overlap do we want between two consecutive time windows.

    • Units: Physical: U^2/Hz
      Scaling of the spectrum. This only affects the values on the Y axis of the spectrum. Physical units should be used in most cases.
      "Normalized" gives normalized frequencies from 0 to 2pi (Hz·s).
      "Before Nov 2020" reproduces the older Brainstorm spectrum scaling (see this forum post).

    • Sensor types or names: MEG
      Defines the list of channels (names or types) on which you want to apply the process.

    • Frequency definition: Matlab's FFT default
      You have the option to directly use the frequency binning returned by the FFT, or run an additional step of averaging these bins in larger frequency bands. Note that you can freely edit these frequency bands.

    • Output: Save individual PSD value.
      This option will separately estimate the PSD for each of the three files in input, and create three files in output. If you select the other option (save average), it calculates the same three files but averages them on the fly and saves only one file in the database.

    • Implementation details: See function brainstorm3/toolbox/timefreq/bst_psd.m

  • Click on [Run] to start the execution of the process.
  • Troubleshooting: If you get "Out of memory" errors, try to run this PSD estimation on a shorter time segment. For example, set the time window to [0,100s] instead of the full file. This process starts by loading all the needed recordings in memory, you might not have enough memory available on your system to fit the entire dataset.

  • It produces three new files, which appear as depending on the three datasets in the database explorer. The comment of each file indicates how many overlapping windows could be used to estimate the PSD: "179/4000ms" means 179 windows of 4s each (716s in total). With the 50% overlap, this adds up to a little less than twice the file length (360s), as detailed in the sketch below the figure.

    psd_output_files.gif
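
The window count and the frequency resolution reported with these files can be checked with a little arithmetic. This is only the bookkeeping for this dataset, not Brainstorm's PSD implementation:

    % Welch estimator bookkeeping for one 360s run sampled at 600Hz
    Fs       = 600;                  % sampling frequency (Hz)
    fileDur  = 360;                  % file duration (s)
    winDur   = 4;                    % window length (s)
    overlap  = 0.5;                  % 50% overlap between consecutive windows
    nWindows = floor((fileDur - winDur) / (winDur * (1 - overlap))) + 1   % = 179
    nfft     = 2^nextpow2(winDur * Fs);   % FFT length, rounded up to a power of 2
    freqRes  = Fs / nfft             % = 0.1465 Hz (discussed in the next section)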

Interpretation of the PSD

File: AEF#01

  • Double-click on the new PSD file for run #01 to display it (or right-click > Power spectrum).

    psd_display.gif

  • The power spectrum is displayed in a figure similar to the time series, but the X axis represents frequencies. Most of the shortcuts described for the recordings are also valid here. Clicking on the white parts of the figure or using the arrow keys of the keyboard moves the frequency cursor. The current frequency can also be controlled with a new slider displayed in the Brainstorm window, just below the time panel.
  • Each black line represents the power spectrum of one channel. If you click on a channel, it gets highlighted in red. Click again to deselect it. Right-click on a selected channel to read its name.

    psd_run1.png

  • The frequency resolution of this power spectrum, ie. the distance between two frequency bins represented in this figure, is about 0.15Hz. This precision depends on the length of the estimator window you used. The FFT is computed on the number of time samples per window (4s*600Hz), rounded to the next power of 2 (nextpow2), and represents the full spectrum of the file (0-600Hz).
    Frequency resolution = sampling_freq / 2^nextpow2(estimator_length*sampling_freq) = 0.1465 Hz

  • The shape of this graph is normal, it does not indicate anything unexpected:
    • Peaks related with the subject's alpha rhythms: around 10Hz and 20Hz.

    • Peaks related with the power lines: 60Hz, 120Hz and 180Hz.
      These datasets were recorded in Canada, where the alternating powerline current is delivered at 60Hz. In Europe you would observe similar peaks at 50Hz, 100Hz and 150Hz.

  • Add a topography view for the same file, with one of the two methods below:
    • Right-click on the PSD file > 2D Sensor cap.

    • Right-click on the spectrum figure > 2D Sensor cap (shortcut: Ctrl+T)

  • Scroll in frequencies to see the spatial distribution of each frequency bin:

    psd_topo_ncm4.png

  • We have already identified two artifacts we will need to remove: the eye movements and the 60Hz+harmonics from the power lines.

File: AEF#02

  • Open the spectrum view for the run AEF#02.
    To view the signal units instead of dB, select Display Tab > Measure > Power. Then from the display options icon on the right of the figure, select Amplitude > Log scale

    psd_run2_units.png

    psd_run2.png

  • Add a 2D sensor cap view for the same file. Scroll to zoom in/out.
    To display the sensor names, right-click on the figure > Channels > Display sensors.

  • This spectrum looks very similar to the run #01: same alpha and power line peaks.
  • Additionally, we observe higher signal power between 30Hz and 100Hz on many occipital sensors. This is probably related to some tension in the neck muscles due to an uncomfortable seating position in the MEG. We will see later whether these channels need to be tagged as bad or not.

    psd_neck.png

File: Noise recordings

  • Open the spectrum view for the noise recordings.

    psd_noise.png

  • This shows the power spectrum of the signals that are recorded when there is no subject in the MEG room. It gives a good and simple representation of the instrumental noise. If you had one bad MEG sensor, you would see it immediately in this graph. Here everything looks good.

X Log-scale

  • One option is worth mentioning when displaying power spectra: the logarithmic scale for the X axis, available in the display options for the PSD figure > Frequency > Log scale. It is sometimes better adapted to represent this type of data than a linear scale (especially with higher sampling frequencies).

    psd_log.png

Elekta-Neuromag and EEG users

The Elekta-Neuromag MEG systems combine different types of sensors with very different amplitude ranges, therefore you would not observe the same types of figures. The same applies to EEG users: this might not look like what you observe in your own recordings.

For now, keep on following these tutorials with the example dataset to learn how to use all the Brainstorm basic features. Once you're done, read additional tutorials in the section "Other analysis scenarios" to learn about the specificities related with your own acquisition system.

Apply a notch filter

This filter was updated in 2019. In the new configuration, the user can define the 3-dB bandwidth of the filter. Keep in mind that a smaller bandwidth means a sharper filter, which in some cases can make the filter unstable. If you want to reproduce the old filter, you can check the box "Use old filter implementation". The 3-dB bandwidth is not applicable to the old configuration.

For illustration purposes, we will now run a frequency filter to remove the 60Hz+harmonics from the continuous files. Notch filters are adapted for removing well identified contaminations from systems oscillating at very stable frequencies.

  • Keep all the three datasets selected in the Process1 box.
    Remember to always apply the same filters on the subject recordings and the noise recordings.

  • Click on [Run] to open the Pipeline editor.
  • Run the process: Pre-process > Notch filter

    notch_process.gif

    • Process the entire file at once: NO

    • Sensor types or names: MEG

    • Frequencies to remove: 60, 120, 180 Hz

    • 3-dB notch bandwidth: 2 Hz

    • The higher harmonics (240Hz) are not clearly visible, and too high to bother us in this analysis.
  • This process creates three new datasets, with additional "notch" tags. These output files are saved directly in the Brainstorm database, in a binary format specific to Brainstorm (.bst).

    notch_output.gif

  • If you delete the folders corresponding to the original files (before the filter), your original recordings in the .ds folders are not altered. If you delete the folders corresponding to the filtered files, you will lose your filtered recordings in .bst format.
  • To check where the file corresponding to a "Link to raw file" is actually saved on the hard drive, right-click on it > File > Show in file explorer.

  • Important: This is an optional processing step. Whether you need this on your own recordings depends on the analysis you are planning to run on the recordings (see advanced sections below).

Evaluation of the filter

  • Right-click on the Process1 list > Clear list.

  • Drag and drop the three filtered files in Process1.

    notch_psd.gif

  • Run again the PSD process "Frequency > Power spectrum density (Welch)" on the new files, with the same parameters as before, to evaluate the quality of the correction.

  • Double-click on the new PSD files to open them.

    notch_evaluation.png

  • Scroll to zoom in and observe what is happening around 60Hz (before / after).

    notch_zoom.png

  • See below an example of how this filter can affect the time series: top=before, bottom=after.
    We show the reference sensor BR2 (sensor type "MEG REF") because it shows a lot more 60Hz than any MEG sensor, ie. oscillations with a period of 16.7ms.
    Note the edge effect at the beginning of the signal: the signals below are 1.5s long, the notch filter at 60Hz is visibly not performing well during the first 500ms (blue window).

    notch_timeseries2.gif

  • If you look in the list of events, you will see a new category "transient_notch". This corresponds to the time period during which we can expect significant edge effects due to the filtering. Brainstorm doesn't mark these blocks as bad by default; you would have to do it manually - you will see how to do this in one of the following tutorials. In the case of this dataset, the transient duration is much shorter than the delay before the first stimulation, so it is not relevant in our processing pipeline. See the advanced sections below for more details about the estimation of this transient duration.

    notch_transient.gif

Some cleaning

To avoid any confusion later, delete the links to the original files:

  • Select the folders containing the original files and press Delete (or right-click > File > Delete).

    notch_delete.gif

  • Always read the confirmation messages carefully; this will save you from bad surprises.

    notch_confirmation.gif

  • This is what your database explorer should look like at the end of this tutorial:

    notch_final.gif

Note for beginners

Everything below is advanced documentation, you can skip it for now.




Advanced

What filters to apply?

The frequency filters you should apply depend on the noise present in your recordings, but also on the type of analysis you are planning to run. This section provides some general recommendations.

High-pass filter

  • Purpose: Remove the low frequencies from the signals. Typically used for:

    • Removing the arbitrary DC offset and slow drifts of MEG sensors (< 0.2Hz),

    • Removing the artifacts occurring at low frequencies (< 1Hz, e.g. breathing or eye movements).

  • Warnings:

    • Edge effects: Transient effects you should discard at the start and end of each filtered signal because the filtering window extends into time periods outside those for which you have data.
    • Avoid using on epoched data: You need long segments of recordings to run a high-pass filter.
    • Be careful with the frequency you choose if you are studying cognitive processes that may include sustained activity in some brain regions (eg. n-back memory task).

Low-pass filter

  • Purpose: Remove the high frequencies from the signals. Typically used for:

    • If the components of interest are, for example, below 40Hz, you may discard the faster components in the signal by applying a low-pass filter with a cutoff frequency around 40Hz.
    • Removing strong noise occurring at high frequencies (eg. muscle contractions, stimulators).
    • Display averages: In an event-related design, you will import and average multiple trials. You may low-pass filter these averages for display and interpretation purposes.
    • Statistics: In an event-related study with multiple subjects, the latency of the brain response of interest may vary between subjects. Smoothing the subject-level averages before computing a statistic across subjects may help reveal the effect of interest.
  • Warnings:

    • Edge effects: Transient effects you should discard at the start and end of each filtered signal because the filtering window extends into time periods outside those for which you have data.
    • It is always better to filter continuous (non-epoched) data when possible.
    • When filtering averages: Import longer epochs, average them, filter, then remove the beginning and the end of the average to keep only the signals that could be filtered properly

Band-pass filter

  • Purpose: A band-pass filter is the combination of a low-pass filter and a high-pass filter, it removes all the frequencies outside of the frequency band of interest.

  • Warnings: The same considerations and warnings as for high and low pass filtering apply here.

Notch filter

  • Purpose: Remove a sinusoidal signal at a specific frequency (power lines noise, head tracking coils).

  • Warnings:

    • Use only if needed: It is not always recommended to remove the 50-60Hz power lines peaks. If you don't have a clear reason to think that these frequencies will cause a problem in your analysis, you don't need to filter them out.
    • In an ERP analysis, the averaging of multiple trials will get rid of the 50-60Hz power line oscillations because they are not time-locked to the stimulus.
    • If you are using a low-pass filter, do not apply a notch filter at a higher frequency (it would be useless).
  • Alternatives: If the notch filter does not give satisfying results, you have two other options.

    • Band-stop filter: Similar to the notch filter, but more aggressive on the data.
      Useful for removing larger segments of the spectrum, in case the power line peaks are spread over numerous frequency bins or for suppressing other types of artifacts.

    • Sinusoid removal: This process can do a better job at removing precise frequencies by identifying the sinusoidal components and then subtracting them from the signals in the time domain. This is not a frequency filter and works best on short segments of recordings.
      Run it on the imported epochs rather than on the continuous files.

When to apply these filters?

  • Continuous files: Frequency filters used for pre-processing purposes should be applied before epoching the recordings. In general, filters will introduce transient effects at the beginning and the end of each time series, which make these parts of the data unreliable and they should be discarded. If possible, it is safer and more efficient to filter the entire recordings from the original continuous file at once.

  • Before SSP/ICA cleaning: Artifact cleaning with SSP/ICA projectors requires all the channels of data to be loaded in memory at the same time. Applying a frequency filter on a file that contains projectors requires the entire file to be loaded and processed at once, which may cause memory issues. Pre-processing filters are rarely changed, whereas you may want to redo the SSP/ICA cleaning multiple times. Therefore it is more convenient to apply the filters first.

  • Imported epochs: Filtering epochs after importing them in the database is possible but requires extra attention: you may need to import longer epochs to be able to deal with the edge effects.

  • After averaging: You may low-pass filter the averaged epochs for display or statistical purposes but again be aware of the edge effects.

  • Empty room measurements: In principle, all the filters that are applied to the experimental data also need to be applied, with the same settings, to the noise recordings. In the source estimation process, we will need all the files to have similar levels of noise, especially for the calculation of the noise covariance matrix. This applies in particular when some channels are noisy.

  • Think first: Never apply a frequency filter without a clear reason (artifacts, predefined frequency ranges of interest, etc.) and without keeping the side effects under control. Avoid when possible.

Filter specifications: Low-pass, high-pass, band-pass

  • Process: Pre-process > Band-pass filter

    specs_bandpass.gif

  • Process options:

    • Lower cutoff frequency: Defines a high-pass filter (enter 0 for a low-pass filter)

    • Upper cutoff frequency: Defines a low-pass filter (enter 0 for a high-pass filter)

    • Stopband attenuation: The higher the attenuation, the higher the performance of the filter, but the longer the transient duration. Use the default (60dB) unless you need shorter edge effects.

    • Use old filter: For replicating results obtained with older versions of Brainstorm.

    • View filter response: Click on this button to display the impulse and frequency response of your filter, and confirm that the responses appear reasonable.

  • Filter design:

    • Description: Even-order linear phase FIR filter, based on a Kaiser window design. The order N is estimated using Matlab's kaiserord function and the filter generated with fir1. Because the filters are linear phase, we can (and do) compensate for the filter delay by shifting the sequence backward in time by M=N/2 samples. This effectively makes the filters zero-phase and zero-delay.

    • Ripple and attenuation: The allowed ripple in the passband and the attenuation in the stopband are set by default to 10^(-3) and 60dB respectively (note that with a Kaiser window design, errors in passband and stopband will always be equal). Transitions between pass and stopbands are set to 15 percent of the upper and lower passband edges. However, when the lower edge of the passband is 5Hz or lower, we set the transition width to 50 percent of the lower passband edge.

    • Filtering function: The FIR bandpass filter can be applied in the frequency domain (fftfilt function) or in the time domain (filter function). The two approaches give the same results, but their execution times differ depending on the filter order. Time-domain filtering is faster for low-order filters and much slower for high-order filters. The process automatically selects which approach to use.

  • Edge effects:

    • Transient (full): With any filtering operation there will always be a transient effect at the beginning of the filtered data. For our filter, this effect will last for half of the filter order: M=N/2 samples. We strongly recommend that your data records are sufficiently long that you can discard these M=N/2 samples. Because we are using zero-phase filtering, there is a similar N/2 effect at the end of the sampled data – these samples should also be discarded.

    • Transient (99% energy): For some filters, the full transient window might be longer than your epochs. However, most of the energy is carried by the beginning of the filter, and you can obtain amplitudes acceptable for most analysis after a fraction of this full window. For this reason we also mention a much shorter window in the documentation of the filter, which corresponds to the duration needed to obtain 99% of the total energy in the impulse response. This duration corresponds to the "transient" event markers that are added to the recordings when applying filters.

      bandpass_transient.gif

    • Adjust the parameters: If possible, always discard the full transient window. If the edge effect affects too much of your data, adjust the filter parameters to reduce filter order (increase the lower cut-off frequency or reduce the stopband attenuation). If you cannot get any acceptable compromise, you can consider discarding shorter transient windows, but never go below this "99% energy" window.

    • Mirroring: We included an option to mirror the data at the start and end of the record instead of padding the signal with zeros. This will reduce the apparent N/2 transients at the start and end of your data record, but you should be aware that these samples are still unreliable and we do not recommend using them.

    • [TODO] Check this "99% energy" criteria in the case of high-pass filters, it does not seem very useful...

  • Additional recommendations:

    • Filter order: The key issue to be aware of when filtering is that the specification you choose for your filter will determine the length of the impulse response (or the filter order) which in turn will affect the fraction of your data that fall into the "edge" region. The most likely factor that will contribute to a high filter order and large edge effects is if you specify a very low frequency at the lower edge of the passband (i.e. the high pass cut-off frequency).

    • Detrending: If your goal is to remove the DC signal we recommend you first try detrending the data (removes average and best linear fit) to see if this is sufficient. If you still need to remove other low frequency components, then pick the highest cut-off frequency that will fit your needs.

    • Design optimization: If you are performing bandpass filtering and are not satisfied with the results, you can investigate filtering your data twice, once with a low pass filter and once with a high pass filter. The advantage of this is that you can now separately control the transition widths and stop band attenuation of the two filters. When designing a single BPF using the Kaiser (or any other) window, the maximum deviation from the desired response will be equal in all bands, and the transition widths will be equal at the lower and upper edges of the pass band. By instead using a LPF and a HPF you can optimize each of these processes separately using our filtering function.

  • Linear phase, no distortion, zero delay: As described earlier, FIR filters have a linear phase in the frequency domain. It means that all samples of the input signal have the same delay in the output. This delay is compensated after filtering; consequently, no distortion happens during the filtering process. To illustrate this property, we considered a chirp signal in which the oscillation frequency grows linearly. The signal is band-pass filtered in two frequency ranges. The following plot represents the original signal and its filtered versions with our proposed filters. Results show that the input and output signals of this filter are completely aligned, without any delay or distortion.

    bandpass_chirp.gif

  • Function: brainstorm3/toolbox/math/bst_bandpass_hfilter.m

  • External call: process_bandpass('Compute', x, Fs, HighPass, LowPass, 'bst-hfilter', isMirror, isRelax)

  • Code:

    function [x, FiltSpec, Messages] = bst_bandpass_hfilter(x, Fs, HighPass, LowPass, isMirror, isRelax, Function, TranBand, Method)
    % BST_BANDPASS_HFILTER Linear phase FIR bandpass filter.
    %
    % USAGE: [x, FiltSpec, Messages] = bst_bandpass_hfilter(x, Fs, HighPass, LowPass, isMirror=0, isRelax=0, Function=[detect], TranBand=[], Method='bst-hfilter-2019')
    %        [~, FiltSpec, Messages] = bst_bandpass_hfilter([], Fs, HighPass, LowPass, isMirror=0, isRelax=0, Function=[detect], TranBand=[], Method='bst-hfilter-2019')
    %                              x = bst_bandpass_hfilter(x, Fs, FiltSpec)
    %
    % DESCRIPTION:
    %    - A linear phase FIR filter is created.
    %    - Function "kaiserord" and "kaiser" are used to set the necessary order for fir1.
    %    - The transition band can be modified by user.
    %    - Requires Signal Processing Toolbox for the following functions:
    %      kaiserord, kaiser, fir1, fftfilt. If not, using Octave-based alternatives.
    %
    % INPUT:
    %    - x        : [nChannels,nTime] input signal (empty to only get the filter specs)
    %    - Fs       : Sampling frequency
    %    - HighPass : Frequency below this value are filtered in Hz (set to 0 for low-pass filter only)
    %    - LowPass  : Frequency above this value are filtered in Hz (set to 0 for high-pass filter only)
    %    - isMirror : isMirror (default = 0 no mirroring)
    %    - isRelax  : Change ripple and attenuation coefficients (default=0 no relaxation)
    %    - Function : 'fftfilt', filtering in frequency domain (default)
    %                 'filter', filtering in time domain
    %                 If not specified, detects automatically the fastest option based on the filter order
    %    - TranBand : Width of the transition band in Hz
    %    - Method   : Version of the filter (2019/2016-18)
    %
    % OUTPUT:
    %    - x        : Filtered signals
    %    - FiltSpec : Filter specifications (coefficients, length, ...)
    %    - Messages : Warning messages, if any

    % @=============================================================================
    % This function is part of the Brainstorm software:
    % https://neuroimage.usc.edu/brainstorm
    %
    % Copyright (c) University of Southern California & McGill University
    % This software is distributed under the terms of the GNU General Public License
    % as published by the Free Software Foundation. Further details on the GPLv3
    % license can be found at http://www.gnu.org/copyleft/gpl.html.
    %
    % FOR RESEARCH PURPOSES ONLY. THE SOFTWARE IS PROVIDED "AS IS," AND THE
    % UNIVERSITY OF SOUTHERN CALIFORNIA AND ITS COLLABORATORS DO NOT MAKE ANY
    % WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF
    % MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, NOR DO THEY ASSUME ANY
    % LIABILITY OR RESPONSIBILITY FOR THE USE OF THIS SOFTWARE.
    %
    % For more information type "brainstorm license" at command prompt.
    % =============================================================================@
    %
    % Authors: Hossein Shahabi, Francois Tadel, John Mosher, Richard Leahy,
    %          2016-2019


    %% ===== PARSE INPUTS =====
    % Filter is already computed
    if (nargin == 3)
        FiltSpec = HighPass;
    % Default filter options
    else
        if (nargin < 9) || isempty(Method)
            Method = 'bst-hfilter-2019';
        end
        if (nargin < 8) || isempty(TranBand)
            TranBand = [];
        end
        if (nargin < 7) || isempty(Function)
            Function = [];   % Auto-detection based on the filter order later in the code
        end
        if (nargin < 6) || isempty(isRelax)
            isRelax = 0;
        end
        if (nargin < 5) || isempty(isMirror)
            isMirror = 0;
        end
        FiltSpec = [];
    end
    Messages = [];


    %% ===== CREATE FILTER =====
    if isempty(FiltSpec)
        % ===== FILTER SPECIFICATIONS =====
        Nyquist = Fs/2;
        % High-pass filter
        if ~isempty(HighPass) && (HighPass ~= 0)
            f_highpass = HighPass / Nyquist;   % Change frequency from Hz to normalized scale (0-1)
            switch Method
                case 'bst-hfilter-2019'
                    if isempty(TranBand) || TranBand==0
                        if (HighPass <= 5)
                            LwTranBand = .5;   %Hz
                        else
                            LwTranBand = 1;    %Hz
                        end
                        f_highstop = f_highpass - LwTranBand/Nyquist;
                    else
                        f_highstop = max(0, HighPass - TranBand) / Nyquist;
                        % f_highstop = max(0.2, HighPass - TranBand) / Nyquist;
                        TranBand = (f_highpass - f_highstop)*Nyquist;   % Adjusted Transition band
                    end
                case 'bst-hfilter-2016'
                    % Default transition band
                    if (HighPass <= 5)   % Relax the transition band if HighPass<5 Hz
                        f_highstop = .5 * f_highpass;
                    else
                        f_highstop = .85 * f_highpass;
                    end
            end
        else
            f_highpass = 0;
            f_highstop = 0;
            LwTranBand = 1;
        end
        % Low-pass filter
        if ~isempty(LowPass) && (LowPass ~= 0)
            f_lowpass = LowPass / Nyquist;
            switch Method
                case 'bst-hfilter-2019'
                    if isempty(TranBand) || TranBand==0
                        UpTranBand = 1;
                        UpTranBand = min(UpTranBand,LwTranBand);
                        f_lowstop = f_lowpass + UpTranBand/Nyquist;
                    else
                        f_lowstop = f_lowpass + TranBand/Nyquist;
                    end
                case 'bst-hfilter-2016'
                    % Default transition band
                    if f_highpass==0   % If this is a low-pass filter
                        f_lowstop = 1.05 * f_lowpass;
                    else
                        f_lowstop = 1.15 * f_lowpass;
                    end
            end
        else
            f_lowpass = 0;
            f_lowstop = 0;
        end
        % If both high-pass and low-pass are zero
        if (f_highpass == 0) && (f_lowpass == 0)
            Messages = ['No frequency band in input.' 10];
            return;
        % Input frequencies are too high
        elseif (f_highpass >= 1) || (f_lowpass >= 1)
            Messages = sprintf('Cannot filter above %dHz.\n', Nyquist);
            return;
        end
        % Transition parameters
        if isRelax
            Ripple = 10^(-2);
            Atten  = 10^(-2);   % Equals 40db
        else
            Ripple = 10^(-3);   % pass band ripple
            Atten  = 10^(-3);   % Equals 60db
        end

        % ===== DESIGN FILTER =====
        % Build the general case first
        fcuts = [f_highstop, f_highpass, f_lowpass, f_lowstop];
        mags  = [0 1 0];               % filter magnitudes
        devs  = [Atten Ripple Atten];  % deviations
        % Now adjust for desired properties
        fcuts = max(0,fcuts);       % Can't go below zero
        fcuts = min(1-eps, fcuts);  % Can't go above or equal to 1

        % We have implicitly created a bandpass, but now adjust for desired filter
        if (f_lowpass == 0)   % User didn't want a lowpass
            fcuts(3:4) = [];
            mags(3) = [];
            devs(3) = [];
        end
        if (f_highpass == 0)  % User didn't want a highpass
            fcuts(1:2) = [];
            mags(1) = [];
            devs(1) = [];
        end

        % Generate FIR filter
        % Using Matlab's Signal Processing toolbox
        if bst_get('UseSigProcToolbox')
            [n,Wn,beta,ftype] = kaiserord(fcuts, mags, devs, 2);
            n = n + rem(n,2);   % ensure even order
            b = fir1(n, Wn, ftype, kaiser(n+1,beta), 'noscale');
        % Using Octave-based functions
        else
            [n,Wn,beta,ftype] = oc_kaiserord(fcuts, mags, devs, 2);
            n = n + rem(n,2);   % ensure even order
            b = oc_fir1(n, Wn, ftype, oc_kaiser(n+1,beta), 'noscale');
        end

        % Filtering function: Detect the fastest option, if not explicitely defined
        if isempty(Function)
            % The filter() function is a bit faster for low-order filters, but much slower for high-order filters
            if (n > 800)   % Empirical threshold
                Function = 'fftfilt';
            else
                Function = 'filter';
            end
        end

        % Compute the cumulative energy of the impulse response
        E = b((n/2)+1:end) .^ 2;
        E = cumsum(E);
        E = E ./ max(E);
        % Compute the effective transient: Number of samples necessary for having 99% of the impulse response energy
        [tmp, iE99] = min(abs(E - 0.99));

        % Output structure
        FiltSpec.b         = b;
        FiltSpec.a         = 1;
        FiltSpec.order     = n;
        FiltSpec.transient = iE99 / Fs;        % Start up and end transients in seconds (Effective)
        % FiltSpec.transient_full = n / (2*Fs);  % Start up and end transients in seconds (Actual)
        FiltSpec.f_highpass = f_highpass;
        FiltSpec.f_lowpass  = f_lowpass;
        FiltSpec.fcuts      = fcuts * Nyquist; % Stop and pass bands in Hz (instead of normalized)
        FiltSpec.function   = Function;
        FiltSpec.mirror     = isMirror;
        % If empty input: just return the filter specs
        if isempty(x)
            return;
        end
    end

    %% ===== FILTER SIGNALS =====
    % Transpose signal: [time,channels]
    [nChan, nTime] = size(x);
    % Half of filter length
    M = FiltSpec.order / 2;
    % If filter length > 10% of data length
    edgePercent = 2*FiltSpec.transient / (nTime / Fs);
    if (edgePercent > 0.1)
        Messages = [Messages, sprintf('Start up and end transients (%.2fs) represent %.1f%% of your data.\n', 2*FiltSpec.transient, 100*edgePercent)];
    end

    % Remove the mean of the data before filtering
    xmean = mean(x,2);
    x = bst_bsxfun(@minus, x, xmean);

    % Mirroring requires the data to be longer than the filter
    if (FiltSpec.mirror) && (nTime < M)
        Messages = [Messages, 'Warning: Data is too short for mirroring. Option is ignored...' 10];
        FiltSpec.mirror = 0;
    end
    % Mirror signals
    if (FiltSpec.mirror)
        x = [fliplr(x(:,1:M)), x, fliplr(x(:,end-M+1:end))];
    % Zero-padding
    else
        x = [zeros(nChan,M), x, zeros(nChan,M)];
    end

    % Filter signals
    switch (FiltSpec.function)
        case 'fftfilt'
            if bst_get('UseSigProcToolbox')
                x = fftfilt(FiltSpec.b, x')';
            else
                x = oc_fftfilt(FiltSpec.b, x')';
            end
        case 'filter'
            x = filter(FiltSpec.b, FiltSpec.a, x, [], 2);
    end

    % Remove extra data
    x = x(:,2*M+1:end);
    % Restore the mean of the signal (only if there is no high-pass filter)
    if (FiltSpec.f_highpass == 0)
        x = bst_bsxfun(@plus, x, xmean);
    end
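
  • Example: A minimal call to the function listed above, from a Matlab script (the test signal and frequency values are illustrative; default options are used):

    % Band-pass filter simulated recordings between 1 and 40 Hz (illustrative values)
    Fs = 600;                 % Sampling frequency (Hz)
    x  = randn(10, 30*Fs);    % [nChannels x nTime] random test signal
    [xFilt, FiltSpec, Messages] = bst_bandpass_hfilter(x, Fs, 1, 40);
    % Get only the filter specifications, without filtering any data
    [~, FiltSpec] = bst_bandpass_hfilter([], Fs, 1, 40);
    disp(FiltSpec.order);      % Filter order selected by kaiserord
    disp(FiltSpec.transient);  % Effective transient duration (s), used to estimate the edge effects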

Filter specifications: Notch

  • Description: 2nd order IIR notch filter with zero-phase lag (implemented with filtfilt).

  • Reference: Mitra, Sanjit Kumar, and Yonghong Kuo. Digital signal processing: a computer-based approach. Vol. 2. New York: McGraw-Hill, 2006. MatlabCentral #292960

  • Edge effects: It is computed based on the 99% energy of the estimated impulse response.

  • Function: brainstorm3/toolbox/process/functions/process_notch.m

  • External call: [x, FiltSpec, Messages] = process_notch('Compute', x, sfreq, FreqList, Method, bandWidth)

Filter specifications: Band-stop

  • Description: 4th order Butterworth IIR filter with zero-phase lag (implemented with filtfilt)

  • Reference: FieldTrip: x = ft_preproc_bandstopfilter(x, sfreq, FreqBand, [], 'but', 'twopass')

  • Edge effects: It is computed based on the 99% energy of the estimated impulse response.

  • Function: brainstorm3/toolbox/process/functions/process_bandstop.m

  • External call: x = process_bandstop('Compute', x, sfreq, FreqList, FreqWidth)
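
  • Example: The external calls documented above could be used from a script as follows (a minimal sketch; the test signal and frequency values are examples only, and the optional Method/bandWidth arguments of the notch filter are assumed to be left to their defaults):

    Fs = 600;                % Sampling frequency (Hz)
    x  = randn(10, 30*Fs);   % [nChannels x nTime] test signal (illustrative)
    % Band-stop: remove a 2 Hz-wide band centered on 23 Hz
    x  = process_bandstop('Compute', x, Fs, 23, 2);
    % Notch: remove the 60 Hz power line contamination and two harmonics
    x  = process_notch('Compute', x, Fs, [60 120 180]);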

On the hard drive

The names of the files generated by the process "Power spectrum density" start with the tag timefreq_psd; they share the same structure as all the other files that include a frequency dimension.

To explore the contents of a PSD file created in this tutorial, right-click on it and use the popup menus
File > View file contents or File > Export to Matlab.

  • psd_contents.gif

Structure of the time-frequency files: timefreq_psd_*.mat

  • TF: [Nsignals x Ntime x Nfreq] Stores the spectrum information. Nsignals is the number of channels that were selected with the option "MEG" in the PSD process. Nfreq is the number of frequency bins. There is no time dimension (Ntime = 1).

  • Comment: String used to represent the file in the database explorer.

  • Time: Window of time over which the file was estimated.

  • TimeBands: Defined only when you select the option "Group in time bands".
    Always empty for the PSD files because there is no time dimension.

  • Freqs: [1 x Nfreq] List of frequencies for which the power spectrum was estimated (in Hz).

  • RefRowNames: Only used for connectivity results.

  • RowNames: [Nsignals x 1] Describes the rows of the TF matrix (first dimension). Here it corresponds to the names of the MEG sensors, in the same order as in the .TF field.

  • Measure: Function currently applied to the FFT coefficients {power, none, magnitude, log, other}

  • Method: Function that was used to produce this file {psd, hilbert, morlet, corr, cohere, ...}

  • DataFile: File from which this file was calculated = Parent file in the database explorer.

  • DataType: Type of file from which this file was calculated (file type of .DataFile).

    • 'data' = Recordings
    • 'cluster' = Recordings grouped by clusters of sensors
    • 'results' = Source activations
    • 'scouts' = Time series associated with a region of interest
  • SurfaceFile / GridLoc / GridAtlas / Atlas: Used only when the input file was a source file.

  • Leff: Effective number of averages = Number of input files averaged to produce this file.

  • Options: Most relevant options that were passed to the function bst_timefreq.

  • History: Describes all the operations that were performed with Brainstorm on this file. To get a better view of this piece of information, use the menu File > View file history.

Useful functions

  • in_bst_timefreq(PsdFile): Read a PSD or time-frequency file.

  • in_bst(FileName): Read any Brainstorm file.

  • bst_process('LoadInputFile', FileName, Target): The most high-level function for reading Brainstorm files. "Target" is a string with the list of sensor names or types to load (field RowNames).

  • bst_psd(F, sfreq, WinLength, WinOverlap): Computation of Welch's power spectrum density.
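
  • Example: Reading and plotting a PSD file from the Matlab command window (a sketch; the file name is hypothetical and must be replaced with the name of a file from your own database):

    PsdFile = 'Subject01/@rawS01_AEF_20131218_01_600Hz_notch/timefreq_psd_131218_1001.mat';
    PsdMat  = in_bst_timefreq(PsdFile);
    size(PsdMat.TF)                                   % [Nsignals x Ntime x Nfreq], with Ntime = 1
    loglog(PsdMat.Freqs, squeeze(PsdMat.TF(1,1,:)));  % Power spectrum of the first sensor
    xlabel('Frequency (Hz)');
    ylabel(['Power (' PsdMat.Measure ')']);
    title(PsdMat.RowNames{1});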

Additional documentation

Forum discussions



Tutorial 11: Bad channels

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

It is common during the acquisition to have a few sensors that are recording values that will not be usable in the data analysis. In MEG, a sensor can be damaged or unstable. In EEG, the quality of the connection between the electrode and the scalp is sometimes too low to record anything interesting.

It is important to identify the sensors with poor signal quality at an early stage of the pre-processing, because the efficiency of the artifact removal depends on it. If you try to remove blink and cardiac artifacts while some bad sensors are still included, the cleaning may not work very well and, worse, it may propagate the bad signals to all the channels.

This tutorial will explain the various ways we have to handle the bad channels. Note that the recordings from this auditory experiment do not contain any bad sensors, therefore the entire tutorial is optional. If you are not interested, you can skip it and will still be able to follow the next tutorials.

Advanced

Identifying bad channels

Some bad channels are easy to detect, their signals look either completely off or totally flat compared with the other surrounding sensors. Some others are more difficult to identify. The examples below are taken from other datasets.

  • The power spectrum density (PSD) is usually a good way to spot a few bad channels, which is why we always recommend computing it for all datasets:

    psd_neck.gif bad_psd.gif

  • Simply looking at the signals traces, some channels may appear generally noisier than the others:

    bad_signal.gif

  • Looking at a 2D sensor topography, if one sensor shows very different values from its neighbors for extended periods of time, you can doubt its quality:

    bad_topo.gif

Advanced

Selecting sensors

  • Double-click on the recordings for run #01 to open the MEG sensors.
  • Right-click on the time series figure > View topography (or press Ctrl+T).

  • Right-click on the topography figure > Channels > Display sensors (or press Ctrl+E).

  • If you can't see anything because the topography figure is too small, you can change the way the figures are automatically arranged. In the top-right corner of the Brainstorm figure, select the menu "Window layout options > Tiled".

  • You can select one channel by clicking on its signal or on the dot representing it in the topography figure. Note that the sensor selection is automatically reported to the other figure.

    select_channel.gif

  • You can select multiple sensors at the same time in the topography figure.
    Right-click on the figure, then hold the mouse button and move the mouse.

    select_multiple.gif

  • Select a few sensors, then right-click on one of the figures and check out the Channels menu:

    select_popup.gif

    • View selected: Show the time series of the selected sensors.

    • Mark selected as bad: Remove the selected sensors from the display and from all further computations.

    • Mark non-selected as bad: Keep only the selected channels.

    • Reset selection: Unselect all the selected sensors.

    • Mark all channels as good: Brings all the channels back to the display.

    • Edit good/bad channels: Opens an interface that looks like the channel editor, but with one extra column to edit the status (good or bad) of each channel.

Advanced

Marking bad channels

  • Select a few channels, right-click > Channels > Mark selected as bad (or press the Delete key).

  • The selected channels disappear from the two views. In the time series figure, the signals are not visible anymore; in the topography, the corresponding dots disappear and the magnetic field values around the missing sensors are re-interpolated from the remaining sensors.

    select_delete.gif

  • With the time series figure, you can display the signals that have been tagged as bad.
    In the Record tab, select the montage "Bad channels".
    In this view, you cannot select the channels: they are not available anymore.

    bad_montage.gif

  • Right-click on a figure > Channels > Edit good/bad channels.
    This menu opens a window very similar to the Channel Editor window, with additional green and red dots to indicate the status of each channel. Click on a dot to switch the channel between good and bad.

    bad_edit.gif

Advanced

From the database explorer

Many options to change the list of bad channels are available from the database explorer.

  • The menus are available if you right-click on a data file (or a link to a raw file). In this case, the selected operation is applied only to the selected file.

    bad_popup_file.gif

  • The same menus are also available for all the folders. In this case, the selected operation is applied recursively to all the data files (and links to raw files) that are found in the folder.

    bad_popup_db.gif

  • With this batching ability of the database explorer, you can quickly tag some bad channels in all the recordings of a subject or for the entire protocol. You can also get a quick overview of all the bad channels in all the files at once with the menu View all bad channels.

    bad_view.gif

  • Restore all the good channels before moving to the next tutorial. For instance, right-click on the protocol folder TutorialIntroduction > Good/bad channels > Mark all channels as good.

Advanced

Epoching and averaging

The list of bad channels is saved separately for each dataset.

At this stage of the analysis, the database contains only links to continuous files. When you import epochs from a continuous file, the list of bad channels will be copied from the raw file to all the imported data files.

Then you will be able to redefine this list for each epoch individually, tagging more channels as bad, or including back the ones that are ok. This way it is possible to exclude from the analysis the channels that are too noisy in a few trials only, for instance because of some movement artifacts.

When averaging, if an epoch contains one bad channel, this bad channel is excluded from the average but all the other channels are kept. If the same channel is good in other trials, it will be considered as good in the average. This means that not all the channels have the same number of trials for calculating the average.

This may cause the different channels of an averaged file to have different signal-to-noise ratios, which may lead to confusing results. However, we decided to implement the average in this way to be able to keep more data in the studies with a low number of trials and a lot of noise.
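
To make this behavior concrete, here is a minimal sketch (an illustration of the principle, not the actual Brainstorm implementation) of an average computed channel by channel when each trial has its own list of bad channels:

    % Illustration only: average trials channel by channel, excluding bad channels
    nChan = 4;  nTime = 100;  nTrials = 3;
    F = arrayfun(@(k) randn(nChan,nTime), 1:nTrials, 'UniformOutput', false);  % Fake trials
    ChannelFlag = {[1;1;1;1], [1;-1;1;1], [1;1;-1;1]};    % +1 = good, -1 = bad, per trial
    Favg = zeros(nChan, nTime);
    nAvg = zeros(nChan, 1);                               % Number of good trials per channel
    for i = 1:nTrials
        iGood = (ChannelFlag{i} == 1);
        Favg(iGood,:) = Favg(iGood,:) + F{i}(iGood,:);
        nAvg(iGood)   = nAvg(iGood) + 1;
    end
    Favg = bsxfun(@rdivide, Favg, max(nAvg,1));           % Each channel uses its own trial count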

Advanced

On the hard drive

The list of bad channels is saved for each data file separately, in the field ChannelFlag.
This vector indicates for each channel #i if it is good (ChannelFlag(i)= 1) or bad (ChannelFlag(i)= -1).

Right-click on a link to a continuous file > File > View file contents:

bad_file.gif

For raw data files, this information is duplicated in the sFile structure (F field) in order to be passed easily to the low-level reading functions. If you are planning to modify the list of bad channels manually, you need to change two fields: mat.ChannelFlag and mat.F.channelflag. For imported data, you just need to modify the field mat.ChannelFlag.
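
For example, marking channel #5 as bad directly from a Matlab script might look like the sketch below (the file name is hypothetical; file_fullpath converts the relative database path to a full path; for imported data, only the ChannelFlag field needs to be updated):

    % Hypothetical example: mark channel #5 as bad in a "Link to raw file"
    DataFile = 'Subject01/@rawS01_AEF_20131218_01_600Hz_notch/data_0raw_S01_AEF_20131218_01_600Hz_notch.mat';
    mat = load(file_fullpath(DataFile));
    mat.ChannelFlag(5)   = -1;     % Bad channel in the data file
    mat.F.channelflag(5) = -1;     % Duplicated information in the sFile structure (raw files only)
    save(file_fullpath(DataFile), '-struct', 'mat');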



Tutorial 12: Artifact detection

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

The previous tutorials illustrated how to remove noise patterns occurring continuously and at specific frequencies. However, most of the events that contaminate the MEG/EEG recordings are not persistent, span a large frequency range, or overlap with the frequencies of the brain signals of interest. Frequency filters are not appropriate to correct for eye movements, breathing movements, heartbeats or other muscle activity.

For getting rid of reproducible artifacts, one popular approach is the Signal-Space Projection (SSP). This method is based on the spatial decomposition of the MEG/EEG recordings for a selection of time samples during which the artifact is present. Therefore we need to identify when each type of artifact occurs in the recordings. This tutorial shows how to automatically detect some well-defined artifacts: the blinks and the heartbeats.

Observation

Let's start by observing the type of contamination the blinks and heartbeats cause to the MEG recordings.

  • Run #01: Double-click on the link to show the MEG sensors.

  • Configuration: Page of 3 seconds, view in columns, selection of the "CTF LT" sensors (the left-temporal sensors will be a good example to show at the same time the two types of artifacts).

  • EOG: Right-click on the link > EOG > Display time series. Two channels are classified as EOG:

    • VEOG: Vertical electrooculogram (two electrodes placed below and above one eye)

    • HEOG: Horizontal electrooculogram (two electrodes placed on the temples of the subject)

    • On these traces, there is not much happening for most of the recordings except for a few bumps. This subject is sitting very still and not blinking much. We can expect MEG recordings of a very good quality.
  • ECG: Right-click on the link > ECG > Display time series.
    The electrocardiogram was recorded with a bipolar montage of electrodes across the chest. You can recognize the typical shape of the electric activity of the heart (P, QRS and T waves).

  • Find a blink: Scroll through the recordings using the F3 shortcut until you see a large blink.

    • Remember you can change the amplitude scale with many shortcuts (eg. right-click + move).
    • To keep the scale fixed between two pages: Uncheck the button [AS] (auto-scale).

    • For instance, you can observe a nice blink at 20.8s (red cursor in the screen capture below).
    • On the same page, you should be able to observe the contamination due to a few heartbeats, corresponding to the peaks of the ECG signal (eg. 19.8s, shown as a blue selection below).
  • The additional data channels (ECG and EOG) contain precious information that we can use for the automatic detection of the blinks and heartbeats. We strongly recommend that you always record these signals during your own experiments, it helps a lot with the data pre-processing.

    observe.gif

Detection: Heartbeats

In the Record tab, select the menu: "Artifacts > Detect heartbeats".

  • It automatically opens the pipeline editor, with the process "Detect heartbeats" selected.
  • Channel name: Name of the channel that is used to perform the detection. Select or type "ECG".

  • Time window: Time range that the algorithm should scan for amplitude peaks. Leave the default values to process the entire file, or check the option [All file].

  • Event name: Name of the event group created for saving the detected events. Enter "cardiac".

    detect_ecg.gif

  • Click on Run. After the process stops, you can see a new event category "cardiac". The approximately 464 heartbeats detected over the 360s of recordings indicate an average heart rate of about 77 bpm (464 / 360s × 60 ≈ 77), so everything looks normal.

  • You can check a few of them, to make sure the "cardiac" markers really indicate the ECG peaks. Not all peaks need to be detected, but you should have a minimum of 10-20 events marked for removing the artifacts using SSP, described in the following tutorials.

    detect_ecg_done.gif

Now do the same thing for the blinks: Menu "Artifacts > Detect eye blinks".

  • Channel name: VEOG

  • Time window: All file

  • Event name: Blink

    detect_eog.gif

  • Run, then look quickly at the 15 detected blinks (shortcut: Shift+Right arrow).

    detect_eog_done.gif

Remove simultaneous blinks/heartbeats

We will use these event markers as the input to our SSP cleaning method. This technique works well if each artifact is defined precisely and as independently as possible from the other artifacts. This means that we should try to avoid having two different artifacts marked at the same time.

Because the heart beats every second or so, there is a high chance that when the subject blinks there is a heartbeat not too far away in the recordings. We cannot remove all the blinks that are contaminated with a heartbeat because we would have no data left. But we have a lot of heartbeats, so we can do the opposite: remove the "cardiac" markers that occur during a blink.

In the Record tab, select the menu "Artifacts > Remove simultaneous". Set the options:

  • Remove events named: "cardiac"

  • When too close to events: "blink"

  • Minimum delay between events: 250ms

    detect_simult.gif

After executing this process, the number of "cardiac" events goes from 465 to 456. The deleted heartbeats were all less than 250ms away from a blink.

Run #02: Running from a script

Let's perform the same detection operations on Run #02, this time using the Process1 box (a scripted equivalent is sketched at the end of this list).

  • Close everything with the [X] button at the top-right corner of the Brainstorm window.

  • Select the run AEF #02 in the Process1 box, then select the following processes:

  • Events > Detect heartbeats: Select channel ECG, check "All file", event name "cardiac".

  • Events > Detect eye blinks: Select channel VEOG, check "All file", event name "blink".

  • Events > Remove simultaneous: Remove "cardiac", too close to "blink", delay 250ms.

    detect_script.gif

  • Open the Run#02 recordings (MEG+EOG+ECG) and verify that the detection worked as expected. You should get 472 cardiac events and 19 blink events.
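
  • The same pipeline can also be written directly as a Matlab script. The sketch below is indicative only: the exact process and option names are best obtained with the pipeline editor menu "Generate .m script", and the file name is hypothetical.

    % Indicative sketch of the scripted equivalent (verify the names with "Generate .m script")
    sFiles = {'Subject01/@rawS01_AEF_20131218_02_600Hz_notch/data_0raw_S01_AEF_20131218_02_600Hz_notch.mat'};
    % Detect heartbeats on channel ECG
    sFiles = bst_process('CallProcess', 'process_evt_detect_ecg', sFiles, [], ...
        'channelname', 'ECG', 'timewindow', [], 'eventname', 'cardiac');
    % Detect eye blinks on channel VEOG
    sFiles = bst_process('CallProcess', 'process_evt_detect_eog', sFiles, [], ...
        'channelname', 'VEOG', 'timewindow', [], 'eventname', 'blink');
    % Remove the cardiac events occurring within 250ms of a blink
    sFiles = bst_process('CallProcess', 'process_evt_remove_simult', sFiles, [], ...
        'remove', 'cardiac', 'target', 'blink', 'dt', 0.25, 'rename', 0);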

Advanced

Artifacts classification

If the EOG signals are not as clean as here, the detection processes may create more than one category, for instance: blink, blink2, blink3. The algorithm not only detects specific events in a signal, it also classifies them by shape. For two detected events, the signals around the event marker have to be sufficiently correlated (> 0.8) to be classified in the same category. At the end of the process, all the categories that contain less than 5 events are deleted.

In the good cases, this can provide an automatic classification of different types of artifacts, for instance: blinks, saccades and other eye movements. The tutorial MEG median nerve (CTF) is a good illustration of appropriate classification: blink groups the real blinks, and blink2 contains mostly saccades.

  • detect_classification.gif

In the bad cases, the signal is too noisy and the classification fails. This leads to either many different categories, or none at all if every category has fewer than 5 events. If you don't get good results with the process "Detect eye blinks", you can try to run a custom detection with the classification disabled.

On the contrary, if you obtain one category that mixes multiple types of artifacts and would like to automatically separate them into different sub-groups, you can try the process "Events > Classify by shape". It is more powerful than the automatic classification from the event detection process because it can run on multiple signals at the same time: first it reduces the number of dimensions with a PCA decomposition, then it runs a similar classification procedure.

Advanced

Detection: Custom events

These two processes "Detect heartbeats" and "Detect eye blinks" are in reality shortcuts for a generic process "Detect custom events". This process can be used for detecting any kind of event based on the signal power in a specific frequency band. We are not going to use it here, but you may have to use it if the standard parameters do not work well, or for detecting other types of events.

  • The signal to analyze is read from the continuous file (options "Channel name" and "Time window").
  • Frequency band: The signal is filtered in a frequency band where the artifact is easy to detect. For EOG: 1.5-15Hz ; for ECG: 10-40Hz.

  • Threshold: An event of interest is detected if the absolute value of the filtered signal value goes over a given number of times the standard deviation. For EOG: 2xStd, for ECG: 4xStd

  • Minimum duration between two events: If the filtered signal crosses the threshold several times in relation with the same artifact (eg. muscle activity in an EMG channel), we don't want to trigger several events but just one at the beginning of the activity. This parameter tells the algorithm to keep only the maximum value over the given time window; it also prevents the detection of other events immediately after a successful detection. For the ECG, this value is set to 500ms, because it is very unlikely that the heart rate of the subject goes over 120 beats per minute.

  • Ignore the noisy segments: If this option is selected, the detection is not performed on the segments that are much noisier than the rest of the recordings.

  • Enable classification: If this option is selected, the events are classified by shape in different categories, based on correlation measure. In the end, only the categories that have more than 5 occurrences are kept, all the other successful detections are ignored.

    detect_custom.gif
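
To illustrate the logic of this detection (band-pass filter, threshold at a multiple of the standard deviation, minimum delay between two events), here is a simplified sketch. It is not the Brainstorm implementation, and the ECG-like parameters are examples only; the band-pass filter presented earlier in this tutorial is reused here.

    % Simplified illustration of a threshold-based event detection (not the Brainstorm code)
    Fs   = 600;                      % Sampling frequency (Hz)
    ecg  = randn(1, 60*Fs);          % One analysis channel (illustrative test signal)
    xflt = bst_bandpass_hfilter(ecg, Fs, 10, 40);   % Filter in the detection band (ECG: 10-40 Hz)
    thresh  = 4 * std(xflt);         % Detection threshold: 4 x standard deviation
    minGap  = round(0.5 * Fs);       % Minimum delay between two events: 500 ms
    iEvents = [];
    iAbove  = find(abs(xflt) > thresh);
    while ~isempty(iAbove)
        % Keep only the maximum of the filtered signal in the window following the first crossing
        iWin = iAbove(iAbove < iAbove(1) + minGap);
        [~, iMax] = max(abs(xflt(iWin)));
        iEvents(end+1) = iWin(iMax);                %#ok<AGROW>
        % Discard all the other crossings in the same window
        iAbove(iAbove < iAbove(1) + minGap) = [];
    end
    eventTimes = iEvents / Fs;       % Detected event latencies in seconds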

Advanced

In case of failure

If the signals are not as clean as in this sample dataset, the automatic detection of the heartbeats and blinks may fail with the standard parameters. You may have to use the process "Detect custom events" and adjust some parameters. For instance:

  • If nothing is detected: decrease the amplitude threshold, or try to adjust the frequency band.
  • If too many events are detected: increase the amplitude threshold or the minimum duration between two events.
  • If too many categories of events are generated and you end up with very few events: disable the classification.
  • To find the optimal frequency band for an artifact, you can open the recordings and play with the online band-pass filters in the Filter tab. Keep the band that shows the highest amplitude peaks.

If you cannot get your artifacts to be detected automatically, you can browse through the recordings and mark all the artifacts manually, as explained in the tutorial Event markers.

Advanced

Other detection processes

Events > Detect analog trigger

  • See tutorial Stimulation delays.

  • This is used to detect events on any channel (MEG, EEG, STIM, Analog, etc), where the baseline is relatively stable and the events will predictably cross a threshold. This is useful when you want to detect a single time point (simple event) at the start of an event, as in these examples:

    analog_detection_audio.png analog_detection_pd.png analog_detect_saccade.png

Events > Detect custom events

  • See tutorial Artifact detection.

  • This is used to detect events on any channel (MEG, EEG, STIM, Analog, etc) where the baseline is relatively stable and the events will predictably cross a threshold. This is useful when you want to detect a simple event at the peak of an event, as in these examples:

    custom_detect_jump.png custom_detect_muscle.png

Events > Detect events above threshold

  • See tutorial MEG visual: single subject.

  • This is used to detect signal on any channel (MEG, EEG, STIM, Analog, etc) that is above a defined threshold value. This is useful when you want to detect all time points when the signal is above the threshold (extended events), as in these examples:

    threshold_detect_blink.png threshold_detect_event.png

  • The extended event can be converted to a single event (when the rising or falling edge is desired). In the Record tab, select the event to convert, then in the menu Events > Convert to simple event > select Start, Middle, or End to indicate where the marker should be placed.

Events > Detect other artifacts

Events > Detect movement

Synchronize > Transfer events

Artifacts > Detect bad channels: Peak-to-peak

  • With imported data: Reject channels and trials from imported data.

  • With raw data: Create bad events (specific channel or all channels) on the raw data.

  • This process is usually not recommended, as the amplitude of the signal is not always a good marker of the quality of the channel.

Artifacts > Detect bad: amplitude and gradient thresholds

  • With imported data: Reject trials from imported data.

  • With raw data: Create bad events (all channels) on the raw data. Data is analyzed in windows whose length is defined by the user.

  • This process detects artifactual segments for MEG recordings based on the distribution of the signal peak-to-peak amplitude and the signal numerical gradient. This process is based on the AUTO option of the MEG trial rejection method available in ArtifactScanTool.

  • Segments are identified as artifactual if they surpass the specified thresholds. These thresholds can be set in two ways: auto, where the thresholds are defined as n times the median absolute deviation (MAD) of each signal characteristic; or manual, where the user provides the thresholds. Once the process is executed, the report shows the distribution and thresholds for the two signal characteristics.

    mad_detect_report.png
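
In "auto" mode, the idea of a MAD-based threshold can be illustrated as follows (a generic sketch, not the exact implementation of the process; the values are illustrative):

    % Generic illustration of a MAD-based rejection threshold
    p2p  = rand(1, 500);                       % Peak-to-peak amplitude of each analysis window
    nMad = 10;                                 % Number of median absolute deviations
    madP2p  = median(abs(p2p - median(p2p)));  % Median absolute deviation
    thresh  = median(p2p) + nMad * madP2p;     % Rejection threshold
    iReject = find(p2p > thresh);              % Windows flagged as artifactual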

Advanced

Additional documentation



Tutorial 13: Artifact cleaning with SSP

Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Sylvain Baillet

As previously mentioned, frequency filters are not suited for removing artifacts that are transient or that overlap in the frequency domain with the brain signals of interest. Other approaches exist to correct for these artifacts, based on the spatial signature of the artifacts.

If an event is very reproducible and occurs always at the same location (eg. eye blinks and heartbeats), the sensors will always record the same values when it occurs. We can identify the topographies corresponding to this artifact (ie. the spatial distributions of values at one time point) and remove them from the recordings. This spatial decomposition is the basic idea behind two widely used approaches: the SSP (Signal-Space Projection) and ICA (Independent Component Analysis) methods.

This introduction tutorial will focus on the SSP approach, as it is a lot simpler and faster but still very efficient for removing blinks and heartbeats from MEG recordings. For cleaning EEG data, ICA is often better suited - the interface for running ICA decompositions is very similar and is described in an advanced tutorial.

Overview

The general SSP objective is to identify the sensor topographies that are typical of a specific artifact, then to create spatial projectors to remove the contributions of these topographies from the recordings.

  1. We start by identifying many examples of the artifact we are trying to remove. This is what we've been doing in the previous tutorial with the creation of the "cardiac" and "blink" events.
  2. We extract a short time window around each of these event markers and concatenate in time all the small blocks of recordings.
  3. We run a principal component analysis (PCA) on the concatenated artifacts in order to get a decomposition into various spatial components (number of components = number of sensors).
  4. If it works well, we can find in the first few principal components some topographies that are very specific of the type of artifact we are targeting. We select these components to remove.
  5. We compute a linear projector for each spatial component to remove and save them in the database (in the "Link to raw file"). They are not immediately applied to the recordings.
  6. Whenever some recordings are read from this file, the SSP projectors are applied on the fly to remove the artifact contributions. This approach is fast and memory efficient.
  7. Note that these tools are available on continuous files only ("Link to raw file") and cannot be applied to recordings that have already been imported in the database.

    ssp_intro.gif

The order matters

This procedure has to be repeated separately for each artifact type. The order in which you process the artifacts matters, because for removing the second artifact we typically use the recordings cleaned with the first set of SSP projectors. We have to decide which one to process first.

It works best if each artifact is defined precisely and as independently as possible from the other artifacts. If the two artifacts happen simultaneously, the SSP projectors calculated for the blink may contain some of the heartbeat topography and vice versa. When trying to remove the second artifact, we might not be able to clearly isolate it anymore.

Because the heart beats every second or so, there is a high chance that when the subject blinks there is a heartbeat not too far away in the recordings. Therefore a significant number of the blinks will be contaminated with heartbeats. But we usually have a lot of "clean" heartbeats, so we can start by removing those. To correctly isolate these two common artifacts, we recommend the following procedure:

  • Remove the markers "cardiac" that are occurring during a blink (done in the previous tutorial),
  • Compute the cardiac SSP (with no eye movements, because we removed the co-occurring events),

  • Compute the blink SSP (with no heartbeats, because they've already been taken care of).

If you have multiple modalities recorded simultaneously, for example MEG and EEG, you should run this entire procedure twice, once for the EEG only and once for the MEG only. You will always get better results if you process the different types of sensors separately. Same thing when processing Elekta-Neuromag recordings: separately process the magnetometers (MEG MAG) and the gradiometers (MEG GRAD).

SSP: Heartbeats

Double-click on the link to show the MEG sensors for Run #01.
In the Record tab, select the menu: "Artifacts > SSP: Heartbeats".

  • Event name: Name of the event to use to calculate the projectors, enter "cardiac".

  • Sensor types: Type of sensors for which the projection should be calculated ("MEG"). Note that you will always get better results if you process the different types of sensors separately.

  • Compute using existing SSP projectors: You have the option to calculate the projectors from the raw recordings, or from the recordings filtered with the previously computed SSP projectors.
    Unless you have a good reason for not considering the existing projectors, you should select this option. Then if the results are not satisfying, try again with the option disabled.
    For this step it doesn't make any difference because there are no projectors yet in the file.

    ssp_ecg_process.gif

After the computation is done, a new figure is displayed, that lets you select the active projectors.

  • On the left: The projector categories where each row represents the result of an execution of this process (usually one for each sensor type and each artifact).

  • On the right: The spatial components returned by the PCA decomposition. The percentage indicated between brackets is the singular value for each component, normalized for this decomposition (percentage = Si / sum(Si), see technical details at the end of this page).

  • Percentage: More practically, it indicates the amount of signal that was captured by the component during the decomposition. The higher it is, the more the component is representative of the artifact recordings that were used to calculate it. In the good cases, you would typically see one to three components with values that are significantly higher than the others.

  • When a component is selected, it means that it is removed from the recordings. A spatial projector is computed and applied to the recordings on the fly when reading from the continuous file.
  • Default selection: The software selects the first component and leaves the others unselected. This selection is arbitrary and doesn't mean the cleaning is correct, you should always manually review the components that you want to remove.

    ssp_select1.gif

Evaluate the components

The percentage indicated for the first value (9%) is much higher than the following ones (5%, 5%, 4%, 3%...), which could indicate that it captures the cardiac artifact relatively well. Let's investigate this.

  • Click on the first component, then click on the toolbar button [Display component topography]. This menu shows the spatial distribution of the sensor values for this component. Note that you don't have to select the component (ie. check the box) to display it. This topography seems to correspond to a strong dipolar activity located relatively far from the sensor array, it matches the type of artifact we expect from the heart activity.



  • ssp_select2.gif

  • The second button "Display component topography [No magnetic interpolation]" produces the same figure but without the reinterpolation of the magnetic fields that is typically applied to the MEG recordings in Brainstorm, it may help understand some difficult cases. This magnetic interpolation will be detailed later in the introduction tutorials.

  • You can display multiple components in the same figure: select them at the same time in the list (holding the Shift/Ctrl/Cmd button of your keyboard) and then click on the button "Display topography". No other strong component looks like it could be related to the heartbeats.

    ssp_multiple.gif

  • The last button in the toolbar, [Display component time series], opens a figure that represents the evolution of the contribution of this component over time. The higher the amplitude, the more present the selected topography is in the recordings. Click on it to show the component #1, then display the ECG signal at the same time (right-click on the file > ECG > Display time series).

    ssp_cardiac_good.gif

  • We observe that the "SSP1" trace correlates relatively well with the ECG trace, in the sense that we captured most of the ECG peaks with this component. However, the component also seems to capture much more signal than just the heartbeats: many alpha oscillations and some of the ocular activity. The example below shows a blink in the EOG, ECG and SSP component #1.

    ssp_cardiac_bad.gif

  • If you remove this component from the recordings, you can expect to see most of the artifacts related with the cardiac activity to go away, but you will also remove additional signal elements that were not really well identified. The job is done but it causes some unwanted side effects.

  • It is in general possible to refine the SSP decomposition by going back to the selection of "cardiac" markers that we used to compute it. You could look at all the ECG peaks individually and remove the markers located in segments of recordings that are noisier or that contain a lot of alpha activity (~10Hz). You would need to delete this SSP decomposition and run the same process again.
  • Alternatively, or if you don't manage to extract a clean cardiac component with a PCA/SSP decomposition, you could try to run an ICA decomposition instead. You might be able to get better results, but it comes with significant computation and manual exploration times. Note that for some subjects, the cardiac artifact is not very strong and could be simply ignored in the analysis.

Evaluate the correction

The topography of the component #1 looks like it represents the heart activity and its temporal evolution shows peaks where we identified heartbeats. It is therefore a good candidate for removal, we just need to make sure the signals look good after the correction before validating this choice.

  • Show the left-temporal MEG sensors (CTF LT) and select/unselect the first SSP component.
  • Repeat this for different time windows, to make sure that the cardiac peaks in the MEG sensors really disappear when the projector #1 is selected and that the rest is not altered too much.
  • No correction:

    ssp_ecg_off.gif

  • Cardiac component #1 removed:

    ssp_ecg_on.gif

  • In this example we will consider that the current decomposition is good enough.
    Make sure you select the component #1, then click on [Save] to validate the modifications.

  • After this window is closed, you can always open it again from the Record tab with the menu Artifacts > Select active projectors. At this stage of the analysis, you can modify the list of projectors applied to the recordings at any time.

Let's try the same thing with the eye blinks.

  • Select the process "Artifacts > SSP: Eye blinks"

  • Run it on the event type "blink", that indicates the peaks of the VEOG signal.
    Select the option "Compute using existing projectors" (if this step doesn't seem to work correctly, try again without selecting this option).

    ssp_eog_process.gif

  • You see now a new category of projectors. Based on the distribution of values, this first component is most likely a good representation of the artifact we are trying to remove. The second one could be a good candidate as well.

    ssp_select_eog1.gif

  • Select the first three components and display their topographies:

    ssp_eog_topo.gif

  • Component #1: Most likely a blink,

  • Component #2: Probably a saccade (another type of eye movement),

  • Component #3: Not related with the eye movements (maybe related with the alpha activity).
  • As a side note, if you had not selected the option "Compute using existing SSP/ICA projectors", you would have obtained the projectors below, which correspond to the topography of the artifact in the original signals (without considering the cardiac projector). It is normal if the topographies we obtain after removing the cardiac peaks are slightly different: they are computed on a different subspace of the signals. The relative singular values are smaller after the cardiac correction, maybe because the recordings we used to compute it already contained some eye movements.

    ssp_eog_topo_no.gif

  • Display the time series for these three components, together with the EOG signals. You have to temporarily uncheck component #1 to be able to display its signal: when it is checked, it is removed from the recordings and therefore corresponds to a flat trace.
    The figure below shows the EOG and SSP values between 318s and 324s. The SSP1 trace matches the blink observed in VEOG and SSP2 matches the saccade observed in HEOG.

    ssp_eog_ts.gif

  • Left-temporal MEG signals when there is no component selected:

    ssp_eog_off.gif

  • With only the component #2 selected (saccade):

    ssp_eog_on1.gif

  • With components #1 and #2 selected (blink + saccade):

    ssp_eog_on2.gif

  • Keep the components #1 and #2 selected and click on [Save] to validate your changes.

Run #02

Reproduce the same operations on Run #02:

  • Close everything with the [X] button at the top-right corner of the Brainstorm window.
  • Open the MEG recordings for run AEF #02 (double-click on the file link).

  • Artifacts > SSP: Heartbeats: Event name "cardiac", sensors "MEG", use existing SSP projectors.
    Select component #1, click on [Save].

    ssp_run02_ecg.gif

  • Artifacts > SSP: Eye blinks: Event name "blink", sensors "MEG", use existing SSP projectors.
    Select component #1, click on [Save].

    ssp_run02_eog.gif

  • Note that in this second session, the representation of the saccade was not as clear as in the first file. The distribution of the percentage values does not show any clear component other than the blink one, and the topographies are not as clear. In general, the saccade processing requires a separate step, which we will illustrate in the next tutorial.

Note for beginners

Everything below is advanced documentation, you can skip it for now.




Advanced

SSP: Generic

The SSP calculations for the heartbeats and the eye blinks are shortcuts to a more generic process "Artifacts > SSP: Generic". You may need this process if the standard parameters do not work or if you want to use this technique to remove other types of artifacts.

  • Time window: What segment of the file you want to consider.

  • Event name: Markers that are used to characterize the artifact. If you don't specify any event name, it will use the entire time window as the input of the PCA decomposition.

  • Event window: Time segment to consider before and after each event marker. We want this time window to be longer than the artifact effect itself, in order to have a large number of time samples representative of a normal brain activity. This helps the PCA decomposition to separate the artifact from the ongoing brain activity.

  • Frequency band: Definition of the band-pass filter that is applied to the recordings before calculating the projector. Usually you would use the same frequency band as we used for the detection, but you may want to try to refine this parameter if the results are not satisfying.

  • Sensor types or names: List of sensor names or types for which the SSP projectors are calculated. You can get better results if you process one sensor type at a time.

  • Compute using existing SSP/ICA projectors: Same as in the heartbeats/blinks processes.

  • Save averaged artifact in the database: If you check this option with an event name selected, the process will save the average of all the artifact epochs (event marker + event window) before and after the application of the first component of the SSP decomposition. This is illustrated in the next section.

  • Method to calculate the projector:

    • PCA: What was described until now: SVD decomposition to extract spatial components.

    • Average: Uses only one spatial component, the average of the time samples at which the selected events occur. This has no effect if there are no events selected.

      ssp_generic.gif

Advanced

Averaged artifact

One efficient way of representing the impact of this artifact correction is to epoch the recordings around the artifacts before and after the correction and compute the average of these epochs.

  • Run the process "SSP: Generic" with:
    • The default blink options: event "blink", [-200,+200]ms, [1.5-15]Hz.
    • The option "Compute using existing SSP/ICA projectors" disabled.

    • The option "Save averaged artifact in the database" selected.

    • The option panel should look like the screen capture in the previous section.
  • Look at the topography of the first component. You can notice that the percentage value is higher than what we obtained previously, and that the topography looks different as well.

    ssp_generic_topo.gif

  • This difference comes from the fact that this time we did not use the cardiac SSP to compute the blink SSP (option "Compute using existing SSP" disabled). This could indicate that there is some form of cross-contamination between the "blink" and "cardiac" events that we defined here. The common signals between the different artifact segments are sometimes due to strong alpha waves (around 10Hz) that are present during most of the recordings. It doesn't matter much, you just have to remember that the computation order matters and that you can try variations of the suggested workflow to better fit your recordings.
  • Otherwise, the difference between this topography and the previous one could be due only to the fact that they represent the artifact in different subspaces (in the first case, one dimension has already been removed). Even if the two artifacts were completely independent (the two removed dimensions are orthogonal), the topographies would look slightly different.
  • You should see now two additional files in your database. They are both the average of the 19 blinks identified in the recordings, [-200,+200]ms around the "blink" events. The top row shows the average before the SSP correction, the bottom row the same average but recomputed after removing the first component of the decomposition. The artifact is gone.

    ssp_eog_average.gif

  • Delete this new category, and make sure you get back to the previous settings (first component of both "cardiac" and "blink" selected). Click on [Save] to validate this modification.

    ssp_final_selection.gif

Advanced

Troubleshooting

You have calculated your SSP projectors as indicated here but you don't get any good results. No matter what you do, the topographies don't look like the targeted artifact. You can try the following:

  • Review one by one the events indicating the artifacts, remove the ones that are less clear or that occur close to another artifact.
  • Select or unselect the option "Compute using existing SSP".
  • Change the order in which you compute the projectors.
  • Use the process "SSP: Generic" and modify some parameters:
    • Use a narrower frequency band: especially for the EOG, if the projectors capture some of the alpha oscillations, you can limit the frequency band to [1.5 - 9] Hz.
    • Reduce or increase the time window around the peak of the artifact.
    • Change the method: PCA / Average.
  • If you have multiple acquisition runs, you may try to use all the artifacts from all the runs rather than processing the files one by one. For that, use the Process2 tab instead of Process1. Put the "Link to raw file" of all the runs on both sides, Files A (what is used to compute the SSP) and Files B (where the SSP are applied).

Always look at what this procedure gives you in output. Most of the time, the artifact cleaning will be an iterative process where you will need several experiments to adjust the options and the order of the different steps in order to get optimal results.

Advanced

SSP Theory

The Signal-Space Projection (SSP) is one approach to rejection of external disturbances. Here is a short description of the method by Matti Hämäläinen, from the MNE 2.7 reference manual, section 4.16.

Unlike many other noise-cancellation approaches, SSP does not require additional reference sensors to record the disturbance fields. Instead, SSP relies on the fact that the magnetic field distributions generated by the sources in the brain have spatial distributions sufficiently different from those generated by external noise sources. Furthermore, it is implicitly assumed that the linear space spanned by the significant external noise patterns has a low dimension.

Without loss of generality we can always decompose any n-channel measurement b(t) into its signal and noise components as:

  • b(t) = bs(t) + bn(t)

Further, if we know that bn(t) is well characterized by a few field patterns b1...bm, we can express the disturbance as

  • bn(t) = Ucn(t) + e(t) ,

where the columns of U constitute an orthonormal basis for b1...bm, cn(t) is an m-component column vector, and the error term e(t) is small and does not exhibit any consistent spatial distributions over time, i.e., Ce = E{e eᵀ} = I. Subsequently, we will call the column space of U the noise subspace. The basic idea of SSP is that we can actually find a small basis set b1...bm such that the conditions described above are satisfied. We can now construct the orthogonal complement operator

  • P = I - U Uᵀ

and apply it to b(t) yielding

  • b(t) ≈ Pbs(t) ,

since Pbn(t) = PUcn(t) ≈ 0. The projection operator P is called the signal-space projection operator and generally provides considerable rejection of noise, suppressing external disturbances by a factor of 10 or more. The effectiveness of SSP depends on two factors:

  1. The basis set b1...bm should be able to characterize the disturbance field patterns completely and

  2. The angles between the noise subspace spanned by b1...bm and the signal vectors bs(t) should be as close to π/2 as possible.

If the first requirement is not satisfied, some noise will leak through because Pbn(t) ≠ 0. If any of the brain signal vectors bs(t) is close to the noise subspace, not only the noise but also the signal will be attenuated by the application of P and, consequently, there might be little gain in signal-to-noise ratio.

Since the signal-space projection modifies the signal vectors originating in the brain, it is necessary to apply the projection to the forward solution in the course of inverse computations.

Advanced

SSP Algorithm

The logic of the SSP computation is the following:

  1. Take a small time window around each marker to capture the full effect of the artifact, plus some clean brain signals before and after. The default time window is [-200,+200]ms for eye blinks, and [-40,+40]ms for the heartbeats.
  2. Filter the signals in a frequency band of interest, in which the artifact is the most visible (in practice, we extract a segment long enough so that it can be filtered properly, and cut it after filtering).
  3. Concatenate all these time blocks into a big matrix A = [b1, ..., bm]

  4. Compute the singular value decomposition of this matrix A: [U,S,V] = svd(A, 'econ')

  5. The singular vectors Ui with the highest singular values Si are an orthonormal basis of the artifact subspace that we want to subtract from the recordings. The software selects by default the vector with the highest singular value. Then it is possible to interactively redefine the selected components.

  6. Calculate the projection operator: P⊥i = I - Ui Uiᵀ

  7. Apply this projection on the MEG or EEG recordings F: F = P⊥i F

  8. The process has to be repeated separately several times for each sensor type and each artifact.

Steps #1 to #5 are done by the processes "Artifacts > SSP" in the Record tab: the results, the vectors Ui, are saved in the channel file (field ChannelMat.Projector(i).Components).

Steps #6 and #7 are calculated on the fly when reading a block of recordings from the continuous file: when using the raw viewer, running a process on the continuous file, or importing epochs in the database.

Step #8 is the manual control of the process. Take some time to understand what you are trying to remove and how to do it. Never trust blindly any fully automated artifact cleaning algorithm, always check manually what is removed from the recordings, and do not give up if the first results are not satisfying.
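
The core of steps #3 to #7 can be summarized in a few lines of Matlab (a simplified sketch, ignoring the filtering, baseline removal and bookkeeping performed by the actual Brainstorm processes; the matrices are illustrative placeholders):

    % Simplified sketch of the SSP computation (not the full Brainstorm implementation)
    A = randn(274, 5000);                  % [Nsensors x Nsamples]: concatenated artifact blocks
    [U, S, V] = svd(A, 'econ');            % Spatial decomposition
    singVal = diag(S);
    pct = 100 * singVal ./ sum(singVal);   % Percentages shown in "Select active projectors"
    Ui = U(:,1);                           % Select the first component (always review manually!)
    P  = eye(size(U,1)) - Ui * Ui';        % Projection operator: P = I - Ui*Ui'
    F  = randn(274, 1000);                 % [Nsensors x Ntime]: block of recordings to clean
    Fclean = P * F;                        % Projector applied on the fly when reading the file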

Advanced

Extract the time series

It could be useful to save the SSP or ICA time series in a new file for further processing. Here is one solution to get there:

  • First, make sure you do not remove the components you are interested in: open the continuous recordings, Record tab > Artifacts > Select active projectors, unselect the components you want to study, so that they are kept in the imported data.

    ssp_ts_0.gif

  • Import the segments of recordings of interest from the continuous file: select the option Apply SSP/ICA projectors, otherwise the projectors would be discarded from the new channel file in the imported folder.

    ssp_ts_1.gif

  • To review the SSP/ICA time series (optional): open the recordings you just imported, and select the menu Artifacts > Load projector as montages in the Record tab. The projectors are made available in the montage menu.

    ssp_ts_2.gif

  • To create a new file with the SSP/ICA time series in the database: select the file you imported in Process1 and run the process Standardize > Apply montage, with the option Create new folders selected.

    ssp_ts_2.gif

Advanced

On the hard drive

The projectors are saved in the channel file associated with the recordings. This means that they will be shared by all the files that share the same channel file. As a consequence, you cannot share the channel files between acquisition runs if you are planning to use different SSP projectors for different runs.

You can find them in the field ChannelMat.Projector (array of structures):

  • Comment: String representing the projector in the window "Select active projectors".

  • Components: [Nsensors x Ncomponents], each column is one spatial component.

  • CompMask: [1 x Ncomponents], Indicates if each component is selected or not (0 or 1).

  • Status: 0=Category not selected, 1=Category selected, 2=Projectors already applied to the file.

  • SingVal: [1 x Ncomponents], Singular values of the SVD decomposition for each component.

ssp_storage.gif
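
For example, you can inspect these projectors from the Matlab command window by loading the channel file directly. This is just an illustrative sketch: the file name below is a placeholder, locate the actual channel_*.mat file of your run in the brainstorm_db folder.

    % Load the channel file (channel_*.mat in the folder of the recordings; the name is an example)
    ChannelMat = load('channel_ctf_acc1.mat');
    for i = 1:length(ChannelMat.Projector)
        proj = ChannelMat.Projector(i);
        fprintf('%s: %d components, %d selected, status=%d\n', ...
            proj.Comment, size(proj.Components,2), nnz(proj.CompMask), proj.Status);
    end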

Advanced

Additional documentation

Articles

  • C. D. Tesche, M. A. Uusitalo, R. J. Ilmoniemi, M. Huotilainen, M. Kajola, and O. Salonen, "Signal-space projections of MEG data characterize both distributed and well-localized neuronal sources," Electroencephalogr Clin Neurophysiol, vol. 95, pp. 189-200, 1995.
  • M. A. Uusitalo and R. J. Ilmoniemi, "Signal-space projection method for separating MEG or EEG into components," Med Biol Eng Comput, vol. 35, pp. 135-40, 1997.


Tutorial 14: Additional bad segments

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

We have already corrected the recordings for the artifacts at fixed frequencies (power lines) and for some standard and reproducible artifacts (heartbeats and blinks). There are many other possible sources of noise that can make the recordings unusable in our analysis. This tutorial introduces how to identify and mark these bad segments.

Manual inspection

It is very important to mark bad segments (noisy sections) of the recordings before running any fancy analysis. It may save you hours of repeated work later, when you discover after processing all your data that you have to redo everything because you have left some huge artifacts in the recordings.

In terms of quality control, there is no automatic method that will give you results as good as a manual screening of the recordings. We recommend you always take a few minutes to scroll through all your files to identify and tag all the noisy segments. Do this full screening after you're done with all the other pre-processing steps (filtering and SSP/ICA cleaning) to remove what has not been corrected with other techniques.

At the beginning, it is not easy to separate what is too noisy from what is acceptable. This is usually an iterative process: on the first attempt you guess, you average the trials, estimate the sources, and finally realize that there are some eye movements left that mask your effect of interest. You have to delete everything, add some bad segments and try again. Conversely, if you reject too much data at the beginning, you may not have enough trials to observe your effect correctly. The balance is not easy to find, but you'll get good at it quickly. Brainstorm offers tools to do these operations easily, so a few rounds of trial and error are not too costly. Just make sure you check the quality of your data at every step of the analysis, so that you don't go too far in the wrong direction.

To review your recordings and check for major artifacts, you can for instance:

  • Display all the sensors in columns, you don't need to see each sensor individually.
  • Use a large time window (page duration = 5-20s).
  • Increase the gain of the sensors to see clearly the abnormally large values.
  • Scroll using the shortcuts corresponding to buttons [<<<] and [>>>]: F3, Shift+F3, F4, etc.

  • Unselect the autoscale option ([AS] button on the right side of the figure) so that the amplitude doesn't change every time you move to the next page, it will make the review easier.

When you identify something that doesn't look good:

  • Click and move your mouse to select the segment you consider as noisy.
  • Mark the segment as bad: Right-click > Reject time segment (or Ctrl+B).

  • The example below shows Run #02 from 206s to 211s:

    bad_select.gif

Automatic detection

We have developed some tools to help with this screening procedure. The process "Artifacts > Detect other artifacts" identifies epochs of time that contain typical artifacts from eye movement, subject movement or muscle contractions. While it is still advised that you visually inspect all of your data, this process can help identify areas that contain artifacts which you may want to mark as bad segments.

Currently, the process runs only on continuous raw links and identifies artifacts in two frequency bands, chosen because of the predictability of artifacts in these bands.

  • 1-7 Hz: Typically related to subject movement, eye movements and dental work (or other metal)

  • 40-240 Hz: Related to muscle noise and some sensor artifacts

  • Note that the alpha band (8-12 Hz) is specifically avoided here, since alpha oscillations can be quite high in amplitude and falsely detected as artifacts.

Important notes

  • Before running this detection, it is highly recommended that you run the cleaning processes for cardiac and eye blink artifacts.
  • This process is still being tested. If you find a bug or have other comments related to its performance, please report it on the user forum.

  • We recommend you treat the markers that this process creates as suggestions, not as ground truth. Do not use this method fully automatically, always review its results.

Recommendations for usage

  • Start by running the process on one run per subject. Scan through the recordings and confirm that the detection is performing well.
  • Adjust the threshold as needed, then run the detection on the other runs for that subject.
  • If there are many eye movements, the "1-7 Hz" detection can work well for computing an SSP projector. This is done using the menu "Artifacts > SSP: Generic" as described below. If a suitable projector is found and applied, re-run the artifact detection to find the remaining artifacts that were not removed.

Run #01

We will now apply this process on the first acquisition session:

  • Double-click on the link to show the MEG sensors for Run #01.

  • The process will exclude the segments that are already marked as bad from the detection. If you have marked bad segments at the beginning of this tutorial, delete them all: Select the category "BAD" and press the delete key, or menu Events > Delete group.

  • In the Record tab, select the menu: "Artifacts > Detect other artifacts".

    • Time window: The time window for which the detection will be performed.

    • Sensitivity: 1 is very sensitive, 5 very conservative, 3 works well for a variety of conditions.

    • Frequency band selection: Check the box for each frequency band in which you want to run the detection.

      bad_detect_process.gif

  • After running the process, event types are created, one for each frequency band. They contain extended events indicating the start and end of the epoch. The time resolution is 1 second and therefore the epoch may, in fact, be a bit longer than the actual artifact. You can manually refine the time definition if you wish and mark some or all events as bad.

    bad_detect_evt1.gif

  • Quickly check the segments that were marked as possible artifacts. They all correspond to some form of irregularity in the recordings, but there are very few of them. We will simply flag all the segments detected in both categories as bad.
  • To mark these events as bad, select both groups in the events list and use the menu
    "Events > Mark group as bad". Alternatively, you can rename the event groups and add the prefix "bad_" to their names, which has the same effect.

    bad_rename_evt1.gif

  • Unload all the data with the [X] in the toolbar of the Brainstorm window. Save the modifications.
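
The same detection can also be run from a script with the Brainstorm pipeline system. The call below is only a sketch: the process name (process_evt_detect_badsegment) and the option names are those that appear in scripts generated by the pipeline editor in recent Brainstorm versions, and may differ in yours; if in doubt, generate the exact call from the pipeline editor.

    % sFilesRaw: link(s) to the continuous (raw) files, e.g. selected in Process1
    bst_process('CallProcess', 'process_evt_detect_badsegment', sFilesRaw, [], ...
        'timewindow',  [], ...        % empty = whole file
        'sensortypes', 'MEG', ...     % sensor types on which to run the detection
        'threshold',   3, ...         % sensitivity: 1 (very sensitive) to 5 (conservative)
        'isLowFreq',   1, ...         % detect in the 1-7 Hz band
        'isHighFreq',  1);            % detect in the 40-240 Hz band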

Run #02

Repeat the same operation on the second data file:

  • Double-click on the link to show the MEG sensors for Run #02.

  • Run the menu "Artifacts > Detect other artifacts" with the same parameters as the first file.

    bad_detect_evt2.gif

  • The category "1-7Hz" contains many saccades, maybe enough for computing an SSP projector.
  • If you are not interested in seeing how to remove the saccades with SSP projectors, just mark the two groups as bad, either with the menu "Events > Mark group as bad" or by renaming them to "bad_1-7Hz" and "bad_40-240Hz". Then go directly to the next tutorial.

Advanced

Saccade SSP

This run #02 is a good example to illustrate how we can use SSP projectors to remove the artifacts caused by eye saccades. You could mark the saccades manually or use the pre-selection available in "1-7Hz".

  • Rename the category "1-7Hz" to "saccade" and delete the category "40-240Hz".

  • Open the EOG recordings at the same time: right-click on file > EOG > Display time series.

  • Keep only the saccades: Delete all the events that do not show a clear step in the HEOG channel.

    bad_saccades.gif

  • Run the process "Artifacts > SSP: Generic" with the following options:

    ssp_saccades.gif


Note: the event window option will not be used because the events "saccade" are extended events and already include their duration.

  • The first component removes the artifact very well. Keep it selected and click on [Save].

    ssp_saccades_topo.gif

  • Run again the process "Detect other artifacts". There are now fewer events detected in 1-7Hz.

  • Rename the categories to "bad_1-7Hz" and "bad_40-240Hz" to flag them as bad segments.

    bad_rename_evt2.gif

Advanced

Elekta-Neuromag SQUID jumps

MEG signals recorded with Elekta-Neuromag systems frequently contain SQUID jumps. They are easy to spot visually in the recordings: they look like sharp steps followed by a change of baseline value. These jumps are due to the instability of the electronics, which fails to control the state of the SQUID during the recording sessions.

These steps cause important issues in the analysis of the signal, both in amplitude and in frequency. They are difficult to detect and remove, especially when some pre-processing with the Elekta software has already been applied. Running MaxFilter/SSS on MEG recordings with a SQUID jump on one sensor propagates the artifact to all the sensors.

The best approach is to remove these jumps from the analysis:

  • By marking them as bad segments manually, if their number is reasonable (in Brainstorm).
    The 1-7Hz artifact detection employed here will usually catch them.

  • By marking the sensors as bad if only a few of them are affected (before running MaxFilter).

  • By computing SSP projectors to remove the jumps (before running MaxFilter/SSS).
    As of today, this approach is possible only if you use MNE or MNE-Python for the early stages of pre-processing. When the free SSS algorithm implemented in MNE-Python is made available in Brainstorm, we will be able to use this approach in Brainstorm as well.

An example before MaxFilter (SQUID jump visible on one sensor only):

  • squid_jumps.png

Examples after MaxFilter (SQUID jump propagated on all the sensors):

  • artifact_jumps_sss.gif


Tutorial 15: Import epochs

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

We can consider that our datasets are now free of any major artifact. We will proceed to the analysis of the brain signals we recorded in response to the auditory stimulation. There are two major types of processing workflows for MEG/EEG, depending on whether we are dealing with an event-related paradigm or a steady-state/resting-state study.

This tutorial will only focus on the event-related case: series of stimuli are sent to the subject and we have the corresponding triggers marked in the recordings. We will base our analysis on these triggers, import short epochs around each of them and average them. You will find in the advanced tutorials a scenario of MEG resting-state analysis.

Import in database

Until now, we've only been looking at data that was read from continuous files. The raw file viewer provides rapid access to the recordings, but many operations can only be applied to short segments of recordings that have been imported in the database. We will refer to these as "epochs" or "trials".

  • Right-click on Run#01 > Import in database.

    import_popup.gif

  • Set the import options as described below:

    import_options.gif

  • Time window: Time range of interest. We are interested in all the stimulations, so do not change this parameter. The default values always represent the entire file.

  • Split: Useful for importing continuous recordings without events, as successive chunks of the same duration. We do not need this here.

  • Events selection: Check the "Use events" option, and select both "standard" and "deviant".
    The number between parentheses represents the number of occurrences of each event in the selected time window (it changes if you modify the time definition at the top of the window).

  • Epoch time: Time segment that is extracted around each event marker. Set it to [-100,+500]ms.
    This option is disabled for extended events: if you want to enable it, you need to convert the extended events to simple events first.

  • Apply SSP/ICA projectors: Use the active projectors calculated during the previous pre-processing steps. Always check the summary of the projectors that are selected.
    Here there are 2 categories ("cardiac" and "blink") with a total of 3 projectors selected (one in "cardiac" and two in "blink", the blink and the saccade). Keep this option selected.

  • Remove DC Offset: Check this option, select Time range: [-100, -1.7]ms. For each epoch, it will:

    • Compute the average of each channel over the baseline (pre-stimulus interval: [-100,-1.7]ms)
    • Subtract it from the channel at every time instant (full epoch interval: [-100,+500]ms).
    • This option removes the baseline value of each sensor. In MEG, the sensors record variations around a somewhat arbitrary level, therefore this operation is always needed, unless it was already applied during one of the pre-processing steps.
  • Resample recordings: Keep this unchecked.

  • Create a separate folder for each epoch type: Do not check this option.

    • If selected: a new folder is created for each event type ("standard" and "deviant")
    • If not selected: all the epochs are saved in a single new folder, common to all the events, with the same name as the initial raw file. This is what we want, because we have two acquisition runs with different channel files (different head positions and different SSP projectors) to import for the same subject. If we selected this option, the "standard" epochs of both runs would be imported in the same folder and would end up sharing the same channel file, which is not correct.

One new folder appears in Subject01. It contains a channel file and two trial groups.

  • The channel file is copied from the continuous file.
  • To expand a group of trials and show all the files: double-click on it or click on the "+" next to it.
  • The SSP projectors calculated in the previous tutorial were applied on the fly while reading from the continuous file. These epochs are clean from eye blinks and power line contamination.
  • Note that the trials that are overlapping with a BAD segment are tagged as bad in the database explorer (marked with a red dot). All the bad trials are going to be ignored in the rest of the analysis, because they are ignored by the Process1 and Process2 tabs (see next tutorial).

    import_new_folder.gif import_bad.gif
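
If you later want to script this import, the pipeline system offers an equivalent process. The call below is a hedged sketch: the process name (process_import_data_event) and its option names are those found in scripts generated by recent Brainstorm versions and may differ in yours, so regenerate the exact call from the pipeline editor if needed; sFilesRaw is assumed to point to the continuous file link.

    % sFilesRaw: link(s) to the continuous file(s), e.g. selected in Process1
    sEpochs = bst_process('CallProcess', 'process_import_data_event', sFilesRaw, [], ...
        'subjectname', 'Subject01', ...
        'condition',   '', ...                  % empty = folder named after the raw file
        'eventname',   'standard, deviant', ... % events to epoch around
        'timewindow',  [], ...                  % empty = whole file
        'epochtime',   [-0.1, 0.5], ...         % [-100,+500]ms around each event
        'createcond',  0, ...                   % do not create one folder per event type
        'usessp',      1, ...                   % apply the active SSP/ICA projectors
        'baseline',    [-0.1, -0.0017]);        % remove the DC offset over [-100,-1.7]ms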

Review the individual trials

After reviewing the continuous file with the "columns" view (channels one below the other) it can be useful to also review the imported trials with the "butterfly" view (all the channels superimposed).

  • Double-click on the first trial for the "deviant" condition.
  • Switch to the "butterfly" display mode: in the Record tab, click on the first button in the toolbar.

    import_review.gif

  • Right-click on the figure > Navigator > Next data file, or use the keyboard shortcut F3.
    This way you can quickly review all the trials to make sure that there is no obvious problem.
    Mac users: The keys "Fx" are obtained by holding the "Fn" key simultaneously.

    import_navigator.gif

To manually tag a trial as bad, you have three options:

  • Right-click on the trial file in the database > Reject trial.

  • Right-click on the figure > Reject trial.

  • Use the keyboard shortcut Ctrl+B.

  • To set all the trials back as good in a group: right-click on the trials group > Accept bad trials.

Raster plot

You can also get an overview of the values of one specific sensor over all the trials at once.

  • Right-click on the group of trials "deviant" > Display as image > MEG.

  • You can change the selected sensor with the drop-down menu in the Display tab, or use the up/down arrows on your keyboard after clicking on the figure.
  • The bad trials are already marked, but if they were not, this view could help you identify them easily.

    erpimage.gif

Run #02

Repeat the same operations for the second dataset:

  • Right-click on Run#02 > Import in database.

  • Import events "standard" and "deviant" with the same options.

    import_run02.gif

Advanced

Epoch length

We imported epochs of 600ms (100ms baseline + 500ms post-stimulus) but did not justify this choice.
The length of the epochs you import should be chosen very carefully. If you realize later that your epochs are too short or too long, you will have to restart your analysis from this point.
The epoch length to consider depends on:

The experimental design

  • The minimum duration between two stimuli defines the maximum length you can consider analyzing after the stimulus. You should design your experiment so that it always includes the entire evoked response, plus an additional segment that you can use as a baseline for the following epoch.
  • In this study, the inter-stimulus interval (ISI) is random between 0.7s and 1.7s. The minimum ISI (700ms) is long enough to include the entire auditory evoked response, but not the button press that follows a deviant tone. In some cases (late subject response and short ISI), the following stimulation occurs while the brain is still processing the button press. The baseline of some epochs may contain motor and somatosensory components.
  • For data processing, it is always better to have longer ISIs, but it also means increasing the duration of the experiment or decreasing the number of repetitions, which leads to other problems. The trade-off between data quality and recording time in this experiment is acceptable: very few trials are actually contaminated by the motor response to the previous trial. We will ignore this problem in the following tutorials, but you could decide to reject these few trials in your own analysis.
  • Here we consider only a short baseline (100ms) to avoid including too much motor activity.
    We will only study the auditory response, therefore 500ms post-stimulus is enough.

The processing pipeline

You may have to artificially extend the epochs of interest for technical reasons. Most filters cause edge effects, i.e. unreliable segments of data at the beginning and the end of the signal. When applied to short epochs, they might destroy all the data of interest.

To avoid this, you can add a few hundred milliseconds before and after your epoch of interest. It doesn't matter if it overlaps with the previous or the next epoch. After running the operations that require longer signals, you can cut your epochs back to the desired length. Examples:

  • Time-frequency (Morlet wavelets):
    When estimating the power at frequency f Hz, you get incorrect values for at least one period (T=1/f) at the beginning and the end of the signal. For example, at 2Hz you need to discard the first and last 500ms of your time-frequency maps (1/2Hz=0.5s).

  • Low-pass filtering:
    With any filtering operation there will always be a transient effect at the beginning of the filtered data. After filtering, you need to discard the time windows corresponding to these effects. Their duration depends on the order of the filter: this is documented in the tutorial Power spectrum and frequency filters.

  • Hilbert transform:
    Same considerations as for the low-pass filter. This process starts by filtering the signals in various frequency bands, using the same function as the band-pass and low-pass filters.

  • Normalizations:
    The normalization procedures that use a baseline from the same epoch (Z-score, ERS/ERD, baseline correction) usually work better with longer baselines. The longer the clean baseline, the better the estimation of the average and standard deviation over this baseline. If your baseline is too short, the quality of your normalization will be low.
    If you normalize time-frequency maps or filtered source averages, you have to additionally exclude the edge effects from the baseline, and consider an even longer baseline.

In this tutorial, we decided to work with very short epochs (600ms only) so that all the analysis would run on most computers, including personal laptops. For any type of frequency analysis on the recordings, this will be too short. When processing your own recordings, you should increase the size of the epochs beyond the segment that you are actually planning to study.
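
As a quick sanity check when choosing how much padding to add, you can compute the duration to discard at each edge from the lowest frequency you plan to analyze (one period T = 1/f, as explained above). A minimal Matlab illustration:

    f_min = 2;                 % lowest frequency of interest, in Hz
    pad   = 1 / f_min;         % one period at f_min: 0.5 s at 2 Hz
    fprintf('Discard at least %.0f ms at each edge of the epoch.\n', 1000 * pad);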

Advanced

On the hard drive

Right-click on any imported epoch > File > View file contents:

  • import_struct.gif

Structure of the imported epochs: data_*.mat

  • F: [Nchannels x Ntime] recordings time series, in Volts.

  • Std: [Nchannels x Ntime] Standard deviation or standard error, when available (see next tutorial).

  • Comment: String displayed in the database explorer to represent this file.

  • ChannelFlag: [Nchannels x 1] One value per channel, 1 means good, -1 means bad.

  • Time: [1 x Ntime] Time values for each sample recorded in F, in seconds.

  • DataType: Type of information saved in the F matrix.

  • Device: Name of the acquisition system used to record this file.

  • Leff: Effective number of averages. For averaged files, number of trials that were used to compute this file.

  • Events: Time markers available in the file (stimulus triggers or other events)

    • label: Name of the event group.

    • color: [r,g,b] Color used to represent the event group, in Matlab format.

    • epochs: [1 x Nevt] Epoch index for each event occurrence; contains only ones for imported epochs.

    • times: [1 x Nevt] Time in seconds of each marker in this group (times = samples / sfreq).
      For extended events: [2 x Nevt], first row = start, second row = end.

    • reactTimes: Not used anymore.

    • select: Indicates if the event group should be displayed in the viewer.

    • channels: {1 x Nevt} Cell array of cell-arrays of strings. Each event occurrence can be associated with one or more channels, by setting .channels{iEvt} to a cell-array of channel names.

    • notes: {1 x Nevt} Cell-array of strings: additional comments for each event occurrence

  • History: Operations performed on file since it was imported (menu "View file history").

File history

Right-click on any imported epoch > File > View file history:

  • import_history.gif

List of bad trials

  • There is no field in the file structure that says if the trial is good or bad.
    This information is saved at the level of the folder, in the brainstormstudy.mat file.

  • Right-click on an imported folder > File > Show in file explorer.

    import_folder.gif

  • Load the brainstormstudy.mat file into Matlab; the bad trials are listed in the cell array "BadTrials":

    import_study.gif
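
For example, from the Matlab command window (run from inside the imported folder, or give the full path to the file):

    S = load('brainstormstudy.mat');
    S.BadTrials            % cell array with the file names of the bad trials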

Useful functions

  • in_bst_data(DataFile, FieldsList): Read an imported epoch.

  • in_bst(FileName, TimeWindow): Read any Brainstorm data file with the possibility to load only a specific part of the file. "TimeWindow" is a range of time values in seconds: [tStart, tStop].

  • bst_process('LoadInputFile', FileName, Target, TimeWindow): The most high-level function for reading data files. "Target" is a string with the list of sensor names or types to load.
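
For illustration, here is how these readers could be called, following the signatures documented above. This is only a sketch: DataFile is assumed to be a string containing the path of one imported epoch (data_*.mat).

    % DataFile: path of an imported epoch (data_*.mat), as a string
    DataMat = in_bst_data(DataFile);                                    % full epoch structure
    DataMat = in_bst(DataFile, [0, 0.2]);                               % read only 0-200ms
    sInput  = bst_process('LoadInputFile', DataFile, 'MEG', [0, 0.2]);  % only the MEG sensors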


Tutorial 16: Average response

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

All the epochs we have imported in the previous tutorial are represented by matrices of the same size (same number of channels, same number of time points), therefore they can be averaged together by experimental condition. The result is referred to interchangeably as "evoked response", "average response", "event-related field" (ERF, in MEG) or "event-related potential" (ERP, in EEG). It shows the components of the brain signals that are strictly time-locked to the presentation of a stimulus.

  • average_slide.gif

Averaging

We will now compute the average responses for both the "standard" and "deviant" conditions. Two constraints have to be taken into consideration at this stage.

Averaging runs separately: With MEG, it is not recommended to average sensor data across acquisition runs with different head positions (i.e. different "channel files"). If the head of the subject moved between two blocks of recordings, the sensors do not record the same parts of the brain before and after, therefore the runs cannot be compared directly. With EEG, you can generally skip this recommendation.

Number of trials: When computing subject-level averages for experimental conditions with different numbers of trials, you have two options. You can either use the same number of trials for all the conditions and subjects (to make them "more comparable") or use all the available good trials (more samples lead to better estimates of the mean and variance). Here we will go with the second option, using all the trials. See this advanced section for more details.

  • Drag and drop all the "standard" and "deviant" trials for both runs in Process1.

  • In the Process1 box, the number of imported trials (comment in the database explorer, eg. "40 files") does not match the number of files selected for processing (between brackets, eg. "[39]"). This difference is due to the bad trials in these folders. The trials tagged with a red dot in the database explorer are ignored by all the processes. The total number of selected files is 457 instead of 479, which means that we have a total of 22 bad trials.

  • Select the process "Average > Average files".
    Select the options: By trial group (folder), Arithmetic average, Keep all the event markers.

    average_folder.gif

  • You get two new files for each acquisition run. The number between parentheses indicates how many good trials were used to compute each average.

    average_files.gif

Process options: Average

Description of all the options of the process: Average > Average files.

  • Everything: Averages all the files selected in Process1 together, creates only one file in output.

  • By subject: Groups the files by subject (ignoring the folders), creates one file per subject.

  • By folder (subject average): Groups by subject and by folder, ignoring the trial groups.
    In the current configuration, it would produce two files, one for each acquisition run.

  • By folder (grand average): Groups by folder, across the subjects. All the files located in folders with the same name would be averaged together, regardless of the subject they belong to.

  • By trial group (folder average): Groups by set of trials with the same name, separately for each folder and each subject. Here it creates four groups (two folders x two trial groups).

  • By trial group (subject average): Groups by set of trials with the same name, for each subject. The separation in folders is ignored. Here it would produce two files (deviant and standard).

  • By trial group (grand average): Groups by set of trials with the same name, ignoring the classification by folder or subject.

  • Function: Documented directly in the option panel.

  • Weighted average: When averaging single trials, the number of files is saved in the field Leff of the average file. When re-averaging these averages across acquisition sessions or subjects, this field Leff can be used to weight each file by the number of trials from which it was computed:
    mean(x) = sum(Leff(i) * x(i)) / sum(Leff(i))
    In most cases, this option should be selected when averaging within a subject and disabled when averaging across subjects. It has no impact in the current example (no averages, Leff=1).

  • Keep all the event markers: If this option is selected, all the event markers available in the individual trials are copied to the average file. This can be useful to check the relative position of the artifacts or the subject responses, or to quickly detect an unwanted configuration, such as a subject who would systematically blink right after a visual stimulus.
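
For scripting, the same averaging step corresponds to the process_average process. The call below is only a sketch, based on what the pipeline editor generates: the numeric code for 'avgtype' follows the order of the grouping options listed above and, like the other option names, may change between versions, so verify it against a script generated for your installation. sEpochs is assumed to contain the imported trials.

    % sEpochs: the imported "standard" and "deviant" trials from both runs
    sAvg = bst_process('CallProcess', 'process_average', sEpochs, [], ...
        'avgtype',    5, ...   % By trial group (folder average)
        'avg_func',   1, ...   % Arithmetic average: mean(x)
        'weighted',   0, ...   % do not weight by Leff
        'keepevents', 1);      % keep all the event markers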

Visual exploration

The average response contains interesting information about the brain operations that occur shortly after the presentation of the stimulus. We can explore two dimensions: the location of the various brain regions involved in the sensory processing and the precise timing of their activation. Because these two types of information are of equal interest, we typically explore the recordings with two figures at the same time, one that shows all the signals in time and one that shows their spatial distribution at one instant.

  • Open the MEG recordings for the deviant average in Run#01: double-click on the file.

  • In the Record tab: Select the "butterfly" view mode (first button in the toolbar).
  • In the Filter tab: Add a low-pass filter at 40Hz.

  • In the Record tab: Delete the "cardiac" event type, we are not interested in their distribution here.

    deviant_ts.gif

  • This figure shows a typical clean evoked response, with a high signal-to-noise ratio. This represents the brain response to a simple auditory stimulation, the large peak around 90ms probably corresponds to the main response in the primary auditory cortex.

  • The green line represents the global field power (GFP), i.e. the standard deviation of all the sensor values at each time point. This measure is sometimes used to identify transient or stable states in ERP/ERF. You can hide it with the display options menu Extra > Show GFP.

  • This is the response to the deviant beeps (clearly higher in pitch), for which the subject is supposed to press a button to indicate that he/she detected the target. These responses are represented by the "button" events, distributed between 350ms and the end of the file (many responses happened after 500ms). Because of the variability of the response times, we can already anticipate that we won't be able to study the motor response correctly from this average. For studying the activity in the motor area, we would need to epoch the recordings again around the "button" events.

Add a spatial view:

  • Open a 2D topography for the same file (right-click on the figure > View topography, or Ctrl+T).

  • Review the average as a movie with the keyboard shortcuts (hold the left or right arrow key).

  • At 90ms, we can observe a typical topography for a bilateral auditory response. On both the left and the right sensors, we observe field patterns that suggest dipolar-like activity in the temporal or central regions.

    deviant_topo.gif

  • Close everything with the button [X] in the top-right corner of the Brainstorm window.
    Accept to save the modifications (you deleted the "cardiac" events).

  • Open the "standard" average in the same way and delete the "cardiac" markers.

Repeat the same operations for Run#02:

  • Open the MEG recordings for deviant and standard.
  • Delete the "cardiac" markers in both files.
  • Open a 2D topography and review the recordings.
  • Close everything.

Interpretation

Let's display the two conditions "standard" and "deviant" side-by-side, for Run#01.

  • Right-click on average > MEG > Display time series.

  • Right-click on average > MISC > Display time series (EEG electrodes Cz and Pz)

  • Right-click on average > MEG > 2D Sensor cap

  • In the Filter tab: add a low-pass filter at 40Hz (it makes the figures easier to read).

  • In the Record tab: you can set a common amplitude scale for all the figures with the button [=].

  • Here are the results for the standard (top) and deviant (bottom) beeps:

    average_summary.gif

The legend in blue shows names often used in the EEG ERP literature:

  • P50: 50ms, bilateral auditory response in both conditions.

  • N100: 95ms, bilateral auditory response in both conditions.

  • MMN: 230ms, mismatch negativity in the deviant condition only (detection of an abnormality).

  • P200: 170ms, in both conditions but much stronger in the standard condition.

  • P300: 300ms, deviant condition only (decision making in preparation of the button press).

  • Some of these components seem to have a direct correspondence in MEG (N100), some don't (P300).

Additional quality check with the event markers:

  • The standard average shows two unwanted events between 100ms and 200ms post-stimulus, one "blink" and one "button" response. The trials that contain them should be marked as bad and the average recomputed, because the subject is probably not doing the task correctly.
  • We will not do this here because the SNR is high enough anyway, but remember that this option "Keep all events" from the averaging process provides a good summary of the recordings and can help you identify some bad trials.

Advanced

Averaging bad channels

The bad channels can be defined independently for each trial, therefore we can have different numbers of data points averaged for different electrodes. If we have a channel A considered good for NA trials, the corresponding channel in the average file is computed in this way: sum(NA trials) / NA.

In the average file, a channel is considered good if it is good in at least one trial, and considered as bad if it is bad in all the trials. The entire file is then considered as if it were computed from the maximum number of good trials: Nmax = max(Ni), i=1..Ntrials.

This procedure preserves the maximum amount of data. However, it may cause some unwanted effects across channels: the SNR might be higher for some channels than for others. If you want to avoid this, mark the channels as bad in all the trials, or copy all the bad channels to the average file. This can be done easily using the database explorer, see tutorial Bad channels.

Advanced

Averaging across runs

As said previously, it is usually not recommended to average MEG recordings in sensor space across multiple acquisition runs, because the subject might have moved between the sessions. Different head positions were recorded for each run, so we will reconstruct the sources separately for each run to take these movements into account.

However, in the case of event-related studies it makes sense to start our data exploration with an average across runs, just to evaluate the quality of the evoked responses. We have seen in tutorial #4 that the subject almost didn't move between the two runs, so the error would be minimal.

  • channel_multiple.gif

Let's compute an approximate average across runs. We will run a formal average in source space later.

  • To run the same process again with different parameters: File > Reload last pipeline. Select:

  • By trial group (subject average): One average per experimental condition, across acquisition runs

  • Arithmetic average + Standard error: Save the standard error across all the trials in the same file

  • Keep all the event markers: Select this option, we are interested in the button press events.

  • The two averages are saved in the folder "Intra-subject". This is where all the results of processes involving multiple folders, within one subject, will be saved.

    average_stderror.gif

Advanced

Standard error

If you computed the standard deviation or the standard error together with an average, it will be automatically represented in the time series figures.

  • Double-click on one of the AvgStderr files to display the MEG sensors.
    The light-grey area around the signals represents the maximum standard error around the maximum and minimum values across all the sensors.

  • Delete the useless events (cardiac and saccade).
  • Select two sensors and plot them separately (right-click > Channels > View selected, or "Enter").
    The green and red areas represent, at each time point, the standard error around the signal.

    stderror.gif

Advanced

Number of trials

You should always be careful when comparing averages computed from different numbers of trials. In most cases, you can safely include all the trials in your averages, even in the case of imbalanced designs. However, for very low numbers of trials or when comparing peak amplitudes, having the same number of trials becomes more critical.

Advanced

Selecting equal numbers of trials

If you decided you want to use the same number of trials across all the experimental conditions and/or across all the subjects, you can use a process to select them easily from the database.

  • Drag and drop all the "standard" and "deviant" trials for both runs in Process1.

  • Select the process "Files > Select uniform number of trials".
    Select the options: By trial group (folder) and Uniformly distributed.

    select_process.gif

  • If you click on [Run], it does nothing but highlight the first selected file in the database explorer. This process only performs a file selection; it needs to be followed by another process that uses the selected files to compute something. However, you can see what was done in the process report. The reports are displayed only when an error or a warning was reported, but you can open them manually to check for additional messages: menu File > Report viewer.

    select_report.gif

  • The comment in the report shows the 4 groups of trials that were identified based on the option we selected (group "by trial group and folder"), with the number of good trials per group.
    The process picked 39 trials in each group, uniformly distributed in the list of available trials.
    Example of trial indices selected for Run01/standard: [1, 6, 11, 16, 21, 26, 31, 36, ..., 188, 193]

  • To average these selected trials together, you would just need to add the process "Average > Average files" after this selection process in the pipeline editor.

    average_selected.gif

Process options

Available options in the process: Files > Select uniform number of trials.

  • By folder: Groups by subject and by folder, ignoring the trial groups.
    Here, it would identify two groups, one for each acquisition run: Run01, Run02.

  • By trial group (folder): Groups by set of trials with the same name, separately for each folder and each subject. Here it would identify four groups: Run01/deviant, Run01/standard, Run02/deviant, Run02/standard.

  • By trial group (subject): Groups by set of trials with the same name, for each subject. The separation in folders is ignored. Here it would identify two groups: deviant, standard.

How many trials to select in each group:

  • Number of trials per group: This number of trials must be available in all the groups. If set to 0, the group with the lowest number of good trials is identified and the same number of trials is selected from all the other groups.

How to select trials in a group that contains more than the requested number (Nf files, selecting only Ns):

  • Random selection: Select a random subset of Ns trials. Trial indices: randperm(Nf,Ns)

  • First in the list: Select the first Ns trials. Trial indices: 1:Ns

  • Last in the list: Select the last Ns trials. Trial indices: Nf-Ns+1:Nf

  • Uniformly distributed: Select Ns equally spaced trials. Trial indices: round(linspace(1, Nf, Ns))
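
For illustration, the four selection rules can be reproduced directly in Matlab. The values of Nf and Ns below are just example numbers, not taken from the database:

    Nf = 195;   % number of good trials available in the group (example value)
    Ns = 39;    % number of trials to select
    iRandom  = randperm(Nf, Ns);             % Random selection
    iFirst   = 1:Ns;                         % First in the list
    iLast    = Nf-Ns+1:Nf;                   % Last in the list
    iUniform = round(linspace(1, Nf, Ns));   % Uniformly distributed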

On the hard drive

The average files have the same structure as the individual trials, described in the tutorial Import epochs.

  • average_struct.gif

Differences with the imported epochs

  • F: [Nchannels x Ntime] average recordings across all the trials, in Volts.

  • Std: [Nchannels x Ntime] standard error or standard deviation across all the trials, in Volts.

  • Leff: Effective number of averages = Number of good trials that were used to compute the file



Tutorial 17: Visual exploration

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

This tutorial illustrates the options Brainstorm offers to graphically represent and interactively explore the evoked responses we computed in the previous tutorial. It shows how to produce spatial maps of the sensor values, average over time windows, and save screen captures and movies.

2D/3D topography

The sensor values at one time instant can be represented on a surface. Each amplitude value is associated with a color using a colormap (described in the next tutorial). We call this type of representation a "sensor topography": it shows the spatial distribution of the magnetic fields (or electric potentials).

  • Show the MEG signals for the standard average in Run#01 (double-click on the file). This gives us direct feedback on the current time instant and allows us to jump quickly to a different time.

  • Right-click on the same file > MEG > select the menus: 3D sensor cap, 2D sensor cap, 2D disc.

    explore_topo.gif

  • 3D sensor cap: Represents the real 3D positions of the sensors.

  • 2D sensor cap: The sensors are projected on a 2D plane. Realistic distribution of the sensors.

  • 2D disc: The sensors are projected on a sphere, then displayed as a flat disc. The result is sometimes distorted.

  • In each of these views, you can add markers to indicate the sensors and their labels.
    Right-click on the figure > Channels > Display sensors/labels (or Ctrl+E).

  • You can change the number of contour lines: Right-click > Contour lines > 0-20.

    explore_contours.gif

  • In the 3D view, you can notice a hole in the right-occipital area. It corresponds to a damaged sensor in the MEG system we used for collecting this dataset.
  • For EEG/sEEG/ECoG recordings, there is an additional representation mode available:
    "3D Electrode". This will be detailed in the advanced tutorials corresponding to these modalities.

    explore_3delectrode.gif

Advanced

Magnetic interpolation

By default, some of the views re-interpolate the fields that are recorded by the MEG sensors to get smoother displays. A simple inverse problem and forward problem are solved to reconstruct the magnetic fields on a high-resolution surface of virtual magnetometers (function channel_extrapm.m).

On Elekta-Neuromag systems, this interpolation has the effect of converting the topographies of the planar gradiometers into topographies of magnetometers, which deeply affects the display.

The menu "No magnetic interpolation" offers the same views, but without using this reconstruction of the magnetic field. A spatial interpolation of the values between the sensors is performed instead.

  • explore_nointerp.gif

Advanced

2D Layout

The menu 2D Layout represents, in the same figure, the spatial information (the values for each channel are displayed where the sensor is actually located) and the temporal information (instead of just one value, we represent the signal around the current time).
The light gray lines represent the zero amplitude (horizontal lines) and the current time (vertical lines).

To zoom in/out in each small graph, use the buttons at the bottom-right corner of the figure, or the corresponding mouse shortcuts: Ctrl+mouse wheel and Shift+mouse wheel. To select multiple sensors simultaneously: right-click and move your mouse to enlarge the selection rectangle.

  • explore_2dlayout.gif

You can use this display mode to compare multiple files:
Select multiple files in the database explorer, right-click on any of them > 2DLayout.

  • multi2dlayout.gif

Advanced

Display as image

The menu "Display as image" shows the same information as the "time series" view, but the values for each sensor are represented with a color instead of a line.

  • explore_image.gif

Advanced

Time selection

Click somewhere on the white part of the time series figure, hold the mouse button, and drag your mouse left or right: A transparent blue rectangle appears to represent the time selection. If you right-click on the figure, new options become available in the popup menu:

  • Set current time: Move the time cursor where the right-click occurred. The shortcut Shift+Click can be useful when trying to move in time on dense displays in columns view.

  • Set selection manually: Type the beginning and end of the selected window (in milliseconds).

  • Average time: Average over the selected time window and save it as a new file in the database.
    Note that the best way to do this is to run the process "Average > Average time".

  • Export to database: Extract the recordings and save them in a new file in the database.
    If some sensors are selected, only their values are extracted, all the others are set to zero.
    Note that the best way to do this is to run the process "Extract > Extract time".

  • Export to file: Same, but in a user-defined file (not in the database).

  • Export to Matlab: Same, but export the selection as a variable in the current Matlab workspace.

    explore_timeselect.gif

Advanced

Snapshots

Using Brainstorm, you will quickly feel like saving the beautiful images you produce. Your operating system already provides some nice tools for doing this. Many other options are available in the "Snapshot" menu, accessible with a right-click on any Brainstorm figure.

Operating system

  • Windows/Linux: Press the PrintScreen key on your keyboard and paste the copied screen in your favorite image or text editor. The combination Alt+PrintScreen only copies the figure that is currently selected.

  • MacOS: Many more options are available; search online for the one that suits you best.

Snapshot menu

  • The options available in the Snapshot menu depend on the type of data represented. Examples:

    explore_snapshot.gif

  • Save as image: Save the figure in a file, without the title bar and borders. Many formats available.

  • Open as image: Capture the figure and open it as an image. This can be useful if you want to visually compare the selected figure with another one that you cannot display at the same time (because they have different time or frequency definitions).

  • Open as figure: Similar, but copies the figure as a new Matlab figure with some interactivity.

  • Contact sheet and movies: See next section.

  • Export to database: Save the recordings in the figure as a new entry in the database.
    If there are selected channels, only their values will be saved, the others being set to zero.

  • Export to file: Extract the time series displayed in this figure (or only the selected sensors), and save them in a file. Several exchange file formats available for exporting to another program.

  • Export to Matlab: Same thing, but exports the structure in a variable of the Matlab workspace.

  • Save as SSP projector: Create an SSP projector that removes the current topography.

  • Save surface: Save the surface in a file, with the current modifiers applied (smooth, resect).

Advanced

Movie studio

  • Movie (horizontal/vertical): Rotate the 3D scene horizontally or vertically.

  • Movie (time): Selected figure: Create an .avi movie showing the evolution of the selected figure over time.

    • The dimensions of the movie depend on the actual size of the figure on the screen.
      Resize the figure to the appropriate dimensions for the movie before using this menu.

    • Zoom in/out (mouse wheel) and move the image (middle click+move) to give enough space to the time stamp that is added at the bottom-left of the rendered movie.
    • Don't do anything else while rendering: the captured figure must be visible all the time.
  • Movie (time): All figures: Instead of capturing one figure only, it captures them all. Arrange your figures the way you want and create a movie of all your workspace at once.

    movie_options.gif movie.gif

Advanced

Contact sheets

A contact sheet is a large image representing many time frames of the same figure.

  • Same recommendations as for movies: if you don't want the final image to be too big, reduce the size of the figure, zoom in, move, hide the colorbar. Keep the figure visible during the capture.
  • At the end, the image is displayed in an image viewer with which you can zoom (menu or wheel), move (click+move) and save the image (File > Save as).

  • Example for the standard average, run#01:

    contact_options.gif contact_standard.gif

Advanced

Edit the figures

All the figures can be edited with the Figure popup menu:

  • explore_figure.gif

If you select both "Matlab controls" and "Plot edit toolbar", you will get all the tools available in the Matlab environment to explore the data and edit the figure. Select the button "Edit plot" to edit the graphic properties of an object (eg. select a signal, then right-click on it to edit its properties) or unselect it to get back to the regular Brainstorm figure interactivity.

  • figure_edit.gif

Advanced

Mouse shortcuts

Scroll

  • Mouse wheel: Zoom in / zoom out

  • Control + mouse wheel: Change the length of the displayed time window (2D Layout)

  • Control + mouse wheel: Vertical zoom (time series)

Click

  • Left click + move: Rotate (3D) or select (time)

  • Middle click + move: Move in zoomed figure (ie. panning)

  • Left click + right click + move: Move in zoomed figure (ie. panning)

  • Shift + left click: Force setting the current time, ignoring if a line was clicked (time series)

  • Right click + move: Vertical zoom (time series)

  • Right click + move: Select sensors (2D topography)

  • Right click: Popup menu

  • Double-click: Restore initial view

Click on something

  • Click on a line: Select a sensor

  • Shift + click on a line: Select a sensor and unselect all the others (2D topography)

  • Click on the colorbar + move: Change contrast (up/down) and brightness (left/right)

Advanced

Keyboard shortcuts

Here is a memo of all the keyboard shortcuts for time series and topography figures. If you don't remember them, you can find most of them in the figure popup menus.

  • Arrows: Left, right, PageUp, PageDown: Move in time

  • Delete: Mark selected sensors as bad

  • Shift + Delete: Mark non-selected sensors as bad (=keeps only the selected sensors)

  • Enter: View time series for the selected sensors

  • Escape: Unselect all the selected sensors

  • Shift + Escape: Set all the bad sensors as good (=brings back all the channels in the display)

  • Ctrl + A: Show axis on 3D figures (X,Y,Z)

  • Ctrl + B: Set trial as bad

  • Ctrl + D: Dock/undock figure in Matlab's figures list

  • Ctrl + E: Show sensors markers and labels (E=Electrode) or add an event marker (E=Event)

  • Ctrl + F: Copy the figure, remove all the callbacks and detach it from the Brainstorm figure management

  • Ctrl + I: Save figure as image

  • Ctrl + J: Open figure as an image

  • Ctrl + R: Open Time series view (R=Recordings)

  • Ctrl + S: Open Sources view (S=Sources)

  • Ctrl + T: Open 2D sensor cap view (T=Topography)

  • Shift + letter: Change selected montage

  • F1, F2, F3: with or without Shift, calls the database navigator (F1=subject, F2=condition, F3=file)

  • 1, 2, 3, 4, 5, 6, 7, 8, 9, 0: Set a pre-defined 3D view

  • + / -: Increase/decrease the channel gain (vertical zoom for time series)

  • =: Apply view to all figures

  • *: Apply montage to all figures

  • Notes for Mac users:
    • PageDown = Fn + DOWN

    • PageUp = Fn + UP

    • F1 = Fn + F1

    • Mouse wheel = Two finger up/down on the MacBook pad

Advanced

Graphic bugs

If you observe any graphic problem with these displays, there might be an issue with the OpenGL drivers. You may try the following options.

  • Update the drivers for your graphics card.
  • Upgrade your version of Matlab.
  • Run the compiled version of Brainstorm (see Installation).

  • Turn off the OpenGL hardware acceleration: Menu File > Edit preferences > Software or Disabled.

  • Send a bug report to MathWorks.


Tutorial 18: Colormaps

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet, Rana El Khoury Maroun

When displaying signals on the sensor array or on the cortex surface, we need to convert the amplitude of the signals into colors. The way the values are mapped to colors has a lot of influence on the visual interpretation of the figures. The selection of the appropriate colormap is an important step of the data exploration.

Colormap menus

Brainstorm keeps track of several colormap configurations, one for each type of data: anatomy, EEG, MEG, sources, stat, time, time-frequency, etc. You can go to the Colormaps menu in the main window to see this list.

  • colormap_all.gif

Usually, you will edit the colormaps only through the popup menus of specific figures.

  • Open a topography view for the standard average (right-click > MEG > 2D Sensor cap).

  • Right-click on the figure: you will only see the menu "Colormap: MEG recordings".

  • If you modify a colormap, the changes will be applied to all the figures, saved in your user preferences and available the next time you start Brainstorm.

Standard color arrays

A colormap is an array of colors that are indexed and then mapped to values. It is represented by an [Nx3] matrix, where N is the number of colors available in it. Each color is coded with three values corresponding to its relative levels of red, green and blue. In Matlab, the colors are coded between 0 and 1. For an example, type "jet" in the Matlab command window: you will get the default values for the "jet" colormap.
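
You can play with this representation directly in Matlab, outside of Brainstorm; a quick illustration:

    cmap = jet(256);                    % [256 x 3] matrix, one RGB triplet per row, values in [0,1]
    disp(size(cmap));                   % 256 3
    disp(cmap(1,:));                    % first color of the "jet" colormap
    figure; colormap(cmap); colorbar;   % display the color array as a colorbar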

We offer two ways of creating this array of colors in Brainstorm: you can use standard color arrays (modulated in contrast and brightness) or define your own.

Colormap name: The standard colormaps are referred to with names (bone, gray, jet, rwb, etc).
Pick a different color set in the menu to update all the figures with similar data types.

  • colormap_name.gif

Brightness: Moves the center of the color array up and down. Example values: -80, 0, +80.
The term "brightness" is not well suited to rwb, jet or hsv. It makes more sense for colormaps with only one tint that varies in intensity, such as the gray colormap. We use it here for lack of a better word.

  • colormap_brightness.gif

Contrast: Changes the distance between the first and last colors. Example values: -80,0,+80.

  • colormap_contrast.gif

You can modify these values by clicking directly on the color bar. Hold the mouse button, then:

  • Move up/down to change the brightness,

  • Move left/right to change the contrast.

Advanced

Custom color arrays

To edit your own list of colors, use the menu "New..." at the end of the list of standard colormaps.

  • Open a 2D sensor cap view for the MEG sensors for the standard average (Run#01).
    Right-click on the figure > Colormap: MEG recordings > Colormap > New.

  • Enter the name of the new colormap and the number of colors it will contain.

    colormap_new.gif colormap_edit1.gif

  • Each color in this color array is represented by a little square. The arrows in the second row can be selected and deleted (delete key) or edited (double-click). They represent the key colors between which Matlab interpolates the other colors. Click on the second row to add more key colors.

    colormap_edit2.gif

  • Once you are satisfied with your new colormap, click on [Ok].
    It will update the figure. A new entry is now available in the list of colormap names.

    colormap_edit3.gif

  • To delete the custom colormap currently selected, use the menu "Delete".

Color mapping

After defining the colors, we need to define how to map them to the values. The information necessary to do this color mapping is the values corresponding to the first and last colors. The color indices are scaled linearly between these two extremes.

Absolute values: Display the absolute values of the recordings, instead of the original values. This has the effect of constraining the color mapping to positive values only. It is not very useful for exploring the recordings: in EEG and MEG, the sign of the values is very important.

  • colormap_absolute.gif

Maximum: Method used to estimate the minimum and maximum values of the colorbar.

  • Global: The bounds of the colormap are set to the extrema values found in the entire file.
    Example: if you use the rwb colormap and the min and max values are [-200fT, +200fT], the colors will be mapped in the following way: -200fT is blue, +200fT is red, 0fT is white. The mapping is identical for all the time samples. If you select this option at t=0ms, the 2D topography figure will turn almost white because the average values are low before the brain response.

  • Local: Uses the local min and max values at the current time frame AND for each figure, instead of the global min and max. Example: At t=0ms, the extrema values are roughly [-30fT, +30fT], so the colors are mapped so that -30fT=blue and +30fT=red.

  • Custom: You can manually set the min/max bounds of the colorbar. They do not have to be symmetrical around zero. If you set the values to [-40, +20] fT, the white colors would correspond to values around -10fT, and values around 0fT would be displayed in pale red.

    colormap_max.gif

  • You can usually keep the option Local when looking at recordings: it is easier to read.
    Keep in mind, however, that bright colors do not necessarily indicate strong effects: it is always a matter of colormap configuration.

Range: Use symmetrical or non-symmetrical colormaps.

  • [-max, max]: Symmetrical colorbar around the absolute value of the maximum.
    Example: at t=170ms, the range is [-220fT, +90fT], the color mapping used is [-220fT, +220fT].

  • [min, max]: Uses the real min and max. Useful for displaying values that are not centered on zero. Example: at t=170ms, the mapping used is [-220fT, +90fT]; white does not correspond to zero.

  • This option is ignored when the option "Maximum: Custom" is selected.

    colormap_range.gif

Advanced

Colormap management

Remember that when you change any of the options above, it is saved in your user preferences. If you close Brainstorm and start it again, the colormap configuration stays the same.

To reset the colormap to its default values:

  • Double-click on the color bar, or

  • Use the menu Restore defaults.

Two additional menus can help you manipulate the colormaps:

  • Display colorbar: Toggles the display of the color bar in the figures. Hiding it is useful for contact sheets and movies.

  • Permanent menu: Open a window that displays this colormap sub-menu, for faster access.

    colormap_contact.gif colormap_permanent.gif

Advanced

New default colormaps

Recently, the default colormaps of Brainstorm were changed because the previous ones lacked important attributes of a good colormap: they did not have linear lightness and were not perceptually uniform. This can either hide details in the visualization or create features that do not exist in the underlying data, which distorts the perceived patterns. For that reason, new default colormaps were added to better represent the underlying data.

Here are the new colormaps created with their chosen names:

new_cmap_names.png

Three other colormaps were added: viridis and magma (taken from the mpl colormaps) as well as a variation of viridis (viridis2). These colormaps were created using the viscm tool, which allows the design of colormaps with linear changes in lightness and hue.

This paper presents the work done in more detail: colormap_optimization.pdf

JET Alternative

A new colormap created by Google, the Turbo colormap, was recently added. It is presented as an improved, perceptually more linear rainbow colormap that can be used as an alternative to the popular JET colormap.

turbo_vs_jet.png

More information can be found on the following Google Blog post: https://ai.googleblog.com/2019/08/turbo-improved-rainbow-colormap-for.html



Tutorial 19: Clusters of sensors

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

In EEG, most of the analysis can be done directly on the electrode values. This tutorial explains how to group sensors in a cluster and overlay channels from different conditions in the same graph.

In MEG, we tend to avoid working at the sensor level because the signals are more ambiguous: we work mostly in source space, which is the topic of the next tutorials. If you are planning to work only with MEG recordings, you may skip this tutorial.

Cluster tab

The Cluster tab is not shown by default in the Brainstorm interface.

  • Click on the [+] button next to the Scout tab > Add: Cluster.
    If the list of tabs is too busy, you can close some of them in the same way.

    cluster_tab.gif

  • The Cluster tab is similar to the Scout tab, but much simpler. It contains a toolbar to create and edit clusters, a list of existing clusters and a few display options at the bottom.

    cluster_panel.gif

  • [NEW SEL]: Create a cluster containing the channels that are currently selected in the interface.

  • [NEW IND]: Create a cluster with channels selected by index, name or type.

  • [Display clusters time series]: Shows the time series of the selected clusters for all the files that are currently open in the interface.

  • Edit: Menus to modify the selected clusters.

Creating clusters

A cluster is a group of one or more channels. There are two ways to create a new one.

  • Display the MEG recordings for the standard average of Run#01 (double-click on the file).
    Add a 2D topography (Ctrl+T), display the sensors and select a group of them (right-click+move).

    cluster_select.gif

  • In the Cluster tab, click on [NEW SEL]. It creates a new cluster "c1".
    Double-click on it to rename it to "cmax" (or menu Edit > Rename).

  • In the Cluster tab, click on [NEW IND]. Enter the name of the sensor "MLT12". Rename it to "cmin".

    cluster_ind.gif

  • Selection: When you change the selected cluster in the list, it selects the corresponding sensors in all the figures. The number of channels involved in the selection is displayed at the bottom of the tab. You can select multiple clusters at once by holding the Ctrl/Shift/Command key while clicking.

  • Saving: The clusters are saved in the channel file. When you close all the figures related to a dataset, you are prompted to save the modifications in the channel file. When you open the dataset again, the Cluster tab opens automatically to show the saved clusters.

    cluster_save.gif

Displaying clusters

We now have two clusters available in the list. The second one contains only one channel: displaying it simply displays that channel's time series. The first one contains multiple channels, so we need to apply a function to group them into a single signal.

  • The cluster function used to group multiple signals into one is shown on the right of the cluster list. By default it uses a simple average ("Mean"). You can edit this cluster function with the menu Edit > Set cluster function.

  • Select one cluster and click on the [Display cluster] button (third from the left in the toolbar).

    cluster_view1.gif

  • Select the two clusters at once and click on [Display cluster] again.

    cluster_view2.gif

  • To have the two signals in the same graph, select the option "Overlay: Cluster". Click on [Display].

    cluster_view3.gif

  • In the database explorer: Double-click on the deviant average of the same run.
    Click on [Display cluster] again: now it displays the cluster values for both conditions.

    cluster_view4.gif

  • Unselect "Overlay: Cluster" and select "Overlay: Files". If both conditions are selected in the database explorer and both clusters are selected in the Cluster tab, it will group the signals in a different way: one graph per cluster, each one showing the two condition averages.

    cluster_view5.gif

From the database explorer

Once the clusters are defined, you can apply them to any number of files in the database in one click.

  • Close all the figures with the [X] button in the top-right corner of the Brainstorm figure.
  • In the Cluster tab, select only one of the two clusters, select Overlay: Files.

  • In the database explorer, select all the averages for both runs. Right-click on one of the selected files > Clusters time series. On this graph, you can verify that the selected cluster behaves in the same way in the two acquisition runs.

    cluster_tree1.gif

    cluster_tree2.gif

Processes

Extract > Clusters time series: The process computes the cluster signals and saves them in a new file in the database. Select the files of interest in Process1, and select the option "Concatenate" if you want the cluster signals from all the input files to be saved in the same matrix file. If the option is not selected, the process creates one new matrix file for each input file.

cluster_process.gif
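
For reference, with the default "Mean" cluster function, the signal saved for a cluster is simply the average of its channels. A minimal Matlab sketch, for illustration only (F and iChannels are hypothetical variables):

  % F:         [nChannels x nTime] recordings loaded from a data file
  % iChannels: indices of the channels belonging to the cluster
  clusterSignal = mean(F(iChannels, :), 1);   % [1 x nTime] cluster time series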

Import > Channel file > Import clusters of channels: This process automates loading the same clusters into multiple datasets. In Process1, select at least one data file in each folder in which you want to import the clusters (the clusters are only added once per channel file, even if many data files from the same folder are in input), then select the cluster file to import. This cluster file can be generated from the Cluster tab: select one or more clusters and click on the [Save] button.

cluster_import.gif

Scripting

The function db_set_clusters.m allows adding clusters to multiple channel files programmatically. Read the header of the function for help with the parameters. Here are some examples:

Add the clusters defined in array of structures 'sClusters' to the channel file of folder pointed by iStudy:

db_set_clusters([], iStudy, sClusters);

Copy all the clusters from a channel file to all the other channel files in the same subject:

db_set_clusters(ChannelFile, 'AllConditions');

Remove all the clusters from all the channel files in the database.

db_set_clusters([], 'AllSubjects', []);

Get a structure template for sCluster:

sCluster = db_template('cluster');

Copy the clusters selected in the Clusters tab of the Brainstorm figure to specific studies:

panel_cluster('CopyClusters', iStudies);
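
For example, here is a minimal sketch that builds a cluster from scratch and adds it to the channel file of folder iStudy (the sensor names and structure fields below are given for illustration only and may need to be adapted to your data):

  sCluster = db_template('cluster');                % get an empty cluster structure
  sCluster.Label    = 'cmax';                       % name displayed in the Cluster tab
  sCluster.Sensors  = {'MLT11', 'MLT12', 'MLT13'};  % names of the channels included in the cluster
  sCluster.Function = 'Mean';                       % cluster function used to group the signals
  db_set_clusters([], iStudy, sCluster);            % add it to the channel file of folder iStudy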


Tutorial 20: Head modeling

Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Richard Leahy, Sylvain Baillet

The following tutorials describe how cerebral currents can be estimated from the MEG/EEG recordings we have processed so far. To achieve this, we need to consider two distinct modeling problems: the modeling of the electromagnetic properties of the head and of the sensor array (a.k.a. head model or forward model), and the estimation of the brain sources which produced the data, according to the head model in question. That second step is known as source modeling or solving an inverse problem. It requires that forward modeling of head tissues and sensor characteristics is completed first. This tutorial explains how to compute a head model for the participant in the auditory oddball experiment.

Advanced

Why estimate sources?

Reconstructing the activity of the brain from MEG or EEG recordings involves several sophisticated steps. Although Brainstorm simplifies the procedures, it is important to decide whether source modeling is essential to answer the neuroscience question which brought you to collect data in the first place.

If one of your primary objectives is to identify and map the regions of the brain involved in a specific stimulus response or behavior, source estimation can help address this aspect. Empirical interpretations of sensor topographies can inform where brain generators might be located: which hemisphere, what broad aspect of the anatomy (e.g., right vs. left hemisphere, frontal vs. posterior regions). Source estimation improves anatomical resolution beyond what can be obtained from the interpretation of sensor patterns alone. The spatial resolution of MEG and EEG depends on source depth, the principal orientation of the neural current flow, and overall SNR: still, sub-centimeter localization accuracy can be expected in ideal conditions, especially when contrasting source maps between conditions in the same participant. As for other imaging modalities, the spatial resolution of group-level effects (i.e. after averaging across multiple participants) is limited by the accuracy of anatomical registration of individual brain structures, which are very variable between participants, and by intersubject variations in functional specialization with respect to cortical anatomy.

Source mapping is a form of spatial deconvolution of sensor data. In EEG in particular, scalp topographies are very smooth and it is common that contributions from distant brain regions overlap over large clusters of electrodes. Moving to the source space can help discriminate between contributing brain regions.

In MEG, source maps can be a great asset to alleviate some issues that are specific to the modality. Indeed, in MEG, and contrary to EEG, the head of the participant is not fixed with respect to the sensor locations. Hence sensor topographies depend on the position of the subject's head inside the MEG sensor array. Therefore, between two acquisition runs, or between subjects with different head shapes, sizes and positions under the helmet, the same MEG sensors may pick up signals from different parts of the brain. This problem does not exist in EEG, where electrodes are attached to the head and arranged according to standard positions.

Another important point to consider when interpreting MEG sensor maps, and that can be solved by working in the MEG source space instead, is that MEG manufacturers use different types of sensor technology (e.g., magnetometers vs. gradiometers; axial vs. tangential gradiometers, etc.), yielding different physical measures. This is not an issue with EEG, which has essentially one sensor type (electrodes, dry or active, all measuring Volts).

Nevertheless, if your neuroscience question can be answered by measuring signal latencies over broad regions, or other aspects which do not depend crucially on anatomical localization (such as global signal properties integrated over all sensors or clusters of sensors), source modeling is not required. Sorting out this question will influence the time and computational resources required for data analysis (source analysis multiplies the needs in terms of disk storage, RAM and CPU performance).

Advanced

The origins of MEG/EEG signals

To better understand how forward and inverse modeling work, we need to have a basic understanding of the physiological origins of MEG/EEG signals. Note that, as always with modeling, we need to deal with various degrees of approximation.

Overall, it is assumed that most (but not all) MEG/EEG signals are generated by the postsynaptic activity of ensembles of cortical pyramidal neurons of the cerebral cortex. The reason lies essentially in the morphology and mass effect of these cells, which present elongated shapes and are grouped in large assemblies oriented in a similar manner, approximately normal to the cortex. Mass effects of close-to-simultaneous changes in post-synaptic potentials across the cell group add up in time and space. These effects can conveniently be modeled at a mesoscopic spatial scale with electric dipoles distributed along the cortical mantle (green arrows in the figure below). Note that there is growing evidence that MEG and EEG are also sensitive to deeper cortical and subcortical structures, including brain nuclei and the cerebellum. Brainstorm features advanced models of these structures, as an option for your analysis. The emphasis in this tutorial is on cortical source models, for simplicity.

The primary and volume currents generated by current dipoles create differences in electrical potentials and magnetic fields that can be detected outside the head. They can be measured with electrodes placed on the skin (EEG, with respect to a reference) or very sensitive magnetic detectors (MEG).

origins.gif

Matti Hamalainen, 2007

Advanced

Source models

Dipole fitting vs distributed models

MEG/EEG source estimation consists in modeling brain activity with current dipoles. A current dipole is a convenient model equivalent to the net post-synaptic electrophysiological activity of local assemblies of neurons. Two main approaches have been explored for MEG/EEG source estimation: dipole fitting methods, where the position and amplitude of one to a few equivalent current dipoles (ECD) are estimated over relatively short time windows; and distributed models, where the location (and typically, the orientation) of a large number of dipoles is fixed, the dipoles sampling a spatial grid covering the entire brain volume or the cortical surface, which requires estimating the amplitude of a vast number of dipoles on a fixed grid at each time point.

Equivalent dipole fitting approaches are quite straightforward and can be adequate when the number of brain regions expected to be active is small (ideally only one). They are therefore most adequate for responses at early post-stimulus latencies. They cannot generalize to capture complex dynamics over extended periods of time (epochs), and the associated estimation techniques are quite sensitive to initial conditions (how many dipoles to fit? where does the search start? etc.). Our strategy in Brainstorm is to promote distributed source models, which are less user-dependent, can generalize to all experimental conditions, and yield time-resolved image volumes that can be processed in many different, powerful ways (group statistics, spatial segmentation, use of regions of interest, correspondence with fMRI, etc.)

Source constraints

When opting for distributed source models, the positions and orientations of the elementary dipoles that will define the "voxel" grid of the source images produced need to be defined. This set of dipoles is called the source space. By default, Brainstorm constrains the source space to the cortex, where MEG/EEG sensitivity and signal-to-noise ratio are maximal. Note however that more complete models that include subcortical structures and the cerebellum are available in Brainstorm. Therefore, one decision you need to make before proceeding with source imaging is whether a more complete source space is required to answer your neuroscience question.

For this tutorial, we use the simple approach where current dipoles are automatically assigned to each of the vertices of the cortical surface (see the nodes in the grey mesh in the leftmost image below). When importing the anatomy of the subject, we downsampled the cortex surface to 15,000 vertices.

This default number of 15,000 vertices is empirical. In our experience, it balances adequate geometrical sampling of the cortical folds with the volume of data to be analyzed. Using a smaller number of vertices (sources) oversimplifies the shape of the brain; using more vertices yields considerably larger data volumes without necessarily adding spatial resolution, and may lead to practical hurdles (CPU and memory issues).

Orientation constraints

After defining the locations of the dipoles, we also need to define their orientations. Brainstorm features two main options: unconstrained dipole orientations or orientations constrained perpendicularly with respect to the cortical surface.

In the unconstrained case, three orthogonal dipoles are assigned to each vertex of the cortex surface. This triplet can account mathematically for local currents flowing in arbitrary directions. The total number of elementary sources used in that case amounts to 45,000 dipoles (3 orientations x 15,000 vertices).

In the constrained case, one dipole is assigned to each vertex, with its orientation perpendicular to the cortical surface. The benefit of this option is that it restricts the number of dipoles used to 15,000 (one per vertex). Results are also easier to process and visualize. However, there are some instances where such a constraint is too strong and may bias source estimation, for instance when the individual anatomy is not available for the participant.

In the Brainstorm workflow, this orientation constraint is offered as an option of the inverse model and will be discussed in the following tutorial sections. In the present tutorial, we compute the forward model corresponding to a grid of 15,000 cortical sources without orientation constraints (hence a total of 45,000 dipoles). Note that the orientation constraint can be applied later in the workflow: we do not have to make such a hard decision (constrained vs. unconstrained source orientation) at this stage.

Whole-brain model

The constraint of restricting source locations to the cortical surface can be seen as too restrictive in some cases, especially if subcortical areas and the cerebellum are regions of interest to the study. Brainstorm offers the possibility of using the entire brain volume as the source space (see the green dots below: they represent dipole locations sampling the entire brain volume). One minor drawback of such a model is that the results produced are impractical to review. We encourage users interested in more sophisticated approaches, adding non-cortical structures to their MEG/EEG model, to consult the sections concerning Volume and Mixed Head Volumes in the advanced tutorials about source modeling.

source_space.gif source_volume.gif

Advanced

Forward model

We now need to obtain a model that explains how neural electric currents (the source space) produce magnetic fields and differences in electrical potentials at external sensors (the sensor space), given the different head tissues (essentially white and grey matter, cerebrospinal fluid (CSF), skull bone and skin).

  • The process of modeling how data values can be obtained outside of the head with MEG/EEG from electrical current dipoles in the brain is called forward modeling or solving a forward problem.

  • In Brainstorm, we call the outcome of this modeling step a "head model", a.k.a. forward model, leadfield matrix or gain matrix in the MEG/EEG literature.

  • In this tutorial we will use the default source space: a lower-resolution cortical surface representation, with 15,000 vertices, serving as location support to 45,000 dipoles (see above: models with unconstrained orientation). Note that we use the terms dipole and source interchangeably.

  • We will obtain a matrix [Nsensors x Nsources] that relates the activity of the 45,000 sources to the sensor data collected during the experiment (as sketched after the figure below).

    forward_inverse.gif
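
In other words, the head model states that the recordings can be written as a linear combination of the source amplitudes. A minimal sketch of this relation, for illustration only (the variable names are hypothetical):

  % Gain: [Nsensors x Nsources] head model (leadfield / gain) matrix
  % s:    [Nsources x Ntime]    amplitudes of the elementary dipoles
  % n:    [Nsensors x Ntime]    measurement noise
  F = Gain * s + n;             % [Nsensors x Ntime] predicted MEG/EEG recordings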

Available methods for MEG forward modeling

  • Single sphere: The head geometry is simplified as a single sphere, with homogeneous electromagnetic properties.

  • Overlapping spheres: Refines the previous model by fitting one local sphere under each sensor.

  • OpenMEEG BEM: Symmetric Boundary Element Method from the open-source software OpenMEEG. Described in an advanced tutorial: BEM head model.

  • DUNEuro FEM: Finite Element Method from the open-source software DUNEuro. Described in an advanced tutorial: FEM head model

  • MEG: Overlapping spheres.
    Magnetic fields are less sensitive to the heterogeneity of tissues in the brain, skull and scalp than the scalp potentials measured in EEG. We have found that this locally fitted spheres approach (one sphere per sensor) achieves reasonable accuracy relative to more complex BEM-based methods: [Leahy 1998], [Huang 1999].

  • EEG: OpenMEEG BEM.
    Since EEG measures differences in electric potentials on the scalp surface, it depends on the effects of volume conduction (or secondary currents) to produce the signals we measure. As a result, EEG is very sensitive to variations in conductivity, not only in the tissues near the brain's current sources but also in the skull and scalp. Some tissues are very conductive (brain, CSF, skin), others much less (skull). A realistic head model is advised to account for their properties correctly. When computing a BEM model is not an option, for instance if OpenMEEG crashes for unknown reasons, then Berg's three-layer sphere model can be an acceptable alternative.

  • sEEG/ECoG: The OpenMEEG BEM option is the only model available for this data modality.

Computation

The forward models depend on the anatomy of the subject and characteristics of EEG/MEG sensors: the related contextual menus are accessible by right-clicking over channel files in the Brainstorm data tree.

  • In the imported Run#01, right-click on the channel file or the folder > Compute head model.
    Keep the default options selected: Source space=Cortex, Forward model=Overlapping spheres.

    compute_popup.gif compute_options.gif

  • A new file will then appear in the database. Head model files are saved in the same folder as the channel file.
    This file is required for EEG/MEG source estimation: this next step will be described in detail in the following tutorial sections.

  • Right-click on the head model file > Check spheres. This window shows the spheres that were estimated to compute the head model. You can visualize and verify their locations by following the indications written in green at the bottom of the window: use the left/right arrows to browse through the sensors. At each step, the current sensor marker is displayed in red, and the sphere shown is the local estimate of the shape of the inner skull immediately below that sensor.

    headmodel_popup.gif headmodel_spheres.gif

  • Although, in principle, the overlapping-spheres method requires the inner skull surface, this surface is not always available for every participant. If it is not available, a pseudo inner skull surface is estimated by Brainstorm as a dilated version of the cortex envelope.

Repeat the same operation for the other run. We now have two acquisition runs with two different relative positions of the head and the sensors, so we need to compute two different head models (one per head/sensor configuration).

  • In the imported Run#02, right-click on the channel file > Compute head model.

    run02_popup.gif run02_files.gif

Advanced

Database explorer

This section contains additional considerations about the management of the head model files.

  • If multiple head models were computed in the same folder (e.g., after experimenting with different forward models), one is displayed in green and the others in black. The model in green is selected as the default head model: it will be used for all the following computation steps (e.g., source estimation). To change the default to another available head model, double-click on another head model file (or right-click on that file > Set as default head model).

  • You can use the database explorer to batch the computation of head models (across runs, subjects, etc.). The "Compute head model" item is available in contextual menus at multiple levels of the database explorer. The same type of forward model is computed recursively for all the folders contained in the selected node(s) of the database explorer.

Advanced

On the hard drive

Right-click on any head model entry > File > View file contents:

  • headmodel_contents.gif

Structure of the head model files: headmodel_*.mat

  • MEGMethod: Type of forward model used for MEG sensors ('os_meg', 'meg_sphere', 'openmeeg' or empty).

  • EEGMethod: Type of forward model used for EEG sensors ('eeg_3sphereberg', 'openmeeg' or empty).

  • ECOGMethod: Type of forward model used for ECoG sensors ('openmeeg' or empty).

  • SEEGMethod: Type of forward model used for sEEG sensors ('openmeeg' or empty).

  • Gain: Leadfield matrix, [Nsensors x Nsources] (in practice, equivalent to [Nsensors x 3*Nvertices]).

  • Comment: String displayed in the database explorer to represent this file.

  • HeadModelType: Type of source space used for this head model ('surface', 'volume', 'mixed').

  • GridLoc: [Nvertices x 3], (x,y,z) positions of the grid of source points. In the case of a surface head model, it corresponds to a copy of the 'Vertices' matrix from the cortex surface file.

  • GridOrient: [Nvertices x 3], directions of the normal to the surface for each vertex point (copy of the 'VertNormals' matrix of the cortex surface). Empty in the case of a volume head model.

  • GridAtlas: In the case of mixed head models, contains a copy of the "Source model options" atlas structure that was used for creating the model.

  • SurfaceFile: Relative path to the cortex surface file related with this head model.

  • Param: In case of a surface head model, it contains a description of the sphere that was estimated for each sensor (Center/Radius).

  • History: Date and brief description of the method used for computing the head model.

Gain matrix

The Gain matrix is the most important piece of information in the structure. It stores the leadfields for 3 orthogonal orientations (x,y,z) at each grid point (p1, p2, etc). The columns of the matrix are ordered as: [p1_x, p1_y, p1_z, p2_x, p2_y, p2_z, ...], each column containing the leadfield of one elementary dipole for all the sensors. For the tutorial introduction dataset, with 15002 sources, the gain matrix has 45006 columns.
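
As an illustration, this column ordering can be unfolded into a [Nsensors x 3 x Nsources] array with plain Matlab commands (assuming the HeadModel structure was exported to the workspace, as described below):

  [nSensors, nCols] = size(HeadModel.Gain);
  nSources = nCols / 3;                                 % 45006 / 3 = 15002 grid points
  G = reshape(HeadModel.Gain, nSensors, 3, nSources);   % G(:,1,k), G(:,2,k), G(:,3,k) = x,y,z leadfields of source k
  Gk = squeeze(G(:,:,1));                               % [nSensors x 3] leadfield of the first grid point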

To convert this unconstrained leadfield matrix to that of an orientation-constrained model, where the orientation of each dipole is fixed and normal to the cortex surface:

  • Export the head model file to the HeadModel structure: Right-click > File > Export to Matlab.

  • At the Matlab prompt:
    > Gain_constrained = bst_gain_orient(HeadModel.Gain, HeadModel.GridOrient);

  • The dimension of the output matrix is three times smaller (now only one source orientation at each location): [Nsensors x Nvertices]

Useful functions

  • in_bst_headmodel(HeadModelFile, ApplyOrient, FieldsList): Read contents of the head model file.

  • bst_gain_orient(Gain, GridOrient): Apply orientation constraints.

Additional documentation

Articles

Tutorials and forum discussions



Tutorial 21: Noise and data covariance matrices

Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Richard Leahy, Sylvain Baillet

Modeling and measuring certain characteristics of the noise contaminating the data is beneficial to source estimation. For instance, minimum norm estimators can integrate second-order sample statistics of sensor noise (summarized into a noise covariance matrix, see below). Beamformers further require similar sample statistics for the data portion of interest (summarized into a data covariance matrix). The first section of this tutorial explains how to obtain a noise covariance estimate from MEG empty-room recordings.

Noise covariance

Instrumental noise ("sensor noise") can be readily captured in MEG using two or more minutes of empty room measurements. We encourage the use of noise recordings collected the same day as the subject's recordings (if possible just before the session) and pre-processed in the same manner (with same sampling rate and same frequency filters applied) as the participant data. In this study we have already prepared a 2-min segment of noise recordings that we will use to estimate noise covariance sample statistics.

Right-click on the entry for noise recordings > Noise covariance. Available menus:

  • Compute from recordings: Use the selected recordings to estimate noise covariance statistics.

  • No noise modeling: Use an identity matrix as the noise covariance. This option is useful when no noise recording is available (e.g., ongoing EEG recordings that do not include any baseline segment of no interest).

  • Import from file: Use noise covariance statistics available from another source (e.g., obtained using the MNE software.)

  • Import from Matlab: Import any [Nchannels x Nchannels] matrix as noise covariance matrix from the Matlab workspace.

    noisecov_popup.gif

Select the menu Noise covariance > Compute from recordings. Available options:

  • Files: The top part of this window shows a summary of the files that have been selected to estimate the noise: 1 file of 120s at 600Hz, i.e. 72,000 time samples in total. We can also choose to use only a portion of this file, with the option "baseline". Large continuous files are split into blocks of at most 10,000 samples, which are then processed as separate files.

  • Remove DC offset: All the selected blocks of data are baseline corrected and concatenated to form a large matrix "F". There are two options for baseline correction:

    • Block by block: The average value of each channel is subtracted from each block before concatenating files together. Let Fi contain data from block #i:
      F = Concatenate[Fi - mean(Fi)].

    • Global: The average value of each channel is removed after concatenation (same correction for all blocks): F = Concatenate[Fi] - mean(Concatenate[Fi]).

  • The sample noise covariance is computed from F: NoiseCov = F * F' / Nsamples (see the sketch after the figure below).

    noisecov_options.gif
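
A minimal Matlab sketch of the "block by block" option described above, for illustration only (blocks is a hypothetical cell array in which blocks{i} is the [nChannels x nTime] data matrix Fi; in practice, use the interface or bst_noisecov.m):

  F = [];
  for i = 1:length(blocks)
      Fi = blocks{i};
      F = [F, Fi - mean(Fi, 2)];          % remove the DC offset of each channel, block by block
  end
  NoiseCov = (F * F') / size(F, 2);       % [nChannels x nChannels] sample noise covariance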

For this tutorial, keep the default options and click on [OK].

  • One new file is created and appears under the noise data folder, next to the channel file. The corresponding contextual menus are:
  • Display as image: Opens a display of the noise covariance matrix as an indexed image (same as double-click on file). This can be useful to quickly control the quality of the recordings: for instance, noisier channels appear as rows/columns marked in red.

  • Copy to other folders: Copy the file obtained to all the other folders of the same subject, to avoid re-computing the noise covariance for each folder from the same session.

  • Copy to other subjects: Copy this file to all the folders of all the subjects in the protocol.

  • You can also copy a noise covariance file to another folder just like any other file:
    Right-click > File > Copy/Paste, or keyboard shortcuts Ctrl+C/Ctrl+V.

    noisecov_file.gif noisecov_display.gif

Right-click on the noise covariance file > Copy to other folders: we need this file in the two folders where the epochs were imported before we compute the respective source models.

  • noisecov_copy.gif

Advanced

Variations on how to estimate sample noise covariance

The sample noise covariance matrix is straightforward to obtain. Brainstorm's interface features a lot of flexibility in selecting the files and time windows used to calculate the sample statistics. You need a clear understanding of the concept of "noise" to pick the best possible option. We support the notion that the noise covariance accounts for contaminants that remain present in the data after preprocessing is complete. Hence it is not meant to account for eye blinks, heartbeats, muscle artifacts, flat or bad channels and noisy segments: all of these need to be taken care of during the previous preprocessing steps, as shown in previous tutorial sections. The noise covariance entry is meant to account for remaining and stationary instrumental, sensor and environmental noise components. For this reason, the ideal scenario is to use segments of recordings that contain exclusively this type of contaminant, or segments of recordings deemed not to contain any of the brain signals of interest. This section is advanced reading material that can be used as a reference in a different experimental context.

The case of MEG

From empty-room recordings: actual noise measurements (due to the instrument and the environment) using empty-room conditions (no subject under the MEG helmet) are possible in MEG. We recommend you obtain 2 or more minutes of empty-room data right before bringing the subject into the MEG room, or right after the experiment is finished.
You can verify quantitatively how stable and reproducible the estimated noise covariance is (e.g., during the day/week). The MEG system may be located in a "quiet environment", allowing you to re-use the same noise recordings, and therefore the same noise covariance matrix, for all runs and subjects acquired on the same day.

From resting-state segments: When studying evoked responses (aka event-related responses), you may want to collect a few minutes of recordings where the subject is resting, i.e. not performing the task. Using these segments to obtain the noise covariance estimates implies that resting brain activity will be treated as "noise", which we now know is not the case. We therefore encourage MEG users to acquire short empty-room runs instead.

From pre-stimulus baseline: Using concatenated pre-stimulation baseline segments from individual trials is in principle similar to the previous scenario, hence has similar limitations: regions prominently active during pre-stimulation baseline will tend to be attenuated by source analysis.

The case of EEG

It is less straightforward to estimate sensor noise from EEG data, because the electrodes need to be attached to a conductive medium (i.e. the scalp) to produce signals. Therefore only the last two options shown above in the MEG section are possible in EEG:
resting baseline and pre-stimulation baseline.

EEG noise levels depend on the quality of the electrode connection with the skin. It varies considerably between subjects, and during acquisition. Indeed, the conductive gel/paste/solution used for contact tends to dry up, which affects impedance. The electrode cap/locations can also move slightly, depending on how cooperative the participants are. To account for variable noise levels between subjects, it is therefore preferable to use one channel file per subject, because it allows the definition of one noise covariance entry per participant. In some specific cases, if the quality of the recordings varies a lot over time, it is preferable to segment long recordings into shorter runs and obtain a different noise covariance matrix for each of them.

EEG in resting-state conditions

If the brain activity of interest in your experiment is resting activity, resting segments cannot be used to obtain noise statistics. For MEG, use empty-room measurements as explained above. For EEG, there are two possible options: use sensor variance estimates, or avoid estimating empirical noise statistics.
Option #1: Calculate the noise covariance over a long segment of resting recordings, but save only the diagonal elements, i.e. the variance measured at each sensor. This option is available in Brainstorm's advanced options of source computation: select the option "Diagonal noise covariance".
Option #2: Select "No noise modeling" in the contextual menu. This option uses the identity matrix as the noise covariance matrix, and therefore assumes equal, unit variance of noise on every sensor. In inverse modeling, this is equivalent to assuming that the noise is homoscedastic, with the same variance on all sensors. With this latter option, if the data quality is not even across electrodes, a higher noise level on some sensors may be explained by stronger, spurious source activity.

Epilepsy

Interictal spikes are often considered as events of interest in epilepsy studies. In that respect, they can be considered as a form of brain activity evoked by the epileptogenic process, very much like a stimulus-related response. For this reason, one should be cautious about how to define noise conditions from ongoing EEG or MEG recordings in epilepsy studies. Defining a segment of time adjacent to the spike as "background noise" is certainly not justified. In practice, however, one often uses a data segment clear of visible interictal activity. As discussed above, MEG has the additional, preferred option of using empty-room data as a baseline - an option not available with EEG.

The same options as above are available to the epilepsy researcher:
Option #1a: Estimate noise covariance statistics from blocks of recordings away from the peak of any identified interictal spike, and keep only the diagonal (the variance of each sensor time series). This option is available in Brainstorm's advanced options of source computation: select the option "Diagonal noise covariance".
Option #1b: If a large period of time clear of visible interictal events is available, use this latter to estimate the full noise covariance array.
Option #2(MEG): Use empty-room data as baseline.
Option #3: Select "No noise modeling" in the popup menu (identity matrix, unit variance of noise on every sensor).

Advanced

Recommendations

  • Long noise recordings: To obtain an accurate estimation of the noise covariance, a sufficiently large number of time samples is required: at least N*(N+1)/2 samples, where N is the number of sensors. This amounts to about 40s of data for CTF275 recordings at 1000Hz, or about 20s for 128-channel EEG at 500Hz. Remember to always use as much data as possible to estimate the noise covariance statistics.

  • Do not use signal averages: For this reason (the need for many time samples), you should never compute the noise covariance matrix from averaged responses. If you want to import recordings that you have fully pre-processed with another program, we recommend you import the individual trials and use them to estimate the noise covariance. If you can only import the averaged responses in Brainstorm, please be aware that you may underestimate the noise levels and bias the source results.

  • Using one block: If you want to use a segment of "quiet" recordings in a continuous file: right-click on the continuous file > Noise covariance > Compute from recordings, then copy the noise covariance to the other folders. This is the case described in this tutorial.

  • Using single trials: If you want to use the pre-stimulation baseline from single trials, first import the trials in the database, then select all the groups of imported trials at once, right-click on one of them > Noise covariance > Compute from recordings, and finally copy the file to the other folders.

  • Using multiple continuous blocks: This is similar to the single trial case. Import in the database all the blocks you consider as quiet resting baselines, then select all the imported blocks in the database explorer > Noise covariance > Compute from recordings.

Advanced

Data covariance

Beamforming source mapping further requires the estimation of data covariance statistics. The LCMV beamformer uses an estimated data covariance matrix to compute the spatial filters that define the beamformer. It is important that this covariance matrix contains contributions from all sources that contribute to the data over the time window to be analyzed (signal + noise).

We have found, as have others, that it is important to use a relatively long time window for robust beamformer performance. This also helps to avoid numerical instability when inverting the data covariance, as required in computing the beamformer kernel matrix. Our recommendations:

  • Evoked responses: Use a window that spans from the pre-stimulus baseline through the end of the response of interest, with a minimum total duration of 500ms.

  • Spontaneous data: Use the full time window to be analyzed, with a minimum of 30s of data.

The procedure to compute the data covariance is very similar to the noise covariance:

  • For run#01, select all the trials, right-click > Data covariance > Compute from recordings.

    datacov_popup.gif

  • We need to specify two time windows:
    Baseline: Pre-stimulus time, used for DC offset correction (subtracts the baseline mean).
    Data: Time segment of interest (let's use all the time available post-stimulus in this tutorial example); see the sketch after this list.

    datacov_options.gif

  • Repeat the same steps for run#02.

    datacov_files.gif
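
For reference, the data covariance is estimated in the same way as the noise covariance, but over the post-stimulus data window, after removing the baseline mean of each channel. A minimal Matlab sketch for one trial, for illustration only (F, iBaseline and iData are hypothetical variables; in practice, the statistics are accumulated over all the selected trials):

  % F:         [nChannels x nTime] recordings of one trial
  % iBaseline: indices of the pre-stimulus time samples (DC offset correction)
  % iData:     indices of the post-stimulus time samples of interest
  Fc = F - mean(F(:, iBaseline), 2);                           % subtract the baseline mean of each channel
  DataCov = (Fc(:, iData) * Fc(:, iData)') / length(iData);    % [nChannels x nChannels] data covariance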

Advanced

On the hard drive

Right-click on any noise covariance file > File > View file contents:

  • noisecov_contents.gif

Structure of the noise/data covariance files: noisecov_*.mat / ndatacov_*.mat

  • Comment: String displayed in the database explorer to represent this file.

  • NoiseCov: [nChannels x nChannels] noise covariance: F * F' ./ (nSamples-1)
    Unknown values are set to zero.

  • FourthMoment: [nChannels x nChannels] fourth order moments: F.^2 * F'.^2 ./ (nSamples-1)

  • nSamples: [nChannels x nChannels] number of time samples that were used for each pair of sensors. This is not necessarily the same value everywhere: some channels can be declared as bad for only a few trials.

Related functions

  • process_noisecov.m: Function for process "Sources > Compute covariance (noise or data)"

  • bst_noisecov.m: Computes the data/noise covariance matrices.

  • panel_noisecov.m: Options panel.

Advanced

Additional documentation



Tutorial 22: Source estimation

Authors: Francois Tadel, Elizabeth Bock, Rey R Ramirez, John C Mosher, Richard M Leahy, Sylvain Baillet

This section describes how to estimate the brain activity that accounts for the MEG/EEG scalp recordings.

Background

Estimating brain activity at potentially thousands of brain locations (determined by the forward head model) from much fewer sensor locations is a so-called ill-posed inverse problem. One implication is that an infinite number of source activity patterns may explain equivalently well the sensor data. These aspects are explained in detail here and here.

Such ill-posedness is not specific to EEG/MEG. It is quite typical in many other fields of science and engineering.

  • forward_inverse.gif

There is a vast EEG/MEG literature on the question. Brainstorm features three well-documented types of approaches: minimum-norm imaging, beamforming, and dipole modeling.

One common advantage of these approaches is that they are computationally efficient, even on large datasets. The estimates of brain source activity are derived via a linear recombination of the sensor recordings. Brainstorm therefore computes a kernel (a large matrix), conveniently stored in the database, that can be multiplied with the sensor data arrays to obtain source time series at specific brain locations or across the entire brain.
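
A minimal sketch of this linear recombination, for illustration only (the variable names are given as examples):

  % ImagingKernel: [Nsources x Nsensors] inversion kernel saved with the source file
  % F:             [Nsensors x Ntime]    sensor recordings
  SourceTimeSeries = ImagingKernel * F;   % [Nsources x Ntime] source time series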

Below we first describe the options of the minimum-norm imaging approach, then beamformers and dipole modeling. The latter two are technically similar.

Source estimation options

Method

minnorm_options.gif

Minimum-norm (MN) imaging

  • MN imaging estimates the amplitude of brain sources distributed across the brain or constrained to the cortex. The MN solution to the ill-posed EEG/MEG inverse problem is the one that best fits the sensor data with minimum overall amplitude of brain activity.
  • MN imaging requires the specification of noise statistics via a so-called noise covariance matrix. MEG users can best estimate noise statistics from an empty-room recording. If not available, as in EEG, noise covariance can be derived directly from recordings (e.g., from pre-stim segments, if relevant to the scientific question) or be assumed as uniform across sensors as described below.

Beamforming

  • Brainstorm features the well-studied linearly constrained minimum variance (LCMV) beamformer to estimate source activity through a spatial filtering procedure. In lay terms, the beamformer scans through all potential brain locations specified in the head model and estimates their respective contributions to the sensor data, while attenuating the contributions from other brain regions.
  • Beamforming is technically similar to MN imaging, although it is more sensitive to approximations of the head model than MN imaging and requires specific additional inputs. It is also blind to sources at different brain locations whose time series are highly correlated.

  • LCMV beamformers require the specification of the data (noise+signal) covariance matrix, estimated directly from the recordings. LCMV source images can either be used as such, or post-processed, with a dipole model fitted at every time point of their largest peak(s). Please refer to the section further below for more detail about beamformers.

Dipole modeling

  • Brainstorm features a simple localization approach that adjusts the parameters of a single current dipole fitted to the sensor data at each point in time. As mentioned above, an LCMV-type map is first produced, and an equivalent current dipole is fitted at the strongest peak location of that map (more detail here).

Recommended option

  • After exploring many, more sophisticated alternatives, our preference is to use simple and robust imaging approaches such as MN imaging or beamforming over single dipole scanning. However, this decision is often a matter of tradition in different research groups or subfields.
  • Brainstorm enables the convenient comparison of these three approaches on your own data, with all possible sub-options, as detailed below.

MN imaging variants

minnorm_options_measure.gif

By default, MN imaging estimates the amplitude of brain electrical currents at each grid location determined by the forward head model (i.e., either in volume or on the cortical surface). As discussed here, the currents are expressed in A-m. Brainstorm does not normalize by surface area (A/m, i.e., current surface density) or volume (A/m^2, i.e., current volume density). Nonetheless, we refer to this default setting as yielding a current density map.

  • Current density map option: implements an L2-minimum norm estimate of brain current. For consistency, Brainstorm's method is identical to MNE's. Please refer to the MNE manual (Section 6, "The current estimates") for a technical description. Units are scaled to pA-m.

To further compensate for the inhomogenous sensitivity of EEG/MEG with depth and orientation of the current flow, we recommend that the current density maps obtained with this option be further standardized using a z-score transformation with respect to a specific time segment of no interest (e.g., pre-stimulus baseline) or experimental condition (e.g., resting-state).

Alternatively, such standardization can be achieved directly with respect to global noise and data covariance statistics via the dSPM and sLORETA options.

  • dSPM [recommended]: the derivations are those of the dynamical Statistical Parametric Mapping approach by Dale et al. (2000), based on the default MN option above and scaled with respect to the noise covariance. The resulting dSPM maps are a set of z-scores. Units: unitless "z".

  • sLORETA: Standardized LOw Resolution brain Electromagnetic TomogrAphy approach proposed by Pascual-Marqui (2002). The default current density maps are normalized with respect to the theoretical data covariance, found as the sum of the noise covariance and a model of brain signal covariance (see the original paper for details). Note that the sLORETA data covariance is not the empirical data covariance estimated directly from the data, as used in beamformers, but rather the theoretical "statistical" data covariance used in minimum norm estimation. Per Eq. 16 and 17 of Pascual-Marqui's technical details paper, Brainstorm employs sLORETA's concept of the unitless resolution kernel R. Thus Brainstorm's sLORETA implementation outputs values in units of A-m (the units of the minimum-norm source estimate), passed through the square root of the inverted resolution kernel.

Source model: Dipole orientations [TODO]

The current flow of neural activity at each source location is modeled by the orientation of an equivalent current dipole. Brainstorm features the following options to determine this orientation:

minnorm_options_orient.gif
  • Constrained: Normal to cortex: this option is available only when working with "surface" grid locations (such as the cortical surface). Current dipoles are oriented normally to the cortical surface, to model the orientation of macrocolumns of pyramidal neurons perpendicular to the cortex.
    Size of the imaging kernel: [Nvertices x Nchannels].

  • Loose: This option is available only when working with "surface" grid locations (such as the cortical surface). In addition to a dipole normal to the cortex as above, two additional dipoles are added in the tangential plane at each cortical location. Their amplitude is constrained below a fraction of the main normal dipole's. The recommended values are between 0.1 and 0.6. This option relaxes the constraint of strict orientation to the cortex to account for anatomical and physiological uncertainties.
    Size of the imaging kernel: [3*Nvertices x Nchannels].

  • Unconstrained: This option is available for both "surface" and "volume" source grids. There are 3 orthogonal dipoles at each grid location along the x, y, and z ("Cartesian") directions of the coordinate system.
    Size of the imaging kernel: [3*Nvertices x Nchannels].

  • Recommended option: The fully constrained option requires only one dipole per source grid location, instead of three. Therefore, the source and kernel files are smaller, and faster to compute and display. However, when using MRI templates instead of the individual anatomy, loose/unconstrained orientation models may account for some of the model uncertainties (see this section).

Sensors

minnorm_options_sensors.gif

Brainstorm automatically detects the types of sensors (MEG, EEG, etc.) available from the head model selected for source imaging. In the example above, only MEG sensors are available. Select one or all of the available sensor types you are interested in.

However, cross-modality calculations -- the fusion between MEG and EEG data to yield a joint source map -- are very sensitive to covariance calculations and head model approximations. As of Spring 2018, we have also elected NOT to account for cross-covariances between different sensor types. If you wish to obtain a joint, multimodal source model, we recommend that you compute each source map separately and then combine them visually or quantitatively.

Computing sources for an average

We describe here a basic example of how to use Brainstorm to obtain MN imaging maps of event-related averaged sensor data.

  • In Run#01, right-click on the average response for the deviant stim > Compute sources [2018].
    Select the options: Minimum norm imaging, Current density map, Constrained: Normal to cortex.

    minnorm_single.gif
    (The other "Compute sources" menu is for legacy options and implementations of the same imaging options.)

  • The outcome of this process is a new dependent file of the sensor data, indicated with a brain icon. The file label (aka "comment") indicates "MN", which stands for "minimum-norm", and "Constr", which stands for "Constrained: normal orientation".

    minnorm_single_tree.gif

Display: Cortex surface

  • Right-click on this new source file with the brain icon and select > Cortical activations > Display on cortex.

    minnorm_single_popup.gif

  • Double-click on the recordings for the deviant average to display the sensor time series alongside their brain source maps.
    In the filter tab, add a low-pass filter at 40Hz to smooth the time series a bit.

    display_cortex.gif

  • Note how the displays of the sensor and brain map windows are synchronized in time (click anywhere in the white portion of the time series window or use the left/right keyboard arrows to change the time stamp). You can also use all the menus and shortcuts introduced in the anatomy tutorial to use preset displays (0-6 keys).
  • Edit the display properties of the brain map in the Surface tab:

    • Amplitude: applies a minimum threshold to the source amplitude values displayed. The threshold is defined as a percentage of the maximum amplitude value of the current color scale.

    • Min size: removes the smaller clusters in the source map, i.e. those containing fewer sources than the "min size" value entered.

    • Transparency: changes the opacity of the source map on the cortex surface.

Note that these parameters only adjust the visualization of the source maps. They have no effect on the actual source time series.

A few more words about the amplitude threshold parameter:

  • The maximum of the current colorbar depends on the Sources colormap parameters. If "Maximum: Global" is selected, the maximum indicated should be around 150 pA.m in the present dataset. This value represents the maximum of the source map across the entire dataset (across space and time). You can change the colorbar maximum value with the colormap option "Maximum: Custom".

  • On the screen capture below, the threshold value is set to 20%: only sources with amplitudes greater than 0.20*150 = 30 pA.m are shown.
    The threshold value is shown in the colorbar with a horizontal white line.

  • In the current data example, the source map should show strong activations in the primary auditory cortex at around 91ms, bilaterally.

    display_sliders.gif

Recommended post-processing steps

The original source maps may look noisy or patchy. This is due to the strict orientation constraint used in the brain mapping procedure, which emphasizes the sensitivity of brain current strengths to the curvature of the cortex (this effect is more pronounced with MEG than EEG).

Please be cautious not to interpret disconnected colored patches as distinct brain activations without further post processing. The absolute spatial resolution of MEG source mapping is limited (~5-10mm, worse in EEG), although its relative resolution between experimental conditions, hence with post processing, can be much finer (1mm or less, see for instance this retinotopy study).

For now, you may generate smoother versions of the source maps by applying a spatial smoothing process (process "Sources > Spatial smoothing"), or using unconstrained source models, or standardizing source amplitude by applying a z-score transformation with respect to a time period of reference.

Brain maps obtained with dSPM or sLORETA are also standardized and more immune to orientation confounds (see below for more details).

Display: MRI Viewer

  • Right-click on the source file for the deviant average > Cortical activations > Display on MRI (MRI Viewer).

  • See Brainstorm's MRI Viewer detailed tutorial in Sections #2 and #3.

  • This display shows cortical source activity interpolated in the MRI volume. Set the amplitude threshold to 0% to visualize the cortex ribbon onto which source activity is mapped.

    display_mriviewer.gif

  • Visualization parameters:
    • MIP Anatomy: check this box to obtain a "glass brain" view of the structural MRI, across all three spatial dimensions. MIP stands for "Maximum Intensity Projection".

    • MIP Functional: to obtain a glass brain view of the source map.

    • Amplitude threshold: in the Surface tab of the main Brainstorm window, to apply a threshold on the source map, as explained for the 3-D cortex view above.

    • Current time: shows the current time stamp in the data recordings, at the top-right of the main Brainstorm window. You can also use the right/left arrows to move in time, or click anywhere in the white area of the sensor time series window.

  • Overlay options: Right-click on the figure to access additional options.

    • Smooth: Increase this value to smooth the source map further, for visualization purposes. This parameter defines the size of the smoothing kernel, in number of voxels.

    • Interpolation sources>MRI: Parameters used to interpolate the source maps, defined on a source space of 15,000 points located on the cortex surface, onto the MRI volume. This procedure assigns to each of the millions of MRI voxels a value computed from these 15,000 source values. This menu lets you control some parameters of this interpolation.

    • Interpolate values: If enabled, performs a distance-weighted interpolation with 3 nearest neighbors (default for source results). If disabled, performs a single nearest-neighbor lookup instead (useful to represent integer values that should not be interpolated, such as parcel labels).

    • Distance threshold: Maximum distance between a colored voxel and a grid point.

    • Resolution: Spatial resolution of the interpolated source maps. By default, the interpolation computes a source volume downsampled by a factor 3 with respect to the anatomical MRI, in order to make the computation faster. In that case, each colored element has a dimension of 3x3x3 voxels. If a higher resolution is needed (e.g. for displaying SEEG data) this parameter can be lowered to 1 or 2 voxels. Using a 1-voxel resolution requires the computation of 3*3*3=27 times more voxel values than with the default option, leading to much longer computation times.

display_smooth.gif

Display: MRI 3D

  • Right-click on the source file for the deviant average > Cortical activations > Display on MRI (3D).

  • We detailed this feature in the previous tutorial sections about MRI and surface visualization.
    Keep right mouse button pressed and move the mouse to change the MR slices displayed. You can also use the Resect panel of the Surface tab.

display_mri3d.gif

Sign of constrained maps

Constrained source maps consist of signed time series: curves that take both positive and negative values.

You can visualize how the sign of source amplitudes is distributed across the cortex using the cortical display of sources: set the amplitude threshold to 0%, then make sure the colormap shows relative (i.e., both positive and negative) values. For this, right-click over the colorbar > Colormap: Sources > uncheck the "Absolute values" option. At any time, you can double-click on the colorbar to reset the colormap options to default values.

As shown below, a typical brain map will show stripes of positive and negative values, with sign changes around sulcal locations. This is another manifestation of the limited absolute spatial resolution of MEG/EEG source mapping. Sources on opposite sides of a sulcus are oriented in opposite directions by default, so opposite signs in the source map actually mean that the respective neural currents are estimated as flowing in the same direction. We will see later how this sign ambiguity can be managed, for instance by processing rectified source time series (if you wish to map source amplitude effects only). It is crucial to preserve the sign, though, if you are interested in frequency-specific brain activity, such as spectral, time-frequency and connectivity analyses.

  • display_negative.gif

More on sign ambiguity: on opposite walls of a sulcus, brain sources are very close to each other, with opposite orientations. If the true brain activity sits on only one side of a sulcus, as shown below with a green arrow, the lower spatial resolution of the MN-imaging brain map will spread the estimated currents over multiple nearby locations, shown with the red and blue arrows below, which have opposite default directions imposed by the anatomy (dipoles pointing outwards from the cortical surface). The signs of the current flows will be opposite, with positive values (red arrows) on one side of the sulcus and negative values on the other side (blue arrows).

  • minnorm_sketch.gif

For visualization purposes, we are mostly interested at this stage in the magnitude of brain activity, hence the default colormap option "Absolute values" is selected.

Unconstrained orientations

The "loose constraints" and "unconstrained" options for source orientations yield 3 time series per brain location (from three orthogonal elementary sources), which increases the dimensionality of the source maps, hence complexify their interpretation, but produces smoother renderings of current flows. We recommend these options when using an MRI template instead of the individual MRI volume of study participants, or when studying subcortical brain structures. Unconstrained maps can subsequently be converted to "flat" maps, i.e. a single orientation per location, which may be needed for some processes.

Here we will illustrate the fully unconstrained case. The procedure for the loose constraints options is similar.

  • In Run#01, right-click on the trial average of the deviant condition > Compute sources [2018].
    Select the options: Minimum norm imaging, Current density map, Unconstrained.

  • Double-click on the source file produced (brain icon, labelled with "Unconstr") and on the corresponding sensor data file. The two brain maps below represent the same file at 91ms, with different colormap options (absolute values on the left, relative values on the right). Explanations below.

    minnorm_unconstr_all.gif

  • Again, unconstrained/loose source models consist of triplets of dipoles with orthogonal orientations at each cortex location. To display their activity in a visually meaningful manner, we first need to reduce the dimensions of the source data and color-code the outcome. Brainstorm displays the norm of the vectorial sum of the three orientations at each vertex (a minimal sketch of this computation is given after this list):
    S = sqrt(Sx² + Sy² + Sz²)

    minnorm_unconstr_sketch.gif

  • This explains why only positive values are displayed over the source map. Note that the actual values along each orientation (x,y,z) are signed and can be retrieved from the source file for further derivations (e.g., spectral analyses).

  • The unconstrained/loose orientation maps are typically smoother than with the constrained orientation option (but see post-processing steps below) because they are less sensitive to changes in the curvature of the cortex.
  • You may delete the unconstrained source file: we will not use this option further in the introduction tutorials. Please refer to the tutorial EEG and epilepsy for further exploration of this option.
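
As announced above, here is a minimal Matlab sketch of how the displayed amplitude of unconstrained sources can be obtained from the three orientations. This is an illustration only, not Brainstorm code; the toy matrix S and its dimensions are assumptions, and the row ordering (x, y, z per source) follows the file structure described in the "On the hard drive" section below.

    % Illustrative sketch: displayed amplitude of unconstrained sources,
    % computed as the norm of the three orientations at each vertex.
    Nsources = 5;  Ntime = 10;                 % toy dimensions (assumption)
    S  = randn(3*Nsources, Ntime);             % fake source matrix: rows ordered source1.x, .y, .z, source2.x, ...
    Sx = S(1:3:end, :);                        % x component of each source
    Sy = S(2:3:end, :);                        % y component
    Sz = S(3:3:end, :);                        % z component
    Snorm = sqrt(Sx.^2 + Sy.^2 + Sz.^2);       % [Nsources x Ntime], always >= 0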

Standardization of source maps

Standardization procedures can compensate for some of the bias of MN imaging source maps towards superficial source locations (in both MEG and EEG) and radially oriented current flows (in MEG). They also enable a fairer comparison of brain activity between individuals, based on its relative change with respect to a reference data segment.

Reference data segments can be extracted from empty-room recordings (MEG), pre-stimulus baseline or resting state data (MEG and EEG).

dSPM and sLORETA proceed to such standardization within their respective source mapping procedures. Brainstorm also features a Z-score normalization process, which enables a versatile definition of the reference data segment.

Source map standardization does not alter the dynamics of the source time series and only scales their respective amplitude changes. The scaling factors are different at each brain location, hence the resulting source maps will look different from the original MN images, but with the same temporal dynamics.

dSPM, sLORETA (embedded standardization)

  • In Run#01, right-click on the average sensor data of the deviant condition > Compute sources [2018].
    Run the process twice, selecting successively the two normalization options: dSPM, then sLORETA (both with constrained orientations).

    minnorm_normfiles.gif

  • Double-click on all the resulting source files to compare them (screen capture below is at the 143-ms time point):

    minnorm_normalized.gif

  • Current density maps (MN imaging): without further standardization, MN imaging tends to be biased towards the most superficial gyral crowns (see Z-score procedure below).

  • dSPM: compensates for some of this bias. The amplitudes of the brain maps are converted to z-scores. You may adjust the z-scores for the number of trials used to obtain the sample average via the process "Sources > Scale averaged dSPM" (see Averaging normalized values below).

  • sLORETA: the standardization is specific to the method and is therefore not related to z-scores.

Z-score transformation of brain maps

  • The Z-transformation of MN imaging brain maps centres source amplitudes on their mean over a reference segment, and scales them by their standard deviation over the same reference segment (a minimal sketch is given at the end of this section): Z = (Data - μ) / σ

  • The mean and standard deviation parameters are estimated from the data and are specific to each source location. Hence, sources with lower fluctuations over the reference segment (e.g., with less noise, or overall smaller amplitude) will be boosted by the transformation.
  • In Process1: Select the constrained current density map (file MN: MEG(Constr)).

  • Run process "Standardize > Baseline normalization", [-100,-1.7]ms, Z-score transformation
    Do not select "Use absolute values" as for now, we wish to preserve the sign of source amplitudes.

    zscore_process.gif

  • Double-click on the new brain map file (with the "| zscore" tag).

    zscore_cortex.gif

  • We can appreciate that the standardized map is now qualitatively similar to dSPM and sLORETA versions.
  • A map value of 3 means that the source amplitude at this time point is 3 times above the typical amplitude of that same source during the reference segment.
  • Please note that Brainstorm's online filters do not affect the z-transformed source maps. Applying a band-pass filter from the Filter tab will update the display of all the sensor time series and source maps currently open, except the z-transformed maps.

  • If you wish to use a reference segment from another data file (e.g., a resting state recording as reference for the current task recording), use the Process2 process tab by dragging and dropping the reference source map in the Files A list on the left, and the source maps to be standardized in the Files B list on the right.
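
As announced above, here is a minimal Matlab sketch of the Z-score baseline normalization. It is an illustration only, not the actual Brainstorm implementation; the toy matrix S, the time vector and the baseline window are assumptions chosen to match the process options used above.

    % Illustrative sketch of Z = (Data - mu) / sigma, per source location.
    Nsources = 4;  Ntime = 600;                        % toy dimensions (assumption)
    Time = linspace(-0.1, 0.5, Ntime);                 % time vector in seconds
    S    = randn(Nsources, Ntime);                     % fake current density maps
    iBaseline = (Time >= -0.100) & (Time <= -0.0017);  % reference segment [-100,-1.7]ms
    mu    = mean(S(:, iBaseline), 2);                  % mean per source over the baseline
    sigma = std(S(:, iBaseline), 0, 2);                % standard deviation per source
    Z = (S - mu) ./ sigma;     % implicit expansion (R2016b+); use bsxfun on older Matlab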

Recommended options

  • Use non-standardized current density maps (MN imaging) for:

    • Computing imaging kernels shared across single trials.
    • Averaging files across several MEG/EEG runs.
    • Deriving spectral, time-frequency decompositions, phase-amplitude, or connectivity measures on single trials.
  • Use standardized maps (dSPM, sLORETA, Z-score) for:

    • Producing source maps of trial averages (ERP/ERF).
    • Before running a group analysis of source maps.
  • Recommended standardization approach:
    • dSPM and sLORETA are convenient linear measures which are easy to manipulate with Brainstorm.
    • sLORETA maps are spatially smoother but with no direct statistical interpretation without further inference testing.
    • dSPM and Z-score maps are statistically interpretable as excursions with respect to a reference data segment (Z-score), or with respect to noise covariance statistics (dSPM).

Delete your experiments

Before we proceed with the rest of the tutorial, please delete the source maps computed so far.

  • Select all the source files you computed until now and delete them.

    delete_norm.gif

Computing sources for single trials

MN imaging models are linear: their imaging kernels can be pre-computed from the anatomy and sensor locations only, then applied on the fly to long ongoing recordings or to multiple epoched data segments. Here we show how to obtain source maps across multiple single-trial epochs.

  • Right-click on the head model or the folder for Run#01 > Compute sources [2018].
    Select: Minimum norm imaging, Current density map, Constrained: Normal to cortex

    minnorm_shared_popup.gif

  • This procedure produces a shared imaging kernel (white brain icon) that is common to all data files in Run #01. If you right-click on this new file, a warning message pops up that says "Inversion kernel". This file does not contain source maps, only the linear operator (kernel) that converts the sensor data into source maps.

    minnorm_shared_kernel.gif

  • All data files in Run #01 (averages and single epochs) now show a link to a source file (grey brain icon), which is a virtual brain map that is computed on the fly when visualized: Brainstorm multiplies the imaging kernel by the sensor data time series, for greater computational and data storage efficiency. You may now visualize and manipulate these virtual source maps just like the maps we computed above.

    minnorm_links.gif

Averaging in source space

Computing the average

  • First compute the same source model for the second acquisition run.
    In Run#02, right-click on the head model or the folder > Compute sources [2018].
    Select: Minimum norm imaging, Current density map, Constrained: Normal to cortex

    minnorm_run02.gif

  • Now that we have source maps available for all the recordings, we can average them in source space across runs. This allows us to average MEG recordings acquired with different head positions (in this case Run#01 and Run#02 have different channel files, so they could potentially have different head positions, which prevents direct averaging at the sensor level).

  • Thanks to the linearity of the minimum norm model: MN(Average(trials)) = Average(MN(trials))
    The two following approaches are equivalent:

    1. Averaging the sources of all the individual trials across runs,
    2. Averaging the sources for the sensor averages that we already computed for each run.
  • We will use the second option: using the sources for the sensor-level averages. It is a lot faster because it needs to read 4 files (one average file per run and per condition) instead of 456 files (total number of good trials in the two runs).
  • Drag and drop to the Process1 tab the average recordings for Run01 and Run02, then press the [Process sources] button on the left to select the source files instead of the MEG recordings.

  • Run process "Average > Average files":
    Select "By trial group (subject average)" to average together files with similar names.
    Select "Arithmetic average" function.
    Check "Weighted average" to account for the different numbers of trials in both runs.

    average_process.gif

  • The two averages that are produced (one for each condition) are saved in the folder Intra-subject. This is where all the files computed using information from multiple folders within the same subject are saved. If you prefer to have them somewhere else, you can create new folders and move them there, just like you would do with a regular file explorer.

    average_files.gif

  • The file comments say "2 files" because they were computed from two averages each (one from each run), but the number of corresponding trials is correctly updated in the file structure.
    Right-click on each of them > File > View file contents, and check the Leff field:
    78 trials for the deviant condition, 378 trials for the standard condition.

  • Double-click on the source averages to display them (deviant=top, standard=bottom).
    Open the sensor-level averages as a time reference.
    Use the predefined view "Left, Right" for looking at the two sides at the same time (shortcut: "7").

    average_display_mn.gif

Visualization filters

  • Note that opening the source maps can take a long time because of the visualization filters. Check the Filter tab: you may have a filter applied with the option "Filter all results" selected. In the case of averaged source maps, the 15,000 source signals are filtered on the fly when you load a source file. This filtering of the full source files can take a significant amount of time; consider unchecking this option if the display is too slow on your computer.

    filter_sources.gif

  • It was not a problem until now because the source files were saved in the compact form (Kernel*recordings) and the visualization filters were applied on the recordings, then projected to the source space. This fast option is not available anymore with these averages across runs.
  • The visualization filters will not be available anymore after we apply a Z-score normalization. If we want to display Z-score source maps that are smoothed in time, we will have to apply the filters explicitly to the files, with the Process1 tab.

Low-pass filter

  • Clear the Process1 list, then drag and drop the new averages in it.
  • Run process "Pre-process > Band-pass filter": [0,40] Hz

    average_filter.gif

  • Epochs are too short: looking at the filter response, the expected transient duration is at least 78ms, so the first and last 78ms of the average should be discarded after filtering. However, doing this would get rid of almost all the 100ms baseline, which we need for normalization. As mentioned previously, we should have imported longer epochs in order to filter and normalize the averages properly.

    average_filter2.gif

Z-score normalization

  • In Process1, select the two filtered averages.
  • Run process "Standardize > Baseline normalization", baseline=[-100,-1.7]ms, Z-score.

    average_zscore.gif

  • Four new files are accessible in the database: two filtered and two filtered+normalized.

    average_zscore_files.gif

  • Double-click on the source averages to display them (deviant=top, standard=bottom).

    average_display_zscore.gif

  • The Z-score source values at 90ms are higher for the standard condition (~25) than for the deviant condition (~15). We observe this because the two conditions have very different signal-to-noise ratios: the standard condition has about 5x more trials, therefore the standard deviation over the baseline is a lot lower, leading to higher Z-scores.
  • Delete the non-normalized filtered files: we will not use them in the following tutorials.

    average_zscore_files2.gif

Note for beginners

Everything below is advanced documentation, you can skip it for now.




Advanced

Averaging normalized values

Averaging normalized source maps within a single subject requires more attention than averaging current density maps. Since averaging reduces variance, the resulting source maps will have a different statistical distribution than the nominal distribution of the individual maps.

For example, averaging z-score normalized maps will result in maps with variance less than 1. The same holds true for dSPM maps. Assuming independent samples, the variance of an average of N maps drops to 1/N of the original variance. For this reason, it is generally recommended to select the "Weighted average" option in the 'Average files' process when averaging trials or source maps (which computes mean(x) = (N1*mean(x1) + N2*mean(x2) + …) / (N1 + N2 + …)), in order to keep track of the number of samples and the actual variance of averaged statistical maps.
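
As a minimal numerical sketch of the weighted average described above (the "Average files" process handles this automatically and keeps track of Leff), assuming hypothetical trial counts N1 and N2 and fake run-level averages:

    % Weighted average of two run-level averages (illustration only).
    N1 = 40;  N2 = 38;                       % hypothetical trial counts per run
    avg1 = randn(100, 1);                    % fake run-level average (e.g. one source, 100 time samples)
    avg2 = randn(100, 1);
    wavg = (N1*avg1 + N2*avg2) / (N1 + N2);  % weighted average
    Leff = N1 + N2;                          % effective number of averages kept in the file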

dSPM

  • An averaged dSPM map has variance equal to 1/N (and thus standard deviation equal to 1/sqrt(N)). Therefore one could multiply the averaged dSPM map by sqrt(N) in order to maintain variance 1 under the null hypothesis. In previous versions of Brainstorm, this was done automatically when visualizing the files, and when averaging source maps with the option "Adjust normalized source maps for SNR increase". To simplify the interface and make the interpretation of maps more intuitive and consistent with other cases (min-norm, z-scored), we have now dropped this option. Thus averaging dSPM maps now results in maps with variance less than 1, which is consistent with the handling of min-norm, z-scored and sLORETA maps.
  • Adjusting an averaged dSPM file by this sqrt(N) factor is still possible manually, e.g. in order to visualize cortical maps that can be interpreted as Z values: select the average dSPM files in Process1 and run the process "Sources > Scale averaged dSPM". This should be used only for visualization and interpretation; scaled dSPM maps should never be averaged or used for any other statistical analysis. A short numerical illustration of the sqrt(N) factor is given after this list.

    dspm_scale.gif
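
As announced above, here is a short numerical illustration of the sqrt(N) factor. It is a toy simulation only, not what the process does internally: it simply shows that averaging N independent unit-variance maps shrinks the standard deviation to 1/sqrt(N), and that multiplying by sqrt(N) restores it.

    % Variance of an average of N independent z-like maps (illustration only).
    N = 78;                               % e.g. a number of trials (assumption)
    maps = randn(10000, N);               % N independent maps with variance 1
    avgMap = mean(maps, 2);               % averaged map: std ~ 1/sqrt(N)
    fprintf('std of average    : %.3f (expected %.3f)\n', std(avgMap), 1/sqrt(N));
    scaled = sqrt(N) * avgMap;            % rescaled map: std ~ 1
    fprintf('std after sqrt(N) : %.3f (expected 1)\n', std(scaled));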

Z-score

  • The same SNR issues arise while averaging Z-scores: the average of the Z-scores is lower than the Z-score of the average.
  • When computing averages at the subject level: Always avoid averaging Z-score maps.
    Average the current density maps, then normalize.

sLORETA

  • This normalization is not based on the SNR of the signal, but rather on the spatial smoothness of the maps. Managing these maps is similar to managing min-norm maps: the variance of the individual maps is not explicitly modeled or known analytically.
  • As in other cases, sLORETA(Average(trials)) = Average(sLORETA(trials)), and this relationship is guaranteed to hold when averaging uneven numbers of samples, as long as the option "Weighted average" is used.

Advanced

Display: Contact sheets and movies

A good way to represent what is happening in time is to generate contact sheets or videos. Right-click on any figure and go to the menu Snapshot to check out all the possible options. For a nicer result, take some time to adjust the size of the figure, the amplitude threshold and the colormap options (hiding the colorbar can be a good option for contact sheets).

A time stamp is added to the captured figure. The size of the text font is fixed, so if you want it to be readable in the contact sheet, you should make your figure very small before starting the capture. The screen captures below were produced with the colormap "hot".

  • Contact sheet: Right-click on any figure > Snapshot > Time contact sheet: Figure

    contact_options.gif contact_image.gif

  • Movies: Right-click on any figure > Snapshot > Movie (time): All figures

    video_options.gif http://neuroimage.usc.edu/wikidocs/average_sources_unconstr.avi

Advanced

Model evaluation

One way to evaluate the accuracy of the source reconstruction is to simulate recordings using the estimated source maps. This is done simply by multiplying the source time series with the forward model:
MEG_simulated [Nmeg x Ntime] = Forward_model [Nmeg x Nsources] * MN_sources [Nsources x Ntime]
Then you can visually compare the original MEG recordings with the simulated ones. More formally, you can compute an error measure from the residuals (recordings - simulated).
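
The equation and error measure above can be sketched in a few lines of Matlab. This is an illustration with fake matrices only (the real forward model, sources and recordings would come from the database); the dimensions match the ones used in this tutorial.

    % Simulated recordings = forward model * source time series (illustration only).
    Nmeg = 274;  Nsources = 15000;  Ntime = 361;     % dimensions used in this tutorial
    G = randn(Nmeg, Nsources);                       % fake forward model (gain matrix)
    S = randn(Nsources, Ntime);                      % fake minimum-norm source time series
    F = randn(Nmeg, Ntime);                          % fake measured MEG recordings
    F_sim = G * S;                                   % [Nmeg x Ntime] simulated recordings
    residual = F - F_sim;                            % recordings - simulated
    relative_error = norm(residual, 'fro') / norm(F, 'fro');   % one possible error measure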

To simulate MEG recordings from a minimum norm source model, right-click on the source file, then select the menu "Model evaluation > Simulate recordings".

  • model_popup.gif

Open side-by-side the original and simulated MEG recordings for the same condition:

  • model_results.gif

Advanced

Advanced options: Minimum norm

Right-click on the deviant average for Run#01 > Compute sources [2018].
Click on the button [Show details] to bring up all the advanced minimum norm options.

minnorm_details.gif

Depth weighting

Briefly, the use of various depth weightings was far more debated in the 1990s, before the introduction of MNE normalization via dSPM, sLORETA, and other "z-scoring" methods, which mostly cancel the effects of depth weighting (put another way, after normalization min norm results tend to look quite similar whether depth weighting is used or not).

By modifying the source covariance model at each point in the source grid, deeper sources are "boosted" to increase their signal strength relative to the shallower dipoles; otherwise, the resulting MNE current density maps are too dominated by the shallower sources. If using dSPM or sLORETA, little difference should be noticed whether depth weighting is used or not. To understand how to set these parameters, please refer to the MNE manual (options --depth, --weightexp and --weightlimit).

Noise covariance regularization [TODO]

MNE and dipole modeling are best done with an accurate model of the noise covariance, which is generally computed from experimental data. As such, these estimates are themselves prone to errors that arise from relatively too few data points, weak sensors, and strange data dependencies that can cause the eigenspectrum of the covariance matrix to be ill-conditioned (i.e. a large eigenvalue spread or matrix condition number). In Brainstorm, we provide several means to "stabilize" or "regularize" the noise covariance matrix, so that source estimation calculations are more robust to small errors.

  • Regularize noise covariance: The L2 matrix norm is defined as the largest eigenvalue of the eigenspectrum. This option adds to the covariance matrix a diagonal matrix whose entries are a fraction of the matrix norm. The default is 0.1, such that the covariance matrix is stabilized by adding to it an identity matrix scaled to 10% of the largest eigenvalue (see the sketch after this list).

  • Median eigenvalue: The eigenspectrum of MEG data can often span many decades, due to highly colored spatial noise, but this broad spectrum is generally confined to the first several modes only. Thus the L2 norm is many times greater than the majority of the eigenvalues, and it is difficult to prescribe a conventional regularization parameter. Instability in the inverse is dominated by defects found in the smallest eigenvalues. This approach stabilizes the eigenspectrum by replicating the median (middle) eigenvalue for the remainder of the small eigenvalues.

  • Diagonal noise covariance: Deficiencies in the eigenspectrum often arise from numerical inter-dependencies found among the channels, particularly in covariance matrices computed from relatively short sequences of data. One common method of stabilization is to simply take the diagonal of the covariance matrix and zero-out the cross-covariances. Each channel is therefore modeled as independent of the other channels. The eigenspectrum is now simply the (sorted) diagonal values.

  • No covariance regularization: We simply use the noise covariance matrix as computed or provided by the user.

  • Automatic shrinkage: Stabilization method of Ledoit and Wolf (2004), still under testing in the Brainstorm environment. Basically tries to estimate a good tradeoff between no regularization and diagonal regularization, using a "shrinkage" factor. See Brainstorm code "bst_inverse_linear_2018.m" for notes and details.

  • Recommended option: This author (Mosher) votes for the median eigenvalue as being generally effective. The other options are useful for comparing with other software packages that generally employ similar regularization methods. [TODO]
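
As announced above, here is a minimal Matlab sketch of the first two regularization options, applied to a fake covariance matrix. This is an illustration of the descriptions given in this list only; the actual implementation lives in bst_inverse_linear_2018.m.

    % Illustrative sketch of two noise covariance regularization options.
    n = 274;                                     % number of channels (as in this tutorial)
    A = randn(n, 10*n);
    C = (A*A') / size(A,2);                      % fake noise covariance matrix
    C = (C + C')/2;                              % enforce exact symmetry
    % 1) "Regularize noise covariance" with the default factor 0.1:
    C_reg = C + 0.1 * max(eig(C)) * eye(n);      % add 10% of the largest eigenvalue on the diagonal
    % 2) "Median eigenvalue": floor the small eigenvalues at the median eigenvalue
    [V, D] = eig(C);
    d = diag(D);
    d(d < median(d)) = median(d);                % replicate the median for the smallest eigenvalues
    C_med = V * diag(d) * V';                    % rebuild the stabilized covariance matrix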

Regularization parameter [TODO]

In minimum norm estimates, as mentioned above in the comparisons among methods, the data covariance matrix is essentially synthesized by adding the noise covariance matrix to a modeled signal covariance matrix. The signal covariance matrix is generated by passing the source prior through the forward model. The source prior is in turn prescribed by the source model orientation and the depth weighting.

A final regularization parameter, however, determines how much weight the signal model should be given relative to the noise model, i.e. the "signal to noise ratio" (SNR). In Brainstorm, we follow the definition of SNR as first defined in the original MNE software of Hamalainen. The signal covariance matrix is "whitened" by the noise covariance matrix, such that the whitened eigenspectrum has elements in terms of SNR (power). We find the mean of this spectrum, then take the square root to yield the average SNR (amplitude). The default in MNE and in Brainstorm is "3", i.e. the average SNR (power) is 9.

  • Signal-to-noise ratio: Use SNR of 3 as the classic recommendation, as discussed above.

  • RMS source amplitude: An alternative definition of SNR, but still under test and may be dropped. [TODO]
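
The average-SNR definition described above can be sketched as follows. Both covariance matrices are fake, and the choice of whitener (the inverse matrix square root of the noise covariance) is an assumption used here purely for illustration.

    % Rough sketch of the SNR definition used for the regularization parameter.
    n = 274;
    A = randn(n, 10*n);  C_noise  = (A*A') / size(A,2);   % fake noise covariance
    B = randn(n, 10*n);  C_signal = (B*B') / size(B,2);   % fake modeled signal covariance
    W = inv(sqrtm(C_noise));                  % whitening operator (assumption: C_noise^(-1/2))
    M = W * C_signal * W';                    % whitened signal covariance
    eigSNR = eig((M + M')/2);                 % whitened eigenspectrum, in SNR (power) units
    avgSNR = sqrt(mean(eigSNR));              % average SNR (amplitude); the default target is 3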

Output mode

As mentioned above, these methods create a convenient linear imaging kernel that is "tall" in the number of elemental dipoles (one or three per grid point) and "wide" only in the number of sensors. At subsequent visualization time, we efficiently multiply the kernel with the data matrix to compute the min norm images.

For some custom purposes, however, a user may find it convenient to pre-multiply the data matrix and generate the full source estimation matrix. This would only be recommended in small data sets, since the full results can become quite large.

  • Kernel only: Saves only the linear inverse operator, a model that converts sensor values into source values. The size of this matrix is: number of sources (15000) x number of MEG sensors (274). The multiplication with the recordings is done on the fly by Brainstorm in a transparent way. For long recordings or numerous epochs, this form of compact storage helps save a lot of disk space and computation time, and it significantly speeds up the display. Always select this option when possible.

  • Full results: Saves in one big matrix the values of all the sources (15,000) for all the time samples (361). The size in memory of such a matrix is about 45Mb for 600ms of recordings. This is still reasonable, so you may use this option in this case. But if you need to process longer recordings, you may face "Out of memory" errors in Matlab, or fill your hard drive quickly.

  • Full results [15000x361] = Inverse kernel [15000x274] * Recordings [274x361]
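
The relation above can be reproduced with a couple of lines of Matlab, using fake matrices with the dimensions quoted in this section. This also makes the storage trade-off visible: the kernel is [15000 x 274], while the full results matrix is [15000 x 361] per data file.

    % Kernel-only storage vs. full results (illustration only).
    K = randn(15000, 274);          % imaging kernel (what the kernel-only option saves)
    F = randn(274, 361);            % MEG recordings [Nchannels x Ntime]
    FullResults = K * F;            % [15000 x 361] full source time series
    whos K F FullResults            % compare the memory footprint of the two forms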

Advanced options: LCMV beamformer

As mentioned in the introduction above, two other methods can be selected for source estimation, a beamformer and dipole modeling. In this section, we review the options for the beamformer. On top of the noise covariance matrix, you need to estimate a data covariance matrix in order to enable the option "LCMV beamformer" in the interface.

Note that pre-whitening with the noise covariance matrix has not yet been implemented for the LCMV beamformer, and only the data covariance is used in the current version. The noise covariance has no impact on the LCMV beamformer results. However, if there is no noise covariance file available in the database, the "Compute sources" interface returns an error: to work around this limitation, you may select the "No noise modeling (identity matrix)" option in the contextual menu for the noise covariance.

lcmv_options.gif

Measure

The only option "Pseudo Neural Activity Index" (PNAI), is named after the definition of the Neural Activity Index (NAI). We have modified Van Veen’s definition to rely strictly on the data covariance, without need for a separate noise covariance matrix, but the basic premise is the same as in dSPM, sLORETA, and other normalizations. Viewing the resulting "map," in an identical manner to that with MNE, dSPM, and sLORETA described above, reveals possibly multiple sources as peaks in the map. Note that PNAI scores are unitless, because they correspond to standardized statistical fluctuations of neural currents. In this way, PNAI scores are analogous to z-scoring.

Dipole orientations

We recommend you choose "unconstrained" and let the later Dipole scanning process, which finds the best fitting dipole at each time point, optimize the orientation with respect to the data.

Data covariance regularization

Same definitions as in MNE, only applied to the data covariance matrix, rather than the noise covariance matrix. Our recommendation is to use median eigenvalue.

Advanced options: Dipole modeling

Dipole modeling fits a single dipole at each potential source location to produce a dipole scanning map. This map can be viewed as an indication of how well, and where, the dipole fits at each time point. However, we recommend using the subsequent best-dipole fitting routine (dipole scanning) to determine the final location and orientation of the dipole (one per time point). Please note that this function does not fit multiple simultaneous dipoles.

Although not widely recognized, dipole modeling and beamforming are more alike than they are different – when comparing the inverse operators required to compute the dipole scanning map (dipole modeling) and the beamformer output map (LCMV), we see that they differ only in that the former uses an inverse noise covariance matrix while the latter replaces this with the inverse of the data covariance.

dipoles_options.gif

Measure

This field is now missing, but the resulting imaging kernel file is directly analogous to the PNAI result from LCMV beamforming. The user can display this scanning measure just as with the LCMV case, where again the normalization and units are a form of z-scoring.

Dipole orientations

Use "unconstrained source" modeling and let the process "dipole scanning" optimize the orientation of the dipole for every time instance.

Noise covariance regularization

Similarly, use "median eigenvalue".

The tutorial "MEG current phantom (Elekta)" demonstrates dipole modeling of 32 individual dipoles under realistic experimental noise conditions.

Advanced

Combining MEG+EEG for source estimation

Magnetoencephalography and EEG sensor data can be processed jointly to produce combined source estimates. Joint processing presents unique challenges because EEG and MEG use head models that exhibit differing sensitivities to modeling errors, which can in turn lead to inconsistencies between EEG and MEG with respect to the (common) source model. In practice joint processing is relatively rare (Baillet et al., 1999). However, these data are complementary, which means that joint processing can potentially yield insights that cannot be seen with either modality alone.

For example, in the evoked responses in the data set used here, the first peak over the occipital areas is observed in MEG (90 ms) slightly before EEG (110 ms). This delay is too large to be caused by acquisition imprecisions. This indicates that we are not capturing the same brain processes with the two modalities, possibly because the orientation and type of activity in the underlying cortical sources is different.

MEG and EEG have different sensitivities to source orientation and depth. Given the challenges of joint processing, our advice is to first look at the source reconstructions for the two modalities separately before trying to use any type of fusion technique.

Advanced

On the hard drive

Constrained shared kernel

Right-click on a shared inverse file in the database explorer > File > View file contents.

  • kernel_contents.gif

Structure of the source files: results_*.mat

Mandatory fields:

  • ImagingKernel: Linear inverse operator that must be multiplied by the recordings in order to get the full source time series. If defined, ImageGridAmp must be empty.

    • Size for constrained sources [Nsources x Nchannels]
    • Size for unconstrained sources [3xNsources x Nchannels]. Order of sources: (source1.x, source1.y, source1.z, ..., sourceN.z)
  • ImageGridAmp: Full source time series, in Ampere.meter. If this field is defined, ImagingKernel must be empty. If you want this field to be set instead of ImagingKernel, make sure you select the advanced option Full results when estimating the sources.

    • Size for constrained sources [Nsources x Ntime]
    • Size for unconstrained sources [3xNsources x Ntime]. Order of sources: (source1.x, source1.y, source1.z, ..., sourceN.z)
  • Time: [1 x Ntime] Time values for each sample recorded in F, in seconds.

  • nComponents: Number of dipoles per grid point: 1=Constrained, 3=Unconstrained, 0=Mixed. In the case of mixed head models, the atlas GridAtlas documents region by region how the list of grid points matches the list of dipoles.

  • Function: Type of values currently saved in the file: 'mn', 'mnp', 'dspm', 'dspm2018', 'dspm2018sc', 'sloreta', 'lcmv', 'lcmvp', 'lcmvnai', 'lcmvpow', 'gls', 'glsp', 'glsfit', 'glschi', 'zscore', 'ersd'...

  • HeadModelType: Type of source space used for this head model ('surface', 'volume', 'mixed').

  • HeadModelFile: Relative path to the head model used to compute the sources.

  • SurfaceFile: Relative path to the cortex surface file related with this head model.

  • Atlas: Used only by the process "Sources > Downsample to atlas".

  • GridLoc: [Nvertices x 3], (x,y,z) positions of the grid of source points. In the case of a surface head model, it is empty and you read directly the positions from the surface file.

  • GridOrient: [Nvertices x 3], direction of the normal to the surface for each vertex point (copy of the 'VertNormals' matrix of the cortex surface). Empty in the case of a volume head model or unconstrained sources.

  • GridAtlas: Atlas "Source model" used with mixed source models.

  • GoodChannel: [1 x Nchannels] Array of channel indices used to estimate the sources.

  • DataFile: Relative path to the recordings file for which the sources where computed. If this field is set, the source file appears as a dependent of the DataFile.

  • Comment: String displayed in the database explorer to represent this file.

  • History: Operations performed on the file since it was created (menu "View file history").

Optional fields:

  • Options: Structure that contains all the options of the inverse calculation. This is saved in the file only for bookkeeping.

  • Whitener: Noise covariance whitener computed in bst_inverse_linear_2018.m

  • DataWhitener: Data covariance whitener computed in bst_inverse_linear_2018.m

  • SourceDecompVa: [3 x Nsources] Concatenated right singular vectors from the SVD decomposition of the whitened leadfield for each source (only for unconstrained sources).

  • SourceDecompSa: [3 x Nvertices] Vector diagonal of the singular values from the SVD decomposition of the whitened leadfield for each source (only for unconstrained sources).

  • Std: For averaged files, standard deviation across the trials that were used to compute this file.

  • DisplayUnits: String, force the display of this file using a specific type of units.

  • ChannelFlag: [Nchannels x 1] Copy of the ChannelFlag field from the original data file.

  • Leff: Effective number of averages. For averaged files, number of trials that were used to compute this file. For source files that are attached to a data file, we use the Leff field from the data file.

Full source maps

In Intra-subject, right-click on one of the normalized averages > File > View file contents.

  • zscore_contents.gif

This file has the same structure as a shared inverse kernel, with the following differences:

  • It contains the full time series (ImageGridAmp) instead of an inverse operator (ImagingKernel).

  • The Z-score process updated the field Function ('mn' => 'zscore')

Source links

  • The links are not real files on the hard drive: if you select the menu "View file contents" for any of them, it displays the structure of the corresponding shared kernel.
  • They are saved in the database as strings with a specific structure: "link|kernel_file|data_file". This string associates a shared inverse operator with some recordings. The two files have to be available to load this file. All the functions in Brainstorm are equipped to reconstruct the full source matrix dynamically.

Filename tags

  • _KERNEL_: Indicates that the file contains only an inverse kernel, it needs to be associated with recordings to be opened.

Useful functions

  • in_bst_results(ResultsFile, LoadFull, FieldsList): Load a source file and optionally reconstruct the full source time series on the fly (ImagingKernel * recordings).

  • in_bst(FileName, TimeBounds, LoadFull): Load any Brainstorm data file with the possibility to load only a specific part of the file.

  • bst_process('LoadInputFile', FileName, Target, TimeWindow, OPTIONS): The most high-level function for reading data files, can compute scout values on the fly.
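
A minimal usage sketch, based only on the signatures listed above. The file path is a hypothetical placeholder, and the argument values and returned fields are assumptions; check the headers of these functions in the Brainstorm distribution for the exact conventions.

    % Load a source file and reconstruct the full source time series on the fly.
    ResultsFile = 'Subject01/Run01/results_MN_MEG_KERNEL_XXXX.mat';   % hypothetical relative path
    ResultsMat  = in_bst_results(ResultsFile, 1);   % LoadFull=1: multiply ImagingKernel by the recordings
    disp(ResultsMat)                                % inspect the loaded fields (ImageGridAmp, Time, etc.)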



Tutorial 23: Scouts

Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Richard Leahy, Sylvain Baillet

In Brainstorm jargon, a scout represents a region of interest (ROI) in the available source space. It is a subset of dipoles defined on the cortex surface or the head volume. This tutorial explains how to create one or several scouts, use them to represent the activity in specific brain regions and compare the responses between different experimental conditions.

Here, all the steps are done from the scout panel, but scout time series can also be saved with the process "Extract > Scout time series" for further analysis, or computed directly within various processes, e.g. many under Frequency or Connectivity.

Hypothesis

For all the brain imaging experiments, it is highly recommended to have a clear hypothesis to test before starting the analysis of the recordings. With this auditory oddball experiment, we would like to explore the temporal dynamics of the auditory network, the deviant detection and the motor response. According to the literature, we expect to observe at least the following effects:

  • Bilateral response in the primary auditory cortex (P50, N100), in both experimental conditions (standard and deviant beeps).

  • Bilateral activity in the inferior frontal gyrus and the auditory cortex corresponding to the detection of an abnormality (latency: 150-250ms) for the deviant beeps only.

  • Decision making and motor preparation, for the deviant beeps only (after 300ms).

We will start by creating regions of interest corresponding to the auditory cortices to illustrate the tools, then define other regions to better explore the dynamics of the brain response.

Creating a scout

Almost all the features related to scout manipulation are accessible in the Scout tab in the main Brainstorm window. The scouts are automatically saved in the surface file from which they are created, and they are loaded and automatically displayed each time the surface is loaded.

An atlas designates, in this context, a list of scouts. For one cortex surface, we can have as many atlases as needed. An atlas can be an anatomical parcellation (like the ones loaded when using FreeSurfer), a random parcellation generated by Brainstorm, or a user-defined list of ROIs. All the surfaces contain, by default, an empty atlas called "User scouts", for the user to create new regions of interest.

First vertex (seed)

  • In Intra-subject, double-click on the normalized standard average.
    Open the average recordings for the standard condition, to have a time reference.
    Go to the first peak in the response, around 90ms.
    In the Surface tab, increase the amplitude threshold to see something relatively focal. The response is larger in the left hemisphere, so let's start with the left.

    open_files.gif

  • Switch to the Scout tab, click on the first button in the toolbar (the big cross).
    In the 3D figure, click where we expect to find the primary auditory cortex (rotate and zoom before if necessary). A small green dot with the label "1" appears where you clicked. Your first scout is created in the User scouts atlas, and contains only one vertex for the moment.

    scout_create.gif

  • If you are not satisfied with the position of the vertex you selected, delete the new scout (select it in the list and press the Delete key, or menu Scout > Delete) and try again.

  • Rename it "A1L", for primary auditory left: double-click on it in the list, or menu Scout > Rename.

    scout_rename.gif

  • In light grey, you can see the letters "LT" in the list: this means that, based on the anatomical atlases imported from FreeSurfer, the point you clicked is in the left temporal lobe.

Growing a scout

For now, our scout contains only one vertex of the cortex surface. Most of the time, the aim of a scout is to extract the average activity over a larger region. The buttons in the Scout size section offer basic operations to define the scout extension.

  • [>]: Add the closest vertex (with respect to the seed) to the current scout.

  • [<]: Remove the furthest vertex (with respect to the seed) from the current scout.

  • [>>]: Grow scout in all directions.

  • [<<]: Shrink scout in all directions.

  • 'Constrained': If this button is pressed, only the vertices that have a source value above the threshold will be added to the scout (its growth will be limited to the colored patch on the surface).

  • Add vertex manually: Select the Select point button, then select again the "A1L" scout. When you click on a vertex on the surface, it is added to the selected scout.

  • Remove vertex manually: Same as previously, but holding the SHIFT key at the same time.

Grow the scout A1L to 20 vertices, not in constrained mode. You can read the number of vertices and the estimated cortical area just below the [<<] and [>>] buttons.

  • scout_grow.gif

Display time series

Select the scout in the list, click on the second button in the toolbar [Display scouts time series]. It displays the signal associated with this region for the entire time window (-100ms to 500ms).
We can now observe the early response at 50ms (P50) that was not very obvious before.

  • scout_display.gif

3D display options

In the toolbar on the right side of the scouts list, you can find a list of display options. Leave your mouse over each button for a few seconds to get a short description.

  • buttonLoad.gif Load Atlas

  • buttonSave.gif Save selected scouts

  • buttonSel.gif Show all the scouts

  • buttonSel2.gif Show only the selected scouts

  • Hide all: Uncheck both buttons above to hide the scouts in the 3D figure.

  • buttonContour.gif Show / hide the contour line

  • buttonLabel.gif Show / hide the scouts labels

  • buttonOpaque.gif Scout patch: Opaque

  • buttonTransp.gif Scout patch: Transparent

  • buttonNone.gif Scout patch: None

  • buttonRegion.gif Display the color of the region instead of the scout color (only for anatomical atlases)

  • buttonMri.gif Center MRI on scout (open the MRI viewer and show the position of the scout's seed)

Scout function

We have extended the scout A1L to 20 vertices. Because this is a source model with constrained dipole orientations, we have only one source per vertex. The region A1L therefore corresponds to 20 signals.

The signals are grouped together into a single signal that is then used to represent the activity of the region of interest. In the list of available scouts, you can see the indication [Mean] next to the name of the scout: this is the name of the function used for combining all the source signals into one. This function can be changed individually for each scout, with the menu Scout > Set function.

Here is a description of the different options. In the case of unconstrained sources (3 signals for each vertex, one for each orientation), the function is applied separately for each orientation and produces 3 time series instead of one. For more details, see the code of bst_scout_value.m. A minimal sketch of some of these functions is given at the end of this section.

  • Mean: Average all the signals.

    scout_mean.gif

  • PCA: Take the first mode of the Principal Component Analysis decomposition of all the signals. Here, this is computed on a single file for display, but PCA works best when all epochs and conditions are processed together, through the process tabs. See PCA tutorial.

  • Fast PCA: Same as PCA function, but computes the first PCA decomposition based only on a subset of the strongest signals. It gives very similar results, but its computation is much faster for scouts that group a large number of dipoles (>50).

    scout_pca.gif

  • Mean(norm): Average the absolute values of all the signals: mean(abs(signal))

    scout_meannorm.gif

  • Max: For each time point, get the maximum across all the vertices.
    Signed maximum: m = max(abs(signal)), scout = sign(signal(m)) * m

    scout_max.gif

  • Power: Average the square of all the signals: mean(signal^2)

    scout_power.gif

  • RMS: Square root of the average of the squared signals: sqrt(mean(signal^2))

    scout_rms.gif

  • All: Do not apply any operation on the scouts signals, returns all the signals.

    scout_all.gif

Note that the function selected here is ignored when running a process with a scout function option, like "Extract > Scout time series" or Connectivity.
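
As announced above, here is a minimal Matlab sketch of a few of these scout functions, applied to a toy matrix of constrained sources. It is an illustration only, not the code of bst_scout_value.m (in particular, the actual PCA includes baseline removal and sign conventions not shown here).

    % Toy scout: Nvertices constrained sources, one row per vertex.
    Nvertices = 20;  Ntime = 100;
    S = randn(Nvertices, Ntime);                    % fake source signals of one scout
    scout_mean  = mean(S, 1);                       % Mean
    scout_mnorm = mean(abs(S), 1);                  % Mean(norm)
    scout_power = mean(S.^2, 1);                    % Power
    scout_rms   = sqrt(mean(S.^2, 1));              % RMS
    [m, i]      = max(abs(S), [], 1);               % Max: strongest source at each time point
    idx         = sub2ind(size(S), i, 1:Ntime);
    scout_max   = sign(S(idx)) .* m;                % keep the sign of that strongest source
    [U, ~, ~]   = svd(S, 'econ');                   % PCA: first spatial mode (simplified)
    scout_pca   = U(:,1)' * S;                      % projection on the first component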

Option: Absolute / relative

As explained in the previous tutorial, the minimum norm current amplitudes can be positive or negative depending on the dipole orientation. This means that the values of the scouts, depending on the function that is used, may be positive or negative too. Most of the time we are interested in visualizing the absolute values of the scouts time series, to compare the activation level in different conditions or subjects. But it is sometimes easier to understand the temporal dynamics of an ROI with the relative values.

At the bottom of the Scout tab, you can choose to display either Absolute or Relative values.
The effect of this option depends on whether you are processing source files with constrained (1 signal per vertex) or unconstrained (3 signals per vertex) dipole orientations:

Constrained: Apply the scout function to all source signals, then:

  • Absolute: abs(ScoutFunc(sources))

    scout_absolute.gif

  • Relative: ScoutFunc(sources)

    scout_relative.gif

Unconstrained: Apply the scout function to the source signals for each orientation (Sx,Sy,Sz) separately, and then returns either the norm of the 3 orientations, or each orientation separately (except PCA, see below):

  • Absolute: Returns one time series:
    sqrt(ScoutFunc(Sx)² + ScoutFunc(Sy)² + ScoutFunc(Sz)²)

    scout_unconstr_abs.gif

  • Relative: Returns three time series:
    A1L1=ScoutFunc(Sx), A1L2=ScoutFunc(Sy), A1L3=ScoutFunc(Sz)

    scout_unconstr_rel.gif

  • Since PCA can also be used to combine the 3 orientations, it is applied here across all sources and orientations at once, producing a single time series, like in the constrained case above.

Display only: Note that this relative/absolute selection is a display option, it is not saved in the scouts themselves. It is used only when displaying the scouts time series with the [Display scouts time series] button of the Scout tab. In all other cases, such as the extraction of the scouts values from a script, this relative/absolute option is ignored. In those cases, there may also be an option to flatten unconstrained source orientations first, resulting in a single time series per scout instead of 3.

Multiple conditions

We can easily compare the activity between multiple conditions and multiple regions of interest.

  • In Intra-subject, open at the same time the normalized sources for deviant and standard condition.

  • Select the scout A1L and click on [Display scouts time series].
    This computes the scouts time series for all the files that are currently open.

    scout_nooverlay2.gif

  • Use the checkbox Uniform amplitude scale to configure the amplitude of multiple axes.
    Select it to force multiple graphs to use the same scale; unselect it to scale each graph separately.


    scout_nooverlay.gif

  • At the bottom of the Scout tab, select the option "Overlay: Files", then click [Display] again.

    scout_overlay_cond.gif

  • The Z-score value for the standard condition is a lot higher than for the deviant condition because of the number of trials we used for computing the two averages (5x more trials for the standard). The SNR is higher with more trials: the baseline has less variance, so the Z-score is higher.

  • To overlay the results of two conditions averaged with very different numbers of trials, it makes more sense to display the scout time series for non-normalized maps (averaged current density).

    scout_overlay_cond2.gif

Other regions of interest

  • Let's place all the regions of interest starting from the easiest to identify.
    • Open the normalized average source files (standard and deviant), together with the average recordings for the deviant and standard condition for Run#01, for easier time navigation.

    • In the Surface tab, smooth the cortical surface at 70%.

    • For each region: go to the indicated time point, adjust the amplitude threshold in the Surface tab, identify the area of interest, click on its center, grow the scout, rename it.
    • Grow all the regions to the same size: 20 vertices.

  • A1L: Left primary auditory cortex (Heschl gyrus) - Already marked.

    • The most visible region in both conditions. Active during all the main steps of the auditory processing: P50, N100, MMN, P200, P300.
    • Standard condition, t=90ms, amplitude threshold=50%

      scout_a1l_constr.gif

  • A1R: Right primary auditory cortex (Heschl gyrus)

    • The response in this region is not as obvious as in A1L. These binaural auditory stimulations should generate similar responses in both left and right auditory cortices at early latencies. Possible explanations for this observation:
      • The earplug was not adjusted on the right side and the sound was not well delivered.
      • The subject's hearing from the right ear is impaired.
      • The response is actually stronger in the left auditory cortex for this subject.
      • The orientation of the source makes it more difficult for the MEG sensors to capture.
    • Deviant condition, t=90ms, amplitude threshold=50%

      scout_a1r_constr.gif

  • IFGL: Left inferior frontal gyrus (Brodmann area 44)

    • Involved in the auditory processing, particularly while processing irregularities.
    • You can use the atlas "Brodmann-thresh" available in the Scout tab for identifying this region.
    • Deviant condition, t=130ms, amplitude threshold=30%

      scout_ifgl_constr.gif

  • M1L: Left motor cortex

    • The subject taps with the right index when a deviant is presented.
    • The motor cortex responds at very early latencies together with the auditory cortex, in both conditions (50ms and 100ms). The subject is ready for a fast response to the task.
    • At 170ms, the peak in the standard condition probably corresponds to an inhibition: the sound heard is not a deviant, there is no further motor processing required.
    • At 230ms, the peak in the deviant condition is probably a motor preparation. At 350ms, the motor task begins, the subject moves the right hand (recorded reaction times 500ms +/- 108ms).
    • We cannot expect to have clear responses during the motor response because of the averaging. The response times are variable, so in order to get a better representation of the regions involved we should import and average the trials based on the response triggers.
    • Deviant condition, t=440ms, amplitude threshold=25%

      scout_m1l_constr.gif

Advanced

Multiple scouts

We can display the activity of multiple regions simultaneously.

  • Close everything (toolbar button [X]).
  • Open the two normalized average source files (standard and deviant).

  • In the Scout tab, select the A1L and IFGL scouts simultaneously.

  • Select the option "Overlay:Scouts", do not select "Overlay:Files", Absolute values.

  • Click on [Display scouts time series].

    overlay_scouts.gif

  • Now select the option "Overlay:Files", do not select "Overlay:Scouts", click on [Display].

    overlay_conditions.gif

  • In both conditions, we observe similar early responses (P50 and N100), then the activity diverges.
    In the deviant condition, we observe a pattern A1-IFG-A1 that is absent in the standard condition.

Advanced

From the database explorer

You have to display the sources on the cortex to create the scouts. But once they are created, you can directly display the scouts time series from the database explorer. It means that you can quickly compare the values for a scout between many different conditions without having to open them all.

  • Close everything (toolbar button [X]).
  • Select the option "Overlay:Files", do not select "Overlay:Scouts".

  • Select the two average recordings in Run#01 folder > right-click > Scouts time series.
    Note that this menu is present at all the levels in the database explorer.

    scout_tree_popup.gif

  • If no scout is currently loaded in the Scout tab, this menu shows all the scouts available in the surface.
    If a surface is loaded and at least one scout is selected in the Scout tab, it displays only the selected scouts.

    scout_tree.gif

  • Once the list of scouts is loaded in the Scout tab, you can select one of them, and then display its values for all the trials of a single condition (overlay:files).

    scout_tree_trials.gif

Advanced

Sign flip

In the case of source models with constrained orientations (normal to the cortex), the sign of the current can be an issue. If the region is large enough to include vertices with normals in opposite directions, averaging the source values may cancel out the activity.

Let's use the example from the previous tutorial and consider that one scout contains all the dipoles corresponding to both the red arrows (positive values) and the blue arrows (negative values). If we average all the values from this scout, we get a value close to zero.

  • minnorm_sketch.gif

To avoid this, a mechanism was added in the scout calculation, to flip the sign of sources with opposite directions before the averaging. We start by finding the dominant orientation of the scout, then flip the sign of the values that are not in the same direction (scalar product of the orientations < 0).

If the sign of some sources is flipped, you get a message in the Matlab command window, for example:
BST> Flipped the sign of 7 sources.
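
As a rough illustration of this mechanism, here is a minimal Matlab sketch of the sign-flip logic. It is not the exact Brainstorm implementation, and the variable names (VertNormals, ScoutValues) are hypothetical placeholders.

    % Minimal sketch of the sign flip, assuming:
    %   VertNormals : [Nvertices x 3] orientation of each source in the scout (hypothetical)
    %   ScoutValues : [Nvertices x Ntime] constrained source time series for the scout (hypothetical)
    [U, S, V] = svd(VertNormals, 0);                 % Dominant orientation of the scout
    domOrient = V(:,1);                              % First principal direction of the normals
    isFlip = (VertNormals * domOrient) < 0;          % Scalar product < 0: opposite direction
    ScoutValues(isFlip,:) = -ScoutValues(isFlip,:);  % Flip the sign of these sources
    fprintf('BST> Flipped the sign of %d sources.\n', nnz(isFlip));
    scoutMean = mean(ScoutValues, 1);                % The average no longer cancels out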

Advanced

Scout toolbar and menus

Menu: Atlas

  • menu_atlas.gif

  • New atlas:

    • Empty atlas: Create a new empty atlas.

    • Copy atlas: Duplicate an atlas and all the scouts it contains.

    • Copy selected scouts: Create a new atlas and copy only the selected scouts to it.

    • Source model options: Create a special atlas "Source model", with which you can attribute different modeling constraints to each region, after merging the cortex with some subcortical regions. This is required to use the option "Custom head model" (mixed models).

    • Volume scouts: Create a volume atlas in order to define 3D scouts (volume head models).

  • From subject anatomy: Import as surface or volume scouts the ROIs defined in one of the volume parcellation available in the subject's anatomy (MNI-based atlases or subject parcellations from CAT12 or FreeSurfer).

  • Load atlas: Load ROIs or a cortical parcellation coming from FreeSurfer or BrainSuite as a new atlas.

  • Rename atlas: Change the name that appears in the atlas list to refer to the selected atlas.

  • Delete atlas: Delete the current atlas and all the scouts it contains.

  • Add scouts to atlas: Load ROIs or a cortical parcellation and add them to the current atlas.

  • Subdivide atlas: Split all the scouts of the current atlas into smaller ROIs. Available options:

    scoutSubdivide.gif

  • Subdivide selected scouts: Same thing, but processes only the selected scouts.

  • Surface clustering: Create a parcellation of the cortex surface and save it as a new atlas.
    Only the "Homogeneous parcellation" clustering is currently available.

  • Save modifications: Force the current modifications to be saved to the cortex surface.

  • Undo all modifications: Restore the atlas the way it was the last time the surface was loaded.

Menu: Scout

  • menu_scout.gif

  • New: coordinates: Create a new scout starting from the vertex that is the closest to the specified position. You can enter the position in MRI, SCS or MNI coordinates.

  • Add vertices: The user can select some points on the cortex surface and add them to the scout.
    Equivalent: Click on Select point (first toolbar button), then select the scout in the list.

  • Edit in MRI: Open an interface to edit the selected scout as a 3D ROI, slice by slice. Only the vertices contained in the 3D mask are kept, all the volume information is lost. It means that the mask you intended to draw might be very different from what you get as a scout at the end. This is not a very reliable tool, as there is no direct correspondence between the volume and the surface.

  • Set function: Set the scout function for the selected scouts.

  • Set region: Set the cortical region for the selected scouts.

  • Rename: Rename the selected scout. Shortcut: double-click.

  • Set color: Change the display color of the selected scout.

  • Delete: Delete selected scouts. Shortcut: Delete key

  • Merge: Join two or more selected scouts.

  • Duplicate: Duplicate selected scouts.

  • Difference: With two scouts A and B selected, this menu removes the vertices of scout A from scout B, or vice versa. The direction of the difference is asked in a dialog window.

  • Intersect: With two or more scouts selected, this menu creates a new scout with the vertices that are common for all the selected scouts.

  • Export to Matlab: Export the structures of the selected scouts to the Matlab environment and make them accessible from the command window. This menu can be useful to quickly get the list of vertex indices or to modify a scout manually.

  • Import from Matlab: Import scouts structures that you modified manually from your Matlab command window directly as new scouts.

  • Project to: Project the selected scout on another surface available in the database.

  • Project to: Contralateral hemisphere: Transfer scouts defined on one hemisphere to the other hemisphere. This feature is only available with FreeSurfer, and the recon-all segmentation pipeline must be executed with additional options. In the anatomy of the tutorial dataset, this option was not used, and therefore this menu generates an error. However, this option was used in the ICBM152 2023b template anatomy. To get this contralateral projection to work, FreeSurfer recon-all must be executed again with the extra option, and the anatomy of the subject must be imported again. More information.

  • Edit surface: Create a new surface containing only the desired parts (remove or keep only the selected scouts). This is useful for instance for selecting one sub-cortical region from the Aseg FreeSurfer atlas (see the FreeSurfer tutorial).

Menu: Sources

  • menu_sources.gif

  • Correlation with sensor: Create a new scout with all the sources that are strongly correlated with a given sensor.

  • Expand with correlation: Compute the correlation between the values of the scout's seed (first point) and all the other sources. The sources with a correlation coefficient higher than a given threshold are added to the scout.

  • Maximal value (new scout): Find the vertex with the maximal intensity at the current time, and create a scout centered on it.

  • Maximal value (selected scout): Move the scout's seed to the source that has the maximum amplitude in the scout, at the current time.

  • Simulate recordings: Multiply the selected scouts with the forward model. Simulate the scalp data that would be recorded if only the selected cortex region was activated; all the other sources are set to zero. Create a new data file in the database.
    If no scout is selected: simulate the recordings produced by the activity of the entire cortex.
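
As a rough sketch of what "Simulate recordings" does for a constrained source model (the variable names below are hypothetical, not the actual Brainstorm code):

    % Gain         : [Nsensors x Nsources] forward model (constrained orientations)
    % SourceValues : [Nsources x Ntime] source time series (current density)
    % iScoutVert   : indices of the vertices of the selected scouts
    SimSources = zeros(size(SourceValues));
    SimSources(iScoutVert,:) = SourceValues(iScoutVert,:);   % All the other sources set to zero
    SimRecordings = Gain * SimSources;                        % Simulated sensor data [Nsensors x Ntime]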

Advanced

Scout region

A scout is defined by its name, and it has several properties: a list of vertices and an aggregating function. These are usually enough to explore the activity at the cortex level the way we did in these tutorials. An extra property can be defined on the scout: its explicit classification into a brain region. This property is used only in more advanced functional connectivity analysis, for the representation of the NxN connection graphs. It is introduced here for reference purposes.

A brain region in Brainstorm follows a hierarchy with three levels: hemisphere / lobe / sub-region. The definition at each level is optional: a region can be classified only at the hemisphere level, at the hemisphere+lobe level, or not at all. It depends on the level of the hierarchy you want to explore in the connectivity graphs.

The region for a scout can be set with the Scout > Set region menus, and is encoded in a string that contains at least 2 characters: "HLxxx". H represents the hemisphere (L,R,U), L stands for the lobe (F,PF,C,P,T,O,L,U), and xxx for the sub-region name (optional). For both the hemisphere and the lobe, the value "U" stands for "Undefined", meaning that the classification is simply not set. The menu Set region>Custom region... lets you directly edit this string.

When set, the region string is shown before the scout name in the list, representing only the defined levels. It doesn't show the letters U for "undefined".

  • scout_region.gif

Advanced

On the hard drive

The scouts are saved in the surface file on which they have been defined.
In the anatomy view, right-click on the selected cortex surface (cortex_15002V) > View file contents.

scout_harddrive.gif

iAtlas: Index of the atlas that is currently selected for this surface.

Atlas: Array of structures, each entry is one menu in the drop-down list in the Scout tab.

  • Name: Label of the atlas (reserved names: "User scouts", "Structures", "Source model").

  • Scouts: Array of structures, one per scout in this atlas.

    • Vertices: Array of indices of the vertices that are part of the scout.

    • Seed: Index of the central point of the scout (or the most relevant one).

    • Color: [r,g,b] color array, with values between 0 and 1.

    • Label: Display name of the scout (must be unique in this atlas).

    • Function: Scout function {'Mean', 'PCA', 'FastPCA', 'Mean_norm', 'Max', 'Power', 'All'}

    • Region: Code name for indicating the anatomical region in which the scout is located.

    • Handles: Graphic handles if the scout is currently displayed (always empty in a file).
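
With the fields described above, the scouts can be read directly from a surface file. A minimal sketch (the surface file name below is just an example, adapt it to your own database):

    % List the scouts of the currently selected atlas in a cortex surface file
    SurfaceMat = load('tess_cortex_pial_low.mat', 'Atlas', 'iAtlas');   % Example file name
    atlas = SurfaceMat.Atlas(SurfaceMat.iAtlas);
    fprintf('Atlas: %s\n', atlas.Name);
    for iScout = 1:length(atlas.Scouts)
        s = atlas.Scouts(iScout);
        fprintf('%-12s %4d vertices, seed=%d, function=%s\n', ...
            s.Label, length(s.Vertices), s.Seed, s.Function);
    end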

Useful functions

  • bst_scout_value: Combine multiple signals into one.

  • process_extract_scout: Process "Extract > Scouts time series"

  • view_scouts: Compute the scouts time series and display them in a new figure.

Additional documentation



Tutorial 24: Time-frequency

Authors: Francois Tadel, Dimitrios Pantazis, Elizabeth Bock, Sylvain Baillet

This tutorial introduces how to compute time-frequency decomposition of MEG/EEG recordings and cortical currents using complex Morlet wavelets and Hilbert transforms.

Introduction

Some of the MEG/EEG signal properties are difficult to access in the time domain (time/amplitude graphs). A lot of the information of interest is carried by oscillations at certain frequencies, but the amplitude of these oscillations is sometimes a lot lower than the amplitude of the slower components of the signal, making them difficult to observe.

Averaging in the time domain may also lead to a cancellation of these oscillations when they are not strictly locked in phase across trials. Averaging trials in the time-frequency domain makes it possible to extract the power of the oscillations regardless of phase shifts. For a better understanding of this topic, we recommend reading the following article: Bertrand O, Tallon-Baudry C (2000), Oscillatory gamma activity in humans: a possible role for object representation.

In Brainstorm we offer two approaches for computing the time-frequency decomposition (TF): the first one is based on the convolution of the signal with a series of complex Morlet wavelets; the second filters the signal in different frequency bands and extracts the envelope of the filtered signals using the Hilbert transform.

slide.gif

Morlet wavelets

Complex Morlet wavelets are very popular in EEG/MEG data analysis for time-frequency decomposition. They have the shape of a sinusoid, weighted by a Gaussian kernel, and they can therefore capture local oscillatory components in the time series. An example of this wavelet is shown below, where the blue and red curves represent the real and imaginary part, respectively.

Contrary to the standard short-time Fourier transform, wavelets have variable resolution in time and frequency. For low frequencies, the frequency resolution is high but the time resolution is low. For high frequencies, it's the opposite. When designing the wavelet, we basically choose a trade-off between temporal and spectral resolution.

  • tf_resolution.gif

To design the wavelet, we first need to choose a central frequency, ie. the frequency where we will define the mother wavelet. All other wavelets will be scaled and shifted versions of the mother wavelet. Unless you are interested in designing the wavelet for a particular frequency band, the default 1Hz should be fine.

Then, the desired time resolution at the central frequency should be defined. For example, we may wish to have a temporal resolution of 3 seconds at a frequency of 1 Hz (default parameters). These two parameters uniquely define the temporal and spectral resolution of the wavelet for all other frequencies, as shown in the plots below. Resolution is given in units of Full Width Half Maximum of the Gaussian kernel, both in time and frequency.

  • waveletOptions.gif
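
For illustration only, here is a minimal Matlab sketch of a complex Morlet wavelet with the default parameters, assuming the FWHM parameter refers to the Gaussian envelope of the wavelet (the actual implementation used by Brainstorm is in morlet_transform and may include normalization factors not shown here):

    % Complex Morlet mother wavelet: Gaussian envelope x complex exponential
    fc      = 1;                              % Central frequency (Hz)
    FWHM_tc = 3;                              % Time resolution at fc (seconds)
    sigma_t = FWHM_tc / sqrt(8*log(2));       % FWHM -> standard deviation of the Gaussian
    t = -5:0.001:5;                           % Time axis (s)
    w = exp(-t.^2 / (2*sigma_t^2)) .* exp(2i*pi*fc*t);
    plot(t, real(w), 'b', t, imag(w), 'r');   % Real part (blue), imaginary part (red)
    legend('Real part', 'Imaginary part');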

Edge effects

Users should pay attention to edge effects when applying wavelet analysis. Wavelet coefficients are computed by convolving the wavelet kernel with the time series. Similarly to any convolution of signals, there is zero padding at the edges of the time series and therefore the wavelet coefficients are weaker at the beginning and end of the time series.

From the figure above, which designs the Morlet wavelet, we can see that the default wavelet (central frequency fc=1Hz, FWHM_tc=3sec) has a temporal resolution of 0.6sec at 5Hz and 0.3sec at 10Hz. In this case, the edge effects are roughly half these durations: 300ms at 5Hz and 150ms at 10Hz.

More precisely, if f is your frequency of interest, you can expect the edge effects to span over FWHM_t seconds: FWHM_t = FWHM_tc * fc / f / 2. Examples of such transients are given in the figures below.

edgeEffect5Hz.gif edgeEffect10Hz.gif
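
Applying this formula with the default wavelet parameters reproduces the durations quoted above; a quick check in Matlab:

    % Edge effect duration for the default wavelet (fc=1Hz, FWHM_tc=3s)
    fc = 1;  FWHM_tc = 3;
    f = [2 5 10];                         % Frequencies of interest (Hz)
    FWHM_t = FWHM_tc * fc ./ f / 2        % Returns [0.75  0.30  0.15] seconds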

We also need to consider these edge effects when using the Hilbert transform approach. The band-pass filters used before extracting the signal envelope are relatively narrow and may cause long transients. To evaluate the duration of these edge effects for a given frequency band, use the interface of the process "Pre-process > Band-pass filter" or refer to the filters specifications (tutorial #10).

Simulation

We will illustrate the time-frequency decomposition process with a simulated signal.

  • The following code generates a sum of three sinusoids (2Hz, 20Hz, 50Hz) with random white noise. The 50Hz and noise are present everywhere, the 2Hz and 20Hz start only after two seconds.

    % Frequencies of the three oscillations (Hz); t is the time vector provided by the process
    f1 = 2; f2 = 20; f3 = 50;
    % 2Hz and 20Hz components: only from sample 2000 on (~2s at 1000Hz)
    i = 2000:6000;
    Data(1,i) = sin(f1*2*pi*t(i)) + 0.4 * cos(f2*2*pi*t(i));
    % 50Hz oscillation and white noise: present over the whole 6000 samples
    Data = Data + 0.2 * sin(f3*2*pi*t) + 0.4 * rand(1,6000);
  • Empty the Process1 list (right-click > Clear list) then click on [Run].

  • Run process: Simulate > Simulate generic signals.
    Ntime=6000, Sampling frequency=1000Hz (signal duration = 6000/1000 = 6 seconds).
    Copy-paste the few lines of code above to generate the sum of three sinusoids.

    simulate_process.gif

  • Double-click on the new file to look at the simulated signal.

    simulate_display.gif

  • In Process1, select the simulated signal.
  • Run process: Frequency > Time-frequency (Morlet wavelets).
    Select the option Spectral flattening: The normalization will be discussed later.

    example_process.gif

  • Click on the button [Edit] to see all the process options.
    Time definition: Same as input, Frequency definition: Linear 1:1:60, Compute measure: Power.

    example_options.gif

Process options

Comment: String that will be displayed in the database explorer to represent the output file.

Time definition

  • Same as input file: The output file has the same time definition as the input file.
    In this example, it means: 6000 samples between 0 and 6s.

  • Group in time bands: This option adds a step of computation. First it computes the TF decomposition for the entire input file, then it averages the power by time band. To define the time bands:

    • Enter your own time bands in the text area, one line per time band, with the following format: "name / time definition / function"

    • Click on the button [Generate] to automatically create a list of time bands with the same length. You will be asked the maximal length of each time band.
    • The function is the measure we take to combine the values of all the individual time samples into one value per time band. Possible values are: mean, max, std, median.

Frequency definition: Frequencies for which the power will be estimated at each time instant.

  • Linear: You can specify the frequencies with the Matlab syntax start:step:stop.
    The default is "1:1:60", which produces 60 values [1, 2, 3, 4, ..., 59, 60].

  • Log: With the option start:N:stop, produces a list of N frequencies logarithmically scaled between "start" and "stop". For example "1:40:80" is converted to [1, 1.5, 2.1, 2.7, ..., 61.5, 65.8, 75, 80]

  • Group in frequency bands: As for the time definition, this option leads to a two-step process. First it computes the TF decomposition for several values in the frequency band, then it averages the power of TF coefficients per frequency band. To define a frequency band:

    • One line per frequency band, with the format "name / frequency definition / function"

    • The frequency definition is a Matlab expression evaluated with an eval() call. If the frequency definition contains only two values, Brainstorm adds two extra values in the middle so that the final averaged value is a bit more robust. Example of valid expressions:
      "2,4": Evaluates to [2,4], and then expands to the frequency vector [2, 2.66, 3.33, 4]
      "2:0.5:4": Evaluates to [2 2.5 3 3.5 4]
      "2, 2.5, 3, 3.5, 4": Evaluates to [2 2.5 3 3.5 4]

    • The function is the measure we take to combine the values for all the individual frequencies into one for the frequency band. Possible values are: mean, max, std, median.

Morlet wavelet options

  • Central frequency: Frequency where the mother wavelet is designed. All other wavelets will be shifted and scaled versions of the mother wavelet.

  • Time resolution (FWHM): Temporal resolution of the wavelet at the central frequency (in units of Full Width Half Maximum). Click [Display] to see the resolution of the wavelet for other frequencies.

Compute the following measure:

  • The convolution of the signal with complex Morlet wavelets returns the complex coefficients for each frequency/time/sensor. Typically, what we display is the power of the coefficients (square of the amplitude: abs(TF)^2). You can choose whether you want to apply this transformation or not.

  • Power: Compute the "power" transformation immediately after the TF decomposition. This discards the phase information, but produces files that are half the size and a lot easier to process.

  • Magnitude: Save the magnitude of the complex values instead of the power: abs(TF).

  • None: Save the TF coefficients as they are computed (complex values). This can be useful if you plan to use these decompositions for other purposes that require the phase.

  • Some combinations of options may disable this choice. If you select frequency bands, the program will have to compute the power before averaging the values, therefore "none" is not an option.
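
In terms of the complex coefficients TF returned by the decomposition, the three measures simply correspond to the following (minimal sketch):

    % TF : [Nsignals x Ntime x Nfreq] complex wavelet coefficients
    TFpower     = abs(TF).^2;     % "Power"
    TFmagnitude = abs(TF);        % "Magnitude" = sqrt(power)
    % "None" : keep the complex coefficients TF as they are (phase still available: angle(TF))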

Display: Time-frequency map

  • Right-click on the new time-frequency file > Time-freq: One matrix (same as double-clicking).
    This menu displays the time-frequency decomposition of the first (and only) signal. The Brainstorm window shows two new elements: the tab "Display" and the frequency slider.

    example_file.gif

  • We can easily identify the three horizontal bars as the three sinusoids of the simulated signal, and the trade-off between accuracy in time and accuracy in frequency. If you don't see them clearly, set the measure to "magnitude" in the Display tab. Click on the figure to move the time-frequency cursor and explore the two axes of the plane.
    • 2Hz: High frequency resolution but poor time resolution (supposed to start sharply at 2s)

    • 20Hz: Better time resolution but poorer frequency resolution (17-24Hz)

    • 50Hz: Continuous over the 6s - Frequency resolution gets even worse (40-60Hz). It looks discontinuous because this oscillation has the same amplitude as the white noise we added to the signal (weight 0.4, relative to the 2Hz oscillation).

  • Current frequency: Slider that shows the current frequency selected in all the figures.
    Just like the time, the frequency selection is centralized and managed by one control only for all the figures. As a consequence, it is impossible to display TF files with different frequency definitions at the same time. This can be perceived as an annoying limitation, but it allows all the simultaneous displays to be consistent at any time and makes the interface more intuitive to manipulate, with lower risks of mistakes in the interpretation of the different figures.

  • List of signals: This drop-down list shows the signal currently displayed in the selected figure. In this case, there is only one channel of data called "s1". It will be more useful later.

  • Hide edge effects: When this option is selected, the time-frequency coefficients that could not be properly estimated because of a lack of time samples are hidden. It allows you to see only the information that is really reliable. The lower the frequency, the longer the edge effects. In the screen capture below, the colormap has been changed to "jet" and the maximum set manually to 0.2 (measure=power).

    example_hide.gif

  • Smooth display: Re-interpolates the time-frequency maps on a finer grid to produce nicer plots.

  • Measure: Type of measure that is currently represented in the selected figure. The entries that are enabled depend on the type of data that is saved in the file. In this case, we saved directly the power of the wavelet coefficients in the file, we discarded the angle/phase information, so the "phase" option is disabled. The other options are: Magnitude = sqrt(power), Log = 10*log10(power)

  • Colormap: As explained in the previous tutorials, you can change the colormap by clicking+moving on the colorbar on the right of the figure. Double-click on the colorbar to restore the defaults.

Display: Mouse and keyboard shortcuts

Mouse shortcuts

  • Left-click: Selection of current time and frequency.

  • Left-click + move: Select a time/frequency range. The legends of the X/Y axis show the selection.

    example_select.gif

  • Mouse wheel: Zoom in time, centered on the current time cursor.

  • Control + mouse wheel: Zoom in frequencies, centered on the current frequency cursor.

  • Right-click + move, or Control + left-click + move: Move in the zoomed image.

  • Double-click: Restore initial view.

Keyboard shortcuts:

  • Left/right arrows: Change the current time.

  • Page-up/page-down: Change the current time, 10 time samples at a time.

  • Up/down arrows: Change the sensor displayed in this figure.

  • Control + up/down arrows: Change the current frequency.

  • Enter: View the original time series for this sensor.

  • Control + R: View the original MEG recordings.

  • Control + T: View the time-frequency 2D topography.

  • Control + I: Save as image.

  • Control + D: Dock figure in the Matlab environment.

Figure popup menu

  • example_popup.gif

  • Set selection manually: Does the same thing as drawing a time/freq selection square on a figure, but by typing the values for time and frequency manually.

  • Export to database: Save the selection for the displayed sensor in a new file in the database.

  • Export to file: Same as "Export to database", but the saved file is not registered in the database.

  • Export to Matlab: Same as "Export to database", but the output structure is sent to a variable in the Matlab base workspace instead of being saved to a file.

Display: Power spectrum and time series

Right-click on the file in the database or directly on the time-frequency figure to access these menus.

  • Power spectrum: For the current time, shows the power for all the frequencies.

  • Time series: For the current frequency, shows the power for all the time samples.

  • Example: Power spectrum density at 0.5s and power time series at 2Hz.
    We see the oscillation at 50Hz in the PSD plot, and the oscillation at 2Hz in the TS plot.

    example_psd1.gif

  • Example: Power spectrum density at 4s and power time series at 20Hz.
    We see all three oscillations in the PSD plot, and the oscillation at 20Hz in the TS plot.

    example_psd2.gif

  • Note that if you right-click on the file in the database explorer and then select one of these menus, it will show all the signals. If you right-click on an existing time-frequency figure, it will show only the selected signal. It doesn't make any difference here because there is only one signal, but it will with the MEG recordings.

Normalized time-frequency maps

The brain is always active, the MEG/EEG recordings are never flat: some oscillations are always present in the signals. Therefore we are often more interested in the transient changes in power at certain frequencies than in the actual power. A good way to observe these changes is to compute the deviation of the power with respect to a baseline.

There is another reason for which we are usually interested in standardizing the TF values. The power of the time-frequency coefficients is always lower in the higher frequencies than in the lower frequencies: the signal carries a lot less power in the fast oscillations than in the slow brain responses. This 1/f decrease in power is an observation that we already made with the power spectrum density in the filtering tutorial. If we represent the TF maps with a linear color scale, we will always see values close to zero in the higher frequency ranges. Normalizing each frequency separately with respect to a baseline helps obtain more readable TF maps.

No normalization

The values we were looking at were already normalized (checkbox "Spectral flattening" in the process options), but not with respect to a baseline. We will now compute the non-normalized power obtained with the Morlet wavelets and try the various options available for normalizing them.

  • In Process1, keep the simulated signal selected.
  • Run process: Frequency > Time-frequency (Morlet wavelets), No spectral flattening.

    example_process_none.gif

  • Double-click on the file. As expected, we only see the lower frequencies in this representation: the power of the 2Hz oscillation is a lot larger than the power at 20Hz or 60Hz.

    example_display_none.gif

Spectrum normalization

  • In Process1: Select this new non-normalized file "Power,1-60Hz".
  • Run process: Standardize > Spectrum normalization, Method=1/f compensation.

    norm_multiply.gif

  • This produces exactly the same results as previously (option "Spectral flattening" in the time-frequency process). It multiplies the power at each frequency bin by the frequency value (eg. multiplies the power at 20Hz by 20), in order to correct for the 1/f shape we observe in the power spectrum. This works well for the lower part of the spectrum and up to 60-80Hz, but past this range it tends to overcompensate the higher frequencies.
    Note that it does not do any form of baseline correction: the 50Hz oscillation is visible everywhere.
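
A minimal sketch of this 1/f compensation, assuming a power map TFpower and its frequency vector Freqs (hypothetical variable names):

    % Multiply the power at each frequency bin by the frequency value
    % TFpower : [Nsignals x Ntime x Nfreq],  Freqs : [1 x Nfreq]
    TFflat = TFpower;
    for iFreq = 1:length(Freqs)
        TFflat(:,:,iFreq) = TFpower(:,:,iFreq) * Freqs(iFreq);   % eg. x20 at 20Hz
    end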

Baseline normalization

  • The second way to proceed is to normalize the power with respect to its average level during a reference time period. We can consider the oscillations at 2Hz and 20Hz as our events of interest, and the 50Hz as noise we want to get rid of. The segment from 0 to 2 seconds does not contain any of the signals of interest, therefore we can consider it as a baseline.

  • However, we will not be able to use the full segment [0,2]s because of the edge effects we described at the beginning of this tutorial. The time-frequency map at 2Hz with the display option "Hide edge effects" (left figure below) shows that the power could not be estimated correctly before 0.75s, therefore we shouldn't use it as a baseline. The power time series at 2Hz (right) shows that the power related with the 2Hz oscillation starts to increase significantly after 1.25s, therefore it's not really a "baseline" anymore. This leaves only the segment [0.75,1.25]s available.

    norm_baseline_2hz.gif

  • At 20Hz, the expected transient effects are only 75ms long, therefore we could use a much longer baseline if we were not interested in the lower frequencies: [0.075, 1.925]s.

    norm_baseline_20hz.gif

  • In this case, we have a very long "resting" time segment (2s), therefore the edge effects are not a problem for picking a time window for the baseline normalization. We will use the first time window mentioned, [0.75,1.25]s as it is long enough (500ms) to estimate the baseline mean power. In real-life cases, with shorter epochs, it is sometimes difficult to find an acceptable trade-off between the baseline duration and the exclusion of the edge effects, especially for the lower frequencies.

  • Run process: Standardize > Baseline normalization, Baseline=[0.75, 1.25]s.
    Method=Event-related perturbation: ERS/ERD stands for "event-related synchronization / desynchronization", a widely used normalization measure for time-frequency power maps. It evaluates the deviation from the mean over the baseline, in percent: (x-mean)/mean*100. A minimal sketch of this computation is given after this list.

    norm_ersd_process.gif

  • Double-click on the new "ersd" file. The colormap type changed from "Timefreq" to "Stat2", which uses by default the "rwb" color set and shows relative values. Indeed, the ERS/ERD values can be positive or negative: the power at a given time sample can be higher or lower than the average level during the baseline. In the simple simulation we used, there is no power decrease at any frequency after 2s, so the strong values are mostly positive. However, if you look in the file, you would see that there are many small negative values (due to the random noise we added).

    norm_ersd_file.gif

  • Note that the 50Hz disappeared because it was present during the baseline, while the 2Hz and 20Hz oscillations show high positive values (between 100% and 2000% increase from baseline).

  • Remember to select your baseline very carefully according to the frequency range you are interested in. See below examples obtained with different baselines: 0.075-1.925s, 0-2s, 0-6s.
    Change the colormap to "jet" if you prefer, and adjust the colormap contrast as needed.

    norm_ersd_compare.gif

  • This video may help you better understand the implications of the baseline selection: http://www.mikexcohen.com/lecturelets/whichbaseline/whichbaseline.html
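
As announced above, here is a minimal sketch of the ERS/ERD computation, with hypothetical variable names (TFpower for the power map, Time for its time vector):

    % ERS/ERD baseline normalization: (x - mean) / mean * 100
    % TFpower : [Nsignals x Ntime x Nfreq] power values,  Time : [1 x Ntime] in seconds
    iBaseline = (Time >= 0.75) & (Time <= 1.25);            % Baseline [0.75, 1.25]s
    meanBase  = mean(TFpower(:,iBaseline,:), 2);            % Mean power over the baseline
    TFersd    = bsxfun(@minus, TFpower, meanBase);
    TFersd    = bsxfun(@rdivide, TFersd, meanBase) * 100;   % Deviation from baseline, in percent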

Advanced

Tuning the wavelet parameters

Time resolution

You can adjust the relative time and frequency resolution of your wavelet transformation by adjusting the parameters of the mother wavelet in the options of the process.

  • Increasing the option "time resolution" will produce longer wavelets at a given frequency, hence increase the frequency accuracy (lower Δf) and decrease the time accuracy (higher Δt). Expect longer edge effects.

  • Decreasing the option "time resolution" will produce shorter wavelets at a given frequency, hence decrease the frequency accuracy (higher Δf) and increase the time accuracy (lower Δt). Expect shorter edge effects.

  • You can modify one or the other parameter; what is important is the product of the two. All the following combinations fc/FWHM_t produce the same results because their product is constant: (1Hz,3s), (3Hz,1s), (6Hz,0.5s), (60Hz,0.05s)

    tf_resolution_process.gif

Examples for a constant central frequency of 1Hz with various time resolutions: 1.5s, 4s, 10s.

  • tf_resolution_compare.gif

Frequency axis

You can also obtain very different representations of the data by changing the list of frequencies for which you estimate the power. You can change this in the options of the process.

  • tf_freqlist_process.gif

Examples: Log 1:20:150, Log 1:300:150, Linear 15:0.1:25

  • tf_freqlist_compare.gif

Advanced

Hilbert transform

We can repeat the same analysis with the other approach available for exploring the simulated signal in the time-frequency plane. The process "Frequency > Hilbert transform" first filters the signals in various frequency bands with a band-pass filter, then computes the Hilbert transform of the filtered signal. The magnitude of the Hilbert transform of a narrow-band signal is a measure of the envelope of this signal, and therefore gives an indication of the activity in this frequency band.

  • slide_hilbert.gif
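
The principle can be sketched in a few lines of Matlab for one frequency band. The band-pass filter below (bandpass, Signal Processing Toolbox) is only a stand-in: Brainstorm uses its own filters, described in the "Method specifications" section below.

    % Envelope of one frequency band via the Hilbert transform (illustration only)
    % x : input signal, vector of samples at the sampling frequency fs (hypothetical)
    fs = 1000;                                  % Sampling frequency (Hz)
    xband = bandpass(x, [8 12], fs);            % Band-pass filter in the alpha band (stand-in filter)
    envPower = abs(hilbert(xband)).^2;          % Power of the analytic signal = squared envelope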

No normalization

Let's compute the same three results as before: non-normalized, spectral flattening, baseline normalization.

  • In Process1, select the simulated signal.
  • Run process: Frequency > Hilbert transform, No spectral flattening, Do not mirror.

    hilbert_process.gif

  • In the advanced options panel, keep the default options: Default frequency bands and Power.

    hilbert_options.gif

  • Double-click on the new file. The figure now has only 6 rows, one for each frequency band.

    hilbert_file.gif

  • The non-normalized results are already easy to interpret:
    • delta (2-4Hz): Includes the 2Hz oscillation, contribution starts at 2s

    • beta (15-29Hz): Includes the 20Hz oscillation, contribution starts at 2s

    • gamma1(30-59Hz): Includes the 50Hz oscillation, contribution starts at the beginning (0s)

  • Right-click on the file or figure > Time series. Example for delta and beta.

    hilbert_ts.gif

Normalization

  • In Process1, select the non-normalized Hilbert-based decomposition.
  • Run process: Standardize > Spectrum normalization, Method=1/f compensation.

  • Run process: Standardize > Baseline normalization, Baseline=[0.75, 1.25]s, Method=ERS/ERD.

  • Display the two normalized files side by side.

    hilbert_normalized.gif

Method specifications

  • Band-pass filters: Same filters as in the process "Pre-process > Band-pass filter", with the option "Stop-band attenuation: 60dB". For details, see the tutorial Power spectrum and frequency filters.

  • Edge effects: To estimate the duration of the transient effects for each frequency band, select the process "Band-pass filter", enter the frequency band of interest and click "View filter response". Example for the alpha band:

    hilbert_filter_alpha.gif

  • Hilbert transformation: Using Matlab's hilbert() function.

  • Extraction of the envelope: Power of the complex Hilbert transform, abs(hilbert(x))^2.

MEG recordings: Single trials

Let's go back to our auditory oddball paradigm and apply the concepts to MEG recordings. We will use all the trials available for one condition to estimate the average time-frequency decomposition.

Spectrum normalization

  • In Process1, select all the deviant trials in Run#01.

  • Run process: Frequency > Time-frequency (Morlet wavelets), No spectral flattening.
    In the advanced options, select: Log 1:40:150, Power, Save average time-frequency maps.

    tf_trials_process.gif tf_trials_options.gif

  • Save individual TF maps: This option stops the computation here and saves in the database one time-frequency file for each input file (40 files), with one TF map for each sensor.

  • Save average TF maps: Instead of saving the TF for each file separately, it automatically computes the average of the power of all the TF. This is a good choice if you do not plan to use independently all the TF files, because it saves a lot of time and disk space.

  • Remove evoked response from each trial before computing TF: This option first computes the average of all the trials, then subtracts this average from each trial before computing the time-frequency decomposition. This brings the signals to a slightly more stationary state, which may help for the evaluation of the frequency contents.

Baseline normalization

  • Double-click on the new file Avg,Power,1-150Hz (MEG). Select "Hide edge effects".
    In the drop-down list, select sensor MLP56 (the one with the strongest response at 90ms).
    Right-click on the TF figure > Time series.

    tf_trials_evaluate.gif

  • Defining a baseline is now a lot trickier than with the 6s-long simulated signal. The epochs are only 600ms long, and the power at many frequencies could not be estimated correctly. If we want all the values to be "good" after 0s, we cannot use anything below 15Hz. If we want to normalize the values, we have to go even higher: 30Hz if we want a baseline of 50ms before 0.

  • The epochs we use in this tutorial are too short to perform a correct time-frequency analysis. We should have imported at least 200ms more on each side, just for controlling the edge effects. You should always think carefully about the length of the epochs you import in your database if you are planning to run any form of frequency or time-frequency analysis.

  • For the purpose of illustrating the tools available in Brainstorm, we will keep on working with these short epochs. Let's try to do the best we can with what we have here. We could use a baseline of 50ms to get a correct estimation above 30Hz, but this is probably a bit too short. We propose to include some more of the baseline (75ms), hoping there are no major edge effects in this segment.
  • In Process1, select the average time-frequency file.
  • Run process: Standardize > Baseline normalization, Baseline=[-75, 0]ms, Method=ERS/ERD.

    tf_trials_normalize.gif

  • The new menus available to display this file are described in the next section.

    tf_trials_popup.gif

Things to avoid

  • Avoid computing the time-frequency decomposition of the average of the trials: you would miss some of the induced response, the brain activity in higher frequencies that is not strictly time-locked to the stimulus and not aligned in phase across trials. Always prefer the computation of the average of the time-frequency power maps of each trial, as we did here.
    This is well documented in: Bertrand O, Tallon-Baudry C (2000).

  • Avoid using the Hilbert transform approach on short recordings or averages; always use the wavelet approach in these cases. The band-pass filters used for the lower frequency bands may have very high orders, leading to long transients. The example below shows the expected transients for the default frequency bands using the process "Frequency > Hilbert transform": they can be much more problematic than with the process "Frequency > Time-frequency (Morlet wavelets)".

    hilbert_trials_transients.gif

Display: All channels

Three menus display the time-frequency of all the sensors with different spatial organizations. All the figures below represent the ERS/ERD-normalized average. They use the "jet" colormap, which is not the default configuration for these files. To get the same displays, change the colormap configuration:
right-click on the figure > Colormap: Stat2 > Colormap > jet.

  • All channels: All the maps are displayed one after the other, in the order they are saved in the file.

    tf_allchannels.gif

  • 2D Layout (maps): Show each TF map where the sensor is located on a flattened 2D map. Most display options are available, such as the colormap management and the option "Hide edge effects".

    tf_2dlayout_maps.gif tf_2dlayout_hide.gif

  • 2D Layout (no overlap): Similar to the previous display, but the positions of the images are reorganized so that they do not overlap.

    tf_2dlayout_nooverlap.gif

  • Image [channel x time]: Shows the values of all the sensors over time for one frequency.

    tf_image.gif

Useful shortcuts for the first three figures:

  • Click: Clicking on any small TF image opens a new figure with only the selected sensor.

  • Shift + click: Opens the original recordings time series of the selected sensor, when available. Here, we display an average of time-frequency maps, so this menu has no effect.

  • Mouse wheel: Zoom in/out.

  • Right click + move: Move in a zoomed figure.

Display: Topography

The menus below show the distribution of TF power over the sensors, for one time point and one frequency bin, very similarly to what was introduced in tutorial Visual exploration.

  • 2D Sensor cap / 2D Disc / 3D Sensor cap: 175ms, 8Hz

    tf_topo.gif

  • 2D Layout: 8Hz (black), 35Hz (white)

    tf_2dlayout8.gif tf_2dlayout35.gif

Useful shortcuts for these figures:

  • Left/right arrows: Change the current time.

  • Up/down arrows: Change the current frequency.

  • Control + E: Display the sensors markers/names.

  • Shift + click on a sensor: Display the time-frequency decomposition for that specific sensor.

  • Right click + move: Select a group of sensors.

  • Shift + scroll: Change the gain of the time series (2D Layout).

  • Control + scroll: Change the length of the window displayed around the current time (2D Layout).

Advanced

Scouts

Similar calculations can be done at the level of the sources, either on the full cortex surface or on a limited number of regions of interest. We will start with the latter as it is usually the easier approach.

  • Drag and drop all the deviant trials from both runs, select [Process sources].

  • Run process "Frequency > Time-frequency (Morlet wavelets)".
    Select the option "Use scouts" and select all the scouts defined in the previous tutorial.

    scouts_process.gif

  • In the advanced options, select "Scout function: After" and "Output: Save average".
    Run the process (it may take a while).

    scouts_options.gif

  • The scout function was introduced in the previous tutorial. It is the method we use to group the time series for the 20 dipoles we have in each scout into one unique signal. When computing the TF of one scout, we have the choice between applying this function before or after the time-frequency decomposition itself.

    • Before: Extract the 20 source signals, apply the scout function to get one signal, run the TF decomposition of this signal. This is faster but may lead to information loss.

    • After: Extract the 20 source signals, run the TF decomposition of the 20 signals, apply the scout function on the power of the TF maps. Always prefer this option when possible.

  • Rename the new file to add a tag "Deviant" in it. Then right-click > Time-freq: All scouts.

    scouts_file.gif

  • In Process1, select the new average TF file.
  • Run process: Standardize > Baseline normalization, Baseline=[-75, 0]ms, Method=ERS/ERD.

    scouts_ersd.gif

Advanced

Full cortical maps

Computing the time-frequency decomposition for all the sources of the cortex surface is possible but complicated, because it can easily generate gigantic files, completely out of the reach of most computers. For instance, the full TF matrix for each trial we have here would be [Nsources x Ntimes x Nfrequencies] = [15000 x 361 x 40] double-complex = 3.2 GB!

We have two ways of working around this issue: computing the TF decomposition for fewer frequency bins or frequency bands at a time, or, as we did previously, using only a limited number of regions of interest.

  • In Process1, keep all the deviant trials from both runs selected, select [Process sources].

  • Run process "Frequency > Hilbert transform", No spectral flattening, Mirror signal before.
    To process the entire brain, do not select the option "Use scouts".

    sources_process.gif

  • In the advanced options, set "Optimize storage: No" (this option is not available when computing the average of multiple trials on the fly). Save the power, save the average Hilbert maps.

    sources_options.gif

  • Optimize the storage of the time-frequency file: Let's describe this option in more detail.

    • When computing the TF decomposition of a source file, we are actually applying sequentially two linear transformations to the original recordings: the TF analysis and the source inversion. These two processes can be permuted: TF(Inverse(Recordings)) = Inverse(TF(Recordings)).
    • Therefore we can optimize the TF computation time by applying the wavelet transformation only to the sensor recordings, and then multiplying the wavelet complex coefficients by the inverse operator (ImagingKernel). This trick is always used in the computation of the Hilbert and Morlet transforms.

    • When we have the option to save the complex values (constrained sources and no averaging), this can also be used to optimize the storage of the files. In these cases, we save only the wavelet transformation of the sensor data. Later, when the file is loaded for display, the imaging kernel is applied on the fly. This can be disabled explicitly with this option.
  • Rename the new Hilbert file to include the tag "Deviant", and select it in Process1.
  • Run process: Standardize > Baseline normalization, Baseline=[-75, 0]ms, Method=ERS/ERD.

  • Right-click on the Hilbert file > Display on cortex.
    The frequency slider now shows frequency bands ("alpha:8-12Hz") instead of frequencies ("12Hz"). You can explore the source activity in time and frequency dimensions. The screen capture below shows the activity at 175ms: a 60% increase in the alpha band around the auditory cortex and a 20% decrease in the beta band around the motor cortex.

    sources_display.gif

  • Shift + click on the cortex surface: Displays the TF decomposition of the selected source.

  • Right-click on the brain: Selects the closest vertex and displays the popup menu at the same time. The first three menus are relative to the source that was just clicked.

    sources_vertex.gif

Advanced

Unconstrained sources

In the current example, we are working with the simple case: sources with constrained orientations. The unconstrained case is more difficult to deal with, because we have to handle correctly the three orientations we have at each vertex.

  • Full cortex: Compute the TF decomposition for all the sources (3*15000=45000), then sum at each location the power of the three orientations.

  • Scouts: Option "Scout function" in the process.

    • Before: Extract the 20*3=60 source signals, apply the scout function to get three signals (one per orientation), run the TF decomposition of the three signals, and finally sum the power of the three TF maps. This is faster but may lose some frequency resolution (especially for constrained sources).

    • After: Extract the 20*3=60 source signals, run the TF decomposition of the 60 signals, apply the scout function on the power of the TF maps for each orientation separately, and finally sum the power obtained for the three orientations.

  • The storage optimization option is not available with unconstrained sources.

Advanced

Getting rid of the edge effects

To avoid making mistakes in the manipulation of the data and producing more readable figures, we encourage you to cut out the edge effects from your time frequency maps after computation.

  • In Process1, select the very first file computed in this tutorial: Test/Simulation/Power,1-60Hz | multiply
  • Run the process: "Extract > Extract time", Time window = [0.75, 5.25]s

  • Open the new file, select the option "Hide edge effects": Almost everything left in this new file is correctly estimated. Brainstorm keeps track of the edge effects in the TFmask field of the file.

    tf_cut.gif

  • We recommend you do the same when epoching your recordings: import trials that are longer than necessary, and after the time-frequency estimation, remove the unnecessary segments.

Advanced

On the hard drive

Right click on the first TF file we computed > File > View file contents.

  • tf_contents.gif

Structure of the time-frequency files: timefreq_*.mat

  • TF: [Nsignals x Ntime x Nfreq] matrix containing all the values of the time-frequency decomposition (complex wavelet coefficients, or double values for power/magnitude/Z-score).

  • TFmask: [Nfreq x Ntime] logical mask indicating the edge effects (0=edge, 1=valid value).

  • Std: [Nsignals x Ntime x Nfreq] standard deviation if this file is an average.

  • Comment: String displayed in the database explorer to represent the file.

  • DataType: The kind of data this file was computed from: {'data', 'results', 'scout', 'matrix'}

  • Time: [1 x Ntime] Time vector used to estimate this file.

  • TimeBands: [Ntimebands x 3] Cell array where each line represents a time band:
    {'band_name', 'time definition', 'function'}

  • Freqs: For regular frequency binning: vector containing all the frequencies.
    If using frequency bands: [Nfreqbands x 3] cell array, where each line represents a frequency band {'band_name', 'frequency definition', 'function'}

  • RefRowNames: Used only for connectivity matrices.

  • RowNames: [1 x Nsignals] Cell array of strings that describes each row of the TF matrix. In this specific case, it would be the list of all the MEG sensor names. But it could also be a list of names of scouts or clusters.

  • Measure: Contains the name of the function that was applied right after the computation of the wavelet coefficients. So it represents the type of data contained in the TF matrix. Possible values:

    • none: No measure applied, TF contains the complex wavelet coefficients.

    • power: Power for each frequency, ie. the square of the amplitude: abs(coefficients)^2

    • magnitude: abs(coefficients)

    • log: 10 * log10(abs(coefficients)^2)

    • phase: angle(coefficients)

  • Method: String that identifies the process that generated the file:
    {'morlet', 'fft', 'psd', 'hilbert', 'corr', 'cohere', 'granger', 'plv'}

  • DataFile: Initial file from which this file was computed. In the database explorer, the TF file will be shown as a child of this DataFile file.

  • SurfaceFile / GridLoc / GridAtlas: Source space that was used, only for source files.

  • Leff: Effective number of averages = Number of trials that were averaged to obtain this file.

  • ColormapType: String, force a specific colormap type to be used when displaying this file.

  • DisplayUnits: String, force to use specific units when displaying this file.

  • Options: Options that were selected in the time-frequency options window.

  • History: List of operations performed on this file (menu File > View file history).

Useful functions

  • in_bst_timefreq(TimefreqFile): Read a time-frequency file.

  • in_bst(FileName, TimeWindow): Read any Brainstorm file with the possibility to load only a specific part of the file. "TimeWindow" is a range of time values in seconds: [tStart, tStop].

  • bst_process('LoadInputFile', FileName, Target, TimeWindow): The most high-level function for reading data files. "Target" is a string with the list of sensor names or types to load.

  • morlet_transform(): Applies complex Morlet wavelet transform to the time series in input.
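
For example, here is a minimal sketch of reading one of these files and displaying the map of its first signal, assuming a power measure and a regular frequency vector (the file name is hypothetical):

    TfMat = in_bst_timefreq('timefreq_morlet_example.mat');   % Hypothetical file name
    tfMap = squeeze(TfMat.TF(1,:,:))';                         % [Nfreq x Ntime] map for signal #1
    imagesc(TfMat.Time, TfMat.Freqs, tfMap);
    axis xy;  xlabel('Time (s)');  ylabel('Frequency (Hz)');
    title(TfMat.RowNames{1});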

Advanced

Additional documentation

Articles

Forum discussions



Tutorial 26: Statistics

Authors: Francois Tadel, Elizabeth Bock, Dimitrios Pantazis, Richard Leahy, Sylvain Baillet

In this auditory oddball experiment, we would like to test for significant differences between the brain responses to the deviant beeps and the standard beeps, time sample by time sample. Until now we have been computing measures of the brain activity in the time or time-frequency domain. We were able to see clear effects or slight tendencies, but these observations were always dependent on an arbitrary amplitude threshold and the configuration of the colormap. With appropriate statistical tests, we can go beyond these empirical observations and assess the significant effects in a more formal way.

Random variables

In most cases we are interested in comparing the brain signals recorded for two populations or two experimental conditions A and B.

A and B are two random variables for which we have a limited number of repeated measures: multiple trials in the case of a single subject study, or multiple subject averages in the case of a group analysis. To start with, we will consider that each time sample and each signal (source or sensor) is independent: a random variable represents the possible measures for one sensor/source at one specific time point.

A random variable can be described by its probability distribution: a function that indicates the chances of obtaining one specific measure if we run the experiment. By repeating the same experiment many times, we can approximate this function with a discrete histogram of observed measures.

stat_distribution.gif stat_histogram.gif

Histograms

You can plot histograms like this one in Brainstorm, it may help you understand what you can expect from the statistics functions described in the rest of this tutorial. For instance, seeing a histogram computed with only 4 values would discourage you forever from running a group analysis with 4 subjects...

Recordings

  • Let's evaluate the recordings we obtained for sensor MLP57, the channel that was showing the highest value at 160ms in the difference of averages computed in the previous tutorial.

    histogram_channel.gif

  • We are going to extract only one value for each trial we have imported in the database, and save these values in two separate files, one for each condition (standard and deviant).
    In order to observe more meaningful effects, we will process the trials together from the two acquisition runs. As explained in the previous tutorials (link), this is usually not recommended in MEG analysis, but it can be an acceptable approximation if the subject didn't move between runs.

  • In Process1, select all the deviant trials from both runs.
    Run process Extract > Extract values:
    Options: Time=[160,160]ms, Sensor="MLP57", Concatenate time (dimension 2)

    histogram_extract_process.gif

  • Repeat the same operation for all the standard trials.

  • You obtain two new files in the folder Intra-subject. If you look inside the files, you can observe that the size of the Value matrix matches the number of trials (78 for deviant, 383 for standard). The matrix is [1 x Ntrials] because we asked to concatenate the extracted values in the 2nd dimension.

    histogram_extract_files.gif

  • To display the distribution of the values in these two files:
    select them simultaneously, right-click > File > View histograms.

    histogram_sensor1.gif

  • With the buttons in the toolbar, you can edit the way these distributions are represented: number of bins in the histogram, total number of occurrences (shows taller bars for standard because it has more values) or density of probability (normalized by the total number of values).
    In addition, you can plot the normal distribution corresponding to the mean μ and standard deviation σ computed from the set of values (using Matlab functions mean and std).

  • When comparing two sample sets A and B, we try to evaluate if the distributions of the measures are equal or not. In most of the questions we explore in EEG/MEG analysis, the distributions are overlapping a lot. The very sparse sampling of the data (a few tens or hundreds of repeated measures) doesn't help with the task. Some representations will be more convincing than others to estimate the differences between the two conditions.

    histogram_sensor2.gif

  • The legend of the histograms shows the result of the Shapiro-Wilk normality test, as implemented by Ahmed BenSaïda (Matlab FileExchange). The button "Q-Q plots" gives another way to compare the current samples to the normal distribution (see: Wikipedia, Matlab FileExchange).

    qqplot.gif

  • Everything seems to indicate that the values recorded on the sensor MLP57 at 160ms follow a normal distribution, in both conditions.

Sources (relative)

  • We can repeat the same operation at the source level and extract all the values for scout A1L.

  • In Process1, select all the deviant trials from both runs. Select button [Process sources].
    Run process Extract > Extract values: Time=[160,160]ms, Scout="A1L"

    histogram_scout1.gif

  • Repeat the same operation for all the standard trials.

  • Select the two files > Right-click > File > View histogram.

    histogram_scout2.gif

  • The distributions still look normal, but the variances are now slightly different. You have to pay attention to this information when choosing which parametric t-test to run.

Sources (absolute)

  • Run again the process Extract > Extract values, but this time select Compute absolute values.

  • Display the histograms of the two rectified files.

    histogram_scout3.gif

  • The rectified source values clearly do not follow a normal distribution: the shape of the histogram has little in common with the corresponding Gaussian curves. As a consequence, if you are using rectified source maps, you will not be able to run independent parametric t-tests.
    Additionally, you may have issues with the detection of some effects (see tutorial Difference).

Time-frequency

  • Time-frequency power for sensor MLP57 at 55ms / 48Hz (left=no normalization, right=ERS/ERD):

    histogram_tf1.gif

  • These sample sets are clearly not normally distributed. Parametric t-tests don't look like good candidates for testing time-frequency power across trials.

Group studies and central limit theorem

  • The observations above hold only for the specific case we are looking at: single subject analysis, testing for differences across trials.

  • In the context of a group analysis, we usually test subject averages between conditions or populations. This corresponds to comparing the distributions of the mean of the trials across subjects, which will tend to be normal when we increase the number of trials (Central-limit theorem). In general, it is easier to obtain sample sets with normal distributions at the group level.

  • Additionally, some tricks can help bring the samples closer to a normal distribution, such as averaging in time/space/frequency, or, in the case of time-frequency power, testing the square root of the data. Some solutions are explored in (Kiebel, Tallon-Baudry & Friston, HBM 2005).
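
The effect of the central-limit theorem is easy to reproduce numerically. The sketch below (random values, unrelated to the dataset) compares the clearly skewed distribution of single rectified values with the distribution of means of 20 such values, which is already much closer to a normal curve:

single_values = abs(randn(1, 10000));            % rectified values: clearly not normal
means_of_20   = mean(abs(randn(20, 10000)), 1);  % means of 20 values, repeated 10000 times
subplot(1,2,1); histogram(single_values, 50); title('Single rectified values');
subplot(1,2,2); histogram(means_of_20, 50);   title('Means of 20 values');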

Statistical inference

Hypothesis testing

To show that there is a difference between A and B, we can use a statistical hypothesis test. We start by assuming that the two sets are identical (the null hypothesis), then try to reject this hypothesis based on the observed data. For all the tests we will use here, the logic is similar:

  • Define a null hypothesis (H0:"A=B") and an alternative hypothesis (eg. H1:"A<B").

  • Make some assumptions on the samples we have (eg. A and B are independent, A and B follow normal distributions, A and B have equal variances).
  • Decide which test is appropriate, and state the relevant test statistic T (eg. Student t-test).

  • Compute from the measures (Aobs, Bobs) the observed value of the test statistic (tobs).

  • Calculate the p-value. This is the probability, under the null hypothesis, of sampling a test statistic at least as extreme as that which was observed. A value of (p<0.05) for the null hypothesis has to be interpreted as follows: "If the null hypothesis is true, the chance that we find a test statistic as extreme or more extreme than the one observed is less than 5%".

  • Reject the null hypothesis if and only if the p-value is less than the significance level threshold (α).

    stat_tails.gif

Evaluation of a test

The quality of a test can be evaluated based on two criteria:

  • Sensitivity: True positive rate = power = ability to correctly reject the null hypothesis and control for the false negative rate (type II error rate). A very sensitive test detects a lot of significant effects, but also produces a lot of false positives.

  • Specificity: True negative rate = ability to correctly accept the null hypothesis and control for the false positive rate (type I error rate). A very specific test detects only the effects that are clearly non-ambiguous, but can be too conservative and miss a lot of the effects of interest.

    stat_errors.gif

Different categories of tests

Two families of tests can be helpful in our case: parametric and nonparametric tests.

  • Parametric tests rely on strong assumptions about the probability distributions of A and B, and then use well-known properties of these distributions to compare them, based on a few simple parameters (typically the mean and variance). The estimation of these parameters is fast and requires very little memory. The examples described here are the Student's t-tests.

  • Nonparametric tests do not require any assumption on the distribution of the data. They are therefore more reliable and more generic. On the other hand, they are much more demanding to run: they require a lot more memory because all the tested data has to be loaded at once, and a lot more computation time because the same test is repeated thousands of times.

Parametric Student's t-test

Assumptions

The Student's t-test is a widely-used parametric test to evaluate the difference between the means of two random variables (two-sample test), or between the mean of one variable and one known value (one-sample test). If the assumptions are correct, the t-statistic follows a Student's t-distribution.

  • stat_tvalue.gif

The main assumption for using a t-test is that the random variables involved follow a normal distribution (mean: μ, standard deviation: σ). The figure below shows a few examples of normal distributions.

  • stat_distrib_normal.gif $$ f(x \; | \; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi} } \; e^{ -\frac{(x-\mu)^2}{2\sigma^2} } $$

t-statistic

Depending on the type of data we are testing, we can have different variants for this test:

  • One-sample t-test (testing against a known mean μ0):
    $$ t = \frac{\overline{x} - \mu_0}{\sigma/\sqrt{n}} \hspace{40pt} \mathrm{d.f.} = n - 1 $$
    where $$\overline{x}$$ is the sample mean, σ is the sample standard deviation and n is the sample size.

  • Dependent t-test for paired samples (eg. when testing two conditions across a group of subjects). Equivalent to testing the difference of the pairs of samples against zero with a one-sample t-test:
    $$ t = \frac{\overline{X}_D}{\sigma_D/\sqrt{n}} \hspace{40pt} \mathrm{d.f.} = n - 1 $$
    where D=A-B, $$\overline{X}_D$$ is the average of D and σD its standard deviation.

  • Independent two-sample test, equal variance (equal or unequal sample sizes):
    $$ t = \frac{\bar{X}_A - \bar{X}_B}{s_{X_AX_B} \cdot \sqrt{\frac{1}{n_A}+\frac{1}{n_B}}} \hspace{40pt} s_{X_AX_B} = \sqrt{\frac{(n_A-1)s_{X_A}^2+(n_B-1)s_{X_B}^2}{n_A+n_B-2}} \hspace{40pt} \mathrm{d.f.} = n_A + n_B - 2 $$
    where $$s_{X_A}^2$$ and $$s_{X_B}^2$$ are the unbiased estimators of the variances of the two samples.

  • Independent two-sample test, unequal variance (Welch's t-test):
    $$t = {\overline{X}_A - \overline{X}_B \over s_{\overline{X}_A - \overline{X}_B}} \hspace{40pt}s_{\overline{X}_A - \overline{X}_B} = \sqrt{{s_A^2 \over n_A} + {s_B^2  \over n_B}} \hspace{40pt} \mathrm{d.f.} = \frac{(s_A^2/n_A + s_B^2/n_B)^2}{(s_A^2/n_A)^2/(n_A-1) + (s_B^2/n_B)^2/(n_B-1)} $$
    where $$n_A$$ and $$n_B$$ are the sample sizes of A and B.
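
These formulas translate directly into Matlab. The sketch below computes the equal-variance and Welch versions of the independent two-sample t-statistic for two hypothetical vectors A and B; it is only an illustration of the formulas above, not the Brainstorm implementation (see process_test_parametric2.m for that).

A = randn(1, 78);           % hypothetical sample A (e.g. one value per deviant trial)
B = randn(1, 383) + 0.1;    % hypothetical sample B
nA = numel(A);  nB = numel(B);
% Equal variance: pooled standard deviation
sp = sqrt(((nA-1)*var(A) + (nB-1)*var(B)) / (nA + nB - 2));
t_equal  = (mean(A) - mean(B)) / (sp * sqrt(1/nA + 1/nB));
df_equal = nA + nB - 2;
% Unequal variance (Welch)
t_welch  = (mean(A) - mean(B)) / sqrt(var(A)/nA + var(B)/nB);
df_welch = (var(A)/nA + var(B)/nB)^2 / ...
           ((var(A)/nA)^2/(nA-1) + (var(B)/nB)^2/(nB-1));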

p-value

Once the t-value is computed (tobs in the previous section), we can convert it to a p-value based on the known distributions of the t-statistic. This conversion depends on two factors: the number of degrees of freedom and the tails of the distribution we want to consider. For a two-tailed t-test, the following two Matlab commands are equivalent and convert t-values into p-values:

p = betainc(df./(df + t.^2), df/2, 0.5);    % Without the Statistics toolbox
p = 2*(1 - tcdf(abs(t),df));                % With the Statistics toolbox

The distribution of this function for different numbers of degrees of freedom:

  • ttest_tcdf.gif

Example 1: Parametric t-test on recordings

Parametric t-tests require the tested values to follow a normal distribution. The recordings evaluated in the histogram sections above (MLP57/160ms) match this assumption relatively well: the histograms follow the traces of the corresponding normal functions, and the two conditions have very similar variances. It looks reasonable to use a parametric t-test in this case.

  • In the Process2 tab, select the following files, from both runs (approximation discussed previously):

    • Files A: All the deviant trials, with the [Process recordings] button selected.

    • Files B: All the standard trials, with the [Process recordings] button selected.

    • The t-tests work well with unbalanced sample sizes: it is better to use all the samples you have, even if you have 80 trials in one condition and 400 in the other.

  • Run the process "Test > Parametric test: Independent": Select all the data, do not average.
    Sensor types: Leave this empty instead of entering "MEG", it won't affect the results but the computation will be faster (optimized when processing full files).
    Test: Student's t-test (equal variance), two-tailed.

    stat_ttest1_process.gif

  • Double-click on the new file and add a 2D topography (CTRL+T). The values displayed in the 2D view are the significant t-values. All the sensors that have p-values higher than the significance level threshold (α) are set to zero.

  • With the new Stat tab you can control the significance level α and the correction you want to apply for multiple comparisons (see next section).

  • With the option minimum duration, you can exclude from the display all the data points that are significant only for isolated time samples, and which are most likely false positives. If this parameter is set to zero, it has no impact on the display. Otherwise, all the data points that are not significant continuously for at least this duration are set to zero.

    stat_ttest1_file.gif

Correction for multiple comparisons

Multiple comparison problem

The approach described in this first example performs many tests simultaneously. We test, independently, each MEG sensor and each time sample across the trials, so we run a total of 274*361 = 98914 t-tests.

If we select a critical value of 0.05 (p<0.05), it means that we want to see what is significantly different between the conditions while accepting the risk of observing a false positive in 5% of the cases. If we run the test around 100,000 times, we can expect to observe around 5,000 false positives. We need to better control for false positives (type I errors) when dealing with multiple tests.

Bonferroni correction

The probability of observing at least one false positive, also called the familywise error rate (FWER), is almost 1:
FWER = 1 - prob(no significant results) = 1 - (1 - 0.05)^100000 ~ 1

A classical way to control the familywise error rate is to replace the p-value threshold with a corrected value, to enforce the expected FWER. The Bonferroni correction sets the significance cut-off at $$\alpha$$/Ntest. If we set (p ≤ $$\alpha$$/Ntest), then we have (FWER ≤ $$\alpha$$). Following the previous example:
FWER = 1 - prob(no significant results) = 1 - (1 - 0.05/100000)^100000 ~ 0.0488 < 0.05
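
You can reproduce these numbers with a few lines of Matlab (using 100,000 tests as an approximation of the 98,914 tests of the previous example):

Ntest = 100000;  alpha = 0.05;
fwer_uncorrected = 1 - (1 - alpha)^Ntest          % ~ 1
fwer_bonferroni  = 1 - (1 - alpha/Ntest)^Ntest    % ~ 0.0488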

This works well in a context where all the tests are strictly independent. However, in the case of MEG/EEG recordings, the tests have an important level of dependence: two adjacent sensors or time samples often have similar values. In the case of highly correlated tests, the Bonferroni correction tends to be too conservative, leading to a high rate of false negatives.

FDR correction

The false discovery rate (FDR) is another way of representing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. It is designed to control the expected proportion of false positives among the rejected tests, while the Bonferroni correction controls the probability of having at least one false positive. FDR-controlling procedures have greater power, at the cost of an increased rate of type I errors (Wikipedia).

In Brainstorm, we implement the Benjamini–Hochberg step-up procedure (1995):

  • Sort the p-values p(k) obtained across all the multiple tests (k=1..Ntest).

  • Find the largest k such that p(k) ≤ (k / Ntest) * $$\alpha$$.

  • Reject the null hypotheses corresponding to the first k smallest p-values.

  • This is the same procedure as implemented by Matlab's mafdr function (Bioinformatics Toolbox) with the 'BHFDR' option.
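
For reference, the procedure above can be written in a few lines of Matlab. This is only a minimal sketch of the Benjamini-Hochberg step-up procedure, not the actual Brainstorm code (see bst_stat_thresh.m for that):

function h = bh_fdr(p, alpha)
    % p: vector of uncorrected p-values, alpha: desired false discovery rate (e.g. 0.05)
    % h: logical vector, true for the tests where the null hypothesis is rejected
    [pSort, iSort] = sort(p(:));                               % sort the p-values in ascending order
    Ntest = numel(pSort);
    k = find(pSort <= (1:Ntest)' / Ntest * alpha, 1, 'last');  % largest k passing the criterion
    h = false(size(p));
    if ~isempty(k)
        h(iSort(1:k)) = true;                                  % reject the k smallest p-values
    end
end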

Note that there are different implementations of FDR. FieldTrip uses the Benjamini–Yekutieli (2001) algorithm as described in (Genovese, 2002), which is more conservative and usually gives fewer significant results. Don't be surprised if you get empty displays when using the option "FDR" in the FieldTrip processes, while you get significant results with the Brainstorm FDR correction.

In the interface

stat_mcp_gui.gif

You can interactively select the type of correction to apply for multiple comparisons while reviewing your statistical results. The checkboxes "Control over dims" let you select which dimensions are treated as multiple comparisons (1=sensor, 2=time, 3=not applicable here). If you select all the dimensions, all the values available in the file are considered as repetitions of the same test, and a single corrected p-threshold is computed for all the time samples and all the sensors.

If you select only the first dimension, only the values recorded at the same time sample are considered as repeated tests: the different time points are corrected independently, with a different corrected p-threshold for each time point.

When changing these options, a message is displayed in the Matlab command window, showing the number of repeated tests that are considered, and the corrected p-value threshold (or the average if there are multiple corrected p-thresholds, when not all the dimensions are selected):

BST> Average corrected p-threshold: 5.0549e-07  (Bonferroni, Ntests=98914)
BST> Average corrected p-threshold: 0.00440939  (FDR, Ntests=98914)

"It doesn't work"

If nothing appears significant after correction, don't start by blaming the method ("FDR doesn't work"). In the first place, it's probably because there is no clear difference between your sample sets or simply because your sample size is too small. For instance, with fewer than 10 subjects you cannot expect to observe very significant effects in your data.

If you have good reasons to think your observations are meaningful but cannot increase the sample size, consider reducing the number of multiple comparisons you perform (test only the average over a short time window, a few sensors or a region of interest) or using a cluster-based approach. When using permutation tests, increasing the number of random permutations also allows lower p-values to be reached for the very significant effects.

Nonparametric permutation tests

Principle

A permutation test (or randomization test) is a type of test in which the distribution of the test statistic under the null hypothesis is obtained by calculating all possible values of the test statistic under rearrangements of the labels on the observed data points. (Wikipedia)

If the null hypothesis is true, the two sets of tested values A and B follow the same distribution. Therefore the values are exchangeable between the two sets: we can move one value from set A to set B, and one value from B to A, and we expect to obtain the same value for the test statistic T.

  • stat_exchange.gif

By taking all the possible permutations between sets A and B and computing the statistic for each of them, we can build a histogram that approximates the permutation distribution.

  • stat_perm_distrib.gif

Then we compare the observed statistic with the permutation distribution. From the histogram, we calculate the proportion of permutations that resulted in a larger test statistic than the observed one. This proportion is called the p-value. If the p-value is smaller than the critical value (typically 0.05), we conclude that the data in the two experimental conditions are significantly different.

  • stat_perm_pvalue.gif

The number of possible permutations between the two sets of data is usually too large to compute an exhaustive permutation test in a reasonable amount of time. The permutation distribution of the statistic of interest is approximated using a Monte-Carlo approach. A relatively small number of randomly selected permutations can give us a reasonably good idea of the distribution.

Permutation tests can be used for any test statistic, regardless of whether or not its distribution is known. The hypothesis is about the data itself, not about a specific parameter. In the examples below, we use the t-statistic, but we could use any other function.
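
The whole procedure fits in a short Matlab sketch. The example below uses the equal-variance t-statistic defined earlier and two hypothetical vectors A and B; it is only an illustration of the principle, not the implementation used by Brainstorm:

A = randn(1, 78);  B = randn(1, 383) + 0.1;      % hypothetical measures for the two conditions
nA = numel(A);  nB = numel(B);  allVal = [A, B];
% Equal-variance independent t-statistic, written as an anonymous function
tfun = @(x,y) (mean(x) - mean(y)) / sqrt( ...
    ((numel(x)-1)*var(x) + (numel(y)-1)*var(y)) / (numel(x)+numel(y)-2) * (1/numel(x) + 1/numel(y)) );
tObs  = tfun(A, B);                              % observed value of the statistic
Nperm = 1000;
tPerm = zeros(1, Nperm);
for i = 1:Nperm
    iRand = randperm(nA + nB);                   % exchange the values between the two sets
    tPerm(i) = tfun(allVal(iRand(1:nA)), allVal(iRand(nA+1:end)));
end
% Two-tailed p-value (the +1 avoids reporting p=0 with a finite number of permutations)
p = (sum(abs(tPerm) >= abs(tObs)) + 1) / (Nperm + 1);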

Practical limitations

Computation time: If you increase the number of random permutations you use for estimating the distribution, the computation time will increase linearly. You need to find a good balance between the total computation time and the accuracy of the result.

Memory (RAM): The implementations of the permutation tests available in Brainstorm require all the data to be loaded in memory before starting the evaluation of the permutation statistics. Running this function on large datasets or on source data could quickly crash your computer. For example, loading the data for the nonparametric equivalent to the parametric t-test we ran previously would require:
276(sensors) * 461(trials) * 361(time) * 8(bytes) / 1024^3(Gb) = 0.3 Gb of memory.

This is acceptable on most recent computers. But to perform the same at the source level, you need:
45000*461*361*8/1024^3 ≈ 56 Gb of memory just to load the data. This is impossible on most computers: we have to give up at least one dimension and run the test on only one region of interest or one time sample (or on the average over a short time window).

Example 2: Permutation t-test

Let's run the nonparametric equivalent to the test we ran in the first example.

  • In the Process2 tab, select the following files, from both runs (approximation discussed previously):

    • Files A: All the deviant trials, with the [Process recordings] button selected.

    • Files B: All the standard trials, with the [Process recordings] button selected.

  • Run the process "Test > Permutation test: Independent", set the options as shown below.
    Sensor type: You should enter explicitly "MEG" instead of leaving this field empty. It will decrease the computation time (no optimization for full files in the nonparametric tests).
    Note that it may require more than 3Gb of RAM and take more than 10min.

    stat_ttest2_process.gif

  • Open the parametric and nonparametric results side by side: they should be very similar. You may have to increase the significance level α to 0.05 to see something significant in the nonparametric version (edit the value in the Stat tab). Alternatively, you can obtain lower p-values by running the same process with more randomizations (for instance 2000 or 10000).

    ft_nocorrection_display.gif

  • In this case, the distributions of the values for each sensor and each time point (non-rectified MEG recordings) are very close to normal distributions. This was illustrated at the top of this page, in the section "Histograms". Therefore the assumptions behind the parametric t-test are verified: the results of the parametric tests are correct and very similar to the nonparametric ones.

  • If you get different results with the parametric and nonparametric approaches: trust the nonparametric one. If you want to increase the precision of a nonparametric test and decrease the p-values, increase the number of random permutations.

MacOS troubleshooting

Error "mexmaci64 cannot be opened because the developer cannot be verified":

FieldTrip implementation

FieldTrip functions in Brainstorm

Some functions of the FieldTrip toolbox can be called from the Brainstorm environment. If you are running the compiled version of Brainstorm, these functions are already packaged with it; otherwise you need to install FieldTrip on your computer, either manually or as a Brainstorm plugin. See the Plugins tutorial.

Cluster-based correction

One interesting method that has been promoted by the FieldTrip developers is the cluster-based approach for nonparametric tests. In the type of data we manipulate in MEG/EEG analysis, the neighboring channels, time points or frequency bins are expected to show similar behavior. We can group these neighbors into clusters to "accumulate the evidence". A cluster can have a multi-dimensional extent in space/time/frequency.

In the context of a nonparametric test, the test statistic computed at each permutation is the extent of the largest cluster. To reject or accept the null hypothesis, we compare the largest observed cluster with the randomization distribution of the largest clusters.

This approach solves the multiple comparisons problem, because the test statistic that is used (the maximum cluster size) is computed using all the values at the same time, along all the dimensions (time, sensors, frequency). There is only one test that is performed, so there is no multiple comparisons problem.

The result is simpler to report but also a lot less informative than FDR-corrected nonparametric tests. We have only one null hypothesis, "the two sets of data follow the same probability distribution", and the outcome of the test is to accept or reject this hypothesis. Therefore we cannot report the spatial or temporal extent of the most significant clusters. Make sure you read this recommendation before reporting cluster-based results in your publications.

Reference documentation

For a complete description of nonparametric cluster-based statistics in FieldTrip, read the following:

Process options

There are three separate processes in Brainstorm, to call the three FieldTrip functions.

  • ft_timelockstatistics: Compare imported trials (recordings).

  • ft_sourcestatistics: Compare source maps or time-frequency decompositions of sources.

  • ft_freqstatistics: Compare time-frequency decompositions of sensor recordings or scouts signals.

  • See below the correspondence between the options in the interface and the FieldTrip functions.

Test statistic options (name of the options in the interface in bold):

  • cfg.numrandomization = "Number of randomizations"

  • cfg.statistic = "Independent t-test" ('indepsamplesT') or "Paired t-test" ('depsamplesT')

  • cfg.tail = One-tailed (-1), Two-tailed (0), One-tailed (+1)

  • cfg.correctm = "Type of correction" ('no', 'cluster', 'bonferroni', 'fdr', 'max', 'holm', 'hochberg')

  • cfg.method = 'montecarlo'

    ft_process_options.gif
  • cfg.correcttail = 'prob'

Cluster-based correction:

  • cfg.clusteralpha = "Cluster Alpha"

  • cfg.minnbchan = "Min number of neighbours"

  • cfg.clustertail = cfg.tail (if not, FieldTrip crashes)

  • cfg.clusterstatistic = 'maxsum'

Input options: All the data selection is done beforehand, in the process code and in the out_fieldtrip_*.m functions.

  • cfg.channel = 'all';
  • cfg.latency = 'all';
  • cfg.frequency = 'all';
  • cfg.avgovertime = 'no';
  • cfg.avgchan = 'no';
  • cfg.avgoverfreq = 'no';
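
Put together, the configuration structure passed to FieldTrip looks approximately like the sketch below. This is only an illustration assembling the options listed above, with arbitrary example values; it is not a copy of the Brainstorm code. The design matrix (cfg.design/cfg.ivar) and the channel neighbourhood structure (cfg.neighbours) required by FieldTrip are not exposed in the Brainstorm interface:

cfg = [];
cfg.method           = 'montecarlo';
cfg.statistic        = 'indepsamplesT';   % "Independent t-test"
cfg.numrandomization = 1000;              % "Number of randomizations"
cfg.tail             = 0;                 % two-tailed
cfg.correcttail      = 'prob';
cfg.correctm         = 'cluster';         % "Type of correction"
cfg.clusteralpha     = 0.05;              % "Cluster Alpha"
cfg.clustertail      = cfg.tail;
cfg.clusterstatistic = 'maxsum';
cfg.minnbchan        = 2;                 % "Min number of neighbours" (example value)
cfg.channel          = 'all';
cfg.latency          = 'all';
cfg.frequency        = 'all';
cfg.avgovertime      = 'no';
cfg.avgchan          = 'no';
cfg.avgoverfreq      = 'no';
% cfg.neighbours     = ...;                          % from ft_prepare_neighbours
% cfg.design         = [ones(1,nA), 2*ones(1,nB)];   cfg.ivar = 1;
% stat = ft_timelockstatistics(cfg, dataA{:}, dataB{:});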

Example 3: Cluster-based correction

Run again the same test, but this time select the cluster correction.

  • Keep the same files selected in Process2.
  • Run the process "Test > FieldTrip: ft_timelockstatistics", Type of correction = cluster
    Note that it may require more than 5Gb of RAM and take more than 20min: check the Matlab command window for the FieldTrip progress report.

    ft_cluster_process.gif

  • Double-click on the new file and add a 2D topography to it (CTRL+T). Note that in the Stat tab, the options for multiple comparisons corrections are disabled because the values saved in the file are already corrected: you can only change the significance threshold.
  • Instead, you get a list of significant clusters, which you can display separately if needed. The colored dots on the topography represent the clusters, blue for negative clusters and red for positive clusters. You can change these colors in the Stat tab. Note that the clusters have a spatio-temporal extent: at one time point they can be represented as two separate blobs in the 2D topography, but these blobs are connected at other time points.

    ft_cluster_display1.gif

  • Additional options are available for exploring the clusters, try them all. Values used to represent the clusters: p=p-value, c=cluster statistic (maxsum), s=cluster size (connected data points).

    ft_cluster_display2.gif

  • Don't spend too much time exploring the clusters: in the previous cases, all the tests at each sensor and each time point were computed independently, so we could report, individually, whether each of them was significant or not. On the other hand, the cluster-based approach only allows us to report that the two conditions are different, without specifying where or when, which makes the visual exploration of clusters relatively useless. Make sure you read this recommendation before reporting cluster-based results in your publications.

Example 4: Parametric test on sources

We can reproduce similar results at the source level. If you are using non-normalized and non-rectified current density maps, their distributions across trials should be normal, as illustrated earlier with the histograms. You can use a parametric t-test to compare the two conditions at the source level.

  • Keep the same files selected in Process2. Select the button [Process sources] on both sides.

  • Run the process "Test > Parametric test: Independent", Select all the data, do not average.
    Use scouts: No. When this option is not selected, it uses the entire cortex instead.

    ft_sources_process.gif

  • Double-click on the new file. Change the colormap definition to show only positive values (right-click > Colormap: Stat2 > Uncheck: Absolute values) and use a different colormap ("hot" or "jet"). The sign of the relative t-statistic is not meaningful, it depends mostly on the orientation of the dipoles on the cortex (see tutorial: Difference).

    ft_sources_display.gif

Scouts and statistics

  • From the Scout tab, you can also plot the scouts time series and get a summary of what is happening in your regions of interest. Non-zero values indicate the latencies when at least one vertex of the scout has a value that is significantly different between the two conditions. The values that are shown are the averaged t-values in the scout. The figure below shows the option "Values: Relative" to match surface display, but absolute values would make more sense in this case.

    ft_sources_scouts.gif

Unconstrained sources

There are some additional constraints to take into consideration when computing statistics for source models with unconstrained orientations. See the corresponding section in the tutorial Workflows.

Directionality: Difference of absolute values

The test we just computed correctly detects the times and brain regions with significant differences between the two conditions, but the sign of the t-statistic is not interpretable: we don't know where the response is stronger or weaker for the deviant stimulation.

After identifying where and when the responses are different, we can go back to the source values and compute another measure that will give us the directionality of this difference:
abs(average(deviant_trials)) - abs(average(standard_trials))

  • Keep the same files selected in Process2. Select the button [Process sources] on both sides.

  • Run process "Test > Difference of means", with the option "Absolute value of average".

    ft_sources_diffmean.gif

  • Double-click on the new file. Double-click on the colorbar to reset it to its defaults. The sign of this difference is meaningful: red values mean "higher amplitude for the deviant condition", blue values mean "higher amplitude for the standard condition", but we don't know if they are statistically significant.

    ft_sources_direction.gif

  • The example above shows the two files at 148ms. The left figure shows the result of the t-test (significant effects but ambiguous sign) and the right figure shows the difference of absolute values (meaningful sign, but no statistical threshold). Superimposing the two types of information shows a significant increase of activity in the frontal region for the deviant condition, but a decrease around the auditory cortex. The two can be combined more formally, as explained below.

  • In Process2: FilesA = t-test results (sources), FilesB = difference deviant-standard (sources).
  • Run process: Test > Apply statistic threshold, significance level α=0.01, correction=FDR.

    thresh_process.gif

  • Double-click on the new file and go to 148ms. The statistic threshold from the t-test file was applied to the difference of rectified averages (deviant-standard): only the values for which there is a significant effect between the two conditions are kept; all the others are set to zero and masked. We observe areas colored in white where the two conditions have equal amplitudes but different signs. Note that for displaying this file correctly, you must keep the amplitude slider at 0% (in the Surface tab): a proper statistical threshold is already applied to the source map, and you should not apply any additional arbitrary amplitude threshold on top of it.

    thresh_display.gif

Example 5: Parametric test on scouts

The previous example showed how to test for differences across the full source maps, and then to extract the significant scout activity. Another valid approach is to test directly for significant differences in specific regions of interest, after computing the scouts time series for each trial.

This alternative has many advantages: it can be a lot faster and more memory-efficient, and it reduces the multiple comparisons problem. Indeed, when you perform fewer tests at each time point (the number of scouts instead of the number of sources), the FDR and Bonferroni corrections are less conservative and lead to higher corrected significance thresholds. On the other hand, it requires formulating stronger hypotheses: you need to define the regions in which you expect to observe differences, instead of screening the entire brain.

  • In Process2, select all the deviant trials (A) and standard trials (B). Select [Process sources].

  • Run process Test > Parametric test: Independent, Use scouts: A1L, IFGL, M1L.

    scout_process.gif

  • Double-click on the new file. It shows the time points where the scout signals (averaged across vertices) are significantly different in the two conditions. We cannot represent this new file over the cortex because we have restricted the test to the scouts and discarded all the spatial information. As explained in the previous examples: the significant differences are correctly detected but the sign of the t-statistic is ambiguous.

    scout_display.gif

Advanced

Convert statistic results to regular files

Apply statistic threshold

You can convert the results of a statistical test to a regular file. This can be useful because many menus and processes are not accessible for files with the "stat" tag displayed on top of their icon.

  • In Process1: Select the results for the test you ran on scouts time series (example #5).
  • Run process: "Test > Apply statistic threshold": α=0.01, correction=FDR, dim=[all].

    scout_apply.gif

  • It produces a new file with the same name but without the "stat" tag. This file is a regular matrix that you can use with any process. When you open it, the Stat tab doesn't show up.
  • Note that when using this process from the Process1 tab, with one single stat file in input, the values saved in the file are the t-values from the stat file, not the original physical units. In order to obtain real physical units, you need to call the process with two inputs: the stat file on the left, and the file with the values of interest on the right, as in the previous section Directionality: Difference of absolute values.

Simulate recordings from these scouts

  • Compute a head model for the intra-subject folder:
    Right-click on the channel file > Compute head model, keep all the default options.
    Note that this channel file was created during one of the processes involving the two runs; it contains an average of their respective channel files (average head positions).

  • In Process1, select the new thresholded matrix file.
  • Run process: "Simulate > Simulate recordings from scout", select option Save full sources.

    simulate_process.gif

  • This process creates two files. First it maps the scouts time series on the cortex: it creates an empty source file with zeroes everywhere, then for each scout it maps the values of the input time series to the sources within the ROI. Then it multiplies these artificial source maps with the forward model to simulate MEG recordings.

    simulate_display.gif

  • Note that this example is not adapted to simulating realistic MEG values. The source time series used as input for the process "Simulate recordings from scouts" are t-values, not realistic physical units for a current source density (in pAm). The simulated MEG values localize the simulated scouts correctly, but with meaningless amplitudes. For realistic simulations, you must use input signals with amplitudes that correspond to minimum norm current density maps (in pAm), which you can obtain e.g. with the process "Extract > Scout time series".

Advanced

Example 6: Nonparametric test on time-frequency maps

To run a test on time-frequency maps, we need all the time-frequency decompositions for each individual trial to be available in the database. In the time-frequency tutorial, we saved only the averaged time-frequency decompositions of all the trials.

  • In Process1, select all the trials from both conditions and both runs.

  • Run process "Frequency > Time-frequency (Morlet wavelets)". Select the options as below.
    Select only one sensor (eg. MLP57) to make it faster. Save individual TF maps.
    Measure=Magnitude: It is more standard to test the square root of power (amplitude).
    Do not normalize the TF maps for a test within a single subject (only for group studies).

    tf_trials.gif tf_options.gif

  • In Process2, select all the deviant trials (A) and standard trials (B). Select [Process time-freq].

  • Run process: Test > Permutation test: Independent, 1000 randomizations, no correction.
    No need to select the option "Match signals between files" because the list of signals is the same for all the trials. If you have marked bad channels in some trials during your analysis, you would need to select this option.

    tf_stat_process.gif

  • Double-click on the new file. In the Stat tab, select α=0.05 uncorrected.

    tf_stat_display.gif

  • If you run this test on time-frequency files where the power has been saved, you get this warning:

    tf_warning.gif

Now delete the TF decompositions for the individual trials:

  • In Process1, select the files for which you computed the TF decomposition (all trials).
  • Select the [Process time-freq] button.

  • Run process: File > Delete files, option Delete selected files.

    tf_delete.gif

Advanced

Export to SPM

An alternative to running the statistical tests in Brainstorm is to export all the data and compute the tests with an external program (R, Matlab, SPM, etc). Multiple menus exist to export files to external file formats (right-click on a file > File > Export to file).

Two tutorials explain how to export data specifically to SPM:

Advanced

On the hard drive

Right click on the first test we computed in this tutorial > File > View file contents.

  • stat_contents.gif

Description of the fields

  • pmap: [Nsignals x Ntime x Nfreq]: p-values for all the data points. If empty, computed from tmap:
    pmap = process_test_parametric2('ComputePvalues', tmap, df, TestType, TestTail);

  • tmap: [Nsignals x Ntime x Nfreq]: t-values for all the data points.

  • df: [Nsignals x Ntime x Nfreq]: Number of degrees of freedom for each test.

  • Correction: Correction for multiple comparison already applied ('no', 'cluster', 'fdr', ...)

  • Type: Initial type of the data ('data', 'results', 'timefreq', 'matrix').

  • Comment: String displayed in the database explorer to represent the file.

  • The other fields were copied from the files that were tested, and were described previously.

Useful functions

  • process_test_parametric2: Two-sample independent parametric t-test

  • process_test_parametric2p: Two-sample paired parametric t-test

  • process_test_parametric1: One-sample parametric t-test (against zero)

  • process_ttest_baseline: One-sample parametric t-test (against baseline)

  • process_ft_timelockstatistics: FieldTrip tests for recordings (file type "data")

  • process_ft_sourcestatistics: FieldTrip tests for source maps (file type "results")

  • process_ft_freqstatistics: For time-frequency and scouts (file type "timefreq" and "matrix")

  • Conversion Brainstorm to FieldTrip: out_fieldtrip_data, out_fieldtrip_results, out_fieldtrip_timefreq, out_fieldtrip_matrix

  • process_extract_pthresh: Computes the pmap from the tmap, and saves thresholded files.

  • process_test_parametric2('ComputePvalues', t, df, TestType, TestTail)

  • bst_stat_thresh: Computes the Bonferroni and FDR corrections in Brainstorm.

Advanced

Additional documentation

Articles

Related tutorials

Forum discussions



Tutorial 27: Workflows

Authors: Francois Tadel, Elizabeth Bock, Dimitrios Pantazis, John Mosher, Richard Leahy, Sylvain Baillet

This page provides some general recommendations for your event-related analysis. It is not directly related with the auditory dataset, but provides guidelines you should consider for any MEG/EEG experiment.
We do not provide standard analysis pipelines for resting or steady state recordings yet, but we will add a few examples soon in the section Other analysis scenarios of the tutorials page.

What is your question?

The most appropriate analysis pipeline for your data depends on the question you are trying to answer. Before defining the main steps of your analysis, you should be able to state clearly the question you want to answer with your recordings.

What dimension?

  • MEG/EEG recordings
  • Cortical sources
    • Individual anatomy or template
    • Constrained (one value per vertex) or unconstrained (three values per grid point)
    • Full cortex or regions of interests
  • Frequency or time-frequency maps

What kind of experiment?

  • Single subject: Contrast two experimental conditions across trials, for one single subject.

    • Files A: Single trials for condition A.
    • Files B: Single trials for condition B.
  • Group analysis, within subject: Contrast two conditions A and B measured for each subject.

    • Files A: Subject-level averages for condition A (all the subjects).
    • Files B: Subject-level averages for condition B (all the subjects).
  • Group analysis, between subjects: Contrast two groups of subjects for one condition.

    • Files A: Subject-level averages for group #1 (G1).
    • Files B: Subject-level averages for group #2 (G2).

What level of precision?

  • Difference of averages
  • Statistically significant differences between conditions or groups

What statistical test?

  • A = B

    • Tests the null hypothesis H0:(A=B) against the alternative hypothesis H1:(A≠B)

    • Correct detection: Identify correctly where and when the conditions are different.

    • Ambiguous sign: We cannot say which condition is stronger.
  • Power(A) = Power(B)

    • Tests the null hypothesis H0:(Power(A)=Power(B)) against the alternative hypothesis H1:(Power(A)≠Power(B))

    • Incorrect detection: Not sensitive to the cases where A and B have opposite signs.
    • Meaningful sign: We can identify correctly which condition has a stronger response.

    • Power(x) = |x|^2, where |x| represents the modulus of the values:
      - Absolute value for scalar values (recordings, constrained sources, time-frequency)
      - Norm of the three orientations for unconstrained sources.

  • Multiple comparisons: FDR is a good choice for correcting p-values for multiple comparisons.

Design considerations

  • Use within-subject designs whenever possible (i.e. collect two conditions A and B for each subject), then contrast data at the subject level before comparing data between subjects.
  • Such designs are not only statistically optimal, but also ameliorate the between-subject sign ambiguities as contrasts can be constructed within each subject.

Common pre-processing pipeline

Most event-related studies can start with the pipeline we've introduced in these tutorials.

  1. Import the anatomy of the subject (or use a template for all the subjects).
  2. Access the recordings:
    • Link the continuous recordings to the Brainstorm database.
    • Prepare the channel file: co-register sensors and MRI, edit type and name of channels.
    • Edit the event markers: fix the delays of the triggers, mark additional events.
  3. Pre-process the signals:
    • Evaluate the quality of the recordings with a power spectral density plot (PSD).
    • Apply frequency filters (low-pass, high-pass, notch).
    • Identify bad channels and bad segments.
    • Correct for artifacts with SSP or ICA.
  4. Import the recordings in the database: epochs around some markers of interest.

How many trials to include?

  • Single subject: Include all the good trials (unless you have a very low number of trials).
    See the averaging tutorial.

  • Group analysis: Use a similar number of trials for all the subjects (they do not need to be strictly equal), and reject the subjects for which you have many fewer good trials.

EEG recordings

Average

  • Average the epochs across acquisition runs: OK.
  • Average the epochs across subjects: OK.
  • Electrodes are in the same standard positions for all the subjects (e.g. 10-20).
  • Never use an absolute value for averaging or contrasting sensor-level data.

Statistics: Single subject

  • A = B: Parametric or non-parametric t-test, independent, two-tailed.

Statistics: Group analysis, within subject

  • A = B

    • First-level statistic: For each subject, sensor average for conditions A and B.

    • Second-level statistic: Parametric or non-parametric t-test, paired, two-tailed.

Statistics: Group analysis, between subjects

  • G1 = G2

    • First-level statistic: For each subject, sensor average for the conditions to test.

    • Second-level statistic: Parametric/non-parametric t-test, independent, two-tailed.

MEG recordings

Average

  • Average the epochs within each acquisition run: OK.
  • Average across runs: Not advised because the head of the subject may move between runs.
  • Average across subjects: Strongly discouraged because head shapes vary across subjects while the sensors are fixed. One sensor does not correspond to the same brain region for different subjects.
  • Tolerance for data exploration: Averaging across runs and subjects can be useful for identifying time points and sensors with interesting effects but should be avoided for formal analysis.
  • Note for Elekta/MaxFilter users: You can align all acquisition runs to a reference run; this allows direct channel comparisons and averaging across runs. Not recommended across subjects.
  • Never use an absolute value for averaging or contrasting sensor-level data.

Statistics: Single subject

  • A = B: Parametric or non-parametric t-test, independent, two-tailed.

Statistics: Group analysis

  • Not recommended with MEG recordings: do your analysis in source space.

Constrained cortical sources

Average: Single subject

  1. Sensor average: Compute one sensor-level average per acquisition run and per condition.

  2. Sources: Estimate sources for each average (constrained, no normalization).

  3. Source average: Average the source-level run averages to get one subject average.
    Compute a weighted average to balance for different numbers of trials across runs.

  4. Low-pass filter your evoked responses (optional).

    • If you filter the average before normalizing wrt baseline, it will lead to an underestimation of the baseline variance, and therefore to an overestimation of the Z scores computed in the next step, especially if the baseline is too short (typically less than 200 time points). The filter increases the autocorrelation of the time series, and therefore biases the estimation of the signal variance (Wikipedia).

    • You have to take into account the possible edge effects due to the filter. You can either extract a small time window or exclude the beginning of the baseline for the normalization.

  5. Normalize the subject min-norm averages: Z-score wrt baseline (no absolute value).
    Justification: The amplitude range of current densities may vary between subjects because of anatomical or experimental differences. This normalization helps bringing the different subjects to the same range of values.

  6. Do not rectify the cortical maps, but display them as absolute values if needed.

Average: Group analysis

  1. Subject averages: Compute within-subject averages for all the subjects, as described above.

  2. Rectify the cortical maps (process: Pre-process > Absolute value).
    Justification: Cortical maps have ambiguous signs across subjects: reconstructed sources depend heavily on the orientation of the true cortical sources. Given that the folding patterns of individual cortical anatomies vary considerably, cortical maps have subject-specific amplitude and sign ambiguities. This is true even if a standard anatomy is used for reconstruction.

  3. Project the individual source maps on a template (only when using the individual brains).
    For more details, see tutorial: Group analysis: Subject coregistration.

  4. Group average: Compute grand averages of all the subjects.
    Do not use a weighted average: all the subjects should have the same weight in this average.

  5. Smooth spatially the source maps (optional).
    You can smooth after step #3 for computing non-parametric statistics with the subject averages. For a simple group average, it is equivalent to smooth before or after computing the average.

Difference of averages: Within subject

  1. Sensor average: Compute one sensor-level average per acquisition run and condition.

  2. Sources: Estimate sources for each average (constrained, no normalization).

  3. Source average: Average the source-level session averages to get one subject average.

  4. Subject difference: Compute the difference between conditions for each subject #i: (Ai-Bi)

  5. Low-pass filter the difference (optional)

  6. Normalize the difference: Z-score wrt baseline (no absolute value): Z(Ai-Bi)

  7. Rectify the difference (apply an absolute value): |Z(Ai-Bi)|

  8. Project the individual difference on a template (only when using the individual brains).

  9. Group average: Compute grand averages of all the subjects: avg(|Z(Ai-Bi)|).

  10. Smooth spatially the source maps (optional).

Difference of averages: Between subjects

  1. Grand averages: Compute averages for groups #1 and #2 as in Average:Group analysis.

  2. Difference: Compute the difference between group-level averages: avg(|G1|)-avg(|G2|)

  3. Limitations: Because we rectify the source maps before computing the difference, we lose the ability to detect the differences between equal values of opposite signs. And we cannot keep the sign because we are averaging across subjects. Therefore, many effects are not detected correctly.

Statistics: Single subject

  • A = B: Parametric or non-parametric

    • Compute source maps for each trial (constrained, no normalization).
    • Parametric or non-parametric two-sample t-test, independent, two-tailed.
      Identifies correctly where and when the conditions are different (sign not meaningful).

    • Directionality: Additional step to know which condition has higher values.
      Compute the difference of rectified averages: |avg(Ai)|-|avg(Bi)|
      Combine the significance level (t-test) with the direction (difference): See details.

  • |mean(A)| = |mean(B)|: Non-parametric

    • Compute source maps for each trial (constrained, no normalization).
    • Non-parametric independent two-sample "absolute mean test", two-tailed.
      T = (|mean(A)|-|mean(B)|) / sqrt(|var(A)|/NA + |var(B)|/NB)

    • Interesting alternative that provides at the same time a correct estimation of the difference (where and when) and of its direction (which condition has higher values).

Statistics: Group analysis, within subject

  • Power(A-B) = 0: Parametric

    • First-level statistic: Rectified difference of normalized averages.
      Proceed as in Difference of averages: Within subjects, but stop before the group average (after step #8). You obtain one measure |Ai-Bi| per subject, test these values against zero.

    • Second-level statistic: Parametric one-sample Chi2-test.
      Power = sum(|Ai-Bi|^2), i=1..Nsubj ~ Chi2(Nsubj)

    • Identifies where and when the conditions are different (sign not meaningful).

    • Warning: Very sensitive test, with lots of false positives (the whole brain can be "significant")
  • |A| = |B|: Parametric or non-parametric

    • First-level statistic: Rectified and normalized subject averages.
      Proceed as in Average: Group analysis to obtain two averages per subject: Ai and Bi.

    • Second-level statistic: Parametric or non-parametric two-sample t-test, paired, two-tailed.

    • This test does not consider the sign difference within a subject, and therefore cannot detect correctly when A and B have opposite signs. Works well and indicates which condition has higher values when A and B have the same sign within a subject.

  • A = B: Parametric or non-parametric [anatomy template only]

    • First-level statistic: Normalized subject averages (not rectified, no projection needed).
      Proceed as in Average: Single subject to obtain two averages per subject: Ai and Bi.

    • Second-level statistic: Parametric or non-parametric two-sample t-test, paired, two-tailed.

    • Applies only if all the subjects are sharing the same template anatomy.
      Not recommended when using individual anatomies because of the sign issue between subjects (the signs might be opposed between two subjects, and the projection of non-rectified values to a template might be inaccurate).

  • Power(A) = 0: Parametric

    • First-level statistic: Rectified and normalized subject averages.
      Proceed as in Average: Group analysis to obtain one average per subject #i: |Ai|.

    • Second-level statistic: Parametric one-sample Chi2-test.
      PowerA = sum(|Ai|^2), i=1..Nsubj ~ Chi2(Nsubj).

Statistics: Group analysis, between subjects

  • |G1| = |G2|: Non-parametric

    • First-level statistic: Rectified and normalized subject averages.
      Proceed as in Average: Group analysis to obtain one average per subject.

    • Second-level statistic: Non-parametric two-sample t-test, independent, two-tailed.

  • Power(G1) = Power(G2): Parametric

    • First-level statistic: Rectified and normalized subject averages.
      Proceed as in Average: Group analysis to obtain one average per subject: |Ai|.

    • Second-level statistic: Parametric two-sample power F-test.
      PowerG1 = sum(Ai^2), i=1..N1 ~ Chi2(N1)
      PowerG2 = sum(Aj^2), j=1..N2 ~ Chi2(N2)
      F(N1,N2) = (PowerG1 / N1) / (PowerG2 / N2)
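
As a hedged illustration of these parametric power tests, assuming the rectified and normalized subject averages behave approximately like standard normal values under the null hypothesis (as implied by the Chi2 distributions above), the statistics could be converted to p-values as follows (hypothetical variable names, not Brainstorm code):

G1 = abs(randn(12, 15000));              % hypothetical |Ai| values: 12 subjects x 15000 sources
G2 = abs(randn(14, 15000));              % hypothetical |Aj| values: 14 subjects x 15000 sources
Power1 = sum(G1.^2, 1);   N1 = size(G1, 1);
Power2 = sum(G2.^2, 1);   N2 = size(G2, 1);
p_chi2 = 1 - gammainc(Power1/2, N1/2);   % one-sample Chi2 test, same as 1 - chi2cdf(Power1, N1)
F   = (Power1/N1) ./ (Power2/N2);
p_F = 1 - fcdf(F, N1, N2);               % one-sided F-test: Power(G1) > Power(G2) (Statistics toolbox)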

Unconstrained cortical sources

Three values for each grid point, corresponding to the three dipoles orientations (X,Y,Z).
We want only one statistic and one p-value per grid point in output.

Averages

  • Proceed as indicated above for constrained cortical sources.
  • Just replace the step Rectify with Flatten (process: Sources > Unconstrained to flat map).

  • The operator |A| has to be interpreted as "norm of the three orientations":
    |A| = sqrt(Ax^2 + Ay^2 + Az^2)
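
As a hedged illustration, assuming the unconstrained source matrix is stored as [3*Nvertices x Ntime] with the three orientations of each vertex in consecutive rows (this layout is an assumption made for the example, check your own data before reusing it), the flattening operation amounts to:

sources = randn(3*15000, 361);           % hypothetical unconstrained source matrix (x1,y1,z1,x2,...)
[nRows, nTime] = size(sources);
flat = squeeze(sqrt(sum(reshape(sources, 3, nRows/3, nTime).^2, 1)));   % [Nvertices x Ntime]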

Statistics: Single subject

  • |mean(A)| = |mean(B)|: Non-parametric

    • Compute source maps for each trial (unconstrained, no normalization).
    • Non-parametric two-sample "absolute mean test", independent, two-tailed.
      T = (|mean(A)|-|mean(B)|) / sqrt(|var(A)|/NA + |var(B)|/NB)

    • Provides at the same time a correct estimation of the difference (where and when) and of its direction (which condition has higher values).

Statistics: Group analysis, within subject

  • Power(A-B) = 0: Parametric

    • First-level statistic: Flattened difference of normalized averages.
      Proceed as in Difference of averages: Within subjects, but stop before the group average (after step #8). You obtain one measure |Ai-Bi| per subject, test these values against zero.

    • Second-level statistic: Parametric one-sample Chi2-test for unconstrained sources.
      Power = sum(|Ai-Bi|^2), i=1..Nsubj ~ Chi2(3*Nsubj)

    • Identifies where and when the conditions are different (sign not meaningful).

    • Warning: Very sensitive test, with lots of false positives (the whole brain can be "significant")
  • |A| = |B|: Parametric or non-parametric

    • First-level statistic: Flattened and normalized subject averages.
      Proceed as in Average: Group analysis to obtain two averages per subject: Ai and Bi.

    • Second-level statistic: Parametric or non-parametric two-sample t-test, paired, two-tailed.

    • This test does not consider the sign difference within a subject, and therefore cannot detect correctly when A and B have opposite signs. Works well and indicates which condition has higher values when A and B have the same sign within a subject.

  • Power(A) = 0: Parametric

    • First-level statistic: Flattened and normalized subject averages.
      Proceed as in Average: Group analysis to obtain one average per subject #i: |Ai|.

    • Second-level statistic: Parametric one-sample Chi2-test for unconstrained sources.
      PowerA = sum(|Ai|^2) = sum(Aix^2 + Aiy^2 + Aiz^2), i=1..Nsubj ~ Chi2(3*Nsubj).

Statistics: Group analysis, between subjects

  • |G1| = |G2|: Non-parametric

    • First-level statistic: Flattened and normalized subject averages.
      Proceed as in Average: Group analysis to obtain one average per subject.

    • Second-level statistic: Non-parametric two-sample t-test, independent, two-tailed.

  • Power(G1) = Power(G2): Parametric

    • First-level statistic: Flattened and normalized subject averages.
      Proceed as in Average: Group analysis to obtain one average per subject: |Ai|.

    • Second-level statistic: Parametric two-sample power F-test (unconstrained sources).
      PowerG1 = sum(Aix^2 + Aiy^2 + Aiz^2), i=1..N1 ~ Chi2(3*N1)
      PowerG2 = sum(Ajx^2 + Ajy^2 + Ajz^2), j=1..N2 ~ Chi2(3*N2)
      F = (PowerG1 / N1) / (PowerG2 / N2) ~ F(3*N1,3*N2)

Regions of interest (scouts)

Statistics: Single subject

  • Even within-subject cortical maps have sign ambiguities. MEG/EEG source mapping has limited spatial resolution and sources in opposing sulcal/gyral areas are typically reconstructed with inverted signs (constrained orientations only). Averaging activity in cortical regions of interest (scouts) would thus lead to signal cancelation. To mitigate this issue, Brainstorm uses algorithms to change the polarity of individual source time series before averaging across a scout cortical region. Please note however that the polarity of the resulting time series remains ambiguous, as it is determined by statistical considerations, not by physiological information.
  • We therefore recommend that you perform subject-wise averaging/contrast before averaging scout time series. You may also consider aggregating scout source time series using PCA instead of averaging, across all files (all epochs and conditions) from a subject together, ensuring the same component is applied everywhere and therefore mitigating further some of the ambiguities concerning the polarity of the source time series.

Statistics: Group analysis, within subject

  • Comparing scout time series between subjects is tricky because there is no way to avoid the sign ambiguity across subjects, so there are no clear recommendations. Whether rectifying the scout time series before comparing them between subjects is a good idea depends on the specific case.
  • Having a good understanding of the data (multiple inspections across channels/sources/subjects) can offer hints whether rectifying the scout time series is a good idea. Using unconstrained cortical maps to create the scout time series can ameliorate ambiguity concerns.

Time-frequency maps

Average: Single subject

  • Single trials: Compute time-frequency maps for each trial (magnitude, no normalization).
    Using the magnitude (the square root of power) is the more standard choice before applying a t-test.

  • Subject average: Average the time-frequency maps together, separately for each condition. This can be done automatically when computing the TF decompositions (option "Save averaged time-frequency maps" in the process options).

  • The values are all strictly positive, there is no sign ambiguity: you can directly subtract the averages of the two conditions and interpret the sign of the difference.
  • If you average time-frequency maps computed on sensor-level data, the same limitations apply as for averaging sensor level data (see sections about MEG and EEG recordings above).

Average: Group analysis

  1. Subject averages: Compute within-subject averages for all the subjects, as described above.

  2. Normalize the subject averages: ERD/ERS or Z-score wrt baseline.

  3. Group average: Compute grand averages of all the subjects.

  4. Difference of averages: Simply compute the difference of the group averages.

Statistics: Single subject

  • A = B: Parametric or non-parametric

    • Compute time-frequency maps for each trial (magnitude, no normalization).

    • Parametric or non-parametric two-sample t-test, independent, two-tailed.

Statistics: Group analysis, within subject

  • A = B: Parametric or non-parametric [anatomy template only]

    • First-level statistic: Normalized subject averages (ERS/ERD or Z-score).
      Proceed as in Average: Group analysis to obtain two averages per subject: Ai and Bi.

    • Second-level statistic: Parametric or non-parametric two-sample t-test, paired, two-tailed.

Statistics: Group analysis, between subjects

  • G1 = G2: Non-parametric

    • First-level statistic: Normalized subject averages (ERS/ERD or Z-score).
      Proceed as in Average: Group analysis to obtain one average per subject.

    • Second-level statistic: Parametric or non-parametric two-sample t-test, independent, two-tailed.




Tutorial 28: Scripting

Authors: Francois Tadel, Elizabeth Bock, Matthias Sure, Sylvain Baillet

The previous tutorials explained how to use Brainstorm in an interactive way to process one subject with two acquisition runs. In the context of a typical neuroimaging study, you may have tens or hundreds of subjects to process in the same way, and it is unrealistic to do everything manually. Some parts of the analysis can be processed in batches with no direct supervision, while others require more attention. This tutorial introduces tools and tricks that will help you assemble an efficient analysis pipeline.

Warning: Never call clear, clear functions, clear classes or clear all while Brainstorm is running: it would clear variables, functions and classes that are needed by Brainstorm.

Starting a new script

The easiest way to get started with a new Brainstorm script is to use the script generator, already introduced in the tutorial Select files and run processes. Select some files in the Process1 or Process2 tabs, select a list of processes, and use the menu Generate .m script. The example below should work with the protocol "TutorialIntroduction" created during the introduction tutorials.

  • In the Process1 tab, leave the selection box empty and click on [Run]. Instead of selecting the files from the Brainstorm interface, we will select them directly from the database using a script.
  • Select process File > Select files: Recordings (do not execute immediately)
    Subject=Subject01, Condition=[Empty], File name=Avg: deviant (the space is important).

    start1.gif

  • This process selects all the recordings with a comment including the string "Avg: deviant", from all the folders in Subject01 (except for "Intra-subject" and "Common files"). We expect to get two files: the averages of the deviant condition for both runs.
  • Add process Pre-process > Band-pass filter: Lower cutoff=0Hz, Upper cutoff=30Hz, 60dB.
    Add process File > Save snapshot: Recordings time series, Sensors=MEG.

    start2.gif

  • This will apply a low-pass filter at 30Hz and save a screen capture of the signals in the report.
  • Do not run the pipeline; select the menu Generate .m script instead. It saves a new .m file and opens it in the Matlab editor. Close the pipeline editor window and look at the script.

    start3.gif

  • The script you just generated can be the starting point for your own custom script. The following sections explain, line by line, how it works and how to edit it.

Line by line: Header

% Script generated by Brainstorm (19-Jul-2016)

All the lines starting with a "%" are comments; they are never executed.

% Input files
sFiles = [];
SubjectNames = {...
    'Subject01'};

These lines define the script inputs:

  • sFiles: The list of input files. It is currently empty because we did not select anything in the Process1 list. If we had selected files, it would contain a cell array of strings with relative file paths.

  • SubjectNames: List of subject names that are used in the script. Most of the time, the generated scripts contain only one entry, but it is written as a cell array to make it easier to extend to multiple subjects with a loop (see the minimal example below; loops are described further in this tutorial).
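
For example, extending the generated script to several subjects only requires listing them and looping over the list (the extra subject names below are placeholders):

SubjectNames = {'Subject01', 'Subject02', 'Subject03'};   % Placeholder subject names
for iSubject = 1:length(SubjectNames)
    % Call bst_process('CallProcess', ...) here with SubjectNames{iSubject}
end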

% Start a new report
bst_report('Start', sFiles);

Starts a new report of activity: Clears all the previous logs and gets ready to record new messages. The report will collect all the messages that are generated during the execution of the script by the various processes. You can explicitly add screen captures and additional messages to the current report with the function bst_report. This report will remain open until the function bst_report('Start') is called again. To display the current report, use the menu File > Report viewer.

The syntax function_name('SubFunction', arguments) is used a lot in Brainstorm: it calls a subfunction available inside a .m file. The line above calls the function Start() in the file brainstorm3/toolbox/process/bst_report.m. This is made possible by the short helper script "macro_method". Many of the Brainstorm .m files are actually libraries of functions, rather than simple "scripts" or "functions".
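
To illustrate this calling convention, here is a minimal sketch of the same pattern written with a plain switch statement (the file my_library.m and its subfunctions are hypothetical; Brainstorm itself relies on the macro_method helper to do this dispatching automatically):

function out = my_library(action, varargin)
% MY_LIBRARY: Toy "library of functions": my_library('Start', 'analysis') calls Start() below.
switch action
    case 'Start'
        out = Start(varargin{:});
    case 'Stop'
        out = Stop(varargin{:});
    otherwise
        error('Unknown subfunction: %s', action);
end
end

function msg = Start(name)
% Subfunction: only reachable through my_library('Start', ...)
msg = ['Started: ' name];
end

function msg = Stop(name)
% Subfunction: only reachable through my_library('Stop', ...)
msg = ['Stopped: ' name];
end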

Line by line: Body

% Process: Select data files in: Subject01/*/Avg: deviant
sFiles = bst_process('CallProcess', 'process_select_files_data', sFiles, [], ...
    'subjectname',   SubjectNames{1}, ...
    'condition',     '', ...
    'tag',           'Avg: deviant', ...
    'includebad',    0, ...
    'includeintra',  0, ...
    'includecommon', 0);

% Process: Low-pass:30Hz
sFiles = bst_process('CallProcess', 'process_bandpass', sFiles, [], ...
    'sensortypes', 'MEG', ...
    'highpass',    0, ...
    'lowpass',     30, ...
    'attenuation', 'strict', ...  % 60dB
    'mirror',      0, ...
    'useold',      0, ...
    'overwrite',   0);

% Process: Snapshot: Recordings time series
sFiles = bst_process('CallProcess', 'process_snapshot', sFiles, [], ...
    'target',         5, ...  % Recordings time series
    'modality',       1, ...  % MEG (All)
    'orient',         4, ...  % bottom
    'time',           0.11, ...
    'contact_time',   [0, 0.1], ...
    'contact_nimage', 12, ...
    'threshold',      20, ...
    'Comment',        'Run');

You will find one block per process you selected in the pipeline editor. They all have the same syntax:
output_files = bst_process('CallProcess', process_name, input_files_A, input_files_B, options_list);

  • process_name: String indicating the function corresponding to the process to execute. To find the path to the process function from the pipeline editor, hover your mouse over the selected process, as illustrated in this tutorial.

  • input_files_A: List of input files in Process1, or FilesA in Process2. It can be a cell array of file names (full path, or relative path from the protocol folder), or an array of structures describing the files in the database (returned by a previous call to bst_process).

  • input_files_B: Empty for Process1, or FilesB in Process2. Cell array of strings, or array of struct.

  • options_list: Pairs of (option_name, option_values), one pair for each option of the process.

  • output_files: Array of structures describing the output files of the process. If the process created new files, this variable points at them. If the process did not create new files or modified existing files, this variable points at the input files (you can inspect it as in the short loop below).
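
For example, you can loop over the returned structures to check what the process produced, using only the fields described above:

% Display the folder and file name of each output file
for i = 1:length(output_files)
    disp([output_files(i).Condition, ' : ', output_files(i).FileName]);
end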

% Save and display report
ReportFile = bst_report('Save', sFiles);

Closes the current report and saves it in the user report folder ($HOME/.brainstorm/reports). These reports are in .mat format and contain all the information necessary to re-run the execution exactly in the same way, but they are not easy to read.

The parameter "sFiles" is optional, it indicates what are the files that are considered as the final results of the script. You can remove it without breaking your script: ReportFile = bst_report('Save');

bst_report('Open', ReportFile);

Opens the report viewer to display what happened during the execution. This is equivalent to using the menu File > Report viewer. You can comment this line (ie. add a "%" at the beginning of the line) if you don't want to show the report at the end of the execution.

% bst_report('Export', ReportFile, ExportDir);

This function exports the report in readable format, as an HTML file that includes all the screen captures embedded in it. It is disabled by default. If you want to use this feature: remove the "%" at the beginning of the line, and define the variable ExportDir.

ExportDir must be a string that defines where to save the HTML report. It can be either the absolute path to a HTML file (eg. 'C:\Users\myuser\Documents\report_example.html') or just a folder (eg. 'C:\Users\myuser\Documents'). If you enter only a path to a folder, a default file name including the protocol name and a date tag is generated (report_ProtocolName_YYMMDD_HHMMSS.html).

% bst_report('Email', ReportFile, username, to, subject, isFullReport);

Sends the report by email, as explained in this later section: Send report by email.

% gui_brainstorm('EmptyTempFolder');

Deletes all the files in the Brainstorm temporary folder.

Simplify the calls

The script you generated is like any Matlab script: you can edit it, rename the variables, add tests and loops, etc. The first important thing to understand is how to edit the options and change the inputs/outputs. The script generator uses only one variable for all the file lists (sFiles) and the output process is always the input of the following process. This is usually too restrictive to write a full analysis script: we commonly need to have multiple lists of files or to run two different operations on the same file.

Let's consider the first process call, which selects the averages for the Deviant condition in both runs.

sFiles = bst_process('CallProcess', 'process_select_files_data', sFiles, [], ...
    'subjectname',   SubjectNames{1}, ...
    'condition',     '', ...
    'tag',           'Avg: deviant', ...
    'includebad',    0, ...
    'includeintra',  0, ...
    'includecommon', 0);

There is no need to set the parameter sFiles because there is no input: you can replace it with an empty matrix [], and remove the line "sFiles = [];" altogether. We can also rename the output variable to "sAvgData", to be more specific.

sAvgData = bst_process('CallProcess', 'process_select_files_data', [], [], ...

You can omit all the options that are not defined, not used, or kept to their default values:

sAvgData = bst_process('CallProcess', 'process_select_files_data', [], [], ...
    'subjectname',   SubjectNames{1}, ...
    'tag',           'Avg: deviant');

Edit the call to the low-pass filter: Change the input to sAvgData and the output to sAvgDataLow, this way you will be able to keep track of the two files if you need to use them independently later.

sAvgDataLow = bst_process('CallProcess', 'process_bandpass', sAvgData, [], ...
    'sensortypes', 'MEG', ...
    'highpass',    0, ...
    'lowpass',     30, ...
    'attenuation', 'strict');   % 60dB

Edit the call to the snapshot process: Change the input to sAvgDataLow, and remove the output parameter (we are not expecting any output file from it).

bst_process('CallProcess', 'process_snapshot', sAvgDataLow, [], ...
    'target',   5, ...  % Recordings time series
    'modality', 1);     % MEG (All)

Replace the last lines with the following code, in order to export the report instead of opening it in the report viewer (edit the file path to point at your own user folder instead).

ReportFile = bst_report('Save');
bst_report('Export', ReportFile, 'C:\Users\myuser\Documents\report_test.html');

Evaluate in Matlab

Select the code for the first process in the Matlab editor, right-click > Evaluate selection (or press F9).

edit1.gif

If you haven't executed your script yet, you will get the following error in the Matlab command window:

Undefined variable "SubjectNames" or class "SubjectNames".

The variable SubjectNames is not defined yet: Execute the first line "SubjectNames = {'Subject01'}", then try again. You should now have a new variable sAvgData in your Matlab workspace, which points at the two average files. Type "sAvgData(1)" in your command window to display the first element:

>> sAvgData(1)
ans =
          iStudy: 6
           iItem: 1
        FileName: '..._01_600Hz_notch/data_deviant_average_160513_1329.mat'
        FileType: 'data'
         Comment: 'Avg: deviant (39 files)'
       Condition: 'S01_AEF_20131218_01_600Hz_notch'
     SubjectFile: 'Subject01/brainstormsubject.mat'
     SubjectName: 'Subject01'
        DataFile: ''
     ChannelFile: 'Subject01/S01_AEF_20131218_01_600Hz_notch/channel_ctf_acc1.mat'
    ChannelTypes: {'ADC A'  'ADC V'  'DAC'  'ECG'  'EOG'  'MEG'  'MEG REF' ...}

The field "sAvgData(1).FileName" contains the relative path to the to the Deviant average in the first run. This structure sAvgData contains also a lot of information that can be helpful in your script:

  • iStudy / iItem: Reference of the file in the database (described later in this tutorial).

  • FileType: 'raw' (continuous files), 'data' (recordings), 'results' (sources), 'timefreq' (time-frequency, spectrum and connectivity), or 'matrix' (any time series extracted from other files).

  • Comment: Comment/Name field of the file (what is displayed in the database explorer).

  • Condition: Name of the condition/folder in which the file is located.

  • SubjectFile: Relative path to the subject file (brainstormsubject.mat).

  • SubjectName: Name of the subject (must be the same as the folder name).

  • DataFile: For types 'results' or 'timefreq', path of the parent file in the database explorer.

  • ChannelFile: Relative path to the channel file.

  • ChannelTypes: Cell array of channel types available for the input file.
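
These fields can be used directly with standard Matlab indexing to filter or group the selected files. A short example, based on the run name displayed above:

% Keep only the averages located in the folder of the first acquisition run
iRun01 = strcmpi({sAvgData.Condition}, 'S01_AEF_20131218_01_600Hz_notch');
sAvgRun01 = sAvgData(iRun01);
% Display the comments of the selected files
disp({sAvgRun01.Comment});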

Naming conventions

To help you navigate in the Brainstorm code, here are some naming conventions (aka nomenclature), illustrated in the short snippet after this list:

  • Structures: Name starting with an "s" followed by a capital letter (eg. sFiles, sStudy, sSubject).

  • Indices: Either loop variables or array indices, name starting with an "i" (eg. iSubject, iStudy, iTime).

  • Counts: Number of elements in a group, name starting with an "n" (eg. nAvg, nTrials, nSubjects).

  • Graphic handles: Matlab graphic objects, name starting with an "h" (eg. hFig, hAxes, hLine, hText).

  • File names: Scripts and functions, only lower case, separation with "_" (eg. process_fft, bst_get).

  • Sub-functions: Inside a .m file, name starting with a capital, CamelCase (eg. CallProcess, Start).
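
Applied to a trivial loop, these conventions look like this (illustrative only, assuming SubjectNames is defined as earlier in this tutorial and the subjects exist in the current protocol):

nSubjects = length(SubjectNames);                           % Count:     "n" prefix
for iSubject = 1:nSubjects                                  % Index:     "i" prefix
    sSubject = bst_get('Subject', SubjectNames{iSubject});  % Structure: "s" prefix
    disp(['Processing subject: ' sSubject.Name]);
end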

Running the script

The simplified script looks like this:

% Input files
SubjectNames = {'Subject01'};
% Start a new report
bst_report('Start');

% Process: Select data files in: Subject01/*/Avg: deviant
sAvgData = bst_process('CallProcess', 'process_select_files_data', [], [], ...
    'subjectname',   SubjectNames{1}, ...
    'tag',           'Avg: deviant');
% Process: Low-pass:30Hz
sAvgDataLow = bst_process('CallProcess', 'process_bandpass', sAvgData, [], ...
    'sensortypes', 'MEG', ...
    'highpass',    0, ...
    'lowpass',     30, ...
    'attenuation', 'strict');  % 60dB
% Process: Snapshot: Recordings time series
bst_process('CallProcess', 'process_snapshot', sAvgDataLow, [], ...
    'target',   5, ...  % Recordings time series
    'modality', 1);     % MEG (All)

% Save and display report
ReportFile = bst_report('Save');
bst_report('Export', ReportFile, 'C:\Users\franc\Documents\report_test.html');

You have three ways to execute it:

  • Select all the lines (Ctrl+A) and evaluate it in Matlab (F9).
  • In the Editor toolbar of the Matlab environment, click on the button [Run].
  • Save the file, go to this folder with Matlab (or add it to your path) and type the name of the script in the command window (without the ".m" at the end).

At the end of the execution, nothing is displayed, because we indicated we wanted to export the report instead of opening it. To check the report of execution: use the menu File > Report viewer from the Brainstorm window, or open the file report_test.html that was saved at the path you specified in the script.

On this page, you can review everything that happened in the script: when it was executed, how long it took, which processes were executed, additional messages (e.g. two files were selected by the first process) and the screen captures taken by process_snapshot.

  • report1.gif

Running the script again

If you execute the script again, it will not behave as expected anymore. The selection process we used assumes that there is only one file per folder with a name that includes "Avg: deviant". This is not the case anymore after the execution, because the low-pass filtered files also contain the same string. The execution of the first process of the script now returns 4 files.

>> sAvgData = bst_process('CallProcess', 'process_select_files_data', [], [], ...
    'subjectname',   SubjectNames{1}, ...
    'tag',           'Avg: deviant')

sAvgData =
1x4 struct array with fields:
    iStudy
    iItem
    ...

In order to exclude the low-pass filtered files from this selection, you can add another process that will refine the selection. Use the script generator again to create a template call for another process, then copy-paste it in your script.

  • In Process1: Select any recordings file (we will not run anything, just generate a script).
  • Select process File > Select files: By tag: Search="low", Search the file name, Ignore the files.

    select_tag.gif

  • Select the menu Generate .m script (make sure you do not overwrite the script you are currently working on), then close the pipeline editor.

  • Copy-paste the generated call to process_select_tag into your main script and edit it as needed.

Now the file selection part of your script should look like this, and should return only two files:

% Process: Select data files in: Subject01/*/Avg: Deviant
sAvgData = bst_process('CallProcess', 'process_select_files_data', [], [], ...
    'subjectname',   SubjectNames{1}, ...
    'tag',           'Avg: deviant');
% Process: Ignore file names with tag: low
sAvgData = bst_process('CallProcess', 'process_select_tag', sAvgData, [], ...
    'tag',    'low', ...
    'search', 2, ...  % Search the file names
    'select', 2);  % Ignore the files with the tag

With this last modification, your script is more robust. It can be executed multiple times without completely changing its behavior. When you are fetching files from the database using tags or file names, always pay attention to this aspect: the database grows and the further you go, the more specific your requests have to be.

A good practice is to explicitly tag the output files your script generates if you need to fetch them later. You can use the processes File > Add tag and File > Set name (see the example below).

  • add_tag.gif
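
From a script, the same tagging can be done with the process_add_tag process. The sketch below is only an indication: the option names ('tag', 'output') are assumptions, so generate the exact call with the menu Generate .m script as described above before using it.

% Process: Add tag to the filtered averages (option names assumed, confirm with the script generator)
sAvgDataLow = bst_process('CallProcess', 'process_add_tag', sAvgDataLow, [], ...
    'tag',    'lowpass_30Hz', ...
    'output', 1);   % 1 = add the tag to the file comment (assumed)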

Starting Brainstorm

Brainstorm must be running in the background for these scripts to run properly. The interface doesn't have to be visible on the screen, but the database engine must be running for processing requests. At the beginning of your script, you can explicitly start or restart Brainstorm.

  • brainstorm: Start Brainstorm with the regular GUI.

  • brainstorm nogui: Start in silent mode. The Java GUI is created but hidden, the progress bar is not shown, all the processes run without user interactions, using default options instead of asking interactively. Visualization figures opened in the processing scripts are still created and made visible.

  • brainstorm server: Start in headless mode, for execution on a distant computation server that does not have any graphical capability or display attached to it. In this mode, none of the Java GUI elements are created and the Matlab figures are not displayed. Whether you can add screen captures to your execution reports depends mostly on the Matlab version available on your server (see the section Running scripts on a cluster below).

  • brainstorm <script.m> <parameters>: Start Brainstorm in server mode, execute a script and quit Brainstorm. This allows executing any Matlab or Brainstorm script from the command line. It also works with the compiled version of Brainstorm, executed with the free MATLAB Runtime (see the installation instructions, section "without Matlab"). Add the full path to the script and the parameters to the command line:

    • Windows: brainstorm3.bat <script.m> <parameters>

    • Linux/MacOS: brainstorm3.command <MATLABROOT> <script.m> <parameters>
      MATLABROOT: Matlab Runtime installation folder, eg. /usr/local/MATLAB_Runtime/v98/

  • brainstorm ... local: Instead of using a user-defined brainstorm_db folder, Brainstorm uses the default database folder: $HOME/.brainstorm/local_db

If you want to start Brainstorm only if it is not already running, you can use the following code:

if ~brainstorm('status')
    brainstorm nogui
end

To select a specific protocol at the beginning of your script:

ProtocolName = 'TutorialIntroduction';
% Get the protocol index
iProtocol = bst_get('Protocol', ProtocolName);
if isempty(iProtocol)
    error(['Unknown protocol: ' ProtocolName]);
end
% Select the current protocol
gui_brainstorm('SetCurrentProtocol', iProtocol);

To delete the protocol and start over:

% Delete existing protocol
gui_brainstorm('DeleteProtocol', ProtocolName);
% Create new protocol
gui_brainstorm('CreateProtocol', ProtocolName, 0, 0);

Additional command line options:

brainstorm stop      % Quit Brainstorm
brainstorm update    % Download and install latest Brainstorm update (see bst_update)
brainstorm reset     % Re-initialize Brainstorm database and preferences
brainstorm digitize  % Digitize electrodes positions and head shape using a Polhemus
brainstorm setpath   % Add Brainstorm subdirectories to current path
brainstorm info      % Open Brainstorm website
brainstorm forum     % Open Brainstorm forum
brainstorm license   % Display license
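
Putting these pieces together, a typical unattended script can start and stop Brainstorm around the processing calls. This skeleton is assembled only from the calls shown above:

% Start Brainstorm without the GUI if it is not already running
if ~brainstorm('status')
    brainstorm nogui
end
% Select the protocol to work on
ProtocolName = 'TutorialIntroduction';
iProtocol = bst_get('Protocol', ProtocolName);
if isempty(iProtocol)
    error(['Unknown protocol: ' ProtocolName]);
end
gui_brainstorm('SetCurrentProtocol', iProtocol);
% Run the analysis and save the execution report
bst_report('Start');
% ... bst_process('CallProcess', ...) calls go here ...
ReportFile = bst_report('Save');
% Quit Brainstorm when everything is done
brainstorm stop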

Database requests

The functions bst_get and bst_set allow you to query the database, access the configuration of the software and modify some display parameters. The complete reference documentation of these functions is included directly in their code (brainstorm3/toolbox/core/bst_get.m and bst_set.m).

Let's start with a few simple examples:

>> ProtocolInfo = bst_get('ProtocolInfo')  % Configuration of the current protocol
ProtocolInfo =
              Comment: 'TutorialIntroduction'
              STUDIES: 'C:\Work\Protocols\TutorialIntroduction\data'
             SUBJECTS: 'C:\Work\Protocols\TutorialIntroduction\anat'
               iStudy: 6
       UseDefaultAnat: 0
    UseDefaultChannel: 0

>> isGUI = bst_get('isGUI')   % Is the Brainstorm interface displayed (0=no, 1=yes)

>> bst_set('FlipYAxis', 1)                 % New figures will have the Y axis flipped
>> bst_set('TSDisplayMode', 'butterfly')   % New figures will use a "butterfly" view

To reference the files in the database, each protocol is subdivided into Subjects (the "anat" folder, containing the MRI, surfaces and atlases) and Studies (the "data" folder, including the recordings, channel files and all the analyses). Each Study corresponds to a sub-folder (eg. protocol/data/subject01/run01/) and is attached to only one subject.

Subjects and Studies are referenced in the protocol with a unique index, most of the time kept in variables named iSubject and iStudy. The files they contain are also referenced with indices, with variables such as iAnatomy, iSurface, iData, iHeadModel, iResults or iTimefreq. You can see these indices in the database explorer by hovering your mouse over the file and folder nodes:

db_indices.gif

Example: Getting the study structure from the variable sAvgData, defined in the script:

>> sAvgData(1)
ans =
          iStudy: 6
           iItem: 1
           ...

>> sStudy = bst_get('Study', sAvgData(1).iStudy)   % Get study struct with its index
sStudy =
                 Name: 'S01_AEF_20131218_01_600Hz_notch'
             FileName: 'Subject01/S01_AEF_20131218_01_600Hz_notch/brainstormstudy.mat'
          DateOfStudy: '13-May-2016'
    BrainStormSubject: 'Subject01/brainstormsubject.mat'   % Subject filename
            Condition: {'S01_AEF_20131218_01_600Hz_notch'} % Name of the folder
              Channel: [1x1 struct]       % Channel file
             iChannel: []                 % Not used anymore
                 Data: [1x242 struct]     % List of "data" files in the folder
            HeadModel: [1x1 struct]       % List of head models in the folder
           iHeadModel: 1                  % Default head model (file in green)
               Result: [1x244 struct]     % List of source files and links
                 Stat: [1x0 struct]       % List of statistical results
                Image: [0x0 struct]       % List of images
             NoiseCov: [1x2 struct]       % Noise(1) and data(2) covariance files
              Dipoles: [0x0 struct]       % List of dipole files in the folder
             Timefreq: [1x247 struct]     % List of time-frequency files
               Matrix: [0x0 struct]       % List of "matrix" files in the folder

Example: Getting the data structure.

% Get the structure representing the file from sStudy
>> sData = sStudy.Data(sAvgData(1).iItem)
sData =
    FileName: '..._01_600Hz_notch/data_deviant_average_160513_1329.mat'
     Comment: 'Avg: deviant (39 files)'    % Comment displayed in the database explorer
    DataType: 'recordings'                 % Type of data in the file
    BadTrial: 0                            % If 1, the trial is marked as bad

Example: Getting the subject structure.

% Get subject structure from filename (lists the files in the subject folder)
>> sSubject = bst_get('Subject', sStudy.BrainStormSubject)
sSubject =
                 Name: 'Subject01'    % Subject name, same as folder name
             Comments: ''             % Not used much
             FileName: 'Subject01/brainstormsubject.mat'
    DateOfAcquisition: ''             % Not used anymore
              Anatomy: [1x1 struct]   % List of MRI volumes
              Surface: [1x9 struct]   % List of surfaces
             iAnatomy: 1              % Index of default MRI
               iScalp: 9              % Index of default head surface
              iCortex: 4              % Index of default cortex surface
          iInnerSkull: []             % Index of default inner skull surface
          iOuterSkull: []             % Index of default outer skull surface
               iOther: []             % Not used anymore
       UseDefaultAnat: 0        % If 1: Use the default anatomy
    UseDefaultChannel: 0        % 0=one/folder, 1=one/subject, 2=one global

Example: Getting the study structure and data index from a file name.

>> DataFile = sAvgData(1).FileName
DataFile =
Subject01/..._01_600Hz_notch/data_deviant_average_160513_1329.mat

>> [sStudy, iStudy, iData] = bst_get('DataFile', DataFile)
sStudy =
                 Name: 'S01_AEF_20131218_01_600Hz_notch'
             FileName: '..._01_600Hz_notch/brainstormstudy.mat'
          DateOfStudy: '13-May-2016'
    BrainStormSubject: 'Subject01/brainstormsubject.mat'
            Condition: {'S01_AEF_20131218_01_600Hz_notch'}
              Channel: [1x1 struct]
             iChannel: []
                 Data: [1x242 struct]
            HeadModel: [1x1 struct]
           iHeadModel: 1
               Result: [1x244 struct]
                 Stat: [1x0 struct]
                Image: [0x0 struct]
             NoiseCov: [1x2 struct]
              Dipoles: [0x0 struct]
             Timefreq: [1x247 struct]
               Matrix: [0x0 struct]
iStudy =
     6
iData =
     1
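
To read the contents of this file from a script, you can use the standard Brainstorm helpers file_fullpath (converts a relative database path to an absolute path) and in_bst_data (loads a "data" file). A minimal sketch, assuming the field names of a standard recordings file:

% Absolute path of the file on disk
FullPath = file_fullpath(DataFile);
% Load the file contents: recordings matrix F, time vector, channel flags, etc.
DataMat = in_bst_data(DataFile);
disp(size(DataMat.F));      % [Nchannels x Ntime]
disp(DataMat.Comment);      % 'Avg: deviant (39 files)'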

Many other options are available for searching files in the database with bst_get. We cannot list them all in this page, but you can refer to the code of bst_get.m for more information.
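
For example, based on the usage list below, all the folders of a subject can be retrieved in two calls:

% Get the subject structure from its name, then all the studies attached to it
sSubject = bst_get('Subject', 'Subject01');
[sStudies, iStudies] = bst_get('StudyWithSubject', sSubject.FileName);
disp({sStudies.Name});      % Names of all the folders of Subject01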

function [argout1, argout2, argout3, argout4, argout5] = bst_get( varargin )
% BST_GET: Get a Brainstorm structure.
% This function is used to abstract the way that these structures are stored.
%
% USAGE (abridged):
%  ====== DIRECTORIES =====  : bst_get('UserDir'), bst_get('BrainstormHomeDir'), bst_get('BrainstormUserDir'),
%                              bst_get('BrainstormTmpDir'), bst_get('BrainstormDbDir'), bst_get('UserReportsDir'), ...
%  ====== PROTOCOLS =======  : bst_get('iProtocol'), bst_get('Protocol', ProtocolName), bst_get('ProtocolInfo'),
%                              bst_get('ProtocolSubjects'), bst_get('ProtocolStudies'), ...
%  ====== STUDIES =========  : bst_get('Study', ...), bst_get('StudyWithSubject', SubjectFile),
%                              bst_get('ChannelForStudy', iStudies), bst_get('DataFile', DataFile),
%                              bst_get('ResultsFile', ResultsFile), bst_get('TimefreqFile', TimefreqFile), ...
%  ====== SUBJECTS ========  : bst_get('Subject', ...), bst_get('SubjectCount'),
%                              bst_get('SurfaceFileByType', ...), bst_get('MriFile', MriFile), ...
%  ====== GUI =============  : bst_get('isGUI'), bst_get('Layout'), bst_get('Panel', PanelName), ...
%  ====== CONFIGURATION ===  : bst_get('Version'), bst_get('MatlabVersion'), bst_get('OsType'),
%                              bst_get('TSDisplayMode'), bst_get('ProcessOptions'), ...
%
% SEE ALSO bst_set
%
% (Full usage list and implementation: brainstorm3/toolbox/core/bst_get.m)
(GlobalData.DataBase.iProtocol == 0) 730 return; 731 end 732 % Get list of current protocol description 733 ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol); 734 ProtocolStudies = GlobalData.DataBase.ProtocolStudies(GlobalData.DataBase.iProtocol); 735 if isempty(ProtocolStudies) || isempty(ProtocolInfo) 736 return; 737 end 738 739 % Get default subject 740 sDefaultSubject = bst_get('Subject', 0); 741 % If SubjectFile is the default subject filename 742 if ~isempty(sDefaultSubject) && ~isempty(sDefaultSubject.FileName) && file_compare( SubjectFile{1}, sDefaultSubject.FileName) 743 % Get all the subjects files that use default anatomy 744 ProtocolSubjects = GlobalData.DataBase.ProtocolSubjects(GlobalData.DataBase.iProtocol); 745 iSubjectUseDefaultAnat = find([ProtocolSubjects.Subject.UseDefaultAnat]); 746 if isempty(iSubjectUseDefaultAnat) 747 return 748 end 749 SubjectFile = {ProtocolSubjects.Subject(iSubjectUseDefaultAnat).FileName}; 750 % Also updates inter-subject node 751 isInterSubject = 1; 752 else 753 isInterSubject = 0; 754 end 755 % Search all the current protocol's studies 756 iStudies = []; 757 for i=1:length(SubjectFile) 758 iStudies = [iStudies, find(file_compare({ProtocolStudies.Study.BrainStormSubject}, SubjectFile{i}))]; 759 end 760 % Return results 761 if ~isempty(iStudies) 762 % Remove "analysis_intra" and "default_study" studies from list 763 if ~IntraStudies 764 iStudies(strcmpi({ProtocolStudies.Study(iStudies).Name}, bst_get('DirAnalysisIntra'))) = []; 765 end 766 if ~DefaultStudies 767 iStudies(strcmpi({ProtocolStudies.Study(iStudies).Name}, bst_get('DirDefaultStudy'))) = []; 768 end 769 % Return studies 770 argout1 = ProtocolStudies.Study(iStudies); 771 argout2 = iStudies; 772 else 773 argout1 = repmat(db_template('Study'), 0); 774 argout2 = []; 775 end 776 % Add inter-subject node, if needed 777 if isInterSubject 778 [sInterStudy, iInterStudy] = bst_get('AnalysisInterStudy'); 779 argout1 = [argout1, sInterStudy]; 780 argout2 = [argout2, iInterStudy]; 781 end 782 783 784 %% ==== STUDY WITH CONDITION PATH ==== 785 % USAGE: [sStudies, iStudies] = bst_get('StudyWithCondition', ConditionPath) 786 % 787 % INPUT: ConditionPath 788 % - 'SubjectName/conditionName' : Target condition for the specified subject 789 % - 'SubjectName/@intra' : Intra-subject condition for the subject 790 % - 'SubjectName/@default_study' : Default condition for the subject (where the subject's shared files are stored) 791 % - '*/conditionName' : Target condition for all the subjects 792 % - '@inter' : Inter-subject condition 793 % - '@default_study' : Protocol's default condition (where the protocol's shared files are stored) 794 795 case 'StudyWithCondition' 796 % Parse inputs 797 if (nargin ~= 2) || ~ischar(varargin{2}) 798 error('Invalid call to bst_get()'); 799 end 800 ConditionPath = varargin{2}; 801 % Get list of current protocol description 802 ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol); 803 ProtocolStudies = GlobalData.DataBase.ProtocolStudies(GlobalData.DataBase.iProtocol); 804 if isempty(ProtocolStudies) || isempty(ProtocolInfo) 805 return; 806 end 807 808 % ConditionPath = @inter 809 if strcmpi(ConditionPath, '@inter') 810 iStudy = -2; 811 argout2 = iStudy; 812 argout1 = bst_get('Study', iStudy); 813 % ConditionPath = @default_study 814 elseif strcmpi(ConditionPath, '@default_study') 815 iStudy = -3; 816 argout2 = iStudy; 817 argout1 = bst_get('Study', iStudy); 818 % ConditionPath = SubjectName/ConditionName 819 else 820 % 
Get subject and condition names 821 condSplit = str_split(ConditionPath); 822 if (length(condSplit) ~= 2) 823 error('Invalid condition path.'); 824 end 825 SubjectName = condSplit{1}; 826 ConditionName = condSplit{2}; 827 828 % If first element is '*', search for condition in all the studies 829 if (SubjectName(1) == '*') 830 iStudies = 1:length(ProtocolStudies.Study); 831 % Else : search for condition only in studies that are linked to the subject specified in the ConditionPath 832 else 833 % Get subject 834 sSubject = bst_get('Subject', SubjectName, 1); 835 if isempty(sSubject) 836 return; 837 end 838 iStudies = find(file_compare({ProtocolStudies.Study.BrainStormSubject}, sSubject.FileName)); 839 end 840 % Nothing to search 841 if isempty(iStudies) 842 return 843 end 844 845 % Search all the current protocol's studies 846 iStudies = iStudies(cellfun(@(c)isequal(ConditionName, c), [ProtocolStudies.Study(iStudies).Condition])); 847 % Return results 848 if ~isempty(iStudies) 849 % Remove "analysis_intra" and "default_study" studies from list 850 if ~strcmpi(ConditionName, '@intra') 851 iStudies(strcmpi({ProtocolStudies.Study(iStudies).Condition}, bst_get('DirAnalysisIntra'))) = []; 852 end 853 if ~strcmpi(ConditionName, '@default_study') 854 iStudies(strcmpi({ProtocolStudies.Study(iStudies).Condition}, bst_get('DirDefaultStudy'))) = []; 855 end 856 % Sort by subject 857 if (length(iStudies) > 1) 858 SubjNameList = cell(1,length(iStudies)); 859 % For each study, get subject name 860 for i = 1:length(iStudies) 861 sSubject = bst_get('Subject', ProtocolStudies.Study(iStudies(i)).BrainStormSubject); 862 SubjNameList{i} = sSubject.Name; 863 end 864 % Sort subjects names 865 [sortSubjList, iSort] = sort(SubjNameList); 866 % Apply same sorting to studies 867 iStudies = iStudies(iSort); 868 end 869 % Return studies 870 argout1 = ProtocolStudies.Study(iStudies); 871 argout2 = iStudies; 872 else 873 argout1 = repmat(db_template('Study'), 0); 874 argout2 = []; 875 end 876 end 877 878 %% ==== CHANNEL STUDIES WITH SUBJECT ==== 879 % Usage: iStudies = bst_get('ChannelStudiesWithSubject', iSubjects, 'NoIntra') 880 case 'ChannelStudiesWithSubject' 881 % Parse inputs 882 if (nargin >= 2) && isnumeric(varargin{2}) 883 iSubjects = varargin{2}; 884 else 885 error('Invalid call to bst_get()'); 886 end 887 if (nargin == 3) && strcmpi(varargin{3}, 'NoIntra') 888 NoIntra = 1; 889 else 890 NoIntra = 0; 891 end 892 % Process all subjects 893 iStudies = []; 894 for i=1:length(iSubjects) 895 iSubject = iSubjects(i); 896 sSubject = bst_get('Subject', iSubject, 1); 897 % No subject: error 898 if isempty(sSubject) 899 continue 900 % If subject uses default channel file 901 elseif (sSubject.UseDefaultChannel ~= 0) 902 % Get default study for this subject 903 [tmp___, iStudiesNew] = bst_get('DefaultStudy', iSubject); 904 iStudies = [iStudies, iStudiesNew]; 905 % Else: get all the studies belonging to this subject 906 else 907 if NoIntra 908 [tmp___, iStudiesNew] = bst_get('StudyWithSubject', sSubject.FileName); 909 else 910 [tmp___, iStudiesNew] = bst_get('StudyWithSubject', sSubject.FileName, 'intra_subject'); 911 end 912 iStudies = [iStudies, iStudiesNew]; 913 end 914 end 915 argout1 = iStudies; 916 917 918 %% ==== STUDIES COUNT ==== 919 % Usage: [nbStudies] = bst_get('StudyCount') 920 case 'StudyCount' 921 % Nothing 922 if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 923 argout1 = 0; 924 return; 925 end 926 % Get list of current protocol studies 927 ProtocolStudies = 
GlobalData.DataBase.ProtocolStudies(GlobalData.DataBase.iProtocol); 928 argout1 = length(ProtocolStudies.Study); 929 930 %% ==== SUBJECTS COUNT ==== 931 % Usage: [nbSubjects] = bst_get('SubjectCount') 932 case 'SubjectCount' 933 if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 934 argout1 = 0; 935 return; 936 end 937 % Get list of current protocol studies 938 ProtocolSubjects = GlobalData.DataBase.ProtocolSubjects(GlobalData.DataBase.iProtocol); 939 argout1 = length(ProtocolSubjects.Subject); 940 941 %% ==== NORMALIZED SUBJECT ==== 942 case 'NormalizedSubject' 943 % Get normalized subject name 944 normSubjName = 'Group_analysis'; 945 % Try to get normalized subject 946 [sNormSubj, iNormSubj] = bst_get('Subject', normSubjName, 0); 947 % If normalized subject does not exist: create it 948 if isempty(sNormSubj) 949 % Always use default anatomy 950 UseDefaultAnat = 1; 951 % If all the subjects use a global channel file: Use global default as well 952 if ~isempty(GlobalData.DataBase.ProtocolSubjects(GlobalData.DataBase.iProtocol).Subject) && ... 953 all([GlobalData.DataBase.ProtocolSubjects(GlobalData.DataBase.iProtocol).Subject.UseDefaultChannel] == 2) 954 UseDefaultChannel = 2; 955 else 956 UseDefaultChannel = 1; 957 end 958 % Create subject 959 [sNormSubj, iNormSubj] = db_add_subject(normSubjName, [], UseDefaultAnat, UseDefaultChannel); 960 % Get full subject structure 961 [sNormSubj, iNormSubj] = bst_get('Subject', normSubjName, 0); 962 end 963 argout1 = sNormSubj; 964 argout2 = iNormSubj; 965 966 967 %% ==== ANALYSIS STUDY (INTRA) ==== 968 % Usage: [sAnalStudy, iAnalStudy] = bst_get('AnalysisIntraStudy', iSubject) 969 case 'AnalysisIntraStudy' 970 % Parse inputs 971 if (nargin == 2) 972 iSubject = varargin{2}; 973 else 974 error('Invalid call to bst_get()'); 975 end 976 % Get subject 977 sSubject = bst_get('Subject', iSubject, 1); 978 % Get studies related to subject 979 [sSubjStudies, iSubjStudies] = bst_get('StudyWithSubject', sSubject.FileName, 'intra_subject'); 980 % Look for the 'AnalysisIntra' study 981 iFound = find(cellfun(@(c)ismember(bst_get('DirAnalysisIntra'), c), {sSubjStudies.Condition})); 982 iAnalStudy = iSubjStudies(iFound); 983 sAnalStudy = sSubjStudies(iFound); 984 % Return found structure 985 argout1 = sAnalStudy; 986 argout2 = iAnalStudy; 987 988 989 %% ==== ANALYSIS STUDY (INTER) ==== 990 % Usage: [sAnalStudyInter, iAnalStudyInter] = bst_get('AnalysisInterStudy') 991 case 'AnalysisInterStudy' 992 iAnalStudyInter = -2; 993 [argout1, argout2] = bst_get('Study', iAnalStudyInter); 994 995 996 %% ==== DEFAULT STUDY ==== 997 % Usage: [sDefaulStudy, iDefaultStudy] = bst_get('DefaultStudy', iSubject) 998 % [sDefaulStudy, iDefaultStudy] = bst_get('DefaultStudy') : iSubject=0 999 % [sDefaulStudy, iDefaultStudy] = bst_get('DefaultStudy', SubjectFile) 1000 case 'DefaultStudy' 1001 if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 1002 return; 1003 end 1004 % Parse inputs 1005 if (nargin == 1) 1006 iSubject = 0; 1007 elseif (nargin == 2) && isnumeric(varargin{2}) 1008 iSubject = varargin{2}; 1009 elseif (nargin == 2) && ischar(varargin{2}) 1010 SubjectFile = varargin{2}; 1011 % Get subject attached to study 1012 [sSubject, iSubject] = bst_get('Subject', SubjectFile, 1); 1013 if isempty(sSubject) || ~sSubject.UseDefaultChannel 1014 return; 1015 end 1016 else 1017 error('Invalid call to bst_get()'); 1018 end 1019 % === DEFAULT SUBJECT === 1020 % => Return global default study 1021 if (iSubject == 0) 1022 % Get 
protocol's studies
            ProtocolStudies = GlobalData.DataBase.ProtocolStudies(GlobalData.DataBase.iProtocol);
            % Return Global default study
            argout1 = ProtocolStudies.DefaultStudy;
            argout2 = -3;
        % === NORMAL SUBJECT ===
        else
            % Get subject
            sSubject = bst_get('Subject', iSubject, 1);
            % === GLOBAL DEFAULT STUDY ===
            if sSubject.UseDefaultChannel == 2
                % Get protocol's studies
                ProtocolStudies = GlobalData.DataBase.ProtocolStudies(GlobalData.DataBase.iProtocol);
                % Return Global default study
                argout1 = ProtocolStudies.DefaultStudy;
                argout2 = -3;
            % === SUBJECT'S DEFAULT STUDY ===
            elseif sSubject.UseDefaultChannel == 1
                % Get studies related to subject
                [sSubjStudies, iSubjStudies] = bst_get('StudyWithSubject', sSubject.FileName, 'default_study');
                % Look for the 'DefaultStudy' study
                iFound = find(cellfun(@(c)ismember(bst_get('DirDefaultStudy'), c), {sSubjStudies.Condition}));
                iDefaultStudy = iSubjStudies(iFound);
                sDefaultStudy = sSubjStudies(iFound);
                % Return found structure
                argout1 = sDefaultStudy;
                argout2 = iDefaultStudy;
            end
        end


    %% ==== SUBJECT ====
    % Usage : [sSubject, iSubject] = bst_get('Subject', iSubject, isRaw)
    %         [sSubject, iSubject] = bst_get('Subject', SubjectFileName, isRaw);
    %         [sSubject, iSubject] = bst_get('Subject', SubjectName, isRaw);
    %         [sSubject, iSubject] = bst_get('Subject');
    % If isRaw is set: force to return the real brainstormsubject description
    % (ignoring whether it uses the protocol's default anatomy or not)
    case 'Subject'
        if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0)
            return;
        end
        % Get list of current protocol subjects
        ProtocolSubjects = GlobalData.DataBase.ProtocolSubjects(GlobalData.DataBase.iProtocol);
        sSubject = [];
        SubjectName = [];
        SubjectFileName = [];
        % ISRAW parameter
        if (nargin < 3)
            isRaw = 0;
        else
            isRaw = varargin{3};
        end
        % Call: bst_get('subject', iSubject, isRaw);
        if (nargin >= 2) && isnumeric(varargin{2})
            iSubject = varargin{2};
            if (iSubject > length(ProtocolSubjects.Subject))
                error('Invalid subject index.');
            end
            % If required subject is default subject (iSubject = 0)
            if (iSubject == 0)
                % Default subject available
                if ~isempty(ProtocolSubjects.DefaultSubject)
                    sSubject = ProtocolSubjects.DefaultSubject;
                % Default subject not available
                else
                    return
                end
            % Normal subject
            else
                sSubject = ProtocolSubjects.Subject(iSubject);
            end

        % Call: bst_get('subject', SubjectFileName, isRaw);
        % Call: bst_get('subject', SubjectName, isRaw);
        elseif (nargin >= 2) && isempty(varargin{2})
            % If study name is empty: use DefaultSubject
            SubjectFileName = ProtocolSubjects.DefaultSubject.FileName;
        elseif (nargin >= 2) && (ischar(varargin{2}))
            [fName, fBase, fExt] = bst_fileparts(varargin{2});
            % Argument is a Matlab .mat filename
            if strcmpi(fExt, '.mat')
                SubjectFileName = varargin{2};
            % Else : assume argument is a directory
            else
                SubjectName = file_standardize(varargin{2});
            end

        % Call: bst_get('subject'); => looking for current subject
        elseif (nargin < 2)
            % Get current subject filename in current study
            sStudy = bst_get('Study');
            if isempty(sStudy)
                return
            end
            SubjectFileName =
sStudy.BrainStormSubject; 1119 % If study's subject is not defined, get DefaultSubject 1120 if isempty(SubjectFileName) && ~isempty(ProtocolSubjects.DefaultSubject) 1121 SubjectFileName = ProtocolSubjects.DefaultSubject.FileName; 1122 end 1123 else 1124 error('Invalid call to bst_get()'); 1125 end 1126 1127 % If Subject is defined by its filename/name 1128 if isempty(sSubject) 1129 % Look in Default Subject 1130 if ~isempty(ProtocolSubjects.DefaultSubject) && (file_compare(ProtocolSubjects.DefaultSubject.FileName, SubjectFileName) ... 1131 || strcmpi(ProtocolSubjects.DefaultSubject.Name, SubjectName)) 1132 sSubject = ProtocolSubjects.DefaultSubject; 1133 iSubject = 0; 1134 % If not found : find target subject file name in normal subjects 1135 elseif ~isempty(SubjectFileName) 1136 iSubject = find(file_compare({ProtocolSubjects.Subject.FileName}, SubjectFileName), 1); 1137 sSubject = ProtocolSubjects.Subject(iSubject); 1138 elseif ~isempty(SubjectName) 1139 iSubject = find(file_compare({ProtocolSubjects.Subject.Name}, SubjectName), 1); 1140 sSubject = ProtocolSubjects.Subject(iSubject); 1141 else 1142 error('Subject name not specified.'); 1143 end 1144 end 1145 1146 % Return found subject 1147 if ~isempty(iSubject) && ~isempty(sSubject) 1148 % If subject uses default subject 1149 if sSubject.UseDefaultAnat && ~isRaw && ~isempty(ProtocolSubjects.DefaultSubject) && ~isempty(ProtocolSubjects.DefaultSubject.FileName) 1150 % Return default subject (WITH REAL SUBJECT'S NAME) 1151 argout1 = ProtocolSubjects.DefaultSubject; 1152 argout1.Name = sSubject.Name; 1153 argout1.UseDefaultAnat = sSubject.UseDefaultAnat; 1154 argout1.UseDefaultChannel = sSubject.UseDefaultChannel; 1155 argout2 = iSubject; 1156 % Else, return found subject 1157 else 1158 argout1 = sSubject; 1159 argout2 = iSubject; 1160 end 1161 end 1162 1163 1164 %% ==== SURFACE FILE ==== 1165 % Usage : [sSubject, iSubject, iSurface] = bst_get('SurfaceFile', SurfaceFile) 1166 case 'SurfaceFile' 1167 % No protocol in database 1168 if isempty(GlobalData) || isempty(GlobalData.DataBase) || isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 1169 return; 1170 end 1171 % Get list of current protocol subjects 1172 ProtocolSubjects = GlobalData.DataBase.ProtocolSubjects(GlobalData.DataBase.iProtocol); 1173 if isempty(ProtocolSubjects) 1174 return 1175 end; 1176 1177 % Parse inputs 1178 if (nargin == 2) 1179 SurfaceFile = varargin{2}; 1180 else 1181 error('Invalid call to bst_get().'); 1182 end 1183 1184 % Remove SUBJECTS path from SurfaceFile 1185 SurfaceFile = file_short(SurfaceFile); 1186 % Look for surface file in DefaultSubject 1187 if ~isempty(ProtocolSubjects.DefaultSubject) 1188 % Find the first surface that matches the SurfaceFile 1189 iSurface = find(file_compare(SurfaceFile, {ProtocolSubjects.DefaultSubject.Surface.FileName}), 1); 1190 % If a surface was found in default subject : return it 1191 if ~isempty(iSurface) 1192 argout1 = ProtocolSubjects.DefaultSubject; 1193 argout2 = 0; 1194 argout3 = iSurface; 1195 return 1196 end 1197 end 1198 % Look for surface file in all the surfaces of all subjects 1199 for iSubj = 1:length(ProtocolSubjects.Subject) 1200 % Find the first surface that matches the SurfaceFile 1201 iSurface = find(file_compare(SurfaceFile, {ProtocolSubjects.Subject(iSubj).Surface.FileName}), 1); 1202 % If a surface was found in current subject : return it 1203 if ~isempty(iSurface) 1204 argout1 = ProtocolSubjects.Subject(iSubj); 1205 argout2 = iSubj; 1206 argout3 = iSurface; 1207 return 1208 end 
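            % Example of resolving a surface file back to its owner subject with the 'SurfaceFile'
            % context documented above (illustrative sketch; the surface file name is hypothetical):
            %   [sSubject, iSubject, iSurface] = bst_get('SurfaceFile', 'Subject01/tess_cortex_pial_low.mat');
            %   disp(sSubject.Surface(iSurface).Comment);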
1209 end 1210 1211 1212 %% ==== SURFACE FILE BY TYPE ==== 1213 % Usage : [sSurface, iSurface] = bst_get('SurfaceFileByType', iSubject, SurfaceType) 1214 % [sSurface, iSurface] = bst_get('SurfaceFileByType', SubjectFile, SurfaceType) 1215 % [sSurface, iSurface] = bst_get('SurfaceFileByType', SurfaceFile, SurfaceType) 1216 % [sSurface, iSurface] = bst_get('SurfaceFileByType', MriFile, SurfaceType) 1217 % [sSurface, iSurface] = bst_get('SurfaceFileByType', ..., SurfaceType, isDefaultOnly) 1218 case 'SurfaceFileByType' 1219 % By default: return only the default surfaces of the category 1220 if (nargin < 4) 1221 isDefaultOnly = 1; 1222 else 1223 isDefaultOnly = varargin{4}; 1224 end 1225 % Get subject 1226 if isempty(varargin{2}) 1227 % Get default subject 1228 sSubject = bst_get('Subject', 0); 1229 elseif ischar(varargin{2}) 1230 FileName = varargin{2}; 1231 sSubject = bst_get('AnyFile', FileName); 1232 else 1233 iSubject = varargin{2}; 1234 sSubject = bst_get('Subject', iSubject); 1235 end 1236 % Error handling 1237 if isempty(sSubject) 1238 disp('BST> Warning: Subject not found.'); 1239 return; 1240 elseif isempty(sSubject.Surface) 1241 return; 1242 end 1243 SurfaceType = varargin{3}; 1244 1245 % === RETURN ONLY DEFAULTS === 1246 if isDefaultOnly 1247 % Look for required surface type 1248 field = ['i' SurfaceType]; 1249 if ~isfield(sSubject, field) || isempty(sSubject.(field)) 1250 return 1251 end 1252 argout1 = sSubject.Surface(sSubject.(field)); 1253 argout2 = sSubject.(field); 1254 % === RETURN ALL THE SURFACES === 1255 else 1256 % Build the list of tagged surfaces 1257 fileTag = ['_' lower(SurfaceType)]; 1258 iTargetList = find(cellfun(@(c)~isempty(strfind(c, fileTag)), {sSubject.Surface.FileName})); 1259 % Put the default cortex on top of the list 1260 iDefaults = intersect([sSubject.iCortex, sSubject.iScalp, sSubject.iInnerSkull, sSubject.iOuterSkull, sSubject.iFibers, sSubject.iFEM], iTargetList); 1261 if ~isempty(iDefaults) 1262 iTargetList = [iDefaults, setdiff(iTargetList, iDefaults)]; 1263 end 1264 % Return all cortex surfaces 1265 argout1 = sSubject.Surface(iTargetList); 1266 argout2 = iTargetList; 1267 end 1268 1269 1270 %% ==== MRI FILE ==== 1271 % Usage : [sSubject, iSubject, iMri] = bst_get('MriFile', MriFile) 1272 case 'MriFile' 1273 % No protocol in database 1274 if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 1275 return; 1276 end 1277 % Get list of current protocol subjects 1278 ProtocolSubjects = GlobalData.DataBase.ProtocolSubjects(GlobalData.DataBase.iProtocol); 1279 if isempty(ProtocolSubjects) 1280 return 1281 end 1282 1283 % Parse inputs 1284 if (nargin == 2) 1285 MriFile = varargin{2}; 1286 else 1287 error('Invalid call to bst_get().'); 1288 end 1289 1290 % Remove SUBJECTS path from MriFile 1291 MriFile = file_short(MriFile); 1292 % Look for MRI file in DefaultSubject 1293 if ~isempty(ProtocolSubjects.DefaultSubject) 1294 % Find the first MRI that matches the MriFile 1295 iMri = find(file_compare(MriFile, {ProtocolSubjects.DefaultSubject.Anatomy.FileName}), 1); 1296 % If a MRI was found in default subject : return it 1297 if ~isempty(iMri) 1298 argout1 = ProtocolSubjects.DefaultSubject; 1299 argout2 = 0; 1300 argout3 = iMri; 1301 return 1302 end 1303 end 1304 % Look for MRI file in all the MRIs of all subjects 1305 for iSubj = 1:length(ProtocolSubjects.Subject) 1306 % Find the first MRI that matches the MriFile 1307 iMri = find(file_compare(MriFile, {ProtocolSubjects.Subject(iSubj).Anatomy.FileName}), 1); 1308 % If a MRI was found 
in current subject : return it 1309 if ~isempty(iMri) 1310 argout1 = ProtocolSubjects.Subject(iSubj); 1311 argout2 = iSubj; 1312 argout3 = iMri; 1313 return 1314 end 1315 end 1316 1317 1318 %% ==== CHANNEL FILE ==== 1319 % Usage: [sStudy, iStudy, iChannel] = bst_get('ChannelFile', ChannelFile) 1320 case 'ChannelFile' 1321 % No protocol in database 1322 if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 1323 return; 1324 end 1325 ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol); 1326 % Parse inputs 1327 if (nargin == 2) 1328 ChannelFile = varargin{2}; 1329 ChannelFile = strrep(ChannelFile, ProtocolInfo.STUDIES, ''); 1330 else 1331 error('Invalid call to bst_get().'); 1332 end 1333 % Look for Channel file in all the surfaces of all subjects 1334 [argout1, argout2, argout3] = findFileInStudies('Channel', 'FileName', ChannelFile); 1335 1336 1337 %% ==== CHANNEL FILE FOR STUDY ==== 1338 % Usage: [ChannelFile, sStudy, iStudy] = bst_get('ChannelFileForStudy', StudyFile/DataFile) 1339 case 'ChannelFileForStudy' 1340 % Parse inputs 1341 if (nargin == 2) 1342 StudyFile = varargin{2}; 1343 else 1344 error('Invalid call to bst_get().'); 1345 end 1346 % Get study in database 1347 [sStudy, iStudy] = bst_get('Study', StudyFile); 1348 % If data file instead of Study file 1349 if isempty(sStudy) 1350 [sStudy, iStudy] = bst_get('AnyFile', StudyFile); 1351 end 1352 sChannel = bst_get('ChannelForStudy', iStudy); 1353 if ~isempty(sChannel) 1354 argout1 = sChannel.FileName; 1355 argout2 = sStudy; 1356 argout3 = iStudy; 1357 else 1358 argout1 = []; 1359 end 1360 1361 1362 %% ==== CHANNEL STRUCT FOR STUDY ==== 1363 % Usage: [sChannel, iChanStudy] = bst_get('ChannelForStudy', iStudies) 1364 case 'ChannelForStudy' 1365 % Parse inputs 1366 if (nargin == 2) 1367 iStudies = varargin{2}; 1368 else 1369 error('Invalid call to bst_get().'); 1370 end 1371 iChanStudies = []; 1372 sListChannel = []; 1373 for i = 1:length(iStudies) 1374 % Get study 1375 iStudy = iStudies(i); 1376 sStudy = bst_get('Study', iStudy); 1377 if isempty(sStudy) 1378 continue; 1379 end 1380 iChanStudy = iStudy; 1381 % === Analysis-Inter node === 1382 iAnalysisInter = -2; 1383 iGlobalDefaultStudy = -3; 1384 if (iStudy == iAnalysisInter) 1385 % If no channel file is defined in 'Analysis-intra' node: look in 1386 if isempty(sStudy.Channel) 1387 % Get global default study 1388 sStudy = bst_get('Study', iGlobalDefaultStudy); 1389 iChanStudy = iGlobalDefaultStudy; 1390 end 1391 % === All other nodes === 1392 else 1393 % Get subject attached to study 1394 [sSubject, iSubject] = bst_get('Subject', sStudy.BrainStormSubject, 1); 1395 if isempty(sSubject) 1396 return; 1397 end 1398 % Subject uses default channel/headmodel 1399 if (sSubject.UseDefaultChannel ~= 0) 1400 [sStudy, iChanStudy] = bst_get('DefaultStudy', iSubject); 1401 if isempty(sStudy) 1402 return 1403 end 1404 end 1405 end 1406 iChanStudies = [iChanStudies, iChanStudy]; 1407 sListChannel = [sListChannel, sStudy.Channel]; 1408 end 1409 % Return Channel structure 1410 argout1 = sListChannel; 1411 argout2 = iChanStudies; 1412 1413 %% ==== CHANNEL MODALITIES ===== 1414 % Usage: [Modalities, DispMod, DefMod] = bst_get('ChannelModalities', ChannelFile) 1415 % [Modalities, DispMod, DefMod] = bst_get('ChannelModalities', DataFile/ResultsFile/TimefreqFile...) 
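    % Example of querying the modalities available in a channel file (illustrative sketch;
    % the channel file name below is hypothetical):
    %   [Modalities, DispMod, DefMod] = bst_get('ChannelModalities', 'Subject01/Run01/channel_ctf_acc1.mat');
    %   % For a CTF MEG recording, DefMod is typically 'MEG'.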
1416 case 'ChannelModalities' 1417 % Get input file 1418 [sStudy, iStudy, iItem, DataType, sItem] = bst_get('AnyFile', varargin{2}); 1419 % If channel file 1420 if strcmpi(DataType, 'channel') 1421 sChannel = sItem; 1422 else 1423 sChannel = bst_get('ChannelForStudy', iStudy); 1424 end 1425 % Return modalities 1426 if ~isempty(sChannel) 1427 % Get all modalities 1428 if ~isempty(sChannel.DisplayableSensorTypes) 1429 % Return the displayable sensors on top of the list 1430 argout1 = cat(2, sChannel.DisplayableSensorTypes, setdiff(sChannel.Modalities, sChannel.DisplayableSensorTypes)); 1431 else 1432 argout1 = sChannel.Modalities; 1433 end 1434 % Get only sensors that have spatial representation 1435 argout2 = sChannel.DisplayableSensorTypes; 1436 % Default candidates 1437 if ~isempty(argout2) 1438 defList = argout2; 1439 else 1440 defList = argout1; 1441 end 1442 if isempty(defList) 1443 return; 1444 end 1445 % Remove EDF and BDF from the default list 1446 defList = setdiff(defList, {'EDF','BDF','KDF'}); 1447 % Get default modality 1448 if ismember('SEEG', defList) 1449 argout3 = 'SEEG'; 1450 elseif ismember('ECOG', defList) 1451 argout3 = 'ECOG'; 1452 elseif any(ismember({'MEG','MEG GRAD','MEG MAG'}, defList)) 1453 argout3 = 'MEG'; 1454 elseif ismember('EEG', defList) 1455 argout3 = 'EEG'; 1456 elseif ismember('NIRS', defList) 1457 argout3 = 'NIRS'; 1458 else 1459 argout3 = defList{1}; 1460 end 1461 % Place the default on top of the lists 1462 if ismember(argout3, argout1) 1463 argout1(strcmpi(argout1, argout3)) = []; 1464 argout1 = cat(2, argout3, argout1); 1465 end 1466 if ismember(argout3, argout2) 1467 argout2(strcmpi(argout2, argout3)) = []; 1468 argout2 = cat(2, argout3, argout2); 1469 end 1470 else 1471 argout1 = []; 1472 end 1473 1474 1475 %% ==== TIMEFREQ DISPLAY MODALITIES ==== 1476 % Usage: DisplayMod = bst_get('TimefreqDisplayModalities', TimefreqFile) 1477 case 'TimefreqDisplayModalities' 1478 TimefreqFile = varargin{2}; 1479 % Load sensor names from file 1480 TimefreqMat = in_bst_timefreq(TimefreqFile, 0, 'RowNames'); 1481 % Get channel file 1482 ChannelFile = bst_get('ChannelFileForStudy', TimefreqFile); 1483 if isempty(ChannelFile) 1484 return; 1485 end 1486 % Load channel file 1487 ChannelMat = load(file_fullpath(ChannelFile), 'Channel'); 1488 % Get channels that are present in the file 1489 [tmp__,I,J] = intersect({ChannelMat.Channel.Name}, TimefreqMat.RowNames); 1490 FileMod = unique({ChannelMat.Channel(I).Type}); 1491 % Check if only one type of gradiometer is selected 1492 if isequal(FileMod, {'MEG GRAD'}) && all(cellfun(@(c)(c(end)=='2'), {ChannelMat.Channel(I).Name})) 1493 argout1 = {'MEG GRAD2'}; 1494 elseif isequal(FileMod, {'MEG GRAD'}) && all(cellfun(@(c)(c(end)=='3'), {ChannelMat.Channel(I).Name})) 1495 argout1 = {'MEG GRAD3'}; 1496 % Keep only the modalities that can be displayed (as topography) 1497 else 1498 argout1 = intersect(FileMod, {'MEG','MEG GRAD','MEG MAG','EEG','ECOG','SEEG','NIRS'}); 1499 end 1500 1501 1502 %% ==== CHANNEL DEVICE ==== 1503 % Usage: Device = bst_get('ChannelDevice', ChannelFile) 1504 case 'ChannelDevice' 1505 ChannelFile = varargin{2}; 1506 if ~isempty(strfind(ChannelFile, 'vectorview306')) 1507 Device = 'Vectorview306'; 1508 elseif ~isempty(strfind(ChannelFile, 'ctf_acc1')) 1509 Device = 'CTF'; 1510 elseif ~isempty(strfind(ChannelFile, '4d_acc1')) 1511 Device = '4D'; 1512 elseif ~isempty(strfind(ChannelFile, 'babysquid')) 1513 Device = 'BabySQUID'; 1514 elseif ~isempty(strfind(ChannelFile, 'babymeg')) 1515 Device = 'BabyMEG'; 1516 
elseif ~isempty(strfind(ChannelFile, 'kit')) 1517 Device = 'KIT'; 1518 elseif ~isempty(strfind(ChannelFile, 'ricoh')) 1519 Device = 'RICOH'; 1520 elseif ~isempty(strfind(ChannelFile, 'kriss')) 1521 Device = 'KRISS'; 1522 elseif ~isempty(strfind(ChannelFile, 'nirsbrs')) 1523 Device = 'NIRS-BRS'; 1524 else 1525 Device = ''; 1526 end 1527 argout1 = Device; 1528 1529 1530 %% ==== HEADMODEL STRUCT FOR STUDY ==== 1531 % Usage: [sHeadModel] = bst_get('HeadModelForStudy', iStudy) 1532 case 'HeadModelForStudy' 1533 % Parse inputs 1534 if (nargin == 2) 1535 iStudy = varargin{2}; 1536 else 1537 error('Invalid call to bst_get().'); 1538 end 1539 % Get study 1540 sStudy = bst_get('Study', iStudy); 1541 % === Analysis-Inter node === 1542 iAnalysisInter = -2; 1543 iGlobalDefaultStudy = -3; 1544 if (iStudy == iAnalysisInter) 1545 % If no channel file is defined in 'Analysis-intra' node: look in 1546 if isempty(sStudy.iHeadModel) 1547 % Get global default study 1548 sStudy = bst_get('Study', iGlobalDefaultStudy); 1549 end 1550 % === All other nodes === 1551 else 1552 % Get subject attached to study 1553 [sSubject, iSubject] = bst_get('Subject', sStudy.BrainStormSubject, 1); 1554 if isempty(sSubject) 1555 return; 1556 end 1557 % Subject uses default channel/headmodel 1558 if (sSubject.UseDefaultChannel ~= 0) 1559 sStudy = bst_get('DefaultStudy', iSubject); 1560 if isempty(sStudy) 1561 return 1562 end 1563 end 1564 end 1565 % Return HeadModel structure 1566 if ~isempty(sStudy.iHeadModel) 1567 argout1 = sStudy.HeadModel(sStudy.iHeadModel(1)); 1568 else 1569 argout1 = []; 1570 end 1571 1572 1573 %% ==== HEADMODEL FILE ==== 1574 % Usage: [sStudy, iStudy, iHeadModel] = bst_get('HeadModelFile', HeadModelFile, iStudies) 1575 % [sStudy, iStudy, iHeadModel] = bst_get('HeadModelFile', HeadModelFile) 1576 case 'HeadModelFile' 1577 % No protocol in database 1578 if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 1579 return; 1580 end 1581 % Input #2: HeadModelFile 1582 ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol); 1583 HeadModelFile = varargin{2}; 1584 HeadModelFile = strrep(HeadModelFile, ProtocolInfo.STUDIES, ''); 1585 % Input #3: iStudies 1586 if (nargin < 3) 1587 iStudies = []; 1588 else 1589 iStudies = varargin{3}; 1590 end 1591 % Look for surface file in all the surfaces of all subjects 1592 [argout1, argout2, argout3] = findFileInStudies('HeadModel', 'FileName', HeadModelFile, iStudies); 1593 1594 %% ==== NOISECOV FILE ==== 1595 % Usage: [sStudy, iStudy, iNoiseCov] = bst_get('NoiseCovFile', NoiseCovFile, iStudies) 1596 % [sStudy, iStudy, iNoiseCov] = bst_get('NoiseCovFile', NoiseCovFile) 1597 % Usage: [sStudy, iStudy, iNoiseCov] = bst_get('DataCovFile', NoiseCovFile, iStudies) 1598 % [sStudy, iStudy, iNoiseCov] = bst_get('DataCovFile', NoiseCovFile) 1599 case {'NoiseCovFile', 'DataCovFile'} 1600 % No protocol in database 1601 if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 1602 return; 1603 end 1604 % Input #2: NoiseCovFile 1605 ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol); 1606 NoiseCovFile = varargin{2}; 1607 NoiseCovFile = strrep(NoiseCovFile, ProtocolInfo.STUDIES, ''); 1608 % Input #3: iStudies 1609 if (nargin < 3) 1610 iStudies = []; 1611 else 1612 iStudies = varargin{3}; 1613 end 1614 % Look for surface file in all the surfaces of all subjects 1615 [argout1, argout2, argout3] = findFileInStudies('NoiseCov', 'FileName', NoiseCovFile, iStudies); 1616 1617 1618 %% ==== DATA FILE 
==== 1619 % Usage: [sStudy, iStudy, iData] = bst_get('DataFile', DataFile, iStudies) 1620 % [sStudy, iStudy, iData] = bst_get('DataFile', DataFile) 1621 case 'DataFile' 1622 % No protocol in database 1623 if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 1624 return; 1625 end 1626 % Input #2: DataFile 1627 ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol); 1628 DataFile = varargin{2}; 1629 DataFile = strrep(DataFile, ProtocolInfo.STUDIES, ''); 1630 % Input #3: iStudies 1631 if (nargin < 3) 1632 iStudies = []; 1633 else 1634 iStudies = varargin{3}; 1635 end 1636 % Look for file in all the studies 1637 [argout1, argout2, argout3] = findFileInStudies('Data', 'FileName', DataFile, iStudies); 1638 1639 1640 %% ==== DATA FOR DATA LIST ==== 1641 % Usage: [iFoundData] = bst_get('DataForDataList', iStudy, DataListName) 1642 case 'DataForDataList' 1643 iStudy = varargin{2}; 1644 DataListName = varargin{3}; 1645 % Get study structure 1646 sStudy = bst_get('Study', iStudy); 1647 % Get all the data files held by this datalist 1648 listComments = cellfun(@(c)deblank(str_remove_parenth(c)), {sStudy.Data.Comment}, 'UniformOutput', 0); 1649 iFoundData = find(strcmp(listComments, DataListName)); 1650 % Return found data files 1651 argout1 = iFoundData; 1652 1653 %% ==== MATRIX FOR MATRIX LIST ==== 1654 % Usage: [iFoundMatrix] = bst_get('MatrixForMatrixList', iStudy, MatrixListName) 1655 case 'MatrixForMatrixList' 1656 iStudy = varargin{2}; 1657 MatrixListName = varargin{3}; 1658 % Get study structure 1659 sStudy = bst_get('Study', iStudy); 1660 % Get all the matrix files held by this datalist 1661 listComments = cellfun(@(c)deblank(str_remove_parenth(c)), {sStudy.Matrix.Comment}, 'UniformOutput', 0); 1662 iFoundMatrix = find(strcmp(listComments, MatrixListName)); 1663 % Return found matrix files 1664 argout1 = iFoundMatrix; 1665 1666 1667 %% ==== DATA FOR STUDY (INCLUDING SHARED STUDIES) ==== 1668 % Usage: [iStudies, iDatas] = bst_get('DataForStudy', iStudy) 1669 case 'DataForStudy' 1670 % Get target study 1671 iStudy = varargin{2}; 1672 sStudy = bst_get('Study', iStudy); 1673 isDefaultStudy = strcmpi(sStudy.Name, bst_get('DirDefaultStudy')); 1674 isGlobalDefault = (iStudy == -3); 1675 1676 % If study is the global default study 1677 sStudies = []; 1678 iStudies = []; 1679 if isGlobalDefault 1680 % Get all the subjects of the protocol 1681 nbSubjects = bst_get('SubjectCount'); 1682 for iSubject = 1:nbSubjects 1683 sSubject = bst_get('Subject', iSubject, 1); 1684 if sSubject.UseDefaultChannel 1685 [tmp_sStudies, tmp_iStudies] = bst_get('StudyWithSubject', sSubject.FileName); 1686 sStudies = [sStudies, tmp_sStudies]; 1687 iStudies = [iStudies, tmp_iStudies]; 1688 end 1689 end 1690 % Else, if study is a subject's default study (ie. 
channel file is shared by all studies of one subject) 1691 elseif isDefaultStudy 1692 % Get all the subject's studies 1693 [sStudies, iStudies] = bst_get('StudyWithSubject', sStudy.BrainStormSubject, 'intra_subject', 'default_study'); 1694 else 1695 % Normal: one channel per condition 1696 sStudies = sStudy; 1697 iStudies = iStudy; 1698 end 1699 % Get all the DataFiles for all these studies 1700 for i = 1:length(sStudies) 1701 nData = length(sStudies(i).Data); 1702 argout1 = [argout1, repmat(iStudies(i), [1,nData])]; 1703 argout2 = [argout2, 1:nData]; 1704 end 1705 1706 1707 %% ==== DATA FOR STUDIES (INCLUDING SHARED STUDIES) ==== 1708 % Usage: [iStudies, iDatas] = bst_get('DataForStudies', iStudies) 1709 case 'DataForStudies' 1710 iStudies = varargin{2}; 1711 for i = 1:length(iStudies) 1712 [tmp_iStudies, tmp_iDatas] = bst_get('DataForStudy', iStudies(i)); 1713 argout1 = [argout1, tmp_iStudies]; 1714 argout2 = [argout2, tmp_iDatas]; 1715 end 1716 1717 %% ==== DATA FILE FOR CHANNEL FILE ==== 1718 % Usage: DataFiles = bst_get('DataForChannelFile', ChannelFile) 1719 case 'DataForChannelFile' 1720 ChannelFile = varargin{2}; 1721 DataFiles = {}; 1722 % Get study for the given channel file 1723 [sStudy, iStudy] = bst_get('ChannelFile', ChannelFile); 1724 if isempty(sStudy) 1725 return; 1726 end 1727 % Get dependent data files 1728 [iStudies, iDatas] = bst_get('DataForStudy', iStudy); 1729 % Get all the Data filenames 1730 for i = 1:length(iStudies) 1731 sStudy = bst_get('Study', iStudies(i)); 1732 DataFiles = cat(2, DataFiles, {sStudy.Data(iDatas(i)).FileName}); 1733 end 1734 argout1 = DataFiles; 1735 1736 1737 %% ==== RESULTS FILE ==== 1738 % Usage: [sStudy, iStudy, iResult] = bst_get('ResultsFile', ResultsFile, iStudies) 1739 % [sStudy, iStudy, iResult] = bst_get('ResultsFile', ResultsFile) 1740 case 'ResultsFile' 1741 % No protocol in database 1742 if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 1743 return; 1744 end 1745 % Input #2: ResultsFile 1746 ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol); 1747 ResultsFile = varargin{2}; 1748 ResultsFile = strrep(ResultsFile, ProtocolInfo.STUDIES, ''); 1749 % Input #3: iStudies 1750 if (nargin < 3) 1751 iStudies = []; 1752 else 1753 iStudies = varargin{3}; 1754 end 1755 % Look for surface file in all the surfaces of all subjects 1756 [argout1, argout2, argout3] = findFileInStudies('Result', 'FileName', ResultsFile, iStudies); 1757 1758 1759 %% ==== RESULTS FOR DATA FILE ==== 1760 % Usage: [sStudy, iStudy, iResults] = bst_get('ResultsForDataFile', DataFile) : search the whole protocol 1761 % Usage: [sStudy, iStudy, iResults] = bst_get('ResultsForDataFile', DataFile, iStudies) : search only the specified studies 1762 case 'ResultsForDataFile' 1763 % No protocol in database 1764 if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 1765 return; 1766 end 1767 ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol); 1768 % Input #2: DataFile 1769 DataFile = varargin{2}; 1770 DataFile = strrep(DataFile, ProtocolInfo.STUDIES, ''); 1771 % Determine in which studies to search for ResultsFile 1772 if (nargin >= 3) 1773 % Studies specified in argument 1774 iStudy = varargin{3}; 1775 else 1776 % Get study in which DataFile is located 1777 [sStudy, iStudy] = bst_get('DataFile', DataFile); 1778 if isempty(iStudy) 1779 return; 1780 end 1781 end 1782 % Search selected studies 1783 [argout1, argout2, argout3] = findFileInStudies('Result', 'DataFile', 
DataFile, iStudy);


    %% ==== STAT FILE ====
    % Usage: [sStudy, iStudy, iData] = bst_get('StatFile', StatFile, iStudies)
    %        [sStudy, iStudy, iData] = bst_get('StatFile', StatFile)
    case 'StatFile'
        % No protocol in database
        if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0)
            return;
        end
        % Input #2: StatFile
        ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol);
        StatFile = varargin{2};
        StatFile = strrep(StatFile, ProtocolInfo.STUDIES, '');
        % Input #3: iStudies
        if (nargin < 3)
            iStudies = [];
        else
            iStudies = varargin{3};
        end
        % Look for the stat file in all the studies
        [argout1, argout2, argout3] = findFileInStudies('Stat', 'FileName', StatFile, iStudies);


    %% ==== STAT FOR DATA FILE ====
    % Usage: [sStudy, iStudy, iResults] = bst_get('StatForDataFile', DataFile)           : search the whole protocol
    % Usage: [sStudy, iStudy, iResults] = bst_get('StatForDataFile', DataFile, iStudies) : search only the specified studies
    case 'StatForDataFile'
        % No protocol in database
        if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0)
            return;
        end
        ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol);
        % Parse inputs
        if (nargin >= 2)
            DataFile = varargin{2};
            DataFile = strrep(DataFile, ProtocolInfo.STUDIES, '');
        else
            error('Invalid call to bst_get().');
        end
        % Determine in which studies to search for the stat files
        if (nargin >= 3)
            % Studies specified in argument
            iStudies = varargin{3};
        else
            % Get study in which DataFile is located
            [sStudies, iStudies] = bst_get('DataFile', DataFile);
            if isempty(iStudies)
                return;
            end
        end
        % Search selected studies
        [argout1, argout2, argout3] = findFileInStudies('Stat', 'DataFile', DataFile, iStudies);

    %% ==== TIMEFREQ FILE ====
    % Usage: [sStudy, iStudy, iTimefreq] = bst_get('TimefreqFile', TimefreqFile, iStudies)
    %        [sStudy, iStudy, iTimefreq] = bst_get('TimefreqFile', TimefreqFile)
    case 'TimefreqFile'
        % No protocol in database
        if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0)
            return;
        end
        % Input #2: TimefreqFile
        ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol);
        TimefreqFile = varargin{2};
        TimefreqFile = strrep(TimefreqFile, ProtocolInfo.STUDIES, '');
        % Remove optional RefRowName
        iPipe = find(TimefreqFile == '|', 1);
        if ~isempty(iPipe)
            TimefreqFile = TimefreqFile(1:iPipe-1);
        end
        % Input #3: iStudies
        if (nargin < 3)
            iStudies = [];
        else
            iStudies = varargin{3};
        end
        % Look for the time-frequency file in all the studies
        [argout1, argout2, argout3] = findFileInStudies('Timefreq', 'FileName', TimefreqFile, iStudies);

    %% ==== TIMEFREQ FOR FILE ====
    % Usage: [sStudy, iStudy, iTimefreq] = bst_get('TimefreqForFile', FileName, iStudies) : search only the specified studies
    %        [sStudy, iStudy, iTimefreq] = bst_get('TimefreqForFile', FileName)           : search the whole protocol
    case 'TimefreqForFile'
        % No protocol in database
        if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0)
            return;
        end
        ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol);
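        % Example of the file-search contexts above (illustrative sketch; the data file name is hypothetical):
        %   % Find all the time-frequency files computed from one recordings file:
        %   [sStudy, iStudy, iTf] = bst_get('TimefreqForFile', 'Subject01/Run01/data_deviant_average.mat');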
1873 % Parse inputs 1874 if (nargin >= 2) 1875 FileName = varargin{2}; 1876 FileName = strrep(FileName, ProtocolInfo.STUDIES, ''); 1877 else 1878 error('Invalid call to bst_get().'); 1879 end 1880 % Get study in which file is located 1881 if (nargin >= 3) 1882 iStudies = varargin{3}; 1883 [sStudy, iStudy, iFile, DataType] = bst_get('AnyFile', FileName, iStudies); 1884 else 1885 [sStudy, iStudy, iFile, DataType] = bst_get('AnyFile', FileName); 1886 end 1887 % If file was not found 1888 if isempty(iStudy) 1889 return; 1890 end 1891 % Search direct dependent files 1892 [tmp, tmp, iTf] = findFileInStudies('Timefreq', 'DataFile', FileName, iStudy); 1893 % Data files: get all the depending results, and then all the timefreq for those results 1894 if strcmpi(DataType, 'data') 1895 [tmp, tmp, iResults] = bst_get('ResultsForDataFile', FileName, iStudy); 1896 for i = 1:length(iResults); 1897 % Search selected studies 1898 [tmp, tmp, iTfRes] = findFileInStudies('Timefreq', 'DataFile', sStudy.Result(iResults(i)).FileName, iStudy); 1899 if ~isempty(iTfRes) 1900 iTf = [iTf iTfRes]; 1901 end 1902 end 1903 end 1904 % Return results 1905 if ~isempty(iTf) 1906 argout1 = sStudy; 1907 argout2 = iStudy; 1908 argout3 = iTf; 1909 end 1910 1911 1912 %% ==== DIPOLES FOR FILE ==== 1913 % Usage: [sStudy, iStudy, iDipoles] = bst_get('DipolesForFile', FileName, iStudies) : search only the specified studies 1914 % [sStudy, iStudy, iDipoles] = bst_get('DipolesForFile', FileName) : search the whole protocol 1915 case 'DipolesForFile' 1916 % No protocol in database 1917 if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 1918 return; 1919 end 1920 ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol); 1921 % Parse inputs 1922 if (nargin >= 2) 1923 FileName = varargin{2}; 1924 FileName = strrep(FileName, ProtocolInfo.STUDIES, ''); 1925 else 1926 error('Invalid call to bst_get().'); 1927 end 1928 % Get study in which file is located 1929 if (nargin >= 3) 1930 iStudies = varargin{3}; 1931 [sStudy, iStudy, iFile, DataType] = bst_get('AnyFile', FileName, iStudies); 1932 else 1933 [sStudy, iStudy, iFile, DataType] = bst_get('AnyFile', FileName); 1934 end 1935 % If file was not found 1936 if isempty(iStudy) 1937 return; 1938 end 1939 % Search direct dependent files 1940 [tmp, tmp, iDip] = findFileInStudies('Dipoles', 'DataFile', FileName, iStudy); 1941 % Data files: get all the depending results, and then all the timefreq for those results 1942 if strcmpi(DataType, 'data') 1943 [tmp, tmp, iResults] = bst_get('ResultsForDataFile', FileName, iStudy); 1944 for i = 1:length(iResults); 1945 % Search selected studies 1946 [tmp, tmp, iDipRes] = findFileInStudies('Dipoles', 'DataFile', sStudy.Result(iResults(i)).FileName, iStudy); 1947 if ~isempty(iDipRes) 1948 iDip = [iDip, iDipRes]; 1949 end 1950 end 1951 end 1952 % Return results 1953 if ~isempty(iDip) 1954 argout1 = sStudy; 1955 argout2 = iStudy; 1956 argout3 = iDip; 1957 end 1958 1959 1960 %% ==== TIMEFREQ FOR KERNEL ==== 1961 % Find all the timefreq files dependent from links due to a given kernel 1962 % Usage: [sStudy, iStudy, iTimefreq] = bst_get('TimefreqForKernel', KernelFile) 1963 case 'TimefreqForKernel' 1964 sFoundStudy = []; 1965 iFoundStudy = []; 1966 iFoundTimefreq = []; 1967 % No protocol in database 1968 if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 1969 return; 1970 end 1971 % Get study in which file is located 1972 KernelFile = varargin{2}; 1973 [sStudy, iStudy, iFile, 
DataType] = bst_get('ResultsFile', KernelFile);
        if isempty(iStudy)
            return;
        end
        % Get all the data files related to this kernel
        [iDepStudies, iDepDatas] = bst_get('DataForStudy', iStudy);
        % Keep each study only once
        iDepStudies = unique(iDepStudies);
        % Process all the dependent studies
        for iSt = 1:length(iDepStudies)
            % Get the study structure
            sDepStudy = bst_get('Study', iDepStudies(iSt));
            % Process each timefreq file separately
            for iTf = 1:length(sDepStudy.Timefreq)
                DataFile = sDepStudy.Timefreq(iTf).DataFile;
                % Keep only the files that are attached to result links ('link|...')
                if isempty(DataFile) || (length(DataFile) < 5) || ~isequal(DataFile(1:5), 'link|')
                    continue;
                end
                % Split link
                splitFile = str_split(DataFile, '|');
                % If the kernel is found: add it to the found list
                if file_compare(splitFile{2}, KernelFile)
                    sFoundStudy = [sFoundStudy sDepStudy];
                    iFoundStudy = [iFoundStudy, iDepStudies(iSt)];
                    iFoundTimefreq = [iFoundTimefreq, iTf];
                end
            end
        end
        % Return findings
        argout1 = sFoundStudy;
        argout2 = iFoundStudy;
        argout3 = iFoundTimefreq;


    %% ==== DIPOLES FOR KERNEL ====
    % Find all the dipoles files that depend on links created from a given kernel
    % Usage: [sStudy, iStudy, iDipoles] = bst_get('DipolesForKernel', KernelFile)
    case 'DipolesForKernel'
        sFoundStudy = [];
        iFoundStudy = [];
        iFoundDipoles = [];
        % No protocol in database
        if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0)
            return;
        end
        % Get study in which file is located
        KernelFile = varargin{2};
        [sStudy, iStudy, iFile, DataType] = bst_get('ResultsFile', KernelFile);
        if isempty(iStudy)
            return;
        end
        % Get all the data files related to this kernel
        [iDepStudies, iDepDatas] = bst_get('DataForStudy', iStudy);
        % Keep each study only once
        iDepStudies = unique(iDepStudies);
        % Process all the dependent studies
        for iSt = 1:length(iDepStudies)
            % Get the study structure
            sDepStudy = bst_get('Study', iDepStudies(iSt));
            % Process each dipoles file separately
            for iDip = 1:length(sDepStudy.Dipoles)
                DataFile = sDepStudy.Dipoles(iDip).DataFile;
                % Keep only the files that are attached to result links ('link|...')
                if isempty(DataFile) || (length(DataFile) < 5) || ~isequal(DataFile(1:5), 'link|')
                    continue;
                end
                % Split link
                splitFile = str_split(DataFile, '|');
                % If the kernel is found: add it to the found list
                if file_compare(splitFile{2}, KernelFile)
                    sFoundStudy = [sFoundStudy sDepStudy];
                    iFoundStudy = [iFoundStudy, iDepStudies(iSt)];
                    iFoundDipoles = [iFoundDipoles, iDip];
                end
            end
        end
        % Return findings
        argout1 = sFoundStudy;
        argout2 = iFoundStudy;
        argout3 = iFoundDipoles;


    %% ==== DIPOLES FILE ====
    % Usage: [sStudy, iStudy, iDipole] = bst_get('DipolesFile', DipolesFile, iStudies)
    %        [sStudy, iStudy, iDipole] = bst_get('DipolesFile', DipolesFile)
    case 'DipolesFile'
        % No protocol in database
        if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0)
            return;
        end
        % Input #2: DipolesFile
        ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol);
        DipolesFile = varargin{2};
        DipolesFile = strrep(DipolesFile, ProtocolInfo.STUDIES, '');
        % Input #3: iStudies
        if
(nargin < 3) 2070 iStudies = []; 2071 else 2072 iStudies = varargin{3}; 2073 end 2074 % Look for surface file in all the surfaces of all subjects 2075 [argout1, argout2, argout3] = findFileInStudies('Dipoles', 'FileName', DipolesFile, iStudies); 2076 2077 2078 %% ==== MATRIX FILE ==== 2079 % Usage: [sStudy, iStudy, iDipole] = bst_get('MatrixFile', MatrixFile, iStudies) 2080 % [sStudy, iStudy, iDipole] = bst_get('MatrixFile', MatrixFile) 2081 case 'MatrixFile' 2082 % No protocol in database 2083 if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 2084 return; 2085 end 2086 % Input #2: MatrixFile 2087 ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol); 2088 MatrixFile = varargin{2}; 2089 MatrixFile = strrep(MatrixFile, ProtocolInfo.STUDIES, ''); 2090 % Input #3: iStudies 2091 if (nargin < 3) 2092 iStudies = []; 2093 else 2094 iStudies = varargin{3}; 2095 end 2096 % Look for surface file in all the surfaces of all subjects 2097 [argout1, argout2, argout3] = findFileInStudies('Matrix', 'FileName', MatrixFile, iStudies); 2098 2099 %% ==== IMAGE FILE ==== 2100 % Usage: [sStudy, iStudy, iDipole] = bst_get('ImageFile', ImageFile, iStudies) 2101 % [sStudy, iStudy, iDipole] = bst_get('ImageFile', ImageFile) 2102 case 'ImageFile' 2103 % No protocol in database 2104 if isempty(GlobalData.DataBase.iProtocol) || (GlobalData.DataBase.iProtocol == 0) 2105 return; 2106 end 2107 % Input #2: ImageFile 2108 ProtocolInfo = GlobalData.DataBase.ProtocolInfo(GlobalData.DataBase.iProtocol); 2109 ImageFile = varargin{2}; 2110 ImageFile = strrep(ImageFile, ProtocolInfo.STUDIES, ''); 2111 % Input #3: iStudies 2112 if (nargin < 3) 2113 iStudies = []; 2114 else 2115 iStudies = varargin{3}; 2116 end 2117 % Look for surface file in all the surfaces of all subjects 2118 [argout1, argout2, argout3] = findFileInStudies('Image', 'FileName', ImageFile, iStudies); 2119 2120 2121 %% ==== ANY FILE ==== 2122 % Usage: [sStudy, iStudy, iFile, DataType, sItem] = bst_get('AnyFile', FileName, iStudies) 2123 % [sStudy, iStudy, iFile, DataType, sItem] = bst_get('AnyFile', FileName) 2124 case 'AnyFile' 2125 % Input #2: FileName 2126 FileName = varargin{2}; 2127 if isempty(FileName) 2128 return 2129 end 2130 % Input #3: iStudies 2131 if (nargin < 3) 2132 iStudies = []; 2133 else 2134 iStudies = varargin{3}; 2135 end 2136 % Get data format 2137 fileType = file_gettype(FileName); 2138 if isempty(fileType) 2139 error('File type is not recognized.'); 2140 end 2141 sItem = []; 2142 % Get information related with this file 2143 switch (fileType) 2144 % ===== FUNCTIONAL ===== 2145 case 'channel' 2146 [sStudy, iStudy] = bst_get('ChannelFile', FileName); 2147 iItem = 1; 2148 if (nargout >= 5) && ~isempty(sStudy) 2149 sItem = sStudy.Channel; 2150 end 2151 case 'headmodel' 2152 [sStudy, iStudy, iItem] = bst_get('HeadModelFile', FileName); 2153 if (nargout >= 5) && ~isempty(sStudy) 2154 sItem = sStudy.HeadModel(iItem); 2155 end 2156 case 'noisecov' 2157 [sStudy, iStudy, iItem] = bst_get('NoiseCovFile', FileName); 2158 if (nargout >= 5) && ~isempty(sStudy) 2159 sItem = sStudy.NoiseCov(iItem); 2160 end 2161 case 'ndatacov' 2162 [sStudy, iStudy, iItem] = bst_get('DataCovFile', FileName); 2163 if (nargout >= 5) && ~isempty(sStudy) 2164 sItem = sStudy.NoiseCov(iItem); 2165 end 2166 case {'data', 'spike'} 2167 [sStudy, iStudy, iItem] = bst_get('DataFile', FileName, iStudies); 2168 if (nargout >= 5) && ~isempty(sStudy) 2169 sItem = sStudy.Data(iItem); 2170 end 2171 case {'results', 'link'} 2172 
[sStudy, iStudy, iItem] = bst_get('ResultsFile', FileName, iStudies); 2173 if (nargout >= 5) && ~isempty(sStudy) 2174 sItem = sStudy.Result(iItem); 2175 end 2176 case {'presults', 'pdata','ptimefreq','pmatrix'} 2177 [sStudy, iStudy, iItem] = bst_get('StatFile', FileName, iStudies); 2178 if (nargout >= 5) && ~isempty(sStudy) 2179 sItem = sStudy.Stat(iItem); 2180 end 2181 case 'dipoles' 2182 [sStudy, iStudy, iItem] = bst_get('DipolesFile', FileName, iStudies); 2183 if (nargout >= 5) && ~isempty(sStudy) 2184 sItem = sStudy.Dipoles(iItem); 2185 end 2186 case 'timefreq' 2187 % Remove optional RefRowName 2188 iPipe = find(FileName == '|', 1); 2189 if ~isempty(iPipe) 2190 FileName = FileName(1:iPipe-1); 2191 end 2192 [sStudy, iStudy, iItem] = bst_get('TimefreqFile', FileName, iStudies); 2193 if (nargout >= 5) && ~isempty(sStudy) 2194 sItem = sStudy.Timefreq(iItem); 2195 end 2196 case 'matrix' 2197 [sStudy, iStudy, iItem] = bst_get('MatrixFile', FileName, iStudies); 2198 if (nargout >= 5) && ~isempty(sStudy) 2199 sItem = sStudy.Matrix(iItem); 2200 end 2201 case 'brainstormstudy' 2202 [sStudy, iStudy] = bst_get('Study', FileName); 2203 iItem = 0; 2204 if (nargout >= 5) && ~isempty(sStudy) 2205 sItem = sStudy; 2206 end 2207 case {'image', 'video', 'videolink'} 2208 [sStudy, iStudy, iItem] = bst_get('ImageFile', FileName, iStudies); 2209 if (nargout >= 5) && ~isempty(sStudy) 2210 sItem = sStudy.Image(iItem); 2211 end 2212 % ===== ANATOMY ===== 2213 case {'cortex','scalp','innerskull','outerskull','tess','fibers','fem'} 2214 [sStudy, iStudy, iItem] = bst_get('SurfaceFile', FileName); 2215 if (nargout >= 5) && ~isempty(sStudy) 2216 sItem = sStudy.Surface(iItem); 2217 end 2218 case 'subjectimage' 2219 [sStudy, iStudy, iItem] = bst_get('MriFile', FileName); 2220 if (nargout >= 5) && ~isempty(sStudy) 2221 sItem = sStudy.Anatomy(iItem); 2222 end 2223 case 'brainstormsubject' 2224 [sStudy, iStudy] = bst_get('Subject', FileName); 2225 iItem = 0; 2226 if (nargout >= 5) && ~isempty(sStudy) 2227 sItem = sStudy; 2228 end 2229 otherwise 2230 error('File type is not recognized.'); 2231 end 2232 argout1 = sStudy; 2233 argout2 = iStudy; 2234 argout3 = iItem; 2235 argout4 = fileType; 2236 if (nargout >= 5) 2237 argout5 = sItem; 2238 end 2239 2240 2241 %% ==== GET RELATED DATA FILE ==== 2242 % Usage: DataFile = bst_get('RelatedDataFile', FileName, iStudies) 2243 % DataFile = bst_get('RelatedDataFile', FileName) 2244 case 'RelatedDataFile' 2245 % Input #2: FileName 2246 FileName = varargin{2}; 2247 % Input #3: iStudies 2248 if (nargin < 3) 2249 iStudies = []; 2250 else 2251 iStudies = varargin{3}; 2252 end 2253 % Get file in database 2254 [sStudy, iStudy, iFile, fileType] = bst_get('AnyFile', FileName, iStudies); 2255 % If this data file does not belong to any study 2256 if isempty(sStudy) 2257 return; 2258 end 2259 % Get associated data file 2260 switch (fileType) 2261 case 'data' 2262 RelatedDataFile = sStudy.Data(iFile).FileName; 2263 case {'pdata','presults','ptimefreq','pmatrix'} 2264 RelatedDataFile = sStudy.Stat(iFile).DataFile; 2265 case {'results', 'link'} 2266 RelatedDataFile = sStudy.Result(iFile).DataFile; 2267 case 'dipoles' 2268 RelatedDataFile = sStudy.Dipoles(iFile).DataFile; 2269 case 'timefreq' 2270 RelatedDataFile = sStudy.Timefreq(iFile).DataFile; 2271 otherwise 2272 RelatedDataFile = ''; 2273 end 2274 % If related file is results: get related data file 2275 if ~isempty(RelatedDataFile) 2276 relFileType = file_gettype(RelatedDataFile); 2277 if ismember(relFileType, {'link','results'}) 2278 
RelatedDataFile = bst_get('RelatedDataFile', RelatedDataFile, iStudy); 2279 end 2280 end 2281 % Return file 2282 argout1 = RelatedDataFile; 2283 2284 %% ==== ALL CONDITIONS FOR ONE SUBJECT ==== 2285 % Usage: [Conditions] = bst_get('ConditionsForSubject', SubjectFile) 2286 case 'ConditionsForSubject' 2287 % Parse inputs 2288 if (nargin == 2) 2289 SubjectFile = varargin{2}; 2290 else 2291 error('Invalid call to bst_get().'); 2292 end 2293 % Get list of studies associated with subject 2294 sStudies = bst_get('StudyWithSubject', SubjectFile); 2295 % Get Conditions for each study 2296 Conditions = {}; 2297 for i = 1:length(sStudies) 2298 % Test if the condition of this study was not added previously 2299 isNewCondition = 1; 2300 for iCond = 1:length(Conditions) 2301 % If new condition is found 2302 % (and excludes DirAnalysisIntra and DirDefaultSubject from list) 2303 if isempty(sStudies(i).Condition) || ... 2304 isequal(sStudies(i).Condition, Conditions(iCond)) || ... 2305 strcmpi(sStudies(i).Condition{1}, bst_get('DirAnalysisIntra')) || ... 2306 strcmpi(sStudies(i).Condition{1}, bst_get('DirDefaultSubject')) 2307 isNewCondition = 0; 2308 break; 2309 end 2310 end 2311 % If Condition is not added yet : add it to the list 2312 if isNewCondition && ~isempty(sStudies(i).Condition) 2313 Conditions{end+1} = sStudies(i).Condition{1}; 2314 end 2315 end 2316 % Return conditions list 2317 argout1 = Conditions; 2318 2319 2320 %% ==== ANATOMY DEFAULTS ==== 2321 % Returns the list of all the anatomy defaults (distributed with the software + user defined) 2322 case 'AnatomyDefaults' 2323 % Parse inputs 2324 if (nargin == 2) 2325 AnatName = varargin{2}; 2326 else 2327 AnatName = []; 2328 end 2329 % Get templates from the brainstorm3 folder 2330 progDir = bst_fullfile(bst_get('BrainstormDefaultsDir'), 'anatomy'); 2331 progFiles = dir(progDir); 2332 % Get templates from the user folder 2333 userDir = bst_fullfile(bst_get('UserDefaultsDir'), 'anatomy'); 2334 userFiles = dir(userDir); 2335 % Combine the two lists 2336 AllProgNames = cat(2, {progFiles.name}, cellfun(@(c)cat(2,c,'.zip'), {progFiles.name}, 'UniformOutput', 0)); 2337 AllFiles = cat(2, cellfun(@(c)bst_fullfile(progDir,c), setdiff({progFiles.name}, {'.','..'}), 'UniformOutput', 0), ... 2338 cellfun(@(c)bst_fullfile(userDir,c), setdiff({userFiles.name}, AllProgNames), 'UniformOutput', 0)); 2339 % Initialize list of defaults 2340 sTemplates = repmat(struct('FilePath',[],'Name',[]), 0); 2341 % Find all the valid defaults (.zip files or subdirectory with a brainstormsubject.mat in it) 2342 for i = 1:length(AllFiles) 2343 % Decompose file name 2344 [fPath, fBase, fExt] = bst_fileparts(AllFiles{i}); 2345 % Entry is a directory W/ a name that does not start with a '.' 
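            % Note: a user-defined anatomy template is picked up by this scan if it is either
            % a .zip archive or a folder containing a brainstormsubject*.mat file, placed in
            % the 'anatomy' subfolder of the user defaults directory (bst_get('UserDefaultsDir')).
            % Illustrative sketch (not executed here), with a hypothetical template name:
            %   sMyTemplate = bst_get('AnatomyDefaults', 'MyCustomTemplate');
            %   % Returns a struct array with fields .Name and .FilePath (empty if not found)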
2346 if isempty(fBase) || strcmpi(fBase(1),'.') || (~isempty(fExt) && ~strcmpi(fExt, '.zip')) 2347 continue; 2348 end 2349 % If it's a folder: check for a brainstormsubject file 2350 if isdir(AllFiles{i}) 2351 bstFiles = dir(bst_fullfile(AllFiles{i}, 'brainstormsubject*.mat')); 2352 if (length(bstFiles) == 1) 2353 sTemplates(end+1).FilePath = AllFiles{i}; 2354 sTemplates(end).Name = fBase; 2355 end 2356 % If it's a zip file 2357 elseif isequal(fExt, '.zip') 2358 sTemplates(end+1).FilePath = AllFiles{i}; 2359 sTemplates(end).Name = fBase; 2360 end 2361 end 2362 % Get defaults from internet 2363 if ~ismember('icbm152', lower({sTemplates.Name})) 2364 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=ICBM152_2023b'; 2365 sTemplates(end).Name = 'ICBM152'; 2366 end 2367 if ~ismember('icbm152_2022', lower({sTemplates.Name})) 2368 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=ICBM152_2022'; 2369 sTemplates(end).Name = 'ICBM152_2022'; 2370 end 2371 if ~ismember('icbm152_2023b', lower({sTemplates.Name})) 2372 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=ICBM152_2023b'; 2373 sTemplates(end).Name = 'ICBM152_2023b'; 2374 end 2375 if ~ismember('icbm152_2019', lower({sTemplates.Name})) 2376 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=ICBM152_2019'; 2377 sTemplates(end).Name = 'ICBM152_2019'; 2378 end 2379 if ~ismember('icbm152_brainsuite_2016', lower({sTemplates.Name})) 2380 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=ICBM152_BrainSuite_2016'; 2381 sTemplates(end).Name = 'ICBM152_BrainSuite_2016'; 2382 end 2383 if ~ismember('colin27_2016', lower({sTemplates.Name})) 2384 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Colin27_2016'; 2385 sTemplates(end).Name = 'Colin27_2016'; 2386 end 2387 if ~ismember('colin27_4nirs_2024', lower({sTemplates.Name})) 2388 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Colin27_4NIRS_2024'; 2389 sTemplates(end).Name = 'Colin27_4NIRS_2024'; 2390 end 2391 if ~ismember('colin27_brainsuite_2016', lower({sTemplates.Name})) 2392 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Colin27_BrainSuite_2016'; 2393 sTemplates(end).Name = 'Colin27_BrainSuite_2016'; 2394 end 2395 if ~ismember('bci-dni_brainsuite_2020', lower({sTemplates.Name})) 2396 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=BCI-DNI_BrainSuite_2020'; 2397 sTemplates(end).Name = 'BCI-DNI_BrainSuite_2020'; 2398 end 2399 if ~ismember('uscbrain_brainsuite_2020', lower({sTemplates.Name})) 2400 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=USCBrain_BrainSuite_2020'; 2401 sTemplates(end).Name = 'USCBrain_BrainSuite_2020'; 2402 end 2403 if ~ismember('fsaverage_2020', lower({sTemplates.Name})) 2404 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=FSAverage_2020'; 2405 sTemplates(end).Name = 'FsAverage_2020'; 2406 end 2407 if ~ismember('kabdebon_7w', lower({sTemplates.Name})) 2408 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Kabdebon_7w'; 2409 sTemplates(end).Name = 'Kabdebon_7w'; 2410 end 2411 if ~ismember('oreilly_0.5m_2021', lower({sTemplates.Name})) 2412 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Oreilly_0.5m_2021'; 2413 sTemplates(end).Name = 'Oreilly_0.5m_2021'; 2414 end 2415 if ~ismember('oreilly_1m_2021', lower({sTemplates.Name})) 2416 
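        % The entries registered above that are not found on disk keep a download URL in
        % .FilePath (Brainstorm update server, getupdate.php); note that the generic
        % 'ICBM152' name maps to the 2023b release of that template.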
sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Oreilly_1m_2021';
            sTemplates(end).Name = 'Oreilly_1m_2021';
        end
        if ~ismember('oreilly_2m_2021', lower({sTemplates.Name}))
            sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Oreilly_2m_2021';
            sTemplates(end).Name = 'Oreilly_2m_2021';
        end
        if ~ismember('oreilly_3m_2021', lower({sTemplates.Name}))
            sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Oreilly_3m_2021';
            sTemplates(end).Name = 'Oreilly_3m_2021';
        end
        if ~ismember('oreilly_4.5m_2021', lower({sTemplates.Name}))
            sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Oreilly_4.5m_2021';
            sTemplates(end).Name = 'Oreilly_4.5m_2021';
        end
        if ~ismember('oreilly_6m_2021', lower({sTemplates.Name}))
            sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Oreilly_6m_2021';
            sTemplates(end).Name = 'Oreilly_6m_2021';
        end
        if ~ismember('oreilly_7.5m_2021', lower({sTemplates.Name}))
            sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Oreilly_7.5m_2021';
            sTemplates(end).Name = 'Oreilly_7.5m_2021';
        end
        if ~ismember('oreilly_9m_2021', lower({sTemplates.Name}))
            sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Oreilly_9m_2021';
            sTemplates(end).Name = 'Oreilly_9m_2021';
        end
        if ~ismember('oreilly_10.5m_2021', lower({sTemplates.Name}))
            sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Oreilly_10.5m_2021';
            sTemplates(end).Name = 'Oreilly_10.5m_2021';
        end
        if ~ismember('oreilly_12m_2021', lower({sTemplates.Name}))
            sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Oreilly_12m_2021';
            sTemplates(end).Name = 'Oreilly_12m_2021';
        end
        if ~ismember('oreilly_15m_2021', lower({sTemplates.Name}))
            sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Oreilly_15m_2021';
            sTemplates(end).Name = 'Oreilly_15m_2021';
        end
        if ~ismember('oreilly_18m_2021', lower({sTemplates.Name}))
            sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Oreilly_18m_2021';
            sTemplates(end).Name = 'Oreilly_18m_2021';
        end
        if ~ismember('oreilly_24m_2021', lower({sTemplates.Name}))
            sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=Oreilly_24m_2021';
            sTemplates(end).Name = 'Oreilly_24m_2021';
        end
        % If a specific template was requested
        if ~isempty(AnatName)
            iAnat = find(strcmpi({sTemplates.Name}, AnatName));
            sTemplates = sTemplates(iAnat);
        end
        % Sort in alphabetical order
        if ~isempty(sTemplates)
            [tmp__, I] = sort_nat({sTemplates(2:end).Name});
            sTemplates = sTemplates([1, I+1]);
        end
        % Return defaults list
        argout1 = sTemplates;


    %% ==== MNI ATLASES ====
    % Returns the list of all the available MNI atlases
    case 'MniAtlasDefaults'
        % Get templates from the user folder
        mniDir = bst_fullfile(bst_get('UserDefaultsDir'), 'mniatlas');
        mniFiles = dir(bst_fullfile(mniDir, '*.nii.gz'));
        mniFiles = cellfun(@(c)bst_fullfile(mniDir,c), {mniFiles.name}, 'UniformOutput', 0);
        % Initialize list of defaults
        sTemplates = repmat(struct('FilePath',[],'Name',[],'Info',[]), 0);
        % Find all the valid defaults (.zip files or subdirectory with a
brainstormsubject.mat in it) 2487 for i = 1:length(mniFiles) 2488 % Decompose file name 2489 [fPath, fBase, fExt] = bst_fileparts(mniFiles{i}); 2490 % Keep only files with .nii and .nii.gz extensions 2491 if ~isempty(fBase) && (fBase(1) ~= '.') && ~isempty(fExt) && strcmpi(fExt, '.gz') 2492 sTemplates(end+1).FilePath = mniFiles{i}; 2493 sTemplates(end).Name = strrep(fBase, '.nii', ''); 2494 sTemplates(end).Info = ''; 2495 end 2496 end 2497 % Sort in alphabetical order 2498 if ~isempty(sTemplates) 2499 [tmp__, I] = sort_nat(lower({sTemplates.Name})); 2500 sTemplates = sTemplates(I); 2501 end 2502 2503 % Get defaults from internet 2504 if ~ismember('aal1', lower({sTemplates.Name})) 2505 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=mni_AAL1'; 2506 sTemplates(end).Name = 'AAL1'; 2507 sTemplates(end).Info = 'https://www.gin.cnrs.fr/en/tools/aal/'; 2508 end 2509 if ~ismember('aal2', lower({sTemplates.Name})) 2510 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=mni_AAL2'; 2511 sTemplates(end).Name = 'AAL2'; 2512 sTemplates(end).Info = 'https://www.gin.cnrs.fr/en/tools/aal/'; 2513 end 2514 if ~ismember('aal3', lower({sTemplates.Name})) 2515 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=mni_AAL3'; 2516 sTemplates(end).Name = 'AAL3'; 2517 sTemplates(end).Info = 'https://www.gin.cnrs.fr/en/tools/aal/'; 2518 end 2519 if ~ismember('aicha', lower({sTemplates.Name})) 2520 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=mni_AICHA'; 2521 sTemplates(end).Name = 'AICHA'; 2522 sTemplates(end).Info = 'https://www.gin.cnrs.fr/en/tools/aicha'; 2523 end 2524 if ~ismember('brainnetome', lower({sTemplates.Name})) 2525 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=mni_Brainnetome'; 2526 sTemplates(end).Name = 'Brainnetome'; 2527 sTemplates(end).Info = 'http://atlas.brainnetome.org/'; 2528 end 2529 if ~ismember('brainnetome_leaddbs', lower({sTemplates.Name})) 2530 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=mni_Brainnetome_leaddbs'; 2531 sTemplates(end).Name = 'Brainnetome_leaddbs'; 2532 sTemplates(end).Info = 'http://atlas.brainnetome.org/'; 2533 end 2534 if ~ismember('brodmann', lower({sTemplates.Name})) 2535 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=mni_Brodmann'; 2536 sTemplates(end).Name = 'Brodmann'; 2537 sTemplates(end).Info = 'https://people.cas.sc.edu/rorden/mricro/lesion.html#brod'; 2538 end 2539 if ~ismember('hammers83', lower({sTemplates.Name})) 2540 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=mni_Hammers'; 2541 sTemplates(end).Name = 'Hammers'; 2542 sTemplates(end).Info = 'http://brain-development.org/brain-atlases/adult-brain-atlases/'; 2543 end 2544 if ~ismember('neuromorphometrics', lower({sTemplates.Name})) 2545 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=mni_Neuromorphometrics'; 2546 sTemplates(end).Name = 'Neuromorphometrics'; 2547 sTemplates(end).Info = 'https://search.kg.ebrains.eu/instances/Dataset/ef48c5e9-6b3c-4d5a-a9a9-e678fe10bdf6'; 2548 end 2549 if ~ismember('julich-brain-v25', lower({sTemplates.Name})) 2550 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=mni_Julich-Brain-v25'; 2551 sTemplates(end).Name = 'Julich-Brain-v25'; 2552 sTemplates(end).Info = 'https://search.kg.ebrains.eu/instances/Dataset/ef48c5e9-6b3c-4d5a-a9a9-e678fe10bdf6'; 2553 end 2554 if ~ismember('schaefer2018_100_7net', 
lower({sTemplates.Name})) 2555 sTemplates(end+1).FilePath = 'http://neuroimage.usc.edu/bst/getupdate.php?t=mni_Schaefer2018'; 2556 sTemplates(end).Name = 'Schaefer2018'; 2557 sTemplates(end).Info = 'https://github.com/ThomasYeoLab/CBIG/tree/master/stable_projects/brain_parcellation/Schaefer2018_LocalGlobal'; 2558 end 2559 % Return defaults list 2560 argout1 = sTemplates; 2561 2562 2563 %% ==== EEG DEFAULTS ==== 2564 % Returns an array of struct(fullpath, name) of all the Brainstorm eeg nets defaults 2565 % Usage: EegDefaults = bst_get('EegDefaults') 2566 % EegDefaults = bst_get('EegDefaults', TemplateName=[], SetupName=[]) 2567 case 'EegDefaults' 2568 % Parse inputs 2569 if (nargin >= 3) 2570 SetupName = varargin{3}; 2571 else 2572 SetupName = []; 2573 end 2574 if (nargin >= 2) 2575 TemplateName = varargin{2}; 2576 else 2577 TemplateName = []; 2578 end 2579 % Get templates from the brainstorm3 folder 2580 progDir = bst_fullfile(bst_get('BrainstormDefaultsDir'), 'eeg'); 2581 progFiles = dir(bst_fullfile(progDir, '*')); 2582 % Get templates from the user folder 2583 userDir = bst_fullfile(bst_get('UserDefaultsDir'), 'eeg'); 2584 userFiles = dir(bst_fullfile(userDir, '*')); 2585 % Combine the two lists 2586 dirList = cat(2, cellfun(@(c)bst_fullfile(progDir,c), {progFiles.name}, 'UniformOutput', 0), ... 2587 cellfun(@(c)bst_fullfile(userDir,c), setdiff({userFiles.name}, {progFiles.name}), 'UniformOutput', 0)); 2588 % Initialize list of folders 2589 fullDefaultsList = repmat(struct('contents','', 'name',''), 0); 2590 % For each template directory 2591 for iDir = 1:length(dirList) 2592 % Decompose file name 2593 [fPath, fBase, fExt] = bst_fileparts(dirList{iDir}); 2594 % Entry is a not a folder, or starts with a "." 2595 if ~isdir(dirList{iDir}) || isempty(fBase) || strcmpi(fBase(1),'.') 2596 continue; 2597 end 2598 % Skip if it is not the requested template 2599 if ~isempty(TemplateName) && ~strcmpi(fBase, TemplateName) 2600 continue; 2601 end 2602 % Get files list 2603 fileList = dir(bst_fullfile(dirList{iDir}, 'channel*.mat')); 2604 defaultsList = repmat(struct('fullpath','', 'name',''), 0); 2605 % Find all the valid defaults (channel files) 2606 for iFile = 1:length(fileList) 2607 [tmp__, baseName] = bst_fileparts(fileList(iFile).name); 2608 baseName = strrep(baseName, 'channel_', ''); 2609 baseName = strrep(baseName, '_channel', ''); 2610 baseName = strrep(baseName, '_', ' '); 2611 % Skip if it is not the requested template 2612 if ~isempty(SetupName) && ~strcmpi(baseName, SetupName) 2613 continue; 2614 end 2615 % Add to list of templates 2616 iNewDefault = length(defaultsList) + 1; 2617 defaultsList(iNewDefault).fullpath = bst_fullfile(dirList{iDir}, fileList(iFile).name); 2618 defaultsList(iNewDefault).name = baseName; 2619 end 2620 % Add files list to defaults list 2621 if ~isempty(defaultsList) 2622 fullDefaultsList(end + 1) = struct('contents', defaultsList, ... 
2623 'name', fBase); 2624 end 2625 end 2626 % Return defaults list 2627 argout1 = fullDefaultsList; 2628 2629 2630 %% ==== GET FILENAMES ==== 2631 case 'GetFilenames' 2632 iStudies = varargin{2}; 2633 iItems = varargin{3}; 2634 DataType = varargin{4}; 2635 FileNames = cell(1, length(iStudies)); 2636 argout1 = {}; 2637 for i = 1:length(iStudies) 2638 % Get study definition 2639 sStudy = bst_get('Study', iStudies(i)); 2640 if isempty(sStudy) 2641 continue; 2642 end 2643 % Recordings or sources 2644 switch (DataType) 2645 case 'data' 2646 if (iItems(i) > length(sStudy.Data)) 2647 return; 2648 end 2649 FileNames{i} = sStudy.Data(iItems(i)).FileName; 2650 case 'results' 2651 if (iItems(i) > length(sStudy.Result)) 2652 return; 2653 end 2654 FileNames{i} = sStudy.Result(iItems(i)).FileName; 2655 case 'timefreq' 2656 if (iItems(i) > length(sStudy.Timefreq)) 2657 return; 2658 end 2659 FileNames{i} = sStudy.Timefreq(iItems(i)).FileName; 2660 case 'matrix' 2661 if (iItems(i) > length(sStudy.Matrix)) 2662 return; 2663 end 2664 FileNames{i} = sStudy.Matrix(iItems(i)).FileName; 2665 case {'pdata','presults','ptimfreq'} 2666 if (iItems(i) > length(sStudy.Stat)) 2667 return; 2668 end 2669 FileNames{i} = sStudy.Stat(iItems(i)).FileName; 2670 end 2671 end 2672 argout1 = FileNames; 2673 2674 2675 %% ==== GUI ==== 2676 case 'BstFrame' 2677 if isempty(GlobalData) || isempty(GlobalData.Program.GUI) || isempty(GlobalData.Program.GUI.mainWindow) 2678 argout1 = []; 2679 else 2680 argout1 = GlobalData.Program.GUI.mainWindow.jBstFrame; 2681 end 2682 case 'BstControls' 2683 if isempty(GlobalData) || isempty(GlobalData.Program) || isempty(GlobalData.Program.GUI) || isempty(GlobalData.Program.GUI.mainWindow) 2684 argout1 = []; 2685 else 2686 argout1 = GlobalData.Program.GUI.mainWindow; 2687 end 2688 case 'isGUI' 2689 if isempty(GlobalData) || isempty(GlobalData.Program) || ~isfield(GlobalData.Program, 'GuiLevel') 2690 argout1 = []; 2691 else 2692 argout1 = (GlobalData.Program.GuiLevel >= 1); 2693 end 2694 case 'GuiLevel' 2695 if isempty(GlobalData) || isempty(GlobalData.Program) || ~isfield(GlobalData.Program, 'GuiLevel') 2696 argout1 = []; 2697 else 2698 argout1 = GlobalData.Program.GuiLevel; 2699 end 2700 case 'ScreenDef' 2701 if isempty(GlobalData) || isempty(GlobalData.Program) || ~isfield(GlobalData.Program, 'ScreenDef') 2702 argout1 = []; 2703 else 2704 argout1 = GlobalData.Program.ScreenDef; 2705 end 2706 case 'DecorationSize' 2707 if isempty(GlobalData) || isempty(GlobalData.Program) || ~isfield(GlobalData.Program, 'DecorationSize') 2708 argout1 = []; 2709 else 2710 argout1 = GlobalData.Program.DecorationSize; 2711 end 2712 case 'Layout' 2713 % Default or current layout structure 2714 if ~isfield(GlobalData, 'Preferences') || ~isfield(GlobalData.Preferences, 'Layout') || ~((nargin == 1) || isfield(GlobalData.Preferences.Layout, varargin{2})) || ~isfield(GlobalData.Preferences.Layout, 'MainWindowPos') 2715 GlobalData.Preferences.Layout = db_template('Layout'); 2716 end 2717 % Structure or property call 2718 if (nargin == 2) && ischar(varargin{2}) && isfield(GlobalData.Preferences.Layout, varargin{2}) 2719 argout1 = GlobalData.Preferences.Layout.(varargin{2}); 2720 elseif (nargin == 1) 2721 argout1 = GlobalData.Preferences.Layout; 2722 else 2723 error('Invalid call to bst_get.'); 2724 end 2725 2726 case 'ByteOrder' 2727 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'ByteOrder') 2728 argout1 = GlobalData.Preferences.ByteOrder; 2729 else 2730 argout1 = 'l'; 2731 end 2732 2733 case 
'UniformizeTimeSeriesScales' 2734 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'UniformizeTimeSeriesScales') 2735 argout1 = GlobalData.Preferences.UniformizeTimeSeriesScales; 2736 else 2737 argout1 = 1; 2738 end 2739 2740 case 'FlipYAxis' 2741 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'FlipYAxis') 2742 argout1 = GlobalData.Preferences.FlipYAxis; 2743 else 2744 argout1 = 0; 2745 end 2746 2747 case 'AutoScaleY' 2748 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'AutoScaleY') 2749 argout1 = GlobalData.Preferences.AutoScaleY; 2750 else 2751 argout1 = 1; 2752 end 2753 2754 case 'ShowXGrid' 2755 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'ShowXGrid') 2756 argout1 = GlobalData.Preferences.ShowXGrid; 2757 else 2758 argout1 = 0; 2759 end 2760 2761 case 'ShowYGrid' 2762 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'ShowYGrid') 2763 argout1 = GlobalData.Preferences.ShowYGrid; 2764 else 2765 argout1 = 0; 2766 end 2767 2768 case 'ShowZeroLines' 2769 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'ShowZeroLines') 2770 argout1 = GlobalData.Preferences.ShowZeroLines; 2771 else 2772 argout1 = 1; 2773 end 2774 2775 case 'Resolution' 2776 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'Resolution') 2777 argout1 = GlobalData.Preferences.Resolution; 2778 else 2779 argout1 = [0 0]; 2780 end 2781 2782 case 'FixedScaleY' 2783 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'FixedScaleY') && isfield(GlobalData.Preferences.FixedScaleY, varargin{2}) && ~isempty(GlobalData.Preferences.FixedScaleY.(varargin{2})) 2784 argout1 = GlobalData.Preferences.FixedScaleY.(varargin{2}); 2785 else 2786 argout1 = []; 2787 end 2788 2789 case 'XScale' 2790 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'XScale') 2791 argout1 = GlobalData.Preferences.XScale; 2792 else 2793 argout1 = 'linear'; 2794 end 2795 2796 case 'YScale' 2797 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'YScale') 2798 argout1 = GlobalData.Preferences.YScale; 2799 else 2800 argout1 = 'linear'; 2801 end 2802 2803 case 'ShowEventsMode' 2804 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'ShowEventsMode') 2805 argout1 = GlobalData.Preferences.ShowEventsMode; 2806 else 2807 argout1 = 'dot'; 2808 end 2809 2810 case 'AutoUpdates' 2811 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'AutoUpdates') 2812 argout1 = GlobalData.Preferences.AutoUpdates; 2813 else 2814 argout1 = 1; 2815 end 2816 2817 case 'ForceMatCompression' 2818 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'ForceMatCompression') 2819 argout1 = GlobalData.Preferences.ForceMatCompression; 2820 else 2821 argout1 = 0; 2822 end 2823 2824 case 'IgnoreMemoryWarnings' 2825 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'IgnoreMemoryWarnings') 2826 argout1 = GlobalData.Preferences.IgnoreMemoryWarnings; 2827 else 2828 argout1 = 0; 2829 end 2830 2831 case 'SystemCopy' 2832 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'SystemCopy') 2833 argout1 = GlobalData.Preferences.SystemCopy; 2834 else 2835 argout1 = 0; 2836 end 2837 2838 case 'ExpertMode' 2839 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'ExpertMode') 2840 argout1 = GlobalData.Preferences.ExpertMode; 2841 else 2842 argout1 = 0; 2843 end 
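        % Illustrative sketch (not executed here): the preference entries above are read
        % the same way from any script, and fall back to the defaults shown in the code
        % when nothing has been saved yet:
        %   order  = bst_get('ByteOrder');    % 'l' (little-endian) if never set
        %   flipY  = bst_get('FlipYAxis');    % 0 by default
        %   expert = bst_get('ExpertMode');   % 0 by default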
2844 2845 case 'DisplayGFP' 2846 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'DisplayGFP') 2847 argout1 = GlobalData.Preferences.DisplayGFP; 2848 else 2849 argout1 = 1; 2850 end 2851 2852 case 'DownsampleTimeSeries' 2853 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'DownsampleTimeSeries') 2854 if (GlobalData.Preferences.DownsampleTimeSeries == 1) 2855 GlobalData.Preferences.DownsampleTimeSeries = 5; 2856 end 2857 argout1 = GlobalData.Preferences.DownsampleTimeSeries; 2858 else 2859 argout1 = 5; 2860 end 2861 2862 case 'GraphicsSmoothing' 2863 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'GraphicsSmoothing') 2864 argout1 = GlobalData.Preferences.GraphicsSmoothing; 2865 else 2866 argout1 = 5; 2867 end 2868 2869 case 'DisableOpenGL' 2870 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'DisableOpenGL') 2871 argout1 = GlobalData.Preferences.DisableOpenGL; 2872 else 2873 argout1 = 0; 2874 end 2875 2876 case 'InterfaceScaling' 2877 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'InterfaceScaling') 2878 argout1 = GlobalData.Preferences.InterfaceScaling; 2879 else 2880 % Get screen resolution 2881 if isfield(GlobalData, 'Program') && isfield(GlobalData.Program, 'ScreenDef') && isfield(GlobalData.Program.ScreenDef, 'javaPos') && ~isempty(GlobalData.Program.ScreenDef(1).javaPos) 2882 AvailableRes = [100 125 150 200 250 300 400]; 2883 iRes = bst_closest(GlobalData.Program.ScreenDef(1).javaPos.width * 100 / 1920, AvailableRes); 2884 argout1 = AvailableRes(iRes); 2885 else 2886 argout1 = 100; 2887 end 2888 end 2889 2890 case 'TSDisplayMode' 2891 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'TSDisplayMode') 2892 argout1 = GlobalData.Preferences.TSDisplayMode; 2893 else 2894 argout1 = 'butterfly'; 2895 end 2896 2897 case 'PluginCustomPath' 2898 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'PluginCustomPath') && ~isempty(GlobalData.Preferences.PluginCustomPath) 2899 argout1 = GlobalData.Preferences.PluginCustomPath; 2900 else 2901 argout1 = []; 2902 end 2903 2904 case 'BrainSuiteDir' 2905 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'BrainSuiteDir') && ~isempty(GlobalData.Preferences.BrainSuiteDir) 2906 if isdir(GlobalData.Preferences.BrainSuiteDir) && file_exist(bst_fullfile(GlobalData.Preferences.BrainSuiteDir, 'bdp')) 2907 argout1 = GlobalData.Preferences.BrainSuiteDir; 2908 else 2909 argout1 = []; 2910 end 2911 else 2912 argout1 = []; 2913 end 2914 2915 case 'SpmTpmAtlas' 2916 preferSpm = 0; 2917 % CALL: bst_get('SpmTpmAtlas', 'SPM') 2918 if (nargin >= 2) && strcmpi(varargin{2}, 'SPM') 2919 preferSpm = 1; 2920 end 2921 2922 % Get template file 2923 tpmUser = bst_fullfile(bst_get('BrainstormUserDir'), 'defaults', 'spm', 'TPM.nii'); 2924 if file_exist(tpmUser) && ~preferSpm 2925 argout1 = tpmUser; 2926 disp(['BST> SPM12 template found: ' tpmUser]); 2927 return; 2928 end 2929 % If it does not exist: check in brainstorm3 folder 2930 tpmDistrib = bst_fullfile(bst_get('BrainstormHomeDir'), 'defaults', 'spm', 'TPM.nii'); 2931 if file_exist(tpmDistrib) && ~preferSpm 2932 argout1 = tpmDistrib; 2933 disp(['BST> SPM12 template found: ' tpmDistrib]); 2934 return; 2935 end 2936 % If it does not exist: check in spm12 folder 2937 PlugSpm = bst_plugin('GetInstalled', 'spm12'); 2938 if ~isempty(PlugSpm) 2939 tpmSpm = bst_fullfile(PlugSpm.Path, PlugSpm.SubFolder, 'tpm', 'TPM.nii'); 2940 if 
file_exist(tpmSpm) 2941 argout1 = tpmSpm; 2942 disp(['BST> SPM12 template found: ' tpmSpm]); 2943 return; 2944 elseif preferSpm 2945 argout1 = bst_get('SpmTpmAtlas'); 2946 return 2947 end 2948 else 2949 tpmSpm = ''; 2950 end 2951 % Not found... 2952 disp('SPM12 template not found in any of the following folders:'); 2953 disp([' - ' tpmUser]); 2954 disp([' - ' tpmDistrib]); 2955 if ~isempty(tpmSpm) 2956 disp([' - ' tpmSpm]); 2957 end 2958 argout1 = []; 2959 2960 case 'PythonExe' 2961 % Get saved value 2962 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'PythonExe') && ~isempty(GlobalData.Preferences.PythonExe) 2963 if file_exist(GlobalData.Preferences.PythonExe) 2964 argout1 = GlobalData.Preferences.PythonExe; 2965 else 2966 disp(['BST> Error: Python executable not found: ' GlobalData.Preferences.PythonExe]); 2967 argout1 = []; 2968 end 2969 else 2970 argout1 = []; 2971 end 2972 % If not defined in Brainstorm, but set in Matlab 2973 if isempty(argout1) 2974 [pyVer, PythonExe] = bst_python_ver(); 2975 if ~isempty(PythonExe) && file_exist(PythonExe) 2976 disp(['BST> Found Python executable: ' PythonExe]); 2977 argout1 = PythonExe; 2978 bst_set('PythonExe', PythonExe); 2979 end 2980 end 2981 2982 case 'ElectrodeConfig' 2983 % Get modality 2984 Modality = varargin{2}; 2985 if isequal(Modality, 'ECOG+SEEG') 2986 Modality = 'ECOG_SEEG'; 2987 elseif isempty(Modality) || ~ismember(Modality, {'EEG','ECOG','SEEG'}) 2988 error(['Invalid modality: ' Modality]); 2989 end 2990 % Value was saved previously 2991 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'ElectrodeConfig') && isfield(GlobalData.Preferences.ElectrodeConfig, Modality) && isfield(GlobalData.Preferences.ElectrodeConfig.(Modality), 'ContactDiameter') 2992 argout1 = GlobalData.Preferences.ElectrodeConfig.(Modality); 2993 % Get default value 2994 else 2995 switch (Modality) 2996 case 'EEG' 2997 ElectrodeConfig.Type = 'eeg'; 2998 ElectrodeConfig.ContactDiameter = 0.010; 2999 ElectrodeConfig.ContactLength = 0.002; 3000 ElectrodeConfig.ElecDiameter = []; 3001 ElectrodeConfig.ElecLength = []; 3002 case 'ECOG' 3003 ElectrodeConfig.Type = 'ecog'; 3004 ElectrodeConfig.ContactDiameter = 0.004; 3005 ElectrodeConfig.ContactLength = 0.001; 3006 ElectrodeConfig.ElecDiameter = 0.0005; 3007 ElectrodeConfig.ElecLength = []; 3008 case {'SEEG','ECOG_SEEG'} 3009 ElectrodeConfig.Type = 'seeg'; 3010 ElectrodeConfig.ContactDiameter = 0.0008; 3011 ElectrodeConfig.ContactLength = 0.002; 3012 ElectrodeConfig.ElecDiameter = 0.0007; 3013 ElectrodeConfig.ElecLength = 0.070; 3014 end 3015 argout1 = ElectrodeConfig; 3016 end 3017 3018 case 'ElecInterpDist' 3019 % Get modality 3020 Modality = varargin{2}; 3021 if isequal(Modality, 'ECOG+SEEG') 3022 Modality = 'ECOG_SEEG'; 3023 elseif isempty(Modality) || ~ismember(Modality, {'EEG','ECOG','SEEG','MEG'}) 3024 error(['Invalid modality: ' Modality]); 3025 end 3026 % Value was saved previously 3027 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'ElecInterpDist') && isfield(GlobalData.Preferences.ElecInterpDist, Modality) 3028 argout1 = GlobalData.Preferences.ElecInterpDist.(Modality); 3029 % Get default value 3030 else 3031 switch (Modality) 3032 case 'EEG', argout1 = .3; 3033 case 'ECOG', argout1 = .015; 3034 case 'SEEG', argout1 = .015; 3035 case 'ECOG_SEEG', argout1 = .015; 3036 case 'MEG', argout1 = .5; 3037 end 3038 end 3039 3040 case 'UseSigProcToolbox' 3041 % In a parfor loop: GlobalData is empty => Check only if the toolbox is installed 
(ignore user preferences) 3042 if isempty(GlobalData) || ~isfield(GlobalData, 'Program') || ~isfield(GlobalData.Program, 'HasSigProcToolbox') 3043 argout1 = exist('kaiserord', 'file'); 3044 else 3045 % Save the result of the check for the SigProc tb 3046 if isempty(GlobalData.Program.HasSigProcToolbox) 3047 % Check if Signal Processing Toolbox is installed 3048 GlobalData.Program.HasSigProcToolbox = (exist('kaiserord', 'file') == 2); 3049 end 3050 % Return user preferences 3051 if ~GlobalData.Program.HasSigProcToolbox 3052 argout1 = 0; 3053 elseif isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'UseSigProcToolbox') 3054 argout1 = GlobalData.Preferences.UseSigProcToolbox; 3055 else 3056 argout1 = 1; 3057 end 3058 end 3059 3060 case 'CustomColormaps' 3061 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'CustomColormaps') && ~isempty(GlobalData.Preferences.CustomColormaps) 3062 argout1 = GlobalData.Preferences.CustomColormaps; 3063 else 3064 argout1 = repmat(struct('Name', '', 'CMap', []), 0); 3065 end 3066 3067 case 'BFSProperties' 3068 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'BFSProperties') && ~isempty(GlobalData.Preferences.BFSProperties) 3069 argout1 = GlobalData.Preferences.BFSProperties; 3070 else 3071 argout1 = [.33 .0042 .33 .88 .93]; 3072 end 3073 3074 case 'ShowHiddenFiles' 3075 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'ShowHiddenFiles') 3076 argout1 = GlobalData.Preferences.ShowHiddenFiles; 3077 else 3078 argout1 = 0; 3079 end 3080 3081 case 'LastUsedDirs' 3082 defPref = struct(... 3083 'ImportData', '', ... 3084 'ImportChannel', '', ... 3085 'ImportAnat', '', ... 3086 'ImportMontage', '', ... 3087 'ExportChannel', '', ... 3088 'ExportData', '', ... 3089 'ExportAnat', '', ... 3090 'ExportProtocol', '', ... 3091 'ExportImage', '', ... 3092 'ExportScript', '', ... 3093 'ExportMontage', ''); 3094 argout1 = FillMissingFields(contextName, defPref); 3095 % Check that all folders are valid 3096 fields = fieldnames(argout1); 3097 for i = 1:length(fields) 3098 if ~ischar(argout1.(fields{i})) || ~file_exist(argout1.(fields{i})) 3099 argout1.(fields{i}) = ''; 3100 end 3101 end 3102 3103 case 'DefaultFormats' 3104 defPref = struct(... 3105 'AnatIn', 'FreeSurfer', ... 3106 'ChannelIn', '', ... 3107 'ChannelOut', '', ... 3108 'DataIn', 'CTF', ... 3109 'DataOut', '', ... 3110 'DipolesIn', '', ... 3111 'DipolesOut', '', ... 3112 'ImageOut', '', ... 3113 'EventsIn', '', ... 3114 'EventsOut', '', ... 3115 'MriIn', '', ... 3116 'MriOut', 'Nifti1', ... 3117 'NoiseCovIn', '', ... 3118 'NoiseCovOut', '', ... 3119 'ResultsIn', '', ... 3120 'ResultsOut', '', ... 3121 'SpmOut', '', ... 3122 'SspIn', '', ... 3123 'SspOut', '', ... 3124 'SurfaceIn', '', ... 3125 'SurfaceOut', '', ... 3126 'LabelIn', '', ... 3127 'LabelOut', '', ... 3128 'TimefreqIn', '', ... 3129 'TimefreqOut', '', ... 3130 'MatrixIn', '', ... 3131 'MatrixOut', '', ... 3132 'MontageIn', '', ... 3133 'MontageOut', '', ... 3134 'FibersIn', ''); 3135 argout1 = FillMissingFields(contextName, defPref); 3136 3137 case 'ImportDataOptions' 3138 defPref = db_template('ImportOptions'); 3139 argout1 = FillMissingFields(contextName, defPref); 3140 3141 case 'RawViewerOptions' 3142 defPref = struct(... 3143 'PageDuration', 3, ... 3144 'RemoveBaseline', 'all', ... 3145 'UseCtfComp', 1, ... 3146 'Shortcuts', []); 3147 defPref.Shortcuts = {... 3148 '1', 'event1', 'simple', []; ... 
% Key, event name, event type (simple,extended,page), epoch time 3149 '2', 'event2', 'simple', []; ... 3150 '3', 'event3', 'simple', []; ... 3151 '4', 'event4', 'simple', []; ... 3152 '5', 'event5', 'simple', []; ... 3153 '6', 'event6', 'simple', []; ... 3154 '7', 'event7', 'simple', []; ... 3155 '8', 'event8', 'simple', []; ... 3156 '9', 'event9', 'simple', []}; 3157 argout1 = FillMissingFields(contextName, defPref); 3158 % If invalid PageDuration: reset to default 3159 if (argout1.PageDuration <= 0.1) 3160 argout1.PageDuration = defPref.PageDuration; 3161 end 3162 % If old shortcuts: reset to defaults 3163 if any(size(argout1.Shortcuts) ~= size(defPref.Shortcuts)) 3164 disp('BST> Warning: Reset keyboard shortcuts to include new options.'); 3165 argout1.Shortcuts = defPref.Shortcuts; 3166 bst_set('RawViewerOptions', argout1); 3167 end 3168 3169 case 'MontageOptions' 3170 defPref = struct('Shortcuts', []); 3171 defPref.Shortcuts = { 3172 %'a', []; ... Note: A is reserved for All channels 3173 'b', []; ... 3174 'c', []; ... 3175 'd', []; ... 3176 'e', []; ... 3177 'f', []; ... 3178 'g', []; ... 3179 'h', []; ... 3180 'i', []; ... 3181 'j', []; ... 3182 'k', []; ... 3183 'l', []; ... 3184 'm', []; ... 3185 'n', []; ... 3186 'o', []; ... 3187 'p', []; ... 3188 'q', []; ... 3189 'r', []; ... 3190 's', []; ... 3191 't', []; ... 3192 'u', []; ... 3193 'v', []; ... 3194 'w', []; ... 3195 'x', []; ... 3196 'y', []; ... 3197 'z', []; ... 3198 }; 3199 argout1 = FillMissingFields(contextName, defPref); 3200 3201 case 'TopoLayoutOptions' 3202 defPref = struct(... 3203 'TimeWindow', [], ... 3204 'FreqWindow', [], ... 3205 'WhiteBackground', 0, ... 3206 'ShowRefLines', 1, ... 3207 'ShowLegend', 1, ... 3208 'FlipYAxis', 0, ... 3209 'ContourLines', 10); 3210 argout1 = FillMissingFields(contextName, defPref); 3211 3212 case 'StatThreshOptions' 3213 defPref = struct(... 3214 'pThreshold', .05, ... 3215 'durThreshold', 0, ... 3216 'Correction', 'fdr', ... 3217 'Control', [1 2 3]); 3218 argout1 = FillMissingFields(contextName, defPref); 3219 % Make sure that Control is not a string (previous brainstorm version) 3220 if ischar(argout1.Control) 3221 argout1.Control = defPref.Control; 3222 end 3223 % Make sure that 'no' is used instead of 'none' (previous brainstorm version) 3224 if strcmpi(argout1.Correction, 'none') 3225 argout1.Correction = 'no'; 3226 end 3227 3228 case 'ContactSheetOptions' 3229 defPref = struct(... 3230 'nImages', 20, ... 3231 'TimeRange', [], ... 3232 'SkipVolume', 0.2); 3233 argout1 = FillMissingFields(contextName, defPref); 3234 3235 case 'ProcessOptions' 3236 defPref = struct(... 3237 'SavedParam', struct(), ... 3238 'MaxBlockSize', 100 * 1024 * 1024 / 8, ... % 100MiB == 13,107,200 doubles 3239 'LastMaxBlockSize', 100 * 1024 * 1024 / 8); % 100MiB == 13,107,200 doubles 3240 argout1 = FillMissingFields(contextName, defPref); 3241 3242 case 'ImportEegRawOptions' 3243 defPref = struct(... 3244 'isCanceled', 0, ... 3245 'BaselineDuration', 0, ... 3246 'SamplingRate', 1000, ... 3247 'MatrixOrientation', 'channelXtime', ... % {'channelXtime', 'timeXchannel'} 3248 'VoltageUnits', 'V', ... % {'\muV', 'mV', 'V'} 3249 'SkipLines', 0, ... 3250 'nAvg', 1, ... 3251 'isChannelName', 0); % 1 if the first entry contains the channel name 3252 argout1 = FillMissingFields(contextName, defPref); 3253 3254 case 'BugReportOptions' 3255 defPref = struct(... 3256 'isEnabled', 0, ... 3257 'SmtpServer', 'mailhost.chups.jussieu.fr', ... 
3258 'UserEmail', ''); 3259 argout1 = FillMissingFields(contextName, defPref); 3260 3261 case 'DefaultSurfaceDisplay' 3262 defPref = struct(... 3263 'SurfShowSulci', 1, ... 3264 'SurfSmoothValue', 0, ... 3265 'DataThreshold', 0.5, ... 3266 'SizeThreshold', 1, ... 3267 'DataAlpha', 0); 3268 argout1 = FillMissingFields(contextName, defPref); 3269 3270 case 'MagneticExtrapOptions' 3271 defPref = struct(... 3272 'ForceWhitening', 0, ... 3273 'EpsilonValue', 0.0001); 3274 argout1 = FillMissingFields(contextName, defPref); 3275 3276 case 'DefaultFreqBands' 3277 argout1 = {... 3278 'delta', '2, 4', 'mean'; ... 3279 'theta', '5, 7', 'mean'; ... 3280 'alpha', '8, 12', 'mean'; ... 3281 'beta', '15, 29', 'mean'; ... 3282 'gamma1', '30, 59', 'mean'; ... 3283 'gamma2', '60, 90', 'mean'}; 3284 3285 case 'TimefreqOptions_morlet' 3286 defPref.isTimeBands = 0; 3287 defPref.isFreqBands = 0; 3288 defPref.isFreqLog = 0; 3289 defPref.TimeBands = {}; 3290 defPref.Freqs = '1:1:60'; 3291 defPref.FreqsLog = '1:40:150'; 3292 defPref.FreqBands = bst_get('DefaultFreqBands'); 3293 defPref.Measure = 'power'; 3294 defPref.SaveKernel = 0; 3295 defPref.Output = 'all'; 3296 defPref.RemoveEvoked = 0; 3297 defPref.ClusterFuncTime = 'after'; 3298 defPref.MorletFc = 1; 3299 defPref.MorletFwhmTc = 3; 3300 argout1 = FillMissingFields(contextName, defPref); 3301 if isempty(argout1.Freqs) 3302 argout1.Freqs = defPref.Freqs; 3303 end 3304 if ~isempty(argout1.FreqBands) && ((size(argout1.FreqBands,2) ~= 3) || ~all(cellfun(@ischar, argout1.FreqBands(:))) || any(cellfun(@(c)isempty(strtrim(c)), argout1.FreqBands(:)))) 3305 argout1.FreqBands = defPref.FreqBands; 3306 end 3307 3308 case 'TimefreqOptions_hilbert' 3309 defPref.isTimeBands = 0; 3310 defPref.isFreqBands = 1; 3311 defPref.isFreqLog = 0; 3312 defPref.TimeBands = {}; 3313 defPref.Freqs = []; 3314 defPref.FreqsLog = []; 3315 defPref.FreqBands = bst_get('DefaultFreqBands'); 3316 defPref.Measure = 'power'; 3317 defPref.SaveKernel = 0; 3318 defPref.Output = 'all'; 3319 defPref.RemoveEvoked = 0; 3320 defPref.ClusterFuncTime = 'after'; 3321 argout1 = FillMissingFields(contextName, defPref); 3322 if isempty(argout1.Freqs) 3323 argout1.Freqs = defPref.Freqs; 3324 end 3325 if ~isempty(argout1.FreqBands) && (size(argout1.FreqBands,2) == 3) && ~ischar(argout1.FreqBands{1,2}) 3326 argout1.FreqBands = defPref.FreqBands; 3327 end 3328 3329 case 'TimefreqOptions_plv' % not used 3330 defPref.isTimeBands = 0; 3331 defPref.isFreqBands = 1; 3332 defPref.isFreqLog = 0; 3333 defPref.TimeBands = {}; 3334 defPref.Freqs = []; 3335 defPref.FreqsLog = []; 3336 defPref.FreqBands = bst_get('DefaultFreqBands'); 3337 defPref.Measure = 'other'; 3338 defPref.SaveKernel = 0; 3339 defPref.Output = 'all'; 3340 defPref.ClusterFuncTime = 'after'; 3341 argout1 = FillMissingFields(contextName, defPref); 3342 if isempty(argout1.Freqs) 3343 argout1.Freqs = defPref.Freqs; 3344 end 3345 if ~isempty(argout1.FreqBands) && ~ischar(argout1.FreqBands{1,2}) 3346 argout1.FreqBands = defPref.FreqBands; 3347 end 3348 3349 case 'TimefreqOptions_fft' 3350 defPref.isTimeBands = 0; 3351 defPref.isFreqBands = 0; 3352 defPref.isFreqLog = 0; 3353 defPref.TimeBands = {}; 3354 defPref.Freqs = []; 3355 defPref.FreqsLog = []; 3356 defPref.FreqBands = bst_get('DefaultFreqBands'); 3357 defPref.Measure = 'power'; 3358 defPref.Output = 'all'; 3359 defPref.ClusterFuncTime = 'after'; 3360 argout1 = FillMissingFields(contextName, defPref); 3361 if isempty(argout1.Freqs) 3362 argout1.Freqs = defPref.Freqs; 3363 end 3364 if 
~isempty(argout1.FreqBands) && ~ischar(argout1.FreqBands{1,2}) 3365 argout1.FreqBands = defPref.FreqBands; 3366 end 3367 3368 case 'TimefreqOptions_psd' 3369 defPref.isTimeBands = 0; 3370 defPref.isFreqBands = 0; 3371 defPref.isFreqLog = 0; 3372 defPref.TimeBands = {}; 3373 defPref.Freqs = []; 3374 defPref.FreqsLog = []; 3375 defPref.FreqBands = bst_get('DefaultFreqBands'); 3376 defPref.Measure = 'power'; 3377 defPref.Output = 'all'; 3378 defPref.ClusterFuncTime = 'after'; 3379 argout1 = FillMissingFields(contextName, defPref); 3380 if isempty(argout1.Freqs) 3381 argout1.Freqs = defPref.Freqs; 3382 end 3383 if ~isempty(argout1.FreqBands) && ~ischar(argout1.FreqBands{1,2}) 3384 argout1.FreqBands = defPref.FreqBands; 3385 end 3386 3387 case 'TimefreqOptions_stft' 3388 defPref.isTimeBands = 0; 3389 defPref.isFreqBands = 0; 3390 defPref.isFreqLog = 0; 3391 defPref.TimeBands = {}; 3392 defPref.Freqs = []; 3393 defPref.FreqsLog = []; 3394 defPref.FreqBands = bst_get('DefaultFreqBands'); 3395 defPref.Measure = 'power'; 3396 defPref.Output = 'all'; 3397 defPref.ClusterFuncTime = 'after'; 3398 defPref.StftWinLen = 1; 3399 defPref.StftWinOvr = 0; 3400 defPref.StftFrqMax = 0; 3401 argout1 = FillMissingFields(contextName, defPref); 3402 if isempty(argout1.Freqs) 3403 argout1.Freqs = defPref.Freqs; 3404 end 3405 if ~isempty(argout1.FreqBands) && ~ischar(argout1.FreqBands{1,2}) 3406 argout1.FreqBands = defPref.FreqBands; 3407 end 3408 3409 case 'ExportBidsOptions' 3410 defPref.ProjName = []; 3411 defPref.ProjID = []; 3412 defPref.ProjDesc = []; 3413 defPref.Categories = []; 3414 defPref.JsonDataset = ['{' 10 ' "License": "PD"' 10 '}']; 3415 defPref.JsonMeg = ['{' 10 ' "TaskDescription": "My task"' 10 '}']; 3416 argout1 = FillMissingFields(contextName, defPref); 3417 3418 case 'OpenMEEGOptions' 3419 defPref.BemFiles = {}; 3420 defPref.BemNames = {'Scalp', 'Skull', 'Brain'}; 3421 defPref.BemCond = [1, 0.0125, 1]; 3422 defPref.BemSelect = [1 1 1]; 3423 defPref.isAdjoint = 0; 3424 defPref.isAdaptative = 1; 3425 defPref.isSplit = 0; 3426 defPref.SplitLength = 4000; 3427 argout1 = FillMissingFields(contextName, defPref); 3428 3429 case 'DuneuroOptions' 3430 defPref = duneuro_defaults(); 3431 argout1 = FillMissingFields(contextName, defPref); 3432 3433 case 'GridOptions_dipfit' 3434 defPref = struct(... 3435 'Method', 'isotropic', ... 3436 'nLayers', 17, ... 3437 'Reduction', 3, ... 3438 'nVerticesInit', 4000, ... 3439 'Resolution', 0.020, ... 3440 'FileName', ''); 3441 argout1 = FillMissingFields(contextName, defPref); 3442 3443 case 'GridOptions_headmodel' 3444 defPref = struct(... 3445 'Method', 'isotropic', ... 3446 'nLayers', 17, ... 3447 'Reduction', 3, ... 3448 'nVerticesInit', 4000, ... 3449 'Resolution', 0.005, ... 3450 'FileName', ''); 3451 argout1 = FillMissingFields(contextName, defPref); 3452 3453 case 'MriOptions' 3454 defPref = struct(... 3455 'isRadioOrient', 0, ... 3456 'isMipAnatomy', 0, ... 3457 'isMipFunctional', 0, ... 3458 'OverlaySmooth', 0, ... 3459 'InterpDownsample', 3, ... 3460 'DistanceThresh', 6, ... 3461 'UpsampleImage', 0, ... 3462 'DefaultAtlas', []); 3463 argout1 = FillMissingFields(contextName, defPref); 3464 3465 case 'DigitizeOptions' 3466 defPref = struct(... 3467 'PatientId', 'S001', ... 3468 'ComPort', 'COM1', ... 3469 'ComRate', 9600, ... 3470 'ComByteCount', 94, ... % 47 bytes * 2 receivers 3471 'UnitType', 'fastrak', ... 3472 'ConfigCommands', [], ... % setup-specific device configuration commands, e.g. hemisphere of operation 3473 'nFidSets', 2, ... 
3474 'Fids', {{'NAS', 'LPA', 'RPA'}}, ... % 3 anat points (required) and any other, e.g. MEG coils, in desired digitization order 3475 'DistThresh', 0.005, ... % 5 mm distance threshold between repeated measures of fid positions 3476 'isBeep', 1, ... 3477 'isMEG', 1, ... 3478 'isSimulate', 0, ... 3479 'Montages', [... 3480 struct('Name', 'No EEG', ... 3481 'Labels', [], ... 3482 'ChannelFile', []), ... 3483 struct('Name', 'Default', ... 3484 'Labels', [], ... 3485 'ChannelFile', [])], ... 3486 'iMontage', 1, ... 3487 'Version', '2024'); % Version of the Digitize panel: 'legacy' or '2024' 3488 argout1 = FillMissingFields(contextName, defPref); 3489 3490 case 'PcaOptions' 3491 defPref.Method = 'pca'; % deprecated legacy per-file with sign inconsistencies, but kept as default for reproducibility 3492 defPref.Baseline = [-.1, 0]; % not used for 'pca': full window instead 3493 defPref.DataTimeWindow = [0, 1]; % not used for 'pca': full window instead 3494 defPref.RemoveDcOffset = 'file'; 3495 argout1 = FillMissingFields(contextName, defPref); 3496 3497 case 'ConnectGraphOptions' 3498 % Get interface scaling factor 3499 InterfaceScaling = bst_get('InterfaceScaling'); 3500 % Get default values 3501 defPref = struct(... 3502 'LobeFullLabel', 1, ... 3503 'TextDisplayMode', [1 2], ... 3504 'LabelSize', 7 .* InterfaceScaling ./ 100, ... 3505 'NodeSize', 5 .* InterfaceScaling ./ 100, ... 3506 'LinkSize', 1.5 .* InterfaceScaling ./ 100, ... 3507 'BgColor', [0 0 0], ... 3508 'HierarchyNodeIsVisible', 1); 3509 % If we have an additional argument, get the default values 3510 if nargin > 1 3511 argout1 = defPref; 3512 % Otherwise, get the saved values 3513 else 3514 savedValues = FillMissingFields(contextName, defPref); 3515 3516 % if any of the fields are [], replace by default value 3517 % do it here to avoid touching the common FillMissingFields 3518 % function, as other tools may actually want to set [] as desired property 3519 fields = fieldnames(savedValues); 3520 for i=1:numel(fields) 3521 if(isempty(savedValues.(fields{i}))) 3522 savedValues.(fields{i}) = defPref.(fields{i}); 3523 end 3524 end 3525 argout1 = savedValues; 3526 end 3527 3528 case 'NodelistOptions' 3529 defPref = struct(... 3530 'String', '', ... % What to search for 3531 'Target', 'Comment', ... 
% What field to search for: {'FileName', 'Comment'} 3532 'Action', 'Select'); % What to do with the filtered files: {'Select', 'Exclude'} 3533 argout1 = FillMissingFields(contextName, defPref); 3534 3535 case 'ReadOnly' 3536 if isfield(GlobalData.DataBase, 'isReadOnly') 3537 argout1 = GlobalData.DataBase.isReadOnly; 3538 else 3539 argout1 = 0; 3540 end 3541 3542 case 'LastPsdDisplayFunction' 3543 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'LastPsdDisplayFunction') 3544 argout1 = GlobalData.Preferences.LastPsdDisplayFunction; 3545 else 3546 argout1 = []; 3547 end 3548 3549 case 'PlotlyCredentials' 3550 % Get saved username and API key 3551 try 3552 creds = loadplotlycredentials(); 3553 argout1 = creds.username; 3554 argout2 = creds.api_key; 3555 catch 3556 argout1 = ''; 3557 argout2 = ''; 3558 end 3559 % Get saved domain 3560 try 3561 config = loadplotlyconfig(); 3562 argout3 = config.plotly_domain; 3563 catch 3564 argout3 = ''; 3565 end 3566 3567 case 'KlustersExecutable' 3568 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, 'KlustersExecutable') 3569 argout1 = GlobalData.Preferences.KlustersExecutable; 3570 else 3571 argout1 = []; 3572 end 3573 3574 3575 %% ===== FILE FILTERS ===== 3576 case 'FileFilters' 3577 switch lower(varargin{2}) 3578 case 'mri' 3579 argout1 = {... 3580 {'.img'}, 'MRI: Analyze (*.img/*.hdr)', 'Analyze'; ... 3581 {'.ima'}, 'MRI: BrainVISA GIS (*.ima/*.dim)', 'GIS'; ... 3582 {'.ima'}, 'MRI: BrainVISA GIS (*.ima/*.dim)', 'GIS'; ... 3583 {'.mri'}, 'MRI: CTF (*.mri)', 'CTF'; ... 3584 {'.mat'}, 'MRI: FieldTrip (*.mat)', 'FT-MRI'; ... 3585 {'.mgh','.mgz'}, 'MRI: MGH (*.mgh,*.mgz)', 'MGH'; ... 3586 {'.mnc', '.mni'}, 'MRI: MNI (*.mnc,*.mni)', 'MINC'; ... 3587 {'.nii','.gz'}, 'MRI: NIfTI-1 (*.nii;*.nii.gz)', 'Nifti1'; ... 3588 {'_subjectimage'}, 'MRI: Brainstorm (*subjectimage*.mat)', 'BST'; ... 3589 {'*'}, 'MRI: DICOM (SPM converter)', 'DICOM-SPM'; ... 3590 {'.mri', '.fif', '.img', '.ima', '.nii', '.mgh', '.mgz', '.mnc', '.mni', '.gz', '_subjectimage'}, 'All MRI files (subject space)', 'ALL'; ... 3591 {'.mri', '.fif', '.img', '.ima', '.nii', '.mgh', '.mgz', '.mnc', '.mni', '.gz', '_subjectimage'}, 'All MRI files (MNI space)', 'ALL-MNI'; ... 3592 {'.mri', '.fif', '.img', '.ima', '.nii', '.mgh', '.mgz', '.mnc', '.mni', '.gz', '_subjectimage'}, 'Volume atlas (subject space)', 'ALL-ATLAS'; ... 3593 {'.mri', '.fif', '.img', '.ima', '.nii', '.mgh', '.mgz', '.mnc', '.mni', '.gz', '_subjectimage'}, 'Volume atlas (MNI space)', 'ALL-MNI-ATLAS'; ... 3594 }; 3595 case 'mriout' 3596 argout1 = {... 3597 {'.img'}, 'MRI: Analyze (*.img/*.hdr)', 'Analyze'; ... 3598 {'.ima'}, 'MRI: BrainVISA GIS (*.ima/*.dim)', 'GIS'; ... 3599 {'.mri'}, 'MRI: CTF (*.mri)', 'CTF'; ... 3600 {'.mat'}, 'MRI: FieldTrip (*.mat)', 'FT-MRI'; ... 3601 {'.nii'}, 'MRI: NIfTI-1 (*.nii)', 'Nifti1'... 3602 }; 3603 case 'anatin' 3604 argout1 = {... 3605 {'.folder'}, 'FreeSurfer', 'FreeSurfer-fast'; ... 3606 {'.folder'}, 'FreeSurfer + Volume atlases', 'FreeSurfer'; ... 3607 {'.folder'}, 'FreeSurfer + Volume atlases + Thickness', 'FreeSurfer+Thick'; ... 3608 {'.folder'}, 'BrainSuite', 'BrainSuite-fast'; ... 3609 {'.folder'}, 'BrainSuite + Volume atlases', 'BrainSuite'; ... 3610 {'.folder'}, 'BrainVISA', 'BrainVISA'; ... 3611 {'.folder'}, 'CAT12', 'CAT12'; ... 3612 {'.folder'}, 'CAT12 + Thickness', 'CAT12+Thick'; ... 3613 {'.folder'}, 'CIVET', 'CIVET'; ... 3614 {'.folder'}, 'CIVET + Thickness', 'CIVET+Thick'; ... 
3615 {'.folder'}, 'HCP MEG/anatomy (pipeline v3)', 'HCPv3'; ... 3616 {'.folder'}, 'SimNIBS', 'SimNIBS'; ... 3617 }; 3618 case 'source4d' 3619 argout1 = {... 3620 {'.folder'}, 'NIfTI-1 (*.nii)', 'Nifti1';... 3621 {'.folder'}, 'Analyze (*.img/*.hdr)', 'Analyze'; ... 3622 {'.folder'}, 'Matlab 4D matrix (*voltime*.mat)', 'BST'; ... 3623 }; 3624 3625 case 'surface' 3626 argout1 = {... 3627 {'.mesh'}, 'BrainVISA (*.mesh)', 'MESH'; ... 3628 {'_tess', '_head', '_scalp', '_brain', '_cortex', '_innerskull', '_outerskull'}, 'Brainstorm (*.mat)', 'BST'; ... 3629 {'.dfs'}, 'BrainSuite (*.dfs)', 'DFS'; ... 3630 {'.dsgl'}, 'BrainSuite old (*.dsgl)', 'DSGL'; ... 3631 {'.bd0','.bd1','.bd2','.bd3','.bd4','.bd5','.bd6','.bd7','.bd8','.bd9', ... 3632 '.s00','.s01','.s02','.s03','.s04','.s05','.s06','.s07','.s08','.s09'}, ... 3633 'Curry BEM (*.db*;*.s0*)', 'CURRY-BEM'; 3634 {'.vtk'}, 'FSL: VTK (*.vtk)', 'VTK'; ... 3635 {'*'}, 'FreeSurfer (*.*)', 'FS'; 3636 {'.off'}, 'Geomview OFF (*.off)', 'OFF'; ... 3637 {'.gii'}, 'GIfTI / MRI coordinates (*.gii)', 'GII'; ... 3638 {'.gii'}, 'GIfTI / MNI coordinates (*.gii)', 'GII-MNI'; ... 3639 {'.gii'}, 'GIfTI / World coordinates (*.gii)', 'GII-WORLD'; ... 3640 {'.fif'}, 'MNE (*.fif)', 'FIF'; ... 3641 {'.obj'}, 'MNI OBJ (*.obj)', 'MNIOBJ'; ... 3642 {'.obj'}, 'Wavefront OBJ (*.obj)', 'WFTOBJ'; ... 3643 {'.msh'}, 'SimNIBS3/headreco Gmsh4 (*.msh)', 'SIMNIBS3'; ... 3644 {'.msh'}, 'SimNIBS4/charm Gmsh4 (*.msh)', 'SIMNIBS4'; ... 3645 {'.tri'}, 'TRI (*.tri)', 'TRI'; ... 3646 {'.mri', '.fif', '.img', '.ima', '.nii', '.mgh', '.mgz', '.mnc', '.mni', '.gz', '_subjectimage'}, 'Volume mask or atlas (subject space)', 'MRI-MASK'; ... 3647 {'.mri', '.fif', '.img', '.ima', '.nii', '.mgh', '.mgz', '.mnc', '.mni', '.gz'}, 'Volume mask or atlas (MNI space)', 'MRI-MASK-MNI'; ... 3648 {'.nwbaux'}, 'Neurodata Without Borders (*.nwbaux)', 'NWB'; ... 3649 {'*'}, 'All surface files (*.*)', 'ALL'; ... 3650 }; 3651 3652 case 'surfaceout' 3653 argout1 = {... 3654 {'.mesh'}, 'BrainVISA (*.mesh)', 'MESH'; ... 3655 {'.dfs'}, 'BrainSuite (*.dfs)', 'DFS'; ... 3656 {'.fs'}, 'FreeSurfer (*.fs)', 'FS'; ... 3657 {'.obj'}, 'Wavefront OBJ (*.obj)', 'OBJ'; ... 3658 {'.off'}, 'Geomview OFF (*.off)', 'OFF'; ... 3659 {'.gii'}, 'GIfTI (*.gii)', 'GII'; ... 3660 {'.tri'}, 'TRI (*.tri)', 'TRI'; ... 3661 {'.stl'}, 'STL (*.stl)', 'STL'; ... 3662 }; 3663 3664 case 'data' 3665 argout1 = {... 3666 {'.*'}, 'MEG/EEG: 4D-Neuroimaging/BTi (*.*)', '4D'; ... 3667 {'.meg4','.res4'}, 'MEG/EEG: CTF (*.ds;*.meg4;*.res4)', 'CTF'; ... 3668 {'.fif'}, 'MEG/EEG: Elekta-Neuromag (*.fif)', 'FIF'; ... 3669 {'.mat'}, 'MEG/EEG: FieldTrip (*.mat)', 'FT-TIMELOCK'; ... 3670 {'.raw'}, 'MEG/EEG: ITAB (*.raw)', 'ITAB'; ... 3671 {'.kdf'}, 'MEG/EEG: KRISS MEG (*.kdf)', 'KDF'; ... 3672 {'.mrk','.sqd','.con','.raw','.ave'}, 'MEG/EEG: Ricoh (*.sqd;*.con;*.raw;*.ave;*.mrk)', 'RICOH'; ... 3673 {'.mat'}, 'MEG/EEG: SPM (*.mat/.dat)', 'SPM-DAT'; ... 3674 {'.mrk','.sqd','.con','.raw','.ave'}, 'MEG/EEG: Yokogawa/KIT (*.sqd;*.con;*.raw;*.ave;*.mrk)', 'KIT'; ... 3675 {'.meghdf5'}, 'MEG/EEG: York Instruments MEGSCAN (.meghdf5)', 'MEGSCAN-HDF5'; ... 3676 {'.bst'}, 'MEG/EEG: Brainstorm binary (*.bst)', 'BST-BIN'; ... 3677 {'.adicht'}, 'EEG: ADInstruments LabChart (*.adicht)', 'EEG-ADICHT'; ... 3678 {'.msr'}, 'EEG: ANT ASA (*.msr)', 'EEG-ANT-MSR'; ... 3679 {'.cnt','.avr'}, 'EEG: ANT EEProbe (*.cnt;*.avr)', 'EEG-ANT-CNT'; ... 3680 {'*'}, 'EEG: ASCII text (*.*)', 'EEG-ASCII'; ... 3681 {'.raw'}, 'EEG: Axion AxIS (*.raw)', 'EEG-AXION'; ... 
3682 {'.dat'}, 'EEG: BCI2000 (*.dat)', 'EEG-BCI2000'; ... 3683 {'.bdf'}, 'EEG: BDF (*.bdf)', 'EEG-BDF'; ... 3684 {'.avr','.mux','.mul'}, 'EEG: BESA exports (*.avr;*.mul;*.mux)', 'EEG-BESA'; ... 3685 {'.acq'}, 'EEG: BIOPAC AcqKnowledge (*.acq)', 'EEG-BIOPAC'; ... 3686 {'.ns1','.ns2','.ns3','.ns4','.ns5','.ns6'}, 'EEG: Blackrock NeuroPort (*.nsX/*.nev)', 'EEG-BLACKROCK'; 3687 {'.eeg','.dat'}, 'EEG: BrainVision BrainAmp (*.eeg;*.dat)', 'EEG-BRAINAMP'; ... 3688 {'.txt'}, 'EEG: BrainVision Analyzer (*.txt)', 'EEG-BRAINVISION'; ... 3689 {'.sef','.ep','.eph'}, 'EEG: Cartool (*.sef;*.ep;*.eph)', 'EEG-CARTOOL'; ... 3690 {'.dat','.cdt'}, 'EEG: Curry (*.dat;*.cdt)', 'EEG-CURRY'; ... 3691 {'.smr','.son'}, 'EEG: CED Spike2 old 32bit (*.smr;*.son)', 'EEG-SMR'; ... 3692 {'.smr','.smrx'}, 'EEG: CED Spike2 new 64bit (*.smr;*.smrx)', 'EEG-SMRX'; ... 3693 {'.rda'}, 'EEG: Compumedics ProFusion Sleep (*.rda)', 'EEG-COMPUMEDICS-PFS'; ... 3694 {'.bin'}, 'EEG: Deltamed Coherence-Neurofile (*.bin)', 'EEG-DELTAMED'; ... 3695 {'.edf','.rec'}, 'EEG: EDF / EDF+ (*.rec;*.edf)', 'EEG-EDF'; ... 3696 {'.edf','.rec'}, 'EEG EDF / EDF+ FieldTrip reader (*.rec;*.edf)', 'EEG-EDF-FT'; ... 3697 {'.set'}, 'EEG: EEGLAB (*.set)', 'EEG-EEGLAB'; ... 3698 {'.raw'}, 'EEG: EGI Netstation RAW (*.raw)', 'EEG-EGI-RAW'; ... 3699 {'.mff','.bin'}, 'EEG: EGI-Philips (*.mff)', 'EEG-EGI-MFF'; ... 3700 {'.edf'}, 'EEG: EmotivPRO (*.edf)', 'EEG-EMOTIV'; ... 3701 {'.erp','.hdr'}, 'EEG: ERPCenter (*.hdr;*.erp)', 'EEG-ERPCENTER'; ... 3702 {'.erp'}, 'EEG: ERPLab (*.erp)', 'EEG-ERPLAB'; ... 3703 {'.mat','.hdf5'}, 'EEG: g.tec Matlab (*.mat,*.hdf5)', 'EEG-GTEC'; ... 3704 {'.rhd','.rhs'}, 'EEG: Intan (*.rhd,*.rhs)', 'EEG-INTAN'; ... 3705 {'.mb2'}, 'EEG: MANSCAN (*.mb2)', 'EEG-MANSCAN'; ... 3706 {'.trc'}, 'EEG: Micromed (*.trc)', 'EEG-MICROMED'; ... 3707 {'.mat'}, 'EEG: Matlab matrix (*.mat)', 'EEG-MAT'; ... 3708 {'.csv'}, 'EEG: Muse (*.csv)', 'EEG-MUSE-CSV'; ... 3709 {'.ncs'}, 'EEG: Neuralynx (*.ncs)', 'EEG-NEURALYNX'; ... 3710 {'.nwb'}, 'EEG: Neurodata Without Borders (*.nwb)','NWB'; ... 3711 {'.nedf','.easy'}, 'EEG: Neuroelectrics (*.nedf;*.easy)', 'EEG-NEUROELECTRICS'; ... 3712 {'.bin'}, 'EEG: NeurOne session folder', 'EEG-NEURONE'; ... 3713 {'.cnt','.avg','.eeg','.dat'}, 'EEG: Neuroscan (*.cnt;*.eeg;*.avg;*.dat)', 'EEG-NEUROSCAN'; ... 3714 {'.eeg','.dat'}, 'EEG: NeuroScope (*.eeg;*.dat)', 'EEG-NEUROSCOPE'; ... 3715 {'.e'}, 'EEG: Nicolet (*.e)', 'EEG-NICOLET'; ... 3716 {'.eeg'}, 'EEG: Nihon Kohden (*.eeg)', 'EEG-NK'; ... 3717 {'.dat'}, 'EEG: Open Ephys flat binary (*.dat)', 'EEG-OEBIN'; ... 3718 {'.plx','.pl2'}, 'EEG: Plexon (*.plx;*.pl2)', 'EEG-PLEXON'; ... 3719 {'.ns1','.ns2','.ns3','.ns4','.ns5','.ns6'}, 'EEG: Ripple Trellis (*.nsX/*.nev)', 'EEG-RIPPLE'; ... 3720 {'.h5'}, 'EEG: The Virtual Brain (*_TimeSeriesEEG.h5)', 'EEG-TVB'; ... 3721 {'.csv'}, 'EEG: Wearable Sensing (*.csv)', 'EEG-WS-CSV'; ... 3722 {'.xdf'}, 'EEG: XDF (*.xdf)', 'EEG-XDF'; ... 3723 {'.nirs'}, 'NIRS: Brainsight (*.nirs)', 'NIRS-BRS'; ... 3724 {'.bnirs','.jnirs','.snirf'}, 'NIRS: SNIRF (*.snirf)', 'NIRS-SNIRF'; ... 3725 {'.edf'}, 'Eye tracker: EyeLink (*.edf)', 'EYELINK'; ... 3726 {'.tsv'}, 'Eye tracker: Tobii glasses (*.tsv)', 'EYE-TOBII-TSV'; ... 3727 }; 3728 case 'raw' 3729 argout1 = {... 3730 {'.*'}, 'MEG/EEG: 4D-Neuroimaging/BTi (*.*)', '4D'; ... 3731 {'.meg4','.res4'}, 'MEG/EEG: CTF (*.ds;*.meg4;*.res4)', 'CTF'; ... 3732 {'.fif'}, 'MEG/EEG: Elekta-Neuromag (*.fif)', 'FIF'; ... 3733 {'.mat'}, 'MEG/EEG: FieldTrip (*.mat)', 'FT-TIMELOCK'; ... 
3734 {'.raw'}, 'MEG/EEG: ITAB (*.raw)', 'ITAB'; ... 3735 {'.kdf'}, 'MEG/EEG: KRISS MEG (*.kdf)', 'KDF'; ... 3736 {'.mrk','.sqd','.con','.raw','.ave'}, 'MEG/EEG: Ricoh (*.sqd;*.con;*.raw;*.ave;*.mrk)', 'RICOH'; ... 3737 {'.mat'}, 'MEG/EEG: SPM (*.mat/.dat)', 'SPM-DAT'; ... 3738 {'.mrk','.sqd','.con','.raw','.ave'}, 'MEG/EEG: Yokogawa/KIT (*.sqd;*.con;*.raw;*.ave;*.mrk)', 'KIT'; ... 3739 {'.meghdf5'}, 'MEG/EEG: York Instruments MEGSCAN (.meghdf5)', 'MEGSCAN-HDF5'; ... 3740 {'.bst'}, 'MEG/EEG: Brainstorm binary (*.bst)', 'BST-BIN'; ... 3741 {'.adicht'}, 'EEG: ADInstruments LabChart (*.adicht)', 'EEG-ADICHT'; ... 3742 {'.msr'}, 'EEG: ANT ASA (*.msr)', 'EEG-ANT-MSR'; ... 3743 {'.cnt','.avr'}, 'EEG: ANT EEProbe (*.cnt;*.avr)', 'EEG-ANT-CNT'; ... 3744 {'*'}, 'EEG: ASCII text (*.*)', 'EEG-ASCII'; ... 3745 {'.raw'}, 'EEG: Axion AxIS (*.raw)', 'EEG-AXION'; ... 3746 {'.dat'}, 'EEG: BCI2000 (*.dat)', 'EEG-BCI2000'; ... 3747 {'.bdf'}, 'EEG: BDF (*.bdf)', 'EEG-BDF'; ... 3748 {'.avr','.mux','.mul'}, 'EEG: BESA exports (*.avr;*.mul;*.mux)', 'EEG-BESA'; ... 3749 {'.acq'}, 'EEG: BIOPAC AcqKnowledge (*.acq)', 'EEG-BIOPAC'; ... 3750 {'.ns1','.ns2','.ns3','.ns4','.ns5','.ns6'}, 'EEG: Blackrock NeuroPort (*.nsX/*.nev)', 'EEG-BLACKROCK'; 3751 {'.eeg','.dat'}, 'EEG: BrainVision BrainAmp (*.eeg;*.dat)', 'EEG-BRAINAMP'; ... 3752 {'.txt'}, 'EEG: BrainVision Analyzer (*.txt)', 'EEG-BRAINVISION'; ... 3753 {'.sef','.ep','.eph'}, 'EEG: Cartool (*.sef;*.ep;*.eph)', 'EEG-CARTOOL'; ... 3754 {'.smr','.son'}, 'EEG: CED Spike2 old 32bit (*.smr;*.son)', 'EEG-SMR'; ... 3755 {'.smr','.smrx'}, 'EEG: CED Spike2 new 64bit (*.smr;*.smrx)', 'EEG-SMRX'; ... 3756 {'.rda'}, 'EEG: Compumedics ProFusion Sleep (*.rda)', 'EEG-COMPUMEDICS-PFS'; ... 3757 {'.dat','.cdt'}, 'EEG: Curry (*.dat;*.cdt)', 'EEG-CURRY'; ... 3758 {'.bin'}, 'EEG: Deltamed Coherence-Neurofile (*.bin)', 'EEG-DELTAMED'; ... 3759 {'.edf','.rec'}, 'EEG: EDF / EDF+ (*.rec;*.edf)', 'EEG-EDF'; ... 3760 {'.edf','.rec'}, 'EEG EDF / EDF+ FieldTrip reader (*.rec;*.edf)', 'EEG-EDF-FT'; ... 3761 {'.set'}, 'EEG: EEGLAB (*.set)', 'EEG-EEGLAB'; ... 3762 {'.raw'}, 'EEG: EGI Netstation RAW (*.raw)', 'EEG-EGI-RAW'; ... 3763 {'.mff','.bin'}, 'EEG: EGI-Philips (*.mff)', 'EEG-EGI-MFF'; ... 3764 {'.edf'}, 'EEG: EmotivPRO (*.edf)', 'EEG-EMOTIV'; ... 3765 {'.mat','.hdf5'}, 'EEG: g.tec Matlab (*.mat,*.hdf5)', 'EEG-GTEC'; ... 3766 {'.rhd','.rhs'}, 'EEG: Intan (*.rhd,*.rhs)', 'EEG-INTAN'; ... 3767 {'.mb2'}, 'EEG: MANSCAN (*.mb2)', 'EEG-MANSCAN'; ... 3768 {'.mat'}, 'EEG: Matlab matrix (*.mat)', 'EEG-MAT'; ... 3769 {'.csv'}, 'EEG: Muse (*.csv)', 'EEG-MUSE-CSV'; ... 3770 {'.trc'}, 'EEG: Micromed (*.trc)', 'EEG-MICROMED'; ... 3771 {'.ncs'}, 'EEG: Neuralynx (*.ncs)', 'EEG-NEURALYNX'; ... 3772 {'.nwb'}, 'EEG: Neurodata Without Borders (*.nwb)','NWB'; ... 3773 {'.nedf','.easy'}, 'EEG: Neuroelectrics (*.nedf;*.easy)', 'EEG-NEUROELECTRICS'; ... 3774 {'.bin'}, 'EEG: NeurOne session folder', 'EEG-NEURONE'; ... 3775 {'.cnt','.avg','.eeg','.dat'}, 'EEG: Neuroscan (*.cnt;*.eeg;*.avg;*.dat)', 'EEG-NEUROSCAN'; ... 3776 {'.eeg','.dat'}, 'EEG: NeuroScope (*.eeg;*.dat)', 'EEG-NEUROSCOPE'; ... 3777 {'.e'}, 'EEG: Nicolet (*.e)', 'EEG-NICOLET'; ... 3778 {'.eeg'}, 'EEG: Nihon Kohden (*.eeg)', 'EEG-NK'; ... 3779 {'.dat'}, 'EEG: Open Ephys flat binary (*.dat)', 'EEG-OEBIN'; ... 3780 {'.plx','.pl2'}, 'EEG: Plexon (*.plx;.pl2)' 'EEG-PLEXON'; ... 3781 {'.ns1','.ns2','.ns3','.ns4','.ns5','.ns6'}, 'EEG: Ripple Trellis (*.nsX/*.nev)', 'EEG-RIPPLE'; ... 3782 {'.h5'}, 'EEG: The Virtual Brain (*_TimeSeriesEEG.h5)', 'EEG-TVB'; ... 
3783 {'.tbk'}, 'EEG: Tucker Davis Technologies (*.tbk)', 'EEG-TDT'; ... 3784 {'.csv'}, 'EEG: Wearable Sensing (*.csv)', 'EEG-WS-CSV'; ... 3785 {'.xdf'}, 'EEG: XDF (*.xdf)', 'EEG-XDF'; ... 3786 {'.trc','.eeg','.e','.bin','.rda','.edf','.bdf'}, 'SEEG: Deltamed/Micromed/NK/Nicolet/BrainVision/EDF', 'SEEG-ALL'; ... 3787 {'.trc','.eeg','.e','.bin','.rda','.edf','.bdf'}, 'ECOG: Deltamed/Micromed/NK/Nicolet/BrainVision/EDF', 'ECOG-ALL'; ... 3788 {'.nirs'}, 'NIRS: Brainsight (*.nirs)', 'NIRS-BRS'; ... 3789 {'.bnirs','.jnirs','.snirf'}, 'NIRS: SNIRF (*.snirf)', 'NIRS-SNIRF'; ... 3790 {'.edf'}, 'Eye tracker: EyeLink (*.edf)', 'EYELINK'; ... 3791 {'.tsv'}, 'Eye tracker: Tobii glasses (*.tsv)', 'EYE-TOBII-TSV'; ... 3792 }; 3793 3794 case 'dataout' 3795 argout1 = {... 3796 {'.bst'}, 'MEG/EEG: Brainstorm binary (*.bst)', 'BST-BIN'; ... 3797 {'.mat'}, 'MEG/EEG: FieldTrip timelock (*.mat)', 'FT-TIMELOCK'; ... 3798 {'.mat'}, 'MEG/EEG: SPM (*.mat/.dat)', 'SPM-DAT'; ... 3799 {'.eeg'}, 'EEG: BrainVision BrainAmp (*.eeg)', 'EEG-BRAINAMP'; ... 3800 {'.eph'}, 'EEG: Cartool EPH (*.eph)', 'EEG-CARTOOL-EPH'; ... 3801 {'.edf'}, 'EEG: EDF+ (*.edf)', 'EEG-EDF'; ... 3802 {'.raw'}, 'EEG: EGI NetStation RAW (*.raw)', 'EEG-EGI-RAW'; ... 3803 {'.snirf'}, 'NIRS: SNIRF (*.snirf)', 'NIRS-SNIRF'; ... 3804 {'.txt'}, 'ASCII: Space-separated, fixed column size (*.txt)', 'ASCII-SPC'; ... 3805 {'.txt'}, 'ASCII: Space-separated with header, fixed column size (*.txt)', 'ASCII-SPC-HDR'; ... 3806 {'.tsv'}, 'ASCII: Tab-separated (*.tsv)', 'ASCII-TSV'; ... 3807 {'.tsv'}, 'ASCII: Tab-separated with header (*.tsv)', 'ASCII-TSV-HDR'; ... 3808 {'.tsv'}, 'ASCII: Tab-separated with header transposed (*.tsv)', 'ASCII-TSV-HDR-TR'; ... 3809 {'.csv'}, 'ASCII: Comma-separated (*.csv)', 'ASCII-CSV'; ... 3810 {'.csv'}, 'ASCII: Comma-separated with header (*.csv)', 'ASCII-CSV-HDR'; ... 3811 {'.csv'}, 'ASCII: Comma-separated with header transposed (*.csv)', 'ASCII-CSV-HDR-TR'; ... 3812 {'.xlsx'}, 'Microsoft Excel (*.xlsx)', 'EXCEL'; ... 3813 {'.xlsx'}, 'Microsoft Excel transposed (*.xlsx)', 'EXCEL-TR'; ... 3814 {'_timeseries'}, 'Brainstorm matrix (*timeseries*.mat)', 'BST'; ... 3815 }; 3816 case 'rawout' 3817 argout1 = {... 3818 {'.bst'}, 'MEG/EEG: Brainstorm binary (*.bst)', 'BST-BIN'; ... 3819 {'.mat'}, 'MEG/EEG: SPM (*.mat/.dat)', 'SPM-DAT'; ... 3820 {'.eeg'}, 'EEG: BrainVision BrainAmp (*.eeg)', 'EEG-BRAINAMP'; ... 3821 {'.edf'}, 'EEG: EDF+ (*.edf)', 'EEG-EDF'; ... 3822 {'.raw'}, 'EEG: EGI NetStation RAW (*.raw)', 'EEG-EGI-RAW'; ... 3823 {'.snirf'}, 'NIRS: SNIRF (*.snirf)', 'NIRS-SNIRF'; ... 3824 }; 3825 case 'events' 3826 argout1 = {... 3827 {'.trg'}, 'ANT EEProbe (*.trg)', 'ANT'; ... 3828 {'.mrk'}, 'AnyWave (*.mrk)', 'ANYWAVE'; ... 3829 {'.evt'}, 'BESA (*.evt)', 'BESA'; ... 3830 {'.tsv'}, 'BIDS events: onset, duration, trial_type, channel (*.tsv)', 'BIDS'; ... 3831 {'.vmrk'}, 'BrainVision BrainAmp (*.vmrk)', 'BRAINAMP'; ... 3832 {'_events'}, 'Brainstorm (events*.mat)', 'BST'; ... 3833 {'.mrk'}, 'Cartool (*.mrk)', 'CARTOOL'; ... 3834 {'.mrk'}, 'CTF MarkerFile (*.mrk)', 'CTF'; ... 3835 {'.cef'}, 'Curry (*.cef)', 'CURRY'; ... 3836 {'.eve','.fif'}, 'Elekta-Neuromag MNE (*.eve;*.fif)', 'FIF'; ... 3837 {'.evl','.txt'}, 'Elekta-Neuromag Graph (*.evl;*.txt)', 'GRAPH'; ... 3838 {'.txt','.mat'}, 'FieldTrip trial definition (*.txt;*.mat)', 'TRL'; ... 3839 {'.trg'}, 'KRISS MEG (*.trg)', 'KDF'; ... 3840 {'.evt'}, 'Micromed (*.evt)', 'MICROMED'; ... 3841 {'.ev2'}, 'Neuroscan (*.ev2)', 'NEUROSCAN'; ... 3842 {'.txt'}, 'Nicolet export (*.txt)', 'NICOLET'; ... 
3843 {'timestamps.npy'},'Open Ephys (timestamps.npy)', 'OEBIN'; ... 3844 {'.log'}, 'Presentation (*.log)', 'PRESENTATION'; ... 3845 {'.mrk','.sqd','.con','.raw','.ave'}, 'Ricoh (*.mrk;*.sqd;*.con;*.raw;*.ave)', 'RICOH'; ... 3846 {'.txt'}, 'XLTEK export (*.txt)', 'XLTEK'; ... 3847 {'.mrk','.sqd','.con','.raw','.ave'}, 'Yokogawa/KIT (*.mrk;*.sqd;*.con;*.raw;*.ave)', 'KIT'; ... 3848 {'.*'}, 'Array of times (*.mat;*.*)', 'ARRAY-TIMES'; ... 3849 {'.*'}, 'Array of samples (*.mat;*.*)', 'ARRAY-SAMPLES'; ... 3850 {'.txt','.csv'}, 'CSV text file: label, time, duration (*.txt;*.csv)', 'CSV-TIME'; ... 3851 {'.*'}, 'CTF Video Times (.txt)', 'CTFVIDEO'; ... 3852 }; 3853 case 'eventsout' 3854 argout1 = {... 3855 {'.mrk'}, 'AnyWave (*.mrk)', 'ANYWAVE'; ... 3856 {'_events'}, 'Brainstorm (events*.mat)', 'BST'; ... 3857 {'.vmrk'}, 'BrainVision BrainAmp (*.vmrk)', 'BRAINAMP'; ... 3858 {'.mrk'}, 'CTF MarkerFile (*.mrk)', 'CTF'; ... 3859 {'.eve','.fif'}, 'Elekta-Neuromag/MNE (*.eve)', 'FIF'; ... 3860 {'.evl'}, 'Elekta-Neuromag Graph (Alternative Style) (*.evl)', 'GRAPH_ALT'; ... 3861 {'.txt'}, 'Array of times (*.txt)', 'ARRAY-TIMES'; ... 3862 {'.txt'}, 'Array of samples (*.txt)', 'ARRAY-SAMPLES'; ... 3863 {'.txt','.csv'}, 'CSV text file: label, time, duration (*.txt;*.csv)', 'CSV-TIME'; ... 3864 {'.txt'}, 'CTF Video Times (*.txt)', 'CTFVIDEO'; ... 3865 {'.tsv'}, 'BIDS events: onset, duration, trial_type (*.tsv)', 'BIDS'; ... 3866 }; 3867 case 'channel' 3868 argout1 = {... 3869 {'.*'}, 'MEG/EEG: 4D-Neuroimaging/BTi (*.*)', '4D'; ... 3870 {'.meg4','.res4'}, 'MEG/EEG: CTF (*.ds;*.meg4;*.res4)', 'CTF' ; ... 3871 {'.fif'}, 'MEG/EEG: Elekta-Neuromag (*.fif)', 'FIF'; ... 3872 {'.kdf'}, 'MEG/EEG: KRISS MEG (*.kdf)', 'KDF'; ... 3873 {'.raw'}, 'MEG/EEG: ITAB (*.raw)', 'ITAB'; ... 3874 {'.mrk','.sqd','.con','.raw','.ave'}, 'MEG/EEG: Ricoh (*.sqd;*.con;*.raw;*.ave;*.mrk)', 'RICOH'; ... 3875 {'.mrk','.sqd','.con','.raw','.ave'}, 'MEG/EEG: Yokogawa/KIT (*.sqd;*.con;*.raw;*.ave;*.mrk)', 'KIT'; ... 3876 {'.meghdf5'}, 'MEG/EEG: York Instruments MEGSCAN (.meghdf5)', 'MEGSCAN-HDF5'; ... 3877 {'.bst'}, 'MEG/EEG: Brainstorm binary (*.bst)', 'BST-BIN'; ... 3878 {'_channel'}, 'MEG/EEG: Brainstorm (channel*.mat)', 'BST'; ... 3879 {'.elc'}, 'EEG: ANT ASA/Xensor (*.elc)', 'XENSOR'; ... 3880 {'.sfp','.elp','.ela','.eps'}, 'EEG: BESA (*.sfp;*.elp;*.eps/*.ela)', 'BESA'; ... 3881 {'.bvef','.bvct','.txt'}, 'EEG: BrainVision electrode file (*.bvef,*.bvct,*.txt)', 'BRAINVISION'; ... 3882 {'.tsv'}, 'EEG: BIDS electrodes.tsv, subject space mm (*.tsv)', 'BIDS-SCANRAS-MM'; ... 3883 {'.tsv'}, 'EEG: BIDS electrodes.tsv, MNI space mm (*.tsv)', 'BIDS-MNI-MM'; ... 3884 {'.tsv'}, 'EEG: BIDS electrodes.tsv, ACPC space mm (*.tsv)', 'BIDS-ACPC-MM'; ... 3885 {'.tsv'}, 'EEG: BIDS electrodes.tsv, ALS/SCS/CTF space mm (*.tsv)','BIDS-ALS-MM'; ... 3886 {'.tsv'}, 'EEG: BIDS electrodes.tsv, CapTrak space mm (*.tsv)', 'BIDS-CAPTRAK-MM'; ... 3887 {'.els','.xyz'}, 'EEG: Cartool (*.els;*.xyz)', 'CARTOOL'; ... 3888 {'.eeg'}, 'EEG: MegDraw (*.eeg)', 'MEGDRAW'; ... 3889 {'.res','.rs3','.pom'}, 'EEG: Curry, LPS (*.res;*.rs3;*.pom)', 'CURRY'; ... 3890 {'.ced','.xyz','.set'}, 'EEG: EEGLAB (*.ced;*.xyz;*.set)', 'EEGLAB'; ... 3891 {'.elc'}, 'EEG: EETrak (*.elc)', 'EETRAK'; ... 3892 {'.sfp'}, 'EEG: EGI (*.sfp)', 'EGI'; ... 3893 {'coordinates.xml'}, 'EEG: EGI-Philips (coordinates.xml)', 'MFF'; ... 3894 {'.elp'}, 'EEG: EMSE (*.elp)', 'EMSE'; ... 3895 {'.pts','.csv'}, 'EEG: IntrAnat, subject space (*.pts;*.csv)', 'INTRANAT'; ... 
3896 {'.pts','.csv'}, 'EEG: IntrAnat, MNI space (*.pts;*.csv)', 'INTRANAT_MNI'; ... 3897 {'.csv'}, 'EEG: Localite (*.csv)', 'LOCALITE'; ... 3898 {'.dat','.tri','.txt','.asc'}, 'EEG: Neuroscan (*.dat;*.tri;*.txt;*.asc)', 'NEUROSCAN'; ... 3899 {'.pos','.pol','.elp','.txt'}, 'EEG: Polhemus (*.pos;*.pol;*.elp;*.txt)', 'POLHEMUS'; ... 3900 {'.csv'}, 'EEG: SimNIBS (*.csv)', 'SIMNIBS'; ... 3901 {'.h5'}, 'EEG: The Virtual Brain (*_SensorsEEG.h5)', 'TVB'; ... 3902 {'*'}, 'EEG: ASCII: Name,XYZ (*.*)', 'ASCII_NXYZ'; ... 3903 {'*'}, 'EEG: ASCII: Name,XYZ_MNI (*.*)', 'ASCII_NXYZ_MNI'; ... 3904 {'*'}, 'EEG: ASCII: Name,XYZ_World (*.*)', 'ASCII_NXYZ_WORLD'; ... 3905 {'*'}, 'EEG: ASCII: Name,XY (*.*)', 'ASCII_NXY'; ... 3906 {'*'}, 'EEG: ASCII: XYZ (*.*)', 'ASCII_XYZ'; ... 3907 {'*'}, 'EEG: ASCII: XYZ_MNI (*.*)', 'ASCII_XYZ_MNI'; ... 3908 {'*'}, 'EEG: ASCII: XYZ_World (*.*)', 'ASCII_XYZ_WORLD'; ... 3909 {'*'}, 'EEG: ASCII: XY (*.*)', 'ASCII_XY'; ... 3910 {'*'}, 'EEG: ASCII: XYZ,Name (*.*)', 'ASCII_XYZN'; ... 3911 {'*'}, 'EEG: ASCII: XYZ_MNI,Name (*.*)', 'ASCII_XYZN_MNI'; ... 3912 {'*'}, 'EEG: ASCII: XYZ_World,Name (*.*)', 'ASCII_XYZN_WORLD'; ... 3913 {'*'}, 'EEG: ASCII: Name,Theta,Phi (*.*)', 'ASCII_NTP'; ... 3914 {'*'}, 'EEG: ASCII: Theta,Phi (*.*)', 'ASCII_TP'; ... 3915 }; 3916 case 'channelout' 3917 argout1 = {... 3918 {'.pos'}, 'EEG+Headshape: Polhemus (*.pos)', 'POLHEMUS'; ... 3919 {'.eeg'}, 'Headshape: MegDraw (*.eeg)', 'MEGDRAW'; ... 3920 {'.pos'}, 'Headshape: Polhemus (*.pos)', 'POLHEMUS-HS'; ... 3921 {'.txt'}, 'Headshape: ASCII: XYZ (*.txt)', 'ASCII_XYZ-HS'; ... 3922 {'.txt'}, 'Headshape: ASCII: XYZ_World (*.txt)', 'ASCII_XYZ_WORLD-HS'; ... 3923 {'.txt'}, 'Headshape: ASCII: Name,XYZ (*.txt)', 'ASCII_NXYZ-HS'; ... 3924 {'.txt'}, 'Headshape: ASCII: Name,XYZ_World (*.txt)', 'ASCII_NXYZ_WORLD-HS'; ... 3925 {'.txt'}, 'Headshape: ASCII: XYZ,Name (*.txt)', 'ASCII_XYZN-HS'; ... 3926 {'.txt'}, 'Headshape: ASCII: XYZ_World,Name (*.txt)', 'ASCII_XYZN_WORLD-HS'; ... 3927 {'.sfp'}, 'EEG: BESA (*.sfp)', 'BESA-SFP'; ... 3928 {'.elp'}, 'EEG: BESA (*.elp)', 'BESA-ELP'; ... 3929 {'.tsv'}, 'EEG: BIDS electrodes.tsv, subject space mm (*.tsv)', 'BIDS-SCANRAS-MM'; ... 3930 {'.tsv'}, 'EEG: BIDS electrodes.tsv, MNI space mm (*.tsv)', 'BIDS-MNI-MM'; ... 3931 {'.tsv'}, 'EEG: BIDS electrodes.tsv, ALS/SCS/CTF space mm (*.tsv)', 'BIDS-ALS-MM'; ... 3932 {'.xyz'}, 'EEG: Cartool (*.xyz)', 'CARTOOL-XYZ'; ... 3933 {'.res'}, 'EEG: Curry (*.res)', 'CURRY-RES'; ... 3934 {'.xyz'}, 'EEG: EEGLAB (*.xyz)', 'EEGLAB-XYZ'; ... 3935 {'.sfp'}, 'EEG: EGI (*.sfp)', 'EGI'; ... 3936 {'.txt'}, 'EEG/NIRS: ASCII: XYZ (*.txt)', 'ASCII_XYZ-EEG'; ... 3937 {'.txt'}, 'EEG/NIRS: ASCII: XYZ_MNI (*.txt)', 'ASCII_XYZ_MNI-EEG'; ... 3938 {'.txt'}, 'EEG/NIRS: ASCII: XYZ_World (*.txt)', 'ASCII_XYZ_WORLD-EEG'; ... 3939 {'.txt'}, 'EEG/NIRS: ASCII: Name,XYZ (*.txt)', 'ASCII_NXYZ-EEG'; ... 3940 {'.txt'}, 'EEG/NIRS: ASCII: Name,XYZ_MNI (*.txt)', 'ASCII_NXYZ_MNI-EEG'; ... 3941 {'.txt'}, 'EEG/NIRS: ASCII: Name,XYZ_World (*.txt)', 'ASCII_NXYZ_WORLD-EEG'; ... 3942 {'.txt'}, 'EEG/NIRS: ASCII: XYZ,Name (*.txt)', 'ASCII_XYZN-EEG'; ... 3943 {'.txt'}, 'EEG/NIRS: ASCII: XYZ_MNI,Name (*.txt)', 'ASCII_XYZN_MNI-EEG'; ... 3944 {'.txt'}, 'EEG/NIRS: ASCII: XYZ_World,Name (*.txt)', 'ASCII_XYZN_WORLD-EEG'; ... 3945 {'.txt'}, 'EEG/NIRS: Brainsight (*.txt)', 'BRAINSIGHT-TXT'; ... 3946 {'.tsv'}, 'NIRS: BIDS optrodes.tsv, subject space mm (*.tsv)', 'BIDS-NIRS-SCANRAS-MM'; ... 3947 {'.tsv'}, 'NIRS: BIDS optrodes.tsv, MNI space mm (*.tsv)', 'BIDS-NIRS-MNI-MM'; ... 
3948 {'.tsv'}, 'NIRS: BIDS optrodes.tsv, ALS/SCS/CTF space mm (*.tsv)', 'BIDS-NIRS-ALS-MM'; ... 3949 }; 3950 case 'labelin' 3951 argout1 = {... 3952 {'.dfs'}, 'BrainSuite atlas (*.dfs)', 'DFS'; ... 3953 {'.annot'}, 'FreeSurfer atlas (*.annot)', 'FS-ANNOT'; ... 3954 {'.label'}, 'FreeSurfer ROI, single scout (*.label)', 'FS-LABEL-SINGLE'; ... 3955 {'.label'}, 'FreeSurfer ROI, probability map (*.label)', 'FS-LABEL'; ... 3956 {'.gii'}, 'GIfTI texture (*.gii)', 'GII-TEX'; ... 3957 {'.dset'}, 'SUMA atlas (*.dset)', 'DSET'; ... 3958 {'_scout'}, 'Brainstorm scouts (*scout*.mat)', 'BST'; ... 3959 {'.mri', '.fif', '.img', '.ima', '.nii', '.mgh', '.mgz', '.mnc', '.mni', '.gz', '_subjectimage'}, 'Volume mask or atlas (dilated, subject space)', 'MRI-MASK'; ... 3960 {'.mri', '.fif', '.img', '.ima', '.nii', '.mgh', '.mgz', '.mnc', '.mni', '.gz'}, 'Volume mask or atlas (dilated, MNI space)', 'MRI-MASK-MNI'; ... 3961 {'.mri', '.fif', '.img', '.ima', '.nii', '.mgh', '.mgz', '.mnc', '.mni', '.gz', '_subjectimage'}, 'Volume mask or atlas (no overlap, subject space)', 'MRI-MASK-NOOVERLAP'; ... 3962 {'.mri', '.fif', '.img', '.ima', '.nii', '.mgh', '.mgz', '.mnc', '.mni', '.gz'}, 'Volume mask or atlas (no overlap, MNI space)', 'MRI-MASK-NOOVERLAP-MNI'; ... 3963 }; 3964 case 'resultsout' 3965 argout1 = {... 3966 {'_sources'}, 'Brainstorm sources (*sources*.mat)', 'BST'; ... 3967 {'.mat'}, 'FieldTrip sources (*.mat)', 'FT-SOURCES'; ... 3968 {'.txt'}, 'ASCII: Space-separated, fixed columns size (*.txt)', 'ASCII-SPC'; ... 3969 {'.txt'}, 'ASCII: Space-separated with header, fixed column size (*.txt)', 'ASCII-SPC-HDR'; ... 3970 {'.tsv'}, 'ASCII: Tab-separated (*.tsv)', 'ASCII-TSV'; ... 3971 {'.tsv'}, 'ASCII: Tab-separated with header (*.tsv)', 'ASCII-TSV-HDR'; ... 3972 {'.tsv'}, 'ASCII: Tab-separated with header transposed (*.tsv)', 'ASCII-TSV-HDR-TR'; ... 3973 {'.csv'}, 'ASCII: Comma-separated (*.csv)', 'ASCII-CSV'; ... 3974 {'.csv'}, 'ASCII: Comma-separated with header (*.csv)', 'ASCII-CSV-HDR'; ... 3975 {'.csv'}, 'ASCII: Comma-separated with header transposed (*.csv)', 'ASCII-CSV-HDR-TR'; ... 3976 {'.xlsx'}, 'Microsoft Excel (*.xlsx)', 'EXCEL'; ... 3977 {'.xlsx'}, 'Microsoft Excel transposed (*.xlsx)', 'EXCEL-TR'; ... 3978 }; 3979 case 'timefreqout' 3980 argout1 = {... 3981 {'_timefreq'}, 'Brainstorm structure (*timefreq*.mat)', 'BST'; ... 3982 {'.mat'}, 'FieldTrip freq (*.mat)', 'FT-FREQ'; ... 3983 {'.txt'}, 'ASCII: Space-separated, fixed columns size (*.txt)', 'ASCII-SPC'; ... 3984 {'.txt'}, 'ASCII: Space-separated with header, fixed column size (*.txt)', 'ASCII-SPC-HDR'; ... 3985 {'.tsv'}, 'ASCII: Tab-separated (*.tsv)', 'ASCII-TSV'; ... 3986 {'.tsv'}, 'ASCII: Tab-separated with header (*.tsv)', 'ASCII-TSV-HDR'; ... 3987 {'.tsv'}, 'ASCII: Tab-separated with header transposed (*.tsv)', 'ASCII-TSV-HDR-TR'; ... 3988 {'.csv'}, 'ASCII: Comma-separated (*.csv)', 'ASCII-CSV'; ... 3989 {'.csv'}, 'ASCII: Comma-separated with header (*.csv)', 'ASCII-CSV-HDR'; ... 3990 {'.csv'}, 'ASCII: Comma-separated with header transposed (*.csv)', 'ASCII-CSV-HDR-TR'; ... 3991 {'.xlsx'}, 'Microsoft Excel (*.xlsx)', 'EXCEL'; ... 3992 {'.xlsx'}, 'Microsoft Excel transposed (*.xlsx)', 'EXCEL-TR'; ... 3993 }; 3994 case 'matrixout' 3995 argout1 = {... 3996 {'_matrix'}, 'Brainstorm structure (*matrix*.mat)', 'BST'; ... 3997 {'.bst'}, 'MEG/EEG: Brainstorm binary (*.bst)', 'BST-BIN'; ... 3998 {'.mat'}, 'FieldTrip timelock (*.mat)', 'FT-TIMELOCK'; ... 3999 {'.edf'}, 'EEG: EDF+ (*.edf)', 'EEG-EDF'; ... 
4000 {'.txt'}, 'ASCII: Space-separated, fixed columns size (*.txt)', 'ASCII-SPC'; ... 4001 {'.txt'}, 'ASCII: Space-separated with header, fixed column size (*.txt)', 'ASCII-SPC-HDR'; ... 4002 {'.tsv'}, 'ASCII: Tab-separated (*.tsv)', 'ASCII-TSV'; ... 4003 {'.tsv'}, 'ASCII: Tab-separated with header (*.tsv)', 'ASCII-TSV-HDR'; ... 4004 {'.tsv'}, 'ASCII: Tab-separated with header transposed (*.tsv)', 'ASCII-TSV-HDR-TR'; ... 4005 {'.csv'}, 'ASCII: Comma-separated (*.csv)', 'ASCII-CSV'; ... 4006 {'.csv'}, 'ASCII: Comma-separated with header (*.csv)', 'ASCII-CSV-HDR'; ... 4007 {'.csv'}, 'ASCII: Comma-separated with header transposed (*.csv)', 'ASCII-CSV-HDR-TR'; ... 4008 {'.xlsx'}, 'Microsoft Excel (*.xlsx)', 'EXCEL'; ... 4009 {'.xlsx'}, 'Microsoft Excel transposed (*.xlsx)', 'EXCEL-TR'; ... 4010 }; 4011 case 'montagein' 4012 argout1 = {... 4013 {'.sel'}, 'MNE selection files (*.sel)', 'MNE'; ... 4014 {'.mon'}, 'Text montage files (*.mon)', 'MON'; ... 4015 {'_montage'}, 'Brainstorm montage files (montage_*.mat)', 'BST'; 4016 {'.csv'}, 'Comma-separated montage files (*.csv)', 'CSV'}; 4017 case 'montageout' 4018 argout1 = {... 4019 {'.sel'}, 'MNE selection files (*.sel)', 'MNE'; ... 4020 {'.mon'}, 'Text montage files (*.mon)', 'MON'; ... 4021 {'_montage'}, 'Brainstorm montage files (montage_*.mat)', 'BST'}; 4022 case 'clusterin' 4023 argout1 = {... 4024 {'_cluster'}, 'Brainstorm clusters file (*cluster*.mat)', 'BST'; ... 4025 {'.sel'}, 'MNE selection files (*.sel)', 'MNE'}; 4026 case 'fibers' 4027 argout1 = {... 4028 {'.trk'}, 'TrackVis (*.trk)', 'TRK'; ... 4029 {'_fibers'}, 'Brainstorm fibers files (fibers_*.mat)', 'BST'}; 4030 end 4031 4032 4033 %% ===== FONTS ===== 4034 case 'FigFont' 4035 if ispc 4036 argout1 = 8; 4037 else 4038 argout1 = 9; 4039 end 4040 InterfaceScaling = bst_get('InterfaceScaling'); 4041 if (InterfaceScaling ~= 100) 4042 argout1 = argout1 * InterfaceScaling / 100; 4043 end 4044 4045 case 'Font' 4046 % Default font size 4047 if (nargin < 2) 4048 if strncmp(computer,'MAC',3) 4049 fontSize = 12; 4050 else 4051 fontSize = 11; 4052 end 4053 % Font size in input 4054 else 4055 fontSize = varargin{2}; 4056 end 4057 % Adjust for interface scaling 4058 fontSize = fontSize * bst_get('InterfaceScaling') / 100; 4059 4060 % Font types 4061 fontTypes = {}; 4062 if (nargin >= 3) 4063 if ischar(varargin{3}) 4064 fontTypes = varargin(3); 4065 else 4066 fontTypes = varargin{3}; 4067 end 4068 else 4069 fontTypes{end + 1} = 'Arial'; % Default font 4070 fontTypes{end + 1} = 'Liberation Sans'; % Free Arial substitute 4071 end 4072 % Check for cached font 4073 foundFont = 0; 4074 for iFont = 1 : length(fontTypes) 4075 strCache = strrep(sprintf('%s%d', fontTypes{iFont}, round(fontSize*100)), ' ', '_'); 4076 if ~isempty(GlobalData) && isfield(GlobalData, 'Program') && isfield(GlobalData.Program, 'FontCache') && isfield(GlobalData.Program.FontCache, strCache) 4077 argout1 = GlobalData.Program.FontCache.(strCache); 4078 foundFont = 1; 4079 break; 4080 end 4081 end 4082 4083 % If font not cached, find first supported font 4084 if ~foundFont 4085 ge = java.awt.GraphicsEnvironment.getLocalGraphicsEnvironment(); 4086 allFonts = cell(ge.getAvailableFontFamilyNames()); 4087 4088 for iFont = 1 : length(fontTypes) 4089 if any(strcmp(fontTypes{iFont}, allFonts)) 4090 fontType = fontTypes{iFont}; 4091 foundFont = 1; 4092 break; 4093 end 4094 end 4095 4096 if ~foundFont 4097 fontType = 'SansSerif'; % If nothing else works. 
4098 end 4099 4100 strCache = strrep(sprintf('%s%d', fontType, round(fontSize*100)), ' ', '_'); 4101 argout1 = java.awt.Font(fontType, java.awt.Font.PLAIN, fontSize); 4102 GlobalData.Program.FontCache.(strCache) = argout1; 4103 end 4104 4105 %% ==== PANEL CONTAINERS ==== 4106 case 'PanelContainer' 4107 % Get Brainstorm GUI context structure 4108 bst_GUI = GlobalData.Program.GUI; 4109 if (isempty(bst_GUI) || ~isfield(bst_GUI, 'panelContainers')) 4110 error('Brainstorm GUI is not yet initialized'); 4111 end 4112 4113 % Get ContainerName in argument 4114 if ((nargin >= 2) && (ischar(varargin{2}))) 4115 ContainerName = varargin{2}; 4116 % If no container name in argument : just display all the container names 4117 else 4118 disp('Registered panel containers :'); 4119 for iContainer = 1:length(bst_GUI.panelContainers) 4120 disp([' - ' bst_GUI.panelContainers(iContainer).name]); 4121 end 4122 return 4123 end 4124 4125 % Look for containerName in all the registered panel containers 4126 iContainer = 1; 4127 found = 0; 4128 while (~found (iContainer <= length(bst_GUI.panelContainers))) 4129 if (strcmpi(ContainerName, bst_GUI.panelContainers(iContainer).name)) 4130 found = 1; 4131 else 4132 iContainer = iContainer + 1; 4133 end 4134 end 4135 % If container is found : return it 4136 if (found) 4137 argout1 = bst_GUI.panelContainers(iContainer).jHandle; 4138 else 4139 % warning('Brainstorm:InvalidContainer', 'Container ''%s'' could not be found.', ContainerName); 4140 end 4141 4142 4143 %% ==== PANELS ==== 4144 case 'Panel' 4145 % Get Brainstorm GUI context structure 4146 if (isempty(GlobalData) || isempty(GlobalData.Program.GUI) || ~isfield(GlobalData.Program.GUI, 'panels')) 4147 return 4148 end 4149 listPanels = GlobalData.Program.GUI.panels; 4150 % Get Panel in argument 4151 if ((nargin >= 2) && (ischar(varargin{2}))) 4152 PanelName = varargin{2}; 4153 % If no panel name in argument : just display all the panels names 4154 else 4155 disp('Registered panels :'); 4156 for iContainer = 1:length(listPanels) 4157 disp([' - ' get(listPanels(iContainer), 'name')]); 4158 end 4159 return 4160 end 4161 % Look for panelName in all the registered panels 4162 iPanel = find(strcmpi(PanelName, get(listPanels, 'name')), 1); 4163 if ~isempty(iPanel) 4164 argout1 = listPanels(iPanel); 4165 argout2 = iPanel; 4166 end 4167 4168 4169 %% ==== PANEL CONTROLS ==== 4170 % Calls : bst_get('PanelControls', PanelName) 4171 case 'PanelControls' 4172 % Get Panel name in argument 4173 if ((nargin >= 2) && (ischar(varargin{2}))) 4174 PanelName = varargin{2}; 4175 else 4176 error('Invalid call to bst_get()'); 4177 end 4178 % Find BstPanel with this name 4179 bstPanel = bst_get('Panel', PanelName); 4180 % If panel was found : return its controls 4181 if ~isempty(bstPanel) 4182 argout1 = get(bstPanel, 'sControls'); 4183 end 4184 4185 %% ===== NODES COPY ===== 4186 % Calls : bst_get('Clipboard') 4187 case 'Clipboard' 4188 argout1 = GlobalData.Program.Clipboard.Nodes; 4189 argout2 = GlobalData.Program.Clipboard.isCut; 4190 4191 %% ==== DIRECTORIES ==== 4192 case 'DirDefaultSubject' 4193 argout1 = '@default_subject'; 4194 case 'DirDefaultStudy' 4195 argout1 = '@default_study'; 4196 case 'DirAnalysisIntra' 4197 argout1 = '@intra'; 4198 case 'DirAnalysisInter' 4199 argout1 = '@inter'; 4200 case 'NormalizedSubjectName' 4201 argout1 = 'Group_analysis'; 4202 4203 %% ==== OTHER ==== 4204 case 'ResizeFunction' 4205 if (bst_get('MatlabVersion') <= 803) 4206 argout1 = 'ResizeFcn'; 4207 else 4208 argout1 = 'SizeChangedFcn'; 4209 end 4210 case 
'groot' 4211 if (bst_get('MatlabVersion') <= 803) 4212 argout1 = 0; 4213 else 4214 argout1 = groot; 4215 end 4216 case 'JFrame' 4217 hFig = varargin{2}; 4218 MatlabVersion = bst_get('MatlabVersion'); 4219 jFrame = []; 4220 try 4221 if (MatlabVersion <= 705) 4222 jf = get(hFig, 'javaframe'); 4223 jFrame = jf.fFigureClient.getWindow(); 4224 elseif (MatlabVersion <= 712) 4225 jf = get(handle(hFig), 'javaframe'); 4226 jFrame = jf.fFigureClient.getWindow(); 4227 elseif (MatlabVersion <= 803) 4228 jf = get(handle(hFig), 'javaframe'); 4229 jFrame = jf.fHG1Client.getWindow(); 4230 elseif (MatlabVersion < 907) % Matlab >= 2019b deprecated the JavaFrame property 4231 warning('off', 'MATLAB:HandleGraphics:ObsoletedProperty:JavaFrame'); 4232 jf = get(hFig, 'javaframe'); 4233 warning('on', 'MATLAB:HandleGraphics:ObsoletedProperty:JavaFrame'); 4234 jFrame = jf.fHG2Client.getWindow(); 4235 else 4236 disp('BST> Error: Matlab 2019b deprecated the JavaFrame property.'); 4237 end 4238 catch 4239 disp('BST> Warning: Cannot get the JavaFrame property for the selected figure.'); 4240 end 4241 argout1 = jFrame; 4242 4243 %% ==== ERROR ==== 4244 otherwise 4245 error(sprintf('Invalid context : "%s"', contextName)); 4246 end 4247 end 4248 4249 4250 4251 4252 %% ==== HELPERS ==== 4253 % Return all the protocol studies that have a given file in its structures 4254 % Possible field names: Result.DataFile, Result.FileName, Data.FileName, Channel.FileName 4255 % 4256 % USAGE: [sFoundStudy, iFoundStudy, iItem] = findFileInStudies(fieldGroup, fieldName, fieldFile, iStudiesList) 4257 % [sFoundStudy, iFoundStudy, iItem] = findFileInStudies(fieldGroup, fieldName, fieldFile) 4258 function [sFoundStudy, iFoundStudy, iItem] = findFileInStudies(fieldGroup, fieldName, fieldFile, iStudiesList) 4259 global GlobalData; 4260 sFoundStudy = []; 4261 iFoundStudy = []; 4262 iItem = []; 4263 % If no file provided, return 4264 if isempty(fieldFile) 4265 return; 4266 end 4267 % Extract folder(s) of the file(s) we're looking for 4268 fieldParts = strfind(fieldFile, '|'); 4269 if ~isempty(fieldParts) 4270 fieldParts(end+1) = length(fieldFile); 4271 fieldFolders = {}; 4272 iLast = 1; 4273 for iPart = 1:length(fieldParts) 4274 folder = fileparts(fieldFile(iLast:fieldParts(iPart)-1)); 4275 if ~isempty(folder) 4276 fieldFolders{end + 1} = folder; 4277 end 4278 iLast = fieldParts(iPart) + 1; 4279 end 4280 else 4281 fieldFolders = {fileparts(fieldFile)}; 4282 end 4283 % Get protocol information 4284 ProtocolStudies = GlobalData.DataBase.ProtocolStudies(GlobalData.DataBase.iProtocol); 4285 % List studies to process 4286 if (nargin < 4) || isempty(iStudiesList) 4287 iStudiesList = [-2, -3, 1:length(ProtocolStudies.Study)]; 4288 end 4289 4290 % NORMAL STUDIES: Look for surface file in all the surfaces of all subjects 4291 for iStudy = iStudiesList 4292 % Get study 4293 switch (iStudy) 4294 case -2, sStudy = ProtocolStudies.AnalysisStudy; 4295 case -3, sStudy = ProtocolStudies.DefaultStudy; 4296 otherwise, sStudy = ProtocolStudies.Study(iStudy); 4297 end 4298 % Check if field is available for the study 4299 if isempty(sStudy.(fieldGroup)) 4300 continue; 4301 end 4302 % Check we are in the correct folder 4303 if ~any(file_compare(fieldFolders, fileparts(sStudy.FileName))) 4304 continue; 4305 end 4306 % Get list of files from study 4307 filesList = {sStudy.(fieldGroup).(fieldName)}; 4308 if isempty(filesList) 4309 continue; 4310 end 4311 % Replace empty cells with empty strings 4312 iValidFiles = find(cellfun(@ischar, filesList)); 4313 if 
isempty(iValidFiles) 4314 continue; 4315 end 4316 % Find target in this list 4317 iItem = find(file_compare(filesList(iValidFiles), fieldFile)); 4318 if ~isempty(iItem) 4319 sFoundStudy = sStudy; 4320 iFoundStudy = iStudy; 4321 iItem = iValidFiles(iItem); 4322 return 4323 end 4324 end 4325 end 4326 4327 4328 %% ===== FILL MISSING FIELDS ===== 4329 function bstPref = FillMissingFields(PrefName, defPref) 4330 global GlobalData; 4331 if isfield(GlobalData, 'Preferences') && isfield(GlobalData.Preferences, PrefName) && isstruct(GlobalData.Preferences.(PrefName)) 4332 bstPref = GlobalData.Preferences.(PrefName); 4333 bstPref = struct_copy_fields(bstPref, defPref, 0); 4334 else 4335 bstPref = defPref; 4336 end 4337 end 4338 4339 4340 4341 4342 4343 4344


To change parameters or database structures, use bst_set.m:

1 function bst_set( varargin ) 2 % BST_SET: Set a Brainstorm structure. 3 % 4 % DESCRIPTION: This function is used to abstract the way that these structures are stored. 5 % 6 % USAGE: 7 % ====== DIRECTORIES ================================================================== 8 % - bst_set('BrainstormHomeDir', BrainstormHomeDir) 9 % - bst_set('BrainstormTmpDir', BrainstormTmpDir) 10 % - bst_set('BrainstormDbDir', BrainstormDbDir) 11 % - bst_set('LastUsedDirs', sDirectories) 12 % - bst_set('BrainSuiteDir', BrainSuiteDir) 13 % - bst_set('PythonExe', PythonExe) 14 % - bst_set('PluginCustomPath', PluginCustomPath) 15 % 16 % ====== PROTOCOLS ==================================================================== 17 % - bst_set('iProtocol', iProtocol) 18 % - bst_set('ProtocolInfo', sProtocolInfo) 19 % - bst_set('ProtocolSubjects', ProtocolSubjects) 20 % - bst_set('isProtocolLoaded', isProtocolLoaded) 21 % - bst_set('isProtocolModified',isProtocolModified) 22 % - bst_set('ProtocolStudies', ProtocolStudies) 23 % - bst_set('Study', iStudy, sStudy) : Set a study in current protocol 24 % - bst_set('Subject', iSubject, sSubject) : Set a subject in current protocol 25 % 26 % ====== GUI ================================================================= 27 % - bst_set('Layout', sLayout) 28 % - bst_set('Layout', PropName, PropValue) 29 % - bst_set('Clipboard', Nodes, isCut) : Copy operation from the tree 30 % 31 % ====== CONFIGURATION ================================================================= 32 % - bst_set('Version', Version) 33 % - bst_set('ByteOrder', value) : 'b' for big endian, 'l' for little endian 34 % - bst_set('AutoUpdates', isAutoUpdates) 35 % - bst_set('ExpertMode', isExpertMode) 36 % - bst_set('DisplayGFP', isDisplayGFP) 37 % - bst_set('DownsampleTimeSeries', isDownsampleTimeSeries) 38 % - bst_set('GraphicsSmoothing', isGraphicsSmoothing) 39 % - bst_set('ForceMatCompression', isForceCompression) 40 % - bst_set('IgnoreMemoryWarnings', isIgnoreMemoryWarnings) 41 % - bst_set('SystemCopy', isSystemCopy) 42 % - bst_set('DisableOpenGL', isDisableOpenGL) 43 % - bst_set('InterfaceScaling', InterfaceScaling) 44 % - bst_set('TSDisplayMode', TSDisplayMode) : {'butterfly','column'} 45 % - bst_set('ElectrodeConfig', ElectrodeConfig, Modality) 46 % - bst_set('ElecInterpDist', ElecInterpDist, Modality) 47 % - bst_set('DefaultFormats' defaultFormats) 48 % - bst_set('BFSProperties', [scalpCond,skullCond,brainCond,scalpThick,skullThick]) 49 % - bst_set('ImportEegRawOptions', ImportEegRawOptions) 50 % - bst_set('BugReportOptions', BugReportOptions) 51 % - bst_set('DefaultSurfaceDisplay', displayStruct) 52 % - bst_set('MagneticExtrapOptions', extrapStruct) 53 % - bst_set('TimefreqOptions_morlet', Options) 54 % - bst_set('TimefreqOptions_fft', Options) 55 % - bst_set('TimefreqOptions_psd', Options) 56 % - bst_set('TimefreqOptions_hilbert', Options) 57 % - bst_set('TimefreqOptions_plv', Options) 58 % - bst_set('OpenMEEGOptions', Options) 59 % - bst_set('DuneuroOptions', Options) 60 % - bst_set('GridOptions_headmodel', Options) 61 % - bst_set('GridOptions_dipfit', Options) 62 % - bst_set('UniformizeTimeSeriesScales', isUniform) 63 % - bst_set('FlipYAxis', isFlipY) 64 % - bst_set('AutoScaleY', isAutoScaleY) 65 % - bst_set('FixedScaleY', Modality, Value) 66 % - bst_set('XScale', XScale) 67 % - bst_set('YScale', YScale) 68 % - bst_set('ShowXGrid', isShowXGrid) 69 % - bst_set('ShowYGrid', isShowYGrid) 70 % - bst_set('ShowZeroLines', isShowZeroLines) 71 % - bst_set('ShowEventsMode', ShowEventsMode) 72 % - 
bst_set('Resolution', [resX,resY]) 73 % - bst_set('UseSigProcToolbox', UseSigProcToolbox) 74 % - bst_set('RawViewerOptions', RawViewerOptions) 75 % - bst_set('TopoLayoutOptions', TopoLayoutOptions) 76 % - bst_set('StatThreshOptions', StatThreshOptions) 77 % - bst_set('ContactSheetOptions', ContactSheetOptions) 78 % - bst_set('ProcessOptions', ProcessOptions) 79 % - bst_set('MriOptions', MriOptions) 80 % - bst_set('CustomColormaps', CustomColormaps) 81 % - bst_set('DigitizeOptions', DigitizeOptions) 82 % - bst_set('PcaOptions', PcaOptions) 83 % - bst_set('ReadOnly', ReadOnly) 84 % - bst_set('LastPsdDisplayFunction', LastPsdDisplayFunction) 85 % - bst_set('PlotlyCredentials', Username, ApiKey, Domain) 86 % - bst_set('KlustersExecutable', ExecutablePath) 87 % - bst_set('ExportBidsOptions'), ExportBidsOptions) 88 % - bst_set('Pipelines') Saved Pipelines stored 89 % 90 % SEE ALSO bst_get 91 92 % @============================================================================= 93 % This function is part of the Brainstorm software: 94 % https://neuroimage.usc.edu/brainstorm 95 % 96 % Copyright (c) University of Southern California & McGill University 97 % This software is distributed under the terms of the GNU General Public License 98 % as published by the Free Software Foundation. Further details on the GPLv3 99 % license can be found at http://www.gnu.org/copyleft/gpl.html. 100 % 101 % FOR RESEARCH PURPOSES ONLY. THE SOFTWARE IS PROVIDED "AS IS," AND THE 102 % UNIVERSITY OF SOUTHERN CALIFORNIA AND ITS COLLABORATORS DO NOT MAKE ANY 103 % WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF 104 % MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, NOR DO THEY ASSUME ANY 105 % LIABILITY OR RESPONSIBILITY FOR THE USE OF THIS SOFTWARE. 106 % 107 % For more information type "brainstorm license" at command prompt. 
108 % =============================================================================@ 109 % 110 % Authors: Francois Tadel, 2008-2021 111 % Martin Cousineau, 2017 112 113 global GlobalData; 114 115 %% ==== PARSE INPUTS ==== 116 if ((nargin >= 1) && ischar(varargin{1})) 117 contextName = varargin{1}; 118 if (nargin >= 2) 119 contextValue = varargin{2}; 120 else 121 contextValue = []; 122 end 123 else 124 error('Usage : bst_set(contextName, contextValue)'); 125 end 126 127 % Get required context structure 128 switch contextName 129 %% ==== BRAINSTORM CONFIGURATION ==== 130 case 'Version' 131 GlobalData.Program.Version = contextValue; 132 case 'BrainstormHomeDir' 133 GlobalData.Program.BrainstormHomeDir = contextValue; 134 case 'BrainstormDbDir' 135 GlobalData.DataBase.BrainstormDbDir = contextValue; 136 case 'BrainstormTmpDir' 137 GlobalData.Preferences.BrainstormTmpDir = contextValue; 138 case 'Pipelines' 139 GlobalData.Processes.Pipelines = contextValue; 140 141 %% ==== PROTOCOL ==== 142 case 'iProtocol' 143 if isnumeric(contextValue) 144 GlobalData.DataBase.iProtocol = contextValue; 145 else 146 error('iProtocol should be a number.'); 147 end 148 case {'ProtocolSubjects', 'ProtocolStudies'} 149 for structField = fieldnames(contextValue)' 150 GlobalData.DataBase.(contextName)(GlobalData.DataBase.iProtocol).(structField{1}) = contextValue.(structField{1}); 151 end 152 GlobalData.DataBase.isProtocolModified(GlobalData.DataBase.iProtocol) = 1; 153 case 'ProtocolInfo' 154 for structField = fieldnames(contextValue)' 155 GlobalData.DataBase.(contextName)(GlobalData.DataBase.iProtocol).(structField{1}) = contextValue.(structField{1}); 156 end 157 case 'isProtocolLoaded' 158 GlobalData.DataBase.isProtocolLoaded(GlobalData.DataBase.iProtocol) = contextValue; 159 case 'isProtocolModified' 160 GlobalData.DataBase.isProtocolModified(GlobalData.DataBase.iProtocol) = contextValue; 161 162 %% ==== SUBJECT ==== 163 case 'Subject' 164 % Get subjects list 165 ProtocolSubjects = bst_get('ProtocolSubjects'); 166 iSubject = varargin{2}; 167 sSubject = varargin{3}; 168 % If default subject 169 if (iSubject == 0) 170 ProtocolSubjects.DefaultSubject = sSubject; 171 else 172 ProtocolSubjects.Subject(iSubject) = sSubject; 173 end 174 % Update DataBase 175 bst_set('ProtocolSubjects', ProtocolSubjects); 176 177 178 %% ==== STUDY ==== 179 case 'Study' 180 % Get studies list 181 ProtocolStudies = bst_get('ProtocolStudies'); 182 iStudies = varargin{2}; 183 sStudies = varargin{3}; 184 iAnalysisStudy = -2; 185 iDefaultStudy = -3; 186 for i = 1:length(iStudies) 187 % Normal study 188 if (iStudies(i) > 0) 189 ProtocolStudies.Study(iStudies(i)) = sStudies(i); 190 % Inter-subject analysis study 191 elseif (iStudies(i) == iAnalysisStudy) 192 ProtocolStudies.AnalysisStudy = sStudies(i); 193 % Default study 194 elseif (iStudies(i) == iDefaultStudy) 195 ProtocolStudies.DefaultStudy = sStudies(i); 196 end 197 end 198 % Update DataBase 199 bst_set('ProtocolStudies', ProtocolStudies); 200 201 202 %% ==== GUI ==== 203 % USAGE: bst_set('Layout', sLayout) 204 % bst_set('Layout', PropName, PropValue) 205 case 'Layout' 206 if (nargin == 2) && isstruct(contextValue) 207 GlobalData.Preferences.Layout = contextValue; 208 isUpdateScreens = 0; 209 elseif (nargin == 3) && ischar(contextValue) && isfield(GlobalData.Preferences, 'Layout') && isfield(GlobalData.Preferences.Layout, contextValue) 210 GlobalData.Preferences.Layout.(contextValue) = varargin{3}; 211 isUpdateScreens = strcmpi(contextValue, 'DoubleScreen'); 212 else 213 error('Invalid 
call to bst_set.'); 214 end 215 % Update screen configuration 216 GlobalData.Program.ScreenDef = gui_layout('GetScreenClientArea'); 217 % Update layout right now 218 gui_layout('Update'); 219 % If the number of screen was changed: update the maximum size of the Brainstorm window 220 if isUpdateScreens 221 gui_layout('UpdateMaxBstSize'); 222 end 223 224 % USAGE: bst_set('FixedScaleY', []) 225 % bst_set('FixedScaleY', Modality, Value) 226 case 'FixedScaleY' 227 if (nargin == 3) && ~isempty(contextValue) && ~isempty(varargin{3}) 228 GlobalData.Preferences.FixedScaleY.(contextValue) = varargin{3}; 229 elseif (nargin == 2) && isempty(contextValue) 230 GlobalData.Preferences.FixedScaleY = struct(); 231 end 232 233 case 'ByteOrder' 234 switch(contextValue) 235 case {'b','ieee-le','n'} 236 GlobalData.Preferences.ByteOrder = 'b'; 237 case {'l','ieee-be'} 238 GlobalData.Preferences.ByteOrder = 'l'; 239 otherwise 240 error('Invalid byte order.'); 241 end 242 243 case 'Clipboard' 244 if (length(varargin) >= 3) 245 isCut = varargin{3}; 246 else 247 isCut = 0; 248 end 249 GlobalData.Program.Clipboard.Nodes = contextValue; 250 GlobalData.Program.Clipboard.isCut = isCut; 251 252 case 'ElectrodeConfig' 253 Modality = varargin{2}; 254 ElectrodeConf = varargin{3}; 255 if isequal(Modality, 'ECOG+SEEG') 256 Modality = 'ECOG_SEEG'; 257 elseif ~ismember(Modality, {'EEG','SEEG','ECOG','MEG'}) 258 error(['Invalid modality: ' Modality]); 259 end 260 GlobalData.Preferences.(contextName).(Modality) = ElectrodeConf; 261 262 case 'ElecInterpDist' 263 Modality = varargin{2}; 264 ElecInterpDist = varargin{3}; 265 if isequal(Modality, 'ECOG+SEEG') 266 Modality = 'ECOG_SEEG'; 267 elseif ~ismember(Modality, {'EEG','SEEG','ECOG','MEG'}) 268 error(['Invalid modality: ' Modality]); 269 end 270 GlobalData.Preferences.(contextName).(Modality) = ElecInterpDist; 271 272 case {'UniformizeTimeSeriesScales', 'XScale', 'YScale', 'FlipYAxis', 'AutoScaleY', 'ShowXGrid', 'ShowYGrid', 'ShowZeroLines', 'ShowEventsMode', ... 273 'Resolution', 'AutoUpdates', 'ExpertMode', 'DisplayGFP', 'ForceMatCompression', 'GraphicsSmoothing', 'DownsampleTimeSeries', ... 274 'DisableOpenGL', 'InterfaceScaling', 'TSDisplayMode', 'UseSigProcToolbox', 'LastUsedDirs', 'DefaultFormats', ... 275 'BFSProperties', 'ImportDataOptions', 'ImportEegRawOptions', 'RawViewerOptions', 'MontageOptions', 'TopoLayoutOptions', ... 276 'StatThreshOptions', 'ContactSheetOptions', 'ProcessOptions', 'BugReportOptions', 'DefaultSurfaceDisplay', ... 277 'MagneticExtrapOptions', 'MriOptions', 'ConnectGraphOptions', 'NodelistOptions', 'IgnoreMemoryWarnings', 'SystemCopy', ... 278 'TimefreqOptions_morlet', 'TimefreqOptions_hilbert', 'TimefreqOptions_fft', 'TimefreqOptions_psd', 'TimefreqOptions_stft', 'TimefreqOptions_plv', ... 279 'OpenMEEGOptions', 'DuneuroOptions', 'DigitizeOptions', 'PcaOptions', 'CustomColormaps', 'PluginCustomPath', 'BrainSuiteDir', 'PythonExe', ... 280 'GridOptions_headmodel', 'GridOptions_dipfit', 'LastPsdDisplayFunction', 'KlustersExecutable', 'ExportBidsOptions', 'ShowHiddenFiles'} 281 GlobalData.Preferences.(contextName) = contextValue; 282 283 case 'ReadOnly' 284 GlobalData.DataBase.isReadOnly = contextValue; 285 286 case 'PlotlyCredentials' 287 if length(varargin) ~= 4 288 error('Invalid call to bst_set.'); 289 end 290 [username, apiKey, domain] = varargin{2:4}; 291 % Default domain: plot.ly 292 if isempty(domain) 293 domain = 'https://plot.ly'; 294 end 295 % Plotly needs a URL with HTTP and no trailing slash. 
296 if strfind(domain, 'https://') 297 domain = strrep(domain, 'https://', 'http://'); 298 elseif isempty(strfind(domain, 'http://')) 299 domain = ['http://', domain]; 300 end 301 if domain(end) == '/' 302 domain = domain(1:end-1); 303 end 304 % Save credentials 305 saveplotlycredentials(username, apiKey); 306 saveplotlyconfig(domain); 307 308 %% ==== ERROR ==== 309 otherwise 310 error('Invalid context : ''%s''', contextName); 311 312 313 end 314 315 316 317

File structures

The structures of the different types of files are described in the sections "On the hard drive" of the introduction tutorials; refer to those sections for the details of each file type.

Custom processing

In many situations, you will find it useful to read the files available in the database, and possibly modify them. The easiest approaches do not require any scripting; we will start by reviewing them quickly.

Process: Run Matlab command

If you want to modify the values saved in a file (e.g. the field "F" from a "data" file), the easiest way is probably to use the process File > Run Matlab command from the Process1 tab. It is also available in Process2, in the category "Other".

It loads the input files and runs them through a piece of Matlab code that you can edit freely. This greatly extends the flexibility of the Brainstorm pipeline manager, by providing easy access to any Matlab function or script.

  • runMatlab.gif

The corresponding script looks like this:

sFiles = bst_process('CallProcess', 'process_matlab_eval', sFiles, [], ...
    'matlab',      'Data = Data.^2;', ...
    'sensortypes', 'MEG', ...
    'overwrite',   0);

Export/Import with the database explorer

Matlab

Right-click on the Deviant average in the database explorer > File > Export to Matlab > "DataMat".

  • export_matlab.gif

Data exported as "DataMat"
>> DataMat
DataMat =
     ChannelFlag: [340x1 double]
    ColormapType: []
         Comment: 'Avg: deviant (39 files)'
        DataType: 'recordings'
          Device: 'CTF'
    DisplayUnits: []
               F: [340x361 double]
         History: {45x3 cell}
             Std: []
            Time: [1x361 double]
            nAvg: 39
          Events: [1x1 struct]

DataMat is the exact content of the corresponding .mat file, as loaded with Matlab's load() function. Edit some of the fields of this structure from the Matlab command window:

>> DataMat.Comment = 'Test square';
>> DataMat.F = DataMat.F .^ 2;

Now right-click on the folder containing the original file > File > Import from Matlab > DataMat:

import_matlab.gif

If instead you right-click on the original file and select the menu File > Import from Matlab, it overwrites the selected file with the exported structure, instead of creating a new one.

Files

In addition to the export to Matlab, files in the database explorer can be exported as files on disk. Right-click on the Deviant average in the database explorer > File > Export to file. Select the destination folder, file name and file format. The available file formats depend on the type of file to export.

  • export_file.gif

Additionally, 'data' (raw and imported/trial recordings), 'results' (sources), 'timefreq' (time-frequency, spectrum and connectivity) and 'matrix' (any time series extracted from other files) files can be exported with the process File > Export to file.

  • process_export_file.gif

Reference: File manipulation

Useful functions for manipulating file names and paths (read the code of the functions for help):

  • file_fullpath: Converts a relative file path to an absolute path.

  • file_short: Converts an absolute file path to a short path, relative to the current protocol folder.

  • file_gettype: Returns the type of a file.
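
For example, a minimal sketch converting between relative and absolute paths (the relative file name below is hypothetical):

% Relative path, as stored in the database (hypothetical file name)
RelativeFile = 'Subject01/Script/matrix_test.mat';
% Convert it to an absolute path on the hard drive
FullFile = file_fullpath(RelativeFile);
% Convert it back to a path relative to the current protocol folder
ShortFile = file_short(FullFile);
% Get the file type from the file name ('matrix' in this case)
FileType = file_gettype(RelativeFile);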

Reading files from a script (all these functions take relative paths as input):

  • in_tess_bst(TessFile): Read a surface file.

  • in_mri_bst(MriFile): Read an MRI/volume file.

  • in_bst_data(DataFile): Read an imported epoch.

  • in_bst_timefreq(TimefreqFile): Read a power spectrum, time-frequency or connectivity file.

  • in_bst_channel(ChannelFile): Read a channel file.

  • in_bst_headmodel(HeadmodelFile, ApplyOrient): Read a head model file and apply orientations.

  • in_bst_results(ResultsFile, LoadFull): Load a source file and optionally reconstruct the full source time series on the fly (ImagingKernel * recordings).

  • in_bst_matrix(MatrixFile): Read a file with the type "matrix".

  • in_bst(FileName, TimeWindow): Read any Brainstorm data file with the possibility to load only a specific part of the file. "TimeWindow" is a range of time values in seconds: [tStart, tStop].

  • bst_process('LoadInputFile', FileName, Target, TimeWindow): The highest-level function for reading data files. "Target" is a string with the list of signal names or types to load.

  • bst_memory('GetConnectMatrix', TimefreqMat): Rebuild a full connectivity matrix.

  • in_fopen(DataFile, FileFormat, ImportOptions): Low-level function for opening continuous files.

  • in_fread(sFile, ChannelMat, iEpoch, SamplesBounds, iChannels, ImportOptions): Low-level function for reading blocks from continuous files. Requires a call to in_fopen first to get the sFile structure.
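
As a short example, here is a minimal sketch reading an imported data file and its channel file (the relative paths below are hypothetical; copy the real ones from the database explorer with the popup menu File > Copy file path to clipboard):

% Relative paths, as stored in the database (hypothetical names)
DataFile    = 'Subject01/S01_Run01/data_deviant_average.mat';
ChannelFile = 'Subject01/S01_Run01/channel_ctf_acc1.mat';
% Load the recordings: fields F [nChannels x nTime], Time, ChannelFlag, ...
DataMat = in_bst_data(DataFile);
% Load the channel file: field Channel with the sensor names, types and positions
ChannelMat = in_bst_channel(ChannelFile);
% Plot the time series of the first MEG sensor
iMeg = find(strcmpi({ChannelMat.Channel.Type}, 'MEG'));
figure; plot(DataMat.Time, DataMat.F(iMeg(1),:));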

Saving files:

  • bst_save(FileName, FileMat, Version, isAppend): Saves a file but does not register it in the database.

    • FileName: Absolute path to a .mat file, use in combination with file_fullpath for relative paths.

    • FileMat: Valid Brainstorm structure, corresponding to the file type.

    • Version: Defines which version of the Matlab .mat format is used to store the data:
      - 'v6': Fastest option, bigger files, no compression, no files > 2Gb.
      - 'v7': Slower option, compressed, no files > 2Gb.
      - 'v7.3': Much slower than the others, compressed, but the only way to save files > 2Gb.

    • isAppend: If set to 1, updates only the fields defined in FileMat and keeps the others untouched.

  • file_unique: Produces a unique file name by adding a number tag to it.

  • bst_process('GetNewFilename', OutputFolder, BaseFilename): Generate a new unique file name based on the beginning of the file name (e.g. BaseFilename='data_average_test_').

Registering new files in the database:

  • db_add(iStudy/iSubject, FileMat): Add a new file in an anatomy folder or a functional data folder. This function saves the file and then reloads the folder. FileMat must be a structure, not a filename. You should not save the file manually before calling this function.

  • db_add_data(iStudy, FileName, FileMat): Register in the database a structure FileMat that has already been saved in file FileName. You should call bst_save manually before calling this function.

  • db_add_data(iStudy, FileName, FileMat, iItem): Overwrites the existing file #iItem.

Reload folders (if you saved or deleted files without correctly registering the modification in the database):

  • db_reload_studies(iStudies): Reload only the selected data folders (aka "studies").

  • db_reload_conditions(iSubjects): Reload all the data folders for a subject.

  • db_reload_subjects(iSubjects): Reload the anatomy of the selected subjects.

  • db_reload_database('current'): Reload the entire protocol (anatomy and functional data).

Other useful database functions:

  • db_add_condition: Create a new folder in a subject.

  • db_add_subject: Create a new subject.

  • db_delete_studies: Delete a list of folders.

  • db_delete_subjects: Delete a list of subjects.

  • db_group_conditions: Merge two folders from the same subject.

  • db_rename_condition: Rename a folder.

  • db_rename_subject: Rename a subject.

  • db_set_channel: Set the channel file for a folder.

  • db_set_headmodel: Copy a head model to other folders.

  • db_set_noisecov: Set the noise/data covariance for a folder, or copy to other folders/subjects.

  • db_set_template: Copy an anatomy template to a subject or use it as the default anatomy.
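
For example, a minimal sketch creating and then deleting a folder (the subject and folder names are arbitrary examples):

% Create a new folder "TestFolder" in subject "Subject01"
iStudy = db_add_condition('Subject01', 'TestFolder');
% Delete this folder from the database
db_delete_studies(iStudy);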

Export a file from the database to other file formats (read the comments in the functions for help):

  • export_channel
  • export_data
  • export_events
  • export_result
  • export_timefreq
  • export_matrix
  • export_mri
  • export_surfaces
  • export_protocol: Export a subject or an entire protocol as a .zip file.

Convert Brainstorm structures to FieldTrip structures:

  • out_fieldtrip_channel
  • out_fieldtrip_data
  • out_fieldtrip_timefreq
  • out_fieldtrip_headmodel
  • out_fieldtrip_results
  • out_fieldtrip_matrix

Reference: Display functions

Create new visualization figures:

  • view_channels: Display sensors in a 3D figure.

  • view_helmet: Display the inner surface of the MEG helmet in a 3D figure.

  • view_timeseries: Display a data file as time series.

  • view_timeseries_matrix: Display a custom matrix as time series.

  • view_topography: Display a data file as a spatial topography.

  • view_erpimage: Display multiple data files as an image, signal by signal.

  • view_timefreq: Open a time-frequency file (various display modes available).

  • view_spectrum: Display a power spectrum (PSD or time-frequency files).

  • view_connect: Open a connectivity matrix (various display modes available).

  • view_matrix: Open a "matrix" file (various display modes available).

  • view_contactsheet: Create a contact sheet in time or across a volume from an existing figure.

  • view_noisecov: Display a noise or data covariance file.

  • view_dipoles: Open a dipoles file (various display modes available).

  • view_pac: Open PAC results (various display modes available).

  • view_mri: View an MRI file in the MRI viewer (with or without a functional overlay from a source file).

  • view_mri_3d: View an MRI file in a 3D figure (with or without a functional overlay from a source file).

  • view_surface: View a surface.

  • view_surface_data: View a surface file with a source file as its texture.

  • view_surface_matrix: View a custom surface (user defines vertices, faces, color, transparency).

  • view_image_reg: Display a 3D or 4D matrix as an image with time and/or frequency dimensions.

  • view_struct: Display a structure in a figure, as done with the popup menu "File > View file contents".

  • script_view_sources: Shortcut script to display source files.

  • bst_memory('UnloadAll', 'Forced'): Close all the existing figures.
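
For example, a minimal sketch opening a few figures for one data file (the relative path is hypothetical):

% Data file to display (hypothetical relative path)
DataFile = 'Subject01/S01_Run01/data_deviant_average.mat';
% Display the MEG recordings as time series
hFigTs = view_timeseries(DataFile, 'MEG');
% Display the same file as a 2D sensor topography
hFigTp = view_topography(DataFile, 'MEG', '2DSensorCap');
% Close all the figures when done
bst_memory('UnloadAll', 'Forced');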

Configure time-series figures:

  • panel_time('SetCurrentTime', t): Set current time.

  • panel_record('SetTimeLength', duration): Set the length of the current display page, in seconds.

  • panel_record('SetStartTime', t): Set the start of the current page, in seconds.

  • panel_record('SetDisplayMode', hFig, DisplayMode): 'column' or 'butterfly'.

  • panel_filter('SetFilters', isLowPass, LowPass, isHighPass, HighPass, isSinRem, SinRem, isMirror)

  • panel_montage('SetCurrentMontage', hFig, MontageName): Change the montage of channels.

  • bst_figures('SetSelectedRows', SelectedChannels): Set selected channels (cell array of strings).

  • figure_timeseries('SetTimeSelectionManual', hFig, TimeWindow): Select a time segment.
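
A minimal sketch combining some of the calls above on an open time-series figure hFig (the channel names are arbitrary examples):

% Set the current time to 90ms
panel_time('SetCurrentTime', 0.090);
% Switch the figure to the butterfly display mode
panel_record('SetDisplayMode', hFig, 'butterfly');
% Select two channels (example channel names)
bst_figures('SetSelectedRows', {'MLC11', 'MLC12'});
% Select the time segment 50-150ms
figure_timeseries('SetTimeSelectionManual', hFig, [0.050, 0.150]);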

Configure 3D figures:

  • figure_3d('SetStandardView', hFig, 'left'): Change camera view (top,bottom,left,right,front,back).

  • figure_3d('ViewSensors', hFig, isMarkers, isLabels): Enable the view of sensor markers and labels.

  • panel_surface('SetShowSulci', hFig, iTess, 1): Show/hide the sulci (darker color for deeper areas).

  • panel_surface('SetSurfaceColor', hFig, iTess, [1 0 0]): Set the surface color.

  • panel_surface('SetSurfaceSmooth', hFig, iTess, Smooth, 0): Set the amount of smoothing (0-1).

  • panel_surface('SetSurfaceTransparency', hFig, iTess, Transp): Set the surface transparency (0-1).

  • panel_surface('SetDataThreshold', hFig, iTess, Thresh): Set the amplitude threshold.

  • panel_surface('SetSizeThreshold', hFig, iTess, MinSize): Set size threshold (min size slider).

  • panel_surface('SelectHemispheres', target): Equivalent to clicking on the buttons in the Resect panel of the Surface tab. Possible target values: 'left', 'right', 'struct', 'none'

  • figure_mri('SetLocation', CsName, hFig, [], xyz): CsName=voxel/mri/scs/mni, xyz=[x,y,z]
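
A minimal sketch configuring an open 3D figure hFig, assuming the surface of interest is the first one loaded in the figure (iTess = 1):

% Index of the surface in the figure (assumption: first surface)
iTess = 1;
% Change the camera position
figure_3d('SetStandardView', hFig, 'left');
% Show the sulci and smooth the surface by 40%
panel_surface('SetShowSulci', hFig, iTess, 1);
panel_surface('SetSurfaceSmooth', hFig, iTess, 0.4, 0);
% Hide the values below 30% of the colorbar maximum
panel_surface('SetDataThreshold', hFig, iTess, 0.3);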

Configure time-frequency figures:

  • panel_freq('SetCurrentFreq', iFreq, 1): Set the current frequency index (from the Freqs vector)

  • panel_freq('SetCurrentFreq', Freq, 0): Set the current frequency (in Hz).

  • sOptions = panel_display('GetDisplayOptions'): Get display options selected in the Display tab.

  • panel_display('SetDisplayOptions', sOptions): Change the options selected in the Display tab.

  • sOptions.HideEdgeEffects: Controls the checkbox "Hide edge effects" (0 or 1).

  • sOptions.HighResolution: Controls the checkbox "Smooth display" (0 or 1).

  • sOptions.RowName: Controls the signal that is currently displayed (for 'SingleSensor' display mode).

  • sOptions.Function: Controls the display function (magnitude, power, log, phase).
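
For example, a minimal sketch changing the current frequency and the display function for an open time-frequency figure:

% Jump to the frequency 12 Hz
panel_freq('SetCurrentFreq', 12, 0);
% Get the current options of the Display tab, switch to power, apply
sOptions = panel_display('GetDisplayOptions');
sOptions.Function = 'power';
panel_display('SetDisplayOptions', sOptions);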

Configure colormaps:

  • bst_colormaps('SetColormapName', ColormapType, ColormapName): 'jet', 'parula', 'cmap_rbw', ...

  • bst_colormaps('SetMaxMode', ColormapType, MaxMode): Colorbar range ('global', 'local', 'custom').

  • bst_colormaps('SetMaxCustom', ColormapType, [], Min, Max): Set a custom colorbar range.

  • bst_colormaps('SetColormapAbsolute', ColormapType, isAbsolute): Show positive or relative values

  • bst_colormaps('SetDisplayColorbar', ColormapType, isDisplay): Show colorbar in the figures.

  • bst_colormaps('RestoreDefaults', ColormapType): Restore the default configuration for a colormap.

  • ColormapType: anatomy, meg, eeg, sources, stat1, stat2, time, timefreq, connect1, connectn, image
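
For example, a minimal sketch configuring the "sources" colormap (the custom colorbar range is an arbitrary example, in the units of the displayed figure):

% Use the colormap "cmap_rbw" for all the source figures
bst_colormaps('SetColormapName', 'sources', 'cmap_rbw');
% Set a custom colorbar range (arbitrary example values)
bst_colormaps('SetMaxCustom', 'sources', [], 0, 50);
% Restore the default configuration
bst_colormaps('RestoreDefaults', 'sources');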

Configure statistical thresholding:

  • StatThreshOptions = bst_get('StatThreshOptions'): Get display options selected in the Stat tab.

  • bst_set('StatThreshOptions', StatThreshOptions): Change the options selected in the Stat tab.

  • StatThreshOptions.pThreshold: Current significance level α (i.e. p-value threshold)

  • StatThreshOptions.Correction: Correction for multiple comparisons ('none', 'fdr', 'bonferroni')

  • StatThreshOptions.Control: List of dimensions to correct for multiple comparisons (default = [1 2 3])
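
For example, a minimal sketch setting an FDR-corrected significance threshold at p < 0.01:

% Get the current options of the Stat tab
StatThreshOptions = bst_get('StatThreshOptions');
% Significance level and correction for multiple comparisons
StatThreshOptions.pThreshold = 0.01;
StatThreshOptions.Correction = 'fdr';
% Apply the new options
bst_set('StatThreshOptions', StatThreshOptions);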

Export the contents of a figure to a file:

  • out_figure_image: Screen capture of any Brainstorm figure.

  • out_figure_movie: Save a movie from one or multiple Brainstorm figures.

  • out_figure_timefreq: Extract some of the data displayed in a time-frequency figure.

  • out_figure_timeseries: Extract some of the data displayed in a time series figure.
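
A minimal sketch saving a screen capture of an open figure hFig (the output file name is hypothetical):

% Save the figure hFig as a PNG image
out_figure_image(hFig, 'deviant_topography.png');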

Example: Creating a new file

This section illustrates how to add new files to the database. We will create a sinusoidal signal and save it in a "matrix" file, in a new folder of the subject "Test".

% Time: 1 second with a sampling frequency of 1000Hz
t = 0:0.001:1;
% Generate two sinusoidal signals (20Hz, 30Hz)
F = [sin(20*2*pi*t); 0.5*sin(30*2*pi*t)];

% Initialize an empty "matrix" structure
sMat = db_template('matrixmat');
% Fill the required fields of the structure
sMat.Value       = F;
sMat.Comment     = 'Test sinusoids';
sMat.Description = {'Signal #1: 20Hz'; 'Signal #2: 30Hz'};
sMat.Time        = t;

% Create a new folder "Script" in subject "Test"
iStudy = db_add_condition('Test', 'Script');
% Get the corresponding study structure
sStudy = bst_get('Study', iStudy);

There are many options to add a new file to the database, with various levels of requirements. You can call the db_add function (reloads the destination folder, therefore slow if you save many files), save the file in the corresponding folder and reload the protocol (slow as well), or register the file in the database manually (more complicated but faster).

Option #1: db_add

OutputFile = db_add(iStudy, sMat);

Option #2: bst_save / db_reload_studies

% Get the full path to the new folder
% (same folder as the brainstormstudy.mat file for this study)
OutputFolder = bst_fileparts(file_fullpath(sStudy.FileName));
% Get a new unique filename (including a timestamp)
MatrixFile = bst_process('GetNewFilename', OutputFolder, 'matrix_test');
% Save file
bst_save(MatrixFile, sMat, 'v6');
% Reload the folder in which the new file was saved
db_reload_studies(iStudy);

Option #3: bst_save / db_add_data

% Another way to generate a unique filename (without a timestamp)
MatrixFile = file_unique(bst_fullfile(OutputFolder, 'matrix_test.mat'));
% Save file
bst_save(MatrixFile, sMat, 'v6');
% Reference saved file in the database
db_add_data(iStudy, MatrixFile, sMat);
% Update the database explorer display
panel_protocols('UpdateNode', 'Study', iStudy);
  • newfile.gif

Example: Editing events

A step that commonly requires manual changes is the definition of the event markers. For example, we have to combine external triggers or behavioral information with the existing events. This example illustrates how to load the events, modify them and save them back.

For the continuous recordings, the events are saved in the .mat file corresponding to the "Link to raw file". These structures contain only meta-data and information created with Brainstorm; the EEG/MEG recordings themselves stay in a separate binary file. First, we need to load this link.

% Right-click on a "Link to raw file" in the database explorer
%  > File > Copy file path to clipboard
RawFile = '/.../@rawS01.../data_0raw_S01_..._01_600Hz_notch.mat'

% Load the "sFile" structure, contained in the .F structure
% of the link file (data_0raw...mat)
sRaw = in_bst_data(RawFile, 'F');

>> sRaw.F.events
ans =
1x7 struct array with fields:
    label
    color
    epochs
    times
    reactTimes
    select
    channels
    notes

For example, let's say we want to add 30ms to all the events in the category "button" in order to compensate for some hardware delay, and create a new event category with the modified timing. First, we need to identify the index of the category "button" in this array of 7 event structures.

% Find the index of the event category "button"
iEvtButton = find(strcmpi({sRaw.F.events.label}, 'button'));

>> iEvtButton
iEvtButton =
     3

In the code above, note this special Matlab syntax, which concatenates the values of one field across all the elements of an array of structures:

>> {sRaw.F.events.label}
ans =
    'standard'    'deviant'    'button'    'cardiac'
    'blink'    'bad_1-7Hz'    'bad_40-240Hz'

If you want to search instead for all the events containing a specific tag, for example "bad", you can use the cellfun function (applies the same function to every element of a cell array and concatenates the results) in combination with the strfind function (searches for a substring). The final call to the find function returns the indices at which the list of tags found in the event label is not empty.

>> iEvtBad = find( ~cellfun( @(c)isempty(strfind(c,'bad')), ...
                             {sRaw.F.events.label}))
iEvtBad =
     6     7

The code below copies the existing event category "button", renames it and adds a 30ms offset. If you add or remove events, you must also adjust the size of the other fields: epochs (always 1 for most file formats), channels and notes (cell arrays of empty matrices in most cases).

% Copy the event category "button" to a new category
iEvtNew = length(sRaw.F.events) + 1;
sRaw.F.events(iEvtNew) = sRaw.F.events(iEvtButton);
% Rename the new event to "button_offset"
sRaw.F.events(iEvtNew).label = 'button_offset';

% How many samples in 30ms (0.030s * 600Hz = 18 samples)
offsetSample = round(0.030 .* sRaw.F.prop.sfreq);
% Apply offset to the events in the "button_offset" category
sRaw.F.events(iEvtNew).times = sRaw.F.events(iEvtNew).times + 0.030;
% Round new time values to the nearest sample
sRaw.F.events(iEvtNew).times = ...
    round(sRaw.F.events(iEvtNew).times .* sRaw.F.prop.sfreq) ./ sRaw.F.prop.sfreq;
% Re-generate an epochs field with only ones, and empty notes and channels fields
% (optional here, as we didn't change the number of events)
nTimes = size(sRaw.F.events(iEvtNew).times, 2);
sRaw.F.events(iEvtNew).epochs = ones(1, nTimes);
sRaw.F.events(iEvtNew).channels = cell(1, nTimes);
sRaw.F.events(iEvtNew).notes = cell(1, nTimes);
% Change the event color to yellow (red=1, green=1, blue=0)
sRaw.F.events(iEvtNew).color = [1 1 0];

>> sRaw.F.events(iEvtNew)
ans =
         label: 'button_offset'
         color: [1 1 0]
        epochs: [1x40 double]
         times: [1x40 double]
    reactTimes: []
        select: 1
      channels: {1x40 cell}
         notes: {1x40 cell}

The last step is to save the modifications back to the "Link to raw file". Here the call to file_fullpath is optional because the variable RawFile already contains the absolute path to the file.

% Save the modified sRaw structure back to the RawFile file
% (the last parameter appends/updates the fields of the existing file)
bst_save(file_fullpath(RawFile), sRaw, 'v6', 1);

Open the recordings to make sure your transformation worked the way you expected.

  • edit_events.gif
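
You can also run a quick sanity check from the script, before opening the viewer. This is a minimal sketch that simply reloads the link and lists the event categories:

% Reload the link and check that the new category was saved correctly
sCheck = in_bst_data(RawFile, 'F');
% List the event labels: 'button_offset' should now appear at the end
disp({sCheck.F.events.label});
% Number of occurrences in the new category (still 40 in this example)
disp(size(sCheck.F.events(end).times, 2));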

Find examples in the code

The easiest way to understand how to use a function is to search the code with the "Find Files" interface in Matlab. Go to the brainstorm3 folder, click on "Find Files" (or Ctrl+Shift+F), enter the name of a function in "Find files containing text", and select the options "Include subfolders" and "Match case". It returns all the lines that include the string you entered, across all the files in the Brainstorm distribution. Just double-click on a line to jump to the code in the Matlab editor.

find_files.gif

You can use the same interface to find what function is called when you click on a button or menu in the interface. Search for the label or the tooltip of the interface element in the same way. The example below shows how to track what happens when you click on the headmodel popup menu "Check spheres".

find_files2.gif

If you have trouble understanding how to set some input parameters, you can use the Matlab debugger to explore a real use case. Place a breakpoint at the beginning of your function of interest (watch this tutorial if you don't know how to do this), for example in view_timeseries.m. Then click on the corresponding menus in the Brainstorm interface (e.g. double-click on a data file). When the execution reaches the line you selected, it stops and gives you back control of the command line. You can explore the values of all the variables, modify them, and execute the code step by step (many options are available in the Editor tab of Matlab).

debugger.gif
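
If you prefer working from the command line, the standard Matlab debugging commands can be used instead of clicking in the editor (this is plain Matlab, not specific to Brainstorm):

% Stop whenever view_timeseries is called
dbstop in view_timeseries
% ... double-click on a data file in the Brainstorm interface, then inspect the variables ...
% Resume the execution and remove all the breakpoints
dbcont
dbclear all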

Find interface callback functions

If you are looking for the function called by a menu or a button in the interface:

  • In Matlab, go to the "brainstorm3" folder
  • Click on the button "Find Files" in the "Editor" ribbon of the Matlab interface (Ctrl+Shift+F)
  • Select the option "Include subfolders"
  • Search for the text of the menu or button you are looking for (e.g. 'Set as default'); you can do a case-sensitive or a case-insensitive search (option "Match case")
  • It gives you the list of all the places where this text appears in the Brainstorm code. In the previous example, in tree_callbacks.m, you’d find a call to function SetDefaultSurf.

  • Double-click on the corresponding line to jump to the code
  • Right-click on "SetDefaultSurf" > Open "SetDefaultSurf"

  • The editor should jump to the code of function SetDefaultSurf(), and there you’d find your call to "db_surface_default"

  • Right-click on it > Open "db_surface_default" to open the function and read the header that should explain its usage (this example is not very detailed, sorry)

  • Sometimes it helps to look at other examples of calls to this function: Use the window "Find files" again to search for text "db_surface_default"
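
Once you have identified the name of a function, you can also get its header and its location directly from the Matlab command window (standard Matlab commands):

% Display the header of the function (same text as at the top of the .m file)
help db_surface_default
% Show where the function is defined in the Brainstorm distribution
which db_surface_default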

Additional quality control

You can add to the reports any information that may help you control the quality of the analysis, or figures you want to include in publications or clinical reports. The process "File > Save snapshot" lets you save some predefined views, but you can also add custom screen captures. The example below shows how to add a "raster plot" for all the deviant trials from Run#01 to the report.

% Get all the deviant trials in Run#01  (the list includes the deviant average)
sDeviant = bst_process('CallProcess', 'process_select_files_data', [], [], ...
    'subjectname',   'Subject01', ...
    'condition',     'S01_AEF_20131218_01_600Hz_notch', ...
    'tag',           'deviant');

% Open raster plot
hFig = view_erpimage({sDeviant.FileName}, 'erpimage', 'MEG');
% Select the channel MRT34
sOptions = panel_display('GetDisplayOptions');
sOptions.RowName = 'MRT34';
panel_display('SetDisplayOptions', sOptions);

% Screen capture of this figure
% bst_report('Snapshot', hFig, FileName, Comment, WindowPosition);
bst_report('Snapshot', hFig, [], 'ERP image: MRT34', [300 100 600 400]);
% Close figure
close(hFig);

You can also add messages to the reports (information, warnings or errors).

% Function call: bst_report(MsgType, sProcess, sInputs, Message)
bst_report('Info',    [], sDeviant, 'This is an information message.');
bst_report('Warning', [], sDeviant, 'This is a warning.');
bst_report('Error',   [], sDeviant, 'This is an error.');

% Open the report viewer to show the current report (not saved yet)
bst_report('Open', 'Current');

Report generated with the code above:

  • report2.gif

Loop over subjects

Creating loops is not supported yet by the script generator, but it is relatively easy to do from a script without knowing much about Matlab programming. The example below shows how to create a loop over subjects to import their anatomy. The dataset used here is from the tutorial MEG visual: single subject.

With the Process1 box empty, select the process "Import > Import anatomy > Import anatomy folder" and generate a script. Simplify it using the guidelines presented in the previous sections:

% Input files
SubjectNames = {'sub001'};
RawFiles = {...
    '/.../Tutorials/sample_group/freesurfer/sub001'};

% Process: Import anatomy folder
bst_process('CallProcess', 'process_import_anatomy', [], [], ...
    'subjectname', SubjectNames{1}, ...
    'mrifile',     {RawFiles{1}, 'FreeSurfer'}, ...
    'nvertices',   15000);

Add the other subject names and corresponding FreeSurfer folders in the script header:

SubjectNames = {'sub001', 'sub002', 'sub003', 'sub004'};
RawFiles = {...
    '/.../Tutorials/sample_group/freesurfer/sub001', ...
    '/.../Tutorials/sample_group/freesurfer/sub002', ...
    '/.../Tutorials/sample_group/freesurfer/sub003', ...
    '/.../Tutorials/sample_group/freesurfer/sub004'};

Add a for loop around all the steps to repeat on each subject ("for" before, and "end" after the code), and replace the indices "1" with the loop variable:

% Loop on subjects
for iSubject = 1:length(SubjectNames)
    % Process: Import anatomy folder
    bst_process('CallProcess', 'process_import_anatomy', [], [], ...
        'subjectname', SubjectNames{iSubject}, ...
        'mrifile',     {RawFiles{iSubject}, 'FreeSurfer'}, ...
        'nvertices',   15000);
end

Loop over acquisition runs

If you have multiple subjects for which the anatomy is already imported, and multiple runs to process for each subject, you can add two nested for loops to link all the runs to the database in the same script. The dataset used here is from the tutorial MEG visual: single subject.

With the Process1 box empty, select the process "Import > Import recordings > Create link to raw file" and generate a script. Simplify it using the guidelines presented in the previous sections:

% Input files
SubjectNames = {'sub001'};
RawFiles = {...
    '/.../sample_group/ds117/sub001/MEG/run_01_sss.fif'};

% Process: Create link to raw file
sFileRaw = bst_process('CallProcess', 'process_import_data_raw', [], [], ...
    'subjectname',    SubjectNames{1}, ...
    'datafile',       {RawFiles{1}, 'FIF'}, ...
    'channelreplace', 0, ...
    'channelalign',   0, ...
    'evtmode',        'value');

Add the other subject names and all the runs for all the subjects (a cell array of cell arrays) in the script header:

SubjectNames = {'sub001', 'sub002'};
RawFiles = {...
    {'/.../sample_group/ds117/sub001/MEG/run_01_sss.fif', ...
     '/.../sample_group/ds117/sub001/MEG/run_02_sss.fif', ...
     '/.../sample_group/ds117/sub001/MEG/run_03_sss.fif'}, ...
    {'/.../sample_group/ds117/sub002/MEG/run_01_sss.fif', ...
     '/.../sample_group/ds117/sub002/MEG/run_02_sss.fif', ...
     '/.../sample_group/ds117/sub002/MEG/run_03_sss.fif'}};

Add two for loops around the code to repeat on all the runs:

% Loop on subjects
for iSubject = 1:length(SubjectNames)
  % Loop on runs for each subject
  for iRun = 1:length(RawFiles{iSubject})
    % Process: Create link to raw file
    sFileRaw = bst_process('CallProcess', 'process_import_data_raw', [], [], ...
        'subjectname',    SubjectNames{iSubject}, ...
        'datafile',       {RawFiles{iSubject}{iRun}, 'FIF'}, ...
        'channelreplace', 0, ...
        'channelalign',   0, ...
        'evtmode',        'value');
  end
end
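
If one of the runs cannot be processed (missing file, unreadable format...), the error interrupts the whole script. A common precaution, in plain Matlab, is to wrap the body of the loop in a try/catch block so that the remaining runs are still processed. A minimal sketch:

% Loop on subjects and runs, but keep going if one run fails
for iSubject = 1:length(SubjectNames)
  for iRun = 1:length(RawFiles{iSubject})
    try
      % Process: Create link to raw file
      bst_process('CallProcess', 'process_import_data_raw', [], [], ...
          'subjectname',    SubjectNames{iSubject}, ...
          'datafile',       {RawFiles{iSubject}{iRun}, 'FIF'}, ...
          'channelreplace', 0, ...
          'channelalign',   0, ...
          'evtmode',        'value');
    catch ME
      % Report the error and move on to the next run
      disp(['ERROR: Could not link ' RawFiles{iSubject}{iRun} ': ' ME.message]);
    end
  end
end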

How to process an entire study

This section proposes a standard workflow for processing a full group study with Brainstorm. It contains the same analysis steps as the introduction tutorials, but separates what can be done automatically from what should be done manually. This workflow can be adapted to most stimulus-based ERP studies.

  • Prototype: Start by processing one or two subjects completely interactively (exactly like in the introduction tutorials). Use the few pilot subjects that you have for your study to prototype the analysis pipeline and check manually all the intermediate stages. Take notes of what you're doing along the way, so that you can later write a script that reproduces the same operations.

  • Anatomical fiducials: Set NAS/LPA/RPA and compute the MNI normalization for each subject.

    • Segmentation: Run FreeSurfer/BrainSuite to get surfaces and atlases for all the subjects.

    • File > Batch MRI fiducials: This menu prompts for the selection of the fiducials for all the subjects and saves a file fiducials.m in each segmentation folder. You will not have to redo this even if you have to start over your analysis from the beginning.

    • Script: Write a loop that calls the process "Import anatomy folder" for all the subjects.

    • Alternatives: Create and import the subjects one by one and set the fiducials at the import time. Or use the default anatomy for all the subjects (or use warped templates).

  • Script #1: Pre-processing: Loop on the subjects and the acquisition runs.

    • Create link to raw files: Link the subject and noise recordings to the database.

    • Event markers: Read and group triggers from digital and analog channels, fix stimulation delays.

    • Evaluation: Power spectrum density of the recordings to evaluate their quality.

    • Pre-processing: Notch filter, sinusoid removal, band-pass filter.

    • Evaluation: Power spectrum density of the recordings to make sure the filters worked well.

    • Cleanup: Delete the links to the original files (the filtered ones are copied in the database).

    • Detect artifacts: Detect heartbeats, Detect eye blinks, Remove simultaneous.

    • Compute SSP: Heartbeats, Blinks (this selects the first component of each decomposition)

    • Compute ICA: If you have some artifacts you'd like to remove with ICA (no default selection).

    • Screenshots: Check the MRI/sensors registration, PSD before and after corrections, SSP.

    • Export the report: One report per subject, or one report for all the subjects, saved in HTML.

  • Manual inspection #1:

    • Check the reports: Information messages (number of events, errors and warnings) and screen captures (registration problems, obvious noisy channels, incorrect SSP topographies).

    • Mark bad channels: Open the recordings, select the channels and mark them as bad. Or use the process "Set bad channels" to mark the same bad channels in multiple files (see the example after this list).

    • Fix the SSP/ICA: For the suspicious runs: Open the file, adjust the list of blink and cardiac events, remove and recompute the SSP decompositions, manually select the components.

    • Detect other artifacts: Run the process on all the runs of all the subjects at once (select all the files in Process1 and run the process, or generate the equivalent script).

    • Mark bad segments: Review the artifacts detected in 1-7Hz and 40-240Hz, keep only the ones you really want to remove, then mark the event categories as bad. Review quickly the rest of the file and check that there are no other important artifacts.

    • Additional SSP: If you find one type of artifact that repeats (typically saccades and SQUID jumps), you can create additional SSP projectors, either with the process "SSP: Generic" or directly from a topography figure (right-click on the figure > Snapshot > Use as SSP projector).

  • Script #2: Subject-level analysis: Epoching, averaging, sources, time-frequency.

    • Importing: Process "Import MEG/EEG: Events" and "Pre-process > Remove DC offset".

    • Averaging: Average trials by run, average runs by subject (registration problem in MEG).

    • Noise covariance: Compute from empty room or resting recordings, copy to other folders.

    • Head model: Compute for each run, or compute once and copy if the runs are co-registered.

    • Sources: Compute for each run, average across runs and subjects in source space for MEG.

    • Time-frequency: Computation with Hilbert transform or Morlet wavelets, then normalize.

    • Screenshots: Check the quality of all the averages (time series, topographies, sources).

    • Export the report: One report per subject, or one report for all the subjects, saved in HTML.

  • Manual inspection #2:

    • Check the reports: Check the number of epochs imported and averaged in each condition, check the screen capture of the averages (all the primary responses should be clearly visible).

    • Regions of interest: If not using predefined regions from an atlas, define the scouts on the anatomy of each subject (or on the template and then project them to the subjects).

  • Script #3: Group analysis, ROI-based analysis, etc.

    • Averaging: Group averages for the sensor data, the sources and the time-frequency maps.

    • Statistics: Contrast between conditions or groups of subjects.

    • Regions of interest: Any operation that involves scouts.
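
Most of the decisions taken during the manual inspections can still be recorded as script calls, so that the full analysis can be reproduced from scratch. For example, once you have identified the bad channels by reviewing the recordings, you can add the process "Set bad channels" to Script #1. This is a minimal sketch; the subject name, condition and channel names are placeholders to adapt to your own study:

% Select the continuous recordings of one subject (the condition is left empty here)
sFilesRaw = bst_process('CallProcess', 'process_select_files_data', [], [], ...
    'subjectname', 'Subject01', ...
    'condition',   '');
% Process: Set bad channels (placeholder channel list)
bst_process('CallProcess', 'process_channel_setbad', sFilesRaw, [], ...
    'sensortypes', 'MRT51, MLO52');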

Final scripts

The following script from the Brainstorm distribution reproduces the introduction tutorials ("Get started"): brainstorm3/toolbox/script/tutorial_introduction.m - Report: report_TutorialIntroduction.html

1 function tutorial_introduction(tutorial_dir, reports_dir) 2 % TUTORIAL_INTRODUCTION: Script that runs all the Brainstorm introduction tutorials. 3 % 4 % INPUTS: 5 % - tutorial_dir : Directory where the sample_introduction.zip file has been unzipped 6 % - reports_dir : Directory where to save the execution report (instead of displaying it) 7 8 % @============================================================================= 9 % This function is part of the Brainstorm software: 10 % https://neuroimage.usc.edu/brainstorm 11 % 12 % Copyright (c) University of Southern California & McGill University 13 % This software is distributed under the terms of the GNU General Public License 14 % as published by the Free Software Foundation. Further details on the GPLv3 15 % license can be found at http://www.gnu.org/copyleft/gpl.html. 16 % 17 % FOR RESEARCH PURPOSES ONLY. THE SOFTWARE IS PROVIDED "AS IS," AND THE 18 % UNIVERSITY OF SOUTHERN CALIFORNIA AND ITS COLLABORATORS DO NOT MAKE ANY 19 % WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF 20 % MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, NOR DO THEY ASSUME ANY 21 % LIABILITY OR RESPONSIBILITY FOR THE USE OF THIS SOFTWARE. 22 % 23 % For more information type "brainstorm license" at command prompt. 24 % =============================================================================@ 25 % 26 % Author: Francois Tadel, 2016-2017 27 28 29 % ===== FILES TO IMPORT ===== 30 % Output folder for reports 31 if (nargin < 2) || isempty(reports_dir) || ~isdir(reports_dir) 32 reports_dir = []; 33 end 34 % You have to specify the folder in which the tutorial dataset is unzipped 35 if (nargin == 0) || isempty(tutorial_dir) || ~file_exist(tutorial_dir) 36 error('The first argument must be the full path to the dataset folder.'); 37 end 38 % Subject name 39 SubjectName = 'Subject01'; 40 % Build the path of the files to import 41 AnatDir = fullfile(tutorial_dir, 'sample_introduction', 'anatomy'); 42 Run1File = fullfile(tutorial_dir, 'sample_introduction', 'data', 'S01_AEF_20131218_01_600Hz.ds'); 43 Run2File = fullfile(tutorial_dir, 'sample_introduction', 'data', 'S01_AEF_20131218_02_600Hz.ds'); 44 NoiseFile = fullfile(tutorial_dir, 'sample_introduction', 'data', 'S01_Noise_20131218_02_600Hz.ds'); 45 % Check if the folder contains the required files 46 if ~file_exist(Run1File) 47 error(['The folder ' tutorial_dir ' does not contain the folder from the file sample_introduction.zip.']); 48 end 49 % Re-inialize random number generator 50 if (bst_get('MatlabVersion') >= 712) 51 rng('default'); 52 end 53 54 55 %% ===== TUTORIAL #1: CREATE PROTOCOL ================================================ 56 % =================================================================================== 57 disp([10 'DEMO> Tutorial #1: Create protocol' 10]); 58 % The protocol name has to be a valid folder name (no spaces, no weird characters...) 
59 ProtocolName = 'TutorialIntroduction'; 60 % Start brainstorm without the GUI 61 if ~brainstorm('status') 62 brainstorm nogui 63 end 64 % Delete existing protocol 65 gui_brainstorm('DeleteProtocol', ProtocolName); 66 % Create new protocol 67 gui_brainstorm('CreateProtocol', ProtocolName, 0, 0); 68 % Start a new report 69 bst_report('Start'); 70 % Reset colormaps 71 bst_colormaps('RestoreDefaults', 'meg'); 72 % Set the current display mode to 'butterfly' 73 bst_set('TSDisplayMode', 'butterfly'); 74 75 76 %% ===== TUTORIAL #2: IMPORT ANATOMY ================================================= 77 % =================================================================================== 78 disp([10 'DEMO> Tutorial #2: Import anatomy' 10]); 79 % Process: Import FreeSurfer folder 80 bst_process('CallProcess', 'process_import_anatomy', [], [], ... 81 'subjectname', SubjectName, ... 82 'mrifile', {AnatDir, 'FreeSurfer'}, ... 83 'nvertices', 15000, ... 84 'nas', [127, 213, 139], ... 85 'lpa', [ 52, 113, 96], ... 86 'rpa', [202, 113, 91]); 87 % This automatically calls the SPM registration procedure because the AC/PC/IH points are not defined 88 89 90 91 %% ===== TUTORIAL #3: EXPLORE ANATOMY ================================================ 92 % =================================================================================== 93 disp([10 'DEMO> Tutorial #3: Explore anatomy' 10]); 94 % Get subject definition 95 sSubject = bst_get('Subject', SubjectName); 96 % Get MRI file and surface files 97 MriFile = sSubject.Anatomy(sSubject.iAnatomy).FileName; 98 CortexFile = sSubject.Surface(sSubject.iCortex).FileName; 99 HeadFile = sSubject.Surface(sSubject.iScalp).FileName; 100 % Display MRI 101 hFigMri1 = view_mri(MriFile); 102 hFigMri3 = view_mri_3d(MriFile, [], [], 'NewFigure'); 103 hFigMri2 = view_mri_slices(MriFile, 'x', 20); 104 pause(0.5); 105 % Close figures 106 close([hFigMri1 hFigMri2 hFigMri3]); 107 % Display scalp and cortex 108 hFigSurf = view_surface(HeadFile); 109 hFigSurf = view_surface(CortexFile, [], [], hFigSurf); 110 hFigMriSurf = view_mri(MriFile, CortexFile); 111 % Figure configuration 112 iTess = 2; 113 panel_surface('SetShowSulci', hFigSurf, iTess, 1); 114 panel_surface('SetSurfaceColor', hFigSurf, iTess, [1 0 0]); 115 panel_surface('SetSurfaceSmooth', hFigSurf, iTess, 0.5, 0); 116 panel_surface('SetSurfaceTransparency', hFigSurf, iTess, 0.8); 117 figure_3d('SetStandardView', hFigSurf, 'left'); 118 pause(0.5); 119 % Close figures 120 close([hFigSurf hFigMriSurf]); 121 122 123 124 %% ===== TUTORIAL #4: CHANNEL FILE =================================================== 125 % =================================================================================== 126 disp([10 'DEMO> Tutorial #4: Channel file' 10]); 127 % Process: Create link to raw files 128 sFilesRun1 = bst_process('CallProcess', 'process_import_data_raw', [], [], ... 129 'subjectname', SubjectName, ... 130 'datafile', {Run1File, 'CTF'}, ... 131 'channelalign', 1); 132 sFilesRun2 = bst_process('CallProcess', 'process_import_data_raw', [], [], ... 133 'subjectname', SubjectName, ... 134 'datafile', {Run2File, 'CTF'}, ... 135 'channelalign', 1); 136 sFilesNoise = bst_process('CallProcess', 'process_import_data_raw', [], [], ... 137 'subjectname', SubjectName, ... 138 'datafile', {NoiseFile, 'CTF'}, ... 139 'channelalign', 0); 140 sFilesRaw = [sFilesRun1, sFilesRun2, sFilesNoise]; 141 % Process: Snapshot: Sensors/MRI registration 142 bst_process('CallProcess', 'process_snapshot', [sFilesRun1, sFilesRun2], [], ... 143 'target', 1, ... 
% Sensors/MRI registration 144 'modality', 1, ... % MEG (All) 145 'orient', 1, ... % left 146 'Comment', 'MEG/MRI Registration'); 147 148 % View sensors 149 hFig = view_surface(HeadFile); 150 hFig = view_channels(sFilesRun1.ChannelFile, 'MEG', 1, 1, hFig); 151 % Hide sensors 152 pause(0.5); 153 hFig = view_channels(sFilesRun1.ChannelFile, 'MEG', 0, 0, hFig); 154 % View coils 155 hFig = view_channels(sFilesRun1.ChannelFile, 'CTF', 1, 1, hFig); 156 % View helmet 157 pause(0.5); 158 hFig = view_helmet(sFilesRun1.ChannelFile, hFig); 159 pause(0.5); 160 close(hFig); 161 % Edit good/bad channel for current file 162 gui_edit_channel(sFilesRun1.ChannelFile); 163 pause(0.5); 164 % Unload everything 165 bst_memory('UnloadAll', 'Forced'); 166 167 168 169 %% ===== TUTORIAL #5: REVIEW RAW ===================================================== 170 % =================================================================================== 171 disp([10 'DEMO> Tutorial #5: Review raw' 10]); 172 % Process: Convert to continuous (CTF): Continuous 173 bst_process('CallProcess', 'process_ctf_convert', sFilesRaw, [], ... 174 'rectype', 2); % Continuous 175 176 % View recordings 177 hFigMeg = view_timeseries(sFilesRun1.FileName, 'MEG'); 178 hFigEeg = view_timeseries(sFilesRun1.FileName, 'Misc'); 179 hFigSel = view_timeseries(sFilesRun1.FileName, 'MEG', {'MLT11','MLT12','MLT13'}); 180 % Figure configuration 181 pause(0.5); 182 panel_record('SetTimeLength', 3); 183 panel_record('SetStartTime', 100); 184 panel_record('SetDisplayMode', hFigMeg, 'column'); 185 panel_montage('SetCurrentMontage', hFigMeg, 'CTF LT'); 186 % Set filters: panel_filter('SetFilters', LowPassEnabled, LowPassValue, HighPassEnabled, HighPassValue, SinRemovalEnabled, SinRemovalValue, MirrorEnabled, FullSourcesEnabled) 187 panel_filter('SetFilters', 1, 100, 1, 1, 0, [], 0, 0); 188 pause(0.5); 189 panel_record('SetDisplayMode', hFigMeg, 'butterfly'); 190 panel_montage('SetCurrentMontage', hFigMeg, ''); 191 % Close figures 192 close([hFigMeg hFigEeg hFigSel]); 193 194 195 196 %% ===== TUTORIAL #8: STIM DELAYS ==================================================== 197 % =================================================================================== 198 disp([10 'DEMO> Tutorial #8: Stim delays' 10]); 199 % Process: Detect: standard_fix 200 bst_process('CallProcess', 'process_evt_detect_analog', [sFilesRun1, sFilesRun2], [], ... 201 'eventname', 'standard_fix', ... 202 'channelname', 'UADC001', ... 203 'timewindow', [], ... 204 'threshold', 1.2, ... 205 'blanking', 0.2, ... 206 'highpass', 0, ... 207 'lowpass', 0, ... 208 'refevent', 'standard', ... 209 'isfalling', 0, ... 210 'ispullup', 0, ... 211 'isclassify', 0); 212 % Process: Detect: deviant_fix 213 bst_process('CallProcess', 'process_evt_detect_analog', [sFilesRun1, sFilesRun2], [], ... 214 'eventname', 'deviant_fix', ... 215 'channelname', 'UADC001', ... 216 'timewindow', [], ... 217 'threshold', 1.2, ... 218 'blanking', 0.2, ... 219 'highpass', 0, ... 220 'lowpass', 0, ... 221 'refevent', 'deviant', ... 222 'isfalling', 0, ... 223 'ispullup', 0, ... 224 'isclassify', 0); 225 % Process: Read from channel 226 bst_process('CallProcess', 'process_evt_read', [sFilesRun1, sFilesRun2], [], ... 227 'stimchan', 'UDIO001', ... 228 'trackmode', 1, ... % Value: detect the changes of channel value 229 'zero', 0); 230 231 % Process: Delete events 232 bst_process('CallProcess', 'process_evt_delete', [sFilesRun1, sFilesRun2], [], ... 
233 'eventname', 'standard, deviant, button'); 234 % Process: Rename event (standard_fix>standard) 235 bst_process('CallProcess', 'process_evt_rename', [sFilesRun1, sFilesRun2], [], ... 236 'src', 'standard_fix', ... 237 'dest', 'standard'); 238 % Process: Rename event (deviant_fix>deviant) 239 bst_process('CallProcess', 'process_evt_rename', [sFilesRun1, sFilesRun2], [], ... 240 'src', 'deviant_fix', ... 241 'dest', 'deviant'); 242 % Process: Rename event (64>button) 243 bst_process('CallProcess', 'process_evt_rename', [sFilesRun1, sFilesRun2], [], ... 244 'src', '64', ... 245 'dest', 'button'); 246 247 248 249 %% ===== TUTORIAL #10: FREQUENCY FILTERS ============================================= 250 % =================================================================================== 251 disp([10 'DEMO> Tutorial #10: Frequency filters' 10]); 252 % Process: Sinusoid removal: 60Hz 120Hz 180Hz 300Hz 253 sFilesNotch = bst_process('CallProcess', 'process_notch', sFilesRaw, [], ... 254 'freqlist', [60, 120, 180], ... 255 'sensortypes', 'MEG', ... 256 'read_all', 0); 257 % Process: Power spectrum density (Welch) 258 sFilesPsd = bst_process('CallProcess', 'process_psd', [sFilesRaw, sFilesNotch], [], ... 259 'timewindow', [], ... 260 'win_length', 4, ... 261 'win_overlap', 50, ... 262 'clusters', {}, ... 263 'sensortypes', 'MEG', ... 264 'edit', struct(... 265 'Comment', 'Power', ... 266 'TimeBands', [], ... 267 'Freqs', [], ... 268 'ClusterFuncTime', 'none', ... 269 'Measure', 'power', ... 270 'Output', 'all', ... 271 'SaveKernel', 0)); 272 % Process: Snapshot: Frequency spectrum 273 bst_process('CallProcess', 'process_snapshot', sFilesPsd, [], ... 274 'target', 10, ... % Frequency spectrum 275 'modality', 1, ... % MEG (All) 276 'Comment', 'Power spectrum density'); 277 % Process: Delete folders 278 bst_process('CallProcess', 'process_delete', sFilesRaw, [], ... 279 'target', 2); % Delete folders 280 % Separate the three outputs 281 sFilesRun1 = {sFilesNotch(1).FileName}; 282 sFilesRun2 = {sFilesNotch(2).FileName}; 283 sFilesNoise = {sFilesNotch(3).FileName}; 284 285 286 287 %% ===== TUTORIAL #11: BAD CHANNELS ================================================== 288 % =================================================================================== 289 % % Process: Set bad channels 290 % sFiles = bst_process('CallProcess', 'process_channel_setbad', sFilesRun2, [], ... 291 % 'sensortypes', 'MRT51, MLO52, MLO42, MLO43'); 292 293 294 295 %% ===== TUTORIAL #12: ARTIFACTS DETECTION =========================================== 296 % =================================================================================== 297 disp([10 'DEMO> Tutorial #12: Artifacts detection' 10]); 298 % Process: Detect heartbeats 299 bst_process('CallProcess', 'process_evt_detect_ecg', [sFilesRun1, sFilesRun2], [], ... 300 'channelname', 'ECG', ... 301 'timewindow', [], ... 302 'eventname', 'cardiac'); 303 % Process: Detect eye blinks 304 bst_process('CallProcess', 'process_evt_detect_eog', [sFilesRun1, sFilesRun2], [], ... 305 'channelname', 'VEOG', ... 306 'timewindow', [], ... 307 'eventname', 'blink'); 308 % Process: Remove simultaneous 309 bst_process('CallProcess', 'process_evt_remove_simult', [sFilesRun1, sFilesRun2], [], ... 310 'remove', 'cardiac', ... 311 'target', 'blink', ... 312 'dt', 0.25, ... 
313 'rename', 0); 314 315 316 317 %% ===== TUTORIAL #13: SSP =========================================================== 318 % =================================================================================== 319 disp([10 'DEMO> Tutorial #13: SSP' 10]); 320 % Process: SSP ECG: cardiac 321 bst_process('CallProcess', 'process_ssp_ecg', [sFilesRun1, sFilesRun2], [], ... 322 'eventname', 'cardiac', ... 323 'sensortypes', 'MEG', ... 324 'usessp', 0, ... 325 'select', 1); 326 % Process: SSP EOG: blink 327 bst_process('CallProcess', 'process_ssp_eog', sFilesRun1, [], ... 328 'eventname', 'blink', ... 329 'sensortypes', 'MEG', ... 330 'usessp', 0, ... 331 'select', [1 2]); 332 bst_process('CallProcess', 'process_ssp_eog', sFilesRun2, [], ... 333 'eventname', 'blink', ... 334 'sensortypes', 'MEG', ... 335 'usessp', 0, ... 336 'select', 1); 337 338 339 %% ===== TUTORIAL #14: BAD SEGMENTS ================================================== 340 % =================================================================================== 341 disp([10 'DEMO> Tutorial #14: Bad segments' 10]); 342 % Process: Detect other artifacts 343 bst_process('CallProcess', 'process_evt_detect_badsegment', [sFilesRun1, sFilesRun2], [], ... 344 'timewindow', [], ... 345 'sensortypes', 'MEG', ... 346 'threshold', 3, ... % 3 347 'isLowFreq', 1, ... 348 'isHighFreq', 1); 349 350 % Process: Rename event (1-7Hz > saccade) (Run02 only) 351 bst_process('CallProcess', 'process_evt_rename', sFilesRun2, [], ... 352 'src', '1-7Hz', ... 353 'dest', 'saccade'); 354 355 % Manual selection of saccades (cannot be done from the pipeline editor: manual edition of the structures) 356 sMatRun2 = in_bst_data(sFilesRun2{1}, 'F'); 357 iEvtSaccade = find(strcmpi({sMatRun2.F.events.label}, 'saccade')); 358 sMatRun2.F.events(iEvtSaccade).times = [30, 81.5, 104, 142.5, 167, 187.5, 246.5, 319; 31, 83, 105, 144, 168, 188.5, 248, 320]; 359 sMatRun2.F.events(iEvtSaccade).epochs = ones(1, size(sMatRun2.F.events(iEvtSaccade).times, 2)); 360 sMatRun2.F.events(iEvtSaccade).channels = []; 361 sMatRun2.F.events(iEvtSaccade).notes = []; 362 bst_save(file_fullpath(sFilesRun2{1}), sMatRun2, 'v6', 1); 363 364 % Process: SSP: saccade (Run02 only) 365 bst_process('CallProcess', 'process_ssp', sFilesRun2, [], ... 366 'timewindow', [], ... 367 'eventname', 'saccade', ... 368 'eventtime', [-0.2, 0.2], ... 369 'bandpass', [1.5, 7], ... 370 'sensortypes', 'MEG', ... 371 'usessp', 1, ... 372 'saveerp', 0, ... 373 'method', 1, ... 374 'select', 1); 375 % Process: Detect other artifacts (Run02 only) 376 bst_process('CallProcess', 'process_evt_detect_badsegment', sFilesRun2, [], ... 377 'timewindow', [], ... 378 'sensortypes', 'MEG', ... 379 'threshold', 3, ... % 3 380 'isLowFreq', 1, ... 381 'isHighFreq', 1); 382 383 % Process: Rename event (1-7Hz > bad_1-7Hz) 384 bst_process('CallProcess', 'process_evt_rename', [sFilesRun1, sFilesRun2], [], ... 385 'src', '1-7Hz', ... 386 'dest', 'bad_1-7Hz'); 387 % Process: Rename event (40-240Hz > bad_40-240Hz) 388 bst_process('CallProcess', 'process_evt_rename', [sFilesRun1, sFilesRun2], [], ... 389 'src', '40-240Hz', ... 390 'dest', 'bad_40-240Hz'); 391 % Process: Snapshot: SSP projectors 392 bst_process('CallProcess', 'process_snapshot', [sFilesRun1, sFilesRun2], [], ... 393 'target', 2, ... 
% SSP projectors 394 'Comment', 'SSP projectors'); 395 396 397 398 %% ===== TUTORIAL #15: IMPORT EVENTS ================================================= 399 % =================================================================================== 400 disp([10 'DEMO> Tutorial #15: Import events' 10]); 401 % Process: Import MEG/EEG: Events (Run01) 402 sFilesEpochs1 = bst_process('CallProcess', 'process_import_data_event', sFilesRun1, [], ... 403 'subjectname', SubjectName, ... 404 'condition', '', ... 405 'eventname', 'standard, deviant', ... 406 'timewindow', [], ... 407 'epochtime', [-0.100, 0.500], ... 408 'createcond', 0, ... 409 'ignoreshort', 1, ... 410 'usectfcomp', 1, ... 411 'usessp', 1, ... 412 'freq', [], ... 413 'baseline', [-0.1, -0.0017]); 414 % Process: Import MEG/EEG: Events (Run02) 415 sFilesEpochs2 = bst_process('CallProcess', 'process_import_data_event', sFilesRun2, [], ... 416 'subjectname', SubjectName, ... 417 'condition', '', ... 418 'eventname', 'standard, deviant', ... 419 'timewindow', [], ... 420 'epochtime', [-0.100, 0.500], ... 421 'createcond', 0, ... 422 'ignoreshort', 1, ... 423 'usectfcomp', 1, ... 424 'usessp', 1, ... 425 'freq', [], ... 426 'baseline', [-0.1, -0.0017]); 427 % Display raster plot 428 hFigRaster = view_erpimage({sFilesEpochs1.FileName}, 'erpimage', 'MEG'); 429 panel_display(); 430 bst_report('Snapshot', hFigRaster, sFilesEpochs1(1).FileName, 'ERP image'); 431 close(hFigRaster); 432 433 434 %% ===== TUTORIAL #16: AVERAGE ======================================================= 435 % =================================================================================== 436 disp([10 'DEMO> Tutorial #16: Average' 10]); 437 % Process: Average: By trial group (folder average) 438 sFilesAvg = bst_process('CallProcess', 'process_average', [sFilesEpochs1, sFilesEpochs2], [], ... 439 'avgtype', 5, ... % By trial groups (folder average) 440 'avg_func', 1, ... % Arithmetic average: mean(x) 441 'weighted', 0, ... 442 'keepevents', 1); 443 % Process: Delete events 'cardiac' 444 bst_process('CallProcess', 'process_evt_delete', sFilesAvg, [], ... 445 'eventname', 'cardiac'); 446 % Process: Snapshot: Recordings time series 447 bst_process('CallProcess', 'process_snapshot', sFilesAvg, [], ... 448 'target', 5, ... % Recordings time series 449 'modality', 1, ... % MEG (All) 450 'Comment', 'Evoked response'); 451 % Set colormap: global color scale 452 bst_colormaps('SetMaxMode', 'meg', 'global'); 453 % Process: Snapshot: Recordings topography (contact sheet) 454 bst_process('CallProcess', 'process_snapshot', sFilesAvg, [], ... 455 'target', 7, ... % Recordings topography (contact sheet) 456 'modality', 1, ... % MEG 457 'contact_time', [0, 0.350], ... 458 'contact_nimage', 15, ... 459 'Comment', 'Evoked response'); 460 461 % Process: Average+Stderr: By trial group (subject average) 462 sFilesAvgAll = bst_process('CallProcess', 'process_average', [sFilesEpochs1, sFilesEpochs2], [], ... 463 'avgtype', 6, ... % By trial group (subject average) 464 'avg_func', 7, ... % Arithmetic average + Standard error 465 'weighted', 0, ... 466 'keepevents', 1); 467 % Process: Delete events 'cardiac' 468 bst_process('CallProcess', 'process_evt_delete', sFilesAvgAll, [], ... 469 'eventname', 'cardiac'); 470 % Process: Delete events 'saccade' 471 bst_process('CallProcess', 'process_evt_delete', sFilesAvgAll, [], ... 472 'eventname', 'saccade'); 473 % Process: Snapshot: Recordings time series 474 bst_process('CallProcess', 'process_snapshot', sFilesAvgAll, [], ... 475 'target', 5, ... 
% Recordings time series 476 'modality', 1, ... % MEG (All) 477 'Comment', 'Evoked response'); 478 479 480 %% ===== TUTORIAL #17: EXPLORATION =================================================== 481 % =================================================================================== 482 disp([10 'DEMO> Tutorial #17: Bad segments' 10]); 483 % View averages 484 hFigMeg1 = view_timeseries(sFilesAvg(1).FileName, 'MEG'); 485 hFigMeg2 = view_timeseries(sFilesAvg(2).FileName, 'MEG'); 486 hFigEeg1 = view_timeseries(sFilesAvg(1).FileName, 'Misc'); 487 hFigEeg2 = view_timeseries(sFilesAvg(2).FileName, 'Misc'); 488 hFigTopo1 = view_topography(sFilesAvg(1).FileName, 'MEG', '2DSensorCap'); 489 hFigTopo2 = view_topography(sFilesAvg(2).FileName, 'MEG', '2DSensorCap'); 490 hFigTp2 = view_topography(sFilesAvg(3).FileName, 'MEG', '3DSensorCap'); 491 hFigTp3 = view_topography(sFilesAvg(3).FileName, 'MEG', '2DDisc'); 492 hFigTp4 = view_topography(sFilesAvg(3).FileName, 'MEG', '2DLayout'); 493 % Set time: 90ms 494 panel_time('SetCurrentTime', 0.090); 495 % Set filters: 40Hz low-pass, no high-pass 496 panel_filter('SetFilters', 1, 40, 0, [], 0, [], 0, 0); 497 % View selected sensors 498 SelectedChannels = {'MLC31', 'MLC32'}; 499 bst_figures('SetSelectedRows', SelectedChannels); 500 view_timeseries(sFilesAvg(4).FileName, [], SelectedChannels); 501 % Select time window 502 figure_timeseries('SetTimeSelectionManual', hFigMeg1, [0.070, 0.130]); 503 % Show sensors on 2DSensorCap topography 504 isMarkers = 1; 505 isLabels = 0; 506 figure_3d('ViewSensors', hFigTopo1, isMarkers, isLabels); 507 % Display time contact sheet for a figure 508 pause(0.5); 509 hContactFig = view_contactsheet( hFigTopo2, 'time', 'fig', [], 12, [0 0.120] ); 510 pause(0.5); 511 close(hContactFig); 512 513 514 515 %% ===== TUTORIAL #18: COLORMAPS ===================================================== 516 % =================================================================================== 517 disp([10 'DEMO> Tutorial #18: Colormaps' 10]); 518 % Set 'Meg' colormap to 'jet' 519 bst_colormaps('SetColormapName', 'meg', 'jet'); 520 pause(0.5); 521 % Set 'Meg' colormap to 'rwb' 522 bst_colormaps('SetColormapName', 'meg', 'cmap_rbw'); 523 % Set colormap to display absolute values 524 bst_colormaps('SetColormapAbsolute', 'meg', 1); 525 % Normalize colormap for each time frame 526 bst_colormaps('SetMaxMode', 'meg', 'local'); 527 % Hide colorbar 528 bst_colormaps('SetDisplayColorbar', 'meg', 0); 529 pause(0.5); 530 % Restore colormap to default values 531 bst_colormaps('RestoreDefaults', 'meg'); 532 % Edit good/bad channel for current file 533 gui_edit_channelflag(sFilesAvg(1).FileName); 534 % Close figures 535 pause(0.5); 536 bst_memory('UnloadAll', 'Forced'); 537 538 539 540 %% ===== TUTORIAL #20: HEAD MODEL ==================================================== 541 % =================================================================================== 542 disp([10 'DEMO> Tutorial #20: Head model' 10]); 543 % Process: Compute head model 544 bst_process('CallProcess', 'process_headmodel', sFilesAvg, [], ... 545 'comment', '', ... 546 'sourcespace', 1, ... 
547 'meg', 3); % Overlapping spheres 548 % Get study structure 549 sStudy = bst_get('Study', sFilesAvg(1).iStudy); 550 % Show spheres 551 hFig = view_spheres(sStudy.HeadModel(sStudy.iHeadModel).FileName, sStudy.Channel.FileName, sSubject); 552 pause(0.5); 553 close(hFig); 554 555 556 557 %% ===== TUTORIAL #21: NOISE COVARIANCE ============================================== 558 % =================================================================================== 559 disp([10 'DEMO> Tutorial #21: Noise covariance' 10]); 560 % Process: Compute covariance (noise or data) 561 bst_process('CallProcess', 'process_noisecov', sFilesNoise, [], ... 562 'baseline', [], ... 563 'sensortypes', 'MEG, EEG, SEEG, ECOG', ... 564 'target', 1, ... % Noise covariance (covariance over baseline time window) 565 'dcoffset', 1, ... % Block by block, to avoid effects of slow shifts in data 566 'identity', 0, ... 567 'copycond', 1, ... 568 'copysubj', 0, ... 569 'replacefile', 1); % Replace 570 % Process: Snapshot: Noise covariance 571 bst_process('CallProcess', 'process_snapshot', sFilesNoise, [], ... 572 'target', 3, ... % Noise covariance 573 'Comment', 'Noise covariance'); 574 575 % Process: Compute covariance (noise or data) [Run01] 576 bst_process('CallProcess', 'process_noisecov', sFilesEpochs1, [], ... 577 'baseline', [-0.1, -0.0017], ... 578 'datatimewindow', [0, 0.5], ... 579 'sensortypes', 'MEG, EEG, SEEG, ECOG', ... 580 'target', 2, ... % Data covariance (covariance over data time window) 581 'dcoffset', 1, ... % Block by block, to avoid effects of slow shifts in data 582 'identity', 0, ... 583 'copycond', 0, ... 584 'copysubj', 0, ... 585 'replacefile', 1); % Replace 586 % Process: Compute covariance (noise or data) [Run02] 587 bst_process('CallProcess', 'process_noisecov', sFilesEpochs2, [], ... 588 'baseline', [-0.1, -0.0017], ... 589 'datatimewindow', [0, 0.5], ... 590 'sensortypes', 'MEG, EEG, SEEG, ECOG', ... 591 'target', 2, ... % Data covariance (covariance over data time window) 592 'dcoffset', 1, ... % Block by block, to avoid effects of slow shifts in data 593 'identity', 0, ... 594 'copycond', 0, ... 595 'copysubj', 0, ... 596 'replacefile', 1); % Replace 597 % Process: Snapshot: Data covariance 598 bst_process('CallProcess', 'process_snapshot', [sFilesEpochs1(1), sFilesEpochs2(1)], [], ... 599 'target', 12, ... % Data covariance 600 'Comment', 'Data covariance'); 601 602 603 604 %% ===== TUTORIAL #22: SOURCE ESTIMATION ============================================= 605 % =================================================================================== 606 disp([10 'DEMO> Tutorial #22: Source estimation' 10]); 607 % === GET DEVIANT AVERAGE RUN01 === 608 % Process: Select recordings in: Subject01/S01_AEF_20131218_01_600Hz_notch 609 sFiles01 = bst_process('CallProcess', 'process_select_files_data', [], [], ... 610 'subjectname', SubjectName, ... 611 'condition', 'S01_AEF_20131218_01_600Hz_notch', ... 612 'includebad', 0); 613 % Process: Select file comments with tag: deviant 614 sFilesAvgDeviant01 = bst_process('CallProcess', 'process_select_tag', sFiles01, [], ... 615 'tag', 'Avg: deviant', ... 616 'search', 2, ... % Search the file comments 617 'select', 1); % Select only the files with the tag 618 619 % === CONSTRAINED EXAMPLE === 620 % Minimum norm options 621 InverseOptions = struct(... 622 'Comment', 'MN: MEG', ... 623 'InverseMethod', 'minnorm', ... 624 'InverseMeasure', 'amplitude', ... 625 'SourceOrient', {{'fixed'}}, ... 626 'Loose', 0.2, ... 627 'UseDepth', 1, ... 
628 'WeightExp', 0.5, ... 629 'WeightLimit', 10, ... 630 'NoiseMethod', 'reg', ... 631 'NoiseReg', 0.1, ... 632 'SnrMethod', 'fixed', ... 633 'SnrRms', 0.001, ... 634 'SnrFixed', 3, ... 635 'ComputeKernel', 1, ... 636 'DataTypes', {{'MEG'}}); 637 % Process: Compute sources [2018] 638 sFilesSrcDeviant01 = bst_process('CallProcess', 'process_inverse_2018', sFilesAvgDeviant01, [], ... 639 'output', 2, ... % Kernel only: one per file 640 'inverse', InverseOptions); 641 642 % === DISPLAY SOURCES MANUALLY === 643 % View time series 644 hFigSrc1 = view_timeseries(sFilesAvgDeviant01(1).FileName, 'MEG'); 645 % View on the cortex surface 646 hFigSrc2 = script_view_sources(sFilesSrcDeviant01.FileName, 'cortex'); 647 % Set current time to 90ms 648 panel_time('SetCurrentTime', 0.090); 649 % Set orientation 650 figure_3d('SetStandardView', hFigSrc2, 'left'); 651 % Set surface threshold 652 iSurf = 1; 653 thresh = .30; 654 panel_surface('SetDataThreshold', hFigSrc2, iSurf, thresh); 655 % Set surface smoothing 656 panel_surface('SetSurfaceSmooth', hFigSrc2, iSurf, .6, 0); 657 % Show sulci 658 panel_surface('SetShowSulci', hFigSrc2, iSurf, 1); 659 660 % View sources on MRI (3D orthogonal slices) 661 hFigSrc3 = script_view_sources(sFilesSrcDeviant01.FileName, 'mri3d'); 662 panel_surface('SetDataThreshold', hFigSrc3, iSurf, thresh); 663 % Set the position of the cuts in the 3D figure 664 cutsPosVox = [74 93 159]; 665 panel_surface('PlotMri', hFigSrc3, cutsPosVox); 666 667 % View sources with MRI Viewer 668 hFigSrc4 = script_view_sources(sFilesSrcDeviant01.FileName, 'mriviewer'); 669 panel_surface('SetDataThreshold', hFigSrc4, iSurf, thresh); 670 % Set the position of the cuts in the MRI Viewer (values in millimeters) 671 figure_mri('SetLocation', 'voxel', hFigSrc4, [], cutsPosVox); 672 % Close figures 673 close([hFigSrc1 hFigSrc2 hFigSrc3 hFigSrc4]); 674 675 % === UNCONSTRAINED EXAMPLE === 676 % Unconstrained minnorm 677 InverseOptions.Comment = 'MN: MEG'; 678 InverseOptions.InverseMeasure = 'amplitude'; 679 InverseOptions.SourceOrient = {'free'}; 680 % Process: Compute sources [2018] 681 sFilesSrcUnconst = bst_process('CallProcess', 'process_inverse_2018', sFilesAvgDeviant01, [], ... 682 'output', 2, ... % Kernel only: one per file 683 'inverse', InverseOptions); 684 685 686 % === NORMALIZED SOURCES === 687 % dSPM 688 InverseOptions.Comment = 'dSPM: MEG'; 689 InverseOptions.InverseMeasure = 'dspm2018'; 690 InverseOptions.SourceOrient = {'fixed'}; 691 sFilesSrcDspm = bst_process('CallProcess', 'process_inverse_2018', sFilesAvgDeviant01, [], ... 692 'output', 2, ... % Kernel only: one per file 693 'inverse', InverseOptions); 694 % sLORETA (old function) 695 sFilesSrcSloreta = bst_process('CallProcess', 'process_inverse', sFilesAvgDeviant01, [], ... 696 'comment', '', ... 697 'method', 3, ... % sLORETA 698 'wmne', struct(... 699 'SourceOrient', {{'fixed'}}, ... 700 'loose', 0.2, ... 701 'SNR', 3, ... 702 'pca', 1, ... 703 'diagnoise', 0, ... 704 'regnoise', 1, ... 705 'magreg', 0.1, ... 706 'gradreg', 0.1, ... 707 'depth', 1, ... 708 'weightexp', 0.5, ... 709 'weightlimit', 10), ... 710 'sensortypes', 'MEG, MEG MAG, MEG GRAD, EEG', ... 711 'output', 2); % Kernel only: one per file 712 % Process: Z-score normalization: [-100ms,-2ms] 713 sFilesSrcZscore = bst_process('CallProcess', 'process_baseline_norm', sFilesSrcDeviant01, [], ... 714 'baseline', [-0.100, -0.002], ... 715 'source_abs', 0, ... 
716 'method', 'zscore'); % Z-score transformation: x_std = (x - μ) / σ 717 718 % Process: Snapshot: Sources (one time) 719 bst_process('CallProcess', 'process_snapshot', sFilesSrcDeviant01, [], ... 720 'target', 8, ... % Sources (one time) 721 'orient', 1, ... % left 722 'time', 0.09, ... 723 'threshold', 30, ... 724 'Comment', 'Current density map (Constrained)'); 725 bst_process('CallProcess', 'process_snapshot', sFilesSrcDspm, [], ... 726 'target', 8, ... % Sources (one time) 727 'orient', 1, ... % left 728 'time', 0.09, ... 729 'threshold', 60, ... 730 'Comment', 'dSPM'); 731 bst_process('CallProcess', 'process_snapshot', sFilesSrcSloreta, [], ... 732 'target', 8, ... % Sources (one time) 733 'orient', 1, ... % left 734 'time', 0.09, ... 735 'threshold', 60, ... 736 'Comment', 'sLORETA'); 737 bst_process('CallProcess', 'process_snapshot', sFilesSrcZscore, [], ... 738 'target', 8, ... % Sources (one time) 739 'orient', 1, ... % left 740 'time', 0.09, ... 741 'threshold', 60, ... 742 'Comment', 'Z-score'); 743 bst_process('CallProcess', 'process_snapshot', sFilesSrcUnconst, [], ... 744 'target', 8, ... % Sources (one time) 745 'orient', 1, ... % left 746 'time', 0.0917, ... 747 'threshold', 0, ... 748 'Comment', 'Current density map (Unconstrained)'); 749 750 % === DELETE EXPERIMENTS === 751 % Process: Delete constrained example 752 bst_process('CallProcess', 'process_delete', [sFilesSrcDeviant01, sFilesSrcDspm, sFilesSrcSloreta, sFilesSrcZscore, sFilesSrcUnconst], [], ... 753 'target', 1); % Delete selected files 754 755 756 % === SHARED KERNEL === 757 % Constrained minnorm 758 InverseOptions.Comment = 'MN: MEG'; 759 InverseOptions.InverseMeasure = 'amplitude'; 760 InverseOptions.SourceOrient = {'fixed'}; 761 % Process: Compute sources [2018] 762 sFilesAvgSrc = bst_process('CallProcess', 'process_inverse_2018', sFilesAvg, [], ... 763 'output', 1, ... % Kernel only: shared 764 'inverse', InverseOptions); 765 766 767 % === AVERAGE SOURCES ACROSS RUNS === 768 % Process: Average: By trial group (subject average) 769 sFilesIntraSrc = bst_process('CallProcess', 'process_average', sFilesAvgSrc, [], ... 770 'avgtype', 6, ... % By trial group (subject average) 771 'avg_func', 1, ... % Arithmetic average: mean(x) 772 'weighted', 1, ... 773 'scalenormalized', 0); 774 % Process: Low-pass:40Hz 775 sFilesIntraSrcLow = bst_process('CallProcess', 'process_bandpass', sFilesIntraSrc, [], ... 776 'highpass', 0, ... 777 'lowpass', 40, ... 778 'attenuation', 'strict', ... % 60dB 779 'mirror', 0, ... 780 'overwrite', 0); 781 % Process: Z-score normalization: [-100ms,-2ms] 782 sFilesIntraZscore = bst_process('CallProcess', 'process_baseline_norm', sFilesIntraSrcLow, [], ... 783 'baseline', [-0.100, -0.0017], ... 784 'source_abs', 0, ... 785 'method', 'zscore'); % Z-score transformation: x_std = (x - μ) / σ 786 787 % Process: Delete intermediate results 788 bst_process('CallProcess', 'process_delete', sFilesIntraSrcLow, [], ... 789 'target', 1); % Delete selected files 790 % Screen captures 791 bst_process('CallProcess', 'process_snapshot', sFilesIntraZscore, [], ... 792 'target', 8, ... % Sources (one time) 793 'orient', 1, ... % left 794 'time', 0.09, ... 795 'threshold', 40, ... 796 'Comment', 'Average across runs (left)'); 797 bst_process('CallProcess', 'process_snapshot', sFilesIntraZscore, [], ... 798 'target', 8, ... % Sources (one time) 799 'orient', 2, ... % right 800 'time', 0.09, ... 801 'threshold', 40, ... 
802 'Comment', 'Average across runs (right)'); 803 bst_process('CallProcess', 'process_snapshot', sFilesIntraZscore(1), [], ... 804 'target', 9, ... % Sources (contact sheet) 805 'orient', 1, ... % left 806 'contact_time', [0, 0.35], ... 807 'contact_nimage', 15, ... 808 'threshold', 20, ... 809 'Comment', 'Average across runs'); 810 811 812 %% ===== TUTORIAL #23: SCOUTS ======================================================== 813 % =================================================================================== 814 disp([10 'DEMO> Tutorial #23: Scouts' 10]); 815 % Load surface file 816 sCortex = in_tess_bst(CortexFile); 817 % Add new scouts in first atlas 818 sCortex.iAtlas = find(strcmpi({sCortex.Atlas.Name}, 'Destrieux')); 819 % Save file 820 bst_save(file_fullpath(CortexFile), sCortex, 'v7'); 821 % Unload everything 822 bst_memory('UnloadAll', 'Forced'); 823 % Find scouts indices to display: {'A1L', 'A1R', 'IFGL', 'M1L'} 824 [tmp,iScouts,tmp] = intersect({sCortex.Atlas(sCortex.iAtlas).Scouts.Label}, {'G_temp_sup-G_T_transv L', 'G_temp_sup-G_T_transv R', 'G_front_inf-Opercular L', 'G_precentral L'}); 825 826 % View cortex 827 hFigSurf1 = view_surface(CortexFile, [], [], 'NewFigure'); 828 hFigSurf2 = view_surface(CortexFile, [], [], 'NewFigure'); 829 figure_3d('SetStandardView', hFigSurf1, 'left'); 830 figure_3d('SetStandardView', hFigSurf2, 'right'); 831 panel_surface('SetSurfaceSmooth', hFigSurf1, 1, .6, 0); 832 panel_surface('SetSurfaceSmooth', hFigSurf2, 1, .6, 0); 833 panel_surface('SetShowSulci', hFigSurf1, 1, 1); 834 panel_surface('SetShowSulci', hFigSurf2, 1, 1); 835 % Configure scouts display 836 panel_scout('SetScoutsOptions', 0, 1, 1, 'all', 0.7, 1, 1, 0); 837 % View scouts 838 hFigScouts = view_scouts({sFilesIntraZscore.FileName}, iScouts); 839 hLegend = findobj(hFigScouts, 'Type', 'legend'); 840 if ~isempty(hLegend) && ishandle(hLegend(1)) 841 set(hLegend(1), 'Units', 'pixels'); 842 pos = get(hLegend(1), 'Position'); 843 set(hLegend(1), 'Position', [1, 1, pos(3), pos(4)]); 844 end 845 % Save figures 846 bst_report('Snapshot', hFigScouts, sFilesIntraZscore(1).FileName, 'Scouts', [100 100 670 250]); 847 % Close all 848 pause(1); 849 bst_memory('UnloadAll', 'Forced'); 850 851 852 853 %% ===== TUTORIAL #24: TIME-FREQUENCY ================================================ 854 % =================================================================================== 855 disp([10 'DEMO> Tutorial #24: Time-frequency' 10]); 856 % Process: Simulate generic signals 857 sSim = bst_process('CallProcess', 'process_simulate_matrix', [], [], ... 858 'subjectname', 'Test', ... 859 'condition', 'Simulation', ... 860 'samples', 6000, ... 861 'srate', 1000, ... 862 'matlab', ['f1 = 2; f2 = 20; f3 = 50;' 10 'i =2000:6000;' 10 'Data(1,i) = sin(f1*2*pi*t(i)) + 0.4 * cos(f2*2*pi*t(i));' 10 'Data = Data + 0.2 * sin(f3*2*pi*t) + 0.4 * rand(1,6000);']); 863 864 % Time-frequency options 865 TfOptions = struct(... 866 'Comment', 'Power,1-60Hz', ... 867 'TimeBands', [], ... 868 'Freqs', [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60], ... 869 'MorletFc', 1, ... 870 'MorletFwhmTc', 3, ... 871 'ClusterFuncTime', 'none', ... 872 'Measure', 'power', ... 873 'Output', 'average', ... 874 'SaveKernel', 0); 875 % Process: Time-frequency (Morlet wavelets) 876 sSimTf1 = bst_process('CallProcess', 'process_timefreq', sSim, [], ... 
877 'edit', TfOptions, ... 878 'normalize', 'multiply'); % 1/f compensation: Multiply output values by frequency 879 880 % === NORMALIZATION === 881 % Process: Time-frequency (Morlet wavelets) 882 sSimTf2 = bst_process('CallProcess', 'process_timefreq', sSim, [], ... 883 'edit', TfOptions, ... 884 'normalize', 'none'); % None: Save non-standardized time-frequency maps 885 % Process: Spectral flattening 886 sSimTf2Multi = bst_process('CallProcess', 'process_tf_norm', sSimTf2, [], ... 887 'normalize', 'multiply', ... % 1/f compensation (multiple by frequency) 888 'overwrite', 0); 889 % Process: Event-related perturbation (ERS/ERD): [750ms,1250ms] 890 sSimTf2Ersd = bst_process('CallProcess', 'process_baseline_norm', sSimTf2, [], ... 891 'baseline', [0.75, 1.25], ... 892 'method', 'ersd', ... % Event-related perturbation (ERS/ERD): x_std = (x - μ) / μ * 100 893 'overwrite', 0); 894 895 % Process: Snapshot: Time-frequency maps 896 bst_process('CallProcess', 'process_snapshot', sSimTf2, [], ... 897 'target', 14, ... % Time-frequency maps 898 'Comment', 'Not normalized'); 899 % Process: Snapshot: Time-frequency maps 900 bst_process('CallProcess', 'process_snapshot', sSimTf2Multi, [], ... 901 'target', 14, ... % Time-frequency maps 902 'Comment', 'Spectral flattening: 1/f compensation'); 903 % Process: Snapshot: Time-frequency maps 904 bst_process('CallProcess', 'process_snapshot', sSimTf2Ersd, [], ... 905 'target', 14, ... % Time-frequency maps 906 'Comment', 'ERS/ERD'); 907 % Spectrum/Time series 908 hFigTf1 = view_spectrum(sSimTf2.FileName, 'Spectrum'); 909 hFigTf2 = view_spectrum(sSimTf2.FileName, 'TimeSeries'); 910 panel_time('SetCurrentTime', 0.5); 911 panel_freq('SetCurrentFreq', 2, 0); 912 bst_report('Snapshot', [hFigTf1 hFigTf2], sSimTf2.FileName, 'Not normalized: 2s/20Hz', [200, 200, 400, 250]); 913 panel_time('SetCurrentTime', 2.02); 914 panel_freq('SetCurrentFreq', 20, 0); 915 bst_report('Snapshot', [hFigTf1 hFigTf2], sSimTf2.FileName, 'Not normalized: 2s/20Hz', [200, 200, 400, 250]); 916 bst_memory('UnloadAll', 'Forced'); 917 918 919 % === HILBERT TRANSFORM === 920 % Process: Hilbert transform 921 sSimHilbert = bst_process('CallProcess', 'process_hilbert', sSim, [], ... 922 'edit', struct(... 923 'Comment', 'Power', ... 924 'TimeBands', [], ... 925 'Freqs', {{'delta', '2, 4', 'mean'; 'theta', '5, 7', 'mean'; 'alpha', '8, 12', 'mean'; 'beta', '15, 29', 'mean'; 'gamma1', '30, 59', 'mean'; 'gamma2', '60, 90', 'mean'}}, ... 926 'ClusterFuncTime', 'none', ... 927 'Measure', 'power', ... 928 'Output', 'all', ... 929 'SaveKernel', 0), ... 930 'normalize', 'none', ... % None: Save non-standardized time-frequency maps 931 'mirror', 0); 932 % Process: Spectral flattening 933 sSimHilbertMulti = bst_process('CallProcess', 'process_tf_norm', sSimHilbert, [], ... 934 'normalize', 'multiply', ... % 1/f compensation (multiple by frequency) 935 'overwrite', 0); 936 % Process: Event-related perturbation (ERS/ERD): [750ms,1250ms] 937 sSimHilbertErsd = bst_process('CallProcess', 'process_baseline_norm', sSimHilbert, [], ... 938 'baseline', [0.75, 1.25], ... 939 'method', 'ersd', ... % Event-related perturbation (ERS/ERD): x_std = (x - μ) / μ * 100 940 'overwrite', 0); 941 942 % Process: Snapshot: Time-frequency maps 943 bst_process('CallProcess', 'process_snapshot', sSimHilbert, [], ... 944 'target', 14, ... % Time-frequency maps 945 'Comment', 'Not normalized'); 946 % Process: Snapshot: Time-frequency maps 947 bst_process('CallProcess', 'process_snapshot', sSimHilbertMulti, [], ... 948 'target', 14, ... 
% Time-frequency maps 949 'Comment', 'Spectral flattening: 1/f compensation'); 950 % Process: Snapshot: Time-frequency maps 951 bst_process('CallProcess', 'process_snapshot', sSimHilbertErsd, [], ... 952 'target', 14, ... % Time-frequency maps 953 'Comment', 'ERS/ERD'); 954 955 % === SINGLE TRIALS === 956 TfOptions.Comment = 'Avg,Power,1-150Hz'; 957 TfOptions.Freqs = [1, 2, 3.1, 4.2, 5.4, 6.7, 8, 9.5, 11, 12.6, 14.3, 16.1, 18.1, 20.1, 22.3, 24.6, 27, 29.6, 32.4, 35.3, 38.4, 41.6, 45.1, 48.8, 52.7, 56.9, 61.3, 66, 70.9, 76.2, 81.8, 87.7, 94, 100.6, 107.7, 115.2, 123.1, 131.6, 140.5, 150]; 958 % Process: Time-frequency (Morlet wavelets) 959 sEpochsAvgTf = bst_process('CallProcess', 'process_timefreq', sFilesEpochs1, [], ... 960 'sensortypes', 'MEG, EEG', ... 961 'edit', TfOptions, ... 962 'normalize', 'none'); % None: Save non-standardized time-frequency maps 963 % Process: Event-related perturbation (ERS/ERD): [-75ms,0ms] 964 sEpochsAvgTfErsd = bst_process('CallProcess', 'process_baseline_norm', sEpochsAvgTf, [], ... 965 'baseline', [-0.075, 0], ... 966 'method', 'ersd', ... % Event-related perturbation (ERS/ERD): x_std = (x - μ) / μ * 100 967 'overwrite', 0); 968 969 % === DISPLAY === 970 % View time-frequency file 971 hFigTf1 = view_timefreq(sEpochsAvgTfErsd.FileName, 'SingleSensor'); 972 % Configure display 973 sOptions = panel_display('GetDisplayOptions'); 974 sOptions.HideEdgeEffects = 1; 975 sOptions.HighResolution = 1; 976 panel_display('SetDisplayOptions', sOptions); 977 % Other display modes 978 hFigTf2 = view_timefreq(sEpochsAvgTfErsd.FileName, 'AllSensors'); 979 hFigTf3 = view_timefreq(sEpochsAvgTfErsd.FileName, '2DLayout'); 980 hFigTf4 = view_timefreq(sEpochsAvgTfErsd.FileName, '2DLayoutOpt'); 981 bst_colormaps('SetColormapName', 'stat2', 'jet'); 982 bst_colormaps('SetColormapAbsolute', 'stat2', 1); 983 bst_report('Snapshot', hFigTf1, sEpochsAvgTfErsd.FileName, 'Time-frequency', [200, 200, 400, 250]); 984 bst_report('Snapshot', [hFigTf2 hFigTf3 hFigTf4], sEpochsAvgTfErsd.FileName, 'Time-frequency', [200, 200, 750, 400]); 985 close([hFigTf1 hFigTf2 hFigTf3 hFigTf4]); 986 % Image [channel x time] 987 hFigTf5 = view_erpimage(sEpochsAvgTfErsd.FileName, 'trialimage'); 988 % Topographies 989 hFigTf6 = view_topography(sEpochsAvgTfErsd.FileName, 'MEG', '3DSensorCap'); 990 hFigTf7 = view_topography(sEpochsAvgTfErsd.FileName, 'MEG', '2DDisc'); 991 hFigTf8 = view_topography(sEpochsAvgTfErsd.FileName, 'MEG', '2DSensorCap'); 992 hFigTf9 = view_topography(sEpochsAvgTfErsd.FileName, 'MEG', '2DLayout'); 993 panel_time('SetCurrentTime', 0.175); 994 panel_freq('SetCurrentFreq', 8, 0); 995 bst_report('Snapshot', [hFigTf5 hFigTf6 hFigTf7 hFigTf8 hFigTf9], sEpochsAvgTfErsd.FileName, 'Time-frequency: 8Hz', [200, 200, 400, 250]); 996 close([hFigTf5 hFigTf6 hFigTf7 hFigTf8 hFigTf9]); 997 998 999 % === AVERAGE FOR SCOUTS === 1000 % Select all sources for the single deviant epochs 1001 sFilesEpochDeviantSrc = bst_process('CallProcess', 'process_select_files_results', [], [], ... 1002 'subjectname', SubjectName, ... 1003 'condition', '', ... 1004 'includebad', 0); 1005 sFilesEpochDeviantSrc = bst_process('CallProcess', 'process_select_tag', sFilesEpochDeviantSrc, [], ... 1006 'tag', 'deviant', ... 1007 'search', 1, ... % Search the file names 1008 'select', 1); % Select only the files with the tag 1009 sFilesEpochDeviantSrc = bst_process('CallProcess', 'process_select_tag', sFilesEpochDeviantSrc, [], ... 1010 'tag', 'average', ... 1011 'search', 1, ... 
% Search the file names 1012 'select', 2); % Exclude the files with the tag 1013 1014 % Process: Time-frequency (Morlet wavelets) 1015 sFilesTfScout = bst_process('CallProcess', 'process_timefreq', sFilesEpochDeviantSrc, [], ... 1016 'clusters', {'Destrieux', {'G_temp_sup-G_T_transv L', 'G_temp_sup-G_T_transv R', 'G_front_inf-Opercular L', 'G_precentral L'}}, ... 1017 'scoutfunc', 1, ... % Mean 1018 'edit', struct(... 1019 'Comment', 'Deviant: Scouts,Avg,Power,1-150Hz', ... 1020 'TimeBands', [], ... 1021 'Freqs', [1, 2, 3.1, 4.2, 5.4, 6.7, 8, 9.5, 11, 12.6, 14.3, 16.1, 18.1, 20.1, 22.3, 24.6, 27, 29.6, 32.4, 35.3, 38.4, 41.6, 45.1, 48.8, 52.7, 56.9, 61.3, 66, 70.9, 76.2, 81.8, 87.7, 94, 100.6, 107.7, 115.2, 123.1, 131.6, 140.5, 150], ... 1022 'MorletFc', 1, ... 1023 'MorletFwhmTc', 3, ... 1024 'ClusterFuncTime', 'after', ... 1025 'Measure', 'power', ... 1026 'Output', 'average', ... 1027 'SaveKernel', 0), ... 1028 'normalize', 'none'); % None: Save non-standardized time-frequency maps) 1029 % Process: Event-related perturbation (ERS/ERD): [-75ms,0ms] 1030 sFilesTfScoutErsd = bst_process('CallProcess', 'process_baseline_norm', sFilesTfScout, [], ... 1031 'baseline', [-0.075, 0], ... 1032 'method', 'ersd', ... % Event-related perturbation (ERS/ERD): x_std = (x - μ) / μ * 100 1033 'overwrite', 0); 1034 % Process: Snapshot: Time-frequency maps 1035 bst_process('CallProcess', 'process_snapshot', sFilesTfScoutErsd, [], ... 1036 'target', 14, ... % Time-frequency maps 1037 'Comment', 'ERS/ERD'); 1038 1039 1040 % === FULL CORTEX / HILBERT === 1041 % Process: Hilbert transform 1042 sFilesHilbertCortex = bst_process('CallProcess', 'process_hilbert', sFilesEpochDeviantSrc, [], ... 1043 'clusters', {}, ... 1044 'scoutfunc', 1, ... % Mean 1045 'edit', struct(... 1046 'Comment', 'Deviant: Avg,Magnitude', ... 1047 'TimeBands', [], ... 1048 'Freqs', {{'delta', '2, 4', 'mean'; 'theta', '5, 7', 'mean'; 'alpha', '8, 12', 'mean'; 'beta', '15, 29', 'mean'; 'gamma1', '30, 59', 'mean'; 'gamma2', '60, 90', 'mean'}}, ... 1049 'ClusterFuncTime', 'none', ... 1050 'Measure', 'power', ... 1051 'Output', 'average', ... 1052 'RemoveEvoked', 0, ... 1053 'SaveKernel', 0), ... 1054 'normalize', 'none', ... % None: Save non-standardized time-frequency maps 1055 'mirror', 0); 1056 % Process: Event-related perturbation (ERS/ERD): [-75ms,0ms] 1057 sFilesHilbertCortexErsd = bst_process('CallProcess', 'process_baseline_norm', sFilesHilbertCortex, [], ... 1058 'baseline', [-0.075, 0], ... 1059 'method', 'ersd', ... 
% Event-related perturbation (ERS/ERD): x_std = (x - μ) / μ * 100
    'overwrite', 0);

% View results
hFigTf1 = view_surface_data([], sFilesHilbertCortexErsd.FileName);
hFigTf2 = view_timefreq(sFilesHilbertCortexErsd.FileName, 'SingleSensor', 362);
figure_3d('SetStandardView', hFigTf1, 'left');
panel_surface('SetDataThreshold', hFigTf1, 1, 0.5);
panel_time('SetCurrentTime', 0.175);
panel_freq('SetCurrentFreq', 3);
bst_colormaps('RestoreDefaults', 'stat2');
bst_report('Snapshot', [hFigTf1 hFigTf2], sFilesHilbertCortexErsd.FileName, 'Hilbert transform: Alpha band', [200, 200, 400, 250]);
bst_memory('UnloadAll', 'Forced');


%% ===== TUTORIAL #25: DIFFERENCE ====================================================
% ===================================================================================
disp([10 'DEMO> Tutorial #25: Difference' 10]);

% ===== SELECT TRIALS (DATA) =====
% Process: Select recordings in: Subject01/*
sFilesAll = bst_process('CallProcess', 'process_select_files_data', [], [], ...
    'subjectname', SubjectName, ...
    'condition', '', ...
    'includebad', 0);
% Process: Select file names with tag: deviant_trial
sEpochDeviant = bst_process('CallProcess', 'process_select_tag', sFilesAll, [], ...
    'tag', 'deviant_trial', ...
    'search', 1, ...   % Search the file names
    'select', 1);      % Select only the files with the tag
% Process: Select file names with tag: standard_trial
sEpochStandard = bst_process('CallProcess', 'process_select_tag', sFilesAll, [], ...
    'tag', 'standard_trial', ...
    'search', 1, ...   % Search the file names
    'select', 1);      % Select only the files with the tag

% ===== SELECT TRIALS (SOURCE) =====
% Process: Select recordings in: Subject01/*
sFilesAllSrc = bst_process('CallProcess', 'process_select_files_results', [], [], ...
    'subjectname', SubjectName, ...
    'condition', '', ...
    'includebad', 0);
% Process: Select file names with tag: deviant_trial
sEpochDeviantSrc = bst_process('CallProcess', 'process_select_tag', sFilesAllSrc, [], ...
    'tag', 'deviant_trial', ...
    'search', 1, ...   % Search the file names
    'select', 1);      % Select only the files with the tag
% Process: Select file names with tag: standard_trial
sEpochStandardSrc = bst_process('CallProcess', 'process_select_tag', sFilesAllSrc, [], ...
    'tag', 'standard_trial', ...
    'search', 1, ...   % Search the file names
    'select', 1);      % Select only the files with the tag

% ===== ABSOLUTE DIFFERENCE =====
% Process: Difference: A-B, abs
sDiffSrc = bst_process('CallProcess', 'process_diff_ab', sFilesIntraSrc(1).FileName, sFilesIntraSrc(2).FileName, ...
    'source_abs', 1);
% Process: Set comment: deviant|abs - standard|abs
sDiffSrc = bst_process('CallProcess', 'process_set_comment', sDiffSrc, [], ...
    'tag', 'deviant|abs - standard|abs', ...
    'isindex', 1);
% Process: Low-pass: 40Hz
sDiffSrcZscore = bst_process('CallProcess', 'process_bandpass', sDiffSrc, [], ...
    'highpass', 0, ...
    'lowpass', 40, ...
    'attenuation', 'strict', ...   % 60dB
    'mirror', 0, ...
    'overwrite', 1);
% Process: Z-score transformation: [-100ms,-2ms]
sDiffSrcZscore = bst_process('CallProcess', 'process_baseline_norm', sDiffSrcZscore, [], ...
    'baseline', [-0.1, -0.002], ...
    'source_abs', 0, ...
    'method', 'zscore', ...   % Z-score transformation: x_std = (x - μ) / σ
    'overwrite', 0);
% Process: Snapshot: Sources (contact sheet)
bst_process('CallProcess', 'process_snapshot', sDiffSrcZscore, [], ...
    'target', 9, ...   % Sources (contact sheet)
    'modality', 1, ... % MEG (All)
    'orient', 1, ...   % left
    'contact_time', [0, 0.35], ...
    'contact_nimage', 15, ...
    'threshold', 30, ...
    'Comment', 'Difference deviant - standard (absolute)');

% ===== RELATIVE DIFFERENCE =====
% Process: Difference: A-B
sDiffSrcRel = bst_process('CallProcess', 'process_diff_ab', sFilesIntraSrc(1).FileName, sFilesIntraSrc(2).FileName, ...
    'source_abs', 0);
% Process: Set comment: deviant - standard
sDiffSrcRel = bst_process('CallProcess', 'process_set_comment', sDiffSrcRel, [], ...
    'tag', 'deviant - standard', ...
    'isindex', 1);
% Process: Low-pass: 40Hz
sDiffSrcRelZscore = bst_process('CallProcess', 'process_bandpass', sDiffSrcRel, [], ...
    'highpass', 0, ...
    'lowpass', 40, ...
    'attenuation', 'strict', ...   % 60dB
    'mirror', 0, ...
    'overwrite', 1);
% Process: Z-score transformation: [-100ms,-2ms]
sDiffSrcRelZscore = bst_process('CallProcess', 'process_baseline_norm', sDiffSrcRelZscore, [], ...
    'baseline', [-0.1, -0.002], ...
    'source_abs', 0, ...
    'method', 'zscore', ...   % Z-score transformation: x_std = (x - μ) / σ
    'overwrite', 0);
% Configure colormap: hot/absolute
bst_colormaps('SetColormapName', 'stat2', 'hot');
bst_colormaps('SetColormapAbsolute', 'stat2', 1);
% Process: Snapshot: Sources (contact sheet)
bst_process('CallProcess', 'process_snapshot', sDiffSrcRelZscore, [], ...
    'target', 9, ...   % Sources (contact sheet)
    'modality', 1, ... % MEG (All)
    'orient', 1, ...   % left
    'contact_time', [0, 0.35], ...
    'contact_nimage', 15, ...
    'threshold', 30, ...
    'Comment', 'Difference deviant - standard (relative)');
% Restore colormap: rwb/relative
bst_colormaps('SetColormapName', 'stat2', 'cmap_rbw');
bst_colormaps('SetColormapAbsolute', 'stat2', 0);

% ===== DIFFERENCE OF MEANS =====
% Process: Select uniform number of files [uniform]
[sEpochDeviantUni, sEpochStandardUni] = bst_process('CallProcess', 'process_select_uniform2', sEpochDeviant, sEpochStandard, ...
    'nfiles', 0, ...
    'method', 4);   % Uniformly distributed
% Process: Difference of means [mean]
sDiffMean = bst_process('CallProcess', 'process_diff_mean', sEpochDeviantUni, sEpochStandardUni, ...
    'avg_func', 1, ...   % Arithmetic average: mean(A) - mean(B)
    'weighted', 0);
% Process: Snapshot: Recordings time series
bst_process('CallProcess', 'process_snapshot', sDiffMean, [], ...
    'target', 5, ...   % Recordings time series
    'modality', 1, ... % MEG (All)
    'Comment', 'Difference of means');



%% ===== TUTORIAL #26: STATISTICS ====================================================
% ===================================================================================
disp([10 'DEMO> Tutorial #26: Statistics' 10]);
% ===== HISTOGRAMS =====
% Process: Extract values: [160ms] MLP57
sHistoDeviant = bst_process('CallProcess', 'process_extract_values', sEpochDeviant, [], ...
    'timewindow', [0.16, 0.16], ...
    'sensortypes', 'MLP57', ...
    'isabs', 0, ...
    'avgtime', 0, ...
    'avgrow', 0, ...
    'dim', 2, ...   % Concatenate time (dimension 2)
    'Comment', '');
% Process: Extract values: [160ms] MLP57
sHistoStandard = bst_process('CallProcess', 'process_extract_values', sEpochStandard, [], ...
    'timewindow', [0.16, 0.16], ...
    'sensortypes', 'MLP57', ...
    'isabs', 0, ...
    'avgtime', 0, ...
    'avgrow', 0, ...
    'dim', 2, ...   % Concatenate time (dimension 2)
    'Comment', '');
% Display histograms
hFigHisto = view_histogram({sHistoDeviant.FileName, sHistoStandard.FileName});
bst_report('Snapshot', hFigHisto, sHistoDeviant.FileName, 'Histograms for MLP57/160ms');
close(hFigHisto);

% ===== EXAMPLE #1: PARAMETRIC/DATA =====
% Process: t-test [equal] [-100ms,500ms] H0:(A-B = 0)
sTestParamData = bst_process('CallProcess', 'process_test_parametric2', sEpochDeviant, sEpochStandard, ...
    'timewindow', [-0.1, 0.5], ...
    'sensortypes', '', ...
    'isabs', 0, ...
    'avgtime', 0, ...
    'avgrow', 0, ...
    'Comment', '', ...
    'test_type', 'ttest_equal', ...   % Student's t-test (equal variance): t = (mean(A)-mean(B)) / (Sx * sqrt(1/nA + 1/nB)), Sx = sqrt(((nA-1)*var(A) + (nB-1)*var(B)) / (nA+nB-2)), df = nA + nB - 2
    'tail', 'two');   % Two-tailed
% Set display properties
StatThreshOptions = bst_get('StatThreshOptions');
StatThreshOptions.pThreshold = 0.05;
StatThreshOptions.Correction = 'fdr';
StatThreshOptions.Control = [1 2 3];
bst_set('StatThreshOptions', StatThreshOptions);
% Process: Snapshot: Recordings time series
bst_process('CallProcess', 'process_snapshot', sTestParamData, [], ...
    'target', 5, ...   % Recordings time series
    'modality', 1, ... % MEG (All)
    'time', 0.16, ...
    'Comment', 'Parametric t-test (p<0.05, FDR)');
% Process: Snapshot: Recordings topography (one time)
bst_process('CallProcess', 'process_snapshot', sTestParamData, [], ...
    'target', 6, ...   % Recordings topography (one time)
    'modality', 1, ... % MEG (All)
    'time', 0.16, ...
    'Comment', 'Parametric t-test (p<0.05, FDR)');

% ===== EXAMPLE #2: NON-PARAMETRIC/DATA =====
% Process: Perm t-test equal [-100ms,500ms MEG] H0:(A=B), H1:(A<>B)
sTestPermData = bst_process('CallProcess', 'process_test_permutation2', sEpochDeviant, sEpochStandard, ...
    'timewindow', [-0.1, 0.5], ...
    'sensortypes', 'MEG', ...
    'isabs', 0, ...
    'avgtime', 0, ...
    'avgrow', 0, ...
    'iszerobad', 1, ...
    'Comment', '', ...
    'test_type', 'ttest_equal', ...   % Student's t-test (equal variance): t = (mean(A)-mean(B)) / (Sx * sqrt(1/nA + 1/nB)), Sx = sqrt(((nA-1)*var(A) + (nB-1)*var(B)) / (nA+nB-2))
    'randomizations', 1000, ...
    'tail', 'two');   % Two-tailed
% Process: Snapshot: Recordings time series
bst_process('CallProcess', 'process_snapshot', sTestPermData, [], ...
    'target', 5, ...   % Recordings time series
    'modality', 1, ... % MEG (All)
    'time', 0.16, ...
    'Comment', 'Non-parametric t-test (p<0.05, FDR)');
% Process: Snapshot: Recordings topography (one time)
bst_process('CallProcess', 'process_snapshot', sTestPermData, [], ...
    'target', 6, ...   % Recordings topography (one time)
    'modality', 1, ... % MEG (All)
    'time', 0.16, ...
    'Comment', 'Non-parametric t-test (p<0.05, FDR)');

% ===== EXAMPLE #3: CLUSTER/DATA =====
% Process: FT t-test unequal cluster [-100ms,500ms MEG] H0:(A=B), H1:(A<>B)
sTestClustData = bst_process('CallProcess', 'process_ft_timelockstatistics', sEpochDeviant, sEpochStandard, ...
    'sensortypes', 'MEG', ...
    'timewindow', [-0.1, 0.5], ...
    'isabs', 0, ...
    'avgtime', 0, ...
    'avgchan', 0, ...
    'randomizations', 1000, ...
    'statistictype', 1, ...   % Independent t-test
    'tail', 'two', ...        % Two-tailed
    'correctiontype', 2, ...  % cluster
    'minnbchan', 0, ...
    'clusteralpha', 0.05);
% Process: Snapshot: Recordings time series
bst_process('CallProcess', 'process_snapshot', sTestClustData, [], ...
    'target', 5, ...   % Recordings time series
    'time', 0.16, ...
    'modality', 1, ... % MEG (All)
    'Comment', 'Cluster-based permutation test');
% Process: Snapshot: Recordings topography (one time)
bst_process('CallProcess', 'process_snapshot', sTestClustData, [], ...
    'target', 6, ...   % Recordings topography (one time)
    'modality', 1, ... % MEG (All)
    'time', 0.16, ...
    'Comment', 'Cluster-based permutation test');

% ===== EXAMPLE #4: PARAMETRIC/SOURCES =====
% Process: t-test [equal] [-100ms,500ms] H0:(A-B = 0)
sTestParamSrc = bst_process('CallProcess', 'process_test_parametric2', sEpochDeviantSrc, sEpochStandardSrc, ...
    'timewindow', [-0.1, 0.5], ...
    'scoutsel', {}, ...
    'scoutfunc', 1, ...   % Mean
    'isnorm', 0, ...
    'avgtime', 0, ...
    'Comment', '', ...
    'test_type', 'ttest_equal', ...   % Student's t-test (equal variance): t = (mean(A)-mean(B)) / (Sx * sqrt(1/nA + 1/nB)), Sx = sqrt(((nA-1)*var(A) + (nB-1)*var(B)) / (nA+nB-2)), df = nA + nB - 2
    'tail', 'two');   % Two-tailed
% Process: Difference of means [abs(mean)]
sDiffMeanSrc = bst_process('CallProcess', 'process_diff_mean', sEpochDeviantSrc, sEpochStandardSrc, ...
    'avg_func', 2, ...   % Absolute value of average: abs(mean(A)) - abs(mean(B))
    'weighted', 0);
% Process: Apply statistic threshold: p<0.05 (FDR:1,2,3)
sDiffMeanSrcThresh = bst_process('CallProcess', 'process_extract_pthresh2', sTestParamSrc, sDiffMeanSrc, ...
    'pthresh', 0.05, ...
    'correction', 3, ...   % False discovery rate (FDR)
    'control1', 1, ...
    'control2', 1, ...
    'control3', 1);
% Process: Snapshot: Sources (one time)
bst_process('CallProcess', 'process_snapshot', sTestParamSrc, [], ...
    'target', 8, ...   % Sources (one time)
    'orient', 1, ...   % left
    'time', 0.148, ...
    'threshold', 40, ...
    'Comment', 'Parametric t-test (p<0.05, FDR)');
% Process: Snapshot: Sources (one time)
bst_process('CallProcess', 'process_snapshot', sDiffMeanSrc, [], ...
    'target', 8, ...   % Sources (one time)
    'orient', 1, ...   % left
    'time', 0.148, ...
    'threshold', 40, ...
    'Comment', 'abs(average(deviant)) - abs(average(standard))');
% Process: Snapshot: Sources (one time)
bst_process('CallProcess', 'process_snapshot', sDiffMeanSrcThresh, [], ...
    'target', 8, ...   % Sources (one time)
    'orient', 1, ...   % left
    'time', 0.148, ...
    'threshold', 0, ...
    'Comment', 'Difference of means thresholded with t-test results');

% ===== EXAMPLE #5: PARAMETRIC/SCOUTS =====
% Process: t-test equal [-100ms,500ms] H0:(A=B), H1:(A<>B)
sTestParamScout = bst_process('CallProcess', 'process_test_parametric2', sEpochDeviantSrc, sEpochStandardSrc, ...
    'timewindow', [-0.1, 0.5], ...
    'scoutsel', {'Destrieux', {'G_front_inf-Opercular L', 'G_precentral L', 'G_temp_sup-G_T_transv L'}}, ...
    'scoutfunc', 1, ...   % Mean
    'isnorm', 0, ...
    'avgtime', 0, ...
    'Comment', '', ...
    'test_type', 'ttest_equal', ...   % Student's t-test (equal variance): t = (mean(A)-mean(B)) / (Sx * sqrt(1/nA + 1/nB)), Sx = sqrt(((nA-1)*var(A) + (nB-1)*var(B)) / (nA+nB-2)), df = nA + nB - 2
    'tail', 'two');   % Two-tailed
% Process: Apply statistic threshold: p<0.05 (FDR:1,2,3)
sTestParamScoutThresh = bst_process('CallProcess', 'process_extract_pthresh', sTestParamScout, [], ...
    'pthresh', 0.05, ...
    'correction', 3, ...   % False discovery rate (FDR)
    'control1', 1, ...
    'control2', 1, ...
    'control3', 1);
% Process: Compute head model
bst_process('CallProcess', 'process_headmodel', sTestParamScoutThresh, [], ...
    'sourcespace', 1, ...   % Cortex surface
    'meg', 3);              % Overlapping spheres
% Process: Simulate recordings from scouts
sSimulData = bst_process('CallProcess', 'process_simulate_recordings', sTestParamScoutThresh, [], ...
    'scouts', {'Destrieux', {'G_front_inf-Opercular L', 'G_precentral L', 'G_temp_sup-G_T_transv L'}}, ...
    'savesources', 1);

% Get corresponding source file
[sStudy, iStudy, iRes] = bst_get('ResultsForDataFile', sSimulData.FileName);
sSimulSrc = sStudy.Result(iRes).FileName;
% Reset visualization filters
panel_filter('SetFilters', 0, [], 0, [], 0, [], 0, 0);
% Process: Snapshot: Recordings time series
bst_process('CallProcess', 'process_snapshot', sTestParamScout, [], ...
    'target', 5, ...   % Recordings time series
    'time', 0.148, ...
    'Comment', 'Parametric t-test (p<0.05, FDR)');
% Process: Snapshot: Sources (one time)
bst_process('CallProcess', 'process_snapshot', sSimulSrc, [], ...
    'target', 8, ...   % Sources (one time)
    'orient', 1, ...   % left
    'time', 0.148, ...
    'threshold', 0, ...
    'Comment', 'Simulated sources');
% Process: Snapshot: Recordings time series
bst_process('CallProcess', 'process_snapshot', sSimulData, [], ...
    'target', 5, ...   % Recordings time series
    'modality', 1, ... % MEG (All)
    'time', 0.148, ...
    'Comment', 'Simulated MEG recordings');


% ===== EXAMPLE #6: NON-PARAMETRIC/TIMEFREQ =====
TfOptions.Output = 'all';
% Process: Time-frequency (Morlet wavelets) / DEVIANT
sEpochDeviantTf = bst_process('CallProcess', 'process_timefreq', sEpochDeviant, [], ...
    'sensortypes', 'MLP57', ...
    'edit', TfOptions, ...
    'normalize', 'none');   % None: Save non-standardized time-frequency maps
% Process: Time-frequency (Morlet wavelets) / STANDARD
sEpochStandardTf = bst_process('CallProcess', 'process_timefreq', sEpochStandard, [], ...
    'sensortypes', 'MLP57', ...
    'edit', TfOptions, ...
    'normalize', 'none');   % None: Save non-standardized time-frequency maps
% Process: Perm t-test equal [-100ms,500ms 1-150Hz] H0:(A=B), H1:(A<>B)
sTestTf = bst_process('CallProcess', 'process_test_permutation2', sEpochDeviantTf, sEpochStandardTf, ...
    'timewindow', [-0.1, 0.5], ...
    'freqrange', [1, 150], ...
    'rows', '', ...
    'isabs', 0, ...
    'avgtime', 0, ...
    'avgrow', 0, ...
    'avgfreq', 0, ...
    'matchrows', 0, ...
    'iszerobad', 1, ...
    'Comment', '', ...
    'test_type', 'ttest_equal', ...   % Student's t-test (equal variance): t = (mean(A)-mean(B)) / (Sx * sqrt(1/nA + 1/nB)), Sx = sqrt(((nA-1)*var(A) + (nB-1)*var(B)) / (nA+nB-2))
    'randomizations', 1000, ...
    'tail', 'two');   % Two-tailed
% Set stat threshold
StatThreshOptions.pThreshold = 0.05;
StatThreshOptions.Correction = 'none';
bst_set('StatThreshOptions', StatThreshOptions);
% Process: Snapshot: Time-frequency maps
bst_process('CallProcess', 'process_snapshot', sTestTf, [], ...
    'target', 14, ...   % Time-frequency maps
    'Comment', 'Non-parametric t-test (p<0.05, Uncorrected)');
% Process: Delete intermediate results
bst_process('CallProcess', 'process_delete', [sEpochDeviantTf, sEpochStandardTf], [], ...
    'target', 1);   % Delete selected files



%% ===== SAVE REPORT =====
% Save and display report
ReportFile = bst_report('Save', []);
if ~isempty(reports_dir) && ~isempty(ReportFile)
    bst_report('Export', ReportFile, reports_dir);
else
    bst_report('Open', ReportFile);
end

disp([10 'DEMO> Done.' 10]);
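Once the whole pipeline is wrapped in a function like the one above, it can be launched without opening the Brainstorm window, for example from the Matlab command line or a batch job. Below is a minimal sketch, not taken from the tutorial: the function name tutorial_introduction, its two input arguments and both paths are placeholders that you must adapt to the way your own script is defined; only brainstorm('status'), brainstorm nogui and brainstorm stop are standard Brainstorm commands.

% Minimal sketch (assumptions: function name and paths are placeholders)
tutorial_dir = 'C:\Work\sample_introduction';   % hypothetical: folder with the unzipped tutorial dataset
reports_dir  = 'C:\Work\reports';               % hypothetical: output folder for the HTML reports
% Start Brainstorm in headless mode if it is not already running
if ~brainstorm('status')
    brainstorm nogui
end
% Run the pipeline defined above (assumed function name and signature)
tutorial_introduction(tutorial_dir, reports_dir);
% Quit Brainstorm when done
brainstorm stop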


For an example of a script that loops over multiple subjects and acquisition runs, see the tutorial MEG visual: single subject: brainstorm3/toolbox/script/tutorial_visual_single.m - Report: report_TutorialVisual_sub001.html
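The core of that script is two nested loops (subjects, then runs) around the same bst_process('CallProcess', ...) calls used throughout this tutorial, with one report started, saved and exported per subject. The short sketch below only illustrates this looping pattern with a placeholder subject list; the complete script follows.

% Condensed sketch of the looping pattern (placeholder subject list, not the full pipeline)
SubjectNames = {'sub-01', 'sub-02'};   % hypothetical subset of subjects
for iSubj = 1:length(SubjectNames)
    % One report per subject
    bst_report('Start');
    for iRun = 1:6
        % Select the recordings of this subject/run, then chain the other processes on them
        sFiles = bst_process('CallProcess', 'process_select_files_data', [], [], ...
            'subjectname', SubjectNames{iSubj});
        % ... other bst_process('CallProcess', ...) calls for this run ...
    end
    % Save the report and export it as HTML
    ReportFile = bst_report('Save', []);
    bst_report('Export', ReportFile, ['report_' SubjectNames{iSubj} '.html']);
end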

1 function tutorial_visual_single(bids_dir, reports_dir) 2 % TUTORIAL_VISUAL_SINGLE: Runs the Brainstorm/SPM group analysis pipeline (single subject, BIDS version). 3 % 4 % ONLINE TUTORIALS: https://neuroimage.usc.edu/brainstorm/Tutorials/VisualSingle 5 % 6 % INPUTS: 7 % - bids_dir: Path to folder ds000117 (https://openneuro.org/datasets/ds000117) 8 % |- derivatives/freesurfer/sub-XX : Segmentation folders generated with FreeSurfer 9 % |- derivatives/meg_derivatives/sub-XX/ses-meg/meg/*.fif : MEG+EEG recordings (processed with MaxFilter's SSS) 10 % |- derivatives/meg_derivatives/sub-emptyroom/ses-meg/meg/*.fif : Empty room measurements 11 % - reports_dir: If defined, exports all the reports as HTML to this folder 12 13 % @============================================================================= 14 % This function is part of the Brainstorm software: 15 % https://neuroimage.usc.edu/brainstorm 16 % 17 % Copyright (c) University of Southern California & McGill University 18 % This software is distributed under the terms of the GNU General Public License 19 % as published by the Free Software Foundation. Further details on the GPLv3 20 % license can be found at http://www.gnu.org/copyleft/gpl.html. 21 % 22 % FOR RESEARCH PURPOSES ONLY. THE SOFTWARE IS PROVIDED "AS IS," AND THE 23 % UNIVERSITY OF SOUTHERN CALIFORNIA AND ITS COLLABORATORS DO NOT MAKE ANY 24 % WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF 25 % MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, NOR DO THEY ASSUME ANY 26 % LIABILITY OR RESPONSIBILITY FOR THE USE OF THIS SOFTWARE. 27 % 28 % For more information type "brainstorm license" at command prompt. 29 % =============================================================================@ 30 % 31 % Author: Francois Tadel, Elizabeth Bock, 2016-2018 32 33 34 %% ===== SCRIPT VARIABLES ===== 35 % Full list of subjects to process 36 SubjectNames = {'sub-01', 'sub-02', 'sub-03', 'sub-04', 'sub-05', 'sub-06', 'sub-07', 'sub-08', ... 37 'sub-09', 'sub-10', 'sub-11', 'sub-12', 'sub-13', 'sub-14', 'sub-15', 'sub-16'}; 38 % Empty-room dates for each subject (so that we can match automatically recordings with empty-room) 39 EmptyRoomSubj = 'sub-emptyroom'; 40 AcquisitionDates = {'09-Apr-2009', '06-May-2009', '11-May-2009', '18-May-2009', '15-May-2009', '15-May-2009', '15-May-2009', '15-May-2009', ... 
41 '15-May-2009', '15-May-2009', '01-Jun-2009', '01-Jun-2009', '01-Jun-2009', '26-Nov-2009', '08-Dec-2009', '08-Dec-2009'}; 42 % Bad channels {iSubj} = {Run01, Run02, Run03, Run04, Run05, Run06} 43 BadChannels{1} = {'EEG016', 'EEG070', 'EEG050',{'EEG008','EEG050'}, [], []}; 44 BadChannels{2} = {{'EEG027', 'EEG030', 'EEG038'}, 'EEG010', 'EEG010', 'EEG010', 'EEG010', 'EEG010'}; 45 BadChannels{3} = {{'EEG008','EEG017'}, {'EEG008','EEG017'}, {'EEG008','EEG017'}, {'EEG008','EEG017'}, {'EEG008','EEG017','EEG001'}, {'EEG008','EEG017','EEG020'}}; 46 BadChannels{4} = {{'EEG038'}, {'EEG038','EEG001','EEG016'}, {'EEG038','EEG001','EEG016'}, {'EEG038','EEG001'}, {'EEG038','EEG001','EEG016'}, {'EEG038','EEG001','EEG016'}}; 47 BadChannels{5} = {'EEG001', 'EEG001', [], [], [], []}; 48 BadChannels{6} = {'EEG068', [], 'EEG004', [], [], []}; 49 BadChannels{7} = {[], [], {'EEG004','EEG008'}, {'EEG004','EEG008'},{'EEG004','EEG008','EEG043','EEG045','EEG047'}, {'EEG004','EEG008'}}; 50 BadChannels{8} = {[], [], [], [], [], []}; 51 BadChannels{9} = {[], 'EEG004', 'EEG004', [], 'EEG004', 'EEG004'}; 52 BadChannels{10} = {[], [], [], [], [], []}; 53 BadChannels{11} = {{'EEG010','EEG050'}, 'EEG050', 'EEG050', 'EEG050', 'EEG050', 'EEG050'}; 54 BadChannels{12} = {{'EEG024','EEG057'}, {'EEG024','EEG057'}, {'EEG024','EEG057'}, {'EEG024','EEG057','EEG070'}, {'EEG024','EEG057'}, {'EEG024','EEG057','EEG070'}}; 55 BadChannels{13} = {'EEG009', 'EEG009', {'EEG009','EEG057','EEG69'}, 'EEG009', {'EEG009','EEG044'}, {'EEG009','EEG044'}}; 56 BadChannels{14} = {'EEG029', 'EEG029', 'EEG029', {'EEG004','EEG008','EEG016','EEG029'}, {'EEG004','EEG008','EEG016','EEG029'}, {'EEG004','EEG008','EEG016','EEG029'}}; 57 BadChannels{15} = {'EEG038', 'EEG038', 'EEG038', 'EEG038', {'EEG054','EEG038'}, 'EEG038'}; 58 BadChannels{16} = {'EEG008', 'EEG008', 'EEG008', 'EEG008', 'EEG008', 'EEG008'}; 59 % SSP components to remove {iSubj} = {sRun01, sRun02, sRun03, sRun03, sRun04, sRun05, sRun06}, sRun0X={ECG_GRAD,ECG_MAG} 60 SspSelect{1} = {{1,1}, {1,1}, {1,1}, {1,1}, {1,1}, {1,1}}; 61 SspSelect{2} = {{1,1}, {1,1}, {1,1}, {1,1}, {3,1}, {1,1}}; 62 SspSelect{3} = {{[],1}, {1,1}, {1,1}, {1,1}, {1,1}, {1,1}}; 63 SspSelect{4} = {{[],1}, {[],1}, {[],1}, {[],1}, {[],1}, {[],1}}; 64 SspSelect{5} = {{2,1}, {1,1}, {1,1}, {[],1}, {1,1}, {1,1}}; 65 SspSelect{6} = {{2,1}, {2,1}, {1,1}, {2,1}, {1,1}, {2,1}}; 66 SspSelect{7} = {{1,1}, {1,1}, {1,1}, {1,1}, {1,1}, {1,1}}; 67 SspSelect{8} = {{1,1}, {1,1}, {[],1}, {2,1}, {1,1}, {2,1}}; 68 SspSelect{9} = {{1,1}, {1,1}, {[],1}, {[],1}, {1,1}, {1,1}}; 69 SspSelect{10} = {{1,1}, {1,1}, {1,1}, {1,1}, {1,1}, {1,1}}; 70 SspSelect{11} = {{[],1}, {[],1}, {[],1}, {[],1}, {[],[]}, {[],[]}}; 71 SspSelect{12} = {{[1,2],[1,2]}, {1,1}, {1,1}, {1,1}, {1,1}, {1,1}}; 72 SspSelect{13} = {{[],[]}, {[],[]}, {[],[]}, {[],[]}, {[],[]}, {[],[]}}; 73 SspSelect{14} = {{1,1}, {1,1}, {1,1}, {1,1}, {1,1}, {1,1}}; 74 SspSelect{15} = {{1,1}, {1,1}, {1,1}, {1,1}, {1,1}, {1,1}}; 75 SspSelect{16} = {{1,1}, {1,1}, {1,1}, {2,1}, {1,1}, {1,1}}; 76 77 78 %% ===== CREATE PROTOCOL ===== 79 % Start brainstorm without the GUI 80 if ~brainstorm('status') 81 brainstorm nogui 82 end 83 % Output folder for reports 84 if (nargin < 2) || isempty(reports_dir) || ~isdir(reports_dir) 85 reports_dir = []; 86 end 87 % You have to specify the folder in which the tutorial dataset is unzipped 88 if (nargin < 1) || isempty(bids_dir) || ~file_exist(bids_dir) || ~file_exist(bst_fullfile(bids_dir, 'derivatives')) || ~file_exist(bst_fullfile(bids_dir, 'dataset_description.json')) 
89 error('The first argument must be the full path to the tutorial folder.'); 90 end 91 % The protocol name has to be a valid folder name (no spaces, no weird characters...) 92 ProtocolName = 'TutorialVisual'; 93 % Delete existing protocol 94 gui_brainstorm('DeleteProtocol', ProtocolName); 95 % Create new protocol 96 gui_brainstorm('CreateProtocol', ProtocolName, 0, 0); 97 % Set visualization filters: 40Hz low-pass, no high-pass 98 panel_filter('SetFilters', 1, 40, 0, [], 0, [], 0, 0); 99 % Set colormap: local color scale 100 bst_colormaps('SetMaxMode', 'meg', 'local'); 101 bst_colormaps('SetMaxMode', 'eeg', 'local'); 102 103 104 %% ===== PRE-PROCESS AND IMPORT ===== 105 for iSubj = 1:16 106 % Start a new report (one report per subject) 107 bst_report('Start'); 108 disp(sprintf('\n===== IMPORT: SUBJECT #%d =====\n', iSubj)); 109 110 % If subject already exists: delete it 111 [sSubject, iSubject] = bst_get('Subject', SubjectNames{iSubj}); 112 if ~isempty(sSubject) 113 db_delete_subjects(iSubject); 114 end 115 116 % ===== FILES TO IMPORT ===== 117 % Build the path of the files to import 118 AnatDir = fullfile(bids_dir, 'derivatives', 'freesurfer', SubjectNames{iSubj}, 'ses-mri', 'anat'); 119 DataDir = fullfile(bids_dir, 'derivatives', 'meg_derivatives', SubjectNames{iSubj}, 'ses-meg', 'meg'); 120 % Check if the folder contains the required files 121 if ~file_exist(AnatDir) 122 error(['The folder "' AnatDir '" does not exist.']); 123 end 124 if ~file_exist(DataDir) 125 error(['The folder "' DataDir '" does not exist.']); 126 end 127 128 % ===== ANATOMY ===== 129 % Process: Import anatomy folder 130 bst_process('CallProcess', 'process_import_anatomy', [], [], ... 131 'subjectname', SubjectNames{iSubj}, ... 132 'mrifile', {AnatDir, 'FreeSurfer'}, ... 133 'nvertices', 15000); 134 135 % ===== PROCESS EACH RUN ===== 136 for iRun = 1:6 137 % Files to import 138 FifFile = bst_fullfile(DataDir, sprintf('%s_ses-meg_task-facerecognition_run-%02d_proc-sss_meg.fif', SubjectNames{iSubj}, iRun)); 139 140 % ===== LINK CONTINUOUS FILE ===== 141 % Process: Create link to raw file 142 sFileRaw = bst_process('CallProcess', 'process_import_data_raw', [], [], ... 143 'subjectname', SubjectNames{iSubj}, ... 144 'datafile', {FifFile, 'FIF'}, ... 145 'channelreplace', 1, ... 146 'channelalign', 0); 147 % Set acquisition date 148 panel_record('SetAcquisitionDate', sFileRaw.iStudy, AcquisitionDates{iSubj}); 149 150 % ===== PREPARE CHANNEL FILE ===== 151 % Process: Set channels type 152 bst_process('CallProcess', 'process_channel_settype', sFileRaw, [], ... 153 'sensortypes', 'EEG061, EEG064', ... 154 'newtype', 'NOSIG'); 155 bst_process('CallProcess', 'process_channel_settype', sFileRaw, [], ... 156 'sensortypes', 'EEG062', ... 157 'newtype', 'EOG'); 158 bst_process('CallProcess', 'process_channel_settype', sFileRaw, [], ... 159 'sensortypes', 'EEG063', ... 160 'newtype', 'ECG'); 161 162 % Process: Remove head points 163 sFileRaw = bst_process('CallProcess', 'process_headpoints_remove', sFileRaw, [], ... 164 'zlimit', 0); 165 % Process: Refine registration 166 sFileRaw = bst_process('CallProcess', 'process_headpoints_refine', sFileRaw, []); 167 % Process: Project electrodes on scalp 168 sFileRaw = bst_process('CallProcess', 'process_channel_project', sFileRaw, []); 169 170 % Process: Snapshot: Sensors/MRI registration 171 bst_process('CallProcess', 'process_snapshot', sFileRaw, [], ... 172 'target', 1, ... % Sensors/MRI registration 173 'modality', 1, ... % MEG (All) 174 'orient', 1, ... 
% left 175 'Comment', sprintf('MEG/MRI Registration: Subject #%d, Run #%d', iSubj, iRun)); 176 bst_process('CallProcess', 'process_snapshot', sFileRaw, [], ... 177 'target', 1, ... % Sensors/MRI registration 178 'modality', 4, ... % EEG 179 'orient', 1, ... % left 180 'Comment', sprintf('EEG/MRI Registration: Subject #%d, Run #%d', iSubj, iRun)); 181 182 % ===== IMPORT TRIGGERS ===== 183 % Process: Read from channel 184 bst_process('CallProcess', 'process_evt_read', sFileRaw, [], ... 185 'stimchan', 'STI101', ... 186 'trackmode', 2, ... % Bit: detect the changes for each bit independently 187 'zero', 0); 188 % Process: Group by name 189 bst_process('CallProcess', 'process_evt_groupname', sFileRaw, [], ... 190 'combine', 'Unfamiliar=3,4', ... 191 'dt', 0, ... 192 'delete', 1); 193 % Process: Rename event 194 bst_process('CallProcess', 'process_evt_rename', sFileRaw, [], ... 195 'src', '3', ... 196 'dest', 'Famous'); 197 % Process: Rename event 198 bst_process('CallProcess', 'process_evt_rename', sFileRaw, [], ... 199 'src', '5', ... 200 'dest', 'Scrambled'); 201 % Process: Add time offset 202 bst_process('CallProcess', 'process_evt_timeoffset', sFileRaw, [], ... 203 'info', [], ... 204 'eventname', 'Famous, Unfamiliar, Scrambled', ... 205 'offset', 0.0345); 206 % Process: Delete events 207 bst_process('CallProcess', 'process_evt_delete', sFileRaw, [], ... 208 'eventname', '1,2,6,7,8,9,10,11,12,13,14,15,16'); 209 % Process: Detect cHPI activity (Elekta):STI201 210 bst_process('CallProcess', 'process_evt_detect_chpi', sFileRaw, [], ... 211 'eventname', 'chpi_bad', ... 212 'channelname', 'STI201', ... 213 'method', 'off'); % Mark as bad when the HPI coils are OFF 214 215 % ===== FREQUENCY FILTERS ===== 216 % Process: Notch filter: 50Hz 100Hz 150Hz 200Hz 217 sFileClean = bst_process('CallProcess', 'process_notch', sFileRaw, [], ... 218 'freqlist', [50, 100, 150, 200], ... 219 'sensortypes', 'MEG, EEG', ... 220 'read_all', 0); 221 % Process: Power spectrum density (Welch) 222 sFilesPsd = bst_process('CallProcess', 'process_psd', [sFileRaw, sFileClean], [], ... 223 'timewindow', [], ... 224 'win_length', 4, ... 225 'win_overlap', 50, ... 226 'sensortypes', 'MEG, EEG', ... 227 'edit', struct(... 228 'Comment', 'Power', ... 229 'TimeBands', [], ... 230 'Freqs', [], ... 231 'ClusterFuncTime', 'none', ... 232 'Measure', 'power', ... 233 'Output', 'all', ... 234 'SaveKernel', 0)); 235 % Process: Snapshot: Frequency spectrum 236 bst_process('CallProcess', 'process_snapshot', sFilesPsd, [], ... 237 'target', 10, ... % Frequency spectrum 238 'Comment', sprintf('Power spctrum: Subject #%d, Run #%d', iSubj, iRun)); 239 240 % ===== BAD CHANNELS ===== 241 if ~isempty(BadChannels{iSubj}{iRun}) 242 % Process: Set bad channels 243 bst_process('CallProcess', 'process_channel_setbad', sFileClean, [], ... 244 'sensortypes', BadChannels{iSubj}{iRun}); 245 end 246 247 % ===== EEG REFERENCE ===== 248 % Process: Re-reference EEG 249 bst_process('CallProcess', 'process_eegref', sFileClean, [], ... 250 'eegref', 'AVERAGE', ... 251 'sensortypes', 'EEG'); 252 253 % ===== DETECT ARTIFACTS ====== 254 % Process: Detect heartbeats 255 bst_process('CallProcess', 'process_evt_detect_ecg', sFileClean, [], ... 256 'channelname', 'EEG063', ... 257 'timewindow', [], ... 
258 'eventname', 'cardiac'); 259 % Different amplitude thresholds for different subjects 260 if strcmpi(SubjectNames{iSubj}, 'sub-05') 261 thresholdMAX = 50; 262 else 263 thresholdMAX = 100; 264 end 265 % Process: Detect: blink_BAD - Detects all events where the amplitude exceeds 100uV 266 bst_process('CallProcess', 'process_evt_detect_threshold', sFileClean, [], ... 267 'eventname', 'blink_BAD', ... 268 'channelname', 'EEG062', ... 269 'timewindow', [], ... 270 'thresholdMAX', thresholdMAX, ... 271 'units', 3, ... % uV (10^-6) 272 'bandpass', [0.3, 20], ... 273 'isAbsolute', 1, ... 274 'isDCremove', 0); 275 276 % ===== SSP COMPUTATION ===== 277 % Process: SSP ECG: cardiac 278 bst_process('CallProcess', 'process_ssp_ecg', sFileClean, [], ... 279 'eventname', 'cardiac', ... 280 'sensortypes', 'MEG GRAD', ... 281 'usessp', 1, ... 282 'select', SspSelect{iSubj}{iRun}{1}); 283 bst_process('CallProcess', 'process_ssp_ecg', sFileClean, [], ... 284 'eventname', 'cardiac', ... 285 'sensortypes', 'MEG MAG', ... 286 'usessp', 1, ... 287 'select', SspSelect{iSubj}{iRun}{2}); 288 % Process: Snapshot: SSP projectors 289 bst_process('CallProcess', 'process_snapshot', sFileClean, [], ... 290 'target', 2, ... 291 'Comment', sprintf('Subject #%d, Run #%d', iSubj, iRun)); % SSP projectors 292 293 % ===== IMPORT BAD EVENTS ===== 294 % Get bad segments: this is typically done manually, not from a script 295 BadSegments = GetBadSegments(iSubj, iRun); 296 % Process: Import from file 297 bst_process('CallProcess', 'process_evt_import', sFileClean, [], ... 298 'evtfile', {BadSegments, 'ARRAY-TIMES'}, ... 299 'evtname', 'BAD'); 300 301 % ===== IMPORT TRIALS ===== 302 % Process: Import MEG/EEG: Events 303 sFilesEpochs = bst_process('CallProcess', 'process_import_data_event', sFileClean, [], ... 304 'subjectname', SubjectNames{iSubj}, ... 305 'condition', '', ... 306 'eventname', 'Famous, Scrambled, Unfamiliar', ... 307 'timewindow', [], ... 308 'epochtime', [-0.5, 1.2], ... 309 'createcond', 0, ... 310 'ignoreshort', 1, ... 311 'usectfcomp', 1, ... 312 'usessp', 1, ... 313 'freq', [], ... 314 'baseline', [-0.5, -0.0009]); 315 316 % ===== AVERAGE: RUN ===== 317 % Process: Average: By trial group (folder average) 318 sFilesAvg = bst_process('CallProcess', 'process_average', sFilesEpochs, [], ... 319 'avgtype', 5, ... % By trial group (folder average) 320 'avg_func', 1, ... % Arithmetic average: mean(x) 321 'weighted', 0, ... 322 'keepevents', 0); 323 % Process: Snapshot: Recordings time series 324 bst_process('CallProcess', 'process_snapshot', sFilesAvg, [], ... 325 'target', 5, ... % Recordings time series 326 'modality', 4, ... % EEG 327 'time', 0.11, ... 328 'Comment', sprintf('Subject #%d, Run #%d', iSubj, iRun)); 329 % Process: Snapshot: Recordings topography 330 bst_process('CallProcess', 'process_snapshot', sFilesAvg, [], ... 331 'target', 6, ... % Recordings topography (one time) 332 'modality', 4, ... % EEG 333 'time', 0.11, ... 334 'Comment', sprintf('Subject #%d, Run #%d', iSubj, iRun)); 335 336 % ===== COMPUTE NOISECOV: EEG ===== 337 % Process: Compute covariance (noise or data) 338 bst_process('CallProcess', 'process_noisecov', sFilesEpochs, [], ... 339 'baseline', [-0.5, -0.0009], ... 340 'sensortypes', 'EEG', ... 341 'target', 1, ... % Noise covariance (covariance over baseline time window) 342 'dcoffset', 1, ... % Block by block, to avoid effects of slow shifts in data 343 'identity', 0, ... 344 'copycond', 0, ... 345 'copysubj', 0, ... 
346 'replacefile', 1); % Replace 347 end 348 349 % Save report 350 ReportFile = bst_report('Save', []); 351 if ~isempty(reports_dir) && ~isempty(ReportFile) 352 bst_report('Export', ReportFile, bst_fullfile(reports_dir, ['report_' ProtocolName '_' SubjectNames{iSubj} '.html'])); 353 end 354 end 355 356 357 %% ===== EMPTY ROOM RECORDINGS ===== 358 disp(sprintf('\n===== IMPORT: EMPTY-ROOM =====\n')); 359 % Loop on all the noise sessions 360 NoiseFiles = {}; 361 for ses = {'20090409', '20090506', '20090511', '20090515', '20090518', '20090601', '20091126', '20091208'} 362 NoiseFiles{end+1} = fullfile(bids_dir, 'derivatives', 'meg_derivatives', EmptyRoomSubj, ['ses-' ses{1}], 'meg', ['sub-emptyroom_ses-' ses{1} '_task-noise_proc-sss_meg.fif']); 363 end 364 % Process: Create link to raw file 365 sFilesNoise = bst_process('CallProcess', 'process_import_data_raw', [], [], ... 366 'subjectname', EmptyRoomSubj, ... 367 'datafile', {NoiseFiles, 'FIF'}, ... 368 'channelreplace', 1, ... 369 'channelalign', 0); 370 % Process: Notch filter: 50Hz 100Hz 150Hz 200Hz 371 sFileNoiseClean = bst_process('CallProcess', 'process_notch', sFilesNoise, [], ... 372 'freqlist', [50, 100, 150, 200], ... 373 'sensortypes', 'MEG, EEG', ... 374 'read_all', 0); 375 % Process: Compute noise covariance 376 bst_process('CallProcess', 'process_noisecov', sFileNoiseClean, [], ... 377 'baseline', [], ... 378 'sensortypes', 'MEG', ... 379 'target', 1, ... % Noise covariance (covariance over baseline time window) 380 'dcoffset', 1, ... % Block by block, to avoid effects of slow shifts in data 381 'identity', 0, ... 382 'copycond', 1, ... 383 'copysubj', 1, ... 384 'copymatch', 1, ... 385 'replacefile', 2); % Merge 386 387 388 %% ===== SOURCE ESTIMATION ===== 389 % Start a new report (one report for the source estimation of all the subjects) 390 bst_report('Start'); 391 % Loop on the subjects: This loop is separated from the previous one, because we should 392 % compute the BEM surfaces after importing all the runs, so that the registration is done 393 % using the high resolution head surface, instead of the smooth scalp BEM layer. 394 for iSubj = 1:length(SubjectNames) 395 disp(sprintf('\n===== SOURCES: SUBJECT #%d =====\n', iSubj)); 396 397 % ===== BEM SURFACES ===== 398 % Process: Generate BEM surfaces 399 bst_process('CallProcess', 'process_generate_bem', [], [], ... 400 'subjectname', SubjectNames{iSubj}, ... 401 'nscalp', 1082, ... 402 'nouter', 642, ... 403 'ninner', 642, ... 404 'thickness', 4, ... 405 'method', 'brainstorm'); 406 407 % ===== SELECT ALL AVERAGES ===== 408 % Process: Select data files in: */* 409 sFilesAvg = bst_process('CallProcess', 'process_select_files_data', [], [], ... 410 'subjectname', SubjectNames{iSubj}); 411 % Process: Select file comments with tag: Avg 412 sFilesAvg = bst_process('CallProcess', 'process_select_tag', sFilesAvg, [], ... 413 'tag', 'Avg'); % Select only the files with the tag 414 415 % ===== COMPUTE HEAD MODELS ===== 416 % Process: Compute head model (only for the first run of the subject) 417 bst_process('CallProcess', 'process_headmodel', sFilesAvg(1), [], ... 418 'sourcespace', 1, ... % Cortex surface 419 'meg', 3, ... % Overlapping spheres 420 'eeg', 3, ... % OpenMEEG BEM 421 'ecog', 1, ... % 422 'seeg', 1, ... % 423 'openmeeg', struct(... 424 'BemSelect', [1, 1, 1], ... 425 'BemCond', [1, 0.0125, 1], ... 426 'BemNames', {{'Scalp', 'Skull', 'Brain'}}, ... 427 'BemFiles', {{}}, ... 428 'isAdjoint', 0, ... 429 'isAdaptative', 1, ... 430 'isSplit', 0, ... 
431 'SplitLength', 4000)); 432 % Get all the runs for this subject (ie the list of the study indices) 433 iStudyOther = setdiff(unique([sFilesAvg.iStudy]), sFilesAvg(1).iStudy); 434 % Copy the forward model file to the other runs 435 sHeadmodel = bst_get('HeadModelForStudy', sFilesAvg(1).iStudy); 436 for iStudy = iStudyOther 437 db_add(iStudy, sHeadmodel.FileName); 438 end 439 440 % ===== COMPUTE SOURCES: MEG ===== 441 % Process: Compute sources [2018] 442 sAvgSrcMeg = bst_process('CallProcess', 'process_inverse_2018', sFilesAvg, [], ... 443 'output', 1, ... % Kernel only: shared 444 'inverse', struct(... 445 'Comment', 'MN: MEG ALL', ... 446 'InverseMethod', 'minnorm', ... 447 'InverseMeasure', 'amplitude', ... 448 'SourceOrient', {{'fixed'}}, ... 449 'Loose', 0.2, ... 450 'UseDepth', 1, ... 451 'WeightExp', 0.5, ... 452 'WeightLimit', 10, ... 453 'NoiseMethod', 'reg', ... 454 'NoiseReg', 0.1, ... 455 'SnrMethod', 'fixed', ... 456 'SnrRms', 1e-06, ... 457 'SnrFixed', 3, ... 458 'ComputeKernel', 1, ... 459 'DataTypes', {{'MEG GRAD', 'MEG MAG'}})); 460 % Process: Snapshot: Sources (one time) - Loop only to get a correct comment for the report 461 for i = 1:length(sAvgSrcMeg) 462 bst_process('CallProcess', 'process_snapshot', sAvgSrcMeg(i), [], ... 463 'target', 8, ... % Sources (one time) 464 'orient', 4, ... % bottom 465 'time', 0.11, ... 466 'threshold', 20, ... 467 'Comment', ['MEG sources: ' sFilesAvg(i).FileName]); 468 end 469 470 % ===== COMPUTE SOURCES: EEG ===== 471 % Process: Compute sources [2018] 472 sAvgSrcEeg = bst_process('CallProcess', 'process_inverse_2018', sFilesAvg, [], ... 473 'output', 1, ... % Kernel only: shared 474 'inverse', struct(... 475 'Comment', 'MN: EEG', ... 476 'InverseMethod', 'minnorm', ... 477 'InverseMeasure', 'amplitude', ... 478 'SourceOrient', {{'fixed'}}, ... 479 'Loose', 0.2, ... 480 'UseDepth', 1, ... 481 'WeightExp', 0.5, ... 482 'WeightLimit', 10, ... 483 'NoiseMethod', 'reg', ... 484 'NoiseReg', 0.1, ... 485 'SnrMethod', 'fixed', ... 486 'SnrRms', 1e-06, ... 487 'SnrFixed', 3, ... 488 'ComputeKernel', 1, ... 489 'DataTypes', {{'EEG'}})); 490 % Process: Snapshot: Sources (one time) - Loop only to get a correct comment for the report 491 for i = 1:length(sAvgSrcEeg) 492 bst_process('CallProcess', 'process_snapshot', sAvgSrcEeg(i), [], ... 493 'target', 8, ... % Sources (one time) 494 'orient', 4, ... % bottom 495 'time', 0.11, ... 496 'threshold', 10, ... 497 'Comment', ['EEG sources: ' sFilesAvg(i).FileName]); 498 end 499 end 500 % Save report 501 ReportFile = bst_report('Save', []); 502 if ~isempty(reports_dir) && ~isempty(ReportFile) 503 bst_report('Export', ReportFile, bst_fullfile(reports_dir, ['report_' ProtocolName '_sources.html'])); 504 end 505 506 507 %% ===== TIME-FREQUENCY ===== 508 % Start a new report (one report for the time-frequency of all the subjects) 509 bst_report('Start'); 510 % List of conditions to process separately 511 AllConditions = {'Famous', 'Scrambled', 'Unfamiliar'}; 512 % Channels to display in the screen capture, by order of preference (if the first channel is bad, use the following) 513 SelChannel = {'EEG070','EEG060','EEG065','EEG050','EEG003'}; 514 % Compute one separate time-frequency average for each subject/run/condition 515 for iSubj = 1:length(SubjectNames) 516 disp(sprintf('\n===== TIME-FREQUENCY: SUBJECT #%d =====\n', iSubj)); 517 for iRun = 1:6 518 % Process: Select data files in: Subject/Run 519 sTrialsAll = bst_process('CallProcess', 'process_select_files_data', [], [], ... 
520 'subjectname', SubjectNames{iSubj}, ... 521 'condition', sprintf('sub-%02d_ses-meg_task-facerecognition_run-%02d_proc-sss_meg_notch', iSubj, iRun)); 522 % Loop on the conditions 523 for iCond = 1:length(AllConditions) 524 % Comment describing this average 525 strComment = [SubjectNames{iSubj}, ' / ', sprintf('run_%02d', iRun), ' / ', AllConditions{iCond}]; 526 disp(['BST> ' strComment]); 527 % Find the first good channel in the display list 528 if isempty(BadChannels{iSubj}{iRun}) 529 iSel = 1; 530 else 531 iSel = find(~ismember(SelChannel,BadChannels{iSubj}{iRun}), 1); 532 end 533 % Process: Select file comments with tag: Avg 534 sTrialsCond = bst_process('CallProcess', 'process_select_tag', sTrialsAll, [], ... 535 'tag', [AllConditions{iCond}, '_trial'], ... 536 'search', 1, ... % Search the file names 537 'select', 1); % Select only the files with the tag 538 % Process: Time-frequency (Morlet wavelets), averaged across trials 539 sTimefreq = bst_process('CallProcess', 'process_timefreq', sTrialsCond, [], ... 540 'sensortypes', 'MEG MAG, EEG', ... 541 'edit', struct(... 542 'Comment', ['Avg: ' AllConditions{iCond} ', Power, 6-60Hz'], ... 543 'TimeBands', [], ... 544 'Freqs', [6, 6.8, 7.6, 8.6, 9.7, 11, 12.4, 14, 15.8, 17.9, 20.2, 22.8, 25.7, 29, 32.7, 37, 41.7, 47.1, 53.2, 60], ... 545 'MorletFc', 1, ... 546 'MorletFwhmTc', 3, ... 547 'ClusterFuncTime', 'none', ... 548 'Measure', 'power', ... 549 'Output', 'average', ... 550 'RemoveEvoked', 0, ... 551 'SaveKernel', 0), ... 552 'normalize', 'none'); % None: Save non-standardized time-frequency maps 553 % Process: Extract time: [-200ms,900ms] 554 sTimefreq = bst_process('CallProcess', 'process_extract_time', sTimefreq, [], ... 555 'timewindow', [-0.2, 0.9], ... 556 'overwrite', 1); 557 % Screen capture of one sensor 558 hFigTf = view_timefreq(sTimefreq.FileName, 'SingleSensor', SelChannel{iSel}); 559 bst_report('Snapshot', hFigTf, strComment, 'Time-frequency', [200, 200, 400, 250]); 560 close(hFigTf); 561 end 562 end 563 end 564 % Save report 565 ReportFile = bst_report('Save', []); 566 if ~isempty(reports_dir) && ~isempty(ReportFile) 567 bst_report('Export', ReportFile, bst_fullfile(reports_dir, ['report_' ProtocolName '_timefreq.html'])); 568 end 569 end 570 571 572 573 574 %% ===== SUPPORT FUNCTIONS ===== 575 function BadSeg = GetBadSegments(iSubj, iRun) 576 BadSegments{1} = {... 577 [247.867 248.185; 598.999 598.999; 598.999 598.999; 611.999 611.999; 612.999 612.999; 613.999 613.999; 616.999 616.999; 617.999 617.999; 623.999 623.999; 715.209 715.467], ... 578 [84.791 85.166], ... 579 [79.183 80.167], ... 580 [64.309 65.185], ... 581 [90.958 91.167; 178.005 178.355; 293.282 295.919; 312.298 316.479; 353.835 357.716], ... 582 [60.292 66.802; 69.975 71.210; 105.233 107.586; 108.822 109.506; 376.225 376.325]}; 583 BadSegments{2} = {... 584 [223.806 224.199; 279.772 279.895; 453.241 455.108; 692.423 692.593], ... 585 [65.298 66.194; 304.727 306.178; 399.165 400.732], ... 586 [203.141 205.085; 281.579 287.883; 420.395 421.128], ... 587 [387.118 388.229; 440.318 441.900; 554.825 558.744], ... 588 [71.000 80.999; 82.750 87.367; 149.528 149.667; 264.747 267.995; 368.415 371.973; 376.263 378.763; 398.334 401.551; 537.410 541.645], ... 589 [38.000 47.999; 47.825 50.046; 61.298 61.384; 249.653 253.379; 282.917 283.820; 286.135 287.616; 298.167 300.196; 328.254 329.511; 335.957 337.817; 478.277 480.707]}; 590 BadSegments{3} = {... 591 [406.312 407.207; 727.055 728.714], ... 
592 [84.894 85.156; 152.028 152.946; 297.835 298.915; 418.272 421.845; 554.084 554.794], ... 593 [73.758 74.159; 378.212 378.536; 406.065 407.099; 470.541 471.698; 488.900 491.168; 529.596 530.453], ... 594 [94.874 95.152; 317.385 321.374; 325.696 327.055; 439.220 439.829; 454.473 455.175; 486.196 486.829; 518.660 522.015; 524.400 525.249; 562.417 570.325], ... 595 [96.208 97.181; 98.942 99.096; 135.005 135.754; 143.990 144.599; 250.139 250.247; 300.459 300.559; 338.265 339.322; 545.913 546.067], ... 596 [91.415 92.156; 284.843 286.525; 297.886 298.404; 317.046 317.163; 332.698 332.791; 358.946 359.402; 428.405 428.775; 478.374 478.690; 549.866 550.128]}; 597 BadSegments{4} = {... 598 [22.967 22.967; 50.036 50.098; 52.058 52.058; 156.653 156.653; 171.565 173.386; 239.544 242.105; 268.162 270.175; 268.992 268.992; 316.032 316.032; 338.283 339.000; 357.959 361.909; 370.871 370.871; 381.579 383.677; 437.731 437.731; 463.482 468.505; 476.135 479.838; 486.652 488.272; 504.860 508.999], ... 599 [309.493 311.707; 342.681 344.525; 354.019 357.321; 390.023 391.225; 393.926 395.855; 404.221 405.069; 432.522 435.932; 459.048 460.715; 471.763 478.529; 549.387 551.999; 591.087 594.143; 608.541 611.079; 624.847 626.615; 649.648 651.570], ... 600 [57.411 58.198; 88.346 88.955; 200.761 202.335; 227.016 227.688; 257.726 258.054; 356.798 359.005; 404.260 411.003], ... 601 [46.000 54.823; 61.000 70.332; 203.005 207.125; 275.875 278.121; 313.500 314.824; 337.973 338.636; 422.505 426.239], ... 602 [58.000 62.479; 78.250 85.166; 89.955 91.360; 116.322 117.888; 130.013 131.987; 149.509 150.489; 174.650 175.823; 182.030 183.334; 196.758 197.384; 204.458 204.697; 205.236 208.663; 311.028 316.383; 320.700 327.181; 332.437 335.354; 344.205 346.133; 374.208 374.865; 385.519 386.214; 441.942 444.241; 453.957 456.997; 486.039 487.004; 501.238 504.185; 512.962 514.675; 553.398 556.215], ... 603 [41.406 45.743; 58.681 59.144; 108.086 108.896; 140.633 143.750; 196.110 199.474; 210.778 210.971; 234.649 235.143; 258.081 259.632; 339.101 340.805; 390.277 390.609; 438.935 442.122; 528.221 534.031]}; 604 BadSegments{5} = {... 605 [265.539 265.778; 266.334 266.495; 268.479 268.965; 367.428 367.636; 439.655 442.779; 453.497 453.853; 504.997 505.329; 519.513 519.683; 595.674 595.982; 602.000 602.463], ... 606 [121.113 121.499; 124.971 126.213; 253.735 254.075; 272.232 272.464; 272.895 273.104; 346.368 346.645; 368.812 369.052; 406.382 406.605; 452.920 453.113; 454.903 455.112; 507.655 507.840; 508.766 509.013; 584.853 585.030; 594.831 595.656; 597.261 602.249], ... 607 [37.251 37.497; 38.825 39.056; 40.615 41.849; 43.624 44.758; 53.333 53.641; 54.698 55.076; 57.668 59.196; 79.129 79.360; 81.475 81.714; 122.658 123.375; 284.787 285.296; 288.754 288.993; 345.790 346.022; 421.212 421.459; 481.428 482.207; 503.408 503.685; 504.272 504.449; 524.451 524.714; 526.913 531.499], ... 608 [87.322 88.178; 91.085 91.325; 95.121 95.491; 114.174 114.397; 129.874 130.113; 151.220 151.544; 281.689 281.959; 532.966 533.345], ... 609 [59.176 60.218; 74.854 75.317; 308.180 309.877; 380.705 381.059], ... 610 [182.382 183.245; 196.220 196.736; 276.018 276.327; 292.490 294.086; 370.755 370.847; 435.644 436.624; 467.535 468.460; 522.838 525.847]}; 611 BadSegments{6} = {... 
612 [141.690 142.424; 157.070 157.417; 355.138 356.025; 423.999 423.999; 424.999 424.999; 426.999 426.999; 427.999 427.999; 486.430 488.151; 493.999 493.999; 501.511 501.619; 501.549 501.549; 501.585 501.585; 502.999 502.999; 503.999 503.999; 540.999 540.999; 541.999 541.999; 555.999 555.999; 556.999 556.999; 561.999 561.999; 563.999 563.999; 564.999 564.999; 565.999 565.999; 567.999 567.999], ... 613 [64.898 65.161; 71.700 72.718; 226.185 226.740; 324.124 324.425; 329.062 329.301; 486.143 486.975], ... 614 [62.300 63.148; 266.254 266.639; 409.920 410.221], ... 615 [54.048 55.214; 330.893 331.255], ... 616 [185.616 186.495; 331.411 331.796; 386.843 387.028; 387.999 387.999; 389.999 389.999; 434.575 434.875; 519.802 519.995], ... 617 [44.720 45.167; 211.446 211.964; 368.955 369.172]}; 618 BadSegments{7} = {... 619 [154.966 155.144; 551.639 551.855], ... 620 [88.774 89.167; 107.999 107.999; 109.999 109.999; 110.999 110.999; 112.999 112.999; 113.999 113.999; 114.999 114.999; 119.999 119.999; 121.999 121.999; 124.223 125.465; 137.011 138.871], ... 621 [81.627 82.136; 377.953 381.046], ... 622 [241.136 242.171; 543.849 544.196; 596.639 598.553; 600.227 601.075; 603.999 603.999; 605.999 605.999; 609.999 609.999; 611.305 612.809; 614.999 614.999; 615.803 616.937; 623.694 625.877; 653.999 653.999; 655.055 655.756; 663.999 663.999], ... 623 [68.852 69.481; 74.034 75.200; 78.426 78.600; 104.497 105.963; 253.951 254.034; 256.964 257.038; 257.915 258.048; 323.254 324.156; 365.880 368.131; 369.952 370.060; 371.728 372.999; 430.931 431.965; 535.521 542.999], ... 624 [70.271 71.205; 94.441 96.445; 98.613 99.126; 112.318 112.749; 131.686 132.265; 148.935 150.615; 161.120 161.600; 205.325 208.214; 215.035 215.863; 217.403 218.818; 286.178 287.171; 399.075 404.853]}; 625 BadSegments{8} = {... 626 [238.505 238.546; 256.354 257.224; 316.116 316.167; 341.519 341.558; 356.493 356.566; 380.095 380.170; 391.906 392.048; 448.457 448.562; 469.931 470.028; 530.488 530.575; 555.180 558.136; 562.152 562.245; 588.403 588.502; 625.205 625.662; 638.019 638.438; 649.982 650.008; 650.691 651.357; 651.925 652.061; 665.445 665.472; 695.923 696.015; 706.528 706.720; 729.706 732.534], ... 627 [99.546 106.114; 113.245 114.199; 150.885 154.989; 277.486 278.075; 333.645 335.574; 339.358 340.646; 371.860 375.394; 497.459 499.156], ... 628 [49.008 50.205; 231.446 233.406; 329.590 329.659; 355.101 356.019; 360.733 360.973; 372.955 374.891; 389.283 392.068; 453.610 455.431; 464.632 465.265; 489.209 489.996; 514.777 515.295], ... 629 [178.368 179.232; 293.865 294.521; 418.252 418.314; 450.124 450.209; 480.236 480.445; 492.725 493.975; 495.170 497.624; 500.382 500.459; 504.247 504.402; 628.017 628.079; 628.827 628.905; 630.209 630.332], ... 630 [100.616 101.172; 107.691 108.539; 188.814 188.875; 193.119 193.281; 207.964 208.033; 432.254 432.385; 489.834 489.911; 517.960 518.037; 518.955 519.040; 520.938 521.062; 521.945 522.037; 523.959 524.052; 525.942 526.057; 526.945 527.015; 528.958 529.059; 531.138 531.192; 531.979 532.048; 532.951 533.028; 533.962 534.024], ... 631 [135.997 137.232; 383.565 383.657; 418.763 418.955]}; 632 BadSegments{9} = {... 633 [215.107 216.187; 262.388 262.388; 287.519 287.635; 289.895 290.135; 311.999 311.999; 350.161 351.179; 526.154 527.033; 564.999 564.999; 584.935 585.059; 587.999 587.999; 601.999 601.999; 603.999 603.999; 608.999 608.999; 612.999 612.999], ... 
634 [47.667 47.775; 49.172 54.681; 67.195 70.805; 78.988 80.477; 138.701 138.948; 138.727 138.727; 138.728 138.728; 138.728 138.728; 138.734 138.734; 138.734 138.734; 138.881 138.881; 138.881 138.881; 138.907 138.907; 140.993 141.093; 141.037 141.037; 141.045 141.045; 155.795 155.864; 155.822 155.822; 155.849 155.849; 168.999 168.999; 206.999 206.999; 219.999 219.999; 225.999 225.999; 226.999 226.999; 228.999 228.999; 236.999 236.999; 242.463 242.463; 242.463 242.463; 242.487 242.487; 247.999 247.999; 251.999 251.999; 252.483 252.483; 253.315 254.163; 265.999 265.999; 267.999 267.999; 272.358 272.358; 272.410 272.410; 298.999 298.999; 300.999 300.999; 314.037 314.037; 314.047 314.047; 321.813 321.813; 321.813 321.813; 321.833 321.833; 329.117 329.117; 329.117 329.117; 329.156 329.156; 346.999 346.999; 347.999 347.999; 349.320 349.320; 349.329 349.329; 352.528 355.869; 364.040 364.040; 396.278 397.420; 404.865 404.865; 407.905 407.905; 407.905 407.905; 407.954 407.954; 418.454 418.454; 418.454 418.454; 418.486 418.486; 441.999 441.999; 444.999 444.999; 447.999 447.999; 453.550 453.650; 454.931 455.055; 457.999 457.999; 479.964 480.079; 481.999 481.999; 482.949 483.073; 488.948 489.064; 511.999 511.999; 520.999 520.999; 523.999 523.999; 526.954 527.069; 533.999 533.999; 537.999 537.999], ... 635 [82.889 83.198; 206.143 207.424; 403.873 404.096; 406.790 407.485; 413.936 414.075; 415.912 416.112; 536.621 537.532], ... 636 [182.999 182.999; 182.999 182.999; 183.999 183.999; 195.999 195.999; 208.999 208.999; 209.999 209.999; 278.601 278.955; 413.913 414.067; 415.250 417.792; 419.940 420.087; 420.999 420.999; 512.999 512.999; 514.363 516.254; 521.917 522.134; 522.999 522.999; 523.915 524.101; 533.999 533.999; 538.999 538.999; 539.892 540.062; 543.999 543.999; 548.874 549.089; 549.900 550.062; 552.755 554.075; 585.999 585.999; 587.957 588.073; 588.999 588.999; 594.999 594.999; 603.971 604.056; 604.928 605.097; 615.986 618.495; 623.999 623.999], ... 637 [53.215 54.203; 164.227 168.965; 201.433 202.775; 292.973 295.889; 303.961 304.817; 309.354 311.236; 313.123 313.639; 322.547 323.017; 331.141 334.852; 356.637 356.985; 367.855 368.079; 369.995 373.459; 377.849 378.142; 406.072 407.306; 436.164 436.665; 458.033 458.905; 516.889 518.656; 517.812 518.684], ... 638 [113.945 115.225; 198.570 198.863; 264.162 264.795; 383.705 385.641; 396.477 397.064; 399.706 406.457; 452.261 453.179; 486.338 487.102; 498.869 499.579; 507.968 508.925; 546.266 547.285; 558.825 560.128]}; 639 BadSegments{10} = {... 640 [235.104 236.184; 324.272 325.028; 330.799 331.401; 541.062 541.826; 564.747 565.072], ... 641 [100.581 101.229; 285.644 285.644; 285.644 285.644; 297.979 298.033; 298.999 298.999; 300.949 301.026; 301.999 301.999; 303.999 303.999; 304.999 304.999; 306.999 306.999; 307.999 307.999; 310.999 310.999; 311.999 311.999; 313.971 314.025; 314.999 314.999; 316.995 317.035; 317.999 317.999; 319.999 319.999; 320.923 321.046; 326.971 327.048; 327.999 327.999; 329.999 329.999; 330.999 330.999; 332.974 333.028; 333.999 333.999; 335.971 336.025; 336.999 336.999; 338.973 339.035; 339.999 339.999; 341.951 342.044; 343.999 343.999; 344.964 345.033; 346.999 346.999; 347.999 347.999; 452.015 453.411; 458.735 459.776; 467.974 468.089], ... 642 [41.266 41.675; 53.905 55.240; 159.159 159.345; 220.255 220.394], ... 
643 [66.751 67.190; 201.674 201.812; 294.119 295.168; 303.188 303.528; 316.992 317.037; 317.999 317.999; 317.999 317.999; 319.999 319.999; 320.980 321.042; 322.999 322.999; 325.999 325.999; 326.955 327.048; 328.999 328.999; 333.999 333.999; 334.999 334.999; 342.999 342.999; 343.999 343.999; 351.915 352.038; 352.999 352.999; 407.979 408.056; 409.999 409.999; 411.999 411.999; 415.405 416.154; 435.999 435.999; 436.962 437.046; 468.885 469.100; 469.999 469.999; 470.999 470.999; 516.210 516.357], ... 644 [105.369 106.172; 141.468 142.178; 199.008 199.818; 201.269 201.716; 352.851 353.083; 449.865 452.041; 459.508 459.802], ... 645 [229.948 230.033; 230.999 230.999; 230.999 230.999; 259.588 259.990; 343.970 345.798; 361.999 361.999; 362.970 363.046; 364.999 364.999; 366.999 366.999; 367.962 368.031; 372.521 374.179; 405.999 405.999; 409.999 409.999; 411.999 411.999; 412.999 412.999; 434.999 434.999; 435.970 436.031; 516.999 516.999; 519.999 519.999]}; 646 BadSegments{11} = {... 647 [179.045 180.225; 323.999 323.999; 323.999 323.999; 324.999 324.999; 327.978 328.025; 328.966 329.028; 330.999 330.999; 365.999 365.999; 367.999 367.999; 370.999 370.999; 371.999 371.999; 373.999 373.999; 375.965 376.042; 377.999 377.999], ... 648 [52.107 53.156; 521.077 521.155], ... 649 [65.331 66.156], ... 650 [81.579 82.143; 108.565 108.565; 108.566 108.566; 112.176 112.176; 122.377 122.377; 122.565 122.565; 213.882 213.882; 215.305 215.305; 224.851 224.851; 224.919 224.919; 255.815 255.815; 257.893 257.893; 359.952 359.952; 361.630 361.754; 370.285 370.285; 376.853 376.853; 511.600 511.600; 513.631 513.631; 513.988 513.988; 518.215 518.215], ... 651 [63.205 64.154; 170.951 171.036; 176.841 176.841; 176.842 176.842; 177.282 177.282; 223.886 223.971; 259.963 259.963; 261.547 261.547; 341.185 341.302; 368.999 368.999; 370.999 370.999; 374.999 374.999; 382.971 383.033; 383.999 383.999; 386.999 386.999; 388.958 389.035; 390.999 390.999; 391.955 392.031; 394.955 395.040; 396.999 396.999; 398.999 398.999; 400.999 400.999; 402.999 402.999; 404.963 405.033; 406.999 406.999; 429.829 429.829; 430.346 430.346; 465.184 465.184; 470.290 470.290], ... 652 [50.195 51.145; 158.850 159.012]}; 653 BadSegments{12} = {... 654 [193.435 194.175; 500.000 501.000; 08.988 509.868; 528.999 528.999; 528.999 528.999; 529.999 529.999; 531.999 531.999; 547.999 547.999], ... 655 [133.484 134.185; 134.911 135.065; 553.999 553.999; 553.999 553.999; 554.999 554.999; 557.999 557.999; 558.999 558.999; 564.999 564.999; 565.977 566.046; 568.999 568.999; 569.951 570.028; 571.999 571.999; 579.959 580.028; 580.977 581.055; 583.971 584.033; 583.999 583.999; 585.999 585.999; 586.999 586.999; 588.980 589.026; 590.999 590.999; 591.958 592.044; 595.999 595.999], ... 656 [46.028 47.216; 129.557 130.236; 200.999 200.999; 200.999 200.999; 201.975 202.075; 203.999 203.999; 204.967 205.053; 213.627 214.515; 218.958 219.044; 222.959 228.252; 309.999 309.999; 311.897 317.537; 311.999 311.999; 351.968 353.234; 399.999 399.999; 446.508 451.454; 487.999 487.999; 512.278 512.926], ... 
657 [82.535 84.062; 102.247 102.355; 134.999 134.999; 134.999 134.999; 136.999 136.999; 138.973 139.042; 147.999 147.999; 148.999 148.999; 149.999 149.999; 193.999 193.999; 194.999 194.999; 224.984 226.095; 241.954 242.077; 243.948 244.087; 280.999 280.999; 281.999 281.999; 305.000 305.451; 310.945 311.068; 328.999 328.999; 352.999 352.999; 353.999 353.999; 397.025 397.411; 415.944 418.096; 415.999 415.999; 470.999 470.999; 477.971 478.041; 483.997 484.737; 492.958 493.089; 493.999 493.999; 494.999 494.999; 522.999 522.999; 523.999 523.999; 524.955 525.039], ... 658 [175.183 176.155; 246.112 246.991; 412.005 413.340; 483.915 484.061; 485.959 486.028; 497.360 498.880; 594.457 594.944; 596.641 598.006; 615.277 618.710], ... 659 [66.900 72.232; 75.510 78.688; 543.495 545.999]}; 660 BadSegments{13} = {... 661 [307.246 308.179; 627.881 628.089; 629.941 630.049], ... 662 [171.506 172.301; 172.999 172.999; 430.999 430.999; 432.999 432.999; 434.999 434.999; 448.999 448.999; 479.999 479.999; 489.999 489.999; 490.999 490.999; 491.999 491.999], ... 663 [92.059 93.209], ... 664 [52.791 53.231; 54.999 54.999; 65.985 68.138; 91.240 91.386; 92.137 92.207; 105.346 105.493; 121.120 121.398; 146.546 146.955; 194.025 197.652; 242.985 243.571; 247.101 247.556; 269.889 270.083; 270.946 271.070; 274.408 275.866; 294.565 295.267; 319.370 319.879; 365.999 365.999; 375.913 376.044; 376.869 377.055; 377.999 377.999; 390.130 393.225; 403.965 404.035; 404.953 405.075; 419.265 419.565; 427.015 427.154; 427.879 428.118; 427.999 427.999; 428.913 429.067; 480.945 482.095; 484.285 484.571; 516.929 517.099; 517.963 518.079], ... 665 [115.921 116.060; 117.999 117.999; 120.999 120.999; 126.896 127.058; 130.955 131.039; 131.957 132.043; 132.961 133.015; 134.999 134.999; 135.950 136.043; 139.999 139.999; 143.999 143.999; 144.850 145.066; 145.999 145.999; 146.999 146.999; 160.929 161.045; 161.999 161.999; 189.086 189.310; 193.939 194.063; 195.975 196.045; 199.961 200.045; 201.952 202.059; 202.939 205.084; 210.999 210.999; 211.738 213.629; 217.954 218.061; 218.987 219.049; 219.975 220.028; 258.953 259.037; 259.952 260.045; 260.963 261.032; 261.958 262.043; 262.954 263.054; 262.999 262.999; 288.999 288.999; 295.477 295.655; 298.976 299.054; 299.979 300.018; 300.975 301.005; 336.952 337.044; 337.977 338.032; 338.988 339.027; 338.999 338.999; 339.991 340.045; 340.975 341.029; 341.978 342.016; 343.976 344.054; 361.961 362.045; 365.942 366.050; 370.999 370.999; 384.961 385.045; 409.977 410.047; 415.984 416.030; 426.944 427.059; 430.999 430.999; 436.999 436.999; 448.999 448.999; 449.500 452.131; 481.962 482.055; 482.973 483.027; 484.975 485.060; 503.999 503.999; 504.999 504.999; 507.967 508.028; 508.955 509.032; 510.999 510.999; 512.999 512.999; 518.865 520.292; 522.958 523.043; 523.999 523.999; 524.948 525.034; 525.959 526.044; 526.946 527.055; 527.942 528.027; 528.961 529.053; 529.945 530.022; 535.999 535.999; 536.957 537.058; 538.967 539.036; 539.963 540.040; 541.976 542.045; 543.999 543.999; 551.956 552.034; 552.990 553.028; 553.977 554.024; 555.968 556.022; 556.204 557.454; 557.963 558.032; 559.969 560.023; 563.973 564.035; 563.999 563.999], ... 666 [87.308 87.639; 107.039 108.189; 427.943 428.035; 437.965 438.027; 439.973 440.026; 491.275 492.571]}; 667 BadSegments{14} = {... 
668 [365.982 367.194; 368.999 368.999; 368.999 368.999; 369.999 369.999; 371.999 371.999; 373.999 373.999; 375.942 376.050; 376.999 376.999; 378.999 378.999; 380.999 380.999; 382.999 382.999; 383.999 383.999; 386.999 386.999; 387.999 387.999; 418.999 418.999; 444.999 444.999; 690.178 693.218], ... 669 [101.796 102.183; 218.999 218.999; 219.999 219.999; 221.999 221.999; 222.999 222.999; 227.999 227.999; 229.999 229.999; 230.999 230.999; 232.999 232.999; 233.999 233.999; 235.999 235.999; 237.999 237.999; 238.985 239.031; 240.999 240.999; 242.999 242.999; 243.999 243.999; 246.999 246.999; 247.999 247.999; 253.999 253.999; 258.999 258.999; 259.999 259.999; 260.227 261.123; 264.999 264.999; 265.999 265.999; 268.999 268.999; 269.999 269.999; 274.999 274.999; 276.999 276.999; 277.999 277.999; 280.999 280.999; 282.966 283.021; 284.999 284.999; 285.999 285.999; 360.999 360.999; 362.973 363.026; 364.999 364.999; 365.999 365.999; 367.999 367.999; 368.985 369.024; 370.999 370.999; 371.999 371.999; 378.999 378.999; 379.999 379.999; 381.999 381.999; 383.999 383.999; 385.999 385.999; 387.999 387.999; 388.999 388.999; 398.999 398.999; 415.999 415.999; 417.999 417.999; 418.999 418.999; 421.999 421.999; 422.999 422.999], ... 670 [89.285 90.187], ... 671 [85.991 87.179; 88.999 88.999; 89.999 89.999; 92.966 93.027; 93.999 93.999; 107.999 107.999; 110.999 110.999; 111.971 112.025; 118.999 118.999; 121.999 121.999; 122.999 122.999; 166.999 166.999; 168.957 169.035; 173.999 173.999; 174.999 174.999; 175.957 176.035; 198.999 198.999; 199.999 199.999; 202.999 202.999; 203.999 203.999; 205.999 205.999; 207.999 207.999; 208.999 208.999; 210.974 211.044; 211.999 211.999; 212.999 212.999; 232.984 233.023; 234.999 234.999; 235.999 235.999; 236.999 236.999; 239.958 240.044; 240.999 240.999], ... 672 [71.132 72.227], ... 673 [98.134 98.827; 110.497 110.814; 114.000 117.272; 279.999 279.999; 279.999 279.999; 280.999 280.999; 282.999 282.999; 283.999 283.999; 284.999 284.999; 286.999 286.999; 287.999 287.999; 288.999 288.999; 289.999 289.999; 291.966 292.044; 292.999 292.999; 346.999 346.999; 437.608 437.646; 518.988 519.104; 522.969 523.015; 534.999 534.999; 536.967 537.029; 562.725 563.426]}; 674 BadSegments{15} = {... 675 [66.755 67.172; 257.988 258.042; 258.714 259.385; 260.958 261.044; 265.931 266.062; 267.968 268.022; 268.978 269.033; 270.992 271.031; 272.964 273.017; 273.968 274.030; 275.975 276.059; 276.970 277.046; 288.976 289.038; 289.984 290.015; 295.994 296.025; 296.982 297.013; 300.966 301.152; 364.730 364.800], ... 676 [46.531 46.555; 65.156 66.148; 87.535 87.642; 183.896 183.934; 294.342 294.375; 330.004 330.528; 333.785 333.815; 345.177 345.203; 399.544 399.573; 480.964 480.980], ... 677 [53.557 54.175; 148.851 149.136], ... 678 [43.681 44.214; 361.255 361.587; 409.462 411.021], ... 679 [90.868 91.130; 410.935 411.044], ... 680 [64.786 65.141; 132.385 132.786; 177.735 178.252; 278.747 278.902; 313.959 314.028; 314.970 315.031]}; 681 BadSegments{16} = {... 682 [38.958 44.335; 305.280 312.826; 324.778 326.067; 393.183 398.367; 403.410 422.854], ... 683 [49.726 50.251; 58.675 65.465; 263.918 264.088; 321.795 322.195; 522.881 523.081], ... 684 [61.245 62.325; 65.628 65.905; 315.686 317.275; 335.184 336.449; 387.870 388.117; 389.892 390.062], ... 685 [56.020 57.255; 60.711 61.096; 64.353 64.924; 73.202 73.757; 76.134 76.844; 95.625 96.859; 171.436 172.625; 178.359 178.715; 212.828 213.415; 352.425 352.617; 367.755 369.097; 488.400 489.426], ... 686 [71.665 72.513; 76.347 77.181; 127.134 128.862], ... 
687 [46.190 47.209; 53.461 55.752; 194.317 194.510; 213.101 213.240; 216.356 216.442; 225.962 226.286; 245.163 245.464; 252.041 253.422; 254.648 254.856; 292.966 294.340; 340.928 341.067; 346.298 346.452]}; 688 BadSeg = BadSegments{iSubj}{iRun}'; 689 end 690

Temporary files

Some processes need to create temporary files on the hard drive. For example, when epoching MEG/EEG recordings, Brainstorm first creates a temporary folder import_yymmdd_hhmmss, stores all the epochs in it, then moves them to the database once the epoching is complete. The name of the temporary folder indicates its creation time (year/month/day, then hour/minute/second).

By default, Brainstorm stores its temporary files in the user folder ($HOME/.brainstorm/tmp/). This location can be changed as explained in the introduction tutorial Create protocol.
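
It may also be convenient to check or redirect this folder directly from a script, through the Brainstorm preferences. This is only a sketch: the preference name 'BrainstormTmpDir' used below is an assumption, to be verified against your version of Brainstorm.

% Display the current temporary folder
disp(bst_get('BrainstormTmpDir'));
% Redirect the temporary files to another local disk ('BrainstormTmpDir' is an assumed preference name)
bst_set('BrainstormTmpDir', '/tmp/brainstorm_tmp');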

At the end of each process, all the temporary files should be deleted automatically. In some cases, however, the process crashes or is killed before it can delete its temporary files. When the Brainstorm interface starts, you are offered the option to delete any remaining temporary files; but when running Brainstorm only from scripts, these leftover files may never be deleted or even noticed. To prevent them from piling up indefinitely, it can be useful to add an explicit call to the function below, which deletes all of them.

gui_brainstorm('EmptyTempFolder');

Warning: Do not call this function in scripts that are meant to run in parallel under the same user account, as it might delete the temporary files of another Brainstorm session.

Parallel processing

At the moment, Brainstorm cannot safely handle multiple sessions running on the same computer, under the same user account, or on the same database folder. We are working on a new version of the database system that will support concurrent access. Until it is released, you should not have multiple people working with the same database at the same time, or multiple Matlab/Brainstorm instances running in parallel.

In some specific cases, however, it is possible to start multiple long processes simultaneously without much risk. For example, it can make sense to start the BrainSuite MRI segmentation for multiple subjects within the same protocol. Some recommendations to do this safely:

  • The current protocol should never be changed
  • The temporary files from the other processes should not be deleted
  • Each instance of Matlab/Brainstorm must process a different subject
  • Reload the database after all the executions are completed (see the sketch after this list)
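
For this last step, a minimal sketch, assuming the function db_reload_database accepts the 'current' keyword; run it from a single instance once all the parallel executions have finished:

% Reload the current protocol, so the database reflects the files
% created by the other Matlab/Brainstorm instances
db_reload_database('current');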

Note that parallel processing does not always improve overall performance. Many Matlab functions are already optimized to distribute their computation over multiple CPU cores: running them in parallel from multiple instances of Matlab can be slower and less memory-efficient than running them sequentially from a single instance. Unless the tasks are clearly not optimized for multiple cores (e.g. FreeSurfer segmentation), always run a few tests before deciding to parallelize, to make sure it actually reduces the total computation time.

Running scripts on a cluster

With Matlab

Matlab, including the toolboxes you need, must be installed on the server, together with a copy of Brainstorm. Start Brainstorm with the command "brainstorm server".

For the database: the folder "brainstorm_db" needs to be registered at the beginning of the script. You can either upload a protocol folder from your local computer, or create an empty brainstorm_db folder and a new protocol on the remote server. At the end of this initialization phase, you need to have something like the structure below:

  • /home/user/brainstorm3

  • /home/user/brainstorm_db/ProtocolName/anat

  • /home/user/brainstorm_db/ProtocolName/data

Your processing script could start with some of the elements below, depending on what you need:

% Start Brainstorm
cd /home/user/brainstorm3;
if ~brainstorm('status')
    brainstorm server
end

% Path to a Brainstorm database (= a folder that contains one or more Brainstorm protocols)
BrainstormDbDir = '/home/user/brainstorm_db';

% Load a new uploaded database (sets BrainstormDbDir and loads all the protocols it contains)
db_import(BrainstormDbDir);

% Alternative: Set the Brainstorm DB folder
% (defines where the new protocols are going to be created, but does not load anything)
bst_set('BrainstormDbDir', BrainstormDbDir);

% Name of the protocol to process (matches the folder names in brainstorm_db)
ProtocolName = 'ProtocolName';
% Get the protocol index of an existing protocol (already loaded previously in Brainstorm)
iProtocol = bst_get('Protocol', ProtocolName);

% Create a new protocol if needed
if isempty(iProtocol)
    UseDefaultAnat = 0;
    UseDefaultChannel = 0;
    gui_brainstorm('CreateProtocol', ProtocolName, UseDefaultAnat, UseDefaultChannel);
end

% Delete an existing protocol
gui_brainstorm('DeleteProtocol', ProtocolName);
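
After this initialization, a processing script typically selects some files and runs processes on them with bst_process('CallProcess', ...). The example below is only an illustration: the process name and options follow the pattern of the scripts generated by the pipeline editor and should be checked against your own pipeline.

% Illustration only: select all the imported data files for one subject
% (process name and options taken from pipeline-editor generated scripts)
sFiles = bst_process('CallProcess', 'process_select_files_data', [], [], ...
    'subjectname',   'Subject01', ...
    'condition',     '', ...
    'tag',           '', ...
    'includebad',    0, ...
    'includeintra',  0, ...
    'includecommon', 0);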

Execution: Check that a Matlab license is available on the server, and that the RAM and disk quotas are large enough for the data you will create while processing. The Matlab version on the server might differ from the one installed on your local computer, or some toolboxes might be missing, which can cause errors.

A Brainstorm script can be executed either directly (it then needs to start Brainstorm itself, as in the example script above), or with the syntax brainstorm <script.m> <parameters> (which starts Brainstorm in server mode, executes the script and quits Brainstorm). See sections Starting Brainstorm and Parameters.

External Matlab call

To call a Brainstorm script directly from a terminal on Linux or MacOS, without starting the Matlab graphical environment, you can use the following syntax:

matlab -nosplash -nodesktop -r "run('/path/to/script.m');"
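
If the script does not quit Matlab by itself, the Matlab process keeps running after the script has finished. A possible variant (only a suggestion, to be adapted to your shell) that quits Matlab on completion, catches errors, and writes the console output to a log file:

matlab -nosplash -nodesktop -r "try, run('/path/to/script.m'); catch e, disp(getReport(e)); end; exit;" > script.log 2>&1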

Without Matlab

Brainstorm scripts can also be executed with the compiled version of Brainstorm, which does not require a Matlab license, only the installation of the free MATLAB Runtime (see the installation instructions, section "without Matlab"). Add the full path to the script and the parameters to the command line:

  • Windows: brainstorm3.bat <script.m> <parameters>

  • Linux/MacOS: brainstorm3.command <MATLABROOT> <script.m> <parameters>
    MATLABROOT: Matlab Runtime installation folder, e.g. /usr/local/MATLAB/MATLAB_Runtime/v98/

  • To avoid being asked for "brainstorm_db", add the argument: local
    Example: brainstorm3.command /usr/local/MATLAB/MATLAB_Runtime/v98 main.m local

In this configuration, Brainstorm is started using the command brainstorm server and then the script is executed. Therefore, your script should not start brainstorm again: remove the line "brainstorm server" from the example script in the section above.

In this configuration, the Matlab Runtime cannot compile and execute .m files the way a regular Matlab installation would. To work around this limitation, Brainstorm reads the script.m file and executes its contents with the Matlab eval function. Therefore, this approach does not support the definition of functions or classes within script.m: the compiled version of Brainstorm can only run code that could be typed directly in the Matlab Command Window. If you need more flexibility, you need to recompile a modified version of Brainstorm that includes your custom code (see below).

Parameters

The syntax of the script changes depending on the input parameters it accepts from the command line.

No extra parameters

The .m file must have the structure of a simple script, with no additional function definition.

Example script.m:

disp(['SCRIPT> Script start.']);
sStudy = bst_get('Study');
disp(['SCRIPT> Current study path: ' sStudy.FileName 10]);

Execution from the command line:

>> brainstorm 'C:\Users\franc\Downloads\GetStudyPath.m'

BST> Starting Brainstorm:
BST> =================================
BST> Version: 15-Jun-2022
BST> Deleting old process reports...
BST> Loading configuration file...
BST> Reading process folder...
BST> Loading current protocol...
BST> =================================

SCRIPT> Script start.
SCRIPT> Current study path: Subject01/S01_AEF_20131218_01_600Hz_notch/brainstormstudy.mat

BST> Emptying temporary directory...
BST> Brainstorm stopped.

Extra command line parameters

The .m file must declare a function on its first line. Brainstorm parses the number and names of the parameters from this first line and matches them with the parameters passed from the command line; then the first line is removed and the rest of the code is executed with eval. Do not include the closing "end" statement of the function, as it would require extra work to identify and remove it from the code. No other functions, sub-functions or classes can be declared in the same .m file.

Note that the parameters are received as char arrays in the code: if you expect numerical values, they must be parsed from the string (e.g. with str2num), as in the example below.

Example script.m:

function GetStudyPath(iStudy)

disp([10 'SCRIPT> Script start.']);
sStudy = bst_get('Study', str2num(iStudy));
disp(['SCRIPT> Study path: ' sStudy.FileName 10]);

Execution from the command line:

>> brainstorm 'C:\Users\franc\Downloads\GetStudyPath.m' 1

BST> Starting Brainstorm:
BST> =================================
BST> Version: 15-Jun-2022
BST> Deleting old process reports...
BST> Loading configuration file...
BST> Reading process folder...
BST> Loading current protocol...
BST> =================================

WARNING: This file is a function, trying to convert to a script...

SCRIPT> Script start.
SCRIPT> Study path: Test/@default_study/brainstormstudy.mat

BST> Emptying temporary directory...
BST> Brainstorm stopped.

Send report by email

When running a long computation on a remote server, it can be convenient to receive an email when the processing is over. There are two ways to send yourself an email from Brainstorm: from the pipeline editor, or directly from a script.

Use the process File > Send report by email to send the current execution report:

  • Brainstorm username: The user name you use to download Brainstorm from the website, or to post messages on the user forum. The email address associated with this account is the primary recipient of the notification.

  • Send copy to: Optional, sends a copy to an additional email address. Use this to notify a collaborator, or to receive the notification at an email address that is not registered on the Brainstorm website.

  • Subject: Subject of the email.

  • Send full report: If selected, sends the full HTML execution report as displayed by the Report viewer. Be careful when working with sensitive patient data, as this sends the full file names of all the files without any encryption. If not selected, only the names of the processes and their execution times are sent, in plain text.

    email.gif

Alternatively, use one of the following options (isFullReport can be set to 0 or 1):

% At the end of your script
ReportFile = bst_report('Save', sFiles);
bst_report('Email', ReportFile, username, to, subject, isFullReport);

% Anywhere in your script
bst_report('Email', 'current', username, to, subject, isFullReport);
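
For example, with placeholder values (the username and email address below are hypothetical):

% Send only the list of processes and their execution times (isFullReport = 0)
bst_report('Email', 'current', 'my_bst_username', 'me@example.com', 'Analysis finished', 0);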

How to compile Brainstorm

Brainstorm can be compiled as a JAR application with the function bst_compile.m. You would need this in order to execute a modified version of Brainstorm without Matlab.

Software requirements:

  • MATLAB >= 2020a

  • MATLAB Compiler toolbox
  • MATLAB Compiler SDK toolbox
  • OpenJDK 8
  • Set the environment variable JAVA_HOME to the installation folder of the JDK:

    • Windows 10: Right-click on "This PC" > Advanced system settings > Environment variables

    • Linux & MacOS: The syntax depends on your shell, look it up on Google.

    • From Matlab: setenv('JAVA_HOME', '/path/to/jdk-8.0.../')
    • Check the environment variable from Matlab: getenv('JAVA_HOME')
  • The Matlab "bin" folder must be in the system path. Open a terminal and run "mcc": if you get an error message ("command not found"), add the "bin" folder of your Matlab installation to the system path before starting Matlab (see the example after this list).
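
For example, with a bash shell on Linux (the installation paths below are only examples, adjust them to your system):

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$PATH:/usr/local/MATLAB/R2020a/bin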

To compile Brainstorm, run from the command line:

  • brainstorm compile: Compiles Brainstorm together with many plugins: SPM12, FieldTrip, Brain2mesh, Iso2mesh, BrainEntropy, LibSVM, NIRSTORM... At the moment, this works only from a Windows 10 computer (support for other operating systems will be added later).

  • brainstorm compile noplugs: Compiles Brainstorm without the extra plugins. This should work on all operating systems.

Additional documentation







