Tutorial 1: Create a new protocol

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

How to read these tutorials

The goal of these introduction tutorials is to guide you through most of the features of the software. All the pages use the same example dataset. The results of one section are often used in the following one, so read these pages in order.

Advanced

Some pages contain more details than your level of interest or expertise may require. The sections marked as [Advanced] are not required for following the tutorials to the end. You can skip them on your first pass through the documentation and come back to the theory later if you need it.

Please first follow these tutorials with the data we provide, so that you can focus on learning how to use the software. It is better to start with data that is easy to analyze. After going through all the tutorials, you should be comfortable enough with the software to start analyzing your own recordings.

You will observe minor differences between the screen captures presented in these pages and what you obtain on your computer: different colormaps, different values, etc. Because the software is constantly improved, some results have changed since the illustrations were produced. When the changes are minor and the interpretation of the figures remains the same, we do not necessarily update the images in the tutorial.

If you are interested only in EEG or intra-cranial recordings, don't assume that a MEG-based tutorial is not relevant to you. Most practical aspects of data manipulation are very similar in EEG and MEG. Start by reading these introduction tutorials with the MEG example dataset provided; then, when you are familiar with the software, go through the tutorial "EEG and Epilepsy" for more details about the processing steps specific to EEG, or read one of the SEEG/ECOG tutorials available in the section "Other analysis scenarios".

Presentation of the experiment

All the introduction tutorials are based on a simple auditory oddball experiment:

  • One subject, two acquisition runs of 6 minutes each.
  • Subject stimulated binaurally with intra-aural earphones.
  • Each run contains 200 regular beeps and 40 easy deviant beeps.
  • Recordings with a CTF MEG system with 275 axial gradiometers.
  • Anatomy of the subject: 1.5T MRI, processed with FreeSurfer 5.3.

  • More details about this dataset will be given along the way.
  • Full dataset description available on this page: Introduction dataset.

Brainstorm folders

Brainstorm needs different directories to work properly. If you put everything in the same folder, you will run into many problems. Try to understand this organization before creating a new database.


1. Program directory: "brainstorm3"

  • Contains all the program files: Matlab scripts, compiled binaries, templates, etc.
  • There is no user data in this folder.
  • You can delete it and replace it with a newer version at any time; your data will be safe.
  • Recommended location:
    • Windows: Documents\brainstorm3

    • Linux: /home/username/brainstorm3

    • MacOS: Documents/brainstorm3


2. Database directory: "brainstorm_db"

  • Created by user.
  • Contains all the Brainstorm database files.
  • Managed by the application: do not move, delete or add files by yourself.
  • Recommended location:
    • Windows: Documents\brainstorm_db

    • Linux: /home/username/brainstorm_db

    • MacOS: Documents/brainstorm_db


3. User directory: ".brainstorm"

  • Created at Brainstorm startup. Typical location:

    • Windows: C:\Users\username\.brainstorm

    • Linux: /home/username/.brainstorm

    • MacOS: /Users/username/.brainstorm

  • Contains:
    • brainstorm.mat: Brainstorm user preferences.

    • defaults/: Anatomy templates downloaded by Brainstorm.

    • mex/: Some mex files that have to be recompiled.

    • plugins/: Plugins downloaded by Brainstorm (see tutorial Plugins).

    • process/: Personal process folder (see tutorial How to write your own process).

    • reports/: Execution reports (see tutorial Run processes).

    • tmp/: Temporary folder, emptied (with user confirmation) every time Brainstorm is started.
      You may have to change the location of the temporary folder if you have limited storage or a limited quota in your home folder (see below).

Be sure that the paths to the Program, Database, and User directories do not contain special characters. See related forum post.


4. Original data files:

  • Recordings you acquired and want to process with Brainstorm.
  • Put them wherever you want, but not in any of the previous folders.
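The recommended locations above can also be created from a terminal. A minimal sketch for Linux (adapt the paths for Windows/macOS as listed above; the folder name for the original recordings is a hypothetical example, any location outside the Brainstorm folders works):

```shell
# Create the recommended Brainstorm folders (Linux paths).
mkdir -p "$HOME/brainstorm3"      # 1. program directory (safe to delete/replace)
mkdir -p "$HOME/brainstorm_db"    # 2. database directory (managed by Brainstorm)
mkdir -p "$HOME/original_data"    # 4. original recordings (name is an example)
```

The user directory (.brainstorm) does not need to be created manually: Brainstorm creates it at startup.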

Starting Brainstorm for the first time

  1. If you haven't read the installation instructions, do it now: Installation.

  2. Start Brainstorm from Matlab or with the compiled executable.

    BST> Starting Brainstorm:
    BST> =================================
    BST> Version: 28-Jan-2015
    BST> Checking internet connectivity... ok
    BST> Compiling main interface files...
    BST> Emptying temporary directory...
    BST> Deleting old process reports...
    BST> Loading configuration file...
    BST> Initializing user interface...
    BST> Starting OpenGL engine... hardware
    BST> Reading plugins folder...
    BST> Loading current protocol...
    BST> =================================
  3. Read and accept the license file.
  4. Select your Brainstorm database directory (brainstorm_db).
  5. If you do something wrong and don't know how to go back, you can always re-initialize Brainstorm by typing "brainstorm reset" in the Matlab command window, or by clicking on [Reset] in the software preferences (menu File > Edit preferences).

Main interface window

The Brainstorm window described below is designed to remain on one side of the screen. All the space of the desktop that is not covered by this window will be used for opening other figures.

Do not try to maximize this window, or the automatic management of the data figures might not work correctly. Keep it on one side of your screen, just large enough so you can read the file names in the database explorer.

main_window.gif

The text is too small

If you have a high-resolution screen, the text and icons in the Brainstorm window may not scale properly, making the interface impossible to use. Select the menu File > Edit preferences: the slider at the bottom of the options window lets you increase the scaling ratio of the Brainstorm interface. If this doesn't help, try changing the scaling options in your operating system preferences.

preferences.gif

Database structure

Brainstorm allows you to organize your recordings and analysis with three levels of definition:

  • Protocol

    • Group of datasets that have to be processed or displayed together.
    • A protocol can include one or several subjects.
    • Some people would prefer to call this experiment or study.

    • You can only open one protocol at a time.
    • Your Brainstorm database is a collection of protocols.
  • Subject

    • A person who participated in a given protocol.
    • A subject contains two categories of information: anatomy and functional data.
    • Anatomy: Includes at least an MRI volume and some surfaces extracted from the MRI.

    • Functional data: Everything related to the MEG/EEG acquisition.

    • For each subject, it is possible to use either the actual MRI of the person or one of the anatomy templates available in Brainstorm.
  • Sub-folders

    • For each subject, the functional files can be organized in different sub-folders.
    • These folders can represent different recording sessions (aka acquisition runs) or different experimental conditions.
    • The current structure of the database does not allow more than one level of sub-folders for each subject. It is not possible to organize the files by session AND by condition.

      db.gif

Database files

  • The database folder "brainstorm_db" is managed completely from the graphic user interface (GUI).
  • All the files in the database have to be imported through the GUI. Do not copy files into the brainstorm_db folder yourself; it won't work.
  • Everything in this folder is stored in Matlab .mat format, with the following architecture:
    • Anatomy data: brainstorm_db/protocol_name/anat/subject_name

    • Functional data: brainstorm_db/protocol_name/data/subject_name/subfolder/

  • Most of the files you see in the database explorer in Brainstorm correspond to files on the hard drive, but there is no one-to-one correspondence. Extra information is stored in each directory to save properties, comments, default data, links between items, etc. This is one of the reasons why you should not manipulate the files in the Brainstorm database directory directly.
  • The structure of the database is saved in the user preferences, so when you start the program or change protocol, there is no need to re-read all the files on the hard drive.
  • If Brainstorm or Matlab crashes before the database structure is correctly saved, the files that are displayed in the Brainstorm database explorer may differ from what is actually on the disk. When this happens, you can force Brainstorm to rebuild the structure from the files on the hard drive: right-click on a folder > Reload.
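As an illustration of the architecture above, a protocol named "TutorialIntroduction" with one subject could look like this on disk (the run folder name is just an example):

```
brainstorm_db/
└── TutorialIntroduction/
    ├── anat/
    │   └── Subject01/        <- anatomy data (MRI, surfaces) in .mat format
    └── data/
        └── Subject01/
            └── Run01/        <- functional data (recordings, channel file, results)
```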

Create your first protocol

  1. Menu File > New protocol.

    menuFile.gif

  2. Edit the protocol name and enter: "TutorialIntroduction".
    It will automatically update the anatomy and datasets paths. Do not edit these paths manually, unless you work with a non-standard database organization and know exactly what you are doing.

  3. Default properties for the subjects: These are the default settings that are used when creating new subjects. It is then possible to override these settings for each subject individually.
    • Default anatomy: (MRI and surfaces)

      • No, use individual anatomy:
        Select when you have individual MRI scans for all the participants of your study.

      • Yes, use default anatomy:
        Select when you do not have individual scans for the participants, and you would like to use one of the anatomy templates available in Brainstorm.

    • Default channel file: (Sensors names and positions)

      • No, use one channel file per acquisition run: Default for all studies
        Different head positions: Select this if the head position may differ for one subject. This is usually not the case in EEG, where the electrodes stay in place for the whole experiment. In MEG, this is a common setting: one recording session is split into multiple acquisition runs, and the position of the subject's head in the MEG might differ between two runs.
        Different number of channels: Another use case is when you have multiple recordings for the same subject that do not have the same number of channels. You cannot share the channel file if the list of channels is not strictly the same for all the files.
        Different cleaning procedures: If you are cleaning artifacts from each acquisition run separately using SSP or ICA projectors, you cannot share the channel file between them (the projectors are saved in the channel file).

      • Yes, use one channel file per subject: Use with caution
        This setting can be suitable for EEG: the electrodes are in the same position for all the files recorded from one subject, and the number of channels is the same for all the files. However, to use this option, you should not be using SSP/ICA projectors on the recordings, or they should be computed for all the files at once. This may lead to confusion and sometimes to manipulation errors, which is why we decided not to recommend this setting.

      • Yes, use only one global channel file: Not recommended
        This is never a recommended setting. It could be used for an EEG study where you use only standard EEG positions on a standard anatomy, but only if you are not doing any advanced source reconstruction. If you share the position of the electrodes between all the subjects, it will also share the source models, which depend on the quality of the recordings for each subject. This may be difficult to understand at this point; it will make more sense later in the tutorials.

  4. In the context of this study, we are going to use the following settings:

    • No, use individual anatomy: Because we have access to a T1 MRI scan of the subject.

    • No, use one channel file per condition/run: The typical MEG setup.

  5. Click on [Create].

    createNewProtocol.gif

Protocol exploration

The protocol is created and you can now see it in the database explorer. It is represented by the top-most node in the tree.

  • You can switch between anatomy and functional data with the three buttons just above the database explorer. Read the tooltips of the buttons to see which one does what.
  • In the Anatomy view, there is a Default anatomy node. It contains the ICBM152 anatomy, distributed by the Montreal Neurological Institute (MNI), which is one of the template anatomy folders that are distributed with Brainstorm.

  • The Default anatomy node contains the MRI and the surfaces that are used for the subjects without an individual anatomy, or for registering multiple subjects to the same anatomy for group analysis.

  • There are no subjects in the database yet, so the Functional data views are empty.

  • Everything you can do with an object in the database explorer (anatomy files, subjects, protocol) is accessible by right-clicking on it.

  • treeNewProtocol.gif

Set up a backup

As with any computer work, your Brainstorm database is always at risk. Software bugs, computer or network crashes and manipulation errors can cause the loss of months of data curation and computation. If the database structure gets corrupted, or if you accidentally delete or modify some files, you might not be able to get your data back. There is no undo button!

You created your database; now take some time to make it safe. If you are not familiar with backup systems, watch some online tutorials explaining how to set up an automatic daily or weekly backup of your sensitive data. It might seem annoying and unnecessary now, but it could save you weeks in the future.

Changing the temporary folder

Some processes need to create temporary files on the hard drive. For example, when epoching MEG/EEG recordings, Brainstorm first creates a temporary folder "import_yymmdd_hhmmss", stores all the epochs in it, then moves them to the database when the epoching process is completed. The name of the temporary folder indicates its creation time (year/month/day_hour_minutes_seconds). At the end of the process, all the temporary files should be deleted automatically.

The default folder where Brainstorm stores its temporary files is located in the user folder ($HOME/.brainstorm/tmp/), so before importing recordings or calculating large models, you have to make sure you have enough storage space available.

If you work on a centralized network where all the computers share the same resources, the system administrators may impose limited disk quotas on all users and encourage them to use local hard drives instead of the limited, shared user folder. In such a context, Brainstorm may quickly fill up your quota and at some point block your user account.

If the storage space available for your user folder is limited (less than 10 GB), you may have to change the temporary folder used by Brainstorm. Select the menu File > Edit preferences and set the temporary directory to a folder that is local to your computer and not subject to storage limitations.

If a process crashes or is killed before it deletes its temporary files, they remain in the temporary folder until explicitly deleted. When starting Brainstorm, you will be offered the option to delete the previous temporary files: always accept, unless they were generated by another session running simultaneously. Alternatively, you can delete these temporary files by clicking on the Empty button in the same preferences window. More information in the Scripting tutorial.
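The timestamped naming convention of these temporary folders can be sketched as follows (illustrative Python, not Brainstorm code):

```python
from datetime import datetime

def temp_folder_name(now=None):
    """Build a Brainstorm-style temporary folder name: import_yymmdd_hhmmss."""
    now = now or datetime.now()
    return now.strftime("import_%y%m%d_%H%M%S")

# Example: a process started on 2015-01-28 at 14:05:09
print(temp_folder_name(datetime(2015, 1, 28, 14, 5, 9)))  # import_150128_140509
```

Folder names of this form in the tmp/ directory therefore tell you when each leftover import was started, which helps decide whether they belong to a session that is still running.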

preferences.gif

Summary

  • Different folders for:
    • the program (brainstorm3).

    • the database (brainstorm_db).

    • your original recordings.

  • Never modify the contents of the database folder by yourself.
  • Do not put the original recordings in any of the Brainstorm folders, import them with the interface.
  • Do not try to maximize the Brainstorm window: keep it small on one side of your screen.

Roadmap

The workflow described in these introduction tutorials includes the following steps:

  • Importing the anatomy of the subjects, the definition of the sensors and the recordings.
  • Pre-processing, cleaning, epoching and averaging the EEG/MEG recordings.
  • Estimating sources from the imported recordings.
  • Computing measures from the brain signals of interest in sensor space or source space.

workflow.gif

Advanced

Moving a database

If you are running out of disk space or need to share your database with someone else, you may need to copy or move your protocols to a different folder or drive. Each protocol is handled independently by Brainstorm, therefore in order to move the entire database folder (brainstorm_db), you need to repeat the operations below for each protocol in your database.

Copy the raw files

The original continuous files are not saved in the Brainstorm database. The "Link to raw files" entries depend on static paths on your local computer and cannot be moved easily to a new computer. You can copy the original files inside the database before moving it to a different computer or hard drive, using the menu: File > Export protocol > Copy raw files to database. This makes local copies in .bst format of all your original files. The resulting protocol is larger but portable. This can also be done file by file: right-click > File > Copy to database.

Export a protocol

The easiest option to share a protocol with a collaborator is to export it as a zip file.

  • Export: Use the menu File > Export protocol > Export as zip file.
    Avoid using spaces and special characters in the zip file name.

  • Import: Use the menu File > Load protocol > Load from zip file.
    The name of the protocol created in the brainstorm_db folder is the name of the zip file. If there is already a protocol with this label, Brainstorm will return an error. To import the protocol under a different name, simply rename the zip file before importing it.

  • Size limitation: This solution is limited to smaller databases: creating zip files larger than a few GB can take a long time or even crash. For larger databases, prefer the other options below.
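Because the imported protocol takes its name from the archive, renaming the zip file is all it takes to import it under a new name. A minimal sketch (file names are examples; the touch command stands in for a real exported protocol):

```shell
cd "$(mktemp -d)"                 # scratch directory for the example
touch TutorialIntroduction.zip    # stand-in for a real exported protocol
# Rename before loading, so it imports as protocol "TutorialIntroduction_copy":
mv TutorialIntroduction.zip TutorialIntroduction_copy.zip
```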

Export a subject

Similar to the protocol export, but extracts only the files needed for a single subject.

  • Export: Right-click on the subject > File > Export subject.

  • Import as new protocol: Use the menu File > Load protocol > Load from zip file.

  • Import in an existing protocol: Use the menu File > Load protocol > Import subject from zip.

Move a protocol

To move a protocol to a different location:

  1. [Optional] Set up a backup of your entire brainstorm_db folder if you haven't done it yet. There will be no undo button to press if something bad happens.
  2. [Optional] Copy the raw files to the database (see above)
  3. Unload: Menu File > Delete protocol > Only detach from database.

  4. Move: Move the entire protocol folder to a different location. Remember that a protocol folder should be located in the "brainstorm_db" folder and should contain only two subfolders, "data" and "anat". Never move or copy a single subject manually.

  5. Load: Menu File > Load protocol > Load from folder > Select the new location of the protocol

  6. If you want to move the entire "brainstorm_db" folder at once, make sure you detach all the protocols in your Brainstorm installation first.

Duplicate a protocol

To duplicate a protocol in the same computer:

  • Copy: Make a full copy of the protocol to duplicate in the brainstorm_db folder, e.g. TutorialIntroduction => TutorialIntroduction_copy. Avoid using any space or special character in the new folder name.

  • Load: Menu File > Load protocol > Load from folder > Select the new protocol folder
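The copy step can be done from a terminal. A sketch of the operation (a temporary directory stands in for brainstorm_db here so the example is self-contained; folder names follow the example above):

```shell
cd "$(mktemp -d)"                          # stands in for brainstorm_db
mkdir -p TutorialIntroduction/anat TutorialIntroduction/data
# Full copy of the protocol folder; no spaces or special characters in the name:
cp -r TutorialIntroduction TutorialIntroduction_copy
```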



Tutorial 2: Import the subject anatomy

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet, Chinmay Chinara

Brainstorm organizes most of its database and processing stream around handling anatomical information together with the MEG/EEG recordings, because its primary focus is to estimate brain sources from MEG/EEG, which ideally requires accurate spatial modelling of the head and sensors.

If you don't have anatomical scans of your subjects, or are not interested in any spatial display, various solutions will be presented along these tutorials, starting from the last section of this page. Be patient and follow everything as instructed; you will get to the information you need.

Download

The dataset we will use for the introduction tutorials is available online.

  • Go to the Download page of this website, and download the file: sample_introduction.zip

  • Unzip it in a folder that is not in any of the Brainstorm folders (program folder or database folder).
  • It is really important that you always keep your original data files in a separate folder: the program folder can be deleted when updating the software, and the contents of the database folder are supposed to be manipulated only by the program itself.

Create a new subject

The protocol is currently empty. You need to add a new subject before you can start importing data.

  1. Switch to the anatomy view (first button just above the database explorer).
  2. Right-click on the top folder TutorialIntroduction > New subject.
    Alternatively: Use the menu File > New subject.

    create_subject.gif

  3. The window that opens lets you edit the subject name and settings. It offers the same options for the default anatomy and channel file as before: for a single subject, you can override the default values set at the protocol level if you need to. See the previous tutorial for help.

    create_subject_options.gif

  4. Keep all the default settings and click on [Save].

Right-click doesn't work

If the right-click doesn't work anywhere in the Brainstorm interface and you cannot get to see the popup menus in the database explorer, try to connect a standard external mouse with two buttons. Some Apple pointing devices do not interact very well with Java/Matlab.

Alternatively, try to change the configuration of your trackpad in the system preferences.

Import the anatomy

For estimating the brain sources of the MEG/EEG signals, the anatomy of the subject must include at least three files: a T1-weighted MRI volume, the envelope of the cortex and the surface of the head.

Brainstorm cannot extract the cortex envelope from the MRI; you have to run this operation with an external program of your choice. The results of the MRI segmentation obtained with the following programs can be imported automatically: FreeSurfer, BrainSuite, BrainVISA, CAT12, and CIVET. CAT12 is the only application fully interfaced with Brainstorm, and it is available for download as a Brainstorm plugin. However, FreeSurfer is widely considered a reference in this domain, which is why it is the solution we demonstrate in these tutorials.

The anatomical information for this study was acquired with a 1.5T MRI scanner; the subject had a marker placed on the left cheek. The MRI volume was processed with FreeSurfer 7.1, and the result of this automatic segmentation process is available in the downloaded folder sample_introduction/anatomy.

  1. Make sure that you are still in the anatomy view for your protocol.
  2. Right-click on the subject folder > Import anatomy folder:

    • Set the file format: FreeSurfer + Volume atlases

    • Select the folder: sample_introduction/anatomy

    • Click on [Open]
  3. Number of vertices of the cortex surface: 15000 (default value).
    This option defines the number of points that will be used to represent the cortex envelope. It will also be the number of electric dipoles we will use to model the activity of the brain. This default value of 15000 was chosen empirically as a good balance between the spatial accuracy of the models and the computation speed. More details later in the tutorials.

    import_anat_menu.gif

  4. The MRI views should be correct (axial/coronal/sagittal), you just need to make sure that the marker on the cheek is really on the left of the MRI. Then you can proceed with the fiducial selection.

    import_anat_marker.gif

Using the MRI Viewer

To help define these fiducial points, let's start with a brief description of the MRI Viewer:

  • Navigate in the volume:

    • Click anywhere on the MRI slices to move the cursor.
    • Use the sliders below the views.
    • Use the mouse wheel to scroll through slices (after clicking on the view to select it).
    • On a MacBook trackpad, use a two-finger swipe up/down to scroll.

  • Zoom: Use the magnifying glass buttons at the bottom of the figure, or the corresponding shortcuts (keyboard [+]/[-], or [CTRL]+mouse wheel).

  • Image contrast: Click and hold the right mouse button on one image, then move up and down.

  • Select a point: Place the cursor at the spot you want and click on the corresponding [Set] button.

  • Display the head surface: Click on the button "View 3D head surface" to compute and display the head surface. Click on the surface to move the cursor coordinates in the MRI Viewer figure. When the fiducials are not defined, they appear floating a few centimeters away from the head.

    head_fiducials.gif

  • More information about all the coordinates displayed in this figure: CoordinateSystems

Fiducial points

Brainstorm uses a few reference points defined in the MRI to align the different files:

  • Required: Three points to define the Subject Coordinate System (SCS):

    • Nasion (NAS), Left ear (LPA), Right ear (RPA)
    • This is used to register the MEG/EEG sensors on the MRI.
  • Optional: Three additional anatomical landmarks (NCS):

    • Anterior commissure (AC), Posterior commissure (PC) and any interhemispheric point (IH).
    • Computing the MNI normalization sets these points automatically (see below), therefore setting them manually is not required.
  • For instructions on finding these points, read the following page: CoordinateSystems.

Nasion (NAS)

  • In this study, we used the real nasion position instead of the CTF coil position.

    import_anat_nas.gif
    MRI coordinates: 127, 213, 139

Left ear (LPA)

  • In this study, we used the connection points between the tragus and the helix (red dot on the CoordinateSystems page) instead of the CTF coil position or the left and right preauricular points.

    import_anat_lpa.gif
    MRI coordinates: 52, 113, 96

Right ear (RPA)

  • import_anat_rpa.gif
    MRI coordinates: 202, 113, 91

Anterior commissure (AC)

  • import_anat_ac.gif
    MRI coordinates: 127, 119, 149

Posterior commissure (PC)

  • import_anat_pc.gif
    MRI coordinates: 128, 93, 141

Inter-hemispheric point (IH)

  • This point can be anywhere in the mid-sagittal plane; these coordinates are just an example.

    import_anat_ih.gif
    MRI coordinates: 131, 114, 206
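For reference, the six points selected in this tutorial can be gathered in one place (MRI voxel coordinates copied from the sections above; illustrative Python only, Brainstorm stores the fiducials in its own .mat files):

```python
# MRI (voxel) coordinates of the fiducials used in this tutorial.
fiducials = {
    "NAS": (127, 213, 139),  # nasion
    "LPA": (52, 113, 96),    # left ear
    "RPA": (202, 113, 91),   # right ear
    "AC":  (127, 119, 149),  # anterior commissure
    "PC":  (128, 93, 141),   # posterior commissure
    "IH":  (131, 114, 206),  # inter-hemispheric (any mid-sagittal point)
}

# Sanity check: NAS, AC, PC and IH should lie near the mid-sagittal plane,
# roughly halfway between LPA and RPA along the left-right axis.
mid_x = (fiducials["LPA"][0] + fiducials["RPA"][0]) / 2
print(mid_x)  # 127.0
```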

Type the coordinates

  • If you have the coordinates of the fiducials already written somewhere, you can type or copy-paste them instead of pointing at them with the cursor. Right-click on the figure > Edit fiducials positions > MRI coordinates.

    import_anat_typefid.gif

Validation

  • Once you are done with the fiducial selection, click on the [Save] button, at the bottom-right corner of the MRI Viewer figure.
  • The automatic import of the FreeSurfer folder resumes. At the end you get many new files in the database and a 3D view of the cortex and scalp surface. Here again you can note that the marker is visible on the left cheek, as expected.

    import_anat_files.gif

  • The next tutorial will describe these files and explore the various visualization options.
  • Close all figures and clear memory: Use this button in the toolbar of the Brainstorm window to close all the open figures at once and clear all the temporary data that the program keeps in memory for faster display.

    close_all.gif

Graphic bugs

If you do not see the cortex surface through the head surface, or if you observe any other issue with the 3D display, there might be an issue with the OpenGL drivers. You may try the following options:

  • Update the drivers for your graphics card.
  • Upgrade your version of Matlab.
  • Run the compiled version of Brainstorm (see Installation).

  • Turn off the OpenGL hardware acceleration: Menu File > Edit preferences > Software or Disabled.

  • Send a bug report to MathWorks.

For Linux users with both an integrated GPU and an NVIDIA GPU: if you experience the problems above, or slow navigation in the 3D display (usually with 2 or more surfaces), verify that you are using the NVIDIA GPU as the primary GPU. More information depending on your distribution: Ubuntu, Debian and Arch Linux.

MNI normalization

For comparing results with the literature or with other imaging modalities, the normalized MNI coordinate system is often used. To be able to get "MNI coordinates" for individual brains, an extra step of normalization is required.

To compute a transformation between the individual MRI and the ICBM152 space, you have two options; use the one of your choice:

  • In the MRI Viewer: Click on the link "Click here to compute MNI normalization".

  • In the database explorer: Right-click on the MRI > MNI normalization.

Select the first option maff8: This method is embedded in Brainstorm and does not require the installation of SPM12. However, it requires the automatic download of the file SPM12 Tissue Probability Maps. If you do not have access to internet, see the instructions on the Installation page.

It is based on an affine co-registration with the MNI ICBM152 template from the SPM software, described in the following article: Ashburner J, Friston KJ, Unified segmentation, NeuroImage 2005.

mni_norm.gif

Note that this normalization does not modify the anatomy, it just saves a transformation that enables the conversion between Brainstorm coordinates and MNI coordinates. After computing this transformation, you have access to one new line of information in the MRI Viewer.

  • mni_coordinates.gif

This operation also automatically sets some anatomical points (AC, PC, IH) if they are not defined yet. After the computation, make sure they are correctly positioned. You can run this computation while importing the anatomy, when the MRI viewer is displayed for the first time; this will save you the trouble of marking the AC/PC/IH points manually.
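The saved normalization can be thought of as a 4x4 affine matrix: multiplying a point (in homogeneous coordinates) by this matrix yields its MNI coordinates. A sketch with a made-up matrix, NOT the transform Brainstorm computes for this subject:

```python
# Hypothetical 4x4 affine (scaling + translation), for illustration only.
AFFINE = [
    [1.05, 0.00, 0.00, -2.0],
    [0.00, 0.98, 0.00,  3.5],
    [0.00, 0.00, 1.10, -1.2],
    [0.00, 0.00, 0.00,  1.0],
]

def apply_affine(matrix, point):
    """Apply a 4x4 affine transform to a 3D point (homogeneous coordinates)."""
    x, y, z = point
    return tuple(
        matrix[i][0] * x + matrix[i][1] * y + matrix[i][2] * z + matrix[i][3]
        for i in range(3)
    )

# A subject-space point and its "MNI" coordinates under the example affine:
print(apply_affine(AFFINE, (10.0, 20.0, 30.0)))  # approximately (8.5, 23.1, 31.8)
```

Because the transformation is stored rather than applied to the volume, Brainstorm can display both coordinate systems side by side in the MRI Viewer without modifying the anatomy.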

MacOS troubleshooting

Error "mexmaci64 cannot be opened because the developer cannot be verified":

Alternatives

The head surface looks bad: You can try computing another one with different properties.

No individual anatomy: If you do not have access to an individual MR scan of the subject, or if its quality is too low to be processed with FreeSurfer, you have other options:

Other options for importing the FreeSurfer anatomical segmentation:

  • Automated import: We selected the menu Import anatomy folder for a semi-manual import, in order to manually select the position of the anatomical fiducials and the number of points of the cortex surface. If you are not interested in setting the positions of the fiducials accurately, you can use the menu Import anatomy folder (auto): it computes the linear MNI normalization first, uses default fiducials defined in MNI space, and automatically uses 15000 vertices for the cortex.

  • FreeSurfer options: We selected the file format FreeSurfer + Volume atlases for importing the ASEG parcellation in the database. This slows down the import and increases the size on the hard drive. If you know you won't use it, select FreeSurfer instead. A third menu is available to also import the cortical thickness as source files in the database.



Tutorial 3: Display the anatomy

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

Anatomy folder

The anatomy of the subject "Subject01" should now contain all the files Brainstorm could import from the FreeSurfer segmentation results:

  • MRI: T1-weighted MRI, resampled and re-aligned by FreeSurfer.

  • ASEG / DKT / Desikan-Killiany / Destrieux: Volume parcellations (including subcortical regions)

  • Head mask: Head surface, generated by Brainstorm.
    If this doesn't look good for your subject, you can recalculate another head surface using different parameters: right-click on the subject folder > Generate head surface.

  • Cortex_336231V: High-resolution pial envelope generated by FreeSurfer.

  • Cortex_15002V: Low-resolution pial envelope, downsampled from the original one by Brainstorm.

  • Cortex_cereb_17005V: Low-res pial envelope + cerebellum surface extracted from ASEG

  • White_*: White matter envelope, high and low resolution.

  • Mid_*: Surface that represents the mid-point between the white and cortex envelopes.

  • Subcortical: The same FreeSurfer subcortical regions as in the ASEG volume, but tessellated as surfaces.

  • For more information about the files generated by FreeSurfer, read the FreeSurfer page.

    import_anat_files.gif

Default surfaces

  • There are four possible surface types: cortex, inner skull, outer skull, head.
  • For each type of surface, one file is selected as the one to use by default for all the operations.
  • This selected surface is displayed in green.

  • Here, there is only one "head" surface, which is selected.
  • The mid, cortex and white surfaces can all be used as "cortex" surfaces, but only one can be selected at a time. By default, the low-resolution cortex should be selected and displayed in green.

  • To select a different cortex surface, you can double-click on it or right-click > Set as default.

MRI Viewer

Right-click on the MRI to get the list of the available display menus:

  • mri_popup.gif

Open the MRI Viewer. This interface was already introduced in the previous tutorial. It corresponds to the default display menu if you double-click on the MRI from the database explorer. Description of the window:

  • MIP Anatomy: Maximum Intensity Projection. When this option is selected, the MRI viewer shows the maximum intensity value across all the slices in each direction. This maximum does not depend on the selected slice, therefore if you move the cursor, the image stays the same.

  • Neurological/Radiological: There are two standard orientations for displaying medical scans. In the neurological orientation, the left hemisphere is on the left of the image, in the radiological orientation the left hemisphere is on the right of the image.

  • Coordinates: Position of the cursor in different coordinate systems. See: CoordinateSystems

  • Colormap: Click on the colorbar and move up/down (brightness) or left/right (contrast)

  • Popup menu: All the figures have additional options available in a popup menu, accessible with a right-click on the figure. The colormap options will be described later in the tutorials; you can test the other options by yourself.

    mri_viewer.gif mri_viewer_mip.gif
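The MIP option can be understood as collapsing the volume with a maximum along each axis, which is why the image does not change when you move the cursor. A numpy sketch of the idea (illustration only, not Brainstorm's implementation):

```python
import numpy as np

# Toy 3D volume, indexed as [sagittal x coronal x axial]
vol = np.zeros((4, 5, 6))
vol[2, 3, 1] = 7.0   # one bright voxel

# One Maximum Intensity Projection per orientation:
mip_sagittal = vol.max(axis=0)  # shape (5, 6)
mip_coronal  = vol.max(axis=1)  # shape (4, 6)
mip_axial    = vol.max(axis=2)  # shape (4, 5)

# The bright voxel shows up in all three projections,
# regardless of which slice the cursor is on
assert mip_sagittal[3, 1] == 7.0
assert mip_coronal[2, 1] == 7.0
assert mip_axial[2, 3] == 7.0
```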

MRI contact sheets

You can get collections of slices in any direction (axial, coronal or sagittal) with the popup menus in the database explorer or the MRI Viewer figure.

  • Zoom: mouse wheel (or two finger-move on a MacBook pad)

  • Move in zoomed image: click + move

  • Adjust contrast: right click + move up/down

    mri_axial.gif

MRI in 3D

Right-click on the MRI file in the database explorer > Display > 3D orthogonal slices.

  • mri_3d.gif

  • Simple mouse operations:

    • Rotate: Click + move. Note that two different types of rotations are available: at the center of the figure the object will follow your mouse; on the sides it will do a 2D rotation of the image.

    • Zoom: Mouse wheel, or two finger-move on a MacBook pad.

    • Move: Left+right click + move (or middle-click + move).

    • Colormap: Click on the colorbar and move up/down (brightness) or left/right (contrast).

    • Reset view: Double click anywhere on the figure.

    • Reset colormap: Double-click on the colorbar.

    • Move slices: Right click on the slice to move + move.
      (or use the Resect panel in the Surface tab)

  • Popup operations (right-click on the figure):
    • Colormap: Edit the colormap, detailed in another tutorial.

    • MRI display: For now, contains mostly the MIP option (Maximum Intensity Projection).

    • Get coordinates: Pick a point in any 3D view and get its coordinates.

    • Snapshots: Save images or movies from this figure.

    • Figure: Change some of the figure options or edit it using the Matlab tools.

    • Views: Set one of the predefined orientations.

    • Note the indications in the right part of the popup menu: they represent the keyboard shortcut for each menu.
  • Keyboard shortcuts:
    • Views shortcuts (0,1,2...9 and [=]): Remember them, they will be very useful when exploring the cortical sources. To switch from left to right, it is much faster to press a key than having to rotate the brain with the mouse.

    • Zoom: Keys [+] and [-] for zooming in and out.

    • Move slices: [x]=Sagittal, [y]=Coronal, [z]=Axial, hold [shift] for reverse direction.

  • Surface tab (in the main Brainstorm window, right of the database explorer):
    • This panel is primarily dedicated to the display of the surfaces, but some controls can also be useful for the 3D MRI view.
    • Transparency: Changes the transparency of the slices.

    • Smooth: Changes the background threshold applied to the MRI slices. If you set it to zero, you will see the full slices, as extracted from the volume.

    • Resect: Changes the position of the slices in the three directions.

Surfaces

To display a surface you can either double-click on it or right-click > Display. The tab "Surface" contains buttons and sliders to control the display of the surfaces.

  • The mouse and keyboard operations described for the 3D MRI view also apply here.
  • Smooth: Inflates the surface to make all the parts of the cortex envelope visible.
    This is just a display option, it does not actually modify the surface.

  • Color: Changes the color of the surface.

  • Sulci: Shows the bottom of the cortical folds with a darker color. We recommend keeping this option selected for the cortex; it helps with the interpretation of source locations on smoothed brains.

  • Edge: Displays the faces of the surface tessellation.

  • Resect: The sliders and the buttons Left/Right/Struct at the bottom of the panel allow you to cut the surface or reorganize the anatomical structures in various ways.

  • Multiple surfaces: If you open two surfaces from the same subject, they will be displayed on the same figure. Then you need to select the surface you want to edit before changing its properties. The list of available surfaces is displayed at the top of the Surface tab.

  • At the bottom of the Surface tab, you can read the number of vertices and faces in the tesselation.

    surface_options.gif surface_mesh.gif

Get coordinates

  • Close all the figures. Open the cortex surface again.
  • Right-click on the 3D figure, select "Get coordinates".
  • Click anywhere on the cortex surface: a yellow cross appears and the coordinates of the point are displayed in all the available coordinates systems.
  • You can click on [View/MRI] to see where this point is located in the MRI, using the MRI Viewer.

    surface_coordinates.gif

Subcortical regions: Volume

The standard FreeSurfer segmentation pipeline generates multiple volume parcellations of anatomical regions, all including the ASEG subcortical parcellation. Double-click on a volume parcellation to open it for display. This opens the MRI Viewer with two volumes: the T1 MRI as the background, and the parcellation as a semi-transparent overlay.

  • Adjust the transparency of the overlay from the Surface tab, slider Transp.
  • The name of the region under the cursor appears at the top-right corner. The integer before this name is the label of the ROI, i.e. the integer value of the voxel under the cursor in the parcellation volume.

    aseg_volume.gif
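The region name is obtained by looking up the integer voxel value in the parcellation's label table (the {value, name, RGB} layout of the Labels field described in the section "On the hard drive: MRI"). A sketch of the lookup, with hypothetical label entries:

```python
# Label table: {integer value, region name, RGB color}, as in the Labels field
labels = [
    (0,  'Background',    (0, 0, 0)),
    (1,  'Precentral L',  (203, 142, 203)),
    (17, 'Hippocampus L', (220, 216, 20)),   # hypothetical entry
]
label_names = {value: name for value, name, _rgb in labels}

def region_under_cursor(parcellation, i, j, k):
    """Return 'value name', as shown in the top-right corner of the MRI Viewer."""
    value = parcellation[i][j][k]
    return f"{value} {label_names.get(value, 'Unknown')}"

# Toy 1x1x1 "volume" whose single voxel carries label 17
assert region_under_cursor([[[17]]], 0, 0, 0) == '17 Hippocampus L'
```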

  • Close the MRI viewer.
  • Double-click again on the subject's MRI to open it in the MRI viewer.
  • Observe that the anatomical label is also present at the top-right corner of this figure; in this case, the integer represents the voxel value of the displayed MRI. This label information comes from the ASEG file: whenever there are volume parcellations available for the subject, one of them is loaded in the MRI Viewer by default. The name of the selected parcellation is displayed in the figure title bar.
  • You can change the selected parcellation with the right-click popup menu Anatomical atlas: change the parcellation scheme, disable its use to make the MRI viewer faster, or show the parcellation volume as an overlay (menu Show atlas). More information in the tutorial Using anatomy templates.

    aseg_label.gif

Subcortical regions: Surface

Brainstorm reads the ASEG volume labels and tessellates some of these regions, then groups all the meshes in a large surface file where the regions are identified in an atlas called "Structures". It identifies: 8 bilateral structures (accumbens, amygdala, caudate, hippocampus, pallidum, putamen, thalamus, cerebellum) and 1 central structure (brainstem).

These structures can be useful for advanced source modeling, but will not be used in the introduction tutorials. Please refer to the advanced tutorials for more information: Volume source estimation and Deep cerebral structures.

  • aseg.gif

With the button [Struct] at the bottom of the Surface tab, you can see the structures separately.

  • resect_struct.gif

Registration MRI/surfaces

The MRI and the surfaces are represented using different coordinate systems and could be misregistered for various reasons. If you are using the automated segmentation pipeline from FreeSurfer or BrainSuite you should never have any problem, but if something goes wrong, or in the case of more manual import procedures, it is always good to check that the MRI and the surfaces are correctly aligned.

  • Right-click on the low-res cortex > MRI Registration > Check MRI/surface registration

  • The calculation of the interpolation between the MRI and the cortex surface takes a few seconds, but the result is then saved in the database and will be reused later.
  • The yellow lines represent the re-interpolation of the surface in the MRI volume.

    surface_register.gif

Advanced

Interaction with the file system

For most manipulations, it is not necessary to know exactly what is going on at the level of the file system, in the Brainstorm database directory. However, since many things are not accessible from the Brainstorm interface, you may sometimes find it useful to manipulate some pieces of data directly from the Matlab command window.

Where are the files ?

  • Leave your mouse for a few seconds over any node in the database explorer, a tooltip will appear with the name and path of the corresponding file on the hard drive.
  • Paths are relative to the current protocol path (brainstorm_db/TutorialIntroduction). What is displayed in the Brainstorm window is a comment and may have nothing to do with the real file name. For instance, the file name corresponding to "head mask" is Subject01/tess_head_mask.mat.

  • Almost all the files in the database are in Matlab .mat format. You can load and edit them easily in the Matlab environment, where they appear as structures with several fields.

    file_tooltip.gif
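Because these are standard .mat files, you can inspect them outside the interface: in Matlab, a simple load('tess_head_mask.mat') does it, and from Python scipy can read them as well. A self-contained sketch using an in-memory file as a stand-in for a database file (the actual paths and fields depend on your protocol):

```python
import io
import numpy as np
from scipy.io import savemat, loadmat

# Write a minimal structure to an in-memory .mat file
# (stand-in for a real Brainstorm database file)
buf = io.BytesIO()
savemat(buf, {'Comment': 'head mask', 'Vertices': np.zeros((10, 3))})

# Read it back: each saved variable appears as an entry of a dict
buf.seek(0)
contents = loadmat(buf)
assert contents['Comment'][0] == 'head mask'
assert contents['Vertices'].shape == (10, 3)
```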

Popup menu: File

Right-click on a surface file: many menus can lead you to the files and their contents.

  • file_menu.gif

  • View file contents: Display all the fields in the Matlab .mat file.

    file_contents.gif

  • View file history: Review the History field in the file, that records all the operations that were performed on the file since it was imported in Brainstorm.

    file_history.gif

  • Export to file: Export to one of the supported mesh file formats.

  • Export to Matlab: Load the contents of the .mat file in the Matlab base workspace. It is then accessible from the Matlab command window.

  • Import from Matlab: Replace the selected file with the content of a variable from the Matlab base workspace. Useful to save back in the database a structure that was exported and modified manually with the Matlab command window.

  • Copy / Cut / Paste: Allow you to copy/move files in the database explorer. Keyboard shortcuts for these menus are the standard Windows shortcuts (Ctrl+C, Ctrl+X, Ctrl+V). The database explorer also supports drag-and-drop operations for moving files between different folders.

  • Delete: Delete a file. Keyboard shortcuts: Delete key.

  • Rename: Change the Comment field in the file. It "renames" the file in the database explorer, but does not change the actual file name on the hard drive. Keyboard shortcut: F2

  • Copy file path to clipboard: Copies the full file name into the system clipboard, so that you can paste it in any other window (Ctrl+V or Paste menu)

  • Go to this directory (Matlab): Change the current Matlab path, so that you can access the file from the Matlab Command window or the Matlab Current directory window

  • Show in file explorer: Open a file explorer window in this directory.

  • Open terminal in this folder: Start a system console in the file directory (Linux and MacOS only).

What are all these other files ?

  • If you look in brainstorm_db/TutorialIntroduction with the file explorer of your operating system, you'll find many other directories and files that are not visible in the database explorer.

    file_disk.gif

  • The protocol TutorialIntroduction is divided into anatomy (anat) and datasets (data) directories:

    • Each subject in anat is described by an extra file: brainstormsubject.mat

    • Each folder in data is described by an extra file: brainstormstudy.mat

  • anat/@default_subject: Contains the files of the default anatomy (Default anatomy)

  • data/@default_study: Files shared between different subjects (Global common files)

  • data/@inter: Results of inter-subject analysis

  • data/Subject01/@default_study: Files shared between different folders in Subject01

  • data/Subject01/@intra: Results of intra-subject analysis (across different folders)

Advanced

On the hard drive: MRI

Right-click on the MRI > File > View file contents:

  • contents_mri.gif

Structure of the MRI files: subjectimage_*.mat

  • Comment: String displayed in the database explorer to represent the file.

  • Cube: [Nsagittal x Ncoronal x Naxial] full MRI volume. Cube(1,1,1) is in the left, posterior, inferior corner.

  • Voxsize: Size of one voxel in millimeters (sagittal, coronal, axial).

  • SCS: Defines the Subject Coordinate System. Points below are in MRI (millimeters) coordinates.

    • NAS: (x,y,z) coordinates of the nasion fiducial.

    • LPA: (x,y,z) coordinates of the left ear fiducial.

    • RPA: (x,y,z) coordinates of the right ear fiducial.

    • R: [3x3] rotation matrix from MRI coordinates to SCS coordinates.

    • T: [3x1] translation matrix from MRI coordinates to SCS coordinates.

    • Origin: MRI coordinates of the point with SCS coordinates (0,0,0).

  • NCS: Defines the MNI coordinate system, either with a linear or a non-linear transformation.

    • AC: (x,y,z) coordinates of the Anterior Commissure.

    • PC: (x,y,z) coordinates of the Posterior Commissure.

    • IH: (x,y,z) coordinates of an Inter-Hemispheric point.

    • (Linear transformation)
      • R: [3x3] rotation matrix from MRI coordinates to MNI coordinates.

      • T: [3x1] translation matrix from MRI coordinates to MNI coordinates.

    • (Non-linear transformation)
      • iy: 3D floating point matrix: Inverse MNI deformation field, as in SPM naming conventions. Same size as the Cube matrix, it gives for each voxel its coordinates in the MNI space, and is therefore used to convert from MRI coordinates to MNI coordinates.

      • y: 3D floating point matrix: Forward MNI deformation field, as in SPM naming conventions. For some MNI coordinates, it gives their correspondence in the original MRI space. To be interpreted, it has to be used with the matrix y_vox2ras.

      • y_vox2ras: [4x4 double], transformation matrix that converts from voxel coordinates of the y volume to MNI coordinates.

      • y_method: Algorithm used for computing the normalization ('segment'=SPM12 Segment)

    • Origin: MRI coordinates of the point with NCS coordinates (0,0,0).

  • Header: Header from the original file format (.nii, .mgz, ...)

  • Histogram: Result of the internal analysis of the MRI histogram, mainly to detect background level.

  • InitTransf: [Ntransform x 2] cell-matrix: Transformations that are applied to the MRI before importing the surfaces. Example: {'vox2ras', [4x4 double]}

  • Labels: [Nlabels x 3] cell-matrix: For anatomical parcellations, this field contains the names and RGB colors associated with each integer label in the volume. Example:
    {0, 'Background', [0 0 0]}
    {1, 'Precentral L', [203 142 203]}

  • History: List of operations performed on this file (menu File > View file history).
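Putting these fields together: a voxel index is converted to millimeters with Voxsize, then to SCS coordinates with the R and T matrices of the SCS structure. A numpy sketch with made-up values (Brainstorm's own conversion utilities are Matlab functions):

```python
import numpy as np

def voxel_to_scs(ijk, voxsize, R, T):
    """Voxel indices -> MRI mm (via Voxsize) -> SCS mm (via R and T)."""
    p_mri_mm = np.asarray(ijk) * np.asarray(voxsize)
    return R @ p_mri_mm + T

# Hypothetical values: 1 mm isotropic voxels, identity rotation,
# translation chosen so that voxel (128, 128, 128) maps to SCS (0, 0, 0)
voxsize = (1.0, 1.0, 1.0)
R = np.eye(3)
T = np.array([-128.0, -128.0, -128.0])

assert np.allclose(voxel_to_scs((128, 128, 128), voxsize, R, T), [0, 0, 0])
assert np.allclose(voxel_to_scs((130, 128, 128), voxsize, R, T), [2, 0, 0])
```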

Useful functions

  • /toolbox/io/in_mri_bst(MriFile): Read a Brainstorm MRI file and compute the missing fields.

  • /toolbox/io/in_mri(MriFile, FileFormat=[]): Read a MRI file (format is auto-detected).

  • /toolbox/io/in_mri_*.m: Low-level functions for reading all the file formats.

  • /toolbox/anatomy/mri_*.m: Routines for manipulating MRI volumes.

  • /toolbox/gui/view_mri(MriFile, ...): Display an imported MRI in the MRI viewer.

  • /toolbox/gui/view_mri_3d(MriFile, ...): Display an imported MRI in a 3D figure.

Advanced

On the hard drive: Surface

Right-click on any cortex surface > File > View file contents:

  • file_contents.gif

Structure of the surface files: tess_*.mat

  • Atlas: Array of structures, each entry is one menu in the drop-down list in the Scout tab.

    • Name: Label of the atlas (reserved names: "User scouts", "Structures", "Source model")

    • Scouts: List of regions of interest in this atlas, see the Scout tutorial.

  • Comment: String displayed in the database explorer to represent the file.

  • Curvature: [Nvertices x 1], curvature value at each point.

  • Faces: [Nfaces x 3], triangles constituting the surface mesh.

  • History: List of operations performed on this file (menu File > View file history).

  • iAtlas: Index of the atlas that is currently selected for this surface.

  • Reg: Structure with registration information, used to interpolate the subject's maps on a template.

  • SulciMap: [Nvertices x 1], binary mask marking the bottom of the sulci (1=displayed as darker).

  • tess2mri_interp: [Nvoxels x Nvertices] sparse interpolation matrix MRI<=>surface.

  • VertConn: [Nvertices x Nvertices] Sparse adjacency matrix, VertConn(i,j)=1 if i and j are neighbors.

  • Vertices: [Nvertices x 3], coordinates (x,y,z) of all the points of the surface, in SCS coordinates.

  • VertNormals: [Nvertices x 3], direction (x,y,z) of the normal to the surface at each vertex.
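The VertConn matrix can be rebuilt from Faces alone: two vertices are neighbors if they share an edge of a triangle. An illustrative sketch with scipy.sparse, on a hypothetical two-triangle mesh (not Brainstorm's actual code):

```python
import numpy as np
from scipy import sparse

def vert_conn(faces, n_vertices):
    """Sparse adjacency: conn[i, j] = 1 if vertices i and j share an edge."""
    faces = np.asarray(faces)
    # Each triangle (a, b, c) contributes the edges a-b, b-c, c-a
    i = np.concatenate([faces[:, 0], faces[:, 1], faces[:, 2]])
    j = np.concatenate([faces[:, 1], faces[:, 2], faces[:, 0]])
    conn = sparse.coo_matrix((np.ones(len(i)), (i, j)),
                             shape=(n_vertices, n_vertices))
    conn = conn + conn.T          # make it symmetric
    conn.data[:] = 1              # binary, even for edges shared by two triangles
    return conn.tocsr()

# Two triangles sharing the edge 1-2
conn = vert_conn([[0, 1, 2], [1, 3, 2]], 4)
assert conn[1, 2] == 1 and conn[2, 1] == 1
assert conn[0, 3] == 0            # vertices 0 and 3 are not neighbors
```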

Useful functions

  • /toolbox/io/in_tess_bst(SurfaceFile): Read a Brainstorm surface file and compute the missing fields.

  • /toolbox/io/in_tess(TessFile, FileFormat=[], sMri=[]): Read a surface file (format is auto-detected).

  • /toolbox/io/in_tess_*.m: Low-level functions for reading all the file formats.

  • /toolbox/anatomy/tess_*.m: Routines for manipulating surfaces.

  • /toolbox/gui/view_surface(SurfaceFile, ...): Display an imported surface in a 3D figure.

  • /toolbox/gui/view_surface_data(SurfaceFile, OverlayFile, ...): Display a surface with a source map.

  • /toolbox/gui/view_surface_matrix(Vertices, Faces, ...): Display a mesh in a 3D figure.



Tutorial 4: Channel file / MEG-MRI coregistration

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

The anatomy of your subject is ready. Before we can start looking at the MEG/EEG recordings, we need to make sure that the sensors (electrodes, magnetometers or gradiometers) are properly aligned with the MRI and the surfaces of the subject.

In this tutorial, we will start with a detailed description of the experiment and the files that were recorded, then we will link the original CTF files to the database in order to get access to the sensor positions, and finally we will explore the various options for aligning these sensors on the head of the subject.

License

This dataset (MEG and MRI data) was collected by the MEG Unit Lab, McConnell Brain Imaging Center, Montreal Neurological Institute, McGill University, Canada. The original purpose was to serve as a tutorial data example for the Brainstorm software project. It is presently released in the Public Domain, and is not subject to copyright in any jurisdiction.

We would appreciate though that you reference this dataset in your publications: please acknowledge its authors (Elizabeth Bock, Peter Donhauser, Francois Tadel and Sylvain Baillet) and cite the Brainstorm project seminal publication.

Presentation of the experiment

Experiment

  • One subject, two acquisition runs of 6 minutes each.
  • Subject stimulated binaurally with intra-aural earphones (air tubes+transducers), eyes open and looking at a fixation cross on a screen.
  • Each run contains:
    • 200 regular beeps (440Hz).
    • 40 easy deviant beeps (554.4Hz, 4 semitones higher).
  • Random inter-stimulus interval: between 0.7s and 1.7s, uniformly distributed.
  • The subject presses a button with the right index finger when detecting a deviant.
  • Auditory stimuli generated with the Matlab Psychophysics toolbox.
  • The specifications of this dataset were discussed initially on the FieldTrip bug tracker:
    http://bugzilla.fieldtriptoolbox.org/show_bug.cgi?id=2300.

MEG acquisition

  • Acquisition at 2400Hz, with a CTF 275 system, subject in sitting position

  • Recorded at the Montreal Neurological Institute in December 2013
  • Anti-aliasing low-pass filter at 600Hz, files saved with the 3rd order gradient
  • Downsampled to a lower sampling rate, from 2400Hz to 600Hz: the only purpose of this resampling is to make the introduction tutorials easier to follow on a regular computer.

  • Recorded channels (340):
    • 1 Stim channel indicating the presentation times of the audio stimuli: UPPT001 (#1)
    • 1 Audio signal sent to the subject: UADC001 (#316)
    • 1 Response channel recording the finger taps in response to the deviants: UDIO001 (#2)
    • 26 MEG reference sensors (#5-#30)
    • 274 MEG axial gradiometers (#31-#304)
    • 2 EEG electrodes: Cz, Pz (#305 and #306)
    • 1 ECG bipolar (#307)
    • 2 EOG bipolar (vertical #308, horizontal #309)
    • 12 Head tracking channels: Nasion XYZ, Left XYZ, Right XYZ, Error N/L/R (#317-#328)
    • 20 Unused channels (#3, #4, #310-#315, #329-#340)
  • 3 datasets:
    • S01_AEF_20131218_01_600Hz.ds: Run #1, 360s, 200 standard + 40 deviants

    • S01_AEF_20131218_02_600Hz.ds: Run #2, 360s, 200 standard + 40 deviants

    • S01_Noise_20131218_02_600Hz.ds: Empty room recordings, 30s long

  • Average reaction times for the button press after a deviant tone:
    • Run #1: 515ms +/- 108ms

    • Run #2: 596ms +/- 134ms

Head shape and fiducial points

  • 3D digitization using a Polhemus Fastrak device driven by Brainstorm (S01_20131218_01.pos)

  • More information: Digitize EEG electrodes and head shape

  • The output file is copied to each .ds folder and contains the following entries:
    • The position of the center of CTF coils.
    • The position of the anatomical references we use in Brainstorm:
      Nasion and connections tragus/helix, as illustrated here.

    • Around 150 head points distributed on the hard parts of the head (no soft tissues).

  • Switch to the "functional data" view.

    view_functional.gif

  • Right-click on the subject folder > Review raw file

    • Select the file format: "MEG/EEG: CTF (*.ds...)"

    • Select all the .ds folders in: sample_introduction/data

    • In the CTF file format, each session of recordings is saved in a folder with the extension "ds". The different types of information collected during each session are saved as different files in this folder (event markers, sensor definitions, bad segments, MEG recordings).
  • review_menu.gif

  • Refine registration now? YES
    This operation is detailed in the next section.

    review_refine.gif

  • Percentage of head points to ignore: 0
    If you have some points that were not digitized correctly and that appear far from the head surface, you should increase this value in order to exclude them from the fit.
    review_outliers.gif

Automatic registration

The registration between the MRI and the MEG (or EEG) is done in two steps. We start with a first approximation based on three reference points, then we refine it with the full head shape of the subject.

Step 1: Fiducials

  • The initial registration is based on the three fiducial points that define the Subject Coordinate System (SCS): nasion, left ear, right ear. You have marked these three points in the MRI viewer in the previous tutorial.

  • These same three points have also been marked before the acquisition of the MEG recordings. The person who recorded this subject digitized their positions with a tracking device (such as a Polhemus FastTrak or Patriot). The positions of these points are saved in the dataset.

  • When we bring the MEG recordings into the Brainstorm database, we align them on the MRI using these fiducial points: we match the NAS/LPA/RPA points digitized with the ones we located in the MRI Viewer.
  • This registration method gives approximate results. It can be good enough in some cases, but not always, because of the imprecision of the measures. The tracking system is not always very precise, the points are not always easy to identify on the MRI slices, and the very definition of these points does not offer millimeter precision. All this combined, it is easy to end up with a registration error of 1cm or more.

  • The quality of the source analysis we will perform later is highly dependent on the quality of the registration between the sensors and the anatomy. If we start with a 1cm error, this error will be propagated everywhere in the analysis.

    polhemus_setup.gif polhemus_beth.jpg
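Step 1 amounts to finding the rigid transformation that maps the three digitized fiducials onto the same three points marked in the MRI Viewer. A standard way to compute a rigid transform from paired points is the Kabsch algorithm; a minimal sketch with hypothetical NAS/LPA/RPA coordinates (not Brainstorm's actual code):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t with R @ src + t ~ dst (Kabsch)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Hypothetical fiducials: NAS/LPA/RPA as digitized with the Polhemus (in cm)
digitized = np.array([[0.0, 10.0, 0.0], [-7.0, 0.0, 0.0], [7.0, 0.0, 0.0]])
# Same three points as marked in the MRI: rotated 90 degrees around z, shifted
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
mri_points = (Rz @ digitized.T).T + np.array([1.0, 2.0, 3.0])

R, t = rigid_fit(digitized, mri_points)
aligned = (R @ digitized.T).T + t
assert np.allclose(aligned, mri_points)   # exact fit for 3 consistent point pairs
```

With real measurements the three pairs are never perfectly consistent, which is exactly why the residual error of this step can reach a centimeter.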

Step 2: Head shape

  • To improve this registration, we recommend that our users always digitize additional points on the head of the subjects: around 100 points uniformly distributed on the hard parts of the head (skull from nasion to inion, eyebrows, ear contour, nose crest). Avoid marking points on the softer parts (cheeks or neck) because they may have a different shape when the subject is seated in the Polhemus chair or lying down in the MRI. More information on digitizing head points.

  • We have two versions of the full head shape of the subject: one coming from the MRI (the head surface, represented in grey in the figures below) and one coming from the Polhemus digitizer at the time of the MEG/EEG acquisition (represented as green dots).
  • The algorithm that is executed when you choose the option "Refine registration with head points" is an iterative algorithm that tries to find a better fit between the two head shapes (grey surface and green dots), to improve the initial NAS/LPA/RPA registration. This technique usually significantly improves the registration between the MRI and the MEG/EEG sensors.

  • Tolerance: If you enter a percentage of head points to ignore greater than zero, the fit is performed once with all the points, then the head points most distant from the head surface are removed, and the fit is executed a second time with the remaining head points.
  • The two pictures below represent the registration before and after this automatic head shape registration (left=step 1, right=step 2). The yellow surface represents the MEG helmet: the solid plastic surface in which the subject places his/her head. If you ever see the grey head surface intersecting this yellow helmet surface, there is obviously something wrong with the registration.
  • At the end of the import process, you can close the figure that shows the final registration.

    refine_before.gif refine_after.gif

  • A window reporting the distance between the scalp and the head points is displayed. You can use these values as references for estimating whether you can trust the automatic registration or not. Defining whether the distances are correct or abnormal depends on your digitization setup.

    refine_outliers.gif
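The tolerance option described above discards the head points that end up farthest from the scalp after a first fit, and the reported distances come from the same computation: a nearest-neighbor search between head points and scalp vertices. A sketch of the trimming logic (illustrative only, not the actual Brainstorm algorithm):

```python
import numpy as np

def trim_outliers(head_points, scalp_vertices, ignore_percent):
    """Keep the head points closest to the scalp, dropping the given percentage."""
    head_points = np.asarray(head_points, float)
    scalp = np.asarray(scalp_vertices, float)
    # Distance of each head point to its nearest scalp vertex
    dists = np.min(np.linalg.norm(head_points[:, None, :] - scalp[None, :, :],
                                  axis=2), axis=1)
    n_keep = int(round(len(head_points) * (1 - ignore_percent / 100)))
    keep = np.argsort(dists)[:n_keep]
    return head_points[keep], dists

# Toy scalp (3 vertices) and 4 digitized points, one of them clearly off the head
scalp = [[0, 0, 0], [10, 0, 0], [0, 10, 0]]
points = [[0, 0, 1], [10, 0, 1], [0, 10, 1], [50, 50, 50]]
kept, dists = trim_outliers(points, scalp, ignore_percent=25)
assert len(kept) == 3            # the mis-digitized point was dropped
assert dists.max() > 50          # and it was indeed far from the scalp
```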

Defaced volumes

When processing your own datasets, if your MRI images are defaced, you might need to proceed in a slightly different way. The de-identification procedures remove the nose and other facial features from the MRI. If your digitized head shape includes points on the missing parts of the head, this may introduce an important bias in the automatic registration. In this case, it is advised to remove the head points below the nasion before proceeding to the automatic registration, as illustrated in this tutorial.

New files and folders

Many new files are now visible in the database explorer:

  • Three folders representing the three MEG datasets that we linked to the database. Note the tag "raw" in the icon of the folders: it means that the files are considered as raw continuous files.
  • S01_AEF_20131218_01_600Hz: Subject01, Auditory Evoked Field, 18-Dec-2013, run #01

  • S01_AEF_20131218_02_600Hz: Subject01, Auditory Evoked Field, 18-Dec-2013, run #02

  • S01_Noise_20131218_02_600Hz: Subject01, Noise recordings (no subject in the MEG)

  • All three have been downsampled from 2400Hz to 600Hz.

Each of these new folders show two elements:

  • Channel file: Defines the types and names of channels that were recorded, the position of the sensors, the head shape and other various details. This information has been read from the MEG datasets and saved as a new file in the database. The total number of data channels recorded in the file is indicated in parentheses (340).

  • Link to raw file: Link to the original file that you imported. All the relevant meta-data was read from the MEG dataset and copied inside the link itself (sampling rate, number of samples, event markers and other details about the acquisition session). But no MEG/EEG recordings were read or copied to the database. If we open this file, the values are read directly from the original files in the .ds folder.

    review_tree.gif

Review vs Import

When trying to bring external data into the Brainstorm environment, a common source of confusion is the difference between the two popup menus Review and Import:

  • Review raw file: Allows you to create a link to your original continuous data file. It reads the header and sensor information from the file but does not copy the recordings in the database. Most of the artifact cleaning should be done directly using these links.

  • Import MEG/EEG: Extracts segments of recordings (epochs) from an external file and saves copies of them in the Brainstorm database. You should not use this menu until you have fully pre-processed your recordings, unless the files you are importing are already epoched or averaged.

Display the sensors

Right-click on the CTF channels file and try all the display menus:

  • CTF Helmet: Shows a surface that represents the inner surface of the MEG helmet.

  • CTF coils (MEG): Displays the MEG head coils of this CTF system: they are all axial gradiometers, and only the coils close to the head are represented. The small squares do not represent the real shape of the sensors (the CTF coils are circular loops) but an approximation made in the forward model computation.

  • CTF coils (ALL): Displays all the MEG sensors, including the reference magnetometers and gradiometers. The orientation of the coils is represented with a red segment.

  • MEG: MEG sensors are represented as small white dots and can be selected by clicking on them.

  • ECG / EOG: Ignore these menus, we do not have proper positions for these electrodes.

  • Misc: Shows the approximate positions of the EEG electrodes (Cz and Pz).

  • Use the [Close all] button to close all the figures when you are done.

    channel_menu.gif

channel_display.gif

Advanced

Sensor map

Here is a map with the full list of sensor names for this CTF system; it can be useful for navigating in the recordings. Click on the image for a larger version.

snap_3conditions.jpg

Advanced

Manual registration

If the registration you get with the automatic alignment is incorrect, or if there was an issue when you digitized the position of the fiducials or the head shape, you may have to realign the sensors on the head manually. Right-click on the channel file > MRI Registration:

  • Check: Show all the possible information that may help to verify the registration.

  • Edit: Opens a window where you can manually move the MEG helmet relative to the head.
    Read the tooltips of the buttons in the toolbar to see what is available, select an operation and then right-click+move up/down to apply it. From a scientific point of view this is not exactly a rigorous operation, but sometimes it is much better than using wrong default positions.
    IMPORTANT: this refinement can only be used to better align the headshape with the digitized points - it cannot be used to correct for a subject who is poorly positioned in the helmet (i.e. you cannot move the helmet closer to the subject's head if they were not seated that way to begin with!)

    channel_manual.gif

  • Refine using head points: Runs the automatic registration described earlier.

  • In the 3D views, the head points can be color-coded to represent their distance to the scalp. Right-click on the figure > Channels > Color head points by distance (shortcut CTRL+H). The colorbar indicates in millimeters the distance of each point to the scalp, as computed by bst_surfdist.m.

    refine_dist.png

There is nothing to change here, but remember to always check the registration scalp/sensors.

Advanced

Multiple runs and head positions

Between two acquisition runs the subject may move in the MEG helmet, which changes the relative position of the MEG sensors with respect to the head surface. At the beginning of each MEG run, the positions of the head localization coils are detected and used to update the position of the MEG sensors.

  • The two AEF runs 01 and 02 were acquired successively. The position of the subject's head in the MEG helmet was estimated twice, once at the beginning of each run.
  • To evaluate the displacement between the two runs visually, select all the channel files you want to compare at the same time (the ones for runs 01 and 02), then right-click > Display sensors > MEG.

    channel_multiple.gif

  • Typically, we would like to group the trials coming from multiple acquisition runs. However, because of the subject's movements between runs, it is usually not possible to directly compare the MEG values between runs. The sensors may not capture the activity coming from the same regions of the brain.
  • You have three options if you consider grouping information from multiple runs:
    • Method 1: Process all the runs separately and average between runs at the source level: The most accurate option, but it requires more work, computation time and storage.

    • Method 2: Ignore movements between runs: This can be acceptable if the displacements are really minimal, less accurate but much faster to process and easier to manipulate.

    • Method 3: Co-register properly the runs using the process Standardize > Co-register MEG runs: Can be a good option for displacements under 2cm.
      Warning: This method has not been fully evaluated on our side, use at your own risk. Also, it does not work correctly if you have different SSP projectors calculated for multiple runs.

  • In this tutorial, we will illustrate only method 1: runs are not co-registered.

Advanced

Edit the channel file

Display a table with all the information about the individual channels. You can edit all the values.

  • Right-click on the channel of the first folder (AEF#01) > Edit channel file:

    channel_edit.gif

  • Index: Index of the channel in the data matrix. Can be edited to reorder the channels.

  • Name: Name that was given to the channel by the acquisition device.

  • Type: Type of information recorded (MEG, EEG, EOG, ECG, EMG, Stim, Other, "Delete", etc.)

    • You may have to change the Type for some channels. For instance if an EOG channel was saved as a regular EEG channel, you have to change its type to prevent it from being used in the source estimation.
    • To delete a channel from this file: select "(Delete)" in the type column.
  • Group: Used to define sub-groups of channels of the same type.

    • SEEG/ECOG: Each group of contacts can represent a depth electrode or a grid, and it can be plotted separately. A separate average reference montage is calculated for each group.
    • MEG/EEG: Not used.
  • Comment: Additional description of the channel.

    • MEG sensors: Do not edit this information if it is not empty.
  • Loc: Position of the sensor (x,y,z) in SCS coordinates. Do not modify this from the interface.
    One column per coil and per integration point (information useful for the forward modeling).

  • Orient: Orientation of the MEG coils (x,y,z) in SCS coordinates. One column per Loc column.

  • Weight: When there is more than one coil or integration point, the Weight field indicates the multiplication factor to apply to each of these points.

  • To edit the type or the comment for multiple sensors at once, select them all then right-click.
  • Close this figure, do not save the modifications if you made any.
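
To make the Loc/Orient/Weight columns more concrete, here is a small Python sketch (not Brainstorm code; the field model and all numbers are made up) of how a multi-coil sensor value can be assembled as a weighted sum over integration points:

```python
import math

# Hypothetical axial gradiometer: two coils, each reduced to one integration
# point. One entry per point, mirroring the Loc/Orient/Weight columns above.
loc = [(0.0, 0.0, 0.12), (0.0, 0.0, 0.17)]   # positions in meters (made up)
orient = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]  # coil normals
weight = [1.0, -1.0]                         # gradiometer: inner coil minus outer coil

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def field_at(point):
    """Toy radial field B(r) = r / |r|^3; stands in for a real forward model."""
    r = math.sqrt(dot(point, point))
    return tuple(c / r**3 for c in point)

# The simulated sensor value is the weighted sum, over all integration points,
# of the field projected on each coil orientation.
value = sum(w * dot(field_at(p), o) for w, p, o in zip(weight, loc, orient))
```

The Weight row is what turns two individual coil measurements into one gradiometer reading.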

Advanced

On the hard drive

Some other fields are present in the channel file that cannot be accessed with the Channel editor window. You can explore these other fields with the File menu, selecting View file contents or Export to Matlab, as presented in the previous tutorial.

  • channel_contents.gif

Structure of the channel files: channel_*.mat

  • Comment: String that is displayed in the Brainstorm database explorer.

  • MegRefCoef: Noise compensation matrix for CTF and 4D MEG recordings, based on some other sensors that are located far away from the head.

  • Projector: SSP/ICA projectors used for artifact cleaning purposes. See the SSP tutorial.

  • TransfMeg / TransfMegLabel: Transformations that were applied to the positions of the MEG sensors to bring them in the Brainstorm coordinate system.

  • TransfEeg / TransfEegLabel: Same for the position of the EEG electrodes.

  • HeadPoints: Extra head points that were digitized with a tracking system.

  • Channel: An array that defines each channel individually (see previous section).

  • Clusters: An array of structures that defines clusters of channels, with the following fields:

    • Sensors: Cell-array of channel names

    • Label: String, name of the cluster

    • Color: RGB values between 0 and 1 [R,G,B]

    • Function: String, cluster function name (default: 'Mean')

  • History: Describes all the operations that were performed with Brainstorm on this file. To get a better view of this piece of information, use the menu File > View file history.

  • IntraElectrodes: Definition of iEEG devices, documented in the SEEG tutorial.
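
As an illustration of what the MegRefCoef matrix does, here is a hedged Python sketch: the corrected MEG signals are obtained by subtracting a weighted combination of the reference channels. This shows only the general idea; the exact convention used by the CTF/4D compensation may differ, and all values below are made up.

```python
# Tiny made-up example: 2 MEG channels, 3 reference channels, 4 time samples.
meg = [[1.0, 2.0, 3.0, 4.0],
       [0.5, 0.5, 0.5, 0.5]]
ref = [[0.1, 0.1, 0.1, 0.1],
       [0.0, 0.2, 0.0, 0.2],
       [0.3, 0.0, 0.3, 0.0]]
# Stand-in for MegRefCoef: nMeg x nRef weights (made up)
coef = [[1.0, 0.5, 0.0],
        [0.0, 0.0, 2.0]]

def compensate(meg, ref, coef):
    """Subtract from each MEG channel a weighted sum of the reference channels."""
    n_time = len(meg[0])
    out = []
    for i, row in enumerate(meg):
        pred = [sum(coef[i][j] * ref[j][t] for j in range(len(ref)))
                for t in range(n_time)]
        out.append([row[t] - pred[t] for t in range(n_time)])
    return out

clean = compensate(meg, ref, coef)
```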

Useful functions

  • /toolbox/io/import_channel.m: Read a channel file and save it in the database.

  • /toolbox/io/in_channel_*.m: Low-level functions for reading all the file formats.

  • /toolbox/io/in_bst_channel.m: Read a channel file saved in the database.

  • /toolbox/sensors/channel_*.m: Routines for manipulating channel files.

  • /toolbox/gui/view_channels(ChannelFile, Modality, ...): Display the sensors in a 3D figure.

Advanced

Additional documentation



Tutorial 5: Review continuous recordings

Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Sylvain Baillet

Open the recordings

Let's look at the first file in the list: AEF#01.
Right-click on the Link to raw file. Below the first two menus, you have the list of channel types:

  • MEG: 274 axial gradiometers

  • ECG: 1 electrocardiogram, bipolar electrode across the chest

  • EOG: 2 electrooculograms (vertical and horizontal)

  • Misc: EEG electrodes Cz and Pz

  • ADC A: Unused

  • ADC V: Auditory signal sent to the subject

  • DAC: Unused

  • FitErr: Fitting error when trying to localize the three head localization coils (NAS, LPA, RPA)

  • HLU: Head Localizing Unit, displacements in the three directions (x,y,z) for the three coils

  • MEG REF: 26 reference sensors used for removing the environmental noise

  • Other: Unused

  • Stim: Stimulation channel, records the stim triggers generated by the Psychophysics toolbox and other input channels, such as button presses generated by the subject

  • SysClock: System clock, unused

Select MEG > Display time series (or double-click on the file).

  • link_menu.gif

It will open a new figure and enable many controls in the Brainstorm window.

review_epoch.gif

Navigate in time

The files we have linked to the database are shown the way they were saved by the CTF MEG system: as contiguous epochs of 1 second each. These epochs are not related to the stimulus triggers or the subject's responses, they are just a way of saving the files. We will first explore the recordings in this epoched mode before switching to the continuous mode.

From the time series figure

  • Click: Click on the white or grey parts of figure to move the time cursor (red vertical line).
    If you click on the signals, it selects the corresponding channels. Click again to unselect.

  • Shortcuts: See the tooltips in the time panel for important keyboard shortcuts:
    Left arrow, right arrow, page up, page down, F3, Shift+F3, etc...

  • Bottom bar: The red square in the bottom bar represents the portion of the current file or epoch that is displayed. Right now we are showing all of epoch #1. This will be more useful in the continuous mode.

  • Zoom: Scroll to zoom horizontally around the time cursor (mouse wheel or two-finger up/down).

  • [<<<] and [>>>]: Previous/next epoch or page

From the time panel

  • Time: [0, 998]ms is the time segment over which the first epoch is defined.

  • Sampling: We downsampled these files to 600Hz for easier processing in the tutorials.

  • Text box: Current time, can be edited manually.

  • [<] and [>]: Previous/next time sample - Read the tooltip for details and shortcuts

  • [<<] and [>>]: Previous/next time sample (x10) - Read the tooltip for details and shortcuts

  • [<<<] and [>>>]: Previous/next epoch or page - Read the tooltip for details and shortcuts

From the page settings

  • Epoch: Selects the current time block that is displayed in the time series figure.

  • Start: Starting point of the time segment displayed in the figure. Useful in continuous mode only.

  • Duration: Length of this time segment. Useful in continuous mode only.

Time selection

  • In the time series figure, click and drag your mouse for selecting a time segment.
  • At the bottom of the figure, you will see the duration of the selected block, and min/max values.
  • Useful for quickly estimating the latencies between two events, or the period of an oscillation.
  • To zoom into the selection: Shift+left click, middle click, or right-click > Time selection > Zoom into.

  • Click anywhere on the figure to cancel this time selection.

    review_timesel.gif
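
For example, converting a selected duration into a frequency estimate is a one-line computation (the selection bounds below are hypothetical):

```python
# If a selected time segment spans exactly one cycle of an oscillation,
# its frequency is simply the inverse of the selected duration.
t_start, t_stop = 1.250, 1.350   # hypothetical selection bounds, in seconds
period = t_stop - t_start        # 0.1 s
frequency = 1.0 / period         # 10 Hz
```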

Epoched vs. continuous

  • The CTF MEG system can save two types of files: epoched (.ds) or continuous (_AUX.ds).
  • Here we have an intermediate storage type: continuous recordings saved in "epoched" files. The files are saved as small blocks of recordings of a constant time length (1 second in this case). All these time blocks are contiguous, there is no gap between them.
  • Brainstorm can consider this file either as a continuous or an epoched file. By default it imports the regular .ds folders as epoched, but we need to change this manually.
  • Right-click on the "Link to raw file" for AEF#01 > Switch epoched/continuous
    You should get a message: "File converted to: continuous".

  • Double-click on the "Link to raw file" again. Now you can navigate in the file without interruptions. The box "Epoch" is disabled and all the events in the file are displayed at once.

    review_continuous.gif

  • With the red square at the bottom of the figure, you can navigate in time (click in the middle and drag with the mouse) or change the size of the current page (click on the left or right edge of the red square and move your mouse).

    resize_page.gif

  • Increase the duration of the displayed window to 3 seconds (Page settings > Duration).

    review_setpage.gif

  • Close the figure.
  • Repeat this operation with the other files to convert them all to a continuous mode.
    • AEF#02 > Switch epoched/continuous

    • Noise > Switch epoched/continuous

Display mode: Butterfly/Column

  • Close all the figures.
  • Double-click on the AEF#01 Link to raw file to open the MEG recordings.
  • What we see are all the traces of the 274 sensors overlaid on top of each other.
  • Click on the "Display mode" button in the toolbar of the Record tab.

    review_switch.gif

  • All the signals are now displayed, one below the other, but because we have 274 MEG channels the figure is still unreadable. We need to select only a subset of these sensors.

    review_column.jpg

Montage selection

  • You can use the montage menu to select a group of sensors. This menu is accessible in two ways:
    • Record toolbar > Drop-down menu.

    • Figure popup menu > Right-click on the figure > Montage

  • Pre-defined groups of channels are available for some common MEG and EEG systems.
    Notice the keyboard shortcut on the right for All channels (Shift+A). You can define your own (Shift+B, C...) if you go to Edit montages.

  • You can also use this menu to create your own sensor selections or more complex montages.
    A separate tutorial is dedicated to the montage editor.

  • Select the group: CTF LT (Left Temporal).

    review_montage.gif

  • More information about the Montage editor.

Channel selection

If you click on the white or grey areas of the figure, it changes the current time.
If you click on the lines representing the recorded signals instead, it selects the corresponding channels.

  • When some channels are selected, an additional menu "Channels" is visible in the figure popup.
  • Select "View selected" or press [Enter] to open the selected channels in a separate window.
  • The management of the bad channels will be introduced in a separate tutorial.

    channel_select.gif

Amplitude scale

A variety of display options allow you to adjust the amplitude scale of the recordings (vertical axis). Most of these options are available in the right part of the time series figure; some are repeated in the Record tab of the Brainstorm window.

  • review_scale.gif

  • Increase/decrease gain: Buttons [+] and [-] on the right side of the figure. The shortcuts for these buttons are indicated in the tooltips (leave the mouse for a short while over a button): right-click and move your mouse, hold the Shift key and scroll, or use the keys "+" and "-".

  • Auto-scale amplitude: Button [AS] in the figure.
    Selected: the vertical scale is adapted to the new maximum amplitude when you scroll in the file.
    Not selected: The vertical scale is fixed, scrolling in the file does not affect the axis resolution.

  • Flip Y axis: Exchange the direction of the Y axis, to have the peaks of negative values pointing up. Useful mostly for clinical EEG.

  • Set amplitude scale: Opens a window to enter the amplitude scale manually. The value corresponds to the space between two horizontal lines in this figure.

  • Set axis resolution: See section "Time and amplitude resolution" below.

  • Remove DC offset: Button [DC] in the Record tab. When selected, the average value over the entire current time window is subtracted from each channel. This means that if you change the length of the time window, the value that is removed from each channel may change. Always keep this option selected for unprocessed MEG recordings, unless you use a high-pass filter.

  • Normalize signals: Divide each signal by its maximal amplitude in the displayed time window. The signals displayed with this normalization are unitless.

  • Apply CTF compensation: Button [CTF] in the Record tab. Enable/disable the CTF noise correction based on the reference sensors, when it is not already applied in the file. In the current file, the CTF 3rd order gradient compensation is already applied, therefore this option is not available.

  • Vertical zoom: Use the zoom/scroll buttons on the right of the figure or your mouse (CTRL+Mouse wheel to zoom, middle-click+move to scroll) in order to look at specific channels without having to change the montage.

  • review_vscroll.gif

  • Uniform amplitude scales: Force all the time series figures to use the same amplitude scale. Option available in the Record tab with the button uniform_button.gif or from the figure options menu when at least two time series figures are visible. More details.
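
The [DC] and normalization options described above boil down to simple per-channel arithmetic. A minimal Python sketch, with a made-up single-channel segment:

```python
# Hypothetical single-channel segment with a constant offset.
signal = [2.1, 2.3, 1.9, 2.5, 1.7, 2.0]

# Remove DC offset: subtract the mean of the displayed window from the channel.
mean = sum(signal) / len(signal)
centered = [s - mean for s in signal]

# Normalize: divide by the maximum absolute amplitude in the window.
# The resulting values are unitless.
peak = max(abs(s) for s in centered)
normalized = [s / peak for s in centered]
```

Note that both operations depend on the displayed window: change its length and the subtracted mean (and the peak) change with it, which is exactly the behavior described for the [DC] button.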

Advanced

Time and amplitude resolution

In the Brainstorm interface, the axis resolution is usually set implicitly: you can set the size of the window, the duration of the recordings reviewed at once, and the maximum amplitude to show in the figure. These parameters are convenient for exploring the recordings interactively, but they don't allow reproducible displays with constant time and amplitude resolutions.

However, some applications are very sensitive to the horizontal and vertical scaling, such as the visual detection of epileptic spikes. The shapes of the traces that epileptologists try to identify are altered by the axis resolution. This is detailed in the tutorial EEG and Epilepsy.

For this reason, we also added an option to set the figure resolution explicitly. The distance unit on a screen is the pixel: we can set precisely how much time is represented by one pixel horizontally and how much amplitude is represented by one pixel vertically.
Display menu in the right part of the figure > Amplitude > Set axis resolution (shortcut: CTRL+O)

Note that this interface does not store the input values, it just modifies the other parameters (figure size, time window, max amplitude) to fit the resolution objectives. If you modify these parameters after setting the resolution (resize the figure, leave the button [AS] selected and scroll in time, etc.), the resolution is lost and you have to set it again manually.

review_resolution.gif
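
The relationship between the resolution targets and the other display parameters is simple arithmetic. A sketch with hypothetical target values:

```python
# Hypothetical targets: 100 pixels per second horizontally, and a plot area
# 600 pixels wide. The time window that fits is then fully determined:
pixels_per_second = 100.0
figure_width_px = 600
window_duration_s = figure_width_px / pixels_per_second   # 6 seconds

# Vertically: if one pixel must represent 2 femtotesla and the plot area is
# 400 pixels high, the full vertical range is fixed as well.
fT_per_pixel = 2.0
figure_height_px = 400
amplitude_range_fT = fT_per_pixel * figure_height_px      # 800 fT
```

This is why resizing the figure afterwards breaks the resolution: the pixel counts change while the time window and amplitude range stay fixed.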

Filters for visualization

With the Filter tab, you can apply a band-pass filter to the recordings, or remove a set of specific frequencies (example: the 50Hz or 60Hz power lines contamination and their harmonics). The filters are applied only to the time window that is currently loaded. If the segment is too short for the required filters, the results might be inaccurate.

These visualization filters provide a quick estimate for visualization only; the results are not saved anywhere. To properly filter the continuous files, please use the Process1 tab (see tutorial #10).
The option "Filter all results" is not useful for now, it will be described later.

After testing the high-pass, low-pass and notch filters, uncheck them. Otherwise you may forget about them, and they will stay on until you restart Brainstorm. Note that as long as there are visualization filters applied, the title of the Filter tab remains red.

  • review_filter.gif

Mouse and keyboard shortcuts

Keyboard shortcuts

  • Left / right arrows:

    • Change current time, sample by sample
    • +Control key: Jump to previous/next epoch or page (same as [<<<] and [>>>])

    • +Shift key: Jump to previous/next event (you need to have one event selected)

    • MacOS: These shortcuts are different, please read the tooltips for [>], [>>] and [>>>]

  • Page-up / page-down:

    • Change current time, 10 samples at a time
    • +Control key: Jump to the next/previous epoch or page, 10x faster

  • F3/Shift+F3: Jump to the next/previous epoch or page (10% overlap between 2 pages)

  • F4/Shift+F4: Jump to the next/previous half-page (50% overlap)

  • F6/Shift+F6: Jump to the next/previous page with no overlap (0% overlap)

  • Plus / minus: Adjust the vertical scale of the time series

  • Shift + Letter: Changes the montage

  • Control + B: Mark selected time segment as bad

  • Control + D: Dock figure

  • E: Add / delete event marker

  • Control + E: Add / delete event marker for the selected channels

  • Control + F: Open a copy of the figure, not managed by the Brainstorm window manager

  • Control + H: Hide/show selected event group

  • Control + I: Save figure as image

  • Control + J: Open a copy of the figure as an image

  • Control + O: Set axes resolution

  • Control + L: Change display mode of events (dots, lines or hidden)

  • Control + T: Open a 2D topography window at the current time

  • Enter: Display the selected channels in a separate figure

  • Escape: Unselect all the selected channels

  • Delete: Mark the selected channels as bad

  • 1 2 3 4 5 6 7 8 9: User-defined shortcuts for new events (tutorial #7)

Mouse shortcuts

  • Click on a channel: Select the channel

  • Click: Change current time

  • Shift + click: Force the selection of the current time (even when clicking on a channel)

  • Click + move: Select time range

  • Right-click: Display popup menu

  • Right-click + move: Adjust the vertical scale of the time series

  • Scroll: Zoom around current time

  • Shift + scroll: Adjust the vertical scale of the time series

  • Control + scroll: Zoom vertically

  • Central click + move: Move in a zoomed figure

  • Double click: Restore initial zoom settings (or edit the notes associated to the clicked event)



Tutorial 6: Multiple windows

Authors: Francois Tadel

General organization

This tutorial is a short aside that explains how the figures are positioned on the screen and how you can organize your workspace more efficiently. One interesting feature of the Brainstorm interface is the ability to easily open multiple views or multiple datasets simultaneously.

The buttons in the menu "Window layout options" can help you organize all the opened figures in an efficient way. There are four options for the automatic placement of the figures on the screen and you have the possibility to save your own specific working environment.

Remember that the Brainstorm window is designed to remain on one side of the screen. All the space of the desktop that is not covered by this window will be used for opening other figures. This available space is designated in the menus below as "Full area". Do not try to maximize the Brainstorm window, or the automatic management of the data figures might not work correctly.

  • toolbarWindows.gif

Automatic figure positioning

  • Layout options: Defines how the figures are positioned on the screen

    • Tiled: All the figures have similar sizes.

    • Weighted: Some figures containing more information are given more space on the screen. This mode is mostly useful when reviewing continuous recordings.

    • Full area: Each figure takes all the space available for figures.

    • None: The new figures are displayed at the default Matlab position, always at the same place, and never re-organized after. Selecting this option can be useful if the auto-arrangement does not work well on your system or if you want to organize your windows by yourself. It is also automatically selected when using "user setups" (see below).

  • One screen / two screens: If you have multiple monitors, Brainstorm can try to place the database window on one screen and all the other figures on the other screen. If you force Brainstorm to use only one screen, all the figures should stay on the same screen.

  • Full screen: If selected, the figures are set to their maximum size, covering the Brainstorm window

  • Show all figures: If you have many figures hidden by some other fullscreen window (Matlab, Firefox to read this tutorial, etc), you don't have to click on all of them in the taskbar to get them back. Just make the Brainstorm window visible and click on this button, it will bring all the figures back (not working on some Linux window managers).

  • User setups: You can save a combination of figures currently opened on your desktop and re-use it later on a different dataset. It can be very useful for reviewing long continuous files.

  • Close all figures: Last button in the toolbar. Close everything and free the allocated memory.

Example

  • Double-click on AEF#01 Link to raw file to open the MEG sensors.

  • Open the EOG signals for the same file: Right-click on the file > EOG > Display time series.

  • Open a 2D topography for the MEG sensors: Right-click on the file > MEG > 2D sensor cap.
    This view represents the values of all the MEG sensors at the current time point. This type of figure will be described in another tutorial.

  • Cycle through the options: Tiled, Weighted, Full area.

    layout_tiled.gif layout_weighted.gif

  • Select the option None, close all the figures (using the [Close all] button), and re-open them.
    Notice that now the position of the figures is not managed by Brainstorm anymore.

  • Select again Weighted: the figures are automatically re-arranged again.

  • Test the option Full screen.

  • If you have two screens connected, you can try the options One screen / Two screens.

Advanced

Multiple views of the same data

  • Keep all the existing figures: MEG, EOG, 2D topography.
  • Open another time series view of the same file, same MEG sensors.
    • Note that if you double-click again on the file, it just selects the existing figure.
    • To force opening another view: Right-click on the file > MEG > Display time series.

    • Only the first view that was opened for this file shows the events bar and the time navigation bar at the bottom of the figure. If you want the two MEG figures displayed in the exact same way, you can close everything, then start by opening the EOG view, then two MEG views.
  • Re-arrange the figures in a nicer way.
  • Select montage "CTF LT" for one figure, and montage "CTF RF" for the other.

    • You can change individually the display properties of each figure.
    • When creating a new figure, it re-uses the last display properties that were used.
    • To change the properties of one figure, you first have to select it. Clicking on the title bar of the figure is not enough, you have to click inside the figure (this is due to some limitations of the Matlab figures implementation).

    • When the new figure is selected, the controls in the Record tab are updated, and you can change the display properties for this figure.
  • There is currently a limitation relative to the continuous file viewer: it is not possible to review two continuous datasets at the same time. This is usually not a problem because we typically review the continuous files one after the other. It will be possible to open multiple data files after we import them into the database, which is where this feature becomes really useful.

    layout_user.gif

Advanced

User setups

  • Keep the four figures previously created (MEG LT, MEG RF, EOG, 2D sensor).
  • In the menu "Window layout options" > User setups > New setup > "Test".

  • Close all the figures (using the Close all button).
  • Double-click again on the Link to raw file to open MEG sensors.
  • In the menu "Window layout options" > User setups > Test.
    It should restore your desktop exactly like it was when you saved it.

  • Note that the layout None is now selected. Using custom window configurations disables the automatic arrangement of the windows on the screen.

  • This feature is interesting for users who need to review numerous files everyday in a very specific way, for instance in the case of visual inspection of epilepsy recordings. It can save them a substantial amount of time to load their reviewing environment in just one click.

Advanced

Uniform amplitude scales

  • Set the display mode "butterfly" for the two MEG time series figures:
    Uncheck the first button in the Record tab.

  • With the button Uniform amplitude scale uniform_button.gif , in the Record tab, you can change the way the amplitude of multiple time series figures is scaled.

  • Selected: All the time series figures with similar units have the same y-axis scale, you can compare visually the amplitudes between two datasets.

    uniform_yes.gif

  • Not selected: Each figure is scaled independently to its own maximum amplitude.

    uniform_no.gif

Graphic bugs

If you observe any graphic problem with these displays, there might be an issue with the OpenGL drivers. You may try the following options:

  • Update the drivers for your graphics card.
  • Upgrade your version of Matlab.
  • Run the compiled version of Brainstorm (see Installation).

  • Turn off the OpenGL hardware acceleration: Menu File > Edit preferences > Software or Disabled.

  • Send a bug report to the Mathworks.


Tutorial 7: Event markers

Authors: Francois Tadel, Elizabeth Bock, John C Mosher

Lists of events

You probably noticed colored dots on top of the recordings in the MEG figure. They represent the event markers saved in this dataset. In this documentation, they are called interchangeably events, markers or triggers. Some are stimulus triggers that were generated by the stimulation computer (Psychtoolbox-3), others are the subject responses recorded from a button box. This tutorial shows how to manipulate these markers.

  • Open the MEG recordings for file AEF#01.

  • Make sure it is configured as presented here: Montage "CTF LT", [DC] button selected, 3s pages.
  • All the markers available in the file are listed in the Events section of the Record tab.
  • On the left, you have the groups of events and the number of occurrences for each group:
    • 200 standard audio stimulations

    • 40 deviant audio stimulations

    • 40 button responses: The subject presses a button with the right index finger when a deviant is presented. This is a very easy task, so all the deviants are detected

  • On the right, you have the list of the time instants at which the selected event occurs.
  • These two lists are interactive. If you click on an event group (left list), it shows the corresponding occurrences in the right list. If you click on one particular event in the right list, the file viewer jumps to it. It works the other way as well: if you click on a dot representing an event in the MEG figure, the corresponding event group and occurrence are selected in the Record tab.

    events_list.gif

Adding events

The markers can represent either stimulation triggers or subject responses that were recorded during the acquisition. It can also be useful to add new markers during the analysis of the recordings, to identify events of interest that are not detected at the time of the recordings, such as artifacts (eye movements, heartbeats, subject movements) or specific patterns of brain activity (epileptic spikes).

  • Create a new group of events "Test" with the menu Events > Add group.

  • Click on this new category to select it. It contains no occurrences at the beginning (x0).

    events_newgroup.gif

  • Then place the time cursor (red vertical bar) where you want to add a new marker "Test".
  • Add a few occurrences with any of the three methods:
    • In the Record tab: Select the menu Events > Add / delete event

    • In the time series figure: Right-click > Add / delete event

    • In the time series figure: Press key E

  • If the display is too dense, it can be difficult to set the current time without accidentally selecting a channel. Note that you can click outside of the white area to select the time (on top of the figure), or use the shortcut Shift+click.

    events_newmarker.gif

  • Remove all the event occurrences in "Test", but not the group itself. Use any of the three methods:
    • In the Record tab: Select one or more event occurrences, press the Delete key.

    • In the time series figure: Click on an event dot and right-click > Add / delete event.

    • In the time series figure: Click on an event dot and press key E.

Extended events

You can also use this interface to create events that have a temporal extension, i.e., that last for more than one time sample. This can be used to define bad segments in the recordings.

  • In the time series window, select a time range (click + move).
  • Add an event: menus or key E.

  • The first occurrence you add in an event group defines its type: single time point (simple events), or time range (extended events). You cannot mix different types of events in a group. You get an error when you try to add a time segment in an event category that already contains a simple event.

    events_extended.gif

  • Remove the event group "Test": Click on it in the list and press the Delete key.

Bad segments

It is very common to have portions of the recordings heavily contaminated by events coming from the subject (eye blinks, movements, heartbeats, teeth clenching) or from the environment (stimulation equipment, elevators, cars, trains, building vibrations...). Some of these artifacts are well defined and can be removed efficiently; others are too complex to model. For this last category, it is usually safer to mark the noisy segments as bad and ignore them for the rest of the analysis.

To mark a segment of recordings as bad, the procedure is the same as for defining an extended event: select a time window, and then tag it as bad with one of the following methods.

  • In the Record tab: Select the menu Events > Reject time segment,

  • In the time series figure: Right-click > Reject time segment,

  • In the time series figure: Press Ctrl + B

This creates a new event group BAD and adds an extended event to it. Later, when epoching this file (extracting time blocks and saving them in the database), the trials that contain a bad segment will be imported but tagged as bad, and ignored in the rest of the analysis.

You can create multiple groups of bad segments, for instance to identify different types of artifacts. Any event group that contains the tag "BAD" will be considered as indicating bad segments.

events_bad.gif

Advanced

Hide event groups

When you have too many events in the viewer, seeing the ones you are interested in can be difficult. This will be the case, for instance, after we detect the heartbeats in the signal: we will have one event every second, which is not always interesting to see. Each event group can be selectively hidden.

  • In the Record tab, select the group of events you want to hide.
  • Use the menu Events > Show/Hide group, or press shortcut key H.

  • The event group is greyed out, and the corresponding markers disappear from the viewer.

    events_hide.gif

Advanced

Channel events

Some events can be attached to only one or a few channels. This is useful for instance for reviewing clinical EEG recordings, where neurologists are tagging epileptic activity only on a subset of the channels.

  • First select the channels of interest by clicking on them (the signals should turn red).
  • Place the time cursor where you want to create the event (click on the white or grey areas of the figure, or use the shortcut Shift+Click).
  • Right-click anywhere on the figure > Add/delete channel event, or shortcut Ctrl+E.

  • The event marker appears directly on the selected channel, and the name of the channel appears in the list of event times (in the Brainstorm window).

    events_channel.gif

  • Then you can deselect the channel (click again on it) or press the Escape key before creating a new event attached to a different channel.

  • If no channel is selected, you can proceed in this alternate way: position the time cursor where you want to create the event, right-click directly on the channel to which you want to attach the event, and select "Add/delete channel event".

Advanced

Notes

Additional comments can be added to an event, when more details should be displayed in the file viewer. Like channel events, this is mostly useful for reviewing clinical recordings.

  • Right-click on any event marker or event text (or double-click on it) > Edit notes.

  • Enter the text to display next to the marker.

    events_note.gif

  • Alternatively, you can double-click on the event in the list of event times (in the Brainstorm window).

Advanced

Display modes

Three display modes are available for the event markers: dots, lines or hidden. Select the corresponding menu in the display options, or press Ctrl+L multiple times.

events_dots.gif events_lines.gif

Advanced

Custom shortcuts

When reviewing long recordings and manually adding many events (e.g. when marking epileptic spikes), using the menus presented above is not convenient because they require many mouse clicks.

Using the menu Events > Edit keyboard shortcuts, you can associate custom events to the keys 1 to 9 of the keyboard. Define the name of the event type to create for each key, then simply press the corresponding key to add/delete a marker at the current time position. Three options are available for each event type:

  • Simple: Create a simple event where the red time cursor is placed.

  • Full page: Create an extended event including the entire page of recordings, then move to the next page of recordings. This option was added for a specific application (sleep staging) that consists in labelling blocks of 30s through the entire file.

  • Extended: Create an extended event with the time window indicated on the right of the panel around the time cursor.

events_shortcuts.gif

Saving modifications

Now you can delete all the event groups that you've just created and leave only the initial ones (button, standard, deviant): select the event groups and press Delete, or use the menu Events > Delete group.

When you close the continuous file viewer, or the last figure that shows a part of the raw file, the dataset is unloaded, the file is released and the memory is freed.

If you edited the events for this file, you are asked whether to save the modifications or not. If you answer "Yes", the modifications are saved only in the database link (Link to raw file), not in the original file itself. Therefore, you would see the changes the next time you double-click on the "link to raw file" again, but not if you open the original .ds file in another protocol or with an external program.

events_save.gif

Note that the events you edit are not automatically saved until that moment. As you would do with any other type of computer work, save your work regularly, to limit the damage caused by a program or computer crash. In the Record tab, use the menu File > Save modifications.

Advanced

Other menus

events_menus.gif

File

  • Import in database: Import blocks of the current continuous file into the database. Equivalent to a right click on the "Link to raw file" in the database explorer > Import in database.

  • Save modifications: Save the modifications made to the events in the database link.

  • Add events from file: Import events from an external file. Many file formats are supported.

  • Read events from channel: Read the information saved during the acquisition in a digital auxiliary channel (eg. a stimulus channel) and generate events.

  • Detect analog triggers: Detect transition events in an external analog channel, such as the voltage of a photodiode exposed to light or a microphone recording a sound.

  • Export all events: Save all the events in an external file.

  • Export selected events: Same as "Export all events" but exports only the selected events.

Events

  • Rename group: Rename the selected group of events. Shortcut: double-click.

  • Set color: Change the color associated with an event group.

  • Mark group as bad: Add a tag "bad" in the event name, so that it is considered as bad segment.

  • Sort groups: Reorder the event groups by name, or by time of their first occurrence.

  • Merge groups: Merge two event groups into a new group. Initial groups are deleted. To keep them, duplicate them before merging.

  • Duplicate groups: Make a copy of the selected event groups.

  • Convert to simple events: Convert a group of extended events (several time points for each event), to simple events (one time point). An option window asks you whether to keep the first or the last sample only of the extended events.

  • Convert to extended events: Convert simple events to segments of a fixed length.

  • Combine stim/response: Create new groups of events based on stim/response logic.
    Example: Stimulus A can be followed by response B or C. Use this process to split the group A in two groups: AB, followed by B; and AC, followed by C.

  • Detect multiple responses: Find multiple responses (events that are too close to each other).

  • Group by name: Combine different event groups by name.

  • Group by time: Combine simultaneous events into new event groups.

  • Add time offset: Add a constant time offset to all the events in a group, to compensate for a delay.

  • Edit keyboard shortcuts: Custom associations between keys 1..9 and events

  • Reject time segment: Mark the current time selection as bad.

  • Jump to previous/next event: Convenient way of browsing through all the markers in a group.
    Shortcut: Shift + left/right

Advanced

On the hard drive

The nodes "Link to raw file" you see in the database explorer are represented by .mat files on the hard drive. They contain all the header information extracted from the raw files, but do not contain a full copy of the recordings.

All the additional information created from the Brainstorm interface (event markers, bad channels, SSP projectors) is not saved back to the original raw files; it is only saved in the "Link to raw file". The names of these files start with the tag data_0raw_, and they share the same structure as all the imported epochs (introduced later in the tutorials).

To explore the contents of these link files, right-click on them and use the popup menus File > View file contents or File > Export to Matlab.

link_contents.gif

  • F: sFile structure, documents completely the continuous raw file, described below.
    (for imported epochs, .F contains directly the MEG/EEG recordings [Nchannels x Ntime])

  • Comment: String used to represent the file in the database explorer.

  • Time: First and last time points recorded in the continuous file.

  • ChannelFlag: [Nchannels x 1] list of good/bad channels (good=1, bad=-1)

  • DataType: Type of data stored in this file.

    • 'raw' = Link to a continuous raw file
    • 'recordings' = Imported epoch
  • Device: Acquisition system that recorded the dataset.

  • Events: Not used in the case of links.

  • Leff: Effective number of averages = Number of input files averaged to produce this file.

  • History: Describes all the operations that were performed with Brainstorm on this file. To get a better view of this piece of information, use the menu File > View file history.


sFile structure: This structure is passed directly to all the input/output functions on continuous files.

  • filename: Full path to the continuous raw file.

  • format: Format of the continuous raw file.

  • device: Acquisition system that recorded the dataset. Same as Link.Device.

  • condition: Name of the folder in which this file is supposed to be displayed.

  • comment: Original file comment.

  • byteorder: Endianness, 'l' = Little Endian, 'b' = Big Endian

  • prop: Structure, basic properties of the recordings

    • times: First and last time points recorded in the continuous file.

    • sfreq: Sampling frequency

    • Leff: Number of files that were averaged to produce this file.

    • currCtfComp: Level of CTF compensation currently applied.

    • destCtfComp: Level of CTF compensation in which we want to view the file (usually: 3)

  • epochs: Array of structures used only in the case of continuous recordings saved as "epochs"

  • events: Array of structures describing the event markers in the file, one structure per event group:

    • label: Name of the event group

    • color: [r,g,b] Color used to represent the event group, in Matlab format

    • epochs: [1 x Nevt] Indicate in which epoch the event is located (index in the sFile.epochs array), or 1 everywhere for files that are not saved in "epoched" mode.
      Nevt = number of occurrences of the event = number of markers in this group

    • times: [1 x Nevt] Time in seconds of each marker in this group (times = samples / sfreq), aligned on exact sample instants (times = round(times*sfreq)/sfreq).
      For extended events: [2 x Nevt], first row = start, second row = end

    • reactTimes: Not used anymore

    • select: Indicates if the event group should be displayed in the viewer.

    • channels: {1 x Nevt} Cell array of cell-arrays of strings. Each event occurrence can be associated with one or more channels, by setting .channels{iEvt} to a cell-array of channel names.

    • notes: {1 x Nevt} Cell-array of strings: additional comments for each event occurrence

  • header: Structure describing additional header information, depending on the original file format.

  • channelflag: List of good/bad channels, same information as Link.ChannelFlag.
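The sample-alignment rule above, times = round(times*sfreq)/sfreq, is easy to check numerically. Brainstorm itself is written in Matlab; the following Python sketch (with hypothetical values) builds a minimal event group mirroring the sFile.events fields described above and snaps marker times to exact sample instants:

```python
# Conceptual sketch (Brainstorm itself is Matlab, values are hypothetical):
# snap event times to exact sample instants, as sFile.events(i).times requires.
sfreq = 600.0  # sampling frequency in Hz

def align_times(times, sfreq):
    """Round each time (in seconds) to the nearest sample instant."""
    return [round(t * sfreq) / sfreq for t in times]

# A minimal "event group" mirroring the fields described above
event_group = {
    "label": "Test",
    "color": (0.0, 1.0, 0.0),                       # [r,g,b] in Matlab format
    "epochs": [1, 1],                               # 1 everywhere: non-epoched file
    "times": align_times([1.2504, 2.4996], sfreq),  # -> [1.25, 2.5]
}
```

For extended events, each entry of times would become a (start, end) pair, matching the [2 x Nevt] layout described above.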

Useful functions

  • in_bst_data(DataFile, FieldsList): Read the structure for a data file.



Tutorial 8: Stimulation delays

Authors: Francois Tadel, Elizabeth Bock

The event markers that are saved in the data files might have delays. In most cases, the stimulation triggers saved by the acquisition system indicate when the stimulation computer requested a stimulus to be presented. After this request, the equipment used to deliver the stimulus to the subject (projector, screen, sound card, electric or tactile device) always introduces some delay. Therefore, the stimulus triggers are saved before the instant when the subject actually receives the stimulus.

For accurate timing of the brain responses, it is very important to estimate these delays precisely and if possible to account for them in the analysis. This tutorial explains how to correct for the different types of delays in the case of an auditory study, if the output of the sound card is saved together with the MEG/EEG recordings. A similar approach can be used in visual experiments using a photodiode.

Note for beginners

This entire tutorial can be considered as advanced. It is very important to correct for the stimulation delays in your experiments, but if you are not using any stimulation device, you do not need this information. However, if you skip the entire tutorial, you will be left with uncorrected delays and it will be more difficult to follow the rest of the tutorials. Just go quickly through the required actions and skip all the explanations.

Advanced

Documented delays

Reminder: The full description of this auditory dataset is available on this page: Introduction dataset.

Delay #1: Production of the sound

  • The stimulation software generates the request to play a sound, the corresponding trigger is recorded in the stim channel by the MEG acquisition software.
  • Then this request goes through different software layers (operating system, sound card drivers) and the sound card electronics. The sound card produces an analog audio signal that is sent at the same time to the subject and to the MEG acquisition system. The acquisition software saves a copy of it in an audio channel together with the MEG recordings and the stim channel.
  • The delay can be measured from the recorded files by comparing the triggers in the stim channel with the actual sound in the audio channel. We measured delays between 11.5ms and 12.8ms (std = 0.3ms). These delays are not constant, so we should adjust for them: jitters in the stimulus triggers cause the different trials to be aligned incorrectly in time, hence "blurred" averaged responses.

Delay #2: Transmission of the sound

  • The sound card plays the sound: the audio signal is sent through a cable to two transducers located in the MEG room, close to the subject. This causes no observable delay.
  • The transducers convert the analog audio signal into a sound (air vibration). Then this sound is delivered to the subject's ears through air tubes. These two operations cause a small delay.
  • This delay cannot be estimated from the recorded signals: before the acquisition, we placed a sound meter at the extremity of the tubes to record when the sound is delivered. We measured delays between 4.8ms and 5.0ms (std = 0.08ms). At a sampling rate of 2400Hz, this delay can be considered constant, so we will not compensate for it.

Delay #3: Recording of the signals

  • The CTF MEG systems have a constant delay of 4 samples between the analog channels (MEG/EEG, auditory, etc.) and the digital channels (stim, buttons, etc.), because of an anti-aliasing filter that is applied to the former but not to the latter. This translates here to a constant "negative" delay of 1.7ms, meaning that the analog channels are delayed with respect to the stim channels.

  • Many acquisition devices (EEG and MEG) have similar hidden features; read the documentation of your hardware carefully before analyzing your recordings.


delays_sketch_small.gif


Evaluation of the delay

Let's display simultaneously the stimulus channel and the audio signal.

  • Right-click AEF#01 link > Stim > Display time series: The stim channel is UPPT001.

  • Right-click AEF#01 link > ADC V > Display time series: The audio channel is UADC001.

  • In the Record tab, set the duration of display window to 0.200s.

  • Jump to the third event in the "standard" category.
  • We can observe that there is a delay of about 13ms between the time when the stimulus trigger is generated by the stimulation computer and the moment when the sound is actually played by its sound card (delay #1).

    delays_evaluate.gif

  • What we want to do is to discard the existing triggers and replace them with new, more accurate ones created based on the audio signal. We need to detect the beginning of the sound on analog channel UADC001.
  • Note that the representation of the oscillation of the sound tone is quite poor here. The frequency of this standard tone is 440Hz. It was correctly captured by the original recordings at 2400Hz, but not in the downsampled version we use in the introduction tutorials. It should still be good enough to perform the detection of the stimulus.

Detection of the analog triggers

Detecting the standard triggers

Run the detection of the "standard" audio triggers on channel UADC001 for file AEF#01.

  • Keep the same windows open as previously.
  • In the Record tab, select the menu File > Detect analog triggers.

  • This opens the Pipeline editor window with the process Events > Detect analog triggers selected. This window will be introduced later, for now we will just use it to configure the process options. Configure it as illustrated below:

    delays_detect.gif

Advanced


Explanation of the options (for future reference, you can skip this now):

  • Event name: Name of the new event category created to store the detected triggers.
    We can start with the event "standard", and call the corrected triggers "standard_fix".

  • Channel name: Channel on which to perform the detection (audio channel UADC001).

  • Time window: Time segment on which you want to detect analog triggers.
    Leave the default time window or check the box "All file", it will do the same thing.

  • Amplitude threshold: A trigger is set whenever the amplitude of the signal increases above X times the standard deviation of the signal over the entire file. Increase this value if you want the detection to be less sensitive.

  • Min duration between two events: If the event we want to detect is an oscillation, we don't want to detect a trigger at each cycle of this oscillation. After we detect one trigger, we stop the detection for a short time. Use a value between the duration of the stimulus (here 100ms) and the inter-stimulus interval (here > 700ms).

  • Apply band-pass filter before the detection: Use this option if the effect you are trying to detect is more visible in a specific frequency band. In our case, the effect is obvious in the broadband signal, we don't need to apply any filter.

  • Reference: If you have an approximation of the trigger timing, you can specify it here. Here we have the events "standard" and we want to detect a trigger in their neighborhood.
    If we do not use this option, the process creates only one new group with all the detected audio triggers, without distinction between the deviant and standard tones.

  • Detect falling edge (instead of rising edge): Detects the end of the tone instead of the beginning.

  • Remove DC offset: If the signal on which we perform the detection does not oscillate around zero or has a high continuous component, removing the average of the signal can improve the detection. This should be selected when using a photodiode with a pull-up resistor.

  • Enable classification: Tries to automatically classify the different types of events that are detected based on the morphology of the signal in the neighborhood of the trigger.
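To make the role of these options concrete, here is a minimal Python sketch of this kind of threshold detector (an illustration with hypothetical parameters, not Brainstorm's actual Matlab implementation): a trigger is placed where the signal exceeds a multiple of its standard deviation, and detection then pauses for a refractory period, which plays the role of the "min duration between two events" option.

```python
import numpy as np

# Illustrative threshold detector (hypothetical parameters,
# not Brainstorm's actual Matlab implementation).
def detect_analog_triggers(signal, sfreq, threshold_std=2.0, min_dur=0.5):
    """Return trigger times (in seconds): instants where |signal| rises
    above threshold_std * std(signal), with a refractory period of
    min_dur seconds (the "min duration between two events" option)."""
    thresh = threshold_std * np.std(signal)
    min_samples = int(min_dur * sfreq)
    triggers, last = [], -min_samples
    for i, v in enumerate(np.abs(signal)):
        if v > thresh and i - last >= min_samples:
            triggers.append(i / sfreq)
            last = i
    return triggers

# Usage: a flat signal containing two short bursts
sfreq = 600.0
sig = np.zeros(int(3 * sfreq))
sig[600:660] = 1.0    # burst starting at t = 1.0 s
sig[1500:1560] = 1.0  # burst starting at t = 2.5 s
print(detect_analog_triggers(sig, sfreq))  # -> [1.0, 2.5]
```

Without the refractory period, every supra-threshold sample of each burst would produce its own trigger, which is exactly what the "min duration between two events" option prevents.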

Results of the detection

  • Navigate through a few of the new "standard_fix" events to evaluate if the result is correct. You can observe that the corrected triggers are consistently detected after the rising portion of the audio signal, two samples after the last sample where the signal was flat.
  • This means that we are over-compensating delay #1 by 3.3ms. But at least this delay is constant and will not affect the analysis. We can count this as a constant delay of -3.3ms.

    delays_results.gif

Detecting the deviant triggers

  • Repeat the same operation for the deviant tones.
  • In the Record tab, select the menu File > Detect analog triggers.

    delays_deviant.gif

Some cleaning

  • We will use the corrected triggers only, so we can delete the original ones to avoid any confusion.
  • Delete the event groups "deviant" and "standard" (select them and press the Delete key).

  • Rename the group "deviant_fix" into "deviant" (double-click on the group name).
  • Rename the group "standard_fix" into "standard".
  • Close all: Answer YES to save the modifications.

    delays_final.gif

Repeat on acquisition run #02

Repeat all the exact same operations on the link to file AEF#02:

  • Right-click AEF#02 link > Stim > Display time series: The stim channel is UPPT001.

  • Right-click AEF#02 link > ADC V > Display time series: The audio channel is UADC001.

  • In the Record tab, select menu File > Detect analog triggers: standard_fix

  • In the Record tab, select menu File > Detect analog triggers: deviant_fix

  • Check that the events are correctly detected.
  • Delete the event groups "deviant" and "standard" (select them and press the Delete key).

  • Rename the group "deviant_fix" into "deviant" (double-click on the group name).
  • Rename the group "standard_fix" into "standard".
  • Close all: Answer YES to save the modifications.

Delays after this correction

We compensated for the jittered delays (delay #1), but not for the hardware delays (delay #2). Note that delay #3 is no longer an issue, since we are not using the original stim markers anymore, but the more accurate audio signal. The final delay between the "standard_fix" triggers and the moment when the subject gets the stimulus is now the combination of delay #2 and the over-compensation.

Final constant delay: 4.9 - 3.3 = 1.6ms

We decide not to compensate for this delay because it is very short and does not introduce any jitter in the responses. It is not going to change anything in the interpretation of the data.
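The delay bookkeeping used in this tutorial reduces to simple arithmetic; the sketch below only restates the numbers quoted above:

```python
# Restating the delay arithmetic from this tutorial (values quoted above).
sfreq = 2400.0                   # original sampling rate in Hz

# Delay #3: the anti-aliasing filter shifts the analog channels by 4 samples
delay3_ms = 4 / sfreq * 1000
print(round(delay3_ms, 1))       # -> 1.7

# After re-detecting the triggers on the audio channel:
delay2_ms = 4.9                  # transducers + air tubes (constant)
overcompensation_ms = 3.3        # triggers detected 3.3 ms after sound onset
final_delay_ms = delay2_ms - overcompensation_ms
print(round(final_delay_ms, 1))  # -> 1.6
```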

Advanced

Detection of the button responses

The subject presses a button with the right index finger when a deviant is presented. We don't really need to correct this category of events because it is already correct. You can skip this section if you are not interested in parsing digital channels.

The digital channel Stim/UDIO001 contains the inputs from the response button box (optic device, negligible delay). Each bit of the integer value on this channel corresponds to the activation of one button. We can read this channel directly to get accurate timing for the button presses.
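The logic of reading events from such a channel can be sketched as follows: scan the digital channel and create an event at each transition from zero to a non-zero value, the value itself identifying which button bits are set. This Python fragment is a hypothetical illustration, not Brainstorm's implementation:

```python
# Illustrative sketch of reading events from a digital channel
# (hypothetical values, not Brainstorm's implementation).
def read_events_from_channel(values):
    """Return (sample_index, value) pairs at each 0 -> nonzero transition."""
    events, prev = [], 0
    for i, v in enumerate(values):
        if v != 0 and prev == 0:
            events.append((i, v))
        prev = v
    return events

# UDIO001-like trace: value 64 (= 2^6, a single button bit) while pressed
trace = [0, 0, 64, 64, 64, 0, 0, 64, 64, 0]
print(read_events_from_channel(trace))  # -> [(2, 64), (7, 64)]
```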

  • Right-click AEF#01 link > Stim > Display time series: The response channel is UDIO001.

  • In the Record tab: Set the page duration to 3 seconds.

    delays_button.gif

  • Note on the DC removal: You may see the base value of the UDIO001 channel "below" the zero line. This is an effect of the DC correction that is applied on the fly to the recordings: the average of the signals over the current page is subtracted from them. To restore the real values, you can uncheck the [DC] button in the Record tab. Alternatively, just remember that the reference line for a channel doesn't necessarily mean "zero" when the DC removal option is on.

  • In the Record tab, select menu File > Read events from channel: UDIO001 / Value

    delays_button_detect.gif

  • Reject events shorter than X samples: This option is not needed here, but can become useful when the transitions between the values of the stim channels are not as clean as the sharp steps in this tutorial dataset. For example:

    • The individual bits are not changing exactly at the same time,
    • Values added (bit-wise) at transitions between 2 non-zero values when downsampling (e.g. CTF),
    • "Button bouncing": non-ideal behavior of any switch which generates multiple transitions from a single input.
  • You get a new event category 64; this is the value of UDIO001 at the detected transitions. There are 40 of them, one for each button press. We can use this as a replacement for the original button category.

    delays_button_fix.gif

  • To make things clearer: delete the button group and rename 64 into button.

    delays_button_final.gif

  • Close all: Answer YES to save the modifications.
  • Optionally, you can repeat the same operation for the other run, AEF#02. But we will not use the "button" markers in the analysis, so it is not very useful.
  • Note that these events will have delay #3 (when compared to MEG/EEG) since they are recorded on a digital channel.

Advanced

Another example: visual experiments

We have discussed here how to compensate for the delays introduced in an auditory experiment, using a copy of the audio signal saved in the recordings. A similar approach can be used for other types of experiments. Another typical example is the use of a photodiode in visual experiments.

When sending images to the subject using a screen or a projector, we usually have jittered delays coming from the stimulation computer (software and hardware) and due to the refresh rate of the device. These delays are difficult to account for in the analysis.

To detect accurately when the stimulus is presented to the subject, we can place a photodiode in the MEG room. The diode produces a change in voltage when presented with a change in light input, for example black to white on the screen. This is typically managed with a small square of light in the corner of the stimulus screen, turning white when the stimulus appears and black at all other times. The signal coming from this photodiode can be recorded together with the MEG/EEG signals, just like we did here for the audio signal. Depending on the photodiode, it is recommended to use a pull-up resistor when recording the signal. Then we can detect the triggers on the photodiode output channel using the menu "Detect analog triggers", including the 'Remove DC offset' option.

delays_video.png



Tutorial 9: Select files and run processes

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

The Brainstorm window includes a graphical batching interface. With the two tabs Process1 and Process2 in the lower part of the window, you can select files from the database explorer and assemble a processing pipeline. Most of the operations available in the interface can also be executed this way, including everything we've been doing with Brainstorm so far.

On the other hand, some features are only available this way. This is the case for the frequency filters we will need for the pre-processing of our auditory recordings. This tutorial is a parenthesis explaining how to select files and run processes; we will resume the cleaning of the recordings in the next tutorial.

Selecting files to process

The tab Process1 contains an empty box in which you can drag and drop any number of files or folders from the database explorer. The easiest way to understand how it works is to try it.

  • Try to drag and drop, in Process1, all the nodes you currently have in your database explorer.
  • You will see that it accepts all the folders and all the recordings, but not the channel files.
  • When you add a new node, the interface evaluates the number of files of the selected type that each of them contains. The number in brackets next to each node represents the number of data files that were found in it.
  • On top of the list, a comment shows the total number of files that are currently selected.

    process1_example.gif

  • The buttons on the left side allow you to select what type of file you want to process: Recordings, sources, time-frequency, other. When you select another button, all the counts are updated to reflect the number of files of the selected type that are found for each node.
  • Right now, if you select another file type, it would show only "0" everywhere because there are no sources or time-frequency decompositions available in the database yet.

    process1_sources.gif

  • To remove files from the Process1 list:
    • Select the nodes to remove (holding Shift or Ctrl key) and press the Delete key.

    • Right-click on the list > Clear list

Filter by name

When you have lots of files in a folder, like multiple source reconstructions or time-frequency files for each trial, it is difficult to grab just the ones you are interested in. After selecting your folders in the Process1 box, you can refine the selection with the Filter search box at the bottom-right corner of the window.

  • The example below shows how to select the data files corresponding to the noise recordings: by typing "Noise" in the search box and selecting the option "Search file paths". We cannot perform the search "by name" because all the data files have the same name "Link to raw file".

    process1_search.gif

  • Reminder: To see the file name corresponding to a node in the database, leave your mouse over it for a few seconds. You can do this both in the database explorer and the Process1 list.

    process_filename.gif

The options offered in the Filter menu are:

  • Search file paths: Look for the string in the full file paths (including their relative path).

  • Search names: Look for the string in the names of the files, i.e. what is displayed in the database explorer to represent them (the .Comment field).

  • Search parent names: Extends the search to the name of the parent files (applicable only to source and time-frequency files, which can depend on a data file).

  • Select files: Only the files that contain the string are selected.

  • Exclude files: Only the files that DO NOT contain the string are selected.

  • Reset filters: Removes the current file filters applied on Process1 and Process2.

  • Case insensitive: Note that the search is not case sensitive.

  • Boolean logic: You can combine different keywords to make a more precise search using advanced search queries. See the following section for more information.
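As a sketch of what these filters do (the paths below are hypothetical stand-ins mimicking the tutorial dataset, not actual Brainstorm code), the "Search file paths" option combined with "Select files" or "Exclude files" amounts to a case-insensitive substring test:

```python
# Hypothetical relative file paths, mimicking the tutorial dataset layout
paths = [
    "Subject01/S01_AEF_20131218_01/Link to raw file",
    "Subject01/S01_AEF_20131218_02/Link to raw file",
    "Subject01/S01_Noise_20131218/Link to raw file",
]

keyword = "Noise"
# "Select files": keep only files whose path contains the string (case-insensitive)
selected = [p for p in paths if keyword.lower() in p.lower()]
# "Exclude files": keep only files whose path does NOT contain the string
excluded = [p for p in paths if keyword.lower() not in p.lower()]
```

Here `selected` contains only the noise recordings, while `excluded` keeps the two AEF runs.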

Selecting processes

  • Clear the file list and the search filters.
  • Select all three datasets we have linked to our protocol.
    You can select the three "link to raw file" nodes, the three folders or the entire subject node.

  • Click on the [Run] button at the bottom-left corner of the Process1 tab.

  • The Pipeline editor window appears. You can use it to create an analysis pipeline, i.e., a list of processes that are applied to the selected files one after the other. The first button in the toolbar shows the list of processes that are currently available. If you click on a menu, it's added to the list.

  • Some menus appear in grey. This means that they are not designed to be applied to the type of data that you have in input, or at the end of the current pipeline.
  • In the current example, we have a file with the type "continuous raw recordings", so we have access mostly to menus to manipulate event markers, run cleaning procedures and import data blocks. You can recognize a few operations that we executed in the previous tutorials: "Event > Read from channel" and "Event > Detect analog triggers".

    process1_addprocess.gif

  • When you select a process, a list of options specific to this process is shown in the window.
  • To delete a process: Select it and press the Delete key, or use the [X] button in the toolbar.

  • After selecting a first process, you can add another one. The output of the first process will be passed to the second process without giving back the control to the user. This is how you can build a full analysis pipeline with the interface.
  • After adding a few processes, you can move a process up or down in the pipeline with the [up arrow] and [down arrow] buttons in the toolbar. Click on a process in the pipeline to edit its options.
  • Select and delete a few processes to understand how this interface works. Just do not click on RUN.

Plugin structure

All the menus available in the pipeline editor are actually plugins for Brainstorm. The processes are functions that are independent of each other and automatically detected when Brainstorm starts.

Any Matlab script that is added to the plugin folder (brainstorm3/toolbox/process/functions/) and has the right format will automatically be detected and made available in the GUI. This mechanism makes it easy for external contributors to develop their own code and integrate it in the interface.
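The auto-detection mechanism can be sketched as a simple folder scan. Brainstorm does this in Matlab; the Python stand-in below is only illustrative, and the function name is hypothetical:

```python
import os
import tempfile

def discover_processes(folder):
    """Return the names of all process functions found in a plugin folder,
    i.e. Matlab files whose name starts with 'process_'."""
    return sorted(f[:-2] for f in os.listdir(folder)
                  if f.startswith("process_") and f.endswith(".m"))

# Demo with a temporary folder standing in for brainstorm3/toolbox/process/functions/
with tempfile.TemporaryDirectory() as folder:
    for name in ("process_notch.m", "process_psd.m", "readme.txt"):
        open(os.path.join(folder, name), "w").close()
    print(discover_processes(folder))   # ['process_notch', 'process_psd']
```

Any correctly named file dropped into the folder is picked up on the next scan, which is why external contributions integrate without modifying Brainstorm itself.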

More information: How to write your own process

To see where the function corresponding to a process is on the hard drive: select the process in the pipeline editor, then leave your mouse for a few seconds over its title.

  • process_path.gif

Note for beginners

Everything below is advanced documentation, you can skip it for now.




Advanced

Search Database

Sometimes when working with huge protocols, you can get lost in the size of your database tree. Filtering from the process box, as introduced in the previous section, is one way to select the files you are looking for, but we have also introduced a more direct way to search for files in your database. On the right, just below the protocol selection dropdown, you can click on the magnifying glass to open the search dialog.

  • search_new.png

From there, you can create a new search query from the GUI, or type or paste an existing search query string (see the following section for more details). Let's select "New Search" to create a new query from the GUI.

  • search_dialog.png

From this menu, you can create a search query to apply on your active protocol. It has different options:

  • Search by: The file metadata to use for the search.

    • Name: Name of the file in Brainstorm
    • File type: Type of the file, see dropdown when selected for possible values
    • File path: Path of the file in the Brainstorm database folder
    • Parent name: Name of any parent file in the database tree (e.g. Subject or Folder)
  • Equality: Type of equality to apply.

    • Contains: File metadata contains the entered value
    • Contains (case): Same as contains, but case sensitive
    • Equals: Exact equality, the file metadata is equal to the entered value
    • Equals (case): Same as equals, but case sensitive
  • Not: Whether to invert the selected equality, e.g. DOES NOT CONTAIN vs CONTAINS.

  • Search for: The value to search for.

  • Remove: To remove the search row if not needed anymore.

  • + and: To add a search row, with the AND boolean logic. If you have two rows A and B, then the returned files will match both search A and B.

  • + or: To add a search row, with the OR boolean logic. If you have two rows A and B, then the returned files will match either search A or search B (or both).

In the above example, we are looking for raw files (File type = Raw data) whose parent name contains the word "noise". This allows us to search for raw noise recordings.

  • search_noise_result.png

Notice that you now have multiple tabs in your Brainstorm database. The "Database" tab contains all files in your protocol, whereas the "noise" tab only contains the files that pass the search and their parents. You can have multiple searches/tabs active so that you can easily create pipelines by dragging and dropping different search results in the process box. Do keep in mind that if you drag and drop a parent object in the process box (e.g. Subject01) with an active search, only files that pass the active search will be processed by the pipeline.

Once a search is created, you can interact with it in different ways. You can right click on the tab and Edit the search on the fly from the GUI, Copy the search to clipboard as a query string to use it in a script, or Close the search.

  • search_right_click_tab.png

You can also click on the magnifying glass when a search is active to get more options such as Saving the search for later use and Generating a process call to apply this search in a script.

  • search_edit_options.png

If you click Generate process call, a line of script will be generated for you to use your search query as a process in a script. It will also be copied to clipboard.

  • search_process.png

Notice that your search was converted to a query string:

  • ([parent CONTAINS "noise"] AND [type EQUALS "RawData"])

This advanced query syntax is described in the following section.

Advanced search queries

For advanced users, you can write more complex search queries that combine multiple keywords and types of keywords using boolean logic. You can do this using the Brainstorm search GUI and then copy your search as text to re-use later. These queries work for both database searches and process filters. The syntax is rigid and the order of the elements matters, so we recommend you use the search GUI whenever possible to avoid errors. Search queries can contain the following types of elements:

  • Search parameters: These are simple searches that are on a specific type of value. They need to be written in [square brackets]. They look like the following:

    • [searchFor EQUALITY NOT "value"]
    • SearchFor: Which field of the file metadata to search. It can have the following values, in lower case:

      • Name: Searches using the file name in Brainstorm
      • Type: Searches using the file type in Brainstorm
      • Path: Searches using the file path in the Brainstorm database folder
      • Parent: Searches using the parent's name in the Brainstorm database tree
    • Equality: The type of equality you want to use to compare the file value to the searched value. It can have the following values, in upper case:

      • CONTAINS: Whether the searchFor field contains the text "value"
      • CONTAINS_CASE: Same as CONTAINS, but case sensitive
      • EQUALS: Whether the searchFor field exactly equals the text "value"
      • EQUALS_CASE: Same as EQUALS, but case sensitive
    • NOT: (optional) add this reserved keyword to return the opposite results of the search, so for example, all files that do NOT CONTAIN the text "value".

    • "value": the text you want to search for, in double quotes.

  • Boolean operators: These are used to group together search parameters and search blocks using boolean logic. Considering search parameters a, b and c, the following will return files that pass both searches a and b, or that do not pass search c:

    • (a AND b) OR NOT c
    • AND: This combines search parameters and blocks such that both conditions have to be met.

    • OR: This combines search parameters and blocks such that either condition has to be met.

    • NOT: This precedes a search block or parameter such that the condition result is reversed. So if a condition had to be met, it now has to not be met.

    • Important note: AND and OR operators cannot be mixed together (you cannot have both in the same search block), because otherwise it creates uncertainties.

  • Search blocks: These are combinations of search parameters and boolean operators, wrapped in (round brackets). You cannot have different boolean operators in the same block.

Example

(([name CONTAINS "test1"] AND [type EQUALS "Matrix"]) OR NOT [parent CONTAINS "test2"])

Effect: This will match all matrix files whose name contains the text "test1", plus all files whose parent does not contain the text "test2".
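The semantics of a single [search parameter] can be sketched in Python (a hypothetical helper, not Brainstorm code), evaluated here on the example query above:

```python
def matches(meta, field, equality, value, negate=False):
    """Evaluate one [field EQUALITY NOT "value"] search parameter on file metadata."""
    data = meta.get(field, "")
    if equality == "CONTAINS":
        ok = value.lower() in data.lower()
    elif equality == "CONTAINS_CASE":
        ok = value in data
    elif equality == "EQUALS":
        ok = data.lower() == value.lower()
    elif equality == "EQUALS_CASE":
        ok = data == value
    else:
        raise ValueError("Unknown equality: " + equality)
    return not ok if negate else ok

# (([name CONTAINS "test1"] AND [type EQUALS "Matrix"]) OR NOT [parent CONTAINS "test2"])
f = {"name": "test1_scores", "type": "Matrix", "parent": "Subject01"}
hit = (matches(f, "name", "CONTAINS", "test1")
       and matches(f, "type", "EQUALS", "Matrix")) \
      or matches(f, "parent", "CONTAINS", "test2", negate=True)
```

For this file, the left-hand AND block is true (name contains "test1", type equals "Matrix"), so the whole query matches.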

Limitations of the GUI

The GUI does not support multiple nested search blocks. It only allows for one OR block followed by one AND block. If your query is more advanced than this, you will not be able to edit it with the search GUI. We recommend you use the process filter box instead.

Advanced

Saving a pipeline

After preparing your analysis pipeline by listing all the operations to run on your input files, you can either click on the [Run] button, or save/export your pipeline. The last button in the toolbar offers a list of menus to save, load and export pipelines.

  • pipeline_example.gif

  • Load: List of pipelines that are saved in the user preferences on this computer.

  • Load from .mat file: Import a pipeline from a pipeline_...mat file.

  • Save: Save the pipeline in the user preferences.

  • Save as .mat matrix: Exports the pipeline as a Matlab structure in a .mat file. This allows different users to exchange their analysis pipelines, or a single user to transfer them between computers.

  • Generate .m script: This option generates a Matlab script.

  • Delete: Remove a pipeline that is saved in the user preferences.

  • Reset options: Brainstorm automatically saves the options of all the processes in the user preferences. This menu removes all the saved options and sets them back to the default values.

Advanced

Automatic script generation

Here is the Matlab script that is generated for this pipeline.

pipeline_script.gif

Reading this script is easy: input files at the top, one block per process, one line per option. You can also modify it to add personal code, loops or tests. Many features are still missing in the pipeline editor, but the generated scripts are easy enough for users with basic Matlab knowledge to edit and improve.

Running this script from Matlab or clicking on the [Run] button of the pipeline editor produces exactly the same results. In both cases you will not have any interaction with the script; it could be executed without any direct supervision. You just get a report at the end that describes everything that happened during the execution.

These scripts cannot be reloaded in the pipeline editor window after being generated. If you work on a long analysis pipeline, save it in your user preferences before generating the corresponding Matlab script.

Advanced

Process: Select files with tag

Since we are discussing the file selection and the pipeline execution, we can explore a few more available options. We have seen how to filter the files in the Process1 box using the Filter search box. We can get to the exact same result by using the process File > Select files: By tag before the process you want to execute, to keep only a subset of the files that were placed in the Process1 list.

It is less convenient in interactive mode because you don't immediately see the effect of your file filter, but it can be very useful when writing scripts. You can also combine search constraints by adding the same process multiple times in your pipeline, which is not possible with the search box.

  • Make sure you still have the three datasets selected in the Process1 list.
  • Select the process: File > Select files: By tag

  • Select the options: Search: "Noise", Search the file names, Select only the files with the tag.
  • Click on [Run] to execute the process.

    process1_select.gif

  • This process is useless if not followed immediately by another process that does something with the selected files. It does nothing but select the files, but with the report viewer we can verify that the operation was actually executed.

Advanced

Report viewer

Every time the pipeline editor is used to run a list of processes, a report is created and logs all the messages that are generated during the execution. These reports are saved in the user home folder: $HOME/.brainstorm/reports/.

The report viewer shows, as an HTML page, some of the information saved in this report structure: the date and duration of execution, the list of processes, and the input and output files. It reports all the warnings and errors that occurred during the execution.

The report is displayed at the end of the execution only if more than one process was executed, or if an error or a warning was reported. Here we ran a single process without errors, so nothing is displayed.

You can always explicitly open the report viewer to show the last reports: File > Report viewer.

reports.gif

When running processes manually from a script, the calls to bst_report explicitly indicate when the logging of the events should start and stop.

You can add images to the reports for quality control using the process File > Save snapshot, and send the final reports by email with the process File > Send report by email.

With the buttons in the toolbar, you can go back to the previous reports saved from the same protocol.

More information: Scripting tutorial

Advanced

Error management

  • Select the same files and same process: File > Select files: By tag

  • Note that the options you used during the previous call are now selected by default.
  • Instead of "Noise", now search for a string that doesn't exist in the file name, such as "XXXX".

    search_xxxx.gif

  • Click on [Run] to execute the process. You will get the following error.

    search_error.gif

  • If you open the report viewer, it should look like this.

    search_report.gif

Advanced

Control the output file names

If you run two processes with different parameters that produce exactly the same file paths and file names, you would not be able to distinguish their outputs with this selection process. But immediately after calling any process, you can add the process File > Add tag to tag one specific set of files, so that you can easily re-select them later.

Example: If you run the time-frequency decomposition twice on the same files with different options, tag each set of output files with a different tag right after computing them.

  • addTag.gif

Additional documentation



Tutorial 10: Power spectrum and frequency filters

Authors: Hossein Shahabi, Francois Tadel, Elizabeth Bock, John C Mosher, Richard Leahy, Sylvain Baillet

We are now going to process our continuous recordings to remove the main sources of noise. Typically, we can expect contaminations coming from the environment (power lines, stimulation equipment, building vibrations) and from the subject (movements, blinks, heartbeats, breathing, teeth clenching, muscle tension, metal in the mouth or the body). In this tutorial, we will focus first on the noise patterns that occur continuously, at specific frequencies.

We can correct for these artifacts using frequency filters. Usually we prefer to run these notch and band-pass filters on the continuous files, before any other type of correction. They can be applied to the recordings without much supervision, but they may create significant artifacts at the beginning and the end of the signals. Processing the entire continuous recordings at once, instead of the imported epochs, avoids adding these edge effects to all the trials.

Evaluation of the noise level

Before running any type of cleaning procedure on MEG/EEG recordings, we always recommend starting with a quick evaluation of the noise level. An easy way to do this is to estimate the power spectrum of all the signals over the entire recordings.

  • Clear the list of files in the Process1 tab.
  • Select the three datasets we have linked to our protocol.
    You can select the three "link to raw file" nodes, the three folders or the entire subject node.

    psd_select_files.gif

  • Click on [Run] to open the pipeline editor window.
  • Select the process "Frequency > Power spectrum density (Welch)"

  • This process evaluates the power of the MEG/EEG signals at different frequencies, using Welch's method (see Wikipedia or MathWorks). It splits the signals into overlapping windows of a given length, calculates the Fourier transform (FFT) of each of these short segments, and averages the power of the FFT coefficients over all the overlapping windows.

    psd_options.png

  • Set the options as follows (click on [Edit] for the additional options):
    • Time window: [All file]
      Portion of the input file you want to use for estimating the spectrum.
      It is common to observe huge artifacts at the beginning or the end of the recordings; in this case you should exclude these segments from the calculation of the PSD.
      In practice, using just the first 100s or 200s of the file can give you a good enough impression of the quality of the recordings.

    • Window length: 4 seconds
      Estimator length = length of the overlapping time windows for which we calculate the FFT. The number of time samples in the estimator is the same as the number of frequency bins in the output file. Increasing this parameter increases the output frequency resolution (distance between two frequency bins) but degrades the stability of the estimator, as it also decreases the total number of averaged time windows. A Hamming window is applied to each estimator window before the computation of the FFT. See forum post: Effect of window length on the PSD

    • Overlap: 50%
      How much overlap do we want between two consecutive time windows.

    • Units: Physical: U^2/Hz
      Scaling of the spectrum. This only affects the values on the Y axis of the spectrum. Physical units should be used in most cases.
      "Normalized" gives normalized frequencies from 0 to 2pi (Hz·s).
      "Before Nov 2020" reproduces the older Brainstorm spectrum scaling (see this forum post).

    • Sensor types or names: MEG
      Defines the list of channels (names or types) on which you want to apply the process.

    • Frequency definition: Matlab's FFT default
      You have the option to directly use the frequency binning returned by the FFT, or run an additional step of averaging these bins in larger frequency bands. Note that you can freely edit these frequency bands.

    • Output: Save individual PSD value.
      This option will separately estimate the PSD for each of the three files in input, and create three files in output. If you select the other option (save average), it calculates the same three files but averages them on the fly and saves only one file in the database.

    • Implementation details: See function brainstorm3/toolbox/timefreq/bst_psd.m

  • Click on [Run] to start the execution of the process.
  • Troubleshooting: If you get "Out of memory" errors, try to run this PSD estimation on a shorter time segment. For example, set the time window to [0,100s] instead of the full file. This process starts by loading all the needed recordings in memory; you might not have enough memory available on your system to fit the entire dataset.

  • It produces three new files, which appear as depending on the three datasets. The comments of the files indicate how many overlapping windows could be used to estimate the PSD. "179/4000ms" means 179 windows of 4s each (716s in total). With the 50% overlap, this sums up to a little less than 2x the file length (360s).

    psd_output_files.gif
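Brainstorm is a Matlab application (the actual implementation is in bst_psd.m), but the Welch estimation described above can be sketched with SciPy, using the same parameters as in this tutorial (4 s Hamming windows, 50% overlap, 600 Hz sampling). The simulated signal is an illustrative stand-in for one 360 s run:

```python
import numpy as np
from scipy.signal import welch

fs = 600.0                                 # sampling rate of this dataset (Hz)
rng = np.random.default_rng(0)
t = np.arange(0, 360, 1 / fs)              # 360 s, like one run of the tutorial data
x = np.sin(2 * np.pi * 60 * t) + 0.5 * rng.standard_normal(t.size)  # 60 Hz line noise

nper = int(4 * fs)                         # 4 s estimator windows
f, pxx = welch(x, fs=fs, window="hamming", nperseg=nper, noverlap=nper // 2)

# Number of overlapping windows, as reported in the file comment ("179/4000ms")
n_win = (t.size - nper) // (nper // 2) + 1   # 179 windows
peak_hz = f[np.argmax(pxx)]                  # the spectrum peaks near 60 Hz
```

The window count matches the "179/4000ms" comment: 179 windows of 4 s starting every 2 s cover the 360 s file.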

Interpretation of the PSD

File: AEF#01

  • Double-click on the new PSD file for run #01 to display it (or right-click > Power spectrum).

    psd_display.gif

  • The power spectrum is displayed in a figure similar to the time series, but the X axis represents frequencies. Most of the shortcuts described for the recordings are also valid here. Clicking on the white parts of the figure or using the arrow keys of the keyboard moves the frequency cursor. The current frequency can also be controlled with a new slider displayed in the Brainstorm window, just below the time panel.
  • Each black line represents the power spectrum of one channel. If you click on a channel, it gets highlighted in red. Click again to deselect it. Right-click on a selected channel to read its name.

    psd_run1.png

  • The frequency resolution of this power spectrum, i.e. the distance between two frequency bins represented in this figure, is about 0.15Hz. This resolution depends on the length of the estimator window you used. The FFT is computed on the number of time samples per window (4s*600Hz), rounded up to the next power of 2 (nextpow2), and represents the full spectrum of the file (0-600Hz).
    Frequency resolution = sampling_freq / 2^nextpow2(estimator_length*sampling_freq) = 0.1465 Hz

  • The shape of this graph is normal, it does not indicate anything unexpected:
    • Peaks related with the subject's alpha rhythms: around 10Hz and 20Hz.

    • Peaks related with the power lines: 60Hz, 120Hz and 180Hz.
      These datasets were recorded in Canada, where the alternating powerline current is delivered at 60Hz. In Europe you would observe similar peaks at 50Hz, 100Hz and 150Hz.

  • Add a topography view for the same file, with one of the two methods below:
    • Right-click on the PSD file > 2D Sensor cap.

    • Right-click on the spectrum figure > 2D Sensor cap (shortcut: Ctrl+T)

  • Scroll in frequencies to see the spatial distribution of each frequency bin:

    psd_topo_ncm4.png

  • We have already identified two artifacts we will need to remove: the eye movements and the 60Hz+harmonics from the power lines.
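The frequency resolution given earlier in this section can be checked numerically. nextpow2 is a Matlab function; the Python sketch below reproduces only the arithmetic:

```python
import math

fs = 600.0            # sampling frequency (Hz)
win_s = 4.0           # estimator window length (s)

n_samples = int(win_s * fs)                    # 2400 samples per window
nfft = 2 ** math.ceil(math.log2(n_samples))    # next power of 2: 4096
freq_resolution = fs / nfft                    # ~0.1465 Hz, as stated above
```

A longer estimator window would raise `nfft` and shrink the distance between frequency bins, at the cost of fewer averaged windows.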

File: AEF#02

  • Open the spectrum view for the run AEF#02.
    To view the signal units instead of dB, select Display Tab > Measure > Power. Then from the display options icon on the right of the figure, select Amplitude > Log scale

    psd_run2_units.png

    psd_run2.png

  • Add a 2D sensor cap view for the same file. Scroll to zoom in/out.
    To display the sensor names, right-click on the figure > Channels > Display sensors.

  • This spectrum looks very similar to the run #01: same alpha and power line peaks.
  • Additionally, we observe higher signal power between 30Hz and 100Hz on many occipital sensors. This is probably related to some tension in the neck muscles due to an uncomfortable seating position in the MEG. We will see later whether these channels need to be tagged as bad or not.

    psd_neck.png

File: Noise recordings

  • Open the spectrum view for the noise recordings.

    psd_noise.png

  • This shows the power spectrum of the signals that are recorded when there is no subject in the MEG room. It gives a good and simple representation of the instrumental noise. If you had one bad MEG sensor, you would see it immediately in this graph. Here everything looks good.

X Log-scale

  • One option is worth mentioning when displaying power spectra: the logarithmic scale for the X axis. In the display options for the PSD figure, select Frequency > Log scale. It is sometimes better adapted to represent this type of data than a linear scale (especially with higher sampling frequencies).

    psd_log.png

Elekta-Neuromag and EEG users

The Elekta-Neuromag MEG systems combine different types of sensors with very different amplitude ranges, so you would not observe the same types of figures. The same goes for EEG users: this might not look like what you observe on your own recordings.

For now, keep on following these tutorials with the example dataset to learn how to use all the Brainstorm basic features. Once you're done, read additional tutorials in the section "Other analysis scenarios" to learn about the specificities related with your own acquisition system.

Apply a notch filter

This filter was updated in 2019. In the new configuration, the user can define the 3-dB bandwidth of the filter. Keep in mind that a smaller bandwidth means a sharper filter, which in some cases can make the filter unstable. If you want to reproduce the old filter, you can check the box "Use old filter implementation". The 3-dB bandwidth is not applicable to the old configuration.

For illustration purposes, we will now run a frequency filter to remove the 60Hz+harmonics from the continuous files. Notch filters are adapted for removing well identified contaminations from systems oscillating at very stable frequencies.

  • Keep all the three datasets selected in the Process1 box.
    Remember to always apply the same filters on the subject recordings and the noise recordings.

  • Click on [Run] to open the Pipeline editor.
  • Run the process: Pre-process > Notch filter

    notch_process.gif

    • Process the entire file at once: NO

    • Sensor types or names: MEG

    • Frequencies to remove: 60, 120, 180 Hz

    • 3-dB notch bandwidth: 2 Hz

    • The higher harmonics (240Hz) are not clearly visible, and too high to bother us in this analysis.
  • This process creates three new datasets, with additional "notch" tags. These output files are saved directly in the Brainstorm database, in a binary format specific to Brainstorm (.bst).

    notch_output.gif

  • If you delete the folders corresponding to the original files (before the filter), your original recordings in the .ds folders are not altered. If you delete the folders corresponding to the filtered files, you will lose your filtered recordings in .bst format.
  • To check where the file corresponding to a "Link to raw file" is actually saved on the hard drive, right-click on it > File > Show in file explorer.

  • Important: This is an optional processing step. Whether you need this on your own recordings depends on the analysis you are planning to run on the recordings (see advanced sections below).
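The idea of an IIR notch with a given 3-dB bandwidth can be sketched with SciPy. This is an illustrative stand-in, not Brainstorm's exact implementation; the quality factor Q of each notch is its center frequency divided by the desired bandwidth:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 600.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 60 * t) + np.sin(2 * np.pi * 11 * t)  # line noise + "brain" signal

# One notch per frequency to remove, each with a 2 Hz 3-dB bandwidth (Q = f0 / 2)
for f0 in (60.0, 120.0, 180.0):
    b, a = iirnotch(f0, f0 / 2.0, fs=fs)
    x = filtfilt(b, a, x)        # zero-phase filtering, suitable for offline data

# The 60 Hz component is suppressed while the 11 Hz component is preserved
```

Away from the edges of the signal, the 60 Hz sinusoid is almost entirely removed and the 11 Hz component passes through with a gain close to 1.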

Evaluation of the filter

  • Right-click on the Process1 list > Clear list.

  • Drag and drop the three filtered files in Process1.

    notch_psd.gif

  • Run again the PSD process "Frequency > Power spectrum density (Welch)" on the new files, with the same parameters as before, to evaluate the quality of the correction.

  • Double-click on the new PSD files to open them.

    notch_evaluation.png

  • Scroll to zoom in and observe what is happening around 60Hz (before / after).

    notch_zoom.png

  • See below an example of how this filter can affect the time series: top=before, bottom=after.
    We show the reference sensor BR2 because it shows a lot more 60Hz than any MEG sensor (sensor type "MEG REF"), i.e. oscillations with a period of 16.7ms.
    Note the edge effect at the beginning of the signal: the signals below are 1.5s long, and the 60Hz notch filter is visibly not performing well during the first 500ms (blue window).

    notch_timeseries2.gif

  • If you look in the list of events, you will see a new category "transient_notch". This corresponds to the time period during which we can expect significant edge effects due to the filtering. Brainstorm doesn't mark these blocks as bad by default, you would have to do it manually - you will see how to do this in one of the following tutorials. In the case of this dataset, the transient duration is much shorter than the delay before the first stimulation, so it is not relevant in our processing pipeline. See the advanced sections below for more details about the estimation of this transient duration.

    notch_transient.gif

Some cleaning

To avoid any confusion later, delete the links to the original files:

  • Select the folders containing the original files and press Delete (or right-click > File > Delete).

    notch_delete.gif

  • Always read the confirmation messages carefully, you will avoid some bad surprises.

    notch_confirmation.gif

  • This is what your database explorer should look like at the end of this tutorial:

    notch_final.gif

Note for beginners

Everything below is advanced documentation, you can skip it for now.




Advanced

What filters to apply?

The frequency filters you should apply depend on the noise present in your recordings, but also on the type of analysis you are planning to run. This section provides some general recommendations.

High-pass filter

  • Purpose: Remove the low frequencies from the signals. Typically used for:

    • Removing the arbitrary DC offset and slow drifts of MEG sensors (< 0.2Hz),

    • Removing the artifacts occurring at low frequencies (< 1Hz, e.g. breathing or eye movements).

  • Warnings:

    • Edge effects: Transient effects you should discard at the start and end of each filtered signal because the filtering window extends into time periods outside those for which you have data.
    • Avoid using on epoched data: You need long segments of recordings to run a high-pass filter.
    • Be careful with the frequency you choose if you are studying cognitive processes that may include sustained activity in some brain regions (eg. n-back memory task).

Low-pass filter

  • Purpose: Remove the high frequencies from the signals. Typically used for:

    • If the components of interest are below for example 40Hz, you may discard the faster components in the signal by applying a low-pass filter with a frequency cutoff below 40Hz.
    • Removing strong noise occurring at high frequencies (eg. muscle contractions, stimulators).
    • Display averages: In an event-related design, you will import and average multiple trials. You may low-pass filter these averages for display and interpretation purposes.
    • Statistics: In an event-related study with multiple subjects, the latency of the brain response of interest may vary between subjects. Smoothing the subject-level averages before computing a statistic across subjects may help reveal the effect of interest.
  • Warnings:

    • Edge effects: Transient effects you should discard at the start and end of each filtered signal because the filtering window extends into time periods outside those for which you have data.
    • It is always better to filter continuous (non-epoched data) when possible.
    • When filtering averages: Import longer epochs, average them, filter, then remove the beginning and the end of the average to keep only the signals that could be filtered properly.

Band-pass filter

  • Purpose: A band-pass filter is the combination of a low-pass filter and a high-pass filter, it removes all the frequencies outside of the frequency band of interest.

  • Warnings: The same considerations and warnings as for high and low pass filtering apply here.

Notch filter

  • Purpose: Remove a sinusoidal signal at a specific frequency (power lines noise, head tracking coils).

  • Warnings:

    • Use only if needed: It is not always recommended to remove the 50-60Hz power lines peaks. If you don't have a clear reason to think that these frequencies will cause a problem in your analysis, you don't need to filter them out.
    • In an ERP analysis, the averaging of multiple trials will get rid of the 50-60Hz power line oscillations because they are not time-locked to the stimulus.
    • If you are using a low-pass filter, do not apply a notch filter at a higher frequency (it would be useless).
  • Alternatives: If the notch filter is not giving satisfying results, you have two other options.

    • Band-stop filter: Similar to the notch filter, but more aggressive on the data.
      Useful for removing larger segments of the spectrum, in case the power line peaks are spread over numerous frequency bins or for suppressing other types of artifacts.

    • Sinusoid removal: This process can do a better job at removing precise frequencies by identifying the sinusoidal components and then subtracting them from the signals in the time domain. This is not a frequency filter and works best on short segments of recordings.
      Run it on the imported epochs rather than on the continuous files.
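The principle behind sinusoid removal can be sketched in a few lines (a simplified illustration, not the actual Brainstorm implementation): fit a sine/cosine pair at the interfering frequency by least squares, then subtract the fitted sinusoid in the time domain. The signal below is synthetic:

```python
import numpy as np

fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)                         # short 2 s segment
signal = np.sin(2 * np.pi * 11.0 * t)                   # signal of interest
x = signal + 0.5 * np.sin(2 * np.pi * 60.0 * t + 0.3)   # 60 Hz contamination

# Least-squares fit of a sine/cosine pair at the interfering frequency
f0 = 60.0
A = np.column_stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)

# Subtract the fitted sinusoid in the time domain
cleaned = x - A @ coef
```

Because the sinusoid is estimated and subtracted directly, no frequency filter is involved and there are no filter edge effects, which is why this approach works best on short segments.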

When to apply these filters?

  • Continuous files: Frequency filters used for pre-processing purposes should be applied before epoching the recordings. In general, filters will introduce transient effects at the beginning and the end of each time series, which make these parts of the data unreliable and they should be discarded. If possible, it is safer and more efficient to filter the entire recordings from the original continuous file at once.

  • Before SSP/ICA cleaning: Artifact cleaning with SSP/ICA projectors require all the channels of data to be loaded in memory at the same time. Applying a frequency filter on a file that contains projectors requires all the file to be loaded and processed at once, which may cause memory issues. Pre-processing filters are rarely changed, whereas you may want to redo the SSP/ICA cleaning multiple times. Therefore it is more convenient to apply the filters first.

  • Imported epochs: Filtering epochs after importing them in the database is possible but requires extra attention: you may need to import longer epochs to be able to deal with the edge effects.

  • After averaging: You may low-pass filter the averaged epochs for display or statistical purposes but again be aware of the edge effects.

  • Empty room measurements: In principle, all the filters that are applied to the experimental data also need to be applied, with the same settings, to the noise recordings. In the source estimation process, we will need all the files to have similar levels of noise, especially for the calculation of the noise covariance matrix. This applies in particular when some channels are noisy.

  • Think first: Never apply a frequency filter without a clear reason (artifacts, predefined frequency ranges of interest, etc.) and without keeping the side effects under control. Avoid when possible.

Filter specifications: Low-pass, high-pass, band-pass

  • Process: Pre-process > Band-pass filter

    specs_bandpass.gif

  • Process options:

    • Lower cutoff frequency: Defines a high-pass filter (enter 0 for a low-pass filter)

    • Upper cutoff frequency: Defines a low-pass filter (enter 0 for a high-pass filter)

    • Stopband attenuation: The higher the attenuation, the higher the performance of the filter, but the longer the transient duration. Use the default (60dB) unless you need shorter edge effects.

    • Use old filter: For replicating results obtained with older versions of Brainstorm.

    • View filter response: Click on this button to display the impulse and frequency response of your filter, and confirm that the responses appear reasonable.

  • Filter design:

    • Description: Even-order linear phase FIR filter, based on a Kaiser window design. The order N is estimated using Matlab's kaiserord function and the filter generated with fir1. Because the filters are linear phase, we can (and do) compensate for the filter delay by shifting the sequence backward in time by M=N/2 samples. This effectively makes the filters zero-phase and zero-delay.

    • Ripple and attenuation: The allowed ripple in the passband and attenuation in the stopband are set by default to 10^(-3) and 60dB respectively (note that with a Kaiser window design, the errors in the passband and stopband are always equal). Transitions between pass and stopbands are set to 15 percent of the upper and lower passband edges. However, when the lower edge of the passband is 5 Hz or lower, we set the transition width to 50 percent of the lower passband edge.

    • Filtering function: The FIR band-pass filter can be applied in the frequency domain (fftfilt function) or in the time domain (filter function). The two approaches give the same results, but their execution times differ depending on the filter order. Time-domain filtering is faster for low-order filters and much slower for high-order filters. The process automatically selects which approach to use.
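A rough Python analogue of this design, using SciPy's kaiserord and firwin instead of the Matlab functions called by Brainstorm (all parameters below are illustrative), shows the delay compensation that makes the linear-phase FIR zero-phase:

```python
import numpy as np
import scipy.signal

fs = 250.0
t = np.arange(0, 8.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10.0 * t) + np.sin(2 * np.pi * 2.0 * t)  # 10 Hz + 2 Hz

# Kaiser-window FIR band-pass: 8-12 Hz passband, 2 Hz transitions, 60 dB
numtaps, beta = scipy.signal.kaiserord(60.0, 2.0 / (fs / 2.0))
numtaps += 1 - numtaps % 2          # force odd length, i.e. even order N
b = scipy.signal.firwin(numtaps, [8.0, 12.0], window=('kaiser', beta),
                        pass_zero=False, fs=fs)

# Zero-phase: filter the zero-padded data, then shift back by M = N/2 samples
M = (numtaps - 1) // 2
y = scipy.signal.lfilter(b, 1.0, np.concatenate([x, np.zeros(M)]))[M:]

# Away from the M-sample edge transients, only the 10 Hz component remains
mid = slice(2 * M, len(x) - 2 * M)
```

The backward shift by M samples cancels the constant group delay of the linear-phase filter, so the output is aligned with the input without any phase distortion, as described above.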

  • Edge effects:

    • Transient (full): With any filtering operation there will always be a transient effect at the beginning of the filtered data. For our filter, this effect will last for half of the filter order: M=N/2 samples. We strongly recommend that your data records are sufficiently long that you can discard these M=N/2 samples. Because we are using zero-phase filtering, there is a similar N/2 effect at the end of the sampled data – these samples should also be discarded.

    • Transient (99% energy): For some filters, the full transient window might be longer than your epochs. However, most of the energy is carried by the beginning of the filter, and you can obtain amplitudes acceptable for most analysis after a fraction of this full window. For this reason we also mention a much shorter window in the documentation of the filter, which corresponds to the duration needed to obtain 99% of the total energy in the impulse response. This duration corresponds to the "transient" event markers that are added to the recordings when applying filters.

      bandpass_transient.gif

    • Adjust the parameters: If possible, always discard the full transient window. If the edge effect affects too much of your data, adjust the filter parameters to reduce filter order (increase the lower cut-off frequency or reduce the stopband attenuation). If you cannot get any acceptable compromise, you can consider discarding shorter transient windows, but never go below this "99% energy" window.

    • Mirroring: We included an option to mirror the data at the start and end of the record instead of padding the signal with zeros. This will reduce the apparent N/2 transients at the start and end of your data record, but you should be aware that these samples are still unreliable and we do not recommend using them.

    • [TODO] Check this "99% energy" criteria in the case of high-pass filters, it does not seem very useful...
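The "99% energy" transient can be computed directly from the filter coefficients, mirroring the logic of the Brainstorm code (sketched here with a SciPy-designed filter and illustrative parameters):

```python
import numpy as np
import scipy.signal

# Design an example Kaiser-window FIR band-pass (8-12 Hz at fs = 250 Hz)
fs = 250.0
numtaps, beta = scipy.signal.kaiserord(60.0, 2.0 / (fs / 2.0))
numtaps += 1 - numtaps % 2
b = scipy.signal.firwin(numtaps, [8.0, 12.0], window=('kaiser', beta),
                        pass_zero=False, fs=fs)

M = (numtaps - 1) // 2                     # full transient, in samples
# Cumulative energy of the second half of the (symmetric) impulse response
E = np.cumsum(b[M:] ** 2)
E = E / E[-1]
i99 = int(np.argmin(np.abs(E - 0.99)))     # "99% energy" transient, in samples

transient_full_s = M / fs
transient_99_s = i99 / fs
```

Because most of the impulse-response energy is concentrated near its center, the 99%-energy window is much shorter than the full M-sample transient, which is what the "transient" event markers reflect.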

  • Additional recommendations:

    • Filter order: The key issue to be aware of when filtering is that the specification you choose for your filter will determine the length of the impulse response (or the filter order) which in turn will affect the fraction of your data that fall into the "edge" region. The most likely factor that will contribute to a high filter order and large edge effects is if you specify a very low frequency at the lower edge of the passband (i.e. the high pass cut-off frequency).

    • Detrending: If your goal is to remove the DC signal we recommend you first try detrending the data (removes average and best linear fit) to see if this is sufficient. If you still need to remove other low frequency components, then pick the highest cut-off frequency that will fit your needs.

    • Design optimization: If you are performing bandpass filtering and are not satisfied with the results, you can investigate filtering your data twice, once with a low pass filter and once with a high pass filter. The advantage of this is that you can now separately control the transition widths and stop band attenuation of the two filters. When designing a single BPF using the Kaiser (or any other) window, the maximum deviation from the desired response will be equal in all bands, and the transition widths will be equal at the lower and upper edges of the pass band. By instead using a LPF and a HPF you can optimize each of these processes separately using our filtering function.
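The detrending recommendation above (remove the average and the best linear fit before resorting to a high-pass filter) can be sketched with SciPy's detrend on a synthetic signal; this is an illustration, not a Brainstorm call:

```python
import numpy as np
import scipy.signal

fs = 100.0
t = np.arange(0, 10.0, 1.0 / fs)
# DC offset + slow linear drift + oscillation of interest
x = 2.0 + 0.3 * t + np.sin(2 * np.pi * 5.0 * t)

# Removes the mean and the best least-squares linear fit, with no edge effects
detrended = scipy.signal.detrend(x)
```

Unlike a high-pass filter, detrending has no filter order and therefore no edge transients, which makes it a safe first attempt for removing the DC component.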

  • Linear phase, no distortion, zero delay: As described earlier, FIR filters have a linear phase in the frequency domain. This means that all samples of the input signal are delayed by the same amount in the output, and this delay is compensated after filtering. Consequently, no distortion happens during the filtering process. To illustrate this property, we considered a chirp signal, in which the oscillation frequency grows linearly with time. The signal is band-pass filtered in two frequency ranges. The following plot represents the original signal and its filtered versions with our proposed filters. The results show that the input and output signals of this filter are perfectly aligned, without any delay or distortion.

    bandpass_chirp.gif

  • Function: brainstorm3/toolbox/math/bst_bandpass_hfilter.m

  • External call: process_bandpass('Compute', x, Fs, HighPass, LowPass, 'bst-hfilter', isMirror, isRelax)

  • Code:

    function [x, FiltSpec, Messages] = bst_bandpass_hfilter(x, Fs, HighPass, LowPass, isMirror, isRelax, Function, TranBand, Method)
    % BST_BANDPASS_HFILTER Linear phase FIR bandpass filter.
    %
    % USAGE: [x, FiltSpec, Messages] = bst_bandpass_hfilter(x, Fs, HighPass, LowPass, isMirror=0, isRelax=0, Function=[detect], TranBand=[], Method='bst-hfilter-2019')
    %        [~, FiltSpec, Messages] = bst_bandpass_hfilter([], Fs, HighPass, LowPass, isMirror=0, isRelax=0, Function=[detect], TranBand=[], Method='bst-hfilter-2019')
    %                               x = bst_bandpass_hfilter(x, Fs, FiltSpec)
    %
    % DESCRIPTION:
    %    - A linear phase FIR filter is created.
    %    - Function "kaiserord" and "kaiser" are used to set the necessary order for fir1.
    %    - The transition band can be modified by user.
    %    - Requires Signal Processing Toolbox for the following functions:
    %      kaiserord, kaiser, fir1, fftfilt. If not, using Octave-based alternatives.
    %
    % INPUT:
    %    - x        : [nChannels,nTime] input signal (empty to only get the filter specs)
    %    - Fs       : Sampling frequency
    %    - HighPass : Frequency below this value are filtered in Hz (set to 0 for low-pass filter only)
    %    - LowPass  : Frequency above this value are filtered in Hz (set to 0 for high-pass filter only)
    %    - isMirror : isMirror (default = 0 no mirroring)
    %    - isRelax  : Change ripple and attenuation coefficients (default=0 no relaxation)
    %    - Function : 'fftfilt', filtering in frequency domain (default)
    %                 'filter', filtering in time domain
    %                 If not specified, detects automatically the fastest option based on the filter order
    %    - TranBand : Width of the transition band in Hz
    %    - Method   : Version of the filter (2019/2016-18)
    %
    % OUTPUT:
    %    - x        : Filtered signals
    %    - FiltSpec : Filter specifications (coefficients, length, ...)
    %    - Messages : Warning messages, if any

    % @=============================================================================
    % This function is part of the Brainstorm software:
    % https://neuroimage.usc.edu/brainstorm
    %
    % Copyright (c) University of Southern California & McGill University
    % This software is distributed under the terms of the GNU General Public License
    % as published by the Free Software Foundation. Further details on the GPLv3
    % license can be found at http://www.gnu.org/copyleft/gpl.html.
    %
    % FOR RESEARCH PURPOSES ONLY. THE SOFTWARE IS PROVIDED "AS IS," AND THE
    % UNIVERSITY OF SOUTHERN CALIFORNIA AND ITS COLLABORATORS DO NOT MAKE ANY
    % WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF
    % MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, NOR DO THEY ASSUME ANY
    % LIABILITY OR RESPONSIBILITY FOR THE USE OF THIS SOFTWARE.
    %
    % For more information type "brainstorm license" at command prompt.
    % =============================================================================@
    %
    % Authors: Hossein Shahabi, Francois Tadel, John Mosher, Richard Leahy,
    %          2016-2019


    %% ===== PARSE INPUTS =====
    % Filter is already computed
    if (nargin == 3)
        FiltSpec = HighPass;
    % Default filter options
    else
        if (nargin < 9) || isempty(Method)
            Method = 'bst-hfilter-2019';
        end
        if (nargin < 8) || isempty(TranBand)
            TranBand = [];
        end
        if (nargin < 7) || isempty(Function)
            Function = [];  % Auto-detection based on the filter order later in the code
        end
        if (nargin < 6) || isempty(isRelax)
            isRelax = 0;
        end
        if (nargin < 5) || isempty(isMirror)
            isMirror = 0;
        end
        FiltSpec = [];
    end
    Messages = [];


    %% ===== CREATE FILTER =====
    if isempty(FiltSpec)
        % ===== FILTER SPECIFICATIONS =====
        Nyquist = Fs/2;
        % High-pass filter
        if ~isempty(HighPass) && (HighPass ~= 0)
            f_highpass = HighPass / Nyquist;   % Change frequency from Hz to normalized scale (0-1)
            switch Method
                case 'bst-hfilter-2019'
                    if isempty(TranBand) || TranBand==0
                        if (HighPass <= 5)
                            LwTranBand = .5;   %Hz
                        else
                            LwTranBand = 1;    %Hz
                        end
                        f_highstop = f_highpass - LwTranBand/Nyquist;
                    else
                        f_highstop = max(0, HighPass - TranBand) / Nyquist;
                        % f_highstop = max(0.2, HighPass - TranBand) / Nyquist;
                        TranBand = (f_highpass - f_highstop)*Nyquist;   % Adjusted transition band
                    end
                case 'bst-hfilter-2016'
                    % Default transition band
                    if (HighPass <= 5)   % Relax the transition band if HighPass<5 Hz
                        f_highstop = .5 * f_highpass;
                    else
                        f_highstop = .85 * f_highpass;
                    end
            end
        else
            f_highpass = 0;
            f_highstop = 0;
            LwTranBand = 1;
        end
        % Low-pass filter
        if ~isempty(LowPass) && (LowPass ~= 0)
            f_lowpass = LowPass / Nyquist;
            switch Method
                case 'bst-hfilter-2019'
                    if isempty(TranBand) || TranBand==0
                        UpTranBand = 1;
                        UpTranBand = min(UpTranBand, LwTranBand);
                        f_lowstop = f_lowpass + UpTranBand/Nyquist;
                    else
                        f_lowstop = f_lowpass + TranBand/Nyquist;
                    end
                case 'bst-hfilter-2016'
                    % Default transition band
                    if f_highpass==0   % If this is a low-pass filter
                        f_lowstop = 1.05 * f_lowpass;
                    else
                        f_lowstop = 1.15 * f_lowpass;
                    end
            end
        else
            f_lowpass = 0;
            f_lowstop = 0;
        end
        % If both high-pass and low-pass are zero
        if (f_highpass == 0) && (f_lowpass == 0)
            Messages = ['No frequency band in input.' 10];
            return;
        % Input frequencies are too high
        elseif (f_highpass >= 1) || (f_lowpass >= 1)
            Messages = sprintf('Cannot filter above %dHz.\n', Nyquist);
            return;
        end
        % Transition parameters
        if isRelax
            Ripple = 10^(-2);
            Atten  = 10^(-2);   % Equals 40db
        else
            Ripple = 10^(-3);   % pass band ripple
            Atten  = 10^(-3);   % Equals 60db
        end

        % ===== DESIGN FILTER =====
        % Build the general case first
        fcuts = [f_highstop, f_highpass, f_lowpass, f_lowstop];
        mags  = [0 1 0];                % filter magnitudes
        devs  = [Atten Ripple Atten];   % deviations
        % Now adjust for desired properties
        fcuts = max(0, fcuts);       % Can't go below zero
        fcuts = min(1-eps, fcuts);   % Can't go above or equal to 1

        % We have implicitly created a bandpass, but now adjust for desired filter
        if (f_lowpass == 0)   % User didn't want a lowpass
            fcuts(3:4) = [];
            mags(3) = [];
            devs(3) = [];
        end
        if (f_highpass == 0)   % User didn't want a highpass
            fcuts(1:2) = [];
            mags(1) = [];
            devs(1) = [];
        end

        % Generate FIR filter
        % Using Matlab's Signal Processing toolbox
        if bst_get('UseSigProcToolbox')
            [n,Wn,beta,ftype] = kaiserord(fcuts, mags, devs, 2);
            n = n + rem(n,2);   % ensure even order
            b = fir1(n, Wn, ftype, kaiser(n+1,beta), 'noscale');
        % Using Octave-based functions
        else
            [n,Wn,beta,ftype] = oc_kaiserord(fcuts, mags, devs, 2);
            n = n + rem(n,2);   % ensure even order
            b = oc_fir1(n, Wn, ftype, oc_kaiser(n+1,beta), 'noscale');
        end

        % Filtering function: Detect the fastest option, if not explicitly defined
        if isempty(Function)
            % The filter() function is a bit faster for low-order filters, but much slower for high-order filters
            if (n > 800)   % Empirical threshold
                Function = 'fftfilt';
            else
                Function = 'filter';
            end
        end

        % Compute the cumulative energy of the impulse response
        E = b((n/2)+1:end) .^ 2;
        E = cumsum(E);
        E = E ./ max(E);
        % Compute the effective transient: Number of samples necessary for having 99% of the impulse response energy
        [tmp, iE99] = min(abs(E - 0.99));

        % Output structure
        FiltSpec.b          = b;
        FiltSpec.a          = 1;
        FiltSpec.order      = n;
        FiltSpec.transient  = iE99 / Fs;          % Start up and end transients in seconds (Effective)
        % FiltSpec.transient_full = n / (2*Fs);   % Start up and end transients in seconds (Actual)
        FiltSpec.f_highpass = f_highpass;
        FiltSpec.f_lowpass  = f_lowpass;
        FiltSpec.fcuts      = fcuts * Nyquist;    % Stop and pass bands in Hz (instead of normalized)
        FiltSpec.function   = Function;
        FiltSpec.mirror     = isMirror;
        % If empty input: just return the filter specs
        if isempty(x)
            return;
        end
    end

    %% ===== FILTER SIGNALS =====
    % Transpose signal: [time,channels]
    [nChan, nTime] = size(x);
    % Half of filter length
    M = FiltSpec.order / 2;
    % If filter length > 10% of data length
    edgePercent = 2*FiltSpec.transient / (nTime / Fs);
    if (edgePercent > 0.1)
        Messages = [Messages, sprintf('Start up and end transients (%.2fs) represent %.1f%% of your data.\n', 2*FiltSpec.transient, 100*edgePercent)];
    end

    % Remove the mean of the data before filtering
    xmean = mean(x,2);
    x = bst_bsxfun(@minus, x, xmean);

    % Mirroring requires the data to be longer than the filter
    if (FiltSpec.mirror) && (nTime < M)
        Messages = [Messages, 'Warning: Data is too short for mirroring. Option is ignored...' 10];
        FiltSpec.mirror = 0;
    end
    % Mirror signals
    if (FiltSpec.mirror)
        x = [fliplr(x(:,1:M)), x, fliplr(x(:,end-M+1:end))];
    % Zero-padding
    else
        x = [zeros(nChan,M), x, zeros(nChan,M)];
    end

    % Filter signals
    switch (FiltSpec.function)
        case 'fftfilt'
            if bst_get('UseSigProcToolbox')
                x = fftfilt(FiltSpec.b, x')';
            else
                x = oc_fftfilt(FiltSpec.b, x')';
            end
        case 'filter'
            x = filter(FiltSpec.b, FiltSpec.a, x, [], 2);
    end

    % Remove extra data
    x = x(:,2*M+1:end);
    % Restore the mean of the signal (only if there is no high-pass filter)
    if (FiltSpec.f_highpass == 0)
        x = bst_bsxfun(@plus, x, xmean);
    end

Filter specifications: Notch

  • Description: 2nd order IIR notch filter with zero-phase lag (implemented with filtfilt).

  • Reference: Mitra, Sanjit Kumar, and Yonghong Kuo. Digital signal processing: a computer-based approach. Vol. 2. New York: McGraw-Hill, 2006. MatlabCentral #292960

  • Edge effects: It is computed based on the 99% energy of the estimated impulse response.

  • Function: brainstorm3/toolbox/process/functions/process_notch.m

  • External call: [x, FiltSpec, Messages] = process_notch('Compute', x, sfreq, FreqList, Method, bandWidth)
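The same kind of filter can be sketched with SciPy (iirnotch + filtfilt, i.e. a 2nd-order IIR notch applied forward and backward for zero-phase lag; the parameters below are illustrative, not the exact Brainstorm coefficients):

```python
import numpy as np
import scipy.signal

fs = 1000.0
t = np.arange(0, 5.0, 1.0 / fs)
# 10 Hz signal of interest + 60 Hz power line contamination
x = np.sin(2 * np.pi * 10.0 * t) + np.sin(2 * np.pi * 60.0 * t)

# 2nd-order IIR notch at 60 Hz; filtfilt applies it forward and backward,
# cancelling the phase response (at the cost of squaring the magnitude)
b, a = scipy.signal.iirnotch(60.0, Q=30.0, fs=fs)
y = scipy.signal.filtfilt(b, a, x)
```

Away from the edges of the segment, the 60 Hz component is suppressed while the 10 Hz component passes essentially unchanged.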

Filter specifications: Band-stop

  • Description: 4th order Butterworth IIR filter with zero-phase lag (implemented with filtfilt)

  • Reference: FieldTrip: x = ft_preproc_bandstopfilter(x, sfreq, FreqBand, [], 'but', 'twopass')

  • Edge effects: It is computed based on the 99% energy of the estimated impulse response.

  • Function: brainstorm3/toolbox/process/functions/process_bandstop.m

  • External call: x = process_bandstop('Compute', x, sfreq, FreqList, FreqWidth)
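A comparable zero-phase Butterworth band-stop can be sketched with SciPy (second-order sections are used here for numerical stability; the parameters are illustrative and the implementation details differ from the FieldTrip call quoted above):

```python
import numpy as np
import scipy.signal

fs = 1000.0
t = np.arange(0, 5.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10.0 * t) + np.sin(2 * np.pi * 60.0 * t)

# 4th-order Butterworth band-stop around 58-62 Hz, applied forward and
# backward (sosfiltfilt) for zero-phase lag, like FieldTrip's 'twopass'
sos = scipy.signal.butter(4, [58.0, 62.0], btype='bandstop',
                          fs=fs, output='sos')
y = scipy.signal.sosfiltfilt(sos, x)
```

Compared with the notch filter, the stopband here is a 4 Hz-wide segment of the spectrum rather than a single frequency, which is what makes it "more aggressive" and useful when the power line peaks are spread over several frequency bins.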

On the hard drive

The names of the files generated by the process "Power spectrum density" start with the tag timefreq_psd; they share the same structure with all the other files that include a frequency dimension.

To explore the contents of a PSD file created in this tutorial, right-click on it and use the popup menus
File > View file contents or File > Export to Matlab.

  • psd_contents.gif

Structure of the time-frequency files: timefreq_psd_*.mat

  • TF: [Nsignals x Ntime x Nfreq] Stores the spectrum information. Nsignals is the number of channels that were selected with the option "MEG" in the PSD process. Nfreq is the number of frequency bins. There is no time dimension (Ntime = 1).

  • Comment: String used to represent the file in the database explorer.

  • Time: Window of time over which the file was estimated.

  • TimeBands: Defined only when you select the option "Group in time bands".
    Always empty for the PSD files because there is no time dimension.

  • Freqs: [1 x Nfreq] List of frequencies for which the power spectrum was estimated (in Hz).

  • RefRowNames: Only used for connectivity results.

  • RowNames: [Nsignals x 1] Describes the rows of the TF matrix (first dimension). Here it corresponds to the names of the MEG sensors, in the same order as in the .TF field.

  • Measure: Function currently applied to the FFT coefficients {power, none, magnitude, log, other}

  • Method: Function that was used to produce this file {psd, hilbert, morlet, corr, cohere, ...}

  • DataFile: File from which this file was calculated = Parent file in the database explorer.

  • DataType: Type of file from which this file was calculated (file type of .DataFile).

    • 'data' = Recordings
    • 'cluster' = Recordings grouped by clusters of sensors
    • 'results' = Source activations
    • 'scouts' = Time series associated with a region of interest
  • SurfaceFile / GridLoc / GridAtlas / Atlas: Used only when the input file was a source file.

  • Leff: Effective number of averages = Number of input files averaged to produce this file.

  • Options: Most relevant options that were passed to the function bst_timefreq.

  • History: Describes all the operations that were performed with Brainstorm on this file. To get a better view of this piece of information, use the menu File > View file history.

Useful functions

  • in_bst_timefreq(PsdFile): Read a PSD or time-frequency file.

  • in_bst(FileName): Read any Brainstorm file.

  • bst_process('LoadInputFile', FileName, Target): The highest-level function for reading Brainstorm files. "Target" is a string with the list of sensor names or types to load (field RowNames).

  • bst_psd(F, sfreq, WinLength, WinOverlap): Computation of Welch's power spectral density.
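For reference, an equivalent Welch PSD computation can be sketched with SciPy (the window length, overlap and signal below are illustrative, not the bst_psd defaults):

```python
import numpy as np
import scipy.signal

fs = 600.0
t = np.arange(0, 60.0, 1.0 / fs)
rng = np.random.default_rng(0)
# Synthetic channel: 11 Hz oscillation buried in white noise
x = np.sin(2 * np.pi * 11.0 * t) + 0.1 * rng.standard_normal(t.size)

# Welch's method: 4 s windows with 50% overlap, periodograms averaged
freqs, pxx = scipy.signal.welch(x, fs=fs, nperseg=int(4 * fs),
                                noverlap=int(2 * fs))
peak_freq = freqs[np.argmax(pxx)]
```

Averaging the windowed periodograms reduces the variance of the spectral estimate, which is why peaks such as the 11 Hz component (or bad channels) stand out clearly in the PSD figures shown in this tutorial.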

Additional documentation

Forum discussions



Tutorial 11: Bad channels

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

It is common during the acquisition to have a few sensors that are recording values that will not be usable in the data analysis. In MEG, a sensor can be damaged or unstable. In EEG, the quality of the connection between the electrode and the scalp is sometimes too low to record anything interesting.

It is important to identify the sensors with poor signal quality at an early stage of the pre-processing, because the efficiency of the artifact removal depends on it. If you try to remove blink and cardiac artifacts while keeping some bad sensors, it may not work very well and, worse, it may propagate the bad signals to all the channels.

This tutorial will explain the various ways we have to handle the bad channels. Note that the recordings from this auditory experiment do not contain any bad sensors, therefore the entire tutorial is optional. If you are not interested, you can skip it and will still be able to follow the next tutorials.

Advanced

Identifying bad channels

Some bad channels are easy to detect, their signals look either completely off or totally flat compared with the other surrounding sensors. Some others are more difficult to identify. The examples below are taken from other datasets.

  • The power spectrum density (PSD) is usually a good way to spot a few bad channels; this is why we always recommend computing it for all the datasets:

    psd_neck.gif bad_psd.gif

  • Simply looking at the signals traces, some channels may appear generally noisier than the others:

    bad_signal.gif

  • Looking at a 2D sensor topography, if one sensor shows very different values from its neighbors for extended periods of time, you can doubt its quality:

    bad_topo.gif

Advanced

Selecting sensors

  • Double-click on the recordings for run #01 to open the MEG sensors.
  • Right-click on the time series figure > View topography (or press Ctrl+T).

  • Right-click on the topography figure > Channels > Display sensors (or press Ctrl+E).

  • If you can't see anything because the topography figure is too small, you can change the way the figures are automatically arranged. In the top-right corner of the Brainstorm figure, select the menu "Window layout options > Tiled".

  • You can select one channel by clicking on its signal or on the dot representing it in the topography figure. Note that the sensor selection is automatically reported to the other figure.

    select_channel.gif

  • You can select multiple sensors at the same time in the topography figure.
    Right-click on the figure, then hold the mouse button and move the mouse.

    select_multiple.gif

  • Select a few sensors, then right-click on one of the figures and check out the Channels menu:

    select_popup.gif

    • View selected: Show the time series of the selected sensors.

    • Mark selected as bad: Remove sensors from the display and all the further computations.

    • Mark non-selected as bad: Keep only the selected channels.

    • Reset selection: Unselect all the selected sensors.

    • Mark all channels as good: Brings back all the channels to display.

    • Edit good/bad channels: Opens an interface that looks like the channel editor, but with one extra column to edit the status (good or bad) of each channel.

Advanced

Marking bad channels

  • Select a few channels, right-click > Channels > Mark selected as bad (or press the Delete key).

  • The selected channels disappear from the two views. In the time series figure, the signals are not visible anymore, in the topography the corresponding dots disappear and the values of the magnetic fields around the missing sensors get re-interpolated based on what is left.

    select_delete.gif

  • With the time series figure, you can display the signals that have been tagged as bad.
    In the Record tab, select the montage "Bad channels".
    In this view, you cannot select the channels, they are not available anymore.

    bad_montage.gif

  • Right-click on a figure > Channels > Edit good/bad channels.
    This menu opens a window very similar to the Channel Editor window, with additional green and red dots indicating the status of each channel. Click on a dot to switch the channel between good and bad.

    bad_edit.gif

Advanced

From the database explorer

Many options to change the list of bad channels are available from the database explorer.

  • The menus are available when you right-click on a data file (or a link to a raw file). In this case, the selected operation is applied only to the selected file.

    bad_popup_file.gif

  • The same menus are also available for all the folders. In this case, the selected operation is applied recursively to all the data files (and links to raw files) that are found in the folder.

    bad_popup_db.gif

  • With this batching ability of the database explorer, you can quickly tag some bad channels in all the recordings of a subject or for the entire protocol. You can also get a quick overview of all the bad channels in all the files at once with the menu View all bad channels.

    bad_view.gif

  • Restore all the good channels before moving to the next tutorial. For instance, right-click on the protocol folder TutorialIntroduction > Good/bad channels > Mark all channels as good.

Advanced

Epoching and averaging

The list of bad channels is saved separately for each dataset.

At this stage of the analysis, the database contains only links to continuous files. When you import epochs from a continuous file, the list of bad channels will be copied from the raw file to all the imported data files.

Then you will be able to redefine this list for each epoch individually, tagging more channels as bad, or including back the ones that are ok. This way it is possible to exclude from the analysis the channels that are too noisy in a few trials only, for instance because of some movement artifacts.

When averaging, if an epoch contains one bad channel, this bad channel is excluded from the average but all the other channels are kept. If the same channel is good in other trials, it will be considered as good in the average. This means that not all the channels have the same number of trials for calculating the average.

This may cause the different channels of an averaged file to have different signal-to-noise ratios, which may lead to confusing results. However, we decided to implement the average in this way to be able to keep more data in the studies with a low number of trials and a lot of noise.
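This averaging rule can be sketched with toy numbers (bad channels are set to NaN per trial, then averaged per channel; this illustrates the principle, not Brainstorm's code):

```python
import numpy as np

# 3 trials x 2 channels x 4 time samples of fake epoched data
trials = np.ones((3, 2, 4))
trials[1] *= 2.0
trials[2] *= 3.0

# Channel 1 is bad in trial 2 only: exclude it from the average
trials[2, 1, :] = np.nan

average = np.nanmean(trials, axis=0)                   # per-channel average
n_trials = np.sum(~np.isnan(trials[:, :, 0]), axis=0)  # trials per channel
```

Channel 0 is averaged over 3 trials and channel 1 over only 2, so their noise levels differ: this is exactly the per-channel trial count issue described above.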

Advanced

On the hard drive

The list of bad channels is saved for each data file separately, in the field ChannelFlag.
This vector indicates for each channel #i if it is good (ChannelFlag(i)= 1) or bad (ChannelFlag(i)= -1).

Right-click on a link to a continuous file > File > View file contents:

bad_file.gif

For raw data files, this information is duplicated in the sFile structure (F field) in order to be passed easily to the low-level reading functions. If you are planning to modify the list of bad channels manually, you need to change two fields: mat.ChannelFlag and mat.F.channelflag. For imported data, you just need to modify the field mat.ChannelFlag.
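As a sketch of this consistency rule (plain Python dictionaries and arrays standing in for the loaded .mat structures; the actual loading and saving is omitted, and the channel count and bad-channel indices are hypothetical):

```python
import numpy as np

# Stand-ins for the structures loaded from a raw Brainstorm data file:
# 1 = good channel, -1 = bad channel
mat = {
    'ChannelFlag': np.ones(274, dtype=int),
    'F': {'channelflag': np.ones(274, dtype=int)},  # duplicate for raw files
}

bad_channels = [12, 45]   # hypothetical indices of bad sensors

# Raw files: update BOTH copies to keep them consistent;
# imported data files only have mat['ChannelFlag']
mat['ChannelFlag'][bad_channels] = -1
mat['F']['channelflag'][bad_channels] = -1
```

Forgetting to update the duplicate channelflag field in a raw file would leave the database and the low-level reading functions disagreeing about which channels are bad.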



Tutorial 12: Artifact detection

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

The previous tutorials illustrated how to remove noise patterns occurring continuously and at specific frequencies. However, most of the events that contaminate the MEG/EEG recordings are not persistent, span a large frequency range, or overlap with the frequencies of the brain signals of interest. Frequency filters are not appropriate to correct for eye movements, breathing movements, heartbeats or other muscle activity.

To get rid of reproducible artifacts, one popular approach is Signal-Space Projection (SSP). This method is based on the spatial decomposition of the MEG/EEG recordings for a selection of time samples during which the artifact is present. Therefore, we need to identify when each type of artifact occurs in the recordings. This tutorial shows how to automatically detect some well-defined artifacts: blinks and heartbeats.

Observation

Let's start by observing the type of contamination the blinks and heartbeats cause to the MEG recordings.

  • Run #01: Double-click on the link to show the MEG sensors.

  • Configuration: Page of 3 seconds, view in columns, selection of the "CTF LT" sensors (the left-temporal sensors will be a good example to show at the same time the two types of artifacts).

  • EOG: Right-click on the link > EOG > Display time series. Two channels are classified as EOG:

    • VEOG: Vertical electrooculogram (two electrodes placed below and above one eye)

    • HEOG: Horizontal electrooculogram (two electrodes placed on the temples of the subject)

    • On these traces, there is not much happening for most of the recordings except for a few bumps. This subject is sitting very still and not blinking much. We can expect MEG recordings of a very good quality.
  • ECG: Right-click on the link > ECG > Display time series.
    The electrocardiogram was recorded with a bipolar montage of electrodes across the chest. You can recognize the typical shape of the electric activity of the heart (P, QRS and T waves).

  • Find a blink: Scroll through the recordings using the F3 shortcut until you see a large blink.

    • Remember you can change the amplitude scale with many shortcuts (eg. right-click + move).
    • To keep the scale fixed between two pages: Uncheck the button [AS] (auto-scale).

    • For instance, you can observe a nice blink at 20.8s (red cursor in the screen capture below).
    • On the same page, you should be able to observe the contamination due to a few heartbeats, corresponding to the peaks of the ECG signal (eg. 19.8s, shown as a blue selection below).
  • The additional data channels (ECG and EOG) contain precious information that we can use for the automatic detection of the blinks and heartbeats. We strongly recommend that you always record these signals during your own experiments: they help a lot with the data pre-processing.

    observe.gif

Detection: Heartbeats

In the Record tab, select the menu: "Artifacts > Detect heartbeats".

  • It automatically opens the pipeline editor, with the process "Detect heartbeats" selected.
  • Channel name: Name of the channel that is used to perform the detection. Select or type "ECG".

  • Time window: Time range that the algorithm should scan for amplitude peaks. Leave the default values to process the entire file, or check the option [All file].

  • Event name: Name of the event group created for saving the detected events. Enter "cardiac".

    detect_ecg.gif

  • Click on Run. After the process stops, you can see a new event category "cardiac". The approximately 464 heartbeats for 360s of recordings indicate an average heart rate of 77bpm: everything looks normal.

  • You can check a few of them, to make sure the "cardiac" markers really indicate the ECG peaks. Not all peaks need to be detected, but you should have a minimum of 10-20 events marked for removing the artifacts using SSP, as described in the following tutorials.

    detect_ecg_done.gif
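The heart-rate sanity check above is simple arithmetic, sketched here in Python for illustration:

```python
# Average heart rate implied by the detection:
# ~464 detected beats over 360 seconds of recordings.
n_beats, duration_s = 464, 360
bpm = n_beats / duration_s * 60
print(round(bpm))  # 77 bpm, a normal resting heart rate
```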

Now do the same thing for the blinks: Menu "Artifacts > Detect eye blinks".

  • Channel name: VEOG

  • Time window: All file

  • Event name: Blink

    detect_eog.gif

  • Run, then look quickly at the 15 detected blinks (shortcut: Shift+Right arrow).

    detect_eog_done.gif

Remove simultaneous blinks/heartbeats

We will use these event markers as the input to our SSP cleaning method. This technique works well if each artifact is defined precisely and as independently as possible from the other artifacts. This means that we should try to avoid having two different artifacts marked at the same time.

Because the heart beats every second or so, there is a high chance that when the subject blinks there is a heartbeat not too far away in the recordings. We cannot remove all the blinks that are contaminated with a heartbeat, because we would have no data left. But we have a lot of heartbeats, so we can do the opposite: remove the "cardiac" markers that occur during a blink.

In the Record tab, select the menu "Artifacts > Remove simultaneous". Set the options:

  • Remove events named: "cardiac"

  • When too close to events: "blink"

  • Minimum delay between events: 250ms

    detect_simult.gif

After executing this process, the number of "cardiac" events goes from 465 to 456. The deleted heartbeats were all less than 250ms away from a blink.
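The logic of this process can be sketched in a few lines (a conceptual Python illustration, not Brainstorm's actual MATLAB code; event latencies are assumed to be in seconds):

```python
def remove_simultaneous(remove_times, ref_times, min_delay=0.250):
    """Drop every event in remove_times that falls within min_delay
    seconds of any event in ref_times (all latencies in seconds)."""
    return [t for t in remove_times
            if all(abs(t - r) >= min_delay for r in ref_times)]

# Cardiac beats around a blink at t=20.8s: the beat at 20.7s is dropped.
print(remove_simultaneous([19.8, 20.7, 21.6], [20.8]))  # [19.8, 21.6]
```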

Run #02: Running from a script

Let's perform the same detection operations on Run #02, using this time the Process1 box.

  • Close everything with the [X] button at the top-right corner of the Brainstorm window.

  • Select the run AEF #02 in the Process1 box, then select the following processes:

  • Events > Detect heartbeats: Select channel ECG, check "All file", event name "cardiac".

  • Events > Detect eye blinks: Select channel VEOG, check "All file", event name "blink".

  • Events > Remove simultaneous: Remove "cardiac", too close to "blink", delay 250ms.

    detect_script.gif

  • Open the Run#02 recordings (MEG+EOG+ECG) and verify that the detection worked as expected. You should get 472 cardiac events and 19 blink events.

Advanced

Artifacts classification

If the EOG signals are not as clean as here, the detection processes may create more than one category, for instance: blink, blink2, blink3. The algorithm not only detects specific events in a signal, it also classifies them by shape. Two detected events are classified in the same category only if the signals around their markers are sufficiently correlated (> 0.8). At the end of the process, all the categories that contain fewer than 5 events are deleted.

In the good cases, this can provide an automatic classification of different types of artifacts, for instance: blinks, saccades and other eye movements. The tutorial MEG median nerve (CTF) is a good illustration of appropriate classification: blink groups the real blinks, and blink2 contains mostly saccades.

  • detect_classification.gif

In the bad cases, the signal is too noisy and the classification fails. It leads either to many different categories, or to none if all the categories have fewer than 5 events. If you don't get good results with the process "Detect eye blinks", you can try to run a custom detection with the classification disabled.

On the contrary, if you obtain one category that mixes multiple types of artifacts and would like to automatically separate them into different sub-groups, you can try the process "Events > Classify by shape". It is more powerful than the automatic classification from the event detection process because it can run on multiple signals at the same time: first it reduces the number of dimensions with a PCA decomposition, then runs a similar classification procedure.
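The classification logic described above can be sketched as follows (a simplified Python illustration, not Brainstorm's actual implementation; the correlation threshold and minimum category size are the values quoted in the text):

```python
import numpy as np

def classify_by_shape(epochs, corr_thresh=0.8, min_count=5):
    """Group epochs whose waveform correlates above corr_thresh with the
    first member of an existing category; start a new category otherwise.
    Categories with fewer than min_count events are discarded."""
    categories, templates = [], []
    for i, ep in enumerate(epochs):
        for k, tmpl in enumerate(templates):
            if np.corrcoef(ep, tmpl)[0, 1] > corr_thresh:
                categories[k].append(i)
                break
        else:  # no matching category: create a new one
            templates.append(ep)
            categories.append([i])
    return [c for c in categories if len(c) >= min_count]

# Six similar "blinks" and two dissimilar events: one category survives.
t = np.linspace(0, 1, 50)
epochs = [np.sin(2 * np.pi * t)] * 6 + [np.cos(2 * np.pi * t)] * 2
print(classify_by_shape(epochs))  # [[0, 1, 2, 3, 4, 5]]
```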

Advanced

Detection: Custom events

These two processes "Detect heartbeats" and "Detect eye blinks" are in reality shortcuts for a generic process "Detect custom events". This process can be used for detecting any kind of event based on the signal power in a specific frequency band. We are not going to use it here, but you may have to use it if the standard parameters do not work well, or for detecting other types of events.

  • The signal to analyze is read from the continuous file (options "Channel name" and "Time window").
  • Frequency band: The signal is filtered in a frequency band where the artifact is easy to detect. For EOG: 1.5-15Hz ; for ECG: 10-40Hz.

  • Threshold: An event of interest is detected if the absolute value of the filtered signal goes over a given number of times the standard deviation. For EOG: 2xStd, for ECG: 4xStd

  • Minimum duration between two events: If the filtered signal crosses the threshold several times for the same artifact (e.g. muscle activity on an EMG channel), we don't want to trigger several events but just one at the beginning of the activity. This parameter tells the algorithm to keep only the maximum value within the given time window; it also prevents detecting other events immediately after a successful detection. For the ECG, this value is set to 500ms, because it is very unlikely that the heart rate of the subject goes over 120 beats per minute.

  • Ignore the noisy segments: If this option is selected, the detection is not performed on the segments that are much noisier than the rest of the recordings.

  • Enable classification: If this option is selected, the events are classified by shape into different categories, based on a correlation measure. In the end, only the categories that have more than 5 occurrences are kept; all the other successful detections are ignored.

    detect_custom.gif
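The core of such a detection can be sketched as follows (a simplified Python/NumPy illustration of the threshold and minimum-duration logic; the band-pass filtering and noisy-segment rejection steps are omitted, and this sketch keeps the first crossing in each window rather than the maximum):

```python
import numpy as np

def detect_events(signal, sfreq, threshold_std=4.0, min_dur=0.5):
    """Mark an event where |signal| exceeds threshold_std times the
    standard deviation; ignore further crossings for min_dur seconds.
    Returns event latencies in seconds."""
    thresh = threshold_std * np.std(signal)
    events, last = [], -np.inf
    for i in np.where(np.abs(signal) > thresh)[0]:
        if (i - last) / sfreq >= min_dur:
            events.append(i)
            last = i
    return np.array(events) / sfreq

# Three spikes at 1.0s, 1.3s and 4.0s, sampled at 100 Hz: the spike at
# 1.3s falls within 500ms of the previous one and is not marked again.
sig = np.zeros(1000)
sig[[100, 130, 400]] = 10.0
print(detect_events(sig, 100))  # [1. 4.]
```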

Advanced

In case of failure

If the signals are not as clean as in this sample dataset, the automatic detection of the heartbeats and blinks may fail with the standard parameters. You may have to use the process "Detect custom events" and adjust some parameters. For instance:

  • If nothing is detected: decrease the amplitude threshold, or try to adjust the frequency band.
  • If too many events are detected: increase the amplitude threshold or the minimum duration between two events.
  • If too many categories of events are generated, and you end up with very few events: disable the classification.
  • To find the optimal frequency band for an artifact, you can open the recordings and play with the online band-pass filters in the Filter tab. Keep the band that shows the highest amplitude peaks.

If you cannot get your artifacts to be detected automatically, you can browse through the recordings and mark all the artifacts manually, as explained in the tutorial Event markers.

Advanced

Other detection processes

Events > Detect analog trigger

  • See tutorial Stimulation delays.

  • This is used to detect events on any channel (MEG, EEG, STIM, Analog, etc), where the baseline is relatively stable and the events will predictably cross a threshold. This is useful when you want to detect a single time point (simple event) at the start of an event, as in these examples:

    analog_detection_audio.png analog_detection_pd.png analog_detect_saccade.png

Events > Detect custom events

  • See tutorial Artifact detection.

  • This is used to detect events on any channel (MEG, EEG, STIM, Analog, etc) where the baseline is relatively stable and the events will predictably cross a threshold. This is useful when you want to detect a simple event at the peak of an event, as in these examples:

    custom_detect_jump.png custom_detect_muscle.png

Events > Detect events above threshold

  • See tutorial MEG visual: single subject.

  • This is used to detect signal on any channel (MEG, EEG, STIM, Analog, etc) that is above a defined threshold value. This is useful when you want to detect all time points when the signal is above the threshold (extended events), as in these examples:

    threshold_detect_blink.png threshold_detect_event.png

  • The extended event can be converted to a single event (when the rising or falling edge is desired). In the Record tab, select the event to convert, then in the menu Events > Convert to simple event, select Start, Middle, or End to indicate where the marker should be placed.

Events > Detect other artifacts

Events > Detect movement

Synchronize > Transfer events

Artifacts > Detect bad channels: Peak-to-peak

  • With imported data: Reject channels and trials from imported data.

  • With raw data: Create bad events (specific channel or all channels) on the raw data.

  • This process is usually not recommended, as the amplitude of the signal is not always a good marker of the quality of the channel.

Advanced

Additional documentation



Tutorial 13: Artifact cleaning with SSP

Authors: Francois Tadel, Elizabeth Bock, John C Mosher, Sylvain Baillet

As discussed previously, frequency filters are not suited to remove artifacts that are transient or that overlap in the frequency domain with the brain signals of interest. Other approaches exist to correct for these artifacts, based on the spatial signature of the artifacts.

If an event is very reproducible and always occurs at the same location (e.g. eye blinks and heartbeats), the sensors will always record the same values when it occurs. We can identify the topographies corresponding to this artifact (i.e. the spatial distributions of values at one time point) and remove them from the recordings. This spatial decomposition is the basic idea behind two widely used approaches: SSP (Signal-Space Projection) and ICA (Independent Component Analysis).

This introduction tutorial focuses on the SSP approach, as it is a lot simpler and faster but still very efficient for removing blinks and heartbeats from MEG recordings. For cleaning EEG data, ICA is often better indicated; the interface for running ICA decompositions is very similar and is described in an advanced tutorial.

Overview

The general SSP objective is to identify the sensor topographies that are typical of a specific artifact, then to create spatial projectors to remove the contributions of these topographies from the recordings.

  1. We start by identifying many examples of the artifact we are trying to remove. This is what we've been doing in the previous tutorial with the creation of the "cardiac" and "blink" events.
  2. We extract a short time window around each of these event markers and concatenate in time all the small blocks of recordings.
  3. We run a principal component analysis (PCA) on the concatenated artifacts in order to get a decomposition into various spatial components (number of components = number of sensors).
  4. If it works well, we can find in the first few principal components some topographies that are very specific to the type of artifact we are targeting. We select these components to remove.
  5. We compute a linear projector for each spatial component to remove and save them in the database (in the "Link to raw file"). They are not immediately applied to the recordings.
  6. Whenever some recordings are read from this file, the SSP projectors are applied on the fly to remove the artifact contributions. This approach is fast and memory efficient.
  7. Note that these tools are available on continuous files only ("Link to raw file") and cannot be applied to recordings that have already been imported in the database.

    ssp_intro.gif
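The steps above can be sketched numerically (a conceptual Python/NumPy illustration under simplifying assumptions, not Brainstorm's implementation; a pure rank-1 artifact topography is simulated so the first component captures it entirely):

```python
import numpy as np

def compute_ssp(raw, event_times, sfreq, win=(-0.2, 0.2)):
    """Steps 1-3: epoch the recordings around each artifact marker,
    concatenate the blocks in time, and decompose with an SVD (PCA).
    Returns the spatial components U and their normalized singular
    values (the percentages displayed in the component list)."""
    i0, i1 = int(win[0] * sfreq), int(win[1] * sfreq)
    blocks = [raw[:, int(t * sfreq) + i0 : int(t * sfreq) + i1]
              for t in event_times]
    U, S, _ = np.linalg.svd(np.hstack(blocks), full_matrices=False)
    return U, S / S.sum()

def apply_projector(raw, U, n_remove=1):
    """Steps 4-6: build the projector P = I - U_sel @ U_sel.T for the
    selected components and apply it to the recordings on the fly."""
    Usel = U[:, :n_remove]
    return (np.eye(raw.shape[0]) - Usel @ Usel.T) @ raw

# Simulated 4-channel recording contaminated by one fixed topography:
topo = np.array([1.0, 2.0, -1.0, 0.5])
raw = np.outer(topo, np.sin(np.linspace(0, 50, 500)))
U, pct = compute_ssp(raw, [1.0, 2.0, 3.0], sfreq=100)
clean = apply_projector(raw, U, n_remove=1)
print(pct[0] > 0.99, np.max(np.abs(clean)) < 1e-8)  # True True
```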

The order matters

This procedure has to be repeated separately for each artifact type. The order in which you process the artifacts matters, because for removing the second artifact we typically use the recordings cleaned with the first set of SSP projectors. We have to decide which one to process first.

It works best if each artifact is defined precisely and as independently as possible from the other artifacts. If the two artifacts happen simultaneously, the SSP projectors calculated for the blink may contain some of the heartbeat topography and vice versa. When trying to remove the second artifact, we might not be able to clearly isolate it anymore.

Because the heart beats every second or so, there is a high chance that when the subject blinks there is a heartbeat not too far away in the recordings. Therefore a significant number of the blinks will be contaminated with heartbeats. But we usually have a lot of "clean" heartbeats, so we can start by removing those. To correctly isolate these two common artifacts, we recommend the following procedure:

  • Remove the markers "cardiac" that are occurring during a blink (done in the previous tutorial),
  • Compute the cardiac SSP (with no eye movements, because we removed the co-occurring events),

  • Compute the blink SSP (with no heartbeats, because they've already been taken care of).

If you have multiple modalities recorded simultaneously, for example MEG and EEG, you should run this entire procedure twice, once for the EEG only and once for the MEG only. You will always get better results if you process the different types of sensors separately. Same thing when processing Elekta-Neuromag recordings: separately process the magnetometers (MEG MAG) and the gradiometers (MEG GRAD).

SSP: Heartbeats

Double-click on the link to show the MEG sensors for Run #01.
In the Record tab, select the menu: "Artifacts > SSP: Heartbeats".

  • Event name: Name of the event to use to calculate the projectors, enter "cardiac".

  • Sensor types: Type of sensors for which the projection should be calculated ("MEG"). Note that you will always get better results if you process the different types of sensors separately.

  • Compute using existing SSP projectors: You have the option to calculate the projectors from the raw recordings, or from the recordings filtered with the previously computed SSP projectors.
    Unless you have a good reason for not considering the existing projectors, you should select this option. Then if the results are not satisfying, try again with the option disabled.
    For this step it doesn't make any difference, because there are no projectors in the file yet.

    ssp_ecg_process.gif

After the computation is done, a new figure is displayed that lets you select the active projectors.

  • On the left: The projector categories where each row represents the result of an execution of this process (usually one for each sensor type and each artifact).

  • On the right: The spatial components returned by the PCA decomposition. The percentage indicated between brackets is the singular value for each component, normalized for this decomposition (percentage = Si / sum(Si), see technical details at the end of this page).

  • Percentage: More practically, it indicates the amount of signal that was captured by the component during the decomposition. The higher it is, the more the component is representative of the artifact recordings that were used to calculate it. In the good cases, you would typically see one to three components with values that are significantly higher than the others.

  • When a component is selected, it means that it is removed from the recordings. A spatial projector is computed and applied to the recordings on the fly when reading from the continuous file.
  • Default selection: The software selects the first component and leaves the others unselected. This selection is arbitrary and doesn't mean the cleaning is correct, you should always manually review the components that you want to remove.

    ssp_select1.gif

Evaluate the components

The percentage indicated for the first value (9%) is much higher than the following ones (5%, 5%, 4%, 3%...), this could indicate that it targets relatively well the cardiac artifact. Let's investigate this.

  • Click on the first component, then click on the toolbar button [Display component topography]. This shows the spatial distribution of the sensor values for this component. Note that you don't have to select the component (i.e. check the box) to display it. This topography seems to correspond to a strong dipolar activity located relatively far from the sensor array; it matches the type of artifact we expect from the heart activity.



  • ssp_select2.gif

  • The second button "Display component topography [No magnetic interpolation]" produces the same figure but without the reinterpolation of the magnetic fields that is typically applied to the MEG recordings in Brainstorm, it may help understand some difficult cases. This magnetic interpolation will be detailed later in the introduction tutorials.

  • You can display multiple components in the same figure: select them at the same time in the list (holding the Shift/Ctrl/Cmd key of your keyboard) and then click on the button "Display topography". No other strong component looks like it could be related to the heartbeats.

    ssp_multiple.gif

  • The last button in the toolbar [Display component time series] opens a figure that represents the evolution of the contribution of this component over time. The higher the amplitude, the more present the selected topography is in the recordings. Click on it to show component #1, then display the ECG signal at the same time (right-click on the file > ECG > Display time series).

    ssp_cardiac_good.gif

  • We observe that the "SSP1" trace correlates relatively well with the ECG trace, in the sense that we captured most of the ECG peaks with this component. However, the component also seems to capture much more signal than just the heartbeats: many alpha oscillations and some of the ocular activity. The example below shows a blink in the EOG, ECG and SSP component #1.

    ssp_cardiac_bad.gif

  • If you remove this component from the recordings, you can expect to see most of the artifacts related with the cardiac activity to go away, but you will also remove additional signal elements that were not really well identified. The job is done but it causes some unwanted side effects.

  • It is in general possible to refine the SSP decomposition by going back to the selection of "cardiac" markers that we used to compute it. You could look at all the ECG peaks individually and remove the markers located in segments of recordings that are noisier or that contain a lot of alpha activity (~10Hz). You would need to delete this SSP decomposition and run again the same process.
  • Alternatively, or if you don't manage to extract a clean cardiac component with a PCA/SSP decomposition, you could try to run an ICA decomposition instead. You might be able to get better results, but it comes with significant computation and manual exploration times. Note that for some subjects, the cardiac artifact is not very strong and could be simply ignored in the analysis.

Evaluate the correction

The topography of component #1 looks like it represents the heart activity, and its temporal evolution shows peaks where we identified heartbeats. It is therefore a good candidate for removal; we just need to make sure the signals look good after the correction before validating this choice.

  • Show the left-temporal MEG sensors (CTF LT) and select/unselect the first SSP component.
  • Repeat this for different time windows, to make sure that the cardiac peaks in the MEG sensors really disappear when the projector #1 is selected and that the rest is not altered too much.
  • No correction:

    ssp_ecg_off.gif

  • Cardiac component #1 removed:

    ssp_ecg_on.gif

  • In this example we will consider that the current decomposition is good enough.
    Make sure you select the component #1, then click on [Save] to validate the modifications.

  • After this window is closed, you can always open it again from the Record tab with the menu Artifacts > Select active projectors. At this stage of the analysis, you can modify the list of projectors applied to the recordings at any time.

Let's try the same thing with the eye blinks.

  • Select the process "Artifacts > SSP: Eye blinks"

  • Run it on the event type "blink", which indicates the peaks of the VEOG signal.
    Select the option "Compute using existing projectors" (if this step doesn't seem to work correctly, try again without selecting this option).

    ssp_eog_process.gif

  • You now see a new category of projectors. Based on the distribution of values, the first component is most likely a good representation of the artifact we are trying to remove. The second one could be a good candidate as well.

    ssp_select_eog1.gif

  • Select the first three components and display their topographies:

    ssp_eog_topo.gif

  • Component #1: Most likely a blink,

  • Component #2: Probably a saccade (another type of eye movement),

  • Component #3: Not related with the eye movements (maybe related with the alpha activity).
  • As a side note, if you had not selected the option "Compute using existing SSP/ICA projectors", you would have obtained the projectors below, which correspond to the topography of the artifact in the original signals (without considering the cardiac projector). It is normal that the topographies obtained after removing the cardiac peaks are slightly different: they are computed on a different subspace of the signals. The relative singular value is smaller after the cardiac correction, maybe because the recordings we used to compute it already contained some eye movements.

    ssp_eog_topo_no.gif

  • Display the time series for these three components, together with the EOG signals. You have to temporarily uncheck component #1 to be able to display its signal: when it is checked, it is removed from the recordings and therefore corresponds to a flat trace.
    The figure below shows the EOG and SSP values between 318s and 324s. The SSP1 trace matches the blink observed in VEOG and SSP2 matches the saccade observed in HEOG.

    ssp_eog_ts.gif

  • Left-temporal MEG signals when there is no component selected:

    ssp_eog_off.gif

  • With only the component #2 selected (saccade):

    ssp_eog_on1.gif

  • With components #1 and #2 selected (blink + saccade):

    ssp_eog_on2.gif

  • Keep the components #1 and #2 selected and click on [Save] to validate your changes.

Run #02

Reproduce the same operations on Run #02:

  • Close everything with the [X] button at the top-right corner of the Brainstorm window.
  • Open the MEG recordings for run AEF #02 (double-click on the file link).

  • Artifacts > SSP: Heartbeats: Event name "cardiac", sensors "MEG", use existing SSP projectors.
    Select component #1, click on [Save].

    ssp_run02_ecg.gif

  • Artifacts > SSP: Eye blinks: Event name "blink", sensors "MEG", use existing SSP projectors.
    Select component #1, click on [Save].

    ssp_run02_eog.gif

  • Note that in this second session, the representation of the saccade was not as clear as in the first file. The distribution of the percentage values does not show any obvious component other than the blink one, and the topographies are harder to interpret. In general, processing the saccades requires a separate step; we will illustrate this in the next tutorial.

Note for beginners

Everything below is advanced documentation, you can skip it for now.




Advanced

SSP: Generic

The SSP calculations for the heartbeats and the eye blinks are shortcuts to a more generic process "Artifacts > SSP: Generic". You may need this process if the standard parameters do not work or if you want to use this technique to remove other types of artifacts.

  • Time window: What segment of the file you want to consider.

  • Event name: Markers that are used to characterize the artifact. If you don't specify any event name, it will use the entire time window as the input of the PCA decomposition.

  • Event window: Time segment to consider before and after each event marker. We want this time window to be longer than the artifact effect itself, in order to have a large number of time samples representative of a normal brain activity. This helps the PCA decomposition to separate the artifact from the ongoing brain activity.

  • Frequency band: Definition of the band-pass filter that is applied to the recordings before calculating the projector. Usually you would use the same frequency band as we used for the detection, but you may want to try to refine this parameter if the results are not satisfying.

  • Sensor types or names: List of sensor names or types for which the SSP projectors are calculated. You can get better results if you process one sensor type at a time.

  • Compute using existing SSP/ICA projectors: Same as in the heartbeats/blinks processes.

  • Save averaged artifact in the database: If you check this option with an event name selected, the process will save the average of all the artifact epochs (event marker + event window) before and after the application of the first component of the SSP decomposition. This is illustrated in the next section.

  • Method to calculate the projector:

    • PCA: What was described until now: SVD decomposition to extract spatial components.

    • Average: Uses only one spatial component, the average of the time samples at which the selected events occur. This has no effect if there are no events selected.

      ssp_generic.gif

Advanced

Averaged artifact

One efficient way of representing the impact of this artifact correction is to epoch the recordings around the artifacts before and after the correction and compute the average of these epochs.

  • Run the process "SSP: Generic" with:
    • The default blink options: event "blink", [-200,+200]ms, [1.5-15]Hz.
    • The option "Compute using existing SSP/ICA projectors" disabled.

    • The option "Save averaged artifact in the database" selected.

    • The option panel should look like the screen capture in the previous section.
  • Look at the topography of the first component. You can notice that the percentage value is higher than what we got previously, and that the topography looks different from the previous one.

    ssp_generic_topo.gif

  • This difference comes from the fact that this time we did not use the cardiac SSP to compute the blink SSP ("Compute using existing SSP" disabled). This could indicate that there is some form of cross-contamination between the "blink" and "cardiac" events that we defined here. The common signals between the different artifact segments are sometimes due to strong alpha waves (around 10Hz) that are present in most of the recordings. It doesn't matter much; you just have to remember that the computation order matters and that you can try variations of the suggested workflow to better fit your recordings.
  • Otherwise, the difference between this topography and the previous one could be due only to the fact that they represent the artifact in different subspaces (in the first case, one dimension has already been removed). Even if the two artifacts were completely independent (the two removed dimensions orthogonal), the topographies would look slightly different.
  • You should see now two additional files in your database. They are both the average of the 19 blinks identified in the recordings, [-200,+200]ms around the "blink" events. The top row shows the average before the SSP correction, the bottom row the same average but recomputed after removing the first component of the decomposition. The artifact is gone.

    ssp_eog_average.gif

  • Delete this new category, and make sure you get back to the previous settings (first component of both "cardiac" and "blink" selected). Click on [Save] to validate this modification.

    ssp_final_selection.gif

Advanced

Troubleshooting

You have calculated your SSP projectors as indicated here but you don't get any good results. No matter what you do, the topographies don't look like the targeted artifact. You can try the following:

  • Review one by one the events indicating the artifacts, remove the ones that are less clear or that occur close to another artifact.
  • Select or unselect the option "Compute using existing SSP".
  • Change the order in which you compute the projectors.
  • Use the process "SSP: Generic" and modify some parameters:
    • Use a narrower frequency band: especially for the EOG, if the projectors capture some of the alpha oscillations, you can limit the frequency band to [1.5 - 9] Hz.
    • Reduce or increase the time window around the peak of the artifact.
    • Change the method: PCA / Average.
  • If you have multiple acquisition runs, you may try to use all the artifacts from all the runs rather than processing the files one by one. For that, use the Process2 tab instead of Process1. Put the "Link to raw file" of all the runs on both sides, Files A (what is used to compute the SSP) and Files B (where the SSP are applied).

Always look at what this procedure gives you in output. Most of the time, the artifact cleaning will be an iterative process where you will need several experiments to adjust the options and the order of the different steps in order to get optimal results.

Advanced

SSP Theory

The Signal-Space Projection (SSP) is one approach to rejection of external disturbances. Here is a short description of the method by Matti Hämäläinen, from the MNE 2.7 reference manual, section 4.16.

Unlike many other noise-cancellation approaches, SSP does not require additional reference sensors to record the disturbance fields. Instead, SSP relies on the fact that the magnetic field distributions generated by the sources in the brain have spatial distributions sufficiently different from those generated by external noise sources. Furthermore, it is implicitly assumed that the linear space spanned by the significant external noise patterns has a low dimension.

Without loss of generality we can always decompose any n-channel measurement b(t) into its signal and noise components as:

  • b(t) = bs(t) + bn(t)

Further, if we know that bn(t) is well characterized by a few field patterns b1...bm, we can express the disturbance as

  • bn(t) = Ucn(t) + e(t) ,

where the columns of U constitute an orthonormal basis for b1...bm, cn(t) is an m-component column vector, and the error term e(t) is small and does not exhibit any consistent spatial distributions over time, i.e., Ce = E{eeᵀ} = I. Subsequently, we will call the column space of U the noise subspace. The basic idea of SSP is that we can actually find a small basis set b1...bm such that the conditions described above are satisfied. We can now construct the orthogonal complement operator

  • P⊥ = I - UUᵀ

and apply it to b(t) yielding

  • b(t) ≈ P⊥bs(t) ,

since P⊥bn(t) = P⊥Ucn(t) ≈ 0. The projection operator P⊥ is called the signal-space projection operator and generally provides considerable rejection of noise, suppressing external disturbances by a factor of 10 or more. The effectiveness of SSP depends on two factors:

  1. The basis set b1...bm should be able to characterize the disturbance field patterns completely and

  2. The angles between the noise subspace spanned by b1...bm and the signal vectors bs(t) should be as close to π/2 as possible.

If the first requirement is not satisfied, some noise will leak through because P⊥bn(t) ≠ 0. If any of the brain signal vectors bs(t) is close to the noise subspace, not only the noise but also the signal will be attenuated by the application of P⊥ and, consequently, there might be little gain in signal-to-noise ratio.

Since the signal-space projection modifies the signal vectors originating in the brain, it is necessary to apply the projection to the forward solution in the course of inverse computations.
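The projector math above can be illustrated with a minimal NumPy sketch (synthetic data and hypothetical dimensions, not Brainstorm code): it builds P⊥ = I - UUᵀ from a single noise pattern and verifies that it cancels the noise component while leaving orthogonal signals untouched.

```python
import numpy as np

# Hypothetical setup: 5 channels, noise subspace of dimension 1.
rng = np.random.default_rng(0)
n_channels = 5

# u: one normalized spatial pattern of the disturbance (a column of U).
u = rng.standard_normal((n_channels, 1))
u /= np.linalg.norm(u)

# Signal-space projection operator: P = I - U Uᵀ
P = np.eye(n_channels) - u @ u.T

# Any measurement proportional to the noise pattern is cancelled...
b_noise = 3.0 * u[:, 0]
print(np.allclose(P @ b_noise, 0))  # True

# ...while components orthogonal to it pass through unchanged.
b_signal = rng.standard_normal(n_channels)
b_signal -= (u[:, 0] @ b_signal) * u[:, 0]   # orthogonalize against u
print(np.allclose(P @ b_signal, b_signal))   # True
```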

Advanced

SSP Algorithm

The logic of the SSP computation is the following:

  1. Take a small time window around each marker to capture the full effect of the artifact, plus some clean brain signals before and after. The default time window is [-200,+200]ms for eye blinks, and [-40,+40]ms for the heartbeats.
  2. Filter the signals in a frequency band of interest, in which the artifact is the most visible (in practice, we extract a segment long enough so that it can be filtered properly, and cut it after filtering).
  3. Concatenate all these time blocks into a big matrix A = [b1, ..., bm]

  4. Compute the singular value decomposition of this matrix A: [U,S,V] = svd(A, 'econ')

  5. The singular vectors Ui with the highest singular values Si are an orthonormal basis of the artifact subspace that we want to subtract from the recordings. By default, the software selects the vector with the highest singular value. It is then possible to redefine interactively the selected components.

  6. Calculate the projection operator: P⊥i = I - UiUiᵀ

  7. Apply this projection on the MEG or EEG recordings F: F = P⊥iF

  8. The process has to be repeated separately several times for each sensor type and each artifact.

Steps #1 to #5 are done by the processes "Artifact > SSP" in the Record tab: the results, the vectors Ui, are saved in the channel file (field ChannelMat.Projector(i).Components).

Steps #6 and #7 are calculated on the fly when reading a block of recordings from the continuous file: when using the raw viewer, running a process on the continuous file, or importing epochs in the database.

Step #8 is the manual control of the process. Take some time to understand what you are trying to remove and how to do it. Never trust blindly any fully automated artifact cleaning algorithm, always check manually what is removed from the recordings, and do not give up if the first results are not satisfying.
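Steps #3 to #7 above can be sketched in a few lines of NumPy (a toy illustration with simulated data and hypothetical dimensions, not the actual Brainstorm implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_samples = 10, 1000

# Simulated continuous recordings with an artifact of fixed topography.
topo = rng.standard_normal(n_channels)
topo /= np.linalg.norm(topo)
F = 0.1 * rng.standard_normal((n_channels, n_samples))
events = [200, 500, 800]                      # artifact peak samples
for ev in events:
    F[:, ev - 20:ev + 20] += np.outer(topo, np.hanning(40))

# Steps 3-4: concatenate the windows around each event and run the SVD.
A = np.hstack([F[:, ev - 20:ev + 20] for ev in events])
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Step 5: keep the component with the highest singular value.
Ui = U[:, :1]

# Steps 6-7: build the projector and apply it to the recordings.
P = np.eye(n_channels) - Ui @ Ui.T
F_clean = P @ F

# The artifact topography is strongly attenuated in the cleaned data.
print(np.abs(topo @ F_clean).max() < np.abs(topo @ F).max())  # True
```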

Advanced

Extract the time series

It could be useful to save the SSP or ICA time series in a new file for further processing. Here is one solution to get there:

  • First, make sure you do not remove the components you are interested in: open the continuous recordings, Record tab > Artifacts > Select active projectors, unselect the components you want to study, so that they are kept in the imported data.

    ssp_ts_0.gif

  • Import the segments of recordings of interest from the continuous file: select the option Apply SSP/ICA projectors, otherwise the projectors would be discarded from the new channel file in the imported folder.

    ssp_ts_1.gif

  • To review the SSP/ICA time series (optional): open the recordings you just imported, and select the menu Artifacts > Load projector as montages in the Record tab. The projectors are made available in the montage menu.

    ssp_ts_2.gif

  • To create a new file with the SSP/ICA time series in the database: select the file you imported in Process1 and run the process Standardize > Apply montage, with the option Create new folders selected.

    ssp_ts_2.gif

Advanced

On the hard drive

The projectors are saved in the channel file associated with the recordings. This means that they will be shared by all the files that share the same channel file. As a consequence, you cannot share the channel files between acquisition runs if you are planning to use different SSP projectors for different runs.

You can find them in the field ChannelMat.Projector (array of structures):

  • Comment: String representing the projector in the window "Select active projectors".

  • Components: [Nsensors x Ncomponents], each column is one spatial component.

  • CompMask: [1 x Ncomponents], indicates whether each component is selected (0 or 1).

  • Status: 0=Category not selected, 1=Category selected, 2=Projectors already applied to the file.

  • SingVal: [1 x Ncomponents], Singular values of the SVD decomposition for each component.

ssp_storage.gif
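To show how these fields combine, here is a small Python/NumPy sketch (hypothetical values; Brainstorm itself stores and applies these structures in the MATLAB channel file) that rebuilds the projection operator from Components and CompMask:

```python
import numpy as np

# Hypothetical projector entry, mirroring ChannelMat.Projector(i):
# Components [Nsensors x Ncomponents], CompMask [1 x Ncomponents].
n_sensors = 6
rng = np.random.default_rng(2)
comps, _ = np.linalg.qr(rng.standard_normal((n_sensors, 3)))  # orthonormal columns
projector = {
    "Components": comps,              # spatial components from the SVD
    "CompMask": np.array([1, 0, 1]),  # components 1 and 3 selected
    "Status": 1,                      # category selected
}

# Build the projection operator from the selected components only.
U = projector["Components"][:, projector["CompMask"] == 1]
P = np.eye(n_sensors) - U @ U.T

# P removes exactly the selected components: rank drops by their number.
print(int(np.linalg.matrix_rank(P)))  # 4
```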

Advanced

Additional documentation

Articles

  • C. D. Tesche, M. A. Uusitalo, R. J. Ilmoniemi, M. Huotilainen, M. Kajola, and O. Salonen, "Signal-space projections of MEG data characterize both distributed and well-localized neuronal sources," Electroencephalogr Clin Neurophysiol, vol. 95, pp. 189-200, 1995.
  • M. A. Uusitalo and R. J. Ilmoniemi, "Signal-space projection method for separating MEG or EEG into components," Med Biol Eng Comput, vol. 35, pp. 135-40, 1997.

Tutorials and forum discussions



Tutorial 14: Additional bad segments

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

We have already corrected the recordings for the artifacts at fixed frequencies (power lines) and for some standard and reproducible artifacts (heartbeats and blinks). There are many other possible sources of noise that can make the recordings unusable in our analysis. This tutorial introduces how to identify and mark these bad segments.

Manual inspection

It is very important to mark bad segments (noisy sections) of the recordings before running any fancy analysis. It may save you hours of repeated work later, when you discover after processing all your data that you have to redo everything because you have left some huge artifacts in the recordings.

In terms of quality control, there is no automatic method that will give you results as good as a manual screening of the recordings. We recommend you always take a few minutes to scroll through all your files to identify and tag all the noisy segments. Do this full screening after you're done with all the other pre-processing steps (filtering and SSP/ICA cleaning) to remove what has not been corrected with other techniques.

At the beginning, it is not easy to separate what is too noisy from what is acceptable. This is usually an iterative process: on the first attempt you make a guess, average the trials, estimate the sources, and finally realize that some eye movements are left that are masking your effect of interest. You have to delete everything, add some bad segments and try again. On the contrary, if you reject too much data at the beginning, you may not have enough trials to observe your effect correctly. The balance is not easy to find, but you will get good at it quickly. Brainstorm offers tools to do these operations easily, so a few trials and errors are not too dramatic. Just make sure you check the quality of your data at every step of the analysis, so that you don't go too far in the wrong direction.

To review your recordings and check for major artifacts, you can for instance:

  • Display all the sensors in columns, you don't need to see each sensor individually.
  • Use a large time window (page duration = 5-20s).
  • Increase the gain of the sensors to see clearly the abnormally large values.
  • Scroll using the shortcuts corresponding to buttons [<<<] and [>>>]: F3, Shift+F3, F4, etc.

  • Unselect the autoscale option ([AS] button on the right side of the figure) so that the amplitude doesn't change every time you move to the next page, it will make the review easier.

When you identify something that doesn't look good:

  • Click and move your mouse to select the segment you consider as noisy.
  • Mark the segment as bad: Right-click > Reject time segment (or Ctrl+B).

  • The example below shows Run #02 from 206s to 211s:

    bad_select.gif

Automatic detection

We have developed some tools to help with this screening procedure. The process "Artifacts > Detect other artifacts" identifies epochs of time that contain typical artifacts from eye movement, subject movement or muscle contractions. While it is still advised that you visually inspect all of your data, this process can help identify areas that contain artifacts which you may want to mark as bad segments.

Currently, the process runs only on continuous raw links and identifies artifacts in two frequency bands, chosen because of the predictability of artifacts in these bands.

  • 1-7 Hz: Typically related to subject movement, eye movements and dental work (or other metal)

  • 40-240 Hz: Related to muscle noise and some sensor artifacts

  • Note that the alpha band (8-12 Hz) is specifically avoided here since some alpha oscillations can be quite high in amplitude and falsely detected as artifact.

Important notes

  • Before running this detection it is highly recommended that you run the cleaning processes for cardiac and eye blink artifacts.
  • This process is currently being tested. If you find a bug or have other comments related to its performance, please provide comments here or on the user forum.

  • We recommend you use the markers that this process creates as suggestions, not as the actual reality. Do not use this method fully automatically, always review its results.

Recommendations for usage

  • Start by running the process on one run per subject. Scan through the recording and confirm that the detection is performing well.
  • Adjust the threshold as needed, then run the detection on the other runs for that subject.
  • If there are many eye movements, the "1-7 Hz" detection can work well for computing an SSP projector. This is done using the menu "Artifacts > SSP: Generic" as described below. If a suitable projector is found and applied, re-run the artifact detection to find the remaining artifacts that were not removed.

Run #01

We will now apply this process on the first acquisition session:

  • Double-click on the link to show the MEG sensors for Run #01.

  • The process will exclude the segments that are already marked as bad from the detection. If you have marked bad segments at the beginning of this tutorial, delete them all: Select the category "BAD" and press the delete key, or menu Events > Delete group.

  • In the Record tab, select the menu: "Artifacts > Detect other artifacts".

    • Time window: The time window for which the detection will be performed.

    • Sensitivity: 1 is very sensitive, 5 very conservative, 3 works well for a variety of conditions.

    • Frequency band selection: Check the box for which band(s) you will perform the detection.

      bad_detect_process.gif

  • After running the process, event types are created, one for each frequency band. They contain extended events indicating the start and end of the epoch. The time resolution is 1 second and therefore the epoch may, in fact, be a bit longer than the actual artifact. You can manually refine the time definition if you wish and mark some or all events as bad.

    bad_detect_evt1.gif

  • Check quickly the segments that were marked as possible artifacts. They all correspond to some form of irregularities in the recordings but there are very few of them. We will just flag all the segments detected in both categories as bad.
  • To mark these events as bad, select both groups in the events list and use the menu
    "Events > Mark group as bad". Alternatively, you can rename the events and add the tag "bad_" to their name, which has the same effect.

    bad_rename_evt1.gif

  • Unload all the data with the [X] in the toolbar of the Brainstorm window. Save the modifications.

Run #02

Repeat the same operation on the second data file:

  • Double-click on the link to show the MEG sensors for Run #02.

  • Run the menu "Artifacts > Detect other artifacts" with the same parameters as the first file.

    bad_detect_evt2.gif

  • The category "1-7Hz" contains many saccades, maybe enough for computing an SSP projector.
  • If you are not interested in seeing how to remove the saccades with SSP projectors, just mark the two groups as bad: with the menu "Events > Mark group as bad", or renaming them in "bad_1-7Hz" and "bad_40-240Hz". Then go directly to the next tutorial.

Advanced

Saccade SSP

This run #02 is a good example to illustrate how we can use SSP projectors to remove the artifacts caused by eye saccades. You could mark the saccades manually or use the pre-selection available in "1-7Hz".

  • Rename category "1-7Hz" to "saccade" and delete category "40-240Hz".

  • Open the EOG recordings at the same time: right-click on file > EOG > Display time series.

  • Keep only the saccades: Delete all the events that do not show a clear step in the HEOG channel.

    bad_saccades.gif

  • Run the process "Artifacts > SSP: Generic" with the following options:

    ssp_saccades.gif


Note: the event window option will not be used because the events "saccade" are extended events and already include their duration.

  • The first component removes the artifact very well. Keep it selected and click on [Save].

    ssp_saccades_topo.gif

  • Run again the process "Detect other artifacts". There are now fewer events detected in 1-7Hz.

  • Rename the categories to "bad_1-7Hz" and "bad_40-240Hz" to flag them as bad segments.

    bad_rename_evt2.gif

Advanced

Elekta-Neuromag SQUID jumps

MEG signals recorded with Elekta-Neuromag systems frequently contain SQUID jumps. They are easy to spot visually in the recordings: they look like sharp steps followed by a change of baseline value. These jumps are due to the instability of the electronics, which fails to control the state of the SQUIDs during the recording sessions.

These steps cause important issues in the analysis of the signal, both in amplitude and in frequency. They are difficult to detect and remove, especially when some pre-processing with the Elekta software has already been applied. Running MaxFilter/SSS on MEG recordings with a SQUID jump on one sensor propagates the artifact to all the sensors.

The best approach is to remove these jumps from the analysis:

  • By marking them as bad segments manually, if their number is reasonable (in Brainstorm).
    The 1-7Hz artifact detection employed here will usually catch them.

  • By marking the sensors as bad if only a few of them are affected (before running MaxFilter).

  • By computing SSP projectors to remove the jumps (before running MaxFilter/SSS).
    As of today, this approach is possible only if you use MNE or MNE-Python for the early stages of pre-processing. When the free SSS algorithm implemented in MNE-Python is made available in Brainstorm, we will be able to use this approach in Brainstorm as well.

An example before MaxFilter (SQUID jump visible on one sensor only):

  • squid_jumps.png

Examples after MaxFilter (SQUID jump propagated on all the sensors):

  • artifact_jumps_sss.gif

Advanced

Additional documentation



Tutorial 15: Import epochs

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

We can consider that our datasets are clean from any major artifact. We will now proceed to the analysis of the brain signals we recorded in response to the auditory stimulation. There are two major types of processing workflows for MEG/EEG, depending on whether we are dealing with an event-related paradigm or a steady-state/resting-state study.

This tutorial will only focus on the event-related case: series of stimuli are sent to the subject and we have the corresponding triggers marked in the recordings. We will base our analysis on these triggers, import short epochs around each of them and average them. You will find in the advanced tutorials a scenario of MEG resting-state analysis.

Import in database

Until now, we've only been looking at data that was read from continuous files. The raw file viewer provides rapid access to the recordings, but many operations can only be applied to short segments of recordings that have been imported in the database. We will refer to these as "epochs" or "trials".

  • Right-click on Run#01 > Import in database.

    import_popup.gif

  • Set the import options as described below:

    import_options.gif

  • Time window: Time range of interest. We are interested in all the stimulations, so do not change this parameter. The default values always represent the entire file.

  • Split: Useful to import continuous recordings without events, to import successive chunks of the same duration. We do not need this here.

  • Events selection: Check the "Use events" option, and select both "standard" and "deviant".
    The number in parentheses represents the number of occurrences of each event in the selected time window (it changes if you modify the time definition at the top of the window).

  • Epoch time: Time segment that is extracted around each event marker. Set it to [-100,+500]ms.
    This option is disabled for extended events: if you want to enable it, you need to convert the extended events to simple events first.

  • Apply SSP/ICA projectors: Use the active projectors calculated during the previous pre-processing steps. Always check the summary of the projectors that are selected.
    Here there are 2 categories ("cardiac" and "blink") with a total of 3 projectors selected (one in "cardiac" and two in "blink", the blink and the saccade). Keep this option selected.

  • Remove DC Offset: Check this option, select Time range: [-100, -1.7]ms. For each epoch, it will:

    • Compute the average of each channel over the baseline (pre-stimulus interval: [-100,-1.7]ms)
    • Subtract it from the channel at every time instant (full epoch interval: [-100,+500]ms).
    • This option removes the baseline value of each sensor. In MEG, the sensors record variations around a somewhat arbitrary level, therefore this operation is always needed, unless it was already applied during one of the pre-processing steps.
  • Resample recordings: Keep this unchecked.

  • Create a separate folder for each epoch type: Do not check this option.

    • If selected: a new folder is created for each event type ("standard" and "deviant")
    • If not selected: all the epochs are saved in a new folder, the same one for all the events, with the same name as the initial raw file. This is what we want because we have two acquisition runs with different channel files (different head positions and different SSP projectors) to import for the same subject. If we selected this option, the "standard" epochs of both runs would be imported in the same folder and would end up sharing the same channel file, which is not correct.
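The "Remove DC offset" option described above amounts to a simple baseline subtraction, sketched here in NumPy with synthetic data (illustrative only, not Brainstorm code):

```python
import numpy as np

# Hypothetical epoch: 4 channels, 601 samples from -100 ms to +500 ms.
rng = np.random.default_rng(3)
sfreq = 1000.0
time = np.arange(-100, 501) / sfreq            # in seconds
F = rng.standard_normal((4, time.size)) + np.array([[5.], [-3.], [0.], [10.]])

# Baseline mask: the pre-stimulus interval [-100, -1.7] ms.
baseline = (time >= -0.100) & (time <= -0.0017)

# Subtract each channel's baseline mean from the whole epoch.
F_dc = F - F[:, baseline].mean(axis=1, keepdims=True)

# After correction, every channel averages to ~0 over the baseline.
print(np.allclose(F_dc[:, baseline].mean(axis=1), 0))  # True
```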

One new folder appears in Subject01. It contains a channel file and two trial groups.

  • The channel file is copied from the continuous file.
  • To expand a group of trials and show all the files: double-click on it or click on the "+" next to it.
  • The SSP projectors calculated in the previous tutorial were applied on the fly while reading from the continuous file. These epochs are clean from eye blinks and power line contamination.
  • Note that the trials that are overlapping with a BAD segment are tagged as bad in the database explorer (marked with a red dot). All the bad trials are going to be ignored in the rest of the analysis, because they are ignored by the Process1 and Process2 tabs (see next tutorial).

    import_new_folder.gif import_bad.gif

Review the individual trials

After reviewing the continuous file with the "columns" view (channels one below the other) it can be useful to also review the imported trials with the "butterfly" view (all the channels superimposed).

  • Double-click on the first trial for the "deviant" condition.
  • Switch to the "butterfly" display mode: in the Record tab, click on the first button in the toolbar.

    import_review.gif

  • Right-click on the figure > Navigator > Next data file, or use the keyboard shortcut F3.
    This way you can quickly review all the trials to make sure that there is no obvious problem.
    Mac users: The keys "Fx" are obtained by holding the "Fn" key simultaneously.

    import_navigator.gif

To manually tag a trial as bad, you have three options:

  • Right-click on the trial file in the database > Reject trial.

  • Right-click on the figure > Reject trial.

  • Use the keyboard shortcut Ctrl+B.

  • To set all the trials back as good in a group: right-click on the trials group > Accept bad trials.

Raster plot

You can also get an overview of the values of one specific sensor over all the trials at once.

  • Right-click on the group of trials "deviant" > Display as image > MEG.

  • You can change the selected sensor with the drop-down menu in the Display tab, or use the up/down arrows on your keyboard after clicking on the figure.
  • The bad trials are already marked, but if they were not this view could help you identify them easily.

    erpimage.gif

Run #02

Repeat the same operations for the second dataset:

  • Right-click on Run#02 > Import in database.

  • Import events "standard" and "deviant" with the same options.

    import_run02.gif

Advanced

Epoch length

We imported epochs of 600ms (100ms baseline + 500ms post-stimulus) but did not justify this choice.
The length of the epochs you import should be chosen very carefully. If you realize later that your epochs are too short or too long, you will have to restart your analysis from this point.
The epoch length to consider depends on:

The experimental design

  • The minimum duration between two stimuli defines the maximum length you can consider analyzing after the stimulus. You should design your experiment so that it always includes the entire evoked response, plus an additional segment that you can use as a baseline for the following epoch.
  • In this study, the inter-stimulus interval (ISI) is random between 0.7s and 1.7s. The minimum ISI (700ms) is long enough to include the entire auditory evoked response, but not the button press that follows a deviant tone. In some cases (late subject response and short ISI), the following stimulation occurs while the brain is still processing the button press. The baseline of some epochs may contain motor and somatosensory components.
  • For data processing, it is always better to have longer ISI, but it also means increasing the duration of the experiment or decreasing the number of repetitions, which leads to other problems. The trade-off between data quality and recording time in this experiment is acceptable, very few trials are actually contaminated by the motor response to the previous trial. We will ignore this problem in the following tutorials, but you could decide to reject these few trials in your own analysis.
  • Here we consider only a short baseline (100ms) to avoid including too much motor activity.
    We will only study the auditory response, therefore 500ms post-stimulus is enough.

The processing pipeline

You may have to artificially extend the epochs of interest for technical reasons. Most filters cause edge effects, i.e., unreliable segments of data at the beginning and the end of the signal. When applied on short epochs, they might destroy all the data of interest.

To avoid this, you can add a few hundred milliseconds before and after your epoch of interest. It doesn't matter if it overlaps with the previous or the next epoch. After running the operations that require longer signals, you can cut your epochs back to the desired length. Examples:

  • Time-frequency (Morlet wavelets):
    When estimating the power at frequency f Hz, you get incorrect values for at least one period (T=1/f) at the beginning and the end of the signal. For example, at 2Hz you need to discard the first and last 500ms of your time-frequency maps (1/2Hz=0.5s).

  • Low-pass filtering:
    With any filtering operation there will always be a transient effect at the beginning of the filtered data. After filtering, you need to discard the time windows corresponding to these effects. Their duration depends on the order of the filter: this is documented in the tutorial Power spectrum and frequency filters.

  • Hilbert transform:
    Same considerations as for the low-pass filter. This process starts by filtering the signals in various frequency bands, using the same function as the band-pass and low-pass filters.

  • Normalizations:
    The normalization procedures that use a baseline from the same epoch (Z-score, ERS/ERD, baseline correction) usually work better with longer baselines. The longer the clean baseline, the better the estimation of the average and standard deviation over this baseline. If your baseline is too short, the quality of your normalization will be low.
    If you normalize time-frequency maps or filtered source averages, you have to additionally exclude the edge effects from the baseline, and consider an even longer baseline.

In this tutorial, we decided to work with very short epochs (600ms only) so that all the analysis would run on most computers, including personal laptops. For any type of frequency analysis on the recordings, this will be too short. When processing your own recordings, you should increase the size of the epochs beyond the segment that you are actually planning to study.
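The one-period rule of thumb for time-frequency edge effects can be wrapped in a tiny helper (an illustrative sketch, not a Brainstorm function):

```python
# Rule of thumb from the text: at frequency f, discard at least one period
# (T = 1/f) at each end of the signal after a Morlet time-frequency transform.
def edge_padding(f_min_hz):
    """Minimum extra duration (s) to add before AND after the epoch."""
    return 1.0 / f_min_hz

# To analyze down to 2 Hz, pad the epoch by at least 500 ms on each side:
print(edge_padding(2.0))  # 0.5
```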

Advanced

On the hard drive

Right-click on any imported epoch > File > View file contents:

  • import_struct.gif

Structure of the imported epochs: data_*.mat

  • F: [Nchannels x Ntime] recordings time series, in Volts.

  • Std: [Nchannels x Ntime] Standard deviation or standard error, when available (see next tutorial).

  • Comment: String displayed in the database explorer to represent this file.

  • ChannelFlag: [Nchannels x 1] One value per channel, 1 means good, -1 means bad.

  • Time: [1 x Ntime] Time values for each sample recorded in F, in seconds.

  • DataType: Type of information saved in the F matrix.

  • Device: Name of the acquisition system used to record this file.

  • Leff: Effective number of averages. For averaged files, number of trials that were used to compute this file.

  • Events: Time markers available in the file (stimulus triggers or other events)

    • label: Name of the event group.

    • color: [r,g,b] Color used to represent the event group, in Matlab format.

    • epochs: [1 x Nevt] Only ones for imported epochs.

    • times: [1 x Nevt] Time in seconds of each marker in this group (times = samples / sfreq).
      For extended events: [2 x Nevt], first row = start, second row = end.

    • reactTimes: Not used anymore.

    • select: Indicates if the event group should be displayed in the viewer.

    • channels: {1 x Nevt} Cell array of cell-arrays of strings. Each event occurrence can be associated with one or more channels, by setting .channels{iEvt} to a cell-array of channel names.

    • notes: {1 x Nevt} Cell-array of strings: additional comments for each event occurrence

  • History: Operations performed on file since it was imported (menu "View file history").

File history

Right-click on any imported epoch > File > View file history:

  • import_history.gif

List of bad trials

  • There is no field in the file structure that says if the trial is good or bad.
    This information is saved at the level of the folder, in the brainstormstudy.mat file.

  • Right-click on an imported folder > File > Show in file explorer.

    import_folder.gif

  • Load the brainstormstudy.mat file into Matlab, the bad trials are listed in the cell array "BadTrials":

    import_study.gif

Useful functions

  • in_bst_data(DataFile, FieldsList): Read an imported epoch.

  • in_bst(FileName, TimeWindow): Read any Brainstorm data file with the possibility to load only a specific part of the file. "TimeWindow" is a range of time values in seconds: [tStart, tStop].

  • bst_process('LoadInputFile', FileName, Target, TimeWindow): The most high-level function for reading data files. "Target" is a string with the list of sensor names or types to load.

Additional documentation



Tutorial 16: Average response

Authors: Francois Tadel, Elizabeth Bock, Sylvain Baillet

All the epochs we have imported in the previous tutorial are represented by matrices that have the same size (same number of channels, same number of time points), therefore they can be averaged together by experimental condition. The result is interchangeably called "evoked response", "average response", "event-related field" in MEG (ERF) or "event-related potential" in EEG (ERP). It shows the components of the brain signals that are strictly time-locked to the presentation of a stimulus.

  • average_slide.gif

Averaging

We will now compute the average responses for both the "standard" and "deviant" conditions. Two constraints have to be taken into consideration at this stage.

Averaging runs separately: With MEG, it is not recommended to average sensor data across acquisition runs with different head positions (i.e., different "channel files"). If the head of the subject moved between two blocks of recordings, the sensors do not record the same parts of the brain before and after, therefore the runs cannot be compared directly. With EEG, you can generally skip this recommendation.

Number of trials: When computing subject-level averages for experimental conditions with different numbers of trials, you have two options. You can either use the same number of trials for all the conditions and subjects (to make them "more comparable") or use all the available good trials (more samples lead to better estimations of the mean and variance). Here we will go with the second option, using all the trials. See this advanced section for more details.

  • Drag and drop all the "standard" and "deviant" trials for both runs in Process1.

  • In the Process1 box, the number of imported trials (comment in the database explorer, e.g. "40 files") does not match the number of files selected for processing (between brackets, e.g. "[39]"). This difference is due to the bad trials in these folders: the trials tagged with a red dot in the database explorer are ignored by all the processes. The total number of selected files is 457 instead of 479, meaning we have a total of 22 bad trials.

  • Select the process "Average > Average files".
    Select the options: By trial group (folder), Arithmetic average, Keep all the event markers.

    average_folder.gif

  • You get two new files for each acquisition run. The number in parentheses indicates how many good trials were used to compute each average.

    average_files.gif

Process options: Average

Description of all the options of the process: Average > Average files.

  • Everything: Averages all the files selected in Process1 together, creates only one file in output.

  • By subject: Groups the files by subject (ignoring the folders), creates one file per subject.

  • By folder (subject average): Groups by subject and by folder, ignoring the trial groups.
    In the current configuration, it would produce two files, one for each acquisition run.

  • By folder (grand average): Groups by folder, across the subjects. All the files located in folders with the same name would be averaged together, no matter which subject they belong to.

  • By trial group (folder average): Groups by set of trials with the same name, separately for each folder and each subject. Here it creates four groups (two folders x two trial groups).

  • By trial group (subject average): Groups by set of trials with the same name, for each subject. The separation in folders is ignored. Here it would produce two files (deviant and standard).

  • By trial group (grand average): Groups by set of trials with the same name, ignoring the classification by folder or subject.

  • Function: Documented directly in the option panel.

  • Weighted average: When averaging single trials, the number of files is saved in the field Leff of the average file. When re-averaging the averages across acquisition sessions or subjects, this field Leff can be used to weight each file by the number of trials from which it was computed:
    mean(x) = sum(Leff(i) * x(i)) / sum(Leff(i))
    In most cases, this option should be selected when averaging within a subject and disabled when averaging across subjects. It has no impact in the current example (no averages, Leff=1).
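The weighted-average formula above can be illustrated with a short sketch. Note that Brainstorm is MATLAB-based; this is only an illustrative NumPy translation of the formula, not the software's actual implementation, and the sample values are made up.

```python
import numpy as np

def weighted_average(files):
    """Combine per-run averages, weighting each by its effective number
    of trials Leff: mean(x) = sum(Leff(i) * x(i)) / sum(Leff(i)).
    `files` is a list of (data, leff) pairs, where data is a
    [Nchannels x Ntime] array. Illustrative sketch only."""
    num = sum(leff * data for data, leff in files)
    den = sum(leff for _, leff in files)
    return num / den

# Two hypothetical run averages computed from 39 and 40 trials
run1 = (np.full((2, 3), 1.0), 39)
run2 = (np.full((2, 3), 2.0), 40)
avg = weighted_average([run1, run2])   # every entry equals 119/79
```

With Leff=1 for all inputs (as when averaging single trials), this reduces to the plain arithmetic mean.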

  • Keep all the event markers: If this option is selected, all the event markers that were available in all the individual trials are reported to the average file. It can be useful to check the relative position of the artifacts or the subject responses, or quickly detect some unwanted configuration such as a subject who would constantly blink immediately after a visual stimulus.

Visual exploration

The average response contains interesting information about the brain operations that occur shortly after the presentation of the stimulus. We can explore two dimensions: the location of the various brain regions involved in the sensory processing and the precise timing of their activation. Because these two types of information are of equal interest, we typically explore the recordings with two figures at the same time, one that shows all the signals in time and one that shows their spatial distribution at one instant.

  • Open the MEG recordings for the deviant average in Run#01: double-click on the file.

  • In the Record tab: Select the "butterfly" view mode (first button in the toolbar).
  • In the Filter tab: Add a low-pass filter at 40Hz.

  • In the Record tab: Delete the "cardiac" event type; we are not interested in its distribution.

    deviant_ts.gif

  • This figure shows a typical clean evoked response, with a high signal-to-noise ratio. This represents the brain response to a simple auditory stimulation, the large peak around 90ms probably corresponds to the main response in the primary auditory cortex.

  • The green line represents the global field power (GFP), i.e. the standard deviation of all the sensor values at each time point. This measure is sometimes used to identify transient or stable states in ERP/ERF. You can hide it with the display options menu Extra > Show GFP.

  • This is the response to the deviant beeps (clearly higher in pitch), for which the subject is supposed to press a button to indicate that he/she detected the target. These responses are represented by the "button" events, distributed between 350ms and the end of the file (many responses happened after 500ms). Because of the variability in the response times, we can already anticipate that we won't be able to study the motor response properly from this average. To study the activity in the motor area, we would need to epoch the recordings again around the "button" events.
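The GFP definition above (standard deviation across sensors at each time point) is simple enough to sketch. This is a hypothetical NumPy illustration with random data, not Brainstorm code; the sensor and sample counts are arbitrary examples.

```python
import numpy as np

# F: [Nchannels x Ntime] sensor matrix; the GFP is the standard
# deviation across channels at each time sample, giving one value
# per time point. Random data stands in for real recordings.
rng = np.random.default_rng(0)
F = rng.standard_normal((274, 361))   # e.g. 274 sensors, 361 samples

gfp = F.std(axis=0)                   # [Ntime] global field power trace
```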

Add a spatial view:

  • Open a 2D topography for the same file (right-click on the figure > View topography, or Ctrl+T).

  • Review the average as a movie with the keyboard shortcuts (hold the left or right arrow key).

  • At 90ms, we can observe a typical topography for a bilateral auditory response. On both the left and the right sensors, we observe field patterns that suggest dipolar-like activity in the temporal or central regions.

    deviant_topo.gif

  • Close everything with the button [X] in the top-right corner of the Brainstorm window.
    Accept to save the modifications (you deleted the "cardiac" events).

  • Open the "standard" average in the same way and delete the "cardiac" markers.

Repeat the same operations for Run#02:

  • Open the MEG recordings for deviant and standard.
  • Delete the "cardiac" markers in both files.
  • Open a 2D topography and review the recordings.
  • Close everything.

Interpretation

Let's display the two conditions "standard" and "deviant" side-by-side, for Run#01.

  • Right-click on average > MEG > Display time series.

  • Right-click on average > MISC > Display time series (EEG electrodes Cz and Pz)

  • Right-click on average > MEG > 2D Sensor cap

  • In the Filter tab: add a low-pass filter at 40Hz (it makes the figures easier to read).

  • In the Record tab: you can set a common amplitude scale for all the figures with the button [=].

  • Here are the results for the standard (top) and deviant (bottom) beeps:

    average_summary.gif

The legend in blue shows names often used in the EEG ERP literature:

  • P50: 50ms, bilateral auditory response in both conditions.

  • N100: 95ms, bilateral auditory response in both conditions.

  • MMN: 230ms, mismatch negativity in the deviant condition only (detection of an abnormality).

  • P200: 170ms, in both conditions but much stronger in the standard condition.

  • P300: 300ms, deviant condition only (decision making in preparation of the button press).

  • Some seem to have a direct correspondence in MEG (N100), some don't (P300).

Additional quality check with the event markers:

  • The standard average shows two unwanted events between 100ms and 200ms post-stimulus, one "blink" and one "button" response. The trials that contain them should be marked as bad and the average recomputed, because the subject is probably not doing the task correctly.
  • We will not do this here because the SNR is high enough anyway, but remember that this option "Keep all events" from the averaging process provides a good summary of the recordings and can help you identify some bad trials.

Advanced

Averaging bad channels

The bad channels can be defined independently for each trial, therefore we can have different numbers of data points averaged for different electrodes. If a channel A is considered good in NA trials, the corresponding channel in the average file is computed as: sum(channel A over the NA good trials) / NA.

In the average file, a channel is considered good if it is good in at least one trial, and considered as bad if it is bad in all the trials. The entire file is then considered as if it were computed from the maximum number of good trials: Nmax = max(Ni), i=1..Ntrials.

This procedure preserves the maximum amount of data. However, it may cause some unwanted effects across channels: the SNR might be higher for some channels than for others. If you want to avoid this, mark the channels as bad in all the trials, or report all the bad channels to the average file. This can be done easily using the database explorer, see tutorial Bad channels.
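The per-channel rule described above can be sketched as follows. This is an illustrative NumPy version of the rule, not Brainstorm's implementation; the function name and inputs are invented for the example.

```python
import numpy as np

def average_with_bad_channels(trials, good):
    """Average trials channel by channel, skipping bad channels.
    trials: [Ntrials x Nchannels x Ntime] array of recordings.
    good:   [Ntrials x Nchannels] boolean mask (True = good channel).
    Each channel is averaged over its own count of good trials:
    sum(good trials) / N_good. Illustrative sketch only."""
    mask = good[:, :, None]                  # broadcast mask over time
    n_good = good.sum(axis=0)[:, None]       # [Nchannels x 1] counts
    summed = (trials * mask).sum(axis=0)     # sum of good trials only
    avg = np.divide(summed, n_good, out=np.zeros_like(summed),
                    where=n_good > 0)
    # A channel is good in the average if good in at least one trial
    avg_good = good.any(axis=0)
    return avg, avg_good
```

For instance, with a channel that is bad in one of three trials, that channel's average is computed over the two remaining trials, while the other channels use all three, which is exactly the SNR imbalance mentioned above.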

Advanced

Averaging across runs

As said previously, it is usually not recommended to average MEG recordings in sensor space across multiple acquisition runs because the subject might have moved between the sessions. Different head positions were recorded for each run, so we will reconstruct the sources separately for each run to take these movements into account.

However, in the case of event-related studies it makes sense to start our data exploration with an average across runs, just to evaluate the quality of the evoked responses. We have seen in tutorial #4 that the subject barely moved between the two runs, so the error would be minimal.

    channel_multiple.gif

Let's compute an approximate average across runs. We will run a formal average in source space later.

  • To run the same process again with different parameters: File > Reload last pipeline. Select:

  • By trial group (subject average): One average per experimental condition, across acquisition runs

  • Arithmetic average + Standard error: Save the standard error across all the trials in the same file

  • Keep all the event markers: Select this option, we are interested in the button press events.

  • The two averages are saved in the folder "Intra-subject". This is where all the results of processes involving multiple folders, within one subject, will be saved.

    average_stderror.gif

Advanced

Standard error

If you computed the standard deviation or the standard error together with an average, it will be automatically represented in the time series figures.

  • Double-click on one of the AvgStderr files to display the MEG sensors.
    The light-grey area around the sensors represents the maximum standard error around the maximum and minimum values across all the sensors.

  • Delete the useless events (cardiac and saccade).
  • Select two sensors and plot them separately (right-click > Channels > View selected, or "Enter").
    The green and red areas represent, at each time point, the standard error around the signal.

    stderror.gif

Advanced

Number of trials

You should always be careful when comparing averages computed from different numbers of trials. In most cases, you can safely include all the trials in your averages, even in the case of imbalanced designs. However, for very low numbers of trials or when comparing peak amplitudes, having the same number of trials becomes more critical.

Advanced

Selecting equal numbers of trials

If you decided you want to use the same number of trials across all the experimental conditions and/or across all the subjects, you can use a process to select them easily from the database.

  • Drag and drop all the "standard" and "deviant" trials for both runs in Process1.

  • Select the process "Files > Select uniform number of trials".
    Select the options: By trial group (folder) and Uniformly distributed.

    select_process.gif

  • If you click on [Run], it doesn't do anything but highlight the first selected file in the database explorer. This process only performs a file selection; it needs to be followed by another process that uses the selected files to compute something. However, you can see what was done in the process report. The reports are displayed only when an error or a warning was reported, but you can open them manually to check for additional messages: menu File > Report viewer.

    select_report.gif

  • The comment in the report shows the 4 groups of trials that were identified based on the option we selected (group "by trial group and folder"), with the number of good trials per group.
    The process picked 39 trials in each group, uniformly distributed in the list of available trials.
    Example of trial indices selected for Run01/standard: [1, 6, 11, 16, 21, 26, 31, 36, ..., 188, 193]

  • To average these selected trials together, you would just need to add the process "Average > Average files" after this selection process in the pipeline editor.

    average_selected.gif

Process options

Available options in the process: Files > Select uniform number of trials.

  • By folder: Groups by subject and by folder, ignoring the trial groups.
    Here, it would identify two groups, one for each acquisition run: Run01, Run02.

  • By trial group (folder): Groups by set of trials with the same name, separately for each folder and each subject. Here it would identify four groups: Run01/deviant, Run01/standard, Run02/deviant, Run02/standard.

  • By trial group (subject): Groups by set of trials with the same name, for each subject. The separation in folders is ignored. Here it would identify two groups: deviant, standard.

How many trials to select in each group:

  • Number of trials per group: This number of trials must be available in all the groups. If set to 0, the group with the lowest number of good trials is identified and the same number of trials is selected from all the other groups.

How to select trials in a group that contains more than the requested number (Nf files, selecting only Ns):

  • Random selection: Select a random subset of Ns trials. Trial indices: randperm(Nf,Ns)

  • First in the list: Select the first Ns trials. Trial indices: 1:Ns

  • Last in the list: Select the last Ns trials. Trial indices: Nf-Ns+1:Nf

  • Uniformly distributed: Select Ns equally spaced trials. Trial indices: round(linspace(1, Nf, Ns))
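The four selection strategies above can be sketched in one function. The trial-index expressions in the list are MATLAB; this is a hypothetical NumPy translation for illustration (the function name is invented, and note that NumPy's round-half-to-even can differ from MATLAB's round for exact .5 values).

```python
import numpy as np

def select_trials(nf, ns, method, rng=None):
    """Return 1-based indices of the Ns trials selected among Nf
    available, mirroring the four strategies of the process.
    Illustrative sketch, not Brainstorm code."""
    if method == "random":                       # randperm(Nf, Ns)
        rng = rng or np.random.default_rng()
        return rng.permutation(nf)[:ns] + 1
    if method == "first":                        # 1:Ns
        return np.arange(1, ns + 1)
    if method == "last":                         # Nf-Ns+1:Nf
        return np.arange(nf - ns + 1, nf + 1)
    if method == "uniform":                      # round(linspace(1,Nf,Ns))
        return np.round(np.linspace(1, nf, ns)).astype(int)
    raise ValueError(f"unknown method: {method}")

# Example: picking 5 of 20 trials, uniformly distributed
idx = select_trials(20, 5, "uniform")   # indices 1, 6, 10, 15, 20
```

With Nf=193 and Ns=39, the "uniform" strategy yields indices spaced by about 5, consistent with the Run01/standard example above.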

On the hard drive

The average files have the same structure as the individual trials, described in the tutorial Import epochs.

  • average_struct.gif

Differences with the imported epochs

  • F: [Nchannels x Ntime] average recordings across all the trials, in Volts.

  • Std: [Nchannels x Ntime] standard error or standard deviation across all the trials, in Volts.

  • Leff: Effective number of averages = Number of good trials that were used to compute the file
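To make the shapes of these fields concrete, here is a minimal mock of an average-file structure. Real Brainstorm averages are MATLAB .mat structures with many more fields; this Python dict only illustrates the three fields listed above, with arbitrary example dimensions.

```python
import numpy as np

# Hypothetical mock of the average-specific fields (values in Volts).
n_channels, n_time = 274, 361
avg_file = {
    "F":    np.zeros((n_channels, n_time)),  # averaged recordings
    "Std":  np.zeros((n_channels, n_time)),  # std error or deviation
    "Leff": 39,                              # effective number of averages
}

# The shaded band drawn in the time series figures corresponds to
# the signal plus/minus its standard error:
upper = avg_file["F"] + avg_file["Std"]
lower = avg_file["F"] - avg_file["Std"]
```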



Tutorials/AllIntroduction (last edited 2015-07-13 22:50:35 by FrancoisTadel)