Tutorial 20: Head model

Authors: Francois Tadel, John C Mosher, Richard Leahy, Sylvain Baillet

The following tutorials describe how brain activity can be estimated from the MEG/EEG recordings we have processed so far. This process consists of solving two separate problems: modeling the electromagnetic properties of the head (the head model, or forward model) and inverting this model. This tutorial explains how to compute a head model for the subject of our auditory oddball experiment. Its scope is limited to linear distributed source models (no dipole fitting is available in the software).

Advanced

Why estimate sources?

Reconstructing the activity of the brain from MEG or EEG recordings can be a long and difficult process. We tried to make it as automatic as possible, but you have to understand its advantages and limitations before deciding whether it can bring anything to your study.

The primary objective is to localize the regions of the brain that generated our signals of interest. From the sensor topographies in MEG and EEG, we can get a general idea of where the signal generators are located, but we cannot be more precise than identifying the hemisphere and the lobe. Using source estimation methods, we can expect a spatial resolution in the range of one centimeter, or even a few millimeters in favorable cases.

While localizing the signals, we can also expect some form of deconvolution of our signals. In EEG, the scalp topographies are very smooth and it is common for different effects to be superimposed in the signals recorded by most of the electrodes. Moving to source space may help separate the sources and obtain cleaner, unmixed signals.

Specifically for MEG, working in source space solves many of the problems associated with this modality. The subject's head is not fixed in an MEG system, so it is likely to move during the experiment. Also, the positions of the sensors do not adapt to the shape and size of the head. Therefore, between two acquisition runs, or between subjects with different morphologies, there is a high chance that the same MEG sensor records different parts of the brain. This problem does not exist in EEG, where the electrodes are placed at standard positions on the scalp and follow all the movements of the subject.

Another point that makes MEG signals difficult to review is the variability of the sensors. Every manufacturer uses different types of sensors, which are sensitive to different orientations and distances, and which produce signals with different shapes. Again, this is not a problem in EEG: the only variability between two EEG systems is the level of noise and the ease of use of the cap; the signals will look very similar. Projecting all the data onto the individual brains addresses these last two issues.

However, if your interest is only in the latency of an event, in detecting the lateralization of a known effect, or in blindly training a classifier, you might not be interested in these complicated models that multiply the size of your files by 50. Go for fancy methods only when you think they can bring added value to your analysis, not because everybody around you uses them.

Advanced

The origins of the MEG/EEG signals

To understand how we reconstruct the sources, it is useful to have an idea of the physiological origins of the MEG and EEG signals. The models we use are based on physiological assumptions that are not always valid; understanding them may help you select the appropriate method.

It is assumed that most of the currents we record are related to the postsynaptic activity of the pyramidal neurons of the cerebral cortex. These cells are spatially aligned and oriented perpendicularly to the cortical surface. Millions of postsynaptic potentials in the apical dendrites of neighbouring pyramidal neurons sum up in time and space to form what we can approximate, at a macroscopic level, with a few electric dipoles (green arrows below).

The primary and volume currents generated by these dipoles create a potential distribution and a magnetic field at the surface of the head. We can record them with bipolar montages of electrodes placed on the skin (EEG) or very sensitive superconducting detectors (SQUIDs/MEG).

Advanced

Source space

The source estimation process consists of estimating the positions and activity of a set of electric dipoles, to approximate the brain activity that produced the MEG/EEG data we recorded. Two families of solutions have been explored over the past decades: dipole fitting methods (estimating the positions and amplitudes of a very limited number of dipoles over short time windows) and distributed methods (defining a priori a dense grid of dipoles, then estimating their activity from the recordings).

The single dipole fitting approaches are very efficient in specific cases where we know in advance the number of regions involved and their latencies of activity. But they are difficult to generalize and automate, and not well suited to group analysis. With Brainstorm, we decided to work only with distributed source models, which require less manual tuning to obtain acceptable results.

Our first step of modeling is therefore to define the positions and orientations of the dipoles for which we want to estimate the activity. This set of dipoles is referred to as our source space. By default, we limit our analysis to the cortex envelope, based on the observation that most of the MEG/EEG signal is related to the synchronous activity of assemblies of cortical pyramidal cells. The simple solution we recommend is to use directly the vertices of the cortical surface we imported in the first tutorials (the nodes of the grey mesh in the left image below).

In order to represent all the possible dipole orientations, we define three dipoles for each vertex of the cortex surface, corresponding to three orthogonal directions (X, Y, Z). When importing the anatomy of the subject, we downsampled the cortex surface to 15,000 vertices, which corresponds to a source space of 45,000 dipoles. We will compute a forward model that connects the activity of these 45,000 dipoles with the 275 MEG sensors of this dataset.

This default number of 15,000 vertices is empirical. Over the years, our experience has shown that it represents a good balance between the representation of the brain circumvolutions, the surface sampling, and the amount of data that is generated. Using fewer vertices makes it difficult to preserve the shape of the brain; using more vertices produces more data without improving the spatial resolution of the method, and may lead to memory issues.

Additionally, we can impose orientation constraints on the dipoles, to match the physiological observation that the pyramidal cells are mostly organized perpendicularly to the cortex surface. This has the advantage of limiting the number of dipoles to 15,000 (one per vertex) and making the results much easier to display and process. However, this constraint is often too strong and may distort the reconstruction. This orientation constraint is an option of the inverse model and will be discussed in the following introduction tutorials.
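
In practice, applying the constraint does not require recomputing anything: by linearity, the signal produced at the sensors by a dipole oriented along the surface normal n_i = (n_x, n_y, n_z) at vertex i is simply the weighted sum of the signals produced by the three orthogonal unit dipoles at that vertex. As a sketch, in notation of our own:

    $$ g_i^{constr} = n_x \, g_{i,x} + n_y \, g_{i,y} + n_z \, g_{i,z} $$

where g_{i,x}, g_{i,y}, g_{i,z} are the sensor signals produced by unit dipoles along X, Y and Z at vertex i.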

The spatial constraint of forcing all the dipoles to lie on the cortical surface might also be too restrictive in some cases, because the model is then unable to represent correctly the activity in deeper brain structures or in the cerebellum. Therefore we also offer the option of using the entire brain volume as the source space (the green dots below represent dipole locations in a volume model). This produces results that can be better or worse depending on the data, but that are in all cases much more difficult to review. Volume and mixed source models are discussed in the advanced tutorials about source modeling.

Advanced

Forward problem

The first step of the source reconstruction consists of computing a model that explains how the electric currents and the magnetic fields flow from the electric generators in the brain (the source space) through the different tissues of the head (mostly brain, CSF, skull and skin) to reach the sensors.
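
With a distributed source space, this model is linear and can be summarized by a single gain matrix (also called leadfield matrix). Using the numbers from this tutorial and notation of our own (a sketch, not Brainstorm's internal variable names):

    $$ b(t) = G \, j(t) + n(t), \qquad G \in \mathbb{R}^{275 \times 45000} $$

where b(t) is the vector of the 275 MEG measurements at time t, j(t) the vector of the amplitudes of the 45,000 elementary dipoles, and n(t) the measurement noise. Computing G is the forward problem addressed in this tutorial; estimating j(t) from b(t) is the inverse problem addressed in the following tutorials.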

The MEG forward modeling methods illustrated below are the single sphere and the overlapping spheres models.

Single sphere model

Select the TutorialCTF protocol, close all the figures, and follow these steps:

  1. Right-click on the Right condition and select Compute head model. The Head modeler window will appear.

    [ATTACH] --- [ATTACH]

  2. Set the options for your head model:
    • Source space: Cortex surface.
      The MRI volume option will be introduced in an advanced tutorial: Volume source estimation.

    • Forward modeling method: Single sphere.

    • You can also edit the Comment field of the file that will be created (the string that will represent the head model in the database explorer).

    • Click on Run.

  3. Two other windows appear to help you define the sphere. Estimating the best fitting sphere for a head is not always as easy as it looks, because a human head is not spherical.



    • Read and follow the instructions in the help window.
    • Click on the Scalp button, move and resize the sphere manually, just to see how it works.

    • Click again on Scalp: here we use directly the estimation of the sphere based on the vertices of the Scalp surface (a simple least-squares fit using all the vertices of the surface; see the sketch after this list).

    • For EEG 3-shell sphere models, you only estimate and manipulate the largest sphere (scalp), then use the Edit properties... button in the toolbar to define the relative radii of the two other spheres and their respective conductivities. This will be described in another tutorial.

    • Click on Ok, and wait for a few seconds.

  4. A new file appears just below the channel file: it represents the head model.

    [ATTACH]

    • There is not much you can do with this file, as it is only a matrix that converts the cortical source activity into MEG/EEG recordings, and we do not have any source information yet.
    • You may just check the sphere(s) that were used to compute the head model.
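
The least-squares estimation mentioned in step 3 can be illustrated with a few lines of MATLAB. This is only a sketch of a standard algebraic sphere fit applied to the scalp vertices (the Vertices variable is assumed to be the [Nx3] vertex list of the scalp surface); it is not necessarily the exact function used by Brainstorm:

    % Algebraic least-squares sphere fit (sketch): find the center c and radius r
    % that best satisfy ||v - c||^2 = r^2 over all scalp vertices v.
    % Vertices: [Nx3] matrix of scalp vertex coordinates.
    function [center, radius] = fit_sphere(Vertices)
        A = [2 * Vertices, ones(size(Vertices,1), 1)];   % unknowns: [c; d], with d = r^2 - ||c||^2
        b = sum(Vertices .^ 2, 2);
        p = A \ b;                                       % ordinary least-squares solution
        center = p(1:3);
        radius = sqrt(p(4) + center' * center);
    end

Rewriting ||v - c||^2 = r^2 as 2 v.c + (r^2 - ||c||^2) = ||v||^2 turns the fit into an ordinary linear least-squares problem, which is why a single backslash solve is enough.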

Overlapping spheres model

Let's compute a more advanced forward model. The overlapping spheres method is based on the estimation of a different sphere for each sensor. Instead of using one sphere for the whole head, it estimates a sphere that fits the local shape of the head in the surroundings of each sensor.
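
To give an idea of what this means, here is a conceptual MATLAB sketch: for each sensor, the inner skull vertices are weighted by their proximity to that sensor and a weighted sphere fit is computed. The exponential weighting below is an arbitrary choice for illustration only; it is not the exact algorithm implemented in Brainstorm.

    % Conceptual sketch of the overlapping-spheres idea: one local sphere per sensor.
    % InnerSkull: [Nv x 3] inner skull vertices, SensorLoc: [Ns x 3] sensor positions.
    function Spheres = local_spheres(InnerSkull, SensorLoc)
        nSens = size(SensorLoc, 1);
        Spheres = repmat(struct('center', [], 'radius', []), nSens, 1);
        for k = 1:nSens
            d = sqrt(sum((InnerSkull - SensorLoc(k,:)) .^ 2, 2));  % vertex-to-sensor distances
            w = sqrt(exp(-d / median(d)));                         % arbitrary decay (illustration only)
            A = [2 * InnerSkull, ones(size(InnerSkull,1), 1)] .* w;
            b = sum(InnerSkull .^ 2, 2) .* w;
            p = A \ b;                                             % weighted least-squares sphere fit
            Spheres(k).center = p(1:3);
            Spheres(k).radius = sqrt(p(4) + p(1:3)' * p(1:3));
        end
    end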

  1. Right-click on Right condition and select Compute head model again.

  2. Select the Overlapping spheres method and click on Run.

  3. This algorithm is supposed to use the inner skull surface of the subject, but we usually do not have this information. In this case, a pseudo inner skull surface is reconstructed by dilating the cortex envelope.
  4. Right-click on the new head model > Check spheres. This window shows the spheres that were estimated. You can review them by following the indications written in green at the bottom of the window: use the left/right arrow keys. At each step, the current sensor marker is displayed in red, and the sphere you see is the local estimation of the head shape for that sensor.

  5. Close this window when you have reviewed them all.

    [ATTACH] [ATTACH]

  6. Compute a head model for the Left condition (Overlapping spheres).

Selection of a head model

We now have two head models for our Subject01 / Right condition. Only one of them can be used at a time for source estimation: the default head model is displayed in green in the database explorer, and you can set another one as default by double-clicking on it.

Batching head model computation

In just a few clicks, you can compute the overlapping spheres model for all the conditions or subjects you want in the database: select several folders at once in the database explorer, right-click and select Compute head model. For larger studies, the computation can also be scripted (see the sketch below).
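
Below is a sketch of a call to Brainstorm's head model process from a MATLAB script. The option names and values shown are assumptions and may differ between Brainstorm versions; the safest way to obtain the exact call is to build the pipeline in the Process1 tab and use the pipeline editor's Generate .m script menu.

    % Sketch: compute overlapping-spheres head models for the files selected in sFiles.
    % The option values below are assumptions: regenerate the exact call with the
    % pipeline editor (Generate .m script) for your version of Brainstorm.
    sFiles = bst_process('CallProcess', 'process_headmodel', sFiles, [], ...
        'sourcespace', 1, ...   % 1 = Cortex surface (assumed value)
        'meg',         3);      % 3 = Overlapping spheres (assumed value)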

On the hard drive

The head model is saved as a .mat file in the database folder (headmodel_*.mat). Its most important fields are the gain matrix Gain [Nsensors x 3*Nsources], and the positions (GridLoc, [Nsources x 3]) and orientations (GridOrient, [Nsources x 3]) of the dipoles. The gain matrix is always saved for unconstrained orientations (three dipoles in X, Y, Z per grid point); to get the leadfield for sources constrained to the cortex normals, project each triplet of columns onto the corresponding orientation in GridOrient, as illustrated below.
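
Here is a minimal MATLAB sketch, assuming a head model file named headmodel_surf_os_meg.mat (the actual file name in your database will differ):

    % Load the head model file (the file name is only an example)
    HeadModel = load('headmodel_surf_os_meg.mat');
    % Gain:       [Nsensors x 3*Nsources], three columns (X,Y,Z) per source
    % GridOrient: [Nsources x 3], normal to the cortex at each source location
    [nSensors, nCol] = size(HeadModel.Gain);
    nSources = nCol / 3;
    GainConstrained = zeros(nSensors, nSources);
    for i = 1:nSources
        iCol = 3*(i-1) + (1:3);            % columns of source #i
        GainConstrained(:,i) = HeadModel.Gain(:,iCol) * HeadModel.GridOrient(i,:)';
    end

Brainstorm's own code performs this same projection with the function bst_gain_orient.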

Additional documentation







