Tutorial 20: Head model
Authors: Francois Tadel, John C Mosher, Richard Leahy, Sylvain Baillet
The following tutorials describe how brain activity can be estimated from the MEG/EEG recordings we have processed so far. This process consists of solving two separate problems: modeling the electromagnetic properties of the head (head model or forward model) and inverting this model. This tutorial explains how to compute a head model for the subject of our auditory oddball experiment. Its scope is limited to linear distributed source models (there is no dipole fitting available in the software).
Why estimate sources?
Reconstructing the activity of the brain from MEG or EEG recordings can be a long and difficult process. We tried to make it as automatic as possible, but you should understand its advantages and limitations before deciding whether it can bring anything to your study.
The primary objective is to localize the regions of the brain that generated our signals of interest. From the sensor topographies in MEG and EEG, we can get a general idea of where the signal generators are located, but we cannot be more precise than identifying the hemisphere and the lobe. Using source estimation methods, we can expect a spatial resolution in the range of a centimeter, or even a few millimeters in favorable cases.
While localizing the signals, we can also expect some form of deconvolution. In EEG, the scalp topographies are very smooth and it is common to have different effects superimposed in the signals recorded by most of the electrodes. Moving to the source space may help separate the sources and obtain cleaner, unmixed signals.
Specifically for MEG, working in source space solves many of the problems associated with this modality. The subject's head is not fixed in an MEG system, so it is likely to move during the experiment. Also, the positions of the sensors do not adapt to the shape and size of the head. Therefore, between two acquisition runs, or between subjects with different morphologies, there is a high chance that the same MEG sensor records different parts of the brain. This problem does not exist in EEG, where the electrodes are placed at standard positions and follow the subject's movements.
Another point that makes MEG signals difficult to review is the variability of the sensors. Every manufacturer uses different types of sensors, which are sensitive to different orientations and distances and produce signals with different shapes. Again, this is not a problem in EEG: the only variability between two EEG systems is the level of noise and the ease of use of the cap; the signals will look very similar. Projecting all the data on the individual brains addresses these last two issues.
However, if your interest is only in the latency of an event, in detecting the lateralization of a known effect, or in blindly training a classifier, you might not be interested in these complicated models, which multiply the size of your files by 50. Go for source modeling only when you think it can bring added value to your analysis, not because everybody around you uses it.
The origins of the MEG/EEG signals
To understand how we reconstruct the sources, it is useful to have an idea of the physiological origins of the MEG and EEG signals. The models we use are based on physiological assumptions that are not always valid; understanding them may help you select the appropriate method.
It is assumed that most of the currents we record are related to the postsynaptic activity of the pyramidal neurons in the cerebral cortex. These cells are aligned with each other and oriented perpendicular to the cortex surface. Millions of post-synaptic potentials in the apical dendrites of neighbouring pyramidal neurons sum up in time and space to form what we can approximate, at a macroscopic level, with a few electric dipoles (green arrows in the figure below).
The primary and volume currents generated by these dipoles create a potential distribution and a magnetic field at the surface of the head. We can record them with bipolar montages of electrodes placed on the skin (EEG) or very sensitive superconducting detectors (SQUIDs/MEG).
(Figure credit: Matti Hamalainen, 2007)
Source space
The source estimation process consists of estimating the position and activity of a set of electric dipoles, to approximate the brain activity that produced the MEG/EEG data we recorded. Two families of solutions have been explored over the past decades: the dipole fitting methods (the position and amplitude of a very limited number of dipoles are estimated over short time windows) and the distributed methods (a dense grid of dipoles is defined a priori and their activity is then estimated from the recordings).
The single dipole fitting approaches are very efficient in specific cases where we know in advance the number of regions involved and their latencies of activity. But they are difficult to generalize and automate, and not well suited to group analysis. With Brainstorm, we decided to work only with distributed source models, which require less manual tuning to obtain acceptable results.
Our first modeling step is then to define the position and orientation of the dipoles for which we want to estimate the activity. This set of dipoles is referred to as the source space. By default, we limit our analysis to the cortex envelope, based on the observation that most of the MEG/EEG signal is related to the synchronous activity of assemblies of cortical pyramidal cells. The simple solution we recommend is to directly use the vertices of the cortical surface we imported in the first tutorials (the nodes of the grey mesh in the left image below).
In order to represent all the possible dipole orientations, we define three dipoles for each vertex of the cortex surface, corresponding to three orthogonal directions (X, Y, Z). When importing the anatomy of the subject, we downsampled the cortex surface to 15,000 vertices. This corresponds to a source space of 45,000 dipoles. We will compute a forward model that connects the activity of these 45,000 dipoles with the 275 MEG sensors we have in this dataset.
This default number of 15,000 vertices is empirical. Over the years, our experience has shown that it represents a good balance between the representation of the brain circumvolutions, the sampling of the surface and the amount of data generated. Using fewer vertices makes it difficult to preserve the shape of the brain; using more produces more data without improving the spatial resolution of the method, and may lead to memory issues.
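To make the bookkeeping concrete, here is a minimal NumPy sketch (an illustration, not Brainstorm code) of how the unconstrained source space described above is built from the cortex vertices, and the gain matrix size it implies; the vertex coordinates are placeholders.

```python
import numpy as np

n_vertices = 15000                           # downsampled cortex surface
vertices = np.random.rand(n_vertices, 3)     # placeholder coordinates (in practice,
                                             # read from the imported cortex surface)

# Unconstrained source space: 3 orthogonal dipoles (X, Y, Z) at each vertex
dipole_positions = np.repeat(vertices, 3, axis=0)          # (45000, 3)
dipole_orientations = np.tile(np.eye(3), (n_vertices, 1))  # (45000, 3)

n_sources = dipole_positions.shape[0]        # 45,000 dipoles
n_sensors = 275                              # CTF MEG sensors in this dataset
print("Expected gain matrix size:", (n_sensors, n_sources))   # (275, 45000)
```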
Additionally, we can impose orientation constraints on the dipoles, to match the physiological observation that the pyramidal cells are mostly oriented perpendicular to the cortex surface. This has the advantage of limiting the number of dipoles to 15,000 (one per vertex) and making the results much easier to display and process. However, this constraint is often too strong and distorts the reconstruction. This orientation constraint is an option of the inverse model and will be discussed in the following introduction tutorials.
The spatial constraint of forcing all the dipoles to lie on the cortical surface might also be too restrictive in some cases, because the model is then not able to represent correctly the activity in deeper brain structures or in the cerebellum. Therefore we also offer an option to use the entire brain volume as the source space (the green dots in the figure represent dipole locations in the volume model). This produces results that can be better or worse depending on the data, but that are in all cases much more difficult to review. Volume and mixed head models are discussed in the advanced tutorials about source modeling.
Forward problem
The first step of the source reconstruction consists of computing a model that describes how the electric currents and the magnetic fields flow from the electric generators in the brain (the source space) through the different tissues of the head (mostly brain, CSF, skull and skin), to finally reach the sensors.
Building a model that connects the values we observe outside of the head (MEG/EEG) to the electric activity of the cortical dipoles in the brain is called the forward problem.
The model we obtain after solving this problem is called head model in Brainstorm, but can also be referred to as forward model, leadfield matrix or gain matrix.
In this tutorial we will use the default source space: the low-resolution cortex surface with 15,000 vertices, as the support of 45,000 dipoles. We will use the terms dipole and source interchangeably.
What we expect to get at the end of this process is a matrix of size [Nsensors x Nsources] (here 275 x 45,000; see the sketch at the end of this section).
Available methods for MEG forward modeling:
Single sphere: The head is considered to be a homogeneous sphere.
Overlapping spheres: Refining the previous model by fitting one local sphere for each sensor.
OpenMEEG BEM: Symmetric Boundary Element Method from the open-source software OpenMEEG. Described in an advanced tutorial: BEM head model.
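Whatever method computes it, the resulting head model is linear: multiplying the gain matrix by a set of dipole amplitudes predicts the corresponding sensor recordings. Below is a minimal sketch of this relation, with random placeholders standing in for a real gain matrix and real source activity; estimating the sources from the recordings (the inverse problem) is the topic of the next tutorials.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources, n_times = 275, 45000, 100

# Placeholder gain matrix: in practice it is produced by the forward solver
# (single sphere, overlapping spheres or BEM), not drawn at random.
G = rng.standard_normal((n_sensors, n_sources)) * 1e-12

# Placeholder source activity: one amplitude per dipole and per time sample.
# Here a single dipole oscillates at 10 Hz (assuming a 600 Hz sampling rate).
J = np.zeros((n_sources, n_times))
J[1234, :] = 10e-9 * np.sin(2 * np.pi * 10 * np.arange(n_times) / 600.0)

# Forward problem: predicted sensor recordings for this source configuration
B = G @ J                                    # shape (n_sensors, n_times)
```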
Single sphere model
Select the TutorialCTF protocol, close all the figures, and follow these steps:
Right-click on the Right condition and select Compute head model. The Head modeler window will appear.
Set the options for your head model:
Source space: Cortex surface.
The MRI volume option will be introduced in an advanced tutorial: Volume source estimation.
Forward modeling method: Single sphere.
You can also edit the Comment field of the file that will be created (the string that will represent the head model in the database explorer).
Click on Run.
Two other windows appear to help you define the sphere. Estimating the best fitting sphere for a head is not always as easy as it looks, because a human head is usually not spherical.
- Read and follow the instructions in the help window.
Click on the Scalp button, move and resize the sphere manually, just to see how it works.
Click again on Scalp: here we directly use the sphere estimated from the vertices of the Scalp surface (a simple least-squares fit using all the vertices of the surface; see the sketch at the end of this section).
For EEG 3-shell sphere models, you just estimate and manipulate the largest sphere (scalp), and then use the Edit properties... button in the toolbar to define the relative radii of the two other spheres and their respective conductivities. This will be described in another tutorial.
Click on Ok, and wait for a few seconds.
A new file appears just below the channel file; it represents the head model.
- There is not much you can do with this file, as it is only a matrix that converts the cortical sources into MEG/EEG recordings, and we do not have any source information yet.
- You may just check the sphere(s) that were used to compute the head model.
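As an aside, the least-squares sphere fit mentioned in the steps above can be solved in closed form: expanding |p - c|² = r² gives an equation that is linear in the center c and in (r² - |c|²). Here is a minimal sketch of such an algebraic fit on synthetic scalp points; it illustrates the idea, not necessarily Brainstorm's exact implementation.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: |p - c|^2 = r^2 for all points p.

    Expanding gives 2*p.c + (r^2 - |c|^2) = |p|^2, which is linear in the
    unknowns [c, r^2 - |c|^2] and is solved with ordinary least squares."""
    A = np.hstack([2 * points, np.ones((points.shape[0], 1))])
    b = np.sum(points ** 2, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    radius = np.sqrt(x[3] + center @ center)
    return center, radius

# Synthetic "scalp" vertices scattered around a known sphere (meters)
rng = np.random.default_rng(1)
true_center, true_radius = np.array([0.0, 0.0, 0.04]), 0.09
d = rng.standard_normal((5000, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
scalp_vertices = true_center + true_radius * d + 1e-3 * rng.standard_normal((5000, 3))

center, radius = fit_sphere(scalp_vertices)
print(center, radius)    # close to (0, 0, 0.04) and 0.09
```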
Overlapping spheres model
Let's compute a more advanced forward model. The overlapping spheres method is based on the estimation of a different sphere for each sensor. Instead of using only one sphere for the whole head, it estimates a sphere that locally fits the shape of the head in the neighborhood of each sensor.
Right-click on Right condition and select Compute head model again.
Select the Overlapping spheres method and click on Run.
- This algorithm is supposed to use the inner skull surface from the subject, but we usually do not have this information. In this case, a pseudo-innerskull is reconstructed using a dilated version of the cortex envelope.
Right-click on the new head model > Check spheres. This window shows the spheres that were estimated. You can check them by following the indications written in green at the bottom of the window: use left/right arrows. At each step, the current sensor marker is displayed in red, and the sphere you see is its local estimation of the head shape.
- Compute a head model for the Left condition (Overlapping spheres).
Selection of a head model
We now have two head models in our Subject01 / Right condition.
- You can have several head models computed for the same dataset, but it is not recommended as it might be difficult afterwards to know which one was used to compute the sources.
If you want to keep them anyway, you have to indicate which one is the default. You do that by double-clicking on one of them (or right-click > Set as default head model); it is then displayed in green. The head model displayed in green is the one that will be used for the following computation steps.
- For MEG, when it works properly, the overlapping spheres model usually gives better results than the single sphere one. In this particular case, it produces more focal results, so we are going to use it for the next steps.
- For EEG, always prefer the "OpenMEEG BEM" model.
Now to make things clearer: delete the Single sphere head model, and keep the Overlapping spheres.
Batching head model computation
In two clicks, you can run the computation of the overlapping spheres model for all the conditions or subjects you want in the database.
The Compute head model menu is available in popup menus in the tree at all the levels (protocol, subject, condition). It is then applied recursively to all the subjects and conditions contained in the node(s) you selected.
Example: If you want to compute it for all the subjects and all the conditions, select the Compute head model menu from the protocol node TutorialCTF. For all the conditions of Subject01, run it from the Subject01 popup menu. Etc.
If you only want to compute it for some subjects of the protocol, select them all while holding the Ctrl key, right-click on one of them, and select the Compute head model menu.
To process all the subjects for one condition, switch to the Functional data (sorted by conditions) view of the database.
On the hard drive
The head model file stores the unconstrained gain matrix, with three consecutive columns (X, Y, Z orientations) for each source point of the cortex surface. To obtain the leadfield with the orientation constraint, each triplet of columns has to be projected onto the corresponding cortex normal.
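Following the usual Brainstorm conventions, the head model file contains the unconstrained gain matrix (field Gain, size [Nsensors x 3*Nsources]) and the dipole orientations (field GridOrient, the cortex normals). The sketch below assumes these field names and the consecutive X, Y, Z column ordering; Brainstorm provides its own routine for this operation, so the code is only illustrative.

```python
import numpy as np

def constrain_gain(gain_xyz, orientations):
    """Collapse an unconstrained gain matrix [Nsensors x 3*Nsources] into a
    constrained one [Nsensors x Nsources], by projecting each (X, Y, Z)
    triplet of columns onto the dipole orientation (cortex normal)."""
    n_sensors, n_cols = gain_xyz.shape
    n_sources = n_cols // 3
    g = gain_xyz.reshape(n_sensors, n_sources, 3)    # assumes consecutive X, Y, Z columns
    return np.einsum('ijk,jk->ij', g, orientations)  # weighted sum over the 3 orientations

# Hypothetical example with the dimensions used in this tutorial
n_sensors, n_vertices = 275, 15000
gain_unconstrained = np.random.randn(n_sensors, 3 * n_vertices)  # e.g. the 'Gain' field
normals = np.random.randn(n_vertices, 3)                         # e.g. the 'GridOrient' field
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

gain_constrained = constrain_gain(gain_unconstrained, normals)   # (275, 15000)
```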
Additional documentation
Tutorial: BEM with OpenMEEG
Tutorial: Volume source estimation
Forum: Sensor modeling: http://neuroimage.usc.edu/forums/showthread.php?1295
Forum: Gain matrix: http://neuroimage.usc.edu/forums/showthread.php?918
Forum: EEG reference: http://neuroimage.usc.edu/forums/showthread.php?1525#post6718
Forum: EEG and default anatomy: http://neuroimage.usc.edu/forums/showthread.php?1774
Forum: Mixed head models indices: http://neuroimage.usc.edu/forums/showthread.php?1878