Projecting Sources Error

Well… It calculated what you asked for. But I’m not sure what you are expecting from this result.

The PSD estimator is a method for estimating the power spectrum of long recordings. It splits the input signals into windows of a few seconds, calculates the FFT of those windows, then averages the power of those FFTs.
It is not very interesting to run it on an averaged signal (in your case averaged across trials, then averaged across sources).
You should run this on all the epochs and average the result. You can do this by putting all the trials in the Process1 list, then using the PSD option “Output: Average”.

Then maybe you also want to run this on a scout that contains only one source (or very few), or set the scout function to ALL and average the PSD values together (average rows).
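For reference, the windowing-and-averaging estimator described above is essentially Welch's method. A minimal sketch with SciPy, where the sampling rate and the test signal are illustrative assumptions, not values from this thread:

```python
import numpy as np
from scipy.signal import welch

fs = 600.0                          # sampling frequency in Hz (assumed)
t = np.arange(0, 60, 1 / fs)        # 60 s of simulated recording
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Split into ~4 s windows, FFT each window, average the power across windows
freqs, psd = welch(x, fs=fs, nperseg=int(4 * fs))

peak = freqs[np.argmax(psd)]        # strongest spectral component (~10 Hz here)
```

Running this on each epoch and averaging the resulting `psd` arrays mirrors the "Output: Average" option suggested above, rather than computing the PSD of the averaged signal.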


I had extracted the time series from the scout before, then I ran the simulation to see what that region is contributing. So, just out of curiosity, I wanted to check the spectral composition of the signal and how it changes for every scout.
I guess a better way to do that would be to have a virtual electrode for LCMV output? Can that be done?
I read a 2013 post saying that spatial filters had been removed for stability issues, but that option currently appears under sources?


A “virtual electrode” on beamformer results is essentially the same thing as a scout on minimum norm source maps.
I would not recommend using this beamformer process in the “sources” menu now. During the next few months, we are going to re-organize the way the source models are calculated in Brainstorm, to offer the computation of minimum norm and beamforming solutions in the same framework.

Hi Kanad,

I have linked the raw data (AUX) first and have performed the necessary pre-processing steps (removal of linear trend, notch filter, baseline correction).

First, the pre-processing of your continuous file should not include any detrending operation or “baseline correction”. Those two operations have the same function (the first removes a linear trend, the second removes a constant) and are meant to be applied only on imported epochs. It’s very difficult to estimate the effect of removing the average of the entire recording…
If you want to remove slow components in your signals (<0.5Hz for instance) at this very early stage of analysis, use a high-pass filter.
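A minimal sketch of such a high-pass filter with SciPy; the 0.5 Hz cutoff, filter order and sampling rate here are illustrative assumptions, not prescribed values:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 600.0                                   # sampling rate in Hz (assumed)
# 4th-order Butterworth high-pass at 0.5 Hz, in stable SOS form
sos = butter(4, 0.5, btype="highpass", fs=fs, output="sos")

t = np.arange(0, 20, 1 / fs)
drift = 0.1 * t                              # slow drift to be removed
signal = np.sin(2 * np.pi * 10 * t) + drift

clean = sosfiltfilt(sos, signal)             # zero-phase: no delay introduced
```

Using a forward-backward (`sosfiltfilt`) pass keeps the filter zero-phase, which matters when latencies of evoked responses are analyzed later.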

However, whenever I compute SSPs, I don’t get a reasonable topography (with PCA) at all. I always have to go through the ‘generic’ tab and save one averaged component.

Each subject is different and generates different types of artifacts. Brainstorm offers a variety of options, hoping that you will always find one that cleans the artifact correctly.
Make sure that you remove the cardiac events occurring at the same time as eye movements, to prevent the first cleaning step (for the heartbeats) from contaminating the blink correction.
You can also try to uncheck the option “Use existing SSP projectors” or compute them the other way around.
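For intuition, the PCA-based SSP discussed here boils down to extracting the dominant spatial pattern from segments around the artifact events and projecting it out of the data. A toy sketch, with illustrative shapes and simulated data (not Brainstorm's actual implementation):

```python
import numpy as np

def compute_ssp_projector(artifact_data, n_components=1):
    """artifact_data: (n_channels, n_times) segments around the artifact.
    Returns an (n_channels, n_channels) projector removing the top pattern(s)."""
    centered = artifact_data - artifact_data.mean(axis=1, keepdims=True)
    u, _, _ = np.linalg.svd(centered, full_matrices=False)   # PCA via SVD
    topo = u[:, :n_components]                # dominant artifact topography
    return np.eye(artifact_data.shape[0]) - topo @ topo.T

rng = np.random.default_rng(0)
pattern = rng.standard_normal(8)              # fixed artifact topography
artifact = np.outer(pattern, np.sin(np.linspace(0, 20, 500)))
noisy = artifact + 0.01 * rng.standard_normal((8, 500))

proj = compute_ssp_projector(noisy)
cleaned = proj @ noisy                        # artifact pattern projected out
```

The "components" shown in the SSP selection window correspond to the columns of `u`: a good artifact capture shows one strong component with a plausible topography, which is why a weak or diffuse first component is a sign the events were poorly marked.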

Although that doesn’t give me a strong component either.

Image #1 looks good, why are you not satisfied with it?

As a consequence, the source maps are still not free of artifacts (activation in the temporal poles, ventral temporal lobes etc).

What kind of experiment is this? Do you have a fixation cross?
If you don’t, you would have constant eye movements, almost impossible to remove. The only option would be a high-pass filter high enough to remove them.
If you do have a fixation cross, you may still observe other types of eye movements (saccades or slow movements). You would observe them better on horizontal EOG, and may have to mark them manually and calculate a separate SSP projector.
They are usually easy to see, but might be complicated to correct for…

Can the use of LCMV, instead of dSPM, get rid of this problem (is it stable now?)?

A new version of the source estimation framework should be available in the software by the summer, including a LCMV beamformer.
If you have eye movements present in your recordings, the inverse solution might help a bit but will not remove them as magically as the SSP.

Secondly, what criteria for peak-to-peak artifact rejection do you normally use/recommend in the case of evoked responses?

We don’t use it. Reviewing manually the recordings is a lot more reliable than any automatic rejection method.
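For reference, when a peak-to-peak criterion is used, it is typically the max-minus-min amplitude per channel within each epoch, rejecting any epoch where some channel exceeds a threshold. A toy sketch; the threshold and array shapes are assumptions for illustration only:

```python
import numpy as np

def reject_peak_to_peak(epochs, threshold):
    """epochs: (n_epochs, n_channels, n_times). Returns mask of epochs to KEEP."""
    ptp = epochs.max(axis=2) - epochs.min(axis=2)     # per epoch, per channel
    return (ptp < threshold).all(axis=1)              # keep only if all channels pass

rng = np.random.default_rng(0)
epochs = rng.standard_normal((5, 4, 100))
epochs[2, 1, 50] = 50.0                               # inject a jump into epoch 2
keep = reject_peak_to_peak(epochs, threshold=20.0)    # epoch 2 gets rejected
```

This illustrates why a fixed threshold is fragile: the same value flags clean epochs in a noisy subject and misses artifacts in a quiet one, which is part of the reason manual review is recommended above.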


Hi Francois,

Thanks for your reply.

It is an auditory oddball experiment and yes I do have a fixation cross. I reviewed the raw recordings from the V-EOG and H-EOG channels, there are not many saccadic movements so I don’t suspect they are affecting the recordings to a great extent.
I’ll try with a high-pass filter.

Yes, image #1 looks really good, but I have only seen that in one participant, after selecting “save averaged component”.
What I meant is that I am not getting consistent results with the SSPs with the standard procedure described in the tutorial even after having a well defined set of artifacts.

Just to be sure, the batch script analysis pipeline should look as follows:

  • Link raw recordings
  • Apply notch filters (to remove power line contamination)
  • Compute SSPs from EOG, ECG recordings
  • Apply high-pass filters if necessary (2-2.5Hz?)
  • Import epochs, apply pre-stimulus baseline correction
  • Detrend?

Does this seem reasonable?


Hi Kanad,

Apply the high-pass filter at the same time as the notch filter. All the frequency filters should be executed as the first step of the analysis, using the same parameters for all the recordings (including the empty room recordings).
Plus, if you compute the SSP before filtering, you will not be able to extract any good topography.

High-pass filter: 2Hz seems very high. You can probably go down to 0.3-1Hz, depending on the dynamics you are trying to study.

DC correction/detrending are not always necessary if you apply a high-pass filter.


Hi Francois,

Sorry for troubling you again.

I have followed all your suggestions; the localisation has improved, but I haven’t been able to get rid of the activation seen in the ventral temporal regions and temporal poles in its entirety.
I applied a 0.5Hz high-pass filter to start with, but that didn’t improve anything at all. So then I applied a 2.5Hz filter (assuming there are still residual eye movements even after SSP correction), which should have reduced these artifacts significantly, but it turns out that doesn’t work either.
Before averaging any data, I removed trials that look noisy or have sensor jumps etc., so I’m not sure where it’s coming from. In general, the data looks very clean both in the CTF software and with FieldTrip.
I am still unsure where I am going wrong.

Unrelated to above but important, I noticed something last week but forgot to post it here.
Whenever I link the AUX files, the trigger information is read out incorrectly. I have a pseudo-randomised sequence of exactly 480 standards and 120 deviants (600 in total), however this number somehow becomes 636, with an arbitrary combination of standards and deviants. However, whenever the .ds file is linked, it shows the correct number.
Just to be sure, I read the AUX and .ds files with CTF and FieldTrip, and both show exactly 600.
How can I make sure I get the exact number of trials?


Hi Kanad,
I looked through this thread and I suspect you have a noise source that is coming from something other than the eyes. I would be happy to have a look at one of these recordings. If that works for you, please upload the file to somewhere (for example, Dropbox), then send me a link by email where I can download it. You can get my email by clicking on my username on this forum.
Just a few comments/questions:

  1. How does the MEG/MRI coregistration look? Can you attach a screenshot with the scalp surface and the MEG sensors?
  2. Are there any noisy channels that you have or have not marked bad?
  3. What are you using for the noise covariance? Empty-room or task recordings?
  4. What do you mean by ‘the data looks very clean in both CTF software and with FieldTrip’?

As for the incorrect number of markers in the AUX file, this is normal. Because the AUX file is divided into ‘trials’ not associated with the actual task markers, there can sometimes be extra markers detected, as in the case where a marker spans two ‘trials’ (the end of one and the beginning of another). The best practice for detecting the correct number of markers in the AUX file is to (1) convert the file to continuous, then (2) re-detect the markers on the stim channel. This will then give you only the onset of each marker.
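The re-detection step amounts to finding the rising edges of the stim channel, so a marker that spans a ‘trial’ boundary is counted once instead of twice. A sketch in NumPy, outside Brainstorm, with an illustrative stim trace:

```python
import numpy as np

def detect_onsets(stim):
    """stim: 1-D stim channel. Returns the sample index of each marker onset."""
    active = stim > 0
    # Onset = sample where the channel switches from inactive to active
    return np.flatnonzero(active[1:] & ~active[:-1]) + 1

# A marker value held across several samples is still a single rising edge
stim = np.array([0, 0, 4, 4, 4, 0, 0, 8, 8, 0, 4, 0])
onsets = detect_onsets(stim)                 # samples 2, 7 and 10
```

Counting plateaus by their onset is exactly why re-detecting on the continuous stim channel yields 600 markers instead of 636.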

Hi Beth,

I have just emailed you a link to the dataset and clarified the points you raised.
Thanks for the tip on AUX files, I’ll work according to your suggestion.


Hi Kanad,
I had a very quick look at the data and I realized that you do not have the 3rd-order CTF compensation applied to this data. Therefore you won’t get efficient SSP topographies or filtering. I don’t know if you are accounting for this in your processing pipeline. If not, you can do one of the following things:

  1. Open the file in CTF software, apply the compensation and save. Then re-import the raw link and redo the cleaning
  2. Import the raw link in Brainstorm, then apply the compensation as the first step:
    drop the link in Process1 tab -> Run -> Artifacts -> Apply SSP & CTF compensation
    then re-do the cleaning (SSP and filters)
  3. Import the raw link in Brainstorm and check the box ‘Process the entire file at once’ when doing the filtering and computing the SSP

Let me know if this is clear.

Hi Kanad,
Regarding the CTF compensation, this would only be necessary before applying filters. Apparently it is not an issue for computing SSPs.

Also, I had a look at your data. My initial impression is that this subject has dental work - this is likely why you are seeing the artifacts in temporal areas (strongest in the left temporal area, and they seem to be riding a low frequency like breathing). Also, I see a significant number of small 'jumps' in the data over the right temporal area - you should be diligent about marking these as 'bad'. I did have some success removing the artifacts by doing the following:
  • Make a new event called 'start' at the beginning of the file
  • Compute SSP: generic with the 'start' event, time window = [0, 200000]ms, [0.5,40]Hz, PCA
This should give you a nice topography of the artifact.

Then you can mark events on MRT57 (filter [20,200]Hz) and compute SSP

Then you can detect cardiac events on MRT51 and compute SSP

In addition, I marked additional dental/breathing events on MLT57 and computed an SSP

I guess if you mark some channels bad, your topographies may look different than mine. Hope this helps...

Hi Kanad,
Usually we do a 'pre-test' with all subjects before starting. Many people forget they have dental work. It is very efficient to ask them to blink their eyes a few times, open and close their mouth a few times and take a couple deep breaths while you observe the MEG signal. This will give you a good idea about any artifacts.

With regard to the overall/continuous SSP, yes, just one event at the beginning of the continuous file. Then compute the SSP using about 200s of recording. This should give you a topography of the ongoing artifact.

Here is a sample of the jumps I see:

I found 185 of these using MRT57 for the detection channel.




I have a question regarding the area of the vertices.

How can I access the Vertarea variable that is stored within the global variables from a line of code?


Hi Beth,
I have a patient with dental work. However, I can't find any such pre-test. Can you let me know how you detected the 185 events of those sharp jumps?

I'm not sure I understand the question. The pre-test is done at the beginning of a session, before prepping the patient. It can be done with normal spontaneous recording settings. You would see artifacts if the patient has metal.
For SSPs, that is done after a recording, as part of processing. SSPs can be used to find the artifact; typically it will be one of the first projectors. I suggest reading the tutorial on SSPs.
Hope that helps.