FFT analysis

Hello,

I am new to using Brainstorm (and signal processing) and would like to seek advice regarding the calculation of a simple FFT on a 5-minute resting-state recording, in sensor space. The steps I have taken thus far are the following:

• I have loaded the continuous data file into Brainstorm and reviewed it, and after applying the visualization filters, I have performed SSP to correct for EKG and eye artifacts.
• Next, I have imported this file into Brainstorm as epochs of 5 seconds in length (including DC offset for the entire continuous file).
• Following this, under the processes tab, I have entered the 60 x 5-second files and have applied the sinusoid removal (notch).
• Finally, I have reviewed the epoched files individually to remove any “bad” epochs prior to the FFT.
Am I correct in thinking that the next step is to simply enter the epoched files in the Process1 space and under the Frequency option, select the FFT analysis?

Any help/advice is very much appreciated.

Thank you!

Eszter

Hi Eszter,

Overall the procedure is good, but you can improve it a bit. Here are a few comments:

I have loaded the continuous data file into Brainstorm and reviewed it, and after applying the visualization filters, I have performed SSP to correct for EKG and eye artifacts.

The visualization filters are for visualization only; they have no impact on any processing step, including the SSP correction.

Next, I have imported this file into Brainstorm as epochs of 5 seconds in length (including DC offset for the entire continuous file).

Depending on your sampling rate, you can probably work with segments longer than 5s; having fewer files in the database makes them easier to manage.

Following this, under the processes tab, I have entered the 60 x 5-second files and have applied the sinusoid removal (notch).

I would recommend you run the sinusoid removal BEFORE epoching instead. With the CTF .ds and FIF file formats, you can run this process directly on the continuous files (drag and drop the Link to raw file in the Process1 box). Other file formats are not supported yet but will be before the end of the year. If you cannot process your original files directly, try importing the longest possible segments of recordings into the Brainstorm database (this depends directly on the amount of memory available on your computer - with 32GB you can import and process the entire file). For visualization, you can then re-epoch the long imported block into smaller epochs.
This process works much better on long time series, and does not perform any correction on the edges of the signals. Therefore, if you run this on the imported segments, you multiply the number of time samples that are not properly processed. To avoid those edge effects, you should always run this process on the longest possible time segments.
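If it helps to see why, here is a minimal sketch in plain MATLAB (not Brainstorm code; the sampling rate, the filter design and the variable names are made up for illustration) comparing a notch applied to the whole continuous recording vs. to each 5-s epoch separately:

```matlab
% Minimal sketch (Signal Processing Toolbox): edge effects when filtering
% short epochs vs. one long continuous segment.
fs = 600;                                   % hypothetical sampling rate (Hz)
t  = 0:1/fs:300-1/fs;                       % 5 minutes of data
x  = randn(size(t)) + 2*sin(2*pi*60*t);     % background noise + 60 Hz line noise

% Simple 2nd-order band-stop around 60 Hz (Brainstorm's sinusoid removal
% uses its own implementation; this is only to illustrate the edge issue)
w0 = 60/(fs/2);
[b, a] = butter(2, [w0-0.01, w0+0.01], 'stop');

% Filtering the whole file: edge artifacts appear only at the very
% beginning and end of the recording
y_long = filtfilt(b, a, x);

% Filtering sixty 5-s epochs separately: each epoch has its own edges,
% so the number of badly filtered samples is multiplied by 60
epochLen = 5*fs;
y_short  = zeros(size(x));
for k = 1:numel(x)/epochLen
    idx = (k-1)*epochLen + (1:epochLen);
    y_short(idx) = filtfilt(b, a, x(idx));
end
```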

Finally, I have reviewed the epoched files individually to remove any “bad” epochs prior to the FFT.
Am I correct in thinking that the next step is to simply enter the epoched files in the Process1 space and under the Frequency option, select the FFT analysis?

If you do this, it will calculate the FFT for each block of 5s, and then average the power of the FFT across those blocks.
The frequency resolution of the final power spectrum depends on the length of the individual blocks (number of frequency bins ~ number of time samples).
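In case it helps, this is roughly what that computation corresponds to for one channel, written as plain MATLAB (hypothetical sampling rate and variable names, not the actual Brainstorm code):

```matlab
% FFT per 5-s epoch, then average of the power across epochs
fs     = 600;                      % assumed sampling rate (Hz)
nEp    = 60;                       % 60 epochs of 5 s
nT     = 5*fs;                     % time samples per epoch
epochs = randn(nEp, nT);           % stand-in for one channel of real data

nFreq = floor(nT/2) + 1;           % one-sided spectrum
freq  = (0:nFreq-1) * fs/nT;       % bin spacing = 1/(5 s) = 0.2 Hz
power = zeros(1, nFreq);
for k = 1:nEp
    X = fft(epochs(k,:));
    power = power + abs(X(1:nFreq)).^2 / (nT*nEp);   % averaged power
end
```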

Another, simpler and more flexible way to obtain the same thing: run the PSD process on the continuous file. Drag and drop the Link to raw file in the Process1 box, then run the process "Power Spectrum Density (Welch)" with an estimator time window of 5s. An overlap between the different windows increases the number of segments and the stability of the estimator.
The Welch method is a more stable estimator of the power spectrum than the simple FFT. For more information, you can refer, for instance, to the documentation of the pwelch function in MATLAB: http://www.mathworks.com/help/signal/ref/pwelch.html
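As a sketch, the equivalent MATLAB call for one channel of the continuous recording would look like this (the sampling rate is assumed, and the exact window and scaling used by the Brainstorm process may differ):

```matlab
fs    = 600;                        % assumed sampling rate (Hz)
x     = randn(1, 300*fs);           % stand-in for 5 min of one channel
win   = 5*fs;                       % 5-s estimator window
nover = round(0.5*win);             % 50% overlap between windows
[pxx, f] = pwelch(x, hamming(win), nover, win, fs);
plot(f, 10*log10(pxx));             % power spectrum in dB
```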

To skip the bad segments in the calculation of the PSD, before running the process, review the continuous file (5min) and mark the segments as bad.
Select with your mouse the time segments to ignore and right-click > Reject time segment (CTRL+B).
Later, when you import the files into the database, all the imported trials that overlap with a BAD event are going to be marked as bad.

Please let me know if you have further questions.
Cheers,
Francois

Thank you Francois! I will follow your suggestions!

Hi,
I have a question regarding the PSD (Welch). I am not quite sure how to choose a valid window length. The tutorial says: "'179/4000ms' means 179 windows of 4s each (total 716s). With the 50% overlap, it sums up to a little less than 2x the file length (360s)." (https://neuroimage.usc.edu/brainstorm/Tutorials/ArtifactsFilter#On_the_hard_drive). I understand the overlap of 50%, but for the window length, is it always 2x the file length? I have resting-state data.

Thank you
Caroline

Hi Caroline,

4-second windows with 50% overlap means that it will compute the PSD from 0-4s, 2-6s, 4-8s, 6-10s... If you sum up all the blocks of recordings used to estimate the PSD, you obtain:
(floor(FileLength / (WinLength/2)) - 1) * WinLength
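With the numbers from the tutorial, for example:

```matlab
% 360-s file, 4-s windows, 50% overlap (2-s step between windows)
fileLength = 360;
winLength  = 4;
step       = winLength/2;
nWin       = floor((fileLength - winLength)/step) + 1;   % = 179 windows
totalData  = nWin * winLength;                           % = 716 s, ~2x the file
```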

I'm not sure I understand which of the parameters is not clear to you. All the input parameters are described at the beginning of this tutorial:
https://neuroimage.usc.edu/brainstorm/Tutorials/ArtifactsFilter#Evaluation_of_the_noise_level

Can you try to explain better what you don't understand in this description? It may help us improve the documentation.

Francois

I understand what you are saying (with the overlap), but I am new to EEG and I don't understand why the "4-second window" was selected. Why not 2 seconds, for example?

Thank you

Caroline

As the tutorial says:

Estimator length = length of the overlapping time windows for which we calculate the FFT. The number of time samples in the estimator is the same as the number of frequency bins in the output file. Increasing this parameter increases the output frequency resolution (distance between two frequency bins) but degrades the stability of the estimator, as it also decreases the total number of averaged time windows.

If you use 4s windows and obtain a frequency resolution of 0.25Hz in your PSD, then 2s windows would give you a frequency resolution of 0.5Hz (which is maybe still acceptable), and 1s windows would correspond to frequency bins 1Hz apart (which is in most cases too coarse).
Longer windows => more precision in the frequency domain, but fewer windows to average, so a less stable estimate. It is also not good to have too much precision: 0.01Hz frequency bins would be too noisy to be analyzed or compared across subjects.
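To make the trade-off concrete, here is a quick back-of-the-envelope calculation for a 5-min (300-s) recording with 50% overlap (just illustrative numbers):

```matlab
winLengths = [1 2 4 100];                                      % window length (s)
df         = 1 ./ winLengths;                                  % resolution: 1, 0.5, 0.25, 0.01 Hz
nWin       = floor((300 - winLengths) ./ (winLengths/2)) + 1;  % windows averaged: 599, 299, 149, 5
```

With 100-s windows you would average only 5 windows, so each 0.01-Hz bin is estimated from very little data, hence the noisy result.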

I think I am starting to understand, thank you very much!

If a 4s window gives a resolution of 0.25Hz and a 100s window (0.01Hz) is too noisy, then how can we choose between 4s and something less than 100s? Is it a rule of thumb that 4s is a standard, or are there more factors to take into account?

Thank you !

Caroline

It depends on the amount of data you have and the frequency resolution you want to obtain (which depends on the data quality, the sampling rate, the objectives you have with the PSD you compute...)
You have to test it, and see what works best for your own data.

Maybe start by reading some background literature on signal processing, to become more familiar with what an FFT and a periodogram are. If you have no idea what these are, you won't be able to make appropriate choices to compute and interpret them.
And maybe some additional background literature on MEG/EEG signal processing: there are suggested books at the bottom of the tutorial page. Or ask your supervisors or colleagues.