SPM analysis

Hi,

Once I have the source currents trial-wise, how can I use SPM to isolate significant activation between conditions?
I also want to try using FreeSurfer-created surfaces, but when I run freesurfer2bstorm.m it asks for read_surf, which I fail to find. Where can it be found?

Thank you,
shriks

Hello,

Once I have the source currents trial-wise, how can I use SPM to isolate significant activation between conditions?

If this is an SPM for one given subject, you need to call the permtest.m routine, which sits in the statistics folder of the Toolbox. We are currently designing the graphical user interface for easy computation of SPMs, but for now you need to write your own Matlab script that calls the permtest.m routine. See the file header for more help, and also http://neuroimage.usc.edu/brainstorm/TutorialBasicStats.html
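
As a minimal sketch, such a script could contain a call of the following form (the argument values are illustrative only and must be checked against the permtest.m header; X is assumed to be your array of data trials organized as trials x source points x time):

% X: data trials, trials x source points x time (assumed organization)
% argument values are illustrative only; see the permtest.m header before running
[pv, S0, NP, PS, P] = permtest(X, -20, {'ttest' []}, -1, [], [], 1, 2);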

I also want to try using FreeSurfer-created surfaces, but when I run freesurfer2bstorm.m it asks for read_surf, which I fail to find. Where can it be found?

We need to check this and we will let you know whether it is missing from the distribution.

Cheers,

Dear Shriks,

The read_surf.m file is available with the FreeSurfer software, inside a folder called ‘matlab’. To download FreeSurfer, visit: http://surfer.nmr.mgh.harvard.edu/fswiki

Because of license issues, we decided not to include this file with our distribution. Apologies for the inconvenience.
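
Once FreeSurfer is installed, you can make read_surf.m visible to Matlab/Brainstorm by adding that folder to your path, for example (assuming the standard FREESURFER_HOME environment variable is set on your system):

addpath(fullfile(getenv('FREESURFER_HOME'), 'matlab'));   % folder that contains read_surf.m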

Best,
Dimitrios Pantazis

Hi Sylvain,

Thanks a lot for your help thus far.

I apologise for not making myself clear. Once the SPM is done between conditions, I intend to continue with a between-subjects analysis. Can I use the same scripts?
When estimating a noise-normalized minimum norm solution, can the baseline noise be estimated over multiple recording sessions of a subject? Keep in mind that I am calculating trial-wise minimum norm solutions. Also, would this be a good estimate of the noise? Since the recording sessions last ~4 hrs, the baseline activity would vary considerably, so what would be an optimal method for noise estimation in this case?

Kindly advise.
Thank you.

shriks

Hello there,

I’m not sure I understand what you mean by ‘between subjects’. What would your statistical samples be in that case (individual data trials)? You may run your analysis subject by subject, using the individual data trials as samples. This should work out just fine.

As for the choice of a baseline, it really depends on your paradigm and on the type of analysis and hypothesis you want to test. If this is just a noise normalization approach, and given that your recordings run over a long session, I’d indeed strongly suggest you take a baseline that follows the time course of your experiment (e.g., something like the pre-stimulus interval before each individual trial, if this is relevant to your paradigm).
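
As a rough sketch of that idea (the variable names and the length of the pre-stimulus window are assumptions to adapt to your own data), the noise standard deviation could be estimated from the pre-stimulus samples of each trial and pooled across trials:

% F: data trials (sensor- or source-level), trials x channels/sources x time (assumed)
nBaseline = 100;                              % assumed number of pre-stimulus samples per trial
baseline  = F(:, :, 1:nBaseline);             % pre-stimulus segment of every trial
noiseStd  = squeeze(std(baseline, 0, 3));     % per-trial, per-channel std over the baseline
pooledStd = sqrt(mean(noiseStd.^2, 1));       % pool the variances across all trials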

Hope this helps.

Hi,

Can you please point me to the m-file that estimates the baseline noise variance used for noise normalization?

Regards,
Shriks

Hello,

Please use bst_baseline_correction.m with the option ‘zscore’; see the help header of this script for detailed information, and email me if you run into trouble.
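
In case it is useful, the ‘zscore’ option essentially standardizes each source time series by its own baseline statistics. As an illustration of the principle only (this is not the calling syntax of bst_baseline_correction.m, and the variable names are assumptions):

% src: source time series, source points x time; iBaseline: baseline sample indices (assumed)
mu    = mean(src(:, iBaseline), 2);                         % baseline mean per source
sigma = std(src(:, iBaseline), 0, 2);                       % baseline standard deviation per source
zMap  = bsxfun(@rdivide, bsxfun(@minus, src, mu), sigma);   % z-scored source map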

All the best,

Hi,

I am having some issues when I try running permtest.m. The statistics I need are over trials belonging to a single condition.
I execute [pv,S0,NP,PS,P] = permtest(X, -20, {'ttest' []}, -1, [], [], 1, 2), where X is organized as trials x source points x time,
and end up with the error:

Maximum variable size allowed by the program is exceeded.
Error in ==> permtest at 535
PS=zeros([NP sz 1],'single');

How can I solve this problem?

Thank you,
Shriks

Hello,

X might be too large in terms of the number of time points. You may run the permutation test by decomposing the time window of interest into shorter periods and running permtest sequentially on each of them.

Hi,

I am sorry, I did not understand. Can you please be a little more clear?
I am looking at 900 ms of behavioural data, which makes X around 100x3000x542.

Thank you,
Shriks

Sure, sorry.

My point is that you need to split the statistical analysis into smaller time windows using loops, e.g.:

for k = 1:10
    [pv{k}, S0, NP, PS, P] = permtest(X(:, :, t(k):t(k+1)), -20, {'ttest' []}, -1, [], [], 1, 2);
end

(caution: this is just to illustrate the principle)

where t(k) are time samples that split the original 1:542 time window into 10 intervals.
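
For instance (purely illustrative; adjust the edges to your own sampling), the window edges could be built as:

t = round(linspace(1, 542, 11));   % 11 edges defining 10 consecutive intervals over the 542 samples

Note that with t(k):t(k+1) consecutive windows share one sample at their boundaries; shift the indices if you prefer strictly non-overlapping windows.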

Hope this helps further.