Computing the MUSIC inverse method in Brainstorm

Hello again,
I am working on implementing the MUSIC method using the Brainstorm lead field matrix (and hopefully extending it later to the RAP, TRAP, and EX MUSIC variants).
The development of the algorithm seems straightforward, and I am getting some results, but they are very off. I was wondering if I could check a couple of points with you regarding the lead field matrix.

1- I extracted the lead field matrix, denoted "L" in the code of bst_inverse_linear_2018. Is this correct? Can we use this L matrix as the lead field matrix in the computation of any new inverse method?
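
(For reference, a minimal sketch of how such an L might be obtained from a Brainstorm head model file, assuming the standard Gain/GridOrient fields; the file name here is hypothetical:)

```matlab
% Hedged sketch: load a Brainstorm head model and build an orientation-constrained lead field.
hm = load('headmodel_surf_openmeeg.mat');      % free-orientation gain: [nChannels x 3*nSources]
L  = bst_gain_orient(hm.Gain, hm.GridOrient);  % constrained to cortex normals: [nChannels x nSources]
```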

2- This is my code:

It is my attempt at computing the MUSIC algorithm described in the *Electromagnetic Brain Mapping* article.
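
(A sketch of what a scan in the style of formula (16) of that article might look like; variable names M, L, and num are taken from the thread, everything else is an assumption:)

```matlab
% Hedged sketch of a MUSIC scan per formula (16):
% here the projector is onto the NOISE subspace, so true sources give SMALL J.
[U, ~, ~] = svd(M, 'econ');
Us = U(:, 1:num);                    % signal subspace from the num largest singular values
Pn = eye(size(M,1)) - Us * Us';      % projector onto the noise subspace
J  = zeros(1, size(L,2));
for i = 1:size(L,2)
    l = L(:, i);
    J(i) = (l' * Pn * l) / (l' * l); % small at true source locations
end
```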

I also looked at the MUSIC method implemented in FieldTrip, shown below; I think it is similar to what I have done.

The problem is that the obtained source estimate is very off even with a known number of sources. Is this normal behavior for the MUSIC method, or am I doing something wrong?

Hi,

What is the function you are plotting? Looking at your code snippet, I guess it is J... if so, take another look at the formula or read the next paragraph of the paper :).

In the numerator of formula (16), P projects the tested lead field onto the noise subspace, not onto the signal subspace. What you are probably looking for is either the reciprocal of J (as written in the paper) or a normalized projection onto the signal subspace (U_s U_s^T).

For a few more words on these subspaces and projections, maybe take a look at our TRAP MUSIC paper --- formula (3) there is what I think you are looking for.
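
(In standard notation, the signal-subspace scanner referred to here is of the form

$$\mu_p \;=\; \frac{\lVert U_s U_s^{\mathsf{T}}\,\ell_p \rVert^2}{\lVert \ell_p \rVert^2} \;\in\; [0,\,1],$$

where $\ell_p$ is the lead field column of grid point $p$ and the columns of $U_s$ span the signal subspace; true sources show up as maxima close to 1.)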

Cheers,
Matti


Thank you for your reply Matti.
I looked at the TRAP MUSIC paper and here is what I have:
For the equation


I computed it as:

where M is the EEG data, L is the lead field matrix, and J is the source estimate. Additionally, "num" is the true number of sources (here, 3).
Here is my new output:

I'm still not quite sure whether my computation is right and this is what is expected from MUSIC, or whether I am still missing something. Additionally, increasing "num" doesn't make the result any better. I am guessing it might be a correct MUSIC result, since the lead field matrix is not accurate (spherical model).
Please let me know what you think.

@Sylvain, if I could get your input as well, that would be great!

Best
Younes

Hi Younes,

I am wondering why you are computing
(Ps) .* L(:,i) instead of Ps * L(:,i). You are supposed to compute a matrix--vector product, not an elementwise product.

It is difficult to see what might be wrong when we do not know how you simulated the data M. I would first do something easy like "C = [lj, lk] * [lj, lk]'", where lj and lk are the jth and kth columns of L. Next, add some noise alpha * eye(size(C)), and so on.
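
(A sketch of this sanity check; the indices j, k and the noise level alpha are illustrative:)

```matlab
% Hedged sketch of the suggested test: two uncorrelated unit sources plus white noise.
lj = L(:, j);  lk = L(:, k);        % topographies of two grid points
C  = [lj, lk] * [lj, lk]';          % rank-2 "data covariance" of the two sources
C  = C + alpha * eye(size(C));      % add white sensor noise
[V, D]     = eig(C);                % eigendecomposition of the covariance
[~, order] = sort(diag(D), 'descend');
Us = V(:, order(1:2));              % signal subspace = two dominant eigenvectors
```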

When implementing a new method, it is good at the first stage to simulate the test data using the same type of forward model as you use in testing. Add model errors only after verifying that your implementation works in the ideal case. With EEG, it does not matter whether these verifications are done with a spherical or a realistic model.

Cheers,
Matti

PS: I had to put spaces around the asterisks in the code examples to prevent the forum parser from converting my multiplications into formatting commands.


Following Matti's comment on the nature of simulations, remember that MUSIC, and beamformers in general, notoriously fail when multiple sources are simultaneously active. So if you activate two or more sources at once with no distinction in their dynamics - and, for that matter, two or more static sources with no dynamical fluctuations - this will fool the beamformer's score map (see the toy simulation below). This is one of the reasons why we don't feature beamformers in Brainstorm by default. I recommend you proceed with only one source, then two sources with uncorrelated time series, etc.
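
(A toy illustration of this point, with hypothetical grid indices j and k:)

```matlab
% Hedged illustration: why synchronous sources collapse the signal subspace.
T  = 1000;
s1 = randn(1, T);  s2 = randn(1, T);   % two uncorrelated time series
M_uncorr = L(:,j)*s1 + L(:,k)*s2;      % rank-2 signal: MUSIC can separate the two sources
M_sync   = (L(:,j) + L(:,k)) * s1;     % identical dynamics: rank-1 signal that looks like
                                       % ONE source with the combined topography lj + lk
```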


@MattiS Thank you so much for your reply.
I am computing "(Ps) .* L(:,i)" because, as indicated in (3), we are computing per grid location (p), which here is "i". Additionally, Ps is 63x63 and L is 63x15000, which leads to a 63x15000 result, while J (the source estimate) must be a 1x15000 (or 15000x1) vector. So the matrix-matrix product doesn't lead to the right matrix size.

I computed the measurement M by first generating data for 3 sources in source space using "Simulated Generic Data" in Brainstorm. Then I projected the data into sensor space, after defining three positions on a scout, using "Simulate recordings from scouts" in Brainstorm.
What I have as M is 63x1, i.e., 63 channels and 1 discrete time point. As you mentioned, I am using the same lead field matrix (forward model) for simulation and estimation. So maybe, as suggested by Dr. @Sylvain, it might be due to the number of sources, since the method works fine for one source. Let me know what you think.

@Sylvain Thank you so much for your reply. This might be the case: when I tried the code above with 1 source, the performance was acceptable!
We have been working on a method for estimating the NUMBER OF SOURCES from EEG data, so my primary goal was to use the MUSIC method together with our number-of-sources estimate, to see if it improves the existing MUSIC method. But based on your suggestion, MUSIC may not benefit (or be useful for our objective), since it does not perform well for more than one or two sources. I was wondering if there is any particular method/application that could benefit from our number-of-sources estimation method. I would really appreciate your suggestions and guidance.

Best
Younes

Hi,
I meant that you should not be using the dot in your multiplication (Ps) .* L(:,i). Just compute the standard matrix--vector product Ps * L(:,i).
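
(With the sizes quoted in the thread, 63 channels and 15000 grid points, the dimensions work out as in this sketch, assuming Us holds the signal-subspace basis:)

```matlab
% Hedged sketch: the matrix-vector version still yields a 1 x 15000 map,
% because each grid point contributes one scalar.
Ps = Us * Us';                          % 63 x 63 projector onto the signal subspace
J  = zeros(1, size(L,2));               % 1 x 15000
for i = 1:size(L,2)
    l = L(:, i);                        % 63 x 1 topography of grid point i
    J(i) = norm(Ps * l)^2 / norm(l)^2;  % scalar in [0, 1]
end
```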

If your three true source regions have identical or similar time-courses within each region, and the time-courses of the different source regions are (nearly) independent (-> you have a total of three extended sources), you should get some kind of blobs in the correct regions with the MUSIC algorithm (although overall, MUSIC and beamformers work best with quite focal sources). But if the time-courses vary a lot within each source region, you will lose sensitivity. And if two of your source regions have identical or very similar time-courses, MUSIC projects them onto the locations that best resemble the total topography produced by the synchronous signals -> the localization typically goes wrong.

Cheers,
Matti


Your approach, @unes1111, sounds akin to selecting the number of meaningful components following PCA (SSP) or ICA. I suppose your method could be applied to sensor data as well, or is it based on a decomposition/segmentation of a data map (in your example: on the cortex)? You may also want to consider that with brain signals, the temporal signature of a component is a very salient feature that needs to be considered in addition to spatial separation.


Yes, that was my question as well: I can't figure out why, using the matrix-vector product Ps * L(:,i), the solution is very distributed and inaccurate even with one source, like below:

while using Ps .* L(:,i) the output is much better and makes more sense.

The three sources I have generated have the same time-course.
Here is what I get with Ps * L(:,i):

and with Ps .* L(:,i) I get:

which is clearly missing the other two sources. I'm sorry for keeping this back-and-forth conversation going and continuing to ask you questions. The computation is only a couple of lines, and I don't see what could have gone wrong...

With Regards
Younes

Thank you, Dr. @Sylvain, for your reply. Yes, our method is based on the SVD and finding the optimal number of singular values with a probabilistic approach. Since the MUSIC method starts with the same consideration (performing an SVD and looking for the optimal number of singular values), I thought it would perfectly match the method and enhance its performance. Yet, I am still not sure my computation of MUSIC is 100% correct.
Thank you for the suggestion; yes, I am looking into PCA and ICA as well. Because I have done some work on inverse problems in EEG and have the mathematical/computational background, I thought I would start with the inverse problem.
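
(As a toy illustration only: the simplest possible singular-value count; the fixed threshold is a placeholder for the probabilistic criterion described above:)

```matlab
% Toy illustration of estimating the number of sources from the singular spectrum.
sv = svd(M);                        % singular values, largest first
num_est = sum(sv > 0.1 * sv(1));    % crude count: components above 10% of the largest
```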

I really appreciate your feedback and the time you have put in.

With Regards
Younes

@MattiS Just wanted to say that I resolved the problem mentioned above (I now get the correct source map with the matrix-vector multiplication). Thanks for your help and guidance :pray:
The only remaining problem is that some estimates are placed in depth, even though I chose the surface cortex rather than the volume for the head model. I'm not sure how I can constrain the "L" lead field matrix to act only on the surface. Here is what I mean:

^ This one is the no-noise estimate from the MUSIC method.

@unes1111, if you look at your results with the proper matrix--vector multiplication, you see that the range is between 0 and 1, as it should be, since you are essentially testing how strongly a unit-norm vector projects onto a subspace. I still have no clue what the projection (Ps) .* L(:,i) does, but it is certainly not a subspace projection...

If I read your reply correctly, your three source regions have identical time-courses ("The three sources I have generated have the same time-course"). That is, they operate in synchrony. Effectively, this means that at the sensor level you see only one source, which happens to have a rather complicated topography. Standard "one-source-at-a-time" MUSIC (or any covariance-based method) cannot spatially separate sources on the cortex unless their time-courses are different enough. Essentially, it comes down to the match between the single topographies in your L matrix and the true source topographies (see our dual-source work, DS MUSIC, https://doi.org/10.1101/230672).

Your example case with one source region looks reasonable; this is what to expect with EEG. MEG will do a bit better.

I edited my previous reply --- edits are in italics.

Have a nice start to the week!
Matti
