MEM BEST interpretation

Dear community,

I need to interpret which regions are higher in entropy. I have read the tutorial and searched the internet. I thought the maximum entropy regions were what BEST showed, but it seems that is not the case.

Does that mean that higher entropy is not localized, while lower entropy appears in specific regions?

Could you explain whether my interpretation is wrong?

Thank you for your help as usual.

Adding some of the developers of MEM:
@cgrova, @jafnan, @edelaire


Hello,

Thanks a lot for your question. When we mention entropy in Maximum Entropy on the Mean (MEM), we are not referring to the entropy of the data but to the relative entropy between the prior and the posterior distribution (more specifically, the Kullback-Leibler divergence). Here, maximum entropy is a property of the solver, not of the data.
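To make the idea of divergence between two distributions a bit more concrete, here is a tiny toy example in Matlab (purely illustrative, with made-up variable names; this is not code from the toolbox):

% Two toy discrete distributions: p plays the role of the posterior,
% q1 and q2 play the role of two possible priors
p  = [0.25 0.25 0.25 0.25];
q1 = [0.25 0.25 0.25 0.25];   % identical to p
q2 = [0.70 0.10 0.10 0.10];   % different from p

% Kullback-Leibler divergence: KL(p || q) = sum_i p_i * log(p_i / q_i)
kl = @(p, q) sum(p .* log(p ./ q));

kl(p, q1)   % 0: no divergence when the two distributions coincide
kl(p, q2)   % > 0: the more the distributions differ, the larger the divergence

The relative entropy used in MEM is taken between the prior and the posterior rather than between arbitrary distributions, but it measures the same kind of "distance": the smaller the divergence from the prior, the higher the relative entropy.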

To make it clearer, we can look at how other solvers work. When solving the inverse problem, we are trying to solve the equation Y = Gx + e for x.

To solve for x, you need to rank all the possible solutions for x. For that, one might use probabilistic modeling, assigning a likelihood of observing specific data given a set of parameters: p(Y | x).

One solution is then to take the x that maximizes this likelihood: this is the maximum likelihood estimator (MLE [1]).

Using the Bayesian framework, it is possible to introduce a prior on the distribution of the parameter and, using Bayes' rule, update the prior to estimate the posterior probability distribution of the parameter. As a point estimate, one can then choose the maximum a posteriori (MAP) solution [2].
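To make the difference between those two point estimates concrete, here is a small toy sketch in Matlab (again purely illustrative, with made-up sizes and variable names; with a zero-mean Gaussian prior the MAP estimate reduces to a regularized least-squares solution):

% Toy linear forward model Y = G*x + e, with more sources than sensors
rng(1);                                       % reproducible toy example
nSensors = 20;
nSources = 50;
G = randn(nSensors, nSources);                % toy gain matrix
xTrue = zeros(nSources, 1);
xTrue(10) = 1;                                % a single active "source"
Y = G * xTrue + 0.1 * randn(nSensors, 1);     % noisy measurements

% Maximum likelihood (Gaussian noise): least squares. The problem is
% under-determined, so the likelihood has no unique maximum; pinv returns
% the minimum-norm solution among the maximizers.
xMLE = pinv(G) * Y;

% Maximum a posteriori with a zero-mean Gaussian prior on x: equivalent to
% Tikhonov (minimum-norm) regularization. lambda is an arbitrary value
% chosen for this toy example.
lambda = 1;
xMAP = (G' * G + lambda * eye(nSources)) \ (G' * Y);

figure;
plot(xTrue, 'k'); hold on;
plot(xMLE, 'r');
plot(xMAP, 'b');
legend('true x', 'MLE (least squares)', 'MAP (Gaussian prior)');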

In the case of MEM, we choose to maximize the relative entropy between the prior and the posterior distribution, which has some nice statistical properties. From the tutorial: "The maximum entropy distribution is uniquely determined as the one which is maximally noncommittal with regard to missing information" (E.T. Jaynes, Information theory and statistical mechanics, Phys. Rev. 106(4), 1957).
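To give a rough idea of where the name comes from (this is my own simplified paraphrase, ignoring the details of how the noise term is handled): among all distributions p of the source intensities whose mean explains the data, i.e. Y ≈ G * E_p[x], MEM selects the one that maximizes the relative entropy with respect to the prior. The data constraint is on the mean of the distribution, hence "Maximum Entropy on the Mean".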

(Note: I had to make some simplifications*, but I think it conveys the message well. Feel free to ask if you need any more information.)

  * If a statistician sees this message, sorry :)

  1. Maximum likelihood estimation - Wikipedia

  2. Maximum a posteriori estimation - Wikipedia


Thank you so much for your answer.

Correct me if I'm wrong!

I think I understand: you don't work with the entropy of the data but rather with the relative entropy between the prior and the posterior distributions, i.e., the Kullback-Leibler divergence.

That means less divergence between the distributions corresponds to higher entropy, and more divergence to lower entropy.

If that is correct, how can I check in the source estimates whether there is more or less divergence? Is there any heatmap of the KL divergence for the computed results?

Hello,

The following Matlab code should allow you to plot the entropy drop as a function of time:


% MEM result file to load (path truncated here)
sFiles = {...
    'PA65/.../results_MEM_MEG_240531_1021.mat'};

% Load the result structure from the Brainstorm database
sResult = in_bst_results(sFiles{1});

% Entropy drop computed by MEM at each time sample, and the time vector
entropy_drop = sResult.MEMoptions.automatic.entropy_drops;
time = sResult.Time;

% Plot the entropy drop as a function of time
figure;
plot(time, entropy_drop);
xlabel('Time (s)');
ylabel('Entropy drop');


(illustration of the entropy drop during the localization of a spike)

I will need to ask during our lab MEM meeting on Tuesday. My understanding so far is that the entropy drop at a time point reflects the amount of information that was present at that time point, so more information means less entropy (i.e., a larger entropy drop).

But I will confirm on Tuesday :)


Thank you for sharing the code. I look forward to all the information that comes after the meeting.

I have successfully computed the entropy drop. What would be more interesting than computing it for each MEM file would be to do so for the average of one group, or for the difference between two groups.

I tried with this code, but it seems to be sensitive to the name of the file.

Is there another way to do it? Maybe the trick is to rename the files; I just don't want to try that if it is not allowed for some other reason.
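Something along these lines is what I have in mind (a rough sketch only, with hypothetical file names, assuming every result file contains the same entropy_drops field used above and that all subjects share the same time vector):

% Hypothetical list of MEM result files for one group (names made up)
sFiles = {...
    'Subject01/.../results_MEM_MEG_example_A.mat', ...
    'Subject02/.../results_MEM_MEG_example_B.mat'};

allDrops = [];
for iFile = 1:numel(sFiles)
    sResult = in_bst_results(sFiles{iFile});
    drops   = sResult.MEMoptions.automatic.entropy_drops;
    allDrops(iFile, :) = drops(:)';   % one row per file (grows across the loop)
    time = sResult.Time;
end

% Group-average entropy drop (a difference between two groups could be
% computed the same way, from two such matrices)
figure;
plot(time, mean(allDrops, 1));
xlabel('Time (s)');
ylabel('Entropy drop (group average)');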

Thanks in advance for all the help.

Any news? @edelaire