Spatial Smoothing Process modification

Hello everyone,

My experiment is processing-heavy, so I limit it to my ROI. I end up with multiple source files that are mostly zeros except for that patch of interest.

Now I want to spatially smooth these source files. If I use the vanilla 'Spatial Smoothing' process, the minimum input value of 1 is way too big for my smaller ROI. We managed to hack it a bit so it would allow smaller values.

My next issue is that the smoothing process doesn't ignore the zeros of the non-ROI areas, creating a large edge effect around the ROI. I tried looking into the McGill Matlab functions behind that smoothing, but they are quite difficult to navigate: no comments, single-letter variable names, and calls to other, equally obfuscated McGill Matlab functions.

I was wondering if someone has faced the same issue and found a solution that could help me. Alternatively, someone who knows those uncharted scripts well could assist me in making this modification.

I'll try to figure it out on my own in the meantime,
Thank you,
Jonathan

Hi Jonathan,

Now I want to spatially smooth these source files. If I use the vanilla 'Spatial Smoothing' process, the minimum input value of 1 is way too big for my smaller ROI. We managed to hack it a bit so it would allow smaller values.

You have nothing to hack. Instead of running the process from the pipeline editor, generate the corresponding Matlab script, edit the value and execute it.
But how much precision do you think you can expect from these methods?
If your source maps are coming from a minimum norm estimation, I'm afraid it might not make much sense to try looking at differences that are 1mm apart...
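For reference, the generated script looks roughly like this; you can then edit the FWHM value by hand before executing it (the file path below is hypothetical, and the option names may vary slightly across Brainstorm versions):

```matlab
% Minimal sketch of an auto-generated Brainstorm pipeline script.
% Input files: the source files selected in the Process1 tab (path is hypothetical).
sFiles = {'Subject01/MyCondition/results_MN_MEG_KERNEL.mat'};

% Process: Spatial smoothing, with a FWHM below the 1 mm minimum enforced by the GUI
sFiles = bst_process('CallProcess', 'process_ssmooth_surfstat', sFiles, [], ...
    'fwhm',      0.3, ...   % FWHM in mm: edit this value freely here
    'overwrite', 0);
```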

My next issue is that the smoothing process doesn't ignore the zeros of the non-ROI areas, creating a large edge effect around the ROI.

This function from the SurfStat toolbox will not let you do what you want. If you want to completely ignore what is outside of your domain of interest, maybe you could try cutting out the vertices that you are not interested in (in the Scout tab, menu Scout > Edit Surface > Keep only selected scouts), editing the source file to keep only the corresponding indices, and running the smoothing function again. But what would happen at the borders, I have no idea...
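Editing the source file could look something like this rough sketch (hypothetical file names; it assumes the maps are stored directly in ImageGridAmp, one row per vertex, and that you have the list of vertex indices of your scout):

```matlab
% Rough sketch: keep only the ROI rows of ImageGridAmp so that the source file
% matches the reduced surface (file names and iScoutVertices are hypothetical).
src = load('results_full_cortex.mat');                     % full-brain source file
iScoutVertices = sort(iScoutVertices(:));                  % vertex indices of the ROI scout

srcRoi = src;
srcRoi.ImageGridAmp = src.ImageGridAmp(iScoutVertices, :); % keep only the ROI vertices
srcRoi.Comment      = [src.Comment, ' | ROI only'];
save('results_roi_only.mat', '-struct', 'srcRoi');
```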

Cheers,
Francois

I will be trying your suggestion right away.
Meanwhile, here's a little snapshot illustrating why the minimal smoothing option in Brainstorm is still too big.

This is how far the smoothing reaches, while everything that is not red is my ROI.

P.S.: I'm a bit afraid of using that 'Keep only selected scouts' option. Having it outside the process box makes it feel quite permanent. Will it cut out the whole brain forever?

The menus to edit surfaces create new surfaces, nothing is deleted.

But I still don't understand what the point of this smoothing is.
Your ROI is so small, if you want to compare multiple subjects, just average the signals on the ROI...
Or if you want to do a very precise mapping of unique sources of activity at a given latency, maybe a dipole fitting approach would make more sense.
Have you discussed this with @Sylvain?

Why not apply an empirical threshold right above 0-ish to get rid of all the red for visualization purposes?
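If you prefer to do it on a copy of the data rather than with the display options, something like this would do it (hedged sketch, hypothetical file name and threshold value):

```matlab
% Hedged sketch: hard-threshold a copy of the source file for display purposes.
src = load('results_roi_smoothed.mat');                   % hypothetical file name
threshold = 1e-12;                                        % purely empirical value
src.ImageGridAmp(abs(src.ImageGridAmp) < threshold) = 0;  % drop the near-zero "red"
save('results_roi_smoothed_thresh.mat', '-struct', 'src');
```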

Hello Sylvain, thanks for dropping by this thread.

I do use thresholding to remove what is close to zero. The reason I left it in the figure in the above post was to display how many zeros the smoothing process was including.

@Francois The nature of my project is a bit unorthodox. What you see is a frequency-selectivity map of the auditory cortex produced by a reverse-correlation analysis. Averaging over the ROI goes against the goal of this experiment: what is mapped on the cortex is the preferred sound frequency of each vertex.

Now that you reassured me concerning the Edit Surface menu, I will try it.

Furthermore, I'll try smoothing the surfaces before doing any filtering that removes non-significant vertices from the surface. The smoothing will then be done with noise that is closer to the average, rather than all those zeros.

Thanks for the support, guys,
Cheers,

@Francois I tried to work with just the ROI, but when I do that the smoothing behaves strangely and has barely any effect. I'll keep trying things. Thanks again,

Sorry for writing so much,

So I discovered that Brainstorm loses its sense of sizes and distances when projecting onto this smaller chunk: it thinks the chunk is as big as a whole brain. If anyone is doing something similar, you need to input a large smoothing FWHM (10mm in my case) to trick Brainstorm into smoothing properly.

I hope this can be helpful to someone else!


Good catch, Jonathan!

I'm not sure I understand what this means...
If you are at the MNI, can you please discuss this directly with @MartinC?

I'd gladly address this remotely as I am not around the MNI too much anymore.

Some more potential issues with the spatial filtering, which could also address one of your earlier concerns, @Francois.

The fMRI literature on the auditory cortex often uses spatial smoothing with a Gaussian kernel with a FWHM of 3mm. Doing so gives nice-looking auditory cortex maps and seems like a reasonable level of smoothing. For example, here is a figure from McKetton et al., 2019. http://www.jneurosci.org/content/39/15/2930.long

Now here's what Brainstorm outputs when I enter 3mm FWHM.

That red wraps around the whole hemisphere. Now, I don't pretend to fully understand the math behind this smoothing, but I feel like this is not a 3mm FWHM. Am I doing something wrong, or is my understanding flawed?

I look forward to reading your input on this,

Isn't this the issue you were reporting, about the smoothing kernel not respecting the actual size and geometry when the data have NaNs or are not defined everywhere on the cortex? That would be weird, though, because the kernel should depend only on anatomy, not on source data.

Before my reply, just a quick distinction: the data doesn't have NaNs, it has zeros. ImageGridAmp stores 0 in the absence of data.

I am not certain I fully understand your question. My first issue was that I wanted to find a way to prevent the smoothing from integrating the zeros.

The current, newer issue is that I am under the impression that the smoothing process in Brainstorm doesn't really understand what millimeters represent in space. I don't think this strange behaviour stems from my dataset; I believe I'd get similar results using unfiltered, full-brain data.

My first hint that the smoothing didn't handle millimeters properly came when I extracted my ROI with the Edit Surface menu, as suggested by Francois. With the new anatomy, I needed a FWHM a hundred times larger to get the same effect.

Lastly, I compared against the FWHM values in the literature. Their 3mm smoothing does not produce a result anywhere close to my 3mm smoothing. The red spread in my latest figure represents how widely the smoothing process collects information: with 3mm, the whole hemisphere is red. Using 0.3mm, I get a figure closer to what I expected: a bit of red spread around my data (the edge effect), but the rest of the hemisphere is still gray, i.e. zeros or no data.

My current hypothesis is that it has to do with the anatomy I use: my cortices are made up of 150,000 vertices instead of the usual 15,000.

I might run a few tests tonight or tomorrow to see if my hypotheses about this issue are right.

Check the code of process_ssmooth_surfstat.m.
You'll see that SurfStatSmooth understands only edges, not distances.

It's a piece of code in the process function that estimates a number of edges based on the average distance between two vertices in your surface. After the process execution, open the execution report (menu File > Report viewer) to see how many edges of FWHM were used for the smoothing.
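The logic is roughly the following (a paraphrased sketch, not the exact Brainstorm code; sSurf and ImageGridAmp stand for the loaded surface structure and source matrix):

```matlab
% Paraphrased sketch of the mm-to-edges conversion (not the exact Brainstorm code).
Vertices = sSurf.Vertices;                        % [nVertices x 3], in meters
[vi, vj] = find(sSurf.VertConn);                  % pairs of connected vertices
meanEdge = mean(sqrt(sum((Vertices(vi,:) - Vertices(vj,:)).^2, 2)));

FWHMmm   = 3 / 1000;                              % requested FWHM: 3 mm, in meters
FWHMmesh = FWHMmm / meanEdge;                     % FWHM expressed as a number of edges

% SurfStatSmooth only looks at the adjacency graph (surf.tri): the physical extent
% of the smoothing depends entirely on how regular the mesh is.
ImageGridAmp = SurfStatSmooth(ImageGridAmp', struct('tri', sSurf.Faces), FWHMmesh)';
```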

This cannot work if your surface has a very inhomogeneous mesh.
If you are expecting an accurate geodesic smoothing, sorry, this function is not for you...

For us, it's a quick and dirty way to align multiple subjects before group statistics.
For example, with the introduction tutorials, FWHM = 3mm gives very decent results.


I quickly ran a test to verify my last hypothesis about the number of vertices, and I think I might be right. Running the smoothing with 3mm on a 15,000-vertex cortex gives a much more restricted result than on the 150,000-vertex cortex I usually use.

Here's the 15,000-vertex figure:
MEG_3D_Subject01_01_Subject01-01_AEF_20150112_04_2400_notch_band_02_clean_S1D1_smooth_take8-M100-peak-GAUSSIAN__cortex_15002V__abs__ssmooth3

I might be wrong about the math behind it all, but my current understanding is that Brainstorm has no consistent notion of what 3mm is. It understands it in a 15,000-vertex context, but that doesn't carry over to 150,000 vertices.

Hmm, yeah, if this and the "whole red hemisphere" figures are supposed to be the same smoothing, it does look like there's a bug in how Bst converts distances to an estimated number of edges.

I might be wrong about the math behind it all, but my current understanding is that Brainstorm has no consistent notion of what 3mm is

No, indeed. As I was mentioning before, the function SurfStatSmooth from the SurfStat toolbox works only on the adjacency graph, regardless of the distances. From one vertex, the smoothing will be equivalent for all its neighbors, no matter whether one is 1mm away and another 1cm away.

Suggestions for code improvements are welcome. I see two options:

  1. You try to debug the code and understand why 3mm converts to a number of neighbors that is too high with the high-resolution surface, and propose a fix for it. A note should be added in the process options (option type "label"; look for other examples of processes) to explain that this conversion is an approximation, that the smoothing happens in a number of edges, and that this number of edges can be seen in the information messages generated by the process. (A quick sanity-check sketch follows after this list.)
  2. We just stop trying to convert distances into edges and only offer a smoothing in number of edges... but this would not be understandable by most people.
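As a starting point for option 1, a quick sanity check could be to compare what 3mm converts to, in edges, on the two meshes (hedged sketch; sSurfLow and sSurfHigh are assumed to be the 15,000- and 150,000-vertex surface structures already loaded in Matlab):

```matlab
% Hedged debugging sketch: print the mean edge length and the edge count that a
% 3 mm FWHM maps to, for the low- and high-resolution cortex surfaces.
for c = {sSurfLow, sSurfHigh}                     % hypothetical workspace variables
    sSurf    = c{1};
    [vi, vj] = find(sSurf.VertConn);
    meanEdge = mean(sqrt(sum((sSurf.Vertices(vi,:) - sSurf.Vertices(vj,:)).^2, 2)));
    fprintf('%d vertices: mean edge = %.2f mm, 3 mm FWHM = %.1f edges\n', ...
        size(sSurf.Vertices, 1), 1000 * meanEdge, (3/1000) / meanEdge);
end
```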

To submit code modifications, open a pull request on the Brainstorm git repository (let us know if you need more instructions):
GitHub - brainstorm-tools/brainstorm3: Brainstorm software: MEG, EEG, fNIRS, ECoG, sEEG and electrophysiology

Hey guys!

So I've returned to this idea of smoothing a limited ROI, and last night I had a new thought.

I used 'Keep only selected scouts' as suggested. I remembered I had dropped this approach because it lost the vertex indices, so I could not put the chunk back into the whole-brain picture afterwards. My new idea was to make a new source file with the vertex numbers as values in ImageGridAmp. After that, I used 'Keep only selected scouts' on both my raw data and my new idxSources. I smoothed the chunk of raw data and intended to use what was stored in idxSources to map it back to a full-brain ImageGridAmp.
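For reference, creating that idxSources file looks roughly like this (just a sketch, with hypothetical file names):

```matlab
% Hypothetical sketch: build an 'idxSources' file whose values are the vertex
% indices, so that the indices survive the 'Keep only selected scouts' cut.
src = load('results_full_cortex.mat');                     % full-brain source file
nVertices = size(src.ImageGridAmp, 1);
src.ImageGridAmp = repmat((1:nVertices)', 1, size(src.ImageGridAmp, 2));
src.Comment = 'idxSources';
save('results_idx_sources.mat', '-struct', 'src');
```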

Here's the issue: it turns out that the idxSources file now contains non-integers (about 14 out of ~3k vertices). There seems to be an unexpected distortion that happens when one uses 'Keep only selected scouts'. Is this a bug? Is this a feature? I did receive an alert telling me to use FreeSurfer instead, but it surprises me that there is imprecision in what seems to me to be just a binary filter of what to show and hide.

I will try to ignore those non-integers as I return the smoothed chunk to a full-brain ImageGridAmp. I'm afraid it might leave holes. Maybe not.
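What I'm planning for the mapping back looks roughly like this (again a sketch with hypothetical file names), rounding the indices and dropping the few distorted ones:

```matlab
% Hedged sketch: put the smoothed ROI values back into a full-brain ImageGridAmp.
smoothRoi = load('results_roi_smoothed.mat');     % smoothed ROI chunk
idxRoi    = load('results_idx_sources_roi.mat');  % idxSources after the same cut

idxRaw = idxRoi.ImageGridAmp(:, 1);
idx    = round(idxRaw);
keep   = abs(idxRaw - idx) < 0.01 & idx >= 1;     % ignore the ~14 distorted indices

nVerticesFull = 150000;                           % size of the original cortex
full = load('results_full_cortex.mat');           % template full-brain source file
full.ImageGridAmp = zeros(nVerticesFull, size(smoothRoi.ImageGridAmp, 2));
full.ImageGridAmp(idx(keep), :) = smoothRoi.ImageGridAmp(keep, :);
full.Comment = [full.Comment, ' | ROI smoothed, mapped back'];
save('results_full_roi_smoothed.mat', '-struct', 'full');
```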

Thanks again for your constant support,