Dipole scanning - array exceeds maximum array size preference

Dear BST community,

I am trying to perform LCMV beamforming on already preprocessed, continuous resting-state EEG data (512 Hz), using the ICBM152 template. My goal is to extract the source-reconstructed time series using scouts from the DK atlas.

Accordingly, since I am using the MNI template, I used unconstrained solutions, as recommended here: https://neuroimage.usc.edu/brainstorm/Tutorials/SourceEstimation

> Unconstrained solutions are particularly appropriate when using the MNI template

I ultimately want one signal per scout. Now, I am a bit confused as to what the next step would be.

My first thought was to use Extract Values, with the Scout function set to PCA, but this still provides 3 signals per scout. I want the signals to oscillate around 0, so, as I understand it, I cannot use the "norm for unconstrained sources" option.

Going over the tutorials once again, it looks like Dipole Scanning (https://neuroimage.usc.edu/brainstorm/Tutorials/TutDipScan) is what I want.

I tried it, but the issue is that my files are too large. I got the following warning:

Requested 45006x245761 (82.4GB) array exceeds maximum array size preference (64.0GB). This might cause MATLAB to become unresponsive.

All my files are around this size. I am wondering what I should do in this case. Would downsampling help?

Another option is "Unconstrained to flat map" using PCA, but this also runs into the same array size issues.

Thank you,
Paul

> My first thought was to use Extract Values, with the Scout function set to PCA, but this still provides 3 signals per scout.

The option "Scout function: PCA" groups the various signals per region of interest: 1 signal per ROI for constrained orientations, 3 signals per ROI for unconstrained sources.
https://neuroimage.usc.edu/brainstorm/Tutorials/Scouts#Scout_function

Working with unconstrained source maps makes everything much more complicated, and not all the processing options are available. Using constrained orientations helps decrease the complexity of the problem, but this simplification is difficult to justify in the case of a template brain.

> Another option is "Unconstrained to flat map" using PCA, but this also runs into the same array size issues.

If you want to convert from 3 signals to 1 per time point, you may indeed use the process "Unconstrained to flat map". But it requires reconstructing the full source matrix (45006 x 245761) instead of working in the optimized kernel format (kernel = 45006 x Nchannels, data = Nchannels x 245761), so you won't be able to run it on such large files, as you already noticed.
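For reference, a quick back-of-the-envelope comparison of the two formats in Matlab (the 64-channel count is an assumption for illustration; the first figure matches the warning above):

```matlab
% Full double-precision source matrix vs. kernel format (8 bytes per value)
full_GiB   = 45006 * 245761 * 8 / 2^30          % ~82.4 GiB: full source matrix
kernel_MiB = (45006*64 + 64*245761) * 8 / 2^20  % ~142 MiB: kernel + sensor data
                                                % (assuming 64 EEG channels)
```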

Additionally, this kind of simplification is not better justified than using an arbitrary source orientation: it has not been published anywhere and is not part of our recommended workflows.

> Going over the tutorials once again, it looks like Dipole Scanning (https://neuroimage.usc.edu/brainstorm/Tutorials/TutDipScan) is what I want. I tried it, but the issue is that my files are too large. I got the following warning: Requested 45006x245761 (82.4GB) array exceeds maximum array size preference (64.0GB).

Dipole scanning is designed to work on a few time points, in the context of an ERP/ERF study, not to process long continuous time series.

> All my files are around this size. I am wondering what I should do in this case. Would downsampling help?

Processing full-resolution cortex surfaces over long recordings is not a solution; you need to simplify your analysis somewhere: reduce the number of time points, reduce the number of sources, work with regions of interest, or work with summary metrics at the sensor level.
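For the time dimension, for instance, the resampling process can be called from a script. A minimal sketch, assuming 'sFiles' already holds your imported data files and that the option name matches your Brainstorm version:

```matlab
% Downsample the recordings to 256 Hz before estimating sources
% ('process_resample' and its 'freq' option may differ across versions)
sFilesDs = bst_process('CallProcess', 'process_resample', sFiles, [], ...
    'freq', 256);   % new sampling frequency in Hz
```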

We optimized the computation of the PSD from resting state recordings at the source level:
https://neuroimage.usc.edu/brainstorm/Tutorials/RestingOmega
Optimizations for other metrics are possible, but may require that you code them yourself.
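As a starting point, the PSD computation from that tutorial can be scripted roughly as follows. This is a sketch only: the option names are assumed from current tutorial scripts and may differ in your version (the safest route is to build the pipeline in the GUI and use the "Generate .m script" menu of the pipeline editor):

```matlab
% Welch PSD on kernel-based source files, without building the full matrix
sPsd = bst_process('CallProcess', 'process_psd', sFiles, [], ...
    'timewindow',  [], ...    % use the whole recording
    'win_length',  4, ...     % window length in seconds
    'win_overlap', 50);       % overlap between windows, in percent
```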

Remember that all the information is available in your EEG recordings: source estimation does not create any extra information, it just helps localize it spatially. Before trying to process everything in source space, make sure you can already observe what you expect at the sensor level.

Thank you, Francois.

Sorry for what is probably a naive question (this is my first time working with unconstrained source maps), but if my goal is to ultimately have a single time series from each of the 68 scouts of the DK atlas, do you see any way to reduce the data size somewhere in the pipeline?

I also downsampled my data to 256 Hz, and "Unconstrained to flat map" then worked (in a somewhat reasonable amount of time), but as you mentioned, this approach is difficult to justify.

Thank you again.

> Sorry for what is probably a naive question (this is my first time working with unconstrained source maps), but if my goal is to ultimately have a single time series from each of the 68 scouts of the DK atlas, do you see any way to reduce the data size somewhere in the pipeline?

Downsampling (time dimension), reducing the number of sources (= number of vertices).

To keep the full dimensions: compute the inverse kernel only, then write a Matlab script that loops over the scouts and computes your measure of interest on one scout at a time, without ever generating the full source matrix. This is not trivial; a rough sketch is below.
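A minimal sketch of such a loop, assuming a kernel-based result file. The function and field names (in_bst_results, in_bst_data, ImagingKernel, GoodChannel, F) follow Brainstorm's conventions, and 'sScouts' stands for scout definitions loaded from the atlas (an assumption for this sketch); check everything against your own files:

```matlab
% Load the imaging kernel only (second argument 0 = do not apply it)
sResults = in_bst_results(ResultsFile, 0);
% Load the corresponding sensor-level recordings
sData = in_bst_data(sResults.DataFile);
for iScout = 1:numel(sScouts)               % sScouts: scout definitions (assumed)
    iVert = sScouts(iScout).Vertices(:)';   % vertices of the current scout
    % Unconstrained sources: 3 interleaved rows (x,y,z) per vertex
    iRows = reshape([3*iVert-2; 3*iVert-1; 3*iVert], [], 1);
    % Multiply only the rows needed for this scout: (3*nVert) x nSamples
    scoutTs = sResults.ImagingKernel(iRows,:) * sData.F(sResults.GoodChannel,:);
    % ... compute and save your measure of interest on scoutTs here ...
end
```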

You could also compute the measure on the three dimensions independently, and then group them in some way at the end...
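For power-like measures, one possible grouping, sketched below, is to compute the measure per orientation and sum across the three dimensions. This reuses the hypothetical 'scoutTs' matrix from the loop above; 'fs' is the sampling rate, and bandpower requires the Signal Processing Toolbox:

```matlab
% Alpha-band power per orientation, then summed over the 3 dimensions
% (bandpower operates on columns, hence the transposes)
px = bandpower(scoutTs(1:3:end,:)', fs, [8 12]);  % 1st orientation
py = bandpower(scoutTs(2:3:end,:)', fs, [8 12]);  % 2nd orientation
pz = bandpower(scoutTs(3:3:end,:)', fs, [8 12]);  % 3rd orientation
p  = px + py + pz;   % orientation-invariant power, one value per vertex
```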