Could you advise me how to apply ICA to EEG sleep recordings correctly? The problem is that the recording sessions are several hours long (~6-8 hours), and when computing 64 ICA components I run into memory issues. The sleep stage analysis is already done, and the rest of the analysis will be based on the epochs. So I could divide the recording into shorter segments if needed.
1. No division of the data into segments: In the ICA process window, I could select the time window so that only one hour is processed at a time, then shift this one-hour window forward without overlaps or gaps in the data. By running the ICA process six times on different time windows, I could cover the whole six-hour EEG recording. Of course, I would get six different sets of projectors, and I would have to remember which projectors were calculated for the time window under investigation. Is this a bad idea?
2. Importing the data into the database in one-hour segments, then reviewing as raw and running ICA on the whole file. Is this a better option than idea 1? At least this method might be easier to use.
3. Some other approach?
The "time window" parameter in the ICA options defines the part of the data used for estimating the ICA components. If you decide to remove a component, this is then applied to all the recordings, including the parts that were not selected in the "time window".
As a consequence, if the noise patterns (i.e. the topographies of the artifacts) do not change much over the night, you can compute the ICs on just one segment that contains the artifacts you want to remove. A few tens of minutes could be enough; one hour would be plenty.
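Outside the GUI, this idea (estimate the unmixing on a short segment, then apply the cleaning to the whole recording) can be sketched with scikit-learn's FastICA on synthetic data. Everything here is illustrative: channel counts, durations, and the choice of which component is the "artifact" are placeholders, not your actual data.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_channels, sfreq = 8, 100                       # toy stand-ins for 64 channels
n_samples = sfreq * 600                          # 10-minute "recording"
full = rng.laplace(size=(n_channels, n_samples)) # non-Gaussian toy signals

# Fit ICA on a short segment only (first 2 minutes here)
segment = full[:, : sfreq * 120]
ica = FastICA(n_components=n_channels, random_state=0, max_iter=1000)
ica.fit(segment.T)                               # sklearn expects (samples, channels)

# Apply the unmixing learned on the segment to the WHOLE recording
sources = ica.transform(full.T)                  # (samples, components)
sources[:, 0] = 0.0                              # pretend component 0 is the artifact
cleaned = ica.inverse_transform(sources).T       # back to (channels, samples)
```

The key point mirrors the answer above: the decomposition is estimated on a cheap, short window, but the resulting projection can be applied to all six hours without ever holding the full-length ICA problem in memory.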
Try running the ICA analysis on different window lengths and on different parts of the recording, and observe whether the topographies are relatively stable. Your goal here is to obtain ICA components from a short time window that efficiently remove your artifacts over the entire recording session.
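One way to quantify "relatively stable topographies" is to fit ICA on two different windows and match components by the absolute correlation of their topographies (the columns of the mixing matrix). A toy sketch with scikit-learn's FastICA, assuming synthetic sources that share the same mixing in both windows:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_ch = 6
A = rng.standard_normal((n_ch, n_ch))            # shared (stable) mixing
S1 = rng.laplace(size=(n_ch, 5000))              # sources in window 1
S2 = rng.laplace(size=(n_ch, 5000))              # sources in window 2
X1, X2 = A @ S1, A @ S2

ica1 = FastICA(n_components=n_ch, random_state=0, max_iter=2000).fit(X1.T)
ica2 = FastICA(n_components=n_ch, random_state=0, max_iter=2000).fit(X2.T)

def norm_cols(M):
    """Scale each topography (column) to unit norm for fair comparison."""
    return M / np.linalg.norm(M, axis=0, keepdims=True)

# Pairwise |cosine similarity| between topographies of the two windows;
# sign and order of ICA components are arbitrary, hence abs() and max().
sim = np.abs(norm_cols(ica1.mixing_).T @ norm_cols(ica2.mixing_))
best_match = sim.max(axis=1)                     # near 1.0 => stable topography
```

If the best matches stay close to 1 across windows, a single short window is representative; if they drop, the noise patterns are drifting over the night.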
If this is not possible to achieve (e.g. the noise patterns change, or the quality of the data drops significantly), then the most reasonable approach would be to split the recording into smaller blocks that have more or less stable artifact topographies, and clean each of them separately.
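The block-wise fallback is just the short-window recipe in a loop: fit a fresh ICA per block, remove that block's artifact component, and concatenate the cleaned blocks. A minimal sketch, again with scikit-learn's FastICA on toy data (block length and the artifact component index are placeholders; in practice each block's artifact component must be identified by inspection):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_ch, sfreq = 8, 100
full = rng.laplace(size=(n_ch, sfreq * 900))     # 15-minute toy recording

block_len = sfreq * 300                          # 5-minute blocks
cleaned_blocks = []
for start in range(0, full.shape[1], block_len):
    block = full[:, start : start + block_len]
    ica = FastICA(n_components=n_ch, random_state=0, max_iter=1000).fit(block.T)
    src = ica.transform(block.T)
    src[:, 0] = 0.0                              # this block's artifact component
    cleaned_blocks.append(ica.inverse_transform(src).T)

cleaned = np.concatenate(cleaned_blocks, axis=1)
```

Each block's ICA only ever sees a few minutes of data, which also sidesteps the original memory problem with the 6-8 hour recording.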
Thank you once again! I believe this resolves our problem.