Hi Francois,
I have a similar problem with slow saves and have tried the recommended steps above. My protocol.mat is far too big (8.4 GB!), but I don’t have any head model problems as described in the previous post; I just have a lot of files.
I have defined 20 epoch types, and each subject has 4 runs. To compute a source model for each run separately, which I believe gives the most accurate source localization, I have further split the epochs within each session by run, so each session has 80 conditions (epoch type × run). In total:
90 sessions × 80 conditions, each holding 450-900 epochs, i.e. 7,200 conditions and roughly 3.2-6.5 million epoch files indexed in protocol.mat.
I am currently trying to run artefact rejection and then average the non-rejected epochs. Each step itself is quick, but afterwards Brainstorm spends a very long time saving the database. Once that is done I can delete or detach the individual epochs and work only with the averages, but in the meantime every save is impossibly long.
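For reference, this is roughly what I am running per subject, as a minimal sketch using the standard Brainstorm pipeline processes; the subject name and rejection thresholds are placeholders, not my actual values:

```matlab
% Sketch of my pipeline for one subject (Brainstorm already running,
% e.g. started with "brainstorm nogui"). Subject name and thresholds
% below are illustrative placeholders.

% Select all imported epochs for one subject
sFiles = bst_process('CallProcess', 'process_select_files_data', [], [], ...
    'subjectname', 'Subject01', ...  % placeholder subject name
    'condition',   '');              % empty = all conditions

% Step 1: detect bad trials with a peak-to-peak threshold
sFiles = bst_process('CallProcess', 'process_detectbad', sFiles, [], ...
    'timewindow', [], ...            % whole epoch
    'meggrad',    [0, 3000], ...     % fT/cm, illustrative threshold
    'megmag',     [0, 6000], ...     % fT, illustrative threshold
    'rejectmode', 2);                % 2 = reject the entire trial

% Step 2: average the remaining good trials, per trial group
sFiles = bst_process('CallProcess', 'process_average', sFiles, [], ...
    'avgtype',  5, ...               % 5 = by trial group (folder average)
    'avg_func', 1, ...               % 1 = arithmetic average
    'weighted', 0);
```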
Is there some way I can do these steps without saving the database until the end? Or do you have another suggestion?
Best regards,
Emily
P.S. Happy New Year!