Hi,
I have rather a lot of conditions and trials, and I have scripted a couple of functions to do event detection and averaging (relevant portions below). I have autosave commented out because I thought the bottleneck was reloading the protocol file, which is really huge. That seemed to help with some operations, but the runtime of these operations on a single subject's data has been creeping up from about 24 hours to 5 days. I checked that our server's memory load is not too high and that it is not swapping, so the problem seems to be in MATLAB/Brainstorm.
The "Running process: Detect bad channels: peak-to-peak" windows sometimes go pretty fast and sometimes seem frozen. No errors are reported. Is there some trick I'm missing? Should I be clearing MATLAB's memory or something before runs?
Best,
E
% Process: Detect bad trials: Peak-to-peak MEGGRAD(0-1000)
sFiles = bst_process('CallProcess', 'process_detectbad', epochnames, [], ...
    'timewindow', [], ...
    'meggrad',    [0, 1000], ...
    'megmag',     [0, 0], ...
    'eeg',        [0, 0], ...
    'eog',        [0, 0], ...
    'ecg',        [0, 0], ...
    'rejectmode', 2); % Reject the entire trial
% Process: Average: By trial group (folder average)
sFiles = bst_process('CallProcess', 'process_average', sFiles, [], ...
'avgtype', 5, ... % By trial group (folder average)
'avg_func', 1, ... % Arithmetic average: mean(x)
'weighted', 0, ...
'keepevents', 0);
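In case it matters, this is roughly what I was considering adding between runs to clear memory. I'm assuming from Brainstorm-generated tutorial scripts that bst_memory('UnloadAll', 'Forced') and db_save() are the right calls for this; please correct me if that's not the intended API:

```matlab
% Hypothetical cleanup between batches (assumption, not tested):
% bst_memory('UnloadAll', 'Forced') appears in Brainstorm-generated
% scripts; I'm assuming it unloads the datasets currently held in memory.
bst_memory('UnloadAll', 'Forced');   % unload all datasets from memory
db_save();                           % flush the protocol database to disk
```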