I'm working on resting-state MEG data. If I understand correctly, marking bad segments in this kind of data doesn't affect any subsequent analysis, not even the data covariance matrix.
It depends on the process: the PSD calculation excludes the bad segments, and so do the noise covariance computation and the SSP.
I'm not sure how you would use a data covariance for processing resting state data.
I'm going to extract scout time series and compute connectivity between them. I plan to use the extended events of the bad segments that I selected based on the sensor data, and write code to remove these time periods from the extracted scout time series before computing connectivity. Does that make sense, or is there another way to deal with bad segments?
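A minimal sketch of that masking step, assuming the scout time series are stored as a scouts-by-samples NumPy array (the function and variable names here are hypothetical, not Brainstorm API):

```python
import numpy as np

def remove_bad_segments(ts, times, bad_segments):
    """Drop all samples falling inside any bad [start, stop] interval.

    ts           : (n_scouts, n_samples) scout time series (assumed layout)
    times        : (n_samples,) time stamps in seconds
    bad_segments : list of (start, stop) pairs in seconds
    """
    good = np.ones(times.shape, dtype=bool)
    for start, stop in bad_segments:
        # mark every sample inside this bad interval as not good
        good &= ~((times >= start) & (times <= stop))
    return ts[:, good], times[good]

# Example: 3 scouts, 10 s of data at 100 Hz, one bad segment from 2 s to 3 s
times = np.arange(0, 10, 0.01)
ts = np.random.randn(3, times.size)
clean_ts, clean_times = remove_bad_segments(ts, times, [(2.0, 3.0)])
```

Note that this keeps the remaining samples as one array, so the connectivity code downstream sees a signal with a hidden discontinuity at each removed interval.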
You can import all your recordings in blocks of 1 s (use the "Split" option in the import options). Any 1-s segment that overlaps a bad segment is tagged as bad, so if you select all the imported trials in Process1 and run the process "Standardize > Concatenate time", it produces a new file containing all the blocks minus the bad ones.
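For scripting outside Brainstorm, the same split/tag/concatenate logic can be sketched in plain NumPy (the function name and signature are hypothetical, only meant to illustrate the procedure):

```python
import numpy as np

def concat_good_blocks(data, sfreq, bad_segments, block_dur=1.0):
    """Split data into fixed-duration blocks, drop any block overlapping
    a bad segment, and concatenate the rest.

    data         : (n_channels, n_samples) array
    sfreq        : sampling frequency in Hz
    bad_segments : list of (start, stop) pairs in seconds
    """
    n_block = int(round(block_dur * sfreq))
    n_full = data.shape[1] // n_block          # number of complete blocks
    good_blocks = []
    for i in range(n_full):
        t0, t1 = i * block_dur, (i + 1) * block_dur
        # any overlap with a bad interval -> the whole block is tagged bad
        if any(start < t1 and stop > t0 for start, stop in bad_segments):
            continue
        good_blocks.append(data[:, i * n_block:(i + 1) * n_block])
    return np.concatenate(good_blocks, axis=1)

# Example: 2 channels, 10 s at 100 Hz, bad segment from 2.5 s to 3.2 s
data = np.random.randn(2, 1000)
clean = concat_good_blocks(data, 100.0, [(2.5, 3.2)])
```

Here the bad segment straddles two 1-s blocks, so both blocks are dropped and 8 of the 10 blocks remain.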
The approach you describe (creating "good" segments, importing them and concatenating them) also works, but it is usually less practical to interactively select long good segments than short bad ones.
If you need this re-concatenated file to be handled by Brainstorm as a continuous file, right-click on it > Review as raw.
Note that this is not always a recommended procedure: cutting out bad segments introduces discontinuities in the signals, which can have unpredictable effects on sensitive connectivity measures.
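A quick way to see the issue, as an illustrative sketch: splicing two non-adjacent pieces of a smooth signal produces a step at the seam that is much larger than any genuine sample-to-sample change, and such a step has broadband spectral content that can contaminate spectral and connectivity estimates.

```python
import numpy as np

# 10 Hz sine sampled at 1 kHz for 2 s
t = np.arange(0, 2, 0.001)
x = np.sin(2 * np.pi * 10 * t)

# Cut out 0.23 s (samples 500-729) and splice the remainder together
seg = np.concatenate([x[0:500], x[730:1230]])

# Sample-to-sample differences: the largest one sits at the splice point
jump = np.abs(np.diff(seg))
```

In this toy case the splice creates a jump an order of magnitude larger than the smooth signal's own sample-to-sample step, which is exactly the kind of artifact the warning above refers to.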