I am analyzing a MEG dataset for which I need to compute the data covariance for the LCMV beamformer computation. The problem is that the inter-trial interval in my experiment is too short, so the baseline period of one trial is contaminated by the post-movement rebound of the previous trial. I am wondering what the impact of this contamination is on the baseline for the data covariance, and ultimately on the LCMV beamformer. I am currently using -2.5 s to 0 s as the baseline, but I could also use -0.5 s to 0 s to reduce the contamination from the previous trial. Please let me know if you have any suggestions for getting around this issue.

Or are you using the entire epoch to compute the data covariance?
If that is the case, I am not competent to explain the implications of this experimental design for the beamformer results. An alternative is to use a minimum-norm solution with a noise covariance estimated from empty-room recordings...
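For concreteness, the minimum-norm alternative mentioned above can be sketched in a few lines of NumPy. This is a generic sketch, not code from any particular toolbox: the function name, the identity source-covariance assumption, and the regularization parameter `lam` are all illustrative choices.

```python
import numpy as np

def minimum_norm_operator(L, noise_cov, lam=0.1):
    """Minimum-norm inverse operator, assuming an identity source covariance.

    L         : (n_channels, n_sources) leadfield / gain matrix
    noise_cov : (n_channels, n_channels) noise covariance, e.g. estimated
                from an empty-room recording
    lam       : regularization weight (illustrative value; tune per dataset)

    Returns the (n_sources, n_channels) operator W such that
    source estimates = W @ sensor_data.
    """
    G = L @ L.T + lam * noise_cov          # regularized sensor-space Gram matrix
    return L.T @ np.linalg.inv(G)
```

Note that, unlike the LCMV beamformer, this operator depends only on the leadfield and the noise covariance, so a short or contaminated baseline is not an issue here.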

The more specific the definition of baseline vs. data, the better your LCMV model. If you are concerned about possible contamination from the previous trial, you should shorten your noise baseline as much as possible and derive the covariance statistics across trials. Alternatively, you may assume an identity-matrix model for the noise covariance altogether.
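To illustrate "deriving the covariance statistics across trials": with a short baseline window, each trial contributes only a few samples, but pooling the demeaned baseline segments of all trials still yields a well-conditioned estimate. A minimal NumPy sketch (the function name and the `(n_trials, n_channels, n_times)` array layout are assumptions, not any toolbox's API):

```python
import numpy as np

def baseline_covariance(trials, times, tmin, tmax):
    """Sensor covariance pooled across trials, restricted to a time window.

    trials : (n_trials, n_channels, n_times) epoched data
    times  : (n_times,) time axis in seconds, 0 s = trial onset
    tmin, tmax : window bounds, e.g. -0.5 and 0.0 for a short baseline
    """
    mask = (times >= tmin) & (times <= tmax)
    seg = trials[:, :, mask]
    # Demean each trial and channel within the window before pooling,
    # so slow per-trial offsets do not inflate the covariance.
    seg = seg - seg.mean(axis=2, keepdims=True)
    n_trials, n_ch, n_t = seg.shape
    # Concatenate the baseline segments of all trials along time.
    X = seg.transpose(1, 0, 2).reshape(n_ch, n_trials * n_t)
    return X @ X.T / (n_trials * n_t - 1)
```

With, say, 100 trials and a 0.5 s window at 1 kHz, this pools 50,000 samples, which is usually plenty for a stable estimate even though each individual window is short.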

Thank you for your answers. I did not mention it in the main body of the text, but I calculated the noise covariance from a 2-min recording of room MEG activity acquired just before the study took place. For the data covariance, I used the whole trial, -2.5 s to 14 s, with a baseline of -2.5 s to 0 s (0 s marks the start of the trial).
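For reference, once the data covariance is estimated (from whichever window), it enters the standard unit-gain LCMV filter W = C⁻¹L(LᵀC⁻¹L)⁻¹. A short NumPy sketch with diagonal loading; the function name and the `reg` value are illustrative assumptions, and in practice one would use the corresponding routine of one's MEG toolbox:

```python
import numpy as np

def lcmv_filter(data_cov, L, reg=0.05):
    """Unit-gain LCMV spatial filter for one source location.

    data_cov : (n_channels, n_channels) data covariance
    L        : (n_channels, n_orient) leadfield of the source location
    reg      : diagonal loading as a fraction of mean sensor power
               (assumed value; typically tuned)
    """
    n = data_cov.shape[0]
    C = data_cov + reg * np.trace(data_cov) / n * np.eye(n)
    Cinv = np.linalg.inv(C)
    # W has shape (n_channels, n_orient); W.T @ L = I (unit-gain constraint).
    return Cinv @ L @ np.linalg.inv(L.T @ Cinv @ L)
```

The unit-gain constraint Wᵀ L = I holds regardless of how C was estimated, which is why baseline contamination shows up mainly through the noise-suppression term C⁻¹ rather than through the gain at the source itself.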

I was indeed thinking of shortening the baseline period as much as possible to reduce the contamination. I considered using -0.5 s to 0 s instead.