PAC calculation method

Hi

I wonder what method is used for calculating phase-amplitude coupling (PAC) in Brainstorm, e.g., Ozkurt, Canolty, etc.?

Also, when we run PAC on several files to get an average PAC plot, how does the averaging work? Is it like (1 + 2 + 3) / 3?

Thanks

Please read the online tutorials and get back to us if you still have questions:
https://neuroimage.usc.edu/brainstorm/Tutorials/TutPac
https://neuroimage.usc.edu/brainstorm/Tutorials/Resting

@Samiee

Thanks Francois.

I am using PAC estimation with MVL. When I try to calculate PAC with Python code (Ozkurt or Canolty), I get PAC plots that are very different from the Brainstorm ones. I wonder what I am doing differently in Python.

Below is the code I used.

@Samiee @Sylvain @eflorin

Individual PAC plots

```python
import mne
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pactools import Comodulogram, MaskIterator, raw_to_mask


def get_comod_vals(file_name, channel_name, title=''):
    edf_file = mne.io.read_raw_edf(file_name, preload=True, verbose=0)
    # drop the first four characters of each channel name (a fixed prefix)
    edf_file.rename_channels({x: x[4:] for x in edf_file.ch_names})
    edf_file.set_montage(mne.channels.make_standard_montage('standard_1020'))

    length_of_edf = edf_file.n_times / edf_file.info['sfreq']

    # mark consecutive 240 s segments
    times = np.arange(0, length_of_edf, 240)
    durations = [240] * len(times)
    descriptions = [0] * len(times)

    my_annot = mne.Annotations(onset=times, duration=durations,
                               description=descriptions)
    edf_file = edf_file.set_annotations(my_annot)
    events, event_id = mne.events_from_annotations(edf_file)

    e = mne.Epochs(edf_file, events, tmin=0, tmax=240, baseline=None, verbose=0)

    ix = e.info['ch_names'].index(channel_name)
    ixs = (ix, ix)
    all_events = e.events
    low_sig, high_sig, mask = raw_to_mask(edf_file, ixs=ixs, events=all_events,
                                          tmin=0, tmax=240)
    # NOTE: the next two lines overwrite the raw_to_mask() output with the
    # continuous signal and a mask covering only -2..+2 s around each event
    low_sig, high_sig = edf_file[ixs[0], :][0], edf_file[ixs[1], :][0]
    mask = MaskIterator(all_events, -2, 2, edf_file.n_times,
                        edf_file.info['sfreq'])

    # create the instance of Comodulogram
    estimator = Comodulogram(fs=edf_file.info['sfreq'],
                             low_fq_range=np.linspace(4, 8, 4),
                             high_fq_range=np.linspace(65, 100, 8),
                             low_fq_width=2., method='ozkurt',
                             progress_bar=False)

    # compute the comodulogram
    estimator.fit(low_sig, high_sig, mask)
    if title != '':
        # save graph
        estimator.plot(vmax=0.2, vmin=0, cmap='jet')
        plt.title(title)
        plt.savefig(title.replace(' ', '') + '.png')
        plt.show()

        # save comod values
        pd.DataFrame(np.flip(estimator.comod_[0], axis=1).transpose()
                     ).to_csv(title.replace(' ', '') + '.csv')

    return np.flip(estimator.comod_[0], axis=1).transpose()
```

Average pre/post plots per subject

```python
channel_name = 'C3'

for file_suffix in ['_pre.edf', '_post.edf']:

    print(f'Running {file_suffix[1:-4]}')
    comod_vals = []

    for subj_name in ['sb1', 'sb2', 'sb3', 'sb4', 'sb5', 'sb6',
                      'sb7', 'sb8', 'sb9', 'sb10', 'sb11']:
        file_name = subj_name + file_suffix
        comod_vals.append(get_comod_vals(file_name, channel_name=channel_name))

    # visualization of average values across the 11 subjects
    averaged_values = (comod_vals[0] + comod_vals[1] + comod_vals[2]
                       + comod_vals[3] + comod_vals[4] + comod_vals[5]
                       + comod_vals[6] + comod_vals[7] + comod_vals[8]
                       + comod_vals[9] + comod_vals[10]) / 11

    fig, ax = plt.subplots(figsize=(5, 3))
    title = f'Averaged values {file_suffix[1:-4]} channel {channel_name}'
    ax.set_title(title)
    i = ax.imshow(averaged_values, cmap='jet', vmax=0.02, vmin=0,
                  aspect='auto', extent=[4, 8, 65, 100])
    fig.colorbar(i, ax=ax)
    plt.savefig(title.replace(' ', '') + '.png')  # save before show() clears the figure
    plt.show()
    pd.DataFrame(averaged_values).to_csv(title.replace(' ', '') + '.csv')

    print('plotting z-scores')
    return_z_scores_of_comod(averaged_values, True)
```
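Incidentally, the long element-wise sum over the eleven subject arrays can be written more compactly with NumPy; the sketch below uses toy arrays as a stand-in for the per-subject comodulograms:

```python
import numpy as np

# Toy stand-ins for the 11 per-subject comodulogram arrays
comod_vals = [np.arange(6.0).reshape(2, 3) + k for k in range(11)]

# Stack into one (11, n_low, n_high) array and average element-wise;
# equivalent to (comod_vals[0] + ... + comod_vals[10]) / 11
averaged_values = np.mean(np.stack(comod_vals), axis=0)
```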

Hi Niko,

The code used for the MVL PAC estimation comes from Canolty's original paper (it was shared by that group and imported into Brainstorm). The results should not differ when the method is implemented on another platform. The only potential source of difference I can think of is the choice of filter bank!
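For reference, the Canolty-style mean vector length and the Ozkurt normalization can be sketched in a few lines. This is a minimal illustration only: the fourth-order Butterworth filters and the function name `mvl_pac` are my own assumptions, not Brainstorm's actual filter bank, which is exactly where implementations can diverge.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert


def mvl_pac(sig, fs, low_band, high_band):
    """Mean vector length PAC for one signal (illustrative sketch).

    Returns both the raw MVL (Canolty et al., 2006) and the
    amplitude-normalized MVL (Ozkurt & Schnitzler, 2011).
    """
    def bandpass(x, band):
        # Assumed filter: 4th-order Butterworth band-pass (zero-phase)
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
        return filtfilt(b, a, x)

    phase = np.angle(hilbert(bandpass(sig, low_band)))   # low-frequency phase
    amp = np.abs(hilbert(bandpass(sig, high_band)))      # high-frequency envelope

    # Canolty MVL: magnitude of the mean composite vector a(t) * exp(i*phi(t))
    mvl = np.abs(np.mean(amp * np.exp(1j * phase)))

    # Ozkurt normalization: bounded in [0, 1] by the Cauchy-Schwarz inequality
    mvl_norm = np.abs(np.sum(amp * np.exp(1j * phase))) / (
        np.sqrt(len(amp)) * np.sqrt(np.sum(amp ** 2)))

    return mvl, mvl_norm
```

With the same phase and amplitude series, any remaining difference between two implementations then comes down to how each one band-pass filters the data before the Hilbert transform.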

Thanks Samiee. By "filter bank", do you mean the frequency bands I chose for the lower and higher frequencies?