EGI GSN_Hydrocel_64 channel locations and Error during grand average

I imported my cleaned, averaged ERP data from EEGLAB into Brainstorm in order to run some source localization analyses.

We recommend importing the individual epochs, and computing the average in Brainstorm.
This would allow you to compute a much better noise covariance matrix:
https://neuroimage.usc.edu/brainstorm/Tutorials/NoiseCovariance#Variations_on_how_to_estimate_sample_noise_covariance
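For reference, a minimal scripting sketch of that workflow is below. The process names (process_average, process_noisecov) are actual Brainstorm processes, but the option values shown are illustrative assumptions; the safest way to get the exact call for your version is to build the pipeline in the GUI and let the Pipeline editor generate the .m script.

    % Minimal sketch, assuming sFiles holds the imported single epochs
    % (e.g. as returned by a previous bst_process call).

    % Average the epochs (option values are illustrative; check your version):
    sAvg = bst_process('CallProcess', 'process_average', sFiles, [], ...
        'avgtype',  5, ...   % e.g. 5 = by trial group (folder average)
        'avg_func', 1, ...   % 1 = arithmetic average: mean(x)
        'weighted', 0);

    % Noise covariance from the pre-stimulus baseline of the single epochs:
    bst_process('CallProcess', 'process_noisecov', sFiles, [], ...
        'baseline', [-0.100, 0], ...  % adjust to your epoch definition
        'dcoffset', 1);               % remove the DC offset of each epoch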

My data was already average-referenced during pre-processing, before import, which means that it has an additional channel for Cz, turning the 64-channel ERP data into 65-channel ERP data.

Is the data at Cz the average of the original signals?
I'm not sure it is necessary to include this extra signal for the inverse modeling. This is mostly useful to revert the transformation to the original reference. You could just ignore it.
But I'd like to have a confirmation from @John_Mosher on this point.
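To make this concrete, here is a tiny numerical sketch (not your actual pipeline) of what average referencing does to the former reference channel:

    % Data recorded against Cz: the Cz channel itself is flat zero.
    nTime = 100;
    x = [randn(64, nTime); zeros(1, nTime)];   % 64 scalp channels + Cz = 0
    xAvgRef = x - mean(x, 1);                  % re-reference to the common average
    % The new Cz trace is exactly minus the mean of the original channels:
    max(abs(xAvgRef(end, :) + mean(x, 1)))     % returns 0

So the extra Cz signal carries no new information: it is just minus the common average, which is fully determined by the other 64 channels.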

However, since I don't know the exact coordinates of Cz in this specific head model, my additions are approximate, which is probably not a very good solution.

You are working with a template anatomy and standard electrode positions. This is probably already quite far from the actual anatomy of your subjects, and from the real positions of the elastic EGI nets on their heads... A few millimeters of error in your approximation of the position of the reference at Cz is negligible compared with your other approximations.
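If you decide to keep the channel anyway, here is a hypothetical sketch of what an approximate Cz entry could look like in a Brainstorm channel structure; the file path and the vertex coordinates below are placeholders, not real digitized positions:

    % Hypothetical path: point this at the channel file of your study.
    ChannelFile = file_fullpath('Subject01/ERP/channel.mat');
    ChannelMat  = load(ChannelFile);
    iCz = numel(ChannelMat.Channel) + 1;
    ChannelMat.Channel(iCz).Name    = 'Cz';
    ChannelMat.Channel(iCz).Type    = 'EEG';
    ChannelMat.Channel(iCz).Loc     = [0; 0; 0.09];  % rough vertex, in meters (SCS): approximate!
    ChannelMat.Channel(iCz).Orient  = [];
    ChannelMat.Channel(iCz).Weight  = 1;
    ChannelMat.Channel(iCz).Comment = 'Former reference (approximate location)';
    save(ChannelFile, '-struct', 'ChannelMat');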

This is the error I get (any of the averaging options gives me the same error: by trial, by subject, or by folder):

What is this "process_average 2.m"?
If you want to create new processes yourself, do not use spaces in the filenames.
https://neuroimage.usc.edu/brainstorm/Tutorials/TutUserProcess#Create_your_own_process
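As a reminder of the expected layout, a minimal skeleton of a user process file is shown below, following the tutorial above (the comment strings and index value are illustrative); note that the filename contains no spaces:

    % File: process_average2.m   (no spaces in the filename)
    function varargout = process_average2(varargin)
        eval(macro_method);
    end

    % ===== PROCESS DESCRIPTION =====
    function sProcess = GetDescription() %#ok<DEFNU>
        sProcess.Comment     = 'Average 2';   % label shown in the process list
        sProcess.Category    = 'Custom';
        sProcess.SubGroup    = 'Examples';
        sProcess.Index       = 1000;
        sProcess.InputTypes  = {'data'};
        sProcess.OutputTypes = {'data'};
        sProcess.nInputs     = 1;
        sProcess.nMinFiles   = 1;
    end

    % ===== RUN =====
    function OutputFiles = Run(sProcess, sInputs) %#ok<DEFNU>
        OutputFiles = {};   % implement the actual computation here
    end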

The problem is in the History field of your input files.
To see what's in there: right-click on the file > File > Export to Matlab > DataMat, then type DataMat.History in the Matlab command window.
Please share one of the files if you want advice on what the problem might be.
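For reference, the History field is a cell array with one row per operation ({timestamp, event, description}), so after exporting you can print it row by row to spot the malformed entry:

    % After File > Export to Matlab > "DataMat":
    for i = 1:size(DataMat.History, 1)
        fprintf('%s | %s | %s\n', DataMat.History{i,:});
    end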

I added an additional error capture so that this kind of error is not blocking anymore.
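Purely for illustration (this is a sketch, not the actual commit), such a guard amounts to wrapping the history-copying step in try/catch; copy_history below is a hypothetical stand-in for the step that failed:

    try
        % copy_history is a hypothetical placeholder for the step that
        % previously crashed on malformed History entries.
        OutputMat = copy_history(OutputMat, DataMat.History);
    catch e
        % Report the problem, but let the averaging continue.
        warning('Could not copy the History field: %s', e.message);
    end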