Checking to see if my script for extracting EEG source data is correct

Most of it looks good.
A few comments below.

I don't understand this part. If you do not have events in your 3x 1-min continuous recordings, import them without selecting any events. Then move the files around the way you want (create folders, rename...), organizing them in whatever way suits your analysis. Since you used "No, use one channel file per acquisition run": if you are using multiple subfolders, you would need to copy-paste all the information needed for source estimation (channel file with correct positions, forward model, inverse model) into all the folders. If you don't really need multiple subfolders, it might be easier to keep all the blocks of recordings in the same folder.
But these are only comfort considerations, you probably figured out something that works for you, no need to change anything.

(Sensors/anatomy registration) refine registration now? YES

If you are using an anatomy template, you should not try to refine the registration between the electrode positions (coming from an EEGLAB template?) and the head surface in the Brainstorm database. The two surfaces don't match, and using a rigid Iterative Closest Point (ICP) algorithm to align them WILL fail.

You should answer "no" to this question. Then place the electrodes where you want on the head (right-click on the channel file > MRI registration > Edit, Rotate/Translate/Resize/Project), or use the template positions available for the ICBM152 anatomy (right-click on the channel file > Add EEG positions > ICBM152 > ...).
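For intuition only: the manual Rotate/Translate/Project operations above amount to applying a rigid transform to the electrode positions and then snapping each one to the nearest scalp vertex. A minimal numpy/scipy sketch with toy coordinates (the function name and data are made up, not Brainstorm code):

```python
import numpy as np
from scipy.spatial import cKDTree

def project_electrodes(elec_xyz, scalp_vertices, R=np.eye(3), t=np.zeros(3)):
    """Apply a rigid transform (rotation R, translation t) to electrode
    positions, then snap each one to the nearest scalp vertex."""
    moved = elec_xyz @ R.T + t
    tree = cKDTree(scalp_vertices)
    _, idx = tree.query(moved)           # index of nearest scalp vertex
    return scalp_vertices[idx]

# Toy example: a unit-sphere "scalp" and two electrodes floating off it
rng = np.random.default_rng(0)
scalp = rng.normal(size=(2000, 3))
scalp /= np.linalg.norm(scalp, axis=1, keepdims=True)   # radius-1 head
elec = np.array([[0.0, 0.0, 1.1], [1.2, 0.0, 0.0]])
proj = project_electrodes(elec, scalp)
print(np.linalg.norm(proj, axis=1))      # all ~1.0: electrodes now on the scalp
```

This is why refining with ICP against a template head is risky: the algorithm will happily converge to a rigid fit between two surfaces that simply don't have the same shape.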

If you are using the default head model, would you ever need to change the fiducials?

No, this is irrelevant for EEG most of the time.
Instead, adjust the electrode positions manually on the surface of the head.

I click on the dataset folder and click --> Noise covariance --> No noise modeling, since all of my imported EEG is meaningful data

Don't you have a pre-stim baseline?
You'd get much more interesting information by contrasting the "during stim" and "post stim" recordings vs. the "pre stim" recordings.
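For reference, a noise covariance estimated from the pre-stim baseline is just the channel covariance of the baseline samples pooled across epochs. A minimal numpy sketch on fake data (the function name, epoch layout and numbers are assumptions for illustration, not Brainstorm's implementation):

```python
import numpy as np

def baseline_noise_cov(epochs, t0):
    """Estimate noise covariance from the pre-stim baseline of each epoch.

    epochs : array (n_epochs, n_channels, n_times), stim onset at sample t0.
    Returns the (n_channels, n_channels) covariance of the baseline samples.
    """
    baseline = epochs[:, :, :t0]                                # pre-stim only
    baseline = baseline - baseline.mean(axis=2, keepdims=True)  # demean per epoch
    n_epochs, n_chan, n_samp = baseline.shape
    x = baseline.transpose(1, 0, 2).reshape(n_chan, -1)  # stack epochs in time
    return (x @ x.T) / (x.shape[1] - 1)

# Toy data: 10 epochs, 4 channels, 1 s at 100 Hz, stim onset at sample 30
rng = np.random.default_rng(1)
epochs = rng.normal(size=(10, 4, 100))
cov = baseline_noise_cov(epochs, t0=30)
print(cov.shape)  # (4, 4)
```

With "No noise modeling" you effectively replace this matrix with an identity matrix, which is why a real pre-stim baseline, if you have one, is worth using.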

min norm imaging and sLORETA with no noise data covariance?

There is no clear contraindication, but you might get weird central localizations.
Note that sLORETA is popular in EEG largely because of the separate sLORETA implementation by R. Pascual-Marqui. Among the Brainstorm and MNE-Python developers, we are more familiar with dSPM.
The two might look very different for a single file, but ideally, after doing all your group statistics (contrasts between groups or between experimental conditions), you would obtain similar output with both.

b. Pipeline editor process section pops up --> extract

If you are only computing scout time series, prefer using the process "Extract scouts time series" instead.

Question: If I'm using one continuous file, I don't understand the difference between concatenate signals (dimension 1) vs. concatenate time (dimension 2). Both options give me the same values.

This is relevant only if you have multiple files: it defines how the scout time series extracted from each file are concatenated together (stacked as extra signals, or appended in time).
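The two options can be illustrated with numpy on two made-up "files" of scout time series (3 scouts x 5 samples each):

```python
import numpy as np

# Two files, each with 3 scouts x 5 time samples
file_a = np.arange(15).reshape(3, 5)
file_b = np.arange(15, 30).reshape(3, 5)

# "Concatenate signals (dimension 1)": stack the scouts -> 6 signals, 5 samples
sig = np.concatenate([file_a, file_b], axis=0)
# "Concatenate time (dimension 2)": append in time -> 3 signals, 10 samples
tim = np.concatenate([file_a, file_b], axis=1)
print(sig.shape, tim.shape)  # (6, 5) (3, 10)

# With a single file there is nothing to concatenate, so both options
# return the same 3 x 5 matrix, which is why you see identical values.
```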

d. After I extract my values, I do further analysis in MATLAB and treat this source data exactly the same way as EEG data when applying my MATLAB scripts for different analyses (wavelet transformation, power analysis, partial directed coherence, etc.)

Sounds good.
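That approach is fine: once exported, scout time series are just signals x time matrices. As one hedged example (in Python rather than MATLAB, with made-up data and parameters), a Welch power analysis on scout time series looks exactly like one on channel-level EEG:

```python
import numpy as np
from scipy.signal import welch

# Hypothetical scout time series: 3 scouts, 10 s at 250 Hz
sfreq = 250.0
t = np.arange(int(10 * sfreq)) / sfreq
rng = np.random.default_rng(2)
scouts = np.vstack([np.sin(2 * np.pi * f * t) for f in (6.0, 10.0, 20.0)])
scouts += 0.1 * rng.normal(size=scouts.shape)   # add a little noise

# Welch PSD per scout, along the time axis, as for any EEG channel matrix
freqs, psd = welch(scouts, fs=sfreq, nperseg=512)
peak = freqs[np.argmax(psd, axis=1)]
print(peak)  # spectral peaks near 6, 10 and 20 Hz
```

The only caveat is to keep track of the sampling rate and units after export, since minimum-norm source values are not in the same units as the sensor data.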