It would be interesting to implement a way to get access to the Eyelink files in Brainstorm. It’s a widely used system and having its output in Brainstorm could be useful for various purposes.
However I have very limited resources at the moment to deal with this, so I would need help from you or your collaborators to implement this.
There are a few issues to deal with first, starting from legal considerations. To include anything in Brainstorm, we need it to be compatible with a GPL license.
Sounds good, we are of course happy to help in any way we can!
This is not an open file format, and the EDF API doesn’t look like open software.
We probably don’t have the right to distribute this (neither as part of the Brainstorm distribution, nor as a .zip file posted on this forum).
Could you contact the company and ask them what the actual license is for this library? What are we allowed to do with it, are we allowed to redistribute it, and if not what is the correct distribution path for it?
The license for the edfImport library is not explicit either, so we need to ask the author the same questions: license and redistribution rights.
It looks like the code comes from a German university, so there is a higher chance to have this solved easily.
We can note that the author didn’t include the “edfapi.dll” in the package, so maybe he doesn’t have the right to redistribute it.
I will contact them and let you know of the answer.
3) Once the integration and distribution questions are solved, we have to deal with the technical ones.
Using the call “edfImport('file.edf', [1 1 1])” returns tons of data. How would you like this to be available in Brainstorm?
- Would you like all the values to be accessible as continuous signals?
- If this is the case, there might be sampling issues: is the time sampled regularly? Brainstorm cannot handle irregular time sampling or recordings with gaps.
- What is the difference between “Events” and “Samples”? (Sorry, I’ve never used an eye-tracking device myself, so I’ll need a lot of guidance.)
- What is the meaning of all these fields?
- In the end, what is the final goal of having this data available in Brainstorm? Can you provide examples?
Running edfImport, you will get two variables: “Preamble”, which contains a bit of text with info about the date and hardware, and “Trials”, which is a (1×X) struct, where X is the number of “trials” we recorded. The definition of such a trial is up to the user who records the eye-tracker data. For example, we start the EyeLink “trial” with each recording session and stop it when we also stop the EEG recording; thus, an EyeLink “trial” for us is around 10-15 minutes of recording and therefore does not correspond to the trials of the actual EEG experiment.
The “Trials” struct includes 3 different fields:
[li]“Header”, with some general info and also start and end times
[/li][li]“Samples”. All the raw, continuous data (sampled regularly) is saved here, including gaze position, pupil size, and a lot of other info (see the data field description link below)
[/li][li]“Events”. No continuous data is stored here. Instead, data is only stored at certain events, for example blinks, saccades, fixations or, most importantly, message triggers sent by the user. In Trials(X).Events.message, the event names sent by the user are saved. These are the same as the event triggers in the EEG and can be used to synchronize the data.[/li]
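As a rough MATLAB sketch of browsing this structure (the exact edfImport output arguments and field layout may differ from this guess):

```matlab
% Import an EDF file with all data types enabled (see the call above).
Trials = edfImport('file.edf', [1 1 1]);

% One struct entry per EyeLink "trial"; field names as described above.
for k = 1:numel(Trials)
    hdr = Trials(k).Header;    % general info, start and end time
    smp = Trials(k).Samples;   % continuous, regularly sampled gaze/pupil data
    evt = Trials(k).Events;    % discrete events: blinks, saccades, messages
end
```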
You can run the function “edfExtractInterestingEvents” from the “edfImport” library as follows:
Trials = edfExtractInterestingEvents(Trials, '^TRIALID')
Then you will get additional fields for fixation, blink and saccade events, plus a field for button presses, which in our case would be empty because we did not use that feature. With “edfExtractMicrosaccades”, you can also extract microsaccades, if the data was recorded binocularly.
I think the Events and Blinks fields are the easiest to start with. After synchronizing it with the EEG, the user could very easily exclude trials with blinks or too many eye movements.
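For example, a blink-based rejection could look roughly like this in MATLAB (the “Blinks” field layout is assumed from the description above and may differ in practice):

```matlab
% Add fixation/saccade/blink event fields (as described above).
Trials = edfExtractInterestingEvents(Trials, '^TRIALID');

% Keep only EyeLink trials without any detected blink.
% (Assumes an empty "Blinks" field means no blinks; actual layout may differ.)
hasBlink = arrayfun(@(t) ~isempty(t.Blinks), Trials);
cleanTrials = Trials(~hasBlink);
```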
[B]Description of data fields: http://download.sr-support.com/dispdoc/page9.html [/B]
… I don’t know if we actually need all those fields.
At the moment, we have two main goals in using the eye-tracker data:
[li]Discarding trials with blinks or too much eye movement
[/li][li]Epoching EEG data based on saccade and microsaccade events and looking for possible neuronal sources[/li]
One could also, for example, check whether a subject actually fixated on the spot (s)he should have, and if not, exclude the trial. Or examine attention-based neuronal effects by extracting from the eye movements what attracted the subject’s attention, for example when different stimuli are presented at the same time.
Looking at continuous eye movements is of course also interesting, but I think the sorting out of ocular artifacts is the most important thing for the beginning.
4) The last problem will be the synchronization with MEG/EEG data. If you want the eye tracker data displayed simultaneously with other signals coming from other acquisition devices, we need to solve two issues:
- the sampling rate (Brainstorm can display signals simultaneously only if they have exactly the same time definition)
- the synchronization itself (how do you match the start points after resampling)
=> This will require the development of a separate process, which might be a bit difficult to write properly (but doable with some resource).
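To give an idea of the first issue, a crude MATLAB sketch that resamples an eye-tracker channel onto an EEG-rate time grid by linear interpolation (the field names here are guesses for illustration, not the actual edfImport fields):

```matlab
% Sketch: bring one eye-tracker channel onto an EEG time grid via interp1.
% Variable and field names below are illustrative assumptions only.
fsEeg = 1000;                          % EEG sampling rate in Hz (example value)
tEye  = Trials(1).Samples.time;        % EyeLink sample timestamps, ms (assumed field)
gazeX = Trials(1).Samples.gx;          % horizontal gaze position (assumed field)

tEeg    = tEye(1) : 1000/fsEeg : tEye(end);   % EEG-rate time grid, in ms
gazeXrs = interp1(double(tEye), double(gazeX), tEeg, 'linear');
```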
Yes, I also think that might need quite some programming to get this done properly.
The sampling rate is normally 500 Hz for binocular recording and 1000 Hz for monocular recording.
For the synchronization itself, we send certain trigger events in each trial, at the same time, to the eye tracker and to the EEG amplifier. These triggers are text strings, and we usually use numbers to code for different events. So this should be doable somehow, using the fields “Trials(X).Events.message” and “Trials(X).Events.sttime”.
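Based on that, the matching could start from something like this sketch (assuming “Events.message” is a cell array of strings and “Events.sttime” a matching timestamp vector; the real layout may differ):

```matlab
% Sketch: collect user-sent trigger messages and their EyeLink timestamps,
% to be matched against the same trigger codes in the EEG recording.
for k = 1:numel(Trials)
    msgs  = Trials(k).Events.message;   % user-sent text triggers (assumed cell array)
    times = Trials(k).Events.sttime;    % corresponding timestamps, ms (assumed vector)
    keep  = ~cellfun(@isempty, msgs);   % drop events that carry no message
    trigCodes = msgs(keep);
    trigTimes = times(keep);
    % trigCodes/trigTimes can then be matched against the same trigger codes
    % on the EEG trigger channel to estimate the clock offset between devices.
end
```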
I think all this is doable, and will happen a lot faster if you are willing to contribute to the development.
We are definitely willing to contribute - let us know how we can help!