Error opening nwb files

I am a new user, and I am having problems opening NWB files.
I followed the tutorial to install the latest version of Brainstorm with Matlab (R2020a). In a new protocol, I go to Functional data > Review raw file, choose the correct file type, and select the file. The first time, Brainstorm asked to install the NWB SDK; I then restarted Matlab and ran Brainstorm. I get this:

Installing NWB library...
Error using generateExtension (line 23)
Path to file schema/core/nwb.namespace.yaml could not be found.

Error in generateCore (line 37)

Error in brainstorm (line 138)

If I try to run brainstorm again it skips this installation and opens, but when trying to open the file I get the error:

** Error: Line 10: Unrecognized function or variable 'Schema'.
** Call stack:
** >loadSchemaObject.m at 10
** >generate.m at 6
** >nwbRead.m>generateSpec at 84
** >nwbRead.m>checkEmbeddedSpec at 44
** >nwbRead.m at 27
** >in_fopen_nwb.m at 53
** >in_fopen.m at 151
** >import_raw.m at 126
** >bst_call.m at 28
** >tree_callbacks.m>@(h,ev)bst_call(@import_raw) at 518

The file I am trying to open is part of a public dataset on DANDI: an intracellular electrophysiology recording.

Any help would be greatly appreciated!


Hi Irene,

Welcome to the Brainstorm forum. It seems there is a mismatch between the install scripts and the files that are downloaded. I could reproduce the error, and I found the following:

The schema folder does not exist; it is called nwb-schema in the downloaded files. After renaming that folder back to schema, the installation gets a bit further, but then it doesn't find a cache for hdmf and errors on that. I couldn't easily debug that part.
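For reference, the rename workaround can be sketched as a shell snippet. The path here is purely illustrative: a temporary directory stands in for wherever Brainstorm actually unpacked the NWB library, so adjust it to your installation.

```shell
# Illustrative workaround: the downloaded archive unpacks to "nwb-schema",
# but generateExtension looks for a folder named "schema".
# A temp directory stands in for the real Brainstorm NWB install path.
NWB_DIR=$(mktemp -d)
mkdir -p "$NWB_DIR/nwb-schema/core"              # simulate the downloaded layout
if [ -d "$NWB_DIR/nwb-schema" ] && [ ! -d "$NWB_DIR/schema" ]; then
    mv "$NWB_DIR/nwb-schema" "$NWB_DIR/schema"   # rename back to "schema"
fi
```

This only addresses the first error; the missing hdmf cache mentioned above still has to be fixed on the Brainstorm side.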

I'm not a brainstorm developer, though @Francois can probably help with this.

Kind regards,


Thank you, Steven, for reproducing the error!
Later in the day I also realized that the first error could be solved by renaming the nwb-schema folder, but I then got other error messages that I couldn't solve.

In case it is helpful for other users or developers, these are the errors I get now:

Error using schemes.loadNamespace (line 7)
No cache found for namespace hdmf-common. Please generate any missing dependencies
before generating this namespace.

Error in schemes.loadNamespace (line 14)
ancestry(i) = schemes.loadNamespace(ancestorName);

Error in file.writeNamespace (line 3)
Namespace = schemes.loadNamespace(namespaceName);

Error in generateExtension (line 30)

Error in generateCore (line 37)

Error in brainstorm (line 138)


I indeed also got this error. I saw that a folder hdmf, containing some namespace.yml file, is created in this nwb-schema folder, but I couldn't trace all the paths that were called. Hopefully this helps the developers.

The problem is from the installation/configuration on the Brainstorm side. I need to fix a few things, including a solution for managing versions... I will fix this soon, but need some example files to test the reader.

Could you please share a short NWB example file? Upload the file somewhere (dropbox, google drive...) and post the download link here.
Or are there some public example files available somewhere?


Hi Francois,
Thank you!
Unfortunately I haven't produced any nwb file myself yet, but I was practicing with these open-access files:


I updated the NWB downloader, and it will automatically update the NWB library to the version currently supported by Brainstorm.

However, I can't get the function nwbRead() to read the example files you suggested...
I set Brainstorm to use the latest stable release of the matnwb library, and if I try to use it to read these NWB files, I get an error. If I use the current master branch, I get a different error...
I'm not sure what to do with that.

Are you familiar with this file format?
Can you read them with other programs in Matlab? Using this matnwb library?

@mpompolas Could you help us with this?

K>> nwb2 = nwbRead('sub-626194774_ses-637919731_icephys.nwb')
Dot indexing is not supported for variables of this type.

Error in file.fillExport>traverseRaw (line 120)
        attrmatch = strcmp({}, propname);

Error in file.fillExport>traverseRaw (line 111)
                    res = traverseRaw(suffix, raw.datasets(i));

Error in file.fillExport (line 23)
    pathProps = traverseRaw(pnm, raw);

Error in file.fillClass (line 93)
exporterFcns = file.fillExport(nonInherited, class, depnm);

Error in file.writeNamespace (line 22)
        fwrite(fid, file.fillClass(className, Namespace, processed, ...

Error in nwbRead>tryWriteSpec (line 103)

Error in nwbRead>generateSpec (line 91)
    if ~tryWriteSpec(name)

Error in nwbRead>checkEmbeddedSpec (line 44)
    generateSpec(fid, h5info(filename, specLocation));

Error in nwbRead (line 27)
    specLocation = checkEmbeddedSpec(filename);

@mpompolas @MartinC
I discovered that I need to update the function in_tess_nwb.m in a similar way.
How does this file get used?

There was a change last year in NWB, and as far as I know they now use version control for reading/writing NWB files. This means that if a file was created with a specific version, you have to use "generateCore" from that same version.
The only solution I can think of is to keep a cache of all versions and call the appropriate one depending on the file that needs to be loaded.
So supporting just a single version would unfortunately not be a solution.
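A rough sketch of what that per-version dispatch could look like in MATLAB. Note that `util.getSchemaVersion` ships with recent matnwb, but the cache directory layout and the dispatch logic here are hypothetical, not existing Brainstorm code:

```matlab
% Sketch only: dispatch class generation based on the schema version
% embedded in the file. The 'nwb-cache' layout is an assumption.
schemaVersion = util.getSchemaVersion('example.nwb');   % e.g. '2.2.5'
cacheDir = fullfile('nwb-cache', schemaVersion);        % hypothetical per-version cache
if ~exist(cacheDir, 'dir')
    mkdir(cacheDir);
    generateCore('savedir', cacheDir);  % generate classes for this schema version
end
addpath(cacheDir);
nwb = nwbRead('example.nwb');
```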

Here are the versions:

I can take a look at it if you still have trouble the week after next.


Thank you for your time.
I am not familiar with this format; I am trying to learn it now.
I tried to open the files with Brainstorm and with Matlab using matnwb, but in that case too I had errors with generateCore and then with nwbRead.

I have opened a thread on the Neurodata Without Borders Slack, and it looks like there is some problem with matnwb.
I am not sure if you can see the thread via this link
Anyway, the matnwb package has been updated (click on "Code > Download ZIP" to get it), but the nwbRead function still does not work properly. I think this is why you are getting two different errors with the master branch and with v2.2.5.0.

In the meantime, I managed to open the files with Python, so there shouldn't be any problem with the files themselves.


@mpompolas Yes, we might need your help... Check again this thread whenever you have some time available.

@mpompolas @MartinC
Where are the files read by in_tess_nwb.m supposed to come from?
Is the ecog.extensions.yaml you refer to in your code only available on your personal GitHub repo?

If this "ECoG extension" is not used by anybody and there is no clear need for it, I will remove it from the Brainstorm distribution...

No, I don't have access to the NWB Slack and can't create an account (it is limited to a few institutions).
I opened a GitHub issue listing all the errors I obtained; maybe you can also post your Slack message there so I can follow part of the discussion:

@mpompolas @MartinC
Could you please share an example of the files you used to write the functions in_fopen_nwb.m/in_fread_nwb.m?

What is currently in Brainstorm can't be used for generic EEG files; it seems extremely specific to one dataset: e.g., it requires one and only one raw signal set named "raw", OR one and only one processed signal set named "lfp".

I suggest you remove the ECoG extension for now and revisit it if needed.

I wrote this based on some example files I got from the developers about 1.5 years ago. I'm sure things have changed since then. The ecog.extensions.yaml was an additional file; I thought I had added it when I merged, but maybe I missed it.

Regarding the use of ECoG files in Brainstorm, the developers informed me that there are now two ways of storing the locations of the ECoG electrodes: either the electrode locations are stored in the .nwb file itself, or in an additional .nwbaux (or similarly named) file.
What was supported was based on the second approach.

@Francois regarding the 'raw' and 'lfp' labels:
unfortunately although NWB is a standardization format, they give too much freedom in labeling.
I had this discussion with the developers in the past, and this will be problematic for us to support unless we are stricter about what should be used (but then, a file that was already created without following our "stricter" guidelines won't load). They suggested maybe adding a Brainstorm compatibility check, but this has not been done yet.

Too much freedom is what's going on here.

As you can see in in_fread_nwb, I take the "most important" signals (electrophysiological: raw, lfp) as a guide and use their sampling rate as the main one. The behavioral, "less important" signals are downsampled or upsampled to match the sampling rate of the raw/lfp. The only way to know which signals should be considered the guide for the sampling rate is to force them to be labeled raw or lfp.

If you want this to be semi-automated, I would suggest a pop-up when the file is first imported (in in_fopen_nwb) that asks the user to select one of the keys present in the NWB file as the "sampling guide".
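Such a pop-up could be sketched with MATLAB's built-in listdlg. The key names below are made up for illustration; in practice they would be read from the NWB file:

```matlab
% Illustrative sketch (not actual Brainstorm code): let the user pick
% which signal set defines the reference sampling rate when neither a
% 'raw' nor an 'lfp' key is present. signalKeys would come from the file.
signalKeys = {'ecephys_raw', 'lfp_filtered', 'behavior_position'};  % example keys
[iSel, ok] = listdlg( ...
    'PromptString', 'Select the signal to use as the sampling-rate guide:', ...
    'SelectionMode', 'single', ...
    'ListString',    signalKeys);
if ok
    samplingGuide = signalKeys{iSel};
end
```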

I'll check if I can find a file.


I've spent almost two days on the interface of the MATNWB library for reading NWB files, and I feel like I've only gone backwards; it would require a lot more work to produce something easy to use in Brainstorm. Unfortunately, I can't dedicate more time to this project at the moment.
Maybe I will wait a few months or a year and see if it gets any better.

No need to spend time sending me other files. I'll ask later if there is a clear need for a major project to revive your NWB reading code.

Sorry for giving up after sounding so promising...
Feel free to keep working on the reading functions in Brainstorm if you have time for this (in_fopen_nwb.m / in_fread_nwb.m / bst_install_nwb.m). If you need to read one specific dataset, it should not be too complicated to write some code that reads it; the complicated part would be making it generic...
Otherwise, you could write Matlab scripts that read the data you are interested in from the .nwb files and save it directly in the Brainstorm database. The data structures are all documented on the website:

FYI, here is the last message I posted on the GitHub issue:

Thank you for your prompt help and suggestions!

However, my goal is not really to read these particular files... I was hoping to simply add the matnwb library to the Brainstorm environment (downloaded automatically from GitHub when needed) to offer native support for the NWB file format to Brainstorm users, as it was accepted as one of the reference file formats in the BIDS-EEG specification.

But it looks a lot more complicated than expected. Apparently it will not be possible to include a single version of the library that can read all .nwb files; it will always require some customized manipulation and navigating between versions...

Another problem is that it doesn't seem to be clear how EEG/LFP/SEEG/ECoG signals should be stored in the NWB ontology. The various example files I could load in MATLAB do not organize the data in the same way, so it doesn't seem easy to write code that could automatically find all the information it needs. It would take a substantial amount of work to supervise this interactive import of EEG signals from .nwb files into Brainstorm, and then to document it.

Our development resources on the Brainstorm project are limited and we have many projects to lead simultaneously. Unfortunately, we won't be able to invest time in writing supervision tools to manage multiple versions of the matnwb library or the schemas.
At the present time, it is not clear to me whether the matnwb library or the NWB file format are ready for a plug-and-play use from an EEG-processing software environment. Maybe I should wait for an extra year and try again later?

What are your plans for NWB/matnwb in the near future?
Will these problems of schema compatibility be handled more smoothly?
Will you provide more strict specifications/examples/tutorials for EEG/SEEG/ECoG/LFP?
Can you provide help (eg. development time) for integrating your library into external software?

Unless you have better suggestions, I will remove the NWB support currently available in the Brainstorm distribution: it was developed for one specific dataset by a former PhD student of our group (his work is mentioned on your website), but this code does not work with any other public dataset using .nwb files. It needs more work than we can provide at the moment to be maintained properly.

I'm working with two labs that use NWB at McGill. I will start working on this again soon.

The new version of the ephys toolbox is out!

Github commit:

Updated tutorials:

This includes updates of the NWB reader in Brainstorm.


I'm new to the NWB format and had problems reading NWB data (schema version 2.2.5) in Brainstorm (version 21-Jun-2022). The issue is that some NWB data don't have the 'group_name' key that Brainstorm requires.
I temporarily solved the issue by editing the Brainstorm code, but that feels dirty and there must be a better solution, so I am recording it here.

Here is what happened:

  1. Initially I used MATLAB 2017b and encountered the error below:

** Error: Line 34: max
** Invalid option. Option must be 'omitnan' or 'includenan'.
** Call stack:
** >correctType.m at 34
** >checkDtype.m at 111
** >DynamicTableRegion.m>DynamicTableRegion.validate_data at 34
** >Data.m> at 29
** >Data.m>Data.Data at 22
** >VectorData.m>VectorData.VectorData at 13
** >DynamicTableRegion.m>DynamicTableRegion.DynamicTableRegion at 13
** >parseDataset.m at 72
** >parseGroup.m at 22
** >parseGroup.m at 38
** >parseGroup.m at 38
** >nwbRead.m at 59
** >in_fopen_nwb.m at 49
** >in_fopen.m at 171
** >import_raw.m at 127
** >bst_call.m at 28
** >tree_callbacks.m>@(h,ev)bst_call(@import_raw,[],[],iSubject) at 660

  2. It seems to require a newer version of max(), so I switched to R2019b and the error above disappeared, but it threw another error:

** Error: Line 165: containers.Map/subsref
** The specified key is not present in this container.
** Call stack:
** >Set.m>Set.get at 165
** >in_fopen_nwb.m>getDeeperModule at 244
** >in_fopen_nwb.m at 90
** >in_fopen.m at 171
** >import_raw.m at 127
** >bst_call.m at 28
** >tree_callbacks.m>@(h,ev)bst_call(@import_raw,[],[],iSubject) at 660

  3. According to in_fopen_nwb.m>getDeeperModule at 244, there might be something wrong with this line: not_ordered_groupLabels = nwb.(electrodes_path).vectordata.get('group_name').data.load;. So I checked the data using nwbRead() and didn't find a key named group_name.
    Here is the output of nwb.(electrodes_path).vectordata:

14×1 Set array with properties:

        HCP: [types.hdmf_common.VectorData]
  filtering: [types.hdmf_common.VectorData]
       good: [types.hdmf_common.VectorData]
      group: [types.hdmf_common.VectorData]
       hemi: [types.hdmf_common.VectorData]
        imp: [types.hdmf_common.VectorData]
      label: [types.hdmf_common.VectorData]
   location: [types.hdmf_common.VectorData]
  pial_dist: [types.hdmf_common.VectorData]
     vertex: [types.hdmf_common.VectorData]
          x: [types.hdmf_common.VectorData]
          y: [types.hdmf_common.VectorData]
          z: [types.hdmf_common.VectorData]
       zone: [types.hdmf_common.VectorData]
  4. So I had to extract the names of the electrode groups (i.e., group_name) by modifying the code as follows:

not_ordered_groupLabels = {nwb.(electrodes_path).vectordata.get('group').data.path};
[~, not_ordered_groupLabels] = cellfun(@(x) fileparts(x), not_ordered_groupLabels', 'UniformOutput', 0);

and finally I can read the data. I'm not sure whether this is just a labeling issue or whether the data was badly formatted (I obtained it from an open database). As mpompolas mentioned, the key names in NWB can be arbitrary.