= What's next =
A roadmap for the future development of Brainstorm.
== Next six months ==
== New interface elements ==
* MEG protocols: More flexible organization of the database; sub-conditions to allow different runs X different conditions.
* Functional connectivity (display: see eConnectome)
* SSP:
* Make SSP projections dynamic, and keep the full list instead of always combining them
* Take the bad channels into account when applying the SSP
* Refresh the display after changing the list of bad channels (imported files + raw) so that the SSP are applied correctly
* bst_selections:
* Add user-defined combinations of sensors (e.g. "double banana" for EEG)
* Use this to produce "inverted polarity" displays too (useful in EEG)
* Standard setups for all the EEG caps
* Sources on surface: Display peak regions over time (time = color) => A.Gramfort
* Homogenize a selection of several subjects/conditions
* Popup menu when more than one study selected
* Creation of a common channel file (match channels by names, not by order)
* Register MEG runs (recompute fields for a different set of sensors, MEGCoregister from old brainstorm)
* Images of amplitude: [sensor x time], [trial x time], scout: [trial x time] (similar to erpimage in EEGLAB)
* Simulation: synthesize pseudo data-files from a cortex patch (duration, amplitude, noise)
* Interface to compute reaction times based on events (see the sketch at the end of this section): Input: selection of a cue event + a response event; Output: reaction time trial by trial, average, std, distribution... => Etienne Labyt
* Database navigator: F4 to switch to the next source file
* Segmentation into microstates
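A minimal sketch of the reaction-time computation mentioned above, assuming the cue and response latencies have already been extracted as two plain vectors of times in seconds (the variable names cueTimes and respTimes are hypothetical, not existing Brainstorm fields):
{{{
% Hypothetical inputs: latencies (in seconds) of the cue and response events
cueTimes  = [1.2, 3.4, 5.9, 8.1];     % one cue per trial
respTimes = [1.65, 3.95, 6.4, 8.55];  % responses recorded on the same time axis

% Pair each cue with the first response that follows it
rt = nan(size(cueTimes));
for i = 1:length(cueTimes)
    iResp = find(respTimes > cueTimes(i), 1, 'first');
    if ~isempty(iResp)
        rt(i) = respTimes(iResp) - cueTimes(i);
    end
end

% Trial-by-trial values, average, std, and distribution
good   = ~isnan(rt);
rtMean = mean(rt(good));
rtStd  = std(rt(good));
figure; hist(rt(good), 20);           % reaction-time distribution
}}}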
== Source modeling ==
* Inverse:
* Stable LCMV Beamformer
* MUSIC
* sLORETA: Values are now multiplied by 1e12 at loading for display => this has to be done in another way
* Unconstrained sources:
* Define as default
* Check all the processes
* Difference and stat should be: norm(A) - norm(B)
* Overlapping spheres: improve the estimation of the spheres for the frontal lobes
* Volume grid:
* Scouts 3D
* Test volume sources with all the subsequent processes (timefreq, stat...)
* Optimize: 3D display (better than 9x9 cubes)
* Optimize: vol_dilate (with 26 neighbors)
* Optimize: grid_interp_mri
* Magnetic extrapolation:
* Do the same thing with EEG
* Optimize bst_extrapm.m, add waitbar
* Use the noise covariance from the database instead of recomputing it
* Project sources:
* Adapt smooth factor to the number of vertices
* Number of neighbors to consider = average number of neighbors in the target mesh.
* Compute by small time blocks
* Noise covariance matrix:
* Save nAvg in noisecov file, to make it easier to scale to other recordings
* When deploying to other conditions: Apply the destination SSP (!NoiseCov = SSP . !NoiseCov . SSP'), as sketched below
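A minimal sketch of this projection, assuming the SSP vectors of the destination condition are available as orthonormal columns of a matrix U and the noise covariance as NoiseCov (both variable names are placeholders, not actual Brainstorm structures):
{{{
% Hypothetical inputs: NoiseCov [nChannels x nChannels], U [nChannels x nProj]
% with the destination SSP vectors stored as orthonormal columns
nChannels = size(NoiseCov, 1);
P = eye(nChannels) - U * U';     % SSP projector of the destination condition
NoiseCov = P * NoiseCov * P';    % projected covariance: NoiseCov = SSP * NoiseCov * SSP'
}}}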
== Anatomy ==
* MRI import: auto-reorientation of the MRI after the NAS / LPA / RPA fiducials are selected (see the sketch at the end of this section).
* Major bug when importing surfaces for an MRI that was re-oriented manually
* Finalize Brodmann scouts
* Remove NCS/Talairach coordinate system, or fix it => Sylvain, Karim ND
* Clustering cortex: Dimitrios, David, Yu-Teng
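Regarding the MRI auto-reorientation item above, a minimal sketch of how a head coordinate system could be derived from the three fiducials, assuming the usual MEG convention (origin at the midpoint between LPA and RPA, X axis towards the nasion, Z axis pointing up); nas, lpa and rpa are hypothetical 1x3 positions in MRI coordinates, not existing Brainstorm variables:
{{{
% Hypothetical inputs: nas, lpa, rpa = 1x3 fiducial positions in MRI coordinates (mm)
origin = (lpa + rpa) / 2;                        % midpoint between the two ears
vx = nas - origin;           vx = vx / norm(vx); % X: towards the nasion
vz = cross(vx, lpa - rpa);   vz = vz / norm(vz); % Z: upwards (right-handed system)
vy = cross(vz, vx);                              % Y: towards the left ear
R = [vx; vy; vz];                                % rotation MRI -> head coordinates
T = -R * origin';                                % translation
% A point p (1x3, MRI coordinates) maps to head coordinates as: (R * p' + T)'
}}}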
== Processes ==
* Important new processes for a full processing pipeline:
* Import and segmentation of RAW files
* Detect bad segments on RAW (then events in "bad" segments would be ignored)
* Artifact detection: use criteria other than simple value thresholds (e.g. the variance of the sensors...)
* Create an event list based on thresholds on some channels (Stim, EOG, ECG...), as sketched at the end of this section => Etienne Labyt
* Creation of SSP vectors (e.g. remove the cardiac artifact)
* ICA
* Integration of Antoine Ducorps' methods (everything is in the dataHandler documentation):
* Cardiac artifact correction in MEG
* PCA correction of ocular artifacts
* Interpolation of missing EEG channels
* Headmodel / sources / noise cov
* Project sources
* Delete intermediary files
* Processing raw/continuous:
* Process non-FIF files (need to convert the header)
* Apply SSP + CTF compensators (only if it is useful - maybe create a specific process for that)
* Other processes:
* Bug: gradnorm crashes with bad channels
* Spatial smoothing: check / document parameters
* Sinusoid removal: fix new function
* Time-frequency:
* Write script for timefreq tutorial
* Make much faster and more memory efficient (C functions coded by Matti ?)
* Power spectrum: computation (FFT/Welch, or average of TF) + display (frequency/power, 2DLayout), as sketched at the end of this section
* Display stat computed on time-frequency data
* Display TF maps separately for the two gradiometers (otherwise they overlap)
* Source reconstruction by frequency bands
* Scouts on surface / time-freq
* Process selection interface:
* Bug when resizing the window (with more than one process)
* isAvgRef: warning when a process requires average-referenced data as input
* Save "freqband" option when edited from custom processes
* John's noise cancellation filters
* Chain calls of similar processes (e.g. bandpass + sin_remove), to avoid reading the full raw files several times
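A minimal sketch of the event detection from channel thresholds requested above ("Create an event list based on thresholds on some channels"); chan, sfreq and thresh are hypothetical inputs, not existing Brainstorm variables:
{{{
% Hypothetical inputs: chan = one channel [1 x nTime], sfreq = sampling rate (Hz),
% thresh = detection threshold in the units of the channel (Stim, EOG, ECG...)
isAbove = abs(chan) > thresh;
iOnset  = find(diff([0, isAbove]) == 1);   % samples where the signal crosses the threshold upwards
eventTimes = (iOnset - 1) / sfreq;         % sample indices -> latencies in seconds
}}}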
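And a minimal sketch of the power-spectrum computation mentioned in the time-frequency items, assuming the Signal Processing Toolbox is available (F and sfreq are placeholder inputs):
{{{
% Hypothetical inputs: F = recordings [nChannels x nTime], sfreq = sampling rate (Hz)
winLength = 2 * sfreq;                     % 2-second Welch windows
nChannels = size(F, 1);
for i = 1:nChannels
    [pxx(:,i), f] = pwelch(F(i,:), hamming(winLength), [], [], sfreq);
end
figure; plot(f, 10*log10(pxx));            % frequency vs. power display (dB)
xlabel('Frequency (Hz)'); ylabel('Power (dB)');
}}}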
== Statistics ==
* ANOVA: Use LENA functions
* Output = 1 file per effect, all grouped in a node "ANOVA"
* Display several ANOVA maps (from several files) on one single figure, using a "graphic accumulator", towards which one can send any type of graphic object
* Permutation tests:
* t-test only (Wilcoxon? sign-test?): paired, equal variance, unequal variance
* number of permutations ~ 1000
* maximum statistic over "time" or "time and space" (see the sketch at the end of this section)
* Permutations / clustering: cf fieldtrip
* http://fieldtrip.fcdonders.nl/tutorial/cluster_permutation_timelock
* http://fieldtrip.fcdonders.nl/tutorial/cluster_permutation_freq
* Threshold in time: keep only the regions that are significant over contiguous blocks of time, or over a certain number of time points => Process that creates a static representation of a temporal window (see the second sketch at the end of this section)
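A minimal sketch of the paired permutation t-test with the maximum statistic described above; A and B are hypothetical [nSignals x nTime x nTrials] arrays of paired trials, not actual Brainstorm structures:
{{{
% Hypothetical inputs: A, B = [nSignals x nTime x nTrials] paired conditions
nPerm   = 1000;
D       = A - B;                                        % paired differences
nTrials = size(D, 3);
tObs    = mean(D,3) ./ (std(D,0,3) / sqrt(nTrials));    % observed paired t-map
maxT    = zeros(nPerm, 1);
for iPerm = 1:nPerm
    flip  = sign(rand(1,1,nTrials) - 0.5);              % random sign flip per trial
    Dp    = bsxfun(@times, D, flip);
    tPerm = mean(Dp,3) ./ (std(Dp,0,3) / sqrt(nTrials));
    maxT(iPerm) = max(abs(tPerm(:)));                   % maximum statistic over time and space
end
% Corrected threshold at alpha = 0.05 (controls the family-wise error rate)
sortedT = sort(maxT);
tThresh = sortedT(ceil(0.95 * nPerm));
sigMask = abs(tObs) > tThresh;
}}}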
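Following up on the "Threshold in time" item, a minimal sketch that keeps only the runs of significant time points longer than a minimum duration, starting from a significance mask such as sigMask above (minLength is an arbitrary example value):
{{{
% Hypothetical input: sigMask = [nSignals x nTime] logical map of significant values
minLength = 5;                             % minimum number of contiguous time points
cleanMask = false(size(sigMask));
for iSig = 1:size(sigMask, 1)
    d = diff([0, sigMask(iSig,:), 0]);
    iStart = find(d == 1);                 % beginnings of significant runs
    iStop  = find(d == -1) - 1;            % ends of significant runs
    for k = 1:length(iStart)
        if (iStop(k) - iStart(k) + 1) >= minLength
            cleanMask(iSig, iStart(k):iStop(k)) = true;
        end
    end
end
}}}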
== Display ==
* OpenGL options: {none, software, hardware}
* Colormap: Set colormap max with right-click + move on the colorbar
* Bug: Mixing 2 views MRI/3d, white cuts appear after !SetCurrentFigure
* Bug: Scout without overlay, adapt scale for each graph when "Uniformize" option is unchecked (mixing sources + zscores)
* Waitbars:
* Replace old waitbars with java ones
* Add a "Cancel" button on waitbars when the bounds are defined (ie. when bst controls the process)
* Contact sheets & movies: use average of time windows instead of single instants, for each picture.
== I/O ==
* EEG File formats:
* Nihon Kohden
* EDF+
* !BrainVision / !BrainAmp: Get functions from EEGLAB
* EEG !CeeGraph
* NEUROFILE = COHERENCE EEG/video !LongTerm Monitoring => Manfred Spueler
* EGI: Finish support for epoched files (formats 3,5,7)
* Other file formats
* MEG160 (KIT)
* CTF:
* Read continuous RAW files split in several .meg4 files ( > 2 GB)
* Read STIM channel and generate !MarkerFile
* EEGLAB: Apply ICA matrices, get number of trials for AVG files
* !FieldTrip structures: In / Out
* Gzipped Nifti (new BrainVISA standard), see the sketch at the end of this section
* Analyze / Nifti:
* Fix output function (output MRIs cannot be imported in SPM)
* Use the "orient" field correctly to automatically align the MRI when importing
* Define scouts from SPM / Analyze 3D masks
* Files > 2 GB: display warnings
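A minimal sketch of how the gzipped NIfTI support could work, using MATLAB's gunzip to decompress the volume into a temporary folder before handing it to the existing Analyze/NIfTI reader (the file name is a placeholder):
{{{
% Decompress a .nii.gz volume to a temporary folder before reading it
NiiGzFile = 'subject01_T1.nii.gz';      % placeholder input file
tmpDir = tempname;
mkdir(tmpDir);
unzipped = gunzip(NiiGzFile, tmpDir);   % returns a cell array of extracted file names
NiiFile  = unzipped{1};
% ... pass NiiFile to the existing Analyze/NIfTI reader ...
delete(NiiFile);                        % then remove the temporary copy
rmdir(tmpDir);
}}}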
== Programming details ==
* Double-click doesn't work well on some Linux workstations
* Bug: Menu "Use default EEG cap" doesn't work for a multiple selection (setting the same EEG cap for several subjects)
* Bug node selection: click on sources > TF: select node-source, not node-condition
* Bug tree_dependencies: source files reprojected on the default anatomy; if they are based on data files that are bad trials, they should be ignored by tree_dependencies, but they are not
* bst_warp and channel_project: Use tess_parametrize_new instead of tess_parametrize
* Testing functions: test_ctf "folder" / test_neuromag "folder" / test "folder" => Alex
* Bug in_bst_data_multi: if trials have different sizes, the output is arbitrary (that of the first file)...
* Shared kernels: do the "get bad channels" operation in a different way (reading all the files is too slow)
* Rewrite all GUI functions with gui_component
* Remove field "!BrainStormSubject" in the studies file: force same folder name for anat & data
* Allow protocols with same folder for anat and data
* Ensure that all the loaded and saved filenames are in "unix" format
* Use parfor: when it is not available, replace directly all the "parfor" with "for" in file.m (see the sketch at the end of this section)
* Use Matlab GPU toolbox
* Re-use panels instead of delete+create again
* Optimize bst_read_events_track
* Write shepards.m with new algorithm for nearest neighbors
* Use tesselations_stat and tesselations_outwards to clean surfaces
* Movies: Use JAVE (Java)
* Screen captures: Use Yair Altman functions (in Matlab Central)
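A minimal sketch of the parfor fallback suggested above: when the Parallel Computing Toolbox is not available, patch the .m file by replacing whole-word "parfor" with "for" (the file name is a placeholder):
{{{
% Hypothetical deployment step: fall back to plain "for" loops when parfor is unavailable
fname = 'file.m';                                 % placeholder: file to patch
if ~license('test', 'Distrib_Computing_Toolbox')
    txt = fileread(fname);
    txt = regexprep(txt, '\<parfor\>', 'for');    % replace whole words only
    fid = fopen(fname, 'w');
    fwrite(fid, txt);
    fclose(fid);
end
}}}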
== Distribution & documentation ==
* Compile stand-alone version: Linux, MacOS
* Version with big fonts for live demos
* Add Help buttons and menus (in popups, dialog windows...) => Links to the website.
* Send an email to registered users to announce major improvements
* Script tutorials:
* Update them to reflect all the recent changes
* Script for the time-frequency computation
* Introduction tutorials:
* Estimate time to complete each tutorial
* Clusters
* Anatomy: Segmentation with !FreeSurfer
* First steps: Brainstorm preferences
* Headmodel: explain the fields + how to get the constrained leadfield
* Coordinate systems: How to convert between the different coordinate systems in scripts
* Sources: Modeled data
* Sources: threshold min. size (not documented yet)
* Scouts: Atlases of Tzourio-Mazoyer and Brodmann
* Processes: Describe all the processes
* Processes: How to write your own processes (user folder for processes)
* Processes: Processing RAW files
* Import raw recordings: Add "detect bad trials/channels" in the pipeline
* Advanced tutorials:
* MNE sample dataset
* EEG (How to import an EEG cap)
* MRI segmentation with !FreeSurfer => David Wheland
* How to make and compress a movie (Brainstorm + !VirtualDub + XVid)
* Display the "What's new" page after downloading a new version of Brainstorm
* Ask users to send their channel files, align on Colin, distribute
== Unsolved problems ==
* Surfaces:
* Influence of the software used to extract surfaces (!BrainVisa, !BrainSuite, !FreeSurfer)
* Type of cortex surface to use: grey/white, grey/csf, midsurface ?
* How to get good surfaces for BEM ?
* Group analysis:
* 1) localization on individual brain + interpolation on MNI brain ?
* 2) localization directly on MNI brain ?
* Normalize amplitudes across different subjects / runs ?
* Online documentation of forward and inverse methods