We expect that a number of users will want to use MRtrix3 for the analysis of data from the Human Connectome Project (HCP). These data do, however, present some interesting challenges from a processing perspective. Here I will try to list a few ideas, as well as issues that do not yet have a robust solution; I hope that any users out there with experience with these data will also be able to contribute ideas or suggestions.
Do my tracking parameters need to be changed for HCP data?
Probably. For instance, the default length criteria are currently set based on the voxel size rather than absolute values (so that e.g. animal data will still get sensible defaults). With such high-resolution data, these may not be appropriate. The default maximum length is 100 times the voxel size, which is only 125mm at 1.25mm isotropic; this would preclude reconstruction of a number of long-range pathways in the brain, so it should be overridden with something more sensible. The minimum length is a more difficult call, but in the absence of a better argument I'd probably stick with the default (5 x voxel size, or 2 x voxel size if ACT is used).
Also, the default step size for iFOD2 is 0.5 times the voxel size; with such small voxels this will make the track files somewhat larger than usual, and the tracks slightly more jittery, although they will actually disperse slightly less over distance than they would for standard-resolution data. People are free to experiment with the relevant tracking parameters, but we don't yet have a definitive answer as to how these should ideally be set.
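For example (purely as a sketch; the file names and the 250mm figure are illustrative rather than recommendations), these criteria can be overridden explicitly on the tckgen command line instead of relying on the voxel-based defaults:

    # explicit length criteria (in mm) and step size (in mm), overriding the voxel-based defaults
    tckgen WM_FODs.mif tracks.tck -seed_image brain_mask.mif -maxlength 250 -minlength 2.5 -step 0.625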
Is it possible to use data from all shells in CSD?
The default CSD algorithm provided in the dwi2fod command is only compatible with a single b-value shell, and will by default select the shell with the largest b-value for processing.
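If you would rather be explicit about which shell is used than rely on this default, the shell can be selected manually; as a sketch (file names illustrative, and depending on your MRtrix3 version the option may be spelled -shell rather than -shells):

    # restrict CSD to the b=3000 shell explicitly, rather than relying on the default selection
    dwi2fod csd dwi.mif response.txt FOD.mif -shells 3000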
The Multi-Shell Multi-Tissue (MSMT) CSD method has now been incorporated into MRtrix3, and is provided as part of the dwi2fod command. Instructions for its use are provided in the documentation.
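As a minimal sketch of what that looks like in practice (file names illustrative; refer to the documentation for the recommended pipeline, including response function estimation):

    # estimate per-tissue response functions (here using a 5TT segmentation; see below regarding 5ttgen)
    dwi2response msmt_5tt dwi.mif 5TT.mif RF_WM.txt RF_GM.txt RF_CSF.txt
    # multi-shell multi-tissue CSD, producing one ODF image per tissue type
    dwi2fod msmt_csd dwi.mif RF_WM.txt WM_FODs.mif RF_GM.txt GM.mif RF_CSF.txt CSF.mif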
The image data include information on gradient non-linearities. Can I make use of this?
Unfortunately not yet. Making CSD compatible with such data is more difficult than it is for other diffusion models, due to its assumption of a canonical response function. There are two possible ways that this could be handled:
- Use the acquired diffusion data to interpolate / extrapolate predicted data on a fixed b-value shell (example);
- Generate a representation of the response function that can be interpolated / extrapolated as a function of b-value, and hence choose an appropriate response function per voxel.
Work is underway to solve these issues, but there's nothing available yet. For those wanting to pursue their own solution, bear in mind that the gradient non-linearities will affect both the effective b-value and the effective diffusion sensitisation directions in each voxel. That said, the FODs look entirely reasonable even without these corrections…
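For reference: the minimally pre-processed HCP data provide this information as a 9-volume gradient deviation image (grad_dev); if I have the convention right, this encodes a 3x3 deviation tensor L in each voxel, such that for a nominal unit gradient direction g the effective gradient is (I + L)g, i.e. the effective direction is (I + L)g normalised, and the effective b-value is the nominal b-value scaled by |(I + L)g|^2.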
The anatomical tissue segmentation for ACT from 5ttgen fsl seems even worse than for ‘normal’ data…?
The combination of high spatial resolution and high receiver coil density results in a pretty high noise level in the middle of the brain. This in turn can trick an intensity-based segmentation like FSL’s FAST into mislabelling tissues; it just doesn’t have the prior information necessary to disentangle what’s in there. Work is underway on alternative 5ttgen algorithms that will hopefully provide better tissue segmentation for such data; suggestions are also welcome for wrapping the functionality of other segmentation algorithms to provide their outputs in the 5TT format.
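In the meantime, a basic invocation on the HCP structural data would look something like the following (file name illustrative; the -premasked option tells the script that the input T1 image has already been brain-extracted, as is the case for the HCP-provided images):

    # generate the 5TT image for ACT from the HCP brain-extracted T1
    5ttgen fsl T1w_acpc_dc_restore_brain.nii.gz 5TT.mif -premasked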