That’s a good suggestion. There’s definitely room for an updated best-practice pipeline from start to finish, and the HCP tutorial is certainly a good place to do that…
I’ve created an issue on GitHub for it (contributions welcome); hopefully we’ll find the time to do something about it at some point soon.
If you have any recommendations regarding the HCP pipeline (for the unprocessed data), that would be great!
At what stage do you average the RL and LR scans?
At what stage do you concatenate the different scans of each subject (3 different gradient tables: 95, 96 and 97 directions)?
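Just to illustrate what I mean by concatenation: at the level of the gradient tables, the three per-scan tables would simply be stacked into one. A minimal pure-Python sketch (the table sizes match the 95/96/97-direction scans; the rows here are dummy values, not real gradient directions):

```python
# Hypothetical sketch: concatenating the per-scan gradient tables
# (95-, 96- and 97-direction shells) into a single table.
# The row values below are illustrative dummies, not real directions.

def concat_gradient_tables(tables):
    """Concatenate gradient tables, each a list of (bx, by, bz, bval) rows."""
    merged = []
    for table in tables:
        merged.extend(table)
    return merged

# Toy example: three dummy tables with 95, 96 and 97 entries each.
dirs95 = [(1.0, 0.0, 0.0, 1000.0)] * 95
dirs96 = [(0.0, 1.0, 0.0, 2000.0)] * 96
dirs97 = [(0.0, 0.0, 1.0, 3000.0)] * 97

merged = concat_gradient_tables([dirs95, dirs96, dirs97])
print(len(merged))  # 95 + 96 + 97 = 288 rows
```

The open question is whether this stacking should happen before or after the per-scan corrections, which is what I’m asking above.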
Furthermore, there is currently a release of 35 subjects scanned with the MGH Adult Diffusion protocol: MGH Adult Diffusion Data
Minimal data preprocessing:
Gradient nonlinearity correction.
Motion correction.
Eddy current correction.
Adjustment of the b-vectors.
What is the equivalent of gradient nonlinearity correction in MRtrix?
Do you also recommend applying dwidenoise and mrdegibbs?
Is it also possible with MRtrix to perform motion correction first, and then eddy current correction?
The b=0 images interspersed throughout the diffusion scans were used to estimate the bulk head motion with respect to the initial time point (first b=0 image), where the rigid transformations were calculated with the boundary-based registration tool in the FreeSurfer package v5.3.0 (Greve and Fischl, 2009). For each b=0 image, this transformation was then applied to itself and the following 13 diffusion-weighted images to correct for head motion. After motion correction, the b-vectors were adjusted according to the estimated rigid rotation.
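The b-vector adjustment mentioned at the end of that passage amounts to applying the rotational part of the estimated rigid transformation to each gradient direction, so the directions stay consistent with the reoriented images. A minimal pure-Python sketch of that step (the rotation matrix here is a made-up example, not an actual motion estimate):

```python
import math

# Sketch of the b-vector adjustment described above: after estimating a
# rigid head rotation R for a block of volumes, each b-vector in that
# block is rotated by R. Illustrative only; real pipelines extract R
# from the registration output for each b=0 block.

def rotate_bvec(R, bvec):
    """Apply a 3x3 rotation matrix R (list of rows) to a b-vector."""
    return tuple(sum(R[i][j] * bvec[j] for j in range(3)) for i in range(3))

# Example: a 90-degree rotation about z maps (1, 0, 0) onto (0, 1, 0).
theta = math.pi / 2
Rz = [[math.cos(theta), -math.sin(theta), 0.0],
      [math.sin(theta),  math.cos(theta), 0.0],
      [0.0,              0.0,             1.0]]

bx, by, bz = rotate_bvec(Rz, (1.0, 0.0, 0.0))
print(bx, by, bz)
```

Note that only the rotational component is applied to the b-vectors; the translational part of the rigid transformation does not affect gradient directions.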
Trying to re-perform the HCP minimal pre-processing is a completely different beast from simply using the provided HCP minimally-processed data, and so would really need its own thread for further discussion. We’ve made our own attempts at this locally, and we were apparently the first people ever to request the private data from Siemens that are required to do so. So unless you are exceptionally determined to achieve this, and have a comprehensive understanding of all pre-processing steps, I would advise sticking with the provided minimally-processed data.