I have a number of datasets of the same subject that are acquired in the same way.
When performing dwi2fod, is it best to estimate a new wm.txt for each acquisition, or to make several estimates and average them?
i.e. for run 1 I have:
# Shells: 0,1000
# command_history: /usr/local/mrtrix3/bin/amp2response dwi.mif voxels_sfwm.mif safe_vecs.mif response_sfwm.txt -shells '0,1000' (version=3.0.2)
1465.75680913335 0 0 0 0 0
860.230338830746 -222.460763135308 70.0638438700654 -19.2822610857832 2.61293932053527 0.635122508535416
and for run 2:
# Shells: 0,1000
# command_history: /opt/mrtrix3/bin/amp2response dwi.mif voxels_sfwm.mif safe_vecs.mif response_sfwm.txt -shells '0,1000' (version=3.0.2)
1475.9663693353 0 0 0 0 0
860.928887143302 -249.822539782667 78.0903113247802 -19.7593152305991 1.80313250235039 0.0224749204708464
What’s the best approach?
That’s a good question. It’s typically best to estimate a new set of responses for each run, if only to account for differences in the scaling of the data (which will typically vary between runs due to scanner calibration). That said, there are ways of scaling the data to match the response, and vice versa, so it’s not a black & white answer.
In your case, you’ll probably find that the responses should look identical up to some scaling factor, since they’re acquired from the same subject on identical hardware with the same scan parameters. It even looks like the scaling is essentially the same too…
You can probably use the same response to process both datasets, provided steps are taken to handle potential scaling mismatches. One way to deal with exactly this issue is mtnormalise: even in the presence of a non-negligible difference in scaling, you should find that running mtnormalise afterwards leads to a correct solution – provided the responses match in all respects other than their overall scaling (i.e. one is a multiple of the other).
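For intuition on why a pure rescaling is harmless, here’s a deliberately simplified numpy sketch. It uses a toy, unconstrained per-degree deconvolution with hypothetical signal coefficients – not what dwi2fod’s constrained solver actually does – but the scaling argument carries over because the problem is linear:

```python
import numpy as np

# Toy, unconstrained spherical deconvolution per harmonic degree:
# fod_l = signal_l / r_l. dwi2fod's solver is constrained, but the
# scaling argument below holds because the problem is linear.
response = np.array([1465.76, 860.23])   # l=0, l=2 terms (loosely based on run 1)
signal = np.array([1200.0, 300.0])       # hypothetical SH signal coefficients

fod_a = signal / response                # deconvolve with the original response
fod_b = signal / (1.007 * response)      # same response, rescaled by ~0.7%

# The two FODs differ by a single global factor only, which a global
# intensity normalisation (the job mtnormalise does, up to its smooth
# bias field) absorbs exactly:
global_factor = fod_a[0] / fod_b[0]
print(global_factor)  # 1.007
```

If the responses differed in shape rather than just scale, the two FODs would no longer be related by one global factor, and no global normalisation could reconcile them.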
Finally, note that using a single average response function is precisely what we advocate for any quantitative analysis (fixel-based analysis or quantitative structural connectivity analysis), and the same issues apply there, with mtnormalise providing our recommended solution. There is some discussion of these issues in the docs if you look for it – not always obvious since it’s often mentioned ‘in passing’ while describing our recommended pipelines…
Interestingly, while the scaling is almost identical, the shape at b=1000 actually changes a decent amount. This might be simply due to the difficulty in choosing appropriate exemplar single-fibre WM voxels, with different subject positions and different voxel grids leading to different manifestations of partial volume. One wouldn’t expect the diffusion-weighted signal profile to actually change in an individual over a short period of time for a fixed b-value if the magnetic field gradient calibration remains stable.
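To put numbers on that, a quick numpy check on the two b=1000 rows you posted (the columns are the even-degree zonal SH coefficients, l=0,2,4,…):

```python
import numpy as np

# b=1000 rows from the two response files quoted above
run1 = np.array([860.230338830746, -222.460763135308, 70.0638438700654,
                 -19.2822610857832, 2.61293932053527, 0.635122508535416])
run2 = np.array([860.928887143302, -249.822539782667, 78.0903113247802,
                 -19.7593152305991, 1.80313250235039, 0.0224749204708464])

# Overall scale: compare the b=0 terms quoted above
scale_ratio = 1475.9663693353 / 1465.75680913335
print(f"scale ratio: {scale_ratio:.4f}")   # ~1.007, i.e. well under 1% apart

# Shape: normalise each row by its l=0 term, then compare term by term
shape1 = run1 / run1[0]
shape2 = run2 / run2[0]
print(np.round(shape2 / shape1, 3))        # l=2 term differs by ~12%
```

So the magnitudes agree to better than 1%, while the l=2 shape term differs by roughly 12% – consistent with partial-volume differences in the selected single-fibre voxels rather than any genuine change in the tissue.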
I’ll also briefly note that if you use responsemean, the underlying maths is equivalent to averaging the magnitude and the shape of the response functions separately (as opposed to directly averaging the coefficients). In your case there’s a clear difference in shape but not in magnitude.
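To illustrate the magnitude/shape decomposition, here’s a sketch on your two b=1000 rows. Note the choice of the Euclidean norm as the “magnitude” here is my assumption for illustration, not necessarily the exact norm responsemean uses internally; the point is only that magnitude and shape are treated as separate quantities:

```python
import numpy as np

def mean_response(rows):
    # ASSUMPTION: magnitude = Euclidean norm of each coefficient row;
    # shape = the unit-norm row. Magnitudes and shapes are averaged
    # separately, then recombined.
    rows = np.asarray(rows, dtype=float)
    mags = np.linalg.norm(rows, axis=1)          # per-response magnitude
    shapes = rows / mags[:, None]                # per-response shape
    mean_shape = shapes.mean(axis=0)
    mean_shape /= np.linalg.norm(mean_shape)     # re-normalise the shape
    return mags.mean() * mean_shape              # recombine

run1 = [860.230338830746, -222.460763135308, 70.0638438700654,
        -19.2822610857832, 2.61293932053527, 0.635122508535416]
run2 = [860.928887143302, -249.822539782667, 78.0903113247802,
        -19.7593152305991, 1.80313250235039, 0.0224749204708464]

avg = mean_response([run1, run2])
naive = (np.asarray(run1) + np.asarray(run2)) / 2
print(np.round(avg, 2))
print(np.round(naive, 2))
```

With your two responses the two averages come out nearly identical, precisely because the magnitudes already agree; they would diverge if the runs differed substantially in overall scaling.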