If you’re sure the calibration is identical between scans, you can just `mrcat` them together – no need to use `dwiextract` at all. The DW encoding will contain all the information about which scans used which b-values. Even if you do need to correct for different scaling between acquisitions, there should still be no need to use `dwiextract` when combining the data.
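As a rough sketch of what that concatenation could look like (file names here are hypothetical):

```shell
# Concatenate two DWI series along the volume axis (axis 3);
# the DW gradient tables stored in the .mif headers are merged automatically.
mrcat dwi_shell1.mif dwi_shell2.mif dwi_combined.mif -axis 3

# Sanity check: the combined gradient table should list all b-values.
mrinfo dwi_combined.mif -dwgrad
```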
No, not in general. Most analysis algorithms will assume the exact same acquisition parameters have been used throughout. Interestingly, MSMT-CSD is probably the only method I know of that should be able to handle these types of data, but I’ve not seen anyone do it yet. I’d strongly recommend you use the same TE for all acquisitions you wish to combine, it’ll make your life a lot easier later on, when you come to publish, etc.
The main reason to split your acquisition into separate scans is to allow different numbers of directions per shell. I agree that in theory you could also reduce the TE to boost SNR and reduce acquisition time further, but like I said earlier, it’s not a good idea since it breaks the assumptions of almost all analysis methods you might like to use (outside of MSMT-CSD).
Yes, that’s much more in line with what I was expecting. It’s still not comparing like with like though…
The `dwi2fod csd` algorithm produces slightly different output from the `dwi2fod msmt_csd` algorithm, even when set to the same parameters. This is due to the use of a soft non-negativity constraint in the `csd` variant, as opposed to a hard constraint in `msmt_csd` – there’s some discussion of this issue here. If you’re really keen on performing a proper head-to-head comparison, you could use the `dwi2response dhollander` and `dwi2fod msmt_csd` algorithms for the single-shell analysis as well, but only provide the WM and CSF responses (and corresponding output files) to the `dwi2fod msmt_csd` call – as per the same thread.
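For reference, that single-shell 2-tissue recipe might look something like this (file names are placeholders; the `dhollander` algorithm still writes out a GM response, it’s simply not passed on to `dwi2fod`):

```shell
# Estimate WM / GM / CSF responses (the GM response is produced but left unused):
dwi2response dhollander dwi.mif wm_response.txt gm_response.txt csf_response.txt

# Single-shell, 2-tissue CSD: only the WM and CSF response/output pairs are supplied.
dwi2fod msmt_csd dwi.mif wm_response.txt wmfod.mif csf_response.txt csf.mif -mask mask.mif
```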
I assume you’re referring to the cluster of spurious streamlines next to the spinal cord / cortico-spinal tract? That’s most likely simply down to poor masking… I’d expect it would be dealt with entirely by using ACT. I certainly wouldn’t take it into account when assessing the quality of the reconstruction; you really need to look at the results within the brain, preferably in the central deep GM regions / brainstem, where the SNR is typically lowest (furthest from the coils).
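A minimal sketch of what adding ACT involves, assuming you have a coregistered T1 image available (file names and streamline count are just placeholders):

```shell
# Build a 5-tissue-type segmentation from the T1 (this variant requires FSL):
5ttgen fsl T1_coreg.mif 5tt.mif

# Anatomically-constrained tractography: -act rejects streamlines that
# terminate implausibly (e.g. in CSF), which should remove most of the
# spurious clusters outside the brain parenchyma.
tckgen wmfod.mif tracks.tck -act 5tt.mif -seed_dynamic wmfod.mif -select 100000
```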
Like I said earlier, looking at whole-brain tractography as a marker of quality is fraught with difficulties (it annoys me when I see these types of comparisons used as ‘validation’ in papers…), so it’s difficult for me to make any meaningful statement on the quality of your results from what you show. I’d personally use other strategies: you could compare the raw fODFs within the central brain regions, and maybe compare the slab-cropped whole-brain tractography in those regions. But personally, I tend to look at the SNR in the b=0 images, assessed via the voxel-wise mean and standard deviation over repeat b=0 volumes (don’t let it drop too far below 20); inspect the quality of the DW encoding (`dirstat` provides a lot of information on that front); check that the fODFs look as expected; and verify that there are no obvious artefacts (due to e.g. incomplete fat saturation, ghosting, residual eddy-current problems, etc.). The decision as to which b-value to use should probably be guided by other considerations, but these days I’d advocate somewhere between 2,500 and 3,000 s/mm² for the highest shell.
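As a rough sketch of that b=0 SNR check (assuming several interspersed b=0 volumes in the series; file names are hypothetical):

```shell
# Pull out the repeat b=0 volumes:
dwiextract dwi.mif -bzero b0s.mif

# Voxel-wise mean and standard deviation across the b=0 repeats:
mrmath b0s.mif mean b0_mean.mif -axis 3
mrmath b0s.mif std b0_std.mif -axis 3

# SNR map = mean / std; inspect it in deep GM / brainstem, where it's lowest:
mrcalc b0_mean.mif b0_std.mif -divide snr.mif

# Export the gradient scheme and report statistics on the direction sets:
mrinfo dwi.mif -export_grad_mrtrix grad.b
dirstat grad.b
```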
Really, this depends on what you hope to do with the data – and future methods will undoubtedly come along that might require higher b-values, etc. It’s really hard to provide any definitive recommendation for a future-proof acquisition, or even for an optimal acquisition for current methods: there are so many things that can be done with these data, and what’s optimal for one type of analysis may not be optimal for another…