OK, that’s a huge (set of) questions… I’ll try to answer as best I can. First off though, the usual caveat: much of this is subjective opinion, I don’t think there is such a thing as an ‘optimal’ acquisition without defining very precisely what it is you will be using it for, and what your constraints are (you can always improve your acquisition by spending more time, for instance). That said, I think we generally all agree on what a ‘good’ acquisition is, and even more so on what a ‘bad’ acquisition looks like…
In general, I think your decisions look very reasonable, and the protocol as a whole seems pretty solid to me. Your responses and FODs also look clean to me.
On the SNR calculation issue, a few points:
- the dwidenoise -noise output provides a map of the estimated pure Gaussian noise. Anything that doesn’t look like pure thermal noise will not be considered as noise, and that includes physiological noise like signal dropouts and instabilities due to motion and other processes. In many ways, it’s the lower bound for the noise in your signal.
- the standard deviation of the b=0 volumes is probably more reflective of the genuine signal variations in your data, due to thermal noise or other issues. However, you’ll find that this measure is often greater in the b=0 images than in the DWI volumes within the CSF (there seems to be quite a bit of instability in the CSF signal for some reason, probably due to CSF pulsation or flow). So that measure is OK as long as you don’t include CSF regions in your estimate.
- on modern multi-channel systems, the SNR is spatially variable. While a single summary value might be informative as an overall benchmark, you should at least restrict it to those parts of the brain you’re interested in, most likely the white matter. Using a whole-brain mask will bring in the CSF, which can be problematic. You can improve things by measuring the noise and signal within a crude white matter mask, e.g. by thresholding the power in the l=2 SH fit to your highest shell using:
dwiextract dwi.mif -no_bzero -singleshell - | amp2sh - - | sh2power - -spectrum - | mrconvert - -coord 3 1 - | mrthreshold - wm_mask.mif
and taking the ratio of the mean or median signal within that mask in the b=0 image to the mean or median noise level within that mask.
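To make that concrete, the noise map, b=0 standard deviation and masked statistics could be computed along these lines (a sketch only: filenames like dwi.mif and wm_mask.mif are placeholders, and the options should be checked against your MRtrix3 version):

dwidenoise dwi.mif dwi_denoised.mif -noise noise.mif
dwiextract dwi.mif -bzero - | mrmath - std -axis 3 std_b0.mif
dwiextract dwi.mif -bzero - | mrmath - mean -axis 3 mean_b0.mif
mrstats mean_b0.mif -mask wm_mask.mif -output median
mrstats noise.mif -mask wm_mask.mif -output median

The ratio of those last two values would then give the masked SNR estimate described above.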
Better yet, generate the SNR map by simply dividing your mean b=0 image by your noise estimate, and check the values in the most problematic relevant regions. That map will probably be quite noisy, so you can also filter it with
mrfilter smooth or mrfilter median. You’ll typically find that the SNR will be lowest in the brainstem, so you may want to ensure you have adequate SNR in that region, which will also mean higher SNR elsewhere.
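As a sketch of that map generation (filenames such as dwi.mif and the noise.mif output of dwidenoise -noise are assumptions; adjust to your own data):

dwiextract dwi.mif -bzero - | mrmath - mean -axis 3 mean_b0.mif
mrcalc mean_b0.mif noise.mif -div snr.mif
mrfilter snr.mif median snr_median.mif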
Looking at your data in particular, your SNR seems a bit low for a 2x2x2mm acquisition. I note that you’re also using the denoised data to compute the standard deviation of the b=0…? If anything, that will underestimate the noise: the standard deviation computed from the non-denoised data would be higher still, so your true SNR may be even lower…
In answer to more specific questions:
Yes, I think it looks sufficient, but I would try to get a better handle on what your SNR actually is, as mentioned above. In my experience, shooting for SNR>5 in the outer shell requires compromising quite heavily on resolution, and this amount of SNR would in my opinion only be required for more complex microstructure modelling.
This is difficult to judge from the information presented. Ultimately, I recommend you try acquiring a few datasets at slightly different resolutions, process them, and have a look at the quality of the reconstructions you get, in the context of what you want to do with them…
Yes, I think so – provided your SNR is not too low, as per the recommendations above. You can also do quite a bit to avoid the Rician bias if you can acquire the complex DWI data (see below).
That is again dependent on what you want to do… Our current ‘default’ pipelines only require 3 distinct b-values (b=0 + 2 shells), but things will no doubt evolve over time. Personally, I think if you’re setting up a relatively future-proof protocol that you intend to be used for multiple studies for the foreseeable future, you’d be well advised to acquire an additional shell if you can…
I think both have some value, as long as you understand what each is reporting on. Personally, I think the std(b=0) is probably more useful as it captures more sources of variation. There are also other ways of estimating the noise from the DWI data, e.g. using adjusted residuals from a SH fit, and that’s what I might often use myself. But the standard deviation of the b=0 is, I think, the most widely used, the easiest to explain and the least controversial…
I’m not sure what this one does… Is this correcting for frequency drift during the acquisition…? It might be worth doing if you notice signal drift over the course of your acquisition, but I can’t say I have much experience here.
We do for the dHCP, but that’s a very unusual cohort, and requires a very bespoke acquisition sequence… This is not usually an option using standard sequences – unless this is a new feature I’ve not come across yet.
I would avoid these, since they’ll no doubt entail some loss of resolution, and the Gibbs ringing can be removed more effectively using mrdegibbs.
There is some evidence that it might help, but I must admit this is not something I ever do…
Final point: if you can get your scanner to output the full complex images (i.e. phase in addition to magnitude), it’s very much worth doing. Denoising the complex data vastly improves performance (see e.g. this paper), because the noise is then genuinely Gaussian (not Rician), which better matches the MP-PCA algorithm’s assumptions; and because the denoising happens in the complex domain, the subsequent magnitude images have vastly reduced Rician bias.
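For what it’s worth, a rough sketch of how this might look with MRtrix3 tools, assuming magnitude and phase images in mag.mif and phase.mif (those names are placeholders), with the phase scaled to radians (check your scanner’s conventions):

mrcalc mag.mif phase.mif -polar complex.mif
dwidenoise complex.mif complex_denoised.mif
mrcalc complex_denoised.mif -abs mag_denoised.mif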