Intensity normalization on large groups

Hi,

We are working with a large set of traditional, non-HARDI diffusion scans (~30 directions), subsets of which may be used in different analyses. We had been using dwinormalise on one subset (after bias/eddy correction), but now that we’re pursuing analyses on different subsets (and/or the whole cohort), running dwinormalise separately for each potential analysis seems excessive in terms of both computation and disk storage, so we were considering using the whole set as the basis for one super-group template and slicing from there. Would this be valid? And can I do the two parts (group template creation and intensity normalization) separately, without completely rewriting the script, so that I could use a smaller representative sample for the group template (and/or add subjects afterwards)?

Since this is not HARDI data, I’m presuming the contemporary solution, mtnormalise, is not available to us, but happy to hear otherwise! As I mentioned above, subjects could potentially be added to this group, so a normalization that doesn’t depend on a pre-defined group would be really nice.

Thanks for your advice!

Hi,

You could still use mtnormalise with just two FODs (WM and CSF); see here. Or you could try the new SS3T framework.
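If you go that route with single-shell data, a minimal sketch might look something like the following (file names here are just placeholders for your own data):

```bash
# Response functions for WM, GM and CSF (the GM response is simply not used below)
dwi2response dhollander dwi.mif wm_response.txt gm_response.txt csf_response.txt -mask mask.mif

# Two-tissue CSD: with b=0 plus a single shell you can fit WM + CSF
dwi2fod msmt_csd dwi.mif wm_response.txt wmfod.mif csf_response.txt csffod.mif -mask mask.mif

# Joint bias field correction and intensity normalisation of both tissue maps
mtnormalise wmfod.mif wmfod_norm.mif csffod.mif csffod_norm.mif -mask mask.mif
```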

Best regards,

Manuel


Thanks. Looking deeper at what’s involved, maybe we don’t want to go down the full multi-tissue route quite yet.

Now that I re-read what I wrote, I said “dwinormalise” when I meant “dwiintensitynorm”. Actually, looking at the script, it doesn’t seem that hard to do what I described; again, provided it is valid for the group used to create the group template to differ from (or be a superset of) the group that will be used for a given analysis.

Hi Syam,

What was formerly the dwiintensitynorm script is now provided as dwinormalise group; so that may be a source of confusion if cross-referencing your own processing with prior discussions on the forum.

For clarity: when you say “non-HARDI”, do you mean that your DWI volumes don’t correspond to equal b-values with different directions, but instead have some more complex organisation in q-space? I think @mblesac may have mistaken “HARDI” for “multi-shell”, but I want to be sure just in case it was two mistakes cancelling one another out :upside_down_face:

Personally, I would consider your question by flipping it on its head. Let’s say you were to combine data across all scans into a single dwinormalise group call, such that a single FA template image is generated and the same WM mask in the space of that template is used to intensity normalise all scans. Now consider the conditions under which this would turn out to have been a bad decision. How different would the diffusion data from a specific subgroup need to be, in order for the FA template image and WM mask to be sufficiently biased due to inclusion of those data, for the downstream results to be substantially different to what they would have been had dwinormalise group been run on a tailored subgroup for a specific hypothesis?

So as long as you’re not talking about gross differences in age or severe neurological disorders, personally I’d be pooling everything together into a single normalisation step, and keeping just one copy of derivative DWI data for each participant. It makes the scope of possibilities for hypothesis testing down the line far broader. But that’s just me.
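As a sketch, pooling everything would amount to a single call along these lines (the directory and file names below are just placeholders):

```bash
# dwi_all/ holds one bias/eddy-corrected DWI per participant; mask_all/ the matching brain masks
dwinormalise group dwi_all/ mask_all/ dwi_normalised/ fa_template.mif fa_template_wm_mask.mif
```

That writes every normalised DWI to the one output directory, along with the FA template and template-space WM mask that come up again below.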

Note however that it still won’t be ideal for post-hoc addition of subjects. Within dwinormalise group, it is the non-linear transformations estimated in the population_template step that are utilised to transform the template WM mask to individual subject space. For adding new subjects after the fact, you would need to manually perform an explicit registration between the novel subject’s FA image and the FA template image, and then manually transform the template WM mask to that subject in order to run dwinormalise individual. This means that the process of deriving the non-linear warp between subject and template FA images would not quite be identical between those subjects used to construct the template, and those added subsequently. A solution to this would be to treat all subjects as post-hoc additions, and perform that explicit registration / transformation / dwinormalise individual for all participants regardless of whether or not they were used to produce the template.
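A rough sketch of that post-hoc route, with placeholder file names matching the pooled call above, might look something like:

```bash
# FA image for the new subject, from a tensor fit on the preprocessed DWI
dwi2tensor new_subject_dwi.mif - | tensor2metric - -fa new_subject_fa.mif

# Non-linear registration of the subject FA image to the existing FA template
mrregister new_subject_fa.mif fa_template.mif \
    -nl_warp subject2template_warp.mif template2subject_warp.mif

# Bring the template WM mask into the subject's space
mrtransform fa_template_wm_mask.mif -warp template2subject_warp.mif -interp nearest new_subject_wm_mask.mif

# Intensity normalisation of this subject's DWI using that mask
dwinormalise individual new_subject_dwi.mif new_subject_wm_mask.mif new_subject_dwi_normalised.mif
```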

Rob

Hi @rsmith,

Yes, you are right, I just assumed @Syam_Gadde was talking about single-shell data.

Best regards,

Manuel

Thanks, everyone; your responses were very helpful. Yes, we’re talking about single-shell data, and in the end we didn’t need to do intensity normalization for this analysis, but we’ll keep these suggestions in mind if we do end up going that route!
