Upsampling DWI vs tckgen defaults

Dear All,

I wanted to get your input about upsampling DWI as a preprocessing step in a connectome pipeline.
I have two sets acquired sequentially within the same scan session (each set contains 10 b=0 images acquired first, followed by 60 DWI volumes at b=3000; 1st set PA, 2nd set AP phase-encode direction; 2x2x2 mm resolution), plus T1 images at 1x1x1 mm resolution. Thus far, I have been using dwi2fod msmt_csd with single-shell data in the connectome pipeline.

Upsampling has been used inconsistently in connectome studies, including in those by the MRtrix group (e.g. the ACT paper), and did not prove statistically significant in improving accuracy in a past ISMRM challenge. This step is nonetheless proposed in the fixel/AFD/FBM analysis on the MRtrix3 website. Since some tckgen defaults are linked to voxel size, I wanted to ask whether the defaults should be changed after upsampling; if so, this may favor short fibers given that -minlength would change, which would not necessarily be a bad thing for my purpose (if seeding from the gmwmi, this may emphasize U fibers).
Please comment.
The larger question is whether one, if interested in subject-specific connectomes, should use ‘the usual’ ACT/SIFT(2)/edge weighting or an AFD analysis (where ‘intra-axonal space’ metrics could be calculated per node, say at the gmwmi, in a connectivity-based fixel enhancement analysis), or whether the two are equivalent. (If AFD is the way to go, one may use inter-subject registration as recommended, or trick the pipeline by replicating the same subject several times to generate the common template.)

Thank you

Octavian

Hi Octavian,

I’ll throw in my thoughts, but they may well differ from others.

With upsampling, you really want to consider the benefits & detriments of doing so for any particular context, and it is indeed quite context-specific. I’ll give a few:

  • In AFD analysis, we up-sample:

    • Primarily because it provides improvements in registration accuracy
    • It also makes fixel-wise inference “sharper” / less “blocky”, but means it takes way more memory
  • For tracking: up-sampling can theoretically give better tracking in regions of high curvature (e.g. if FOD lobe orientations change drastically between two adjacent voxels, interpolating their SH coefficients may not yield a smooth transition in peak orientation). With the extent of orientation dispersion under our current default parameters, I’m not sure how much of a difference it will make for probabilistic tracking; conversely, for deterministic FOD tracking it might make a big difference.

  • In the ACT paper, the primary reason for using up-sampling was that I was using my own implementation of a voxel-based method for estimating the inhomogeneity field (this was from the dark, pre-eddy times), and I found that using an up-sampled image greatly improved the stability and accuracy of that algorithm, particularly since it made the discrete Jacobians better-posed.
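For concreteness, a typical up-sampling step in current MRtrix3 uses mrgrid; the 1.25 mm target below follows the fixel-based analysis documentation, and all filenames are placeholders for your own data:

```shell
# Up-sample the preprocessed DWI series to 1.25 mm isotropic
# (the voxel size suggested in the MRtrix3 fixel-based analysis docs);
# filenames are placeholders
mrgrid dwi_preproc.mif regrid -voxel 1.25 dwi_upsampled.mif

# Regrid the brain mask onto the same grid, using nearest-neighbour
# interpolation so it stays binary
mrgrid mask.mif regrid -template dwi_upsampled.mif -interp nearest mask_upsampled.mif
```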

Since some tckgen defaults are linked to voxel size, I wanted to ask whether the defaults should be changed after upsampling; if so, this may favor short fibers given that -minlength would change, which would not necessarily be a bad thing for my purpose (if seeding from the gmwmi, this may emphasize U fibers).

Well, there’s no way to change the “defaults”, but the “recommendations” probably do change. We set the defaults based on voxel size rather than absolute values because we deal with data from different species & across the lifespan; but with up-sampling & high-resolution data being “trendy”, that heuristic has somewhat broken down. Personally I would set step size, minimum & maximum lengths to what they would have been had you not upsampled, at least as a starting point.
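As a sketch of that recommendation (assuming, per the tckgen documentation, that the iFOD2 defaults scale with voxel size as step = 0.5 × voxel size, minlength = 2 × voxel size with ACT, and maxlength = 100 × voxel size; check the exact multipliers against your MRtrix3 version, and note all filenames are placeholders):

```shell
# Tracking on the UP-SAMPLED data, but with step size and length limits
# pinned to what the defaults would have been at the ORIGINAL 2 mm voxel
# size, rather than letting them shrink with the new voxel size.
tckgen wmfod_upsampled.mif tracks.tck \
    -act 5tt.mif -seed_gmwmi gmwmi.mif \
    -step 1.0 \
    -minlength 4 \
    -maxlength 200 \
    -select 10M
```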

The larger question is whether one, if interested in subject-specific connectomes, should use ‘the usual’ ACT/SIFT(2)/edge weighting or an AFD analysis (where ‘intra-axonal space’ metrics could be calculated per node, say at the gmwmi, in a connectivity-based fixel enhancement analysis), or whether the two are equivalent. (If AFD is the way to go, one may use inter-subject registration as recommended, or trick the pipeline by replicating the same subject several times to generate the common template.)

I think there are a few concepts here getting muddled together.

  • The easy one to answer is that SIFT(2) edge weighting is “equivalent” to AFD, though as a measure of fibre cross-sectional area of a connection rather than a local WM fibre volume, taking both microscopic density and macroscopic scale into account.

  • I’m not quite sure what you’re suggesting regarding calculating intra-axonal space at the GM-WM interface, but:

    • Such a measure would be least reliable at a tissue interface, so I’m not sure why you would want to quantify it there.

    • If your quantitative metric is generated per node prior to connectome construction, then it’s unclear what information the tractogram would be contributing. Usually with “image-contrast-based” connectomes, the quantitative metric is sampled along the pathway for each edge. But if you want to do this in conjunction with AFD: see “easy answer” above.

  • I don’t understand what you’re trying to achieve by duplicating the subject to generate a template image. The whole point of a template image is to get spatial correspondence between subjects; if you don’t need such correspondence, you would simply omit the registration step from your analysis…? Unless you’re trying to ask a deeper question regarding AFD inter-subject intensity normalisation?
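To make the SIFT2-vs-sampled-metric distinction above concrete, a minimal command sketch (filenames are placeholders; check the options against your MRtrix3 version): SIFT2 produces one weight per streamline, which tck2connectome then sums per edge, whereas an “image-contrast-based” connectome samples a quantitative map along each streamline and averages within each edge:

```shell
# Edge weights via SIFT2: one weight per streamline, summed per edge
tcksift2 tracks.tck wmfod.mif sift2_weights.txt -act 5tt.mif
tck2connectome tracks.tck nodes.mif connectome_sift2.csv \
    -tck_weights_in sift2_weights.txt

# Alternative "image-contrast-based" connectome: sample a quantitative
# map along each streamline, then average those per-streamline values
# within each edge
tcksample tracks.tck metric.mif streamline_means.csv -stat_tck mean
tck2connectome tracks.tck nodes.mif connectome_metric.csv \
    -scale_file streamline_means.csv -stat_edge mean
```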

Rob