Choice of tractography algorithm & options

Hi there,

I have multi-shell diffusion imaging data with b = 0 (7 volumes), b = 700 (20 directions), b = 1000 (32 directions) and b = 2000 (62 directions) from patients 1-2 weeks post-stroke. I have run all the necessary preprocessing (denoising, unringing, motion and distortion correction), followed by dwi2response dhollander for response function estimation and dwi2fod msmt_csd for estimation of the FODs using multi-shell multi-tissue constrained spherical deconvolution. I would now like to run whole-brain tractography with ACT (using a 5TT image) and SIFT, excluding the lesion.
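
For reference, here is a minimal sketch of the commands behind that pipeline (file names are placeholders, and the exact dwifslpreproc options will depend on how the data were acquired):

```
# Denoise and remove Gibbs ringing
dwidenoise dwi_raw.mif dwi_den.mif
mrdegibbs dwi_den.mif dwi_den_unr.mif

# Motion & distortion correction (assumes phase-encoding info is in the header)
dwifslpreproc dwi_den_unr.mif dwi_preproc.mif -rpe_header

# Response functions and multi-shell multi-tissue CSD
dwi2response dhollander dwi_preproc.mif wm.txt gm.txt csf.txt
dwi2fod msmt_csd dwi_preproc.mif wm.txt wmfod.mif gm.txt gm.mif csf.txt csf.mif

# Five-tissue-type image for ACT, from a coregistered T1
5ttgen fsl T1_coreg.mif 5tt.mif
```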

I could use your input on the following:

  1. I am a bit puzzled as to which algorithm (deterministic or probabilistic) I should use. In our lab and in our field (aphasia), deterministic approaches seem to be preferred, although I have read on the forum and in your papers that probabilistic approaches are recommended. I’m also aware that it depends on what you want to do with the data. Ideally, the approach would be appropriate both for connectome construction followed by network-based statistics to find a subnetwork correlated with some behavioural variable, and for finding all connections between an ROI and the rest of the brain. I have tried SD_Stream and iFOD2 so far (example calls after this list), but I’m not sure how to choose between the two…

  2. Number of streamlines. I am following the BATMAN protocol, which uses 10 000 000 streamlines, but I am not sure what a sensible amount is. If I use a deterministic algorithm, the file size gets very big (17-32 GB).

  3. Seeding. I have tried using 50 random seeds per voxel, and seeding from the GM-WM interface. When I use random seeds per voxel and then reduce the tractogram to 200k streamlines (with tckedit -number 200k, for visualisation purposes), there are almost no tracts in the right hemisphere (with both the probabilistic and the deterministic algorithm). I thought this might be because there are lots of short fibres where the lesion is, but it also happens when the lesion is excluded. It does not happen with the GM-WM interface seed. Any ideas on this? And what would be a more sensible seeding strategy?

  4. Other options: for now I have left most options at their default values; I did once set the minimum and maximum fibre length (20-400 mm), as in the calls below. Would you recommend adjusting any other options?
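
For reference, these are (roughly) the two tckgen calls I have been comparing; file names are placeholders, and the lesion is excluded via -exclude:

```
# GM-WM interface image for seeding
5tt2gmwmi 5tt.mif gmwmi.mif

# Probabilistic (iFOD2 is the default algorithm)
tckgen wmfod.mif tracks_prob.tck -act 5tt.mif -backtrack -seed_gmwmi gmwmi.mif \
    -exclude lesion.mif -minlength 20 -maxlength 400 -select 10000000

# Deterministic
tckgen wmfod.mif tracks_det.tck -algorithm SD_Stream -act 5tt.mif \
    -seed_gmwmi gmwmi.mif -exclude lesion.mif -minlength 20 -maxlength 400 -select 10000000
```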

I know this is a lot of questions; thank you in advance for your input!

Best,

Klara

Hi Klara,

  1. In our neck of the woods, we’re advocates of probabilistic approaches; hence the default for tckgen is a probabilistic algorithm (iFOD2) if no other is explicitly selected. The argument about which is “better” is unlikely ever to be resolved in the field, and interrogating the pros and cons of each exhaustively would be a complete manuscript in its own right. Unless you have an explicit justification for deviating from the software’s recommendation in your particular circumstance, the easy answer is to just go with the default; there’s sufficient precedent set, and no reasonable reviewer could knock back a manuscript for not using their particular favourite algorithm.

  2. Again, there’s precedent for 10 million, even if it hasn’t been exhaustively evaluated. But these things also depend on precisely what is being quantified. Maybe the best reference currently is this article; we also started down this road here (1558). Others should feel free to provide further references, as I’m curious myself whether anything has been published in this area that I’m not aware of.
    The reason the files from the deterministic algorithm are so much larger is that its step size is smaller (1/10 vs. 1/2 of the voxel size), so more vertices are stored per streamline. If you were to specify -downsample 5 in your tckgen call, you’d get files of a size comparable to iFOD2 (see the example after this list).

  3. Different seeding strategies can operate very differently internally. When seeding randomly from an image, or from the GM-WM interface, each streamline seed point is drawn at random from anywhere within the seed region. Conversely, with the -seed_random_per_voxel option, while the position of each seed point within a voxel is determined at random, the requested number of seeds is placed in each voxel of the mask sequentially. This means that the algorithm starts drawing seeds in one corner of the image and finishes in the opposing corner, so selecting the first 200k streamlines in the file is not a representative subset of the complete tractogram, but a highly biased one. Ideally, what you would need here is a way to select some number of streamlines at random from the complete set; I used to have code to do this, but never integrated it into tckedit. Alternatively: since -seed_random_per_voxel yields a total number of seeds equal to the product of the number of voxels in the seed image and the number of seeds per voxel, you could instead use -seed_image with -seeds to generate the same total number of seeds while sampling randomly from the mask on each draw, removing the order dependence (though no longer guaranteeing the exact same number of seeds in every voxel); a sketch of this follows after this list.

  4. Again, deviating from the default parameters would require sufficient justification, either from the nature of a specific dataset or from a specific experimental hypothesis, neither of which I think applies here. But there’s nothing stopping you from generating tractograms with different parameters and evaluating the influence of those parameters.
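
For point 2, a minimal example of the -downsample suggestion (file names are placeholders):

```
# SD_Stream stores a vertex every 0.1 x voxel size by default; keeping every
# 5th point gives ~0.5 x voxel size spacing, comparable to iFOD2 output
tckgen wmfod.mif tracks_det.tck -algorithm SD_Stream -downsample 5 -select 10000000
```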
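
And for point 3, one way to implement that alternative, assuming a target of 50 seeds per voxel (mask.mif is a placeholder for your seed mask):

```
# Count the non-zero voxels in the seed mask
nvox=$(mrstats mask.mif -mask mask.mif -output count)

# Draw the same total number of seeds, but sampled at random within the mask;
# -select 0 disables the streamline-count criterion, so tracking stops once
# the requested number of seeds has been consumed
tckgen wmfod.mif tracks.tck -seed_image mask.mif -seeds $((nvox * 50)) -select 0
```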

Cheers
Rob


Hi Rob,

Thanks a lot for your answers; that is really helpful! We did indeed choose a probabilistic approach with mostly default parameters in the end, after diving into the literature a bit more. As for the filtering: I thought tckedit -number 200k would select streamlines at random, so that explains a lot!

Have a nice day 🙂

All the best,

Klara