I am having trouble understanding the tracking results I get from trials with different step sizes.
Using a smaller step size appears to produce fewer spurious/false-positive streamlines than using a larger step size. This is the reverse of how I would expect them to behave.
The HARDI data I am working with: b = 3000 s/mm²; 60 directions with 7 b = 0 volumes; 2.3 mm isotropic; TR/TE: 7600/110 ms.
Using iFOD2 (SD_PROB), tracking the corticospinal tract.
Seed: PLIC; include: cerebral peduncle and pons; exclude: midline, ALIC and the retrolenticular portion of the internal capsule.
Step sizes trialled: 0.1, 0.2, 0.3, 0.5 and default (~1) mm.
Default angle (~39 degrees); default FOD threshold.
Number of streamlines: 2500.
This is actually very much the expected behaviour.
Here’s an old ISMRM abstract from a clever guy who looked into this:
Ack, beat me to it…
The effect basically arises as follows.
- At each discrete step, the algorithm takes an independent sample of the FOD profile obtained through trilinear interpolation. The shape of the FOD profile determines the possible directions in which the streamline may propagate (and their associated probabilities).
- Imagine the following two extremes:
- If your tracking step size were approximately 1 voxel, this could be thought of as sampling from the uncertainty in the FOD in each voxel traversed just once. The distribution of streamline orientations within the voxel would therefore be approximately equivalent to the shape of the FOD.
- If your tracking step size were ~ 1/100th of a voxel, then the algorithm would effectively be taking 100 independent samples of the FOD in each voxel it passes through. This has the effect of sharpening the probability distribution (imagine multiplying a 1D Gaussian by itself multiple times and renormalising), such that the orientation dispersion between streamlines within the same bundle becomes smaller.
- The amount of streamline orientation dispersion will influence how likely it is for a streamline to latch on to a crossing fibre population, which may or may not be a false positive depending on precisely where you look.
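The sharpening effect described above can be sketched numerically. This is a toy illustration, not the actual iFOD2 machinery: it collapses the FOD's orientation profile to a 1-D Gaussian (an assumption made purely for illustration) and shows that taking k independent samples per voxel, which amounts to multiplying the density by itself k times and renormalising, shrinks the dispersion by a factor of sqrt(k):

```python
import numpy as np

# Toy model: the FOD's orientation uncertainty within a voxel is treated
# as a 1-D Gaussian over an angular offset. k independent samples per
# voxel correspond to raising the density to the k-th power.

theta = np.linspace(-90.0, 90.0, 20001)   # orientation offset (degrees)
sigma = 20.0                              # toy FOD dispersion (degrees)
pdf = np.exp(-theta**2 / (2 * sigma**2))  # unnormalised Gaussian profile

def dispersion(p):
    """Standard deviation of a density sampled on the uniform grid `theta`."""
    p = p / p.sum()                        # renormalise
    mean = (theta * p).sum()
    return np.sqrt(((theta - mean)**2 * p).sum())

# k = 1: step ~ 1 voxel; k = 4: ~ 1/4 voxel; k = 100: ~ 1/100th of a voxel
for k in (1, 4, 100):
    print(f"{k:3d} samples/voxel -> dispersion {dispersion(pdf**k):5.2f} deg")
    # dispersion shrinks as sigma / sqrt(k): 20.00, 10.00, 2.00
```

The smaller the effective dispersion, the less likely a streamline is to wander onto a crossing bundle, which is consistent with the fewer "false positives" observed at small step sizes.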
Unfortunately there’s no ‘right’ answer on this subject currently…
Wow, you guys remembered my old abstract? I’m touched…
Just to add to this: any reduction in false positives is almost guaranteed to also increase false negatives… You gain in specificity only if you sacrifice sensitivity. I've always been of the opinion that the best compromise should tend towards maximum sensitivity, given that currently the only direct clinical application of tractography is for neurosurgery: you really don't want the algorithm to miss potentially important tracts just because the results look prettier…
Big thanks, Experts!
A related question.
When running tckgen with the tracking diagnostics (-info), I noticed an item called "iFOD2 internal step size", which is ~0.3 of the step size. (If the designated step size is what lets you hop from one FOD to the next, I would expect the two values to be the same?)
The way iFOD2 works is that it integrates the FOD over the length of the track segment to figure out its probability. To do this, it takes samples at regular intervals along the segment. By default, it will take 4 samples along each segment. The internal step size refers to the distance between these samples, so that should be a quarter of your nominal step size (might be a third depending on whether the current point is included, can’t quite remember…).
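The arithmetic behind that ~0.3 ratio can be sketched as follows. This is a back-of-envelope check only, assuming the 4-samples-per-segment default mentioned above and that the current point counts as the first sample (which, as noted, may or may not be how it is implemented):

```python
# Back-of-envelope check of the "~0.3 of the step size" observation.
# Assumption: 4 samples per segment (the stated iFOD2 default), with the
# current point included as the first sample, so the samples divide the
# segment into 3 sub-intervals. If the current point were excluded, the
# divisor would instead be `samples`, giving a quarter of the step size.
step = 1.0                       # nominal step size in mm
samples = 4                      # samples per segment
internal = step / (samples - 1)  # distance between consecutive samples
print(internal)                  # ~0.33 * step, consistent with the ~0.3 seen
```

Either way, the internal step size is a property of how finely iFOD2 integrates the FOD along each candidate segment, not of how far the streamline advances per step.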
Thanks for the explanation. Makes perfect sense too.