Tckedit after targeted tracking

Dear MRtrix experts,

I’m looking to apply SIFT2 after targeted tracking as follows:

(1) Preprocessing/MT-CSD including group avg response functions and mtnormalise
(2) Seed-to-target tracking of a particular pathway with tckgen
(3) Whole-brain tractogram with tckgen using dynamic seeding
(4) Merge pathway of interest with WB tractogram using tckedit
(5) SIFT2
(6) Select pathway of interest/output track weights using tckedit with the same seed and target regions as step 2

I’m finding that tckedit in step 6 does not re-capture all the streamlines that were originally created in step 2. To double-check this, I performed tckedit again directly after step 2 and observed the same behavior. Sample commands:

tckgen -include Target.nii.gz -act 5tt.mif -backtrack -select 500 -seeds 0 -seed_image Seed.nii.gz -cutoff 0.05 -step 1 -angle 75 -nthreads 12 -info wmfod_norm.mif pathway.tck

tckedit -include Target.nii.gz -include Seed.nii.gz pathway.tck pathway_test.tck

I observe that pathway.tck has 500 streamlines and pathway_test.tck has fewer, e.g., 450 streamlines. This behavior is partially ameliorated by dilating Target.nii.gz and Seed.nii.gz prior to tckedit. Is this expected, based on how tckgen decides whether streamlines have intersected the target? In practice I am using some exclusion planes with tckgen and tckedit, but these don’t appear to be the source of the problem. I am not using a mask with tckedit. In the context of SIFT2, I’m concerned about the streamlines that are lost at the tckedit stage (step 6), because their associated weights will not contribute to the total connectivity estimate for the pathway. I would use tck2connectome if not for my use of exclusion planes, which are needed because the pathway of interest (acoustic radiation) is small and difficult to capture accurately.

Thanks for your help!

Jon

Hi Jon,

Unfortunately I can’t explain the behaviour you’re seeing. There are a few nuances that can, in other instances, contribute to observations like this, but nothing in your use case raises a red flag for me. Providing some example data may help.

Also, in case it’s relevant:

I would use tck2connectome if not for my use of exclusion planes, which are needed because the pathway of interest (acoustic radiation) is small and difficult to capture accurately.

If parcellation-based targets make sense, you could use tck2connectome -out_assignments, then connectome2tck, then tckedit to apply the exclusion planes; SIFT2 weights can be propagated throughout.

Rob

Jon,

Thanks for providing some data. It didn’t take too long for me to twig as to what’s going on, and in retrospect I probably should have figured it out without requiring any data…

What’s happening here is a combination of two effects, which are themselves due to different design decisions, so you’ll have to bear with me…

  • When streamlines are tested against a set of ROIs, both in tckgen and tckedit, it is simply the set of streamline vertices that are used to sample values from the underlying binary images (without interpolation). So it is possible for a streamline to “cross” the corner of a voxel that is within the ROI, but for none of the discrete vertex samples to lie within that voxel, such that the streamline is not considered to “intersect” that ROI.
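A toy 2D illustration of this effect (hypothetical coordinates, not MRtrix3 source): the segment between two stored vertices clips the corner of a voxel, yet neither vertex falls inside it, so a vertex-only test never sees that voxel.

```python
import math

def vertex_voxels(vertices):
    """Voxels 'visited' by a streamline when only its discrete vertices
    are sampled against a unit-voxel grid (no interpolation)."""
    return {tuple(math.floor(c) for c in v) for v in vertices}

# Two consecutive vertices ~1.3 mm apart; the straight segment between
# them briefly passes through voxel (0, 1) near the shared corner,
# but neither vertex lies within that voxel.
segment = [(0.4, 0.75), (1.6, 1.3)]
print(vertex_voxels(segment))  # -> {(0, 0), (1, 1)}; voxel (0, 1) is never seen
```

If the ROI were the binary mask of voxel (0, 1), this streamline would not be considered to intersect it, despite geometrically crossing it.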

  • Some behaviours that are specific to iFOD2, and some that are most relevant to iFOD2:

    • While the reported “step size” is by default half a voxel, the default “number of samples” is 4. What this actually represents is that each “step” is in fact an arc through space, with vertices at the start, end, and 2 in the middle of that arced trajectory. The FOD amplitudes at the tangents to this arc at those four points are what is used to calculate the probability of following each generated candidate trajectory.

    • When I incorporated ACT into MRtrix3, I decided to keep the intermediate points along each arc, rather than using only the two endpoints that are separated by the “step size”. This provides a denser sampling of the underlying 5TT image: the default step size would commonly be larger than the voxel size of this image, so taking values from that image only at the “step size” of the tracking algorithm could lead to jumping over tissue contrast.

    • This however led to larger track files, which some people did not like. Accordingly, I implemented downsampling within tckgen: Once a streamline has been fully generated using some step size, the command may then reduce the number of vertices it writes to the output file.

      • This downsampling algorithm always preserves the seed point and both endpoints, regardless of the extent to which you downsample. This also means that the distance from a streamline endpoint to its neighbouring vertex is not guaranteed to be the step size, which needs to be accounted for in streamline length calculations.

      • You may notice in the output of tckinfo that track files contain both the “step_size” and “output_step_size” fields. The former is the step size utilised by the tracking algorithm; the latter is the product of the step size and the downsampling factor, as an approximation of the distance between vertices as the data are stored within the file. Any calculations in MRtrix3 involving streamline length use the latter when available, as it’s more reflective of the data within the file.

    • By default, the downsampling factor for iFOD2 is set to (nsamples - 1); that is, for the default of 4 samples per step (1 start, 1 end, and 2 in between for each arc), tckgen will downsample the streamline by a factor of 3; by doing so, the output track file will contain only the streamline vertices at the start and end of each “step”, and not the two in the middle of each “step”.
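A simplified sketch of the vertex-retention behaviour described above (the exact pattern of retained vertices is illustrative only; the real implementation anchors it differently):

```python
def downsample(n_vertices, factor, seed_index=0):
    """Indices of vertices kept when downsampling by `factor`:
    every `factor`-th vertex, plus (always) the seed and both endpoints."""
    keep = set(range(0, n_vertices, factor))
    keep.update({0, n_vertices - 1, seed_index})
    return sorted(keep)

# 11 vertices tracked at step s, downsampled by the iFOD2 default
# factor of nsamples - 1 = 3; output_step_size is then 3 * s ...
print(downsample(11, 3))  # [0, 3, 6, 9, 10]
# ... but the final gap (index 9 -> 10) spans only one tracking step,
# so endpoint-to-neighbour spacing need not equal the output step size.
```

This also shows why `output_step_size` (step size × downsample factor) is only an approximation of inter-vertex distance within the file.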

Therefore:

The reason that tckedit is not selecting all of the streamlines that you produced with tckgen, despite using exactly the same ROIs, is that tckedit is using a less densely sampled representation of the streamline than what was used during tckgen.

I can prove this in two ways:

  • If I first run tckresample on the streamlines, up-sampling the representation by a factor of 3, then run tckedit, it selects 493 out of the 500 streamlines, rather than 443/500 when not using tckresample.

  • If I disable down-sampling during tckgen by using -downsample 1 (unfortunately this won’t work for you currently, I need to fix some code there), then run tckedit, it selects all 500 streamlines.
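The effect of up-sampling on ROI assignment can be sketched as follows (linear interpolation here as a simplified stand-in for tckresample’s smooth interpolation; hypothetical coordinates):

```python
import math

def upsample(vertices, ratio):
    """Insert `ratio - 1` linearly interpolated vertices into each segment."""
    out = []
    for (x0, y0), (x1, y1) in zip(vertices, vertices[1:]):
        for k in range(ratio):
            t = k / ratio
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    out.append(vertices[-1])
    return out

def vertex_voxels(vertices):
    """Voxels sampled by discrete vertices on a unit-voxel grid."""
    return {tuple(math.floor(c) for c in v) for v in vertices}

# The segment traverses voxel (0, 1), but neither stored vertex lands
# there; after 3x up-sampling, an interpolated vertex does.
segment = [(0.4, 0.9), (1.6, 1.45)]
print(vertex_voxels(segment))               # {(0, 0), (1, 1)}
print(vertex_voxels(upsample(segment, 3)))  # {(0, 0), (0, 1), (1, 1)}
```

Note that a sufficiently short corner clip can still fall between even the up-sampled vertices, consistent with recovering 493 rather than all 500 streamlines.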

I suppose this then raises the question: How should streamlines be assigned to ROIs?

In the past we’ve debated whether linear or nearest-neighbour interpolation should be used, which affects the shape of the outer boundary defining the region in which streamline vertices should or should not be assigned to the ROI. But what you’ve drawn attention to here is a slightly different issue, as it involves not only the shape of that “inclusion region”, but also how densely each streamline is sampled.

In other contexts (e.g. tckmap -precise, tcksample -precise), MRtrix3 uses the method shown in Appendix 3 of this paper, which detects the voxel edges traversed by each streamline based on a continuous spline representation of it. This attempts to catch any voxel that the streamline intersects for only a very short segment length, and is therefore (or at least is intended to be) less dependent on the density of vertex samples along each streamline when assigning it to a voxel grid. It does, however, require greater computation than simply sampling an image underneath each streamline vertex.
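A minimal 2D sketch of edge-crossing traversal (an Amanatides–Woo-style voxel walk over straight segments, standing in for the spline-based method; not MRtrix3 source): it enumerates every voxel crossed, however short the intersection, independent of vertex density.

```python
import math

def voxels_crossed(p0, p1):
    """All unit voxels a straight 2D segment passes through, found by
    stepping across voxel boundaries in order of parametric distance."""
    x, y = math.floor(p0[0]), math.floor(p0[1])
    x_end, y_end = math.floor(p1[0]), math.floor(p1[1])
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    step_x = 1 if dx >= 0 else -1
    step_y = 1 if dy >= 0 else -1
    # parametric distance along the segment to the next x / y voxel boundary
    t_max_x = ((x + (step_x > 0)) - p0[0]) / dx if dx else math.inf
    t_max_y = ((y + (step_y > 0)) - p0[1]) / dy if dy else math.inf
    t_delta_x = abs(1.0 / dx) if dx else math.inf
    t_delta_y = abs(1.0 / dy) if dy else math.inf
    voxels = [(x, y)]
    while (x, y) != (x_end, y_end):
        if t_max_x < t_max_y:
            x += step_x
            t_max_x += t_delta_x
        else:
            y += step_y
            t_max_y += t_delta_y
        voxels.append((x, y))
    return voxels

# Vertex-only sampling of this segment would yield just {(0, 0), (1, 1)};
# the traversal also finds the briefly-clipped voxel (0, 1).
print(voxels_crossed((0.4, 0.9), (1.6, 1.45)))  # [(0, 0), (0, 1), (1, 1)]
```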

Would tckgen and/or tckedit benefit from having this more “precise” mapping of streamlines to voxel ROIs? Open to thoughts on the issue. :exploding_head:


Thanks, Rob. Makes perfect sense. I had actually tracked the same pathway before with a much smaller step size (0.2 mm) and, sure enough, 492/500 streamlines were recovered with tckedit. So a straightforward solution is to run tckgen with a smaller step size or, as you suggested, to upsample after tckgen. Since I’m interested in merging this pathway with a whole-brain tractogram for SIFT2, I assume that means I’d also need to obtain denser sampling for the WB tractogram, again by smaller step size or upsampling? That poses some file size issues. I’ve been building my WB tractograms with 10M or 20M streamlines, which would produce some gigantic files with denser sampling. Any suggestions there?

Regarding a “precise” option for tckgen/tckedit, I like the idea. Both of those utilities are multi-threaded, right? I assume that would keep computation time down to a manageable level on a relatively capable machine. Any plans for GPU-enabled functions?

I assume that means I’d also need to obtain denser sampling for the WB tractogram, again by smaller step size or upsampling? … Any suggestions there?

Perhaps better would be:

  1. Upsample streamlines data corresponding to pathway of interest after tckgen, or run tckgen for that pathway using a smaller step size.

  2. Run tckedit to refine selection of the pathway of interest.

  3. Downsample the streamlines data for the pathway of interest, so that the step size matches that of the whole-brain data.

  4. Concatenate pathway with whole-brain fibre-tracking data.

  5. Run SIFT2.

When concatenating track files, it’s preferable for them to have the same step size, as otherwise the output file from the concatenation process will flag that the step size is ‘variable’, preventing automated calculations of certain parameters down the line. Over and above that, the calculated length of a streamline (and the length of its intersection within each voxel traversed) can in fact change depending on the step size, hence it would be preferable for it to be equivalent between the targeted pathway and the whole-brain tractogram.
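For the step-size matching in step 3, the required `tckresample -downsample` factor is simply the ratio of the two output step sizes (hypothetical numbers below; the actual values can be read from the `output_step_size` fields reported by tckinfo):

```python
def downsample_factor(pathway_step, wholebrain_step):
    """Integer factor for `tckresample -downsample` that brings the
    pathway's vertex spacing up to match the whole-brain tractogram's."""
    ratio = wholebrain_step / pathway_step
    if abs(ratio - round(ratio)) > 1e-6:
        raise ValueError("output step sizes are not integer multiples; "
                         "consider re-tracking with a compatible step size")
    return round(ratio)

# e.g. pathway up-sampled to 0.25 mm vertex spacing,
# whole-brain tractogram with 0.75 mm output step size:
print(downsample_factor(0.25, 0.75))  # 3
```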

Regarding a “precise” option for tckgen/tckedit, I like the idea.

I’ve added it to GitHub issues. But it’s probably not likely to make its way to the top of the priority list any time soon. The list of issues is much longer than the list of people addressing them :ocean: :raising_hand_man:

Any plans for GPU-enabled functions?

As much as @jdtournier and I would love to nerd out by porting various MRtrix3 capabilities to GPU, it simply fails to take priority over fixing bugs / providing new features. So unless a third party chooses to take this on, it’s unlikely.