# Appropriate density of tck points for metric extraction

Hello everyone,

For a project, we want to extract dMRI metrics (FA, RD) from DWI data (2.5 mm^3 resolution) along fiber bundles of interest.

To do this, we use the tckresample and tcksample commands to convert the bundle to a set of equidistant points and then extract the scalar metrics at those points.

During this, the question arose of how "dense" the track file should be relative to the resolution of the diffusion data. The density of points making up the track file, and therefore being used for sampling, has two dimensions: along the bundle, and perpendicular to the main direction (density per equidistant plane).

The number of planes is configurable with the tckresample command, so we can calculate the distance between the endpoints (also required for the -line option) and then choose a sensible setting according to the DWI resolution.
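To make the along-bundle part concrete, here is a minimal sketch of how one might pick a point count for resampling from the endpoint distance and the voxel size. The function name, the `points_per_voxel` parameter, and the "two planes per voxel" default are all my own illustrative choices, not anything prescribed by MRtrix3:

```python
import math

def n_resample_points(start, end, voxel_size, points_per_voxel=2):
    """Suggest a point count for along-bundle resampling.

    start, end: endpoint coordinates in mm (the two points one would
    pass to tckresample -line); voxel_size: isotropic voxel edge in mm.
    Heuristic sketch only, not an MRtrix3 recommendation.
    """
    length = math.dist(start, end)  # straight-line endpoint distance in mm
    # Aim for roughly `points_per_voxel` sampling planes per voxel edge,
    # with a floor of 2 points (the two endpoints themselves).
    return max(2, math.ceil(points_per_voxel * length / voxel_size))

# e.g. endpoints 50 mm apart at 2.5 mm isotropic resolution
print(n_resample_points((0, 0, 0), (50, 0, 0), voxel_size=2.5))  # → 40
```

The resulting number would then be passed to tckresample; whether two planes per voxel is enough is exactly the open question of this thread.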

For the "density" per slice (equidistant plane), I am now wondering how to control this, and more generally whether any of this is necessary at all. I have not found anything in the literature yet, so I wanted to ask here.

Best Regards,
Darius

I realise that the above post might not be fully understandable, so let me rephrase.

My understanding of tcksample is that it samples metrics from several voxels of a volumetric image (using trilinear interpolation), once for each point in the provided track file.

I would therefore like to understand up to what density of vertices (number of points per volume) in the track file this method remains sensible, given the resolution of the diffusion image.

Hi Darius,

Pretty sure I follow your question; it’s the sort of thing that would be obvious with a figure but hard to put into words.

My answer here is actually the same as that to the perennial “how many streamlines do I need for my connectome?” question. If you generate an absurdly large number of streamlines, do your vertex resampling, and then interpolate your images at those points, what you will get is some weighted average across a set of voxels, where those voxels with many streamline vertices near their centre have a stronger influence on the mean. As you reduce the number of streamlines generated, you inherit imprecision in this measurement: repeating the experiment many times will give slightly different answers. How much imprecision is “tolerable”? Depends on the comprehensive details of the experiment, the magnitude of the effect you’re trying to detect, … Not sure it’s something that’s been explicitly reported on in the literature.
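The precision argument above can be illustrated with a toy Monte Carlo sketch (entirely hypothetical FA values, no interpolation, each vertex simply landing in a random voxel; none of this is MRtrix3 code). The point is only that repeating the experiment with few vertices gives a wider spread of mean values than repeating it with many:

```python
import random
random.seed(0)

def sampled_mean(n_vertices, voxel_values):
    """Toy model of metric sampling: average over n vertices, each of
    which picks up the value of a random voxel along the bundle."""
    return sum(random.choice(voxel_values) for _ in range(n_vertices)) / n_vertices

voxels = [0.4, 0.45, 0.5, 0.55, 0.6]  # hypothetical FA values along a bundle

for n in (10, 100, 10000):
    # Repeat the "experiment" 200 times and look at how much the mean varies.
    means = [sampled_mean(n, voxels) for _ in range(200)]
    spread = max(means) - min(means)
    print(f"{n:>6} vertices: spread of repeated means = {spread:.3f}")
```

The spread shrinks as the vertex count grows; where on that curve the imprecision becomes "tolerable" is the study-specific judgement call described above.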

If I had to pick a heuristic… reluctantly… I’d say something like:

• Quantify the maximal macroscopic cross-section of the bundle in mm^2
• Divide by (voxel_volume ^ 0.667) (to get a cross-section in mm^2 per voxel)
• The result should be no less than 10; 100 would be better (physicists like orders of magnitude).
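As a sketch of that heuristic in code (the function name, parameters, and example numbers are my own illustration of the rule of thumb above, nothing more):

```python
def vertex_density_ok(cross_section_mm2, voxel_volume_mm3, threshold=10.0):
    """Apply the rule of thumb: bundle cross-section divided by the
    voxel face area (voxel_volume ** (2/3)) should be at least ~10.
    Illustrative sketch only, not an MRtrix3 API."""
    voxels_per_plane = cross_section_mm2 / voxel_volume_mm3 ** (2 / 3)
    return voxels_per_plane >= threshold

# 2.5 mm isotropic voxels: volume 15.625 mm^3, face area 6.25 mm^2
print(vertex_density_ok(100.0, 15.625))  # 100 / 6.25 = 16  → True
print(vertex_density_ok(30.0, 15.625))   # 30 / 6.25 = 4.8  → False
```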

But anyone with relevant experience feel free to publicly expose that I’m making stuff up.

Also:

… samples metrics from several voxels (using trilinear interpolation) of a volumetric image …

Under default operation, yes. You can elect to use nearest-neighbour instead (the `-nointerp` option).
(The other option, `-precise`, isn’t applicable in your scenario, because you want values at vertices rather than a single value per streamline)

Cheers
Rob


Thanks a lot Rob! That helps.