tckgen + ACT: number of rejected streamlines

Dear all,

I ran tckgen on 30 subjects (with ACT and dynamic seeding):
tckgen upsampled_mbc_FOD.nii whole_brain_10m.tck -act …/5ttgen/5TT.nii -backtrack -crop_at_gmwmi -seed_dynamic upsampled_mbc_FOD.nii -number 10000000 -nthreads 12 -force

and I was surprised to find large differences in the ratio total_count / count.
For most subjects the ratio was between 1 and 2, but a few of them had a ratio of up to 20.

So should I be worried about those subjects? Or is this variability common?
What should I check to see whether I can trust the results?

Here is the output of tckgen with the -info option, on only 100,000 tracks:

tckgen: [INFO] opening image "../5ttgen/5TT.nii"...
tckgen: [INFO] step size = 0.5 mm
tckgen: [INFO] maximum deviation angle = 45 deg
tckgen: [INFO] minimum radius of curvature = 0.99999997217246561 mm
tckgen: [INFO] iFOD2 internal step size = 0.166666672 mm
tckgen: [INFO] rejection sampling will use 7 directions with a ratio of 2.15373969 (predicted number of samples per step = 12.9595909)
tckgen: [100%]  1314702 generated,   100000 selected
tckgen: [INFO] mean number of samples per step = -nan
tckgen: [INFO] mean number of steps between rejection sampling truncations = -nan
tckgen: [INFO] maximum truncation error = 0
tckgen: [INFO] Total number of track terminations: 1803495
tckgen: [INFO] Termination reason probabilities:
tckgen: [INFO]   Entered cortical grey matter: 31.646663838824061%
tckgen: [INFO]   Calibrator failed: 56.763173726569796%
tckgen: [INFO]   Exited image: 0.058331184727432012%
tckgen: [INFO]   Entered CSF: 0.014527348287630406%
tckgen: [INFO]   Bad diffusion signal: 0.0016634368268279092%
tckgen: [INFO]   Excessive curvature: 0%
tckgen: [INFO]   Max length exceeded: 10.04349887302155%
tckgen: [INFO]   Terminated in subcortex: 1.0304436663256622%
tckgen: [INFO]   Exiting sub-cortical GM: 0.44169792541703745%
tckgen: [INFO] Track rejection counts:
tckgen: [INFO]   Shorter than minimum length: 9746
tckgen: [INFO]   Longer than maximum length: 181134
tckgen: [INFO]   Poor structural termination: 1024014
tckgen: [INFO]   Failed to traverse white matter: 12
tckgen: [INFO] Dynamic seeeding required 162858468 samples to draw 1343179 seeds
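For reference, the ratio in question can be read straight off the progress line in the log above (a trivial sketch; total_count / count in the .tck header corresponds to streamlines generated vs. selected):

```python
# The ratio Romain refers to (total_count / count) corresponds to the
# "generated" vs. "selected" counts in the tckgen progress line above.
generated = 1314702
selected = 100000

ratio = generated / selected
print(f"ratio = {ratio:.1f}")
```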

Many thanks

Romain

Hi Romain,

With back-tracking I’d be concerned about a factor over 2 or 3. You definitely want to compare the output of mrinfo of those problematic subjects against the rest of the group to see if there’s anything glaringly obvious.

It also looks like you’re tracking on upsampled data without explicitly setting the maximum streamline length, hence ~10% of streamlines are exceeding that length and being rejected. The default maximum length is 100 voxels, which for upsampled data can be less than the length of the white matter.
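To make the voxel-based default concrete (a minimal sketch; the 100-voxel default is from the post above, and the voxel sizes are hypothetical examples, not Romain's actual data):

```python
# Why the default maximum streamline length shrinks (in mm) when data
# are upsampled: the default is 100 voxels, so halving the voxel size
# halves the maximum length in millimetres.
# (The voxel sizes below are made-up examples.)

DEFAULT_MAX_LENGTH_VOXELS = 100

def default_max_length_mm(voxel_size_mm: float) -> float:
    """Default maximum streamline length in mm for an isotropic voxel size."""
    return DEFAULT_MAX_LENGTH_VOXELS * voxel_size_mm

original = default_max_length_mm(2.5)    # e.g. acquired at 2.5 mm isotropic
upsampled = default_max_length_mm(1.25)  # after upsampling to 1.25 mm

print(original, upsampled)
```

Passing -maxlength explicitly (in mm) to tckgen removes this dependence on voxel size.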

A few other things you can check:

  • Compare the number of directions used for rejection sampling calibration across the different subjects.
  • You haven’t used MSMT CSD, have you? It can introduce issues when used in conjunction with ACT with the default FOD amplitude threshold.
  • Extract the l=0 term of the FOD image of each subject and compare across subjects. If you’ve got gross differences in either DWI intensity or response function size, this could result in inconsistent scaling of FOD size across subjects, which will influence streamline termination.
  • It may not help a great deal here, but it’s a neat trick nonetheless: If you remove the double-slash at the start of this line, and recompile, then tckgen will create a set of images in the working directory, highlighting the spatial distribution of streamlines terminations under different termination mechanisms.

Rob

Hi Rob,

Thanks for your useful comments.
I just noticed the default maximum length. This seems a strange choice, since track length should not depend on voxel size. What length in mm would you advise?

It may be a concern, but it is probably not the major problem, since it accounts for only 10% of the rejected streamlines. The rejections come mainly from “Poor structural termination”.

Note that this ratio of rejected streamlines increases with an increasing number of streamlines (a ratio of 35 with 43 million, versus a ratio of 10 with 0.1 million).

I checked the ACT file and its coregistration with the DWI; I do not see anything obvious.

I checked mrinfo: I have two types of acquisition, but the same problem appears in both types, so nothing obvious there.

The reason I upsampled the data is that I was following the Fixel-Based Analysis tutorial (but this makes the computation very long; maybe I should try with the original size…?):

  • I do not have multi-shell data, so I did not use MSMT CSD
  • I normalised the DWI data (using the mean DWI signal in a restricted CSF mask)
  • I used a group-average response function

I quickly compared the l=0 term of the FOD, and I see quite big variation (0.12 to 0.6 in white matter).
I need to investigate further with more precise ROIs (or with registration), but this variation is not correlated with a high ratio of rejected streamlines… (the subject with the smallest FOD l=0 amplitude was the one with the smallest ratio!)

To dig into this difference in FOD l=0 amplitude, I compared 2 extreme subjects:
the one with the smallest amplitude has a response function of
83.32344978 -28.79352133 10.01845657 -2.110375982 0.3318414489

the one with the highest amplitude has a response function of
429.9711944 -157.001299 34.50037871 -5.687002666 -0.05492293601

The average response function I used is:
185.17434 -68.66094 16.64707 -2.60875 0.31535
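The first (l=0) coefficient of the response function scales with overall DWI intensity, so the spread between these responses can be quantified directly (a quick sketch using the coefficients quoted above; the interpretation that FOD amplitude scales roughly as signal over response magnitude is a simplification):

```python
# Compare the first (b=0-scale) coefficients of the three response
# functions quoted above. Since FOD amplitude scales roughly as
# (DWI signal) / (response magnitude), applying a single average
# response to data whose intrinsic scale differs by ~5x will inflate
# FOD amplitudes in some subjects and shrink them in others.

smallest = 83.32344978  # subject with smallest FOD l=0 amplitude
largest = 429.9711944   # subject with largest FOD l=0 amplitude
average = 185.17434     # group-average response used for CSD

print(f"spread across subjects: {largest / smallest:.2f}x")
print(f"largest vs average:     {largest / average:.2f}x")
print(f"smallest vs average:    {smallest / average:.2f}x")
```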

I guess I am doing something wrong…
Maybe the normalisation with a CSF ROI is a bad idea (when mixing different acquisitions…)

Many thanks

Romain

Removing the double-slash causes a compilation error.

error log

g++ -c -std=c++11 -pthread -fPIC -march=native -DMRTRIX_WORD64 -isystem /usr/include/eigen3 -Wall -O2 -DNDEBUG -Isrc -Icmd -I./lib -Icmd -isystem /usr/include/eigen3 src/dwi/tractography/tracking/write_kernel.cpp -o release/src/dwi/tractography/tracking/write_kernel.o

failed with output

In file included from src/dwi/tractography/tracking/write_kernel.h:31:0,
from src/dwi/tractography/tracking/write_kernel.cpp:17:
src/dwi/tractography/tracking/shared.h: In member function void MR::DWI::Tractography::Tracking::SharedBase::add_termination(MR::DWI::Tractography::Tracking::term_t, const Vector3f&) const:
src/dwi/tractography/tracking/shared.h:270:26: error: expected initializer before debug_images
auto voxel debug_images[i]->voxel();
^
src/dwi/tractography/tracking/shared.h:271:57: error: no match for call to (const transform_type {aka const Eigen::Transform<double, 3, 18>}) (const Vector3f&)
const auto pv = transform.scanner2voxel §;
^
In file included from /usr/include/eigen3/Eigen/Geometry:42:0,
from ./lib/types.h:29,
from ./lib/mrtrix.h:38,
from ./lib/cmdline_option.h:28,
from ./lib/app.h:28,
from src/dwi/tractography/file.h:22,
from src/dwi/tractography/tracking/write_kernel.h:26,
from src/dwi/tractography/tracking/write_kernel.cpp:17:
/usr/include/eigen3/Eigen/src/Geometry/Transform.h:176:7: note: candidates are:
class Transform
^
/usr/include/eigen3/Eigen/src/Geometry/Transform.h:361:17: note: Eigen::Transform<Scalar, Dim, Mode, _Options>::Scalar Eigen::Transform<Scalar, Dim, Mode, _Options>::operator()(Eigen::Transform<Scalar, Dim, Mode, _Options>::Index, Eigen::Transform<Scalar, Dim, Mode, _Options>::Index) const [with _Scalar = double; int _Dim = 3; int _Mode = 18; int _Options = 0; Eigen::Transform<Scalar, Dim, Mode, _Options>::Scalar = double; Eigen::Transform<Scalar, Dim, Mode, _Options>::Index = long int]
inline Scalar operator() (Index row, Index col) const { return m_matrix(row,col); }
^
/usr/include/eigen3/Eigen/src/Geometry/Transform.h:361:17: note: candidate expects 2 arguments, 1 provided
/usr/include/eigen3/Eigen/src/Geometry/Transform.h:364:18: note: Eigen::Transform<Scalar, Dim, Mode, _Options>::Scalar& Eigen::Transform<Scalar, Dim, Mode, _Options>::operator()(Eigen::Transform<Scalar, Dim, Mode, _Options>::Index, Eigen::Transform<Scalar, Dim, Mode, _Options>::Index) [with _Scalar = double; int _Dim = 3; int _Mode = 18; int _Options = 0; Eigen::Transform<Scalar, Dim, Mode, _Options>::Scalar = double; Eigen::Transform<Scalar, Dim, Mode, Options>::Index = long int]
inline Scalar& operator() (Index row, Index col) { return m_matrix(row,col); }
^
/usr/include/eigen3/Eigen/src/Geometry/Transform.h:364:18: note: candidate expects 2 arguments, 1 provided
In file included from src/dwi/tractography/tracking/write_kernel.h:31:0,
from src/dwi/tractography/tracking/write_kernel.cpp:17:
src/dwi/tractography/tracking/shared.h:272:15: error: voxel was not declared in this scope
voxel[0] = ssize_t (std::round (pv[0]));
^
In file included from ./lib/image.h:24:0,
from src/dwi/tractography/roi.h:22,
from src/dwi/tractography/properties.h:21,
from src/dwi/tractography/file_base.h:26,
from src/dwi/tractography/file.h:28,
from src/dwi/tractography/tracking/write_kernel.h:26,
from src/dwi/tractography/tracking/write_kernel.cpp:17:
./lib/header.h: In instantiation of MR::Header::Header(const HeaderType&) [with HeaderType = std::basic_string; typename std::enable_if<(! std::is_base_of<MR::Header, HeaderType>::value), void*>::type = 0u]:
src/dwi/tractography/tracking/shared.h:76:39: required from here
./lib/header.h:110:22: error: const class std::basic_string has no member named transform
scale
(1.0) {
^
./lib/header.h:110:22: error: const class std::basic_string has no member named name
./lib/header.h:110:22: error: const class std::basic_string has no member named keyval
./lib/header.h:111:13: error: const class std::basic_string has no member named ndim
set_ndim (original.ndim());
^
./lib/header.h:112:34: error: const class std::basic_string has no member named ndim
for (size_t n = 0; n < original.ndim(); ++n) {
^
./lib/header.h:113:23: error: no matching function for call to std::basic_string::size(std::size_t&) const

I just noticed the default maximum length. This seems a strange choice, since track length should not depend on voxel size. What length in mm would you advise?

Ideally it shouldn’t; but it also shouldn’t depend on the size of the animal (e.g. you shouldn’t use a 250 mm maximum length on a rat). I’ve thought about setting this parameter based on e.g. the maximal diagonal of the image field of view; but then your maximum length criterion changes if you crop out the empty space either side of the head. So there’s probably no clear answer. People would probably get angry if they had to manually specify every single tracking parameter every time, so we need some kind of default heuristic.
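The hypothetical FOV-diagonal heuristic mentioned above could look something like this (purely illustrative; this is not what tckgen implements, and the image dimensions are made up):

```python
import math

# Hypothetical alternative heuristic: set the default maximum streamline
# length to the diagonal of the image field of view. This is NOT what
# tckgen does; it is the idea mentioned (and rejected) above, since
# cropping empty space around the head changes the criterion.

def fov_diagonal_mm(dims, voxel_sizes_mm):
    """Diagonal length (mm) of the image field of view."""
    return math.sqrt(sum((d * v) ** 2 for d, v in zip(dims, voxel_sizes_mm)))

# Made-up example: 96 x 96 x 60 voxels at 2.5 mm isotropic...
full_fov = fov_diagonal_mm((96, 96, 60), (2.5, 2.5, 2.5))
# ...and the same brain after cropping empty space: the criterion changes.
cropped = fov_diagonal_mm((72, 90, 60), (2.5, 2.5, 2.5))

print(f"{full_fov:.0f} mm vs {cropped:.0f} mm after cropping")
```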

The rejections come mainly from the “Poor structural termination”

This is what I’d expect. Basically, when ACT is used, if the streamline is terminated by any mechanism other than those imposed by ACT, then ACT will reject the streamline as a ‘poor structural termination’, since that termination has occurred by some mechanism other than those informed by the structural image, and therefore the streamline must still be in WM. (With the exception of sub-cortical GM, which is handled differently)

Note that this ratio of rejected streamlines increases with an increasing number of streamlines (a ratio of 35 with 43 million, versus a ratio of 10 with 0.1 million)

This is due to the use of dynamic seeding. As the reconstruction progresses, streamline seeds are increasingly placed in areas that are more difficult to track, so the failure rate goes up.

I normalised the DWI data (using the mean DWI signal in a restricted CSF mask)

We tried this for a while as well; CSF would be a good marker for intensity normalisation, but in healthy brains at low resolution it’s very difficult to get a good mask of pure, uncontaminated CSF voxels. Gibbs ringing can also play a part if the number of voxels is small. Have you tried just using the dwiintensitynorm method?

Removing the double-slash causes a compilation error

Apologies, that code must not have been edited when performing the updated_syntax port. Shouldn’t be too hard to get it up and running again.

Hello

Just to follow up: I tested the same tckgen command (-act -backtrack -crop_at_gmwmi -seed_dynamic) on the original DWI (no upsampling), with two different response functions for the FOD:

1. a response function computed with the tournier algorithm
2. the response function averaged across subjects

In the first case tckgen works fine (with 10 to 30% rejection), but in the second case I get the same problem: for a few subjects it rejects too many streamlines.

My understanding is that using a too-different response function gives FOD lobes that are too small, which makes the tracking fail? I am a little confused about how to solve this.

Many thanks

Romain

I’m not sure what’s going on, but there are a few things that make me think the order in which you’re doing things might be problematic. Just to make sure we’re all on the same page:

If you’re using a subject-specific response:

  1. bias field correction [optional]: dwibiascorrect

  2. estimate response: dwi2response

  3. CSD: dwi2fod

All pretty simple…

If you’re going to use a group-average response:

  1. bias field correction [optional, but recommended]: dwibiascorrect - for all subjects

  2. intensity normalisation: dwinormalise or whatever procedure you’ve devised - for all subjects, using the bias corrected data as input if you’ve performed that step

  3. estimate response: dwi2response - for all subjects individually, using the intensity-normalised data as input

  4. average these responses across subjects: average_response (currently not officially documented, but we’ll fix that at some point).

  5. CSD: dwi2fod for each subject, using the intensity-normalised data as input and the average response computed in the previous step.


You’ll note I’ve highlighted the fact that you need to use the intensity-normalised data as input to dwi2response if you’re going to use a group average response. I suspect you used the responses obtained from the raw (or maybe bias-corrected) data, since your response function values are in the range that I’d expect to see in that case. If you’d used the intensity-normalised data, I’d expect to see values a bit higher than ~1000 for the first term if you’d used dwinormalise, since this defaults to a value of 1000 in the b=0 volume as the reference value. If you’d intensity-normalised to some other value, I’d expect your first response coefficient to be around whatever that value was (1 for example).

So basically, if you’re using the group average response computed from the individual responses that you obtained without intensity normalisation, and then use these to compute FODs on data that has been intensity-normalised, you’ll get incorrect results. Furthermore, if you use this same response on data that hasn’t been intensity-normalised, you’ll also get incorrect results - although the values might actually be closer to what they should be in this case, since at least the average response will be in the right ballpark for most of the unscaled data (but badly scaled for the extreme cases in your group).
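A rough numeric illustration of this (assumptions: FOD amplitude scales approximately as data scale divided by response scale, and streamline termination uses a fixed amplitude cutoff; all numbers are illustrative, not Romain's data):

```python
# Illustration of the scaling mismatch described above: CSD amplitudes
# scale roughly as (data scale) / (response scale). If the response was
# estimated on raw data (say b=0 ~ 400) but the FODs are computed on
# data normalised to b=0 = 1000, FOD amplitudes come out wrong by the
# ratio of the two scales -- and since tracking terminates at a fixed
# amplitude cutoff, terminations become inconsistent across subjects.
# All numbers here are illustrative.

def fod_scale_error(data_b0: float, response_b0: float) -> float:
    """Approximate multiplicative error in FOD amplitude when the response
    was estimated on data at a different intensity scale (1.0 = correct)."""
    return data_b0 / response_b0

# Response estimated on raw data, CSD run on normalised data:
mismatched = fod_scale_error(data_b0=1000.0, response_b0=400.0)
# Response estimated on the same normalised data: correct scale.
matched = fod_scale_error(data_b0=1000.0, response_b0=1000.0)

print(mismatched, matched)
```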

The trick here is to perform all the bias field correction and intensity normalisation as a pre-processing step, and then to use the resulting images as the sole input for the subsequent stages - that’s the only way you can ensure that the response used is appropriately scaled to the data being processed…

Hi Donald,

I think that is what we did; I am only working with bias-corrected and intensity-normalised data.
The difference I see comes only from using the average response function or not.

I guess it is not a problem to also work with normalised data for a subject-specific response?

But since this is a collaborative project (with a student) I need to double-check…
I’ll come back to you as soon as I have time to redo the analysis.

thanks

Romain

Correct, that shouldn’t actually make any difference - as long as the response is computed from the intensity-normalised data.

@Romain_Valabregue: Just letting you know that uncommenting the DEBUG_TERMINATIONS flag in src/dwi/tractography/tracking/shared.h should now compile.