I built my tractograms using global tractography, since I thought it was the best algorithm when multi-shell data is available, as is my case.
After obtaining my .tck files, I wanted to apply the SIFT2 algorithm to improve the streamline estimates, thinking it would better reflect the cross-sectional area of fibres connecting regions.
However, I got this error message:
tcksift2: [ERROR] Cannot perform streamline mapping: no step size information in track file header
An important note:
When running global tractography, I could not use 10^8 iterations (I got a segmentation fault) and had to run 10^5 iterations instead.
So, my questions are:
Why is SIFT2 not working? Does it have to do with my previous algorithm (global tractography instead of ACT)? Or does it instead have to do with the fact that I have very few streamlines (10^5 instead of 10^8)?
Is it very wrong if I cannot run SIFT2 and jump straight into the tck2connectome step?
Global tractography and SIFT indeed don't really work together. You get this technical error message from SIFT because global tractography, as opposed to streamline tractography, simply has no notion of a step size. More fundamentally, however, global tractography and SIFT aim for the same goal, namely to match the reconstructed fibre density to the measured DWI data. The former does this in one global optimization step, whereas the latter does this in post-processing, by either filtering or reweighting a streamline tractogram. In a way, using them together is therefore like doing double work.
As to why tckglobal is giving you a segmentation fault, this is clearly an error. A few users have reported problems with the multi-threaded code (the default) on some platforms, and I haven't been able to fix those yet. Which operating system are you using? It might be worth trying tckglobal with -nthreads 1 to disable multithreading and see if that works for you. In any case, 10^8 iterations is an absolute minimum for a full-brain dataset, and I strongly advise against going for any less than that. Especially for connectivity studies, I recommend 10^9 iterations, but be aware that this will take multiple hours to run.
Also take into account that a tractogram has to fulfil certain anatomical constraints (e.g. streamlines connecting GM regions at their endpoints) to be meaningful in a connectomics analysis. If not, an (often huge) number of streamlines is discarded at the connectome-building stage, which means there was hardly any point in getting good densities in the first place: your connectome would consist of a small random subset of the tracks, which is in itself no longer corrected to match the data / apparent fibre densities.
There's a lot of confounds in connectomics (which is why, admittedly, I'm not a big fan of it), but I reckon the closest thing to doing something meaningful, or at least sensible, at the moment, given your particular multi-shell data, would be to do MSMT-CSD, followed by probabilistic tractography within the ACT (anatomically-constrained tractography) framework, followed by SIFT2, followed by tck2connectome. Even in this (as optimal as it currently gets) scenario, there are certain shortcomings at the moment, as @chunhungyeh pointed out nicely in a recent pair of abstracts:
No worries Carmen. We realise this is still a challenging topic for most of the community. I reckon there's still a lot of work to do in "simply" explaining this well, in a way that is both sufficiently accurate and intuitive at the same time.
I am just curious: why do you advise the CSD + SIFT approach over global tractography?
As pointed out by Daan, global tractography and SIFT aim for the same goal. Since global tractography aims "to match the reconstructed fibre density to the measured DWI data", it seems to me better than trying to correct a biased tractogram afterwards. Maybe I am missing something about the pitfalls of global tractography…
Is there any work comparing the two approaches?
There is conceptually no difference between the goals of CSD + SIFT versus global tractography; I fully agree with that.
My argument was actually not about those, but about being able to incorporate ACT (or any other mechanism for such anatomical constraints) in the pipeline, and furthermore in the correct spot/order in the pipeline. The problem, with respect to this particular scenario, is that the current implementation of global tractography in MRtrix cannot enforce anatomical constraints, including (but not limited to) requiring that tracks connect grey matter at their endpoints. One could enforce this after global tractography, for example, but then the fit to the data is broken. This is essentially what happens when constructing a connectome from, e.g., the global tractography output.
The other option is to do CSD + anatomically-constrained tractography (ACT) + SIFT. The difference is that the tracks are, in a way, "a priori anatomically constrained" at the stage where SIFT takes care of the fit to the data. So you end up with a tractogram that fits the data, but is at the same time also anatomically constrained. What @chunhungyeh's abstracts show (as well as @rsmith's earlier works) is that this has a very significant impact on the resulting connectome (and metrics derived from it). The abstracts also explore (even) further challenges, such as the track assignment mechanism; but these are unrelated to my explanation here.
Independently of this, I don't know of works that have directly compared the performance of (MSMT-)CSD + SIFT to global tractography in terms of fitting the data. If ACT is incorporated in the former, it's not unlikely that it may perform slightly less well in fitting the data; in this context, you could see ACT as a further regularisation (well, actually a hard constraint, if you will) of the problem; but it is there for a purpose, of course. As I reasoned above, this all depends on the application.
So, what I'm essentially arguing is that a mechanism for anatomical constraints is essential if the application is to be "quantitative connectomics".
I hope this explanation makes sense. (If not, @rsmith will have to give it another go.)
Thanks a lot,
It is a clear and useful explanation. I was indeed missing the point about ACT.
So there is work to do for "global tractographers": incorporate ACT into the optimization.
I think Thijs just about covered this, I just want to add a couple of things:
As a thought experiment, imagine that you seed a large number of "track segments" in each voxel, and optimise their orientations & density, but don't form any connections between them. You could pretty much make this "reconstruction" match the image data perfectly, but there would be no connectivity information. As a global tractography algorithm forms connections between these segments, it becomes harder to maintain correspondence between the reconstruction and the image data. What's currently lacking is that, although many such segment-wise connections are formed, there are no guarantees provided regarding the meaningfulness of the endpoints of the merged tracks, or the absence of "floating" track segments in the WM. So it's not necessarily about "incorporating ACT" in global tractography so much as incorporating the intention and guarantees behind ACT.
See this example as a bit of a "marriage" between SIFT-like "semi-global" tractography and genuine global tractography algorithms, one that does guarantee meaningful endpoints.
To follow up on this topic, is there any method for applying SIFT2 to a tractogram computed with an arbitrary tractography algorithm? As I understand it, the problem is indeed the need to know the step size of each streamline; do you think a resampling preprocessing step would do? I mean, fitting a spline to each streamline and resampling it at a given step size? If so, how can I introduce the step size into the header?
Thanks a lot,
Alessandro
PS: this is my first post here, and I take the occasion to congratulate you all on this fantastic toolbox that is MRtrix! Indeed, very powerful, useful and easy to use at the same time! Thanks guys!
Realistically I can probably relax that restriction, so that the command gives a warning about not knowing the step size rather than an error…
The reason the test is there at all is that I do a reasonably dense upsampling of the streamlines in order to get an accurate calculation of arc length. Without this, I observed a deficit in streamline density in curved fibre regions compared to straight fibre regions. So I check the step size in the track file header, and determine an appropriate upsampling ratio based on that step size and the voxel size of the FOD image. Without knowing the track step size, I suppose processing could continue with the upsampling disabled, but the issue of curved vs. straight pathways might have an effect on the command output.
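To illustrate the idea (this is not the actual tcksift2 code, and the target fraction of the voxel size is an assumption I've made up for the sketch):

```python
import math

def upsample_ratio(step_size, voxel_size, target_fraction=0.25):
    """Hypothetical sketch of choosing a streamline upsampling ratio:
    upsample until the distance between consecutive samples is no more
    than some fraction of the FOD image voxel size. The fraction used
    here (0.25) is an assumption, not the value tcksift2 uses."""
    target_spacing = target_fraction * voxel_size
    return max(1, math.ceil(step_size / target_spacing))
```

For example, a 1 mm step size on a 2 mm FOD image would give a ratio of 2 under this (assumed) 0.25 fraction, while a step size already much finer than the voxel size would need no upsampling at all.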
In the meantime, the command you're looking for is tckresample with the -step_size option. That will resample the streamlines to a fixed step size, and will correspondingly write that step size to the track file header, making tcksift2 happy.
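For what it's worth, fixed-step resampling is conceptually simple; here is a minimal Python sketch of the idea (walking along arc length with linear interpolation; the function name and the simplification to linear interpolation are mine, not the actual tckresample implementation, which uses Hermite interpolation):

```python
import math

def resample_fixed_step(points, step):
    """Resample a 3D polyline (list of (x, y, z) tuples) to a fixed step
    size: compute the cumulative arc length at each vertex, then place
    new vertices at multiples of `step` along that arc length,
    interpolating linearly within each segment."""
    # cumulative arc length at each vertex
    cum = [0.0]
    for a, b in zip(points, points[1:]):
        cum.append(cum[-1] + math.dist(a, b))
    total = cum[-1]

    out, s, j = [], 0.0, 0
    while s <= total:
        # advance to the segment containing arc-length position s
        while cum[j + 1] < s:
            j += 1
        seg = cum[j + 1] - cum[j]
        t = 0.0 if seg == 0.0 else (s - cum[j]) / seg
        out.append(tuple(p + t * (q - p)
                         for p, q in zip(points[j], points[j + 1])))
        s += step
    return out
```

Note that the output vertices are evenly spaced along the polyline's arc length regardless of how unevenly the input vertices were spaced, which is exactly the property the step-size header field promises.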
Thanks for the very quick response! I tried your suggestion, but the step_size field is not written to the header by tckresample, and indeed I get the following error: [ERROR] Cannot perform streamline mapping: no step size information in track file header
I had to hack the header manually. Am I doing something wrong?
Ah no, it's me that's done something wrong. I messed up the logic that determines when to update the header field in tckresample, probably because the fixed-step-size resampling capability was added after a big code refactor. Here is how to fix your code if you want to save yourself the manual header hack (it'll go out as part of the coming update).
I would be interested to identify several major WM tracts using predefined anatomical priors while utilizing concept of SIFT2 to do quantitative evaluation of their cross-sectional area.
The method you referenced does not seem to be publicly available.
I was thinking of the following:
1. Generate a whole-brain tractogram using tckgen with ACT
2. Use TRACULA to determine pathways of interest, imposing strong anatomical constraints on them
3. Merge the whole-brain and TRACULA tractograms
4. Apply tcksift2 to the merged tractogram to obtain streamline weights
5. Extract the weights of the streamlines generated by TRACULA
One thing is missing: using only TRACULA streamlines for the evaluation of pathway cross-sectional area would probably introduce bias, due to omitting streamlines from the whole-brain tractogram that could also contribute to the pathway of interest. Maybe this could be solved by extracting the relevant streamlines from the whole-brain tractogram, using the thresholded pathway distribution from TRACULA as an ROI, and using their weights in the final calculation.
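For concreteness, the selection-by-ROI idea in that last step could be sketched roughly like this, operating on already-loaded streamlines and weights (everything here, including the inclusion criterion, is a made-up illustration rather than an existing MRtrix command):

```python
def select_and_sum(streamlines, weights, mask_voxels, voxel_size,
                   min_fraction=0.8):
    """Illustrative sketch: select whole-brain streamlines whose vertices
    lie mostly inside a thresholded TRACULA posterior mask (given as a set
    of voxel index triples), and sum their SIFT2 weights; under the SIFT2
    model that sum relates to the total fibre cross-section of the bundle.
    `min_fraction` is an arbitrary inclusion criterion for illustration."""
    selected, total = [], 0.0
    for sl, w in zip(streamlines, weights):
        inside = sum(1 for p in sl
                     if tuple(int(c // voxel_size) for c in p) in mask_voxels)
        if inside / len(sl) >= min_fraction:
            selected.append(sl)
            total += w
    return selected, total
```

In practice you would of course do the selection with tckedit and an ROI image rather than in Python; the sketch is just to pin down what "using their weights in the final calculation" would mean.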
Do you think this makes sense? I would greatly appreciate your opinion.
Hmmm. I've never actually used TRACULA before and am not super familiar with its inner workings, so if I've gotten something wrong here, please somebody call me out on it.
It seems to me that TRACULA could be described as something like a semi-geodesic, semi-global, semi-atlas-based algorithm, reconstructing a posterior distribution of the spatial extent of a given pathway of interest based on the fraction of intersection of something like "probabilistic geodesic trajectories". So if that's the case, then the "principal" output of the algorithm is probably not the individual posterior spatial trajectories so much as the spatial posterior distribution of the bundle.
If this is the case, then maybe thinking about quantification from a streamline-trajectories / SIFT2 perspective is not actually the way to go about it. Instead, you could think of something along the lines of what afdconnectivity does: calculating a total pathway fibre volume, then dividing by pathway length to give a measure proportional to fibre cross-sectional area. Using afdconnectivity straight out of the box may not be ideal here, given how it works & how TRACULA works, but you could perhaps do something "comparable" by breaking down the underlying steps manually: treat the spatial posterior distribution as something akin to partial volume fractions, take the sum of (FD x posterior probability) for a given pathway (you'd need a way to go from TRACULA's purely voxel-based representation to fixel selection), and divide by some derived estimate of pathway length.
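A toy numerical version of that manual breakdown might look as follows (all names and inputs are placeholders; the hard parts, mapping the posterior onto fixels and estimating pathway length, are glossed over here):

```python
def bundle_cross_section(fd, posterior, voxel_size, pathway_length):
    """Hedged sketch of the afdconnectivity-style reasoning described
    above: treat the TRACULA posterior as a partial-volume weighting of
    fibre density (FD), integrate to get a total fibre volume, then
    divide by an estimate of pathway length to get a quantity
    proportional to cross-sectional area. `fd` and `posterior` are
    per-voxel (really per-fixel) values; all inputs are illustrative."""
    voxel_volume = voxel_size ** 3
    fibre_volume = sum(f * p for f, p in zip(fd, posterior)) * voxel_volume
    return fibre_volume / pathway_length
```

The result is only proportional to cross-sectional area (as with afdconnectivity itself), so it is suited to comparisons across subjects or sessions rather than absolute measurement.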
This wouldn't fully deal with the fact that even a fixel consistently intersected by TRACULA's trajectories could theoretically also be intersected by other fibre trajectories not part of that bundle; but conversely, it would potentially dodge other issues, e.g. the variance of streamlines tractography / the necessity for regularisation in SIFT2.
Otherwise, what you suggest RE: trying to "include" streamlines from the whole-brain tractogram that also correspond to the pathway of interest, over and above the TRACULA trajectories, is probably reasonably sensible; I just wonder if it's a bit of a "hack" compared to trying to use the output from TRACULA in a more "native" fashion, as described above.
Thank you very much for your insight. As far as I can see, TRACULA actually draws samples from the posterior distribution (using Markov chain Monte Carlo), where one sample of that distribution is represented by one "whole individual streamline" (stored in the path.pd.trk file); the voxel-wise probability density of the bundle is calculated ex post. So I thought that these samples could be regarded and handled as spatial trajectories. Am I wrong?
One problem is that the "streamlines" are estimated on the voxel grid of the DWI, i.e. they are very jagged and low-resolution. See example (corticospinal tract):
So using them for subsequent SIFT2 would require them to be somewhat spatially smoothed (which is currently not possible with tckresample), or generating them at a higher resolution (by upsampling the original DWI? I do not know whether that would violate some statistical assumptions on the underlying data and preclude using TRACULA).
OK, so it is indeed intrinsically voxel-based; I thought that was the case.
While you can treat these as streamline trajectories (and I agree that spatial smoothing would likely be preferable before doing so), the tracking algorithm is internally operating on the image voxel grid (maybe with a curvature constraint of 45 or 90 degrees), handling the voxel data without sub-voxel interpolation. So I would personally be very tempted to go with a quantification approach that handles the data in the same way, such as the one described above.
There isn't currently any command that will smooth streamline trajectories. Downsampling then upsampling will do something comparable, given that the upsampling uses Hermite interpolation, but the vertices retained during downsampling will be unjustly preserved through the process. I'm sure there are tailored algorithms for this purpose (e.g. mapping to a Fourier basis would probably work well); I've just never needed one.
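Just to make the "tailored algorithm" point concrete: even something as crude as an iterated neighbour average (with the endpoints pinned) would knock down the voxel-grid jaggedness. This is purely an illustration, not a feature of tckresample or any other MRtrix command:

```python
def smooth_streamline(points, iterations=1):
    """Crude illustrative smoothing of a jagged streamline: replace each
    interior vertex with the average of itself and its two neighbours,
    keeping the endpoints fixed. A tailored approach (e.g. a low-pass
    Fourier fit, as suggested above) would behave better; this is just
    a sketch."""
    for _ in range(iterations):
        new = [points[0]]
        for prev, cur, nxt in zip(points, points[1:], points[2:]):
            new.append(tuple((a + b + c) / 3.0
                             for a, b, c in zip(prev, cur, nxt)))
        new.append(points[-1])
        points = new
    return points
```

One caveat with any such scheme is shrinkage: repeated averaging pulls curved trajectories toward their chord, so the number of iterations would need to be kept small relative to the jaggedness being removed.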
Upsampling the DWI data prior to TRACULA would only give very limited benefit, since you'd still have 45-degree turns everywhere; each segment would just be shorter. In terms of influence on the internals of TRACULA itself I'm not sure, but there would possibly be concerns about MCMC sampling when the image data in adjacent voxels are not independent (or are at least less independent than one would expect from a typical imaging PSF).