Your intended analysis is very much non-standard, so there’s a wide scope of possibilities for what could potentially be done and to what extent things could be made automated / objective rather than manual / subjective. Before getting to your questions, this sticks out to me:
- create a parcellation map based on fiber orientation
Is your intention to implement / utilise some form of automated parcellation algorithm, deriving that parcellation in a data-driven way? The answer to that question quite drastically changes the scope of the project. It bears some limited resemblance to some work I did during my PhD, which we could discuss if you plan to go that far; but if you’re purely generating the requisite data in order to perform a manual segmentation upon it, then never mind.
A potential disadvantage with this sort of approach (one that I know to have been a concern with some rather influential manuscripts in the past) is that, despite the inter-subject variance in the tensor estimates once warped to template space, a single group average tensor is produced per voxel, a single discrete fibre orientation is extracted from it, and all streamlines traversing that voxel follow only that exact orientation. The tractography outcomes will therefore look exceptionally “clean” and “sharp”, but this runs the risk of being quite misleading. In particular, anywhere the tensor model is inadequate, tensors from different subjects may point in very different directions, and tracking on just the principal eigenvector of just the group average tensor will completely mask that ambiguity. That’s not to say it can’t / shouldn’t be done; it’s just worth bearing in mind when interpreting such streamlines data that their high precision may completely mislead you regarding their accuracy. For instance, you could end up with a very sharp and well-defined structure in template space that is not actually a good representation of anything that appears in any individual subject.
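If you do go down that path, a minimal sketch of such a group-average-tensor pipeline might look like the following. All filenames here are hypothetical, and it assumes per-subject tensor images have already been estimated (e.g. with dwi2tensor) and warped to template space:

```shell
# Voxel-wise mean of the warped per-subject tensor coefficient images
# (all inputs assumed to already reside on the template grid)
mrmath sub-*_tensor_template.mif mean tensor_group.mif

# Extract the principal eigenvector of the group-average tensor
# as a single unit direction per voxel
tensor2metric tensor_group.mif -vector dirs_group.mif -modulate none

# Deterministic tracking on those fixed directions; FACT accepts a
# 4D image of xyz direction components as its source
tckgen -algorithm FACT dirs_group.mif group_tracks.tck \
    -seed_image template_mask.mif -select 100k
```

Note that this sketch embodies exactly the caveat above: each voxel contributes one orientation to tracking, regardless of how variable the underlying per-subject tensors were.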
This will “work”; but you’ll undoubtedly find that the definition of structures of interest is not as sharp. Indeed it might actually look quite unusual: deterministic streamlines from individual subject spaces all warped to template space, with “batches” of streamlines following one another perfectly, yet many such batches with different orientations…
Is there any strong reason for:
I’m not working with ODF so I’m wondering how the general strategies depicted in the forum can be applied on my data.
? You could warp individual subject FOD data into a common FOD template, and then perform tractography on that just as we do for CFE / FBA.
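For reference, a minimal FBA-style sketch of building an FOD template and performing tractography directly upon it (filenames and directory layout are hypothetical):

```shell
# Build a population FOD template from per-subject FOD images;
# fod_dir/ and mask_dir/ contain one image per subject
population_template fod_dir/ -mask_dir mask_dir/ wmfod_template.mif

# Whole-template tractography directly on the template FODs
# (tckgen's default algorithm is the probabilistic iFOD2)
tckgen wmfod_template.mif template_tracks.tck \
    -seed_image template_mask.mif -select 1M
```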
While FACT and the Tensor_* algorithms operate on different data, the biggest observable difference between FACT and Tensor_Det is the absence of fibre orientation interpolation in the former; streamlines look really jagged and ugly as a result. FACT is implemented there mostly as a demonstration of how tracking algorithms of that fashion operate; its existence should not be interpreted as advocacy. Not that we exactly advocate for the use of the Tensor_* algorithms either…
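To see that difference for yourself, a hedged sketch (hypothetical filenames; note the two algorithms take different source images):

```shell
# Tensor_Det fits the tensor on the fly from the raw DWI series,
# interpolating the source data along the streamline trajectory
tckgen -algorithm Tensor_Det dwi.mif tensor_det.tck \
    -seed_image mask.mif -select 10k

# FACT instead takes a precomputed 4D image of fibre directions,
# and follows the single direction stored in each voxel with no
# interpolation; hence the jagged appearance
tensor2metric tensor.mif -vector dirs.mif -modulate none
tckgen -algorithm FACT dirs.mif fact.tck \
    -seed_image mask.mif -select 10k
```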
As with the linked post, there is some ambiguity here about what is meant by “averaging” streamlines. Since for Strategy 2 you mentioned concatenating tractograms using tckedit, you must be intending to refer to something else here, but it’s not clear exactly what that is.
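For clarity, concatenation of tractograms with tckedit is simply (hypothetical filenames, assuming the per-subject tractograms have already been warped to template space):

```shell
# tckedit accepts multiple input tractograms and writes one output;
# here it merely concatenates the streamlines, no "averaging" occurs
tckedit sub-*_tracks_template.tck all_subjects.tck
```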
As for the applicability of SIFT(2), there are two points to be made here:
First: the command interfaces currently expect an FOD image as input, as they perform the same FOD segmentation as is used in fod2fixel, but retain additional information about the orientation dispersion of fixels, since this improves the process of streamline attribution to fixels. I’ve always wanted these tools to both 1) support the use of a fixel directory as input, and 2) export the data for that superior streamline-fixel assignment to fixel directories for later use; it’s just one more can being kicked down the road. Maybe that’s unnecessary detail, but my point is that either getting data into a format that can be read by these tools, or instructing those tools how to receive inputs of a different format, is only a technical implementation issue.
Second: fundamentally, the logic of these algorithms requires, for every fixel (or theoretically even every voxel), an estimate of fibre density. So they’re not strictly tied to spherical deconvolution ODFs (even though it’s in the name): an alternative diffusion model that provides (a) fibre density estimate(s) could theoretically be used. But the tensor model provides no such measure.
tckmap doesn’t care how the streamlines you provide to it were generated. It’s entirely “relevant” insofar as it will provide you with an alternative way in which to visualise / quantify your data. Whether or not those data, i.e. the streamlines, are of sufficient “relevance” is a question deferred to the wider experiment, not to that specific processing step.
If by “metrics” you are referring to the various options available within the TWI framework, I would refer to your end goal, which is performing some form of parcellation in template space. If you can generate an image that contains contrast that can be used to drive that parcellation (whether manual or automated), I’d consider that “relevant”.
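As an illustration of generating such contrast (hypothetical filenames; -contrast curvature and -stat_vox mean are just examples drawn from the available TWI options):

```shell
# Plain track density image, mapped onto the template grid
tckmap all_subjects.tck -template template_mask.mif tdi.mif

# Track-weighted image: mean streamline curvature per voxel
tckmap all_subjects.tck -template template_mask.mif \
    -contrast curvature -stat_vox mean twi_curvature.mif
```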
This really links back to my very first question. There are a number of layers of complexity embedded within it: what information you encode in each voxel; how you define “similarity” or “dissimilarity” between adjacent voxels; what kind of algorithm you apply to utilise this information to cluster similar voxels or to determine boundaries between parcels. Back at the start of my PhD I looked into some methods in this kind of domain, as it bore some resemblance to my own project; the first that comes to mind is this method from Maxime Descoteaux; you could probably find similar methods by tracking the citations in either direction. There’s no existing MRtrix3 command that will do anything like this, so you’ll have to think about the full domain of possible ways to pull it off. I’ll link to my 2011 ISMRM abstract: it’s definitely not the same as what you’re wanting to do here, in that it aims to segment WM bundles rather than WM volumes, but there might be some ideas there that you could borrow if you want to pursue a tailored solution for your task.