Fixel analysis without a template

How could one use fixel-based processing across high-resolution postmortem diffusion samples without putting samples into template space? Are graph theoretical concepts only for functional data?


Hi Jackson,

These are very disparate questions so I need to split them:

How could one use fixel-based processing across high-resolution postmortem diffusion samples without putting samples into template space?

Firstly, I think the technical answer to this question is independent of whether the data being used are in vivo or post mortem, and of their particular resolution, so we can scrub that part to simplify the question:

How could one use fixel-based processing without putting samples into template space?

The role served by transforming data into template space is fundamentally that of establishing correspondence. To derive some quantitative parameter from multiple images, and make a direct comparison of those values between images, you need to have performed some operation that ensures a like-for-like comparison. Registering and transforming all data to a common template achieves this: sampling data at a fixed location in the template comes with the guarantee that the values stored there across the various input images come from the same anatomy.
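To make the correspondence idea concrete, here is a minimal toy sketch (with made-up data; no real registration is performed) of why template space makes cross-subject comparison trivial: once every image lives on the same grid, a single voxel index refers to the same anatomy in every subject.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for 5 subject images that have ALREADY been warped to a
# common 10x10x10 template grid. After spatial normalisation, a fixed
# voxel index refers to the same anatomy in every subject.
subjects = [rng.normal(loc=1.0, scale=0.1, size=(10, 10, 10))
            for _ in range(5)]

voxel = (4, 5, 6)  # one fixed location in template space

# A like-for-like comparison across subjects is now just a comparison
# of the values sampled at that shared location:
samples = np.array([img[voxel] for img in subjects])
print(samples.mean(), samples.std())
```

Without the shared grid, the same index `(4, 5, 6)` would fall on different anatomy in each subject, and the stacked values would be meaningless to compare.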

All that is to say: if you wish to perform a fixel-based analysis without the use of a template, you would need some other mechanism by which to achieve meaningful correspondence. This could involve, for example, using a segmentation / classification algorithm to define the locations of specific regions or pathways of interest, extracting fixel-based quantities from those, and then performing a comparison across subjects. There’s a wide domain of possibilities here that could technically be considered “fixel-based processing”; but without the template normalisation step I would no longer call such a pipeline “a Fixel-Based Analysis”.
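As a hedged sketch of that alternative: suppose a hypothetical per-subject segmentation labels which fixels belong to a pathway of interest. Correspondence is then provided by the label rather than by a template, and the cross-subject comparison operates on scalar summaries (the function and data below are entirely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_fd_in_pathway(fd_values, pathway_mask):
    # The summary statistic is computed per subject in its native space;
    # the pathway label, not a template, supplies the correspondence.
    return fd_values[pathway_mask].mean()

group_a, group_b = [], []
for _ in range(10):
    # Simulated fixel-wise fibre density (FD) and a simulated
    # segmentation output marking ~10% of fixels as the pathway:
    fd = rng.normal(0.50, 0.05, size=1000)
    mask = rng.random(1000) < 0.1
    group_a.append(mean_fd_in_pathway(fd, mask))

    fd = rng.normal(0.45, 0.05, size=1000)  # slightly lower FD group
    mask = rng.random(1000) < 0.1
    group_b.append(mean_fd_in_pathway(fd, mask))

# Group comparison now happens on per-subject scalars:
print(np.mean(group_a) - np.mean(group_b))
```

The trade-off is that inference is restricted to whatever structures the segmentation can delineate, rather than being performed across all of white matter as in a template-based FBA.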

Are graph theoretical concepts only for functional data?

Personally, I would argue that many graph-theoretical concepts are only applicable to structural connectivity data, and that it’s their use for functional data that is questionable (I’ve heard that Olaf Sporns holds a similar view). But it depends on the particular tools under the graph theory banner that are being used.

  • For something like the Network-Based Statistic (NBS), where you are simply performing statistical inference on effects in connectivity metrics within specific edges, the approach is entirely agnostic to the modality from which those measures of connectivity were derived.

  • If you think critically about a number of popular graph theory metrics, what they do is treat the graph as a network in which the values stored in the edges encode the bandwidth of information flow, with the metrics capturing higher-order observations about how that information flows. In these cases I believe that structural connectivity data are far more amenable to such analyses than functional connectivity, which is itself a higher-order, derivative observation of the consequences of (possible) information flow.

  • There is also a large number of graph theory analyses, particularly those relating to path lengths, that depend upon the use of binary connectivity data. These I personally find totally incompatible with structural connectivity data, since both tractography-based connection density estimates and inter-areal axon counts derived from quantitative tract tracing span many orders of magnitude, and the higher-order properties of such a system cannot possibly be captured by collapsing that dynamic range down to binary form.
    (Some such analyses can still make use of non-binary connectivity estimates by applying a transform to convert from “connectivity strength” to “connection length”; but this transform is itself ill-posed.)
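The ill-posedness of that parenthetical remark can be shown with a toy example (the weights below are arbitrary illustrative numbers). Two commonly used strength-to-length transforms, the reciprocal 1/w and the negative logarithm -log(w), can disagree about which of two candidate paths is “shorter”, so any path-length-based metric inherits the arbitrariness of the choice:

```python
import math

# Two candidate paths between the same pair of nodes, each edge carrying
# a (made-up) connectivity strength in (0, 1]:
path_a = [0.6, 0.6]   # two moderately strong hops
path_b = [0.32]       # one weaker direct connection

def length(path, transform):
    # Total path length under a given strength-to-length transform.
    return sum(transform(w) for w in path)

reciprocal = lambda w: 1.0 / w
neg_log = lambda w: -math.log(w)

# Under 1/w, path B is shorter (3.125 vs ~3.333);
# under -log(w), path A is shorter (~1.022 vs ~1.139):
print(length(path_a, reciprocal), length(path_b, reciprocal))
print(length(path_a, neg_log), length(path_b, neg_log))
```

Both transforms are monotone decreasing, so they agree on the ranking of individual edges; it is the summation along multi-edge paths that makes the resulting path lengths, and hence shortest-path-based metrics, transform-dependent.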

This is, however, a slightly unusual pair of questions to pose together, given that FBA and structural connectome construction are very different reconstruction / analysis pipelines. So I’m not sure whether that is a symptom of some other misunderstanding; but hopefully the responses above get you on the right track regardless.