Connectome with Fiber Bundle Capacity as edge weight?

Hi experts, I recently came across this paper https://osf.io/c67kn, which argues that fiber bundle capacity (FBC) is a more suitable weight for the node-to-node edges of a structural connectome than streamline count, and that it is also more comparable across subjects. I understand the reasoning behind it, but I'm still having trouble with how to implement it at the subject level and how to make it comparable across the connectomes in my sample. I saw that the mrtrix_connectome.py script implements these different steps, but ideally I wouldn't have to re-run (parts of) the pipeline.
I already have existing subject-specific tractograms with 100M streamlines and a streamline weights text file produced by SIFT2 (but I think I have to redo this step with the -out_mu flag), as well as a parcellation in DWI space for tck2connectome (Schaefer, 400 nodes). Are there any additional steps I need to perform before tck2connectome, and what steps do I need to perform within and across subjects to obtain normalized connectomes with FBC as the edge weights? Do similar steps need to be performed when the streamlines are scaled by FA?

Many thanks!

Chris

Hi Chris,

I saw that the mrtrix_connectome.py script implements these different steps, but ideally I wouldn't have to re-run (parts of) the pipeline.

The steps that you need to reproduce to achieve the desired scaling will depend on your data & processing:

  • If all of your DWI series have the same spatial resolution, and you're not interested in attempting to derive absolute connectivity values with physical units but just want appropriate relative scaling between subjects, then the spatial resolution component of the scaling can be ignored.

  • The intensity scaling is a little trickier:

    • If you had previously followed the current typical approach for AFD quantification, i.e. response functions common across all subjects and the mtnormalise method (a sketch of what that pipeline typically looks like is given after this list), then that component of the scaling can be ignored (again, assuming you're not chasing connectivity values with physical units).

    • If however you did something less conventional, like using subject-specific response functions but then applying mtnormalise, then the potential biases become harder to characterise; indeed I've never gotten around to figuring out the appropriate maths for that circumstance (multi-shell & multi-tissue data complicates things further).

    Ultimately the magnitude of this particular correction is likely to be small compared to other factors, so it’s not the end of the world if it can’t be done; but it should at least be known to be a confounding factor.
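For reference, a minimal sketch of that typical approach (per-subject response function estimation, averaging the responses across the group, then CSD and mtnormalise per subject) might look something like the following; the dhollander algorithm and the BIDS-like file names are just placeholders for whatever you actually used:

```bash
# Estimate per-subject response functions (the choice of algorithm here is an assumption).
for subj in sub-*; do
    dwi2response dhollander ${subj}/dwi.mif \
        ${subj}/wm_response.txt ${subj}/gm_response.txt ${subj}/csf_response.txt
done

# Average the response functions across the group, so that every subject is
# deconvolved with the same (common) responses.
responsemean sub-*/wm_response.txt  group_wm_response.txt
responsemean sub-*/gm_response.txt  group_gm_response.txt
responsemean sub-*/csf_response.txt group_csf_response.txt

# Per-subject CSD using the group-average responses, followed by mtnormalise.
for subj in sub-*; do
    dwi2fod msmt_csd ${subj}/dwi.mif \
        group_wm_response.txt  ${subj}/wmfod.mif \
        group_gm_response.txt  ${subj}/gm.mif \
        group_csf_response.txt ${subj}/csf.mif \
        -mask ${subj}/mask.mif
    mtnormalise ${subj}/wmfod.mif ${subj}/wmfod_norm.mif \
        ${subj}/gm.mif  ${subj}/gm_norm.mif \
        ${subj}/csf.mif ${subj}/csf_norm.mif \
        -mask ${subj}/mask.mif
done
```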

I already have existing subject-specific tractograms with 100M streamlines and a streamline weights text file produced by SIFT2 (but I think I have to redo this step with the -out_mu flag)

If you have already computed your connectome matrices, but did not extract the SIFT model proportionality coefficient, you do not need to completely re-run SIFT2 and recompute the matrices. What you can do instead is run tcksift with not only the -out_mu flag but also the -nofilter flag. That will construct the model and give you the proportionality coefficient, but not perform any optimisation.
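For example (a minimal sketch, with hypothetical file names; if your original SIFT2 call used options that affect the model, such as -act or -fd_scale_gm, provide the same options here so that mu is consistent):

```bash
# Construct the SIFT model without filtering, and write out mu.
# The output tractogram argument is still required by the command; since no
# filtering is performed, it can simply be discarded afterwards.
tcksift tracks_100M.tck wmfod_norm.mif tracks_nofilter.tck \
    -nofilter \
    -out_mu sift_mu.txt
```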

what steps do I need to perform within and across subjects to obtain normalized connectomes with FBC as the edge weights?

Ultimately the inter-subject connection density normalisation is a simple scalar multiplication of each subject's connectome matrix by a subject-specific multiplier. The question is which factors contribute to that multiplier; that depends on whether you want to include particular factors, and whether particular factors can be safely ignored based on your prior processing. But I don't think I can answer this question here with any greater clarity than what was put into the linked preprint.
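As a minimal sketch (hypothetical file names; this covers only the "sum of SIFT2 weights per edge, scaled by mu" part, so any additional inter-subject factors discussed above would need to be folded into the same per-subject multiplier):

```bash
# Per-subject connectome: for each edge, sum the SIFT2 weights of its streamlines.
tck2connectome tracks_100M.tck schaefer400_nodes_dwi.mif connectome_sift2.csv \
    -tck_weights_in sift2_weights.txt \
    -symmetric -zero_diagonal

# Scale every edge by the subject-specific proportionality coefficient mu
# (any tool that multiplies the matrix by a scalar will do; numpy shown here).
python3 - <<'EOF'
import re
import numpy as np
mu = float([l for l in open('sift_mu.txt') if not l.startswith('#')][0])
rows = [[float(x) for x in re.split(r'[,\s]+', line.strip()) if x]
        for line in open('connectome_sift2.csv')
        if line.strip() and not line.startswith('#')]
np.savetxt('connectome_fbc.csv', np.array(rows) * mu, delimiter=',')
EOF
```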

Do similar steps need to be performed when the streamlines are scaled by FA?

Mean FA is a completely different connectivity measure with its own distinct attributes. Post hoc increasing or decreasing connectivity matrix values differentially between subjects would only make sense here if you could demonstrate some mechanism or bias that, if left uncorrected, would result in erroneously large or small mean FA values in specific subjects. That's a completely different discussion from the one about which inter-subject connection density normalisation should be used for FBC. Over and above the differences between FA and AFD, there's the fact that mean FA involves a mean operation, which regresses out any difference in streamline density between the corresponding tractograms.
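For completeness, the conventional mean-FA weighting (a minimal sketch, with hypothetical file names; the FA image would be computed beforehand, e.g. from a tensor fit) makes that last point explicit, since the -stat_edge mean option averages over the streamlines of each edge rather than accumulating them:

```bash
# Sample the mean FA along each streamline.
tcksample tracks_100M.tck fa.mif mean_fa_per_streamline.csv -stat_tck mean

# Build the connectome: each edge is the mean of its streamlines' FA values,
# so streamline count / density is deliberately not reflected in the edge weight.
tck2connectome tracks_100M.tck schaefer400_nodes_dwi.mif connectome_meanFA.csv \
    -scale_file mean_fa_per_streamline.csv \
    -stat_edge mean \
    -symmetric -zero_diagonal
```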

Cheers
Rob

Thanks a lot for your guidance, Rob! Much appreciated, and thanks for the tip about recalculating mu; saves a lot of time.

All the best, Chris