Connectome with Fiber Bundle Capacity as edge weight?

Hi experts, I recently came across this paper https://osf.io/c67kn about fiber bundle capacity (FBC) being a more suitable weight for the node-node edges in a structural connectome than streamline count, and one that is also more comparable across subjects. I understand the reasoning behind it, but I’m still having trouble working out how to implement it at the subject level and how to make the result comparable across the connectomes in my sample. I saw that the mrtrix_connectome.py script implements these different steps, but ideally I wouldn’t have to rerun (parts of) the pipeline.
I already have existing subject-specific tractograms with 100M streamlines, a weights text file produced by SIFT2 (though I think I have to redo this step with the -out_mu flag), and a parcellation in DWI space for tck2connectome (Schaefer, 400 nodes). Are there any additional steps I need to perform before tck2connectome, and what steps do I need to perform within and across subjects to obtain normalized connectomes with the FBC as edge weights? Do similar steps need to be performed when the streamlines are scaled with FA?

many thanks!

Chris

Hi Chris,

I saw that the mrtrix_connectome.py script implements these different steps but ideally I don’t have to rerun (parts of) the pipeline.

The steps that you need to reproduce to achieve the desired scaling will depend on your data & processing:

  • If all of your DWI series have the same spatial resolution, and you’re not interested in deriving absolute connectivity values with physical units but just want to scale things properly between subjects, then you can ignore the spatial resolution component.

  • The intensity scaling is a little trickier:

    • If you had previously followed the current typical approach for AFD quantification, i.e. common response functions and the mtnormalise method, then that component of the scaling can be ignored (again, assuming you’re not chasing after connectivity values with physical units).

    • If however you did something less conventional, like use subject-specific response functions but then use mtnormalise, then the potential biases become a bit more complex, and indeed I’ve never gotten around to figuring out the appropriate math in that circumstance (multi-shell & multi-tissue makes things more complex).

    Ultimately the magnitude of this particular correction is likely to be small compared to other factors, so it’s not the end of the world if it can’t be done; but it should at least be known to be a confounding factor.

I already have existing subject-specific tractograms with 100M streamlines and a txt file produced by SIFT2 (but I think I have to redo this step with the -out_mu flag)

If you have already computed your connectome matrices, but did not extract the SIFT model proportionality coefficient, you do not need to completely re-run SIFT2 and recompute the matrices. What you can do instead is run tcksift with not only the -out_mu flag but also the -nofilter flag. That will construct the model and give you the proportionality coefficient, but not perform any filtering / optimisation.
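For what it’s worth, the -out_mu output is just a single scalar written to a text file, so you can read it straight back into whatever environment you do your matrix manipulation in. A minimal sketch (the tcksift filenames in the comment are placeholders, and the value written here is made up purely so the snippet is self-contained):

```python
from pathlib import Path

# tcksift writes the proportionality coefficient as a single scalar in a text file.
# The command would be something along the lines of (filenames are placeholders):
#   tcksift tracks.tck wmfod.mif tracks_unfiltered.tck -nofilter -out_mu mu.txt
# Simulate that output with a made-up value so the snippet runs on its own:
Path("mu.txt").write_text("0.0021\n")

mu = float(Path("mu.txt").read_text().strip())
print(mu)  # -> 0.0021: this subject's proportionality coefficient
```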

what steps do I need to perform within and across subjects to obtain normalized connectomes with the FBC as edge weights?

Ultimately the inter-subject connection density normalisation is a simple scalar multiplication of each subject’s connectome matrix by a subject-specific multiplier. The question is what factors contribute to that multiplier; that depends both on whether you want to include specific factors and on whether specific factors can be safely ignored based on prior processing. But I don’t think I can answer this question here with any greater clarity than what was put into the linked preprint.
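As a minimal sketch of that multiplication (all values here are toy numbers; in practice the matrix would be your tck2connectome output computed with the SIFT2 weights supplied via -tck_weights_in, and the multiplier would be assembled from mu plus whichever other factors apply to your data):

```python
# Toy 3-node SIFT2-weighted connectome (sums of SIFT2 streamline weights per edge)
raw = [
    [0.0, 12.5, 3.0],
    [12.5, 0.0, 7.5],
    [3.0, 7.5, 0.0],
]

mu = 2.0e-3      # made-up value; use this subject's -out_mu output
multiplier = mu  # any additional subject-specific factors would be folded in here

# The normalisation itself is just a scalar multiplication of the whole matrix:
fbc = [[multiplier * v for v in row] for row in raw]
print(fbc[0][1])  # ~0.025
```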

Do similar steps need to be performed when the streamlines are scaled with FA?

Mean FA is a completely different connectivity measure with its own distinct attributes. Post hoc increasing or decreasing of connectivity matrix values differentially between subjects would only make sense here if you could demonstrate some mechanism or bias that, if left uncorrected, would result in erroneously large or small mean FA values in specific subjects. That’s a completely different discussion to the inter-subject connection density normalisation used for FBC. Over and above the differences between FA and AFD, there’s the fact that mean FA involves a mean operation, which regresses out any difference in density in the corresponding tractograms.
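To illustrate that last point with made-up numbers: because mean FA averages over the streamlines in an edge, changing the number of streamlines (i.e. the density) leaves the value essentially unchanged:

```python
# Per-streamline mean-FA samples for one edge (made-up values)
fa = [0.42, 0.40, 0.44]
mean_fa = sum(fa) / len(fa)

# Doubling the streamline count changes the edge's density, but the mean
# operation regresses that difference out:
fa_doubled = fa * 2
mean_fa_doubled = sum(fa_doubled) / len(fa_doubled)
print(mean_fa, mean_fa_doubled)  # the two means agree (up to float rounding)
```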

Cheers
Rob

Thanks a lot for your guidance, Rob! Much appreciated, and thanks for the tip about recalculating mu; it saves a lot of time.

all the best, Chris

Hi Rob (et al.),

I have a follow-up question on this topic (a bit confronting to see how little progress I have made since our last communication): when applying the normalisation procedure to get FBC as the edge weight, the edges that were zero in the unnormalised matrix (after feeding the SIFT2 weights into tck2connectome) now have a value, including on the diagonal. This value is very small, but it still represents a weak link between two nodes. Would it be wise to reset these values back to zero, or is it best to leave them as they are and continue to the analyses? (I want to perform graph analyses and compare these values between two groups.)

best wishes, Chris

Hi Rob or others, can I still get some feedback on this post?

Hi Chris,

Apologies for the delay – difficult to find the time, as always…

OK, that’s not what I would have expected… Were these edges strictly zero beforehand? If you run tck2connectome on your tractogram without including the SIFT weights, to produce a raw streamline count, do these edges have a non-zero streamline count? If so, then I guess there’s nothing to worry about, especially if, as you say, the values are very small.

If you’re running weighted metrics in your connectivity analysis, then such small values will hopefully not make any significant difference to your results. If you’re discretising your connectivity matrix prior to extracting metrics, then presumably these connections will be sub-threshold anyway? As Rob says, the ‘right’ answer will depend heavily on exactly what you plan to do here. Personally I would recommend trying both approaches (removing these tiny connections vs. leaving them in) and looking at what impact this has on your results, if only to verify that it’s safe to leave them in.

I expect most network analyses will ignore the diagonal anyway? I’m not sure what the advice is on that front, but no doubt others on this forum will be able to answer that.
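For concreteness, here’s a rough sketch of the two options (leave the tiny values in vs. zero them out), along with zeroing the diagonal; the matrix and threshold are entirely made up, so pick a threshold appropriate to your own data:

```python
eps = 1e-6  # made-up threshold; choose one appropriate to your data
fbc = [
    [3.0e-7, 2.5e-2, 4.0e-8],
    [2.5e-2, 1.0e-7, 1.3e-2],
    [4.0e-8, 1.3e-2, 2.0e-7],
]
n = len(fbc)

# Most graph metrics ignore the diagonal anyway, so zeroing it explicitly
# is usually harmless:
for i in range(n):
    fbc[i][i] = 0.0

# Option 1 is simply to leave the remaining matrix as-is.
# Option 2 additionally zeroes the near-zero off-diagonal entries:
thresholded = [[v if v >= eps else 0.0 for v in row] for row in fbc]

# Count surviving directed entries (each undirected edge appears twice):
print(sum(v > 0 for row in thresholded for v in row))  # -> 4
```

Comparing your graph metrics between the two versions would then tell you directly whether those tiny values matter for your analysis.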

Cheers,
Donald.