I saw that the mrtrix_connectome.py script implements these different steps but ideally I don’t have to rerun (parts of) the pipeline.
The steps that you need to reproduce to achieve the desired scaling will depend on your data & processing:
If all of your DWI series have the same spatial resolution, and you are not aiming to derive absolute connectivity values with physical units but just want consistent scaling between subjects, then you can ignore the spatial resolution component.
The intensity scaling is a little more tricky:
If you had previously followed the current typical approach for AFD quantification, i.e. common response functions and the mtnormalise method, then that component of the scaling can be ignored (again, assuming you're not chasing after connectivity values with physical units).
If however you did something less conventional, such as using subject-specific response functions but then applying mtnormalise, then the potential biases become more complex; indeed, I've never gotten around to figuring out the appropriate maths for that circumstance (multi-shell & multi-tissue makes things more complex).
Ultimately the magnitude of this particular correction is likely to be small compared to other factors, so it’s not the end of the world if it can’t be done; but it should at least be known to be a confounding factor.
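To make the mechanics concrete: once you have decided which factors apply, the correction reduces to multiplying a few per-subject numbers together. The factor names and values below are placeholders, not the actual formula from the preprint; which terms enter, and how, depends on your data and processing as described above.

```shell
# Hypothetical sketch only: combine per-subject scaling factors into a
# single connectome multiplier. The specific factors and their combination
# must come from the preprint / your own processing; values are placeholders.
mu=0.05                # SIFT model proportionality coefficient (from -out_mu)
resolution_factor=1.0  # e.g. derived from voxel volume, if resolutions differ
intensity_factor=1.0   # e.g. derived from intensity normalisation, if applicable
multiplier=$(awk -v a="$mu" -v b="$resolution_factor" -v c="$intensity_factor" \
    'BEGIN { print a * b * c }')
echo "$multiplier"
```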
I already have existing subject-specific tractograms with 100M streamlines and a txt file produced by SIFT2 (but I think I have to redo this step with the output_mu flag)
If you have already computed your connectome matrices, but did not extract the SIFT model proportionality coefficient, you do not need to completely re-run SIFT2 and recompute the matrices. What you can do instead is run tcksift with not only the -out_mu flag but also the -nofilter flag. That will construct the model and give you the proportionality coefficient, but not perform any optimisation.
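As a sketch (the file names here are placeholders for your own data), the invocation might look like:

```shell
# Build the SIFT model without performing any filtering, and write the
# proportionality coefficient mu to a text file.
# tracks_100M.tck, wmfod.mif and the output names are placeholders.
tcksift tracks_100M.tck wmfod.mif tracks_out.tck \
    -nofilter \
    -out_mu sift_mu.txt
```

The value written to sift_mu.txt is the subject-specific proportionality coefficient referred to above.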
What steps do I need to perform within and across subjects to obtain normalized connectomes with the FBC as edge weights?
Ultimately, the inter-subject connection density normalisation is a simple scalar multiplication of each subject's connectome matrix by a subject-specific multiplier. The question is what factors contribute to that multiplier, which depends on which factors you want to include and which can be safely ignored based on prior processing. But I don't think I can answer this question here with any greater clarity than what was put into the linked preprint.
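Mechanically, applying that multiplier to an existing connectome matrix is trivial. For example (a toy sketch with placeholder file names and values, assuming a whitespace-delimited matrix file):

```shell
# Toy sketch: multiply every entry of a subject's connectome matrix by the
# subject-specific multiplier. File names and values are placeholders.
printf '1 2\n3 4\n' > connectome.txt   # stand-in for a real connectome matrix
mu=0.5                                 # stand-in for the subject multiplier
awk -v m="$mu" '{ for (i = 1; i <= NF; i++) printf "%s%s", $i * m, (i < NF ? " " : "\n") }' \
    connectome.txt > connectome_scaled.txt
cat connectome_scaled.txt              # prints: 0.5 1
                                       #         1.5 2
```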
Do similar steps need to be performed when the streamlines are scaled with FA?
Mean FA is a completely different connectivity measure with its own distinct attributes. Post hoc increasing or decreasing of connectivity matrix values differentially between subjects would only make sense here if you could demonstrate some mechanism or bias that, if left uncorrected, would result in erroneously large or small mean FA values in specific subjects. That is a completely different discussion from the inter-subject connection density normalisation appropriate for FBC. Over and above the differences between FA and AFD, mean FA involves a mean operation, which regresses out any difference in streamline density between the corresponding tractograms.