Normalization of connectomes

How important is it to scale your individual connectomes by these metrics in order to be able to do between-subjects comparisons of connectivity?

Well, it’s difficult to know exactly how consequential the inclusion or exclusion of specific inter-subject scaling factors may be; at this point I’ve simply tried to account for known factors that could lead to erroneous conclusions if there were substantial variation between participants but no correction applied.

  • The mu scaling is most important if you either expect a different number of streamlines to be generated per subject, or if the streamline length distribution varies considerably across subjects (e.g. different size / shape of the WM); a sketch of how these factors might be applied follows this list.

  • The median WM b=0 scaling is essentially equivalent to how global intensity normalisation was done in AFD analysis, and so the 2012 AFD paper is probably the best reference. But this should be less consequential as of version 0.5.0 of that tool: the global DWI intensity scaling should instead be mitigated during pre-processing using the results of mtnormalise, and is based on the CSF rather than the WM signal (indeed, in retrospect, it may be preferable to omit that scaling factor during group-level analysis, since the CSF is a better reference…).

  • Similarly, compensating for differences in response function (RF) size is comparable to the use of a common response function, which is again justified in the 2012 AFD paper.

  • The voxel size scaling is only important if you either have different acquisitions with different voxel sizes, or want to place a more “absolute” rather than “relative” interpretation on estimated connection densities.
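
To make the above concrete, here’s a minimal sketch (Python / NumPy) of how one might apply the mu coefficient written by tcksift2 -out_mu, together with whichever of the other factors apply to your data, to a matrix written by tck2connectome. The file names and the combined extra_scaling argument are hypothetical; treat this as a template rather than a prescribed pipeline.

```python
import numpy as np

def scale_connectome(connectome_path, mu_path, extra_scaling=1.0):
    """Scale a SIFT2-weighted connectome by the subject's proportionality
    coefficient mu, optionally combined with any additional subject-level
    factors (voxel volume, median WM b=0, response function size, ...)."""
    W = np.loadtxt(connectome_path)   # matrix from tck2connectome
                                      # (whitespace-delimited assumed; add delimiter=',' if needed)
    mu = float(np.loadtxt(mu_path))   # scalar written by tcksift2 -out_mu
    return W * mu * extra_scaling

# Hypothetical usage: fold whichever of the factors listed above apply to your
# acquisition into a single multiplicative term per subject.
# W = scale_connectome('sub-01_connectome.csv', 'sub-01_mu.txt', extra_scaling=1.0)
```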

After connectome generation and mu multiplication, the values of my connectomes are between 0 and approximately 10. However, many of the graph theory scripts (e.g. from the BCT toolbox) require values in [0,1], for example to calculate global efficiency per subject based on its weighted diffusion connectome. Would it make sense to normalize the connectome by the maximum number of streamlines per subject, so that I get values in [0,1]?

We spoke a little about this issue in this thread.

It’s worth recognising that if you were to do this on a per-subject basis, i.e. find the densest edge in each subject and rescale all values so that that edge has a value of 1.0 and all others fall within the [0.0,1.0] range, this would override all of the inter-subject normalisation steps described above. You would also need to consider that if the value of that maximal edge happens to be elevated or decreased by some noise process, then this normalisation step will modulate the entire connectome for that subject based on that one noisy estimate, with concomitant effects on any graph theory metrics.
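
As a toy illustration of that second point (random data only, not part of any recommended pipeline), dividing by the per-subject maximum ties every normalised edge weight to a single value, so inflating one edge shifts the entire distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.abs(rng.normal(5.0, 1.0, size=(84, 84)))   # stand-in for a scaled connectome
W_noisy = W.copy()
W_noisy[10, 20] *= 3.0                            # one spuriously inflated edge

for label, M in (('clean', W), ('one noisy edge', W_noisy)):
    M_norm = M / M.max()                          # per-subject rescaling into [0, 1]
    print(f'{label}: mean normalised weight = {M_norm.mean():.3f}')
```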

Personally, on a more philosophical level, I am sceptical that any downstream calculation that requires the data to be bound within such an interval is even reasonably applicable to structural connectome data. The fact that such a fundamental assumption about the native distribution of the incoming data is broken makes me doubt whether applying such an analysis makes sense, even if it is possible to employ a strategy to manipulate the data to conform. But that’s just me. Hopefully I can find a way to express these things in part 2 of that manuscript…

Cheers
Rob
