Tck2connectome result (Structural Connectome) data range issue

After structural connectome construction following the BATMAN 2.0 protocol, I have the connectomes for my subjects in hand.
Unlike a functional connectivity matrix (range 0 to 1), the structural connectivity matrix generated using the -scale_invnodevol option has a far wider range (0 to XX, sometimes more than 30).

Is it OK to feed the result (assignments_hcpmmp1.csv in the tutorial) into a general NBS pipeline like GraphVar or connectomestats?

Hi @iPsych,

The fundamental unit of “connectivity” being quantified is drastically different between the two modalities, and that includes the numerical ranges over which the values are (or are not) bounded. This applies to the minimum values as well as the maximum values: functional connectivity actually ranges from -1.0 to 1.0 (it’s just that negative values are explicitly truncated in many instances), whereas structural connection density is non-negative. You will also find that omitting the -scale_invnodevol option changes the values in the matrix by many orders of magnitude, since that option quantifies the volumes of the parcels in mm³ and then divides e.g. the streamline counts by those values.
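To make that magnitude difference concrete, here is a minimal arithmetic sketch. The numbers, and the exact way the two node volumes are combined, are illustrative assumptions rather than the precise convention implemented by -scale_invnodevol (check the tck2connectome documentation for that):

```python
# Hypothetical values for illustration only.
streamline_count = 1500        # raw edge value from tck2connectome (no scaling)
vol_node_a = 8200.0            # parcel volumes in mm^3 (assumed)
vol_node_b = 6400.0

# One plausible inverse-node-volume convention: divide the streamline
# count by the combined volume of the two nodes. The exact formula used
# by -scale_invnodevol may differ; this only illustrates the scale change.
scaled_edge = streamline_count / (vol_node_a + vol_node_b)

print(streamline_count)  # 1500: raw counts live in the hundreds/thousands
print(scaled_edge)       # ~0.1: volume-scaled values are orders of magnitude smaller
```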

Whether or not any downstream application is compatible with such data depends entirely on that application.
For invoking a GLM, as will occur in connectomestats, imagine that you multiply the values stored in the subject matrices by some fixed factor: the beta coefficients will scale accordingly, the estimated standard deviation will scale accordingly, but the resulting t-values will be unaffected, and so one would expect the final analysis result to be unaffected (a toy demonstration is sketched below). This isn’t exactly your case, as the node volumes will vary slightly across individuals and so the multiplicative factor won’t be identical across individuals, but to a first-order approximation the argument holds.
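Here is a minimal sketch of that scale-invariance argument, using plain SciPy on made-up per-subject edge values rather than connectomestats itself; the group sizes and effect size are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy data: one connectome edge value per subject, in two groups.
group_a = rng.normal(loc=5.0, scale=1.0, size=20)
group_b = rng.normal(loc=6.0, scale=1.0, size=20)

t_raw, _ = stats.ttest_ind(group_a, group_b)

# Multiply every subject's value by the same fixed factor, as would
# happen (approximately) when changing the connectivity metric's units:
factor = 1000.0
t_scaled, _ = stats.ttest_ind(group_a * factor, group_b * factor)

print(np.isclose(t_raw, t_scaled))  # True: the t-value is unchanged
```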
There are however other analyses where results can scale directly with the magnitudes of the raw values in the matrices, which is something that has concerned me for some time and that I’ll hopefully talk about in a future article. Sometimes people identify the edge in the connectome with the highest connectivity value, and then divide all edge values by that maximum, such that the largest value in the connectome is 1.0 for every subject (sketched below). But choices like this come with their own consequences for comparisons across participants; indeed I’d argue that the fundamental unit of “connectivity” stored in the connectome is changed by such an operation.
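For concreteness, this is what that per-subject max-normalisation looks like; the matrices here are random stand-ins, not real connectomes:

```python
import numpy as np

rng = np.random.default_rng(1)

def normalise_by_max(connectome: np.ndarray) -> np.ndarray:
    # Divide all edges by the subject's own maximum edge value, so the
    # largest value in every normalised connectome becomes exactly 1.0.
    return connectome / connectome.max()

# Two hypothetical subjects whose raw matrices differ in overall scale:
subj1 = rng.random((4, 4)) * 30.0
subj2 = rng.random((4, 4)) * 5.0

norm1 = normalise_by_max(subj1)
norm2 = normalise_by_max(subj2)

# Both maxima are now 1.0, but each subject's edges are expressed
# relative to a different, subject-specific reference connection:
print(norm1.max(), norm2.max())
```

After this operation every edge is measured relative to that subject’s single strongest connection, which is exactly the change of “unit” described above.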
What I personally consider to be the “correct” way to scale estimates of structural connection density across participants is described in this preprint; I also have a small musing in there about how other factors such as variation in node volume should IMO be handled.

Cheers
Rob