Creating Nodes.mif file from Neurosynth Map for Connectomes

Hi MRtrix team!

I am trying to use clusters from a Neurosynth meta-analysis map as connectome targets. I currently have the following resources at my disposal:

  1. Subject-space tractograms (10M streamlines)
  2. SIFT2 weights for these tractograms
  3. A high-resolution MNI FA template (provided with TractSeg) and subject-space FA maps (from FSL DTIFIT)
  4. An MNI-to-subject space transformation matrix calculated by FLIRT, e.g.:
flirt -ref $sub_fa -in $MNI_template -omat MNI_2_sub.mat \
-dof 6 -cost mutualinfo -searchcost mutualinfo
  5. Clusters from the meta-analysis map, each saved as a separate .nii.gz. Neurosynth produces these maps in MNI space.

I imagine it would be best to keep the tractograms in their native space as opposed to transforming them to MNI, since information about streamline length would be lost and these files are computationally expensive to work with. So, I anticipate my next steps would be something along the lines of:

  1. Put the clusters into a single 4D file (just fslmerge?)
  2. Transform this file into subject space, something along the lines of:
flirt -ref $sub_fa -in $neurosynth_clusters -out clusters_sub_space.nii.gz \
-applyxfm -init MNI_2_sub.mat -dof 6 -interp nearestneighbour
  3. Create nodes.mif from this (this is primarily where I am unsure)
  4. Make connectomes, e.g.
    tck2connectome tracks.tck nodes.mif connectome.csv -tck_weights_in sift2_weights.csv -out_assignments assignments.txt -scale_invnodevol
    or
tcksample tracks.tck $sub_fa mean_FA_per_streamline.csv -stat_tck mean
tck2connectome tracks.tck nodes.mif mean_FA_connectome.csv \
 -scale_file mean_FA_per_streamline.csv -stat_edge mean -tck_weights_in sift2_weights.csv
  5. Perform analyses in “connectome-space”
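For step 1 of the plan above, a minimal sketch of the merge (the cluster filenames are hypothetical placeholders; the command is only printed as a dry run, so the filenames can be checked before executing):

```shell
# Hypothetical filenames; fslmerge -t concatenates 3D volumes along the 4th dimension.
# Dry run: the command is printed, not executed; drop the echo (or pipe to sh) to run it.
cmd="fslmerge -t clusters_4d.nii.gz cluster_01.nii.gz cluster_02.nii.gz cluster_03.nii.gz"
echo "$cmd"
```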

As suggested by the title, I am unsure what nodes.mif is supposed to be and how I would create it from the individual clusters. Would all the files go into a single volume with “intensities” corresponding to different labels (similar to a colour look-up table), or could I merge the files into a 4D NIfTI, where each volume is a different cluster?

Thank you in advance!
Steven

I think I figured it out. I just used fslmaths to add the clusters together, each scaled by a different amplitude.
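For anyone following along, here is the voxel-wise arithmetic of that trick on a toy 2×2×1 grid, in pure NumPy standing in for what the fslmaths scale-and-add does (a sketch, not the actual image I/O):

```python
import numpy as np

# Toy 2x2x1 "volumes" standing in for two binary cluster masks.
cluster_a = np.array([[[1], [0]], [[0], [0]]], dtype=np.int32)
cluster_b = np.array([[[0], [0]], [[1], [0]]], dtype=np.int32)

# Scale each mask by a unique integer "amplitude" and sum them:
nodes = 1 * cluster_a + 2 * cluster_b

# Caveat: this is only valid if the masks do not overlap; a voxel lying
# in both masks would receive 1 + 2 = 3 and would be treated as a
# spurious third node by tck2connectome.
print(nodes.squeeze())
```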

Just to expand on this:

The parcellation image provided as input to tck2connectome is referred to across MRtrix3 commands as a “label image”: a 3D image with integer datatype, where each unique integer value (ideally incrementing from 1) corresponds to a particular structure. So if what you have is a 4D image where each volume is a binary mask, the way to convert the latter into the former (assuming that none of the masks overlap) is:

  • Extract each volume of the 4D image into its own 3D volume (mrconvert -coord 3, or indeed using multi-file numbered images);

  • Multiply each mask image by a unique integer value (mrcalc -mult);

  • Add the images together (mrmath sum).
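A concrete sketch of those three steps as a loop, assuming a hypothetical 4D input clusters4d.mif containing three binary mask volumes; the MRtrix3 commands are written to a small script for inspection rather than executed directly:

```shell
# Assumes a hypothetical 4D image clusters4d.mif with 3 binary mask volumes.
# The commands are written to make_nodes.sh for inspection; run it with
# `sh make_nodes.sh` once the filenames match your data.
: > make_nodes.sh
for i in 0 1 2; do
  label=$((i + 1))
  # extract volume i, then scale the binary mask by its unique label value
  echo "mrconvert clusters4d.mif -coord 3 $i - | mrcalc - $label -mult cluster_$label.mif" >> make_nodes.sh
done
# sum the scaled masks into a single integer label image
echo "mrmath cluster_1.mif cluster_2.mif cluster_3.mif sum nodes.mif -datatype uint32" >> make_nodes.sh
cat make_nodes.sh
```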

While there’s no stand-alone command to do this, writing one using the MRtrix3 API would not be too difficult (“mask2label”, maybe?).

The other thing to be wary of in this process is that the default assignment mechanism in tck2connectome is implicitly tailored for parcellations where the vast majority of regions in which streamlines will plausibly terminate are included within the parcellation. If the cortex around your clusters is not included in the parcellation, then streamlines terminating adjacent to your clusters may nevertheless be assigned to those clusters, as long as the distance between them is below the default maximal search radius. So you might want to consider either including the cortex outside of your clusters as a dummy parcel (so that such streamlines are assigned to it rather than to your clusters), or using a different assignment mechanism. It’s worth contemplating how best to assign streamlines to your regions of interest given the nature of your data.
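For instance (a hedged sketch, reusing the filenames from the original post), the default radial search can be replaced entirely via one of the tck2connectome assignment options:

```shell
# Dry run: the command is printed, not executed. -assignment_end_voxels
# assigns each streamline endpoint only to the voxel in which it actually
# resides, so terminations outside the clusters remain unassigned instead
# of being snapped to a nearby node by the default radial search.
cmd="tck2connectome tracks.tck nodes.mif connectome.csv \
  -tck_weights_in sift2_weights.csv -assignment_end_voxels"
echo "$cmd"
```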

Cheers
Rob
