BIDS app for mrtrix3 connectome to do seed-based tractography

Hi there,

I came across this docker app (https://github.com/BIDS-Apps/MRtrix3_connectome) to generate connectomes from diffusion data.

I want to understand the tract profiles from a seed ROI to other target ROIs. Is there a way to use this connectome app to do seed-based tractography to ROIs?

Thanks so much.

Hi Sabir,

Within the context of both whole-brain structural connectomes and SIFT-like quantification of structural connectivity, I prefer to avoid the terms “seed ROI” and “target ROI”, as they refer specifically to a targeted tracking experiment. Instead, I prefer to think of the connectome as encapsulating all possible pathways of interest: if you are interested in one specific pathway only, which you would otherwise reconstruct using targeted tracking, my preferred line of thought is that the pathway of interest simply represents one (or more) specific edge(s) within the connectome. Such information can therefore be extracted from the connectome representation, rather than requiring additional tractography reconstruction experiments; moreover, it carries the quantitative properties of the SIFT model, which could not be obtained from a targeted tracking experiment alone.

(I really need to finish that paper…)

If you are only interested in the connection density, then this is provided within the connectome matrix itself. If you want to be able to access the streamlines corresponding to the pathway of interest, then you need to run the participant-level analysis with -output_verbosity 3 in order for it to provide you with the requisite data for subsequently using the MRtrix3 command connectome2tck to extract the streamlines corresponding to your pathway of interest.
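As a sketch of that extraction step, assuming hypothetical file names (the actual outputs written at -output_verbosity 3 may be named differently) and two arbitrary node indices standing in for your ROIs:

```shell
# Extract the streamlines belonging to a single connectome edge.
# tractogram.tck and assignments.txt are placeholders for the tractogram
# and streamline-to-node assignments produced by the participant-level
# analysis; nodes 12 and 34 stand in for your two ROIs of interest.
connectome2tck tractogram.tck assignments.txt edge_ \
    -nodes 12,34 -exclusive -files single
# -exclusive keeps only streamlines with BOTH endpoints in the listed nodes
```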

Of course you could simply use the FOD / tissue segmentation data provided by this container to then subsequently do targeted tracking experiments; but that would involve a lot of wasted CPU cycles in generating outputs that you are not interested in.

Rob

Hi Rob,

Thanks so much for the detailed response, this makes sense.

There was one thing I wanted to clarify: the GitHub description mentions that you need reverse phase-encoded spin-echo acquisitions, but my data set was only acquired A-P. Could I specify the -rpe_none option to dwipreproc in the script so that it runs smoothly (acknowledging the distortions that would remain in the analysis)?

Sabir

In its current state, the script will refuse to execute if it is unable to correct EPI distortions. You could conceivably modify the script to allow it to proceed, but there will be significant residual errors in alignment between the T1 and DWI data that will cause ACT to “misbehave”. And attempting to construct a structural connectome without ACT or any other reasonable mechanism for terminating streamlines is more or less frivolous.

Eventually the script will incorporate a method for EPI distortion correction in the absence of reversed phase-encoding data. But there’s a lot of different things being developed in parallel, so I can’t predict how long it might be before that becomes available and can be incorporated into the script.


Hey Robert,

Got it, that makes sense. Unfortunately for the diffusion data I have I’ll have to make do without EPI distortion corrections.

I ran it with one subject and it ran properly. I had a couple more questions:

  1. When I add more subjects, what is the best way to run the rest of the subjects in parallel?
  2. What is the difference between the “participant” and “group” analysis types? Do I do the participant analysis first, and then do the group analysis to do comparisons between the subjects?

Thanks!

Hi

Although I understand the risk for connectome construction, if we are interested in a local metric (like fibre density) or the connection density of a specific tract, maybe a way to go is to remove the ACT option? My understanding is that SIFT alone would be enough to obtain quantitative properties, but in the DWI space (deformed relative to the T1 space). That may be enough to quantify specific tracts if you adapt the ROIs used for selection.
Does that make sense?

When I add more subjects, what is the best way to run the rest of the subjects in parallel?

By default, all underlying commands will utilise multi-threading to the maximum extent possible for the system being executed on; so there may not be any large benefit in running multiple subjects in parallel on a single system. Even the FreeSurfer step (if utilised) will engage multi-threading if the installed FreeSurfer version supports it.

If you want to run multiple subjects in parallel across multiple systems, then that is entirely independent of the particulars of this script. You could run each subject individually, or, if operating in an HPC environment, use something like sbatch.
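If you are operating under SLURM, for example, an array job is a convenient way to map one task to one subject. A hypothetical sketch (container name, paths, and labels are all placeholders; the participant / --participant_label interface follows the generic BIDS Apps convention):

```shell
#!/bin/bash
#SBATCH --array=1-20
#SBATCH --cpus-per-task=8
# One array task per subject: task N processes participant label NN
label=$(printf '%02d' "${SLURM_ARRAY_TASK_ID}")
singularity run mrtrix3_connectome.simg \
    /data/bids /data/output participant \
    --participant_label "${label}"
```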

What is the difference between the “participant” and “group” analysis types? Do I do the participant analysis first, and then do the group analysis to do comparisons between the subjects?

The two analysis levels are consistent with the interfaces to other BIDS Apps, following a MapReduce type of processing model. For this specific pipeline, the participant-level analysis generates as much data as can be generated when each participant is processed independently, including the individual connectome matrix. The group-level analysis then performs inter-subject connection density normalisation, modulating the connectome matrices produced independently at the participant level so that the connection densities within them are comparable across subjects.
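So the order of operations, sketched here with placeholder paths (the invocation follows the generic BIDS Apps convention), is to complete every participant-level analysis before invoking the group level:

```shell
# Participant level: once per subject (these can be run in parallel)
docker run -v /path/to/bids:/bids -v /path/to/out:/out \
    bids/mrtrix3_connectome /bids /out participant --participant_label 01
# ...repeat for the remaining subjects, then:
# Group level: once, after all participant-level analyses have finished
docker run -v /path/to/bids:/bids -v /path/to/out:/out \
    bids/mrtrix3_connectome /bids /out group
```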

Of the multitude of manuscripts I’m writing at the moment, one explains why this step is even necessary at all, another describes this particular BIDS App script in full detail (including how the normalisation step is performed). Unfortunately I keep making the mistake of releasing tools before having published the details :expressionless:

Although I understand the risk for connectome construction, if we are interested in a local metric (like fibre density) or the connection density of a specific tract, maybe a way to go is to remove the ACT option?

Technically, connectome construction is obtaining the connection densities of specific tracts; it’s just that there are many tracts of interest. So the same reasoning applies: yes, technically you can run SIFT / SIFT2 without ACT / on a tractogram generated without ACT, but the difficulty will be in isolating the streamlines that correspond to the pathway of interest. Without stringent control of streamline terminations, applying endpoint-based criteria (i.e. a grey matter parcellation) is deceptively difficult and error-prone.

In addition: If you are using streamlines tractography only to generate a voxel mask corresponding to a pathway of interest, from which you will extract some quantitative property, then application of SIFT / SIFT2 to the tractogram is unlikely to have much of an influence on the mechanism(s) used to derive that mask.

Hey Rob,

Thanks so much, I really appreciate it. I’ve preprocessed the data and am running it, and am getting an “out of range” error.

File "/mrtrix3_connectome.py", line 1044, in <module>
    runSubject(app.args.bids_dir, subject_label, os.path.abspath(app.args.output_dir))
File "/mrtrix3_connectome.py", line 273, in runSubject
    run.function(os.rename, dwi_image_list[0], 'dwi.mif')
IndexError: list index out of range

I ran with the --debug option and it doesn’t really indicate what the error was. I currently have the data formatted in the BIDS standard, with the data folder containing a .mif file generated from a .nii.gz file, with the bvals and bvecs stored in the header (from running dwipreproc).

What could this be?

The relevant code is written very specifically to import data that conform to the BIDS standard; indeed writing a script that supports a wide range of possible acquisition strategies would be near impossible without use of a format such as BIDS. The fact that your pre-processed data is stored as a .mif however means that it can’t possibly conform to BIDS. I’m guessing that you’ve also had to provide the --skip-bids-validator command-line option to get to that point, as the validator should have warned you about providing a non-BIDS file within a BIDS dataset.

The relevant line of the script treats every file in the dwi/ directory whose name contains the string “_dwi.nii” as an input image; this listing will therefore not include your .mif file. Even though your data are pre-processed, and therefore not “raw un-processed” data (and so don’t strictly conform to BIDS), when using the -preprocessed option the script still expects your input data to be in a BIDS format. This is also the case for the testing data that are downloaded as part of Continuous Integration.
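To illustrate the consequence with a toy example (hypothetical file names; this mimics the selection logic rather than reproducing the script’s actual code):

```shell
# A pre-processed .mif alongside the BIDS-named NIfTI: only the file whose
# name contains "_dwi.nii" is picked up as an input DWI; the .mif is skipped.
mkdir -p demo/sub-01/dwi
touch demo/sub-01/dwi/sub-01_dwi.nii.gz demo/sub-01/dwi/dwi.mif
for f in demo/sub-01/dwi/*; do
  case "$f" in
    *_dwi.nii*) echo "input DWI: $f" ;;
    *)          echo "ignored:   $f" ;;
  esac
done
```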

Hi Rob,

Thanks, I was able to fix the error. I also have another question about the group analyses. It seems like the BIDS standard allows for different groups within a dataset (e.g. patients and controls). In my dataset, I have both patients and the healthy controls that I want to compare against. Does the MRtrix app do a separate analysis for those groups in the dataset? Or should I run the pipeline on two different datasets (one for patients, one for controls)?

Thanks!

Does the MRtrix app do a separate analysis for those groups in the dataset?

The App does not perform any kind of “analysis” in an automated way; it simply performs a form of normalisation that enables such analyses to be done. It does generate the group mean connectome following this normalisation, but this should not be viewed as the “ultimate” output of the pipeline. Running the group-level analysis separately for two groups may give you the mean connectomes of the two groups, but it would also potentially regress out any global difference between the two groups, so would probably not be a good idea.

Hi Rob,

Can you clarify what you mean by “regress out any global difference between the two groups”?

And I see. So the connectome should be analyzed further to understand the structural connectivity of the area I’m trying to study.

Thanks.

Can you clarify what you mean by “regress out any global difference between the two groups”?

Sorry, I can sound slightly cryptic when I’m trying to be accurate and concise. I’ll explain this using AFD, since it’s slightly simpler, but the same concept applies to connection densities.

Imagine that you’re doing an FBA between two groups, where the “patient” group has a global decline in AFD, reflected in the intensity of the DWI signal in the white matter, particularly at higher b-values. Now imagine that, rather than using a common group-average response function for spherical deconvolution, you generate one WM response function for the patient group and one WM response function for the control group. The difference between these two response functions reflects the global difference in AFD between the two groups. However, if you then perform deconvolution of each subject’s image data using the response function for the group to which that subject is assigned, then patients and controls will have comparable FOD amplitudes: the global difference between the two groups has been “regressed out” implicitly rather than explicitly.
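For contrast, the usual approach in MRtrix3 is to average the per-subject response functions across all subjects of both groups, and then use that single group-average response for every subject’s deconvolution, so that any genuine global AFD difference survives into the FOD amplitudes. A sketch with placeholder file names (responsemean and dwi2fod are actual MRtrix3 commands in recent versions):

```shell
# Average the white-matter response functions across ALL subjects
responsemean sub-*/response_wm.txt group_average_response_wm.txt
# Deconvolve every subject with the SAME response function
for subj in sub-*; do
    dwi2fod csd "${subj}/dwi.mif" group_average_response_wm.txt "${subj}/wmfod.mif"
done
```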

Once you have generated a connectome matrix for each subject, there is a wide range of possible analyses that can be applied to such data. However the purpose of the connection density normalisation is to make the raw values contained within those matrices comparable across subjects.

A post was split to a new topic: BIDS App & EPI distortion correction w/o reversed phase encoding