MRtrix in brains with stroke lesions

Hi all,

I am faced with an unusual dataset and need some help to make the right choice.

My data have 60 orientations at b=1100, 10 orientations at b=300, and 10 b=0 volumes (80 volumes in total). This looks like a multi-shell acquisition, but without a true second shell. I don’t have access to the group that acquired the data, so I’m trying to work out myself what that b=300 shell was for. The data are from two subject groups: controls and stroke patients.

I have already performed preprocessing for a single control subject (dwidenoise, dwipreproc, dwibiascorrect with the ANTs N4 option). Everything looks fine, and hopefully the stroke lesions (bright on b=0) will not negatively affect the bias field correction for the stroke subjects.
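For reference, the chain of commands was roughly as follows (file names are placeholders, and the exact dwipreproc arguments depend on the MRtrix3 version and on the phase-encoding setup):

    dwidenoise dwi_raw.mif dwi_den.mif                           # PCA-based denoising
    dwipreproc dwi_den.mif dwi_preproc.mif -rpe_none -pe_dir AP  # motion / eddy current correction via FSL eddy
    dwibiascorrect -ants dwi_preproc.mif dwi_unbiased.mif        # N4 bias field correction using ANTs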

The questions I have are related to the next steps, response estimation and tractography.

  1. If I assume these are multi-shell data and follow the HCP tutorial, the response functions should be estimated with the multi-shell multi-tissue options using a 5TT segmentation. My data may be multi-shell, but my patients may not have a reliable 5TT segmentation: darkened white matter around the lesion may be labelled as GM. How should I estimate the responses on these data?

  2. For tractography, I know that -act followed by SIFT would be the best option, but, again, my patients may not have a plausible WM/GM border for -act to rely on. So I think I should run tckgen the old way, with seeds from everywhere, and run SIFT later. Does this seem reasonable?

  3. Finally, I will use the connectome for graph theory measures. However, patients may have big lesions. This means that the same number of streamlines (e.g., 1 million) will be distributed over only part of the brain in patients. Could there be a systematic bias in the pattern of connections when generating the same number of streamlines from an intact brain and from a lesioned brain? In other words, if I take the very same brain and generate two tractograms, one seeded everywhere and one seeded in the right hemisphere only, do the tracts within the right hemisphere follow the same pattern, or is there some bias simply because the number of streamlines is twice as large? Is deterministic tractography more appropriate in this context?

Thanks a lot for any help.

Just to post an update: I managed to run dwi2response dhollander followed by dwi2fod msmt_csd.
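Roughly, the calls were along these lines (output names are illustrative):

    dwi2response dhollander dwi_unbiased.mif wm_response.txt gm_response.txt csf_response.txt
    dwi2fod msmt_csd dwi_unbiased.mif wm_response.txt wmfod.mif gm_response.txt gm.mif csf_response.txt csf.mif -mask mask.mif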

Preprocessing also needed a fix, because FSL 5.0.5 was outdated and wasn’t rotating the vectors. The 5.0.9 FSL patch for eddy refused to work because the data looked like DSI, so I had to hack the dwipreproc script to force --data_is_shelled into the eddy call. It might be a good idea to add this option to the dwipreproc script.

After a quick look, the newer eddy_openmp followed by dhollander and msmt_csd produced less noisy tracts than the old eddy followed by tournier and csd.

Hope this is helpful. I am still uncertain as to whether brain lesions will bias the connectome of the healthy parts, or whether deterministic tractography is a better choice in this scenario.

Hi Dorian,

My data have 60 orientations at b=1100, 10 orientations at b=300, and 10 b=0 volumes (80 volumes in total). This looks like a multi-shell acquisition, but without a true second shell.

Well, we tend to refer to even a solitary b=0 volume as a ‘shell’, so I wouldn’t be too harsh :stuck_out_tongue: I’ve heard of a number of people acquiring a low-density shell with a very low b-value. 10 volumes is maybe not ideal though; even at b=300, an lmax=4 fit would probably be preferable.

If I assume these are multi-shell data and follow the HCP tutorial, the response functions should be estimated with the multi-shell multi-tissue options using a 5TT segmentation. My data may be multi-shell, but my patients may not have a reliable 5TT segmentation: darkened white matter around the lesion may be labelled as GM. How should I estimate the responses on these data?

Just to post an update: I managed to run dwi2response dhollander followed by dwi2fod msmt_csd.

If you don’t trust your 5TT segmentations, then dwi2response dhollander is the best bet. Though out of curiosity, just how big is the ‘darkened white matter around the lesion’? If it’s literally just a ring of partial volume between WM and the lesion hyperintensity, then in an ideal world even an intensity-based tissue segmentation would still correctly label those voxels as WM & CSF; furthermore, dwi2response msmt_5tt will only use voxels in tissue response function estimation if they are labelled as comprising at least 95% of that tissue, so small ‘noisy’ segmentations shouldn’t influence the response estimation too much. But if the ‘darkened white matter’ is more extensive than a couple of voxels, and the tissue segmentation only has a T1 image to work with, then yes, such voxels could be labelled as GM since that’s what it looks like; labelling it otherwise would require more image data, or more anatomical prior information.
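If you want to sanity-check this on your own data, both dwi2response algorithms can write out the voxels they end up selecting via the -voxels option, which you can then overlay in mrview to see whether the lesions are contributing; a rough sketch, with placeholder file names:

    dwi2response msmt_5tt dwi.mif 5tt.mif wm_5tt.txt gm_5tt.txt csf_5tt.txt -voxels voxels_5tt.mif
    dwi2response dhollander dwi.mif wm_dh.txt gm_dh.txt csf_dh.txt -voxels voxels_dh.mif
    mrview dwi.mif -overlay.load voxels_5tt.mif -overlay.load voxels_dh.mif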

For tractography, I know that -act followed by SIFT would be the best option, but, again, my patients may not have a plausible WM/GM border for -act to rely on. So I think I should run tckgen the old way, with seeds from everywhere, …

For stroke I’d expect the WM/GM border to be OK in non-affected regions; it’s a question of whether the tissue segmentation in affected regions will result in incorrect anatomical priors being applied during seeding / tractography. This intrinsically depends on having a sense of what the correct anatomical priors actually are. There are a few options:

  • Correct the tissue segmentations manually using the 5ttedit command, based on your knowledge / expectation of the condition.

  • Using either manual or automated methods, obtain lesion segmentations, and modify those regions to the pathological tissue type in the 5TT image (again using 5ttedit; see the sketch after this list). This is basically equivalent to saying ‘I don’t know what tissue underlies these regions, so I’m not going to apply any anatomical priors to streamlines whilst they are traversing these regions, and will instead rely purely on the diffusion image data’.

  • Disabling ACT altogether may ‘remove the confound’, and may not require any user intervention, but it means that you won’t get the benefits of ACT in those regions where its application would be perfectly reasonable.
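For the second option, the edit itself is a one-liner; something like this sketch, assuming you have a binary lesion mask (lesion_mask.mif is a placeholder for whatever manual or automated segmentation you obtain):

    # mark the lesion voxels as 'pathological tissue' in the 5TT image,
    # so that no anatomical priors are applied within them
    5ttedit 5tt.mif 5tt_lesion.mif -path lesion_mask.mif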

Without ACT you may still get decent tracking if MSMT is able to reduce the influence of non-WM tissue on the WM FODs. But it will still pose some problems for connectome construction later (see below).

… and run SIFT later. Does this seem reasonable?

You can run SIFT in the absence of ACT, but it can be prone to biases. For instance: if all voxels contribute equally to the model fit, then streamlines that project through the cortex and into CSF are more likely to be retained than those that terminate at or near the GM-WM interface, since they assist in reconstructing the non-zero FODs in those non-WM voxels. As with ACT, there’s scope for improving this behaviour a little if your multi-shell DWI-based tissue segmentations are good enough; but I honestly don’t know what the algorithm’s performance will be like for your acquisition scheme.
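In terms of commands, that would be roughly the following (streamline counts and file names purely illustrative; older MRtrix3 versions use -number rather than -select):

    tckgen wmfod.mif tracks_10M.tck -seed_image mask.mif -select 10000000   # whole-brain seeding, no ACT
    tcksift tracks_10M.tck wmfod.mif tracks_sift.tck -term_number 1000000   # filter down to 1M streamlines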

Finally, I will use the connectome for graph theory measures.

There will always be difficulties in connectome construction if ACT is not used: the termination points are too ill-posed. If you use a volume-based parcellation like AAL, it won’t be as difficult to get streamlines terminating near the cortex assigned correctly, but it will also mean that streamlines terminating in lesions will be more likely to be assigned to the closest parcel. You could adjust the maximum distance of the endpoint radial search to strike a balance between these.
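For reference, that distance is controlled in tck2connectome; e.g. (the 2 mm value is purely illustrative, and nodes_parcellation.mif stands for your labelled parcellation image):

    tck2connectome tracks_sift.tck nodes_parcellation.mif connectome.csv -assignment_radial_search 2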

However, patients may have big lesions. This means that the same number of streamlines (e.g., 1 million) will be distributed over only part of the brain in patients. Could there be a systematic bias in the pattern of connections when generating the same number of streamlines from an intact brain and from a lesioned brain? In other words, if I take the very same brain and generate two tractograms, one seeded everywhere and one seeded in the right hemisphere only, do the tracts within the right hemisphere follow the same pattern, or is there some bias simply because the number of streamlines is twice as large?

Depends entirely on the type of analysis you’re performing on the connectomes themselves. It’s pretty obvious that for a brain where one hemisphere is more-or-less knocked out, the overall topology of the network is going to be vastly different, and this will be reflected in a wide range of network measures that can be calculated from the connectome.

It sounds like you’re (theoretically) comparing a stroke patient with one hemisphere basically non-existent against a healthy subject where one hemisphere is ‘masked’, and referring purely to the difference in streamline count rather than any residual topological differences. Again, this depends on the particular analysis being performed - as I’m sure I’ve mentioned a number of times, if your so-called ‘network connectivity measure’ is not invariant to a global scaling of the connectome matrix values (which is approximately what scaling the number of streamlines will do), the meaning / interpretation / usefulness of that measure needs to come under scrutiny. But if you’re referring to differences in the fundamental connectome edge values (streamline counts) between these two cases, then what you have is a connection density normalisation problem; tracking one hemisphere with 1 million streamlines, and tracking both hemispheres with 1 million streamlines then masking out one hemisphere, will give something resembling a factor of 2 difference in ‘connection strength’. This simply highlights that one streamline in one subject is not quantitatively equivalent to one streamline in another subject; this is something I’ve been threatening to publish on for years now…

Is deterministic tractography more appropriate in this context?

The selection of a deterministic vs. probabilistic tractography algorithm doesn’t really have an influence on the issue discussed above (unless that selection were to have ramifications for how other steps of processing / analysis were performed).

Preprocessing also needed a fix, because FSL 5.0.5 was outdated and wasn’t rotating the vectors. The 5.0.9 FSL patch for eddy refused to work because the data looked like DSI, so I had to hack the dwipreproc script to force --data_is_shelled into the eddy call. It might be a good idea to add this option to the dwipreproc script.

Yes, I read something somewhere about eddy now testing for ‘shelled-ness’ of the gradient scheme. However, I can’t simply add that option to the script, since that would cause the script to fail if run with an older version of eddy: I need to find a robust way of determining within the script whether or not such an option is available. Script maintenance is becoming a bit of an overhead for me… I also don’t quite understand why eddy refuses to run on DSI data, given that the whole framework was supposedly designed around Gaussian Processes, which aren’t explicitly dependent on shelled data; can anybody clarify this for me?
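Something along the lines of the following is the kind of check I have in mind, though dwipreproc is actually a Python script rather than shell, so treat this purely as a sketch (it also assumes the installed eddy binary lists its options when asked for help):

    # hypothetical check: only add --data_is_shelled if this eddy version knows about it
    eddy_exe=$(command -v eddy_openmp || command -v eddy)
    if "$eddy_exe" --help 2>&1 | grep -q 'data_is_shelled'; then
        eddy_extra_options='--data_is_shelled'
    fi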

After a quick look, the newer eddy_openmp followed by dhollander and msmt_csd produced less noisy tracts than the old eddy followed by tournier and csd.

This will primarily be the effect of multi-tissue CSD; hopefully your results mimic what was shown in that paper. It’s difficult to know how much of an effect the change in eddy version would have; that would require an explicit test where just that stage alone was varied to know for sure.

Cheers
Rob

Thank you Rob, I will take your comments into consideration.

Just to clarify, stroke lesions are not as clear and “focal” as the literature suggests. On T1 there can easily be white matter areas that are darkened and look oedematous rather than like healthy tissue. This is not just a couple of voxels around the lesion; it may extend for more than 1 cm, depending on who drew the lesion and what criteria were used. Tissue segmentation would put these regions in GM, which would compromise the logic of -act.

I realize from your post that -act is very important. I will see whether that can be achieved on these data.

On another (unrelated) note, I tried global tractography with the exact same FOD and response file that I used for tckgen, and got a tractogram with a lot of short streamlines. I can’t show you the picture at the moment, but just wanted to mention it in case you have any thoughts. This was just a test of tckglobal for me anyway.
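For reference, the call was roughly along these lines (file names from memory; note that tckglobal takes the DWI series and the WM response as its main inputs, rather than a pre-computed FOD image):

    tckglobal dwi_unbiased.mif wm_response.txt tracks_global.tck -riso csf_response.txt -riso gm_response.txt -mask mask.mif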

Dorian

Hehe, talk about being late to the party here… :roll_eyes:
Well, I was just working through some stuff and stumbled upon this old one. With respect to the lesion worries here, and how 5ttgen fsl and dwi2response msmt_5tt versus dwi2response dhollander will work on this, I’ve had a good understanding of this for quite a while now… so I’ll reply for reference, in case it helps anyone in the future:

Yeah, so those are (non-acute stroke) white matter hyperintense lesions. Brighter on b=0 or T2w in general, darker on T1, with intensities in the range of GM or if really, really bad, CSF. But probably GM.

So that’s what you’ll get with 5ttgen fsl here as well: big blobs of fake GM in the WM. This quite drastically biases the estimated GM response from dwi2response msmt_5tt, since these lesions are typically still a dense mixture with lots of axons happily running through. But because it’s a big coherent blob, 5ttgen fsl gets it wrong. I know from more than a year of experience now that this is not a problem at all for dwi2response dhollander though. Such lesions are broadly detected as sitting between the tissue types the algorithm is after, some kind of partial-volumed mix of WM- and GM-like signals. dwi2response dhollander inherently avoids these successfully, just like it avoids areas partial-volumed between genuine WM and GM (e.g. close to the cortex).

3-tissue CSD is highly advised in these scenarios as well, because a lot of lesion signal is successfully filtered out as GM-like or CSF-like. See this abstract for an example in Alzheimer’s disease (but microstructurally, this is a very similar story for stroke, and MS, etc…): https://www.researchgate.net/publication/315836029_Towards_interpretation_of_3-tissue_constrained_spherical_deconvolution_results_in_pathology

This helps clean up your FODs to a great extent, allowing you to track through the lesions. In that sense, ACT-wise, it’d probably be best for the lesions to be regarded as WM. Sure, the white matter is damaged, but it typically isn’t broken up yet (unless you see patches that are purely CSF-like, pitch black on the T1w image, and even black (rather than bright) on a FLAIR). But knowing what 5ttgen fsl yields, this would either require another segmentation strategy, or a manual approach where the lesions are corrected to be labelled as “WM” in the segmentation, rather than GM.
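So concretely, if you’ve got a lesion mask, something along these lines (just a sketch; lesion_mask.mif stands for whatever manual or automated segmentation you can get your hands on):

    # relabel the lesion voxels as WM in the 5TT image, so ACT treats them as (damaged) white matter
    5ttedit 5tt.mif 5tt_lesions_as_wm.mif -wm lesion_mask.mif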

So with respect to this, I’d argue we do know what tissue underlies these regions, in the case of stroke lesions at least: what underlies them is still the original structure, albeit damaged. This is different from e.g. a tumour with a significant mass effect, or a mix of mass and infiltration effects that can’t trivially be separated. The latter case would indeed justify the pathological tissue type label, because we genuinely don’t know (a priori) what’s going on.

Well, we tend to refer to even a solitary b=0 volume as a ‘shell’ …

Our software (still) does, but I am now definitely tending to avoid it… because everyone else in our field doesn’t refer to it as a shell. :grin: It has sadly already led to a lot of confusion here on the forum as well… :wink: