FBA pipeline - quality check for several steps

Hi MRtrix, it's me again.

TL;DR: see the bold questions.

After working through most of the FBA tutorial (Fibre density and cross-section - Multi-tissue CSD — MRtrix 3.0 documentation) and playing around with things a little, I have a few questions to make sure I'm on the right track (questions are in bold so they're easier to find in this wall of text…).

The Data

I'm using low b-value data (10 b=0 volumes (not interleaved – they're all at the beginning), b=1000 in 60 directions, isotropic 2.4 x 2.4 x 2.4 mm, all with the same acquisition protocol) from the same GE scanner (which, being GE, unfortunately means no slice-to-volume correction… so I was strict about removing any scans with motion). I used the most recent version of dcm2niix (GitHub - rordenlab/dcm2niix: dcm2nii DICOM to NIfTI converter: compiled versions available from NITRC) to convert all the DICOMs to NIfTI files.
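For reference, the conversion call was essentially the following (the directory paths are placeholders, not my actual ones):

```bash
# Convert one subject's DICOM series to compressed NIfTI (+ .bval/.bvec/.json sidecars).
# -z y : gzip the output; -f : output filename pattern (protocol name + series number).
dcm2niix -z y -f %p_%s -o /path/to/nifti/sub-01 /path/to/dicom/sub-01
```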

Pre-processing

I pre-processed these using PreQual (GitHub - MASILab/PreQual: An automated pipeline for integrated preprocessing and quality assurance of diffusion weighted MRI images), which did all of the following: denoising (MRtrix's MP-PCA), susceptibility-induced distortion correction with Synb0 (GitHub - MASILab/Synb0-DISCO), topup/eddy (FSL) for inter-volume motion and eddy-current correction, dwigradcheck (MRtrix) to check/account for b-matrix rotations, N4 bias field correction (ANTs), and a brain mask from BET2 (FSL). PreQual mentions having to set MRtrix to favour the sform NIfTI transform over the qform, which they say is the default… so I fear they may be using an earlier version of MRtrix in this pipeline. So that's potential issue #1, if there have been important MRtrix updates to any of the pre-processing commands since. Potential issue #2 (maybe): I have not yet checked whether some files use the sform while others use the qform… TBD.
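For #2, I'm planning to check it with something along these lines (assuming FSL's fslhd is on the path; filenames are placeholders):

```bash
# Print the sform/qform codes for every converted DWI, to spot files where only one
# of the two transforms is set (or where subjects differ from each other).
for f in /path/to/nifti/sub-*/*dwi*.nii.gz; do
    echo "== ${f}"
    fslhd "${f}" | grep -E 'sform_code|qform_code'
done
# (My understanding is that MRtrix's own preference can be pinned explicitly with
#  "NIfTIUseSform: 1" in ~/.mrtrix.conf, which is presumably what PreQual is referring to.)
```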

Subject Tissue Response Functions

For 3-tissue response function estimation (dwi2response dhollander), I used the masks obtained from PreQual – they fit the brain nicely, with no holes (below is actually the upsampled mask, but it shows the masking used).
upsampled_mask

I checked the response functions with shview – they appear to be as expected (I zoomed in a little on the CSF one to show it - it was tiny).

I also checked the voxels selected for the response functions – which brings me to my first question. The sampling for GM and WM looks decent – but I notice the CSF seems to be derived largely from an area around the brainstem and not from the ventricles. Even in a participant with large ventricles, the CSF is being sampled from the brainstem area. Is this something to be concerned about? Should I be manually removing this area (I am not interested in studying it) to prevent it being sampled, or is this expected? The reason I ask will become clearer later on…
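For context, the per-subject response estimation and the voxel-selection check were essentially (placeholder filenames):

```bash
# Per-subject 3-tissue response functions; -voxels saves the selected voxels for QC.
dwi2response dhollander sub-01_dwi.mif \
    sub-01_response_wm.txt sub-01_response_gm.txt sub-01_response_csf.txt \
    -mask sub-01_mask.mif -voxels sub-01_rf_voxels.mif

# Overlay the selected voxels on the DWI to see where each tissue was sampled from.
mrview sub-01_dwi.mif -overlay.load sub-01_rf_voxels.mif
```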



Example with larger ventricles, in an older participant:

I moved on to create the average tissue response functions (from 40 subjects), which again look as expected.
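The averaging itself was just responsemean across the subjects, along these lines (placeholder filenames):

```bash
# Average each tissue's response function across all 40 subjects.
responsemean sub-*_response_wm.txt  group_average_response_wm.txt
responsemean sub-*_response_gm.txt  group_average_response_gm.txt
responsemean sub-*_response_csf.txt group_average_response_csf.txt
```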

I upsampled to 1.25 mm (the masks too, and checked these to make sure they were still good fits), then used SS3T-CSD (https://3tissue.github.io/, all default parameters) with my average response functions to generate the 3-tissue FODs. I checked that the WM FODs were in the WM as expected, per: MRtrix Tutorial #5: Constrained Spherical Deconvolution — Andy's Brain Book 1.0 documentation
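Concretely, the upsampling and SS3T-CSD calls were roughly as follows (placeholder filenames; ss3t_csd_beta1 is the MRtrix3Tissue command):

```bash
# Upsample the DWI to 1.25 mm isotropic, and regrid the brain mask onto the same grid.
mrgrid sub-01_dwi.mif regrid -voxel 1.25 sub-01_dwi_upsampled.mif
mrgrid sub-01_mask.mif regrid -template sub-01_dwi_upsampled.mif \
    -interp linear -datatype bit sub-01_mask_upsampled.mif

# Single-shell 3-tissue CSD using the group-average response functions.
ss3t_csd_beta1 sub-01_dwi_upsampled.mif \
    group_average_response_wm.txt  sub-01_wmfod.mif \
    group_average_response_gm.txt  sub-01_gm.mif \
    group_average_response_csf.txt sub-01_csf.mif \
    -mask sub-01_mask_upsampled.mif
```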

However… it looks like I have some small WM FODs appearing in the CSF? Is this normal, or does it suggest an issue with my response functions (perhaps related to the above, i.e. not sampling a good area for CSF in the response functions)?

Bias Field Correction and Intensity Normalisation

I continued on anyway just to see how the normalisation results would turn out. Since the mask fit looked pretty tight to the brain (see above)… I used the upsampled mask for mtnormalise as it was, without erosion. I am wondering whether it is important to erode the mask for this step, just in case, even if it's a good brain fit?
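For completeness, the call was essentially (placeholder filenames):

```bash
# Joint bias field correction and global intensity normalisation across the 3 tissue
# compartments, using the un-eroded upsampled brain mask.
mtnormalise sub-01_wmfod.mif sub-01_wmfod_norm.mif \
            sub-01_gm.mif    sub-01_gm_norm.mif \
            sub-01_csf.mif   sub-01_csf_norm.mif \
            -mask sub-01_mask_upsampled.mif
```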

Study specific unbiased FOD template

I had some issues with cropping of my population template – but that's being resolved here: population template FOV cropped issue
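For reference, the template build itself followed the documented population_template call, roughly (directory names are placeholders):

```bash
# Build the study-specific unbiased WM FOD template from the template subjects
# (normalised FODs and matching masks linked into template/fod_input and template/mask_input).
population_template template/fod_input -mask_dir template/mask_input \
    template/wmfod_template.mif -voxel_size 1.25
```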

Registration

I registered participants to the resulting population template from the initial fix provided (average_inv = None) and checked the registrations – they looked good.

FODregistration
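The registration step itself was essentially the standard FBA call (placeholder filenames):

```bash
# Register each subject's normalised WM FOD image to the FOD template,
# saving both the forward and inverse non-linear warps.
mrregister sub-01_wmfod_norm.mif -mask1 sub-01_mask_upsampled.mif \
    template/wmfod_template.mif \
    -nl_warp sub-01_subject2template_warp.mif sub-01_template2subject_warp.mif
```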

The intersection of all subject masks in template space included all areas of interest for me (example below - light pink is the intersection, red is one subject).
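The intersection mask was computed roughly like this (placeholder filenames):

```bash
# Warp each subject's brain mask into template space, then take the voxel-wise
# intersection (minimum) across all subjects.
mrtransform sub-01_mask_upsampled.mif -warp sub-01_subject2template_warp.mif \
    -interp nearest -datatype bit sub-01_mask_in_template_space.mif
mrmath sub-*_mask_in_template_space.mif min template/template_mask.mif -datatype bit
```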

Whole brain tractography on the template

Moving past the fixel steps for now, I started generating streamlines for connectivity-based fixel enhancement (CFE), starting with a cutoff of 0.06 as suggested – but many streamlines appeared in the CSF, so I played around with the cutoff (value in the bottom right of the gif) to see how much I could improve this.

streamlines

My question is (assuming I can't improve the streamlines in the CSF by changing something earlier in the pipeline): does the 0.13 cutoff look decent? Or is there another way to improve this – somehow using ACT on the template, for example?
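For reference, the tractography calls I have been comparing are essentially the documented ones with only the cutoff varied (placeholder filenames; the -select count here is deliberately small, just for the sketch):

```bash
# Whole-brain tractography on the FOD template, varying only the FOD amplitude cutoff.
for cutoff in 0.06 0.08 0.10 0.13; do
    tckgen template/wmfod_template.mif tracks_cutoff_${cutoff}.tck \
        -seed_image template/template_mask.mif -mask template/template_mask.mif \
        -angle 22.5 -minlen 10 -maxlen 250 -power 1.0 \
        -cutoff ${cutoff} -select 2000000
done
```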

Thanks a lot for your time, and sorry if this was answered already elsewhere.

Hi @abesik,

Apologies you have not yet received a response. I don’t interact with this forum often anymore, but have been encouraged to provide some input to your questions. Don’t hesitate to contact me directly if you need or want more hands-on help or QA’ing of your steps; I’m very happy to assist if you think it would help. In any case, here goes some input:

Preprocessing generally: looks all very good to me. The brain masks look excellent in the sense that they trace the shape of the outline very well, but they are otherwise slightly “generous”, reaching a bit beyond even the CSF surrounding the brain, as far as I can see in your screenshots. This is not a big issue, but I will refer to it below at one point. Apart from that point, it can actually be an asset when you get to the stage of computing the intersection mask in template space. So brief conclusion here: excellent preprocessing, and do keep those masks in general.

Regarding the CSF response voxels being selected around the brainstem rather than in the ventricles: this is not always expected, and indeed the CSF voxels often at least partially come from the ventricles. But, to reassure you: of all 3 tissues, selecting the CSF voxels is probably the safest and most robust process of them all. Apart from some clean up and extra trickery in the algorithm at the start, eventually these CSF voxels mostly originate from the areas with the largest “signal decay metric (SDM)”, which you can think of as basically equivalent to the ADC for single-shell data. There is very little margin for outliers or other weirdness in this sense. That these voxels were selected down there in (genuine) CSF around the brainstem would indicate the ADC is (probably ever so very very slightly) higher there compared to the ventricles, so these voxels just happen to “win”. What could contribute to this happening is indeed the size of the ventricles: people often think of these as large gaps with CSF, but they’re actually often quite “thin” in 3D, relative to the voxel size, so partial voluming easily happens. There’s also other “stuff” in there that people often overlook, such as the choroid plexus, etc… Often people get CSF voxels also from other CSF spaces for this reason, and only a few in the ventricles.

Additionally, and this is again not a problem, your “larger” masks actually come into play here: the dwi2response dhollander algorithm starts by default by eroding your brain mask a bit, so as to get rid of potentially weird voxels outside of the brain/CSF. This was once more important in an earlier version of the algorithm, for robustness, but the current version of the algorithm is most of the time perfectly robust even beyond the brain, as long as things don’t get too crazy. But so, in your case: since the mask is a bit larger, the erosion reaches a bit less deep, hence you’re still retaining some voxels around that brainstem that most other people would inherently not have retained. But again, this is not a problem: this is genuine CSF, and according to the algorithm it’s even a tiny bit better than that in the ventricles. Furthermore, looking at your actual 3-tissue maps after SS3T-CSD: the fit looks excellent, so nothing went wrong with these CSF responses on a drastic level or anything. If it was me, I would use them just the way they are. All good!

The small WM FODs appearing in the CSF do indeed relate to the above: since the CSF response function is now “a bit better” in the sense that it has slightly larger SDM (or ADC), it actually very “comfortably” helps the fit in the ventricles, i.e. the fit is not “cut off” or “capped” at the SDM/ADC of the CSF response. This leaves a tiny margin for something else to help the fit further, which would be the tiny bit of WM FOD you see there. But, I have to emphasise the “tiny” here: these tiny FODs are orders of magnitude smaller than the FODs in the actual WM. You've got nothing to worry about here, as the FBA pipeline will easily deal with this when you apply thresholds in fod2fixel (both on template and subjects), and also in tckgen.

Other than that, on the side, I can see your overall SS3T-CSD fit is absolutely excellent, especially given b=1000 data. That GM is nicely filtered out. Your template (and subject-to-template registration) screenshot shows a very neat WM contrast, and you can tell how the removal of GM signal has successfully supported your registration results.

The mtnormalise mask question is then that one spot I referred to above. I reckon you'll still be fine to be honest: I tested this myself when some upset arose over it in the past, but that upset was driven by masks that were several voxels wider than the subject brain. However, if you want to avoid any and all risks, it also won't hurt to erode the masks (only for this step) by one or two passes. After all, your eventual intersection area, and even more so your fixel analysis mask, will be smaller than this regardless. So conclusion here: if you want to play it safe and remove any and all worry, just erode by one or two passes to generate a slightly smaller mask specifically for this step only.
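Something along these lines would do it (placeholder filenames):

```bash
# Erode the upsampled brain mask by two passes; use this eroded version only for
# the mtnormalise step, and keep the original mask everywhere else in the pipeline.
maskfilter sub-01_mask_upsampled.mif erode -npass 2 sub-01_mask_eroded_for_mtnormalise.mif
```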

0.13 sounds potentially too high to me. On the other hand, 0.06 is almost always too low. I think this has still not been updated in the online documentation? In any case, in the last several years, I have been involved with many FBA studies, and we have almost always increased this to e.g. 0.08. By the way, similar for the threshold in the earlier fod2fixel step. What this kind of dense streamline visualisation doesn't reveal is where streamline bits will start to be removed at such high thresholds, e.g. in crossing areas. This could severely undermine your FBA's potential to find (parts of) results. Based on what I can see in your screenshot, I would suggest going for 0.08 or 0.09, for both the fod2fixel and tckgen steps. Looking at the output of fod2fixel on the template with several thresholds is probably the best and most direct way of assessing this. This is a bit of an art though; it certainly helps to have seen it in more than one study.
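A minimal sketch of that comparison, with placeholder names:

```bash
# Segment the template FODs into fixels at a few candidate peak thresholds, then
# compare the resulting fixel masks (especially in crossing-fibre regions) in mrview.
for thr in 0.06 0.08 0.10; do
    fod2fixel template/wmfod_template.mif template/fixel_mask_${thr} \
        -mask template/template_mask.mif -fmls_peak_value ${thr}
done
```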

No worries about my time, and I regret you couldn’t get an answer until now.

All the best,
Thijs (Friday, 20-May-22 10:13:36 UTC)

Thank you so much for your detailed reply! I figured it might take a while for a response since it was quite a wall of text and questions XD

All of this was very helpful and reassuring. I may reach out at some point but for now am going to keep trundling along. Thanks again!
