Neonatal multi-shell data: optimal preprocessing

Hello everybody,

I want to process some multi-shell neonatal data, following the process that @maxpietsch (and @jdtournier) presented at the last ISMRM (Multi-shell neonatal brain HARDI template). I found a similar post here, but since it had been inactive for a long time, I decided to create this one.

I have neonatal multi-shell data: b=0 volumes + b=750 (64 dir) + b=2500 (64 dir). Both shells were acquired in the same session but not in the same acquisition, so I decided to do the following:

For each shell independently: denoising, Gibbs ringing removal, distortion correction with outlier replacement, and bias field correction.

After this, I rigidly register the lower shell to the average of the b=0 volumes of the second shell, rotating the bvecs accordingly, and finally merge both shells.
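For reference, a minimal sketch of this step using FSL tools (all filenames are placeholders, and I assume the mean b=0 image of each acquisition has already been computed):

```
# Rigid (6 dof) registration of the mean b=0 of the b=750 shell
# to the mean b=0 of the b=2500 shell:
flirt -in mean_b0_750 -ref mean_b0_2500 -dof 6 -omat rig.mat

# Apply the same transform to every volume of the b=750 series:
applyxfm4D dwi_750 mean_b0_2500 dwi_750_reg rig.mat -singlematrix

# Rotate the bvecs with the same matrix, then merge the shells:
fdt_rotate_bvecs bvecs_750 bvecs_750_rot rig.mat
fslmerge -t dwi_merged dwi_750_reg dwi_2500
```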

I decided to do that because the shells were not acquired together. So my first question: is this correct, or should I merge the shells from the beginning and perform the preprocessing together?

After this, to calculate the WM response function I used the Tournier algorithm on the b=2500 shell. For the CSF response function (because I still don't have an accurate tissue segmentation for this cohort), I decided to proceed as follows:

I calculated NODDI maps using AMICO. The free-water map was then thresholded at 0.8 and binarised. I then divided the average of the b=2500 shell by the average of the b=0 volumes, multiplied this signal-attenuation image by the CSF mask, and kept the 100 voxels with the highest values. The mask resulting from these voxels is then used to calculate the CSF response function on the b=2500 shell using the Tournier algorithm.
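In MRtrix3 terms, the voxel selection could look something like the sketch below (filenames are placeholders; fiso.nii.gz stands for the AMICO free-water map, and I use dwi2response manual for the last step, since the voxel mask is already fixed):

```
mrthreshold fiso.nii.gz fw_mask.mif -abs 0.8             # binarise free water > 0.8
dwiextract dwi.mif -shells 2500 - | mrmath - mean mean_b2500.mif -axis 3
dwiextract dwi.mif -bzero - | mrmath - mean mean_b0.mif -axis 3
mrcalc mean_b2500.mif mean_b0.mif -divide atten.mif      # signal attenuation
mrcalc atten.mif fw_mask.mif -mult atten_masked.mif      # restrict to the CSF mask
mrthreshold atten_masked.mif csf_voxels.mif -top 100     # 100 voxels with highest values
dwi2response manual dwi.mif csf_voxels.mif csf_response.txt
```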

The questions are:
-Does this sound reasonable to you?
-The WM response function looks as expected, but I don't know what to expect from the CSF function (probably a sphere). What should it look like? I was curious to see, for example, what the CSF response function of the mentioned abstract looks like.

Note that for now I didn't perform intensity normalisation across subjects, because at the moment I'm interested only in the tractography. But if at some point I decide to add intensity normalisation and an average response function, how would this affect the tractography results?

Thanks in advance!

Regards,

Manuel

Hi,

I'm not able to answer all the questions, but I can comment on this one:

Since you have multi-shell data, you can probably simplify your workflow by using the dhollander algorithm of the dwi2response command. This can provide you with white matter and CSF response functions. Unless this different approach is what the ISMRM paper you refer to is all about (I can't access the paper).
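For example (a sketch; filenames are placeholders, and the -voxels output lets you inspect the selected voxels in mrview):

```
dwi2response dhollander dwi.mif wm_response.txt gm_response.txt csf_response.txt \
    -voxels selected_voxels.mif
```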

Hi @tbilliet,

Thanks for your response. I'm afraid that's one of the key points of the abstract: with the dhollander algorithm, the response functions for the GM and the WM are very similar. Maybe a way to proceed is to calculate the WM response function with the tournier algorithm and take the CSF response function from the dhollander algorithm…

Regards,

Manuel

I have no experience with images split across multiple acquisitions. I’d say it depends on whether you have different bias fields and intensity scalings between acquisitions. I would try to process the data together at least for denoising and distortion correction - but take that with a grain of salt…

The CSF response (b=0,400,1000,2600) used was:

3790.29217
982.15014
226.96266
111.62807

and is quite similar to that obtained via the dhollander method:

4143.95509
1024.27522
298.17341
178.62485

Yes, that is fine. I would definitely check the single fibre voxel masks and edit them if necessary to exclude WM voxels found for instance in cortical GM.

I'd use mtnormalise to normalise the images before running tckgen. In neonates, the FOD amplitudes can become very small, for instance in the periventricular crossroads, and depending on the settings you use for tractography, tracking might stop due to too-small FOD amplitudes (-cutoff). For some older two-shell data, we found that a two-tissue decomposition required lowering the default value in order to track through that region.
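For instance (a sketch with placeholder filenames; check the default -cutoff of your MRtrix3 version before lowering it):

```
# Joint bias field correction and intensity normalisation of the FODs:
mtnormalise wmfod.mif wmfod_norm.mif csffod.mif csffod_norm.mif -mask mask.mif

# Tracking with a lowered FOD amplitude threshold, e.g. to keep tracking
# through low-amplitude regions such as the periventricular crossroads:
tckgen wmfod_norm.mif tracks.tck -seed_image mask.mif -cutoff 0.05
```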

Cheers,
Max

Hi @maxpietsch,

Thanks for the explanation, very helpful.

I ran all the preprocessing separately, basically to avoid this kind of issue with the bias field, and also because of dwipreproc: for each shell I used the first b=0 as the first image for correction, concatenated with the reverse phase-encoding b=0 (-rpe_pair -se_epi).
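Per shell, that step looked something like this (a sketch; filenames and the phase-encoding direction are placeholders, and dwipreproc has since been renamed dwifslpreproc):

```
# First b=0 of the shell plus the first good reverse-PE b=0:
mrcat b0_AP.mif b0_PA.mif b0_pair.mif -axis 3

# Susceptibility, motion and eddy-current correction with outlier replacement:
dwipreproc dwi_750.mif dwi_750_corr.mif -rpe_pair -se_epi b0_pair.mif \
    -pe_dir AP -eddy_options " --repol"
```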

After using the dhollander algorithm for the response functions, I obtained:
WM response (b=0,750,2500)
57448.81501264527 0 0 0 0 0
27413.14081638135 -9201.178780653903 1462.479810732295 -181.6236984108175 -6.663587375706664 10.7379770589272
8270.737517574487 -5718.126146376569 2679.804859460965 -794.3296892045378 174.1362550047903 -49.87019714212248

CSF response (b=0,750,2500)

79468.75145167559
3980.660825891341
1233.513501372268


Both of them look fine; here are the selected voxels:

I'm not using the GM response function. The WM single-fibre voxels and the CSF voxels look fine to me.

The final FOD image:

It also seems fine. However, I did not observe any difference before and after applying mtnormalise. Maybe it makes sense, if I rely on a good tissue segmentation for the tractography, to drop the -cutoff threshold altogether and use 0.

Regards,

Manuel


Hi all,

As @maxpietsch said before, it depends on whether the bias fields differ between acquisitions.

I ran into this problem: correcting for the bias field separately leads to a large intensity difference between the b=0 volumes of the two shells. Do you suggest correcting with a bias field estimated from both shells together? Thanks in advance.

Regards,

Manuel

Hi Manuel,
I am also processing some multi-shell neonatal data, and your results look good. I want to know which command you use to rotate your bvec file, and the pipeline you created to get the FOD file for the neonatal data. Thank you so much!
Cheers,
liang

Hi @liang,

My framework has changed quite a bit; I was using fdt_rotate_bvecs from FSL. Now I don't use any special command to rotate the bvecs: eddy does everything internally.

I have three acquisitions in the same session: one shell at b=750, one at b=2500, and one with 3 reverse-encoded b=0 volumes. Briefly speaking, what I do is:

  • Inspect the data, and if the first b=0 of the 750 shell is affected by any artifact, replace it with the next b=0 that is free of artifacts. This is very important for this data, because babies move a lot, and if you feed corrupted images to topup and eddy, they will probably crash.

  • Concatenate the 750 shell and the 2500 shell (also the bvecs and bvals).

  • Concatenate the first b=0 of the 750 shell with the first good b=0 of the reverse-encoded acquisition.

  • Denoise the concatenated data (dwidenoise).

  • Run topup to estimate the field.

  • Run applytopup on the denoised data.

  • Create a mask from the average of the b=0 volumes of the corrected data (bet).

  • Run eddy on the denoised data (with the --repol and --mporder flags). This command also rotates all the bvecs.

  • Finally, run dwibiascorrect to correct for bias field inhomogeneities.

That's pretty much everything from the diffusion side. Note that you can add mrdegibbs to the pipeline; I didn't, just to be extra safe, because my data were acquired with partial Fourier, but there are some posts suggesting you can run it on partial Fourier acquisitions as well. Also, several of these steps could be done with the MRtrix scripts, but I like to work with the steps separated.
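A condensed sketch of these steps (FSL and MRtrix3; all filenames, as well as the acqparams.txt, index.txt and slice-timing files that topup/eddy require, are placeholders that depend on your acquisition):

```
mrcat dwi_750.mif dwi_2500.mif dwi_all.mif -axis 3          # concatenate the shells
dwidenoise dwi_all.mif dwi_den.mif                          # MP-PCA denoising
mrconvert dwi_den.mif dwi_den.nii.gz -export_grad_fsl bvecs bvals

fslmerge -t b0_pair b0_AP b0_PA                             # first b=0 + good reverse-PE b=0
topup --imain=b0_pair --datain=acqparams.txt --config=b02b0.cnf --out=topup_out
applytopup --imain=dwi_den --datain=acqparams.txt --inindex=1 \
    --topup=topup_out --method=jac --out=dwi_topup

# Mask from the average b=0 of the corrected data:
dwiextract dwi_topup.nii.gz -bzero - -fslgrad bvecs bvals \
    | mrmath - mean mean_b0.nii.gz -axis 3
bet mean_b0 mean_b0_brain -m

# eddy rotates the bvecs internally; --mporder (slice-to-volume motion
# correction) needs the CUDA version of eddy plus slice timings (--slspec):
eddy --imain=dwi_den.nii.gz --mask=mean_b0_brain_mask --acqp=acqparams.txt \
    --index=index.txt --bvecs=bvecs --bvals=bvals --topup=topup_out \
    --repol --mporder=8 --slspec=slspec.txt --out=dwi_eddy

dwibiascorrect -ants dwi_eddy.nii.gz dwi_unbiased.mif \
    -fslgrad dwi_eddy.eddy_rotated_bvecs bvals              # bias field correction
```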

And then dwi2response with the dhollander algorithm and -fa 0.1, dwi2fod using only the average WM and CSF response functions (note there is an improved version of the algorithm, also able to correctly detect 3-tissue response functions in neonates), and mtnormalise.
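In command form, something like this sketch (filenames are placeholders; mask.mif is a brain mask):

```
dwi2response dhollander dwi_unbiased.mif wm.txt gm.txt csf.txt -fa 0.1 -voxels sel.mif
dwi2fod msmt_csd dwi_unbiased.mif wm.txt wmfod.mif csf.txt csffod.mif -mask mask.mif
mtnormalise wmfod.mif wmfod_norm.mif csffod.mif csffod_norm.mif -mask mask.mif
```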

Then a combination of the DrawEM algorithm and the M-CRIB atlas for the parcellation and for obtaining the 5TT file necessary for ACT.
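As a rough illustration, the 5TT assembly from tissue probability maps could look like this (hypothetical sketch; the actual scripts do more, and the map names are placeholders for the DrawEM outputs):

```
# 5TT volumes: cortical GM, subcortical GM, WM, CSF, pathology (empty here):
mrcalc wm.nii.gz 0 -mult zero.mif
mrcat cgm.nii.gz sgm.nii.gz wm.nii.gz csf.nii.gz zero.mif 5tt.mif -axis 3
5ttcheck 5tt.mif    # verify the result is a valid 5TT image
```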

Here you can see some of the results in case you are interested:


I hope this helps,

Best regards,

Manuel


Hi Manuel,
It really helps me a lot, thank you so much!
Your results look beautiful, especially the segmentation figure. Could you please tell me the details of the parcellation? Which command did you use in the DrawEM software? Did you make any manual corrections after the automatic segmentation of DrawEM? Thank you so much again!
Cheers,
liang

Hi,

For the parcellation I use the M-CRIB atlas; you can find it here (and the paper here). I use the 10 manually labelled subjects; you can ask for them through the GitHub. Then I use a multi-atlas fusion approach (in my case antsJointFusion, but there are a lot of alternatives available). This parcellation is the Desikan-Killiany atlas adapted for neonates.
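The fusion step could look something like this (a sketch, assuming the 10 atlases and their label maps have already been registered to the target; filenames are placeholders):

```
antsJointFusion -d 3 -t target_T2w.nii.gz \
    -g atlas01_warped.nii.gz -l atlas01_labels_warped.nii.gz \
    -g atlas02_warped.nii.gz -l atlas02_labels_warped.nii.gz \
    -o parcellation.nii.gz    # ...one -g/-l pair per atlas
```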

For DrawEM, I use the dHCP pipeline. I run it on my own computer because I had to edit the code a little (basically commenting out all the parts for the surface reconstruction). I use the segmentations (the tissue probability maps, the -additional flag). I didn't do any manual editing; I just use the results as they are.

To generate the 5TT I use the output of the two processes. I have a series of scripts to do the whole process, but as they stand they are not user friendly at all. I would like to share them at some point, but I don't know the best way to do it. If you want, I could share them privately with you and guide you through the steps.

Best regards,

Manuel


Hi Manuel,
Thank you for your kind help. I would appreciate it if you would instruct me on the segmentation of the neonates. My email is 18865385126@163.com; could you please send the segmentation pipeline to this email address? Thank you so much!
Cheers,
liang

Hi Manuel,

Thanks for documenting your pipeline so well: that’s a solid overview and reference. Very nice results you’re showing in those screenshots too! :+1:

…also thanks for documenting this bit well: the -fa parameter for the algorithm doesn't come up very often. For many scenarios the default works well, but for those that deviate from the norm, it's good to have these things documented, and definitely worth playing around with a bit. The default is 0.2. I haven't come across solid examples of where it should ever be increased, but for a few very particular scenarios and combinations of data quality, etc., there are certainly good reasons to decrease it a bit, e.g. towards 0.1: yours is a scenario where this is indeed not unusual.

Finally, thanks for bringing up those links too. :slight_smile: Some people have asked me for the slides / talk over the past few weeks. I've made the ISMRM talk available over here today, for those interested.

Cheers,
Thijs
