Can I check if the multi-tissue CSD pools the b=0 volumes together to estimate the b=0 signal?
I am told that in some scanners the b=0 is different for different shells and to correct for this issue, one needs to first compute the normalised signals (e.g. b=700 signal divided by b=0 signal) for each shell separately.
Is this issue likely to affect connectome construction and if so how?
At the moment, most default commands, manuals and pipelines we offer don’t actively do anything about the problem you describe: if you concatenate all acquired data, any b=0 images in there are assumed not to have any such global intensity differences between them. Maybe more importantly: the same goes for the diffusion-weighted images. I mention this because, if you see such differences between the b=0 images of different acquisitions, the diffusion-weighted images will likely (if not surely) be similarly affected. This will matter (potentially a lot) if you’re trying to compute e.g. ADC values, or anything else that involves all your different b-values and the assumptions about contrast and intensity that come with them.
Here we maybe also need to avoid misunderstanding each other by clarifying terminology: your statement is phrased a bit strangely (although I understand what you’re referring to). It’s probably better to say: the b=0 is different for different acquisitions, you happen to use a different acquisition for each of your shells, and each of these acquisitions happens to come with its own b=0 images.
The solution for merging these acquisitions properly would indeed include some form of intensity normalisation. In fact, the b=0 images aren’t the problem here; they are probably key to the solution, as they are the only contrast that matches across your acquisitions (whereas the diffusion-weighted images are all different per shell, and hence per acquisition in your scenario). You should look to correct each acquisition by a globally constant factor, chosen such that the b=0 images across acquisitions end up matching (as well as possible) in overall intensity. Does that make sense? I can think of several different approaches to do this, but not all (if any) can currently be performed very easily (read: with a single command that does the job) in MRtrix3. The simplest solution (from a practical point of view, and with the tools on offer) is maybe to compute the average intensity across a brain mask for all b=0 images in each acquisition, and then compute the factor such that these average intensities match: dwiextract can extract all b=0 images for you, mrmath can average the b=0 images in each acquisition, and mrstats can compute the average intensity within such an average b=0 image for each acquisition. A bit convoluted, but it would work.
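For concreteness, the factor computation behind that recipe is trivial arithmetic; a Python sketch with invented per-series mean intensities (in practice these would come from the mrstats output) could look like:

```python
# Hypothetical mean b=0 intensity within a brain mask for each series,
# as reported by e.g. `mrstats ... -output mean` (numbers are invented):
mean_b0 = {"b700_series": 980.0, "b1000_series": 1240.0, "b2800_series": 1015.0}

# Pick one series as the reference; its factor is 1 by construction.
reference = "b700_series"

# A globally constant factor per series that brings its mean b=0
# intensity in line with the reference:
factors = {name: mean_b0[reference] / m for name, m in mean_b0.items()}

# Each factor would then be applied to the *whole* series (b=0 and
# DWI volumes alike), e.g. with `mrcalc series.mif factor -mult out.mif`.
for name, f in factors.items():
    print(f"{name}: multiply by {f:.4f}")
```

Note that the factor is applied to every volume of the series, not just the b=0 images, so the relative contrast within each acquisition is untouched.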
@bjeurissen, @jdtournier: with MSMT-CSD in mind, you must’ve either had this yourself already, or users in your environment stumbling upon this and asking you for help…? What do you typically tell them / offer them? Maybe we should offer a simple tool that provides a rough strategy, such as the above or otherwise, to perform this step? What do you reckon?
For my pipeline I use dynamic seeding and SIFT2 to create the connectome, so the edges would be SIFT2-weighted streamline counts. I avoid using any diffusion metrics such as FA/MD/ADC. Do you think the problem mentioned would be an issue for the connectomes produced?
This is a bit tricky to answer, I’d have to say… I suppose you’re using MSMT-CSD, correct? It could be that this ends up not suffering so much due to this problem if your response functions come from the dataset itself. However, if you’re using average response functions (which you probably should, if you’re going to compare connectomes between subjects), and this problem affects different subjects differently (and it probably will, up to a limited extent), it’s all wrong again.
Long story short, even though it may accidentally end up being ok-ish, it’s in reality really not ok. If your data has been acquired like this (separate shells as separate acquisitions), the only correct way to proceed is to perform some kind of global intensity normalisation along the lines of what I described above…
A safer way to deal with this, is to acquire all shells in a single acquisition (whether this is possible will depend on the vendor and the options you have available from them), so everything comes with the same intensity scaling. But at this stage, you may have to work with the data you have (and you’re not the only one having acquired their data like this; it seems to be quite common even)… so dealing with intensity normalisation it is, I’m afraid…
I would start by looking at that point a little more closely. I can think of two mechanisms that could have led to this information being relayed to you:
1. The echo time may be altered for the acquisition of different b-values.
2. The fact that each b-value is acquired as a separate series means that the calibration frequency / receiver gain may have changed between them.
If the first statement is true, then the b=0 volumes for different shells contain different contrast, and hence cannot be logically combined into a single b=0 “shell”.
If the second statement is true, then there may be a global scaling factor in the image intensities between series, for which Thijs’ response applies.
… one needs to first compute the normalised signals (e.g. b=700 signal divided by b=0 signal) for each shell separately.
Such a process would actually invalidate some of the best attributes of the spherical deconvolution model, as different contributions to the signal would no longer be additive. I can’t think of a publication off the top of my head that has used this approach, but it’s a method that falls under the umbrella of “intensity normalisation” that we are not in favour of for this reason.
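To see why per-shell b=0 normalisation breaks that additivity, consider a toy two-compartment voxel (all numbers invented for illustration):

```python
# Two-compartment voxel: white matter (WM) and CSF, with different
# b=0 amplitudes and different decay rates (numbers are illustrative).
wm_b0, wm_b700 = 600.0, 300.0    # WM contribution at b=0 and b=700
csf_b0, csf_b700 = 400.0, 40.0   # CSF contribution (fast decay)

# The measured voxel signal is the *sum* of the contributions --
# the additivity that spherical deconvolution relies on:
total_b0 = wm_b0 + csf_b0
total_b700 = wm_b700 + csf_b700

# Normalising the measured signal by the measured b=0 ...
normalised_total = total_b700 / total_b0

# ... does NOT equal the sum of the individually normalised
# compartment signals, because each compartment has its own b=0:
sum_of_normalised = wm_b700 / wm_b0 + csf_b700 / csf_b0

print(normalised_total)    # 0.34
print(sum_of_normalised)   # 0.6
```

The normalised measurement is no longer a linear combination of per-tissue kernels, which is exactly the structure the deconvolution model needs.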
Hi everyone,
I am getting very anxious reading this thread, because we always acquire the different shells as separate series (no good reason for it; it is just an old habit).
We do not do anything to correct for it. So the question is: how can we be sure whether this is necessary or not?
I often look at the b=0 volumes concatenated into a 4D NIfTI, and I do not see obvious differences in signal intensity between the b=0 volumes of different series. So big differences in receiver gain should be visually obvious, but what about small differences?
I am working on Siemens, and I am not sure that the receiver gain changes from series to series (if the FOV and sequence type are identical). I will ask our Siemens engineer. Does anyone know which DICOM parameter to look at for a retrospective check on the data?
The frequency is adjusted (to follow small B0 variations), but this will not influence signal intensity, will it?
Sorry I’ve not responded to this thread so far, I’ve had other issues to deal with.
First off, this is an issue we’ve faced fairly routinely in my lab of late (mostly when dealing with historical data), and it’s a trivial matter to scale each series so as to match the b=0 signal across separate series: extract the b=0 volumes of each series (dwiextract -bzero), compute the median intensity within a brain mask (mrstats -output median), and multiply each series by the appropriate ratio of medians (mrcalc -mult); this is from memory though, so it might need amending.
Sometimes that’s all you can do… And as mentioned above, it’s OK as long as the echo times match, but maybe not so cool if they don’t (although I have to admit, we’ve had to do that for some of our data too… ).
Sure, look for RescaleSlope and RescaleIntercept in the dcminfo -a dicom_file output. Otherwise, just look at the output of mrinfo DICOM_folder/, the ‘Intensity scaling’ entry is lifted directly out of the DICOM headers.
Sure, that should catch any differences. But you’ll probably want to double check that quantitatively just to make sure…
But on a different note: receiver gain is not going to be an issue on Siemens, since there are only two fixed receiver gain settings nowadays (low or high); in DWI, you’ll probably be using high gain all the time.
What you do need to worry about is all the rescaling that takes place during image reconstruction, and then again to fit the signal into the 12-bit integers typically stored in DICOM. These are much harder to control, and don’t necessarily translate into the DICOM rescaling parameters: since the signal is arbitrarily scaled anyway, there’s no particular reason for the scanner to record the rescaling parameters.
So I don’t think the DICOM rescale parameters will necessarily give you the answers you want…
So if I follow you, I cannot get the exact scaling from the DICOM parameters. Too bad. How did you notice in your old datasets that you had to do the scaling?
Well, comparing signal intensity between sessions is a good reason. I do not get why those scaling factors are not in the DICOM. And this is not only for DWI: many relaxometry or MT acquisitions rely on intensity comparisons between series (and in those cases you cannot do the rescaling)… so it is not a minor issue that we cannot check it…
I checked a few DICOMs, and every time I get:
Intensity scaling: offset = 0, multiplier = 1
I would have thought of the reference voltage, but I do not see it in the DICOM.
Basically, plot the values obtained from something like:
mrstats dwi.mif -mask brain_mask.mif -output mean
But also, the scaling parameters did differ in the mrinfo output…
Basically because there is no clinical drive for it… The scaling parameters are there to store values that are quantitative (e.g. FA values, ADC values, T2 / T1 maps, etc), but the expectation is that these have been computed on the scanner, and will invariably have been derived from a single series, using the manufacturer-supplied sequence.
But in general, there are many factors that influence the signal: coil loading, coil tuning, frequency drift, transmit power calibration, the receive coils used, as well as all the different steps in the image recon. On a Siemens scanner, these differences can be avoided (as far as I know) by making sure that the scanner adjustments are performed for the first scan only, and turned off for subsequent scans - this ensures the same transmit power, etc, is used. But otherwise, yes you do need to worry about this.
I don’t think there’s a standard DICOM tag for that. On Siemens, you might be able to find this info in the CSA fields. Try dcminfo -a -csa dcm_file, see if you can see anything interesting in there…
One of the enhancements included as part of version 3.0_RC3 is the capability of the mrhistmatch command to perform linear as opposed to non-linear histogram matching. One context where this may be appropriate is when there may be an unknown scaling factor between different DWI series, evidenced by differences in the overall intensity of b=0 images that would ideally be identical across those series. mrhistmatch scale could be used to estimate such a scaling factor, rather than using the median value as in @jdtournier’s suggestion, as this would take all image intensities within the mask into account.
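One simple way a single multiplicative factor can be estimated from all masked intensities (rather than from one summary statistic) is a least-squares fit through the origin; whether mrhistmatch scale does exactly this internally is an implementation detail, and the numbers below are invented:

```python
# Invented masked voxel intensities from two b=0 images that should
# ideally be identical up to a single multiplicative factor:
x = [820.0, 1010.0, 960.0, 1105.0, 890.0]   # series to be rescaled
y = [902.0, 1111.0, 1056.0, 1215.5, 979.0]  # reference series

# Least-squares estimate of k in y ~ k * x, using every voxel
# rather than just the median:
k = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
print(round(k, 3))   # 1.1
```

Because every masked intensity contributes, such an estimate is less sensitive to where the median happens to fall in each series’ histogram.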
Indeed one could even write a Python script to automate the process of rescaling and concatenating multiple DWI series if one were so inclined…
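Such a script might simply assemble and run the relevant MRtrix3 commands; a dry-run sketch (hypothetical file names, and it only prints the commands rather than executing them, e.g. via subprocess.run) could look like:

```python
# Dry-run sketch: build the MRtrix3 command lines for rescaling and
# concatenating multiple single-shell DWI series. File names are
# hypothetical, and FACTOR stands for the ratio of mean b=0
# intensities computed from the mrstats outputs.
series = ["shell700.mif", "shell2000.mif", "shell2800.mif"]
mask = "brain_mask.mif"

commands = []
for s in series:
    stem = s.rsplit(".", 1)[0]
    # 1. extract the b=0 volumes of this series and average them:
    commands.append(f"dwiextract -bzero {s} - | mrmath - mean {stem}_b0.mif -axis 3")
    # 2. mean b=0 intensity within the brain mask:
    commands.append(f"mrstats {stem}_b0.mif -mask {mask} -output mean")
# 3. scale each non-reference series by reference_mean / series_mean,
#    then concatenate everything along the volume axis:
commands.append("mrcalc shell2000.mif FACTOR -mult shell2000_scaled.mif")
commands.append("mrcat shell700.mif shell2000_scaled.mif shell2800_scaled.mif dwi_all.mif -axis 3")

print("\n".join(commands))
```

A real script would parse the mrstats output to fill in the actual factors before running the mrcalc and mrcat steps.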