Mrconvert on Philips data: floating point or display value?

Hi all,

I was wondering whether the mrconvert command, when converting Philips DICOM data to mif (or NIfTI) format, outputs the display value (DV) or the floating point value (FV). When a diffusion acquisition consists of multiple sequences (e.g. one containing b=700 and b=1000, another containing b=2800), the scaling can differ between them. If both are converted to DV rather than FV, this may not be ideal for quantitative use. Does mrconvert deal with this issue?



No, it relies purely on the standard DICOM Rescale parameters, which I assume must be what you mean by the display value. I did have a go at converting to the floating-point value at one point, for precisely the reason you mention, but the results weren’t any good. The scaling was different, but still clearly not equivalent across acquisitions; it wasn’t even closer than using the DV, just different. I’m not sure what to make of that: I was led to believe this would make the values comparable, but that’s certainly not been my experience. It might simply be that the RF power amplifier was calibrated differently, giving slightly different excitations, but it seemed to be more than that. Maybe the equation I was given wasn’t quite right, but that seems unlikely. So I’m not too sure what the problem was… In the end, I resorted to scaling based on the average intensity in the b=0 images…
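For reference, the b=0-based scaling I fell back on can be sketched roughly like this (a minimal numpy sketch on toy data; the function name and the choice of which acquisition serves as the reference are just illustrative, and in practice you’d load the image data with your tool of choice first):

```python
import numpy as np

def scale_to_reference(dwi, bvals, ref_b0_mean):
    """Scale a DWI series so its mean b=0 intensity matches a reference.

    dwi:         4-D array (x, y, z, volumes)
    bvals:       1-D array of b-values, one per volume
    ref_b0_mean: mean b=0 intensity of the reference acquisition
    """
    b0_mean = dwi[..., bvals == 0].mean()
    return dwi * (ref_b0_mean / b0_mean)

# Toy example: two "acquisitions" of the same object, stored with
# different scanner scaling (a factor of 2 between them).
rng = np.random.default_rng(0)
signal = rng.uniform(100, 200, size=(4, 4, 4, 6))
bvals = np.array([0, 0, 700, 700, 1000, 1000])

acq1 = signal        # first acquisition, taken as the reference
acq2 = signal * 2.0  # second acquisition, different scaling

ref = acq1[..., bvals == 0].mean()
acq2_scaled = scale_to_reference(acq2, bvals, ref)

# After scaling, the mean b=0 intensities of the two series agree
print(np.isclose(acq2_scaled[..., bvals == 0].mean(), ref))
```

It’s crude, but since the b=0 volumes are acquired in every sequence, it at least puts the acquisitions on a common intensity scale without relying on the header values.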

If you want to look into it, you can get these values out of the headers using dcminfo -a. The standard DICOM tags are called RescaleSlope and RescaleIntercept (I think); I can’t remember what the Philips ones were called (they’re non-standard, so MRtrix would label them as Philips_XYZ). If you convert to .mih format, you’ll see that the standard DICOM rescale parameters get written into the scaling entry; you can just replace them with the right values using a text editor. If you get it to work, I’d like to hear about it…
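For what it’s worth, the formula usually quoted for Philips data (e.g. in the nibabel documentation) is FP = DV / (RescaleSlope × ScaleSlope), where ScaleSlope is the private Philips tag (2005,100E) and DV is the usual PV × RescaleSlope + RescaleIntercept. A minimal sketch, assuming you’ve already pulled the values out of the header (the numbers below are just placeholders, not values from any real scan):

```python
import numpy as np

# Values read from the DICOM header (e.g. via dcminfo -a);
# these particular numbers are placeholders.
rescale_slope = 1.52      # (0028,1053) RescaleSlope
rescale_intercept = 0.0   # (0028,1052) RescaleIntercept
scale_slope = 0.0043      # (2005,100E) Philips private ScaleSlope

def to_display_value(pv):
    """Stored pixel value -> display value (the standard DICOM rescale)."""
    return pv * rescale_slope + rescale_intercept

def to_floating_point(pv):
    """Stored pixel value -> Philips floating-point value, using the
    commonly quoted formula FP = DV / (RescaleSlope * ScaleSlope)."""
    return to_display_value(pv) / (rescale_slope * scale_slope)

pv = np.array([0.0, 100.0, 1000.0])
print(to_display_value(pv))
print(to_floating_point(pv))
```

If that formula is right, you could compute the effective slope and intercept per acquisition and paste them into the .mih scaling entry as described above; given my results, though, I’d sanity-check against the b=0 intensities before trusting it.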