My question is not directly related to MRtrix, but there’s such a wealth of DWI knowledge on here that I thought someone might be able to help out.
I am working with a dataset acquired on a Philips Intera scanner. All scans were performed using the same DWI protocol. However, when I look into the PAR files of the scans, it seems like every subject’s repetition time (TR) is slightly different. I’ve never come across this before. Is this normal? What’s causing the slight change in TR with every scan?
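For what it's worth, a quick way to audit this across a whole dataset is to scrape the TR out of each PAR header and compare. A minimal Python sketch; the exact header line format is an assumption based on the common Philips PAR V4.x layout, so check it against your own files:

```python
import re
from pathlib import Path

def read_tr(par_path):
    """Return the repetition time (ms) from a Philips PAR header, or None.

    Assumes a general-information line of the form
    ".    Repetition time [ms]           :   9000.000"
    (older exports may write "[msec]" instead of "[ms]").
    """
    for line in Path(par_path).read_text(errors="ignore").splitlines():
        m = re.match(r"\.\s+Repetition time \[ms(?:ec)?\]\s*:\s*([\d.]+)", line)
        if m:
            return float(m.group(1))
    return None

# List the TR found in every PAR file in the current directory:
for par in sorted(Path(".").glob("*.PAR")):
    print(par.name, read_tr(par))
```

If the printed TRs differ between subjects, you have the same situation as described above.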
Someone more intimately familiar with Philips scanners may be able to comment more confidently than me, but if I recall correctly, one of the features of Philips scanners is that the interface tries quite hard to optimise the scan parameters on the fly based on what it ‘thinks’ is best. In this instance, I expect it will try to keep TE and TR to a minimum. Even with nominally identical settings, a change in slice angulation can therefore lead to minor differences in TE, and hence TR: the system can drive multiple gradient axes simultaneously to achieve stronger and faster imaging gradients than a single amplifier could alone (e.g. tilting the imaging plane back 45° would allow √2-stronger gradients to be used for slice selection). I have seen this introduce minor variations in TE in other studies, which is not great for research – but the few radiographers I’ve discussed this with really like the automatic optimisation feature…
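To make that 45° example concrete, here's a small sketch of the maximum gradient achievable along a tilted slice-select direction when two amplifiers can be driven at full amplitude simultaneously (the 40 mT/m per-axis limit is an illustrative number, not a Philips specification):

```python
import math

def max_gradient(theta_deg, g_max=40.0):
    """Maximum gradient amplitude (mT/m) achievable along a direction
    tilted theta_deg from a principal axis, given two amplifiers each
    capped at g_max. The combined vector is limited by whichever
    physical axis saturates first."""
    theta = math.radians(theta_deg)
    return g_max / max(abs(math.cos(theta)), abs(math.sin(theta)))

print(max_gradient(0))   # 40.0 : single axis only
print(max_gradient(45))  # ~56.6: both axes at full amplitude, i.e. sqrt(2) x 40
```

So a 45° tilt lets the scanner shorten the slice-select (and diffusion) gradient pulses, which is exactly the kind of change that nudges the minimum achievable TE, and hence TR.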
Go to the contrast tab of the sequence and set TR and/or TE to a fixed value instead of “shortest” or “range”. Choose a value slightly larger than the one that produces a conflict when you rotate the slab.
This is something that I stumbled upon recently too! I have DTI data acquired with the same protocol, but when I look at the PAR files of the scans, I see different repetition times. Note however that they seem to always go hand-in-hand with a larger number of slices and/or a different FOV (field of view). My question here is more to do with how to write this up, because slight differences in the scanning protocol don’t look that ‘neat’, especially as the scans were acquired purely for research purposes.
Can we say that the FOV/TR for certain patients can be different because a few slices are added to include coverage of more brain regions? Does anyone have any ideas whether there are any effects of longer repetition times after a particular cut-off?
Public Service Announcement: make sure that you fix your TE and TR if you’re acquiring multi-shell data across multiple acquisitions and intend to fit any kind of diffusion model that invokes a parametric relationship between signal intensity and b-value. MSMT CSD is actually fine here if you’re careful, but I’ve seen it cause issues elsewhere.
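To illustrate why: under a simple mono-exponential toy model (all tissue values and TEs below are illustrative assumptions, not from any real protocol), a TE difference between shells acquired in separate runs masquerades as extra diffusion-weighting:

```python
import math

def signal(b, te, tr, s0=1.0, t1=1000.0, t2=80.0, d=0.8e-3):
    """Toy model: S = S0 * exp(-TE/T2) * (1 - exp(-TR/T1)) * exp(-b*D).
    T1/T2 in ms, b in s/mm^2, D in mm^2/s -- illustrative values."""
    return s0 * math.exp(-te / t2) * (1 - math.exp(-tr / t1)) * math.exp(-b * d)

# b=0 and b=3000 shells acquired in separate runs with different TEs:
s_b0    = signal(b=0,    te=70.0, tr=9000.0)
s_b3000 = signal(b=3000, te=85.0, tr=9000.0)  # scanner chose a longer TE

# A naive ADC fit silently absorbs the extra T2-weighting into D:
adc_biased = -math.log(s_b3000 / s_b0) / 3000
print(adc_biased)  # larger than the true 0.8e-3 mm^2/s
```

With matched TE across runs the bias term vanishes; with mismatched TE it is indistinguishable from a genuine change in diffusivity, which is why any parametric signal-vs-b model needs the TE/TR pinned down.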
Can we say that the FOV/TR for certain patients can be different because a few slices are added to include coverage of more brain regions?
Usually one would set the FoV for acceptable coverage of the largest head expected, and swallow the penalty of having some empty space around smaller heads, precisely because, when applying a quantitative interpretation to the data, having matched acquisition parameters is far more important than whether or not you have some extra empty voxels.
If by “say” you’re referring to what one would write in a manuscript, my personal advice would be to make sure it is conveyed that this was a mistake, and that you have done what was possible to mitigate the issue, e.g. omitting extreme outliers from response function estimation / including variations in acquisition parameters as nuisance variables in a GLM.
Does anyone have any ideas whether there are any effects of longer repetition times after a particular cut-off?
In simplest terms this is a matter of the relative magnitudes of T1 and TR. Once you reach roughly TR > 5 × T1, you’re unlikely to measure a difference; but with multi-band acquisitions TRs can be much shorter, and imperfect slice profiles and subject motion introduce additional effects.
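As a quick sanity check on that rule of thumb, here is the longitudinal recovery fraction 1 − exp(−TR/T1) for a few TRs, using an assumed white-matter T1 of ~1000 ms purely for illustration:

```python
import math

t1 = 1000.0  # assumed T1 in ms, illustrative only
for tr in (2000.0, 5000.0, 10000.0):
    recovery = 1 - math.exp(-tr / t1)
    print(f"TR = {tr:>7.0f} ms -> {100 * recovery:.2f}% of Mz recovered")

# At TR = 5 x T1 the recovery is ~99.3%, so any further TR variation
# changes the measured signal by well under 1%; at TR = 2 x T1 it is
# only ~86.5%, where TR differences between subjects do matter.
```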