I suppose with “2D”, you’re referring to a single slice? The problem here will be that the mask is eroded by default, and that of course in 3D, which will make your “slice mask” vanish entirely. I suppose you’re also supplying your mask manually here via the -mask option? I can imagine dwi2mask could have issues with that situation as well. Let’s say you’ve manually defined a “2D” mask; the way to make the algorithm then work properly would be to specify -erode 0, so the mask doesn’t get eroded / obliterated.
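For a manually supplied slab mask, the call would look something like the following (filenames are placeholders; -mask and -erode are the actual dwi2response options discussed above):

```shell
# Hypothetical filenames; -erode 0 disables the default 3-pass mask erosion,
# so a thin "2D" mask supplied via -mask is not obliterated
dwi2response dhollander dwi.mif wm_response.txt gm_response.txt csf_response.txt \
    -mask slab_mask.mif -erode 0
```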
However, your makeshift solution is just as good in practice, I reckon. If you’ve got good responses from artificially creating a 3D volume out of the 2D slice, don’t bother with the above: your responses should almost certainly come out just fine!
By ‘2D’ I meant a 2D acquisition, acquired at 0.125 × 0.125 mm in-plane to save time. The data has 5 slices in the 3rd dimension with a slice thickness of 0.5 mm.
Yes, I do supply a mask derived from dwi2mask which works quite well!
I tried using -erode 0 with dhollander, but it seems to select voxels at the borders for the GM response.
As suggested, I can run dhollander on the cubed data! The only concern is that I would be interpolating in the 3rd dimension. Besides, I also upsample the image just before obtaining the FODs, so that would introduce further interpolation in the 3rd dimension.
Just to add, I was wondering why it works fine for isotropic data and not otherwise.
Ok, so not a slice, but definitely still a “slab”. The same “issue” applies though: the default erosion of 3 voxels that is applied to the mask will obliterate your mask (since it’s only 5 voxels wide, and 3 voxels get eroded from both sides). Not eroding, well, doesn’t erode of course; and you see indeed what that step is there for: to avoid those few voxels at the edge! In principle, you would probably only want to erode in 2 dimensions (and not along the 3rd dimension, i.e. along the thickness of your slab). But indeed, the easy solution here is to just replicate the slab a few times, and allow erosion as normal (i.e. don’t specify -erode explicitly, so the default of 3 voxels applies). It’s not “perfect” etc…, but you’ll get the good quality response functions you need anyway (and that is the goal, after all).
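The replication trick could be sketched with mrcat along the slice axis (axis 2); filenames and the number of copies here are just illustrative:

```shell
# Stack 3 copies of the slab along the slice axis (axis 2), making the
# volume thick enough to survive the default 3-voxel mask erosion
mrcat -axis 2 dwi_slab.mif dwi_slab.mif dwi_slab.mif dwi_cubed.mif
dwi2response dhollander dwi_cubed.mif wm.txt gm.txt csf.txt
```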
No worries for the sake of response function selection: the algorithm never interpolates, only selects existing voxels. You can still up-sample afterwards, just before performing the CSD step.
This does bring up another possibility for your initial problem: up-sampling the data will also create extra “slices”. However, you’d then still only end up selecting voxels from the middle slices of your slab, if erosion does what it does by default. Plus, the algorithm would run much slower on the up-sampled data. That’s why, typically, I don’t advise up-sampling the data before response function selection; it works perfectly fine on the original, non-upsampled resolution.
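So the order of operations would be: estimate responses on the original-resolution data, then up-sample, then run CSD. A sketch with assumed filenames (the 0.25 mm target voxel size is a made-up example):

```shell
# 1) response function estimation on the original (non-upsampled) data
dwi2response dhollander dwi.mif wm.txt gm.txt csf.txt -mask mask.mif
# 2) up-sample only afterwards, just before CSD
mrgrid dwi.mif regrid -voxel 0.25 dwi_up.mif
dwi2mask dwi_up.mif mask_up.mif
# 3) multi-shell multi-tissue CSD on the up-sampled data
dwi2fod msmt_csd dwi_up.mif wm.txt wmfod.mif gm.txt gm.mif csf.txt csf.mif \
    -mask mask_up.mif
```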
Probably the reason it didn’t work on non-isotropic data is one of the above; so unrelated to the actual voxels being non-isotropic. I’ve actually run the dhollander algorithm with success on data of ridiculously low quality: adult human in-vivo brain, but highly anisotropic voxels, and low spatial resolution, and a b-value of less than 1000, and only 12 gradient directions (all of this in one dataset)… it still selects perfectly fine response functions for all 3 tissue types, and the voxels it selects for them also still make full sense.
Maybe one extra note though: the erosion step, and also a dilation step that happens within the single-fibre WM response selection (currently the tournier algorithm), are performed in voxel space, using voxel units! So yes, severely anisotropic voxels result in that erosion also being “anisotropic” with respect to “real” (metric) space. Again, not an issue for a whole brain, but it may become one if you’re working with a more 2D-ish slab. If you’re really working with some weird or exceptional data in terms of sizes and slab-by-ness, you could of course also first get a mask, manually edit it yourself (using the ROI editor) to perform some custom erosion around the edges (or include/exclude other bits that the masking algorithm would fail to handle), and then provide it to dwi2response dhollander via the -mask option, and (if you’ve done hands-on erosion yourself already) explicitly state -erode 0. As long as whatever is in the area of the mask is still (and only) actual tissue and (free) water, the dhollander algorithm will tackle a wide range of scenarios.
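To make the two points above concrete (this is a toy illustration, not MRtrix code): erosion works in voxel units, and 3 passes of it wipe out a 5-voxel-thick slab entirely.

```python
# Toy demonstration of morphological erosion in voxel units on a binary mask.
import numpy as np

def erode(mask, npass=1):
    """One face-connected erosion pass per iteration: a voxel survives
    only if it and all 6 face neighbours are inside the mask."""
    m = mask.astype(bool)
    for _ in range(npass):
        padded = np.pad(m, 1, constant_values=False)
        keep = padded[1:-1, 1:-1, 1:-1].copy()
        for axis in range(3):
            for shift in (-1, 1):
                keep &= np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
        m = keep
    return m

# A "slab" mask: generous in-plane, only 5 voxels thick.
slab = np.zeros((40, 40, 5), dtype=bool)
slab[5:35, 5:35, :] = True

# The default 3 erosion passes strip 3 voxels from each side, which
# obliterates the 5-voxel-thick dimension entirely:
print(erode(slab, npass=3).sum())   # -> 0

# Note the erosion counts voxels, not millimetres: with 0.125 mm in-plane
# voxels and 0.5 mm slices, each pass strips 0.125 mm in-plane but a full
# 0.5 mm through-plane, i.e. the erosion is anisotropic in metric space.
```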
I’m actually working on improving the dhollander algorithm even further; but I found out that it’s not as trivial as I thought it would be… so depending on whether I can make the little trick I came up with work robustly (or not), that improvement may also eventually come along (or not, should I have to admit defeat at my attempt at some point).
Ok, that’s good to know that upsampling wouldn’t be an issue. I could even upsample the data prior to response function estimation since the file size is pretty nominal (compared to the high-resolution data sets I’m used to ;))
But when I tried manually eroding the mask and running response function estimation with -erode 0 stated explicitly, it still takes the very edge of the brain as GM (close to the crosshair in the pic).
So, I will just upsample the data and estimate the response functions, as it seems to select the voxels neatly! I will still check whether the FODs look normal.
It’s nice to hear that there will be another version of the dhollander response function algorithm. In general, I’m a huge fan of MRtrix and an avid user too. Thanks to the MRtrix team for all the efforts!! By the way, is the SS3T implementation also close?
Yeah, so what I meant by “hands-on” erosion would be to manually already exclude some of those edges from the mask, but then in the “2D” sense: just shrinking the mask without making it thinner in the third dimension. But if upsampling similarly fixes the issue, that would be fine just as well!
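That “2D-sense” shrinking could be sketched as follows (a pure-numpy toy; in practice you’d draw this by hand in mrview’s ROI editor): erode using only the in-plane neighbours, so the slab is never thinned along its through-plane axis.

```python
# Toy sketch of in-plane-only erosion: the mask shrinks in x/y but the
# (thin) third dimension is left untouched.
import numpy as np

def erode_in_plane(mask, npass=1):
    """Erode using only the 4 in-plane neighbours (axes 0 and 1),
    leaving the slab's thickness along axis 2 unchanged."""
    m = mask.astype(bool)
    for _ in range(npass):
        padded = np.pad(m, ((1, 1), (1, 1), (0, 0)), constant_values=False)
        keep = padded[1:-1, 1:-1, :].copy()
        for axis in (0, 1):
            for shift in (-1, 1):
                keep &= np.roll(padded, shift, axis=axis)[1:-1, 1:-1, :]
        m = keep
    return m

# A slab mask, 5 voxels thick; 3 in-plane passes shrink the edges
# without removing any slice.
slab = np.zeros((40, 40, 5), dtype=bool)
slab[5:35, 5:35, :] = True
eroded = erode_in_plane(slab, npass=3)
print(eroded.any(axis=(0, 1)))  # all 5 slices still populated
```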
Sounds perfect! As long as the majority of the selected voxels are in sensible locations, you should be good. One or two “wrong” voxels don’t really hurt the result as long as many more others are correct.
…depending on how well I can make my little idea generalise. I’m keen on improving it, but of course not at the cost of broad applicability. It’s hard to test though; and I need to still get it robust in the first place.
I wish! I may do a beta-round at some point, among a few select collaborators where we’ve been running the prototype on their data before…