Hi @celinede,
Looking at this screenshot:
…I share your concern about the orange-y streamlines running through there. I’m not really sure, but this may be due to one (messed-up) outlier volume in the acquisition, or perhaps some uncorrected motion. While multi-tissue CSD will help a lot to get rid of noisy tracks in general, these orange-y ones may still (partially) remain, as they seem to show some orientational bias/issue.
On to the MSMT-CSD using responses from the `dhollander` algorithm then: this should work mostly out of the box. In addition to the post that @jdtournier linked to, I can also advise you to take a look at this one: Multi-tissue CSD
For ex-vivo data, or in general data with particularly low anisotropy (due to ex-vivo-ness, unmyelinated or partially myelinated tissue during development, very low b-value, … or any combination of those), it may be worthwhile to specify the `-fa` option to the `dwi2response dhollander` algorithm. By default, that’s set to 0.2, but you could try lowering it to e.g. 0.1, or anywhere in the range between 0.1 and 0.2. Given your b-value, the fact that it’s ex-vivo, and the fact that it’s a ferret, it’s a bit of everything; maybe give the `-fa` option a gentle push down to 0.15 or so. I reckon you should be good either way.
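As a concrete sketch of what that might look like (all file names here are hypothetical placeholders for your own data):

```shell
# Multi-tissue response function estimation with a lowered FA threshold.
# The -voxels option writes out an image showing which voxels were selected
# for each tissue type, so you can inspect the selection afterwards.
dwi2response dhollander dwi.mif wm_response.txt gm_response.txt csf_response.txt \
    -mask mask.mif -fa 0.15 -voxels voxels.mif
```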
Also, in that post I linked to above, take a look at how you can visualise the voxels that the `dhollander` algorithm selects. The WM, and probably the CSF as well, shouldn’t be an issue; but definitely take a look at where the voxels for the GM response are selected from, and see whether they sit in sensible (GM) regions. Looking at your actual MSMT-CSD result with WM and GM response functions, I note that it does look pretty good already! In general, I would more and more encourage going with an automated algorithm for response function selection. I’ve come to notice that, while `manual` does allow full control, a human isn’t always particularly good at selecting the best voxels for response function estimation. The `dhollander` algorithm (if I say so myself) does a pretty good job over a wide range of data qualities and kinds of data (human, animal, in/ex-vivo, … who knows what else I’ve been looking into). I’ve got another improvement in the works, which should see daylight soon… stay tuned.
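For the inspection step itself, something along these lines should do (assuming you saved the selected voxels via `-voxels voxels.mif`; file names are again placeholders):

```shell
# Overlay the selected response voxels on the DWI to check where each
# tissue's voxels were picked from; lower the opacity to see the anatomy
# underneath.
mrview dwi.mif -overlay.load voxels.mif -overlay.opacity 0.5
```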
For the automated `dhollander` algorithm: no, that doesn’t matter so much. The defaults here are entirely appropriate for any situation I’ve tested so far. If you go `manual`, definitely make sure you select at least a “decent” number of voxels; if only to “correct” a bit for the less good ones you may naturally select as well. For your data at hand, maybe at least a good 100 or so voxels wouldn’t be a bad idea. But again, you may want to leave this task to the automated algorithm instead…