Instructions for response function

Thanks I have a few questions

  • It says on the response function page: “However if you are involved in the processing of non-human brain images in particular, you may need to experiment with the number of single-fibre voxels as the white matter is typically smaller.” I am working on ferret brain, which is related to the polecat (a lower-order animal). How would I know whether I have selected a good number of single-fibre voxels?

  • Secondly, attached is a picture of a sample track. What type of things should I look for? I understand e.g. red means right-left etc. But what type of things can a whole brain tractography tell me in terms of FA values? I understand this is a technical page, but I just need some guidance on the importance of tractography - some examples of how to analyse a whole-brain tractography would be great.

Is there like a forum to discuss tractography?

Yep, this is it…

So this is indeed not as trivial as it sounds… Basically, you want the single-fibre mask (as produced by the -voxels option of dwi2response) to correspond to deep white matter, in regions that you would expect to contain a single well-defined fibre orientation (i.e. no crossing fibres). This does require some knowledge of white matter anatomy.
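As a starting point, something along these lines lets you inspect where the selected single-fibre voxels actually ended up (a sketch only: the filenames and the voxel count are placeholders, and the exact option names may differ between MRtrix3 versions):

```shell
# Illustrative only: filenames and the voxel count are placeholders.
# Dump the selected single-fibre voxels alongside the response estimate:
dwi2response tournier dwi.mif response.txt -voxels sf_voxels.mif -number 300

# Overlay the selected voxels on the DWI to check they fall in coherent
# deep white matter (not crossing-fibre regions, cortex or CSF):
mrview dwi.mif -overlay.load sf_voxels.mif
```

If the overlay shows voxels scattered into grey matter or obvious crossing-fibre regions, that is a sign the voxel count needs reducing for a smaller brain.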

Well, the first thing is that you have a very strong bias field in your data, which translates directly into a bias field in the FODs (since they’re proportional to the signal), and hence in the tracking (since it uses a threshold on the FOD to terminate tracks). You really need to sort that out. There’s a script in MRtrix3 called dwibiascorrect that should help in this regard.

The other thing is it looks like the x & y gradients are swapped in your DW gradient encoding. I don’t know where you obtained that information, or what format it’s in, but this will need to be fixed too, by swapping these components around in the relevant files (rows in the bvecs, columns in the MRtrix3 encoding format).
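For an MRtrix3-format encoding file (one row per volume, columns x y z b), the swap is a one-liner; the filenames here are placeholders, and a tiny made-up table stands in for real data:

```shell
# Toy MRtrix3 encoding file: one row per volume, columns are x y z b.
# Real files come from your scanner conversion; filenames are examples.
printf '%s\n' '0 0 0 0' '0.6 0.8 0 3000' '0 0.6 0.8 3000' > encoding.b

# Swap the x & y components (the first two columns):
awk '{ print $2, $1, $3, $4 }' encoding.b > encoding_xyswap.b
cat encoding_xyswap.b
```

For FSL-format bvecs (3 rows of N columns), you would instead swap the first two rows of the file.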

Actually, you’ll find a lot of the discussion on this forum is also quite general. But I agree that more guidance as to how to perform basic tractography might be beneficial (this has come up a couple of times recently). However, it seems like you’ve managed just fine, and you’re already beyond what the basic tutorial covers.

So I’m not sure how easy it would be to provide what you’re asking. On the one hand, the issues you’re facing are specific to your data - this is very different from the standard in vivo case, where things should typically work more or less out of the box. It’s difficult to predict all the issues that people might come across, and cater for all of them. In many ways, this is the purpose of this forum…

On the other hand, it seems you’re also asking for much more general guidance about diffusion MRI. This is something that you’d find in a review article (this one is particularly good :wink:) - but you are of course welcome to ask for advice if you have a specific question.

I understand e.g. red means right-left etc.

We try to make this the case as much as possible. However, particularly for images coming from animal scanners, this isn’t always so: either due to the positioning of the sample within the scanner, or the output data provided by the scanner not conforming to a standard. It’s possible to manually permute the axes using mrconvert, but I think solving your bias field and gradient table woes is more pertinent right now.

But what type of things can a whole brain tractography tell me in terms of FA values.

I’m not sure that this question makes any sense: generating a whole-brain tractography reconstruction, and analysing FA values, are two entirely different things (unless you’re performing a very specific type of connectome construction, which I suspect is not the case). So I agree with @jdtournier that reading up on diffusion MRI articles or books would help in terms of formulating and articulating precisely what type of experiment you wish to perform. :thumbsup:

Hi, thanks a lot for your help. I still have a few questions. This is another tractography image I have gotten from another brain. One is mine (top), the other (bottom) is from a classmate. Can you tell me whether the bias field and gradient swapping issues are present in this one as well? That way I’ll know whether the problem was really with my last image, or with the way I am processing it.

Also, is there an automatic way to swap the x/y vectors in the gradient file, rather than doing it manually?

Cheers

It’s really hard to tell from your images - I don’t even know what animal that is, and which way is up… But in terms of the orientations, they both look the same. The main difference here seems to be the use of a lower threshold for the lower one, allowing more streamlines in areas that would otherwise look sparse.

Also, any reason you’re using the old version of MRView…?

Sorry that screen shot was from my lab partner’s computer.

But here’s the same brain I generated in MRtrix3. Do you see any issues with this brain, like bias field or gradient problems?

Right, the axial (bottom panel) looks OK: nice radial orientation in the cortex. But in the other two panels, you can see that the orientations go funny where they run at 45°: they should probably run orthogonal to that direction at that point. So this suggests the Z components of the bvecs need to be inverted.

What? So there’s a different problem with each picture??? Do I have to go around inverting my -grad file every time?? Is there a simpler solution?

I’m not sure we’re talking about the same thing here, so just to clarify: when you say ‘each picture’, you’re not just talking about each panel, right? Because the problem is there for the whole dataset, it just manifests much more clearly in the panels where the Z direction is in plane (i.e. the top two panels). An error in the Z is very hard to spot in the axial (bottom) panel since the directions that run in-plane are not overly influenced by the Z component - but things are going wrong there as well.

Again, that depends on what you mean by ‘every time’. If you mean all your datasets acquired with the same settings as what you’re showing, then yes, you’d need to correct them all. If you’re talking about each panel, then that doesn’t make much sense: the grad file applies globally - you can’t get it right for one panel and not for the others.

Yes: talk to whoever acquires or converts the scans for you, and get them to fix up their conversion tools…

Sorry, I meant per dataset, not per panel. A couple of posts above I showed another brain, and there were issues with the x and y directions then; here there’s a problem with Z.

So my question is,

  • When you say there’s a problem with the conversion tool, is the problem with the DWI image, or with the gradient encoding file I was given?

  • Also, how do you spot it? I am very new to this program - are there any clues, so that when I look at a picture I can go… bam, the x direction is not right, etc.? You did it for me before (the z direction runs orthogonal), but I couldn’t quite visualise what you meant.

  • How do I modify the gradient file (e.g. invert the z components)?

  • Lastly, if I ignore your advice (i.e. if I find it too hard to implement), can I still get some proper analysis done on my pictures without changing the image processing steps?

Cheers

That’s actually really hard to say: you were using the old viewer, with no indication of the projection you were using. All I can say is that the results were essentially the same in terms of orientation. For all I know, the problem might have been exactly the same.

Just the grad encoding file.

Use the force.

Kidding aside, it does take a bit of experience. But it’s basically geometry: you have to imagine what directions the fibres would run in if you were to invert the X, Y or Z component, etc., and see whether that would be a better fit to what you expect given the anatomy…
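One way to make that trial-and-error concrete (a sketch; `encoding.b` here is a toy stand-in for your real file) is to generate one candidate table per inverted axis, recompute the FODs with each, and compare in mrview to see which looks anatomically sensible:

```shell
# Stand-in encoding file; substitute your real one (columns x y z b).
printf '%s\n' '1 0 0 1000' '0 1 0 1000' '0 0 1 1000' > encoding.b

# One candidate per inverted axis. A "-0" in the output is harmless.
awk '{ print -$1, $2, $3, $4 }' encoding.b > flip_x.b
awk '{ print $1, -$2, $3, $4 }' encoding.b > flip_y.b
awk '{ print $1, $2, -$3, $4 }' encoding.b > flip_z.b
```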

Probably easiest in Matlab, or you could write a python script, or you could use awk if you’re a command-line die-hard like I am:

$ cat encoding.b | awk '{ print $1, $2, -$3, $4; }'

Not sure about ‘proper’ - but your ADC, FA, RD & AD values should be unaffected. In general though, it’s really important to get this right; it will come back to bite you eventually…

Sorry, I am not talking about the old viewer image; I am talking about the other brain dataset further up - see post 9 in this thread, from 2 days back.

I just realised something: when I am making the tractography file with the tckgen command, do I have to specify the -grad encoding option then as well? I didn’t use it then, only for the spherical deconvolution and response function estimation commands. Could this have affected my analysis?
Cheers

Ah, yes. That one required a swap of the x & y components. If this comes from the same source, using the same sequence on the scanner, that is indeed not great. It’s actually worse than that: things can get more complicated if the acquisition plane is tilted, in which case the correct fix wouldn’t necessarily be a trivial swap or inversion of the components…

Not if you’re using an FOD-based tracking algorithm (the default). If you’d used the -algorithm Tensor_Det option or some other tensor-based approach, then you would have needed it - but chances are the command would have failed anyway, telling you it couldn’t find the DW encoding…

So the take-home message for today is: I am going to have to resort to inspecting the images, seeing which vector needs inverting, and going from there? And then run the response estimation command again…

Actually, given what you’ve shown me so far, I would personally recommend having a good chat with whoever provides you with the data, and making sure they verify everything they do and that what you get is correct. This is a major source of error, they really need to get it right.

Actually, probably not. We discussed this on another thread recently, and I think we were in agreement that the response should be OK.

So how do i incorporate the new encoding into the track?

OK, another freaky thing just happened. I used that

$ cat encoding.b | awk '{ print $1, $2, -$3, $4; }'

command

and all the values in the z vector that I see on the screen when I type this are minus what is actually in the encoding file when I cross-check it. Could this be the problem? For example, using awk a z value might be -3, but in the real encoding file it’s 3. I don’t know why it did this.

You’d need to run dwi2fod again with the updated encoding, but you should be able to use the same response you estimated earlier.
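Concretely, and assuming placeholder filenames (substitute whatever you actually called your files, and your actual seeding options), that would look something like:

```shell
# Placeholder filenames; reuse the response.txt you estimated earlier,
# but supply the corrected gradient table:
dwi2fod csd dwi.mif response.txt fod.mif -grad encoding_fixed.b

# Then regenerate the tractography from the corrected FODs:
tckgen fod.mif tracks.tck -seed_image mask.mif
```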

The whole point of that command is to invert the Z component… Note the -$3 in there. That’s the most likely problem with your current encoding file, and this command is trying to fix it. See e.g. here for an intro to using awk - there are plenty of tutorials online if you search for them.

But just to complete these instructions: what you’d need to do is write the output into a new encoding_fixed.b file. The simplest way to do this is to use I/O redirection, like so:

$ cat encoding.b | awk '{ print $1, $2, -$3, $4; }' > encoding_fixed.b

You can then try using that encoding_fixed.b file instead of your old one in dwi2fod, see whether that fixes things…

If it doesn’t work, you might need to play around with that command, trying to invert other components (e.g. using -$1, $2, $3, $4 will invert the x component) or swapping them around (e.g. $2, $1, $3, $4 will swap x & y).
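These can also be combined in a single pass if more than one fix turns out to be needed - for example, swapping x & y and inverting z together (again on a toy stand-in file):

```shell
# Toy stand-in encoding file (columns x y z b); substitute your real one.
printf '%s\n' '0.6 0.8 0.1 3000' '0 0.6 0.8 3000' > encoding.b

# Swap x & y AND invert z in one pass:
awk '{ print $2, $1, -$3, $4 }' encoding.b > encoding_fixed.b
cat encoding_fixed.b
```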

Whoops, silly me. Thanks a lot for your help - appreciate it.