OK, lots going on here. First, getting the orientation right. You’ll note that the orientations are not correct in fslview either, as the orientation labels demonstrate: you get A & P labels where you’d want S & I, and vice versa. The difference is that fslview shows you the image as stored on disk, irrespective of the transformation matrix, and adjusts the orientation labels to match, whereas MRView loads the image in its expected anatomical orientation, taking the transformation matrix into account directly.
If you want the images to display consistently (i.e. in the same way) in both fslview and MRView, then yes, you need to worry about the strides. But first, how do the images display in MRView before you run 3dresample? I’m not familiar with this software, so I’ve no idea how it does its thing. It might resample without modifying the transformation matrix to match, in which case the before and after images will differ in orientation in MRView; or it might alter the transformation matrix according to the resampling it did, in which case it would look no different in MRView (but different in fslview). Also, mrinfo would report a different transformation matrix in the former case, and the same matrix in the latter (but with different strides). It’s hard to know without having a look… Also, the name ‘3dresample’ suggests that the data might be interpolated onto a different voxel grid, which sounds like a bad idea: it introduces subtle errors in the data for no good reason. Ideally, all it would do is shuffle the voxel values around without interpolating, and/or modify the transformation matrix. I’ve no idea what it actually does; these are just the issues I’d be looking into.
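To make that check concrete, here’s a sketch of what I’d run (the filenames are hypothetical; substitute your own before/after images):

```shell
# Hypothetical filenames: dwi_before.nii is the original image,
# dwi_after.nii is the output of 3dresample. Compare the headers:
mrinfo dwi_before.nii
mrinfo dwi_after.nii
# If the 'Transform' block differs but the 'Data strides' are unchanged,
# only the header was modified; if the strides differ but the transform
# is the same, the voxel data themselves were reordered on disk.
```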
If the data look OK in MRView, either before or after some manipulation, then you can always use mrconvert -stride 1,2,3 in.nii out.nii to ensure the voxel order matches between fslview and MRView (although you may need to use -stride -1,2,3 instead if there’s a left/right flip).
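For example (again with hypothetical filenames), the stride fix would look something like:

```shell
# Rewrite the voxel order so both viewers agree; mrconvert adjusts the
# transformation matrix to compensate, so the anatomy is unchanged:
mrconvert -stride 1,2,3 dwi.nii dwi_reordered.nii

# Verify -- 'Data strides' should now read [ 1 2 3 ]:
mrinfo dwi_reordered.nii

# If a left/right flip remains, reverse the first axis instead:
mrconvert -stride -1,2,3 dwi.nii dwi_reordered.nii
```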
The issue of what to do with your bvecs, on the other hand, is potentially trickier: it depends on exactly what was done. If the resampling only modified the transformation matrix, then it would be safe to use the bvecs as-is. But if there has been a reshuffling of the voxels on file (i.e. a manipulation of the actual image axes), then the bvecs will need to be modified accordingly, since they are defined with respect to the image axes. If all that happened was that the x & y axes were swapped (for example), then it’s typically easy to correct. But if there’s been some actual resampling via interpolation, introducing non-trivial rotations of the anatomy relative to the image axes, then it’s a lot harder to figure out, and a lot harder to guarantee that the correction is 100% accurate: it might look OK in some data sets because the rotation introduced was small, but fail badly in other cases. In those cases, you’d need to verify that every step does exactly what you think it should have done…
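For the simple axis-swap case, the fix amounts to reordering the rows of the FSL-format bvecs file (three rows: x, y, z components, one column per volume). A minimal sketch, assuming the x & y image axes were swapped and the file is named bvecs:

```shell
# FSL bvecs: row 1 = x components, row 2 = y, row 3 = z.
# If the image's x & y axes were swapped, swap the first two rows:
{ sed -n '2p' bvecs; sed -n '1p' bvecs; sed -n '3p' bvecs; } > bvecs_swapped
```

(A flipped axis would instead need the corresponding row negated. In general, though, letting the MRtrix commands handle this via the image header is far safer.)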
So as you can see, lots of scope for tripping yourself up in there. In general, I recommend you use the MRtrix commands mrconvert (to manipulate the strides) and/or mrtransform (to actually rotate the images). For the latter, focus on the -linear option, and don’t use the -template option (as this will trigger a resampling of the data, rather than simply modifying the transformation matrix). The advantage of using these tools is that they will take care of the DW information for you if it’s provided. The simplest way to do this is to convert everything to MRtrix3’s own .mif format while you work with them: use mrconvert -fslgrad bvecs bvals in.nii out.mif to import the data initially (at which point the DW scheme information is embedded in the image header), perform whatever manipulations you need, and then export back to NIfTI using mrconvert in.mif out.nii -export_grad_fsl bvecs bvals to get the appropriately modified bvecs/bvals (assuming they were correct to begin with…).
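Putting that workflow together, a sketch (the filenames and the transform file are hypothetical; mrtransform’s -linear option expects a linear transform supplied as a text file):

```shell
# Import: embed the DW gradient table into the .mif header.
mrconvert -fslgrad bvecs bvals dwi.nii dwi.mif

# Example manipulation: apply a linear transform by updating the header
# only (no -template, so no interpolation / resampling of the data).
mrtransform -linear transform.txt dwi.mif dwi_rotated.mif

# Export back to NIfTI; the bvecs/bvals are written out reoriented to match.
mrconvert dwi_rotated.mif dwi_out.nii -export_grad_fsl bvecs_out bvals_out
```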