I’m finding your first message difficult to interpret; if I haven’t caught the issue in my response below, you might need to make another attempt at explaining your uncertainty.
The `tck2fixel` command doesn’t need to generate a new “fixel map”, and if it does, it should just be a duplicate of the index and directions files from the input fixel directory. So there’s no reason for the “dimensions” to have changed at any point, unless you’ve inadvertently used the wrong input fixel directory.
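For context, the basic invocation looks like this (all file names here are purely illustrative):

```
# Map streamlines to fixels: the new data file tdi.mif is defined on
# the fixel grid of fixel_dir/; had a different output directory been
# given, its index and directions files would simply be copies of
# those in the input directory
tck2fixel tracks.tck fixel_dir/ fixel_dir/ tdi.mif

# Optionally binarise that fixel data file into a fixel mask
# (the threshold of 10 streamlines is purely illustrative)
mrthreshold fixel_dir/tdi.mif fixel_dir/mask.mif -abs 10
```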
When you say “streamline connectivity based fixels.mif”, given that this must be different to the output of `tck2fixel` (with or without a subsequent `mrthreshold` call), are you referring to the file contained within the fixel-fixel connectivity matrix directory? This may at least explain your claim that that file is “at 1mm iso”, which isn’t actually the case: the contents of that file are not represented on a voxel grid (look at the image dimensions); the voxel sizes are just filled with unity values because it’s currently not permissible to either omit or have invalid values for voxel sizes in the image header.
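You can verify this yourself with `mrinfo` (the directory and file names here are hypothetical, following the usual FBA pipeline conventions):

```
# The data files inside the connectivity matrix directory are not
# defined on a voxel grid: their "dimensions" are fixel / connection
# counts, and the unity voxel sizes are merely placeholders
mrinfo matrix/fixels.mif -size -spacing
```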
The question then is which steps need to be either modified or re-run following the generation of such a mask.
`fixelcfestats` is designed to be able to operate on a fixel-fixel connectivity matrix that was generated using all fixels, but with a restricted fixel mask for the GLM and statistical enhancement. Technically you could re-run `fixelconnectivity` to produce a new fixel-fixel connectivity matrix that includes only those fixels within the mask; that would slightly reduce the storage size of that matrix, but the outcome should be no different.
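In concrete terms, the two equivalent routes would look something like this (again, all file names are hypothetical):

```
# Option A: keep the whole-brain connectivity matrix, and restrict
# the GLM / statistical enhancement to the fixel mask
fixelcfestats fd_smooth/ files.txt design.txt contrast.txt matrix/ stats_fd/ -mask fixel_dir/mask.mif

# Option B: regenerate the connectivity matrix within the mask only
# (smaller on disk, but the statistical outcome should be equivalent)
fixelconnectivity fixel_dir/ tracks.tck matrix_masked/ -mask fixel_dir/mask.mif
```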
Where there would potentially be a difference is whether or not fixel data smoothing using `fixelfilter` is constrained to only those fixels within the mask (whether by using the `-mask` option, or by generating a new fixel-fixel connectivity matrix based on that mask and then using that matrix within `fixelfilter`; a sketch of both variants follows below). I honestly don’t know how to give a general recommendation here: poorly-connected fixels may correlate with high inter-subject variance, and therefore omitting them not only from statistical inference but also from data smoothing may be beneficial… I have plans for addressing this more explicitly in the future, but for now I can only say that either approach is valid, and that I’ve not done enough testing to advise one way over the other.
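For completeness, the two smoothing variants might look like this (file names hypothetical; the `-matrix` option selects which connectivity matrix drives the smoothing):

```
# Smoothing driven by the whole-brain fixel-fixel connectivity matrix
fixelfilter fd/ smooth fd_smooth/ -matrix matrix/

# Smoothing constrained to the mask, by instead supplying a
# mask-restricted connectivity matrix (e.g. matrix_masked/ from above)
fixelfilter fd/ smooth fd_smooth_masked/ -matrix matrix_masked/
```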