Yes: in general, any correction for multiple comparisons (which this is) will increase your p-values (i.e. decrease significance). There’s not much we can do about this; it’s a general statistical result (see e.g. here for details).
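To see why at the level of a single fixel: with the maximum-statistic approach that permutation testing typically uses to provide family-wise error (FWE) control (a general sketch, not a description of the exact implementation in CFE), the corrected p-value is

$$
p^{\mathrm{FWE}}_i \;=\; \Pr\!\Big(\max_{j} T_j \ge t_i\Big) \;\ge\; \Pr\!\big(T_i \ge t_i\big) \;=\; p^{\mathrm{uncorr}}_i ,
$$

since the maximum over all fixels is at least as large as the statistic at any individual fixel, so the corrected p-value can never be smaller than the uncorrected one.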
As above: yes, this is normal, and without acquiring more data there isn’t anything simple that can be done to improve these results (assuming all the processing steps have been performed as well as they can be). The statistical correction procedures used in CFE are already pretty much as good as we can make them. There are a few things that @rsmith is working on, but I don’t expect they will make an enormous difference to your results, and they certainly won’t make your corrected results as extensive as the uncorrected ones.
It looks plausible, but it’s impossible to tell from a simple snapshot like this – there are far too many streamlines to get a sense of streamline density. A better way to verify would be to generate the corresponding track density image (TDI) using `tckmap`, and compare it with the WM FOD image that was used to generate the streamlines in the first place (e.g. figure 9 in the SIFT paper).
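For example, something along these lines (the file names here are just placeholders for your own data):

```bash
# Map the tractogram to a track density image (TDI),
# using the WM FOD image as the template so the voxel grids match
tckmap tracks.tck tdi.mif -template wmfod.mif
# (alternatively, use e.g. -vox 0.25 instead of -template for a super-resolution TDI)

# Overlay the TDI on the FOD image for a visual comparison
mrview wmfod.mif -overlay.load tdi.mif
```

If the streamline density tracks the FOD amplitudes reasonably closely throughout the white matter, the reconstruction is behaving as expected.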
CFE refers to the overall framework for correction for multiple comparisons using permutation testing, with statistical enhancement along white matter pathways based on estimates of connectivity derived using tractography. The framework in general is relatively agnostic to the exact test performed for each fixel, but the current implementation will typically perform a t-test for each fixel (the ability to perform an F-test has also recently been introduced). I’m not sure this answers your question, but I can’t think of a way to explain this simply without referring you to the CFE article – it’s quite an involved framework!
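In case it helps to make “a t-test for each fixel” concrete: under the usual GLM formulation (my notation here, not anything taken verbatim from the CFE paper), each fixel’s value across subjects is modelled as $y = X\beta + \varepsilon$, and for a contrast vector $c$ the per-fixel statistic takes the standard form

$$
t \;=\; \frac{c^{T}\hat{\beta}}{\sqrt{\hat{\sigma}^{2}\, c^{T}\,(X^{T}X)^{-1}\,c}},
\qquad
\hat{\beta} = (X^{T}X)^{-1}X^{T}y ,
$$

and it is this per-fixel statistic that then gets enhanced along the connectivity-derived pathways and assessed for significance via the permutation testing described above.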