The SIFT2 paper proposes that one may use SIFT2 on the output of tckgen/SIFT to improve recons, to be tested in the future…
I’m not sure I would be so bold as to say that it could “improve recons”; it’s simply a way in which these commands could plausibly be used. I had little doubt that this sort of combined approach would seem attractive to many people; indeed, many actually criticise SIFT2 for retaining all streamlines, since it “no longer has the capacity to remove false positives”. The reality is that this is a misinterpretation I’m seeing more and more often: even though SIFT can remove streamlines, that doesn’t mean it’s specifically removing false positives; it’s just reducing the streamline density in pathways that are over-reconstructed. As for whether there’s a benefit to throwing out streamlines as opposed to just letting them obtain very small weights: maybe, since it lets you increase the regularisation coefficient without having those “unwanted” streamlines contribute more than you want, but there are a lot of complex interactions at play.
I’m not sure whether quantitatively assessing the SIFT / SIFT2 combination in this fashion is going to work. You’re probably looking at Eq. 5 rather than Eq. 11; the latter is specifically for generating L-curves and doesn’t properly scale the regularisation term to match the data term (Eq. 6).
As currently implemented, PM, FD etc. in the formula are not available as outputs.
-output_debug option

The -csv option may be handy to you as well.
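Once tcksift2 has written its per-iteration statistics via -csv, they can be inspected with a few lines of Python. A minimal sketch (the file name and column names here are assumptions; check the header line of your own output):

```python
import csv

def load_iteration_stats(path):
    """Read a per-iteration statistics CSV into a list of dicts keyed by the header row."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Hypothetical usage: plot or tabulate how a column evolves per iteration.
# rows = load_iteration_stats("sift2_stats.csv")
# trace = [row["cost"] for row in rows]  # "cost" is an assumed column name
```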
One may think of generating mean and variance (or interquartile range etc.) on the product of mu and the streamline_coefs file to compare tckgen/SIFT recons, say when testing within the same subject (native vs. upsampled FOD file, different seeding options etc.), with the idea that lower mean or dispersion values would reflect better recons; please advise.
As you say, ideally you’d have all coefficients = 0; but you’d also have the data error term being zero. Ultimately neither of those is going to be the case, and hence you need some mechanism by which to “combine” these two sources of “error” into a single metric. Otherwise, if one is bigger than your reference and one is smaller than your reference, there’s no way to say whether or not it’s “better” or just different. This is precisely what the cost function in Eq. 5 does.
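A minimal sketch of what such a combined metric does, in the spirit of Eq. 5: fold the data-fit error and the regularisation penalty into one scalar, so that two reconstructions remain comparable even when one term rises and the other falls. The quadratic penalty and all names here are illustrative assumptions, not the paper’s exact formula:

```python
def combined_cost(data_error, coefficients, reg_weight):
    """Single scalar: data term plus a weighted quadratic penalty on the coefficients.

    With coefficients all zero the penalty vanishes and only the data
    error remains; as coefficients grow, the penalty grows, so neither
    term can be driven down for free at the expense of the other.
    """
    reg_term = reg_weight * sum(c * c for c in coefficients)
    return data_error + reg_term
```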
It’s possible that some other metric on the distribution of streamline coefficients may be more useful in terms of subsequently “assessing performance” of the algorithm, but I never found the metric I was looking for in that respect.
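For what it’s worth, the summary statistics proposed above are straightforward to compute once you have mu and the per-streamline weighting factors. A sketch, assuming plain Python lists of factors and using simple nearest-rank percentiles (whether lower dispersion actually indicates a better recon is, as discussed, an open question):

```python
def weight_summary(factors, mu):
    """Mean, variance and interquartile range of mu * per-streamline weighting factors."""
    w = sorted(mu * f for f in factors)
    n = len(w)
    mean = sum(w) / n
    var = sum((x - mean) ** 2 for x in w) / n
    # Nearest-rank quartiles; adequate for the large n typical of tractograms.
    q1 = w[int(0.25 * (n - 1))]
    q3 = w[int(0.75 * (n - 1))]
    return {"mean": mean, "var": var, "iqr": q3 - q1}
```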
streamline coefficients tending to 0
I suppose that should be “tending to 1”.
The streamline weighting coefficients will tend to zero, which corresponds to a streamline weighting factor of 1. It’s the latter that gets output from
tcksift2, but all of the optimisation is done with the former.
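The relationship between the two can be sketched as an exponential mapping (an assumption consistent with the statement above that a coefficient of 0 corresponds to a factor of 1):

```python
import math

def factor_from_coefficient(coef):
    """Map an optimised streamline coefficient to the weighting factor written to file.

    A coefficient of 0 gives exp(0) = 1, i.e. an unweighted streamline;
    positive coefficients up-weight, negative coefficients down-weight.
    """
    return math.exp(coef)
```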