While the GLM can’t handle such an experiment in the mathematically “purest” sense, the non-idealities introduced by the pragmatic solution shouldn’t cause issues except in the most unusual circumstances.
Assuming that you have two time points for all subjects, and that the duration between those two time points is reasonably equivalent across subjects, the solution is to pre-calculate a fixel data file for each subject representing the difference between the two time points (though personally I’d favour pre-calculating the rate of change per unit time for each subject). You then have one input image per subject for statistical inference. If your primary hypothesis is e.g. that the rate of change of your quantitative metric over time differs between two groups, then you could have one design matrix column encoding the mean rate of change in the first group, a separate column for the second group, and a row in your contrast matrix subtracting one group’s mean rate of change from the other’s, with the null hypothesis being that the value of that subtraction is zero.
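To make the design / contrast setup concrete, here’s a minimal numpy sketch of that two-group GLM. All the numbers and group sizes are hypothetical; `fixelcfestats` itself reads the design and contrast matrices from plain-text files and performs this fit independently for every fixel, so this is just an illustration of the arithmetic, not MRtrix3 code.

```python
import numpy as np

# Hypothetical data: 4 subjects in group A, 4 in group B.
# Each value is a pre-computed per-subject rate of change of the
# quantitative metric (one fixel shown; the software repeats this per fixel).
rates = np.array([0.10, 0.12, 0.08, 0.11,   # group A
                  0.02, 0.04, 0.01, 0.03])  # group B

# Design matrix: one column per group, 1 where the subject belongs
# to that group and 0 otherwise, so each beta is a group mean rate.
design = np.array([[1, 0]] * 4 + [[0, 1]] * 4, dtype=float)

# Contrast row: subtract group B's mean rate from group A's;
# the null hypothesis is that this difference is zero.
contrast = np.array([1.0, -1.0])

# Ordinary least-squares fit of the GLM.
betas, *_ = np.linalg.lstsq(design, rates, rcond=None)
effect = contrast @ betas  # group A mean rate minus group B mean rate

print(betas)   # per-group mean rates of change
print(effect)  # the contrasted effect of interest
```

With this design the betas are simply the two group means, and the contrasted effect is their difference, which is exactly the quantity tested against zero.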
It does mean that the distribution of the values provided as input to fixelcfestats differs from that of the original quantitative metric, and that the variance of those derived measures is not independent of the duration between acquisitions; but any loss of precision from non-normality is likely overwhelmed by the combination of intrinsic data variance and heuristic statistical enhancement anyway.
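The interval-dependence of the variance can be seen with a toy simulation: if each time point carries independent measurement noise of standard deviation sigma, the per-subject rate (t1 − t0)/dt has variance 2·sigma²/dt², so subjects scanned further apart contribute less noisy rate estimates. The numbers below are arbitrary and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000          # simulated subjects per condition
sigma = 1.0          # per-timepoint measurement noise (arbitrary units)

for dt in (1.0, 2.0):                       # inter-scan interval
    t0 = rng.normal(0.0, sigma, n)          # baseline measurements
    t1 = rng.normal(0.0, sigma, n)          # follow-up (no true change)
    rate = (t1 - t0) / dt                   # per-subject rate of change
    # Analytically, Var[rate] = 2 * sigma^2 / dt^2:
    # doubling the interval quarters the variance of the derived rate.
    print(dt, rate.var(), 2 * sigma**2 / dt**2)
```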
If there’s some other detail of your experiment that makes this approach unsuitable, I’d need to know those details in order to advise further.