> why it is recommended to create average response functions across study population
The fundamental necessity of using a common (technically, common more so than average) response function for the quantification of FD was described back in the 2012 AFD paper.
Imagine you have a cohort of two subjects. In one subject, a typical WM voxel is filled with 20% axons and 80% extra-cellular fluid; in the other, a typical WM voxel is filled with 80% axons and 20% extra-cellular fluid. If you used subject-specific response functions rather than a common one, you would detect no difference in “fibre density” between these two subjects: each subject’s signal would be expressed in units of that subject’s own typical WM voxel, so the deconvolution would report comparable fibre densities in both.
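As a toy numerical sketch of this point (assuming, purely for illustration, that the WM signal scales linearly with axonal volume fraction; this is not MRtrix code):

```python
# Grossly simplified model: WM signal = axonal volume fraction x per-axon signal
PER_AXON_SIGNAL = 1.0  # arbitrary units

axon_fractions = {"subject_A": 0.2, "subject_B": 0.8}
signals = {s: f * PER_AXON_SIGNAL for s, f in axon_fractions.items()}

# Subject-specific response: each subject's own typical WM voxel signal.
# "Fibre density" is the weight needed to express the signal in units of the response.
fd_subject_specific = {s: signals[s] / signals[s] for s in signals}
# Both subjects come out with FD = 1.0: the 4-fold difference is invisible.

# Common response: e.g. the group-average typical WM voxel signal.
common_response = sum(signals.values()) / len(signals)
fd_common = {s: signals[s] / common_response for s in signals}
# The 4-fold difference in axonal content is now preserved in the FD ratio.
```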
> and to eliminate the responses of subjects with global brain pathology (step 4)
This is optional, and its importance depends on how well response function estimation performs in the presence of such pathology. It also reinforces the prior point about the need for response functions to be common rather than strictly average. Once you perform the deconvolution to express the DWI signal as a weighted sum of the different tissue response functions, the “units” of those response functions may be either “the appearance of that tissue in healthy brains” or “the average appearance of that tissue across the cohort”. This tweaks the interpretation slightly for those pedantic about such things; as long as the response function estimates are not outright erroneous in pathological participants, the choice is unlikely to be of great consequence, but if you know a priori that many subjects have diffuse pathology, the former interpretation is to me slightly preferable.
This also leaks into the discussion of whether the response functions should be “representative” of each particular tissue, or should instead be “the most extreme manifestations” of each particular tissue. The deconvolution can only fully reconstruct the DWI signal in cases that are less extreme, in terms of b-value decay / shape as a function of orientation, than the provided tissue response functions. This is largely handled by selecting, in the course of response function estimation from the image data, only a small percentile of voxels for each tissue type, from which the DWI signal is averaged to form those response functions; that is technically not an “extremum” estimate, but it is generally good enough.
If that’s too esoteric, let’s go for a more direct explanation. If you take a WM response function from a subject with diffuse reductions in WM fibre density, and use that to run deconvolution on a healthy participant, the model fit will have large residuals: the empirical DWI signal simply could not be represented by taking some sum of that response function over orientation space. Conversely, if you take a WM response function from a healthy volunteer, and use that to run deconvolution on a subject with diffuse reductions in WM fibre density, you should see smaller WM FODs, but the model should fit the data just fine.
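This asymmetry can be demonstrated with a deliberately simplistic 2-D, single-shell toy deconvolution: a non-negative fit of rotated copies of a response profile to an angular signal. The kernel shapes and numbers below are invented for illustration and bear no relation to real response functions:

```python
# Sketch (not MRtrix code): toy constrained deconvolution in 2-D.
import numpy as np
from scipy.optimize import nnls

thetas = np.linspace(0, np.pi, 60, endpoint=False)  # signal sample orientations
phis = thetas.copy()                                # candidate fibre orientations

def kernel(iso, aniso, phi):
    """Toy single-fibre response: isotropic floor plus cos^2 angular contrast."""
    return iso + aniso * np.cos(thetas - phi) ** 2

def basis(iso, aniso):
    """Matrix whose columns are the response rotated to each candidate orientation."""
    return np.column_stack([kernel(iso, aniso, p) for p in phis])

# Healthy WM: strong angular contrast; diffuse pathology: flatter profile
healthy_rf = (0.1, 0.9)
pathological_rf = (0.5, 0.5)

healthy_signal = kernel(*healthy_rf, phi=0.0)                  # one fibre population
pathological_signal = 0.4 * kernel(*pathological_rf, phi=0.0)  # reduced fibre density

# Pathological RF deconvolving healthy data: no non-negative combination of the
# flatter kernels can reproduce the sharper signal -> large residual
_, resid_mismatch = nnls(basis(*pathological_rf), healthy_signal)

# Healthy RF deconvolving pathological data: fits essentially perfectly,
# just with a smaller total FOD (sum of weights well below 1)
weights, resid_match = nnls(basis(*healthy_rf), pathological_signal)
```

The non-negativity constraint is what makes this asymmetric: a broad signal can be represented by spreading weight across orientations, but a signal sharper than the kernel cannot be made sharper by any non-negative combination.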
The story is a little more complex with multi-tissue deconvolution, but the basic premise remains the same.
> Likewise, in step 9, it is recommended that a study-specific unbiased FOD template is generated. There, it is recommended to avoid data from patients with excessive abnormalities for the generation. For this, I am again wondering why a population-based template is created instead of subject-based, and why to exclude subjects with a high amount of global brain pathology.
I don’t understand what you mean by “why a population based template is created instead of subject based”. The phrase “subject-based template” would only make sense in a repeated-measures experiment, and even then it would only be an intermediate step towards a population or standard template. FBA (in the strictest definition of performing statistical inference at the per-fixel level) necessitates spatial correspondence across subjects, which precludes calculations from being performed solely in subject space.
I think global diffuse pathology is less of a concern here than in step 4 above. The documentation’s phrasing is “avoid patients with excessive abnormalities compared to the rest of the population”. So if a diffuse abnormality is apparent in a reasonable fraction of subjects, then individual patients would not have excessive abnormalities compared to the rest of the population, since there are plenty of similar patients. I think this statement should apply more so to “excessive spatial abnormalities”. Generally you want a population-based template to be as representative of the entire population as possible, so that the registration of individual participants to that template is as unbiased as possible. But if, for example, the inclusion of a subject with hemimegalencephaly would degrade the overall quality of the template, and registration of that subject to the template is going to be poor regardless of whether or not they were used in its generation, then omitting them from said generation may be preferable.
> I suppose that the images from the template could easily be brought back to subject space with `epi_reg`, or is another method recommended?
I don’t see why `epi_reg` would be used here, given that it is tailored for registration between distorted EPI and undistorted anatomical data from a single participant. What you want to do is store the “full” warp files during registration; these permit both transformation of subject data to template space and transformation of template data back to subject space.
However, you will need to think about exactly how this transformation should occur if you are dealing with fixel data. Such data cannot be trivially interpolated at sub-voxel locations in the way that voxel-wise data can. Moreover, there is no guarantee regarding the number and orientations of fixels for two corresponding locations in subject and template spaces.
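One way the MRtrix3 FBA pipeline deals with the latter problem is the `fixelcorrespondence` step, which, for each template fixel, identifies the subject fixel with the closest orientation within the corresponding voxel (subject to an angular threshold). A minimal sketch of that angular-matching idea, as a concept illustration rather than the actual implementation (the direction vectors here are made up):

```python
import numpy as np

def match_fixels(template_dirs, subject_dirs):
    """For each template fixel direction, pick the subject fixel in the same
    voxel whose orientation is closest (fixel directions are sign-invariant,
    hence the absolute value of the dot product)."""
    matches = []
    for t in template_dirs:
        dots = [abs(np.dot(t, s)) for s in subject_dirs]
        matches.append(int(np.argmax(dots)))
    return matches

# One voxel with 2 template fixels but 3 subject fixels (unit vectors):
template_dirs = [np.array([1.0, 0.0, 0.0]),
                 np.array([0.0, 1.0, 0.0])]
subject_dirs = [np.array([0.0, 0.985, 0.174]),    # ~10 deg off template fixel 2
                np.array([0.996, 0.087, 0.0]),    # ~5 deg off template fixel 1
                np.array([0.577, 0.577, 0.577])]  # unmatched crossing fibre

matches = match_fixels(template_dirs, subject_dirs)
```

Note that one subject fixel here ends up with no template counterpart at all; this is exactly why per-fixel metrics are mapped into template fixel space rather than interpolated like voxel-wise images.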