I was looking at the response function averaging tool responsemean and noticed that its default behavior is to take a weighted average of the response functions in a group, where the weighting is based on the size of the l=0 coefficients.

But it seems that the weights used in the weighted average are not normalized to have sum 1.

The weights are computed here. Each response function is multiplied by its weight and then an ordinary mean is taken, so if this is to be a weighted average with normalized weights, the sum of the multipliers should equal the number of items being averaged. But it does not.
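To illustrate the point with made-up numbers (this is not the actual responsemean code): an ordinary mean of weight-multiplied values only coincides with a properly normalized weighted average when the weights sum to the number of items.

```python
import numpy as np

# Hypothetical values and multipliers -- not taken from responsemean itself.
values = np.array([10.0, 20.0, 30.0])
weights = np.array([1.1, 0.8, 1.3])   # note: the weights sum to 3.2, not 3

# Ordinary mean of the multiplied values (accumulate, then divide by N):
plain = (weights * values).mean()

# Properly normalized weighted average:
normalized = (weights * values).sum() / weights.sum()

print(plain, normalized)  # 22.0 vs 20.625 -- they differ unless the weights sum to N
```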

Does anyone know the reasoning behind the weighting (rather than just taking an ordinary average)?

Is there an error here or is there a good reason for the weights to not sum to 1?

The rationale for the approach taken was originally outlined on this GitHub issue. However, looking at the code, I have a feeling the approach relies on the geometric mean rather than the (weighted) arithmetic mean, which matches this comment a few lines above:

New approach: Calculate a multiplier to use for each subject, based on the geometric mean scaling factor required to bring the subject toward the group mean l=0 terms (across shells)

In which case, the weights may no longer sum to one; instead, the product of the weights should evaluate to 1 (I think…).

That's correct: it uses "distance" from the geometric mean to weight the subject response functions. But once those weights are determined, they are used in an ordinary additive mean (they are additively accumulated here). So I would still think the weights should add up to 1. In any case, their product doesn't seem to be 1, or any other special number, either.
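For what it's worth, here is a small numerical sketch of the multiplier scheme as I read the quoted comment (hypothetical l=0 coefficients, not the actual responsemean implementation): each subject's multiplier is the geometric mean, across shells, of the scaling needed to reach the group-mean l=0 term. With these numbers, neither the sum nor the product of the resulting multipliers comes out to anything special:

```python
import numpy as np

# Hypothetical l=0 response coefficients: one row per subject, one column per b-value shell.
l0 = np.array([[15.0,  9.0, 6.0],
               [12.0,  8.0, 5.0],
               [18.0, 11.0, 7.0]])

group_mean = l0.mean(axis=0)   # per-shell group mean of the l=0 terms

# Geometric mean (across shells) of the scaling factor that would bring each
# subject's l=0 terms to the group mean -- my reading of the quoted comment.
multipliers = np.exp(np.log(group_mean / l0).mean(axis=1))

print(multipliers.sum())       # not equal to the number of subjects (3)
print(np.prod(multipliers))    # not equal to 1 either
```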

Thanks for sharing the github issue! The reasoning makes sense now.

Now I am just uneasy about the weights not being normalized.