Population_template parameters


I’m interested in investigating the effects of the population_template parameters. Considering that there are many things that can be changed or tweaked, what would be a reasonable way to try at least a small subset of parameters? Which parameters would you pick to alter, and why?

I would be grateful if anyone could share their thoughts on this question.

Thank you!

Hi Mahmoud,

I don’t really think that this is a question that can be answered concisely; adequately explaining all of the parameters, what the effect of changing each would be, and the myriad interactions between them, would quite easily be the length of a complete review article. So if you don’t already have an established understanding of the nuances of image registration, it might well be a rabbit hole too deep.



Hi Robert,

Thanks for your response. I understand the complexity of the matter. However, even for complex problems there are often rules of thumb, something that non-experts can use to navigate their way. What I’m looking for here is: what would an expert do in this case?
My images are from infants younger than 6 months, so there is huge variation in the shape and size of their brains. This is quite different from the adult case, to which the population_template default parameters are most probably tuned. It would be great if you could share your thoughts.

Thank you

Hi Mahmoud,

Yes, I agree with you: this is an entirely justified question. If there’s not even a general idea about how to deal with these things, then in a certain way they’re useless, or over-engineered.

I’ve worked on registration and template building throughout my entire PhD, looking at literally gazillions of ways to alter the design and thinking around this problem, even using multi-channel setups beyond what the population_template script offers. Through conversations with David Raffelt, some of those ideas have eventually ended up in there, and e.g. Max’s work in https://www.researchgate.net/publication/315836534_Multi-contrast_diffeomorphic_non-linear_registration_of_orientation_density_functions etc… builds on this as well.

That in and of itself shows that this is indeed not an easy thing; an entire specialised domain in its own right. So here’s my take:

Yep, there are a lot of parameters and choices made available via e.g. the population_template script. While you can tweak them all in principle, this is not per se a good idea: there’s a massive risk of over-fitting, or similar kinds of “cherry picking” if you will. So that begs the question: why are these parameters available to the user at all? Well, because they’re indeed exactly that: parameters, with defaults that are to a certain extent quite arbitrary, and set to “values observed to work “well”” (when I start using ““quotes” in quotes”, you might get the sense that a lot of abstraction is needed here…). Or to put it differently: it’s often annoying to hide parameter choices as fixed values deep within the code, so as a method developer there’s often an urge to make them available, even “just in case”. Furthermore, it allows us to more easily experiment with them ourselves, so we don’t have to dive in and change hard-coded choices each time.

With all my experience on this particular problem, my advice would in fact still be: leave those parameters alone, because it’s an ill-posed problem. In terms of your actual challenge with huge variations in shape and size: that in itself shouldn’t be a problem. We’ve used it successfully (and that includes me inspecting the results closely) in e.g. Alzheimer’s disease, and recently even stroke, where the challenge is definitely massive, with arguably more variation in shape and size than what you might be facing.

I’ve seen you’ve asked a few questions on similarity metrics recently too; I’m guessing you might be trying to assess the performance of registration. But here lies the problem: if you want the best performance on those metrics, the solution is simple: weaken the regularisation (any of them; all of them) of the registration or template building algorithm. That’ll allow for more (and more) deformation, until you can deform a baby’s brain to match an elderly person’s brain perfectly: i.e. you can come up with a warp that’ll make the baby brain look almost exactly the same as the elderly person’s brain. And your metric will tell you you’ve won the tweaking game. But obviously, this isn’t what you’re after.

This is because registration is, at its core, a battle between the similarity metric and the regularisation; and they’re both absolutely crucial. “Winning” more in terms of similarity means losing on the regularisation front, and vice versa. It’s a game of Occam’s razor, but we don’t know where the “natural” optimum lies, because there is none: there’s no ground truth for what a warp from one person’s brain to another person’s brain “should” be. So rather than fundamental science, it becomes engineering; in particular the kind that relies on a lot of experience (sadly).
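Just to make the above concrete: in population_template, the non-linear regularisation is exposed via options such as -nl_update_smooth and -nl_disp_smooth (check population_template --help for your version; the values below are purely illustrative, and the directory/file names are placeholders). This is a sketch of what “weakening the regularisation” would look like, precisely to show what *not* to do:

```shell
# NOT a recommendation -- a sketch of how one *could* weaken the non-linear
# regularisation in population_template. Smaller smoothing values mean a
# weaker regularisation, which will inflate similarity metrics by allowing
# ever more (eventually anatomically meaningless) deformation.
# Directory and file names here are placeholders.
population_template input_fods/ wmfod_template.mif \
    -mask_dir input_masks/ \
    -nl_update_smooth 1.0 \
    -nl_disp_smooth 0.5
```

The “tweaking game” described above amounts to lowering these values until your similarity metric looks great and your warps are nonsense.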

Also take into account that if you actively tweak parameters like these, it’s very hard to argue why you tweaked some and not others, and why you tweaked them in the way you ended up doing. So in the end, I’d simply advise against tweaking them: the current defaults strike a “reasonable” (whatever that means, indeed) balance between matching and regularisation. The current defaults also take an incredibly careful approach, with far more iterations and stages than strictly necessary to get a very (very, very) similar result. But that’s OK, since you’ll typically run these types of analyses on a powerful system in any case, and most people can happily wait overnight for their template to be built.
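For reference, a plain default run, with all registration parameters left untouched, is as simple as (directory and file names are placeholders; -voxel_size is optional and shown here only as an example of a harmless, non-registration choice):

```shell
# Default template building: every registration parameter stays at its
# default. input_fods/ holds one FOD image per subject; input_masks/ the
# corresponding brain masks (matching filenames). Names are placeholders.
population_template input_fods/ wmfod_template.mif \
    -mask_dir input_masks/ \
    -voxel_size 1.25
```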

So in conclusion: if I were facing your exact scenario myself, I’d still go with the defaults. The one thing you should definitely check, though, is the output: if you see that one or more subjects are really way off, that’s something to fix. But the problem, and thus the solution, almost always has to do with something else: either initialisation (maybe some subjects are just very far off in space) or something to do with the intensities in your image(s).

The initialisation can mostly be ruled out, since there’s now a reasonably robust mechanism for it built into population_template; do make sure your brain masks are generally fine, though. The intensities should be managed well by mtnormalise, but that relies on a solid CSD result (as well as reasonable brain masks). Make sure you don’t see any outlier values, or outrageously weird values or patterns (e.g. due to artefacts), in your images.

I’ve only seen the template building go wrong myself a few times, and that was back in the day when we struggled far more with bias field correction (and when we eventually stumbled upon some problems in the precursor to mtnormalise). When it goes wrong, though, it goes seriously wrong; and in a typical pipeline where you might compute e.g. an intersection mask later on, that should make it unmistakably clear that something has gone wrong (which is good in a sense; i.e. it’s hard to overlook the consequences).
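To sketch what such a quality-control pass might look like in practice: mtnormalise, mrview and mrmath are real MRtrix3 commands, but all file and directory names below are hypothetical, and the exact inputs depend on your pipeline (e.g. which warped masks you have available):

```shell
# 1. Per-subject intensity normalisation (relies on a decent multi-tissue
#    CSD result and a reasonable brain mask):
mtnormalise wmfod.mif wmfod_norm.mif gmfod.mif gmfod_norm.mif \
    csffod.mif csffod_norm.mif -mask mask.mif

# 2. After template building, eyeball the template (and each subject's
#    warped image) for gross misalignment:
mrview wmfod_template.mif

# 3. Intersection of all subjects' warped brain masks; large holes here
#    make a badly registered subject hard to miss:
mrmath warped_masks/*.mif min template_mask_intersection.mif
```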

So, lots of words above to eventually still conclude: don’t change the defaults or tweak values without very unique or exceptional reasons. Not because you’re not an expert on these things, though: I would do the same (i.e. not unnecessarily tweak) myself! :slightly_smiling_face:

I hope this provides some insights… :wink: