# mrhistmatch documentation?

Hello, I’m using the mrhistmatch function for histogram normalization of two images, but I need some additional information that isn’t included in the mrhistmatch documentation.

In particular, where can I find information about the exact linear and nonlinear transformation algorithms used to match the two histograms?

Additionally, can you confirm that the linear transformation is applied uniformly to the whole input image? And in the nonlinear case, how is the transformation applied to the input image?

It seems from the documentation that the masks determine which areas of the input/target images are used for matching the histograms; is the calculated transformation then applied to the full image, without any mask?

I’ve read through the documentation here (mrhistmatch — MRtrix3 3.0 documentation), tried to read the source code (I struggled: mrtrix3/cmd/mrhistmatch.cpp at master · MRtrix3/mrtrix3 · GitHub), searched the GitHub issues page, and searched this community forum, but didn’t find any additional info.

Thank you and best regards,
Andrew

hi @asilberf,

I’m not sure I’m best placed to answer this, as it was a contribution from @rsmith, but I’ll do my best…

For the linear matching, the code tries to find a global scale factor (and optionally a corresponding global offset) to match the intensities in one image relative to the other. This single global scale factor (± offset) is then applied to all the intensities in the image to be matched (at this point in the code for scale only, and this point for scale + offset).

The scale (and offset, if requested) is obtained by solving for the parameters that provide the best match (in a least-squares sense) between the vectors of sorted intensities in the two images:

\{a,b\} = \arg\min_{\{a,b\}} \sum_{n=0}^{N} \| x_n - (b + a y_n) \|^2

where \{a,b\} are the scale & offset parameters, and \{x_n\} & \{y_n\} are the sorted lists of intensities in images x & y. This is minimised by solving the linear system:

\left( \begin{array}{cc} \mathbf{y} & \mathbf{1} \end{array} \right) \left( \begin{array}{c} a \\ b \end{array} \right) = \mathbf{x}

and solved using a standard linear algebra solver (Cholesky decomposition in our case, at this point in the code).
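If it helps, here is a minimal NumPy sketch of that least-squares fit. This is my own illustration, not the MRtrix3 code (which works in C++ and uses a Cholesky solve); it also assumes the two sorted intensity vectors have equal length:

```python
import numpy as np

def linear_match(x, y):
    """Fit a global scale a and offset b so that b + a*y best matches x.

    Sketch of the least-squares fit above: sort both intensity vectors,
    then regress the sorted x values on the sorted y values.
    """
    xs = np.sort(x.ravel())
    ys = np.sort(y.ravel())
    # Design matrix [y 1]: solve [y 1] (a, b)^T = x in the least-squares sense.
    A = np.column_stack([ys, np.ones_like(ys)])
    (a, b), *_ = np.linalg.lstsq(A, xs, rcond=None)
    return a, b

# y is a scaled and offset copy of x (x = 0.5 + 2*y),
# so the fit should recover a = 2, b = 0.5.
rng = np.random.default_rng(0)
x = rng.random((8, 8))
y = (x - 0.5) / 2.0
a, b = linear_match(x, y)
```

The same solution is obtained however the normal equations are solved; MRtrix3 happens to use a Cholesky decomposition, while `lstsq` here uses an SVD-based solver.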

The nonlinear matching is a bit more complex, but uses the standard histogram matching approach: compute the cumulative distribution functions (CDFs) of both images, then for any intensity y_n in image y, find the corresponding value x_n such that \textrm{cdf}_x(x_n) = \textrm{cdf}_y(y_n). It is also a global mapping.
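A small sketch of that CDF-matching step, again my own NumPy illustration rather than the actual implementation (which discretises the CDFs into histogram bins):

```python
import numpy as np

def nonlinear_match(y, x_ref):
    """Classic histogram matching: map each intensity in y to the x_ref
    value sitting at the same cumulative-distribution rank."""
    y_flat = y.ravel()
    # Empirical CDF rank of each y value within y itself.
    order = np.argsort(y_flat)
    ranks = np.empty(y_flat.size, dtype=float)
    ranks[order] = np.linspace(0.0, 1.0, y_flat.size)
    # Invert the reference CDF: interpolate from rank back to x_ref intensity.
    x_sorted = np.sort(x_ref.ravel())
    quantiles = np.linspace(0.0, 1.0, x_sorted.size)
    matched = np.interp(ranks, quantiles, x_sorted)
    return matched.reshape(y.shape)

# If x_ref is a monotone transform of y, the mapping should recover it.
rng = np.random.default_rng(0)
y = rng.random(100)
x_ref = 2.0 * y + 1.0
matched = nonlinear_match(y, x_ref)
```

Because the mapping only depends on an intensity's rank, any monotone relationship between the two images is recovered exactly (here, matched ≈ 2·y + 1).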

The masking options control which values are included in the computation of the intensity transformation (scale & offset, or cdfs), but once the transformation parameters have been computed, the transformation is indeed applied to the entire image (not just within the mask – at least as far as I can tell from the code).
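To illustrate that behaviour, here is a toy example (my own sketch, not the actual code) where a scale-only fit is estimated from the voxels inside a mask but then applied to every voxel of the image:

```python
import numpy as np

# y is a globally scaled copy of x; the mask covers only a central block.
rng = np.random.default_rng(1)
x = rng.random((16, 16))
y = x / 3.0
mask = np.zeros_like(x, dtype=bool)
mask[4:12, 4:12] = True

# Scale-only least-squares fit restricted to the masked voxels:
# minimise sum((xs - a*ys)^2)  =>  a = <ys, xs> / <ys, ys>.
xs = np.sort(x[mask])
ys = np.sort(y[mask])
scale = ys.dot(xs) / ys.dot(ys)

# The fitted scale is then applied to the whole image, not just the mask.
y_matched = scale * y
```

Since the masked voxels obey the same global relationship as the rest of the image, the fit recovers scale = 3 and the full matched image equals x everywhere.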

Hopefully I’ve got the details right, but @rsmith will no doubt correct me if I’m wrong…

All the best,
Donald