# Cross correlation and mrmetric

Hello all,

What is the formula for cross-correlation in `mrmetric`? And what should its output be for two identical images?

```
mrmetric -metric cc -space average FA_WMFOD_template_sub1.mif FA_WMFOD_template_sub1.mif
mrmetric: [100%] precomputing cross correlation data...
-0.0006251076483722004
```

Is the above something that I should expect?

Thank you,
Mahmoud

The dissimilarity metric CC refers to the (global, not local, and not normalised) discrete negative cross-correlation with zero shift between the images, or more precisely the cross-covariance (which is equivalent to the cross-correlation of zero-mean signals):
CC(im1, im2) = -𝐸[(X − X’)(Y − Y’)],
with X’ the mean of X and Y’ the mean of Y.

Values around 0 mean no correlation. Note that this metric is currently not normalised, i.e. its values are not bounded to ±1. To normalise it, you could square the expectation value and divide by the product of the respective auto-correlations:
CC_norm = - CC(im1, im2)**2 / (CC(im1, im1) * CC(im2, im2))
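As a sketch (not MRtrix code), the two quantities above can be written in plain Python; the helper names are made up, and `mrmetric` does this voxel-wise over the image grid:

```python
# Sketch of the (un-normalised) CC metric and its normalised variant.
# Helper names are hypothetical; mrmetric computes this over all voxels.

def cc(x, y):
    """Negative cross-covariance: -E[(X - mean(X)) * (Y - mean(Y))]."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    return -sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

def cc_norm(x, y):
    """Normalised version: -CC(x, y)**2 / (CC(x, x) * CC(y, y))."""
    return -cc(x, y) ** 2 / (cc(x, x) * cc(y, y))
```

Note that `cc(x, x)` is minus the variance of `x` (so always negative), and `cc_norm(x, x)` evaluates to exactly -1.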


Thanks for the explanation, Max.

> Values around 0 mean no correlation

So I don’t understand this part: how should I interpret CC if CC(im1, im1) ≈ 0?

Hi Max,

```
mrmetric -mask1 ${mask1} -mask2 ${mask1} ${image1} ${image1} -space average -interp linear -metric cc
mrmetric: [100%] precomputing cross correlation data...
-0.029309345376545573
```

The above command should be equivalent to CC(im1, im1) as you explained above. Also, CC(im1, im1) should equal the variance of im1; is that correct?
This is what I get from `mrstats`:

```
$ mrstats -mask $mask1 $image1
volume       mean     median      stdev        min        max      count
[ 0 ]     0.1712 0.136524677   0.124254 -0.0431182     1.0047     281439
```

I can’t tell which output I can trust here.

Thanks for pointing this out!

The CC metric has a bug; `mrstats` is correct. The bug does not affect anything other than the CC metric, which is currently used only in `mrmetric` (though fixing it might also fix the not-yet-working CC registration code). I’ll post here once the fix is out.

Details, if interested:

I would expect the output of mrstats and mrmetric to be different as mrmetric regrids the image to the average space, which uses interpolation causing some degree of blurring – despite the average space being (nearly) identical to the input image grid. mrstats operates on the voxel level. I double checked mrstats stdev with numpy, it is correct.

The following should give you an idea of the order of magnitude of the difference due to interpolation:

`mraverageheader $image1 $image1 - | mrtransform $image1 -template - -oversample 1 -interp linear - | mrview - $image1`

The variance of $image1 is 0.124254 ** 2 = 0.015439, close to the output of mrmetric but not close enough. So I had a look at the mean and the stdev of the precalculated image; both are close to those of the input image. Digging further, I found a bug deep in the thread kernel of the registration code that causes erroneous indexing over the precomputed image. Couldn’t be deeper in the code, glad I found it though -.-
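The consistency check above can be expressed directly; the numbers come from the `mrstats` output quoted earlier:

```python
# Sanity check of the numbers above: if mrmetric's CC(im1, im1) were
# correct, it should be minus the variance, i.e. -(stdev ** 2) with the
# stdev reported by mrstats.
stdev = 0.124254             # from the mrstats output above
expected_cc = -(stdev ** 2)
print(expected_cc)           # about -0.015439, not the -0.029309 mrmetric reported
```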


Thanks for the details, Max.
Please let us know when the code is fixed.

The question that remains: `mrstats` can help me with CC(im1, im1), but how can I get CC(im1, im2) without using `mrmetric`?

Hi Mahmoud,

If I were you, I’d use `mrdump` on both images (with the same `-mask`): that will dump the intensities of both in text files, which you can easily import in e.g. Matlab, R, or similar. Then it becomes easy to compute the metric you’re after. If you’re after more solid interpretation of the result, I’d also recommend you compute the normalised cross-correlation; i.e. so the result lies in a well-defined range of [-1,1]. “1” then means perfect correlation, zero means no correlation (at all), and “-1” means perfect anti-correlation.
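As a sketch of that workflow in plain Python, assuming each `mrdump` output file contains whitespace-separated intensities (the file names are hypothetical):

```python
# Sketch: compute the normalised cross-correlation from two mrdump text
# files (same mask for both, so the value lists pair up voxel by voxel).
import math

def load_values(path):
    """Read whitespace-separated floats from a dump file (name hypothetical)."""
    with open(path) as f:
        return [float(tok) for tok in f.read().split()]

def ncc(x, y):
    """Positive normalised cross-correlation in [-1, 1] (Pearson r)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

# x = load_values("im1_dump.txt")   # hypothetical file names
# y = load_values("im2_dump.txt")
# print(ncc(x, y))                  # 1 = perfect correlation, -1 = anti-correlation
```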

Cheers,
Thijs

Hi Thijs,

How can I use one mask for both image1 and image2 in case they are one perfectly overlapping?

Thanks,
Mahmoud

…not sure if it’s a typo, but I’m guessing you meant to say “in case they are not perfectly overlapping”?

If so, you need a set of matching voxels (and their intensities) of course, to compute the zero-shift CC (that’s positively what you’re after, right?). So both images need to be sliced / interpolated / represented on the same grid, and you need a single mask to represent the region you want to compute the CC for.

`mrmetric` does some of these things for you (see its options); for the masks, it looks like it’s simply going by the overlap (intersection), which is the only reasonable thing to do, as you might not have sensible intensities beyond either mask. So just compute the intersection mask, and use that as the single mask to give to `mrdump`; so you end up with 2 vectors of values of equal length, i.e. pairs of values for the pair of images.
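The pairing via an intersection mask can be sketched in plain Python (variable names are made up, and all lists are assumed to be already flattened onto the same voxel grid):

```python
# Sketch: restrict two co-registered images to their mask intersection,
# yielding the paired values needed for a zero-shift CC. All lists are
# assumed flattened onto the same grid (names are hypothetical).
def masked_pairs(vals1, vals2, mask1, mask2):
    return [(a, b)
            for a, b, m1, m2 in zip(vals1, vals2, mask1, mask2)
            if m1 and m2]

# Example: only voxels inside BOTH masks survive.
pairs = masked_pairs([1.0, 2.0, 3.0], [4.0, 5.0, 6.0],
                     [True, True, False], [False, True, True])
print(pairs)   # [(2.0, 5.0)]
```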

Or just wait until the PR is merged…

Edit: is now merged to master.

How can I use/get this updated code?

Thank you!

It is merged into the master branch so you should be able to get it as usual via git (see docs for details).


Sure, I wasn’t trying to address the bug fix, but rather…

…or otherwise encourage you to experiment with the metric (or other similarity metrics), so that its values and interpretation serve the purpose. Insight and experimentation are never a bad thing.

Just to clarify in case of a misunderstanding: I wrote “or” to indicate alternatives, pointing at the updated code.

Agreed, experiments are great if they serve a good purpose. I am all for replication (as opposed to duplication) and complementarity experiments especially if they lead to new insights or fix bugs!

Speaking of which, @zeydabadi, a caveat on the normalisation I posted above:

It assumes that all images are in the same space; if they are not, regrid all images (and masks) first to the average space as outlined above. If the masks’ content differs, just give both to each call of `mrmetric`.

Thank you all for useful discussion.
I’m going to use the updated code to calculate the `CC_norm = - CC(im1, im2)**2 / (CC(im1, im1) * CC(im2, im2))`

Just to make sure that I’m doing everything right: should I expect to get `CC_norm=1` if `im1=im2`?

There is a function in FSL called `fslcc` that gives me exactly 1.0 if im1=im2; however, I don’t know how that function works.

If `im1=im2`, `CC(im1, im2) = CC(im1, im1) = CC(im2, im2)`, hence `CC_norm` becomes by definition -1. This is the negative normalised cross-correlation, fslcc seems to calculate the positive normalised cross-correlation across all volumes in two images. For aligned images with one volume, I think its output should be equivalent to the negative of CC_norm (except for small differences due to interpolation).

The main purpose of `mrmetric` is to evaluate the dissimilarity of two images, taking the header transformation and optionally masks into account. Examples for two images a.mif and b.mif (b.mif is a.mif shifted by 0.5 mm in x, y and z):

```
$ cat r.txt
1 0 0 0.5
0 1 0 0.5
0 0 1 0.5
$ mrtransform a.mif b.mif -linear r.txt
# both images are identical in voxel-space:
$ mrcalc a.mif b.mif -sub - | mrstats -
# negative cross-correlation between a and b in scanner space:
$ mrmetric a.mif b.mif -metric cc -space average
# mean squared error in voxel space:
$ mrmetric a.mif b.mif
# mean squared error in scanner space:
$ mrmetric a.mif b.mif -space average

# normalised cross-correlation between a and b in scanner space (edit: fixed):
# create the average space header av.mif, then regrid both images onto it:
$ mraverageheader a.mif b.mif av.mif
$ mrtransform a.mif -template av.mif ah.mif -oversample 1 -interp linear
$ mrtransform b.mif -template av.mif bh.mif -oversample 1 -interp linear
$ CC_ab=`mrmetric ah.mif bh.mif -metric cc -space average`
$ CC_aa=`mrmetric ah.mif ah.mif -metric cc -space average`
$ CC_bb=`mrmetric bh.mif bh.mif -metric cc -space average`
$ mrcalc $CC_ab $CC_ab -mult $CC_aa $CC_bb -mult -div

# mean squared error in scanner space (as voxel grids are aligned):
$ mrcalc ah.mif bh.mif -sub 2 -pow - | mrstats - -output mean
```
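The arithmetic of the final `mrcalc` normalisation step can be mirrored in plain Python as a sign check; the three CC values below are made-up placeholders for the `mrmetric` outputs:

```python
# Sign check for the normalisation above: each CC output is negative
# (minus a covariance or variance), so the ratio CC_ab**2 / (CC_aa * CC_bb)
# is positive, and CC_norm as defined earlier is its negative.
# The three values are placeholders, not real mrmetric outputs.
cc_ab = -0.0100   # mrmetric ah.mif bh.mif -metric cc -space average
cc_aa = -0.0102   # mrmetric ah.mif ah.mif -metric cc -space average
cc_bb = -0.0101   # mrmetric bh.mif bh.mif -metric cc -space average

ratio = cc_ab ** 2 / (cc_aa * cc_bb)   # what the mrcalc line computes
cc_norm = -ratio                       # the normalised metric as defined above
print(ratio, cc_norm)
```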

Hi Max,

This is great! thank you so much for providing a detailed example.

A couple of questions:
1- This is very basic, but what’s the difference between voxel space and scanner space here? And where and when should I use each of them?

2-

```
# normalised cross-correlation between a and b in scanner space:
$ mrtransform a.mif -template av.mif ah.mif -oversample 1 -interp linear
$ mrtransform b.mif -template av.mif bh.mif -oversample 1 -interp linear
```

I was not aware that the above transformations were needed. Instead, I was just doing this:

```
$ CC_ab=`mrmetric a.mif b.mif -metric cc -space average`
$ CC_aa=`mrmetric a.mif a.mif -metric cc -space average`
$ CC_bb=`mrmetric b.mif b.mif -metric cc -space average`
$ mrcalc $CC_aa $CC_aa -mult $CC_ab $CC_aa -mult -div
```

Could you tell me what’s the difference?

Thank you!

Ok, I tried to replicate your example using one of my FA maps.
So here a.mif is a real FA map and the rest is exactly as your example.

The first run is with the updated `mrmetric` code:

```
mrtransform: [100%] copying from "a.mif" to "b.mif"...

mrcalc: [WARNING] header transformations of input images do not match
mrcalc: [100%] computing: (a.mif - b.mif)
volume       mean     median      stdev        min        max      count
[ 0 ]          0          0          0          0          0    3551862

mrmetric: [100%] precomputing cross correlation data...
-0.0062594590568325484

0

0.0003319023313259135

mrtransform: [100%] reslicing "a.mif"

mrtransform: [100%] reslicing "b.mif"

mrmetric: [100%] precomputing cross correlation data...
mrmetric: [100%] precomputing cross correlation data...
mrmetric: [100%] precomputing cross correlation data...
1.04853

mrcalc: [100%] computing: (ah.mif - bh.mif)^2
0.000328109
```

The second run is with the buggy `mrmetric` code:

```
mrtransform: [100%] copying from "a.mif" to "b.mif"...

mrcalc: [WARNING] header transformations of input images do not match
mrcalc: [100%] computing: (a.mif - b.mif)
volume       mean     median      stdev        min        max      count
[ 0 ]          0          0          0          0          0    3551862

mrmetric: [100%] precomputing cross correlation data...
-0.0011110985686968316

0

mrmetric: trafo:
1, 0, 0, 0
0, 1, 0, 0
0, 0, 1, 0
mrmetric: trafo_inverse:
1,  0,  0, -0
0,  1,  0, -0
0,  0,  1, -0
mrmetric: trafo_half:
1, 0, 0, 0
0, 1, 0, 0
0, 0, 1, 0
mrmetric: trafo_half_inverse:
1, 0, 0, 0
0, 1, 0, 0
0, 0, 1, 0
mrmetric: centre: 0 0 0
0.00033190233132592939

mrtransform: [100%] reslicing "a.mif"

mrtransform: [100%] reslicing "b.mif"

mrmetric: [100%] precomputing cross correlation data...
mrmetric: [100%] precomputing cross correlation data...
mrmetric: [100%] precomputing cross correlation data...
0.988539

mrcalc: [100%] computing: (ah.mif - bh.mif)^2
0.000328109
```

It seems that, by the definition of CC_norm, the buggy code gives a more reasonable output (0.988539) than the updated code (1.04853)?

It depends on the use case. Voxel coordinates are coordinates in the data array (without units); scanner coordinates refer to locations in the real world (in mm). If the image transformations are identical you could use either, but most voxel-wise operations across images only make sense if the images share the same voxel grid. I’d suggest you read our docs for how we define things, and consult a textbook for context.

`mrmetric -space average` operates in the average space; `mrtransform -template av.mif ...` regrids the images to the common average space so that you can perform voxel-space operations (both transformed images share the same reference frame). I did this so that the average space of `mrmetric` with identical images is the same as that of `mrmetric` with the two different images. This is only necessary for the normalisation.
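To make the voxel-space / scanner-space distinction concrete, here is a minimal sketch using the NIfTI-style convention in which a 4x4 affine maps voxel indices to scanner coordinates in mm (the matrix values are made up, mimicking the 0.5 mm shift from the example above):

```python
# Sketch: voxel coordinates are positions in the data array; scanner
# coordinates are real-world positions in mm, obtained via the header
# affine (NIfTI-style convention; matrix values are made up).
def voxel_to_scanner(affine, index):
    i, j, k = index
    return tuple(affine[r][0] * i + affine[r][1] * j +
                 affine[r][2] * k + affine[r][3] for r in range(3))

# 1 mm isotropic grid whose origin is shifted by 0.5 mm in x, y and z
# (like b.mif in the example above):
affine = [[1.0, 0.0, 0.0, 0.5],
          [0.0, 1.0, 0.0, 0.5],
          [0.0, 0.0, 1.0, 0.5],
          [0.0, 0.0, 0.0, 1.0]]
print(voxel_to_scanner(affine, (10, 20, 30)))   # (10.5, 20.5, 30.5)
```

Two images can hold identical arrays (identical in voxel space) yet occupy different scanner-space positions purely because their header affines differ, which is exactly the a.mif / b.mif situation above.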

My bad, I mixed up variable names in the above example. Should be fixed now. Don’t use the buggy CC metric.

1 Like