`dwidenoise` -- [SYSTEM FATAL CODE: SIGSEGV (11)] Segmentation fault: Invalid memory access

Hi Luis,

Sorry for the late reply. I’m almost certain that the segmentation fault you are experiencing relates to patch edge handling. This is addressed in #1525, merged to dev yesterday. Can you test if this issue persists with the current dev branch?

Cheers
Daan

Hi Daan,

Is there a way to install the dev branch without altering my current MRtrix version?

Best,

Luis

Hi,

I am sorry for “resurrecting” this thread. However, I have also encountered the same error when trying to use dwidenoise with my latest data. In general, I am running dwidenoise without a mask, i.e.

dwidenoise data.nii.gz data_dn.nii.gz -noise data_dn_noisemap.nii.gz

I have successfully used dwidenoise on the Penthera 3T data. However, if I apply it to my own data it crashes. I am running dwidenoise v3.0.1 with Eigen v3.3.7 on macOS Catalina 10.15.7.

In attempts to reconstruct the error I have tried the following so far:

  1. Create a brain mask using dwi2mask and use this mask for dwidenoise. This did not fix the issue.

  2. Disable multi-threading by including the option -nthreads 0 in the command. This did not fix the issue but slowed calculations down significantly.

  3. Reduce the window size using -extent 7 as an additional option. This did not fix the issue.

  4. Split my data set into subsets with fewer diffusion weightings, i.e. reduce the original 633 diffusion weightings down to 300 (dwidenoise still crashed, now at 70% instead of 20%) and 200. With the 200-weighting subset, I was able to run dwidenoise successfully without a mask or disabling multi-threading.

To me it seems like dwidenoise runs into memory issues if the input data has too many diffusion weightings. I would appreciate some input on this issue so I can set up my pre-processing pipeline accordingly.

Best regards,
Jan

Hi,

It seems I spoke too soon. Upon returning to the “reduced” data sets, dwidenoise does not consistently finish: sometimes it crashes and sometimes it does not. I am not sure what causes the error.

Best regards,
Jan

Welcome Jan!

The fact that it’s crashing with a large number of volumes, and crashes later with a reduced number of volumes, hints at a memory leak. I’ve just run a test dataset through dwidenoise under valgrind, and it reports no such leaks upon command completion. But when I ran dwidenoise and monitored its memory usage with top, its allocation does seem to increase progressively over time, which shouldn’t be necessary for the operation of the command. So something internal to dwidenoise appears to be requesting new memory rather than re-using existing memory, such that total memory usage keeps growing as it runs. I had a quick go at modifying the dwidenoise code to prevent memory re-allocations, but I still see the progressively increasing memory usage, in which case it may well be an upstream bug in Eigen.
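For anyone wanting to reproduce this kind of check without staring at top, here is a minimal sketch that polls a process’s resident set size once per second until it exits (monitor_rss is a hypothetical helper name, not part of MRtrix; RSS values are as reported by ps, in kilobytes):

```shell
# Poll a process's resident set size (RSS) until it exits,
# as a scriptable alternative to watching `top`.
monitor_rss() {
  local pid=$1
  while kill -0 "$pid" 2>/dev/null; do
    ps -o rss= -p "$pid"   # RSS in kilobytes, one sample per line
    sleep 1
  done
}
```

Usage would be along the lines of `dwidenoise data.nii.gz out.nii.gz & monitor_rss $!` — a steadily growing column of numbers is what points at unbounded allocation.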

If you’re able to process your data on a system with more RAM, that would get you by for now. In parallel I’ll poke and prod @dchristiaens to look into why it is that his command is gobbling up RAM.

Cheers
Rob

Hi Jan and Rob,

@rsmith, you’re right to poke and prod me. I’ve been ill for a bit and didn’t follow this one up properly. I’ll have a look, but based on what you found it looks like it may indeed be an upstream bug in Eigen. I haven’t tested 3.3.7 yet; perhaps that has something to do with it?

@jan, can you confirm if you indeed run out of memory? How much RAM do you have available?

Cheers

Daan

I had a brief look into the possibility of memory leaks this morning (see discussion on GitHub for details), but I don’t think that’s the problem here.

@jan, to help us narrow down the issue, could you report:

  • the output of mrinfo data.nii.gz
  • whether the issue also occurs when you don’t use the -noise option

Thanks!

@jan: sorry, a few more things while you’re at it:

  • can you run the command with the -debug option and post the entirety of the output here – including the full command itself?
  • where are the input & output data stored? Things can be handled differently on local vs. network drives, this may come into it.

Thanks,
Donald.

@jdtournier

Here is the second image - dwidenoise with -debug enabled (without -noise):

Please excuse my late reply - the ISMRM deadline rears its ugly head again.

@jdtournier Here are the answers to your questions:

  • The output of mrinfo:

  • The issue also occurs if I am not using the -noise option (see below)

  • The data is stored on my local hard drive.

@dchristiaens I am not sure how to confirm that I am actually running out of memory. Besides the error from MRtrix, I get no other notification. The computer I am working on has 16 GB of RAM available.

The data has some characteristics that might also influence dwidenoise:

  1. It was acquired using a FLASH sequence modified for diffusion encoding. Therefore the header does not contain diffusion encoding information. However, I have tried formatting the data “properly” with mrconvert, adding diffusion information manually, but dwidenoise still crashed.

  2. The data was acquired over a range of different TR and TE values. In the meantime I have modified the pipeline to split the data set into consistent subsets of singular TR and TE values. However, I am unsure why this would lead to a crash of dwidenoise.

  3. The data was acquired with only five slices. Not sure if this could lead to a crash considering the sliding window size of 5x5x5 or higher (all window sizes crash).

Thanks for looking into this!
Jan

OK, I’m pretty sure that’s the issue… You’ll note the default window for your data set is 9×9×9, as stated in the -debug output of dwidenoise:

dwidenoise: [INFO] select default patch size 9 x 9 x 9.
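That default follows from picking the smallest odd patch width whose voxel count is at least the number of volumes (so 5×5×5 covers up to 125 volumes, 7×7×7 up to 343, and so on). A rough sketch of that rule — an assumption based on the documented behaviour, not the actual dwidenoise source:

```shell
# Smallest odd patch width p such that p^3 >= number of volumes,
# mirroring the documented default extent selection of dwidenoise.
# (default_extent is a hypothetical helper name for illustration.)
default_extent() {
  local nvols=$1 p=1
  while [ $((p * p * p)) -lt "$nvols" ]; do
    p=$((p + 2))   # step through odd widths: 1, 3, 5, 7, 9, ...
  done
  echo "$p"
}

default_extent 633   # Jan's 633 weightings: 7^3 = 343 < 633, so 9
```

With 633 volumes the default lands on 9×9×9, which cannot fit within a 5-slice acquisition.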

To verify, I’ve created an artificial dataset consisting of 5 slices of 4 concatenations of one of my existing datasets:

$ mrinfo dwi_x4_5slices.mif 
************************************************
Image name:          "dwi_x4_5slices.mif"
************************************************
  Dimensions:        96 x 96 x 5 x 452
  Voxel size:        2.5 x 2.5 x 2.5 x ?
  Data strides:      [ -1 -3 -4 2 ]
...

If I try to process this with dwidenoise, I get precisely the same fault as you do:

$ dwidenoise dwi_x4_5slices.mif out.mif
dwidenoise: [100%] preloading data for "dwi_x4_5slices.mif"
dwidenoise: [ 20%] running MP-PCA denoising...
dwidenoise: [SYSTEM FATAL CODE: SIGSEGV (11)] Segmentation fault: Invalid memory access

But it completes fine if I explicitly set the extent to 5 (the maximum supported by your data):

$ dwidenoise dwi_x4_5slices.mif out.mif -extent 5
dwidenoise: [100%] preloading data for "dwi_x4_5slices.mif"
dwidenoise: [100%] running MP-PCA denoising

For completeness, it also runs fine if I specify a non-isotropic patch size, as long as it fits within your 5 slices, e.g.:

$ dwidenoise dwi_x4_5slices.mif out.mif -extent 9,9,5 
dwidenoise: [100%] preloading data for "dwi_x4_5slices.mif"
dwidenoise: [100%] running MP-PCA denoising

So I reckon that solves that mystery. I guess we should add a few checks to detect this for the next release, @dchristiaens?

Good catch! Yes, I’ll fix this for the next release. @jan, you can manually limit the -extent for now, as in Donald’s example.
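Until that check lands, the manual workaround can be scripted by clamping the requested extent to the image dimensions. A hedged sketch (safe_extent is a hypothetical helper; in practice the dimensions would come from something like `mrinfo -size data.nii.gz`):

```shell
# Clamp a requested isotropic patch extent to the spatial dimensions
# of the image, keeping the result odd, so the patch never exceeds
# the number of slices. (safe_extent is illustrative, not an MRtrix tool.)
safe_extent() {
  local want=$1; shift
  local e=$want d
  for d in "$@"; do           # remaining args: the three spatial dims
    if [ "$d" -lt "$e" ]; then
      e=$d
      [ $((e % 2)) -eq 0 ] && e=$((e - 1))   # keep the extent odd
    fi
  done
  echo "$e"
}

safe_extent 9 96 96 5   # Jan's 5-slice data: clamps 9 down to 5
```

The result could then be passed straight to `dwidenoise ... -extent "$(safe_extent 9 <dims>)"`.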

Hi,

Thanks for taking the time to look into this. I’ll make sure to use a smaller patch size for my corrections.

Kindly,
Jan