Sorry for the late reply. I’m almost certain that the segmentation fault you are experiencing relates to patch edge handling. This is addressed in #1525, merged to dev yesterday. Can you test if this issue persists with the current dev branch?
I am sorry for “resurrecting” this thread, but I have encountered the same error when trying to run dwidenoise on my latest data. In general, I run dwidenoise without a mask, i.e.
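A sketch of the invocation (the filenames here are placeholders, not the actual ones from my pipeline):

```shell
# placeholder filenames; the -noise output map is optional
dwidenoise dwi.nii.gz dwi_den.nii.gz -noise noise.nii.gz
```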
I have successfully used dwidenoise on the Penthera 3T data. However, if I apply it to my own data it crashes. I am running dwidenoise v3.0.1 with Eigen v3.3.7 on macOS Catalina 10.15.7.
In an attempt to reproduce the error I have tried the following so far:
Create a brain mask using dwi2mask and use this mask for dwidenoise. This did not fix the issue.
Disable multi-threading by including the option -nthreads 0 in the command. This did not fix the issue but slowed calculations down significantly.
Reduce the window size using -extent 7 as an additional option. This did not fix the issue.
Split my data set into subsets with fewer diffusion weightings, i.e. reduce the original 633 diffusion weightings down to 300 (dwidenoise still crashed, now at 70% instead of 20%) and then to 200. With 200 weightings, I was able to run dwidenoise successfully without a mask and without disabling multi-threading.
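In command form, the attempts above were roughly as follows (filenames are hypothetical):

```shell
# hypothetical filenames; each step sketches one of the attempts above
dwi2mask dwi.mif mask.mif                        # 1. create a brain mask ...
dwidenoise dwi.mif out.mif -mask mask.mif        #    ... and denoise within it
dwidenoise dwi.mif out.mif -nthreads 0           # 2. disable multi-threading
dwidenoise dwi.mif out.mif -extent 7             # 3. smaller sliding window
mrconvert dwi.mif dwi_sub.mif -coord 3 0:299     # 4. keep the first 300 volumes
dwidenoise dwi_sub.mif out_sub.mif
```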
To me it seems like dwidenoise runs into memory issues if the input data has too many diffusion weightings. I would appreciate some input on this issue so that I can set up my pre-processing pipeline accordingly.
It seems I spoke too soon. Upon returning to the “reduced” data sets, dwidenoise does not consistently finish: sometimes it crashes and sometimes it does not. I am not sure what triggers the error.
The fact that it’s crashing with a large number of volumes, and crashes later with a reduced number of volumes, hints at a memory leak. I’ve just run a test dataset through dwidenoise with valgrind and it reports that there are no such leaks upon command completion. But I also ran dwidenoise and monitored its memory usage with top, and it does seem that its memory allocation is progressively increasing over time, which is not something that should be necessary for the operation of the command. So that suggests that there is something internally within dwidenoise that is requesting new memory rather than re-using existing memory, such that the total memory usage keeps increasing as it runs. I had a quick go at modifying the dwidenoise code to prevent memory re-allocations, but I still see that progressively increasing memory usage, in which case it may well be an upstream bug in Eigen.
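For anyone wanting to repeat the monitoring step, a minimal sketch of sampling a process's resident memory over time; here `sleep 3` stands in for the long-running dwidenoise call (substitute the real command):

```shell
# Sample the resident set size (RSS) of a process once per second until it
# exits. `sleep 3` is a stand-in for the actual dwidenoise invocation.
sleep 3 &
pid=$!
samples=0
while kill -0 "$pid" 2>/dev/null; do
    rss=$(ps -o rss= -p "$pid" | tr -d ' ')
    echo "RSS: ${rss:-0} kB"
    samples=$((samples + 1))
    sleep 1
done
echo "collected $samples samples"
```

If the reported RSS keeps climbing over the run, that matches the progressive allocation described above.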
If you’re able to process your data on a system with more RAM, that would get you by for now. In parallel I’ll poke and prod @dchristiaens to look into why it is that his command is gobbling up RAM.
@rsmith, you’re right to poke and prod me. I’ve been ill for a bit and didn’t follow this one up properly. I’ll have a look, but based on what you found it looks like it may indeed be an upstream bug in Eigen. I haven’t tested 3.3.7 yet; perhaps that has something to do with it?
@jan, can you confirm whether you are indeed running out of memory? How much RAM do you have available?
I had a brief look into the possibility of memory leaks this morning (see discussion on GitHub for details), but I don’t think that’s the problem here.
@jan, to help us narrow down the issue, could you report:
the output of mrinfo data.nii.gz
whether the issue also occurs when you don’t use the -noise option
The issue also occurs if I am not using the -noise option (see below)
The data is stored on my local hard drive.
@dchristiaens I am not sure how to actually confirm that I am running out of memory. Besides the error in mrtrix I get no other notification. The computer I am working on has 16 GB of RAM available.
The data has some characteristics that might also influence dwidenoise:
It was acquired using a FLASH sequence modified for diffusion encoding. Therefore the header does not contain diffusion encoding information. However, I have tried formatting the data “properly” with mrconvert, adding diffusion information manually, but dwidenoise still crashed.
The data was acquired over a range of different TR and TE values. I have since modified the pipeline to split the data set into consistent subsets, each with a single TR and TE value. However, I am unsure why this would cause dwidenoise to crash.
The data was acquired with only five slices. I am not sure whether this could cause a crash, given the sliding window size of 5×5×5 or larger (all window sizes crash).
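For reference, the manual gradient import mentioned above can be done with mrconvert; a sketch with hypothetical filenames:

```shell
# hypothetical filenames; -fslgrad attaches FSL-style bvecs/bvals to the
# image header (alternatively, -grad takes an MRtrix-format gradient table)
mrconvert dwi.nii.gz dwi.mif -fslgrad dwi.bvec dwi.bval
```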
OK, I’m pretty sure that’s the issue… You’ll note the default window for your data set is 9×9×9, as stated in the -debug output of dwidenoise:
dwidenoise: [INFO] select default patch size 9 x 9 x 9.
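If I remember the selection rule correctly, the default extent is the smallest odd number whose cube is at least the number of input volumes, so that each patch contains more voxels than there are DWI volumes. A quick sanity check for the 633 volumes in this thread:

```shell
# sketch of the assumed rule: smallest odd patch extent p with p^3 >= nvols
nvols=633
p=1
while [ $((p * p * p)) -lt "$nvols" ]; do
    p=$((p + 2))
done
echo "default extent: $p"    # 9^3 = 729 >= 633
```

With a patch that large and only five slices in the data, the window cannot fit inside the image along the slice axis.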
To verify, I’ve created an artificial dataset consisting of 5 slices of 4 concatenations of one of my existing datasets:
$ mrinfo dwi_x4_5slices.mif
************************************************
Image name: "dwi_x4_5slices.mif"
************************************************
Dimensions: 96 x 96 x 5 x 452
Voxel size: 2.5 x 2.5 x 2.5 x ?
Data strides: [ -1 -3 -4 2 ]
...
If I try to process this with dwidenoise, I get precisely the same fault as you do: