Distortion correction using T1

For reference, I have already downloaded Synb0-DISCO-master into my Applications folder.

I also have Docker installed, but I am not very familiar with using it.

Does it have something to do with me not sourcing, or not correctly sourcing, the Docker image? Is that what the error means? Should something point to the Docker image justinblaber/synbo_25iso?

By default the user id and group id are 501 and 20. If you used this command exactly, then it looks like you might be missing a dash for the --user argument. Could you change "-user" to "--user"? For some reason it doesn't know where to look for the image.


Thank you!

I changed the command to …

sudo docker run --rm \
    -v /Volumes/DANIEL/EPC/analysis/EPC001.post/input:/INPUTS/ \
    -v /Volumes/DANIEL/EPC/analysis/EPC001.post/output:/OUTPUTS/ \
    -v /Applications/freesurfer_dev/license.txt:/extra/freesurfer/license.txt \
    --user $(id -u):$(id -g) \
    justinblaber/synbo_25iso

This resulted in the error…

Unable to find image 'justinblaber/synbo_25iso:latest' locally
docker: Error response from daemon: pull access denied for justinblaber/synbo_25iso, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.

I then changed the image name to justinblaber/synb0_25iso:latest …

sudo docker run --rm \
    -v /Volumes/DANIEL/EPC/analysis/EPC001.post/input:/INPUTS/ \
    -v /Volumes/DANIEL/EPC/analysis/EPC001.post/output:/OUTPUTS/ \
    -v /Applications/freesurfer/license.txt:/extra/freesurfer/license.txt \
    --user $(id -u):$(id -g) \
    justinblaber/synb0_25iso:latest

This seems to work; however, there does appear to be an issue with the FreeSurfer license.txt…

The path /Applications/freesurfer/license.txt
is not shared from OS X and is not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
ERRO[0000] error waiting for container: context canceled

But for people who find this thread in the future: you just need to add /Applications/ to the File Sharing section of Docker's preferences.

Thanks for all your help! I look forward to trying it out.


Very keen to see a result! I’ve got a few use cases where this would be very welcome, and of high impact.

Thanks for chipping in and providing some documentation / help @schilkg1; this will be very useful to many!

Cheers,
Thijs

I will be sure to share some results as I get them.

Unfortunately @schilkg1, even though it seems like I have everything up and running fine, there appears to be some type of Docker error on every one of the three subjects I have tried so far. I appear to be getting the error around the epi_reg (BBR) and FAST segmentation portion.

See below the first error, which then propagates throughout the rest of the script.

Removing job directory...
-------
Skull stripping T1
bet /INPUTS/T1.nii.gz /tmp/tmp.9D4x42aZhk/T1_mask.nii.gz -R
-------
epi_reg distorted b0 to T1
epi_reg --epi=/INPUTS/b0.nii.gz --t1=/INPUTS/T1.nii.gz --t1brain=/tmp/tmp.9D4x42aZhk/T1_mask.nii.gz --out=/tmp/tmp.9D4x42aZhk/epi_reg_d
Running FAST segmentation
/extra/fsl/bin/epi_reg: line 320:  1243 Killed                  $FSLDIR/bin/fast -o ${vout}_fast ${vrefbrain}
Image Exception : #63 :: No image files match: /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_pve_2
terminate called after throwing an instance of 'std::runtime_error'
  what():  No image files match: /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_pve_2
/extra/fsl/bin/epi_reg: line 320:  1244 Aborted                 $FSLDIR/bin/fslmaths ${vout}_fast_pve_2 -thr 0.5 -bin ${vout}_fast_wmseg
Image Exception : #63 :: No image files match: /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_wmseg
terminate called after throwing an instance of 'std::runtime_error'
  what():  No image files match: /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_wmseg
/extra/fsl/bin/epi_reg: line 329:  1269 Aborted                 $FSLDIR/bin/fslmaths ${vout}_fast_wmseg -edge -bin -mas ${vout}_fast_wmseg ${vout}_fast_wmedge
FLIRT pre-alignment
Running BBR
Image Exception : #63 :: No image files match: /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_wmseg
Image Exception : #22 :: Failed to read volume /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_wmseg
Error : No image files match: /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_wmseg
Failed to read volume /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_wmseg
Error : No image files match: /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_wmseg
Could not open matrix file /tmp/tmp.9D4x42aZhk/epi_reg_d.mat

Hi @CallowBrainProject. It is surprising that there are failures once the image is running!

This could possibly be one of three things:

First, it could be a RAM limitation. What system are you running on? If it is a Mac, we've found that Docker by default allows very little memory, and we recommend at least 8 GB. You can change the Docker settings to allow more memory.
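As a quick sanity check (this is just a generic Docker command, not something specific to our pipeline), you can see how much memory Docker actually has available with something like:

# Print Docker's total available memory; on Docker Desktop for Mac this
# reflects the VM limit set in Preferences -> Resources.
sudo docker info | grep -i "total memory"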

Second, as you've brought up, the most common issue we've had (again with Docker on Mac systems) is binding the license.txt path into the image. It sounds like you've found one solution (manually sharing the path in Docker's File Sharing settings). Two other solutions are [1] to literally just copy the txt file to the current directory and use "$(pwd)/license.txt:/extra/freesurfer/license.txt", or [2] use another OS! We realize this isn't always possible, but we've found running Docker on a Mac to be more of a headache than its simplicity on other systems. We again apologize for even including the one FreeSurfer command and hope to eliminate the need for this license file in the future!

Finally, it could have literally just failed! Our first few steps are skull stripping (bet), registration (flirt), and segmentation (FAST), which are pretty robust, but it is possible that one failed. If these are pipeline and analysis issues (rather than Docker issues and syntax), then we can try debugging on our end (if you are allowed to share an example dataset) - although I suspect these are Docker memory issues.
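To make workaround [1] concrete, here is a minimal sketch using the same input/output paths as the command earlier in this thread (only the license mount changes; adjust paths to your own setup):

# copy the license next to wherever you launch docker from,
# then bind-mount it from the current directory
cp /Applications/freesurfer/license.txt .

sudo docker run --rm \
    -v /Volumes/DANIEL/EPC/analysis/EPC001.post/input:/INPUTS/ \
    -v /Volumes/DANIEL/EPC/analysis/EPC001.post/output:/OUTPUTS/ \
    -v $(pwd)/license.txt:/extra/freesurfer/license.txt \
    --user $(id -u):$(id -g) \
    justinblaber/synb0_25iso:latest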

Thanks for the suggestions. You were correct that my memory was very low. I changed the following parameters.
CPUs = 6
Memory = 9 GB
Swap = 1 GB
Disk image size = 59 GB

However, I still get the issue later in the script if I have two Docker containers running at one time. I assume this may be due to memory limits or something.

I wonder: is there a way for me to put a for loop inside the docker command so that I can run this on all my subjects without having to come back after each subject finishes? I can't seem to run multiple instances without running into errors.

Maybe change

sudo docker run --rm \
    -v /Volumes/DANIEL/EPC/analysis/EPC001.pre/input:/INPUTS/ \
    -v /Volumes/DANIEL/EPC/analysis/ECP001.pre_correct:/OUTPUTS/ \
    -v /Applications/freesurfer/license.txt:/extra/freesurfer/license.txt \
    --user $(id -u):$(id -g) \
    justinblaber/synb0_25iso:latest

to something like …?

sudo docker run --rm
foreach subj (EPC001 EPC002 EPC003)
foreach cond (pre post)
    -v /Volumes/DANIEL/EPC/analysis/${subj}.${cond}/input:/INPUTS/ \
    -v /Volumes/DANIEL/EPC/analysis/${subj}.${cond}_correct:/OUTPUTS/ \
    -v /Applications/freesurfer/license.txt:/extra/freesurfer/license.txt \
    --user $(id -u):$(id -g) \
    justinblaber/synb0_25iso:latest
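Or, put differently, maybe the loop should wrap the whole docker run from the shell, running one container at a time. A rough, untested bash sketch of what I mean (subject and condition names as above):

for subj in EPC001 EPC002 EPC003; do
    for cond in pre post; do
        # run one container at a time so they don't compete for memory
        sudo docker run --rm \
            -v /Volumes/DANIEL/EPC/analysis/${subj}.${cond}/input:/INPUTS/ \
            -v /Volumes/DANIEL/EPC/analysis/${subj}.${cond}_correct:/OUTPUTS/ \
            -v /Applications/freesurfer/license.txt:/extra/freesurfer/license.txt \
            --user $(id -u):$(id -g) \
            justinblaber/synb0_25iso:latest
    done
done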

Any suggestions? I will also update if the script running on its own doesn't work.

Thanks,
Daniel

Thanks for the info - this is good to know. My guess is that any errors you see from now on are simply memory issues. Note that the most memory-intensive step will likely be loading and applying the network weights (done five times, once for each fold of the 5-fold cross-validation), while the most time-intensive step is applying topup.

Unfortunately I'm not familiar with smart ways of running multiple Docker instances at once on the same machine (we kick off many processes on clusters with many machines). We are slowly BIDS-ifying our pipelines, but this is taking quite some time - for now it is one process per subject.

Hope that is helpful in some way!
Kurt

Thank you for the help! For reference, the above specifications got me to the 5-fold cross-validation, at which point it threw errors. I will try to bump the memory up a bit (to 10.5 GB) and see if things run smoothly.

And I understand! Thanks for being willing to share the tool!

Hello,

I just wanted to update you and say that the container continues to fail at the fold (inference) portion of the script, despite the following resources allotted to Docker:

CPUs = 6
Memory = 16 GB
Swap = 1.5 GB
Disk image size = 104 GB

What specifications do you use @schilkg1? I would have thought this would be enough.

See the following error.
Copying results to results path...

Removing job directory...
Performing inference on FOLD: 1
/extra/pipeline.sh: line 27:  1712 Killed    python3.6 /extra/inference.py /OUTPUTS/T1_norm_lin_atlas_2_5.nii.gz /OUTPUTS/b0_d_lin_atlas_2_5.nii.gz /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_"$i".nii.gz /extra/dual_channel_unet/num_fold_"$i"_total_folds_"$NUM_FOLDS"_seed_1_num_epochs_100_lr_0.0001_betas_(0.9,\ 0.999)_weight_decay_1e-05_num_epoch.pth
Performing inference on FOLD: 2
Performing inference on FOLD: 3
/extra/pipeline.sh: line 27:  1855 Killed    python3.6 /extra/inference.py /OUTPUTS/T1_norm_lin_atlas_2_5.nii.gz /OUTPUTS/b0_d_lin_atlas_2_5.nii.gz /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_"$i".nii.gz /extra/dual_channel_unet/num_fold_"$i"_total_folds_"$NUM_FOLDS"_seed_1_num_epochs_100_lr_0.0001_betas_(0.9,\ 0.999)_weight_decay_1e-05_num_epoch.pth
Performing inference on FOLD: 4
/extra/pipeline.sh: line 27:  1998 Killed    python3.6 /extra/inference.py /OUTPUTS/T1_norm_lin_atlas_2_5.nii.gz /OUTPUTS/b0_d_lin_atlas_2_5.nii.gz /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_"$i".nii.gz /extra/dual_channel_unet/num_fold_"$i"_total_folds_"$NUM_FOLDS"_seed_1_num_epochs_100_lr_0.0001_betas_(0.9,\ 0.999)_weight_decay_1e-05_num_epoch.pth
Performing inference on FOLD: 5
/extra/pipeline.sh: line 27:  2141 Killed    python3.6 /extra/inference.py /OUTPUTS/T1_norm_lin_atlas_2_5.nii.gz /OUTPUTS/b0_d_lin_atlas_2_5.nii.gz /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_"$i".nii.gz /extra/dual_channel_unet/num_fold_"$i"_total_folds_"$NUM_FOLDS"_seed_1_num_epochs_100_lr_0.0001_betas_(0.9,\ 0.999)_weight_decay_1e-05_num_epoch.pth
/extra/pipeline.sh: line 27:  2284 Killed    python3.6 /extra/inference.py /OUTPUTS/T1_norm_lin_atlas_2_5.nii.gz /OUTPUTS/b0_d_lin_atlas_2_5.nii.gz /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_"$i".nii.gz /extra/dual_channel_unet/num_fold_"$i"_total_folds_"$NUM_FOLDS"_seed_1_num_epochs_100_lr_0.0001_betas_(0.9,\ 0.999)_weight_decay_1e-05_num_epoch.pth
Taking ensemble average
Image Exception : #63 :: No image files match: /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_

That should be more than enough. My Mac looks to be set at 6 CPUs, 12 GB RAM, 2 GB swap, and a 30 GB disk image, and it has successfully run this.

Our Linux machines are where we primarily run this on large datasets and have significantly more space than this.

Quick note - I feel bad taking up MRtrix3 discussion boards with external issues - feel free to email me separately to debug. Although this certainly appears to be a memory issue.

Hey, I got everything working with Docker and ran Synb0-DISCO to perform topup.

Now I am trying to run eddy with the following eddy command.

To get the parameters for the eddy command, I am running mrconvert…
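Roughly something like this (a sketch; dwi.mif is just a placeholder name for my converted series, and it assumes the gradient table and phase-encoding information are present in the image header):

# convert to NIfTI while exporting the FSL-style gradient table
# and the eddy config/index files
mrconvert dwi.mif eddy.nii.gz \
    -export_grad_fsl bvecs bvals \
    -export_pe_eddy config.txt indices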

Then I am running the following…

eddy --imain=eddy.nii.gz --mask=path/to/brainmask.nii.gz –-acqp=config.txt --index=indices –-bvecs=bvecs --bvals=bvals –-topup=/Volumes/DANIEL/EPC/analysis/${subj}.${cond}_correct/b0_all_topup.nii.gz --out=eddy_unwarp.nii.gz

But I am getting the following error.

–-acqp=config.txt:  is an unrecognised token!

The config.txt file has the following…

0 -1 0 0.112

Any ideas why this eddy command isn’t working?

Sorry, silly mistake. It turns out that when I typed --, it was being autocorrected to a different symbol (an en-dash, –).
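For anyone who finds this later, the same command with plain double hyphens (arguments unchanged from the post above; the brain-mask path is still a placeholder):

eddy --imain=eddy.nii.gz \
     --mask=path/to/brainmask.nii.gz \
     --acqp=config.txt \
     --index=indices \
     --bvecs=bvecs \
     --bvals=bvals \
     --topup=/Volumes/DANIEL/EPC/analysis/${subj}.${cond}_correct/b0_all_topup.nii.gz \
     --out=eddy_unwarp.nii.gz
     # note: --topup normally expects the basename passed to topup's --out
     # (the files ending in _fieldcoef.nii.gz / _movpar.txt), so the value
     # above may need adjusting for your topup outputs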