Distortion correction using T1

For reference, I have already downloaded the Synb0-DISCO-master folder into my Applications folder.

I also have Docker, but am not super familiar with using it.

Does it have something to do with me not correctly referencing the Docker image? Is that what the error means? Should something point to the image, justinblaber/synbo_25iso?

By default the user id and group id are 501 and 20. If you used this command exactly, then it looks like you might be missing a dash for the --user argument. Could you change "-user" to "--user"? For some reason it doesn't know where to look for the image.


Thank you!

I changed the command to …

sudo docker run --rm \
-v /Volumes/DANIEL/EPC/analysis/EPC001.post/input:/INPUTS/ \
-v /Volumes/DANIEL/EPC/analysis/EPC001.post/output:/OUTPUTS/ \
-v /Applications/freesurfer_dev/license.txt:/extra/freesurfer/license.txt \
--user $(id -u):$(id -g) \
justinblaber/synbo_25iso

This resulted in the error…

Unable to find image 'justinblaber/synbo_25iso:latest' locally
docker: Error response from daemon: pull access denied for justinblaber/synbo_25iso, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.

I changed the image name to justinblaber/synb0_25iso:latest …

sudo docker run --rm \
-v /Volumes/DANIEL/EPC/analysis/EPC001.post/input:/INPUTS/ \
-v /Volumes/DANIEL/EPC/analysis/EPC001.post/output:/OUTPUTS/ \
-v /Applications/freesurfer/license.txt:/extra/freesurfer/license.txt \
--user $(id -u):$(id -g) \
justinblaber/synb0_25iso:latest

It seems to work; however, there does seem to be an issue with the FreeSurfer license.txt…

The path /Applications/freesurfer/license.txt
is not shared from OS X and is not known to Docker.
You can configure shared paths from Docker -> Preferences… -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
ERRO[0000] error waiting for container: context canceled

But for people who find this thread in the future: you just need to share /Applications/ in the File Sharing section of Docker's preferences.

Thanks for all your help! I look forward to trying it out.


Very keen to see a result! I’ve got a few use cases where this would be very welcome, and of high impact.

Thanks for chipping in and providing some documentation / help @schilkg1; this will be very useful to many!

Cheers,
Thijs

I will be sure to share some results as I get them.

Unfortunately @schilkg1, even though it seems like I have everything up and running fine, there appears to be some type of Docker error when running on every one of the three subjects I have tried so far. I appear to be getting the error around the bbregister and FAST segmentation portion.

See the first error here; it then propagates throughout the rest of the script.

Removing job directory...
-------
Skull stripping T1
bet /INPUTS/T1.nii.gz /tmp/tmp.9D4x42aZhk/T1_mask.nii.gz -R
-------
epi_reg distorted b0 to T1
epi_reg --epi=/INPUTS/b0.nii.gz --t1=/INPUTS/T1.nii.gz --t1brain=/tmp/tmp.9D4x42aZhk/T1_mask.nii.gz --out=/tmp/tmp.9D4x42aZhk/epi_reg_d
Running FAST segmentation
/extra/fsl/bin/epi_reg: line 320:  1243 Killed                  $FSLDIR/bin/fast -o ${vout}_fast ${vrefbrain}
Image Exception : #63 :: No image files match: /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_pve_2
terminate called after throwing an instance of 'std::runtime_error'
  what():  No image files match: /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_pve_2
/extra/fsl/bin/epi_reg: line 320:  1244 Aborted                 $FSLDIR/bin/fslmaths ${vout}_fast_pve_2 -thr 0.5 -bin ${vout}_fast_wmseg
Image Exception : #63 :: No image files match: /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_wmseg
terminate called after throwing an instance of 'std::runtime_error'
  what():  No image files match: /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_wmseg
/extra/fsl/bin/epi_reg: line 329:  1269 Aborted                 $FSLDIR/bin/fslmaths ${vout}_fast_wmseg -edge -bin -mas ${vout}_fast_wmseg ${vout}_fast_wmedge
FLIRT pre-alignment
Running BBR
Image Exception : #63 :: No image files match: /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_wmseg
Image Exception : #22 :: Failed to read volume /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_wmseg
Error : No image files match: /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_wmseg
Failed to read volume /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_wmseg
Error : No image files match: /tmp/tmp.9D4x42aZhk/epi_reg_d_fast_wmseg
Could not open matrix file /tmp/tmp.9D4x42aZhk/epi_reg_d.mat

Hi @CallowBrainProject. Once the image is running, it is surprising that there are failures!

This could possibly be one of three things:

First, it could be a RAM limitation. What system are you running on? If it is a Mac, we've found that Docker by default allows very little memory, and we recommend >8 GB at a minimum. You can change these Docker settings to allow more memory.
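As a quick sanity check, you can ask the Docker daemon how much memory it actually has available (the output is in bytes and reflects whatever is set in Docker Desktop's preferences):

docker info --format '{{.MemTotal}}'

If this reports something close to the old Docker-for-Mac default (around 2 GB), that alone would explain processes being killed.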

Second, as you've brought up, the most common issue we've had (again with Docker on Mac systems) is binding the license.txt path into the image. It sounds like you've found one solution (manually sharing the path in Docker). Two other solutions are [1] to literally just copy the txt file to the current directory and use "$(pwd)/license.txt:/extra/freesurfer/license.txt", or [2] use another OS! We realize this isn't always possible, but we've found running Docker on a Mac to be more of a headache than its simplicity on other systems. We again apologize for even including the one FreeSurfer command and hope to eliminate the need for this license file in the future!
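As a concrete sketch of option [1] (the input/output paths below are placeholders; substitute your own):

cp /Applications/freesurfer/license.txt .
sudo docker run --rm \
-v /path/to/input:/INPUTS/ \
-v /path/to/output:/OUTPUTS/ \
-v $(pwd)/license.txt:/extra/freesurfer/license.txt \
--user $(id -u):$(id -g) \
justinblaber/synb0_25iso:latest

Because the license file now lives in your working directory, it gets shared as long as that directory sits under one of Docker's default shared paths (e.g. /Users), with no extra File Sharing configuration.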

Finally, it could have literally just failed! Our first few steps are skull stripping (bet), registration (flirt), and segmentation (FAST), which are pretty robust, but it is possible that one failed. If these are pipeline and analysis issues (rather than Docker issues and syntax), then we can try debugging on our end (if you are allowed to share an example dataset), although I suspect these are Docker memory issues.

Thanks for the suggestions. You were correct that my memory allocation was very low. I changed it to the following parameters:
CPUs = 6
Memory = 9 GB
Swap = 1 GB
Disk image size = 59 GB

However, I still get the issue later in the script if I have two Docker containers running at one time. I assume this may be due to memory limits or something.

I wonder, is there a way for me to put a for loop around the docker command so that I can run this on all my subjects without having to go in after each subject finishes, since I can't seem to run multiple instances without running into errors?

Maybe change

sudo docker run --rm \
-v /Volumes/DANIEL/EPC/analysis/EPC001.pre/input:/INPUTS/ \
-v /Volumes/DANIEL/EPC/analysis/ECP001.pre_correct:/OUTPUTS/ \
-v /Applications/freesurfer/license.txt:/extra/freesurfer/license.txt \
--user $(id -u):$(id -g) \
justinblaber/synb0_25iso:latest

to something like …?

foreach subj (EPC001 EPC002 EPC003)
  foreach cond (pre post)
    sudo docker run --rm \
      -v /Volumes/DANIEL/EPC/analysis/${subj}.${cond}/input:/INPUTS/ \
      -v /Volumes/DANIEL/EPC/analysis/${subj}.${cond}_correct:/OUTPUTS/ \
      -v /Applications/freesurfer/license.txt:/extra/freesurfer/license.txt \
      --user `id -u`:`id -g` \
      justinblaber/synb0_25iso:latest
  end
end

Any suggestions? I will also update if the script running on its own doesn't work.

Thanks,
Daniel

Thanks for the info - this is good to know. My guess is that any errors you are going to see from now on are simply memory issues. Note that the most memory-intensive step will likely be loading and applying the network weights (done 5 times, once for each fold of the 5-fold cross-validation); the most time-intensive step is applying TOPUP.
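If you want to confirm memory is the culprit, you can watch live per-container usage from a second terminal while the pipeline runs (this is standard Docker, nothing Synb0-specific):

docker stats

It prints a continuously updating table of CPU and memory use for each running container, so you can see whether the inference step climbs to the VM's memory cap right before it is killed.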

Unfortunately I'm not familiar with smart ways of running multiple Docker instances at once on the same machine (we kick off many processes on clusters with many machines). We are slowly BIDS-ifying our pipelines, but this is taking quite some time - for now it is 1 process to 1 subject.
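That said, if running subjects one at a time works for you, a plain sequential shell loop avoids the problem entirely, since each docker run only returns after its container exits. A minimal bash sketch (subject/condition names and paths adapted from your command earlier in this thread - adjust to your own layout):

for subj in EPC001 EPC002 EPC003; do
  for cond in pre post; do
    sudo docker run --rm \
      -v /Volumes/DANIEL/EPC/analysis/${subj}.${cond}/input:/INPUTS/ \
      -v /Volumes/DANIEL/EPC/analysis/${subj}.${cond}_correct:/OUTPUTS/ \
      -v /Applications/freesurfer/license.txt:/extra/freesurfer/license.txt \
      --user $(id -u):$(id -g) \
      justinblaber/synb0_25iso:latest
  done
done

The subjects then run strictly one after another and never compete for memory.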

Hope that is helpful in some way!
Kurt

Thank you for the help! For reference, the above specifications got me to the 5-fold cross-validation, at which point it threw errors. I will try to bump the memory up a bit (10.5 GB) and see if things run smoothly.

And I understand! Thanks for being willing to share the tool!

Hello,

I just wanted to update you and say that the Docker container continues to fail at the fold portion of the script. This is despite the following resources allotted to Docker:

CPUs=6
Memory=16GB
Swap=1.5GB
Disk image size = 104 GB

What specifications do you use @schilkg1? I would have thought this would be enough.

See the following error.
Copying results to results path…

Removing job directory…
Performing inference on FOLD: 1
/extra/pipeline.sh: line 27: 1712 Killed python3.6 /extra/inference.py /OUTPUTS/T1_norm_lin_atlas_2_5.nii.gz /OUTPUTS/b0_d_lin_atlas_2_5.nii.gz /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_"$i".nii.gz /extra/dual_channel_unet/num_fold_"$i"total_folds"$NUM_FOLDS"seed_1_num_epochs_100_lr_0.0001_betas(0.9,\ 0.999)weight_decay_1e-05_num_epoch.pth
Performing inference on FOLD: 2
Performing inference on FOLD: 3
/extra/pipeline.sh: line 27: 1855 Killed python3.6 /extra/inference.py /OUTPUTS/T1_norm_lin_atlas_2_5.nii.gz /OUTPUTS/b0_d_lin_atlas_2_5.nii.gz /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_"$i".nii.gz /extra/dual_channel_unet/num_fold_"$i"total_folds"$NUM_FOLDS"seed_1_num_epochs_100_lr_0.0001_betas(0.9,\ 0.999)weight_decay_1e-05_num_epoch.pth
Performing inference on FOLD: 4
/extra/pipeline.sh: line 27: 1998 Killed python3.6 /extra/inference.py /OUTPUTS/T1_norm_lin_atlas_2_5.nii.gz /OUTPUTS/b0_d_lin_atlas_2_5.nii.gz /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_"$i".nii.gz /extra/dual_channel_unet/num_fold_"$i"total_folds"$NUM_FOLDS"seed_1_num_epochs_100_lr_0.0001_betas(0.9,\ 0.999)weight_decay_1e-05_num_epoch.pth
Performing inference on FOLD: 5
/extra/pipeline.sh: line 27: 2141 Killed python3.6 /extra/inference.py /OUTPUTS/T1_norm_lin_atlas_2_5.nii.gz /OUTPUTS/b0_d_lin_atlas_2_5.nii.gz /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_"$i".nii.gz /extra/dual_channel_unet/num_fold_"$i"total_folds"$NUM_FOLDS"seed_1_num_epochs_100_lr_0.0001_betas(0.9,\ 0.999)weight_decay_1e-05_num_epoch.pth
/extra/pipeline.sh: line 27: 2284 Killed python3.6 /extra/inference.py /OUTPUTS/T1_norm_lin_atlas_2_5.nii.gz /OUTPUTS/b0_d_lin_atlas_2_5.nii.gz /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_"$i".nii.gz /extra/dual_channel_unet/num_fold_"$i"total_folds"$NUM_FOLDS"seed_1_num_epochs_100_lr_0.0001_betas(0.9,\ 0.999)weight_decay_1e-05_num_epoch.pth
Taking ensemble average
Image Exception : #63 :: No image files match: /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_*

That should be more than enough. My Mac looks to be set at 6 CPUs, 12 GB RAM, 2 GB swap, and a 30 GB disk image, and has successfully run this.

Our Linux machines are where we primarily run this on large datasets and have significantly more space than this.

Quick note - I feel bad taking up the MRtrix3 discussion board with external issues - feel free to email me separately to debug. Although this certainly appears to be a memory issue.

Hey, I got everything working with docker and ran SynB0 to perform topup.

Now I am trying to run eddy with the following eddy command.

To get the parameters for the eddy command I am running mrconvert…

Then I am running the following…

eddy --imain=eddy.nii.gz --mask=path/to/brainmask.nii.gz –-acqp=config.txt --index=indices –-bvecs=bvecs --bvals=bvals –-topup=/Volumes/DANIEL/EPC/analysis/${subj}.${cond}_correct/b0_all_topup.nii.gz --out=eddy_unwarp.nii.gz

But I am getting the following error.

–-acqp=config.txt:  is an unrecognised token!

The config.txt (acquisition parameters) file has the following line, i.e. a phase-encode vector of (0, -1, 0) and a total readout time of 0.112 s:

0 -1 0 0.112

Any ideas why this eddy command isn’t working?

Sorry, silly mistake. It turns out that when I typed --, it was being autocorrected to a different symbol (an en dash).
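For anyone who hits the same "unrecognised token" error: the fix was simply retyping the command with plain double hyphens everywhere, i.e.

eddy --imain=eddy.nii.gz --mask=path/to/brainmask.nii.gz --acqp=config.txt --index=indices --bvecs=bvecs --bvals=bvals --topup=/Volumes/DANIEL/EPC/analysis/${subj}.${cond}_correct/b0_all_topup.nii.gz --out=eddy_unwarp.nii.gz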

Hello! Just wanted to touch base again… I've had someone else run into a similar issue of "Killed" during inference. How did you solve this, or was it simply a memory issue? Most of the problems we've had are with Mac OS running Docker, and we'd like to put pointers on our GitHub. Thank you!

Hey,

So I was also using a Mac OS system, so this likely contributed to the problem. The only workaround I found was to run only one Docker container (so one subject) at a time and to provide a pretty large amount of memory. The key issue was that when I tried to run the script on two subjects at a time, it would always get killed.

On another note, is this Docker image (justinblaber/synb0_25iso) the most up-to-date version? I see you tried to improve on your previous SynB0 work in the 2019 paper, and I was just curious which image is the most current. FYI, I used the above method for some older data and it worked pretty well. We are currently writing up a manuscript on the results.

Best,

Ah that's great news - sounds like it is only a memory issue!

Yes; we've updated the Docker image. We now give the option of (1) just synthesizing an image (and letting you perform TOPUP with your favorite configuration files, or even with dwifslpreproc), or (2) running synthesis + TOPUP with default configurations. We have a third option that works without a FreeSurfer license file, but we haven't fully validated it yet.
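For illustration only - the flag below (--notopup, for the synthesis-only mode) follows the naming in the Synb0-DISCO README, and the image name is a placeholder since the exact tag depends on which updated version you pull:

sudo docker run --rm \
-v /path/to/input:/INPUTS/ \
-v /path/to/output:/OUTPUTS/ \
-v /path/to/license.txt:/extra/freesurfer/license.txt \
--user $(id -u):$(id -g) \
<updated synb0 image> --notopup

You would then run TOPUP (or dwifslpreproc) yourself on the synthesized b0 pair in /OUTPUTS/.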

Hello dear experts,

I am using this wonderful tool to correct my DTI images, since I do not have two phase-encoding directions or a fieldmap acquisition. So far I have managed to run it, but with some errors: the process ended without creating all the necessary outputs. I get the following:

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/extra/inference.py", line 90, in <module>
    img_model = inference(T1_input_path, b0_input_path, model, device)
  File "/extra/inference.py", line 31, in inference
    img_T1 = np.expand_dims(util.get_nii_img(T1_path), axis=3)
  File "/extra/util.py", line 24, in get_nii_img
    nii = nib.load(path_nii)
  File "/extra/pytorch/lib/python3.6/site-packages/nibabel/loadsave.py", line 42, in load
    raise FileNotFoundError("No such file or no access: '%s'" % filename)
FileNotFoundError: No such file or no access: '/OUTPUTS/T1_norm_lin_atlas_2_5.nii.gz'
Taking ensemble average
Image Exception : #63 :: No image files match: /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_*
terminate called after throwing an instance of 'std::runtime_error'
  what():  No image files match: /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_*
/extra/pipeline.sh: line 45:   744 Aborted                 fslmerge -t /OUTPUTS/b0_u_lin_atlas_2_5_merged.nii.gz /OUTPUTS/b0_u_lin_atlas_2_5_FOLD_*.nii.gz
Image Exception : #63 :: No image files match: /OUTPUTS/b0_u_lin_atlas_2_5_merged
terminate called after throwing an instance of 'std::runtime_error'
  what():  No image files match: /OUTPUTS/b0_u_lin_atlas_2_5_merged
Applying inverse xform to undistorted b0
/extra/pipeline.sh: line 46:   745 Aborted                 fslmaths /OUTPUTS/b0_u_lin_atlas_2_5_merged.nii.gz -Tmean /OUTPUTS/b0_u_lin_atlas_2_5.nii.gz
 file /OUTPUTS/b0_u_lin_atlas_2_5.nii.gz does not exist . 
terminate called after throwing an instance of 'itk::ExceptionObject'
  what():  /home/local/VANDERBILT/blaberj/ANTS_13_FEB_2019/bin/ants/ITKv5/Modules/Core/Common/src/itkProcessObject.cxx:1412:
itk::ERROR: ResampleImageFilter(0x30a0560): Input Primary is required but not set.
Applying slight smoothing to distorted b0
/extra/pipeline.sh: line 50:   746 Aborted                 antsApplyTransforms -d 3 -i /OUTPUTS/b0_u_lin_atlas_2_5.nii.gz -r /INPUTS/b0.nii.gz -n BSpline -t [/OUTPUTS/epi_reg_d_ANTS.txt,1] -t [/OUTPUTS/ANTS0GenericAffine.mat,1] -o /OUTPUTS/b0_u.nii.gz
Running topup
Image Exception : #63 :: No image files match: /OUTPUTS/b0_u
terminate called after throwing an instance of 'std::runtime_error'
  what():  No image files match: /OUTPUTS/b0_u
/extra/pipeline.sh: line 61:   756 Aborted                 fslmerge -t /OUTPUTS/b0_all.nii.gz /OUTPUTS/b0_d_smooth.nii.gz /OUTPUTS/b0_u.nii.gz
Image Exception : #63 :: No image files match: /OUTPUTS/b0_all
Image Exception : #22 :: Failed to read volume /OUTPUTS/b0_all.nii.gz
Error : No image files match: /OUTPUTS/b0_all

Part of FSL (ID: 6.0.1)
topup

Usage: 
topup --imain=<some 4D image> --datain=<text file> --config=<text file with parameters> --out=my_topup_results


Compulsory arguments (You MUST set one or more of):
	--imain		name of 4D file with images
	--datain	name of text file with PE directions/times

Optional arguments (You may optionally specify one or more of):
	--out		base-name of output files (spline coefficients (Hz) and movement parameters)
	--fout		name of image file with field (Hz)
	--iout		name of 4D image file with unwarped images
	--logout	Name of log-file
	--warpres	(approximate) resolution (in mm) of warp basis for the different sub-sampling levels, default 10
	--subsamp	sub-sampling scheme, default 1
	--fwhm		FWHM (in mm) of gaussian smoothing kernel, default 8
	--config	Name of config file specifying command line arguments
	--miter		Max # of non-linear iterations, default 5
	--lambda	Weight of regularisation, default depending on --ssqlambda and --regmod switches. See user documetation.
	--ssqlambda	If set (=1), lambda is weighted by current ssq, default 1
	--regmod	Model for regularisation of warp-field [membrane_energy bending_energy], default bending_energy
	--estmov	Estimate movements if set, default 1 (true)
	--minmet	Minimisation method 0=Levenberg-Marquardt, 1=Scaled Conjugate Gradient, default 0 (LM)
	--splineorder	Order of spline, 2->Qadratic spline, 3->Cubic spline. Default=3
	--numprec	Precision for representing Hessian, double or float. Default double
	--interp	Image interpolation model, linear or spline. Default spline
	--scale		If set (=1), the images are individually scaled to a common mean, default 0 (false)
	--regrid		If set (=1), the calculations are done in a different grid, default 1 (true)
	-h,--help	display help info
	-v,--verbose	Print diagonostic information while running
	-h,--help	display help info



Failed to read volume /OUTPUTS/b0_all.nii.gz
Error : No image files match: /OUTPUTS/b0_all
FINISHED!!!!

[screenshot of obtained outputs]

I have the same issue as you. Have you solved this problem?