Dear @chunhungyeh,

it depends on the type of installation:



See the relevant code of fsl_sub in my Debian-packaged FSL 5.0.9:

# Allow to override the above automatic detection result with FSLPARALLEL
if [ -n "$FSLPARALLEL" ] ; then
    # Whenever FSLPARALLEL is set, enforce using SGE even if no SGE_ROOT is set
    # which, for example, is the case on Debian systems running SGE
    # TODO: move cluster engine detection here to be able to support more than
    #       just SGE
    if [ "$FSLPARALLEL" = "condor" ] ; then
        # if condor shall be used, simply switch to Condor's qsub emulation

Therefore, even unsetting SGE_ROOT would not work in this case when the global variable FSLPARALLEL is set. Conversely, when FSLPARALLEL is not set, none of the FSL jobs gets submitted to gridengine. FSLPARALLEL is therefore the primary variable controlling gridengine submission in FSL 5.0.9 installed as a Debian package.
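The practical upshot is that a wrapper has to clear both variables in the child environment to be safe across both kinds of installation. A minimal Python sketch of that idea (the function name and usage are illustrative, not MRtrix3's actual code):

```python
import os
import subprocess

def run_without_cluster(cmd):
    """Run an FSL command with cluster submission disabled.

    Clears both SGE_ROOT and FSLPARALLEL in the child environment,
    covering both the upstream fsl_sub logic (SGE_ROOT) and the
    Debian-packaged fsl_sub logic (FSLPARALLEL).
    """
    env = os.environ.copy()
    env.pop('SGE_ROOT', None)      # upstream fsl_sub checks this
    env.pop('FSLPARALLEL', None)   # Debian fsl_sub checks this first
    return subprocess.run(cmd, env=env)

# Hypothetical usage (command line for illustration only):
# run_without_cluster(['run_first_all', '-i', 'T1.nii', '-o', 'first'])
```

This only affects the child process, so the user's own shell environment is left untouched.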

Hi @Antonin_Skoch,

Ah OK, so the fsl_sub code differs from that of my FSL 5.0.9 installed via FSL’s install script:

# The following section determines what to do when fsl_sub is called
# by an FSL program. If SGE_ROOT is set it will attempt to pass the
# commands onto the cluster, otherwise it will run the commands
# itself. There are two values for the METHOD variable, "SGE" and
# "NONE". Note that a user can unset SGE_ROOT if they don't want the
# cluster to be used.
unset module
if [ "x$SGE_ROOT" != "x" ] ; then
	QCONF=`which qconf`
	if [ "x$QCONF" = "x" ]; then
		echo "Warning: SGE_ROOT environment variable is set but Grid Engine software not found, will run locally" >&2

I was not aware of such an installation dependency (thanks for the information). I think @rsmith may have figured out a solution to deal with the problem more fundamentally, so that we do not need to unset any environment variable before running 5ttgen in the next major update of MRtrix3.


Regrettably, neither of them worked. I unset both variables, but I am still getting the same error. I wonder whether there is a problem with my data.

Carloss-MacBook-Pro:mrtrix_test carlosengutierrez$ echo $SGE_ROOT

Carloss-MacBook-Pro:mrtrix_test carlosengutierrez$ echo $FSLPARALLEL

Carloss-MacBook-Pro:mrtrix_test carlosengutierrez$ 5ttgen fsl 8.mif 5tt.mif -nocleanup
5ttgen: Note that this script makes use of commands / algorithms that have relevant articles for citation; INCLUDING FROM EXTERNAL SOFTWARE PACKAGES. Please consult the help page (-help option) for more information.
5ttgen: Generated temporary directory: /tmp/5ttgen-tmp-CNSPG0/
Command: mrconvert /Users/carlosengutierrez/datathon/mrtrix_test/8.mif /tmp/5ttgen-tmp-CNSPG0/input.mif
5ttgen: Changing to temporary directory (/tmp/5ttgen-tmp-CNSPG0/)
Command: mrconvert input.mif T1.nii -stride -1,+2,+3
Command: maskfilter /usr/local/fsl/data/standard/MNI152_T1_1mm_brain_mask_dil.nii.gz dilate mni_mask.nii -npass 4
Command: standard_space_roi T1.nii T1_preBET.nii.gz -maskMASK mni_mask.nii -roiFOV
Command: bet T1_preBET.nii.gz T1_BET.nii.gz -f 0.15 -R
Command: fast T1_BET.nii.gz
Command: run_first_all -s L_Accu,R_Accu,L_Caud,R_Caud,L_Pall,R_Pall,L_Puta,R_Puta,L_Thal,R_Thal -i T1.nii -o first
5ttgen: [ERROR] Missing .vtk file for structure L_Accu; run_first_all must have failed
5ttgen: Changing back to original directory (/Users/carlosengutierrez/datathon/mrtrix_test)
5ttgen: Contents of temporary directory kept, location: /tmp/5ttgen-tmp-CNSPG0/

Carloss-MacBook-Pro:mrtrix_test carlosengutierrez$ mrinfo 8.mif

Image: "8.mif"

Dimensions: 256 x 80 x 320
Voxel size: 1.5 x 3 x 1.5
Data strides: [ 1 3 -2 ]
Format: MRtrix
Data type: signed 16 bit integer (little endian)
Intensity scaling: offset = 0, multiplier = 7.5575500000000002e-06
Transform: 0.9999 0.002821 -0.01607 -234.9
-0.003831 0.998 -0.06319 -93.06
0.01585 0.06324 0.9979 -292.5
comments: 15_HighRes_MP_RAGE_150um
mrtrix_version: 005da18a

I see that there is no job number after the run_first_all invocation in your terminal output, so I suspect it is not a gridengine issue.

I would suggest running the run_first_all command manually and seeing what the command-line output is.

Thank you for your answers. I found out that some structures and files are not being created; the error shows up when fslmerge tries to merge the segmentations. I ran the cat *.logs/*.e* command and it shows:

a) I think this error indicates a problem in my original T1.nii data:

Cannot open volume first-L_Accu_corr for reading!
Cannot open volume first-L_Accu_first for reading!

b) I removed L_Accu when running run_first_all, but then I got problems with other structures. Regarding the _corr files: the _first.nii.gz and _first.vtk files are created, but not the _corr.nii.gz files:

Image Exception : #22 :: ERROR: Could not open image first-L_Pall_corr
Image Exception : #22 :: ERROR: Could not open image first_all_none_firstseg
ERROR: Could not open image first-L_Puta_corr

Thanks for your help.

OK, so this is an issue with FIRST itself. I think you may get better support on the FSL mailing list in this case.

My suggestion would be to troubleshoot FIRST, mainly to check the image registration:


I would also try to verify the validity of the files which have been created:
What do the _first.nii.gz files look like? Are they valid binary masks?
What do the _first.vtk files look like?
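For a first rough pass over those outputs, stdlib-only checks can at least tell you whether each file exists and has a plausible header. This is a hypothetical helper for illustration; fsleyes or fslstats are the proper tools for inspecting the actual mask contents:

```python
import gzip

def quick_check_outputs(prefix, structure):
    """Crude sanity checks on FIRST output files for one structure.

    Checks that the .nii.gz is a readable gzip stream containing at
    least a full NIfTI-1 header, and that the .vtk starts with the
    legacy VTK file signature. Illustrative only.
    """
    report = {}
    nii = f'{prefix}-{structure}_first.nii.gz'
    vtk = f'{prefix}-{structure}_first.vtk'
    try:
        with gzip.open(nii, 'rb') as f:
            # 348 bytes is the fixed NIfTI-1 header size
            report['nii_ok'] = len(f.read(348)) == 348
    except OSError:
        report['nii_ok'] = False
    try:
        with open(vtk, 'rb') as f:
            # Legacy VTK files begin with a "# vtk DataFile" line
            report['vtk_ok'] = f.read(14) == b'# vtk DataFile'
    except OSError:
        report['vtk_ok'] = False
    return report
```

Running this over each structure quickly separates "file missing" from "file present but truncated or corrupt".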


I think @rsmith may have figured out a solution to deal with the problem more fundamentally, so that we do not need to unset any environment variable before running 5ttgen in the next major update of MRtrix3.

For the sake of completeness, since it was mentioned here (but wasn’t actually the problem):

What I’ve tried to do in the update is provide a Python library function that will wait until a particular file is both present, and has no other process writing to it. Therefore, rather than trying to disable SGE, run_first_all will instead be free to make use of SGE, and 5ttgen will simply wait until run_first_all's outputs have been produced. Fingers crossed…
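The core idea can be sketched as a polling loop that treats a file as complete once its size stops changing between polls. This is an illustrative sketch only, not the actual MRtrix3 implementation, which may use a more robust check (e.g. inspecting open file handles):

```python
import os
import time

def wait_for_complete_file(path, interval=1.0, timeout=None):
    """Block until `path` exists and appears fully written, judged by
    its size being stable (and non-zero) across two successive polls.

    Returns True on success, False if `timeout` seconds elapse first.
    Size stability is a heuristic: a writer pausing longer than
    `interval` between writes would fool it.
    """
    start = time.time()
    last_size = -1
    while True:
        if timeout is not None and time.time() - start > timeout:
            return False
        if os.path.exists(path):
            size = os.path.getsize(path)
            if size == last_size and size > 0:
                return True  # unchanged since last poll: assume complete
            last_size = size
        else:
            last_size = -1
        time.sleep(interval)
```

A caller would invoke this once per expected run_first_all output (e.g. each `*_first.vtk` file) before proceeding, instead of assuming the files exist as soon as run_first_all returns.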


I’m curious if someone could help me with an issue I’m having with 5ttgen, which seems to be related to this topic. It has been stuck at

for nearly an entire day. What’s curious is that this doesn’t always happen, and all 5ttgen files are created when it does work. The only difference between it not working right now versus the times it has worked is that I did a more thorough skullstripping of the T1 before doing a rigid-body registration to the diffusion series. I can’t imagine this would have any impact, but for the sake of providing information, that is the only difference I can think of.

Any idea as to why it’s taking so long/not working?

Thank you :slight_smile:

Dear @aszymanski,

I would suspect that for some reason your particular instance of first either has not yet finished or exited with an error. I would suggest going to the temporary directory of first and inspecting the logs, as in [quote=“Carlos_Gutierrez, post:25, topic:133”]
I ran the cat *.logs/*.e* command
[/quote]
which could help you diagnose the reason.


Oh noes, not again! :unamused:

That message you quoted relates to changes I made in version 3.0_RC1 that aim to permit FSL’s run_first_all script to use Sun Grid Engine (SGE), since that’s the capability that has seemingly caused the greatest amount of grievance with 5ttgen fsl.

Unfortunately it also makes it quite tricky to detect errors: run_first_all may exit “successfully” once the relevant jobs have been submitted to SGE, but 5ttgen fsl needs to wait until the files created by run_first_all actually appear; alternatively, run_first_all may fail for some other reason, yet still return “successfully”, and those files will never appear.

As suggested by @Antonin_Skoch, the FIRST logs will hopefully give some indication: the terminal output of 5ttgen fsl should tell you where the script temporary directory was created, and the FIRST directory resides within that. Letting us know whether or not your system is using SGE (echo $SGE_ROOT) would also help.

As an aside: Something I’ve considered trying recently is calling fsl_anat from within 5ttgen fsl and manipulating its output, rather than executing the FSL command stages manually myself with a bit of bastardization in between. If anybody has any thoughts or experiences on this, do let me know.




So I ran cat *.logs/*.e* within one of the 5ttgen tmp directories, which gave me this output:

[USER 5ttgen-tmp-8WWY88]$ cat *.logs/*.e*
Image Exception : #22 :: ERROR: Could not open image first-R_Accu_corr
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
/bin/sh: line 1:  6404 Aborted                 /usr/local/packages/fsl-5.0.1/bin/fslmerge -t first_all_none_firstseg first-L_Accu_corr first-R_Accu_corr first-L_Caud_corr first-R_Caud_corr first-L_Pall_corr first-R_Pall_corr first-L_Puta_corr first-R_Puta_corr first-L_Thal_corr first-R_Thal_corr
Image Exception : #22 :: ERROR: Could not open image first-R_Accu_first
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
/bin/sh: line 1:  6405 Aborted                 /usr/local/packages/fsl-5.0.1/bin/fslmerge -t first_all_none_origsegs first-L_Accu_first first-R_Accu_first first-L_Caud_first first-R_Caud_first first-L_Pall_first first-R_Pall_first first-L_Puta_first first-R_Puta_first first-L_Thal_first first-R_Thal_first
Image Exception : #22 :: ERROR: Could not open image first_all_none_origsegs

And yes, the cluster I am using to execute these commands is using SGE. 

We are really interested in obtaining the 5ttgen images because we want to compare CSF &amp; WM segmentation between 5ttgen &amp; dhollander, so getting 5ttgen working would be wonderful.

Please let me know if there is any more information I can provide. I really appreciate the help :relaxed:

Dear @aszymanski,

so the problem is that FIRST itself exited with an error.

You could probably get more expert advice on this issue on the mailing list dedicated to FIRST, which is the FSL list. There are several threads on the FSL list discussing the cause of the NO INTERIOR VOXELS TO ESTIMATE MODE error. From a quick look at them, the advice was to inspect the registration and check your image voxel size.



See also

I would also suggest checking the skullstrip.




@aszymanski: If you can consistently reproduce that fault with your image(s), could you do me a favour and try running the original T1 through the fsl_anat script? If there’s some aspect of the image processing within that script that helps to prevent such errors from occurring during the run_first_all step, that would give me a direction in which to try solutions for 5ttgen.


Hey @rsmith,

I just reran a subject’s T1 using fsl_anat and I still got

terminate called after throwing an instance of 'RBD_COMMON::BaseException'
/usr/local/packages/fsl-5.0.1/bin/fsl_anat: line 152: 16220 Aborted                 $@

:frowning:

Does this help you at all?

In a way: It lets me know that if I were to modify 5ttgen fsl to use the fsl_anat script instead of calling the individual FSL commands, it wouldn’t alleviate the run_first_all issue completely. Having said that, I’d be more confident in that conclusion if the image were tested against an up-to-date FSL. Any chance of sharing that particular subject image with me?

Sure! Let me just get rid of some identifiers and I can send it your way. How would you like me to send it?