Create mif file from scratch

The mrtrix docs explain how to write the header for a file, but I would like to know how to convert a jpg/png to a mif file, and how to convert a 3D object file to a mif file.

The problem with creating mif files from jpg or png is that these images are stored in compressed formats, which first need to be decompressed back into raw binary data. You can do what you need with other tools, though, for instance using Python or Matlab. For the latter, if you manage to read the image data into a 3D array, you can write it back to mif using the Matlab functions provided as part of your MRtrix3 installation. Otherwise, as long as you can dump the data to a file as raw (uncompressed) binary, you should be able to write a .mih header to go with it (this is the approach taken in the convert_bruker script, for instance – all it does is create a .mih header to accompany the existing binary data file).
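As a minimal sketch of the second approach, the following Python snippet dumps a 2D greyscale image as raw bytes and writes a plain-text .mih header pointing at it. The header keys (dim, vox, layout, datatype, file) follow the format described in the MRtrix3 documentation; the synthetic gradient here stands in for real pixel data you would read with e.g. Pillow, and the voxel sizes are placeholder values:

```python
width, height = 32, 32

# Stand-in image data: a horizontal greyscale gradient, one byte per pixel.
# In practice you would obtain this array by decoding your jpg/png.
raw = bytes(x * 255 // (width - 1) for y in range(height) for x in range(width))

with open("data.dat", "wb") as f:
    f.write(raw)

# A .mih file is a plain-text header; the 'file:' line names the raw data
# file and the byte offset at which the image data starts.
header_lines = [
    "mrtrix image",
    f"dim: {width},{height},1",
    "vox: 1,1,1",          # voxel size in mm (placeholder values)
    "layout: +0,+1,+2",    # data order: x fastest, then y, then z
    "datatype: UInt8",
    "file: data.dat 0",    # raw data file, starting at byte offset 0
    "END",
]
with open("image.mih", "w") as f:
    f.write("\n".join(header_lines) + "\n")
```

After this, `mrinfo image.mih` should report the image geometry, and `mrconvert image.mih image.mif` would bundle header and data into a single mif file.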

I understand now how to create a file.
However, I would like to create image and 3D object files that can be displayed by MrView directly.
For example, I created a 3D object of a face and would like to import it into MrView and/or overlay it on MRI data.
Are there any specific things I should be aware of when creating the file? Or is the raw data enough, without any additions?
Thank you

i would like to know how to convert a jpg/png to a mif file

In the upcoming MRtrix3 update, PNG images will be supported as a native image format. When combined with multi-file numbered image support, and the new command mrcolour, this enables all sorts of trickery. :smiling_imp:
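A hedged sketch of what that combination might look like, assuming an MRtrix3 version with native PNG support (the filenames and the choice of colourmap are illustrative, not from the original post):

```shell
# Read a numbered series of PNG slices into a single volume;
# the [] in the filename is MRtrix3's numbered-image syntax.
mrconvert slices-[].png volume.mif

# Map the greyscale intensities through a colourmap with mrcolour.
mrcolour volume.mif hot volume_coloured.mif
```

These commands require an MRtrix3 installation, so they are shown here as an untested sketch.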

i created a 3d object of a face and would like to import it in MrView and/or overlap it with MRI data.

Your description isn’t detailed enough to answer this question: the answer depends entirely on the fundamental nature of what you are referring to as “a 3D object”. My suspicion is that you mean 3D triangulated surface information, in which case the answer is: we don’t yet have the capability to display such information natively. The best we could offer would be to run the data through the mesh2voxel command and visualise the output of that.
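For concreteness, a sketch of that workaround (the filenames are placeholders; mesh2voxel needs a template image to supply the voxel grid the mesh is rasterised onto):

```shell
# Voxelise the triangulated surface mesh onto the grid of a template image,
# producing a partial-volume image.
mesh2voxel face.obj template.mif face_partial_volume.mif

# View the result overlaid on the anatomical image.
mrview template.mif -overlay.load face_partial_volume.mif
```

Again, these commands assume a working MRtrix3 installation and are an untested sketch of the suggested approach.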