Greetings. First, many thanks for this great set of well-documented tools!
I am interested in seeing whether I can calculate a connectome based on a voxel-wise parcellation of the GM/WM interface, and I have run into what I think is a memory issue. My connectome has ~76,000 nodes, and tck2connectome throws a std::bad_alloc error when I try to run it with this parcellation image. If this is a memory issue, is there a practical limit on the number of nodes beyond which connectome generation becomes infeasible?

I would also be glad to hear if there are any other suggestions or ideas for how to do such a thing…!

Yes, that will be a memory issue. The connectome generation code uses 2 double-precision matrices, so that’s 16 bytes per element; a symmetric 76,000 × 76,000 matrix contains ~ 2.9 billion unique elements, so total memory usage is … ~ 43GB. I could halve this in the most typical usage scenario by only generating that second matrix when it’s absolutely required; would that make it feasible for you?
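For anyone wanting to repeat that arithmetic, a quick back-of-the-envelope sketch (node counts and per-element sizes as described above; actual usage will differ with implementation details):

```python
# Rough memory estimate for a dense symmetric connectome matrix stored as
# two double-precision (8-byte) matrices, counting only the unique
# upper-triangular elements (including the diagonal).
def connectome_memory_gib(n_nodes, matrices=2, bytes_per_element=8):
    unique_elements = n_nodes * (n_nodes + 1) // 2
    return unique_elements * matrices * bytes_per_element / 2**30

print(round(connectome_memory_gib(76000), 1))   # ~43 GiB
print(round(connectome_memory_gib(90000), 1))   # ~60 GiB
```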

More generally, this type of application would typically be implemented using sparse storage techniques within the code itself. But whether or not this is beneficial depends on the sparsity of the matrix; once maybe 25% of all possible connections are present, those techniques no longer decrease the memory usage at all (I raise that point because connectivity matrices from our tracking tend to be very dense). It’s not something that I’ve delved into myself, and therefore it’s not something that has been implemented. But if there’s demand for it, it can be added to GitHub as a feature request.

By chance I was doing something similar yesterday with tck2connectome (about 90,000 nodes) and running it on a cluster. I ran it as a job with 140GB available and I still hit the memory ceiling. It seems like tck2connectome is using more memory than expected from the calculation (my back-of-the-envelope maths put it at a manageable ~61GB based on the above).

Yes, on second glance it looks like it’ll be a bit larger. For simplicity I used a dense non-symmetric matrix class rather than vectorised symmetric matrix storage… It also tracks the node indices that each streamline is assigned to, in case the -out_assignments option is used and hence those data are required. I suppose I wasn’t thinking about such large matrices back when I coded this stuff!

I’ve created a GitHub issue for these points. I’ll see if I can have a go at them at some point. Thanks for the reports!

Hi Rob and Jonny, thanks for the confirmation and quick response. I am definitely interested in this and have set a watch on the GitHub issue.

I was originally trying to generate a matrix that included white matter voxels as nodes (using the -assignment_all_voxels option and including unique indices for voxels in the CC) when I ran into the issue, so reduced memory requirements may help with some of those questions that I have as well!

I also have access to high-RAM HPCs and can test as necessary. In the meantime, I will try my hand at scripting a more RAM-safe but time- and I/O-intensive solution in Python: essentially looping over extraction of streamlines for each label index, mapping them to voxel space, and then reconstructing the connectome from the 76,000 resulting files. If you have additional suggestions on how to decrease I/O I would love to hear them. It would be ideal if I could load the .tck file into memory once and then string together pipes from tckedit to tckmap for all voxels (though from my poking around I couldn’t get the piping commands to work with tckedit). In case this is interesting to others, and since I haven’t seen anything similar shared anywhere, I would of course make the scripts available.
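A minimal sketch of that per-label loop: mrcalc, tckedit and tckmap are real MRtrix3 commands, but all file names here are illustrative, and this only *constructs* the command lines (you would execute each with e.g. subprocess.run(cmd, check=True)):

```python
# For each node index: build a binary mask of that label, extract the
# streamlines intersecting it, and map them to a voxel-wise density image.
# File names are purely illustrative.
def commands_for_node(idx, parc="nodes.mif", tracks="tracks.tck"):
    mask = f"mask_{idx:05d}.mif"
    tck = f"node_{idx:05d}.tck"
    density = f"node_{idx:05d}_density.mif"
    return [
        ["mrcalc", parc, str(idx), "-eq", mask],      # binary mask of node idx
        ["tckedit", tracks, tck, "-include", mask],   # streamlines touching it
        ["tckmap", tck, density, "-template", parc],  # voxel-wise density map
    ]
```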

though from my poking around I couldn’t get the piping commands to work with tckedit

Piping data between MRtrix3 commands currently only works for image data. I do intend to implement track data piping at some point since it would be useful for something I’m working on myself. It’s a decent block of work though.

Good to know. I am looking into whether there is a significant improvement from using a temporary ramdisk to handle this, but it is not a solution that will easily scale to job submission on HPC or to other (non-Unix-like) platforms. I will let you know as things develop…

I just ran the newest build from the tck2connectome_ram_optimisations branch and it seems to work fine with 90k vertices, using about 10× less memory. Thanks a lot for making all the changes!

I was just about to post something similar. This is great, thanks for the changes! Now I can construct connectomes that take >10 mins to write to disk :-). Out of curiosity @JonathanOM, how long did it take to write the 90k connectome?

I found that a reasonable balance is to split a parcellation with a very large number of nodes into sets of 20,000 nodes for tck2connectome (and then reconstruct the full matrix afterwards). It is feasible (though relatively slow) on a personal computer with 16GB of RAM and a slow HDD.

Just to post a bit of an update here: I successfully reconstructed a ~150k × 150k node matrix on a personal machine (16GB RAM) by splitting and recombining groups of 20,000 nodes as I suggested earlier. It took quite a while, and a large amount of disk space is needed to hold the connectome files once they are generated, but it is definitely possible now that the RAM usage has come down.

Another question here: since many of the entries in a matrix this size are going to be zeros, I was wondering if anyone has looked into implementing a sparse storage format?

I’m pretty sure it’s been thought about, but the bigger question is how that would work with whatever other software package you might be interested in using for further calculations…? If there is no widely accepted standard for this, I don’t see that there’d be much point in implementing our own sparse format (?). Although if connectomestats does everything you need, then I can see this potentially being useful in its own right - but it’s not something I would undertake lightly…

I successfully reconstructed a ~150k * 150k node matrix on a personal machine (16GB RAM) by splitting and recombining groups of 20,000 nodes as I suggested earlier.

Just to be clear (in case somebody reads this and misinterprets it): this involves taking pairs of [sets of 10,000 nodes] and constructing each unique 10,000 × 10,000 block of the full matrix. Simply running tck2connectome once on each successive block of 20,000 nodes is not sufficient!
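To make the blocking scheme concrete, here is a toy sketch: the full symmetric matrix is covered by iterating over every unique *pair* of node subsets and filling in the corresponding block (and its transpose). An 8×8 matrix with subsets of 3 nodes stands in for the real connectome; in practice each block would come from one tck2connectome run on the union of the two subsets.

```python
import numpy as np
from itertools import combinations_with_replacement

def reassemble(block_lookup, n, block):
    # block_lookup(i, j, size) stands in for one tck2connectome run
    # covering the node subsets starting at rows i and j.
    out = np.zeros((n, n))
    starts = range(0, n, block)
    for i, j in combinations_with_replacement(starts, 2):
        b = block_lookup(i, j, block)
        out[i:i+b.shape[0], j:j+b.shape[1]] = b
        out[j:j+b.shape[1], i:i+b.shape[0]] = b.T
    return out

truth = np.arange(64.).reshape(8, 8)
truth = truth + truth.T          # symmetric toy "ground truth" connectome
rebuilt = reassemble(lambda i, j, s: truth[i:i+s, j:j+s], 8, 3)
```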

Another question here: since many of the entries in a matrix this size are going to be zeros, I was wondering if anyone has looked into implementing a sparse storage format?

Eigen has native support for sparse matrices. In retrospect it may be better for tck2connectome to provide a command-line option to construct & save a sparse matrix, rather than the current automatic toggle between single- and double-precision floating-point. I don’t see in their documentation how such data are written to string / file, but it wouldn’t surprise me if it’s in triplets, as is supported by MATLAB.
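For illustration, a hypothetical triplet ("row col value") text export - the same basic layout as MATLAB's sparse export and the Matrix Market body - needs only a few lines; for a symmetric matrix, only the non-zero upper-triangular entries need be kept:

```python
# Write the non-zero upper-triangular entries of a symmetric matrix
# as whitespace-separated (row, column, value) triplets, one per line.
def write_triplets(matrix, path):
    with open(path, "w") as f:
        for i, row in enumerate(matrix):
            for j in range(i, len(row)):
                if row[j] != 0:
                    f.write(f"{i} {j} {row[j]}\n")

write_triplets([[0, 2.5, 0],
                [2.5, 0, 0],
                [0, 0, 1.0]], "connectome_triplets.txt")
```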

I found that the most parsimonious solution was to put everything together into an HDF5 file with compression enabled. The final size ends up being 1.8GB (compression level 6 of 9), as opposed to a 90GB file if I dump the data to an uncompressed binary blob (with numpy). I also tested converting the flat .txt matrices to sparse matrices (scipy.sparse.lil_matrix), but the cumulative file size was larger than the HDF5 file. We are not currently using the connectomestats tool with these data, but rather custom tools in Python and MATLAB to do some segmentation and modelling.
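A minimal sketch of that HDF5 route, assuming the h5py package (file name and toy data are illustrative): store the dense matrix as a chunked, gzip-compressed dataset, so a mostly-zero matrix shrinks far below its raw binary size on disk.

```python
import numpy as np
import h5py

mat = np.zeros((1000, 1000))
mat[::50, ::50] = 1.0                      # sparse-ish toy data

# Chunked storage is required for compression; level 6 as described above.
with h5py.File("connectome.h5", "w") as f:
    f.create_dataset("connectome", data=mat, chunks=True,
                     compression="gzip", compression_opts=6)
```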

I agree that (if implemented) the output format should be as portable as possible. I am no expert, but as @rsmith noted as a possibility in Eigen, I think the most typically used format is a list of triplets.

I would be very happy to test as necessary if you decide to go with this.

Just a couple of quick comments about using sparse matrices with Eigen:

Eigen doesn’t provide native facilities for reading or writing data to file, whether sparse or not - only for manipulating them in RAM. This would need to be implemented separately. There is, however, an unsupported module for storing sparse matrices in Matrix Market format, which might be suitable - it consists essentially of triplets, as suggested anyway (@rsmith: is this what you were talking about?).

If you do decide to use a sparse matrix representation internally within tck2connectome, it may turn out to be very inefficient if using the Eigen::SparseMatrix class directly. We had a play around with these recently for other reasons, and the cost of random insertions is enormous - to the point where the whole process stalls rapidly as the number of elements in the matrix grows. It’s probably better to accumulate a std::vector<Eigen::Triplet<double>> directly - this is in fact how their documentation suggests populating a sparse matrix.
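For those working in Python, scipy mirrors that recommended pattern - accumulate triplets, then build the matrix in one pass via coo_matrix, analogous to Eigen's setFromTriplets() - rather than inserting elements one at a time:

```python
# Accumulate (row, col, value) triplets in plain lists, then construct the
# sparse matrix in a single pass instead of doing random element insertions.
from scipy.sparse import coo_matrix

rows, cols, vals = [], [], []
for i, j, v in [(0, 1, 2.0), (3, 2, 5.0), (0, 1, 1.0)]:
    rows.append(i); cols.append(j); vals.append(v)

# Duplicate (0, 1) entries are summed on conversion, as with Eigen.
m = coo_matrix((vals, (rows, cols)), shape=(4, 4)).tocsr()
```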

Eigen doesn’t provide native facilities for reading or writing data to file, whether sparse or not - only for manipulating them in RAM.

Well, I’d have thought it would at least come with an operator<<()… ¯\_(ツ)_/¯

If you do decide to use a sparse matrix representation internally within tck2connectome, it may turn out to be very inefficient if using the Eigen::SparseMatrix class directly.

Eeew.

I won’t be putting the time into this myself any time soon; but I’ve created a GitHub wishlist issue with some hints in case anybody wants to take it on.