SIFT error: Error assigning memory for SIFT gradient vector

Hi gurus,

We have put together an automated structural connectome pipeline based on the thread “Beginner: Connectome pipeline (Updated)”. However, we are running into errors with the SIFT algorithm: “[ERROR] Error assigning memory for SIFT gradient vector”. There is more than enough memory available for the pipeline to complete, and although the tractograms appear fine, re-running them and then SIFT again still fails. We also could not find anything relating to this issue on the help boards. How do we go about solving it? Any help is greatly appreciated.

Ade

How much memory do you have on your system? How big is the tractogram you’re feeding in? SIFT has very high memory requirements – as a ballpark, you’ll need in the region of 32GB to SIFT 30M streamlines…

Thanks @jdtournier for your quick reply!

We are running the pipeline on a high performance cluster; we have about 100GB available. We are sifting a tractogram of 100M streamlines to 20M streamlines. We’ve been able to run a few brains successfully, but are not sure why others are producing the error even when the tractograms look fine.

Ade

OK, so I’d expect that to be running fairly close to the limit. Have a look at your memory consumption for the successful runs, using e.g. htop: you’ll probably find it’s using close to all of the available RAM. Different inputs are likely to require different amounts of RAM, depending on the lengths of the streamlines, etc.
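If you want a concrete peak figure rather than watching htop interactively, one option is to wrap the SIFT call with GNU time – assuming it’s installed on your cluster, and with placeholder file names:

    # report peak RAM (maximum resident set size) for a single SIFT run
    /usr/bin/time -v tcksift tracks_100M.tck wmfod.mif tracks_sift_20M.tck -term_number 20M
    # then check the "Maximum resident set size (kbytes)" line in the report

Comparing that number between a successful brain and a failing one will tell you how close you really are to the limit.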

You may be better off using SIFT2 instead, which would allow the use of a smaller input tractogram with similar quality of results.
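As a rough sketch of what that could look like (file and image names below are placeholders, and options are trimmed to the essentials):

    # current approach: generate 100M streamlines, then filter down to 20M
    tcksift tracks_100M.tck wmfod.mif tracks_sift_20M.tck -term_number 20M

    # SIFT2 alternative: generate a smaller tractogram (e.g. 20M streamlines directly),
    # then compute per-streamline weights rather than removing streamlines
    tcksift2 tracks_20M.tck wmfod.mif sift2_weights.txt

    # feed the weights into the connectome construction step
    tck2connectome tracks_20M.tck nodes.mif connectome.csv -tck_weights_in sift2_weights.txt

Because the input tractogram is much smaller, the memory footprint drops accordingly.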

We had a closer look, and found that the tractograms for the problem brains [~94GB] are considerably larger than those of the other brains [~70GB], which is probably why we are having memory issues. What do you think might be causing such differences in size? We’d like to keep the pipeline as consistent as possible across brains, and were wondering how we could investigate / fix the tractograms.

Thank you very much for your help and prompt replies!
Ade

The storage space required for track data is dependent not only on the number of tracks, but also on the number of vertices per track. This could vary for a number of reasons:

  • The volume of the brain differs, and hence the lengths of various white matter bundles differs, requiring a larger number of vertices to reconstruct the same pathways.

  • The voxel size differs, but the size of the brain is the same: since the default step size is a fraction of the voxel size, smaller voxels mean a smaller step size, and hence a larger number of vertices to reconstruct streamlines of the same physical length.

  • The shape of the streamline length distribution differs; that is, some brains have a greater number of long streamlines and fewer short streamlines. This in turn could be caused by a wide range of factors. tckstats -histogram is useful here (see the example below).

You’ll need to check your data manually to figure out which of these is the case. The memory usage of SIFT(2) will scale approximately with the track file size.
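For instance, the length statistics and distribution for a given tractogram can be pulled out with (placeholder file names):

    # print summary statistics (including min / max / mean) of streamline lengths
    tckstats tracks_100M.tck
    # write a histogram of streamline lengths to a text file, for plotting / comparison across subjects
    tckstats tracks_100M.tck -histogram length_histogram.csv

If the voxel size turns out to be the driver, note that tckgen’s -step option can be set explicitly to keep the step size consistent across subjects.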

Hi Rob,

Thank you for your reply! We did the checks you suggested and discovered that the streamline length distributions are vastly different in the brains that did not work; that is, there are many more long streamlines than usual. The max streamline length was 250.014 mm even though we set the max length at 250 mm. Does this reflect erroneous streamline generation?

Many thanks again,
Ade

The max streamline length was 250.014 mm even though we set the max length at 250 mm

There’s certainly scope for minor differences in how streamline lengths are quantified; e.g. whether the distances between successive streamline vertices are summed or whether vertices are assumed to be exactly one step size apart, whether any downsampling has occurred (streamlines with fewer vertices may be quantified as being shorter than those with more vertices), etc. An excess of 0.014 mm is really not problematic in this context given the magnitude of uncertainty involved in streamlines tractography, and certainly couldn’t lead to a 35% increase in file size.