I am attempting to construct a connectivity matrix in which each node is a vertex of a FreeSurfer white matter surface. I have transformed the tractogram to anatomical space, extracted the endpoints of each fiber, and visually checked that they lie close to the FreeSurfer white matter surface (there are small regions of discrepancy because an FSL segmentation was used to generate the 5tt image, so the WM/GM boundary differs from the FreeSurfer surface). I originally intended to write a nearest-neighbor routine that would assign each fiber endpoint to its nearest white matter surface vertex and then construct the connectivity matrix manually, but I am wondering whether I can instead make use of the tck2connectome tool to construct this matrix. Is there a way to create a labelled parcellation image in which each node refers to a specific point on the FreeSurfer white matter surface, and use this in tck2connectome?
Thanks so much!
We will hopefully eventually have tools for doing various types of processing based on surface data, including per-vertex connectomes, but there’s a decent amount of work required for such.
I’d also note that per-vertex connectivity matrices require either huge numbers of streamlines (extrapolate this data to your number of vertices), or some form of connectivity smoothing to account for the relative sparsity of streamline counts (have seen a few papers do this, but can’t think of any off the top of my head to cite).
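To put rough numbers on the sparsity point (a back-of-envelope sketch only; the figures are illustrative, not measured from any data):

```python
# Back-of-envelope sketch: how thin the streamline counts get as the
# parcellation is refined. Numbers here are purely illustrative.

def mean_count_per_edge(n_streamlines, n_nodes):
    """Mean streamlines per node pair if connections were spread uniformly."""
    n_edges = n_nodes * (n_nodes - 1) // 2
    return n_streamlines / n_edges

# ~100-node parcellation with a 10M-streamline tractogram:
print(mean_count_per_edge(10_000_000, 100))      # ~2020 streamlines per edge

# Per-vertex connectome (~150k vertices) with the same tractogram:
print(mean_count_per_edge(10_000_000, 150_000))  # well under 1 per edge
```

Real connectomes are nowhere near uniform, but the uniform case already shows why either enormous tractograms or some form of connectivity smoothing is needed at vertex resolution.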
The issue with going from vertex labelling to a 3D image is that the inherent inaccuracies of that process are grossly exaggerated with more fine-grained labelling. With < 100 nodes per hemisphere, the inappropriate labelling of streamlines that terminate right at the interface between parcels when using an image rather than the surface itself is probably not of huge consequence; but when the distance between vertex-based labels gets close to the image voxel size, such inappropriate assignments become more dominant. With per-vertex labelling this could potentially be quite problematic (just imagine two vertices residing in the same image voxel). Therefore, if you really wanted to use tck2connectome, I would suggest generating a template image at a higher spatial resolution (maybe a voxel size of 1/4 of the distance between vertices), and projecting each vertex label to the voxel in which it resides. The default radial search assignment mechanism in tck2connectome may need its maximal distance increased in order to find parcels that are only labelled sparsely within the image, rather than being “filled in”; but utilising that capability would be easier than performing any kind of “dilation” of labels (an alternative solution to the streamline-to-node assignment issue, for which I don’t have a script, and of which the radial search mechanism can be thought of as a more accurate version).
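The label projection step could be sketched roughly as follows (a hypothetical illustration, not an existing script; it assumes the vertices are already in the world space of the target grid, and all names are made up):

```python
# Hypothetical sketch: rasterise per-vertex labels into a high-resolution
# label image that could then be fed to tck2connectome. Assumes the
# vertex coordinates are already in the scanner/world space of the grid.
import numpy as np

def vertex_label_image(vertices, affine, shape):
    """Assign each vertex a unique 1-based label in the voxel it falls in.

    vertices : (N, 3) array of vertex coordinates (mm, world space)
    affine   : 4x4 voxel-to-world affine of the high-resolution grid
    shape    : (i, j, k) dimensions of that grid
    """
    inv = np.linalg.inv(affine)
    # World -> voxel coordinates, then round to the containing voxel.
    vox = (inv[:3, :3] @ vertices.T + inv[:3, 3:4]).T
    ijk = np.rint(vox).astype(int)
    img = np.zeros(shape, dtype=np.uint32)
    for label, (i, j, k) in enumerate(ijk, start=1):
        if 0 <= i < shape[0] and 0 <= j < shape[1] and 0 <= k < shape[2]:
            img[i, j, k] = label  # later vertices overwrite earlier ones
    return img
```

Note the overwrite in the inner loop: if two vertices land in the same voxel, one label silently wins, which is exactly why the voxel size needs to be well below the inter-vertex distance.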
Thanks so much for your response! To your point about the sparsity of the adjacency matrix, I should have mentioned that in my formulation, vertices on the white matter surface are connected both via the triangular mesh connections that define the surface and via the connections derived from fiber tracking, which is the same method as used in Atasoy et al. 2016. Maybe that is one of the papers you were referring to that applies a form of connectivity smoothing? This method also has the benefit of encapsulating information about the shape of the manifold that the adjacency matrix describes. Additionally, I am able to vary the number of vertices in the white matter surface mesh (and therefore the size of the adjacency matrix), which has been helpful for tweaking the accuracy of the vertex assignment for streamline endpoints.
It probably wasn’t the most efficient method, but I actually ended up using a k-d tree nearest-neighbor search routine, and just filtered the streamline connections based on a tolerance for the distance between the endpoints and their nearest surface vertex. I computed the Laplacian matrix of the resulting adjacency matrix, solved for its eigenvectors/eigenvalues, and color-mapped the eigenvectors back onto the white matter surface, which generated some very pretty images that parcellate the surface into regions of increasing spatial frequency (and look promising for the remainder of my analysis).
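For anyone wanting to follow along, the pipeline described above could look something like this with SciPy (a minimal sketch; the function names, tolerance value, and details are my assumptions, not the actual code used):

```python
# Minimal sketch of the described pipeline: k-d tree endpoint-to-vertex
# assignment with a distance tolerance, then graph Laplacian eigenmodes.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def streamline_adjacency(vertices, endpoints, tol=2.0):
    """Assign each streamline endpoint to its nearest surface vertex,
    keeping streamlines whose two endpoints both lie within `tol` mm,
    and accumulate vertex-to-vertex counts.

    vertices  : (V, 3) surface vertex coordinates
    endpoints : (S, 2, 3) start/end coordinates of S streamlines
    """
    tree = cKDTree(vertices)
    d, idx = tree.query(endpoints.reshape(-1, 3))
    d, idx = d.reshape(-1, 2), idx.reshape(-1, 2)
    keep = (d <= tol).all(axis=1)  # both endpoints within tolerance
    A = lil_matrix((len(vertices), len(vertices)))
    for i, j in idx[keep]:
        if i != j:
            A[i, j] += 1
            A[j, i] += 1
    return A.tocsr()

def laplacian_modes(A, k=10):
    """Eigenvectors of the graph Laplacian with the smallest eigenvalues."""
    L = laplacian(A.astype(float))
    return eigsh(L, k=k, which='SM')
```

The low-eigenvalue eigenvectors returned here are the ones that, color-mapped back onto the surface, give the low-spatial-frequency parcellations described.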
Interestingly, when I increase the number of streamlines (before the nearest-neighbor filtering) from 250k to 1M, regardless of the resolution of the surface mesh, the resulting eigenvectors of the graph Laplacian seem to be meaningless. Specifically, the eigenvectors with the lowest eigenvalues do not divide the surface into the expected regions, and instead have nonzero values focused solely in the corpus callosum/thalamus. This is the issue I am currently trying to address. Do you know of any difficulties that arise from spatially transforming large numbers of fibers? I used the same process (described by @jdtournier in a previous thread) to transform both the 250k and 1M tractograms from diffusion space to the anatomical space in which the white matter surface resides. If not, do you have any input as to why the graph Laplacian derived using 1M streamlines and the method described above might give seemingly less meaningful eigenvectors?
Thanks so much for reading and for your help/input!
It probably wasn’t the most efficient method, but I actually ended up using a k-d tree nearest-neighbor search routine, and just filtered the streamline connections based on a tolerance for the distance between the endpoints and their nearest surface vertex.
I believe an octree is the way to go, though you need to be able to deal with the same vertex appearing in multiple lookup domains. @chunhungyeh is the person to talk to if you’re really into the guts of this stuff.
I computed the laplacian matrix of the resulting adjacency matrix, and solved for its eigenvectors/values, and color mapped the eigenvectors back onto the white matter surface, which generated some very pretty images that parcellate the surface into regions of increasing spatial frequency (and look promising for the remainder of my analysis).
Looking forward to seeing it!
Interestingly, when I increase the number of streamlines (before the nearest-neighbor filtering) from 250k to 1M, regardless of the resolution of the surface mesh, the resulting eigenvectors of the graph Laplacian seem to be meaningless.
1M is still a very small number for a whole-brain tractogram looking at vertex-wise connectivities, regardless of the nature of the processing of said data. For comparative purposes I’d typically use ~ 10M streamlines for ~ 100 nodes.
As far as the graph Laplacian is concerned, I’m not familiar with the method (and am not really in a position to be familiarising myself right now given the HBM deadline). But from a naive perspective, there must be some kind of ‘weighting’ between streamlines-based vertex-vertex connectivity and surface-based vertex adjacency if they are to be combined into a single adjacency matrix; so if the contribution of streamlines is not ‘appropriately’ scaled according to the total number of streamlines in the tractogram, the relative influence of each of these two sources of information will be modulated by changing the number of streamlines, which may influence the outcome.
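To illustrate the scaling concern concretely (a toy sketch only; the blending weight `alpha` and the normalisation scheme are my assumptions, not a statement about how Atasoy et al. combine the two terms):

```python
# If raw streamline counts are added to a binary mesh adjacency, scaling
# the tractogram from 250k to 1M streamlines roughly quadruples one term
# while the other stays fixed, shifting the balance between the two.
# Normalising the streamline term by its total count removes that effect.
import numpy as np

def combined_adjacency(A_mesh, A_stream, alpha=0.5):
    """Blend binary mesh adjacency with count-normalised streamline
    connectivity, so the mix is invariant to tractogram size."""
    total = A_stream.sum()
    A_norm = A_stream / total if total > 0 else A_stream
    return (1 - alpha) * A_mesh + alpha * A_norm

# Doubling every streamline count leaves the combined matrix unchanged:
mesh = np.array([[0., 1.], [1., 0.]])
counts = np.array([[0., 4.], [4., 0.]])
assert np.allclose(combined_adjacency(mesh, counts),
                   combined_adjacency(mesh, 2 * counts))
```

Without a normalisation of this kind, changing the streamline count changes the effective weighting of the two information sources, which could plausibly explain eigenvectors behaving differently at 250k versus 1M.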
Anyone else who knows more about the method, feel free to jump in.