Tracking to Depth Electrodes in Grey Matter

Hello Folks,

To start, thank you for being a great and supportive community - that is really what makes MRtrix3 a great experience all around. Now enough brown-nosing and down to business:

I am trying to generate connectomes with SEEG depth electrodes (~130 per patient) as ROIs. Ideally, I would not assign these electrodes to another ROI parcellation like the Desikan-Killiany atlas from FreeSurfer, but rather allow each contact in the grey matter to speak for itself and have a quantifiable structural connectivity measure to all other contacts.

What I have tried:

Approach 1) ACT with 5ttgen and dynamic seeding from WM to generate 10M tracks cropped at the GM-WM interface --> SIFT2 --> assign to connectome with ‘assignment_radial_search’ radii ranging from 2-10 mm.
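Roughly, that pipeline looks something like this (filenames are placeholders and a few options are trimmed; my exact invocations can follow if useful):

```
# 5TT segmentation for ACT from the coregistered T1
5ttgen fsl T1.mif 5tt.mif

# 10M streamlines with ACT, dynamic seeding, cropped at the GM-WM interface
tckgen wmfod.mif tracks_10M.tck -act 5tt.mif -seed_dynamic wmfod.mif \
    -crop_at_gmwmi -select 10M

# Per-streamline weights from SIFT2
tcksift2 tracks_10M.tck wmfod.mif sift2_weights.txt -act 5tt.mif

# Connectome with the electrode contacts as nodes
# (electrode_nodes.mif: one integer label per contact)
tck2connectome tracks_10M.tck electrode_nodes.mif connectome.csv \
    -tck_weights_in sift2_weights.txt -assignment_radial_search 5
```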

With a 10 mm search radius I still get only 85-90% of contacts assigned any tracks at all, because some of the contacts are pretty deep in grey matter (e.g. the amygdala). And as you would guess, electrodes near the GM-WM interface are gobbling up most of the assignments.

Approach 2) Use CSF as an ‘exclude’ region when generating tracks (10M)
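In sketch form (filenames and the 0.5 threshold are placeholders, and this assumes plain tracking with only the exclusion region, i.e. no ACT):

```
# CSF partial volume fraction is 5TT volume 3; threshold it into a binary mask
mrconvert 5tt.mif -coord 3 3 -axes 0,1,2 csf_pve.mif
mrthreshold csf_pve.mif -abs 0.5 csf_mask.mif

# Tracking with the CSF mask as an exclusion region
tckgen wmfod.mif tracks_csf_excl_10M.tck -seed_dynamic wmfod.mif \
    -exclude csf_mask.mif -select 10M
```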

As expected, this generates a lot of garbage tracks that erroneously cross sulci where the CSF mask from 5ttgen isn’t sufficient. The connectome results are really bad, with many contact pairs not having any streamlines assigned to them at all.

Approach 3) Use ACT, but don’t crop at the grey-matter-white-matter interface (10M tracks).
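In other words, the same tckgen call as in Approach 1, just without the cropping option:

```
# Same ACT setup as Approach 1, but without -crop_at_gmwmi,
# so streamlines are not cropped back to the GM-WM interface
tckgen wmfod.mif tracks_nocrop_10M.tck -act 5tt.mif -seed_dynamic wmfod.mif -select 10M
```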

I get very few contact pairs assigned in the connectome unless I greatly increase the search radius again, as in Approach 1.

I understand that tracking into grey matter is completely off-label and not what MRtrix3 is designed to do, but I am wondering what you think is the best way to assign meaningful, quantifiable connectivity information to my electrodes without reverting to an atlas-style large ROI approach. Ideally the really deep electrodes wouldn’t have 0-5 tracks assigned - that is not a fair measure of their connectivity.

Thank you and please let me know your thoughts.

Regards,
Graham

P.S. I can give you the commands I am running, but I wanted an unbiased opinion of feasibility before we got into that.

Hi Graham,

One trick I’ve suggested trying on multiple occasions, but have never gotten around to implementing as a 5ttgen command-line option, is to take the “cortical grey matter” partial volume fraction (5TT volume 0), add it to the “sub-cortical grey matter” partial volume fraction (5TT volume 1), and zero the former. This will permit streamlines to track into the cortex. Really, these names for the different image volumes are a little too prescriptive; a description more faithful to the implementation / capabilities would be something more like “grey matter into which streamlines cannot project” / “grey matter into which streamlines can project”.
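In the absence of such an option, the manipulation can be done manually; a rough (untested) sketch, with image names as placeholders:

```
# Pull apart the five tissue volumes of the 5TT image (stored along axis 3)
mrconvert 5tt.mif -coord 3 0 -axes 0,1,2 cgm.mif    # cortical GM
mrconvert 5tt.mif -coord 3 1 -axes 0,1,2 sgm.mif    # sub-cortical GM
mrconvert 5tt.mif -coord 3 2 -axes 0,1,2 wm.mif
mrconvert 5tt.mif -coord 3 3 -axes 0,1,2 csf.mif
mrconvert 5tt.mif -coord 3 4 -axes 0,1,2 path.mif

# Add the cortical GM fraction to the "sub-cortical" GM fraction, and zero the former
mrcalc cgm.mif sgm.mif -add gm_all.mif
mrcalc cgm.mif 0 -mult cgm_zero.mif

# Re-assemble a 5TT image in which streamlines can track into all grey matter
mrcat cgm_zero.mif gm_all.mif wm.mif csf.mif path.mif -axis 3 5tt_hacked.mif
5ttcheck 5tt_hacked.mif
```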

I would, however, be very skeptical about making inferences based on differential projection lengths of streamlines into the cortex: not only is the determination of where to terminate ill-posed in such a case, but the trajectories themselves are likely to be very inaccurate in the absence of some prior anatomical model. Axons can curve very sharply as they transition from the white matter into the cortex, and tracking using orientations from the diffusion model alone - at typical spatial resolutions - will not be at all faithful to such sharp turns.

I’ve unfortunately delayed superseding the “radial search” mechanism for far too long; I had a better solution years ago, and more difficult cases like this are where the inadequacies of the current solution really come to the fore. But if you’re talking about multiple depths along a single electrode separated by small distances, I suspect the issue is one of information content rather than algorithmic solutions.

Rob

Hi @grahamwjohnson,

For reference, some relevant cautionary information:

I’ve recently asked about someone’s experience with allowing tracking into cortical GM by using the current ACT rules / segmentation type for “sub-cortical grey matter”, and they reported what we suspected all along: the precision of a voxel-wise segmentation of the cortex (even with partial voluming / non-binary values) isn’t sufficient to represent the correct continuity and/or topology of the outer cortical surface, due to how narrow many sulci are in healthy human subjects. In practice, this means many parts of the outer cortical surface touch / “connect” to those of topologically distinct parts of the cortex. Such anatomical constraints then very easily allow for many false positive assignments (or more generally, end points) in entirely wrong parts of the cortex; i.e. they don’t sufficiently constrain streamlines to the correct part of the cortex.

Note that this still holds extensively even for e.g. the HCP data, which has an outstanding anatomical voxel size of 0.7 x 0.7 x 0.7 mm^3. A quick qualitative inspection of the 5TT GM segmentation reveals this:

[screenshot: T1w image]

[screenshot: 5TT GM segmentation, with partial volume]

Good examples of the problem can be seen along the entire length of e.g. the superior temporal sulcus (…at least I think that’s what it’s called), but you’ll easily spot many other locations as well, even just in this screenshot.

Don’t get me wrong; there’s nothing wrong with the segmentation: it’s done a very reasonable job on the anatomical image above, for sure. But the precision, given partial voluming, simply falls short of representing the topology of the outer cortical surface in a reliable way. Consequently, you can’t rely on such a segmentation combined with a rule set that depends on robust identification of the outer cortical surface. I’ve been giving some people a few tips and tricks to pragmatically deal with some of this, but the fundamental limitation is already introduced at this stage: that precision simply cannot be recovered once lost. A mesh segmentation of both the inner and outer surfaces of the cortex allows the required precision to be retained, while imposing solid topological assumptions on the segmentation process itself.
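If you want to do the same kind of inspection on your own data, overlaying the cortical GM fraction on the anatomical image is enough; roughly (filenames are placeholders):

```
# Extract the cortical GM partial volume fraction (5TT volume 0) ...
mrconvert 5tt.mif -coord 3 0 -axes 0,1,2 cgm.mif
# ... and overlay it on the T1-weighted image
mrview T1w.mif -overlay.load cgm.mif -overlay.opacity 0.5
```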

So in any case, take that into account in terms of what precision you can expect from such a process. It might not help much right now, but it’s a relevant reality check.

Cheers,
Thijs

Thanks Rob and Thijs,

I will give the “hacked-ACT” technique a try, but will heed the cautions and only do this for my own experimentation and learning. I really appreciate the help. I hope to continue the conversation in the future, as my gut feeling is that there should be a way to get something meaningful out of this concept.

Graham
