Calculating the sum of streamline weights

Rob and the Team
Apologies if this is a silly question - but what’s the best way to calculate the sum of streamline weights for a pathway? I’ve generated whole-brain tractograms, applied SIFT2, and extracted a pathway I’m interested in with tckedit and appropriate ROIs. This leaves me with the tck file for the pathway of interest, and the txt file with streamline weights for the pathway. Do I just need to sum the weights in the text file?

Hello @DanLumsden,

This thread might be helpful.

Short answer is yes :slight_smile: if you are interested in the connection density for that track.

If you wish to compare this metric across individuals, you should scale the sum of streamline weights by the proportionality coefficient “mu” (derived using option -out_mu in tcksift2).

Cheers,
Nick


Thanks Nick
Equally daft question - what’s the best way to calculate the sum of weights from the SIFT2 output .txt files?

Hi Dan,

I have just been looking into this. After struggling for a bit I found a solution. Might not be the most efficient code, but it worked for me :slight_smile:

My input files were csv files which contain streamline weights for multiple tracks. Each column is a track and the rows contain the streamline weights per track. If your text file looks similar, I think you can use the code below.

To sum the weights in my csv files I used:

for j in $(cat list.txt); do echo ${j}; cd ${j}/dwi/tracks; awk -F'\t' '{for (i=1;i<=NF;i++) sum[i]+=$i} END{for (i in sum) print i"\t"sum[i]}' weights_176.csv > summed_weights_176.csv; cd ~-; done

The list.txt file should contain one subject (directory name) per line. The input file is weights_176.csv. Your output file should contain 2 columns, column 1: the column number of the input file (in my example: weights_176.csv), and column 2: the summed streamline weights. I printed the column number of the input file because the order of the columns in the output file did not correspond with the columns of the input file (awk's "for (i in sum)" loop iterates in arbitrary order). I then sorted on the column numbers in the first column to make sure that the output file corresponds with the input file again:

for i in $(cat list.txt); do echo ${i}; cd ${i}/dwi/tracks; sort -n -k1,1 summed_weights_176.csv > sorted_summed_weights_176.csv; cd ~-; done

Then I printed just the second column so that you have final output files with just the summed weights for each participant:
for i in $(cat list.txt); do echo ${i}; cd ${i}/dwi/tracks; awk '{print $2}' sorted_summed_weights_176.csv > final_summed_weights_176.csv; cd ~-; done

Hope this helps you!
Bauke


If you have just a typical streamline weights file, i.e. only the weights for a single tract of interest, I’d typically just do something like:

sed '/^#/d' weights.csv | tr " " "+" | bc

:nerd_face:

Sorry to resurrect a thread more than one year old!
Rob - running just the sed '/^#/d' textname.txt part prints the contents of the .txt file (the output weights I’d like to sum) to the terminal - but when I use the full command as per your suggestion the output is:

(standard_in) 1: syntax error
I suspect this is me missing a very basic solution.