
Slow drift tracking and correction #807

Closed
Tingchen-G opened this issue Oct 25, 2024 · 12 comments
@Tingchen-G
Hi!

Kilosort 4 does not seem to be tracking slow drift as well as we might hope. We record under anesthesia and do not have a lot of fast drift, so we have set nblocks to 0 to turn off “drift correction”. Will Kilosort still track slow drift as suggested in the Wiki with the statements about “initialize the templates at one end of the recording, and then sweep the recording in time, adjusting the templates to keep track of the slightly-changing waveforms of each neuron” and “slow tracking strategy of incremental small updates to a cluster's template as we progress through the batches”?

What parameter controls the timescale etc over which the templates are updated for this slow tracking?

Could “drift_smoothing” also help? The comment for that parameter is somewhat confusing. It says “amount of gaussian smoothing to apply to the spatiotemporal drift estimation, for correlation, time (units of registration blocks), and y (units of batches) axes. The y smoothing has no effect for nblocks = 1”. Should it say instead “time (units of batches), and y (units of registration blocks)”?

Ideally, we would update our templates on the timescale of ~10 mins. If we want to smooth over time to get the tracking timescale of 10 mins, what would be a good value for the drift_smoothing parameter?
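For scale, assuming Kilosort4's default batch duration of ~2 s (batch_size = 60000 samples at 30 kHz; both values are assumptions here, not from this thread), a 10-minute window spans roughly 300 batches:

```python
# Rough arithmetic: how many batches span a 10-minute smoothing window,
# assuming Kilosort4's default batch duration of ~2 s
# (batch_size = 60000 samples at fs = 30000 Hz; adjust for your recording).
batch_size = 60000  # samples per batch (assumed default)
fs = 30000          # sampling rate in Hz (assumed)

batch_duration = batch_size / fs        # seconds per batch -> 2.0
target_timescale = 10 * 60              # 10 minutes, in seconds
batches_per_window = target_timescale / batch_duration

print(batches_per_window)  # -> 300.0
```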

@jacobpennington
Collaborator

Hello,

All versions of Kilosort after Kilosort2 do not track slow drift in the way you're describing; they only adjust full sections of the probe on a batch-by-batch basis. Adding separate tracking of slower drift is something we're currently working on.

As for the drift_smoothing parameter, you are correct that there is a typo in the dimension labels; I will fix that. I can't give an exact value for the smoothing parameter to achieve the timescale you want, but I will point out two things:

  1. Given that batches are ~2 seconds by default, it's likely that increasing the smoothing enough to get the effect you want would result in unexpected / untested behavior.
  2. There is a bug in the way the drift smoothing is currently implemented (see Drift Correction issues with multi-shank probe #686), so changing drift_smoothing from the default is not recommended at this time.

@Tingchen-G
Author

I see, thank you! What if we turn on drift correction and increase batch size to a few minutes? Do you think it could potentially give better results?

@jacobpennington
Collaborator

That is not likely to work, since it would require too much additional memory. You could try increasing the batch size to something like 6 s or 10 s for your sampling rate (or larger if your machine can handle it), which may help produce better drift estimates, but in principle the type of drift that we're correcting for should be handled well by the smaller batch size as well.
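Converting a target batch duration into the batch_size (in samples) that Kilosort expects is just duration times sampling rate; the 30 kHz rate below is an assumption, so substitute your recording's actual rate:

```python
# Convert desired batch durations to Kilosort's batch_size parameter
# (in samples). The 30 kHz sampling rate here is an assumption.
fs = 30000  # sampling rate in Hz (substitute your own)

for seconds in (2, 6, 10):
    batch_size = seconds * fs
    print(f"{seconds} s -> batch_size = {batch_size}")
```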

@jacobpennington
Collaborator

Also wanted to add: after reading your original post again, it sounds like maybe you never tried the existing drift correction algorithm? You should definitely try setting nblocks=1 (or maybe nblocks=5 if you have a lot of channels) to see if it helps with the type of drift you're seeing. Kilosort4's drift correction isn't exclusively for fast drift; there are just certain kinds of slow drift that it won't capture.
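A minimal configuration sketch for trying this, with placeholder values for the channel count, sampling rate, and data path (the settings keys follow Kilosort4's settings dictionary; adapt everything marked as a placeholder before running on real data):

```python
# Sketch of a Kilosort4 settings dictionary for trying rigid drift
# correction. n_chan_bin and fs are placeholders for your recording.
settings = {
    'n_chan_bin': 385,    # total channels in the binary file (placeholder)
    'fs': 30000,          # sampling rate in Hz (placeholder)
    'nblocks': 1,         # rigid drift correction; try ~5 for many channels
    'batch_size': 60000,  # ~2 s at 30 kHz (the default)
}

# With real data, the sort would then be launched along these lines
# (path and probe are placeholders):
# from kilosort import run_kilosort
# run_kilosort(settings=settings, filename='/path/to/data.bin')

print(settings['nblocks'], settings['batch_size'])
```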

@Tingchen-G
Author

Thank you for your message! We have tried setting nblocks=1 with batch size = 800000 (40 s). The drift amount plot looks good and aligns with our expectations, but the clustering is still not ideal: many of our clusters are split into multiple clusters over time. Would you recommend that we try Kilosort2 instead?

@jacobpennington
Collaborator

If the drift estimates match your expectations, switching to Kilosort2 will not offer any benefit. Have you tried running with nblocks=1 and batch_size = 80000 (2s)?

@Tingchen-G
Author

Hmm, I see. Yes, I have tried this; the results described above are what we see using nblocks=1 and batch_size = 80000 (2 s).

@jacobpennington
Collaborator

Can you please attach some screenshots of the drift plots and the clustering issues you're seeing?

@Tingchen-G
Author

Tingchen-G commented Nov 13, 2024

Sure! Here is the drift plot:
[screenshot: drift amount plot]
This is exactly what we would expect with our drift.

And here is an example of the clustering issue:
[screenshot: first cluster pair]
These two clusters have very similar ISIs and slightly shifted waveforms. Their spike times, as shown in the amplitude view, complement each other.

Here is another example:
[screenshot: second cluster pair]
For these two clusters, again very similar ISIs and waveforms, and complementary spike times.

We are seeing around 3 pairs of questionable clusters per shank, and since we have 16 shanks, there are typically around 40–50 pairs of clusters in one recording that are not ideal.

@jacobpennington
Collaborator

How many good clusters total across all shanks?

@Tingchen-G
Author

There are 443 good clusters in total across all shanks.

@jacobpennington
Collaborator

jacobpennington commented Nov 20, 2024

@Tingchen-G In that case, it sounds like you're not likely to get much (if any) improvement by tweaking parameters. We expect some amount of error with automated sorting, and seeing oversplitting / merging in ~10% of good units is not unreasonable.

I will also point out that it looks like the splits you're seeing are directly related to something happening during the recording. The drift amount starts to change suddenly just before ~30000 s, and that lines up closely with the times at which you're seeing the splits. You can also see that the cross-correlograms for the units you've selected are not refractory (which would be expected for the same unit), which is most likely why they were not merged. Whether that's because they're actually different units (for example, some neurons dropping in or out at that point in the recording) or some other issue is unclear.
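The refractory check described here can be done by eye in Phy, but for illustration, a minimal NumPy sketch of a cross-correlogram is below (spike times in seconds; this is an illustrative histogram, not Kilosort's actual merge criterion):

```python
import numpy as np

def cross_correlogram(st1, st2, bin_ms=1.0, window_ms=50.0):
    """Histogram of spike-time differences (st2 - st1) within +/- window_ms.
    st1, st2: 1-D arrays of spike times in seconds."""
    window = window_ms / 1000.0
    st1, st2 = np.sort(st1), np.sort(st2)
    diffs = []
    j0 = 0
    for t in st1:
        # advance the left edge of the comparison window
        while j0 < len(st2) and st2[j0] < t - window:
            j0 += 1
        j = j0
        while j < len(st2) and st2[j] <= t + window:
            diffs.append(st2[j] - t)
            j += 1
    bins = np.arange(-window_ms, window_ms + bin_ms, bin_ms) / 1000.0
    counts, _ = np.histogram(diffs, bins=bins)
    return counts, bins
```

If the two units were really one neuron, the central +/- 2 ms bins of their cross-correlogram should show a clear dip (a refractory gap); a flat or peaked center is evidence against merging them.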
