
Releases: ksugar/qupath-extension-sam

v0.7.0

03 Sep 17:16
7848e9e

Overview

  • Support SAM2-based 2D+T tracking and 3D segmentation
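One plausible way to seed segmentation across adjacent time points or z-planes is to derive a box prompt for plane t+1 from the mask predicted on plane t. This is a hypothetical sketch of just the box-derivation step (not the extension's actual propagation code), assuming boolean NumPy masks:

```python
import numpy as np

def mask_to_box(mask: np.ndarray) -> tuple[int, int, int, int]:
    """Tight (x0, y0, x1, y1) bounding box of a boolean 2D mask."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

# A 3x3 object on a 16x16 plane; its box could seed the prompt for the
# next plane or frame when propagating through a stack or time series.
plane = np.zeros((16, 16), dtype=bool)
plane[4:7, 5:8] = True
print(mask_to_box(plane))  # (5, 4, 8, 7)
```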

(Demo animation: sam2-sequence-demo)

v0.6.0

02 Aug 17:29
9d40bec

Overview

  • Use the current view as the encoder input in the rectangle mode
    • Previously, a local area around the prompt was cropped for the encoder input; encoding the full current view is generally faster.
  • Support SAM2 models
    • The SAM2 models are available from the samapi server v0.5.0 and above.

v0.5.0

11 Dec 15:33

Overview

v0.4.1

17 Oct 09:33

Overview

  • Properly send the checkpoint URL parameter
    • The checkpoint URL was not sent to the server.
  • Add a catalog of SAM weights to README
  • Add example scripts under src/main/resources/scripts
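The checkpoint-URL bug belongs to a common class: a value collected on the client but never added to the request body. A minimal sketch of the pattern, assuming illustrative field names (not the actual samapi request schema):

```python
import json

def build_request(sam_type, checkpoint_url=None):
    """Build a JSON request body; hypothetical field names for illustration."""
    body = {"type": sam_type}
    if checkpoint_url is not None:
        # The fix amounts to actually including the parameter in the body.
        body["checkpoint_url"] = checkpoint_url
    return json.dumps(body)

payload = build_request("vit_b", "https://example.com/sam_vit_b.pth")
print("checkpoint_url" in json.loads(payload))  # True
```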

v0.4.0

15 Sep 16:22

Overview

Register SAM weights from URL

  1. Select Extensions > SAM from the menu bar.
  2. Press the Register button in the Segment Anything Model dialog.

The weights file is downloaded from the URL and registered on the server. After registration, you can select the weights from the SAM weights dropdown menu.

Parameters

  • SAM type: One of vit_h (huge), vit_l (large), vit_b (base), or vit_t (mobile).
  • Name: The name under which to register the SAM weights. It must be unique within the same SAM type.
  • URL: The URL of the SAM weights file.
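The same constraints could be checked on the client before submitting the dialog. This is a hypothetical validation sketch, with the allowed SAM types taken from the parameters above; the function and its messages are illustrative, not the extension's actual code:

```python
# Allowed values from the "SAM type" parameter above.
ALLOWED_SAM_TYPES = {"vit_h", "vit_l", "vit_b", "vit_t"}

def validate_registration(sam_type: str, name: str, url: str) -> list:
    """Return a list of problems; an empty list means the entry looks valid."""
    problems = []
    if sam_type not in ALLOWED_SAM_TYPES:
        problems.append(f"unknown SAM type: {sam_type!r}")
    if not name.strip():
        problems.append("name must be non-empty (and unique per SAM type)")
    if not url.startswith(("http://", "https://")):
        problems.append("URL must be an http(s) link to the weights file")
    return problems

print(validate_registration("vit_b", "my-weights", "https://example.com/w.pth"))  # []
```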

v0.4.0-SNAPSHOT

14 Sep 10:30
Pre-release

This is a pre-release of v0.4.0 for testing.

v0.3.0

05 Jul 07:51

Overview

  • Support for both point and rectangle foreground prompts (@petebankhead)
    • Ensure each new point is a distinct object while SAM is running (i.e. turn off 'Multipoint' mode)
    • Support line ROIs as a way of adding multiple points in a single object

  • Support point background prompts (@petebankhead)
    • Points with 'ignored*' classifications are passed to the model as background prompts
      (Sidenote: it seems that a large number of background points harms the prediction... or I've done something wrong)

  • Implement 'Live mode' and 'Run for selected' (@petebankhead)
    • 'Live mode' toggle button to turn live detection on or off
    • Alternative 'Run for selected' button to use only the selected foreground and background objects
      • This makes it possible to annotate first, then run SAM across multiple objects - as requested on the forum

  • Support SamAutomaticMaskGenerator

  • Menu items simplified to a single command to launch a dialog to control annotation with SAM (@petebankhead)

    • Provide persistent preferences for key choices (e.g. server, model)
    • Run prediction in a background thread with (indeterminate) progress indicator
    • Help the user with tooltips (and prompts shown at the bottom of the dialog)
  • Handle changing the current image while the command is running (@petebankhead)

    • Send entire field of view for point prediction
      This is useful for one-click annotation of visible structures
  • Include the 'quality' metric as a measurement for objects that are created (@petebankhead)

  • Support z-stacks/time series (by using the image plane; there's no support for 3D objects) (@petebankhead, @rharkes)

  • Optionally assign names & random colors to identify the generated objects (@petebankhead)

  • Optionally return multiple (3) detections instead of 1 (@petebankhead)

    • Select which detection to retain based upon size or quality, or keep all of them
  • Optionally keep the prompt objects, instead of immediately deleting them (@petebankhead)
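Two of the ideas above can be sketched concretely: foreground prompts map to SAM point label 1 and 'ignored*'-classified background prompts to label 0, and with multiple detections the best mask can be kept by its quality score. The masks and scores below are synthetic stand-ins, not real model output:

```python
import numpy as np

# Foreground points get label 1; points with an 'ignored*' classification
# are passed to the model as background points with label 0.
fg = [(120, 80), (130, 92)]
bg = [(40, 40)]
point_coords = np.array(fg + bg, dtype=float)
point_labels = np.array([1] * len(fg) + [0] * len(bg))

# With multiple (3) detections, each mask comes with a quality score; the
# extension can retain one by size or quality, or keep all of them.
masks = [np.zeros((4, 4), dtype=bool) for _ in range(3)]
masks[1][1:3, 1:3] = True
scores = np.array([0.71, 0.93, 0.64])  # synthetic quality values
best = masks[int(np.argmax(scores))]
print(int(best.sum()))  # 4
```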

v0.2.0

16 Jun 06:46

Updates

  • Support any number of channels

Acknowledgements

Thank you for your contributions.


14 Jun 10:24
assets

Update README

v0.1.0

08 Jun 00:26
first commit