Allow training a model on multiple annotations #8071
Conversation
```diff
@@ -240,7 +240,7 @@ const defaultState: OxalisState = {
   showVersionRestore: false,
   showDownloadModal: false,
   showPythonClientModal: false,
-  aIJobModalState: "invisible",
+  aIJobModalState: "neuron_inferral",
```
Suggested change:
```diff
-  aIJobModalState: "neuron_inferral",
+  aIJobModalState: "invisible",
```
Thank you for implementing this, I'm looking forward to using the feature! Very thorough implementation 👍
Notes from testing:
- Either the placeholder or the validation for the CSV modal is wrong. The placeholder indicates that the color and segmentation layer names can be supplied, but the validation complains that only the URL/id should be given.
- The CSV and the training modal are closed when clicking on the backdrop. The second one (and arguably the first one as well) should not do that, because it makes it too easy to lose entered information.
- Depending on whether the modal is triggered from within an annotation or from the models list, different layers are offered for selection (i.e., the fallback layer):
@MichaelBuessemeyer Could you please carry this over the finish line?
- includes extending bbox validation
Ok here we go :)
I removed the placeholder as the current code does not support directly supplying the color and segmentation layer to use for an annotation. If this is a helpful feature I'd prefer to do this as a follow-up :)
I don't understand what the issue is with the option to select layers. An annotation might have multiple volume layers, so the segmentation data that should be used is ambiguous most of the time. The same goes for color layers.
What I wanted to express is that the layer options offered to the user differ depending on where the modal is opened. If you look at the screenshots, one time there are two options offered, and the other time there are three options, although I specified the same annotation. This should not be the case.
Thank you for taking over this PR!
Please also see my previous comment regarding the testing feedback. I can no longer test this on the dev instance, because the AI modal cannot be opened from within an annotation (Philipp patched the code to open the modal automatically, but you already removed that). Please test this locally or let me know if you don't know what I mean :)
```ts
function areBoundingBoxesValid(userBoundingBoxes: UserBoundingBox[] | undefined): {
  valid: boolean;
  reason: string | null;

function areAllAnnotationsInvalid<T extends HybridTracing | APIAnnotation>(
```
The `All` seems to be incorrect and can probably simply be dropped.
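For illustration, the renamed helper might look roughly like this; the body is a sketch inferred from the snippets above (the constraint on `T` and the early-return logic are assumptions, not the actual implementation):

```ts
// Hypothetical rename without the "All"; reuses areBoundingBoxesValid from
// the snippet above. The constraint on T is a stand-in for the real
// HybridTracing | APIAnnotation types, which aren't shown here.
function areAnnotationsInvalid<T extends { userBoundingBoxes?: UserBoundingBox[] }>(
  annotations: T[],
): { valid: boolean; reason: string | null } {
  for (const annotation of annotations) {
    // Report the first annotation whose bounding boxes fail validation.
    const result = areBoundingBoxesValid(annotation.userBoundingBoxes);
    if (!result.valid) {
      return result;
    }
  }
  return { valid: true, reason: null };
}
```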
```ts
    .map(
      (layer) =>
        getTracingForAnnotationType(annotation, layer) as Promise<ServerVolumeTracing>,
    ),
);
// TODO: make bboxs a member again
```
TODO
Oops, forgot to remove this. I already implemented it this way.
```ts
  }
}
```
```ts
console.log(volumeTracings); // TODOM remove
```
TODO
Thanks, sorry, I sometimes tend to skip reading comments 🙈
```ts
const volumeTracings = volumeServerTracings.map((tracing) =>
  serverVolumeToClientVolumeTracing(tracing),
);
let userBoundingBoxes = volumeTracings[0]?.userBoundingBoxes;
if (!userBoundingBoxes) {
  const skeletonLayer = annotation.annotationLayers.find(
    (layer) => layer.typ === "Skeleton",
  );
  if (skeletonLayer) {
    const skeletonTracing = await getTracingForAnnotationType(annotation, skeletonLayer);
    userBoundingBoxes = convertUserBoundingBoxesFromServerToFrontend(
      skeletonTracing.userBoundingBoxes,
    );
  } else {
```
Could you add a comment here explaining that user bounding boxes are always saved in all annotation layers, but if there is no volume annotation layer, the skeleton layer is checked? At least that's how I interpret it, but I was a bit confused at first :)
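If it helps, the requested comment could read roughly like this (a sketch based on the interpretation above; the exact storage behavior would need to be confirmed):

```ts
// User bounding boxes are stored in every annotation layer. Prefer the ones
// from the first volume tracing; if the annotation has no volume layer at
// all, fall back to reading them from the skeleton layer.
let userBoundingBoxes = volumeTracings[0]?.userBoundingBoxes;
```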
```ts
const { TextArea } = Input;
const FormItem = Form.Item;
```

```ts
export type AnnotationInfoForAIJob<GenericAnnotation> = {
```
Please note that I also added this comment. It hopefully explains why all these additional properties (volumeTracings, userBoundingBoxes, volumeTracingResolutions) are needed. Please tell me whether it expresses what I mean, and feel free to suggest wording improvements 🙏
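For readers without the diff at hand, the type presumably has roughly this shape; the field types below are inferred from the property names mentioned above and are assumptions, not the actual definition:

```ts
// Sketch only: VolumeTracing, UserBoundingBox, and Vector3 stand for the
// existing frontend types; the exact fields and their types may differ.
export type AnnotationInfoForAIJob<GenericAnnotation> = {
  // The annotation itself; generic so both client and server shapes fit.
  annotation: GenericAnnotation;
  // Already-fetched volume tracings, so consumers don't re-request them.
  volumeTracings: VolumeTracing[];
  // Bounding boxes used as training data, taken from the volume layers
  // (or from the skeleton layer if no volume layer exists).
  userBoundingBoxes: UserBoundingBox[];
  // Resolutions of each volume tracing, needed for validating the job setup.
  volumeTracingResolutions: Vector3[][];
};
```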
Oh, good find 🎉 I think I fixed it now by simply filtering out volume data layers of the dataset that serve as a fallback layer for one of the volume tracing layers. The rest of your awesome feedback 🦶 🦶 🔙 should also be applied. 👯
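A minimal sketch of the described filtering, with assumed layer shapes (the type and property names below are illustrative, not the actual webKnossos types):

```ts
type DataLayer = { name: string; category: "color" | "segmentation" };
type VolumeTracingLayer = { fallbackLayerName: string | null };

function filterOutFallbackLayers(
  datasetLayers: DataLayer[],
  volumeTracingLayers: VolumeTracingLayer[],
): DataLayer[] {
  // Collect the dataset layers that already serve as a fallback layer of
  // one of the annotation's volume tracing layers.
  const fallbackLayerNames = new Set(
    volumeTracingLayers
      .map((layer) => layer.fallbackLayerName)
      .filter((name): name is string => name != null),
  );
  // Offer only the remaining layers, so the selectable options are the same
  // regardless of where the modal is opened from.
  return datasetLayers.filter(
    (layer) => layer.category !== "segmentation" || !fallbackLayerNames.has(layer.name),
  );
}
```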
Co-authored-by: Daniel <[email protected]>
- add comment about why AnnotationInfoForAIJob is so complex
- filter out dataset segmentation layers that are used as fallback layers of other volume tracing layers of an annotation
…ssos into multi-anno-training
Awesome, thank you for polishing this. I'm looking forward to using this :)
As a note, I've created #8097 for two smaller follow-up issues that came to my mind, but I didn't want to further bloat and delay this PR :) After merging this, we should test it in production, extend the mentioned issue with any other shortcomings we notice, and then implement the follow-ups 🚀
URL of deployed dev instance (used for testing):
Steps to test:
Issues: