Embeddings Models #159

Merged: 66 commits, Jan 29, 2025

Commits
241c74a
Added necessary requirements. Added integration with pytorch-metric-l…
CaptainTrojan Dec 6, 2024
be4c2d2
Add detailed docstring for GhostFaceNetsV2 backbone class
CaptainTrojan Dec 6, 2024
c5c4f16
fix: update docstring for pairwise_distances function in pml_metrics.py
CaptainTrojan Dec 6, 2024
c360f15
Fixed type errors
CaptainTrojan Dec 6, 2024
6eda12a
Implemented improvements and suggestions. Separated GFN into class, b…
CaptainTrojan Dec 16, 2024
0689935
refactor: update type hint for GhostFaceNetsV2 class to use Tensor fr…
CaptainTrojan Dec 16, 2024
9463997
refactor: remove unused unwrap and wrap methods from GhostFaceNetsV2 …
CaptainTrojan Dec 16, 2024
9d5c418
Merge branch 'main' into feat/reid-support
klemen1999 Dec 16, 2024
555fe2a
fix: correct formatting in __all__ list in metrics module
CaptainTrojan Dec 16, 2024
9fe0b79
Improved coverage, explicitly set mdformat github version
CaptainTrojan Dec 16, 2024
b47e79e
Reduced mdformat-gfm version to 0.3.6 to support Python 3.8
CaptainTrojan Dec 17, 2024
8e376a0
Coverage fixes
CaptainTrojan Jan 1, 2025
708691b
Merge branch 'feat/reid-support' of https://github.com/luxonis/luxoni…
CaptainTrojan Jan 1, 2025
23892f9
Merge branch 'main' into feat/reid-support
CaptainTrojan Jan 1, 2025
23e7500
fix: return a model copy for the specified GhostFaceNets variant
CaptainTrojan Jan 1, 2025
a356093
Merge branch 'feat/reid-support' of https://github.com/luxonis/luxoni…
CaptainTrojan Jan 1, 2025
df89eef
initial labels refactor support
kozlov721 Jan 11, 2025
d01816b
updated docs
kozlov721 Jan 14, 2025
e34e893
updated predefined models
kozlov721 Jan 14, 2025
82abeae
updated attached modules
kozlov721 Jan 14, 2025
f48622b
small changes
kozlov721 Jan 14, 2025
7c244af
updated tests
kozlov721 Jan 14, 2025
1de6f74
fixed predefined classification
kozlov721 Jan 14, 2025
8c32014
docs
kozlov721 Jan 14, 2025
8d7685b
fix inspect
kozlov721 Jan 14, 2025
44198cd
Merge branch 'main' into feature/nested-labels
kozlov721 Jan 14, 2025
fbbbc26
fixed tests
kozlov721 Jan 15, 2025
bb5e882
fix debug config
kozlov721 Jan 16, 2025
785f2f8
updated perlin
kozlov721 Jan 16, 2025
c093363
missing doc
kozlov721 Jan 16, 2025
f2cdfa3
reverted bacj to train_rgb
kozlov721 Jan 16, 2025
e32f6ea
fix type issues
kozlov721 Jan 16, 2025
eef219a
replaced deprecated `register_module`
kozlov721 Jan 16, 2025
0379b2a
removed init arguments
kozlov721 Jan 16, 2025
d6344ef
added missing types
kozlov721 Jan 17, 2025
44adfcb
fixed anomaly detection
kozlov721 Jan 17, 2025
c76135c
converting to float
kozlov721 Jan 17, 2025
058f449
helper function
kozlov721 Jan 17, 2025
732ad1f
changes for latest luxonis-ml
kozlov721 Jan 17, 2025
09b1e58
fixed tests
kozlov721 Jan 17, 2025
bcdb303
Merge branch 'feature/nested-labels' into feat/reid-support
kozlov721 Jan 21, 2025
5a10d61
reid fixes
kozlov721 Jan 23, 2025
3dfb8b2
renamed
kozlov721 Jan 23, 2025
f16aad4
simplified
kozlov721 Jan 23, 2025
2fe723e
separated head
kozlov721 Jan 23, 2025
45ade94
simplified
kozlov721 Jan 23, 2025
368188b
updated config
kozlov721 Jan 23, 2025
0041af2
Merge branch 'main' into feat/reid-support
kozlov721 Jan 23, 2025
7b96ab8
small changes
kozlov721 Jan 23, 2025
b14a76c
fix for rectangular images
kozlov721 Jan 23, 2025
1bce803
renamed
kozlov721 Jan 23, 2025
3c0423e
type simplification
kozlov721 Jan 23, 2025
cb970fa
added cross batch memory
kozlov721 Jan 23, 2025
c368723
attached modules improvememnt
kozlov721 Jan 24, 2025
2383283
metadata task override
kozlov721 Jan 25, 2025
dec365b
fix automatic inputs
kozlov721 Jan 25, 2025
1f05da0
cleaned
kozlov721 Jan 25, 2025
6537153
metadata overriding
kozlov721 Jan 25, 2025
fb15dff
type checking
kozlov721 Jan 25, 2025
d680406
embedding tests
kozlov721 Jan 25, 2025
f844d19
fix
kozlov721 Jan 25, 2025
1670566
parametrized tests
kozlov721 Jan 25, 2025
fca9f2c
Merge branch 'main' into feature/embeddings-models
kozlov721 Jan 25, 2025
01de24b
docs
kozlov721 Jan 25, 2025
a008829
Merge branch 'main' into feature/embeddings-models
kozlov721 Jan 25, 2025
8bc9dff
moved colors to luxonis-ml
kozlov721 Jan 29, 2025
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -20,4 +20,4 @@ repos:
hooks:
- id: mdformat
additional_dependencies:
- mdformat-gfm==0.3.6
- mdformat-gfm==0.3.6
47 changes: 47 additions & 0 deletions configs/embeddings_model.yaml
@@ -0,0 +1,47 @@
loader:
name: CustomReIDLoader

model:
name: reid_test
nodes:
- name: GhostFaceNetV2

- name: GhostFaceNetHead
alias: color-embeddings
metadata_task_override: color
params:
embedding_size: 16

losses:
- name: SupConLoss
params:
miner: MultiSimilarityMiner
distance: CosineSimilarity
reducer: ThresholdReducer
reducer_params:
high: 0.3
regularizer: LpRegularizer

metrics:
- name: ClosestIsPositiveAccuracy

- name: MedianDistances

visualizers:
- name: EmbeddingsVisualizer

loader:
params:
dataset_name: ParkingLot

trainer:
preprocessing:
train_image_size: [256, 256]

batch_size: 16
epochs: 100
validation_interval: 10
n_log_images: 8

callbacks:
- name: ExportOnTrainEnd
71 changes: 51 additions & 20 deletions luxonis_train/attached_modules/base_attached_module.py
@@ -1,13 +1,13 @@
import logging
from abc import ABC
from contextlib import suppress
from typing import Generic
from typing import Generic, get_args

from luxonis_ml.utils.registry import AutoRegisterMeta
from torch import Size, Tensor, nn
from typing_extensions import TypeVarTuple, Unpack

from luxonis_train.enums import TaskType
from luxonis_train.enums import Task, TaskType
from luxonis_train.nodes import BaseNode
from luxonis_train.utils import IncompatibleException, Labels, Packet

@@ -57,19 +57,29 @@ class BaseAttachedModule(
labels I{or} segmentation labels.
"""

supported_tasks: list[TaskType | tuple[TaskType, ...]] | None = None
supported_tasks: list[Task | tuple[Task, ...]] | None = None

def __init__(self, *, node: BaseNode | None = None):
super().__init__()
self._node = node
self._epoch = 0

self.required_labels: list[TaskType] = []
if self._node and self.supported_tasks:
self.required_labels: list[Task] = []
if self._node is not None and self.supported_tasks:
for tasks in self.supported_tasks:
if not isinstance(tasks, tuple):
tasks = (tasks,)
for task in tasks:
if isinstance(task, TaskType):
continue
task.name = self.node.metadata_task_override.get(
task.name, task.name
)

module_supported = [
label.value
if isinstance(label, TaskType)
else f"({' + '.join(label)})"
if isinstance(label, Task)
else f"({' + '.join(map(str, label))})"
for label in self.supported_tasks
]
module_supported = f"[{', '.join(module_supported)}]"
@@ -81,7 +91,7 @@ def __init__(self, *, node: BaseNode | None = None):
)
node_tasks = set(self.node.tasks)
for required_labels in self.supported_tasks:
if isinstance(required_labels, TaskType):
if isinstance(required_labels, Task):
required_labels = [required_labels]
else:
required_labels = list(required_labels)
@@ -159,7 +169,7 @@ def class_names(self) -> list[str]:
return self.node.class_names

@property
def node_tasks(self) -> list[TaskType]:
def node_tasks(self) -> list[Task]:
"""Getter for the tasks of the attached node.

@type: dict[TaskType, str]
@@ -201,11 +211,11 @@ def get_label(
@raises ValueError: If the module requires multiple labels and the C{task_type} is not provided.
@raises IncompatibleException: If the label is not found in the labels dictionary.
"""
return self._get_label(labels, task_type)[0]
return self._get_label(labels, task_type)

def _get_label(
self, labels: Labels, task_type: TaskType | None = None
) -> tuple[Tensor, TaskType]:
self, labels: Labels, task_type: Task | None = None
) -> Tensor:
if task_type is None:
if len(self.required_labels) == 1:
task_type = self.required_labels[0]
@@ -221,15 +231,15 @@ def _get_label(
f"Available labels: {list(labels.keys())}. "
f"Missing label: '{task}'."
)
return labels[task], task_type
return labels[task]

raise ValueError(
f"{self.name} requires multiple labels. You must provide the "
"`task_type` argument to extract the desired label."
)

def get_input_tensors(
self, inputs: Packet[Tensor], task_type: TaskType | str | None = None
self, inputs: Packet[Tensor], task_type: Task | str | None = None
) -> list[Tensor]:
"""Extracts the input tensors from the packet.

@@ -259,7 +269,7 @@ def get_input_tensors(
For such cases, the C{prepare} method should be overridden.
"""
if task_type is not None:
if isinstance(task_type, TaskType):
if isinstance(task_type, Task):
if task_type not in self.node_tasks:
raise IncompatibleException(
f"Task {task_type.value} is not supported by the node "
@@ -345,24 +355,45 @@ def prepare(
set(self.supported_tasks) & set(self.node_tasks)
)
x = self.get_input_tensors(inputs)
if labels is None or len(labels) == 0:
if labels is None or not labels:
return x, None # type: ignore
label, task_type = self._get_label(labels)
if task_type in [TaskType.CLASSIFICATION, TaskType.SEGMENTATION]:

label = self._get_label(labels)
generics = self._get_generic_params()
if generics is None or generics[0].__name__ == "Unpack":
return x, label # type: ignore

if len(generics) != 2:
raise RuntimeError(
f"The type signature of '{self.name}' implies a complicated "
f"custom module ({self.name}[{', '.join(g.__name__ for g in generics)}]). "
"Please implement your own `prepare` method. The default "
"`prepare` works only when the generic type of the module "
"is `[Tensor | list[Tensor], Tensor]`."
)

if generics[0] is Tensor:
if len(x) == 1:
x = x[0]
else:
logger.warning(
f"Module {self.name} expects a single tensor as input, "
f"Module '{self.name}' expects a single tensor as input, "
f"but got {len(x)} tensors. Using the last tensor. "
f"If this is not the desired behavior, please override the "
"`prepare` method of the attached module or the `wrap` "
f"method of {self.node.name}."
f"method of '{self.node.name}'."
)
x = x[-1]

return x, label # type: ignore

def _get_generic_params(self) -> tuple[type, ...] | None:
cls = type(self)
try:
return get_args(cls.__orig_bases__[0]) # type: ignore
except Exception:
return None

def _check_node_type_override(self) -> None:
if "node" not in self.__annotations__:
return
47 changes: 45 additions & 2 deletions luxonis_train/attached_modules/losses/README.md
@@ -12,8 +12,9 @@ List of all the available loss functions.
- [`AdaptiveDetectionLoss`](#adaptivedetectionloss)
- [`EfficientKeypointBBoxLoss`](#efficientkeypointbboxloss)
- [`FOMOLocalizationLoss`](#fomolocalizationLoss)
- \[`PrecisionDFLDetectionLoss`\] (# precisiondfldetectionloss)
- \[`PrecisionDFLSegmentationLoss`\] (# precisiondflsegmentationloss)
- [Embedding Losses](#embedding-losses)
- [`PrecisionDFLDetectionLoss`](#precisiondfldetectionloss)
- [`PrecisionDFLSegmentationLoss`](#precisiondflsegmentationloss)

## `CrossEntropyLoss`

@@ -124,6 +125,48 @@ Adapted from [here](https://arxiv.org/abs/2108.07610).
| --------------- | ------- | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `object_weight` | `float` | `500` | Weight for the objects in the loss calculation. Training with a larger `object_weight` in the loss parameters may result in more false positives (FP), but it will improve accuracy. |

## Embedding Losses

We support the following losses taken from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/):

- [AngularLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#angularloss)
- [CircleLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#circleloss)
- [ContrastiveLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#contrastiveloss)
- [DynamicSoftMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#dynamicsoftmarginloss)
- [FastAPLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#fastaploss)
- [HistogramLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#histogramloss)
- [InstanceLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#instanceloss)
- [IntraPairVarianceLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#intrapairvarianceloss)
- [GeneralizedLiftedStructureLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#generalizedliftedstructureloss)
- [LiftedStructureLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#liftedstructureloss)
- [MarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#marginloss)
- [MultiSimilarityLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#multisimilarityloss)
- [NPairsLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#npairsloss)
- [NCALoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#ncaloss)
- [NTXentLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#ntxentloss)
- [PNPLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#pnploss)
- [RankedListLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#rankedlistloss)
- [SignalToNoiseRatioContrastiveLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#signaltonoisecontrastiveloss)
- [SupConLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#supconloss)
- [ThresholdConsistentMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#thresholdconsistentmarginloss)
- [TripletMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#tripletmarginloss)
- [TupletMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#tupletmarginloss)

**Parameters:**

For loss specific parameters, see the documentation pages linked above. In addition to the loss specific parameters, the following parameters are available:

| Key | Type | Default value | Description |
| -------------------- | ------ | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `miner` | `str` | `None` | Name of the miner to use with the loss. If `None`, no miner is used. All miners from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/miners/) are supported. |
| `miner_params` | `dict` | `None` | Parameters for the miner. |
| `distance` | `str` | `None` | Name of the distance metric to use with the loss. If `None`, no distance metric is used. All distance metrics from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/distances/) are supported. |
| `distance_params` | `dict` | `None` | Parameters for the distance metric. |
| `reducer` | `str` | `None` | Name of the reducer to use with the loss. If `None`, no reducer is used. All reducers from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/reducers/) are supported. |
| `reducer_params` | `dict` | `None` | Parameters for the reducer. |
| `regularizer` | `str` | `None` | Name of the regularizer to use with the loss. If `None`, no regularizer is used. All regularizers from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/regularizers/) are supported. |
| `regularizer_params` | `dict` | `None` | Parameters for the regularizer. |

## `PrecisionDFLDetectionLoss`

Adapted from [here](https://arxiv.org/pdf/2207.02696.pdf) and [here](https://arxiv.org/pdf/2209.02976.pdf).
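The parameter table added above maps directly onto a loss entry in a training config. A hedged sketch of how the wrapper keys might be combined (the loss, miner, distance, reducer, and regularizer names come from the pytorch-metric-learning docs; the numeric values are purely illustrative):

```yaml
losses:
  - name: TripletMarginLoss
    params:
      margin: 0.1               # loss-specific parameter
      miner: MultiSimilarityMiner
      miner_params:
        epsilon: 0.1
      distance: CosineSimilarity
      reducer: ThresholdReducer
      reducer_params:
        high: 0.3
      regularizer: LpRegularizer
```

This mirrors the `SupConLoss` entry in `configs/embeddings_model.yaml` earlier in this PR, which uses the same `miner`, `distance`, `reducer`, and `regularizer` keys.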
2 changes: 2 additions & 0 deletions luxonis_train/attached_modules/losses/__init__.py
@@ -3,6 +3,7 @@
from .bce_with_logits import BCEWithLogitsLoss
from .cross_entropy import CrossEntropyLoss
from .efficient_keypoint_bbox_loss import EfficientKeypointBBoxLoss
from .embedding_losses import EmbeddingLossWrapper
from .fomo_localization_loss import FOMOLocalizationLoss
from .ohem_bce_with_logits import OHEMBCEWithLogitsLoss
from .ohem_cross_entropy import OHEMCrossEntropyLoss
@@ -28,6 +29,7 @@
"OHEMCrossEntropyLoss",
"OHEMBCEWithLogitsLoss",
"FOMOLocalizationLoss",
"EmbeddingLossWrapper",
"PrecisionDFLDetectionLoss",
"PrecisionDFLSegmentationLoss",
]