Updated docs
KevinMusgrave committed Nov 28, 2021
1 parent 8e7d649 commit 1499aaf
Showing 2 changed files with 9 additions and 4 deletions.
2 changes: 1 addition & 1 deletion docs/distributed.md
@@ -33,7 +33,7 @@ utils.distributed.DistributedMinerWrapper(miner, efficient=False)
**Parameters**:

* **miner**: The miner to wrap
* **efficient**: If False, memory usage is not optimal, but the resulting gradients will be identical to the non-distributed code. If True, memory usage is decreased, but gradients will differ from non-distributed code.
* **efficient**: If your distributed loss function has ```efficient=True```, then you must also set the distributed miner's ```efficient``` to True.

Example usage:
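A minimal sketch of wrapping both a loss and a miner for distributed training (the specific loss, miner, and training-loop variables are illustrative assumptions, not taken from this commit, and the companion ```DistributedLossWrapper``` is assumed from the same module); note that both wrappers get the same ```efficient``` setting:
```python
from pytorch_metric_learning import losses, miners
from pytorch_metric_learning.utils import distributed as pml_dist

# Wrap an (example) loss and miner; keep the efficient flags consistent.
loss_func = pml_dist.DistributedLossWrapper(losses.ContrastiveLoss(), efficient=True)
miner = pml_dist.DistributedMinerWrapper(miners.MultiSimilarityMiner(), efficient=True)

# inside your DDP training loop, where embeddings and labels come from your model/dataloader
hard_pairs = miner(embeddings, labels)
loss = loss_func(embeddings, labels, hard_pairs)
```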
11 changes: 8 additions & 3 deletions docs/losses.md
@@ -10,7 +10,7 @@ loss = loss_func(embeddings, labels) # in your training for-loop
Or if you are using a loss in conjunction with a [miner](miners.md):

```python
from pytorch_metric_learning import miners, losses
from pytorch_metric_learning import miners
miner_func = miners.SomeMiner()
loss_func = losses.SomeLoss()
miner_output = miner_func(embeddings, labels) # in your training for-loop
loss = loss_func(embeddings, labels, miner_output)
```

@@ -19,21 +19,26 @@

You can specify how losses get reduced to a single value by using a [reducer](reducers.md):
```python
from pytorch_metric_learning import losses, reducers
from pytorch_metric_learning import reducers
reducer = reducers.SomeReducer()
loss_func = losses.SomeLoss(reducer=reducer)
loss = loss_func(embeddings, labels) # in your training for-loop
```
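For instance, a concrete sketch of the same pattern, assuming ```ThresholdReducer``` and ```TripletMarginLoss``` (both chosen here purely for illustration):
```python
from pytorch_metric_learning import losses, reducers

# Only per-triplet losses between 0.1 and 1.0 contribute to the final average.
reducer = reducers.ThresholdReducer(low=0.1, high=1.0)
loss_func = losses.TripletMarginLoss(margin=0.2, reducer=reducer)
loss = loss_func(embeddings, labels)  # in your training for-loop
```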

For tuple losses, you can separate the source of anchors and positives/negatives:
```python
from pytorch_metric_learning import losses
loss_func = losses.SomeLoss()
# anchors will come from embeddings
# positives/negatives will come from ref_emb
loss = loss_func(embeddings, labels, ref_emb=ref_emb, ref_labels=ref_labels)
```
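As an illustrative sketch (the shapes and the choice of ```TripletMarginLoss``` are assumptions), ```ref_emb``` can be a separate batch of embeddings with its own labels:
```python
import torch
from pytorch_metric_learning import losses

loss_func = losses.TripletMarginLoss()
embeddings = torch.randn(32, 128)        # anchors come from here
labels = torch.randint(0, 10, (32,))
ref_emb = torch.randn(64, 128)           # positives/negatives come from here
ref_labels = torch.randint(0, 10, (64,))
loss = loss_func(embeddings, labels, ref_emb=ref_emb, ref_labels=ref_labels)
```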

For classification losses, you can get logits using the ```get_logits``` function:
```python
loss_func = losses.SomeClassificationLoss()
logits = loss_func.get_logits(embeddings)
```
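For example, a sketch assuming ```ArcFaceLoss``` as the classification loss (the sizes are made up):
```python
import torch
from pytorch_metric_learning import losses

loss_func = losses.ArcFaceLoss(num_classes=10, embedding_size=128)
embeddings = torch.randn(32, 128)
logits = loss_func.get_logits(embeddings)  # shape: (32, 10)
predictions = logits.argmax(dim=1)         # predicted class per embedding
```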


## AngularLoss
[Deep Metric Learning with Angular Loss](https://arxiv.org/pdf/1708.01682.pdf){target=_blank}
