Commit
major cleanup in rate distortion
Nathaniel Imel authored and Nathaniel Imel committed Dec 25, 2023
1 parent 726ef9d commit 2c2af64
Showing 5 changed files with 195 additions and 680 deletions.
4 changes: 4 additions & 0 deletions README.md
@@ -27,6 +27,10 @@ First, set up a virtual environment (e.g. via [miniconda](https://docs.conda.io/

`pip install -e .`

3. In addition, this project requires [rdot](https://github.com/nathimel/rdot), a Python library of rate-distortion optimization tools. When a stable version is available, we will add it to the ULTK `setup.py` file; for now, install it via git:

`python3 -m pip install git+https://github.com/nathimel/rdot.git`

## Getting started

- Check out the [examples](https://github.com/CLMBRs/ultk/tree/main/src/examples), starting with a basic signaling game. The examples folder also contains a simple efficient communication analysis of [indefinites](https://github.com/CLMBRs/ultk/tree/main/src/examples/indefinites).
21 changes: 0 additions & 21 deletions src/ultk/effcomm/agent.py
@@ -296,24 +296,3 @@ def __init__(
        for i in range(len(self.R)):
            col = speaker.S[:, i]
            self.R[i] = col @ prior / np.sum(col @ prior)


class BayesianListener(Listener):
    """A Bayesian receiver chooses an interpretation according to p(meaning | word), where
    # BUG: This is extremely misleading since we basically only use this function for IB, and IB assumes a DETERMINISTIC Bayes-derived listener.
    $P(m | w) = \\frac{P(w | m) \cdot P(m)}{P(w)}$
    Furthermore, we sometimes require that each word w is deterministically interpreted as meaning $\hat{m}$ as follows:
    # BUG: This says nothing about determinism.
    $\hat{m}_{w}(u) = \sum_m p(m|w) \cdot m(u)$
    See ultk.effcomm.information for more details.
    """

    def __init__(self, speaker: Speaker, prior: np.ndarray, name: str = None):
        weights = bayes(speaker.normalized_weights(), prior)
        # TODO: Change this whole class to DeterministicBayesOptimalListener, and implement the correct weights!
        super().__init__(speaker.language, weights=weights, name=name)
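The docstring above describes two steps: deriving listener weights p(m | w) from a speaker via Bayes' rule, and optionally collapsing them to a deterministic (one-hot) listener. A minimal NumPy sketch of both steps follows; the function names and matrix orientation (`S[m, w] = p(w | m)`, rows indexed by meanings) are illustrative assumptions, not the repo's actual API.

```python
import numpy as np

def bayes_listener(S: np.ndarray, prior: np.ndarray) -> np.ndarray:
    """Return R with R[w, m] = p(m | w), via Bayes' rule.

    S has shape (num_meanings, num_words) with S[m, w] = p(w | m);
    prior has shape (num_meanings,) with prior[m] = p(m).
    """
    joint = S * prior[:, None]    # p(w, m) = p(w | m) * p(m), shape (meanings, words)
    marginal = joint.sum(axis=0)  # p(w), shape (words,)
    return (joint / marginal).T   # p(m | w), transposed to (words, meanings)

def deterministic_listener(R: np.ndarray) -> np.ndarray:
    """One-hot listener: each word w is mapped entirely to argmax_m p(m | w)."""
    det = np.zeros_like(R)
    det[np.arange(R.shape[0]), R.argmax(axis=1)] = 1.0
    return det

# Toy speaker: two meanings, two words.
S = np.array([[0.9, 0.1],
              [0.2, 0.8]])
prior = np.array([0.5, 0.5])
R = bayes_listener(S, prior)
assert np.allclose(R.sum(axis=1), 1.0)  # each row is a distribution over meanings
```

Collapsing `R` with `deterministic_listener` reflects the docstring's note that the IB analysis assumes a deterministic Bayes-derived listener rather than the soft posterior itself.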
