Commit f9bac91: history.md

svott03 committed Nov 13, 2023
2 parents b530f55 + f1da318
Showing 49 changed files with 2,439 additions and 1,200 deletions.
1 change: 0 additions & 1 deletion .github/workflows/install.yml
@@ -28,7 +28,6 @@ jobs:
["pip install ribs[all]", "conda install -c conda-forge pyribs-all"]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v4
- uses: conda-incubator/setup-miniconda@v2
with:
python-version: ${{ matrix.python-version }}
17 changes: 16 additions & 1 deletion HISTORY.md
@@ -7,13 +7,28 @@
#### API

- Add Base Operator Interface and Emitter Operator Retrieval ({pr}`416`)
- **Backwards-incompatible:** Return occupied booleans in retrieve ({pr}`414`)
- **Backwards-incompatible:** Deprecate `as_pandas` in favor of
`data(return_type="pandas")` ({pr}`408`)
- **Backwards-incompatible:** Replace ArchiveDataFrame batch methods with
`get_field` ({pr}`413`)
- Add field_list and data methods to archives ({pr}`412`)
- Include threshold in `archive.best_elite` ({pr}`409`)
- **Backwards-incompatible:** Replace Elite and EliteBatch with dicts
({pr}`397`)
- **Backwards-incompatible:** Rename `measure_*` columns to `measures_*` in
`as_pandas` ({pr}`396`)
- Add ArrayStore data structure ({pr}`395`, {pr}`398`, {pr}`400`, {pr}`402`,
{pr}`403`, {pr}`404`, {pr}`406`, {pr}`407`, {pr}`411`)
- Add GradientOperatorEmitter to support OMG-MEGA and OG-MAP-Elites ({pr}`348`)
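To make the Elite-to-dict change ({pr}`397`) concrete, here is a minimal sketch. The field names follow the naming used elsewhere in this changelog (`solution`, `objective`, `measures`), but the concrete values and the exact key set are invented for illustration:

```python
# Hypothetical sketch of the dict-based elite representation after
# {pr}`397`. Previously an elite was an Elite object accessed with
# attributes (elite.objective); now it is a plain dict keyed by field name.
elite = {
    "solution": [0.1, 0.2],  # invented values for illustration
    "objective": 1.5,
    "measures": [0.3, 0.7],
}

# Old style (removed): elite.objective
# New style:
objective = elite["objective"]
```

A dict also round-trips naturally through generic, field-name-based access patterns, which fits the direction of the other field-oriented changes above.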

#### Improvements

- Reimplement ArchiveBase using ArrayStore ({pr}`399`)
- Use chunked computation in the CVT brute-force calculation to reduce memory
  usage ({pr}`394`)
- Test pyribs installation in tutorials ({pr}`384`)
- Add cron job for testing installation ({pr}`389`)
- Add cron job for testing installation ({pr}`389`, {pr}`401`)
- Fix broken cross-refs in docs ({pr}`393`)
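The chunked brute-force computation mentioned above can be sketched in plain Python. This is an illustrative reimplementation, not pyribs' actual code: assigning samples to their nearest centroid one chunk at a time means only a `chunk_size x num_centroids` block of pairwise distances exists at once, instead of the full `num_samples x num_centroids` matrix.

```python
import math

def nearest_centroid_chunked(samples, centroids, chunk_size=2):
    """Assign each sample to the index of its nearest centroid, processing
    samples in fixed-size chunks to bound peak memory usage."""
    assignments = []
    for start in range(0, len(samples), chunk_size):
        chunk = samples[start:start + chunk_size]
        # In a vectorized implementation this would be a single distance
        # matrix of shape (len(chunk), len(centroids)); chunking keeps that
        # matrix small instead of materializing all pairwise distances.
        dists = [[math.dist(s, c) for c in centroids] for s in chunk]
        assignments.extend(row.index(min(row)) for row in dists)
    return assignments

centroids = [(0.0, 0.0), (1.0, 1.0)]
samples = [(0.1, 0.2), (0.9, 0.8), (0.4, 0.4)]
print(nearest_centroid_chunked(samples, centroids))  # → [0, 1, 0]
```

In the NumPy version, the inner list comprehension would become one distance-matrix call per chunk, which is where the memory saving actually matters.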

## 0.6.3
92 changes: 92 additions & 0 deletions benchmarks/benchmark.py
@@ -0,0 +1,92 @@
"""Quantifies the performance of different centroid generation techniques.

To measure how well a generation technique (e.g., random centroids or CVT)
performs, we measure the probability of generating a random point within a
certain region defined by the centroid of that region.

The equations for this benchmark can be found in Mouret 2023:
https://dl.acm.org/doi/pdf/10.1145/3583133.3590726

Usage:
    python benchmark.py

This script generates centroids with two techniques, CVT and random
generation. The centroids are then evaluated by the get_score() function,
which outputs a score measuring how far the empirical hit distribution
deviates from uniform (lower is better).
"""

import numpy as np
from scipy.spatial import distance

from ribs.archives import CVTArchive


def get_score(centroids, num_samples, seed):
"""Returns the performance of the generated centroids.

Args:
    centroids (numpy.ndarray): Centroids being evaluated.
    num_samples (int): Number of random points to generate.
    seed (int): RNG seed.

Returns:
    float: Deviation of the empirical hit distribution from uniform
        (lower is better).
"""

num_centroids = centroids.shape[0]
centroid_dim = centroids.shape[1]

rng = np.random.default_rng(seed=seed)
random_samples = rng.random(size=(num_samples, centroid_dim))

num_closest_pts = np.zeros(num_centroids)

closest_idx = distance.cdist(random_samples, centroids).argmin(axis=1)

for idx in closest_idx:
num_closest_pts[idx] += 1
# Note: The paper additionally divides centroid_vol by num_samples. We
# did not include that division here; even so, the results remain similar
# to the paper's.

centroid_vol = num_closest_pts / num_samples

score = np.sum(np.abs(centroid_vol - 1 / num_centroids))

return score


def main():
"""Benchmarks two of the centroid generation techniques used in the
aforementioned paper: CVT and random generation.
"""

score_seed = 1
num_samples = 10000
archive = CVTArchive(
solution_dim=20,
cells=512,
ranges=[(0., 1.), (0., 1.)],
)
cvt_centroids = archive.centroids
print(
"Score for CVT generation: ",
get_score(centroids=cvt_centroids,
num_samples=num_samples,
seed=score_seed))

centroid_gen_seed = 100
num_centroids = 1024
dim = 2
rng = np.random.default_rng(seed=centroid_gen_seed)
random_centroids = rng.random((num_centroids, dim))
print(
"Score for random generation: ",
get_score(centroids=random_centroids,
num_samples=num_samples,
seed=score_seed))


if __name__ == "__main__":
main()
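The score computed by get_score above reduces to the total deviation of the empirical region-hit distribution from the uniform distribution. A tiny pure-Python recomputation with made-up counts (none of these numbers come from the script):

```python
# Toy recomputation of the benchmark score with invented counts.
# Three regions, 10 samples: 5 samples fell in region 0, 3 in region 1,
# and 2 in region 2. An ideal tessellation puts 1/3 of the mass in each.
num_samples = 10
hits = [5, 3, 2]
num_centroids = len(hits)

centroid_vol = [h / num_samples for h in hits]       # empirical mass
uniform = 1 / num_centroids                          # ideal mass per region
score = sum(abs(v - uniform) for v in centroid_vol)  # lower is better
```

Here the score works out to 1/3: region 0 is overfull by 1/6, region 2 is underfull by 2/15, and region 1 is nearly on target.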
6 changes: 1 addition & 5 deletions docs/_templates/autosummary/class.rst
@@ -21,11 +21,7 @@

.. autosummary::
{% if name == "ArchiveDataFrame" %}
~{{ name }}.solution_batch
~{{ name }}.objective_batch
~{{ name }}.measures_batch
~{{ name }}.index_batch
~{{ name }}.metadata_batch
~{{ name }}.get_field
~{{ name }}.iterelites
{% else %}
{% for item in all_methods %}
6 changes: 3 additions & 3 deletions examples/lunar_lander.py
@@ -18,7 +18,7 @@
the --outdir flag) with the following files:
- archive.csv: The CSV representation of the final archive, obtained with
as_pandas().
data().
- archive_ccdf.png: A plot showing the (unnormalized) complementary
cumulative distribution function of objectives in the archive. For
each objective p on the x-axis, this plot shows the number of
@@ -297,7 +297,7 @@ def save_ccdf(archive, filename):
"""
fig, ax = plt.subplots()
ax.hist(
archive.as_pandas(include_solutions=False)["objective"],
archive.data("objective"),
50, # Number of cells.
histtype="step",
density=False,
@@ -395,7 +395,7 @@ def lunar_lander_main(workers=4,
metrics = run_search(client, scheduler, env_seed, iterations, log_freq)

# Outputs.
scheduler.archive.as_pandas().to_csv(outdir / "archive.csv")
scheduler.archive.data(return_type="pandas").to_csv(outdir / "archive.csv")
save_ccdf(scheduler.archive, str(outdir / "archive_ccdf.png"))
save_heatmap(scheduler.archive, str(outdir / "heatmap.png"))
save_metrics(outdir, metrics)
2 changes: 1 addition & 1 deletion examples/sphere.py
@@ -835,7 +835,7 @@ def sphere_main(algorithm,
final_itr = itr == itrs
if itr % log_freq == 0 or final_itr:
if final_itr:
result_archive.as_pandas(include_solutions=final_itr).to_csv(
result_archive.data(return_type="pandas").to_csv(
outdir / f"{name}_archive.csv")

# Record and display metrics.
21 changes: 21 additions & 0 deletions ribs/_utils.py
@@ -2,6 +2,27 @@
import numpy as np


def parse_float_dtype(dtype):
"""Parses a floating point dtype.

Returns:
    np.float32 or np.float64

Raises:
    ValueError: ``dtype`` is not a supported floating point type.
"""
# First, convert str dtypes to np.dtype.
if isinstance(dtype, str):
    dtype = np.dtype(dtype)

# np.dtype is not np.float32 or np.float64, but it compares equal.
if dtype == np.float32:
return np.float32
if dtype == np.float64:
return np.float64

raise ValueError("Unsupported dtype. Must be np.float32 or np.float64")
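A quick sketch of how this helper behaves. The function body is copied from the diff above so the example is self-contained; the calls below are illustrative:

```python
import numpy as np

def parse_float_dtype(dtype):
    # Copied from the diff above for a self-contained demo.
    if isinstance(dtype, str):
        dtype = np.dtype(dtype)
    # np.dtype is not np.float32 or np.float64, but it compares equal.
    if dtype == np.float32:
        return np.float32
    if dtype == np.float64:
        return np.float64
    raise ValueError("Unsupported dtype. Must be np.float32 or np.float64")

print(parse_float_dtype("float32") is np.float32)           # → True
print(parse_float_dtype(np.dtype("float64")) is np.float64)  # → True
```

Strings, np.dtype objects, and the scalar types themselves all normalize to one of the two canonical scalar types; anything else (e.g., an integer dtype) raises ValueError.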


def check_finite(x, name):
"""Checks that x is finite (i.e. not infinity or NaN).
9 changes: 4 additions & 5 deletions ribs/archives/__init__.py
@@ -8,7 +8,7 @@
The archives in this subpackage are arranged in a one-layer hierarchy, with all
archives inheriting from :class:`~ribs.archives.ArchiveBase`. This subpackage
also contains several utilities associated with the archives, such as
:class:`~ribs.archives.Elite` and :class:`~ribs.archives.ArchiveDataFrame`.
:class:`~ribs.archives.ArchiveDataFrame`.
.. autosummary::
:toctree:
@@ -17,9 +17,8 @@
ribs.archives.CVTArchive
ribs.archives.SlidingBoundariesArchive
ribs.archives.ArchiveBase
ribs.archives.ArrayStore
ribs.archives.AddStatus
ribs.archives.Elite
ribs.archives.EliteBatch
ribs.archives.ArchiveDataFrame
ribs.archives.ArchiveStats
ribs.archives.CQDScoreResult
@@ -28,9 +27,9 @@
from ribs.archives._archive_base import ArchiveBase
from ribs.archives._archive_data_frame import ArchiveDataFrame
from ribs.archives._archive_stats import ArchiveStats
from ribs.archives._array_store import ArrayStore
from ribs.archives._cqd_score_result import CQDScoreResult
from ribs.archives._cvt_archive import CVTArchive
from ribs.archives._elite import Elite, EliteBatch
from ribs.archives._grid_archive import GridArchive
from ribs.archives._sliding_boundaries_archive import SlidingBoundariesArchive

@@ -39,8 +38,8 @@
"CVTArchive",
"SlidingBoundariesArchive",
"ArchiveBase",
"ArrayStore",
"AddStatus",
"Elite",
"ArchiveDataFrame",
"ArchiveStats",
"CQDScoreResult",