
feat: enabled saving and evaluation for moderator (#271) #272

Merged
30 commits merged on Jan 18, 2025
Commits
693f792
feat: FastAPI Implementation of Sotopia Part Two (w websocket) (#252)
XuhuiZhou Dec 5, 2024
5a9f4b7
Add customizable evaluation dimensions (#256)
bugsz Dec 8, 2024
dea25d3
Feat/addtional fast apis for non-streaming simulation and managing re…
XuhuiZhou Dec 11, 2024
cadf06d
fix ci error
XuhuiZhou Dec 12, 2024
187a21b
solving pytests
XuhuiZhou Dec 12, 2024
ec5c394
improve the tests
XuhuiZhou Dec 12, 2024
1a1244e
add custom eval fast api (#268)
XuhuiZhou Dec 13, 2024
ae4014e
fix mypy error
XuhuiZhou Dec 13, 2024
ab6903a
aact moderator (#257)
XuhuiZhou Dec 14, 2024
1f4fb0a
Deploy the api to modal (#267)
bugsz Dec 28, 2024
cb6b2d1
Feature/sotopia demo UI (#261)
XuhuiZhou Dec 31, 2024
70293aa
remove dev tag
XuhuiZhou Dec 31, 2024
2526be1
add custom eval
XuhuiZhou Dec 31, 2024
b0b53d8
base dimension
XuhuiZhou Dec 31, 2024
22a1ecf
fix ui mypy
XuhuiZhou Jan 2, 2025
302835a
fix mypy
XuhuiZhou Jan 2, 2025
0e44603
add delete dimension
XuhuiZhou Jan 5, 2025
520a1dd
update streamlit ui
XuhuiZhou Jan 5, 2025
5ffdee3
ignores the ui directory
XuhuiZhou Jan 6, 2025
f9e2ea3
Committing changes before push
openhands-agent Jan 6, 2025
a45e440
pytest for eval dimension
XuhuiZhou Jan 6, 2025
24ca6a3
fix mypy
XuhuiZhou Jan 6, 2025
6b2db2a
clean up comments
bugsz Jan 7, 2025
66da649
feat: enabled saving and evaluation for moderator (#271)
JXZhou0224 Jan 7, 2025
bbf6061
back compatible with evaluators[draft]
XuhuiZhou Jan 7, 2025
7558927
add evaluation node
XuhuiZhou Jan 8, 2025
8fc24b5
implemented the evaluator for multiagent
Jan 15, 2025
a67166e
Merge branch 'main' into feature/multiparty
XuhuiZhou Jan 16, 2025
02ea368
fix mypy errors
Jan 17, 2025
cb441c0
fix mypy errors in sotopia/database
Jan 17, 2025
1 change: 1 addition & 0 deletions .github/.codecov.yml
@@ -6,6 +6,7 @@ ignore:
- ".github" # ignore the .github directory
- "docs" # ignore the docs directory
- "figs" # ignore the figs directory
- "ui" # ignore the ui directory

coverage:
status:
116 changes: 116 additions & 0 deletions docs/pages/concepts/evaluation_dimension.md
@@ -0,0 +1,116 @@
## Overview

Evaluation dimensions are used to evaluate the quality of social interactions.
In the original Sotopia paper, there are 7 dimensions for evaluating the quality of social interactions, which we collectively name the `sotopia` evaluation dimensions:
- believability
- relationship
- knowledge
- secret
- social rules
- financial and material benefits
- goal

The `SotopiaDimensions` can be used directly without initializing the database. It provides a set of predefined evaluation dimensions that are ready to use for evaluating social interactions. For example,

```python
from sotopia.envs.parallel import ParallelSotopiaEnv
from sotopia.envs.evaluators import EvaluationForTwoAgents, ReachGoalLLMEvaluator, RuleBasedTerminatedEvaluator, SotopiaDimensions

env = ParallelSotopiaEnv(
    env_profile=env_profile,
    model_name=model_names["env"],
    action_order="round-robin",
    evaluators=[
        RuleBasedTerminatedEvaluator(max_turn_number=20, max_stale_turn=2),
    ],
    terminal_evaluators=[
        ReachGoalLLMEvaluator(
            model_names["env"],
            EvaluationForTwoAgents[SotopiaDimensions],  # type: ignore
            # TODO check how to do type annotation
        ),
    ],
)
```


However, we observe that in many use cases people may want to evaluate with customized metrics, so we provide a way to build custom evaluation dimensions.
For a quick reference, you can directly check out `examples/use_custom_dimensions.py`.

### CustomEvaluationDimension
The [`CustomEvaluationDimension`](/python_API/database/evaluation_dimensions) is a class that can be used to create a custom evaluation dimension.
There are four parameters:
- `name`: the name of the dimension
- `description`: the description of the dimension
- `range_low`: the minimum score of the dimension (should be an integer)
- `range_high`: the maximum score of the dimension (should be an integer)
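
To make the schema concrete, here is a minimal stdlib sketch of these four fields. `EvaluationDimensionSketch` is a hypothetical stand-in (the real `CustomEvaluationDimension` is a database model), shown only to illustrate the fields and the obvious sanity check on the range:

```python
from dataclasses import dataclass


@dataclass
class EvaluationDimensionSketch:
    """Hypothetical stand-in mirroring CustomEvaluationDimension's fields."""

    name: str
    description: str
    range_low: int
    range_high: int

    def __post_init__(self) -> None:
        # A dimension's minimum score must not exceed its maximum.
        if self.range_low > self.range_high:
            raise ValueError("range_low must not exceed range_high")


transactivity = EvaluationDimensionSketch(
    name="transactivity",
    description="The extent to which an utterance builds on prior turns",
    range_low=0,
    range_high=10,
)
```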

### CustomEvaluationDimensionList
The [`CustomEvaluationDimensionList`](/python_API/database/evaluation_dimensions) is a class that can be used to create a custom evaluation dimension list based on existing dimensions. It helps group multiple dimensions together for a specific use case.
There are two parameters:
- `name`: the name of the dimension list
- `dimension_pks`: the primary keys of the dimensions in the dimension list
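
The key design point is the primary-key indirection: a list stores only `dimension_pks`, and the dimensions themselves live elsewhere in the database. A stdlib sketch of that indirection (the dict shapes and `resolve` helper below are hypothetical illustrations, not the Sotopia API):

```python
# Hypothetical registry of dimensions keyed by primary key.
dimension_registry = {
    "pk_1": {"name": "transactivity", "range_low": 0, "range_high": 10},
    "pk_2": {"name": "verbal_equity", "range_low": 0, "range_high": 10},
}

# A dimension list stores only a name and the member primary keys.
collaboration_list = {"name": "collaboration", "dimension_pks": ["pk_1", "pk_2"]}


def resolve(dimension_list: dict, registry: dict) -> list[dict]:
    """Expand a dimension list into the full dimension records."""
    return [registry[pk] for pk in dimension_list["dimension_pks"]]
```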

### EvaluationDimensionBuilder
The [`EvaluationDimensionBuilder`](/python_API/database/evaluation_dimensions) is a class that can be used to generate a custom evaluation dimension model based on the existing dimensions.


## Usage
### Initialize the database
The default evaluation metric is still `SotopiaDimensions` in `sotopia.envs.evaluators`. There is no `CustomEvaluationDimension` in the database by default. To initialize the database, please refer to `examples/use_custom_dimensions.py`.


### Use the custom evaluation dimensions
After you initialize your customized evaluation dimensions, you can choose to use any one of these methods provided below:

#### Method 1: Choose dimensions by names
```python
evaluation_dimensions = (
    EvaluationDimensionBuilder.select_existing_dimension_model_by_name(
        ["transactivity", "verbal_equity"]
    )
)
```

#### Method 2: Directly choose the grouped evaluation dimension list
```python
evaluation_dimensions = (
    EvaluationDimensionBuilder.select_existing_dimension_model_by_list_name(
        "sotopia"
    )
)
```

#### Method 3: Build a custom evaluation dimension model temporarily
We provide multiple ways to build a custom evaluation dimension model with `EvaluationDimensionBuilder`, specifically:
- `generate_dimension_model`: build an evaluation dimension from existing dimension primary keys.
- `generate_dimension_model_from_dict`: build an evaluation dimension from a dictionary that specifies the parameters of the `CustomEvaluationDimension`. For example
```json
[
    {
        "name": "believability",
        "description": "The believability of the interaction",
        "range_low": 0,
        "range_high": 10
    },
    ...
]
```
- `select_existing_dimension_model_by_name`: build an evaluation dimension from existing dimension names. For example `['believability', 'goal']`
- `select_existing_dimension_model_by_list_name`: build an evaluation dimension from existing `CustomEvaluationDimensionList` list names. For example, directly use `sotopia`.
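
A small stdlib sketch of the dict-based route: parse a spec like the JSON above and check a proposed score against the named dimension's range. `score_in_range` is a hypothetical helper for illustration, not part of the Sotopia API:

```python
import json

spec_json = """
[
    {
        "name": "believability",
        "description": "The believability of the interaction",
        "range_low": 0,
        "range_high": 10
    }
]
"""
dimensions = json.loads(spec_json)


def score_in_range(dims: list[dict], name: str, score: int) -> bool:
    """Check whether a score falls within the named dimension's range."""
    for dim in dims:
        if dim["name"] == name:
            return dim["range_low"] <= score <= dim["range_high"]
    raise KeyError(name)
```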


After you get the evaluation dimension model, you can pass it as a parameter for the `Evaluator`, for example,
```python
evaluation_dimensions = (
    EvaluationDimensionBuilder.select_existing_dimension_model_by_list_name(
        "sotopia"
    )
)
terminal_evaluators=[
    ReachGoalLLMEvaluator(
        model_names["env"],
        EvaluationForTwoAgents[evaluation_dimensions],  # type: ignore
    ),
],
```
2 changes: 1 addition & 1 deletion docs/pages/contribution/contribution.md
@@ -133,7 +133,7 @@ Please refer to [Dev Containers](https://containers.dev/supporting#editors) to s

You can also set up the development environment without Dev Containers. There are three things you will need to set up manually:

- - Python and uv: Please start from an environment supporting Python 3.10+ and install uv using `pip install uv; uv sync --all-extra`.
+ - Python and uv: Please start from an environment supporting Python 3.10+ and install uv using `pip install uv; uv sync --all-extras`. (Note that this will install all the extra dependencies)
- Redis: Please refer to introduction page for the set up of Redis.
- Local LLM (optional): If you don't have access to model endpoints (e.g. OpenAI, Anthropic or others), you can use a local model. You can use Ollama, Llama.cpp, vLLM or many others which support OpenAI compatible endpoints.

6 changes: 6 additions & 0 deletions docs/pages/examples/deployment.md
@@ -0,0 +1,6 @@
# Deploy Sotopia Python API to Modal
We offer a script to deploy Sotopia Python API to [Modal](https://modal.com/).
To do so, simply run the following command from the root of the `sotopia` repository:
```bash
modal deploy sotopia/ui/modal_api_server.py
```
54 changes: 54 additions & 0 deletions docs/pages/python_API/database/evaluation_dimensions.md
@@ -0,0 +1,54 @@
# `evaluation_dimensions.py`

This module provides classes and utilities for defining and managing custom evaluation dimensions within the Sotopia environment. It includes classes for individual dimensions, lists of dimensions, and a builder for creating dimension models.

## Classes

### `CustomEvaluationDimension`

Represents a custom evaluation dimension with specific attributes such as name, description, and score range.

#### Attributes
- `name`: `str`. The name of the dimension.
- `description`: `str`. A brief description of the dimension.
- `range_low`: `int`. The minimum score for the dimension.
- `range_high`: `int`. The maximum score for the dimension.

### `CustomEvaluationDimensionList`

Groups multiple custom evaluation dimensions together.

#### Attributes
- `name`: `str`. The name of the dimension list.
- `dimension_pks`: `list[str]`. A list of primary keys for the dimensions included in the list.

### `EvaluationDimensionBuilder`

Provides utility methods to create and manage evaluation dimension models.

#### Methods
- `create_range_validator(low: int, high: int)`: Creates a validator for score ranges.

**Arguments:**
- `low`: `int`. The minimum score allowed.
- `high`: `int`. The maximum score allowed.

- `build_dimension_model(dimension_ids: list[str])`: Builds a dimension model from primary keys.

**Arguments:**
- `dimension_ids`: `list[str]`. A list of dimension primary keys.

- `build_dimension_model_from_dict(dimensions: list[dict[str, Union[str, int]]])`: Builds a dimension model from a list of dictionaries.

**Arguments:**
- `dimensions`: `list[dict[str, Union[str, int]]]`. A list of dictionaries specifying dimension attributes.

- `select_existing_dimension_model_by_name(dimension_names: list[str])`: Selects a dimension model by dimension names.

**Arguments:**
- `dimension_names`: `list[str]`. A list of dimension names.

- `select_existing_dimension_model_by_list_name(list_name: str)`: Selects a dimension model by list name.

**Arguments:**
- `list_name`: `str`. The name of the dimension list.
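
As a rough illustration, `create_range_validator` can be sketched as a closure over the bounds. This is a guess at the shape, assuming each dimension score is a `(reasoning, score)` pair as in Sotopia's evaluation output, not the actual implementation:

```python
def create_range_validator(low: int, high: int):
    """Return a validator that rejects scores outside [low, high]."""

    def validate(value: tuple[str, int]) -> tuple[str, int]:
        # Each dimension score is assumed to be a (reasoning, score) pair.
        reasoning, score = value
        if not (low <= score <= high):
            raise ValueError(f"score {score} not in [{low}, {high}]")
        return value

    return validate


validator = create_range_validator(0, 10)
```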
21 changes: 18 additions & 3 deletions examples/experiment_eval.py
@@ -17,10 +17,11 @@
    EnvAgentComboStorage,
    EnvironmentProfile,
    EpisodeLog,
+   EvaluationDimensionBuilder,
)
from sotopia.envs.evaluators import (
    EvaluationForTwoAgents,
-   ReachGoalLLMEvaluator,
+   EpisodeLLMEvaluator,
    RuleBasedTerminatedEvaluator,
    SotopiaDimensions,
)
@@ -34,6 +35,7 @@
)
from sotopia.server import run_async_server
from sotopia_conf.gin_utils import parse_gin_flags, run
# from sotopia.database import EvaluationDimensionBuilder

_DEFAULT_GIN_SEARCH_PATHS = [
    os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
@@ -109,6 +111,18 @@ def _iterate_env_agent_combo_not_in_db(
    tag: str | None = None,
) -> Generator[EnvAgentCombo[Observation, AgentAction], None, None]:
    """We iterate over each environment and return the **first** env-agent combo that is not in the database."""
    # loading evaluation metric
    try:
        evaluation_dimensions = EvaluationDimensionBuilder.select_existing_dimension_model_by_list_name(
            "sotopia"
        )  # Initialize your customized dimension, please refer to `examples/use_custom_dimensions.py`
    except Exception as e:
        print(
            "No customized evaluation dimensions found, using default SotopiaDimensions",
            e,
        )
        evaluation_dimensions = SotopiaDimensions

    if not env_ids:
        env_ids = list(EnvironmentProfile.all_pks())
    for env_id in env_ids:
@@ -150,9 +164,10 @@ def _iterate_env_agent_combo_not_in_db(
            RuleBasedTerminatedEvaluator(max_turn_number=20, max_stale_turn=2),
        ],
        terminal_evaluators=[
-           ReachGoalLLMEvaluator(
+           EpisodeLLMEvaluator(
                model_names["env"],
-               EvaluationForTwoAgents[SotopiaDimensions],
+               EvaluationForTwoAgents[evaluation_dimensions],  # type: ignore
                # TODO check how to do type annotation
            ),
        ],
    )
2 changes: 2 additions & 0 deletions examples/experimental/nodes/initial_message_node.py
@@ -18,6 +18,7 @@ def __init__(
        input_tick_channel: str,
        output_channels: list[str],
        env_scenario: str,
        node_name: str,
        redis_url: str = "redis://localhost:6379/0",
    ):
        super().__init__(
@@ -26,6 +27,7 @@
                (output_channel, Text) for output_channel in output_channels
            ],
            redis_url=redis_url,
            node_name=node_name,
        )
        self.env_scenario = env_scenario
        self.output_channels = output_channels