The graphenv Python library is designed to
- make graph search problems more readily expressible as RL problems via an extension of the OpenAI Gym API, while
- enabling their solution via scalable learning algorithms in the popular RLlib library.
RLlib provides out-of-the-box support for both parametrically defined actions and masking of invalid actions. However, native support for action spaces where the action choices change for each state is challenging to implement in a computationally efficient fashion. The graphenv library provides utility classes that simplify the flattening and masking of action observations for choosing from a set of successor states at every node in a graph search.
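To make the masking idea concrete, here is a minimal, library-agnostic sketch (not the graphenv API): a node's variable-length set of successors is padded to a fixed size, and the logits of the padded (invalid) slots are set to negative infinity so the softmax assigns them zero probability.

```python
# Illustrative sketch of invalid-action masking; names are hypothetical,
# not part of graphenv or RLlib.
import math

def masked_softmax(logits, mask):
    """Softmax over logits where mask[i] is False -> probability exactly 0."""
    masked = [l if m else float("-inf") for l, m in zip(logits, mask)]
    peak = max(masked)  # subtract the max for numerical stability
    exps = [math.exp(l - peak) for l in masked]  # exp(-inf) == 0.0
    total = sum(exps)
    return [e / total for e in exps]

# A node with 2 valid successors, padded out to max_num_children = 4.
logits = [1.0, 2.0, 0.5, 0.3]
mask = [True, True, False, False]
probs = masked_softmax(logits, mask)
# probs[2] and probs[3] are exactly 0; the valid entries sum to 1.
```

Masking at the logit level (rather than renormalizing probabilities afterward) keeps gradients well-defined for the valid actions during training.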
Graphenv can be installed with pip:
pip install graphenv
graph-env allows users to create a customized graph search by subclassing the Vertex class. Basic examples are provided in the graphenv/examples folder. The following code snippet shows how to take a random walk down a 1D corridor by sampling uniformly from the valid actions at each state:
import random

from graphenv.examples.hallway.hallway_state import HallwayState
from graphenv.graph_env import GraphEnv

state = HallwayState(corridor_length=10)
env = GraphEnv({"state": state, "max_num_children": 2})

obs = env.make_observation()
done = False
total_reward = 0

while not done:
    # Sample uniformly from the current state's valid successor actions.
    action = random.choice(range(len(env.state.children)))
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    total_reward += reward
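The example above delegates the definition of valid moves to the state's children. The underlying vertex/children pattern can be sketched in plain Python (class and attribute names here are illustrative, not the graphenv API):

```python
# Hypothetical sketch of a graph-search state as a vertex whose successors
# are exposed via a `children` property; not the graphenv Vertex API.
import random

class HallwayVertex:
    def __init__(self, position, corridor_length):
        self.position = position
        self.corridor_length = corridor_length

    @property
    def terminal(self):
        # The search ends when the end of the corridor is reached.
        return self.position == self.corridor_length

    @property
    def children(self):
        # Valid moves: step left (unless at the start) or step right.
        moves = []
        if self.position > 0:
            moves.append(HallwayVertex(self.position - 1, self.corridor_length))
        moves.append(HallwayVertex(self.position + 1, self.corridor_length))
        return moves

# Random walk by choosing uniformly among successors until terminal.
random.seed(0)
vertex = HallwayVertex(0, corridor_length=10)
while not vertex.terminal:
    vertex = random.choice(vertex.children)
```

Because each vertex enumerates only its own valid successors, the action space can vary from state to state; graphenv's utility classes handle padding and masking these variable-length successor sets into fixed-size observations for the learner.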
Additional details on this example are given in the documentation, which is hosted on GitHub Pages.
We welcome bug reports, suggestions for new features, and pull requests. See our contributing guidelines for more details.
graph-env is licensed under the BSD 3-Clause License.
Copyright (c) 2022, Alliance for Sustainable Energy, LLC