Planning


A library for planning minimal sequences of actions that achieve given goal states.

This crate is based on Jeff Orkin's work on Goal-Oriented Action Planning (GOAP). It takes inspiration from tynril's rgoap library, but additionally offers dynamic goal priorities and action costs, as well as arbitrary state types.

The main entry point is the Agent type, which enables extensible planning over multiple goals and actions, including complex, dynamic interactions between them.

Usage

Add the crate as a dependency by running cargo add planning, or by adding the following lines to your Cargo.toml:

[dependencies]
planning = "1.0"

Use the library like so:

use planning::*;
use std::hash::Hash;

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct State {
    num_flowers: u16,
    hungry: bool,
    picnic_set: bool,
}

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum MyAction {
    PickFlower,
    SetPicnic,
    Eat,
}

impl Action<State> for MyAction {
    fn is_applicable(&self, state: &State) -> bool {
        match self {
            MyAction::PickFlower => state.num_flowers < 5,
            MyAction::SetPicnic => !state.picnic_set,
            MyAction::Eat => state.hungry && state.picnic_set,
        }
    }

    fn apply_mut(&self, state: &mut State) {
        match self {
            MyAction::PickFlower => state.num_flowers += 1,
            MyAction::SetPicnic => state.picnic_set = true,
            MyAction::Eat => state.hungry = false,
        }
    }
}

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum MyGoal {
    BouquetMade,
    Eaten,
}

impl Goal<State> for MyGoal {
    fn is_satisfied(&self, state: &State) -> bool {
        match self {
            MyGoal::BouquetMade => state.num_flowers >= 5,
            MyGoal::Eaten => !state.hungry,
        }
    }

    fn priority(&self, state: &State) -> i32 {
        match self {
            MyGoal::BouquetMade => 1,
            MyGoal::Eaten => if state.hungry { 2 } else { 0 },
        }
    }
}

let mut agent = Agent::new(
    State { num_flowers: 0, hungry: true, picnic_set: false },
    vec![MyAction::PickFlower, MyAction::SetPicnic, MyAction::Eat],
    vec![MyGoal::BouquetMade, MyGoal::Eaten],
);

// The agent is hungry, so Eaten is the highest-priority goal and is planned for first.
let (goal, plan, _) = agent.plan_dynamic().unwrap();
assert_eq!(goal, &MyGoal::Eaten);
assert_eq!(plan, vec![MyAction::SetPicnic, MyAction::Eat]);

// Once the agent is no longer hungry, BouquetMade becomes the highest-priority
// unsatisfied goal, and the planner finds a minimal sequence of actions to reach it.
agent.state.hungry = false;
let (goal, plan, _) = agent.plan_dynamic().unwrap();
assert_eq!(goal, &MyGoal::BouquetMade);
assert_eq!(plan, vec![MyAction::PickFlower; 5]);
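
As the manual agent.state.hungry = false above suggests, planning on its own does not mutate the agent's state; executing a plan is left to the caller. A minimal sketch, continuing directly from the example, that simply applies each planned action in order using the apply_mut method defined earlier:

for action in &plan {
    action.apply_mut(&mut agent.state);
}
assert!(goal.is_satisfied(&agent.state)); // BouquetMade is now achieved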

Features

bevy: Agent implements Bevy's Component trait, which lets it drive the AI behavior of units in a game (see the sketch below).

serde: Agent implements Serialize and Deserialize (also sketched below).

Enable the features you need in your Cargo.toml:

[dependencies]
planning = { version = "1.0", features = ["bevy", "serde"] }
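
With the bevy feature enabled, an Agent can be attached to an entity like any other component. Below is a minimal sketch, reusing the State, MyAction, and MyGoal types from the example above; the exact spawn call may vary with your Bevy version:

use bevy::prelude::*;
use planning::Agent;

fn spawn_picnic_agent(mut commands: Commands) {
    let agent = Agent::new(
        State { num_flowers: 0, hungry: true, picnic_set: false },
        vec![MyAction::PickFlower, MyAction::SetPicnic, MyAction::Eat],
        vec![MyGoal::BouquetMade, MyGoal::Eaten],
    );
    // With the bevy feature, Agent implements Component, so it can be spawned
    // directly; other systems can then query for it and call plan_dynamic.
    commands.spawn(agent);
}

With the serde feature, an agent can be serialized like any other value, provided the state, action, and goal types implement Serialize and Deserialize as well (those derives are not shown in the example above). For instance, with serde_json:

let json = serde_json::to_string(&agent).expect("agent should serialize");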
