https://github.com/probcomp/inverseplanning.jl
Agent modeling and inverse planning, using PDDL and Gen.
- Host: GitHub
- URL: https://github.com/probcomp/inverseplanning.jl
- Owner: probcomp
- License: apache-2.0
- Created: 2023-10-22T21:15:57.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2025-04-26T04:24:55.000Z (11 months ago)
- Last Synced: 2025-05-08T22:57:12.024Z (11 months ago)
- Language: Julia
- Size: 1.59 MB
- Stars: 7
- Watchers: 4
- Forks: 0
- Open Issues: 0
- Metadata Files:
- Readme: README.md
- License: LICENSE
# InversePlanning.jl
An architecture for planning, inverse planning, and inference in planning,
using [PDDL](https://github.com/JuliaPlanners/PDDL.jl) and [Gen](https://www.gen.dev/).
## Setup
To use this library in your own projects, press `]` at the Julia REPL to
enter the package manager, then run:
```julia-repl
add PDDL SymbolicPlanners
add Gen GenParticleFilters
add PDDLViz GLMakie
add https://github.com/probcomp/InversePlanning.jl.git
```
To explore the examples provided in this repository, clone this repository,
press `]` at the Julia REPL to enter the package manager, then run the following
commands:
```julia-repl
activate examples
dev ./
instantiate
```
This will activate the `examples` directory as the project environment, set up
your cloned copy of InversePlanning.jl as a dependency, and install any
remaining dependencies.
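Once the packages are installed, a quick way to check the toolchain is to load a domain and solve a planning problem. The sketch below uses only the PDDL.jl and SymbolicPlanners.jl APIs (the file paths are hypothetical; see the `examples/` directory for real domain and problem files):

```julia
using PDDL, SymbolicPlanners

# Load a domain and problem from PDDL files (hypothetical paths).
domain = load_domain("domain.pddl")
problem = load_problem("problem.pddl")

# Construct the initial state and run A* search with a goal-count heuristic.
state = initstate(domain, problem)
planner = AStarPlanner(GoalCountHeuristic())
sol = planner(domain, state, PDDL.get_goal(problem))

# Print the resulting action sequence.
for act in collect(sol)
    println(act)
end
```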
## Examples
InversePlanning.jl can be used to model agents that perform model-based heuristic search
to achieve their goals. Below, we visualize a sampled trace for a replanning
agent that interleaves resource-bounded plan search with plan execution:

We can then perform goal inference for these agents:

Notice that the correct goal is eventually inferred, despite backtracking
by the agent. This is because we model the agent as *boundedly rational*:
it does not always produce optimal plans. Indeed, this modeling assumption
also allows us to infer goals from *failed plans*:

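The flavor of this kind of Bayesian goal inference can be conveyed with a toy Gen model. The sketch below is *not* InversePlanning.jl's actual interface; it is a hypothetical stand-in in which an agent walks toward one of two goal positions on a line, and we infer the goal from noisy position observations via importance resampling:

```julia
using Gen

# Two candidate goal positions (toy stand-in for PDDL goal specifications).
goals = [10.0, -10.0]

@gen function agent_model(T::Int)
    g ~ categorical([0.5, 0.5])           # latent goal index
    pos = 0.0
    for t in 1:T
        pos += sign(goals[g] - pos)       # agent steps toward its goal
        {(:obs, t)} ~ normal(pos, 0.5)    # noisy observation of position
    end
    return g
end

# Observe three steps moving in the positive direction.
obs = choicemap()
for t in 1:3
    obs[(:obs, t)] = Float64(t)
end

# Approximate the posterior over the goal with importance resampling.
(trace, _) = importance_resampling(agent_model, (3,), obs, 100)
println("inferred goal: ", goals[trace[:g]])
```

The real architecture replaces this toy dynamics model with a generative model over boundedly-rational planning agents, and the one-shot importance sampler with sequential Monte Carlo (via GenParticleFilters), so that goal posteriors are updated online as each new action is observed.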
Because we use the Planning Domain Definition Language (PDDL) as our underlying
state representation, our architecture supports a large range of domains,
including the classic Blocks World:

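Because domains are plain PDDL, they can even be parsed inline with PDDL.jl. The fragment below is a hypothetical, stripped-down Blocks World action, just to illustrate that any STRIPS-style domain text can serve as the state representation:

```julia
using PDDL

# Parse a minimal (hypothetical) PDDL domain fragment into a PDDL.jl domain.
domain = parse_domain("""
(define (domain toy-blocks)
  (:requirements :strips)
  (:predicates (on ?x ?y) (clear ?x) (ontable ?x) (handempty))
  (:action pick-up
    :parameters (?x)
    :precondition (and (clear ?x) (ontable ?x) (handempty))
    :effect (and (not (ontable ?x)) (not (clear ?x)) (not (handempty)))))
""")
println(PDDL.get_name(domain))
```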
For more details about the modeling and inference architecture,
consult our paper:
T. Zhi-Xuan, J. L. Mann, T. Silver, J. B. Tenenbaum, and V. K. Mansinghka,
[“Online Bayesian Goal Inference for Boundedly-Rational Planning Agents,”](http://arxiv.org/abs/2006.07532) arXiv:2006.07532 [cs], Jun. 2020.
Full example code for several domains can be found here:
[Gridworld](examples/gridworld/example.jl);
[Doors, Keys & Gems](examples/doors-keys-gems/example.jl);
[Block Words](examples/block-words/example.jl)