https://github.com/tasl-lab/LaMMA-P
# **LaMMA-P: Generalizable Multi-Agent Long-Horizon Task Allocation and Planning with LM-Driven PDDL Planner**
This is the official repository for the LaMMA-P codebase. It includes instructions for configuring and running LaMMA-P on the MAT-THOR benchmark in the AI2-THOR simulator. The paper was accepted at the 2025 IEEE International Conference on Robotics and Automation (ICRA), Atlanta.
[Project Website](https://lamma-p.github.io/) | [Paper](https://arxiv.org/abs/2409.20560) | [Video](https://www.youtube.com/watch?v=1edDuJbk_uk)

**Abstract:** Language models (LMs) possess a strong capability to comprehend natural language, making them effective in translating human instructions into detailed plans for simple robot tasks. Nevertheless, it remains a significant challenge to handle long-horizon tasks, especially in subtask identification and allocation for cooperative heterogeneous robot teams. To address this issue, we propose a Language Model-Driven Multi-Agent PDDL Planner (LaMMA-P), a novel multi-agent task planning framework that achieves state-of-the-art performance on long-horizon tasks. LaMMA-P integrates the strengths of the LMs’ reasoning capability and the traditional heuristic search planner to achieve a high success rate and efficiency while demonstrating strong generalization across tasks. Additionally, we create MAT-THOR, a comprehensive benchmark that features household tasks with two different levels of complexity based on the AI2-THOR environment. The experimental results demonstrate that LaMMA-P achieves a 105% higher success rate and 36% higher efficiency than existing LM-based multi-agent planners.
## Code Organization
The most important directories are:
- `resources/`: Contains robot definitions and PDDL domain files
- `scripts/`: Main execution scripts adapted from [SMART-LLM](https://github.com/SMARTlab-Purdue/SMART-LLM)
- `data/`: Test datasets and example tasks extended from [SMART-LLM](https://github.com/SMARTlab-Purdue/SMART-LLM)
- `downward/`: Fast Downward planner from [Fast Downward](https://github.com/aibasel/downward/)
## Datasets
The repository includes test tasks and robot definitions with different skill sets for heterogeneous robot teams:
- Test tasks: `data/final_test/`
- Robot definitions: `resources/robots.py`
- Floor plans: Refer to [AI2Thor Demo](https://ai2thor.allenai.org/demo) for layouts
## Environment Setup
### 1. Python Environment
Create a conda environment (or virtualenv):
```bash
conda create -n lammap python=3.9
conda activate lammap
```
Install dependencies:
```bash
pip install -r requirements.txt
```
### 2. Fast Downward Planner Setup
The project requires the [Fast Downward Planner](https://github.com/aibasel/downward/). Follow these steps to set it up:
1. Initialize the Fast Downward submodule and enter its directory:
```bash
git submodule update --init --recursive
cd downward
```
2. Build the planner:
```bash
./build.py
```
3. Verify the installation:
```bash
./fast-downward.py --help
```
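Beyond `--help`, you can sanity-check the build by planning a toy problem from Python. The PDDL pair below is a minimal illustration written for this check (it is not from this repository), and `astar(lmcut())` is one of Fast Downward's standard search configurations:

```python
import subprocess
from pathlib import Path

# Minimal illustrative PDDL pair -- toy content, not from this repository.
TOY_DOMAIN = """(define (domain toy)
  (:predicates (at-home))
  (:action go-home :parameters () :precondition (and) :effect (at-home)))"""
TOY_PROBLEM = """(define (problem toy-1)
  (:domain toy) (:init ) (:goal (at-home)))"""

def downward_command(domain_file, problem_file,
                     downward="./downward/fast-downward.py"):
    """Build a Fast Downward invocation (A* search with the LM-cut heuristic)."""
    return [downward, "--plan-file", "sas_plan",
            str(domain_file), str(problem_file),
            "--search", "astar(lmcut())"]

def solve_toy(workdir="."):
    """Write the toy PDDL pair to disk and run the planner on it."""
    domain = Path(workdir, "toy-domain.pddl")
    problem = Path(workdir, "toy-problem.pddl")
    domain.write_text(TOY_DOMAIN)
    problem.write_text(TOY_PROBLEM)
    subprocess.run(downward_command(domain, problem), check=True)
```

If the build succeeded, `solve_toy()` should leave a one-action plan in `sas_plan`.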
### 3. OpenAI API Setup
The code relies on OpenAI's API for LLM functionality. To set this up:
1. Create an API Key at https://platform.openai.com/
2. Create a file named `api_key.txt` in the root folder
3. Paste your OpenAI API Key in the file
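The exact way the scripts read the key is defined in the code; a typical pattern (sketched here, not taken from the repository) is to load and strip the file before configuring the OpenAI client:

```python
from pathlib import Path

def load_api_key(path: str = "api_key.txt") -> str:
    """Read the OpenAI API key from a text file, stripping whitespace."""
    key = Path(path).read_text(encoding="utf-8").strip()
    if not key:
        raise ValueError(f"No API key found in {path}")
    return key
```

With the legacy OpenAI SDK this would be wired up as `openai.api_key = load_api_key()`.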
## Quickstart
### 1. Generate PDDL Plans
To generate PDDL plans for tasks in AI2-THOR floor plans, run:
```bash
python scripts/pddlrun_llmseparate.py --floor-plan <floor-plan-number>
```
Additional parameters:
- `--floor-plan`: AI2-THOR floor plan number to use
- `--gpt-version`: Choose between 'gpt-3.5-turbo', 'gpt-4o', 'gpt-3.5-turbo-16k' (default: 'gpt-4o')
- `--prompt-decompse-set`: Set decomposition prompt set (default: 'pddl_train_task_decomposesep')
- `--prompt-allocation-set`: Set allocation prompt set (default: 'pddl_train_task_allocationsep')
The script will:
1. Decompose the high-level task into subtasks
2. Generate PDDL problem files for each subtask
3. Run the Fast Downward planner on each subtask
4. Combine the solutions into a complete plan
Output files are stored in the `logs` directory, organized by timestamp and task name.
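Steps 3 and 4 above can be sketched roughly as follows; the function names are illustrative only, not the repository's actual API (the real implementation lives in `scripts/pddlrun_llmseparate.py`):

```python
import subprocess
from pathlib import Path

def run_planner(domain: Path, problem: Path, plan_file: Path,
                downward: str = "./downward/fast-downward.py") -> list:
    """Step 3 (sketch): solve one subtask's PDDL problem with Fast Downward."""
    subprocess.run([downward, "--plan-file", str(plan_file),
                    str(domain), str(problem),
                    "--search", "astar(lmcut())"], check=True)
    # Fast Downward writes one action per line, plus a trailing cost comment.
    return [line for line in plan_file.read_text().splitlines()
            if not line.startswith(";")]

def combine_plans(subtask_plans: list) -> list:
    """Step 4 (sketch): concatenate per-subtask solutions into one plan."""
    return [action for plan in subtask_plans for action in plan]
```

In the real pipeline the subtask ordering comes from the LM's decomposition in steps 1 and 2.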
### 2. Execute Plans in AI2Thor
To execute the generated plans in the AI2Thor environment:
First, convert the target plan into executable code:
```bash
python plantocode.py --logs-dir ./logs --validate-code
```
Then, execute the plan:
```bash
python scripts/execute_plan.py --command <plan-folder>
```
Replace `<plan-folder>` with the name of the folder in the `logs` directory that contains your generated plan.
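To execute the most recently generated plan, one convenience (a sketch, assuming `logs/` subfolders are organized by timestamp as described above) is to pick the newest folder programmatically:

```python
from pathlib import Path

def latest_log_folder(logs_dir: str = "./logs") -> str:
    """Return the name of the most recently modified folder in logs/."""
    folders = [p for p in Path(logs_dir).iterdir() if p.is_dir()]
    if not folders:
        raise FileNotFoundError(f"No plan folders found in {logs_dir}")
    return max(folders, key=lambda p: p.stat().st_mtime).name
```

The returned name can then be passed as the `--command` argument.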
## Citation
If you find this work useful for your research, please consider citing:
```bibtex
@inproceedings{zhang2025lamma,
title={LaMMA-P: Generalizable Multi-Agent Long-Horizon Task Allocation and Planning with LM-Driven PDDL Planner},
author={Zhang, Xiaopan and Qin, Hao and Wang, Fuquan and Dong, Yue and Li, Jiachen},
booktitle={2025 IEEE International Conference on Robotics and Automation (ICRA)},
year={2025},
organization={IEEE}
}
```
## Acknowledgement
We sincerely thank the researchers and developers for [SMART-LLM](https://github.com/SMARTlab-Purdue/SMART-LLM), [AI2THOR](https://github.com/allenai/ai2thor), and [Fast Downward](https://github.com/aibasel/downward/) for their amazing work.