Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model
https://github.com/opengvlab/instruct2act
- Host: GitHub
- URL: https://github.com/opengvlab/instruct2act
- Owner: OpenGVLab
- Created: 2023-05-18T07:06:32.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2024-06-23T07:39:41.000Z (12 months ago)
- Last Synced: 2024-11-09T14:40:12.811Z (7 months ago)
- Topics: chatgpt, clip, llm, robotics, segment-anything
- Language: Python
- Homepage:
- Size: 30 MB
- Stars: 333
- Watchers: 3
- Forks: 20
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-ChatGPT-repositories - Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model (Chatbots)
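
For context, the title and topics describe the project's core loop: an LLM (e.g., ChatGPT) turns a multi-modal instruction into executable code that chains perception models (Segment Anything for masks, CLIP for matching masks to text) into robot actions. Below is a minimal, hypothetical sketch of that pattern; every helper name (`segment_objects`, `clip_match`, `pick_and_place`, `run_instruction`) is an illustrative stand-in, not the repository's actual API.

```python
# Hypothetical sketch of an Instruct2Act-style pipeline: LLM-generated code
# drives SAM-style segmentation and CLIP-style matching into a robot action.
# None of these helper names come from the repository itself.

def segment_objects(image):
    """Stand-in for Segment Anything: return candidate object masks."""
    return [{"id": i, "mask": None} for i in range(3)]  # placeholder masks

def clip_match(masks, text):
    """Stand-in for CLIP: score each mask crop against `text`, return the best."""
    return masks[0]  # placeholder: pretend the first mask matches best

def pick_and_place(src, dst):
    """Stand-in for a robot primitive acting on matched regions."""
    print(f"pick object {src['id']} -> place at object {dst['id']}")

def run_instruction(generated_code: str, image) -> None:
    """Execute LLM-generated code against the exposed perception/action API."""
    api = {
        "segment_objects": segment_objects,
        "clip_match": clip_match,
        "pick_and_place": pick_and_place,
        "image": image,
    }
    exec(generated_code, api)

# Example: code an LLM might emit for "put the red block on the green pad".
run_instruction(
    "masks = segment_objects(image)\n"
    "src = clip_match(masks, 'red block')\n"
    "dst = clip_match(masks, 'green pad')\n"
    "pick_and_place(src, dst)\n",
    image=None,
)
```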