https://github.com/apexal/necromancer
(Re)animating a Walmart skeleton with Raspberry Pi and computer vision.
- Host: GitHub
- URL: https://github.com/apexal/necromancer
- Owner: Apexal
- License: MIT
- Created: 2023-10-23T15:57:34.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-08-14T16:00:06.000Z (10 months ago)
- Last Synced: 2025-02-14T09:52:53.691Z (3 months ago)
- Language: Python
- Size: 30.3 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# necromancer
(Re)animating a Walmart skeleton with Raspberry Pi and computer vision.

## Feature List
- [ ] Can see with a camera
- [ ] Can hear with a microphone
- [ ] Can hear with a microphone array
- [ ] Can speak via a speaker
- [ ] Can rotate head
- [ ] Can rotate left arm
- [ ] Can rotate right arm
- [ ] Can identify presence and location of faces
- [ ] Can recognize faces from trained images
- [ ] Can follow faces with head rotation
- [ ] Can open/close mouth
- [ ] Can transcribe heard speech to text
- [ ] Can use text to speech
- [ ] Can interact with LLMs

## Imagined API Example
```python
ryan = Skeleton()

target = ryan.find_face()
if target is not None:
    ryan.look_at(target)

ryan.say("Hello!")
prompt = ryan.listen()
response = ryan.process(prompt)
ryan.say(response)
```
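The imagined API above could be fleshed out roughly as below. This is a hypothetical sketch, not the repo's actual code: the `Skeleton`, `Face`, and backend names are assumptions, and the injected camera/speaker objects stand in for real Raspberry Pi peripherals (camera module, servo driver, speaker) so the control logic can run and be tested off-device.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Face:
    x: int  # horizontal pixel position of the face centre in the frame
    y: int  # vertical pixel position


class Skeleton:
    def __init__(self, camera=None, speaker=None):
        # Hardware backends are injected, so a fake camera can be used
        # when no Raspberry Pi peripherals are attached.
        self.camera = camera
        self.speaker = speaker
        self.heading = 0  # current head rotation in degrees, 0 = straight ahead

    def find_face(self) -> Optional[Face]:
        # Delegate detection to the camera backend (e.g. an OpenCV detector
        # on-device); returns None when no face is visible.
        return self.camera.detect_face() if self.camera else None

    def look_at(self, face: Face, frame_width: int = 640, fov_degrees: int = 60):
        # Map the face's horizontal pixel offset from frame centre to a
        # head-rotation angle, assuming the camera's field of view spans
        # fov_degrees across frame_width pixels.
        offset = face.x - frame_width / 2
        self.heading += int(offset / frame_width * fov_degrees)

    def say(self, text: str):
        # A real backend would drive a speaker via text-to-speech.
        if self.speaker:
            self.speaker.play(text)


class FakeCamera:
    """Stand-in camera that always 'sees' one face, for testing off-device."""

    def detect_face(self) -> Face:
        return Face(x=480, y=200)


ryan = Skeleton(camera=FakeCamera())
target = ryan.find_face()
if target is not None:
    ryan.look_at(target)  # face right of centre, so heading turns positive
```

Injecting the backends keeps the tracking logic (pixel offset to rotation angle) testable without a camera or servos; on the Pi, the same `Skeleton` would be constructed with real peripheral wrappers.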