https://github.com/rangermauve/mind-goblin
Your friendly internet enabled assistant. Swap minds with custom prompts and Ollama
- Host: GitHub
- URL: https://github.com/rangermauve/mind-goblin
- Owner: RangerMauve
- License: agpl-3.0
- Created: 2024-03-18T22:52:42.000Z (about 1 year ago)
- Default Branch: default
- Last Pushed: 2024-12-13T20:22:19.000Z (5 months ago)
- Last Synced: 2025-03-31T15:08:20.257Z (about 2 months ago)
- Language: JavaScript
- Size: 36.1 KB
- Stars: 10
- Watchers: 3
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# mind-goblin
Your friendly internet-enabled assistant. Swap minds with custom prompts and Ollama.

## What should it do?
- take voice or text input
- output either text or voice
- output as notifications
- answer questions you'd ask an LLM
- look through either the camera, the screen, or a static image
  - using mplayer on Linux
- reach out to external data sources when it is unsure
  - Wikipedia
  - npmjs
  - Rust crates
- do basic math
- refactor text from the filesystem or the clipboard
- query databases
  - Postgres
  - Neo4j
  - SQLite

## Dream API:
```
// Listen for input and output speech
mind-goblin think --listen --speak

// Execute a task from the prompt and output to stdout
mind-goblin think "text prompt"

// Run a REPL
mind-goblin repl
> enter text here to get a response

// Read in a file and rewrite it according to the prompt
mind-goblin refactor "capitalize each sentence" ./example.txt

// Get files injected into the context
mind-goblin think "summarize this" ./example.txt

// It should use the webcam when possible
mind-goblin think "What do you see?"

// When given an image file on the CLI, use it instead of the camera
mind-goblin think "What do you see?" ./screenshot.png
```
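Since the project plans to drive Ollama, the core of a `think` command could be sketched as a call to Ollama's local HTTP `/api/generate` endpoint. This is a minimal illustration, not the repo's actual code: the function names (`buildGenerateRequest`, `think`) and the model name `"llama3"` are placeholders I've chosen, and it assumes Ollama is running on its default port 11434.

```javascript
// Sketch: how `mind-goblin think "prompt"` might talk to a local Ollama server.
// Assumes Node 18+ (built-in fetch) and an Ollama instance on localhost:11434.

function buildGenerateRequest(model, prompt) {
  // Ollama's /api/generate takes a JSON body with at least model and prompt;
  // stream: false requests a single JSON response instead of chunked output.
  return {
    url: "http://localhost:11434/api/generate",
    body: { model, prompt, stream: false },
  };
}

async function think(prompt, model = "llama3") {
  const { url, body } = buildGenerateRequest(model, prompt);
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  return data.response; // the generated text
}

// Example (requires a running Ollama server):
// think("What is mplayer?").then(console.log);
```

Keeping request construction separate from the network call would also make it easy to swap in other backends ("minds") later, as the tagline suggests.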