https://github.com/vignshwarar/AI-Employe
Create browser automation as if you were teaching a human using GPT-4 Vision.
- Host: GitHub
- URL: https://github.com/vignshwarar/AI-Employe
- Owner: vignshwarar
- License: AGPL-3.0
- Created: 2023-12-21T19:10:44.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-02-19T03:15:22.000Z (10 months ago)
- Last Synced: 2024-08-02T16:11:01.270Z (5 months ago)
- Topics: automation, automation-testing, gpt-4, multimodal, productivity, rpa
- Language: TypeScript
- Homepage: https://aiemploye.com
- Size: 971 KB
- Stars: 542
- Watchers: 10
- Forks: 48
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
README
## Install
Try without Firebase authentication (temporary solution): https://github.com/vignshwarar/AI-Employe/issues/2#issuecomment-1880328518
Our stack consists of Next.js, Rust, Postgres, MeiliSearch, and Firebase Auth. Please sign up for a Firebase account and create a project.
In Firebase, navigate to Project settings -> Service accounts, generate a private key, and save it as `firebaseAdmin/cert/dev.json` for development or `firebaseAdmin/cert/prod.json` for production.
After that, install the dependencies before starting the app:
- Copy the `.env.sample` file to `.env.production` or `.env.development`
- Fill the `.env` file with your credentials
- Run `npm install`
- Run `npm run db:deploy`
- Run `npm run dev` (for development)
- Run `npm run build` (for production)
- Run `npm run start` (for production)

Once you have run `dev` or `build`, you will find the extension built inside the `./client/extension/build` folder. You can then load this folder as an unpacked extension in your browser.
## How it Works
There are several problems with current browser agents. Here, we explain the problems and how we have solved them.
### Problem 1: Finding the Right Element
There are several techniques for this: sending a shortened form of the HTML to GPT-3, drawing bounding boxes with IDs on a screenshot and sending it to GPT-4-vision to take actions, or directly asking GPT-4-vision for the X and Y coordinates of the element. None of these methods proved reliable; they all led to hallucinations.
To address this, we developed a new technique where we [index](https://github.com/vignshwarar/AI-Employe/blob/db530101c9fd9a0f0d7ce3eeac033e70cb172541/server/src/common/dom/search.rs#L9) the entire DOM in MeiliSearch, allowing GPT-4-vision to generate commands that name which element's inner text to click, copy, or otherwise act on. We then [search](https://github.com/vignshwarar/AI-Employe/blob/db530101c9fd9a0f0d7ce3eeac033e70cb172541/server/src/common/dom/search.rs#L46) the index with the generated text and retrieve the element ID, which we send back to the browser to take the action. There are a few limitations, such as the same text appearing in multiple elements or clicking on an icon, but we have implemented techniques to handle them (icon support is still in progress).
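The idea above can be sketched in a few lines. Here a plain in-memory index stands in for MeiliSearch, and the `DomEntry` shape, element IDs, and matching rules are illustrative assumptions, not the project's actual schema:

```typescript
// Sketch: index each DOM element's inner text against a stable element ID,
// then resolve the text from a model-generated command back to that ID.
interface DomEntry {
  elementId: string; // stable ID assigned to the element in the page (assumed)
  innerText: string; // visible text the model refers to in its commands
}

class DomIndex {
  private entries: DomEntry[] = [];

  // The real project pushes these entries into MeiliSearch; we keep them in memory.
  index(entries: DomEntry[]): void {
    this.entries = entries;
  }

  // Resolve command text (e.g. "Sign in") to an element ID:
  // exact case-insensitive match first, then substring match as a fallback.
  resolve(commandText: string): string | undefined {
    const needle = commandText.trim().toLowerCase();
    const exact = this.entries.find((e) => e.innerText.toLowerCase() === needle);
    if (exact) return exact.elementId;
    const partial = this.entries.find((e) =>
      e.innerText.toLowerCase().includes(needle),
    );
    return partial?.elementId;
  }
}

// Usage: the model says "click the element whose text is 'Sign in'".
const idx = new DomIndex();
idx.index([
  { elementId: "el-1", innerText: "Sign in" },
  { elementId: "el-2", innerText: "Create account" },
]);
console.log(idx.resolve("Sign in")); // "el-1"
```

A real deployment would rank results with MeiliSearch's typo-tolerant full-text scoring rather than simple substring matching, which is what makes this approach robust to small mismatches in the generated text.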
### Problem 2: GPT Derailing from Workflow
To prevent GPT from derailing from tasks, we use a technique akin to retrieval-augmented generation, which we call Actions Augmented Generation. Essentially, when a user creates a workflow, we don't record the screen, microphone, or camera; we record only the DOM element changes for every action (clicking, typing, etc.) the user takes. We then use the workflow title, objective, and recorded actions to generate a set of tasks. Whenever we execute a task, we embed all the actions the user took on that particular domain into the prompt. This way, GPT stays on track with the task even if the user provides only a brief title and objective; their recorded actions guide GPT to complete the task.
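A minimal sketch of this record-then-augment loop follows. The `RecordedAction` shape, the prompt format, and all names are illustrative assumptions, not the project's actual implementation:

```typescript
// Sketch of Actions Augmented Generation: record DOM-level actions per domain
// while the user demonstrates a workflow, then embed that domain's actions
// into the prompt at execution time.
type RecordedAction = {
  domain: string; // e.g. "github.com"
  kind: "click" | "type" | "select";
  targetText: string; // inner text of the element the user acted on
  value?: string; // text the user typed, if any
};

class WorkflowRecorder {
  private actions: RecordedAction[] = [];

  record(action: RecordedAction): void {
    this.actions.push(action);
  }

  // Build the task prompt: workflow title and objective, augmented with every
  // action the user previously took on the current domain.
  buildPrompt(title: string, objective: string, currentDomain: string): string {
    const relevant = this.actions.filter((a) => a.domain === currentDomain);
    const actionLines = relevant.map(
      (a, i) =>
        `${i + 1}. ${a.kind} "${a.targetText}"` +
        (a.value ? ` with "${a.value}"` : ""),
    );
    return [
      `Workflow: ${title}`,
      `Objective: ${objective}`,
      `Actions the user took on ${currentDomain}:`,
      ...actionLines,
    ].join("\n");
  }
}

// Usage: only the github.com actions are embedded in the github.com prompt.
const recorder = new WorkflowRecorder();
recorder.record({ domain: "github.com", kind: "click", targetText: "New issue" });
recorder.record({ domain: "google.com", kind: "type", targetText: "Search", value: "weather" });
console.log(recorder.buildPrompt("File a bug", "Open a new GitHub issue", "github.com"));
```

Filtering by domain is what keeps the prompt grounded: even a vague objective is accompanied by the concrete steps the user demonstrated on that site.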
## Roadmap
- [x] Workflows
- [x] Chat with what you see
- [ ] More action support: scrolling, opening links in a new tab, etc.
- [ ] Loop in workflows
- [ ] Clever Tab management
- [ ] Share workflows
- [ ] Open source models support
- [ ] Community shared workflows
- [ ] Cloud version of AI Employe
- [ ] Control browser by text
- [ ] Control browser by voice
- [ ] more to come...