{"id":13441129,"url":"https://github.com/OthersideAI/self-operating-computer","last_synced_at":"2025-03-20T11:35:26.035Z","repository":{"id":209474599,"uuid":"714143245","full_name":"OthersideAI/self-operating-computer","owner":"OthersideAI","description":"A framework to enable multimodal models to operate a computer.","archived":false,"fork":false,"pushed_at":"2024-08-02T15:50:58.000Z","size":12805,"stargazers_count":8756,"open_issues_count":73,"forks_count":1165,"subscribers_count":124,"default_branch":"main","last_synced_at":"2024-10-23T05:43:42.259Z","etag":null,"topics":["automation","openai","pyautogui"],"latest_commit_sha":null,"homepage":"https://www.hyperwriteai.com/self-operating-computer","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/OthersideAI.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-11-04T03:13:45.000Z","updated_at":"2024-10-23T04:23:01.000Z","dependencies_parsed_at":"2024-02-09T05:23:36.746Z","dependency_job_id":"5fe98e80-4df4-40cb-9d15-ae6c3f7fe61a","html_url":"https://github.com/OthersideAI/self-operating-computer","commit_stats":null,"previous_names":["othersideai/self-operating-computer"],"tags_count":30,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OthersideAI%2Fself-operating-computer","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OthersideAI%2Fself-operating-computer/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OthersideAI%2Fself-o
perating-computer/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OthersideAI%2Fself-operating-computer/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/OthersideAI","download_url":"https://codeload.github.com/OthersideAI/self-operating-computer/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":221752301,"owners_count":16874957,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["automation","openai","pyautogui"],"created_at":"2024-07-31T03:01:30.263Z","updated_at":"2025-03-20T11:35:26.017Z","avatar_url":"https://github.com/OthersideAI.png","language":"Python","readme":"ome\n\u003ch1 align=\"center\"\u003eSelf-Operating Computer Framework\u003c/h1\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cstrong\u003eA framework to enable multimodal models to operate a computer.\u003c/strong\u003e\n\u003c/p\u003e\n\u003cp align=\"center\"\u003e\n  Using the same inputs and outputs as a human operator, the model views the screen and decides on a series of mouse and keyboard actions to reach an objective. 
Released Nov 2023, the Self-Operating Computer Framework was one of the first examples of using a multimodal model to view the screen and operate a computer.\n\u003c/p\u003e\n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"https://github.com/OthersideAI/self-operating-computer/blob/main/readme/self-operating-computer.png\" width=\"750\"  style=\"margin: 10px;\"/\u003e\n\u003c/div\u003e\n\n\u003c!--\n:rotating_light: **OUTAGE NOTIFICATION: gpt-4o**\n**This model is currently experiencing an outage so the self-operating computer may not work as expected.**\n--\u003e\n\n\n## Key Features\n- **Compatibility**: Designed for various multimodal models.\n- **Integration**: Currently integrated with **GPT-4o, o1, Gemini Pro Vision, Claude 3, Qwen-VL and LLaVa.**\n- **Future Plans**: Support for additional models.\n\n## Demo\nhttps://github.com/OthersideAI/self-operating-computer/assets/42594239/9e8abc96-c76a-46fb-9b13-03678b3c67e0\n\n\n## Run `Self-Operating Computer`\n\n1. **Install the project**\n```\npip install self-operating-computer\n```\n2. **Run the project**\n```\noperate\n```\n3. **Enter your OpenAI Key**: If you don't have one, you can obtain an OpenAI key [here](https://platform.openai.com/account/api-keys). If you need to change your key at a later point, run `vim .env` to open the `.env` file and replace the old key. \n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"https://github.com/OthersideAI/self-operating-computer/blob/main/readme/key.png\" width=\"300\"  style=\"margin: 10px;\"/\u003e\n\u003c/div\u003e\n\n4. 
**Give Terminal app the required permissions**: As a last step, the Terminal app will ask for permission for \"Screen Recording\" and \"Accessibility\" in the \"Security \u0026 Privacy\" page of Mac's \"System Preferences\".\n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"https://github.com/OthersideAI/self-operating-computer/blob/main/readme/terminal-access-1.png\" width=\"300\"  style=\"margin: 10px;\"/\u003e\n  \u003cimg src=\"https://github.com/OthersideAI/self-operating-computer/blob/main/readme/terminal-access-2.png\" width=\"300\"  style=\"margin: 10px;\"/\u003e\n\u003c/div\u003e\n\n## Using `operate` Modes\n\n#### OpenAI models\n\nThe default model for the project is `gpt-4o`, which you can use by simply typing `operate`. To try running OpenAI's new `o1` model, use the command below. \n\n```\noperate -m o1-with-ocr\n```\n\n\n### Multimodal Models  `-m`\nTry Google's `gemini-pro-vision` by following the instructions below. Start `operate` with the Gemini model:\n```\noperate -m gemini-pro-vision\n```\n\n**Enter your Google AI Studio API key when the terminal prompts you for it.** If you don't have one, you can obtain a key [here](https://makersuite.google.com/app/apikey) after setting up your Google AI Studio account. You may also need to [authorize credentials for a desktop application](https://ai.google.dev/palm_docs/oauth_quickstart). It took me a bit of time to get it working; if anyone knows a simpler way, please make a PR.\n\n#### Try Claude `-m claude-3`\nUse Claude 3 with Vision to see how it stacks up to GPT-4-Vision at operating a computer. Navigate to the [Claude dashboard](https://console.anthropic.com/dashboard) to get an API key and run the command below to try it. \n\n```\noperate -m claude-3\n```\n\n#### Try Qwen `-m qwen-vl`\nUse Qwen-VL with Vision to see how it stacks up to GPT-4-Vision at operating a computer. Navigate to the [Qwen dashboard](https://bailian.console.aliyun.com/) to get an API key and run the command below to try it. 
\n\n```\noperate -m qwen-vl\n```\n\n#### Try LLaVa Hosted Through Ollama `-m llava`\nIf you wish to experiment with the Self-Operating Computer Framework using LLaVA on your own machine, you can with Ollama!   \n*Note: Ollama currently only supports macOS and Linux. Windows support is now in preview.*   \n\nFirst, install Ollama on your machine from https://ollama.ai/download.   \n\nOnce Ollama is installed, pull the LLaVA model:\n```\nollama pull llava\n```\nThis downloads the model to your machine; it takes approximately 5 GB of storage.   \n\nWhen Ollama has finished pulling LLaVA, start the server:\n```\nollama serve\n```\n\nThat's it! Now start `operate` and select the LLaVA model:\n```\noperate -m llava\n```   \n**Important:** Error rates when using LLaVA are very high. This is simply intended to be a base to build off of as local multimodal models improve over time.\n\nLearn more about Ollama at its [GitHub repository](https://www.github.com/ollama/ollama).\n\n### Voice Mode `--voice`\nThe framework supports voice inputs for the objective. Try voice by following the instructions below. \n**Clone the repo** to a directory on your computer:\n```\ngit clone https://github.com/OthersideAI/self-operating-computer.git\n```\n**Change into the directory**:\n```\ncd self-operating-computer\n```\n**Install the additional `requirements-audio.txt`**:\n```\npip install -r requirements-audio.txt\n```\n**Install device requirements**\nFor Mac users:\n```\nbrew install portaudio\n```\nFor Linux users:\n```\nsudo apt install portaudio19-dev python3-pyaudio\n```\n**Run with voice mode**:\n```\noperate --voice\n```\n\n### Optical Character Recognition Mode `-m gpt-4-with-ocr`\nThe Self-Operating Computer Framework now integrates Optical Character Recognition (OCR) capabilities with the `gpt-4-with-ocr` mode. This mode gives GPT-4 a hash map of clickable elements by coordinates. 
GPT-4 can decide to `click` an element by its text, and the code then references the hash map to get the coordinates of the element GPT-4 chose to click. \n\nBased on recent tests, OCR performs better than `som` and vanilla GPT-4, so we made it the default for the project. To use the OCR mode, simply run `operate`; `operate -m gpt-4-with-ocr` also works. \n\n### Set-of-Mark Prompting `-m gpt-4-with-som`\nThe Self-Operating Computer Framework now supports Set-of-Mark (SoM) Prompting with the `gpt-4-with-som` command. This new visual prompting method enhances the visual grounding capabilities of large multimodal models.\n\nLearn more about SoM Prompting in the detailed arXiv paper [here](https://arxiv.org/abs/2310.11441).\n\nFor this initial version, a simple YOLOv8 model is trained for button detection, and the `best.pt` file is included under `model/weights/`. Users are encouraged to swap in their own `best.pt` file to evaluate performance improvements. If your model outperforms the existing one, please contribute by creating a pull request (PR).\n\nStart `operate` with the SoM model:\n\n```\noperate -m gpt-4-with-som\n```\n\n\n\n## Contributions are Welcome!\n\nIf you want to contribute yourself, see [CONTRIBUTING.md](https://github.com/OthersideAI/self-operating-computer/blob/main/CONTRIBUTING.md).\n\n## Feedback\n\nFor any input on improving this project, feel free to reach out to [Josh](https://twitter.com/josh_bickett) on Twitter. \n\n## Join Our Discord Community\n\nFor real-time discussions and community support, join our Discord server. 
\n- If you're already a member, join the discussion in [#self-operating-computer](https://discord.com/channels/877638638001877052/1181241785834541157).\n- If you're new, first [join our Discord Server](https://discord.gg/YqaKtyBEzM) and then navigate to the [#self-operating-computer](https://discord.com/channels/877638638001877052/1181241785834541157) channel.\n\n## Follow HyperWriteAI for More Updates\n\nStay updated with the latest developments:\n- Follow HyperWriteAI on [Twitter](https://twitter.com/HyperWriteAI).\n- Follow HyperWriteAI on [LinkedIn](https://www.linkedin.com/company/othersideai/).\n\n## Compatibility\n- This project is compatible with macOS, Windows, and Linux (with X server installed).\n\n## OpenAI Rate Limiting Note\nThe `gpt-4o` model is required. To unlock access to this model, your account needs to spend at least \\$5 in API credits. Pre-paying for these credits will unlock access if you haven't already spent the minimum \\$5.   \nLearn more **[here](https://platform.openai.com/docs/guides/rate-limits?context=tier-one)**.\n","funding_links":[],"categories":["Python","[Self-operating computer](https://www.hyperwriteai.com/self-operating-computer)","精选文章","多模态大模型","Projects","Learning","5. Computer Use Agents (CUA)","automation","Repos","Tools","GUI \u0026 Computer Control AI Agents","Autonomous Web Agents","Projects and Implementations","AI开源项目"],"sub_categories":["Links","AI Agent","网络服务_其他","Frameworks \u0026 Models","Repositories","4.3 Tree Search + Web Agents","Computer Use","Desktop Automation","Computer-use Agents"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FOthersideAI%2Fself-operating-computer","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FOthersideAI%2Fself-operating-computer","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FOthersideAI%2Fself-operating-computer/lists"}