{"id":13576356,"url":"https://github.com/OpenAdaptAI/OpenAdapt","last_synced_at":"2025-04-05T05:31:39.173Z","repository":{"id":152921527,"uuid":"627024850","full_name":"OpenAdaptAI/OpenAdapt","owner":"OpenAdaptAI","description":"Open Source Generative Process Automation (i.e. Generative RPA). AI-First Process Automation with Large ([Language (LLMs) / Action (LAMs) / Multimodal (LMMs)] / Visual Language (VLMs)) Models","archived":false,"fork":false,"pushed_at":"2024-12-09T23:05:51.000Z","size":30240,"stargazers_count":1021,"open_issues_count":401,"forks_count":143,"subscribers_count":9,"default_branch":"main","last_synced_at":"2024-12-11T09:15:37.192Z","etag":null,"topics":["agents","ai-agents","ai-agents-framework","anthropic","computer-use","generative-process-automation","google-gemini","gpt4o","huggingface","large-action-model","large-language-models","large-multimodal-models","omniparser","openai","process-automation","process-mining","python","segment-anything","transformers","ultralytics"],"latest_commit_sha":null,"homepage":"https://www.OpenAdapt.AI","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/OpenAdaptAI.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null},"funding":{"github":["OpenAdaptAI"],"patreon":null,"open_collective":null,"ko_fi":null,"tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"lfx_crowdfunding":null,"polar":null,"buy_me_a_coffee":null,"custom":null}},"created_at":"2023-04-12T16:20:23.000Z","updated_at":"2024-12-10T22:04:42.000Z","dependencies_parsed_at":null,"dependency_job_id":"a12b07e0-d79d-45a8-8646-00b0f8e3b109","html_url":"https://github.com/OpenAdaptAI/OpenAdapt","commit_stats":null,"previous_names":["openadaptai/openadapt","mldsai/openadapt"],"tags_count":99,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenAdaptAI%2FOpenAdapt","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenAdaptAI%2FOpenAdapt/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenAdaptAI%2FOpenAdapt/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenAdaptAI%2FOpenAdapt/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/OpenAdaptAI","download_url":"https://codeload.github.com/OpenAdaptAI/OpenAdapt/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247294468,"owners_count":20915335,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agents","ai-agents","ai-agents-framework","anthropic","computer-use","generative-process-automation","google-gemini","gpt4
o","huggingface","large-action-model","large-language-models","large-multimodal-models","omniparser","openai","process-automation","process-mining","python","segment-anything","transformers","ultralytics"],"created_at":"2024-08-01T15:01:09.552Z","updated_at":"2025-04-05T05:31:34.162Z","avatar_url":"https://github.com/OpenAdaptAI.png","language":"Python","readme":"[Join us on Discord](https://discord.gg/yF527cQbDG)\n\n[Read our Architecture document](https://github.com/OpenAdaptAI/OpenAdapt/wiki/OpenAdapt-Architecture-(draft))\n\n[Join the Discussion on the Request for Comments](https://github.com/OpenAdaptAI/OpenAdapt/discussions/552)\n\nSee also:\n\n- https://github.com/OpenAdaptAI/SoM\n- https://github.com/OpenAdaptAI/pynput\n- https://github.com/OpenAdaptAI/atomacos\n\n# OpenAdapt: AI-First Process Automation with Large Multimodal Models (LMMs).\n\n**OpenAdapt** is the **open** source software **adapt**er between Large Multimodal Models (LMMs) and traditional desktop and web Graphical User Interfaces (GUIs).\n\n### Enormous volumes of mental labor are wasted on repetitive GUI workflows.\n\n### Foundation Models (e.g. [GPT-4](https://openai.com/research/gpt-4), [ACT-1](https://www.adept.ai/blog/act-1)) are powerful automation tools.\n\n### OpenAdapt connects Foundation Models to GUIs:\n\n\u003cimg width=\"1499\" alt=\"image\" src=\"https://github.com/OpenAdaptAI/OpenAdapt/assets/774615/c811654e-3450-42cd-91ee-935378e3a858\"\u003e\n\n\u003cimg width=\"1511\" alt=\"image\" src=\"https://github.com/OpenAdaptAI/OpenAdapt/assets/774615/82814cdb-f0d5-4a6b-9d44-a4628fca1590\"\u003e\n\nEarly demos (more coming soon!):\n\n- https://twitter.com/abrichr/status/1784307190062342237\n- https://www.loom.com/share/9d77eb7028f34f7f87c6661fb758d1c0\n\nWelcome to OpenAdapt! This Python library implements AI-First Process Automation\nwith the power of Large Multimodal Modals (LMMs) by:\n\n- Recording screenshots and associated user input\n- Aggregating and visualizing user input and recordings for development\n- Converting screenshots and user input into tokenized format\n- Generating and replaying synthetic input via transformer model completions\n- Generating process graphs by analyzing recording logs (work-in-progress)\n\nThe goal is similar to that of\n[Robotic Process Automation](https://en.wikipedia.org/wiki/Robotic_process_automation),\nexcept that we use Large Multimodal Models instead of conventional RPA tools.\n\nThe direction is adjacent to [Adept.ai](https://adept.ai/), with some key differences:\n1. OpenAdapt is model agnostic.\n2. OpenAdapt generates prompts automatically by **learning from human demonstration** (auto-prompted, not user-prompted). This means that agents are **grounded** in **existing processes**, which mitigates hallucinations and ensures successful task completion.\n3. OpenAdapt works with all types of desktop GUIs, including virtualized (e.g. Citrix) and web.\n4. 
## Install

<br/>

|           Installation Method           |   Recommended for   |                                 Ease of Use                                 |
|:---------------------------------------:|:-------------------:|:---------------------------------------------------------------------------:|
| [Scripted](https://openadapt.ai/#start) | Non-technical users | Streamlines the installation process for users unfamiliar with setup steps  |
| [Manual](https://github.com/OpenAdaptAI/OpenAdapt#manual-setup) | Technical users | Allows for more control and customization during the installation process |

<br/>

### Installation Scripts

#### Windows
- Press the Windows key, type "powershell", and press Enter
- Copy and paste the following command into the terminal and press Enter (if prompted by `User Account Control`, click "Yes"):

  ```powershell
  Start-Process powershell -Verb RunAs -ArgumentList '-NoExit', '-ExecutionPolicy', 'Bypass', '-Command', "iwr -UseBasicParsing -Uri 'https://raw.githubusercontent.com/OpenAdaptAI/OpenAdapt/main/install/install_openadapt.ps1' | Invoke-Expression"
  ```

#### macOS
- Download and install Git and Python 3.10
- Press Command+Space, type "terminal", and press Enter
- Copy and paste the following command into the terminal and press Enter:

  ```bash
  /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/OpenAdaptAI/OpenAdapt/HEAD/install/install_openadapt.sh)"
  ```

<br/>

### Manual Setup

Prerequisites:
- Python 3.10
- Git
- Tesseract (for OCR)
- nvm (Node Version Manager)

To set up any or all of the above dependencies, follow the steps in [SETUP.md](./SETUP.md).

<br/>

Install with [Poetry](https://python-poetry.org/):

```
git clone https://github.com/OpenAdaptAI/OpenAdapt.git
cd OpenAdapt
pip3 install poetry
poetry install
poetry shell
poetry run postinstall
cd openadapt && alembic upgrade head && cd ..
pytest
```

### Permissions

See how to set up system permissions on macOS [here](./permissions_in_macOS.md).

## Usage

### Shell

Run this in every new terminal window once (while inside the `OpenAdapt` root
directory) before running any `openadapt` commands below:

```
poetry shell
```

You should see something like this:

```
% poetry shell
Using python3.10 (3.10.13)
...
(openadapt-py3.10) %
```

Notice the environment prefix `(openadapt-py3.10)`.

### Tray

Run the following command to start the system tray icon and launch the web dashboard:

```
python -m openadapt.entrypoint
```

This command prints the config, updates the database to the latest migration, starts the system tray icon, and launches the web dashboard.

### Record

Create a new recording by running the following command:

```
python -m openadapt.record "testing out openadapt"
```

Wait until all three event writers have started:

```
| INFO     | __mp_main__:write_events:230 - event_type='screen' starting
| INFO     | __mp_main__:write_events:230 - event_type='action' starting
| INFO     | __mp_main__:write_events:230 - event_type='window' starting
```

Type a few words into the terminal and move your mouse around the screen
to generate some events, then stop the recording by pressing CTRL+C.

Current limitations:
- Recordings should be short (i.e. under a minute), as they are
somewhat memory intensive, and there is currently an
[open issue](https://github.com/OpenAdaptAI/OpenAdapt/issues/5) describing a
possible memory leak.
- The only touchpad and trackpad gestures currently supported are
pointing the cursor and left- or right-clicking, as described in this
[open issue](https://github.com/OpenAdaptAI/OpenAdapt/issues/145).

### Visualize

Quickly visualize the latest recording you created by running the following command:

```
python -m openadapt.visualize
```

This will generate an HTML file and open a tab in your browser that looks something like this:

![image](https://github.com/OpenAdaptAI/OpenAdapt/assets/774615/5d7253b7-ae12-477c-94a3-b388e4f37587)

For a more powerful dashboard, run:

```
python -m openadapt.app.dashboard.run
```

This will start a web server locally, then open a tab in your browser that looks something like this:

![image](https://github.com/OpenAdaptAI/OpenAdapt/assets/774615/48d27459-4be8-4b96-beb0-1973953b8a09)

For a desktop app-based visualization, run:

```
python -m openadapt.app.visualize
```

This will open a scrollable window that looks something like this:

<img width="1512" alt="image" src="https://github.com/OpenAdaptAI/OpenAdapt/assets/774615/451dd467-20ae-4ce7-a3b4-f888635afe8c">

<img width="1511" alt="image" src="https://github.com/OpenAdaptAI/OpenAdapt/assets/774615/13264cf6-46c0-4413-a29d-59bdd040a32e">

### Playback

You can play back the recording using the following command:

```
python -m openadapt.replay NaiveReplayStrategy
```

Other replay strategies include:

- [`StatefulReplayStrategy`](https://github.com/OpenAdaptAI/OpenAdapt/blob/main/openadapt/strategies/stateful.py): an early proof-of-concept which uses the OpenAI GPT-4 API with prompts constructed via OS-level window data.
- (*) [`VisualReplayStrategy`](https://github.com/OpenAdaptAI/OpenAdapt/blob/main/openadapt/strategies/visual.py): uses the [Fast Segment Anything Model (FastSAM)](https://github.com/CASIA-IVA-Lab/FastSAM) to segment the active window.
- (*) [`VanillaReplayStrategy`](https://github.com/OpenAdaptAI/OpenAdapt/blob/main/openadapt/strategies/vanilla.py): assumes the model is capable of directly reasoning on states and actions accurately. With future frontier models, we hope that this strategy will suddenly work a lot better.
- (*) [`VisualBrowserReplayStrategy`](https://github.com/OpenAdaptAI/OpenAdapt/blob/main/openadapt/strategies/visual_browser.py): like `VisualReplayStrategy`, but generates segments from the visible DOM read by the browser extension.

The (*) prefix indicates strategies which accept an "instructions" parameter that is used to modify the recording, e.g.:

```
python -m openadapt.replay VanillaReplayStrategy --instructions "calculate 9-8"
```

See https://github.com/OpenAdaptAI/OpenAdapt/tree/main/openadapt/strategies for a complete list. More ReplayStrategies coming soon! (See [Contributing](#Contributing).)
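If you want to sketch a strategy of your own, the overall shape is roughly as follows. This is a hedged illustration only: the base-class import path, constructor, and method signature are assumptions; mirror an existing strategy in [`openadapt/strategies/`](https://github.com/OpenAdaptAI/OpenAdapt/tree/main/openadapt/strategies) (e.g. `NaiveReplayStrategy`) for the real interface.

```python
# Illustrative sketch of a custom replay strategy. The import path, base
# class, and method signature below are assumptions for illustration;
# consult the existing strategies in openadapt/strategies/ for the
# actual interface.
from openadapt import models
from openadapt.strategies.base import BaseReplayStrategy  # assumed path


class EchoReplayStrategy(BaseReplayStrategy):
    """Replay the recorded actions verbatim, one per new screenshot."""

    def __init__(self, recording: models.Recording) -> None:
        super().__init__(recording)
        # Queue up the recorded ActionEvents (attribute name assumed).
        self._remaining = iter(recording.action_events)

    def get_next_action_event(
        self,
        screenshot: models.Screenshot,
        window_event: models.WindowEvent,
    ) -> models.ActionEvent | None:
        # A real strategy would condition on the current screenshot and
        # window state (e.g. by prompting an LMM); this one just emits
        # the next recorded action, or None when the recording is done.
        return next(self._remaining, None)
```

Dropped into `openadapt/strategies/`, such a class would be invoked by name like the built-in strategies above (e.g. `python -m openadapt.replay EchoReplayStrategy`), though registration details may vary.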
### Browser integration

To record browser events in Google Chrome (required by the `BrowserReplayStrategy`), follow these steps:

1. Go to your Chrome extensions page by entering [chrome://extensions](chrome://extensions/) in your address bar.

2. Enable `Developer mode` (located at the top right).

3. Click `Load unpacked` (located at the top left).

4. Select the `chrome_extension` directory in the OpenAdapt repo.

5. Make sure the Chrome extension is enabled (the switch to the right of the OpenAdapt extension widget is turned on).

6. Set the `RECORD_BROWSER_EVENTS` flag to `true` in `openadapt/data/config.json` (see the sketch below for a scripted alternative).
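As an alternative to editing the file by hand, step 6 can be scripted. The path and key below come from the step above; the script itself is just a convenience sketch, run from the OpenAdapt repo root:

```python
# Convenience sketch for step 6: enable browser-event recording.
# The file path and key come from the instructions above.
import json
from pathlib import Path

config_path = Path("openadapt/data/config.json")
config = json.loads(config_path.read_text())
config["RECORD_BROWSER_EVENTS"] = True
config_path.write_text(json.dumps(config, indent=2))
print("RECORD_BROWSER_EVENTS enabled")
```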
## Features

### State-of-the-art GUI understanding via [Segment Anything in High Quality](https://github.com/SysCV/sam-hq):

![image](https://github.com/OpenAdaptAI/OpenAdapt/assets/774615/5fa6d008-4042-40ea-b3e6-f97ef4dd83db)

### Industry-leading privacy (PII/PHI scrubbing) via [AWS Comprehend](https://aws.amazon.com/comprehend/), [Microsoft Presidio](https://microsoft.github.io/presidio/) and [Private AI](https://www.private-ai.com/):

![image](https://github.com/OpenAdaptAI/OpenAdapt/assets/774615/87c3ab4a-1761-4222-b5d1-6368177ca637)

### Decentralized and secure data distribution via [Magic Wormhole](https://github.com/magic-wormhole/magic-wormhole):

![image](https://github.com/OpenAdaptAI/OpenAdapt/assets/774615/cd8bc2a7-6f6d-4218-843f-adfd7a684fc8)

### Detailed performance monitoring via [pympler](https://pympler.readthedocs.io/en/latest/) and [tracemalloc](https://docs.python.org/3/library/tracemalloc.html):

![image](https://github.com/OpenAdaptAI/OpenAdapt/assets/774615/ae047b8a-b584-4f5f-9981-34cb88c5be54)

### System Tray Icon and Client GUI App (work-in-progress)

<img width="661" alt="image" src="https://github.com/OpenAdaptAI/OpenAdapt/assets/774615/601b3a9f-ff16-45e0-a302-39257b06e382">

### And much more!

## 🚀 Open Contract Positions at OpenAdapt.AI

We are thrilled to open new contract positions for developers passionate about pushing boundaries in technology. If you're ready to make a significant impact, consider the following roles:

#### Frontend Developer
- **Responsibilities**: Develop and test key features such as process visualization, demo booking, app store, and blog integration.
- **Skills**: Proficiency in modern frontend technologies and a knack for UI/UX design.

#### Machine Learning Engineer
- **Role**: Implement and refine process replay strategies using state-of-the-art LLMs/LMMs. Extract dynamic process descriptions from extensive process recordings.
- **Skills**: Strong background in machine learning, experience with LLMs/LMMs, and problem-solving aptitude.

#### Software Engineer
- **Focus**: Enhance memory optimization techniques during process recording and replay. Develop sophisticated tools for process observation and productivity measurement.
- **Skills**: Expertise in software optimization, memory management, and analytics.

#### Technical Writer
- **Focus**: Maintain the [OpenAdapt](https://github.com/OpenAdaptAI) repositories.
- **Skills**: Passion for writing and/or documentation.

### 🔍 How to Apply
- **Step 1**: Submit an empty Pull Request to [OpenAdapt](https://github.com/OpenAdaptAI/OpenAdapt) or [OpenAdapt.web](https://github.com/OpenAdaptAI/OpenAdapt.web). Format your PR title as `[Proposal] <your title here>`.
- **Step 2**: Include a brief, informal outline of your approach in the PR description. Feel free to add any questions you might have.
- **Need clarifications?** Reach out to us on [Discord](https://discord.gg/yF527cQbDG).

We're looking forward to your contributions. Let's build the future 🚀

## Contributing

### Replay Problem Statement

Our goal is to automate the task described and demonstrated in a `Recording`.
That is, given a new `Screenshot`, we want to generate the appropriate
`ActionEvent`(s) based on the previously recorded `ActionEvent`s in order to
accomplish the task specified in the
[`Recording.task_description`](https://github.com/OpenAdaptAI/OpenAdapt/blob/main/openadapt/models.py#L46)
and narrated by the user in
[`AudioInfo.words_with_timestamps`](https://github.com/OpenAdaptAI/OpenAdapt/pull/346/files#diff-224d5ce89a18f796cae99bf3da5a9862def2127db2ed38e68a07a25a8624166fR393),
while accounting for differences in screen resolution, window size, application
behavior, etc.

If it's not clear what `ActionEvent` is appropriate for the given `Screenshot`
(e.g. if the GUI application is behaving in a way we haven't seen before),
we can ask the user to take over temporarily to demonstrate the appropriate
course of action.

### Data Model

The data model consists of the following entities (an illustrative sketch follows the list):

1. `Recording`: Contains information about the screen dimensions, platform, and
   other metadata.
2. `ActionEvent`: Represents a user action event such as a mouse click or key
   press. Each `ActionEvent` has an associated `Screenshot` taken immediately
   before the event occurred. `ActionEvent`s are aggregated to remove
   unnecessary events (see [Visualize](#visualize)).
3. `Screenshot`: Contains the PNG data of a screenshot taken during the
   recording.
4. `WindowEvent`: Represents a window event such as a change in window title,
   position, or size.
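To make the relationships concrete, here is a sketch of these entities as plain dataclasses. This is not the actual implementation: the real classes in [`openadapt/models.py`](https://github.com/OpenAdaptAI/OpenAdapt/blob/main/openadapt/models.py) are database-backed models with many more fields, and the field names below are assumptions.

```python
# Illustrative only -- NOT the actual models in openadapt/models.py.
# Field names are assumptions; the relationships mirror the list above.
from dataclasses import dataclass, field


@dataclass
class Screenshot:
    png_data: bytes  # screen contents captured during the recording


@dataclass
class ActionEvent:
    name: str               # e.g. a mouse click or key press
    screenshot: Screenshot  # taken immediately before the event occurred


@dataclass
class WindowEvent:
    title: str  # window title, position, and size at the time of the event
    left: int
    top: int
    width: int
    height: int


@dataclass
class Recording:
    task_description: str  # see Recording.task_description above
    platform: str
    screen_width: int      # screen dimensions metadata
    screen_height: int
    action_events: list[ActionEvent] = field(default_factory=list)
    window_events: list[WindowEvent] = field(default_factory=list)
```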
### API

You can assume that you have access to the following functions (usage is sketched below):

- `create_recording("doing taxes")`: Creates a recording.
- `get_latest_recording()`: Gets the latest recording.
- `get_events(recording)`: Returns a list of `ActionEvent` objects for the given
  recording.

See the [GitBook Documentation](https://openadapt.gitbook.io/openadapt.ai/) for more.
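Put together, a replay implementation might consume these functions roughly as follows. This is a sketch only: the list above does not specify where the functions live, so the import is left hypothetical.

```python
# Hedged usage sketch of the three functions listed above. Their module
# is not specified here, so the import below is hypothetical.
# from openadapt.<module> import get_latest_recording, get_events

recording = get_latest_recording()
print(recording.task_description)  # e.g. "doing taxes"

for action_event in get_events(recording):
    # Each ActionEvent carries the Screenshot taken immediately before
    # it occurred -- the state a replay strategy conditions on.
    print(action_event, action_event.screenshot)
```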
### Instructions

[Join us on Discord](https://discord.gg/yF527cQbDG). Then:

1. Fork this repository and clone it to your local machine.
2. Get OpenAdapt up and running by following the instructions under [Install](#install).
3. Look through the list of open issues at https://github.com/OpenAdaptAI/OpenAdapt/issues,
and once you find one you would like to address, indicate your interest with a comment.
4. Implement a solution to the issue you selected. Write unit tests for your
implementation.
5. Submit a Pull Request (PR) to this repository. Note: submitting a PR before your
implementation is complete (e.g. with high-level documentation and/or implementation
stubs) is encouraged, as it gives us the opportunity to provide early
feedback and iterate on the approach.

### Evaluation Criteria

Your submission will be evaluated based on the following criteria:

1. **Functionality**: Your implementation should correctly generate the new
   `ActionEvent` objects that can be replayed in order to accomplish the task in
   the original recording.

2. **Code Quality**: Your code should be well-structured, clean, and easy to
   understand.

3. **Scalability**: Your solution should be efficient and scale well with
   large datasets.

4. **Testing**: Your tests should cover various edge cases and scenarios to
   ensure the correctness of your implementation.

### Submission

1. Commit your changes to your forked repository.

2. Create a pull request to the original repository with your changes.

3. In your pull request, include a brief summary of your approach, any
   assumptions you made, and how you integrated external libraries.

4. *Bonus*: interacting with ChatGPT and/or other language transformer models
   in order to generate code and/or evaluate design decisions is encouraged. If
   you choose to do so, please include the full transcript.

## Troubleshooting

macOS: if you encounter system alert messages or find issues when making and replaying recordings, make sure to [set up permissions accordingly](./permissions_in_macOS.md).

![MacOS System Alerts](https://github.com/OpenAdaptAI/OpenAdapt/assets/43456930/dd96ab17-7cd6-4762-9c4f-5131b224a118)

In summary (from https://stackoverflow.com/a/69673312):

1. Open Settings -> Security & Privacy.
2. Click on the Privacy tab.
3. Scroll to and click on the Accessibility row.
4. Click +.
5. Navigate to /System/Applications/Utilities/ (or wherever Terminal.app is installed).
6. Click OK.

## Developing

### Generate migration (after editing a model)

From inside the `openadapt` directory (containing `alembic.ini`):

```
alembic revision --autogenerate -m "<msg>"
```
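Here, "editing a model" means changing one of the classes in [`openadapt/models.py`](https://github.com/OpenAdaptAI/OpenAdapt/blob/main/openadapt/models.py). Below is a hedged example of the kind of edit that `--autogenerate` picks up; the class body, columns, and declarative base shown are hypothetical, not the project's actual schema.

```python
# Hypothetical edit to a model in openadapt/models.py. The columns and
# the declarative base are illustrative, not OpenAdapt's actual schema.
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()  # the project defines its own base


class Recording(Base):
    __tablename__ = "recording"

    id = sa.Column(sa.Integer, primary_key=True)
    task_description = sa.Column(sa.String)
    # Adding a new column like this, then running
    # `alembic revision --autogenerate -m "add operator_name"`,
    # emits a migration that adds the column to the table.
    operator_name = sa.Column(sa.String, nullable=True)
```

Review the generated migration before applying it with `alembic upgrade head` (the same command the install steps run).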
### Pre-commit Hooks

To ensure code quality and consistency, OpenAdapt uses pre-commit hooks. These hooks
are executed automatically before each commit to perform various checks and
validations on your codebase.

The following pre-commit hooks are used in OpenAdapt:

- [check-yaml](https://github.com/pre-commit/pre-commit-hooks#check-yaml): Validates the syntax and structure of YAML files.
- [end-of-file-fixer](https://github.com/pre-commit/pre-commit-hooks#end-of-file-fixer): Ensures that files end with a newline character.
- [trailing-whitespace](https://github.com/pre-commit/pre-commit-hooks#trailing-whitespace): Detects and removes trailing whitespace at the end of lines.
- [black](https://github.com/psf/black): Formats Python code to adhere to the Black code style. Notably, the `--preview` feature is used.
- [isort](https://github.com/PyCQA/isort): Sorts Python import statements in a consistent and standardized manner.

To set up the pre-commit hooks, follow these steps:

1. Navigate to the root directory of your OpenAdapt repository.

2. Run the following command to install the hooks:

```
pre-commit install
```

The pre-commit hooks are now installed and will run automatically before each commit,
enforcing code quality standards and preventing commits that don't pass the defined checks.

### Status Checks

When you submit a PR, the "Python CI" workflow is triggered to check code consistency. It reviews your code in the following steps:

1. **Python Black Check**: This step verifies code formatting against the Black code style, using the `--preview` flag.

2. **Flake8 Review**: Next, the Flake8 tool checks code structure, including the flake8-annotations and flake8-docstrings plugins. Although GitHub Actions automates these checks, it's wise to run `flake8 .` locally before finalizing changes, for quicker issue spotting and resolution.

# Submitting an Issue

Please submit any issues to https://github.com/OpenAdaptAI/OpenAdapt/issues with the
following information:

- Problem description (please include any relevant console output and/or screenshots)
- Steps to reproduce (please help others to help you!)