{"id":15287590,"url":"https://github.com/chuloai/oasis","last_synced_at":"2026-04-01T20:54:14.265Z","repository":{"id":167695102,"uuid":"643320114","full_name":"ChuloAI/oasis","owner":"ChuloAI","description":"Local LLaMAs/Models in VSCode","archived":false,"fork":false,"pushed_at":"2023-06-05T14:23:43.000Z","size":1206,"stargazers_count":53,"open_issues_count":1,"forks_count":3,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-03-26T22:12:31.113Z","etag":null,"topics":["code-generation","open-source","vscode-extension"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ChuloAI.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-05-20T19:34:56.000Z","updated_at":"2025-02-16T23:11:33.000Z","dependencies_parsed_at":null,"dependency_job_id":"aacfad40-cd61-4826-9398-d91f5e5cb45c","html_url":"https://github.com/ChuloAI/oasis","commit_stats":null,"previous_names":["paolorechia/oasis"],"tags_count":5,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ChuloAI%2Foasis","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ChuloAI%2Foasis/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ChuloAI%2Foasis/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ChuloAI%2Foasis/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ChuloAI","download_url":"https://codeload.github.com/ChuloAI/oasis/tar.gz/refs/h
eads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248670506,"owners_count":21142897,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["code-generation","open-source","vscode-extension"],"created_at":"2024-09-30T15:32:20.735Z","updated_at":"2026-04-01T20:54:14.196Z","avatar_url":"https://github.com/ChuloAI.png","language":"Python","readme":"# Oasis\nThe idea is to generate code with the assistance of the guidance library, using open-source LLM models that run locally.\nThe library is exposed as a VSCode plugin and adds code-generation commands on the editor selection (invoked through right-click or the command palette).\n\nNOTE: main is currently unstable while the use of guidance prompts is being developed (see the guidance library: https://github.com/microsoft/guidance)\n\n**WARNING**: Only the add-docstring-to-functions command is somewhat stable at the moment.\n\n## v0.2.0\n**Update: 03.06.2023**\nThe guidance server code has been moved to a separate repository: https://github.com/ChuloAI/andromeda-chain\n\nIf you're looking for the code version from the Medium article, try checking out v0.1.5 (more in the next section).\n\nStarting this service now happens through Docker and should be a lot easier.\n\n\n### Setup\n\n\nRequirements:\n\n    - docker-engine\n    - docker-compose v2\n\nIf using a GPU, also:\n\n    - nvidia-docker: https://github.com/NVIDIA/nvidia-docker\n\n\n#### Clone this repo and download your desired model:\n```bash\ngit clone https://github.com/paolorechia/oasis\ncd oasis\nmkdir models\ncd models\ngit clone 
https://huggingface.co/Salesforce/codegen-350M-mono\ncd ..\ndocker-compose -f docker-compose.cpu.yaml up\n```\nIf you change the model, make sure to update the MODEL_PATH environment variable injected into the guidance server container:\n\nhttps://github.com/ChuloAI/oasis/blob/4cd7da6f0866e26088f1a326acdf3f1f43d59660/docker-compose.cpu.yaml#L42\n\n##### Running on GPU\n\nChange the command to use the other docker-compose file:\n\n```\ndocker-compose -f docker-compose.gpu.yaml up\n```\n\n\n\n\n## If using v0.1.3\nIf you want to use text-generation-webui with simpler prompts, use v0.1.3. This is a deprecated feature; newer versions will no longer support `text-generation-webui`, at least for the time being.\n\n\n1. Install text-generation-webui and start it with its API enabled: https://github.com/oobabooga/text-generation-webui\n\n```bash\ngit clone https://github.com/paolorechia/oasis\ncd oasis\ngit checkout v0.1.3\n```\n\n2. Start the FastAPI server in `prompt_server`:\n\n```\ncd prompt_server\npip install -r requirements.txt\n./start_uvicorn.sh\n```\n\n## If using v0.1.5\n### Installation\n```bash\ngit clone https://github.com/paolorechia/oasis\ncd oasis\ngit checkout v0.1.5\n```\n\n#### Running on CPU\nBy default, it will install PyTorch to run on the CPU.\n\n1. Start the FastAPI server in `guidance_server`:\n\n```\ncd guidance_server\npip install -r requirements.txt\n./start_uvicorn.sh\n```\n\nThis server is quite heavy on dependencies.\n\n2. Start the FastAPI server in `prompt_server`:\n\n```\ncd prompt_server\npip install -r requirements.txt\n./start_uvicorn.sh\n```\n\n3. Install the VSCode plugin called 'oasis-llamas'\n4. Use it!\n\n\n#### Running on GPU\n\nThere's no automated installation for GPU setup; when setting up the `guidance_server` above, I recommend the following steps for an NVIDIA card.\n\n1. Remove torch from requirements.txt\n2. If needed, install the NVIDIA Developer Toolkit: https://developer.nvidia.com/cuda-11-8-0-download-archive\n3. 
Install PyTorch following the official documentation instead: https://pytorch.org/get-started/locally/\n4. Install the remaining dependencies from requirements.txt\n\nYou also need to modify the source code in `guidance_server/main.py` (currently line 41), changing this line:\n```\nuse_gpu_4bit = False\n```\n\nto:\n\n```\nuse_gpu_4bit = True\n```\n\n\n\n### Local Codegen Models on VSCode\nHow does it work?\n\nThe Oasis backend receives a command/selected-code pair from the VSCode extension frontend and uses this input to:\n\n1. Parse the input code using the `ast` module (https://docs.python.org/3/library/ast.html)\n2. Find specific code parts in the parsed code\n3. Choose a guidance prompt to apply\n4. Apply the guidance prompt, delegating the LLM call to the second backend service, the `guidance server`\n5. Parse the result and return an appropriate response to the frontend.\n\n\n![Flow of a command execution](/oasis_architecture.jpg?raw=true \"Basic Flow\")\n\nYou can read more about it on Medium: https://medium.com/@paolorechia/building-oasis-a-local-code-generator-tool-using-open-source-models-and-microsofts-guidance-aef54c3e2840\n\n### Changing Models\nThere is currently no exposed config. If you want to change the loaded model, edit the source code in\n`guidance_server/main.py`; in lines 35-39 you will find something like:\n\n```python\n# model = \"TheBloke/wizardLM-7B-HF\"\nmodel = \"Salesforce/codegen-350m-mono\"\n# model = \"Salesforce/codegen-2b-mono\"\n# model = \"Salesforce/codegen-6b-mono\"\n# model = \"Salesforce/codegen-16B-mono\"\n```\n\nUncomment the one you'd like to use.\n\nThis plugin works even with the 350m-mono model version! 
That's currently only possible with something like the guidance library.\nDo expect better results with bigger models, though.\n\n### Add docstrings to a block of code\n![Docstring demo](https://github.com/paolorechia/oasis/assets/5386983/39110f0f-79b1-44cc-aa42-d793fc1eb0f8)\n\n\n**Note:** for better results, select exactly ONE function block to add a docstring to, with NO syntax errors.\n\n\n### Known issues\n\n1. The plugin currently removes the extra blank lines in the function definition. This is caused by the use of the `ast.parse` function, which strips those lines; parsing is used to separate the function header from the body and inject the generated docstring.\n2. The plugin sometimes messes up the indentation of the generated docstring/input code.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchuloai%2Foasis","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fchuloai%2Foasis","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchuloai%2Foasis/lists"}