{"id":13585026,"url":"https://github.com/xNul/code-llama-for-vscode","last_synced_at":"2025-04-07T06:32:17.559Z","repository":{"id":190556886,"uuid":"682864344","full_name":"xNul/code-llama-for-vscode","owner":"xNul","description":"Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.","archived":false,"fork":false,"pushed_at":"2024-07-31T23:46:03.000Z","size":11,"stargazers_count":567,"open_issues_count":0,"forks_count":32,"subscribers_count":6,"default_branch":"main","last_synced_at":"2025-03-28T15:45:54.703Z","etag":null,"topics":["assistant","code","code-llama","codellama","continue","continuedev","copilot","llama","llama2","llamacpp","llm","local","meta","ollama","studio","visual","vscode"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/xNul.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-08-25T03:48:12.000Z","updated_at":"2025-02-27T22:29:38.000Z","dependencies_parsed_at":"2024-01-14T04:40:38.105Z","dependency_job_id":"39a197fb-9174-4de7-86d5-231b79f350d8","html_url":"https://github.com/xNul/code-llama-for-vscode","commit_stats":{"total_commits":7,"total_committers":2,"mean_commits":3.5,"dds":0.1428571428571429,"last_synced_commit":"48e344c542b1e37f3adc6b2a284af28fd0ae5308"},"previous_names":["xnul/code-llama-for-vscode"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xNul%2Fcode-llama-for-vscode","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xNul%2Fcode-llama-for-vscode/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xNul%2Fcode-llama-for-vscode/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xNul%2Fcode-llama-for-vscode/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/xNul","download_url":"https://codeload.github.com/xNul/code-llama-for-vscode/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247217222,"owners_count":20903009,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["assistant","code","code-llama","codellama","continue","continuedev","copilot","llama","llama2","llamacpp","llm","local","meta","ollama","studio","visual","vscode"],"created_at":"2024-08-01T15:04:41.537Z","updated_at":"2025-04-07T06:32:17.545Z","avatar_url":"https://github.com/xNul.png","language":"Python","readme":"# Code Llama for VSCode\r\n\r\nAn API which mocks [Llama.cpp](https://github.com/ggerganov/llama.cpp) to enable support for Code Llama with the\r\n[Continue Visual Studio Code extension](https://continue.dev/).\r\n\r\nAs of the time of writing and to my knowledge, this is the only way to use Code Llama with VSCode locally without having\r\nto sign up or get an API key for a service. The only exception to this is Continue with [Ollama](https://ollama.ai/), but\r\nOllama doesn't support Windows or Linux. On the other hand, Code Llama for VSCode is completely cross-platform and will\r\nrun wherever Meta's own [codellama](https://github.com/facebookresearch/codellama) code will run.\r\n\r\nNow let's get started!\r\n\r\n### Setup\r\n\r\nPrerequisites:\r\n- [Download and run one of the Code Llama Instruct models](https://github.com/facebookresearch/codellama)\r\n- [Install the Continue VSCode extension](https://marketplace.visualstudio.com/items?itemName=Continue.continue)\r\n\r\nAfter you are able to use both independently, we will glue them together with Code Llama for VSCode.\r\n\r\nSteps:\r\n1. Move `llamacpp_mock_api.py` to your [`codellama`](https://github.com/facebookresearch/codellama) folder and install Flask to your environment with `pip install flask`.\r\n2. Run `llamacpp_mock_api.py` with your [Code Llama Instruct torchrun command](https://github.com/facebookresearch/codellama#fine-tuned-instruction-models). For example:\r\n```\r\ntorchrun --nproc_per_node 1 llamacpp_mock_api.py \\\r\n    --ckpt_dir CodeLlama-7b-Instruct/ \\\r\n    --tokenizer_path CodeLlama-7b-Instruct/tokenizer.model \\\r\n    --max_seq_len 512 --max_batch_size 4\r\n```\r\n3. Click the settings button at the bottom right of Continue's UI in VSCode and make changes to `config.json` so it looks like [this](https://docs.continue.dev/reference/Model%20Providers/llamacpp)[\u003csup\u003e\\[archive\\]\u003c/sup\u003e](http://web.archive.org/web/20240531162330/https://docs.continue.dev/reference/Model%20Providers/llamacpp). Replace `MODEL_NAME` with `codellama-7b`.\r\n\r\nRestart VSCode or reload the Continue extension and you should now be able to use Code Llama for VSCode!\r\n","funding_links":[],"categories":["Python","vscode"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FxNul%2Fcode-llama-for-vscode","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FxNul%2Fcode-llama-for-vscode","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FxNul%2Fcode-llama-for-vscode/lists"}