{"id":22213958,"url":"https://github.com/xmlking/ai-experiments","last_synced_at":"2025-03-25T06:23:56.790Z","repository":{"id":218495773,"uuid":"746105804","full_name":"xmlking/ai-experiments","owner":"xmlking","description":"LLM mistral","archived":false,"fork":false,"pushed_at":"2024-05-05T18:06:18.000Z","size":5206,"stargazers_count":2,"open_issues_count":6,"forks_count":0,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-01-30T05:43:17.730Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/xmlking.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-01-21T04:12:53.000Z","updated_at":"2024-10-30T09:29:34.000Z","dependencies_parsed_at":null,"dependency_job_id":"d1ca9250-00d0-425d-9f22-2f0be9a60c0e","html_url":"https://github.com/xmlking/ai-experiments","commit_stats":null,"previous_names":["xmlking/ai-experiments"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xmlking%2Fai-experiments","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xmlking%2Fai-experiments/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xmlking%2Fai-experiments/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xmlking%2Fai-experiments/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/xmlking","download_url":"https://codeload.github.com/xmlking/ai-experiments/tar
.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":245409541,"owners_count":20610547,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-12-02T21:12:43.648Z","updated_at":"2025-03-25T06:23:56.772Z","avatar_url":"https://github.com/xmlking.png","language":"Python","readme":"# LangChain Document QA\n\nThis example provides an interface for asking questions about a PDF document.\n\nA ready-to-use, 100% local setup.\n\n## Prerequisites\n\n1. [ollama](https://ollama.ai/) installed (macOS).\n2. Docker Desktop with at least 12 GB of RAM allocated.\n\n## Setup\n\n```shell\n# pip install -r requirements.txt\n# pipenv install -r requirements.txt\n# pipenv requirements \u003e requirements.txt\n# set up the virtualenv\npipenv shell\n# install from the Pipfile\npipenv install\n```\n\n### Pull some models\n```shell\nollama pull mistral\nollama pull llama2\n# verify\nollama list\n```\n\n### Run a model\n```shell\nollama run mistral\n```\n\n### Call the REST API\n\n```shell\n# Generate a response\ncurl http://localhost:11434/api/generate -d '{\n  \"model\": \"llama2\",\n  \"prompt\": \"Why is the sky blue?\"\n}'\n\ncurl -X POST http://localhost:11434/api/generate -d '{\n  \"model\": \"mistral\",\n  \"prompt\": \"Why is the sky blue?\",\n  \"stream\": false\n}'\n\n# (OR) Chat with a model\ncurl http://localhost:11434/api/chat -d '{\n  \"model\": \"mistral\",\n  \"messages\": [\n    { \"role\": \"user\", \"content\": \"why is the sky blue?\" }\n  ]\n}'\n```\n\n## Run\n\n### Start Ollama\n\nStart **ollama** via Docker if you are not running 
it via the CLI:\n```shell\ndocker compose up\nopen http://localhost:11434/\n```\n\n### Run a model\n\nNow you can run a model such as mistral inside the container:\n```shell\ndocker exec -it ollama ollama run mistral\n```\n\n### Verify\n\nTest that the base model responds:\n```shell\ncurl -X POST http://localhost:11434/api/generate -d '{\n  \"model\": \"mistral\",\n  \"prompt\": \"Why is the sky blue?\",\n  \"stream\": false\n}'\n```\n\n### Start RAG\n```shell\npipenv run python main.py\n# (or) activate the virtual environment, then run the file\npipenv shell\npython main.py\n```\n\nA prompt will appear where you can ask questions:\n\n```\nQuery: How many locations does WeWork have?\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fxmlking%2Fai-experiments","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fxmlking%2Fai-experiments","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fxmlking%2Fai-experiments/lists"}