{"id":13456608,"url":"https://github.com/InternLM/HuixiangDou","last_synced_at":"2025-03-24T11:30:49.947Z","repository":{"id":217614325,"uuid":"736491949","full_name":"InternLM/HuixiangDou","owner":"InternLM","description":"HuixiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance","archived":false,"fork":false,"pushed_at":"2024-05-05T05:31:15.000Z","size":6282,"stargazers_count":826,"open_issues_count":13,"forks_count":72,"subscribers_count":13,"default_branch":"main","last_synced_at":"2024-05-06T00:03:34.758Z","etag":null,"topics":["application","assistance","chatbot","dsl","lark","llm","multimodal","ocr","pipeline","rag","robot","wechat"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/InternLM.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-12-28T03:57:11.000Z","updated_at":"2024-05-07T03:37:39.951Z","dependencies_parsed_at":"2024-04-29T05:31:59.579Z","dependency_job_id":"019f2362-03a2-4a38-8f50-257c7d015851","html_url":"https://github.com/InternLM/HuixiangDou","commit_stats":null,"previous_names":["internlm/huixiangdou"],"tags_count":2,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InternLM%2FHuixiangDou","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InternLM%2FHuixiangDou/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InternLM%2FHuixiangDou/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/ho
sts/GitHub/repositories/InternLM%2FHuixiangDou/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/InternLM","download_url":"https://codeload.github.com/InternLM/HuixiangDou/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":221962415,"owners_count":16908336,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["application","assistance","chatbot","dsl","lark","llm","multimodal","ocr","pipeline","rag","robot","wechat"],"created_at":"2024-07-31T08:01:24.839Z","updated_at":"2025-03-24T11:30:49.927Z","avatar_url":"https://github.com/InternLM.png","language":"Python","readme":"\n# 🎚️ Upgrade\n\n[HuixiangDou2](https://github.com/tpoisonooo/HuixiangDou2) is a validated GraphRAG solution in the plant field. 
If you are interested in the **effects of HuixiangDou in non-computer fields**, try the new version.\n\n---\n\nEnglish | [简体中文](README_zh.md)\n\n\u003cdiv align=\"center\"\u003e\n\n\u003cimg src=\"resource/logo_black.svg\" width=\"555px\"/\u003e\n\n\u003cdiv align=\"center\"\u003e\n  \u003ca href=\"https://cdn.vansin.top/internlm/dou.jpg\" target=\"_blank\"\u003e\n    \u003cimg alt=\"Wechat\" src=\"https://img.shields.io/badge/wechat-robot%20inside-brightgreen?logo=wechat\u0026logoColor=white\" /\u003e\n  \u003c/a\u003e\n  \u003ca href=\"https://huixiangdou.readthedocs.io/en/latest/\" target=\"_blank\"\u003e\n    \u003cimg alt=\"Readthedocs\" src=\"https://img.shields.io/badge/readthedocs-chat%20with%20AI-brightgreen?logo=readthedocs\u0026logoColor=white\" /\u003e\n  \u003c/a\u003e\n  \u003ca href=\"https://youtu.be/ylXrT-Tei-Y\" target=\"_blank\"\u003e\n    \u003cimg alt=\"YouTube\" src=\"https://img.shields.io/badge/YouTube-black?logo=youtube\u0026logoColor=red\" /\u003e\n  \u003c/a\u003e\n  \u003ca href=\"https://www.bilibili.com/video/BV1S2421N7mn\" target=\"_blank\"\u003e\n    \u003cimg alt=\"BiliBili\" src=\"https://img.shields.io/badge/BiliBili-pink?logo=bilibili\u0026logoColor=white\" /\u003e\n  \u003c/a\u003e\n  \u003ca href=\"https://discord.gg/TW4ZBpZZ\" target=\"_blank\"\u003e\n    \u003cimg alt=\"discord\" src=\"https://img.shields.io/badge/discord-red?logo=discord\u0026logoColor=white\" /\u003e\n  \u003c/a\u003e\n  \u003ca href=\"https://arxiv.org/abs/2401.08772\" target=\"_blank\"\u003e\n    \u003cimg alt=\"Arxiv\" src=\"https://img.shields.io/badge/arxiv-2401.08772%20-darkred?logo=arxiv\u0026logoColor=white\" /\u003e\n  \u003c/a\u003e\n  \u003ca href=\"https://arxiv.org/abs/2405.02817\" target=\"_blank\"\u003e\n    \u003cimg alt=\"Arxiv\" src=\"https://img.shields.io/badge/arxiv-2405.02817%20-darkred?logo=arxiv\u0026logoColor=white\" /\u003e\n  \u003c/a\u003e\n\u003c/div\u003e\n\n\u003c/div\u003e\n\nHuixiangDou1 is a **professional knowledge 
assistant** based on LLM.\n\nAdvantages:\n\n1. Designs a three-stage pipeline of preprocessing, rejection, and response\n    * `chat_in_group` copes with the **group chat** scenario, answering user questions without message flooding, see [2401.08772](https://arxiv.org/abs/2401.08772), [2405.02817](https://arxiv.org/abs/2405.02817), [Hybrid Retrieval](./docs/en/doc_knowledge_graph.md) and [Precision Report](./evaluation/)\n    * `chat_with_repo` for **real-time streaming** chat\n2. No training required; works with CPU-only, 2G, 10G, 20G and 80G configurations\n3. Offers a complete suite of Web, Android, and pipeline source code, industrial-grade and commercially viable\n\nCheck out the [scenes in which HuixiangDou is running](./huixiangdou-inside.md) and the current public service status:\n- [readthedocs ChatWithAI](https://huixiangdou.readthedocs.io/zh-cn/latest/) (cpu-only) is available\n- [OpenXLab](https://openxlab.org.cn/apps/detail/tpoisonooo/huixiangdou-web) uses GPU and is under continuous maintenance\n- [WeChat bot](https://cdn.vansin.top/internlm/dou.jpg) has a cost associated with WeChat integration. All code has been verified to be functional for one year. Please deploy it on your own, using either the [free](https://github.com/InternLM/HuixiangDou/blob/main/docs/zh/doc_add_wechat_accessibility.md) or [commercial](https://github.com/InternLM/HuixiangDou/blob/main/docs/zh/doc_add_wechat_commercial.md) version.\n\nIf this helps you, please give it a star ⭐\n\n# 🔆 New Features\n\nOur Web version has been released to [OpenXLab](https://openxlab.org.cn/apps/detail/tpoisonooo/huixiangdou-web), where you can create a knowledge base, update positive and negative examples, turn on web search, test chat, and integrate into Feishu/WeChat groups. See [BiliBili](https://www.bilibili.com/video/BV1S2421N7mn) and [YouTube](https://www.youtube.com/watch?v=ylXrT-Tei-Y)!\n\nThe Web version's API for Android also supports other devices. 
See [Python sample code](./tests/test_openxlab_android_api.py).\n\n- \\[2025/03\\] [Forwarding messages from multiple WeChat groups](./docs/zh/doc_merge_wechat_group.md)\n- \\[2024/09\\] [Inverted indexer](https://github.com/InternLM/HuixiangDou/pull/387) makes the LLM prefer the knowledge base 🎯\n- \\[2024/09\\] [Code retrieval](./huixiangdou/service/parallel_pipeline.py)\n- \\[2024/08\\] [chat_with_readthedocs](https://huixiangdou.readthedocs.io/en/latest/), see [how to integrate](./docs/zh/doc_add_readthedocs.md) 👍\n- \\[2024/07\\] Image and text retrieval \u0026 removal of `langchain` 👍\n- \\[2024/07\\] [Hybrid Knowledge Graph and Dense Retrieval](./docs/en/doc_knowledge_graph.md) improves F1 score by 1.7% 🎯\n- \\[2024/06\\] [Evaluation of chunksize, splitter, and text2vec model](./evaluation) 🎯\n- \\[2024/05\\] [wkteam WeChat access](./docs/zh/doc_add_wechat_commercial.md), parsing images \u0026 URLs, supporting coreference resolution\n- \\[2024/05\\] [SFT LLM on NLP task, F1 increased by 29%](./sft/) 🎯\n  \u003ctable\u003e\n      \u003ctr\u003e\n          \u003ctd\u003e🤗\u003c/td\u003e\n          \u003ctd\u003e\u003ca href=\"https://huggingface.co/tpoisonooo/HuixiangDou-CR-LoRA-Qwen-14B\"\u003eLoRA-Qwen1.5-14B\u003c/a\u003e\u003c/td\u003e\n          \u003ctd\u003e\u003ca href=\"https://huggingface.co/tpoisonooo/HuixiangDou-CR-LoRA-Qwen-32B\"\u003eLoRA-Qwen1.5-32B\u003c/a\u003e\u003c/td\u003e\n          \u003ctd\u003e\u003ca href=\"https://huggingface.co/datasets/tpoisonooo/HuixiangDou-CR/tree/main\"\u003ealpaca data\u003c/a\u003e\u003c/td\u003e\n          \u003ctd\u003e\u003ca href=\"https://arxiv.org/abs/2405.02817\"\u003earXiv\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n  \u003c/table\u003e\n- \\[2024/04\\] [RAG Annotation SFT Q\u0026A Data and Examples](./docs/zh/doc_rag_annotate_sft_data.md)\n- \\[2024/04\\] Release [Web Front and Back End Service Source Code](./web) 👍\n- \\[2024/03\\] New [Personal WeChat Integration](./docs/zh/doc_add_wechat_accessibility.md) and 
[**Prebuilt APK**](https://github.com/InternLM/HuixiangDou/releases/download/v0.1.0rc1/huixiangdou-20240508.apk) !\n- \\[2024/02\\] \\[Experimental Feature\\] [WeChat Group](https://cdn.vansin.top/internlm/dou.jpg) Integration of multimodal to achieve OCR\n\n# 📖 Support Status\n\n\u003ctable align=\"center\"\u003e\n  \u003ctbody\u003e\n    \u003ctr align=\"center\" valign=\"bottom\"\u003e\n      \u003ctd\u003e\n        \u003cb\u003eLLM\u003c/b\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003cb\u003eFile Format\u003c/b\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003cb\u003eRetrieval Method\u003c/b\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003cb\u003eIntegration\u003c/b\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003cb\u003ePreprocessing\u003c/b\u003e\n      \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr valign=\"top\"\u003e\n      \u003ctd\u003e\n\n- [InternLM2/InternLM2.5](https://github.com/InternLM/InternLM)\n- [Qwen1.5~2.5](https://github.com/QwenLM/Qwen2)\n- [puyu](https://internlm.openxlab.org.cn/)\n- [StepFun](https://platform.stepfun.com)\n- [KIMI](https://kimi.moonshot.cn)\n- [DeepSeek](https://www.deepseek.com)\n- [GLM (ZHIPU)](https://www.zhipuai.cn)\n- [SiliconCloud](https://siliconflow.cn/zh-cn/siliconcloud)\n- [Xi-Api](https://api.xi-ai.cn)\n\n\u003c/td\u003e\n\u003ctd\u003e\n\n- pdf\n- word\n- excel\n- ppt\n- html\n- markdown\n- txt\n\n\u003c/td\u003e\n\n\u003ctd\u003e\n\n- Dense for Document\n- Sparse for Code \n- [Knowledge Graph](./docs/en/doc_knowledge_graph.md)\n- [Internet Search](./huixiangdou/service/web_search.py)\n- [SourceGraph](https://sourcegraph.com)\n- Image and Text\n\n\u003c/td\u003e\n\n\u003ctd\u003e\n\n- WeChat([android](./docs/zh/doc_add_wechat_accessibility.md)/[wkteam](./docs/zh/doc_add_wechat_commercial.md))\n- Lark\n- [OpenXLab Web](https://openxlab.org.cn/apps/detail/tpoisonooo/huixiangdou-web)\n- [Gradio Demo](./huixiangdou/gradio_ui.py)\n- [HTTP 
Server](./huixiangdou/server.py)\n- [Read the Docs](./docs/zh/doc_add_readthedocs.md)\n\n\u003c/td\u003e\n\n\u003ctd\u003e\n\n- [Coreference Resolution](https://arxiv.org/abs/2405.02817)\n\n\u003c/td\u003e\n\n\u003c/tr\u003e\n\n\u003c/tbody\u003e\n\u003c/table\u003e\n\n# 📦 Hardware Requirements\n\nThe following are the GPU memory requirements for different features; the difference lies only in which **options are turned on**.\n\n| Configuration Example | GPU Memory Requirement | Description | Verified on Linux |\n| :---: | :---: | :---: | :---: |\n| [config-cpu.ini](./config-cpu.ini) | - | Use [siliconcloud](https://siliconflow.cn/) API \u003cbr/\u003e for text only | ![](https://img.shields.io/badge/x86-passed-blue?style=for-the-badge) |\n| [config-2G.ini](./config-2G.ini) | 2GB | Use openai API (such as [kimi](https://kimi.moonshot.cn), [deepseek](https://platform.deepseek.com/usage) and [stepfun](https://platform.stepfun.com/)) to search for text only | ![](https://img.shields.io/badge/1660ti%206G-passed-blue?style=for-the-badge) |\n| [config-multimodal.ini](./config-multimodal.ini) | 10GB | Use openai API for LLM, image and text retrieval | ![](https://img.shields.io/badge/3090%2024G-passed-blue?style=for-the-badge) |\n| \\[Standard 
Edition\\] [config.ini](./config.ini) | 19GB | Local deployment of LLM, single modality | ![](https://img.shields.io/badge/3090%2024G-passed-blue?style=for-the-badge) |\n| [config-advanced.ini](./config-advanced.ini) | 80GB | Local LLM, anaphora resolution, single modality, practical for WeChat groups | ![](https://img.shields.io/badge/A100%2080G-passed-blue?style=for-the-badge) |\n\n# 🔥 Running the Standard Edition\n\nWe take the standard edition (locally running LLM, text retrieval) as an introductory example. Other versions differ only in configuration options.\n\n## I. Download and install dependencies\n\n[Click to agree to the BCE model agreement](https://huggingface.co/maidalun1020/bce-embedding-base_v1), then log in to Hugging Face:\n\n```shell\nhuggingface-cli login\n```\n\nInstall dependencies:\n\n```bash\n# parsing `word` format requirements\napt update\napt install python-dev libxml2-dev libxslt1-dev antiword unrtf poppler-utils pstotext tesseract-ocr flac ffmpeg lame libmad0 libsox-fmt-mp3 sox libjpeg-dev swig libpulse-dev\n# python requirements\npip install -r requirements.txt\n# For python3.8, install faiss-gpu instead of faiss\n```\n\n## II. Create knowledge base and ask questions\n\nUse mmpose documents to build the mmpose knowledge base and filter questions. If you have your own documents, just put them under `repodir`.\n\nCopy and execute all the following commands (including the '#' comment lines).\n\n```shell\n# Download the knowledge base; we only take the documents of mmpose as an example. 
You can put any of your own documents under `repodir`\ncd HuixiangDou\nmkdir repodir\ngit clone https://github.com/open-mmlab/mmpose --depth=1 repodir/mmpose\n\n# Save the features of repodir to workdir, and update the positive and negative example thresholds into `config.ini`\nmkdir workdir\npython3 -m huixiangdou.service.feature_store\n```\n\nAfter running, test with `python3 -m huixiangdou.main --standalone`. It will reply to mmpose-related questions (covered by the knowledge base) while not responding to weather questions.\n\n```bash\npython3 -m huixiangdou.main --standalone\n\n+---------------------------+---------+----------------------------+-----------------+\n|         Query             |  State  |         Reply              |   References    |\n+===========================+=========+============================+=================+\n| How to install mmpose?    | success | To install mmpose, plea..  | installation.md |\n--------------------------------------------------------------------------------------\n| How is the weather today? | unrelated.. | ..                     
|                 |\n+-----------------------+---------+--------------------------------+-----------------+\n🔆 Input your question here, type `bye` for exit:\n..\n```\n\n\u003e \\[!NOTE\\]\n\u003e\n\u003e \u003cdiv align=\"center\"\u003e\n\u003e If restarting the LLM every time is too slow, first run \u003cb\u003epython3 -m huixiangdou.service.llm_server_hybrid\u003c/b\u003e; then open a new window and each time only execute \u003cb\u003epython3 -m huixiangdou.main\u003c/b\u003e, without restarting the LLM.\n\u003e \u003c/div\u003e\n\n\u003cbr/\u003e\n\n💡 Also run a simple Web UI with `gradio`:\n\n```bash\npython3 -m huixiangdou.gradio_ui\n```\n\n\u003cvideo src=\"https://github.com/user-attachments/assets/9e5dbb30-1dc1-42ad-a7d4-dc7380676554\" \u003e\u003c/video\u003e\n\nOr run a server listening on port 23333; the default pipeline is `chat_with_repo`:\n```bash\npython3 -m huixiangdou.server\n\n# test async API\ncurl -X POST http://127.0.0.1:23333/huixiangdou_stream  -H \"Content-Type: application/json\" -d '{\"text\": \"how to install mmpose\",\"image\": \"\"}'\n# cURL sync API\ncurl -X POST http://127.0.0.1:23333/huixiangdou_inference  -H \"Content-Type: application/json\" -d '{\"text\": \"how to install mmpose\",\"image\": \"\"}'\n```\n\nPlease update the `repodir` documents, [good_questions](./resource/good_questions.json) and [bad_questions](./resource/bad_questions.json), and try your own domain knowledge (medical, financial, power, etc.).\n\n## III. Integration into Feishu, WeChat groups\n\n- [**One-way** sending to Feishu group](./docs/zh/doc_send_only_lark_group.md)\n- [**Two-way** Feishu group receiving and sending, recalling](./docs/zh/doc_add_lark_group.md)\n- [Personal WeChat Android access](./docs/zh/doc_add_wechat_accessibility.md)\n- [Personal WeChat wkteam access](./docs/zh/doc_add_wechat_commercial.md)\n\n## IV. 
Deploy web front and back end\n\nWe provide `typescript` front-end and `python` back-end source code:\n\n- Multi-tenant management supported\n- Zero-programming access to Feishu and WeChat\n- k8s friendly\n\nThis is the same as the [OpenXLab APP](https://openxlab.org.cn/apps/detail/tpoisonooo/huixiangdou-web); please read the [web deployment document](./web/README.md).\n\n# 🍴 Other Configurations\n\n## **CPU-only Edition**\n\nIf there is no GPU available, model inference can be completed using the [siliconcloud](https://siliconflow.cn/) API.\n\nTaking a docker miniconda + Python 3.11 image as an example, install the CPU dependencies and run:\n\n```bash\n# Start container\ndocker run -v /path/to/huixiangdou:/huixiangdou -p 7860:7860 -p 23333:23333 -it continuumio/miniconda3 /bin/bash\n# Install dependencies\napt update\napt install python-dev libxml2-dev libxslt1-dev antiword unrtf poppler-utils pstotext tesseract-ocr flac ffmpeg lame libmad0 libsox-fmt-mp3 sox libjpeg-dev swig libpulse-dev\npython3 -m pip install -r requirements-cpu.txt\n# Establish knowledge base\npython3 -m huixiangdou.service.feature_store --config_path config-cpu.ini\n# Q\u0026A test\npython3 -m huixiangdou.main --standalone --config_path config-cpu.ini\n# gradio UI\npython3 -m huixiangdou.gradio_ui --config_path config-cpu.ini\n```\n\nIf you find the installation too slow, a pre-installed image is provided on [Docker Hub](https://hub.docker.com/repository/docker/tpoisonooo/huixiangdou/tags); simply substitute it when starting the container.\n\n## **2G Cost-effective Edition**\n\nIf your GPU memory exceeds 1.8 GB, or you want better cost-effectiveness, use this edition. 
This configuration discards the local LLM and uses a remote LLM instead; everything else is the same as the standard edition.\n\nTaking `siliconcloud` as an example, fill the API token obtained from the [official website](https://siliconflow.cn/) into `config-2G.ini`:\n\n```toml\n# config-2G.ini\n[llm]\nenable_local = 0   # Turn off local LLM\nenable_remote = 1  # Only use remote\n..\nremote_type = \"siliconcloud\"   # Choose siliconcloud\nremote_api_key = \"YOUR-API-KEY-HERE\" # Your API key\nremote_llm_model = \"alibaba/Qwen1.5-110B-Chat\"\n```\n\n\u003e \\[!NOTE\\]\n\u003e\n\u003e \u003cdiv align=\"center\"\u003e\n\u003e Each Q\u0026A scenario requires calling the LLM up to 7 times; if you hit the free-user RPM limit, modify the \u003cb\u003erpm\u003c/b\u003e parameter in config.ini.\n\u003e \u003c/div\u003e\n\nExecute the following to get the Q\u0026A results:\n\n```shell\npython3 -m huixiangdou.main --standalone --config_path config-2G.ini # Start all services at once\n```\n\n## **10G Multimodal Edition**\n\nIf you have 10 GB of GPU memory, you can further support image and text retrieval. Just modify the models used in `config-multimodal.ini`.\n\n```toml\n# config-multimodal.ini\n# !!! Download `https://huggingface.co/BAAI/bge-visualized/blob/main/Visualized_m3.pth` to the `bge-m3` folder !!!\nembedding_model_path = \"BAAI/bge-m3\"\nreranker_model_path = \"BAAI/bge-reranker-v2-minicpm-layerwise\"\n```\n\nNote:\n\n- You need to manually download [Visualized_m3.pth](https://huggingface.co/BAAI/bge-visualized/blob/main/Visualized_m3.pth) to the [bge-m3](https://huggingface.co/BAAI/bge-m3) directory\n- Install FlagEmbedding from the main branch; we have contributed a [bugfix](https://github.com/FlagOpen/FlagEmbedding/commit/3f84da0796d5badc3ad519870612f1f18ff0d1d3). 
[Here](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/visual/eva_clip/bpe_simple_vocab_16e6.txt.gz) you can download `bpe_simple_vocab_16e6.txt.gz`\n- Install [requirements/multimodal.txt](./requirements/multimodal.txt)\n\nRun gradio to test; see the image and text retrieval results [here](https://github.com/InternLM/HuixiangDou/pull/326).\n\n```bash\npython3 tests/test_query_gradio.py\n```\n\n## **80G Complete Edition**\n\nThe \"HuiXiangDou\" bot in the WeChat experience group has all features enabled:\n\n- Serper search and SourceGraph search enhancement\n- Group chat images, WeChat public account parsing\n- Text coreference resolution\n- Hybrid LLM\n- A knowledge base covering 12 openmmlab repositories (1700 documents), refusing small talk\n\nPlease read the following topics:\n\n- [Hybrid knowledge graph and dense retrieval](./docs/en/doc_knowledge_graph.md)\n- [Refer to the config-advanced.ini configuration to improve effects](./docs/en/doc_full_dev.md)\n- [Group chat scenario anaphora resolution training](./sft)\n- [Use wkteam WeChat access, integrating images, public account parsing, and anaphora resolution](./docs/zh/doc_add_wechat_commercial.md)\n- [Use rag.py to annotate SFT training data](./docs/zh/doc_rag_annotate_sft_data.md)\n\n## **Android Tools**\n\nContributors have provided [Android tools](./android) to interact with WeChat. The solution is based on system-level APIs, and in principle it can control any UI (not limited to communication software).\n\n# 🛠️ FAQ\n\n1. 
What if the robot is too cold/too chatty?\n\n   - Fill the questions that should be answered in the real scenario into `resource/good_questions.json`, and fill the ones that should be rejected into `resource/bad_questions.json`.\n   - Adjust the theme content in `repodir` to ensure that the markdown documents in the main library do not contain irrelevant content.\n\n   Re-run `feature_store` to update the thresholds and feature libraries.\n\n   ⚠️ You can directly modify `reject_throttle` in config.ini. Generally speaking, 0.5 is a high value; 0.2 is too low.\n\n2. Launch succeeds, but it runs out of memory at runtime?\n\n   Long-context inference with a transformers-based LLM requires more memory. In that case, apply kv cache quantization to the model, as in the [lmdeploy quantization description](https://github.com/InternLM/lmdeploy/blob/main/docs/en/quantization), then use docker to deploy the Hybrid LLM Service independently.\n\n3. How to integrate another local LLM / the results are not ideal after integration?\n\n   - Open [hybrid llm service](./huixiangdou/service/llm_server_hybrid.py) and add a new LLM inference implementation.\n   - Refer to [test_intention_prompt and test data](./tests/test_intention_prompt.py), adjust the prompt and threshold for the new model, and update them in [prompt.py](./huixiangdou/service/prompt.py).\n\n4. What if the response is too slow or requests always fail?\n\n   - Refer to [hybrid llm service](./huixiangdou/service/llm_server_hybrid.py) to add exponential backoff and retries.\n   - Serve the local LLM with an inference framework such as [lmdeploy](https://github.com/internlm/lmdeploy) instead of native huggingface/transformers.\n\n5. What if the GPU memory is too low?\n\n   In that case, a local LLM cannot run; only a remote LLM combined with text2vec can execute the pipeline. 
Please make sure that `config.ini` only uses remote LLM and turn off local LLM.\n\n\n# 🍀 Acknowledgements\n\n- [KIMI](https://kimi.moonshot.cn/): Long text LLM, supports direct file upload\n- [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding): BAAI RAG group\n- [BCEmbedding](https://github.com/netease-youdao/BCEmbedding): Chinese-English bilingual feature model\n- [Langchain-ChatChat](https://github.com/chatchat-space/Langchain-Chatchat): Application of Langchain and ChatGLM\n- [GrabRedEnvelope](https://github.com/xbdcc/GrabRedEnvelope): WeChat red packet grab\n\n# 📝 Citation\n\n```shell\n@misc{kong2024huixiangdou,\n      title={HuiXiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance},\n      author={Huanjun Kong and Songyang Zhang and Jiaying Li and Min Xiao and Jun Xu and Kai Chen},\n      year={2024},\n      eprint={2401.08772},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL}\n}\n\n@misc{kong2024labelingsupervisedfinetuningdata,\n      title={Labeling supervised fine-tuning data with the scaling law}, \n      author={Huanjun Kong},\n      year={2024},\n      eprint={2405.02817},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL},\n      url={https://arxiv.org/abs/2405.02817}, \n}\n\n@misc{kong2025huixiangdou2robustlyoptimizedgraphrag,\n      title={HuixiangDou2: A Robustly Optimized GraphRAG Approach}, \n      author={Huanjun Kong and Zhefan Wang and Chenyang Wang and Zhe Ma and Nanqing Dong},\n      year={2025},\n      eprint={2503.06474},\n      archivePrefix={arXiv},\n      primaryClass={cs.IR},\n      url={https://arxiv.org/abs/2503.06474}, 
\n}\n```\n","funding_links":[],"categories":["Python","A01_文本生成_文本对话","Applications"],"sub_categories":["大语言对话模型及数据","提示语（魔法）"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FInternLM%2FHuixiangDou","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FInternLM%2FHuixiangDou","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FInternLM%2FHuixiangDou/lists"}