{"id":13547651,"url":"https://github.com/chatgpt-web-dev/chatgpt-web","last_synced_at":"2026-03-04T09:02:56.812Z","repository":{"id":131292985,"uuid":"609488972","full_name":"chatgpt-web-dev/chatgpt-web","owner":"chatgpt-web-dev","description":"A third-party ChatGPT Web UI page built with Express and Vue3, through the official OpenAI completion API. / 用 Express 和 Vue3 搭建的第三方 ChatGPT 前端页面, 基于 OpenAI 官方 completion API.","archived":false,"fork":false,"pushed_at":"2026-01-16T04:08:00.000Z","size":10196,"stargazers_count":1864,"open_issues_count":161,"forks_count":476,"subscribers_count":16,"default_branch":"main","last_synced_at":"2026-01-18T07:31:58.186Z","etag":null,"topics":["ai","chatgpt","chatgpt-web","cloud-sync","expressjs","ldap","naive-ui","oidc","openai","openai-api","openai-chatgpt","saml","sso","vue"],"latest_commit_sha":null,"homepage":"https://chatgpt-web.dev","language":"Vue","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":"Chanzhaoyu/chatgpt-web","license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/chatgpt-web-dev.png","metadata":{"files":{"readme":"README.en.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.en.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null}},"created_at":"2023-03-04T10:31:45.000Z","updated_at":"2026-01-16T04:06:59.000Z","dependencies_parsed_at":"2024-02-03T13:32:03.635Z","dependency_job_id":"5a780930-3e0f-4e0a-9507-cdcbee12ea3e","html_url":"https://github.com/chatgpt-web-dev/chatgpt-web","commit_stats":{"total_commits":549,"total_committers":87,"mean_commits":6.310344827586207,"dds":0.5719489981785064,"last_synced_commit"
:"d01fabe5cb98a184cb57dc9203cd179b7d5f0711"},"previous_names":["chatgpt-web-dev/chatgpt-web","kerwin1202/chatgpt-web"],"tags_count":120,"template":false,"template_full_name":null,"purl":"pkg:github/chatgpt-web-dev/chatgpt-web","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chatgpt-web-dev%2Fchatgpt-web","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chatgpt-web-dev%2Fchatgpt-web/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chatgpt-web-dev%2Fchatgpt-web/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chatgpt-web-dev%2Fchatgpt-web/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/chatgpt-web-dev","download_url":"https://codeload.github.com/chatgpt-web-dev/chatgpt-web/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chatgpt-web-dev%2Fchatgpt-web/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30076935,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-04T08:01:56.766Z","status":"ssl_error","status_checked_at":"2026-03-04T08:00:42.919Z","response_time":59,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","chatgpt","chatgpt-web","cloud-sync","expressjs","ldap","naive-ui","oidc","openai","openai-api","openai-chatgpt","saml","sso","vue"],"created_at":"2024-08-01T12:00:59.233Z","updated_at":"2026-03-04T09:02:56.796Z","avatar_url":"https://github.com/chatgpt-web-dev.png","language":"Vue","readme":"# ChatGPT Web\n\n[中文](./README.md) | [English](./README.en.md)\n\n\n## Introduction\n\n\u003e [!IMPORTANT]\n\u003e **This project is forked from [Chanzhaoyu/chatgpt-web](https://github.com/Chanzhaoyu/chatgpt-web)**\n\u003e\n\u003e As the original project author did not agree to introducing a database dependency, this hard fork was created for independent development ([see the discussion for details](https://github.com/Chanzhaoyu/chatgpt-web/pull/589#issuecomment-1469207694))\n\u003e\n\u003e Thank you again, the great [Chanzhaoyu](https://github.com/Chanzhaoyu), for your contributions to the open-source project 🙏\n\nSome unique features have been added:\n\n[✓] Register \u0026 Login \u0026 Reset Password \u0026 2FA\n\n[✓] Sync chat history\n\n[✓] Set the API key from the front-end page\n\n[✓] Custom Sensitive Words\n\n[✓] Set unique prompts for each chat room\n\n[✓] User management\n\n[✓] Random Key\n\n[✓] Conversation round limits \u0026 per-user limits \u0026 gift cards\n\n[✓] SSO login via the auth proxy feature (requires integrating a third-party authentication reverse proxy; supports login protocols such as LDAP/OIDC/SAML)\n\n[✓] Web Search functionality (Real-time web search 
based on the Tavily API)\n\n[✓] VLLM API model support \u0026 optional deep thinking mode toggle\n\n[✓] Context Window Control\n\n\u003e [!CAUTION]\n\u003e This project is only published on GitHub, under the MIT license, free and for open-source learning use. There is no account selling, paid service, discussion group, or any similar offering in any form. Beware of scams.\n\n## Screenshots\n\u003e Disclaimer: This project is only released on GitHub, under the MIT License, free and for open-source learning purposes. There will be no account selling, paid services, discussion groups, or forums. Beware of fraud.\n\n![cover3](./docs/login.jpg)\n![cover](./docs/c1.png)\n![cover2](./docs/c2.png)\n![cover3](./docs/basesettings.jpg)\n![cover3](./docs/prompt_en.jpg)\n![cover3](./docs/user-manager.jpg)\n![cover3](./docs/key-manager-en.jpg)\n![userlimit](./docs/add_redeem_and_limit.png)\n![setmanuallimit](./docs/manual_set_limit.png)\n![giftcarddb](./docs/giftcard_db_design.png)\n\n- [ChatGPT Web](#chatgpt-web)\n\t- [Introduction](#introduction)\n\t- [Roadmap](#roadmap)\n\t- [Prerequisites](#prerequisites)\n\t\t- [Node](#node)\n\t\t- [PNPM](#pnpm)\n\t\t- [Fill in the API Key](#fill-in-the-api-key)\n\t- [Install Dependencies](#install-dependencies)\n\t\t- [Backend](#backend)\n\t\t- [Frontend](#frontend)\n\t- [Run in Test Environment](#run-in-test-environment)\n\t\t- [Backend Service](#backend-service)\n\t\t- [Frontend Webpage](#frontend-webpage)\n\t- [Packaging](#packaging)\n\t\t- [Using Docker](#using-docker)\n\t\t\t- [Docker Parameter Example](#docker-parameter-example)\n\t\t\t- [Docker Build \\\u0026 Run](#docker-build--run)\n\t\t\t- [Docker Compose](#docker-compose)\n\t\t- [Deployment with Railway](#deployment-with-railway)\n\t\t\t- [Railway Environment Variables](#railway-environment-variables)\n\t\t- [Manual packaging](#manual-packaging)\n\t\t\t- [Backend service](#backend-service-1)\n\t\t\t- [Frontend webpage](#frontend-webpage-1)\n\t- [Auth 
Proxy Mode](#auth-proxy-mode)\n\t- [Web Search Functionality](#web-search-functionality)\n\t\t- [Features](#features)\n\t\t- [Configuration](#configuration)\n\t\t- [Usage](#usage)\n\t\t- [Technical Implementation](#technical-implementation)\n\t\t- [Notes](#notes)\n\t- [Frequently Asked Questions](#frequently-asked-questions)\n\t- [Contributing](#contributing)\n\t- [Sponsorship](#sponsorship)\n\t- [License](#license)\n\n## Introduction\n\nUses the official `OpenAI API` to access `ChatGPT`:\n\n`ChatGPTAPI` uses `gpt-4.1` through the official `OpenAI` completion `API` (requires an API key).\n\n[Details](https://github.com/Chanzhaoyu/chatgpt-web/issues/138)\n\nSetup:\n1. Copy the contents of the `service/.env.example` file into a new `service/.env` file.\n2. Fill in the `OPENAI_API_KEY` field with your OpenAI API key [(get an API key)](https://platform.openai.com/overview).\n\nEnvironment Variables:\n\nFor all parameter variables, check [here](#docker-parameter-example) or see:\n\n```\n/service/.env\n```\n\n## Roadmap\n[✓] Dual models\n\n[✓] Multiple session storage and context logic\n\n[✓] Formatting and beautifying code-like message types\n\n[✓] Login or Register\n\n[✓] Set API key and other information on the front-end page.\n\n[✓] Data import and export\n\n[✓] Save message to local image\n\n[✓] Multilingual interface\n\n[✓] Interface themes\n\n[✓] VLLM API model support\n\n[✓] Deep thinking mode switch\n\n[✗] More...\n\n## Prerequisites\n\n### Node\n\n`node` requires version `^20 || ^22 || ^24`; multiple local `node` versions can be managed using [nvm](https://github.com/nvm-sh/nvm).\n\n```shell\nnode -v\n```\n\n### PNPM\nIf you have not installed `pnpm` before:\n```shell\nnpm install pnpm -g\n```\n\n### Fill in the API Key\n\nGet your `OpenAI API Key` and fill it into the local environment variables ([see Introduction](#introduction))\n\n```\n# service/.env file\n\n# OpenAI API Key - https://platform.openai.com/overview\nOPENAI_API_KEY=\n```\n\n## Install Dependencies\n\n\u003e 
To make it easier for `backend developers` to get started, we did not use a front-end `workspace`; the front end and back end live in separate folders. If you only need to do secondary development of the front-end page, you can delete the `service` folder.\n\n### Backend\n\nEnter the `/service` folder and run the following command\n\n```shell\npnpm install\n```\n\n### Frontend\nRun the following command in the root directory\n```shell\npnpm bootstrap\n```\n\n## Run in Test Environment\n### Backend Service\n\nEnter the `/service` folder and run the following command\n\n```shell\npnpm start\n```\n\n### Frontend Webpage\nRun the following command in the root directory\n```shell\npnpm dev\n```\n\n## Packaging\n\n### Using Docker\n\n#### Docker Parameter Example\n\n- `OPENAI_API_KEY` required\n- `OPENAI_API_BASE_URL` optional, available when `OPENAI_API_KEY` is set\n- `OPENAI_API_MODEL` optional, specify the model to use\n- `AUTH_SECRET_KEY` access password, optional\n- `TIMEOUT_MS` timeout in milliseconds, optional\n- `SOCKS_PROXY_HOST` optional, effective with SOCKS_PROXY_PORT\n- `SOCKS_PROXY_PORT` optional, effective with SOCKS_PROXY_HOST\n- `SOCKS_PROXY_USERNAME` optional, effective with SOCKS_PROXY_HOST and SOCKS_PROXY_PORT\n- `SOCKS_PROXY_PASSWORD` optional, effective with SOCKS_PROXY_HOST and SOCKS_PROXY_PORT\n- `HTTPS_PROXY` optional, supports http, https, and socks5\n\n![docker](./docs/docker.png)\n\n#### Docker Build \u0026 Run\n\n```bash\nGIT_COMMIT_HASH=$(git rev-parse HEAD)\nRELEASE_VERSION=$(git branch --show-current)\ndocker build --build-arg GIT_COMMIT_HASH=${GIT_COMMIT_HASH} --build-arg RELEASE_VERSION=${RELEASE_VERSION} -t chatgpt-web .\n\n# foreground operation\n# If MongoDB runs on the host machine, use MONGODB_URL=mongodb://host.docker.internal:27017/chatgpt\ndocker run --name chatgpt-web --rm -it -p 127.0.0.1:3002:3002 --env OPENAI_API_KEY=your_api_key --env MONGODB_URL=your_mongodb_url chatgpt-web\n\n# background operation\ndocker run --name chatgpt-web -d -p 
127.0.0.1:3002:3002 --env OPENAI_API_KEY=your_api_key --env MONGODB_URL=your_mongodb_url chatgpt-web\n\n# running address\nhttp://localhost:3002/\n```\n\n#### Docker Compose\n\n[Hub Address](https://hub.docker.com/r/chatgptweb/chatgpt-web)\n\n```yml\nversion: '3'\n\nservices:\n  app:\n    image: chatgptweb/chatgpt-web # always uses the latest image; pull the tag again when updating\n    container_name: chatgptweb\n    restart: unless-stopped\n    ports:\n      - 3002:3002\n    depends_on:\n      - database\n    environment:\n      TZ: Asia/Shanghai\n      # Title for the site\n      SITE_TITLE: ChatGpt Web\n      # Access salt, optional; login is enabled when not empty.\n      AUTH_SECRET_KEY: xxx\n      # MongoDB connection string\n      MONGODB_URL: 'mongodb://chatgpt:xxxx@database:27017'\n      # Salt for password encryption (used once registration is enabled)\n      PASSWORD_MD5_SALT: xxx\n      # Super administrator account (used once registration is enabled)\n      ROOT_USER: me@example.com\n      # Allow anyone to register. Must be enabled at first so the administrator can register; it can be turned off later.\n      REGISTER_ENABLED: true\n      # For more configuration, register an administrator after startup and adjust settings in the administrator page.\n    links:\n      - database\n\n  database:\n    image: mongo\n    container_name: chatgptweb-database\n    restart: unless-stopped\n    ports:\n      - '27017:27017'\n    expose:\n      - '27017'\n    volumes:\n      - mongodb:/data/db\n    environment:\n      MONGO_INITDB_ROOT_USERNAME: chatgpt\n      MONGO_INITDB_ROOT_PASSWORD: xxxx\n      MONGO_INITDB_DATABASE: chatgpt\n\nvolumes:\n  mongodb: {}\n```\n`OPENAI_API_BASE_URL` is optional and only takes effect when `OPENAI_API_KEY` is set.\n\n### Deployment with Railway\n\n[![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/new/template/yytmgc)\n\n\u003e Refer to this issue: https://github.com/Kerwin1202/chatgpt-web/issues/266\n\n\u003e Note: Changing environment variables in Railway will cause 
re-deployment.\n\n### Manual packaging\n\n#### Backend service\n\n\u003e If you don't need the `node` interface of this project, you can skip the following steps.\n\nCopy the `service` folder to a server that has a `node` service environment.\n\n```shell\n# Install\npnpm install\n\n# Build\npnpm build\n\n# Run\npnpm prod\n```\n\nPS: You can also run `pnpm start` directly on the server without packaging.\n\n#### Frontend webpage\n\n1. Create a `.env` file in the root directory based on `.env.example`, and set `VITE_GLOB_API_URL` to your actual backend API address.\n2. Run the following command in the root directory and then copy the files in the `dist` folder to the root directory of your website service.\n\n[Reference information](https://cn.vitejs.dev/guide/static-deploy.html#building-the-app)\n\n```shell\npnpm build\n```\n\n## Context Window Control\n\n\u003e [!TIP]\n\u003e Context Window Control allows users to flexibly manage context information in AI conversations, optimizing model performance and conversation effectiveness.\n\n### Features\n\n- **Context Management**: Control the amount of chat history the model can reference\n- **Per-conversation Control**: Each conversation can independently enable or disable the context window\n- **Real-time Switching**: Context mode can be switched at any time during a conversation\n- **Memory Management**: Flexibly control the AI's memory scope and continuity\n- **Configurable Quantity**: Administrators can set the maximum number of context messages\n\n### How It Works\n\nThe context window determines the amount of chat history from the current session that the model can reference during generation:\n\n- **A reasonable context window size** helps the model generate coherent and relevant text\n- **Avoid confusion or irrelevant output** caused by referencing too much context\n- **Turning off the context window** causes the session to lose memory, making each question completely 
independent\n\n### Usage\n\n#### 1. Enable/Disable Context Window\n\n1. **Enter Conversation Interface**: This feature can be used in any conversation session\n2. **Find Control Switch**: Locate the \"Context Window\" toggle button in the conversation interface\n3. **Switch Mode**:\n   - **Enable**: Model will reference previous chat history, maintaining conversation coherence\n   - **Disable**: Model will not reference history, treating each question independently\n\n#### 2. Usage Scenarios\n\n**Recommended to enable context window when:**\n- Need continuous dialogue and context correlation\n- In-depth discussion of complex topics\n- Multi-turn Q\u0026A and step-by-step problem solving\n- Need AI to remember previously mentioned information\n\n**Recommended to disable context window when:**\n- Independent simple questions\n- Avoid historical information interfering with new questions\n- Handling multiple unrelated topics\n- Need a \"fresh start\" scenario\n\n#### 3. Administrator Configuration\n\nAdministrators can configure in system settings:\n- **Maximum Context Count**: Set the number of context messages included in the conversation\n- **Default State**: Set the default context window state for new conversations\n\n### Technical Implementation\n\n- **Context Truncation**: Automatically truncate specified number of historical messages\n- **State Persistence**: Each conversation independently saves context window switch state\n- **Real-time Effect**: Takes effect immediately for the next message after switching\n- **Memory Optimization**: Reasonably control context length, avoiding model limits\n\n### Notes\n\n- **Conversation Coherence**: Disabling context window will affect conversation continuity\n- **Token Consumption**: More context will increase token usage\n- **Response Quality**: Appropriate context helps improve answer quality\n- **Model Limitations**: Need to consider context length limits of different models\n\n## VLLM API Deep Thinking Mode 
Control\n\n\u003e [!TIP]\n\u003e Deep thinking mode control is only available when the backend is configured to use VLLM API, allowing users to choose whether to enable the model's deep thinking functionality.\n\n### Features\n\n- **VLLM API Exclusive Feature**: Only available when the backend uses VLLM API\n- **Per-conversation Control**: Each conversation can independently enable or disable deep thinking mode\n- **Real-time Switching**: Deep thinking mode can be switched at any time during conversation\n- **Performance Optimization**: Disabling deep thinking can improve response speed and reduce computational costs\n\n### How It Works\n\nAfter enabling deep thinking, the model will use more computational resources and take longer time to simulate more complex thinking chains for logical reasoning:\n\n- **Suitable for complex tasks or high-requirement scenarios**, such as mathematical derivations and project planning\n- **Daily simple queries do not need to be enabled** deep thinking mode\n- **Disabling deep thinking** can achieve faster response speed\n\n### Prerequisites\n\n**The following conditions must be met to use this feature:**\n\n1. **Backend Configuration**: Backend must be configured to use VLLM API interface\n2. **Model Support**: The model used must support deep thinking functionality\n3. **API Compatibility**: VLLM API version needs to support thinking mode control parameters\n\n### Usage\n\n#### 1. Enable/Disable Deep Thinking Mode\n\n1. **Enter Conversation Interface**: In a conversation session that supports VLLM API\n2. **Find Control Switch**: Locate the \"Deep Thinking\" toggle button in the conversation interface\n3. **Switch Mode**: \n   - Enable: Model will perform deep thinking, providing more detailed and in-depth responses\n   - Disable: Model will respond directly, faster but potentially more concise\n\n#### 2. 
Usage Scenarios\n\n**Recommended to enable deep thinking when:**\n- Complex problems require in-depth analysis\n- Logical reasoning and multi-step thinking are needed\n- High-quality responses are required\n- Response time is not a concern\n\n**Recommended to disable deep thinking when:**\n- Simple questions need quick answers\n- Fast response is required\n- Computational costs need to be reduced\n- Batch-processing simple tasks\n\n#### 3. Technical Implementation\n\n- **API Parameter**: Controlled through the VLLM API's `disable_thinking` parameter\n- **State Persistence**: Each conversation session independently saves the deep thinking switch state\n- **Real-time Effect**: Takes effect immediately for the next message after switching\n\n### Notes\n\n- **VLLM API Only**: This feature is only available when the backend uses the VLLM API; other APIs (such as the OpenAI API) do not support it\n- **Model Dependency**: Not all models support deep thinking mode; please confirm that your model supports this feature\n- **Response Differences**: Disabling deep thinking may affect the detail and quality of responses\n- **Cost Considerations**: Enabling deep thinking typically increases computational costs and response time\n\n## Frequently Asked Questions\n\nQ: Why does Git always report an error when committing?\n\nA: Commit messages are verified; please follow the [Commit Guidelines](./CONTRIBUTING.en.md).\n\nQ: Where do I change the request interface if only the frontend page is used?\n\nA: The `VITE_GLOB_API_URL` field in the `.env` file at the root directory.\n\nQ: Everything turns red when saving a file?\n\nA: For `vscode`, please install the project's recommended extensions, or manually install the `ESLint` extension.\n\nQ: Why doesn't the frontend have a typewriter effect?\n\nA: One possible reason is that with Nginx reverse proxying, buffering is turned on: Nginx will try to buffer a certain amount of data from the backend before sending it to the browser. 
Please try adding `proxy_buffering off;` after the reverse-proxy directives and then reload Nginx. Other web server configurations are similar.\n\nQ: The content returned is incomplete?\n\nA: There is a length limit for the content returned by the API each time. You can modify the `VITE_GLOB_OPEN_LONG_REPLY` field in the `.env` file under the root directory, set it to `true`, and rebuild the front-end to enable the long-reply feature, which can return the full content. Note that using this feature may incur more API usage fees.\n\n## Auth Proxy Mode\n\n\u003e [!WARNING]\n\u003e This feature is intended only for operations engineers with relevant experience, for deployment when integrating an enterprise's internal account management system. Improper configuration may lead to security risks.\n\nSet the env variable `AUTH_PROXY_ENABLED=true` to enable auth proxy mode.\n\nAfter activating this feature, make sure that chatgpt-web can only be accessed through the reverse proxy.\n\nAuthentication is performed by the reverse proxy, which then forwards the request with a header identifying the user.\nThe default header name is `X-Email`; it can be customized via the env variable `AUTH_PROXY_HEADER_NAME`.\n\nIf your IdP uses the LDAP protocol, [authelia](https://www.authelia.com) is recommended.\n\nIf your IdP uses the OIDC protocol, [oauth2-proxy](https://oauth2-proxy.github.io/oauth2-proxy) is recommended.\n\n## Web Search Functionality\n\n\u003e [!TIP]\n\u003e Web Search functionality is built on the [Tavily API](https://tavily.com/), allowing ChatGPT to access the latest web information to answer questions.\n\n### Features\n\n- **Real-time Web Search**: Get the latest web information via the Tavily API\n- **Intelligent Query Extraction**: Automatically extract the most relevant search keywords from user questions\n- **Search Result Integration**: Seamlessly integrate search results into AI conversations\n- **Per-session Control**: 
Each conversation can independently enable or disable search functionality\n- **Search History**: Save search queries and results to database\n- **Configurable System Messages**: Support custom search-related system prompt messages\n\n### Configuration\n\n#### 1. Get Tavily API Key\n\n1. Visit [Tavily Official Website](https://tavily.com/) to register an account\n2. Obtain API Key\n\n#### 2. Administrator Configuration\n\n1. Login to the system as an administrator\n2. Go to system settings page\n3. Find \"Web Search Configuration\" option\n4. Fill in the following configurations:\n   - **Enable Status**: Turn on/off global search functionality\n   - **API Key**: Enter Tavily API Key\n   - **Max Search Results**: Set the maximum number of search results returned per search (1-20, default 10)\n   - **Search Query System Message**: Prompt template for extracting search keywords\n   - **Search Result System Message**: Prompt template for processing search results\n\n#### 3. System Message Templates\n\n**Search Query Extraction Template** (for extracting search keywords from user questions):\n```\nYou are a search query extraction assistant. Extract the most relevant search query from user's question and wrap it with \u003csearch_query\u003e\u003c/search_query\u003e tags.\nCurrent time: {current_time}\n```\n\n**Search Result Processing Template** (for processing conversations with search results):\n```\nYou are a helpful assistant with access to real-time web search results. Use the provided search information to give accurate and up-to-date responses.\nCurrent time: {current_time}\n```\n\n### Usage\n\n#### User Operations\n\n1. **Enable Search Functionality**:\n   - In the conversation interface, find the search toggle button\n   - Click to enable web search functionality for the current session\n\n2. 
**Ask Questions for Real-time Information**:\n   - After enabling search, directly ask ChatGPT questions that require real-time information\n   - The system will automatically search for relevant information and integrate it into the response\n\n3. **View Search History**:\n   - Search queries and results are saved in the database\n   - You can view specific search records through the database\n\n#### Workflow\n\n1. **User Question**: User asks a question in a search-enabled session\n2. **Query Extraction**: System uses AI to extract search keywords from the question\n3. **Web Search**: Call Tavily API for real-time search\n4. **Result Integration**: Provide search results as context to AI\n5. **Generate Response**: AI generates more accurate responses based on search results\n\n### Technical Implementation\n\n- **Search Engine**: Tavily API\n- **Query Extraction**: Use OpenAI API to intelligently extract keywords\n- **Result Format**: JSON format to store complete search results\n- **Data Storage**: MongoDB stores search queries and results\n- **Timeout Setting**: Search request timeout is 300 seconds\n- **Result Count Control**: Support configuration of maximum search results returned per search (1-20)\n\n### Notes\n\n- Web Search functionality requires additional Tavily API costs\n- Search functionality will increase response time\n- It is recommended to enable selectively based on actual needs\n- Administrators can control the global search functionality status\n- Each session can independently control whether to use search functionality\n- The maximum search results setting affects the detail level of search and API costs\n\n\n## Contributing\n\nPlease read the [Contributing Guidelines](./CONTRIBUTING.en.md) before contributing.\n\nThanks to all the contributors!\n\n\u003ca href=\"https://github.com/chatgpt-web-dev/chatgpt-web/graphs/contributors\"\u003e\n  \u003cimg alt=\"Contributors Image\" 
src=\"https://contrib.rocks/image?repo=chatgpt-web-dev/chatgpt-web\" width=\"550\" /\u003e\n\u003c/a\u003e\n\n## Star History\n\n\u003cpicture\u003e\n  \u003csource media=\"(prefers-color-scheme: dark)\" srcset=\"https://api.star-history.com/svg?repos=chatgpt-web-dev/chatgpt-web\u0026type=Date\u0026theme=dark\" /\u003e\n  \u003csource media=\"(prefers-color-scheme: light)\" srcset=\"https://api.star-history.com/svg?repos=chatgpt-web-dev/chatgpt-web\u0026type=Date\" /\u003e\n  \u003cimg alt=\"Star History Chart\" src=\"https://api.star-history.com/svg?repos=chatgpt-web-dev/chatgpt-web\u0026type=Date\" width=\"550\" /\u003e\n\u003c/picture\u003e\n\n## Sponsorship\n\nIf you find this project helpful, please give it a star.\n\n---\n\nThanks to [DigitalOcean](https://www.digitalocean.com/) for sponsoring the open-source credits used to run our infrastructure servers.\n\n\u003cp\u003e\n  \u003ca href=\"https://www.digitalocean.com/\"\u003e\n    \u003cimg alt=\"digitalocean\" src=\"https://opensource.nyc3.cdn.digitaloceanspaces.com/attribution/assets/SVG/DO_Logo_horizontal_blue.svg\" width=\"201px\"\u003e\n  \u003c/a\u003e\n\u003c/p\u003e\n\n## License\n[MIT © github.com/chatgpt-web-dev Contributors](./LICENSE)\n","funding_links":[],"categories":["A01_文本生成_文本对话","Vue","Openai"],"sub_categories":["大语言对话模型及数据"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchatgpt-web-dev%2Fchatgpt-web","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fchatgpt-web-dev%2Fchatgpt-web","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchatgpt-web-dev%2Fchatgpt-web/lists"}