# ChatGPT on WeChat ![GitHub License](https://img.shields.io/github/license/kx-huang/chatgpt-on-wechat?label=License&color=orange) [![wakatime](https://wakatime.com/badge/github/kx-Huang/ChatGPT-on-WeChat.svg)](https://wakatime.com/badge/github/kx-Huang/ChatGPT-on-WeChat) ![Railway Deploy](https://img.shields.io/github/checks-status/kx-huang/chatgpt-on-wechat/master?logo=railway&style=flat&label=Deploy) ![GitHub Repo stars](https://img.shields.io/github/stars/kx-huang/chatgpt-on-wechat?style=social)

🤖️ Turn your WeChat into ChatGPT [**within only 2 steps!**](#12-deploy-on-railway) 🤖️


Group chat demo for @kx-Huang/ChatGPT-on-WeChat

## Features

This project builds on [this amazing project](https://github.com/fuergaosi233/wechat-chatgpt) that I contributed to before. Using the [`Wechaty SDK`](https://github.com/wechaty/wechaty) and the `OpenAI API`, it achieves:

- fast and robust connections to a set of AI models with different capabilities, typically `gpt-4o` and `gpt-3.5-turbo` (the model behind `ChatGPT`)
- stable, persistent, and rapid deployment on `Railway` cloud servers

## 0. Table of Content

- [ChatGPT on WeChat ](#chatgpt-on-wechat----)
- [1. How to Deploy this Bot?](#1-how-to-deploy-this-bot)
- [1.1 Deploy in Local](#11-deploy-in-local)
- [1.1.1 Get your OpenAI API Keys](#111-get-your-openai-api-keys)
- [1.1.2 Configure Environment Variables](#112-configure-environment-variables)
- [1.1.3 Setup the Docker](#113-setup-the-docker)
- [1.1.4 Login your WeChat](#114-login-your-wechat)
- [1.2 Deploy on Railway](#12-deploy-on-railway)
- [1.2.1 Configure on `Railway`](#121-configure-on-railway)
- [1.2.2 Deploy \& Login on `Railway`](#122-deploy--login-on-railway)
- [1.3 Deploy on Alibaba Cloud ComputeNest](#13-deploy-on-alibaba-cloud-computenest)
- [2. Any Fancy Advanced Settings?](#2-any-fancy-advanced-settings)
- [2.1 Config Reply in Error](#21-config-reply-in-error)
- [2.2 Config `OpenAI` Models](#22-config-openai-models)
- [2.3 Config Model Features](#23-config-model-features)
- [2.4 Add Customized Task Handler](#24-add-customized-task-handler)
- [3. Common Errors and Troubleshooting](#3-common-errors-and-troubleshooting)
- [3.1 Assertion Error during Login or Self-chat 🤯](#31-assertion-error-during-login-or-self-chat-)
- [3.2 I can't trigger auto reply 🤔](#32-i-cant-trigger-auto-reply-)
- [4. How to Contribute to this Project?](#4-how-to-contribute-to-this-project)
- [5. Acknowledgement](#5-acknowledgement)
- [Thanks for your support by starring this project!](#thanks-for-your-support-by-starring-this-project)

## 1. How to Deploy this Bot?

You can deploy the bot **locally** or **on the cloud**, whichever you prefer.

The [deploy on Railway](#12-deploy-on-railway) method is recommended.

### 1.1 Deploy in Local

#### 1.1.1 Get your OpenAI API Keys

- `openaiApiKey` can be generated in the [**API Keys Page** in your OpenAI account](https://beta.openai.com/account/api-keys)
- `openaiOrganizationID` is optional, which can be found in the [**Settings Page** in your Open AI account](https://beta.openai.com/account/org-settings)

---

#### 1.1.2 Configure Environment Variables

You can copy the template `config.yaml.example` into a new file `config.yaml`, and paste the configurations:

```yaml
openaiApiKey: ""
openaiOrganizationID: ""
chatgptTriggerKeyword: ""
```

Alternatively, you can export the environment variables listed in `.env.example` to your system, which is the recommended way to keep your `OpenAI API Key` safe:

```bash
export OPENAI_API_KEY="sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
export OPENAI_ORGANIZATION_KEY="org-XXXXXXXXXXXXXXX"
export CHATGPT_TRIGGER_KEYWORD="Hi bot:"
```
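To illustrate how these two sources can coexist, here is a minimal sketch (hypothetical — the function `loadConfig` and the `Config` shape are illustrative, not the project's actual loader) where environment variables take precedence over values read from `config.yaml`:

```typescript
// Hypothetical config loader: env vars win over config.yaml values.
interface Config {
  openaiApiKey: string;
  openaiOrganizationID: string;
  chatgptTriggerKeyword: string;
}

function loadConfig(yamlValues: Partial<Config>): Config {
  return {
    // prefer the environment variable, fall back to the YAML value, then ""
    openaiApiKey: process.env.OPENAI_API_KEY ?? yamlValues.openaiApiKey ?? "",
    openaiOrganizationID:
      process.env.OPENAI_ORGANIZATION_KEY ?? yamlValues.openaiOrganizationID ?? "",
    chatgptTriggerKeyword:
      process.env.CHATGPT_TRIGGER_KEYWORD ?? yamlValues.chatgptTriggerKeyword ?? "",
  };
}
```

This precedence is why exporting the variables keeps your key out of the repo: the YAML file can stay empty while the real secrets live only in your shell or deployment environment.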

**Please note:**

- `chatgptTriggerKeyword` is the keyword that triggers auto-reply:
  - In private chat, a message that **starts with** the keyword triggers auto-reply
  - In group chat, a message that **starts with** `@Name ` triggers auto-reply
- `chatgptTriggerKeyword` can be an **empty string**, which means:
  - In private chat, **every message** triggers auto-reply
  - In group chat, only **"@ the bot"** triggers auto-reply
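The trigger rules above can be sketched as a small predicate (a simplified illustration — `shouldTrigger` is a hypothetical name, not a function from this repo, and `"Name"` stands in for the bot's WeChat name):

```typescript
// Sketch of the trigger rules: group chats require an @-mention of the bot,
// private chats match the keyword prefix (or everything if the keyword is "").
function shouldTrigger(
  text: string,
  isGroupChat: boolean,
  keyword: string,
  botName: string = "Name"
): boolean {
  if (isGroupChat) {
    // group chat: only messages starting with "@<bot name> " trigger a reply
    return text.startsWith(`@${botName} `);
  }
  // private chat: empty keyword means every message triggers a reply
  return keyword === "" || text.startsWith(keyword);
}
```

For example, with `keyword = "Hi bot:"`, the private message `"Hi bot: hello"` triggers a reply while `"hello"` does not.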

---

#### 1.1.3 Setup the Docker

1. Build the Docker image

```bash
docker build -t chatgpt-on-wechat .
```

2. Run the Docker container

```bash
docker run -v $(pwd)/config.yaml:/app/config.yaml chatgpt-on-wechat
```

You can also run it with Docker Compose:

1. Start the container

```bash
docker-compose up -d
```

2. View the QR code to log in to WeChat

```bash
docker-compose logs -f
```

---

#### 1.1.4 Login your WeChat

Once the bot is deployed, follow the prompts in the `terminal` (or the Docker container's `Logs`) carefully:

1. Scan the QR Code with mobile WeChat
2. Click "Log in" to allow desktop login (where our bot stays)
3. Wait a few seconds and start chatting!

🤖 **Enjoy your powerful chatbot!** 🤖

---

### 1.2 Deploy on Railway

Click the button below to fork this repo and deploy with Railway!

[![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/new/template/zKIfYk?referralCode=D6wD0x)

---

#### 1.2.1 Configure on `Railway`

Fill in the following blanks:

1. Your forked repo name (can be any name you like)
2. Choose whether to make it private (also up to you)
3. Environment variables (for how to get OpenAI API keys, please refer to [1.1.1 Get your OpenAI API Keys](#111-get-your-openai-api-keys))

![Railway Config](doc/img/Railway_config.png)

**Please note:**

Make sure the environment variables are set in Railway rather than written directly in `config.yaml`. It is really **NOT** recommended to expose your `OpenAI API Key` in a public repo: anyone with your key can access the OpenAI API services, and you may lose money if you are on a paid plan.

---

#### 1.2.2 Deploy & Login on `Railway`

The deployment process is automatic and may take a few minutes the first time. Once you see `Success`, click the tab to see the details (that is your secret WeChat console!).

![Railway Deploy](doc/img/Railway_deploy.png)

Click `Deploy Logs` and you will see everything being set up; wait for a QR code to pop up. Scan it as if you were logging in to desktop WeChat, then click "Log in" on your mobile WeChat.

![Railway Scan QR Code](doc/img/Railway_QRCode.png)

Finally, everything is good to go! You will see logs when people send you messages and whenever the chatbot is auto-triggered to reply.

### 1.3 Deploy on Alibaba Cloud ComputeNest

One-click deployment on Alibaba Cloud ComputeNest:

[![Deploy on AlibabaCloud ComputeNest](doc/img/deploy_to_computenest.svg)](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=ChatGPT-on-WeChat社区版)

Follow the deployment guide to deploy ChatGPT-on-WeChat on Alibaba Cloud. Both the domestic and international sites are supported.

- [Deployment Guide (domestic site)](https://computenest.console.aliyun.com/service/detail/cn-hangzhou/service-a81e49ab7dd24520a365?isInstance=true)
- [Deployment Guide (international site)](https://computenest.console.aliyun.com/service/detail/ap-southeast-1/service-37a1f9f9b9e1482ba61b?isInstance=true)

Switch the Alibaba Cloud console's language to see the guide in a different language.

First, provide cloud resource configurations such as the ECS instance type and network settings.
![ECS instance configuration](doc/img/computenest_resource_config.png)
You also need to set the ChatGPT-On-WeChat software configuration.
![ChatGPT-On-WeChat software configuration](doc/img/computenest_software_config.png)

When you confirm the deployment, Alibaba Cloud ComputeNest creates an ECS instance in your own Alibaba Cloud account, then deploys and starts the ChatGPT-on-WeChat application on that instance automatically.

After the ComputeNest service instance is deployed, check "How to use" to learn how to log in to the ECS instance.

![How to use](doc/img/computenest_how_to_use.png)

Run the command in the ECS workbench to get the QR code.
![QR code](doc/img/computenest_qr_code.png)

Scan it as if you were logging in to desktop WeChat, then click "Log in" on your mobile WeChat.

Finally, everything is good to go! You will see logs when people send you messages and whenever the chatbot is auto-triggered to reply.

## 2. Any Fancy Advanced Settings?

### 2.1 Config Reply in Error

When the OpenAI API returns an error (e.g. overloaded servers, missing authorization, ...), the chatbot auto-replies with a pre-configured message.

You can change it in `src/chatgpt.js`:

```typescript
const chatgptErrorMessage = "🤖️:ChatGPT摆烂了,请稍后再试~";
```

---

### 2.2 Config `OpenAI` Models

You can choose whichever `OpenAI` model suits your trade-off between capability, latency, and cost (e.g. a more capable model takes longer to respond).

**Currently, the latest `GPT-4o` model is up and running!**

~~Since the latest `gpt-4` model is currently in a limited beta and only accessible to those who have been granted access, currently we use the `gpt-3.5-turbo` model as default. Of course, if you have the access to `gpt-4` API, you can just change the model to `gpt-4` without any other modification.~~

According to OpenAI doc,

> GPT-4o (“o” for “omni”) is our most advanced model. It is multimodal (accepting text or image inputs and outputting text), and it has the same high intelligence as GPT-4 Turbo but is much more efficient—it generates text 2x faster and is 50% cheaper. Additionally, GPT-4o has the best vision and performance across non-English languages of any of our models.

> ~~GPT-3.5 models can understand and generate natural language or code. Our most capable and cost effective model in the GPT-3.5 family is `gpt-3.5-turbo` which has been optimized for chat but works well for traditional completions tasks as well.~~

Also, for the same model, you can configure dozens of parameters (e.g. answer randomness, maximum token limit...). For example, for the `temperature` field:

> Higher values like **0.8** will make the output more random, while lower values like **0.2** will make it more focused and deterministic.

You can configure all of them in `src/chatgpt.js`:

```typescript
chatgptModelConfig: object = {
  // this model field is required
  model: "gpt-4o",
  // add your ChatGPT model parameters below
  temperature: 0.8,
  // max_tokens: 2000,
};
```

For more details, please refer to [OpenAI Models Doc](https://beta.openai.com/docs/models/overview).

---

### 2.3 Config Model Features

You can choose whichever features fit your tasks (e.g. completing text, editing text, generating code...).

Currently, we use `createChatCompletion()` powered by the `gpt-4o` model, which:

> take a series of messages as input, and return a model-generated message as output.

You can configure in `src/chatgpt.js`:

```typescript
const response = await this.openaiApiInstance.createChatCompletion({
  ...this.chatgptModelConfig,
  messages: inputMessages,
});
```

For more details, please refer to [OpenAI API Doc](https://beta.openai.com/docs/api-reference/introduction).

---

### 2.4 Add Customized Task Handler

You can add your own task handlers to expand the ability of this chatbot!

In `src/chatgpt.ts`, write your own task handler in `ChatGPTBot.onCustimzedTask()`:

```typescript
// e.g. if a message starts with "Hello", the bot sends "World!"
if (message.text().startsWith("Hello")) {
  await message.say("World!");
  return;
}
```

## 3. Common Errors and Troubleshooting

### 3.1 Assertion Error during Login or Self-chat 🤯

- Error Log:

```log
uncaughtException AssertionError [ERR_ASSERTION]: 1 == 0
at Object.equal (/app/node_modules/wechat4u/src/util/global.js:53:14)
at /app/node_modules/wechat4u/src/core.js:195:16
at processTicksAndRejections (node:internal/process/task_queues:96:5) {
code: 2,
details: 'AssertionError [ERR_ASSERTION]: 1 == 0\n' +
' at Object.equal (/app/node_modules/wechat4u/src/util/global.js:53:14)\n' +
' at /app/node_modules/wechat4u/src/core.js:195:16\n' +
' at processTicksAndRejections (node:internal/process/task_queues:96:5)'
}
```

- Solution:
  - If you see this error during login, check [issue #8](https://github.com/kx-Huang/ChatGPT-on-WeChat/issues/8)
  - If you see this error during self-chat, check [issue #38](https://github.com/kx-Huang/ChatGPT-on-WeChat/issues/38)

### 3.2 I can't trigger auto reply 🤔

- Solution:
  - Before deployment, read the trigger conditions in [1.1.2 Configure Environment Variables](#112-configure-environment-variables)
  - After deployment, check the console logs for the following lines:
    - 🎯 Trigger keyword in private chat is: ``
    - 🎯 Trigger keyword in group chat is: `@Name `

## 4. How to Contribute to this Project?

You are more than welcome to raise issues, fork this repo, commit your code, and submit pull requests. After code review, we can merge your contribution. I'm really looking forward to developing more interesting features!

Also, there are some items on the to-do list for future enhancement:

1. Chat with context (integrate with [`LangChain`](https://github.com/langchain-ai/langchain)):
   - Keep track of every ongoing conversation for each private or group chat
   - Dynamically drop or summarize the conversation history sent through the API in case the token count gets too large
   - Set a time-out for a conversation when users stop chatting for a while
2. More AI capability:
   - Integrate the OpenAI `DALL·E` model for AI image creation, triggered by a customized keyword (e.g. Hi bot, draw...)
   - Integrate the OpenAI `Whisper` model for speech recognition, triggered by voice messages for transcription or translation
3. More flexible deployment:
   - Make deployment templates for other cloud platforms
   - Optimize the deployment process to be more robust and compatible across operating systems

## 5. Acknowledgement

Great thanks to:

- [@leoncsyang](https://github.com/leoncsyang) for fixing [Issue #81](https://github.com/kx-Huang/ChatGPT-on-WeChat/issues/81)
- [@hdfk7](https://github.com/hdfk7) for fixing [Issue #67](https://github.com/kx-Huang/ChatGPT-on-WeChat/issues/67)
- [@jichangfeng](https://github.com/jichangfeng) for merging [PR #61](https://github.com/kx-Huang/ChatGPT-on-WeChat/pull/61)

## Thanks for your support by starring this project!



Stargazers repo roster for @kx-Huang/ChatGPT-on-WeChat


Star history chart for @kx-Huang/ChatGPT-on-WeChat