# Language Select
<div align="center">
    <a href="https://github.com/HyungkyuKimDev/Chat_Assistant/blob/main/README.md">
        <img src="img/america_flag.png" alt="Logo" width="80" height="80">
    </a>
    <a href="https://github.com/HyungkyuKimDev/Chat_Assistant/blob/main/README_KR_JP/README_jp.md">
        <img src="img/japan_flag.png" alt="Logo" width="80" height="80">
    </a>
    <a href="https://github.com/HyungkyuKimDev/Chat_Assistant/blob/main/README_KR_JP/README_kr.md">
        <img src="img/korea_flag.png" alt="Logo" width="80" height="80">
    </a>
</div>


# Chat Assistant

<b>A Chat Assistant</b> for seniors, built with <b>Python, NAVER CLOVA, a wake word engine, and ChatGPT 3.5</b>.
It understands what you say into the microphone and answers like a human through the speaker.
It can be your friend, and by changing ChatGPT's prompt you can easily build a different Chat Assistant.

## Features

1. <b>Wake the Chat Assistant</b>: say "Hey" and it wakes up and starts listening.

2. <b>Talk</b>: after saying "Hey", say anything into the microphone within 3 seconds, and the Chat Assistant understands what you said.
    - Execute
    ```python
    while response != "":
        response = robot.gpt_send_anw(response)
        ans = response

        speaking(ans)
    ```
3. <b>Answer properly</b>: the assistant generates an answer with ChatGPT and speaks it out through the speaker.

4. <b>Special functions</b>
   - <b>Reset</b>: say "Reset" and it removes the stored user data, then asks you for new user data.
   - <b>Turn off</b>: say "Turn off" and it quits the process.

    ```python
    if response == "reset":
        speaking("ok. Reset mode")
        name_ini()
    elif response == "turn off":
        speaking("ok. turn off mode")
        return False
    ```


## Setup

- Clone <b>this repository</b> onto your PC.
  ```sh
  git clone https://github.com/HyungkyuKimDev/Chat_Assistant.git
  ```
- Install the packages listed in <b>requirements.txt</b> from your terminal.
  ```sh
  pip install -r requirements.txt
  ```


## Usage

- Change directory to the directory where you cloned this repository.
  ```sh
  cd [the directory where you cloned]
  ```
- Execute <b>chatbot_voice.py</b>:
  ```sh
  python3 chatbot_voice.py
  ```

## Code

### Wake Word
```python
# RATE, CHANNELS, and CHUNK are defined elsewhere in the script
stream = sd.InputStream(samplerate=RATE, channels=CHANNELS, dtype='int16')
stream.start()

owwModel = Model(wakeword_models=["../models/hey.tflite"],
                 inference_framework="tflite")

n_models = len(owwModel.models.keys())

# Main loop for wake word detection
while True:
    # Get audio
    audio_data, overflowed = stream.read(CHUNK)
    if overflowed:
        print("Audio buffer has overflowed")

    audio_data = np.frombuffer(audio_data, dtype=np.int16)

    # Feed to the openWakeWord model
    prediction = owwModel.predict(audio_data)

    detected = False
    # Process prediction results
    for mdl in owwModel.prediction_buffer.keys():
        scores = list(owwModel.prediction_buffer[mdl])
        if scores[-1] > 0.2:  # Wake word detected
            print(f"wake word detected: {mdl}!")
            detected = True
    if detected:
        speaking("yes sir!")
```

- Reference: [dscripka/openWakeWord](https://github.com/dscripka/openWakeWord)

### Recording voice
```python
def mic(time):
    import requests
    import sounddevice as sd
    from scipy.io.wavfile import write

    # Record from the microphone
    fs = 44100
    seconds = time  # recording length in seconds

    # channels is the microphone device number;
    # find your mic's channel with: python -m sounddevice
    myRecording = sd.rec(int(seconds * fs), samplerate=fs, channels=4)
    print("recording start")
    sd.wait()
    write('sampleWav.wav', fs, myRecording)

    # Voice to text using NAVER Cloud: CLOVA Speech Recognition
    lang = "Eng"
    url = "https://naveropenapi.apigw.ntruss.com/recog/v1/stt?lang=" + lang

    headers = {
        "X-NCP-APIGW-API-KEY-ID": client_id,
        "X-NCP-APIGW-API-KEY": client_secret,
        "Content-Type": "application/octet-stream"
    }

    # Send the recorded voice file and get the STT output
    with open('sampleWav.wav', 'rb') as data_voice:
        response = requests.post(url, data=data_voice, headers=headers)

    # Extract the recognized text: the characters between the third
    # and fourth double quotes of the JSON body (the "text" value)
    count_down = 0
    say_str = []
    for ch in response.text:
        if ch == "\"":
            if count_down == 3:
                break
            count_down += 1
        elif count_down == 3:
            say_str.append(ch)

    anw_str = ''.join(say_str)
    print(anw_str)
    return anw_str
```
- Reference: [NAVER Cloud: CLOVA Speech Recognition](https://api.ncloud-docs.com/docs/ai-naver-clovaspeechrecognition)

### Use Speaker
```python
def speaking(anw_text):
    # NAVER CLOVA: CLOVA Voice (premium text-to-speech)
    encText = urllib.parse.quote(anw_text)
    data = "speaker=djoey&volume=0&speed=0&pitch=0&format=mp3&text=" + encText
    url = "https://naveropenapi.apigw.ntruss.com/tts-premium/v1/tts"
    request = urllib.request.Request(url)
    request.add_header("X-NCP-APIGW-API-KEY-ID", client_id)
    request.add_header("X-NCP-APIGW-API-KEY", client_secret)
    response = urllib.request.urlopen(request, data=data.encode('utf-8'))
    rescode = response.getcode()
    if rescode == 200:
        response_body = response.read()
        with open('./ResultMP3.mp3', 'wb') as f:
            f.write(response_body)

        # Speaker output: convert the MP3 to WAV and play it
        sound = AudioSegment.from_mp3("ResultMP3.mp3")
        sound.export("test.wav", format="wav")
        pl("test.wav")

        # Remove the audio data
        os.remove("ResultMP3.mp3")
        os.remove("test.wav")
    else:
        print(f"TTS request failed with code {rescode}")
```
- Reference: [NAVER CLOVA: CLOVA Voice](https://api.ncloud-docs.com/docs/ai-naver-clovavoice-ttspremium)

### Make an Answer Using OpenAI
```python
class Robot():
    memory_size = 100

    with open('./user_value.json', 'r') as f:
        data = json.load(f)
        nameValue = data["user_name"]
        manWomanValue = data["user_value"]

    def set_memory_size(self, memory_size):
        self.memory_size = memory_size

    def gpt_send_anw(self, question: str):
        self.gpt_standard_messages = [
            {"role": "system",
             "content": f"You're an assistant robot for seniors in the USA. Your name is Robot. "
                        f"Your purpose is to support, so please answer politely, in English, and briefly enough to speak in under 5 seconds. "
                        f"Please be a good friend to your patient. "
                        f"Your patient's name is {self.nameValue}; {self.manWomanValue} is an old person."},
            {"role": "user", "content": question}]

        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=self.gpt_standard_messages,
            temperature=0.8
        )

        answer = response['choices'][0]['message']['content']

        # Keep the exchange for conversation history
        self.gpt_standard_messages.append({"role": "user", "content": question})
        self.gpt_standard_messages.append({"role": "assistant", "content": answer})

        return answer
```

## Contact

Hyungkyu Kim
- hyungkyukimdev@gmail.com
- [LinkedIn](https://www.linkedin.com/in/hyung-gyu-kim-202b991b8/)
- [Blog](https://honoluulu-life.tistory.com/)

Project Link: [HyungkyuKimDev/Chat_Assistant](https://github.com/HyungkyuKimDev/Chat_Assistant)

## Thanks to

[Topasm](https://github.com/Topasm)
- Developed the Wake Word part
- Robot Engineer
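The character-scanning loop in `mic()` pulls out whatever sits between the third and fourth double quotes of the response body. Assuming the CSR endpoint returns a JSON body shaped like `{"text": "..."}` (an assumption inferred from that loop, not verified against the API docs), the standard `json` module does the same job more robustly — a minimal sketch:

```python
import json

def extract_text(response_body: str) -> str:
    """Return the recognized text from a CSR-style JSON body.

    Assumes a body shaped like {"text": "..."}; returns "" when the
    body is not a JSON object or has no "text" field.
    """
    try:
        return json.loads(response_body).get("text", "")
    except (json.JSONDecodeError, AttributeError):
        return ""
```

Inside `mic()`, `anw_str = extract_text(response.text)` could then replace the quote-counting loop, and it would not break if the recognized text itself contains escaped quotes.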
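`name_ini()`, called by the Reset feature, is not shown in this README. Whatever it asks the user, the data it collects has to end up in the shape that `Robot()` reads back from `user_value.json`. A hypothetical helper illustrating that shape (`save_user` is not a function from this repository, just a sketch of the file format):

```python
import json

def save_user(name: str, value: str, path: str = "./user_value.json") -> None:
    # Persist user data in the shape Robot() expects on startup:
    # "user_name" -> nameValue, "user_value" -> manWomanValue
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"user_name": name, "user_value": value}, f)
```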