{"id":13645254,"url":"https://github.com/latent-variable/real_time_fallacy_detection","last_synced_at":"2025-04-21T13:32:30.897Z","repository":{"id":197707337,"uuid":"699163063","full_name":"latent-variable/Real_time_fallacy_detection","owner":"latent-variable","description":"Real-time Fallacy Detection using OpenAI whisper and ChatGPT/LLaMA/Mistral","archived":false,"fork":false,"pushed_at":"2023-12-10T00:09:04.000Z","size":3689,"stargazers_count":101,"open_issues_count":0,"forks_count":10,"subscribers_count":3,"default_branch":"main","last_synced_at":"2024-08-02T01:25:15.084Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/latent-variable.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null},"funding":{"github":"latent-variable"}},"created_at":"2023-10-02T04:02:50.000Z","updated_at":"2024-07-31T13:47:46.000Z","dependencies_parsed_at":"2023-10-11T18:34:42.027Z","dependency_job_id":"16cc0eed-814e-4928-94e9-c856b6064b5b","html_url":"https://github.com/latent-variable/Real_time_fallacy_detection","commit_stats":null,"previous_names":["latent-variable/real_time_fallacy_detection"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/latent-variable%2FReal_time_fallacy_detection","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/latent-variable%2FReal_time_fallacy_detection/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/latent-variable%2FReal_time_fallacy_detection/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/latent-variable%2FReal_time_fallacy_detection/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/latent-variable","download_url":"https://codeload.github.com/latent-variable/Real_time_fallacy_detection/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":223867932,"owners_count":17216981,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-02T01:02:32.181Z","updated_at":"2024-11-09T18:30:38.999Z","avatar_url":"https://github.com/latent-variable.png","language":"Python","readme":"\n# Real-time Fallacy Detection\n\n## Overview\n\nThis project aims to perform real-time fallacy detection during events like presidential debates. It uses the [Whisper](https://github.com/openai/whisper) for audio transcription.  
For natural language understanding and fallacy classification, you have the option to use the OpenAI ChatGPT API or a local LLM through the [text-generation-webui](https://github.com/oobabooga/text-generation-webui). I was able to run both Whisper and [IconicAI_NeuralHermes-2.5-Mistral-7B-exl2-5bpw](https://huggingface.co/IconicAI/NeuralHermes-2.5-Mistral-7B-exl2-5bpw) on a laptop with a 3080 Ti and 16GB of VRAM.\n![Alt text](img/Fallacy_classification.PNG)\n[Watch Video](https://www.youtube.com/watch?v=I9ScRL_10So)\n\n## Features\n\n- **Real-time Audio Transcription**: Uses OpenAI's Whisper ASR model for accurate real-time transcription, locally or through API access.\n- **Fallacy Detection**: Uses OpenAI's ChatGPT to classify and identify fallacies in real time.\n- **Overlay Display**: Provides a transparent overlay to show both the transcription and the fallacy detection results.\n- **Text Analysis**: Uses GPT-3/4 or a local LLM.\n\n## Dependencies\n- Anaconda\n- PyQt5 - Overlay\n- PyAudio - Audio processing\n- openai-whisper - ASR\n- openai - ChatGPT API\n- torch with CUDA, for real-time transcription\n- text-generation-webui running with the --api flag (for local LLM use)\n\n## Installation\nI built the application using Anaconda with Python 3.9 on Windows.\n\n1. Clone the repository:\n    ```\n    git clone https://github.com/latent-variable/Real_time_fallacy_detection.git\n    ```\n2. Navigate to the project directory:\n    ```\n    cd Real_time_fallacy_detection\n    ```\n3. Create a conda environment:\n    ```\n    conda create --name rtfd python=3.9\n    ```\n4. Activate the created environment:\n    ```\n    conda activate rtfd\n    ```\n5. Install the required packages:\n    ```\n    pip install -r requirements.txt\n    ```\n6. (Optional) Install the additional packages needed to run Whisper locally:\n    ```\n    pip install -r requirements_local_whisper.txt\n    ```\n7. Install [VB-Audio](https://vb-audio.com/Cable/) to forward audio output as an input device (optional, but I don't know how to redirect audio otherwise).\n\n## Usage\n\nRun the main script to start the application:\n```\npython main.py\noptional arguments:\n  -h, --help     show this help message and exit\n  --auto         Automatically get commentary\n  --api_whisper  Run Whisper through the API instead of locally\n  --api_gpt      Use the GPT API; by default a local LLM is used through the text-generation-webui API\n```\n\nIf you plan to use a local LLM, please have the text-generation-webui running with the --api flag.\n\n**Note**: The application routes the audio to VB-Audio for processing and then redirects it back to the user for playback.\n\n## Arguments\n--use_gpt(3/4): Use this flag to toggle between ChatGPT with or without GPT-4. The default is to use the local LLM.
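\nFor reference, the overall flow is: capture an audio chunk (routed in through VB-Audio), transcribe it with Whisper, and send the transcript to the LLM for fallacy classification. The sketch below is illustrative only, not the project's actual code (see `main.py`); the model name, prompt, and chunk length are assumptions.\n```\n# Illustrative sketch only -- not main.py. Assumes openai-whisper, the pre-1.0 openai package, and PyAudio.\nimport wave\nimport openai\nimport pyaudio\nimport whisper\n\nopenai.api_key = open('api_key.txt').read().strip()  # key file described under Configuration below\nRATE, CHUNK, SECONDS = 16000, 1024, 10  # placeholder chunk length\n\ndef record_chunk(path='chunk.wav'):\n    # Capture a short window of audio from the default input device (the VB-Audio cable).\n    pa = pyaudio.PyAudio()\n    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE, input=True, frames_per_buffer=CHUNK)\n    frames = [stream.read(CHUNK) for _ in range(int(RATE / CHUNK * SECONDS))]\n    stream.stop_stream()\n    stream.close()\n    width = pa.get_sample_size(pyaudio.paInt16)\n    pa.terminate()\n    with wave.open(path, 'wb') as wf:  # write the chunk to a wav file for Whisper\n        wf.setnchannels(1)\n        wf.setsampwidth(width)\n        wf.setframerate(RATE)\n        wf.writeframes(b''.join(frames))\n    return path\n\n# Transcribe locally with Whisper, then ask the LLM to flag fallacies in the transcript.\nasr = whisper.load_model('base')\ntext = asr.transcribe(record_chunk())['text']\nreply = openai.ChatCompletion.create(\n    model='gpt-4',  # or a local LLM via the text-generation-webui API\n    messages=[{'role': 'system', 'content': 'Identify any logical fallacies in the statement.'},\n              {'role': 'user', 'content': text}])\nprint(text)\nprint(reply['choices'][0]['message']['content'])\n```\nIn the actual application this loop runs continuously and the results are pushed to the overlay described below.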
\n## Display\nThe application will display an overlay with two sections:\n\n- **Top Box**: Displays the fallacy classification from ChatGPT.\n- **Bottom Box**: Displays the real-time transcription from the Whisper API.\n\nPress the `Esc` key to close the application.\n\n## Configuration\n[Audio Settings]\nYou can configure the audio input and output devices in `settings.ini`:\n- **device_input_name = VB-Audio**  \u003c- must have\n- **device_output_name = Headphones (2- Razer** \u003c- replace with your own\n*Note: when the application loads, it will redirect the audio back to the output device.*\n![Alt text](img/audio_selection.png)\n\n[Local LLM Settings]\n- **instruction_template = mistral-openorca** \u003c- replace with the model-specific template\n*Note: this is a custom template, which you will likely not have in your text-generation-webui.*\n\nRename `api_key.txt.template` to `api_key.txt` and add your OpenAI API key to it.\n\n## License\n\nThis project is licensed under the MIT License.\n","funding_links":["https://github.com/sponsors/latent-variable"],"categories":["Langchain"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flatent-variable%2Freal_time_fallacy_detection","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Flatent-variable%2Freal_time_fallacy_detection","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flatent-variable%2Freal_time_fallacy_detection/lists"}