{"id":28361568,"url":"https://github.com/rayyan9477/solace-ai","last_synced_at":"2026-03-10T09:39:54.209Z","repository":{"id":260905306,"uuid":"882675734","full_name":"Rayyan9477/Solace-AI","owner":"Rayyan9477","description":"\"Solace AI: Your Empathetic Digital Confidant\": Solace AI is an empathetic mental health companion that understands your emotions and personality to provide personalized support through natural conversations. It creates a judgment-free space where you can express yourself freely and receive compassionate guidance tailored just for you.","archived":false,"fork":false,"pushed_at":"2026-03-06T10:54:17.000Z","size":6221,"stargazers_count":15,"open_issues_count":0,"forks_count":5,"subscribers_count":1,"default_branch":"main","last_synced_at":"2026-03-06T14:52:45.398Z","etag":null,"topics":["agent-orchestration","ai-agent","cognitive-behavioral-therapy","conversational-ai","emotion-intelligence","faiss","faiss-vector-database","generative-ai","machine-learning","mental-health","multi-agent-systems","nlp","nlp-machine-learning","personality-test","python","sentiment-analysis","transformer","vector-database","voice-ai","voice-assistant"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Rayyan9477.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-11-03T13:18:57.000Z","updated_at":"2026-03-06T10:54:20.000Z","dependencies_parsed_at":"2026-01-01T06:01:56.776Z","
dependency_job_id":null,"html_url":"https://github.com/Rayyan9477/Solace-AI","commit_stats":null,"previous_names":["rayyan9477/chatbot","rayyan9477/contextual-chatbot"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/Rayyan9477/Solace-AI","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Rayyan9477%2FSolace-AI","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Rayyan9477%2FSolace-AI/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Rayyan9477%2FSolace-AI/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Rayyan9477%2FSolace-AI/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Rayyan9477","download_url":"https://codeload.github.com/Rayyan9477/Solace-AI/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Rayyan9477%2FSolace-AI/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30328891,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-10T05:25:20.737Z","status":"ssl_error","status_checked_at":"2026-03-10T05:25:17.430Z","response_time":106,"last_error":"SSL_read: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agent-orchestration","ai-agent","cognitive-behavioral-therapy","conversational-ai","emotion-intelligence","faiss","faiss-vector-database","generative-ai","machine-learning","mental-health","multi-agent-systems","nlp","nlp-machine-learning","personality-test","python","sentiment-analysis","transformer","vector-database","voice-ai","voice-assistant"],"created_at":"2025-05-28T13:40:23.750Z","updated_at":"2026-03-10T09:39:54.194Z","avatar_url":"https://github.com/Rayyan9477.png","language":"Python","readme":"# 🌟 Solace AI: Your Empathetic Digital Confidant\n\n\u003cdiv align=\"center\"\u003e\n\n![Solace AI 
Logo](https://img.shields.io/badge/Solace%20AI-Your%20Empathetic%20Digital%20Confidant-brightgreen?style=for-the-badge\u0026logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAyNCAyNCI+PHBhdGggZD0iTTEyIDJjNS41MjIgMCAxMCA0LjQ3OCAxMCAxMHMtNC40NzggMTAtMTAgMTBTMiAxNy41MjIgMiAxMiA2LjQ3OCAyIDEyIDJ6bTAgNGMtMy4zMTQgMC02IDIuNjg2LTYgNnMyLjY4NiA2IDYgNiA2LTIuNjg2IDYtNi0yLjY4Ni02LTYtNnptMCAyYzIuMjEgMCA0IDEuNzkgNCA0cy0xLjc5IDQtNCA0LTQtMS43OS00LTQgMS43OS00IDQtNHptMCAyYy0xLjEwNSAwLTIgLjg5NS0yIDJzLjg5NSAyIDIgMiAyLS44OTUgMi0yLS44OTUtMi0yLTJ6IiBmaWxsPSIjZmZmIi8+PC9zdmc+)\n![Version](https://img.shields.io/badge/Version-1.0.0-blue?style=for-the-badge)\n![License](https://img.shields.io/badge/License-MIT-green?style=for-the-badge)\n[![Python](https://img.shields.io/badge/Python-3.8%2B-blue?style=for-the-badge\u0026logo=python)](https://www.python.org/)\n![Research Project](https://img.shields.io/badge/Research-Project-purple?style=for-the-badge)\n[![Multi-Agent](https://img.shields.io/badge/Multi-Agent-orange?style=for-the-badge)](https://github.com/Rayyan9477/Contextual-Chatbot)\n\n\u003c/div\u003e\n\n---\n\n\u003cp align=\"center\"\u003e\n  \u003cb\u003eAn advanced mental health AI companion that understands your emotions and personality to provide personalized support through natural conversations. 
Powered by a sophisticated multi-agent architecture, Solace AI creates a judgment-free space where you can express yourself freely and receive compassionate guidance tailored specifically for your unique needs.\u003c/b\u003e\n\u003c/p\u003e\n\n---\n\n## 📋 Table of Contents\n\n- [🌟 Features](#-features)\n  - [Core Capabilities](#core-capabilities)\n- [🎬 Demo](#-demo)\n  - [Text-Based Interaction](#text-based-interaction)\n  - [Voice Interaction](#voice-interaction)\n- [🔧 Technology Stack](#-technology-stack)\n  - [Key Library Dependencies](#key-library-dependencies)\n- [📂 Project Structure](#-project-structure)\n- [📥 Installation](#-installation)\n  - [Prerequisites](#prerequisites)\n  - [Setup](#setup)\n  - [Docker Installation](#docker-installation-alternative)\n- [🚀 Getting Started](#-getting-started)\n  - [API Server for Mobile App Integration](#api-server-for-mobile-app-integration)\n  - [Command Line Interface](#command-line-interface)\n  - [Configuration](#configuration)\n- [🎙️ Voice Interaction](#-voice-interaction)\n  - [Voice Recognition Architecture](#voice-recognition-architecture)\n  - [Text-to-Speech System](#text-to-speech-system)\n  - [Voice Styles and Parameters](#voice-styles-and-parameters)\n  - [Voice Integration Flow](#voice-integration-flow)\n  - [Celebrity Voice Cloning](#celebrity-voice-cloning-integration)\n- [🧠 Advanced Features](#-advanced-features)\n  - [Celebrity Voice Cloning](#celebrity-voice-cloning)\n  - [Comprehensive Mental Health Assessment](#comprehensive-mental-health-assessment-system)\n  - [Therapeutic Approach Integration](#therapeutic-approach-integration)\n  - [Real-time Emotion Tracking](#real-time-emotion-tracking-and-visualization)\n- [🏗️ Project Architecture](#-project-architecture)\n  - [Agent Components](#agent-components)\n  - [Memory Systems Architecture](#memory-systems-architecture)\n- [👥 Contributing](#-contributing)\n- [📄 License](#-license)\n- [📬 Contact](#-contact)\n\n## 🌟 Features\n\n\u003cdiv align=\"center\"\u003e\n\u003ctable\u003e\n  \u003ctr\u003e\n   
 \u003ctd align=\"center\"\u003e\n      \u003cimg src=\"https://img.shields.io/badge/-Emotional%20Analysis-FF6B6B?style=flat-square\" alt=\"Emotional Analysis\"/\u003e\n      \u003cbr /\u003eSophisticated emotion detection from text and voice\n    \u003c/td\u003e\n    \u003ctd align=\"center\"\u003e\n      \u003cimg src=\"https://img.shields.io/badge/-Voice%20Interaction-4ECDC4?style=flat-square\" alt=\"Voice Interaction\"/\u003e\n      \u003cbr /\u003eNatural voice conversations with emotion recognition\n    \u003c/td\u003e\n    \u003ctd align=\"center\"\u003e\n      \u003cimg src=\"https://img.shields.io/badge/-Personality%20Assessment-FF9A8B?style=flat-square\" alt=\"Personality Assessment\"/\u003e\n      \u003cbr /\u003eTailored support based on your unique traits\n    \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd align=\"center\"\u003e\n      \u003cimg src=\"https://img.shields.io/badge/-Mental%20Health%20Support-845EC2?style=flat-square\" alt=\"Mental Health Support\"/\u003e\n      \u003cbr /\u003eEvidence-based therapeutic approaches\n    \u003c/td\u003e\n    \u003ctd align=\"center\"\u003e\n      \u003cimg src=\"https://img.shields.io/badge/-Context%20Awareness-00B8A9?style=flat-square\" alt=\"Context Awareness\"/\u003e\n      \u003cbr /\u003eRemembers conversations and adapts over time\n    \u003c/td\u003e\n    \u003ctd align=\"center\"\u003e\n      \u003cimg src=\"https://img.shields.io/badge/-Safety%20Monitoring-F9F871?style=flat-square\" alt=\"Safety Monitoring\"/\u003e\n      \u003cbr /\u003eCrisis detection and appropriate interventions\n    \u003c/td\u003e\n  \u003c/tr\u003e\n\u003c/table\u003e\n\u003c/div\u003e\n\n### Core Capabilities\n\n- **🎭 Emotional Intelligence**: Analyzes emotions in both text and voice to respond appropriately using advanced sentiment analysis algorithms\n- **🧠 Personality Adaptation**: Tailors responses based on Big Five and MBTI personality assessments to match your communication style\n- **🗣️ Voice 
Conversations**: Natural speech recognition and response using Whisper V3 Turbo ASR with emotional tone detection\n- **💬 Therapeutic Techniques**: Integrates CBT, mindfulness, Solution-Focused Brief Therapy, and other evidence-based approaches\n- **🔄 Contextual Memory**: Employs vector databases and semantic memory to remember past interactions for more meaningful conversations\n- **🛡️ Safety Protocols**: Sophisticated safety agent monitors for crisis indicators and provides appropriate guidance\n- **🎙️ Voice Style Customization**: Multiple voice styles including warm, calm, professional, and more with celebrity voice cloning capabilities\n- **🌐 Multi-Agent Architecture**: Orchestrates specialized agents (emotion, safety, therapy, personality, diagnosis) for comprehensive support\n\n---\n\n## 🎬 Demo\n\n### Text-Based Interaction\n\n```\n🧠 Assistant: Hi there. I'm here to listen and support you in a safe, judgment-free space. What's on your mind today?\n\nUser: I've been feeling really overwhelmed lately with work and family stress\n\n🤔 Processing...\n\n🧠 Assistant: I can hear how overwhelmed you're feeling right now, juggling both work and family responsibilities. That's a lot to carry on your shoulders. It's completely natural to feel this way when multiple areas of life are demanding your attention and energy.\n\nBased on what you've shared, it might help to create some boundaries between these different roles. 
Perhaps you could try a simple breathing technique when transitioning between work and family time - just taking 3-5 deep breaths while acknowledging that you're switching contexts.\n\nWould you like to talk more about what specific aspects of work or family life are feeling most challenging right now?\n```\n\n### Voice Interaction\n\n\u003cdiv align=\"center\"\u003e\n  \u003cp\u003e\u003ci\u003e🎧 Voice interaction with emotion detection and appropriate voice tone responses\u003c/i\u003e\u003c/p\u003e\n\u003c/div\u003e\n\n---\n\n## 🔧 Technology Stack\n\n\u003cdiv align=\"center\"\u003e\n\n| Component | Technologies | Description |\n|-----------|-------------|-------------|\n| **Core Framework** | Python, PyTorch, LangChain, Streamlit | Foundation for application development with powerful UI capabilities |\n| **NLP/AI** | Google Gemini, NLTK, Transformers, HuggingFace | Advanced language models for natural conversation and understanding |\n| **Voice Technology** | Whisper V3 Turbo ASR, TTS, Voice Emotion Analysis | Enabling natural voice interactions with emotion detection |\n| **Memory Systems** | ChromaDB, Vector Embeddings, Semantic Memory | Storing and retrieving contextual information for conversations |\n| **Architecture** | Multi-Agent System, Agentic RAG, Prompt Engineering | Sophisticated coordination of specialized AI components |\n| **Development** | Docker, Git, Pytest, CI/CD | Tools for reliable development and deployment |\n\n\u003c/div\u003e\n\n### Key Library Dependencies\n\n```python\n# Core NLP and AI\ngemini-api==1.0.1        # Google Gemini API integration\ntorch==2.0.1             # PyTorch for deep learning\ntransformers==4.30.0     # Hugging Face Transformers for NLP models\nlangchain==0.0.267       # Framework for LLM application development\n\n# Voice Processing\nwhisper-v3-turbo==1.2.0  # Advanced speech recognition\nspeechrecognition==3.10.0 # Speech-to-text processing\npyttsx3==2.90            # Text-to-speech conversion\n\n# Memory and 
Data Management\nchromadb==0.4.6          # Vector database for embeddings\nsentence-transformers==2.2.2 # Text embeddings generation\n\n# UI and Visualization\nstreamlit==1.24.0        # Web interface creation\nplotly==5.15.0           # Interactive visualizations\n```\n\n---\n\n## 📂 Project Structure\n\n```\nSolace-AI/\n├── app.py                   # Streamlit web application entry point\n├── chat.py                  # Command-line chat interface\n├── main.py                  # Main application initialization\n├── config.yaml              # Main configuration file\n├── celebrity_voice_clone_demo.py # Celebrity voice cloning demo\n├── requirements.txt         # Core dependencies\n├── requirements_voice.txt   # Voice-related dependencies\n├── Dockerfile               # Docker containerization\n├── setup_voice.py           # Voice capabilities setup\n├── src/\n│   ├── agents/              # Agent components\n│   │   ├── agent_orchestrator.py  # Manages agent coordination\n│   │   ├── base_agent.py    # Abstract agent class\n│   │   ├── chat_agent.py    # Manages conversation flow\n│   │   ├── emotion_agent.py # Emotion detection and response\n│   │   ├── safety_agent.py  # Crisis detection and intervention\n│   │   ├── therapy_agent.py # Therapeutic techniques\n│   │   └── personality_agent.py # Personality adaptation\n│   ├── api/                 # API endpoints\n│   │   └── router.py        # API routing\n│   ├── components/          # Core components\n│   │   ├── audio_processor.py # Audio processing utilities\n│   │   ├── text_processor.py  # Text processing utilities\n│   │   └── response_generator.py # Response generation\n│   ├── memory/              # Memory management\n│   │   ├── vector_database.py # Vector storage for embeddings\n│   │   ├── conversation_memory.py # Short-term memory\n│   │   └── semantic_memory.py # Long-term memory\n│   ├── models/              # LLM integration\n│   │   ├── llm.py           # Language model interface\n│   │   
└── gemini_integration.py # Gemini-specific implementation\n│   ├── personality/         # Personality assessment\n│   │   ├── big_five.py      # Big Five personality model\n│   │   └── mbti.py          # MBTI personality model\n│   ├── diagnosis/           # Mental health assessment\n│   │   ├── anxiety.py       # Anxiety screening tools\n│   │   ├── depression.py    # Depression screening tools\n│   │   └── stress.py        # Stress assessment tools\n│   ├── knowledge/           # Therapeutic knowledge base\n│   │   ├── cbt_techniques.py # Cognitive behavioral therapy\n│   │   ├── mindfulness.py   # Mindfulness practices\n│   │   └── solution_focused.py # Solution-focused brief therapy\n│   ├── planner/             # Task planning\n│   │   └── planner.py       # Task planning functionality\n│   ├── ui/                  # Streamlit UI components\n│   │   ├── sidebar.py       # Sidebar navigation\n│   │   ├── chat_interface.py # Chat UI\n│   │   └── visualizations.py # Data visualizations\n│   └── utils/               # Helper utilities\n│       ├── config.py        # Configuration management\n│       ├── text_utils.py    # Text processing utilities\n│       ├── audio.py         # Audio processing utilities\n│       ├── file_utils.py    # File handling utilities\n│       └── voice_clone_integration.py # Voice cloning utilities\n├── tests/                   # Test suite\n│   ├── unit/                # Unit tests\n│   └── integration/         # Integration tests\n├── scripts/                 # Utility scripts\n│   └── setup_environment.sh # Environment setup script\n└── docs/                    # Documentation\n```\n\nEach module is designed to be modular and reusable, enabling easy extension and customization of Solace AI's capabilities.\n\n---\n\n## 📥 Installation\n\n### Prerequisites\n\n- Python 3.8 or higher\n- pip (Python package installer)\n- Git\n- 8GB+ RAM recommended for optimal performance\n\n### Setup\n\n1. 
**Clone the repository**\n   ```bash\n   git clone https://github.com/Rayyan9477/Solace-AI.git\n   cd Solace-AI\n   ```\n\n2. **Create a virtual environment (recommended)**\n   ```bash\n   python -m venv venv\n   source venv/bin/activate  # On Linux/macOS\n   venv\\Scripts\\activate.bat  # On Windows\n   ```\n\n3. **Install core dependencies**\n   ```bash\n   pip install -r requirements.txt\n   ```\n\n4. **Set up voice capabilities (optional)**\n   ```bash\n   pip install -r requirements_voice.txt\n   python setup_voice.py\n   ```\n\n5. **Set up environment variables**\n   ```bash\n   # Create a .env file with your API keys\n   echo \"GEMINI_API_KEY=your_api_key_here\" \u003e .env\n   ```\n\n### Docker Installation (Alternative)\n\nFor containerized deployment:\n\n1. **Build the Docker image**\n   ```bash\n   docker build -t solace-ai .\n   ```\n\n2. **Run the Docker container**\n   ```bash\n   docker run -p 5000:5000 solace-ai\n   ```\n\n## 🚀 Getting Started\n\n### API Server for Mobile App Integration\n\nStart the API server for mobile app integration:\n\n```bash\npython launch_api.py\n```\n\nThis launches the REST API server with:\n- Complete API documentation at `/docs`\n- Endpoints for chat, assessment, voice processing\n- Mobile app integration capabilities\n- User profile management\n\n### Command Line Interface\n\nFor a simple command-line experience:\n\n```bash\npython start.py\n```\n\nOptional flags:\n- `--voice-only`: Use only voice input for conversation\n- `--text-only`: Use only text input (disables voice features)\n- `--model turbo`: Specify Whisper model version (options: base, small, medium, large, turbo)\n- `--voice-style warm`: Choose voice style (options: warm, calm, professional, excited, sad)\n- `--personality-type supportive_counselor`: Select chatbot personality\n- `--memory-size 10`: Set the number of conversation turns to remember\n- `--debug`: Enable debug mode with detailed logging\n\n### Configuration\n\nYou can customize 
Solace AI by editing the `config.yaml` file:\n\n```yaml\n# Example configuration settings\nagent_settings:\n  emotion_threshold: 0.7\n  safety_check_frequency: 2\n  therapy_approaches: [\"cbt\", \"mindfulness\", \"solution_focused\"]\n  \nvoice_settings:\n  default_style: \"warm\"\n  language: \"en-US\"\n  \nmemory_settings:\n  vector_db_path: \"./data/memory\"\n  conversation_turns: 10\n```\n\n---\n\n## 🎙️ Voice Interaction\n\nSolace AI provides advanced voice interaction capabilities that create natural, emotionally aware conversations:\n\n### Voice Recognition Architecture\n\n- **Whisper V3 Turbo**: State-of-the-art speech recognition with high transcription accuracy\n- **Real-time Processing**: Low-latency voice recognition for fluid conversations\n- **Emotion Detection**: Advanced algorithms analyze tone, pitch, and pacing to detect emotional states\n- **Multilingual Support**: Works across 40+ languages with automatic language detection\n\n### Text-to-Speech System\n\nThe voice synthesis system features adaptive voice styles that respond to the emotional context of the conversation:\n\n```python\n# Sample voice style customization\nasync def respond_to_emotion(user_input, detected_emotion):\n    if detected_emotion == \"anxious\":\n        await voice_manager.speak_text(\n            \"I understand you're feeling anxious right now. Let's take a deep breath together.\",\n            style=\"calm\",\n            rate=0.85\n        )\n    elif detected_emotion == \"sad\":\n        await voice_manager.speak_text(\n            \"I hear that you're feeling down. 
It's okay to feel this way.\",\n            style=\"warm\",\n            pitch_adjustment=-0.05\n        )\n```\n\n### Voice Styles and Parameters\n\nChoose from multiple voice styles to match your preferences:\n\n| Style | Description | Use Case | Parameters |\n|-------|-------------|----------|------------|\n| **Warm** | Compassionate and supportive | Emotional support | `pitch=0.0, rate=1.0` |\n| **Calm** | Soothing and peaceful | Anxiety reduction | `pitch=-0.1, rate=0.9` |\n| **Professional** | Clear and structured | Information delivery | `pitch=0.05, rate=1.05` |\n| **Excited** | Energetic and motivating | Encouragement | `pitch=0.15, rate=1.1` |\n| **Sad** | Empathetic for difficult moments | Grief support | `pitch=-0.15, rate=0.9` |\n\n### Voice Integration Flow\n\n```mermaid\nsequenceDiagram\n    User-\u003e\u003eSystem: Speak (Audio Input)\n    System-\u003e\u003eSpeech Recognition: Process Audio\n    Speech Recognition-\u003e\u003eEmotion Detector: Audio Features\n    Speech Recognition-\u003e\u003eText Processor: Transcribed Text\n    Emotion Detector-\u003e\u003eAgent Orchestrator: Emotional State\n    Text Processor-\u003e\u003eAgent Orchestrator: Processed Text\n    Agent Orchestrator-\u003e\u003eResponse Generator: Generate Response\n    Response Generator-\u003e\u003eVoice Synthesizer: Text Response\n    Voice Synthesizer-\u003e\u003eSystem: Select Voice Style\n    System-\u003e\u003eUser: Speak (Audio Output)\n```\n\n### Celebrity Voice Cloning Integration\n\nOur advanced voice cloning system allows for personalized interactions with familiar voices:\n\n```bash\n# Example usage of celebrity voice cloning\npython celebrity_voice_clone_demo.py --celebrity \"Morgan Freeman\" --text \"Welcome to your mindfulness session. 
Let's begin with a moment of calm reflection.\"\n```\n\nThis feature can be customized through the `src/utils/voice_clone_integration.py` module for creating new voice profiles.\n\n---\n\n## 🧠 Advanced Features\n\n### Celebrity Voice Cloning\n\nSolace AI includes an advanced voice cloning system that can replicate celebrity voices for a more engaging and personalized experience:\n\n```bash\n# Clone a celebrity voice and generate audio\npython celebrity_voice_clone_demo.py --celebrity \"Morgan Freeman\" --text \"Hello, I am here to support you on your journey.\"\n\n# Clone with specific emotional tone\npython celebrity_voice_clone_demo.py --celebrity \"Oprah Winfrey\" --text \"Let's explore your thoughts together.\" --emotion \"warm\"\n\n# Save the generated audio to a file\npython celebrity_voice_clone_demo.py --celebrity \"David Attenborough\" --text \"The journey to mental wellness is remarkable.\" --output \"meditation_intro.mp3\"\n```\n\nThe voice cloning system uses a sophisticated neural network architecture to analyze and reproduce the unique characteristics of various voices while maintaining natural-sounding speech patterns.\n\n### Comprehensive Mental Health Assessment System\n\nSolace AI includes an integrated assessment system that combines multiple evaluation methods:\n\n\u003cdiv align=\"center\"\u003e\n\u003ctable\u003e\n  \u003ctr\u003e\n    \u003cth\u003eAssessment Type\u003c/th\u003e\n    \u003cth\u003eMethodology\u003c/th\u003e\n    \u003cth\u003eApplication\u003c/th\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003cb\u003ePersonality Analysis\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003eBig Five traits (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism) and MBTI assessment\u003c/td\u003e\n    \u003ctd\u003eCustomizes conversation style and therapeutic approaches based on personality traits\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003cb\u003eMental Health 
Screening\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003eValidated screening tools for anxiety (GAD-7), depression (PHQ-9), and stress (PSS)\u003c/td\u003e\n    \u003ctd\u003eIdentifies potential mental health concerns and suggests appropriate support\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003cb\u003eEmotional Pattern Recognition\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003eLongitudinal analysis of emotional content across conversations\u003c/td\u003e\n    \u003ctd\u003eIdentifies recurring themes, triggers, and emotional patterns\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003cb\u003eCommunication Style Analysis\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003eNatural language processing to assess communication preferences\u003c/td\u003e\n    \u003ctd\u003eAdapts responses to match the user's communication style\u003c/td\u003e\n  \u003c/tr\u003e\n\u003c/table\u003e\n\u003c/div\u003e\n\nThe assessment system uses a combination of direct questioning, conversation analysis, and pattern recognition to build a comprehensive profile of the user's mental health needs and preferences.\n\n### Therapeutic Approach Integration\n\nSolace AI integrates evidence-based therapeutic techniques from various psychological approaches:\n\n```mermaid\ngraph TD\n    A[User Input] --\u003e B{Therapy Agent}\n    B --\u003e C[Approach Selection]\n    C --\u003e D[Cognitive Behavioral Therapy]\n    C --\u003e E[Mindfulness Practices]\n    C --\u003e F[Solution-Focused Brief Therapy]\n    C --\u003e G[Motivational Interviewing]\n    C --\u003e H[Acceptance and Commitment Therapy]\n    D --\u003e I[Response Generation]\n    E --\u003e I\n    F --\u003e I\n    G --\u003e I\n    H --\u003e I\n    I --\u003e J[User Output]\n```\n\n#### Therapeutic Techniques Available\n\n- **Cognitive Behavioral Therapy**: Identifies and challenges negative thought patterns\n  ```python\n  # Example CBT implementation\n  def 
cognitive_restructuring(negative_thought):\n      evidence_for = prompt_for_evidence(negative_thought, \"supporting\")\n      evidence_against = prompt_for_evidence(negative_thought, \"contradicting\")\n      alternative_perspective = generate_balanced_thought(negative_thought, evidence_for, evidence_against)\n      return alternative_perspective\n  ```\n\n- **Mindfulness and Meditation**: Guided exercises for present-moment awareness\n- **Solution-Focused Brief Therapy**: Focuses on solutions rather than problems\n- **Motivational Interviewing**: Resolves ambivalence and increases motivation for change\n- **Acceptance and Commitment Therapy**: Promotes psychological flexibility and values-based action\n\n### Real-time Emotion Tracking and Visualization\n\nThe system tracks emotional states throughout conversations and provides visualizations to help users understand their emotional patterns:\n\n```python\n# Example emotion tracking visualization\nimport matplotlib.pyplot as plt\n\ndef generate_emotion_timeline(conversation_history):\n    emotions = extract_emotions(conversation_history)\n    time_points = extract_timestamps(conversation_history)\n    \n    fig = plt.figure(figsize=(10, 6))\n    plt.plot(time_points, emotions['joy'], 'g-', label='Joy')\n    plt.plot(time_points, emotions['sadness'], 'b-', label='Sadness')\n    plt.plot(time_points, emotions['anger'], 'r-', label='Anger')\n    plt.plot(time_points, emotions['anxiety'], 'y-', label='Anxiety')\n    plt.legend()\n    plt.title('Emotional Journey')\n    \n    return fig\n```\n\nThis feature enables users to gain insights into their emotional patterns over time, helping them develop greater emotional awareness and regulation strategies.\n\n## 🏗️ Project Architecture\n\nSolace AI uses a sophisticated multi-agent architecture designed for modularity, extensibility, and efficient task handling:\n\n\u003cdiv align=\"center\"\u003e\n  \u003ctable\u003e\n    \u003ctr\u003e\n      \u003cth colspan=\"3\"\u003eAgent Orchestrator\u003c/th\u003e\n   
 \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd align=\"center\"\u003eEmotion Agent\u003c/td\u003e\n      \u003ctd align=\"center\"\u003eSafety Agent\u003c/td\u003e\n      \u003ctd align=\"center\"\u003eChat Agent\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd align=\"center\"\u003eTherapy Agent\u003c/td\u003e\n      \u003ctd align=\"center\"\u003ePersonality Agent\u003c/td\u003e\n      \u003ctd align=\"center\"\u003eDiagnosis Agent\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd colspan=\"3\" align=\"center\"\u003eLLM Foundation (Google Gemini)\u003c/td\u003e\n    \u003c/tr\u003e\n  \u003c/table\u003e\n\u003c/div\u003e\n\n```mermaid\ngraph TD\n    A[User Input] --\u003e B(Agent Orchestrator)\n    B --\u003e C{Task Distribution}\n    C --\u003e D[Emotion Agent]\n    C --\u003e E[Safety Agent]\n    C --\u003e F[Chat Agent]\n    C --\u003e G[Therapy Agent]\n    C --\u003e H[Personality Agent]\n    C --\u003e I[Diagnosis Agent]\n    D --\u003e J{Vector Database}\n    E --\u003e J\n    F --\u003e J\n    G --\u003e J\n    H --\u003e J\n    I --\u003e J\n    J --\u003e K[Response Generation]\n    K --\u003e L[User Output]\n    style B fill:#f9f,stroke:#333,stroke-width:2px\n    style J fill:#ccf,stroke:#333,stroke-width:2px\n```\n\n### Agent Components\n\n1. **Agent Orchestrator**: Central coordinator that manages task distribution and information flow between agents, and aggregates results into coherent responses. Handles the lifecycle of agents and their interactions.\n\n2. **Emotion Agent**: Detects and analyzes emotional content in text and voice inputs using sentiment analysis and tone recognition. Helps tailor responses to match the user's emotional state.\n\n3. **Safety Agent**: Continuously monitors conversations for crisis indicators, self-harm signals, or other safety concerns. Provides appropriate interventions and resources when needed.\n\n4. 
**Chat Agent**: Manages core conversation flow, ensuring natural dialogue and maintaining context. Interfaces directly with the LLM for response generation.\n\n5. **Therapy Agent**: Implements evidence-based therapeutic techniques including CBT, mindfulness, Solution-Focused Brief Therapy, and more. Selects appropriate therapeutic approaches based on user needs.\n\n6. **Personality Agent**: Adapts responses based on personality assessments (Big Five, MBTI) to match the user's communication style and preferences. Maintains a consistent personality profile.\n\n7. **Diagnosis Agent**: Performs mental health assessments and screening for anxiety, depression, and stress. Provides personalized insights based on recognized patterns.\n\n### Memory Systems Architecture\n\n- **Conversational Memory**: Short-term memory that tracks immediate conversation history and context for coherent dialogue.\n\n- **Semantic Memory**: Long-term storage for important user insights, preferences, and patterns. Enables the system to build a comprehensive understanding of the user over time.\n\n- **Vector Database**: Employs embedding-based storage for efficient similarity search. Retrieves relevant context based on semantic meaning rather than keyword matching.\n\n```python\n# Example of memory interaction\nimport uuid\n\nclass VectorDB:\n    def __init__(self, config, client, llm):\n        self.config = config\n        self.client = client  # vector store client (e.g. ChromaDB)\n        self.llm = llm        # provides get_embedding()\n        self.collection = self.client.get_or_create_collection(name=\"user_memories\")\n        \n    def add_data(self, data):\n        self.collection.add(\n            embeddings=[self.llm.get_embedding(d) for d in data],\n            documents=data,\n            ids=[str(uuid.uuid4()) for _ in data]\n        )\n\n    def query_data(self, query, n_results=5):\n        return self.collection.query(\n            query_embeddings=[self.llm.get_embedding(query)],\n            n_results=n_results\n        )\n```\n\n---\n\n## 👥 Contributing\n\nContributions are welcome! Here's how to get started:\n\n1. 
Fork the repository\n2. Create a feature branch (`git checkout -b feature/amazing-feature`)\n3. Commit your changes (`git commit -m 'Add amazing feature'`)\n4. Push to the branch (`git push origin feature/amazing-feature`)\n5. Open a Pull Request\n\nPlease read the [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.\n\n---\n\n## 📄 License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n---\n\n## 📬 Contact\n\n- **GitHub**: [Rayyan9477](https://github.com/Rayyan9477)\n- **LinkedIn**: [Rayyan Ahmed](https://www.linkedin.com/in/rayyan-ahmed9477/)\n- **Email**: rayyanahmed265@yahoo.com\n\n---\n\n\u003cdiv align=\"center\"\u003e\n  \u003cp\u003e\n    \u003cb\u003eSolace AI\u003c/b\u003e: Your compassionate companion for mental wellbeing\n    \u003cbr\u003e\n    Made with ❤️ by Rayyan Ahmed\n  \u003c/p\u003e\n\u003c/div\u003e\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frayyan9477%2Fsolace-ai","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Frayyan9477%2Fsolace-ai","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frayyan9477%2Fsolace-ai/lists"}