{"id":48596181,"url":"https://github.com/emminix/eu-green-agent","last_synced_at":"2026-04-08T21:02:04.238Z","repository":{"id":302956092,"uuid":"1012927895","full_name":"EmminiX/Eu-Green-Agent","owner":"EmminiX","description":"EU Green Deal Compliance Chatbot - Intelligent unified AI assistant for navigating EU environmental policies and sustainability requirements","archived":false,"fork":false,"pushed_at":"2025-07-05T00:40:39.000Z","size":31384,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-07-05T01:33:53.430Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/EmminiX.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-07-03T05:28:41.000Z","updated_at":"2025-07-05T00:40:42.000Z","dependencies_parsed_at":"2025-07-05T01:44:04.412Z","dependency_job_id":null,"html_url":"https://github.com/EmminiX/Eu-Green-Agent","commit_stats":null,"previous_names":["emminix/eu-green-agent"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/EmminiX/Eu-Green-Agent","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EmminiX%2FEu-Green-Agent","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EmminiX%2FEu-Green-Agent/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EmminiX%2FEu-Green-Agent/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/E
mminiX%2FEu-Green-Agent/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/EmminiX","download_url":"https://codeload.github.com/EmminiX/Eu-Green-Agent/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EmminiX%2FEu-Green-Agent/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31573788,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-08T14:31:17.711Z","status":"ssl_error","status_checked_at":"2026-04-08T14:31:17.202Z","response_time":54,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2026-04-08T21:01:15.983Z","updated_at":"2026-04-08T21:02:04.224Z","avatar_url":"https://github.com/EmminiX.png","language":"TypeScript","readme":"# 🌱 EU Green Policies Chatbot\n\n**🏆 Production-Ready Multilingual AI Assistant for EU Green Deal Compliance** \n\nAn advanced intelligent chatbot specialized in EU environmental regulations and sustainability compliance. Features **Verdana AI Agent**, **OpenAI embeddings with PostgreSQL vector database**, **comprehensive web verification**, and **24+ official EU policy documents**. 
Built with modern architecture supporting **multiple chat sessions**, **local browser storage**, and **voice accessibility**.\n\n\u003e **🎯 Current Status: Production Ready**  \n\u003e **🌍 Supporting 24 EU Languages | 📚 24+ Official Documents | 🔍 Real-time Web Verification**\n\n![License](https://img.shields.io/badge/license-MIT-green)\n![Python](https://img.shields.io/badge/python-3.11+-blue)\n![Next.js](https://img.shields.io/badge/next.js-14-black)\n![Docker](https://img.shields.io/badge/docker-supported-blue)\n\n## 🚀 Core Features\n\n### 🧠 **Verdana AI Agent Architecture**\n- **Intelligent Query Classification**: Automatically distinguishes between casual conversation, identity queries, and EU Green Deal policy questions\n- **Language Detection \u0026 Persistence**: Automatically detects user's language from first message and maintains consistency throughout session\n- **Conversation Context Awareness**: Maintains full conversation history and can reference previous queries within sessions\n- **Source Deduplication**: Eliminates duplicate documents and provides unique, high-quality sources\n- **Proactive Information Gathering**: Performs additional web searches when detailed information is requested\n\n### 🔍 **Advanced RAG \u0026 Search System**\n- **OpenAI Text-Embedding-3-Large**: 3072-dimensional vectors for precise semantic matching\n- **PostgreSQL with pgvector**: High-performance vector storage and similarity search\n- **Dual Web Verification**: Domain-restricted EU searches + broader policy research via Tavily API\n- **Smart Document Chunking**: 800-token chunks with 300-token overlap for optimal retrieval\n- **Cosine Similarity Threshold**: 0.3 threshold ensuring only relevant documents are retrieved\n\n### 🏛️ EU AI Act Compliance\n- **Article 50 Transparency**: Clear AI system disclosure in 9 EU languages\n- **User Consent Management**: Explicit consent tracking and validation\n- **AI Content Marking**: Machine-readable markers for all AI-generated 
content\n- **Multilingual Disclosure**: Full compliance across all supported languages\n- **User Rights Protection**: Complete right to information and withdrawal of consent\n\n### 🎙️ **Voice \u0026 Accessibility Features**\n- **OpenAI Whisper Integration**: High-quality speech-to-text processing with 24+ language support\n- **Real-time Audio Processing**: Converts speech to text with visual feedback and status indicators\n- **Browser-compatible Audio**: Works across Chrome, Firefox, and Edge without additional plugins\n- **Accessibility-First Design**: Voice input supports users with typing difficulties or disabilities\n\n### 📚 **Knowledge Base \u0026 Document Processing**\n- **24+ Official EU Documents**: Comprehensive coverage from European Commission official sources\n- **Smart Document Processing**: PyPDF2, python-docx, and BeautifulSoup for multi-format support\n- **Automated Embedding Pipeline**: Batch processing with efficient memory management\n- **Source Transparency**: Every response includes detailed source attribution with relevance scores\n\n### 🌐 **Enhanced User Experience**\n- **Multiple Chat Sessions**: Create separate conversations for different topics with persistent history\n- **Local Browser Storage**: All chat history saved in browser localStorage (privacy-first approach)\n- **Session Management**: Easy switching between conversations via history menu with session titles\n- **Language Consistency**: Automatic detection and maintenance throughout individual sessions\n- **Responsive Design**: Works seamlessly across desktop, tablet, and mobile devices\n- **Font Customization**: Adjustable font sizes for accessibility (auto-adjusts for maximized mode)\n- **Conversation Context**: Each session maintains its own history and can reference previous queries\n\n### 🔐 **Privacy \u0026 Data Management**\n- **GDPR Compliant**: Complete user control over personal conversation data\n- **No External Storage**: Conversations never sent to external storage 
services\n- **Browser-Only Data**: All chat history remains on user's device until browser cache is cleared\n- **Transparent Data Flow**: Clear information about what data is processed and how\n\n### 🏛️ **EU Policy Coverage (24+ Official Documents)**\n- **Core Strategies**: European Green Deal, REPowerEU Plan, Sustainable Europe Investment Plan\n- **Climate Action**: European Climate Law, Fit for 55 Package, EU Emissions Trading System Reform\n- **Sectoral Policies**: Farm to Fork Strategy, Circular Economy Action Plan, EU Biodiversity Strategy 2030\n- **Energy Transition**: Renewable Energy Directive, Energy Efficiency Directive\n- **Transport \u0026 Industry**: CBAM Implementation Guides, FuelEU Maritime, ReFuelEU Aviation\n- **Standards \u0026 Regulations**: EU Taxonomy Regulation, CO2 Emission Standards, Effort Sharing Regulation\n- **Support Mechanisms**: Social Climate Fund, Just Transition guidance, Investment frameworks\n\n## 🏗️ Architecture\n\n### Backend Stack\n- **FastAPI** (Python 3.11+) for high-performance async API endpoints\n- **Verdana Agent** specialized EU Green Deal compliance assistant with intelligent query classification\n- **PostgreSQL with pgvector** for vector storage and similarity search (replacing RAGFlow)\n- **OpenAI GPT-4** for language processing and response generation\n- **OpenAI text-embedding-3-large** for semantic document embeddings (3072 dimensions)\n- **OpenAI Whisper API** for speech-to-text transcription\n- **Tavily API** for real-time web research and verification\n- **asyncpg** for async PostgreSQL operations\n- **httpx** for async HTTP client operations\n\n### Frontend Stack\n- **Next.js 14** with App Router\n- **TypeScript** for type safety\n- **Tailwind CSS** for styling\n- **Framer Motion** for animations\n- **shadcn/ui** components\n- **Custom Canvas Background** with interactive elements\n\n### External APIs\n- **OpenAI API** for GPT-4, text-embedding-3-large, and Whisper\n- **Tavily Search API** for web research 
and verification with EU domain restrictions\n\n## 🚦 Quick Start\n\n### Prerequisites\n- **Docker \u0026 Docker Compose**\n- **Node.js 18+** (for local development)\n- **Python 3.11+** (for local development)\n- **OpenAI API Key** (required)\n- **Tavily API Key** (required for web verification)\n\n### 1. Clone the Repository\n```bash\ngit clone https://github.com/EmminiX/EU-Green_policies_chatbot.git\ncd EU-Green_policies_chatbot\n```\n\n### 2. Environment Setup\n```bash\n# Copy environment template\ncp .env.example .env\n\n# Edit the .env file with your API keys\nnano .env\n```\n\nRequired environment variables:\n```env\n# OpenAI Configuration (Required)\n# Get your API key from: https://platform.openai.com/api-keys\nOPENAI_API_KEY=sk-proj-your_openai_api_key_here\n\n# Web search and verification (Required)\n# Get your API key from: https://app.tavily.com/sign-up\nTAVILY_API_KEY=tvly-your_tavily_api_key_here\n\n# PostgreSQL Configuration\nPOSTGRES_DB=eu_green_chatbot\nPOSTGRES_USER=postgres\nPOSTGRES_PASSWORD=your_secure_password\nPOSTGRES_PORT=5432\n\n# Document Processing\nCHUNK_SIZE=800\nCHUNK_OVERLAP=300\nVECTOR_DIMENSION=3072\nOPENAI_EMBEDDING_MODEL=text-embedding-3-large\n\n# Application URLs\nBACKEND_PORT=8000\nFRONTEND_PORT=3000\n```\n\n### 3. 
Deploy with Docker\n\n```bash\n# Build and start entire system (frontend + backend + PostgreSQL + Redis)\ndocker compose build\ndocker compose up -d\n\n# Check all services are running\ndocker compose ps\n\n# Follow logs to monitor startup\ndocker compose logs -f\n\n# Check system health\ncurl http://localhost:8000/health\n```\n\n**Services Started:**\n- **Frontend**: http://localhost:3000 (Next.js application)\n- **Backend API**: http://localhost:8000 (FastAPI with Verdana agent)\n- **PostgreSQL**: localhost:5432 (Vector database with pgvector)\n- **Redis**: localhost:6379 (Session and cache storage)\n\n#### Common Commands\n```bash\n# View logs\ndocker compose logs -f [service-name]\n\n# Stop services\ndocker compose down\n\n# Restart a service\ndocker compose restart [service-name]\n\n# Rebuild after changes\ndocker compose build --no-cache [service-name]\n```\n\n### 4. Access the Application\n\n#### Application Access\n- **Frontend**: http://localhost:3000 (Main chat interface)\n- **Backend API**: http://localhost:8000 (REST API endpoints)\n- **API Documentation**: http://localhost:8000/docs (Interactive API docs)\n- **Health Check**: http://localhost:8000/health (System status)\n\n#### First Time Setup\n1. The database schema will be automatically created on first startup\n2. Upload EU policy documents using the batch upload script:\n   ```bash\n   python scripts/batch_upload_documents.py\n   ```\n3. Start chatting! 
The Verdana agent will automatically detect your language and maintain session context.\n\n## 🛠️ Development Setup\n\n### Backend Development\n```bash\ncd backend\n\n# Create virtual environment\npython -m venv venv\nsource venv/bin/activate  # On Windows: venv\\Scripts\\activate\n\n# Install dependencies\npip install -r requirements.txt\n\n# Run development server\nuvicorn main:app --reload --host 0.0.0.0 --port 8000\n```\n\n### Frontend Development\n```bash\ncd frontend\n\n# Install dependencies\nnpm install\n\n# Run development server\nnpm run dev\n\n# Build for production\nnpm run build\n```\n\n### Database Setup\n```bash\n# Run PostgreSQL with pgvector\ndocker compose up -d postgres\n\n# The database will be automatically initialized with the schema\n# from backend/sql/init.sql\n```\n\n## 🚀 Production Deployment\n\n### VPS/Server Deployment with Document Embedding\n\nWhen deploying to a VPS or server, you need to embed the EU policy documents into your vector database. Follow these steps:\n\n#### 1. Initial Server Setup\n```bash\n# Clone repository on your server\ngit clone https://github.com/EmminiX/EU-Green_policies_chatbot.git\ncd EU-Green_policies_chatbot\n\n# Copy and configure environment\ncp .env.example .env\nnano .env  # Add your API keys and configure database settings\n```\n\n#### 2. Start Database Services\n```bash\n# Start PostgreSQL with pgvector extension\ndocker compose up -d postgres\n\n# Wait for PostgreSQL to be ready\ndocker compose logs postgres\n```\n\n#### 3. 
**IMPORTANT: Document Embedding Process**\n\n**⚠️ Critical Step**: You must embed all EU policy documents when deploying to a new environment:\n\n```bash\n# Install Python dependencies for embedding script\ncd backend\npython -m venv venv\nsource venv/bin/activate  # On Windows: venv\\Scripts\\activate\npip install -r requirements.txt\n\n# Run the document embedding script\ncd ../scripts\npython embed_training_documents.py\n```\n\n**What this script does:**\n- Processes all 24+ PDF documents in `training_docs/` directory\n- Extracts text using PyPDF2\n- Creates 800-token chunks with 300-token overlap\n- Generates 3072-dimensional embeddings using OpenAI text-embedding-3-large\n- Stores vectors in PostgreSQL with pgvector for similarity search\n- **Estimated time**: 15-30 minutes depending on document size and API speed\n- **Cost**: ~$2-5 in OpenAI API usage for embeddings\n\n#### 4. Verify Document Embedding\n```bash\n# Check embedded documents in database\ncd ../backend\npython -c \"\nimport asyncio\nfrom scripts.embed_training_documents import DocumentProcessor\n\nasync def check_stats():\n    processor = DocumentProcessor()\n    await processor.initialize_database()\n    stats = await processor.get_database_stats()\n    print(f'Documents: {stats[\\\"documents\\\"]}')\n    print(f'Chunks: {stats[\\\"chunks\\\"]}')\n    await processor.close_database()\n\nasyncio.run(check_stats())\n\"\n```\n\n**Expected output:**\n```\nDocuments: 24\nChunks: 800+\n```\n\n#### 5. Deploy Full Application\n```bash\n# Build and start all services\ndocker compose build\ndocker compose up -d\n\n# Verify all services are running\ndocker compose ps\ncurl http://your-server-ip:8000/health\n```\n\n#### 6. 
Environment-Specific Configuration\n\n**For VPS deployment, update these in your `.env`:**\n```env\n# Database URL for production\nDATABASE_URL=postgresql+asyncpg://postgres:your_password@localhost:5432/eu_green_chatbot\n\n# Frontend URLs (replace with your domain)\nNEXT_PUBLIC_API_URL=http://your-domain.com:8000\nNEXT_PUBLIC_WS_URL=ws://your-domain.com:8000\n\n# Production settings\nENVIRONMENT=production\nDEBUG=false\n```\n\n### Deployment Checklist\n\n- [ ] **API Keys**: OpenAI API key added to `.env`\n- [ ] **Database**: PostgreSQL running with pgvector extension\n- [ ] **Documents**: All 24+ EU documents embedded using `embed_training_documents.py`\n- [ ] **Services**: Frontend, Backend, Database all running\n- [ ] **Health Check**: `/health` endpoint returns 200 OK\n- [ ] **Test Chat**: Verdana agent responds to EU policy queries\n- [ ] **SSL/HTTPS**: Configure reverse proxy (nginx/caddy) for HTTPS\n- [ ] **Firewall**: Configure ports 3000 (frontend) and 8000 (backend)\n- [ ] **Browser Compatibility**: Set correct HTTPS URLs for Safari/Brave compatibility\n\n### 🌐 Browser Compatibility (Important for HTTPS Deployments)\n\n**Issue**: Safari and Brave browsers may show \"Unable to connect to server\" while Chrome works fine.\n\n**Root Cause**: Mixed content violation - HTTPS site trying to make HTTP API calls.\n\n**Solution**: Update your `.env` file with HTTPS URLs for production:\n\n```bash\n# For HTTPS deployments (replace with your domain)\nNEXT_PUBLIC_API_URL=https://your-domain.com\nNEXT_PUBLIC_WS_URL=wss://your-domain.com\nNEXT_PUBLIC_ENVIRONMENT=production\n```\n\n**After changing .env, rebuild the frontend:**\n```bash\ndocker compose build --no-cache frontend\ndocker compose up -d --force-recreate frontend\n```\n\n**Timeout Configuration for Safari/Brave:**\n\nSafari and Brave browsers have stricter timeout behaviors. 
The application includes optimized timeout settings:\n\n```bash\n# Add these to your .env file for optimal Safari/Brave performance\nFRONTEND_TIMEOUT_SECONDS=25\nBACKEND_TIMEOUT_SECONDS=8\nNGINX_PROXY_TIMEOUT_SECONDS=30\n```\n\n**Why this happens:**\n- Next.js bakes environment variables at build time\n- Different browsers handle mixed content differently\n- Safari/Brave: Strict HTTPS enforcement + aggressive timeouts\n- Chrome: More permissive with cross-origin requests and timeouts\n- AbortController with 25s timeout prevents Safari/Brave hanging\n- Nginx proxy timeouts (30s) provide buffer above frontend timeout (25s)\n\n### Re-deployment Notes\n\n**When deploying to a new server/environment:**\n1. You **MUST** re-embed documents - vectors are environment-specific\n2. The embedding process needs to run once per deployment environment\n3. Document vectors are not portable between different database instances\n4. Budget for OpenAI API costs for initial embedding (~$2-5)\n\n**When updating existing deployment:**\n- If documents haven't changed: No re-embedding needed\n- If new documents added: Run embedding script to add new documents\n- If document content updated: Delete old embeddings and re-embed\n\n### Troubleshooting Deployment\n\n**No responses to EU policy questions:**\n```bash\n# Check if documents are embedded\ncurl http://localhost:8000/api/health\n# Should show document count \u003e 0\n```\n\n**Database connection errors:**\n```bash\n# Check PostgreSQL logs\ndocker compose logs postgres\n\n# Test database connection (use the credentials from your .env)\npython -c \"import asyncio, asyncpg; asyncio.run(asyncpg.connect('postgresql://postgres:your_password@localhost:5432/eu_green_chatbot')); print('Database accessible')\"\n```\n\n## 📁 Project Structure\n\n```\neu-green-chatbot/\n├── backend/\n│   ├── agents/              # AI agent system\n│   │   └── verdana_agent.py # Main Verdana agent with query classification\n│   ├── api/                 # FastAPI routes\n│   │   └── routes/          # API endpoints (chat, health)\n│   ├── core/                # Configuration and logging\n│   ├── services/      
      # Business logic services\n│   │   ├── rag_service.py   # Vector search and document processing\n│   │   ├── web_service.py   # Tavily web search integration\n│   │   └── stt_service.py   # OpenAI Whisper speech-to-text\n│   ├── sql/                 # Database schema initialization\n│   ├── requirements.txt\n│   └── Dockerfile\n├── frontend/\n│   ├── src/\n│   │   ├── app/             # Next.js app router pages\n│   │   │   ├── page.tsx     # Homepage\n│   │   │   ├── policies/    # EU policies page\n│   │   │   ├── about/       # About page\n│   │   │   ├── architecture/ # Technical architecture\n│   │   │   ├── privacy/     # Privacy policy\n│   │   │   ├── terms/       # Terms of service\n│   │   │   └── compliance/  # AI Act compliance\n│   │   ├── components/      # React components\n│   │   │   ├── chat/        # Chat interface components\n│   │   │   ├── ui/          # UI components\n│   │   │   └── layout/      # Layout components\n│   │   └── hooks/           # Custom React hooks\n│   ├── package.json\n│   └── Dockerfile\n├── training_docs/           # 24+ official EU policy documents\n├── scripts/                 # Utility scripts\n├── docker-compose.yml       # Multi-service orchestration\n├── .env.example\n└── README.md\n```\n\n## 🔧 Configuration\n\n### API Keys Setup\n\n1. **OpenAI API Key** (Required)\n   - Get from: https://platform.openai.com/api-keys\n   - Used for: Language processing, embeddings, speech-to-text, and agent reasoning\n\n2. **Tavily Search API** (Required)\n   - Get from: https://app.tavily.com/sign-up\n   - Used for: Real-time web search and verification of EU policy information\n\n### Environment Variables\n\nAll configuration is handled through environment variables. See `.env.example` for a complete list.\n\n### Database Configuration\n\nThe system uses PostgreSQL with the pgvector extension for storing document embeddings and metadata. 
Key configuration:\n- **Vector Dimensions**: 3072 (OpenAI text-embedding-3-large)\n- **Similarity Search**: Cosine similarity with 0.3 threshold\n- **Document Chunks**: 800 tokens with 300 token overlap\n- **Auto-initialization**: Database schema created automatically on first startup\n\n## 📊 Monitoring \u0026 Analytics\n\n### Agent Performance\n- Real-time agent status monitoring\n- Task execution metrics\n- Success/failure rates\n- Response time analytics\n\n### System Metrics\n- Query processing statistics\n- Vector database performance\n- User interaction patterns\n- Whisper transcription quality\n- Error tracking and logging\n\n### Health Checks\n```bash\n# Check system health\ncurl http://localhost:8000/health/\n\n# Test chat API endpoint\ncurl -X POST \"http://localhost:8000/api/chat/message\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"message\": \"What is the EU Green Deal?\", \"session_id\": \"test\", \"language\": \"en\", \"ai_consent\": {\"accepted\": true}}'\n\n# Test Whisper speech-to-text endpoint\ncurl -X POST http://localhost:8000/api/chat/speech-to-text \\\n  -F \"audio=@your_audio_file.wav\"\n```\n\n## 🔒 Privacy \u0026 Data Management\n\n### Multiple Chat Sessions\n- **Topic Organization**: Create separate conversations for different EU policy areas (e.g., \"CBAM Questions\", \"Circular Economy Research\")\n- **Session Management**: Easy switching between conversations via history menu with clear session titles\n- **Persistent Context**: Each session maintains its own conversation context and language preference\n- **Session History**: View and resume previous conversations with full message history\n\n### Local Data Storage \u0026 Privacy\n- **Browser-Only Storage**: All chat history saved in browser localStorage on user's device\n- **Zero External Storage**: Conversations never sent to external storage services or third-party analytics\n- **Privacy First**: Your data stays on your device until you clear browser cache or delete 
sessions\n- **GDPR Compliant**: Complete user control over personal conversation data with clear data retention policies\n- **Language Persistence**: Each session remembers detected language (24 EU languages supported)\n\n### Data Lifecycle\n```\nUser Query → Verdana Agent Processing → Response Generation → Browser localStorage\n     ↓                ↓                      ↓                     ↓\nLanguage Detection → Vector Search → Web Verification → Local Session Storage\n                 (No conversation data sent to external storage)\n```\n\n### What Data is Processed\n- **Local Storage**: Chat messages, session metadata, language preferences\n- **Processing Only**: Individual queries sent to OpenAI/Tavily for response generation\n- **Not Stored Externally**: Full conversation history, personal information, session data\n\n## 🌍 Multilingual Support\n\nThe Verdana agent supports all 24 official EU languages with intelligent language detection and session persistence:\n\n### Supported Languages\n**Germanic**: English (en), German (de), Dutch (nl), Swedish (sv), Danish (da)\n**Romance**: French (fr), Italian (it), Spanish (es), Portuguese (pt), Romanian (ro)\n**Slavic**: Polish (pl), Czech (cs), Slovak (sk), Bulgarian (bg), Croatian (hr), Slovenian (sl)\n**Baltic**: Lithuanian (lt), Latvian (lv), Estonian (et)\n**Other**: Finnish (fi), Hungarian (hu), Greek (el), Maltese (mt), Irish (ga)\n\n### Language Features\n- **Automatic Detection**: Detects language from your first message with high accuracy\n- **Session Persistence**: Maintains chosen language throughout individual conversations\n- **Technical Preservation**: Preserves official EU policy names and technical terminology\n- **Context Awareness**: Understands multilingual policy context and cross-references\n\n## 📈 Usage Examples\n\n### Basic Chat\n```python\n# Send a message to the chatbot\nimport requests\n\nresponse = requests.post(\"http://localhost:8000/api/chat/message\", json={\n    \"message\": 
\"What is the European Green Deal?\",\n    \"session_id\": \"demo-session\",\n    \"language\": \"en\"\n})\n\nprint(response.json())\n```\n\n### Voice Input (Speech-to-Text)\n```javascript\n// Send recorded audio to the transcription endpoint as multipart form data\n// (the endpoint expects a file field named 'audio', as in the curl example above)\nconst transcribeAudio = async (audioBlob) =\u003e {\n    const formData = new FormData();\n    formData.append('audio', audioBlob, 'recording.wav');\n    const response = await fetch('/api/chat/speech-to-text', {\n        method: 'POST',\n        body: formData\n    });\n    const result = await response.json();\n    console.log('Transcribed text:', result.text);\n};\n```\n\n### Session Management\n```javascript\n// Create new chat session\nconst newSession = () =\u003e {\n    const sessionId = `chat-${Date.now()}-${Math.random().toString(36).slice(2)}`;\n    localStorage.setItem('currentSession', sessionId);\n    return sessionId;\n};\n\n// Resume existing session\nconst resumeSession = (sessionId) =\u003e {\n    const history = JSON.parse(localStorage.getItem(`chat-${sessionId}`) || '[]');\n    return history;\n};\n```\n\n## 🤝 Contributing\n\nI welcome contributions! Please see my [Contributing Guidelines](CONTRIBUTING.md) for details.\n\n### Development Guidelines\n1. Follow PEP 8 for Python code\n2. Use TypeScript for frontend development\n3. Add tests for new features\n4. Update documentation as needed\n5. Follow conventional commit messages\n\n### Pull Request Process\n1. Fork the repository\n2. Create a feature branch\n3. Make your changes\n4. Add tests if applicable\n5. Submit a pull request\n\n## 📄 License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n## 🆘 Support\n\n### Getting Help\n- **Documentation**: Check this README and code comments\n- **Issues**: Report bugs and request features on GitHub\n- **Discussions**: Join our GitHub Discussions for questions\n\n### Common Issues\n\n**1. 
OpenAI API Errors**\n```bash\n# Check your API key is valid\ncurl -H \"Authorization: Bearer YOUR_API_KEY\" \\\n     https://api.openai.com/v1/models\n```\n\n**2. Database Connection Issues**\n```bash\n# Check PostgreSQL is running\ndocker compose ps postgres\n\n# Check database logs\ndocker compose logs postgres\n```\n\n**3. Frontend Build Errors**\n```bash\n# Clear Next.js cache\ncd frontend\nrm -rf .next\nnpm run build\n```\n\n## 🙏 Acknowledgments\n\n- **European Commission** for open access to official EU Green Deal policy documents\n- **PostgreSQL \u0026 pgvector** for high-performance vector database capabilities\n- **OpenAI** for powerful language models, embeddings, and Whisper speech recognition\n- **Tavily** for reliable web search and verification capabilities\n- **Next.js \u0026 Vercel** for an excellent frontend development framework\n- **FastAPI** for a high-performance Python web framework\n- **All Contributors** who help improve this project\n\n---\n\n**Built with ❤️ for a sustainable future**\n\nFor more information, visit our [GitHub repository](https://github.com/EmminiX/EU-Green_policies_chatbot).\n\n**Support this project**: [![Buy me a coffee](https://img.shields.io/badge/Buy%20me%20a%20coffee-☕-orange)](https://buymeacoffee.com/emmix)","funding_links":["https://buymeacoffee.com/emmix"],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Femminix%2Feu-green-agent","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Femminix%2Feu-green-agent","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Femminix%2Feu-green-agent/lists"}