{"id":25646523,"url":"https://github.com/ideavision/vc_assistant","last_synced_at":"2025-11-08T23:04:34.645Z","repository":{"id":237688167,"uuid":"795042391","full_name":"ideavision/vc_assistant","owner":"ideavision","description":"Venture Capital firms Analyzer with LLM","archived":false,"fork":false,"pushed_at":"2024-05-02T13:39:30.000Z","size":1957,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"master","last_synced_at":"2025-07-06T00:07:04.044Z","etag":null,"topics":["ai","api","fastapi","generative-ai","gpt","langchain","llm","nlp","python","qdrant","rag","scraper-api","vector-database","venture-capital"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ideavision.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-05-02T13:22:58.000Z","updated_at":"2025-04-17T08:48:32.000Z","dependencies_parsed_at":null,"dependency_job_id":"a9686ec8-ca8d-4c74-b02b-14b2d924fedb","html_url":"https://github.com/ideavision/vc_assistant","commit_stats":null,"previous_names":["ideavision/vc_assistant"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/ideavision/vc_assistant","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ideavision%2Fvc_assistant","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ideavision%2Fvc_assistant/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ideavision%2Fvc_assistant/releases","manifests_url":"https://repos.e
cosyste.ms/api/v1/hosts/GitHub/repositories/ideavision%2Fvc_assistant/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ideavision","download_url":"https://codeload.github.com/ideavision/vc_assistant/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ideavision%2Fvc_assistant/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":283431127,"owners_count":26834772,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-11-08T02:00:06.281Z","response_time":57,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","api","fastapi","generative-ai","gpt","langchain","llm","nlp","python","qdrant","rag","scraper-api","vector-database","venture-capital"],"created_at":"2025-02-23T10:27:50.937Z","updated_at":"2025-11-08T23:04:34.639Z","avatar_url":"https://github.com/ideavision.png","language":"Python","readme":"# 🤖 VC_assistant (Venture Capital Assistant)\n\n\n\n# Overview\nVC_assistant is an adaptive intelligence chatbot designed to facilitate the creation of enterprise applications with advanced conversational capabilities: scraping Venture Capital websites, embedding and storing the content in a vector database, extracting key information as a JSON object, and finding similar firms. 
It leverages a tool-using agentic architecture and retrieval-augmented generation (RAG), integrating state-of-the-art open-source frameworks such as LangChain and FastAPI. This document outlines the architecture, components, and use cases of VC_Assistant, providing a comprehensive understanding of its functionality and deployment.\n\n\n# Main Features:\n### Given a VC website URL as input from the user, the Generative AI assistant can:\n\u003cbr /\u003e\n\n![info](./img/localhost_8000_docs.png)\n\u003cbr\u003e\n## Scrape the home page of the website and store the content in a vector database of your choice.\n\n![info](./img/scraper.png)\n\n![info](./img/process-scrape2vector.png)\n\n\u003cbr\u003e\n\n## Extract the following information and show it to the user as a JSON object: VC name, contacts, industries that they invest in, and investment rounds that they participate in or lead.\n\n## Compare the VC website's content to the other VC websites stored in the database.\n![info](./img/extract-info2json.png)\n\u003cbr\u003e\u003cbr\u003e\u003cbr\u003e\n\n\n# Prerequisites\n\nTo use this project, you will need:\n\n- Docker and Docker Compose installed\n- Python 3.7+\n- An OpenAI API key\n\n# Setup\n\nTo set up the project:\n\n1. Clone this repository to your local machine.\n\n2. Rename `key.env.example` to `key.env` and add your OpenAI API key.\n\n3. Review `config.yml` and choose `openai` or `local` for your `Embedding_Type`.\n\n4. In `docker-compose.yml`, update the `volumes` path for `VC_ASSISTANT_QDRANT` to a local folder where you want persistent storage for the vector database.\n\n5. Create the directories needed for persistent storage:\n   ```bash\n   mkdir -p .data/qdrant/\n   ```\n\n6. Build the Docker images:\n   ```bash\n   docker-compose build\n   ```\n7. 
Start the services:\n   ```bash\n   docker-compose up -d\n   ```\nThe services will now be running.\n\n\n\n\n## **Deployment and Usage**\nOnce the Docker containers are up and running, you can start interacting with:\n\n- The **interactive Swagger docs** at [http://localhost:8000/docs](http://localhost:8000/docs)\n- The **Qdrant Web Interface** at [http://localhost:6333/dashboard](http://localhost:6333/dashboard)\n\n### Build the default Collection\n1. **Scrape Documents:**\n    - Open the interactive Swagger docs at [http://localhost:8000/docs](http://localhost:8000/docs).\n    - Use the `scraper` endpoint to scrape content from a specified URL. You will need to provide the URL you want to scrape in the request body.\n    - The `scraper` endpoint will return the scraped content, which will be processed in the next step.\n\n\n2. **Create a Vector Index:**\n    - Use the `process-scrape2vector` endpoint to create a vector index from the scraped content.\n    - If the `default` collection does not exist, the `process-scrape2vector` endpoint will create it for you.\n    - This endpoint will process the scraped documents, create a vector index, and load it into Qdrant.\n\n\n\n\n## Architecture Overview\n\nThe VC_Assistant architecture consists of the following key components:\n\n- FastAPI - High-performance REST API framework. 
Handles HTTP requests and routes them to application logic.\n- Qdrant - Vector database for storing document embeddings and enabling similarity search.\n- AgentHandler - Orchestrates the initialization and execution of the conversational agent.\n- Scraper - A tool that scrapes a web page and converts it to markdown.\n- Loader - A tool that loads content from the scraped_data directory into a VectorStoreIndex.\n- Tools - Custom tools that extend the capabilities of the agent.\n\n## Infrastructure\nLet's take a closer look at some of the key VC_assistant infrastructure components:\n\n### FastAPI\nFastAPI provides a robust web framework for handling the API routes and HTTP requests/responses.\n\nSome key advantages:\n\n- Built on modern Python standards like type hints and ASGI.\n- Extremely fast - performance on par with NodeJS and Go frameworks.\n- Automatic interactive docs using OpenAPI standards.\n\n\n### Qdrant\nQdrant is a vector database optimized for ultra-fast similarity search across large datasets. It is used in this project to store and index document embeddings, enabling the bot to quickly find relevant documents based on a search query or conversation context.\n\n\n\n### Document Scraping Section\n\nThe `scraper` module, located in `/app/src/scraper/scraper_main.py`, serves as a robust utility for extracting content from web pages and converting it into structured markdown format. This module is integral for enabling the framework to access and utilize information from a wide range of web sources. 
Below is a succinct overview of its core functionality and workflow for developers integrating this module.\n\n#### Components:\n- **WebScraper Class:**\n  - Inherits from the base Scraper class and implements the Singleton pattern to ensure a unique instance.\n  - Orchestrates the entire scraping process, from fetching and parsing to saving the content.\n  - Leverages `ContentParser` to extract and convert meaningful data from HTML tags into markdown format.\n\n- **ContentParser Class:**\n  - Designed to parse and convert meaningful content from supported HTML tags into markdown format.\n  - Supports a variety of HTML tags including paragraphs, headers, list items, links, inline code, and code blocks.\n\n#### Workflow:\n\n1. **URL Validation:**\n   - The provided URL is validated to ensure its correctness and accessibility.\n   - If the URL is invalid, the process is terminated and an error message is logged.\n\n2. **Content Fetching:**\n   - Content from the validated URL is fetched using HTTP requests.\n   - Random user agents are used to mimic genuine user activity and avoid potential blocking by web servers.\n   - If content fetching fails, the process is halted and an error message is logged.\n\n3. **Content Parsing:**\n   - The fetched content is parsed using BeautifulSoup, and the `ContentParser` class extracts the meaningful data.\n   - The parsed data includes the title, metadata, and the content in markdown format.\n\n4. **File Saving:**\n   - The parsed content is saved to a file whose filename is generated from a hash of the URL.\n   - The file is stored in a pre-configured data directory.\n   - If file saving fails, an error message is logged.\n\n5. 
**Result Return:**\n   - Upon successful completion of the scraping process, a success message and the filepath of the saved content are returned.\n   - If any step in the process fails, an appropriate error message is returned.\n\n#### Usage:\nDevelopers can initiate the scraping process by invoking the `run_web_scraper(url)` function with the desired URL. This function initializes a `WebScraper` instance and triggers the scraping process, returning a dictionary with the outcome: a message indicating success or failure and the location where the scraped data has been saved.\n\n#### Example:\n```python\n# Scrape a page and report where the converted markdown was saved\nresult = run_web_scraper(\"http://example.com\")\nif result and result.get(\"message\") == \"Scraping completed successfully\":\n    print(f\"Scraping complete! Saved to {result['data']}\")\nelse:\n    print(result[\"message\"])\n```\n\n\n\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fideavision%2Fvc_assistant","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fideavision%2Fvc_assistant","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fideavision%2Fvc_assistant/lists"}