{"id":29664889,"url":"https://github.com/foo290/neurotrace","last_synced_at":"2025-07-22T13:07:32.325Z","repository":{"id":304790054,"uuid":"991984673","full_name":"foo290/neurotrace","owner":"foo290","description":"A private repository for neurotrace project","archived":false,"fork":false,"pushed_at":"2025-07-15T15:11:35.000Z","size":1441,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-07-15T15:25:24.824Z","etag":null,"topics":["ai","artificial-intelligence","chatbot","generative-ai","graph-da","graph-retrieval","graphdatabase","knowledge-graph","langchain","llm","llm-inference","llm-memory","ml","neo4","rag","rag-pipeline","retriv"],"latest_commit_sha":null,"homepage":"https://foo290.github.io/neurotrace/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/foo290.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-05-28T12:50:41.000Z","updated_at":"2025-07-15T15:11:06.000Z","dependencies_parsed_at":"2025-07-15T15:54:32.739Z","dependency_job_id":"73088a4f-2c40-4d09-88f6-d71bc739ff9b","html_url":"https://github.com/foo290/neurotrace","commit_stats":null,"previous_names":["foo290/neurotrace"],"tags_count":4,"template":false,"template_full_name":null,"purl":"pkg:github/foo290/neurotrace","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/foo290%2Fneurotrace","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/foo290%2Fneurotrace/tags","releases_url":"https://repos.ecosyste.ms/api/v1/ho
sts/GitHub/repositories/foo290%2Fneurotrace/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/foo290%2Fneurotrace/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/foo290","download_url":"https://codeload.github.com/foo290/neurotrace/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/foo290%2Fneurotrace/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":266499470,"owners_count":23938872,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-07-22T02:00:09.085Z","response_time":66,"last_error":null,"robots_txt_status":null,"robots_txt_updated_at":null,"robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","artificial-intelligence","chatbot","generative-ai","graph-da","graph-retrieval","graphdatabase","knowledge-graph","langchain","llm","llm-inference","llm-memory","ml","neo4","rag","rag-pipeline","retriv"],"created_at":"2025-07-22T13:07:31.507Z","updated_at":"2025-07-22T13:07:32.307Z","avatar_url":"https://github.com/foo290.png","language":"Python","readme":"# Neurotrace\n\n[![PyPI version](https://img.shields.io/pypi/v/neurotrace)](https://pypi.org/project/neurotrace/)\n\nA hybrid memory library designed for LangChain agents, providing a dual-layer memory architecture that combines short-term buffer memory with a long-term hybrid RAG system.\n\n## Overview\n\nNeurotrace provides persistent, intelligent memory for conversational agents that improves over 
time and enables contextual understanding and recall. It combines vector-based and graph-based RAG (Retrieval Augmented Generation) systems to provide deeper and more accurate contextual reasoning.\n\n## 🎯 Key Features\n\n- **Dual-Layer Memory Architecture**\n  - Short-term buffer memory for immediate context\n  - Long-term hybrid RAG system for persistent storage\n\n- **Real-time Processing**\n  - Real-time recall during conversations\n  - Intelligent storage and compression\n\n- **Rich Message Structure**\n  - Custom metadata-rich message formats\n  - Support for filtering and semantic tracing\n\n- **Hybrid Retrieval System**\n  - Combined vector and graph-based RAG\n  - Enhanced contextual reasoning capabilities\n\n## Graph DB Integration (Graph RAG)\n\n![neo4j.png](https://github.com/foo290/neurotrace/blob/main/readme_assets/images/neo4j.png)\n\n## 🎯 Target Users\n\n- Developers building AI agents with LangChain\n- Researchers exploring memory augmentation in LLMs\n- Enterprises deploying context-aware AI assistants\n\n## Quick Start\n\n### Installation\n\n```bash\npip install neurotrace\n```\n\n### Complete Example\n\nA complete, runnable example is available in `examples/agent_example.py`. 
This example demonstrates:\n- Setting up a Neurotrace agent with both short-term and long-term memory\n- Configuring vector and graph storage\n- Implementing an interactive conversation loop\n- Monitoring memory usage\n\nTo run the example:\n```bash\n# First set up your environment variables\nexport NEO4J_URL=bolt://localhost:7687\nexport NEO4J_USERNAME=neo4j\nexport NEO4J_PASSWORD=your_password\nexport GOOGLE_API_KEY=your_google_api_key\n\n# Then run the example\npython examples/agent_example.py\n```\n\n### Required Environment Variables\n\n```bash\nNEO4J_URL=bolt://localhost:7687\nNEO4J_USERNAME=neo4j\nNEO4J_PASSWORD=your_password\nGOOGLE_API_KEY=your_google_api_key  # For Gemini LLM\n```\n\n## Technical Documentation\n\n### Core Schema\n\nThe `neurotrace.core.schema` module defines the fundamental data structures used throughout the project.\n\n### Message\n\nThe core Message class represents a single message in the system:\n\n```python\nfrom neurotrace.core.schema import Message, MessageMetadata, EmotionTag\n\nmessage = Message(\n    role=\"user\",           # Can be \"user\", \"ai\", or \"system\"\n    content=\"Hello!\",      # The message text content\n    metadata=MessageMetadata(\n        source=\"chat\",\n        emotions=EmotionTag(sentiment=\"positive\")\n    )\n)\n```\n\nKey features of Message:\n- Auto-generated UUID for each message\n- Automatic timestamp on creation\n- Type-safe role validation\n- Rich metadata support via MessageMetadata\n\n### Message Components\n\n#### EmotionTag\n\nRepresents the emotional context of a message:\n\n```python\nfrom neurotrace.core.schema import EmotionTag\n\nemotion = EmotionTag(\n    sentiment=\"positive\",  # Can be \"positive\", \"neutral\", or \"negative\"\n    intensity=0.8         # Optional float value indicating intensity\n)\n```\n\n#### MessageMetadata\n\nContains additional information and context about a message:\n\n```python\nfrom neurotrace.core.schema import MessageMetadata, EmotionTag\n\nmetadata = 
MessageMetadata(\n    token_count=150,                    # Number of tokens in the message\n    embedding=[0.1, 0.2, 0.3],         # Vector embedding for similarity search\n    source=\"chat\",                      # Source: \"chat\", \"web\", \"api\", or \"system\"\n    tags=[\"important\", \"follow-up\"],    # Custom tags\n    thread_id=\"thread_123\",            # Conversation thread identifier\n    user_id=\"user_456\",               # Associated user identifier\n    related_ids=[\"msg_789\"],          # Related message IDs\n    emotions=EmotionTag(sentiment=\"positive\"),  # Emotional context\n    compressed=False                   # Compression status\n)\n```\n\nEach field in MessageMetadata is optional and provides specific context:\n- `token_count`: Used for tracking token usage\n- `embedding`: Vector representation for similarity search\n- `source`: Indicates message origin\n- `tags`: Custom categorization\n- `thread_id`: Groups messages in conversations\n- `user_id`: Links messages to users\n- `related_ids`: Connects related messages\n- `emotions`: Captures emotional context\n- `compressed`: Indicates if content is compressed\n\n### Usage\n\n```python\n\"\"\"\nA complete example of implementing a Neurotrace-powered agent with both short-term and long-term memory.\n\"\"\"\n\nimport os\n\nfrom dotenv import load_dotenv\nfrom langchain.agents import AgentType, initialize_agent\nfrom langchain.vectorstores import Chroma\nfrom langchain_community.graphs import Neo4jGraph\nfrom langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAIEmbeddings\n\nfrom neurotrace.core.hippocampus.memory_orchestrator import MemoryOrchestrator\nfrom neurotrace.core.memory import NeurotraceMemory\nfrom neurotrace.core.schema import Message\nfrom neurotrace.core.tools.memory import memory_search_tool, save_memory_tool\nfrom neurotrace.core.tools.system import get_system_tools_list\n\n\ndef setup_agent():\n    \"\"\"Initialize and configure the Neurotrace agent with 
memory components.\"\"\"\n\n    # Load environment variables\n    load_dotenv()\n\n    # Initialize LLM\n    llm = ChatGoogleGenerativeAI(model=\"gemini-2.5-flash\", temperature=0.3)\n\n    # Setup short-term memory\n    memory = NeurotraceMemory(max_tokens=100, llm=llm)\n\n    # Setup vector store\n    embedding_model = GoogleGenerativeAIEmbeddings(model=\"models/embedding-001\")\n    vectorstore = Chroma(embedding_function=embedding_model, persist_directory=\".chromadb\")\n\n    # Setup graph database\n    graph_store = Neo4jGraph(\n        url=os.environ.get(\"NEO4J_URL\", \"bolt://localhost:7687\"),\n        username=os.environ.get(\"NEO4J_USERNAME\", \"neo4j\"),\n        password=os.environ.get(\"NEO4J_PASSWORD\", \"password\"),\n    )\n\n    # Initialize Memory Orchestrator\n    mem_orchestrator = MemoryOrchestrator(\n        llm=llm,\n        vector_store=vectorstore,\n        graph_store=graph_store,\n    )\n\n    # Setup memory tools\n    mem_save_tool = save_memory_tool(memory_orchestrator=mem_orchestrator)\n    mem_search_tool = memory_search_tool(memory_orchestrator=mem_orchestrator)\n\n    # Initialize Agent\n    agent = initialize_agent(\n        tools=[mem_search_tool, mem_save_tool, *get_system_tools_list()],\n        llm=llm,\n        agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,\n        memory=memory,\n        verbose=True,\n    )\n\n    return agent, memory\n\n\ndef run_agent():\n    \"\"\"Run the agent in an interactive conversation loop.\"\"\"\n\n    agent, memory = setup_agent()\n\n    print(\"Neurotrace Agent Ready. 
Type 'exit' to quit.\")\n    while True:\n        user_input = input(\"\\nYou: \")\n        if user_input.strip().lower() == \"exit\":\n            break\n\n        # Process user input\n        response = agent.invoke({\"input\": user_input})\n        output = response[\"output\"]\n        print(\"Agent:\", output)\n\n        # Wrap the exchange as structured Message objects\n        # (long-term persistence is handled by the agent's save_memory tool)\n        user_msg = Message(role=\"human\", content=user_input)\n        ai_msg = Message(role=\"ai\", content=output)\n\n        # Debug Memory State\n        print(\"\\n-- Memory State --\")\n        print(\"STM Messages:\", len(memory._stm.get_messages()))\n        print(\"STM Tokens:\", memory._stm.total_tokens())\n        print(\"------------------\\n\")\n\n\nif __name__ == \"__main__\":\n    run_agent()\n\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffoo290%2Fneurotrace","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ffoo290%2Fneurotrace","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffoo290%2Fneurotrace/lists"}