{"id":22980803,"url":"https://github.com/aiafterdark/lors","last_synced_at":"2025-04-22T10:38:30.049Z","repository":{"id":259746338,"uuid":"879329814","full_name":"AIAfterDark/LORS","owner":"AIAfterDark","description":"LORS (Local O1 Reasoning System) - A distributed reasoning framework using local LLMs for advanced prompt analysis and response generation through parallel processing pipelines. Implements multi-agent architecture with dynamic scaling for complex query processing.","archived":false,"fork":false,"pushed_at":"2024-10-27T16:41:22.000Z","size":15,"stargazers_count":5,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-04-22T10:38:04.965Z","etag":null,"topics":["artificial-intelligence","async-programming","deep-learning","distributed-systems","llm","machine-learning","multi-agent-systems","natural-language-processing","ollama","parallel-processing","prompt-engineering","python"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/AIAfterDark.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-10-27T16:14:40.000Z","updated_at":"2025-02-28T01:32:08.000Z","dependencies_parsed_at":"2024-10-27T19:36:27.148Z","dependency_job_id":"4a3cc0c7-8a54-4914-8239-1317c69af563","html_url":"https://github.com/AIAfterDark/LORS","commit_stats":null,"previous_names":["aiafterdark/lors"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AIAfterDark%2FLORS"
,"tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AIAfterDark%2FLORS/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AIAfterDark%2FLORS/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AIAfterDark%2FLORS/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/AIAfterDark","download_url":"https://codeload.github.com/AIAfterDark/LORS/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":250222048,"owners_count":21394807,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["artificial-intelligence","async-programming","deep-learning","distributed-systems","llm","machine-learning","multi-agent-systems","natural-language-processing","ollama","parallel-processing","prompt-engineering","python"],"created_at":"2024-12-15T01:45:04.101Z","updated_at":"2025-04-22T10:38:29.974Z","avatar_url":"https://github.com/AIAfterDark.png","language":"Python","readme":"# Local O1 Reasoning System (LORS)\n\n## Abstract\nThe Local O1 Reasoning System (LORS) is an advanced distributed reasoning framework that implements a novel approach to prompt analysis and response generation using local Large Language Models (LLMs). 
Inspired by OpenAI's o1 architecture, LORS utilizes a multi-agent system with dynamic scaling capabilities to process complex queries through parallel processing pipelines of varying computational depths.\n\n## System Architecture\n\n### Core Components\n\n```\nLORS Architecture\n├── Prompt Analysis Engine\n│   ├── Complexity Analyzer\n│   ├── Domain Classifier\n│   └── Cognitive Load Estimator\n├── Agent Management System\n│   ├── Fast Reasoning Agents (llama3.2)\n│   └── Deep Reasoning Agents (llama3.1)\n├── Response Synthesis Pipeline\n│   ├── Thought Aggregator\n│   ├── Context Enhancer\n│   └── Final Synthesizer\n└── Response Management System\n    ├── Intelligent Naming\n    └── Structured Storage\n```\n\n### Technical Specifications\n\n#### 1. Prompt Analysis Engine\nThe system employs a sophisticated prompt analysis mechanism that evaluates:\n\n- **Linguistic Complexity Metrics**\n  - Sentence structure depth (dependency parsing)\n  - Technical term density\n  - Named entity recognition\n  - Cognitive load estimation\n\n- **Domain-Specific Analysis**\n  ```python\n  domain_complexity = {\n      'technical': ['algorithm', 'system', 'framework'],\n      'scientific': ['hypothesis', 'analysis', 'theory'],\n      'mathematical': ['equation', 'formula', 'calculation'],\n      'business': ['strategy', 'market', 'optimization']\n  }\n  ```\n\n- **Complexity Scoring Algorithm**\n  ```mathematics\n  C = Σ(wi * fi)\n  where:\n  C = total complexity score\n  wi = weight of feature i\n  fi = normalized value of feature i\n  ```\n\n#### 2. Dynamic Agent Scaling\n\nThe system implements an adaptive scaling mechanism based on prompt complexity:\n\n| Complexity Score | Fast Agents | Deep Agents | Use Case |\n|-----------------|-------------|-------------|-----------|\n| 80-100 | 5 | 3 | Complex technical analysis |\n| 60-79  | 4 | 2 | Moderate complexity |\n| 40-59  | 3 | 2 | Standard analysis |\n| 0-39   | 2 | 1 | Simple queries |\n\n#### 3. 
Agent Types and Characteristics\n\n**Fast Reasoning Agents (llama3.2)**\n- Optimized for rapid initial analysis\n- Lower token limit for quicker processing\n- Focus on key concept identification\n- Parameters:\n  ```python\n  {\n      'temperature': 0.7,\n      'max_tokens': 150,\n      'response_time_target': '\u003c 2s'\n  }\n  ```\n\n**Deep Reasoning Agents (llama3.1)**\n- Designed for thorough analysis\n- Higher token limit for comprehensive responses\n- Focus on relationships and implications\n- Parameters:\n  ```python\n  {\n      'temperature': 0.9,\n      'max_tokens': 500,\n      'response_time_target': '\u003c 5s'\n  }\n  ```\n\n## Implementation Details\n\n### 1. Asynchronous Processing Pipeline\n```python\nasync def process_prompt(prompt):\n    complexity_analysis = analyze_prompt_complexity(prompt)\n    fast_thoughts = await process_fast_agents(prompt)\n    enhanced_context = synthesize_initial_thoughts(fast_thoughts)\n    deep_thoughts = await process_deep_agents(enhanced_context)\n    return synthesize_final_response(fast_thoughts, deep_thoughts)\n```\n\n### 2. Complexity Analysis Implementation\nThe system uses a weighted feature analysis approach:\n\n```python\ndef calculate_complexity_score(features):\n    weights = {\n        'sentence_count': 0.1,\n        'avg_sentence_length': 0.15,\n        'subjectivity': 0.1,\n        'named_entities': 0.15,\n        'technical_term_count': 0.2,\n        'domain_complexity': 0.1,\n        'cognitive_complexity': 0.1,\n        'dependency_depth': 0.1\n    }\n    return weighted_sum(features, weights)\n```\n\n### 3. Response Synthesis\nThe system implements a three-phase synthesis approach:\n1. Fast Analysis Aggregation\n2. Context Enhancement\n3. 
Deep Analysis Integration\n\n## Performance Characteristics\n\n### Benchmarks\n- Average response time: 2-8 seconds\n- Memory usage: 4-8GB\n- GPU utilization: 60-80%\n\n## Response Storage\nResponses are stored in JSON format:\n```json\n{\n    \"prompt\": \"original_prompt\",\n    \"timestamp\": \"ISO-8601 timestamp\",\n    \"complexity_analysis\": {\n        \"score\": 75.5,\n        \"features\": {...}\n    },\n    \"result\": {\n        \"fast_analysis\": [...],\n        \"deep_analysis\": [...],\n        \"final_synthesis\": \"...\"\n    }\n}\n```\n\n## Installation and Usage\n\n### Prerequisites\nThe Python dependencies (`ollama`, `rich`, `textblob`, `spacy`, `nltk`) are installed from `requirements.txt` during the environment setup below.\n\n1. **Install Ollama**\n   ```bash\n   # For Linux\n   curl -L https://ollama.com/download/ollama-linux-amd64 -o ollama\n   chmod +x ollama\n   ./ollama serve\n\n   # For Windows\n   # Download and install from https://ollama.com/download/windows\n   ```\n\n2. **Install Required Models**\n   ```bash\n   # Install the fast reasoning model (3B Model - fast thought)\n   ollama pull llama3.2\n\n   # Install the deep reasoning model (8B Model - deep thought)\n   ollama pull llama3.1\n\n   # Verify installations\n   ollama list\n   ```\n   Expected output:\n   ```\n   NAME                    ID              SIZE      MODIFIED      \n   llama3.2:latest    6c2d00dcdb27    2.1 GB    4 seconds ago    \n   llama3.1:latest    3c46ab11d5ec    4.9 GB    6 days ago\n   ```\n\n3. 
**Set Up Python Environment**\n   ```bash\n   # Create virtual environment\n   python -m venv lors-env\n\n   # Activate environment\n   # On Windows\n   lors-env\\Scripts\\activate\n   # On Unix or MacOS\n   source lors-env/bin/activate\n\n   # Install requirements\n   pip install -r requirements.txt\n\n   # Install spaCy language model\n   python -m spacy download en_core_web_sm\n   ```\n\n### Basic Usage\n```bash\n# Simple query\npython local-o1-reasoning.py -p \"Explain the concept of quantum entanglement\"\n\n# Complex analysis\npython local-o1-reasoning.py -p \"Analyze the implications of quantum computing on modern cryptography systems and propose potential mitigation strategies\"\n```\n\n### Troubleshooting\n\n1. **Model Loading Issues**\n   ```bash\n   # Verify model status\n   ollama list\n\n   # Restart the Ollama server if needed: stop the running\n   # `ollama serve` process (e.g. with Ctrl+C), then start it again\n   ollama serve\n   ```\n\n2. **GPU Memory Issues**\n   - Ensure no other GPU-intensive applications are running\n   - Monitor GPU usage:\n   ```bash\n   nvidia-smi -l 1\n   ```\n\n3. **Common Error Solutions**\n   - If models fail to load: re-pull with `ollama pull [model_name]`\n   - If out of CUDA memory: reduce the concurrent agent count in the configuration\n   - If responses fail to save: check write permissions on the `responses/` directory\n\n### Directory Structure\n```\nLORS/\n├── local-o1-reasoning.py\n├── requirements.txt\n├── responses/\n│   └── [automated response files]\n└── README.md\n```\n\n## License\nMIT License\n\n## Contributing\nWe welcome contributions! Please see our contributing guidelines for more information.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Faiafterdark%2Flors","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Faiafterdark%2Flors","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Faiafterdark%2Flors/lists"}