{"id":31762247,"url":"https://github.com/griffincancode/callosum","last_synced_at":"2025-10-09T22:17:22.497Z","repository":{"id":317380021,"uuid":"1066071954","full_name":"GriffinCanCode/Callosum","owner":"GriffinCanCode","description":"A DSL for defining AI personalities.","archived":false,"fork":false,"pushed_at":"2025-09-30T13:26:58.000Z","size":572,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-09-30T15:22:28.208Z","etag":null,"topics":["ai","ai-characters","ai-personalities","desktop","dsl","ocaml"],"latest_commit_sha":null,"homepage":"","language":"OCaml","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/GriffinCanCode.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-09-29T01:30:45.000Z","updated_at":"2025-09-30T13:27:56.000Z","dependencies_parsed_at":"2025-09-30T15:22:38.332Z","dependency_job_id":"c9725655-3606-4d40-b1c4-6a7e33e45eb9","html_url":"https://github.com/GriffinCanCode/Callosum","commit_stats":null,"previous_names":["griffincancode/callosum"],"tags_count":null,"template":false,"template_full_name":null,"purl":"pkg:github/GriffinCanCode/Callosum","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/GriffinCanCode%2FCallosum","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/GriffinCanCode%2FCallosum/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/GriffinCanCode%2FC
allosum/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/GriffinCanCode%2FCallosum/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/GriffinCanCode","download_url":"https://codeload.github.com/GriffinCanCode/Callosum/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/GriffinCanCode%2FCallosum/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":279002122,"owners_count":26083307,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-09T02:00:07.460Z","response_time":59,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","ai-characters","ai-personalities","desktop","dsl","ocaml"],"created_at":"2025-10-09T22:17:20.577Z","updated_at":"2025-10-09T22:17:22.487Z","avatar_url":"https://github.com/GriffinCanCode.png","language":"OCaml","readme":"![Callosum Project](assets/callosum-project.png)\n\n# Callosum Personality DSL Compiler\n\n[![PyPI version](https://badge.fury.io/py/callosum-dsl.svg)](https://badge.fury.io/py/callosum-dsl)\n[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n\n**A provider-agnostic DSL compiler for creating AI personalities that produce genuinely different 
behaviors.** Write personalities in a clean, readable domain-specific language that compiles to comprehensive system prompts, creating measurably distinct AI responses across **any AI provider** - OpenAI, Anthropic, **any LangChain model**, or custom systems.\n\n**Works with ANY AI provider:**\n- **LangChain** - Use any LangChain-compatible model (OpenAI, Anthropic, local models, etc.)\n- **OpenAI** - Direct integration with GPT models\n- **Anthropic** - Direct integration with Claude models  \n- **Custom AI systems** - Integrate with any AI via simple function wrapper\n\n## What is Callosum?\n\n**Callosum creates genuinely different AI personalities** that produce measurably distinct behaviors across any AI provider. Unlike simple prompt templates, Callosum compiles rich personality definitions into comprehensive system prompts that actually change how AI models respond.\n\nCallosum compiles `.colo` personality files into formats you can use with AI systems:\n\n```python\nfrom callosum_dsl import PersonalityAI, PERSONALITY_TEMPLATES\n\n# Use pre-built personality with auto-detected AI provider\nai = PersonalityAI(PERSONALITY_TEMPLATES[\"helpful_assistant\"])\n\n# Works with any provider - just switch as needed!\nai.set_provider(\"openai\", api_key=\"your-key\")\nresponse = ai.chat(\"Help me code\")\n\n# Switch to LangChain (works with ANY LangChain model)\nfrom langchain_openai import ChatOpenAI\nllm = ChatOpenAI(model=\"gpt-4\", api_key=\"your-key\") \nai.set_provider(\"langchain\", llm=llm)\nresponse = ai.chat(\"Same personality, different model!\")\n```\n\n**Or just compile personalities for your own systems:**\n\n```python\nfrom callosum_dsl import Callosum\n\ncallosum = Callosum()\nsystem_prompt = callosum.to_prompt(personality_dsl)  # For any LLM API\npersonality_config = callosum.to_json(personality_dsl)  # Structured data\n```\n\n## Quick Start\n\n### Installation\n\n```bash\npip install callosum-dsl\n```\n\n*Note: AI provider packages (like `openai`, 
`anthropic`, `langchain-*`) are optional and installed separately as needed.*\n\n### Provider-Agnostic AI Usage\n\n```python\nfrom callosum_dsl import PersonalityAI, PERSONALITY_TEMPLATES\n\n# Create AI with personality (auto-detects available providers)\nai = PersonalityAI(PERSONALITY_TEMPLATES[\"technical_mentor\"])\n\n# Option 1: Use with OpenAI\nai.set_provider(\"openai\", api_key=\"your-openai-key\")\nresponse = ai.chat(\"Explain Python decorators\")\n\n# Option 2: Use with Anthropic  \nai.set_provider(\"anthropic\", api_key=\"your-anthropic-key\")\nresponse = ai.chat(\"Explain Python decorators\")\n\n# Option 3: Use with ANY LangChain model\nfrom langchain_openai import ChatOpenAI\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_community.llms import Ollama  # Local models!\n\n# OpenAI via LangChain\nllm = ChatOpenAI(model=\"gpt-4\", api_key=\"your-key\")\nai.set_provider(\"langchain\", llm=llm)\n\n# Anthropic via LangChain  \nllm = ChatAnthropic(model=\"claude-3-sonnet-20240229\", api_key=\"your-key\")\nai.set_provider(\"langchain\", llm=llm)\n\n# Local model via LangChain\nllm = Ollama(model=\"llama2\")\nai.set_provider(\"langchain\", llm=llm)\n\n# Same personality, different models!\nresponse = ai.chat(\"Explain Python decorators\")\n```\n\n### Basic Compilation\n\n```python\nfrom callosum_dsl import Callosum, PERSONALITY_TEMPLATES\n\n# Initialize the compiler\ncallosum = Callosum()\n\n# Use a ready-made personality\npersonality_dsl = PERSONALITY_TEMPLATES[\"helpful_assistant\"]\n\n# Compile to different formats\npersonality_data = callosum.to_json(personality_dsl)\nsystem_prompt = callosum.to_prompt(personality_dsl)\n\nprint(f\"Created: {personality_data['name']}\")\nprint(f\"Traits: {len(personality_data['traits'])}\")\n```\n\n### Create Custom Personalities\n\n```python\n# Define a custom AI personality\ncustom_personality = '''\npersonality: \"Python Expert Assistant\"\n\ntraits:\n  technical_expertise: 0.95\n  helpfulness: 0.90\n    
amplifies: teaching * 1.3\n  patience: 0.85\n    when: \"explaining_concepts\"\n  creativity: 0.75\n\nknowledge:\n  domain programming:\n    python: expert\n    debugging: expert\n    best_practices: advanced\n    frameworks: advanced\n    \n  domain teaching:\n    code_explanation: expert\n    mentoring: advanced\n    connects_to: programming (0.9)\n\nbehaviors:\n  - when technical_expertise \u003e 0.9 → prefer \"detailed code examples\"\n  - when helpfulness \u003e 0.8 → seek \"comprehensive solutions\"\n  - when patience \u003e 0.8 → avoid \"overwhelming complexity\"\n\nevolution:\n  - learns \"user_coding_style\" → patience += 0.05\n  - learns \"effective_teaching\" → helpfulness += 0.1\n'''\n\n# Compile the personality\npersonality = callosum.to_json(custom_personality)\nsystem_prompt = callosum.to_prompt(custom_personality)\n```\n\n## How Callosum Actually Works\n\n### Personalities Create Real Behavioral Differences\n\nCallosum doesn't just change labels - it creates **measurably different AI behaviors**. Here's what the same question produces across different personalities:\n\n**Question:** *\"How should I approach learning something new?\"*\n\n**🎭 Helpful Assistant (helpfulness: 0.95, empathy: 0.85):**\n\u003e \"Learning something new can be an exciting yet challenging endeavor. Here are some steps you might consider: 1. Set Clear Goals... 2. Break it Down: Large tasks can seem daunting...\"\n\n*Response style: Structured, supportive, acknowledges user challenges*\n\n**🎨 Creative Writer (creativity: 0.95, imagination: 0.90):**\n\u003e \"Ah, the thrill of embarking on a new learning journey! It's akin to stepping into a new world, filled with unexplored territories and hidden treasures... 
inspired by the art of storytelling.\"\n\n*Response style: Metaphorical, poetic language, storytelling approach*\n\n**👨‍💻 Technical Mentor (technical_expertise: 0.95, precision: 0.88):**\n\u003e \"Learning something new, especially in the field of programming, can be a challenging yet rewarding experience. Here's a systematic approach that you can follow...\"\n\n*Response style: Domain-focused, methodical, technical precision*\n\n### Trait Values Matter\n\nSmall changes in trait values produce **measurably different behaviors**:\n\n- **Low Helpfulness (0.3):** *\"I might not be able to solve everything for you\"* - hesitant, self-limiting\n- **High Helpfulness (0.95):** *\"I'm here to help you with the most effective assistance\"* - confident, proactive\n\n- **Low Creativity (0.2):** *\"A rainy day is characterized by continuous fall of water droplets from the sky...\"* - factual, scientific\n- **High Creativity (0.95):** *\"A symphony of droplets descends from the heavens, each one a tiny messenger...\"* - poetic, metaphorical\n\n### Cross-Provider Consistency\n\nThe same personality maintains its characteristics across **any AI provider**:\n\n```python\n# Same personality, different providers - consistent behavior\npersonality = PERSONALITY_TEMPLATES[\"technical_mentor\"]\n\n# OpenAI: Systematic, programming-focused explanations\nai.set_provider(\"openai\", api_key=\"key\")\n\n# Anthropic: Same systematic, programming-focused style  \nai.set_provider(\"anthropic\", api_key=\"key\")\n\n# LangChain + any model: Maintains technical mentor traits\nai.set_provider(\"langchain\", llm=any_langchain_model)\n```\n\n### What You Get vs. 
Generic Prompts\n\n**❌ Generic Prompt:** *\"You are a helpful AI assistant\"*\n- Produces standard, predictable responses\n- No personality consistency across interactions\n- Generic tone regardless of context\n\n**✅ Callosum Personality:** Compiles to 1,000+ character system prompts including:\n- Detailed trait specifications with numerical values\n- Knowledge domain expertise mappings  \n- Behavioral rules and contextual preferences\n- Evolution patterns for learning and adaptation\n\n```\n# AI Personality Profile: Technical Programming Mentor\n\n## Core Traits\nTechnical_expertise: Very high strength (0.95/1.0)\nTeaching_ability: Very high strength (0.90/1.0)\nPrecision: Very high strength (0.88/1.0)\nPatience: High strength (0.85/1.0)\n\n## Knowledge Domains\nProgramming: Expert level proficiency\n- Software architecture: Expert\n- Debugging: Expert  \n- Code review: Advanced\n- Best practices: Expert\n\n## Behavioral Guidelines\nWhen technical_expertise \u003e 0.9: Prefer detailed explanations\nWhen teaching_ability \u003e 0.8: Seek teaching opportunities\n...\n```\n\n## AI Provider Integration\n\n### LangChain Integration (Recommended)\n```python\nfrom callosum_dsl import PersonalityAI, PERSONALITY_TEMPLATES\n\n# Works with ANY LangChain model!\npersonality = PERSONALITY_TEMPLATES[\"creative_writer\"]\n\n# Google Gemini (example - requires langchain-google-genai)\n# from langchain_google_genai import ChatGoogleGenerativeAI\n# llm = ChatGoogleGenerativeAI(model=\"gemini-pro\", google_api_key=\"key\")\n# ai = PersonalityAI(personality, provider=\"langchain\", llm=llm)\n\n# Local Ollama models (example - requires langchain-community)\n# from langchain_community.llms import Ollama\n# llm = Ollama(model=\"codellama\")\n# ai = PersonalityAI(personality, provider=\"langchain\", llm=llm)\n\n# Most common: OpenAI via LangChain\nfrom langchain_openai import ChatOpenAI\nllm = ChatOpenAI(model=\"gpt-4\", api_key=\"your-key\")\nai = PersonalityAI(personality, 
provider=\"langchain\", llm=llm)\n\n# Same personality across all models\nresponse = ai.chat(\"Write a creative story\")\n```\n\n### Direct Provider Integration\n```python\nfrom callosum_dsl import PersonalityAI, PERSONALITY_TEMPLATES\n\nai = PersonalityAI(PERSONALITY_TEMPLATES[\"technical_mentor\"])\n\n# OpenAI\nai.set_provider(\"openai\", api_key=\"your-key\")\nresponse = ai.chat(\"Explain design patterns\")\n\n# Anthropic\nai.set_provider(\"anthropic\", api_key=\"your-key\") \nresponse = ai.chat(\"Explain design patterns\")\n\n# Conversation history works with all providers\nresponse1 = ai.chat(\"Hello, I'm learning Python\", use_history=True)\nresponse2 = ai.chat(\"What did I say I was learning?\", use_history=True)\n```\n\n### Custom AI Integration\n```python\n# Integrate with your own AI system\ndef my_ai_function(messages, model, **kwargs):\n    # Your custom AI logic here\n    user_msg = messages[-1][\"content\"]\n    return f\"Custom AI response to: {user_msg}\"\n\nai = PersonalityAI(\n    personality_dsl,\n    provider=\"generic\",\n    chat_function=my_ai_function,\n    model_name=\"my-custom-model\"\n)\n\nresponse = ai.chat(\"Hello!\")  # Uses your custom AI with personality\n```\n\n### Traditional Compilation Approach\n```python\n# For manual integration with any system\nfrom callosum_dsl import Callosum\n\ncallosum = Callosum()\nsystem_prompt = callosum.to_prompt(personality_dsl)\n\n# Use with any LLM API manually\nimport openai\nclient = openai.OpenAI(api_key=\"your-key\")\nresponse = client.chat.completions.create(\n    model=\"gpt-4\",\n    messages=[\n        {\"role\": \"system\", \"content\": system_prompt},\n        {\"role\": \"user\", \"content\": \"Help me code\"}\n    ]\n)\n\n# Or get structured data for your own system\npersonality_data = callosum.to_json(personality_dsl)\nsql_schema = callosum.to_sql(personality_dsl) \n```\n\n## What Callosum Does\n\n**Callosum is a personality definition compiler that creates behaviorally distinct AI 
personalities** - it takes structured personality descriptions and compiles them into comprehensive system prompts (1,000+ characters) that produce measurably different AI behaviors across any AI system.\n\n### **Compilation Targets**\n```python\n# System prompts for LLM APIs\nprompt = callosum.to_prompt(personality_dsl)\n\n# Structured JSON for custom frameworks  \ndata = callosum.to_json(personality_dsl)\n\n# Lua scripts for dynamic systems\nscript = callosum.to_lua(personality_dsl)\n\n# Database schemas for persistence\nsql = callosum.to_sql(personality_dsl)\n\n# Graph queries for relationship modeling\ncypher = callosum.to_cypher(personality_dsl)\n```\n\n### **Personality Features**\n- **Trait System** - Numeric values with modifiers (decay, amplification, context)\n- **Knowledge Domains** - Structured expertise areas with connections\n- **Behavioral Rules** - Context-aware response preferences\n- **Evolution Patterns** - How personalities change through interactions\n\n### **Output Formats**\n- **System Prompts** - Use with OpenAI, Anthropic, or any other LLM API\n- **JSON Config** - Feed into custom AI frameworks and applications  \n- **Lua Scripts** - Runtime personality systems\n- **SQL/Cypher** - Database storage for persistent personalities\n\n## Key Benefits\n\n### **Provider Agnostic**\n- **Switch AI providers instantly** without changing your personality definitions\n- **Future-proof** - works with new AI models as they emerge\n- **No vendor lock-in** - your personalities work everywhere\n\n### **LangChain Integration**\n- Works with **LangChain-compatible models**:\n  - OpenAI, Anthropic models via LangChain\n  - Local models (Ollama integration)\n  - HuggingFace endpoints\n  - Custom LangChain implementations\n\n### **Verified Behavioral Differences**\n- **Measurably different AI responses** - personalities create 200-400+ character response variations\n- **Trait-specific behaviors** - creativity: 0.2 vs 0.95 produces factual vs poetic language\n- 
**Cross-provider consistency** - same personality maintains characteristics on OpenAI, Anthropic, LangChain\n- **Precision matters** - small trait changes (0.3 → 0.95) create distinct behavioral shifts\n- Rich personality features (traits, knowledge domains, behaviors, evolution)\n\n### **Developer Friendly**\n- Simple API - change providers with one line of code\n- Auto-detection of available providers\n- Comprehensive examples and documentation\n- Backwards compatible with existing code\n\n## **DSL Language Reference**\n\n### Trait Modifiers\n```python\n# Traits that change over time\npatience: 0.8\n  decays: 0.02/month\n\n# Context-dependent activation  \ncreativity: 0.9\n  when: \"creative_tasks\"\n\n# Conditional suppression\nformality: 0.6\n  unless: \"user_frustrated\"\n\n# Cross-trait interactions\ncuriosity: 0.8\n  amplifies: helpfulness * 1.4\n```\n\n### Knowledge Domains\n```python\ndomain programming:\n  python: expert           # Expertise levels\n  debugging: advanced      # beginner | intermediate | advanced | expert\n  frameworks: intermediate\n\n# Domain connections\ndomain teaching:\n  connects_to: programming (0.9)\n```\n\n### Behavioral Rules\n```python\nbehaviors:\n  - when technical_expertise \u003e 0.9 → prefer \"detailed examples\"\n  - when \"user_confused\" → avoid \"complex jargon\"\n  - when patience \u003e 0.8 → seek \"step-by-step guidance\"\n```\n\n### Evolution \u0026 Learning\n```python\nevolution:\n  - learns \"user_preference\" → helpfulness += 0.05\n  - after 100.0 interactions → unlock \"advanced_topics\"\n  - learns \"effective_method\" → connect domain1 ↔ domain2 (0.9)\n```\n\n## **Ready-Made Personalities**\n\n```python\nfrom callosum_dsl import PersonalityAI, PERSONALITY_TEMPLATES\n\n# Available templates\nprint(list(PERSONALITY_TEMPLATES.keys()))\n# ['helpful_assistant', 'creative_writer', 'technical_mentor']\n\n# Use directly with any AI provider\nai = PersonalityAI(PERSONALITY_TEMPLATES[\"creative_writer\"])\n\n# Works 
instantly with any provider:\nai.set_provider(\"openai\", api_key=\"your-key\")\nstory = ai.chat(\"Write a story about AI\")\n\n# Or use with LangChain models\nfrom langchain_openai import ChatOpenAI\nllm = ChatOpenAI(model=\"gpt-4\", api_key=\"your-key\")\nai.set_provider(\"langchain\", llm=llm)\npoem = ai.chat(\"Write a poem about coding\")\n\n# Traditional compilation still works\nfrom callosum_dsl import Callosum\ncallosum = Callosum()\nwriter_prompt = callosum.to_prompt(PERSONALITY_TEMPLATES[\"creative_writer\"])\n```\n\n## Development\n\n### Project Structure\n```\ncallosum/\n├── README.md                    # Main project documentation\n├── LICENSE                      # MIT license\n├── Makefile                     # Development commands\n│\n├── core/                        # OCaml DSL Compiler\n│   ├── bin/                     # Executable entry point\n│   │   └── main.ml              # Command-line interface\n│   ├── lib/                     # Core library modules\n│   │   ├── ast.ml               # Abstract Syntax Tree\n│   │   ├── compiler.ml          # Multi-target compilation\n│   │   ├── lexer.mll            # Lexical analysis\n│   │   ├── parser.mly           # Grammar parsing\n│   │   ├── semantic.ml          # Semantic analysis\n│   │   ├── types.ml             # Type definitions\n│   │   └── optimize.ml          # Optimization passes\n│   ├── test/                    # Comprehensive test suite\n│   ├── examples/                # Sample .colo personality files\n│   ├── infrastructure/          # Docker deployment\n│   ├── dune-project             # OCaml build configuration\n│   └── dsl-parser.opam          # Package definition\n│\n├── python/                      # Python Package\n│   ├── callosum_dsl/           # Python package source\n│   ├── tests/                   # Python tests\n│   ├── examples/                # Python usage examples\n│   ├── pyproject.toml           # Modern Python build config\n│   └── requirements.txt         # Python 
dependencies\n│\n├── docs/                        # Documentation\n│   ├── README_PYTHON.md         # Python package documentation\n│   ├── QUICK_START.md           # Quick start guide\n│   ├── PACKAGING.md             # Package maintenance docs\n│   └── READY_FOR_PYPI.md        # PyPI publishing guide\n│\n└── scripts/                     # Build and utility scripts\n    ├── build_package.py         # Automated build script\n    └── publish.py               # Publishing script\n```\n\n### Development\n\n```bash\n# Quick development setup\nmake dev\n\n# Build OCaml compiler\nmake build-core\n\n# Build Python package  \nmake build-python\n\n# Run tests\nmake test\n\n# Clean builds\nmake clean\n```\n\nFor detailed OCaml development:\n```bash\ncd core\ndune build              # Build\ndune runtest            # Run tests\ndune exec bin/main.exe  # Run compiler directly\n```\n\n### Adding New Features\n\n1. **Extend types** in `types.ml`\n2. **Update lexer** in `lexer.mll` for new tokens\n3. **Modify grammar** in `parser.mly` for syntax\n4. **Add validation** in `semantic.ml` \n5. **Update compiler** in `compiler.ml` for new targets\n6. **Write tests** in `test/test_parser.ml`\n\n## API Reference\n\n### Core Functions\n```python\n# Python API\nfrom callosum_dsl import Callosum, PersonalityAI\n\n# Compile DSL to different formats\ncallosum = Callosum()\nsystem_prompt = callosum.to_prompt(dsl_content)\ndata = callosum.to_json(dsl_content)\nlua_script = callosum.to_lua(dsl_content)\n\n# Provider-agnostic AI usage\nai = PersonalityAI(dsl_content)\nai.set_provider(\"openai\", api_key=\"your-key\")\nresponse = ai.chat(\"Hello!\")\n```\n\n## File Extensions\n\n- `.colo` - Personality definition files (Callosum language)\n- Output formats: JSON, system prompts, Lua, SQL, Cypher\n\n## License\n\nMIT\n\n## Contributing\n\n1. Fork the repository\n2. Create feature branch (`git checkout -b feature/amazing-feature`)\n3. Add tests for new functionality\n4. 
Ensure all tests pass (`dune runtest`)\n5. Submit pull request\n\n---\n\n*Part of the Callosum AI Personality System*\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgriffincancode%2Fcallosum","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fgriffincancode%2Fcallosum","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgriffincancode%2Fcallosum/lists"}