{"id":31812615,"url":"https://github.com/synalinks/synalinks","last_synced_at":"2025-10-11T07:17:09.157Z","repository":{"id":279994414,"uuid":"926543839","full_name":"SynaLinks/synalinks","owner":"SynaLinks","description":"🧠🔗 From idea to production in just few lines: Graph-Based Programmable Neuro-Symbolic LM Framework - a production-first LM framework built with decade old Deep Learning best practices","archived":false,"fork":false,"pushed_at":"2025-10-09T10:28:08.000Z","size":18467,"stargazers_count":334,"open_issues_count":3,"forks_count":29,"subscribers_count":3,"default_branch":"main","last_synced_at":"2025-10-11T07:17:07.821Z","etag":null,"topics":["context-engineering","in-context-learning","in-context-reinforcement-learning","language-model","llmops","llms","neuro-symbolic","neuro-symbolic-ai","reinforcement-learning","reinforcement-learning-agent","test-time-adaptation","test-time-training"],"latest_commit_sha":null,"homepage":"https://www.synalinks.com/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/SynaLinks.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-02-03T12:53:48.000Z","updated_at":"2025-10-09T10:27:36.000Z","dependencies_parsed_at":"2025-02-28T21:10:46.745Z","dependency_job_id":"c16cc948-dfc6-4094-b344-ce1056ea3866","html_url":"https://github.com/SynaLinks/synalinks","commit_stats":null,"previous_names":["synalinks/synalinks"],"tags_count":0,"template":false,"template_full_name":n
ull,"purl":"pkg:github/SynaLinks/synalinks","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SynaLinks%2Fsynalinks","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SynaLinks%2Fsynalinks/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SynaLinks%2Fsynalinks/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SynaLinks%2Fsynalinks/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/SynaLinks","download_url":"https://codeload.github.com/SynaLinks/synalinks/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SynaLinks%2Fsynalinks/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":279006595,"owners_count":26084130,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-11T02:00:06.511Z","response_time":55,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["context-engineering","in-context-learning","in-context-reinforcement-learning","language-model","llmops","llms","neuro-symbolic","neuro-symbolic-ai","reinforcement-learning","reinforcement-learning-agent","test-time-adaptation","test-time-training"],"created_at":"2025-10-11T07:17:06.055Z","updated_at":"2025-10-11T07:17:09.151Z","avatar_url":"https://github.com/SynaLinks.png","language":"Python","readme":"\u003cdiv 
align=\"center\"\u003e\n\u003cimg height=200 src=\"https://github.com/SynaLinks/synalinks/blob/main/img/synalinks_logo_square.png?raw=true\"\u003e\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n\n\u003cb\u003eFrom idea to production in just a few lines\u003c/b\u003e\n\n\u003cem\u003eThe first neuro-symbolic LM framework to leverage decades-old Deep Learning best practices from the most user-friendly framework ever built - Keras\u003c/em\u003e\n\n\u003cb\u003eBuild RAGs, autonomous agents, multi-agent systems, self-evolving systems and more in just a few lines\u003c/b\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://synalinks.github.io/synalinks\" target=\"_blank\"\u003e\u003cstrong\u003eDocumentation\u003c/strong\u003e\u003c/a\u003e ·\n  \u003ca href=\"https://synalinks.github.io/synalinks/FAQ/\" target=\"_blank\"\u003e\u003cstrong\u003eFAQ\u003c/strong\u003e\u003c/a\u003e ·\n  \u003ca href=\"https://discord.gg/82nt97uXcM\" target=\"_blank\"\u003e\u003cstrong\u003eDiscord\u003c/strong\u003e\u003c/a\u003e ·\n  \u003ca href=\"https://github.com/SynaLinks/synalinks/tree/main/examples\" target=\"_blank\"\u003e\u003cstrong\u003eCode Examples\u003c/strong\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n\n⭐ Help us reach more AI/ML engineers and grow the Synalinks community. Star this repo ⭐\n\n![Beta](https://img.shields.io/badge/Release-Beta-blue.svg)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n![Coverage Badge](https://raw.githubusercontent.com/SynaLinks/synalinks/refs/heads/main/coverage-badge.svg)\n[![Downloads](https://static.pepy.tech/badge/synalinks)](https://pepy.tech/project/synalinks)\n[![Discord](https://img.shields.io/discord/1118241178723291219)](https://discord.gg/82nt97uXcM)\n[![Python package](https://github.com/SynaLinks/Synalinks/actions/workflows/tests.yml/badge.svg)](https://github.com/SynaLinks/SynaLinks/actions/workflows/tests.yml)\n[![License: Apache-2.0](https://img.shields.io/badge/License-Apache_2.0-green.svg)](https://opensource.org/license/apache-2-0)\n[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/SynaLinks/synalinks)\n\n\u003c/div\u003e\n\n## What is Synalinks?\n\nSynalinks is an open-source framework that makes it easy to create, evaluate, train, and deploy industry-standard Language Model (LM) applications like **graph RAGs, autonomous agents, multi-agent systems or self-evolving systems**. Synalinks follows the principle of *progressive disclosure of complexity*: simple workflows should be quick and easy, while arbitrarily advanced ones should be possible via a clear path that builds upon what you've already learned.\n\nSynalinks is an *adaptation of Keras 3* focused on neuro-symbolic systems and in-context reinforcement learning, an ensemble of techniques that enhance an LM's predictions and accuracy without changing the model's weights. The goal of Synalinks is to facilitate the rapid setup of simple applications while providing the flexibility for researchers and advanced users to develop sophisticated systems.\n\n## Who is Synalinks for?\n\nSynalinks is designed for a diverse range of users, from professionals and AI researchers to students, independent developers, and hobbyists. It is suitable for anyone who wants to learn about AI by building/composing blocks or to build solid foundations for enterprise-grade products. While a background in Machine Learning and Deep Learning can be advantageous — as Synalinks leverages design patterns from Keras, one of the most user-friendly and popular Deep Learning frameworks — it is not a prerequisite. Synalinks is designed to be accessible to anyone with programming skills in Python, making it a versatile and inclusive platform for AI development.\n\n## Why use Synalinks?\n\nDeveloping a successful LM application in a professional context, beyond stateless chatbots, is difficult and typically involves:\n\n- **Building optimized prompts with examples/instructions at each step**: Synalinks uses advanced In-Context Reinforcement Learning techniques to optimize **each** prompt of your workflow/agent.\n- **Pipelines that change over time**: Easily edit your pipelines, re-run your training, and you're good to go.\n- **Ensuring the correctness of the LM's output**: Synalinks combines *constrained structured output* with In-Context RL to ensure **both format and content correctness**.\n- **Async Optimization**: Synalinks automatically optimizes your pipelines by detecting parallel processes, so you don't have to worry about it.\n- **Assessing the performance of your application**: Synalinks provides built-in metrics and rewards to evaluate your workflows.\n- **Configuring Language \u0026 Embedding Models**: Seamlessly integrate multiple LM providers like Ollama, OpenAI, Azure, Anthropic, Mistral or Groq.\n- **Configuring Graph Databases**: Seamlessly integrate with Neo4J or MemGraph.\n- **Documenting your ML workflows**: Plot your workflows, training history, and evaluations; document everything.\n- **Versioning the prompts/pipelines**: Each program is serializable into JSON so you can version it with git.\n- **Deploying REST APIs or MCP servers**: Compatible out-of-the-box with FastAPI and FastMCP so your Data Scientists and Web Developers can stop tearing each other apart.\n- **Finding Hyperparameters**: Synalinks is compatible with [KerasTuner](https://keras.io/keras_tuner/), so you don't have to guess the hyperparameters.\n\nWe can help you simplify these tasks by leveraging decade-old practices from Deep Learning frameworks. We provide a comprehensive suite of tools and features designed to streamline the development process, making it easier to create, evaluate, train, document and deploy robust neuro-symbolic LM applications.\n\n\u003cdiv align=\"center\"\u003e\n\n| Framework | MCP | Graph DB | Logical Flow | Robust Branching | Parallel Function Calling | Hyperparameter Tuning | Constrained JSON Decoding | Ease of Use |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Synalinks | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | 😀 |\n| DSPy      | ✅ Yes | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | 😢 |\n| AdalFlow  | ✅ Yes | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | 😢 |\n| TextGrad  | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | 😭 |\n| Trace     | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | ✅ Yes | 😭 |\n\n\u003c/div\u003e\n\n## Install\n\n```shell\nuv pip install synalinks\n```\n\nStart your project with:\n\n```shell\nuv run synalinks init\n```\n\n## Programming your application: 4 ways\n\n### Using the `Functional` API\n\nYou start from `Input`, chain module calls to specify the program's structure, and finally create your program from its inputs and outputs:\n\n```python\nimport synalinks\nimport asyncio\n\nclass Query(synalinks.DataModel):\n    query: str = synalinks.Field(\n        description=\"The user query\",\n    )\n\nclass AnswerWithThinking(synalinks.DataModel):\n    thinking: str = synalinks.Field(\n        description=\"Your step by step thinking\",\n    )\n    answer: float = synalinks.Field(\n        description=\"The correct numerical answer\",\n    )\n\nasync def main():\n\n    language_model = synalinks.LanguageModel(\n        model=\"ollama/mistral\",\n    )\n\n    x0 = synalinks.Input(data_model=Query)\n    x1 = await synalinks.Generator(\n        data_model=AnswerWithThinking,\n        language_model=language_model,\n    )(x0)\n\n    program = synalinks.Program(\n        inputs=x0,\n        outputs=x1,\n        name=\"chain_of_thought\",\n        description=\"Useful to answer in a step by step manner.\",\n    )\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n### Subclassing the `Program` class\n\nIn this case, you define your modules in `__init__()` and implement the program's structure in `call()`. This style is closer to PyTorch; if you are an experienced user, it gives you maximum flexibility.\n\n**Note:** you can optionally add a `training` argument (boolean), which you can use to specify different behavior during training and inference.\n\n```python\nimport synalinks\nimport asyncio\n\nclass Query(synalinks.DataModel):\n    query: str = synalinks.Field(\n        description=\"The user query\",\n    )\n\nclass AnswerWithThinking(synalinks.DataModel):\n    thinking: str = synalinks.Field(\n        description=\"Your step by step thinking process\",\n    )\n    answer: float = synalinks.Field(\n        description=\"The correct numerical answer\",\n    )\n\nclass ChainOfThought(synalinks.Program):\n    \"\"\"Useful to answer in a step by step manner.\n    \n    The first line of the docstring is used as the program's description\n    if none is provided in `super().__init__()`.\n    Similarly, the name is automatically inferred from\n    the class name if not provided.\n    \"\"\"\n\n    def __init__(\n        self,\n        language_model=None,\n        name=None,\n        description=None,\n        trainable=True,\n    ):\n        super().__init__(\n            name=name,\n            description=description,\n            trainable=trainable,\n        )\n        # Keep a reference to the language model so that\n        # `get_config()` can serialize it below.\n        self.language_model = language_model\n        self.answer = synalinks.Generator(\n            data_model=AnswerWithThinking,\n            language_model=language_model,\n            name=self.name+\"_generator\",\n        )\n\n    async def call(self, inputs, training=False):\n        if not inputs:\n            return None\n        x = await self.answer(inputs, training=training)\n        return x\n\n    def get_config(self):\n        config = {\n            \"name\": self.name,\n            \"description\": self.description,\n            \"trainable\": self.trainable,\n        }\n        language_model_config = \\\n        {\n            \"language_model\": synalinks.saving.serialize_synalinks_object(\n                self.language_model\n            )\n        }\n        return {**config, **language_model_config}\n\n    @classmethod\n    def from_config(cls, config):\n        language_model = synalinks.saving.deserialize_synalinks_object(\n            config.pop(\"language_model\")\n        )\n        return cls(language_model=language_model, **config)\n\nasync def main():\n\n    language_model = synalinks.LanguageModel(\n        model=\"ollama/mistral\",\n    )\n\n    program = ChainOfThought(\n        language_model=language_model,\n    )\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n### Mixing the subclassing and the `Functional` API\n\nThis approach encapsulates your application while keeping the setup easy to use.\nIt is the recommended way for most users, as it avoids building your programs/agents from scratch.\nIn this case, you only implement the `__init__()` and `build()` methods.\n\n```python\nimport synalinks\nimport asyncio\n\nclass Query(synalinks.DataModel):\n    query: str = synalinks.Field(\n        description=\"The user query\",\n    )\n\nclass AnswerWithThinking(synalinks.DataModel):\n    thinking: str = synalinks.Field(\n        description=\"Your step by step thinking process\",\n    )\n    answer: float = synalinks.Field(\n        description=\"The correct numerical answer\",\n    )\n\nasync def main():\n\n    class ChainOfThought(synalinks.Program):\n        \"\"\"Useful to answer in a step by step manner.\"\"\"\n\n        def __init__(\n            self,\n            language_model=None,\n            name=None,\n            description=None,\n            trainable=True,\n        ):\n            super().__init__(\n                name=name,\n                description=description,\n                trainable=trainable,\n            )\n\n            self.language_model = language_model\n\n        async def build(self, inputs):\n            outputs = await synalinks.Generator(\n                data_model=AnswerWithThinking,\n                language_model=self.language_model,\n            )(inputs)\n\n            # Create your program using the functional API\n            super().__init__(\n                inputs=inputs,\n                outputs=outputs,\n                name=self.name,\n                description=self.description,\n                trainable=self.trainable,\n            )\n\n    language_model = synalinks.LanguageModel(\n        model=\"ollama/mistral\",\n    )\n\n    program = ChainOfThought(\n        language_model=language_model,\n    )\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\nThis spares you from implementing the `call()` and serialization methods\n(`get_config()` and `from_config()`). The program will be built for any inputs the first time it is called.\n\n### Using the `Sequential` API\n\nIn addition, `Sequential` is a special case of program where the program\nis purely a stack of single-input, single-output modules.\n\n```python\nimport synalinks\nimport asyncio\n\nclass Query(synalinks.DataModel):\n    query: str = synalinks.Field(\n        description=\"The user query\",\n    )\n\nclass AnswerWithThinking(synalinks.DataModel):\n    thinking: str = synalinks.Field(\n        description=\"Your step by step thinking\",\n    )\n    answer: float = synalinks.Field(\n        description=\"The correct numerical answer\",\n    )\n\nasync def main():\n\n    language_model = synalinks.LanguageModel(\n        model=\"ollama/mistral\",\n    )\n\n    program = synalinks.Sequential(\n        [\n            synalinks.Input(\n                data_model=Query,\n            ),\n            synalinks.Generator(\n                data_model=AnswerWithThinking,\n                language_model=language_model,\n            ),\n        ],\n        name=\"chain_of_thought\",\n        description=\"Useful to answer in a step by step manner.\",\n    )\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n## Getting a summary of your program\n\nTo print a tabular summary of your program:\n\n```python\nprogram.summary()\n```\n\nOr a plot (useful to document your system):\n\n```python\nsynalinks.utils.plot_program(\n    program,\n    show_module_names=True,\n    show_trainable=True,\n    show_schemas=True,\n)\n```\n\n![chain_of_thought](./docs/assets/chain_of_thought.png)\n\n## Running your program\n\nTo run your program, use the following:\n\n```python\nresult = await program(\n    Query(query=\"What is the French city of aerospace?\"),\n)\n```\n\n## Training your program\n\n```python\nasync def main():\n\n    # ... your program definition (with language_model and embedding_model)\n\n    (x_train, y_train), (x_test, y_test) = synalinks.datasets.gsm8k.load_data()\n\n    program.compile(\n        reward=synalinks.rewards.ExactMatch(\n            in_mask=[\"answer\"],\n        ),\n        optimizer=synalinks.optimizers.OMEGA(\n            language_model=language_model,\n            embedding_model=embedding_model,\n        ),\n    )\n\n    batch_size = 1\n    epochs = 10\n\n    history = await program.fit(\n        x_train,\n        y_train,\n        validation_split=0.2,\n        batch_size=batch_size,\n        epochs=epochs,\n    )\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\u003cdiv align=\"center\"\u003e\n\n![gsm8k_evaluation_comparison](./docs/assets/gsm8k_evaluation_comparison.png)\n\n\u003c/div\u003e\n\n## Saving \u0026 Loading\n\nTo save the entire architecture and variables (the program's state) into a JSON file, do:\n\n```python\nprogram.save(\"my_program.json\")\n```\n\nTo load it, do:\n\n```python\nloaded_program = synalinks.Program.load(\"my_program.json\")\n```\n\nTo save only the state of your program (the variables) into JSON:\n\n```python\nprogram.save_variables(\"my_program.variables.json\")\n```\n\nTo load its variables (requires a program with the same architecture), do:\n\n```python\nprogram.load_variables(\"my_program.variables.json\")\n```\n\n## Logging\n\nTo enable logging, use the following at the beginning of your script:\n\n```python\nsynalinks.enable_logging()\n```\n\n### Learn more\n\nYou can learn more by reading our [documentation](https://synalinks.github.io/synalinks/). If you have questions, the [FAQ](https://synalinks.github.io/synalinks/FAQ/) might help you.\n\n### Contributions\n\nContributions are welcome, whether for additional modules, metrics, or optimizers.\nFor more information, or for help implementing your ideas (or ones from a paper), please join our Discord.\n\nNote that every additional metric/module/optimizer must be approved by the core team; we want to keep the library as minimal and clean as possible to avoid uncontrolled growth leading to bad software practices, as in most current leading LM frameworks.\n\nIf you have specific feedback or feature requests, we invite you to open an [issue](https://github.com/SynaLinks/synalinks/issues).\n\n### Contributors\n\n\u003ca href=\"https://github.com/SynaLinks/synalinks/graphs/contributors\"\u003e\n  \u003cimg src=\"https://contrib.rocks/image?repo=SynaLinks/synalinks\"/\u003e\n\u003c/a\u003e\n\n### Community\n\nJoin our community to learn more about neuro-symbolic systems and the future of AI. We welcome the participation of people from very different backgrounds and education levels.\n\n### Citing our work\n\nThis work has been done under the supervision of François Chollet, the author of Keras. If this work is useful for your research, please use the following BibTeX entry:\n\n```bibtex\n@misc{sallami2025synalinks,\n  title={Synalinks},\n  author={Sallami, Yoan and Chollet, Fran\\c{c}ois},\n  year={2025},\n  howpublished={\\url{https://github.com/SynaLinks/Synalinks}},\n}\n```\n\n### Credit\n\nSynalinks would not be possible without the great work of the following open-source projects:\n\n- [Keras](https://keras.io/) for the graph-based computation backbone, API and overall code, design and philosophy.\n- [DSPy](https://dspy.ai/) for the modules/optimizers inspiration.\n- [Pydantic](https://docs.pydantic.dev/latest/) for the backend data layer.\n- [LiteLLM](https://docs.litellm.ai/docs/) for the LM integrations.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsynalinks%2Fsynalinks","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsynalinks%2Fsynalinks","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsynalinks%2Fsynalinks/lists"}