{"id":16683484,"url":"https://github.com/marcominerva/kernelmemoryservice","last_synced_at":"2025-03-17T00:32:45.706Z","repository":{"id":220301616,"uuid":"750253633","full_name":"marcominerva/KernelMemoryService","owner":"marcominerva","description":"A lightweight implementation of Kernel Memory as a Service","archived":false,"fork":false,"pushed_at":"2025-02-24T14:22:21.000Z","size":76,"stargazers_count":37,"open_issues_count":0,"forks_count":4,"subscribers_count":4,"default_branch":"master","last_synced_at":"2025-03-16T06:41:11.508Z","etag":null,"topics":["ai","artificial-intelligence","azure","azure-openai","azureopenai","chat-completion","csharp","embeddings","gpt","kernel-memory","openai","rag","retrieval-augmented-generation","semantic-kernel","visual-studio"],"latest_commit_sha":null,"homepage":"","language":"C#","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/marcominerva.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-01-30T09:36:36.000Z","updated_at":"2025-03-11T20:38:34.000Z","dependencies_parsed_at":"2024-02-12T12:24:55.905Z","dependency_job_id":"29982862-fb53-4157-a75d-7f0811e8ac15","html_url":"https://github.com/marcominerva/KernelMemoryService","commit_stats":null,"previous_names":["marcominerva/kernelmemoryservice"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/marcominerva%2FKernelMemoryService","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/marcominerva%2FKernelMemoryService/tags","releases_ur
l":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/marcominerva%2FKernelMemoryService/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/marcominerva%2FKernelMemoryService/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/marcominerva","download_url":"https://codeload.github.com/marcominerva/KernelMemoryService/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243955668,"owners_count":20374371,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","artificial-intelligence","azure","azure-openai","azureopenai","chat-completion","csharp","embeddings","gpt","kernel-memory","openai","rag","retrieval-augmented-generation","semantic-kernel","visual-studio"],"created_at":"2024-10-12T14:24:55.673Z","updated_at":"2025-03-17T00:32:45.210Z","avatar_url":"https://github.com/marcominerva.png","language":"C#","readme":"# Kernel Memory Service\n\n[Kernel Memory](https://github.com/microsoft/kernel-memory) provides a [Service implementation](https://github.com/microsoft/kernel-memory/tree/main/service/Service) that can be used to manage memory settings, ingest data and query for answers. While it is a good solution, in some scenarios it can be too complex.\n\nSo, the goal of this repository is to provide a lightweight implementation of Kernel Memory as a Service. 
This project is quite simple, so it can be easily customized according to your needs and even integrated into existing applications with little effort.\n\n### How to use\n\nThe service can be directly configured in the [Program.cs file](https://github.com/marcominerva/KernelMemoryService/blob/master/KernelMemoryService/Program.cs). The default implementation uses the following settings:\n\n- Azure OpenAI Service for embeddings and text generation.\n- File system for Content Storage, Vector Storage and Orchestration.\n\nThe configuration values are stored in the [appsettings.json file](https://github.com/marcominerva/KernelMemoryService/blob/master/KernelMemoryService/appsettings.json).\n\nYou can easily change all these options by using any of the [supported backends](https://github.com/microsoft/kernel-memory?tab=readme-ov-file#supported-data-formats-and-backends).\n\n### Conversational support\n\nEmbeddings are generated from a given text. So, in a conversational scenario, it is necessary to keep track of the previous messages in order to generate valid embeddings for a particular question.\n\nFor example, suppose we have imported a couple of Wikipedia articles, one about [Taggia](https://en.wikipedia.org/wiki/Taggia) and the other about [Sanremo](https://en.wikipedia.org/wiki/Sanremo), two cities in Italy. Now, we want to ask questions about them (of course, this information is publicly available and known by GPT models, so using embeddings and RAG isn't really necessary, but this is just an example). So, we start with the following:\n\n- How many people live in Taggia?\n\nUsing embeddings and RAG, Kernel Memory will generate the correct answer. 
Now, as we are in a chat context, we ask another question:\n\n- And in Sanremo?\n\nFrom our point of view, this question is the \"continuation\" of the chat, so it means \"And how many people live in Sanremo?\". However, if we directly generate embeddings for \"And in Sanremo?\", they won't contain anything about the fact that we are interested in the population number, so we won't get any result.\n\nTo solve this problem, we need to keep track of the previous messages and, when asking a question, reformulate it taking the whole conversation into account. In this way, we can generate the correct embeddings.\n\nThe Service automatically handles this scenario by using a Memory Cache and a **ConversationId** associated with each question. Questions and answers are kept in memory, so the Service is able to [reformulate questions](https://github.com/marcominerva/KernelMemoryService/blob/master/KernelMemoryService/Services/ChatService.cs) based on the current chat context before using Kernel Memory.\n\n\u003e **Note**\nThis isn't the only way to keep track of the conversation context. The Service uses an explicit approach to make it clear how the workflow should work.\n\nTwo settings in the [appsettings.json file](https://github.com/marcominerva/KernelMemoryService/blob/master/KernelMemoryService/appsettings.json) are used to configure the cache:\n\n- _MessageLimit_: specifies how many messages are kept for each conversation. 
When this limit is reached, the oldest messages are automatically removed.\n- _MessageExpiration_: specifies how long messages are kept in the cache, regardless of their count.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmarcominerva%2Fkernelmemoryservice","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmarcominerva%2Fkernelmemoryservice","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmarcominerva%2Fkernelmemoryservice/lists"}