Microsoft.KernelMemory.Core 0.68.240716.1

Prefix Reserved
This package has a SemVer 2.0.0 package version: 0.68.240716.1+2ff894c.
dotnet add package Microsoft.KernelMemory.Core --version 0.68.240716.1                
NuGet\Install-Package Microsoft.KernelMemory.Core -Version 0.68.240716.1                
This command is intended to be used within the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package.
<PackageReference Include="Microsoft.KernelMemory.Core" Version="0.68.240716.1" />                
For projects that support PackageReference, copy this XML node into the project file to reference the package.
paket add Microsoft.KernelMemory.Core --version 0.68.240716.1                
#r "nuget: Microsoft.KernelMemory.Core, 0.68.240716.1"                
#r directive can be used in F# Interactive and Polyglot Notebooks. Copy this into the interactive tool or source code of the script to reference the package.
// Install Microsoft.KernelMemory.Core as a Cake Addin
#addin nuget:?package=Microsoft.KernelMemory.Core&version=0.68.240716.1

// Install Microsoft.KernelMemory.Core as a Cake Tool
#tool nuget:?package=Microsoft.KernelMemory.Core&version=0.68.240716.1                

Kernel Memory

License: MIT

This repository presents best practices and a reference architecture for memory in specific AI and LLM application scenarios. Please note that the provided code serves as a demonstration and is not an officially supported Microsoft offering.

Kernel Memory (KM) is a multi-modal AI Service specialized in the efficient indexing of datasets through custom continuous data hybrid pipelines, with support for Retrieval Augmented Generation (RAG), synthetic memory, prompt engineering, and custom semantic memory processing.

KM is available as a Web Service, as a Docker container, as a Plugin for ChatGPT/Copilot/Semantic Kernel, and as a .NET library for embedded applications.


Utilizing advanced embeddings and LLMs, the system enables Natural Language querying for obtaining answers from the indexed data, complete with citations and links to the original sources.


Designed for seamless integration as a Plugin with Semantic Kernel, Microsoft Copilot and ChatGPT, Kernel Memory enhances data-driven features in applications built for the most popular AI platforms.

Synchronous Memory API (aka "serverless")

Kernel Memory works and scales best when running as an asynchronous Web Service, allowing you to ingest thousands of documents and pieces of information without blocking your app.

However, Kernel Memory can also run in serverless mode, embedding a MemoryServerless class instance in .NET backend/console/desktop apps in synchronous mode. This approach also works in ASP.NET Web APIs and Azure Functions. Each request is processed immediately, although calling clients are responsible for handling transient errors.
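Since the serverless client processes each request inline, the caller owns the retry policy for transient failures. A minimal sketch of exponential-backoff retry around any import or query call (this helper is illustrative, not part of the Kernel Memory API):

```csharp
using System;
using System.Threading.Tasks;

// Retry an operation a few times with exponential backoff.
// Illustrative only; wrap real KM calls (ImportDocumentAsync, AskAsync) with it.
static async Task<T> WithRetriesAsync<T>(Func<Task<T>> action, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try { return await action(); }
        catch (Exception) when (attempt < maxAttempts)
        {
            // Back off 100ms, 200ms, 400ms, ...
            await Task.Delay(TimeSpan.FromMilliseconds(100 * Math.Pow(2, attempt - 1)));
        }
    }
}

// Demo: an operation that fails twice before succeeding.
int calls = 0;
var result = await WithRetriesAsync(() =>
{
    calls++;
    if (calls < 3) throw new InvalidOperationException("transient");
    return Task.FromResult("imported");
});
Console.WriteLine($"{result} after {calls} attempts"); // prints "imported after 3 attempts"
```

In a hosted Web Service deployment the queue-based pipeline handles retries for you; this pattern only matters for the synchronous in-process mode.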


Importing documents into your Kernel Memory can be as simple as this:

var memory = new KernelMemoryBuilder()
    .WithOpenAIDefaults(Environment.GetEnvironmentVariable("OPENAI_API_KEY"))
    .Build<MemoryServerless>();

// Import a file
await memory.ImportDocumentAsync("meeting-transcript.docx", tags: new() { { "user", "Blake" } });

// Import multiple files and apply multiple tags
await memory.ImportDocumentAsync(new Document("file001")
    .AddTag("user", "Blake")
    .AddTag("collection", "business")
    .AddTag("collection", "plans")
    .AddTag("fiscalYear", "2023"));

Asking questions:

var answer1 = await memory.AskAsync("How many people attended the meeting?");

var answer2 = await memory.AskAsync("what's the project timeline?", filter: new MemoryFilter().ByTag("user", "Blake"));
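Conceptually, a tag filter like the one above matches records whose tag set contains every requested key, where a key can hold several values. A toy sketch of that matching logic (data structures are hypothetical, not KM's internal MemoryFilter implementation):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A record's tags: one key can hold several values, e.g. collection = [business, plans].
// Illustrative only; Kernel Memory's actual filter implementation differs.
static bool Matches(Dictionary<string, List<string>> recordTags,
                    Dictionary<string, List<string>> filter)
{
    // Every filter key must be present, with at least one requested value matching.
    return filter.All(f =>
        recordTags.TryGetValue(f.Key, out var values) &&
        f.Value.Any(values.Contains));
}

var doc = new Dictionary<string, List<string>>
{
    ["user"] = new() { "Blake" },
    ["collection"] = new() { "business", "plans" },
};

Console.WriteLine(Matches(doc, new() { ["user"] = new() { "Blake" } }));  // True
Console.WriteLine(Matches(doc, new() { ["user"] = new() { "Taylor" } })); // False
```

This is why tagging documents at import time ("user", "collection", etc.) pays off: queries can later be scoped to exactly the records a caller is allowed to see.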

The example leverages the default document ingestion pipeline:

  1. Extract text: recognize the file format and extract the information
  2. Partition the text in small chunks, to optimize search
  3. Extract embeddings using an LLM embedding generator
  4. Save embeddings into a vector index such as Azure AI Search, Qdrant or other DBs.
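Step 2 above, partitioning, can be pictured as splitting the extracted text into overlapping windows before embedding, so a sentence cut at a boundary still appears whole in a neighbouring chunk. A toy sketch (sizes and overlap here are arbitrary; KM's real partitioning is token-aware and configurable):

```csharp
using System;
using System.Collections.Generic;

// Split text into fixed-size chunks with some overlap between neighbours.
// Illustrative only; not Kernel Memory's actual partitioning code.
static List<string> Chunk(string text, int size, int overlap)
{
    var chunks = new List<string>();
    for (int start = 0; start < text.Length; start += size - overlap)
    {
        chunks.Add(text.Substring(start, Math.Min(size, text.Length - start)));
        if (start + size >= text.Length) break; // last window reached the end
    }
    return chunks;
}

// 25 characters, windows of 10 with 2 characters of overlap -> starts at 0, 8, 16.
var parts = Chunk(new string('x', 25), size: 10, overlap: 2);
Console.WriteLine(parts.Count); // prints 3
```

Smaller chunks make search more precise but lose context; the overlap is the usual compromise.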

In the example, memories are organized by users using tags, safeguarding private information. Furthermore, memories can be categorized and structured using tags, enabling efficient search and retrieval through faceted navigation.

Data lineage, citations, referencing sources:

All memories and answers are fully correlated to the data provided. When producing an answer, Kernel Memory includes all the information needed to verify its accuracy:

await memory.ImportDocumentAsync("NASA-news.pdf");

var answer = await memory.AskAsync("Any news from NASA about Orion?");

Console.WriteLine(answer.Result + "\n");

foreach (var x in answer.RelevantSources)
    Console.WriteLine($"  * {x.SourceName} -- {x.Partitions.First().LastUpdate:D}");

Yes, there is news from NASA about the Orion spacecraft. NASA has invited the media to see a new test version [......] For more information about the Artemis program, you can visit the NASA website.

  • NASA-news.pdf -- Tuesday, August 1, 2023

Memory as a Service - Asynchronous API

Depending on your scenarios, you might want to run all the code locally inside your process, or remotely through an asynchronous and scalable service.


If you're importing small files, only need C#, and can block the process during the import, local in-process execution can be fine, using the MemoryServerless class seen above.

However, if you are in one of these scenarios:

  • I'd just like a web service to import data and send queries to get answers
  • My app is written in TypeScript, Java, Rust, or some other language
  • I'm importing big documents that can require minutes to process, and I don't want to block the user interface
  • I need memory import to run independently, supporting failures and retry logic
  • I want to define custom pipelines mixing multiple languages like Python, TypeScript, etc

then you can deploy Kernel Memory as a backend service, plugging in the default handlers, or your custom Python/TypeScript/Java/etc. handlers, and leveraging the asynchronous non-blocking memory encoding process, sending documents and asking questions using the MemoryWebClient.

Here you can find a complete set of instructions on how to run the Kernel Memory service.

Kernel Memory (KM) and SK Semantic Memory (SM)

Kernel Memory (KM) is a service built on the feedback received and lessons learned from developing Semantic Kernel (SK) and Semantic Memory (SM). It provides several features that would otherwise have to be developed manually, such as storing files, extracting text from files, providing a framework to secure users' data, etc. The KM codebase is entirely in .NET, which eliminates the need to write and maintain features in multiple languages. As a service, KM can be used from any language, tool, or platform, e.g. browser extensions and ChatGPT assistants.

Semantic Memory (SM) is a library for C#, Python, and Java that wraps direct calls to databases and supports vector search. It was developed as part of the Semantic Kernel (SK) project and serves as the first public iteration of long-term memory. The core library is maintained in three languages, while the list of supported storage engines (known as "connectors") varies across languages.

Here's a comparison table:

| Feature | Kernel Memory | Semantic Memory |
|---|---|---|
| Data formats | Web pages, PDF, Images, Word, PowerPoint, Excel, Markdown, Text, JSON, HTML | Text only |
| Search | Cosine similarity, Hybrid search with filters (AND/OR conditions) | Cosine similarity |
| Language support | Any language, command line tools, browser extensions, low-code/no-code apps, chatbots, assistants, etc. | C#, Python, Java |
| Storage engines | Azure AI Search, Elasticsearch, MongoDB Atlas, Postgres+pgvector, Qdrant, Redis, SQL Server, In memory KNN, On disk KNN | Azure AI Search, Chroma, DuckDB, Kusto, Milvus, MongoDB, Pinecone, Postgres, Qdrant, Redis, SQLite, Weaviate |
| File storage | Disk, Azure Blobs, AWS S3, MongoDB Atlas, In memory (volatile) | - |
| RAG | Yes, with sources lookup | - |
| Summarization | Yes | - |
| OCR | Yes, via Azure Document Intelligence | - |
| Security Filters | Yes | - |
| Large document ingestion | Yes, including async processing using queues (Azure Queues, RabbitMQ, file-based or in-memory queues) | - |
| Document storage | Yes | - |
| Custom storage schema | Some DBs | - |
| Vector DBs with internal embedding | Yes | - |
| Concurrent write to multiple vector DBs | Yes | - |
| LLMs | Azure OpenAI, OpenAI, Anthropic, LLamaSharp via llama.cpp, LM Studio, Semantic Kernel connectors | Azure OpenAI, OpenAI, Gemini, Hugging Face, ONNX, custom ones, etc. |
| LLMs with dedicated tokenization | Yes | No |
| Cloud deployment | Yes | - |
| Web service with OpenAPI | Yes | - |
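Both libraries rank memories by cosine similarity between the query embedding and the stored embeddings. The measure itself is simple; a minimal sketch (real vector stores compute this inside the index, not in application code):

```csharp
using System;

// Cosine similarity: dot product of two vectors divided by the product of their norms.
// Returns 1 for identical directions, 0 for orthogonal vectors.
static double Cosine(double[] a, double[] b)
{
    double dot = 0, na = 0, nb = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
    }
    return dot / (Math.Sqrt(na) * Math.Sqrt(nb));
}

Console.WriteLine(Cosine(new[] { 1.0, 0.0 }, new[] { 1.0, 0.0 })); // prints 1
Console.WriteLine(Cosine(new[] { 1.0, 0.0 }, new[] { 0.0, 1.0 })); // prints 0
```

Hybrid search, listed in the table for Kernel Memory, combines this vector score with keyword matching and tag filters.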

Quick test using the Docker image

If you want to give the service a quick test, use the following command to start the Kernel Memory Service using OpenAI:

docker run -e OPENAI_API_KEY="..." -it --rm -p 9001:9001 kernelmemory/service

If you prefer using custom settings and services such as Azure OpenAI, Azure Document Intelligence, etc., you should create an appsettings.Development.json file overriding the default values set in appsettings.json, or use the configuration wizard included:

cd service/Service
dotnet run setup

Then run this command to start the Docker image with the configuration just created:

on Windows:

docker run --volume .\appsettings.Development.json:/app/appsettings.Production.json -it --rm -p 9001:9001 kernelmemory/service

on macOS/Linux:

docker run --volume ./appsettings.Development.json:/app/appsettings.Production.json -it --rm -p 9001:9001 kernelmemory/service

Import files using KM web service and MemoryWebClient

#reference clients/WebClient/WebClient.csproj

var memory = new MemoryWebClient("http://127.0.0.1:9001/"); // <== URL where the web service is running

// Import a file (default user)
await memory.ImportDocumentAsync("meeting-transcript.docx");

// Import a file specifying a Document ID, User and Tags
await memory.ImportDocumentAsync("business-plan.docx",
    new DocumentDetails("", "file001")
        .AddTag("collection", "business")
        .AddTag("collection", "plans")
        .AddTag("fiscalYear", "2023"));

Get answers via the web service

curl http://127.0.0.1:9001/ask -d'{"query":"Any news from NASA about Orion?"}' -H 'Content-Type: application/json'

{
  "Query": "Any news from NASA about Orion?",
  "Text": "Yes, there is news from NASA about the Orion spacecraft. NASA has invited the media to see a new test version [......] For more information about the Artemis program, you can visit the NASA website.",
  "RelevantSources": [
    {
      "Link": "...",
      "SourceContentType": "application/pdf",
      "SourceName": "file5-NASA-news.pdf",
      "Partitions": [
        {
          "Text": "Skip to main content\nJul 28, 2023\nMEDIA ADVISORY M23-095\nNASA Invites Media to See Recovery Craft for\nArtemis Moon Mission\n(/sites/default/files/thumbnails/image/ksc-20230725-ph-fmx01_0003orig.jpg)\nAboard the [......] to Mars (/topics/moon-to-\nmars/),Orion Spacecraft (/exploration/systems/orion/index.html)\nNASA Invites Media to See Recovery Craft for Artemis Moon Miss...\n2 of 3 7/28/23, 4:51 PM",
          "Relevance": 0.8430657,
          "SizeInTokens": 863,
          "LastUpdate": "2023-08-01T08:15:02-07:00"
        }
      ]
    }
  ]
}
You can find a full example here.

Custom memory ingestion pipelines

On the other hand, if you need a custom data pipeline, you can also customize the steps, which will be handled by your custom business logic:

// Memory setup, e.g. how to calculate and where to store embeddings
var memoryBuilder = new KernelMemoryBuilder();

var memory = memoryBuilder.Build();

// Plug in custom .NET handlers
memory.Orchestrator.AddHandler<MyHandler1>("step1");
memory.Orchestrator.AddHandler<MyHandler2>("step2");
memory.Orchestrator.AddHandler<MyHandler3>("step3");

// Use the custom handlers with the memory object
await memory.ImportDocumentAsync(
    new Document("mytest001"),
    steps: new[] { "step1", "step2", "step3" });
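The idea behind custom steps can be sketched as a registry of named handlers that a document passes through in order (a toy model of the concept, not Kernel Memory's actual orchestrator API):

```csharp
using System;
using System.Collections.Generic;

// A toy orchestrator: handlers are registered under step names, and an
// input is run through the requested steps in order.
var handlers = new Dictionary<string, Func<string, string>>
{
    ["step1"] = text => text.Trim(),              // e.g. clean up extracted text
    ["step2"] = text => text.ToLowerInvariant(),  // e.g. normalize
    ["step3"] = text => $"[processed] {text}",    // e.g. annotate
};

static string RunPipeline(Dictionary<string, Func<string, string>> handlers,
                          string input, string[] steps)
{
    foreach (var step in steps) input = handlers[step](input);
    return input;
}

Console.WriteLine(RunPipeline(handlers, "  Hello World  ",
    new[] { "step1", "step2", "step3" }));
// prints "[processed] hello world"
```

In the real service each handler is durable and asynchronous, picking up work from a queue, so a failed step can be retried without re-running the whole pipeline.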

Web API specs with OpenAPI swagger

The API schema is available from the service's swagger page when running the service locally with OpenAPI enabled.

Examples and Tools


  1. Collection of Jupyter notebooks with various scenarios
  2. Using Kernel Memory web service to upload documents and answer questions
  3. Importing files and asking questions without running the service (serverless mode)
  4. Using KM Plugin for Semantic Kernel
  5. Processing files with custom logic (custom handlers) in serverless mode
  6. Processing files with custom logic (custom handlers) in asynchronous mode
  7. Upload files and ask questions from command line using curl
  8. Customizing RAG and summarization prompts
  9. Custom partitioning/text chunking options
  10. Using a custom embedding/vector generator
  11. Using custom LLMs
  12. Using LLama
  13. Summarizing documents, using synthetic memories
  14. Using Semantic Kernel LLM connectors
  15. Using custom content decoders
  16. Using a custom web scraper to fetch web pages
  17. Generating answers with Anthropic LLMs
  18. Hybrid Search with Azure AI Search
  19. Writing and using a custom ingestion handler
  20. Running a single asynchronous pipeline handler as a standalone service
  21. Test project using KM package from nuget.org
  22. Integrating Memory with ASP.NET applications and controllers
  23. Sample code showing how to extract text from files
  24. .NET configuration and logging
  25. Expanding chunks retrieving adjacent partitions
  26. Using local models via LM Studio
  27. Using Context Parameters to customize RAG prompt during a request
  28. Creating a Memory instance without KernelMemoryBuilder


  1. .NET appsettings.json generator
  2. Curl script to upload files
  3. Curl script to ask questions
  4. Curl script to search documents
  5. Script to start Qdrant for development tasks
  6. Script to start Elasticsearch for development tasks
  7. Script to start MS SQL Server for development tasks
  8. Script to start Redis for development tasks
  9. Script to start RabbitMQ for development tasks
  10. Script to start MongoDB Atlas for development tasks

.NET packages

  • Microsoft.KernelMemory.WebClient: .NET web client to call a running instance of Kernel Memory web service.

    Nuget package Example code

  • Microsoft.KernelMemory.Core: Kernel Memory core library including all extensions; it can be used to build custom pipelines and handlers, and also contains the serverless client to use memory in a synchronous way without the web service.

    Nuget package Example code

  • Microsoft.KernelMemory.Service.AspNetCore: an extension to load Kernel Memory into your ASP.NET apps.

    Nuget package Example code

  • Microsoft.KernelMemory.SemanticKernelPlugin: a Memory plugin for Semantic Kernel, replacing the original Semantic Memory available in SK.

    Nuget package Example code

Packages for Python, Java and other languages

Kernel Memory service offers a Web API out of the box, including the OpenAPI swagger documentation that you can leverage to test the API and create custom web clients. For instance, after starting the service locally, the swagger documentation is available from the running service.

A .NET Web Client and a Semantic Kernel plugin are available, see the NuGet packages above.

A Python package with a Web Client and Semantic Kernel plugin will soon be available. We also welcome PR contributions to support more languages.


Contributors:

aaronpowell, afederici75, akordowski, alexibraimov, alkampfergit, amomra, anthonypuppo, chaelli, cherchyk, coryisakson, crickman, dependabot[bot], dluc, DM-98, EelcoKoster, Foorcee, GraemeJones104, jurepurgar, kbeaugrand, koteus, KSemenenko, lecramr, luismanez, marcominerva, neel015, pascalberger, pawarsum12, pradeepr-roboticist, qihangnet, roldengarm, slapointe, slorello89, spenavajr, TaoChenOSU, teresaqhoang, v-msamovendyuk, Valkozaur, vicperdana, westdavidr, xbotter
Product Compatible and additional computed target framework versions.
.NET net8.0 is compatible.  net8.0-android was computed.  net8.0-browser was computed.  net8.0-ios was computed.  net8.0-maccatalyst was computed.  net8.0-macos was computed.  net8.0-tvos was computed.  net8.0-windows was computed. 
Compatible target framework(s)
Included target framework(s) (in package)
Learn more about Target Frameworks and .NET Standard.

NuGet packages (8)

Showing the top 5 NuGet packages that depend on Microsoft.KernelMemory.Core:

Package Downloads

OpenAI, ChatGPT


This package provides helpers to integrate Kernel Memory into ASP.NET applications, such as builders, web endpoints, and HTTP models.


MOTD as it pertains to VeeFriends.


Added some extensions for Kernel Memory.


Playwright for KernelMemory

GitHub repositories (2)

Showing the top 2 popular GitHub repositories that depend on Microsoft.KernelMemory.Core:

Repository Stars
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.
RAG architecture: index and query any data using LLM and natural language, track sources, show citations, asynchronous memory patterns.
Version Downloads Last updated
0.68.240716.1 143 7/16/2024
0.67.240712.1 574 7/12/2024
0.66.240709.1 1,049 7/9/2024
0.65.240620.1 13,535 6/21/2024
0.64.240619.1 437 6/20/2024
0.63.240618.1 819 6/18/2024
0.62.240605.1 8,886 6/5/2024
0.62.240604.1 323 6/4/2024
0.61.240524.1 7,919 5/24/2024
0.61.240519.2 7,662 5/19/2024
0.60.240517.1 178 5/18/2024
0.51.240513.2 4,447 5/13/2024
0.50.240504.7 3,116 5/4/2024
0.40.240501.1 581 5/1/2024
0.39.240427.1 4,645 4/28/2024
0.38.240425.1 968 4/25/2024
0.38.240423.1 1,001 4/24/2024
0.37.240420.2 1,523 4/21/2024
0.36.240416.1 12,316 4/16/2024
0.36.240415.2 1,218 4/16/2024
0.36.240415.1 299 4/15/2024
0.35.240412.2 1,174 4/12/2024
0.35.240321.1 14,137 3/21/2024
0.35.240318.1 10,016 3/18/2024
0.34.240313.1 7,026 3/13/2024
0.33.240312.1 507 3/12/2024
0.32.240308.1 1,628 3/8/2024
0.32.240307.3 574 3/7/2024
0.32.240307.2 368 3/7/2024
0.30.240227.1 17,292 2/28/2024
0.29.240219.2 6,015 2/20/2024
0.28.240212.1 3,193 2/13/2024
0.27.240207.1 1,293 2/7/2024
0.27.240205.2 2,971 2/6/2024
0.27.240205.1 208 2/5/2024
0.26.240121.1 12,102 1/22/2024
0.26.240116.2 2,186 1/16/2024
0.26.240115.4 406 1/16/2024
0.26.240104.1 3,247 1/5/2024
0.25.240103.1 240 1/4/2024
0.24.231228.5 1,508 12/29/2023
0.24.231228.4 124 12/29/2023
0.23.231224.1 4,919 12/24/2023
0.23.231221.1 627 12/22/2023
0.23.231219.1 2,331 12/20/2023
0.22.231217.1 161 12/18/2023
0.21.231214.1 243 12/15/2023
0.20.231212.1 641 12/13/2023
0.19.231211.1 699 12/11/2023