LM-Kit.NET 2026.1.2



On-Device AI Agent Platform for .NET Developers

Your AI. Your Data. On Your Device.

LM-Kit.NET is a unique full-stack AI framework for .NET that unifies everything you need to build and deploy AI agents with zero cloud dependency and zero external dependencies. It combines the fastest .NET inference engine, production-ready trained models, agent orchestration, RAG pipelines, and MCP-compatible tool calling in a single in-process SDK for C# and VB.NET, making LM-Kit.NET a category of one in the .NET ecosystem.

🔒 100% Local    ⚡ No Signup    🌐 Cross-Platform


Why LM-Kit.NET

A complete AI stack with no moving parts. LM-Kit.NET integrates inference, models, orchestration, and RAG into your .NET application as a single NuGet package. No Python runtimes, no containers, no external services, no dependencies to manage. Everything runs in-process.

Built by experts, updated continuously. Our team ships the latest advances in generative AI, symbolic AI, and NLP research directly into the SDK. Check our changelog to see the pace of innovation.

Not every problem requires a massive LLM. Dedicated task agents deliver faster execution, lower costs, and higher accuracy for specific workflows.

  • Complete data sovereignty - sensitive information stays within your infrastructure
  • Zero network latency - responses as fast as your hardware allows
  • No per-token costs - unlimited inference once deployed
  • Offline operation - works without internet connectivity
  • Regulatory compliance - meets GDPR, HIPAA, and data residency requirements by design

What You Can Build

  • Autonomous AI agents that reason, plan, and execute multi-step tasks using your application's tools and APIs
  • RAG-powered knowledge assistants over local documents, databases, and enterprise data sources
  • PDF chat and document Q&A with retrieval, reranking, and grounded generation
  • Multi-agent workflows that orchestrate specialized task agents for complex business processes
  • Voice-driven assistants with speech-to-text, reasoning, and function calling
  • OCR and extraction pipelines for invoices, forms, IDs, emails, and scanned documents
  • Compliance-focused text intelligence - PII extraction, NER, classification, sentiment analysis

Core Capabilities

LM-Kit.NET delivers a complete AI stack: the fastest .NET inference engine, domain-tuned models that solve real-world problems out of the box, and a comprehensive orchestration layer for building agents and RAG applications.

🤖 AI Agents and Orchestration

Build autonomous AI agents that reason, plan, and execute complex workflows within your applications.

  • Task Agents - Reusable specialists designed for specific tasks with high speed and accuracy
  • Agent Orchestration - Compose multi-agent workflows with RAG, tools, and APIs under strict control
  • Function Calling - Let models dynamically invoke your application's methods with structured parameters
  • Tool Registry - Define and manage collections of tools agents can use
  • MCP Client Support - Connect to Model Context Protocol servers for extended capabilities including resources, prompts, and tool discovery
  • Agent Memory - Persistent memory that survives across conversation sessions
  • Reasoning Control - Adjust reasoning depth for models that support extended thinking
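
As an illustration, function calling lets the model trigger your own C# methods with structured parameters. The sketch below is hypothetical: the `ToolRegistry` type, the `Register` signature, the `Tools` property, and `OrderService` are illustrative names rather than confirmed LM-Kit.NET API; only `LM`, `MultiTurnConversation`, and `SubmitAsync` come from the Getting Started snippet, so consult the SDK documentation for the actual classes.

```csharp
using LMKit.Model;
using LMKit.TextGeneration;

var model = new LM("path/to/model.gguf");
var conversation = new MultiTurnConversation(model);

// Hypothetical tool registration: expose an application method so the
// model can invoke it with structured parameters when it decides to.
var tools = new ToolRegistry();                       // illustrative type name
tools.Register(
    name: "get_order_status",
    description: "Returns the shipping status for an order id.",
    handler: (string orderId) => OrderService.GetStatus(orderId)); // your method

conversation.Tools = tools;                           // illustrative property

// While answering, the model may call get_order_status and ground its
// reply in the returned value.
var response = await conversation.SubmitAsync("Where is order #1042?");
Console.WriteLine(response);
```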

🔍 Multimodal Intelligence

Process and understand content across text, images, documents, and audio.

  • Vision Language Models (VLM) - Analyze images, extract information, answer questions about visual content
  • VLM-Based OCR - High-accuracy text extraction from images and scanned content
  • Speech-to-Text - Transcribe audio with voice activity detection and multi-language support
  • Document Processing - Native support for PDF, DOCX, XLSX, PPTX, HTML, and image formats
  • Image Embeddings - Generate semantic representations of images for similarity search

📚 Retrieval-Augmented Generation (RAG)

Ground AI responses in your organization's knowledge with a flexible, extensible RAG framework.

  • Modular RAG Architecture - Use built-in pipelines or implement custom retrieval strategies
  • Built-in Vector Database - Store and search embeddings without external dependencies
  • PDF Chat and Document RAG - Chat and retrieve over documents with dedicated workflows
  • Multimodal RAG - Retrieve relevant content from both text and images
  • Advanced Chunking - Markdown-aware, semantic, and layout-based chunking strategies
  • Reranking - Improve retrieval precision with semantic reranking
  • External Vector Store Integration - Connect to Qdrant and other vector databases
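
A minimal retrieval flow might look like the sketch below. The `RagEngine`, `ImportText`, and `FindMatchingPartitions` names are assumptions for illustration, not confirmed API; the overall shape (index into the built-in vector store, retrieve top chunks, generate a grounded answer) follows the pipeline described above.

```csharp
using System.IO;
using System.Linq;
using LMKit.Model;
using LMKit.TextGeneration;

var embedder  = new LM("path/to/embedding-model.gguf");
var generator = new LM("path/to/chat-model.gguf");

// Hypothetical RAG types: index content into the built-in vector store.
var rag = new RagEngine(embedder);                                 // illustrative
rag.ImportText(File.ReadAllText("handbook.md"), sectionId: "handbook");

// Retrieve the best-matching chunks for the user question...
var matches = rag.FindMatchingPartitions("What is the refund policy?", topK: 3); // illustrative

// ...then ground the generation in the retrieved text.
var context = string.Join("\n", matches.Select(m => m.Text));
var chat = new MultiTurnConversation(generator);
var answer = await chat.SubmitAsync(
    $"Answer using only this context:\n{context}\n\nQuestion: What is the refund policy?");
Console.WriteLine(answer);
```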

📊 Structured Data Extraction

Transform unstructured content into structured, actionable data.

  • Schema-Based Extraction - Define extraction targets using JSON schemas or custom elements
  • Named Entity Recognition (NER) - Extract people, organizations, locations, and custom entity types
  • PII Detection - Identify and classify personal identifiers for privacy compliance
  • Multimodal Extraction - Extract structured data from images and documents
  • Layout-Aware Processing - Detect paragraphs and lines, support region-based workflows
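
Schema-based extraction typically pairs a set of field definitions with a document. The `TextExtraction` and `TextExtractionElement` names below are illustrative assumptions, not confirmed LM-Kit.NET API; the point is the shape of the workflow: declare target fields, parse unstructured input, read back structured values.

```csharp
using System.IO;
using LMKit.Model;

var model = new LM("path/to/model.gguf");

// Hypothetical extraction API: declare the fields you want back.
var extraction = new TextExtraction(model)               // illustrative type
{
    Elements =
    {
        new TextExtractionElement("InvoiceNumber", "The invoice identifier"),
        new TextExtractionElement("TotalAmount",   "Total due, including tax"),
        new TextExtractionElement("DueDate",       "Payment due date"),
    }
};

// Run against raw text (OCR output, email body, etc.) and read the result.
var result = extraction.Parse(File.ReadAllText("invoice.txt")); // illustrative method
Console.WriteLine(result.Json); // structured fields, e.g. {"InvoiceNumber": ...}
```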

💡 Content Intelligence

Analyze and understand text and visual content.

  • Sentiment and Emotion Analysis - Detect emotional tone from text and images
  • Custom Classification - Categorize text and images into your defined classes
  • Keyword Extraction - Identify key terms and phrases
  • Language Detection - Identify languages from text, images, or audio
  • Summarization - Condense long content with configurable strategies
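
For example, sentiment analysis reduces to instantiating a classifier over a loaded model. `SentimentAnalysis` and its members here are illustrative names only; check the SDK reference for the real class and namespace.

```csharp
using LMKit.Model;

var model = new LM("path/to/model.gguf");

// Hypothetical classifier type; the real class/namespace may differ.
var sentiment = new SentimentAnalysis(model);            // illustrative
var category  = sentiment.GetSentimentCategory(
    "The new dashboard is fantastic, but setup took far too long.");

Console.WriteLine(category); // a sentiment label such as Positive or Negative
```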

✍️ Text Generation and Transformation

Generate and refine content with precise control.

  • Conversational AI - Build context-aware chatbots with multi-turn memory
  • Constrained Generation - Guide model outputs using JSON schemas, templates, or custom grammar rules
  • Translation - Convert text between languages with confidence scoring
  • Text Enhancement - Improve clarity, fix grammar, adapt tone
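
Constrained generation forces the model's output to match a schema, which makes responses machine-parseable without brittle post-processing. The `Grammar` property and `Grammar.FromJsonSchema` factory below are illustrative assumptions about how such a constraint might be attached.

```csharp
using LMKit.Model;
using LMKit.TextGeneration;

var model = new LM("path/to/model.gguf");
var conversation = new MultiTurnConversation(model);

// Hypothetical schema constraint: the output must be valid JSON matching
// this shape, so it can be deserialized directly.
conversation.Grammar = Grammar.FromJsonSchema("""
{
  "type": "object",
  "properties": {
    "city":  { "type": "string" },
    "tempC": { "type": "number" }
  },
  "required": ["city", "tempC"]
}
"""); // illustrative API

var json = await conversation.SubmitAsync("Extract: 'It was 31°C in Madrid today.'");
Console.WriteLine(json); // JSON conforming to the schema above
```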

🛠️ Model Customization

Tailor models to your specific domain.

  • Fine-Tuning - Train models on your data with LoRA support
  • Dynamic LoRA Loading - Switch adapters at runtime without reloading base models
  • Quantization - Optimize models for your deployment constraints
  • Training Dataset Tools - Prepare and export datasets in standard formats
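
A LoRA workflow usually means training a small adapter on domain data and attaching it at runtime. `LoraFinetuning`, `LoadTrainingData`, `Train`, and `ApplyLoraAdapter` below are illustrative names only, sketched to show the two-phase shape described above.

```csharp
using LMKit.Model;

// Hypothetical fine-tuning API: train a LoRA adapter on your dataset...
var baseModel = new LM("path/to/base-model.gguf");
var trainer = new LoraFinetuning(baseModel);             // illustrative
trainer.LoadTrainingData("dataset.jsonl");               // illustrative
trainer.Train(outputAdapterPath: "domain-adapter.lora");

// ...then swap adapters at runtime without reloading the base model.
baseModel.ApplyLoraAdapter("domain-adapter.lora");       // illustrative
```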

Supported Models

LM-Kit.NET includes domain-tuned models optimized for real-world tasks, plus broad compatibility with models from leading providers:

Text Models: LLaMA, Mistral, Mixtral, Qwen, Phi, Gemma, Granite, DeepSeek, Falcon, and more

Vision Models: Qwen-VL, MiniCPM-V, Pixtral, Gemma Vision, LightOnOCR

Embedding Models: BGE, Nomic, Qwen Embedding, Gemma Embedding

Speech Models: Whisper (all sizes), with voice activity detection

Browse production-ready models in the Model Catalog, or load models directly from any Hugging Face repository.
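
Loading from Hugging Face plausibly reduces to passing a URI instead of a local path. The URI-accepting `LM` constructor shown here is an assumption, and `<org>/<repo>` is a placeholder, not a real repository.

```csharp
using System;
using LMKit.Model;

// Local file (as in Getting Started):
var local = new LM("path/to/model.gguf");

// Hypothetical remote load: a GGUF file from a Hugging Face repository,
// downloaded and cached on first use. <org>/<repo> is a placeholder.
var remote = new LM(new Uri(
    "https://huggingface.co/<org>/<repo>/resolve/main/model.gguf")); // illustrative overload
```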


Performance and Hardware

The Fastest .NET Inference Engine

LM-Kit.NET automatically leverages the best available acceleration on any hardware:

  • NVIDIA GPUs - CUDA backends with optimized kernels
  • AMD/Intel GPUs - Vulkan backend for cross-vendor GPU support
  • Apple Silicon - Metal acceleration for M-series chips
  • Multi-GPU - Distribute models across multiple GPUs
  • CPU Fallback - Optimized CPU inference when no GPU is available

Dual Backend Architecture

Choose the optimal inference engine for your use case:

  • llama.cpp Backend - Broad model compatibility, memory efficiency
  • ONNX Runtime - Optimized inference for supported model formats

Observability

Gain full visibility into AI operations with comprehensive instrumentation:

  • OpenTelemetry Integration - GenAI semantic conventions for distributed tracing and metrics
  • Inference Metrics - Token counts, processing rates, generation speeds, context utilization, perplexity scores, and sampling statistics
  • Event Callbacks - Fine-grained hooks for token sampling, tool invocations, and generation lifecycle
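
Event callbacks let you observe generation token by token rather than waiting for the full response. The `AfterTextCompletion` event and its argument names below are illustrative assumptions about the hook shape.

```csharp
using LMKit.Model;
using LMKit.TextGeneration;

var model = new LM("path/to/model.gguf");
var conversation = new MultiTurnConversation(model);

// Hypothetical streaming hook: print each text fragment as the model
// produces it, instead of waiting for the complete answer.
conversation.AfterTextCompletion += (sender, e) =>      // illustrative event
    Console.Write(e.Text);

await conversation.SubmitAsync("Summarize the benefits of on-device inference.");
```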

Platform Support

Operating Systems

  • Windows - Windows 7 through Windows 11
  • macOS - macOS 11+ (Intel and Apple Silicon)
  • Linux - glibc 2.27+ (x64 and ARM64)

.NET Frameworks

Compatible from .NET Framework 4.6.2 through the latest .NET releases, with optimized binaries for each version.


Integration

Zero Dependencies

LM-Kit.NET ships as a single NuGet package with absolutely no external dependencies:

dotnet add package LM-Kit.NET

No Python runtime. No containers. No external services. No native libraries to manage separately. The entire AI stack runs in-process within your .NET application, making deployment as simple as any other NuGet package.

Ecosystem Connections

  • Semantic Kernel - Use LM-Kit.NET as a backend for Microsoft Semantic Kernel
  • Vector Databases - Integrate with Qdrant via open-source connectors
  • MCP Servers - Connect to Model Context Protocol servers for extended tool access

Data Privacy and Security

Running inference locally provides inherent security advantages:

  • No data transmission - Content never leaves your network
  • No third-party access - No external services process your data
  • Audit-friendly - Complete visibility into AI operations
  • Air-gapped deployment - Works in disconnected environments

This architecture simplifies compliance with GDPR, HIPAA, SOC 2, and other regulatory frameworks.


Getting Started

using LMKit;
using LMKit.Model;

// Load a model
var model = new LM("path/to/model.gguf");

// Create a conversation
var conversation = new MultiTurnConversation(model);

// Chat
var response = await conversation.SubmitAsync("Explain quantum computing briefly.");
Console.WriteLine(response);
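
Because MultiTurnConversation retains the chat history, a follow-up prompt can build on the previous answer. Continuing the snippet above:

```csharp
// The conversation object keeps prior turns in context, so "it" resolves
// to quantum computing from the previous exchange.
var followUp = await conversation.SubmitAsync("Now explain it to a five-year-old.");
Console.WriteLine(followUp);
```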


Target Framework Compatibility

Compatible and additional computed target framework versions, from the package metadata:

  • .NET - net8.0, net9.0, and net10.0 are compatible; net5.0 through net7.0 and platform-specific targets (android, ios, maccatalyst, macos, tvos, browser, windows) are computed
  • .NET Standard - netstandard2.0 is compatible; netstandard2.1 is computed
  • .NET Core - netcoreapp2.0 through netcoreapp3.1 are computed
  • .NET Framework - net461 through net481 are computed
  • Xamarin / Mono / Tizen - xamarinios, xamarinmac, xamarintvos, xamarinwatchos, monoandroid, monomac, monotouch, tizen40, and tizen60 are computed

Dependent NuGet Packages

Two NuGet packages depend on LM-Kit.NET:

  • LM-Kit.NET.Data.Connectors.Qdrant - an integration bridge between LM-Kit.NET and Qdrant vector databases
  • LM-Kit.NET.SemanticKernel - an integration bridge between LM-Kit.NET and Microsoft Semantic Kernel

