ElBruno.LocalLLMs 0.1.8

ElBruno.LocalLLMs


Run local LLMs in .NET through IChatClient, the same interface you'd use for Azure OpenAI, Ollama, or any other provider. Powered by ONNX Runtime GenAI.

Features

  • 🔌 IChatClient implementation: seamless integration with Microsoft.Extensions.AI
  • 📦 Automatic model download: models are fetched from HuggingFace on first use
  • 🚀 Zero friction: works out of the box with sensible defaults (Phi-3.5 mini)
  • 🖥️ Multi-hardware: CPU, CUDA, and DirectML execution providers
  • 💉 DI-friendly: register with AddLocalLLMs() in ASP.NET Core
  • 🔄 Streaming: token-by-token streaming via GetStreamingResponseAsync
  • 📊 Multi-model: switch between Phi-3.5, Phi-4, Qwen2.5, Llama 3.2, and more

Installation

dotnet add package ElBruno.LocalLLMs

This works everywhere (CPU). To enable GPU acceleration, add one extra package:

# 🟢 NVIDIA GPU (CUDA):
dotnet add package Microsoft.ML.OnnxRuntimeGenAI.Cuda

# 🔵 Any Windows GPU (AMD, Intel, NVIDIA) via DirectML:
dotnet add package Microsoft.ML.OnnxRuntimeGenAI.DirectML

🚀 The library defaults to ExecutionProvider.Auto: it tries GPU first and falls back to CPU automatically. No code changes needed.
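If you prefer to pin a provider instead of relying on Auto, the options object used elsewhere in this README can carry it. A minimal sketch; only ExecutionProvider.Auto and ExecutionProvider.DirectML are named in this document, so no other enum members are assumed:

```csharp
using ElBruno.LocalLLMs;

// Sketch: pin the execution provider explicitly via LocalLLMsOptions
// instead of letting ExecutionProvider.Auto pick GPU-then-CPU.
using var client = await LocalChatClient.CreateAsync(new LocalLLMsOptions
{
    ExecutionProvider = ExecutionProvider.DirectML
});
```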

Quick Start

using ElBruno.LocalLLMs;
using Microsoft.Extensions.AI;

// Create a local chat client (downloads Phi-3.5 mini on first run)
using var client = await LocalChatClient.CreateAsync();

var response = await client.GetResponseAsync([
    new(ChatRole.User, "What is the capital of France?")
]);

Console.WriteLine(response.Text);

Streaming

using ElBruno.LocalLLMs;
using Microsoft.Extensions.AI;

using var client = await LocalChatClient.CreateAsync(new LocalLLMsOptions
{
    Model = KnownModels.Phi35MiniInstruct
});

await foreach (var update in client.GetStreamingResponseAsync([
    new(ChatRole.System, "You are a helpful assistant."),
    new(ChatRole.User, "Explain quantum computing in simple terms.")
]))
{
    Console.Write(update.Text);
}

Dependency Injection

builder.Services.AddLocalLLMs(options =>
{
    options.Model = KnownModels.Phi35MiniInstruct;
    options.ExecutionProvider = ExecutionProvider.DirectML;
});

// Inject IChatClient anywhere
public class MyService(IChatClient chatClient) { ... }
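As an illustration of consuming the registered client, a minimal ASP.NET Core endpoint might look like the sketch below. The /chat route and prompt parameter are hypothetical; AddLocalLLMs, KnownModels.Phi35MiniInstruct, and GetResponseAsync are taken from this README.

```csharp
using ElBruno.LocalLLMs;
using Microsoft.Extensions.AI;

var builder = WebApplication.CreateBuilder(args);

// Register the local chat client as shown above.
builder.Services.AddLocalLLMs(options =>
{
    options.Model = KnownModels.Phi35MiniInstruct;
});

var app = builder.Build();

// Hypothetical endpoint: route and query parameter are illustrative only.
app.MapGet("/chat", async (IChatClient chatClient, string prompt) =>
{
    var response = await chatClient.GetResponseAsync([
        new(ChatRole.User, prompt)
    ]);
    return Results.Text(response.Text);
});

app.Run();
```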

Supported Models

| Tier | Model | Parameters | ONNX | ID |
|------|-------|------------|------|----|
| ⚪ Tiny | TinyLlama-1.1B-Chat | 1.1B | ✅ Native | tinyllama-1.1b-chat |
| ⚪ Tiny | SmolLM2-1.7B-Instruct | 1.7B | ✅ Native | smollm2-1.7b-instruct |
| ⚪ Tiny | Qwen2.5-0.5B-Instruct | 0.5B | ✅ Native | qwen2.5-0.5b-instruct |
| ⚪ Tiny | Qwen2.5-1.5B-Instruct | 1.5B | ✅ Native | qwen2.5-1.5b-instruct |
| ⚪ Tiny | Gemma-2B-IT | 2B | ✅ Native | gemma-2b-it |
| ⚪ Tiny | StableLM-2-1.6B-Chat | 1.6B | 🔄 Convert | stablelm-2-1.6b-chat |
| 🟢 Small | Phi-3.5 mini instruct | 3.8B | ✅ Native | phi-3.5-mini-instruct |
| 🟢 Small | Qwen2.5-3B-Instruct | 3B | ✅ Native | qwen2.5-3b-instruct |
| 🟢 Small | Llama-3.2-3B-Instruct | 3B | ✅ Native | llama-3.2-3b-instruct |
| 🟢 Small | Gemma-2-2B-IT | 2B | ✅ Native | gemma-2-2b-it |
| 🟡 Medium | Qwen2.5-7B-Instruct | 7B | ✅ Native | qwen2.5-7b-instruct |
| 🟡 Medium | Llama-3.1-8B-Instruct | 8B | ✅ Native | llama-3.1-8b-instruct |
| 🟡 Medium | Mistral-7B-Instruct-v0.3 | 7B | ✅ Native | mistral-7b-instruct-v0.3 |
| 🟡 Medium | Gemma-2-9B-IT | 9B | ✅ Native | gemma-2-9b-it |
| 🟡 Medium | Phi-4 | 14B | ✅ Native | phi-4 |
| 🟡 Medium | DeepSeek-R1-Distill-Qwen-14B | 14B | ✅ Native | deepseek-r1-distill-qwen-14b |
| 🟡 Medium | Mistral-Small-24B-Instruct | 24B | ✅ Native | mistral-small-24b-instruct |
| 🔴 Large | Qwen2.5-14B-Instruct | 14B | ✅ Native | qwen2.5-14b-instruct |
| 🔴 Large | Qwen2.5-32B-Instruct | 32B | ✅ Native | qwen2.5-32b-instruct |
| 🔴 Large | Llama-3.3-70B-Instruct | 70B | ✅ Native | llama-3.3-70b-instruct |
| 🔴 Large | Mixtral-8x7B-Instruct-v0.1 | 8x7B | 🔄 Convert | mixtral-8x7b-instruct-v0.1 |
| 🔴 Large | DeepSeek-R1-Distill-Llama-70B | 70B | 🔄 Convert | deepseek-r1-distill-llama-70b |
| 🔴 Large | Command-R (35B) | 35B | 🔄 Convert | command-r-35b |

See the Supported Models Guide for detailed model cards, performance benchmarks, and selection guidance.
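Switching between the models above is a matter of passing a different KnownModels value when creating the client. A sketch: KnownModels.Phi35MiniInstruct appears earlier in this README, while KnownModels.Qwen25_3BInstruct is an assumed member name used here purely for illustration.

```csharp
using ElBruno.LocalLLMs;
using Microsoft.Extensions.AI;

// Sketch: run the same prompt against two models and compare the answers.
// KnownModels.Qwen25_3BInstruct is a hypothetical member name.
foreach (var model in new[] { KnownModels.Phi35MiniInstruct, KnownModels.Qwen25_3BInstruct })
{
    using var client = await LocalChatClient.CreateAsync(new LocalLLMsOptions
    {
        Model = model
    });

    var response = await client.GetResponseAsync([
        new(ChatRole.User, "Summarize ONNX Runtime in one sentence.")
    ]);
    Console.WriteLine(response.Text);
}
```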

Samples

| Sample | Description |
|--------|-------------|
| HelloChat | Minimal console chat |
| StreamingChat | Token-by-token streaming |
| MultiModelChat | Switch models at runtime |
| DependencyInjection | ASP.NET Core DI registration |

Requirements

  • .NET 8.0 or .NET 10.0
  • CPU (default), NVIDIA GPU (CUDA), or Windows GPU (DirectML)
  • ~2-8 GB disk space per model (depending on size and quantization)

Documentation

🤝 Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License; see the LICENSE file for details.

👋 About the Author

Hi! I'm ElBruno 🧡, a passionate developer and content creator exploring AI, .NET, and modern development practices.

Made with ❤️ by ElBruno

If you like this project, consider following my work across platforms:

  • 📻 Podcast: No Tienen Nombre - Spanish-language episodes on AI, development, and tech culture
  • 💻 Blog: ElBruno.com - deep dives on embeddings, RAG, .NET, and local AI
  • 📺 YouTube: youtube.com/elbruno - demos, tutorials, and live coding
  • 🔗 LinkedIn: @elbruno - professional updates and insights
  • 𝕏 Twitter: @elbruno - quick tips, releases, and tech news