ElBruno.Text2Image
0.5.3
ElBruno.Text2Image
📢 This project started with FLUX.2 Flex on Microsoft Foundry, a cloud-first approach to text-to-image generation with best-in-class text rendering. After wrapping that API, we thought: "Why not bring the same developer experience to local models too?" So we did. Now you can generate images from text prompts using cloud APIs or local Stable Diffusion models with ONNX Runtime, all through the same clean .NET interface.

A .NET library for text-to-image generation, both cloud and local. Generate images from text prompts using Microsoft Foundry FLUX.2 or Stable Diffusion (ONNX Runtime) with automatic model downloads from HuggingFace. No Python needed. Just `dotnet add package` and go. 🚀
Features
- 🎨 Text-to-Image – Generate images from text prompts using Stable Diffusion or FLUX.2
- 🤖 Multiple Models – Stable Diffusion 1.5, LCM Dreamshaper, SDXL Turbo, SD 2.1, FLUX.2 (cloud)
- ⬇️ Auto-Download – ONNX models are automatically downloaded from HuggingFace on first use
- ☁️ Cloud API – FLUX.2 via Microsoft Foundry for high-quality text-heavy designs
- 🧠 ONNX Runtime – Fast, cross-platform inference (CPU, CUDA, DirectML)
- ⚡ Auto GPU Detection – Automatically uses GPU if available (CUDA → DirectML → CPU)
- 📦 NuGet Package – Simple `dotnet add package` installation
- 🎯 Multi-target – Supports .NET 8.0 and .NET 10.0
- 🔌 Microsoft.Extensions.AI – All generators implement `IImageGenerator` from Microsoft.Extensions.AI
- 🌱 Reproducible – Seed-based generation for reproducible results
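If auto-detection picks the wrong provider, it can be pinned explicitly via the `ExecutionProvider` option. A minimal sketch: `ExecutionProvider.Cpu` appears later in this README, while the `Cuda` and `DirectML` member names are assumptions based on the providers listed above.

```csharp
using ElBruno.Text2Image;
using ElBruno.Text2Image.Models;

// Auto-detection tries CUDA, then DirectML, then falls back to CPU.
// Pinning the provider skips probing and forces a specific runtime.
using var generator = new StableDiffusion15(new ImageGenerationOptions
{
    ExecutionProvider = ExecutionProvider.Cpu // or Cuda / DirectML (assumed member names)
});

var result = await generator.GenerateAsync("a red bicycle on a cobblestone street");
await result.SaveAsync("bicycle.png");
```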
Quick Start
Install
Choose the package matching your hardware:
```bash
# CPU (default - works everywhere)
dotnet add package ElBruno.Text2Image.Cpu

# NVIDIA GPU (CUDA - up to 4x faster)
dotnet add package ElBruno.Text2Image.Cuda

# DirectML (AMD/Intel/NVIDIA on Windows)
dotnet add package ElBruno.Text2Image.DirectML

# FLUX.2 cloud via Microsoft Foundry (no GPU needed)
dotnet add package ElBruno.Text2Image.Foundry
```

Note: These packages are mutually exclusive – install only ONE, following the same pattern as `Microsoft.ML.OnnxRuntime` vs `Microsoft.ML.OnnxRuntime.Gpu`.
Basic Usage โ Local (Stable Diffusion 1.5)
```csharp
using ElBruno.Text2Image;
using ElBruno.Text2Image.Models;

// Create a Stable Diffusion 1.5 generator (the model downloads automatically on first use)
using var generator = new StableDiffusion15();

// Generate an image from a text prompt
var result = await generator.GenerateAsync("a beautiful sunset over a mountain lake, digital art");

// Save the generated image
await result.SaveAsync("output.png");
Console.WriteLine($"Generated in {result.InferenceTimeMs}ms (seed: {result.Seed})");
```
Basic Usage โ Cloud (FLUX.2 via Microsoft Foundry)
```csharp
using ElBruno.Text2Image;
using ElBruno.Text2Image.Foundry;

// Create a FLUX.2 generator using Microsoft Foundry.
// The default model is FLUX.2-pro (photorealistic image generation).
using var generator = new Flux2Generator(
    endpoint: "https://your-resource.services.ai.azure.com",
    apiKey: "your-api-key",
    modelName: "FLUX.2 Pro", // display name
    modelId: "FLUX.2-pro");  // deployment/model name

// Generate an image - same interface as local models
var result = await generator.GenerateAsync("a futuristic cityscape with neon lights, cyberpunk style");
await result.SaveAsync("flux2-output.png");
```
With Custom Options
```csharp
using var generator = new StableDiffusion15();

var result = await generator.GenerateAsync("a futuristic cityscape at night, neon lights",
    new ImageGenerationOptions
    {
        NumInferenceSteps = 20,   // More steps = better quality
        GuidanceScale = 7.5,      // Higher = follows prompt more closely
        Width = 512,
        Height = 512,
        Seed = 42,                // For reproducible results
        ExecutionProvider = ExecutionProvider.Cpu
    });

await result.SaveAsync("cityscape.png");
```
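Because generation is seed-based, repeating a call with the same prompt and options reproduces the result. A minimal sketch using only the options shown above, assuming the default scheduler is deterministic for a fixed seed:

```csharp
using ElBruno.Text2Image;
using ElBruno.Text2Image.Models;

using var generator = new StableDiffusion15();
var options = new ImageGenerationOptions { Seed = 42, NumInferenceSteps = 20 };

// Same prompt + same seed + same steps should yield the same image.
var first = await generator.GenerateAsync("a lighthouse at dawn", options);
var second = await generator.GenerateAsync("a lighthouse at dawn", options);

await first.SaveAsync("run1.png");
await second.SaveAsync("run2.png"); // expected to match run1.png
```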
Microsoft.Extensions.AI Interface
All generators implement Microsoft.Extensions.AI.IImageGenerator, enabling a standard API:
```csharp
using Microsoft.Extensions.AI;
using ElBruno.Text2Image.Models;

// Any generator can be used via the M.E.AI interface
using var sd15 = new StableDiffusion15();
IImageGenerator generator = sd15;

var request = new ImageGenerationRequest("a whimsical treehouse in a fantasy forest");
var options = new ImageGenerationOptions
{
    ImageSize = new System.Drawing.Size(512, 512),
    AdditionalProperties = new AdditionalPropertiesDictionary
    {
        ["num_inference_steps"] = 15,
        ["guidance_scale"] = 7.5,
        ["seed"] = 42
    }
};

var response = await generator.GenerateAsync(request, options);
var imageBytes = response.Contents.OfType<DataContent>().First().Data.ToArray();
await File.WriteAllBytesAsync("output.png", imageBytes);
```
Custom Model Directory
```csharp
// Download and use models from a specific directory
using var generator = new StableDiffusion15(new ImageGenerationOptions
{
    ModelDirectory = @"D:\MyModels",
    NumInferenceSteps = 15
});

await generator.EnsureModelAvailableAsync();
var result = await generator.GenerateAsync("a serene lake");
await result.SaveAsync("output.png");
```
Dependency Injection
```csharp
// Local model
services.AddStableDiffusion15(options =>
{
    options.NumInferenceSteps = 20;
    options.ModelDirectory = "/path/to/models";
});

// Cloud model (requires the ElBruno.Text2Image.Foundry package)
services.AddFlux2Generator(
    endpoint: "https://your-resource.openai.azure.com",
    apiKey: "your-api-key",
    modelId: "FLUX.2-pro");

// Inject IImageGenerator anywhere
public class MyService(IImageGenerator generator)
{
    public async Task<byte[]> GenerateImage(string prompt)
    {
        var result = await generator.GenerateAsync(prompt);
        return result.ImageBytes;
    }
}
```
Supported Models
Local Models (ONNX Runtime)
| Model | Class | ONNX Source | Steps | VRAM | Status |
|---|---|---|---|---|---|
| Stable Diffusion 1.5 | `StableDiffusion15` | onnx-community/stable-diffusion-v1-5-ONNX | 15-50 | ~4 GB | ✅ Available |
| LCM Dreamshaper v7 | `LcmDreamshaperV7` | TheyCallMeHex/LCM-Dreamshaper-V7-ONNX | 2-4 | ~4 GB | ✅ Available |
| SDXL Turbo | `SdxlTurbo` | elbruno/sdxl-turbo-ONNX | 1-4 | ~8 GB | ✅ Available |
| SD 2.1 Base | `StableDiffusion21` | elbruno/stable-diffusion-2-1-ONNX | 15-50 | ~5 GB | ✅ Available |
Cloud Models (REST API)
| Model | Class | Provider | Quality | Status |
|-------|-------|----------|---------|--------|
| FLUX.2 Pro | `Flux2Generator` | Microsoft Foundry | Excellent | ✅ Default |
| FLUX.2 Flex | `Flux2Generator` | Microsoft Foundry | Excellent | ✅ Available |
See docs/model-support.md for detailed model comparison.
Samples
| Sample | Description |
|---|---|
| scenario-01-simple | Basic text-to-image generation with SD 1.5 |
| scenario-02-custom-options | Custom seeds, guidance scale, and steps |
| scenario-03-flux2-cloud | FLUX.2 cloud API via Microsoft Foundry |
| scenario-04-lcm-fast | Ultra-fast generation with LCM Dreamshaper (2-4 steps) |
| scenario-05-sd21 | Stable Diffusion 2.1 at 768×768 native resolution |
| scenario-06-model-comparison | Compare SD 1.5 vs LCM side-by-side |
| scenario-07-custom-model-directory | Download models to a custom directory |
| scenario-08-meai-interface | Use via Microsoft.Extensions.AI IImageGenerator |
| scenario-09-batch-generation | Generate multiple images from a batch of prompts |
| scenario-10-progress-reporting | Detailed download progress reporting with progress bar |
| scenario-11-gpu-diagnostics | Show CPU vs GPU provider detection and diagnostics |
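A batch run in the spirit of scenario-09 can be sketched with the same `GenerateAsync`/`SaveAsync` calls shown earlier; the prompt list and file names here are illustrative, not taken from the sample itself.

```csharp
using ElBruno.Text2Image;
using ElBruno.Text2Image.Models;

string[] prompts =
{
    "a watercolor fox in a snowy forest",
    "an isometric pixel-art coffee shop",
    "a macro photo of a dew-covered leaf"
};

// Reuse one generator so the model is loaded only once for the whole batch.
using var generator = new StableDiffusion15();
for (var i = 0; i < prompts.Length; i++)
{
    var result = await generator.GenerateAsync(prompts[i]);
    await result.SaveAsync($"batch-{i:D2}.png");
    Console.WriteLine($"batch-{i:D2}.png (seed {result.Seed}, {result.InferenceTimeMs}ms)");
}
```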
Run a Sample
```bash
cd src/samples/scenario-01-simple
dotnet run
```
Documentation
- docs/architecture.md – Package structure and pipeline diagrams
- docs/gpu-acceleration.md – GPU setup (CUDA, DirectML, auto-detection)
- docs/flux2-setup-guide.md – Microsoft Foundry FLUX.2 setup
- docs/model-support.md – Detailed model comparison
- docs/onnx-conversion-guide.md – Step-by-step ONNX conversion guide
- docs/publishing.md – NuGet publishing guide (Trusted Publishing / OIDC)
- docs/security.md – Security considerations and hardening
- scripts/ – Python conversion and upload scripts
👋 About the Author
Hi! I'm ElBruno 🧡, a passionate developer and content creator exploring AI, .NET, and modern development practices.
Made with ❤️ by ElBruno
If you like this project, consider following my work across platforms:
- 💻 Podcast: No Tienen Nombre – Spanish-language episodes on AI, development, and tech culture
- 💻 Blog: ElBruno.com – Deep dives on embeddings, RAG, .NET, and local AI
- 📺 YouTube: youtube.com/elbruno – Demos, tutorials, and live coding
- 🔗 LinkedIn: @elbruno – Professional updates and insights
- 🐦 Twitter: @elbruno – Quick tips, releases, and tech news
License
This project is licensed under the MIT License - see the LICENSE file for details.
Compatible Frameworks
| Product | Compatible and additional computed target framework versions |
|---|---|
| .NET | net8.0 is compatible. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. net9.0 was computed. net9.0-android was computed. net9.0-browser was computed. net9.0-ios was computed. net9.0-maccatalyst was computed. net9.0-macos was computed. net9.0-tvos was computed. net9.0-windows was computed. net10.0 is compatible. net10.0-android was computed. net10.0-browser was computed. net10.0-ios was computed. net10.0-maccatalyst was computed. net10.0-macos was computed. net10.0-tvos was computed. net10.0-windows was computed. |
Dependencies

net10.0
- ElBruno.HuggingFace.Downloader (>= 0.5.0)
- Microsoft.Extensions.AI.Abstractions (>= 10.3.0)
- Microsoft.Extensions.DependencyInjection.Abstractions (>= 9.0.13)
- Microsoft.ML.OnnxRuntime.Managed (>= 1.24.1)
- SixLabors.ImageSharp (>= 3.1.12)

net8.0
- ElBruno.HuggingFace.Downloader (>= 0.5.0)
- Microsoft.Extensions.AI.Abstractions (>= 10.3.0)
- Microsoft.Extensions.DependencyInjection.Abstractions (>= 9.0.13)
- Microsoft.ML.OnnxRuntime.Managed (>= 1.24.1)
- SixLabors.ImageSharp (>= 3.1.12)
NuGet packages (4)
Showing the top 4 NuGet packages that depend on ElBruno.Text2Image:

| Package | Description |
|---|---|
| ElBruno.Text2Image.Foundry | Microsoft Foundry cloud image generation for ElBruno.Text2Image. Provides FLUX.2 text-to-image generation via the Microsoft Foundry REST API. No local models needed – runs in the cloud. |
| ElBruno.Text2Image.DirectML | DirectML GPU acceleration for ElBruno.Text2Image. Install this package instead of ElBruno.Text2Image to enable DirectML-based inference for Stable Diffusion image generation on AMD, Intel, and NVIDIA GPUs (Windows only). |
| ElBruno.Text2Image.Cuda | NVIDIA CUDA GPU acceleration for ElBruno.Text2Image. Install this package instead of ElBruno.Text2Image to enable CUDA-based inference for Stable Diffusion image generation on NVIDIA GPUs. |
| ElBruno.Text2Image.Cpu | CPU runtime for ElBruno.Text2Image. Includes all text-to-image generation functionality with CPU-based inference via ONNX Runtime. Install this for environments without a GPU. |
GitHub repositories
This package is not used by any popular GitHub repositories.
Version History
| Version | Downloads | Last Updated |
|---|---|---|
| 0.5.3 | 0 | 2/27/2026 |
| 0.5.2-preview | 31 | 2/26/2026 |