Venice AI .NET SDK
The unofficial .NET SDK for the Venice AI API, providing easy access to advanced AI models for chat completions, image generation, text-to-speech, embeddings, and more.
Status: Beta - under active development with continuous improvements and new features.
Features
- Chat Completions - Text generation with streaming support
- Image Generation - Create, edit, and upscale images from text descriptions
- Text-to-Speech - Convert text to natural-sounding speech with multiple voices
- Embeddings - Generate text embeddings for semantic search and analysis
- Model Management - List and manage available models with type filtering and validation
- Billing Information - Track API usage and costs
- Vision Support - Analyze and understand images with multimodal models
- Function Calling - Execute functions based on natural language requests
- Streaming Support - Real-time streaming for chat, audio, and other responses
- Type Safety - Comprehensive enum system with validation for models and parameters
- Async/Await - Full async support throughout the SDK
- Dependency Injection - Built-in support for .NET DI container
- HttpClient Separation - Complete isolation from your application's HttpClients
Key Principles
🔐 SDK Manages Venice AI Specifics: The SDK automatically handles API endpoints, authentication, and Venice AI-specific configurations. You don't need to configure these manually.
🔗 Complete HttpClient Separation: Your application's HttpClients and the Venice AI HttpClient are completely isolated - no configuration conflicts.
⚙️ Configure What Matters: Focus on your application needs (timeouts, custom headers) while the SDK handles Venice AI requirements.
Installation
dotnet add package VeniceAI.SDK
Setup
API Key Configuration
Set your Venice AI API key using one of these methods:
User Secrets (recommended for development):
dotnet user-secrets set "VeniceAI:ApiKey" "your-api-key-here"
Environment Variable:
# Windows
set VeniceAI__ApiKey=your-api-key-here
# Linux/Mac
export VeniceAI__ApiKey=your-api-key-here
Configuration File:
{
"VeniceAI": {
"ApiKey": "your-api-key-here"
}
}
⚠️ Important: Never commit your API key to source control.
Quick Start
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using VeniceAI.SDK;
using VeniceAI.SDK.Extensions;
using VeniceAI.SDK.Models.Chat;
var host = Host.CreateDefaultBuilder(args)
.ConfigureServices((context, services) =>
{
services.AddVeniceAI(context.Configuration);
})
.Build();
var client = host.Services.GetRequiredService<IVeniceAIClient>();
var request = new ChatCompletionRequest
{
Model = "llama-3.3-70b",
Messages = new List<ChatMessage>
{
new UserMessage("Hello! How are you?")
},
MaxTokens = 100
};
var response = await client.Chat.CreateChatCompletionAsync(request);
Console.WriteLine(response.Choices[0].Message.Content);
Getting Started
Running the Quickstart Sample
Try the comprehensive quickstart example:
cd samples/VeniceAI.SDK.Quickstart
dotnet user-secrets set "VeniceAI:ApiKey" "your-api-key-here"
dotnet run
The quickstart demonstrates:
- Setting up the Venice AI client with dependency injection
- Listing available models and their capabilities
- Creating basic chat completions
- Streaming chat responses in real-time
- Getting detailed model information
- Proper error handling
HttpClient Configuration & Separation
The Venice AI SDK provides multiple options for HttpClient configuration to ensure complete separation from your application's other HttpClient instances. The SDK automatically manages the Venice AI API endpoint (https://api.venice.ai/api/v1/) - you cannot and should not configure the base URL.
Key Configuration Principles
✅ Only API Key Required: The only setting you need to configure is your API key
✅ SDK Handles Everything Else: Endpoints, authentication, and Venice AI-specific settings are managed internally
❌ No Base URL Override: The Venice AI endpoint is fixed and cannot be changed
Configuration Options
The SDK accepts only these user-configurable options:
Option | Description | Default | Required |
---|---|---|---|
ApiKey | Your Venice AI API key | - | ✅ Yes |
All other settings (endpoints, authentication, retry logic) are managed internally by the SDK; optional HttpClient settings such as timeouts and custom headers can still be adjusted (see pattern 4 below).
✅ Recommended Usage Patterns
1. Basic Setup (Recommended for Most Cases)
services.AddVeniceAI("your-api-key");
Benefits:
- Automatic HttpClient separation via named client
- SDK manages all Venice AI-specific configuration
- No interference with your other HttpClients
- Only requires your API key
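For example, once registered, IVeniceAIClient can be injected into your own services like any other dependency. A minimal sketch (GreetingService is an illustrative class, not part of the SDK; only IVeniceAIClient and the request/message types shown elsewhere in this README come from it):
using VeniceAI.SDK;
using VeniceAI.SDK.Models.Chat;
// Illustrative consumer - GreetingService is hypothetical; IVeniceAIClient is registered by AddVeniceAI
public class GreetingService
{
    private readonly IVeniceAIClient _client;
    public GreetingService(IVeniceAIClient client) => _client = client;
    public async Task<string> GreetAsync(string name)
    {
        var response = await _client.Chat.CreateChatCompletionAsync(new ChatCompletionRequest
        {
            Model = "llama-3.3-70b",
            Messages = new List<ChatMessage> { new UserMessage($"Write a one-line greeting for {name}.") },
            MaxTokens = 50
        });
        return response.Choices[0].Message.Content;
    }
}
// Register alongside the SDK: services.AddScoped<GreetingService>();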
2. Configuration File Setup
// appsettings.json
{
"VeniceAI": {
"ApiKey": "your-api-key-here"
}
}
// Startup/Program.cs
services.AddVeniceAI(context.Configuration);
3. Multiple Services with Complete Separation
// Your application's API service
services.AddHttpClient("MyApiClient", client =>
{
client.BaseAddress = new Uri("https://api.myservice.com/");
client.Timeout = TimeSpan.FromSeconds(30);
client.DefaultRequestHeaders.Add("User-Agent", "MyApp/1.0");
});
// Venice AI service - completely separate and automatic
services.AddVeniceAI("your-api-key");
// SDK automatically configures: BaseAddress, Authorization, Timeout, etc.
4. Custom HttpClient Configuration (Advanced)
services.AddVeniceAI("your-api-key", httpClient =>
{
httpClient.Timeout = TimeSpan.FromMinutes(10); // Custom timeout
httpClient.DefaultRequestHeaders.Add("User-Agent", "MyApp/1.0");
// SDK automatically sets BaseAddress and Authorization
});
Note: The SDK automatically handles all Venice AI-specific configuration; you only add optional settings such as timeouts or custom headers when your application needs them.
Usage Examples
Chat Completions
// Basic chat
var chatRequest = new ChatCompletionRequest
{
Model = "llama-3.3-70b",
Messages = new List<ChatMessage>
{
new SystemMessage("You are a helpful assistant."),
new UserMessage("What is the capital of France?")
},
MaxTokens = 150,
Temperature = 0.7
};
var response = await client.Chat.CreateChatCompletionAsync(chatRequest);
Console.WriteLine(response.Choices[0].Message.Content);
// Streaming chat
await foreach (var chunk in client.Chat.CreateChatCompletionStreamAsync(chatRequest))
{
if (chunk.IsSuccess && chunk.Choices?.Any() == true)
{
Console.Write(chunk.Choices[0].Message.Content);
}
}
Vision (Image Understanding)
// Analyze an image with vision models
var visionRequest = new ChatCompletionRequest
{
Model = "mistral-31-24b", // Vision-enabled model
Messages = new List<ChatMessage>
{
new UserMessage(new List<MessageContent>
{
new MessageContent
{
Type = "text",
Text = "What do you see in this image? Describe it in detail."
},
new MessageContent
{
Type = "image_url",
ImageUrl = new ImageUrl
{
Url = "https://example.com/image.jpg"
}
}
})
},
MaxTokens = 200
};
var response = await client.Chat.CreateChatCompletionAsync(visionRequest);
Console.WriteLine($"Vision analysis: {response.Choices[0].Message.Content}");
Image Generation
// Basic image generation
var imageRequest = new GenerateImageRequest
{
Model = "flux-dev",
Prompt = "A beautiful sunset over mountains",
Width = 1024,
Height = 1024,
Steps = 25,
CfgScale = 7.5,
Format = "png"
};
var imageResponse = await client.Images.GenerateImageAsync(imageRequest);
if (imageResponse.IsSuccess)
{
var base64Image = imageResponse.Data[0].B64Json;
// Save or process the image
var imageBytes = Convert.FromBase64String(base64Image);
await File.WriteAllBytesAsync("generated_image.png", imageBytes);
}
// Simple image generation
var simpleImageResponse = await client.Images.GenerateImageSimpleAsync(
"A futuristic cityscape at night",
model: "flux-dev",
width: 1024,
height: 1024
);
// Image upscaling
var upscaleRequest = new UpscaleImageRequest
{
Model = "flux-dev",
Image = Convert.ToBase64String(imageBytes),
Scale = 2
};
var upscaleResponse = await client.Images.UpscaleImageAsync(upscaleRequest);
// Get available image styles
var stylesResponse = await client.Images.GetImageStylesAsync();
foreach (var style in stylesResponse.Data)
{
Console.WriteLine($"Available style: {style}");
}
Text-to-Speech
var ttsRequest = new CreateSpeechRequest
{
Model = "tts-kokoro",
Input = "Hello, this is Venice AI speaking!",
Voice = VoiceOptions.Female.Sky,
ResponseFormat = AudioFormat.Mp3,
Speed = 1.0
};
var audioResponse = await client.Audio.CreateSpeechAsync(ttsRequest);
if (audioResponse.IsSuccess)
{
await File.WriteAllBytesAsync("output.mp3", audioResponse.AudioContent);
}
// Streaming TTS
await foreach (var chunk in client.Audio.CreateSpeechStreamAsync(ttsRequest))
{
// Process audio chunk
}
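The loop body above is intentionally a placeholder. One way to consume the stream is to append each chunk to a file as it arrives; the sketch below assumes each chunk exposes its bytes via an AudioContent property, mirroring the non-streaming response - check the actual chunk type returned by CreateSpeechStreamAsync:
// Sketch: write streamed audio to disk as chunks arrive
// Assumption: chunk.AudioContent holds the raw bytes, like the non-streaming response above
await using var output = File.Create("streamed_output.mp3");
await foreach (var chunk in client.Audio.CreateSpeechStreamAsync(ttsRequest))
{
    if (chunk.AudioContent is { Length: > 0 })
    {
        await output.WriteAsync(chunk.AudioContent);
    }
}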
Embeddings
var embeddingRequest = new CreateEmbeddingRequest
{
Model = "text-embedding-bge-m3",
Input = "The quick brown fox jumps over the lazy dog",
EncodingFormat = "float"
};
var embeddingResponse = await client.Embeddings.CreateEmbeddingAsync(embeddingRequest);
if (embeddingResponse.IsSuccess)
{
var embedding = embeddingResponse.Data[0].Embedding;
Console.WriteLine($"Embedding dimensions: {embedding.Count}");
}
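Embeddings are typically compared with cosine similarity for semantic search; the helper below is plain .NET and assumes the Embedding values are floats (adjust the element type if the SDK exposes doubles):
// Sketch: cosine similarity between two embedding vectors (element type is an assumption)
static double CosineSimilarity(IReadOnlyList<float> a, IReadOnlyList<float> b)
{
    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.Count; i++)
    {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
}
// Usage: a score near 1.0 means the two texts are semantically similar, e.g.
// var score = CosineSimilarity(queryEmbedding, documentEmbedding);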
Function Calling
var functionRequest = new ChatCompletionRequest
{
Model = "llama-3.3-70b",
Messages = new List<ChatMessage>
{
new UserMessage("What's the weather like in New York?")
},
Tools = new List<Tool>
{
new Tool
{
Function = new FunctionDefinition
{
Name = "get_weather",
Description = "Get current weather for a location",
Parameters = new Dictionary<string, object>
{
["type"] = "object",
["properties"] = new Dictionary<string, object>
{
["location"] = new Dictionary<string, object>
{
["type"] = "string",
["description"] = "The city and state"
}
},
["required"] = new[] { "location" }
}
}
}
},
ToolChoice = "auto"
};
var response = await client.Chat.CreateChatCompletionAsync(functionRequest);
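When the model decides to call a tool, the answer comes back as a tool call rather than plain text. The response-side property names are not shown in this README, so the sketch below uses hypothetical names (ToolCalls, Function.Name, Function.Arguments) borrowed from the common OpenAI-style shape - adjust them to the SDK's actual types:
// Hypothetical sketch - ToolCalls/Function/Arguments property names are assumptions
var message = response.Choices[0].Message;
if (message.ToolCalls?.Any() == true)
{
    foreach (var toolCall in message.ToolCalls)
    {
        if (toolCall.Function.Name == "get_weather")
        {
            // The model returns the arguments as JSON, e.g. {"location":"New York, NY"}
            using var args = System.Text.Json.JsonDocument.Parse(toolCall.Function.Arguments);
            var location = args.RootElement.GetProperty("location").GetString();
            Console.WriteLine($"Model requested weather for: {location}");
            // Run your own weather lookup here, then send the result back in a follow-up message
        }
    }
}
else
{
    Console.WriteLine(message.Content);
}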
Model Information
// List all models
var modelsResponse = await client.Models.GetModelsAsync();
foreach (var model in modelsResponse.Data)
{
Console.WriteLine($"{model.Id}: {model.ModelSpec.Name} ({model.Type})");
}
// List models by type (text, image, tts, embedding, upscale, inpaint)
var textModels = await client.Models.GetModelsAsync(ModelType.Text);
var imageModels = await client.Models.GetModelsAsync(ModelType.Image);
var allModels = await client.Models.GetModelsAsync(ModelType.All);
// Get specific model
var model = await client.Models.GetModelAsync("llama-3.3-70b");
Console.WriteLine($"Context length: {model.ModelSpec.AvailableContextTokens}");
// Get model traits (with optional type filtering)
var traitsResponse = await client.Models.GetModelTraitsAsync();
var textTraits = await client.Models.GetModelTraitsAsync(ModelType.Text);
var defaultModel = traitsResponse.Traits["default"];
var fastestModel = traitsResponse.Traits["fastest"];
// Get model compatibility mappings (with optional type filtering)
var compatibilityResponse = await client.Models.GetModelCompatibilityAsync();
var textCompatibility = await client.Models.GetModelCompatibilityAsync(ModelType.Text);
// Maps alternative model names to Venice AI models (e.g., "gpt-4o" -> "llama-3.3-70b")
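The model list can also drive model selection at runtime. The sketch below uses only the APIs shown above: it picks the first available text model and plugs its Id into a chat request:
// Pick a text model at runtime instead of hard-coding one
var availableTextModels = await client.Models.GetModelsAsync(ModelType.Text);
var selectedModel = availableTextModels.Data.First();
Console.WriteLine($"Using model: {selectedModel.Id} ({selectedModel.ModelSpec.Name})");
var dynamicRequest = new ChatCompletionRequest
{
    Model = selectedModel.Id,
    Messages = new List<ChatMessage> { new UserMessage("Summarize what you can do in one sentence.") },
    MaxTokens = 80
};
var dynamicResponse = await client.Chat.CreateChatCompletionAsync(dynamicRequest);
Console.WriteLine(dynamicResponse.Choices[0].Message.Content);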
Billing Information
var billingRequest = new BillingUsageRequest
{
StartDate = DateTime.UtcNow.AddDays(-30),
EndDate = DateTime.UtcNow,
Currency = Currency.USD,
Limit = 100,
Page = 1
};
var billingResponse = await client.Billing.GetBillingUsageAsync(billingRequest);
foreach (var entry in billingResponse.Data)
{
Console.WriteLine($"{entry.Timestamp}: {entry.Sku} - ${entry.Amount}");
}
Venice Parameters
// Use Venice-specific features for enhanced responses
var request = new ChatCompletionRequest
{
Model = "llama-3.3-70b",
Messages = new List<ChatMessage>
{
new UserMessage("Tell me about recent developments in AI technology.")
},
VeniceParameters = new VeniceParameters
{
EnableWebSearch = "on",
EnableWebCitations = true,
StripThinkingResponse = false,
IncludeVeniceSystemPrompt = true,
DisableThinking = false
}
};
var response = await client.Chat.CreateChatCompletionAsync(request);
Console.WriteLine($"Enhanced response: {response.Choices[0].Message.Content}");
// Check for web search citations
if (response.VeniceParameters?.WebSearchCitations?.Any() == true)
{
Console.WriteLine("Sources:");
foreach (var citation in response.VeniceParameters.WebSearchCitations)
{
Console.WriteLine($"- {citation.Title}: {citation.Url}");
}
}
Error Handling
try
{
var response = await client.Chat.CreateChatCompletionAsync(request);
if (response.IsSuccess)
{
Console.WriteLine(response.Choices[0].Message.Content);
}
else
{
Console.WriteLine($"Error: {response.Error?.Error}");
Console.WriteLine($"Status Code: {response.StatusCode}");
}
}
catch (VeniceAIException ex)
{
Console.WriteLine($"Venice AI Error: {ex.Message}");
Console.WriteLine($"Status Code: {ex.StatusCode}");
}
catch (HttpRequestException ex)
{
Console.WriteLine($"Network error: {ex.Message}");
}
Samples & Examples
🚀 Quick Start Sample
Location: samples/VeniceAI.SDK.Quickstart/
A comprehensive console application demonstrating core SDK features:
cd samples/VeniceAI.SDK.Quickstart
dotnet user-secrets set "VeniceAI:ApiKey" "your-api-key-here"
dotnet run
Features demonstrated:
- Setting up Venice AI client with dependency injection
- Listing available models and capabilities
- Basic chat completions with different models
- Real-time streaming chat responses
- Getting detailed model information and pricing
- Proper error handling and logging
🔧 HttpClient Separation Examples
Location: samples/VeniceAI.SDK.HttpClientExamples/
Advanced examples showing proper HttpClient configuration and separation:
cd samples/VeniceAI.SDK.HttpClientExamples
export VeniceAI__ApiKey="your-api-key-here" # Linux/Mac
# or: set VeniceAI__ApiKey=your-api-key-here # Windows
dotnet run
Scenarios covered:
- Default HttpClient registration (simplest)
- Custom HttpClient configuration
- Providing your own HttpClient instance
- Multiple HttpClients with different configurations
Benefits demonstrated:
- Complete separation between your HttpClients and Venice AI's
- Flexible configuration for different application needs
- No conflicts or configuration interference
- Proper dependency injection patterns
Testing
Unit Tests
dotnet test tests/VeniceAI.SDK.Tests
Integration Tests
Set your API key and run comprehensive integration tests:
dotnet user-secrets set "VeniceAI:ApiKey" "your-api-key" --project tests/VeniceAI.SDK.IntegrationTests
dotnet test tests/VeniceAI.SDK.IntegrationTests
Note: Integration tests only require your API key - all other settings are managed by the SDK.
Support
License
This project is licensed under the MIT License - see the LICENSE file for details.
Frameworks
Product | Compatible and additional computed target framework versions |
---|---|
.NET | net8.0 is compatible. net8.0-android was computed. net8.0-browser was computed. net8.0-ios was computed. net8.0-maccatalyst was computed. net8.0-macos was computed. net8.0-tvos was computed. net8.0-windows was computed. net9.0 was computed. net9.0-android was computed. net9.0-browser was computed. net9.0-ios was computed. net9.0-maccatalyst was computed. net9.0-macos was computed. net9.0-tvos was computed. net9.0-windows was computed. net10.0 was computed. net10.0-android was computed. net10.0-browser was computed. net10.0-ios was computed. net10.0-maccatalyst was computed. net10.0-macos was computed. net10.0-tvos was computed. net10.0-windows was computed. |
Dependencies
net8.0
- Microsoft.Extensions.Configuration.Abstractions (>= 8.0.0)
- Microsoft.Extensions.DependencyInjection.Abstractions (>= 8.0.0)
- Microsoft.Extensions.Http (>= 8.0.0)
- Microsoft.Extensions.Logging.Abstractions (>= 8.0.0)
- Microsoft.Extensions.Options (>= 8.0.0)
- System.ComponentModel.Annotations (>= 5.0.0)
- System.Text.Json (>= 8.0.5)