AgentCircuits.Core 0.7.0

.NET CLI:
dotnet add package AgentCircuits.Core --version 0.7.0

Package Manager (Visual Studio Package Manager Console, which uses the NuGet module's version of Install-Package):
NuGet\Install-Package AgentCircuits.Core -Version 0.7.0

PackageReference (for projects that support PackageReference, copy this XML node into the project file):
<PackageReference Include="AgentCircuits.Core" Version="0.7.0" />

Central Package Management (copy the PackageVersion node into the solution Directory.Packages.props file to version the package, then reference it without a version in the project file):
<PackageVersion Include="AgentCircuits.Core" Version="0.7.0" />
<PackageReference Include="AgentCircuits.Core" />

Paket CLI:
paket add AgentCircuits.Core --version 0.7.0

F# Interactive / Polyglot Notebooks (copy into the interactive tool or script source):
#r "nuget: AgentCircuits.Core, 0.7.0"

C# file-based apps (.NET 10 preview 4 and later; place before any lines of code):
#:package AgentCircuits.Core@0.7.0

Cake Addin:
#addin nuget:?package=AgentCircuits.Core&version=0.7.0

Cake Tool:
#tool nuget:?package=AgentCircuits.Core&version=0.7.0

AgentCircuits

A high-performance C# SDK for building AI agent systems

Simple by Default. Powerful When Needed.

Build everything from one-shot AI queries to complex multi-agent orchestrations—all in idiomatic C# with minimal overhead.

// Get started in one line
var result = await Agent.Query("Create a C# function that validates email addresses");

License: MIT


Why AgentCircuits?

  • In-Process Performance: run 100+ agents in a single process (others: mostly separate processes/containers)
  • Progressive API: start with 1 line, scale to advanced (others: often all-or-nothing complexity)
  • Extended Thinking: native ThinkingConfig with streaming events and multi-turn continuity across Anthropic, OpenAI, Gemini, Ollama, and Bedrock (others: often no thinking/reasoning mode support)
  • Image Generation: native Gemini image generation with ImageEvent streaming (others: usually separate APIs)
  • Background Tasks: TaskStop, TaskOutput, JSONL logs, progress metrics, TTL-based cleanup (others: often no background task management)
  • Turn-by-Turn Control: StepAsync() for approvals, debugging, multi-day workflows (others: black-box loops with no control between turns)
  • Context Builders: custom formats (XML, Compact) with 30-50% token savings (others: fixed message format, no optimisation)
  • Real-Time Streaming: built-in streaming with 83% faster perceived latency (others: often custom SSE handling)
  • Multimodal Input: native image support with all vision models (others: often limited or provider-specific)
  • Native Multi-Agent: built-in sub-agents and orchestration (others: usually custom coordination)
  • Web Management Portal: built-in dashboard, session viewer, playground UI (others: usually custom development)
  • Chat Interface: modern SvelteKit UI with streaming, tool visualisation, themes (others: usually custom development)
  • A2A Protocol: standard agent-to-agent communication with Python, Java, and JS interop (others: often no cross-framework integration)
  • C# Idiomatic: feels like native .NET with attributes, async/await, and records (others: Python-first with C# as an afterthought)
  • MCP Integration: first-class Model Context Protocol support (others: limited or no MCP support)
  • Agent Hooks: cross-cutting concerns such as auth, logging, and cost tracking (others: custom middleware)
  • Native PDF Support: upload PDFs directly to Claude and Gemini models (others: usually text extraction first)
  • Session Names: LLM-generated session summaries for easy identification (others: manual naming or none)
  • Task Management: built-in task CRUD (TaskCreate/Update/List) with shared task lists across sessions (others: custom implementation)
  • Session Cancellation: cancel running executions with execution tracking (others: often no cancellation support)
  • User Management: built-in user entities, authentication, and access control (others: custom implementation)
  • SQL Storage: production-ready PostgreSQL storage for sessions (others: often custom implementation)

Quick Start

Installation

# Core framework (required)
dotnet add package AgentCircuits

# LLM Providers (choose one or more)
dotnet add package AgentCircuits.Providers.Anthropic  # For Claude models
dotnet add package AgentCircuits.Providers.OpenAI     # For GPT models
dotnet add package AgentCircuits.Providers.Gemini     # For Google Gemini models
dotnet add package AgentCircuits.Providers.Grok       # For xAI Grok models
dotnet add package AgentCircuits.Providers.Ollama     # For local models (Llama, Mistral, etc.)
dotnet add package AgentCircuits.Providers.Bedrock    # For AWS Bedrock (Claude, Titan, etc.)

# Optional Features
dotnet add package AgentCircuits.A2A                  # A2A protocol support
dotnet add package AgentCircuits.Channels             # Multi-channel routing (WhatsApp, Slack, Teams)
dotnet add package AgentCircuits.Portal               # Web management portal
dotnet add package AgentCircuits.Server               # Turnkey host (Portal + UI + hubs)

# Storage Providers (for production deployments)
dotnet add package AgentCircuits.Storage.Sql          # PostgreSQL storage

# Chat UI (npm package, requires Portal backend)
# See: agentcircuits.ui/ for the SvelteKit chat interface

Core Package Features:

  • Agent runtime with streaming, thinking mode, and multimodal support
  • Session management, context builders, and auto-compaction
  • 21 built-in tools (Read, Write, Edit, Bash, Glob, Grep, WebFetch, WebSearch, Memory, Base64, AskUserQuestion, Task orchestration, and more)
  • AgentHost for session-aware agent execution with SystemPromptEnricher
  • Hooks for cross-cutting concerns (security, logging, cost tracking, polyglot CommandHook)
  • Task management (TaskCreate, TaskGet, TaskUpdate, TaskList) with shared task lists across sessions
  • User management with authentication and role-based access control
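
For example, the task-management tools can be attached to an agent like any other built-in tool. A minimal sketch; the exact BuiltInTools member names below are assumptions based on the tool names listed above:

// Hypothetical member names (TaskCreate/TaskUpdate/TaskList); check BuiltInTools for the actual identifiers
var taskAgent = new Agent
{
    Model = "claude-sonnet-4-5",
    SystemPrompt = "Track your work as a task list while you go",
    Tools = [BuiltInTools.TaskCreate, BuiltInTools.TaskUpdate, BuiltInTools.TaskList]
};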

One-Shot Query

The simplest way to use AgentCircuits—get an answer in one line:

using AgentCircuits;
using AgentCircuits.Providers.Anthropic;

var result = await Agent.Query(
    "Analyze this code for bugs",
    model: Anthropic.LanguageModel("claude-sonnet-4-5"),
    tools: [BuiltInTools.Read, BuiltInTools.Grep]
);

Console.WriteLine(result);

Interactive Conversations

For multi-turn interactions with context retention:

await using var agent = new Agent
{
    SystemPrompt = "You are a senior software architect",
    Tools = BuiltInTools.Safe,  // Read, Write, Edit, Grep, etc.
    Model = "claude-sonnet-4-5",
    WorkingDirectory = "./my-project"
};

// First turn
await agent.SendAsync("Review the authentication module");

await foreach (var evt in agent.ReceiveAsync())
{
    if (evt is TextEvent text)
        Console.Write(text.Content);

    if (evt is ToolUseEvent tool)
        Console.WriteLine($"\n[Using: {tool.ToolName}]");
}

// Follow-up turn - agent remembers context
await agent.SendAsync("Now fix the security issues you found");

await foreach (var evt in agent.ReceiveAsync())
{
    if (evt is TextEvent text)
        Console.Write(text.Content);
}

Key Features:

  • 🧠 Full conversation history
  • 🔄 Real-time streaming with events
  • 🛠️ Built-in tools (file I/O, grep, etc. - excludes Bash for safety)
  • 💾 Session persistence
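
Session persistence is backed by an ISessionService. A minimal sketch using the in-memory implementation that appears later in this README (swap in the SQL storage package for production):

using AgentCircuits.Sessions;

// In-memory session store; AgentCircuits.Storage.Sql provides a PostgreSQL-backed alternative
var sessions = new InMemorySessionService();

await using var agent = new Agent(sessions)
{
    Model = "claude-sonnet-4-5",
    SystemPrompt = "You are a senior software architect"
};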

Turn-by-Turn Control (StepAsync)

For advanced scenarios requiring control between turns—human approvals, multi-day workflows, or custom retry logic—use StepAsync() for step-by-step execution:

When to Use Which Pattern

Use ReceiveAsync() for:

  • Chatbots, Q&A, code analysis
  • Tasks that can run to completion
  • Simple, straightforward workflows

Use StepAsync() for:

  • Approval workflows (deployments, deletions)
  • Multi-day async operations
  • Custom retry logic
  • Debugging and observability

Approval Workflow Example

Stop before dangerous operations and require human confirmation:

using AgentCircuits.Events;

var agent = new Agent
{
    SystemPrompt = "You are a deployment assistant",
    Model = "claude-sonnet-4-5",
    Tools = [BuiltInTools.Bash, deployProdTool]
};

await agent.SendAsync("Deploy version 2.1.0 to production");

while (true)
{
    var turn = await agent.StepAsync();

    // Check for dangerous operations BEFORE they execute
    if (turn.ToolCalls.Any(t => t.ToolName == "deploy_prod"))
    {
        Console.Write("Approve deployment? (y/n): ");
        if (Console.ReadLine() != "y")
        {
            await agent.SendAsync("Deployment denied by operator");
            continue;
        }
    }

    foreach (var evt in turn.Events)
        if (evt is TextEvent text)
            Console.WriteLine(text.Content);

    if (turn.FinishReason == FinishReason.Stop || !turn.RequiresNextTurn)
        break;
}

Multi-Day Workflow Example

Exit and resume when waiting for external approvals:

// Tool returns operation ID immediately with Interrupt flag
[Tool("request_approval")]
public async Task<ToolResult> RequestApproval(string approver, string question)
{
    var operationId = $"req_{Guid.NewGuid():N}";

    // Store in database, send notification
    await _db.SavePendingOperationAsync(operationId, approver, question);
    await _notificationService.NotifyAsync(approver, question);

    // Return with Interrupt flag - signals agent to exit
    return new ToolResult
    {
        IsSuccess = true,
        Interrupt = true,  // Exit agent loop
        Content = operationId,
        Metadata = new() { ["operation_id"] = operationId, ["status"] = "pending" }
    };
}

// Orchestration with StepAsync
await agent.SendAsync("I need database access");

while (true)
{
    var turn = await agent.StepAsync();

    // Check for operations that need external approval
    var interruptedOps = turn.Events
        .OfType<ToolResultEvent>()
        .Where(e => e.Metadata?.ContainsKey("operation_id") == true);

    if (interruptedOps.Any())
    {
        // Save session and exit - don't block for hours/days
        await _sessionService.SaveAsync(agent.Session);
        Console.WriteLine("Approval requested. Exiting...");
        break;
    }

    if (turn.FinishReason == FinishReason.Stop || !turn.RequiresNextTurn)
        break;
}

// Hours or days later: Resume when approval arrives
var session = await _sessionService.GetSessionAsync(sessionId);
var resumedAgent = new Agent { Session = session, Model = "claude-sonnet-4-5" };

await resumedAgent.SendAsync($"Approval granted for operation {operationId}");
await foreach (var evt in resumedAgent.ReceiveAsync())
{
    // Agent continues from where it left off
}

Key Benefits:

  • 🛡️ Safety: Inspect and approve dangerous operations before execution
  • ⏸️ Async Operations: Exit and resume without blocking
  • 🔍 Observability: Full visibility into each turn's tool calls
  • 🔄 Retry Control: Custom logic for failed operations
  • 📊 Debugging: Step through agent execution turn by turn

Implementation Notes:

  • ReceiveAsync() internally calls StepAsync() in a loop (single execution path)
  • Both methods share the same underlying turn execution logic
  • Zero overhead for simple cases, full control when needed
  • See ControlFlowDemo for complete examples
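
Conceptually, consuming ReceiveAsync() behaves like the following StepAsync() loop. This is a simplified sketch for intuition, not the actual implementation:

// Roughly what ReceiveAsync() does internally: run turns until the agent is done,
// surfacing each turn's events along the way.
while (true)
{
    var turn = await agent.StepAsync();

    foreach (var evt in turn.Events)
        Console.WriteLine(evt.GetType().Name);   // handle events as they arrive

    if (turn.FinishReason == FinishReason.Stop || !turn.RequiresNextTurn)
        break;
}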

Real-Time Streaming

Enable streaming for character-by-character responses—83% faster perceived latency (0.5s vs 3-5s to first content):

await using var agent = new Agent
{
    Model = Anthropic.LanguageModel("claude-sonnet-4-5"),
    UseStreaming = true,  // Enable streaming
    SystemPrompt = "You are a helpful coding assistant"
};

await agent.SendAsync("Explain async/await in C#");

await foreach (var evt in agent.ReceiveAsync())
{
    switch (evt)
    {
        case TextEvent { Partial: true } text:
            // Real-time partial content - display immediately
            Console.Write(text.Content);
            break;

        case TextEvent { Partial: false }:
            // Final complete text - skip (already displayed)
            Console.WriteLine();
            break;

        case ResultEvent result:
            Console.WriteLine($"\nCost: ${result.Usage?.TotalCostUsd:F4}");
            break;
    }
}

Streaming Benefits:

  • Instant Feedback: See response as it's generated
  • 🎯 Better UX: 83% faster time-to-first-content
  • 🔧 Works with Tools: Text streams, tools execute normally
  • 💾 Session Friendly: Only final content persisted (no duplicate partial events)

Extended Thinking (Reasoning Mode)

Enable models to "think out loud" before responding, improving reasoning quality for complex tasks:

using AgentCircuits;
using AgentCircuits.Events;
using AgentCircuits.Providers;
using AgentCircuits.Sessions;

var agent = new Agent(new InMemorySessionService())
{
    Name = "thinking_agent",
    SystemPrompt = "You are a helpful assistant. Think through problems carefully.",
    Model = "claude-sonnet-4-5",
    MaxTokens = 16000,
    Thinking = new ThinkingConfig { Enabled = true, Effort = ThinkingEffort.High },
    UseStreaming = true
};

await agent.SendAsync("What is the sum of the first 10 prime numbers?");

await foreach (var evt in agent.ReceiveAsync())
{
    switch (evt)
    {
        case ThinkingEvent thinking when thinking.Partial:
            // Stream thinking content in real-time
            Console.ForegroundColor = ConsoleColor.DarkGray;
            Console.Write(thinking.Thinking);
            Console.ResetColor();
            break;

        case ThinkingEvent thinking when !thinking.Partial:
            // Complete thinking block - signature preserved for multi-turn
            Console.WriteLine("\n");
            break;

        case TextEvent text when text.Partial:
            // Stream response content
            Console.Write(text.Content);
            break;

        case ResultEvent result:
            Console.WriteLine($"\n[{result.DurationMs}ms, {result.Usage?.TotalCostUsd:F4}]");
            break;
    }
}

ThinkingConfig Options

var agent = new Agent
{
    Thinking = new ThinkingConfig
    {
        Enabled = true,
        Effort = ThinkingEffort.High  // Low, Medium, High
    }
};

Multi-Turn Conversations with Thinking

Thinking signatures are automatically preserved across turns for continuity:

var agent = new Agent(new InMemorySessionService())
{
    Model = "claude-sonnet-4-5",
    MaxTokens = 8192,
    Thinking = new ThinkingConfig { Enabled = true, Effort = ThinkingEffort.Medium },
    UseStreaming = true
};

// First turn
await agent.SendAsync("What is a factorial?");
await foreach (var evt in agent.ReceiveAsync())
{
    // ThinkingEvent emitted with signature
}

// Second turn - model has context from previous thinking
await agent.SendAsync("Calculate 5! step by step");
await foreach (var evt in agent.ReceiveAsync())
{
    // Thinking continues with preserved context
}

Key Features:

  • 🧠 Extended Reasoning: Models show their thought process before responding
  • 📊 Token Tracking: TokenDistribution.ThinkingTokens tracks thinking token usage
  • 🔄 Streaming Support: ThinkingEvent.Partial for real-time thinking deltas
  • 🔗 Multi-Turn Continuity: Signatures automatically preserved across conversation turns
  • 💰 Effort Control: Set thinking intensity with Effort (Low/Medium/High)
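
For example, thinking token usage can be read from the context stats surface covered in the Token Monitoring section later in this README. A minimal sketch; it assumes ThinkingTokens is exposed on the TokenDistribution returned by GetContextStatsAsync():

var stats = await agent.GetContextStatsAsync();

// Thinking tokens tracked alongside the other distribution buckets
Console.WriteLine($"Thinking tokens: {stats.Distribution.ThinkingTokens:N0}");
Console.WriteLine($"Context usage: {stats.UsagePercentage:F1}%");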

Provider Support:

  • Anthropic: Full support with budget-based configuration and interleaved thinking for Claude 4 models
  • AWS Bedrock: Full support via Claude models on Bedrock with interleaved thinking for Claude 4
  • Google Gemini: Reasoning support for Gemini 2.0/2.5/3.0 models with thought images
  • OpenAI: Reasoning support for o1/o3 models
  • Ollama: Reasoning support for compatible local models (DeepSeek-R1, etc.)

See Demo11_ThinkingMode for complete examples.


Multimodal Input

Send images to vision-capable models (Claude, GPT-4o, Gemini) for analysis, description, or comparison:

using AgentCircuits.Providers;

// Load an image
var image = await ImageHelper.FromFileAsync("screenshot.png");

// Send with a prompt
await agent.SendAsync(Message.UserImage(
    imageData: image.Data,
    mediaType: image.MediaType,
    caption: "What UI issues do you see in this screenshot?"
));

await foreach (var evt in agent.ReceiveAsync())
{
    if (evt is TextEvent text)
        Console.WriteLine(text.Content);
}

Multi-Image Messages:

// Compare before/after screenshots
var before = await ImageHelper.FromFileAsync("before.png");
var after = await ImageHelper.FromFileAsync("after.png");

await agent.SendAsync(Message.UserBlocks(
    new TextContent { Text = "Compare these screenshots:" },
    new ImageContent { Data = before.Data, MediaType = before.MediaType },
    new TextContent { Text = "\nAfter:" },
    new ImageContent { Data = after.Data, MediaType = after.MediaType }
));

Supported Formats:

  • Image Types: JPEG, PNG, GIF, WebP
  • Document Types: PDF (Claude and Gemini)
  • Providers: vision input is supported across Anthropic, Google, and OpenAI models
  • Size Limits: 10-20MB per image depending on provider
  • Use Cases: Screenshot analysis, diagram extraction, image description, visual comparison, PDF analysis

Native PDF Support

Upload PDF documents directly to compatible models for analysis:

using AgentCircuits.Providers;

// Load a PDF document
var pdfContent = new DocumentContent
{
    Data = Convert.ToBase64String(await File.ReadAllBytesAsync("report.pdf")),
    MediaType = "application/pdf"
};

// Send to Claude or Gemini
await agent.SendAsync(Message.UserBlocks(
    new TextContent { Text = "Summarise the key findings in this report:" },
    pdfContent
));

await foreach (var evt in agent.ReceiveAsync())
{
    if (evt is TextEvent text)
        Console.WriteLine(text.Content);
}

Supported Providers:

  • Anthropic Claude: Native PDF support via API
  • Google Gemini: Native PDF support via API
  • OpenAI/Others: Require text extraction first

See MultimodalDemo for complete examples including UI/UX analysis and visual diff tools.


Image Generation

Generate images with models that support image output (e.g., Gemini 2.0 Flash image generation):

using AgentCircuits;
using AgentCircuits.Events;
using AgentCircuits.Providers.Gemini;

var agent = new Agent
{
    Model = Gemini.LanguageModel("gemini-2.0-flash-exp-image-generation"),
    SystemPrompt = "You are a creative assistant that generates images."
};

await agent.SendAsync("Generate an image of a sunset over mountains");

await foreach (var evt in agent.ReceiveAsync())
{
    switch (evt)
    {
        case ImageEvent image:
            // Save generated image
            var imageBytes = Convert.FromBase64String(image.Data);
            await File.WriteAllBytesAsync($"generated_{Guid.NewGuid()}.png", imageBytes);
            Console.WriteLine($"Image generated: {image.MediaType}");
            break;

        case TextEvent text:
            Console.Write(text.Content);
            break;
    }
}

Key Features:

  • 🎨 Streaming Support: ImageEvent emitted for generated images during streaming
  • 📦 Base64 Encoding: Image data returned as base64-encoded string
  • 🔧 Tool Integration: Works alongside text generation and tool use
  • 🎯 Thought Signatures: Gemini 2.0+ models include ThoughtSignature for reasoning transparency

Supported Models:

  • Gemini 2.0 Flash: gemini-2.0-flash-exp-image-generation
  • Gemini 2.5 Flash: gemini-2.5-flash-image (Nano Banana)
  • Gemini 3 Pro: gemini-3-pro-image-preview

Auto-Compaction

AgentCircuits automatically manages context limits to prevent overflow—enabled by default with zero configuration:

await using var agent = new Agent
{
    Model = "claude-sonnet-4-5",
    AutoCompaction = true,  // Default: true (on by default)
    SystemPrompt = "You are a coding assistant"
};

// Agent automatically compacts context at 90% of limit
// No manual intervention needed - continues seamlessly
await agent.SendAsync("Read all files in this large project and analyze them");

How It Works:

  1. Automatic Detection: Monitors context usage before each LLM call
  2. Smart Triggering: Activates at 90% of model's context window (e.g., 180K tokens for Claude)
  3. Intelligent Summarization: Uses a separate "compactor" agent to create a concise summary
  4. Seamless Continuation: Replaces history with summary, conversation continues without interruption
  5. Tool Result Trimming: Automatically truncates large tool outputs to 500 chars before compaction

Key Features:

  • 🤖 On by Default: Zero configuration required
  • 📊 Provider-Agnostic: Works with any LLM (Claude, GPT, Ollama, etc.)
  • 🎯 Model-Aware: Fetches context limits from models.dev API with 24h caching
  • 🛡️ Fail-Safe: If compaction fails, preserves original session and continues
  • 💰 Cost Tracking: Token usage accumulates across compactions for accurate billing

Configure Compaction Behavior:

var agent = new Agent
{
    AutoCompaction = false,  // Turn off for full history retention

    // Or customize thresholds
    ContextWindow = new ContextWindowConfig
    {
        CompactionThreshold = 0.85,        // Trigger at 85% instead of 90%
        OutputBufferTokens = 3000,         // Reserve more space for output
        TokensPerToolDefinition = 200      // Adjust tool definition estimates
    }
};

System Events:

await foreach (var evt in agent.ReceiveAsync())
{
    if (evt is SystemEvent sys && sys.Subtype == "compaction")
    {
        Console.WriteLine($"[SYSTEM] {sys.Data["message"]}");
        // Output: "Context usage approaching limit (90%). Compacting conversation history..."
        // Output: "Compaction complete. Continuing..."
    }
}

Context Builders (Token Optimization)

Customize how session events are converted to LLM messages for 30-50% token savings, model-specific optimization, or domain-specific formats:

await using var agent = new Agent
{
    Model = "claude-sonnet-4-5",
    ContextBuilder = ContextBuilders.Compact(),  // 30-50% token reduction
    SystemPrompt = "You are a helpful assistant"
};

await agent.SendAsync("Analyze this project");
// Events converted to compact format: "U: message" instead of full Message objects

Built-In Context Builders

1. Compact (Token Optimization)

var agent = new Agent
{
    ContextBuilder = ContextBuilders.Compact(maxEventLength: 150)
};

// Output format (30-50% savings):
// U: user message
// A: assistant response
// T: tool_name(arg1=val1)
// R✓: tool result

2. XML (Claude-Optimized)

var agent = new Agent
{
    ContextBuilder = ContextBuilders.Xml(includeTimestamps: true)
};

// Output format (optimized for Claude):
// <conversation>
//   
//   <user>message</user>
//   <tool_use name="read">...</tool_use>
//   <tool_result success="true">result</tool_result>
//   <assistant>response</assistant>
// </conversation>

3. Custom Implementation

public class MedicalContextBuilder : IContextBuilder
{
    public List<Message> BuildMessages(
        IReadOnlyList<Event> events,
        string? systemPrompt,
        IEnumerable<ITool> tools,
        Message? firstTurnOverride = null)
    {
        // Custom formatting logic for medical records...
        return formattedMessages;
    }
}

var agent = new Agent { ContextBuilder = new MedicalContextBuilder() };

4. Inline Functions

var agent = new Agent
{
    ContextBuilder = ContextBuilders.FromFunc(events =>
    {
        var summary = string.Join("\n", events
            .OfType<TextEvent>()
            .Select(e => $"{e.Author}: {e.Content}"));

        return new List<Message>
        {
            Message.User(summary)
        };
    })
};

Key Features:

  • 💰 Token Savings: Compact format reduces costs by 30-50%
  • 🎯 Model-Specific: XML optimized for Claude (trained on XML)
  • 🔧 Extensible: Implement IContextBuilder for custom formats
  • ↔️ Backward Compatible: Null ContextBuilder = default behavior
  • 📊 3 Built-In Formats: Compact, XML, JsonStructured
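
The JsonStructured format is selected the same way as the other two. A minimal sketch; the ContextBuilders.JsonStructured() factory name is an assumption by analogy with Compact() and Xml() and is not shown elsewhere in this README:

var agent = new Agent
{
    // Hypothetical factory name; see ContextBuilders for the actual member
    ContextBuilder = ContextBuilders.JsonStructured()
};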

See ContextBuildersDemo for examples.


Token Monitoring & Performance Metrics

AgentCircuits provides real-time visibility into context window usage and performance metrics:

Context Window Monitoring

Track token usage and get proactive warnings before hitting context limits:

using AgentCircuits.Sessions;

var agent = new Agent
{
    Model = "claude-sonnet-4-5",
    SystemPrompt = "You are a helpful assistant",
    Tools = BuiltInTools.Safe
};

await agent.SendAsync("Analyze this project");

// Monitor context usage during execution
var stats = await agent.GetContextStatsAsync();

Console.WriteLine($"Context Usage: {stats.UsagePercentage:F1}%");
Console.WriteLine($"Tokens: {stats.TotalTokens:N0} / {stats.MaxContextTokens:N0}");
Console.WriteLine($"Remaining: {stats.RemainingTokens:N0}");
Console.WriteLine($"Action: {stats.RecommendedAction}");

// Token distribution breakdown
var dist = stats.Distribution;
Console.WriteLine($"System Prompt: {dist.SystemPromptTokens:N0}");
Console.WriteLine($"Tool Definitions: {dist.ToolDefinitionTokens:N0}");
Console.WriteLine($"User Messages: {dist.UserMessageTokens:N0}");
Console.WriteLine($"Assistant: {dist.AssistantTokens:N0}");
Console.WriteLine($"Tool Results: {dist.ToolResultTokens:N0} (typically 50-80%)");

Recommended Actions:

  • None (<60%): All good
  • Monitor (60-80%): Watch usage
  • Compact (80-95%): Consider compaction
  • Critical (>95%): Near limit
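
A minimal sketch of acting on these thresholds before sending the next message; it assumes RecommendedAction stringifies to the action names listed above:

var stats = await agent.GetContextStatsAsync();
var action = $"{stats.RecommendedAction}";   // works whether this is an enum or a string

if (action is "Compact" or "Critical")
    Console.WriteLine($"Context at {stats.UsagePercentage:F1}% - rely on auto-compaction or start a fresh session.");
else if (action is "Monitor")
    Console.WriteLine("Context usage above 60% - keep an eye on it.");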

Per-Turn Performance Metrics

Get detailed performance data for every LLM call:

await agent.SendAsync("Review the authentication code");

await foreach (var evt in agent.ReceiveAsync())
{
    if (evt is TurnMetricsEvent metrics)
    {
        Console.WriteLine($"\n=== Turn {metrics.TurnNumber} Metrics ===");
        Console.WriteLine($"Duration: {metrics.DurationMs}ms");
        Console.WriteLine($"Tokens: {metrics.Usage.InputTokens} in, {metrics.Usage.OutputTokens} out");
        Console.WriteLine($"Cost: ${metrics.Usage.TotalCostUsd:F4}");

        var perf = metrics.Performance;
        Console.WriteLine($"Throughput: {perf.TokensPerSecond:F1} tokens/sec");
        Console.WriteLine($"TTFT: {perf.TimeToFirstTokenMs}ms");
        Console.WriteLine($"Tools: {perf.ToolCallsSuccessful}/{perf.ToolCallsAttempted}");

        // Per-tool statistics
        if (perf.ToolCallsByName != null)
        {
            foreach (var (tool, stats) in perf.ToolCallsByName)
            {
                Console.WriteLine($"  {tool}: {stats.SuccessCount}/{stats.CallCount} " +
                                $"({stats.AverageExecutionMs:F0}ms avg)");
            }
        }
    }

    if (evt is ResultEvent result && result.Performance != null)
    {
        // Aggregate metrics across all turns
        var aggPerf = result.Performance;
        Console.WriteLine($"\n=== Session Summary ===");
        Console.WriteLine($"Total Tokens/sec: {aggPerf.TokensPerSecond:F1}");
        Console.WriteLine($"Total Tools: {aggPerf.ToolCallsSuccessful}/{aggPerf.ToolCallsAttempted}");
        Console.WriteLine($"Avg Tool Time: {aggPerf.AverageToolExecutionMs:F0}ms");
    }
}

Metrics Captured:

  • 🚀 Throughput: Tokens per second (total, input, output)
  • ⏱️ Latency: Time to first token (streaming only)
  • 🔧 Tool Stats: Success/failure rates, execution times
  • 💰 Cost: Per-turn and cumulative
  • 📊 Per-Tool Breakdown: Individual tool performance

Use Cases:

  • Monitor production agent performance
  • Identify slow tools or model degradation
  • Track token throughput across conversations
  • Detect runaway agents with anomaly detection
  • Optimize tool execution strategies

Custom Tools

Create domain-specific tools with simple attributes:

public class DatabaseTools
{
    private readonly IDbConnection _db;

    public DatabaseTools(IDbConnection db) => _db = db;

    [Tool("query_users", "Query the user database")]
    public async Task<ToolResult> QueryUsers(
        [ToolParam("SQL WHERE clause")] string where,
        [ToolParam("Maximum rows")] int limit,
        IToolContext context)
    {
        if (limit > 1000)
            return ToolResult.Error("Limit cannot exceed 1000");

        var sql = $"SELECT * FROM users WHERE {where} LIMIT {limit}";
        var users = await _db.QueryAsync(sql);

        return ToolResult.Success(JsonSerializer.Serialize(users));
    }
}

// Use with agent
var dbTools = new DatabaseTools(dbConnection);

var agent = new Agent
{
    SystemPrompt = "You are a data analyst assistant",
    Tools = Tool.FromInstance(dbTools),  // Auto-discovers [Tool] methods
    Model = "claude-sonnet-4-5"
};

Features:

  • 🏷️ Attribute-based tool definition
  • 💉 Dependency injection via instance methods
  • 🔄 Automatic JSON schema generation
  • 🛡️ Built-in validation and error handling

Multi-Agent Systems

Orchestrate specialized sub-agents for complex workflows:

// Define specialized agents
var securityAgent = new Agent
{
    Name = "security_scanner",
    Description = "Scans code for security vulnerabilities",
    SystemPrompt = "You are a security expert. Look for SQL injection, XSS, auth issues...",
    Tools = [BuiltInTools.Read, BuiltInTools.Grep],
    Model = "claude-sonnet-4-5"
};

var performanceAgent = new Agent
{
    Name = "performance_analyzer",
    Description = "Analyzes performance bottlenecks",
    SystemPrompt = "You are a performance expert. Find N+1 queries, memory leaks...",
    Tools = [BuiltInTools.Read, BuiltInTools.Bash],
    Model = "claude-sonnet-4-5"
};

// Orchestrator delegates to specialists
var orchestrator = new Agent
{
    SystemPrompt = """
        Coordinate code analysis by delegating to specialists:
        1. Use security_scanner for security review
        2. Use performance_analyzer for performance analysis
        3. Synthesize findings into actionable report
        """,
    SubAgents = [securityAgent, performanceAgent],
    Tools = [BuiltInTools.Task, BuiltInTools.Read],  // Task tool enables delegation
    Model = "claude-sonnet-4-5"
};

await orchestrator.SendAsync("Analyze the API endpoints");

await foreach (var evt in orchestrator.ReceiveAsync())
{
    if (evt is ToolUseEvent { ToolName: "Task" } task)
    {
        var subAgent = task.Arguments["subagent_type"];
        Console.WriteLine($"→ Delegating to: {subAgent}");
    }

    if (evt is TextEvent text)
        Console.WriteLine(text.Content);
}

Patterns Supported:

  • 🎯 Orchestrator: Root agent delegates to specialists
  • 🔄 Pipeline: Sequential processing through agents
  • Parallel: Concurrent agent execution
  • 🌳 Hierarchical: Multi-level agent trees
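
For example, a simple sequential pipeline can be built by chaining one-shot queries, feeding one agent's output into the next. A minimal sketch using the Agent.Query API from the Quick Start section (prompts and paths are illustrative):

// Stage 1: gather findings
var findings = await Agent.Query(
    "List potential security issues in ./src",
    model: Anthropic.LanguageModel("claude-sonnet-4-5"),
    tools: [BuiltInTools.Read, BuiltInTools.Grep]);

// Stage 2: turn findings into a plan
var plan = await Agent.Query(
    $"Write a prioritised remediation plan for these findings:\n{findings}",
    model: Anthropic.LanguageModel("claude-sonnet-4-5"));

Console.WriteLine(plan);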

AgentHost (Agent Execution)

AgentHost provides a minimal API for running configured agents against sessions:

using AgentCircuits.Host;
using AgentCircuits.Internal;
using AgentCircuits.Providers;
using AgentCircuits.Sessions;

// Setup AgentHost
var sessionService = new InMemorySessionService();
var agentRepo = new FileBasedAgentConfigRepository("./agents");
var host = new AgentHost(sessionService, agentRepo);

var result = await host.RunAsync(
    agentId: "support-bot",
    message: Message.User("Help me reset my password"));

Console.WriteLine($"Session: {result.SessionId}");
var responseText = TextExtractionHelper.ExtractText(result.Response.Events, excludePartial: true);
Console.WriteLine($"Response: {responseText}");

Basic Usage

1. Continue conversation (existing session):

var followUp = await host.RunAsync(
    agentId: "support-bot",
    message: Message.User("What if I forgot my username too?"),
    sessionId: result.SessionId,
    resolution: SessionIdResolution.RequireExisting
);

2. Force a new session:

var freshStart = await host.RunAsync(
    agentId: "support-bot",
    message: Message.User("Start over"),
    sessionId: result.SessionId,
    resolution: SessionIdResolution.CreateNew
);

3. Retrieve session history:

var session = await sessionService.GetSessionAsync(result.SessionId);
if (session != null)
{
    foreach (var evt in session.Events)
    {
        Console.WriteLine($"{evt.Author}: {evt.GetType().Name}");
    }
}

Declarative Agents (Configuration as Code)

Define agents in JSON files for version control and easy deployment:

agents/support-bot.json:

{
  "id": "support-bot",
  "name": "Support Assistant",
  "modelId": "claude-sonnet-4-5",
  "toolNames": ["read", "write"],
  "systemPrompt": "You are a helpful IT support assistant",
  "maxIterations": 50
}

// Agents auto-loaded from *.json files
var repo = new FileBasedAgentConfigRepository("./agents");
var host = new AgentHost(sessionService, repo);

// Use any agent
await host.RunAsync("support-bot", Message.User("Help me reset my password"));

Async Operations

Handle long-running operations with external completion:

var asyncOps = new AsyncOperationService(asyncOpRepo, sessionService, logger);

// Create operation
var op = await asyncOps.CreateAsync(
    sessionId: sessionId,
    agentId: "approval-agent",
    type: "approval",
    metadata: JsonSerializer.SerializeToElement(new { question = "Deploy to production?" }),
    timeout: TimeSpan.FromMinutes(5)
);

// External system completes (e.g., webhook)
await asyncOps.CompleteAsync(op.Id, JsonSerializer.SerializeToElement(new { approved = true }));
// No automatic agent wake-up; trigger follow-up explicitly if needed
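
Once the external system has completed the operation, the session can be resumed explicitly. A minimal sketch using the AgentHost API shown above (the message text is illustrative):

// Resume the same session with the operation outcome
var resumed = await host.RunAsync(
    agentId: "approval-agent",
    message: Message.User($"Operation {op.Id} completed: approved"),
    sessionId: sessionId,
    resolution: SessionIdResolution.RequireExisting);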

Runtime Architecture

AgentHost builds on Core primitives to provide a minimal execution entry point:

                    ┌──────────────────────────────────────┐
                    │      YOUR APPLICATION                │
                    │  host.RunAsync(...)                  │
                    └──────────────┬───────────────────────┘
                                   │
          ╔════════════════════════▼═══════════════════════╗
          ║                                                ║
          ║         AgentHost (Execution Layer)            ║
          ║    Session resolution + agent execution        ║
          ║                                                ║
          ╚══╦═══════════════╦════════════════════════════╦╝
             ║               ║                            ║
             ▼               ▼                            ▼
    ┌─────────────────┐ ┌──────────────┐        ┌────────────────────┐
    │IAgentConfig     │ │ISession      │        │AgentCircuits.Core  │
    │Repository       │ │Service       │        │(Agent, Tools, etc.)│
    ├─────────────────┤ ├──────────────┤        └────────────────────┘
    │• In-Memory      │ │• In-Memory   │
    │• File (JSON)    │ │• JSON File   │
    │• Custom         │ │• Custom      │
    └─────────────────┘ └──────────────┘

Key Design Principles:

  • 🎯 Builds on Core: Uses Agent, Session, Message primitives unchanged
  • 🔌 Dependency Inversion: Delegates to pluggable repositories (config, session)
  • 🧭 Minimal Surface: Single entry point for session-aware execution
  • 💾 Separation of Concerns: Host executes, repositories store, Core runs

See Also:

  • 🔧 Runtime API Reference - Full API documentation
  • 🎯 Examples: AsyncOperationsDemo, DeclarativeAgentsDemo

Web Management Portal

AgentCircuits includes a complete web-based management portal for monitoring, configuring, and interacting with agents:

Features

Dashboard:

  • Real-time system metrics (throughput, tokens/s, messages/s)
  • Performance analytics (time to first token, step duration)
  • Top active users and agents
  • Recent sessions overview

Agent Management:

  • Create, edit, and delete agent configurations
  • Configure system prompts, tools, model settings, thinking config, and max tokens
  • Enable/disable agents
  • System prompt visibility controls
  • Agent card preview and discovery
  • Per-tool approval configuration (require user confirmation before execution)

Session Viewer:

  • Browse all agent sessions
  • View full conversation history with events
  • Inspect tool calls and results
  • Token usage and cost tracking per session
  • Session task list management
  • Export sessions as HTML or JSON for debugging and sharing

Interactive Playground:

  • Chat with any configured agent via REST API execution
  • Real-time streaming responses with session-scoped event routing
  • Session cancellation with execution tracking
  • Tool execution visibility with session-scoped "Always Allow"
  • Multi-turn conversations

User Management:

  • User registration and authentication
  • User-based session access control
  • Admin controls and role management

Provider Management:

  • View available LLM providers (Anthropic, OpenAI, Gemini, Grok, Ollama, Bedrock)
  • Configure API keys and endpoints
  • Test provider connectivity

MCP Server Management:

  • Configure MCP server connections
  • Enable/disable MCP tools
  • Test MCP server connectivity

Session Input Routing:

  • Content block routing with configurable inbox auto-injection
  • REST API for session inputs and external message injection
  • Delivery mode controls for pending inputs

Async Operations Monitoring:

  • View pending, completed, and expired operations
  • Monitor operation status and results with OperationFailed event broadcasting
  • Manual completion/cancellation
  • Agent notification service for no-client execution

Getting Started

Option 1: Docker (Recommended for Production)

docker run -d \
  -p 8080:8080 \
  -v ./data:/data \
  -e AGENTCIRCUITS__PROVIDERS__ANTHROPIC__API_KEY=sk-ant-... \
  -e AGENTCIRCUITS__PROVIDERS__BEDROCK__ENABLED=true \
  ghcr.io/agent-circuits/agentcircuits:latest

Or with docker-compose:

services:
  agentcircuits:
    image: ghcr.io/agent-circuits/agentcircuits:latest
    ports:
      - "8080:8080"
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - AGENTCIRCUITS__STORAGE__PATH=/data
      - AGENTCIRCUITS__PROVIDERS__ANTHROPIC__ENABLED=true
    volumes:
      - ./data:/data

Option 2: AgentCircuits.Server (.NET)

Run Portal + Chat UI + SignalR hubs together:

dotnet run --project agentcircuits.server/src/AgentCircuits.Server.csproj
# Navigate to http://localhost:8080

Option 3: Portal Only

cd src/AgentCircuits.Portal
dotnet run
# Navigate to http://localhost:8080

The portal uses file-based storage by default:

  • Agents: data/agents/*.json
  • Sessions: data/sessions/*.json
  • Providers: data/providers/*.json
  • MCP Servers: data/mcp-servers/*.json
  • Async Operations: data/async-operations/*.json

Portal Architecture

// Portal uses Runtime repositories for data persistence
services.AddSingleton<IAgentConfigRepository, FileBasedAgentConfigRepository>();
services.AddSingleton<ISessionService, JsonFileSessionService>();
services.AddSingleton<IProviderRepository, FileBasedProviderRepository>();
services.AddSingleton<IMcpServerRepository, FileBasedMcpServerRepository>();
services.AddSingleton<IAsyncOperationRepository, FileBasedAsyncOperationRepository>();

// AgentHost for agent execution
services.AddSingleton<IAgentHost, AgentHost>();

Technology Stack:

  • Backend: ASP.NET Core 9.0 Minimal APIs
  • Frontend: Vanilla JavaScript (no framework dependencies)
  • Storage: JSON file-based repositories (production-ready)
  • Real-time: Server-Sent Events for streaming

Key Features:

  • 🎨 Modern UI: Clean, responsive design with dark mode
  • 📊 Real-time Updates: Streaming responses, metrics, and live configuration propagation via SignalR
  • 🔍 Full Observability: Session viewer with complete event history
  • 🎮 Interactive: Playground for testing agents with REST API execution
  • 📁 File-based Storage: Simple, debuggable, version-control friendly
  • 🔌 No Dependencies: Pure JavaScript, no build step required
  • 👤 User Management: Built-in user entities and authentication

Chat Interface (AgentCircuits.UI)

A modern end-user chat interface for interacting with AgentCircuits agents, built with SvelteKit. While the Portal provides admin/management capabilities, the UI provides a polished chat experience for end users.

Installation

NuGet Package (Recommended):

dotnet add package AgentCircuits.UI

Usage:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Mount at root "/"
app.UseAgentCircuitsUI();

// Or mount at a specific path (e.g., behind reverse proxy at /myapp/chat)
app.UseAgentCircuitsUI("/chat");

// With custom options
app.UseAgentCircuitsUI("/chat", options =>
{
    options.EnableCacheControlHeaders = false;
    options.EnableSpaFallback = true;
});

app.Run();

The NuGet package includes pre-built SvelteKit assets as embedded resources—no Node.js required at runtime. When mounted at a sub-path, the UI automatically configures API calls to use the correct base path.

Features

Chat Interface:

  • Streaming responses - Real-time text streaming via SignalR
  • Thinking/reasoning display - Collapsible reasoning content blocks with thought images and reasoning stats
  • Dark/light theme - System preference detection with manual override
  • Responsive design - Desktop and mobile support
  • HTML session export - Export conversations as self-contained HTML files
  • Image preview - Clickable image thumbnails with full-size modal and download
  • Activity indicators - Chat activity status displayed above the input form

Tool Call Visualisation:

  • Collapsible tool calls - View tool name, arguments, and results
  • Status indicators - Pending (spinner), success (tick), error (cross), cancelled
  • Result pairing - Tool results matched to invocations by ID
  • Tool approval dialogs - Interactive approval with session-scoped "Always Allow" option

Task Management Panel:

  • Task tracking - Sidebar panel for tracking agent work progress
  • Shared task lists - Task lists shared across sessions via API
  • Real-time sync - Task panel synced with shared task API

Session Management:

  • Session list - Browse previous conversations with LLM-generated names
  • Session switching - Load any session's history
  • Session cancellation - Cancel running agent executions with active session indicators
  • New session - Start fresh conversations
  • Delete session - Remove unwanted sessions
  • Session export - Download sessions as HTML or JSON
  • Session-scoped event routing - Per-session EventStores for isolated event streams

Agent Details:

  • Agent specs popover - View agent details including system prompt and configuration
  • Live configuration updates - Agent list updates in real-time via SignalR ConfigChanged events

Operation Notifications:

  • Pending inputs UI - View and respond to pending operation notifications
  • Delivery mode controls - Configure how notifications are delivered

Metrics Display:

  • Token counts - Input/output tokens per turn
  • Cost tracking - USD cost when available
  • Performance metrics - TTFT, tokens per second

Keyboard Shortcuts:

Shortcut              Action
Enter                 Send message
Shift+Ctrl/Cmd+O      New chat
Shift+Ctrl/Cmd+D      Delete conversation
Ctrl/Cmd+K            Search conversations

Running the Chat UI

Option 1: Docker (Recommended for Production)

docker run -d -p 8080:8080 \
  -e AGENTCIRCUITS__PROVIDERS__ANTHROPIC__API_KEY=sk-ant-... \
  ghcr.io/agent-circuits/agentcircuits:latest
# Chat UI: http://localhost:8080
# Portal:  http://localhost:8080/portal

Option 2: AgentCircuits.Server (.NET)

dotnet run --project agentcircuits.server/src/AgentCircuits.Server.csproj
# Chat UI: http://localhost:8080
# Portal:  http://localhost:8080/portal

Option 3: Development Setup (for UI modifications)

For UI development (modifying the SvelteKit source):

# 1. Start the backend server
dotnet run --project agentcircuits.server/src/AgentCircuits.Server.csproj

# 2. Install UI dependencies
cd agentcircuits.ui
npm install

# 3. Start the UI development server (proxies to backend on port 8080)
npx vite

# 4. Open http://localhost:5173

Architecture

AgentCircuits.UI (SvelteKit)
         │
         │ REST API + SignalR WebSocket
         ▼
AgentCircuits.Server / Portal (ASP.NET Core)
         │
         │ Agent Execution
         ▼
AgentCircuits SDK (Agent, Tools, LLM Providers)

Technology Stack:

  • Framework: SvelteKit 2.48 + Svelte 5 with runes
  • UI Components: bits-ui + Svelte Sonner (toast notifications)
  • Styling: TailwindCSS 4 + @tailwindcss/typography + @tailwindcss/forms
  • Real-time: SignalR 10.0 WebSocket
  • Build: Vite 7.2
  • Markdown: remark + mdsvex + rehype
  • PDF: pdfjs-dist for in-browser PDF rendering

A2A Protocol (Agent-to-Agent Communication)

AgentCircuits implements the Agent2Agent (A2A) Protocol, an open standard for cross-platform agent communication:

What is A2A?

A2A enables agents from different frameworks and languages to collaborate:

  • Call remote agents: Python, Java, JavaScript agents from C#
  • Expose AgentCircuits agents: Make your agents callable by external systems
  • Standard protocol: JSON-RPC 2.0 over HTTP(S) with agent card discovery
  • Enterprise-ready: OAuth, API keys, bearer token authentication

A2A vs MCP

Key Difference:

  • MCP (Model Context Protocol): Agents → Tools/Resources (databases, APIs, calculators)
  • A2A Protocol: Agents ↔ Agents (multi-turn collaboration, reasoning, negotiation)

Both protocols are complementary: A2A agents use MCP to access their tools.
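
A minimal sketch of a single agent using both, with types taken from the A2A and MCP sections of this README (the remoteAgentRepository instance and server paths are placeholders):

using AgentCircuits.A2A.Tools;
using AgentCircuits.Mcp;

var hybridAgent = new Agent
{
    Model = "claude-sonnet-4-5",
    // A2A: delegate to remote agents registered in the repository
    Tools = [new CallRemoteAgentTool(remoteAgentRepository)],
    // MCP: give this agent its own tools via an MCP server
    McpServers = new()
    {
        ["filesystem"] = new McpServerConfig
        {
            Type = McpTransportType.Stdio,
            Command = "npx",
            Args = ["-y", "@modelcontextprotocol/server-filesystem", "/my/project"]
        }
    }
};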

Call Remote Agents

using AgentCircuits.A2A;
using AgentCircuits.A2A.Tools;

// Register remote agent in Portal or via repository
var remoteAgentConfig = new RemoteAgentConfig
{
    Id = "python-researcher",
    Name = "Python Research Agent",
    BaseUrl = "https://research.company.com",
    Auth = new AuthenticationConfig
    {
        Type = "Bearer",
        Token = "your-api-key"
    }
};

// Call remote agent from your agent
var agent = new Agent
{
    SystemPrompt = "You coordinate research tasks",
    Tools = [new CallRemoteAgentTool(remoteAgentRepository), BuiltInTools.Write],
    Model = "claude-sonnet-4-5"
};

await agent.SendAsync("Use python-researcher to analyze the latest ML papers");
// Agent automatically calls remote Python agent via A2A protocol

Expose Agent Circuit Agents

// Configure agent for A2A exposure in AgentConfig
var agentConfig = new AgentConfig
{
    Id = "code-reviewer",
    Name = "Code Review Agent",
    SystemPrompt = "You review code for bugs and best practices",
    ToolNames = ["read", "grep"],
    ModelId = "claude-sonnet-4-5",

    // A2A exposure settings
    A2A = new A2AExposureSettings
    {
        Enabled = true,
        SkillsOverride = ["code_review", "security_analysis", "performance_optimization"],
        RequiredAuth = new AuthenticationConfig { Type = "Bearer" }
    }
};

// Agent is now discoverable via (default server base path):
// GET https://your-domain.com/portal/a2a/agents/code-reviewer/.well-known/agent-card.json

// And callable via:
// POST https://your-domain.com/portal/a2a/agents/code-reviewer (JSON-RPC 2.0)

The /portal prefix comes from the default AgentCircuits:Portal:BasePath setting. If you mount the portal at a different path (or at /), the A2A route prefix changes accordingly.

Agent Card Discovery

A2A uses agent cards for discovery (similar to OpenAPI specs):

{
  "name": "code-reviewer",
  "description": "Reviews code for bugs and best practices",
  "url": "https://your-domain.com/portal/a2a/agents/code-reviewer",
  "skills": ["code_review", "security_analysis", "performance_optimization"],
  "authentication": {
    "type": "bearer"
  }
}

Portal Integration

The Portal provides full A2A management:

Remote Agents Page:

  • Add/edit/delete remote agents
  • Test connectivity via agent card discovery
  • Status indicators (online/offline)

Exposed Agents Page:

  • Toggle A2A exposure per agent
  • View generated agent cards
  • Copy discovery URLs
  • Configure authentication

Dashboard:

  • A2A activity metrics
  • Remote agent usage statistics

Architecture

┌─────────────────────────────────────────────────────────────┐
│                    AgentCircuits A2A Integration                 │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌───────────────┐              ┌──────────────────┐       │
│  │ A2A Client    │◄────────────►│ Remote Agents    │       │
│  │ (Outbound)    │              │ (Python/Java/JS) │       │
│  ├───────────────┤              └──────────────────┘       │
│  │• AgentCircuits │                                          │
│  │  Client       │              ┌──────────────────┐       │
│  │• Agent Card   │              │ A2A Server       │       │
│  │  Cache        │◄────────────►│ (Inbound)        │       │
│  │• CallRemote   │              ├──────────────────┤       │
│  │  AgentTool    │              │• Discovery       │       │
│  └───────────────┘              │  Endpoints       │       │
│                                 │• Message Handler │       │
│                                 │• Task Tracking   │       │
│                                 └──────────────────┘       │
│                                                             │
│  Official A2A .NET SDK (v0.3.3-preview)                    │
│  • JSON-RPC 2.0 protocol                                   │
│  • Agent card generation                                   │
│  • Authentication                                          │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Key Components:

  • 📦 AgentCircuits.A2A: Client and server implementations
  • 🔧 CallRemoteAgentTool: Built-in tool for calling remote agents
  • 📋 IRemoteAgentRepository: Manage remote agent configurations
  • 🎴 Agent Cards: JSON-based agent capability discovery
  • 🔐 Authentication: Bearer tokens, API keys (OAuth/mTLS in v2)


Agent Hooks

Add cross-cutting concerns like security, logging, and cost tracking. Hooks are agent-level, allowing multi-tenant scenarios with different policies per agent:

// Create agent with hooks
var agent = new Agent
{
    Model = "claude-sonnet-4-5",

    // Security: Block dangerous operations
    BeforeToolUse = async ctx =>
    {
        if (ctx.Tool.Name == "Bash" && ctx.Arguments["command"].ToString().Contains("rm -rf"))
            return ToolResult.Denied("Dangerous command blocked");
        return null; // Allow
    },

    // Auto-backup: Save files before modifications
    AfterToolUse = async ctx =>
    {
        if (ctx.Tool.Name is "Write" or "Edit" && ctx.Result?.IsSuccess == true)
        {
            var filePath = ctx.Arguments["file_path"].ToString();
            await CreateBackupAsync(filePath);
        }
        return null;
    },

    // Cost tracking: Monitor LLM spend
    AfterModel = async ctx =>
    {
        if (ctx.Response?.Usage?.TotalCostUsd is decimal cost)
        {
            totalCost += cost;
            Console.WriteLine($"[Cost: ${cost:F4} | Total: ${totalCost:F2}]");
        }
        return null;
    }
};

Fluent Hook Builder for common patterns:

// Composable hook configuration
agent
    .BlockTool("bash", "Security policy prohibits shell access")
    .AllowOnlyTools("read", "write", "grep")
    .OnBeforeToolUse(MyHooks.ValidateFilePaths);

Sample Hook Patterns (see SdkShowcase/HookPatterns/):

  • SecurityHooks - Validate and block operations
  • CostTrackingHooks - Track API spending
  • AuditLoggingHooks - Log all tool uses
  • FileBackupHooks - Auto-backup before edits
  • RateLimitingHooks - Throttle API calls
  • UsageStatisticsHooks - Collect metrics

Tool Matchers (Pattern-Based Tool Filtering)

Use regex-based patterns to target hooks at specific tools:

// Block tools matching a pattern (regex supported)
agent.BlockToolPattern("Bash|Edit", "Destructive tools are disabled");

// Hook only fires for matching tools
agent.OnBeforeToolUse("Write|Edit", async ctx =>
{
    // Only called for Write or Edit tools
    Console.WriteLine($"File operation: {ctx.Tool.Name}");
    return null;
});

// After-hook with pattern matching
agent.OnAfterToolUse(".*Fetch.*", async ctx =>
{
    // Matches WebFetch, DataFetch, etc.
    Console.WriteLine($"Fetch completed: {ctx.Result?.Content?.Length ?? 0} chars");
    return null;
});

Pattern Syntax:

  • Exact match: "Bash" - matches only the Bash tool
  • Alternation: "Bash|Edit|Write" - matches any of the listed tools
  • Regex patterns: "^File.*" - matches tools starting with "File"
  • Case-sensitive: Patterns match tool names exactly as defined

User Prompt Hooks

Intercept, validate, or modify user prompts before they are processed:

var agent = new Agent
{
    Model = "claude-sonnet-4-5",

    // Validate and modify user prompts
    OnUserPrompt = async ctx =>
    {
        // Block inappropriate content
        if (ctx.Prompt.Contains("password", StringComparison.OrdinalIgnoreCase))
        {
            return new UserPromptResult
            {
                Block = true,
                Reason = "Prompts containing passwords are not allowed"
            };
        }

        // Add context to the prompt
        return new UserPromptResult
        {
            AdditionalContext = $"Current directory: {ctx.WorkingDirectory}",
            ModifiedPrompt = null  // Keep original prompt
        };
    }
};

UserPromptResult Options:

  • Block - When true, the prompt is rejected and not processed
  • Reason - Message explaining why the prompt was blocked
  • ModifiedPrompt - Replace the original prompt text
  • AdditionalContext - Inject additional context alongside the prompt
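
For example, ModifiedPrompt replaces the text the model actually sees. A minimal sketch that appends a standing instruction to every prompt (the instruction text is illustrative):

var agent = new Agent
{
    Model = "claude-sonnet-4-5",
    OnUserPrompt = async ctx => new UserPromptResult
    {
        // Rewrite the prompt while keeping the user's original intent
        ModifiedPrompt = $"{ctx.Prompt}\n\nRespond concisely and reference exact file paths."
    }
};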

UserPromptHookContext Properties:

  • SessionId - Current session identifier
  • AgentName - Name of the agent receiving the prompt
  • WorkingDirectory - Current working directory
  • Prompt - The original user prompt text
  • FullMessage - Full Message object for multimodal inputs (null for text-only)

Subagent Hooks

Monitor sub-agent lifecycle events for logging, metrics, or coordination:

var orchestrator = new Agent
{
    Model = "claude-sonnet-4-5",
    SubAgents = [securityAgent, performanceAgent],

    // Called when a subagent starts
    OnSubagentStart = async ctx =>
    {
        Console.WriteLine($"[{ctx.Timestamp:HH:mm:ss}] Starting subagent: {ctx.SubagentName}");
        Console.WriteLine($"  Parent: {ctx.ParentAgentName}");
        Console.WriteLine($"  Prompt: {ctx.Prompt}");
    },

    // Called when a subagent completes
    OnSubagentStop = async ctx =>
    {
        Console.WriteLine($"[{ctx.Timestamp:HH:mm:ss}] Subagent completed: {ctx.SubagentName}");
        if (ctx.Result != null)
        {
            Console.WriteLine($"  Success: {ctx.Result.Success}");
            Console.WriteLine($"  Output: {ctx.Result.TextOutput.Length} chars");
            if (!string.IsNullOrEmpty(ctx.Result.ErrorMessage))
                Console.WriteLine($"  Error: {ctx.Result.ErrorMessage}");
        }
    }
};

SubagentHookContext Properties:

  • SessionId - Parent session identifier
  • ParentAgentName - Name of the orchestrating agent
  • SubagentName - Name of the sub-agent being executed
  • Prompt - Prompt passed to the sub-agent
  • Result - Execution result (only in OnSubagentStop, null in OnSubagentStart)
  • Timestamp - UTC timestamp of the lifecycle event

CommandHook (Polyglot Hooks)

Execute external shell scripts as hooks, enabling polyglot hook implementations in Python, Node.js, Bash, or any language:

using AgentCircuits.Hooks;

// Use a Python script as a tool hook
agent.BeforeToolUse = CommandHook.FromCommand("python3 ./hooks/validate.py");

// Use a Node.js script for user prompt validation
agent.OnUserPrompt = CommandHook.FromCommandForUserPrompt("node ./hooks/prompt-filter.js");

// Use a shell script for subagent monitoring
agent.OnSubagentStart = CommandHook.FromCommandForSubagent("./hooks/audit-subagent.sh");

JSON Protocol for Tool Hooks:

The hook command receives context as JSON on stdin:

{
  "session_id": "abc123",
  "agent_name": "code-reviewer",
  "tool_name": "Bash",
  "tool_input": { "command": "ls -la" },
  "working_directory": "/home/user/project"
}

Exit Code Semantics:

Exit Code   Meaning
0           Success - allow execution (parse stdout for modifications)
2           Block - deny the operation (stderr contains reason)
Other       Warning - proceed but log stderr

JSON Output for Modifications (stdout, exit 0):

{
  "block": false,
  "reason": "optional reason",
  "modified_result": {
    "content": "override content",
    "is_success": true,
    "error_message": null
  }
}

Example Python Hook:

#!/usr/bin/env python3
import json
import sys

# Read context from stdin
context = json.load(sys.stdin)

# Block dangerous rm commands
if context['tool_name'] == 'Bash':
    cmd = context['tool_input'].get('command', '')
    if 'rm -rf' in cmd:
        print('Blocked dangerous rm -rf command', file=sys.stderr)
        sys.exit(2)  # Block

# Allow execution
sys.exit(0)

User Prompt Hooks via CommandHook:

#!/usr/bin/env python3
import json
import sys

context = json.load(sys.stdin)
prompt = context['prompt']

# Add context to todo-related prompts (the prompt itself is passed through unchanged)
if 'todo' in prompt.lower():
    print(json.dumps({
        "modified_prompt": prompt,
        "additional_context": "Remember to update the task list when done."
    }))
    sys.exit(0)

# Block sensitive content
if 'api_key' in prompt.lower():
    print(json.dumps({
        "block": True,
        "reason": "Prompts containing API keys are not allowed"
    }))
    sys.exit(0)

sys.exit(0)

Subagent Hooks via CommandHook:

Subagent hooks are fire-and-forget (informational only):

{
  "session_id": "abc123",
  "parent_agent_name": "orchestrator",
  "subagent_name": "security_scanner",
  "prompt": "Scan for vulnerabilities",
  "result": {
    "success": true,
    "error_message": null,
    "text_output": "No vulnerabilities found"
  }
}

Key Benefits:

  • Polyglot Support: Write hooks in any language (Python, Node.js, Bash, Ruby, Go, etc.)
  • Process Isolation: Hooks run in separate processes for safety
  • Standard Protocol: JSON stdin/stdout for easy integration
  • Graceful Degradation: Hook failures don't crash agent execution

Logging Configuration

AgentCircuits uses Microsoft.Extensions.Logging for structured, high-performance logging throughout the SDK. Configure log levels via appsettings.json:

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "AgentCircuits": "Information",
      "AgentCircuits.Internal.TurnExecutor": "Debug",
      "AgentCircuits.Internal.ToolExecutor": "Debug",
      "AgentCircuits.Host": "Information"
    }
  }
}
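
If you prefer code-based configuration, the same filters can be expressed with the standard Microsoft.Extensions.Logging APIs. How the resulting logger factory reaches the SDK depends on your host setup (in a generic or ASP.NET Core host the Logging section above is applied automatically), so treat this as a sketch rather than an AgentCircuits-specific API:

using Microsoft.Extensions.Logging;

// Standard Microsoft.Extensions.Logging setup mirroring the appsettings.json above.
// Wiring the factory into your AgentCircuits host is left to your application (assumption).
using var loggerFactory = LoggerFactory.Create(builder =>
{
    builder
        .AddConsole()
        .SetMinimumLevel(LogLevel.Information)
        .AddFilter("AgentCircuits", LogLevel.Information)
        .AddFilter("AgentCircuits.Internal.TurnExecutor", LogLevel.Debug)
        .AddFilter("AgentCircuits.Internal.ToolExecutor", LogLevel.Debug)
        .AddFilter("AgentCircuits.Host", LogLevel.Information);
});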

Log Categories:

Category Event ID Range Description
AgentCircuits 1000-1099 Agent lifecycle (start, complete, cancel)
AgentCircuits.Internal.TurnExecutor 1100-1199 Turn execution details
AgentCircuits.Internal.ToolExecutor 1200-1299 Tool execution (start, complete, not found)
AgentCircuits.Sessions 1300-1399 Session/context events, compaction
AgentCircuits.Internal.McpManager 1400-1499 MCP server initialisation
AgentCircuits.Hooks 1500-1599 Hook invocations
AgentCircuits.Host 2000-2199 AgentHost and AsyncOperationService

Recommended Log Levels:

  • Production: Information - Agent lifecycle, operation results
  • Debugging: Debug - Turn/tool execution details
  • Troubleshooting: Trace - Full context, parameters, payloads

Key Events:

[Information] Agent execution started. SessionId=abc, Agent=reviewer, Model=claude-sonnet-4-5
[Debug] Tool execution started. Tool=read, SessionId=abc
[Debug] Tool execution completed. Tool=read, Success=True, DurationMs=45
[Information] Agent execution completed. SessionId=abc, TotalTurns=3, TotalTokens=1250, DurationMs=4500

MCP Integration

Connect to Model Context Protocol servers for extended capabilities:

using AgentCircuits.Mcp;

var agent = new Agent
{
    Model = "claude-sonnet-4-5",
    Tools = BuiltInTools.Safe,  // Built-in tools
    McpServers = new()
    {
        ["filesystem"] = new McpServerConfig
        {
            Type = McpTransportType.Stdio,
            Command = "npx",
            Args = ["-y", "@modelcontextprotocol/server-filesystem", "/my/project"]
        },
        ["github"] = new McpServerConfig
        {
            Type = McpTransportType.Http,
            Url = "https://api.githubcopilot.com/mcp/"
        }
    }
};

// MCP tools are automatically loaded and available alongside built-in tools
await agent.SendAsync("Read the README.md file");

MCP Support:

  • ✅ Stdio, Streamable HTTP, and SSE transports
  • ✅ Built on ModelContextProtocol v0.8.0-preview.1
  • ✅ MCP tools work as regular ITool instances
  • ✅ Compose MCP + built-in + custom tools

Advanced Features

Iteration Control

The agent naturally stops when the LLM signals completion, with MaxIterations as a safety limit:

// Default behavior - natural stopping
var agent1 = new Agent
{
    MaxIterations = 50  // Default, safety limit
};

// Strict limit for simple tasks
var agent2 = new Agent
{
    MaxIterations = 10
};

// Higher limit for complex research
var agent3 = new Agent
{
    MaxIterations = 200
};

// Long-form content generation
var agent4 = new Agent
{
    ContinueOnMaxTokens = true,
    MaxIterations = 100
};

The agent automatically stops when the LLM returns FinishReason.Stop with no tools used.

Session Management

Persist and resume conversations with powerful session helpers:

// Save session for later
var sessionId = agent.SessionId;

// Resume in new process/request
var session = await sessionService.GetSessionAsync(sessionId);
var resumedAgent = new Agent
{
    Session = session,
    Model = "claude-sonnet-4-5"
};

await resumedAgent.SendAsync("Continue from where we left off");

// Fork a session to try different approaches
var experimentalSession = SessionHelpers.Fork(session);

// Rewind by removing last 10 events
var rewoundSession = SessionHelpers.Rewind(session, 10);

// Resume at a specific event
var checkpointSession = SessionHelpers.ResumeAt(session, eventIndex: 42);

// Remove failed tool uses for retry logic
var cleanedSession = SessionHelpers.RemoveFailedTools(session);

// Create checkpoints
var checkpointId = await SessionHelpers.CreateCheckpoint(session,
    sessionService,
    name: "before_risky_operation");

Session Storage Options:

// In-memory (development)
var sessionService = ISessionService.InMemory();

// JSON file (simple persistence)
var sessionService = ISessionService.JsonFile("./sessions");

Production Storage (PostgreSQL)

For multi-user production deployments, use AgentCircuits.Storage.Sql for PostgreSQL-backed storage with session ownership and access control:

dotnet add package AgentCircuits.Storage.Sql

using AgentCircuits.Storage.Sql;

// With Portal - use UseCustomStorage + AddAgentCircuitsSqlStorage
builder.Services.AddAgentCircuitsPortal(portal =>
{
    portal.UseCustomStorage();  // Disable built-in file storage
});

builder.Services.AddAgentCircuitsSqlStorage(options =>
{
    options.ConnectionString = "Host=localhost;Database=agentcircuits;Username=user;Password=pass";
    options.Schema = "public";  // Optional, defaults to "public"
});
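
In production you would typically read the connection string from configuration rather than hard-coding it. A minimal sketch, assuming a standard ASP.NET Core host and a connection string named "AgentCircuits" in appsettings.json (the name is illustrative, not required by the SDK):

builder.Services.AddAgentCircuitsSqlStorage(options =>
{
    // "AgentCircuits" is an assumed connection string name in your appsettings.json
    options.ConnectionString = builder.Configuration.GetConnectionString("AgentCircuits")
        ?? throw new InvalidOperationException("Connection string 'AgentCircuits' is missing.");
    options.Schema = "public";  // optional, defaults to "public"
});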

Features:

  • 🔐 Multi-User: Session ownership with user-based access control
  • 📊 Efficient Queries: Pagination, filtering by user/agent/date
  • 🔄 All Repositories: Sessions, agents, providers, MCP servers, channels, A2A
  • 🐘 PostgreSQL: Battle-tested relational storage with JSONB for complex data

Session Names (LLM-Generated)

Sessions can be automatically named using LLM-generated summaries for easy identification:

// Generate a session name based on conversation content
var sessionName = await sessionService.GenerateSessionNameAsync(
    sessionId,
    model: "claude-haiku-4"  // Fast, cheap model for summarisation
);
// Returns: "Debugging Authentication Flow" or "API Rate Limiting Discussion"

// Update session with the generated name
await sessionService.UpdateSessionAsync(sessionId, name: sessionName);

// Sessions in the portal/UI now show meaningful names instead of IDs

Key Features:

  • 🏷️ Automatic Naming: Generate descriptive names from conversation content
  • Fast & Cheap: Uses lightweight models (Haiku) for summarisation
  • 🔄 On-Demand: Generate names when needed, not on every message
  • 📊 Portal Integration: Session viewer displays names in sidebar
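
To keep naming cheap, generate a name once per session rather than on every message. A minimal sketch of that pattern, assuming the stored session exposes a Name property (the property name is an assumption, not confirmed API):

// Name the session only if it has not been named yet (session.Name is assumed)
var session = await sessionService.GetSessionAsync(sessionId);
if (string.IsNullOrEmpty(session.Name))
{
    var name = await sessionService.GenerateSessionNameAsync(sessionId, model: "claude-haiku-4");
    await sessionService.UpdateSessionAsync(sessionId, name: name);
}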

Multi-Provider Support

Use different LLMs for different tasks:

using AgentCircuits.Providers.Anthropic;
using AgentCircuits.Providers.OpenAI;
using AgentCircuits.Providers.Gemini;
using AgentCircuits.Providers.Grok;
using AgentCircuits.Providers.Ollama;
using AgentCircuits.Providers.Bedrock;

// Classify task complexity with fast model
var classification = await Agent.Query(
    $"Is this complex? {userQuery}",
    model: Anthropic.LanguageModel("claude-haiku-4"),
    systemPrompt: "Classify as 'simple' or 'complex'. Respond with one word only."
);

// Route to appropriate model based on complexity
var model = classification.Contains("complex")
    ? OpenAI.LanguageModel("gpt-4o")           // Complex: expensive model
    : Gemini.LanguageModel("gemini-2.5-flash");  // Simple: fast & cheap

var result = await Agent.Query(userQuery, model: model);

// For AWS infrastructure, use Bedrock
var bedrockResult = await Agent.Query(
    userQuery,
    model: Bedrock.LanguageModel("amazon.nova-lite-v1:0")  // AWS Bedrock
);

// For xAI Grok models
var grokResult = await Agent.Query(
    userQuery,
    model: Grok.LanguageModel("grok-3")  // xAI Grok
);

// For local/privacy-sensitive tasks, use Ollama
var localResult = await Agent.Query(
    userQuery,
    model: Ollama.LanguageModel("llama3")  // Localhost:11434 by default
);

Architecture

AgentCircuits is built around the Agent as the central orchestrator. All capabilities—tools, providers, sessions, hooks, and observability—connect through the agent runtime:

                            ┌──────────────────────────┐
                            │    YOUR APPLICATION      │
                            │   Agent.Query(...)       │
                            │   agent.SendAsync(...)   │
                            │   agent.ReceiveAsync()   │
                            │   agent.GetContextStats()│
                            └────────────┬─────────────┘
                                         │
                    ╔════════════════════▼════════════════════╗
                    ║                                         ║
                    ║            🤖 AGENT CORE                ║
                    ║    (Conversation Engine & Runtime)      ║
                    ║                                         ║
                    ╚══╦═════╦═════╦═════╦═════╦═════╦═════╦═╝
                       ║     ║     ║     ║     ║     ║     ║
        ┌──────────────╨┐ ┌──╨────────┐ ║     ║     ║     ║
        │ LLM PROVIDERS │ │   TOOLS    │ ║     ║     ║     ║
        ├───────────────┤ ├────────────┤ ║     ║     ║     ║
        │ • Anthropic   │ │ Built-in:  │ ║     ║     ║     ║
        │ • OpenAI      │ │  Read      │ ║     ║     ║     ║
        │ • Gemini      │ │  Write     │ ║     ║     ║     ║
        │ • Grok        │ │  Edit      │ ║     ║     ║     ║
        │ • Ollama      │ │  Bash      │ ║     ║     ║     ║
        │ • Bedrock     │ │  Grep/Glob │ ║     ║     ║     ║
        │               │ │  Task*     │ ║     ║     ║     ║
        │ ModelRegistry │ │  Memory    │ ║     ║     ║     ║
        │ (context      │ │  WebFetch  │ ║     ║     ║     ║
        │  limits)      │ │  WebSearch │ ║     ║     ║     ║
        └───────────────┘ │  Base64E/D │ ║     ║     ║     ║
                          │  AskUser   │ ║     ║     ║     ║
                          │  (21 total)│ ║     ║     ║     ║
                          │            │ ║     ║     ║     ║
                          │ Custom:    │ ║     ║     ║     ║
                          │  [Tool]    │ ║     ║     ║     ║
                          │  MCP Srvs  │ ║     ║     ║     ║
                          └────────────┘ ║     ║     ║     ║
                                         ║     ║     ║     ║
                ┌────────────────────────╨┐ ┌──╨─────────┐ ║
                │       SESSIONS         │ │   EVENTS   │ ║
                ├────────────────────────┤ ├────────────┤ ║
                │ Storage:               │ │ TextEvent  │ ║
                │  • InMemory            │ │ ToolUse    │ ║
                │  • JSON File           │ │ ToolResult │ ║
                │  • PostgreSQL (Sql)    │ │ SystemEvt  │ ║
                │                        │ │ TurnMetric │ ║
                │ Helpers:               │ │            │ ║
                │  • Fork (experiment)   │ │IAsyncEnum  │ ║
                │  • Rewind (undo)       │ │ streaming  │ ║
                │  • Checkpoint (save)   │ └────────────┘ ║
                │  • RemoveFailedTools   │                ║
                └────────────────────────┘                ║
                                                          ║
        ┌─────────────────────────────────────────────────╨┐
        │                  OBSERVABILITY                    │
        ├───────────────────────────────────────────────────┤
        │ Context Monitoring:          Performance Metrics: │
        │  • agent.GetContextStats()    • TurnMetricsEvent  │
        │  • Token usage & limits       • Throughput (t/s)  │
        │  • Distribution breakdown     • TTFT (streaming)  │
        │  • Recommended actions        • Tool statistics   │
        │                                                    │
        │ Auto-Compaction: Triggers @ 90% of context limit  │
        └────────────────────────────────────────────────────┘

   ┌──────────────────╨──┐ ┌────────────────╨───┐ ┌──────────────────╨──┐
   │   AGENT-LEVEL HOOKS │ │   SUB-AGENTS       │ │   DEEP AGENTS       │
   │  (Per-Agent Policy) │ │  (Multi-Agent)     │ │ (Long-Horizon)      │
   ├─────────────────────┤ ├────────────────────┤ ├─────────────────────┤
   │ • BeforeModel       │ │ • Orchestrator     │ │ • Memory Tool       │
   │ • AfterModel        │ │ • Specialist       │ │   (cross-session    │
   │ • BeforeToolUse     │ │ • Task delegation  │ │    learning)        │
   │ • AfterToolUse      │ │ • Event aggregate  │ │ • TaskCreate/Update │
   │ • OnSessionStart    │ │ • Session isolate  │ │   (planning)        │
   │ • OnAgentStop       │ └────────────────────┘ │ • High MaxIter      │
   │                     │                        │   (200+)            │
   │ Fluent Builder:     │                        │ • Auto-compaction   │
   │  BlockTool()        │                        │   (context mgmt)    │
   │  AllowOnlyTools()   │                        │ • Multi-day tasks   │
   │  OnBeforeToolUse()  │                        │ • Research agents   │
   └─────────────────────┘                        └─────────────────────┘

Key Design Principles:

  • 🎯 Agent-Centric: Everything flows through the Agent runtime—one unified interface
  • 🏗️ In-Process: Run 100+ agents in a single process with minimal overhead
  • 🔌 Provider-Agnostic: Swap LLMs (Claude, GPT, Llama) without code changes
  • 📦 Zero Dependencies: Core framework has no external dependencies
  • 🧩 Composable: Tools, hooks, sessions, and agents compose naturally
  • 🔍 Observable: Built-in token monitoring, metrics, and auto-compaction
  • 🧪 Testable: MockLanguageModel and TestToolContext for deterministic tests
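
To illustrate the testability principle, a deterministic unit test might look roughly like the sketch below. The MockLanguageModel constructor, the assignment of a model instance to Agent.Model, and the result shape are assumptions for illustration, not the documented API:

using AgentCircuits;
using Xunit;

public class ReviewerAgentTests
{
    [Fact]
    public async Task Agent_ReturnsCannedResponse()
    {
        // Hypothetical: assume MockLanguageModel can be seeded with a fixed reply
        var mockModel = new MockLanguageModel("LGTM - no issues found.");

        var agent = new Agent
        {
            Model = mockModel,   // assumed: Agent.Model also accepts a model instance
            Tools = []           // no tools, so the turn completes after one model call
        };

        var result = await agent.SendAsync("Review this diff");

        // Assumed result property; adapt to the actual return type
        Assert.Contains("LGTM", result.TextOutput);
    }
}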

Examples

AgentCircuits comes with comprehensive example projects in the agentcircuits.samples package, demonstrating all major features.

SdkShowcase - Complete Feature Tour

Interactive demonstrations of all SDK capabilities:

cd agentcircuits.samples/src/SdkShowcase
dotnet run

12 Comprehensive Demos:

  1. Code Review Bot - One-shot Agent.Query() with read-only tools
  2. Interactive Chat - Multi-turn conversation with event streaming
  3. Multi-Agent System - SubAgent orchestration with Task tool
  4. Custom Tools - [Tool] attribute and IToolContext usage
  5. Session Workflows - Fork, Rewind, and Checkpoint operations
  6. Memory Learning - Cross-session knowledge persistence
  7. Hooks Patterns - Audit logging and cost tracking
  8. Iteration Control - Custom stop logic and composition
  9. Cost Optimization - Smart model routing (Haiku vs Sonnet)
  10. Multimodal Input - Vision models with images (Claude, GPT-4o, Gemini)
  11. Thinking Mode - Extended reasoning with streaming ThinkingEvent and multi-turn continuity
  12. User Interaction - AskUserQuestion tool with IToolContextProvider for interactive prompts

Each demo is self-contained and heavily commented for learning.


Feature-Specific Demos

AutoCompactionDemo - Automatic Context Management

cd agentcircuits.samples/src/SessionsDemo
dotnet run
  • Demonstrates auto-compaction at 90% context threshold
  • Shows SystemEvent notifications during compaction
  • Compares enabled vs disabled behavior

ContextBuildersDemo - Token Optimization

cd agentcircuits.samples/src/ContextDemo
dotnet run
  • Compact format (30-50% token savings)
  • XML format (Claude-optimised)
  • Custom IContextBuilder implementation

ControlFlowDemo - Turn-by-Turn Execution

cd agentcircuits.samples/src/WorkflowsDemo
dotnet run
  • Approval workflows with StepAsync()
  • Multi-day operations with interrupts
  • Custom stop conditions
  • Observability and debugging

MultimodalDemo - Vision & Image Analysis

cd agentcircuits.samples/src/MultimodalDemo
dotnet run
  • Screenshot analysis and UI/UX feedback
  • Visual diff tool (before/after comparison)
  • Image description and extraction
  • Works with Claude, GPT-4o, Gemini vision models

ToolsDemo - Custom Tool Creation

cd agentcircuits.samples/src/ToolsDemo
dotnet run
  • Tool.FromType<T>() for attribute-based tools
  • Tool builder pattern for dynamic tools
  • Tool composition patterns
  • Built-in tools overview

WebSearchDemo - Research Assistant

cd agentcircuits.samples/src/AgentDemo
dotnet run
  • Web search with DuckDuckGo (no API key needed)
  • Multi-turn research conversations
  • Fact-checking with sources
  • Works with local models (Ollama + Qwen)

MemoryDemo - Cross-Conversation Learning

Demonstrates the Memory Tool's unique capability to learn and apply knowledge across separate sessions:

cd agentcircuits.samples/src/SdkShowcase
dotnet run
# Select Demo 6: Memory Learning

3 Essential Demos:

1. Cross-Conversation Learning

  • Session 1: Agent reviews code, discovers a race condition, stores pattern in /memories/patterns/concurrency.md
  • Session 2: New agent instance reads the pattern and applies it to different code
  • Zero shared state between sessions—knowledge persists through memory files

2. Multi-Model Compatibility

  • Same MemoryToolHandler works with Anthropic, OpenAI, Google, local models
  • Anthropic: Uses type: "memory_20250818" for automatic memory checking (native support)
  • Others: Explicit prompting in system prompt
  • Provider-agnostic memory architecture—write once, use everywhere

3. Memory Organization

  • Production-ready memory structure: patterns/, stats/, preferences/, knowledge/, decisions/
  • Realistic examples with proper categorization
  • Scalable knowledge management patterns
  • Multi-tenant isolation guidance

Key Insights:

  • No vector databases or embeddings needed
  • Simple file-based storage with full control
  • Works with any LLM provider
  • Debuggable—just open the files

SessionHelpersDemo - Advanced Session Operations

Shows session manipulation patterns:

cd agentcircuits.samples/src/SessionsDemo
dotnet run
  • Fork sessions for experimentation
  • Rewind to retry different approaches
  • Create checkpoints before risky operations
  • Clean up failed tool uses for retries

Provider Demos

Basic usage examples for each LLM provider (see agentcircuits.samples/src/ProvidersDemo):

  • Anthropic - Claude models via Anthropic API
  • Bedrock - AWS Bedrock (Nova, Claude, Titan)
  • Gemini - Google Gemini models
  • Grok - xAI Grok models
  • Ollama - Local models (Llama, Mistral, Qwen)
  • OpenAI - GPT-4o, o1/o3 reasoning models

Production Patterns

Code Review Assistant with Memory:

using AgentCircuits;
using AgentCircuits.Tools;
using AgentCircuits.Tools.BuiltIn;

var memoryHandler = new MemoryToolHandler();
var memoryTool = new MemoryTool(memoryHandler);

var reviewer = new Agent
{
    SystemPrompt = """
        You are a code reviewer that learns from experience.
        Check your memory for similar bugs you've seen before.
        Store new patterns when you find interesting issues.
        """,
    Tools = [BuiltInTools.Read, memoryTool],
    Model = "claude-sonnet-4-5"
};

await reviewer.SendAsync("Review AuthController.cs");
// Agent checks /memories/patterns/ for known bugs, applies learned patterns

Deep Research Agent (Long-Horizon Tasks):

var researcher = new Agent
{
    SystemPrompt = """
        You conduct multi-day research projects.
        1. Create tasks with TaskCreate for each phase
        2. Update task status as you progress (TaskUpdate)
        3. Save findings to files
        4. Synthesise final report
        """,
    Tools = [
        BuiltInTools.TaskCreate,
        BuiltInTools.TaskUpdate,
        BuiltInTools.TaskList,
        BuiltInTools.Read,
        BuiltInTools.Write,
        BuiltInTools.Bash
    ],
    MaxIterations = 200,  // Long-running research tasks
    Model = "claude-sonnet-4-5"
};

See the full documentation for more production patterns and API details.


Project Status

Current Version: 0.7.0
Completion: ~98% of planned features (SDK + Portal + Chat UI + A2A Protocol + Thinking Mode + Image Generation + SQL Storage + User Management + Task Management + Channels)
Stability: Production-ready core with comprehensive observability, web portal, A2A protocol integration, extended thinking support, image generation, user management, task tracking, and multi-channel routing

Core Features (Complete)

  • Agent Framework: Query, Send/Receive patterns with natural conversation flow
  • Turn-by-Turn Control: StepAsync() for approval workflows, debugging, multi-day operations
  • Extended Thinking: ThinkingConfig for reasoning mode with streaming ThinkingEvent, multi-turn signature preservation, and interleaved thinking for Claude 4
  • Context Builders: Custom message formatting (XML, Compact, JSON) with 30-50% token savings
  • Real-Time Streaming: Agent.UseStreaming with partial events for character-by-character display
  • Multimodal Input: Native image support for all vision models (Claude, GPT-4o, Gemini)
  • Native PDF Support: Upload PDFs directly to Claude and Gemini via DocumentContent type
  • Image Generation: Native support for image generation models (Gemini 2.0 Flash) with ImageEvent streaming
  • Auto-Compaction: Automatic context management at 90% threshold (provider-agnostic)
  • Token Monitoring: Real-time tracking with agent.GetContextStats(), distribution breakdown (including thinking tokens)
  • Performance Metrics: Per-turn TurnMetricsEvent with throughput (tok/s), TTFT, tool statistics
  • Session Management: In-memory, JSON file, and PostgreSQL persistence with Fork/Rewind/Checkpoint helpers
  • Session Names: LLM-generated session summaries for easy identification in portal/UI
  • Session Pagination: Efficient querying with pagination and participant tracking
  • Multi-Agent System: SubAgents with natural delegation and event aggregation
  • Agent Hooks: BeforeModel, AfterModel, BeforeToolUse, AfterToolUse, OnSessionStart, OnAgentStop, OnUserPrompt, OnSubagentStart, OnSubagentStop; tool matchers and CommandHook for polyglot hooks
  • Stop Conditions: Natural stopping, max iterations, tool-triggered interrupts
  • Session Cancellation: ExecutionId-based cancellation with server-side execution state tracking
  • Session Input Messages: Inbox message injection with content blocks support and pre-turn injection hook
  • Session-Scoped Event Routing: SessionId on base Event type for per-session event streams
  • User Management: User entities, authentication, and access control with SQL storage support
  • Task List Sharing: Task lists scoped to sessions with concurrency-safe storage and cross-session sharing
  • Agent Prompt Summariser: LLM-powered prompt summaries for agent discoverability
  • SystemPromptEnricher: Inject runtime environment information into system prompts
  • Thought Images: Visual reasoning from model thinking phase (Gemini) with ThoughtSignature round-tripping
  • BashTool Cancellation: Cancel running commands with cancellation token support

Built-In Tools (21 tools)

  • File Operations: Read, Write, Edit, Glob, Grep (with ripgrep-compatible flags: -i, -n, -A, -B, -C, offset)
  • Shell Integration: Bash (with cancellation token support), TaskStop
  • Agent Orchestration: Task (SubAgents with description, max_turns, model, resume, run_in_background), TaskOutput (retrieve background task output), TaskStop (stop running background tasks or shells)
  • Task Management: Granular CRUD-based system (TaskCreate, TaskGet, TaskUpdate with status: "deleted", TaskList) for planning and tracking with shared task lists across sessions
  • Background Task Management: JSONL output files, progress metrics (ToolCallsCount, TokensGenerated), TTL-based cleanup
  • User Interaction: AskUserQuestion (interactive prompts with multi-select support, timeout-exempt for extended user response time)
  • Memory Tool: Cross-conversation learning with markdown file storage and per-user isolation
  • Web Tools: WebFetch (single URL fetching), WebSearch (DuckDuckGo default, pluggable providers)
  • Encoding Tools: Base64Encode, Base64Decode for binary data handling

LLM Providers (Complete)

  • Anthropic: Claude Sonnet 4, Claude 4.5 Sonnet, Opus, Haiku with streaming, tool calling, extended thinking, and interleaved thinking for Claude 4
  • OpenAI: GPT-4o, GPT-4, o1/o3 reasoning models with streaming, tool calling, reasoning support, Azure OpenAI support
  • Gemini: Gemini 2.0 Flash, 2.5 Flash, 2.5 Pro, 3.0 Flash/Pro with streaming (SSE), tool calling, reasoning support, thought images, ThoughtSignature round-tripping, and image generation
  • Grok: Grok 3, Grok 4 with streaming, tool calling, and regional endpoint support
  • Ollama: Llama 3, Mistral, Mixtral, Phi, DeepSeek-R1 with local deployment, reasoning support, and dedicated Thinking property for stream chunks
  • Bedrock: Amazon Nova, Claude (via AWS) with streaming, tool calling, extended thinking, and interleaved thinking for Claude 4; Titan, Llama, Cohere
  • Auto-Registration: Reflection-based provider discovery via LlmProviders.GetModel()

Advanced Features

  • Custom Tools: Attribute-based tool creation with [Tool] and [ToolParam]
  • MCP Integration: Load tools from MCP servers (stdio, HTTP, SSE) with whitelist/blacklist
  • Hook Patterns: Pre-built hooks for audit logging, cost tracking, rate limiting, security
  • Cost Tracking: Accumulates tokens and costs across turns (cache-aware)
  • Web Management Portal: Complete web UI for agent management, session viewer, playground, metrics
  • Chat Interface: Modern SvelteKit end-user chat with streaming, tool visualisation, session names, themes, HTML export, tool approval dialogs, task management panel, image preview modal, and activity indicators
  • Tool Approval: Per-tool approval configuration in Portal with session-scoped "Always Allow"
  • A2A Protocol: Agent-to-agent communication (call Python/Java/JS agents, expose AgentCircuits agents)
  • Runtime Repositories: File-based storage for agents, sessions, providers, MCP servers, async operations
  • REST API Execution: Agent execution dispatched via REST API endpoints with execution tracking
  • Live Configuration: Configuration changes propagated in real-time via SignalR ConfigChanged events
  • Session Input Routing: Content block routing with configurable inbox auto-injection
  • Multi-Channel Routing: WhatsApp, Slack, Teams and custom channel adapters via AgentCircuits.Channels
  • PostgreSQL Storage: Production-ready SQL storage for sessions, users, agents, providers, channels, A2A, and async operations
  • Portal Authentication: LocalUserId, JWT, and Azure AD authentication providers with role-based access

Remaining Work

Critical (production-blocking):

  • 🔴 Tool Result Size Management - WebFetchTool and other tools can return unlimited content, poisoning sessions

Medium Priority:

  • 🟡 Advanced Context Management - Smart pruning, progressive compaction, memory-assisted strategies
  • 🟡 Provider Caching - Cache breakpoints for 10x cost reduction (Anthropic/OpenAI/Bedrock)

Low Priority:

  • 🟢 OpenTelemetry - Distributed tracing and metrics export
  • 🟢 NotebookEdit Tool - Jupyter notebook cell editing
  • 🟢 MCP Advanced Capabilities - Resources, prompts, rich content, sampling
  • 🟢 Documentation - Testing strategy, serialization spec, error handling spec, performance benchmarks

Tests

AgentCircuits has comprehensive test coverage across all packages.

Running Tests

# Run all .NET tests (xUnit)
dotnet test agentcircuits.sln

# Run tests for a specific project
dotnet test agentcircuits/tests/AgentCircuits.Tests.csproj

# Run UI tests (Vitest)
cd agentcircuits.ui && npx vitest run

Test Summary

Suite Tests Framework Description
.NET (xUnit) 4,333 xUnit Core SDK, Portal, providers, storage, channels, A2A, server
UI (Vitest) 617 Vitest SvelteKit chat interface unit tests
E2E (Playwright) 287 Playwright Browser-based end-to-end validation (38 spec files)

.NET breakdown by project:

Project Tests
AgentCircuits (Core) 2,398
AgentCircuits.Portal 813
AgentCircuits.Storage.Sql 328
AgentCircuits.Server 239
AgentCircuits.Providers.Gemini 141
AgentCircuits.Providers.Anthropic 115
AgentCircuits.Providers.Bedrock 115
AgentCircuits.Channels 68
AgentCircuits.Providers.OpenAI 47
AgentCircuits.Providers.Ollama 36
AgentCircuits.Providers.Grok 20
AgentCircuits.A2A 13

The E2E tests live in agentcircuits.e2e/ and run against a live server instance. They cover the Portal admin UI and the Chat interface across 38 Playwright spec files, testing everything from agent CRUD and session lifecycle to streaming resilience, concurrent sessions, and cross-surface workflows. They require the server to be running and are run separately from the unit/integration tests above.
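
The exact commands depend on the Playwright project configuration, but with a standard Playwright setup the E2E suite would be invoked roughly as follows (the commands below are an assumption, not taken from the repository):

# Start the server first (separate terminal), then:
cd agentcircuits.e2e
npx playwright install   # first run only: downloads browser binaries
npx playwright test      # runs the 38 spec files against the live server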


Contributing

We welcome contributions! Please see our Contributing Guide for details.


License

MIT License - see LICENSE for details.


Support


Built with ❤️ for the .NET community

Product: Compatible and additional computed target framework versions.
.NET: net9.0 is compatible; net10.0 was computed. Platform-specific frameworks for net9.0 and net10.0 (android, browser, ios, maccatalyst, macos, tvos, windows) were computed.
Learn more about Target Frameworks and .NET Standard.

NuGet packages (10)

Showing the top 5 NuGet packages that depend on AgentCircuits.Core:

Package Downloads
AgentCircuits.Portal

Web-based management portal for AgentCircuits. Provides dashboard, agent configuration UI, session viewer, and interactive playground.

AgentCircuits.Providers.Anthropic

Anthropic Claude provider for AgentCircuits agent framework. Supports Claude 3.5 Sonnet, Opus, and Haiku with streaming, tool calling, and prompt caching.

AgentCircuits.Providers.Ollama

Ollama provider for AgentCircuits agent framework. Run Llama 3, Mistral, Mixtral, and other open-source models locally with streaming and tool calling.

AgentCircuits.Providers.Bedrock

AWS Bedrock provider for AgentCircuits agent framework. Enterprise-grade AWS integration with IAM credentials for Claude and other Bedrock models.

AgentCircuits.Providers.OpenAI

OpenAI provider for AgentCircuits agent framework. Supports GPT-4o, GPT-4, GPT-3.5 Turbo with streaming, tool calling, and Azure OpenAI.

GitHub repositories

This package is not used by any popular GitHub repositories.

Version Downloads Last Updated
0.7.0 56 2/12/2026

Core agent framework with streaming, multimodal input, auto-compaction, token monitoring, multi-agent orchestration, MCP integration, and web management portal.