Azure.AI.Projects.OpenAI 2.0.0-beta.1

This is a prerelease version of Azure.AI.Projects.OpenAI.

Azure AI Projects OpenAI client library for .NET

Develop Agents using the Azure AI Foundry platform, leveraging an extensive ecosystem of models, tools, and capabilities from OpenAI, Microsoft, and other LLM providers.

Note: This package can be used to send requests to existing agents. It was split from Azure.AI.Projects because the create, update, and delete operations on agents should be performed with elevated privileges. The Azure.AI.Projects library provides simplified access to advanced functionality, such as creating and managing Agents, enumerating AI models, working with datasets, managing search indexes, evaluating generative AI performance, and enabling OpenTelemetry tracing. In this tutorial we show how to create agents with the specific functionality provided by tools.

Product documentation | Samples | API reference documentation | Package (NuGet) | SDK source code

Table of contents

Getting started

Prerequisites

To use Azure AI Agents capabilities, you must have an Azure subscription, which allows you to create an Azure AI resource and get a connection URL.

Install the package

Install the client library for .NET with NuGet:

dotnet add package Azure.AI.Projects.OpenAI --prerelease


Authenticate the client

To create, update, and delete Agents, please install Azure.AI.Projects and use AIProjectClient. It is a good practice to allow these operations only for users with elevated permissions, for example, administrators.

AIProjectClient projectClient = new(
    endpoint: new Uri("https://<RESOURCE>.services.ai.azure.com/api/projects/<PROJECT>"),
    tokenProvider: new AzureCliCredential());
AIProjectAgentsOperations agentClient = projectClient.Agents;

If you're already using an AIProjectClient from Azure.AI.Projects, you can obtain a ProjectOpenAIClient instance directly from it:

AIProjectClient projectClient = new(
    endpoint: new Uri("https://<RESOURCE>.services.ai.azure.com/api/projects/<PROJECT>"),
    tokenProvider: new AzureCliCredential());
ProjectOpenAIClient agentClient = projectClient.OpenAI;

For operations based on OpenAI APIs like /responses, /files, and /vector_stores, you can retrieve a ProjectResponsesClient, an OpenAIFileClient, and a VectorStoreClient through the appropriate helper methods:

ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent("AGENT_NAME");
OpenAIFileClient fileClient = projectClient.OpenAI.GetOpenAIFileClient();
VectorStoreClient vectorStoreClient = projectClient.OpenAI.GetVectorStoreClient();

Key concepts

Service API versions

When clients send REST requests to the endpoint, one of the query parameters is api-version, which selects the API version and the features it supports. The currently supported values are 2025-11-01 and 2025-11-15-preview (the default).
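As a standalone illustration of the concept (plain .NET, no SDK involved, with made-up resource and project names), this is roughly what the api-version query parameter looks like on the request URI:

```csharp
using System;

// Hypothetical sketch: how the api-version query parameter appears on the wire.
// The resource and project names below are placeholders.
class ApiVersionDemo
{
    static void Main()
    {
        var builder = new UriBuilder("https://contoso.services.ai.azure.com/api/projects/myProject")
        {
            // UriBuilder prepends the '?' automatically.
            Query = "api-version=2025-11-15-preview"
        };
        Console.WriteLine(builder.Uri);
        // https://contoso.services.ai.azure.com/api/projects/myProject?api-version=2025-11-15-preview
    }
}
```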

Select a service API version

The API version may be set by supplying the ApiVersion property on a ProjectOpenAIClientOptions instance and passing it to the client constructor, as shown in the example code below.

ProjectOpenAIClientOptions options = new()
{
    ApiVersion = "2025-11-15-preview"
};
ProjectOpenAIClient projectClient = new(
    projectEndpoint: new Uri("https://<RESOURCE>.services.ai.azure.com/api/projects/<PROJECT>"),
    tokenProvider: new AzureCliCredential(),
    options: options);

Additional concepts

The Azure.AI.Projects.OpenAI library is organized so that each call requiring a REST API request has synchronous and asynchronous counterparts, where the latter has the "Async" suffix. For example, the following code demonstrates the creation of a ResponseResult object.

Synchronous call:

ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForModel(modelDeploymentName);
ResponseResult response = responseClient.CreateResponse("What is the size of France in square miles?");

Asynchronous call:

ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForModel(modelDeploymentName);
ResponseResult response = await responseClient.CreateResponseAsync("What is the size of France in square miles?");

In most code snippets, we show only the asynchronous sample for brevity. Please refer to the individual samples for both synchronous and asynchronous code.

Examples

Prompt Agents

Agents

Note: Please install Azure.AI.Projects to manage Agents. When creating an Agent, we need to supply an Agent definition to its constructor. To create a declarative prompt Agent, use the PromptAgentDefinition:

string RAW_PROJECT_ENDPOINT = Environment.GetEnvironmentVariable("AZURE_AI_FOUNDRY_PROJECT_ENDPOINT")
    ?? throw new InvalidOperationException("Missing environment variable 'AZURE_AI_FOUNDRY_PROJECT_ENDPOINT'");
string MODEL_DEPLOYMENT = Environment.GetEnvironmentVariable("AZURE_AI_FOUNDRY_MODEL_DEPLOYMENT")
    ?? throw new InvalidOperationException("Missing environment variable 'AZURE_AI_FOUNDRY_MODEL_DEPLOYMENT'");
string AGENT_NAME = Environment.GetEnvironmentVariable("AZURE_AI_FOUNDRY_AGENT_NAME")
    ?? throw new InvalidOperationException("Missing environment variable 'AZURE_AI_FOUNDRY_AGENT_NAME'");

AIProjectClient projectClient = new(new Uri(RAW_PROJECT_ENDPOINT), new AzureCliCredential());

AgentDefinition agentDefinition = new PromptAgentDefinition(MODEL_DEPLOYMENT)
{
    Instructions = "You are a foo bar agent. In EVERY response you give, ALWAYS include both `foo` and `bar` strings somewhere in the response.",
};

AgentVersion newAgentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: AGENT_NAME,
    options: new(agentDefinition));
Console.WriteLine($"Created new agent version: {newAgentVersion.Name}");

The code above results in the creation of an AgentVersion object, which is the data object containing the Agent's name and version.

Responses

The OpenAI API allows you to get a response without creating an agent, by using the Responses API. In this scenario, we first create the response object.

ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForModel(modelDeploymentName);
ResponseResult response = await responseClient.CreateResponseAsync("What is the size of France in square miles?");

After the response has completed, we can print the output.

Console.WriteLine(response.GetOutputText());

Alternatively, we can stream the response.

await foreach (StreamingResponseUpdate streamResponse in responseClient.CreateResponseStreamingAsync("What is the size of France in square miles?"))
{
    if (streamResponse is StreamingResponseCreatedUpdate createUpdate)
    {
        Console.WriteLine($"Stream response created with ID: {createUpdate.Response.Id}");
    }
    else if (streamResponse is StreamingResponseOutputTextDeltaUpdate textDelta)
    {
        Console.WriteLine($"Delta: {textDelta.Delta}");
    }
    else if (streamResponse is StreamingResponseOutputTextDoneUpdate textDoneUpdate)
    {
        Console.WriteLine($"Response done with full message: {textDoneUpdate.Text}");
    }
    else if (streamResponse is StreamingResponseErrorUpdate errorUpdate)
    {
        throw new InvalidOperationException($"The stream has failed with the error: {errorUpdate.Message}");
    }
}

Responses can also be used with agents. First, we need to create an AgentVersion object.

PromptAgentDefinition agentDefinition = new(model: MODEL_DEPLOYMENT)
{
    Instructions = "You are a physics teacher with a sense of humor.",
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition)
);

To associate the response with the Agent, an agent reference needs to be created. This is done by calling the GetProjectResponsesClientForAgent method.

var agentReference = new AgentReference(name: agentVersion.Name);
ProjectResponsesClient responseClient = openaiClient.GetProjectResponsesClientForAgent(agentReference);
CreateResponseOptions responseOptions = new([ResponseItem.CreateUserMessageItem("Write Maxwell's equation in LaTeX format.")]);
ResponseResult response = await responseClient.CreateResponseAsync(responseOptions);
Console.WriteLine(response.GetOutputText());

The previous response ID may be used to ask follow-up questions. In this case, we need to set the PreviousResponseId property on the CreateResponseOptions object.

CreateResponseOptions followupOptions = new()
{
    PreviousResponseId = response.Id,
    InputItems = { ResponseItem.CreateUserMessageItem("What was the previous question?") },
};
response = await responseClient.CreateResponseAsync(followupOptions);
Console.WriteLine(response.GetOutputText());

Finally, we can delete the Agent.

await projectClient.Agents.DeleteAgentAsync(agentName: "myAgent");

Previously created responses can also be listed, typically to find all responses associated with a particular agent or conversation.

await foreach (ResponseResult response
    in projectClient.OpenAI.Responses.GetProjectResponsesAsync(agent: new AgentReference(agentName), conversationId: conversationId))
{
    Console.WriteLine($"Matching response: {response.Id}");
}

Conversations

Conversations may be used to store the history of interactions with the agent. To add responses to a conversation, supply the conversation when calling GetProjectResponsesClientForAgent.

// Optionally, use a conversation to automatically maintain state between calls.
ProjectConversation conversation = await projectClient.OpenAI.Conversations.CreateProjectConversationAsync();
ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(AGENT_NAME, conversation);

Conversations may be deleted to clean up resources.

await openAIClient.GetConversationClient().DeleteConversationAsync(conversation.Id);

The conversation may be used to communicate messages to the agent.

ProjectConversationCreationOptions conversationOptions = new()
{
    Items = { ResponseItem.CreateSystemMessageItem("Your preferred genre of story today is: horror.") },
    Metadata = { ["foo"] = "bar" },
};
ProjectConversation conversation = await projectClient.OpenAI.Conversations.CreateProjectConversationAsync(conversationOptions);

//
// Add items to an existing conversation to supplement the interaction state
//
string EXISTING_CONVERSATION_ID = conversation.Id;

_ = await projectClient.OpenAI.Conversations.CreateProjectConversationItemsAsync(
    EXISTING_CONVERSATION_ID,
    [ResponseItem.CreateSystemMessageItem(inputTextContent: "Story theme to use: department of licensing.")]);
//
// Use the agent and conversation in a response
//
ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(AGENT_NAME);
CreateResponseOptions responseOptions = new()
{
    AgentConversationId = EXISTING_CONVERSATION_ID,
    InputItems =
    {
        ResponseItem.CreateUserMessageItem("Tell me a one-line story."),
    },
};

List<ResponseItem> items = [];
ResponseResult response = await responseClient.CreateResponseAsync(responseOptions);

Logging

Logging of service requests and responses can be a useful tool for troubleshooting issues. It can be implemented through a custom policy. In the example below, we implement LoggingPolicy by inheriting from PipelinePolicy. This class implements two methods, Process and ProcessAsync. The pipeline calls the chain of policies, where each policy calls the next one; hence, by placing calls to the ProcessMessage method before and after ProcessNext, we can print both the request and the response. The ProcessMessage method contains the logic to show the contents of the web request and response, along with headers and URI paths.

public class LoggingPolicy : PipelinePolicy
{
    private static void ProcessMessage(PipelineMessage message)
    {
        if (message.Request is not null && message.Response is null)
        {
            Console.WriteLine($"{message?.Request?.Method} URI: {message?.Request?.Uri}");
            Console.WriteLine($"--- New request ---");
            IEnumerable<string> headerPairs = message?.Request?.Headers?.Select(header => $"\n    {header.Key}={(header.Key.ToLower().Contains("auth") ? "***" : header.Value)}");
            string headers = string.Join("", headerPairs);
            Console.WriteLine($"Request headers:{headers}");
            if (message.Request?.Content != null)
            {
                string contentType = "Unknown Content Type";
                if (message.Request.Headers?.TryGetValue("Content-Type", out contentType) == true
                    && contentType == "application/json")
                {
                    using MemoryStream stream = new();
                    message.Request.Content.WriteTo(stream, default);
                    stream.Position = 0;
                    using StreamReader reader = new(stream);
                    string requestDump = reader.ReadToEnd();
                    stream.Position = 0;
                    requestDump = Regex.Replace(requestDump, @"""data"":\s*""[^""]*""", @"""data"":""...""");
                    // Make sure JSON string is properly formatted.
                    JsonSerializerOptions jsonOptions = new()
                    {
                        WriteIndented = true,
                    };
                    JsonElement jsonElement = JsonSerializer.Deserialize<JsonElement>(requestDump);
                    Console.WriteLine("--- Begin request content ---");
                    Console.WriteLine(JsonSerializer.Serialize(jsonElement, jsonOptions));
                    Console.WriteLine("--- End request content ---");
                }
                else
                {
                    string length = message.Request.Content.TryComputeLength(out long numberLength)
                        ? $"{numberLength} bytes"
                        : "unknown length";
                    Console.WriteLine($"<< Non-JSON content: {contentType} >> {length}");
                }
            }
        }
        if (message.Response != null)
        {
            IEnumerable<string> headerPairs = message?.Response?.Headers?.Select(header => $"\n    {header.Key}={(header.Key.ToLower().Contains("auth") ? "***" : header.Value)}");
            string headers = string.Join("", headerPairs);
            Console.WriteLine($"Response headers:{headers}");
            if (message.BufferResponse)
            {
                message.Response.BufferContent();
                Console.WriteLine("--- Begin response content ---");
                Console.WriteLine(message.Response.Content?.ToString());
                Console.WriteLine("--- End of response content ---");
            }
            else
            {
                Console.WriteLine("--- Response (unbuffered, content not rendered) ---");
            }
        }
    }

    public LoggingPolicy(){}

    public override void Process(PipelineMessage message, IReadOnlyList<PipelinePolicy> pipeline, int currentIndex)
    {
        ProcessMessage(message); // for request
        ProcessNext(message, pipeline, currentIndex);
        ProcessMessage(message); // for response
    }

    public override async ValueTask ProcessAsync(PipelineMessage message, IReadOnlyList<PipelinePolicy> pipeline, int currentIndex)
    {
        ProcessMessage(message); // for request
        await ProcessNextAsync(message, pipeline, currentIndex);
        ProcessMessage(message); // for response
    }
}
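The redaction step inside ProcessMessage can be exercised in isolation. Below is a minimal sketch of the same idea using only System.Text.RegularExpressions (the sample payload is made up); the pattern replaces the value of a "data" JSON field with "..." so that large payloads do not flood the console:

```csharp
using System;
using System.Text.RegularExpressions;

// Standalone sketch of the "data" redaction idea used in LoggingPolicy.ProcessMessage.
class RedactionDemo
{
    static void Main()
    {
        // Made-up request body with a bulky "data" field.
        string requestDump = @"{""model"":""gpt"",""data"": ""AAAABBBBCCCC""}";
        // \s* tolerates whitespace between the key and the value.
        string redacted = Regex.Replace(requestDump, @"""data"":\s*""[^""]*""", @"""data"":""...""");
        Console.WriteLine(redacted);
        // {"model":"gpt","data":"..."}
    }
}
```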

To apply the policy to the pipeline, we create an AIProjectClientOptions object, register the LoggingPolicy with AddPolicy so the pipeline executes it, and pass the options when instantiating the AIProjectClient that we will subsequently use.

string RAW_PROJECT_ENDPOINT = Environment.GetEnvironmentVariable("PROJECT_ENDPOINT")
    ?? throw new InvalidOperationException("Missing environment variable 'PROJECT_ENDPOINT'");
string MODEL_DEPLOYMENT = Environment.GetEnvironmentVariable("MODEL_DEPLOYMENT_NAME")
    ?? throw new InvalidOperationException("Missing environment variable 'MODEL_DEPLOYMENT_NAME'");
AIProjectClientOptions options = new();
options.AddPolicy(new LoggingPolicy(), PipelinePosition.PerCall);
AIProjectClient projectClient = new(new Uri(RAW_PROJECT_ENDPOINT), new AzureCliCredential(), options: options);

Published Agents

Published Agents are available outside Microsoft Foundry and can be used by external applications.

Publish Agent
  1. Turn on the New foundry switch at the top of the Microsoft Foundry UI.
  2. Click Build at the upper right.
  3. Click Create agent button and name your Agent.
  4. Select the created Agent at the central panel and click Publish at the upper right corner.

After the Agent is published, you will be provided with two URLs:

  • https://<Account name>.services.ai.azure.com/api/projects/<Project Name>/applications/<Agent Name>/protocols/activityprotocol?api-version=2025-11-15-preview
  • https://<Account name>.services.ai.azure.com/api/projects/<Project Name>/applications/<Agent Name>/protocols/openai/responses?api-version=2025-11-15-preview

The second URL can be used to call the Responses API; we will use it to run the sample.

Use the published Agent

The URL returned during Agent publishing contains the /openai/responses path and the api-version query parameter. These parts need to be removed.
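The trimming can be done with plain string handling. The sketch below uses a made-up published-agent URL; take the exact endpoint shape from your own publish output:

```csharp
using System;

// Sketch: strip the query string and the trailing /openai/responses segments
// from a published-agent URL. The URL below is a hypothetical example.
class EndpointTrimDemo
{
    static void Main()
    {
        string publishedUrl = "https://contoso.services.ai.azure.com/api/projects/myProject/applications/myAgent/protocols/openai/responses?api-version=2025-11-15-preview";
        // Drop the query string first.
        string path = new Uri(publishedUrl).GetLeftPart(UriPartial.Path);
        // Then drop the /openai/responses suffix.
        const string suffix = "/openai/responses";
        string endpoint = path.EndsWith(suffix, StringComparison.Ordinal)
            ? path.Substring(0, path.Length - suffix.Length)
            : path;
        Console.WriteLine(endpoint);
        // https://contoso.services.ai.azure.com/api/projects/myProject/applications/myAgent/protocols
    }
}
```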

Create a ProjectResponsesClient, get the response from Agent and print the output.

Synchronous sample:

ProjectResponsesClient responseClient = new(
    projectEndpoint: endpoint,
    tokenProvider: new DefaultAzureCredential()
);
ResponseResult response = responseClient.CreateResponse("What is the size of France in square miles?");
Console.WriteLine(response.GetOutputText());

Container App

Note: This feature is in preview. To use it, please disable the AAIP001 warning.

#pragma warning disable AAIP001

An Azure Container App may act as an agent if it implements the OpenAI-like protocol. Azure.AI.Projects.OpenAI allows you to interact with these applications as with regular agents. The main difference is that in this case the agent needs to be created with a ContainerApplicationAgentDefinition. This agent can then be used in the Responses API like a regular agent.

AgentVersion containerAgentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "containerAgent",
    options: new(new ContainerApplicationAgentDefinition(
        containerProtocolVersions: [new ProtocolVersionRecord(protocol: AgentCommunicationMethod.Responses, version: "1")],
        containerAppResourceId: containerAppResourceId,
        ingressSubdomainSuffix: ingressSubdomainSuffix)));

Hosted Agents

Note: This feature is in preview. To use it, please disable the AAIP001 warning.

#pragma warning disable AAIP001

Hosted agents simplify deploying a custom agent to a fully managed environment.

To create a hosted agent, please use a HostedAgentDefinition when creating the AgentVersion object.

private static HostedAgentDefinition GetAgentDefinition(string dockerImage, string modelDeploymentName, string accountId, string applicationInsightConnectionString, string projectEndpoint)
{
    HostedAgentDefinition agentDefinition = new(
        containerProtocolVersions: [new ProtocolVersionRecord(AgentCommunicationMethod.ActivityProtocol, "v1")],
        cpu: "1",
        memory: "2Gi"
    )
    {
        EnvironmentVariables = {
            { "AZURE_OPENAI_ENDPOINT", $"https://{accountId}.cognitiveservices.azure.com/" },
            { "AZURE_OPENAI_CHAT_DEPLOYMENT_NAME", modelDeploymentName },
            // Optional variables, used for logging
            { "APPLICATIONINSIGHTS_CONNECTION_STRING", applicationInsightConnectionString },
            { "AGENT_PROJECT_RESOURCE_ID", projectEndpoint },
        },
        Image = dockerImage,
    };
    return agentDefinition;
}

The created agent needs to be deployed using the Azure CLI:

az login
az cognitiveservices agent start --account-name ACCOUNTNAME --project-name PROJECTNAME --name myHostedAgent --agent-version 1

After the deployment is complete, this Agent can be used for calling responses.

Agent deletion should also be done through the Azure CLI:

az cognitiveservices agent delete-deployment --account-name ACCOUNTNAME --project-name PROJECTNAME --name myHostedAgent --agent-version 1
az cognitiveservices agent delete --account-name ACCOUNTNAME --project-name PROJECTNAME --name myHostedAgent --agent-version 1

Structured Output

The Agent can be instructed to return the response in JSON format, compliant with a provided schema.

For example, suppose we have the schema below:

private static readonly BinaryData s_calendatSchema = BinaryData.FromObjectAsJson(
    new {
        additionalProperties = false,
        properties = new {
            name = new {
                title = "Name",
                type = "string"
            },
            date = new {
                description = "Date in YYYY-MM-DD format",
                title = "Date",
                type = "string"
            },
            participants = new {
                items = new { type = "string" },
                title = "Participants",
                type = "array"
            }
        },
        required = new List<string> { "name", "date", "participants" },
        title = "CalendarEvent",
        type = "object",
    }
);

We can provide it to the Agent through the TextOptions property of PromptAgentDefinition to get the Agent's output in JSON format.

var textOptions = new ResponseTextOptions()
{
    TextFormat = ResponseTextFormat.CreateJsonSchemaFormat(
        jsonSchemaFormatName: "Calendar",
        jsonSchema: s_calendatSchema
    )
};
PromptAgentDefinition agentDefinition = new(model: MODEL_DEPLOYMENT)
{
    Instructions = "You are a helpful assistant that extracts calendar event information from the input user messages," +
                   "and returns it in the desired structured output format.",
    TextOptions = textOptions
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition)
);
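For reference, the structured JSON produced under this schema can be mapped onto a plain C# type with System.Text.Json. The CalendarEvent record below is a hypothetical illustration, not an SDK type:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Hypothetical type matching the CalendarEvent schema above; used to parse the
// structured JSON output the Agent returns.
public record CalendarEvent(
    [property: JsonPropertyName("name")] string Name,
    [property: JsonPropertyName("date")] string Date,
    [property: JsonPropertyName("participants")] string[] Participants);

class StructuredOutputDemo
{
    static void Main()
    {
        // A sample payload shaped like the Agent's structured output.
        string json = @"{""name"":""Team sync"",""date"":""2025-11-15"",""participants"":[""Alice"",""Bob""]}";
        CalendarEvent evt = JsonSerializer.Deserialize<CalendarEvent>(json)!;
        Console.WriteLine($"{evt.Name} on {evt.Date} with {evt.Participants.Length} participants");
        // Team sync on 2025-11-15 with 2 participants
    }
}
```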

File search

If Agents are provided with the FileSearchTool, they can give responses based on information from uploaded file(s). Here are the steps needed to implement file search. First, upload the file:

string filePath = "sample_file_for_upload.txt";
File.WriteAllText(
    path: filePath,
    contents: "The word 'apple' uses the code 442345, while the word 'banana' uses the code 673457.");
OpenAIFileClient fileClient = projectClient.OpenAI.GetOpenAIFileClient();
OpenAIFile uploadedFile = await fileClient.UploadFileAsync(filePath: filePath, purpose: FileUploadPurpose.Assistants);
File.Delete(filePath);

Add it to VectorStore:

VectorStoreClient vctStoreClient = projectClient.OpenAI.GetVectorStoreClient();
VectorStoreCreationOptions options = new()
{
    Name = "MySampleStore",
    FileIds = { uploadedFile.Id }
};
VectorStore vectorStore = await vctStoreClient.CreateVectorStoreAsync(options);

Finally, create the tool, make it aware of the vector store, and add it to the Agent.

PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a helpful agent that can help fetch data from files you know about.",
    Tools = { ResponseTool.CreateFileSearchTool(vectorStoreIds: [vectorStore.Id]), }
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));

Code interpreter

The CodeInterpreterTool allows Agents to run code in a container. Here are the steps needed to run the code interpreter. First, create an Agent:

PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a personal math tutor. When asked a math question, write and run code using the python tool to answer the question.",
    Tools = {
        ResponseTool.CreateCodeInterpreterTool(
            new CodeInterpreterToolContainer(
                CodeInterpreterToolContainerConfiguration.CreateAutomaticContainerConfiguration([])
            )
        ),
    }
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));

Now we can ask the agent a question that requires running Python code in the container.

AgentReference agentReference = new(name: agentVersion.Name, version: agentVersion.Version);
ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(agentReference);

ResponseResult response = await responseClient.CreateResponseAsync("I need to solve the equation sin(x) + x^2 = 42");

Computer use

The ComputerTool allows Agents to assist users with computer-related tasks. Its constructor is provided with a description of the operating system and the screen resolution.

PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a computer automation assistant.\n\n" +
                   "Be direct and efficient. When you reach the search results page, read and describe the actual search result titles and descriptions you can see.",
    Tools = {
        ResponseTool.CreateComputerTool(
            environment: new ComputerToolEnvironment("windows"),
            displayWidth: 1026,
            displayHeight: 769
        ),
    }
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition)
);

The user can then send the Agent a message containing text and screenshots.

ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(agentVersion.Name);
CreateResponseOptions responseOptions = new()
{
    TruncationMode = ResponseTruncationMode.Auto,
    InputItems =
    {
        ResponseItem.CreateUserMessageItem(
        [
            ResponseContentPart.CreateInputTextPart("I need you to help me search for 'OpenAI news'. Please type 'OpenAI news' and submit the search. Once you see search results, the task is complete."),
            ResponseContentPart.CreateInputImagePart(imageBytes: screenshots["browser_search"], imageBytesMediaType: "image/png", imageDetailLevel: ResponseImageDetailLevel.High)
        ]),
    },
};
bool computerUseCalled = false;
string currentScreenshot = "browser_search";
int limitIteration = 10;
ResponseResult response;
do
{
    response = await CreateResponseAsync(
        responseClient,
        responseOptions
    );
    computerUseCalled = false;
    responseOptions.PreviousResponseId = response.Id;
    responseOptions.InputItems.Clear();
    foreach (ResponseItem responseItem in response.OutputItems)
    {
        responseOptions.InputItems.Add(responseItem);
        if (responseItem is ComputerCallResponseItem computerCall)
        {
            currentScreenshot = ProcessComputerUseCall(computerCall, currentScreenshot);
            responseOptions.InputItems.Add(ResponseItem.CreateComputerCallOutputItem(callId: computerCall.CallId, output: ComputerCallOutput.CreateScreenshotOutput(screenshotImageBytes: screenshots[currentScreenshot], screenshotImageBytesMediaType: "image/png")));
            computerUseCalled = true;
        }
    }
    limitIteration--;
} while (computerUseCalled && limitIteration > 0);
Console.WriteLine(response.GetOutputText());

The Agent, in turn, analyzes the screenshot and returns the actions the user needs to perform; the user then sends another screenshot with the result of those actions. This continues until the task is complete. In our example, we have created a simple method that analyzes the Agent's actions and returns the appropriate screenshot name.

private static string ProcessComputerUseCall(ComputerCallResponseItem item, string oldScreenshot)
{
    string currentScreenshot = "browser_search";
    switch (item.Action.Kind)
    {
        case ComputerCallActionKind.Type:
            Console.WriteLine($"  Typing text \"{item.Action.TypeText}\" - Simulating keyboard input");
            currentScreenshot = "search_typed";
            break;
        case ComputerCallActionKind.KeyPress:
            HashSet<string> codes = [.. item.Action.KeyPressKeyCodes];
            if (codes.Contains("Return") || codes.Contains("ENTER"))
            {
                // If we have typed the value to the search field, go to search results.
                if (string.Equals(oldScreenshot, "search_typed"))
                {
                    Console.WriteLine("  -> Detected ENTER key press, when search field was populated, displaying results.");
                    currentScreenshot = "search_results";
                }
                else
                {
                    Console.WriteLine("  -> Detected ENTER key press, on results or unpopulated search, do nothing.");
                    currentScreenshot = oldScreenshot;
                }
            }
            else
            {
                Console.WriteLine($"  Key press: {item.Action.KeyPressKeyCodes.Aggregate("", (agg, next) => agg + "+" + next)} - Simulating key combination");
            }
            break;
        case ComputerCallActionKind.Click:
            Console.WriteLine($"  Click at ({item.Action.ClickCoordinates.Value.X}, {item.Action.ClickCoordinates.Value.Y}) - Simulating click on UI element");
            if (string.Equals(oldScreenshot, "search_typed"))
            {
                Console.WriteLine("  -> Assuming click on Search button when search field was populated, displaying results.");
                currentScreenshot = "search_results";
            }
            else
            {
                Console.WriteLine("  -> Assuming click on Search on results or when search was not populated, do nothing.");
                currentScreenshot = oldScreenshot;
            }
            break;
        case ComputerCallActionKind.Drag:
            string pathStr = item.Action.DragPath.ToArray().Select(p => $"{p.X}, {p.Y}").Aggregate("", (agg, next) => $"{agg} -> {next}");
            Console.WriteLine($"  Drag path: {pathStr} - Simulating drag operation");
            break;
        case ComputerCallActionKind.Scroll:
            Console.WriteLine($"  Scroll at ({item.Action.ScrollCoordinates.Value.X}, {item.Action.ScrollCoordinates.Value.Y}) - Simulating scroll action");
            break;
        case ComputerCallActionKind.Screenshot:
            Console.WriteLine("  Taking screenshot - Capturing current screen state");
            break;
        default:
            break;
    }
    Console.WriteLine($"  -> Action processed: {item.Action.Kind}");

    return currentScreenshot;
}

Function calling

To supply an Agent with the output of locally run functions, the FunctionTool is used. In this example we define three toy functions: GetUserFavoriteCity, which always returns "Seattle, WA"; GetCityNickname, which handles only "Seattle, WA" and throws an exception for any other city name; and GetWeatherAtLocation, which returns the weather in Seattle, WA.

/// <summary>
/// Example of a function that defines no parameters and
/// returns the user's favorite city.
/// </summary>
private static string GetUserFavoriteCity() => "Seattle, WA";

/// <summary>
/// Example of a function with a single required parameter
/// </summary>
/// <param name="location">The location to get nickname for.</param>
/// <returns>The city nickname.</returns>
/// <exception cref="NotImplementedException"></exception>
private static string GetCityNickname(string location) => location switch
{
    "Seattle, WA" => "The Emerald City",
    _ => throw new NotImplementedException(),
};

/// <summary>
/// Example of a function with one required parameter and one optional enum parameter
/// </summary>
/// <param name="location">Get weather for location.</param>
/// <param name="temperatureUnit">"c" or "f"</param>
/// <returns>The weather in selected location.</returns>
/// <exception cref="NotImplementedException"></exception>
public static string GetWeatherAtLocation(string location, string temperatureUnit = "f") => location switch
{
    "Seattle, WA" => temperatureUnit == "f" ? "70f" : "21c",
    _ => throw new NotImplementedException()
};

For each function we need to create a FunctionTool, which defines the function name, description, and parameters.

public static readonly FunctionTool getUserFavoriteCityTool = ResponseTool.CreateFunctionTool(
    functionName: "getUserFavoriteCity",
    functionDescription: "Gets the user's favorite city.",
    functionParameters: BinaryData.FromString("{}"),
    strictModeEnabled: false
);

public static readonly FunctionTool getCityNicknameTool = ResponseTool.CreateFunctionTool(
    functionName: "getCityNickname",
    functionDescription: "Gets the nickname of a city, e.g. 'LA' for 'Los Angeles, CA'.",
    functionParameters: BinaryData.FromObjectAsJson(
        new
        {
            Type = "object",
            Properties = new
            {
                Location = new
                {
                    Type = "string",
                    Description = "The city and state, e.g. San Francisco, CA",
                },
            },
            Required = new[] { "location" },
        },
        new JsonSerializerOptions() { PropertyNamingPolicy = JsonNamingPolicy.CamelCase }
    ),
    strictModeEnabled: false
);

private static readonly FunctionTool getCurrentWeatherAtLocationTool = ResponseTool.CreateFunctionTool(
    functionName: "getCurrentWeatherAtLocation",
    functionDescription: "Gets the current weather at a provided location.",
    functionParameters: BinaryData.FromObjectAsJson(
         new
         {
             Type = "object",
             Properties = new
             {
                 Location = new
                 {
                     Type = "string",
                     Description = "The city and state, e.g. San Francisco, CA",
                 },
                 Unit = new
                 {
                     Type = "string",
                     Enum = new[] { "c", "f" },
                 },
             },
             Required = new[] { "location" },
         },
        new JsonSerializerOptions() { PropertyNamingPolicy = JsonNamingPolicy.CamelCase }
    ),
    strictModeEnabled: false
);
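A note on the anonymous objects above: the CamelCase naming policy lowercases the Pascal-cased member names (Type, Properties, Location, Required) into the keys JSON Schema expects. A standalone sketch, with no Azure SDK involved, shows what the serializer emits:

```csharp
using System;
using System.Text.Json;

// Serialize the same anonymous-object shape used for functionParameters above.
var schema = new
{
    Type = "object",
    Properties = new
    {
        Location = new
        {
            Type = "string",
            Description = "The city and state, e.g. San Francisco, CA",
        },
    },
    Required = new[] { "location" },
};
string json = JsonSerializer.Serialize(
    schema,
    new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase });
// All keys come out camel-cased: "type", "properties", "location", "required".
Console.WriteLine(json);
```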

We have created the method GetResolvedToolOutput. It runs the functions above and wraps their outputs in FunctionCallOutputResponseItem objects.

private static FunctionCallOutputResponseItem GetResolvedToolOutput(FunctionCallResponseItem item)
{
    if (item.FunctionName == getUserFavoriteCityTool.FunctionName)
    {
        return ResponseItem.CreateFunctionCallOutputItem(item.CallId, GetUserFavoriteCity());
    }
    using JsonDocument argumentsJson = JsonDocument.Parse(item.FunctionArguments);
    if (item.FunctionName == getCityNicknameTool.FunctionName)
    {
        string locationArgument = argumentsJson.RootElement.GetProperty("location").GetString();
        return ResponseItem.CreateFunctionCallOutputItem(item.CallId, GetCityNickname(locationArgument));
    }
    if (item.FunctionName == getCurrentWeatherAtLocationTool.FunctionName)
    {
        string locationArgument = argumentsJson.RootElement.GetProperty("location").GetString();
        if (argumentsJson.RootElement.TryGetProperty("unit", out JsonElement unitElement))
        {
            string unitArgument = unitElement.GetString();
            return ResponseItem.CreateFunctionCallOutputItem(item.CallId, GetWeatherAtLocation(locationArgument, unitArgument));
        }
        return ResponseItem.CreateFunctionCallOutputItem(item.CallId, GetWeatherAtLocation(locationArgument));
    }
    return null;
}
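The argument parsing inside GetResolvedToolOutput can be exercised on its own, without any service round trip. A self-contained sketch (ResolveWeatherCall is a hypothetical helper, for illustration only):

```csharp
using System;
using System.Text.Json;

// Toy function from the sample: only "Seattle, WA" is supported.
static string GetWeatherAtLocation(string location, string temperatureUnit = "f") => location switch
{
    "Seattle, WA" => temperatureUnit == "f" ? "70f" : "21c",
    _ => throw new NotImplementedException()
};

// Hypothetical helper mirroring the parsing in GetResolvedToolOutput:
// "location" is required, "unit" is optional and defaults to "f" when absent.
static string ResolveWeatherCall(string functionArguments)
{
    using JsonDocument argumentsJson = JsonDocument.Parse(functionArguments);
    string location = argumentsJson.RootElement.GetProperty("location").GetString();
    if (argumentsJson.RootElement.TryGetProperty("unit", out JsonElement unitElement))
    {
        return GetWeatherAtLocation(location, unitElement.GetString());
    }
    return GetWeatherAtLocation(location);
}

Console.WriteLine(ResolveWeatherCall("{\"location\":\"Seattle, WA\",\"unit\":\"c\"}")); // 21c
Console.WriteLine(ResolveWeatherCall("{\"location\":\"Seattle, WA\"}")); // 70f
```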

Create Agent with the FunctionTool.

PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a weather bot. Use the provided functions to help answer questions. "
            + "Customize your responses to the user's preferences as much as possible and use friendly "
            + "nicknames for cities whenever possible.",
    Tools = { getUserFavoriteCityTool, getCityNicknameTool, getCurrentWeatherAtLocationTool }
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));

To supply function outputs, we will need to obtain responses multiple times. For brevity, we define the method CreateAndCheckResponseAsync.

public static async Task<ResponseResult> CreateAndCheckResponseAsync(ResponsesClient responseClient, IEnumerable<ResponseItem> items)
{
    ResponseResult response = await responseClient.CreateResponseAsync(
        inputItems: items);
    Assert.That(response.Status, Is.EqualTo(ResponseStatus.Completed));
    return response;
}

If a local function call is required, the response item will be of the FunctionCallResponseItem type and will contain the name of the function the Agent needs. In this case we use our helper method GetResolvedToolOutput to get a FunctionCallOutputResponseItem with the function call result. To get the final answer, we need to supply all the response items to the CreateResponse or CreateResponseAsync call. At the end we print out the final response.

ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(agentVersion.Name);

ResponseItem request = ResponseItem.CreateUserMessageItem("What's the weather like in my favorite city?");
List<ResponseItem> inputItems = [request];
bool functionCalled = false;
ResponseResult response;
do
{
    response = await CreateAndCheckResponseAsync(
        responseClient,
        inputItems);
    functionCalled = false;
    foreach (ResponseItem responseItem in response.OutputItems)
    {
        inputItems.Add(responseItem);
        if (responseItem is FunctionCallResponseItem functionToolCall)
        {
            Console.WriteLine($"Calling {functionToolCall.FunctionName}...");
            inputItems.Add(GetResolvedToolOutput(functionToolCall));
            functionCalled = true;
        }
    }
} while (functionCalled);
Console.WriteLine(response.GetOutputText());
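Stripped of the SDK types, the loop above reduces to a small pattern: call the model, resolve any requested functions locally, and feed everything back until no calls remain. The following self-contained sketch uses a mock model and plain tuples as hypothetical stand-ins for the SDK types, purely for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Mock "model": the first turn requests a function call, the second turn
// answers using the function output it finds in the conversation items.
static IReadOnlyList<(string Kind, string Text)> FakeModel(List<(string Kind, string Text)> input)
{
    var output = input.FirstOrDefault(i => i.Kind == "function_output");
    if (output == default)
    {
        return [("function_call", "getUserFavoriteCity")];
    }
    return [("message", $"Your favorite city is {output.Text}.")];
}

List<(string Kind, string Text)> items = [("user", "What's my favorite city?")];
IReadOnlyList<(string Kind, string Text)> outputs = [];
bool functionCalled;
do
{
    outputs = FakeModel(items);
    functionCalled = false;
    foreach (var item in outputs)
    {
        items.Add(item);
        if (item.Kind == "function_call")
        {
            // Run the local function and append its output to the inputs.
            items.Add(("function_output", "Seattle, WA"));
            functionCalled = true;
        }
    }
} while (functionCalled);
Console.WriteLine(outputs.Last().Text); // Your favorite city is Seattle, WA.
```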

Web search

The WebSearchTool allows the Agent to perform web searches. To improve the results, we can set the search location. Once created, the agent can be used as usual; when needed, it will use web search to answer the question.

PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a helpful assistant that can search the web",
    Tools = { ResponseTool.CreateWebSearchTool(userLocation: WebSearchToolLocation.CreateApproximateLocation(country: "GB", city: "London", region: "London")), }
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));

Azure AI Search

Azure AI Search is an enterprise search system for high-performance applications. It integrates with Azure OpenAI Service and Azure Machine Learning, offering advanced search technologies like vector search and full-text search. It is ideal for knowledge base insights, information discovery, and automation. Creating an Agent with Azure AI Search requires an existing Azure AI Search index. For more information and setup guides, see the Azure AI Search Tool Guide.

AzureAISearchToolIndex index = new()
{
    ProjectConnectionId = aiSearchConnectionName,
    IndexName = "sample_index",
    TopK = 5,
    Filter = "category eq 'sleeping bag'",
    QueryType = AzureAISearchQueryType.Simple
};
PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a helpful assistant. You must always provide citations for answers using the tool and render them as: `\u3010message_idx:search_idx\u2020source\u3011`.",
    Tools = { new AzureAISearchTool(new AzureAISearchToolOptions(indexes: [index])) }
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));

If the agent has found relevant information in the index, a reference and annotation will be provided in the response. In this example, we append the reference and URL to the end of the response. Please note that to get a sensible result, the search index needs to have "title" and "url" fields. We have created a helper method to format the reference.

private static string GetFormattedAnnotation(ResponseResult response)
{
    foreach (ResponseItem item in response.OutputItems)
    {
        if (item is MessageResponseItem messageItem)
        {
            foreach (ResponseContentPart content in messageItem.Content)
            {
                foreach (ResponseMessageAnnotation annotation in content.OutputTextAnnotations)
                {
                    if (annotation is UriCitationMessageAnnotation uriAnnotation)
                    {
                        return $" [{uriAnnotation.Title}]({uriAnnotation.Uri})";
                    }
                }
            }
        }
    }
    return "";
}

Use the helper method to output the result.

Assert.That(response.Status, Is.EqualTo(ResponseStatus.Completed));
Console.WriteLine($"{response.GetOutputText()}{GetFormattedAnnotation(response)}");

The same can be done in streaming scenarios; in this case, however, the helper method takes a ResponseItem.

private static string GetFormattedAnnotation(ResponseItem item)
{
    if (item is MessageResponseItem messageItem)
    {
        foreach (ResponseContentPart content in messageItem.Content)
        {
            foreach (ResponseMessageAnnotation annotation in content.OutputTextAnnotations)
            {
                if (annotation is UriCitationMessageAnnotation uriAnnotation)
                {
                    return $" [{uriAnnotation.Title}]({uriAnnotation.Uri})";
                }
            }
        }
    }
    return "";
}

Read the input in streaming mode.

ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(agentVersion.Name);

string annotation = "";
string text = "";
await foreach (StreamingResponseUpdate streamResponse in responseClient.CreateResponseStreamingAsync("What is the temperature rating of the cozynights sleeping bag?"))
{
    if (streamResponse is StreamingResponseCreatedUpdate createUpdate)
    {
        Console.WriteLine($"Stream response created with ID: {createUpdate.Response.Id}");
    }
    else if (streamResponse is StreamingResponseOutputTextDeltaUpdate textDelta)
    {
        Console.WriteLine($"Delta: {textDelta.Delta}");
    }
    else if (streamResponse is StreamingResponseOutputTextDoneUpdate textDoneUpdate)
    {
        text = textDoneUpdate.Text;
    }
    else if (streamResponse is StreamingResponseOutputItemDoneUpdate itemDoneUpdate)
    {
        if (annotation.Length == 0)
        {
            annotation = GetFormattedAnnotation(itemDoneUpdate.Item);
        }
    }
    else if (streamResponse is StreamingResponseErrorUpdate errorUpdate)
    {
        throw new InvalidOperationException($"The stream has failed: {errorUpdate.Message}");
    }
}
Console.WriteLine($"{text}{annotation}");

Bing Grounding

To ground the responses returned by the Agent in web search results, Bing grounding can be used. To implement it, create the BingGroundingTool and use it in the PromptAgentDefinition object.

AIProjectConnection bingConnection = projectClient.Connections.GetConnection(connectionName: connectionName);
BingGroundingTool bingGroundingAgentTool = new(new BingGroundingSearchToolOptions(
    searchConfigurations: [new BingGroundingSearchConfiguration(projectConnectionId: bingConnection.Id)]
    )
);
PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a helpful agent.",
    Tools = { bingGroundingAgentTool, }
};
AgentVersion agentVersion = projectClient.Agents.CreateAgentVersion(
    agentName: "myAgent",
    options: new(agentDefinition));

If the Bing search returned results, we can get the URL annotation using the same methods we used for the AI Search result.

Getting the result of Bing grounding in non-streaming scenarios:

Assert.That(response.Status, Is.EqualTo(ResponseStatus.Completed));
Console.WriteLine($"{response.GetOutputText()}{GetFormattedAnnotation(response)}");

Streaming the results:

ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(agentVersion.Name);

string annotation = "";
string text = "";
await foreach (StreamingResponseUpdate streamResponse in responseClient.CreateResponseStreamingAsync("How does wikipedia explain Euler's Identity?"))
{
    if (streamResponse is StreamingResponseCreatedUpdate createUpdate)
    {
        Console.WriteLine($"Stream response created with ID: {createUpdate.Response.Id}");
    }
    else if (streamResponse is StreamingResponseOutputTextDeltaUpdate textDelta)
    {
        Console.WriteLine($"Delta: {textDelta.Delta}");
    }
    else if (streamResponse is StreamingResponseOutputTextDoneUpdate textDoneUpdate)
    {
        text = textDoneUpdate.Text;
    }
    else if (streamResponse is StreamingResponseOutputItemDoneUpdate itemDoneUpdate)
    {
        if (annotation.Length == 0)
        {
            annotation = GetFormattedAnnotation(itemDoneUpdate.Item);
        }
    }
    else if (streamResponse is StreamingResponseErrorUpdate errorUpdate)
    {
        throw new InvalidOperationException($"The stream has failed: {errorUpdate.Message}");
    }
}
Console.WriteLine($"{text}{annotation}");

Bing Custom Search (preview)<a id="bing-custom-search"></a>

Along with Bing grounding, Agents can use Bing Custom Search. To implement it, create the BingCustomSearchPreviewTool and use it in the PromptAgentDefinition object. Using this tool is similar to Bing Grounding; however, it requires the ID of a Grounding with Bing Custom Search resource and the name of a search configuration. In this scenario, we use Bing to search en.wikipedia.org. This configuration is called "wikipedia", and its search URL is configured through Azure.

AIProjectConnection bingConnection = await projectClient.Connections.GetConnectionAsync(connectionName: connectionName);
BingCustomSearchPreviewTool customBingSearchAgentTool = new(new BingCustomSearchToolParameters(
    searchConfigurations: [new BingCustomSearchConfiguration(projectConnectionId: bingConnection.Id, instanceName: customInstanceName)]
    )
);
PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a helpful agent.",
    Tools = { customBingSearchAgentTool, }
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));

Sending the request and formatting the response are done the same way as in Bing Grounding.

MCP tool

The MCPTool allows an Agent to communicate with third-party services using the Model Context Protocol (MCP). To use MCP, we create an agent definition containing the MCPTool.

PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a helpful agent that can use MCP tools to assist users. Use the available MCP tools to answer questions and perform tasks.",
    Tools = { ResponseTool.CreateMcpTool(
        serverLabel: "api-specs",
        serverUri: new Uri("https://gitmcp.io/Azure/azure-rest-api-specs"),
        toolCallApprovalPolicy: new McpToolCallApprovalPolicy(GlobalMcpToolCallApprovalPolicy.AlwaysRequireApproval
    )) }
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));

Note that in this scenario we use GlobalMcpToolCallApprovalPolicy.AlwaysRequireApproval, which means that every call to the MCP server needs to be approved. Because of this setting, we need to inspect each response and check whether a call requires approval. Once no more calls are requested, we can output the Agent's result.

ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(agentVersion.Name);

CreateResponseOptions nextResponseOptions = new([ResponseItem.CreateUserMessageItem("Please summarize the Azure REST API specifications Readme")]);
ResponseResult latestResponse = null;

while (nextResponseOptions is not null)
{
    latestResponse = await responseClient.CreateResponseAsync(nextResponseOptions);
    nextResponseOptions = null;

    foreach (ResponseItem responseItem in latestResponse.OutputItems)
    {
        if (responseItem is McpToolCallApprovalRequestItem mcpToolCall)
        {
            nextResponseOptions = new CreateResponseOptions()
            {
                PreviousResponseId = latestResponse.Id,
            };
            if (string.Equals(mcpToolCall.ServerLabel, "api-specs"))
            {
                Console.WriteLine($"Approving {mcpToolCall.ServerLabel}...");
                // Automatically approve the MCP request to allow the agent to proceed
                // In production, you might want to implement more sophisticated approval logic
                nextResponseOptions.InputItems.Add(ResponseItem.CreateMcpApprovalResponseItem(approvalRequestId: mcpToolCall.Id, approved: true));
            }
            else
            {
                Console.WriteLine($"Rejecting unknown call {mcpToolCall.ServerLabel}...");
                nextResponseOptions.InputItems.Add(ResponseItem.CreateMcpApprovalResponseItem(approvalRequestId: mcpToolCall.Id, approved: false));
            }
        }
    }
}
Console.WriteLine(latestResponse.GetOutputText());

MCP tool with project connection

Running the MCP tool with a project connection allows you to connect to an MCP server that requires authentication. The only difference from the previous example is that we need to provide the connection name. To create a connection valid for GitHub:
  1. Log in to your GitHub profile, click the profile picture in the upper right corner, and select "Settings".
  2. In the left panel, click "Developer Settings" and select "Personal access tokens > Tokens (classic)".
  3. At the top, choose "Generate new token", enter your password, and create a token that can read public repositories. Save the token, or keep the page open: once the page is closed, the token cannot be shown again!
  4. In the Azure portal, open the Microsoft Foundry you are using, select "Management center" in the left panel, and then select "Connected resources".
  5. Create a new connection of the "Custom keys" type; name it and add a key-value pair. Set the key name to Authorization; the value should have the form Bearer your_github_token.

When the connection is created, we can set it on the MCPTool and use it in the PromptAgentDefinition.

McpTool tool = ResponseTool.CreateMcpTool(
        serverLabel: "api-specs",
        serverUri: new Uri("https://api.githubcopilot.com/mcp"),
        toolCallApprovalPolicy: new McpToolCallApprovalPolicy(GlobalMcpToolCallApprovalPolicy.AlwaysRequireApproval
    ));
tool.ProjectConnectionId = mcpProjectConnectionName;
PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a helpful agent that can use MCP tools to assist users. Use the available MCP tools to answer questions and perform tasks.",
    Tools = { tool }
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));

In this scenario, the agent can be asked questions about the GitHub profile the token is attributed to. Responses from an Agent with a project connection should be handled the same way as described in the MCP tool section.

OpenAPI tool

The OpenAPI tool allows an Agent to get information from web services using an OpenAPI Specification. To use the OpenAPI tool, we create an OpenAPIFunctionDefinition object and provide the specification file to its constructor. OpenAPITool contains a Description property, which serves as a hint for when this tool should be used.

string filePath = GetFile();
OpenAPIFunctionDefinition toolDefinition = new(
    name: "get_weather",
    specificationBytes: BinaryData.FromBytes(File.ReadAllBytes(filePath)),
    authentication: new OpenAPIAnonymousAuthenticationDetails()
);
toolDefinition.Description = "Retrieve weather information for a location.";
OpenAPITool openapiTool = new(toolDefinition);

PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a helpful assistant.",
    Tools = {openapiTool}
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));

The Agent created this way can be asked questions specific to the web service.

ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(agentVersion.Name);
ResponseResult response = await responseClient.CreateResponseAsync(
        userInputText: "Use the OpenAPI tool to print out what the weather is in Seattle, WA today."
    );
Console.WriteLine(response.GetOutputText());

OpenAPI tool with project connection

Some web services using an OpenAPI specification may require authentication, which can be done through a Microsoft Foundry project connection. In our example we use the TripAdvisor specification, which uses key authentication. To create a connection:
  1. In the Azure portal, open the Microsoft Foundry you are using, select "Management center" in the left panel, and then select "Connected resources".
  2. Create a new connection of the "Custom keys" type; name it and add a key-value pair. Add a key called "Key" with the actual TripAdvisor key as the value.

Contrary to the OpenAPI tool without authentication, in this scenario we need to provide the tool constructor with OpenAPIProjectConnectionAuthenticationDetails initialized with an OpenAPIProjectConnectionSecurityScheme.

string filePath = GetFile();
AIProjectConnection tripadvisorConnection = projectClient.Connections.GetConnection("tripadvisor");
OpenAPIFunctionDefinition toolDefinition = new(
    name: "tripadvisor",
    specificationBytes: BinaryData.FromBytes(File.ReadAllBytes(filePath)),
    authentication: new OpenAPIProjectConnectionAuthenticationDetails(new OpenAPIProjectConnectionSecurityScheme(
        projectConnectionId: tripadvisorConnection.Id
    ))
);
toolDefinition.Description = "Trip Advisor API to get travel information.";
OpenAPITool openapiTool = new(toolDefinition);

PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a helpful assistant.",
    Tools = { openapiTool }
};
AgentVersion agentVersion = projectClient.Agents.CreateAgentVersion(
    agentName: "myAgent",
    options: new(agentDefinition));

We recommend testing the web service access before running production scenarios. This can be done by setting ToolChoice = ResponseToolChoice.CreateRequiredChoice() in the CreateResponseOptions. This setting forces the Agent to use the tool and will trigger an error if the service is not accessible.

ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(agentVersion.Name);
CreateResponseOptions responseOptions = new()
{
    ToolChoice = ResponseToolChoice.CreateRequiredChoice(),
    InputItems =
    {
        ResponseItem.CreateUserMessageItem("Recommend me 5 top hotels in Paris, France."),
    }
};
ResponseResult response = await responseClient.CreateResponseAsync(responseOptions);
Console.WriteLine(response.GetOutputText());

Browser automation (preview)<a id="browser-automation"></a>

Playwright is a Node.js library for browser automation. Microsoft provides the Azure Playwright workspace, which can execute Playwright-based tasks triggered by an Agent using the BrowserAutomationPreviewTool.

Create Azure Playwright workspace
  1. Deploy an Azure Playwright workspace.
  2. In the Get started section, open 2. Set up authentication.
  3. Select Service Access Token, then choose Generate Token. Save the token immediately; once you close the page, it cannot be viewed again.
Configure Microsoft Foundry
  1. Open the left navigation and select Management center.
  2. Choose Connected resources.
  3. Create a new connection of type Serverless Model.
  4. Provide a name, then paste your Access Token into the Key field.
  5. Set the Playwright Workspace Browser endpoint as the Target URI. You can find this endpoint on the Workspace Overview page. It begins with wss://.
Using Browser automation tool

Please note that Browser automation operations may take longer than typical calls to process. Using background mode for Responses or applying a network timeout of at least five minutes for non-background calls is highly recommended.

var projectEndpoint = System.Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");
var modelDeploymentName = System.Environment.GetEnvironmentVariable("MODEL_DEPLOYMENT_NAME");
var playwrightConnectionName = System.Environment.GetEnvironmentVariable("PLAYWRIGHT_CONNECTION_NAME");
AIProjectClientOptions options = new()
{
    NetworkTimeout = TimeSpan.FromMinutes(5)
};
AIProjectClient projectClient = new(endpoint: new Uri(projectEndpoint), tokenProvider: new DefaultAzureCredential(), options: options);

To use the Azure Playwright workspace, we create an agent with the BrowserAutomationPreviewTool.

AIProjectConnection playwrightConnection = await projectClient.Connections.GetConnectionAsync(playwrightConnectionName);
BrowserAutomationPreviewTool playwrightTool = new(
    new BrowserAutomationToolParameters(
        new BrowserAutomationToolConnectionParameters(playwrightConnection.Id)
    ));

PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are an Agent helping with browser automation tasks.\n" +
    "You can answer questions, provide information, and assist with various tasks\n" +
    "related to web browsing using the Browser Automation tool available to you.",
    Tools = {playwrightTool}
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));

Streaming response outputs with browser automation provides incremental updates as the automation is processed. This is advised for interactive scenarios, as browser automation can take several minutes to complete.

ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(agentVersion.Name);
CreateResponseOptions responseOptions = new()
{
    ToolChoice = ResponseToolChoice.CreateRequiredChoice(),
    StreamingEnabled = true,
    InputItems =
    {
        ResponseItem.CreateUserMessageItem("Your goal is to report the percent of Microsoft year-to-date stock price change.\n" +
            "To do that, go to the website finance.yahoo.com.\n" +
            "At the top of the page, you will find a search bar.\n" +
            "Enter the value 'MSFT', to get information about the Microsoft stock price.\n" +
            "At the top of the resulting page you will see a default chart of Microsoft stock price.\n" +
            "Click on 'YTD' at the top of that chart, and report the percent value that shows up just below it.")
    }
};
await foreach (StreamingResponseUpdate update in responseClient.CreateResponseStreamingAsync(responseOptions))
{
    ParseResponse(update);
}

SharePoint tool (preview)<a id="sharepoint"></a>

The SharepointPreviewTool allows an Agent to access SharePoint pages to get data context. Use the SharePoint connection name, as shown in the connections section of Microsoft Foundry, to get the connection. Then use the connection ID to initialize the SharePointGroundingToolOptions, which is used to create the SharepointPreviewTool.

AIProjectConnection sharepointConnection = await projectClient.Connections.GetConnectionAsync(sharepointConnectionName);
SharePointGroundingToolOptions sharepointToolOption = new()
{
    ProjectConnections = { new ToolProjectConnection(projectConnectionId: sharepointConnection.Id) }
};
PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a helpful assistant.",
    Tools = { new SharepointPreviewTool(sharepointToolOption), }
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));

Create the response, making sure the tool is always used.

ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(agentVersion.Name);
CreateResponseOptions responseOptions = new()
{
    ToolChoice = ResponseToolChoice.CreateRequiredChoice(),
    InputItems = { ResponseItem.CreateUserMessageItem("What is Contoso's whistleblower policy?") },
};
ResponseResult response = await responseClient.CreateResponseAsync(responseOptions);

The SharePoint tool can create a reference to the page grounding the data. We create the GetFormattedAnnotation method to get the URI annotation.

private static string GetFormattedAnnotation(ResponseResult response)
{
    foreach (ResponseItem item in response.OutputItems)
    {
        if (item is MessageResponseItem messageItem)
        {
            foreach (ResponseContentPart content in messageItem.Content)
            {
                foreach (ResponseMessageAnnotation annotation in content.OutputTextAnnotations)
                {
                    if (annotation is UriCitationMessageAnnotation uriAnnotation)
                    {
                        return $" [{uriAnnotation.Title}]({uriAnnotation.Uri})";
                    }
                }
            }
        }
    }
    return "";
}

Print the Agent output and add the annotation at the end.

Assert.That(response.Status, Is.EqualTo(ResponseStatus.Completed));
Console.WriteLine($"{response.GetOutputText()}{GetFormattedAnnotation(response)}");

Fabric Data Agent tool (preview) <a id="fabric"></a>

As a prerequisite for this example, we need to create a Microsoft Fabric workspace with a Lakehouse data repository. Please see the end-to-end tutorials on using Microsoft Fabric here for more information.

Create a Fabric Capacity
  1. Create a Fabric Capacity resource in the Azure Portal (note that this resource incurs charges while it is running).
  2. Create the workspace in the Power BI portal by clicking the Workspaces icon on the left panel.
  3. At the bottom click + New workspace.
  4. In the right panel, enter a name for the workspace, select Fabric capacity as the License mode, and in the Capacity dropdown select the Fabric Capacity resource we have just created.
  5. Click Apply.
Create a Lakehouse data repository
  1. Click the Lakehouse icon in the Other items you can create with Microsoft Fabric section and name the new data repository.
  2. Download the public holidays data set.
  3. In the Lakehouse menu, select Get data > Upload files and upload publicHolidays.parquet.
  4. In the Files section, click the three dots next to the uploaded file, select Load to Tables > New table, and then click Load in the window that opens.
  5. Delete the uploaded file by clicking the three dots and selecting Delete.
Add a data agent to the Fabric
  1. In the top panel, select Add to data agent > New data agent and name the newly created Agent.
  2. In the view that opens, on the left panel select the Lakehouse "publicholidays" table and check the box next to it.
  3. Ask the question we will later use through the Responses API: "What was the number of public holidays in Norway in 2024?"
  4. The Agent should show a table containing one column called "NumberOfPublicHolidays" with a single row containing the number 62.
  5. Click Publish and add the description "Agent has data about public holidays." If this step is omitted, the error "Stage configuration not found." will be returned when running the sample.
Create a Fabric connection in Microsoft Foundry.

After we have created the Fabric data agent, we can connect Fabric to our Microsoft Foundry project.

  1. Open Power BI and select the workspace we created.
  2. In the view that opens, select the Agent we created.
  3. The URL of the opened page will look like https://msit.powerbi.com/groups/%workspace_id%/aiskills/%artifact_id%?experience=power-bi, where workspace_id and artifact_id are GUIDs of the form 811acded-d5f7-11f0-90a4-04d3b0c6010a.
  4. In the Microsoft Foundry you are using for experimentation, on the left panel select Management center.
  5. Choose Connected resources.
  6. Create a new connection of type Microsoft Fabric.
  7. Populate the workspace-id and artifact-id fields with the GUIDs from the data agent URL and name the new connection.
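The two GUIDs can also be pulled out of the URL programmatically. Below is a minimal sketch with a hypothetical helper (not part of the SDK); the GUIDs in the sample URL are illustrative.

```csharp
using System;

// Hypothetical helper: extracts the workspace and artifact GUIDs from a
// data agent URL of the shape shown in step 3 above.
static (string WorkspaceId, string ArtifactId) ParseDataAgentUrl(string url)
{
    string[] segments = new Uri(url).AbsolutePath.Trim('/').Split('/');
    // Expected path: groups/<workspace_id>/aiskills/<artifact_id>
    int groups = Array.IndexOf(segments, "groups");
    int skills = Array.IndexOf(segments, "aiskills");
    return (segments[groups + 1], segments[skills + 1]);
}

var (workspaceId, artifactId) = ParseDataAgentUrl(
    "https://msit.powerbi.com/groups/811acded-d5f7-11f0-90a4-04d3b0c6010a"
    + "/aiskills/8c03756c-d5f7-11f0-b967-04d3b0c6010a?experience=power-bi");
Console.WriteLine($"workspace-id: {workspaceId}");
Console.WriteLine($"artifact-id: {artifactId}");
```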
Using Microsoft Fabric tool

To use the Agent with the Microsoft Fabric tool, we need to include a MicrosoftFabricPreviewTool in the PromptAgentDefinition.

AIProjectConnection fabricConnection = await projectClient.Connections.GetConnectionAsync(fabricConnectionName);
FabricDataAgentToolOptions fabricToolOption = new()
{
    ProjectConnections = { new ToolProjectConnection(projectConnectionId: fabricConnection.Id) }
};
PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a helpful assistant.",
    Tools = { new MicrosoftFabricPreviewTool(fabricToolOption), }
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));
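A response can then be created against this agent version, following the same pattern as the SharePoint example above. This is a sketch, assuming the same projectClient and a live Microsoft Foundry project; the question matches the one asked when publishing the data agent.

```csharp
ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(agentVersion.Name);
CreateResponseOptions responseOptions = new()
{
    InputItems = { ResponseItem.CreateUserMessageItem("What was the number of public holidays in Norway in 2024?") },
};
ResponseResult response = await responseClient.CreateResponseAsync(responseOptions);
Console.WriteLine(response.GetOutputText());
```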

A2APreviewTool (preview)<a id="a2atool"></a>

The A2A (Agent2Agent) protocol is designed to enable seamless communication between agents. In the scenario below, we assume that we have an application endpoint that complies with A2A and authenticates through the x-api-key header value.

Create a connection to A2A agent

The connection to the A2A service can be created in two ways: in classic Microsoft Foundry we create a Custom keys connection, while in the new version of Microsoft Foundry we can create a specialized A2A connection.

Classic Microsoft Foundry
  1. In the Microsoft Foundry you are using for experimentation, on the left panel select Management center.
  2. Choose Connected resources.
  3. Create a new connection of type Custom keys.
  4. Add two key-value pairs:
    • x-api-key: <your key>
    • type: custom_A2A
  5. Name and save the connection.
New Microsoft Foundry

If we are using the Agent2agent connection, we do not need to provide the endpoint separately, as the connection already contains it.

  1. Click New foundry switch at the top of Microsoft Foundry UI.
  2. Click Tools on the left panel.
  3. Click Connect tool at the upper right corner.
  4. In the window that opens, select the Custom tab.
  5. Select Agent2agent(A2A) and click Create.
  6. Populate Name and A2A Agent Endpoint, leaving Authentication set to "Key-based".
  7. In the Credential section, set the key "x-api-key" with your secret key as the value.
Using A2A Tool

To use the Agent with the A2A tool, we need to include an A2APreviewTool in the PromptAgentDefinition.

AIProjectConnection a2aConnection = projectClient.Connections.GetConnection(a2aConnectionName);
A2APreviewTool a2aTool = new()
{
    ProjectConnectionId = a2aConnection.Id
};
if (!string.Equals(a2aConnection.Type.ToString(), "RemoteA2A"))
{
    if (a2aBaseUri is null)
    {
        throw new InvalidOperationException($"The connection {a2aConnection.Name} is of {a2aConnection.Type.ToString()} type and does not carry the A2A service base URI. Please provide this value through A2A_BASE_URI environment variable.");
    }
    a2aTool.BaseUri = new Uri(a2aBaseUri);
}
PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a helpful assistant.",
    Tools = { a2aTool }
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));

Memory search tool (preview)<a id="memory-search-tool"></a>

Memory in Foundry Agent Service is a managed, long-term memory solution. It enables agent continuity across sessions, devices, and workflows. Agents can use memory stores by defining a MemorySearchPreviewTool in the PromptAgentDefinition.

agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a prompt agent capable to access memorized conversation.",
};
agentDefinition.Tools.Add(new MemorySearchPreviewTool(memoryStoreName: memoryStore.Name, scope: scope));
AgentVersion agentVersionWithMemory = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "agentWithMemory",
    options: new(agentDefinition));

Azure Function tool

Prerequisites

To make a function call, we need to create and deploy an Azure Function. The code snippet below shows an example of a C# function that can be used by the agent.

using System.Text.Json;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

namespace FunctionProj
{
    public class Response
    {
        public required string Value { get; set; }
        public required string CorrelationId { get; set; }
    }

    public class Arguments
    {
        public required string CorrelationId { get; set; }
    }

    public class Foo
    {
        private readonly ILogger<Foo> _logger;

        public Foo(ILogger<Foo> logger)
        {
            _logger = logger;
        }

        [Function("Foo")]
        [QueueOutput("azure-function-tool-output", Connection = "AzureWebJobsStorage")]
        public string Run([QueueTrigger("azure-function-foo-input")] Arguments input, FunctionContext executionContext)
        {
            var logger = executionContext.GetLogger("Foo");
            logger.LogInformation("C# Queue function processed a request.");

            var response = new Response
            {
                Value = "Bar",
                // Important! Correlation ID must match the input correlation ID.
                CorrelationId = input.CorrelationId
            };

            return JsonSerializer.Serialize(response);
        }
    }
}

In this code we define the function input and output classes: Arguments and Response, respectively. Both data classes are serialized to JSON, and it is important that each contains a CorrelationId field whose value is the same in the input and the output.
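The correlation contract can be sanity-checked locally without deploying anything. The sketch below reuses the same two classes and simulates the function's core logic: deserialize the input arguments, build the response, and serialize it back to JSON.

```csharp
using System;
using System.Text.Json;

// Simulate the queue payloads: the input message carries a CorrelationId,
// and the serialized response must carry the same value back.
string inputJson = """{"CorrelationId":"42"}""";
Arguments input = JsonSerializer.Deserialize<Arguments>(inputJson)!;
var response = new Response { Value = "Bar", CorrelationId = input.CorrelationId };
string outputJson = JsonSerializer.Serialize(response);
Console.WriteLine(outputJson);

public class Response
{
    public required string Value { get; set; }
    public required string CorrelationId { get; set; }
}

public class Arguments
{
    public required string CorrelationId { get; set; }
}
```

With the default serializer options, property names are kept as declared, matching the JSON messages shown below.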

Note: Azure Functions may only be used in the standard agent setup. Please follow the instructions to deploy an agent capable of calling Azure Functions. In our example, the function is stored in the storage account created with the AI hub, so we need to allow key access to that storage: in the Azure portal, go to Storage account > Settings > Configuration and set "Allow storage account key access" to Enabled. Otherwise, the error "The remote server returned an error: (403) Forbidden." will be displayed. To create the function resource that will host our function, install the azure-cli Python package and run the following commands:

pip install -U azure-cli
az login
az functionapp create --resource-group your-resource-group --consumption-plan-location region --runtime dotnet-isolated --functions-version 4 --name function_name --storage-account storage_account_already_present_in_resource_group --app-insights existing_or_new_application_insights_name

This function writes data to the output queue and hence needs to authenticate to Azure, so we will assign the function a system-assigned identity and grant it the Storage Queue Data Contributor role. To do that, in the Azure portal select the function located in the your-resource-group resource group, switch the identity on under Settings > Identity, and click Save. Then assign the Storage Queue Data Contributor role on the storage account used by our function (storage_account_already_present_in_resource_group in the script above) to the newly assigned system-assigned managed identity.

Now we will create the function itself. Install .NET and the Azure Functions Core Tools, then create the function project using the following commands.

func init FunctionProj --worker-runtime dotnet-isolated --target-framework net8.0
cd FunctionProj
func new --name foo --template "HTTP trigger" --authlevel "anonymous"
dotnet add package Azure.Identity
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues --prerelease

Note: There is an "Azure Queue Storage trigger" template, but attempting to use it currently results in an error. The commands above create a project containing an HTTP-triggered Azure Function with its logic in the Foo.cs file. Since we need to trigger the Azure Function by a new message in the queue, replace the contents of Foo.cs with the C# sample code above. To deploy the function, run this command from the dotnet project folder:

func azure functionapp publish function_name

In storage_account_already_present_in_resource_group, select the Queue service and create two queues: azure-function-foo-input and azure-function-tool-output. Note that the same queues are used in our sample. To check that the function is working, place the following message into azure-function-foo-input, replacing storage_account_already_present_in_resource_group with the actual storage account name (or just copy the output queue address):

{
  "OutputQueueUri": "https://storage_account_already_present_in_resource_group.queue.core.windows.net/azure-function-tool-output",
  "CorrelationId": "42"
}

After the processing, the output queue should contain the message with the following contents:

{
  "Value": "Bar",
  "CorrelationId": "42"
}

Please note that the CorrelationId in the output is the same as in the input. Hint: place multiple messages into the input queue and keep a second browser window with the output queue open, hitting the refresh button in the portal UI so that you do not miss the message. If the function completes with an error, the message instead lands in the azure-function-foo-input-poison queue; if that happens, please check your setup. After we have tested the function and made sure it works, make sure that the Azure AI Project has the following roles on the storage account: Storage Account Contributor, Storage Blob Data Contributor, Storage File Data Privileged Contributor, Storage Queue Data Contributor, and Storage Table Data Contributor. Now the function is ready to be used by the agent.

Using Agent with Azure Function

To use the agent with an Azure Function, we need to create an AzureFunctionTool containing the description of the Azure Function.

public class Sample_AzureFunction : ProjectsOpenAITestBase
{
    private static AzureFunctionTool GetFunctionTool(string storageQueueUri)
    {
        AzureFunctionDefinitionFunction functionDefinition = new(
            name: "foo",
            parameters: BinaryData.FromObjectAsJson(
                new
                {
                    Type = "object",
                    Properties = new
                    {
                        query = new
                        {
                            Type = "string",
                            Description = "The question to ask.",
                        }
                    }
                },
                new JsonSerializerOptions() { PropertyNamingPolicy = JsonNamingPolicy.CamelCase }
            )
        )
        {
            Description = "Get answers from the foo bot.",
        };
        return new AzureFunctionTool(
            new AzureFunctionDefinition(
                function: functionDefinition,
                inputBinding: new AzureFunctionBinding(
                    new AzureFunctionStorageQueue(queueServiceEndpoint: storageQueueUri, queueName: "azure-function-foo-input")),
                outputBinding: new AzureFunctionBinding(
                    new AzureFunctionStorageQueue(queueServiceEndpoint: storageQueueUri, queueName: "azure-function-tool-output"))
                )
            );
    }
}

This tool is then referenced from the PromptAgentDefinition so the Agent can call the Azure Function when required.

PromptAgentDefinition agentDefinition = new(model: modelDeploymentName)
{
    Instructions = "You are a helpful support agent. Use the provided function any "
        + "time the prompt contains the string 'What would foo say?'. When you invoke "
        + "the function, ALWAYS specify the output queue uri parameter as "
        + $"'{storageQueueUri}/azure-function-tool-output'. Always respond with "
        + "\"Foo says\" and then the response from the tool.",
    Tools = { GetFunctionTool(storageQueueUri) },
};
AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));
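With the agent version created, a response can be requested in the same way as in the earlier examples. This is a sketch, assuming the same projectClient and a live Microsoft Foundry project; the prompt contains the trigger phrase from the agent instructions, so a tool call is expected.

```csharp
ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(agentVersion.Name);
CreateResponseOptions responseOptions = new()
{
    InputItems = { ResponseItem.CreateUserMessageItem("What would foo say?") },
};
ResponseResult response = await responseClient.CreateResponseAsync(responseOptions);
Console.WriteLine(response.GetOutputText());
```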

Troubleshooting

Any operation that fails will throw a ClientResultException. The exception's Status will hold the HTTP response status code. The exception's Message contains a detailed message that may be helpful in diagnosing the issue:

try
{
    AgentVersion agent = await projectClient.Agents.GetAgentVersionAsync(
        agentName: "agent_which_does_not_exist", agentVersion: "1");
}
catch (ClientResultException e) when (e.Status == 404)
{
    Console.WriteLine($"Exception status code: {e.Status}");
    Console.WriteLine($"Exception message: {e.Message}");
}

To further diagnose and troubleshoot issues, you can enable logging following the Azure SDK logging documentation. This allows you to capture additional insights into request and response details, which can be particularly helpful when diagnosing complex issues.
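For example, console logging of Azure SDK events can be enabled with the AzureEventSourceListener type from Azure.Core; a minimal sketch, where EventLevel.Verbose also captures request and response details:

```csharp
using System.Diagnostics.Tracing;
using Azure.Core.Diagnostics;

// Forward Azure SDK event-source logs to the console for the lifetime of the listener.
using AzureEventSourceListener listener =
    AzureEventSourceListener.CreateConsoleLogger(EventLevel.Verbose);
```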

Next steps

Beyond the introductory scenarios discussed here, the client library supports additional scenarios that take advantage of the full feature set of the AI services. To help explore some of them, it offers a set of samples illustrating common scenarios. Please see the Samples.

Contributing

See the Azure SDK CONTRIBUTING.md for details on building, testing, and contributing to this library.

Product Compatible and additional computed target framework versions.
.NET net5.0 was computed.  net5.0-windows was computed.  net6.0 was computed.  net6.0-android was computed.  net6.0-ios was computed.  net6.0-maccatalyst was computed.  net6.0-macos was computed.  net6.0-tvos was computed.  net6.0-windows was computed.  net7.0 was computed.  net7.0-android was computed.  net7.0-ios was computed.  net7.0-maccatalyst was computed.  net7.0-macos was computed.  net7.0-tvos was computed.  net7.0-windows was computed.  net8.0 is compatible.  net8.0-android was computed.  net8.0-browser was computed.  net8.0-ios was computed.  net8.0-maccatalyst was computed.  net8.0-macos was computed.  net8.0-tvos was computed.  net8.0-windows was computed.  net9.0 was computed.  net9.0-android was computed.  net9.0-browser was computed.  net9.0-ios was computed.  net9.0-maccatalyst was computed.  net9.0-macos was computed.  net9.0-tvos was computed.  net9.0-windows was computed.  net10.0 is compatible.  net10.0-android was computed.  net10.0-browser was computed.  net10.0-ios was computed.  net10.0-maccatalyst was computed.  net10.0-macos was computed.  net10.0-tvos was computed.  net10.0-windows was computed. 
.NET Core netcoreapp2.0 was computed.  netcoreapp2.1 was computed.  netcoreapp2.2 was computed.  netcoreapp3.0 was computed.  netcoreapp3.1 was computed. 
.NET Standard netstandard2.0 is compatible.  netstandard2.1 was computed. 
.NET Framework net461 was computed.  net462 was computed.  net463 was computed.  net47 was computed.  net471 was computed.  net472 was computed.  net48 was computed.  net481 was computed. 
MonoAndroid monoandroid was computed. 
MonoMac monomac was computed. 
MonoTouch monotouch was computed. 
Tizen tizen40 was computed.  tizen60 was computed. 
Xamarin.iOS xamarinios was computed. 
Xamarin.Mac xamarinmac was computed. 
Xamarin.TVOS xamarintvos was computed. 
Xamarin.WatchOS xamarinwatchos was computed. 

NuGet packages (4)

Showing the top 4 NuGet packages that depend on Azure.AI.Projects.OpenAI:

Package Downloads
Azure.AI.Projects

This is the Azure.AI.Projects client library for developing .NET applications with rich experience.

Microsoft.Agents.AI.AzureAI

Provides Microsoft Agent Framework support for Foundry Agents.

UtilityAi.Maf

Integration layer between UtilityAI orchestration framework and Microsoft Agent Framework (MAF). Enables utility-based decision-making to select and orchestrate MAF agents.

Microsoft.Agents.AI.Workflows.Declarative.AzureAI

Provides Microsoft Agent Framework support for declarative workflows for Azure AI Agents.

GitHub repositories

This package is not used by any popular GitHub repositories.

Version Downloads Last Updated
2.0.0-beta.1 492 2/25/2026
1.0.0-beta.5 106,855 12/13/2025
1.0.0-beta.4 30,461 11/18/2025
1.0.0-beta.3 10,445 11/16/2025
1.0.0-beta.2 922 11/14/2025
1.0.0-beta.1 8,804 11/14/2025