The Azure AI Foundry Agent Service REST API provides endpoints for creating, configuring, and running AI agents. These agents are cloud-hosted services that pair large language models (LLMs) with tools to read data, call functions, and execute logic on your behalf. The service is fully managed by Azure, so you can focus on building intelligent workflows without managing infrastructure.
The API follows the same protocol as the Azure OpenAI Assistants API. This allows you to use existing OpenAI-compatible tools and SDKs with minimal configuration changes.
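As a rough illustration of that compatibility, the sketch below points the OpenAI Python SDK at an Agent Service endpoint and lists agents through the Assistants-style surface. The base URL placeholder, the token scope, and the practice of passing a Microsoft Entra ID token where the SDK expects an API key are assumptions for illustration, not guaranteed configuration.

```python
# Sketch: using the OpenAI Python SDK against an Agent Service endpoint.
# The base_url placeholder, token scope, and token-as-api_key wiring are
# assumptions; substitute your project's endpoint and credential setup.
from azure.identity import DefaultAzureCredential
from openai import OpenAI

credential = DefaultAzureCredential()
token = credential.get_token("https://ai.azure.com/.default")  # scope is an assumption

client = OpenAI(
    base_url="https://<your-project-endpoint>/",  # placeholder, not a real endpoint
    api_key=token.token,  # the SDK sends this as an Authorization: Bearer header
)

# Because the service follows the Assistants protocol, Assistants-style
# calls such as listing agents are expected to work with minimal changes.
for assistant in client.beta.assistants.list():
    print(assistant.id, assistant.name)
```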
For more information about the service, see the Azure AI Agent Service documentation.
REST operation groups
The API operations are grouped by core concepts. Use the table of contents on the left to browse the available endpoints. Key operation groups include the following (a typical request sequence is sketched after the list):
- Agents – Create, retrieve, update, or delete an agent definition. An agent includes the model, instructions, and tool configuration.
- Threads – Create or list conversation threads. A thread represents the message history for an interaction.
- Messages – Add or retrieve messages within a thread. Messages can be from the user or the agent.
- Runs – Start an agent run on a thread. A run processes the thread and may call tools during execution.
- Run steps – Inspect individual actions performed during a run, such as tool invocations or model calls.
- Tools – Register and manage custom tools defined by OpenAPI specifications.
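The sketch below walks these groups in a typical order: create an agent, start a thread, add a message, run the agent on the thread, then inspect the run steps. The endpoint paths, api-version placeholder, token scope, and model name are assumptions based on the Assistants-style protocol this API follows; check the endpoint reference for the exact values your project uses.

```python
# Sketch of the agent -> thread -> message -> run -> run steps flow over raw HTTP.
# Paths, api-version, token scope, and model name below are illustrative assumptions.
import requests
from azure.identity import DefaultAzureCredential

ENDPOINT = "https://<your-project-endpoint>"     # placeholder
API_VERSION = {"api-version": "<api-version>"}   # placeholder

token = DefaultAzureCredential().get_token("https://ai.azure.com/.default")
headers = {"Authorization": f"Bearer {token.token}", "Content-Type": "application/json"}

def post(path, payload):
    """POST a JSON payload to the service and return the parsed response body."""
    resp = requests.post(f"{ENDPOINT}{path}", headers=headers, params=API_VERSION, json=payload)
    resp.raise_for_status()
    return resp.json()

# 1. Agents: define the model deployment, instructions, and tool configuration.
agent = post("/assistants", {"model": "<model-deployment-name>",
                             "instructions": "You are a helpful agent."})

# 2. Threads: create a conversation to hold the message history.
thread = post("/threads", {})

# 3. Messages: add the user's input to the thread.
post(f"/threads/{thread['id']}/messages", {"role": "user", "content": "Summarize my data."})

# 4. Runs: ask the agent to process the thread (tools may be called during execution).
run = post(f"/threads/{thread['id']}/runs", {"assistant_id": agent["id"]})

# 5. Run steps: inspect the individual actions the run performed.
steps = requests.get(f"{ENDPOINT}/threads/{thread['id']}/runs/{run['id']}/steps",
                     headers=headers, params=API_VERSION)
steps.raise_for_status()
print(steps.json())
```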
Authentication
The Agent Service supports Microsoft Entra ID tokens for authentication. Azure resource keys are not yet supported.
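As a minimal sketch, the snippet below acquires an Entra ID token with the azure-identity library and attaches it to a request as a bearer credential. The token scope shown is an assumption; confirm the scope your endpoint expects before relying on it.

```python
# Minimal sketch: acquiring a Microsoft Entra ID token for the service.
# DefaultAzureCredential resolves credentials from managed identity, the
# Azure CLI, environment variables, and more. The scope is an assumption.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
access_token = credential.get_token("https://ai.azure.com/.default")

# Every REST request carries the token as a bearer credential; there is no
# key-based alternative, since Azure resource keys are not supported.
headers = {"Authorization": f"Bearer {access_token.token}"}
```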
Access is controlled by Azure role-based access control (RBAC). For more details, see What is Azure role-based access control (Azure RBAC)? and Role-based access control for Azure AI Foundry.