Version: Next

Model Providers

The Model Providers page allows administrators to configure and manage various AI model providers. This guide will walk you through the setup process and explain the available options.

Configuring Model Providers

Obot supports a variety of model providers, including:

  • OpenAI
  • Anthropic
  • xAI
  • Ollama
  • Voyage AI
  • Groq
  • vLLM
  • DeepSeek
  • Google

The UI indicates whether each provider has been configured. Once a provider is configured, you can modify or remove its configuration.

note

Our Enterprise release adds support for additional Enterprise-grade model providers. See here for more details.

Configuring and enabling a provider

To configure a provider:

  1. Click its "Configure" button
  2. Enter the required information, such as API keys or endpoints
  3. Save the configuration to apply the settings

When you save, Obot validates the configuration to ensure it can connect to the model provider. You can configure multiple model providers, which lets you pick the right provider and model for each use case.
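The required information varies by provider. As an illustration only (field names may differ in the UI, and all values below are placeholders; the Ollama and vLLM endpoints shown are common defaults, not requirements):

```text
OpenAI:   API Key  = sk-<your-key>
Ollama:   Endpoint = http://localhost:11434
vLLM:     Endpoint = http://<your-vllm-host>:8000/v1
```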

Viewing and managing models

Once a provider is configured, you can view and manage the models it offers. Set a usage type for each model to determine how it is used within the application:

| Usage Type | Description | Application |
| --- | --- | --- |
| Language Model | Drives text generation and tool calls | Used in agents and tasks; can be set as an agent's primary model |
| Text Embedding | Converts text into numerical vectors | Used by the knowledge tool for RAG functionality |
| Image Generation | Creates images from textual descriptions | Used by image generation tools |
| Vision | Analyzes and processes visual data | Used by the image vision tool |
| Other | Default when no specific usage is selected | Available for all purposes |
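To make the Text Embedding row concrete: RAG works by converting both knowledge chunks and the user's query into vectors, then retrieving the chunk whose vector is most similar to the query's. A minimal sketch with made-up vectors (generic Python, not Obot's internal API):

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: closer to 1.0 means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend an embedding model already turned these knowledge chunks into vectors.
knowledge_base = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
}
query_vector = [0.85, 0.15, 0.05]  # hypothetical embedding of "how do refunds work?"

# Retrieve the most relevant chunk for the query.
best = max(knowledge_base, key=lambda k: cosine_similarity(query_vector, knowledge_base[k]))
print(best)  # → refund policy
```

In a real deployment, the configured text-embedding model produces these vectors; the retrieval step is the same idea at larger scale.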

You can also activate or deactivate specific models, controlling their availability to users.

Setting Default Models

The "Set Default Models" feature allows you to configure default models for various tasks. Choose default models for the following categories:

  • Language Model (Chat) - Primary conversational model
  • Language Model (Chat - Fast) - Optimized for quick responses
  • Text Embedding (Knowledge) - Used for knowledge base operations
  • Image Generation - For creating images
  • Vision - For image analysis and processing

These defaults give users sensible pre-selected models for tools and other functionality throughout the platform. After selecting the desired defaults, click "Save Changes" to confirm your configuration.

Instructions for configuring specific providers

Azure OpenAI (Enterprise only)

The Azure OpenAI model provider supports legacy Azure OpenAI resources; Microsoft Foundry resources also work with API key authentication.

This model provider supports two forms of authentication: API keys and Microsoft Entra.

API Key Authentication

In the Azure OpenAI or Microsoft Foundry UI, you can find your API key after you have set up at least one deployment, as well as your endpoint URL. Both of these values are required when you configure this model provider in Obot.

Additionally, you must manually specify the names of all of your deployments, as the API key does not provide the ability to list them. The format is name:type, for example, gpt-5.2:reasoning-llm. The supported types are llm, reasoning-llm, text-embedding, and image-generation. If no type is specified, Obot will assume the type is llm.
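For instance, a deployment list covering several model types might look like the following. The deployment names are hypothetical (yours are whatever you named them in Azure), and the entries are shown one per line for readability:

```text
gpt-4o-prod:llm
gpt-5.2:reasoning-llm
embeddings-ada:text-embedding
dalle-3:image-generation
my-assistant
```

The last entry has no type, so Obot treats it as llm.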

If the specified type is llm (or no type is given) and the deployment name starts with the name of a known reasoning model (such as o3 or gpt-5), Obot automatically treats it as a reasoning model.
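The fallback rule above amounts to simple prefix matching on the deployment name. A sketch of that logic (an illustration of the documented behavior, not Obot's actual code; the prefix list is not exhaustive):

```python
# Known reasoning-model name prefixes mentioned in the docs (illustrative subset).
REASONING_PREFIXES = ("o3", "gpt-5")

def resolve_deployment(entry: str) -> tuple[str, str]:
    """Parse a 'name:type' deployment entry, defaulting the type to 'llm',
    and promote 'llm' to 'reasoning-llm' when the name matches a known prefix."""
    name, _, dtype = entry.partition(":")
    dtype = dtype or "llm"  # no type specified → assume llm
    if dtype == "llm" and name.startswith(REASONING_PREFIXES):
        dtype = "reasoning-llm"
    return name, dtype

print(resolve_deployment("gpt-5.2:reasoning-llm"))  # explicit type is kept
print(resolve_deployment("o3-mini"))                # promoted to reasoning-llm
print(resolve_deployment("my-gpt4:llm"))            # stays llm
```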

Microsoft Entra Authentication

Instead of using an API key, you can set up a Microsoft Entra app registration as a service principal to use Azure OpenAI.

Obot requires the Client ID, Client Secret, and Tenant ID of the Entra app, as well as the Endpoint URL, Subscription ID, and Resource Group from Azure OpenAI/Microsoft Foundry.

You do NOT need to manually specify your deployment names, as the Entra app's credentials will be sufficient to list them.

After you have created your Entra app registration, you need to go to your Azure OpenAI resource in the Azure portal and add a new role assignment for the app registration, as a service principal. It needs to have the Cognitive Services OpenAI User role.
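If you prefer the Azure CLI to the portal, the equivalent role assignment looks roughly like this (all bracketed values are placeholders for your own IDs and resource names):

```shell
# Grant the Entra app's service principal the required role
# on the Azure OpenAI resource.
az role assignment create \
  --assignee "<entra-app-client-id>" \
  --role "Cognitive Services OpenAI User" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<resource-name>"
```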

See the Microsoft docs for more details.