AI Proxy configuration (aiconfiguration.json)
The aiconfiguration.json file is the central configuration point of AI Proxy - it defines all connections to AI providers, available models, and strategies for their usage. This file determines which models are available to WEBCON, in what order they are selected (priority), and what connection parameters are used to communicate with individual providers.
In this guide you will find the complete structure of the configuration file with examples for all three supported providers: Google Vertex AI, OpenAI, and Azure AI Foundry. Each section is described in detail with parameter explanations and value examples. You can use the full configuration as a starting point or select only the sections that match your needs.
In Self-hosted mode, values in ProviderConfiguration contain actual API keys (not secret names from Key Vault). Do not commit this file to the repository.
What you'll learn from this guide
- How to define connections to AI providers (ProviderConnections)
- How to configure available models and their priorities (ProviderModels)
- How to assign models to specific WEBCON functions (MethodTypesConfiguration)
- What parameters are required for each provider
- How to configure failover strategies between providers
Configuration structure
The file contains three sections:
- ProviderConnections - connection definitions to AI providers
- ProviderModels - list of available models
- MethodTypesConfiguration - assignment of models to functions
Complete configuration example
Below is a complete configuration with three AI providers (Azure, Google Vertex, OpenAI):
{
  "ProviderConnections": {
    "AzureFoundry": {
      "Description": "Azure AiFoundry Connector Provider",
      "Type": "AzureAi",
      "ProviderConfiguration": {
        "ApiKey": "your-azure-api-key-here",
        "Endpoint": "https://your-endpoint.openai.azure.com/"
      }
    },
    "GoogleVertex": {
      "Description": "Google Connector Provider",
      "Type": "Gemini",
      "ProviderConfiguration": {
        "ApiKey": "your-google-api-key-here",
        "ServiceAccount": "{\"type\":\"service_account\",\"project_id\":\"your-project\"...}",
        "ProjectId": "your-project-id",
        "Region": "us-central1",
        "BucketName": "your-bucket-name",
        "DocumentationProcessBuilderRagCorpus": "your-corpus-id"
      }
    },
    "OpenAi": {
      "Description": "OpenAi Connector Provider",
      "Type": "OpenAi",
      "ProviderConfiguration": {
        "ApiKey": "sk-your-openai-api-key-here"
      }
    }
  },
  "ProviderModels": [
    {
      "ConnectionName": "GoogleVertex",
      "Priority": 4,
      "Name": "Gemini 2.0-flash-lite-001",
      "Description": "",
      "TextModel": {
        "ModelName": "gemini-2.0-flash-lite-001"
      },
      "ImageModel": {
        "ModelName": "imagen-3.0-fast-generate-001"
      },
      "AudioModel": {
        "ModelName": "gemini-2.0-flash-lite-001"
      },
      "EmbeddingModel": {
        "ModelName": "gemini-embedding-001"
      }
    },
    {
      "ConnectionName": "GoogleVertex",
      "Priority": 3,
      "Name": "Vertex gemini 2.5-flash-lite",
      "Description": "",
      "TextModel": {
        "ModelName": "gemini-2.5-flash-lite"
      },
      "ImageModel": {
        "ModelName": "imagen-3.0-fast-generate-001"
      },
      "AudioModel": {
        "ModelName": "gemini-2.5-flash-lite"
      },
      "EmbeddingModel": {
        "ModelName": "gemini-embedding-001"
      }
    },
    {
      "ConnectionName": "GoogleVertex",
      "Priority": 2,
      "Name": "Vertex gemini 2.5-flash",
      "Description": "",
      "TextModel": {
        "ModelName": "gemini-2.5-flash"
      },
      "ImageModel": {
        "ModelName": "imagen-3.0-fast-generate-001"
      },
      "AudioModel": {
        "ModelName": "gemini-2.5-flash"
      },
      "EmbeddingModel": {
        "ModelName": "gemini-embedding-001"
      }
    },
    {
      "ConnectionName": "OpenAi",
      "Priority": 1,
      "Name": "OpenAi BasicTier",
      "Description": "",
      "TextModel": {
        "ModelName": "gpt-4o-mini-2024-07-18"
      },
      "ImageModel": {
        "ModelName": "gpt-4o-mini-2024-07-18"
      },
      "AudioModel": {
        "ModelName": "whisper-1"
      },
      "EmbeddingModel": {
        "ModelName": "text-embedding-3-small"
      }
    }
  ],
  "MethodTypesConfiguration": {
    "ConciergePrompt": [
      "Vertex gemini 2.5-flash-lite",
      "Gemini 2.0-flash-lite-001"
    ],
    "ConciergeExecuteTool": [
      "Vertex gemini 2.5-flash-lite",
      "Gemini 2.0-flash-lite-001"
    ]
  }
}
Section explanations
ProviderConnections
Connection definitions for AI providers. Each entry contains:
- Type - the provider type: AzureAi, Gemini, or OpenAi
- ProviderConfiguration - API keys and other connection parameters
You can remove unused providers from the configuration. If you use only Google Vertex, you can delete the AzureFoundry and OpenAi sections.
ProviderModels
The list of available AI models. Each model contains:
- ConnectionName - the name of a connection from the ProviderConnections section
- Priority - the priority (higher = preferred), used during automatic model selection
- Name - a unique model name
- TextModel, ImageModel, AudioModel, EmbeddingModel - the names of the specific models used for each type of operation
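The priority-based selection can be sketched in a few lines of Python. This is only an illustration of the rule (higher Priority wins), not AI Proxy's actual implementation; the entries below are simplified stand-ins for the ProviderModels section:

```python
# Simplified stand-ins for entries from the ProviderModels section.
provider_models = [
    {"Name": "Gemini 2.0-flash-lite-001", "Priority": 4},
    {"Name": "Vertex gemini 2.5-flash-lite", "Priority": 3},
    {"Name": "OpenAi BasicTier", "Priority": 1},
]

def pick_preferred(models):
    """Return the entry with the highest Priority value (higher = preferred)."""
    return max(models, key=lambda m: m["Priority"])

print(pick_preferred(provider_models)["Name"])  # Gemini 2.0-flash-lite-001
```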
MethodTypesConfiguration
Mapping of operation types to models. It determines which models are used for each function:
- ConciergePrompt - answering user questions
- ConciergeExecuteTool - tool execution by the AI
The order in the array defines preference - AI Proxy tries the first model first and, if it is unavailable, moves on to the next one.
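The fallback behavior can be illustrated with a short Python sketch. The helper and the availability set are hypothetical (AI Proxy's internal failover logic may differ); the sketch only shows how an ordered preference list is walked:

```python
# Preference list as in MethodTypesConfiguration.
method_config = {
    "ConciergePrompt": ["Vertex gemini 2.5-flash-lite", "Gemini 2.0-flash-lite-001"],
}
# Hypothetical availability state: the first model is currently down.
available = {"Gemini 2.0-flash-lite-001"}

def resolve_model(method, config, available_models):
    """Return the first available model from the ordered preference list."""
    for name in config[method]:
        if name in available_models:
            return name
    raise RuntimeError(f"No available model for {method}")

print(resolve_model("ConciergePrompt", method_config, available))  # Gemini 2.0-flash-lite-001
```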
Basic configuration steps
1. Prepare API keys
Obtain API keys from your providers:
- Google Vertex AI - Service Account JSON, Project ID, Region
- OpenAI - API key starting with sk-
- Azure OpenAI - API key and endpoint URL
2. Edit aiconfiguration.json file
Fill in the values in the ProviderConfiguration section:
"ProviderConfiguration": {
  "ApiKey": "sk-your-actual-api-key-here"
}
3. Save the file
Make sure that:
- JSON format is correct (no missing commas/brackets)
- All API keys are filled in
- File is in the same directory as docker-compose.yml
4. Run the container
docker-compose up -d
Troubleshooting
Error: Invalid JSON format
# Check JSON validity online or in editor
# Make sure that:
# - All quotes are double "
# - Commas between elements (but not after the last one)
# - All brackets {} and [] are closed
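A quick way to pinpoint a syntax error is to load the file with Python's built-in json module, which reports the line and column of the first problem. The file path is an assumption (adjust it to where your aiconfiguration.json lives):

```python
import json

def validate_config(path="aiconfiguration.json"):
    """Load the file and report the first JSON syntax error, if any."""
    try:
        with open(path, encoding="utf-8") as f:
            json.load(f)
        return "OK"
    except json.JSONDecodeError as e:
        return f"Invalid JSON at line {e.lineno}, column {e.colno}: {e.msg}"
```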
Error: Authentication failed
Common causes:
- Invalid API key
- Key expired or revoked
- Incorrect endpoint (for Azure)
Solution:
# Check container logs
docker-compose logs ai-proxy
# Generate new API key from provider
# Update aiconfiguration.json
# Restart container
docker-compose restart ai-proxy
Model is not being used
Causes:
- ConnectionName doesn't match the name in ProviderConnections
- Model doesn't exist at the provider (typo in ModelName)
- Name in MethodTypesConfiguration doesn't match Name in ProviderModels
Solution: Check name consistency across all sections - they must match exactly (case sensitive).
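This consistency check can be automated with a short script. It is a sketch, not an official tool: it loads the parsed configuration as a dict and reports any name that does not resolve across sections:

```python
def check_consistency(config):
    """Return a list of name mismatches between the three config sections."""
    errors = []
    connections = set(config["ProviderConnections"])
    model_names = {m["Name"] for m in config["ProviderModels"]}
    # Every model must point at a defined connection.
    for model in config["ProviderModels"]:
        if model["ConnectionName"] not in connections:
            errors.append(f"Unknown ConnectionName: {model['ConnectionName']}")
    # Every method entry must reference a defined model Name (case sensitive).
    for method, names in config["MethodTypesConfiguration"].items():
        for name in names:
            if name not in model_names:
                errors.append(f"{method} references unknown model: {name}")
    return errors
```

An empty result means all names line up; otherwise each entry tells you which section to fix.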