Version: 2026 R1

AIConfiguration setup

The aiconfiguration.json file and its environment-specific variants (e.g., aiconfiguration.Staging.json, aiconfiguration.Production.json) contain configuration for connections to AI providers, model definitions, and the mapping of operation types to specific models. This section explains the structure, how to prepare the configuration, and how to validate it.

The purpose is to describe the structure and key sections: ProviderConnections, ProviderModels, and MethodTypesConfiguration and to provide examples and a checklist for quick configuration validation.

Explanation of key sections

ProviderConnections

The ProviderConnections section contains a list of configured connections to AI providers. Each entry includes:

  • Description - a description of the connection,
  • Type - the provider type (e.g., Gemini, OpenAi, Azure),
  • ProviderConfiguration - a configuration object containing Key Vault secret names (NOT the actual secrets).
info

Values in ProviderConfiguration are Key Vault secret names, not the actual API keys. In Self-hosted mode (when Azure Key Vault is not used), you provide the real secret here instead.

ProviderModels

The ProviderModels section is an array of model definitions mapped to a ConnectionName. Each entry includes:

  • ConnectionName - the connection name from ProviderConnections,
  • Priority - model priority (higher value = higher priority during model selection),
  • Name - a unique model name (used in MethodTypesConfiguration mapping),
  • TextModel - an object containing ModelName for text operations.
  • ImageModel (optional) - an object containing ModelName for image operations. Required for image generation.
  • AudioModel (optional) - an object containing ModelName for audio operations. Required for the AudioTranscribe action.
  • EmbeddingModel (optional) - an object containing ModelName for embedding operations. Required for embedding generation.
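For illustration, a single ProviderModels entry that defines all four model types might look like the fragment below. The ModelName values are placeholders chosen for this sketch, not recommendations from the product documentation:

```json
{
  "ConnectionName": "OpenAi",
  "Priority": 2,
  "Name": "OpenAi FullTier",
  "TextModel": { "ModelName": "gpt-4o" },
  "ImageModel": { "ModelName": "dall-e-3" },
  "AudioModel": { "ModelName": "whisper-1" },
  "EmbeddingModel": { "ModelName": "text-embedding-3-small" }
}
```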

MethodTypesConfiguration

The MethodTypesConfiguration section maps logical method types (e.g., ConciergePrompt, ConciergeExecuteTool, AgentPrompt) to ordered arrays of model names from ProviderModels. This defines the preferred models for each operation type.

The order of items in the array matters — the application attempts to use the first model and, if it is unavailable, falls back to the next one.
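The fallback behavior described above can be sketched as follows. This is a simplified illustration, not the product's actual selection code, which presumably also takes Priority and provider health into account; the `is_available` callback is a hypothetical stand-in for whatever availability check the application performs:

```python
def select_model(method_type, config, is_available):
    """Return the first available model for a method type, in configured order."""
    for model_name in config.get("MethodTypesConfiguration", {}).get(method_type, []):
        if is_available(model_name):
            return model_name
    return None  # no configured model is currently available

config = {"MethodTypesConfiguration": {"ConciergePrompt": ["model-a", "model-b"]}}

# If "model-a" is unavailable, the application falls back to the next entry:
print(select_model("ConciergePrompt", config, lambda m: m == "model-b"))  # model-b
```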

Configuration example

Below is an abridged example based on aiconfiguration.Staging.json (some models referenced in MethodTypesConfiguration are omitted from ProviderModels for brevity):

{
  "ProviderConnections": {
    "GoogleVertex": {
      "Description": "Google Connector Provider",
      "Type": "Gemini",
      "ProviderConfiguration": {
        "ApiKey": "AiGeminiTestEnvironment",
        "ServiceAccount": "AiVertexAiServiceAccountJson",
        "ProjectId": "AiVertexAiProjectId",
        "Region": "AiVertexAiRegion",
        "BucketName": "AiGoogleCloudBucketName"
      }
    },
    "OpenAi": {
      "Description": "OpenAi Connector Provider",
      "Type": "OpenAi",
      "ProviderConfiguration": {
        "ApiKey": "AiOpenAiTestEnvironment"
      }
    }
  },

  "ProviderModels": [
    {
      "ConnectionName": "GoogleVertex",
      "Priority": 4,
      "Name": "Gemini 2.0-flash-lite-001",
      "TextModel": { "ModelName": "gemini-2.0-flash-lite-001" }
    },
    {
      "ConnectionName": "OpenAi",
      "Priority": 1,
      "Name": "OpenAi BasicTier",
      "TextModel": { "ModelName": "gpt-4o-mini-2024-07-18" },
      "ImageModel": { "ModelName": "dall-e-3" }
    }
  ],

  "MethodTypesConfiguration": {
    "ConciergePrompt": [
      "Gemini 2.0-flash-lite-001",
      "Vertex gemini 2.5-flash-lite"
    ],
    "ConciergeExecuteTool": [
      "Gemini 2.0-flash-lite-001",
      "Vertex gemini 2.5-flash"
    ],
    "AgentPrompt": [
      "OpenAi BasicTier"
    ]
  }
}

Preparation checklist (support steps)

Step 1: Validate file location and name

  • Confirm you are editing the correct environment file: aiconfiguration.json, aiconfiguration.Staging.json, or aiconfiguration.Production.json.

Step 2: Confirm no plain-text secrets

  • All secret values in ProviderConfiguration must be Key Vault secret names (e.g., AiOpenAiTestEnvironment), not actual API keys.
  • NEVER put real API keys in the configuration file.

Step 3: Verify secrets exist in Key Vault

  • For each Key Vault secret name listed in the configuration, verify that the Key Vault instance contains a secret with the expected name.
  • Typical Key Vault keys to verify (from the example):
    • AiGeminiTestEnvironment
    • AiVertexAiServiceAccountJson
    • AiVertexAiProjectId
    • AiOpenAiTestEnvironment
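To collect the full list of secret names to verify, you can extract every value from the ProviderConfiguration objects. This is a helper sketch, not product tooling; it assumes all ProviderConfiguration values are Key Vault secret names, which holds whenever Self-hosted mode is not in use:

```python
def secret_names(config):
    """Collect all Key Vault secret names referenced in ProviderConfiguration blocks."""
    names = set()
    for connection in config.get("ProviderConnections", {}).values():
        names.update(connection.get("ProviderConfiguration", {}).values())
    return names

config = {
    "ProviderConnections": {
        "OpenAi": {
            "Type": "OpenAi",
            "ProviderConfiguration": {"ApiKey": "AiOpenAiTestEnvironment"},
        }
    }
}
print(sorted(secret_names(config)))  # ['AiOpenAiTestEnvironment']
```

Each name in the resulting set should then be checked against the Key Vault instance for the target environment.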

Step 4: Check ProviderModels consistency

  • Ensure each ProviderModel.ConnectionName matches a key in ProviderConnections.
  • Confirm the Priority ordering is intentional (higher value = higher priority during model selection).
  • Validate that each ModelName is correct for the given provider (e.g., proper naming conventions for Vertex/OpenAI).
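The first check in this step can be automated with a short script. This is a sketch against the structure shown in the example above, not part of the product:

```python
def check_connections(config):
    """Return names of ProviderModels whose ConnectionName is missing from ProviderConnections."""
    connections = set(config.get("ProviderConnections", {}))
    return [model.get("Name") for model in config.get("ProviderModels", [])
            if model.get("ConnectionName") not in connections]

config = {
    "ProviderConnections": {"OpenAi": {}},
    "ProviderModels": [
        {"ConnectionName": "OpenAi", "Name": "OpenAi BasicTier"},
        {"ConnectionName": "GoogleVertex", "Name": "Gemini 2.0-flash-lite-001"},
    ],
}
print(check_connections(config))  # ['Gemini 2.0-flash-lite-001']
```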

Step 5: Validate MethodTypesConfiguration

  • Ensure the model Name values listed here exactly match ProviderModels[].Name strings.
  • The order defines fallback preferences - confirm the ordering reflects the desired behavior.
  • Check that all method types used by the application have a corresponding mapping.
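The exact-match requirement can be verified mechanically. The sketch below reports, per method type, any model names that do not match a ProviderModels[].Name (a helper for this checklist, not product tooling):

```python
def check_method_mappings(config):
    """Map each method type to the model names with no matching ProviderModels[].Name."""
    known = {model.get("Name") for model in config.get("ProviderModels", [])}
    mismatches = {
        method: [name for name in names if name not in known]
        for method, names in config.get("MethodTypesConfiguration", {}).items()
    }
    return {method: names for method, names in mismatches.items() if names}

config = {
    "ProviderModels": [{"Name": "OpenAi BasicTier"}],
    "MethodTypesConfiguration": {
        "AgentPrompt": ["OpenAi BasicTier"],
        "ConciergePrompt": ["Gemini 2.0-flash-lite-001"],  # no matching model definition
    },
}
print(check_method_mappings(config))  # {'ConciergePrompt': ['Gemini 2.0-flash-lite-001']}
```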

Step 6: JSON validation

  • Run a JSON linter/validator to ensure the file is correctly formatted.
  • Check commas, brackets, and quotation marks.
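Any JSON validator works here; as one option, Python's standard library can report the exact position of the first syntax error (a minimal sketch, reading the config as a string):

```python
import json

def validate_json(text):
    """Return 'valid JSON' or a message pointing at the first syntax error."""
    try:
        json.loads(text)
        return "valid JSON"
    except json.JSONDecodeError as e:
        return f"invalid JSON at line {e.lineno}, column {e.colno}: {e.msg}"

print(validate_json('{"ProviderModels": []}'))   # valid JSON
print(validate_json('{"ProviderModels": [],}'))  # invalid JSON (trailing comma), with position
```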

Step 7: Deploy / reload the application

  • After changes, restart the application so it reloads the configuration; changes do not take effect until a restart.
  • Verify the deployment pipeline or service restart procedure for the target environment.

Common errors and troubleshooting

Real API keys are placed in aiconfiguration instead of Key Vault secret names.

Symptoms:

  • security risk (secrets end up in the repository),
  • no ability to centrally manage secrets.

Resolution:

  • replace the plain-text value with the corresponding Key Vault secret name,
  • store the actual key in Key Vault.

A mistake in the model name causes a mismatch in MethodTypesConfiguration.

Symptoms:

  • model selection fails or behaves unexpectedly,
  • log errors indicating a missing model.

Resolution:

  • ensure the strings match exactly,
  • redeploy / reload the configuration after the change.

ConnectionName does not correspond to a key in ProviderConnections.

Symptoms:

  • model selection cannot find provider settings,
  • connector initialization errors.

Resolution:

  • fix ConnectionName or add the provider entry to ProviderConnections.

An incorrect provider Type value is used (e.g., Gemini vs Vertex when the connector expects a specific value).

Symptoms:

  • the connector factory cannot construct the provider,
  • initialization errors during application startup.

Resolution:

  • use a provider type supported by the connector implementation,
  • if in doubt, check the Modules/AiConnector code.

Support runbook (step by step)

  1. Open the environment-specific configuration file (e.g., aiconfiguration.Staging.json).
  2. Run JSON validation.
  3. Verify each ProviderConfiguration property value against the corresponding entries in Key Vault.
  4. Check model names and method-type mappings for typos.
  5. If changes are required, update the JSON file in the environment’s repository branch and create a PR.
  6. After the merge, ensure the deployment pipeline runs and the service restarts (or restart it manually).
  7. Monitor logs for errors: Key Vault access, connector initialization, and model selection warnings.

Best practices

  • Always use Key Vault - do not store secrets in configuration files.
  • Document changes - include configuration change notes in the commit message.
  • Test in staging - validate configuration changes in the staging environment.
  • Monitor after deployment - review logs and metrics after every configuration change.
  • Versioning - maintain a change history in version control.
  • Backup - keep backups of known-good configurations.