@@ -18,19 +18,18 @@ Root configuration for AI models
 #### AmazonBedrockChatModelConfig *(provider=`AMAZON_BEDROCK`)*
 | Name | Type | Description | Notes |
 |------|------|-------------|-------|
-| modelType | AiModelType |  | [optional] [readonly] |
 | providerConfig | AmazonBedrockProviderConfig |  |  |
 | modelId | String |  |  |
 | temperature | Double |  | [optional] |
 | topP | Double |  | [optional] |
 | maxOutputTokens | Integer |  | [optional] |
 | timeoutSeconds | Integer |  | [optional] |
 | maxRetries | Integer |  | [optional] |
+| modelType | AiModelType |  | [optional] [readonly] |
 
 #### AnthropicChatModelConfig *(provider=`ANTHROPIC`)*
 | Name | Type | Description | Notes |
 |------|------|-------------|-------|
-| modelType | AiModelType |  | [optional] [readonly] |
 | providerConfig | AnthropicProviderConfig |  |  |
 | modelId | String |  |  |
 | temperature | Double |  | [optional] |
@@ -39,11 +38,11 @@ Root configuration for AI models
 | maxOutputTokens | Integer |  | [optional] |
 | timeoutSeconds | Integer |  | [optional] |
 | maxRetries | Integer |  | [optional] |
+| modelType | AiModelType |  | [optional] [readonly] |
 
 #### AzureOpenAiChatModelConfig *(provider=`AZURE_OPENAI`)*
 | Name | Type | Description | Notes |
 |------|------|-------------|-------|
-| modelType | AiModelType |  | [optional] [readonly] |
 | providerConfig | AzureOpenAiProviderConfig |  |  |
 | modelId | String |  |  |
 | temperature | Double |  | [optional] |
@@ -53,11 +52,11 @@ Root configuration for AI models
 | maxOutputTokens | Integer |  | [optional] |
 | timeoutSeconds | Integer |  | [optional] |
 | maxRetries | Integer |  | [optional] |
+| modelType | AiModelType |  | [optional] [readonly] |
 
 #### GitHubModelsChatModelConfig *(provider=`GITHUB_MODELS`)*
 | Name | Type | Description | Notes |
 |------|------|-------------|-------|
-| modelType | AiModelType |  | [optional] [readonly] |
 | providerConfig | GitHubModelsProviderConfig |  |  |
 | modelId | String |  |  |
 | temperature | Double |  | [optional] |
@@ -67,11 +66,11 @@ Root configuration for AI models
 | maxOutputTokens | Integer |  | [optional] |
 | timeoutSeconds | Integer |  | [optional] |
 | maxRetries | Integer |  | [optional] |
+| modelType | AiModelType |  | [optional] [readonly] |
 
 #### GoogleAiGeminiChatModelConfig *(provider=`GOOGLE_AI_GEMINI`)*
 | Name | Type | Description | Notes |
 |------|------|-------------|-------|
-| modelType | AiModelType |  | [optional] [readonly] |
 | providerConfig | GoogleAiGeminiProviderConfig |  |  |
 | modelId | String |  |  |
 | temperature | Double |  | [optional] |
@@ -82,11 +81,11 @@ Root configuration for AI models
 | maxOutputTokens | Integer |  | [optional] |
 | timeoutSeconds | Integer |  | [optional] |
 | maxRetries | Integer |  | [optional] |
+| modelType | AiModelType |  | [optional] [readonly] |
 
 #### GoogleVertexAiGeminiChatModelConfig *(provider=`GOOGLE_VERTEX_AI_GEMINI`)*
 | Name | Type | Description | Notes |
 |------|------|-------------|-------|
-| modelType | AiModelType |  | [optional] [readonly] |
 | providerConfig | GoogleVertexAiGeminiProviderConfig |  |  |
 | modelId | String |  |  |
 | temperature | Double |  | [optional] |
@@ -97,11 +96,11 @@ Root configuration for AI models
 | maxOutputTokens | Integer |  | [optional] |
 | timeoutSeconds | Integer |  | [optional] |
 | maxRetries | Integer |  | [optional] |
+| modelType | AiModelType |  | [optional] [readonly] |
 
 #### MistralAiChatModelConfig *(provider=`MISTRAL_AI`)*
 | Name | Type | Description | Notes |
 |------|------|-------------|-------|
-| modelType | AiModelType |  | [optional] [readonly] |
 | providerConfig | MistralAiProviderConfig |  |  |
 | modelId | String |  |  |
 | temperature | Double |  | [optional] |
@@ -111,11 +110,11 @@ Root configuration for AI models
 | maxOutputTokens | Integer |  | [optional] |
 | timeoutSeconds | Integer |  | [optional] |
 | maxRetries | Integer |  | [optional] |
+| modelType | AiModelType |  | [optional] [readonly] |
 
 #### OllamaChatModelConfig *(provider=`OLLAMA`)*
 | Name | Type | Description | Notes |
 |------|------|-------------|-------|
-| modelType | AiModelType |  | [optional] [readonly] |
 | providerConfig | OllamaProviderConfig |  |  |
 | modelId | String |  |  |
 | temperature | Double |  | [optional] |
@@ -125,11 +124,11 @@ Root configuration for AI models
 | maxOutputTokens | Integer |  | [optional] |
 | timeoutSeconds | Integer |  | [optional] |
 | maxRetries | Integer |  | [optional] |
+| modelType | AiModelType |  | [optional] [readonly] |
 
 #### OpenAiChatModelConfig *(provider=`OPENAI`)*
 | Name | Type | Description | Notes |
 |------|------|-------------|-------|
-| modelType | AiModelType |  | [optional] [readonly] |
 | providerConfig | OpenAiProviderConfig |  |  |
 | modelId | String |  |  |
 | temperature | Double |  | [optional] |
@@ -139,19 +138,20 @@ Root configuration for AI models
 | maxOutputTokens | Integer |  | [optional] |
 | timeoutSeconds | Integer |  | [optional] |
 | maxRetries | Integer |  | [optional] |
+| modelType | AiModelType |  | [optional] [readonly] |
 
 ## Referenced Types
 
-#### AiModelType (enum)
-`CHAT`
-
 #### AmazonBedrockProviderConfig
 | Name | Type | Description | Notes |
 |------|------|-------------|-------|
 | region | String |  |  |
 | accessKeyId | String |  |  |
 | secretAccessKey | String |  |  |
 
+#### AiModelType (enum)
+`CHAT`
+
 #### AnthropicProviderConfig
 | Name | Type | Description | Notes |
 |------|------|-------------|-------|