Commit f622562

Merge branch 'main' into litellm_return-response_headers
2 parents: 82764d2 + 36cb63c

37 files changed: 570 additions & 89 deletions

docs/my-website/docs/providers/anthropic.md

Lines changed: 1 addition & 0 deletions

@@ -22,6 +22,7 @@ Anthropic API fails requests when `max_tokens` are not passed. Due to this litel

```python
import os

os.environ["ANTHROPIC_API_KEY"] = "your-api-key"
# os.environ["ANTHROPIC_API_BASE"] = "" # [OPTIONAL] or 'ANTHROPIC_BASE_URL'
```

## Usage
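The one-line change in this file adds an optional `ANTHROPIC_API_BASE` environment variable for pointing Anthropic traffic at a custom endpoint. A minimal sketch of setting these variables before calling litellm; the gateway URL below is a hypothetical placeholder, and the `completion` call is commented out because it needs live credentials:

```python
import os

# Required: Anthropic API key, read by litellm at call time.
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

# Optional: route requests through a custom base URL (hypothetical value);
# per the doc, 'ANTHROPIC_BASE_URL' works as an alternative variable name.
os.environ["ANTHROPIC_API_BASE"] = "https://my-anthropic-gateway.example.com"

# from litellm import completion
# response = completion(
#     model="claude-3-haiku-20240307",
#     messages=[{"role": "user", "content": "Hey!"}],
#     max_tokens=256,  # Anthropic rejects requests without max_tokens
# )
print(os.environ["ANTHROPIC_API_BASE"])
```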

docs/my-website/docs/providers/mistral.md

Lines changed: 104 additions & 0 deletions

@@ -1,3 +1,6 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Mistral AI API

https://docs.mistral.ai/api/
@@ -41,9 +44,106 @@ for chunk in response:

## Usage with LiteLLM Proxy

### 1. Set Mistral Models on config.yaml

```yaml
model_list:
  - model_name: mistral-small-latest
    litellm_params:
      model: mistral/mistral-small-latest
      api_key: "os.environ/MISTRAL_API_KEY" # ensure you have `MISTRAL_API_KEY` in your .env
```
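Conceptually, each `model_list` entry maps the public `model_name` clients send to the proxy onto the `litellm_params` used for the upstream call. A minimal sketch of that lookup, with the parsed config represented as a plain dict; the `resolve` helper is illustrative, not the proxy's actual implementation:

```python
# Parsed form of the config.yaml above, written out as a plain dict.
config = {
    "model_list": [
        {
            "model_name": "mistral-small-latest",
            "litellm_params": {
                "model": "mistral/mistral-small-latest",
                "api_key": "os.environ/MISTRAL_API_KEY",
            },
        }
    ]
}

def resolve(model_name: str) -> dict:
    """Return the litellm_params for a requested model_name (illustrative)."""
    for entry in config["model_list"]:
        if entry["model_name"] == model_name:
            return entry["litellm_params"]
    raise KeyError(f"no deployment named {model_name!r}")

params = resolve("mistral-small-latest")
print(params["model"])  # mistral/mistral-small-latest
```

The `mistral/` prefix in `litellm_params.model` is what tells litellm which provider to call; the `os.environ/...` value tells the proxy to read the key from the environment rather than storing it in the file.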
### 2. Start Proxy

```shell
litellm --config config.yaml
```

### 3. Test it

<Tabs>
<TabItem value="Curl" label="Curl Request">

```shell
curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
    "model": "mistral-small-latest",
    "messages": [
        {
            "role": "user",
            "content": "what llm are you"
        }
    ]
}'
```
</TabItem>
<TabItem value="openai" label="OpenAI v1.0.0+">

```python
import openai

client = openai.OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000"
)

response = client.chat.completions.create(
    model="mistral-small-latest",
    messages=[
        {"role": "user", "content": "this is a test request, write a short poem"}
    ]
)

print(response)
```
</TabItem>
<TabItem value="langchain" label="Langchain">

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(
    openai_api_base="http://0.0.0.0:4000",  # set openai_api_base to the LiteLLM Proxy
    model="mistral-small-latest",
    temperature=0.1,
)

messages = [
    SystemMessage(
        content="You are a helpful assistant that im using to make a test request to."
    ),
    HumanMessage(
        content="test from litellm. tell me why it's amazing in 1 sentence"
    ),
]
response = chat(messages)

print(response)
```
</TabItem>
</Tabs>

## Supported Models

:::info
All models listed here https://docs.mistral.ai/platform/endpoints are supported. We actively maintain the list of models, pricing, token window, etc. [here](https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json).
:::

| Model Name | Function Call |
|----------------|--------------------------------------------------------------|
| Mistral Small | `completion(model="mistral/mistral-small-latest", messages)` |
| Mixtral 8x7B | `completion(model="mistral/open-mixtral-8x7b", messages)` |
| Mixtral 8x22B | `completion(model="mistral/open-mixtral-8x22b", messages)` |
| Codestral | `completion(model="mistral/codestral-latest", messages)` |
| Mistral NeMo | `completion(model="mistral/open-mistral-nemo", messages)` |
| Mistral NeMo 2407 | `completion(model="mistral/open-mistral-nemo-2407", messages)` |
| Codestral Mamba | `completion(model="mistral/open-codestral-mamba", messages)` |
| Codestral Mamba | `completion(model="mistral/codestral-mamba-latest", messages)` |
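The newly added rows all go through the same `completion` call; only the `model` string changes. A hedged sketch iterating the model identifiers added in this commit; the actual `completion` call is commented out because it requires a live `MISTRAL_API_KEY`:

```python
# Model identifiers added in this commit; the `mistral/` prefix tells
# litellm to route the request to Mistral's API.
new_models = [
    "mistral/open-mistral-nemo",
    "mistral/open-mistral-nemo-2407",
    "mistral/open-codestral-mamba",
    "mistral/codestral-mamba-latest",
]

messages = [{"role": "user", "content": "write one haiku"}]

for model in new_models:
    # from litellm import completion
    # response = completion(model=model, messages=messages)  # needs MISTRAL_API_KEY
    print(model)
```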

## Function Calling
