
Commit 3e20453

Merge branch 'main' of https://github.com/MervinPraison/PraisonAI into develop
2 parents 460d515 + 27ed2e8

File tree

26 files changed (+8390 −25 lines)

docs/ui/realtime.mdx

Lines changed: 30 additions & 3 deletions

@@ -36,6 +36,18 @@ To use the Realtime Voice Interface, follow these steps:
    ```bash
    export OPENAI_API_KEY="your-api-key-here"
    ```
+
+   <Accordion title="Azure OpenAI Configuration">
+   To use Azure OpenAI instead of standard OpenAI, configure these environment variables:
+
+   ```bash
+   export OPENAI_API_KEY="your-azure-api-key"
+   export OPENAI_BASE_URL="https://your-resource.openai.azure.com/openai/deployments/your-deployment-name"
+   export OPENAI_MODEL_NAME="gpt-4o-realtime-preview"
+   ```
+
+   The realtime interface will automatically detect the base URL and adjust the WebSocket connection accordingly.
+   </Accordion>

 3. Launch the Realtime Voice Interface:
    ```bash
@@ -56,9 +68,24 @@ Once the interface is launched:

 You can configure various aspects of the Realtime Voice Interface:

-- Model selection: Choose different AI models for processing.
-- Voice settings: Adjust voice characteristics for the AI's speech output.
-- Audio settings: Configure input/output audio formats and quality.
+### Environment Variables
+
+| Variable | Description | Default |
+|----------|-------------|---------|
+| `OPENAI_API_KEY` | Your OpenAI or Azure OpenAI API key | Required |
+| `OPENAI_BASE_URL` | Custom base URL for OpenAI-compatible APIs (e.g., Azure OpenAI) | `https://api.openai.com/v1` |
+| `OPENAI_MODEL_NAME` | Model to use for the realtime API | `gpt-4o-mini-realtime-preview-2024-12-17` |
+
+### Model Selection
+Choose different AI models for processing. Supported models include:
+- `gpt-4o-realtime-preview`
+- `gpt-4o-mini-realtime-preview-2024-12-17`
+
+### Voice Settings
+Adjust voice characteristics for the AI's speech output through the session configuration.
+
+### Audio Settings
+Configure input/output audio formats and quality. The interface uses PCM16 format by default for optimal compatibility.

 ## Troubleshooting
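The diff above says the realtime interface "automatically detects" `OPENAI_BASE_URL` when building its WebSocket connection. A minimal sketch of what that detection could look like is below; `realtime_ws_url` is a hypothetical helper written for illustration, not the project's actual implementation.

```python
import os

def realtime_ws_url(model: str) -> str:
    # Hypothetical helper: derives a realtime WebSocket endpoint from
    # OPENAI_BASE_URL, falling back to the standard OpenAI API base.
    # The real interface's detection logic may differ.
    base = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
    # Swap the HTTP scheme for the WebSocket one and append the realtime path.
    ws_base = base.replace("https://", "wss://").replace("http://", "ws://")
    return f"{ws_base.rstrip('/')}/realtime?model={model}"

# With no OPENAI_BASE_URL set, this points at the standard OpenAI endpoint;
# with an Azure-style base URL, the same code targets the Azure deployment.
print(realtime_ws_url("gpt-4o-mini-realtime-preview-2024-12-17"))
```

The same pattern covers both standard and Azure setups: only the environment variables change, not the code.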

examples/cookbooks/DuckDuckGo_PraisonAI_Agent_Notebook.ipynb

Lines changed: 1523 additions & 0 deletions
Large diffs are not rendered by default.
Lines changed: 314 additions & 0 deletions

@@ -0,0 +1,314 @@
{
  "cells": [
    {
      "cell_type": "markdown",
      "id": "7a6ed531",
      "metadata": {
        "id": "7a6ed531"
      },
      "source": [
        "# AgentWorkflow & FunctionAgent Beginner Guide\n",
        "\n",
        "This notebook walks you through setting up and using a basic `AgentWorkflow` with a single `FunctionAgent` using the `llama-index` framework."
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhivyaBharathy-web/PraisonAI/blob/main/examples/cookbooks/FunctionAgent_Workflow.ipynb)"
      ],
      "metadata": {
        "id": "vR_DwtA9kwNX"
      },
      "id": "vR_DwtA9kwNX"
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Dependencies"
      ],
      "metadata": {
        "id": "xrCnRE5uhLZB"
      },
      "id": "xrCnRE5uhLZB"
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "05277ffb",
      "metadata": {
        "id": "05277ffb"
      },
      "outputs": [],
      "source": [
        "%pip install llama-index tavily-python"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "49f41051",
      "metadata": {
        "id": "49f41051"
      },
      "source": [
        "## Setup OpenAI LLM"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "id": "456794b8",
      "metadata": {
        "id": "456794b8"
      },
      "outputs": [],
      "source": [
        "from llama_index.llms.openai import OpenAI\n",
        "\n",
        "llm = OpenAI(model=\"gpt-4o-mini\", api_key=\"Enter your api key here\") # Replace with your OpenAI API key\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "4923571f",
      "metadata": {
        "id": "4923571f"
      },
      "source": [
        "## Define Web Search Tool"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "id": "6e5f265a",
      "metadata": {
        "id": "6e5f265a"
      },
      "outputs": [],
      "source": [
        "from tavily import AsyncTavilyClient\n",
        "\n",
        "async def search_web(query: str) -> str:\n",
        "    \"\"\"Useful for using the web to answer questions.\"\"\"\n",
        "    client = AsyncTavilyClient(api_key=\"Enter your api key here\") # Replace with your Tavily API key\n",
        "    return str(await client.search(query))\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "a4c5f890",
      "metadata": {
        "id": "a4c5f890"
      },
      "source": [
        "## Create FunctionAgent"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "id": "d5f552ec",
      "metadata": {
        "id": "d5f552ec"
      },
      "outputs": [],
      "source": [
        "from llama_index.core.agent.workflow import FunctionAgent\n",
        "\n",
        "agent = FunctionAgent(\n",
        "    tools=[search_web],\n",
        "    llm=llm,\n",
        "    system_prompt=\"You are a helpful assistant that can search the web for information.\",\n",
        ")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "5d7b4245",
      "metadata": {
        "id": "5d7b4245"
      },
      "source": [
        "## Run the Agent"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 15,
      "id": "49b31603",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "49b31603",
        "outputId": "a729c081-a9a0-4019-8b79-5bfdc395f9ce"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "The current weather in San Francisco is as follows:\n",
            "\n",
            "- **Temperature**: 13.3°C (55.9°F)\n",
            "- **Condition**: Mist\n",
            "- **Wind**: 8.3 mph (13.3 kph) from the WSW\n",
            "- **Humidity**: 90%\n",
            "- **Visibility**: 16 km (9 miles)\n",
            "- **Feels Like**: 12.2°C (53.9°F)\n",
            "\n",
            "For more details, you can check the [Weather API](https://www.weatherapi.com/).\n"
          ]
        }
      ],
      "source": [
        "response = await agent.run(user_msg=\"What is the weather in San Francisco?\")\n",
        "print(str(response))\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "93c85265",
      "metadata": {
        "id": "93c85265"
      },
      "source": [
        "## Use AgentWorkflow"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 16,
      "id": "a303658b",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "a303658b",
        "outputId": "fcd07905-5300-4cb3-80c1-fb2bf0addc20"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "The current weather in San Francisco is as follows:\n",
            "\n",
            "- **Temperature**: 13.3°C (55.9°F)\n",
            "- **Condition**: Mist\n",
            "- **Wind**: 8.3 mph (13.3 kph) from the WSW\n",
            "- **Humidity**: 90%\n",
            "- **Visibility**: 16 km (9 miles)\n",
            "- **Feels Like**: 12.2°C (53.9°F)\n",
            "\n",
            "For more details, you can check the [Weather API](https://www.weatherapi.com/).\n"
          ]
        }
      ],
      "source": [
        "from llama_index.core.agent.workflow import AgentWorkflow\n",
        "\n",
        "workflow = AgentWorkflow(agents=[agent])\n",
        "response = await workflow.run(user_msg=\"What is the weather in San Francisco?\")\n",
        "print(str(response))\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "a9e1ed26",
      "metadata": {
        "id": "a9e1ed26"
      },
      "source": [
        "## Maintain Context State"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "id": "c1ba228f",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "c1ba228f",
        "outputId": "e7787f4a-f1fb-438c-bb11-f1ba243c3455"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Nice to meet you, Logan! How can I assist you today?\n",
            "Your name is Logan.\n"
          ]
        }
      ],
      "source": [
        "from llama_index.core.workflow import Context\n",
        "\n",
        "ctx = Context(agent)\n",
        "response = await agent.run(user_msg=\"My name is Logan, nice to meet you!\", ctx=ctx)\n",
        "print(str(response))\n",
        "\n",
        "response = await agent.run(user_msg=\"What is my name?\", ctx=ctx)\n",
        "print(str(response))\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "97ec9b2f",
      "metadata": {
        "id": "97ec9b2f"
      },
      "source": [
        "## Serialize Context"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "id": "21aa311f",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "21aa311f",
        "outputId": "f8c231d6-19d0-415d-e27f-c67ad2b7f684"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Yes, I remember your name is Logan.\n"
          ]
        }
      ],
      "source": [
        "from llama_index.core.workflow import JsonSerializer\n",
        "\n",
        "ctx_dict = ctx.to_dict(serializer=JsonSerializer())\n",
        "restored_ctx = Context.from_dict(agent, ctx_dict, serializer=JsonSerializer())\n",
        "\n",
        "response = await agent.run(user_msg=\"Do you still remember my name?\", ctx=restored_ctx)\n",
        "print(str(response))\n"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "language_info": {
      "name": "python"
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}

0 commit comments
