# UiPath LlamaIndex Template Agent

A quickstart UiPath LlamaIndex agent. It answers user queries using live tools and supports multiple LLM providers.

> **Docs:** [uipath-llamaindex quick start](https://uipath.github.io/uipath-python/llamaindex/quick_start/) — **Samples:** [uipath-llamaindex/samples](https://github.com/UiPath/uipath-integrations-python/tree/main/packages/uipath-llamaindex/samples)

## What it does

1. **Prepares** the conversation — injects a system prompt and the user question into the workflow context
2. **Runs a ReAct agent step** that autonomously decides which tools to call and in what order
3. **Postprocesses** — validates the response and truncates it if it exceeds the configured maximum length

### Tools

| Tool | Description |
| ------------------ | ------------------------------------------------ |
| `get_current_time` | Returns the current UTC date and time (ISO 8601) |
| `get_weather` | Returns weather data for a city (mock data) |

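The two tools could be implemented roughly as follows. This is a sketch under stated assumptions: the mock cities and return shape are invented for illustration, and the template's actual tool bodies may differ.

```python
from datetime import datetime, timezone

# Hypothetical mock dataset backing get_weather.
MOCK_WEATHER = {
    "london": {"condition": "cloudy", "temperature_c": 12},
    "paris": {"condition": "sunny", "temperature_c": 18},
}


def get_current_time() -> str:
    """Return the current UTC date and time in ISO 8601 format."""
    return datetime.now(timezone.utc).isoformat()


def get_weather(city: str) -> dict:
    """Return mock weather data for a city (unknown cities get a fallback)."""
    return MOCK_WEATHER.get(city.lower(), {"condition": "unknown", "temperature_c": None})
```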
### LLM Providers

The template defaults to **Claude Haiku 4.5** via `UiPathChatBedrockConverse`. To switch providers, edit `main.py`:

```python
# Choose your LLM provider by uncommenting one of the following:
llm = UiPathChatBedrockConverse(model=BedrockModel.anthropic_claude_haiku_4_5)
# llm = UiPathOpenAI(model=OpenAIModel.GPT_4_1_MINI_2025_04_14.value)
# llm = UiPathVertex(model=GeminiModel.gemini_2_5_flash)
```

## Workflow

```mermaid
flowchart TD
    START --> prepare
    prepare --> react_agent
    react_agent -->|tool calls| tool_executor
    tool_executor --> react_agent
    react_agent -->|final| postprocess
    postprocess --> END
```

## Input / Output

```jsonc
// Input
{
  "question": "What's the weather like in London?"
}

// Output
{
  "response": "..."
}
```

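The I/O contract above amounts to two one-field models. A hypothetical sketch with stdlib dataclasses (the template may declare these with Pydantic instead, and the class names are assumptions):

```python
from dataclasses import dataclass


@dataclass
class AgentInput:
    """Payload read from input.json."""
    question: str


@dataclass
class AgentOutput:
    """Payload written to output.json."""
    response: str
```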
## Running locally

```bash
# Run
uv run uipath run agent --input-file input.json --output-file output.json

# Debug with dynamic node breakpoints
uv run uipath debug agent --input-file input.json --output-file output.json
```

## Evaluation

The agent ships with two evaluators: a tool-call-order evaluator that verifies the ReAct step calls `get_current_time` **before** `get_weather` when given a time-and-weather query, and an LLM judge that scores the weather answer for semantic similarity to the expected output.

```bash
uv run uipath eval
```
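The ordering check boils down to comparing positions in the agent's tool-call trace. A minimal sketch of that logic (the function name and the list-of-names trace format are assumptions; the real evaluator runs inside the UiPath eval harness):

```python
def time_called_before_weather(tool_calls: list[str]) -> bool:
    """Return True if get_current_time precedes get_weather in the trace."""
    if "get_current_time" not in tool_calls or "get_weather" not in tool_calls:
        # Fail if either tool was never called at all.
        return False
    return tool_calls.index("get_current_time") < tool_calls.index("get_weather")
```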