Commit 39e442f

Merge pull request #312 from UiPath/feat/llamaindex-template-input-question-desc

feat: rename llamaindex template input field to question with description

2 parents: 410cdd9 + 1ce0d8c

5 files changed: 85 additions & 8 deletions

Lines changed: 73 additions & 0 deletions (new file)
# UiPath LlamaIndex Template Agent

A quickstart UiPath LlamaIndex agent. It answers user queries using live tools and supports multiple LLM providers.

> **Docs:** [uipath-llamaindex quick start](https://uipath.github.io/uipath-python/llamaindex/quick_start/) · **Samples:** [uipath-llamaindex/samples](https://github.com/UiPath/uipath-integrations-python/tree/main/packages/uipath-llamaindex/samples)
## What it does

1. **Prepares** the conversation — injects a system prompt and the user question into workflow context
2. **Runs a ReAct agent step** that autonomously decides which tools to call and in what order
3. **Postprocesses** — validates and truncates the response if it exceeds the configured max length
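The truncation in the postprocess step can be sketched as a small helper. This is an illustrative assumption: `MAX_RESPONSE_LENGTH` and `truncate_response` are hypothetical names, not the template's actual code.

```python
# Hypothetical sketch of the postprocess truncation; the limit value
# and helper name are assumptions, not taken from the template.
MAX_RESPONSE_LENGTH = 2000

def truncate_response(text: str, limit: int = MAX_RESPONSE_LENGTH) -> str:
    """Return the text unchanged if within the limit, otherwise cut it
    and append an ellipsis so the result is exactly `limit` characters."""
    if len(text) <= limit:
        return text
    return text[: limit - 1] + "…"
```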
### Tools

| Tool               | Description                                      |
| ------------------ | ------------------------------------------------ |
| `get_current_time` | Returns the current UTC date and time (ISO 8601) |
| `get_weather`      | Returns weather data for a city (mock data)      |
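The two tools can be sketched as follows. The `get_weather` signature matches the one shown in the template's `main.py` diff below; both bodies and the mock values are illustrative assumptions, not the template's actual implementation.

```python
from datetime import datetime, timezone

def get_current_time() -> str:
    """Return the current UTC date and time in ISO 8601 format."""
    return datetime.now(timezone.utc).isoformat()

def get_weather(city: str, utc_time: str) -> str:
    """Return mock weather data for a city; the values here are invented
    placeholders, not the template's actual mock data."""
    mock = {"London": "12°C, light rain", "Paris": "16°C, partly cloudy"}
    return f"{mock.get(city, '15°C, clear skies')} (as of {utc_time})"
```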
### LLM Providers

The template defaults to **Claude Haiku 4.5** via `UiPathChatBedrockConverse`. To switch providers, edit `main.py`:

```python
# Choose your LLM provider by uncommenting one of the following:
llm = UiPathChatBedrockConverse(model=BedrockModel.anthropic_claude_haiku_4_5)
# llm = UiPathOpenAI(model=OpenAIModel.GPT_4_1_MINI_2025_04_14.value)
# llm = UiPathVertex(model=GeminiModel.gemini_2_5_flash)
```
## Workflow

```mermaid
flowchart TD
    START --> prepare
    prepare --> react_agent
    react_agent -->|tool calls| tool_executor
    tool_executor --> react_agent
    react_agent -->|final| postprocess
    postprocess --> END
```
## Input / Output

Input:

```json
{
  "question": "What's the weather like in London?"
}
```

Output:

```json
{
  "response": "..."
}
```
## Running locally

```bash
# Run
uv run uipath run agent --input-file input.json --output-file output.json

# Debug with dynamic node breakpoints
uv run uipath debug agent --input-file input.json --output-file output.json
```
## Evaluation

The agent ships with a tool call order evaluator that verifies the ReAct step calls `get_current_time` **before** `get_weather` when given a time-and-weather query, and an LLM judge that checks weather output for semantic similarity.

```bash
uv run uipath eval
```
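The ordering check the evaluator performs can be approximated in a few lines. This is a sketch under the assumption that tool calls are recorded as an ordered list of tool names; it is not the evaluator's actual implementation.

```python
def tools_called_in_order(calls: list[str], first: str, second: str) -> bool:
    """True when both tools were called and the first occurrence of
    `first` precedes the first occurrence of `second`."""
    if first not in calls or second not in calls:
        return False
    return calls.index(first) < calls.index(second)
```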

packages/uipath-llamaindex/template/entry-points.json

Lines changed: 5 additions & 4 deletions
```diff
@@ -4,18 +4,19 @@
   "entryPoints": [
     {
       "filePath": "agent",
-      "uniqueId": "d64050f7-add5-4197-91f2-7b9cf3187751",
+      "uniqueId": "9016cb4a-25b4-44d3-8ace-08c3fea5316e",
       "type": "agent",
       "input": {
         "type": "object",
         "properties": {
-          "query": {
-            "title": "Query",
+          "question": {
+            "description": "Question for the assistant, e.g. 'What's the weather in Paris?'",
+            "title": "Question",
             "type": "string"
           }
         },
         "required": [
-          "query"
+          "question"
         ]
       },
       "output": {
```

packages/uipath-llamaindex/template/evaluations/eval-sets/evaluation-set-default.json

Lines changed: 1 addition & 1 deletion
```diff
@@ -13,7 +13,7 @@
       "id": "ada5a2c1-976c-470b-964f-eb70a5e61eb4",
       "name": "Weather in Paris",
       "inputs": {
-        "query": "Is it good weather for a walk in Paris?"
+        "question": "Is it good weather for a walk in Paris?"
       },
       "evaluationCriterias": {
         "evaluator-llm-judge-output": {
```
Lines changed: 1 addition & 1 deletion

```diff
@@ -1,3 +1,3 @@
 {
-  "query": "What's the weather like in London?"
+  "question": "What's the weather like in London?"
 }
```

packages/uipath-llamaindex/template/main.py

Lines changed: 5 additions & 2 deletions
```diff
@@ -10,6 +10,7 @@
     Workflow,
     step,
 )
+from pydantic import Field

 from uipath_llamaindex.llms import BedrockModel, GeminiModel, OpenAIModel, UiPathOpenAI
 from uipath_llamaindex.llms.bedrock import UiPathChatBedrockConverse
@@ -63,7 +64,9 @@ def get_weather(city: str, utc_time: str) -> str:


 class QueryEvent(StartEvent):
-    query: str
+    question: str = Field(
+        description="Question for the assistant, e.g. 'What's the weather in Paris?'"
+    )


 class LLMInputEvent(Event):
@@ -87,7 +90,7 @@ class TemplateAgent(Workflow):
     async def prepare(self, ctx: Context, ev: QueryEvent) -> LLMInputEvent:
         await ctx.store.set("messages", [
             ChatMessage(role="system", content=SYSTEM_PROMPT),
-            ChatMessage(role="user", content=ev.query),
+            ChatMessage(role="user", content=ev.question),
         ])
         return LLMInputEvent()

```
