
Commit c917249

feat: rename llamaindex template input field to question with description
- rename input field `query` to `question` and add a pydantic Field description so the FE renders a helpful hint
- run.sh in testcases exercises each llm provider by uncommenting them one at a time
1 parent 410cdd9 commit c917249

6 files changed

Lines changed: 103 additions & 49 deletions


Lines changed: 73 additions & 0 deletions
@@ -0,0 +1,73 @@
# UiPath LlamaIndex Template Agent

A quickstart UiPath LlamaIndex agent. It answers user questions using live tools and supports multiple LLM providers.

> **Docs:** [uipath-llamaindex quick start](https://uipath.github.io/uipath-python/llamaindex/quick_start/) · **Samples:** [uipath-llamaindex/samples](https://github.com/UiPath/uipath-integrations-python/tree/main/packages/uipath-llamaindex/samples)

## What it does

1. **Prepares** the conversation — injects a system prompt and the user question into the workflow context
2. **Runs a ReAct agent step** that autonomously decides which tools to call and in what order
3. **Postprocesses** — validates the response and truncates it if it exceeds the configured max length

### Tools

| Tool               | Description                                      |
| ------------------ | ------------------------------------------------ |
| `get_current_time` | Returns the current UTC date and time (ISO 8601) |
| `get_weather`      | Returns weather data for a city (mock data)      |

### LLM Providers

The template defaults to **Claude Haiku 4.5** via `UiPathChatBedrockConverse`. To switch providers, edit `main.py`:

```python
# Choose your LLM provider by uncommenting one of the following:
llm = UiPathChatBedrockConverse(model=BedrockModel.anthropic_claude_haiku_4_5)
# llm = UiPathOpenAI(model=OpenAIModel.GPT_4_1_MINI_2025_04_14.value)
# llm = UiPathVertex(model=GeminiModel.gemini_2_5_flash)
```

## Workflow

```mermaid
flowchart TD
    START --> prepare
    prepare --> react_agent
    react_agent -->|tool calls| tool_executor
    tool_executor --> react_agent
    react_agent -->|final| postprocess
    postprocess --> END
```

## Input / Output

```json
// Input
{
  "question": "What's the weather like in London?"
}

// Output
{
  "response": "..."
}
```

## Running locally

```bash
# Run
uv run uipath run agent --input-file input.json --output-file output.json

# Debug with dynamic node breakpoints
uv run uipath debug agent --input-file input.json --output-file output.json
```

## Evaluation

The agent ships with a tool call order evaluator that verifies the ReAct step calls `get_current_time` **before** `get_weather` when given a time-and-weather query, and an LLM judge that checks weather output for semantic similarity.

```bash
uv run uipath eval
```
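The postprocess step above (validate, then truncate past a configured max length) can be sketched as follows. This is an illustrative standalone function, not the template's actual code; `MAX_RESPONSE_LENGTH` and the helper name are hypothetical.

```python
MAX_RESPONSE_LENGTH = 4000  # hypothetical limit; the template's configured value may differ

def postprocess_response(text: str, max_len: int = MAX_RESPONSE_LENGTH) -> str:
    """Validate the LLM response and truncate it if it exceeds max_len."""
    if not isinstance(text, str) or not text.strip():
        raise ValueError("empty or non-string response from the agent")
    if len(text) > max_len:
        # Keep the result exactly max_len characters, ending with an ellipsis marker
        return text[: max_len - 3] + "..."
    return text

print(postprocess_response("short answer"))  # passes through unchanged
```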

packages/uipath-llamaindex/template/entry-points.json

Lines changed: 5 additions & 4 deletions
@@ -4,18 +4,19 @@
   "entryPoints": [
     {
       "filePath": "agent",
-      "uniqueId": "d64050f7-add5-4197-91f2-7b9cf3187751",
+      "uniqueId": "9016cb4a-25b4-44d3-8ace-08c3fea5316e",
       "type": "agent",
       "input": {
         "type": "object",
         "properties": {
-          "query": {
-            "title": "Query",
+          "question": {
+            "description": "Question for the assistant, e.g. 'What's the weather in Paris?'",
+            "title": "Question",
             "type": "string"
           }
         },
         "required": [
-          "query"
+          "question"
         ]
       },
       "output": {

packages/uipath-llamaindex/template/evaluations/eval-sets/evaluation-set-default.json

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@
       "id": "ada5a2c1-976c-470b-964f-eb70a5e61eb4",
       "name": "Weather in Paris",
       "inputs": {
-        "query": "Is it good weather for a walk in Paris?"
+        "question": "Is it good weather for a walk in Paris?"
       },
       "evaluationCriterias": {
         "evaluator-llm-judge-output": {
Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
 {
-  "query": "What's the weather like in London?"
+  "question": "What's the weather like in London?"
 }

packages/uipath-llamaindex/template/main.py

Lines changed: 5 additions & 2 deletions
@@ -10,6 +10,7 @@
     Workflow,
     step,
 )
+from pydantic import Field
 
 from uipath_llamaindex.llms import BedrockModel, GeminiModel, OpenAIModel, UiPathOpenAI
 from uipath_llamaindex.llms.bedrock import UiPathChatBedrockConverse
@@ -63,7 +64,9 @@ def get_weather(city: str, utc_time: str) -> str:
 
 
 class QueryEvent(StartEvent):
-    query: str
+    question: str = Field(
+        description="Question for the assistant, e.g. 'What's the weather in Paris?'"
+    )
 
 
 class LLMInputEvent(Event):
@@ -87,7 +90,7 @@ class TemplateAgent(Workflow):
     async def prepare(self, ctx: Context, ev: QueryEvent) -> LLMInputEvent:
         await ctx.store.set("messages", [
             ChatMessage(role="system", content=SYSTEM_PROMPT),
-            ChatMessage(role="user", content=ev.query),
+            ChatMessage(role="user", content=ev.question),
         ])
         return LLMInputEvent()
 
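The commit's FE hint relies on pydantic propagating a `Field` description into the generated JSON schema, which is what schema-driven forms read. A minimal sketch with a standalone model (not the template's actual `QueryEvent`, which extends `StartEvent`):

```python
from pydantic import BaseModel, Field

class QuestionInput(BaseModel):
    # Hypothetical standalone model mirroring the renamed field
    question: str = Field(
        description="Question for the assistant, e.g. 'What's the weather in Paris?'"
    )

schema = QuestionInput.model_json_schema()
# The description lands on the property, so a schema-driven UI can render it as a hint
print(schema["properties"]["question"]["description"])
print(schema["required"])  # ['question']
```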

packages/uipath-llamaindex/testcases/template-agent/run.sh

Lines changed: 18 additions & 41 deletions
@@ -19,49 +19,26 @@ uv run uipath auth --client-id="$CLIENT_ID" --client-secret="$CLIENT_SECRET" --b
 echo "Initializing the project..."
 uv run uipath init
 
-run_agent() {
-    local extra_args="$1"
-    if uv run uipath run agent --file input.json $extra_args 2>&1; then
-        return 0
-    else
-        if uv run uipath run agent --file input.json $extra_args 2>&1 | grep -q "timed out"; then
-            return 1
-        fi
-        # non-timeout error, fail immediately
-        return 2
-    fi
-}
+# Test each LLM provider from main.py by uncommenting them one at a time.
+PROVIDERS=(
+    "UiPathChatBedrockConverse"
+    "UiPathOpenAI"
+    "UiPathVertex"
+)
+
+for provider in "${PROVIDERS[@]}"; do
+    echo "===== Testing provider: $provider ====="
+    sed -i -E 's/^llm = /# llm = /' main.py
+    sed -i -E "s|^# llm = ${provider}\(|llm = ${provider}(|" main.py
 
-try_with_fallback() {
-    local extra_args="$1"
-    echo "Running agent with Bedrock provider..."
-    if uv run uipath run agent --file input.json $extra_args; then
-        return 0
-    fi
-
-    echo "⚠ Bedrock provider timed out or failed, switching to OpenAI fallback..."
-    sed -i 's/^llm = UiPathChatBedrockConverse.*/# &/' main.py
-    sed -i 's/^# llm = UiPathOpenAI/llm = UiPathOpenAI/' main.py
     uv run uipath init
 
-    if uv run uipath run agent --file input.json $extra_args; then
-        return 0
-    fi
-
-    echo "⚠ OpenAI provider also failed, trying Vertex fallback..."
-    sed -i 's/^llm = UiPathOpenAI.*/# &/' main.py
-    sed -i 's/^# llm = UiPathVertex/llm = UiPathVertex/' main.py
-    uv run uipath init
-
-    uv run uipath run agent --file input.json $extra_args
-}
-
-echo "Running agent..."
-try_with_fallback ""
+    echo "Running agent with $provider..."
+    uv run uipath run agent --input-file input.json --output-file output.json
 
-echo "Running agent again with empty UIPATH_JOB_KEY..."
-export UIPATH_JOB_KEY=""
-try_with_fallback "--trace-file .uipath/traces.jsonl" >> local_run_output.log
+    echo "Running agent again with empty UIPATH_JOB_KEY ($provider)..."
+    UIPATH_JOB_KEY="" uv run uipath run agent --trace-file .uipath/traces.jsonl --input-file input.json --output-file output.json >> local_run_output.log
 
-echo "Running evaluation..."
-uv run uipath eval --no-report --output-file eval_output.json
+    echo "Running evaluation with $provider..."
+    uv run uipath eval --no-report --output-file "eval_output_${provider}.json"
+done
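The provider toggle in the loop works in two passes: first comment out whichever `llm = ...` line is active, then uncomment the target provider. A minimal reproduction against a stand-in file (the `/tmp/toggle_demo.py` path is illustrative, and GNU sed is assumed for `-i`/`-E`):

```shell
# Stand-in main.py with one active and two commented provider lines
cat > /tmp/toggle_demo.py <<'EOF'
llm = UiPathChatBedrockConverse(model=BedrockModel.anthropic_claude_haiku_4_5)
# llm = UiPathOpenAI(model=OpenAIModel.GPT_4_1_MINI_2025_04_14.value)
# llm = UiPathVertex(model=GeminiModel.gemini_2_5_flash)
EOF

provider="UiPathOpenAI"
# Pass 1: comment out the currently active provider line
sed -i -E 's/^llm = /# llm = /' /tmp/toggle_demo.py
# Pass 2: uncomment the target provider line (\( matches the literal paren,
# which keeps e.g. UiPathOpenAI from also matching a longer name)
sed -i -E "s|^# llm = ${provider}\(|llm = ${provider}(|" /tmp/toggle_demo.py

grep '^llm = ' /tmp/toggle_demo.py  # exactly one active line remains
```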
