Commit a89feb5

docs: update examples from notebooks (#1078)

1 file changed, 136 additions & 0 deletions
---
title: 'LlamaIndex Example'
description: 'LlamaIndex AgentOps Integration Example'
---

{/* SOURCE_FILE: examples/llamaindex_examples/llamaindex_example.ipynb */}

_View Notebook on <a href={'https://github.com/AgentOps-AI/agentops/blob/main/examples/llamaindex_examples/llamaindex_example.ipynb'} target={'_blank'}>Github</a>_

# LlamaIndex AgentOps Integration Example

This notebook demonstrates how to use AgentOps with LlamaIndex for observability and monitoring of your context-augmented generative AI applications.

## Setup

First, install the required packages:

```python
# Install required packages
!pip install agentops llama-index-instrumentation-agentops llama-index-embeddings-huggingface llama-index-llms-huggingface python-dotenv
```

## Initialize AgentOps Handler

Set up the AgentOps handler for LlamaIndex instrumentation:

```python
import os

from dotenv import load_dotenv
from llama_index.core import VectorStoreIndex, Document, Settings
from llama_index.instrumentation.agentops import AgentOpsHandler

# Load environment variables first, so the API keys are available before AgentOps starts
load_dotenv()

# Set API keys (replace with your actual keys)
os.environ["AGENTOPS_API_KEY"] = os.getenv("AGENTOPS_API_KEY", "your_agentops_api_key_here")
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY", "your_openai_api_key_here")

# Initialize the AgentOps handler (after the API keys are in the environment)
handler = AgentOpsHandler()
handler.init()
```
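
For reference, `load_dotenv()` reads a `.env` file from the working directory. A minimal sketch of what that file could contain for this example (placeholder values only, matching the variable names used above):

```bash
# .env (placeholder values; replace with your real keys)
AGENTOPS_API_KEY=your_agentops_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
```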

## Configure Local Models (Optional)

For this example, we'll use local HuggingFace models to avoid requiring external API keys:

```python
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.huggingface import HuggingFaceLLM

# Configure local embeddings and LLM
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
Settings.llm = HuggingFaceLLM(model_name="microsoft/DialoGPT-medium")
print("Using local HuggingFace embeddings and LLM")
```
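
If you would rather use hosted models, you can skip the cell above and point `Settings` at OpenAI instead. A minimal sketch, assuming the `llama-index-llms-openai` and `llama-index-embeddings-openai` packages are installed and `OPENAI_API_KEY` is set (the specific model names here are illustrative):

```python
# Optional alternative: hosted OpenAI models instead of local HuggingFace ones
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

Settings.llm = OpenAI(model="gpt-4o-mini")  # illustrative model choice
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
```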

## Create Sample Documents and Index

Create some sample documents and build a vector index:

```python
print("🚀 Starting LlamaIndex AgentOps Integration Example")
print("=" * 50)

# Create sample documents
documents = [
    Document(text="LlamaIndex is a framework for building context-augmented generative AI applications with LLMs."),
    Document(text="AgentOps provides observability into your AI applications, tracking LLM calls, performance metrics, and more."),
    Document(text="The integration between LlamaIndex and AgentOps allows you to monitor your RAG applications seamlessly."),
    Document(text="Vector databases are used to store and retrieve embeddings for similarity search in RAG applications."),
    Document(text="Context-augmented generation combines retrieval and generation to provide more accurate and relevant responses.")
]

print("📚 Creating vector index from sample documents...")
index = VectorStoreIndex.from_documents(documents)
print("✅ Vector index created successfully")
```
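
As a side note, this index is rebuilt from scratch on every run. If you want to reuse it across runs, LlamaIndex can persist an index to disk; a minimal sketch (the `./storage` directory name is arbitrary):

```python
from llama_index.core import StorageContext, load_index_from_storage

# Save the index to disk
index.storage_context.persist(persist_dir="./storage")

# Later, reload it instead of re-embedding the documents
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
```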

## Perform Queries

Now let's perform some queries to demonstrate the AgentOps integration:

```python
# Create query engine
query_engine = index.as_query_engine()

print("🔍 Performing queries...")

# Sample queries
queries = [
    "What is LlamaIndex?",
    "How does AgentOps help with AI applications?",
    "What are the benefits of using vector databases in RAG?"
]

for i, query in enumerate(queries, 1):
    print(f"\n📝 Query {i}: {query}")
    response = query_engine.query(query)
    print(f"💬 Response: {response}")
```
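
Each response object also carries the retrieved context. A small sketch for inspecting which documents backed the last answer (`source_nodes` is part of LlamaIndex's standard response object; this assumes the retriever populated a similarity score on each node):

```python
# Inspect the source nodes retrieved for the last response
for source in response.source_nodes:
    snippet = source.node.get_content()[:80]
    print(f"score={source.score:.3f}  text={snippet!r}")
```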

## Results

After running this notebook, you should see:

1. **AgentOps Session Link**: A URL to view the session in your AgentOps dashboard
2. **Cost Tracking**: Information about the cost of LLM calls (if using paid APIs)
3. **Operation Tracking**: All LlamaIndex operations are automatically tracked

Check your AgentOps dashboard to see detailed information about:

- LLM calls and responses
- Performance metrics
- Cost analysis
- Session replay

The session link will be printed in the output above by AgentOps.

```python
print("\n" + "=" * 50)
print("🎉 Example completed successfully!")
print("📊 Check your AgentOps dashboard to see the recorded session with LLM calls and operations.")
print("🔗 The session link should be printed above by AgentOps.")
```

<script type="module" src="/scripts/github_stars.js"></script>
<script type="module" src="/scripts/scroll-img-fadein-animation.js"></script>
<script type="module" src="/scripts/button_heartbeat_animation.js"></script>
<script type="module" src="/scripts/adjust_api_dynamically.js"></script>
