diff --git a/README.md b/README.md
index f6b85deff..f17bdba2e 100644
--- a/README.md
+++ b/README.md
@@ -57,18 +57,36 @@ AgentOps helps developers build, evaluate, and monitor AI agents. From prototype
 
 ## Key Integrations 🔌
 
+### Agent Frameworks
 OpenAI Agents SDK
-CrewAI
-AG2 (AutoGen)
-Microsoft
+CrewAI
+AG2 (AutoGen)
+LangChain
-LangChain
+Google ADK
+Smolagents
+Agno
 Camel AI
-LlamaIndex
+
+### LLM Providers
+
+OpenAI
+Anthropic
+Google Generative AI
+x.AI
+
+LiteLLM
+Watsonx
+Mem0
 Cohere
@@ -79,31 +97,39 @@ AgentOps helps developers build, evaluate, and monitor AI agents. From prototype
 | 💸 **LLM Cost Management** | Track spend with LLM foundation model providers |
 | 🧪 **Agent Benchmarking** | Test your agents against 1,000+ evals |
 | 🔍 **Compliance and Security** | Detect common prompt injection and data exfiltration exploits |
-| 🤝 **Framework Integrations** | Native Integrations with CrewAI, AG2 (AutoGen), Camel AI, & LangChain |
+| 🤝 **Framework Integrations** | Native Integrations with OpenAI Agents, CrewAI, AG2, LangChain, Google ADK, Smolagents & more |
 
 ## Quick Start ⌨️
 
+### Installation
+
+First, install the AgentOps SDK. We recommend including `python-dotenv` for easy API key management.
+
 ```bash
-pip install agentops
+pip install agentops python-dotenv
 ```
 
+### Initial Setup (2 Lines of Code)
-#### Session replays in 2 lines of code
-
-Initialize the AgentOps client and automatically get analytics on all your LLM calls.
+
+At its simplest, AgentOps can start monitoring your supported LLM and agent framework calls with just two lines of Python code.
 
 [Get an API key](https://app.agentops.ai/settings/projects)
 
 ```python
 import agentops
+import os
+from dotenv import load_dotenv
 
-# Beginning of your program (i.e. main.py, __init__.py)
-agentops.init( < INSERT YOUR API KEY HERE >)
+# Load environment variables (recommended for API keys)
+load_dotenv()
 
-...
+# Initialize AgentOps
+# The API key can be passed directly or set as an environment variable AGENTOPS_API_KEY
+AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY")
+agentops.init(AGENTOPS_API_KEY)
 
-# End of program
-agentops.end_session('Success')
+# That's it for basic auto-instrumentation!
+# If you're using a supported library (like OpenAI, LangChain, CrewAI, etc.),
+# AgentOps will now automatically track LLM calls and agent actions.
 ```
 
 All your sessions can be viewed on the [AgentOps dashboard](https://app.agentops.ai?ref=gh)
 
@@ -146,66 +172,52 @@ Add powerful observability to your agents, tools, and functions with as little c
 
 Refer to our [documentation](http://docs.agentops.ai)
 
 ```python
-# Create a session span (root for all other spans)
-from agentops.sdk.decorators import session
+# Track custom operations with @operation
+from agentops.sdk.decorators import operation
 
-@session
-def my_workflow():
-    # Your session code here
-    return result
+@operation
+def process_data(data):
+    # Your function logic here
+    processed_result = data.upper()
+    return processed_result
 ```
 
 ```python
-# Create an agent span for tracking agent operations
-from agentops.sdk.decorators import agent
+# Track agent logic with @agent
+from agentops.sdk.decorators import agent, operation
 
-@agent
+@agent(name="MyCustomAgent")
 class MyAgent:
-    def __init__(self, name):
-        self.name = name
+    def __init__(self, agent_id):
+        self.agent_id = agent_id
 
-    # Agent methods here
-```
-
-```python
-# Create operation/task spans for tracking specific operations
-from agentops.sdk.decorators import operation, task
-
-@operation # or @task
-def process_data(data):
-    # Process the data
-    return result
+    @operation
+    def perform_task(self, task_description):
+        # Agent task logic here
+        return f"Agent {self.agent_id} completed: {task_description}"
 ```
 
 ```python
-# Create workflow spans for tracking multi-operation workflows
-from agentops.sdk.decorators import workflow
+# Track tools with @tool
+from agentops.sdk.decorators import tool
 
-@workflow
-def my_workflow(data):
-    # Workflow implementation
-    return result
+@tool(name="WebSearchTool", cost=0.05)
+def web_search(query: str) -> str:
+    # Tool logic here
+    return f"Search results for: {query}"
 ```
 
 ```python
-# Nest decorators for proper span hierarchy
-from agentops.sdk.decorators import session, agent, operation
-
-@agent
-class MyAgent:
-    @operation
-    def nested_operation(self, message):
-        return f"Processed: {message}"
-
-    @operation
-    def main_operation(self):
-        result = self.nested_operation("test message")
-        return result
+# Group operations with @trace
+from agentops.sdk.decorators import trace
 
-@session
-def my_session():
-    agent = MyAgent()
-    return agent.main_operation()
+@trace(name="MyMainWorkflow", tags=["main-flow"])
+def my_workflow(task_to_perform):
+    # Your workflow code here
+    main_agent = MyAgent(agent_id="workflow-agent")
+    result = main_agent.perform_task(task_to_perform)
+    tool_result = web_search(f"details for {task_to_perform}")
+    return result, tool_result
 ```
 
 All decorators support:
@@ -214,6 +226,7 @@ All decorators support:
 - Async/await functions
 - Generator functions
 - Custom attributes and names
+- Hierarchical span nesting
 
 ## Integrations 🦾
 
@@ -247,12 +260,17 @@ Build Crew agents with observability in just 2 lines of code. Simply set an `AGE
 
 ```bash
 pip install 'crewai[agentops]'
 ```
 
-- [AgentOps integration example](https://docs.agentops.ai/v1/integrations/crewai)
+- [AgentOps integration example](https://docs.agentops.ai/v2/integrations/crewai)
 - [Official CrewAI documentation](https://docs.crewai.com/how-to/AgentOps-Observability)
 
 ### AG2 🤖
 
 With only two lines of code, add full observability and monitoring to AG2 (formerly AutoGen) agents. Set an `AGENTOPS_API_KEY` in your environment and call `agentops.init()`
 
+```bash
+pip install agentops pyautogen
+```
+
+- [AG2 Integration Guide](https://docs.agentops.ai/v2/integrations/ag2)
 - [AG2 Observability Example](https://docs.ag2.ai/notebooks/agentchat_agentops)
 - [AG2 - AgentOps Documentation](https://docs.ag2.ai/docs/ecosystem/agentops)
 
@@ -357,7 +375,7 @@ Check out the [Langchain Examples Notebook](./examples/langchain/langchain_examp
 
 First class support for Cohere(>=5.4.0). This is a living integration, should you need any added functionality please message us on Discord!
 
-- [AgentOps integration example](https://docs.agentops.ai/v1/integrations/cohere)
+- [AgentOps integration example](https://docs.agentops.ai/v2/integrations/cohere)
 - [Official Cohere documentation](https://docs.cohere.com/reference/about)
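Stepping back to the decorator API described earlier: `@trace`, `@agent`, `@operation`, and `@tool` build a tree of nested spans. The sketch below is *not* the AgentOps implementation — it is a minimal, self-contained illustration of how hierarchical span nesting with decorators can work, using `contextvars` so the parent/child relationship stays correct even in async code:

```python
import contextvars
import functools

# The currently open parent span; contextvars keeps nesting correct
# even across async tasks, unlike a plain global variable.
_current_span = contextvars.ContextVar("current_span", default=None)
ROOTS = []  # completed root spans, kept for inspection


class Span:
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind
        self.children = []
        self.parent = _current_span.get()
        if self.parent is None:
            ROOTS.append(self)
        else:
            self.parent.children.append(self)


def _span_decorator(kind):
    """Build a decorator that opens a span of the given kind around each call."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            token = _current_span.set(Span(fn.__name__, kind))
            try:
                return fn(*args, **kwargs)
            finally:
                _current_span.reset(token)
        return wrapper
    return decorate


trace = _span_decorator("trace")          # root-level grouping
operation = _span_decorator("operation")  # nested unit of work


@operation
def step(x):
    return x * 2


@trace
def workflow(x):
    return step(x) + step(x)


print(workflow(3))                           # 12
print([c.name for c in ROOTS[0].children])   # ['step', 'step']
```

In a real SDK the different decorator kinds mostly differ in the metadata they attach to the span; the nesting mechanism is the same.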
@@ -410,7 +428,11 @@ agentops.end_session('Success')
 
 Track agents built with the Anthropic Python SDK (>=0.32.0).
 
-- [AgentOps integration guide](https://docs.agentops.ai/v1/integrations/anthropic)
+```bash
+pip install anthropic
+```
+
+- [AgentOps integration guide](https://docs.agentops.ai/v2/integrations/anthropic)
 - [Official Anthropic documentation](https://docs.anthropic.com/en/docs/welcome)
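Provider integrations like this one typically work by wrapping the provider SDK's request methods when `agentops.init()` runs, so existing call sites need no changes. The following is a hypothetical sketch of that monkey-patching pattern — `FakeClient` and `instrument` are illustrative inventions, not AgentOps or Anthropic APIs:

```python
import functools
import time

# Stand-in for a provider SDK client (hypothetical, not a real SDK)
class FakeClient:
    def create(self, prompt):
        return {"text": prompt.upper()}

RECORDED = []  # call records an observability layer would export


def instrument(cls, method_name):
    """Replace cls.<method_name> with a wrapper that records each call."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        start = time.time()
        result = original(self, *args, **kwargs)
        RECORDED.append({
            "method": method_name,
            "latency_s": time.time() - start,
        })
        return result

    setattr(cls, method_name, wrapper)


# An "init()" step patches the client class once; callers use it unchanged.
instrument(FakeClient, "create")

client = FakeClient()
response = client.create("hello")
print(response["text"])  # HELLO
print(len(RECORDED))     # 1
```

The point of the pattern is that instrumentation happens at one choke point (the SDK method), which is why a single `agentops.init()` can cover every subsequent call.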
@@ -697,7 +719,7 @@ agentops_api_key = os.getenv("AGENTOPS_API_KEY") or ""
 
 AgentOps provides support for LiteLLM(>=1.3.1), allowing you to call 100+ LLMs using the same Input/Output Format.
 
-- [AgentOps integration example](https://docs.agentops.ai/v1/integrations/litellm)
+- [AgentOps integration example](https://docs.agentops.ai/v2/integrations/litellm)
 - [Official LiteLLM documentation](https://docs.litellm.ai/docs/providers)
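What LiteLLM buys you here is a single `completion()` signature routed to many providers, usually keyed off a provider prefix in the model string. As a toy illustration of that routing idea (the backends below are made up — this is not LiteLLM's actual code):

```python
# Toy router: one completion() signature, provider chosen from the model string.
def _openai_backend(model, messages):
    return f"[openai:{model}] " + messages[-1]["content"]

def _anthropic_backend(model, messages):
    return f"[anthropic:{model}] " + messages[-1]["content"]

BACKENDS = {
    "openai": _openai_backend,
    "anthropic": _anthropic_backend,
}


def completion(model, messages):
    """Dispatch 'provider/model' strings to the matching backend."""
    provider, _, model_name = model.partition("/")
    if provider not in BACKENDS:
        raise ValueError(f"unknown provider: {provider}")
    return BACKENDS[provider](model_name, messages)


print(completion("openai/gpt-4o", [{"role": "user", "content": "hi"}]))
# [openai:gpt-4o] hi
```

Because every backend is called through the same interface, an observability hook only has to wrap `completion()` once to see traffic to all providers.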
@@ -746,7 +768,8 @@ from llama_index.core import set_global_handler
 
 set_global_handler("agentops")
 ```
 
-Check out the [LlamaIndex docs](https://docs.llamaindex.ai/en/stable/module_guides/observability/?h=agentops#agentops) for more details.
+- [AgentOps integration guide](https://docs.agentops.ai/v2/integrations/llamaindex)
+- [LlamaIndex docs](https://docs.llamaindex.ai/en/stable/module_guides/observability/?h=agentops#agentops)
@@ -758,6 +781,90 @@ AgentOps provides support for Llama Stack Python Client(>=0.0.53), allowing you
 
 - [AgentOps integration example 2](https://github.com/AgentOps-AI/agentops/pull/530/files/65a5ab4fdcf310326f191d4b870d4f553591e3ea#diff-6688ff4fb7ab1ce7b1cc9b8362ca27264a3060c16737fb1d850305787a6e3699)
 - [Official Llama Stack Python Client](https://github.com/meta-llama/llama-stack-client-python)
 
+### Google Generative AI (Gemini) 🔮
+
+Monitor and analyze your Google Gemini API calls with AgentOps automatically.
+
+```bash
+pip install agentops google-genai
+```
+
+- [AgentOps integration guide](https://docs.agentops.ai/v2/integrations/google_generative_ai)
+- [Official Google AI documentation](https://ai.google.dev/)
+
+<details>
+  <summary>Usage Example</summary>
+
+```python
+import agentops
+from google import genai
+
+# Initialize AgentOps
+agentops.init()
+
+# Create a client
+client = genai.Client(api_key="YOUR_GEMINI_API_KEY")
+
+# Generate content
+for chunk in client.models.generate_content_stream(
+    model='gemini-2.0-flash-001',
+    contents='Explain quantum computing in simple terms.',
+):
+    print(chunk.text, end="", flush=True)
+```
+</details>
+
+### x.AI (Grok) 🚀
+
+Track and analyze your x.AI Grok API calls with AgentOps using the OpenAI SDK.
+
+```bash
+pip install agentops openai
+```
+
+- [AgentOps integration guide](https://docs.agentops.ai/v2/integrations/xai)
+- [Official x.AI documentation](https://console.x.ai/)
+
+<details>
+  <summary>Usage Example</summary>
+
+```python
+import os
+
+import agentops
+from openai import OpenAI
+
+# Initialize AgentOps
+agentops.init()
+
+# Create an OpenAI client configured for xAI
+client = OpenAI(
+    api_key=os.getenv("XAI_API_KEY"),
+    base_url="https://api.x.ai/v1",
+)
+
+# Basic chat completion
+completion = client.chat.completions.create(
+    model="grok-3-latest",
+    messages=[
+        {"role": "system", "content": "You are a helpful AI assistant."},
+        {"role": "user", "content": "Explain AI observability."},
+    ],
+)
+
+print(completion.choices[0].message.content)
+```
+</details>
+
+### Mem0 🧠
+
+Track memory operations and AI interactions with Mem0's memory layer.
+
+```bash
+pip install agentops mem0ai
+```
+
+- [AgentOps integration guide](https://docs.agentops.ai/v2/integrations/mem0)
+- [Official Mem0 documentation](https://docs.mem0.ai/)
+
 ### SwarmZero AI 🐝
 
 Track and analyze SwarmZero agents with full observability. Set an `AGENTOPS_API_KEY` in your environment and initialize AgentOps to get started.
diff --git a/README_UPDATE_TICKET.md b/README_UPDATE_TICKET.md
new file mode 100644
index 000000000..85a81faaa
--- /dev/null
+++ b/README_UPDATE_TICKET.md
@@ -0,0 +1,57 @@
+# Ticket: Update AgentOps README.md with Latest Information
+
+## Description
+Update the main README.md file to reflect the latest AgentOps features, integrations, and documentation structure based on the current v2 documentation.
+
+## Key Areas to Update
+
+### 1. Integration Section Updates
+- Add missing integrations from v2 docs:
+  - AG2 (formerly AutoGen) - comprehensive integration
+  - Google Generative AI (Gemini) - new provider
+  - x.AI (Grok) - new provider
+  - Mem0 - memory integration
+  - Watsonx - IBM integration
+  - Agno - agent framework
+  - Google ADK - agent framework
+  - Smolagents - HuggingFace integration
+
+### 2. Framework Integration Structure
+- Update integration logos and links to match v2 docs structure
+- Ensure all integration examples are current and working
+- Update installation instructions for each integration
+
+### 3. Decorator Examples
+- Update decorator usage examples to match v2 quickstart
+- Add comprehensive examples for @agent, @operation, @tool, @trace decorators
+- Show proper decorator nesting and hierarchy
+
+### 4. Quick Start Section
+- Align quick start with v2/quickstart.mdx structure
+- Update installation instructions
+- Ensure API key setup instructions are current
+
+### 5. Features and Capabilities
+- Update feature descriptions to match current capabilities
+- Ensure roadmap sections are current
+- Update popular projects section if needed
+
+## Success Criteria
+- [ ] All integrations from v2 docs are represented in README
+- [ ] Integration examples are current and functional
+- [ ] Decorator usage examples match v2 documentation
+- [ ] Installation and setup instructions are accurate
+- [ ] All links and references work correctly
+- [ ] README structure aligns with current documentation
+
+## Files to Reference
+- `docs/v2/quickstart.mdx` - main quickstart guide
+- `docs/v2/introduction.mdx` - integration overview
+- `docs/v2/integrations/` - individual integration guides
+- `examples/` - current working examples
+
+## Implementation Notes
+- Keep changes focused and minimal
+- Only update content that needs to be current
+- Maintain existing README structure where appropriate
+- Ensure all code examples are tested and working