| title | Introduction |
|---|---|
| description | AgentOps is the developer-favorite platform for testing, debugging, and deploying AI agents and LLM apps. |
| mode | wide |
Observability and monitoring for your AI agents and LLM apps. And we do it all in just two lines of code...
```python
import agentops
agentops.init(<INSERT YOUR API KEY HERE>)
```
... that logs everything back to your AgentOps Dashboard.
That's it! AgentOps will automatically instrument your code and start tracking traces.
Need more control? You can disable automatic session creation and manage traces manually:
```python
import agentops
agentops.init(<INSERT YOUR API KEY HERE>, auto_start_session=False)

# Later, when you're ready to start a trace:
trace = agentops.start_trace("my-workflow-trace")

# Your code here
# ...

# End the trace when done
agentops.end_trace(trace, "Success")
```
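If an exception escapes mid-workflow, the trace above would never be ended. One way to guard against that is a small context manager that always closes the trace. The `traced` helper below is a hypothetical sketch (not part of the AgentOps SDK), and the `"Error"` end state is an assumption modeled on the `"Success"` string shown above:

```python
from contextlib import contextmanager

@contextmanager
def traced(name, start_trace, end_trace):
    """Hypothetical helper: start a trace and guarantee it is ended,
    even if the wrapped code raises."""
    trace = start_trace(name)
    try:
        yield trace
    except Exception:
        # Assumed "Error" end state, mirroring the "Success" string above.
        end_trace(trace, "Error")
        raise
    else:
        end_trace(trace, "Success")

# Usage sketch (after agentops.init(..., auto_start_session=False)):
# with traced("my-workflow-trace", agentops.start_trace, agentops.end_trace):
#     ...  # your code here
```

Passing `agentops.start_trace` and `agentops.end_trace` in as parameters keeps the helper decoupled from the SDK, so you can swap in your own trace functions if needed.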
You can also set a custom trace name during initialization:
```python
import agentops
agentops.init(<INSERT YOUR API KEY HERE>, trace_name="custom-trace-name")
```

Give us a star on GitHub to bookmark this for later 🖇️
With just two lines of code, you can free yourself from the chains of the terminal and instead visualize your agents' behavior in your AgentOps Dashboard. After setting up AgentOps, each execution of your program is recorded as a session, and all of the data above is captured for you automatically.
The examples below were captured with two lines of code.
Here you will find a list of all of your previously recorded sessions, along with useful data about each, such as total execution time. You also get helpful debugging info, such as the SDK versions you were on if you're building on a supported agent framework like Crew or AutoGen. LLM calls are presented in a familiar chat-history view, and charts give you a breakdown of the types of events that were called and how long they took.
Find any past sessions from your Session Drawer.
Most powerful of all is the Session Waterfall. On the left is a timeline visualization of all your LLM calls, Action events, Tool calls, and Errors. On the right are specific details about the event you've selected on the waterfall, such as the exact prompt and completion for a given LLM call. Most of this has been recorded for you automatically.
View a meta-analysis of all of your sessions in a single view.
<script type="module" src="/scripts/github_stars.js"></script> <script type="module" src="/scripts/adjust_api_dynamically.js"></script>


