---
title: Develop AI agents on Apify
description: Build and deploy AI agents on Apify with framework templates, sandboxes, OpenRouter for LLM access, and pay-per-event pricing.
sidebar_position: 4.0
sidebar_label: Develop AI agents
slug: /actors/development/quick-start/develop-ai-agents
---
The Apify platform provides everything you need to build, test, and deploy AI agents. This page walks you through the complete toolkit: templates, sandbox code execution, LLM access through OpenRouter, pay-per-event monetization, and deployment to Apify Store.
This page covers:
- Start from a template - Use pre-built Apify Actor templates for popular AI frameworks to scaffold your agent quickly.
- AI Sandbox - Run code in an isolated environment at runtime. Useful when your agent needs to execute user-provided or dynamically generated code.
- OpenRouter - Access 100+ LLMs through your Apify account without managing separate API keys.
- Pay-per-event pricing - Charge users for specific actions your agent performs, such as API calls or token usage.
- Deploy to Apify - Push your agent to the Apify platform and publish it to Apify Store.
:::tip Build with AI
Looking to use AI coding assistants (Claude Code, Cursor, GitHub Copilot) to help you develop Actors? See Build Actors with AI.
:::
The fastest way to start building your AI agent is to use one of the Apify Actor templates built on popular AI frameworks. Each template comes pre-configured with the right file structure, dependencies, and Apify SDK integration.
Available AI framework templates include:
- LangChain - LLM pipelines with chain-of-thought and tool use
- Mastra - TypeScript-native AI agent framework
- CrewAI - multi-agent orchestration for complex tasks
- LlamaIndex - retrieval-augmented generation (RAG) workflows
- PydanticAI - Python agents with structured, validated outputs
- Smolagents - lightweight agents from Hugging Face
- MCP - expose your Actor as an MCP server
Initialize a template with the Apify CLI:
```bash
apify create my-agent
```

The command guides you through template selection. Browse all available templates at apify.com/templates.
If you don't have the Apify CLI installed, see the installation guide.
AI Sandbox is an isolated, containerized environment where your AI agent can execute code and system commands at runtime. Your agent Actor starts the sandbox and communicates with it through a REST API or MCP interface.
- Code execution - run JavaScript, TypeScript, Python, and bash via `POST /exec` with captured stdout/stderr and exit codes
- Filesystem access - read, write, list, and delete files through `/fs/{path}` endpoints
- Dynamic reverse proxy - start a web server inside the sandbox and expose it externally
- Dependency installation - install npm and pip packages at startup through Actor input
- Idle timeout - the sandbox automatically stops after a period of inactivity
- MCP interface - connect directly from Claude Code or other MCP clients for live debugging
- Your agent Actor starts the AI Sandbox Actor using the Apify SDK (similar to calling any other Actor)
- The agent sends code to execute via the REST API (`POST /exec`)
- AI Sandbox runs the code in isolation and returns results
- The agent processes results and iterates
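To make the loop concrete, here is a minimal sketch of how an agent might assemble the `POST /exec` call from step 2. The payload field names (`code`, `language`) and the sandbox URL are illustrative assumptions, not a documented schema - check the AI Sandbox Actor README for the exact API shape.

```javascript
// Sketch of step 2 of the loop above: building the POST /exec request
// for a running sandbox. The payload field names (`code`, `language`)
// and the sandbox URL are assumptions for illustration only.
function buildExecRequest(sandboxUrl, code, language = 'python') {
    return {
        url: `${sandboxUrl}/exec`,
        options: {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ code, language }),
        },
    };
}

const req = buildExecRequest('https://my-sandbox.apify.actor', 'print(1 + 1)');
// The agent would then run: await fetch(req.url, req.options)
// and inspect stdout/stderr and the exit code in the response.
```
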
:::info Sandbox environment
AI Sandbox runs on a Debian image with Node.js version 24 and Python 3 pre-installed. You can install additional dependencies through the Actor input configuration.
:::
The OpenRouter Actor provides access to 100+ LLMs through your Apify account. Supported providers include OpenAI, Anthropic, Google, Mistral, Meta, and more. No separate API keys or billing setup required - all costs are billed as platform usage.
:::caution Paid account recommended
To use the OpenRouter Actor, subscribe to a paid Apify plan. Free-tier accounts may be blocked by anti-fraud protections.
:::
OpenRouter exposes an OpenAI-compatible API, so you can use it with any SDK that supports the OpenAI API format.
Use the Apify OpenRouter proxy endpoint with your Apify token:
```javascript
import { createOpenRouter } from '@openrouter/ai-sdk-provider';

const openrouter = createOpenRouter({
    baseURL: 'https://openrouter.apify.actor/api/v1',
    apiKey: 'api-key-not-required',
    headers: {
        Authorization: `Bearer ${process.env.APIFY_TOKEN}`,
    },
});
```

The proxy supports chat completions, streaming, text embeddings, and image generation through vision-capable models.
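Because the proxy speaks the OpenAI wire format, you can also call it with plain HTTP instead of an SDK. The sketch below builds a chat-completions request body; the model slug is illustrative - any model available on OpenRouter works.

```javascript
// Build an OpenAI-compatible chat-completions request for the proxy.
// Send it with:
//   fetch('https://openrouter.apify.actor/api/v1/chat/completions', chatReq)
function buildChatRequest(prompt, model = 'openai/gpt-4o-mini') {
    return {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            Authorization: `Bearer ${process.env.APIFY_TOKEN}`,
        },
        body: JSON.stringify({
            model, // illustrative model slug
            messages: [{ role: 'user', content: prompt }],
        }),
    };
}

const chatReq = buildChatRequest('Hello');
```
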
:::caution Token usage tracking
Pay-per-event pricing can charge users per token. To do this, extract token counts from OpenRouter responses. Check the OpenRouter Actor README for the latest guidance on this workflow.
:::
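Since the proxy returns OpenAI-compatible responses, token counts should appear in the standard `usage` field. A hedged sketch of extracting them (verify the exact field names against the OpenRouter Actor README before using them for billing):

```javascript
// Pull token counts from an OpenAI-compatible completion response so
// they can be passed to pay-per-event charging. The `usage` field shape
// is the standard OpenAI format; confirm it before billing on it.
function extractTokenUsage(response) {
    const { prompt_tokens = 0, completion_tokens = 0 } = response.usage ?? {};
    return { promptTokens: prompt_tokens, completionTokens: completion_tokens };
}

// Example with a mock response object:
const usage = extractTokenUsage({
    usage: { prompt_tokens: 12, completion_tokens: 34, total_tokens: 46 },
});
// The agent could then call Actor.charge() once per N tokens consumed.
```
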
Pay-per-event (PPE) pricing lets you charge users for specific actions your agent performs. Use `Actor.charge()` from the JavaScript SDK or Python SDK to bill users for events like API calls, generated results, or token usage.
For AI agents that use OpenRouter, consider these pricing strategies:
- Fixed pricing - charge a flat fee per task or request, regardless of the underlying LLM costs
- Usage-based pricing - charge per token or per LLM call, passing costs through to users with a markup
Your profit is calculated as:
profit = (0.8 × revenue) - platform costs
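As a quick sanity check, the formula in code: Apify keeps 20% of PPE revenue, and your LLM and compute spend counts toward platform costs.

```javascript
// Profit estimate per the formula above: 80% of revenue minus platform costs.
function estimateProfit(revenueUsd, platformCostsUsd) {
    return 0.8 * revenueUsd - platformCostsUsd;
}

estimateProfit(100, 30); // → 50: $100 PPE revenue, $30 in LLM/compute costs
```
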
:::note Free-tier protection
If an Actor's net profit goes negative (for example, from free-tier users consuming LLM resources), the negative amount resets to $0 for aggregation purposes. Negative profit on one Actor doesn't affect earnings from your other Actors.
:::
For detailed pricing guidance, see the pay-per-event documentation.
When your agent is ready, deploy it to the Apify platform:
```bash
apify push
```

This builds and deploys your Actor. Once deployed, you can:
- Publish to Apify Store - make your agent available to other users and start earning with PPE pricing. See the publishing documentation.
- Run via API - trigger your agent programmatically through the Apify API.
- Set up schedules - run your agent on a recurring schedule.
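For example, running via API corresponds to the Apify API's run-Actor endpoint (`POST /v2/acts/{actorId}/runs`). A minimal sketch of building that request - the Actor ID, input, and token below are placeholders:

```javascript
// Build a request for POST https://api.apify.com/v2/acts/{actorId}/runs,
// the Apify API endpoint that starts an Actor run. All values below are
// placeholders for illustration.
function buildRunRequest(actorId, input, token) {
    return {
        url: `https://api.apify.com/v2/acts/${actorId}/runs`,
        options: {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                Authorization: `Bearer ${token}`,
            },
            body: JSON.stringify(input),
        },
    };
}

const runReq = buildRunRequest('my-user~my-agent', { query: 'hello' }, '<APIFY_TOKEN>');
// Start the run with: await fetch(runReq.url, runReq.options)
```
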
For more deployment options, see the deployment documentation.