feat: OpenClaw agent integration for AirTrigger #3268

buildwithmoses wants to merge 1 commit into triggerdotdev:main
Conversation
Add complete agent management system for OpenClaw containers:
- agents.setup.tsx: Create agent form with model/platform/tools selection
- agents.$agentId.status.tsx: Agent dashboard with executions and health status
- api.agents.provision.ts: Endpoint for assigning container ports
- webhooks.slack.ts: Slack webhook handler routing messages to agent containers

Database schema adds:
- AgentConfig: stores agent configuration and container metadata
- AgentExecution: logs message/response pairs with token counts
- AgentHealthCheck: tracks agent container health status

Architecture:
- Multi-tenant per-user containers on shared VPS (178.128.150.129)
- Port assignment starting at 8001 with auto-increment
- Slack integration for messaging
- Database-backed execution history and monitoring

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
Walkthrough

This PR introduces a complete agent management and execution system with database persistence. It adds three new database models (AgentConfig, AgentExecution, AgentHealthCheck).

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (warnings)
- ⚔️ Resolve merge conflicts
```sql
"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" DATETIME NOT NULL,
```
🔴 Migration uses SQLite DATETIME type instead of PostgreSQL TIMESTAMP(3)
The migration file uses DATETIME for timestamp columns (lines 14, 15, 28, 39), but the database is PostgreSQL (as confirmed by migration_lock.toml with provider = "postgresql" and the Prisma schema provider = "postgresql" at internal-packages/database/prisma/schema.prisma:2). DATETIME is not a valid PostgreSQL column type — PostgreSQL uses TIMESTAMP(3). All other migrations in the repo consistently use TIMESTAMP(3) for DateTime fields. This migration will fail when pnpm run db:migrate is executed.
Prompt for agents
In internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql, replace all occurrences of DATETIME with TIMESTAMP(3). Specifically:
Line 14: Change '"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP' to '"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP'
Line 15: Change '"updatedAt" DATETIME NOT NULL' to '"updatedAt" TIMESTAMP(3) NOT NULL'
Line 28: Change '"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP' to '"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP'
Line 39: Change '"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP' to '"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP'
Also fix the PRIMARY KEY syntax to match PostgreSQL convention. Change each 'TEXT NOT NULL PRIMARY KEY' to 'TEXT NOT NULL' and add a separate CONSTRAINT line, e.g. 'CONSTRAINT "AgentConfig_pkey" PRIMARY KEY ("id")'. Also change '"tools" TEXT' to '"tools" JSONB' since the schema defines tools as Json type which maps to JSONB in PostgreSQL.
```sql
"name" TEXT NOT NULL,
"model" TEXT NOT NULL,
"messagingPlatform" TEXT NOT NULL,
"tools" TEXT,
```
🔴 Migration defines tools column as TEXT but Prisma schema declares it as Json (should be JSONB)
The migration defines "tools" TEXT at line 8, but the Prisma schema at internal-packages/database/prisma/schema.prisma:2589 declares tools Json. In PostgreSQL, Prisma's Json type maps to JSONB, not TEXT. This mismatch means the actual database column type won't match what the Prisma client expects, which can cause runtime errors when Prisma tries to store/retrieve JSON data from a TEXT column. All other Json fields in the migration history use JSONB.
```diff
-"tools" TEXT,
+"tools" JSONB,
```
```ts
export const action = async ({ request }: ActionFunctionArgs) => {
  if (request.method !== "POST") {
    return json({ error: "Method not allowed" }, { status: 405 });
  }

  const { agentId } = (await request.json()) as { agentId: string };
```
🔴 Provisioning API endpoint has no authentication, allowing unauthenticated access
The api.agents.provision.ts endpoint has no authentication check. Every other api.* route in the codebase uses authenticateApiRequest or authenticateApiRequestWithPersonalAccessToken from ~/services/apiAuth.server (or at minimum requireUser). Without authentication, any external caller can invoke this endpoint to provision containers, modify agent records, and allocate ports for any agent ID. This is a security vulnerability since the endpoint modifies database state and is intended to provision infrastructure.
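To illustrate the shape of the missing guard: the real fix should delegate to the repo's authenticateApiRequest helpers, but even a minimal bearer-token check would reject anonymous callers before any database work. The helper below is a hypothetical stand-in, not the repo's API:

```typescript
// Hypothetical sketch: parse the Authorization header before touching the database.
// In the actual codebase this should call authenticateApiRequest /
// authenticateApiRequestWithPersonalAccessToken from ~/services/apiAuth.server
// rather than hand-rolling token checks.
function extractBearerToken(authorizationHeader: string | null): string | null {
  if (!authorizationHeader) return null;
  const match = /^Bearer\s+(\S+)$/.exec(authorizationHeader);
  return match ? match[1] : null;
}

// Usage inside the action (sketch):
// const token = extractBearerToken(request.headers.get("Authorization"));
// if (!token) return json({ error: "Unauthorized" }, { status: 401 });
```

After authenticating, the handler should additionally verify that the authenticated user owns the agent record before provisioning anything.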
```ts
// Trigger provisioning endpoint to spin up container
try {
  const provisionResponse = await fetch("http://localhost:3000/api/agents/provision", {
```
🔴 Hardcoded localhost:3000 URL will fail — webapp defaults to port 3030
The provisioning fetch call at line 71 uses http://localhost:3000/api/agents/provision, but the webapp listens on port 3030 by default. The .env.example sets REMIX_APP_PORT=3030 and APP_ORIGIN=http://localhost:3030, and the server fallback chain in apps/webapp/server.ts:117 is REMIX_APP_PORT || PORT || 3000. With the standard .env setup, the server runs on 3030, so this fetch will fail to connect. Furthermore, a hardcoded localhost URL will never work in production. This should use the APP_ORIGIN env variable instead.
Prompt for agents
In apps/webapp/app/routes/agents.setup.tsx, line 71, replace the hardcoded URL 'http://localhost:3000/api/agents/provision' with a URL derived from the APP_ORIGIN environment variable. Import env from '~/env.server' and use it like:
```ts
const provisionResponse = await fetch(`${env.APP_ORIGIN}/api/agents/provision`, {
```
Note: Per the webapp CLAUDE.md rules, environment variables should be accessed via the env export from app/env.server.ts, never via process.env directly.
```ts
const lastAgent = await prisma.agentConfig.findFirst({
  where: {
    containerPort: { not: null },
  },
  orderBy: { containerPort: "desc" },
});

const nextPort = (lastAgent?.containerPort || 8000) + 1;
```
🔴 Race condition in port allocation allows duplicate port assignments
The port allocation logic at lines 32-39 reads the highest containerPort from the database and adds 1 to get the next port, but this is not atomic. If two provisioning requests arrive concurrently, both will read the same lastAgent.containerPort value and compute the same nextPort, resulting in two agents assigned to the same port. There is no database-level unique constraint on containerPort either (checking internal-packages/database/prisma/schema.prisma:2592), so the duplicate assignment won't be caught by the database.
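The standard remedy is a UNIQUE constraint plus retry-on-conflict inside a transaction. The in-memory model below is purely illustrative: the Set stands in for a UNIQUE constraint on containerPort, and claim() stands in for the INSERT that PostgreSQL would reject on conflict (Prisma surfaces that as a P2002 error, which would trigger the retry):

```typescript
// Illustrative in-memory model of retry-on-unique-violation port allocation.
// In the real fix, `assigned` is the UNIQUE constraint on AgentConfig.containerPort
// and `claim` is the INSERT inside prisma.$transaction; a P2002 unique-constraint
// error from Prisma plays the role of the boolean return below.
class PortAllocator {
  private assigned = new Set<number>();

  constructor(private basePort = 8000, private maxRetries = 5) {}

  allocate(): number {
    for (let attempt = 0; attempt < this.maxRetries; attempt++) {
      // Read the highest assigned port (the findFirst query in the endpoint).
      const highest = this.assigned.size > 0 ? Math.max(...this.assigned) : this.basePort;
      const candidate = highest + 1;
      // A concurrent winner makes claim() fail; we re-read and retry.
      if (this.claim(candidate)) return candidate;
    }
    throw new Error("could not allocate a container port");
  }

  private claim(port: number): boolean {
    if (this.assigned.has(port)) return false; // simulated unique-constraint violation
    this.assigned.add(port);
    return true;
  }
}
```

The key property is that the claim is atomic and duplicates are rejected by the database rather than by application logic, so the worst case under contention is a retry, not a duplicate port.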
```ts
}

// Route message to OpenClaw container (on VPS)
const containerUrl = `http://178.128.150.129:${agent.containerPort}`;
```
🔴 Hardcoded IP address for container routing makes deployment non-portable
The Slack webhook handler at line 58 hardcodes the VPS IP http://178.128.150.129:${agent.containerPort} for routing messages to agent containers. This IP address will not be correct in any environment other than the specific VPS it was written for (not in development, staging, CI, or other production deployments). This should be a configurable environment variable, consistent with how the codebase uses env.APP_ORIGIN and other configurable URLs.
```ts
const event = (await request.json()) as any;

// Handle Slack URL verification
if (event.type === "url_verification") {
  return json({ challenge: event.challenge });
}
```
🚩 No Slack request signature verification on webhook endpoint
The webhooks.slack.ts endpoint does not verify the X-Slack-Signature header, which Slack sends with every request to prove authenticity. Without verification, any external actor can craft fake Slack events and send them to this endpoint, triggering agent executions, database writes, and outbound HTTP requests to containers. While this is a significant security concern, I reported the more fundamental issue (no auth on the provisioning endpoint) as a bug. This webhook endpoint warrants signature verification using the Slack signing secret before processing any events. See Slack's verification docs.
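Slack's documented v0 signing scheme (HMAC-SHA256 over "v0:{timestamp}:{rawBody}") can be verified with Node's built-in crypto. A minimal sketch follows; the function name and the 5-minute staleness window are our choices, and it must run against the raw request body before JSON parsing:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Minimal sketch of Slack request signature verification.
function verifySlackSignature(
  signingSecret: string,
  rawBody: string,
  timestampHeader: string, // X-Slack-Request-Timestamp
  signatureHeader: string, // X-Slack-Signature, e.g. "v0=..."
  nowSeconds: number = Math.floor(Date.now() / 1000)
): boolean {
  // Reject stale requests to limit replay attacks (Slack suggests a 5-minute window).
  if (Math.abs(nowSeconds - Number(timestampHeader)) > 60 * 5) return false;

  const baseString = `v0:${timestampHeader}:${rawBody}`;
  const expected =
    "v0=" + createHmac("sha256", signingSecret).update(baseString).digest("hex");

  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // Constant-time comparison; timingSafeEqual throws on length mismatch, so guard first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The handler should return 401 and stop before any event processing, including the url_verification branch, when this check fails.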
```ts
messagingPlatform: data.messagingPlatform,
tools: tools,
slackWorkspaceId: data.slackWorkspaceId || null,
slackWebhookToken: data.slackWebhookToken || null,
```
🚩 Slack webhook token stored as plaintext, inconsistent with repo's token handling
The slackWebhookToken is stored as a plaintext string in the AgentConfig table (internal-packages/database/prisma/schema.prisma:2596). The repository's established pattern for sensitive tokens is to encrypt them — see PersonalAccessToken which uses encryptedToken Json and hashedToken String at internal-packages/database/prisma/schema.prisma:124-131. Storing webhook tokens in plaintext means any database breach exposes all Slack webhook URLs, allowing attackers to post messages to users' Slack workspaces.
```sql
"responseTimeMs" INTEGER,
"errorMessage" TEXT,
"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
```
🚩 Migration schema mismatches beyond type issues: missing NOT NULL and DEFAULT constraints
Beyond the DATETIME/TIMESTAMP and TEXT/JSONB issues reported as bugs, the migration has additional mismatches with the Prisma schema. For AgentHealthCheck, the migration defines "responseTimeMs" INTEGER (nullable, no default) but the schema declares responseTimeMs Int @default(0) (non-nullable with default). For AgentExecution, the migration has "executionTimeMs" INTEGER NOT NULL without a DEFAULT, but the schema has executionTimeMs Int @default(0). The entire migration appears hand-written rather than generated via pnpm run db:migrate:dev:create as the CONTRIBUTING.md and database CLAUDE.md instruct. The migration should be regenerated using Prisma's tooling.
```ts
if (event.type === "event_callback" && event.event.type === "message") {
  const slackEvent = event.event;
  const workspaceId = event.team_id;
  const channel = slackEvent.channel;
  const text = slackEvent.text;
  const userId = slackEvent.user;
```
🚩 Bot messages from Slack will cause infinite message loops
The Slack webhook handler at webhooks.slack.ts:24 processes all message events without filtering out bot messages. When the agent sends a response back to Slack via the webhook at line 93, Slack will send a new event_callback for that bot message. The handler will process it again, send another response, and so on — creating an infinite loop. Standard practice is to check slackEvent.bot_id or slackEvent.subtype === 'bot_message' and ignore those events. This isn't just a theoretical concern — it will happen on every single message exchange.
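A guard of the kind described above might look like this (the type and function names are ours; bot_id and subtype are standard fields on Slack message events):

```typescript
type SlackMessageEvent = {
  type: string;
  subtype?: string;
  bot_id?: string;
  user?: string;
  text?: string;
};

// Returns false for anything the agent should not respond to, in particular
// its own bot-authored replies, which would otherwise re-trigger the webhook
// and create an infinite loop.
function shouldProcessMessage(event: SlackMessageEvent): boolean {
  if (event.type !== "message") return false;
  if (event.bot_id) return false; // posted by a bot (including this agent)
  if (event.subtype === "bot_message") return false;
  return true;
}
```

Calling this at the top of the event_callback branch and returning early when it is false breaks the loop before any container call or database write happens.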
Actionable comments posted: 12
🧹 Nitpick comments (4)
internal-packages/database/prisma/schema.prisma (2)
2582-2613: Consider adding a unique constraint on containerPort.

Without a unique constraint, the database allows duplicate port assignments. Adding @unique provides defense-in-depth against the race condition in the provisioning endpoint.

♻️ Proposed change

```diff
   // Container info
   containerName String?
-  containerPort Int?
+  containerPort Int? @unique
```

Note: This will require a new migration file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In internal-packages/database/prisma/schema.prisma around lines 2582-2613, add a unique constraint on the AgentConfig.containerPort field to prevent duplicate port assignments at the database level: update the AgentConfig model in schema.prisma to mark containerPort as unique (containerPort Int? @unique) and generate a new migration so Prisma applies the constraint.
2599: Document the allowed status values or consider using a Prisma enum.

The status field uses a comment to indicate allowed values (provisioning, healthy, unhealthy). For better type safety and documentation, consider:
- Using a Prisma enum AgentStatus { PROVISIONING HEALTHY UNHEALTHY }, or
- Adding a more detailed doc comment with ///

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In internal-packages/database/prisma/schema.prisma at line 2599, the status String field lacks strong typing and docs. Replace or augment it by introducing a Prisma enum (e.g., AgentStatus with values PROVISIONING, HEALTHY, UNHEALTHY) and change the model's status field to use it (e.g., status AgentStatus @default(PROVISIONING)), or at minimum add a doc comment using /// above the status field describing the allowed string values; update any related code that relies on the string literals to use the new enum symbol or adjust casts accordingly.

apps/webapp/app/routes/agents.$agentId.status.tsx (1)
2-3: Remove unused imports.

json from @remix-run/node and useLoaderData from @remix-run/react are imported but not used. The code uses typedjson and useTypedLoaderData instead.

🧹 Remove unused imports

```diff
 import type { LoaderFunctionArgs } from "@remix-run/node";
-import { json } from "@remix-run/node";
-import { useLoaderData } from "@remix-run/react";
 import { typedjson, useTypedLoaderData } from "remix-typedjson";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In apps/webapp/app/routes/agents.$agentId.status.tsx around lines 2-3, remove the unused imports json and useLoaderData: delete the import of json from "@remix-run/node" and the import of useLoaderData from "@remix-run/react" because the module uses typedjson and useTypedLoaderData instead.

internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql (1)
10: Consider adding a UNIQUE constraint on containerPort.

Without a unique constraint, the race condition in the provisioning endpoint could result in duplicate port assignments being persisted. Adding database-level enforcement provides defense in depth.

♻️ Add UNIQUE constraint

```diff
   "containerName" TEXT,
-  "containerPort" INTEGER,
+  "containerPort" INTEGER UNIQUE,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql at line 10, the "containerPort" INTEGER column is declared without uniqueness; update the migration to enforce uniqueness by either changing the column definition to "containerPort" INTEGER UNIQUE or adding a table-level constraint like UNIQUE("containerPort"), and regenerate the corresponding Prisma schema change if needed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In apps/webapp/app/routes/agents.$agentId.status.tsx:
- Around line 176-179: The TableCell currently renders check.responseTimeMs
directly which can be null and show "nullms"; update the rendering in the JSX
where check.responseTimeMs is used so it safely handles null/undefined (e.g.,
use a fallback like 0 or "—") and append "ms" only for numeric values; locate
the TableCell that references check.responseTimeMs and change it to
conditionally render (check.responseTimeMs != null ? `${check.responseTimeMs}ms`
: "—") so null values display gracefully while preserving existing behavior for
numbers, and similarly ensure date rendering (new
Date(check.createdAt).toLocaleString()) remains unchanged.
In apps/webapp/app/routes/agents.setup.tsx:
- Around line 70-85: The fetch in routes/agents.setup.tsx is calling a hardcoded
"http://localhost:3000/api/agents/provision" which breaks non-local
deployments—replace the literal URL with the environment-backed origin: import
env (the env export from env.server.ts / env.server) and call
fetch(`${env.APP_ORIGIN}/api/agents/provision`, ...) so the provisioning
endpoint uses APP_ORIGIN; keep the same method/headers/body and error logging
around agentConfig.id.
- Around line 77-85: The provisioning error is being logged but ignored,
allowing agent creation and an unconditional redirect; change the provisioning
branch in the code that handles provisionResponse and the catch block so
failures prevent the redirect and update the agent's status to reflect the
failure. Specifically, when provisionResponse.ok is false or an exception is
caught, call your agent-status update routine (e.g., updateAgentStatus or
setAgentStatus for agentConfig.id) to set a "provision_failed" (or similar)
state, persist that change, and then return/throw an error or a response that
surfaces the failure to the caller instead of executing the existing redirect to
the status page; also enrich logger.error calls with the response body/error
details (provisionResponse.status, body, and the caught error) so debugging info
is preserved.
In apps/webapp/app/routes/api.agents.provision.ts:
- Around line 21-29: The endpoint is missing an ownership check after fetching
agentConfig; obtain the current requester’s user id from the request/session
(e.g., your existing auth helper or session extraction used elsewhere), then
verify agentConfig.userId (or the appropriate ownership field on the AgentConfig
record) matches that user id; if it does not, return a 403 JSON response and
halt provisioning—update the logic around prisma.agentConfig.findUnique /
agentConfig and use the same auth helper you use elsewhere to get the current
user id.
- Around line 15-19: Replace the unchecked type assertion on (await
request.json()) and manual agentId check with a zod schema: define a Zod object
schema that requires agentId (e.g., const schema = z.object({ agentId:
z.string().min(1) })), use schema.parse or await schema.parseAsync on the
request body, and on failure return json({ error: ..., issues: schemaError })
with status 400; update the handler logic that currently references agentId from
the raw request/json() to use the validated value and remove the existing if
(!agentId) branch.
- Around line 31-39: The port assignment using prisma.agentConfig.findFirst to
compute nextPort is racy: concurrent requests can compute the same nextPort (see
prisma.agentConfig.findFirst and the nextPort calculation) leading to duplicate
containerPort values; fix by performing the allocation inside an atomic
transaction or using a dedicated counter/sequence table (or DB sequence) that
you increment and read in a single transaction (e.g., use prisma.$transaction
with SELECT ... FOR UPDATE semantics or a separate PortAllocation model and
update it atomically), and additionally add a UNIQUE constraint on
agentConfig.containerPort to guard against any remaining races and retry on
unique-constraint failure.
In apps/webapp/app/routes/webhooks.slack.ts:
- Line 16: Replace the unsafe any cast for the incoming Slack payload by adding
a zod schema (SlackUrlVerificationSchema, SlackMessageEventSchema) and a
SlackEventSchema union, import { z } from "zod", then parse/validate the body
instead of using const event = (await request.json()) as any; — use
SlackEventSchema.parse(...) or SlackEventSchema.safeParse(...) to validate the
payload and handle validation failures (return a 400 or log error) before
proceeding with logic that expects the validated event object.
- Around line 39-45: The query using prisma.agentConfig.findFirst filters for
status: "healthy" but no codepath ever sets that status, so messages will never
be routed; update the logic either by adding a post-deployment health transition
that flips agentConfig.status to "healthy" after successful container
deployment/healthcheck (implement a health-check routine and update the record),
or as a temporary measure expand the query in the route handling
(prisma.agentConfig.findFirst for slackWorkspaceId/messagingPlatform) to accept
both "provisioning" and "healthy" states so provisioning agents can receive
events until the proper health transition is implemented.
- Around line 57-58: The hardcoded host in the container URL should be replaced
with a configurable environment value: add AGENT_CONTAINER_HOST to env.server.ts
(default "localhost") and use env.AGENT_CONTAINER_HOST when building
containerUrl in routes/webhooks.slack.ts (keep agent.containerPort). Update any
imports to use the env export so containerUrl =
`http://${env.AGENT_CONTAINER_HOST}:${agent.containerPort}` rather than the
literal IP.
- Around line 60-74: The fetch to the container (the await fetch call that sets
containerResponse using containerUrl) needs a timeout so it can't hang; create
an AbortController before the fetch, start a setTimeout to call
controller.abort() after a safe Slack deadline (e.g. ~2500–2800ms), pass
controller.signal to fetch, and clear the timeout after the fetch resolves; also
ensure any abort/timeout errors are handled in the existing try/catch so the
handler returns promptly.
- Around line 15-21: Add Slack signature verification before parsing/handling
the payload: read the x-slack-signature and x-slack-request-timestamp headers
from the incoming request, reconstruct the base string
"v0:{timestamp}:{rawBody}", compute the HMAC-SHA256 using the workspace signing
secret (from the workspace record or config) and compare against the header
using a constant-time comparison; if missing/invalid or timestamp is too old,
return 401 and do not proceed to the existing event handling (including the
url_verification branch that currently runs after const event = (await
request.json())). Use the existing request, event and workspace lookup logic to
obtain the correct signing secret and ensure signature validation runs before
any processing.
In internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql:
- Around line 1-17: The migration uses PostgreSQL-incompatible DATETIME for the
AgentConfig table timestamps; change the "createdAt" and "updatedAt" column
types in the "AgentConfig" CREATE TABLE to TIMESTAMP(3) (e.g., replace
`"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP` and `"updatedAt"
DATETIME NOT NULL` with TIMESTAMP(3) equivalents), and ensure the createdAt
default uses CURRENT_TIMESTAMP(3) to match other migrations and precision used
across the codebase.
---
Nitpick comments:
In apps/webapp/app/routes/agents.$agentId.status.tsx:
- Around line 2-3: Remove the unused imports json and useLoaderData: delete the
import of json from "@remix-run/node" and the import of useLoaderData from
"@remix-run/react" because the module uses typedjson and useTypedLoaderData
instead; update the import statement(s) that currently include json and
useLoaderData so only the needed symbols (e.g., typedjson, useTypedLoaderData)
are imported and the unused ones are removed.
In internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql:
- Line 10: This migration currently declares the "containerPort" INTEGER column
without uniqueness; update the migration
(20260325122458_add_openclaw_agents/migration.sql) to enforce uniqueness by
either changing the column definition to "containerPort" INTEGER UNIQUE or
adding a table-level constraint like , UNIQUE("containerPort") so the DB will
reject duplicate port assignments; ensure the resulting migration is valid SQL
and re-run/generate the corresponding Prisma schema change if needed.
In internal-packages/database/prisma/schema.prisma:
- Around line 2582-2613: Add a unique constraint on the
AgentConfig.containerPort field to prevent duplicate port assignments at the
database level: update the AgentConfig model in schema.prisma to mark
containerPort as unique (containerPort Int? `@unique`) and generate a new
migration so Prisma applies the constraint; reference the AgentConfig model and
the containerPort field when making the change.
- Line 2599: The schema's status String field lacks strong typing and
docs—replace or augment it by introducing a Prisma enum (e.g., AgentStatus with
values PROVISIONING, HEALTHY, UNHEALTHY) and change the model's status field to
use AgentStatus (e.g., status AgentStatus `@default`(PROVISIONING)), or at minimum
add a doc-comment using /// above the status field describing allowed string
values; update any related code that relies on the string literal to use the new
enum symbol (AgentStatus) or adjust casts accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 9caae087-2cf6-4b76-91fd-6fb6e43cabee
📒 Files selected for processing (6)
- apps/webapp/app/routes/agents.$agentId.status.tsx
- apps/webapp/app/routes/agents.setup.tsx
- apps/webapp/app/routes/api.agents.provision.ts
- apps/webapp/app/routes/webhooks.slack.ts
- internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
- internal-packages/database/prisma/schema.prisma
📜 Review details
🧰 Additional context used
📓 Path-based instructions (11)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{ts,tsx}: Use types over interfaces for TypeScript
Avoid using enums; prefer string unions or const objects instead
**/*.{ts,tsx}: For apps and internal packages (apps/*,internal-packages/*), usepnpm run typecheck --filter <package>for verification, never usebuildas it proves almost nothing about correctness
Use testcontainers helpers (redisTest,postgresTest,containerTestfrom@internal/testcontainers) for integration tests with Redis and PostgreSQL instead of mocking
When writing Trigger.dev tasks, always import from@trigger.dev/sdk- never use@trigger.dev/sdk/v3or deprecatedclient.defineJob
Files:
apps/webapp/app/routes/agents.$agentId.status.tsxapps/webapp/app/routes/api.agents.provision.tsapps/webapp/app/routes/agents.setup.tsxapps/webapp/app/routes/webhooks.slack.ts
{packages/core,apps/webapp}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use zod for validation in packages/core and apps/webapp
Files:
apps/webapp/app/routes/agents.$agentId.status.tsxapps/webapp/app/routes/api.agents.provision.tsapps/webapp/app/routes/agents.setup.tsxapps/webapp/app/routes/webhooks.slack.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use function declarations instead of default exports
**/*.{ts,tsx,js,jsx}: Use pnpm for package management in this monorepo (version 10.23.0) with Turborepo for orchestration - run commands from root withpnpm run
Add crumbs as you write code for debug tracing using//@Crumbscomments or `// `#region` `@crumbsblocks - they stay on the branch throughout development and are stripped viaagentcrumbs stripbefore merge
Files:
apps/webapp/app/routes/agents.$agentId.status.tsxapps/webapp/app/routes/api.agents.provision.tsapps/webapp/app/routes/agents.setup.tsxapps/webapp/app/routes/webhooks.slack.ts
apps/webapp/app/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
Access all environment variables through the
envexport ofenv.server.tsinstead of directly accessingprocess.envin the Trigger.dev webapp
Files:
apps/webapp/app/routes/agents.$agentId.status.tsxapps/webapp/app/routes/api.agents.provision.tsapps/webapp/app/routes/agents.setup.tsxapps/webapp/app/routes/webhooks.slack.ts
apps/webapp/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
apps/webapp/**/*.{ts,tsx}: When importing from@trigger.dev/corein the webapp, use subpath exports from the package.json instead of importing from the root path
Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp
Files:
apps/webapp/app/routes/agents.$agentId.status.tsxapps/webapp/app/routes/api.agents.provision.tsapps/webapp/app/routes/agents.setup.tsxapps/webapp/app/routes/webhooks.slack.ts
**/*.{js,ts,jsx,tsx,json,md,yaml,yml}
📄 CodeRabbit inference engine (AGENTS.md)
Format code using Prettier before committing
Files:
apps/webapp/app/routes/agents.$agentId.status.tsxapps/webapp/app/routes/api.agents.provision.tsapps/webapp/app/routes/agents.setup.tsxapps/webapp/app/routes/webhooks.slack.ts
apps/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
When modifying only server components (
apps/webapp/,apps/supervisor/, etc.) with no package changes, add a.server-changes/file instead of a changeset
Files:
apps/webapp/app/routes/agents.$agentId.status.tsxapps/webapp/app/routes/api.agents.provision.tsapps/webapp/app/routes/agents.setup.tsxapps/webapp/app/routes/webhooks.slack.ts
apps/webapp/app/**/*.{ts,tsx,server.ts}
📄 CodeRabbit inference engine (apps/webapp/CLAUDE.md)
Access environment variables via
envexport fromapp/env.server.ts. Never useprocess.envdirectly
Files:
apps/webapp/app/routes/agents.$agentId.status.tsxapps/webapp/app/routes/api.agents.provision.tsapps/webapp/app/routes/agents.setup.tsxapps/webapp/app/routes/webhooks.slack.ts
internal-packages/database/**/prisma/migrations/*/*.sql
📄 CodeRabbit inference engine (internal-packages/database/CLAUDE.md)
internal-packages/database/**/prisma/migrations/*/*.sql: Clean up generated Prisma migrations by removing extraneous lines for junction tables (_BackgroundWorkerToBackgroundWorkerFile,_BackgroundWorkerToTaskQueue,_TaskRunToTaskRunTag,_WaitpointRunConnections,_completedWaitpoints) and indexes (SecretStore_key_idx, variousTaskRunindexes) unless explicitly added
When adding indexes to existing tables, useCREATE INDEX CONCURRENTLY IF NOT EXISTSto avoid table locks in production, and place each concurrent index in its own separate migration file
Indexes on newly created tables can useCREATE INDEXwithout CONCURRENTLY and can be combined in the same migration file as theCREATE TABLEstatement
When adding an index on a new column in an existing table, use two separate migrations: first forALTER TABLE ... ADD COLUMN IF NOT EXISTS ..., then forCREATE INDEX CONCURRENTLY IF NOT EXISTS ...in its own file
Files:
internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/otel-metrics.mdc)
**/*.ts: When creating or editing OTEL metrics (counters, histograms, gauges), ensure metric attributes have low cardinality by using only enums, booleans, bounded error codes, or bounded shard IDs
Do not use high-cardinality attributes in OTEL metrics such as UUIDs/IDs (envId, userId, runId, projectId, organizationId), unbounded integers (itemCount, batchSize, retryCount), timestamps (createdAt, startTime), or free-form strings (errorMessage, taskName, queueName)
When exporting OTEL metrics via OTLP to Prometheus, be aware that the exporter automatically adds unit suffixes to metric names (e.g., 'my_duration_ms' becomes 'my_duration_ms_milliseconds', 'my_counter' becomes 'my_counter_total'). Account for these transformations when writing Grafana dashboards or Prometheus queries
Files:
apps/webapp/app/routes/api.agents.provision.ts, apps/webapp/app/routes/webhooks.slack.ts
apps/webapp/app/routes/**/*.ts
📄 CodeRabbit inference engine (apps/webapp/CLAUDE.md)
Use Remix flat-file route convention with dot-separated segments where
`api.v1.tasks.$taskId.trigger.ts` maps to `/api/v1/tasks/:taskId/trigger`
Files:
apps/webapp/app/routes/api.agents.provision.ts, apps/webapp/app/routes/webhooks.slack.ts
🧠 Learnings (18)
📚 Learning: 2025-11-27T16:26:58.661Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/webapp.mdc:0-0
Timestamp: 2025-11-27T16:26:58.661Z
Learning: Applies to apps/webapp/**/*.{ts,tsx} : Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp
Applied to files:
apps/webapp/app/routes/agents.$agentId.status.tsx, apps/webapp/app/routes/api.agents.provision.ts, apps/webapp/app/routes/agents.setup.tsx
📚 Learning: 2026-03-23T06:24:25.029Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: apps/webapp/CLAUDE.md:0-0
Timestamp: 2026-03-23T06:24:25.029Z
Learning: Applies to apps/webapp/app/routes/**/*.ts : Use Remix flat-file route convention with dot-separated segments where `api.v1.tasks.$taskId.trigger.ts` maps to `/api/v1/tasks/:taskId/trigger`
Applied to files:
apps/webapp/app/routes/agents.$agentId.status.tsx, apps/webapp/app/routes/api.agents.provision.ts, apps/webapp/app/routes/agents.setup.tsx
📚 Learning: 2026-03-13T13:45:39.411Z
Learnt from: ericallam
Repo: triggerdotdev/trigger.dev PR: 3213
File: apps/webapp/app/routes/admin.llm-models.missing.$model.tsx:19-21
Timestamp: 2026-03-13T13:45:39.411Z
Learning: In `apps/webapp/app/routes/admin.llm-models.missing.$model.tsx`, the `decodeURIComponent(params.model ?? "")` call is intentionally unguarded. Remix route params are decoded at the routing layer before reaching the loader, so malformed percent-encoding is rejected upstream. The page is also admin-only, so the risk is minimal and no try-catch is warranted.
Applied to files:
apps/webapp/app/routes/agents.$agentId.status.tsx, apps/webapp/app/routes/agents.setup.tsx
📚 Learning: 2025-11-27T16:26:37.432Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: The webapp at apps/webapp is a Remix 2.1 application using Node.js v20
Applied to files:
apps/webapp/app/routes/agents.$agentId.status.tsx, apps/webapp/app/routes/agents.setup.tsx
📚 Learning: 2026-02-03T18:27:40.429Z
Learnt from: 0ski
Repo: triggerdotdev/trigger.dev PR: 2994
File: apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.environment-variables/route.tsx:553-555
Timestamp: 2026-02-03T18:27:40.429Z
Learning: In apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.environment-variables/route.tsx, the menu buttons (e.g., Edit with PencilSquareIcon) in the TableCellMenu are intentionally icon-only with no text labels as a compact UI pattern. This is a deliberate design choice for this route; preserve the icon-only behavior for consistency in this file.
Applied to files:
apps/webapp/app/routes/agents.$agentId.status.tsx, apps/webapp/app/routes/agents.setup.tsx
📚 Learning: 2026-02-11T16:37:32.429Z
Learnt from: matt-aitken
Repo: triggerdotdev/trigger.dev PR: 3019
File: apps/webapp/app/components/primitives/charts/Card.tsx:26-30
Timestamp: 2026-02-11T16:37:32.429Z
Learning: In projects using react-grid-layout, avoid relying on drag-handle class to imply draggability. Ensure drag-handle elements only affect dragging when the parent grid item is configured draggable in the layout; conditionally apply cursor styles based on the draggable prop. This improves correctness and accessibility.
Applied to files:
apps/webapp/app/routes/agents.$agentId.status.tsx, apps/webapp/app/routes/agents.setup.tsx
📚 Learning: 2026-03-22T13:26:12.060Z
Learnt from: ericallam
Repo: triggerdotdev/trigger.dev PR: 3244
File: apps/webapp/app/components/code/TextEditor.tsx:81-86
Timestamp: 2026-03-22T13:26:12.060Z
Learning: In the triggerdotdev/trigger.dev codebase, do not flag `navigator.clipboard.writeText(...)` calls for `missing-await`/`unhandled-promise` issues. These clipboard writes are intentionally invoked without `await` and without `catch` handlers across the project; keep that behavior consistent when reviewing TypeScript/TSX files (e.g., usages like in `apps/webapp/app/components/code/TextEditor.tsx`).
Applied to files:
apps/webapp/app/routes/agents.$agentId.status.tsx, apps/webapp/app/routes/api.agents.provision.ts, apps/webapp/app/routes/agents.setup.tsx, apps/webapp/app/routes/webhooks.slack.ts
📚 Learning: 2026-03-22T19:24:14.403Z
Learnt from: matt-aitken
Repo: triggerdotdev/trigger.dev PR: 3187
File: apps/webapp/app/v3/services/alerts/deliverErrorGroupAlert.server.ts:200-204
Timestamp: 2026-03-22T19:24:14.403Z
Learning: In the triggerdotdev/trigger.dev codebase, webhook URLs are not expected to contain embedded credentials/secrets (e.g., fields like `ProjectAlertWebhookProperties` should only hold credential-free webhook endpoints). During code review, if you see logging or inclusion of raw webhook URLs in error messages, do not automatically treat it as a credential-leak/secrets-in-logs issue by default—first verify the URL does not contain embedded credentials (for example, no username/password in the URL, no obvious secret/token query params or fragments). If the URL is credential-free per this project’s conventions, allow the logging.
Applied to files:
apps/webapp/app/routes/agents.$agentId.status.tsx, apps/webapp/app/routes/api.agents.provision.ts, apps/webapp/app/routes/agents.setup.tsx, apps/webapp/app/routes/webhooks.slack.ts
📚 Learning: 2026-03-02T12:43:17.177Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: internal-packages/database/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:17.177Z
Learning: Applies to internal-packages/database/**/prisma/migrations/*/*.sql : Clean up generated Prisma migrations by removing extraneous lines for junction tables (`_BackgroundWorkerToBackgroundWorkerFile`, `_BackgroundWorkerToTaskQueue`, `_TaskRunToTaskRunTag`, `_WaitpointRunConnections`, `_completedWaitpoints`) and indexes (`SecretStore_key_idx`, various `TaskRun` indexes) unless explicitly added
Applied to files:
internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2026-03-22T13:49:20.068Z
Learnt from: ericallam
Repo: triggerdotdev/trigger.dev PR: 3244
File: internal-packages/database/prisma/migrations/20260318114244_add_prompt_friendly_id/migration.sql:5-5
Timestamp: 2026-03-22T13:49:20.068Z
Learning: For Prisma migration SQL files under `internal-packages/database/prisma/migrations/`, it is acceptable to create indexes with `CREATE INDEX` / `CREATE UNIQUE INDEX` (i.e., without `CONCURRENTLY`) when the parent table is introduced in the same PR and has no existing production rows yet. Only require `CREATE INDEX CONCURRENTLY` (or otherwise account for existing production data/locks) when the table already exists in production with data.
Applied to files:
internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2026-03-02T12:43:17.177Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: internal-packages/database/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:17.177Z
Learning: Applies to internal-packages/database/**/prisma/migrations/*/*.sql : When adding an index on a new column in an existing table, use two separate migrations: first for `ALTER TABLE ... ADD COLUMN IF NOT EXISTS ...`, then for `CREATE INDEX CONCURRENTLY IF NOT EXISTS ...` in its own file
Applied to files:
internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2026-03-02T12:43:17.177Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: internal-packages/database/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:17.177Z
Learning: Applies to internal-packages/database/**/prisma/migrations/*/*.sql : Indexes on newly created tables can use `CREATE INDEX` without CONCURRENTLY and can be combined in the same migration file as the `CREATE TABLE` statement
Applied to files:
internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2026-03-02T12:43:17.177Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: internal-packages/database/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:17.177Z
Learning: Applies to internal-packages/database/**/prisma/migrations/*/*.sql : When adding indexes to existing tables, use `CREATE INDEX CONCURRENTLY IF NOT EXISTS` to avoid table locks in production, and place each concurrent index in its own separate migration file
Applied to files:
internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2026-02-03T18:48:31.790Z
Learnt from: 0ski
Repo: triggerdotdev/trigger.dev PR: 2994
File: internal-packages/database/prisma/migrations/20260129162810_add_integration_deployment/migration.sql:14-18
Timestamp: 2026-02-03T18:48:31.790Z
Learning: For Prisma migrations targeting PostgreSQL: - When adding indexes to existing tables, create the index in a separate migration file and include CONCURRENTLY to avoid locking the table. - For indexes on newly created tables (in CREATE TABLE statements), you can create the index in the same migration file without CONCURRENTLY. This reduces rollout complexity for new objects while protecting uptime for existing structures.
Applied to files:
internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2026-03-02T12:43:17.177Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: internal-packages/database/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:17.177Z
Learning: Edit Prisma schema at `prisma/schema.prisma` and generate migrations using `pnpm run db:migrate:dev:create --name "descriptive_name"` from the `internal-packages/database` directory
Applied to files:
internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2025-11-27T16:26:37.432Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to internal-packages/database/**/*.{ts,tsx} : Use Prisma for database interactions in internal-packages/database with PostgreSQL
Applied to files:
internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2026-02-04T16:34:48.876Z
Learnt from: 0ski
Repo: triggerdotdev/trigger.dev PR: 2994
File: apps/webapp/app/routes/vercel.connect.tsx:13-27
Timestamp: 2026-02-04T16:34:48.876Z
Learning: In apps/webapp/app/routes/vercel.connect.tsx, configurationId may be absent for "dashboard" flows but must be present for "marketplace" flows. Enforce this with a Zod superRefine and pass installationId to repository methods only when configurationId is defined (omit the field otherwise).
Applied to files:
apps/webapp/app/routes/agents.setup.tsx
📚 Learning: 2026-03-13T13:42:25.092Z
Learnt from: ericallam
Repo: triggerdotdev/trigger.dev PR: 3213
File: apps/webapp/app/routes/admin.llm-models.new.tsx:65-91
Timestamp: 2026-03-13T13:42:25.092Z
Learning: In `apps/webapp/app/routes/admin.llm-models.new.tsx`, sequential Prisma writes for model/tier creation are intentionally not wrapped in a transaction. The form is admin-only with low concurrency risk, and the blast radius is considered minimal for admin tooling.
Applied to files:
apps/webapp/app/routes/agents.setup.tsx
🔇 Additional comments (6)
apps/webapp/app/routes/agents.$agentId.status.tsx (1)
19-49: LGTM on loader with proper ownership verification. The loader correctly authenticates the user, validates the `agentId` param, and verifies ownership before returning data. The inclusion of related `executions` and `healthChecks` with appropriate ordering and limits is well-structured.
apps/webapp/app/routes/agents.setup.tsx (1)
14-21: LGTM on Zod validation schema. Good use of zod for form validation with appropriate constraints (required fields, enum values for model and platform).
internal-packages/database/prisma/schema.prisma (3)
68-68: LGTM on User relation. The `agentConfigs` relation is correctly added to the User model, enabling the one-to-many relationship.
2615-2649: LGTM on AgentExecution and AgentHealthCheck models. The models are well-structured with appropriate indexes for common query patterns (by `agentId` and `createdAt` descending). Cascade delete on the `agent` relation is correct for cleanup.
2595-2597: ⚠️ Potential issue | 🟠 Major — Sensitive credentials stored as plain text. `slackWebhookToken` contains a Slack bot token, which is a sensitive credential. Consider using the existing `SecretReference` pattern (used by `PersonalAccessToken`, `OrganizationIntegration`) to encrypt or externally store this value. ⛔ Skipped due to learnings
Learnt from: matt-aitken Repo: triggerdotdev/trigger.dev PR: 3187 File: apps/webapp/app/v3/services/alerts/deliverErrorGroupAlert.server.ts:200-204 Timestamp: 2026-03-22T19:24:18.069Z Learning: In the triggerdotdev/trigger.dev codebase, webhook URLs (e.g., in `ProjectAlertWebhookProperties`) do not contain embedded credentials or secrets. Do not flag logging or including raw webhook URLs in error messages as a credential-leakage risk in this project (e.g., in `apps/webapp/app/v3/services/alerts/deliverErrorGroupAlert.server.ts`).
internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql (1)
12-12: ⚠️ Potential issue | 🟠 Major — Storing `slackWebhookToken` as plain text may pose a security risk. Slack webhook tokens are sensitive credentials. Consider encrypting this field or storing a reference to a secret store (similar to how `SecretReference` is used elsewhere in the codebase for sensitive values like `PersonalAccessToken.encryptedToken`). ⛔ Skipped due to learnings
Learnt from: matt-aitken Repo: triggerdotdev/trigger.dev PR: 3187 File: apps/webapp/app/v3/services/alerts/deliverErrorGroupAlert.server.ts:200-204 Timestamp: 2026-03-22T19:24:18.069Z Learning: In the triggerdotdev/trigger.dev codebase, webhook URLs (e.g., in `ProjectAlertWebhookProperties`) do not contain embedded credentials or secrets. Do not flag logging or including raw webhook URLs in error messages as a credential-leakage risk in this project (e.g., in `apps/webapp/app/v3/services/alerts/deliverErrorGroupAlert.server.ts`).
        </span>
      </TableCell>
      <TableCell>{check.responseTimeMs}ms</TableCell>
      <TableCell>{new Date(check.createdAt).toLocaleString()}</TableCell>
Handle null responseTimeMs gracefully.
`responseTimeMs` can be null (per schema the default is 0, but the provisioning endpoint creates records without setting it). This could display "nullms" in the UI.
💡 Proposed fix
- <TableCell>{check.responseTimeMs}ms</TableCell>
+ <TableCell>{check.responseTimeMs != null ? `${check.responseTimeMs}ms` : "—"}</TableCell>
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
        </span>
      </TableCell>
      <TableCell>{check.responseTimeMs != null ? `${check.responseTimeMs}ms` : "—"}</TableCell>
      <TableCell>{new Date(check.createdAt).toLocaleString()}</TableCell>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/webapp/app/routes/agents.$agentId.status.tsx` around lines 176 - 179,
The TableCell currently renders check.responseTimeMs directly which can be null
and show "nullms"; update the rendering in the JSX where check.responseTimeMs is
used so it safely handles null/undefined (e.g., use a fallback like 0 or "—")
and append "ms" only for numeric values; locate the TableCell that references
check.responseTimeMs and change it to conditionally render (check.responseTimeMs
!= null ? `${check.responseTimeMs}ms` : "—") so null values display gracefully
while preserving existing behavior for numbers, and similarly ensure date
rendering (new Date(check.createdAt).toLocaleString()) remains unchanged.
  try {
    const provisionResponse = await fetch("http://localhost:3000/api/agents/provision", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ agentId: agentConfig.id }),
    });

    if (!provisionResponse.ok) {
      logger.error("Provisioning failed", {
        agentId: agentConfig.id,
        status: provisionResponse.status,
      });
    }
  } catch (error) {
    logger.error("Failed to call provisioning endpoint", { error });
  }
Hardcoded localhost:3000 will fail in non-local environments.
The provisioning endpoint URL is hardcoded to http://localhost:3000/api/agents/provision. This will fail in staging/production. Per coding guidelines, use env from env.server.ts for environment-specific URLs.
🔧 Proposed fix using APP_ORIGIN
+import { env } from "~/env.server";
+
// Trigger provisioning endpoint to spin up container
try {
- const provisionResponse = await fetch("http://localhost:3000/api/agents/provision", {
+ const provisionResponse = await fetch(`${env.APP_ORIGIN}/api/agents/provision`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ agentId: agentConfig.id }),
});
As per coding guidelines: "Access all environment variables through the env export of env.server.ts instead of directly accessing process.env in the Trigger.dev webapp". The relevant code snippets show APP_ORIGIN is already defined in env.server.ts with a localhost default.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
  try {
    const provisionResponse = await fetch(`${env.APP_ORIGIN}/api/agents/provision`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ agentId: agentConfig.id }),
    });

    if (!provisionResponse.ok) {
      logger.error("Provisioning failed", {
        agentId: agentConfig.id,
        status: provisionResponse.status,
      });
    }
  } catch (error) {
    logger.error("Failed to call provisioning endpoint", { error });
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/webapp/app/routes/agents.setup.tsx` around lines 70 - 85, The fetch in
routes/agents.setup.tsx is calling a hardcoded
"http://localhost:3000/api/agents/provision" which breaks non-local
deployments—replace the literal URL with the environment-backed origin: import
env (the env export from env.server.ts / env.server) and call
fetch(`${env.APP_ORIGIN}/api/agents/provision`, ...) so the provisioning
endpoint uses APP_ORIGIN; keep the same method/headers/body and error logging
around agentConfig.id.
    if (!provisionResponse.ok) {
      logger.error("Provisioning failed", {
        agentId: agentConfig.id,
        status: provisionResponse.status,
      });
    }
  } catch (error) {
    logger.error("Failed to call provisioning endpoint", { error });
  }
Provisioning failures are silently ignored.
If provisioning fails (non-OK response or network error), the agent is still created and the user is redirected to the status page. Consider surfacing the error to the user or preventing the redirect on failure.
💡 Suggested improvement
if (!provisionResponse.ok) {
logger.error("Provisioning failed", {
agentId: agentConfig.id,
status: provisionResponse.status,
});
+ // Optionally: delete the agent or mark it as failed
+ // For now, continue but the status page will show "provisioning"
}
} catch (error) {
logger.error("Failed to call provisioning endpoint", { error });
+ // Consider returning an error instead of redirecting
+ // return json({ error: "Agent created but provisioning failed" }, { status: 500 });
}
At minimum, consider updating the agent status to reflect the provisioning failure so users understand why their agent isn't working.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
    if (!provisionResponse.ok) {
      logger.error("Provisioning failed", {
        agentId: agentConfig.id,
        status: provisionResponse.status,
      });
      // Optionally: delete the agent or mark it as failed
      // For now, continue but the status page will show "provisioning"
    }
  } catch (error) {
    logger.error("Failed to call provisioning endpoint", { error });
    // Consider returning an error instead of redirecting
    // return json({ error: "Agent created but provisioning failed" }, { status: 500 });
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/webapp/app/routes/agents.setup.tsx` around lines 77 - 85, The
provisioning error is being logged but ignored, allowing agent creation and an
unconditional redirect; change the provisioning branch in the code that handles
provisionResponse and the catch block so failures prevent the redirect and
update the agent's status to reflect the failure. Specifically, when
provisionResponse.ok is false or an exception is caught, call your agent-status
update routine (e.g., updateAgentStatus or setAgentStatus for agentConfig.id) to
set a "provision_failed" (or similar) state, persist that change, and then
return/throw an error or a response that surfaces the failure to the caller
instead of executing the existing redirect to the status page; also enrich
logger.error calls with the response body/error details
(provisionResponse.status, body, and the caught error) so debugging info is
preserved.
  const { agentId } = (await request.json()) as { agentId: string };

  if (!agentId) {
    return json({ error: "agentId is required" }, { status: 400 });
  }
🛠️ Refactor suggestion | 🟠 Major
Use zod for request body validation.
The request body is parsed with a type assertion (as { agentId: string }) without runtime validation. Per coding guidelines, use zod for validation in apps/webapp.
♻️ Proposed fix using zod
+import { z } from "zod";
+
+const ProvisionSchema = z.object({
+ agentId: z.string().min(1, "agentId is required"),
+});
+
export const action = async ({ request }: ActionFunctionArgs) => {
if (request.method !== "POST") {
return json({ error: "Method not allowed" }, { status: 405 });
}
- const { agentId } = (await request.json()) as { agentId: string };
-
- if (!agentId) {
- return json({ error: "agentId is required" }, { status: 400 });
- }
+ const parseResult = ProvisionSchema.safeParse(await request.json());
+ if (!parseResult.success) {
+ return json({ error: parseResult.error.issues[0]?.message ?? "Invalid request" }, { status: 400 });
+ }
+ const { agentId } = parseResult.data;
As per coding guidelines: "Use zod for validation in packages/core and apps/webapp".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/webapp/app/routes/api.agents.provision.ts` around lines 15 - 19, Replace
the unchecked type assertion on (await request.json()) and manual agentId check
with a zod schema: define a Zod object schema that requires agentId (e.g., const
schema = z.object({ agentId: z.string().min(1) })), use schema.parse or await
schema.parseAsync on the request body, and on failure return json({ error: ...,
issues: schemaError }) with status 400; update the handler logic that currently
references agentId from the raw request/json() to use the validated value and
remove the existing if (!agentId) branch.
  try {
    // Get agent config
    const agentConfig = await prisma.agentConfig.findUnique({
      where: { id: agentId },
    });

    if (!agentConfig) {
      return json({ error: "Agent not found" }, { status: 404 });
    }
Missing authorization check allows any user to provision any agent.
The endpoint retrieves the agent by id without verifying that the requesting user owns it. An attacker who knows or guesses an agentId can provision another user's agent.
🔒 Proposed fix to add ownership verification
+import { requireUser } from "~/services/auth.server";
+
export const action = async ({ request }: ActionFunctionArgs) => {
if (request.method !== "POST") {
return json({ error: "Method not allowed" }, { status: 405 });
}
+ const user = await requireUser(request);
const { agentId } = (await request.json()) as { agentId: string };
if (!agentId) {
return json({ error: "agentId is required" }, { status: 400 });
}
try {
// Get agent config
const agentConfig = await prisma.agentConfig.findUnique({
- where: { id: agentId },
+ where: { id: agentId, userId: user.id },
});
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
import { requireUser } from "~/services/auth.server";

export const action = async ({ request }: ActionFunctionArgs) => {
  if (request.method !== "POST") {
    return json({ error: "Method not allowed" }, { status: 405 });
  }

  const user = await requireUser(request);
  const { agentId } = (await request.json()) as { agentId: string };

  if (!agentId) {
    return json({ error: "agentId is required" }, { status: 400 });
  }

  try {
    // Get agent config
    const agentConfig = await prisma.agentConfig.findUnique({
      where: { id: agentId, userId: user.id },
    });

    if (!agentConfig) {
      return json({ error: "Agent not found" }, { status: 404 });
    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/webapp/app/routes/api.agents.provision.ts` around lines 21 - 29, The
endpoint is missing an ownership check after fetching agentConfig; obtain the
current requester’s user id from the request/session (e.g., your existing auth
helper or session extraction used elsewhere), then verify agentConfig.userId (or
the appropriate ownership field on the AgentConfig record) matches that user id;
if it does not, return a 403 JSON response and halt provisioning—update the
logic around prisma.agentConfig.findUnique / agentConfig and use the same auth
helper you use elsewhere to get the current user id.
  }

  try {
    const event = (await request.json()) as any;
🛠️ Refactor suggestion | 🟠 Major
Add zod validation for Slack webhook payload.
The Slack event payload is parsed as any without validation. Use zod to validate the expected structure.
♻️ Proposed schema
import { z } from "zod";
const SlackUrlVerificationSchema = z.object({
type: z.literal("url_verification"),
challenge: z.string(),
});
const SlackMessageEventSchema = z.object({
type: z.literal("event_callback"),
team_id: z.string(),
event: z.object({
type: z.literal("message"),
channel: z.string(),
text: z.string().optional(),
user: z.string(),
ts: z.string(),
thread_ts: z.string().optional(),
}),
});
const SlackEventSchema = z.union([
SlackUrlVerificationSchema,
SlackMessageEventSchema,
]);As per coding guidelines: "Use zod for validation in packages/core and apps/webapp".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/webapp/app/routes/webhooks.slack.ts` at line 16, Replace the unsafe any
cast for the incoming Slack payload by adding a zod schema
(SlackUrlVerificationSchema, SlackMessageEventSchema) and a SlackEventSchema
union, import { z } from "zod", then parse/validate the body instead of using
const event = (await request.json()) as any; — use SlackEventSchema.parse(...)
or SlackEventSchema.safeParse(...) to validate the payload and handle validation
failures (return a 400 or log error) before proceeding with logic that expects
the validated event object.
    const agent = await prisma.agentConfig.findFirst({
      where: {
        slackWorkspaceId: workspaceId,
        messagingPlatform: "slack",
        status: "healthy",
      },
    });
No code path sets agent status to "healthy" - messages will never be routed.
The query filters for `status: "healthy"`, but reviewing the codebase shows:
- `agents.setup.tsx` creates agents with `status: "provisioning"`
- `api.agents.provision.ts` sets `status: "provisioning"`
- This file sets `status: "unhealthy"` on errors
No code ever transitions status to "healthy", so this query will never find any agents.
💡 Suggested fix
Either:
- Add logic to set `status: "healthy"` after a successful container deployment/health check
- Or temporarily query for `status: "provisioning"` as well:
const agent = await prisma.agentConfig.findFirst({
where: {
slackWorkspaceId: workspaceId,
messagingPlatform: "slack",
- status: "healthy",
+ status: { in: ["healthy", "provisioning"] },
},
});
The proper fix is to add a health check mechanism that transitions agents to "healthy" status.
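Such a transition could be sketched as below. This is a hedged illustration, not the PR's code: the `/health` endpoint, the `AgentRecord` shape, and the injected `probe` parameter are assumptions, and a caller would persist the result with something like `prisma.agentConfig.update`.

```typescript
// Sketch: probe the agent container and decide its next status.
// The /health endpoint and AgentRecord shape are assumptions; the caller
// would persist the result, e.g. prisma.agentConfig.update({ where, data }).
type AgentRecord = { id: string; containerPort: number };

async function resolveAgentStatus(
  agent: AgentRecord,
  host: string,
  // Injected so the probe can be stubbed in tests; defaults to an HTTP GET.
  probe: (url: string) => Promise<boolean> = async (url) => {
    const res = await fetch(url, { signal: AbortSignal.timeout(2000) });
    return res.ok;
  }
): Promise<"healthy" | "unhealthy"> {
  try {
    return (await probe(`http://${host}:${agent.containerPort}/health`))
      ? "healthy"
      : "unhealthy";
  } catch {
    return "unhealthy"; // connection refused, timeout, etc.
  }
}
```

A periodic job (or a post-provision hook) could call this per agent and write the returned status back, which would also populate the `AgentHealthCheck` table this PR introduces.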
🤖 Prompt for AI Agents
```
Verify each finding against the current code and only fix it if needed.
In `@apps/webapp/app/routes/webhooks.slack.ts` around lines 39 - 45, the query
using prisma.agentConfig.findFirst filters for status: "healthy" but no code
path ever sets that status, so messages will never be routed; update the logic
either by adding a post-deployment health transition that flips
agentConfig.status to "healthy" after a successful container
deployment/health check (implement a health-check routine and update the
record), or as a temporary measure expand the query in the route handler
(prisma.agentConfig.findFirst for slackWorkspaceId/messagingPlatform) to accept
both "provisioning" and "healthy" states so provisioning agents can receive
events until the proper health transition is implemented.
```
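A minimal sketch of the status-transition logic such a health check could use. The function name `nextAgentStatus` and the probe semantics are assumptions for illustration, not code from this PR:

```typescript
// Sketch only: nextAgentStatus and the probe semantics are assumed, not PR code.
type AgentStatus = "provisioning" | "healthy" | "unhealthy";

// Decide the status an agent should transition to after a container health probe.
function nextAgentStatus(current: AgentStatus, probeOk: boolean): AgentStatus {
  if (probeOk) return "healthy"; // a successful probe always promotes to healthy
  // A failed probe during provisioning may just mean the container is still
  // starting, so stay in "provisioning"; demote only previously reachable agents.
  return current === "provisioning" ? "provisioning" : "unhealthy";
}
```

A periodic job could run this against each `AgentConfig` row (probe the container, then persist the result with `prisma.agentConfig.update`), after which the webhook's `status: "healthy"` filter would start matching.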
```ts
// Route message to OpenClaw container (on VPS)
const containerUrl = `http://178.128.150.129:${agent.containerPort}`;
```
Hardcoded VPS IP address will require code changes to update infrastructure.
The container host IP 178.128.150.129 is hardcoded. This should be configurable via environment variable for flexibility and security (avoid exposing infrastructure details in code).
🔧 Proposed fix using environment variable
```diff
+import { env } from "~/env.server";
+
 // Route message to OpenClaw container (on VPS)
-  const containerUrl = `http://178.128.150.129:${agent.containerPort}`;
+  const containerUrl = `http://${env.AGENT_CONTAINER_HOST}:${agent.containerPort}`;
```

Add to `env.server.ts`:

```ts
AGENT_CONTAINER_HOST: z.string().default("localhost"),
```

As per coding guidelines: "Access all environment variables through the `env` export of `env.server.ts`".
🤖 Prompt for AI Agents
```
Verify each finding against the current code and only fix it if needed.
In `@apps/webapp/app/routes/webhooks.slack.ts` around lines 57 - 58, the
hardcoded host in the container URL should be replaced with a configurable
environment value: add AGENT_CONTAINER_HOST to env.server.ts (default
"localhost") and use env.AGENT_CONTAINER_HOST when building containerUrl in
routes/webhooks.slack.ts (keep agent.containerPort). Update any imports to use
the env export so containerUrl =
`http://${env.AGENT_CONTAINER_HOST}:${agent.containerPort}` rather than the
literal IP.
```
```ts
try {
  const containerResponse = await fetch(`${containerUrl}/api/message`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      text,
      userId,
      channel,
      metadata: {
        slackUserId: userId,
        slackChannel: channel,
        timestamp: new Date().toISOString(),
      },
    }),
  });
```
Add timeout to container fetch request.
The fetch to the container has no timeout. If the container hangs, this request will block indefinitely, potentially causing webhook timeouts with Slack (which expects responses within 3 seconds).
🔧 Proposed fix with AbortController timeout
```diff
+  const controller = new AbortController();
+  const timeoutId = setTimeout(() => controller.abort(), 2500); // Slack wants response in 3s
+
   try {
     const containerResponse = await fetch(`${containerUrl}/api/message`, {
       method: "POST",
       headers: { "Content-Type": "application/json" },
       body: JSON.stringify({
         text,
         userId,
         channel,
         metadata: {
           slackUserId: userId,
           slackChannel: channel,
           timestamp: new Date().toISOString(),
         },
       }),
+      signal: controller.signal,
     });
+    clearTimeout(timeoutId);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), 2500); // Slack wants response in 3s

  try {
    const containerResponse = await fetch(`${containerUrl}/api/message`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        text,
        userId,
        channel,
        metadata: {
          slackUserId: userId,
          slackChannel: channel,
          timestamp: new Date().toISOString(),
        },
      }),
      signal: controller.signal,
    });
    clearTimeout(timeoutId);
```
🤖 Prompt for AI Agents
```
Verify each finding against the current code and only fix it if needed.
In `@apps/webapp/app/routes/webhooks.slack.ts` around lines 60 - 74, the fetch
to the container (the await fetch call that sets containerResponse using
containerUrl) needs a timeout so it can't hang; create an AbortController
before the fetch, start a setTimeout to call controller.abort() after a safe
Slack deadline (e.g. ~2500-2800ms), pass controller.signal to fetch, and clear
the timeout after the fetch resolves; also ensure any abort/timeout errors are
handled in the existing try/catch so the handler returns promptly.
```
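The pattern above can also be factored into a small helper. This is a sketch only (the helper name `fetchWithTimeout` is not from the PR), assuming a Node 18+ global `fetch`:

```typescript
// Hypothetical helper (not in the PR): fetch with an AbortController-based
// timeout, so a hung container cannot stall the Slack webhook past its deadline.
async function fetchWithTimeout(
  url: string,
  init: RequestInit = {},
  timeoutMs = 2500 // stay under Slack's 3s acknowledgement window
): Promise<Response> {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timeoutId); // always clear, whether fetch resolved or threw
  }
}
```

The route handler would then call `fetchWithTimeout(containerUrl + "/api/message", { method: "POST", ... })` and catch the abort error alongside other network failures.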
```sql
-- CreateTable "AgentConfig"
CREATE TABLE "AgentConfig" (
    "id" TEXT NOT NULL PRIMARY KEY,
    "userId" TEXT NOT NULL,
    "name" TEXT NOT NULL,
    "model" TEXT NOT NULL,
    "messagingPlatform" TEXT NOT NULL,
    "tools" TEXT,
    "containerName" TEXT,
    "containerPort" INTEGER,
    "slackWorkspaceId" TEXT,
    "slackWebhookToken" TEXT,
    "status" TEXT NOT NULL DEFAULT 'provisioning',
    "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    "updatedAt" DATETIME NOT NULL,
    CONSTRAINT "AgentConfig_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Verify the database provider and check other migrations for the correct syntax pattern
echo "=== Checking datasource provider in schema ==="
grep -A2 "datasource db" internal-packages/database/prisma/schema.prisma
echo ""
echo "=== Checking other migrations for DATETIME vs TIMESTAMP usage ==="
rg -l "DATETIME" internal-packages/database/prisma/migrations/ || echo "No other DATETIME usages found"
rg -l "TIMESTAMP" internal-packages/database/prisma/migrations/ | head -5
```

Repository: triggerdotdev/trigger.dev
Length of output: 979
🏁 Script executed:
```shell
#!/bin/bash
# Check a sample of migrations using DATETIME and TIMESTAMP to understand the actual syntax
echo "=== Sample migration with DATETIME ==="
head -30 internal-packages/database/prisma/migrations/20250103152909_add_run_engine_v2/migration.sql
echo ""
echo "=== Check if any migrations use TIMESTAMP (PostgreSQL native) ==="
rg "TIMESTAMP" internal-packages/database/prisma/migrations/ -A 2 -B 2 | head -40
echo ""
echo "=== Sample recent migration ==="
ls -la internal-packages/database/prisma/migrations/ | tail -5
head -30 internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
```

Repository: triggerdotdev/trigger.dev
Length of output: 7544
Use TIMESTAMP(3) instead of DATETIME for timestamp columns.
PostgreSQL does not recognize DATETIME as a valid type. All other migrations in this codebase use TIMESTAMP(3) for datetime columns (e.g., see migrations 20240228114913, 20240415134559, 20240411135457). This migration will fail when applied to the PostgreSQL database.
Update `"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP` and `"updatedAt" DATETIME NOT NULL` to use `TIMESTAMP(3)` instead.
🤖 Prompt for AI Agents
```
Verify each finding against the current code and only fix it if needed.
In
`@internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql`
around lines 1 - 17, the migration uses PostgreSQL-incompatible DATETIME for
the AgentConfig table timestamps; change the "createdAt" and "updatedAt" column
types in the "AgentConfig" CREATE TABLE to TIMESTAMP(3) (e.g., replace
`"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP` and `"updatedAt"
DATETIME NOT NULL` with TIMESTAMP(3) equivalents), and ensure the createdAt
default uses CURRENT_TIMESTAMP(3) to match other migrations and the precision
used across the codebase.
```
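A corrected form of the two timestamp columns, following the `TIMESTAMP(3)` / `CURRENT_TIMESTAMP(3)` pattern this review says the repo's other migrations use (a sketch of the fix, not a verified migration):

```sql
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP(3),
"updatedAt" TIMESTAMP(3) NOT NULL,
```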
Summary
Implements a complete OpenClaw agent management system for the AirTrigger platform.
Changes
Architecture
Test Plan
🤖 Generated with Claude Code