
feat: OpenClaw agent integration for AirTrigger#3268

Closed
buildwithmoses wants to merge 1 commit into triggerdotdev:main from buildwithmoses:feat/openclaw-integration

Conversation

@buildwithmoses

Summary

Implements a complete OpenClaw agent management system for the AirTrigger platform.

  • Allows users to create and configure AI agents with model/platform/tools selection
  • Routes Slack messages to OpenClaw containers running on shared VPS
  • Tracks execution history and container health status
  • Multi-tenant architecture with per-user port assignment

Changes

  • agents.setup.tsx: New route for agent creation with form validation
  • agents.$agentId.status.tsx: Dashboard showing agent configuration, executions, and health
  • api.agents.provision.ts: Endpoint for container port assignment and setup
  • webhooks.slack.ts: Slack webhook handler that routes messages to agent containers
  • Prisma schema: Added AgentConfig, AgentExecution, AgentHealthCheck models
  • Migration: SQL migration for new database tables with indexes and foreign keys

Architecture

  • Multi-tenant per-user containers on shared VPS (178.128.150.129)
  • Port assignment starts at 8001, auto-increments per agent
  • Slack integration via webhooks for messaging
  • Database persistence for execution logs and health tracking

Test Plan

  • Agent creation via setup form
  • Agent status dashboard displays correct metadata
  • Slack messages route to correct container
  • Execution history logged correctly
  • Health checks tracked in database
  • Port assignment works correctly for multiple agents

🤖 Generated with Claude Code

Add complete agent management system for OpenClaw containers:

- agents.setup.tsx: Create agent form with model/platform/tools selection
- agents.$agentId.status.tsx: Agent dashboard with executions and health status
- api.agents.provision.ts: Endpoint for assigning container ports
- webhooks.slack.ts: Slack webhook handler routing messages to agent containers

Database schema adds:
- AgentConfig: stores agent configuration and container metadata
- AgentExecution: logs message/response pairs with token counts
- AgentHealthCheck: tracks agent container health status

Architecture:
- Multi-tenant per-user containers on shared VPS (178.128.150.129)
- Port assignment starting at 8001 with auto-increment
- Slack integration for messaging
- Database-backed execution history and monitoring

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>

changeset-bot bot commented Mar 25, 2026

⚠️ No Changeset found

Latest commit: b4efc42

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types



coderabbitai bot commented Mar 25, 2026

Walkthrough

This PR introduces a complete agent management and execution system with database persistence. It adds three new database models—AgentConfig (agent metadata and configuration), AgentExecution (per-execution logs), and AgentHealthCheck (agent health tracking)—and updates the User model to establish ownership relationships. Four new Remix routes provide: a UI for creating agents with configurable tools and Slack integration, a status dashboard displaying agent details with recent execution and health check history, an API endpoint to provision agents (allocating container ports and names), and a Slack webhook handler that routes incoming messages to provisioned containers, logs execution results, and updates agent health status based on container responses.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check (⚠️ Warning): The PR description covers the summary, changes, architecture, and test plan; however, it deviates from the required template by omitting the issue reference (Closes #), checklist items, testing steps, changelog format, and screenshots section. Resolution: add the required template sections (issue reference, completion checklist, detailed testing steps taken) and align the changelog format with repository standards.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title 'feat: OpenClaw agent integration for AirTrigger' directly summarizes the main change, implementing OpenClaw agent integration, which aligns with the core functionality added across multiple new routes and database models.





@devin-ai-integration bot left a comment


Devin Review found 10 potential issues.

View 1 additional finding in Devin Review.


Comment on lines +14 to +15
"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" DATETIME NOT NULL,

🔴 Migration uses SQLite DATETIME type instead of PostgreSQL TIMESTAMP(3)

The migration file uses DATETIME for timestamp columns (lines 14, 15, 28, 39), but the database is PostgreSQL (as confirmed by migration_lock.toml with provider = "postgresql" and the Prisma schema provider = "postgresql" at internal-packages/database/prisma/schema.prisma:2). DATETIME is not a valid PostgreSQL column type — PostgreSQL uses TIMESTAMP(3). All other migrations in the repo consistently use TIMESTAMP(3) for DateTime fields. This migration will fail when pnpm run db:migrate is executed.

Prompt for agents
In internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql, replace all occurrences of DATETIME with TIMESTAMP(3). Specifically:

Line 14: Change '"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP' to '"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP'
Line 15: Change '"updatedAt" DATETIME NOT NULL' to '"updatedAt" TIMESTAMP(3) NOT NULL'
Line 28: Change '"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP' to '"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP'
Line 39: Change '"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP' to '"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP'

Also fix the PRIMARY KEY syntax to match PostgreSQL convention. Change each 'TEXT NOT NULL PRIMARY KEY' to 'TEXT NOT NULL' and add a separate CONSTRAINT line, e.g. 'CONSTRAINT "AgentConfig_pkey" PRIMARY KEY ("id")'. Also change '"tools" TEXT' to '"tools" JSONB' since the schema defines tools as Json type which maps to JSONB in PostgreSQL.


"name" TEXT NOT NULL,
"model" TEXT NOT NULL,
"messagingPlatform" TEXT NOT NULL,
"tools" TEXT,

🔴 Migration defines tools column as TEXT but Prisma schema declares it as Json (should be JSONB)

The migration defines "tools" TEXT at line 8, but the Prisma schema at internal-packages/database/prisma/schema.prisma:2589 declares tools Json. In PostgreSQL, Prisma's Json type maps to JSONB, not TEXT. This mismatch means the actual database column type won't match what the Prisma client expects, which can cause runtime errors when Prisma tries to store/retrieve JSON data from a TEXT column. All other Json fields in the migration history use JSONB.

Suggested change
"tools" TEXT,
"tools" JSONB,


Comment on lines +10 to +15
export const action = async ({ request }: ActionFunctionArgs) => {
if (request.method !== "POST") {
return json({ error: "Method not allowed" }, { status: 405 });
}

const { agentId } = (await request.json()) as { agentId: string };

🔴 Provisioning API endpoint has no authentication, allowing unauthenticated access

The api.agents.provision.ts endpoint has no authentication check. Every other api.* route in the codebase uses authenticateApiRequest or authenticateApiRequestWithPersonalAccessToken from ~/services/apiAuth.server (or at minimum requireUser). Without authentication, any external caller can invoke this endpoint to provision containers, modify agent records, and allocate ports for any agent ID. This is a security vulnerability since the endpoint modifies database state and is intended to provision infrastructure.




// Trigger provisioning endpoint to spin up container
try {
const provisionResponse = await fetch("http://localhost:3000/api/agents/provision", {

🔴 Hardcoded localhost:3000 URL will fail — webapp defaults to port 3030

The provisioning fetch call at line 71 uses http://localhost:3000/api/agents/provision, but the webapp listens on port 3030 by default. The .env.example sets REMIX_APP_PORT=3030 and APP_ORIGIN=http://localhost:3030, and the server fallback chain in apps/webapp/server.ts:117 is REMIX_APP_PORT || PORT || 3000. With the standard .env setup, the server runs on 3030, so this fetch will fail to connect. Furthermore, a hardcoded localhost URL will never work in production. This should use the APP_ORIGIN env variable instead.

Prompt for agents
In apps/webapp/app/routes/agents.setup.tsx, line 71, replace the hardcoded URL 'http://localhost:3000/api/agents/provision' with a URL derived from the APP_ORIGIN environment variable. Import env from '~/env.server' and use it like:

const provisionResponse = await fetch(`${env.APP_ORIGIN}/api/agents/provision`, {

Note: Per the webapp CLAUDE.md rules, environment variables should be accessed via the env export from app/env.server.ts, never via process.env directly.


Comment on lines +32 to +39
const lastAgent = await prisma.agentConfig.findFirst({
where: {
containerPort: { not: null },
},
orderBy: { containerPort: "desc" },
});

const nextPort = (lastAgent?.containerPort || 8000) + 1;

🔴 Race condition in port allocation allows duplicate port assignments

The port allocation logic at lines 32-39 reads the highest containerPort from the database and adds 1 to get the next port, but this is not atomic. If two provisioning requests arrive concurrently, both will read the same lastAgent.containerPort value and compute the same nextPort, resulting in two agents assigned to the same port. There is no database-level unique constraint on containerPort either (checking internal-packages/database/prisma/schema.prisma:2592), so the duplicate assignment won't be caught by the database.
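The atomicity problem is easiest to see with a toy allocator. The sketch below is not the PR's code; it models reservation as a single serialized step, which is the role a `prisma.$transaction` plus a UNIQUE constraint on `containerPort` (retrying on a unique-violation error) would play in the real endpoint.

```typescript
// Illustrative in-memory allocator: because reservation happens in one
// synchronous step, two callers can never both read the same "highest
// port" and then both claim port + 1.
const BASE_PORT = 8000;

export class PortAllocator {
  private assigned = new Set<number>();

  // Claim the lowest free port above BASE_PORT and record it atomically.
  allocate(): number {
    let candidate = BASE_PORT + 1;
    while (this.assigned.has(candidate)) candidate++;
    this.assigned.add(candidate);
    return candidate;
  }

  // Return a port to the pool, e.g. when an agent is deleted.
  release(port: number): void {
    this.assigned.delete(port);
  }
}
```

With the database-backed version, the UNIQUE constraint is the safety net: even if two transactions race, one insert fails and can retry with the next port.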



}

// Route message to OpenClaw container (on VPS)
const containerUrl = `http://178.128.150.129:${agent.containerPort}`;

🔴 Hardcoded IP address for container routing makes deployment non-portable

The Slack webhook handler at line 58 hardcodes the VPS IP http://178.128.150.129:${agent.containerPort} for routing messages to agent containers. This IP address will not be correct in any environment other than the specific VPS it was written for (not in development, staging, CI, or other production deployments). This should be a configurable environment variable, consistent with how the codebase uses env.APP_ORIGIN and other configurable URLs.
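A minimal sketch of the suggested fix, assuming a new `AGENT_CONTAINER_HOST` variable is added to `env.server.ts` (the variable name is the reviewer's proposal, not an existing setting):

```typescript
// Build the container base URL from configuration rather than a literal
// IP. AGENT_CONTAINER_HOST is an assumed env variable; "localhost" is a
// development-friendly fallback.
export function containerUrlFor(host: string | undefined, port: number): string {
  return `http://${host ?? "localhost"}:${port}`;
}

// In the webhook handler this would replace the hardcoded address:
// const containerUrl = containerUrlFor(env.AGENT_CONTAINER_HOST, agent.containerPort);
```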



Comment on lines +16 to +21
const event = (await request.json()) as any;

// Handle Slack URL verification
if (event.type === "url_verification") {
return json({ challenge: event.challenge });
}

🚩 No Slack request signature verification on webhook endpoint

The webhooks.slack.ts endpoint does not verify the X-Slack-Signature header, which Slack sends with every request to prove authenticity. Without verification, any external actor can craft fake Slack events and send them to this endpoint, triggering agent executions, database writes, and outbound HTTP requests to containers. While this is a significant security concern, I reported the more fundamental issue (no auth on the provisioning endpoint) as a bug. This webhook endpoint warrants signature verification using the Slack signing secret before processing any events. See Slack's verification docs.
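Slack's documented scheme (HMAC-SHA256 over `v0:{timestamp}:{rawBody}`, compared against the `X-Slack-Signature` header) can be implemented with Node's crypto module. This is a hedged sketch; the helper name and how it is wired into the route are assumptions.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a Slack request signature per Slack's v0 signing scheme.
// signingSecret is the app's Slack signing secret (however it is stored).
export function verifySlackSignature(
  signingSecret: string,
  rawBody: string,
  timestamp: string,
  signature: string
): boolean {
  // Reject stale or malformed timestamps (> 5 minutes) to block replays.
  const age = Math.abs(Date.now() / 1000 - Number(timestamp));
  if (Number.isNaN(age) || age > 60 * 5) return false;

  const base = `v0:${timestamp}:${rawBody}`;
  const expected = `v0=${createHmac("sha256", signingSecret).update(base).digest("hex")}`;

  // Constant-time comparison; timingSafeEqual requires equal lengths.
  if (expected.length !== signature.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}
```

Note the handler must read the raw request body for hashing before any JSON parsing, and return 401 before the `url_verification` branch runs.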



messagingPlatform: data.messagingPlatform,
tools: tools,
slackWorkspaceId: data.slackWorkspaceId || null,
slackWebhookToken: data.slackWebhookToken || null,

🚩 Slack webhook token stored as plaintext, inconsistent with repo's token handling

The slackWebhookToken is stored as a plaintext string in the AgentConfig table (internal-packages/database/prisma/schema.prisma:2596). The repository's established pattern for sensitive tokens is to encrypt them — see PersonalAccessToken which uses encryptedToken Json and hashedToken String at internal-packages/database/prisma/schema.prisma:124-131. Storing webhook tokens in plaintext means any database breach exposes all Slack webhook URLs, allowing attackers to post messages to users' Slack workspaces.



Comment on lines +37 to +39
"responseTimeMs" INTEGER,
"errorMessage" TEXT,
"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,

🚩 Migration schema mismatches beyond type issues: missing NOT NULL and DEFAULT constraints

Beyond the DATETIME/TIMESTAMP and TEXT/JSONB issues reported as bugs, the migration has additional mismatches with the Prisma schema. For AgentHealthCheck, the migration defines "responseTimeMs" INTEGER (nullable, no default) but the schema declares responseTimeMs Int @default(0) (non-nullable with default). For AgentExecution, the migration has "executionTimeMs" INTEGER NOT NULL without a DEFAULT, but the schema has executionTimeMs Int @default(0). The entire migration appears hand-written rather than generated via pnpm run db:migrate:dev:create as the CONTRIBUTING.md and database CLAUDE.md instruct. The migration should be regenerated using Prisma's tooling.



Comment on lines +24 to +29
if (event.type === "event_callback" && event.event.type === "message") {
const slackEvent = event.event;
const workspaceId = event.team_id;
const channel = slackEvent.channel;
const text = slackEvent.text;
const userId = slackEvent.user;

🚩 Bot messages from Slack will cause infinite message loops

The Slack webhook handler at webhooks.slack.ts:24 processes all message events without filtering out bot messages. When the agent sends a response back to Slack via the webhook at line 93, Slack will send a new event_callback for that bot message. The handler will process it again, send another response, and so on — creating an infinite loop. Standard practice is to check slackEvent.bot_id or slackEvent.subtype === 'bot_message' and ignore those events. This isn't just a theoretical concern — it will happen on every single message exchange.
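The loop guard the review describes is only a few lines. A hedged sketch follows; the event type is a simplification of Slack's actual payload, and the helper name is illustrative:

```typescript
// Simplified shape of a Slack message event (subset of the real payload).
type SlackMessageEvent = {
  type: string;
  subtype?: string;
  bot_id?: string;
  text?: string;
  user?: string;
};

// Return true only for plain human-authored messages, so the bot's own
// replies (and edits, joins, deletes, etc.) never re-trigger the webhook.
export function isHumanMessage(event: SlackMessageEvent): boolean {
  if (event.type !== "message") return false;
  if (event.bot_id) return false;                 // posted by any bot
  if (event.subtype === "bot_message") return false;
  if (event.subtype) return false;                // edits, deletes, joins, …
  return true;
}
```

The handler would call this guard at the top of the `event_callback` branch and return early when it is false.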




@coderabbitai bot left a comment


Actionable comments posted: 12

🧹 Nitpick comments (4)
internal-packages/database/prisma/schema.prisma (2)

2582-2613: Consider adding a unique constraint on containerPort.

Without a unique constraint, the database allows duplicate port assignments. Adding @unique provides defense-in-depth against the race condition in the provisioning endpoint.

♻️ Proposed change
   // Container info
   containerName String?
-  containerPort Int?
+  containerPort Int?    @unique

Note: This will require a new migration file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `internal-packages/database/prisma/schema.prisma` around lines 2582-2613,
Add a unique constraint on the AgentConfig.containerPort field to prevent
duplicate port assignments at the database level: update the AgentConfig model
in schema.prisma to mark containerPort as unique (containerPort Int? @unique)
and generate a new migration so Prisma applies the constraint; reference the
AgentConfig model and the containerPort field when making the change.

2599-2599: Document the allowed status values or consider using a Prisma enum.

The status field uses a comment to indicate allowed values (provisioning, healthy, unhealthy). For better type safety and documentation, consider:

  1. Using a Prisma enum AgentStatus { PROVISIONING HEALTHY UNHEALTHY }, or
  2. Adding a more detailed doc comment with ///
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `internal-packages/database/prisma/schema.prisma` at line 2599, the schema's
status String field lacks strong typing and docs—replace or augment it by
introducing a Prisma enum (e.g., AgentStatus with values PROVISIONING, HEALTHY,
UNHEALTHY) and change the model's status field to use AgentStatus (e.g., status
AgentStatus @default(PROVISIONING)), or at minimum add a doc-comment using ///
above the status field describing allowed string values; update any related code
that relies on the string literal to use the new enum symbol (AgentStatus) or
adjust casts accordingly.
apps/webapp/app/routes/agents.$agentId.status.tsx (1)

2-3: Remove unused imports.

json from @remix-run/node and useLoaderData from @remix-run/react are imported but not used. The code uses typedjson and useTypedLoaderData instead.

🧹 Remove unused imports
 import type { LoaderFunctionArgs } from "@remix-run/node";
-import { json } from "@remix-run/node";
-import { useLoaderData } from "@remix-run/react";
 import { typedjson, useTypedLoaderData } from "remix-typedjson";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `apps/webapp/app/routes/agents.$agentId.status.tsx` around lines 2-3, remove
the unused imports json and useLoaderData: delete the import of json from
"@remix-run/node" and the import of useLoaderData from "@remix-run/react"
because the module uses typedjson and useTypedLoaderData instead; update the
import statement(s) that currently include json and useLoaderData so only the
needed symbols (e.g., typedjson, useTypedLoaderData) are imported and the unused
ones are removed.
internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql (1)

10-10: Consider adding a UNIQUE constraint on containerPort.

Without a unique constraint, the race condition in the provisioning endpoint could result in duplicate port assignments being persisted. Adding database-level enforcement provides defense in depth.

♻️ Add UNIQUE constraint
     "containerName" TEXT,
-    "containerPort" INTEGER,
+    "containerPort" INTEGER UNIQUE,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql`
at line 10, this migration currently declares the "containerPort" INTEGER column
without uniqueness; update the migration
(20260325122458_add_openclaw_agents/migration.sql) to enforce uniqueness by
either changing the column definition to "containerPort" INTEGER UNIQUE or
adding a table-level constraint like , UNIQUE("containerPort") so the DB will
reject duplicate port assignments; ensure the resulting migration is valid SQL
and re-run/generate the corresponding Prisma schema change if needed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `apps/webapp/app/routes/agents.$agentId.status.tsx`:
- Around line 176-179: The TableCell currently renders check.responseTimeMs
directly which can be null and show "nullms"; update the rendering in the JSX
where check.responseTimeMs is used so it safely handles null/undefined (e.g.,
use a fallback like 0 or "—") and append "ms" only for numeric values; locate
the TableCell that references check.responseTimeMs and change it to
conditionally render (check.responseTimeMs != null ? `${check.responseTimeMs}ms`
: "—") so null values display gracefully while preserving existing behavior for
numbers, and similarly ensure date rendering (new
Date(check.createdAt).toLocaleString()) remains unchanged.

In `apps/webapp/app/routes/agents.setup.tsx`:
- Around line 70-85: The fetch in routes/agents.setup.tsx is calling a hardcoded
"http://localhost:3000/api/agents/provision" which breaks non-local
deployments—replace the literal URL with the environment-backed origin: import
env (the env export from env.server.ts / env.server) and call
fetch(`${env.APP_ORIGIN}/api/agents/provision`, ...) so the provisioning
endpoint uses APP_ORIGIN; keep the same method/headers/body and error logging
around agentConfig.id.
- Around line 77-85: The provisioning error is being logged but ignored,
allowing agent creation and an unconditional redirect; change the provisioning
branch in the code that handles provisionResponse and the catch block so
failures prevent the redirect and update the agent's status to reflect the
failure. Specifically, when provisionResponse.ok is false or an exception is
caught, call your agent-status update routine (e.g., updateAgentStatus or
setAgentStatus for agentConfig.id) to set a "provision_failed" (or similar)
state, persist that change, and then return/throw an error or a response that
surfaces the failure to the caller instead of executing the existing redirect to
the status page; also enrich logger.error calls with the response body/error
details (provisionResponse.status, body, and the caught error) so debugging info
is preserved.

In `apps/webapp/app/routes/api.agents.provision.ts`:
- Around line 21-29: The endpoint is missing an ownership check after fetching
agentConfig; obtain the current requester’s user id from the request/session
(e.g., your existing auth helper or session extraction used elsewhere), then
verify agentConfig.userId (or the appropriate ownership field on the AgentConfig
record) matches that user id; if it does not, return a 403 JSON response and
halt provisioning—update the logic around prisma.agentConfig.findUnique /
agentConfig and use the same auth helper you use elsewhere to get the current
user id.
- Around line 15-19: Replace the unchecked type assertion on (await
request.json()) and manual agentId check with a zod schema: define a Zod object
schema that requires agentId (e.g., const schema = z.object({ agentId:
z.string().min(1) })), use schema.parse or await schema.parseAsync on the
request body, and on failure return json({ error: ..., issues: schemaError })
with status 400; update the handler logic that currently references agentId from
the raw request/json() to use the validated value and remove the existing if
(!agentId) branch.
- Around line 31-39: The port assignment using prisma.agentConfig.findFirst to
compute nextPort is racy: concurrent requests can compute the same nextPort (see
prisma.agentConfig.findFirst and the nextPort calculation) leading to duplicate
containerPort values; fix by performing the allocation inside an atomic
transaction or using a dedicated counter/sequence table (or DB sequence) that
you increment and read in a single transaction (e.g., use prisma.$transaction
with SELECT ... FOR UPDATE semantics or a separate PortAllocation model and
update it atomically), and additionally add a UNIQUE constraint on
agentConfig.containerPort to guard against any remaining races and retry on
unique-constraint failure.

In `apps/webapp/app/routes/webhooks.slack.ts`:
- Line 16: Replace the unsafe any cast for the incoming Slack payload by adding
a zod schema (SlackUrlVerificationSchema, SlackMessageEventSchema) and a
SlackEventSchema union, import { z } from "zod", then parse/validate the body
instead of using const event = (await request.json()) as any; — use
SlackEventSchema.parse(...) or SlackEventSchema.safeParse(...) to validate the
payload and handle validation failures (return a 400 or log error) before
proceeding with logic that expects the validated event object.
- Around line 39-45: The query using prisma.agentConfig.findFirst filters for
status: "healthy" but no codepath ever sets that status, so messages will never
be routed; update the logic either by adding a post-deployment health transition
that flips agentConfig.status to "healthy" after successful container
deployment/healthcheck (implement a health-check routine and update the record),
or as a temporary measure expand the query in the route handling
(prisma.agentConfig.findFirst for slackWorkspaceId/messagingPlatform) to accept
both "provisioning" and "healthy" states so provisioning agents can receive
events until the proper health transition is implemented.
- Around line 57-58: The hardcoded host in the container URL should be replaced
with a configurable environment value: add AGENT_CONTAINER_HOST to env.server.ts
(default "localhost") and use env.AGENT_CONTAINER_HOST when building
containerUrl in routes/webhooks.slack.ts (keep agent.containerPort). Update any
imports to use the env export so containerUrl =
`http://${env.AGENT_CONTAINER_HOST}:${agent.containerPort}` rather than the
literal IP.
- Around line 60-74: The fetch to the container (the await fetch call that sets
containerResponse using containerUrl) needs a timeout so it can't hang; create
an AbortController before the fetch, start a setTimeout to call
controller.abort() after a safe Slack deadline (e.g. ~2500–2800ms), pass
controller.signal to fetch, and clear the timeout after the fetch resolves; also
ensure any abort/timeout errors are handled in the existing try/catch so the
handler returns promptly.
- Around line 15-21: Add Slack signature verification before parsing/handling
the payload: read the x-slack-signature and x-slack-request-timestamp headers
from the incoming request, reconstruct the base string
"v0:{timestamp}:{rawBody}", compute the HMAC-SHA256 using the workspace signing
secret (from the workspace record or config) and compare against the header
using a constant-time comparison; if missing/invalid or timestamp is too old,
return 401 and do not proceed to the existing event handling (including the
url_verification branch that currently runs after const event = (await
request.json())). Use the existing request, event and workspace lookup logic to
obtain the correct signing secret and ensure signature validation runs before
any processing.

In `internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql`:
- Around line 1-17: The migration uses PostgreSQL-incompatible DATETIME for the
AgentConfig table timestamps; change the "createdAt" and "updatedAt" column
types in the "AgentConfig" CREATE TABLE to TIMESTAMP(3) (e.g., replace
`"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP` and `"updatedAt"
DATETIME NOT NULL` with TIMESTAMP(3) equivalents), and ensure the createdAt
default uses CURRENT_TIMESTAMP(3) to match other migrations and precision used
across the codebase.
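One of the inline comments above asks for an AbortController timeout around the container fetch. A generic sketch of that pattern follows; the helper name and the 2500 ms default are illustrative (motivated by Slack's roughly 3-second acknowledgement deadline), not values from the PR:

```typescript
// Run async work with an AbortSignal that fires after timeoutMs, so a
// hung container can't stall the Slack webhook past Slack's deadline.
export async function withTimeout<T>(
  work: (signal: AbortSignal) => Promise<T>,
  timeoutMs = 2500
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await work(controller.signal);
  } finally {
    // Always clear the timer, whether the work resolved or aborted.
    clearTimeout(timer);
  }
}

// Illustrative use in the webhook handler:
// const res = await withTimeout(
//   (signal) => fetch(containerUrl, { method: "POST", body, signal }),
//   2500
// );
```

Abort and network errors should then be caught by the handler's existing try/catch so it still returns promptly to Slack.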

---

Nitpick comments:
In `apps/webapp/app/routes/agents.$agentId.status.tsx`:
- Around line 2-3: Remove the unused imports json and useLoaderData: delete the
import of json from "@remix-run/node" and the import of useLoaderData from
"@remix-run/react" because the module uses typedjson and useTypedLoaderData
instead; update the import statement(s) that currently include json and
useLoaderData so only the needed symbols (e.g., typedjson, useTypedLoaderData)
are imported and the unused ones are removed.

In `internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql`:
- Line 10: This migration currently declares the "containerPort" INTEGER column
without uniqueness; update the migration
(20260325122458_add_openclaw_agents/migration.sql) to enforce uniqueness by
either changing the column definition to "containerPort" INTEGER UNIQUE or
adding a table-level constraint like , UNIQUE("containerPort") so the DB will
reject duplicate port assignments; ensure the resulting migration is valid SQL
and re-run/generate the corresponding Prisma schema change if needed.

In `internal-packages/database/prisma/schema.prisma`:
- Around line 2582-2613: Add a unique constraint on the
AgentConfig.containerPort field to prevent duplicate port assignments at the
database level: update the AgentConfig model in schema.prisma to mark
containerPort as unique (containerPort Int? @unique) and generate a new
migration so Prisma applies the constraint; reference the AgentConfig model and
the containerPort field when making the change.
- Line 2599: The schema's status String field lacks strong typing and
docs—replace or augment it by introducing a Prisma enum (e.g., AgentStatus with
values PROVISIONING, HEALTHY, UNHEALTHY) and change the model's status field to
use AgentStatus (e.g., status AgentStatus @default(PROVISIONING)), or at minimum
add a doc-comment using /// above the status field describing allowed string
values; update any related code that relies on the string literal to use the new
enum symbol (AgentStatus) or adjust casts accordingly.
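
A sketch of how the AgentConfig model might look with both changes applied — the field list is abridged and illustrative; only containerPort and status come from the review findings:

```prisma
enum AgentStatus {
  PROVISIONING
  HEALTHY
  UNHEALTHY
}

model AgentConfig {
  id            String      @id @default(cuid())
  containerPort Int?        @unique
  /// Current lifecycle state of the agent container.
  status        AgentStatus @default(PROVISIONING)
  // ...remaining fields from the existing model...
}
```

Marking containerPort `@unique` in the schema and regenerating the migration keeps Prisma's view of the database in sync with the SQL-level constraint.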

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 9caae087-2cf6-4b76-91fd-6fb6e43cabee

📥 Commits

Reviewing files that changed from the base of the PR and between c00dae0 and b4efc42.

📒 Files selected for processing (6)
  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/agents.setup.tsx
  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/webhooks.slack.ts
  • internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
  • internal-packages/database/prisma/schema.prisma
📜 Review details
🧰 Additional context used
📓 Path-based instructions (11)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{ts,tsx}: Use types over interfaces for TypeScript
Avoid using enums; prefer string unions or const objects instead

**/*.{ts,tsx}: For apps and internal packages (apps/*, internal-packages/*), use pnpm run typecheck --filter <package> for verification, never use build as it proves almost nothing about correctness
Use testcontainers helpers (redisTest, postgresTest, containerTest from @internal/testcontainers) for integration tests with Redis and PostgreSQL instead of mocking
When writing Trigger.dev tasks, always import from @trigger.dev/sdk - never use @trigger.dev/sdk/v3 or deprecated client.defineJob

Files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/agents.setup.tsx
  • apps/webapp/app/routes/webhooks.slack.ts
{packages/core,apps/webapp}/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use zod for validation in packages/core and apps/webapp

Files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/agents.setup.tsx
  • apps/webapp/app/routes/webhooks.slack.ts
**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use function declarations instead of default exports

**/*.{ts,tsx,js,jsx}: Use pnpm for package management in this monorepo (version 10.23.0) with Turborepo for orchestration - run commands from root with pnpm run
Add crumbs as you write code for debug tracing using `// @crumbs` comments or `// #region @crumbs` blocks - they stay on the branch throughout development and are stripped via agentcrumbs strip before merge

Files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/agents.setup.tsx
  • apps/webapp/app/routes/webhooks.slack.ts
apps/webapp/app/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)

Access all environment variables through the env export of env.server.ts instead of directly accessing process.env in the Trigger.dev webapp

Files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/agents.setup.tsx
  • apps/webapp/app/routes/webhooks.slack.ts
apps/webapp/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)

apps/webapp/**/*.{ts,tsx}: When importing from @trigger.dev/core in the webapp, use subpath exports from the package.json instead of importing from the root path
Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp

Files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/agents.setup.tsx
  • apps/webapp/app/routes/webhooks.slack.ts
**/*.{js,ts,jsx,tsx,json,md,yaml,yml}

📄 CodeRabbit inference engine (AGENTS.md)

Format code using Prettier before committing

Files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/agents.setup.tsx
  • apps/webapp/app/routes/webhooks.slack.ts
apps/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (CLAUDE.md)

When modifying only server components (apps/webapp/, apps/supervisor/, etc.) with no package changes, add a .server-changes/ file instead of a changeset

Files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/agents.setup.tsx
  • apps/webapp/app/routes/webhooks.slack.ts
apps/webapp/app/**/*.{ts,tsx,server.ts}

📄 CodeRabbit inference engine (apps/webapp/CLAUDE.md)

Access environment variables via env export from app/env.server.ts. Never use process.env directly

Files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/agents.setup.tsx
  • apps/webapp/app/routes/webhooks.slack.ts
internal-packages/database/**/prisma/migrations/*/*.sql

📄 CodeRabbit inference engine (internal-packages/database/CLAUDE.md)

internal-packages/database/**/prisma/migrations/*/*.sql: Clean up generated Prisma migrations by removing extraneous lines for junction tables (_BackgroundWorkerToBackgroundWorkerFile, _BackgroundWorkerToTaskQueue, _TaskRunToTaskRunTag, _WaitpointRunConnections, _completedWaitpoints) and indexes (SecretStore_key_idx, various TaskRun indexes) unless explicitly added
When adding indexes to existing tables, use CREATE INDEX CONCURRENTLY IF NOT EXISTS to avoid table locks in production, and place each concurrent index in its own separate migration file
Indexes on newly created tables can use CREATE INDEX without CONCURRENTLY and can be combined in the same migration file as the CREATE TABLE statement
When adding an index on a new column in an existing table, use two separate migrations: first for ALTER TABLE ... ADD COLUMN IF NOT EXISTS ..., then for CREATE INDEX CONCURRENTLY IF NOT EXISTS ... in its own file

Files:

  • internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/otel-metrics.mdc)

**/*.ts: When creating or editing OTEL metrics (counters, histograms, gauges), ensure metric attributes have low cardinality by using only enums, booleans, bounded error codes, or bounded shard IDs
Do not use high-cardinality attributes in OTEL metrics such as UUIDs/IDs (envId, userId, runId, projectId, organizationId), unbounded integers (itemCount, batchSize, retryCount), timestamps (createdAt, startTime), or free-form strings (errorMessage, taskName, queueName)
When exporting OTEL metrics via OTLP to Prometheus, be aware that the exporter automatically adds unit suffixes to metric names (e.g., 'my_duration_ms' becomes 'my_duration_ms_milliseconds', 'my_counter' becomes 'my_counter_total'). Account for these transformations when writing Grafana dashboards or Prometheus queries

Files:

  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/webhooks.slack.ts
apps/webapp/app/routes/**/*.ts

📄 CodeRabbit inference engine (apps/webapp/CLAUDE.md)

Use Remix flat-file route convention with dot-separated segments where api.v1.tasks.$taskId.trigger.ts maps to /api/v1/tasks/:taskId/trigger

Files:

  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/webhooks.slack.ts
🧠 Learnings (18)
📚 Learning: 2025-11-27T16:26:58.661Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .cursor/rules/webapp.mdc:0-0
Timestamp: 2025-11-27T16:26:58.661Z
Learning: Applies to apps/webapp/**/*.{ts,tsx} : Follow the Remix 2.1.0 and Express server conventions when updating the main trigger.dev webapp

Applied to files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/agents.setup.tsx
📚 Learning: 2026-03-23T06:24:25.029Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: apps/webapp/CLAUDE.md:0-0
Timestamp: 2026-03-23T06:24:25.029Z
Learning: Applies to apps/webapp/app/routes/**/*.ts : Use Remix flat-file route convention with dot-separated segments where `api.v1.tasks.$taskId.trigger.ts` maps to `/api/v1/tasks/:taskId/trigger`

Applied to files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/agents.setup.tsx
📚 Learning: 2026-03-13T13:45:39.411Z
Learnt from: ericallam
Repo: triggerdotdev/trigger.dev PR: 3213
File: apps/webapp/app/routes/admin.llm-models.missing.$model.tsx:19-21
Timestamp: 2026-03-13T13:45:39.411Z
Learning: In `apps/webapp/app/routes/admin.llm-models.missing.$model.tsx`, the `decodeURIComponent(params.model ?? "")` call is intentionally unguarded. Remix route params are decoded at the routing layer before reaching the loader, so malformed percent-encoding is rejected upstream. The page is also admin-only, so the risk is minimal and no try-catch is warranted.

Applied to files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/agents.setup.tsx
📚 Learning: 2025-11-27T16:26:37.432Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: The webapp at apps/webapp is a Remix 2.1 application using Node.js v20

Applied to files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/agents.setup.tsx
📚 Learning: 2026-02-03T18:27:40.429Z
Learnt from: 0ski
Repo: triggerdotdev/trigger.dev PR: 2994
File: apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.environment-variables/route.tsx:553-555
Timestamp: 2026-02-03T18:27:40.429Z
Learning: In apps/webapp/app/routes/_app.orgs.$organizationSlug.projects.$projectParam.env.$envParam.environment-variables/route.tsx, the menu buttons (e.g., Edit with PencilSquareIcon) in the TableCellMenu are intentionally icon-only with no text labels as a compact UI pattern. This is a deliberate design choice for this route; preserve the icon-only behavior for consistency in this file.

Applied to files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/agents.setup.tsx
📚 Learning: 2026-02-11T16:37:32.429Z
Learnt from: matt-aitken
Repo: triggerdotdev/trigger.dev PR: 3019
File: apps/webapp/app/components/primitives/charts/Card.tsx:26-30
Timestamp: 2026-02-11T16:37:32.429Z
Learning: In projects using react-grid-layout, avoid relying on drag-handle class to imply draggability. Ensure drag-handle elements only affect dragging when the parent grid item is configured draggable in the layout; conditionally apply cursor styles based on the draggable prop. This improves correctness and accessibility.

Applied to files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/agents.setup.tsx
📚 Learning: 2026-03-22T13:26:12.060Z
Learnt from: ericallam
Repo: triggerdotdev/trigger.dev PR: 3244
File: apps/webapp/app/components/code/TextEditor.tsx:81-86
Timestamp: 2026-03-22T13:26:12.060Z
Learning: In the triggerdotdev/trigger.dev codebase, do not flag `navigator.clipboard.writeText(...)` calls for `missing-await`/`unhandled-promise` issues. These clipboard writes are intentionally invoked without `await` and without `catch` handlers across the project; keep that behavior consistent when reviewing TypeScript/TSX files (e.g., usages like in `apps/webapp/app/components/code/TextEditor.tsx`).

Applied to files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/agents.setup.tsx
  • apps/webapp/app/routes/webhooks.slack.ts
📚 Learning: 2026-03-22T19:24:14.403Z
Learnt from: matt-aitken
Repo: triggerdotdev/trigger.dev PR: 3187
File: apps/webapp/app/v3/services/alerts/deliverErrorGroupAlert.server.ts:200-204
Timestamp: 2026-03-22T19:24:14.403Z
Learning: In the triggerdotdev/trigger.dev codebase, webhook URLs are not expected to contain embedded credentials/secrets (e.g., fields like `ProjectAlertWebhookProperties` should only hold credential-free webhook endpoints). During code review, if you see logging or inclusion of raw webhook URLs in error messages, do not automatically treat it as a credential-leak/secrets-in-logs issue by default—first verify the URL does not contain embedded credentials (for example, no username/password in the URL, no obvious secret/token query params or fragments). If the URL is credential-free per this project’s conventions, allow the logging.

Applied to files:

  • apps/webapp/app/routes/agents.$agentId.status.tsx
  • apps/webapp/app/routes/api.agents.provision.ts
  • apps/webapp/app/routes/agents.setup.tsx
  • apps/webapp/app/routes/webhooks.slack.ts
📚 Learning: 2026-03-02T12:43:17.177Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: internal-packages/database/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:17.177Z
Learning: Applies to internal-packages/database/**/prisma/migrations/*/*.sql : Clean up generated Prisma migrations by removing extraneous lines for junction tables (`_BackgroundWorkerToBackgroundWorkerFile`, `_BackgroundWorkerToTaskQueue`, `_TaskRunToTaskRunTag`, `_WaitpointRunConnections`, `_completedWaitpoints`) and indexes (`SecretStore_key_idx`, various `TaskRun` indexes) unless explicitly added

Applied to files:

  • internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2026-03-22T13:49:20.068Z
Learnt from: ericallam
Repo: triggerdotdev/trigger.dev PR: 3244
File: internal-packages/database/prisma/migrations/20260318114244_add_prompt_friendly_id/migration.sql:5-5
Timestamp: 2026-03-22T13:49:20.068Z
Learning: For Prisma migration SQL files under `internal-packages/database/prisma/migrations/`, it is acceptable to create indexes with `CREATE INDEX` / `CREATE UNIQUE INDEX` (i.e., without `CONCURRENTLY`) when the parent table is introduced in the same PR and has no existing production rows yet. Only require `CREATE INDEX CONCURRENTLY` (or otherwise account for existing production data/locks) when the table already exists in production with data.

Applied to files:

  • internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2026-03-02T12:43:17.177Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: internal-packages/database/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:17.177Z
Learning: Applies to internal-packages/database/**/prisma/migrations/*/*.sql : When adding an index on a new column in an existing table, use two separate migrations: first for `ALTER TABLE ... ADD COLUMN IF NOT EXISTS ...`, then for `CREATE INDEX CONCURRENTLY IF NOT EXISTS ...` in its own file

Applied to files:

  • internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2026-03-02T12:43:17.177Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: internal-packages/database/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:17.177Z
Learning: Applies to internal-packages/database/**/prisma/migrations/*/*.sql : Indexes on newly created tables can use `CREATE INDEX` without CONCURRENTLY and can be combined in the same migration file as the `CREATE TABLE` statement

Applied to files:

  • internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2026-03-02T12:43:17.177Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: internal-packages/database/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:17.177Z
Learning: Applies to internal-packages/database/**/prisma/migrations/*/*.sql : When adding indexes to existing tables, use `CREATE INDEX CONCURRENTLY IF NOT EXISTS` to avoid table locks in production, and place each concurrent index in its own separate migration file

Applied to files:

  • internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2026-02-03T18:48:31.790Z
Learnt from: 0ski
Repo: triggerdotdev/trigger.dev PR: 2994
File: internal-packages/database/prisma/migrations/20260129162810_add_integration_deployment/migration.sql:14-18
Timestamp: 2026-02-03T18:48:31.790Z
Learning: For Prisma migrations targeting PostgreSQL: - When adding indexes to existing tables, create the index in a separate migration file and include CONCURRENTLY to avoid locking the table. - For indexes on newly created tables (in CREATE TABLE statements), you can create the index in the same migration file without CONCURRENTLY. This reduces rollout complexity for new objects while protecting uptime for existing structures.

Applied to files:

  • internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2026-03-02T12:43:17.177Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: internal-packages/database/CLAUDE.md:0-0
Timestamp: 2026-03-02T12:43:17.177Z
Learning: Edit Prisma schema at `prisma/schema.prisma` and generate migrations using `pnpm run db:migrate:dev:create --name "descriptive_name"` from the `internal-packages/database` directory

Applied to files:

  • internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2025-11-27T16:26:37.432Z
Learnt from: CR
Repo: triggerdotdev/trigger.dev PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-27T16:26:37.432Z
Learning: Applies to internal-packages/database/**/*.{ts,tsx} : Use Prisma for database interactions in internal-packages/database with PostgreSQL

Applied to files:

  • internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql
📚 Learning: 2026-02-04T16:34:48.876Z
Learnt from: 0ski
Repo: triggerdotdev/trigger.dev PR: 2994
File: apps/webapp/app/routes/vercel.connect.tsx:13-27
Timestamp: 2026-02-04T16:34:48.876Z
Learning: In apps/webapp/app/routes/vercel.connect.tsx, configurationId may be absent for "dashboard" flows but must be present for "marketplace" flows. Enforce this with a Zod superRefine and pass installationId to repository methods only when configurationId is defined (omit the field otherwise).

Applied to files:

  • apps/webapp/app/routes/agents.setup.tsx
📚 Learning: 2026-03-13T13:42:25.092Z
Learnt from: ericallam
Repo: triggerdotdev/trigger.dev PR: 3213
File: apps/webapp/app/routes/admin.llm-models.new.tsx:65-91
Timestamp: 2026-03-13T13:42:25.092Z
Learning: In `apps/webapp/app/routes/admin.llm-models.new.tsx`, sequential Prisma writes for model/tier creation are intentionally not wrapped in a transaction. The form is admin-only with low concurrency risk, and the blast radius is considered minimal for admin tooling.

Applied to files:

  • apps/webapp/app/routes/agents.setup.tsx
🔇 Additional comments (6)
apps/webapp/app/routes/agents.$agentId.status.tsx (1)

19-49: LGTM on loader with proper ownership verification.

The loader correctly authenticates the user, validates the agentId param, and verifies ownership before returning data. The inclusion of related executions and healthChecks with appropriate ordering and limits is well-structured.

apps/webapp/app/routes/agents.setup.tsx (1)

14-21: LGTM on Zod validation schema.

Good use of zod for form validation with appropriate constraints (required fields, enum values for model and platform).

internal-packages/database/prisma/schema.prisma (3)

68-68: LGTM on User relation.

The agentConfigs relation is correctly added to the User model, enabling the one-to-many relationship.


2615-2649: LGTM on AgentExecution and AgentHealthCheck models.

The models are well-structured with appropriate indexes for common query patterns (by agentId and createdAt descending). Cascade delete on the agent relation is correct for cleanup.


2595-2597: ⚠️ Potential issue | 🟠 Major

Sensitive credentials stored as plain text.

slackWebhookToken contains a Slack bot token which is a sensitive credential. Consider using the existing SecretReference pattern (used by PersonalAccessToken, OrganizationIntegration) to encrypt or externally store this value.

⛔ Skipped due to learnings
Learnt from: matt-aitken
Repo: triggerdotdev/trigger.dev PR: 3187
File: apps/webapp/app/v3/services/alerts/deliverErrorGroupAlert.server.ts:200-204
Timestamp: 2026-03-22T19:24:18.069Z
Learning: In the triggerdotdev/trigger.dev codebase, webhook URLs (e.g., in `ProjectAlertWebhookProperties`) do not contain embedded credentials or secrets. Do not flag logging or including raw webhook URLs in error messages as a credential-leakage risk in this project (e.g., in `apps/webapp/app/v3/services/alerts/deliverErrorGroupAlert.server.ts`).
internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql (1)

12-12: ⚠️ Potential issue | 🟠 Major

Storing slackWebhookToken as plain text may pose a security risk.

Slack webhook tokens are sensitive credentials. Consider encrypting this field or storing a reference to a secret store (similar to how SecretReference is used elsewhere in the codebase for sensitive values like PersonalAccessToken.encryptedToken).

⛔ Skipped due to learnings
Learnt from: matt-aitken
Repo: triggerdotdev/trigger.dev PR: 3187
File: apps/webapp/app/v3/services/alerts/deliverErrorGroupAlert.server.ts:200-204
Timestamp: 2026-03-22T19:24:18.069Z
Learning: In the triggerdotdev/trigger.dev codebase, webhook URLs (e.g., in `ProjectAlertWebhookProperties`) do not contain embedded credentials or secrets. Do not flag logging or including raw webhook URLs in error messages as a credential-leakage risk in this project (e.g., in `apps/webapp/app/v3/services/alerts/deliverErrorGroupAlert.server.ts`).

Comment on lines +176 to +179
```tsx
                    </span>
                  </TableCell>
                  <TableCell>{check.responseTimeMs}ms</TableCell>
                  <TableCell>{new Date(check.createdAt).toLocaleString()}</TableCell>
```

⚠️ Potential issue | 🟡 Minor

Handle null responseTimeMs gracefully.

responseTimeMs can be null (per the schema the default is 0, but the provisioning endpoint creates records without setting it). This could display "nullms" in the UI.

💡 Proposed fix

```diff
-                    <TableCell>{check.responseTimeMs}ms</TableCell>
+                    <TableCell>{check.responseTimeMs != null ? `${check.responseTimeMs}ms` : "—"}</TableCell>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `apps/webapp/app/routes/agents.$agentId.status.tsx` around lines 176-179:
the TableCell currently renders check.responseTimeMs directly, which can be null
and show "nullms"; update the JSX so it safely handles null/undefined and
appends "ms" only for numeric values. Locate the TableCell that references
check.responseTimeMs and change it to conditionally render (check.responseTimeMs
!= null ? `${check.responseTimeMs}ms` : "—") so null values display gracefully
while preserving existing behavior for numbers; the date rendering
(new Date(check.createdAt).toLocaleString()) remains unchanged.

Comment on lines +70 to +85
```ts
try {
  const provisionResponse = await fetch("http://localhost:3000/api/agents/provision", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ agentId: agentConfig.id }),
  });

  if (!provisionResponse.ok) {
    logger.error("Provisioning failed", {
      agentId: agentConfig.id,
      status: provisionResponse.status,
    });
  }
} catch (error) {
  logger.error("Failed to call provisioning endpoint", { error });
}
```

⚠️ Potential issue | 🔴 Critical

Hardcoded localhost:3000 will fail in non-local environments.

The provisioning endpoint URL is hardcoded to http://localhost:3000/api/agents/provision. This will fail in staging/production. Per coding guidelines, use env from env.server.ts for environment-specific URLs.

🔧 Proposed fix using APP_ORIGIN

```diff
+import { env } from "~/env.server";
+
     // Trigger provisioning endpoint to spin up container
     try {
-      const provisionResponse = await fetch("http://localhost:3000/api/agents/provision", {
+      const provisionResponse = await fetch(`${env.APP_ORIGIN}/api/agents/provision`, {
         method: "POST",
         headers: { "Content-Type": "application/json" },
         body: JSON.stringify({ agentId: agentConfig.id }),
       });
```

As per coding guidelines: "Access all environment variables through the env export of env.server.ts instead of directly accessing process.env in the Trigger.dev webapp". The relevant code snippets show APP_ORIGIN is already defined in env.server.ts with a localhost default.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `apps/webapp/app/routes/agents.setup.tsx` around lines 70-85: the fetch
calls a hardcoded "http://localhost:3000/api/agents/provision", which breaks
non-local deployments. Replace the literal URL with the environment-backed
origin: import env (the env export from env.server.ts) and call
fetch(`${env.APP_ORIGIN}/api/agents/provision`, ...) so the provisioning
endpoint uses APP_ORIGIN; keep the same method/headers/body and error logging
around agentConfig.id.

Comment on lines +77 to +85
```ts
  if (!provisionResponse.ok) {
    logger.error("Provisioning failed", {
      agentId: agentConfig.id,
      status: provisionResponse.status,
    });
  }
} catch (error) {
  logger.error("Failed to call provisioning endpoint", { error });
}
```

⚠️ Potential issue | 🟠 Major

Provisioning failures are silently ignored.

If provisioning fails (non-OK response or network error), the agent is still created and the user is redirected to the status page. Consider surfacing the error to the user or preventing the redirect on failure.

💡 Suggested improvement

```diff
       if (!provisionResponse.ok) {
         logger.error("Provisioning failed", {
           agentId: agentConfig.id,
           status: provisionResponse.status,
         });
+        // Optionally: delete the agent or mark it as failed
+        // For now, continue but the status page will show "provisioning"
       }
     } catch (error) {
       logger.error("Failed to call provisioning endpoint", { error });
+      // Consider returning an error instead of redirecting
+      // return json({ error: "Agent created but provisioning failed" }, { status: 500 });
     }
```

At minimum, consider updating the agent status to reflect the provisioning failure so users understand why their agent isn't working.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `apps/webapp/app/routes/agents.setup.tsx` around lines 77-85: the
provisioning error is logged but otherwise ignored, allowing agent creation and
an unconditional redirect. Change the branch that handles provisionResponse and
the catch block so failures prevent the redirect and update the agent's status
to reflect the failure. Specifically, when provisionResponse.ok is false or an
exception is caught, call your agent-status update routine (e.g.,
updateAgentStatus or setAgentStatus for agentConfig.id) to set a
"provision_failed" (or similar) state, persist that change, and then
return/throw an error or a response that surfaces the failure to the caller
instead of executing the existing redirect to the status page; also enrich the
logger.error calls with the response body/error details
(provisionResponse.status, body, and the caught error) so debugging info is
preserved.

Comment on lines +15 to +19
```ts
  const { agentId } = (await request.json()) as { agentId: string };

  if (!agentId) {
    return json({ error: "agentId is required" }, { status: 400 });
  }
```

🛠️ Refactor suggestion | 🟠 Major

Use zod for request body validation.

The request body is parsed with a type assertion (as { agentId: string }) without runtime validation. Per coding guidelines, use zod for validation in apps/webapp.

♻️ Proposed fix using zod

```diff
+import { z } from "zod";
+
+const ProvisionSchema = z.object({
+  agentId: z.string().min(1, "agentId is required"),
+});
+
 export const action = async ({ request }: ActionFunctionArgs) => {
   if (request.method !== "POST") {
     return json({ error: "Method not allowed" }, { status: 405 });
   }

-  const { agentId } = (await request.json()) as { agentId: string };
-
-  if (!agentId) {
-    return json({ error: "agentId is required" }, { status: 400 });
-  }
+  const parseResult = ProvisionSchema.safeParse(await request.json());
+  if (!parseResult.success) {
+    return json({ error: parseResult.error.issues[0]?.message ?? "Invalid request" }, { status: 400 });
+  }
+  const { agentId } = parseResult.data;
```

As per coding guidelines: "Use zod for validation in packages/core and apps/webapp".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/webapp/app/routes/api.agents.provision.ts` around lines 15 - 19, Replace
the unchecked type assertion on (await request.json()) and manual agentId check
with a zod schema: define a Zod object schema that requires agentId (e.g., const
schema = z.object({ agentId: z.string().min(1) })), use schema.parse or await
schema.parseAsync on the request body, and on failure return json({ error: ...,
issues: schemaError }) with status 400; update the handler logic that currently
references agentId from the raw request/json() to use the validated value and
remove the existing if (!agentId) branch.

Comment on lines +21 to +29
try {
// Get agent config
const agentConfig = await prisma.agentConfig.findUnique({
where: { id: agentId },
});

if (!agentConfig) {
return json({ error: "Agent not found" }, { status: 404 });
}

⚠️ Potential issue | 🔴 Critical

Missing authorization check allows any user to provision any agent.

The endpoint retrieves the agent by id without verifying that the requesting user owns it. An attacker who knows or guesses an agentId can provision another user's agent.

🔒 Proposed fix to add ownership verification
+import { requireUser } from "~/services/auth.server";
+
 export const action = async ({ request }: ActionFunctionArgs) => {
   if (request.method !== "POST") {
     return json({ error: "Method not allowed" }, { status: 405 });
   }

+  const user = await requireUser(request);
   const { agentId } = (await request.json()) as { agentId: string };

   if (!agentId) {
     return json({ error: "agentId is required" }, { status: 400 });
   }

   try {
     // Get agent config
     const agentConfig = await prisma.agentConfig.findUnique({
-      where: { id: agentId },
+      where: { id: agentId, userId: user.id },
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/webapp/app/routes/api.agents.provision.ts` around lines 21 - 29, The
endpoint is missing an ownership check after fetching agentConfig; obtain the
current requester’s user id from the request/session (e.g., your existing auth
helper or session extraction used elsewhere), then verify agentConfig.userId (or
the appropriate ownership field on the AgentConfig record) matches that user id;
if it does not, return a 403 JSON response and halt provisioning—update the
logic around prisma.agentConfig.findUnique / agentConfig and use the same auth
helper you use elsewhere to get the current user id.

}

try {
const event = (await request.json()) as any;

🛠️ Refactor suggestion | 🟠 Major

Add zod validation for Slack webhook payload.

The Slack event payload is parsed as any without validation. Use zod to validate the expected structure.

♻️ Proposed schema
import { z } from "zod";

const SlackUrlVerificationSchema = z.object({
  type: z.literal("url_verification"),
  challenge: z.string(),
});

const SlackMessageEventSchema = z.object({
  type: z.literal("event_callback"),
  team_id: z.string(),
  event: z.object({
    type: z.literal("message"),
    channel: z.string(),
    text: z.string().optional(),
    user: z.string(),
    ts: z.string(),
    thread_ts: z.string().optional(),
  }),
});

const SlackEventSchema = z.union([
  SlackUrlVerificationSchema,
  SlackMessageEventSchema,
]);

As per coding guidelines: "Use zod for validation in packages/core and apps/webapp".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/webapp/app/routes/webhooks.slack.ts` at line 16, Replace the unsafe any
cast for the incoming Slack payload by adding a zod schema
(SlackUrlVerificationSchema, SlackMessageEventSchema) and a SlackEventSchema
union, import { z } from "zod", then parse/validate the body instead of using
const event = (await request.json()) as any; — use SlackEventSchema.parse(...)
or SlackEventSchema.safeParse(...) to validate the payload and handle validation
failures (return a 400 or log error) before proceeding with logic that expects
the validated event object.

Comment on lines +39 to +45
const agent = await prisma.agentConfig.findFirst({
where: {
slackWorkspaceId: workspaceId,
messagingPlatform: "slack",
status: "healthy",
},
});

⚠️ Potential issue | 🔴 Critical

No code path sets agent status to "healthy", so messages will never be routed.

The query filters for status: "healthy", but reviewing the codebase shows:

  • agents.setup.tsx creates agents with status: "provisioning"
  • api.agents.provision.ts sets status: "provisioning"
  • This file sets status: "unhealthy" on errors

No code ever transitions status to "healthy", so this query will never find any agents.

💡 Suggested fix

Either:

  1. Add logic to set status: "healthy" after successful container deployment/health check
  2. Or temporarily query for status: "provisioning" as well:
       const agent = await prisma.agentConfig.findFirst({
         where: {
           slackWorkspaceId: workspaceId,
           messagingPlatform: "slack",
-          status: "healthy",
+          status: { in: ["healthy", "provisioning"] },
         },
       });

The proper fix is to add a health check mechanism that transitions agents to "healthy" status.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/webapp/app/routes/webhooks.slack.ts` around lines 39 - 45, The query
using prisma.agentConfig.findFirst filters for status: "healthy" but no codepath
ever sets that status, so messages will never be routed; update the logic either
by adding a post-deployment health transition that flips agentConfig.status to
"healthy" after successful container deployment/healthcheck (implement a
health-check routine and update the record), or as a temporary measure expand
the query in the route handling (prisma.agentConfig.findFirst for
slackWorkspaceId/messagingPlatform) to accept both "provisioning" and "healthy"
states so provisioning agents can receive events until the proper health
transition is implemented.

Comment on lines +57 to +58
// Route message to OpenClaw container (on VPS)
const containerUrl = `http://178.128.150.129:${agent.containerPort}`;

⚠️ Potential issue | 🔴 Critical

Hardcoded VPS IP address will require code changes to update infrastructure.

The container host IP 178.128.150.129 is hardcoded. This should be configurable via environment variable for flexibility and security (avoid exposing infrastructure details in code).

🔧 Proposed fix using environment variable
+import { env } from "~/env.server";
+
       // Route message to OpenClaw container (on VPS)
-      const containerUrl = `http://178.128.150.129:${agent.containerPort}`;
+      const containerUrl = `http://${env.AGENT_CONTAINER_HOST}:${agent.containerPort}`;

Add to env.server.ts:

AGENT_CONTAINER_HOST: z.string().default("localhost"),

As per coding guidelines: "Access all environment variables through the env export of env.server.ts".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/webapp/app/routes/webhooks.slack.ts` around lines 57 - 58, The hardcoded
host in the container URL should be replaced with a configurable environment
value: add AGENT_CONTAINER_HOST to env.server.ts (default "localhost") and use
env.AGENT_CONTAINER_HOST when building containerUrl in routes/webhooks.slack.ts
(keep agent.containerPort). Update any imports to use the env export so
containerUrl = `http://${env.AGENT_CONTAINER_HOST}:${agent.containerPort}`
rather than the literal IP.

Comment on lines +60 to +74
try {
const containerResponse = await fetch(`${containerUrl}/api/message`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
text,
userId,
channel,
metadata: {
slackUserId: userId,
slackChannel: channel,
timestamp: new Date().toISOString(),
},
}),
});

⚠️ Potential issue | 🟠 Major

Add timeout to container fetch request.

The fetch to the container has no timeout. If the container hangs, this request will block indefinitely, potentially causing webhook timeouts with Slack (which expects responses within 3 seconds).

🔧 Proposed fix with AbortController timeout
+      const controller = new AbortController();
+      const timeoutId = setTimeout(() => controller.abort(), 2500); // Slack wants response in 3s
+
       try {
         const containerResponse = await fetch(`${containerUrl}/api/message`, {
           method: "POST",
           headers: { "Content-Type": "application/json" },
           body: JSON.stringify({
             text,
             userId,
             channel,
             metadata: {
               slackUserId: userId,
               slackChannel: channel,
               timestamp: new Date().toISOString(),
             },
           }),
+          signal: controller.signal,
         });
+        clearTimeout(timeoutId);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/webapp/app/routes/webhooks.slack.ts` around lines 60 - 74, The fetch to
the container (the await fetch call that sets containerResponse using
containerUrl) needs a timeout so it can't hang; create an AbortController before
the fetch, start a setTimeout to call controller.abort() after a safe Slack
deadline (e.g. ~2500–2800ms), pass controller.signal to fetch, and clear the
timeout after the fetch resolves; also ensure any abort/timeout errors are
handled in the existing try/catch so the handler returns promptly.

Comment on lines +1 to +17
-- CreateTable "AgentConfig"
CREATE TABLE "AgentConfig" (
"id" TEXT NOT NULL PRIMARY KEY,
"userId" TEXT NOT NULL,
"name" TEXT NOT NULL,
"model" TEXT NOT NULL,
"messagingPlatform" TEXT NOT NULL,
"tools" TEXT,
"containerName" TEXT,
"containerPort" INTEGER,
"slackWorkspaceId" TEXT,
"slackWebhookToken" TEXT,
"status" TEXT NOT NULL DEFAULT 'provisioning',
"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" DATETIME NOT NULL,
CONSTRAINT "AgentConfig_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User" ("id") ON DELETE CASCADE ON UPDATE CASCADE
);

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify the database provider and check other migrations for the correct syntax pattern
echo "=== Checking datasource provider in schema ==="
grep -A2 "datasource db" internal-packages/database/prisma/schema.prisma

echo ""
echo "=== Checking other migrations for DATETIME vs TIMESTAMP usage ==="
rg -l "DATETIME" internal-packages/database/prisma/migrations/ || echo "No other DATETIME usages found"
rg -l "TIMESTAMP" internal-packages/database/prisma/migrations/ | head -5

Repository: triggerdotdev/trigger.dev

Length of output: 979


🏁 Script executed:

#!/bin/bash
# Check a sample of migrations using DATETIME and TIMESTAMP to understand the actual syntax
echo "=== Sample migration with DATETIME ==="
head -30 internal-packages/database/prisma/migrations/20250103152909_add_run_engine_v2/migration.sql

echo ""
echo "=== Check if any migrations use TIMESTAMP (PostgreSQL native) ==="
rg "TIMESTAMP" internal-packages/database/prisma/migrations/ -A 2 -B 2 | head -40

echo ""
echo "=== Sample recent migration ==="
ls -la internal-packages/database/prisma/migrations/ | tail -5
head -30 internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql

Repository: triggerdotdev/trigger.dev

Length of output: 7544


Use TIMESTAMP(3) instead of DATETIME for timestamp columns.

PostgreSQL does not recognize DATETIME as a valid type. All other migrations in this codebase use TIMESTAMP(3) for datetime columns (e.g., see migrations 20240228114913, 20240415134559, 20240411135457). This migration will fail when applied to the PostgreSQL database.

Update "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP and "updatedAt" DATETIME NOT NULL to use TIMESTAMP(3) instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@internal-packages/database/prisma/migrations/20260325122458_add_openclaw_agents/migration.sql`
around lines 1 - 17, The migration uses PostgreSQL-incompatible DATETIME for the
AgentConfig table timestamps; change the "createdAt" and "updatedAt" column
types in the "AgentConfig" CREATE TABLE to TIMESTAMP(3) (e.g., replace
`"createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP` and `"updatedAt"
DATETIME NOT NULL` with TIMESTAMP(3) equivalents), and ensure the createdAt
default uses CURRENT_TIMESTAMP(3) to match other migrations and precision used
across the codebase.
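Applied to this migration, the two timestamp columns in the "AgentConfig" CREATE TABLE would read:

```sql
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP(3),
"updatedAt" TIMESTAMP(3) NOT NULL,
```

The same substitution applies to any DATETIME columns in the AgentExecution and AgentHealthCheck tables added by this migration.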
