diff --git a/docs/ai-reference.md b/docs/ai-reference.md new file mode 100644 index 000000000..fbd85ade3 --- /dev/null +++ b/docs/ai-reference.md @@ -0,0 +1,51 @@ +# AI Tools Reference + +This page lists the configuration locations, skills, and database skills that Storm installs for each AI coding tool. For the main guide, see [AI-Assisted Development](ai.md). + +--- + +## Tool Configuration Locations + +Each AI tool stores its configuration in a different location, but the content is the same: Storm's conventions, entity rules, query patterns, and verification guidelines. + +| Tool | Rules | Skills | MCP | +|------|-------|--------|-----| +| Claude Code | CLAUDE.md | .claude/skills/ | .mcp.json | +| Cursor | .cursor/rules/storm.md | .cursor/rules/ | .cursor/mcp.json | +| GitHub Copilot | .github/copilot-instructions.md | .github/instructions/ | (tool-dependent) | +| Windsurf | .windsurf/rules/storm.md | .windsurf/rules/ | (manual config) | +| Codex | AGENTS.md | - | .codex/config.toml | + +--- + +## Skills + +Skills are per-topic guides that the AI loads on demand when working on a specific task. Each skill contains focused instructions, code examples, and common pitfalls for one area of Storm. Skills are fetched from orm.st during setup and can be updated automatically on each run without requiring a CLI update. 
+ +| Skill | Purpose | +|-------|---------| +| storm-setup | Configure dependencies (detects Spring Boot, Ktor, or standalone) | +| storm-docs | Load full Storm documentation | +| storm-entity-kotlin | Create Kotlin entities | +| storm-entity-java | Create Java entities | +| storm-repository-kotlin | Write Kotlin repositories (framework-aware: Spring Boot, Ktor, standalone) | +| storm-repository-java | Write Java repositories | +| storm-query-kotlin | Kotlin QueryBuilder queries | +| storm-query-java | Java QueryBuilder queries | +| storm-sql-kotlin | Kotlin SQL Templates | +| storm-sql-java | Java SQL Templates | +| storm-json-kotlin / storm-json-java | JSON columns and JSON aggregation | +| storm-serialization-kotlin / storm-serialization-java | Entity serialization for REST APIs (framework-aware content negotiation) | +| storm-migration | Write Flyway/Liquibase migration SQL | + +--- + +## Database Skills + +With the [MCP server](database-and-mcp.md) configured, three additional skills become available: + +| Skill | Purpose | +|-------|---------| +| storm-schema | Inspect your live database schema | +| storm-validate | Compare entities against the live schema | +| storm-entity-from-schema | Generate, update, or refactor entities from database tables | diff --git a/docs/ai.md b/docs/ai.md index e48d9fa62..580a243c8 100644 --- a/docs/ai.md +++ b/docs/ai.md @@ -5,12 +5,8 @@ import TabItem from '@theme/TabItem'; Storm is an AI-first ORM. Entities are plain Kotlin data classes or Java records. Queries are explicit SQL. Built-in verification lets AI validate its own work before anything touches production. -:::caution Storm is not for vibe coding -Database code affects data integrity, performance, and security. Generating it without understanding or verifying the result is not something Storm encourages. - +:::info AI-first, not AI-only Storm keeps you in control. `ORMTemplate.validateSchema()` validates that entities match the database. 
`SqlCapture` validates that queries match the intent. `@StormTest` runs both checks in an isolated in-memory database before anything reaches production. The AI generates code, then Storm verifies it. That is what AI-first means here. - -Automated verification catches mistakes, but it does not replace understanding. Your data layer is not the place to let the codebase drift away from you. ::: --- @@ -34,17 +30,7 @@ The interactive setup walks you through three steps: ### 1. Select AI tools -Choose which AI coding tools you use. Storm configures each one with rules, skills, and (optionally) a database-aware MCP server. You can select multiple tools if your team uses different editors. - -| Tool | Rules | Skills | MCP | -|------|-------|--------|-----| -| Claude Code | CLAUDE.md | .claude/skills/ | .mcp.json | -| Cursor | .cursor/rules/storm.md | .cursor/rules/ | .cursor/mcp.json | -| GitHub Copilot | .github/copilot-instructions.md | .github/instructions/ | (tool-dependent) | -| Windsurf | .windsurf/rules/storm.md | .windsurf/rules/ | (manual config) | -| Codex | AGENTS.md | - | .codex/config.toml | - -Each tool stores its configuration in a different location, but the content is the same: Storm's conventions, entity rules, query patterns, and verification guidelines. +Choose which AI coding tools you use. Storm configures each one with rules, skills, and (optionally) a database-aware MCP server. You can select multiple tools if your team uses different editors. Storm currently supports Claude Code, Cursor, GitHub Copilot, Windsurf, and Codex. Each tool stores its configuration in a different location, but the content is the same: Storm's conventions, entity rules, query patterns, and verification guidelines. See [AI Tools Reference](ai-reference.md) for the full list of configuration locations. ### 2. 
Rules and skills @@ -52,39 +38,23 @@ For each selected tool, Storm installs two types of AI context: **Rules** are a project-level configuration file that is always loaded by the AI tool. They contain Storm's key patterns, naming conventions, and critical constraints (immutable QueryBuilder, no collection fields on entities, `Ref` for circular references, etc.). The rules ensure the AI follows Storm's conventions in every interaction, without you having to repeat them. -**Skills** are per-topic guides that the AI loads on demand when working on a specific task. Each skill contains focused instructions, code examples, and common pitfalls for one area of Storm. Skills are fetched from orm.st during setup and can be updated automatically on each run without requiring a CLI update. - -| Skill | Purpose | -|-------|---------| -| storm-setup | Configure dependencies (detects Spring Boot, Ktor, or standalone) | -| storm-docs | Load full Storm documentation | -| storm-entity-kotlin | Create Kotlin entities | -| storm-entity-java | Create Java entities | -| storm-repository-kotlin | Write Kotlin repositories (framework-aware: Spring Boot, Ktor, standalone) | -| storm-repository-java | Write Java repositories | -| storm-query-kotlin | Kotlin QueryBuilder queries | -| storm-query-java | Java QueryBuilder queries | -| storm-sql-kotlin | Kotlin SQL Templates | -| storm-sql-java | Java SQL Templates | -| storm-json-kotlin / storm-json-java | JSON columns and JSON aggregation | -| storm-serialization-kotlin / storm-serialization-java | Entity serialization for REST APIs (framework-aware content negotiation) | -| storm-migration | Write Flyway/Liquibase migration SQL | +**Skills** are per-topic guides that the AI loads on demand when working on a specific task. Each skill contains focused instructions, code examples, and common pitfalls for one area of Storm (entities, queries, repositories, migrations, JSON, serialization, and more). 
Skills are fetched from orm.st during setup and can be updated automatically on each run without requiring a CLI update. See [AI Tools Reference](ai-reference.md#skills) for the full list. ### 3. Database connection (optional) If you have a local development database running, Storm can set up a schema-aware MCP server. This gives your AI tool access to your actual database structure (table definitions, column types, foreign keys) without exposing credentials or data. -The MCP server runs locally on your machine, exposes only schema metadata, and stores credentials in `~/.storm/` (outside your project, outside the LLM's reach). It supports PostgreSQL, MySQL, MariaDB, Oracle, SQL Server, SQLite, and H2. +The MCP server runs locally on your machine, exposes only schema metadata by default, and stores credentials in `~/.storm/` (outside your project, outside the LLM's reach). It supports PostgreSQL, MySQL, MariaDB, Oracle, SQL Server, SQLite, and H2. You can connect multiple databases to a single project, even across different database types. + +Optionally, you can enable read-only data access per connection. This lets the AI query individual records to inform type decisions — for example, recognizing that a `VARCHAR` column contains enum-like values, or that a `TEXT` column stores JSON. Data access is disabled by default because it means actual data from your database flows through the AI's context. When enabled, the database connection is read-only (enforced at both the application and database driver level), and the AI cannot write, modify, or delete data. See [Database Connections & MCP — Security](database-and-mcp.md#security) for the full details. -With the database connected, three additional skills become available: +With the database connected, three additional skills become available for schema inspection, entity validation against the live schema, and entity generation from database tables. 
See [AI Tools Reference](ai-reference.md#database-skills) for details. -| Skill | Purpose | -|-------|---------| -| storm-schema | Inspect your live database schema | -| storm-validate | Compare entities against the live schema | -| storm-entity-from-schema | Generate, update, or refactor entities from database tables | +To manage database connections later, use `storm db` for the global connection library and `storm mcp` for project-level configuration. See [Database Connections & MCP](database-and-mcp.md) for the full guide. -To reconfigure the database connection later, run `storm mcp`. +:::tip Looking for a database MCP server for Python, Go, Ruby, or any other language? +The Storm MCP server works standalone — no Storm ORM required. Run `npx @storm-orm/cli mcp init` to set up schema access and optional read-only data queries without installing Storm rules or skills. See [Using Without Storm ORM](database-and-mcp.md#using-without-storm-orm). +::: --- @@ -140,12 +110,12 @@ The design choices that matter most: - **No persistence context.** No session scope, flush ordering, or detachment rules that require deep framework knowledge. - **Convention over configuration.** Fewer annotations and config files for the AI to keep consistent. - **Compile-time metamodel.** Type errors caught at build time, not at runtime. The AI gets immediate feedback. -- **Secure schema access.** The MCP server gives AI tools structural database knowledge without exposing credentials or data. +- **Secure schema access.** The MCP server gives AI tools structural database knowledge without exposing credentials. Data access is opt-in, read-only by construction, and enforced at the database driver level. Beyond the data model, Storm provides dedicated tooling for AI-assisted workflows: - **Skills** guide AI tools through specific tasks (entity creation, queries, repositories, migrations) with framework-aware conventions and rules. 
-- **A locally running MCP server** gives AI tools access to your live database schema: table definitions, column types, constraints, and foreign keys. The AI can inspect your actual database structure to generate entities that match, or validate entities it just created. +- **A locally running MCP server** gives AI tools access to your live database schema: table definitions, column types, constraints, and foreign keys. Optionally, the AI can also query individual records (read-only) when sample data would improve type decisions. The AI can inspect your actual database structure to generate entities that match, or validate entities it just created. - **Built-in verification** through `ORMTemplate.validateSchema()` and `SqlCapture` lets the AI validate its own work. After generating entities, the AI can validate them against the database. After writing queries, it can capture and inspect the actual SQL. Both checks run in an isolated in-memory database through `@StormTest`, so verification happens before anything touches production. For dialect-specific code, `@StormTest` supports a static `dataSource()` factory method on the test class, allowing integration with Testcontainers to test against the actual target database. --- @@ -180,13 +150,14 @@ The AI generates and updates code (entities, migrations, queries). Storm validat In a schema-first workflow, the database is the source of truth. The schema already exists (or is managed by a DBA), and entities need to match it. -When the MCP server is configured, the AI has access to the live database through `list_tables` and `describe_table`. This gives it full visibility into table definitions, column types, constraints, and foreign key relationships. +When the MCP server is configured, the AI has access to the live database through `list_tables` and `describe_table`. This gives it full visibility into table definitions, column types, constraints, and foreign key relationships. 
When data access is enabled, the AI can also use `select_data` to sample individual records — useful when the schema alone is ambiguous about intent (e.g., a `VARCHAR` that holds enum values, or a `TEXT` column that stores JSON). The AI workflow: 1. **Inspect the schema.** The AI calls `list_tables` to discover tables, then `describe_table` for each relevant table. -2. **Generate entities.** Based on the schema metadata and Storm's entity conventions (naming, `@PK`, `@FK`, `@UK`, nullability, `Ref` for circular or self-references), the AI generates Kotlin data classes or Java records. -3. **Validate.** The AI writes a temporary test that validates the generated entities against the database using `ORMTemplate.validateSchema()`. +2. **Sample data (if available).** When `select_data` is enabled and the schema leaves a type decision ambiguous, the AI queries a few rows to inform the choice. +3. **Generate entities.** Based on the schema metadata (and optional sample data) and Storm's entity conventions (naming, `@PK`, `@FK`, `@UK`, nullability, `Ref` for circular or self-references), the AI generates Kotlin data classes or Java records. +4. **Validate.** The AI writes a temporary test that validates the generated entities against the database using `ORMTemplate.validateSchema()`. When the database schema evolves, the same flow applies: the AI inspects the changed tables, updates the affected entities, and re-validates. diff --git a/docs/database-and-mcp.md b/docs/database-and-mcp.md new file mode 100644 index 000000000..d26ba8fcd --- /dev/null +++ b/docs/database-and-mcp.md @@ -0,0 +1,351 @@ +# Database Connections & MCP + +The Storm CLI manages database connections and exposes them as MCP (Model Context Protocol) servers. This gives your AI tools direct access to your database schema — table definitions, column types, constraints, and foreign keys — without exposing credentials or actual data. 
The AI can use this structural knowledge to generate entities that match your schema, validate entities it just created, or understand relationships between tables before writing a query. + +--- + +## How It Works + +Database configuration in Storm has two layers: a **global connection library** on your machine, and a **per-project configuration** that references connections from that library. + +``` + ~/.storm/connections/ Your project (.storm/) + ┌─────────────────────┐ ┌──────────────────────────┐ + │ localhost-shopdb │◀─────│ default -> localhost- │ + │ staging-analytics │◀─────│ shopdb │ + │ localhost-legacy │ │ reports -> staging- │ + └─────────────────────┘ │ analytics │ + global, shared by all └──────────────────────────┘ + projects on this machine project picks what it needs +``` + +Global connections store the actual credentials and connection details. Projects reference them by name through aliases. This separation means you configure a database once and reuse it across as many projects as you need. Changing a password or hostname in the global connection updates every project that references it. + +Each project alias becomes an MCP server that your AI tool can query. The alias `default` becomes `storm-schema`; any other alias like `reporting` becomes `storm-schema-reporting`. The Storm MCP server handles all supported dialects — PostgreSQL, MySQL, MariaDB, Oracle, SQL Server, SQLite, and H2 — and the necessary drivers are installed automatically. + +--- + +## Global Connections + +Global connections live in `~/.storm/connections/` and are available to any project on your machine. Think of them as your personal library of database connections: your local Postgres, your staging Oracle instance, your team's shared MySQL. + +### Adding a connection + +Run `storm db add` to walk through an interactive setup that asks for the dialect, host, port, database name, and credentials. 
Storm suggests a connection name based on the host and database, which you can accept or override: + +``` +? Database dialect: PostgreSQL +? Host: localhost +? Port: 5432 +? Database: shopdb +? Username: storm +? Password: •••••• +? Allow AI tools to query data? (read-only SELECT) No +? Connection name: localhost-shopdb + + Connection "localhost-shopdb" saved globally. +``` + +The data access prompt defaults to No. When disabled, the MCP server exposes only schema metadata (table definitions, column types, constraints). When enabled, the AI can also query individual records. See [Security](#security) for details on what this means and how read-only access is enforced. + +You can also provide the name upfront with `storm db add my-postgres` to skip the naming prompt. + +Drivers are managed automatically. When you add your first PostgreSQL connection, Storm installs the `pg` driver. When you later add a MySQL connection, the `mysql2` driver is installed alongside it. You never install or update drivers manually. + +### Listing and removing connections + +Use `storm db list` to see all global connections: + +``` + localhost-shopdb (postgresql://localhost:5432/shopdb) + staging-analytics (oracle://db.staging.internal:1521/analytics) + localhost-legacy (mysql://localhost:3306/legacy) +``` + +To remove a connection, run `storm db remove localhost-legacy` (or just `storm db remove` for an interactive picker). Removing a global connection does not affect any project that already references it — the project alias simply becomes unresolvable until you point it at a different connection. + +--- + +## Project Connections + +A project's `.storm/databases.json` maps aliases to global connection names. Each alias becomes a separate MCP server that your AI tool can use. This is where you decide which databases are relevant to a particular project, and what to call them. 
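To make the alias mapping concrete, a minimal `.storm/databases.json` might look like this (illustrative — the exact file is managed by the Storm CLI, as shown in the directory structure below):

```json
{
  "default": "localhost-shopdb",
  "reporting": "staging-analytics"
}
```

Each key is a project alias; each value is the name of a global connection stored in `~/.storm/connections/` (or `.storm/connections/` for project-local ones).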
+ +### Adding a connection to your project + +Run `storm mcp add` to pick a global connection and assign it an alias: + +``` +? Database connection: localhost-shopdb (postgresql://localhost:5432/shopdb) +? Alias for this connection: default + + Database "default" -> localhost-shopdb added. +``` + +The alias determines the MCP server name. The convention is straightforward: + +| Alias | MCP server name | +|-------|----------------| +| `default` | `storm-schema` | +| `reporting` | `storm-schema-reporting` | +| `legacy` | `storm-schema-legacy` | + +You can add multiple connections to a single project by running `storm mcp add` repeatedly. Each one registers a separate MCP server, so your AI tool can query each database independently. This is useful when your application talks to more than one database — for example, a primary PostgreSQL for transactional data and an Oracle database for reporting. + +### Listing project connections + +Run `storm mcp list` to see what is configured for the current project, including which global connection each alias resolves to and the corresponding MCP server name: + +``` + default -> localhost-shopdb (global) postgresql://localhost:5432/shopdb + MCP server: storm-schema + reporting -> staging-analytics (global) oracle://db.staging.internal:1521/analytics + MCP server: storm-schema-reporting +``` + +### Removing a project connection + +Run `storm mcp remove reporting` to remove an alias from the project. This unregisters the MCP server from your AI tool's configuration. The global connection itself is not affected — other projects that reference it continue to work. + +### Re-registering connections + +If your AI tool's MCP configuration gets out of sync (for example, after switching branches or resetting editor config files), run `storm mcp` without arguments. This re-registers all connections from `databases.json` for every configured AI tool. 
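Putting the commands above together, a typical project-level session might look like this (all commands as documented in this section):

```bash
# Register the primary database under the conventional "default" alias
storm mcp add default

# Add a second database for reporting queries
storm mcp add reporting

# See which global connections the aliases resolve to,
# and which MCP server names they register
storm mcp list

# After editor config was reset, re-register all servers from databases.json
storm mcp
```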
+ +--- + +## Using `storm init` + +When you run `storm init`, database configuration is part of the interactive setup. After selecting your AI tools and programming languages, Storm asks if you want to connect to a database: + +``` + Storm can connect to your local development database so AI tools + can read your schema (tables, columns, foreign keys) and generate + entities automatically. Credentials are stored locally and never + exposed to the AI. + +? Connect to a local database? Yes +``` + +If you say yes, it walks you through the same flow as `storm db add` (or lets you pick an existing global connection), including whether to enable data access. It then asks for an alias. After the first connection, it offers to add more. This lets you set up your full database configuration in a single `storm init` run — or you can skip it and add connections later with `storm mcp add`. + +--- + +## Multiple Databases in One Project + +Real-world applications often work with more than one database. A project might have a primary PostgreSQL database for transactional data and an Oracle database for reporting, or a main database plus a legacy system that is being migrated. Storm supports this natively: each connection gets its own MCP server, and your AI tool can query any of them by name. + +``` + .storm/databases.json .mcp.json + ┌───────────────────────────┐ ┌─────────────────────────────────┐ + │ "default" : "local-pg" │ ───▶ │ storm-schema (PG) │ + │ "reporting": "oracle-rpt"│ ───▶ │ storm-schema-reporting (ORA) │ + │ "legacy" : "local-my" │ ───▶ │ storm-schema-legacy (MySQL)│ + └───────────────────────────┘ └─────────────────────────────────┘ +``` + +When using an AI tool, each MCP server identifies itself by connection name. The AI can call `list_tables` and `describe_table` on each server independently, so it always knows which tables belong to which database. 
This matters when generating entities: the AI can target the right schema and use the right dialect conventions for each database. + +--- + +## Multiple Projects, Shared Connections + +The global/project split also works in the other direction: when several projects use the same database, you configure the connection once globally and reference it from each project. Changing the connection details (for example, updating a password after a credential rotation) updates it for all projects at once. + +``` + ~/.storm/connections/ + ┌───────────────────┐ + │ localhost-shopdb │ + └────────┬──────────┘ + ┌──────────────┼──────────────┐ + ▼ ▼ ▼ + storefront backoffice mobile-api + (default: pg) (default: pg) (default: pg) +``` + +If a project needs a different database, it simply references a different global connection. No duplication, no drift. + +--- + +## Project-Local Connections + +Most connections should be global, since they represent databases on your machine that any project might use. However, some connections are inherently project-specific: a SQLite database file that lives inside the project directory, or a Docker Compose database that uses a project-specific port mapping. + +Project-local connections are stored in `.storm/connections/` inside the project directory. When Storm resolves a connection name, it checks the project-local directory first, then the global directory. This means a project-local connection can shadow a global one with the same name — useful for overriding connection details in a specific project without affecting others. + +``` +.storm/ +├── databases.json +└── connections/ + └── test-h2.json # only this project can see this +``` + +--- + +## Directory Structure + +After setup, the file layout looks like this: + +``` +~/.storm/ # global (machine-wide) +├── package.json # npm package for driver installs +├── node_modules/ # database drivers (pg, mysql2, oracledb, etc.) 
+├── server.mjs # MCP server script (shared by all connections) +└── connections/ + ├── localhost-shopdb.json # { dialect, host, port, database, username, password, selectAccess } + └── staging-analytics.json + +/ +├── .storm/ # project-level (gitignored) +│ ├── databases.json # { "default": "localhost-shopdb", ... } +│ └── connections/ # optional project-local connections +│ └── test-h2.json +├── .mcp.json # MCP server registrations (gitignored) +└── CLAUDE.md # Storm rules (committed) +``` + +The `.storm/` directory and `.mcp.json` are gitignored because they contain machine-specific paths and credentials. The Storm rules and skills files are committed and shared with the team. + +--- + +## Using Without Storm ORM + +The Storm MCP server is a standalone database tool — it does not require Storm ORM in your project. If you use Python, Go, Ruby, or any other language and just want your AI tool to have schema awareness and optional data access, run: + +```bash +npx @storm-orm/cli mcp init +``` + +This walks you through: + +1. Selecting your AI tools (Claude Code, Cursor, Codex, etc.) +2. Configuring one or more database connections +3. Optionally enabling read-only data access +4. Registering the MCP server with your AI tools + +No Storm rules, skills, or language-specific configuration is installed — just the database MCP server. Your AI tool gets `list_tables`, `describe_table`, and optionally `select_data`, regardless of what language or framework your project uses. + +After setup, you can manage connections with `storm db` and `storm mcp` commands as described above. + +--- + +## Security + +Database credentials are stored in connection JSON files under `~/.storm/connections/` (global) or `.storm/connections/` (project-local). Both locations are outside the AI tool's context window: the MCP server reads them at startup, but the connection details are never sent to the AI. 
The AI only sees schema metadata — and optionally, query results — but it never learns your credentials. + +### Schema access (always available) + +The MCP server always exposes two schema-only tools: + +| Tool | What it returns | +|------|----------------| +| `list_tables` | Table names | +| `describe_table` | Column names, types, nullability, primary keys, foreign keys (with cascade rules), unique constraints | + +These tools return structural metadata only. No data is returned, and the database cannot be modified. + +### Data access (opt-in) + +When you enable data access for a connection, a third tool becomes available: + +| Tool | What it returns | +|------|----------------| +| `select_data` | Individual rows from a table, filtered by column conditions. Supports pagination (offset + limit), defaults to 50 rows. Results formatted as a markdown table. | + +**Data access is disabled by default.** When you add a database connection, Storm asks: + +``` +? Allow AI tools to query data? (read-only SELECT) No +``` + +If you answer No (the default), the MCP server exposes only `list_tables` and `describe_table`. The AI has full visibility into your database structure but cannot see any data. + +If you answer Yes, the AI can also query individual records using `select_data`. This is useful when sample data helps the AI make better decisions — for example, recognizing that a `status` column contains enum-like values, or that a `TEXT` column stores JSON. But it means **actual data from your database flows through the AI's context**. It is your responsibility to decide whether this is acceptable given the nature of your data. + +The `selectAccess` setting is stored per connection in the connection JSON file. You can change it at any time by running `storm db config`. + +### Configuring data access + +Use `storm db config` to manage data access settings for a connection: + +``` +storm db config localhost-shopdb +``` + +This lets you: + +1. 
**Toggle data access** on or off for the connection. +2. **Exclude specific tables** from data queries. Storm connects to the database, lists all tables, and presents a searchable checkbox where you can select which tables to exclude. You can type to filter the list, use Page Up/Down for large schemas, and press Space to toggle individual tables. + +Excluded tables still appear in `list_tables` and can be described with `describe_table` — the AI needs to see the schema to generate correct entities and foreign keys. Only `select_data` is restricted for excluded tables. + +The settings are stored in the connection JSON file (`selectAccess` and `excludeTables`). You can re-run `storm db config` at any time to update them. + +### How data access stays read-only + +Enabling data access does not give the AI the ability to write, modify, or delete data. The MCP server enforces read-only access through multiple independent layers: + +**1. Structured queries, not SQL.** The AI never writes SQL. The `select_data` tool accepts a structured request — table name, column names, filter conditions, sort order, and row limit — and the MCP server builds the SQL internally. There is no code path that produces anything other than a `SELECT` statement. This is read-only by construction: the server cannot generate `INSERT`, `UPDATE`, `DELETE`, `DROP`, or any other write operation because it simply does not contain the code to do so. + +**2. Schema validation.** Every table and column name in a `select_data` request is validated against the actual database schema before any query is executed (case-insensitive — the server resolves the correct casing automatically). Unknown tables and columns are rejected. Filter operators are restricted to a fixed whitelist (`=`, `!=`, `<`, `>`, `<=`, `>=`, `LIKE`, `IN`, `IS NULL`, `IS NOT NULL`). Values are always parameterized — they never appear in the SQL string. + +**3. 
Read-only database connections.** Independent of the query builder, the database connection itself is configured to reject writes at the driver or protocol level: + +| Database | Read-only mechanism | +|----------|-------------------| +| PostgreSQL | `default_transaction_read_only = on` — the server rejects any write statement | +| MySQL / MariaDB | `SET SESSION TRANSACTION READ ONLY` — session-level write rejection | +| SQL Server | `readOnlyIntent: true` — connection-level read-only intent | +| SQLite | `readonly: true` — OS-level read-only file handle | +| H2 | `default_transaction_read_only = on` (via PG wire protocol) | +| Oracle | Relies on the structured query builder (Oracle has no session-level read-only setting) | + +Even if the structured query builder had a bug that somehow produced a write statement, the database would reject it. These are independent safety layers. + +**4. Row and cell limits.** Results default to 50 rows per query (configurable up to 500). Individual cell values are truncated at 200 characters to prevent JSON blobs or large text fields from overloading the AI's context window. Pagination is supported via `offset` and `limit` for all database dialects. + +### Summary + +| Concern | How it is addressed | +|---------|-------------------| +| **Credentials** | Stored in `~/.storm/`, never sent to the AI | +| **Data visibility** | Off by default. Opt-in per connection. Developer's choice. | +| **Sensitive tables** | `storm db config` hides specific tables from data queries while keeping their schema visible | +| **Write protection** | Read-only by construction (structured queries) + read-only database connections (driver-level). Two independent layers. | +| **SQL injection** | Not possible. Values are parameterized. Table/column names are validated against the schema. | +| **Unbounded queries** | Default 50 rows, max 500. Cell values truncated at 200 characters. Pagination via offset + limit. 
| + +--- + +## Command Reference + +### `storm db` — Global connection library + +| Command | Description | +|---------|-------------| +| `storm db` | List all global connections | +| `storm db add [name]` | Add a global database connection | +| `storm db remove [name]` | Remove a global database connection | +| `storm db config [name]` | Configure data access and table exclusions | + +### `storm mcp` — Project MCP servers + +| Command | Description | +|---------|-------------| +| `storm mcp init` | Set up MCP database server (no Storm ORM required) | +| `storm mcp` | Re-register MCP servers for all project connections | +| `storm mcp add [alias]` | Add a database connection to this project | +| `storm mcp list` | List project database connections with MCP server names | +| `storm mcp remove [alias]` | Remove a database connection from this project | + +### Supported Databases + +The MCP server is a lightweight Node.js process that reads schema metadata. It uses native Node.js database drivers (not JDBC) to connect to the same databases your Storm application uses. + +| Database | Connection | Default Port | +|----------|-----------|-------------| +| PostgreSQL | TCP | 5432 | +| MySQL | TCP | 3306 | +| MariaDB | TCP | 3306 | +| Oracle | TCP (thin mode) | 1521 | +| SQL Server | TCP | 1433 | +| SQLite | Direct file access (read-only) | — | +| H2 | TCP (PG wire protocol, requires `-pgPort`) | 5435 | diff --git a/docs/entities.md b/docs/entities.md index 185e60151..336c58a01 100644 --- a/docs/entities.md +++ b/docs/entities.md @@ -292,16 +292,34 @@ data class User( ) : Entity ``` -For compound unique constraints spanning multiple columns, use an inline record annotated with `@UK`. 
When the compound key columns overlap with other fields on the entity, use `@Persist(insertable = false, updatable = false)` to prevent duplicate persistence: + + + +```java +record User(@PK Integer id, + @UK String email, + String name +) implements Entity {} +``` + + + + +### Compound Unique Keys + +For compound unique constraints that need a metamodel key (e.g., for keyset pagination or type-safe lookups), use an inline record annotated with `@UK`. When the compound key columns overlap with other fields on the entity, use `@Persist(insertable = false, updatable = false)` to prevent duplicate persistence: + + + ```kotlin -data class UserEmailUK(val userId: Int, val email: String) +data class UserEmailUk(val userId: Int, val email: String) data class SomeEntity( @PK val id: Int = 0, @FK val user: User, val email: String, - @UK @Persist(insertable = false, updatable = false) val uniqueKey: UserEmailUK + @UK @Persist(insertable = false, updatable = false) val uniqueKey: UserEmailUk ) : Entity ``` @@ -309,27 +327,22 @@ data class SomeEntity( ```java -record User(@PK Integer id, - @UK String email, - String name -) implements Entity {} -``` - -For compound unique constraints spanning multiple columns, use an inline record annotated with `@UK`. When the compound key columns overlap with other fields on the entity, use `@Persist(insertable = false, updatable = false)` to prevent duplicate persistence: - -```java -record UserEmailUK(int userId, String email) {} +record UserEmailUk(int userId, String email) {} record SomeEntity(@PK Integer id, @Nonnull @FK User user, @Nonnull String email, - @UK @Persist(insertable = false, updatable = false) UserEmailUK uniqueKey + @UK @Persist(insertable = false, updatable = false) UserEmailUk uniqueKey ) implements Entity {} ``` +Compound unique constraints that do not require a metamodel key do not need to be modeled in the entity. Schema validation does not warn about unmodeled compound constraints. 
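+Such a constraint can live entirely in the schema. As an illustrative sketch (the table, column, and constraint names here are hypothetical, not from the docs), a migration might declare:
+
+```sql
+-- Enforced entirely by the database; the entity keeps plain userId and email
+-- fields and needs no @UK record for this constraint.
+ALTER TABLE some_entity
+    ADD CONSTRAINT uk_some_entity_user_email UNIQUE (user_id, email);
+```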
+ +Use `@UK(constraint = false)` when the unique constraint does not exist in the database — for example, when uniqueness is enforced at the application level. + When a column is not annotated with `@UK` but becomes unique in a specific query context (for example, a GROUP BY column produces unique values in the result set), wrap the metamodel with `.key()` (Kotlin) or `Metamodel.key()` (Java) to indicate it can serve as a scrolling cursor. See [Manual Key Wrapping](metamodel.md#manual-key-wrapping) for details. --- diff --git a/docs/getting-started.md b/docs/getting-started.md index 6ccf83984..9e45aeb15 100644 --- a/docs/getting-started.md +++ b/docs/getting-started.md @@ -6,7 +6,7 @@ Storm is a modern SQL Template and ORM framework for Kotlin 2.0+ and Java 21+. I Storm is built around a simple idea: your data model should be a plain value, not a framework-managed object. In Storm, entities are Kotlin data classes or Java records. They carry no hidden state, no change-tracking proxies, and no lazy-loading hooks. You can create them, pass them across layers, serialize them, compare them by value, and store them in collections without worrying about session scope, detachment, or side effects. What you see in the source code is exactly what exists at runtime. -This stateless design is a deliberate trade-off. Traditional ORMs like JPA/Hibernate give you automatic dirty checking and transparent lazy loading, but at the cost of complexity: you must reason about managed vs. detached state, proxy initialization, persistence context boundaries, and cascading rules that interact in subtle ways. Storm avoids all of this. When you call `update`, you pass the full entity. When you query a relationship, you get the result in the same query. There are no surprises. +This stateless design is a deliberate trade-off. Traditional ORMs like JPA/Hibernate give you transparent lazy loading and proxy-based dirty checking, but at the cost of complexity: you must reason about managed vs. 
detached state, proxy initialization, persistence context boundaries, and cascading rules that interact in subtle ways. Storm avoids all of this. It still performs [dirty checking](/dirty-checking), but by comparing entity state within a transaction rather than through proxies or bytecode manipulation. When you query a relationship, you get the result in the same query. There are no surprises. Storm is also SQL-first. Rather than abstracting SQL away behind a query language (like JPQL) or a verbose criteria builder, Storm embraces SQL directly. Its SQL Template API lets you write real SQL with type-safe parameter interpolation and automatic result mapping. For common CRUD patterns, the type-safe DSL and repository interfaces provide concise, compiler-checked alternatives, but the full power of SQL is always available when you need it. diff --git a/docs/index.md b/docs/index.md index 0dd2f970e..c78c6c63a 100644 --- a/docs/index.md +++ b/docs/index.md @@ -12,6 +12,10 @@ import TabItem from '@theme/TabItem'; Storm's concise API, strict conventions, and absence of hidden complexity make it optimized for AI-assisted development. Combined with AI skills and secure schema access via MCP, AI coding tools generate correct Storm code consistently. Install with `npm install -g @storm-orm/cli` and run `storm init` in your project. See [AI-Assisted Development](ai.md). ::: +:::tip Looking for a database MCP server for Python, Go, Ruby, or any other language? +Storm's schema-aware MCP server works standalone — no Storm ORM required. Run `npx @storm-orm/cli mcp init` to give your AI tool access to your database schema and (optionally) read-only data, regardless of your tech stack. See [Using Without Storm ORM](database-and-mcp.md#using-without-storm-orm). +::: + **Storm** is a modern, high-performance ORM for Kotlin 2.0+ and Java 21+, built around a powerful SQL template engine. 
It focuses on simplicity, type safety, and predictable performance through immutable models and compile-time metadata. **Key benefits:** diff --git a/docs/metamodel.md b/docs/metamodel.md index ac9727daf..fd0f442a3 100644 --- a/docs/metamodel.md +++ b/docs/metamodel.md @@ -397,13 +397,13 @@ For compound unique constraints spanning multiple columns, use an inline record ```kotlin -data class UserEmailUK(val userId: Int, val email: String) +data class UserEmailUk(val userId: Int, val email: String) data class SomeEntity( @PK val id: Int = 0, @FK val user: User, val email: String, - @UK @Persist(insertable = false, updatable = false) val uniqueKey: UserEmailUK + @UK @Persist(insertable = false, updatable = false) val uniqueKey: UserEmailUk ) : Entity ``` @@ -411,12 +411,12 @@ data class SomeEntity( ```java -record UserEmailUK(int userId, String email) {} +record UserEmailUk(int userId, String email) {} record SomeEntity(@PK Integer id, @Nonnull @FK User user, @Nonnull String email, - @UK @Persist(insertable = false, updatable = false) UserEmailUK uniqueKey + @UK @Persist(insertable = false, updatable = false) UserEmailUk uniqueKey ) implements Entity {} ``` diff --git a/pom.xml b/pom.xml index c85ad3d10..a1ba27b3c 100644 --- a/pom.xml +++ b/pom.xml @@ -3,7 +3,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> 4.0.0 - 1.11.1 + 1.11.2 21 ${java.version} ${java.version} diff --git a/storm-cli/package.json b/storm-cli/package.json index 50a28cf18..bd97952d5 100644 --- a/storm-cli/package.json +++ b/storm-cli/package.json @@ -1,6 +1,6 @@ { "name": "@storm-orm/cli", - "version": "1.11.1", + "version": "1.11.2", "description": "Storm ORM - AI assistant configuration tool", "type": "module", "bin": { diff --git a/storm-cli/storm.mjs b/storm-cli/storm.mjs index 6f7960c9c..4add496f5 100644 --- a/storm-cli/storm.mjs +++ b/storm-cli/storm.mjs @@ -7,9 +7,9 @@ import { writeFileSync, appendFileSync, mkdirSync, existsSync, 
readFileSync, readdirSync, unlinkSync, rmdirSync, statSync } from 'fs'; import { basename, join, dirname } from 'path'; import { homedir } from 'os'; -import { execSync } from 'child_process'; +import { execSync, spawn } from 'child_process'; -const VERSION = '1.11.1'; +const VERSION = '1.11.2'; // ─── ANSI ──────────────────────────────────────────────────────────────────── @@ -419,9 +419,33 @@ async function checkbox({ message, choices }) { return new Promise((resolve, reject) => { const selected = new Set(); choices.forEach((c, i) => { if (c.checked) selected.add(i); }); - let cursor = 0; + let filter = ''; + let filtered = choices.map((_, i) => i); // indices into choices + let cursor = 0; // cursor within filtered list let linesWritten = 0; + // Viewport: reserve lines for header (1) + filter (1) + footer (1), rest for items. + const maxVisible = Math.max(3, (stdout.rows || 24) - 5); + let scrollOffset = 0; + + function applyFilter() { + if (filter === '') { + filtered = choices.map((_, i) => i); + } else { + const lower = filter.toLowerCase(); + const exact = [], prefix = [], contains = []; + for (let i = 0; i < choices.length; i++) { + const name = choices[i].name.toLowerCase(); + if (name === lower) exact.push(i); + else if (name.startsWith(lower)) prefix.push(i); + else if (name.includes(lower)) contains.push(i); + } + filtered = exact.concat(prefix, contains); + } + cursor = Math.min(cursor, Math.max(0, filtered.length - 1)); + scrollOffset = 0; + } + function render(final) { let out = ''; if (linesWritten > 0) out += `\x1b[${linesWritten}A`; @@ -429,23 +453,41 @@ async function checkbox({ message, choices }) { if (final) { out += CLEAR_DOWN; const names = choices.filter((_, i) => selected.has(i)).map(c => c.name).join(', '); - out += `${boltYellow('\u2714')} ${bold(message)} ${boltYellow(names)}\n`; + out += `${boltYellow('\u2714')} ${bold(message)} ${boltYellow(names || 'none')}\n`; linesWritten = 1; stdout.write(out); return; } - out += 
`\x1b[2K${boltYellow('?')} ${bold(message)}\n`; - for (let i = 0; i < choices.length; i++) { - const atCursor = i === cursor; + // Keep cursor in viewport. + if (cursor < scrollOffset) scrollOffset = cursor; + if (cursor >= scrollOffset + maxVisible) scrollOffset = cursor - maxVisible + 1; + + const visibleCount = Math.min(filtered.length, maxVisible); + const hasScrollUp = scrollOffset > 0; + const hasScrollDown = scrollOffset + visibleCount < filtered.length; + + out += `\x1b[2K${boltYellow('?')} ${bold(message)}`; + if (filter) out += ` ${dimText('filter:')} ${boltYellow(filter)}`; + out += '\n'; + if (hasScrollUp) out += `\x1b[2K ${dimText('\u2191 more')}\n`; + for (let v = scrollOffset; v < scrollOffset + visibleCount; v++) { + const i = filtered[v]; + const atCursor = v === cursor; const isChecked = selected.has(i); const prefix = atCursor ? boltYellow('\u276f') : ' '; const check = isChecked ? boltYellow('\u25c9') : dimText('\u25cb'); const label = atCursor ? boltYellow(choices[i].name) : choices[i].name; out += `\x1b[2K ${prefix} ${check} ${label}\n`; } - out += `\x1b[2K${dimText(' Space to toggle, Enter to confirm')}\n`; - linesWritten = choices.length + 2; + if (filtered.length === 0) { + out += `\x1b[2K ${dimText('No matches')}\n`; + } + if (hasScrollDown) out += `\x1b[2K ${dimText('\u2193 more')}\n`; + out += `\x1b[2K${dimText(' Space to toggle, Enter to confirm, type to filter')}\n`; + out += CLEAR_DOWN; + const renderedItems = filtered.length === 0 ? 1 : visibleCount; + linesWritten = renderedItems + 2 + (hasScrollUp ? 1 : 0) + (hasScrollDown ? 
1 : 0); stdout.write(out); } @@ -462,16 +504,36 @@ async function checkbox({ message, choices }) { reject(Object.assign(new Error('User cancelled'), { name: 'ExitPromptError' })); return; } - if (key === '\x1b[A') cursor = Math.max(0, cursor - 1); - else if (key === '\x1b[B') cursor = Math.min(choices.length - 1, cursor + 1); + if (key === '\x1b[A') cursor = Math.max(0, cursor - 1); // up + else if (key === '\x1b[B') cursor = Math.min(filtered.length - 1, cursor + 1); // down + else if (key === '\x1b[H' || key === '\x1b[1~') cursor = 0; // home + else if (key === '\x1b[F' || key === '\x1b[4~') cursor = Math.max(0, filtered.length - 1); // end + else if (key === '\x1b[5~') cursor = Math.max(0, cursor - maxVisible); // page up + else if (key === '\x1b[6~') cursor = Math.min(filtered.length - 1, cursor + maxVisible); // page down else if (key === ' ') { - if (selected.has(cursor)) selected.delete(cursor); - else selected.add(cursor); + if (filtered.length > 0) { + const i = filtered[cursor]; + if (selected.has(i)) selected.delete(i); + else selected.add(i); + } } else if (key === '\r') { render(true); cleanup(); resolve(choices.filter((_, i) => selected.has(i)).map(c => c.value)); return; + } else if (key === '\x7f' || key === '\b') { // backspace + if (filter.length > 0) { + filter = filter.slice(0, -1); + applyFilter(); + } + } else if (key === '\x1b') { // escape: clear filter + if (filter.length > 0) { + filter = ''; + applyFilter(); + } + } else if (key.length === 1 && key >= ' ' && key !== ' ') { // printable (except space) + filter += key; + applyFilter(); } render(false); }; @@ -689,6 +751,88 @@ function writeProjectConfig(tools, languages) { writeFileSync(configPath, JSON.stringify({ tools, languages }, null, 2) + '\n'); } +// ─── Database connection helpers ────────────────────────────────────────────── + +function ensureGlobalDir() { + const globalDir = join(homedir(), '.storm'); + const connectionsDir = join(globalDir, 'connections'); + 
mkdirSync(connectionsDir, { recursive: true }); + + const packageJsonPath = join(globalDir, 'package.json'); + if (!existsSync(packageJsonPath)) { + writeFileSync(packageJsonPath, '{"private":true}\n'); + } + + // Always overwrite server.mjs to keep in sync with CLI version. + writeFileSync(join(globalDir, 'server.mjs'), MCP_SERVER_SOURCE); + return globalDir; +} + +function installDriver(dialect) { + const globalDir = join(homedir(), '.storm'); + const driverPackage = DIALECTS[dialect].driver; + + // Skip if driver is already installed. + const driverDir = join(globalDir, 'node_modules', driverPackage); + if (existsSync(driverDir)) return true; + + console.log(dimText(` Installing ${DIALECTS[dialect].name} driver...`)); + try { + execSync(`npm install ${driverPackage} --prefix "${globalDir}"`, { + stdio: 'pipe', + timeout: 60000, + }); + return true; + } catch (error) { + console.log(boltYellow(' Failed to install driver. Make sure npm is available.')); + console.log(dimText(' ' + (error.stderr?.toString().trim() || error.message))); + return false; + } +} + +function listGlobalConnections() { + const connectionsDir = join(homedir(), '.storm', 'connections'); + if (!existsSync(connectionsDir)) return []; + return readdirSync(connectionsDir) + .filter(f => f.endsWith('.json')) + .map(f => f.replace(/\.json$/, '')); +} + +function listLocalConnections() { + const connectionsDir = join(process.cwd(), '.storm', 'connections'); + if (!existsSync(connectionsDir)) return []; + return readdirSync(connectionsDir) + .filter(f => f.endsWith('.json')) + .map(f => f.replace(/\.json$/, '')); +} + +function resolveConnection(name) { + const localPath = join(process.cwd(), '.storm', 'connections', name + '.json'); + if (existsSync(localPath)) return localPath; + const globalPath = join(homedir(), '.storm', 'connections', name + '.json'); + if (existsSync(globalPath)) return globalPath; + return null; +} + +function readDatabases() { + const databasesPath = join(process.cwd(), 
'.storm', 'databases.json'); + try { + return JSON.parse(readFileSync(databasesPath, 'utf-8')); + } catch { + return {}; + } +} + +function writeDatabases(map) { + const stormDir = join(process.cwd(), '.storm'); + mkdirSync(stormDir, { recursive: true }); + writeFileSync(join(stormDir, 'databases.json'), JSON.stringify(map, null, 2) + '\n'); +} + +function mcpServerName(alias) { + return alias === 'default' ? 'storm-schema' : `storm-schema-${alias}`; +} + // ─── Content (fetched from orm.st at runtime) ─────────────────────────────── const SKILLS_BASE_URL = 'https://orm.st/skills'; @@ -894,7 +1038,8 @@ import { fileURLToPath } from 'url'; var __dirname = dirname(fileURLToPath(import.meta.url)); var require = createRequire(import.meta.url); -var configPath = process.argv[2] || join(__dirname, 'connection.json'); +var configPath = process.argv[2]; +if (!configPath) { process.stderr.write('Usage: node server.mjs \\n'); process.exit(1); } var config = JSON.parse(readFileSync(configPath, 'utf-8')); // ─── Database ──────────────────────────────────────────── @@ -908,6 +1053,7 @@ async function connectDatabase() { db = new pg.Pool({ host: config.host, port: config.port, database: config.database, user: config.username, password: config.password, + options: '-c default_transaction_read_only=on', }); dbType = 'pg'; } else if (config.dialect === 'mysql' || config.dialect === 'mariadb') { @@ -916,13 +1062,14 @@ async function connectDatabase() { host: config.host, port: config.port, database: config.database, user: config.username, password: config.password, }); + await db.query('SET SESSION TRANSACTION READ ONLY'); dbType = 'mysql'; } else if (config.dialect === 'mssqlserver') { var mssql = require('mssql'); db = await mssql.connect({ server: config.host, port: config.port, database: config.database, user: config.username, password: config.password, - options: { encrypt: false, trustServerCertificate: true }, + options: { encrypt: false, trustServerCertificate: true, 
readOnlyIntent: true }, }); dbType = 'mssql'; } else if (config.dialect === 'oracle') { @@ -977,6 +1124,8 @@ function ph(n) { // ─── Schema queries ────────────────────────────────────── +var excludedTables = new Set((config.excludeTables || []).map(function(t) { return t.toLowerCase(); })); + async function listTables() { await dbReady; if (dbType === 'sqlite') { @@ -1008,9 +1157,23 @@ function describeSqlite(tableName) { var quoted = '"' + tableName.replace(/"/g, '""') + '"'; var columns = db.pragma('table_info(' + quoted + ')'); var fks = db.pragma('foreign_key_list(' + quoted + ')'); + var indexes = db.pragma('index_list(' + quoted + ')'); var fkMap = {}; fks.forEach(function(fk) { - fkMap[fk.from] = { referencedTable: fk.table, referencedColumn: fk.to }; + fkMap[fk.from] = { + referencedTable: fk.table, referencedColumn: fk.to, + onDelete: fk.on_delete, onUpdate: fk.on_update, + }; + }); + var uniqueConstraints = []; + indexes.forEach(function(idx) { + if (idx.unique && idx.origin !== 'pk') { + var idxCols = db.pragma('index_info("' + idx.name.replace(/"/g, '""') + '")'); + uniqueConstraints.push({ + name: idx.name, + columns: idxCols.map(function(c) { return c.name; }), + }); + } }); return { table: tableName, @@ -1021,6 +1184,7 @@ function describeSqlite(tableName) { foreignKey: fkMap[c.name] || null, }; }), + uniqueConstraints: uniqueConstraints, }; } @@ -1035,24 +1199,41 @@ async function describeOracle(tableName) { + ' WHERE cons.owner = ' + ph(1) + ' AND cons.table_name = ' + ph(2) + " AND cons.constraint_type = 'P' ORDER BY cols.position"; var fkSql = 'SELECT cols.column_name, r_cols.table_name AS referenced_table,' - + ' r_cols.column_name AS referenced_column FROM all_constraints cons' + + ' r_cols.column_name AS referenced_column, cons.delete_rule' + + ' FROM all_constraints cons' + ' JOIN all_cons_columns cols ON cons.constraint_name = cols.constraint_name' + ' AND cons.owner = cols.owner' + ' JOIN all_cons_columns r_cols ON cons.r_constraint_name = 
r_cols.constraint_name' + ' AND cons.r_owner = r_cols.owner' + ' WHERE cons.owner = ' + ph(1) + ' AND cons.table_name = ' + ph(2) + " AND cons.constraint_type = 'R'"; + var ukSql = 'SELECT cons.constraint_name, cols.column_name FROM all_constraints cons' + + ' JOIN all_cons_columns cols ON cons.constraint_name = cols.constraint_name' + + ' AND cons.owner = cols.owner' + + ' WHERE cons.owner = ' + ph(1) + ' AND cons.table_name = ' + ph(2) + + " AND cons.constraint_type = 'U' ORDER BY cons.constraint_name, cols.position"; var params = [schemaName, tableName]; var columns = await dbQuery(colSql, params); var pks = await dbQuery(pkSql, params); var fks = await dbQuery(fkSql, params); + var uks = await dbQuery(ukSql, params); var pkNames = pks.map(function(r) { return r.COLUMN_NAME; }); var fkMap = {}; fks.forEach(function(r) { fkMap[r.COLUMN_NAME] = { referencedTable: r.REFERENCED_TABLE, referencedColumn: r.REFERENCED_COLUMN, + onDelete: r.DELETE_RULE, onUpdate: 'NO ACTION', }; }); + var uniqueConstraints = []; + var ukMap = {}; + uks.forEach(function(r) { + if (!ukMap[r.CONSTRAINT_NAME]) { + ukMap[r.CONSTRAINT_NAME] = { name: r.CONSTRAINT_NAME, columns: [] }; + uniqueConstraints.push(ukMap[r.CONSTRAINT_NAME]); + } + ukMap[r.CONSTRAINT_NAME].columns.push(r.COLUMN_NAME); + }); return { table: tableName, columns: columns.map(function(c) { @@ -1063,6 +1244,7 @@ async function describeOracle(tableName) { foreignKey: fkMap[c.COLUMN_NAME] || null, }; }), + uniqueConstraints: uniqueConstraints, }; } @@ -1082,7 +1264,8 @@ async function describeInfoSchema(tableName) { var fkSql; if (dbType === 'pg') { fkSql = 'SELECT kcu.column_name, ccu.table_name AS referenced_table,' - + ' ccu.column_name AS referenced_column' + + ' ccu.column_name AS referenced_column,' + + ' rc.update_rule, rc.delete_rule' + ' FROM information_schema.table_constraints tc' + ' JOIN information_schema.key_column_usage kcu' + ' ON tc.constraint_name = kcu.constraint_name' @@ -1090,30 +1273,49 @@ async 
function describeInfoSchema(tableName) { + ' JOIN information_schema.constraint_column_usage ccu' + ' ON tc.constraint_name = ccu.constraint_name' + ' AND tc.table_schema = ccu.table_schema' + + ' JOIN information_schema.referential_constraints rc' + + ' ON tc.constraint_name = rc.constraint_name' + + ' AND tc.constraint_schema = rc.constraint_schema' + ' WHERE tc.table_schema = ' + ph(1) + ' AND tc.table_name = ' + ph(2) + " AND tc.constraint_type = 'FOREIGN KEY'"; } else if (dbType === 'mssql') { fkSql = 'SELECT COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS column_name,' + ' OBJECT_NAME(fkc.referenced_object_id) AS referenced_table,' - + ' COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) AS referenced_column' + + ' COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) AS referenced_column,' + + " REPLACE(fk.delete_referential_action_desc, '_', ' ') AS delete_rule," + + " REPLACE(fk.update_referential_action_desc, '_', ' ') AS update_rule" + ' FROM sys.foreign_key_columns fkc' + + ' JOIN sys.foreign_keys fk ON fkc.constraint_object_id = fk.object_id' + ' JOIN sys.tables t ON fkc.parent_object_id = t.object_id' + ' WHERE SCHEMA_NAME(t.schema_id) = ' + ph(1) + ' AND t.name = ' + ph(2); } else { fkSql = 'SELECT kcu.column_name,' + ' kcu.referenced_table_name AS referenced_table,' - + ' kcu.referenced_column_name AS referenced_column' + + ' kcu.referenced_column_name AS referenced_column,' + + ' rc.update_rule, rc.delete_rule' + ' FROM information_schema.key_column_usage kcu' + ' JOIN information_schema.table_constraints tc' + ' ON tc.constraint_name = kcu.constraint_name' + ' AND tc.table_schema = kcu.table_schema' + + ' JOIN information_schema.referential_constraints rc' + + ' ON tc.constraint_name = rc.constraint_name' + + ' AND tc.constraint_schema = rc.constraint_schema' + ' WHERE kcu.table_schema = ' + ph(1) + ' AND kcu.table_name = ' + ph(2) + " AND tc.constraint_type = 'FOREIGN KEY'"; } + var ukSql = 'SELECT tc.constraint_name, 
kcu.column_name' + + ' FROM information_schema.table_constraints tc' + + ' JOIN information_schema.key_column_usage kcu' + + ' ON tc.constraint_name = kcu.constraint_name' + + ' AND tc.table_schema = kcu.table_schema' + + ' WHERE tc.table_schema = ' + ph(1) + ' AND tc.table_name = ' + ph(2) + + " AND tc.constraint_type = 'UNIQUE'" + + ' ORDER BY tc.constraint_name, kcu.ordinal_position'; var params = [schemaName, tableName]; var columns = await dbQuery(colSql, params); var pks = await dbQuery(pkSql, params); var fks = await dbQuery(fkSql, params); + var uks = await dbQuery(ukSql, params); var pkNames = pks.map(function(r) { return r.column_name || r.COLUMN_NAME; }); var fkMap = {}; fks.forEach(function(r) { @@ -1121,8 +1323,21 @@ async function describeInfoSchema(tableName) { fkMap[col] = { referencedTable: r.referenced_table || r.REFERENCED_TABLE, referencedColumn: r.referenced_column || r.REFERENCED_COLUMN, + onDelete: r.delete_rule || r.DELETE_RULE || 'NO ACTION', + onUpdate: r.update_rule || r.UPDATE_RULE || 'NO ACTION', }; }); + var uniqueConstraints = []; + var ukMap = {}; + uks.forEach(function(r) { + var constraintName = r.constraint_name || r.CONSTRAINT_NAME; + var colName = r.column_name || r.COLUMN_NAME; + if (!ukMap[constraintName]) { + ukMap[constraintName] = { name: constraintName, columns: [] }; + uniqueConstraints.push(ukMap[constraintName]); + } + ukMap[constraintName].columns.push(colName); + }); return { table: tableName, columns: columns.map(function(c) { @@ -1135,9 +1350,143 @@ async function describeInfoSchema(tableName) { foreignKey: fkMap[name] || null, }; }), + uniqueConstraints: uniqueConstraints, }; } +// ─── Data queries ──────────────────────────────────────── + +var ALLOWED_OPERATORS = ['=', '!=', '<', '>', '<=', '>=', 'LIKE', 'IN', 'IS NULL', 'IS NOT NULL']; +var DEFAULT_ROWS = 50; +var MAX_ROWS = 500; +var MAX_CELL_LENGTH = 200; +var columnCache = {}; + +async function resolveColumns(tableName) { + if (columnCache[tableName]) return 
columnCache[tableName]; + var desc = await describeTable(tableName); + var names = desc.columns.map(function(c) { return c.name; }); + columnCache[tableName] = names; + return names; +} + +function quoteIdentifier(name) { + if (dbType === 'mysql') return '\`' + name.replace(/\`/g, '\`\`') + '\`'; + return '"' + name.replace(/"/g, '""') + '"'; +} + +function resolveTableName(tables, name) { + for (var i = 0; i < tables.length; i++) { + if (tables[i].toLowerCase() === name.toLowerCase()) return tables[i]; + } + return null; +} + +function resolveColumnName(validColumns, name) { + for (var i = 0; i < validColumns.length; i++) { + if (validColumns[i].toLowerCase() === name.toLowerCase()) return validColumns[i]; + } + return null; +} + +async function selectData(args) { + if (excludedTables.has((args.table || '').toLowerCase())) throw new Error('Data access is excluded for table: ' + args.table); + var tables = await listTables(); + var tableName = resolveTableName(tables, args.table); + if (!tableName) throw new Error('Unknown table: ' + args.table); + + var validColumns = await resolveColumns(tableName); + + var columns = args.columns; + if (columns && columns.length > 0) { + for (var i = 0; i < columns.length; i++) { + var resolved = resolveColumnName(validColumns, columns[i]); + if (!resolved) throw new Error('Unknown column: ' + columns[i] + ' in table ' + tableName); + columns[i] = resolved; + } + } + + var selectClause = (!columns || columns.length === 0) ? 
'*' : columns.map(quoteIdentifier).join(', '); + var sql = 'SELECT ' + selectClause + ' FROM ' + quoteIdentifier(tableName); + var params = []; + + if (args.where && args.where.length > 0) { + var conditions = []; + for (var i = 0; i < args.where.length; i++) { + var w = args.where[i]; + var resolvedCol = resolveColumnName(validColumns, w.column); + if (!resolvedCol) throw new Error('Unknown column: ' + w.column + ' in table ' + tableName); + w.column = resolvedCol; + var op = (w.operator || '=').toUpperCase(); + if (ALLOWED_OPERATORS.indexOf(op) < 0) throw new Error('Unsupported operator: ' + w.operator); + var col = quoteIdentifier(w.column); + if (op === 'IS NULL') { + conditions.push(col + ' IS NULL'); + } else if (op === 'IS NOT NULL') { + conditions.push(col + ' IS NOT NULL'); + } else if (op === 'IN') { + if (!Array.isArray(w.value)) throw new Error('IN operator requires an array value'); + var placeholders = w.value.map(function(v) { params.push(v); return ph(params.length); }); + conditions.push(col + ' IN (' + placeholders.join(', ') + ')'); + } else { + params.push(w.value); + conditions.push(col + ' ' + op + ' ' + ph(params.length)); + } + } + sql += ' WHERE ' + conditions.join(' AND '); + } + + if (args.orderBy && args.orderBy.length > 0) { + var orderParts = []; + for (var i = 0; i < args.orderBy.length; i++) { + var o = args.orderBy[i]; + var resolvedOrderCol = resolveColumnName(validColumns, o.column); + if (!resolvedOrderCol) throw new Error('Unknown column: ' + o.column + ' in table ' + tableName); + var dir = (o.direction || 'ASC').toUpperCase(); + if (dir !== 'ASC' && dir !== 'DESC') dir = 'ASC'; + orderParts.push(quoteIdentifier(resolvedOrderCol) + ' ' + dir); + } + sql += ' ORDER BY ' + orderParts.join(', '); + } + + var limit = Math.min(args.limit || DEFAULT_ROWS, MAX_ROWS); + var offset = Math.max(0, Math.floor(args.offset || 0)); + if (dbType === 'mssql') { + if (offset > 0) { + if (!args.orderBy || args.orderBy.length === 0) { + sql += ' 
ORDER BY (SELECT NULL)'; + } + sql += ' OFFSET ' + offset + ' ROWS FETCH NEXT ' + limit + ' ROWS ONLY'; + } else { + sql = sql.replace('SELECT ', 'SELECT TOP ' + limit + ' '); + } + } else if (dbType === 'oracle') { + sql += ' OFFSET ' + offset + ' ROWS FETCH NEXT ' + limit + ' ROWS ONLY'; + } else { + sql += ' LIMIT ' + limit; + if (offset > 0) sql += ' OFFSET ' + offset; + } + + var rows = await dbQuery(sql, params); + if (!rows || rows.length === 0) return { table: tableName, rows: 0, data: 'No rows found.' }; + + // Format as markdown table. + var keys = Object.keys(rows[0]); + var header = '| ' + keys.join(' | ') + ' |'; + var separator = '| ' + keys.map(function() { return '---'; }).join(' | ') + ' |'; + var body = rows.map(function(row) { + return '| ' + keys.map(function(k) { + var v = row[k]; + if (v === null || v === undefined) return 'NULL'; + var s = String(v).replace(/[|]/g, '/').replace(/[\\n\\r]+/g, ' '); + if (s.length > MAX_CELL_LENGTH) s = s.substring(0, MAX_CELL_LENGTH) + '...'; + return s; + }).join(' | ') + ' |'; + }); + + return { table: tableName, rows: rows.length, data: header + '\\n' + separator + '\\n' + body.join('\\n') }; +} + // ─── MCP Protocol ──────────────────────────────────────── var TOOLS = [ @@ -1148,7 +1497,7 @@ var TOOLS = [ }, { name: 'describe_table', - description: 'Describe a table: columns, types, nullability, primary key, and foreign keys.', + description: 'Describe a table: columns, types, nullability, primary key, foreign keys (with cascade rules), and unique constraints.', inputSchema: { type: 'object', properties: { table: { type: 'string', description: 'Table name' } }, @@ -1157,6 +1506,46 @@ var TOOLS = [ }, ]; +if (config.selectAccess) { + TOOLS.push({ + name: 'select_data', + description: 'Query records from a table. 
Returns individual rows, no aggregation, grouping, or joins.', + inputSchema: { + type: 'object', + properties: { + table: { type: 'string', description: 'Table name' }, + columns: { type: 'array', items: { type: 'string' }, description: 'Columns to return (omit for all columns)' }, + where: { + type: 'array', description: 'Filter conditions (combined with AND)', + items: { + type: 'object', + properties: { + column: { type: 'string', description: 'Column name' }, + operator: { type: 'string', enum: ['=', '!=', '<', '>', '<=', '>=', 'LIKE', 'IN', 'IS NULL', 'IS NOT NULL'], description: 'Comparison operator (default: =)' }, + value: { description: 'Comparison value (omit for IS NULL/IS NOT NULL, use array for IN)' }, + }, + required: ['column'], + }, + }, + orderBy: { + type: 'array', description: 'Sort order', + items: { + type: 'object', + properties: { + column: { type: 'string', description: 'Column name' }, + direction: { type: 'string', enum: ['ASC', 'DESC'], description: 'Sort direction (default: ASC)' }, + }, + required: ['column'], + }, + }, + offset: { type: 'integer', description: 'Number of rows to skip (default: 0)', minimum: 0 }, + limit: { type: 'integer', description: 'Maximum rows to return (default: 50, max: 500)', minimum: 1, maximum: 500 }, + }, + required: ['table'], + }, + }); +} + function respond(id, result) { process.stdout.write(JSON.stringify({ jsonrpc: '2.0', id: id, result: result }) + '\\n'); } @@ -1175,7 +1564,7 @@ rl.on('line', async function(line) { respond(msg.id, { protocolVersion: '2024-11-05', capabilities: { tools: {} }, - serverInfo: { name: 'storm-schema', version: '${VERSION}' }, + serverInfo: { name: 'storm-schema (' + configPath.replace(/.*[\\\\/]/, '').replace('.json', '') + ')', version: '${VERSION}' }, }); } else if (msg.method === 'notifications/initialized') { // no response @@ -1188,6 +1577,9 @@ rl.on('line', async function(line) { result = await listTables(); } else if (msg.params.name === 'describe_table') { result 
= await describeTable(msg.params.arguments.table); + } else if (msg.params.name === 'select_data') { + if (!config.selectAccess) { respondError(msg.id, -32601, 'Data access is not enabled for this connection'); return; } + result = await selectData(msg.params.arguments); } else { respondError(msg.id, -32601, 'Unknown tool: ' + msg.params.name); return; @@ -1235,22 +1627,31 @@ function installRulesBlock(filePath, content, created, appended) { } } -function registerMcp(toolConfig, stormDir, created, appended) { +function registerMcp(toolConfig, alias, connectionPath, created, appended) { const cwd = process.cwd(); - const serverPath = join(stormDir, 'mcp-server.mjs'); + const serverPath = join(homedir(), '.storm', 'server.mjs'); + const serverName = mcpServerName(alias); if (toolConfig.mcpFormat === 'codex') { // Codex uses TOML in .codex/config.toml const tomlPath = join(cwd, '.codex', 'config.toml'); - const tomlEntry = '\n[mcp_servers.storm-schema]\n' + const tomlEntry = `\n[mcp_servers.${serverName}]\n` + 'type = "stdio"\n' + 'command = "node"\n' - + 'args = ["' + serverPath + '"]\n'; + + `args = ["${serverPath}", "${connectionPath}"]\n`; if (existsSync(tomlPath)) { const existing = readFileSync(tomlPath, 'utf-8'); - if (!existing.includes('storm-schema')) { + if (!existing.includes(serverName)) { appendFileSync(tomlPath, tomlEntry); appended.push('.codex/config.toml'); + } else { + // Replace existing entry with updated args. 
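+          // Illustrative: an existing block such as
+          //   [mcp_servers.<serverName>]
+          //   type = "stdio"
+          //   command = "node"
+          //   args = ["/old/path/server.mjs"]
+          // is matched up to the next `[` section header (or end of file)
+          // and rewritten in place with the current server path and connection file.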
+ const regex = new RegExp(`\\[mcp_servers\\.${serverName}\\][\\s\\S]*?(?=\\n\\[|$)`); + const updated = existing.replace(regex, tomlEntry.trimStart().trimEnd()); + if (updated !== existing) { + writeFileSync(tomlPath, updated); + appended.push('.codex/config.toml'); + } } } else { mkdirSync(dirname(tomlPath), { recursive: true }); @@ -1274,41 +1675,104 @@ function registerMcp(toolConfig, stormDir, created, appended) { try { mcpConfig = JSON.parse(readFileSync(mcpPath, 'utf-8')); } catch {} } if (!mcpConfig.mcpServers) mcpConfig.mcpServers = {}; - if (mcpConfig.mcpServers['storm-schema']) return; // already registered - mcpConfig.mcpServers['storm-schema'] = { - type: 'stdio', - command: 'node', - args: [serverPath], - }; + const desired = { type: 'stdio', command: 'node', args: [serverPath, connectionPath] }; + const existing = mcpConfig.mcpServers[serverName]; + if (existing && JSON.stringify(existing) === JSON.stringify(desired)) return; + + mcpConfig.mcpServers[serverName] = desired; mkdirSync(dirname(mcpPath), { recursive: true }); writeFileSync(mcpPath, JSON.stringify(mcpConfig, null, 2) + '\n'); (isNew ? 
created : appended).push(toolConfig.mcpFile); } +function unregisterMcp(toolConfig, alias) { + const serverName = mcpServerName(alias); + + if (toolConfig.mcpFormat === 'codex') { + const tomlPath = join(process.cwd(), '.codex', 'config.toml'); + if (!existsSync(tomlPath)) return false; + const existing = readFileSync(tomlPath, 'utf-8'); + const regex = new RegExp(`\\n?\\[mcp_servers\\.${serverName}\\][\\s\\S]*?(?=\\n\\[|$)`, 'g'); + const updated = existing.replace(regex, ''); + if (updated !== existing) { + writeFileSync(tomlPath, updated); + return true; + } + return false; + } + + if (toolConfig.mcpFormat === 'windsurf' || !toolConfig.mcpFile) return false; + + const mcpPath = join(process.cwd(), toolConfig.mcpFile); + if (!existsSync(mcpPath)) return false; + let mcpConfig; + try { mcpConfig = JSON.parse(readFileSync(mcpPath, 'utf-8')); } catch { return false; } + if (!mcpConfig.mcpServers || !mcpConfig.mcpServers[serverName]) return false; + + delete mcpConfig.mcpServers[serverName]; + writeFileSync(mcpPath, JSON.stringify(mcpConfig, null, 2) + '\n'); + return true; +} + +function registerAllMcpServers(tools, created, appended) { + const databases = readDatabases(); + for (const [alias, connectionName] of Object.entries(databases)) { + const connectionPath = resolveConnection(connectionName); + if (!connectionPath) { + console.log(dimText(` Skipping ${alias}: connection "${connectionName}" not found`)); + continue; + } + for (const toolId of tools) { + const config = TOOL_CONFIGS[toolId]; + if (!config.mcpFormat) continue; + registerMcp(config, alias, connectionPath, created, appended); + } + } +} + // ─── Database setup ────────────────────────────────────────────────────────── -async function setupDatabase(preConfigured) { - const stormDir = join(homedir(), '.storm'); - const connectionPath = join(stormDir, 'connection.json'); +function formatConnectionLabel(name, connection) { + if (connection.host) { + return `${name} 
(${connection.dialect}://${connection.host}${connection.port ? ':' + connection.port : ''}/${connection.database})`; + } + return `${name} (${connection.dialect}:${connection.database})`; +} +async function setupGlobalConnection(connectionName, preConfigured) { let connection; if (preConfigured) { // Non-interactive: use the provided connection details directly. connection = preConfigured; + if (!connectionName) { + connectionName = preConfigured.dialect + (preConfigured.database ? '-' + preConfigured.database : ''); + connectionName = connectionName.toLowerCase().replace(/[^a-z0-9-]/g, '-'); + } } else { - // Interactive: prompt the user. - if (existsSync(connectionPath)) { - try { - const existing = JSON.parse(readFileSync(connectionPath, 'utf-8')); - console.log(dimText(` Current connection: ${existing.dialect}://${existing.host || ''}${existing.port ? ':' + existing.port : ''}/${existing.database}`)); - console.log(); - const reconfigure = await confirm({ message: 'Reconfigure?', defaultValue: false }); - if (!reconfigure) return stormDir; - console.log(); - } catch {} + // Interactive: show existing connections or create new. 
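+    // Labels render roughly as "my-conn (postgresql://localhost:5432/mydb)" for
+    // host-based connections, or "my-conn (sqlite:app.db)" for file-based ones
+    // (connection and dialect names here are illustrative).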
+ const existingConnections = listGlobalConnections(); + + if (existingConnections.length > 0) { + const choices = existingConnections.map(name => { + const connectionPath = join(homedir(), '.storm', 'connections', name + '.json'); + try { + const conn = JSON.parse(readFileSync(connectionPath, 'utf-8')); + return { name: formatConnectionLabel(name, conn), value: name }; + } catch { + return { name, value: name }; + } + }); + choices.push({ name: 'Create new connection', value: '__new__' }); + + const picked = await select({ message: 'Database connection', choices }); + if (picked !== '__new__') { + const connectionPath = join(homedir(), '.storm', 'connections', picked + '.json'); + return { name: picked, path: connectionPath }; + } + console.log(); } const dialect = await select({ @@ -1362,36 +1826,56 @@ async function setupDatabase(preConfigured) { password: password || '', }; } - } - // Install driver. - const driverPackage = DIALECTS[connection.dialect].driver; - console.log(); - console.log(dimText(` Installing ${DIALECTS[connection.dialect].name} driver...`)); + console.log(); + const selectAccess = await confirm({ + message: 'Allow AI tools to query data? (read-only SELECT)', + defaultValue: false, + }); + connection.selectAccess = selectAccess; - mkdirSync(stormDir, { recursive: true }); + if (selectAccess) { + console.log(dimText(' Tip: Use `storm db config ` to exclude specific tables from data queries.')); + } - const packageJsonPath = join(stormDir, 'package.json'); - if (!existsSync(packageJsonPath)) { - writeFileSync(packageJsonPath, '{"private":true}\n'); - } + // Prompt for a connection name with a sensible default derived from host + database. + if (!connectionName) { + const host = connection.host || 'local'; + const dbName = connection.database + ? 
basename(connection.database).replace(/\.[^.]+$/, '') // strip extension for file-based + : dialect; + const suggestedName = (host + '-' + dbName).toLowerCase().replace(/[^a-z0-9.-]/g, '-'); + connectionName = await textInput({ message: 'Connection name', defaultValue: suggestedName }); + if (!connectionName || connectionName.trim() === '') { + connectionName = suggestedName; + } + connectionName = connectionName.trim().toLowerCase().replace(/[^a-z0-9-]/g, '-'); + } - try { - execSync(`npm install ${driverPackage} --prefix "${stormDir}"`, { - stdio: 'pipe', - timeout: 60000, - }); - } catch (error) { - console.log(boltYellow(' Failed to install driver. Make sure npm is available.')); - console.log(dimText(' ' + (error.stderr?.toString().trim() || error.message))); - return null; + // Check if this name already exists globally. + const existingPath = join(homedir(), '.storm', 'connections', connectionName + '.json'); + if (existsSync(existingPath)) { + try { + const existing = JSON.parse(readFileSync(existingPath, 'utf-8')); + console.log(dimText(` Existing: ${formatConnectionLabel(connectionName, existing)}`)); + console.log(); + const reconfigure = await confirm({ message: 'Overwrite?', defaultValue: true }); + if (!reconfigure) return { name: connectionName, path: existingPath }; + console.log(); + } catch {} + } } - // Write connection config and MCP server. - writeFileSync(connectionPath, JSON.stringify(connection, null, 2) + '\n'); - writeFileSync(join(stormDir, 'mcp-server.mjs'), MCP_SERVER_SOURCE); + // Ensure global directory structure and install driver. + const globalDir = ensureGlobalDir(); + console.log(); + if (!installDriver(connection.dialect)) return null; + + // Write connection config. 
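+  // Illustrative shape of the saved connection file (fields vary by dialect):
+  //   { "dialect": "postgresql", "host": "localhost", "port": 5432,
+  //     "database": "mydb", "user": "dev", "password": "...",
+  //     "selectAccess": false }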
+ const connectionFilePath = join(globalDir, 'connections', connectionName + '.json'); + writeFileSync(connectionFilePath, JSON.stringify(connection, null, 2) + '\n'); - return stormDir; + return { name: connectionName, path: connectionFilePath }; } // ─── Update (non-interactive) ──────────────────────────────────────────────── @@ -1519,9 +2003,7 @@ async function update() { } // Also update schema-dependent skills if database is configured. - const stormDir = join(homedir(), '.storm'); - const connectionPath = join(stormDir, 'connection.json'); - if (existsSync(connectionPath)) { + if (Object.keys(readDatabases()).length > 0) { const schemaRules = await fetchSkill('storm-schema-rules'); for (const toolId of tools) { const config = TOOL_CONFIGS[toolId]; @@ -1555,6 +2037,11 @@ async function update() { cleanStaleSkills(skillToolConfigs, installedSkillNames, skipped); } + // Update MCP server script if databases are configured. + if (Object.keys(readDatabases()).length > 0) { + ensureGlobalDir(); + } + const uniqueCreated = [...new Set(created)]; const uniqueAppended = [...new Set(appended)]; @@ -1576,38 +2063,361 @@ async function update() { console.log(); } -// ─── MCP (non-interactive re-register) ────────────────────────────────────── +// ─── Database commands (global connection library) ─────────────────────────── -async function updateMcp() { - const stormDir = join(homedir(), '.storm'); - const connectionPath = join(stormDir, 'connection.json'); +async function dbAdd(nameArg) { + const result = await setupGlobalConnection(nameArg || null, null); + if (!result) return; + console.log(); + console.log(bold(` Connection "${result.name}" saved globally.`)); + console.log(dimText(` ${result.path}`)); + console.log(); +} +function dbList() { + const connections = listGlobalConnections(); + if (connections.length === 0) { + console.log(dimText('\n No global database connections configured.')); + console.log(dimText(' Run `storm db add` to add one.\n')); + return; + } 
+ console.log(); + for (const name of connections) { + const connectionPath = join(homedir(), '.storm', 'connections', name + '.json'); + try { + const connection = JSON.parse(readFileSync(connectionPath, 'utf-8')); + console.log(` ${boltYellow(name)} ${formatConnectionLabel(name, connection).replace(name + ' ', '')}`); + } catch { + console.log(` ${boltYellow(name)} (unreadable)`); + } + } + console.log(); +} + +async function dbRemove(nameArg) { + const connections = listGlobalConnections(); + if (connections.length === 0) { + console.log(dimText('\n No global database connections to remove.\n')); + return; + } + + let name = nameArg; + if (!name) { + name = await select({ + message: 'Remove which connection?', + choices: connections.map(n => { + const connectionPath = join(homedir(), '.storm', 'connections', n + '.json'); + try { + const connection = JSON.parse(readFileSync(connectionPath, 'utf-8')); + return { name: formatConnectionLabel(n, connection), value: n }; + } catch { + return { name: n, value: n }; + } + }), + }); + } + + const connectionPath = join(homedir(), '.storm', 'connections', name + '.json'); + if (!existsSync(connectionPath)) { + console.log(boltYellow(`\n Connection "${name}" not found.\n`)); + return; + } + + unlinkSync(connectionPath); + console.log(); + console.log(bold(` Removed global connection "${name}".`)); + console.log(); +} + +async function dbConfig(nameArg) { + const connections = listGlobalConnections(); + if (connections.length === 0) { + console.log(dimText('\n No global database connections configured.')); + console.log(dimText(' Run `storm db add` to add one.\n')); + return; + } + + let name = nameArg; + if (!name) { + name = await select({ + message: 'Configure connection', + choices: connections.map(n => { + const connectionPath = join(homedir(), '.storm', 'connections', n + '.json'); + try { + const connection = JSON.parse(readFileSync(connectionPath, 'utf-8')); + return { name: formatConnectionLabel(n, connection), 
value: n }; + } catch { + return { name: n, value: n }; + } + }), + }); + } + + const connectionPath = join(homedir(), '.storm', 'connections', name + '.json'); if (!existsSync(connectionPath)) { - // No existing connection; fall back to interactive setup. - console.log(dimText(' No existing database connection found. Starting setup...')); + console.log(boltYellow(`\n Connection "${name}" not found.\n`)); + return; + } + + const connection = JSON.parse(readFileSync(connectionPath, 'utf-8')); + let changed = false; + + // Data access toggle. + console.log(); + const wantsSelectAccess = await confirm({ + message: 'Allow AI tools to query data? (read-only SELECT)', + defaultValue: connection.selectAccess || false, + }); + if (wantsSelectAccess !== (connection.selectAccess || false)) { + connection.selectAccess = wantsSelectAccess; + changed = true; + } + + // Table exclusions (only when data access is enabled). + if (connection.selectAccess) { + // Ensure server.mjs is up to date before spawning it. 
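+    // The table list below is fetched over the MCP stdio protocol: an `initialize`
+    // request followed by a `tools/call` for `list_tables`. The relevant response
+    // line looks roughly like (illustrative table names):
+    //   {"jsonrpc":"2.0","id":2,"result":{"content":[{"type":"text","text":"[\"users\",\"orders\"]"}]}}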
+ ensureGlobalDir(); + + const serverPath = join(homedir(), '.storm', 'server.mjs'); + console.log(dimText('\n Connecting to database...')); + let tables; + try { + tables = await new Promise((resolve, reject) => { + const child = spawn('node', [serverPath, connectionPath], { stdio: ['pipe', 'pipe', 'pipe'] }); + let output = ''; + child.stdout.on('data', data => { output += data.toString(); }); + child.stderr.on('data', () => {}); + + child.stdin.write(JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'initialize', params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'storm-cli' } } }) + '\n'); + + setTimeout(() => { + child.stdin.write(JSON.stringify({ jsonrpc: '2.0', id: 2, method: 'tools/call', params: { name: 'list_tables', arguments: {} } }) + '\n'); + }, 500); + + setTimeout(() => { + child.kill(); + try { + const lines = output.trim().split('\n'); + for (const line of lines) { + const msg = JSON.parse(line); + if (msg.id === 2 && msg.result) { + resolve(JSON.parse(msg.result.content[0].text)); + return; + } + } + reject(new Error('No table list response received')); + } catch (e) { + reject(e); + } + }, 3000); + + child.on('error', reject); + }); + } catch (error) { + console.log(boltYellow(` Could not connect to database: ${error.message}`)); + tables = null; + } + + if (tables && tables.length > 0) { + const currentExclusions = new Set((connection.excludeTables || []).map(t => t.toLowerCase())); + + console.log(); + const excluded = await checkbox({ + message: 'Exclude tables from data queries', + choices: tables.map(t => ({ + name: t, + value: t, + checked: currentExclusions.has(t.toLowerCase()), + })), + }); + + const newExclusions = excluded.length > 0 ? excluded : undefined; + if (JSON.stringify(newExclusions) !== JSON.stringify(connection.excludeTables)) { + connection.excludeTables = newExclusions; + changed = true; + } + } + } else if (connection.excludeTables) { + // Data access disabled — clear exclusions. 
+ delete connection.excludeTables; + changed = true; + } + + if (changed) { + writeFileSync(connectionPath, JSON.stringify(connection, null, 2) + '\n'); + ensureGlobalDir(); + console.log(); + console.log(bold(` Connection "${name}" updated.`)); + if (connection.selectAccess) { + console.log(dimText(' Data access: enabled')); + if (connection.excludeTables && connection.excludeTables.length > 0) { + console.log(dimText(` Excluded tables: ${connection.excludeTables.join(', ')}`)); + } + } else { + console.log(dimText(' Data access: disabled')); + } console.log(); - const result = await setupDatabase(); - if (!result) return; + console.log(dimText(' Restart your AI tool to apply changes.')); + } else { + console.log(dimText('\n No changes.')); } + console.log(); +} + +async function updateDb(subArgs) { + const subcommand = subArgs ? subArgs[0] : undefined; + + if (subcommand === 'add') { + await dbAdd(subArgs[1]); + } else if (subcommand === 'list' || subcommand === 'ls' || !subcommand) { + dbList(); + } else if (subcommand === 'remove' || subcommand === 'rm') { + await dbRemove(subArgs[1]); + } else if (subcommand === 'config') { + await dbConfig(subArgs[1]); + } else { + console.log(boltYellow(`\n Unknown db command: ${subcommand}`)); + console.log(dimText(' Available: add, list, remove, config\n')); + } +} - // Re-register MCP for all configured tools. +// ─── MCP commands ──────────────────────────────────────────────────────────── + +async function mcpAdd(aliasArg) { const tools = detectConfiguredTools(); if (tools.length === 0) { console.log(boltYellow('\n No configured AI tools found. Run `storm init` first.\n')); return; } + const alias = aliasArg || await textInput({ message: 'Alias for this connection', defaultValue: 'default' }) || 'default'; + console.log(); + + const result = await setupGlobalConnection(null, null); + if (!result) return; + + // Update project databases.json. 
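+  // Illustrative mapping written to the project databases file:
+  //   { "default": "localhost-mydb" }
+  // (project alias -> global connection name)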
+ const databases = readDatabases(); + databases[alias] = result.name; + writeDatabases(databases); + + // Ensure global server.mjs exists. + ensureGlobalDir(); + + // Register MCP for all configured tools. const created = []; const appended = []; for (const toolId of tools) { const config = TOOL_CONFIGS[toolId]; if (!config.mcpFormat) continue; - if (config.mcpFormat === 'codex' && existsSync(join(process.cwd(), '.codex'))) { - registerMcp(config, stormDir, created, appended); - } else if (config.mcpFile) { - registerMcp(config, stormDir, created, appended); + registerMcp(config, alias, result.path, created, appended); + } + + // Add .storm/ to .gitignore. + const gitignorePath = join(process.cwd(), '.gitignore'); + let gitignore = existsSync(gitignorePath) ? readFileSync(gitignorePath, 'utf-8') : ''; + if (!gitignore.includes('.storm/')) { + appendFileSync(gitignorePath, '\n# Storm database config (machine-specific)\n.storm/\n'); + } + + console.log(); + console.log(bold(` Database "${alias}" -> ${result.name} added.`)); + if (created.length > 0 || appended.length > 0) { + [...created, ...appended].forEach(f => console.log(dimText(` ${f}`))); + } + console.log(); +} + +function mcpList() { + const databases = readDatabases(); + const entries = Object.entries(databases); + if (entries.length === 0) { + console.log(dimText('\n No databases configured for this project.')); + console.log(dimText(' Run `storm mcp add` to add one.\n')); + return; + } + console.log(); + for (const [alias, connectionName] of entries) { + const connectionPath = resolveConnection(connectionName); + const source = connectionPath + ? (connectionPath.startsWith(join(process.cwd(), '.storm')) ? 'local' : 'global') + : 'missing'; + let detail = ''; + if (connectionPath) { + try { + const connection = JSON.parse(readFileSync(connectionPath, 'utf-8')); + if (connection.host) { + detail = ` ${connection.dialect}://${connection.host}${connection.port ? 
':' + connection.port : ''}/${connection.database}`; + } else { + detail = ` ${connection.dialect}:${connection.database}`; + } + } catch {} } + const serverName = mcpServerName(alias); + console.log(` ${boltYellow(alias)} -> ${connectionName} (${source})${detail}`); + console.log(dimText(` MCP server: ${serverName}`)); } + console.log(); +} + +async function mcpRemove(aliasArg) { + const databases = readDatabases(); + const entries = Object.entries(databases); + if (entries.length === 0) { + console.log(dimText('\n No databases configured for this project.\n')); + return; + } + + let alias = aliasArg; + if (!alias) { + alias = await select({ + message: 'Remove which connection?', + choices: entries.map(([a, name]) => ({ name: `${a} -> ${name}`, value: a })), + }); + } + + if (!databases[alias]) { + console.log(boltYellow(`\n Connection "${alias}" not found in this project.\n`)); + return; + } + + // Unregister from all tools. + const tools = detectConfiguredTools(); + for (const toolId of tools) { + const config = TOOL_CONFIGS[toolId]; + if (!config.mcpFormat) continue; + unregisterMcp(config, alias); + } + + delete databases[alias]; + writeDatabases(databases); + + console.log(); + console.log(bold(` Removed "${alias}" from project.`)); + console.log(); +} + +async function mcpReregisterAll() { + const databases = readDatabases(); + if (Object.keys(databases).length === 0) { + console.log(dimText('\n No databases configured. Adding one now...')); + console.log(); + await mcpAdd('default'); + return; + } + + const tools = detectConfiguredTools(); + if (tools.length === 0) { + console.log(boltYellow('\n No configured AI tools found. Run `storm init` first.\n')); + return; + } + + // Ensure global server.mjs is current. 
+ ensureGlobalDir(); + + const created = []; + const appended = []; + registerAllMcpServers(tools, created, appended); console.log(); if (created.length > 0) { @@ -1628,6 +2438,122 @@ async function updateMcp() { console.log(); } +async function mcpInit() { + console.log(); + console.log(' Set up a database-aware MCP server for your AI tools.'); + console.log(' This gives your AI tool access to your database schema'); + console.log(' (and optionally data) without exposing credentials.'); + console.log(' No Storm ORM required — works with any language or framework.'); + console.log(); + + // Step 1: Select AI tools that support MCP. + const mcpToolEntries = Object.entries(TOOL_CONFIGS).filter(([_, c]) => c.mcpFormat); + const tools = await checkbox({ + message: 'Which AI tools do you use?', + choices: mcpToolEntries.map(([id, config]) => ({ + name: config.name, + value: id, + checked: false, + })), + }); + + if (tools.length === 0) { + console.log(boltYellow('\n No tools selected.\n')); + return; + } + + // Step 2: Database connection(s). + const databases = {}; + console.log(); + const result = await setupGlobalConnection(null, null); + if (!result) return; + + const alias = await textInput({ message: 'Alias for this connection', defaultValue: 'default' }) || 'default'; + databases[alias.trim()] = result.name; + + let addMore = true; + while (addMore) { + console.log(); + addMore = await confirm({ message: 'Add another database connection?', defaultValue: false }); + if (addMore) { + console.log(); + const nextResult = await setupGlobalConnection(null, null); + if (nextResult) { + const nextAlias = await textInput({ message: 'Alias for this connection' }); + if (nextAlias && nextAlias.trim()) { + databases[nextAlias.trim()] = nextResult.name; + } + } + } + } + + writeDatabases(databases); + ensureGlobalDir(); + + // Step 3: Register MCP servers and update .gitignore. 
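+  // Illustrative resulting entry in a JSON-based tool config (e.g. .mcp.json);
+  // the server name is derived from the alias via mcpServerName():
+  //   "mcpServers": {
+  //     "<serverName>": { "type": "stdio", "command": "node",
+  //                       "args": ["~/.storm/server.mjs", "<connection>.json"] }
+  //   }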
+ const created = []; + const appended = []; + + for (const [alias, connectionName] of Object.entries(databases)) { + const connectionPath = resolveConnection(connectionName); + if (!connectionPath) continue; + for (const toolId of tools) { + const config = TOOL_CONFIGS[toolId]; + if (!config.mcpFormat) continue; + registerMcp(config, alias, connectionPath, created, appended); + } + } + + const gitignorePath = join(process.cwd(), '.gitignore'); + const ignoreEntries = ['.storm/']; + for (const toolId of tools) { + const config = TOOL_CONFIGS[toolId]; + if (config.mcpFile) ignoreEntries.push(config.mcpFile); + } + let gitignore = existsSync(gitignorePath) ? readFileSync(gitignorePath, 'utf-8') : ''; + const missing = ignoreEntries.filter(e => !gitignore.includes(e)); + if (missing.length > 0) { + const block = '\n# Storm MCP (machine-specific)\n' + missing.join('\n') + '\n'; + appendFileSync(gitignorePath, block); + appended.push('.gitignore'); + } + + // Summary. + console.log(); + if (created.length > 0 || appended.length > 0) { + [...new Set([...created, ...appended])].forEach(f => console.log(dimText(` ${f}`))); + } + + console.log(); + console.log(bold(' MCP server configured.')); + console.log(); + const toolNames = tools.map(t => TOOL_CONFIGS[t].name); + console.log(` Your AI tool${toolNames.length > 1 ? 's' : ''} (${toolNames.join(', ')}) can now access your`); + console.log(' database schema. Restart your AI tool to activate.'); + console.log(); + console.log(dimText(' Manage later with:')); + console.log(dimText(' storm db config Toggle data access and table exclusions')); + console.log(dimText(' storm mcp add Add another database to this project')); + console.log(dimText(' storm mcp list Show configured databases')); + console.log(); +} + +async function updateMcp(subArgs) { + const subcommand = subArgs ? 
subArgs[0] : undefined; + + if (subcommand === 'init') { + await mcpInit(); + } else if (subcommand === 'add') { + await mcpAdd(subArgs[1]); + } else if (subcommand === 'list' || subcommand === 'ls') { + mcpList(); + } else if (subcommand === 'remove' || subcommand === 'rm') { + await mcpRemove(subArgs[1]); + } else { + await mcpReregisterAll(); + } +} + // ─── Main flow ─────────────────────────────────────────────────────────────── async function setup() { @@ -1728,10 +2654,10 @@ async function setup() { } } - // Step 4: Optional database connection. + // Step 4: Optional database connection(s). const mcpTools = tools.filter(t => TOOL_CONFIGS[t].mcpFormat); let dbConfigured = false; - let stormDir = null; + const databases = {}; if (mcpTools.length > 0) { console.log(); @@ -1747,34 +2673,58 @@ async function setup() { if (connectDb) { console.log(); - stormDir = await setupDatabase(); - dbConfigured = stormDir !== null; + const result = await setupGlobalConnection(null, null); + if (result) { + const alias = await textInput({ message: 'Alias for this connection', defaultValue: 'default' }) || 'default'; + databases[alias.trim()] = result.name; + dbConfigured = true; + + // Offer to add more connections. + let addMore = true; + while (addMore) { + console.log(); + addMore = await confirm({ message: 'Add another database connection?', defaultValue: false }); + if (addMore) { + console.log(); + const nextResult = await setupGlobalConnection(null, null); + if (nextResult) { + const nextAlias = await textInput({ message: 'Alias for this connection' }); + if (nextAlias && nextAlias.trim()) { + databases[nextAlias.trim()] = nextResult.name; + } + } + } + } + + writeDatabases(databases); + } } } // Step 5: Register MCP, append schema rules, update .gitignore, and install schema skills. if (dbConfigured) { - // Add MCP config files to .gitignore (they contain machine-specific paths). + // Add .storm/ and MCP config files to .gitignore. 
const gitignorePath = join(process.cwd(), '.gitignore'); - const mcpIgnoreEntries = []; + const ignoreEntries = ['.storm/']; for (const toolId of tools) { const config = TOOL_CONFIGS[toolId]; - if (config.mcpFile) mcpIgnoreEntries.push(config.mcpFile); + if (config.mcpFile) ignoreEntries.push(config.mcpFile); } - if (mcpIgnoreEntries.length > 0) { - let gitignore = existsSync(gitignorePath) ? readFileSync(gitignorePath, 'utf-8') : ''; - const missing = mcpIgnoreEntries.filter(e => !gitignore.includes(e)); - if (missing.length > 0) { - const block = '\n# Storm MCP (machine-specific paths)\n' + missing.join('\n') + '\n'; - appendFileSync(gitignorePath, block); - appended.push('.gitignore'); - } + let gitignore = existsSync(gitignorePath) ? readFileSync(gitignorePath, 'utf-8') : ''; + const missing = ignoreEntries.filter(e => !gitignore.includes(e)); + if (missing.length > 0) { + const block = '\n# Storm MCP (machine-specific)\n' + missing.join('\n') + '\n'; + appendFileSync(gitignorePath, block); + appended.push('.gitignore'); } + + // Register MCP servers for all database connections. + registerAllMcpServers(tools, created, appended); + // Fetch schema rules and append to each tool's rules block. 
const schemaRules = await fetchSkill('storm-schema-rules'); for (const toolId of tools) { const config = TOOL_CONFIGS[toolId]; - registerMcp(config, stormDir, created, appended); if (config.rulesFile && schemaRules) { const rulesPath = join(process.cwd(), config.rulesFile); if (existsSync(rulesPath)) { @@ -1830,7 +2780,12 @@ async function setup() { console.log(); console.log(dimText(' Note: Windsurf requires manual MCP configuration.')); console.log(dimText(` Add storm-schema server in Windsurf settings with command:`)); - console.log(dimText(` node ${join(stormDir, 'mcp-server.mjs')}`)); + for (const [alias, connectionName] of Object.entries(databases)) { + const connectionPath = resolveConnection(connectionName); + if (connectionPath) { + console.log(dimText(` node ${join(homedir(), '.storm', 'server.mjs')} ${connectionPath}`)); + } + } } console.log(); @@ -1951,20 +2906,25 @@ async function demo() { } console.log(dimText(' Setting up MCP server for database schema access...')); - const stormDir = await setupDatabase(demoConnection); - const dbConfigured = stormDir !== null; + const connectionName = 'demo-' + dialect; + const result = await setupGlobalConnection(connectionName, demoConnection); + const dbConfigured = result !== null; if (dbConfigured) { + // Write project databases.json. + writeDatabases({ default: connectionName }); + // Register MCP for the selected tool. - registerMcp(config, stormDir, created, appended); - - // Add MCP config file to .gitignore. - if (config.mcpFile) { - const gitignorePath = join(cwd, '.gitignore'); - let gitignore = existsSync(gitignorePath) ? readFileSync(gitignorePath, 'utf-8') : ''; - if (!gitignore.includes(config.mcpFile)) { - appendFileSync(gitignorePath, `\n# Storm MCP (machine-specific paths)\n${config.mcpFile}\n`); - } + registerMcp(config, 'default', result.path, created, appended); + + // Add .storm/ and MCP config file to .gitignore. 
+    const gitignorePath = join(cwd, '.gitignore');
+    const ignoreEntries = ['.storm/'];
+    if (config.mcpFile) ignoreEntries.push(config.mcpFile);
+    let gitignore = existsSync(gitignorePath) ? readFileSync(gitignorePath, 'utf-8') : '';
+    const missing = ignoreEntries.filter(e => !gitignore.includes(e));
+    if (missing.length > 0) {
+      appendFileSync(gitignorePath, `\n# Storm MCP (machine-specific)\n${missing.join('\n')}\n`);
    }

    // Fetch and install schema rules into the rules block.
@@ -2067,7 +3027,15 @@ async function run() {
  storm init               Configure rules, skills, and database (default)
  storm demo               Create a demo project in an empty directory
  storm update             Update rules and skills (non-interactive)
+  storm db                 List global database connections
+  storm db add [name]      Add a global database connection
+  storm db remove [name]   Remove a global database connection
+  storm db config [name]   Configure data access and table exclusions
+  storm mcp init           Set up MCP database server (no Storm ORM required)
  storm mcp                Re-register MCP servers for configured tools
+  storm mcp add [alias]    Add a database connection to this project
+  storm mcp list           List project database connections
+  storm mcp remove [alias] Remove a database connection

${dimText('Options:')}
  --help, -h               Show this help message
@@ -2086,8 +3054,12 @@ async function run() {
  if (command === 'update') {
    await update();
+  } else if (command === 'db') {
+    const dbSubArgs = args.filter(a => !a.startsWith('-')).slice(1);
+    await updateDb(dbSubArgs);
  } else if (command === 'mcp') {
-    await updateMcp();
+    const mcpSubArgs = args.filter(a => !a.startsWith('-')).slice(1);
+    await updateMcp(mcpSubArgs);
  } else if (command === 'demo') {
    await demo();
  } else {
diff --git a/storm-core/src/main/java/st/orm/core/template/impl/SchemaValidator.java b/storm-core/src/main/java/st/orm/core/template/impl/SchemaValidator.java
index 2f25de370..836d346ef 100644
--- a/storm-core/src/main/java/st/orm/core/template/impl/SchemaValidator.java
+++
b/storm-core/src/main/java/st/orm/core/template/impl/SchemaValidator.java
@@ -472,7 +472,14 @@ private void validateType(
     }

     /**
-     * Validates that {@code @UK}-annotated fields have a matching unique constraint in the database.
+     * Validates unique key alignment between {@code @UK}-annotated fields and database unique constraints.
+     *
+     * <p>Forward check: for each {@code @UK} field, verifies a matching constraint exists in the database
+     * (unless suppressed by {@code @UK(constraint = false)}).</p>
+     *
+     * <p>Reverse check: for each single-column unique constraint in the database, warns if the corresponding
+     * entity field is not annotated with {@code @UK}. Composite (multi-column) constraints are not flagged,
+     * since modeling them requires structural changes (inline records) that should be a deliberate choice.</p>
      */
     private void validateUniqueKeys(
             @Nonnull Class type,
@@ -490,26 +497,15 @@ private void validateUniqueKeys(
             indexColumnsByName.computeIfAbsent(uk.indexName(), k -> new TreeSet<>(String.CASE_INSENSITIVE_ORDER))
                     .add(uk.columnName());
         }
+        // Forward check: @UK fields must have a matching database constraint.
         for (RecordField field : model.recordType().fields()) {
-            if (field.isAnnotationPresent(DbIgnore.class)) {
-                continue;
-            }
-            if (!field.isAnnotationPresent(UK.class)) {
-                continue;
-            }
-            // Skip @PK fields, since primary keys are already validated in step 5.
-            if (field.isAnnotationPresent(PK.class)) {
-                continue;
-            }
-            if (ignoredComponents.contains(field.name())) {
-                continue;
-            }
-            // Skip if the @UK annotation indicates no constraint is expected.
+            if (field.isAnnotationPresent(DbIgnore.class)) continue;
+            if (!field.isAnnotationPresent(UK.class)) continue;
+            if (field.isAnnotationPresent(PK.class)) continue;
+            if (ignoredComponents.contains(field.name())) continue;
             UK ukAnnotation = field.getAnnotation(UK.class);
-            if (ukAnnotation != null && !ukAnnotation.constraint()) {
-                continue;
-            }
-            // Collect the expected column names for this @UK field.
+            if (ukAnnotation == null) continue;
+            if (!ukAnnotation.constraint()) continue;
             SortedSet<String> expectedColumns = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
             for (Column column : model.declaredColumns()) {
                 String fieldPath = column.metamodel().fieldPath();
@@ -517,10 +513,7 @@ private void validateUniqueKeys(
                     expectedColumns.add(column.name());
                 }
             }
-            if (expectedColumns.isEmpty()) {
-                continue;
-            }
-            // Check if any unique index in the database covers exactly these columns.
+            if (expectedColumns.isEmpty()) continue;
             boolean found = indexColumnsByName.values().stream()
                     .anyMatch(indexColumns -> indexColumns.equals(expectedColumns));
             if (!found) {
diff --git a/storm-foundation/src/main/java/st/orm/UK.java b/storm-foundation/src/main/java/st/orm/UK.java
index ac9a36463..950d2ba6f 100644
--- a/storm-foundation/src/main/java/st/orm/UK.java
+++ b/storm-foundation/src/main/java/st/orm/UK.java
@@ -63,32 +63,27 @@
  * ) : Entity
  * }
  *
- * <p>For compound unique constraints spanning multiple columns, use an inline record annotated with {@code @UK}:
+ * <h3>Compound Unique Keys</h3>
+ *
+ * <p>For compound unique constraints that need a metamodel key (e.g., for keyset pagination or type-safe lookups),
+ * use an inline record annotated with {@code @UK}:</p>
  *
  * <p>Java:
  * <pre>{@code
- * record UserEmailUK(int userId, String email) {}
+ * record UserEmailUk(int userId, String email) {}
  *
  * record SomeEntity(@PK Integer id,
  *                   @FK User user,
  *                   String email,
- *                   @UK @Persist(insertable = false, updatable = false) UserEmailUK uniqueKey
+ *                   @UK @Persist(insertable = false, updatable = false) UserEmailUk uniqueKey
  * ) implements Entity {}
  * }</pre>
  *
- * <p>Kotlin:
- * <pre>{@code
- * data class UserEmailUK(val userId: Int, val email: String)
- *
- * data class SomeEntity(@PK val id: Int?,
- *                       @FK val user: User,
- *                       val email: String,
- *                       @UK @Persist(insertable = false, updatable = false) val uniqueKey: UserEmailUK
- * ) : Entity
- * }</pre>
- *
  * <p>The {@code @Persist(insertable = false, updatable = false)} annotation prevents the inline record's columns
- * from being persisted separately when they overlap with other fields on the entity.
+ * from being persisted separately when they overlap with other fields on the entity.</p>
+ *
+ * <p>Compound unique constraints that do not require a metamodel key do not need to be modeled in the entity.
+ * Schema validation does not warn about unmodeled compound constraints.</p>
  *
 * <p>The metamodel processor generates {@link Metamodel.Key} instances for fields annotated with {@code @UK},
 * enabling type-safe keyset pagination and unique field lookups via repository methods like
diff --git a/storm-kotlin/src/main/kotlin/st/orm/template/Refs.kt b/storm-kotlin/src/main/kotlin/st/orm/template/Refs.kt
index 08c4eecf8..c1f1ee9f9 100644
--- a/storm-kotlin/src/main/kotlin/st/orm/template/Refs.kt
+++ b/storm-kotlin/src/main/kotlin/st/orm/template/Refs.kt
@@ -17,6 +17,7 @@ package st.orm.template

 import st.orm.Data
 import st.orm.Entity
+import st.orm.Projection
 import st.orm.Ref

 /**
@@ -42,3 +43,29 @@ fun <ID, E : Entity<ID>> E.ref(): Ref<E> = Ref.of(this)
  * Requires `import st.orm.template.refById`.
  */
 inline fun <reified T : Record> refById(id: Any): Ref<T> = Ref.of(T::class.java, id)
+
+/**
+ * Extracts the primary key from an entity ref, returning a type-safe id.
+ *
+ * Usage:
+ * ```kotlin
+ * val ref: Ref = ...
+ * val id: Int = ref.entityId()
+ * ```
+ *
+ * Requires `import st.orm.template.entityId`.
+ */
+fun <ID, E : Entity<ID>> Ref<E>.entityId(): ID = Ref.entityId(this)
+
+/**
+ * Extracts the primary key from a projection ref, returning a type-safe id.
+ *
+ * Usage:
+ * ```kotlin
+ * val ref: Ref = ...
+ * val id: Int = ref.projectionId()
+ * ```
+ *
+ * Requires `import st.orm.template.projectionId`.
+ */
+fun <ID, P : Projection<ID>> Ref<P>.projectionId(): ID = Ref.projectionId(this)
diff --git a/storm-metamodel-ksp/src/main/kotlin/st/orm/metamodel/MetamodelProcessor.kt b/storm-metamodel-ksp/src/main/kotlin/st/orm/metamodel/MetamodelProcessor.kt
index 4cbb35393..5469a9d35 100644
--- a/storm-metamodel-ksp/src/main/kotlin/st/orm/metamodel/MetamodelProcessor.kt
+++ b/storm-metamodel-ksp/src/main/kotlin/st/orm/metamodel/MetamodelProcessor.kt
@@ -62,10 +62,10 @@ class MetamodelProcessor(
     companion object {
         private const val GENERATE_METAMODEL = "st.orm.GenerateMetamodel"
         private const val DATA = "st.orm.Data"
-        private const val DISCRIMINATOR = "st.orm.Discriminator"
         private const val REF = "st.orm.Ref"
+        private const val FOREIGN_KEY = "st.orm.FK"
         private const val PRIMARY_KEY = "st.orm.PK"
-        private const val UNIQUE = "st.orm.UK"
+        private const val UNIQUE_KEY = "st.orm.UK"

         private const val K_BOOLEAN = "kotlin.Boolean"
         private const val K_BYTE = "kotlin.Byte"
@@ -444,14 +444,14 @@ class MetamodelProcessor(
     }

     private fun isUniqueField(prop: KSPropertyDeclaration): Boolean {
-        if (hasAnnotationOrMeta(prop, UNIQUE)) return true
+        if (hasAnnotationOrMeta(prop, UNIQUE_KEY)) return true
         val parent = prop.parentDeclaration as? KSClassDeclaration ?: return false
         val ctor = parent.primaryConstructor
         if (ctor != null) {
             val param = ctor.parameters.firstOrNull { it.name?.asString() == prop.simpleName.asString() }
-            if (param != null && hasAnnotationOrMeta(param, UNIQUE)) return true
+            if (param != null && hasAnnotationOrMeta(param, UNIQUE_KEY)) return true
         }

         // Fallback for sealed interface properties: delegate to first sealed subclass.
@@ -471,7 +471,7 @@ class MetamodelProcessor(
         for (ann in prop.annotations) {
             val annDecl = ann.annotationType.resolve().declaration as? KSClassDeclaration ?: continue
             val annQn = annDecl.qualifiedName?.asString()
-            if (annQn == UNIQUE) {
+            if (annQn == UNIQUE_KEY) {
                 val argument = ann.arguments.firstOrNull { it.name?.asString() == "nullsDistinct" }
                 return argument?.value as?
Boolean ?: true } @@ -485,7 +485,7 @@ class MetamodelProcessor( for (ann in param.annotations) { val annDecl = ann.annotationType.resolve().declaration as? KSClassDeclaration ?: continue val annQn = annDecl.qualifiedName?.asString() - if (annQn == UNIQUE) { + if (annQn == UNIQUE_KEY) { val argument = ann.arguments.firstOrNull { it.name?.asString() == "nullsDistinct" } return argument?.value as? Boolean ?: true } @@ -503,6 +503,11 @@ class MetamodelProcessor( return true } + /** + * Returns true if the field should be treated as a unique key for metamodel generation. + */ + private fun isEffectivelyUniqueField(prop: KSPropertyDeclaration): Boolean = isUniqueField(prop) + private fun isEffectivelyNullable(prop: KSPropertyDeclaration): Boolean { // PK fields are always non-null. if (isPrimaryKey(prop)) return false @@ -668,7 +673,7 @@ class MetamodelProcessor( } else { val kotlinTypeName = getKotlinTypeName(typeRef, packageName) // E (unwrap Ref) val valueKotlinTypeName = getKotlinValueTypeName(typeRef, packageName) // V (keep Ref) - val unique = isUniqueField(prop) + val unique = isEffectivelyUniqueField(prop) val baseClass = if (unique) "AbstractKeyMetamodel" else "AbstractMetamodel" builder.append(" /** Represents the $className.$fieldName field. 
*/\n") builder.append( @@ -702,7 +707,7 @@ class MetamodelProcessor( val kotlinTypeName = getKotlinTypeName(typeRef, packageName) // E val valueKotlinTypeName = getKotlinValueTypeName(typeRef, packageName) // V val v = if (forceNullableChain) ensureNullable(valueKotlinTypeName) else valueKotlinTypeName - val unique = isUniqueField(prop) + val unique = isEffectivelyUniqueField(prop) val isData = classDeclaration.implementsInterface(DATA) val baseClass = if (!isData || unique) "AbstractKeyMetamodel" else "AbstractMetamodel" builder.append(" val $fieldName: $baseClass\n") @@ -729,6 +734,30 @@ class MetamodelProcessor( val simpleTypeName = getSimpleTypeName(typeRef, packageName) val isChildData = isDataType(prop) + // Validate: @PK, @FK, and @UK are not supported on inline record fields. + if (!classDeclaration.implementsInterface(DATA)) { + if (hasAnnotationOrMeta(prop, PRIMARY_KEY)) { + logger.error( + "@PK is not supported on inline record fields. " + + "Primary keys are only supported on top-level entity fields.", + prop, + ) + } + if (hasAnnotationOrMeta(prop, FOREIGN_KEY)) { + logger.error( + "@FK is not supported on inline record fields. " + + "Foreign keys are only supported on top-level entity fields.", + prop, + ) + } + if (hasAnnotationOrMeta(prop, UNIQUE_KEY)) { + logger.error( + "@UK is not supported on inline record fields. " + + "Unique keys are only supported on top-level entity fields.", + prop, + ) + } + } val inlineFlag = if (isChildData) "false" else "true" val childForceNullable = forceNullableChain || propNullable val childMetaClassName = @@ -739,7 +768,7 @@ class MetamodelProcessor( } else { "{ t: T -> this@$metaClassName.getValue(t).$fieldName }" } - if (!isChildData && isUniqueField(prop)) { + if (!isChildData && isEffectivelyUniqueField(prop)) { val nullsDistinct = getNullsDistinct(prop) val referencedDecl = typeRef.resolve().declaration as? 
KSClassDeclaration if (nullsDistinct && referencedDecl != null && hasNullableLeaf(referencedDecl)) { @@ -793,8 +822,32 @@ class MetamodelProcessor( " override fun getValue(record: T): $v =\n" + " this@$metaClassName.getValue(record).$fieldName\n" } - val unique = isUniqueField(prop) + val unique = isEffectivelyUniqueField(prop) val isData = classDeclaration.implementsInterface(DATA) + // Validate: @PK, @FK, and @UK are not supported on inline record fields. + if (!isData) { + if (hasAnnotationOrMeta(prop, PRIMARY_KEY)) { + logger.error( + "@PK is not supported on inline record fields. " + + "Primary keys are only supported on top-level entity fields.", + prop, + ) + } + if (hasAnnotationOrMeta(prop, FOREIGN_KEY)) { + logger.error( + "@FK is not supported on inline record fields. " + + "Foreign keys are only supported on top-level entity fields.", + prop, + ) + } + if (hasAnnotationOrMeta(prop, UNIQUE_KEY)) { + logger.error( + "@UK is not supported on inline record fields. " + + "Unique keys are only supported on top-level entity fields.", + prop, + ) + } + } val effectivelyNullable = if (!isData) { // Leaf of a compound key: report raw field nullability for runtime derivation. 
isEffectivelyNullable(prop) diff --git a/storm-metamodel-processor/src/main/java/st/orm/metamodel/MetamodelProcessor.java b/storm-metamodel-processor/src/main/java/st/orm/metamodel/MetamodelProcessor.java index 9123c4c5a..dbe757c97 100644 --- a/storm-metamodel-processor/src/main/java/st/orm/metamodel/MetamodelProcessor.java +++ b/storm-metamodel-processor/src/main/java/st/orm/metamodel/MetamodelProcessor.java @@ -66,10 +66,9 @@ public final class MetamodelProcessor extends AbstractProcessor { private static final String METAMODEL_TYPE = "st.orm.MetamodelType"; private static final String GENERATE_METAMODEL = "st.orm.GenerateMetamodel"; private static final String DATA = "st.orm.Data"; - private static final String DISCRIMINATOR = "st.orm.Discriminator"; private static final String FOREIGN_KEY = "st.orm.FK"; private static final String PRIMARY_KEY = "st.orm.PK"; - private static final String UNIQUE = "st.orm.UK"; + private static final String UNIQUE_KEY = "st.orm.UK"; private static final Pattern REF_PATTERN = Pattern.compile("^st\\.orm\\.Ref<([^>]+)>$"); @@ -495,7 +494,7 @@ private static boolean isUniqueField(@Nonnull Element recordElement, @Nonnull St if (recordElement.getKind() == RECORD && recordElement instanceof TypeElement te) { for (RecordComponentElement rc : te.getRecordComponents()) { if (rc.getSimpleName().toString().equals(fieldName)) { - return hasAnnotationOrMeta(rc, UNIQUE); + return hasAnnotationOrMeta(rc, UNIQUE_KEY); } } } @@ -504,7 +503,7 @@ private static boolean isUniqueField(@Nonnull Element recordElement, @Nonnull St if (enclosed.getKind() != CONSTRUCTOR) continue; for (VariableElement param : ((ExecutableElement) enclosed).getParameters()) { if (param.getSimpleName().toString().equals(fieldName)) { - return hasAnnotationOrMeta(param, UNIQUE); + return hasAnnotationOrMeta(param, UNIQUE_KEY); } } } @@ -599,7 +598,7 @@ private boolean hasNullableLeaf(@Nullable TypeElement recordElement) { private static boolean extractNullsDistinct(@Nonnull 
Element element) { for (AnnotationMirror am : element.getAnnotationMirrors()) { String annotationName = am.getAnnotationType().toString(); - if (UNIQUE.equals(annotationName)) { + if (UNIQUE_KEY.equals(annotationName)) { // Direct @UK annotation. for (var entry : am.getElementValues().entrySet()) { if ("nullsDistinct".equals(entry.getKey().getSimpleName().toString())) { @@ -612,7 +611,7 @@ private static boolean extractNullsDistinct(@Nonnull Element element) { Element annEl = am.getAnnotationType().asElement(); if (annEl instanceof TypeElement te) { for (AnnotationMirror meta : te.getAnnotationMirrors()) { - if (UNIQUE.equals(meta.getAnnotationType().toString())) { + if (UNIQUE_KEY.equals(meta.getAnnotationType().toString())) { // Meta-annotated @UK does not carry the user's nullsDistinct attribute. return true; } @@ -773,7 +772,7 @@ private String buildInterfaceFields(@Nonnull Element recordElement, @Nonnull Str .append(fieldName).append(";\n"); } else { String valueTypeName = getValueTypeName(fieldType, packageName); - boolean unique = isUniqueField(recordElement, fieldName); + boolean unique = isEffectivelyUniqueField(recordElement, fieldName); String baseClass = unique ? "AbstractKeyMetamodel" : "AbstractMetamodel"; builder.append(" /** Represents the {@link ").append(recordName).append("#").append(fieldName) @@ -853,7 +852,7 @@ private String buildClassFields(@Nonnull Element recordElement, .append(";\n"); } else { String valueTypeName = getValueTypeName(fieldType, packageName); - boolean unique = isUniqueField(recordElement, fieldName); + boolean unique = isEffectivelyUniqueField(recordElement, fieldName); boolean isData = implementsData(recordElement); String baseClass = (!isData || unique) ? 
"AbstractKeyMetamodel" : "AbstractMetamodel"; @@ -886,6 +885,27 @@ private String initClassFields(@Nonnull Element recordElement, if (isNestedRecord(fieldType)) continue; boolean inline = !isDataType(recordElement, fieldName); + // Validate: @PK, @FK, and @UK are not supported on inline record fields. + if (!implementsData(recordElement)) { + if (hasAnnotationOrMeta(enclosed, PRIMARY_KEY)) { + processingEnv.getMessager().printMessage(ERROR, + "@PK is not supported on inline record fields. " + + "Primary keys are only supported on top-level entity fields.", + enclosed); + } + if (hasAnnotationOrMeta(enclosed, FOREIGN_KEY)) { + processingEnv.getMessager().printMessage(ERROR, + "@FK is not supported on inline record fields. " + + "Foreign keys are only supported on top-level entity fields.", + enclosed); + } + if (hasAnnotationOrMeta(enclosed, UNIQUE_KEY)) { + processingEnv.getMessager().printMessage(ERROR, + "@UK is not supported on inline record fields. " + + "Unique keys are only supported on top-level entity fields.", + enclosed); + } + } String inlineFlag = inline ? "true" : "false"; // Null-safe nested getter: parent record (root getter) can be null. String nestedGetter = @@ -893,7 +913,7 @@ private String initClassFields(@Nonnull Element recordElement, " " + recordName + " p = " + metaClassName + ".this.getValue(t);\n" + " return (p == null) ? 
null : " + accessorExpr(recordElement, "p", fieldName, fieldType) + ";\n" + " }"; - if (inline && isUniqueField(recordElement, fieldName)) { + if (inline && isEffectivelyUniqueField(recordElement, fieldName)) { boolean nullsDistinct = getNullsDistinct(recordElement, fieldName); if (nullsDistinct && hasNullableLeaf(asTypeElement(fieldType))) { processingEnv.getMessager().printMessage( @@ -920,8 +940,29 @@ private String initClassFields(@Nonnull Element recordElement, } } else { String valueTypeName = getValueTypeName(fieldType, packageName); - boolean unique = isUniqueField(recordElement, fieldName); + boolean unique = isEffectivelyUniqueField(recordElement, fieldName); boolean isData = implementsData(recordElement); + // Validate: @PK, @FK, and @UK are not supported on inline record fields. + if (!isData) { + if (hasAnnotationOrMeta(enclosed, PRIMARY_KEY)) { + processingEnv.getMessager().printMessage(ERROR, + "@PK is not supported on inline record fields. " + + "Primary keys are only supported on top-level entity fields.", + enclosed); + } + if (hasAnnotationOrMeta(enclosed, FOREIGN_KEY)) { + processingEnv.getMessager().printMessage(ERROR, + "@FK is not supported on inline record fields. " + + "Foreign keys are only supported on top-level entity fields.", + enclosed); + } + if (hasAnnotationOrMeta(enclosed, UNIQUE_KEY)) { + processingEnv.getMessager().printMessage(ERROR, + "@UK is not supported on inline record fields. " + + "Unique keys are only supported on top-level entity fields.", + enclosed); + } + } String baseClass = (!isData || unique) ? "AbstractKeyMetamodel" : "AbstractMetamodel"; boolean effectivelyNullable = false; if (!isData) { @@ -1250,6 +1291,18 @@ private boolean isUniqueFieldOnSubclass(@Nonnull TypeElement sealedInterface, @N return firstRecord != null && isUniqueField(firstRecord, fieldName); } + /** + * Returns {@code true} if the field should be treated as a unique key for metamodel generation purposes. 
+ */ + private static boolean isEffectivelyUniqueField(@Nonnull Element recordElement, @Nonnull String fieldName) { + return isUniqueField(recordElement, fieldName); + } + + private boolean isEffectivelyUniqueFieldOnSubclass(@Nonnull TypeElement sealedInterface, @Nonnull String fieldName) { + TypeElement firstRecord = getFirstPermittedRecord(sealedInterface); + return firstRecord != null && isEffectivelyUniqueField(firstRecord, fieldName); + } + private boolean getNullsDistinctOnSubclass(@Nonnull TypeElement sealedInterface, @Nonnull String fieldName) { TypeElement firstRecord = getFirstPermittedRecord(sealedInterface); return firstRecord != null ? getNullsDistinct(firstRecord, fieldName) : true; @@ -1300,7 +1353,7 @@ private void generateSealedMetamodelInterface(@Nonnull TypeElement sealedInterfa .append(fieldName).append(";\n"); } else { String valueTypeName = getValueTypeName(fieldType, packageName); - boolean unique = isUniqueFieldOnSubclass(sealedInterface, fieldName); + boolean unique = isEffectivelyUniqueFieldOnSubclass(sealedInterface, fieldName); String baseClass = unique ? "AbstractKeyMetamodel" : "AbstractMetamodel"; fields.append(" /** Represents the {@link ").append(typeName).append("#").append(fieldName) .append("()} field. */\n"); @@ -1398,7 +1451,7 @@ private void generateSealedMetamodelClass(@Nonnull TypeElement sealedInterface, .append(";\n"); } else { String valueTypeName = getValueTypeName(fieldType, packageName); - boolean unique = isUniqueFieldOnSubclass(sealedInterface, fieldName); + boolean unique = isEffectivelyUniqueFieldOnSubclass(sealedInterface, fieldName); String baseClass = (!isData || unique) ? "AbstractKeyMetamodel" : "AbstractMetamodel"; classFields.append(" /** Represents the {@link ").append(typeName).append("#").append(fieldName) .append("()} field. 
*/\n"); @@ -1423,7 +1476,7 @@ private void generateSealedMetamodelClass(@Nonnull TypeElement sealedInterface, " " + typeName + " p = " + metaClassName + ".this.getValue(t);\n" + " return (p == null) ? null : p." + fieldName + "();\n" + " }"; - if (inline && isUniqueFieldOnSubclass(sealedInterface, fieldName)) { + if (inline && isEffectivelyUniqueFieldOnSubclass(sealedInterface, fieldName)) { boolean nullsDistinct = getNullsDistinctOnSubclass(sealedInterface, fieldName); initFields.append(" this.").append(fieldName).append(" = new ").append(fieldTypeName) .append("Metamodel<>(") @@ -1441,7 +1494,7 @@ private void generateSealedMetamodelClass(@Nonnull TypeElement sealedInterface, } } else { String valueTypeName = getValueTypeName(fieldType, packageName); - boolean unique = isUniqueFieldOnSubclass(sealedInterface, fieldName); + boolean unique = isEffectivelyUniqueFieldOnSubclass(sealedInterface, fieldName); String baseClass = (!isData || unique) ? "AbstractKeyMetamodel" : "AbstractMetamodel"; boolean effectivelyNullable; if (!isData) { diff --git a/website/sidebars.ts b/website/sidebars.ts index 6acb8c41d..7d70dc8e1 100644 --- a/website/sidebars.ts +++ b/website/sidebars.ts @@ -77,6 +77,8 @@ const sidebars: SidebarsConfig = { 'faq', 'migration-from-jpa', 'ai', + 'ai-reference', + 'database-and-mcp', ], }, { diff --git a/website/static/skills/storm-entity-from-schema.md b/website/static/skills/storm-entity-from-schema.md index 68f270ec1..12d57dab5 100644 --- a/website/static/skills/storm-entity-from-schema.md +++ b/website/static/skills/storm-entity-from-schema.md @@ -10,8 +10,9 @@ Steps: 2. Check which tables already have entity classes in the codebase. 3. For tables without entities: offer to generate new ones. 4. For tables with existing entities: call \`describe_table\` and compare against the entity definition. Report differences and suggest updates. -5. Ask: Kotlin or Java? -6. Ask about loading preference for new FKs: +5. 
If \`select_data\` is available: sample a few rows from ambiguous columns to inform type decisions. For example, a \`VARCHAR\` column might contain enum-like values (suggest an enum), a \`TEXT\` column might store JSON (suggest \`@Json\`), or an \`INT\` column might be a type discriminator. Only query when the schema alone leaves the type decision ambiguous — do not sample every table.
+6. Ask: Kotlin or Java?
+7. Ask about loading preference for new FKs:
    - **Deeply nested**: FK as direct types. Full graph in one query, no N+1.
    - **Shallow**: FK as Ref. Only ID stored, fetch on demand. No N+1 either way.
@@ -23,7 +24,10 @@ Generation/update rules:
 - Auto-increment PKs: IDENTITY. Others: NONE.
 - NOT NULL FKs: non-nullable. Nullable FKs: nullable.
 - CIRCULAR NOT SUPPORTED. Two tables referencing each other: one must use Ref. Self-ref: always Ref.
-- @UK for unique constraints. @Version for version columns (confirm with user).
+- **Single-column unique constraints** (apply by default): use `@UK` on the field. Generates a `Metamodel.Key` for type-safe lookups and scrolling. Always add `@UK` when `describe_table` shows a single-column unique constraint — it's one annotation for free value.
+- **Composite unique constraints** (only when needed): composite unique constraints do NOT need to be modeled unless the user explicitly needs a composite `Metamodel.Key` (for keyset pagination or type-safe lookups). When needed, use an inline record + `@UK @Persist(insertable = false, updatable = false)` — ask the user if they need this.
+- `describe_table` reports unique constraints (top-level `uniqueConstraints` array with constraint name and column list) and FK cascade rules (onDelete/onUpdate). Both are exposed for context only. **Storm does not model cascade behavior or enforce uniqueness** — these are database-level concerns. Do not generate annotations or code for cascade rules.
+- @Version for version columns (confirm with user).
 - When updating, preserve existing field order and custom annotations. Only add/modify what changed.
 - Use `@DbIgnore` on fields or types that should be excluded from schema validation.
 - Use `@PK(constraint = false)` if the table intentionally has no PK constraint in the database.
diff --git a/website/static/skills/storm-entity-java.md b/website/static/skills/storm-entity-java.md
index eaa52eee0..148c215ff 100644
--- a/website/static/skills/storm-entity-java.md
+++ b/website/static/skills/storm-entity-java.md
@@ -99,13 +99,20 @@ Generation rules:
    ) implements Entity {}
    ```

-10. Unique keys, embedded components, enums, optimistic locking: same rules as Kotlin.
+10. Unique keys:
+    - **Single-column** (apply by default): `@UK @Nonnull String email`. Generates a `Metamodel.Key` for type-safe lookups and scrolling. Always add `@UK` when the database has a single-column unique constraint — it's one annotation for free value.
+    - **Composite** (only when needed in code): use an inline record + `@UK @Persist(insertable = false, updatable = false)`. Only add this when the user explicitly needs a composite `Metamodel.Key` for keyset pagination or type-safe lookups. Composite unique constraints that don't need a Key don't need to be modeled.
+    - `@UK(constraint = false)` suppresses schema validation when no database constraint exists.

-11. Java records are immutable. Consider Lombok \`@Builder(toBuilder = true)\` for copy-with-modification.
+11. Embedded components, enums, optimistic locking: same rules as Kotlin.

-12. Use descriptive variable names, never abbreviated.
+12. Java records are immutable. Consider Lombok \`@Builder(toBuilder = true)\` for copy-with-modification.

-13. **Use `Ref` for map keys and set membership**: Prefer `Ref` (via `.ref()`) for all entity lookups, map keys, and set membership. `Ref` provides identity-based `equals`/`hashCode` on the primary key, making it safe and efficient.
When a projection already returns `Ref`, use it directly as a map key without calling `.ref()` again.
+13. Use descriptive variable names, never abbreviated.
+
+14. **Use `Ref` for map keys and set membership**: Prefer `Ref` (via `.ref()`) for all entity lookups, map keys, and set membership. `Ref` provides identity-based `equals`/`hashCode` on the primary key, making it safe and efficient. When a projection already returns `Ref`, use it directly as a map key without calling `.ref()` again.
+
+15. **Typed ID from `Ref`:** Use `Ref.entityId(ref)` to extract a type-safe ID from a `Ref`. For projections, use `Ref.projectionId(ref)`. Avoid `ref.id()` — it returns `Object` and requires an unsafe cast.

 After generating, remind the user to rebuild for metamodel generation.
diff --git a/website/static/skills/storm-entity-kotlin.md b/website/static/skills/storm-entity-kotlin.md
index bbf255e9f..f31d4d2d3 100644
--- a/website/static/skills/storm-entity-kotlin.md
+++ b/website/static/skills/storm-entity-kotlin.md
@@ -41,7 +41,10 @@ Generation rules:
 5. NO COLLECTION FIELDS. No \`List\` on entities. Query the child side instead: \`orm.findAll(User_.city eq city)\`.

-6. Unique keys: \`@UK val email: String\` for type-safe lookups.
+6. Unique keys:
+   - **Single-column** (apply by default): `@UK val email: String`. Generates a `Metamodel.Key` for type-safe lookups and scrolling. Always add `@UK` when the database has a single-column unique constraint — it's one annotation for free value.
+   - **Composite** (only when needed in code): use an inline record + `@UK @Persist(insertable = false, updatable = false)`. Only add this when the user explicitly needs a composite `Metamodel.Key` for keyset pagination or type-safe lookups. Composite unique constraints that don't need a Key don't need to be modeled.
+   - `@UK(constraint = false)` suppresses schema validation when no database constraint exists.

 7. Embedded components: Separate data class (no @PK, no Entity interface).
    Fields become parent table columns. Inlining is implicit — `@Inline` never needs to be specified explicitly. When `@Inline` is used, the field must be an inline (embedded) type, not a scalar or entity.
@@ -123,7 +126,7 @@ Generation rules:
 14. **Use `Ref` for map keys and set membership**: Prefer `Ref` (via `.ref()`) for all entity lookups, map keys, and set membership. `Ref` provides identity-based `equals`/`hashCode` on the primary key, making it safe and efficient. When a projection already returns `Ref`, use it directly as a map key without calling `.ref()` again.

-15. **`Ref.id()` in Kotlin:** `ref.id` does not work in Kotlin — use `ref.id()` (Java interface method). The return type is `Any`, so a cast is needed: `ref.id() as String`. This is relevant when extracting IDs from `Ref` at system boundaries (e.g., URL parameters, response bodies).
+15. **Typed ID from `Ref`:** Use the `entityId()` extension function to extract a type-safe ID: `ref.entityId()` (import `st.orm.template.entityId`). For projections, use `ref.projectionId()` (import `st.orm.template.projectionId`). Avoid `ref.id()` — it returns `Any` and requires an unsafe cast.

 After generating, remind the user to rebuild for metamodel generation (e.g., \`City_\`).
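The `Ref`-as-map-key and typed-id rules above can be illustrated outside Storm. This is a minimal Java sketch under stated assumptions: `MiniRef` and `City` are hypothetical stand-ins, not the real `st.orm.Ref` API. It shows why primary-key-based `equals`/`hashCode` makes refs safe map keys, and why a typed accessor beats an `Object`-returning `id()`:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

class RefMapKeyDemo {
    public static void main(String[] args) {
        Map<MiniRef<City, Integer>, Long> userCounts = new HashMap<>();
        userCounts.put(new MiniRef<>(City.class, 1), 42L);

        // A freshly constructed ref to the same row hits the same map entry:
        // no entity load needed, equality is (type, primary key).
        System.out.println(userCounts.get(new MiniRef<>(City.class, 1))); // 42

        // Typed accessor needs no cast, unlike the Object-returning id().
        int cityId = new MiniRef<>(City.class, 1).typedId();
        System.out.println(cityId); // 1
    }
}

record City(int id, String name) {}

// Hypothetical stand-in for a ref type: identity-based equals/hashCode.
final class MiniRef<E, ID> {
    private final Class<E> type;
    private final ID id;

    MiniRef(Class<E> type, ID id) {
        this.type = type;
        this.id = id;
    }

    Object id() { return id; }   // untyped, mirrors the cast problem
    ID typedId() { return id; }  // typed, mirrors the entityId() idea

    @Override public boolean equals(Object o) {
        return o instanceof MiniRef<?, ?> other
                && type.equals(other.type)
                && Objects.equals(id, other.id);
    }

    @Override public int hashCode() { return Objects.hash(type, id); }
}
```

The same construction generalizes to sets (`Set<MiniRef<...>>`) and any identity-keyed lookup.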
diff --git a/website/static/skills/storm-query-java.md b/website/static/skills/storm-query-java.md index 133abde10..a3d6b4920 100644 --- a/website/static/skills/storm-query-java.md +++ b/website/static/skills/storm-query-java.md @@ -142,9 +142,24 @@ List topCities = orm.entity(City.class) .having(RAW."COUNT(*) >= \{minUsers}") // template form for aggregate expressions .orderByDescending(RAW."AVG(\{User_.age})") .getResultList(); + +// Multi-field groupBy — always use the varargs metamodel form: +record CityActiveCount(@FK Ref<City> city, boolean active, long userCount) implements Data {} + +List<CityActiveCount> counts = orm.entity(User.class) + .select(CityActiveCount.class, RAW."\{User_.city}, \{User_.active}, COUNT(*)") + .groupBy(User_.city, User_.active) // ✅ varargs metamodel form + .getResultList(); + +// ❌ Don't use template when metamodel fields work: +// .groupBy(RAW."\{User_.city}, \{User_.active}") +// ✅ Use varargs metamodel form — code-first, type-safe: +// .groupBy(User_.city, User_.active) ``` -Always prefer code over templates. Templates are for expressions QueryBuilder can't produce (e.g., `COUNT(*)`, `AVG()`). `groupBy`, `having`, and `orderBy` also accept template forms when needed (e.g., `.having(RAW."COUNT(*) >= \{min}")`, `.orderByDescending(RAW."AVG(\{User_.age})")`), but use the code-based methods when possible. Do NOT write the entire query as a raw SQL string. +**`Ref` in aggregation result types:** When the SELECT clause references a FK field (`\{User_.city}`) rather than a full entity (`\{City.class}`), use `Ref<City>` in the result type — not the raw ID type and not the full entity. `Ref` maps correctly to the FK column value. Use the full entity type only when the SELECT includes all its columns via `\{City.class}`. + +Always prefer code over templates. Templates are for expressions QueryBuilder can't produce (e.g., `COUNT(*)`, `AVG()`).
`groupBy`, `having`, and `orderBy` also accept template forms when needed (e.g., `.having(RAW."COUNT(*) >= \{min}")`, `.orderByDescending(RAW."AVG(\{User_.age})")`), but **always use the varargs metamodel form for `groupBy`** and **the metamodel form for `orderBy`** when possible — reserve template forms for computed expressions. Do NOT write the entire query as a raw SQL string. ## Row Locking @@ -342,6 +357,7 @@ Critical rules: - Streaming: `select().getResultStream()` returns a `Stream`. ALWAYS use try-with-resources to avoid connection leaks. There are no `selectBy` methods that return Stream directly -- always use `select()` (optionally with predicate) and then `.getResultStream()`. - **Metamodel navigation depth**: Multiple levels of navigation are allowed on the root entity. Joined (non-root) entities can only navigate one level deep. For deeper navigation, explicitly join the intermediate entity. - **Use `Ref` for map keys and set membership**: Prefer `Ref` (via `.ref()`) for map keys, set membership, and identity-based lookups. `Ref` provides identity-based `equals`/`hashCode` on the primary key. +- **Typed ID from `Ref`:** Use `Ref.entityId(ref)` to extract a type-safe ID. For projections, use `Ref.projectionId(ref)`. Avoid `ref.id()` — it returns `Object` and requires an unsafe cast. 
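The `Ref` guidance in the critical rules above can be illustrated with a self-contained sketch. The `Ref` record and `entityId` helper below are simplified stand-ins invented for illustration, not the real `st.orm` API:

```java
// Simplified stand-ins for Storm's Ref and Ref.entityId, assumed only for
// illustration; not the real st.orm API.
import java.util.HashMap;
import java.util.Map;

public class RefSketch {
    // Equality is based solely on the target type and primary key,
    // mirroring the identity-based equals/hashCode described above.
    record Ref<T>(Class<T> type, Object id) {}

    record City(long id, String name) {}

    // Hypothetical analogue of Ref.entityId(ref): recovers a typed ID
    // from the untyped Object held by the ref.
    static <ID> ID entityId(Ref<?> ref, Class<ID> idType) {
        return idType.cast(ref.id());
    }

    public static void main(String[] args) {
        Ref<City> ref = new Ref<>(City.class, 42L);
        Ref<City> sameRow = new Ref<>(City.class, 42L);

        // Identity-based equality makes Ref a safe map key: two refs to
        // the same row hit the same entry.
        Map<Ref<City>, Integer> userCountByCity = new HashMap<>();
        userCountByCity.put(ref, 3);
        if (userCountByCity.get(sameRow) != 3) throw new AssertionError();

        // Typed extraction instead of an unsafe cast at each call site.
        long id = entityId(ref, Long.class);
        if (id != 42L) throw new AssertionError();
    }
}
```

Because record equality covers only the type and primary key, entity state loaded at different times never affects lookups, which is the property the rules above rely on.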
## Verification diff --git a/website/static/skills/storm-query-kotlin.md b/website/static/skills/storm-query-kotlin.md index c15c6377f..e9b62f3f3 100644 --- a/website/static/skills/storm-query-kotlin.md +++ b/website/static/skills/storm-query-kotlin.md @@ -188,9 +188,24 @@ val topCities = orm.entity<City>() .having { "COUNT(*) >= $minUsers" } // template form for aggregate expressions .orderByDescending { "AVG(${User_.age})" } .resultList + +// Multi-field groupBy — always use the varargs metamodel form: +data class CityActiveCount(val city: Ref<City>, val active: Boolean, val userCount: Long) : Data + +val counts = orm.entity<User>() + .select(CityActiveCount::class) { "${User_.city}, ${User_.active}, COUNT(*)" } + .groupBy(User_.city, User_.active) // ✅ varargs metamodel form + .resultList + +// ❌ Don't use template lambda when metamodel fields work: +// .groupBy { "${User_.city}, ${User_.active}" } +// ✅ Use varargs metamodel form — code-first, type-safe: +// .groupBy(User_.city, User_.active) ``` -Always prefer code over templates. Templates are for expressions QueryBuilder can't produce (e.g., `COUNT(*)`, `AVG()`). `groupBy`, `having`, and `orderBy` also accept template lambdas when needed (e.g., `.having { "COUNT(*) >= $min" }`, `.orderByDescending { "AVG(${User_.age})" }`), but use the code-based methods when possible. Do NOT write the entire query as a raw SQL string. +**`Ref` in aggregation result types:** When the SELECT clause references a FK field (`${User_.city}`) rather than a full entity (`${City::class}`), use `Ref<City>` in the result type — not the raw ID type and not the full entity. `Ref` maps correctly to the FK column value. Use the full entity type only when the SELECT includes all its columns via `${City::class}`. + +Always prefer code over templates. Templates are for expressions QueryBuilder can't produce (e.g., `COUNT(*)`, `AVG()`).
`groupBy`, `having`, and `orderBy` also accept template lambdas when needed (e.g., `.having { "COUNT(*) >= $min" }`, `.orderByDescending { "AVG(${User_.age})" }`), but **always use the varargs metamodel form for `groupBy`** and **the metamodel form for `orderBy`** when possible — reserve template lambdas for computed expressions. Do NOT write the entire query as a raw SQL string. ## Row Locking @@ -488,6 +503,7 @@ Critical rules: val countMap: MutableMap<Ref<Cell>, MutableMap<String, Int>> = mutableMapOf() countMap.getOrPut(candidate.cell.ref()) { mutableMapOf() } ``` +- **Typed ID from `Ref`:** Use `ref.entityId()` (import `st.orm.template.entityId`) to extract a type-safe ID from a `Ref`. Avoid `ref.id()` — it returns `Any` and requires an unsafe cast. ## Verification diff --git a/website/static/skills/storm-repository-java.md b/website/static/skills/storm-repository-java.md index eb024bb56..aa6af3020 100644 --- a/website/static/skills/storm-repository-java.md +++ b/website/static/skills/storm-repository-java.md @@ -57,6 +57,7 @@ Key rules: **Template joins are a code smell.** If you need a template-based ON clause (`.innerJoin(T.class).on(RAW."...")`) or a full `orm.query(RAW."...")` to express a join that follows a database FK constraint, the entity model is missing an `@FK` annotation. Fix the entity first — add `@FK` (with `Ref` for PK fields, full entity for non-PK fields) — then the join becomes `.innerJoin(Entity.class).on(OtherEntity.class)`, pure code with no templates. Template joins are only justified when there is genuinely no FK constraint in the database. 9. **Use `Ref` for map keys and set membership**: Prefer `Ref` (via `.ref()`) for map keys, set membership, and identity-based lookups. `Ref` provides identity-based `equals`/`hashCode` on the primary key. 10. **Prefer typed parameters over raw IDs.** Repository method signatures should accept entity or `Ref` parameters for FK fields — not raw IDs like `String` or `int`.
Raw IDs are untyped and lose the entity association. Convert IDs to `Ref` at the system boundary (controller/route handler) using `Ref.of(Entity.class, id)`. +11. **Typed ID from `Ref`:** Use `Ref.entityId(ref)` to extract a type-safe ID. For projections, use `Ref.projectionId(ref)`. Avoid `ref.id()` — it returns `Object` and requires an unsafe cast. ## API Design: Prefer the Simplest Approach diff --git a/website/static/skills/storm-repository-kotlin.md b/website/static/skills/storm-repository-kotlin.md index 614bc7324..5ca5473c7 100644 --- a/website/static/skills/storm-repository-kotlin.md +++ b/website/static/skills/storm-repository-kotlin.md @@ -71,6 +71,7 @@ Key rules: **Template joins are a code smell.** If you need a template-based ON clause (`.innerJoin(T::class).on { "..." }`) or a full `orm.query { }` to express a join that follows a database FK constraint, the entity model is missing an `@FK` annotation. Fix the entity first — add `@FK` (with `Ref` for PK fields, full entity for non-PK fields) — then the join becomes `.innerJoin(Entity::class).on(OnEntity::class)`, pure code with no templates. Template joins are only justified when there is genuinely no FK constraint in the database. 9. **Use `Ref` for map keys and set membership**: Prefer `Ref` (via `.ref()`) for map keys, set membership, and identity-based lookups. `Ref` provides identity-based `equals`/`hashCode` on the primary key. When a projection already returns `Ref`, use it directly without calling `.ref()` again. 10. **Prefer typed parameters over raw IDs.** Repository method signatures should accept entity or `Ref` parameters for FK fields — not raw IDs like `String` or `Int`. Raw IDs are untyped and lose the entity association. Convert IDs to `Ref` at the system boundary (controller/route handler) using `refById(id)` (import `st.orm.template.refById`). +11. **Typed ID from `Ref`:** Use `ref.entityId()` (import `st.orm.template.entityId`) to extract a type-safe ID. 
Avoid `ref.id()` — it returns `Any` and requires an unsafe cast. ## API Design: Prefer the Simplest Approach diff --git a/website/static/skills/storm-schema-rules.md b/website/static/skills/storm-schema-rules.md index 5af756a4e..4afc6845d 100644 --- a/website/static/skills/storm-schema-rules.md +++ b/website/static/skills/storm-schema-rules.md @@ -3,7 +3,8 @@ This project has a Storm Schema MCP server configured. Use the following tools to access the live database schema: - `list_tables` - List all tables in the database -- `describe_table(table)` - Describe a table's columns, types, nullability, primary key, and foreign keys +- `describe_table(table)` - Describe a table's columns, types, nullability, primary key, foreign keys (with cascade rules), and unique constraints +- `select_data(table, ...)` - Query individual records from a table (only available when data access is enabled for this connection) Use these tools when: - Asked about the database schema or data model @@ -11,4 +12,14 @@ Use these tools when: - Validating that existing entities match the actual database schema - Investigating foreign key relationships between tables -The MCP server exposes only schema metadata (table definitions, column types, constraints). It has no access to actual data. +### Schema vs. data access + +The `list_tables` and `describe_table` tools return structural metadata only — no data is exposed. + +The `select_data` tool is only available when the developer has explicitly enabled data access for this connection. If the tool is not listed in `tools/list`, data access is disabled — do not attempt to call it. When available, `select_data` accepts a structured request (table, columns, filters, sort, offset, limit) and returns individual rows formatted as a markdown table. It does not accept raw SQL. Results default to 50 rows (max 500), and cell values longer than 200 characters are truncated. 
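The row limit and truncation rules described above (50 rows by default, a hard cap of 500, 200-character cell truncation, markdown-table output) can be sketched as follows. This is a hypothetical illustration of the documented behavior, not the MCP server's implementation:

```java
// Hypothetical sketch of the documented select_data formatting rules;
// not the MCP server's actual implementation.
import java.util.List;
import java.util.StringJoiner;

public class SelectDataFormat {
    static String formatAsMarkdownTable(List<String> columns, List<List<String>> rows, int limit) {
        int effectiveLimit = Math.min(limit, 500); // hard cap of 500 rows
        StringBuilder out = new StringBuilder();
        out.append(row(columns)); // header: column names as column headers
        out.append('\n').append(row(columns.stream().map(c -> "---").toList()));
        // One markdown row per record, never transposed.
        for (List<String> r : rows.subList(0, Math.min(rows.size(), effectiveLimit))) {
            out.append('\n').append(row(r.stream()
                    .map(cell -> cell.length() > 200 ? cell.substring(0, 200) + "…" : cell)
                    .toList()));
        }
        return out.toString();
    }

    private static String row(List<String> cells) {
        StringJoiner joiner = new StringJoiner(" | ", "| ", " |");
        cells.forEach(joiner::add);
        return joiner.toString();
    }

    public static void main(String[] args) {
        System.out.println(formatAsMarkdownTable(
                List.of("id", "name"),
                List.of(List.of("1", "Amsterdam")),
                50));
    }
}
```

The column-per-column, row-per-row orientation falls directly out of this shape: headers come from the column list, and each record contributes exactly one line.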
+ +Use `select_data` when sample data would inform a decision — for example, to determine whether a `VARCHAR` column contains enum-like values, whether a `TEXT` column stores JSON, or what value ranges a numeric column holds. Do not query data speculatively or in bulk; use it when a specific question about the data would change the entity design. + +When presenting `select_data` results to the user, always display them as a table with column names as column headers and one row per record. Never transpose the data (columns as rows). The response already contains a markdown table — present it directly or reformat it, but always keep the column-per-column, row-per-row orientation. + +Some tables may be excluded from data queries by the developer. If `select_data` returns an error about an excluded table, the table's schema is still available through `describe_table` — only data access is restricted. diff --git a/website/static/skills/storm-schema.md b/website/static/skills/storm-schema.md index 24af573ce..717cdf05e 100644 --- a/website/static/skills/storm-schema.md +++ b/website/static/skills/storm-schema.md @@ -2,8 +2,9 @@ Use the storm-schema MCP tools to inspect the database schema. 1. Call \`list_tables\` to get all tables 2. Call \`describe_table\` for each table -3. Present a schema summary -4. Offer to generate Storm entities (ask Kotlin or Java, ask about loading preference) +3. If \`select_data\` is available and columns are ambiguous (e.g., generic \`VARCHAR\`, \`TEXT\`, or \`INT\` columns where the purpose is unclear from the name and type alone), sample a few rows to clarify intent +4. Present a schema summary +5. 
Offer to generate Storm entities (ask Kotlin or Java, ask about loading preference) Generation conventions: - snake_case table -> PascalCase class @@ -13,5 +14,6 @@ Generation conventions: - @FK for FKs, nullability from NOT NULL constraints - CIRCULAR: if two tables reference each other, at least one must use Ref - Self-referencing: always Ref -- @UK for unique constraints +- @UK for single-column unique constraints (apply by default). Composite unique constraints don't need modeling unless the user needs a composite Metamodel.Key +- FK cascade rules (onDelete/onUpdate) are exposed by describe_table for context but are not modeled by Storm — cascade behavior is a database-level concern - Descriptive names, never abbreviated diff --git a/website/static/skills/storm-sql-java.md b/website/static/skills/storm-sql-java.md index c55102c32..f07dede2b 100644 --- a/website/static/skills/storm-sql-java.md +++ b/website/static/skills/storm-sql-java.md @@ -78,6 +78,8 @@ The join, grouping, and result retrieval are all code-based. Only the `COUNT(*)` The `Data` interface marks types for SQL generation without CRUD. It tells Storm how to map the result columns to the record fields. +**`Ref` in result types:** When a SELECT clause references a FK field (`\{User_.city}`) rather than a full entity (`\{City.class}`), use `Ref<City>` in the result type — not the raw ID type and not the full entity. `Ref` maps correctly to the FK column value. Use the full entity type only when selecting all its columns via `\{City.class}`. + All interpolated values become bind parameters. SQL injection safe by design. **Note:** `Query.getResultList()` (no type parameter) returns `List<Object[]>`. For typed results, use `query.getResultList(T.class)`. This is different from QueryBuilder's `.getResultList()` which returns `List<T>` already typed to the query's result type.
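The typed-versus-untyped distinction in the note above can be sketched in plain Java. The row data and the `CityUserCount` record are invented for illustration; only the shape (untyped `Object[]` rows versus a typed record) reflects the documented note:

```java
// Sketch of untyped vs. typed query results. CityUserCount and the sample
// row are hypothetical; this is not Storm's API.
import java.util.List;

public class TypedResults {
    record CityUserCount(long cityId, String cityName, long userCount) {}

    public static void main(String[] args) {
        // Untyped rows, analogous to a no-argument result list: every
        // consumer repeats positional indexing and casts.
        List<Object[]> rawRows = List.of(new Object[] {1L, "Amsterdam", 42L});

        // Mapping to a typed record centralizes those casts in one place,
        // which is the boilerplate a typed result type removes.
        List<CityUserCount> typed = rawRows.stream()
                .map(row -> new CityUserCount((Long) row[0], (String) row[1], (Long) row[2]))
                .toList();

        if (!typed.get(0).cityName().equals("Amsterdam")) throw new AssertionError();
        if (typed.get(0).userCount() != 42L) throw new AssertionError();
    }
}
```

With a typed result, field names and types are checked by the compiler instead of being implicit in column positions.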
diff --git a/website/static/skills/storm-sql-kotlin.md b/website/static/skills/storm-sql-kotlin.md index 870f8cd13..0bc968eb0 100644 --- a/website/static/skills/storm-sql-kotlin.md +++ b/website/static/skills/storm-sql-kotlin.md @@ -78,6 +78,8 @@ The join, grouping, and result retrieval are all code-based. Only the `COUNT(*)` The `Data` interface marks types for SQL generation without CRUD. It tells Storm how to map the result columns to the record fields. +**`Ref` in result types:** When a SELECT clause references a FK field (`${User_.city}`) rather than a full entity (`${City::class}`), use `Ref<City>` in the result type — not the raw ID type and not the full entity. `Ref` maps correctly to the FK column value. Use the full entity type only when selecting all its columns via `${City::class}`. + All interpolated values become bind parameters. SQL injection safe by design. **Note:** `Query.resultList` (Kotlin property, no type parameter) returns `List<Array<Any?>>`. For typed results, use `query.resultList<T>()` or `query.getResultList(T::class)`. This is different from QueryBuilder's `.resultList` which returns `List<T>` already typed to the query's result type. diff --git a/website/static/skills/storm-validate.md b/website/static/skills/storm-validate.md index a9a9b61e8..7a243e301 100644 --- a/website/static/skills/storm-validate.md +++ b/website/static/skills/storm-validate.md @@ -11,7 +11,8 @@ Compare Storm entities against the live database schema.
- Extra columns that are NOT NULL without a default value (these would cause INSERT failures) - FK/`@FK` mismatches (FK referencing wrong table is an error; missing FK constraint is a warning) - Nullability differences (entity non-null but DB nullable is a warning) - - Missing unique constraints for @UK fields + - Missing unique constraints for @UK fields (use `uniqueConstraints` array from `describe_table`) + - Do NOT validate or report on FK cascade rules (onDelete/onUpdate) — Storm does not model cascade behavior - Primary key issues: mismatch between entity and DB PK columns is an error; missing PK constraint is a warning - Missing sequences for @PK(generation = SEQUENCE) fields @@ -20,6 +21,7 @@ Respect suppression annotations: - Skip fields annotated with `@DbIgnore` - `@PK(constraint = false)` suppresses missing PK constraint warning - `@UK(constraint = false)` suppresses missing unique constraint warning +- Composite (multi-column) DB unique constraints that are not modeled in the entity do NOT produce a warning — modeling them requires structural changes that should be a deliberate choice - `@FK(constraint = false)` suppresses missing FK constraint warning - Polymorphic FKs (sealed interface targets) cannot have standard DB FK constraints; skip FK validation for these diff --git a/website/versioned_docs/version-1.11.1/getting-started.md b/website/versioned_docs/version-1.11.1/getting-started.md index 6ccf83984..9e45aeb15 100644 --- a/website/versioned_docs/version-1.11.1/getting-started.md +++ b/website/versioned_docs/version-1.11.1/getting-started.md @@ -6,7 +6,7 @@ Storm is a modern SQL Template and ORM framework for Kotlin 2.0+ and Java 21+. I Storm is built around a simple idea: your data model should be a plain value, not a framework-managed object. In Storm, entities are Kotlin data classes or Java records. They carry no hidden state, no change-tracking proxies, and no lazy-loading hooks. 
You can create them, pass them across layers, serialize them, compare them by value, and store them in collections without worrying about session scope, detachment, or side effects. What you see in the source code is exactly what exists at runtime. -This stateless design is a deliberate trade-off. Traditional ORMs like JPA/Hibernate give you automatic dirty checking and transparent lazy loading, but at the cost of complexity: you must reason about managed vs. detached state, proxy initialization, persistence context boundaries, and cascading rules that interact in subtle ways. Storm avoids all of this. When you call `update`, you pass the full entity. When you query a relationship, you get the result in the same query. There are no surprises. +This stateless design is a deliberate trade-off. Traditional ORMs like JPA/Hibernate give you transparent lazy loading and proxy-based dirty checking, but at the cost of complexity: you must reason about managed vs. detached state, proxy initialization, persistence context boundaries, and cascading rules that interact in subtle ways. Storm avoids all of this. It still performs [dirty checking](/dirty-checking), but by comparing entity state within a transaction rather than through proxies or bytecode manipulation. When you query a relationship, you get the result in the same query. There are no surprises. Storm is also SQL-first. Rather than abstracting SQL away behind a query language (like JPQL) or a verbose criteria builder, Storm embraces SQL directly. Its SQL Template API lets you write real SQL with type-safe parameter interpolation and automatic result mapping. For common CRUD patterns, the type-safe DSL and repository interfaces provide concise, compiler-checked alternatives, but the full power of SQL is always available when you need it.
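The snapshot-comparison approach to dirty checking described above can be sketched conceptually with a plain record. The `User` record is hypothetical and this is an illustration of the idea, not Storm's internal implementation:

```java
// Conceptual illustration of snapshot-based dirty checking; the User
// record is hypothetical and this is not Storm's internals.
public class DirtyCheckSketch {
    record User(long id, String name, String email) {}

    // Because entities are plain values, dirty checking reduces to a value
    // comparison against a snapshot taken when the entity was loaded; no
    // proxies or bytecode manipulation are involved.
    static boolean isDirty(User snapshot, User current) {
        return !snapshot.equals(current);
    }

    public static void main(String[] args) {
        User loaded = new User(1L, "Ada", "ada@example.com");
        User edited = new User(1L, "Ada", "ada@lovelace.dev");

        if (isDirty(loaded, loaded)) throw new AssertionError();  // unchanged
        if (!isDirty(loaded, edited)) throw new AssertionError(); // changed
    }
}
```

Record equality is generated from the components, so the comparison is exhaustive by construction and works on detached values passed across layers.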