| name | promptopskit |
|---|---|
| description | Guidance for creating and editing promptopskit prompt files, defaults, variables, and validation-safe templates. |
This project uses promptopskit to manage LLM prompts as code. Prompts live in markdown files with YAML front matter, are validated against a schema, and render into provider-specific request bodies (OpenAI, Anthropic, Gemini, OpenRouter, LLMAsAService, and OpenAI Responses). Follow these instructions when creating or editing prompts.
When a user asks for prompt work, infer their intent from phrasing and act directly. Do not stop at explanation when the user is asking for a file or code artifact.
Treat requests like these as instructions to create a new prompt markdown file:
- "Create a prompt that ..."
- "Add a prompt that ..."
- "Write a prompt for ..."
- "Make a prompt to ..."
- "Build me a prompt that ..."
- "I need a prompt that ..."
- "Scaffold a prompt for ..."
- "Create an eval/rewrite/classifier/extractor/summarizer prompt ..."
Behavior:
- Create a new `kebab-case.md` file under the project's prompt source directory (`./prompts` by default, or the existing prompt folder used by the repo).
- Give the file a meaningful name derived from the task, not a generic name like `prompt.md` or `new-prompt.md`.
- Set `id` to a stable slash path or camel/snake identifier that matches local conventions. Prefer path-like ids such as `support/reply`, `content/seo-brief`, or `examples/basic` when prompts are organized in folders.
- Include all appropriate defaults for the requested use case:
  - Always include `id` and `schema_version: 1`.
  - Include `description` when the purpose is not obvious from the id.
  - Include `provider` and `model` only if they are requested or not supplied by a nearby `defaults.md`.
  - Include `sampling`, `reasoning`, `response`, `tools`, `cache`, `provider_options`, or `raw` only when the prompt behavior requires them.
  - Include `context.inputs` for every `{{ variable }}` in the body.
  - Use object-form inputs with `non_empty: true`, `max_size`, `trim`, and validation controls when the input is user-provided or unbounded.
  - Add `context.history.max_items` for chat or support prompts that should preserve bounded conversation history through compaction.
  - Add `# Notes` only for authoring details, examples, or explanations that must not be sent to the model.
- If the prompt should return JSON or another structured result, define `response.format: json` and a JSON Schema in `response.schema`.
- When the user gives a vague task, choose sensible placeholders and defaults from the task instead of asking for every detail. Ask only if the missing detail would change the prompt's purpose, provider, or output contract.
- After writing the file, run or recommend `promptopskit validate` for the prompt source directory.
Example response to "Create a prompt that turns a support ticket into a concise triage summary":
---
id: support/triage-summary
schema_version: 1
description: Summarize a support ticket for triage.
context:
  inputs:
    - name: ticket
      non_empty: true
      trim: true
      max_size: 12000
      reject_secrets: true
response:
  format: json
  schema_name: support_triage_summary
  schema:
    type: object
    additionalProperties: false
    required: [summary, urgency, category, next_action]
    properties:
      summary:
        type: string
      urgency:
        type: string
        enum: [low, medium, high]
      category:
        type: string
      next_action:
        type: string
---
# System instructions
You summarize support tickets for a triage queue. Be concise, specific, and avoid
inventing details that are not in the ticket.
# Prompt template
Summarize this support ticket:
{{ ticket }}

Treat requests like these as instructions to generate code that renders a prompt into a provider request body:
- "render the body for the prompt [name] for openai"
- "generate the body for [name] using anthropic"
- "create the call for prompt [name] for google"
- "convert prompt [name] to an openrouter request"
- "show me the OpenAI body for [name]"
- "generate the Anthropic call for the prompt [name]"
- "render/generate/build/create/produce the request/body/call/payload/messages for prompt [name] with provider [provider]"
- "turn [name] into a provider request for openai/anthropic/google/gemini/openrouter/llmasaservice"
- "wire up prompt [name] to OpenAI/Anthropic/Gemini/OpenRouter/LLMAsAService"
- "give me code to call [provider] with prompt [name]"
Provider aliases:
| User says | Use provider |
|---|---|
| openai, chat completions, OpenAI chat | openai |
| responses, OpenAI Responses, responses api | openai-responses |
| anthropic, claude | anthropic |
| google, gemini | gemini |
| openrouter | openrouter |
| llmasaservice, llmasaservice.io, llm gateway | llmasaservice |
Behavior:
- Generate code unless the user explicitly asks for only the raw rendered JSON.
- Prefer `createPromptOpsKit().renderPrompt()` for server-side app code that loads prompt source or compiled JSON by path.
- Prefer provider adapters (`openaiAdapter`, `anthropicAdapter`, `geminiAdapter`, `openrouterAdapter`, `llmasaserviceAdapter`) when the user asks for provider-specific integration code or already has a compiled asset.
- Include `variables` for every declared prompt input, using realistic placeholder values or function parameters.
- Include `history` only if the prompt is chat-style or declares `context.history`.
- Check `returnMessage` before reading `request` when using `kit.renderPrompt()` or `adapter.renderPrompt()`.
- Return or pass `request.body` as the provider request payload; `request.provider` and `request.model` are metadata for the caller.
- Do not put API keys in generated snippets. Use environment variables and keep provider calls on the server.
OpenAI example:
import OpenAI from 'openai';
import { createPromptOpsKit } from 'promptopskit';
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const kit = createPromptOpsKit({
sourceDir: './prompts',
compiledDir: './.generated-prompts/json',
mode: 'auto',
});
const result = await kit.renderPrompt({
path: 'support/triage-summary',
provider: 'openai',
variables: {
ticket: ticketText,
},
strict: true,
});
if (result.returnMessage) {
return result.returnMessage;
}
if (!result.request) {
throw new Error('Prompt rendering did not produce an OpenAI request.');
}
const completion = await client.chat.completions.create(result.request.body as any);

Anthropic example:
import Anthropic from '@anthropic-ai/sdk';
import { createPromptOpsKit } from 'promptopskit';
const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const kit = createPromptOpsKit({ sourceDir: './prompts' });
const result = await kit.renderPrompt({
path: 'support/triage-summary',
provider: 'anthropic',
variables: { ticket: ticketText },
strict: true,
});
if (result.returnMessage) return result.returnMessage;
if (!result.request) throw new Error('Prompt rendering did not produce an Anthropic request.');
const message = await client.messages.create(result.request.body as any);

Google Gemini example:
import { GoogleGenAI } from '@google/genai';
import { createPromptOpsKit } from 'promptopskit';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const kit = createPromptOpsKit({ sourceDir: './prompts' });
const result = await kit.renderPrompt({
path: 'support/triage-summary',
provider: 'gemini',
variables: { ticket: ticketText },
strict: true,
});
if (result.returnMessage) return result.returnMessage;
if (!result.request) throw new Error('Prompt rendering did not produce a Gemini request.');
const response = await ai.models.generateContent({
model: result.request.model,
...(result.request.body as Record<string, unknown>),
});

OpenRouter example:
import OpenAI from 'openai';
import { createPromptOpsKit } from 'promptopskit';
const client = new OpenAI({
apiKey: process.env.OPENROUTER_API_KEY,
baseURL: 'https://openrouter.ai/api/v1',
});
const kit = createPromptOpsKit({ sourceDir: './prompts' });
const result = await kit.renderPrompt({
path: 'support/triage-summary',
provider: 'openrouter',
variables: { ticket: ticketText },
strict: true,
});
if (result.returnMessage) return result.returnMessage;
if (!result.request) throw new Error('Prompt rendering did not produce an OpenRouter request.');
const completion = await client.chat.completions.create(result.request.body as any);

LLMAsAService example:
import OpenAI from 'openai';
import {
createLLMAsAServiceOpenAIConfig,
llmasaserviceAdapter,
} from 'promptopskit/llmasaservice';
const client = new OpenAI(createLLMAsAServiceOpenAIConfig({
baseURL: process.env.LLM_GATEWAY_BASE_URL,
projectId: process.env.LLM_GATEWAY_PROJECT_ID,
}));
const result = await llmasaserviceAdapter.renderPrompt(
{
path: 'support/triage-summary',
},
{
runtime: {
provider_options: {
llmasaservice: {
project_id: process.env.LLM_GATEWAY_PROJECT_ID,
customer: {
customer_id: customer.id,
customer_name: customer.name,
customer_user_id: user.id,
customer_user_email: user.email,
},
},
},
},
variables: { ticket: ticketText },
strict: true,
},
);
if (result.returnMessage) return result.returnMessage;
if (!('body' in result)) {
throw new Error('Prompt rendering did not produce an LLMAsAService request.');
}
const completion = await client.chat.completions.create(result.body as any);

If the user asks for "just the body", render with `kit.renderPrompt()` and show or return `result.request.body`, not the whole render result.
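For example, a minimal sketch of that flow, reusing the triage prompt from above (the wrapper function name is illustrative):

```ts
import { createPromptOpsKit } from 'promptopskit';

const kit = createPromptOpsKit({ sourceDir: './prompts' });

export async function renderTriageBody(ticketText: string) {
  const result = await kit.renderPrompt({
    path: 'support/triage-summary',
    provider: 'openai',
    variables: { ticket: ticketText },
    strict: true,
  });

  // Validation fallbacks surface as returnMessage; anything else without a request is an error.
  if (result.returnMessage) return result.returnMessage;
  if (!result.request) throw new Error('Prompt rendering did not produce a request.');

  // Hand back only the provider request body, not the whole render result.
  return result.request.body;
}
```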
Handle these common requests with the same conventions:
- "Add defaults for this prompt folder" -> create or update
defaults.mdwith sharedprovider,model,metadata, cache settings, and optional fallback# System instructions. - "Add inputs to this prompt" -> add exact
context.inputsentries for every placeholder and choose validation controls for risky or large values. - "Make this prompt return JSON" -> add
response.format: json,response.schema_name, and a strictresponse.schema. - "Add a test for this prompt" -> create a sidecar
.test.yamlwith realisticcasesandvariables. - "Compile/render/inspect/validate my prompts" -> use the matching
promptopskitCLI command and report validation errors with file references. - "Move common instructions into a reusable prompt" -> create a shared fragment
and reference it with
includes. - "Add provider-specific options" -> prefer portable fields first, then
provider_options, and userawonly for unmodeled vendor fields with a note. - "Make this safe for production" -> add validation for untrusted inputs, keep API calls server-side, validate before compile, and avoid committing generated artifacts unless the project already does.
Every prompt is a .md file with two parts:
- YAML front matter — model settings, provider config, variables, overrides
- Markdown body — sections separated by H1 headings
---
id: greeting
schema_version: 1
provider: openai
model: gpt-5.4
context:
  inputs:
    - name: name
      max_size: 2000
---
# System instructions
You are a helpful assistant.
# Prompt template
Hello {{ name }}, how can I help you?

When creating a new prompt file with "just the necessary fields", include only the fields required by that specific file:
- Always include `id` and `schema_version: 1`
- Include `provider` and `model` only if they are not inherited from `defaults.md`
- Include `context.inputs` whenever the body contains `{{ variable }}` placeholders
- Omit `context` entirely only when the body has no placeholders
| Field | Type | Required | Description |
|---|---|---|---|
| `id` | string | yes | Unique identifier for the prompt |
| `schema_version` | number | yes | Always 1 |
| `description` | string | no | Human-readable description |
| `provider` | enum | no | openai, openai-responses, anthropic, google, gemini, openrouter, llmasaservice, or any |
| `model` | string | no | Model identifier (e.g. gpt-5.4, claude-sonnet-4-20250514) |
| `fallback_models` | string[] | no | Ordered fallback model list |
| `reasoning` | object | no | `{ effort: low \| ... }` |
| `sampling` | object | no | `{ temperature, top_p, frequency_penalty, presence_penalty, stop, max_output_tokens }` |
| `response` | object | no | `{ format: text \| json, schema, schema_name, schema_description, schema_strict }` |
| `cache` | object | no | Provider-specific cache controls (openai, anthropic, gemini/google) |
| `tools` | array | no | Tool names (strings) or inline definitions with `{ name, description, input_schema }` |
| `provider_options` | object | no | Provider-specific advanced options (anthropic, gemini, openrouter, llmasaservice) |
| `raw` | object | no | Provider-scoped request-body passthrough for unmodeled vendor fields |
| `mcp` | object | no | `{ servers: [string \| ...] }` |
| `context.inputs` | `Array<string \| { name, max_size?, trim?, allow_regex?, deny_regex?, non_empty?, reject_secrets? }>` | no | Declared variables, optionally with validation controls |
| `context.history` | object | no | `{ max_items: number }`; caps rendered history by compacting older turns into one preserved message |
| `includes` | string[] | no | Relative paths to other prompt files to include |
| `environments` | object | no | Per-environment overrides (see Overrides) |
| `tiers` | object | no | Per-tier overrides (see Overrides) |
| `metadata` | object | no | `{ owner, tags, review_required, stable }` |
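As a sketch of the inline `tools` form from the table above (the tool itself is illustrative, following the documented `{ name, description, input_schema }` shape):

```yaml
tools:
  - web_search
  - name: lookup_order
    description: Look up an order by its id.
    input_schema:
      type: object
      properties:
        order_id:
          type: string
      required: [order_id]
```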
Use H1 headings to define sections. The parser recognizes these headings (case-insensitive):
| Heading | Maps to | Purpose |
|---|---|---|
| `# System instructions` | system_instructions | System/developer message |
| `# Prompt template` | prompt_template | User message template |
| `# Notes` | notes | Documentation only — not sent to the model |
If the body has no H1 headings, the entire body becomes the prompt_template.
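For example, a sketch of a headingless prompt whose whole body is treated as the template (the id and input are illustrative):

```markdown
---
id: examples/one-liner
schema_version: 1
context:
  inputs:
    - topic
---
Write one concise sentence about {{ topic }}.
```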
Use {{ variable_name }} syntax in system instructions and prompt template
sections. Variables are replaced at render time.
Rules:
- Declare all variables in `context.inputs` — validation warns on undeclared usage
- Before finishing a new prompt file, scan the body for every `{{ variable }}` and ensure each exact variable name appears in `context.inputs`
- Use object-form inputs with `max_size` when a variable is likely to grow large and should trigger early warnings
- Use `trim` to enforce byte budgets before interpolation when `max_size` is set
- Use `allow_regex` for allowlist checks and `deny_regex` for blocklist checks on risky inputs
- Prefer unquoted `/pattern/i` literals for regex validators so backslash escapes such as `\s` and `\b` stay copyable from regex tools
- Use structured regexes like `{ pattern, flags, return_message }` when the validator needs a fallback message or separate flags
- In structured `pattern:` YAML, use single quotes for patterns with backslashes or double each backslash in double-quoted strings
- Use `non_empty: true` for required user text and `reject_secrets: true` for common secret redaction checks
- When the caller should receive a structured fallback message instead of an exception, use object form with `return_message` on `allow_regex`, `deny_regex`, `non_empty`, or `reject_secrets`
- Escape literal braces with `\{{` and `\}}`
- In strict mode, missing variables throw an error
- In permissive mode, unresolved placeholders are left intact
Example with a size budget:
context:
  inputs:
    - user_message
    - name: account_summary
      max_size: 4096

If a rendered value exceeds `max_size`, `renderPrompt()` emits a non-blocking POK030 warning. At render time, callers can also pass `onContextOverflow` to transform oversized values before warnings/rendering.

If a validator declares `return_message`, `renderPrompt()` returns that message in a structured result and omits the provider request instead of throwing for that validation failure. Invalid regex definitions still fail during validate and compile as POK013 prompt-authoring errors.

Malformed `allow_regex` and `deny_regex` values fail during validate and compile, not just at render time. When regex compilation fails, the error includes the prompt id, variable name, field name, and raw configured value. Double-quoted YAML regex strings with raw backslashes fail as POK013; use `/pattern/i`, single-quoted `pattern: '...'`, or doubled backslashes.
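Putting those controls together, a sketch of an object-form input that returns a structured fallback message instead of throwing (the pattern and message are illustrative):

```yaml
context:
  inputs:
    - name: user_message
      non_empty: true
      deny_regex:
        pattern: '(?i)ignore (all )?previous instructions'
        return_message: "Sorry, that message can't be processed."
```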
Use context.history.max_items when a chat-style prompt should bound rendered conversation history:
context:
  history:
    max_items: 10

When runtime history exceeds `max_items`, PromptOpsKit preserves history by compacting older turns into one synthetic history message and keeping the most recent turns. Callers can provide `onHistoryCompaction` to create a custom summary:
const result = await kit.renderPrompt({
path: 'support/reply',
provider: 'openai',
history,
onHistoryCompaction: ({ overflow }) => ({
role: 'user',
content: `Earlier conversation summary: ${summarizeConversationUsingLLM(overflow)}`,
}),
});

If no callback is supplied, PromptOpsKit creates a plain text compacted history message. Do not describe `max_items` as dropping history; it preserves overflow through compaction.
Example: this is the minimal valid shape for a prompt that references
{{ pull_request }} even when provider/model are inherited from defaults:
---
id: summarizePullRequest
schema_version: 1
context:
  inputs:
    - pull_request
---
# Prompt template
Summarize the following pull request:
{{ pull_request }}

Compose prompts from shared fragments:
includes:
- ./shared/tone.md
- ./shared/safety.md

Included files are parsed and their `system_instructions` are prepended before the including file's own system instructions. Includes resolve recursively. Circular includes are detected and rejected.

Note: Included files do not inherit folder defaults. Only the top-level prompt that is loaded via `loadPromptFile` receives defaults.
Define shared defaults for a prompt tree by adding a defaults.md file in any
folder:
prompts/
├── defaults.md # global provider, model, metadata + system instructions
└── support/
    ├── defaults.md # overrides for support/*
    └── reply.md # inherits from support/defaults.md
Supported default fields:
- `provider` (front matter) — default provider for the folder
- `model` (front matter) — default model for the folder
- `cache` (front matter) — default provider-specific cache hints
- `metadata` (front matter) — merged with prompt-local metadata
- `# System instructions` (body section) — used when the prompt has none
This lets you configure app-wide settings like provider and model
in a single root defaults.md, so individual prompts only declare what's unique to them.
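For instance, a sketch of a root `prompts/defaults.md` (the model and metadata values are illustrative):

```markdown
---
provider: openai
model: gpt-5.4
metadata:
  owner: platform-team
---
# System instructions
You are a concise, helpful assistant for this product.
```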
Important: defaults.md does not declare or infer context.inputs for a prompt.
If a prompt body uses placeholders, the prompt file itself must declare them.
Rules:
- Nearest subfolder `defaults.md` overrides parent defaults
- Prompt-local values always take precedence over defaults
- `defaults.md` files are skipped during compilation and validation
- `loadPromptFile` defaults the search boundary to the file's own directory; pass `defaultsRoot` to enable ancestor traversal (see the sketch below)
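A sketch of ancestor traversal with `defaultsRoot`, assuming `loadPromptFile` is importable from the package root and accepts an options object; verify the actual export and signature against your promptopskit version:

```ts
import { loadPromptFile } from 'promptopskit';

// Assumption: widening the search boundary to ./prompts lets
// support/reply.md pick up the root defaults.md as well.
const asset = await loadPromptFile('./prompts/support/reply.md', {
  defaultsRoot: './prompts',
});
```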
Override model settings per environment or tier:
environments:
  development:
    model: gpt-4.1-mini
    reasoning:
      effort: low
    sampling:
      temperature: 0.9
  production:
    model: gpt-5.4
    reasoning:
      effort: high
    sampling:
      temperature: 0.3
tiers:
  free:
    model: gpt-4.1-mini
    sampling:
      max_output_tokens: 500
  premium:
    model: gpt-5.4

Overridable fields: `model`, `fallback_models`, `reasoning`, `sampling`, `response`, `cache`, `raw`, `tools`, `provider_options`.
Override application order: base → environment → tier → runtime.
Prefer portable fields first:
- Use `sampling` for common sampling controls
- Use `response.schema`, `response.schema_name`, `response.schema_description`, and `response.schema_strict` for structured output when possible
- Use `cache` for provider cache hints
- Use `tools` for tool definitions
Treat response.schema as the provider-neutral JSON Schema contract. The adapters emit it through provider-specific request fields: OpenAI/OpenRouter/LLMAsAService response_format, OpenAI Responses text.format, Anthropic output_config.format, and Gemini generationConfig.responseJsonSchema.
Use provider_options for known non-portable mappings:
provider_options:
  anthropic:
    top_k: 40
    tool_choice:
      type: auto
    output_config:
      format:
        type: json_schema
        schema:
          type: object
  gemini:
    # Use only when Gemini's native schema dialect is required.
    response_schema:
      type: object
    response_json_schema:
      type: object
  openrouter:
    provider:
      order: ["anthropic", "openai"]
    transforms: ["middle-out"]
  llmasaservice:
    project_id: llm-project-id
    # Optional default; usually pass the real customer at render time.
    customer:
      customer_id: cust_123
      customer_name: Acme

For LLMAsAService, prefer putting the current customer/account/user attribution in `runtime.provider_options.llmasaservice.customer` during rendering. Static prompt metadata may include a default, but runtime values should override it for real requests.
Use raw only when a vendor request-body field is important and PromptOpsKit does not model it yet:
raw:
  openai:
    service_tier: flex
  anthropic:
    service_tier: auto
  gemini:
    safetySettings:
      - category: HARM_CATEGORY_DANGEROUS_CONTENT
        threshold: BLOCK_ONLY_HIGH
  openrouter:
    usage:
      include: true
  llmasaservice:
    conversationId: conv_123

Raw blocks are provider-scoped (openai, openai-responses/openai_responses, anthropic, gemini/google, openrouter, llmasaservice) and are shallow-merged into the final request body after normalized fields. When adding raw, include a short note in `# Notes` explaining why a first-class field is not being used.
Create a .test.yaml file alongside a prompt to define test cases:
# greeting.test.yaml
cases:
  - name: basic
    variables:
      name: "World"
  - name: formal
    variables:
      name: "Dr. Smith"

Choose the narrowest runtime surface that fits the environment.
Use `createPromptOpsKit()` when:

- You are on the server or in a Node runtime
- Prompts live as `.md` files in a source tree
- You want promptopskit to handle loading, defaults, includes, overrides, and provider shaping in one step
- You want auto mode to prefer compiled artifacts when present but still fall back to source
import { createPromptOpsKit } from 'promptopskit';
const kit = createPromptOpsKit({
sourceDir: './prompts',
compiledDir: './.generated-prompts/json',
warnings: {
contextSize: process.env.NODE_ENV === 'production' ? 'off' : 'console-and-result',
},
});
const result = await kit.renderPrompt({
path: 'support/reply',
provider: 'openai',
environment: 'production',
variables: {
user_message: 'How do I reset my password?',
app_context: 'Account settings',
},
});
if (result.returnMessage) {
return result.returnMessage;
}
if (!result.request) {
throw new Error('Prompt rendering did not produce a provider request.');
}
const { request } = result;

Use a provider adapter's `renderPrompt()` when:

- You want direct provider adapter imports such as `promptopskit/openai`
- You are on the server and want adapter-level ergonomics
- You still want the adapter to resolve either source `.md` or compiled output from disk
import path from 'node:path';
import { openaiAdapter } from 'promptopskit/openai';
const result = await openaiAdapter.renderPrompt(
{
path: 'support/reply',
sourceDir: path.join(process.cwd(), 'prompts'),
compiledDir: path.join(process.cwd(), '.generated-prompts', 'json'),
},
{
environment: 'production',
variables: {
user_message: 'How do I reset my password?',
app_context: 'Account settings',
},
strict: true,
},
);
if (result.returnMessage) {
return result.returnMessage;
}
if (!('body' in result)) {
throw new Error('Prompt rendering did not produce a provider request.');
}
const request = result;

Use `adapter.render()` with a precompiled asset when:

- You already have a compiled JSON or ESM prompt artifact
- You are in edge, worker, or browser-oriented code and cannot read prompt files from disk
- You want the smallest runtime surface and no file loading behavior
import type { ResolvedPromptAsset } from 'promptopskit';
import { openaiAdapter } from 'promptopskit/openai';
import compiledPrompt from './.generated-prompts/esm/support/reply.mjs';
const prompt = compiledPrompt as ResolvedPromptAsset;
const request = openaiAdapter.render(prompt, {
environment: 'production',
variables: {
user_message: 'How do I reset my password?',
app_context: 'Account settings',
},
strict: true,
});

- Do not recommend direct provider API calls from browser or client components unless the user explicitly wants a demo-only setup
- Do not use `createPromptOpsKit()` in browser-only code; it is Node-oriented
- For client-side rendering, use precompiled ESM or inline a small `ResolvedPromptAsset`, then pass the request body to a server endpoint or server action that holds provider credentials (see the sketch after this list)
- If the user insists on a pure browser provider call, explicitly call out that API keys will be exposed and treat it as unsafe for production
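A sketch of that hand-off, assuming a hypothetical `/api/llm` server route that holds the provider key (the route, the wrapper function, and the `userMessage` value are illustrative):

```ts
import type { ResolvedPromptAsset } from 'promptopskit';
import { openaiAdapter } from 'promptopskit/openai';
import compiledPrompt from './.generated-prompts/esm/support/reply.mjs';

const prompt = compiledPrompt as ResolvedPromptAsset;

export async function askSupport(userMessage: string) {
  // Shape the provider request on the client from a precompiled asset.
  const request = openaiAdapter.render(prompt, {
    variables: { user_message: userMessage, app_context: 'Account settings' },
    strict: true,
  });

  // Send only the request body to a server endpoint that owns the API key.
  return fetch('/api/llm', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(request.body),
  });
}
```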
Prompts should usually be validated and compiled as part of the normal build pipeline rather than handled ad hoc.
{
  "scripts": {
    "validate:prompts": "promptopskit validate ./prompts --strict",
    "build:prompts": "promptopskit compile",
    "build": "npm run validate:prompts && npm run build:prompts && tsup"
  }
}

`promptopskit compile` defaults to JSON output in `./.generated-prompts/json`, which matches runtime `compiledDir` loading. Use `promptopskit compile --format esm` when prompts need to be imported into a bundle; those artifacts default to `./.generated-prompts/esm`.
- Node server: compile to JSON and configure `compiledDir`
- Browser or client bundle: compile to ESM and import specific prompt artifacts
- Mixed app: compile JSON for server loading and ESM only for prompts that must ship in a client bundle
- Add `validate:prompts` before `build:prompts` so schema or variable mistakes fail fast
- Treat compiled artifacts as build outputs, not the source of truth
- Keep prompt source in `./prompts`; use `./.generated-prompts/json` as the default server output and `./.generated-prompts/esm` for imported client artifacts unless a project-specific build layout needs something else
- If using `createPromptOpsKit` in `auto` mode, point both `sourceDir` and `compiledDir` at those directories so local development can fall back to source when artifacts are stale or missing
import { createPromptOpsKit } from 'promptopskit';
export const prompts = createPromptOpsKit({
sourceDir: './prompts',
compiledDir: './.generated-prompts/json',
mode: 'auto',
});

import type { ResolvedPromptAsset } from 'promptopskit';
import compiledPrompt from './generated/prompts/support/reply.mjs';
const prompt = compiledPrompt as ResolvedPromptAsset;

Use validateAsset() when you are working with an already-parsed asset and want schema or variable diagnostics before rendering.
import { validateAsset, parsePrompt } from 'promptopskit';
const asset = parsePrompt(source);
const result = validateAsset(asset);
if (!result.valid) {
console.error(result.errors);
}

Use promptopskit/testing helpers for unit tests around prompt behavior or request shaping.
import { createMockAsset, parseTestPrompt } from 'promptopskit/testing';
const mock = createMockAsset({ model: 'gpt-4.1-mini' });
const asset = parseTestPrompt(`
---
id: test
schema_version: 1
provider: openai
model: gpt-5.4
---
# Prompt template
Hello {{ name }}
`);

| Command | Description |
|---|---|
| `promptopskit init [dir]` | Scaffold a prompts directory with starter files (including defaults.md) |
| `promptopskit validate [sourceDir] [options]` | Validate all prompt files in a directory, defaulting to ./prompts |
| `promptopskit compile [src] [out]` | Compile .md prompts to JSON or ESM artifacts |
| `promptopskit render <file>` | Render a prompt preview |
| `promptopskit inspect <file>` | Print the normalized prompt asset |
- One prompt per file — each `.md` file is a single prompt asset
- Always set `id` and `schema_version: 1` unless a surrounding tool explicitly generates those fields
- Declare every placeholder in `context.inputs`; do not rely on defaults or includes to infer variables
- Use `defaults.md` for shared provider, model, metadata, and fallback system instructions
- Use includes for reusable system behavior, not for user-specific prompt bodies
- Prefer `createPromptOpsKit().renderPrompt()` for server-side app code when prompts live as source files
- Prefer direct adapters for compiled assets or provider-specific integration points
- Do not suggest browser-side provider calls for production because credentials belong on the server
- Validate before compile and compile before shipping when prompts are part of the build
- Variable names should be `snake_case`
- Prompt file names should be `kebab-case.md`