Commit c189028

anandgupta42 and claude authored
feat: add AI-powered prompt enhancement (#144)

* feat: add prompt enhancement feature

  Add AI-powered prompt enhancement that rewrites rough user prompts into clearer, more specific versions before sending to the main model.

  - Add `enhancePrompt()` utility using a small/cheap model to polish prompts
  - Register `prompt.enhance` TUI command with `<leader>i` keybind
  - Show "enhance" hint in the bottom bar alongside agents/commands
  - Add `prompt_enhance` keybind to config schema
  - Add unit tests for the `clean()` text sanitization function

  Inspired by KiloCode's prompt enhancement feature.

* feat: improve enhancement prompt with research-backed approach and add auto-enhance config

  - Rewrite system prompt based on AutoPrompter research (5 missing info categories: specifics, action plan, scope, verification, intent)
  - Add few-shot examples for data engineering tasks (dbt, SQL, migrations)
  - Add `experimental.auto_enhance_prompt` config flag (default: false)
  - Auto-enhance normal prompts on submit when enabled (skips shell/slash)
  - Export `isAutoEnhanceEnabled()` for config-driven behavior

* fix: address code review findings for prompt enhancement

  - Add 15s timeout via `AbortController` to prevent indefinite hangs
  - Extract `ENHANCE_ID` constant and document synthetic `as any` casts
  - Fix `clean()` regex to match full-string code fences only (avoids stripping inner code blocks)
  - Export `stripThinkTags()` as a separate utility for testability
  - Move auto-enhance before extmark expansion (prevents sending expanded paste content to the small model)
  - Add toast feedback and error logging for the auto-enhance path
  - Update `store.prompt.input` after enhancement so history is accurate
  - Add outer try/catch with logging to `enhancePrompt()`
  - Expand tests from 9 to 30: `stripThinkTags()`, `clean()` edge cases, combined pipeline tests

* fix: handle unclosed `<think>` tags from truncated model output

  When the small model hits its token limit mid-generation, `<think>` tags may not have a closing `</think>`. The previous regex required a closing tag, which would leak the entire reasoning block into the enhanced prompt. Now `stripThinkTags()` matches both closed and unclosed think blocks.

* fix: address remaining review findings — history, debounce, tests

  - Fix history storing the original text instead of the enhanced text by passing `inputText` explicitly to `history.append()` instead of spreading `store.prompt`, which may contain stale state
  - Add a concurrency guard (`enhancingInProgress` flag) to prevent multiple concurrent auto-enhance LLM calls from rapid submissions
  - Consolidate the magic string into an `ENHANCE_NAME` constant used across the agent name, user agent, log service, and ID derivation
  - Add a justifying comment for the `as any` cast on synthetic IDs explaining why branded types are safely bypassed
  - Add `isAutoEnhanceEnabled()` tests (5 cases): config absent, present but missing flag, false, true, undefined
  - Add `enhancePrompt()` tests (10 cases): empty input, whitespace, successful enhancement, think tag stripping, code fence stripping, stream.text failure, stream init failure, empty LLM response, think tags with no content, combined pipeline

  Test count: 32 -> 48

* fix: address Sentry findings — stream consumption and race condition

  - Explicitly consume `stream.fullStream` before awaiting `stream.text` to prevent potential hangs from the Vercel AI SDK stream not being drained
  - Add a race condition guard to the manual enhance command: if the user edits the prompt while enhancement is in-flight, discard the stale result
  - Add the same guard to the auto-enhance path in `submit()` for consistency
  - Update the LLM mock to include a `fullStream` async iterable

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1 parent 0c92216 commit c189028

File tree

4 files changed

+564
-0
lines changed

Lines changed: 157 additions & 0 deletions
@@ -0,0 +1,157 @@
// altimate_change - new file
import { Provider } from "@/provider/provider"
import { LLM } from "@/session/llm"
import { Agent } from "@/agent/agent"
import { Config } from "@/config/config"
import { Log } from "@/util/log"
import { MessageV2 } from "@/session/message-v2"

const ENHANCE_NAME = "enhance-prompt"
const ENHANCE_TIMEOUT_MS = 15_000
// MessageV2.User requires branded MessageID/SessionID types, but this is a
// synthetic message that never enters the session store — cast is safe here.
const ENHANCE_ID = ENHANCE_NAME as any

const log = Log.create({ service: ENHANCE_NAME })

// Research-backed enhancement prompt based on:
// - AutoPrompter (arxiv 2504.20196): 5 missing info categories that cause 27% lower edit correctness
// - Meta-prompting best practices: clear role, structural scaffolding, few-shot examples
// - KiloCode's enhance-prompt implementation: lightweight model, preserve intent, no wrapping
const ENHANCE_SYSTEM_PROMPT = `You are a prompt rewriter for a data engineering coding agent. The agent can read/write files, run SQL, manage dbt models, inspect schemas, and execute shell commands.

Your task: rewrite the user's rough prompt into a clearer version that will produce better results. Reply with ONLY the enhanced prompt — no explanations, no wrapping in quotes or code fences.

## What to improve

Research shows developer prompts commonly lack these five categories of information. Add them when missing:

1. **Specifics** — Add concrete details the agent needs: table names, column names, file paths, SQL dialects, error messages. If the user references "the model" or "the table", keep the reference but clarify what the agent should look for.
2. **Action plan** — When the prompt is vague ("fix this"), add explicit steps: investigate first, then modify, then verify. Structure as a logical sequence.
3. **Scope** — Clarify what files, models, or queries are in scope. If ambiguous, instruct the agent to identify the scope first.
4. **Verification** — Add a verification step when the user implies correctness matters (fixes, migrations, refactors). E.g. "run the query to confirm results" or "run dbt test after changes".
5. **Intent clarification** — When the request could be interpreted multiple ways, pick the most likely interpretation and make it explicit.

## Rules

- Preserve the user's intent exactly — never add requirements they didn't ask for
- Keep it concise — a good enhancement adds 1-3 sentences, not paragraphs
- If the prompt is already clear and specific, return it unchanged
- Write in the same tone/style as the user (casual stays casual, technical stays technical)
- Never add generic filler like "please ensure best practices" or "follow coding standards"
- Do not mention yourself or the enhancement process

## Examples

User: "fix the failing test"
Enhanced: "Investigate the failing test — run the test suite first to identify which test is failing and why, then examine the relevant source code, apply a fix, and re-run the test to confirm it passes."

User: "add a created_at column to the users model"
Enhanced: "Add a created_at timestamp column to the users dbt model. Update the SQL definition and the schema.yml entry. Use the appropriate timestamp type for the target warehouse."

User: "why is this query slow"
Enhanced: "Analyze why the query is slow. Run EXPLAIN/query profile to identify bottlenecks (full table scans, missing indexes, expensive joins). Suggest specific optimizations based on the findings."

User: "migrate this from snowflake to bigquery"
Enhanced: "Migrate the SQL from Snowflake dialect to BigQuery dialect. Convert Snowflake-specific functions (e.g. DATEADD, IFF, QUALIFY) to BigQuery equivalents. Preserve the query logic and verify the translated query is syntactically valid."`

export function stripThinkTags(text: string) {
  // Match closed <think>...</think> blocks, and also unclosed <think>... to end of string
  // (unclosed tags happen when the model hits token limit mid-generation)
  return text.replace(/<think>[\s\S]*?(?:<\/think>\s*|$)/g, "")
}

export function clean(text: string) {
  return text
    .trim()
    .replace(/^```\w*\n([\s\S]*?)\n```$/, "$1")
    .trim()
    .replace(/^(['"])([\s\S]*)\1$/, "$2")
    .trim()
}

/**
 * Check if auto-enhance is enabled in config.
 * Defaults to false — user must explicitly opt in.
 */
export async function isAutoEnhanceEnabled(): Promise<boolean> {
  const cfg = await Config.get()
  return cfg.experimental?.auto_enhance_prompt === true
}

export async function enhancePrompt(text: string): Promise<string> {
  const trimmed = text.trim()
  if (!trimmed) return text

  log.info("enhancing", { length: trimmed.length })

  const controller = new AbortController()
  const timeout = setTimeout(() => controller.abort(), ENHANCE_TIMEOUT_MS)

  try {
    const defaultModel = await Provider.defaultModel()
    const model =
      (await Provider.getSmallModel(defaultModel.providerID)) ??
      (await Provider.getModel(defaultModel.providerID, defaultModel.modelID))

    const agent: Agent.Info = {
      name: ENHANCE_NAME,
      mode: "primary",
      hidden: true,
      options: {},
      permission: [],
      prompt: ENHANCE_SYSTEM_PROMPT,
      temperature: 0.7,
    }

    const user: MessageV2.User = {
      id: ENHANCE_ID,
      sessionID: ENHANCE_ID,
      role: "user",
      time: { created: Date.now() },
      agent: ENHANCE_NAME,
      model: {
        providerID: model.providerID,
        modelID: model.id,
      },
    }

    const stream = await LLM.stream({
      agent,
      user,
      system: [],
      small: true,
      tools: {},
      model,
      abort: controller.signal,
      sessionID: ENHANCE_ID,
      retries: 2,
      messages: [
        {
          role: "user",
          content: trimmed,
        },
      ],
    })

    // Consume the stream explicitly to avoid potential SDK hangs where
    // .text never resolves if the stream isn't drained (Vercel AI SDK caveat)
    for await (const _ of stream.fullStream) {
      // drain
    }
    const result = await stream.text.catch((err) => {
      log.error("failed to enhance prompt", { error: err })
      return undefined
    })

    if (!result) return text

    const cleaned = clean(stripThinkTags(result).trim())
    return cleaned || text
  } catch (err) {
    log.error("enhance prompt failed", { error: err })
    return text
  } finally {
    clearTimeout(timeout)
  }
}
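The two sanitizers above are small enough to check in isolation. The sketch below reproduces their regexes verbatim as a standalone module (only the function bodies are taken from the diff; the surrounding file is not needed):

````typescript
// Standalone reproduction of the two text sanitizers from the new
// enhance-prompt module, so their regex behavior can be verified alone.

// Strips closed <think>...</think> blocks, plus an unclosed <think>... tail
// (unclosed tags occur when the model hits its token limit mid-generation).
export function stripThinkTags(text: string): string {
  return text.replace(/<think>[\s\S]*?(?:<\/think>\s*|$)/g, "")
}

// Trim, unwrap a full-string code fence, then unwrap full-string quotes.
// The fence regex is anchored (^...$), so a fence embedded inside a longer
// prompt survives; only a prompt that IS one fence gets unwrapped.
export function clean(text: string): string {
  return text
    .trim()
    .replace(/^```\w*\n([\s\S]*?)\n```$/, "$1")
    .trim()
    .replace(/^(['"])([\s\S]*)\1$/, "$2")
    .trim()
}
````

Note the fence-only anchoring: `clean("Use this:\n```sql\nSELECT 1\n```")` is returned unchanged, which is exactly the review fix described in the commit message.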

packages/opencode/src/cli/cmd/tui/component/prompt/index.tsx

Lines changed: 83 additions & 0 deletions
@@ -34,6 +34,10 @@ import { useToast } from "../../ui/toast"
import { useKV } from "../../context/kv"
import { useTextareaKeybindings } from "../textarea-keybindings"
import { DialogSkill } from "../dialog-skill"
// altimate_change start - import prompt enhancement
import { enhancePrompt, isAutoEnhanceEnabled } from "@/altimate/enhance-prompt"
let enhancingInProgress = false
// altimate_change end

export type PromptProps = {
  sessionID?: string
@@ -194,6 +198,53 @@ export function Prompt(props: PromptProps) {
      dialog.clear()
    },
  },
  // altimate_change start - add prompt enhance command
  {
    title: "Enhance prompt",
    value: "prompt.enhance",
    keybind: "prompt_enhance",
    category: "Prompt",
    enabled: !!store.prompt.input,
    onSelect: async (dialog) => {
      if (!store.prompt.input.trim()) return
      dialog.clear()
      const original = store.prompt.input
      toast.show({
        message: "Enhancing prompt...",
        variant: "info",
        duration: 2000,
      })
      try {
        const enhanced = await enhancePrompt(original)
        // Guard against race condition: if user edited the prompt while
        // enhancement was in-flight, discard the stale enhanced result
        if (store.prompt.input !== original) return
        if (enhanced !== original) {
          input.setText(enhanced)
          setStore("prompt", "input", enhanced)
          input.gotoBufferEnd()
          toast.show({
            message: "Prompt enhanced",
            variant: "success",
            duration: 2000,
          })
        } else {
          toast.show({
            message: "Prompt already looks good",
            variant: "info",
            duration: 2000,
          })
        }
      } catch {
        toast.show({
          message: "Failed to enhance prompt",
          variant: "error",
          duration: 3000,
        })
      }
    },
  },
  // altimate_change end
  {
    title: "Paste",
    value: "prompt.paste",
@@ -564,6 +615,32 @@ export function Prompt(props: PromptProps) {
  const messageID = MessageID.ascending()
  let inputText = store.prompt.input

  // altimate_change start - auto-enhance prompt before expanding paste text
  // Only enhance the raw user text, not shell commands or slash commands
  // Guard prevents concurrent enhancement calls from rapid submissions
  if (store.mode === "normal" && !inputText.startsWith("/") && !enhancingInProgress) {
    try {
      const autoEnhance = await isAutoEnhanceEnabled()
      if (autoEnhance) {
        enhancingInProgress = true
        toast.show({ message: "Enhancing prompt...", variant: "info", duration: 2000 })
        const enhanced = await enhancePrompt(inputText)
        // Discard if user changed the prompt during enhancement
        if (store.prompt.input !== inputText) return
        if (enhanced !== inputText) {
          inputText = enhanced
          setStore("prompt", "input", enhanced)
        }
      }
    } catch (err) {
      // Enhancement failure should never block prompt submission
      console.error("auto-enhance failed, using original prompt", err)
    } finally {
      enhancingInProgress = false
    }
  }
  // altimate_change end

  // Expand pasted text inline before submitting
  const allExtmarks = input.extmarks.getAllForTypeId(promptPartTypeId)
  const sortedExtmarks = allExtmarks.sort((a: { start: number }, b: { start: number }) => b.start - a.start)
@@ -653,6 +730,7 @@ export function Prompt(props: PromptProps) {
  }
  history.append({
    ...store.prompt,
    input: inputText,
    mode: currentMode,
  })
  input.extmarks.clear()
@@ -1155,6 +1233,11 @@ export function Prompt(props: PromptProps) {
  <text fg={theme.text}>
    {keybind.print("command_list")} <span style={{ fg: theme.textMuted }}>commands</span>
  </text>
  {/* altimate_change start - show enhance hint */}
  <text fg={theme.text}>
    {keybind.print("prompt_enhance")} <span style={{ fg: theme.textMuted }}>enhance</span>
  </text>
  {/* altimate_change end */}
</Match>
<Match when={store.mode === "shell"}>
  <text fg={theme.text}>
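Both the manual enhance command and the auto-enhance path above rely on the same stale-result guard: snapshot the input before the async call and discard the result if the input changed while awaiting. A minimal sketch of that pattern (the `Store` shape and function names here are illustrative, not the actual TUI API):

```typescript
// Sketch of the in-flight guard used around enhancePrompt() calls.
type Store = { input: string }

async function enhanceWithGuard(
  store: Store,
  enhance: (text: string) => Promise<string>,
): Promise<boolean> {
  const original = store.input
  const enhanced = await enhance(original)
  // User edited the prompt while enhancement was in-flight: drop the result.
  if (store.input !== original) return false
  if (enhanced !== original) store.input = enhanced
  return true // result applied (or the prompt was already good)
}
```

The same comparison also covers the "prompt already looks good" case: when the model returns the input unchanged, the store is left untouched.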

packages/opencode/src/config/config.ts

Lines changed: 11 additions & 0 deletions
@@ -866,6 +866,9 @@ export namespace Config {
  agent_cycle: z.string().optional().default("tab").describe("Next agent"),
  agent_cycle_reverse: z.string().optional().default("shift+tab").describe("Previous agent"),
  variant_cycle: z.string().optional().default("ctrl+t").describe("Cycle model variants"),
  // altimate_change start - add prompt enhance keybind
  prompt_enhance: z.string().optional().default("<leader>i").describe("Enhance prompt with AI before sending"),
  // altimate_change end
  input_clear: z.string().optional().default("ctrl+c").describe("Clear input field"),
  input_paste: z.string().optional().default("ctrl+v").describe("Paste from clipboard"),
  input_submit: z.string().optional().default("return").describe("Submit input"),
@@ -1226,6 +1229,14 @@ export namespace Config {
      .positive()
      .optional()
      .describe("Timeout in milliseconds for model context protocol (MCP) requests"),
    // altimate_change start - auto-enhance prompt config
    auto_enhance_prompt: z
      .boolean()
      .optional()
      .describe(
        "Automatically enhance prompts with AI before sending (default: false). Uses a small model to rewrite rough prompts into clearer versions.",
      ),
    // altimate_change end
  })
  .optional(),
})
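With both schema additions in place, opting in might look like this in a user's opencode config. The `experimental.auto_enhance_prompt` key matches the schema above; the `"keybinds"` section name is an assumption, since the diff shows only the keybind fields, not their parent key:

```jsonc
{
  "experimental": {
    // Off by default; when true, normal-mode prompts are auto-enhanced on submit
    "auto_enhance_prompt": true
  },
  // Assumed section name; rebinds the manual enhance command (default: <leader>i)
  "keybinds": {
    "prompt_enhance": "<leader>e"
  }
}
```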
