---
name: java-spring-ai
description: Use when the user asks to add AI features, integrate Spring AI or LangChain4J, build a chatbot, implement RAG (retrieval-augmented generation), use vector stores, stream LLM responses, or call AI tools/functions in a Spring Boot project.
version: 1.0.0
authors: [java-plugins contributors]
tags: [java, spring-boot, spring-ai, langchain4j, llm, rag, vector-store, ai]
allowed-tools: [Read, Glob, Grep, Edit, Write]
---

# Spring AI / LangChain4J Skill

Detect the framework in use, then apply the correct patterns.

## Step 1 — Detect framework and version

Check `pom.xml` or `build.gradle`:
- `spring-ai-*` dependency → **Spring AI** (note version: 1.0.x GA or 0.8.x milestone)
- `langchain4j-*` dependency → **LangChain4J** (note version: 0.x or 1.x)
- Neither present → offer to add one (recommend Spring AI for Spring Boot 3.x, LangChain4J for Boot 2.x)

Check Spring Boot version:
- Boot 3.x → Spring AI 1.x preferred, LangChain4J 0.35+
- Boot 2.x → LangChain4J 0.30.x (Spring AI requires Boot 3.x)

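If neither framework is present, a minimal Maven sketch for the Spring AI path looks like the following. The coordinates assume the 1.0 GA naming (`spring-ai-bom`, `spring-ai-starter-model-openai`); pre-GA milestones used `spring-ai-openai-spring-boot-starter`, so verify the artifact IDs and version against the release you target:

```xml
<!-- Import the Spring AI BOM, then add one provider starter (OpenAI shown). -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-bom</artifactId>
      <version>1.0.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-openai</artifactId>
  </dependency>
</dependencies>
```
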
---

## Mode: `review`

User asks to review existing AI code. Check for:

**Spring AI:**
- [ ] `ChatClient` built via `ChatClient.Builder` (not raw `ChatModel`) for the fluent API
- [ ] Prompt templates use `PromptTemplate` with variables — no string concatenation
- [ ] Streaming uses `stream().content()` or `Flux<String>` — not blocking `.call()` for real-time responses
- [ ] `@Retryable` or Spring AI retry config on ChatClient calls — LLM calls fail transiently
- [ ] Secrets (`spring.ai.openai.api-key`) come from env vars or Vault, never hardcoded
- [ ] `VectorStore` queries use `SearchRequest.query(text).withTopK(n)` — not raw SQL
- [ ] RAG advisor (`QuestionAnswerAdvisor`) attached to the ChatClient — not manual context injection
- [ ] Token usage logged at DEBUG, not INFO (avoid log noise)

**LangChain4J:**
- [ ] AI services use an `@AiService` interface — not `ChatLanguageModel.generate()` directly
- [ ] System prompts in `@SystemMessage` annotations — not hardcoded strings
- [ ] Memory uses `MessageWindowChatMemory` or `TokenWindowChatMemory` — not unlimited history
- [ ] Streaming via `StreamingChatLanguageModel` with `TokenStream` — not blocking
- [ ] Embeddings via `EmbeddingModel` + `EmbeddingStore` for RAG — not in-memory list search
- [ ] Tools annotated with `@Tool` on service methods — not manual function dispatch
- [ ] API key from `@Value("${langchain4j.openai.api-key}")` — never a literal

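The "no string concatenation" items above come down to named placeholders. A framework-free sketch of the substitution a `PromptTemplate` performs (the regex implementation here is illustrative, not Spring AI's; failing fast on a missing variable is the behavior worth copying):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: fill {placeholders} from a variable map instead of
// concatenating user input directly into the prompt string.
public class TemplateSketch {
    private static final Pattern VAR = Pattern.compile("\\{(\\w+)}");

    static String render(String template, Map<String, String> vars) {
        Matcher m = VAR.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String name = m.group(1);
            if (!vars.containsKey(name)) {
                // A missing variable is a bug in the caller, not the model.
                throw new IllegalArgumentException("Missing variable: " + name);
            }
            m.appendReplacement(out, Matcher.quoteReplacement(vars.get(name)));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(render(
            "Summarize the following {format} in {count} bullet points.",
            Map.of("format", "changelog", "count", "3")));
    }
}
```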
---

## Mode: `chat`

User asks to add a basic chatbot or chat endpoint.

### Spring AI
1. Add the dependency (see `references/patterns.md` → Spring AI Setup)
2. Inject `ChatClient.Builder`, build a `ChatClient` bean
3. Create a `ChatController` with `@PostMapping("/chat")`
4. Use `chatClient.prompt().user(message).call().content()` for a simple response
5. For streaming: return `Flux<String>` from `chatClient.prompt().user(message).stream().content()`
6. Reference `OPENAI_API_KEY` / `ANTHROPIC_API_KEY` in `application.yml` via a `${...}` placeholder — never commit the key

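For step 6, a sketch of the corresponding `application.yml` (property names follow Spring AI's `spring.ai.openai.*` prefix; the model name and temperature are example values, not recommendations):

```yaml
spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}   # resolved from the environment, never a literal
      chat:
        options:
          model: gpt-4o-mini
          temperature: 0.7
```
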
### LangChain4J
1. Add `langchain4j-spring-boot-starter` plus a provider dependency
2. Define an `@AiService` interface with `@SystemMessage`
3. The starter auto-registers `@AiService` interfaces as beans; without it, build manually via `AiServices.builder(MyAssistant.class).chatLanguageModel(model).build()`
4. Expose via a `@RestController`

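Step 3's builder produces an implementation of your interface at runtime. A framework-free sketch of the idea using a JDK dynamic proxy (the `Assistant` interface and stub model are invented for illustration; the real `AiServices` also wires in memory, tools, and output parsing):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.function.UnaryOperator;

// Sketch: a dynamic proxy implements the chat interface by prepending
// the system prompt and forwarding the call to the model function.
public class AiServiceSketch {
    interface Assistant {
        String chat(String userMessage);
    }

    @SuppressWarnings("unchecked")
    static <T> T build(Class<T> iface, String systemPrompt, UnaryOperator<String> model) {
        InvocationHandler handler = (proxy, method, args) ->
            model.apply(systemPrompt + "\n\nUser: " + args[0]);
        return (T) Proxy.newProxyInstance(
            iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }

    public static void main(String[] args) {
        // Stub "model" that echoes its prompt; swap in a real LLM client.
        Assistant bot = build(Assistant.class, "You are terse.",
            prompt -> "LLM saw: " + prompt);
        System.out.println(bot.chat("hello"));
    }
}
```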
---

## Mode: `rag`

User asks to implement RAG (chat over documents, knowledge base, semantic search).

### Spring AI RAG
1. Choose a vector store: PgVector (PostgreSQL), Chroma, Redis, Weaviate, Qdrant (see `references/patterns.md`)
2. Add `spring-ai-{store}-store-spring-boot-starter`
3. Ingest pipeline:
   - `DocumentReader` (PDF, text, web) → `TokenTextSplitter` → `VectorStore.add()`
   - Run at startup via `ApplicationRunner` or a dedicated `@PostMapping("/ingest")`
4. Query pipeline:
   - Attach `QuestionAnswerAdvisor(vectorStore)` to the `ChatClient`
   - Spring AI auto-retrieves context and injects it into the prompt
5. Tune: `SearchRequest.withTopK(5).withSimilarityThreshold(0.7)`

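The tuning in step 5 can be pictured without a vector store. A framework-free sketch of top-K retrieval with a similarity threshold over in-memory embeddings (the two-dimensional vectors are toy data; real stores replace this linear scan with an approximate-nearest-neighbour index):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: rank chunks by cosine similarity to the query embedding,
// drop anything below the threshold, keep the K best.
public class RetrievalSketch {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    static List<String> topK(Map<String, double[]> store, double[] query,
                             int k, double threshold) {
        return store.entrySet().stream()
            .filter(e -> cosine(e.getValue(), query) >= threshold)
            .sorted(Comparator.comparingDouble(
                (Map.Entry<String, double[]> e) -> -cosine(e.getValue(), query)))
            .limit(k)
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, double[]> store = Map.of(
            "refund policy", new double[]{0.9, 0.1},
            "release notes", new double[]{0.1, 0.9});
        // Query embedding close to "refund policy"; only it survives the threshold.
        System.out.println(topK(store, new double[]{1.0, 0.0}, 1, 0.7));
    }
}
```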
### LangChain4J RAG
1. Add an `EmbeddingStore` (Chroma, Qdrant, in-memory for dev)
2. `EmbeddingStoreIngestor` with a `DocumentSplitter` and `EmbeddingModel`
3. `EmbeddingStoreContentRetriever` → `RetrievalAugmentor` → `AiServices` builder

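Step 2's splitter boils down to overlapping windows: a fact that straddles a chunk boundary still appears whole in at least one chunk. A framework-free sketch that counts characters (real `DocumentSplitter` implementations count tokens and respect sentence boundaries):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: fixed-size chunks with overlap. Assumes chunkSize > overlap.
public class SplitterSketch {
    static List<String> split(String text, int chunkSize, int overlap) {
        List<String> chunks = new ArrayList<>();
        int step = chunkSize - overlap;
        for (int start = 0; start < text.length(); start += step) {
            chunks.add(text.substring(start, Math.min(start + chunkSize, text.length())));
            if (start + chunkSize >= text.length()) break;  // last chunk reached the end
        }
        return chunks;
    }

    public static void main(String[] args) {
        for (String c : split("abcdefghij", 4, 2)) {
            System.out.println(c);   // abcd, cdef, efgh, ghij
        }
    }
}
```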
---

## Mode: `tools`

User asks to give the AI the ability to call Java methods (function/tool calling).

### Spring AI
1. Define a `@Bean` of type `Function<Input, Output>` — Spring AI auto-registers it
2. Or use `@Description` on a `record` parameter for a rich schema
3. Pass function names to `ChatClient`: `.options(OpenAiChatOptions.builder().withFunction("myFunction").build())`
4. Spring AI handles the tool call loop automatically

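The loop Spring AI runs for you in step 4 looks roughly like this framework-free sketch (the `TOOL:` string protocol, stub model, and `weather` tool are all invented for illustration; real providers exchange structured tool-call messages, not prefixed strings):

```java
import java.util.Map;
import java.util.function.Function;

// Sketch of the tool-call loop: the model either answers or names a
// tool; the runtime invokes the tool and feeds the result back until
// the model produces a final answer.
public class ToolLoopSketch {
    static String run(Function<String, String> model,
                      Map<String, Function<String, String>> tools,
                      String userMessage) {
        String next = userMessage;
        for (int turn = 0; turn < 5; turn++) {               // cap the loop
            String reply = model.apply(next);
            if (!reply.startsWith("TOOL:")) {
                return reply;                                 // final answer
            }
            String[] call = reply.substring(5).split(" ", 2); // tool name + argument
            String result = tools.get(call[0]).apply(call[1]);
            next = "TOOL_RESULT: " + result;                  // feed the result back
        }
        throw new IllegalStateException("Tool loop did not converge");
    }

    public static void main(String[] args) {
        Map<String, Function<String, String>> tools =
            Map.of("weather", city -> "22C in " + city);
        // Stub model: asks for the tool once, then answers with its result.
        Function<String, String> model = msg ->
            msg.startsWith("TOOL_RESULT:")
                ? "It is" + msg.substring("TOOL_RESULT:".length())
                : "TOOL:weather Paris";
        System.out.println(run(model, tools, "What's the weather in Paris?"));
    }
}
```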
### LangChain4J
1. Annotate service methods with `@Tool("description of what this tool does")`
2. Register the service as a tool: `AiServices.builder(...).tools(myToolService).build()`
3. The model decides when to call — no manual dispatch needed

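The "no manual dispatch" point can be made concrete with reflection. A framework-free sketch of annotation-driven tool discovery (the `@Tool` annotation here is a stand-in for LangChain4J's `dev.langchain4j.agent.tool.Tool`, and `Calculator` is a made-up service):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Sketch: annotated methods are discovered by reflection and exposed
// to the model by name, so no hand-written switch over tool names.
public class ToolRegistrySketch {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Tool { String value(); }

    static class Calculator {
        @Tool("adds two integers")
        public int add(int a, int b) { return a + b; }
    }

    static Map<String, Method> discover(Object service) {
        Map<String, Method> registry = new HashMap<>();
        for (Method m : service.getClass().getDeclaredMethods()) {
            if (m.isAnnotationPresent(Tool.class)) {
                registry.put(m.getName(), m);
            }
        }
        return registry;
    }

    public static void main(String[] args) throws Exception {
        Calculator calc = new Calculator();
        Map<String, Method> registry = discover(calc);
        // The model would choose the tool by name; we invoke it directly here.
        System.out.println(registry.get("add").invoke(calc, 2, 3));
    }
}
```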
---

## Mode: `memory`

User asks to add conversation memory / chat history.

### Spring AI
- `MessageChatMemoryAdvisor` with `InMemoryChatMemory` for single-instance apps
- `JdbcChatMemory` for persistent / multi-instance memory (requires the `spring-ai-jdbc` store)
- Key: pass a `conversationId` (e.g., session ID or user ID) to scope memory per user

### LangChain4J
- `MessageWindowChatMemory.withMaxMessages(20)` — keeps the last N messages
- `TokenWindowChatMemory` — keeps messages within a token budget
- For persistence: implement `ChatMemoryStore` backed by Redis or JDBC

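The two window types above share one eviction rule: drop the oldest entry first so the prompt never grows without bound. A framework-free sketch of `MessageWindowChatMemory`-style trimming (a message-count window; the token variant applies the same idea to a token budget):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Sketch: keep at most maxMessages, evicting from the head (oldest).
public class WindowMemorySketch {
    private final Deque<String> messages = new ArrayDeque<>();
    private final int maxMessages;

    WindowMemorySketch(int maxMessages) { this.maxMessages = maxMessages; }

    void add(String message) {
        messages.addLast(message);
        while (messages.size() > maxMessages) {
            messages.removeFirst();               // evict oldest first
        }
    }

    List<String> history() { return List.copyOf(messages); }

    public static void main(String[] args) {
        WindowMemorySketch memory = new WindowMemorySketch(3);
        for (String m : List.of("m1", "m2", "m3", "m4", "m5")) {
            memory.add(m);
        }
        System.out.println(memory.history());   // only the last three survive
    }
}
```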
---

## Output format

For **review mode**: list findings as `[CRITICAL] / [HIGH] / [MEDIUM] / [LOW]` with file:line references.

For **implementation modes** (chat, rag, tools, memory):
1. Show exact Maven/Gradle dependencies with versions
2. Show full working code snippets (not pseudocode)
3. Show the `application.yml` configuration
4. State the minimum Spring Boot and Java version required

Always note version-specific differences:
- Spring AI 1.0.x (GA) vs 0.8.x (milestone) — the API changed between these
- LangChain4J 1.x vs 0.x — the `AiServices` API changed in 1.x
- Spring Boot 3.x is required for Spring AI; on Boot 2.x, use LangChain4J