LLM integration plugin for PostGraphile v5 — server-side text-to-vector embedding, resolve-time vector injection, and RAG (Retrieval-Augmented Generation) for pgvector columns using `@agentic-kit/ollama`.

## Table of contents

- [Installation](#installation)
- [Usage](#usage)
- [Features](#features)
- [Plugins](#plugins)
- [Configuration](#configuration)
- [RAG queries](#rag-queries)

## Installation

```bash
npm install graphile-llm
```

## Usage

```js
const preset = {
  // …
  plugins: [
    // …
  ],
};
```

The preset bundles all plugins listed below. You can also import each plugin individually (`createLlmModulePlugin`, `createLlmTextSearchPlugin`, `createLlmTextMutationPlugin`, `createLlmRagPlugin`) if you prefer a-la-carte.
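
For an à-la-carte setup, the individual factories can be wired into a preset directly. A minimal sketch — the factory names come from this README, but calling them with no arguments is an assumption about their signatures:

```typescript
// Sketch: loading two of the plugins individually instead of the full preset.
// Factory names are from this README; zero-argument calls are an assumption,
// not documented behavior.
import {
  createLlmModulePlugin,
  createLlmTextSearchPlugin,
} from "graphile-llm";

const preset = {
  plugins: [
    createLlmModulePlugin(),
    createLlmTextSearchPlugin(),
  ],
};

export default preset;
```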
## Features

- **Text-based vector search** — adds `text: String` field to `VectorNearbyInput`; clients pass natural language instead of raw float vectors
- **Text mutation fields** — adds `{column}Text: String` companion fields on create/update inputs for vector columns
- **RAG queries** — adds `ragQuery` and `embedText` root query fields; detects `@hasChunks` smart tags for chunk-aware retrieval
- **Pluggable providers** — provider-based architecture for both embedding and chat completion (Ollama via `@agentic-kit/ollama`, extensible to OpenAI, etc.)
- **Per-database configuration** — reads `llm_module` from `services_public.api_modules` for per-API provider config
- **Toggleable** — each capability (`enableTextSearch`, `enableTextMutations`, `enableRag`) can be independently enabled or disabled
- **Plugin-conditional** — fields only appear in the schema when the plugin is loaded

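
As a sketch, the three toggles might sit in the preset like this — the flag names come from this README, but the `llm` options key they live under is an assumption, not a documented graphile-llm setting:

```typescript
// Hypothetical placement: the `llm` key is illustrative only; the three
// flag names (enableTextSearch, enableTextMutations, enableRag) are from
// this README's Features list.
const preset = {
  // …other preset options…
  llm: {
    enableTextSearch: true,
    enableTextMutations: true,
    enableRag: false,
  },
};

export default preset;
```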
## Plugins

| Plugin | Description | Toggle |
|--------|-------------|--------|
| `LlmModulePlugin` | Resolves embedder and chat completer from config; stores on build context | Always included |
| `LlmTextSearchPlugin` | Adds `text: String` to `VectorNearbyInput` with resolve-time embedding | `enableTextSearch` (default: `true`) |
| `LlmTextMutationPlugin` | Adds `{column}Text: String` companion fields on create/update inputs | `enableTextMutations` |
| `LlmRagPlugin` | Adds `ragQuery` and `embedText` root query fields | `enableRag` |


## Configuration

Providers can also be configured via environment variables (`EMBEDDER_PROVIDER`, `EMBEDDER_MODEL`, `EMBEDDER_BASE_URL`, `CHAT_PROVIDER`, `CHAT_MODEL`, `CHAT_BASE_URL`).
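
For example — the variable names are those listed above; the model names and URL are illustrative, any Ollama-compatible values would do:

```shell
# Illustrative values only — substitute your own models and endpoint.
export EMBEDDER_PROVIDER=ollama
export EMBEDDER_MODEL=nomic-embed-text
export EMBEDDER_BASE_URL=http://localhost:11434
export CHAT_PROVIDER=ollama
export CHAT_MODEL=llama3
export CHAT_BASE_URL=http://localhost:11434
```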
## RAG queries

When `enableRag: true` and tables have `@hasChunks` smart tags, the plugin adds:

- `ragQuery` — a root query field for retrieval-augmented answers over embedded rows
- `embedText` — a root query field for server-side text embedding
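
A client would call these fields like any other GraphQL root query. The sketch below only builds the request body for a `ragQuery` call — the field name comes from this README, but the `question` argument name and operation shape are hypothetical, not graphile-llm's documented schema:

```typescript
// Sketch: building a GraphQL request body for the ragQuery root field.
// The `question` argument name is an assumption for illustration.
interface RagRequest {
  query: string;
  variables: { q: string };
}

function buildRagRequest(question: string): RagRequest {
  return {
    query: "query Rag($q: String!) { ragQuery(question: $q) }",
    variables: { q: question },
  };
}

// POST this body (as JSON) to the PostGraphile /graphql endpoint.
const body = buildRagRequest("What does the handbook say about refunds?");
console.log(body.variables.q); // What does the handbook say about refunds?
```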