API.md — 11 additions & 11 deletions
```diff
@@ -23,7 +23,7 @@ A SQLite extension that provides semantic memory capabilities with hybrid search
 sqlite-memory enables semantic search over text content stored in SQLite. It:
 
 1. **Chunks** text content using semantic parsing (markdown-aware)
-2. **Generates embeddings** for each chunk using the built-in llama.cpp engine (`"local"` provider) or the [vector.space](https://vector.space) remote service
+2. **Generates embeddings** for each chunk using the built-in llama.cpp engine (`"local"` provider) or the [vectors.space](https://vectors.space) remote service
 3. **Stores** embeddings and full-text content for hybrid search
 4. **Searches** using vector similarity combined with FTS5 full-text search
```
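The four steps above imply a store-then-search flow. Purely as an illustration of that pipeline — the `memory_store` and `memory_search` names below are hypothetical and are not confirmed by this diff:

```sql
-- Hypothetical function names, for illustration only.
-- Steps 1-3: chunk the text, embed each chunk, and store embeddings + FTS content
SELECT memory_store('notes', '# Meeting notes' || char(10) || 'Discussed hybrid search rollout.');

-- Step 4: hybrid query combining vector similarity with FTS5 keyword matching
SELECT * FROM memory_search('hybrid search rollout') LIMIT 5;
```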
```diff
@@ -77,15 +77,15 @@ Configures the embedding model to use.
 **Parameters:**
 
 | Parameter | Type | Description |
 |-----------|------|-------------|
-| `provider` | TEXT | `"local"` for built-in llama.cpp engine, or any other name (e.g., `"openai"`) for [vector.space](https://vector.space) remote service |
-| `model` | TEXT | For local: full path to GGUF model file. For remote: model identifier supported by vector.space |
+| `provider` | TEXT | `"local"` for built-in llama.cpp engine, or any other name (e.g., `"openai"`) for [vectors.space](https://vectors.space) remote service |
+| `model` | TEXT | For local: full path to GGUF model file. For remote: model identifier supported by vectors.space |
 
 **Returns:** INTEGER - 1 on success
 
 **Notes:**
 
 - When `provider` is `"local"`, the extension uses the built-in llama.cpp engine and verifies the model file exists
-- When `provider` is anything other than `"local"`, the extension uses the [vector.space](https://vector.space) remote embedding service
-- Remote embedding requires a free API key from [vector.space](https://vector.space) (set via `memory_set_apikey`)
+- When `provider` is anything other than `"local"`, the extension uses the [vectors.space](https://vectors.space) remote embedding service
+- Remote embedding requires a free API key from [vectors.space](https://vectors.space) (set via `memory_set_apikey`)
 - Settings are persisted in `dbmem_settings` table
 - For local models, the embedding engine is initialized immediately
```
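The parameters in this hunk might be exercised as follows. This is a sketch only: `memory_set_model` is an assumed function name (by analogy with the confirmed `memory_set_apikey`), and the model path, key, and remote model identifier are placeholders:

```sql
-- Sketch: memory_set_model is an assumed name, not confirmed by this diff.
-- Local: built-in llama.cpp engine, model given as a full path to a GGUF file
SELECT memory_set_model('local', '/path/to/embedding-model.gguf');

-- Remote: any non-"local" provider name routes to vectors.space;
-- a free API key must be set first via memory_set_apikey
SELECT memory_set_apikey('YOUR_API_KEY');
SELECT memory_set_model('openai', 'some-remote-model-id');  -- placeholder identifier
```

Each call returns 1 on success; the chosen provider and model persist in the `dbmem_settings` table.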
```diff
@@ -94,7 +94,7 @@ Configures the embedding model to use.
 -- Local embedding model (uses built-in llama.cpp engine)
```