Vercel AI SDK backfilling #769
@@ -184,6 +184,29 @@ const result = await generateText({

**Hybrid Search Mode (RAG)** - Search both memories AND document chunks:

```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const modelWithHybrid = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "full",
  searchMode: "hybrid", // Search memories + document chunks
  searchLimit: 15 // Max results (default: 10)
})

const result = await generateText({
  model: modelWithHybrid,
  messages: [{ role: "user", content: "What's in my documents about quarterly goals?" }],
})
```

> **Review comment** (on the `withSupermemory` call above): Documentation doesn't match the new API. This PR changed `withSupermemory` to take a single options object, but this example still uses:
>
> ```typescript
> withSupermemory(openai("gpt-4"), "user-123", { ... })
> ```
>
> But the new API requires:
>
> ```typescript
> withSupermemory({
>   model: openai("gpt-4"),
>   containerTag: "user-123",
>   customId: "conv-123", // Now required
>   ...options
> })
> ```
>
> All README examples need to be updated to use the new options object pattern. Users following these docs will get runtime errors.
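If the options-object signature described in the review comment is accurate, the hybrid example would migrate roughly as sketched below. The `MigratedOptions` interface is reconstructed here from the comment for illustration only; it is not an export of `@supermemory/tools/ai-sdk`, and the `model` placeholder stands in for `openai("gpt-4")` so the snippet runs standalone:

```typescript
// Hypothetical shape of the new options object, reconstructed from the
// review comment; NOT imported from @supermemory/tools/ai-sdk.
interface MigratedOptions {
  model: unknown            // would be openai("gpt-4") in real usage
  containerTag: string      // was the second positional argument
  customId: string          // now required, per the review comment
  mode?: "profile" | "query" | "full"
  searchMode?: "memories" | "hybrid" | "documents"
  searchLimit?: number
}

// The hybrid example from the diff, restated in the options-object form.
const migrated: MigratedOptions = {
  model: "openai:gpt-4",    // placeholder for the model instance
  containerTag: "user-123",
  customId: "conv-123",
  mode: "full",
  searchMode: "hybrid",
  searchLimit: 15,
}

console.log(migrated.containerTag, migrated.searchMode)
```

The point of the migration is mechanical: the first two positional arguments become the `model` and `containerTag` fields, and `customId` is added.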
Search mode options:
- `"memories"` (default) - Search only memory entries
- `"hybrid"` - Search memories + document chunks (recommended for RAG)
- `"documents"` - Search only document chunks
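The defaults stated above (mode `"memories"`, limit 10) can be expressed as a small resolver. This helper is illustrative only, not part of the package; it just mirrors the documented fallback behavior:

```typescript
// Illustrative helper (not part of @supermemory/tools): resolves the
// effective search mode and limit from partial options, mirroring the
// documented defaults ("memories", limit 10).
type SearchMode = "memories" | "hybrid" | "documents"

function resolveSearch(opts: { searchMode?: SearchMode; searchLimit?: number }) {
  return {
    searchMode: opts.searchMode ?? "memories", // default per the docs
    searchLimit: opts.searchLimit ?? 10,       // default per the docs
  }
}

console.log(resolveSearch({}))
// → { searchMode: "memories", searchLimit: 10 }
console.log(resolveSearch({ searchMode: "hybrid", searchLimit: 15 }))
// → { searchMode: "hybrid", searchLimit: 15 }
```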
#### Automatic Memory Capture

The middleware can automatically save user messages as memories:
@@ -653,6 +676,8 @@ interface WithSupermemoryOptions {

```typescript
  conversationId?: string
  verbose?: boolean
  mode?: "profile" | "query" | "full"
  searchMode?: "memories" | "hybrid" | "documents"
  searchLimit?: number
  addMemory?: "always" | "never"
  /** Optional Supermemory API key. Use this in browser environments. */
  apiKey?: string
```
@@ -662,6 +687,8 @@ interface WithSupermemoryOptions {

- **conversationId**: Optional conversation ID to group messages into a single document for contextual memory generation
- **verbose**: Enable detailed logging of the memory search and injection process (default: false)
- **mode**: Memory search mode - "profile" (default), "query", or "full"
- **searchMode**: Search mode - "memories" (default), "hybrid", or "documents". Use "hybrid" for RAG applications
- **searchLimit**: Maximum number of search results when using hybrid/documents mode (default: 10)
- **addMemory**: Automatic memory storage mode - "always" or "never" (default: "never")
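Pulling these options together, a fully specified configuration object might look like the sketch below. The interface is restated locally (copied from the diff hunk above) so the snippet stands alone without the package installed:

```typescript
// Restated from the diff's WithSupermemoryOptions hunk; only the fields
// shown in this PR's diff are included here.
interface WithSupermemoryOptions {
  conversationId?: string
  verbose?: boolean
  mode?: "profile" | "query" | "full"
  searchMode?: "memories" | "hybrid" | "documents"
  searchLimit?: number
  addMemory?: "always" | "never"
  /** Optional Supermemory API key. Use this in browser environments. */
  apiKey?: string
}

// Example configuration: full memory mode, hybrid RAG search,
// automatic capture of user messages.
const options: WithSupermemoryOptions = {
  conversationId: "conv-123",
  verbose: true,
  mode: "full",
  searchMode: "hybrid",
  searchLimit: 15,
  addMemory: "always",
}

console.log(options.searchMode, options.addMemory)
```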
## Available Tools