
Commit ce7421d

chore(eino): add missing embedding_ollama page
1 parent 2389bf0 commit ce7421d

4 files changed

Lines changed: 251 additions & 6 deletions

content/en/docs/eino/ecosystem_integration/embedding/embedding_ark.md

Lines changed: 6 additions & 3 deletions
@@ -3,13 +3,13 @@ Description: ""
 date: "2025-01-20"
 lastmod: ""
 tags: []
-title: Embedding - ark
+title: Embedding - ARK
 weight: 0
 ---

 ## **Overview**

-Ark Embedding is an implementation of Eino’s Embedding interface that converts text to vectors. Volcengine Ark provides model inference services, including text embedding. This component follows [Eino: Embedding Guide](/en/docs/eino/core_modules/components/embedding_guide).
+Ark Embedding is an implementation of Eino’s Embedding interface that converts text to vectors. Volcengine Ark provides model inference services, including text embedding. This component follows [[🚧]Eino: Embedding Guide](/en/docs/eino/core_modules/components/embedding_guide).

 ## **Usage**

@@ -42,6 +42,8 @@ embedder, err := ark.NewEmbedder(ctx, &ark.EmbeddingConfig{

 ### **Generate Embeddings**

+Text vectorization is done via `EmbedStrings`:
+
 ```go
 embeddings, err := embedder.EmbedStrings(ctx, []string{
 	"First text",
@@ -88,6 +90,7 @@ func main() {
 		panic(err)
 	}

+	// use generated vectors
 	for i, embedding := range embeddings {
 		println("text", i+1, "vector dim:", len(embedding))
 	}
@@ -97,5 +100,5 @@ func main() {
 ## **References**

 - [Eino: Embedding Guide](/en/docs/eino/core_modules/components/embedding_guide)
-- [Embedding OpenAI](/en/docs/eino/ecosystem_integration/embedding/embedding_openai)
+- [Embedding - OpenAI](/en/docs/eino/ecosystem_integration/embedding/embedding_openai)
 - Volcengine Ark: https://www.volcengine.com/product/ark
Lines changed: 121 additions & 0 deletions

@@ -0,0 +1,121 @@

---
Description: ""
date: "2025-12-11"
lastmod: ""
tags: []
title: Embedding - Ollama
weight: 0
---

## **Overview**

This is an Ollama Embedding component for [Eino](https://github.com/cloudwego/eino) that implements the `Embedder` interface. It integrates seamlessly into Eino's embedding system to provide text vectorization.

## **Features**

- Implements the `github.com/cloudwego/eino/components/embedding.Embedder` interface
- Easy to integrate into Eino workflows
- Supports a custom Ollama service endpoint and model
- Built-in Eino callback support

## **Installation**

```bash
go get github.com/cloudwego/eino-ext/components/embedding/ollama
```

## **Quick Start**

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"github.com/cloudwego/eino/components/embedding"

	"github.com/cloudwego/eino-ext/components/embedding/ollama"
)

func main() {
	ctx := context.Background()

	baseURL := os.Getenv("OLLAMA_BASE_URL")
	if baseURL == "" {
		baseURL = "http://localhost:11434" // default: local instance
	}
	model := os.Getenv("OLLAMA_EMBED_MODEL")
	if model == "" {
		model = "nomic-embed-text"
	}

	embedder, err := ollama.NewEmbedder(ctx, &ollama.EmbeddingConfig{
		BaseURL: baseURL,
		Model:   model,
		Timeout: 10 * time.Second,
	})
	if err != nil {
		log.Fatalf("NewEmbedder of ollama error: %v", err)
	}

	log.Printf("===== call Embedder directly =====")

	vectors, err := embedder.EmbedStrings(ctx, []string{"hello", "how are you"})
	if err != nil {
		log.Fatalf("EmbedStrings of Ollama failed, err=%v", err)
	}

	log.Printf("vectors : %v", vectors)

	// you can use WithModel to override the model per call
	vectors, err = embedder.EmbedStrings(ctx, []string{"hello", "how are you"}, embedding.WithModel(model))
	if err != nil {
		log.Fatalf("EmbedStrings of Ollama failed, err=%v", err)
	}

	log.Printf("vectors : %v", vectors)
}
```

## **Configuration**

The embedder can be configured via the `EmbeddingConfig` struct:

```go
type EmbeddingConfig struct {
	// Timeout specifies the maximum duration to wait for API responses.
	// If HTTPClient is set, Timeout is not used.
	// Optional. Default: no timeout
	Timeout time.Duration `json:"timeout"`

	// HTTPClient specifies the client used to send HTTP requests.
	// If HTTPClient is set, Timeout is not used.
	// Optional. Default: &http.Client{Timeout: Timeout}
	HTTPClient *http.Client `json:"http_client"`

	// BaseURL specifies the Ollama service endpoint URL.
	// Format: http(s)://host:port
	// Optional. Default: "http://localhost:11434"
	BaseURL string `json:"base_url"`

	// Model specifies the ID of the model to use for embedding generation.
	// Required. It can also be set per call with `embedding.WithModel(model)`.
	Model string `json:"model"`

	// Truncate specifies whether to truncate text to the model's maximum context length.
	// When set to false, a call to EmbedStrings returns an error if a text
	// exceeds the model's maximum context length.
	// Optional.
	Truncate *bool `json:"truncate,omitempty"`

	// KeepAlive controls how long the model stays loaded in memory after this request.
	// Optional. Default: 5 minutes
	KeepAlive *time.Duration `json:"keep_alive,omitempty"`

	// Options lists model-specific options.
	// Optional
	Options map[string]any `json:"options,omitempty"`
}
```

content/en/docs/eino/ecosystem_integration/embedding/embedding_openai.md

Lines changed: 3 additions & 3 deletions

@@ -3,13 +3,13 @@ Description: ""
 date: "2025-01-20"
 lastmod: ""
 tags: []
-title: Embedding - openai
+title: Embedding - OpenAI
 weight: 0
 ---

 ## **Overview**

-OpenAI embedder is an implementation of Eino’s Embedding interface that converts text into vector representations. It follows [Eino: Embedding Guide](/en/docs/eino/core_modules/components/embedding_guide) and is typically used for:
+OpenAI embedder is an implementation of Eino’s Embedding interface that converts text into vector representations. It follows [[🚧]Eino: Embedding Guide](/en/docs/eino/core_modules/components/embedding_guide) and is typically used for:

 - Converting text into high‑dimensional vectors
 - Using OpenAI’s embedding models
@@ -100,6 +100,6 @@ func main() {
 ## **References**

 - [Eino: Embedding Guide](/en/docs/eino/core_modules/components/embedding_guide)
-- [Embedding Ark](/en/docs/eino/ecosystem_integration/embedding/embedding_ark)
+- [Embedding - Ark](/en/docs/eino/ecosystem_integration/embedding/embedding_ark)
 - OpenAI Embedding API: https://platform.openai.com/docs/guides/embeddings
 - Azure OpenAI Service: https://learn.microsoft.com/azure/cognitive-services/openai/
Lines changed: 121 additions & 0 deletions

@@ -0,0 +1,121 @@

---
Description: ""
date: "2025-12-11"
lastmod: ""
tags: []
title: Embedding - Ollama
weight: 0
---

## **Overview**

This is an Ollama Embedding component implemented for [Eino](https://github.com/cloudwego/eino). It implements the `Embedder` interface and integrates seamlessly into Eino's embedding system to provide text vectorization.

## **Features**

- Implements the `github.com/cloudwego/eino/components/embedding.Embedder` interface
- Easy to integrate into Eino workflows
- Supports a custom Ollama service endpoint and model
- Built-in Eino callback support

## **Installation**

```bash
go get github.com/cloudwego/eino-ext/components/embedding/ollama
```

## **Quick Start**

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"github.com/cloudwego/eino/components/embedding"

	"github.com/cloudwego/eino-ext/components/embedding/ollama"
)

func main() {
	ctx := context.Background()

	baseURL := os.Getenv("OLLAMA_BASE_URL")
	if baseURL == "" {
		baseURL = "http://localhost:11434" // default: local instance
	}
	model := os.Getenv("OLLAMA_EMBED_MODEL")
	if model == "" {
		model = "nomic-embed-text"
	}

	embedder, err := ollama.NewEmbedder(ctx, &ollama.EmbeddingConfig{
		BaseURL: baseURL,
		Model:   model,
		Timeout: 10 * time.Second,
	})
	if err != nil {
		log.Fatalf("NewEmbedder of ollama error: %v", err)
	}

	log.Printf("===== call Embedder directly =====")

	vectors, err := embedder.EmbedStrings(ctx, []string{"hello", "how are you"})
	if err != nil {
		log.Fatalf("EmbedStrings of Ollama failed, err=%v", err)
	}

	log.Printf("vectors : %v", vectors)

	// you can use WithModel to override the model per call
	vectors, err = embedder.EmbedStrings(ctx, []string{"hello", "how are you"}, embedding.WithModel(model))
	if err != nil {
		log.Fatalf("EmbedStrings of Ollama failed, err=%v", err)
	}

	log.Printf("vectors : %v", vectors)
}
```

## **Configuration**

The embedder can be configured via the `EmbeddingConfig` struct:

```go
type EmbeddingConfig struct {
	// Timeout specifies the maximum duration to wait for API responses.
	// If HTTPClient is set, Timeout is not used.
	// Optional. Default: no timeout
	Timeout time.Duration `json:"timeout"`

	// HTTPClient specifies the client used to send HTTP requests.
	// If HTTPClient is set, Timeout is not used.
	// Optional. Default: &http.Client{Timeout: Timeout}
	HTTPClient *http.Client `json:"http_client"`

	// BaseURL specifies the Ollama service endpoint URL.
	// Format: http(s)://host:port
	// Optional. Default: "http://localhost:11434"
	BaseURL string `json:"base_url"`

	// Model specifies the ID of the model to use for embedding generation.
	// Required. It can also be set per call with `embedding.WithModel(model)`.
	Model string `json:"model"`

	// Truncate specifies whether to truncate text to the model's maximum context length.
	// When set to false, a call to EmbedStrings returns an error if a text
	// exceeds the model's maximum context length.
	// Optional.
	Truncate *bool `json:"truncate,omitempty"`

	// KeepAlive controls how long the model stays loaded in memory after this request.
	// Optional. Default: 5 minutes
	KeepAlive *time.Duration `json:"keep_alive,omitempty"`

	// Options lists model-specific options.
	// Optional
	Options map[string]any `json:"options,omitempty"`
}
```
