
Commit 96139d4

feat: sync eino docs (#1512)
1 parent 39820ca commit 96139d4

251 files changed

Lines changed: 14141 additions & 29066 deletions


_typos.toml

Lines changed: 3 additions & 0 deletions
@@ -15,3 +15,6 @@ exten = "exten"
 invokable = "invokable"
 typ = "typ"
 Rabit = "Rabit"
+byted = "byted"
+Byted = "Byted"
+bytedgpt = "bytedgpt"

content/en/docs/eino/Eino: Cookbook.md

Lines changed: 32 additions & 32 deletions

content/en/docs/eino/FAQ.md

Lines changed: 97 additions & 66 deletions

content/en/docs/eino/core_modules/chain_and_graph_orchestration/callback_manual.md

Lines changed: 109 additions & 109 deletions

content/en/docs/eino/core_modules/chain_and_graph_orchestration/checkpoint_interrupt.md

Lines changed: 82 additions & 77 deletions

content/en/docs/eino/core_modules/chain_and_graph_orchestration/orchestration_design_principles.md

Lines changed: 120 additions & 120 deletions

content/en/docs/eino/core_modules/chain_and_graph_orchestration/stream_programming_essentials.md

Lines changed: 84 additions & 84 deletions

content/en/docs/eino/core_modules/chain_and_graph_orchestration/workflow_orchestration_framework.md

Lines changed: 132 additions & 130 deletions

content/en/docs/eino/core_modules/components/agentic_chat_model_guide.md

Lines changed: 61 additions & 51 deletions

content/en/docs/eino/core_modules/components/chat_model_guide.md

Lines changed: 42 additions & 42 deletions
@@ -3,18 +3,18 @@ Description: ""
 date: "2026-01-20"
 lastmod: ""
 tags: []
-title: 'Eino: ChatModel Guide'
-weight: 1
+title: 'Eino: ChatModel User Guide'
+weight: 8
 ---
 
-## Overview
+## Introduction
 
-The Model component is used to interact with large language models. Its main purpose is to send user input messages to the language model and obtain the model's response. This component plays an important role in the following scenarios:
+The Model component is a component for interacting with large language models. Its main purpose is to send user input messages to the language model and obtain the model's response. This component plays an important role in the following scenarios:
 
-- Natural language conversations
+- Natural language dialogue
 - Text generation and completion
-- Tool call parameter generation
-- Multimodal interactions (text, images, audio, etc.)
+- Parameter generation for tool calls
+- Multimodal interaction (text, images, audio, etc.)
 
 ## Component Definition
 
@@ -42,8 +42,8 @@ type ToolCallingChatModel interface {
 
 - Function: Generate a complete model response
 - Parameters:
-- ctx: Context object for passing request-level information, also used to pass the Callback Manager
-- input: List of input messages
+- ctx: Context object for passing request-level information and Callback Manager
+- input: Input message list
 - opts: Optional parameters for configuring model behavior
 - Return values:
 - `*schema.Message`: The response message generated by the model
@@ -52,7 +52,7 @@ type ToolCallingChatModel interface {
 #### Stream Method
 
 - Function: Generate model response in streaming mode
-- Parameters: Same as the Generate method
+- Parameters: Same as Generate method
 - Return values:
 - `*schema.StreamReader[*schema.Message]`: Stream reader for model response
 - error: Error information during generation
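
As a side note to the Stream method described in the hunk above, the returned `*schema.StreamReader[*schema.Message]` is consumed chunk by chunk. A minimal consumption sketch, assuming a `model.BaseChatModel` value obtained as in the usage hunks further down (error handling and printing are illustrative only):

```go
import (
	"context"
	"fmt"
	"io"

	"github.com/cloudwego/eino/components/model"
	"github.com/cloudwego/eino/schema"
)

// readStream drains the StreamReader returned by Stream and prints each chunk.
func readStream(ctx context.Context, cm model.BaseChatModel, msgs []*schema.Message) error {
	sr, err := cm.Stream(ctx, msgs)
	if err != nil {
		return err
	}
	defer sr.Close() // release the reader once finished

	for {
		chunk, recvErr := sr.Recv()
		if recvErr == io.EOF { // stream finished normally
			return nil
		}
		if recvErr != nil {
			return recvErr
		}
		fmt.Print(chunk.Content) // each chunk is a partial *schema.Message
	}
}
```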
@@ -72,7 +72,7 @@ type ToolCallingChatModel interface {
 
 ```go
 type Message struct {
-// Role indicates the role of the message (system/user/assistant/tool)
+// Role represents the message role (system/user/assistant/tool)
 Role RoleType
 // Content is the text content of the message
 Content string
@@ -82,18 +82,18 @@ type Message struct {
 // UserInputMultiContent stores user input multimodal data, supporting text, images, audio, video, files
 // When using this field, the model role is restricted to User
 UserInputMultiContent []MessageInputPart
-// AssistantGenMultiContent holds multimodal data output by the model, supporting text, images, audio, video
+// AssistantGenMultiContent stores model output multimodal data, supporting text, images, audio, video
 // When using this field, the model role is restricted to Assistant
 AssistantGenMultiContent []MessageOutputPart
 // Name is the sender name of the message
 Name string
 // ToolCalls is the tool call information in assistant messages
 ToolCalls []ToolCall
-// ToolCallID is the tool call ID for tool messages
+// ToolCallID is the tool call ID of tool messages
 ToolCallID string
 // ResponseMeta contains response metadata
 ResponseMeta *ResponseMeta
-// Extra is used to store additional information
+// Extra stores additional information
 Extra map[string]any
 }
 ```
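
For orientation alongside the Message struct shown above, the `schema` package also provides convenience constructors so callers rarely fill these fields by hand. A small sketch, assuming the `SystemMessage`/`UserMessage` helpers (an assumption about the package, not part of this diff):

```go
import "github.com/cloudwego/eino/schema"

// buildMessages assembles a short conversation without constructing Message literals.
func buildMessages(userInput string) []*schema.Message {
	return []*schema.Message{
		schema.SystemMessage("You are a helpful assistant."), // Role: system
		schema.UserMessage(userInput),                        // Role: user
	}
}
```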
@@ -102,8 +102,8 @@ The Message struct is the basic structure for model interaction, supporting:
 
 - Multiple roles: system, user, assistant (ai), tool
 - Multimodal content: text, images, audio, video, files
-- Tool calls: Support for model calling external tools and functions
-- Metadata: Including response reason, token usage statistics, etc.
+- Tool calls: supports model calling external tools and functions
+- Metadata: includes response reason, token usage statistics, etc.
 
 ### Common Options
 
@@ -121,12 +121,12 @@ type Options struct {
 Model *string
 // TopP controls the diversity of output
 TopP *float32
-// Stop specifies the conditions to stop generation
+// Stop specifies conditions to stop generation
 Stop []string
 }
 ```
 
-Options can be set using the following methods:
+Options can be set in the following ways:
 
 ```go
 // Set temperature
@@ -183,7 +183,7 @@ import (
 "github.com/cloudwego/eino/schema"
 )
 
-// Initialize model (using OpenAI as an example)
+// Initialize model (using openai as example)
 cm, err := openai.NewChatModel(ctx, &openai.ChatModelConfig{
 // Configuration parameters
 })
@@ -192,11 +192,11 @@ cm, err := openai.NewChatModel(ctx, &openai.ChatModelConfig{
 messages := []*schema.Message{
 {
 Role: schema.System,
-Content: "你是一个有帮助的助手。",
+Content: "You are a helpful assistant.",
 },
 {
 Role: schema.User,
-Content: "你好!",
+Content: "Hello!",
 },
 }
 
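To tie the two usage hunks above together, a hedged sketch of calling `Generate` on the initialized model while passing a couple of the common options from the earlier section (the option values are illustrative, and `WithTemperature`/`WithMaxTokens` are assumed to be the option constructors in `components/model`):

```go
import (
	"context"
	"fmt"

	"github.com/cloudwego/eino/components/model"
	"github.com/cloudwego/eino/schema"
)

// generateOnce sends the messages and prints the single, non-streaming response.
func generateOnce(ctx context.Context, cm model.BaseChatModel, messages []*schema.Message) error {
	resp, err := cm.Generate(ctx, messages,
		model.WithTemperature(0.7), // common option: sampling temperature
		model.WithMaxTokens(512),   // common option: cap on response length
	)
	if err != nil {
		return err
	}
	fmt.Println(resp.Content)
	return nil
}
```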
@@ -282,7 +282,7 @@ handler := &callbacksHelper.ModelCallbackHandler{
 return ctx
 },
 OnEnd: func(ctx context.Context, info *callbacks.RunInfo, output *model.CallbackOutput) context.Context {
-fmt.Printf("Generation complete, Token usage: %+v\n", output.TokenUsage)
+fmt.Printf("Generation complete, token usage: %+v\n", output.TokenUsage)
 return ctx
 },
 OnEndWithStreamOutput: func(ctx context.Context, info *callbacks.RunInfo, output *schema.StreamReader[*model.CallbackOutput]) context.Context {
@@ -336,22 +336,22 @@ result, err := runnable.Invoke(ctx, messages, compose.WithCallbacks(helper))
 
 ## **Existing Implementations**
 
-1. OpenAI ChatModel: Using OpenAI's GPT series models [ChatModel - OpenAI](/docs/eino/ecosystem_integration/chat_model/chat_model_openai)
-2. Ollama ChatModel: Using Ollama local models [ChatModel - Ollama](/docs/eino/ecosystem_integration/chat_model/chat_model_ollama)
-3. ARK ChatModel: Using ARK platform model services [ChatModel - ARK](/docs/eino/ecosystem_integration/chat_model/chat_model_ark)
+1. OpenAI ChatModel: Using OpenAI's GPT series models [ChatModel - OpenAI](https://bytedance.larkoffice.com/wiki/NguEw85n6iJjShkVtdQcHpydnld)
+2. Ollama ChatModel: Using Ollama local models [ChatModel - Ollama](https://bytedance.larkoffice.com/wiki/WWngw1XMViwgyYkNuZgcjZnxnke)
+3. ARK ChatModel: Using ARK platform model services [ChatModel - ARK](https://bytedance.larkoffice.com/wiki/WUzzwaX8ricGwZk1i1mcJHHNnEl)
 4. More: [Eino ChatModel](https://www.cloudwego.io/docs/eino/ecosystem_integration/chat_model/)
 
-## Custom Implementation Reference
+## Implementation Reference
 
-When implementing a custom ChatModel component, pay attention to the following points:
+When implementing a custom ChatModel component, note the following:
 
 1. Make sure to implement common options
-2. Make sure to implement the callback mechanism
-3. Remember to close the writer after streaming output is complete
+2. Make sure to implement callback mechanism
+3. Remember to close the writer after completing output in streaming mode
 
 ### Option Mechanism
 
-If a custom ChatModel needs Options beyond the common Options, you can use the component abstraction utility functions to implement custom Options, for example:
+Custom ChatModel can use the component abstraction utility function to implement custom Options if Options beyond common Options are needed, for example:
 
 ```go
 import (
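
The example opened at the end of this hunk is truncated by the diff. As a rough sketch of the pattern the Option Mechanism paragraph describes, an implementation-specific option might look like the following; `model.WrapImplSpecificOptFn` and the `RetryCount` field are assumptions for illustration, while `MyChatModelOptions` mirrors the struct used later in this diff:

```go
import "github.com/cloudwego/eino/components/model"

// MyChatModelOptions carries the common options plus implementation-specific
// fields (RetryCount is a hypothetical example).
type MyChatModelOptions struct {
	Options    *model.Options
	RetryCount int
}

// WithRetryCount wraps the implementation-specific field so callers can pass it
// next to common options such as model.WithTemperature.
func WithRetryCount(n int) model.Option {
	return model.WrapImplSpecificOptFn(func(o *MyChatModelOptions) {
		o.RetryCount = n
	})
}
```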
@@ -445,7 +445,7 @@ func NewMyChatModel(config *MyChatModelConfig) (*MyChatModel, error) {
 }
 
 func (m *MyChatModel) Generate(ctx context.Context, messages []*schema.Message, opts ...model.Option) (*schema.Message, error) {
-// 1. Process options
+// 1. Handle options
 options := &MyChatModelOptions{
 Options: &model.Options{
 Model: &m.model,
@@ -456,7 +456,7 @@ func (m *MyChatModel) Generate(ctx context.Context, messages []*schema.Message,
 options.Options = model.GetCommonOptions(options.Options, opts...)
 options = model.GetImplSpecificOptions(options, opts...)
 
-// 2. Callback before starting generation
+// 2. Callback before generation starts
 ctx = callbacks.OnStart(ctx, &model.CallbackInput{
 Messages: messages,
 Config: &model.Config{
@@ -467,7 +467,7 @@ func (m *MyChatModel) Generate(ctx context.Context, messages []*schema.Message,
 // 3. Execute generation logic
 response, err := m.doGenerate(ctx, messages, options)
 
-// 4. Handle error and completion callbacks
+// 4. Handle errors and completion callback
 if err != nil {
 ctx = callbacks.OnError(ctx, err)
 return nil, err
@@ -481,7 +481,7 @@ func (m *MyChatModel) Generate(ctx context.Context, messages []*schema.Message,
 }
 
 func (m *MyChatModel) Stream(ctx context.Context, messages []*schema.Message, opts ...model.Option) (*schema.StreamReader[*schema.Message], error) {
-// 1. Process options
+// 1. Handle options
 options := &MyChatModelOptions{
 Options: &model.Options{
 Model: &m.model,
@@ -492,7 +492,7 @@ func (m *MyChatModel) Stream(ctx context.Context, messages []*schema.Message, op
 options.Options = model.GetCommonOptions(options.Options, opts...)
 options = model.GetImplSpecificOptions(options, opts...)
 
-// 2. Callback before starting streaming generation
+// 2. Callback before streaming generation starts
 ctx = callbacks.OnStart(ctx, &model.CallbackInput{
 Messages: messages,
 Config: &model.Config{
@@ -501,18 +501,18 @@ func (m *MyChatModel) Stream(ctx context.Context, messages []*schema.Message, op
 })
 
 // 3. Create streaming response
-// Pipe produces a StreamReader and a StreamWriter; writing to StreamWriter can be read from StreamReader, both are concurrency-safe.
-// The implementation asynchronously writes generated content to StreamWriter and returns StreamReader as the return value
-// ***StreamReader is a data stream that can only be read once. When implementing Callback yourself, you need to pass the data stream to callback via OnEndWithCallbackOutput and also return a data stream, requiring a copy of the data stream
-// Considering this scenario always requires copying the data stream, the OnEndWithStreamOutput function will copy internally and return an unread stream
-// The following code demonstrates one stream processing approach; the processing method is not unique
+// Pipe produces a StreamReader and StreamWriter. Writing to StreamWriter can be read from StreamReader, both are concurrency-safe.
+// In implementation, write generated content to StreamWriter asynchronously and return StreamReader as return value
+// ***StreamReader is a data stream that can only be read once. When implementing Callback yourself, you need to pass the data stream to callback via OnEndWithCallbackOutput and also return a data stream, requiring copying the data stream.
+// Considering this scenario always requires copying the data stream, the OnEndWithCallbackOutput function copies internally and returns an unread stream.
+// The following code demonstrates one stream handling approach; handling approaches are not unique.
 sr, sw := schema.Pipe[*model.CallbackOutput](1)
 
-// 4. Start asynchronous generation
+// 4. Start async generation
 go func() {
 defer sw.Close()
 
-// Stream writing
+// Stream write
 m.doStream(ctx, messages, options, sw)
 }()
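
For readers following the Pipe comments in the final hunk, a hedged sketch of what the writer side (`doStream`) might do with the `StreamWriter`: it checks the boolean returned by `Send` so the goroutine stops once the reader has closed the stream. The chunk source and field choices are illustrative assumptions:

```go
import (
	"context"

	"github.com/cloudwego/eino/components/model"
	"github.com/cloudwego/eino/schema"
)

// doStream pushes chunks into the StreamWriter created by schema.Pipe.
// The caller closes sw via defer, as shown in the hunk above.
func (m *MyChatModel) doStream(ctx context.Context, messages []*schema.Message,
	options *MyChatModelOptions, sw *schema.StreamWriter[*model.CallbackOutput]) {
	for _, text := range []string{"Hello", ", ", "world"} { // placeholder chunk source
		out := &model.CallbackOutput{
			Message: schema.AssistantMessage(text, nil),
		}
		// Send reports whether the reader side has already closed the stream;
		// if so, further writes are pointless and the goroutine returns early.
		if closed := sw.Send(out, nil); closed {
			return
		}
	}
}
```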
