
Commit 8e60f20 (1 parent: fe15e01)

Sync Core Integrations API reference (ollama) on Docusaurus (#11075)

Co-authored-by: anakin87 <44616784+anakin87@users.noreply.github.com>

File tree: 11 files changed, +198 −88 lines

docs-website/reference/integrations-api/ollama.md
18 additions, 8 deletions

@@ -10,8 +10,9 @@ slug: "/integrations-ollama"
 
 ### OllamaDocumentEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of each
-Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -41,9 +42,11 @@ __init__(
     meta_fields_to_embed: list[str] | None = None,
     embedding_separator: str = "\n",
     batch_size: int = 32,
-)
+) -> None
 ```
 
+Create a new OllamaDocumentEmbedder instance.
+
 **Parameters:**
 
 - **model** (<code>str</code>) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -116,8 +119,9 @@ Asynchronously run an Ollama Model to compute embeddings of the provided documen
 
 ### OllamaTextEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of
-each Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -138,9 +142,11 @@ __init__(
     generation_kwargs: dict[str, Any] | None = None,
     timeout: int = 120,
     keep_alive: float | str | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaTextEmbedder instance.
+
 **Parameters:**
 
 - **model** (<code>str</code>) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -236,9 +242,11 @@ __init__(
     tools: ToolsType | None = None,
     response_format: None | Literal["json"] | JsonSchemaValue | None = None,
     think: bool | Literal["low", "medium", "high"] = False,
-)
+) -> None
 ```
 
+Create a new OllamaChatGenerator instance.
+
 **Parameters:**
 
 - **model** (<code>str</code>) – The name of the model to use. The model must already be present (pulled) in the running Ollama instance.
@@ -395,9 +403,11 @@ __init__(
     timeout: int = 120,
     keep_alive: float | str | None = None,
     streaming_callback: Callable[[StreamingChunk], None] | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaGenerator instance.
+
 **Parameters:**
 
 - **model** (<code>str</code>) – The name of the model to use. The model should be available in the running Ollama instance.

docs-website/reference_versioned_docs/version-2.18/integrations-api/ollama.md
18 additions, 8 deletions (identical hunks to docs-website/reference/integrations-api/ollama.md above)

docs-website/reference_versioned_docs/version-2.19/integrations-api/ollama.md
18 additions, 8 deletions (identical hunks to docs-website/reference/integrations-api/ollama.md above)

docs-website/reference_versioned_docs/version-2.20/integrations-api/ollama.md
18 additions, 8 deletions (identical hunks to docs-website/reference/integrations-api/ollama.md above)

docs-website/reference_versioned_docs/version-2.21/integrations-api/ollama.md
18 additions, 8 deletions (identical hunks to docs-website/reference/integrations-api/ollama.md above)
