diff --git a/docs-website/reference/integrations-api/ollama.md b/docs-website/reference/integrations-api/ollama.md
index c4be8e4166..e6f29073bd 100644
--- a/docs-website/reference/integrations-api/ollama.md
+++ b/docs-website/reference/integrations-api/ollama.md
@@ -10,8 +10,9 @@ slug: "/integrations-ollama"
 
 ### OllamaDocumentEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of each
-Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -41,9 +42,11 @@ __init__(
     meta_fields_to_embed: list[str] | None = None,
     embedding_separator: str = "\n",
     batch_size: int = 32,
-)
+) -> None
 ```
 
+Create a new OllamaDocumentEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -116,8 +119,9 @@ Asynchronously run an Ollama Model to compute embeddings of the provided documen
 
 ### OllamaTextEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of
-each Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -138,9 +142,11 @@ __init__(
     generation_kwargs: dict[str, Any] | None = None,
     timeout: int = 120,
     keep_alive: float | str | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaTextEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -236,9 +242,11 @@ __init__(
     tools: ToolsType | None = None,
     response_format: None | Literal["json"] | JsonSchemaValue | None = None,
     think: bool | Literal["low", "medium", "high"] = False,
-)
+) -> None
 ```
 
+Create a new OllamaChatGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model must already be present (pulled) in the running Ollama instance.
@@ -395,9 +403,11 @@ __init__(
     timeout: int = 120,
     keep_alive: float | str | None = None,
     streaming_callback: Callable[[StreamingChunk], None] | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
diff --git a/docs-website/reference_versioned_docs/version-2.18/integrations-api/ollama.md b/docs-website/reference_versioned_docs/version-2.18/integrations-api/ollama.md
index c4be8e4166..e6f29073bd 100644
--- a/docs-website/reference_versioned_docs/version-2.18/integrations-api/ollama.md
+++ b/docs-website/reference_versioned_docs/version-2.18/integrations-api/ollama.md
@@ -10,8 +10,9 @@ slug: "/integrations-ollama"
 
 ### OllamaDocumentEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of each
-Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -41,9 +42,11 @@ __init__(
     meta_fields_to_embed: list[str] | None = None,
     embedding_separator: str = "\n",
     batch_size: int = 32,
-)
+) -> None
 ```
 
+Create a new OllamaDocumentEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -116,8 +119,9 @@ Asynchronously run an Ollama Model to compute embeddings of the provided documen
 
 ### OllamaTextEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of
-each Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -138,9 +142,11 @@ __init__(
     generation_kwargs: dict[str, Any] | None = None,
     timeout: int = 120,
     keep_alive: float | str | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaTextEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -236,9 +242,11 @@ __init__(
     tools: ToolsType | None = None,
     response_format: None | Literal["json"] | JsonSchemaValue | None = None,
     think: bool | Literal["low", "medium", "high"] = False,
-)
+) -> None
 ```
 
+Create a new OllamaChatGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model must already be present (pulled) in the running Ollama instance.
@@ -395,9 +403,11 @@ __init__(
     timeout: int = 120,
     keep_alive: float | str | None = None,
     streaming_callback: Callable[[StreamingChunk], None] | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
diff --git a/docs-website/reference_versioned_docs/version-2.19/integrations-api/ollama.md b/docs-website/reference_versioned_docs/version-2.19/integrations-api/ollama.md
index c4be8e4166..e6f29073bd 100644
--- a/docs-website/reference_versioned_docs/version-2.19/integrations-api/ollama.md
+++ b/docs-website/reference_versioned_docs/version-2.19/integrations-api/ollama.md
@@ -10,8 +10,9 @@ slug: "/integrations-ollama"
 
 ### OllamaDocumentEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of each
-Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -41,9 +42,11 @@ __init__(
     meta_fields_to_embed: list[str] | None = None,
     embedding_separator: str = "\n",
     batch_size: int = 32,
-)
+) -> None
 ```
 
+Create a new OllamaDocumentEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -116,8 +119,9 @@ Asynchronously run an Ollama Model to compute embeddings of the provided documen
 
 ### OllamaTextEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of
-each Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -138,9 +142,11 @@ __init__(
     generation_kwargs: dict[str, Any] | None = None,
     timeout: int = 120,
     keep_alive: float | str | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaTextEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -236,9 +242,11 @@ __init__(
     tools: ToolsType | None = None,
     response_format: None | Literal["json"] | JsonSchemaValue | None = None,
     think: bool | Literal["low", "medium", "high"] = False,
-)
+) -> None
 ```
 
+Create a new OllamaChatGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model must already be present (pulled) in the running Ollama instance.
@@ -395,9 +403,11 @@ __init__(
     timeout: int = 120,
     keep_alive: float | str | None = None,
     streaming_callback: Callable[[StreamingChunk], None] | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
diff --git a/docs-website/reference_versioned_docs/version-2.20/integrations-api/ollama.md b/docs-website/reference_versioned_docs/version-2.20/integrations-api/ollama.md
index c4be8e4166..e6f29073bd 100644
--- a/docs-website/reference_versioned_docs/version-2.20/integrations-api/ollama.md
+++ b/docs-website/reference_versioned_docs/version-2.20/integrations-api/ollama.md
@@ -10,8 +10,9 @@ slug: "/integrations-ollama"
 
 ### OllamaDocumentEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of each
-Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -41,9 +42,11 @@ __init__(
     meta_fields_to_embed: list[str] | None = None,
     embedding_separator: str = "\n",
     batch_size: int = 32,
-)
+) -> None
 ```
 
+Create a new OllamaDocumentEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -116,8 +119,9 @@ Asynchronously run an Ollama Model to compute embeddings of the provided documen
 
 ### OllamaTextEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of
-each Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -138,9 +142,11 @@ __init__(
     generation_kwargs: dict[str, Any] | None = None,
     timeout: int = 120,
     keep_alive: float | str | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaTextEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -236,9 +242,11 @@ __init__(
     tools: ToolsType | None = None,
     response_format: None | Literal["json"] | JsonSchemaValue | None = None,
     think: bool | Literal["low", "medium", "high"] = False,
-)
+) -> None
 ```
 
+Create a new OllamaChatGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model must already be present (pulled) in the running Ollama instance.
@@ -395,9 +403,11 @@ __init__(
     timeout: int = 120,
     keep_alive: float | str | None = None,
     streaming_callback: Callable[[StreamingChunk], None] | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
diff --git a/docs-website/reference_versioned_docs/version-2.21/integrations-api/ollama.md b/docs-website/reference_versioned_docs/version-2.21/integrations-api/ollama.md
index c4be8e4166..e6f29073bd 100644
--- a/docs-website/reference_versioned_docs/version-2.21/integrations-api/ollama.md
+++ b/docs-website/reference_versioned_docs/version-2.21/integrations-api/ollama.md
@@ -10,8 +10,9 @@ slug: "/integrations-ollama"
 
 ### OllamaDocumentEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of each
-Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -41,9 +42,11 @@ __init__(
     meta_fields_to_embed: list[str] | None = None,
     embedding_separator: str = "\n",
     batch_size: int = 32,
-)
+) -> None
 ```
 
+Create a new OllamaDocumentEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -116,8 +119,9 @@ Asynchronously run an Ollama Model to compute embeddings of the provided documen
 
 ### OllamaTextEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of
-each Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -138,9 +142,11 @@ __init__(
     generation_kwargs: dict[str, Any] | None = None,
     timeout: int = 120,
     keep_alive: float | str | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaTextEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -236,9 +242,11 @@ __init__(
     tools: ToolsType | None = None,
     response_format: None | Literal["json"] | JsonSchemaValue | None = None,
     think: bool | Literal["low", "medium", "high"] = False,
-)
+) -> None
 ```
 
+Create a new OllamaChatGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model must already be present (pulled) in the running Ollama instance.
@@ -395,9 +403,11 @@ __init__(
     timeout: int = 120,
     keep_alive: float | str | None = None,
     streaming_callback: Callable[[StreamingChunk], None] | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
diff --git a/docs-website/reference_versioned_docs/version-2.22/integrations-api/ollama.md b/docs-website/reference_versioned_docs/version-2.22/integrations-api/ollama.md
index c4be8e4166..e6f29073bd 100644
--- a/docs-website/reference_versioned_docs/version-2.22/integrations-api/ollama.md
+++ b/docs-website/reference_versioned_docs/version-2.22/integrations-api/ollama.md
@@ -10,8 +10,9 @@ slug: "/integrations-ollama"
 
 ### OllamaDocumentEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of each
-Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -41,9 +42,11 @@ __init__(
     meta_fields_to_embed: list[str] | None = None,
     embedding_separator: str = "\n",
     batch_size: int = 32,
-)
+) -> None
 ```
 
+Create a new OllamaDocumentEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -116,8 +119,9 @@ Asynchronously run an Ollama Model to compute embeddings of the provided documen
 
 ### OllamaTextEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of
-each Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -138,9 +142,11 @@ __init__(
     generation_kwargs: dict[str, Any] | None = None,
     timeout: int = 120,
     keep_alive: float | str | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaTextEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -236,9 +242,11 @@ __init__(
     tools: ToolsType | None = None,
     response_format: None | Literal["json"] | JsonSchemaValue | None = None,
     think: bool | Literal["low", "medium", "high"] = False,
-)
+) -> None
 ```
 
+Create a new OllamaChatGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model must already be present (pulled) in the running Ollama instance.
@@ -395,9 +403,11 @@ __init__(
     timeout: int = 120,
     keep_alive: float | str | None = None,
     streaming_callback: Callable[[StreamingChunk], None] | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
diff --git a/docs-website/reference_versioned_docs/version-2.23/integrations-api/ollama.md b/docs-website/reference_versioned_docs/version-2.23/integrations-api/ollama.md
index c4be8e4166..e6f29073bd 100644
--- a/docs-website/reference_versioned_docs/version-2.23/integrations-api/ollama.md
+++ b/docs-website/reference_versioned_docs/version-2.23/integrations-api/ollama.md
@@ -10,8 +10,9 @@ slug: "/integrations-ollama"
 
 ### OllamaDocumentEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of each
-Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -41,9 +42,11 @@ __init__(
     meta_fields_to_embed: list[str] | None = None,
     embedding_separator: str = "\n",
     batch_size: int = 32,
-)
+) -> None
 ```
 
+Create a new OllamaDocumentEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -116,8 +119,9 @@ Asynchronously run an Ollama Model to compute embeddings of the provided documen
 
 ### OllamaTextEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of
-each Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -138,9 +142,11 @@ __init__(
     generation_kwargs: dict[str, Any] | None = None,
     timeout: int = 120,
     keep_alive: float | str | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaTextEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -236,9 +242,11 @@ __init__(
     tools: ToolsType | None = None,
     response_format: None | Literal["json"] | JsonSchemaValue | None = None,
     think: bool | Literal["low", "medium", "high"] = False,
-)
+) -> None
 ```
 
+Create a new OllamaChatGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model must already be present (pulled) in the running Ollama instance.
@@ -395,9 +403,11 @@ __init__(
     timeout: int = 120,
     keep_alive: float | str | None = None,
     streaming_callback: Callable[[StreamingChunk], None] | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
diff --git a/docs-website/reference_versioned_docs/version-2.24/integrations-api/ollama.md b/docs-website/reference_versioned_docs/version-2.24/integrations-api/ollama.md
index c4be8e4166..e6f29073bd 100644
--- a/docs-website/reference_versioned_docs/version-2.24/integrations-api/ollama.md
+++ b/docs-website/reference_versioned_docs/version-2.24/integrations-api/ollama.md
@@ -10,8 +10,9 @@ slug: "/integrations-ollama"
 
 ### OllamaDocumentEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of each
-Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -41,9 +42,11 @@ __init__(
     meta_fields_to_embed: list[str] | None = None,
     embedding_separator: str = "\n",
     batch_size: int = 32,
-)
+) -> None
 ```
 
+Create a new OllamaDocumentEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -116,8 +119,9 @@ Asynchronously run an Ollama Model to compute embeddings of the provided documen
 
 ### OllamaTextEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of
-each Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -138,9 +142,11 @@ __init__(
     generation_kwargs: dict[str, Any] | None = None,
     timeout: int = 120,
     keep_alive: float | str | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaTextEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -236,9 +242,11 @@ __init__(
     tools: ToolsType | None = None,
     response_format: None | Literal["json"] | JsonSchemaValue | None = None,
     think: bool | Literal["low", "medium", "high"] = False,
-)
+) -> None
 ```
 
+Create a new OllamaChatGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model must already be present (pulled) in the running Ollama instance.
@@ -395,9 +403,11 @@ __init__(
     timeout: int = 120,
     keep_alive: float | str | None = None,
     streaming_callback: Callable[[StreamingChunk], None] | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
diff --git a/docs-website/reference_versioned_docs/version-2.25/integrations-api/ollama.md b/docs-website/reference_versioned_docs/version-2.25/integrations-api/ollama.md
index c4be8e4166..e6f29073bd 100644
--- a/docs-website/reference_versioned_docs/version-2.25/integrations-api/ollama.md
+++ b/docs-website/reference_versioned_docs/version-2.25/integrations-api/ollama.md
@@ -10,8 +10,9 @@ slug: "/integrations-ollama"
 
 ### OllamaDocumentEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of each
-Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -41,9 +42,11 @@ __init__(
     meta_fields_to_embed: list[str] | None = None,
     embedding_separator: str = "\n",
     batch_size: int = 32,
-)
+) -> None
 ```
 
+Create a new OllamaDocumentEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -116,8 +119,9 @@ Asynchronously run an Ollama Model to compute embeddings of the provided documen
 
 ### OllamaTextEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of
-each Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -138,9 +142,11 @@ __init__(
     generation_kwargs: dict[str, Any] | None = None,
     timeout: int = 120,
     keep_alive: float | str | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaTextEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -236,9 +242,11 @@ __init__(
     tools: ToolsType | None = None,
     response_format: None | Literal["json"] | JsonSchemaValue | None = None,
     think: bool | Literal["low", "medium", "high"] = False,
-)
+) -> None
 ```
 
+Create a new OllamaChatGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model must already be present (pulled) in the running Ollama instance.
@@ -395,9 +403,11 @@ __init__(
     timeout: int = 120,
     keep_alive: float | str | None = None,
     streaming_callback: Callable[[StreamingChunk], None] | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
diff --git a/docs-website/reference_versioned_docs/version-2.26/integrations-api/ollama.md b/docs-website/reference_versioned_docs/version-2.26/integrations-api/ollama.md
index c4be8e4166..e6f29073bd 100644
--- a/docs-website/reference_versioned_docs/version-2.26/integrations-api/ollama.md
+++ b/docs-website/reference_versioned_docs/version-2.26/integrations-api/ollama.md
@@ -10,8 +10,9 @@ slug: "/integrations-ollama"
 
 ### OllamaDocumentEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of each
-Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -41,9 +42,11 @@ __init__(
     meta_fields_to_embed: list[str] | None = None,
     embedding_separator: str = "\n",
     batch_size: int = 32,
-)
+) -> None
 ```
 
+Create a new OllamaDocumentEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -116,8 +119,9 @@ Asynchronously run an Ollama Model to compute embeddings of the provided documen
 
 ### OllamaTextEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of
-each Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -138,9 +142,11 @@ __init__(
     generation_kwargs: dict[str, Any] | None = None,
     timeout: int = 120,
     keep_alive: float | str | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaTextEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -236,9 +242,11 @@ __init__(
     tools: ToolsType | None = None,
     response_format: None | Literal["json"] | JsonSchemaValue | None = None,
     think: bool | Literal["low", "medium", "high"] = False,
-)
+) -> None
 ```
 
+Create a new OllamaChatGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model must already be present (pulled) in the running Ollama instance.
@@ -395,9 +403,11 @@ __init__(
     timeout: int = 120,
     keep_alive: float | str | None = None,
     streaming_callback: Callable[[StreamingChunk], None] | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
diff --git a/docs-website/reference_versioned_docs/version-2.27/integrations-api/ollama.md b/docs-website/reference_versioned_docs/version-2.27/integrations-api/ollama.md
index c4be8e4166..e6f29073bd 100644
--- a/docs-website/reference_versioned_docs/version-2.27/integrations-api/ollama.md
+++ b/docs-website/reference_versioned_docs/version-2.27/integrations-api/ollama.md
@@ -10,8 +10,9 @@ slug: "/integrations-ollama"
 
 ### OllamaDocumentEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of each
-Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -41,9 +42,11 @@ __init__(
     meta_fields_to_embed: list[str] | None = None,
     embedding_separator: str = "\n",
     batch_size: int = 32,
-)
+) -> None
 ```
 
+Create a new OllamaDocumentEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -116,8 +119,9 @@ Asynchronously run an Ollama Model to compute embeddings of the provided documen
 
 ### OllamaTextEmbedder
 
-Computes the embeddings of a list of Documents and stores the obtained vectors in the embedding field of
-each Document. It uses embedding models compatible with the Ollama Library.
+Computes the embeddings of a list of Documents and stores the obtained vectors in each Document's embedding field.
+
+It uses embedding models compatible with the Ollama Library.
 
 Usage example:
 
@@ -138,9 +142,11 @@ __init__(
     generation_kwargs: dict[str, Any] | None = None,
     timeout: int = 120,
     keep_alive: float | str | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaTextEmbedder instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.
@@ -236,9 +242,11 @@ __init__(
     tools: ToolsType | None = None,
     response_format: None | Literal["json"] | JsonSchemaValue | None = None,
     think: bool | Literal["low", "medium", "high"] = False,
-)
+) -> None
 ```
 
+Create a new OllamaChatGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model must already be present (pulled) in the running Ollama instance.
@@ -395,9 +403,11 @@ __init__(
     timeout: int = 120,
     keep_alive: float | str | None = None,
     streaming_callback: Callable[[StreamingChunk], None] | None = None,
-)
+) -> None
 ```
 
+Create a new OllamaGenerator instance.
+
 **Parameters:**
 
 - **model** (str) – The name of the model to use. The model should be available in the running Ollama instance.