Commit b79081f

docs: re-add 2.18 docs (#10879)

docs-website/reference_versioned_docs/version-2.18/experiments-api/experimental_agents_api.md (474 additions; diff not rendered)
---
title: "ChatMessage Store"
id: experimental-chatmessage-store-api
description: "Storage for the chat messages."
slug: "/experimental-chatmessage-store-api"
---

<a id="haystack_experimental.chat_message_stores.in_memory"></a>

## Module haystack\_experimental.chat\_message\_stores.in\_memory

<a id="haystack_experimental.chat_message_stores.in_memory.InMemoryChatMessageStore"></a>

### InMemoryChatMessageStore

Stores chat messages in memory.

The `chat_history_id` parameter is used as a unique identifier for each conversation or chat session.
It acts as a namespace that isolates messages from different sessions. Each `chat_history_id` value corresponds to a
separate list of `ChatMessage` objects stored in memory.

Typical usage involves providing a unique `chat_history_id` (for example, a session ID or conversation ID)
whenever you write, read, or delete messages. This ensures that chat messages from different
conversations do not overlap.

Usage example:

```python
from haystack.dataclasses import ChatMessage
from haystack_experimental.chat_message_stores.in_memory import InMemoryChatMessageStore

message_store = InMemoryChatMessageStore()

messages = [
    ChatMessage.from_assistant("Hello, how can I help you?"),
    ChatMessage.from_user("Hi, I have a question about Python. What is a Protocol?"),
]
message_store.write_messages(chat_history_id="user_456_session_123", messages=messages)
retrieved_messages = message_store.retrieve_messages(chat_history_id="user_456_session_123")

print(retrieved_messages)
```
<a id="haystack_experimental.chat_message_stores.in_memory.InMemoryChatMessageStore.__init__"></a>

#### InMemoryChatMessageStore.\_\_init\_\_

```python
def __init__(skip_system_messages: bool = True,
             last_k: int | None = 10) -> None
```

Create an InMemoryChatMessageStore.

**Arguments**:

- `skip_system_messages`: Whether to skip storing system messages. Defaults to True.
- `last_k`: The number of last messages to retrieve. Defaults to 10 messages if not specified.
<a id="haystack_experimental.chat_message_stores.in_memory.InMemoryChatMessageStore.to_dict"></a>

#### InMemoryChatMessageStore.to\_dict

```python
def to_dict() -> dict[str, Any]
```

Serializes the component to a dictionary.

**Returns**:

Dictionary with serialized data.

<a id="haystack_experimental.chat_message_stores.in_memory.InMemoryChatMessageStore.from_dict"></a>

#### InMemoryChatMessageStore.from\_dict

```python
@classmethod
def from_dict(cls, data: dict[str, Any]) -> "InMemoryChatMessageStore"
```

Deserializes the component from a dictionary.

**Arguments**:

- `data`: The dictionary to deserialize from.

**Returns**:

The deserialized component.
<a id="haystack_experimental.chat_message_stores.in_memory.InMemoryChatMessageStore.count_messages"></a>

#### InMemoryChatMessageStore.count\_messages

```python
def count_messages(chat_history_id: str) -> int
```

Returns the number of chat messages stored under the given chat history id.

**Arguments**:

- `chat_history_id`: The chat history id for which to count messages.

**Returns**:

The number of messages.
<a id="haystack_experimental.chat_message_stores.in_memory.InMemoryChatMessageStore.write_messages"></a>

#### InMemoryChatMessageStore.write\_messages

```python
def write_messages(chat_history_id: str, messages: list[ChatMessage]) -> int
```

Writes chat messages to the ChatMessageStore.

**Arguments**:

- `chat_history_id`: The chat history id under which to store the messages.
- `messages`: A list of ChatMessages to write.

**Raises**:

- `ValueError`: If messages is not a list of ChatMessages.

**Returns**:

The number of messages written.
<a id="haystack_experimental.chat_message_stores.in_memory.InMemoryChatMessageStore.retrieve_messages"></a>

#### InMemoryChatMessageStore.retrieve\_messages

```python
def retrieve_messages(chat_history_id: str,
                      last_k: int | None = None) -> list[ChatMessage]
```

Retrieves stored chat messages for the given chat history id.

**Arguments**:

- `chat_history_id`: The chat history id from which to retrieve messages.
- `last_k`: The number of last messages to retrieve. If unspecified, the last_k parameter passed
to the constructor will be used.

**Raises**:

- `ValueError`: If last_k is not None and is less than 0.

**Returns**:

A list of chat messages.
<a id="haystack_experimental.chat_message_stores.in_memory.InMemoryChatMessageStore.delete_messages"></a>

#### InMemoryChatMessageStore.delete\_messages

```python
def delete_messages(chat_history_id: str) -> None
```

Deletes all stored chat messages for the given chat history id.

**Arguments**:

- `chat_history_id`: The chat history id from which to delete messages.

<a id="haystack_experimental.chat_message_stores.in_memory.InMemoryChatMessageStore.delete_all_messages"></a>

#### InMemoryChatMessageStore.delete\_all\_messages

```python
def delete_all_messages() -> None
```

Deletes all stored chat messages from all chat history ids.
---
title: "Generators"
id: experimental-generators-api
description: "Enables text generation using LLMs."
slug: "/experimental-generators-api"
---

<a id="haystack_experimental.components.generators.chat.openai"></a>

## Module haystack\_experimental.components.generators.chat.openai

<a id="haystack_experimental.components.generators.chat.openai.OpenAIChatGenerator"></a>

### OpenAIChatGenerator

An OpenAI chat-based text generator component that supports hallucination risk scoring.

This is based on the paper
[LLMs are Bayesian, in Expectation, not in Realization](https://arxiv.org/abs/2507.11768).

## Usage Example:

```python
from haystack.dataclasses import ChatMessage

from haystack_experimental.utils.hallucination_risk_calculator.dataclasses import HallucinationScoreConfig
from haystack_experimental.components.generators.chat.openai import OpenAIChatGenerator

# Evidence-based Example
llm = OpenAIChatGenerator(model="gpt-4o")
rag_result = llm.run(
    messages=[
        ChatMessage.from_user(
            text="Task: Answer strictly based on the evidence provided below.\n"
            "Question: Who won the Nobel Prize in Physics in 2019?\n"
            "Evidence:\n"
            "- Nobel Prize press release (2019): James Peebles (1/2); Michel Mayor & Didier Queloz (1/2).\n"
            "Constraints: If evidence is insufficient or conflicting, refuse."
        )
    ],
    hallucination_score_config=HallucinationScoreConfig(skeleton_policy="evidence_erase"),
)
print(f"Decision: {rag_result['replies'][0].meta['hallucination_decision']}")
print(f"Risk bound: {rag_result['replies'][0].meta['hallucination_risk']:.3f}")
print(f"Rationale: {rag_result['replies'][0].meta['hallucination_rationale']}")
print(f"Answer:\n{rag_result['replies'][0].text}")
print("---")
```
<a id="haystack_experimental.components.generators.chat.openai.OpenAIChatGenerator.run"></a>

#### OpenAIChatGenerator.run

```python
@component.output_types(replies=list[ChatMessage])
def run(
    messages: list[ChatMessage],
    streaming_callback: StreamingCallbackT | None = None,
    generation_kwargs: dict[str, Any] | None = None,
    *,
    tools: ToolsType | None = None,
    tools_strict: bool | None = None,
    hallucination_score_config: HallucinationScoreConfig | None = None
) -> dict[str, list[ChatMessage]]
```

Invokes chat completion based on the provided messages and generation parameters.

**Arguments**:

- `messages`: A list of ChatMessage instances representing the input messages.
- `streaming_callback`: A callback function that is called when a new token is received from the stream.
- `generation_kwargs`: Additional keyword arguments for text generation. These parameters will
override the parameters passed during component initialization.
For details on OpenAI API parameters, see [OpenAI documentation](https://platform.openai.com/docs/api-reference/chat/create).
- `tools`: A list of Tool and/or Toolset objects, or a single Toolset for which the model can prepare calls.
If set, it will override the `tools` parameter provided during initialization.
- `tools_strict`: Whether to enable strict schema adherence for tool calls. If set to `True`, the model will follow exactly
the schema provided in the `parameters` field of the tool definition, but this may increase latency.
If set, it will override the `tools_strict` parameter set during component initialization.
- `hallucination_score_config`: If provided, the generator will evaluate the hallucination risk of its responses using
the OpenAIPlanner and annotate each response with hallucination metrics.
This involves generating multiple samples and analyzing their consistency, which may increase
latency and cost. Use this option when you need to assess the reliability of the generated content
in scenarios where accuracy is critical.
For details, see the [research paper](https://arxiv.org/abs/2507.11768).

**Returns**:

A dictionary with the following key:
- `replies`: A list containing the generated responses as ChatMessage instances. If hallucination
scoring is enabled, each message will include additional metadata:
  - `hallucination_decision`: "ANSWER" if the model decided to answer, "REFUSE" if it abstained.
  - `hallucination_risk`: The EDFL hallucination risk bound.
  - `hallucination_rationale`: The rationale behind the hallucination decision.
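Downstream code typically branches on the metadata keys documented above. The following sketch shows one way an application might keep only trustworthy answers; `Reply` is a hypothetical stand-in for a `ChatMessage` carrying that metadata, and the `max_risk` threshold is an application choice, not something the library prescribes.

```python
from dataclasses import dataclass, field


@dataclass
class Reply:
    """Hypothetical stand-in for a reply ChatMessage with hallucination metadata."""
    text: str
    meta: dict = field(default_factory=dict)


def usable_answers(replies: list[Reply], max_risk: float = 0.05) -> list[Reply]:
    """Keep replies the generator chose to ANSWER whose risk bound is
    at or below the caller's threshold; drop REFUSE and high-risk replies."""
    return [
        r for r in replies
        if r.meta.get("hallucination_decision") == "ANSWER"
        # Missing risk metadata is treated conservatively as maximal risk.
        and r.meta.get("hallucination_risk", 1.0) <= max_risk
    ]
```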
<a id="haystack_experimental.components.generators.chat.openai.OpenAIChatGenerator.run_async"></a>

#### OpenAIChatGenerator.run\_async

```python
@component.output_types(replies=list[ChatMessage])
async def run_async(
    messages: list[ChatMessage],
    streaming_callback: StreamingCallbackT | None = None,
    generation_kwargs: dict[str, Any] | None = None,
    *,
    tools: ToolsType | None = None,
    tools_strict: bool | None = None,
    hallucination_score_config: HallucinationScoreConfig | None = None
) -> dict[str, list[ChatMessage]]
```

Asynchronously invokes chat completion based on the provided messages and generation parameters.

This is the asynchronous version of the `run` method. It has the same parameters and return values
but can be used with `await` in async code.

**Arguments**:

- `messages`: A list of ChatMessage instances representing the input messages.
- `streaming_callback`: A callback function that is called when a new token is received from the stream.
Must be a coroutine.
- `generation_kwargs`: Additional keyword arguments for text generation. These parameters will
override the parameters passed during component initialization.
For details on OpenAI API parameters, see [OpenAI documentation](https://platform.openai.com/docs/api-reference/chat/create).
- `tools`: A list of Tool and/or Toolset objects, or a single Toolset for which the model can prepare calls.
If set, it will override the `tools` parameter provided during initialization.
- `tools_strict`: Whether to enable strict schema adherence for tool calls. If set to `True`, the model will follow exactly
the schema provided in the `parameters` field of the tool definition, but this may increase latency.
If set, it will override the `tools_strict` parameter set during component initialization.
- `hallucination_score_config`: If provided, the generator will evaluate the hallucination risk of its responses using
the OpenAIPlanner and annotate each response with hallucination metrics.
This involves generating multiple samples and analyzing their consistency, which may increase
latency and cost. Use this option when you need to assess the reliability of the generated content
in scenarios where accuracy is critical.
For details, see the [research paper](https://arxiv.org/abs/2507.11768).

**Returns**:

A dictionary with the following key:
- `replies`: A list containing the generated responses as ChatMessage instances. If hallucination
scoring is enabled, each message will include additional metadata:
  - `hallucination_decision`: "ANSWER" if the model decided to answer, "REFUSE" if it abstained.
  - `hallucination_risk`: The EDFL hallucination risk bound.
  - `hallucination_rationale`: The rationale behind the hallucination decision.
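The docs above note that hallucination scoring generates multiple samples and analyzes their consistency, which is why it adds latency and cost. As intuition only, a toy agreement-rate calculation is sketched below. This is emphatically NOT the EDFL risk bound from the paper, just an illustration of why disagreement across samples signals risk.

```python
from collections import Counter


def toy_consistency_risk(samples: list[str]) -> float:
    """Toy stand-in for sample-consistency analysis: the fraction of sampled
    answers that disagree with the most common one. Not the EDFL bound."""
    if not samples:
        raise ValueError("need at least one sample")
    # Normalize lightly so trivial formatting differences do not count as disagreement.
    normalized = [s.strip().lower() for s in samples]
    _, top_count = Counter(normalized).most_common(1)[0]
    return 1.0 - top_count / len(normalized)
```

More samples make the estimate more stable but cost more model calls, which matches the latency and cost trade-off described for `hallucination_score_config`.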
