docs-website/docs/overview/get-started.mdx
87 additions & 5 deletions
@@ -5,6 +5,9 @@ slug: "/get-started"
 description: "Have a look at this page to learn how to quickly get up and running with Haystack. It contains instructions for installing, running your first RAG pipeline, adding data and further resources."
 ---
 
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
 # Get Started
 
 Have a look at this page to learn how to quickly get up and running with Haystack. It contains instructions for installing, running your first RAG pipeline, adding data and further resources.
@@ -39,15 +42,15 @@ If you have any questions, please reach out to us on the [GitHub Discussion](htt
 
 </details>
 
-In the example below, we show how to set an API key using a Haystack [Secret](../concepts/secret-management.mdx). However, for easier use, you can also set an OpenAI key as an `OPENAI_API_KEY` environment variable.
+In the examples below, we show how to set an API key using a Haystack [Secret](../concepts/secret-management.mdx). You can choose between OpenAI or Hugging Face as your LLM provider. For easier use, you can also set the API key as an environment variable (`OPENAI_API_KEY` or `HF_API_TOKEN`).
+
+<Tabs>
+<TabItem value="openai" label="OpenAI" default>
 
 :::note
 **Using OpenAIChatGenerator requires an OpenAI API key with sufficient quota.**
 New users on the free tier may immediately encounter a `429` ("insufficient_quota") error when running
-the example below.
-
-If you do not have enough OpenAI credits, you may skip this example or use an alternative Generator such as
-`HuggingFaceAPIChatGenerator`.
+this example. If you do not have enough OpenAI credits, try the Hugging Face tab instead.
 :::
 
 ```python
@@ -112,9 +115,88 @@ results = rag_pipeline.run(
 )
 
 print(results["llm"]["replies"])
+```
+
+</TabItem>
+<TabItem value="huggingface" label="Hugging Face">
+
+:::note
+**Using HuggingFaceAPIChatGenerator requires a Hugging Face API token.**
+You can get a [free Hugging Face token](https://huggingface.co/settings/tokens) to use the Serverless Inference API.
+This API is rate-limited but perfect for experimentation.
+:::
+
+```python
+# import necessary dependencies
+from haystack import Pipeline, Document
+from haystack.components.generators.chat import HuggingFaceAPIChatGenerator
+from haystack.components.retrievers import InMemoryBM25Retriever
+from haystack.document_stores.in_memory import InMemoryDocumentStore
+from haystack.components.builders import ChatPromptBuilder
+from haystack.utils import Secret
+from haystack.dataclasses import ChatMessage
+
+# create a document store and write documents to it
+document_store = InMemoryDocumentStore()
+document_store.write_documents([
+    Document(content="My name is Jean and I live in Paris."),
+    Document(content="My name is Mark and I live in Berlin."),
+    Document(content="My name is Giorgio and I live in Rome.")
+])
+
+# A prompt corresponds to an NLP task and contains instructions for the model. Here, the pipeline will go through each Document to figure out the answer.
+prompt_template = [
+    ChatMessage.from_system(
+        """
+        Given these documents, answer the question.
+        Documents:
+        {% for doc in documents %}
+        {{ doc.content }}
+        {% endfor %}
+        Question:
+        """
+    ),
+    ChatMessage.from_user(
+        "{{question}}"
+    ),
+    ChatMessage.from_system("Answer:")
+]
+
+# create the components, adding the necessary parameters
+# Arrange pipeline components in the order you need them. If a component has more than one input or output, indicate which input you want to connect to which output using the format ("component_name.output_name", "component_name.input_name").
+# Run the pipeline by specifying the first component in the pipeline and passing its mandatory inputs. Optionally, you can pass inputs to other components.
+question = "Who lives in Paris?"
+results = rag_pipeline.run(
+    {
+        "retriever": {"query": question},
+        "prompt_builder": {"question": question},
+    }
+)
+
+print(results["llm"]["replies"])
 ```
 
+</TabItem>
+</Tabs>
+
 ### Adding Your Data
 
 Instead of running the RAG pipeline on example data, learn how you can add your own custom data using [Document Stores](../concepts/document-store.mdx).
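The collapsed lines of the Hugging Face diff (between the comments and the `run` call) are where the components are created and wired together. The connection format the comment describes, `("component_name.output_name", "component_name.input_name")`, can be illustrated with a small self-contained sketch. This is plain Python, not the Haystack API itself; the component names and socket names here are hypothetical stand-ins chosen to mirror the example above:

```python
# Toy illustration of the ("component_name.output_name", "component_name.input_name")
# connection format: each connection routes one named output socket of a component
# to one named input socket of another. Not the Haystack implementation.
class MiniPipeline:
    def __init__(self):
        self.components = {}   # name -> callable taking keyword args, returning a dict
        self.connections = []  # (src_name, src_output, dst_name, dst_input)

    def add_component(self, name, component):
        self.components[name] = component

    def connect(self, sender, receiver):
        # "retriever.documents" -> ("retriever", "documents"), etc.
        src_name, src_output = sender.split(".")
        dst_name, dst_input = receiver.split(".")
        self.connections.append((src_name, src_output, dst_name, dst_input))

    def run(self, inputs):
        # Simplification: components run in the order they were added,
        # with each output routed forward along the declared connections.
        results = {}
        pending = {name: dict(inputs.get(name, {})) for name in self.components}
        for name, component in self.components.items():
            output = component(**pending[name])
            results[name] = output
            for src, src_output, dst, dst_input in self.connections:
                if src == name:
                    pending[dst][dst_input] = output[src_output]
        return results


pipeline = MiniPipeline()
pipeline.add_component("retriever", lambda query: {"documents": [f"doc matching {query!r}"]})
pipeline.add_component("prompt_builder", lambda question, documents: {"prompt": f"{documents} -> {question}"})
pipeline.connect("retriever.documents", "prompt_builder.documents")

question = "Who lives in Paris?"
results = pipeline.run({
    "retriever": {"query": question},
    "prompt_builder": {"question": question},
})
print(results["prompt_builder"]["prompt"])
```

Note how the `run` call mirrors the one in the diff: inputs are keyed by component name, and the connection routes the retriever's `documents` output into the prompt builder's `documents` input without the caller passing it explicitly.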
docs-website/versioned_docs/version-2.22/overview/get-started.mdx
87 additions & 5 deletions