Changed file: SearchQnA/README.md (16 additions, 99 deletions)
````diff
@@ -16,7 +16,14 @@ Operating within the LangChain framework, the Google Search QnA chatbot mimics h
 
 By integrating search capabilities with LLMs within the LangChain framework, this Google Search QnA chatbot delivers comprehensive and precise answers, akin to human search behavior.
 
-The workflow falls into the following architecture:
+## Table of contents
+
+1. [Architecture](#architecture)
+2. [Deployment Options](#deployment-options)
+
+## Architecture
+
+The architecture of the SearchQnA Application is illustrated below:
 
 
 
````
````diff
@@ -85,104 +92,14 @@ flowchart LR
 
 ```
 
-## Deploy SearchQnA Service
-
-The SearchQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
-
-Currently we support two ways of deploying SearchQnA services with docker compose:
-
-1. Start services using the docker image on `docker hub`:
-
-```bash
-docker pull opea/searchqna:latest
-```
-
-2. Start services using the docker images `built from source`: [Guide](https://github.com/opea-project/GenAIExamples/tree/main/SearchQnA/docker_compose/)
-
-### Setup Environment Variable
-
-To set up environment variables for deploying SearchQnA services, follow these steps:
[…]
-2. If you are in a proxy environment, also set the proxy-related environment variables:
-
-```bash
-export http_proxy="Your_HTTP_Proxy"
-export https_proxy="Your_HTTPs_Proxy"
-```
-
-3. Set up other environment variables:
-
-```bash
-source ./docker_compose/set_env.sh
-```
-
-### Deploy SearchQnA on Gaudi
-
-If your version of `Habana Driver` < 1.16.0 (check with `hl-smi`), run the following command directly to start SearchQnA services. Find the corresponding [compose.yaml](./docker_compose/intel/hpu/gaudi/compose.yaml).
-
-```bash
-cd GenAIExamples/SearchQnA/docker_compose/intel/hpu/gaudi/
-docker compose up -d
-```
-
-Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) to build docker images from source.
-
-### Deploy SearchQnA on Xeon
-
-Find the corresponding [compose.yaml](./docker_compose/intel/cpu/xeon/compose.yaml).
-
-```bash
-cd GenAIExamples/SearchQnA/docker_compose/intel/cpu/xeon/
-docker compose up -d
-```
-
-Refer to the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) for more instructions on building docker images from source.
-
-## Consume SearchQnA Service
-
-Two ways of consuming SearchQnA Service:
-
-1. Use cURL command on terminal
-
-```bash
-curl http://${host_ip}:3008/v1/searchqna \
-  -H "Content-Type: application/json" \
-  -d '{
-    "messages": "What is the latest news? Give me also the source link.",
-    "stream": "True"
-  }'
-```
-
-2. Access via frontend
-
-To access the frontend, open the following URL in your browser: http://{host_ip}:5173.
-
-By default, the UI runs on port 5173 internally.
-
-## Troubleshooting
-
-1. If you get errors like "Access Denied", [validate micro service](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example:
+This SearchQnA use case performs Search-augmented Question Answering across multiple platforms. Currently, we provide the example for Intel® Gaudi® 2 and Intel® Xeon® Scalable Processors, and we invite contributions from other hardware vendors to expand the OPEA ecosystem.
 
-```bash
-http_proxy=""
-curl http://${host_ip}:3001/embed \
-  -X POST \
-  -d '{"inputs":"What is Deep Learning?"}' \
-  -H 'Content-Type: application/json'
-```
+## Deployment Options
 
-2. (Docker only) If all microservices work well, check the port ${host_ip}:3008, the port may be allocated by other users, you can modify the `compose.yaml`.
+The table below lists the available deployment options and their implementation details for different hardware platforms.
 
-3. (Docker only) If you get errors like "The container name is in use", change container name in `compose.yaml`.
````
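Although this change removes the step-by-step commands from the top-level README, the flow they describe is unchanged: export the environment variables, run `docker compose up -d` in the platform directory, then query the gateway. A minimal post-deployment sanity check is sketched below; the 3008 port mapping and the `/v1/searchqna` route are taken from the lines above, while the warm-up wait and the choice of the Xeon directory are assumptions, so adjust them to the platform actually deployed.

```bash
# Sketch of a post-deployment sanity check (assumes the default port mappings in compose.yaml).
cd GenAIExamples/SearchQnA/docker_compose/intel/cpu/xeon/   # or .../intel/hpu/gaudi/ on Gaudi

# Confirm all services started by compose are up (none exited or restarting).
docker compose ps

# Model-serving containers can take a while to load weights; wait before the first query.
sleep 60

# Query the SearchQnA gateway on the 3008 mapping used in the instructions above.
curl http://${host_ip}:3008/v1/searchqna \
  -H "Content-Type: application/json" \
  -d '{"messages": "What is the latest news? Give me also the source link.", "stream": "True"}'
```

If the curl call streams back an answer, the gateway and the microservices behind it are reachable.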
Another hunk in this change set adds the following lines:

````diff
+Some HuggingFace resources require an access token. Developers can create one by first signing up on [HuggingFace](https://huggingface.co/) and then generating a [user access token](https://huggingface.co/docs/transformers.js/en/guides/private#step-1-generating-a-user-access-token).
+
+## Troubleshooting
+
+1. If errors such as "Access Denied" occur, validate the [microservice](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon/README.md#validate-microservices) that is querying the embed API. A simple example:
+
+```bash
+http_proxy=""
+curl http://${host_ip}:3001/embed \
+  -X POST \
+  -d '{"inputs":"What is Deep Learning?"}' \
+  -H 'Content-Type: application/json'
+```
+
+2. (Docker only) If all microservices work well, check port ${host_ip}:3008; it might already be in use by another service. If so, modify the port mapping in `compose.yaml`.
+
+3. (Docker only) If you get errors like "The container name is in use", change the container name in `compose.yaml`.
````
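The HuggingFace note above does not spell out where the token goes. In OPEA examples the token is usually exported as an environment variable before `set_env.sh` is sourced and the compose stack is started; the exact variable name read by the SearchQnA compose files is an assumption here, so confirm it against `set_env.sh` and `compose.yaml` in the repository.

```bash
# Sketch: make a HuggingFace access token available to the serving containers.
# The variable name is an assumption (OPEA examples commonly read HUGGINGFACEHUB_API_TOKEN);
# check set_env.sh / compose.yaml for the name the SearchQnA stack actually expects.
export HUGGINGFACEHUB_API_TOKEN="<your HuggingFace user access token>"

# Then continue with the usual flow described in this change set.
source ./docker_compose/set_env.sh
docker compose up -d
```

Without it, downloads of gated models from inside the serving containers typically fail with authorization errors.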
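For troubleshooting items 2 and 3, one way to resolve a port or container-name clash without editing the tracked `compose.yaml` is a Compose override file, which Docker Compose merges automatically. The service name, ports, and override values below are placeholders rather than values from the SearchQnA compose files, so copy the real ones from `docker compose config` before using this.

```bash
# Sketch: resolve the "(Docker only)" port / container-name conflicts via a compose override.
# All names and ports below are placeholders; read the real ones with `docker compose config`.
cat > compose.override.yaml <<'EOF'
services:
  searchqna-backend-server:        # placeholder service name
    container_name: searchqna-backend-alt
    ports:
      - "3009:8888"                # publish on a free host port; keep the container port from compose.yaml
EOF

# Compose picks up compose.override.yaml automatically on the next `up`.
docker compose up -d
```

An override file keeps the tracked `compose.yaml` untouched, which makes it easier to pull future updates of the example.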