Commit 06c143a

remove the Prepare the Colab Environment section from template and tutorials

1 parent: 7540b8e

8 files changed: 4 additions, 97 deletions

tutorials/29_Serializing_Pipelines.ipynb

Lines changed: 0 additions & 12 deletions
@@ -30,18 +30,6 @@
     "Although it's possible to serialize into other formats too, Haystack supports YAML out of the box to make it easy for humans to make changes without the need to go back and forth with Python code. In this tutorial, we will create a very simple pipeline in Python code, serialize it into YAML, make changes to it, and deserialize it back into a Haystack `Pipeline`."
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "9smrsiIqfS7J"
-   },
-   "source": [
-    "## Preparing the Colab Environment\n",
-    "\n",
-    "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n",
-    "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/logging)"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {

tutorials/30_File_Type_Preprocessing_Index_Pipeline.ipynb

Lines changed: 0 additions & 12 deletions
@@ -40,18 +40,6 @@
     "Optionally, you can keep going to see how to use these documents in a query pipeline as well."
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "rns_B_NGN0Ze"
-   },
-   "source": [
-    "## Preparing the Colab Environment\n",
-    "\n",
-    "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n",
-    "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/logging)"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {

tutorials/31_Metadata_Filtering.ipynb

Lines changed: 1 addition & 13 deletions
@@ -28,18 +28,6 @@
     "Although new retrieval techniques are great, sometimes you just know that you want to perform search on a specific group of documents in your document store. This can be anything from all the documents that are related to a specific _user_, or that were published after a certain _date_ and so on. Metadata filtering is very useful in these situations. In this tutorial, we will create a few simple documents containing information about Haystack, where the metadata includes information on what version of Haystack the information relates to. We will then do metadata filtering to make sure we are answering the question based only on information about Haystack 2.0.\n"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "tM3U5KyegTAE"
-   },
-   "source": [
-    "## Preparing the Colab Environment\n",
-    "\n",
-    "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n",
-    "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/logging)"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {
@@ -269,4 +257,4 @@
  },
  "nbformat": 4,
  "nbformat_minor": 0
-}
+}

tutorials/32_Classifying_Documents_and_Queries_by_Language.ipynb

Lines changed: 1 addition & 13 deletions
@@ -32,18 +32,6 @@
     "In the last section, you'll build a multi-lingual RAG pipeline. The language of a question is detected, and only documents in that language are used to generate the answer. For this section, the [`TextLanguageRouter`](https://docs.haystack.deepset.ai/docs/textlanguagerouter) will come in handy.\n"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "oBa4Q25cGTr6"
-   },
-   "source": [
-    "## Preparing the Colab Environment\n",
-    "\n",
-    "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n",
-    "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/logging)"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {
@@ -687,4 +675,4 @@
  },
  "nbformat": 4,
  "nbformat_minor": 0
-}
+}

tutorials/33_Hybrid_Retrieval.ipynb

Lines changed: 1 addition & 13 deletions
@@ -28,18 +28,6 @@
     "There are many cases when a simple keyword-based approaches like BM25 performs better than a dense retrieval (for example in a specific domain like healthcare) because a dense model needs to be trained on data. For more details about Hybrid Retrieval, check out [Blog Post: Hybrid Document Retrieval](https://haystack.deepset.ai/blog/hybrid-retrieval)."
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "ITs3WTT5lXQT"
-   },
-   "source": [
-    "## Preparing the Colab Environment\n",
-    "\n",
-    "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n",
-    "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/setting-the-log-level)"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {
@@ -571,4 +559,4 @@
  },
  "nbformat": 4,
  "nbformat_minor": 0
-}
+}

tutorials/34_Extractive_QA_Pipeline.ipynb

Lines changed: 1 addition & 13 deletions
@@ -29,18 +29,6 @@
     "To get data into the extractive pipeline, you'll also build an indexing pipeline to ingest the [Wikipedia pages of Seven Wonders of the Ancient World dataset](https://en.wikipedia.org/wiki/Wonders_of_the_World)."
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "eF_hnatJUEHq"
-   },
-   "source": [
-    "## Preparing the Colab Environment\n",
-    "\n",
-    "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n",
-    "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/logging)"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {
@@ -659,4 +647,4 @@
  },
  "nbformat": 4,
  "nbformat_minor": 0
-}
+}

tutorials/42_Sentence_Window_Retriever.ipynb

Lines changed: 0 additions & 11 deletions
@@ -24,17 +24,6 @@
     "`SentenceWindowRetriever(document_store=doc_store, window_size=2)`"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "id": "784caaa2",
-   "metadata": {},
-   "source": [
-    "\n",
-    "## Preparing the Colab Environment\n",
-    "\n",
-    "- [Enable GPU Runtime](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration#enabling-the-gpu-in-colab)\n"
-   ]
-  },
   {
    "cell_type": "markdown",
    "id": "98c2f9d3",

tutorials/template.ipynb

Lines changed: 0 additions & 10 deletions
@@ -21,16 +21,6 @@
     "*Here provide a short description of the tutorial. What does it teach? What's its expected outcome?*"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Preparing the Colab Environment\n",
-    "\n",
-    "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration#enabling-the-gpu-in-colab)\n",
-    "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/log-level)"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {},
