diff --git a/tutorials/33_Hybrid_Retrieval.ipynb b/tutorials/33_Hybrid_Retrieval.ipynb
index 9bfc9ed..f489f06 100644
--- a/tutorials/33_Hybrid_Retrieval.ipynb
+++ b/tutorials/33_Hybrid_Retrieval.ipynb
@@ -233,7 +233,7 @@
    "source": [
     "### 2) Rank the Results\n",
     "\n",
-    "Use the [TransformersSimilarityRanker](https://docs.haystack.deepset.ai/docs/transformerssimilarityranker) that scores the relevancy of all retrieved documents for the given search query by using a cross encoder model. In this example, you will use [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) model to rank the retrieved documents but you can replace this model with other cross-encoder models on Hugging Face."
+    "Use the [SentenceTransformersSimilarityRanker](https://docs.haystack.deepset.ai/docs/sentencetransformerssimilarityranker), which scores the relevancy of all retrieved documents for a given search query using a cross-encoder model. In this example, you will use the [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) model to rank the retrieved documents, but you can replace it with other cross-encoder models on Hugging Face."
    ]
   },
   {
@@ -244,9 +244,9 @@
    },
    "outputs": [],
    "source": [
-    "from haystack.components.rankers import TransformersSimilarityRanker\n",
+    "from haystack.components.rankers import SentenceTransformersSimilarityRanker\n",
     "\n",
-    "ranker = TransformersSimilarityRanker(model=\"BAAI/bge-reranker-base\")"
+    "ranker = SentenceTransformersSimilarityRanker(model=\"BAAI/bge-reranker-base\")"
    ]
   },
   {
diff --git a/tutorials/44_Creating_Custom_SuperComponents.ipynb b/tutorials/44_Creating_Custom_SuperComponents.ipynb
index 092a0ed..6ac4e47 100644
--- a/tutorials/44_Creating_Custom_SuperComponents.ipynb
+++ b/tutorials/44_Creating_Custom_SuperComponents.ipynb
@@ -245,7 +245,7 @@
    "from haystack import Document, Pipeline, super_component\n",
    "from haystack.components.joiners import DocumentJoiner\n",
    "from haystack.components.embedders import SentenceTransformersTextEmbedder\n",
-    "from haystack.components.rankers import TransformersSimilarityRanker\n",
+    "from haystack.components.rankers import SentenceTransformersSimilarityRanker\n",
    "from haystack.components.retrievers import InMemoryBM25Retriever, InMemoryEmbeddingRetriever\n",
    "from haystack.document_stores.in_memory import InMemoryDocumentStore\n",
    "\n",
@@ -265,7 +265,7 @@
    "        bm25_retriever = InMemoryBM25Retriever(document_store)\n",
    "        text_embedder = SentenceTransformersTextEmbedder(embedder_model)\n",
    "        document_joiner = DocumentJoiner()\n",
-    "        ranker = TransformersSimilarityRanker(ranker_model)\n",
+    "        ranker = SentenceTransformersSimilarityRanker(ranker_model)\n",
    "\n",
    "        # Create the pipeline\n",
    "        self.pipeline = Pipeline()\n",
@@ -513,7 +513,7 @@
    "\n",
    "The main differences between the two retrievers are:\n",
    "\n",
-    "1. **Added Ranker Component**: The second version includes a `TransformersSimilarityRanker` that re-ranks the documents based on their semantic similarity to the query.\n",
+    "1. **Added Ranker Component**: The second version includes a `SentenceTransformersSimilarityRanker` that re-ranks the documents based on their semantic similarity to the query.\n",
    "2. **Updated Input Mapping**: We added `\"text_embedder.text\"`, `\"bm25_retriever.query\"` and `\"ranker.query\"` to the input mapping to ensure the input query is sent to all three components while simplifying the `retriever.run` method.\n",
    "\n",
    "The ranker can significantly improve the quality of the results by re-ranking the documents based on their semantic similarity to the query, even if they were not ranked highly by the initial retrievers."
@@ -629,7 +629,7 @@
    "from haystack.components.joiners import DocumentJoiner\n",
    "from haystack.components.embedders import SentenceTransformersTextEmbedder\n",
    "from haystack.components.retrievers import InMemoryBM25Retriever, InMemoryEmbeddingRetriever\n",
-    "from haystack.components.rankers import TransformersSimilarityRanker\n",
+    "from haystack.components.rankers import SentenceTransformersSimilarityRanker\n",
    "from haystack.document_stores.in_memory import InMemoryDocumentStore\n",
    "\n",
    "\n",
@@ -646,7 +646,7 @@
    "        bm25_retriever = InMemoryBM25Retriever(document_store)\n",
    "        text_embedder = SentenceTransformersTextEmbedder(embedder_model)\n",
    "        document_joiner = DocumentJoiner()\n",
-    "        ranker = TransformersSimilarityRanker(ranker_model)\n",
+    "        ranker = SentenceTransformersSimilarityRanker(ranker_model)\n",
    "\n",
    "        # Create the pipeline\n",
    "        self.pipeline = Pipeline()\n",
diff --git a/tutorials/guide_evaluation.ipynb b/tutorials/guide_evaluation.ipynb
index 15eacb0..65f32d6 100644
--- a/tutorials/guide_evaluation.ipynb
+++ b/tutorials/guide_evaluation.ipynb
@@ -110,7 +110,7 @@
    "### Methods to Improve Generation:\n",
    "\n",
    "- **Ranking**: Incorporate a ranking mechanism into your retrieved documents before providing the context to your prompt\n",
-    "  - **Order by similarity**: Reorder your retrieved documents by similarity using cross-encoder models from Hugging Face with [TransformersSimilarityRanker](https://docs.haystack.deepset.ai/docs/transformerssimilarityranker), Rerank models from Cohere with [CohereRanker](https://docs.haystack.deepset.ai/docs/cohereranker), or Rerankers from Jina with [JinaRanker](https://docs.haystack.deepset.ai/docs/jinaranker)\n",
+    "  - **Order by similarity**: Reorder your retrieved documents by similarity using cross-encoder models from Hugging Face with [SentenceTransformersSimilarityRanker](https://docs.haystack.deepset.ai/docs/sentencetransformerssimilarityranker), Rerank models from Cohere with [CohereRanker](https://docs.haystack.deepset.ai/docs/cohereranker), or Rerankers from Jina with [JinaRanker](https://docs.haystack.deepset.ai/docs/jinaranker)\n",
    "  - **Increase diversity by ranking**: Maximize the overall diversity among your context using sentence-transformers models with [SentenceTransformersDiversityRanker](https://docs.haystack.deepset.ai/docs/sentencetransformersdiversityranker) to help increase the semantic answer similarity (SAS) in LFQA applications.\n",
    "  - **Address the \"Lost in the Middle\" problem by reordering**: Position the most relevant documents at the beginning and end of the context using [LostInTheMiddleRanker](https://docs.haystack.deepset.ai/docs/lostinthemiddleranker) to increase faithfulness.\n",
    "- **Different Generators**: Try different large language models and benchmark the results. The full list of model providers is in [Generators](https://docs.haystack.deepset.ai/docs/generators).\n",
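Every hunk in this patch configures the same re-ranking step: a cross-encoder scores each (query, document) pair, and the documents are returned sorted by that score. The sketch below illustrates that pattern without Haystack or a model download; `stub_cross_encoder_score` and its token-overlap scoring are hypothetical stand-ins for the real `BAAI/bge-reranker-base` cross-encoder, not Haystack's implementation.

```python
from dataclasses import dataclass


@dataclass
class Document:
    content: str
    score: float = 0.0


def stub_cross_encoder_score(query: str, doc: Document) -> float:
    # Hypothetical stand-in for a cross-encoder: fraction of query tokens
    # that also appear in the document (case-insensitive).
    q_tokens = set(query.lower().split())
    d_tokens = set(doc.content.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)


def rank(query: str, documents: list[Document], top_k: int = 2) -> list[Document]:
    # Mirrors the ranker's behavior: score every pair, sort descending, keep top_k.
    for doc in documents:
        doc.score = stub_cross_encoder_score(query, doc)
    return sorted(documents, key=lambda d: d.score, reverse=True)[:top_k]


docs = [
    Document("Paris is the capital of France"),
    Document("Cross-encoders jointly encode query and document"),
    Document("The capital of Germany is Berlin"),
]
top = rank("capital of France", docs)
print([d.content for d in top])
# → ['Paris is the capital of France', 'The capital of Germany is Berlin']
```

In the notebooks themselves, the same step is a single pipeline component, constructed as `SentenceTransformersSimilarityRanker(model="BAAI/bge-reranker-base")` and fed the query plus the joined retriever results.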