37 changes: 29 additions & 8 deletions .github/workflows/github_release.yml
@@ -8,6 +8,7 @@ on:
# Ignore release versions tagged with -rc0 suffix
- "!v2.[0-9]+.[0-9]-rc0"

pull_request:
jobs:
generate-notes:
runs-on: ubuntu-latest
@@ -59,16 +60,36 @@ jobs:
reno report --no-show-source --ignore-cache --version v${{ steps.version.outputs.current_release }} -o relnotes.rst

- name: Convert to Markdown
uses: docker://pandoc/core:3.1
uses: docker://pandoc/core:3.8
with:
args: "--from rst --to markdown_github --no-highlight relnotes.rst -o relnotes.md --wrap=none"
args: "--from rst --to gfm --no-highlight relnotes.rst -o relnotes.md --wrap=none"

- name: Add contributor list
# only for minor releases and minor release candidates (not bugfix releases)
if: endsWith(steps.version.outputs.current_release, '.0')
env:
GH_TOKEN: ${{ github.token }}
START: v${{ steps.version.outputs.current_release }}-rc0
END: "v2.20.x"
# END: ${{ github.ref_name }}
run: |
CONTRIBUTORS=$(gh api "repos/deepset-ai/haystack/compare/$START...$END" \
--jq '[.commits[].author.login] | map(select(. != null)) | unique | map("@\(.)") | join(", ")')

cat relnotes.md > enhanced_relnotes.md
{
echo ""
echo "## 💙 Big thank you to everyone who contributed to this release!"
echo ""
echo "$CONTRIBUTORS"
} >> enhanced_relnotes.md

- name: Debug
run: |
cat relnotes.md
cat enhanced_relnotes.md

- uses: ncipollo/release-action@v1
with:
bodyFile: "relnotes.md"
prerelease: ${{ steps.version.outputs.current_pre_release }}
allowUpdates: true
# - uses: ncipollo/release-action@v1
# with:
# bodyFile: "relnotes.md"
# prerelease: ${{ steps.version.outputs.current_pre_release != '' }}
# allowUpdates: true
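
For reference, a rough Python equivalent of the contributor-list step above. This is a hypothetical standalone sketch, not part of the workflow: it calls the same GitHub compare endpoint that `gh api` hits, assumes `requests` is installed and a token is exported as `GH_TOKEN`, and uses illustrative `start`/`end` refs:

```python
import os

import requests

# Mirrors the jq filter:
#   [.commits[].author.login] | map(select(. != null)) | unique | map("@\(.)") | join(", ")
# Illustrative refs; the workflow derives START from the release tag and pins END.
start, end = "v2.21.0-rc0", "v2.20.x"

resp = requests.get(
    f"https://api.github.com/repos/deepset-ai/haystack/compare/{start}...{end}",
    headers={"Authorization": f"Bearer {os.environ['GH_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()

# Commits without a linked GitHub account have author == null; skip them like jq does.
# For simplicity this ignores pagination, which the compare API applies to large ranges.
logins = {c["author"]["login"] for c in resp.json()["commits"] if c.get("author")}
print(", ".join(f"@{login}" for login in sorted(logins)))
```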
15 changes: 8 additions & 7 deletions .github/workflows/readme_sync.yml
@@ -4,13 +4,14 @@ on:
pull_request:
paths:
- "docs/pydoc/**"
push:
branches:
- main
# release branches have the form v1.9.x
- "v[0-9]+.[0-9]+.x"
# Exclude 1.x release branches, there's another workflow handling those
- "!v1.[0-9]+.x"
# TODO: remove this workflow once migration to Docusaurus is complete
# push:
# branches:
# - main
# # release branches have the form v1.9.x
# - "v[0-9]+.[0-9]+.x"
# # Exclude 1.x release branches, there's another workflow handling those
# - "!v1.[0-9]+.x"

env:
HATCH_VERSION: "1.14.2"
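
The disabled `push` trigger relied on GitHub Actions branch globs, where `[0-9]+` matches one or more digits and a leading `!` negates the pattern. A small illustrative check of that filter logic, translated to regular expressions for the sketch:

```python
import re

# "v[0-9]+.[0-9]+.x" includes release branches; "!v1.[0-9]+.x" carves out the 1.x line.
INCLUDE = re.compile(r"v\d+\.\d+\.x")
EXCLUDE = re.compile(r"v1\.\d+\.x")

def would_trigger(branch: str) -> bool:
    """Approximate the branch filter of the commented-out push trigger."""
    if branch == "main":
        return True
    return bool(INCLUDE.fullmatch(branch)) and not EXCLUDE.fullmatch(branch)

assert would_trigger("main")
assert would_trigger("v2.20.x")
assert not would_trigger("v1.26.x")  # handled by the separate 1.x workflow instead
```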
2 changes: 1 addition & 1 deletion VERSION.txt
@@ -1 +1 @@
2.20.0-rc0
2.21.0-rc0
4 changes: 2 additions & 2 deletions docs-website/docusaurus.config.js
@@ -48,7 +48,7 @@ const config = {
beforeDefaultRemarkPlugins: [require('./src/remark/versionedReferenceLinks')],
versions: {
current: {
label: '2.20-unstable',
label: '2.21-unstable',
path: 'next',
banner: 'unreleased',
},
@@ -88,7 +88,7 @@ const config = {
exclude: ['**/_templates/**'],
versions: {
current: {
label: '2.20-unstable',
label: '2.21-unstable',
path: 'next',
banner: 'unreleased',
},
36 changes: 14 additions & 22 deletions docs-website/reference/integrations-api/watsonx.md
@@ -18,11 +18,8 @@ Enables text completions using IBM's watsonx.ai foundation models.
This component extends WatsonxChatGenerator to provide the standard Generator interface that works with prompt
strings instead of ChatMessage objects.

The generator works with IBM's foundation models including:
- granite-13b-chat-v2
- llama-2-70b-chat
- llama-3-70b-instruct
- Other watsonx.ai chat models
The generator works with the IBM foundation models listed in the
[watsonx.ai documentation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=wx&audience=wdp).

You can customize the generation behavior by passing parameters to the watsonx.ai API through the
`generation_kwargs` argument. These parameters are passed directly to the watsonx.ai inference endpoint.
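
A minimal usage sketch, assuming the import path below (inferred by analogy with the embedder modules further down this page) and that `WATSONX_API_KEY` and `WATSONX_PROJECT_ID` are set in the environment; the `generation_kwargs` values are illustrative:

```python
# Import path assumed by analogy with the embedder modules documented below.
from haystack_integrations.components.generators.watsonx.generator import WatsonxGenerator

# Credentials are picked up from WATSONX_API_KEY and WATSONX_PROJECT_ID,
# matching the Secret.from_env_var defaults in the signature below.
generator = WatsonxGenerator(
    model="ibm/granite-3-3-8b-instruct",
    generation_kwargs={"max_tokens": 256, "temperature": 0.2},  # illustrative values
)

# The Generator interface takes a plain prompt string rather than ChatMessage objects.
result = generator.run(prompt="Summarize watsonx.ai in one sentence.")
print(result["replies"][0])
```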
@@ -73,7 +70,7 @@ Output:
```python
def __init__(*,
api_key: Secret = Secret.from_env_var("WATSONX_API_KEY"),
model: str = "ibm/granite-3-2b-instruct",
model: str = "ibm/granite-3-3-8b-instruct",
project_id: Secret = Secret.from_env_var("WATSONX_PROJECT_ID"),
api_base_url: str = "https://us-south.ml.cloud.ibm.com",
system_prompt: str | None = None,
@@ -233,14 +230,8 @@ This component interacts with IBM's watsonx.ai platform to generate chat respons
models. It supports the [ChatMessage](https://docs.haystack.deepset.ai/docs/chatmessage) format for both input
and output, including multimodal inputs with text and images.

The generator works with IBM's foundation models including:
- granite-13b-chat-v2
- llama-2-70b-chat
- llama-3-70b-instruct
- llama-3-2-11b-vision-instruct (multimodal)
- llama-3-2-90b-vision-instruct (multimodal)
- pixtral-12b (multimodal)
- Other watsonx.ai chat models
The generator works with the IBM foundation models listed in the
[watsonx.ai documentation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=wx&audience=wdp).

You can customize the generation behavior by passing parameters to the watsonx.ai API through the
`generation_kwargs` argument. These parameters are passed directly to the watsonx.ai inference endpoint.
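
A minimal chat sketch, again with an assumed import path and environment-based credentials; `ChatMessage.from_system`/`from_user` are the standard Haystack constructors:

```python
from haystack.dataclasses import ChatMessage

# Import path assumed by analogy with the embedder modules documented below.
from haystack_integrations.components.generators.watsonx.chat_generator import WatsonxChatGenerator

chat_generator = WatsonxChatGenerator(
    model="ibm/granite-3-3-8b-instruct",
    generation_kwargs={"temperature": 0.2},  # illustrative value
)

messages = [
    ChatMessage.from_system("You are a concise assistant."),
    ChatMessage.from_user("What is watsonx.ai?"),
]
response = chat_generator.run(messages=messages)
print(response["replies"][0].text)  # replies are ChatMessage objects
```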
@@ -294,7 +285,7 @@ print(response)
```python
def __init__(*,
api_key: Secret = Secret.from_env_var("WATSONX_API_KEY"),
model: str = "ibm/granite-3-2b-instruct",
model: str = "ibm/granite-3-3-8b-instruct",
project_id: Secret = Secret.from_env_var("WATSONX_PROJECT_ID"),
api_base_url: str = "https://us-south.ml.cloud.ibm.com",
generation_kwargs: dict[str, Any] | None = None,
@@ -454,7 +445,7 @@ documents = [
]

document_embedder = WatsonxDocumentEmbedder(
model="ibm/slate-30m-english-rtrvr",
model="ibm/slate-30m-english-rtrvr-v2",
api_key=Secret.from_env_var("WATSONX_API_KEY"),
api_base_url="https://us-south.ml.cloud.ibm.com",
project_id=Secret.from_env_var("WATSONX_PROJECT_ID"),
@@ -472,7 +463,7 @@ print(result["documents"][0].embedding)

```python
def __init__(*,
model: str = "ibm/slate-30m-english-rtrvr",
model: str = "ibm/slate-30m-english-rtrvr-v2",
api_key: Secret = Secret.from_env_var("WATSONX_API_KEY"),
api_base_url: str = "https://us-south.ml.cloud.ibm.com",
project_id: Secret = Secret.from_env_var("WATSONX_PROJECT_ID"),
@@ -492,7 +483,7 @@ Creates a WatsonxDocumentEmbedder component.
**Arguments**:

- `model`: The name of the model to use for calculating embeddings.
Default is "ibm/slate-30m-english-rtrvr".
Default is "ibm/slate-30m-english-rtrvr-v2".
- `api_key`: The WATSONX API key. Can be set via environment variable WATSONX_API_KEY.
- `api_base_url`: The WATSONX URL for the watsonx.ai service.
Default is "https://us-south.ml.cloud.ibm.com".
@@ -582,7 +573,7 @@ from haystack_integrations.components.embedders.watsonx.text_embedder import Wat
text_to_embed = "I love pizza!"

text_embedder = WatsonxTextEmbedder(
model="ibm/slate-30m-english-rtrvr",
model="ibm/slate-30m-english-rtrvr-v2",
api_key=Secret.from_env_var("WATSONX_API_KEY"),
api_base_url="https://us-south.ml.cloud.ibm.com",
project_id=Secret.from_env_var("WATSONX_PROJECT_ID"),
@@ -591,7 +582,7 @@ text_embedder = WatsonxTextEmbedder(
print(text_embedder.run(text_to_embed))

# {'embedding': [0.017020374536514282, -0.023255806416273117, ...],
# 'meta': {'model': 'ibm/slate-30m-english-rtrvr',
# 'meta': {'model': 'ibm/slate-30m-english-rtrvr-v2',
# 'truncated_input_tokens': 3}}
```

@@ -601,7 +592,7 @@ print(text_embedder.run(text_to_embed))

```python
def __init__(*,
model: str = "ibm/slate-30m-english-rtrvr",
model: str = "ibm/slate-30m-english-rtrvr-v2",
api_key: Secret = Secret.from_env_var("WATSONX_API_KEY"),
api_base_url: str = "https://us-south.ml.cloud.ibm.com",
project_id: Secret = Secret.from_env_var("WATSONX_PROJECT_ID"),
@@ -617,7 +608,7 @@ Creates a WatsonxTextEmbedder component.
**Arguments**:

- `model`: The name of the IBM watsonx model to use for calculating embeddings.
Default is "ibm/slate-30m-english-rtrvr".
Default is "ibm/slate-30m-english-rtrvr-v2".
- `api_key`: The WATSONX API key. Can be set via environment variable WATSONX_API_KEY.
- `api_base_url`: The WATSONX URL for the watsonx.ai service.
Default is "https://us-south.ml.cloud.ibm.com".
@@ -683,3 +674,4 @@ Embeds a single string.
A dictionary with:
- 'embedding': The embedding of the input text
- 'meta': Information about the model usage
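
And the retrieval side: a sketch pairing the text embedder with an in-memory embedding retriever (standard Haystack 2.x wiring; the store is assumed to already hold embedded documents):

```python
from haystack import Pipeline
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack_integrations.components.embedders.watsonx.text_embedder import WatsonxTextEmbedder

document_store = InMemoryDocumentStore()  # assumed to already contain embedded documents

query_pipeline = Pipeline()
query_pipeline.add_component("text_embedder", WatsonxTextEmbedder(model="ibm/slate-30m-english-rtrvr-v2"))
query_pipeline.add_component("retriever", InMemoryEmbeddingRetriever(document_store=document_store))
# The query embedding becomes the retriever's search vector.
query_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")

result = query_pipeline.run({"text_embedder": {"text": "What do I love?"}})
print(result["retriever"]["documents"])
```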
