
Update dependency transformers to v5 [SECURITY] #1907

Open

renovate[bot] wants to merge 1 commit into master from renovate/pypi-transformers-vulnerability

Conversation

Contributor

@renovate renovate Bot commented May 6, 2026

ℹ️ Note

This PR body was truncated due to platform limits.

This PR contains the following updates:

Package       Update  Change                    OpenSSF
transformers  major   >=4.46.0 -> >=5.0.0       OpenSSF Scorecard
transformers  major   >=4.45,<5 -> >=5.0.0,<6   OpenSSF Scorecard

Transformers Regular Expression Denial of Service (ReDoS) vulnerability

CVE-2024-12720 / GHSA-6rvg-6v2m-4j46

More information

Details

A Regular Expression Denial of Service (ReDoS) vulnerability was identified in the huggingface/transformers library, specifically in the file tokenization_nougat_fast.py. The vulnerability occurs in the post_process_single() function, where a regular expression processes specially crafted input. The issue stems from the regex exhibiting exponential time complexity under certain conditions, leading to excessive backtracking. This can result in significantly high CPU usage and potential application downtime, effectively creating a Denial of Service (DoS) scenario. The affected version is v4.46.3.
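As a rough illustration of how this class of bug behaves (a toy pattern, not the actual regex from tokenization_nougat_fast.py), nested quantifiers such as (a+)+ force Python's backtracking regex engine to try exponentially many ways to split the input before failing:

import re
import time

# Toy ReDoS demonstration: the trailing "!" guarantees the overall match fails,
# so the engine backtracks through every possible partition of the run of "a"s.
pattern = re.compile(r"(a+)+$")

for n in (18, 20, 22):  # keep n small; runtime roughly doubles with each extra "a"
    crafted = "a" * n + "!"
    start = time.perf_counter()
    pattern.match(crafted)
    print(f"n={n}: {time.perf_counter() - start:.3f}s")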

Severity

  • CVSS Score: 5.3 / 10 (Medium)
  • Vector String: CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Deserialization of Untrusted Data in Hugging Face Transformers

CVE-2024-11394 / GHSA-hxxf-235m-72v3 / PYSEC-2024-229

More information

Details

Hugging Face Transformers Trax Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file.

The specific flaw exists within the handling of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25012.

Severity

  • CVSS Score: 8.8 / 10 (High)
  • Vector String: CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Deserialization of Untrusted Data in Hugging Face Transformers

CVE-2024-11392 / GHSA-qxrp-vhvm-j765 / PYSEC-2024-227

More information

Details

Hugging Face Transformers MobileViTV2 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file.

The specific flaw exists within the handling of configuration files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-24322.

Severity

  • CVSS Score: 7.5 / 10 (High)
  • Vector String: CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:H/I:H/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Deserialization of Untrusted Data in Hugging Face Transformers

CVE-2024-11393 / GHSA-wrfc-pvp9-mr9g / PYSEC-2024-228

More information

Details

Hugging Face Transformers MaskFormer Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file.

The specific flaw exists within the parsing of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25191.

Severity

  • CVSS Score: 8.8 / 10 (High)
  • Vector String: CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


CVE-2024-11392 / GHSA-qxrp-vhvm-j765 / PYSEC-2024-227

More information

Details

Hugging Face Transformers MobileViTV2 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file.

The specific flaw exists within the handling of configuration files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-24322.

Severity

  • CVSS Score: 8.8 / 10 (High)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

References

This data is provided by OSV and the PyPI Advisory Database (CC-BY 4.0).


CVE-2024-11393 / GHSA-wrfc-pvp9-mr9g / PYSEC-2024-228

More information

Details

Hugging Face Transformers MaskFormer Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file.

The specific flaw exists within the parsing of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25191.

Severity

  • CVSS Score: 8.8 / 10 (High)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

References

This data is provided by OSV and the PyPI Advisory Database (CC-BY 4.0).


CVE-2024-11394 / GHSA-hxxf-235m-72v3 / PYSEC-2024-229

More information

Details

Hugging Face Transformers Trax Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file.

The specific flaw exists within the handling of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25012.

Severity

  • CVSS Score: 8.8 / 10 (High)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

References

This data is provided by OSV and the PyPI Advisory Database (CC-BY 4.0).


CVE-2025-2099 / GHSA-qq3j-4f4f-9583 / PYSEC-2025-40

More information

Details

A vulnerability in the preprocess_string() function of the transformers.testing_utils module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario.

Severity

  • CVSS Score: 7.5 / 10 (High)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

References

This data is provided by OSV and the PyPI Advisory Database (CC-BY 4.0).


Transformers Regular Expression Denial of Service (ReDoS) vulnerability

CVE-2025-1194 / GHSA-fpwr-67px-3qhx

More information

Details

A Regular Expression Denial of Service (ReDoS) vulnerability was identified in the huggingface/transformers library, specifically in the file tokenization_gpt_neox_japanese.py of the GPT-NeoX-Japanese model. The vulnerability occurs in the SubWordJapaneseTokenizer class, where regular expressions process specially crafted inputs. The issue stems from a regex exhibiting exponential complexity under certain conditions, leading to excessive backtracking. This can result in high CPU usage and potential application downtime, effectively creating a Denial of Service (DoS) scenario. The affected version is v4.48.1 (latest).

Severity

  • CVSS Score: 4.3 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:L

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Hugging Face Transformers Regular Expression Denial of Service

CVE-2025-2099 / GHSA-qq3j-4f4f-9583 / PYSEC-2025-40

More information

Details

A Regular Expression Denial of Service (ReDoS) exists in the preprocess_string() function of the transformers.testing_utils module. In versions before 4.50.0, the regex used to process code blocks in docstrings contains nested quantifiers that can trigger catastrophic backtracking when given inputs with many newline characters. An attacker who can supply such input to preprocess_string() (or code paths that call it) can force excessive CPU usage and degrade availability.

Fix: released in 4.50.0, which rewrites the regex to avoid the inefficient pattern.

  • Affected: < 4.50.0
  • Patched: 4.50.0

Severity

  • CVSS Score: 5.3 / 10 (Medium)
  • Vector String: CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Transformers vulnerable to ReDoS attack through its get_imports() function

CVE-2025-3264 / GHSA-jjph-296x-mrcr

More information

Details

A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the get_imports() function within dynamic_module_utils.py. This vulnerability affects version 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern \s*try\s*:.*?except.*?: used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption.

Severity

  • CVSS Score: 5.3 / 10 (Medium)
  • Vector String: CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Transformers's ReDoS vulnerability in get_configuration_file can lead to catastrophic backtracking

CVE-2025-3263 / GHSA-q2wp-rjmx-x6x9

More information

Details

A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the get_configuration_file() function within the transformers.configuration_utils module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern config\.(.*)\.json that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library.

Severity

  • CVSS Score: 5.3 / 10 (Medium)
  • Vector String: CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Transformers is vulnerable to ReDoS attack through its DonutProcessor class

CVE-2025-3933 / GHSA-37mw-44qp-f5jm

More information

Details

A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically within the DonutProcessor class's token2json() method. This vulnerability affects versions 4.51.3 and earlier, and is fixed in version 4.52.1. The issue arises from the regex pattern <s_(.*?)> which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to service disruption, resource exhaustion, and potential API service vulnerabilities, impacting document processing tasks using the Donut model.

Severity

  • CVSS Score: 5.3 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Transformers's Improper Input Validation vulnerability can be exploited through username injection

CVE-2025-3777 / GHSA-phhr-52qp-3mj4

More information

Details

Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the image_utils.py file. The vulnerability arises from insecure URL validation using the startswith() method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1.
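A small sketch of the bypass described above (the checks shown here are illustrative assumptions, not the exact code in image_utils.py): a raw prefix check accepts a URL whose userinfo part impersonates the trusted host, while comparing the parsed hostname does not.

from urllib.parse import urlparse

# URL username injection: everything before "@" in the authority is userinfo,
# so this URL actually resolves to evil.example even though the string starts
# with the trusted prefix.
url = "https://www.youtube.com@evil.example/watch?v=abc"

print(url.startswith("https://www.youtube.com"))  # True  -> naive prefix check passes
print(urlparse(url).hostname)                     # evil.example -> real destination

# Safer: compare the parsed hostname against an allow-list instead of a prefix.
def is_youtube_url(candidate: str) -> bool:
    return (urlparse(candidate).hostname or "") in {"youtube.com", "www.youtube.com"}

print(is_youtube_url(url))  # False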

Severity

  • CVSS Score: 3.5 / 10 (Low)
  • Vector String: CVSS:3.0/AV:N/AC:L/PR:L/UI:R/S:U/C:L/I:N/A:N

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Hugging Face Transformers vulnerable to Regular Expression Denial of Service (ReDoS) in the AdamWeightDecay optimizer

CVE-2025-6921 / GHSA-4w7r-h757-3r74

More information

Details

The huggingface/transformers library, versions prior to 4.53.0, is vulnerable to Regular Expression Denial of Service (ReDoS) in the AdamWeightDecay optimizer. The vulnerability arises from the _do_use_weight_decay method, which processes user-controlled regular expressions in the include_in_weight_decay and exclude_from_weight_decay lists. Malicious regular expressions can cause catastrophic backtracking during the re.search call, leading to 100% CPU utilization and a denial of service. This issue can be exploited by attackers who can control the patterns in these lists, potentially causing the machine learning task to hang and rendering services unresponsive.

Severity

  • CVSS Score: 5.3 / 10 (Medium)
  • Vector String: CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Hugging Face Transformers is vulnerable to ReDoS through its MarianTokenizer

CVE-2025-6638 / GHSA-59p9-h35m-wg4g

More information

Details

A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically affecting the MarianTokenizer's remove_language_code() method. This vulnerability is present in version 4.52.4 and has been fixed in version 4.53.0. The issue arises from inefficient regex processing, which can be exploited by crafted input strings containing malformed language code patterns, leading to excessive CPU consumption and potential denial of service.

Severity

  • CVSS Score: 5.3 / 10 (Medium)
  • Vector String: CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Hugging Face Transformers Regular Expression Denial of Service (ReDoS) vulnerability

CVE-2025-5197 / GHSA-9356-575x-2w9m

More information

Details

A Regular Expression Denial of Service (ReDoS) vulnerability exists in the Hugging Face Transformers library, specifically in the convert_tf_weight_name_to_pt_weight_name() function. This function, responsible for converting TensorFlow weight names to PyTorch format, uses a regex pattern /[^/]*___([^/]*)/ that can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. The vulnerability affects versions up to 4.51.3 and is fixed in version 4.53.0. This issue can lead to service disruption, resource exhaustion, and potential API service vulnerabilities, impacting model conversion processes between TensorFlow and PyTorch formats.

Severity

  • CVSS Score: 5.3 / 10 (Medium)
  • Vector String: CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Hugging Face Transformers library has Regular Expression Denial of Service

CVE-2025-6051 / GHSA-rcv9-qm8p-9p6j

More information

Details

A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically within the normalize_numbers() method of the EnglishNormalizer class. This vulnerability affects versions up to 4.52.4 and is fixed in version 4.53.0. The issue arises from the method's handling of numeric strings, which can be exploited using crafted input strings containing long sequences of digits, leading to excessive CPU consumption. This vulnerability impacts text-to-speech and number normalization tasks, potentially causing service disruption, resource exhaustion, and API vulnerabilities.

Severity

  • CVSS Score: 5.3 / 10 (Medium)
  • Vector String: CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


HuggingFace Transformers allows for arbitrary code execution in the Trainer class

CVE-2026-1839 / GHSA-69w3-r845-3855

More information

Details

A vulnerability in the HuggingFace Transformers library, specifically in the Trainer class, allows for arbitrary code execution. The _load_rng_state() method in src/transformers/trainer.py at line 3059 calls torch.load() without the weights_only=True parameter. This issue affects all versions of the library supporting torch>=2.2 when used with PyTorch versions below 2.6, as the safe_globals() context manager provides no protection in these versions. An attacker can exploit this vulnerability by supplying a malicious checkpoint file, such as rng_state.pth, which can execute arbitrary code when loaded. The issue is resolved in version v5.0.0rc3.
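For context, a minimal sketch of the general mitigation (an illustration, not the upstream patch): passing weights_only=True to torch.load restricts unpickling to tensors and basic containers, so a crafted pickle payload in an untrusted checkpoint cannot run arbitrary code at load time.

import torch

# weights_only is available since PyTorch 1.13 and only became the default in 2.6;
# the file name below is a placeholder for any untrusted checkpoint.
state = torch.load("untrusted_checkpoint.pth", map_location="cpu", weights_only=True)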

Severity

  • CVSS Score: 6.5 / 10 (Medium)
  • Vector String: CVSS:3.0/AV:L/AC:H/PR:N/UI:R/S:U/C:H/I:L/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Release Notes

huggingface/transformers (transformers)

v5.0.0: Transformers v5

Compare Source

Transformers v5 release notes

  • Highlights
  • Significant API changes: dynamic weight loading, tokenization
  • Backwards Incompatible Changes
  • Bugfixes and improvements

We have a migration guide that will be continuously updated available on the main branch, please check it out in case you're facing issues: migration guide.

Highlights

We are excited to announce the initial release of Transformers v5. This is the first major release in five years, and the release is significant: 1200 commits have been pushed to main since the latest minor release. This release removes a lot of long-due deprecations, introduces several refactors that significantly simplify our APIs and internals, and comes with a large number of bug fixes.

We give an overview of our focus for this release in the following blogpost. In these release notes, we'll focus directly on the refactors and new APIs coming with v5.

This release is the full v5 release. It also sets in motion something bigger: starting with v5, we'll release a minor version every week rather than every 5 weeks. Expect v5.1 to follow next week, then v5.2 the week after, and so on.

We're moving forward with this change to ensure you have access to models as soon as they're supported in the library, rather than a few weeks after.

In order to install this release, please do so with the following:

pip install transformers

For us to deliver the best package possible, it is imperative that we have feedback on how the toolkit is currently working for you. Please try it out, and open an issue if you run into an inconsistency or a bug.

Transformers version 5 is a community endeavor, and we couldn't have shipped such a massive release without the help of the entire community.

Significant API changes

Dynamic weight loading

We introduce a new weight loading API in transformers, which significantly improves on the previous API. This
weight loading API is designed to apply operations to the checkpoints loaded by transformers.

Instead of loading the checkpoint exactly as it is serialized within the model, these operations can reshape, merge,
and split the layers according to how they're defined in this new API. These operations are often a necessity when
working with quantization or parallelism algorithms.

This new API is centered around the new WeightConverter class:

class WeightConverter(WeightTransform):
    operations: list[ConversionOps]
    source_keys: Union[str, list[str]]
    target_keys: Union[str, list[str]]

The weight converter is designed to apply a list of operations on the source keys, resulting in target keys. A common operation done on the attention layers is to fuse the query, key, and value layers. Doing so with this API would amount to defining the following conversion:

conversion = WeightConverter(
    ["self_attn.q_proj", "self_attn.k_proj", "self_attn.v_proj"],  # The input layers
    "self_attn.qkv_proj",  # The single layer as output
    operations=[Concatenate(dim=0)],
)

In this situation, we apply the Concatenate operation, which accepts a list of layers as input and returns a single
layer.

This allows us to define a mapping from architecture to a list of weight conversions. Applying those weight conversions
can apply arbitrary transformations to the layers themselves. This significantly simplified the from_pretrained method
and helped us remove a lot of technical debt that we accumulated over the past few years.
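As an illustration of what such a mapping could look like (reusing only the WeightConverter and Concatenate names shown above; the layer names and the way the list is registered are placeholder assumptions), an architecture's conversion list might fuse both its attention and MLP projections:

# Hypothetical conversion list for a Llama-style architecture; layer names are
# illustrative and the actual registration mechanism may differ.
llama_style_conversions = [
    WeightConverter(
        ["self_attn.q_proj", "self_attn.k_proj", "self_attn.v_proj"],
        "self_attn.qkv_proj",
        operations=[Concatenate(dim=0)],
    ),
    WeightConverter(
        ["mlp.gate_proj", "mlp.up_proj"],
        "mlp.gate_up_proj",
        operations=[Concatenate(dim=0)],
    ),
]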

This results in several improvements:

  • Much cleaner definition of transformations applied to the checkpoint
  • Reversible transformations, so loading and saving a checkpoint should result in the same checkpoint
  • Faster model loading thanks to scheduling of tensor materialization
  • Enables complex mix of transformations that wouldn't otherwise be possible (such as quantization + MoEs, or TP + MoEs)

Linked PR: #41580

Tokenization

Just as we moved towards a single backend library for model definition, we want our tokenizers, and the Tokenizer object, to be a lot more intuitive. With v5, tokenizer definition is much simpler: one can now initialize an empty LlamaTokenizer and train it directly on a corpus.

Defining a new tokenizer object should be as simple as this:

from transformers import TokenizersBackend, generate_merges
from tokenizers import pre_tokenizers, Tokenizer
from tokenizers.models import BPE

class Llama5Tokenizer(TokenizersBackend):
    def __init__(self, unk_token="<unk>", bos_token="<s>", eos_token="</s>", vocab=None, merges=None):
        # Fall back to a minimal vocabulary containing only the special tokens.
        if vocab is None:
            self._vocab = {
                str(unk_token): 0,
                str(bos_token): 1,
                str(eos_token): 2,
            }
        else:
            self._vocab = vocab
        self._merges = merges

        self._tokenizer = Tokenizer(
            BPE(vocab=self._vocab, merges=self._merges, fuse_unk=True)
        )
        self._tokenizer.pre_tokenizer = pre_tokenizers.Metaspace(
            replacement="▁", prepend_scheme=_get_prepend_scheme(self.add_prefix_space, self), split=False
        )
        super().__init__(
            tokenizer_object=self._tokenizer,
            unk_token=unk_token,
            bos_token=bos_token,
            eos_token=eos_token,
        )

Once the tokenizer is defined as above, you can instantiate it with Llama5Tokenizer(). Doing this gives you an empty, trainable tokenizer that follows the definition chosen by the authors of Llama5 (which does not exist yet 😉).

The above is the main motivation towards refactoring tokenization: we want tokenizers to behave similarly to models: trained or empty, and with exactly what is defined in their class definition.

Backend Architecture Changes: moving away from the slow/fast tokenizer separation

Up to now, transformers maintained two parallel implementations for many tokenizers:

  • "Slow" tokenizers (tokenization_<model>.py) - Python-based implementations, often using SentencePiece as the backend.
  • "Fast" tokenizers (tokenization_<model>_fast.py) - Rust-based implementations using the 🤗 tokenizers library.

In v5, we consolidate to a single tokenizer file per model: tokenization_<model>.py. This file will use the most appropriate backend available:

  1. TokenizersBackend (preferred): Rust-based tokenizers from the 🤗 tokenizers library. In general it provides optimal performance, and it also offers many features that are commonly adopted across the ecosystem:
  • handling additional tokens
  • a full Python API for setting and updating
  • automatic parallelization
  • automatic offsets
  • customization
  • training
  2. SentencePieceBackend: for tokenizers requiring the sentencepiece library. It inherits from PythonBackend.
  3. PythonBackend: a Python implementation of the features provided by tokenizers. Basically allows adding tokens.
  4. MistralCommonBackend: relies on MistralCommon's tokenization library. (Previously known as the MistralCommonTokenizer.)

The AutoTokenizer automatically selects the appropriate backend based on available files and dependencies. This is transparent: you continue to use AutoTokenizer.from_pretrained() as before. This keeps transformers future-proof and modular, making it easy to support future backends.

Defining a tokenizer outside of the existing backends

We enable users and tokenizer builders to define their own tokenizers from top to bottom. Tokenizers are usually defined using a backend such as tokenizers, sentencepiece or mistral-common, but we offer the possibility to design the tokenizer at a higher level, without relying on those backends.

To do so, you can import the PythonBackend (which was previously known as PreTrainedTokenizer). This class encapsulates all the logic related to added tokens, encoding, and decoding.

If you want something even higher up the stack, then PreTrainedTokenizerBase is what PythonBackend inherits from. It contains the very basic tokenizer API features:

  • encode
  • decode
  • vocab_size
  • get_vocab
  • convert_tokens_to_ids
  • convert_ids_to_tokens
  • from_pretrained
  • save_pretrained
  • among a few others
API Changes
1. Direct tokenizer initialization with vocab and merges

Starting with v5, we now enable initializing blank, untrained tokenizers backed by the 🤗 tokenizers library:

from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer()

This tokenizer will therefore follow the definition of the LlamaTokenizer as defined in its class definition. It can then be trained on a corpus as can be seen in the tokenizers documentation.

These tokenizers can also be initialized from vocab and merges (if necessary), like the previous "slow" tokenizers:

from transformers import LlamaTokenizer

vocab = {"<unk>": 0, "<s>": 1, "</s>": 2, "hello": 3, "world": 4}
merges = [("h", "e"), ("l", "l"), ("o", " ")]

tokenizer = LlamaTokenizer(vocab=vocab, merges=merges)

This tokenizer will behave as a Llama-like tokenizer, with an updated vocabulary. This allows comparing different tokenizer classes with the same vocab, thereby enabling the comparison of different pre-tokenizers, normalizers, etc.

⚠️ The vocab_file (as in, a path towards a file containing the vocabulary) cannot be used to initialize the LlamaTokenizer as loading from files is reserved to the from_pretrained method.

2. Simplified decoding API

The batch_decode and decode methods have been unified to reflect the behavior of the encode method. Both single and batch decoding now use the same decode method. See an example of the new behavior below:

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("t5-small") 
inputs = ["hey how are you?", "fine"]
tokenizer.decode(tokenizer.encode(inputs))

Gives:

- 'hey how are you?</s> fine</s>'
+ ['hey how are you?</s>', 'fine</s>']

We expect encode and decode to behave as two sides of the same coin: encode, process, decode should just work.

[!NOTE]
A common use-case would be: encode, model.generate, decode. However, using generate would return list[list[int]], which would then be incompatible with decode.

3. Unified encoding API

The encode_plus method is deprecated in favor of the single __call__ method.

4. apply_chat_template returns BatchEncoding

Previously, apply_chat_template returned input_ids for backward compatibility. Starting with v5, it now consistently returns a BatchEncoding dict like other tokenizer methods.

# v5
messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there!"}
]

# Now returns BatchEncoding with input_ids, attention_mask, etc.
outputs = tokenizer.apply_chat_template(messages, return_tensors="pt")
print(outputs.keys())  # dict_keys(['input_ids', 'attention_mask'])
5. Removed legacy configuration file saving

We simplify the serialization of tokenization attributes:

  • special_tokens_map.json - special tokens are now stored in tokenizer_config.json.
  • added_tokens.json - added tokens are now stored in tokenizer.json.
  • added_tokens_decoder is only stored when there is no tokenizer.json.

When loading older tokenizers, these files are still read for backward compatibility, but new saves use the consolidated format. We're gradually moving towards consolidating attributes to fewer files so that other libraries and implementations may depend on them more reliably.

6. Model-Specific Changes

Several models that had identical tokenizers now import from their base implementation:

  • LayoutLM → uses BertTokenizer
  • LED → uses BartTokenizer
  • Longformer → uses RobertaTokenizer
  • LXMert → uses BertTokenizer
  • MT5 → uses T5Tokenizer
  • MVP → uses BartTokenizer

These modules will eventually be removed altogether.

Removed T5-specific workarounds

The internal _eventually_correct_t5_max_length method has been removed. T5 tokenizers now handle max length consistently with other models.

Testing Changes

A few testing changes specific to tokenizers have been applied:

  • Model-specific tokenization test files now focus on integration tests.
  • Common tokenization API tests (e.g., add_tokens, encode, decode) are now centralized and automatically applied across all tokenizers. This reduces test duplication and ensures consistent behavior.

For legacy implementations, the original BERT Python tokenizer code (including WhitespaceTokenizer, BasicTokenizer, etc.) is preserved in bert_legacy.py for reference purposes.

7. Deprecated / Modified Features

Special Tokens Structure:

  • SpecialTokensMixin: Merged into PreTrainedTokenizerBase to simplify the tokenizer architecture.
  • special_tokens_map: Now only stores named special token attributes (e.g., bos_token, eos_token). Use extra_special_tokens for additional special tokens (formerly additional_special_tokens). all_special_tokens includes both named and extra tokens.
# v4
tokenizer.special_tokens_map  # Included 'additional_special_tokens'

# v5
tokenizer.special_tokens_map  # Only named tokens
tokenizer.extra_special_tokens  # Additional tokens
  • special_tokens_map_extended and all_special_tokens_extended: Removed. Access AddedToken objects directly from _special_tokens_map or _extra_special_tokens if needed.
  • additional_special_tokens: Still accepted for backward compatibility but is automatically converted to extra_special_tokens.

Deprecated Methods:

  • sanitize_special_tokens(): Already deprecated in v4, removed in v5.
  • prepare_seq2seq_batch(): Deprecated; use __call__() with text_target parameter instead.
# v4
model_inputs = tokenizer.prepare_seq2seq_batch(src_texts, tgt_texts, max_length=128)

# v5
model_inputs = tokenizer(src_texts, text_target=tgt_texts, max_length=128, return_tensors="pt")
model_inputs["labels"] = model_inputs.pop("input_ids_target")
  • BatchEncoding.words(): Deprecated; use word_ids() instead.

Removed Methods:

  • create_token_type_ids_from_sequences(): Removed from base class. Subclasses that need custom token type ID creation should implement this method directly.
  • prepare_for_model(), build_inputs_with_special_tokens(), truncate_sequences(): Moved from tokenization_utils_base.py to tokenization_python.py for PythonBackend tokenizers. TokenizersBackend provides model-ready input via tokenize() and encode(), so these methods are no longer needed in the base class.
  • _switch_to_input_mode(), _switch_to_target_mode(), as_target_tokenizer(): Removed from base class. Use __call__() with text_target parameter instead.
# v4
with tokenizer.as_target_tokenizer():
    labels = tokenizer(tgt_texts, ...)

# v5
labels = tokenizer(text_target=tgt_texts, ...)
  • parse_response(): Removed from base class.
Performance
MoE Performance

The v5 release significantly improves the performance of the MoE models, as can be seen in the graphs below. We improve and optimize MoE performance through batched and grouped experts implementations, and we optimize them for decoding using batched_mm.

[MoE performance comparison graphs]
Core performance

We focus on improving the performance of loading weights on device (which gives speedups up to 6x in tensor parallel situations); this is preliminary work that we'll continue to work on in the coming weeks. Some notable improvements:

Library-wide changes with lesser impact

Default dtype update

We have updated the default dtype for all models loaded with from_pretrained to be auto. This will lead to model instantiations respecting the dtype in which the model was saved, rather than forcing it to load in float32.

You can, of course, still specify the dtype in which you want to load your model by specifying it as an argument to the from_pretrained method.
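For instance, a short sketch under the new default (the model id here is just a placeholder):

import torch
from transformers import AutoModelForCausalLM

# With dtype="auto" as the default, the first call keeps the dtype the checkpoint
# was saved in; the second explicitly forces float32.
model = AutoModelForCausalLM.from_pretrained("some-org/some-model")
model_fp32 = AutoModelForCausalLM.from_pretrained("some-org/some-model", dtype=torch.float32)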

Shard size

The Hugging Face Hub infrastructure has gradually moved to a XET backend. This will significantly simplify uploads and downloads, with higher download and upload speeds, partial uploads, and, most notably, a higher threshold for accepted file sizes on the Hugging Face Hub.

To reflect this, we're increasing the default shard size of models serialized on the Hub to 50GB (up

Note

PR body was truncated to here.


Configuration

📅 Schedule: (UTC)

  • Branch creation: ""
  • Automerge: At any time (no schedule defined)

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate Bot temporarily deployed to Vespa Cloud CD May 6, 2026 15:03 Inactive
@renovate renovate Bot force-pushed the renovate/pypi-transformers-vulnerability branch from 7dd9b53 to a548d3d on May 6, 2026 15:03
@renovate renovate Bot temporarily deployed to Vespa Cloud CD May 6, 2026 15:04 Inactive
@renovate renovate Bot force-pushed the renovate/pypi-transformers-vulnerability branch from a548d3d to 2bebcae on May 6, 2026 15:07
@renovate renovate Bot temporarily deployed to Vespa Cloud CD May 6, 2026 15:07 Inactive
@renovate renovate Bot force-pushed the renovate/pypi-transformers-vulnerability branch from 2bebcae to 885338b on May 6, 2026 15:08
@renovate renovate Bot temporarily deployed to Vespa Cloud CD May 6, 2026 15:09 Inactive
@renovate renovate Bot force-pushed the renovate/pypi-transformers-vulnerability branch from 885338b to bcdadbb on May 7, 2026 02:11
@renovate renovate Bot temporarily deployed to Vespa Cloud CD May 7, 2026 02:11 Inactive
@renovate renovate Bot force-pushed the renovate/pypi-transformers-vulnerability branch from bcdadbb to 2cd73a5 on May 7, 2026 02:17
@renovate renovate Bot temporarily deployed to Vespa Cloud CD May 7, 2026 02:17 Inactive
@renovate renovate Bot force-pushed the renovate/pypi-transformers-vulnerability branch from 2cd73a5 to 246b41a on May 7, 2026 13:33
@renovate renovate Bot temporarily deployed to Vespa Cloud CD May 7, 2026 13:33 Inactive
@renovate renovate Bot force-pushed the renovate/pypi-transformers-vulnerability branch from 246b41a to 35b3f4a on May 7, 2026 13:34
