
Add Arabic char tokenizer and Japanese-English katakana support#15614

Merged
XuesongYang merged 1 commit into NVIDIA-NeMo:main from quapham:arabic_chartokenizer_japanese_english
Apr 25, 2026

Conversation

@quapham
Contributor

@quapham quapham commented Apr 16, 2026

Important

The Update branch button must only be pressed in very rare occasions.
An outdated branch is never blocking the merge of a PR.
Please reach out to the automation team before pressing that button.

What does this PR do ?

Add Arabic character tokenizer support and improve mixed Japanese-English katakana handling in Japanese G2P/tokenizer tests.

Collection: TTS, common

Changelog

  • add Arabic character tokenizer coverage/support
  • update Japanese katakana accent handling for mixed English/Japanese text
  • add unit tests for tokenizer behavior

Usage

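A minimal, self-contained sketch of how a character-level tokenizer like this could be used. The real class is ArabicCharsTokenizer in nemo/collections/common/tokenizers/text_to_speech/tts_tokenizers.py; its constructor arguments and behavior for unknown characters may differ, so treat the names and defaults below as illustrative only.

```python
# Hypothetical sketch of character-level tokenization; not NeMo's actual API.
class CharTokenizerSketch:
    def __init__(self, graphemes, punct=('.', ',', '؟', '،')):
        # Vocabulary is simply graphemes + punctuation + space.
        self.tokens = list(graphemes) + list(punct) + [' ']
        self.token2id = {t: i for i, t in enumerate(self.tokens)}

    def encode(self, text):
        # Unknown characters are skipped here; the real tokenizer may warn instead.
        return [self.token2id[c] for c in text if c in self.token2id]

tok = CharTokenizerSketch("ابتثجحخدذرزسشصضطظعغفقكلمنهوي")
ids = tok.encode("كتاب")  # one id per Arabic character
```

Because the mapping is per character, decoding is a simple lookup of `tok.tokens[i]` for each id.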

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex, etc.)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items you can still open "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
Contributor guidelines contains specific people who can review PRs to various areas.

Additional Information

  • Related to # (issue)

@XuesongYang
Collaborator

Thanks for adding new tokenizers.

Regarding the 4 Arabic locales, we decided to use separate tokenizers for ar-SY, ar-AE, ar-MSA, and ar-SA. In this PR, I only found ArabicCharsTokenizer defined. Could you please add the tokenizers below?

  ar-AE: [arabic_AE_chartokenizer]
  ar-MSA: [arabic_MSA_chartokenizer]
  ar-SA: [arabic_SA_chartokenizer]
  ar-SY: [arabic_SY_chartokenizer]

Contributor

Copilot AI left a comment


Pull request overview

This PR extends NeMo’s TTS tokenizer/G2P support by adding an Arabic character-level tokenizer and adjusting Japanese Katakana pitch-accent G2P behavior for mixed Japanese/English (loanword) text.

Changes:

  • Added ArabicCharsTokenizer (character-based) and Arabic locale punctuation/grapheme support (ar-MSA).
  • Updated JapaneseKatakanaAccentG2p handling of ASCII/katakana decisions and unknown symbols/punctuation.
  • Updated tokenizer unit tests to cover Arabic chars and mixed Japanese/English katakana-accent output.
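The mixed-script routing described above can be sketched as follows. This is an illustrative stand-in for the behavior attributed to JapaneseKatakanaAccentG2p (route ASCII/loanword runs separately from katakana, replace unknown symbols with a space and warn), not NeMo's actual implementation; the function name and return shape are assumptions.

```python
import warnings

# Hypothetical sketch: classify each character so ASCII loanwords and
# katakana take different G2P paths, and unknown symbols become spaces.
def split_scripts(text, known_katakana):
    out = []
    for ch in text:
        if ch.isascii() and ch.isalpha():
            out.append(('ascii', ch))      # English/loanword handling
        elif ch in known_katakana:
            out.append(('katakana', ch))   # normal pitch-accent path
        else:
            warnings.warn(f"unknown symbol {ch!r}; replacing with space")
            out.append(('space', ' '))
    return out
```

For example, `split_scripts('アA', {'ア'})` classifies the katakana character and the ASCII letter separately.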

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.

  • tests/collections/common/tokenizers/text_to_speech/test_tts_tokenizers.py: Replaces the Japanese katakana-accent test and adds an Arabic chars tokenizer unit test.
  • nemo/collections/tts/g2p/models/ja_jp_ipa.py: Refines ASCII/loanword vs. Latin handling and adds warnings/space replacement for unknown symbols.
  • nemo/collections/common/tokenizers/text_to_speech/tts_tokenizers.py: Introduces the ArabicCharsTokenizer implementation.
  • nemo/collections/common/tokenizers/text_to_speech/ipa_lexicon.py: Adds the ar-MSA locale entry, Arabic grapheme set, and locale-specific punctuation.
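For concreteness, a locale entry plus grapheme set of the kind described for ipa_lexicon.py might look like the sketch below. The constant names and the exact punctuation set are assumptions; only the general shape (a grapheme set and per-locale punctuation) comes from the review summary.

```python
# Hypothetical sketch of the ar-MSA additions; actual constant names and
# contents in NeMo's ipa_lexicon.py may differ.
ARABIC_GRAPHEMES = set("ابتثجحخدذرزسشصضطظعغفقكلمنهوي")  # 28 base letters

# Locale-specific punctuation: Arabic question mark, comma, and semicolon
# alongside common Latin punctuation.
LOCALE_PUNCT = {
    "ar-MSA": ('.', ',', '!', '?', '؟', '،', '؛'),
}
```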


@XuesongYang
Collaborator
Discussed offline: no need to add one tokenizer per locale.

@XuesongYang added and then removed the labels "Has Babysitter" (Claude autonomously takes over the quest to get to a green CI) and "Agent Plan Approved" (Agent PR babysitter's plan is human-approved) on Apr 20, 2026.
@XuesongYang force-pushed the arabic_chartokenizer_japanese_english branch from 57abdf7 to 433981c on April 20, 2026, 22:19.
Collaborator

@XuesongYang XuesongYang left a comment


LGTM in general, but some changes are needed before merging. @quapham please have a look and take action accordingly.

I am not familiar with either language, so please let me know if I am wrong. Thanks!

@XuesongYang XuesongYang requested a review from blisc April 20, 2026 22:23
@XuesongYang
Collaborator

@blisc let's pause merging this one now, and merge #15567 first. I believe there are some merging conflicts to address.

Apply isort and black reformatting
Fix Hindi chartokenizer, use 'case=upper' and prevent duplicate spaces in the Japanese G2P fallback paths.
Apply isort and black reformatting
Fix HindiCharsTokenizer backward compat and add Arabic dialect tests
Apply isort and black reformatting
Add Arabic tokenizer test coverage: diacritics, dialects, punctuation, unknown chars
Expand Arabic tokenizer tests: parametrize diacritics, dialects
Apply isort and black reformatting
added comprehensive test coverage.
fix: add back-compatibility, case=mixed, ascii_letters.
fix: add charset_version to Hindi/Arabic tokenizers for backward compatibility
Introduce a charset_version parameter in HindiCharsTokenizer and
ArabicCharsTokenizer so old models (v1: case='mixed') keep working
while new models train with the corrected charset (v2: case='upper').
- Define CASELESS_SCRIPT_TOKENIZER_TARGETS and DEFAULT_CHARSET_VERSION
  constants in tts_tokenizers.py
- Persist charset_version into the OmegaConf config during training
  (setup_tokenizers) so .nemo archives record which version was used
- Add _migrate_charset_version() helper in magpietts inference utils
  to pin charset_version=1 for old checkpoints that lack the field,
  preventing a silent vocabulary mismatch at inference time
bugfix: L2_TTS_Fast_dev_runs_Magpietts_OnlineCFGDistillation.sh

Signed-off-by: quapham <quapham@users.noreply.github.com>
Signed-off-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
@XuesongYang
Collaborator


All CI passed. @blisc please review.

Comment on lines +61 to +70
# Persist charset_version so it's saved in .nemo archives and
# update_config_for_inference can distinguish old checkpoints
# (missing charset_version → v1) from new ones.
if (
    hasattr(tokenizer_config, '_target_')
    and tokenizer_config._target_ in CASELESS_SCRIPT_TOKENIZER_TARGETS
    and not hasattr(tokenizer_config, 'charset_version')
):
    with open_dict(all_tokenizers_config):
        tokenizer_config.charset_version = DEFAULT_CHARSET_VERSION
Collaborator


I'm OK to merge this as is. But in the future, we should refactor this default version to be part of the class rather than a top-level global in the file.
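The refactor suggested here could look roughly like the sketch below: hoist the default into a class attribute so each tokenizer owns its versioning instead of relying on a module-level constant. The class and attribute names are illustrative, not NeMo's actual API.

```python
# Hypothetical sketch: default charset version as a class attribute.
class ArabicCharsTokenizerSketch:
    DEFAULT_CHARSET_VERSION = 2  # v2 charset (case='upper')

    def __init__(self, charset_version=None):
        # Old checkpoints pass charset_version=1 explicitly; new models
        # fall back to the class-level default.
        self.charset_version = (
            charset_version
            if charset_version is not None
            else self.DEFAULT_CHARSET_VERSION
        )
```

A subclass or config can then override DEFAULT_CHARSET_VERSION without touching any file-level global.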


5 participants