Add Arabic char tokenizer and Japanese-English katakana support #15614
Conversation
Thanks for adding new tokenizers. Regarding the 4 Arabic locales, we decided to use separate tokenizers for ar-SY, ar-AE, ar-MSA, and ar-SA. In this PR, I only found
Pull request overview
This PR extends NeMo’s TTS tokenizer/G2P support by adding an Arabic character-level tokenizer and adjusting Japanese Katakana pitch-accent G2P behavior for mixed Japanese/English (loanword) text.
Changes:
- Added `ArabicCharsTokenizer` (character-based) and Arabic locale punctuation/grapheme support (ar-MSA).
- Updated `JapaneseKatakanaAccentG2p` handling of ASCII/katakana decisions and unknown symbols/punctuation.
- Updated tokenizer unit tests to cover Arabic chars and mixed Japanese/English katakana-accent output.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| `tests/collections/common/tokenizers/text_to_speech/test_tts_tokenizers.py` | Replaces the Japanese katakana-accent test and adds an Arabic chars tokenizer unit test. |
| `nemo/collections/tts/g2p/models/ja_jp_ipa.py` | Refines ASCII/loanword vs. Latin handling and adds warnings/space replacement for unknown symbols. |
| `nemo/collections/common/tokenizers/text_to_speech/tts_tokenizers.py` | Introduces the `ArabicCharsTokenizer` implementation. |
| `nemo/collections/common/tokenizers/text_to_speech/ipa_lexicon.py` | Adds the ar-MSA locale entry, Arabic grapheme set, and locale-specific punctuation. |
Discussed offline. No need to add one tokenizer per locale.
XuesongYang left a comment:
LGTM in general, but some changes are needed before merging. @quapham please have a look and take action accordingly.
I am not familiar with either language, but please let me know if I am wrong. Thanks!
- Apply isort and black reformatting
- Fix Hindi chars tokenizer: use 'case=upper' and prevent duplicate spaces in the Japanese G2P fallback paths
- Fix HindiCharsTokenizer backward compat and add Arabic dialect tests
- Add Arabic tokenizer test coverage: diacritics, dialects, punctuation, unknown chars
- Expand Arabic tokenizer tests: parametrize diacritics, dialects
- fix: add back-compatibility, case=mixed, ascii_letters
- fix: add charset_version to Hindi/Arabic tokenizers for backward compatibility. Introduce a parameter in HindiCharsTokenizer and ArabicCharsTokenizer so old models (v1: case='mixed') keep working while new models train with the corrected charset (v2: case='upper'):
  - Define CASELESS_SCRIPT_TOKENIZER_TARGETS and DEFAULT_CHARSET_VERSION constants in tts_tokenizers.py
  - Persist charset_version into the OmegaConf config during training (setup_tokenizers) so .nemo archives record which version was used
  - Add _migrate_charset_version() helper in magpietts inference utils to pin charset_version=1 for old checkpoints that lack the field, preventing a silent vocabulary mismatch at inference time
- bugfix: L2_TTS_Fast_dev_runs_Magpietts_OnlineCFGDistillation.sh

Signed-off-by: quapham <quapham@users.noreply.github.com>
Signed-off-by: Xuesong Yang <1646669+XuesongYang@users.noreply.github.com>
```python
# Persist charset_version so it's saved in .nemo archives and
# update_config_for_inference can distinguish old checkpoints
# (missing charset_version → v1) from new ones.
if (
    hasattr(tokenizer_config, '_target_')
    and tokenizer_config._target_ in CASELESS_SCRIPT_TOKENIZER_TARGETS
    and not hasattr(tokenizer_config, 'charset_version')
):
    with open_dict(all_tokenizers_config):
        tokenizer_config.charset_version = DEFAULT_CHARSET_VERSION
```
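The inference-side half of this scheme is described in the commit notes: old checkpoints without the field are pinned to v1 so inference rebuilds the vocabulary the model was trained with. A minimal standalone sketch of that idea, using plain dicts and hypothetical names (the real helper is `_migrate_charset_version()` in the magpietts inference utils):

```python
# Sketch only: constant and function names are illustrative, not NeMo's API.
DEFAULT_CHARSET_VERSION = 2  # assumed: new checkpoints use the corrected charset


def migrate_charset_version(tokenizer_config: dict) -> dict:
    """Pin charset_version=1 when the field is missing (old checkpoint),
    so inference rebuilds the same vocabulary the model was trained with."""
    cfg = dict(tokenizer_config)  # don't mutate the caller's config
    cfg.setdefault("charset_version", 1)  # old behavior: case='mixed'
    return cfg


old_cfg = {"_target_": "HindiCharsTokenizer"}  # no charset_version recorded
assert migrate_charset_version(old_cfg)["charset_version"] == 1

new_cfg = {"_target_": "ArabicCharsTokenizer", "charset_version": DEFAULT_CHARSET_VERSION}
assert migrate_charset_version(new_cfg)["charset_version"] == 2
```

The key design point is that the absence of the field is itself the version signal, so no checkpoint rewrite is needed.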
I'm OK to merge this as is, but in the future we need to refactor this default version to be part of the class, not a top-level global in the file.
Important
The "Update branch" button must only be pressed on very rare occasions. An outdated branch is never blocking the merge of a PR.
Please reach out to the automation team before pressing that button.
What does this PR do ?
Add Arabic character tokenizer support and improve mixed Japanese-English katakana handling in Japanese G2P/tokenizer tests.
Collection: TTS, common
Changelog
Usage
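A minimal standalone sketch of what a character-level tokenizer like the new `ArabicCharsTokenizer` does (class name, method names, and the grapheme subset below are illustrative only, not NeMo's actual API):

```python
# Standalone illustration of a character-level tokenizer: map each grapheme
# or punctuation character to an integer id, and back. Hypothetical names.
class CharsTokenizerSketch:
    PAD = "<pad>"  # hypothetical special token

    def __init__(self, graphemes: str, punct: str = "،؛؟.!? "):
        # Vocabulary: special tokens first, then graphemes and punctuation.
        tokens = [self.PAD] + sorted(set(graphemes) | set(punct))
        self.id2token = tokens
        self.token2id = {t: i for i, t in enumerate(tokens)}

    def encode(self, text: str) -> list:
        # Characters outside the vocabulary are skipped here; the real
        # tokenizer logs a warning for unknown symbols.
        return [self.token2id[c] for c in text if c in self.token2id]

    def decode(self, ids: list) -> str:
        return "".join(self.id2token[i] for i in ids)


# Illustrative subset of Arabic graphemes (not the full ar-MSA set).
tok = CharsTokenizerSketch("ءابتثجحخدذرزسشصضطظعغفقكلمنهوي")
ids = tok.encode("سلام")
assert tok.decode(ids) == "سلام"
```

For the actual class and the ar-MSA charset, see `tts_tokenizers.py` and `ipa_lexicon.py` in the changed-files list above.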
# Add a code snippet demonstrating how to use this

GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove the label and add it again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.
Additional Information