Introduce LoraConfig #17229

Merged
meta-codesync[bot] merged 13 commits into gh/lucylq/132/base from gh/lucylq/132/head on Feb 13, 2026
Conversation

@lucylq
Contributor

lucylq commented Feb 5, 2026

Differential Revision: [D92304723](https://our.internmc.facebook.com/intern/diff/D92304723/)

[ghstack-poisoned]
@pytorch-bot
pytorch-bot Bot commented Feb 5, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17229

Note: Links to docs will display an error until the docs builds have been completed.

❌ 4 New Failures, 6 Unrelated Failures

As of commit 53a5176 with merge base aa2f683:

NEW FAILURES - The following jobs have failed:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla Bot added the CLA Signed label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) Feb 5, 2026
@github-actions
github-actions Bot commented Feb 5, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

lucylq changed the title from "Introduce LoraConfig" to "[wip] Introduce LoraConfig" Feb 5, 2026
lucylq changed the title from "[wip] Introduce LoraConfig" to "Introduce LoraConfig" Feb 5, 2026
lucylq requested a review from Copilot February 5, 2026 19:01
Contributor

Copilot AI left a comment

Pull request overview

This PR introduces a LoraConfig dataclass to encapsulate LoRA (Low-Rank Adaptation) adapter configuration, replacing the previous separate adapter_checkpoint and adapter_config fields in BaseConfig. This refactoring improves code organization and provides a cleaner API for LoRA configuration.

Changes:

  • Added LoraConfig dataclass with validation and automatic config parsing from JSON (a sketch follows this list)
  • Refactored BaseConfig to use LoraConfig instead of separate adapter fields
  • Updated from_args method to create LoraConfig instances from CLI arguments
  • Simplified adapter loading logic in Llama2Model by using the new LoraConfig object
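
For orientation, here is a minimal sketch of what such a dataclass could look like, assuming the field names from the commit message below (checkpoint, rank, target_modules, lora_alpha) and a PEFT-style config.json with keys r, target_modules, and lora_alpha; the actual class in llm_config.py may differ:

```python
import json
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class LoraConfig:
    checkpoint: Optional[str] = None  # path to the adapter weights
    rank: int = 8  # low-rank dimension
    target_modules: List[str] = field(
        default_factory=lambda: ["q_proj", "v_proj"]
    )
    lora_alpha: float = 16.0  # scaling factor

    def __post_init__(self) -> None:
        # Post-init validation, as described in the commit message.
        if self.rank <= 0:
            raise ValueError(f"LoRA rank must be positive, got {self.rank}")
        if not self.target_modules:
            raise ValueError("target_modules must not be empty")

    @classmethod
    def from_json(cls, path: str) -> "LoraConfig":
        # Build a LoraConfig from an adapter config.json file
        # (key names here assume a PEFT-style layout).
        with open(path) as f:
            cfg = json.load(f)
        return cls(
            rank=cfg["r"],
            target_modules=cfg["target_modules"],
            lora_alpha=cfg["lora_alpha"],
        )
```

Keeping the checks in __post_init__ means every construction path (direct, from JSON, or from CLI arguments) goes through the same validation.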

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

| File | Description |
| --- | --- |
| extension/llm/export/config/llm_config.py | Introduces the LoraConfig dataclass with validation, updates BaseConfig to use it, and modifies from_args to create a LoraConfig from CLI arguments (see the sketch after this table) |
| examples/models/llama/model.py | Refactors adapter loading to use the LoraConfig object, simplifying the logic by removing redundant validation and JSON parsing |
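
Building on the sketch above, the from_args change for llm_config.py might be wired roughly as follows; the argument names adapter_checkpoint and adapter_config mirror the BaseConfig fields this PR replaces, but the helper name and real CLI surface are assumptions:

```python
from argparse import Namespace
from typing import Optional


def lora_config_from_args(args: Namespace) -> Optional[LoraConfig]:
    # Hypothetical helper: build a LoraConfig from CLI arguments,
    # reusing the LoraConfig sketch above. Returns None when no
    # adapter was requested.
    if not getattr(args, "adapter_checkpoint", None):
        return None
    if getattr(args, "adapter_config", None):
        # Parse rank/target_modules/lora_alpha from the adapter's
        # config.json, then attach the checkpoint path.
        lora = LoraConfig.from_json(args.adapter_config)
        lora.checkpoint = args.adapter_checkpoint
        return lora
    return LoraConfig(checkpoint=args.adapter_checkpoint)
```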


Three comment threads on extension/llm/export/config/llm_config.py (outdated)
Github Executorch added 2 commits February 10, 2026 08:59
lucylq pushed a commit that referenced this pull request Feb 11, 2026
Pull Request resolved: #17229

Introduce LoraConfig to hold LoRA parameters such as:
- checkpoint
- rank
- target_modules (e.g. q_proj, k_proj, v_proj, up_proj, down_proj, gate_proj, o_proj)
- lora_alpha

LoraConfig validation is done post-init. A LoraConfig can be created from a config.json file.

Update call sites in export_llama_lib to use LoraConfig instead of adapter_checkpoint and adapter_config.

NOTE: we may need to extend this to support more customizable features, like a LoRA config per layer, etc. cc @hakanb

ghstack-source-id: 340400147
@exported-using-ghexport

Differential Revision: [D92304723](https://our.internmc.facebook.com/intern/diff/D92304723/)
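
To make the consumption side concrete: per the review overview, Llama2Model no longer re-validates paths or re-parses JSON itself. A hypothetical stub, building on the LoraConfig sketch above (maybe_load_adapter and apply_lora are illustrative names, not the actual model.py API):

```python
class Llama2Model:  # illustrative stub, not the real class in model.py
    def apply_lora(self, rank, alpha, target_modules, checkpoint):
        # Hypothetical hook that would attach the adapter weights.
        print(f"LoRA r={rank}, alpha={alpha}, modules={target_modules}")

    def maybe_load_adapter(self, lora: "Optional[LoraConfig]") -> None:
        if lora is None:
            return  # no adapter configured
        # Validation and config.json parsing already happened in
        # LoraConfig.__post_init__ / LoraConfig.from_json, so the
        # model just reads the fields instead of re-checking them.
        self.apply_lora(
            rank=lora.rank,
            alpha=lora.lora_alpha,
            target_modules=lora.target_modules,
            checkpoint=lora.checkpoint,
        )
```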
lucylq pushed four more commits that referenced this pull request Feb 11, 2026, each repeating the commit message above (ghstack-source-ids: 340423626, 340434171, 340443733).
Contributor

@kimishpatel left a comment

Review automatically exported from Phabricator review in Meta.

lucylq pushed three commits that referenced this pull request Feb 12, 2026, each repeating the commit message above (ghstack-source-id: 340525244).
lucylq pushed two commits that referenced this pull request Feb 13, 2026, repeating the commit message above (ghstack-source-ids: 340525244, 340930093).
meta-codesync Bot merged commit 3ac1f31 into gh/lucylq/132/base Feb 13, 2026
314 of 326 checks passed
meta-codesync Bot deleted the gh/lucylq/132/head branch February 13, 2026 07:01
lucylq pushed a commit that referenced this pull request Feb 13, 2026
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #17229 by @lucylq
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/lucylq/132/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/lucylq/132/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/lucylq/131/orig
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/lucylq/132/orig
Differential Revision: [D92304723](https://our.internmc.facebook.com/intern/diff/D92304723/)
@diff-train-skip-merge

Co-authored-by: Github Executorch <github_executorch@arm.com>
lucylq pushed a commit that referenced this pull request Feb 13, 2026, repeating the commit message above (ghstack-source-id: 340930093).
lucylq mentioned this pull request Feb 13, 2026
lucylq added a commit that referenced this pull request Feb 13, 2026, repeating the commit message above (ghstack-source-id: 340930093; co-authored-by: Github Executorch <github_executorch@arm.com>).
chizkiyahu pushed a commit to chizkiyahu/executorch that referenced this pull request Feb 23, 2026, repeating the commit message above (ghstack-source-id: 340930093; co-authored-by: Github Executorch <github_executorch@arm.com>).

Labels

CLA Signed (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed), fb-exported, meta-exported

Projects

None yet

3 participants