System Info
Same as #113
Information
🐛 Describe the bug
Train with the following parameter combination:

- `enable_ddp=True`
- `enable_fsdp=False`
- `use_peft=True`
- `freeze_llm=False`

The LoRA model is not saved anywhere. Is this expected?
I checked `save_model_checkpoint_peft` in `checkpoint_handler.py`: it does not appear to call `save_pretrained` to save the PEFT model. The version used in the FSDP path seems more reasonable, since it saves the PEFT model when `use_peft=True` and `freeze_llm=True`. I had to change the code as follows to save it in a subpath under `<SAVE_PATH>/llm`:
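For illustration, a minimal sketch of what such a fix could look like. The function name `save_peft_checkpoint` and its signature are hypothetical, not the actual code in `checkpoint_handler.py`; it simply uses PEFT's standard `save_pretrained` to write the adapter weights under `<save_path>/llm`:

```python
import os

def save_peft_checkpoint(model, save_path):
    """Hypothetical helper: save only the LoRA/PEFT adapter weights.

    Assumes `model` is a peft.PeftModel (or any object exposing
    save_pretrained), mirroring what the FSDP code path reportedly does
    when use_peft=True.
    """
    llm_dir = os.path.join(save_path, "llm")
    os.makedirs(llm_dir, exist_ok=True)
    # For a PeftModel, save_pretrained writes the adapter config and the
    # adapter weights only (not the full base model), so the checkpoint
    # stays small and can be re-attached to the base LLM later.
    model.save_pretrained(llm_dir)
```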
Error logs
No error logs; the PEFT model is simply never saved.
Expected behavior
I think the PEFT (LoRA) model should be saved in this configuration as well (did I miss anything?).