
fix: unable to find output_dir in multi-GPU during resume_from_checkpoint check #352

Merged
merged 3 commits into foundation-model-stack:main on Sep 26, 2024

Conversation

@Abhishek-TAMU (Collaborator) commented Sep 26, 2024

Description of the change

Addition of code to create output_dir in accelerate_launch.py if it doesn't exist.
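In effect, the change ensures the directory exists before the per-GPU processes are launched. A minimal sketch of the idea (hypothetical values; job_config and the path are stand-ins, and the actual diff in accelerate_launch.py appears further down):

```python
import os

# Hypothetical sketch: create output_dir up front so every GPU process
# sees it before the resume_from_checkpoint check runs.
# job_config stands in for the parsed job configuration in accelerate_launch.py.
job_config = {"output_dir": "/tmp/tuning-output"}

output_dir = job_config.get("output_dir")
if not os.path.exists(output_dir):
    os.makedirs(output_dir, exist_ok=True)
```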

Related issue number

#1352

How to verify the PR

Run full fine tuning or LoRA tuning with multiple GPUs and check whether the issue still occurs.

Was the PR tested

  • I have added >=1 unit test(s) for every new method I have added.
  • I have ensured all unit tests pass

Thanks for making a pull request! 😃
One of the maintainers will review and advise on the next steps.

@Abhishek-TAMU Abhishek-TAMU changed the title Fix unable to find output_dir in multi-GPU during resume_from_checkpoint check fix: unable to find output_dir in multi-GPU during resume_from_checkpoint check Sep 26, 2024
@github-actions github-actions bot added the fix label Sep 26, 2024
@@ -98,6 +98,8 @@ def main():
#
##########
output_dir = job_config.get("output_dir")
if not os.path.exists(output_dir):
@anhuong (Collaborator) commented Sep 26, 2024
I think this solves the main case of running multi-GPU training from the image. The issue would still appear if someone ran accelerate launch tuning/sft_trainer.py on their own, though it is unclear whether creating the directory within sft_trainer.py makes sense, or whether the same problem would come up there with the directory still not being created in time.

After talking to Abhishek about this, adding the directory creation before the separate processes are initiated ensures the output_dir exists.

If we want only one process to check the resume-from-checkpoint condition, we could instead gate the resume_from_checkpoint handling behind a rank check such as dist.rank() == 0, similar to this case.

But we want to check with others on that, so for now we are adding this improvement, which fixes the bug, and can look into the alternative as a future improvement.
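For reference, the rank-gated alternative mentioned above could look roughly like the sketch below. This is an assumption rather than code from the PR: it uses torch.distributed.get_rank() (the dist.rank() shorthand in the comment), a stand-in output_dir, and a simplified resume check, so the real logic in sft_trainer.py may differ.

```python
import os
import torch.distributed as dist

output_dir = "/tmp/tuning-output"  # stand-in for the configured output_dir

# Only rank 0 inspects output_dir, so no other rank touches a directory
# that may not have been created yet.
resume_from_checkpoint = False
if not dist.is_initialized() or dist.get_rank() == 0:
    resume_from_checkpoint = os.path.isdir(output_dir) and bool(os.listdir(output_dir))
```

In practice the rank-0 result would likely also need to be shared with the other ranks, which is part of why the simpler up-front directory creation was preferred here for now.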

@anhuong anhuong merged commit 0c6a062 into foundation-model-stack:main Sep 26, 2024
8 checks passed