
Implements MLFlowLogger #2365

Merged
merged 8 commits into pytorch:main on Feb 12, 2025

Conversation

@nathan-az (Contributor) commented Feb 8, 2025

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

Please link to any issues this PR addresses.

Changelog

Implements a basic MLFlowLogger, in line with the functionality of the other loggers.

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example
and a tutorial example

  • I did not change any public API
  • I have added an example to docs or docstrings

Note: the MLFlowLogger's init was heavily inspired by the HF trainer's MLflowCallback. In line with the other loggers, key identifiers (experiment name, run ID) can be overridden, but are often set in MLflow via environment variables.
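As a rough illustration of that note (the import path follows torchtune/training/metric_logging.py from this PR, but the constructor kwargs below are assumptions, not necessarily the exact API; the environment variables are MLflow's standard ones):

```python
import os

from torchtune.training.metric_logging import MLFlowLogger

# Option A: rely on MLflow's standard environment variables to pick the
# tracking server and experiment.
os.environ["MLFLOW_TRACKING_URI"] = "http://localhost:5000"   # illustrative URI
os.environ["MLFLOW_EXPERIMENT_NAME"] = "torchtune"

# Option B: override key identifiers directly; the kwarg names here are
# assumptions for illustration, check the logger's docstring for the real ones.
logger = MLFlowLogger(experiment_name="torchtune", run_name="mlflow-demo")

# Same interface as the other torchtune metric loggers.
logger.log_dict({"loss": 1.23, "lr": 2e-5}, step=1)
logger.close()
```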

Fixes #2211, interest shown in #2103

Images

The logged config shows up nested under the same path as the config output directory (this looks like it aligns with the other loggers, but it's a bit ugly; do we want to keep it, or should we always log it at the artifact root as torchtune_config.yaml?)

[screenshot]

Parameters and metrics appear in the run overview, with nested parameters period-separated (e.g. profiler.profile_memory).

[screenshot]
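For reference, a minimal sketch (not the PR's actual code) of how a nested config could end up as period-separated parameter keys like profiler.profile_memory when handed to MLflow:

```python
import mlflow

def flatten(cfg: dict, prefix: str = "") -> dict:
    """Flatten nested dicts into {"outer.inner": value} pairs."""
    flat = {}
    for key, value in cfg.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=f"{name}."))
        else:
            flat[name] = value
    return flat

# Illustrative config fragment; mlflow.log_params is the standard fluent API call.
config = {"profiler": {"profile_memory": False}, "optimizer": {"lr": 2e-5}}
mlflow.log_params(flatten(config))  # logged as profiler.profile_memory, optimizer.lr
```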

Metrics also display over time in the Model Metrics tab, according to the logging steps.

[screenshot]


pytorch-bot bot commented Feb 8, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2365

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 3 Pending

As of commit c069438 with merge base 9b38360:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot

Hi @nathan-az!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@facebook-github-bot added the CLA Signed label on Feb 8, 2025
@fabiogeraci commented Feb 8, 2025

Very similar approach, but I found that self.rank pulls the local rank; I prefer to log the experiment on the global rank and implement custom metrics for the system log. I think you might encounter issues setting the run_id when running multi-GPU as well as multi-node multi-GPU. In my custom class I also included an auto_log flag (true/false) for a more user-friendly approach.
But if you all are happy with it, that's good for me.

@nathan-az (Contributor, Author)

I found that self.rank pulls the local rank; I prefer to log the experiment on the global rank and implement custom metrics for the system log

I considered this, but for the sake of uniformity aligned it to the other logger implementations.

I think you might encounter issues setting the run_id when running multi-GPU as well as multi-node multi-GPU. In my custom class I also included an auto_log flag (true/false) for a more user-friendly approach. But if you all are happy with it, that's good for me.

Multi-GPU should be OK due to the self._mlflow.active_run() checks and only setting the run on rank 0.

Regarding multi-node, my solution generally is to have a setup script before my trainer which sets the MLFLOW_RUN_ID env var on all nodes. This means there is some redundant logging on each node, but at least it aligns and doesn't create more runs.

This approach has also been useful when using MLFlow as an artifact store for multi-node runs, where different nodes have different checkpoint ranks that need to be logged together.
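A minimal sketch of the pattern described above (not the merged implementation): guard on rank 0, reuse any already-active run, and let a pre-exported MLFLOW_RUN_ID keep every node attached to the same run.

```python
import os
import mlflow
import torch.distributed as dist

rank = dist.get_rank() if dist.is_initialized() else 0

if rank == 0:
    if mlflow.active_run() is None:
        # If a setup script exported MLFLOW_RUN_ID on every node, attaching to
        # that run avoids spawning a new run per node/process.
        mlflow.start_run(run_id=os.environ.get("MLFLOW_RUN_ID"))
    mlflow.log_metric("loss", 1.23, step=1)
```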

@fabiogeraci

Multi-GPU should be OK due to the self._mlflow.active_run() checks and only setting the run on rank 0.

Regarding multi-node, my solution generally is to have a setup script before my trainer which sets the MLFLOW_RUN_ID env var on all nodes. This means there is some redundant logging on each node, but at least it aligns and doesn't create more runs.

This approach has also been useful when using MLFlow as an artifact store for multi-node runs, where different nodes have different checkpoint ranks that need to be logged together.

Personally I prefer to let MLflow set the RUN_ID, simply because experiment_name and run_name can be the same, and if the user forgets to change the run_id I think MLflow raises an error (I did not check, this is from memory).

I deployed my implementation on an internal server at my company. At the moment all users feed from the same front-end, so having redundant runs by default is not an option for me; it would be confusing and messy, as well as spamming the logger database (the database team would not be happy).

I am not a core developer, so take this as my personal opinion.

@Ankur-singh (Contributor)

@lulmer cc'ing you because you requested support for MLFlow logging in the last office hour.

@fabiogeraci

@lulmer cc'ing you because you requested support for MLFlow logging in the last office hour.

I am not sure I understand!

@steveepreston

@fabiogeraci I think he mentioned you because of your message.

@joecummings (Contributor) left a comment

This is a super quick turnaround - thanks! Can you also include some screenshots using the MLFlow logger from torchtune for additional "testing"?

@Ankur-singh (Contributor)

@lulmer (Louis Ulmer) requested an MLFlow logger during the office hour, hence I tagged him on this PR so he can follow the progress.

@fabiogeraci & @steveepreston sorry for the confusion.

@nathan-az (Contributor, Author) commented Feb 11, 2025

No worries @joecummings - added some images! Let me know if we want to change the behaviour around the artifact naming/path. I tried to align it to the wandb logger, which just takes output_config_fname.parent.

My deeply nested absolute path is an unlikely case - I assume most users use relative paths for their config e.g. configs/torchtune_config.yaml, which will show more cleanly.

That said, we can always just store it at the artifact root as torchtune_config.yaml. Happy to make any changes, and glad to see it's all working well 🙂
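For concreteness, the two artifact layouts being discussed look roughly like this (paths are illustrative, not the PR's exact code):

```python
from pathlib import Path
import mlflow

output_config_fname = Path("configs/torchtune_config.yaml")  # illustrative path

# Current behaviour, mirroring the wandb logger: nest the artifact under the
# config's parent directory.
mlflow.log_artifact(str(output_config_fname),
                    artifact_path=str(output_config_fname.parent))

# Alternative: always place the config at the artifact root.
mlflow.log_artifact(str(output_config_fname))
```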

@joecummings (Contributor) left a comment

Mostly minor comments!

Six resolved review comments on torchtune/training/metric_logging.py (outdated).
@joecummings (Contributor) left a comment

Awesome work - thanks! 🙏

@joecummings merged commit a965fb0 into pytorch:main on Feb 12, 2025
17 checks passed
Labels
CLA Signed (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed)
Development

Successfully merging this pull request may close these issues.

[Feature] add mlflow metric_logging
6 participants