[workflow] Add Phi-4 Finetuning Example #1308
base: develop
Conversation
Force-pushed from `4095bb9` to `c4eb107`
- Why exactly do we need Gradio? How is it related to an FL tutorial?
- The PR description/title is misleading. Please rename as "[workflow] Add Phi-4 Finetuning Example"
- Consider asking the user to run `fx experimental activate` after installing OpenFL. This need not be part of the notebook execution cells.
Changes made in 6ebb95e
Almost there. A few polish tips:
- Heading styles are inconsistent across the notebook. Suggest using h1 for the title (i.e. a single `#` in markdown), h2 for sections, and h3 for sub-sections. Note that I don't mean to enforce using h1-h2-h3; some notebooks are also fine with just h1 and h3.
- Update the PR description (since Gradio is no longer used).
- Re-run the notebook and include cell outputs. As mentioned in previous notebook PRs, cell outputs are useful for the user: they help a reader quickly judge whether the notebook is worth their time investment.
@MasterSkepticista I have a different view on keeping the cell outputs in the notebook. They hamper readability; these examples exist for reference, to demonstrate capability.
Looking good @rajithkrishnegowda, just some minor questions/comments
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"training_config = {\n", |
I recommend cleaning up these parameters:
- Remove parameters that are already set to their default values (for example, `use_ipex`, `logging_strategy`, etc.). It will improve readability and reduce cognitive overload.
- It would be helpful to call out anything that was specifically set to enable federated fine-tuning (i.e. setting `num_train_epochs` to 1).
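To illustrate the suggestion, here is a minimal sketch of a trimmed-down config. Only `num_train_epochs`, `warmup_ratio`, `use_ipex`, and `logging_strategy` are mentioned in this thread; the structure and comments below are illustrative (Hugging Face `TrainingArguments`-style keys), not the notebook's actual settings.

```python
# Hedged sketch: keep only non-default values, and comment anything set
# specifically for federated fine-tuning. Values here are illustrative.
training_config = {
    "num_train_epochs": 1,   # one local epoch per federated round
    "warmup_ratio": 0.2,
    # Defaults such as use_ipex=False or logging_strategy="steps" are
    # omitted, per the review suggestion.
}
```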
" \"warmup_ratio\": 0.2,\n", | ||
"}\n", | ||
"\n", | ||
"peft_config = {\n", |
Same here, would be good to clean it up
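For the same reason, a trimmed `peft_config` could look like the sketch below. The key names follow the Hugging Face `peft` LoRA convention; the notebook's actual keys and values are not visible in this diff, so everything here is illustrative.

```python
# Hedged sketch of a minimal LoRA-style peft_config; values are
# illustrative, not the notebook's actual settings.
peft_config = {
    "r": 16,                 # LoRA rank
    "lora_alpha": 32,        # scaling factor
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "v_proj"],  # attention projections (example)
}
```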
"metadata": {}, | ||
"source": [ | ||
"After installation, activate experimental APIs using: \n", | ||
"`fx experimental activate`" |
Question: since we are importing directly from `openfl.experimental` and are not using any `fx` commands, do we need to explicitly activate it?
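One way to sidestep the activation question in the notebook is a runtime importability check before the first `openfl.experimental` import. The module path below is taken from the comment above; whether activation is required is exactly the open question, so treat this as a defensive sketch, not a resolution.

```python
import importlib.util

# Hedged sketch: report whether a module can be found on the current
# Python path, without importing it. Guards against the parent package
# (e.g. `openfl`) being absent entirely.
def experimental_api_available(module: str = "openfl.experimental") -> bool:
    try:
        return importlib.util.find_spec(module) is not None
    except ModuleNotFoundError:
        # Parent package is not installed at all.
        return False
```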
This PR adds a Phi-4 finetuning example using the Workflow API.