Enable pylint in the github workflow #63
Conversation
@hickeyma What's your recommendation for the following warnings/convention? They seem reasonable to me. Do we have to correct every one?
Nice @tedhtchang, just tested and I'm seeing the same lint results as you. They're warnings and convention errors, so I would imagine they're not critical at the moment, but I suspect 3 issues could be created, one for each type, and they could get addressed later as time permits...
@tedhtchang one change I would suggest here: can we update the CI/CD checks to fail only if the pylint score is < 9.0? As you can see, the check is failing on your PR. If checks fail, we will block the PR. Expecting 10/10 may be unreasonable at the moment.
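One way to implement the suggested threshold is pylint's `fail-under` option (available since pylint 2.5), which makes the process exit non-zero only when the overall score drops below the given value. A minimal sketch of the relevant `.pylintrc` fragment (the section placement is an assumption; older configs use `[MASTER]` instead of `[MAIN]`):

```ini
[MAIN]
# Fail the CI check only when the overall score drops below 9.0,
# instead of requiring a perfect 10/10.
fail-under = 9.0
```

The same effect can be had on the command line with `pylint --fail-under=9.0 <targets>`.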
Sure. I disabled the 3 checkers to pass the lint at 10/10, but covering up these warning messages is not ideal.
That worked well @tedhtchang
This is not good practise. I recommend checking out the pylint code first (search:
@tedhtchang Maybe you can't resolve every one of the warnings/errors, but you need a good reason before disabling, as shown in the previous comment above.
I ended up fixing almost every warning except:
and
@Ssukriti We should manually run functional tests that exercise these modules:
I used this command to see if the sft_trainer module is still functional. Do we have any other functional tests to verify this module?
@tedhtchang @hickeyma we have seen conflicts between pylint and black before, and we have not been able to mandate a 10/10 pylint score. I do not want contributions to slow down by getting stuck in these kinds of conflicts and having to figure out how to disable pylint.
For functional testing, if you ran the command in your comment #63 (comment), that should be sufficient.
@tedhtchang the above command worked fine for me. After checking out your PR branch and installing it:
Not sure why it didn't work for you. It also worked with the llama-7b model.
@anhuong thanks for the review. I think it failed to run with the transformers==4.38.2 package installed. I had to downgrade to 4.37.2.
Signed-off-by: ted chang <[email protected]>
Signed-off-by: ted chang <[email protected]>
@hickeyma's recommendation is that contributors should fix, disable in-line, or disable globally, in that order.
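As an illustration of that order of preference, a toy sketch (the function and check names here are made up for the example, not taken from this repo):

```python
# Hypothetical sketch of the suggested order of preference:
# 1. Best: rewrite the code so the warning goes away.
# 2. Next: disable a specific check in-line, scoped to one line,
#    so the suppression is visible right where the exception is made.
# 3. Last resort: add the check to the global disable list in .pylintrc.

CONFIG = {}  # module-level state kept trivial for the example


def load_config(path="config.json"):  # pylint: disable=unused-argument
    """Toy function; the in-line comment suppresses one check on this line only."""
    return dict(CONFIG)
```

The in-line form keeps the suppression next to the code it excuses, so reviewers can see exactly why the rule was relaxed.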
Signed-off-by: ted chang <[email protected]>
@tedhtchang Do you plan to add details on how to run and address errors from the linter in a later PR after Contributions.md has been added?
@@ -443,7 +443,9 @@ disable=raw-checker-failed,
        attribute-defined-outside-init,
        abstract-method,
        pointless-statement,
-       wrong-import-order
+       wrong-import-order,
+       duplicate-code,
Was there duplicate code in launch_training that wasn't worth fixing? It seems like a good check to have instead of disabling it globally.
@anhuong What do you think? I can revert this change and fix launch_training.py properly in another PR.
It looked a little messy because I had to put many #pylint: disable=duplicate-code comments
in those 2 files. I could fix it, but then I'd have to figure out how to test it, which could further delay this PR.
build/launch_training.py:1:0: R0801: Similar lines in 2 files
==build.launch_training:[65:74]
==tuning.sft_trainer:[277:286]
parser = transformers.HfArgumentParser(
dataclass_types=(
configs.ModelArguments,
configs.DataArguments,
configs.TrainingArguments,
peft_config.LoraConfig,
peft_config.PromptTuningConfig,
)
) (duplicate-code)
build/launch_training.py:1:0: R0801: Similar lines in 2 files
==build.launch_training:[80:86]
==tuning.sft_trainer:[292:298]
(
model_args,
data_args,
training_args,
lora_config,
prompt_tuning_config, (duplicate-code)
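For reference, this kind of duplication could later be removed by moving the shared parser setup into one helper that both build/launch_training.py and tuning/sft_trainer.py import. A rough sketch with stand-in dataclasses (the real code passes the tuning config classes to `transformers.HfArgumentParser`; the names below are placeholders):

```python
from dataclasses import dataclass


@dataclass
class ModelArguments:  # stand-in for configs.ModelArguments
    model_name_or_path: str = ""


@dataclass
class DataArguments:  # stand-in for configs.DataArguments
    training_data_path: str = ""


def build_dataclass_types():
    """Single source of truth for the dataclass tuple both entry points
    hand to their argument parser, so the list is defined only once."""
    return (ModelArguments, DataArguments)
```

Each entry point would then call `build_dataclass_types()` instead of repeating the tuple, and the duplicate-code check could stay enabled.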
That is true, I duplicated the code because passing the arguments to sft_trainer was causing issues and we wanted the same parsing of arguments. That's fine then, thanks for showing me what the output looks like.
Thanks Ted for adding the linting, explaining the changes, and for creating issue #78 for documenting the linting process.
Changes
Closes #59
Test locally:
git clone https://github.com/tedhtchang/fms-hf-tuning -b Enable-pylint
cd fms-hf-tuning
pip install -r setup_requirement.txt
tox run -e lint