Switched linters, black -> ruff #334

Merged

Changes from 3 commits
3 changes: 1 addition & 2 deletions .github/workflows/python-quality.yml
@@ -16,5 +16,4 @@ jobs:
        run: |
          pip install --upgrade pip
          pip install .[dev]
-      - run: black --check bench examples optimum test
-      - run: ruff check bench examples optimum test
+      - run: ruff check --show-fixes bench examples optimum test
6 changes: 3 additions & 3 deletions Makefile
@@ -3,12 +3,12 @@
check_dirs := optimum test bench examples

check:
-    black --check ${check_dirs}
-    ruff check ${check_dirs}
+    ruff check --show-fixes ${check_dirs}
+    ruff format ${check_dirs} --diff

style:
-    black ${check_dirs}
     ruff check ${check_dirs} --fix
+    ruff format ${check_dirs}

test:
     python -m pytest -sv test
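
For reference, a minimal way to exercise the new targets locally (a sketch; it assumes ruff is pulled in through the dev extras, as the CI workflow above does, rather than installed separately):

    pip install .[dev]   # installs pytest and ruff (the dev extras)
    make style           # ruff check --fix rewrites lint violations, ruff format reformats files in place
    make check           # read-only: ruff check --show-fixes plus ruff format --diff report pending changes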
2 changes: 1 addition & 1 deletion external/awq/test_awq_kernels.py
@@ -13,8 +13,8 @@
# limitations under the License.
import pytest
import torch

from pack import pack_awq

from optimum.quanto import AffineQuantizer, MaxOptimizer, qint4, ungroup


2 changes: 1 addition & 1 deletion external/awq/test_awq_packing.py
@@ -14,9 +14,9 @@
import numpy as np
import pytest
import torch

from pack_intweight import pack_intweight
from packing_utils import pack_awq, reverse_awq_order, unpack_awq

from optimum.quanto import AWQPackedTensor, AWQPacking


2 changes: 1 addition & 1 deletion external/smoothquant/smoothquant.py
@@ -8,9 +8,9 @@
from tqdm import tqdm
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.models.bloom.modeling_bloom import BloomBlock
-from transformers.models.opt.modeling_opt import OPTDecoderLayer
from transformers.models.llama.modeling_llama import LlamaDecoderLayer, LlamaRMSNorm
from transformers.models.mistral.modeling_mistral import MistralDecoderLayer, MistralRMSNorm
+from transformers.models.opt.modeling_opt import OPTDecoderLayer


def get_act_scales(model, tokenizer, dataset, num_samples=512, seq_len=512):
15 changes: 8 additions & 7 deletions pyproject.toml
@@ -28,7 +28,7 @@ dynamic = ['version']
homepage = 'https://github.com/huggingface/optimum-quanto'

[project.optional-dependencies]
-dev = ['pytest', 'ruff', 'black']
+dev = ['pytest', 'ruff']
examples = [
'torchvision',
'transformers',
@@ -50,19 +50,20 @@ version = {attr = 'optimum.quanto.__version__'}
requires = ['setuptools>65.5.1', 'setuptools_scm']
build-backend = 'setuptools.build_meta'

-[tool.black]
-line-length = 119
-
[tool.ruff]
-# Never enforce `E501` (line length violations).
+# Configuration for Ruff
+line-length = 119 # Same line-length as Black had
+
+# Linting rules:
+# Never enforce `E501` (line length violations) and other specific rules.
lint.ignore = ['C901', 'E501', 'E741']
lint.select = ['C', 'E', 'F', 'I', 'W']
-line-length = 119

# Ignore import violations in all `__init__.py` files.
[tool.ruff.lint.per-file-ignores]
'__init__.py' = ['E402', 'F401', 'F403', 'F811']

+# isort configuration (to sort imports)
[tool.ruff.lint.isort]
lines-after-imports = 2
-known-first-party = ['optimum.quanto']
+known-first-party = ['optimum.quanto']
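
The 'I' rules selected above, together with the [tool.ruff.lint.isort] table, are what produce the import regrouping seen in the external/ files earlier in this diff. A rough way to preview or apply just that sorting locally (a sketch; the file path is only one example from this PR):

    ruff check --select I --show-fixes external/smoothquant/smoothquant.py   # report import-sorting violations
    ruff check --select I --fix external/smoothquant/smoothquant.py          # rewrite the import block in place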
2 changes: 1 addition & 1 deletion setup.sh
@@ -12,6 +12,6 @@ else
pip install --upgrade --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
fi
# Build tools
-pip install black ruff pytest build
+pip install ruff pytest build
Collaborator:

This is a legacy file I should have removed a long time ago, since it is obsoleted by the optional targets in pyproject.toml.

Anyway, you also need to edit the python-quality CI workflows under .github.

Contributor Author (@ishandeva, Oct 7, 2024):

> Anyway, you also need to edit the python-quality CI workflows under .github.

Do we want similar verbosity flags on this one as well? I see that currently it simply calls check:

    - run: ruff check bench examples optimum test

Collaborator:

Yes, that would be useful, thank you.

Contributor Author:

Alrighty then, done.

Collaborator:

The reason it might still be relevant is that people submitting pull requests might not be familiar with the concept of linting/formatting, but would understand immediately what this is about when looking at the failing CI logs for their pull request.

Contributor Author:

I see... yes, agreed.
# For examples
pip install accelerate transformers datasets
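
As noted in the review thread, this legacy script is superseded by the optional dependency groups in pyproject.toml; a rough equivalent using those extras (a sketch, based on the [project.optional-dependencies] table shown above; 'build' would still need installing separately):

    pip install .[dev]        # pytest and ruff, matching the dev extras
    pip install .[examples]   # torchvision, transformers and the other example dependencies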