
Auto-round scores down to align scores & medals #74

Merged: 4 commits into master from japborst/round-score-down on Aug 22, 2024

Conversation

@japborst (Member) commented Aug 22, 2024

Fixes #61.

The challenge is that a 9.99 score leads to a silver medal, even though it is printed as 10.0, because in HumanReadableFormatter we're rounding to a single decimal.

There are two approaches to this problem:

  • Fix in the human formatter
  • Round differently everywhere

Separately, you could argue this is only a problem for scores near thresholds and not in other instances (e.g. 3.35 might be fine to round to 3.4, but 9.99 should be displayed as 9.9).

To keep things simple, I've added a property on the Score model and used the rounded-down version in the human-readable formatter.
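
To make the mismatch concrete, here is a minimal sketch (the one-decimal formatting stands in for what HumanReadableFormatter does; the gold/silver cutoff of 10.0 follows the linked issue):

    import math

    score = 9.99

    # Rounding to one decimal, as the printed output does:
    print(f"{score:.1f}")  # prints "10.0", which reads like a gold medal

    # The medal logic compares the raw value, and 9.99 < 10.0 yields silver.
    # Rounding down keeps the displayed score consistent with the medal:
    print(math.floor(score * 10) / 10)  # prints 9.9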

Comment on lines 23 to 25
def __post_init__(self) -> None:
    """Auto-round score down to 1 decimal place."""
    self.value = math.floor(self.value * 10) / 10
Contributor:

Instead of overriding the provided score upon instantiation, I would suggest creating a separate @property, i.e. a derived value. This would allow:

  • Keeping the original value
  • Keeping things correct when mutated (e.g. score = Score(9.99, 'foo'); score.value = 5)

WDYT?
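
A minimal sketch of that suggestion (the second field is a placeholder mirroring Score(9.99, 'foo'); the property name matches the rounded_value floated below):

    import math
    from dataclasses import dataclass

    @dataclass
    class Score:
        value: float
        badge: str  # hypothetical second field, mirroring Score(9.99, 'foo')

        @property
        def rounded_value(self) -> float:
            """The score rounded down to one decimal place, derived on access."""
            return math.floor(self.value * 10) / 10

    score = Score(9.99, "foo")
    print(score.rounded_value)  # 9.9, while score.value still holds 9.99
    score.value = 5
    print(score.rounded_value)  # 5.0, so it stays correct after mutation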

Member Author (@japborst):

Yep, that's definitely also something that can work.

Would you then suggest we only use the property (e.g. rounded_value) in the plain (human readable) output?

Contributor:

Yes, and maybe keep the non-rounded value in the JSON output? Although we could also use the rounded one in JSON, for consistency 🤔

Member Author (@japborst):

Let's keep the non-rounded in the machine-readable outputs 👍

Member Author (@japborst):

I've updated the PR to reflect this.
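
Roughly, the agreed split looks like this (the function names are illustrative, not the project's actual formatter API):

    import json
    import math

    def rounded_down(value: float) -> float:
        """Round a score down to one decimal place."""
        return math.floor(value * 10) / 10

    def human_readable(value: float) -> str:
        # Plain (human-readable) output shows the rounded-down score.
        return f"(score: {rounded_down(value)})"

    def machine_readable(value: float) -> str:
        # JSON output keeps the full-precision score.
        return json.dumps({"score": value})

    print(human_readable(9.99))    # (score: 9.9)
    print(machine_readable(9.99))  # {"score": 9.99}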


  if len(self._failed_models) > 0:
      print()
      print(
          f"Error: model score too low, fail_any_model_under = "
          f"{self._config.fail_any_model_under}"
      )
-     for model, score in self._failed_models:
          print(f"Model {model.name} scored {round(score.value, 1)}")
+     for model, model_score in self._failed_models:
Member Author (@japborst):

We were shadowing score here, so whilst at it, I've renamed it to model_score.

src/dbt_score/scoring.py (review thread outdated and resolved)
Comment on lines 32 to 34
== """🥇 \x1B[1mmodel1\x1B[0m (score: 10.0)
\x1B[1;32mOK \x1B[0m tests.conftest.rule_severity_low
\x1B[1;31mERR \x1B[0m tests.conftest.rule_severity_medium: Oh noes
\x1B[1;33mWARN\x1B[0m (critical) tests.conftest.rule_severity_critical: Error
== """🥇 \x1b[1mmodel1\x1b[0m (score: 10.0)
\x1b[1;32mOK \x1b[0m tests.conftest.rule_severity_low
\x1b[1;31mERR \x1b[0m tests.conftest.rule_severity_medium: Oh noes
\x1b[1;33mWARN\x1b[0m (critical) tests.conftest.rule_severity_critical: Error
Contributor:

Interesting. I know hex isn't case-sensitive, but I'm curious how this changed 🤔
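
(For reference, the two spellings denote the same character in Python:)

    # "\x1B" and "\x1b" both encode ESC, code point 0x1B:
    assert "\x1B" == "\x1b"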

Member Author (@japborst):

Appears it's my local ruff kicking in. Even though ruff is configured for this project, the project is on 0.2.2 while I have 0.6.1 running locally.

Will revert this change.

@matthieucan (Contributor) left a comment:

🎉

@matthieucan matthieucan merged commit 3d27d98 into master Aug 22, 2024
3 checks passed
@matthieucan matthieucan deleted the japborst/round-score-down branch August 22, 2024 14:29
@jochemvandooren (Contributor):
Thanks for fixing it, @japborst! 🙌

Closes #61: Bug: Silver medal with 10.0 project score