[FEATURE]: Translation Specific Metrics #840

Open
1 task done
jularase opened this issue Feb 11, 2025 · 0 comments
Labels
enhancement (New feature or request), epic, use-case

Comments

jularase (Collaborator) commented Feb 11, 2025

Motivation

User Story
As a developer or technical user working with AI-generated translations, I want to compare how well different models translate text so that I can choose the best option for my use case without needing deep ML expertise.

Problem Statement
Users need a straightforward way to evaluate and compare translation models based on meaningful, real-world performance indicators rather than raw technical scores. They want to see which model produces more accurate, fluent, and reliable translations in a way that’s easy to interpret.
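To make the request concrete, here is a minimal sketch of the kind of metric involved: a simplified sentence-level BLEU (a standard n-gram overlap score) used to rank two translation outputs against a reference. The reference sentence and the two model outputs are invented for illustration, and the smoothing and brevity penalty here are deliberately simplified; a real evaluation feature would wrap an established implementation (e.g. sacreBLEU or chrF) and surface the result in a user-friendly form.

```python
from collections import Counter
import math

def ngram_counts(tokens, n):
    """Count the n-grams of the given order in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, hypothesis, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions (n = 1..max_n) times a brevity penalty.
    Uses add-one smoothing so one missing n-gram order does not
    zero the whole score. Illustration only, not the full BLEU spec."""
    ref = reference.split()
    hyp = hypothesis.split()
    if not hyp:
        return 0.0
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        hyp_counts = ngram_counts(hyp, n)
        ref_counts = ngram_counts(ref, n)
        # clipped overlap: each hypothesis n-gram counts at most as
        # often as it appears in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        log_prec_sum += math.log((overlap + 1) / (total + 1))
    # penalize hypotheses shorter than the reference
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(log_prec_sum / max_n)

reference = "the cat sat on the mat"
model_a = "the cat sat on the mat"   # hypothetical model A output
model_b = "cat mat on the"           # hypothetical model B output

score_a = sentence_bleu(reference, model_a)  # exact match: 1.0
score_b = sentence_bleu(reference, model_b)
best = "model A" if score_a > score_b else "model B"
```

The raw scores are exactly the "technical numbers" the problem statement says users should not have to interpret; the feature would sit on top of something like this and report, for example, which model wins and by roughly how much.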

Alternatives

No response

Contribution

No response

Have you searched for similar issues before submitting this one?

  • Yes, I have searched for similar issues