
Align on architecture for running LLM-as-judge evaluation #861

Open
aittalam opened this issue Feb 13, 2025 · 0 comments
@aittalam (Member)

See rationale here.

The main goal of this task is to reach alignment on how we want to implement this new evaluation, e.g. as a new workflow that chains inference + LLM judge inference + evaluation, vs. having the evaluation step call LLM judge inference itself (see the sketch below).
The deliverable should be a diagram explaining how we will run this new feature in Lumigator.
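
To make the two options concrete, here is a minimal sketch of the control flow being compared. All function names (`run_inference`, `run_judge_inference`, `compute_metrics`, etc.) are hypothetical placeholders, not actual Lumigator APIs; the stubs exist only to show where the judge call sits in each design.

```python
"""Sketch of the two orchestration options under discussion.

Everything here is illustrative: the stub functions stand in for
Lumigator jobs and are not real Lumigator APIs.
"""


def run_inference(dataset: list[str]) -> list[str]:
    # Placeholder: run the model under test over the dataset.
    return [f"prediction for: {sample}" for sample in dataset]


def run_judge_inference(predictions: list[str]) -> list[float]:
    # Placeholder: ask an LLM judge to score each prediction.
    return [1.0 for _ in predictions]


def compute_metrics(scores: list[float]) -> dict[str, float]:
    # Placeholder: aggregate per-sample judge scores into metrics.
    return {"judge_mean": sum(scores) / len(scores)}


# Option A: a new top-level workflow chains the three steps explicitly,
# so inference, judge inference, and evaluation stay independent jobs.
def workflow_option_a(dataset: list[str]) -> dict[str, float]:
    predictions = run_inference(dataset)
    scores = run_judge_inference(predictions)
    return compute_metrics(scores)


# Option B: the evaluation step itself calls judge inference internally,
# making the judge an implementation detail of evaluation.
def evaluation_option_b(predictions: list[str]) -> dict[str, float]:
    scores = run_judge_inference(predictions)
    return compute_metrics(scores)


def workflow_option_b(dataset: list[str]) -> dict[str, float]:
    predictions = run_inference(dataset)
    return evaluation_option_b(predictions)


if __name__ == "__main__":
    data = ["What is 2 + 2?"]
    print(workflow_option_a(data))
    print(workflow_option_b(data))
```

The trade-off in this sketch: Option A keeps judge inference reusable and observable as its own stage, while Option B keeps the workflow surface unchanged at the cost of coupling evaluation to the judge.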

aittalam changed the title from "align on architecture for running both the evaluation with LLM as judge and the evaluation of judges themselves" to "Align on architecture for running both the evaluation with LLM as judge and the evaluation of judges themselves" on Feb 13, 2025
aittalam changed the title from "Align on architecture for running both the evaluation with LLM as judge and the evaluation of judges themselves" to "Align on architecture for running LLM-as-judge evaluation" on Feb 13, 2025
aittalam self-assigned this on Feb 13, 2025
ividal added the backend and api (Changes which impact API/presentation layer) labels on Feb 13, 2025