End-to-End Testing Enablement / Expansion #55

Open
rgreenberg1 opened this issue Sep 9, 2024 · 0 comments

Requirements:

Files, test pathways, and expectations – validate and report that the targeted files/packages have reasonable test coverage (see the coverage sketch below), starting with:
main.py
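
As a rough sketch of such a coverage gate, assuming pytest with the pytest-cov plugin; the coverage target (`main`, per main.py above) and the 80% threshold are assumptions for illustration, not project settings:

```python
import sys

import pytest

# Hypothetical coverage gate; requires the pytest-cov plugin. The target
# module ("main") and the 80% fail-under threshold are assumptions.
sys.exit(pytest.main([
    "tests/",
    "--cov=main",
    "--cov-report=term-missing",
    "--cov-fail-under=80",
]))
```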

Smoke Test:
Validate that a minimal synchronous benchmark, using an OpenAI API token with the gpt-4o-mini model, a 10-request limit, and emulated data with minimal prompt and output sizes, runs end to end and outputs the expected results.
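
A minimal sketch of what this could look like as a pytest case, assuming the benchmark is driven through the project CLI (main.py); the flags, report fields, and marker name below are illustrative assumptions, not the actual interface:

```python
import json
import os
import subprocess

import pytest


@pytest.mark.smoke
@pytest.mark.skipif("OPENAI_API_KEY" not in os.environ, reason="requires an OpenAI token")
def test_synchronous_smoke(tmp_path):
    """Run a minimal synchronous benchmark end to end and check its output.

    Every CLI flag below is a hypothetical placeholder for the real interface.
    """
    report = tmp_path / "report.json"
    result = subprocess.run(
        [
            "python", "main.py",
            "--backend", "openai",         # assumed flag
            "--model", "gpt-4o-mini",
            "--rate-type", "synchronous",  # assumed flag
            "--max-requests", "10",        # assumed flag: 10-request limit
            "--data", "emulated",          # assumed flag: minimal prompt/output sizes
            "--output", str(report),       # assumed flag
        ],
        capture_output=True,
        text=True,
        timeout=300,
    )
    assert result.returncode == 0, result.stderr
    # "Expected results" here means: the report exists and records all 10 requests.
    payload = json.loads(report.read_text())
    assert payload.get("completed_requests") == 10  # assumed report field
```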

Sanity Test:
Validate that various minimal benchmarks {sweep, synchronous, throughput, constant, poisson} with emulated data run end to end and output the expected results for a tiny model on both a vLLM local server and a Llama.cpp local server, limiting each run by number of requests or elapsed time to keep the overall test length manageable.
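
Sketched as a parametrized pytest matrix, assuming locally started vLLM and Llama.cpp servers exposing OpenAI-compatible endpoints; the endpoints, ports, and CLI flags are assumptions for illustration:

```python
import subprocess

import pytest

RATE_TYPES = ["sweep", "synchronous", "throughput", "constant", "poisson"]

# Assumed local endpoints; in practice these would come from fixtures that
# start a vLLM server and a Llama.cpp server loaded with a tiny model.
SERVERS = {
    "vllm": "http://localhost:8000/v1",
    "llama.cpp": "http://localhost:8080/v1",
}


@pytest.mark.sanity
@pytest.mark.parametrize("rate_type", RATE_TYPES)
@pytest.mark.parametrize("server", sorted(SERVERS))
def test_minimal_benchmark(server, rate_type, tmp_path):
    """Each rate type should run end to end against each local server.

    CLI flags are hypothetical; the limits keep each run short.
    """
    result = subprocess.run(
        [
            "python", "main.py",
            "--target", SERVERS[server],  # assumed flag
            "--rate-type", rate_type,     # assumed flag
            "--data", "emulated",         # assumed flag
            "--max-seconds", "30",        # assumed flag: limit by elapsed time
            "--output", str(tmp_path / "report.json"),
        ],
        capture_output=True,
        text=True,
        timeout=120,
    )
    assert result.returncode == 0, result.stderr
```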

Regression Test:
Validate that various longer benchmarks {sweep, synchronous, throughput, constant, poisson} run with the different data types and various settings for models, tokenizers, and other parameters.
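
One way to express this regression matrix, again with hypothetical option names: the data sources, model names, and time limit below are placeholders, not confirmed supported values:

```python
import subprocess

import pytest

RATE_TYPES = ["sweep", "synchronous", "throughput", "constant", "poisson"]

# Hypothetical matrix entries; the data sources and models below are
# illustrative placeholders, not confirmed supported values.
DATA_SOURCES = ["emulated", "file:sample.jsonl"]
MODELS = ["tiny-model-a", "tiny-model-b"]


@pytest.mark.regression
@pytest.mark.parametrize("rate_type", RATE_TYPES)
@pytest.mark.parametrize("data", DATA_SOURCES)
@pytest.mark.parametrize("model", MODELS)
def test_long_benchmark(rate_type, data, model, tmp_path):
    # Same end-to-end driver as the sanity sketch, but with a longer time
    # limit so each benchmark exercises a fuller run.
    result = subprocess.run(
        [
            "python", "main.py",
            "--rate-type", rate_type,  # assumed flag
            "--data", data,            # assumed flag
            "--model", model,          # assumed flag
            "--max-seconds", "600",    # assumed flag
            "--output", str(tmp_path / "report.json"),
        ],
        capture_output=True,
        text=True,
        timeout=900,
    )
    assert result.returncode == 0, result.stderr
```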

@rgreenberg1 rgreenberg1 changed the title End to end Testing Enablement / Expansion End-to-End Testing Enablement / Expansion Sep 9, 2024