Requirements:
Files, test pathways, and expectations – validate and report that the targeted files/packages below have reasonable test coverage afterwards:
main.py
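Coverage for the targeted files could be reported with pytest-cov; a minimal sketch follows, assuming pytest-cov is installed and that `tests/` and the `src` package path are placeholders for the real layout:

```python
import sys

import pytest

if __name__ == "__main__":
    # Run the suite with a terminal coverage report for the targeted package.
    # "tests/" and "--cov=src" are placeholders for the actual test and source paths.
    sys.exit(pytest.main(["tests/", "--cov=src", "--cov-report=term-missing"]))
```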
Smoke Test:
Validate that a minimal synchronous benchmark, using an OpenAI API token against the gpt-4o-mini model with a 10-request limit and emulated data with minimal prompt and output sizes, runs end to end and produces the expected results.
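A minimal sketch of how this pathway could be exercised with pytest by invoking main.py as a subprocess. The CLI flag names (`--target`, `--model`, `--rate-type`, `--max-requests`, `--data`, `--output`) and the emulated-data spec are illustrative assumptions, not the confirmed interface; the test skips when no OPENAI_API_KEY is set:

```python
import os
import subprocess
import sys

import pytest


@pytest.mark.smoke
@pytest.mark.skipif("OPENAI_API_KEY" not in os.environ, reason="requires an OpenAI API key")
def test_smoke_synchronous_openai(tmp_path):
    """Run a minimal synchronous benchmark against gpt-4o-mini and check it completes."""
    report = tmp_path / "report.json"
    # NOTE: flag names and the emulated-data spec below are placeholders, not the confirmed CLI.
    cmd = [
        sys.executable, "main.py",
        "--target", "https://api.openai.com/v1",
        "--model", "gpt-4o-mini",
        "--rate-type", "synchronous",
        "--max-requests", "10",
        "--data", "emulated:prompt_tokens=16,output_tokens=16",
        "--output", str(report),
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)
    assert result.returncode == 0, result.stderr
    assert report.exists() and report.stat().st_size > 0
```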
Sanity Test:
Validate that minimal benchmarks of each type {sweep, synchronous, throughput, constant, poisson} with emulated data run end to end and produce the expected results for a tiny model on both a vLLM local server and a Llama.cpp local server, limited by number of requests or elapsed time to keep the overall test length manageable.
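A sketch of the sanity pathway as a parameterized pytest case that sweeps the rate types against locally running vLLM and Llama.cpp servers. The server URLs, CLI flag names, and request caps are illustrative assumptions, and the servers are expected to already be serving a tiny model:

```python
import subprocess
import sys

import pytest

RATE_TYPES = ["sweep", "synchronous", "throughput", "constant", "poisson"]
# Assumed local endpoints for a tiny model; adjust to the actual test environment.
LOCAL_SERVERS = [
    "http://localhost:8000/v1",  # vLLM OpenAI-compatible server
    "http://localhost:8080/v1",  # Llama.cpp server
]


@pytest.mark.sanity
@pytest.mark.parametrize("rate_type", RATE_TYPES)
@pytest.mark.parametrize("target", LOCAL_SERVERS)
def test_sanity_emulated_local_server(rate_type, target, tmp_path):
    """Each rate type should run end to end against a local server within a small cap."""
    report = tmp_path / f"{rate_type}.json"
    # NOTE: flag names are placeholders; cap by request count (or an elapsed-time
    # limit) to keep the overall test length manageable.
    cmd = [
        sys.executable, "main.py",
        "--target", target,
        "--rate-type", rate_type,
        "--max-requests", "25",
        "--data", "emulated:prompt_tokens=32,output_tokens=32",
        "--output", str(report),
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=600)
    assert result.returncode == 0, result.stderr
    assert report.exists()
```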
Regression Test:
Validate that longer benchmarks of each type {sweep, synchronous, throughput, constant, poisson} run with the different data types and various settings for models, tokenizers, and other parameters.
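A sketch of the regression pathway, extending the same pattern to longer runs across data types and model/tokenizer settings. All parameter values, flag names, and the data/model identifiers are illustrative placeholders; the intent is only to show the parameterization shape and the slow-test marker:

```python
import subprocess
import sys

import pytest

# Illustrative combinations; the real matrix should cover the supported data
# types and the model/tokenizer options the CLI actually exposes.
DATA_TYPES = [
    "emulated:prompt_tokens=256,output_tokens=128",
    "file:sample_prompts.jsonl",  # placeholder file-based data spec
]
RATE_TYPES = ["sweep", "synchronous", "throughput", "constant", "poisson"]


@pytest.mark.regression
@pytest.mark.slow
@pytest.mark.parametrize("rate_type", RATE_TYPES)
@pytest.mark.parametrize("data", DATA_TYPES)
def test_regression_long_benchmarks(rate_type, data, tmp_path):
    """Longer benchmarks should complete for each data type and rate type."""
    report = tmp_path / "report.json"
    # NOTE: model/tokenizer names and flag names below are placeholders.
    cmd = [
        sys.executable, "main.py",
        "--target", "http://localhost:8000/v1",
        "--model", "Qwen/Qwen2.5-0.5B-Instruct",
        "--tokenizer", "Qwen/Qwen2.5-0.5B-Instruct",
        "--rate-type", rate_type,
        "--max-seconds", "120",
        "--data", data,
        "--output", str(report),
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=1800)
    assert result.returncode == 0, result.stderr
    assert report.exists()
```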