Open-Source LLM inference cost comparison for compute nomads
This project collects benchmark data from different GPUs across various clouds and providers and compares the resulting self-hosted cost per token against the fixed per-token prices charged by managed inference providers. With that data, you can make informed decisions about cost-effective LLM deployments and sensible GPU selections.
The comparison tool lets you evaluate multiple models side by side, giving you insight into how model pricing and GPU throughput interact. This project will help you choose the right GPU and cloud provider for the model of your choice.
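The core comparison can be sketched as follows. All figures and names below are hypothetical placeholders, not real benchmark results from this project:

```python
# Hypothetical illustration of the core comparison: cost per million
# generated tokens on a self-hosted GPU vs. a provider's fixed
# per-token price. The numbers are made-up examples, not measurements.

def self_hosted_cost_per_million(gpu_hourly_usd: float, tokens_per_second: float) -> float:
    """Cost in USD to generate 1M tokens on a rented GPU at a sustained throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1_000_000

# Example: a $2.50/hr GPU instance sustaining 1,200 tokens/s
gpu_cost = self_hosted_cost_per_million(2.50, 1_200)

# Example: a managed provider charging a flat $0.60 per 1M output tokens
provider_cost = 0.60

print(f"self-hosted: ${gpu_cost:.2f}/M tokens vs provider: ${provider_cost:.2f}/M tokens")
```

With these illustrative numbers, self-hosting lands near $0.58 per million tokens, roughly on par with the flat provider price; real benchmark throughput and rental rates are what this project sets out to collect.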
Please refer to the CONTRIBUTING.md file for information about how to get involved. We welcome issues, questions, and pull requests. If you have insights on GPU comparisons, benchmarks, or AI model evaluations, please share them with our community.
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Please refer to the CODE_OF_CONDUCT.md file for our full code of conduct.