TL;DR
Users want to evaluate their own AI models within Lumigator, but they are currently limited to pre-integrated options. This feature will let users bring any model, whether from the Hugging Face Hub or served behind an OpenAI-compatible API, and run evaluations seamlessly.
Problem Statement
Today, Lumigator users can only evaluate pre-integrated models, leaving out a large part of our user base that wants to evaluate other models for summarisation. This was a key pain point raised in our latest Discord event, where:
User X asked: “What kind of models do you support?”
Another user asked: “Do you support all Hugging Face models?”
A third user asked: “Are there any models Lumigator won’t support?”
These questions highlight the uncertainty users experience. Many assume Lumigator is limited to the models we pre-selected for summarisation when, in reality, we aim to make it more flexible.
Goal:
Reduce churn by eliminating a key limitation mentioned in Discord and user feedback.
The priority is to focus on:
Prio 1: OpenAI-compatible APIs (e.g. Bedrock, but also Ollama, vLLM, …); see the first sketch after this list.
Prio 2: other Hugging Face Hub models (e.g. generic ones, not summarisation-specific); see the second sketch below.
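As a rough illustration of Prio 1 (a minimal sketch, not Lumigator's actual configuration; the endpoint URL and model name below are assumptions): any server that speaks the OpenAI API, such as a local Ollama or vLLM instance, can be reached simply by overriding the client's base_url.

```python
# Minimal sketch: reaching a local Ollama server through its OpenAI-compatible
# endpoint on the default port. The model name is whatever has been pulled
# locally; none of this reflects Lumigator's current configuration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="unused",  # Ollama ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="llama3",  # illustrative: any locally served model
    messages=[{"role": "user", "content": "Summarise this article: ..."}],
)
print(response.choices[0].message.content)
```

Prio 2 would be the analogous path for arbitrary Hub models (again a sketch; the model id is illustrative):

```python
# Sketch for Prio 2: loading a generic, not summarisation-specific, model
# from the Hugging Face Hub and prompting it to summarise.
from transformers import pipeline

generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")
print(generator("Summarise this article: ...")[0]["generated_text"])
```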
Alternatives
No response
Contribution
No response
Have you searched for similar issues before submitting this one?
Yes, I have searched for similar issues
Expanding Lumigator’s API Support for Enterprise Users
Our goal with Lumigator is to enhance transparency in model evaluation, enabling users to make informed decisions regardless of their current AI ecosystem. While we prioritize open APIs such as the OpenAI API and Hugging Face for evaluating summarisation (and later translation), many enterprise users are deeply integrated into closed-source ecosystems. By exposing these APIs, we could ensure that:
Enterprise users can compare models they already have access to, making evaluation seamless.
We collect valuable insights on adoption rates of different AI platforms.
We create opportunities to suggest alternative models that may align better with their needs in terms of performance, cost, or ethical considerations.
Here’s a list of some key enterprise APIs that we could consider supporting in Lumigator (TBC by @agpituk and @ividal):
Microsoft Azure AI (Azure OpenAI Service, Azure Cognitive Services, Custom Vision, Speech API)
Google Cloud AI (Gemini API, Vertex AI for custom models)
IBM Watson AI (Natural Language Understanding, WatsonX)
Oracle AI Services (Language AI, Speech AI, Vision AI)
Key Model-Specific API Endpoints:
Anthropic API (Claude series via Anthropic or AWS Bedrock)
Cohere API (LLMs optimized for business applications)
AI21 Studio API (Jurassic series, offered through Bedrock)
Meta Llama API (via Azure or AWS Bedrock)
Mistral AI API (offered through multiple cloud providers)
This approach ensures that users in enterprise settings can evaluate models they are already using, while also gaining visibility into open alternatives that may be more cost-effective, transparent, or better suited to their needs.
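One hypothetical way to keep such a list manageable (every name, URL, and field below is an illustrative assumption, not an agreed design) is a small provider registry that records whether each API can be reached through a stock OpenAI-compatible client, so Prio 1 providers route through the generic client shown earlier and the rest get a thin dedicated adapter:

```python
# Hypothetical sketch of a provider registry; names, URLs, and fields are
# illustrative assumptions, not part of Lumigator today.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProviderSpec:
    display_name: str
    base_url: str            # endpoint template; placeholders filled per user
    openai_compatible: bool  # reachable via a stock OpenAI client?

PROVIDERS = {
    "azure-openai": ProviderSpec("Microsoft Azure AI", "https://<resource>.openai.azure.com", True),
    "bedrock": ProviderSpec("AWS Bedrock", "https://bedrock-runtime.<region>.amazonaws.com", False),
    "mistral": ProviderSpec("Mistral AI", "https://api.mistral.ai/v1", True),
}

def needs_custom_adapter(key: str) -> bool:
    """Providers without an OpenAI-compatible surface need a dedicated adapter."""
    return not PROVIDERS[key].openai_compatible
```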