Generate Your First Golden - Using Custom model #1216
Comments
Hey @pratikchhapolika, if you don't supply an OpenAI API key, DeepEval uses the OpenAI model as the critic model for filtering unqualified goldens. You can easily avoid this by defining your own custom FiltrationConfig with the custom model you've defined for generation.
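The fallback behavior described above can be sketched with a simplified stand-in (these classes are illustrative only, not deepeval's real implementation; `StubModel`, `FiltrationConfigSketch`, and `resolve_critic` are hypothetical names): if no critic model is injected into the filtration config, a default OpenAI-backed model is resolved, which is what demands the `OPENAI_API_KEY` and triggers the error in this thread.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class StubModel:
    """Stand-in for a custom DeepEvalBaseLLM subclass (e.g. an Azure OpenAI wrapper)."""
    name: str


@dataclass
class FiltrationConfigSketch:
    """Illustrative stand-in for a FiltrationConfig-style object:
    when critic_model is omitted, resolution falls back to a default
    model that requires an OpenAI API key."""
    critic_model: Optional[StubModel] = None

    def resolve_critic(self) -> StubModel:
        # The fallback branch is what raises the missing-key error in practice.
        return self.critic_model or StubModel(name="default (requires OPENAI_API_KEY)")


# Supplying the custom model avoids the fallback entirely.
custom = StubModel(name="azure-custom")
print(FiltrationConfigSketch(critic_model=custom).resolve_critic().name)  # azure-custom
print(FiltrationConfigSketch().resolve_critic().name)  # falls back to the default
```

The design point is simply that the critic model is a separately injected dependency: customizing the generation model alone does not customize the critic.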
I am seeing the same error, @kritinv.
I also find this doc misleading: https://docs.confident-ai.com/docs/guides-using-custom-embedding-models#:~:text=from%20deepeval.synthesizer%20import%20Synthesizer%0A...%0A%0Asynthesizer%20%3D%20Synthesizer(embedder%3DCustomEmbeddingModel())
Should we pass the chat model to both Synthesizer and filtration_config, or the embedding model? And which model does it use to convert PDFs to text?
I used Azure OpenAI LLM and embedding custom models for synthetic dataset generation. This worked without errors in version 1.3.2, but the latest version, 2.1.9, throws the following error.
My point is: why does it fall back to OpenAI keys at all when a custom model is supplied? For obvious reasons, I have omitted my Azure OpenAI credentials in the code below.
I am trying a custom LLM backed by a model from Amazon Bedrock, and I get the same error with the latest version. So I downgraded DeepEval to 1.3.2, where it works without error. But there is no FiltrationConfig or ContextConstructionConfig in that version; you simply pass the model, critic_model, and embedder while initializing the synthesizer, like:

synthesizer = Synthesizer(model=awsbedrock, critic_model=awsbedrock, embedder=awsbedrockembed)

Does anyone know if this issue is on DeepEval's list to be fixed?
Hey @pratikchhapolika @3051360 @RajeswariKumaran, I totally missed this thread. We're active every minute in our Discord, but issues on GitHub can get very fragmented, so in the future, if you want things fixed immediately, go to Discord.

@pratikchhapolika, if you're still around, can you try again on the latest version?

@3051360, it is because you didn't supply a

@RajeswariKumaran, how did you define your synthesizer before you downgraded? I can't comment without seeing your code, but there are a few places where LLMs are used, so not specifying custom models for every part of your pipeline might be the error.
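The "few places where LLMs are used" point can be illustrated with a small self-contained sketch (the stage names below are illustrative, not deepeval's real internals): auditing which pipeline slots were left unset shows exactly where a default OpenAI-backed model would be substituted.

```python
from typing import Optional


def find_default_fallbacks(pipeline_models: dict[str, Optional[str]]) -> list[str]:
    """Return the pipeline stages that would silently fall back to the
    default OpenAI-backed model because no custom model was supplied.
    The keys are an illustrative mapping of pipeline slots, not deepeval's
    actual internal names."""
    return [stage for stage, model in pipeline_models.items() if model is None]


# Only the generation model was customized; the critic and embedder were forgotten,
# so those two stages would still try to reach OpenAI and demand an API key.
stages = {
    "generation_model": "awsbedrock",
    "critic_model": None,
    "embedder": None,
}
print(find_default_fallbacks(stages))  # ['critic_model', 'embedder']
```

The practical takeaway matches the advice above: a custom model must be wired into every stage of the pipeline, not just generation.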
@penguine-ip, thanks for getting back on this. I tried again with the current version of DeepEval, v2.2.5, made the necessary changes, and it worked fine! Earlier I got the error "AssertionError: n_contexts_per_doc must be a positive integer." I am not sure what I changed in the interim, but I no longer see this issue with the Amazon Bedrock custom LLM. After defining classes using DeepEvalBaseLLM (awsbedrock) and DeepEvalBaseEmbeddingModel (awsbedrockembeddings) for Amazon Bedrock, I used the following:

filtration_config = FiltrationConfig(...)
context_construction_config = ContextConstructionConfig(
    max_contexts_per_document=5,
    critic_model=awsbedrock,
    embedder=awsbedrockembed,
)
synthesizer.generate_goldens_from_docs(...)
I am following this link: https://docs.confident-ai.com/docs/synthesizer-introduction#:~:text=begin%20generating%20goldens.-,from%20deepeval.synthesizer%20import%20Synthesizer,-...
Browser: Chrome
Python: 3.12
deepeval version: '2.0.3'
Environment: Jupyter Notebook on a MacBook Pro 16
Custom model: AzureOpenAI
Note: this custom model works fine when evaluating on metrics.
Generate Your First Golden
ERROR TRACE