Is the library compatible with LLMs aside from OpenAI? #448
Replies: 2 comments
-
Hi, sorry for the late reply. Please see this guide on how to use custom LLMs: https://docs.guardrailsai.com/llm_api_wrappers/#using-a-custom-llm-api
Let me know if you have any further questions.
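In short, the guide boils down to passing guardrails any callable that maps a prompt string to a completion string. A minimal sketch, assuming the early `Guard.from_rail` API; the function name, rail file path, and `prompt_params` key below are illustrative, and the exact call signature can vary between guardrails versions:

```python
import guardrails as gd

def my_llm_api(prompt: str, **kwargs) -> str:
    """Illustrative custom LLM wrapper: call any model here and
    return its text output for the given prompt."""
    # Replace this stub with a real model call.
    return '{"summary": "stub output"}'

# Build a Guard from your rail spec ("my_spec.rail" is a placeholder path).
guard = gd.Guard.from_rail("my_spec.rail")

# guardrails renders the prompt from the rail spec and prompt_params,
# calls the wrapper, and validates the result against the output schema.
raw_llm_output, validated_output = guard(
    my_llm_api,
    prompt_params={"document": "some text to validate"},
)
```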
-
If you run into problems with Dolly, feel free to send over an example of your data and rail spec :)
-
I have been trying to run validation over the Dolly-v2-3b model from the Hugging Face Hub via the LangChain API. Since this library integrates with LangChain, I assumed it would work; however, I always get `None` outputs. I am also curious what the `engine` parameter of the `guard` function represents if the LLM is not from OpenAI.

My LLM code:

```python
from langchain.llms import HuggingFaceHub
from langchain.chains import LLMChain

repo_id = "databricks/dolly-v2-3b"
llm = HuggingFaceHub(repo_id=repo_id)

# `guard` is a guardrails Guard object built from a rail spec (not shown).
raw_llm_output, validated_output = guard(
    llm_api=llm,
    # engine="text-davinci-003"
)
```
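Following the custom LLM guide linked above, I am also wondering whether the intended approach is to wrap the LangChain object in a prompt-to-string callable rather than passing it directly. A sketch of what I mean (the `dolly_llm_api` wrapper and the `prompt_params` key are my own guesses, not confirmed API usage):

```python
# Reusing `llm` and `guard` from the snippet above.

def dolly_llm_api(prompt: str, **kwargs) -> str:
    # LangChain LLMs are callable on a prompt string and return the
    # generated text as a plain string.
    return llm(prompt)

raw_llm_output, validated_output = guard(
    dolly_llm_api,
    prompt_params={"document": "some text"},  # key depends on the rail spec
)
```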