-
Hello Team, can you please reply if possible? Many thanks!
-
Hello Team,
-
Hi, sorry for the late reply. You can use output validation on its own for this. Input validation is on the way, coming this week! I'm closing this discussion for now, but feel free to reopen it if you have any more questions.
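
For reference, a minimal sketch of validating a pre-generated LLM output with the library's parse flow. This assumes the Guard.from_rail_string and guard.parse APIs from recent guardrails-ai releases (the return shape varies across versions), so treat it as a sketch under those assumptions rather than the definitive interface:

import guardrails as gd

# A minimal RAIL spec describing the expected output structure
rail_str = """
<rail version="0.1">
<output>
    <string name="response"/>
</output>
</rail>
"""

guard = gd.Guard.from_rail_string(rail_str)

# Output produced by an LLM call made by your own code (sample value)
llm_result = '{"response": "EPS for 2022 was $9.65, a 20% increase."}'

# guard.parse validates an existing output string; Guardrails makes no LLM call here
validated = guard.parse(llm_output=llm_result)
print(validated)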
-
Dear Guardrails Team,
I hope this message finds you well. My name is Gaurav, and I am currently working on an AI/ML project in which we are exploring the integration of Large Language Models (LLMs).
We are interested in using Guardrails to validate the output of our LLMs, and I have reviewed your documentation on validators and the output schema. However, I have some specific questions about how to implement this effectively in our project, particularly around LLM output validation.
Could you please provide some insight on the following, if possible?
We want to keep control of LLM execution in our own code and use Guardrails only to validate the resulting output. For example (pseudocode illustrating the interface we have in mind):
import guardrails

# Define the input schema
input_schema = {
    "text": str,
    "parameters": {
        "name": str,
        "age": int,
    },
}

# Define the output schema
output_schema = {
    "response": str,
}

# Input we sent to the LLM ourselves (hypothetical payload matching input_schema)
input_data = {
    "text": "What was Microsoft's EPS for 2022?",
    "parameters": {"name": "Gaurav", "age": 30},
}

# LLM output, produced by a call our own code made
llm_result = "Microsoft's earnings per share for 2022 was $9.65, a 20% increase from the previous year."

# Create a Guardrails validator (pseudocode: the interface we have in mind)
validator = guardrails.Validator(input_schema, output_schema)
try:
    validator.validate(input_data, llm_result)
    print("Validation successful")
except guardrails.ValidationError as e:
    print("Validation failed:", e)
Thank you for considering my request. I am looking forward to your response.
Best regards,
Gaurav Tomar
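
For completeness, a sketch of how the flow above might map onto the actual guardrails-ai API, assuming Guard.from_pydantic and guard.parse are available (names taken from the library's docs; the return shape differs across versions, so this is a sketch under those assumptions, not a confirmed interface). Guardrails validates structured output, so the LLM result is passed as JSON matching the schema; input validation had not shipped at the time of this thread, so input_schema has no direct equivalent here:

from pydantic import BaseModel
import guardrails as gd

# Pydantic model standing in for the output_schema above
class Response(BaseModel):
    response: str

# Build a guard from the model; no prompt is needed for parse-only validation
guard = gd.Guard.from_pydantic(output_class=Response)

# Output from an LLM call we made ourselves, serialized as JSON to match the schema
llm_result = '{"response": "Microsoft EPS for 2022 was $9.65, a 20% increase."}'

# Validate the pre-computed output; Guardrails makes no LLM call here
validated = guard.parse(llm_output=llm_result)
print(validated)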