Why does Guard.parse use an LLM API in LangChain integration? #473
Answered by irgolic on Dec 4, 2023
Hiya, it accepts `llm_api` because Guardrails validators have the ability to "reask" the LLM if a validator fails. The supplied `llm_api` will be used for this. For an example of reasking, see here: https://docs.guardrailsai.com/defining_guards/pydantic/#structured-output-with-validation
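As a rough sketch of what that looks like in practice (the `Person` model, prompt, and the exact `Guard.from_pydantic` / `num_reasks` arguments are illustrative assumptions and may differ across Guardrails versions):

```python
import openai
from pydantic import BaseModel, Field
import guardrails as gd


# Hypothetical output schema; any validators attached to these fields
# are what can trigger a reask when they fail.
class Person(BaseModel):
    name: str = Field(description="The person's name")
    age: int = Field(description="The person's age")


guard = gd.Guard.from_pydantic(output_class=Person)

# raw_llm_output is a string you already got back from LangChain / your LLM call.
raw_llm_output = '{"name": "Ada", "age": "unknown"}'

# llm_api is only invoked if validation fails and Guardrails needs to reask;
# for output that passes validation, no extra LLM call is made.
result = guard.parse(
    raw_llm_output,
    llm_api=openai.chat.completions.create,  # used for reasks (assumed OpenAI v1-style client)
    num_reasks=1,  # assumed parameter limiting reask attempts
)
```

If you never want a reask, you can omit `llm_api`; parsing and validation will still run, but a failed validation cannot be retried against the model.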