LLMRails — LLM Rails entry point. Rails based on a given configuration.

Class attributes:
- explain_info_var
- streaming_handler_var
__init__(
config: nemoguardrails.rails.llm.config.RailsConfig,
llm: Optional[langchain.llms.base.BaseLLM] = None,
verbose: bool = False
)
Initializes the LLMRails instance.
Args:
config
: A rails configuration.
llm
: An optional LLM engine to use.
verbose
: Whether the logging should be verbose or not.
explain() → ExplainInfo
Helper function to return the latest ExplainInfo object.
generate(prompt: Optional[str] = None, messages: Optional[List[dict]] = None)
Synchronous version of generate_async.
generate_async(
prompt: Optional[str] = None,
messages: Optional[List[dict]] = None,
streaming_handler: Optional[nemoguardrails.streaming.StreamingHandler] = None
) → Union[str, dict]
Generate a completion or the next message.
The format for messages is the following:
[
{"role": "context", "content": {"user_name": "John"}},
{"role": "user", "content": "Hello! How are you?"},
{"role": "assistant", "content": "I am fine, thank you!"},
{"role": "event", "event": {"type": "UserSilent"}},
...
]
Args:
prompt
: The prompt to be used for completion.
messages
: The history of messages to be used to generate the next message.
streaming_handler
: If specified, and the config supports streaming, the provided handler will be used for streaming.
Returns: The completion (when a prompt is provided) or the next message.
System messages are not yet supported.
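The messages format above can be sketched as plain Python data; `rails` is assumed to be an initialized LLMRails instance, so the actual call is shown commented.

```python
# Illustrative chat history using the documented message format.
messages = [
    {"role": "context", "content": {"user_name": "John"}},
    {"role": "user", "content": "Hello! How are you?"},
]

# With an initialized instance, the next message is generated with:
# new_message = rails.generate(messages=messages)
# new_message is a dict like {"role": "assistant", "content": "..."}
```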
generate_events(events: List[dict]) → List[dict]
Synchronous version of LLMRails.generate_events_async
.
generate_events_async(events: List[dict]) → List[dict]
Generate the next events based on the provided history.
The format for events is the following:
[
{"type": "...", ...},
...
]
Args:
events
: The history of events to be used to generate the next events.
Returns: The newly generated event(s).
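A sketch of the events format; the event type shown is an assumption based on common NeMo Guardrails event names, and `rails` is assumed to be an initialized instance.

```python
# Illustrative events history using the documented format.
events = [
    # Assumed event type for a finished user utterance.
    {"type": "UtteranceUserActionFinished", "final_transcript": "Hello!"},
]

# With an initialized instance:
# new_events = rails.generate_events(events)
```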
register_action(
action: callable,
name: Optional[str] = None
)
Register a custom action for the rails configuration.
register_action_param(name: str, value: Any)
Registers a custom action parameter.
register_embedding_search_provider(
name: str,
cls: Type[nemoguardrails.embeddings.index.EmbeddingsIndex]
) → None
Register a new embedding search provider.
Args:
name
: The name of the embedding search provider that will be used.
cls
: The class that will be used to generate and search embeddings.
register_filter(
filter_fn: callable,
name: Optional[str] = None
)
Register a custom filter for the rails configuration.
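A sketch of a custom filter; the filter name and behavior are illustrative, and the registration call is shown commented.

```python
# Hypothetical filter: collapse runs of whitespace into single spaces.
def remove_extra_whitespace(text: str) -> str:
    return " ".join(text.split())

# With an initialized instance:
# rails.register_filter(remove_extra_whitespace, name="remove_extra_whitespace")
```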
register_output_parser(output_parser: callable, name: str)
Register a custom output parser for the rails configuration.
register_prompt_context(name: str, value_or_fn: Any)
Register a value to be included in the prompt context.
Args:
name
: The name of the variable or function that will be used.
value_or_fn
: The value or function that will be used to generate the value.
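A sketch of registering a prompt-context value; the variable name is illustrative, and a plain value could be registered instead of a function.

```python
from datetime import date

# Hypothetical prompt-context function: expose the current date.
def current_date() -> str:
    return date.today().isoformat()

# With an initialized instance:
# rails.register_prompt_context("current_date", current_date)
```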
stream_async(
prompt: Optional[str] = None,
messages: Optional[List[dict]] = None
) → AsyncIterator[str]
Simplified interface for getting the streamed tokens directly from the LLM.
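Consuming the returned AsyncIterator[str] can be sketched as below; `fake_stream` is a stand-in for the iterator, since the real call requires an initialized instance (shown in the comment).

```python
import asyncio

# Stand-in for the AsyncIterator[str] returned by stream_async.
async def fake_stream():
    for token in ["Hello", ", ", "world!"]:
        yield token

async def consume(stream) -> str:
    # With a real instance: stream = rails.stream_async(messages=messages)
    chunks = []
    async for token in stream:
        chunks.append(token)
    return "".join(chunks)

result = asyncio.run(consume(fake_stream()))
```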