bug: NeMo-Guardrails responses apparently breaking/terminating on line breaks with different models #936
Labels
bug, colang 2.0, good first issue, status: help wanted
Did you check docs and existing issues?
Python version (python --version)
Python 3.11.9
Operating system/version
Windows 11 23H2
NeMo-Guardrails version (if you must use a specific version and not the latest)
latest pip install; latest develop branch
Describe the bug
I'm running into issues with LLM responses that are formatted in a specific way. In this case, I was able to narrow it down to line breaks being present in the LLM response.
The issue was first found on Azure GPT-4o instances, but it also reproduces with llama3.1 models hosted locally via ollama. I cannot say whether other llama3.x models are affected as well. In both cases, at the time of discovery, the installation was outdated (commit 3265b39, December 12, 2024).
Both installations have since been updated to the latest beta branch, with the same result on Azure and ollama. On their own (without NeMo-Guardrails), both endpoints respond as expected to the same question. I therefore suspect that the issue lies within NeMo-Guardrails.
I've attached logs of example outputs from the December 12 llama3 setup; I still see the same behavior in a similar form with the latest development branch on Azure instances.
Output_one_paragraph.txt (this seems fine)
Output_with_paragraph.txt (this breaks)
Edit: Tested with develop with Azure and ollama - same result.
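To make the suspected failure mode concrete, here is a purely illustrative sketch (not actual NeMo-Guardrails code): if something in the response handling treats the first blank line as the end of the message, everything after a paragraph break is silently dropped, which would match the one-paragraph output working and the multi-paragraph output breaking:

```python
# Hypothetical sketch of the suspected failure mode (NOT actual
# NeMo-Guardrails code): a parser that stops at the first blank line
# silently drops everything after a paragraph break.
def take_first_paragraph(response: str) -> str:
    # Split on the first blank line and keep only the leading part.
    return response.split("\n\n", 1)[0]

one_paragraph = "The capital of France is Paris."
with_paragraph = "The capital of France is Paris.\n\nIt is also its largest city."

print(take_first_paragraph(one_paragraph))   # intact, like Output_one_paragraph.txt
print(take_first_paragraph(with_paragraph))  # second paragraph lost, like Output_with_paragraph.txt
```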
Steps To Reproduce
steps to reproduce.txt
config_test_6.zip
I've cloned the NeMo-Guardrails repo (which is present in the NeMo-Guardrails folder) and added the folder local_files at its root level, with the other paths as in the outputs/logs. For the Azure instances, I use the following config instead:
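A minimal sketch of what such an Azure config.yml typically looks like in NeMo-Guardrails (the endpoint, API version, and deployment name below are placeholders, not my actual values):

```yaml
models:
  - type: main
    engine: azure
    model: gpt-4o
    parameters:
      # Placeholders -- replace with your own Azure OpenAI values.
      azure_endpoint: https://<your-resource>.openai.azure.com/
      api_version: "2024-02-15-preview"
      deployment_name: <your-deployment>
```

The config folder is then loaded as usual, e.g. with RailsConfig.from_path(...).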
Expected Behavior
For the line-break versions, I'd expect an output similar to this in white text on a green background, as seen in the output_with_paragraph file (there it was printed, if I understand correctly, as a preliminary answer in black text on a green background):
For the Azure version, there is no trailing "Now, let's continue the conversation!" or leading "I think there may be some confusion! The AI's previous response should have been:", but the final result, "I'm sorry, an internal error has occurred.", is the same.
Actual Behavior
See attached outputs as above. The error text is "I'm sorry, an internal error has occurred."
Output_one_paragraph.txt
Output_with_paragraph.txt