Add Retry Logic to http requests #151
Conversation
return StreamingResponse(convo_turn())
except Exception as e:
Might just be that I don't know how these work, but is there a chance that one of these errors could occur after the generator is finished and the messages and metamessages have been created?
I'm also unclear on the relationship between the try/except block and the StreamingResponse. If there is an error in the middle of the stream, what happens?
Very good questions! My latest commit uses a background task for the honcho calls so that any potential errors are handled/logged separately while (ideally) not interfering with the StreamingResponse. In the case of, say, a network interruption or a rate-limit error, it would hit this exception and return an error to the client, which would (read: should) handle the interruption gracefully.
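For context, here is a minimal sketch of how the try/except relates to the StreamingResponse (hypothetical endpoint and generator names, not the actual PR code): the except branch only covers work done before the response object is returned, while an exception raised inside the generator happens after the 200 status and headers have already gone out, so the client just sees a truncated stream rather than an HTTP error.

```python
from fastapi import FastAPI, HTTPException
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.post("/chat")
async def chat():
    async def convo_turn():
        yield "first chunk"                       # by now the 200 status and headers are sent
        raise RuntimeError("mid-stream failure")  # client sees a truncated stream, not a 500

    try:
        # The response object is created and returned immediately; the
        # generator is only consumed afterwards, while streaming to the client.
        return StreamingResponse(convo_turn())
    except Exception as e:
        # Only reached if building/returning the response itself fails,
        # not if convo_turn() raises while streaming.
        raise HTTPException(status_code=500, detail=str(e)) from e
```

That is also why an error while persisting to honcho inside the generator (or in a background task) cannot change the status code the client has already received.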
My worry with this approach is that if there is an error while saving a message to honcho, it is not propagated to the front-end / user. It will look like their message was sent and the conversation is fine, but if they reload, the messages will be gone with no indication that an error occurred.
It might make sense to use separate try/except blocks, or separate exception types, for LLM errors versus honcho errors and report them to the user differently, without using background tasks; see the sketch below.
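Something roughly like this is what I have in mind (generate_llm_response and save_to_honcho are stand-in helper names, not functions from this repo, and the streaming path is simplified to a plain return to keep the sketch short): the LLM call and the honcho persistence each get their own try/except, so each failure mode is logged and surfaced to the user distinctly instead of disappearing into a background task.

```python
import logging

from fastapi import HTTPException

logger = logging.getLogger(__name__)

async def chat_turn(inp):
    # LLM failure: report as an upstream/provider problem.
    try:
        thought, response = await generate_llm_response(inp.message)
    except Exception as e:
        logger.exception("LLM call failed")
        raise HTTPException(status_code=502, detail="LLM provider error") from e

    # Persistence failure: tell the user their message was NOT saved,
    # rather than letting it silently vanish on reload.
    try:
        await save_to_honcho(inp.conversation_id, inp.message, thought, response)
    except Exception as e:
        logger.exception("Failed to save messages to honcho")
        raise HTTPException(status_code=500, detail="Failed to save conversation") from e

    return response
```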
I like the structure; just a few minor questions.
www/app/page.tsx
Outdated
const MessageBox = dynamic(() => import("@/components/messagebox"), {
  ssr: false,
});
const Sidebar = dynamic(() => import("@/components/sidebar"), {
  ssr: false,
});
Nitpick, but wouldn't these components both be used immediately? I'm not sure what performance gain comes from this.
Hmm, on further inspection, dynamically importing MessageBox likely doesn't improve anything beyond suppressing a browser console error I believe I recall seeing. However, the Sidebar is docked on mobile by default, so there could potentially be some load-speed improvement from this approach.
api/routers/chat.py
Outdated
background_tasks.add_task(
    create_messages_and_metamessages,
    app.id, user.id, inp.conversation_id, inp.message, thought, response
See my comment above about concerns with this approach.
OK, something like this perhaps? Awaiting it inside the try/except gives us logging and properly makes use of the client's retry logic for HTTP errors.
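Concretely, something along these lines: a hedged sketch reusing the names visible in the diff below (create_messages_and_metamessages, app, user, inp, thought, response), assuming that helper is async and that retries for transient HTTP errors live inside the honcho client. The calls are awaited inside the try/except instead of queued as a background task, so anything that still fails after retries is logged and returned to the caller rather than swallowed.

```python
import logging

from fastapi import HTTPException

logger = logging.getLogger(__name__)

async def persist_turn(app, user, inp, thought, response):
    try:
        # Awaited directly (not via BackgroundTasks), so the honcho client's
        # retry logic runs for transient HTTP errors and any remaining failure
        # propagates to the caller instead of being silently dropped.
        await create_messages_and_metamessages(
            app.id, user.id, inp.conversation_id, inp.message, thought, response
        )
    except Exception as e:
        logger.exception("Failed to save messages and metamessages to honcho")
        raise HTTPException(status_code=500, detail="Failed to save conversation") from e
```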
api/routers/chat.py
Outdated
@@ -1,82 +1,118 @@
from fastapi import APIRouter
from fastapi.responses import StreamingResponse
from fastapi import APIRouter, HTTPException, BackgroundTasks
Only nitpick is that you can probably get rid of the BackgroundTasks import here, but this looks good now.
Suspect Issues
This pull request was deployed and Sentry observed the following issues:
No description provided.