
Add Retry Logic to http requests #151

Merged: 11 commits merged into main on Oct 18, 2024
Conversation

bLopata (Contributor) commented Oct 11, 2024

No description provided.

vercel bot commented Oct 11, 2024

Vercel deployment status: tutor-gpt ❌ Failed, updated Oct 18, 2024 4:14pm (UTC)

api/routers/chat.py — 3 fixed alerts
Commit: …tion
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

Comment on code:

            )
        return StreamingResponse(convo_turn())
    except Exception as e:
Collaborator:
Might just be that I don't know how these work, but is there a chance that one of these errors could occur after the generator is finished and the messages and metamessages are created?

I'm also unclear on the relationship between the try/except block and the StreamingResponse. If there is an error in the middle of the stream, what happens?

bLopata (Contributor, Author):

Very good questions! My latest commit employs a background process for the Honcho calls to separately handle/log any potential errors while (ideally) not interfering with the StreamingResponse. I think in the case of, say, a network interruption or rate-limit error, it would hit this exception and return an error to the client, which would (read: should) handle the interruption gracefully.
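As a minimal sketch of the mid-stream failure mode being discussed (all names hypothetical, not the PR's actual code): once a streaming response has started, the 200 status line has already been sent, so an exception inside the generator can only truncate the body — the outer try/except never gets a chance to turn it into an HTTP error.

```python
def convo_turn(chunks, fail_after=None):
    """Yield chunks; optionally raise partway through, like a rate-limit error."""
    for i, chunk in enumerate(chunks):
        if fail_after is not None and i == fail_after:
            raise RuntimeError("rate limit hit mid-stream")
        yield chunk

received = []
try:
    for chunk in convo_turn(["Hel", "lo, ", "world"], fail_after=2):
        received.append(chunk)
except RuntimeError:
    # The client only sees the connection drop; chunks yielded before
    # the error were already delivered with a successful status.
    pass

# received == ["Hel", "lo, "] — partial output arrived before the failure
```

This is why the client has to be prepared to handle a truncated stream gracefully, independent of server-side error handling.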

Collaborator:

My worry with this approach is that if there is an error while saving a message to Honcho, it is not propagated to the front-end / user. It will look like their message sent and the conversation is fine, but if they reload, the messages will be gone without any indication that an error occurred.

It might make sense to use separate try/except blocks, or separate exception types, for the different kinds of errors the LLM and Honcho calls can raise, and report them to the user differently, without using background tasks.
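The separation the reviewer suggests could be sketched like this (a hedged illustration with invented names, not the PR's implementation): distinct exception types for model failures versus persistence failures, mapped to different client-facing messages.

```python
class LLMError(Exception):
    """The model call failed; nothing was generated."""

class PersistenceError(Exception):
    """The turn was generated but could not be saved to storage."""

def classify(exc: Exception) -> tuple[int, str]:
    """Map an internal error to an (http_status, client_message) pair."""
    if isinstance(exc, LLMError):
        return 503, "The model is unavailable; please retry."
    if isinstance(exc, PersistenceError):
        # Crucially, the user is told the save failed instead of it
        # silently disappearing in a background task.
        return 500, "Your reply was generated but may not be saved."
    return 500, "Unexpected server error."
```

The route handler would catch each type separately and return the corresponding status, so the front-end can distinguish "retry the whole message" from "the reply arrived but may vanish on reload".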

VVoruganti (Collaborator) left a comment:

Like the structure, just a few minor questions

www/app/page.tsx (Outdated) — comment on lines 25 to 30:

const MessageBox = dynamic(() => import("@/components/messagebox"), {
  ssr: false,
});
const Sidebar = dynamic(() => import("@/components/sidebar"), {
  ssr: false,
});
Collaborator:

Nitpick, but wouldn't these components both be used immediately? Not sure what performance gain comes from this.

bLopata (Contributor, Author):

Hmm, on further inspection, dynamically importing MessageBox likely doesn't improve anything beyond silencing a browser console error I believe I recall seeing. However, the Sidebar is docked on mobile by default, so there could be some load-speed improvement from this approach.

www/utils/retryUtils.ts (resolved)
Comment on lines 53 to 55:

background_tasks.add_task(
    create_messages_and_metamessages,
    app.id, user.id, inp.conversation_id, inp.message, thought, response
Collaborator:

See comment about concerns with this approach

bLopata (Contributor, Author):

Ok, something like this perhaps? Handling it asynchronously inside the try/except should give us logging and should properly trigger the client's retry logic via HTTP errors.
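The client-side retry logic this refers to lives in www/utils/retryUtils.ts; as a rough illustration of the pattern only (not the PR's actual implementation, whose parameters and error handling may differ), an exponential-backoff retry wrapper looks like this:

```python
import time

def retry(fn, retries=3, base_delay=0.01, retryable=(ConnectionError,)):
    """Call fn, retrying on retryable errors with exponential backoff."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except retryable:
            if attempt == retries:
                raise  # out of retries: propagate so the caller sees the failure
            time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ... the base delay

# Simulated flaky endpoint: fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry(flaky)  # succeeds on the third attempt
```

The key point for this thread: the retry only helps if the server actually surfaces failures as HTTP errors, which is why awaiting the Honcho calls inside the try/except (rather than deferring them to a background task) lets this mechanism do its job.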

VVoruganti previously approved these changes Oct 18, 2024
@@ -1,82 +1,118 @@
-from fastapi import APIRouter
 from fastapi.responses import StreamingResponse
+from fastapi import APIRouter, HTTPException, BackgroundTasks
Collaborator:

Only nitpick is that you can probably get rid of the BackgroundTasks import here, but this looks good now.

@bLopata bLopata changed the title Ben/dev 69 Add Retry Logic to http requests Oct 18, 2024
@bLopata bLopata merged commit e8e787f into main Oct 18, 2024
4 of 6 checks passed
@bLopata bLopata deleted the ben/dev-69 branch October 18, 2024 16:17

sentry-io bot commented Oct 22, 2024

Suspect Issues

This pull request was deployed and Sentry observed the following issues:

  • ‼️ TypeError: Failed to fetch /auth

