Scaling for wider public (runtime and gpt calls) #21

Open · webbertakken opened this issue Jul 8, 2023 · 1 comment
Assignees: webbertakken
Labels: enhancement (New feature or request)
Milestone: v1.0.0

Comments

@webbertakken (Owner)

Context

The runtime needs to be very scalable, so that it can handle a large volume of requests. Potentially thousands of installations will apply to even more repositories, and roughly half of those might see regular commits (each triggering the pull_request.synchronize webhook).

We need to be sure that both the runtime and the LLM backend can handle that load.
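
As a rough illustration of one way to absorb that load (an assumption, not a decision made in this issue): acknowledge the webhook immediately and push the actual review work onto a queue, so a burst of pull_request.synchronize deliveries never ties up the runtime. The Queue and ReviewJob types below are hypothetical stand-ins for whatever primitives the runtime actually provides.

```ts
// Hypothetical sketch: Queue and ReviewJob are placeholders, not types from
// this repository. The payload fields used are standard GitHub webhook fields.
interface ReviewJob {
  installationId: number
  repository: string
  pullNumber: number
  headSha: string
}

interface Queue<T> {
  send(message: T): Promise<void>
}

export async function handleWebhook(request: Request, queue: Queue<ReviewJob>): Promise<Response> {
  // Only react to pull_request events; everything else is acknowledged and dropped.
  if (request.headers.get('x-github-event') !== 'pull_request') {
    return new Response('ignored', { status: 202 })
  }

  const payload: any = await request.json()
  if (payload.action !== 'opened' && payload.action !== 'synchronize') {
    return new Response('ignored', { status: 202 })
  }

  // Defer the expensive LLM call; the webhook response stays well within
  // GitHub's 10-second delivery timeout regardless of queue depth.
  await queue.send({
    installationId: payload.installation.id,
    repository: payload.repository.full_name,
    pullNumber: payload.pull_request.number,
    headSha: payload.pull_request.head.sha,
  })

  return new Response('queued', { status: 202 })
}
```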

Suggested solution

Whichever approach works, to be fair.

Considered alternatives

  • Keep prototyping on a very small scale (which won't help create traction)
webbertakken added the enhancement label, added this to the v1.0.0 milestone, and self-assigned this issue on Jul 8, 2023
@webbertakken (Owner, Author)

The runtime is built on workers. The backend is GPT, which should be relatively scalable.

We just need to figure out whether donations will cover the costs, or otherwise find more sponsors.
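
For the GPT side, a minimal sketch of capping concurrent calls, assuming the Node openai and p-limit packages; the concurrency of 5, the model name, and the prompt wording are illustrative placeholders, not settings from this project:

```ts
import OpenAI from 'openai'
import pLimit from 'p-limit'

// Illustrative values only: the concurrency cap and model are not decisions from this issue.
const openai = new OpenAI() // reads OPENAI_API_KEY from the environment by default
const limit = pLimit(5) // at most 5 GPT calls in flight, however many PRs synchronize at once

export function reviewDiff(diff: string): Promise<string | null> {
  return limit(async () => {
    const completion = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: `Review this diff and point out problems:\n\n${diff}` }],
    })
    return completion.choices[0].message.content
  })
}
```

A worker draining the webhook queue would call reviewDiff for each job, so bursts from many installations are smoothed out instead of hitting the GPT rate limit all at once.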
