
Chore: Add openai o1 support #366

Merged · 6 commits merged into obot-platform:main from openai-o1 on Feb 4, 2025

Conversation

@StrongMonkey (Contributor) commented Jan 23, 2025

Add OpenAI o1 support in the proxy by modifying the request body.

obot-platform/obot#1131
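
For context, here is a minimal sketch of the kind of request rewrite such a proxy performs for o1. The field handling below is an assumption based on the publicly documented o1 restrictions at the time (no stream support, no system role); the exact rewrites in this PR may differ.

// Hypothetical sketch, not the PR's exact code. Assumes imports
// "bytes", "encoding/json", "io", "net/http".
func modifyRequestBody(req *http.Request) error {
	raw, err := io.ReadAll(req.Body)
	if err != nil {
		return err
	}
	var body map[string]any
	if err := json.Unmarshal(raw, &body); err != nil {
		// Not JSON we recognize; forward the original bytes untouched.
		req.Body = io.NopCloser(bytes.NewReader(raw))
		return nil
	}
	if model, _ := body["model"].(string); isModelO1(model) {
		delete(body, "stream") // o1 did not support streaming at the time
		// o1 rejects the "system" role; downgrading to "user" is one
		// possible workaround (an assumption, not necessarily what the PR does).
		if msgs, ok := body["messages"].([]any); ok {
			for _, m := range msgs {
				if msg, ok := m.(map[string]any); ok && msg["role"] == "system" {
					msg["role"] = "user"
				}
			}
		}
		if raw, err = json.Marshal(body); err != nil {
			return err
		}
	}
	req.Body = io.NopCloser(bytes.NewReader(raw))
	req.ContentLength = int64(len(raw))
	return nil
}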

@@ -96,6 +101,41 @@ func (s *server) proxyDirector(req *http.Request) {
if s.cfg.PathPrefix != "" && !strings.HasPrefix(req.URL.Path, s.cfg.PathPrefix) {
req.URL.Path = s.cfg.PathPrefix + req.URL.Path
}

if req.Body == nil || req.Method != http.MethodPost {
Contributor

Does it make sense to do this?

Suggested change
if req.Body == nil || req.Method != http.MethodPost {
if req.Body == nil || s.cfg.UpstreamHost != "api.openai.com" || req.URL.Path != "/v1/chat/completions" {

@g-linville (Collaborator) commented Jan 24, 2025

I like this, because it means that we can look for errors when we try to unmarshal the body later.

@StrongMonkey (Author)

yep, I will change this.

@StrongMonkey force-pushed the openai-o1 branch 3 times, most recently from aa67dc3 to bd15449 on January 24, 2025 at 22:57
@StrongMonkey (Author)

@thedadams Pushed a change that makes the response compatible with the streaming client in gptscript.
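
For reference, a non-streaming chat completion can be re-emitted in OpenAI's SSE framing so that a streaming client can still consume it. A minimal sketch of that framing (an assumption about the shape; the exact framing the PR produces may differ):

// Hypothetical sketch: wrap one complete JSON chunk in the SSE framing
// OpenAI streaming clients expect ("data: <json>\n\n" events, then a
// terminating "data: [DONE]\n\n"). Assumes import "bytes".
func toSSE(chunkJSON []byte) []byte {
	var buf bytes.Buffer
	buf.WriteString("data: ")
	buf.Write(chunkJSON)
	buf.WriteString("\n\n")
	buf.WriteString("data: [DONE]\n\n")
	return buf.Bytes()
}

The response's Content-Type would also need to change from application/json to text/event-stream for streaming clients to parse it.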

return nil
}

func isModelO1(model string) bool {
@StrongMonkey (Author)

The current PR only supports o1, as o1-mini doesn't seem to support system messages, which could break a lot of things. We can investigate that in a separate issue.

}
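
Given that comment, a plausible body for isModelO1 would be a name-prefix check; the version below is an assumption, and the actual check in the PR may differ:

// Hypothetical implementation: accept "o1" and dated snapshots such as
// "o1-2024-12-17", but not "o1-mini", which lacks system-message support.
// Assumes import "strings".
func isModelO1(model string) bool {
	return model == "o1" || strings.HasPrefix(model, "o1-2")
}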

func (s *server) modifyResponse(resp *http.Response) error {
if resp.StatusCode != http.StatusOK || resp.Body == nil || resp.Request.URL.Path != chatCompletionsPath || resp.Request.URL.Host != openaiBaseHostName {
Contributor

nit: resp.Body is always non-nil (the net/http Client and Transport guarantee a non-nil response Body).

Suggested change
if resp.StatusCode != http.StatusOK || resp.Body == nil || resp.Request.URL.Path != chatCompletionsPath || resp.Request.URL.Host != openaiBaseHostName {
if resp.StatusCode != http.StatusOK || resp.Request.URL.Path != chatCompletionsPath || resp.Request.URL.Host != openaiBaseHostName {

Comment on lines 160 to 164
rawBody, err := io.ReadAll(resp.Body)
if err != nil {
return fmt.Errorf("failed to read response body: %w", err)
}
if err := json.Unmarshal(rawBody, &respBody); err == nil && isModelO1(respBody.Model) {
Contributor

So we only have to process the body once.

Suggested change
rawBody, err := io.ReadAll(resp.Body)
if err != nil {
return fmt.Errorf("failed to read response body: %w", err)
}
if err := json.Unmarshal(rawBody, &respBody); err == nil && isModelO1(respBody.Model) {
if err := json.NewDecoder(resp.Body).Decode(&respBody); err == nil && isModelO1(respBody.Model) {

@StrongMonkey (Author)

It needs to save the original response body so that, in case it is not an o1 model, it can restore resp.Body to the original data.
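
That is, after rawBody has been consumed, the non-o1 path hands the original bytes back to the client, along these lines (reusing the names from the diff above; assumes imports "bytes" and "io"):

// Restore the already-consumed body so downstream readers still see
// the original bytes when the response is not from an o1 model.
resp.Body = io.NopCloser(bytes.NewReader(rawBody))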

}

if resp.Header.Get("Content-Type") == "application/json" {
var respBody openai.ChatCompletionResponse
Contributor

Have to close the body if we are reading and replacing it. I think you need to save the original body in a variable so you don't close the new body.

Suggested change
var respBody openai.ChatCompletionResponse
originalBody := resp.Body
defer originalBody.Close()
var respBody openai.ChatCompletionResponse

@StrongMonkey (Author)

yeah, probably better to just close the original body without using defer
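
That is, something along these lines (a sketch of closing eagerly instead of deferring, reusing the names from the diff; assumes imports "bytes", "fmt", and "io"):

// Read and close the upstream body immediately, then install the
// replacement; no defer is needed because nothing else reads the
// original body after this point.
rawBody, err := io.ReadAll(resp.Body)
if err != nil {
	return fmt.Errorf("failed to read response body: %w", err)
}
resp.Body.Close()
resp.Body = io.NopCloser(bytes.NewReader(rawBody))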

Comment on lines 172 to 187
Choices: func() []openai.ChatCompletionStreamChoice {
var choices []openai.ChatCompletionStreamChoice
for _, choice := range respBody.Choices {
choices = append(choices, openai.ChatCompletionStreamChoice{
Index: choice.Index,
Delta: openai.ChatCompletionStreamChoiceDelta{
Content: choice.Message.Content,
Role: choice.Message.Role,
FunctionCall: choice.Message.FunctionCall,
ToolCalls: choice.Message.ToolCalls,
},
FinishReason: choice.FinishReason,
})
}
return choices
}(),
Contributor

Whoa! I've never seen this before. I think I like it.

@@ -63,7 +74,8 @@ func Run(cfg *Config) error {
ModifyResponse: cfg.RewriteModelsFn,
})
mux.Handle("/v1/", &httputil.ReverseProxy{
Director: s.proxyDirector,
Director: s.proxyDirector,
ModifyResponse: s.modifyResponse,
Contributor

I think it would be good if we could find a way to not put this on every model provider that uses this proxy code, and only put it on the OpenAI model provider.

@StrongMonkey (Author) commented Jan 29, 2025

yes, that would require us to put the common code into a separate folder/repo with its own Go module. Right now, if I update the dependency for openai-model-provider, it pulls the other tools in.

@StrongMonkey (Author)

wait, actually, yeah, I didn't realize the proxy code is shared. Let me think...

@StrongMonkey (Author)

ok, Thorsten's custom path support comes right on time. This now switches to use that.
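
For illustration, the wiring might take roughly this shape, with only the OpenAI provider overriding the shared /v1/ route (a sketch; CustomPathHandleFuncs appears later in this diff, but the surrounding names here are hypothetical):

// Hypothetical sketch: the OpenAI provider registers its own /v1/
// handler via the shared proxy's CustomPathHandleFuncs, so the o1
// response rewrite is not applied to every provider reusing this code.
cfg.CustomPathHandleFuncs = map[string]http.HandlerFunc{
	"/v1/": (&httputil.ReverseProxy{
		Director:       s.proxyDirector,
		ModifyResponse: s.modifyResponse, // o1-specific rewrite lives only here
	}).ServeHTTP,
}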

Signed-off-by: Daishan Peng <daishan@acorn.io>
Signed-off-by: Daishan Peng <daishan@acorn.io>
Signed-off-by: Daishan Peng <daishan@acorn.io>
Signed-off-by: Daishan Peng <daishan@acorn.io>
// Register default handlers only if they are not already registered
if _, exists := cfg.CustomPathHandleFuncs["/{$}"]; !exists {
mux.HandleFunc("/{$}", s.healthz)
}
if _, exists := cfg.CustomPathHandleFuncs["/v1/models"]; !exists {
if handler, exists := cfg.CustomPathHandleFuncs["/v1/models"]; !exists {
@StrongMonkey (Author)

@iwilltry42 can you review this?

@iwilltry42 (Contributor)

LGTM!
I don't like all those if-else statements, but I have no cleaner solution right now (unfortunately, mux doesn't take care of path sorting, and it neither allows overrides nor lets you look up which paths are already handled).

type Config struct {
url *url.URL
Url *url.URL
Contributor

nit:

Suggested change
Url *url.URL
URL *url.URL

Signed-off-by: Daishan Peng <daishan@acorn.io>
@pixeye33

Hello,

since this PR was opened, #378 has been merged.

I'm not certain that it needs to be, but should generic-openai-model-provider also be changed?

@iwilltry42 (Contributor)

@pixeye33 that would only make sense if one were to use the generic provider to connect to OpenAI directly instead of using the OpenAI provider, right?
Or is there an o1 variant with the same limitations that you can run in, e.g., Ollama?

Now that I think about it, it may be needed for Azure OpenAI, which has access to o1 as well.

@pixeye33 commented Jan 31, 2025

For this PR, at least Ollama and generic-openai-model-provider should be identical (or even be just one).

But for my specific case: I'm using https://www.litellm.ai/ in front of Groq/OpenAI for all the AI tools I'm playing with, and I'd like to use generic-openai-model-provider so obot can use it too (the same logic applies when using https://openrouter.ai/).

However, I'm not even sure an o1-specific fix is necessary/done on litellm: I have not tried it yet.

@StrongMonkey merged commit 6cd817a into obot-platform:main on Feb 4, 2025
1 check passed