Inconsistent API Behavior:
Not all APIs support streaming, so a token-count policy cannot be applied to every request (e.g., create thread, create message). This limitation needs careful consideration when implementing such policies.
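To illustrate the gap, here is a minimal, hypothetical sketch (the route names and helper are illustrative, not the actual gateway implementation): a token-count policy can only meter routes that return completion/usage data, while assistant-management routes fall outside it.

```python
# Hypothetical sketch: routes a gateway-side token-count policy can meter.
# Completion-style routes carry token usage; assistant-management routes
# (create thread, create message) do not, so the policy cannot cover them.
METERED_ROUTES = {"/chat/completions", "/completions"}
UNMETERED_ROUTES = {"/threads", "/threads/{thread_id}/messages"}

def can_apply_token_policy(route: str) -> bool:
    """Return True if a token-count policy can be enforced on this route."""
    return route in METERED_ROUTES

# Completion requests are metered; assistant-management requests are not.
assert can_apply_token_policy("/chat/completions")
assert not can_apply_token_policy("/threads")
```

This is why the policy cannot be enforced uniformly: any request hitting an unmetered route simply bypasses the token accounting.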
Deployment ID Issue in Assistant APIs:
When a deployment-id is passed to any assistant API, it corrupts the final OpenAI endpoint. For example, the chat-completion endpoint is structured as backend1.com/openai/deployment/gpt4/chat?api-version=.... Assistant API endpoints, however, do not include the /deployment/gpt4 path segment, which breaks dynamic URL generation. Moreover, since assistants are tied to specific subscriptions, the current failover mechanism is ineffective and requires revision.
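The difference between the two URL shapes can be sketched as follows. This is a hypothetical helper, not the gateway's actual code; the function name and parameters are assumptions, and only the path pattern mirrors the example above.

```python
def build_endpoint(backend: str, deployment: str, route: str,
                   api_version: str, is_assistant_api: bool = False) -> str:
    """Hypothetical sketch of backend URL construction.

    Chat-completion routes embed a /deployment/<name> segment;
    assistant routes must omit it, otherwise appending the
    deployment-id corrupts the final endpoint.
    """
    if is_assistant_api:
        # Assistant APIs are not deployment-scoped: no /deployment/<name>.
        path = f"/openai/{route}"
    else:
        path = f"/openai/deployment/{deployment}/{route}"
    return f"{backend}{path}?api-version={api_version}"

# Completion route keeps the deployment segment:
#   https://backend1.com/openai/deployment/gpt4/chat?api-version=<ver>
# Assistant route drops it:
#   https://backend1.com/openai/threads?api-version=<ver>
```

If the gateway unconditionally injects the deployment segment, every assistant-API request ends up with a malformed path, which matches the behavior described above.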
Question:
Could you please confirm whether streaming is supported with the current implementation?
Thank you,
Sarmad