It would be helpful to have a way to enable debug output showing how many tokens each request consumes for the input prompt and for the output.
For example, some models state a maximum of 1536 output tokens, and we are seeing cases where output stops mid-generation; without token counts we cannot tell whether we hit the output-token limit or whether something else is happening.
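As a stopgap while no built-in debug flag exists, the distinction can often be made from the response itself. The sketch below assumes an OpenAI-compatible response shape with a `usage` object and a per-choice `finish_reason` (field names are an assumption, not confirmed by this project); the `response` dict stands in for a real API reply.

```python
# Hypothetical sketch: read token usage and the finish reason from an
# OpenAI-compatible chat-completion response to tell a token-limit stop
# apart from a natural stop. The literal dict below is a stand-in for
# a real API response object.
response = {
    "choices": [{"finish_reason": "length", "message": {"content": "truncated..."}}],
    "usage": {"prompt_tokens": 412, "completion_tokens": 1536, "total_tokens": 1948},
}

usage = response["usage"]
finish_reason = response["choices"][0]["finish_reason"]

print(f"prompt tokens:     {usage['prompt_tokens']}")
print(f"completion tokens: {usage['completion_tokens']}")

# In the OpenAI-style convention, "length" means generation stopped
# because it hit the max-output-token cap; "stop" means the model
# finished on its own.
if finish_reason == "length":
    print("output was cut off by the output-token limit")
```

If `completion_tokens` equals the model's stated maximum and `finish_reason` is `"length"`, the truncation is almost certainly the output-token cap rather than a model or transport problem.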