feat(lm): add support for o3-mini and openai reasoning models #7649
## Use `max_completion_tokens` for OpenAI reasoning models (o1/o3)

### Problem

The OpenAI o3 model family deprecated `max_tokens` in favor of `max_completion_tokens`. Additionally, both o1 and o3 models (including their mini variants) belong to the same "reasoning model" family, which requires specific configuration (temperature=1.0, max tokens >= 5000). This change handles those requirements behind the scenes so it does not introduce any backward compatibility issues.

### Changes
- Updated model family detection to use regex pattern matching for all variants: `o1`, `o3`, `o1-mini`, `o3-mini`
- Unified parameter handling for reasoning models: use `max_completion_tokens` instead of `max_tokens` for all reasoning models
- Added comprehensive tests, including the parameter naming (`max_completion_tokens` vs `max_tokens`)
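The model family detection might look like the following sketch; the exact pattern and helper name are illustrative, not necessarily what the PR implements:

```python
import re

# Illustrative regex for reasoning-model detection; the actual pattern
# used in the PR may differ.
REASONING_MODEL = re.compile(r"^o[13](-mini)?$")

def is_reasoning_model(model: str) -> bool:
    """Return True for o1, o3, o1-mini, o3-mini, after stripping a
    provider prefix such as 'openai/'."""
    return bool(REASONING_MODEL.match(model.split("/")[-1]))
```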
### Testing

The changes are verified by new test cases in `test_clients/test_lm.py`:

- `test_reasoning_model_requirements`: verifies the temperature and token requirements
- `test_reasoning_model_token_parameter`: tests parameter naming across different model variants

### Example Usage