Merge pull request #4206 from AkatQuas/fix/field
fix: field name fix
sestinj authored Feb 28, 2025
2 parents b874fff + 0de1a2f commit b9fb555
Showing 3 changed files with 3 additions and 3 deletions.
2 changes: 1 addition & 1 deletion docs/docs/reference.md
@@ -151,7 +151,7 @@ Parameters that control the behavior of text generation and completion settings.
- `topP`: The cumulative probability for nucleus sampling. Lower values limit responses to tokens within the top probability mass.
- `topK`: The maximum number of tokens considered at each step. Limits the generated text to tokens within this probability.
- `presencePenalty`: Discourages the model from generating tokens that have already appeared in the output.
- `frequencePenalty`: Penalizes tokens based on their frequency in the text, reducing repetition.
- `frequencyPenalty`: Penalizes tokens based on their frequency in the text, reducing repetition.
- `mirostat`: Enables Mirostat sampling, which controls the perplexity during text generation. Supported by Ollama, LM Studio, and llama.cpp providers (default: `0`, where `0` = disabled, `1` = Mirostat, and `2` = Mirostat 2.0).
- `stop`: An array of stop tokens that, when encountered, will terminate the completion. Allows specifying multiple end conditions.
- `maxTokens`: The maximum number of tokens to generate in a completion (default: `2048`).
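For illustration, here is a minimal sketch of how these completion settings might be written in a user's config.json. The enclosing `completionOptions` key and the specific values are assumptions for this example rather than something shown in the diff:

```json
{
  "completionOptions": {
    "topP": 0.9,
    "topK": 40,
    "presencePenalty": 0.0,
    "frequencyPenalty": 0.3,
    "mirostat": 0,
    "stop": ["\n\n"],
    "maxTokens": 2048
  }
}
```

Note the corrected spelling `frequencyPenalty`; the values above are placeholders, with `mirostat: 0` and `maxTokens: 2048` matching the documented defaults.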
@@ -149,7 +149,7 @@ Parameters that control the behavior of text generation and completion settings.
- `topP`: The cumulative probability for nucleus sampling. Lower values limit responses to tokens within the top probability mass.
- `topK`: The maximum number of tokens considered at each step. Limits the generated text to tokens within this probability.
- `presencePenalty`: Discourages the model from generating tokens that have already appeared in the output.
- `frequencePenalty`: Penalizes tokens based on their frequency in the text, reducing repetition.
- `frequencyPenalty`: Penalizes tokens based on their frequency in the text, reducing repetition.
- `mirostat`: Enables Mirostat sampling, which controls the perplexity during text generation. Supported by Ollama, LM Studio, and llama.cpp providers (default: `0`, where `0` = disabled, `1` = Mirostat, and `2` = Mirostat 2.0).
- `stop`: An array of stop tokens that, when encountered, will terminate the completion. Allows specifying multiple end conditions.
- `maxTokens`: The maximum number of tokens to generate in a completion (default: `2048`).
2 changes: 1 addition & 1 deletion extensions/vscode/config_schema.json
@@ -33,7 +33,7 @@
"description": "The presence penalty Aof the completion.",
"type": "number"
},
"frequencePenalty": {
"frequencyPenalty": {
"title": "Frequency Penalty",
"description": "The frequency penalty of the completion.",
"type": "number"
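As a standalone illustration of the schema side of the change, here is a minimal, self-contained JSON Schema sketch (not the actual config_schema.json, which is far larger) showing the corrected property next to its sibling. With this in place, an editor using the schema offers completion and hover help for `frequencyPenalty`, whereas the old misspelled key matched no documented property:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "presencePenalty": {
      "title": "Presence Penalty",
      "description": "The presence penalty of the completion.",
      "type": "number"
    },
    "frequencyPenalty": {
      "title": "Frequency Penalty",
      "description": "The frequency penalty of the completion.",
      "type": "number"
    }
  }
}
```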
