This repository has been archived by the owner on Oct 29, 2024. It is now read-only.

Commit

fix(docs): fix tasks.json inconsistent issue
pinglin committed Sep 28, 2024
1 parent f0f4f89 commit 5b75fa3
Showing 18 changed files with 53 additions and 53 deletions.
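
The change is mechanical: the `title` fields in each component's `config/tasks.json`, and the matching parameter names in the generated `README.mdx` tables, are normalized to title case ("Max new tokens" becomes "Max New Tokens", "System message" becomes "System Message"). As a minimal sketch of the end state, the affected `tasks.json` properties end up titled as below; only the `title` and `type` keys are visible in the hunks that follow, so everything else in each property definition is omitted here.

```json
{
  "max-new-tokens": {
    "title": "Max New Tokens",
    "type": "integer"
  },
  "system-message": {
    "title": "System Message",
    "type": "string"
  }
}
```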

ai/anthropic/v0/README.mdx (4 changes: 2 additions & 2 deletions)

@@ -51,13 +51,13 @@ Anthropic's text generation models (often called generative pre-trained transformer
| Task ID (required) | `task` | string | `TASK_TEXT_GENERATION_CHAT` |
| Model Name (required) | `model-name` | string | The Anthropic model to be used |
| Prompt (required) | `prompt` | string | The prompt text |
-| System message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is set using a generic message as "You are a helpful assistant." |
+| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is set using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images (Note: The prompt images will be injected in the order they are provided to the 'prompt' message. Anthropic doesn't support sending images via image-url, use this field instead) |
| [Chat history](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
| Seed | `seed` | integer | The seed (Note: Not supported by Anthropic Models) |
| Temperature | `temperature` | number | The temperature for sampling |
| Top K | `top-k` | integer | Top k for sampling |
-| Max new tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
+| Max New Tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
</div>


ai/anthropic/v0/config/tasks.json (4 changes: 2 additions & 2 deletions)

@@ -135,7 +135,7 @@
"value",
"reference"
],
"title": "Max new tokens",
"title": "Max New Tokens",
"type": "integer"
},
"model-name": {
@@ -227,7 +227,7 @@
"reference",
"template"
],
"title": "System message",
"title": "System Message",
"type": "string"
},
"temperature": {

ai/cohere/v0/README.mdx (4 changes: 2 additions & 2 deletions)

@@ -53,14 +53,14 @@ Cohere's text generation models (often called generative pre-trained transformer
| Task ID (required) | `task` | string | `TASK_TEXT_GENERATION_CHAT` |
| Model Name (required) | `model-name` | string | The Cohere command model to be used |
| Prompt (required) | `prompt` | string | The prompt text |
-| System message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
+| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
| Documents | `documents` | array[string] | The documents to be used for the model, for optimal performance, the length of each document should be less than 300 words. |
| Prompt Images | `prompt-images` | array[string] | The prompt images (Note: As for 2024-06-24 Cohere models are not multimodal, so images will be ignored.) |
| [Chat history](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Each message should adhere to the format: : \{"role": "The message role, i.e. 'USER' or 'CHATBOT'", "content": "message content"\}. |
| Seed | `seed` | integer | The seed (default=42) |
| Temperature | `temperature` | number | The temperature for sampling (default=0.7) |
| Top K | `top-k` | integer | Top k for sampling (default=10) |
-| Max new tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate (default=50) |
+| Max New Tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate (default=50) |
</div>


ai/cohere/v0/config/tasks.json (4 changes: 2 additions & 2 deletions)

@@ -254,7 +254,7 @@
"value",
"reference"
],
"title": "Max new tokens",
"title": "Max New Tokens",
"type": "integer"
},
"model-name": {
@@ -349,7 +349,7 @@
"reference",
"template"
],
"title": "System message",
"title": "System Message",
"type": "string"
},
"temperature": {

ai/fireworksai/v0/README.mdx (4 changes: 2 additions & 2 deletions)

@@ -52,13 +52,13 @@ Fireworks AI's text generation models (often called generative pre-trained trans
| Task ID (required) | `task` | string | `TASK_TEXT_GENERATION_CHAT` |
| Model Name (required) | `model` | string | The OSS model to be used |
| Prompt (required) | `prompt` | string | The prompt text |
-| System message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is set using a generic message as "You are a helpful assistant." |
+| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is set using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images (Note: According to Fireworks AI documentation on 2024-07-24, the total number of images included in a single API request should not exceed 30, and all the images should be smaller than 5MB in size) |
| [Chat history](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\} |
| Seed | `seed` | integer | The seed |
| Temperature | `temperature` | number | The temperature for sampling |
| Top K | `top-k` | integer | Integer to define the top tokens considered within the sample operation to create new text |
-| Max new tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
+| Max New Tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
| Top P | `top-p` | number | Float to define the tokens that are within the sample operation of text generation. Add tokens in the sample for more probable to least probable until the sum of the probabilities is greater than top-p (default=0.5) |
</div>

ai/fireworksai/v0/config/tasks.json (4 changes: 2 additions & 2 deletions)

@@ -153,7 +153,7 @@
"value",
"reference"
],
"title": "Max new tokens",
"title": "Max New Tokens",
"type": "integer"
},
"model": {
@@ -266,7 +266,7 @@
"reference",
"template"
],
"title": "System message",
"title": "System Message",
"type": "string"
},
"temperature": {

ai/groq/v0/README.mdx (4 changes: 2 additions & 2 deletions)

@@ -51,13 +51,13 @@ Groq serves open source text generation models (often called generative pre-trai
| Task ID (required) | `task` | string | `TASK_TEXT_GENERATION_CHAT` |
| Model (required) | `model` | string | The OSS model to be used |
| Prompt (required) | `prompt` | string | The prompt text |
-| System message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is set using a generic message as "You are a helpful assistant." |
+| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is set using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images (Note: Only a subset of OSS models support image inputs) |
| [Chat history](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\} |
| Seed | `seed` | integer | The seed |
| Temperature | `temperature` | number | The temperature for sampling |
| Top K | `top-k` | integer | Integer to define the top tokens considered within the sample operation to create new text |
-| Max new tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
+| Max New Tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
| Top P | `top-p` | number | Float to define the tokens that are within the sample operation of text generation. Add tokens in the sample for more probable to least probable until the sum of the probabilities is greater than top-p (default=0.5) |
| User | `user` | string | The user name passed to GroqPlatform |
</div>

ai/groq/v0/config/tasks.json (4 changes: 2 additions & 2 deletions)

@@ -135,7 +135,7 @@
"value",
"reference"
],
"title": "Max new tokens",
"title": "Max New Tokens",
"type": "integer"
},
"model": {
@@ -237,7 +237,7 @@
"reference",
"template"
],
"title": "System message",
"title": "System Message",
"type": "string"
},
"temperature": {

ai/instill/v0/README.mdx (16 changes: 8 additions & 8 deletions)

@@ -327,10 +327,10 @@ Generate texts from input text prompts.
| Task ID (required) | `task` | string | `TASK_TEXT_GENERATION` |
| Model Name (required) | `model-name` | string | The Instill Model model to be used. |
| Prompt (required) | `prompt` | string | The prompt text |
-| System message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
+| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
| Seed | `seed` | integer | The seed |
| Temperature | `temperature` | number | The temperature for sampling |
-| Max new tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
+| Max New Tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
</div>


@@ -356,12 +356,12 @@ Generate texts from input text prompts and chat history.
| Task ID (required) | `task` | string | `TASK_TEXT_GENERATION_CHAT` |
| Model Name (required) | `model-name` | string | The Instill Model model to be used. |
| Prompt (required) | `prompt` | string | The prompt text |
-| System message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
+| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images |
| [Chat history](#text-generation-chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
| Seed | `seed` | integer | The seed |
| Temperature | `temperature` | number | The temperature for sampling |
-| Max new tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
+| Max New Tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
</div>


@@ -452,12 +452,12 @@ Answer questions based on a prompt and an image.
| Task ID (required) | `task` | string | `TASK_VISUAL_QUESTION_ANSWERING` |
| Model Name (required) | `model-name` | string | The Instill Model model to be used. |
| Prompt (required) | `prompt` | string | The prompt text |
-| System message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
+| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images |
| [Chat history](#visual-question-answering-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
| Seed | `seed` | integer | The seed |
| Temperature | `temperature` | number | The temperature for sampling |
-| Max new tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
+| Max New Tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
</div>


@@ -519,12 +519,12 @@ Generate texts from input text prompts and chat history.
| Task ID (required) | `task` | string | `TASK_CHAT` |
| Model Name (required) | `model-name` | string | The Instill Model model to be used. |
| Prompt (required) | `prompt` | string | The prompt text |
-| System message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
+| System Message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images |
| [Chat history](#chat-chat-history) | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
| Seed | `seed` | integer | The seed |
| Temperature | `temperature` | number | The temperature for sampling |
-| Max new tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
+| Max New Tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |
</div>


ai/instill/v0/config/tasks.json (8 changes: 4 additions & 4 deletions)

@@ -171,7 +171,7 @@
"value",
"reference"
],
"title": "Max new tokens",
"title": "Max New Tokens",
"type": "integer"
},
"model-name": {
@@ -230,7 +230,7 @@
"reference",
"template"
],
"title": "System message",
"title": "System Message",
"type": "string"
},
"temperature": {
@@ -315,7 +315,7 @@
"value",
"reference"
],
"title": "Max new tokens",
"title": "Max New Tokens",
"type": "integer"
},
"model-name": {
@@ -389,7 +389,7 @@
"reference",
"template"
],
"title": "System message",
"title": "System Message",
"type": "string"
},
"temperature": {