This repository has been archived by the owner on Oct 29, 2024. It is now read-only.

Commit
Merge branch 'main' into YCK1130/jira-component
donch1989 authored Jul 28, 2024
2 parents 31c22d1 + 810f850 commit 49638eb
Showing 98 changed files with 10,404 additions and 494 deletions.
1 change: 1 addition & 0 deletions ai/cohere/v0/README.mdx
Original file line number Diff line number Diff line change
@@ -115,6 +115,7 @@ Sort text inputs by semantic relevance to a specified query.
| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Reranked documents | `ranking` | array[string] | Reranked documents |
+| Reranked documents relevance (optional) | `relevance` | array[number] | The relevance scores of the reranked documents |
| Usage (optional) | `usage` | object | Search Usage on the Cohere Platform Rerank Models |
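The new `relevance` scores line up index-for-index with the `ranking` array. A minimal sketch of consuming that output (the `output` dict and its values are hypothetical, shaped only by the table above, not taken from the actual SDK):

```python
# Sketch: pair the rerank component's `ranking` output with its
# optional `relevance` scores. Values are illustrative only.
output = {
    "ranking": ["doc B", "doc A", "doc C"],
    "relevance": [0.92, 0.55, 0.08],  # optional; same order as `ranking`
}

# Fall back gracefully when the optional `relevance` field is absent.
scores = output.get("relevance") or [None] * len(output["ranking"])
ranked = list(zip(output["ranking"], scores))
print(ranked)  # [('doc B', 0.92), ('doc A', 0.55), ('doc C', 0.08)]
```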


4 changes: 2 additions & 2 deletions ai/huggingface/v0/README.mdx
@@ -46,8 +46,8 @@ The component configuration is defined and maintained [here](https://github.com/

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
-| API Key (required) | `api-key` | string | Fill in your Hugging face API token. To find your token, visit https://huggingface.co/settings/tokens. |
-| Base URL (required) | `base-url` | string | Hostname for the endpoint. To use Inference API set to https://api-inference.huggingface.co, for Inference Endpoint set to your custom endpoint. |
+| API Key (required) | `api-key` | string | Fill in your Hugging face API token. To find your token, visit <a href="https://huggingface.co/settings/tokens">here</a> |
+| Base URL (required) | `base-url` | string | Hostname for the endpoint. To use Inference API set to <a href="https://api-inference.huggingface.co">here</a>, for Inference Endpoint set to your custom endpoint. |
| Is Custom Endpoint (required) | `is-custom-endpoint` | boolean | Fill true if you are using a custom Inference Endpoint and not the Inference API. |


4 changes: 2 additions & 2 deletions ai/huggingface/v0/config/setup.json
@@ -3,7 +3,7 @@
"additionalProperties": true,
"properties": {
"api-key": {
-"description": "Fill in your Hugging face API token. To find your token, visit https://huggingface.co/settings/tokens.",
+"description": "Fill in your Hugging face API token. To find your token, visit <a href=\"https://huggingface.co/settings/tokens\">here</a>",
"instillUpstreamTypes": [
"reference"
],
@@ -17,7 +17,7 @@
},
"base-url": {
"default": "https://api-inference.huggingface.co",
-"description": "Hostname for the endpoint. To use Inference API set to https://api-inference.huggingface.co, for Inference Endpoint set to your custom endpoint.",
+"description": "Hostname for the endpoint. To use Inference API set to <a href=\"https://api-inference.huggingface.co\">here</a>, for Inference Endpoint set to your custom endpoint.",
"instillUpstreamTypes": [
"value"
],
11 changes: 5 additions & 6 deletions ai/mistralai/v0/README.mdx
@@ -1,11 +1,11 @@
---
-title: "Mistral"
+title: "Mistral AI"
lang: "en-US"
draft: false
-description: "Learn about how to set up a VDP Mistral component https://github.com/instill-ai/instill-core"
+description: "Learn about how to set up a VDP Mistral AI component https://github.com/instill-ai/instill-core"
---

-The Mistral component is an AI component that allows users to connect the AI models served on the Mistral Platform.
+The Mistral AI component is an AI component that allows users to connect the AI models served on the Mistral AI Platform.
It can carry out the following tasks:

- [Text Generation Chat](#text-generation-chat)
@@ -21,7 +21,7 @@ It can carry out the following tasks:

## Configuration

-The component configuration is defined and maintained [here](https://github.com/instill-ai/component/blob/main/ai/mistral/v0/config/definition.json).
+The component configuration is defined and maintained [here](https://github.com/instill-ai/component/blob/main/ai/mistralai/v0/config/definition.json).



@@ -31,7 +31,7 @@ The component configuration is defined and maintained [here](https://github.com/

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
-| API Key (required) | `api-key` | string | Fill in your Mistral API key. To find your keys, visit the Mistral platform page. |
+| API Key (required) | `api-key` | string | Fill in your Mistral API key. To find your keys, visit the Mistral AI platform page. |



@@ -78,7 +78,6 @@ Turn text into a vector of numbers that capture its meaning, unlocking use cases
| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TEXT_EMBEDDINGS` |
-| Embedding Type (required) | `embedding-type` | string | Specifies the return type of embedding. |
| Model Name (required) | `model-name` | string | The Mistral embed model to be used |
| Text (required) | `text` | string | The text |

93 changes: 93 additions & 0 deletions ai/ollama/v0/README.mdx
@@ -0,0 +1,93 @@
---
title: "Ollama"
lang: "en-US"
draft: false
description: "Learn about how to set up a VDP Ollama component https://github.com/instill-ai/instill-core"
---

The Ollama component is an AI component that allows users to connect the AI models served with the Ollama library.
It can carry out the following tasks:

- [Text Generation Chat](#text-generation-chat)
- [Text Embeddings](#text-embeddings)



## Release Stage

`Alpha`



## Configuration

The component configuration is defined and maintained [here](https://github.com/instill-ai/component/blob/main/ai/ollama/v0/config/definition.json).




## Setup


| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| Endpoint (required) | `endpoint` | string | Fill in your Ollama hosting endpoint. ### WARNING ###: As of 2024-07-26, the Ollama component does not support authentication methods. To prevent unauthorized access to your Ollama serving resources, please implement additional security measures such as IP whitelisting. |
| Model Auto-Pull (required) | `auto-pull` | boolean | Automatically pull the requested models from the Ollama server if the model is not found in the local cache. |
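Given the warning that the component sends no credentials, a reasonable precaution when assembling the setup is to check that the endpoint host looks private before using it. A sketch under that assumption (the helper name and the very rough private-host check are illustrative, not part of the component):

```python
# Sketch: Ollama component setup per the table above. Because the
# component (as of 2024-07-26) supports no authentication, warn when
# the endpoint host does not look like a private address.
from urllib.parse import urlparse

def make_ollama_setup(endpoint: str, auto_pull: bool = True) -> dict:
    host = urlparse(endpoint).hostname or ""
    # Illustrative check only; a real deployment should rely on
    # network-level controls such as IP whitelisting.
    private = host in ("localhost", "127.0.0.1") or host.startswith(("10.", "192.168."))
    if not private:
        print(f"warning: {host} may be publicly reachable; restrict access")
    return {"endpoint": endpoint, "auto-pull": auto_pull}

setup = make_ollama_setup("http://localhost:11434")
```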




## Supported Tasks

### Text Generation Chat

Provide text outputs in response to text/image inputs.


| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TEXT_GENERATION_CHAT` |
| Model Name (required) | `model` | string | The OSS model to be used, check https://ollama.com/library for list of models available |
| Prompt (required) | `prompt` | string | The prompt text |
| System message | `system-message` | string | The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. By default, the model’s behavior is set using a generic message as "You are a helpful assistant." |
| Prompt Images | `prompt-images` | array[string] | The prompt images |
| Chat history | `chat-history` | array[object] | Incorporate external chat history, specifically previous messages within the conversation. Please note that System Message will be ignored and will not have any effect when this field is populated. Each message should adhere to the format: : \{"role": "The message role, i.e. 'system', 'user' or 'assistant'", "content": "message content"\}. |
| Seed | `seed` | integer | The seed |
| Temperature | `temperature` | number | The temperature for sampling |
| Top K | `top-k` | integer | Top k for sampling |
| Max new tokens | `max-new-tokens` | integer | The maximum number of tokens for model to generate |



| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Text | `text` | string | Model Output |
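Each `chat-history` entry must follow the `{"role": ..., "content": ...}` shape quoted in the input table, with roles limited to `system`, `user`, or `assistant`; note the table's caveat that a populated history causes the System Message field to be ignored. A small hypothetical helper that builds a well-formed history:

```python
# Sketch: build a `chat-history` array in the documented
# {"role": ..., "content": ...} shape. `add_message` is illustrative.
VALID_ROLES = {"system", "user", "assistant"}

def add_message(history: list, role: str, content: str) -> list:
    if role not in VALID_ROLES:
        raise ValueError(f"role must be one of {sorted(VALID_ROLES)}")
    history.append({"role": role, "content": content})
    return history

history = []
add_message(history, "user", "What is Ollama?")
add_message(history, "assistant", "A way to run open models locally.")
```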






### Text Embeddings

Turn text into a vector of numbers that capture its meaning, unlocking use cases like semantic search.


| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TEXT_EMBEDDINGS` |
| Model Name (required) | `model` | string | The OSS model to be used, check https://ollama.com/library for list of models available |
| Text (required) | `text` | string | The text |



| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Embedding | `embedding` | array[number] | Embedding of the input text |
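Once the component returns `embedding` vectors, the semantic-search use case mentioned above reduces to nearest-neighbour lookup by cosine similarity. A self-contained sketch (the vectors are toy values, not real Ollama output):

```python
# Sketch: rank candidate documents by cosine similarity of their
# embeddings to a query embedding. Vectors here are illustrative.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [1.0, 0.0]
docs = {"cats": [0.9, 0.1], "finance": [0.0, 1.0]}
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # cats
```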







7 changes: 7 additions & 0 deletions ai/ollama/v0/assets/ollama.svg
