
Commit

published CLI page
Alfrick committed Jan 29, 2025
1 parent ef0d319 commit 917c9a0
Showing 14 changed files with 266 additions and 160 deletions.
2 changes: 1 addition & 1 deletion code_snippets/api-guide/audit-log/operation.html
@@ -16,7 +16,7 @@

const raw = JSON.stringify({
"query": {
"operations": [300, 400, 600]
"operations": ["MODEL_CREATE", "WORKFLOW_CREATE", "APPLICATION_CREATE"]
}
});

2 changes: 1 addition & 1 deletion code_snippets/api-guide/audit-log/operation.js
@@ -27,7 +27,7 @@ stub.PostAuditLogSearches(
user_id: USER_ID,
},
query: {
- operations: [300, 400, 600],
+ operations: ["MODEL_CREATE", "WORKFLOW_CREATE", "APPLICATION_CREATE"]
},
},
metadata,
2 changes: 1 addition & 1 deletion code_snippets/api-guide/audit-log/operation.py
@@ -26,7 +26,7 @@
service_pb2.PostAuditLogSearchesRequest(
user_app_id=userDataObject, # The userDataObject is created in the overview and is required when using a PAT
query=resources_pb2.AuditLogQuery(
- operations=[300, 400, 600]
+ operations=[resources_pb2.EventType.MODEL_CREATE, resources_pb2.EventType.WORKFLOW_CREATE, resources_pb2.EventType.APPLICATION_CREATE]
)
),
metadata=metadata
2 changes: 1 addition & 1 deletion code_snippets/api-guide/audit-log/operation.sh
@@ -3,6 +3,6 @@ curl -X POST "https://api.clarifai.com/v2/users/YOUR_USER_ID_HERE/audit_log/sear
-H "Content-Type: application/json" \
-d '{
"query": {
"operations": [300, 400, 600]
"operations": ["MODEL_CREATE", "WORKFLOW_CREATE", "APPLICATION_CREATE"]
}
}'
2 changes: 1 addition & 1 deletion code_snippets/python-sdk/cli/predict_by_yaml.yaml
@@ -1,3 +1,3 @@
model_url: "https://clarifai.com/anthropic/completion/models/claude-v2"
bytes: "Human: Write a tweet on future of AI\nAssistant:"
- input_type: "text"
+ input_type: "text"
24 changes: 12 additions & 12 deletions docs/api-guide/audit-log/README.mdx
@@ -308,18 +308,18 @@ You can define the target of your query; that is, specify the resource on which
<summary>**Target Types Supported**</summary>


- | Target |Code |
- |-----------------|--------------------|
- | `User user` | 1 |
- | `Role role` | 2 |
- | `Team team` | 3 |
- | `App app` | 4 |
- | `Module module` | 5 |
- | `ModuleVersion module_version` | 6 |
- | `Workflow workflow` | 7 |
- | `WorkflowVersion workflow_version ` | 8 |
- | `Model model` | 9 |
- | `ModelVersion model_version` | 10 |
+ | Target |
+ |-----------------|
+ | `User user` |
+ | `Role role` |
+ | `Team team` |
+ | `App app` |
+ | `Module module` |
+ | `ModuleVersion module_version` |
+ | `Workflow workflow` |
+ | `WorkflowVersion workflow_version ` |
+ | `Model model` |
+ | `ModelVersion model_version` |


</details>
1 change: 1 addition & 0 deletions docs/portal-guide/compute-orchestration/README.mdx
@@ -47,6 +47,7 @@ If you’re not using Compute Orchestration, the Shared SaaS (Serverless) deploy
![ ](/img/compute-orchestration/intro-1.png)



## Compute Clusters and Nodepools

We use [clusters and nodepools](https://docs.clarifai.com/portal-guide/compute-orchestration/set-up-compute) to organize and manage the compute resources required for the Compute Orchestration capabilities.
2 changes: 1 addition & 1 deletion docs/sdk/Inference-from-AI-Models/Audio-as-Input.md
@@ -23,7 +23,7 @@ The Clarifai SDKs for Audio Processing provides a comprehensive set of tools and

:::tip Clarifai CLI

- Learn how to use the Clarifai CLI (Command Line Interface) tool [here](https://docs.clarifai.com/sdk/Inference-from-AI-Models/#clarifai-cli).
+ Learn how to use the Clarifai CLI (Command Line Interface) tool [here](https://docs.clarifai.com/sdk/cli).

:::

2 changes: 1 addition & 1 deletion docs/sdk/Inference-from-AI-Models/Image-as-Input.md
@@ -53,7 +53,7 @@ Clarifai SDKs empowers you to seamlessly integrate advanced image recognition fu

:::tip Clarifai CLI

- Learn how to use the Clarifai CLI (Command Line Interface) tool [here](https://docs.clarifai.com/sdk/Inference-from-AI-Models/#clarifai-cli).
+ Learn how to use the Clarifai CLI (Command Line Interface) tool [here](https://docs.clarifai.com/sdk/cli).

:::

140 changes: 1 addition & 139 deletions docs/sdk/Inference-from-AI-Models/README.mdx
@@ -10,148 +10,10 @@ sidebar_position: 6

You can leverage the Clarifai Python SDK or the Node.js SDK to make accurate predictions on your data. Whether you're working with images, videos, text, or other formats, the SDKs offer an intuitive and efficient way to interact with AI models and perform inferences seamlessly.

- For an even simpler approach, Clarifai provides a user-friendly Command Line Interface (CLI), bundled within the Python SDK package. The CLI streamlines inferencing tasks, allowing you to quickly run predictions and view results — all without requiring extensive setup.
+ For an even simpler approach, Clarifai provides a user-friendly [Command Line Interface](https://docs.clarifai.com/sdk/cli) (CLI), bundled within the Python SDK package. The CLI streamlines inferencing tasks, allowing you to quickly run predictions and view results — all without requiring extensive setup.

This combination of tools ensures flexibility and ease of use, empowering you to harness the full potential of Clarifai's AI capabilities.

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CodeBlock from "@theme/CodeBlock";

import Login from "!!raw-loader!../../../code_snippets/python-sdk/cli/login.yaml";
import PredictOptions from "!!raw-loader!../../../code_snippets/python-sdk/cli/predict_options.txt";
import PredictByIds from "!!raw-loader!../../../code_snippets/python-sdk/cli/predict_by_ids.sh";
import PredictByIdsOutput from "!!raw-loader!../../../code_snippets/python-sdk/cli/predict_by_ids_output.txt";
import PredictByFilePath from "!!raw-loader!../../../code_snippets/python-sdk/cli/predict_by_file_path.sh";
import PredictByURL from "!!raw-loader!../../../code_snippets/python-sdk/cli/predict_by_url.sh";
import PredictByModelURL from "!!raw-loader!../../../code_snippets/python-sdk/cli/predict_by_model_url.sh";
import PredictByInputID from "!!raw-loader!../../../code_snippets/python-sdk/cli/predict_by_input_id.sh";
import PredictByYAML from "!!raw-loader!../../../code_snippets/python-sdk/cli/predict_by_yaml.yaml";
import PredictByYAMLBash from "!!raw-loader!../../../code_snippets/python-sdk/cli/predict_by_yaml_bash.sh";
import InferenceParameters from "!!raw-loader!../../../code_snippets/python-sdk/cli/inference_parameters.sh";
import OutputConfig from "!!raw-loader!../../../code_snippets/python-sdk/cli/output_config.sh";

## Clarifai CLI

To make predictions using the [Clarifai CLI](https://github.com/Clarifai/examples/blob/main/CLI/model.ipynb), you first need to create a login configuration file for storing your account credentials.

<Tabs>
<TabItem value="yaml" label="YAML">
<CodeBlock className="language-yaml">{Login}</CodeBlock>
</TabItem>
</Tabs>

Then, authenticate your CLI session with Clarifai using the stored credentials in the configuration file:

```bash
clarifai login --config <config-filepath>
```

You can perform model predictions using the Clarifai CLI in the following ways:

- By specifying `user_id`, `app_id`, and `model_id`
- By providing the model URL
- By using a YAML configuration file


<details>
<summary>CLI Predict Options</summary>
<CodeBlock className="language-text">{PredictOptions}</CodeBlock>
</details>

### Predict by IDs

You can use the `--bytes` argument along with specifying `user_id`, `app_id`, and `model_id`.

<Tabs>
<TabItem value="bash" label="Bash">
<CodeBlock className="language-bash">{PredictByIds}</CodeBlock>
</TabItem>
</Tabs>

<details>
<summary>Output Example</summary>
<CodeBlock className="language-text">{PredictByIdsOutput}</CodeBlock>
</details>
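
Because this page only shows the snippet imports, here is a rough sketch of what such a call could look like. It assumes a `clarifai model predict` subcommand and reuses the IDs implied by the `claude-v2` model URL in the `predict_by_yaml.yaml` snippet from this commit; confirm the exact subcommand and flag names against the CLI Predict Options above.

```bash
# Sketch only: verify the subcommand and flag names against the CLI reference.
clarifai model predict \
  --user_id anthropic \
  --app_id completion \
  --model_id claude-v2 \
  --bytes "Human: Write a tweet on future of AI\nAssistant:" \
  --input_type text
```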

> You can also use the `--file_path` argument, which specifies the local path to the file that contains the instructions for the model to generate predictions.
<Tabs>
<TabItem value="bash" label="Bash">
<CodeBlock className="language-bash">{PredictByFilePath}</CodeBlock>
</TabItem>
</Tabs>
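
As a hypothetical equivalent, the same call with `--file_path`, where the local file simply holds the prompt text (the file name is made up for illustration):

```bash
# prompt.txt is a hypothetical local file containing the prompt text.
clarifai model predict \
  --user_id anthropic \
  --app_id completion \
  --model_id claude-v2 \
  --file_path prompt.txt \
  --input_type text
```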

> You can also use the `--url` argument, which specifies the URL of the file that contains the instructions for the model to generate predictions.
<Tabs>
<TabItem value="bash" label="Bash">
<CodeBlock className="language-bash">{PredictByURL}</CodeBlock>
</TabItem>
</Tabs>
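
And a hypothetical `--url` variant pointing at a remotely hosted prompt file (the URL below is a placeholder, not a real file):

```bash
# The URL is a placeholder for illustration only.
clarifai model predict \
  --user_id anthropic \
  --app_id completion \
  --model_id claude-v2 \
  --url "https://example.com/prompt.txt" \
  --input_type text
```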

<!--
> You can also use the `--input_id` argument, which specifies an existing input ID in the app for the model to predict.

<Tabs>
<TabItem value="bash" label="Bash">
<CodeBlock className="language-bash">{PredictByInputID}</CodeBlock>
</TabItem>
</Tabs>

-->

### Predict by Model URL

You can make predictions by using the `--model_url` argument, which specifies the URL of the model to be used for generating predictions.

<Tabs>
<TabItem value="bash" label="Bash">
<CodeBlock className="language-bash">{PredictByModelURL}</CodeBlock>
</TabItem>
</Tabs>
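
As a sketch, using the model URL from the `predict_by_yaml.yaml` snippet in this commit (again assuming a `clarifai model predict` subcommand):

```bash
# Sketch only: check the subcommand and flag names against the CLI help.
clarifai model predict \
  --model_url "https://clarifai.com/anthropic/completion/models/claude-v2" \
  --bytes "Human: Write a tweet on future of AI\nAssistant:" \
  --input_type text
```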

### Predict by a YAML file

You can provide the instructions for generating predictions in a YAML configuration file.

Here is an example:

<Tabs>
<TabItem value="yaml" label="YAML">
<CodeBlock className="language-yaml">{PredictByYAML}</CodeBlock>
</TabItem>
</Tabs>

Then, you need to specify the path to that file.

<Tabs>
<TabItem value="bash" label="Bash">
<CodeBlock className="language-bash">{PredictByYAMLBash}</CodeBlock>
</TabItem>
</Tabs>
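
The exact option for passing the YAML file is not visible on this page; a sketch assuming a hypothetical `--yaml_path` flag (substitute the real option listed in the CLI Predict Options) would be:

```bash
# --yaml_path is an assumed flag name; check the CLI help for the actual option.
clarifai model predict --yaml_path predict_by_yaml.yaml
```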

### Specify Prediction Parameters

You can specify prediction parameters to influence the output of some models. These settings allow you to control the model's behavior during prediction, influencing attributes such as creativity, coherence, and diversity in the results.

You can get a description of the prediction parameters [here](https://docs.clarifai.com/sdk/Inference-from-AI-Models/Advance-Inference-Options/#prediction-paramaters).

Here is how you can specify various inference parameters:

<Tabs>
<TabItem value="bash" label="Bash">
<CodeBlock className="language-bash">{InferenceParameters}</CodeBlock>
</TabItem>
</Tabs>
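
For illustration, assuming the CLI accepts the parameters as a JSON string via an `--inference_params` flag (the flag name and the parameter values are assumptions, not confirmed by this page):

```bash
# Flag name and parameter values are assumptions for illustration only.
clarifai model predict \
  --model_url "https://clarifai.com/anthropic/completion/models/claude-v2" \
  --bytes "Human: Write a tweet on future of AI\nAssistant:" \
  --input_type text \
  --inference_params '{"temperature": 0.7, "max_tokens": 100}'
```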

Here is how you can specify output configuration parameters:

<Tabs>
<TabItem value="bash" label="Bash">
<CodeBlock className="language-bash">{OutputConfig}</CodeBlock>
</TabItem>
</Tabs>
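
Similarly, a hypothetical `--output_config` flag taking a JSON string might be used as follows (the flag name and keys are assumptions):

```bash
# Flag name and config keys are assumptions for illustration only.
clarifai model predict \
  --model_url "https://clarifai.com/anthropic/completion/models/claude-v2" \
  --bytes "Human: Write a tweet on future of AI\nAssistant:" \
  --input_type text \
  --output_config '{"min_value": 0.6, "max_concepts": 3}'
```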

import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
2 changes: 1 addition & 1 deletion docs/sdk/Inference-from-AI-Models/Text-as-Input.md
@@ -52,7 +52,7 @@ Unlock the potential of Clarifai's state-of-the-art text-based AI features, allo

:::tip Clarifai CLI

- Learn how to use the Clarifai CLI (Command Line Interface) tool [here](https://docs.clarifai.com/sdk/Inference-from-AI-Models/#clarifai-cli).
+ Learn how to use the Clarifai CLI (Command Line Interface) tool [here](https://docs.clarifai.com/sdk/cli).

:::

3 changes: 2 additions & 1 deletion docs/sdk/advance-model-operations/model-upload.md
@@ -224,7 +224,8 @@ Ensure your local environment has sufficient memory and compute resources to loa

:::

- There are two types of CLI (command line interface) commands you can use to test your models in your local development environment. You can learn more about the Clarifai CLI tool [here](https://github.com/Clarifai/examples/blob/main/CLI/model.ipynb).
+
+ There are two types of CLI (command line interface) commands you can use to test your models in your local development environment. You can learn more about the Clarifai CLI tool [here](https://docs.clarifai.com/sdk/cli).

#### 1. Using the `test-locally` Command
