Commit

Merge pull request #295 from chaitanya1731/patch
Set release version to v1.3.1
uMartinXu authored Aug 1, 2024
2 parents f70a4bf + 1e060e8 commit 7fc2fee
Showing 13 changed files with 31 additions and 31 deletions.
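The change is mechanical: every `raw.githubusercontent.com` URL that pointed at the `main` branch is pinned to the `v1.3.1` release tag. A bump like this can be scripted; the sketch below is our own illustration, not repository tooling — the `pin_release` helper name is an assumption, and it relies on GNU `sed -i`.

```shell
#!/bin/sh
# Sketch: pin "main"-branch raw.githubusercontent.com URLs for this repository
# to a release tag, in every Markdown file under a directory.
# pin_release is a hypothetical helper, not part of the repository.
set -eu

pin_release() {
  dir="$1"
  tag="$2"
  repo="intel/intel-technology-enabling-for-openshift"
  # Rewrite .../<repo>/main/... -> .../<repo>/<tag>/... in place (GNU sed -i).
  find "$dir" -name '*.md' -exec \
    sed -i "s|raw.githubusercontent.com/${repo}/main/|raw.githubusercontent.com/${repo}/${tag}/|g" {} +
}
```

Run from a repository checkout, `pin_release . v1.3.1` would produce exactly the kind of uniform 31-line substitution this commit shows.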
2 changes: 1 addition & 1 deletion device_plugins/README.md
@@ -23,7 +23,7 @@ Follow the steps below to install Intel Device Plugins Operator using OpenShift
### Installation via command line interface (CLI)
Apply the [install_operator.yaml](/device_plugins/install_operator.yaml) file:
```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/device_plugins/install_operator.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/device_plugins/install_operator.yaml
```

### Verify Installation via CLI
2 changes: 1 addition & 1 deletion device_plugins/deploy_gpu.md
@@ -14,7 +14,7 @@
## Create CR via CLI
Apply the CR yaml file:
```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/device_plugins/gpu_device_plugin.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/device_plugins/gpu_device_plugin.yaml
```

## Verify via CLI
2 changes: 1 addition & 1 deletion device_plugins/deploy_qat.md
@@ -14,7 +14,7 @@
## Create CR via CLI
Apply the CR yaml file:
```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/device_plugins/qat_device_plugin.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/device_plugins/qat_device_plugin.yaml
```

## Verify via CLI
2 changes: 1 addition & 1 deletion device_plugins/deploy_sgx.md
@@ -14,7 +14,7 @@
## Create CR via CLI
Apply the CR yaml file:
```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/device_plugins/sgx_device_plugin.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/device_plugins/sgx_device_plugin.yaml
```

## Verify via CLI
4 changes: 2 additions & 2 deletions e2e/inference/README.md
@@ -36,7 +36,7 @@ To enable the interactive mode, the OpenVINO notebook CR needs to be created and

Create `AcceleratorProfile` in the `redhat-ods-applications` namespace
```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/e2e/inference/accelerator_profile_flex140.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/e2e/inference/accelerator_profile_flex140.yaml
```

3. Navigate to `openvino-notebooks` ImageStream and add the above created `AcceleratorProfile` key to the annotation field, as shown in the image below:
@@ -73,7 +73,7 @@ Follow the [link](https://github.com/openvinotoolkit/operator/blob/main/docs/not
Deploy the ```accelerator_profile_gaudi.yaml``` in the redhat-ods-applications namespace.

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/e2e/inference/accelerator_profile_gaudi.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/e2e/inference/accelerator_profile_gaudi.yaml
```

## See Also
8 changes: 4 additions & 4 deletions gaudi/README.md
@@ -15,13 +15,13 @@ If you are familiar with the steps here to manually provision the accelerator, t

The default kernel firmware search path `/lib/firmware` in RHCOS is not writable. The command below can be used to add the path `/var/lib/firmware` to the firmware search path list.
```
-oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/gaudi/gaudi_firmware_path.yaml
+oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/gaudi/gaudi_firmware_path.yaml
```

## Label Gaudi Accelerator Nodes With NFD
The NFD operator can be used to configure NFD to automatically detect the Gaudi accelerators and label the nodes for the following provisioning steps.
```
-oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/gaudi/gaudi_nfd_instance_openshift.yaml
+oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/gaudi/gaudi_nfd_instance_openshift.yaml
```
Verify NFD has labelled the node correctly:
```
…
```
@@ -42,7 +42,7 @@ Follow the steps below to install HabanaAI Operator using OpenShift web console:

### Installation via Command Line Interface (CLI)
```
-oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/gaudi/gaudi_install_operator.yaml
+oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/gaudi/gaudi_install_operator.yaml
```

### Verify Installation via CLI
@@ -70,7 +70,7 @@ To create a Habana Gaudi device plugin CR, follow the steps below.
### Create CR via CLI
Apply the CR yaml file:
```
-oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/gaudi/gaudi_device_config.yaml
+oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/gaudi/gaudi_device_config.yaml
```

### Verify the DeviceConfig CR is created
2 changes: 1 addition & 1 deletion kmmo/README.md
@@ -57,7 +57,7 @@ $ oc label node <node_name> intel.feature.node.kubernetes.io/dgpu-canary=true

3. Use pre-built mode to deploy the driver container.
```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/kmmo/intel-dgpu.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/kmmo/intel-dgpu.yaml
```

4. After the driver is verified on the cluster through the canary deployment, simply remove the line shown below from the [`intel-dgpu.yaml`](/kmmo/intel-dgpu.yaml) file and reapply the yaml file to deploy the driver to the entire cluster. As a cluster administrator, you can also select another deployment policy.
2 changes: 1 addition & 1 deletion machine_configuration/README.md
@@ -24,7 +24,7 @@ Any contribution in this area is welcome.
* Turn on `intel_iommu` kernel parameter and load `vfio_pci` at boot for QAT provisioning

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/machine_configuration/100-intel-qat-intel-iommu-on.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/machine_configuration/100-intel-qat-intel-iommu-on.yaml
```

Note: This will reboot the worker nodes when changing the kernel parameter through MCO.
4 changes: 2 additions & 2 deletions nfd/README.md
@@ -14,12 +14,12 @@ Note: As RHOCP cluster administrator, you might need to merge the NFD operator c

1. Create `NodeFeatureDiscovery` CR instance.
```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/nfd/node-feature-discovery-openshift.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/nfd/node-feature-discovery-openshift.yaml
```

2. Create `NodeFeatureRule` CR instance.
```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/nfd/node-feature-rules-openshift.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/nfd/node-feature-rules-openshift.yaml
```

## Verification
16 changes: 8 additions & 8 deletions tests/l2/dgpu/README.md
@@ -6,13 +6,13 @@ This workload runs [clinfo](https://github.com/Oblomov/clinfo) utilizing the i91
* Build the workload container image.

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/clinfo_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/clinfo_build.yaml
```

* Deploy and execute the workload.

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/clinfo_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/clinfo_job.yaml
```

* Check the results.
@@ -47,13 +47,13 @@ This workload runs ```hwinfo``` utilizing the i915 resource from GPU provisionin
* Build the workload container image.

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/hwinfo_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/hwinfo_build.yaml
```

* Deploy and execute the workload.

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/hwinfo_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/hwinfo_job.yaml
```

* Check the results
@@ -96,13 +96,13 @@ This workload runs [vainfo](https://github.com/intel/libva-utils) utilizing the
* Build the workload container image.

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/vainfo_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/vainfo_build.yaml
```

* Deploy and execute the workload.

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/vainfo_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/vainfo_job.yaml
```

* Check the results.
@@ -163,13 +163,13 @@ This workload runs various test programs from [libvpl](https://github.com/intel/
* Build the workload container image.

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/intelvpl_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/intelvpl_build.yaml
```

* Deploy and execute the workload.

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/intelvpl_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/intelvpl_job.yaml
```

* Check the results.
8 changes: 4 additions & 4 deletions tests/l2/qat/README.md
@@ -6,25 +6,25 @@ This workload runs [qatlib](https://github.com/intel/qatlib) sample tests using
Please replace the credentials in the buildconfig yaml with your Red Hat account login credentials.

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/qat/qatlib_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/qat/qatlib_build.yaml
```

* Create the SCC intel-qat-scc for Intel QAT based workloads, if this SCC has not already been created

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/security/qatlib_scc.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/security/qatlib_scc.yaml
```
* Create the intel-qat service account to use intel-qat-scc

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/security/qatlib_rbac.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/security/qatlib_rbac.yaml
```

* Deploy the qatlib workload job with intel-qat service account

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/qat/qatlib_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/qat/qatlib_job.yaml
```

* Check the results.
4 changes: 2 additions & 2 deletions tests/l2/sgx/README.md
@@ -2,13 +2,13 @@
This [SampleEnclave](https://github.com/intel/linux-sgx/tree/master/SampleCode/SampleEnclave) application workload from the Intel SGX SDK runs an Intel SGX enclave utilizing the EPC resource from the Intel SGX provisioning.
* Build the container image.
```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/sgx/sgx_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/sgx/sgx_build.yaml
```

* Deploy and run the workload.

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/sgx/sgx_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/sgx/sgx_job.yaml
```

* Check the results.
6 changes: 3 additions & 3 deletions workloads/opea/chatqna/README.md
@@ -65,7 +65,7 @@ For example:
```
…
```
```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/workloads/opea/chatqna/persistent_volumes.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/workloads/opea/chatqna/persistent_volumes.yaml
```

@@ -86,7 +86,7 @@ create_megaservice_container.sh

### Deploy Redis Vector Database Service
```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/workloads/opea/chatqna/redis_deployment_service.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/workloads/opea/chatqna/redis_deployment_service.yaml
```

@@ -109,7 +109,7 @@ redis-vector-db ClusterIP 1.2.3.4 <none> 6379/TCP,8001/T
Update the inference endpoint from the <image name> in the chatqna_megaservice_deployment.

```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/workloads/opea/chatqna/chatqna_megaservice_deployment.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/workloads/opea/chatqna/chatqna_megaservice_deployment.yaml
```

Check that the pod and service are running:

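After a bump like this one, it is easy for a stray `main`-branch URL to slip through in a file the author missed. A small check of our own (the `check_pinned` helper below is an assumption, not repository tooling) can assert that no Markdown file still references the main branch:

```shell
#!/bin/sh
# Sketch: fail when any Markdown file under a directory still links to this
# repository's main branch instead of a pinned release tag.
# check_pinned is a hypothetical helper, not part of the repository.

check_pinned() {
  dir="$1"
  # grep -r exits 0 when at least one unpinned URL is found; print offenders.
  if grep -rn --include='*.md' \
      'raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/' "$dir"; then
    echo "unpinned main-branch URLs remain" >&2
    return 1
  fi
  echo "all URLs pinned"
  return 0
}
```

Running something like `check_pinned .` in CI after a release bump would flag any file a commit such as this one overlooked.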