diff --git a/device_plugins/README.md b/device_plugins/README.md
index 1c6624d5..bf88ba12 100644
--- a/device_plugins/README.md
+++ b/device_plugins/README.md
@@ -23,7 +23,7 @@ Follow the steps below to install Intel Device Plugins Operator using OpenShift
 ### Installation via command line interface (CLI)
 Apply the [install_operator.yaml](/device_plugins/install_operator.yaml) file:
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/device_plugins/install_operator.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/device_plugins/install_operator.yaml
 ```

 ### Verify Installation via CLI
diff --git a/device_plugins/deploy_gpu.md b/device_plugins/deploy_gpu.md
index 9b46aff7..f17da05e 100644
--- a/device_plugins/deploy_gpu.md
+++ b/device_plugins/deploy_gpu.md
@@ -14,7 +14,7 @@
 ## Create CR via CLI
 Apply the CR yaml file:
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/device_plugins/gpu_device_plugin.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/device_plugins/gpu_device_plugin.yaml
 ```

 ## Verify via CLI
diff --git a/device_plugins/deploy_qat.md b/device_plugins/deploy_qat.md
index 8c8378ce..24e843ce 100644
--- a/device_plugins/deploy_qat.md
+++ b/device_plugins/deploy_qat.md
@@ -14,7 +14,7 @@
 ## Create CR via CLI
 Apply the CR yaml file:
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/device_plugins/qat_device_plugin.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/device_plugins/qat_device_plugin.yaml
 ```

 ## Verify via CLI
diff --git a/device_plugins/deploy_sgx.md b/device_plugins/deploy_sgx.md
index 6ebd2191..981b8a99 100644
--- a/device_plugins/deploy_sgx.md
+++ b/device_plugins/deploy_sgx.md
@@ -14,7 +14,7 @@
 ## Create CR via CLI
 Apply the CR yaml file:
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/device_plugins/sgx_device_plugin.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/device_plugins/sgx_device_plugin.yaml
 ```

 ## Verify via CLI
diff --git a/e2e/inference/README.md b/e2e/inference/README.md
index 10dc22ef..b3505810 100644
--- a/e2e/inference/README.md
+++ b/e2e/inference/README.md
@@ -36,7 +36,7 @@ To enable the interactive mode, the OpenVINO notebook CR needs to be created and

 Create `AcceleratorProfile` in the `redhat-ods-applications` namespace
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/e2e/inference/accelerator_profile_flex140.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/e2e/inference/accelerator_profile_flex140.yaml
 ```

 3. Navigate to `openvino-notebooks` ImageStream and add the above created `AcceleratorProfile` key to the annotation field, as shown in the image below:
@@ -73,7 +73,7 @@ Follow the [link](https://github.com/openvinotoolkit/operator/blob/main/docs/not

 Deploy the ```accelerator_profile_gaudi.yaml``` in the redhat-ods-applications namespace.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/e2e/inference/accelerator_profile_gaudi.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/e2e/inference/accelerator_profile_gaudi.yaml
 ```

 ## See Also
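
Note: after the device plugin CRs above are applied, a quick sanity check is to confirm that the provisioned nodes advertise the corresponding extended resources. A minimal sketch, assuming the resource names the Intel plugins conventionally register (`gpu.intel.com/i915`, `sgx.intel.com/epc`) — these names come from the linked READMEs' own verification text, not from this diff; adjust to the hardware actually provisioned:
```
# Inspect allocatable extended resources on a provisioned worker node
$ oc describe node <node_name> | grep -E 'gpu.intel.com/i915|sgx.intel.com/epc'
```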
diff --git a/gaudi/README.md b/gaudi/README.md
index 09a94f6e..a511e2b4 100644
--- a/gaudi/README.md
+++ b/gaudi/README.md
@@ -15,13 +15,13 @@ If you are familiar with the steps here to manually provision the accelerator, t
 The default kernel firmware search path `/lib/firmware` in RHCOS is not writable. The command below can be used to add the path `/var/lib/firmware` to the firmware search path list.
 ```
-oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/gaudi/gaudi_firmware_path.yaml
+oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/gaudi/gaudi_firmware_path.yaml
 ```

 ## Label Gaudi Accelerator Nodes With NFD
 The NFD operator can be used to configure NFD to automatically detect the Gaudi accelerators and label the nodes for the following provisioning steps.
 ```
-oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/gaudi/gaudi_nfd_instance_openshift.yaml
+oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/gaudi/gaudi_nfd_instance_openshift.yaml
 ```
 Verify NFD has labelled the node correctly:
 ```
@@ -42,7 +42,7 @@ Follow the steps below to install HabanaAI Operator using OpenShift web console:

 ### Installation via Command Line Interface (CLI)
 ```
-oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/gaudi/gaudi_install_operator.yaml
+oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/gaudi/gaudi_install_operator.yaml
 ```

 ### Verify Installation via CLI
@@ -70,7 +70,7 @@ To create a Habana Gaudi device plugin CR, follow the steps below.
 ### Create CR via CLI
 Apply the CR yaml file:
 ```
-oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/gaudi/gaudi_device_config.yaml
+oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/gaudi/gaudi_device_config.yaml
 ```

 ### Verify the DeviceConfig CR is created
diff --git a/kmmo/README.md b/kmmo/README.md
index 2cfe6e7d..436edf5c 100644
--- a/kmmo/README.md
+++ b/kmmo/README.md
@@ -57,7 +57,7 @@ $ oc label node intel.feature.node.kubernetes.io/dgpu-canary=true

 3. Use pre-build mode to deploy the driver container.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/kmmo/intel-dgpu.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/kmmo/intel-dgpu.yaml
 ```

 4. After the driver is verified on the cluster through the canary deployment, simply remove the line shown below from the [`intel-dgpu.yaml`](/kmmo/intel-dgpu.yaml) file and reapply the yaml file to deploy the driver to the entire cluster. As a cluster administrator, you can also select another deployment policy.
diff --git a/machine_configuration/README.md b/machine_configuration/README.md
index 20cb83f8..7177ec36 100644
--- a/machine_configuration/README.md
+++ b/machine_configuration/README.md
@@ -24,7 +24,7 @@ Any contribution in this area is welcome.

 * Turn on the `intel_iommu` kernel parameter and load `vfio_pci` at boot for QAT provisioning
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/machine_configuration/100-intel-qat-intel-iommu-on.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/machine_configuration/100-intel-qat-intel-iommu-on.yaml
 ```
 Note: This will reboot the worker nodes when changing the kernel parameter through MCO.
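
Note: because the MachineConfig above makes MCO reboot the worker nodes, it can be worth waiting for the pool to settle before continuing. A minimal sketch, assuming the default `worker` MachineConfigPool (the pool name is not part of this diff):
```
# Watch the pool converge after the kernel-parameter change
$ oc get mcp worker
$ oc wait mcp/worker --for=condition=Updated --timeout=30m
```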
diff --git a/nfd/README.md b/nfd/README.md
index 15a01b41..65c67648 100644
--- a/nfd/README.md
+++ b/nfd/README.md
@@ -14,12 +14,12 @@ Note: As RHOCP cluster administrator, you might need to merge the NFD operator c

 1. Create `NodeFeatureDiscovery` CR instance.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/nfd/node-feature-discovery-openshift.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/nfd/node-feature-discovery-openshift.yaml
 ```

 2. Create `NodeFeatureRule` CR instance.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/nfd/node-feature-rules-openshift.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/nfd/node-feature-rules-openshift.yaml
 ```

 ## Verification
diff --git a/tests/l2/dgpu/README.md b/tests/l2/dgpu/README.md
index b812a76a..b85176e7 100644
--- a/tests/l2/dgpu/README.md
+++ b/tests/l2/dgpu/README.md
@@ -6,13 +6,13 @@ This workload runs [clinfo](https://github.com/Oblomov/clinfo) utilizing the i91

 * Build the workload container image.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/clinfo_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/clinfo_build.yaml
 ```

 * Deploy and execute the workload.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/clinfo_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/clinfo_job.yaml
 ```

 * Check the results.
@@ -47,13 +47,13 @@ This workload runs ```hwinfo``` utilizing the i915 resource from GPU provisionin

 * Build the workload container image.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/hwinfo_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/hwinfo_build.yaml
 ```

 * Deploy and execute the workload.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/hwinfo_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/hwinfo_job.yaml
 ```

 * Check the results
@@ -96,13 +96,13 @@ This workload runs [vainfo](https://github.com/intel/libva-utils) utilizing the

 * Build the workload container image.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/vainfo_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/vainfo_build.yaml
 ```

 * Deploy and execute the workload.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/vainfo_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/vainfo_job.yaml
 ```

 * Check the results.
@@ -163,13 +163,13 @@ This workload runs various test programs from [libvpl](https://github.com/intel/

 * Build the workload container image.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/intelvpl_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/intelvpl_build.yaml
 ```

 * Deploy and execute the workload.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/intelvpl_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/dgpu/intelvpl_job.yaml
 ```

 * Check the results.
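
Note: the "Check the results" steps sit outside these hunks. For reference, reading a finished test Job's output typically looks like the sketch below; the namespace and job name are placeholders, not names taken from the yamls:
```
# List the workload jobs, then read the logs of a completed one
$ oc get jobs -n <namespace>
$ oc logs job/<job_name> -n <namespace>
```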
diff --git a/tests/l2/qat/README.md b/tests/l2/qat/README.md
index 837bc438..4f210c70 100644
--- a/tests/l2/qat/README.md
+++ b/tests/l2/qat/README.md
@@ -6,25 +6,25 @@ This workload runs [qatlib](https://github.com/intel/qatlib) sample tests using

 Please replace the credentials in the buildconfig yaml with your Red Hat account login credentials.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/qat/qatlib_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/qat/qatlib_build.yaml
 ```

 * Create the SCC intel-qat-scc for Intel QAT based workloads, if this SCC is not yet created
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/security/qatlib_scc.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/security/qatlib_scc.yaml
 ```

 * Create the intel-qat service account to use intel-qat-scc
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/security/qatlib_rbac.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/security/qatlib_rbac.yaml
 ```

 * Deploy the qatlib workload job with the intel-qat service account
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/qat/qatlib_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/qat/qatlib_job.yaml
 ```

 * Check the results.
diff --git a/tests/l2/sgx/README.md b/tests/l2/sgx/README.md
index 57de0698..70572a52 100644
--- a/tests/l2/sgx/README.md
+++ b/tests/l2/sgx/README.md
@@ -2,13 +2,13 @@ This [SampleEnclave](https://github.com/intel/linux-sgx/tree/master/SampleCode/SampleEnclave) application workload from the Intel SGX SDK runs an Intel SGX enclave utilizing the EPC resource from the Intel SGX provisioning.

 * Build the container image.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/sgx/sgx_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/sgx/sgx_build.yaml
 ```

 * Deploy and run the workload.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/sgx/sgx_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/tests/l2/sgx/sgx_job.yaml
 ```

 * Check the results.
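
Note: before deploying the qatlib job above, it can help to confirm that the security objects from the pinned yamls exist. A minimal sketch using the names given in the step descriptions (intel-qat-scc, intel-qat); the namespace placeholder is an assumption:
```
# Verify the SCC and the service account bound to it
$ oc get scc intel-qat-scc
$ oc get serviceaccount intel-qat -n <namespace>
```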
diff --git a/workloads/opea/chatqna/README.md b/workloads/opea/chatqna/README.md
index 83ba06c9..a790a1f7 100644
--- a/workloads/opea/chatqna/README.md
+++ b/workloads/opea/chatqna/README.md
@@ -65,7 +65,7 @@ For example:
 ```

 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/workloads/opea/chatqna/persistent_volumes.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/workloads/opea/chatqna/persistent_volumes.yaml
 ```

@@ -86,7 +86,7 @@ create_megaservice_container.sh

 ### Deploy Redis Vector Database Service
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/workloads/opea/chatqna/redis_deployment_service.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/workloads/opea/chatqna/redis_deployment_service.yaml
 ```

@@ -109,7 +109,7 @@ redis-vector-db ClusterIP 1.2.3.4 6379/TCP,8001/T

 Update the inference endpoint in the chatqna_megaservice_deployment.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/workloads/opea/chatqna/chatqna_megaservice_deployment.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.3.1/workloads/opea/chatqna/chatqna_megaservice_deployment.yaml
 ```

 Check that the pod and service are running:
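
Note: a minimal sketch of the "Check that the pod and service are running" step above, assuming the resources land in the current project; the service name `redis-vector-db` is taken from the hunk context:
```
# Confirm the ChatQnA pods and the Redis service are up
$ oc get pods
$ oc get svc redis-vector-db
```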