diff --git a/device_plugins/README.md b/device_plugins/README.md
index 7db5bec6..9f59b5cc 100644
--- a/device_plugins/README.md
+++ b/device_plugins/README.md
@@ -23,7 +23,7 @@ Follow the steps below to install Intel Device Plugins Operator using OpenShift
 ### Installation via command line interface (CLI)
 Apply the [install_operator.yaml](/device_plugins/install_operator.yaml) file:
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/device_plugins/install_operator.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/device_plugins/install_operator.yaml
 ```
 
 ### Verify Installation via CLI
diff --git a/device_plugins/deploy_dsa.md b/device_plugins/deploy_dsa.md
index 1b1b97bb..1fce3389 100644
--- a/device_plugins/deploy_dsa.md
+++ b/device_plugins/deploy_dsa.md
@@ -14,7 +14,7 @@
 ## Create CR via CLI
 Apply the CR yaml file:
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/device_plugins/dsa_device_plugin.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/device_plugins/dsa_device_plugin.yaml
 ```
 
 ## Verify via CLI
diff --git a/device_plugins/deploy_gpu.md b/device_plugins/deploy_gpu.md
index 9b46aff7..6ea496cc 100644
--- a/device_plugins/deploy_gpu.md
+++ b/device_plugins/deploy_gpu.md
@@ -14,7 +14,7 @@
 ## Create CR via CLI
 Apply the CR yaml file:
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/device_plugins/gpu_device_plugin.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/device_plugins/gpu_device_plugin.yaml
 ```
 
 ## Verify via CLI
diff --git a/device_plugins/deploy_qat.md b/device_plugins/deploy_qat.md
index 8c8378ce..b89fac13 100644
--- a/device_plugins/deploy_qat.md
+++ b/device_plugins/deploy_qat.md
@@ -14,7 +14,7 @@
 ## Create CR via CLI
 Apply the CR yaml file:
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/device_plugins/qat_device_plugin.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/device_plugins/qat_device_plugin.yaml
 ```
 
 ## Verify via CLI
diff --git a/device_plugins/deploy_sgx.md b/device_plugins/deploy_sgx.md
index 6ebd2191..e541fb49 100644
--- a/device_plugins/deploy_sgx.md
+++ b/device_plugins/deploy_sgx.md
@@ -14,7 +14,7 @@
 ## Create CR via CLI
 Apply the CR yaml file:
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/device_plugins/sgx_device_plugin.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/device_plugins/sgx_device_plugin.yaml
 ```
 
 ## Verify via CLI
diff --git a/e2e/inference/README.md b/e2e/inference/README.md
index 10dc22ef..e3ecbb9a 100644
--- a/e2e/inference/README.md
+++ b/e2e/inference/README.md
@@ -36,7 +36,7 @@ To enable the interactive mode, the OpenVINO notebook CR needs to be created and
 
 Create `AcceleratorProfile` in the `redhat-ods-applications` namespace
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/e2e/inference/accelerator_profile_flex140.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/e2e/inference/accelerator_profile_flex140.yaml
 ```
 
 3. Navigate to `openvino-notebooks` ImageStream and add the above created `AcceleratorProfile` key to the annotation field, as shown in the image below:
@@ -73,7 +73,7 @@ Follow the [link](https://github.com/openvinotoolkit/operator/blob/main/docs/not
 
 Deploy the ```accelerator_profile_gaudi.yaml``` in the redhat-ods-applications namespace.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/e2e/inference/accelerator_profile_gaudi.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/e2e/inference/accelerator_profile_gaudi.yaml
 ```
 
 ## See Also
diff --git a/gaudi/README.md b/gaudi/README.md
index 080274e4..dfe39374 100644
--- a/gaudi/README.md
+++ b/gaudi/README.md
@@ -15,13 +15,13 @@ If you are familiar with the steps here to manually provision the accelerator, t
 
 The default kernel firmware search path `/lib/firmware` in RHCOS is not writable. Command below can be used to add path `/var/lib/fimware` into the firmware search path list.
 ```
-oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/gaudi/gaudi_firmware_path.yaml
+oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/gaudi/gaudi_firmware_path.yaml
 ```
 
 ## Label Gaudi Accelerator Nodes With NFD
 NFD operator can be used to configure NFD to automatically detect the Gaudi accelerators and label the nodes for the following provisioning steps.
 ```
-oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/gaudi/gaudi_nfd_instance_openshift.yaml
+oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/gaudi/gaudi_nfd_instance_openshift.yaml
 ```
 Verify NFD has labelled the node correctly:
 ```
@@ -42,7 +42,7 @@ Follow the steps below to install Intel Gaudi Base Operator using OpenShift web
 
 ### Installation via Command Line Interface (CLI)
 ```
-oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/gaudi/gaudi_install_operator.yaml
+oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/gaudi/gaudi_install_operator.yaml
 ```
 
 ### Verify Installation via CLI
@@ -70,7 +70,7 @@ To create a Habana Gaudi device plugin CR, follow the steps below.
 ### Create CR via CLI
 Apply the CR yaml file:
 ```
-oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/gaudi/gaudi_device_config.yaml
+oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/gaudi/gaudi_device_config.yaml
 ```
 
 ### Verify the DeviceConfig CR is created
diff --git a/kmmo/README.md b/kmmo/README.md
index a6ad7488..f1682e1f 100644
--- a/kmmo/README.md
+++ b/kmmo/README.md
@@ -57,7 +57,7 @@ $ oc label node intel.feature.node.kubernetes.io/dgpu-canary=true
 
 3. Use pre-build mode to deploy the driver container.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/kmmo/intel-dgpu.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/kmmo/intel-dgpu.yaml
 ```
 4. After the driver is verified on the cluster through the canary deployment, simply remove the line shown below from the [`intel-dgpu.yaml`](/kmmo/intel-dgpu.yaml) file and reapply the yaml file to deploy the driver to the entire cluster.
 As a cluster administrator, you can also select another deployment policy.
diff --git a/machine_configuration/README.md b/machine_configuration/README.md
index e2690e1a..5ee23c01 100644
--- a/machine_configuration/README.md
+++ b/machine_configuration/README.md
@@ -24,7 +24,7 @@ Any contribution in this area is welcome.
 
 * Turn on `intel_iommu,sm_on` kernel parameter and load `vfio_pci` at boot for QAT and DSA provisioning
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/machine_configuration/100-intel-iommu-on.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/machine_configuration/100-intel-iommu-on.yaml
 ```
 
 Note: This will reboot the worker nodes when changing the kernel parameter through MCO.
diff --git a/nfd/README.md b/nfd/README.md
index d1e22c13..9df99313 100644
--- a/nfd/README.md
+++ b/nfd/README.md
@@ -14,12 +14,12 @@ Note: As RHOCP cluster administrator, you might need to merge the NFD operator c
 
 1. Create `NodeFeatureDiscovery` CR instance.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/nfd/node-feature-discovery-openshift.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/nfd/node-feature-discovery-openshift.yaml
 ```
 
 2. Create `NodeFeatureRule` CR instance.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/nfd/node-feature-rules-openshift.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/nfd/node-feature-rules-openshift.yaml
 ```
 
 ## Verification
diff --git a/tests/gaudi/l2/README.md b/tests/gaudi/l2/README.md
index a138287f..dff8aa97 100644
--- a/tests/gaudi/l2/README.md
+++ b/tests/gaudi/l2/README.md
@@ -4,7 +4,7 @@
 
 System Management Interface Tool (hl-smi) utility tool obtains information and monitors data of the Intel Gaudi AI accelerators. `hl-smi` tool is packaged with the Gaudi base image. Run below command to deploy and execute the tool:
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/gaudi/l2/hl-smi_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/gaudi/l2/hl-smi_job.yaml
 ```
 
 Verify Output:
@@ -41,7 +41,7 @@ HCCL (Habana Collective Communication Library) demo is a program that demonstrat
 
 Build the workload container image:
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/gaudi/l2/hccl_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/gaudi/l2/hccl_build.yaml
 ```
 Create service account with required permissions:
 ```
@@ -50,7 +50,7 @@ $ oc adm policy add-scc-to-user anyuid -z hccl-demo-anyuid-sa -n gaudi-validatio
 ```
 Deploy and execute the workload:
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/gaudi/l2/hccl_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/gaudi/l2/hccl_job.yaml
 ```
 
 Verify Output:
diff --git a/tests/l2/dgpu/README.md b/tests/l2/dgpu/README.md
index b812a76a..a852cfad 100644
--- a/tests/l2/dgpu/README.md
+++ b/tests/l2/dgpu/README.md
@@ -6,13 +6,13 @@ This workload runs [clinfo](https://github.com/Oblomov/clinfo) utilizing the i91
 
 * Build the workload container image.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/clinfo_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/l2/dgpu/clinfo_build.yaml
 ```
 
 * Deploy and execute the workload.
 
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/clinfo_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/l2/dgpu/clinfo_job.yaml
 ```
 
 * Check the results.
@@ -47,13 +47,13 @@ This workload runs ```hwinfo``` utilizing the i915 resource from GPU provisionin
 
 * Build the workload container image.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/hwinfo_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/l2/dgpu/hwinfo_build.yaml
 ```
 
 * Deploy and execute the workload.
 
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/hwinfo_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/l2/dgpu/hwinfo_job.yaml
 ```
 
 * Check the results
@@ -96,13 +96,13 @@ This workload runs [vainfo](https://github.com/intel/libva-utils) utilizing the
 
 * Build the workload container image.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/vainfo_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/l2/dgpu/vainfo_build.yaml
 ```
 
 * Deploy and execute the workload.
 
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/vainfo_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/l2/dgpu/vainfo_job.yaml
 ```
 
 * Check the results.
@@ -163,13 +163,13 @@ This workload runs various test programs from [libvpl](https://github.com/intel/
 
 * Build the workload container image.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/intelvpl_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/l2/dgpu/intelvpl_build.yaml
 ```
 
 * Deploy and execute the workload.
 
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/intelvpl_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/l2/dgpu/intelvpl_job.yaml
 ```
 
 * Check the results.
diff --git a/tests/l2/dsa/README.md b/tests/l2/dsa/README.md
index afe25ad9..65a32f08 100644
--- a/tests/l2/dsa/README.md
+++ b/tests/l2/dsa/README.md
@@ -6,25 +6,25 @@ This workload runs [accel-config](https://github.com/intel/idxd-config) sample t
 
 Please replace the credentials in buildconfig yaml with your RedHat account login credentials.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dsa/dsa_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/l2/dsa/dsa_build.yaml
 ```
 
 * Create SCC intel-dsa-scc for Intel DSA based workload, if this SCC is not created
 
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/security/dsa_scc.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/security/dsa_scc.yaml
 ```
 
 * Create the intel-dsa service account to use intel-dsa-scc
 
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/security/dsa_rbac.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/security/dsa_rbac.yaml
 ```
 
 * Deploy the accel-config workload job with intel-dsa service account
 
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dsa/dsa_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/l2/dsa/dsa_job.yaml
 ```
 
 * Check the results.
diff --git a/tests/l2/qat/README.md b/tests/l2/qat/README.md
index 837bc438..2c4d54ae 100644
--- a/tests/l2/qat/README.md
+++ b/tests/l2/qat/README.md
@@ -6,25 +6,25 @@ This workload runs [qatlib](https://github.com/intel/qatlib) sample tests using
 
 Please replace the credentials in buildconfig yaml with your RedHat account login credentials.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/qat/qatlib_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/l2/qat/qatlib_build.yaml
 ```
 
 * Create SCC intel-qat-scc for Intel QAT based workload, if this SCC is not created
 
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/security/qatlib_scc.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/security/qatlib_scc.yaml
 ```
 
 * Create the intel-qat service account to use intel-qat-scc
 
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/security/qatlib_rbac.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/security/qatlib_rbac.yaml
 ```
 
 * Deploy the qatlib workload job with intel-qat service account
 
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/qat/qatlib_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/l2/qat/qatlib_job.yaml
 ```
 
 * Check the results.
diff --git a/tests/l2/sgx/README.md b/tests/l2/sgx/README.md
index 57de0698..bdb886c7 100644
--- a/tests/l2/sgx/README.md
+++ b/tests/l2/sgx/README.md
@@ -2,13 +2,13 @@
 This [SampleEnclave](https://github.com/intel/linux-sgx/tree/master/SampleCode/SampleEnclave) application workload from the Intel SGX SDK runs an Intel SGX enclave utilizing the EPC resource from the Intel SGX provisioning.
 
 * Build the container image.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/sgx/sgx_build.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/l2/sgx/sgx_build.yaml
 ```
 
 * Deploy and run the workload.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/sgx/sgx_job.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/tests/l2/sgx/sgx_job.yaml
 ```
 
 * Check the results.
diff --git a/workloads/opea/chatqna/README.md b/workloads/opea/chatqna/README.md
index 83ba06c9..b95e9474 100644
--- a/workloads/opea/chatqna/README.md
+++ b/workloads/opea/chatqna/README.md
@@ -65,7 +65,7 @@ For example:
 ```
 
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/workloads/opea/chatqna/persistent_volumes.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/workloads/opea/chatqna/persistent_volumes.yaml
 ```
 
 
@@ -86,7 +86,7 @@ create_megaservice_container.sh
 
 ### Deploy Redis Vector Database Service
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/workloads/opea/chatqna/redis_deployment_service.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/workloads/opea/chatqna/redis_deployment_service.yaml
 ```
 
 
@@ -109,7 +109,7 @@ redis-vector-db ClusterIP 1.2.3.4 6379/TCP,8001/T
 
 Update the inference endpoint from the in the chatqna_megaservice_deployment.
 ```
-$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/workloads/opea/chatqna/chatqna_megaservice_deployment.yaml
+$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/workloads/opea/chatqna/chatqna_megaservice_deployment.yaml
 ```
 
 Check that the pod and service are running:
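Every hunk in this patch applies the same mechanical substitution: the `/main/` path segment in each raw.githubusercontent.com URL becomes `/v1.5.0/`. A minimal sketch of that substitution as a script, assuming a POSIX shell with `sed` available; the sample URL, the `TAG` variable, and the commented repo-wide one-liner are illustrative assumptions, not part of the patch itself:

```shell
#!/bin/sh
# Sketch: pin a raw.githubusercontent.com link from the mutable "main"
# branch to the v1.5.0 release tag -- the substitution this patch makes
# by hand in every README. TAG and the sample URL are assumptions.
TAG="v1.5.0"

url="https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/device_plugins/install_operator.yaml"

# Replace the "/main/" path segment with the release tag; "|" is used as
# the sed delimiter so the slashes in the URL need no escaping.
pinned=$(printf '%s\n' "$url" | sed "s|/main/|/${TAG}/|")
echo "$pinned"
# -> https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/v1.5.0/device_plugins/install_operator.yaml

# Applied repo-wide (from the repository root), the change might look like:
#   grep -rl 'intel-technology-enabling-for-openshift/main/' --include='*.md' . \
#     | xargs sed -i "s|intel-technology-enabling-for-openshift/main/|intel-technology-enabling-for-openshift/${TAG}/|g"
```

Pinning to a tag rather than `main` keeps the `oc apply` instructions reproducible: the manifests fetched by readers no longer change when the default branch moves.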