
e2e_inference: update readme with known issue #187

Merged 1 commit into intel:main on Dec 14, 2023

Conversation

vbedida79 (Contributor):

update about #107
Signed-off-by: vbedida79 [email protected]

@@ -2,6 +2,12 @@
Intel AI inference end-to-end solution with RHOCP is based on the Intel® Data Center GPU Flex Series provisioning, Intel® OpenVINO™, and [Red Hat OpenShift Data Science](https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-data-science) (RHODS) on RHOCP. There are two AI inference modes verified with Intel® Xeon® processors and Intel Data Center GPU Flex Series with RHOCP-4.12.
* Interactive mode – RHODS provides OpenVINO based Jupyter Notebooks for users to interactively debug the inference applications or [optimize the models](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) on RHOCP using data center GPU cards or Intel Xeon processors.
* Deployment mode – [OpenVINO Model Server](https://github.com/openvinotoolkit/model_server) (OVMS) can be used to deploy the inference workloads in data center and edge computing environments on RHOCP.

**NOTE**: Known issue on OCP 4.13
* To work around the GPU workloads [issue](https://github.com/intel/intel-technology-enabling-for-openshift/issues/107), please run the below command on the GPU node:
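For context on the deployment mode described above, OVMS is typically run as a containerized workload on RHOCP. The following is a minimal sketch of such a Deployment manifest; the `resnet` model name, `/models` path, PVC name `ovms-models`, and image tag are illustrative assumptions, not values from this PR or the README.

```yaml
# Minimal sketch of an OVMS Deployment on RHOCP/Kubernetes.
# Model name, model path, and PVC name are assumptions for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ovms-sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ovms-sample
  template:
    metadata:
      labels:
        app: ovms-sample
    spec:
      containers:
      - name: ovms
        image: openvino/model_server:latest
        args:
        - --model_name=resnet        # assumed model name
        - --model_path=/models/resnet
        - --port=9000                # gRPC port
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: models
          mountPath: /models
      volumes:
      - name: models
        persistentVolumeClaim:
          claimName: ovms-models     # assumed PVC holding the model repository
```

A Service (and, for GPU scheduling, the appropriate resource requests) would normally be added on top of this; the sketch only shows the server container itself.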
Contributor:

Please change it to:
Note: To make sure the AI inference workloads work properly, please follow the workaround section in the known SELinux regression issue.

@vbedida79 please work with @mregmi and add the workaround section in the issue description. Thanks!

vbedida79 (Contributor, Author):

Updated the PR and issue #107.

@uMartinXu (Contributor):
LGTM!

@uMartinXu uMartinXu merged commit 57a6369 into intel:main Dec 14, 2023
1 check failed