# Multi-node/Multi-GPU Inference with Hugging Face vLLM Serving Runtime

This guide provides step-by-step instructions for setting up multi-node, multi-GPU inference with the Hugging Face vLLM Serving Runtime. Before proceeding, please make sure you meet the following prerequisites and understand the limitations of this setup.

## Prerequisites

- Multi-node functionality is only supported in **RawDeployment** mode.
- **Auto-scaling is not available** for multi-node setups.
- A **Persistent Volume Claim (PVC)** is required for multi-node configurations, and it must support the **ReadWriteMany (RWX)** access mode.

### Key Validations

- `TENSOR_PARALLEL_SIZE` and `PIPELINE_PARALLEL_SIZE` cannot be set via environment variables. They must be configured through `workerSpec.tensorParallelSize` and `workerSpec.pipelineParallelSize` respectively.
- In a ServingRuntime designed for multi-node, both `workerSpec.tensorParallelSize` and `workerSpec.pipelineParallelSize` must be set.
- The minimum value for `workerSpec.tensorParallelSize` is **1**, and the minimum value for `workerSpec.pipelineParallelSize` is **2**.
- You can specify the GPU type via the InferenceService, but if it differs from the GPU type set in the ServingRuntime, both GPU types are assigned to the resource, which can cause issues.
- The autoscaler must be configured as `external`.
- The only supported storage protocol for the StorageURI is `PVC`.
- By default, the following four GPU resource types are allowed, with `nvidia.com/gpu` being the default:
    ~~~
    "nvidia.com/gpu"
    "amd.com/gpu"
    "intel.com/gpu"
    "habana.ai/gaudi"
    ~~~
    - If you want to use other GPU types, set them in the annotations of the InferenceService as follows:
        ~~~
        serving.kserve.io/gpu-resource-types: '["gpu-type1", "gpu-type2", "gpu-type3"]'
        ~~~

!!! note

    You must have **exactly one head pod** in your setup. The replica count for this head pod can be adjusted using the `min_replicas` or `max_replicas` settings in the `InferenceService (ISVC)`. Creating additional head pods will cause them to be excluded from the Ray cluster, and the deployment will not function properly. Keep this limitation in mind.

    Do not use two different GPU types for multi-node serving.

### Considerations

Using the multi-node feature usually means you are deploying a very large model. In such cases, consider increasing the `initialDelaySeconds` of the `livenessProbe`, `readinessProbe`, and `startupProbe`, as the default values may not suit your needs.

You can set a `startupProbe` in the ServingRuntime to match your own situation:
~~~
..
  startupProbe:
    failureThreshold: 40
    periodSeconds: 30
    successThreshold: 1
    timeoutSeconds: 30
    initialDelaySeconds: 20
..
~~~

## WorkerSpec and ServingRuntime

To enable multi-node/multi-GPU inference, `workerSpec` must be configured in both the ServingRuntime and the InferenceService. The `huggingface-server-multinode` `ServingRuntime` already includes this field and is built on **vLLM**, which supports multi-node/multi-GPU inference. Note that this setup is **not compatible with Triton**.
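For reference, the abridged sketch below illustrates the general shape of such a multi-node runtime: the head container is defined under `spec.containers`, while the worker pods, together with the `tensorParallelSize` and `pipelineParallelSize` defaults, live under `spec.workerSpec`. This outline is illustrative only; the kind, image, and container names are assumptions, so consult the `kserve-huggingfaceserver-multinode` runtime shipped with your KServe version for the authoritative definition.

```yaml
# Abridged, illustrative outline of a multi-node runtime; values here are assumptions,
# not the exact manifest shipped with KServe.
apiVersion: serving.kserve.io/v1alpha1
kind: ClusterServingRuntime          # may be a namespaced ServingRuntime in your installation
metadata:
  name: kserve-huggingfaceserver-multinode
spec:
  supportedModelFormats:
    - name: huggingface
      autoSelect: true
  containers:
    - name: kserve-container         # head pod container (assumed name)
      image: kserve/huggingfaceserver:latest   # assumed image reference
      # command, args, resources, and probes omitted for brevity
  workerSpec:
    pipelineParallelSize: 2          # default number of nodes (head + workers)
    tensorParallelSize: 1            # default GPUs per node
    containers:
      - name: worker-container       # worker pod container (assumed name)
        image: kserve/huggingfaceserver:latest
        # command, args, and resources omitted for brevity
```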
!!! note

    Even if the `ServingRuntime` is properly configured with `workerSpec`, multi-node/multi-GPU inference will not be enabled unless the InferenceService also configures `workerSpec`.

```
...
  predictor:
    model:
      runtime: kserve-huggingfaceserver-multinode
      modelFormat:
        name: huggingface
      storageUri: pvc://llama-3-8b-pvc/hf/8b_instruction_tuned
    workerSpec: {} # Specifying workerSpec indicates that multi-node functionality will be used
```

## Key Configurations

When using the `huggingface-server-multinode` `ServingRuntime`, there are two critical configurations you need to understand:

1. **`workerSpec.tensorParallelSize`**:
   This setting controls how many GPUs are used per node. The GPU resource count in both the head and worker node Deployments is updated automatically.

2. **`workerSpec.pipelineParallelSize`**:
   This setting determines how many nodes are involved in the deployment. It represents the total number of nodes, including both the head and the worker nodes.

For example, `tensorParallelSize: 2` combined with `pipelineParallelSize: 3` deploys three nodes (one head and two workers), each using two GPUs, for a total of six GPUs.

### Example InferenceService

Here’s an example of an `InferenceService` configuration for a Hugging Face model:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: huggingface-llama3
  annotations:
    serving.kserve.io/deploymentMode: RawDeployment
    serving.kserve.io/autoscalerClass: external
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface
      storageUri: pvc://llama-3-8b-pvc/hf/8b_instruction_tuned
    workerSpec:
      pipelineParallelSize: 2
      tensorParallelSize: 1
```

## Serve the Hugging Face vLLM Model Using 2 Nodes

Follow these steps to serve the Hugging Face vLLM model using a multi-node setup.

### 1. Create a Persistent Volume Claim (PVC)

First, create a PVC for model storage. Be sure to update `%fileStorageClassName%` with your actual storage class.

```yaml
kubectl apply -f - <