Commit
* Begin docs Signed-off-by: Ryan Wolf <[email protected]>
* Add slurm sdk example Signed-off-by: Ryan Wolf <[email protected]>
* Use safe import Signed-off-by: Ryan Wolf <[email protected]>
* Fix bugs in sdk Signed-off-by: Ryan Wolf <[email protected]>
* Update docs and tweak scripts Signed-off-by: Ryan Wolf <[email protected]>
* Add interface helper function Signed-off-by: Ryan Wolf <[email protected]>
* Update docs Signed-off-by: Ryan Wolf <[email protected]>
* Fix formatting Signed-off-by: Ryan Wolf <[email protected]>
* Add config docstring Signed-off-by: Ryan Wolf <[email protected]>
* Address comments Signed-off-by: Ryan Wolf <[email protected]>

---------

Signed-off-by: Ryan Wolf <[email protected]>
Showing 10 changed files with 338 additions and 8 deletions.
@@ -0,0 +1,127 @@
.. _data-curator-nemo-sdk:

======================================
NeMo Curator with NeMo SDK
======================================

-----------------------------------------
NeMo SDK
-----------------------------------------

The NeMo SDK is a general-purpose tool for configuring and executing Python functions and scripts across various computing environments.
It is used across the NeMo Framework for managing machine learning experiments.
One of the key features of the NeMo SDK is the ability to run code locally or on platforms like SLURM with minimal changes.

-----------------------------------------
Usage
-----------------------------------------

We recommend becoming familiar with the NeMo SDK before jumping into this example. The documentation can be found here.

Let's walk through an example of how you can launch a Slurm job using `examples/launch_slurm.py <https://github.com/NVIDIA/NeMo-Curator/blob/main/examples/nemo_sdk/launch_slurm.py>`_.

.. code-block:: python

    import nemo_sdk as sdk
    from nemo_sdk.core.execution import SlurmExecutor

    from nemo_curator.nemo_sdk import SlurmJobConfig


    @sdk.factory
    def nemo_curator_slurm_executor() -> SlurmExecutor:
        """
        Configure the following function with the details of your SLURM cluster
        """
        return SlurmExecutor(
            job_name_prefix="nemo-curator",
            account="my-account",
            nodes=2,
            exclusive=True,
            time="04:00:00",
            container_image="nvcr.io/nvidia/nemo:dev",
            container_mounts=["/path/on/machine:/path/in/container"],
        )

First, we need to define a factory that can produce a ``SlurmExecutor``.
This executor is where you define all of your cluster parameters. Note: NeMo SDK currently only supports running on SLURM clusters with `Pyxis <https://github.com/NVIDIA/pyxis>`_.
After this comes the main function:

.. code-block:: python

    # Path to NeMo-Curator/examples/slurm/container-entrypoint.sh on the SLURM cluster
    container_entrypoint = "/cluster/path/slurm/container_entrypoint.sh"
    # The NeMo Curator command to run
    curator_command = "text_cleaning --input-data-dir=/path/to/data --output-clean-dir=/path/to/output"
    curator_job = SlurmJobConfig(
        job_dir="/home/user/jobs",
        container_entrypoint=container_entrypoint,
        script_command=curator_command,
    )

First, we need to specify the path to `examples/slurm/container-entrypoint.sh <https://github.com/NVIDIA/NeMo-Curator/blob/main/examples/slurm/container-entrypoint.sh>`_ on the cluster.
This shell script is responsible for setting up the Dask cluster on Slurm and will be the main script run.
Therefore, we need to define the path to it.

Second, we need to establish the NeMo Curator script we want to run.
This can be a command-line utility like ``text_cleaning`` above, or it can be your own custom script run with ``python path/to/script.py``.

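For illustration, a hypothetical custom script could be passed in the same way; the path and arguments below are placeholders rather than files shipped with NeMo Curator.

.. code-block:: python

    # Any command runnable inside the container can serve as the script command.
    # This script path and its arguments are illustrative placeholders.
    curator_command = "python /cluster/path/my_custom_curation.py --input-data-dir=/path/to/data"
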
Finally, we combine all of these into a ``SlurmJobConfig``. This config has many options for configuring the Dask cluster.
We'll highlight a couple of important ones:

* ``device="cpu"`` determines the type of Dask cluster to initialize. If you are using GPU modules, set this to ``"gpu"``.
* ``interface="eth0"`` specifies the network interface to use for communication within the Dask cluster. It will likely be different on your Slurm cluster, so modify it as needed. You can determine which interfaces are available by running the following function on your cluster.

.. code-block:: python

    from nemo_curator import get_network_interfaces

    print(get_network_interfaces())
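
For GPU workloads, a configuration might look like the following sketch; the interface name, protocol, and command are assumptions to adapt to your cluster, not values prescribed by NeMo Curator.

.. code-block:: python

    # A sketch of a GPU-oriented configuration. The field names come from
    # SlurmJobConfig; the specific values here are illustrative assumptions.
    gpu_curator_job = SlurmJobConfig(
        job_dir="/home/user/jobs",
        container_entrypoint=container_entrypoint,
        script_command=curator_command,  # substitute a GPU-based NeMo Curator command
        device="gpu",
        interface="ib0",
        protocol="ucx",
        rmm_worker_pool_size="72GiB",
    )

With the job configured, we resolve the executor and launch the experiment:
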
.. code-block:: python

    executor = sdk.resolve(SlurmExecutor, "nemo_curator_slurm_executor")
    with sdk.Experiment("example_nemo_curator_exp", executor=executor) as exp:
        exp.add(curator_job.to_script(), tail_logs=True)
        exp.run(detach=False)
After configuring the job, we can finally run it.
First, we use ``sdk`` to resolve our custom factory.
Next, we use it to begin an experiment named "example_nemo_curator_exp" running on our Slurm executor.

``exp.add(curator_job.to_script(), tail_logs=True)`` adds the NeMo Curator script to the experiment.
It converts the ``SlurmJobConfig`` to an ``sdk.Script``.
This ``curator_job.to_script()`` call has two important parameters:

* ``add_scheduler_file=True``
* ``add_device=True``

Both of these modify the command specified in ``curator_command``.
Setting both to ``True`` (the default) transforms the original command from:

.. code-block:: bash

    # Original command
    text_cleaning \
        --input-data-dir=/path/to/data \
        --output-clean-dir=/path/to/output

to:

.. code-block:: bash

    # Modified command
    text_cleaning \
        --input-data-dir=/path/to/data \
        --output-clean-dir=/path/to/output \
        --scheduler-file=/path/to/scheduler/file \
        --device="cpu"

As you can see, ``add_scheduler_file=True`` causes ``--scheduler-file=/path/to/scheduler/file`` to be appended to the command, and ``add_device=True`` causes ``--device="cpu"`` (or whatever the device is set to) to be appended.
``/path/to/scheduler/file`` is determined by the ``SlurmJobConfig``, and ``device`` is whatever the user specified in the ``device`` parameter previously.

The scheduler file argument is necessary to connect to the Dask cluster on Slurm.
All NeMo Curator scripts accept both arguments, so the default is to add them automatically.
If your script is configured differently, feel free to turn these off, as shown in the sketch below.
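
A minimal sketch of disabling both flags, for scripts that handle the scheduler file and device selection themselves:

.. code-block:: python

    # Pass the command through unmodified; the script must then locate the
    # Dask scheduler file and choose its device on its own.
    exp.add(
        curator_job.to_script(add_scheduler_file=False, add_device=False),
        tail_logs=True,
    )
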
The final line ``exp.run(detach=False)`` starts the experiment on the Slurm cluster.
@@ -0,0 +1,56 @@
# Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import nemo_sdk as sdk
from nemo_sdk.core.execution import SlurmExecutor

from nemo_curator.nemo_sdk import SlurmJobConfig


@sdk.factory
def nemo_curator_slurm_executor() -> SlurmExecutor:
    """
    Configure the following function with the details of your SLURM cluster
    """
    return SlurmExecutor(
        job_name_prefix="nemo-curator",
        account="my-account",
        nodes=2,
        exclusive=True,
        time="04:00:00",
        container_image="nvcr.io/nvidia/nemo:dev",
        container_mounts=["/path/on/machine:/path/in/container"],
    )


def main():
    # Path to NeMo-Curator/examples/slurm/container-entrypoint.sh on the SLURM cluster
    container_entrypoint = "/cluster/path/slurm/container_entrypoint.sh"
    # The NeMo Curator command to run
    # This command can be substituted with any NeMo Curator command
    curator_command = "text_cleaning --input-data-dir=/path/to/data --output-clean-dir=/path/to/output"
    curator_job = SlurmJobConfig(
        job_dir="/home/user/jobs",
        container_entrypoint=container_entrypoint,
        script_command=curator_command,
    )

    executor = sdk.resolve(SlurmExecutor, "nemo_curator_slurm_executor")
    with sdk.Experiment("example_nemo_curator_exp", executor=executor) as exp:
        exp.add(curator_job.to_script(), tail_logs=True)
        exp.run(detach=False)


if __name__ == "__main__":
    main()
@@ -0,0 +1,17 @@
# Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from .slurm import SlurmJobConfig

__all__ = ["SlurmJobConfig"]
@@ -0,0 +1,110 @@
# Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from dataclasses import dataclass
from typing import Dict

from nemo_curator.utils.import_utils import safe_import

sdk = safe_import("nemo_sdk")


@dataclass
class SlurmJobConfig:
    """
    Configuration for running a NeMo Curator script on a SLURM cluster using
    NeMo SDK
    Args:
        job_dir: The base directory where all the files related to setting up
            the Dask cluster for NeMo Curator will be written
        container_entrypoint: A path to the container-entrypoint.sh script
            on the cluster. container-entrypoint.sh is found in the repo
            here: https://github.com/NVIDIA/NeMo-Curator/blob/main/examples/slurm/container-entrypoint.sh
        script_command: The NeMo Curator CLI tool to run. Pass any additional arguments
            needed directly in this string.
        device: The type of script that will be running, and therefore the type
            of Dask cluster that will be created. Must be either "cpu" or "gpu".
        interface: The network interface the Dask cluster will communicate over.
            Use nemo_curator.get_network_interfaces() to get a list of available ones.
        protocol: The networking protocol to use. Can be either "tcp" or "ucx".
            Setting to "ucx" is recommended for GPU jobs if your cluster supports it.
        cpu_worker_memory_limit: The maximum memory per process that a Dask worker can use.
            "5GB" or "5000M" are examples. "0" means no limit.
        rapids_no_initialize: Will delay or disable the CUDA context creation of RAPIDS libraries,
            allowing for improved compatibility with UCX-enabled clusters and preventing runtime warnings.
        cudf_spill: Enables automatic spilling (and “unspilling”) of buffers from device to host to
            enable out-of-memory computation, i.e., computing on objects that occupy more memory
            than is available on the GPU.
        rmm_scheduler_pool_size: Sets a small pool of GPU memory for message transfers when
            the scheduler is using ucx
        rmm_worker_pool_size: The amount of GPU memory each GPU worker process may use.
            Recommended to set at 80-90% of available GPU memory. 72GiB is good for A100/H100
        libcudf_cufile_policy: Allows reading/writing directly from storage to GPU.
    """

    job_dir: str
    container_entrypoint: str
    script_command: str
    device: str = "cpu"
    interface: str = "eth0"
    protocol: str = "tcp"
    cpu_worker_memory_limit: str = "0"
    rapids_no_initialize: str = "1"
    cudf_spill: str = "1"
    rmm_scheduler_pool_size: str = "1GB"
    rmm_worker_pool_size: str = "72GiB"
    libcudf_cufile_policy: str = "OFF"

    def to_script(self, add_scheduler_file: bool = True, add_device: bool = True):
        """
        Converts to a script object executable by NeMo SDK
        Args:
            add_scheduler_file: Automatically appends a '--scheduler-file' argument to the
                script_command where the value is job_dir/logs/scheduler.json. All
                scripts included in NeMo Curator accept and require this argument to scale
                properly on SLURM clusters.
            add_device: Automatically appends a '--device' argument to the script_command
                where the value is the member variable of device. All scripts included in
                NeMo Curator accept and require this argument.
        Returns:
            A NeMo SDK Script that will initialize a Dask cluster and run the specified command.
            It is designed to be executed on a SLURM cluster.
        """
        env_vars = self._build_env_vars()

        if add_scheduler_file:
            env_vars[
                "SCRIPT_COMMAND"
            ] += f" --scheduler-file={env_vars['SCHEDULER_FILE']}"
        if add_device:
            env_vars["SCRIPT_COMMAND"] += f" --device={env_vars['DEVICE']}"

        # Surround the command in quotes so the variable gets set properly
        env_vars["SCRIPT_COMMAND"] = f"\"{env_vars['SCRIPT_COMMAND']}\""

        return sdk.Script(path=self.container_entrypoint, env=env_vars)

    def _build_env_vars(self) -> Dict[str, str]:
        env_vars = vars(self)
        # Convert to uppercase to match container-entrypoint.sh
        env_vars = {key.upper(): val for key, val in env_vars.items()}

        env_vars["LOGDIR"] = f"{self.job_dir}/logs"
        env_vars["PROFILESDIR"] = f"{self.job_dir}/profiles"
        env_vars["SCHEDULER_FILE"] = f"{env_vars['LOGDIR']}/scheduler.json"
        env_vars["SCHEDULER_LOG"] = f"{env_vars['LOGDIR']}/scheduler.log"
        env_vars["DONE_MARKER"] = f"{env_vars['LOGDIR']}/done.txt"

        return env_vars