From 168fc524fb89f66a7821c79f94128c1c725438f9 Mon Sep 17 00:00:00 2001
From: Oded Messer
Date: Tue, 21 Feb 2023 13:20:39 +0200
Subject: [PATCH] Clarifying existing content - take 2 (#301)

---
 content/docs/user-guide/deploying/index.md | 24 ++++++------
 content/docs/user-guide/index.md           | 24 +++++++++---
 content/docs/user-guide/serving/index.md   | 45 +++++++++++-----------
 3 files changed, 54 insertions(+), 39 deletions(-)

diff --git a/content/docs/user-guide/deploying/index.md b/content/docs/user-guide/deploying/index.md
index 0118b1a7..b1b343eb 100644
--- a/content/docs/user-guide/deploying/index.md
+++ b/content/docs/user-guide/deploying/index.md
@@ -11,9 +11,12 @@ Each deployment is MLEM Object that holds following parameters:
   implementation you chose

 Also, each deployment has **state**, which is a snapshot of the actual state of
-your deployment. It is created and updated by MLEM during deployment process to
-keep track of parameters needed for management. It is stored separately from
-declaration.
+your deployment. It is created and updated by MLEM during the deployment process
+to keep track of parameters needed for state management, and is stored
+separately from the declaration. If you use Git to develop your models
+(recommended!), the declaration should be committed to the repo, while the state
+should be gitignored and kept in remote storage such as S3, so it can be updated
+when you re-deploy the model locally or in CI/CD without creating new commits.

 ## Simple deployment

@@ -22,14 +25,14 @@ configuration. You just need your model saved with MLEM and an environment you
 want to deploy to

 ```yaml
-$ mlem deployment run \ --model \ --some_option
-option_value
+$ mlem deployment run \ --model \ --some_option
+
 ```

 A MLEM Object named `` of type `deployment` will be created and deployed to
 target environment.

-To view all available `` values, run `mlem types env`. Some of them
+To view all available `` values, run `mlem types env`. Some of them
 may require setting up credential information or other parameters, which can be
 provided to `mlem deployment run` command via options.

@@ -56,7 +59,8 @@ This will stop the deployment and erase deployment state value

 ## Making requests

-You also can create MLEM Client for your deployment to make some requests:
+You can also create a MLEM client for your deployment from Python code
+(using `mlem.api.load`) to make requests:

 ```python
 from mlem.api import load
@@ -66,14 +70,12 @@ client = service.get_client()
 res = client.predict(data)
 ```

-Or run `deployment apply` from command line:
+Or use `mlem deployment apply` from the command line:

 ```cli
 $ mlem deployment apply
 ```

----
-
 ## Pre-defining deployment

 You can also create deployments without actually running them and later trigger

@@ -140,7 +142,7 @@ databases, key-value stores etc. Please express your interest in them via
 issues.

 Setting up remote state manager is a lot like setting DVC remote. All you need
-to do is provide a URI where you want to store state files. E.g. for s3 it will
+to do is provide the URI where you want to store state files, e.g. for S3 it will
 look like this

 ```cli
diff --git a/content/docs/user-guide/index.md b/content/docs/user-guide/index.md
index f357ad75..6ef1bfed 100644
--- a/content/docs/user-guide/index.md
+++ b/content/docs/user-guide/index.md
@@ -1,11 +1,23 @@
 # User Guide

 Our guides describe the major concepts in MLEM and how it works comprehensively,
-explaining when and how to use what, as well as inter-relationship between them.
+explaining when and how to use its features.

-The topics here range from more foundational (impacting many parts of MLEM) to
-more specific and advanced things you can do. We also include a few misc.
-guides, for example related to [contributing to MLEM](/doc/contributing) itself.
+## Codification: the MLEM way

-Please choose from the navigation sidebar to the left, or click the `Next`
-button below ↘
+Saving machine learning models to files or loading them back into Python objects
+may seem like a simple task at first. For example, the `pickle` and `torch`
+libraries can serialize/deserialize model objects to/from files. However, MLEM
+adds some "special sauce" by inspecting the objects, [saving] their metadata
+into `.mlem` metafiles, and using that metadata intelligently later on.
+
+The metadata in `.mlem` files is necessary to reliably enable actions like
+[packaging] and [serving] different model types in various ways. MLEM removes
+a lot of the pain points we would hit in typical ML workflows by codifying
+and managing the information about our ML models (or other [objects])
+for us.
+
+[saving]: /doc/user-guide/models
+[packaging]: /doc/user-guide/building
+[serving]: /doc/user-guide/serving
+[objects]: /doc/user-guide/basic-concepts
diff --git a/content/docs/user-guide/serving/index.md b/content/docs/user-guide/serving/index.md
index a2cd7b2c..304ff091 100644
--- a/content/docs/user-guide/serving/index.md
+++ b/content/docs/user-guide/serving/index.md
@@ -6,7 +6,7 @@ pages.

 ## Running server

-To start up FastAPI server run:
+To start a FastAPI server, run:

 ```cli
 $ mlem serve fastapi --model https://github.com/iterative/example-mlem-get-started/rf
@@ -23,37 +23,38 @@ INFO: Application startup complete.
 INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
 ```

-
+The server is now running and listening for requests on the URL shown above.
+Endpoints are created automatically from model methods, using the `sample_data`
+provided when [saving the model](#saving-your-model) to infer the payload
+schema. You can open the [Swagger UI](http://localhost:8080/docs) in your
+browser to explore the OpenAPI spec and query examples.

-Servers automatically create endpoints from model methods using the
-`sample_data` argument provided to [`mlem.api.save`](/doc/api-reference/save) to
-infer the payload schemas.
-
+Serving the model requires the correct packages to be installed. The needed
+requirements are inferred from the model metadata extracted when saving it. You
+can read more about this in
+[model codification](/doc/user-guide/basic-concepts#model-codification).

-Note, that serving the model requires you to have the correct packages to be
-installed. You can check out how to create
-[a `venv` with right packages](/doc/user-guide/building/venv) with MLEM, or how
-to serve the model in a [Docker container](/doc/user-guide/deploying/docker).
+

 ## Making requests

-You can open Swagger UI (OpenAPI) at
-[http://localhost:8080/docs](http://localhost:8080/docs) to check out OpenAPI
-spec and query examples.
-
-Each server implementation also has its client implementation counterpart, in
-the case of FastAPI server it’s HTTPClient. Clients can be used to make requests
-to servers. Since a server also exposes the model interface description, the
-client will know what methods are available and handle serialization and
-deserialization for you. You can use them via CLI:
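+Under the hood, a served model is just an HTTP API, so any HTTP client can
+query it. As a rough sketch, assuming the server from the section above is
+still running on `0.0.0.0:8080` and your model exposes a `predict` method
+(the exact request body depends on your model's `sample_data`, so check the
+generated OpenAPI docs for the real schema):
+
+```py
+# Illustrative only: querying the served model with plain HTTP.
+# The payload below is a placeholder; see http://localhost:8080/docs for the
+# schema generated from your model's sample_data.
+import requests
+
+response = requests.post(
+    "http://localhost:8080/predict",
+    json={"data": [[5.1, 3.5, 1.4, 0.2]]},  # placeholder payload
+)
+print(response.json())
+```
+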
+Each server implementation also has its client counterpart (e.g. `HTTPClient`
+for FastAPI). Clients can be used to make requests to their corresponding
+servers. Since a server also exposes the model interface description, the client
+will know what methods are available and handle serialization and
+deserialization for you. You can use them via `mlem apply-remote`:

 ```cli
-$ mlem apply-remote http test_x.csv --host="0.0.0.0" --port=8080 --json
+$ mlem apply-remote http test_x.csv \
+  --json \
+  --host="0.0.0.0" \
+  --port=8080
 [1, 0, 2, 1, 1, 0, 1, 2, 1, 1, 2, 0, 0, 0, 0, 1, 2, 1, 1, 2, 0, 2, 0, 2, 2,
  2, 2, 2, 0, 0, 0, 0, 1, 0, 0, 2, 1, 0]
 ```

-or via Python API:
+Or from Python, using `mlem.api`:

 ```py
 from mlem.api import load
@@ -65,7 +66,7 @@ res = client.predict(load("test_x.csv"))
-### 💡 Or query the model directly with curl
+### Or query the model directly from the terminal using `curl`

 ```cli
 $ curl -X 'POST' \