GS: more feedback #129
Related to #58.
@jorgeorpinel, I see you updated the description above. Some feedback looks outdated, so I assume this is the feedback for mlem.ai (i.e. …). Or you can keep it as it is: I'll apply what I think should be applied, and then you'll take a look at #188 and we'll continue there. UPD: get-started/deploying was not updated yet in #188.
Feel free to …
@jorgeorpinel, @mike0sv: while rewriting Get Started (see the review app) to be pretty simple while still demonstrating key Use Cases for MLEM, I'm tempted to showcase a local Docker deploy. Basically, the simplicity of this approach compared to Heroku is that (1) you don't need to log in/register to Heroku and get an API key, and (2) there's no need to explain what a MLEM Env is (I may be incorrect here TBH, Mike please check me). Still, deploying somewhere outside the local machine is a real deployment, while a local Docker deploy is just a local deploy 🤔
Ok, I did everything with Heroku and I like it. No need to use a local Docker deploy, I think. @jorgeorpinel, I think I addressed almost everything here, except for:
Could you please check out the new GS index page? Do we need to shorten the code block? Do we need to shorten the YAML output? Is this OK to do? My point: maybe it's long, but it's reproducible. This is what I would expect from a GS: it should teach me the basic features of MLEM.
Btw, if we're going to use another dataset (not iris), the training script will be longer. In that case, we'll have to extract the details to some repo, I'm afraid. Right now, the "pro" of the current GS is that we don't need any repo. But still, we have … WDYT? @jorgeorpinel
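(For context, here's a minimal sketch of the kind of self-contained, repo-free training script discussed above. It assumes scikit-learn for the iris data and model; the `sample_data` argument of `mlem.api.save` may be named differently depending on the MLEM version.)

```python
# Minimal, repo-free GS training script (sketch; assumes scikit-learn is
# installed and that `save` accepts a `sample_data` argument).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

from mlem.api import save

# Built-in dataset, so no external repo or data files are needed
data, y = load_iris(return_X_y=True, as_frame=True)

model = RandomForestClassifier(n_estimators=50, random_state=42)
model.fit(data, y)

# Passing sample data lets MLEM capture the model's input schema
save(model, "models/rf", sample_data=data)
```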
Re: Heroku vs. Docker deployment (GS)
It's a good idea but installing Docker locally is kind of a pain from what I remember (and some features are not supported outside of Linux). Heroku seems more realistic indeed, and users who don't want to try it can get a good idea from just reading the GS (if it's well done).
Didn't get what you meant there.
What about …?
Nice. See my review. Truncating code blocks in general is helpful (both to emphasize the important parts of such code blocks and to make the doc faster to read).
Thanks for the feedback, @jorgeorpinel! I applied the suggestions from your review as well, except for a few things (left my comments there).
No, serve is standalone. Build uses serve under the hood. Deploy uses build and serve under the hood. I explained this in get-started/building.
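(To illustrate the layering just described — this is not MLEM's actual source, just a conceptual sketch of how the three features relate:)

```python
# Conceptual sketch only: how serve, build, and deploy relate.
# Function names mirror the MLEM features, not real mlem.api signatures.

def serve(model: str) -> None:
    """Standalone: expose the model behind a local HTTP server."""
    print(f"serving {model} at http://localhost:8080")

def build(model: str) -> str:
    """Package the model into an image whose entrypoint runs serve()."""
    print(f"building an image for {model} (it will call serve() on start)")
    return f"{model}:image"

def deploy(model: str, target: str) -> None:
    """Build an image (and hence serve) and run it on the target platform."""
    image = build(model)
    print(f"running {image} on {target}")

deploy("models/rf", "heroku")
```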
The only page that needs this, considering the current content, is …
Where exactly? The output may be verbose, but I think it's good to be clear in GS, to show people what is happening.
Good idea. Did that, you can check it out.
Yes, it's not something we need in GS. It's covered in the CLI/API reference.
OK thanks for clarifying. TBH we don't need to mention this in GS (although a short note probably doesn't hurt either). In general it's better to leave implementation details for other sections of the docs 🙂
Yes, it's probably more of a product question: is MLEM's default output too verbose? (Is there an issue somewhere to discuss this?)
We don't have it, although I just mentioned this in iterative/mlem#390.
Hooray, we have covered everything then. I'm closing the issue. Thanks for your effort @jorgeorpinel 🙏🏻
Thank you.
We still have #58 😬
Specific improvements to existing content:

- `mlem.api.save`; Mention/link to `load()`?
- `serve`|`deploy`? (`create env|deployment`?); Truncate sample output
- `mlem.api.apply` is less clear than `mlem.api.load` - while the 2nd is not explained in GS. Let's replace the 1st with the 2nd.
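(A hedged sketch of the contrast in that last point — the exact `apply`/`load` signatures may differ between MLEM versions:)

```python
# Sketch of the two calls from the last bullet; `method="predict"` is an
# assumption about apply()'s signature and may vary by MLEM version.
from sklearn.datasets import load_iris

from mlem.api import apply, load

data, _ = load_iris(return_X_y=True, as_frame=True)

# mlem.api.apply: run a saved model's method on data in one call
preds_apply = apply("models/rf", data, method="predict")

# mlem.api.load: get the original model object back, then use it directly --
# arguably clearer for a Get Started
model = load("models/rf")
preds_load = model.predict(data)
```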