
Reserve idle resources on a node #1826

Open
Randomshot opened this issue Nov 18, 2024 · 2 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@Randomshot

Description

What problem are you trying to solve?

When a new version of an application deployed as a Deployment is rolled out, Karpenter creates a new node if the existing node has run out of spare capacity.
After the old-version Pods on the existing node are terminated, the existing node is consolidated into the new node because it is marked as Underutilized.

Because of this consolidation, Pods unrelated to the rollout also get moved to another node.
I would like Karpenter to support reserving a portion of each node's idle resources, so that node creation and consolidation during rollouts can be minimized.

How important is this feature to you?

This behavior sometimes causes unnecessary migration of Pods between nodes, which affects service stability. Services with multiple replicas are unaffected, but services that run as a single Pod experience downtime.
Running every service with multiple replicas would avoid this, but that is expected to increase both complexity and node costs.

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@Randomshot Randomshot added the kind/feature Categorizes issue or PR as related to a new feature. label Nov 18, 2024
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Nov 18, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Karpenter contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@sftim

sftim commented Dec 11, 2024

Also see https://kubernetes.io/docs/tasks/administer-cluster/node-overprovisioning/ @Randomshot - the outcome you want (reserved capacity on nodes) is possible right now, as I see it.
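The overprovisioning pattern in that linked page can be sketched roughly as follows: a PriorityClass with a negative value plus a Deployment of pause Pods that holds spare capacity and is preempted as soon as real workloads need it. The names and resource sizes below are illustrative, not from the issue; adjust the requests to match the headroom you want per rollout.

```yaml
# PriorityClass with a negative value so placeholder Pods are evicted
# first whenever real workloads need the reserved capacity.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: placeholder            # hypothetical name
value: -1000
preemptionPolicy: Never        # placeholder Pods never preempt others
globalDefault: false
description: "Negative priority for capacity-reservation placeholder pods."
---
# Deployment of pause containers that reserves headroom on nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capacity-reservation   # hypothetical name
spec:
  replicas: 1                  # scale up for more reserved headroom
  selector:
    matchLabels:
      app.kubernetes.io/name: capacity-reservation
  template:
    metadata:
      labels:
        app.kubernetes.io/name: capacity-reservation
    spec:
      priorityClassName: placeholder
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        resources:
          requests:
            cpu: "500m"        # illustrative headroom size
            memory: "512Mi"
```

During a rollout, new application Pods preempt these pause Pods instead of triggering a fresh node, and the placeholder Pods reschedule once capacity frees up again.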
