new project/feature "Karpenter Downscaler" #1800
This issue is currently awaiting triage. If Karpenter contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I think this is best handled outside of, but in conjunction with, Karpenter, using existing tools such as KEDA. We scale our dev cluster down to (almost) zero overnight using KEDA's ScaledObjects, and as KEDA scales down the deployments, etc., Karpenter automatically scales down the nodes. Karpenter isn't application-aware, and I don't think it should be. The only signal it needs to scale down is unused capacity in the cluster (i.e. nodes that can be consolidated or removed), and there are existing tools for that.
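For reference, the overnight scale-down described above can be expressed with KEDA's cron scaler. A minimal sketch, assuming a Deployment named `api` and a weekday schedule (both are placeholders):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: api-overnight-scaledown   # hypothetical name
spec:
  scaleTargetRef:
    name: api                     # hypothetical Deployment to scale
  minReplicaCount: 0              # drop to zero replicas outside the active window
  triggers:
    - type: cron
      metadata:
        timezone: Europe/Paris    # example timezone
        start: 0 7 * * 1-5        # scale up weekdays at 07:00
        end: 0 19 * * 1-5         # scale back down at 19:00
        desiredReplicas: "3"      # replicas during the active window
```

With workloads at zero replicas, Karpenter's normal consolidation then removes the now-empty nodes.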
I agree that in many cases, independent tools like KEDA or kube-downscaler are sufficient to scale down most of a cluster using Karpenter's built-in disruption behavior. But, as you say, these tools are made to scale applications, not clusters. There are at least two caveats that this tool intends to address:
The goal for such a tool is not for Karpenter to become application-aware, because it's not connected in any way with application manifests, but to empower infra teams with a way to force a cluster shutdown in order to manage costs, without relying on application teams.
Related to: #1177. We also ran into this, and we currently don't have KEDA in our cluster, so we had to find another solution. For us, Karpenter runs on Fargate in our cluster (as does CoreDNS). We have a cronjob that patches the nodepool CPU limits to 0 on a schedule. Only Karpenter and CoreDNS remain. Then another cronjob (also on Fargate) patches the nodepool back to its original limit to bring up nodes again.
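A minimal sketch of the scale-down half of that approach, assuming a Karpenter v1 NodePool named `default` and a service account with RBAC permission to patch nodepools (not shown); the image and names are placeholders:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: karpenter-scale-down        # hypothetical name
spec:
  schedule: "0 19 * * 1-5"          # weekday evenings
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: nodepool-patcher  # needs RBAC to patch nodepools (not shown)
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest     # any image with kubectl works
              command:
                - kubectl
                - patch
                - nodepool
                - default                        # hypothetical NodePool name
                - --type=merge
                - -p
                - '{"spec":{"limits":{"cpu":"0"}}}'  # cap CPU at 0 so no new nodes launch
```

A second CronJob with a morning schedule would patch `spec.limits.cpu` back to the original value. Note that setting limits to 0 prevents new nodes from launching; existing nodes are then removed by Karpenter's disruption behavior as workloads drain.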
As a project, we love it when people publish software outside of Kubernetes that can work with Kubernetes. There is a process for donating components to Kubernetes, and we like code donations too, but it is more work. Talk to SIG Autoscaling if you want to donate a controller repository to Kubernetes.
Description
What problem are you trying to solve?
My team and I started working on a project to stop nodes managed by Karpenter at specific times. We call it "Karpenter Downscaler".
Karpenter Downscaler operates as a controller with a CRD.
Based on the schedules we set, it automatically scales the nodes managed by Karpenter down to 0, freeing up resources.
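To give an idea of the intended usage, a schedule-bearing custom resource for such a controller might look like the following. This is purely illustrative: the project is private, so the API group, kind, and every field name here are hypothetical, not the actual CRD schema.

```yaml
# Hypothetical example only; the real CRD schema is not public.
apiVersion: downscaler.example.com/v1alpha1
kind: DownscaleSchedule
metadata:
  name: weeknight-shutdown
spec:
  nodePoolSelector:            # which Karpenter NodePools to manage
    matchLabels:
      env: dev
  downscale: "0 19 * * 1-5"    # cron: scale selected node pools to zero at 19:00 weekdays
  upscale: "0 7 * * 1-5"       # cron: restore original capacity at 07:00 weekdays
```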
Right now, it's a private project, but we would like to open-source it.
Would the Karpenter community be interested in such a tool, or should we open-source it in our own org?
Thanks
How important is this feature to you?