RFC: NodeOverlay #1305
# Node Overlays

Node launch decisions are the output of a scheduling simulation algorithm that satisfies pod scheduling constraints with available instance type offerings. Cloud provider implementations are responsible for surfacing these offerings to the Karpenter scheduler through the [cloudprovider.GetInstanceTypes()](https://github.com/kubernetes-sigs/karpenter/blob/37db06f4742eada19934f76e616f2b1860aca57b/pkg/cloudprovider/types.go#L59) API. This separation of responsibilities enables cloud providers to make simplifying assumptions, but limits extensibility for more advanced use cases. NodeOverlays enable customers to inject alternative assumptions into the scheduling simulation for more accurate simulation results.

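To make that hand-off concrete, here is a minimal sketch (illustrative only: `InstanceType` and `applyOverlays` are simplified stand-ins, not Karpenter's actual types or a committed design) of where overlay application could sit between `cloudprovider.GetInstanceTypes()` and the scheduling simulation:

```go
package main

import "fmt"

// InstanceType is a simplified stand-in for the offerings returned by a cloud
// provider; Karpenter's real types live in pkg/cloudprovider/types.go.
type InstanceType struct {
	Name  string
	Price float64
}

// applyOverlays is a hypothetical hook: it runs between
// cloudprovider.GetInstanceTypes() and the scheduling simulation, mutating
// the simulation's assumptions (price, capacity, overhead) in place.
func applyOverlays(its []InstanceType, overlays []func(*InstanceType)) []InstanceType {
	for i := range its {
		for _, overlay := range overlays {
			overlay(&its[i])
		}
	}
	return its
}

func main() {
	offerings := []InstanceType{{Name: "m5.large", Price: 0.096}}
	discount := func(it *InstanceType) { it.Price *= 0.90 } // e.g. a savings-plan style discount
	fmt.Printf("%.4f\n", applyOverlays(offerings, []func(*InstanceType){discount})[0].Price)
}
```
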
## Use Cases

### Price

Each instance offering comes with a price, which Karpenter's scheduling algorithm uses as a minimization function. However, Karpenter cannot perfectly understand all factors that influence price, such as an external vendor's fee, a pricing discount, or an external motivation such as carbon offset.

* https://github.com/aws/karpenter-provider-aws/pull/4686
* https://github.com/aws/karpenter-provider-aws/issues/3860
* https://github.com/aws/karpenter-provider-aws/pull/4697

### Extended Resources

InstanceTypes have a set of well-known resource capacities, such as CPU, memory, or GPUs. However, Kubernetes supports arbitrary [extended resources](https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/), which cannot be known prior to Node launch. Karpenter lacks a mechanism to teach the simulation about extended resource capacities.

> **Reviewer:** I'm assuming that we would still support mechanisms for doing first-class support of a bunch of extended resources, so that we don't have to be asking users to specify NodeOverlays for all of them?
>
> **Response:** Yes. We would prefer first-class support for all of this.

* https://github.com/kubernetes-sigs/karpenter/issues/751
* https://github.com/kubernetes-sigs/karpenter/issues/729

### Resource Overhead

> **Reviewer:** I'd like to see if we can do a better job being more accurate on these. Like extended resource support, I'd like it if we didn't have to require users to set all of these, but could just do the right thing here where possible.
>
> **Response:** Strongly agree that we should provide first-class support wherever possible. In my mind it's whether or not these can be reasonably known by a cloud provider -- some simply can't.

* https://github.com/aws/karpenter-provider-aws/issues/5161

System software such as containerd, kubelet, and the host operating system comes with resource overhead like CPU and memory, which is subtracted from a Node's capacity. Karpenter attempts to calculate this overhead where it is known, but cannot predict the overhead required by custom operating systems. To complicate this, the overhead often changes with each version of an operating system.

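As a rough illustration of this relationship (the 200Mi reservation below is an assumed number, not a measured overhead), the capacity the simulation should reason about is the instance's raw capacity minus the system overhead:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	capacity := resource.MustParse("8Gi")   // raw instance memory reported by the cloud provider
	overhead := resource.MustParse("200Mi") // assumed kubelet/containerd/OS reservation
	allocatable := capacity.DeepCopy()
	allocatable.Sub(overhead) // what pods can actually schedule against
	fmt.Println(allocatable.String())
}
```
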
## Proposal

NodeOverlays are a new Karpenter API that enables customers to fine-tune the scheduling simulation for advanced use cases. NodeOverlays can be configured with a `metav1.Selector` which flexibly constrains when they are applied to a simulation. Selectors can match well-known labels like `node.kubernetes.io/instance-type` or `karpenter.sh/node-pool`, as well as custom labels defined in a NodePool's `.spec.labels` or `.spec.requirements`. When multiple NodeOverlays match an offering for a given simulation, they are merged together using `spec.weight` to resolve conflicts.

> **Reviewer:** As they're NodeOverlays, does it make sense to say it just references the same as a pod NodeSelector? Additionally, if I'm a user wondering if my NodeOverlays are right, will these match the node labels or the NodeClaim labels? Realistically it shouldn't matter.
>
> **Response:** I'm not grokking this.

### Examples

Configure price to match a compute savings plan:

```yaml
kind: NodeOverlay
metadata:
  name: default-discount
spec:
  pricePercent: 90
```

> **Reviewer:** Assuming the lack of a `selector` means this applies to all simulations?
>
> **Response:** Yep. See the section below.

Support extended resource types (e.g. `smarter-devices/fuse`):

* https://github.com/kubernetes-sigs/karpenter/issues/751
* https://github.com/kubernetes-sigs/karpenter/issues/729

```yaml
kind: NodeOverlay
metadata:
  name: extended-resource
spec:
  selector:
    matchLabels:
      karpenter.sh/capacity-type: on-demand
    matchExpressions:
      - key: node.kubernetes.io/instance-type
        operator: In
        values: ["m5.large", "m5.2xlarge", "m5.4xlarge", "m5.8xlarge", "m5.12xlarge"]
  capacity:
    smarter-devices/fuse: 1
```

> **Reviewer:** Ahh, this is helpful for understanding the role Selectors play in the UX. I would suggest including a bit of this specificity in the intro.
>
> **Reviewer:** Do we want to do validation on well-known labels? Or should this also apply to custom labels? I can't think of any reason it shouldn't apply to all custom labels.
>
> **Response:** It will apply to all.
>
> **Reviewer:** To be clear, the "extended resource support" in this design would only support statically applied extended resources on selected nodes, correct? This would not cover cases where dynamic extended resources are needed, for example setting an extended resource for available network bandwidth, which varies by EC2 instance type. Additionally, Karpenter still needs to have its provisioning and deprovisioning logic updated to account for arbitrary extended resources other than the GPU resource, right?
>
> **Response:** You could achieve this through multiple NodeOverlays that match different selectors, e.g. m5.large has 2, m5.2xlarge has 4.
>
> **Reviewer:** I don't think defining a NodeOverlay for each network bandwidth amount is a reasonable solution for my use case. We use a wide variety of instance types and sizes. Do you think it could be feasible to incorporate some level of templating for values that can be easily retrieved from the EC2 API?
>
> **Response:** For network bandwidth, I'd love to just support this one first class. We could always try to do something that attempts to model this, but it's hard to capture all potential relationships without devolving into something like https://github.com/google/cel-spec.
>
> **Reviewer:** Similar use case with KubeVirt: VM pods are getting "dynamic" label keys as node selectors. In my Karpenter fork, I'm just providing an "ignored list" through an environment variable and doing pattern matching against …

Add a default memory overhead of 10Mi, and 50Mi for all instances with more than 2Gi of memory:

* https://github.com/aws/karpenter-provider-aws/issues/5161

```yaml
kind: NodeOverlay
metadata:
  name: memory-default
spec:
  overhead:
    memory: 10Mi
---
kind: NodeOverlay
metadata:
  name: memory
spec:
  weight: 1
  selector:
    matchExpressions:
      - key: karpenter.k8s.aws/instance-memory
        operator: Gt
        values: [2048]
  overhead:
    memory: 50Mi
```

> **Reviewer:** Does this replace …?
>
> **Response:** Good question. Punted to a future design :)

### Selectors

NodeOverlay's selector semantics allow fine-grained control over when values are injected into the scheduling simulation. An empty selector applies to all simulations, i.e. a 1:n relationship with InstanceType. Selectors can be written to define a 1:1 relationship between NodeOverlay and InstanceType, or even an n:1 relationship, through additional labels that constrain by NodePool, NodeClass, zone, or other scheduling dimensions.

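To illustrate these cardinalities (a sketch only; the `Requirement` type and `matches` helper are hypothetical, and only a subset of operators is shown), an empty requirement list matches every instance type, while more specific requirements narrow a NodeOverlay down to a single instance type:

```go
package main

import "fmt"

// Requirement loosely mirrors a Kubernetes NodeSelectorRequirement.
type Requirement struct {
	Key      string
	Operator string // only "In" and "Exists" are implemented in this sketch
	Values   []string
}

// matches reports whether an instance type's labels satisfy every requirement.
// An empty requirement list matches everything (the 1:n, "applies to all" case).
func matches(labels map[string]string, reqs []Requirement) bool {
	for _, r := range reqs {
		v, ok := labels[r.Key]
		switch r.Operator {
		case "Exists":
			if !ok {
				return false
			}
		case "In":
			found := false
			for _, want := range r.Values {
				if ok && v == want {
					found = true
					break
				}
			}
			if !found {
				return false
			}
		default: // NotIn, Gt, Lt, ... omitted for brevity
			return false
		}
	}
	return true
}

func main() {
	labels := map[string]string{
		"node.kubernetes.io/instance-type": "m5.large",
		"karpenter.sh/capacity-type":       "on-demand",
	}
	fmt.Println(matches(labels, nil)) // true: empty selector applies to all (1:n)
	fmt.Println(matches(labels, []Requirement{{
		Key: "node.kubernetes.io/instance-type", Operator: "In", Values: []string{"m5.large"},
	}})) // true: selector pinned to a single instance type (1:1)
}
```
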
A [previous proposal](https://github.com/aws/karpenter-provider-aws/pull/2390/files#diff-b7121b2d822e57b70203794f3b6367e7173b16d84070c3712cd2f15b8dbd11b2R133) explored the tradeoffs between a 1:1 semantic and more flexible alternatives. It raised valid observability challenges around communicating to customers exactly how the CR was modifying the scheduling simulation. It recommended a 1:1 approach, with the assumption that some system would likely dynamically code-generate the CRs for the relevant use case. We received [direct feedback](https://github.com/aws/karpenter-provider-aws/pull/2404#issuecomment-1250688882) from some customers that the number of variations would be combinatorial, and thus unmanageable for their use case. Ultimately, we paused this analysis in favor of other priorities.

After two years of additional customer feedback in this problem space, this proposal asserts that the flexibility of a selector-based approach is a significant simplification for customers compared to a 1:1 approach, and avoids the scalability challenges associated with persisting large numbers of CRs in the Kubernetes API. This decision will require us to work through the associated observability challenges.

### Merging

Our decision to pursue Selectors that allow for an n:m semantic presents a challenge: multiple NodeOverlays could match a simulation, but with different values. To address this, we will support an optional `.spec.weight` parameter (discussed in the [previous proposal](https://github.com/aws/karpenter-provider-aws/pull/2390/files#diff-b7121b2d822e57b70203794f3b6367e7173b16d84070c3712cd2f15b8dbd11b2R153)).

> **Reviewer:** I'm thinking about these overlays as a combination of mathematical operations on a subset of instance types for a subset of attributes. Is it possible to just arrange these mathematical operations in a way like PEMDAS, such that we don't have to think about weight and can just merge them? I think this would be option 2.
>
> **Response:** This is a cool idea, but I'm not sure about the use cases it unlocks.

We have a few choices for dealing with conflicting NodeOverlays:

1. Multiple NodeOverlays are merged in order of weight
2. Multiple NodeOverlays are applied sequentially in order of weight
3. One NodeOverlay is selected by weight

> **Reviewer (on option 2):** This option strikes me as more confusing in a lot of ways and doesn't really adhere to an "overlay" construct, where a user is probably thinking about this in terms of layers.
>
> **Response:** 👍
>
> **Reviewer:** Is there another option about implied weight via Selector specificity?
>
> **Response:** Baking these semantics into the selector is quite a bit more complicated, and requires customers to deeply understand this. There may be cases where they disagree as well. Weight is a common idiom from other scheduling APIs, which is why I chose it here.

We recommend option 1, as it supports the common use case of default + override semantics. In the memory overhead example, both NodeOverlays attempt to define the same parameter, but one is more specific than the other. Options 1 and 3 allow us to support this use case, where option 2 would result in 50Mi+10Mi. We are not aware of use cases that would benefit from sequential application.

> **Reviewer:** I'm wondering how common we're thinking overlapping NodeOverlays will occur. I could see sequential merging if users wanted to model multiple pricing discounts as multiple overlays, rather than having to merge them all themselves into a CR.
>
> **Response:** Can you elaborate a little bit about how a user might model something with sequential applies?

We choose option 1 over option 3 to support NodeOverlays that define multiple values. For example, we could collapse the example NodeOverlays using a merge semantic.

```yaml
kind: NodeOverlay
metadata:
  name: default
spec:
  pricePercent: 90
  overhead:
    memory: 10Mi
---
kind: NodeOverlay
metadata:
  name: memory
spec:
  weight: 1 # required to resolve conflicts with default
  selector:
    matchExpressions:
      - key: karpenter.k8s.aws/instance-memory
        operator: Gt
        values: [2048]
  overhead:
    memory: 50Mi
---
kind: NodeOverlay
metadata:
  name: extended-resource
spec:
  # weight is not required since default does not conflict with extended resources
  selector:
    matchLabels:
      karpenter.sh/capacity-type: on-demand
    matchExpressions:
      - key: node.kubernetes.io/instance-type
        operator: In
        values: ["m5.large", "m5.2xlarge", "m5.4xlarge", "m5.8xlarge", "m5.12xlarge"]
  capacity:
    smarter-devices/fuse: 1
```

> **Reviewer (on the `default` overlay):** My guess is that price discounts vary by a different dimension than how overhead will vary. Overhead could vary by instance size, especially if modeled as a percent. If it's flat, it may be properly modeled like this.
>
> **Response:** Yeah, strongly agree. The merge semantic is a solve for this.
>
> **Reviewer:** nit: use kwok labels in our upstream examples rather than AWS labels.
>
> **Response:** I'd rather be clear with a concrete provider than less clear with a fake provider.
>
> **Reviewer (on `capacity`):** If there was an overhead here, does it apply to this capacity?
>
> **Response:** We will need to decide these semantics. I think yes. But it also makes me want to revisit how our cloud provider models base + overhead.

Option 1 enables us to be strictly more flexible than option 3, but it introduces ambiguity when multiple NodeOverlays have the same weight. Under option 3, only one NodeOverlay may apply, so any conflict must be a misconfiguration. Our example demonstrates that option 1 has valid use cases where multiple NodeOverlays share the same weight, as long as they don't define the same value. For option 1, NodeOverlays with the same weight that both attempt to configure the same field are considered a misconfiguration.

> **Reviewer:** Does it just apply to the things that are being selected on, or the whole set of instance types? Sounds like the former. So basically, it only applies to misconfiguration errors on the intersection of an invalid set of selectors.
>
> **Response:** I'm not grokking this.

### Misconfiguration

Given our merge semantics, it's possible for customers to define multiple NodeOverlays with undefined behavior. In either case, we will identify and communicate these misconfigurations; the question is whether to fail open or fail closed. Failing closed means that a misconfiguration would halt Karpenter from launching new Nodes until it is resolved. Failing open means that Karpenter might launch capacity that differs from what an operator expects.

We could choose to fail closed to prevent undefined behavior, and within this, we have the option to fail closed for the entire provisioning loop or for a single NodePool. For example, if multiple NodeOverlays match NodePoolA with the same weight, but only a single NodeOverlay matches NodePoolB, Karpenter could exclude NodePoolA as invalid but proceed with NodePoolB for simulation. These semantics are very similar to how Karpenter deals with `.spec.limits` or `InsufficientCapacityErrors`. See: https://karpenter.sh/docs/concepts/scheduling/#fallback.

We choose to fail open, but to do so in a consistent way. If two NodeOverlays conflict and have the same weight, we will merge them in alphabetical order. Karpenter customers may find this semantic familiar, as this is exactly how NodePool weights are resolved. We prefer this conflict resolution approach over failing closed, since it avoids the failure mode where any customer using a single NodePool (i.e. most use cases) would suffer a scaling outage due to misconfigured NodeOverlays.

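A minimal sketch of that resolution order, assuming higher weight wins and alphabetical ordering breaks ties (the tie-break direction and the field-wise merge are assumptions of this sketch, not a committed implementation):

```go
package main

import (
	"fmt"
	"sort"
)

// overlay is a trimmed-down stand-in for the proposed NodeOverlay API.
type overlay struct {
	Name           string
	Weight         int
	PricePercent   *int
	MemoryOverhead string
}

// merge resolves matching overlays field by field: sort by descending weight,
// break ties alphabetically by name, and let the first overlay to set a field win.
func merge(overlays []overlay) overlay {
	sort.Slice(overlays, func(i, j int) bool {
		if overlays[i].Weight != overlays[j].Weight {
			return overlays[i].Weight > overlays[j].Weight
		}
		return overlays[i].Name < overlays[j].Name
	})
	var out overlay
	for _, o := range overlays {
		if out.PricePercent == nil {
			out.PricePercent = o.PricePercent
		}
		if out.MemoryOverhead == "" {
			out.MemoryOverhead = o.MemoryOverhead
		}
	}
	return out
}

func main() {
	ninety := 90
	merged := merge([]overlay{
		{Name: "default", Weight: 0, PricePercent: &ninety, MemoryOverhead: "10Mi"},
		{Name: "memory", Weight: 1, MemoryOverhead: "50Mi"},
	})
	fmt.Println(*merged.PricePercent, merged.MemoryOverhead) // 90 50Mi: the override wins, the default fills the rest
}
```
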
### Observability

The full details of the mutations caused by NodeOverlays cannot feasibly be stored in etcd. Further, we do not want to require that customers read the Karpenter logs to debug this feature. When GetInstanceTypes is called per NodePool, we will emit an event on the NodePool to surface deeper semantic information about the effects of the NodeOverlays. Our implementation will need to be conservative about the volume and frequency of this information, but provide sufficient insight for customers to verify that their configurations are working as expected.

> **Reviewer:** Do we have an example of that event? Would that event just say that we are applying the overlay, or would it be more specific about how it's mutating each thing?
>
> **Reviewer:** This event strikes me as something that would be extremely verbose to get the right details across. You have to list out all the offerings that are compatible, and what the notion of each overlay is.
>
> **Response:** Going to cut this for now :)
>
> **Reviewer:** Is there anything that we could surface through metrics here that would allow us to also show this information? How does this affect our current instance type metrics and the values that they carry?

### Per-Field Semantics

The semantics of each field carry individual nuance. Price could be modeled directly via a `price` field, but we can significantly reduce the number of NodeOverlays that need to be defined if we instead model the semantic as a function of the currently defined price (e.g. `pricePercent` or `priceAdjustment`). Similarly, `capacity` and `overhead` make sense to be modeled directly, but `capacityPercent` makes much less sense than `overheadPercent`.

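As a worked example of the formula carried in the CRD field description below, `PriceAdjustment + (Price * PricePercent / 100)` (only `pricePercent` is in scope for this RFC, so `priceAdjustment` is treated as 0 here and should be read as illustrative):

```go
package main

import "fmt"

// adjustedPrice applies the formula from the pricePercent field description:
// PriceAdjustment + (Price * PricePercent / 100).
func adjustedPrice(price float64, pricePercent int, priceAdjustment float64) float64 {
	return priceAdjustment + price*float64(pricePercent)/100
}

func main() {
	// With pricePercent: 90, an offering priced at $0.096/hr is simulated at
	// $0.0864/hr, i.e. a 10% discount.
	fmt.Printf("%.4f\n", adjustedPrice(0.096, 90, 0))
}
```
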
> **Reviewer:** Is it possible to lay out what you are thinking the API looks like for each field? I am not sure I grok what we will support. Second, I think the names can be confusing at the moment, maybe it's just me. For example, if I look just at `pricePercent`, does that mean percent of discount or percent of price? Lastly, if I specify a capacity that exists within the scheduling simulation for an instance type but is incorrect, such as huge pages, do I override that value, or can I even do that? If not, can I specify a negative overhead to add value to an existing resource?
>
> **Response:** This is exactly why my recommendation below is to leave the individual semantics of each field to follow-on RFCs. This RFC just proposes a single `pricePercent` feature. I'll add some language that describes its semantics. The idea is to apply a multiplier to the price (either up or down). Since JSON doesn't support floats, I used percent to round to two decimal places, so we can reason in terms of integers. Very open to naming suggestions.
>
> **Reviewer:** Got it. I realize now I totally missed the section right under. I think I understand `pricePercent` a bit better now, so I am happy with keeping that name.
>
> **Reviewer:** Is there any prior art for doing arbitrary "equation-based" things? I don't know if we quite need the flexibility of CEL here, but giving some level of arbitrary flexibility that we could reason about might be nice.
>
> **Response:** Let's punt to a future rev. I don't think using CEL is crazy.

We expect these semantics to be specific to each field, and do not attempt to define or support a domain-specific language (e.g. CEL) for arbitrary control over these fields. We expect this API to evolve as we learn about the specific use cases.

### Launch Scope

To accelerate feedback on this new mechanism, we will release this API as v1alpha1, with support for a single field: `pricePercent`. All additional scope described in this doc will be left to follow-on RFCs and implementation effort.

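For reference, a sketch of what the v1alpha1 Go types behind the CRD below could look like (field names are inferred from the CRD schema, which models `selector` as a list of node selector requirements rather than the `matchLabels`/`matchExpressions` form shown in the earlier examples; this is not a committed API):

```go
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// NodeOverlay injects alternative assumptions into Karpenter's scheduling simulation.
type NodeOverlay struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              NodeOverlaySpec `json:"spec"`
}

// NodeOverlaySpec mirrors the v1alpha1 CRD schema.
type NodeOverlaySpec struct {
	// Selector matches against simulated nodes and modifies their scheduling
	// properties. Matches all if empty.
	// +optional
	Selector []corev1.NodeSelectorRequirement `json:"selector,omitempty"`
	// PricePercent modifies the price of the simulated node
	// (PriceAdjustment + (Price * PricePercent / 100)).
	// +optional
	PricePercent *int `json:"pricePercent,omitempty"`
}
```
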
The proposed `nodeoverlays.karpenter.sh` CRD included with this RFC:

```yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.15.0
  name: nodeoverlays.karpenter.sh
spec:
  group: karpenter.sh
  names:
    categories:
      - karpenter
    kind: NodeOverlay
    listKind: NodeOverlayList
    plural: nodeoverlays
    singular: nodeoverlay
  scope: Cluster
  versions:
    - name: v1alpha1
      schema:
        openAPIV3Schema:
          properties:
            apiVersion:
              description: |-
                APIVersion defines the versioned schema of this representation of an object.
                Servers should convert recognized schemas to the latest internal value, and
                may reject unrecognized values.
                More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
              type: string
            kind:
              description: |-
                Kind is a string value representing the REST resource this object represents.
                Servers may infer this from the endpoint the client submits requests to.
                Cannot be updated.
                In CamelCase.
                More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
              type: string
            metadata:
              type: object
            spec:
              properties:
                pricePercent:
                  description: PricePercent modifies the price of the simulated node
                    (PriceAdjustment + (Price * PricePercent / 100)).
                  type: integer
                selector:
                  description: Selector matches against simulated nodes and modifies
                    their scheduling properties. Matches all if empty.
                  items:
                    description: |-
                      A node selector requirement is a selector that contains values, a key, and an operator
                      that relates the key and values.
                    properties:
                      key:
                        description: The label key that the selector applies to.
                        type: string
                      operator:
                        description: |-
                          Represents a key's relationship to a set of values.
                          Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
                        type: string
                      values:
                        description: |-
                          An array of string values. If the operator is In or NotIn,
                          the values array must be non-empty. If the operator is Exists or DoesNotExist,
                          the values array must be empty. If the operator is Gt or Lt, the values
                          array must have a single element, which will be interpreted as an integer.
                          This array is replaced during a strategic merge patch.
                        items:
                          type: string
                        type: array
                        x-kubernetes-list-type: atomic
                    required:
                      - key
                      - operator
                    type: object
                  type: array
              type: object
          required:
            - spec
          type: object
      served: true
      storage: true
      subresources:
        status: {}
```

> **Reviewer (on `pricePercent` being an integer):** If we can use a quantity here, it'll allow adjustments finer than 0.01 (if we don't, it could end up hard to change that later).
>
> **Response:** I thought about using quantities and a multiplier, but the ergonomics were really confusing, e.g. what does it mean to have a multiplier of 100m? It's 1/10, but a milli multiplier is very unintuitive to me. Do you think we will see use cases of less than 1% increments?

The `v1alpha1` API package `doc.go` added for codegen:

```go
/*
Copyright The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// +k8s:deepcopy-gen=package
// +groupName=karpenter.sh
package v1alpha1 // doc.go is discovered by codegen
```

> **Reviewer:** It may be worth describing the potential interaction of client-side use (and customization) of pricing with provider-specific server-side price-sensitive allocation strategies (like those supported by EC2 Fleet).