
feature/support-multi-node-pools #394

Merged
4 commits merged into master on Nov 6, 2024

Conversation

alidaw
Contributor

@alidaw alidaw commented Sep 18, 2024

The implementation under review adds multi-node-pool support to the k8s resource.

@alidaw alidaw self-assigned this Sep 18, 2024
@nvthongswansea
Member

@alidaw could you update the docs in path website/docs/r/k8s.html.md please?

Since the implementation of multi-node-pool support requires gsclient to support nested schemas (the requested Kubernetes template's parameter schema is used to validate what gets assigned to the corresponding schema properties of the k8s resource), the implementation must be based on gsclient version 3.15.0 or later.
@alidaw alidaw force-pushed the feature/support-multi-node-pools branch from bb18ed0 to 281329b on September 19, 2024 19:35
@alidaw
Contributor Author

alidaw commented Sep 19, 2024

Rebased to be ahead of the master branch!

@alidaw alidaw force-pushed the feature/support-multi-node-pools branch from 281329b to e0eafd3 on September 19, 2024 19:57
@alidaw
Contributor Author

alidaw commented Sep 19, 2024

@alidaw could you update the docs in path website/docs/r/k8s.html.md please?

Done. Please have a look.

@alidaw alidaw force-pushed the feature/support-multi-node-pools branch from e0eafd3 to 7d0c125 on September 19, 2024 20:56
@alidaw alidaw force-pushed the feature/support-multi-node-pools branch from 7d0c125 to 48300a1 on September 25, 2024 11:04
@nvthongswansea
Member

Could you also add some missing parameters in this PR as well, @alidaw? The parameters are k8s_hubble (which is requested by #396 ) and the log-related parameters.

They come with additional parameters called:
  - hubble
  - kube_apiserver_log_enabled
  - audit_log_enabled
  - audit_log_level
  - log_delivery
  - log_delivery_bucket
  - log_delivery_access_key
  - log_delivery_secret_key
  - log_delivery_interval
  - log_delivery_endpoint
@nvthongswansea
Member

@alidaw regarding the last commit of the k8s_hubble and log-related parameters, I think all of them should be optional and computed; you don't need to set the default (since the default is set automatically by the API server when they are not set in the request payload). The reason we should set them as computed is that in the past there was a case where the default was changed in the parameter schema (in the PaaS service template) but not in the Terraform provider, hence conflicts. Using computed solves this issue, since the parameter will automatically be set to the value returned by the API server when the user does not set it in the tf file. Here are the log-related parameters I have just added in v1; you could take a look.
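The optional-and-computed pattern described above can be sketched like this (an illustrative fragment only, assuming the terraform-plugin-sdk helper/schema package this provider uses; the attribute shown is one from the list above):

```go
// Sketch: Optional + Computed, with no Default. When the user leaves the
// attribute unset in the tf file, the value returned by the API server is
// stored in state, so schema defaults can never drift from the service template.
"audit_log_enabled": {
	Type:     schema.TypeBool,
	Optional: true,
	Computed: true,
},
```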

@alidaw alidaw force-pushed the feature/support-multi-node-pools branch from a4f823d to 812629e on September 25, 2024 20:27
@alidaw
Contributor Author

alidaw commented Sep 25, 2024

@alidaw regarding the last commit of the k8s_hubble and log-related parameters, I think all of them should be optional and computed; you don't need to set the default (since the default is set automatically by the API server when they are not set in the request payload). The reason we should set them as computed is that in the past there was a case where the default was changed in the parameter schema (in the PaaS service template) but not in the Terraform provider, hence conflicts. Using computed solves this issue, since the parameter will automatically be set to the value returned by the API server when the user does not set it in the tf file. Here are the log-related parameters I have just added in v1; you could take a look.

Ok, I see the issue. I've adjusted the schema definition of the respective new parameters as you suggested, via interactive rebase. I was just wondering why, in your commit, you apply validation to the new audit_log_level parameter via hard-coded allowed values instead of fetching what is allowed from the respective parameter's template definition.

You define the allowed values as a hard-coded global via

var k8sAuditLogLevels = []string{"Metadata", "RequestALLResponseCRUD", "RequestALLResponseALL"}

and then apply validation via validation.StringInSlice(k8sAuditLogLevels, false), as shown below:

"audit_log_level": {
	...
	ValidateFunc: validation.StringInSlice(k8sAuditLogLevels, false),
},

But I suggest fetching what is allowed from the respective parameter's template definition, as I did below:

func validateK8sParameters(d *schema.ResourceDiff, template gsclient.PaaSTemplate) error {
	...
	templateParameterAuditLogLevel, templateParameterFound := template.Properties.ParametersSchema["k8s_audit_log_level"]
	if auditLogLevel, ok := d.GetOk("audit_log_level"); ok && templateParameterFound {
		var isValid bool
		for _, allowedValue := range templateParameterAuditLogLevel.Allowed {
			if auditLogLevel.(string) == allowedValue {
				isValid = true
				break
			}
		}
		if !isValid {
			errorMessages = append(errorMessages,
				fmt.Sprintf("Invalid 'audit_log_level' value. Value must be one of these:\n\t%s",
					strings.Join(templateParameterAuditLogLevel.Allowed, "\n\t"),
				),
			)
		}
	}
	...
}

So we do better by referring to the validation known by the template instead of working with hard-coded values.
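As a side note, the membership check in the loop above can also be expressed with the standard library's slices package (Go 1.21+). This is a hedged, self-contained sketch with stand-in values, not the provider's actual template lookup:

```go
package main

import (
	"fmt"
	"slices"
	"strings"
)

// validateAllowed mirrors the loop above: the value must be one of the
// allowed values fetched from the template's parameter schema.
func validateAllowed(value string, allowed []string) error {
	if !slices.Contains(allowed, value) {
		return fmt.Errorf("Invalid 'audit_log_level' value. Value must be one of these:\n\t%s",
			strings.Join(allowed, "\n\t"))
	}
	return nil
}

func main() {
	// Stand-in for template.Properties.ParametersSchema["k8s_audit_log_level"].Allowed.
	allowed := []string{"Metadata", "RequestALLResponseCRUD", "RequestALLResponseALL"}
	fmt.Println(validateAllowed("Metadata", allowed) == nil)   // true
	fmt.Println(validateAllowed("Everything", allowed) == nil) // false
}
```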

@alidaw alidaw force-pushed the feature/support-multi-node-pools branch from 812629e to 4fe06a9 on September 25, 2024 20:38
@alidaw
Contributor Author

alidaw commented Sep 25, 2024

Could you also add some missing parameters in this PR as well, @alidaw? The parameters are k8s_hubble (which is requested by #396 ) and the log-related parameters.

Done.

nodePool["rocket_storage"] = props.Parameters["k8s_worker_node_rocket_storage"]
}
if isNodePoolsSet {
nodePoolsSet := nodePoolsSetInterface.([]map[string]interface{})
Member

@nvthongswansea nvthongswansea Nov 4, 2024


@alidaw Here it should be nodePoolsSet := nodePoolsSetInterface.([]interface{}), because it is a list. Then:

for _, nodePoolSetInterface := range nodePoolsSet {
	nodePoolRead := make(map[string]interface{}, 0)
	nodePoolSet := nodePoolSetInterface.(map[string]interface{})
	...

Contributor Author

@alidaw alidaw Nov 4, 2024


@nvthongswansea I've followed your suggestion via interactive rebase.
Sure, I had already considered that node pools is a list in which each node pool is supposed to be a string-to-interface mapping, i.e. map[string]interface{}. That is why I initialized the value to iterate over as nodePoolsSet := nodePoolsSetInterface.([]map[string]interface{}), which is at least a list of string-to-interface mappings. But since that caused an issue, and we instead need to iterate over nodePoolsSetInterface.([]interface{}) and break each item down into a string-to-interface mapping via nodePoolSetInterface.(map[string]interface{}) rather than iterating directly over nodePoolsSetInterface.([]map[string]interface{}), it's good to have it fixed now.
I think iterating over a collection broken down via nodePoolsSetInterface.([]map[string]interface{}) may break the flow when no node pools are set, right?

Member


@alidaw AFAIR, Go does not allow that: it complains that nodePoolsSetInterface is an interface{}, not a []map[string]interface{}. So we have to do the type assertions one by one.
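To make the point concrete, here is a minimal runnable sketch (stand-in data, not the provider's actual read path) showing that the one-shot assertion fails at runtime while the element-by-element assertions succeed:

```go
package main

import "fmt"

func main() {
	// Terraform's schema layer hands list data back as an interface{}
	// whose dynamic type is []interface{}, never []map[string]interface{}.
	var nodePoolsSetInterface interface{} = []interface{}{
		map[string]interface{}{"name": "pool-0", "node_count": 2},
	}

	// The one-shot assertion fails: []interface{} is not []map[string]interface{}.
	_, ok := nodePoolsSetInterface.([]map[string]interface{})
	fmt.Println(ok) // false

	// So the assertions have to be done one by one, per element.
	for _, nodePoolSetInterface := range nodePoolsSetInterface.([]interface{}) {
		nodePoolSet := nodePoolSetInterface.(map[string]interface{})
		fmt.Println(nodePoolSet["name"]) // pool-0
	}
}
```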

Contributor Author


Yes, that is the direction I was thinking in. Thanks.

Contributor Author


... Ah, and I've noticed that in the functions resourceGridscaleK8sCreate, resourceGridscaleK8sUpdate, and validateK8sParameters I had already broken the node pools interface down into a collection of interfaces via .([]interface{}). So in resourceGridscaleK8sRead it was either a leftover or an accident that I broke the interface down into a list of string-to-interface mappings via .([]map[string]interface{}).

@alidaw alidaw force-pushed the feature/support-multi-node-pools branch from 4fe06a9 to 2475a34 on November 4, 2024 14:42
@nvthongswansea
Member

nvthongswansea commented Nov 6, 2024

@alidaw The name of a GSK cluster or node pool does not allow _; - is accepted. Please fix the acc test.

@alidaw
Contributor Author

alidaw commented Nov 6, 2024

@alidaw The name of a GSK cluster or node pool does not allow _; - is accepted. Please fix the acc test.

Ok. I've added a commit to stick to the node naming convention in the k8s resource test.

@nvthongswansea
Member

LGTM

@nvthongswansea nvthongswansea self-requested a review November 6, 2024 11:30
@nvthongswansea nvthongswansea merged commit 34c84ec into master Nov 6, 2024
2 checks passed