
[POC] separate experimental / production functionality #670

Closed
2 tasks done
Tracked by #669
strekm opened this issue Mar 14, 2024 · 4 comments

@strekm
Contributor

strekm commented Mar 14, 2024

Description
The scope of this task is to figure out the best way to enable users who want to provide configuration that is not available via the Istio CR. This functionality should be separated from the managed offering, and it should not be possible to switch between the experimental and managed offerings at runtime.

The additional configuration is provided via a ConfigMap (CM) that is observed by the Istio manager. The apply strategy should be replace.

The proposed solution should be low maintenance and make use of the experimental offering. The implementation should be common for both offerings, but the functionality should be separated at image-build time.

Open questions:

  • how to avoid custom configuration rollback during Istio reconciliation
  • image build separation (different controller?)
  • validation of custom configuration

DoD:
- [ ] Provide unit and integration tests.
- [ ] Provide documentation.
- [ ] Verify if the solution works for both open-source Kyma and SAP BTP, Kyma runtime.
- [ ] If you changed the resource limits, explain why it was needed.
- [ ] If the default configuration of Istio Operator has been changed, you performed a manual upgrade test to verify that the change can be rolled out correctly.
- [ ] Verify that your contributions don't decrease code coverage. If they do, explain why this is the case.
- [ ] Add release notes.

Attachments
part of: #669

@triffer
Contributor

triffer commented Mar 15, 2024

I think the cleanest way to do that is by having different implementations for resolving the Istio Operator manifest. We would have different Docker images, and by using a build tag we can configure which implementation is included.

Implementation considerations:

  • Use an interface for defining the different manifest resolvers
  • Move the current logic of reading the manifest and merging it with the Istio CR into a function that can be used by both resolvers
  • Create two different resolvers: one for the default and one for the new custom Istio Operator configuration
    • The default implementation will work with the Istio manifest from the repository + merging the Istio CR
    • The custom Istio Operator configuration implementation will work the same way as the default, but additionally supports loading the Istio Operator config from a config map, if the config map exists
  • If we need additional validation of the custom configuration, it can be done here, but I think we should just rely on the Istio installation validation
  • A build tag is used to separate the implementations on the Istio reconciliation package level

How to trigger the reconciliation?

We can use a build tag to conditionally add a watch for the config map with the custom configuration. The question is how we continue from there: we could then have logic in Reconcile that reads the Istio CR and reconciles it. I would not do that, since it mixes up a lot of existing code, looks like a workaround to me, and is harder to understand.

I'd rather reduce the requeue time of the Istio CR reconciliation (e.g. to 1 or 2 min) and reconcile the changes in the config map with the next reconciliation. This is easier to implement, and we don't need a separation of the code depending on the build tag.
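
A minimal sketch of the requeue-based approach, assuming a standard controller-runtime reconciler (the receiver type and interval are illustrative, not the module's actual code):

package istio

import (
	"context"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

// IstioReconciler stands in for the existing Istio CR reconciler.
type IstioReconciler struct{}

func (r *IstioReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// ... resolve the Istio Operator config (including the optional config map) and apply it ...

	// Requeue after a short interval so that changes in the config map are picked up
	// with the next scheduled reconciliation, without a dedicated watch.
	return ctrl.Result{RequeueAfter: 2 * time.Minute}, nil
}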

Simple example implementation

There is a new interface for the ConfigResolver:

type ConfigResolver interface {
	// Resolve returns the Istio operator config as a string.
	Resolve() (string, error)
}

Add a new resolver that contains the new functionality and that is used when the build tag experimental is set.

//go:build experimental
// +build experimental

package istio

type experimentalConfigResolver struct {
}

func NewConfigResolver() ConfigResolver {
	return &experimentalConfigResolver{}
}

func (d *experimentalConfigResolver) Resolve() (string, error) {
	// Resolve from the custom Istio operator config in the CM and, as a fallback, from the default manifest merged with the Istio CR
	return "istio operator config", nil
}

Add the default resolver with the existing functionality; it is used when the build tag experimental is not set.

//go:build !experimental
// +build !experimental

package istio

type defaultConfigResolver struct {
}

func NewConfigResolver() ConfigResolver {
	return &defaultConfigResolver{}
}

func (d *defaultConfigResolver) Resolve() (string, error) {
	// Resolve from the default manifest and merge with the Istio CR
	return "istio operator config", nil
}

type ConfigResolver interface {
	// Resolves istio operator config as a string.
	Resolve() (string, error)
}

We want to call the ConfigResolver as part of the Istio installation Reconcile and replace the current creation of the Istio Operator manifest here.
The following implementation would be replaced with a call to the resolver that returns the Istio Operator manifest. We can also make the ConfigResolver a field on the Installation struct and initialise it there; it's an implementation detail (see the sketch after the snippet below).

	r := NewConfigResolver()

	istioOperatorManifest, err := r.Resolve()
	if err != nil {
		...
	}
	err = i.IstioClient.Install(istioOperatorManifest)
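
A minimal sketch of the field variant mentioned above, assuming the existing IstioClient and ConfigResolver types (the Installation fields shown are illustrative, not the actual struct):

type Installation struct {
	IstioClient    IstioClient
	configResolver ConfigResolver
}

func NewInstallation(istioClient IstioClient) *Installation {
	return &Installation{
		IstioClient:    istioClient,
		configResolver: NewConfigResolver(), // default or experimental resolver, selected at build time
	}
}

This keeps the build-tag-dependent choice in a single place and makes the resolver easy to stub in tests.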

Considering Open Questions

how to avoid custom configuration rollback during Istio reconciliation

The resolving will be part of the normal reconciliation, just replacing the retrieval of the Istio Operator config. Therefore, there should be no issue with rollbacks, since the config from the config map will be used as long as it exists.

image build separation (different controller?)

The separation is handled by the different implementations of the interface and build tags.

validation of custom configuration

We should not do any additional validation, but rely on the validation done by Istio installation.

@Ressetkk
Contributor

So we have the POC results. With some help from @triffer, I've come up with a working solution that is separated with build tags, as was proposed. The implementation is generally based on the proposal above.

When the manager is built with -tags=experimental, the experimental implementation of the resolver is used (a rough sketch is shown below):

  • It checks for a ConfigMap named istio-system/custom-istio-operator with data under the key istio-operator.yaml
  • It saves the ConfigMap data to a temp directory in the container
  • It returns the absolute path to the newly created config file
  • If the ConfigMap is empty or doesn't exist, it uses the standard configuration as before
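
A rough sketch of that flow, assuming a controller-runtime client on the resolver (the field names and fallback handling are illustrative; the actual patch may differ):

package istio

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type experimentalConfigResolver struct {
	client              client.Client
	defaultManifestPath string
}

// Resolve returns the path to the Istio Operator manifest: the content of the custom
// ConfigMap written to a temp file, or the default manifest path as a fallback.
func (r *experimentalConfigResolver) Resolve() (string, error) {
	cm := &corev1.ConfigMap{}
	err := r.client.Get(context.Background(), client.ObjectKey{Namespace: "istio-system", Name: "custom-istio-operator"}, cm)
	if apierrors.IsNotFound(err) || (err == nil && cm.Data["istio-operator.yaml"] == "") {
		// ConfigMap missing or empty: use the standard configuration as before.
		return r.defaultManifestPath, nil
	}
	if err != nil {
		return "", err
	}

	// Save the custom operator config to a temp file and return its absolute path.
	f, err := os.CreateTemp("", "custom-istio-operator-*.yaml")
	if err != nil {
		return "", err
	}
	defer f.Close()
	if _, err := f.WriteString(cm.Data["istio-operator.yaml"]); err != nil {
		return "", err
	}
	return f.Name(), nil
}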

To build the manager with the experimental feature enabled, use the command:

go build -a -tags=experimental -o manager-experimental main.go

To run the manager:

  • Ensure you have a kind cluster running and the current context is set to this cluster.
  • Run make install.
  • Create the namespaces istio-system and kyma-system.
  • Create the istio-kyma-priority-class PriorityClass in kyma-system.
  • Create a ConfigMap in istio-system with the custom IstioOperator:
    kubectl create configmap -n istio-system custom-istio-operator --from-file=path/to/custom-istio-operator.yaml

  • Run ./manager-experimental
  • Apply config/samples/operator_v1alpha2_istio.yaml

I already see some drawbacks of this implementation:

  • It requires an additional build artifact to be produced. That includes a custom kustomization with an overlay patch for the experimental channel. The image tag would need to contain the suffix -experimental.
  • We had problems with GitHub rate limits recently in the build pipelines. If we add yet another wait-for-status step to our pipeline, I'm afraid that we will hit the rate limits much more frequently. @kolodziejczak can elaborate more on that topic. This doesn't scale well.
  • The Dockerfile can be reused, but we need to add a --build-arg that passes -tags=experimental to go build.
  • If we override additional configurations, like the CNI config, with a custom operator config that is not templated or merged in any way, we lose this functionality. That means we cannot override some module-controlled options in the CR.

I personally don't like the idea. It seems too hacky. This feature should be part of the Istio CR and provide a nice and convenient interface for the customer to enable/disable some alpha functions at the price of no SLA. This way we still keep control over the Istio operator CR and over which alpha features are enabled.

Questions

Q: If the templating is dropped, how will the image version be handled?

0001-poc-custom-istio-operator-configuration.patch

@triffer
Contributor

triffer commented Mar 19, 2024

There is also an open question regarding the Istio proxy sidecar restart behaviour.
Since we allow configuring the whole Istio operator, we also allow using different proxy sidecar images.
In this case, we need to think about how the proxy sidecar restart will behave.
One solution could be to dynamically create a restart predicate for this image and version and use it in the proxy sidecar restart; a sketch of this idea follows below.
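
A minimal sketch of that idea (the type and method names are hypothetical, not the module's actual restart predicate API): the expected proxy image is derived from the resolved operator config, and a pod is marked for restart when its sidecar runs a different image.

package istio

import corev1 "k8s.io/api/core/v1"

// customImageRestartPredicate flags pods whose istio-proxy sidecar does not run
// the image resolved from the custom Istio Operator configuration.
type customImageRestartPredicate struct {
	expectedProxyImage string // resolved from the custom config, not hard-coded
}

func (p customImageRestartPredicate) RequiresProxyRestart(pod corev1.Pod) bool {
	for _, c := range pod.Spec.Containers {
		if c.Name == "istio-proxy" && c.Image != p.expectedProxyImage {
			return true
		}
	}
	return false
}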

@Ressetkk
Contributor

Ressetkk commented Mar 20, 2024

After the meeting we came to some conclusions:

  • Keep the idea of setting the custom Istio operator resource through a config map as the starting point
  • The separation between the two image flavours has to be covered so that this feature does not leak outside the experimental flavour
  • Create a default ConfigMap that the user can modify and extend, pre-filled with the default configuration (a working state of the config file)
  • Provide documentation on how to manage and override the custom resource and handle potential errors, including a statement that no support will be provided
  • Reduce the reconciliation time to 5 min

Things to consider:

  • Possibly a bleeding-edge distribution to the experimental channel from main
  • Document how to force reconciliation using kubectl (see the example below)
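
For forcing a reconciliation with kubectl, one option (the resource and CR names are placeholders, and this assumes the controller does not filter out metadata-only updates) is to bump an annotation on the Istio CR so the update event triggers a reconciliation immediately:

    kubectl annotate istios.operator.kyma-project.io default -n kyma-system force-reconcile="$(date +%s)" --overwrite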
