🐛 create bootstrap token if not found in refresh process #11037

Closed
wants to merge 1 commit into from
19 changes: 16 additions & 3 deletions bootstrap/kubeadm/internal/controllers/kubeadmconfig_controller.go
@@ -277,7 +277,7 @@ func (r *KubeadmConfigReconciler) reconcile(ctx context.Context, scope *Scope, c
// If the BootstrapToken has been generated for a join but the config owner has no nodeRefs,
// this indicates that the node has not yet joined and the token in the join config has not
// been consumed and it may need a refresh.
-	return r.refreshBootstrapTokenIfNeeded(ctx, config, cluster)
+	return r.refreshBootstrapTokenIfNeeded(ctx, config, cluster, scope)
}
if configOwner.IsMachinePool() {
// If the BootstrapToken has been generated and infrastructure is ready but the configOwner is a MachinePool,
@@ -315,8 +315,7 @@ func (r *KubeadmConfigReconciler) reconcile(ctx context.Context, scope *Scope, c
return r.joinWorker(ctx, scope)
}

-func (r *KubeadmConfigReconciler) refreshBootstrapTokenIfNeeded(ctx context.Context, config *bootstrapv1.KubeadmConfig, cluster *clusterv1.Cluster) (ctrl.Result, error) {
-	log := ctrl.LoggerFrom(ctx)
+func (r *KubeadmConfigReconciler) refreshBootstrapTokenIfNeeded(ctx context.Context, config *bootstrapv1.KubeadmConfig, cluster *clusterv1.Cluster, scope *Scope) (ctrl.Result, error) { log := ctrl.LoggerFrom(ctx)
token := config.Spec.JoinConfiguration.Discovery.BootstrapToken.Token

remoteClient, err := r.Tracker.GetClient(ctx, util.ObjectKey(cluster))
@@ -326,6 +325,20 @@ func (r *KubeadmConfigReconciler) refreshBootstrapTokenIfNeeded(ctx context.Cont

secret, err := getToken(ctx, remoteClient, token)
if err != nil {
+	if apierrors.IsNotFound(err) {
Member
I think we should limit this behaviour (re-create the token) to only when configOwner.IsMachinePool() or when the config owner is a Machine and it doesn't have the data secret field set.

This is required because Machines, once the data secret field is set, are never going to pick up new data secrets (and having re-created but unused data secrets around will be noisy and possibly confusing when triaging issues).

We also need test coverage for this change.
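For illustration, a minimal sketch of the gate suggested above (not the merged implementation); it assumes the config owner helper exposes IsMachinePool() and DataSecretName(), and the function name is made up for this example:

// shouldRecreateToken sketches the suggested gate: only re-create a missing
// bootstrap token when the config owner is a MachinePool, or when it is a
// Machine whose bootstrap data secret has not been generated yet.
// The inputs would come from scope.ConfigOwner.IsMachinePool() and
// scope.ConfigOwner.DataSecretName() (assumed helpers, used only for illustration).
func shouldRecreateToken(isMachinePool bool, dataSecretName *string) bool {
	if isMachinePool {
		// MachinePool replicas keep joining over time and always need a usable token.
		return true
	}
	// A Machine whose data secret is already set will never consume a new token,
	// so re-creating one would only leave unused secrets around.
	return dataSecretName == nil
}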

Contributor Author
@fabriziopandini Any advice for the issue I ran into? The current logic bug here is:

  1. The token is created and tracked by the cluster-api controller, but its lifecycle is handled by the workload cluster: it will be deleted by the workload cluster without informing the cluster-api controller.
  2. The MachinePool is created by the cluster-api controller, but cluster-autoscaler is used to drive the expected replica count for the pool.

So whenever the cluster is paused, things may get out of control and we can see node join issues, especially with the default 15-minute token TTL.

In our case I have changed the token TTL to 7 days to avoid the issue, but I still think we should find a way to handle it. If auto-refresh is not recommended, would it be acceptable to have some kind of ctl command to do the refresh?
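As background for the TTL discussion above, a small standalone sketch (not part of this PR) of how one could check against the workload cluster whether a kubeadm bootstrap token still exists and has not expired. It assumes the standard bootstrap token Secret layout (a Secret named bootstrap-token-<id> in kube-system with an RFC 3339 "expiration" key) and a plain client-go clientset; the helper name and package are made up:

package tokenutil

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// tokenExpired is an illustrative helper (not from cluster-api). It reports
// whether the bootstrap token with the given token ID is missing from the
// workload cluster or already past its expiration.
func tokenExpired(ctx context.Context, c kubernetes.Interface, tokenID string) (bool, error) {
	secret, err := c.CoreV1().Secrets("kube-system").Get(ctx, "bootstrap-token-"+tokenID, metav1.GetOptions{})
	if err != nil {
		if apierrors.IsNotFound(err) {
			// The workload cluster's token cleaner has already deleted the Secret.
			return true, nil
		}
		return false, err
	}
	raw, ok := secret.Data["expiration"]
	if !ok {
		// No expiration key means the token does not expire.
		return false, nil
	}
	exp, err := time.Parse(time.RFC3339, string(raw))
	if err != nil {
		return false, err
	}
	return time.Now().After(exp), nil
}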

Member
What is the reason for pausing the cluster? Maybe when pausing the cluster, you should also pause the autoscaler for this cluster in some way?

Contributor Author (@archerwu9425, Nov 28, 2024)
In our case:
We have two controller clusters, so we can migrate workload clusters to the other controller cluster during the CAPO controller upgrade, which will pause the cluster. If we hit any issue during migration, the pause may last longer.

As I said, we have used a longer TTL to work around this, but I still think this issue should be handled.

Cluster API supports the paused feature, so I think it's reasonable to handle this kind of case. Just my opinion 😄

Member
While the approach you mention here is valid, the feedback from Fabrizio

limit this behaviour (re-create the token) to only when configOwner.IsMachinePool() or when the config owner is a machine and it doesn't have the data secret field set.

should still apply, given that we don't want to renew every token regardless of whether the owner has joined or not.

Contributor Author
The issue I ran into is with MachinePools, so would it be acceptable if I limit the token re-creation (when the token is not found) to MachinePools created by Cluster API?

Member (@fabriziopandini, Dec 2, 2024)
Sure, it is OK to improve how MPs recover after a Cluster is paused for a long time.
What we want to make sure is that the current change doesn't impact anything which isn't owned by an MP (kubeadmconfig_controller also serves regular Machines, not only MPs).

Also, please do not consider pause a regular Cluster API feature.
It is an option that we introduced to allow extraordinary (emphasis on extraordinary) maintenance operations, and deep knowledge of the system is assumed for whatever happens while the cluster is paused.

Contributor
I've covered this initial request (only recreate for machine pools) in my dupe PR #11520. Sorry that I didn't find this PR earlier – maybe I got distracted by closed PRs or so. Our PRs are now quite similar.

Contributor Author
@AndiDog Great to know. Since you already have all the tests ready, please go on with your PR; it would be nice to have the issue fixed soon.

log.Info("Bootstrap token not found, creating new bootstrap token")
token, err := createToken(ctx, remoteClient, r.TokenTTL)
if err != nil {
return ctrl.Result{}, errors.Wrapf(err, "failed to create new bootstrap token")
}

config.Spec.JoinConfiguration.Discovery.BootstrapToken.Token = token
log.V(3).Info("Altering JoinConfiguration.Discovery.BootstrapToken.Token")

// update the bootstrap data
return r.joinWorker(ctx, scope)
}

return ctrl.Result{}, errors.Wrapf(err, "failed to get bootstrap token secret in order to refresh it")
}
log = log.WithValues("Secret", klog.KObj(secret))