🐛 create bootstrap token if not found in refresh process #11037
Closed
I think we should limit this behaviour (re-creating the token) to only when `configOwner.IsMachinePool()`, or when the config owner is a Machine and it doesn't have the data secret field set.
This is required because Machines, once the data secret field is set, are never going to pick up new data secrets (and having re-created but unused data secrets around will be noisy and possibly confusing when triaging issues).
We also need test coverage for this change.
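For illustration, a minimal sketch of the guard being suggested here. `IsMachinePool()` and `DataSecretName()` are real methods on cluster-api's bootstrap `ConfigOwner` (in `sigs.k8s.io/cluster-api/bootstrap/util`), but the interface and placement below are simplified stand-ins, not the merged code:

```go
package main

import "fmt"

// configOwner mirrors the two ConfigOwner methods this guard needs;
// the real interface in sigs.k8s.io/cluster-api/bootstrap/util has more.
type configOwner interface {
	IsMachinePool() bool
	DataSecretName() *string
}

// shouldRecreateToken limits token re-creation to MachinePool owners, or to
// a Machine owner whose bootstrap data secret is not set yet. Once a
// Machine's data secret is set it never picks up a new one, so re-creating
// the token for it would only leave unused secrets around.
func shouldRecreateToken(owner configOwner) bool {
	if owner.IsMachinePool() {
		return true
	}
	return owner.DataSecretName() == nil
}

// fakeOwner is a stand-in used only to exercise the guard.
type fakeOwner struct {
	machinePool bool
	dataSecret  *string
}

func (f fakeOwner) IsMachinePool() bool     { return f.machinePool }
func (f fakeOwner) DataSecretName() *string { return f.dataSecret }

func main() {
	name := "machine-abc-bootstrap-data"
	fmt.Println(shouldRecreateToken(fakeOwner{machinePool: true})) // true: MachinePools keep consuming tokens
	fmt.Println(shouldRecreateToken(fakeOwner{}))                  // true: Machine hasn't joined yet
	fmt.Println(shouldRecreateToken(fakeOwner{dataSecret: &name})) // false: Machine already has its data secret
}
```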
@fabriziopandini Any advice for the issue I met? The current logic bug is: whenever the cluster is `paused`, things may get out of control and we can see node join issues, especially when we use the default `15 mins` token TTL. In our case, I have changed the token TTL to 7 days to avoid the issue, but I still think we should find a way to handle it. If auto refresh is not recommended, would it be acceptable to have some kind of ctl tool to do the refresh?
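For background on what "re-create the token" means mechanically: a bootstrap token is just a Secret of type `bootstrap.kubernetes.io/token` in `kube-system`, so the recreate step amounts to writing that Secret back. A rough client-go sketch; the secret layout follows the standard Kubernetes bootstrap token format, while the function name and parameters are illustrative only:

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// recreateBootstrapToken writes the bootstrap token Secret back into
// kube-system when the refresh path finds it missing. tokenID and
// tokenSecret are the two halves of the "<id>.<secret>" token string.
func recreateBootstrapToken(ctx context.Context, c kubernetes.Interface, tokenID, tokenSecret string, ttl time.Duration) error {
	s := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "bootstrap-token-" + tokenID,
			Namespace: metav1.NamespaceSystem,
		},
		Type: corev1.SecretTypeBootstrapToken,
		StringData: map[string]string{
			"token-id":                       tokenID,
			"token-secret":                   tokenSecret,
			"expiration":                     time.Now().UTC().Add(ttl).Format(time.RFC3339),
			"usage-bootstrap-authentication": "true",
			"usage-bootstrap-signing":        "true",
		},
	}
	if _, err := c.CoreV1().Secrets(metav1.NamespaceSystem).Create(ctx, s, metav1.CreateOptions{}); err != nil {
		return fmt.Errorf("recreating bootstrap token: %w", err)
	}
	return nil
}
```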
What is the reason for pausing the cluster? Maybe when pausing the cluster, you should also pause the autoscaler for this cluster in some way?!
In our case:
We have two controller clusters, so we can migrate workload clusters to the other controller cluster during the CAPO controller upgrade, which pauses the cluster. If we hit any issue during the migration, the pause may last longer.
As I said, we have used a longer TTL to work around this, but I still think the issue should be handled.
Cluster API supports the paused feature, so I think it's reasonable to handle this kind of case. Just my opinion 😄
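For reference, this is why a paused Cluster stops refreshing tokens: reconcilers bail out early while `spec.paused` (or the pause annotation) is set. A sketch of the usual guard, assuming the `IsPaused` helper from `sigs.k8s.io/cluster-api/util/annotations`; the function wrapping it is illustrative:

```go
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/annotations"
	ctrl "sigs.k8s.io/controller-runtime"
)

// reconcileGuard shows the early return every paused object hits: while the
// Cluster (or the object itself) is marked paused, nothing past this point
// runs, token refresh included, so tokens can expire during a long pause.
func reconcileGuard(cluster *clusterv1.Cluster, obj metav1.Object) (ctrl.Result, bool) {
	if annotations.IsPaused(cluster, obj) {
		// Skip reconciliation entirely until the pause is lifted.
		return ctrl.Result{}, true
	}
	return ctrl.Result{}, false
}
```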
While the approach you mention here is valid, the feedback from Fabrizio should still apply, given that we don't want to renew every token regardless of whether the owner has joined or not.
The issue I met is for MachinePools, so would it be acceptable if I limited the refresh-token-if-not-found behaviour to MachinePools created by Cluster API?
Sure, it is OK to improve how MachinePools recover after a Cluster is paused for a long time.
What we want to make sure is that the current change doesn't impact anything that isn't owned by a MachinePool (kubeadmconfig_controller also serves regular Machines, not only MachinePools).
Also, please do not consider pause a regular Cluster API feature.
It is an option that we introduced to allow extraordinary (emphasis on extraordinary) maintenance operations, and it assumes deep knowledge of the system for whatever happens while the cluster is paused.
I've covered this initial request (only recreate for machine pools) in my dupe PR #11520. Sorry that I didn't find this PR earlier – maybe I got distracted by closed PRs or so. Our PRs are now quite similar.
@AndiDog Great to know. Since you already have all the tests ready, please go ahead with your PR; it would be nice to have this issue fixed soon.