Incorrect credentials being used when creating Kubernetes Client #347
Comments
Thanks for the digging - yikes, this sounds no good. What are the requirements to replicate this?
Hi @chadlwilson -- Thanks for getting back so quickly! Indeed, no good! The requirements you list are correct and all that I can think of to replicate this behaviour; the Kubernetes version should be irrelevant. Something I forgot to clarify in the above post: I said the incorrect credentials are used 'intermittently'. My belief is that this is because the refresh occurs every 1 minute while the clients are recycled every 10 minutes by the plugin, so there are periods (e.g. when the server first starts and the plugin initialises) where the client is within its first minute and therefore skips the refresh.
@chadlwilson More digging done :D I believe this is the same issue that we are seeing (raised on the underlying client's repo). There is a workaround suggested on that issue, which is to use the token provider, so it should just be a case of changing one line in the plugin. Will give this a try and build a version of the plugin locally to run on our test server. Hopefully I will get time for this tomorrow; if successful, how would you feel about a PR?
Sure, a PR is welcome. Would need to understand whether that's the best way, or whether we are better off turning off autoconfiguration entirely, as it seems some others did in that thread?
Unfortunately neither of the suggestions on that issue worked 😞 Current theory is that they only work with the latest version (6.x) and not with the backports to 5.x, but I have not yet validated this. |
Hey @chadlwilson Have raised a PR updating to use the latest version of the underlying client. Tested both disabling auto configuration completely when using this version and using the token provider; both work, but went with disabling auto configure. Verified against our development GoCD server.
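For reference, a minimal sketch of what "disabling auto configure" can look like with the fabric8 Kubernetes client. This is illustrative only, not the plugin's actual code: the method name, parameters, and values are hypothetical. The idea is to start from Config.empty() (which skips autoconfiguration) and pass the agent credentials in explicitly, so the client never reads the server's credentials from the well-known in-cluster locations.

```java
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class ClientFactorySketch {

    // Hypothetical factory method: builds a client from explicitly supplied
    // agent credentials instead of letting the client auto-configure itself.
    static KubernetesClient createClient(String clusterUrl,
                                         String agentToken,
                                         String caCertData) {
        // Config.empty() produces a Config with autoconfiguration skipped, so
        // the client will not pick up the server's service-account token from
        // the well-known in-cluster paths and refresh against it.
        Config config = new ConfigBuilder(Config.empty())
                .withMasterUrl(clusterUrl)
                .withOauthToken(agentToken)   // the credentials created for the agents
                .withCaCertData(caCertData)
                .build();
        return new KubernetesClientBuilder().withConfig(config).build();
    }
}
```

Note that KubernetesClientBuilder is the 6.x entry point; on the 5.x line one would construct new DefaultKubernetesClient(config) instead.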
Hello,
We are seeing an issue when going from version v3.8.2-350 of the plugin to v3.8.4-408, where intermittently the 'wrong' cluster credentials are being used when pods are created. The credentials being used are those of the GoCD server rather than the ones that have been created specifically for the agents. We know this because we see an error message like the one below (some info redacted).

Doing some digging into the changes between these two versions, I think the issue is due to an upgrade of the underlying Kubernetes client from version 5.12.2 to 5.12.4.

Between these versions, some functionality was back-ported to automatically 'refresh' tokens, which uses the well-known locations for these credentials. The issue is that this picks up the credentials for the server, rather than those passed into the plugin that should be used for the agent.

Given a cursory glance I could not find a way to disable this auto-refresh mechanism, but I have far from done a deep dive.