k8s: Use contexts instead of namespaces for flexibility #228
base: 0.11.x
Conversation
NOTE: This is a breaking change for in-cluster deployments; more info below.

Using the --namespace flag assumes the "current" credentials are valid for that namespace (e.g. kubectl does not try to switch contexts when you explicitly specify a namespace, which makes sense). However, a Kubernetes deployment may expect different credentials per namespace, which KUBECONFIG supports, and given how we're using --namespace right now, Nelson isn't leveraging that flexibility. This change instead uses --context to explicitly specify the context (and therefore the token and namespace). Since contexts can be named anything (a context is a logical name tying together a cluster, namespace, and token), and because we expect each DC to have its own KUBECONFIG, Nelson assumes the context name is the same as the namespace name.
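For concreteness, a minimal sketch of the contract this convention imposes (the helper and names are illustrative, not from this patch): every root namespace Nelson manages must have a same-named context in the DC's kubeconfig, e.g. a root namespace dev requires a context literally named dev.

// Minimal sketch, assuming the DC kubeconfig's context names are available
// as a Set. A context binds (cluster, namespace, user/token); Nelson now
// selects one with `kubectl --context <root-namespace>`.
def validateContexts(rootNamespaces: List[String], contexts: Set[String]): Either[String, Unit] = {
  val missing = rootNamespaces.filterNot(contexts.contains)
  if (missing.isEmpty) Right(())
  else Left(s"kubeconfig has no context named: ${missing.mkString(", ")}")
}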
In addition, this change removes the in/out-cluster distinction in the Kubernetes backend. The previous in-cluster behavior used the administrative credentials automatically mounted in the Pod (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod) to do deployments in-cluster. With the switch from --namespace to --context this no longer works, because in that setup there is no KUBECONFIG file; kubectl just uses the mounted token. Therefore, even if Nelson is deployed in the same cluster, a corresponding kubeconfig must still be mounted and specified. This also makes the semantics slightly less confusing and more consistent.
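As a sketch of what in-cluster operation now requires (the path and helper are assumptions, not from this patch): the kubeconfig mounted into Nelson's Pod, e.g. from a Secret volume, has to be handed to each kubectl invocation, for instance via the standard KUBECONFIG environment variable.

// Illustrative sketch: point every exec'd kubectl at the mounted kubeconfig.
// KUBECONFIG is standard kubectl behavior; the mount path is hypothetical.
def kubectlEnv(mountedKubeconfig: String = "/etc/nelson/kubeconfig"): Map[String, String] =
  Map("KUBECONFIG" -> mountedKubeconfig)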
Force-pushed from 6206051 to a03fb75
Force-pushed from a03fb75 to 36d4b65
      .flatMap(_.output)
      .flatMap { stdout =>
        IO.fromEither(Parse.decodeEither[DeploymentStatus](stdout.mkString("\n")).leftMap(kubectlJsonError))
      }

    def getCronJob(namespace: NamespaceName, stackName: StackName): IO[JobStatus] =
-     exec(List("kubectl", "get", "job", "-l", s"stackName=${stackName.toString}", "-n", namespace.root.asString, "-o", "json"), emptyStdin)
+     exec(List("kubectl", "get", "job", "-l", s"stackName=${stackName.toString}", "--context", namespace.root.asString, "-o", "json"), emptyStdin)
actually this is very breaking... this assumes that the kubeconfig files people are using are structured in a particular manner, correct? This would break my setup today.
    def readKubernetesInfrastructure(kfg: KConfig): Option[Infrastructure.Kubernetes] = for {
      inCluster <- kfg.lookup[Boolean]("in-cluster")
      mode      <- if (inCluster) Some(KubernetesMode.InCluster) else readKubernetesOutClusterParams(kfg)
I think this will also break any existing configuration files in the field.
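To illustrate the concern (a self-contained guess at the shape, based only on the `in-cluster` lookup shown above, not on the actual patch): dropping the boolean toggle strands any config that set in-cluster = true without the out-of-cluster parameters.

// Hedged sketch with stubbed types; the real Nelson code differs.
sealed trait KubernetesMode
object KubernetesMode {
  case object InCluster extends KubernetesMode
  final case class OutCluster(kubeconfig: String) extends KubernetesMode
}

// Before: an "in-cluster" boolean chose the mode.
def readModeOld(inCluster: Option[Boolean], kubeconfig: Option[String]): Option[KubernetesMode] =
  inCluster.flatMap(ic => if (ic) Some(KubernetesMode.InCluster) else kubeconfig.map(KubernetesMode.OutCluster))

// After: only the kubeconfig path remains, so a config that said
// in-cluster = true and nothing else now fails to produce a mode.
def readModeNew(kubeconfig: Option[String]): Option[KubernetesMode] =
  kubeconfig.map(KubernetesMode.OutCluster)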
Codecov Report
@@            Coverage Diff             @@
##           0.11.x     #228      +/-   ##
==========================================
+ Coverage   53.08%   55.56%   +2.48%
==========================================
  Files         133      134       +1
  Lines        4591     4373     -218
  Branches      111      112       +1
==========================================
- Hits         2437     2430       -7
+ Misses       2154     1943     -211
Continue to review full report at Codecov.
@adelbertc did you get a chance to think about what to do here? This will break existing deployments