Doc on installing aws-iam-authenticator #270
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
The deployment step instructions are incomplete. Deploying the example DaemonSet causes the pod to end up in CrashLoopBackOff. Logs show the following:
This happens because the container runs as UID 10000, while the DaemonSet creates the hostPath directories '/etc/kubernetes/aws-iam-authenticator/' and '/var/aws-iam-authenticator/' owned by root. Potential fixes are to instruct the user to create the directories on the host with the correct permissions before deploying, or to add an initContainer that sets the permissions before the aws-iam-authenticator pod starts.
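As a minimal sketch of the first suggested fix (pre-creating the host directories), the commands below assume the container's UID/GID is 10000, as described above. `HOST_ROOT` is a hypothetical prefix added so the sketch can be dry-run without root; on a real master node you would leave it empty and run the `chown` as root with `10000:10000`.

```shell
# Pre-create the hostPath directories so the aws-iam-authenticator
# container (UID/GID 10000, per the comment above) can write to them.
# HOST_ROOT is a hypothetical prefix for dry-running unprivileged;
# on the actual node, leave it empty and run as root.
HOST_ROOT="${HOST_ROOT:-$(mktemp -d)}"
mkdir -p "$HOST_ROOT/var/aws-iam-authenticator" \
         "$HOST_ROOT/etc/kubernetes/aws-iam-authenticator"
# On the real host this would be: chown 10000:10000 <both dirs>.
# Here we chown to the current user so the sketch runs without root.
chown "$(id -u):$(id -g)" "$HOST_ROOT/var/aws-iam-authenticator" \
                          "$HOST_ROOT/etc/kubernetes/aws-iam-authenticator"
```

Either this or the initContainer approach works; the initContainer has the advantage of not requiring manual steps on every node.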
/lifecycle frozen
Why was this closed without any explanation?
The example config and the documentation are just WRONG and misleading. Later edit: all API versions should also be 'v1'. This works on Amazon EKS so far:
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: aws-iam-authenticator
rules:
- apiGroups:
  - iamauthenticator.k8s.aws
  resources:
  - iamidentitymappings
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - iamauthenticator.k8s.aws
  resources:
  - iamidentitymappings/status
  verbs:
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - update
  - patch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  - aws-auth
  verbs:
  - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-iam-authenticator
  namespace: kube-system
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: aws-iam-authenticator
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: aws-iam-authenticator
subjects:
- kind: ServiceAccount
  name: aws-iam-authenticator
  namespace: kube-system
# ---
# EKS-style ConfigMap: roles and users can be mapped in the same way as on EKS.
# Mappings defined here do not need to be redefined in the other ConfigMap.
# https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
# Uncomment if using the EKS-style ConfigMap.
# apiVersion: v1
# kind: ConfigMap
# metadata:
#   name: aws-auth
#   namespace: kube-system
# data:
#   mapRoles: |
#     - rolearn: <ARN of instance role (not instance profile)>
#       username: system:node:{{EC2PrivateDNSName}}
#       groups:
#         - system:bootstrappers
#         - system:nodes
#   mapUsers: |
#     - userarn: arn:aws:iam::000000000000:user/Alice
#       username: alice
#       groups:
#         - system:masters
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: aws-iam-authenticator
  labels:
    k8s-app: aws-iam-authenticator
data:
  config.yaml: |
    # A unique-per-cluster identifier to prevent replay attacks
    # (good choices are a random token or a domain name unique to your cluster).
    clusterID: my-dev-cluster.example.com
    server:
      # Each mapRoles entry maps an IAM role to a username and set of groups.
      # Each username and group can optionally contain template parameters:
      #  1) "{{AccountID}}" is the 12-digit AWS account ID.
      #  2) "{{SessionName}}" is the role session name, with `@` characters
      #     transliterated to `-` characters.
      #  3) "{{SessionNameRaw}}" is the role session name, without character
      #     transliteration (available in version >= 0.5).
      mapRoles:
      # Statically map arn:aws:iam::000000000000:role/KubernetesAdmin to a cluster admin.
      - roleARN: arn:aws:iam::000000000000:role/KubernetesAdmin
        username: kubernetes-admin
        groups:
        - system:masters
      # Map EC2 instances in my "KubernetesNode" role to users like
      # "aws:000000000000:instance:i-0123456789abcdef0". Only use this if you
      # trust that the role can only be assumed by EC2 instances. If an IAM user
      # can assume this role directly (with sts:AssumeRole) they can control
      # SessionName.
      - roleARN: arn:aws:iam::000000000000:role/KubernetesNode
        username: aws:{{AccountID}}:instance:{{SessionName}}
        groups:
        - system:bootstrappers
        - aws:instances
      # Map federated users in my "KubernetesAdmin" role to users like
      # "admin:alice-example.com". The SessionName is an arbitrary role session
      # name, such as an e-mail address passed by the identity provider. Note
      # that if this role is assumed directly by an IAM user (not via
      # federation), the user can control the SessionName.
      - roleARN: arn:aws:iam::000000000000:role/KubernetesAdmin
        username: admin:{{SessionName}}
        groups:
        - system:masters
      # Map federated users in my "KubernetesOtherAdmin" role to users like
      # "alice-example.com". The SessionName is an arbitrary role session name,
      # such as an e-mail address passed by the identity provider. Note that if
      # this role is assumed directly by an IAM user (not via federation), the
      # user can control the SessionName. The "{{SessionName}}" macro is quoted
      # to ensure it is properly parsed as a string.
      - roleARN: arn:aws:iam::000000000000:role/KubernetesOtherAdmin
        username: "{{SessionName}}"
        groups:
        - system:masters
      # Map federated users in my "KubernetesUsers" role to users like
      # "alice@example.com". SessionNameRaw is sourced from the same place as
      # SessionName, with the distinction that no transformation is performed
      # on the value. For example, an e-mail address passed by an identity
      # provider will not have its `@` replaced with a `-`.
      - roleARN: arn:aws:iam::000000000000:role/KubernetesUsers
        username: "{{SessionNameRaw}}"
        groups:
        - developers
      # Each mapUsers entry maps an IAM user to a static username and set of groups.
      mapUsers:
      # Map IAM user Alice in 000000000000 to user "alice" in "system:masters".
      - userARN: arn:aws:iam::000000000000:user/Alice
        username: alice
        groups:
        - system:masters
      # List of account IDs to whitelist for authentication.
      mapAccounts:
      # - <AWS_ACCOUNT_ID>
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: aws-iam-authenticator
  labels:
    k8s-app: aws-iam-authenticator
  annotations:
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
spec:
  selector:
    matchLabels:
      k8s-app: aws-iam-authenticator
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        k8s-app: aws-iam-authenticator
    spec:
      # use service account with access to
      serviceAccountName: aws-iam-authenticator
      # Run on the host network (don't depend on CNI).
      hostNetwork: true
      # Run on each master node; keep this commented out on Amazon EKS, where
      # the selector makes the pods unschedulable.
      # nodeSelector:
      #   node-role.kubernetes.io/master: ""
      tolerations:
      # - effect: NoSchedule
      #   key: node-role.kubernetes.io/master
      - key: CriticalAddonsOnly
        operator: Exists
      # Run `aws-iam-authenticator server` with three volumes:
      # - config (mounted from the ConfigMap at /etc/aws-iam-authenticator/config.yaml)
      # - state (persisted TLS certificate and keys, mounted from the host)
      # - output (output kubeconfig to plug into your apiserver configuration, mounted from the host)
      # The initContainer is needed because of permission errors in EKS deployments:
      # https://github.com/kubernetes-sigs/aws-iam-authenticator/issues/270#issuecomment-584238850
      initContainers:
      - name: chown
        image: busybox
        command: ['sh', '-c', 'chown 10000:10000 /var/aws-iam-authenticator; chown 10000:10000 /etc/kubernetes/aws-iam-authenticator']
        volumeMounts:
        - name: state
          mountPath: /var/aws-iam-authenticator/
        - name: output
          mountPath: /etc/kubernetes/aws-iam-authenticator/
      containers:
      - name: aws-iam-authenticator
        image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.3
        args:
        - server
        # Remove this flag if not using the EKS-style ConfigMap.
        - --backend-mode=EKSConfigMap
        - --config=/etc/aws-iam-authenticator/config.yaml
        - --state-dir=/var/aws-iam-authenticator
        - --generate-kubeconfig=/etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml
        # Uncomment if using the kops usage instructions
        # (https://sigs.k8s.io/aws-iam-authenticator#kops-usage);
        # kubeconfig.yaml is pregenerated by the 'aws-iam-authenticator init' step.
        # - --kubeconfig-pregenerated=true
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
          limits:
            memory: 20Mi
            cpu: 100m
        volumeMounts:
        - name: config
          mountPath: /etc/aws-iam-authenticator/
        - name: state
          mountPath: /var/aws-iam-authenticator/
        - name: output
          mountPath: /etc/kubernetes/aws-iam-authenticator/
      volumes:
      - name: config
        configMap:
          name: aws-iam-authenticator
      - name: output
        hostPath:
          path: /etc/kubernetes/aws-iam-authenticator/
      - name: state
        hostPath:
          path: /var/aws-iam-authenticator/
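For completeness, a sketch of applying and verifying the combined manifest above; `auth-deploy.yaml` is a hypothetical filename for the YAML as saved locally, and the commands assume a working `kubectl` context for the cluster:

```shell
# Apply the combined manifest above (saved as the hypothetical
# auth-deploy.yaml) and confirm the DaemonSet pods come up
# instead of entering CrashLoopBackOff.
kubectl apply -f auth-deploy.yaml
kubectl -n kube-system rollout status daemonset/aws-iam-authenticator
kubectl -n kube-system logs -l k8s-app=aws-iam-authenticator --tail=20
```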
Hi @anastazya, is it working fine in your EKS environment? I have created an IAM group, role, and policy to assume the role. I have mapped the role to
I am not sure whether it is a misconfiguration on my side or something wrong with aws-iam-authenticator?
Yes, that's the error I am getting as well, but the causes are different now. Anyway, this module and its documentation are so wrong and misleading that I'm going to make it a personal matter to write an article about it. I'm on parental leave until 22 July and am just gathering my thoughts; I will get to the bottom of this after that date.
@anastazya thank you for the update. |
Hi @anastazya, do you have any updates on this topic?
In the README, the second step, "Run the server", is not clear. Can you expand it to describe how to run the server?
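As a hedged sketch (not authoritative documentation), running the server outside the DaemonSet generally involves generating the cluster identity and then starting the server with the same flags the DaemonSet above passes; the `clusterID` value here is the example one from this thread:

```shell
# Generate a self-signed certificate, key, and kubeconfig for the
# cluster (the 'aws-iam-authenticator init' step mentioned above).
aws-iam-authenticator init -i my-dev-cluster.example.com

# Run the server with the same flags the DaemonSet above passes.
aws-iam-authenticator server \
  --config=/etc/aws-iam-authenticator/config.yaml \
  --state-dir=/var/aws-iam-authenticator \
  --generate-kubeconfig=/etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml
```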