How auth works in EKS with IAM Users
This was not trivial to find: how exactly does authentication and authorization work in EKS if you want to use IAM Users or Roles to authenticate?
As an example, what happens when you run kubectl auth can-i get pods?
Prerequisites
You will need the aws command line tools, kubectl, and heptio-authenticator-aws installed and on your PATH. I also use jq, shyaml, and httpie in the examples.
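If you want a quick sanity check that everything is installed, a small loop like this (the tool list simply mirrors the one above) should print nothing; any name it does print is still missing from your PATH:
[@trurl ~] for tool in aws kubectl heptio-authenticator-aws jq shyaml http; do
>   command -v "$tool" > /dev/null || echo "missing: $tool"
> done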
Client-side configuration: AWS
In the case of EKS there are two parts to configure: the AWS command line tools (AWS CLI) and kubectl. I need to access multiple clusters using multiple credentials, so I’ll cover that more generic case here.
The usual way to configure AWS is to run aws configure --profile <profile-name>, but you can just as well edit ~/.aws/config and ~/.aws/credentials directly:
[@trurl ~] cat .aws/config
[default]
region = us-west-2
[profile some-other-profile]
region = us-west-2
[profile profile-that-uses-iam-role]
region = us-west-2
[@trurl ~] cat .aws/credentials
[default]
aws_access_key_id = [REDACTED]
aws_secret_access_key = [REDACTED]
[some-other-profile]
aws_access_key_id = [REDACTED]
aws_secret_access_key = [REDACTED]
[profile-that-uses-iam-role]
role_arn = <ARN of the role to assume>
source_profile = <name of profile used for auth before assuming this role>
IAM has two kinds of identity entities: users and roles. An IAM User is what you’d expect: it has a username and credentials, belongs to some groups, and has permissions attached to it. An IAM Role is similar, but it’s an identity that a user or a process may assume temporarily for the purpose of performing some actions. IAM Users are usually used for human interaction with AWS; IAM Roles are usually used for automated access. For example: when the EKS control plane wants to tell AWS it needs to start a new node, it can manage that part of your AWS account only because it assumes a role that has sufficient permissions in the account.
The AWS CLI always starts with the credentials of an IAM User (or the account owner, although you should use that identity only to create an IAM User to interact with your account); you can also assume a role, as in the last profile above, but I won’t use that here.
If your configuration is correct, you can do this:
# You may omit the AWS_PROFILE=..., aws will use the default
[@trurl ~] export AWS_PROFILE=<profile-name>
[@trurl ~] aws sts get-caller-identity
{
    "Account": "[REDACTED]",
    "UserId": "[REDACTED]",
    "Arn": "arn:aws:iam::[REDACTED]:user/<username>"
}
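For comparison, and purely as an illustration of the role-assuming profile mentioned earlier (the role and session names below are placeholders), the same call with such a profile reports an assumed-role ARN instead of a user ARN:
[@trurl ~] AWS_PROFILE=profile-that-uses-iam-role aws sts get-caller-identity
{
    "Account": "[REDACTED]",
    "UserId": "[REDACTED]",
    "Arn": "arn:aws:sts::[REDACTED]:assumed-role/<role-name>/<session-name>"
}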
If you created a cluster, you can now ask AWS for the information necessary to access it:
[@trurl ~] aws eks list-clusters
{
    "clusters": [
        "my-cluster",
        [...]
    ]
}
[@trurl ~] aws eks describe-cluster --name=my-cluster
{
    "cluster": {
        "status": "ACTIVE",
        "endpoint": "https://[REDACTED].[REDACTED].us-west-2.eks.amazonaws.com",
        "name": "my-cluster",
        "certificateAuthority": {
            "data": "[REDACTED]"
        },
        [...]
    }
}
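The endpoint and certificateAuthority.data values are exactly what you will need in the next section; if you prefer not to copy them out by hand, a couple of jq one-liners along these lines will extract them:
[@trurl ~] aws eks describe-cluster --name=my-cluster | jq -r '.cluster.endpoint'
https://[REDACTED].[REDACTED].us-west-2.eks.amazonaws.com
[@trurl ~] aws eks describe-cluster --name=my-cluster | jq -r '.cluster.certificateAuthority.data'
[REDACTED]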
Client-side configuration: kubectl
Now that you have the AWS CLI running, create a configuration file for kubectl. You can have a single file for multiple clusters, but it’s cleaner to have a separate file for every cluster. The location does not matter; I use ~/.kube/config.d/<cluster-name>.
Example file:
[@trurl ~] cat ~/.kube/config.d/my-cluster
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <certificateAuthority.data from describe-cluster>
    server: <endpoint from describe-cluster>
  name: <cluster-name>
contexts:
- context:
    cluster: <cluster-name>
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - <cluster-name>
      command: heptio-authenticator-aws
      env:
      - name: AWS_PROFILE
        value: <profile-name>
This file tells kubectl:
- the base URL for the cluster’s API server (cluster.server),
- the certificate authority data to use for TLS verification (certificate-authority-data),
- that for authentication it should use bearer tokens generated by heptio-authenticator-aws.
heptio-authenticator-aws does not execute any remote calls; it just creates a signed token based on the cluster name and credentials from your AWS_PROFILE:
[@trurl ~] heptio-authenticator-aws token -i my-cluster
{
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1alpha1",
    "spec": {},
    "status": {
        "token": "[REDACTED]"
    }
}
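If you are curious what is inside that token: at the time of writing it is just the prefix k8s-aws-v1. followed by a base64url-encoded, pre-signed STS GetCallerIdentity URL. Treat that as an implementation detail that may change, but you can peek at it with something like:
[@trurl ~] TOKEN=`heptio-authenticator-aws token -i my-cluster | jq -r .status.token`
[@trurl ~] python -c "import base64, sys; t = sys.argv[1].split('.', 1)[1]; print(base64.urlsafe_b64decode(t + '=' * (-len(t) % 4)).decode())" "$TOKEN"
https://sts.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15&[...]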
If your configuration is correct AND you have permissions set right, this should work:
[@trurl ~] export KUBECONFIG=~/.kube/config.d/my-cluster
[@trurl ~] kubectl auth can-i get pods
yes
You might also see one of these:
[@trurl ~] kubectl auth can-i get pods
error: You must be logged in to the server (Unauthorized)
meaning there’s something wrong with authentication, or:
[@trurl ~] kubectl auth can-i get pods
no
meaning that authentication went well, but authorization did not. I’ll address both below.
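In both cases it can help to look at the actual HTTP exchange; kubectl’s standard verbosity flag dumps the request it sends and the response it gets back, so you can see whether the API server rejected the token or simply answered that the action is not allowed:
[@trurl ~] kubectl auth can-i get pods -v=8
[...]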
Authentication
Kubernetes has two kinds of users:
- Service accounts: these are just what you would expect from user accounts; they are first-class entities in Kubernetes, available through its API. You can use them with EKS, but there is nothing EKS-specific here, since they sidestep all the IAM machinery, and there are already better guides about that branch, so I’ll skip them here.
- Normal users: managed completely outside Kubernetes; the API server trusts some external software to provide it with the username and groups for every request it receives. This is what EKS uses when you authenticate using IAM identities, and this is what I’ll cover here.
When you execute a kubectl command, it makes a REST call to the Kubernetes API server and sends the token generated by heptio-authenticator-aws in the Authorization header. When you run kubectl auth can-i get pods, the client does more or less this:
# Store the CA data in a file for TLS verification; you can skip it if
# you're fine with the insecure `--verify=no` in http call below.
[@trurl ~] cat ~/.kube/config.d/my-cluster \
| shyaml get-value 'clusters.0.cluster.certificate-authority-data' \
| base64 -D > ~/.kube/config.d/my-cluster-ca-data
# Generate the authentication token
[@trurl ~] TOKEN=`heptio-authenticator-aws token -i my-cluster | jq -r .status.token`
[@trurl ~] REQUEST='{"kind": "SelfSubjectAccessReview", "apiVersion": "authorization.k8s.io/v1", "spec":{"resourceAttributes":{"resource":"pods","verb":"get"}}}'
[@trurl ~] echo "$REQUEST" \
| http --verify=~/.kube/config.d/my-cluster-ca-data \
post https://[REDACTED].[REDACTED].us-west-2.eks.amazonaws.com/apis/authorization.k8s.io/v1/selfsubjectaccessreviews \
Authorization:"Bearer $TOKEN"
HTTP/1.1 201 Created
Content-Length: 196
Content-Type: application/json
Date: Wed, 11 Jul 2018 01:21:27 GMT
{
    [...]
    "status": {
        "allowed": true
    }
}
On the server side, Kubernetes passes the token via a webhook to the aws-iam-authenticator process running on the EKS control plane; if all goes well, the authenticator returns a “normal user” identity consisting of a username (a string) and groups (a list of strings containing group names). That is where authentication ends.
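The exchange between the API server and the authenticator uses the Kubernetes TokenReview API; a successful response looks roughly like this (the username and groups are placeholders here, since they depend entirely on the mappings described below, and the exact API version varies between Kubernetes releases):
{
    "apiVersion": "authentication.k8s.io/v1beta1",
    "kind": "TokenReview",
    "status": {
        "authenticated": true,
        "user": {
            "username": "<username from the mapping>",
            "groups": ["system:masters"]
        }
    }
}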
If it fails then it means that you attempted to execute the request using an IAM User or Role that the aws-iam-authenticator running on your cluster does not know how to map to a “normal user” in Kubernetes space.
There are essentially two cases here:
- The IAM User or Role that was used to create the cluster is hardwired in the aws-iam-authenticator configuration to map to a “normal user” who belongs to the system:masters group. This part is not visible anywhere in the data you can reach via kubectl and aws commands, but that also means you can’t modify it.
- For other IAM Users or Roles you must configure aws-iam-authenticator by setting the aws-auth configmap (you can inspect the existing one as shown below). If you went far enough with the cluster to actually add some nodes, you must have created that configmap already, since it’s the same thing that lets the cluster authenticate its nodes. If not, you might want to look into Getting Started with Amazon EKS.
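Since aws-auth is an ordinary configmap in the kube-system namespace, you can dump its current contents first and use that as a starting point for your edits:
[@trurl ~] KUBECONFIG=~/.kube/config.d/my-cluster kubectl -n kube-system get configmap aws-auth -o yaml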
To authenticate another IAM User or Role, you have to add user or role mappings:
[@trurl ~] cat /tmp/aws-auth-cm
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: <ARN of some-other-role>
      username: some-other-role
      groups:
        - system:masters
  mapUsers: |
    - userarn: <ARN of other-user>
      username: other-user
      groups:
        - system:masters
[@trurl ~] KUBECONFIG=~/.kube/config.d/my-cluster kubectl apply -f /tmp/aws-auth-cm
configmap "aws-auth" configured
I added some-other-role and other-user to the configuration, and for simplicity added them both to system:masters.
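To confirm that a newly mapped identity actually works, generate a token as that identity and talk to the cluster with it; here I assume a second kubeconfig, ~/.kube/config.d/my-cluster-other, which is simply a copy of the one above with AWS_PROFILE pointing at the profile of other-user:
[@trurl ~] KUBECONFIG=~/.kube/config.d/my-cluster-other kubectl auth can-i get pods
yes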
Authorization
There is nothing EKS-specific in the authorization process. In particular, any permissions or policies attached to IAM Users don’t mean anything here: you’re talking directly to the Kubernetes API server, and it uses IAM only for authentication.
EKS clusters run in RBAC mode, meaning you grant permissions by binding roles to identities. In the examples above I did not have to do that, because I depended on aws-iam-authenticator to return the system:masters group for all the identities I used, and that is a built-in group with full permissions on the cluster. That’s the simplest thing to do and most likely what you want for manual interaction with the cluster anyway. For cases where that’s not enough and you would prefer to grant fewer permissions to an account, there are much better guides, in particular Using RBAC Authorization in the Kubernetes docs.
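As a rough sketch of what that looks like (all the names here are made up for illustration): a Role plus a RoleBinding like the following would let members of a group called pod-readers, returned by the authenticator for one of your mappings, read pods in the default namespace instead of receiving system:masters:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: Group
  name: pod-readers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io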