EKS - Grant AWS Role Access to Console

date
May 10, 2023
slug
eks-grant-aws-role-access-to-console
author
status
Public
tags
Notes
summary
Describes how to configure access to an Amazon EKS cluster for AWS roles.
type
Post
thumbnail
Amazon-EKS-Anywhere.jpg
updatedAt
May 10, 2023 09:25 PM
When you set up Amazon Elastic Kubernetes Service (EKS) for the first time, you may receive a message in the AWS Console UI indicating that your current user or role doesn't have access to Kubernetes objects.
To address this issue, use the ConfigMap called aws-auth in the kube-system namespace, which is created when a new EKS cluster is deployed. This ConfigMap establishes a relationship between an AWS user/role and a Kubernetes group.
We achieve this by binding our AWS IAM user/role to a Kubernetes group. For fine-grained permissions, we create a new group with the desired scope. If we don't care about fine-grained permissions, we can simply reuse the group system:masters. (We will only cover fine-grained access.)

Limitations:

AWS has a limitation when referencing ARNs in aws-auth: they cannot contain a path. This caused issues for me. For example, we can't specify an ARN such as arn:aws:iam::111122223333:role/my-team/developers/role-name. The ARN needs to be arn:aws:iam::111122223333:role/role-name instead.
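If your role ARN includes a path, the part after the last slash is the role name, so deriving the path-free ARN is a simple string operation. A quick shell sketch, using the example ARN from above:

```shell
# Role ARN with a path, as in the example above
full_arn="arn:aws:iam::111122223333:role/my-team/developers/role-name"

# aws-auth expects the ARN without the path:
# keep everything up to ":role", then append only the bare role name
account_prefix="${full_arn%%:role/*}:role"   # arn:aws:iam::111122223333:role
role_name="${full_arn##*/}"                  # role-name
bare_arn="${account_prefix}/${role_name}"

echo "$bare_arn"   # arn:aws:iam::111122223333:role/role-name
```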

Steps:

First, create a Kubernetes cluster role that allows read-only access:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks-console
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - namespaces
      - pods
      - events
    verbs:
      - get
      - list
  - apiGroups:
      - apps
    resources:
      - deployments
      - daemonsets
      - statefulsets
      - replicasets
    verbs:
      - get
      - list
  - apiGroups:
      - batch
    resources:
      - jobs
    verbs:
      - get
      - list

Then, let's create a ClusterRoleBinding that binds the Kubernetes group eks-console to our ClusterRole eks-console:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-console
subjects:
  - kind: Group
    name: eks-console
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-console
  apiGroup: rbac.authorization.k8s.io
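Assuming you saved the two manifests above as eks-console-role.yaml and eks-console-binding.yaml (hypothetical filenames; use your own), apply them with kubectl:

```shell
# Apply the ClusterRole and ClusterRoleBinding (filenames are assumptions)
kubectl apply -f eks-console-role.yaml
kubectl apply -f eks-console-binding.yaml

# Confirm both objects exist
kubectl get clusterrole eks-console
kubectl get clusterrolebinding eks-console
```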
Finally, we just need to edit the aws-auth ConfigMap and link our AWS IAM role arn:aws:iam::<ACCOUNT ID>:role/<ROLE NAME> with the Kubernetes group eks-console:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    ...
    - groups:
      - eks-console
      rolearn: arn:aws:iam::<ACCOUNT ID>:role/<ROLE NAME>
      username: <ROLE NAME>
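Rather than editing the ConfigMap by hand, eksctl can add the mapping for you. A sketch, assuming your cluster is named my-cluster (substitute your own cluster name, account ID, and role name):

```shell
# Add the IAM role -> Kubernetes group mapping via eksctl
# (cluster name "my-cluster" is an assumption; placeholders as in the text)
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --arn "arn:aws:iam::<ACCOUNT ID>:role/<ROLE NAME>" \
  --group eks-console \
  --username "<ROLE NAME>"
```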
After saving the ConfigMap, the changes take effect immediately; the mapped role can now view Kubernetes resources in the AWS Console.
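To sanity-check the mapping without switching roles, you can use Kubernetes impersonation (your own identity needs impersonation rights; the username here matches whatever you set in aws-auth):

```shell
# Should be allowed: the eks-console ClusterRole grants get/list on pods
kubectl auth can-i list pods --as "<ROLE NAME>" --as-group eks-console

# Should be denied: the role grants read-only verbs only
kubectl auth can-i delete pods --as "<ROLE NAME>" --as-group eks-console
```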