If you have ever used Kubernetes on other cloud providers, you will find that things on EKS work a bit differently.
Today I was facing the issue of giving one of our devs access to the cluster. First, create a cluster role binding for the new user:
kubectl create clusterrolebinding eks-admin-cluster-admin-binding --clusterrole=cluster-admin --user=eks-admin
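Before touching aws-auth, you can sanity-check the binding using impersonation. This assumes your current kubectl context already has admin rights, and that the username you pass to --as is the same one named in the ClusterRoleBinding (it must also match the username you map in aws-auth later):

```shell
# Confirm the binding was created
kubectl get clusterrolebinding eks-admin-cluster-admin-binding

# Impersonate the user; cluster-admin should answer "yes" for anything
kubectl auth can-i '*' '*' --as eks-admin
```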
Then map the dev's IAM user to that Kubernetes user in the configmap that handles the mapping between AWS IAM and k8s RBAC:
kubectl edit -n kube-system configmap/aws-auth

data:
  mapRoles: |
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::7777777777:role/eks-node-group-role
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::7777777777:user/nick.test
      username: eks-admin
      groups:
        - system:masters
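Editing aws-auth by hand is easy to get wrong (a bad indent can lock you out of the cluster), so if you have eksctl installed it can apply the same user mapping for you. The cluster name, region, and ARN below are the ones from this example:

```shell
eksctl create iamidentitymapping \
  --cluster test-eks \
  --region ap-southeast-2 \
  --arn arn:aws:iam::7777777777:user/nick.test \
  --username eks-admin \
  --group system:masters

# List the mappings to confirm the entry landed
eksctl get iamidentitymapping --cluster test-eks --region ap-southeast-2
```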
Once you have completed the setup, configure your AWS credentials/profile and make sure the active profile is nick.test in this case. Then update your kubeconfig and you should be able to call the k8s API:
aws eks --region ap-southeast-2 update-kubeconfig --name test-eks
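A quick way to verify the whole chain end to end, assuming the profile and mapping from this example:

```shell
# Make sure kubectl will authenticate as the mapped IAM user
export AWS_PROFILE=nick.test
aws sts get-caller-identity   # should report arn:aws:iam::7777777777:user/nick.test

# If the aws-auth mapping and RBAC binding are both correct, this succeeds
kubectl get nodes
```

If `kubectl get nodes` fails with "Unauthorized", the IAM side worked but the aws-auth mapping is wrong; if it fails with "Forbidden", the mapping worked but RBAC did not grant the permission.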