K8s tips: Authenticate on AWS with Kube2IAM

When the pods in your Kubernetes cluster need to access AWS, you have several ways to authenticate, depending on two factors: where the cluster is deployed and the level of security you are looking for.

K8s running outside AWS

If the cluster is deployed outside AWS you have to:

  • Create an IAM user
  • Mount the IAM user’s .aws/credentials file into the Pod, or pass its AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables (see the sketch below)
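
A minimal sketch of the env var approach, assuming the IAM user’s keys are already stored in a Kubernetes Secret named aws-credentials (the Secret and the image are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:latest # hypothetical image
      env:
        # Standard variables picked up by the AWS SDKs' credential chain
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: aws-credentials
              key: access-key-id
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws-credentials
              key: secret-access-key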

K8s running inside AWS

If the cluster is deployed inside AWS (on EC2 instances), you have more options, because you have access to the AWS metadata service!

AWS metadata service

Almost every library written to interact with AWS tries different authentication methods in sequence, and the last one, but not least, is querying the AWS metadata service to get temporary credentials for the IAM role associated with the EC2 instance.
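
You can see this mechanism from any EC2 instance: the metadata service serves temporary keys over a link-local address (the role name below is a placeholder):

# List the IAM role attached to this instance
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Fetch temporary credentials for that role (AccessKeyId, SecretAccessKey,
# Token and an Expiration timestamp)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>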

This means that you can simply deploy your Pod without credentials and it will automagically authenticate using the IAM Role that you previously associated with the EC2 instances of your K8s nodes.

Little warning

Because of how Kubernetes schedules workloads, if you don’t make use of taints/tolerations/nodeSelectors/etc., your Pod could be moved to any other Node if it restarts for any reason, so be sure to assign the IAM role to every Node that could be elected to run your Pod (or pin the Pod, as in the sketch below).
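
A minimal sketch of pinning with a nodeSelector, assuming you have labeled the nodes that carry the IAM role (the label and the image are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  # Schedule only on nodes labeled as carrying the IAM role
  nodeSelector:
    aws-access: "true"
  containers:
    - name: my-app
      image: my-app:latest # hypothetical image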

BIG WARNING

This approach, which seems so cool and makes you feel like you’re living in a world full of unicorns and rainbows, is in reality the worst thing you could do to the security of your AWS account.
Now ANY Pod running inside your cluster is able to access AWS using the IAM Role assigned to the Node!
I don’t think I have to explain why this is so bad.

Kube2IAM to the Rescue

Kube2IAM Github

Kube2IAM Helm Chart Github

Setup

helm install stable/kube2iam

Once installed, Kube2IAM runs inside your cluster as a DaemonSet and does one simple thing: it intercepts every Pod request to the AWS Metadata Service and answers it with credentials for the specific IAM Role assigned to that Pod.

To make it work there are some mandatory settings in the chart:

host:
  iptables: true # Add the iptables rule to catch AWS Metadata requests
  interface: weave/cali+/kube-bridge/.. # Specify your virtual network interface
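
For example, assuming a Calico network (hence the cali+ interface) and the kube-system namespace, the install command becomes (Helm 2 syntax, as used above; the release name is a hypothetical choice):

helm install stable/kube2iam \
  --name kube2iam \
  --namespace kube-system \
  --set host.iptables=true \
  --set host.interface=cali+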

and, on the AWS account, allow the Node role to assume other roles:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "sts:AssumeRole"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
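
If you want to tighten this further, instead of the wildcard you can limit the Resource to a path reserved for Pod roles; the account id and path here are illustrative assumptions:

"Resource": "arn:aws:iam::123456789012:role/my-custom-path/*"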

Then add a Trust Relationship with the Node role on the role that will be assumed:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/kubernetes-node-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
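
As a sketch, assuming the trust policy above is saved as trust.json (the role name and attached policy are hypothetical), the role can be created with the AWS CLI:

# Create the role that Pods will assume, with the trust policy above
aws iam create-role \
  --role-name my-pod-role \
  --assume-role-policy-document file://trust.json

# Attach the permissions the Pod actually needs
aws iam attach-role-policy \
  --role-name my-pod-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess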

Usage

Now you simply have to indicate the role a Pod can assume, using an annotation:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-pod
  template:
    metadata:
      labels:
        app: my-pod
      annotations:
        iam.amazonaws.com/role: role-arn
    spec:
      containers:
        - name: my-app
          image: my-app:latest # hypothetical image
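
To check which role a Pod actually gets, you can query the metadata endpoint from inside it; with Kube2IAM in place the daemon answers and returns only the annotated role. A sketch, assuming the container image ships curl (the Pod name is a hypothetical placeholder):

# Should print the annotated role instead of the Node's one
kubectl exec -it my-pod-xxxxx -- \
  curl http://169.254.169.254/latest/meta-data/iam/security-credentials/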

Restrictions

Using an argument on Kube2IAM, you can set a default role that all Pods without the annotation will assume; this way you prevent them from falling back to the Node’s one.

extraArgs:
  default-role: app-default

You can also restrict the roles that can be assumed inside a Namespace: enable the feature with an additional argument on Kube2IAM, and specify the whitelist of roles with an annotation on the Namespace.

extraArgs:
  namespace-restrictions: true

and on the Namespace:

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    iam.amazonaws.com/allowed-roles: |
      ["my-custom-path/*"]
  name: default

Your Node’s role is now safe and you have fine-grained control over which AWS permissions each Pod gets; as a bonus, you also get automatic key rotation.