Granting User Access to Your Kubernetes Cluster
February 13, 2019

How to Create a Kubernetes Cluster and Configure User Accounts


Creating a Kubernetes cluster and managing Kubernetes users doesn't need to be a challenge. In this blog, we walk you through how to create a Kubernetes cluster and configure user accounts for it.


What Is a Kubernetes Cluster?

A Kubernetes cluster is a collection of node machines used for running containerized applications.

The nodes pool their resources to create a more powerful machine. This makes using a Kubernetes cluster advantageous for the enterprise. It helps you keep pace with the speed of DevOps.



Get Buy-In For Your Kubernetes Cluster

Before you can get started with your Kubernetes cluster, you need to get buy-in. 

Historically, the adoption of open source software (OSS) within a company starts at the grass-roots level: a developer or system admin will set up the technology on their workstation or in a lab environment to play with and understand it.

After a while, that pioneer will see the value of the technology in solving actual business problems within their organization and begin to spread the word to their teammates. The next step would be to give access to other team members, so they too can realize the value that the particular OSS brings.

Getting buy-in from team members is a critical step in the OSS adoption process, as greater awareness allows for broader discussions on how the solution can be formally integrated into the organization’s processes and infrastructure.

The first step is to give team members “sandbox” access to the cluster, so they can begin to learn and understand its benefits. We want to ensure that their activities are isolated and don’t negatively impact the Kubernetes control plane components.


How to Manage Kubernetes Users 

Let’s start by doing a quick review of how Kubernetes manages users and provides access to the Kubernetes API server (i.e., the brains of your cluster).

The first part is really simple: Kubernetes doesn't manage users. Normal users are assumed to be managed by an outside, independent service like LDAP or Active Directory. In a standard installation of Kubernetes (i.e., using kubeadm), authentication is done via standard transport layer security (TLS) certificates.

Any user that presents a valid certificate signed by the cluster's certificate authority (CA) is considered authenticated. In this configuration, Kubernetes determines the username from the common name field in the 'subject' of the cert (e.g., "/CN=bob"). From there, the role-based access control (RBAC) sub-system determines whether the user is authorized to perform a specific operation on a resource.
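
If you're curious which username your current credentials map to, you can decode the client certificate embedded in your existing kubeconfig and inspect its subject. This quick check assumes a single-user kubeconfig with the certificate data embedded (rather than referenced as a file path):

$ kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 --decode | openssl x509 -noout -subject

The CN in the resulting subject is the username Kubernetes will see, and any O (organization) values are treated as group memberships.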

As you may know, kubectl is the official command-line interface (CLI) tool for deploying and managing applications on Kubernetes. The kubectl CLI uses TLS certificates to authenticate to the API server for every command. These certs, along with the other details needed to connect to a Kubernetes cluster, are stored in a 'kubeconfig' configuration file.

If the cluster was installed using kubeadm, a kubeconfig file is automatically generated. This config file grants full administrative rights to the cluster (via the cluster-admin RBAC role). It would be dangerous to give this file to our team members, as they could unknowingly do bad things to our cluster.

While we want everyone to have unique usernames, the username in the auto-generated kubeconfig and cert is 'admin'. I'd suggest using usernames that match, or are as close as possible to, the username convention already in use by your organization. This should help with any future integration with external authentication providers.


How to Create a Kubernetes Cluster and Enable User Accounts

With the above in mind, it becomes clear that we'll want to create a separate kubeconfig file for each of our team members that grants them reasonable access to the cluster. The following steps show how to achieve that goal. While these steps can be executed on the Kubernetes master that contains the auto-generated kubeconfig, a better practice is to install kubectl on your workstation and copy the generated kubeconfig from the master into its default location (~/.kube/config) on your machine.
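
For example, on a kubeadm-based install the admin kubeconfig typically lives at /etc/kubernetes/admin.conf on the master, so the copy might look something like this (adjust the user and host to match your environment):

$ mkdir -p ~/.kube
$ scp root@<master-ip>:/etc/kubernetes/admin.conf ~/.kube/config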

The only other tool we’ll be using is openssl, which is already installed on most Linux distributions. With the above in place, we can now begin the process of creating kubeconfig files for our team. Let’s assume that Bob is the first team member that we will be creating the kubeconfig file for. A kubeconfig file requires the URL of the API server, a cluster CA certificate, and credentials in the form of a key and a certificate signed by the cluster CA.

1. Create a User Account 

The first step is to create a key and certificate signing request (CSR) for Bob’s access to the cluster using openssl:


$ openssl req -new -newkey rsa:4096 -nodes -keyout bob-k8s.key -out bob-k8s.csr -subj "/CN=bob/O=devops"
Generating a 4096 bit RSA private key
.............................................................................................................++
.........++
writing new private key to 'bob-k8s.key'
-----

 

Now that we have a CSR, we need to have it signed by the cluster CA. To do that, we create a CertificateSigningRequest object within Kubernetes containing the CSR we generated above. Here, I use a 'template' CSR manifest and a neat trick with kubectl's --edit parameter, which lets you edit the manifest before submitting it to the API server:

 


$ kubectl create --edit -f k8s-csr.yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: bob-k8s-access
spec:
  groups:
  - system:authenticated
  request: # replace with output from shell command: cat bob-k8s.csr | base64 | tr -d '\n'
  usages:
  - client auth
certificatesigningrequest.certificates.k8s.io/bob-k8s-access created

 

With the kubectl --edit parameter, you can insert the base64-encoded CSR directly into the manifest by using the vi editor ':r!' sequence to pull in the output of the shell command above. Just ensure that the base64 CSR string ends up on the same line as the 'request' field.
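
If you'd rather skip the interactive edit entirely, a non-interactive alternative is to let the shell substitute the encoded CSR into a here-document; the sketch below is equivalent to the template above:

$ cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: bob-k8s-access
spec:
  groups:
  - system:authenticated
  request: $(cat bob-k8s.csr | base64 | tr -d '\n')
  usages:
  - client auth
EOF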

Once the CSR object has been created, it enters a 'Pending' condition, as seen in the output below:

 


$ kubectl get csr
NAME             AGE     REQUESTOR   CONDITION
bob-k8s-access   72s     tendai      Pending

 

Next, we want to approve the CSR object. To do that, we issue the command below:

 


$ kubectl certificate approve bob-k8s-access
certificatesigningrequest.certificates.k8s.io/bob-k8s-access approved

 

Now if we check the CSR again, we see that it is in an 'Approved,Issued' condition:

 


$ kubectl get csr
NAME             AGE     REQUESTOR   CONDITION
bob-k8s-access   2m16s   tendai      Approved,Issued

 

That means that Bob's base64-encoded, signed certificate has been issued and made available in the 'status.certificate' field of the CSR object. To retrieve the certificate, we can issue the following command:

 


$ kubectl get csr bob-k8s-access -o jsonpath='{.status.certificate}' | base64 --decode > bob-k8s-access.crt

 

Let’s verify that we have a certificate for Bob:

 


$ cat bob-k8s-access.crt
-----BEGIN CERTIFICATE-----
MIIEITCCAwmgAwIBAgIUEZPUkdn8DVPWIKh6NJ911hpGaxgwDQYJKoZIhvcNAQEL
BQAwFTETMBEGA1UEAxMKa3ViZXJuZXRlczAeFw0xOTAxMjMxOTI5MDBaFw0yMDAx
MjMxOTI5MDBaMCAxEDAOBgNVBAoTB3N1cHBvcnQxDDAKBgNVBAMTA2JvYjCCAiIw
DQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAMeRxQvtbc+hBmYTZR7THKq4Mhhs
[output omitted]
SCK5Vh6wMIelmwW+rI2zTuo95qcmiQoQFyFwv6vULjJIw04GV3eBBGlqPBmyuzeV
Ho6IjhCorljh3aPG4TxAILpeXp+F1htAhxYTFSqrAecfr5aL1qeWG1gnISMeRhuC
urlHGwY=
-----END CERTIFICATE-----

 

The next requirement for Bob’s kubeconfig file is the cluster CA certificate. That is easy to get as we already have it in our existing kubeconfig file. To retrieve it, we can use the following command to save it into a file named ‘k8s-ca.crt’:

 


$ kubectl config view -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' --raw | base64 --decode - > k8s-ca.crt

 

Great! Now we can start creating Bob’s kubeconfig file. Again, a kubeconfig file consists of a cluster configuration (Name, URL, CA cert), a user configuration (name, key, cert) and a context configuration. A context specifies the cluster, the user and the namespace that kubectl will use when making calls to the API server.

2. Configure Your Kubernetes Cluster

Let’s set up the cluster configuration in Bob’s kubeconfig file. We will pull these details from our existing kubeconfig using the command below. Note that the ‘kubectl config set-cluster’ command uses the --kubeconfig parameter to create the file ‘bob-k8s-config’:

  
$ kubectl config set-cluster $(kubectl config view -o jsonpath='{.clusters[0].name}') --server=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}') --certificate-authority=k8s-ca.crt --kubeconfig=bob-k8s-config --embed-certs
Cluster "kubernetes" set.

 

If we look at the contents of bob-k8s-config that was created by the last command, we see that the cluster configuration has been set:

 


$ cat bob-k8s-config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJT [output omitted]
    server: https://192.168.1.182:6443
  name: kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []

 

We can see that the users and contexts lists are empty. Let's set up the user next, which will import Bob's key and cert into the file. For this we will use the 'kubectl config set-credentials' command below:

 


$ kubectl config set-credentials bob --client-certificate=bob-k8s-access.crt --client-key=bob-k8s.key --embed-certs --kubeconfig=bob-k8s-config
User "bob" set.

 

The final kubeconfig requirement is to create a context. We will do that using the ‘kubectl config set-context’ command below:

 


$ kubectl config set-context bob --cluster=$(kubectl config view -o jsonpath='{.clusters[0].name}') --namespace=bob --user=bob --kubeconfig=bob-k8s-config
Context "bob" created.

 

The --namespace parameter tells kubectl which namespace to use for that context, 'bob' in our case. We haven't created a namespace for Bob yet, so let's do that now and give it a couple of meaningful labels:

 


$ kubectl create ns bob
namespace/bob created
$ kubectl label ns bob user=bob env=sandbox
namespace/bob labeled

 

Finally, we'll want to specify the context that Bob will use for his kubectl commands. For this, we will use the 'kubectl config use-context' command below:

 


$ kubectl config use-context bob --kubeconfig=bob-k8s-config
Switched to context "bob".

 

Now let’s test Bob’s kubeconfig by running the ‘kubectl version’ command:

 


$ kubectl version --kubeconfig=bob-k8s-config
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:36:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

 

The fact that we did not receive any errors shows that Bob’s kubeconfig file is configured correctly. If there were any issues with the certs, an error would be returned instead of the Server Version.

Great, now let’s go ahead and list the running pods using Bob’s kubeconfig:

 


$ kubectl get pods --kubeconfig=bob-k8s-config
Error from server (Forbidden): pods is forbidden: User "bob" cannot list resource "pods" in API group "" in the namespace "bob"

 

Surprised? You shouldn’t be. Everything that we have done so far is just to allow Bob to authenticate to the cluster. However, he has not been authorized to do anything!
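
You can confirm what Bob is (and isn't) allowed to do with the 'kubectl auth can-i' command, which asks the API server to evaluate an action against its authorization rules:

$ kubectl auth can-i create deployments --namespace=bob --kubeconfig=bob-k8s-config
no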


3. Assign Roles Within a Namespace

Again, this is where K8s RBAC comes into play. Kubernetes uses roles to determine if a user is authorized to make a call. Roles are scoped to either the entire cluster via a ClusterRole object or to a particular namespace via a Role object.

We need to give Bob a role that gives him complete freedom within the 'bob' namespace but nothing outside of it. Kubernetes allows you to use a ClusterRole but scope it to a single namespace via a RoleBinding. The default 'admin' ClusterRole is a good place to start, as it will give Bob the freedom to create most types of Kubernetes objects within his namespace.


$ kubectl create rolebinding bob-admin --namespace=bob --clusterrole=admin --user=bob
rolebinding.rbac.authorization.k8s.io/bob-admin created
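
If you'd rather keep RBAC grants in version control, the declarative equivalent of that command is roughly the following RoleBinding manifest:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bob-admin
  namespace: bob
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: bob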

 

Now if we re-run the command to list pods using Bob’s kubeconfig, we no longer receive an error:

 


$ kubectl get pods --kubeconfig=bob-k8s-config
No resources found.

 

You are now ready to securely send Bob his kubeconfig. You'll need to include instructions on where the file should be placed. The location $HOME/.kube/config is where kubectl looks for the file by default, so it is probably best to place it there. You'll also want to include instructions on how to install the kubectl command-line tool.
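
The note you send Bob can be as simple as a few commands, assuming he already has kubectl installed and received the file as 'bob-k8s-config':

$ mkdir -p ~/.kube
$ cp bob-k8s-config ~/.kube/config
$ kubectl get pods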

That is all there is to it. To further protect your cluster from resource over-utilization, you can configure ResourceQuota, LimitRange, and NetworkPolicy objects; I will cover those in my next post. Since Kubernetes currently does not support certificate revocation lists (CRLs), revoking Bob's access to the cluster is as simple as deleting the RoleBinding created above:

 


$ kubectl delete rolebinding bob-admin --namespace=bob
rolebinding.rbac.authorization.k8s.io/bob-admin deleted

 


Why Kubernetes Cluster Management Is Important

Kubernetes cluster management is an important part of your overall strategy. You want to make sure you're effectively managing a group of Kubernetes clusters. Enterprises, in particular, will have multiple clusters for development, testing, and production, and these will likely be distributed across environments.

Managing them can be tricky unless you enlist expert help.

Get Support For Kubernetes Cluster Management

OpenLogic experts can help you configure and optimize your Kubernetes cluster management. In fact, we offer a Kubernetes Foundations Service to help you get started faster.

We also offer ongoing support for Kubernetes. Talk to a Kubernetes expert today to learn how to get more out of this important technology. 

Talk to a Kubernetes Expert
