
Multi-tenant Clusters In Kubernetes

Background

Companies care about delivering their products to customers as quickly and cheaply as possible, so they prefer infrastructure tools that help achieve these goals. While this is possible when using Kubernetes clusters, it’s common to face resource allocation and security challenges. In this article, we will look at how Kubernetes clusters, and most importantly Kubernetes multi-tenancy, can help address some of these challenges.

What is multi-tenancy?

Multi-tenancy at its core allows tenants (development teams, applications, customers, or projects) to share a single instance of an application, software, or compute resources. Shared resources include CPU, memory, networking, the control plane, and others. Giving every tenant its own dedicated resources or services is not financially scalable, so sharing saves cost and simplifies administration and operational overhead for most companies.


Multi-tenancy in Kubernetes

Kubernetes supports multi-tenancy by providing constructs that let workloads run well side-by-side and access resources as needed. A cluster admin looking to achieve this can run a cluster controlled by a logical control plane, with multiple tenants running one or more workloads in that cluster. The cluster should also isolate tenants from each other's workloads to minimize the harm a compromised or malicious tenant can do to the cluster and to other tenants.


Multi-tenancy cluster setup by admin

Cluster admins can use the RBAC approach to achieve multi-tenancy, as illustrated in the image below.

(Image credit: Yogesh Dev)

Role-Based Access Controls (RBAC)

To set up RBAC per tenant, we need to create a namespace per tenant. The cluster's worker node resources are shared among namespaces, but each namespace provides an isolated environment for a tenant to run workloads. We will then create a service account, a secret for the service account, a role, and a role binding for the tenant. Finally, we will update the kubeconfig file to access the tenant's Kubernetes resources.

In this exercise, we will use minikube to set up a single node cluster:


1. Set up a cluster

minikube start

2. Create a namespace

kubectl create ns developers

3. Create a service account

kubectl create sa developers-user --namespace developers

4. Create a role

##role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: developers
  name: developers-user-full-access
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources:
  - jobs
  - cronjobs
  verbs: ["*"]

kubectl apply -f role.yaml

5. Create role binding

##rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-user-view
  namespace: developers
subjects:
- kind: ServiceAccount
  name: developers-user
  apiGroup: ""
roleRef:
  kind: Role
  name: developers-user-full-access
  apiGroup: rbac.authorization.k8s.io

kubectl apply -f rolebinding.yaml
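With the role bound, we can sanity-check the service account's permissions using kubectl's impersonation support; a quick sketch:

# should print "yes"
kubectl auth can-i list pods --namespace developers --as=system:serviceaccount:developers:developers-user

# should print "no"
kubectl auth can-i list pods --namespace default --as=system:serviceaccount:developers:developers-user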

6. Create a secret for the service account

Create a secret for the service account created in Step 3

##secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: developers-user-secret
  namespace: developers
  annotations:
    kubernetes.io/service-account.name: developers-user
type: kubernetes.io/service-account-token

kubectl apply -f secret.yaml

7. Extract ca.crt from secret

kubectl get secret --namespace developers developers-user-secret -o json | jq -r '.data["ca.crt"]' | base64 -d > "./ca.crt"

8. Get user token from secret

TOKEN=$(kubectl get secret --namespace developers developers-user-secret -o json | jq -r '.data["token"]' | base64 -d) && export TOKEN

9. Get current context

CURRENT_CONTEXT=$(kubectl config current-context) && export CURRENT_CONTEXT

10. Get cluster name

CLUSTER_NAME=$(kubectl config view -o jsonpath="{.contexts[?(@.name == \"${CURRENT_CONTEXT}\")].context.cluster}") && export CLUSTER_NAME

11. Get cluster endpoint

CLUSTER_ENDPOINT=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"${CLUSTER_NAME}\")].cluster.server}") && export CLUSTER_ENDPOINT

12. Set the cluster entry in kubeconfig

kubectl config set-cluster developers-user-developers-minikube --server="${CLUSTER_ENDPOINT}" --certificate-authority="./ca.crt" --embed-certs=true

13. Set the token credentials entry in kubeconfig

Use the token from step 8:

kubectl config set-credentials developers-user-developers-minikube --token="${TOKEN}"
kubectl config set-context developers-user-developers-minikube --cluster=developers-user-developers-minikube --user=developers-user-developers-minikube --namespace=developers

Now let's change the context to developers-user-developers-minikube and try to get pods inside the developers namespace, and afterwards in all namespaces.


kubectl config use-context developers-user-developers-minikube
kubectl get pods
kubectl get pods -A

As expected, the tenant is not able to access resources in other namespaces.


Quota enforcement

Cluster admins can enforce resource quotas in a particular namespace; this is important to pre-define the allocation of resources for each tenant and avoid unintended resource contention and depletion. ResourceQuota is a Kubernetes object that enables cluster admins to restrict tenants' resource usage in a given namespace. The Kubernetes quota admission controller watches for new objects created in that namespace and monitors and tracks resource usage.


Create a CPU ResourceQuota on the namespace, specifying requests and limits:

##resourcequota.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: developers-cpu-quota
  namespace: developers
spec:
  hard:
    requests.cpu: "1"  
    limits.cpu: "2"

kubectl apply -f resourcequota.yaml -n=developers

When applied by the cluster admin, the sum of CPU requests will not exceed 1 core and the sum of CPU limits will not exceed 2 cores across all pods within the developers namespace.
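With the quota in place, every pod created in the developers namespace must declare CPU requests and limits (or receive them from a LimitRange), otherwise the quota admission controller rejects it. A minimal sketch of a compliant pod; the name and image are illustrative:

##quota-test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-test
  namespace: developers
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m" # counts toward requests.cpu (1 core total)
      limits:
        cpu: "500m" # counts toward limits.cpu (2 cores total)

kubectl apply -f quota-test-pod.yaml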

Network policies to monitor tenant communications

By default, network communication between namespaces is permitted in most Kubernetes deployments. The cluster admin will need to configure network policies to add isolation to each namespace, which also helps support multiple tenants.

Example NetworkPolicy file that allows traffic only from pods within the same namespace and therefore blocks traffic from external namespaces:

##networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: developers-block-external-namespace-traffic
  namespace: developers
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}

Apply the above with the command below:

kubectl apply -f networkpolicy.yaml -n=developers
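To verify the isolation, one quick sketch is to call a pod in the developers namespace from another namespace. Note that minikube's default CNI does not enforce NetworkPolicy, so this assumes the cluster runs a policy-aware CNI such as Calico; the pod IP and port below are placeholders:

# Find the IP of a pod running in the developers namespace
kubectl get pods -n developers -o wide

# From the default namespace, this request should now time out
kubectl run netpol-test --rm -it --image=busybox -n default -- wget -qO- -T 5 <developers-pod-ip>:8080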

Cost optimization for Kubernetes multi-cluster

As the workloads from tenants or teams scale, it becomes important to optimize the costs of running your Kubernetes cluster. Putting in place the right cost monitoring and optimization setup ensures that cluster admins productively utilize available compute, memory, pods, namespaces, services, controllers, and other resources.

  • Ensure the correct number of worker nodes: Keeping the right size, number, and type of nodes running in your Kubernetes cluster is critical for cost optimization.

  • Configure auto-scaling: With the Horizontal Pod Autoscaler (HPA), we can dynamically adjust the number of pods in a deployment to match current workload requirements and tenants’ demand (see the example manifest after this list).

  • Maintaining pod sizes: We can use resource requests and limits to give workloads enough resources for optimal performance while avoiding waste, which helps keep costs in check. Observe pod usage and workload performance over time and adjust requests and limits accordingly.
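As an example of the auto-scaling point above, here is a minimal sketch of an HPA manifest for a hypothetical deployment named hello-node in the developers namespace; the deployment must define CPU requests for the utilization metric to work, and the targets are illustrative:

##hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-node-hpa
  namespace: developers
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-node
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

kubectl apply -f hpa.yaml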


Web UI dashboard with limited view

Kubernetes provides a basic, general-purpose web UI where users or tenants can view and deploy containerized applications and interact with the cluster. It can be used to troubleshoot or manage existing resources.


The interface shows a high-level overview of the applications running on the cluster and provides real-time information on the current state of the cluster and the objects that compose it. Users can interact with these objects, for example to scale a service or restart a pod.


To set it up, we can use the following command as the cluster admin:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

To access the dashboard from a local workstation, we then create a secure channel to our Kubernetes cluster by running the following command:

kubectl proxy

Let's access the dashboard at the default kubectl proxy URL:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

This view provides an interface with token authentication and a dashboard covering all the main elements of Kubernetes.


Using an authentication token (RBAC)

To log in to the dashboard, we will use the bearer token generated for the tenant in the developers namespace in step 8.



Tenants in the developers namespace cannot view resources in the default namespace, as shown in the notification. Deploy a simple hello-node app in the developers namespace to see some resources on the dashboard:


kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080

Switch from the default to the developers namespace by changing the query namespace from default to developers.

The tenant can now view resources in the developers namespace.




Multi-tenant cluster management using Argo CD (RBAC configuration for application GitOps)

Apart from automating the day-to-day tasks required to manage and monitor continuous delivery for Kubernetes, Argo CD also makes it possible to enforce the necessary security boundaries for tenants that want to use GitOps to deploy applications into a Kubernetes cluster to test and verify new features.

Install Argo CD

As the cluster admin, we can install an Argo CD GitOps instance in the cluster. First, create a dedicated namespace for it:

kubectl create namespace argocd

The above command creates a namespace for the Argo CD services and applications to reside in. Download the latest Argo CD CLI version from https://github.com/argoproj/argo-cd/releases/latest
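The Argo CD components themselves can then be installed into that namespace; a sketch using the upstream stable install manifest from the Argo CD getting-started documentation:

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml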


Access the Argo CD API server

By default, the Argo CD API server is not exposed to an external IP. For the purposes of this article, we are going to utilize kubectl port-forwarding to connect to the API server without actually exposing the service.

kubectl port-forward svc/argocd-server -n argocd 8080:443

This will make things easier for us as we can now access the API server using localhost:8080.


Note: this method is only for local development or demonstration purposes.

The initial password for the admin account is auto-generated and stored as clear text in the field password in a secret named argocd-initial-admin-secret in your Argo CD installation namespace. You can simply retrieve this password using kubectl:

ADMIN_PASSWORD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d) && export ADMIN_PASSWORD
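The argocd CLI commands used later require an authenticated session, so we first log in as admin through the port-forwarded endpoint; the --insecure flag is used here because the API server is reached over a plain port-forward without a trusted certificate:

argocd login localhost:8080 --username admin --password "${ADMIN_PASSWORD}" --insecure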

Users and roles in Argo CD

As the cluster admin, we can create new users for different tenants, so that the GitOps instance can be shared among tenants with limited permissions. We are going to edit the argocd-cm ConfigMap directly in this example:


kubectl edit configmap argocd-cm -n argocd

Add the following to the data section:

data:
  accounts.jeff: apiKey,login
…

The user jeff has two capabilities:

1. apiKey - allows generating authentication tokens for API access

2. login - allows logging in via the UI



The user has no password set yet.

To create a password, the cluster admin uses the admin's password with the following command:

argocd account update-password --account jeff --new-password jeff1234 --current-password "${ADMIN_PASSWORD}"

The user can now log in.
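For example, with the password set above, jeff can also authenticate through the CLI:

argocd login localhost:8080 --username jeff --password jeff1234 --insecure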



By default, Argo CD has two built-in roles: role:readonly and role:admin. All new users fall under the policy.default of role:readonly from the argocd-rbac-cm ConfigMap, which cannot create resources or modify Argo CD settings.





The RBAC ConfigMap will need to be updated to allow jeff to perform some operations. The cluster admin runs the following to pull up the ConfigMap once again:


kubectl get configmap argocd-rbac-cm -n argocd -o yaml > argocd-rbac.yml

Now add the following

data:
  policy.csv: |
    p, role:developer, applications, *, */*, allow
    p, role:developer, clusters, get, *, allow
    p, role:developer, repositories, get, *, allow
    p, role:developer, repositories, create, *, allow
    p, role:developer, repositories, update, *, allow
    p, role:developer, repositories, delete, *, allow
    g, jeff, role:developer
  policy.default: role:''

Now apply the RBAC config map by running the following:

kubectl apply -f argocd-rbac.yml
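Once jeff has logged in, his effective permissions can be sanity-checked from his own CLI session, for example:

# should return "yes" under role:developer
argocd account can-i create applications '*/*'

# should return "no", since role:developer grants no project permissions
argocd account can-i create projects 'default'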



Jeff is now able to create an application in the default namespace, but what if the cluster admin wants to control which namespaces a tenant can deploy to? RBAC in the ConfigMap alone will not achieve this. It is also good practice to separate tenant access so that one team or tenant cannot affect another tenant's resources. With Argo CD Projects we can achieve this.


Projects

Argo CD Projects provide logical application grouping and help separate tenants from each other in a multi-tenant Argo CD instance. Cluster admins can create one project per tenant and use project settings to specify where and what tenants can run their workloads, restricting access to namespaces, repositories, clusters, and so on. This lets us limit each developer team to its own namespaces only.


The cluster admin creates a project to provide limited access to a tenant:

kubectl apply -n argocd -f - << EOF
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: developers-tenants
spec:
# Deny all cluster-scoped resources from being created
  clusterResourceWhitelist: []
# Only permit applications to deploy to the developers namespace in the same cluster
  destinations:
  - namespace: developers
    server: https://kubernetes.default.svc
  sourceRepos:
# Allow manifests to deploy from any Git repos 
  -  '*'
  orphanedResources:
   warn: false
EOF

Fields of interest are sourceRepos and destinations:

  • sourceRepos: Specifies the list of repositories allowed as deployment sources. We can also use it to enforce the use of only internally hosted Git providers.

  • destinations: Specifies the Kubernetes clusters and the exact namespaces that can be used as deployment targets. In this exercise, we only allow deployment to the developers namespace.

Members of the developers-tenants group will be able to manage all applications of the developers-tenants project themselves. At the same time, they won't see applications from other tenants and won't be able to accidentally modify Kubernetes resources that don't belong to them.


Now, let's move an existing application, guestbook, from the argo-test-ns namespace to the new project using the CLI:

argocd app set guestbook --project developers-tenants --dest-namespace developers
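For reference, the declarative form of the resulting Application would look roughly like the sketch below; the repository and path are the well-known Argo CD guestbook example, used here as an assumption about where the app's manifests live:

##guestbook-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: developers-tenants
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: developers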

We have successfully moved the application into the developers-tenants project, with the developers namespace as its destination.



Now let's try to create a new application in this project, but set its destination namespace to default:





Let's try to create the app again, this time in the developers namespace:


Conclusion

The takeaway from this article is that although multi-tenancy with Kubernetes is crucial for better operations, cost management, and resource efficiency in a company, Kubernetes is simply not designed as a multi-tenant system. The steps implemented in this article describe how some tools can achieve a good multi-tenant experience by extending the functionality of Kubernetes and implementing GitOps with Argo CD. We are also working on an operator and product to handle multi-tenancy use cases with ease. It currently supports OpenShift, but a version for Kubernetes will be available soon. You can read more about the Multi Tenant Operator here.


In future articles, we will be sharing more steps needed to achieve an effective DevSecOps multi-tenant environment.

