
Stakater Blog


Simplify Multitenancy with Stakater's Multi Tenant Operator

Stakater and the Red Hat team collaborated to write this article, which was originally published on the Red Hat website.


As a Technology Partner within the Red Hat OpenShift ecosystem, Stakater's Multi Tenant Operator has been officially certified by Red Hat since March 2022, with the latest release certified in November 2023. This certification marks a significant milestone in the OpenShift ecosystem, offering robust solutions for complex multitenant challenges.


Stakater Multi Tenant Operator (MTO) offers robust and straightforward abstractions to coordinate intricate building blocks integrated with OpenShift, Argo CD, and Vault, enabling the creation of feature-packed, multitenant platforms on OpenShift.


Simplifying multitenancy

MTO simplifies management and configuration of multitenancy. It introduces simplified abstractions built upon native OpenShift and Kubernetes primitives, providing a centralized view of teams, members, and workloads. By enabling cluster sharing, multitenancy offers numerous advantages, including enhanced resource use, reduced configuration efforts, and seamless sharing of internal cluster resources among diverse tenants. MTO not only facilitates multitenancy but also enhances overall cluster management. With MTO, you can effectively use an OpenShift cluster across various tenants, extend managed applications, and configure and oversee individual tenant sandboxes with ease.


Why Multi Tenant Operator?

OpenShift brings some improvements with its "secure by default" concept, but designing and orchestrating all the components involved in building a secure multitenant platform remains complex, making it difficult for cluster administrators to host multiple tenants in a single OpenShift cluster. The need for enhanced resource use, reduced configuration effort, and seamless resource sharing among diverse tenants brings multitenancy into sharp focus.


Let's take a look at some of the important features that MTO offers.


Tenancy: Hosting multiple tenants in a single OpenShift cluster

MTO allows for the efficient use of an OpenShift cluster across various tenants. This feature is pivotal in maximizing resource use, as it enables different teams or departments within an organization to share a single cluster without compromising on their individual operational needs. We achieve this in MTO by using a custom resource (CR) called Tenant.
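As an illustrative sketch, a Tenant might be declared as below; the exact field names and API version depend on your MTO release, and the user emails, quota name, and namespace names are hypothetical placeholders:

  apiVersion: tenantoperator.stakater.com/v1beta2
  kind: Tenant
  metadata:
    name: tenant-a
  spec:
    owners:
      users:
        - anna@example.com
    quota: small
    namespaces:
      withTenantPrefix:
        - dev
        - build

Applying a resource like this gives team "tenant-a" its own prefixed namespaces and binds its members to them, without the cluster admin hand-crafting RoleBindings per namespace.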


Resource management: Quotas

MTO extends the idea of Quota from OpenShift, which helps define limits and requests on resource consumption. Introducing quotas in MTO allows you to manage resource consumption at the Tenant level, automatically enforcing a ResourceQuota and LimitRange across all the namespaces under the Tenant. Here's more information on MTO's Quota CR.
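A minimal sketch of a Quota CR is shown below; field names and the API version may differ between MTO releases, and the resource values are illustrative:

  apiVersion: tenantoperator.stakater.com/v1beta1
  kind: Quota
  metadata:
    name: small
  spec:
    resourcequota:
      hard:
        requests.cpu: "2"
        requests.memory: 4Gi
    limitrange:
      limits:
        - type: Container
          default:
            cpu: 500m
            memory: 512Mi

A Tenant then references this Quota by name, and MTO propagates the corresponding ResourceQuota and LimitRange objects into every namespace owned by that Tenant.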


Resource distribution: Templates, TemplateInstances, TemplateGroupInstances

MTO simplifies namespace provisioning with Templates, TemplateInstances, and TemplateGroupInstances for Kubernetes manifests, Helm charts, secrets, or ConfigMaps. These flexible predefined templates can be enforced across single or multiple tenant namespaces.


Responsibilities among these three custom resources are divided as follows:

  • Templates store the manifests, Helm charts, or source to target configuration for secrets and ConfigMaps.

  • TemplateInstances are responsible for applying a Template in a single namespace.

  • TemplateGroupInstances can deploy the Template resources to one or multiple namespaces depending on the label selectors.

Here's an example to showcase the use of Templates. The deployment of a Docker pull secret across namespaces with specific labels can be facilitated by first creating a Template defining the secret structure. Subsequently, a TemplateGroupInstance is established, linking to the created Template and specifying a label selector. The cluster admin can then confirm the successful creation of secrets in namespaces matching the specified labels. More information on these CRs can be found here for Templates, TemplateInstances, TemplateGroupInstances.
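The pull-secret scenario above can be sketched roughly as follows; the field names and API version are indicative only, and the secret data, names, and label selector are hypothetical placeholders:

  apiVersion: tenantoperator.stakater.com/v1alpha1
  kind: Template
  metadata:
    name: docker-pull-secret
  resources:
    manifests:
      - apiVersion: v1
        kind: Secret
        metadata:
          name: docker-pull-secret
        type: kubernetes.io/dockerconfigjson
        data:
          .dockerconfigjson: <base64-encoded-credentials>
  ---
  apiVersion: tenantoperator.stakater.com/v1alpha1
  kind: TemplateGroupInstance
  metadata:
    name: docker-pull-secret-group
  spec:
    template: docker-pull-secret
    selector:
      matchLabels:
        kind: build
    sync: true

With sync enabled, the secret is kept up to date in every namespace carrying the matching label, including namespaces created later.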


Hibernation

To save cluster resources by scaling down pods according to customer needs, Multi Tenant Operator also provides the option of hibernating workloads. When configured, it will scale down Deployments and StatefulSets in your required tenant or selected namespaces according to a sleep cron schedule, and scale them back up according to the wake schedule.


Hibernation is managed by a CR called ResourceSupervisor. More information on that can be found at MTO Hibernation.


This example schedule will put all the Deployments and StatefulSets within the tenant's namespaces to sleep, by reducing their pod count to 0 at 8 PM every weekday. At 8 AM on weekdays, MTO will then scale them back up to their previous pod counts.

  hibernation:
    sleepSchedule: 0 20 * * 1-5
    wakeSchedule: 0 8 * * 1-5

Integrations


Vault

MTO integrates seamlessly with HashiCorp Vault, enhancing security in multitenant Kubernetes environments. This integration enables each tenant's Kubernetes Service Accounts to securely access and manage their own secrets stored in Vault. By automatically creating specific roles and policies in Vault for each tenant, MTO ensures that only authorized users and applications within a tenant can access their designated secrets.


Additionally, with the Red Hat single sign-on integration, user authentication is streamlined, allowing users to log in to Vault via OIDC and access only the secrets relevant to their tenant, thereby maintaining strict security and data isolation across different tenants. This feature makes managing sensitive data in a shared Kubernetes cluster both secure and straightforward. Here is more information on Vault in MTO.


Argo CD

Multi Tenant Operator also extends the tenant permission model to Argo CD, where it can greatly ease the overhead of managing RBAC in Argo CD.

If configured, it will create an AppProject for each tenant. The AppProject allows tenant users to create Argo CD Applications that can be synced to namespaces owned by those tenants. Cluster admins can also deny-list certain namespace-scoped resources and allow certain cluster-scoped resources.
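For illustration, an AppProject generated for a tenant could look roughly like the sketch below; the tenant name, namespace pattern, and Argo CD namespace are hypothetical, and the actual object MTO creates depends on its configuration:

  apiVersion: argoproj.io/v1alpha1
  kind: AppProject
  metadata:
    name: tenant-a
    namespace: openshift-gitops
  spec:
    destinations:
      - server: https://kubernetes.default.svc
        namespace: tenant-a-*
    sourceRepos:
      - "*"

Scoping destinations to the tenant's namespaces means a tenant user's Application can only sync into namespaces that tenant owns, which is exactly the RBAC overhead MTO removes from the admin.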

Detailed information can be found at MTO Argo CD integration.


Sandboxes

The core principle of MTO extends beyond the mere use of namespaces as independent sandboxes. While namespaces provide autonomy for tenant applications within a shared cluster, the core essence of MTO lies in robust RBAC (Role-Based Access Control) and cluster management. Cluster administrators configure MTO's custom resources, transforming it into a self-service platform. This strategic approach not only ensures autonomous operation within tenants but also emphasizes effective user management, role assignments, and efficient cluster-level administration. More information on this can be found here.


GitOps

MTO supports the initiation of new tenants using GitOps tools like Argo CD. This approach allows for changes to be managed through pull requests in a GitOps workflow. It empowers tenants to request modifications, add or remove users seamlessly, thus facilitating a more dynamic and agile environment.


MTO Console


Dashboard

MTO Console is aimed at administrators and tenant users. This UI console provides a more intuitive and user-friendly way to manage tenants and their associated resources, such as namespaces, quotas, templates, integration config, and more, making it easier for users to interact with the system.


Show back

The show-back functionality in Multi Tenant Operator (MTO) by Stakater is a significant feature designed to enhance the management of resources and costs in multitenant Kubernetes environments. This feature focuses on accurately tracking resource usage by each tenant and namespace, enabling organizations to monitor and optimize their expenditures.


Furthermore, this functionality supports financial planning and budgeting by offering a clear view of operational costs associated with each tenant. This can be particularly beneficial for organizations that chargeback internal departments or external clients based on resource usage, ensuring that billing is fair and reflective of actual consumption.


Figure 1 shows the MTO console dashboard.


Figure 2 shows the show-back functionality in MTO Console.


Certification with Red Hat OpenShift

When a Technology Partner is certified on Red Hat OpenShift, its product has passed a stringent process vetted by Red Hat, ensuring top-notch performance for applications in both Kubernetes and OpenShift environments. This certification underscores Stakater's and Red Hat's joint commitment to providing an operator that is not only security-focused, but also well-supported and reliable for users.


The certification process includes seamless integration with OpenShift, comprehensive containerization, regular vulnerability checks, and access to collaborative support. This reinforces the trustworthiness and high-performance standard of Stakater's certified offerings for its customers.


Providing security

Red Hat certification for the MTO operator encompasses several critical stages, with security vulnerability scanning playing a pivotal role in ensuring more secure software delivery. Security partners can now consume and leverage Red Hat's extensive and evolving set of published security data to minimize customer false positives and other discrepancies. During this process, the operator—including all associated images, packages, and libraries—undergoes thorough examination using Red Hat's advanced scanning tools. This step is crucial for identifying and addressing potential security issues, thereby aligning Stakater software with Red Hat's stringent security standards.


What's next?

An upcoming 1.0 release of MTO will include support for vanilla Kubernetes. It will broaden the scope and applicability of MTO beyond OpenShift clusters. This feature will enhance the versatility of MTO, making it a suitable solution for a wider range of Kubernetes-based environments.


Stay tuned for Part 2, where we will do a deep dive into the CRs and how to install them on OpenShift.
