Here, admins have a bird's-eye view of all tenants, with the ability to delve into each one for detailed examination and management. This section is pivotal for observing the distribution and organization of tenants within the system. More information on each tenant can be accessed by clicking the view option against each tenant name.
+In this view, users can access a dedicated tab to review the quota utilization for their Tenants. Within this tab, users have the option to toggle between two different views: Aggregated Quota and Namespace Quota.
++The Aggregated Quota view provides users with an overview of the combined resource allocation and usage across all namespaces within their tenant. It offers a comprehensive look at the total limits and usage of resources such as CPU, memory, and other defined quotas. From this aggregated perspective, users can easily monitor and manage resource distribution across their entire tenant environment.
++The Namespace Quota view, by contrast, shows quota settings on a per-namespace basis. It lets users focus on the resource allocation and usage within individual namespaces, giving granular insight into the constraints and utilization of each namespace and enabling more targeted management and optimization of resources at the namespace level.
+In the Utilization tab of the tenant console, users are presented with a detailed table listing all namespaces within their tenant, along with essential metrics for each namespace, including CPU and memory utilization.
+Users can adjust the interval window using the provided selector to customize the time frame for the displayed data. This table allows users to quickly assess resource utilization across all namespaces, facilitating efficient resource management and cost tracking.
+Upon selecting a specific namespace from the utilization table, users are directed to a detailed view that includes CPU and memory utilization graphs along with a workload table.
+This detailed view provides users with in-depth insights into resource utilization at the workload level, enabling precise monitoring and optimization of resource allocation within the selected namespace.
+Users can view all the namespaces that belong to their tenant, offering a comprehensive perspective of the accessible namespaces for tenant members. This section also provides options for detailed exploration.
Kubernetes is designed to support a single-tenant platform; OpenShift brings some improvements with its \"Secure by default\" concepts, but it is still very complex to design and orchestrate all the moving parts involved in building a secure multi-tenant platform, which makes it difficult for cluster admins to host multi-tenancy in a single OpenShift cluster. If multi-tenancy is achieved by sharing a cluster, it can have many advantages, e.g. efficient resource utilization, less configuration effort and easier sharing of cluster-internal resources among different tenants. OpenShift and all managed applications provide enough primitive resources to achieve multi-tenancy, but it requires professional skills and deep knowledge of OpenShift.
This is where Multi Tenant Operator (MTO) comes in, providing easy-to-manage and easy-to-configure multi-tenancy. MTO provides wrappers around OpenShift resources to give users a higher level of abstraction. With MTO, admins can configure network and security policies, resource quotas, limit ranges, and RBAC for every tenant, which are automatically inherited by all the namespaces and users in the tenant. Depending on their role, users are free to operate within their tenants in complete autonomy. MTO supports initializing new tenants using a GitOps management pattern: changes can be managed via PRs just like a typical GitOps workflow, so tenants can request changes, add new users, or remove users.
The idea of MTO is to use namespaces as independent sandboxes, where tenant applications can run independently of each other. Cluster admins shall configure MTO's custom resources, which then become a self-service system for tenants. This minimizes the efforts of the cluster admins.
MTO enables cluster admins to host multiple tenants in a single OpenShift cluster.
MTO is also OpenShift certified
"},{"location":"index.html#features","title":"Features","text":"The major features of Multi Tenant Operator (MTO) are described below.
"},{"location":"index.html#kubernetes-multitenancy","title":"Kubernetes Multitenancy","text":"RBAC is one of the most complicated and error-prone parts of Kubernetes. With Multi Tenant Operator, you can rest assured that RBAC is configured with the \"least privilege\" mindset and all rules are kept up-to-date with zero manual effort.
Multi Tenant Operator binds existing ClusterRoles to the Tenant's Namespaces used for managing access to the Namespaces and the resources they contain. You can also modify the default roles or create new roles to have full control and customize access control for your users and teams.
Multi Tenant Operator is also able to leverage existing OpenShift groups or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system.
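As a rough sketch based on the Tenant examples later on this page (the editors block and group name are assumptions added for illustration), membership that reuses an existing group could look like this:
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  quota: small\n  accessControl:\n    owners:\n      users:\n        - anna@aurora.org\n    editors:\n      groups:\n        - bluesky-dev-team   # assumed group-based membership field\n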
"},{"location":"index.html#hashicorp-vault-multitenancy","title":"HashiCorp Vault Multitenancy","text":"Multi Tenant Operator extends the tenants permission model to HashiCorp Vault where it can create Vault paths and greatly ease the overhead of managing RBAC in Vault. Tenant users can manage their own secrets without the concern of someone else having access to their Vault paths.
More details on Vault Multitenancy
"},{"location":"index.html#argocd-multitenancy","title":"ArgoCD Multitenancy","text":"Multi Tenant Operator is not only providing strong Multi Tenancy for the OpenShift internals but also extends the tenants permission model to ArgoCD were it can provision AppProjects and Allowed Repositories for your tenants greatly ease the overhead of managing RBAC in ArgoCD.
More details on ArgoCD Multitenancy
"},{"location":"index.html#resource-management","title":"Resource Management","text":"Multi Tenant Operator provides a mechanism for defining Resource Quotas at the tenant scope, meaning all namespaces belonging to a particular tenant share the defined quota, which is why you are able to safely enable dev teams to self serve their namespaces whilst being confident that they can only use the resources allocated based on budget and business needs.
More details on Quota
"},{"location":"index.html#templates-and-template-distribution","title":"Templates and Template distribution","text":"Multi Tenant Operator allows admins/users to define templates for namespaces, so that others can instantiate these templates to provision namespaces with batteries loaded. A template could pre-populate a namespace for certain use cases or with basic tooling required. Templates allow you to define Kubernetes manifests, Helm chart and more to be applied when the template is used to create a namespace.
Templates can also be parameterized for flexibility and ease of use. In addition, the presence of certain templates can be enforced in one tenant's namespaces, or in all tenants' namespaces, to configure secure defaults.
Common use cases for namespace templates include pre-populating namespaces with network policies, image pull secrets, or other baseline tooling.
More details on Distributing Template Resources
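For illustration, a minimal Template that pre-populates namespaces with a single manifest could look like the sketch below; the template name and ConfigMap are hypothetical, and the structure follows the Template examples shown later on this page:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: baseline-tooling\nresources:\n  manifests:\n    - apiVersion: v1\n      kind: ConfigMap\n      metadata:\n        name: team-defaults\n      data:\n        environment: dev\n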
"},{"location":"index.html#mto-console","title":"MTO Console","text":"Multi Tenant Operator Console is a comprehensive user interface designed for both administrators and tenant users to manage multi-tenant environments. The MTO Console simplifies the complexity involved in handling various aspects of tenants and their related resources. It serves as a centralized monitoring hub, offering insights into the current state of tenants, namespaces, templates and quotas. It is designed to provide a quick summary/snapshot of MTO's status and facilitates easier interaction with various resources such as tenants, namespaces, templates, and quotas.
More details on Console
"},{"location":"index.html#showback","title":"Showback","text":"The showback functionality in Multi Tenant Operator (MTO) Console is a significant feature designed to enhance the management of resources and costs in multi-tenant Kubernetes environments. This feature focuses on accurately tracking the usage of resources by each tenant, and/or namespace, enabling organizations to monitor and optimize their expenditures. Furthermore, this functionality supports financial planning and budgeting by offering a clear view of operational costs associated with each tenant. This can be particularly beneficial for organizations that chargeback internal departments or external clients based on resource usage, ensuring that billing is fair and reflective of actual consumption.
More details on Showback
"},{"location":"index.html#hibernation","title":"Hibernation","text":"Multi Tenant Operator can downscale Deployments and StatefulSets in a tenant's Namespace according to a defined sleep schedule. The Deployments and StatefulSets are brought back to their required replicas according to the provided wake schedule.
More details on Hibernation
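The exact schema is not shown on this page, so the following is only a hedged sketch of what a sleep/wake schedule could look like on a Tenant; the hibernation field names and cron values are assumptions for illustration:
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  quota: small\n  hibernation:            # assumed field name\n    sleepSchedule: \"0 20 * * 1-5\"   # assumed: downscale at 20:00 on weekdays\n    wakeSchedule: \"0 8 * * 1-5\"     # assumed: restore replicas at 08:00 on weekdays\n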
"},{"location":"index.html#mattermost-multitenancy","title":"Mattermost Multitenancy","text":"Multi Tenant Operator can manage Mattermost to create Teams for tenant users. All tenant users get a unique team and a list of predefined channels gets created. When a user is removed from the tenant, the user is also removed from the Mattermost team corresponding to tenant.
More details on Mattermost
"},{"location":"index.html#remote-development-namespaces","title":"Remote Development Namespaces","text":"Multi Tenant Operator can be configured to automatically provision a namespace in the cluster for every member of the specific tenant, that will also be preloaded with any selected templates and consume the same pool of resources from the tenants quota creating safe remote dev namespaces that teams can use as scratch namespace for rapid prototyping and development. So, every developer gets a Kubernetes-based cloud development environment that feel like working on localhost.
More details on Sandboxes
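Sandbox namespaces are enabled on the Tenant itself; the snippet below mirrors the Tenant examples that appear later on this page:
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  quota: small\n  accessControl:\n    owners:\n      users:\n        - anna@aurora.org\n  namespaces:\n    sandboxes:\n      enabled: true\n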
"},{"location":"index.html#cross-namespace-resource-distribution","title":"Cross Namespace Resource Distribution","text":"Multi Tenant Operator supports cloning of secrets and configmaps from one namespace to another namespace based on label selectors. It uses templates to enable users to provide reference to secrets and configmaps. It uses a template group instance to distribute those secrets and namespaces in matching namespaces, even if namespaces belong to different tenants. If template instance is used then the resources will only be mapped if namespaces belong to same tenant.
More details on Copying Secrets and ConfigMaps
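As a sketch assembled from the Template and TemplateGroupInstance examples later on this page (names and labels are illustrative), distributing an existing Secret to all namespaces matched by a label selector could look like this:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  resourceMappings:\n    secrets:\n      - name: registry-credentials\n        namespace: build\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-pull-secret-tgi\nspec:\n  template: docker-pull-secret\n  sync: true\n  selector:\n    matchLabels:\n      stakater.com/kind: build\n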
"},{"location":"index.html#self-service","title":"Self-Service","text":"With Multi Tenant Operator, you can empower your users to safely provision namespaces for themselves and their teams (typically mapped to SSO groups). Team-owned namespaces and the resources inside them count towards the team's quotas rather than the user's individual limits and are automatically shared with all team members according to the access rules you configure in Multi Tenant Operator.
Also, by leveraging Multi Tenant Operator's templating mechanism, namespaces can be provisioned and automatically pre-populated with any kind of resource, or multiple resources, such as network policies, docker pull secrets or even Helm charts.
"},{"location":"index.html#everything-as-codegitops-ready","title":"Everything as Code/GitOps Ready","text":"Multi Tenant Operator is designed and built to be 100% OpenShift-native and to be configured and managed the same familiar way as native OpenShift resources so is perfect for modern shops that are dedicated to GitOps as it is fully configurable using Custom Resources.
"},{"location":"index.html#preventing-clusters-sprawl","title":"Preventing Clusters Sprawl","text":"As companies look to further harness the power of cloud-native, they are adopting container technologies at rapid speed, increasing the number of clusters and workloads. As the number of Kubernetes clusters grows, this is an increasing work for the Ops team. When it comes to patching security issues or upgrading clusters, teams are doing five times the amount of work.
With Multi Tenant Operator, multiple teams, groups of users, or departments can share a single cluster, saving operational and management effort and preventing Kubernetes cluster sprawl.
"},{"location":"index.html#native-experience","title":"Native Experience","text":"Multi Tenant Operator provides multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries.
"},{"location":"changelog.html","title":"Changelog","text":""},{"location":"changelog.html#v012x","title":"v0.12.x","text":""},{"location":"changelog.html#v01219","title":"v0.12.19","text":""},{"location":"changelog.html#fix","title":"Fix","text":"onDeletePurgeAppProject
field value.kubernetes
authentication method.TemplateGroupInstance
controller now correctly updates the TemplateGroupInstance
custom resource status and the namespace count upon the deletion of a namespace.TemplateGroupInstance
controller and kube-controller-manager
over mentioning of secret names in secrets
or imagePullSecrets
field in ServiceAccounts
has been fixed by temporarily ignoring updates to or from ServiceAccounts
.IntegrationConfig
now have access to all types of namespaces. Previously, operations were denied on orphaned namespaces (namespaces that are part of neither the privileged nor the tenant scope). More info in Troubleshooting Guide
controller now ensures that its underlying resources are force-synced when a namespace is created or deleted.TemplateGroupInstance
reconcile flow has been refined to process only the namespace for which the event was received, streamlining resource creation/deletion and improving overall efficiency.mto-admin
user for Console.resourceVersion
and UID when converting oldObject
to newObject
. This prevents problems when the object is edited by another controller.kube:admin
is now bypassed by default to perform operations, earlier kube:admin
needed to be mentioned in respective tenants to give it access over namespaces.spec.quota
, if quota.tenantoperator.stakater.com/is-default: \"true\"
annotation is presentMore information about TemplateGroupInstance's sync at Sync Resources Deployed by TemplateGroupInstance
"},{"location":"changelog.html#v092","title":"v0.9.2","text":"feat: Add tenant webhook for spec validation
fix: TemplateGroupInstance now cleans up leftover Template resources from namespaces that are no longer part of TGI namespace selector
fix: Fixed hibernation sync issue
enhance: Update tenant spec for applying common/specific namespace labels/annotations. For more details check out commonMetadata & SpecificMetadata
enhance: Add support for multi-pod architecture for Operator-Hub
chore: Remove conversion webhook for Quota and Tenant
privilegedNamespaces
regexgroup-{Template.Name}
)\u26a0\ufe0f Known Issues
caBundle
field in validation webhooks is not being populated for newly added webhooks. A temporary fix is to edit the validation webhook configuration manifest without the caBundle
field added in any webhook, so OpenShift can add it to all fields simultaneouslyValidatingWebhookConfiguration
multi-tenant-operator-validating-webhook-configuration
by removing all the caBundle
fields of all webhookscaBundle
fields have been populated\u26a0\ufe0f ApiVersion v1alpha1
of Tenant and Quota custom resources has been deprecated and is scheduled to be removed in the future. The following links contain the updated structure of both resources
destinationNamespaces
created by Multi Tenant Operatorkube-RBAC-proxy
Last revision date: 12 December 2022
IMPORTANT: THIS SOFTWARE END-USER LICENSE AGREEMENT (\"EULA\") IS A LEGAL AGREEMENT (\"Agreement\") BETWEEN YOU (THE CUSTOMER, EITHER AS AN INDIVIDUAL OR, IF PURCHASED OR OTHERWISE ACQUIRED BY OR FOR AN ENTITY, AS AN ENTITY) AND Stakater AB OR ITS SUBSIDIARY (\"COMPANY\"). READ IT CAREFULLY BEFORE COMPLETING THE INSTALLATION PROCESS AND USING MULTI TENANT OPERATOR (\"SOFTWARE\"). IT PROVIDES A LICENSE TO USE THE SOFTWARE AND CONTAINS WARRANTY INFORMATION AND LIABILITY DISCLAIMERS. BY INSTALLING AND USING THE SOFTWARE, YOU ARE CONFIRMING YOUR ACCEPTANCE OF THE SOFTWARE AND AGREEING TO BECOME BOUND BY THE TERMS OF THIS AGREEMENT.
In order to use the Software under this Agreement, you must receive a license key at the time of purchase, in accordance with the scope of use and other terms specified and as set forth in Section 1 of this Agreement.
"},{"location":"eula.html#1-license-grant","title":"1. License Grant","text":"1.1 General Use. This Agreement grants you a non-exclusive, non-transferable, limited license to the use rights for the Software, subject to the terms and conditions in this Agreement. The Software is licensed, not sold.
1.2 Electronic Delivery. All Software and license documentation shall be delivered by electronic means unless otherwise specified on the applicable invoice or at the time of purchase. Software shall be deemed delivered when it is made available for download for you by the Company (\"Delivery\").
2.1 No Modifications may be created of the original Software. \"Modification\" means:
(a) Any addition to or deletion from the contents of a file included in the original Software
(b) Any new file that contains any part of the original Software
3.1 You shall not (and shall not allow any third party to):
(a) reverse engineer the Software or attempt to reconstruct or discover any source code, underlying ideas, algorithms, file formats or programming interfaces of the Software by any means whatsoever (except and only to the extent that applicable law prohibits or restricts reverse engineering restrictions);
(b) distribute, sell, sub-license, rent, lease or use the Software for time sharing, hosting, service provider or like purposes, except as expressly permitted under this Agreement;
(c) redistribute the Software;
(d) remove any product identification, proprietary, copyright or other notices contained in the Software;
(e) modify any part of the Software, create a derivative work of any part of the Software (except as permitted in Section 4), or incorporate the Software, except to the extent expressly authorized in writing by the Company;
(f) publicly disseminate performance information or analysis (including, without limitation, benchmarks) from any source relating to the Software;
(g) utilize any equipment, device, software, or other means designed to circumvent or remove any form of Source URL or copy protection used by the Company in connection with the Software, or use the Software together with any authorization code, Source URL, serial number, or other copy protection device not supplied by the Company;
(h) use the Software to develop a product which is competitive with any of the Company's product offerings;
(i) use unauthorized Source URLs or license key(s) or distribute or publish Source URLs or license key(s), except as may be expressly permitted by the Company in writing. If your unique license is ever published, the Company reserves the right to terminate your access without notice.
3.2 Under no circumstances may you use the Software as part of a product or service that provides similar functionality to the Software itself.
7.1 The Software is provided \"as is\", with all faults, defects and errors, and without warranty of any kind. The Company does not warrant that the Software will be free of bugs, errors, or other defects, and the Company shall have no liability of any kind for the use of or inability to use the Software, the Software content or any associated service, and you acknowledge that it is not technically practicable for the Company to do so.
7.2 To the maximum extent permitted by applicable law, the Company disclaims all warranties, express, implied, arising by law or otherwise, regarding the Software, the Software content and their respective performance or suitability for your intended use, including without limitation any implied warranty of merchantability, fitness for a particular purpose.
8.1 In no event will the Company be liable for any direct, indirect, consequential, incidental, special, exemplary, or punitive damages or liabilities whatsoever arising from or relating to the Software, the Software content or this Agreement, whether based on contract, tort (including negligence), strict liability or other theory, even if the Company has been advised of the possibility of such damages.
8.2 In no event will the Company's liability exceed the Software license price as indicated in the invoice. The existence of more than one claim will not enlarge or extend this limit.
9.1 Your exclusive remedy and the Company's entire liability for breach of this Agreement shall be limited, at the Company's sole and exclusive discretion, to:
(a) replacement of any defective software or documentation; or
(b) refund of the license fee paid to the Company
10.1 Consent to the Use of Data. You agree that the Company and its affiliates may collect and use technical information gathered as part of the product support services. The Company may use this information solely to improve products and services and will not disclose this information in a form that personally identifies individuals or organizations.
10.2 Government End Users. If the Software and related documentation are supplied to or purchased by or on behalf of a Government, then the Software is deemed to be \"commercial software\" as that term is used in the acquisition regulation system.
11.1 Examples included in Software may provide links to third party libraries or code (collectively \"Third Party Software\") to implement various functions. Third Party Software does not comprise part of the Software. In some cases, access to Third Party Software may be included along with the Software delivery as a convenience for demonstration purposes. Licensee acknowledges:
(1) That some part of Third Party Software may require additional licensing of copyright and patents from the owners of such, and
(2) That distribution of any of the Software referencing or including any portion of a Third Party Software may require appropriate licensing from such third parties
12.1 Entire Agreement. This Agreement sets forth our entire agreement with respect to the Software and the subject matter hereof and supersedes all prior and contemporaneous understandings and agreements whether written or oral.
12.2 Amendment. The Company reserves the right, in its sole discretion, to amend this Agreement from time to time. Amendments are managed as described in General Provisions.
12.3 Assignment. You may not assign this Agreement or any of its rights under this Agreement without the prior written consent of The Company and any attempted assignment without such consent shall be void.
12.4 Export Compliance. You agree to comply with all applicable laws and regulations, including laws, regulations, orders or other restrictions on export, re-export or redistribution of software.
12.5 Indemnification. You agree to defend, indemnify, and hold harmless the Company from and against any lawsuits, claims, losses, damages, fines and expenses (including attorneys' fees and costs) arising out of your use of the Software or breach of this Agreement.
12.6 Attorneys' Fees and Costs. The prevailing party in any action to enforce this Agreement will be entitled to recover its attorneys' fees and costs in connection with such action.
12.7 Severability. If any provision of this Agreement is held by a court of competent jurisdiction to be invalid, illegal, or unenforceable, the remainder of this Agreement will remain in full force and effect.
12.8 Waiver. Failure or neglect by either party to enforce at any time any of the provisions of this license Agreement shall not be construed or deemed to be a waiver of that party's rights under this Agreement.
12.9 Audit. The Company may, at its expense, appoint its own personnel or an independent third party to audit the numbers of installations of the Software in use by you. Any such audit shall be conducted upon thirty (30) days prior notice, during regular business hours and shall not unreasonably interfere with your business activities.
12.10 Headings. The headings of sections and paragraphs of this Agreement are for convenience of reference only and are not intended to restrict, affect or be of any weight in the interpretation or construction of the provisions of such sections or paragraphs.
sales@stakater.com
.If operator upgrade is set to Automatic Approval on OperatorHub, there may be scenarios where it gets blocked.
"},{"location":"troubleshooting.html#resolution","title":"Resolution","text":"Information
If upgrade approval is set to manual, and you want to skip upgrade of a specific version, then delete the InstallPlan created for that specific version. Operator Lifecycle Manager (OLM) will create the latest available InstallPlan which can be approved then.\n
As OLM does not allow upgrading or downgrading from a version that is stuck because of an error, the only possible fix is to uninstall the operator from the cluster. When the operator is uninstalled, it removes all of its resources, i.e. ClusterRoles, ClusterRoleBindings, Deployments, etc., except Custom Resource Definitions (CRDs), so none of the Custom Resources (CRs), Tenants, Templates, etc., will be removed from the cluster. If any CRD has a conversion webhook defined, then that webhook should be removed before installing the stable version of the operator. This can be achieved by removing the .spec.conversion
block from the CRD schema.
As an example, if you have installed v0.8.0 of Multi Tenant Operator on your cluster, it will get stuck with the error error validating existing CRs against new CRD's schema for \"integrationconfigs.tenantoperator.stakater.com\": error validating custom resource against new schema for IntegrationConfig multi-tenant-operator/tenant-operator-config: [].spec.tenantRoles: Required value
. To resolve this issue, first uninstall MTO from the cluster. Once MTO is uninstalled, check the Tenant CRD, which will have a conversion block that needs to be removed. After removing the conversion block from the Tenant CRD, install the latest available version of MTO from OperatorHub.
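A hedged example of removing the conversion block with kubectl; the CRD name below is an assumption, so list the CRDs first and adjust it accordingly:
kubectl get crd | grep tenantoperator\nkubectl patch crd tenants.tenantoperator.stakater.com --type=json -p='[{\"op\": \"remove\", \"path\": \"/spec/conversion\"}]'\n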
If a user is added to tenant resource, and the user does not exist in RHSSO, then RHSSO is not updated with the user's Vault permission.
"},{"location":"troubleshooting.html#reproduction-steps","title":"Reproduction steps","text":"If the user does not exist in RHSSO, then MTO does not create the tenant access for Vault in RHSSO.
The user now needs to go to Vault, and sign up using OIDC. Then the user needs to wait for MTO to reconcile the updated tenant (reconciliation period is currently 1 hour). After reconciliation, MTO will add relevant access for the user in RHSSO.
If the user needs to be added immediately and it is not feasible to wait for next MTO reconciliation, then: add a label or annotation to the user, or restart the Tenant controller pod to force immediate reconciliation.
"},{"location":"troubleshooting.html#pod-creation-error","title":"Pod Creation Error","text":""},{"location":"troubleshooting.html#q-errors-in-replicaset-events-about-pods-not-being-able-to-schedule-on-openshift-because-scc-annotation-is-not-found","title":"Q. Errors in ReplicaSet Events about pods not being able to schedule on OpenShift because scc annotation is not found","text":"unable to find annotation openshift.io/sa.scc.uid-range\n
Answer. OpenShift recently updated its process of handling SCC, and it's now managed by annotations like openshift.io/sa.scc.uid-range
on the namespaces. If these annotations are absent, pods won't schedule. The fix for the above error is to make sure that the system:serviceaccount:openshift-infra.
regex is always mentioned in Privileged.serviceAccounts
section of IntegrationConfig
. This regex will allow operations from all ServiceAccounts
present in openshift-infra
namespace. More info at Privileged Service Accounts
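For reference, the block in question looks roughly like this; this is a sketch, see the Integration Config reference for where it sits in the full spec:
privileged:\n  serviceAccounts:\n    - ^system:serviceaccount:openshift-infra.*\n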
Cannot CREATE namespace test-john without label stakater.com/tenant\n
Answer. This error occurs when a user tries to perform a create, update, or delete action on a namespace without the required stakater.com/tenant
label. The operator uses this label to verify that only authorized users can perform actions on the namespace. Just add the label with the tenant name so that MTO knows which tenant the namespace belongs to and who is authorized to perform create/update/delete operations on it. For more details, please refer to the Namespace use-case.
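For example, a namespace manifest carrying the label could look like this (the tenant name is illustrative):
apiVersion: v1\nkind: Namespace\nmetadata:\n  name: test-john\n  labels:\n    stakater.com/tenant: bluesky\n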
Cannot CREATE namespace testing without label stakater.com/tenant. User: system:serviceaccount:openshift-apiserver:openshift-apiserver-sa\n
Answer. This error occurs because Tenant members are not allowed to operate on OpenShift Projects directly; whenever an operation is done on a project, openshift-apiserver-sa
tries to do the same request onto a namespace. That's why the user sees openshift-apiserver-sa
Service Account instead of its own user in the error message.
The fix is to try the same operation on the namespace manifest instead.
"},{"location":"troubleshooting.html#q-error-received-while-doing-kubectl-apply-f-namespaceyaml","title":"Q. Error received while doingkubectl apply -f namespace.yaml
","text":"Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"/v1, Resource=namespaces\", GroupVersionKind: \"/v1, Kind=Namespace\"\nName: \"ns1\", Namespace: \"\"\nfrom server for: \"namespace.yaml\": namespaces \"ns1\" is forbidden: User \"muneeb\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"ns1\"\n
Answer. Tenant members will not be able to use kubectl apply
because apply
first gets all the instances of that resource, in this case namespaces, and then performs the required operation on the selected resource. To maintain tenancy, tenant members do not have access to get or list all the namespaces.
The fix is to create namespaces with kubectl create
instead.
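For example, assuming the same manifest file:
kubectl create -f namespace.yaml\n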
Answer. Multi-Tenant Operator's ArgoCD Integration allows configuration of which cluster-scoped resources can be deployed, both globally and on a per-tenant basis. For a global allow-list that applies to all tenants, you can add both resource group
and kind
to the IntegrationConfig's spec.integrations.argocd.clusterResourceWhitelist
field. Alternatively, you can set this up on a tenant level by configuring the same details within a Tenant's spec.integrations.argocd.appProject.clusterResourceWhitelist
field. For more details, check out the ArgoCD integration use cases
Answer. The above error can occur if the ArgoCD Application is syncing from a source that is not allowed by the referenced AppProject. To solve this, verify that you have referred to the correct project in the given ArgoCD Application, and that the repoURL used for the Application's source is valid. If the error still appears, you can add the URL to the relevant Tenant's spec.integrations.argocd.sourceRepos
array.
mto-showback-*
pods failing in my cluster?","text":"Answer. The mto-showback-*
pods are used to calculate the cost of the resources used by each tenant. These pods are created by the Multi-Tenant Operator and are scheduled to run every 10 minutes. If the pods are failing, it is likely that the operators necessary for cost calculation are not present in the cluster. To solve this, you can navigate to Operators
-> Installed Operators
in the OpenShift console and check if the MTO-OpenCost and MTO-Prometheus operators are installed. If they are in a pending state, you can manually approve them to install them in the cluster.
Extensions in MTO enhance its functionality by allowing integration with external services. Currently, MTO supports integration with ArgoCD, enabling you to synchronize your repositories and configure AppProjects directly through MTO. Future updates will include support for additional integrations.
"},{"location":"crds-api-reference/extensions.html#configuring-argocd-integration","title":"Configuring ArgoCD Integration","text":"Let us take a look at how you can create an Extension CR and integrate ArgoCD with MTO.
Before you create an Extension CR, you need to modify the Integration Config resource and add the ArgoCD configuration.
integrations:\n argocd:\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n namespaceResourceBlacklist:\n - group: ''\n kind: ResourceQuota\n namespace: openshift-operators\n
The above configuration will allow the EnvironmentProvisioner
CRD and blacklist the ResourceQuota
resource. Also note that the namespace
field is mandatory and should be set to the namespace where the ArgoCD is deployed.
Every Extension CR is associated with a specific Tenant. Here's an example of an Extension CR that is associated with a Tenant named tenant-sample
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Extensions\nmetadata:\n name: extensions-sample\nspec:\n tenantName: tenant-sample\n argoCD:\n onDeletePurgeAppProject: true\n appProject:\n sourceRepos:\n - \"github.com/stakater/repo\"\n clusterResourceWhitelist:\n - group: \"\"\n kind: \"Pod\"\n namespaceResourceBlacklist:\n - group: \"v1\"\n kind: \"ConfigMap\"\n
The above CR creates an Extension for the Tenant named tenant-sample
with the following configurations:
onDeletePurgeAppProject
: If set to true
, the AppProject will be deleted when the Extension is deleted.sourceRepos
: List of repositories to sync with ArgoCD.appProject
: Configuration for the AppProject.clusterResourceWhitelist
: List of cluster-scoped resources to sync.namespaceResourceBlacklist
: List of namespace-scoped resources to ignore.In the backend, MTO will create an ArgoCD AppProject with the specified configurations.
"},{"location":"crds-api-reference/integration-config.html","title":"Integration Config","text":"IntegrationConfig is used to configure settings of multi-tenancy for Multi Tenant Operator.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n components:\n console: true\n showback: true\n ingress:\n ingressClassName: 'nginx'\n keycloak:\n host: tenant-operator-keycloak.apps.mycluster-ams.abcdef.cloud\n tlsSecretName: tenant-operator-tls\n console:\n host: tenant-operator-console.apps.mycluster-ams.abcdef.cloud\n tlsSecretName: tenant-operator-tls\n gateway:\n host: tenant-operator-gateway.apps.mycluster-ams.abcdef.cloud\n tlsSecretName: tenant-operator-tls\n customPricingModel:\n CPU: \"0.031611\"\n spotCPU: \"0.006655\"\n RAM: \"0.004237\"\n spotRAM: \"0.000892\"\n GPU: \"0.95\"\n storage: \"0.00005479452\"\n zoneNetworkEgress: \"0.01\"\n regionNetworkEgress: \"0.01\"\n internetNetworkEgress: \"0.12\"\n accessControl:\n rbac:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - admin\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n editor:\n clusterRoles:\n - custom-editor\n viewer:\n clusterRoles:\n - custom-view\n namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n groups:\n - cluster-admins\n privileged:\n namespaces:\n - ^default$\n - ^openshift.*\n - ^kube.*\n serviceAccounts:\n - ^system:serviceaccount:openshift.*\n - ^system:serviceaccount:kube.*\n users:\n - ''\n groups:\n - cluster-admins\n metadata:\n groups:\n labels:\n role: customer-reader\n annotations: \n openshift.io/node-selector: node-role.kubernetes.io/worker=\n namespaces:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n sandboxes:\n labels:\n stakater.com/kind: sandbox\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n integrations:\n keycloak:\n realm: mto\n address: https://keycloak.apps.prod.abcdefghi.kubeapp.cloud\n clientName: mto-console\n argocd:\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n namespaceResourceBlacklist:\n - group: '' # all groups\n kind: ResourceQuota\n namespace: openshift-operators\n vault:\n enabled: true\n authMethod: kubernetes #enum: {kubernetes:default, token}\n accessInfo: \n accessorPath: oidc/\n address: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n roleName: mto\n secretRef: \n name: ''\n namespace: ''\n config: \n ssoClient: vault\n
Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator.
"},{"location":"crds-api-reference/integration-config.html#components","title":"Components","text":" components:\n console: true\n showback: true\n ingress:\n ingressClassName: nginx\n keycloak:\n host: tenant-operator-keycloak.apps.mycluster-ams.abcdef.cloud\n tlsSecretName: tenant-operator-tls\n console:\n host: tenant-operator-console.apps.mycluster-ams.abcdef.cloud\n tlsSecretName: tenant-operator-tls\n gateway:\n host: tenant-operator-gateway.apps.mycluster-ams.abcdef.cloud\n tlsSecretName: tenant-operator-tls\n
components.console:
Enables or disables the console GUI for MTO.components.showback:
Enables or disables the showback feature on the console.components.ingress:
Configures the ingress settings for various components:ingressClassName:
Ingress class to be used for the ingress.console:
Settings for the console's ingress.host:
hostname for the console's ingress.tlsSecretName:
Name of the secret containing the TLS certificate and key for the console's ingress.gateway:
Settings for the gateway's ingress.host:
hostname for the gateway's ingress.tlsSecretName:
Name of the secret containing the TLS certificate and key for the gateway's ingress.keycloak:
Settings for the Keycloak's ingress.host:
hostname for the Keycloak's ingress.tlsSecretName:
Name of the secret containing the TLS certificate and key for the Keycloak's ingress.Here's an example of how to generate the secrets required to configure MTO:
TLS Secret for Ingress:
Create a TLS secret containing your SSL/TLS certificate and key for secure communication. This secret will be used for the Console, Gateway, and Keycloak ingresses.
kubectl -n multi-tenant-operator create secret tls <tls-secret-name> --key=<path-to-key.pem> --cert=<path-to-cert.pem>\n
Integration config will be managing the following resources required for console GUI:
MTO Postgresql
resources.MTO Prometheus
resources.MTO Opencost
resources.MTO Console, Gateway, Keycloak
resources.Showback
cronjob.Details on console GUI and showback can be found here
"},{"location":"crds-api-reference/integration-config.html#access-control","title":"Access Control","text":"accessControl:\n rbac:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - admin\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n editor:\n clusterRoles:\n - custom-editor\n viewer:\n clusterRoles:\n - custom-view\n namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n groups:\n - cluster-admins\n privileged:\n namespaces:\n - ^default$\n - ^openshift.*\n - ^kube.*\n serviceAccounts:\n - ^system:serviceaccount:openshift.*\n - ^system:serviceaccount:kube.*\n users:\n - ''\n groups:\n - cluster-admins\n
"},{"location":"crds-api-reference/integration-config.html#rbac","title":"RBAC","text":"RBAC is used to configure the roles that will be applied to each Tenant namespace. The field allows optional custom roles, that are then used to create RoleBindings for namespaces that match a labelSelector.
"},{"location":"crds-api-reference/integration-config.html#tenantroles","title":"TenantRoles","text":"TenantRoles are required within the IntegrationConfig, as they are used for defining what roles will be applied to each Tenant namespace. The field allows optional custom roles, that are then used to create RoleBindings for namespaces that match a labelSelector.
\u26a0\ufe0f If you do not configure roles in any way, then the default OpenShift roles of owner
, edit
, and view
will apply to Tenant members. Their details can be found here
rbac:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - admin\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n editor:\n clusterRoles:\n - custom-editor\n viewer:\n clusterRoles:\n - custom-view\n
"},{"location":"crds-api-reference/integration-config.html#default","title":"Default","text":"This field contains roles that will be used to create default roleBindings for each namespace that belongs to tenants. These roleBindings are only created for a namespace if that namespace isn't already matched by the custom
field below it. Therefore, it is required to have at least one role mentioned within each of its three subfields: owner
, editor
, and viewer
. These 3 subfields also correspond to the member fields of the Tenant CR
An array of custom roles. Similar to the default
field, you can mention roles within this field as well. However, the custom roles also require the use of a labelSelector
for each iteration within the array. The roles mentioned here will only apply to the namespaces that are matched by the labelSelector. If a namespace is matched by 2 different labelSelectors, then both roles will apply to it. Additionally, roles can be skipped within the labelSelector. These missing roles are then inherited from the default
roles field . For example, if the following custom roles arrangement is used:
custom:\n- labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n
Then the editor
and viewer
roles will be taken from the default
roles field, as that is required to have at least one role mentioned.
Namespace Access Policy is used to configure the namespaces that are allowed to be created by tenants. It also allows the configuration of namespaces that are ignored by MTO.
namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n groups:\n - cluster-admins\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n privileged:\n namespaces:\n - ^default$\n - ^openshift.*\n - ^kube.*\n serviceAccounts:\n - ^system:serviceaccount:openshift.*\n - ^system:serviceaccount:kube.*\n users:\n - ''\n groups:\n - cluster-admins\n
"},{"location":"crds-api-reference/integration-config.html#deny","title":"Deny","text":"namespaceAccessPolicy.Deny:
Can be used to restrict privileged users/groups CRUD operation over managed namespaces.
privileged.namespaces:
Contains the list of namespaces
ignored by MTO. MTO will not manage the namespaces
in this list. Treatment for privileged namespaces does not involve further integrations or finalizers processing as with normal namespaces. Values in this list are regex patterns.
For example:
default
namespace, we can specify ^default$
openshift-
prefix, we can specify ^openshift-.*
.stakater
in its name, we can specify ^stakater.
. (A constant word given as a regex pattern will match any namespace containing that word.)privileged.serviceAccounts:
Contains the list of ServiceAccounts
ignored by MTO. MTO will not manage the ServiceAccounts
in this list. Values in this list are regex patterns. For example, to ignore all ServiceAccounts
starting with the system:serviceaccount:openshift-
prefix, we can use ^system:serviceaccount:openshift-.*
; and to ignore a specific service account like system:serviceaccount:builder
service account we can use ^system:serviceaccount:builder$.
Note
stakater
, stakater.
and stakater.*
will have the same effect. To check out the combinations, go to Regex101, select Golang, and type your expected regex and test string.
privileged.users:
Contains the list of users
ignored by MTO. MTO will not manage the users
in this list. Values in this list are regex patterns.
privileged.groups:
Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way. MTO does not interfere even with the deletion of privilegedNamespaces.
Note
User kube:admin
is bypassed by default to perform operations as a cluster admin, this includes operations on all the namespaces.
\u26a0\ufe0f If you want to use a more complex regex pattern (for the privileged.namespaces
or privileged.serviceAccounts
field), it is recommended that you test the regex pattern first - either locally or using a platform such as https://regex101.com/.
metadata:\n groups:\n labels:\n role: customer-reader\n annotations: {}\n namespaces:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n sandboxes:\n labels:\n stakater.com/kind: sandbox\n annotations: {}\n
"},{"location":"crds-api-reference/integration-config.html#namespaces-group-and-sandbox","title":"Namespaces, group and sandbox","text":"We can use the metadata.namespaces
, metadata.group
and metadata.sandbox
fields to automatically add labels
and annotations
to the Namespaces and Groups managed via MTO.
If we want to add default labels/annotations to sandbox namespaces of tenants than we just simply add them in metadata.namespaces.labels
/metadata.namespaces.annotations
respectively.
Whenever a project is made it will have the labels and annotations as mentioned above.
kind: Project\napiVersion: project.openshift.io/v1\nmetadata:\n name: bluesky-build\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n labels:\n workload-monitoring: 'true'\n stakater.com/tenant: bluesky\nspec:\n finalizers:\n - kubernetes\nstatus:\n phase: Active\n
kind: Group\napiVersion: user.openshift.io/v1\nmetadata:\n name: bluesky-owner-group\n labels:\n role: customer-reader\nusers:\n - andrew@stakater.com\n
"},{"location":"crds-api-reference/integration-config.html#integrations","title":"Integrations","text":"Integrations are used to configure the integrations that MTO has with other tools. Currently, MTO supports the following integrations:
integrations:\n keycloak:\n realm: mto\n address: https://keycloak.apps.prod.abcdefghi.kubeapp.cloud\n clientName: mto-console\n argocd:\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n namespaceResourceBlacklist:\n - group: '' # all groups\n kind: ResourceQuota\n namespace: openshift-operators\n vault:\n enabled: true\n authMethod: kubernetes #enum: {kubernetes:default, Token}\n accessInfo: \n accessorPath: oidc/\n address: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n roleName: mto\n secretRef: \n name: ''\n namespace: ''\n config: \n ssoClient: vault\n
"},{"location":"crds-api-reference/integration-config.html#keycloak","title":"Keycloak","text":"Keycloak is an open-source Identity and Access Management solution aimed at modern applications and services. It makes it easy to secure applications and services with little to no code.
If a Keycloak
instance is already set up within your cluster, configure it for MTO by enabling the following configuration:
keycloak:\n realm: mto\n address: https://keycloak.apps.prod.abcdefghi.kubeapp.cloud/\n clientName: mto-console\n
keycloak.realm:
The realm in Keycloak where the client is configured.keycloak.address:
The address of the Keycloak instance.keycloak.clientName:
The name of the client in Keycloak.For more details around enabling Keycloak in MTO, visit here
"},{"location":"crds-api-reference/integration-config.html#argocd","title":"ArgoCD","text":"ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes. It follows the GitOps pattern of using Git repositories as the source of truth for defining the desired application state. ArgoCD uses Kubernetes manifests and configures the applications on the cluster.
If argocd
is configured on a cluster, then ArgoCD configuration can be enabled.
argocd:\n enabled: bool\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n namespaceResourceBlacklist:\n - group: '' # all groups\n kind: ResourceQuota\n namespace: openshift-operators\n
argocd.clusterResourceWhitelist
allows ArgoCD to sync the listed cluster scoped resources from your GitOps repo.argocd.namespaceResourceBlacklist
prevents ArgoCD from syncing the listed resources from your GitOps repo.argocd.namespace
is an optional field used to specify the namespace where ArgoCD Applications and AppProjects are deployed. The field should be populated when you want to create an ArgoCD AppProject for each tenant.Vault is used to secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.
If vault
is configured on a cluster, then Vault configuration can be enabled.
vault:\n enabled: true\n authMethod: kubernetes #enum: {kubernetes:default, token}\n accessInfo: \n accessorPath: oidc/\n address: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n roleName: mto\n secretRef:\n name: ''\n namespace: ''\n config: \n ssoClient: vault\n
If enabled, then admins have to specify the authMethod
to be used for authentication. MTO supports two authentication methods:
kubernetes
: This is the default authentication method. It uses the Kubernetes authentication method to authenticate with Vault.token
: This method uses a Vault token to authenticate with Vault.If authMethod
is set to kubernetes
, then admins have to specify the following fields:
accessorPath:
Accessor Path within Vault to fetch SSO accessorIDaddress:
Valid Vault address reachable within cluster.roleName:
Vault's Kubernetes authentication rolesso.clientName:
SSO client name.If authMethod
is set to token
, then admins have to specify the following fields:
accessorPath:
Accessor Path within Vault to fetch SSO accessorIDaddress:
Valid Vault address reachable within cluster.secretRef:
Secret containing Vault token.name:
Name of the secret containing Vault token.namespace:
Namespace of the secret containing Vault token.For more details around enabling Kubernetes auth in Vault, visit here
The role created within Vault for Kubernetes authentication should have the following permissions:
path \"secret/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"sys/mounts\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"sys/mounts/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"managed-addons/*\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"auth/kubernetes/role/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"sys/auth\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"sys/policies/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"identity/group\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"identity/group-alias\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"identity/group/name/*\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"identity/group/id/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\n
"},{"location":"crds-api-reference/integration-config.html#custom-pricing-model","title":"Custom Pricing Model","text":"You can modify IntegrationConfig to customise the default pricing model. Here is what you need at IntegrationConfig.spec.components
:
components:\n console: true # should be enabled\n showback: true # should be enabled\n # add below and override any default value\n # you can also remove the ones you do not need\n customPricingModel:\n CPU: \"0.031611\"\n spotCPU: \"0.006655\"\n RAM: \"0.004237\"\n spotRAM: \"0.000892\"\n GPU: \"0.95\"\n storage: \"0.00005479452\"\n zoneNetworkEgress: \"0.01\"\n regionNetworkEgress: \"0.01\"\n internetNetworkEgress: \"0.12\"\n
After modifying your default IntegrationConfig in multi-tenant-operator
namespace, a configmap named opencost-custom-pricing
will be updated. You will be able to see updated pricing info in mto-console
.
Using Multi Tenant Operator, the cluster-admin can set and enforce cluster resource quotas and limit ranges for tenants.
"},{"location":"crds-api-reference/quota.html#assigning-resource-quotas","title":"Assigning Resource Quotas","text":"Bill is a cluster admin who will first create Quota
CR where he sets the maximum resource limits that Anna's tenant will have. Here limitrange
is an optional field, cluster admin can skip it if not needed.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: small\nspec:\n resourcequota:\n hard:\n requests.cpu: '5'\n requests.memory: '5Gi'\n configmaps: \"10\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\n limitrange:\n limits:\n - type: \"Pod\"\n max:\n cpu: \"2\"\n memory: \"1Gi\"\n min:\n cpu: \"200m\"\n memory: \"100Mi\"\nEOF\n
For more details please refer to Quotas.
kubectl get quota small\nNAME STATE AGE\nsmall Active 3m\n
Bill then proceeds to create a tenant for Anna, while also linking the newly created Quota
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n namespaces:\n sandboxes:\n enabled: true\nEOF\n
Now that the quota is linked with Anna's tenant, Anna can create any resource within the values of resource quota and limit range.
kubectl -n bluesky-production create deployment nginx --image nginx:latest --replicas 4\n
Once the resource quota assigned to the tenant has been reached, Anna cannot create further resources.
kubectl create pods bluesky-training\nError from server (Cannot exceed Namespace quota: please, reach out to the system administrators)\n
"},{"location":"crds-api-reference/quota.html#limiting-persistentvolume-for-tenant","title":"Limiting PersistentVolume for Tenant","text":"Bill, as a cluster admin, wants to restrict the amount of storage a Tenant can use. For that he'll add the requests.storage
field to quota.spec.resourcequota.hard
. If Bill wants to restrict tenant bluesky
to use only 50Gi
of storage, he'll first create a quota with requests.storage
field set to 50Gi
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: medium\nspec:\n resourcequota:\n hard:\n requests.cpu: '5'\n requests.memory: '10Gi'\n requests.storage: '50Gi'\n
Once the quota is created, Bill will create the tenant and set the quota field to the one he created.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n namespaces:\n sandboxes:\n enabled: true\nEOF\n
Now, the combined storage used by all tenant namespaces will not exceed 50Gi
.
Now, Bill, as a cluster admin, wants to make sure that no Tenant can provision more than a fixed amount of storage from a StorageClass. Bill can restrict that using <storage-class-name>.storageclass.storage.k8s.io/requests.storage
field in quota.spec.resourcequota.hard
field. If Bill wants to restrict tenant sigma
to use only 20Gi
of storage from storage class stakater
, he'll first create a StorageClass stakater
and then create the relevant Quota with stakater.storageclass.storage.k8s.io/requests.storage
field set to 20Gi
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: small\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '2'\n      requests.memory: '4Gi'\n      stakater.storageclass.storage.k8s.io/requests.storage: '20Gi'\nEOF\n
Once the quota is created, Bill will create the tenant and set the quota field to the one he created.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n  name: sigma\nspec:\n  quota: small\n  accessControl:\n    owners:\n      users:\n        - anna@aurora.org\n        - anthony@aurora.org\n  namespaces:\n    sandboxes:\n      enabled: true\nEOF\n
Now, the combined storage provisioned from StorageClass stakater
used by all tenant namespaces will not exceed 20Gi
.
The 20Gi
limit will only be applied to StorageClass stakater
. If a tenant member creates a PVC with some other StorageClass, they will not be restricted.
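For example, a claim that explicitly sets storageClassName: stakater (hypothetical PVC below) is counted against the 20Gi budget, while a claim using any other StorageClass is not:
kubectl create -f - << EOF\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: sigma-data\n  namespace: sigma-dev\nspec:\n  storageClassName: stakater\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 5Gi\nEOF\n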
Tip
More details about Resource Quota
can be found here
Cluster scoped resource:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: namespace-parameterized-restrictions-tgi\nspec:\n template: namespace-parameterized-restrictions\n sync: true\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n
TemplateGroupInstance distributes a template across multiple namespaces which are selected by labelSelector.
"},{"location":"crds-api-reference/template-instance.html","title":"TemplateInstance","text":"Namespace scoped resource:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: networkpolicy\n namespace: build\nspec:\n template: networkpolicy\n sync: true\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n
TemplateInstances are used to keep track of resources created from Templates that are instantiated inside a Namespace. By default, a TemplateInstance created from a Template is not updated when the Template changes later on. To change this behavior, it is possible to set spec.sync: true
in a TemplateInstance. Setting this option keeps the TemplateInstance in sync with the underlying Template (similar to a Helm upgrade).
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: redis\nresources:\n helm:\n releaseName: redis\n chart:\n repository:\n name: redis\n repoUrl: https://charts.bitnami.com/bitnami\n values: |\n redisPort: 6379\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: networkpolicy\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\nresources:\n manifests:\n - kind: NetworkPolicy\n apiVersion: networking.k8s.io/v1\n metadata:\n name: deny-cross-ns-traffic\n spec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: \"${{CIDR_IP}}\"\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: resource-mapping\nresources:\n resourceMappings:\n secrets:\n - name: secret-s1\n namespace: namespace-n1\n configMaps:\n - name: configmap-c1\n namespace: namespace-n2\n
Templates are used to initialize Namespaces, share common resources across namespaces, and map secrets/configmaps from one namespace to other namespaces.
Also, you can define custom variables in Template
and TemplateInstance
. The parameters defined in TemplateInstance
override the values defined in Template
.
Manifest Templates: The easiest option to define a Template is by specifying an array of Kubernetes manifests which should be applied when the Template is being instantiated.
Helm Chart Templates: Instead of manifests, a Template can specify a Helm chart that will be installed (using Helm template) when the Template is being instantiated.
Resource Mapping Templates: A template can be used to map secrets and configmaps from one tenant's namespace to another tenant's namespace, or within a tenant's namespace.
"},{"location":"crds-api-reference/template.html#mandatory-and-optional-templates","title":"Mandatory and Optional Templates","text":"Templates can either be mandatory or optional. By default, all Templates are optional. Cluster Admins can make Templates mandatory by adding them to the spec.templateInstances
array within the Tenant configuration. All Templates listed in spec.templateInstances
will always be instantiated within every Namespace
that is created for the respective Tenant.
A minimal Tenant definition requires only a quota field, essential for limiting resource consumption:
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: alpha\nspec:\n quota: small\n
For a more comprehensive setup, a detailed Tenant definition includes various configurations:
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: tenant-sample\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - kubeadmin\n groups:\n - admin-group\n editors:\n users:\n - devuser1\n - devuser2\n groups:\n - dev-group\n viewers:\n users:\n - viewuser\n groups:\n - view-group\n hibernation:\n # UTC time\n sleepSchedule: \"20 * * * *\"\n wakeSchedule: \"40 * * * *\" \n namespaces:\n sandboxes:\n enabled: true\n private: true\n withoutTenantPrefix:\n - analytics\n - marketing\n withTenantPrefix:\n - dev\n - staging\n onDeletePurgeNamespaces: true\n metadata:\n common:\n labels:\n common-label: common-value\n annotations:\n common-annotation: common-value\n sandbox:\n labels:\n sandbox-label: sandbox-value\n annotations:\n sandbox-annotation: sandbox-value\n specific:\n - namespaces:\n - tenant-sample-dev\n labels:\n specific-label: specific-dev-value\n annotations:\n specific-annotation: specific-dev-value\n desc: \"This is a sample tenant setup for the v1beta3 version.\"\n
"},{"location":"crds-api-reference/tenant.html#access-control","title":"Access Control","text":"Structured access control is critical for managing roles and permissions within a tenant effectively. It divides users into three categories, each with customizable privileges. This design enables precise role-based access management.
These roles are obtained from IntegrationConfig's TenantRoles field.
Owners
: Have full administrative rights, including resource management and namespace creation. Their roles are crucial for high-level management tasks.Editors
: Granted permissions to modify resources, enabling them to support day-to-day operations without full administrative access.Viewers
: Provide read-only access, suitable for oversight and auditing without the ability to alter resources.Users and groups are linked to these roles by specifying their usernames or group names in the respective fields under owners
, editors
, and viewers
.
The quota
field sets the resource limits for the tenant, such as CPU and memory usage, to prevent any single tenant from consuming a disproportionate amount of resources. This mechanism ensures efficient resource allocation and fosters fair usage practices across all tenants.
For more information on quotas, please refer here.
"},{"location":"crds-api-reference/tenant.html#namespaces","title":"Namespaces","text":"Controls the creation and management of namespaces within the tenant:
sandboxes
:
private
to true will make the sandboxes visible only to the user they belong to. By default, sandbox namespaces are visible to all tenant members.withoutTenantPrefix
: Lists the namespaces to be created without automatically prefixing them with the tenant name, useful for shared or common resources.
withTenantPrefix
: Namespaces listed here will be prefixed with the tenant name, ensuring easy identification and isolation.onDeletePurgeNamespaces
: Determines whether namespaces associated with the tenant should be deleted upon the tenant's deletion, enabling clean up and resource freeing.metadata
: Configures metadata like labels and annotations that are applied to namespaces managed by the tenant:common
: Applies specified labels and annotations across all namespaces within the tenant, ensuring consistent metadata for resources and workloads.sandbox
: Special metadata for sandbox namespaces, which can include templated annotations or labels for dynamic information.{{ TENANT.USERNAME }}
. This template can be utilized to dynamically insert the tenant's username value into annotations, for example, as username: {{ TENANT.USERNAME }}
.specific
: Allows applying unique labels and annotations to specified tenant namespaces, enabling custom configurations for particular workloads or environments.hibernation
allows for the scheduling of inactive periods for namespaces associated with the tenant, effectively putting them into a \"sleep\" mode. This capability is designed to conserve resources during known periods of inactivity.
sleepSchedule
and wakeSchedule
, both of which accept strings formatted according to cron syntax.desc
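For instance, to put the tenant's namespaces to sleep every night at 20:00 UTC and wake them at 08:00 UTC, the schedules could look like this (illustrative values only):
hibernation:\n  sleepSchedule: \"0 20 * * *\"\n  wakeSchedule: \"0 8 * * *\"\n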
provides a human-readable description of the tenant, aiding in documentation and at-a-glance understanding of the tenant's purpose and configuration.
\u26a0\ufe0f If the same label or annotation key is applied using more than one of these methods, the highest precedence will be given to namespaces.metadata.specific
followed by namespaces.metadata.common
and finally the ones applied from openshift.project.labels
/openshift.project.annotations
in IntegrationConfig
The Multi Tenant Operator (MTO) Console is a comprehensive user interface designed for both administrators and tenant users to manage multi-tenant environments. The MTO Console simplifies the complexity involved in handling various aspects of tenants and their related resources.
"},{"location":"explanation/console.html#dashboard-overview","title":"Dashboard Overview","text":"The dashboard serves as a centralized monitoring hub, offering insights into the current state of tenants, namespaces, and quotas. It is designed to provide a quick summary/snapshot of MTO resources' status. Additionally, it includes a Showback graph that presents a quick glance of the seven-day cost trends associated with the namespaces/tenants based on the logged-in user.
By default, the MTO Console is disabled and has to be enabled by setting the below configuration in the IntegrationConfig.
components:\n console: true\n ingress:\n ingressClassName: <ingress-class-name>\n console:\n host: tenant-operator-console.<hostname>\n tlsSecretName: <tls-secret-name>\n gateway:\n host: tenant-operator-gateway.<hostname>\n tlsSecretName: <tls-secret-name>\n keycloak:\n host: tenant-operator-keycloak.<hostname>\n tlsSecretName: <tls-secret-name>\n showback: true\n trustedRootCert: <root-ca-secret-name>\n
<hostname>
: hostname of the cluster <ingress-class-name>
: name of the ingress class <tls-secret-name>
: name of the secret that contains the TLS certificate and key <root-ca-secret-name>
: name of the secret that contains the root CA certificate
Note: trustedRootCert
and tls-secret-name
are optional. If not provided, MTO will use the default root CA certificate and secrets respectively.
Once the above configuration is set on the IntegrationConfig, MTO would start provisioning the required resources for MTO Console to be ready. In a few moments, you should be able to see the Console Ingress in the multi-tenant-operator
namespace which gives you access to the Console.
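To confirm that provisioning has finished, the created Ingress resources can be listed directly (illustrative command):
kubectl get ingress -n multi-tenant-operator\n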
For more details on the configuration, please visit here.
"},{"location":"explanation/console.html#tenants","title":"Tenants","text":"Here, admins have a bird's-eye view of all tenants, with the ability to delve into each one for detailed examination and management. This section is pivotal for observing the distribution and organization of tenants within the system. More information on each tenant can be accessed by clicking the view option against each tenant name.
"},{"location":"explanation/console.html#namespaces","title":"Namespaces","text":"Users can view all the namespaces that belong to their tenant, offering a comprehensive perspective of the accessible namespaces for tenant members. This section also provides options for detailed exploration.
"},{"location":"explanation/console.html#quotas","title":"Quotas","text":"MTO's Quotas are crucial for managing resource allocation. In this section, administrators can assess the quotas assigned to each tenant, ensuring a balanced distribution of resources in line with operational requirements.
"},{"location":"explanation/console.html#templates","title":"Templates","text":"The Templates section acts as a repository for standardized resource deployment patterns, which can be utilized to maintain consistency and reliability across tenant environments. Few examples include provisioning specific k8s manifests, helm charts, secrets or configmaps across a set of namespaces.
"},{"location":"explanation/console.html#showback","title":"Showback","text":"
The Showback feature is an essential financial governance tool, providing detailed insights into the cost implications of resource usage by tenant or namespace or other filters. This facilitates a transparent cost management and internal chargeback or showback process, enabling informed decision-making regarding resource consumption and budgeting.
"},{"location":"explanation/console.html#user-roles-and-permissions","title":"User Roles and Permissions","text":""},{"location":"explanation/console.html#administrators","title":"Administrators","text":"Administrators have overarching access to the console, including the ability to view all namespaces and tenants. They have exclusive access to the IntegrationConfig, allowing them to view all the settings and integrations.
"},{"location":"explanation/console.html#tenant-users","title":"Tenant Users","text":"Regular tenant users can monitor and manage their allocated resources. However, they do not have access to the IntegrationConfig and cannot view resources across different tenants, ensuring data privacy and operational integrity.
"},{"location":"explanation/console.html#live-yaml-configuration-and-graph-view","title":"Live YAML Configuration and Graph View","text":"In the MTO Console, each resource section is equipped with a \"View\" button, revealing the live YAML configuration for complete information on the resource. For Tenant resources, a supplementary \"Graph\" option is available, illustrating the relationships and dependencies of all resources under a Tenant. This dual-view approach empowers users with both the detailed control of YAML and the holistic oversight of the graph view.
You can find more details on graph visualization here: Graph Visualization
"},{"location":"explanation/console.html#caching-and-database","title":"Caching and Database","text":"MTO integrates a dedicated database to streamline resource management. Now, all resources managed by MTO are efficiently stored in a Postgres database, enhancing the MTO Console's ability to efficiently retrieve all the resources for optimal presentation.
The implementation of this feature is facilitated by the Bootstrap controller, streamlining the deployment process. This controller creates the PostgreSQL Database, establishes a service for inter-pod communication, and generates a secret to ensure secure connectivity to the database.
Furthermore, the introduction of a dedicated cache layer ensures that there is no added burden on the Kube API server when responding to MTO Console requests. This enhancement not only improves response times but also contributes to a more efficient and responsive resource management system.
"},{"location":"explanation/console.html#authentication-and-authorization","title":"Authentication and Authorization","text":""},{"location":"explanation/console.html#keycloak-for-authentication","title":"Keycloak for Authentication","text":"MTO Console incorporates Keycloak, a leading authentication module, to manage user access securely and efficiently. Keycloak is provisioned automatically by our controllers, setting up a new realm, client, and a default user named mto
.
MTO Console leverages PostgreSQL as the persistent storage solution for Keycloak, enhancing the reliability and flexibility of the authentication system.
It offers benefits such as enhanced data reliability, easy data export and import.
"},{"location":"explanation/console.html#benefits_1","title":"Benefits","text":"The MTO Console is equipped with an authorization module, designed to manage access rights intelligently and securely.
"},{"location":"explanation/console.html#benefits_2","title":"Benefits","text":"The MTO Console is engineered to simplify complex multi-tenant management. The current iteration focuses on providing comprehensive visibility. Future updates could include direct CUD (Create/Update/Delete) capabilities from the dashboard, enhancing the console\u2019s functionality. The Showback feature remains a standout, offering critical cost tracking and analysis. The delineation of roles between administrators and tenant users ensures a secure and organized operational framework.
"},{"location":"explanation/logs-metrics.html","title":"Metrics and Logs Documentation","text":"This document offers an overview of the Prometheus metrics implemented by the multi_tenant_operator
controllers, along with an interpretation guide for the logs and statuses generated by these controllers. Each metric is designed to provide specific insights into the controllers' operational performance, while the log interpretation guide aids in understanding their behavior and workflow processes. Additionally, the status descriptions for custom resources provide operational snapshots. Together, these elements form a comprehensive toolkit for monitoring and enhancing the performance and health of the controllers.
multi_tenant_operator_resources_deployed_total
kind
, name
, namespace
multi_tenant_operator_resources_deployed
kind
, name
, namespace
, type
multi_tenant_operator_reconcile_error
kind
, name
, namespace
, state
, errors
multi_tenant_operator_reconcile_count
kind
, name
multi_tenant_operator_reconcile_seconds
kind
, name
multi_tenant_operator_reconcile_seconds_total
kind
, name
In this section, we delve into the status of various custom resources managed by our controllers. The kubectl describe
command can be used to fetch the status of these resources.
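For example, the status of the TemplateGroupInstance from the earlier examples could be fetched like this (illustrative; substitute the name of your own resource):
kubectl describe templategroupinstance namespace-parameterized-restrictions-tgi\n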
Status from the templategroupinstances.tenantoperator.stakater.com
custom resource:
InstallSucceeded
: Indicates the success of the instance's installation.Ready
: Shows the readiness of the instance, with details on the last reconciliation process, its duration, and relevant messages.Running
: Reports on active processes like ongoing resource reconciliation.Template Manifests Hash
and Resource Mapping Hash
, which provide versioning and change tracking for template manifests and resource mappings.Logs from the tenant-operator-templategroupinstance-controller
:
Reconciling!
mark the beginning of a reconciliation process for a TemplateGroupInstance. Subsequent actions like Creating/Updating TemplateGroupInstance
and Retrieving list of namespaces Matching to TGI
outline the reconciliation steps.Namespaces test-namespace-1 is new or failed...
and Creating/Updating resource...
detail the management of Kubernetes resources in specific namespaces.[Worker X]
show tasks being processed in parallel, including steps like Validating parameters
, Gathering objects from manifest
, and Apply manifests
.End Reconciling
and Defering XXth Reconciling, with duration XXXms
indicate the end of a reconciliation process and its duration, aiding in performance analysis.Watcher
such as Delete call received for object...
and Following resource is recreated...
are key for tracking changes to Kubernetes objects.These logs are crucial for tracking the system's behavior, diagnosing issues, and comprehending the resource management workflow.
"},{"location":"explanation/multi-tenancy-vault.html","title":"Multi-Tenancy in Vault","text":""},{"location":"explanation/multi-tenancy-vault.html#vault-multitenancy","title":"Vault Multitenancy","text":"HashiCorp Vault is an identity-based secret and encryption management system. Vault validates and authorizes a system's clients (users, machines, apps) before providing them access to secrets or stored sensitive data.
"},{"location":"explanation/multi-tenancy-vault.html#vault-integration-in-multi-tenant-operator","title":"Vault integration in Multi Tenant Operator","text":""},{"location":"explanation/multi-tenancy-vault.html#service-account-auth-in-vault","title":"Service Account Auth in Vault","text":"MTO enables the Kubernetes auth method which can be used to authenticate with Vault using a Kubernetes Service Account Token. When enabled, for every tenant namespace, MTO automatically creates policies and roles that allow the service accounts present in those namespaces to read secrets at tenant's path in Vault. The name of the role is the same as namespace name.
These service accounts are required to have stakater.com/vault-access: true
label, so they can be authenticated with Vault via MTO.
The Diagram shows how MTO enables ServiceAccounts to read secrets from Vault.
"},{"location":"explanation/multi-tenancy-vault.html#user-oidc-auth-in-vault","title":"User OIDC Auth in Vault","text":"This requires a running RHSSO(RedHat Single Sign On)
instance integrated with Vault over OIDC login method.
MTO integration with Vault and RHSSO provides a way for users to log in to Vault where they only have access to relevant tenant paths.
Once both integrations are set up with IntegrationConfig CR, MTO links tenant users to specific client roles named after their tenant under Vault client in RHSSO.
After that, MTO creates specific policies in Vault for its tenant users.
Mapping of tenant roles to Vault is shown below
Tenant Role | Vault Path | Vault Capabilities
Owner, Editor | (tenantName)/* | Create, Read, Update, Delete, List
Owner, Editor | sys/mounts/(tenantName)/* | Create, Read, Update, Delete, List
Owner, Editor | managed-addons/* | Read, List
Viewer | (tenantName)/* | Read
"},{"location":"explanation/template.html","title":"Understanding and Utilizing Template","text":""},{"location":"explanation/template.html#creating-templates","title":"Creating Templates","text":"Anna wants to create a Template that she can use to initialize or share common resources across namespaces (e.g. PullSecrets).
Anna can either create a template using manifests
field, covering Kubernetes or custom resources.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Or by using Helm Charts
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: redis\nresources:\n helm:\n releaseName: redis\n chart:\n repository:\n name: redis\n repoUrl: https://charts.bitnami.com/bitnami\n version: 0.0.15\n values: |\n redisPort: 6379\n
She can also use resourceMapping
field to copy over secrets and configmaps from one namespace to others.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: resource-mapping\nresources:\n resourceMappings:\n secrets:\n - name: docker-secret\n namespace: bluesky-build\n configMaps:\n - name: tronador-configMap\n namespace: stakater-tronador\n
Note: Resource mapping can be used via TGI to map resources within tenant namespaces or to some other tenant's namespace. If used with TI, the resources will only be mapped if namespaces belong to same tenant.
"},{"location":"explanation/template.html#using-templates-with-default-parameters","title":"Using Templates with Default Parameters","text":"apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: namespace-parameterized-restrictions\nparameters:\n # Name of the parameter\n - name: DEFAULT_CPU_LIMIT\n # The default value of the parameter\n value: \"1\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"0.5\"\n # If a parameter is required the template instance will need to set it\n # required: true\n # Make sure only values are entered for this parameter\n validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n manifests:\n - apiVersion: v1\n kind: LimitRange\n metadata:\n name: namespace-limit-range-${namespace}\n spec:\n limits:\n - default:\n cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n defaultRequest:\n cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n type: Container\n
Parameters can be used with both manifests
and helm charts
Templated values are placeholders in your configuration that get replaced with actual data when the CR is processed. Below is a list of currently supported templated values, their descriptions, and where they can be used.
"},{"location":"explanation/templated-metadata-values.html#supported-templated-values","title":"Supported templated values","text":"\"{{ TENANT.USERNAME }}\"
Owners
and Editors
.Tenant
: Under sandboxMetadata.labels
and sandboxMetadata.annotations
.IntegrationConfig
: Under metadata.sandboxs.labels
and metadata.sandboxs.annotations
. annotation:\n che.eclipse.org/username: \"{{ TENANT.USERNAME }}\" # double quotes are required\n
Bill is a cluster admin who wants to configure network policies to provide multi-tenant network isolation.
First, Bill creates a template for network policies:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: tenant-network-policy\nresources:\n manifests:\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-same-namespace\n spec:\n podSelector: {}\n ingress:\n - from:\n - podSelector: {}\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-from-openshift-monitoring\n spec:\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n network.openshift.io/policy-group: monitoring\n podSelector: {}\n policyTypes:\n - Ingress\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-from-openshift-ingress\n spec:\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n network.openshift.io/policy-group: ingress\n podSelector: {}\n policyTypes:\n - Ingress\n
Once the template has been created, Bill edits the IntegrationConfig to add unique label to all tenant projects:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n metadata:\n namespaces:\n labels:\n stakater.com/workload-monitoring: \"true\"\n tenant-network-policy: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n sandbox:\n labels:\n stakater.com/kind: sandbox\n privileged:\n namespaces:\n - default\n - ^openshift.*\n - ^kube.*\n serviceAccounts:\n - ^system:serviceaccount:openshift.*\n - ^system:serviceaccount:kube.*\n
Bill has added a new label tenant-network-policy: \"true\"
in project section of IntegrationConfig, now MTO will add that label in all tenant projects.
Finally, Bill creates a TemplateGroupInstance
which will distribute the network policies using the newly added project label and template.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: tenant-network-policy-group\nspec:\n template: tenant-network-policy\n selector:\n matchLabels:\n tenant-network-policy: \"true\"\n sync: true\n
MTO will now deploy the network policies mentioned in Template
to all projects matching the label selector mentioned in the TemplateGroupInstance.
Secrets like registry
credentials often need to exist in multiple Namespaces, so that Pods within different namespaces can have access to those credentials in form of secrets.
Manually creating secrets within different namespaces could lead to challenges, such as:
With the help of Multi-Tenant Operator's Template feature we can make this secret distribution experience easy.
For example, to copy a Secret called registry
which exists in the example
to new Namespaces whenever they are created, we will first create a Template which will have reference of the registry secret.
It will also push updates to the copied Secrets and keep the propagated secrets always sync and updated with parent namespaces.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: registry-secret\nresources:\n resourceMappings:\n secrets:\n - name: registry\n namespace: example\n
Now using this Template we can propagate registry secret to different namespaces that have some common set of labels.
For example, will just add one label kind: registry
and all namespaces with this label will get this secret.
For propagating it on different namespaces dynamically will have to create another resource called TemplateGroupInstance
. TemplateGroupInstance
will have Template
and matchLabel
mapping as shown below:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: registry-secret-group-instance\nspec:\n template: registry-secret\n selector:\n matchLabels:\n kind: registry\n sync: true\n
After reconciliation, you will be able to see those secrets in namespaces having mentioned label.
MTO will keep injecting this secret to the new namespaces created with that label.
kubectl get secret registry -n example-ns-1\nNAME       STATE    AGE\nregistry   Active   3m\n\nkubectl get secret registry -n example-ns-2\nNAME       STATE    AGE\nregistry   Active   3m\n
"},{"location":"how-to-guides/custom-metrics.html","title":"Custom Metrics Support","text":"Multi Tenant Operator now supports custom metrics for templates, template instances and template group instances. This feature allows users to monitor the usage of templates and template instances in their cluster.
To enable custom metrics and view them in your OpenShift cluster, you need to follow the steps below:
Observe
-> Metrics
in the OpenShift console.Administration
-> Namespaces
in the OpenShift console. Select the namespace where you have installed Multi Tenant Operator.openshift.io/cluster-monitoring=true
. This will enable cluster monitoring for the namespace.Observe
-> Targets
in the OpenShift console. You should see the namespace in the list of targets.Observe
-> Metrics
in the OpenShift console. You should see the custom metrics for templates, template instances and template group instances in the list of metrics.Details of metrics can be found at Metrics and Logs
"},{"location":"how-to-guides/custom-roles.html","title":"Changing the default access level for tenant owners","text":"This feature allows the cluster admins to change the default roles assigned to Tenant owner, editor, viewer groups.
For example, if Bill as the cluster admin wants to reduce the privileges that tenant owners have, so they cannot create or edit Roles or bind them. As an admin of an OpenShift cluster, Bill can do this by assigning the edit
role to all tenant owners. This is easily achieved by modifying the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n accessControl:\n rbac:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - edit\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n
Once all namespaces reconcile, the old admin
RoleBindings should get replaced with the edit
ones for each tenant owner.
Bill now wants the owners of the tenants bluesky
and alpha
to have admin
permissions over their namespaces. Custom roles feature will allow Bill to do this, by modifying the IntegrationConfig like this:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n accessControl:\n rbac:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - edit\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n owner:\n clusterRoles:\n - admin\n - labelSelector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - bluesky\n owner:\n clusterRoles:\n - admin\n
New Bindings will be created for the Tenant owners of bluesky
and alpha
, corresponding to the admin
Role. Bindings for editors and viewer will be inherited from the default roles
. All other Tenant owners will have an edit
Role bound to them within their namespaces
Multi Tenant Operator uses its helm
functionality from Template
and TemplateGroupInstance
to deploy private and public charts to multiple namespaces.
Bill, the cluster admin, wants to deploy a helm chart from OCI
registry in namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: chart-deploy\nresources:\n helm:\n releaseName: random-release\n chart:\n repository:\n name: random-chart\n repoUrl: 'oci://ghcr.io/stakater/charts/random-chart'\n version: 0.0.15\n password:\n key: password\n name: repo-user\n namespace: shared-ns\n username:\n key: username\n name: repo-user\n namespace: shared-ns\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: chart-deploy\nspec:\n selector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - system\n sync: true\n template: chart-deploy\n
Multi Tenant Operator will pick up the credentials from the mentioned namespace to pull the chart and apply it.
Afterward, Bill can see that manifests in the chart have been successfully created in all label matching namespaces.
"},{"location":"how-to-guides/deploying-private-helm-charts.html#deploying-helm-chart-to-namespaces-via-templategroupinstances-from-https-registry","title":"Deploying Helm Chart to Namespaces via TemplateGroupInstances from HTTPS Registry","text":"Bill, the cluster admin, wants to deploy a helm chart from HTTPS
registry in namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: chart-deploy\nresources:\n helm:\n releaseName: random-release\n chart:\n repository:\n name: random-chart\n repoUrl: 'nexus-helm-url/registry'\n version: 0.0.15\n password:\n key: password\n name: repo-user\n namespace: shared-ns\n username:\n key: username\n name: repo-user\n namespace: shared-ns\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: chart-deploy\nspec:\n selector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - system\n sync: true\n template: chart-deploy\n
Multi Tenant Operator will pick up the credentials from the mentioned namespace to pull the chart and apply it.
Afterward, Bill can see that manifests in the chart have been successfully created in all label matching namespaces.
"},{"location":"how-to-guides/deploying-templates.html","title":"Distributing Resources in Namespaces","text":"Multi Tenant Operator has two Custom Resources which can cover this need using the Template
CR, depending upon the conditions and preference.
Bill, the cluster admin, wants to deploy a docker pull secret in namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
Afterward, Bill can see that secrets have been successfully created in all label matching namespaces.
kubectl get secret docker-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-secret Active 3m\n\nkubectl get secret docker-secret -n alpha-dave-aurora-sandbox\nNAME STATE AGE\ndocker-secret Active 2m\n
TemplateGroupInstance
can also target specific tenants or all tenant namespaces under a single YAML definition.
It can be done by using the matchExpressions
field, dividing the tenant label in key and values.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\n sync: true\n
"},{"location":"how-to-guides/deploying-templates.html#templategroupinstance-for-all-tenants","title":"TemplateGroupInstance for all Tenants","text":"This can also be done by using the matchExpressions
field, using just the tenant label key stakater.com/tenant
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: Exists\n sync: true\n
"},{"location":"how-to-guides/deploying-templates.html#deploying-template-to-a-namespace-via-templateinstance","title":"Deploying Template to a Namespace via TemplateInstance","text":"Anna wants to deploy a docker pull secret in her namespace.
First Anna asks Bill, the cluster admin, to create a template of the secret for her:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Anna creates a TemplateInstance
in her namespace referring to the Template
she wants to deploy:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: docker-pull-secret-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: docker-pull-secret\n sync: true\n
Once this is created, Anna can see that the secret has been successfully applied.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME                 STATE    AGE\ndocker-pull-secret   Active   3m\n
"},{"location":"how-to-guides/deploying-templates.html#passing-parameters-to-template-via-templateinstance-templategroupinstance","title":"Passing Parameters to Template via TemplateInstance, TemplateGroupInstance","text":"Anna wants to deploy a LimitRange resource to certain namespaces.
First Anna asks Bill, the cluster admin, to create template with parameters for LimitRange for her:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: namespace-parameterized-restrictions\nparameters:\n # Name of the parameter\n - name: DEFAULT_CPU_LIMIT\n # The default value of the parameter\n value: \"1\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"0.5\"\n # If a parameter is required the template instance will need to set it\n # required: true\n # Make sure only values are entered for this parameter\n validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n manifests:\n - apiVersion: v1\n kind: LimitRange\n metadata:\n name: namespace-limit-range-${namespace}\n spec:\n limits:\n - default:\n cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n defaultRequest:\n cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n type: Container\n
Afterward, Anna creates a TemplateInstance
in her namespace referring to the Template
she wants to deploy:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: namespace-parameterized-restrictions-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: namespace-parameterized-restrictions\n sync: true\nparameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n
If she wants to distribute the same Template over multiple namespaces, she can use TemplateGroupInstance
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: namespace-parameterized-restrictions-tgi\nspec:\n template: namespace-parameterized-restrictions\n sync: true\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\nparameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n
"},{"location":"how-to-guides/distributing-secrets-using-sealed-secret-template.html","title":"Distributing Secrets Using Sealed Secrets Template","text":"Bill is a cluster admin who wants to provide a mechanism for distributing secrets in multiple namespaces. For this, he wants to use Sealed Secrets as the solution by adding them to MTO Template CR
First, Bill creates a Template in which Sealed Secret is mentioned:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: tenant-sealed-secret\nresources:\n manifests:\n - kind: SealedSecret\n apiVersion: bitnami.com/v1alpha1\n metadata:\n name: mysecret\n spec:\n encryptedData:\n .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....\n template:\n type: kubernetes.io/dockerconfigjson\n # this is an example of labels and annotations that will be added to the output secret\n metadata:\n labels:\n \"jenkins.io/credentials-type\": usernamePassword\n annotations:\n \"jenkins.io/credentials-description\": credentials from Kubernetes\n
Once the template has been created, Bill has to edit the Tenant
to add unique label to namespaces in which the secret has to be deployed. For this, he can use the support for common and specific labels across namespaces.
Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n sandboxes:\n enabled: false\n withTenantPrefix:\n - dev\n - build\n - prod\n withoutTenantPrefix: []\n metadata:\n specific:\n - namespaces:\n - bluesky-test-namespace\n labels:\n distribute-image-pull-secret: true\n common:\n labels:\n distribute-image-pull-secret: true\n
Bill has added support for a new label distribute-image-pull-secret: true\"
for tenant projects/namespaces, now MTO will add that label depending on the used field.
Finally, Bill creates a TemplateGroupInstance
which will deploy the sealed secrets using the newly created project label and template.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: tenant-sealed-secret\nspec:\n template: tenant-sealed-secret\n selector:\n matchLabels:\n distribute-image-pull-secret: true\n sync: true\n
MTO will now deploy the sealed secrets mentioned in Template
to namespaces which have the mentioned label. The rest of the work to deploy secret from a sealed secret has to be done by Sealed Secrets Controller.
With the Multi-Tenant Operator (MTO), cluster administrators can configure multi-tenancy within their cluster. The integration of ArgoCD with MTO allows for the configuration of multi-tenancy in ArgoCD applications and AppProjects.
MTO can be configured to create AppProjects for each tenant. These AppProjects enable tenants to create ArgoCD Applications that can be synced to namespaces owned by them. Cluster admins can blacklist certain namespace resources and allow specific cluster-scoped resources as needed (see the NamespaceResourceBlacklist and ClusterResourceWhitelist sections in Integration Config docs and Tenant Custom Resource docs).
Note that ArgoCD integration in MTO is optional.
"},{"location":"how-to-guides/enabling-multi-tenancy-argocd.html#default-argocd-configuration","title":"Default ArgoCD configuration","text":"We have set a default ArgoCD configuration in Multi Tenant Operator that fulfils the following use cases:
To ensure each tenant has their own ArgoCD AppProjects, administrators must first specify the ArgoCD namespace in the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n argocd:\n namespace: openshift-operators\n ...\n
Administrators then create an Extension CR associated with the tenant:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Extensions\nmetadata:\n name: extensions-sample\nspec:\n tenantName: tenant-sample\n argoCD:\n onDeletePurgeAppProject: true\n appProject:\n sourceRepos:\n - \"github.com/stakater/repo\"\n clusterResourceWhitelist:\n - group: \"\"\n kind: \"Pod\"\n namespaceResourceBlacklist:\n - group: \"v1\"\n kind: \"ConfigMap\"\n
This creates an AppProject for the tenant:
oc get AppProject -A\nNAMESPACE NAME AGE\nopenshift-operators tenant-sample 5d15h\n
Example of the created AppProject:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: tenant-sample\n namespace: openshift-operators\nspec:\n destinations:\n - namespace: tenant-sample-build\n server: \"https://kubernetes.default.svc\"\n - namespace: tenant-sample-dev\n server: \"https://kubernetes.default.svc\"\n - namespace: tenant-sample-stage\n server: \"https://kubernetes.default.svc\"\n roles:\n - description: >-\n Role that gives full access to all resources inside the tenant's\n namespace to the tenant owner groups\n groups:\n - saap-cluster-admins\n - stakater-team\n - tenant-sample-owner-group\n name: tenant-sample-owner\n policies:\n - \"p, proj:tenant-sample:tenant-sample-owner, *, *, tenant-sample/*, allow\"\n - description: >-\n Role that gives edit access to all resources inside the tenant's\n namespace to the tenant owner group\n groups:\n - saap-cluster-admins\n - stakater-team\n - tenant-sample-edit-group\n name: tenant-sample-edit\n policies:\n - \"p, proj:tenant-sample:tenant-sample-edit, *, *, tenant-sample/*, allow\"\n - description: >-\n Role that gives view access to all resources inside the tenant's\n namespace to the tenant owner group\n groups:\n - saap-cluster-admins\n - stakater-team\n - tenant-sample-view-group\n name: tenant-sample-view\n policies:\n - \"p, proj:tenant-sample:tenant-sample-view, *, get, tenant-sample/*, allow\"\n sourceRepos:\n - \"https://github.com/stakater/gitops-config\"\n
Users belonging to the tenant group will now see only applications created by them in the ArgoCD frontend:
Note
For ArgoCD Multi Tenancy to work properly, any default roles or policies attached to all users must be removed.
"},{"location":"how-to-guides/enabling-multi-tenancy-argocd.html#preventing-argocd-from-syncing-certain-namespaced-resources","title":"Preventing ArgoCD from Syncing Certain Namespaced Resources","text":"To prevent tenants from syncing ResourceQuota and LimitRange resources to their namespaces, administrators can specify these resources in the blacklist section of the ArgoCD configuration in the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n integrations:\n argocd:\n namespace: openshift-operators\n namespaceResourceBlacklist:\n - group: \"\"\n kind: ResourceQuota\n - group: \"\"\n kind: LimitRange\n ...\n
This configuration ensures these resources are not synced by ArgoCD if added to any tenant's project directory in GitOps. The AppProject will include the blacklisted resources:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: tenant-sample\n namespace: openshift-operators\nspec:\n ...\n namespaceResourceBlacklist:\n - group: ''\n kind: ResourceQuota\n - group: ''\n kind: LimitRange\n ...\n
"},{"location":"how-to-guides/enabling-multi-tenancy-argocd.html#allowing-argocd-to-sync-certain-cluster-wide-resources","title":"Allowing ArgoCD to Sync Certain Cluster-Wide Resources","text":"To allow tenants to sync the Environment cluster-scoped resource, administrators can specify this resource in the allow-list section of the ArgoCD configuration in the IntegrationConfig's spec:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n integrations:\n argocd:\n namespace: openshift-operators\n clusterResourceWhitelist:\n - group: \"\"\n kind: Environment\n ...\n
This configuration ensures these resources are synced by ArgoCD if added to any tenant's project directory in GitOps. The AppProject will include the allow-listed resources:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: tenant-sample\n namespace: openshift-operators\nspec:\n ...\n clusterResourceWhitelist:\n - group: \"\"\n kind: Environment\n ...\n
"},{"location":"how-to-guides/enabling-multi-tenancy-argocd.html#overriding-namespaceresourceblacklist-andor-clusterresourcewhitelist-per-tenant","title":"Overriding NamespaceResourceBlacklist and/or ClusterResourceWhitelist Per Tenant","text":"To override the namespaceResourceBlacklist
and/or clusterResourceWhitelist
set via Integration Config for a specific tenant, administrators can specify these in the argoCD
section of the Extension CR:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Extensions\nmetadata:\n name: extensions-blue-sky\nspec:\n tenantName: blue-sky\n argoCD:\n onDeletePurgeAppProject: true\n appProject:\n sourceRepos:\n - \"github.com/stakater/repo\"\n clusterResourceWhitelist:\n - group: \"\"\n kind: \"Pod\"\n namespaceResourceBlacklist:\n - group: \"v1\"\n kind: \"ConfigMap\"\n
This configuration allows for tailored settings for each tenant, ensuring flexibility and control over ArgoCD resources.
"},{"location":"how-to-guides/enabling-multi-tenancy-vault.html","title":"Configuring Vault in IntegrationConfig","text":"Vault is used to secure, store and tightly control access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.
To enable Vault multi-tenancy, a role has to be created in Vault under Kubernetes authentication with the following permissions:
path \"secret/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"sys/mounts\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"sys/mounts/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"managed-addons/*\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"auth/kubernetes/role/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"sys/auth\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"sys/policies/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"identity/group\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"identity/group-alias\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"identity/group/name/*\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"identity/group/id/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\n
If Bill (the cluster admin) has Vault configured in his cluster, then he can take benefit from MTO's integration with Vault.
MTO automatically creates Vault secret paths for tenants, where tenant members can securely save their secrets. It also authorizes tenant members to access these secrets via OIDC.
Bill would first have to integrate Vault with MTO by adding the details in IntegrationConfig. For more details
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n integrations:\n vault:\n enabled: true\n authMethod: kubernetes\n accessInfo: \n accessorPath: oidc/\n address: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n roleName: mto\n secretRef: \n name: ''\n namespace: ''\n config: \n ssoClient: vault\n
Bill then creates a tenant for Anna and John:
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n accessControl:\n owners:\n users:\n - anna@acme.org\n viewers:\n users:\n - john@acme.org\n quota: small\n namespaces:\n sandboxes:\n enabled: false\n
Now Bill goes to Vault
and sees that a path for tenant
has been made under the name bluesky/kv
, confirming that Tenant members with the Owner or Edit roles now have access to the tenant's Vault path.
Now if Anna sign's in to the Vault via OIDC, she can see her tenants path and secrets. Whereas if John sign's in to the Vault via OIDC, he can't see his tenants path or secrets as he doesn't have the access required to view them.
For more details around enabling Kubernetes auth in Vault, visit here
"},{"location":"how-to-guides/enabling-openshift-dev-workspace.html","title":"Enabling DevWorkspace for Tenant's sandbox in OpenShift","text":""},{"location":"how-to-guides/enabling-openshift-dev-workspace.html#devworkspaces-metadata-via-multi-tenant-operator","title":"DevWorkspaces metadata via Multi Tenant Operator","text":"DevWorkspaces require specific metadata on a namespace for it to work in it. With Multi Tenant Operator (MTO), you can create sandbox namespaces for users of a Tenant, and then add the required metadata automatically on all sandboxes.
"},{"location":"how-to-guides/enabling-openshift-dev-workspace.html#required-metadata-for-enabling-devworkspace-on-sandbox","title":"Required metadata for enabling DevWorkspace on sandbox","text":" labels:\n app.kubernetes.io/part-of: che.eclipse.org\n app.kubernetes.io/component: workspaces-namespace\n annotations:\n che.eclipse.org/username: <username>\n
"},{"location":"how-to-guides/enabling-openshift-dev-workspace.html#automate-sandbox-metadata-for-all-tenant-users-via-tenant-cr","title":"Automate sandbox metadata for all Tenant users via Tenant CR","text":"With Multi Tenant Operator (MTO), you can set sandboxMetadata
like below to automate metadata for all sandboxes:
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@acme.org\n editors:\n users:\n - erik@acme.org\n viewers:\n users:\n - john@acme.org\n namespaces:\n sandboxes:\n enabled: true\n private: false\n metadata:\n sandbox:\n labels:\n app.kubernetes.io/part-of: che.eclipse.org\n app.kubernetes.io/component: workspaces-namespace\n annotations:\n che.eclipse.org/username: \"{{ TENANT.USERNAME }}\"\n
It will create sandbox namespaces and also apply the sandboxMetadata
for owners and editors. Notice the template {{ TENANT.USERNAME }}
, it will resolve the username as value of the corresponding annotation. For more info on templated value, see here
You can also automate the metadata on all sandbox namespaces by using IntegrationConfig, notice metadata.sandboxes
:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n accessControl:\n namespaceAccessPolicy:\n deny:\n privilegedNamespaces: {}\n privileged:\n namespaces:\n - ^default$\n - ^openshift.*\n - ^kube.*\n serviceAccounts:\n - ^system:serviceaccount:openshift.*\n - ^system:serviceaccount:kube.*\n - ^system:serviceaccount:stakater-actions-runner-controller:actions-runner-controller-runner-deployment$\n rbac:\n tenantRoles:\n default:\n editor:\n clusterRoles:\n - edit\n owner:\n clusterRoles:\n - admin\n viewer:\n clusterRoles:\n - view\n components:\n console: false\n ingress:\n console: {}\n gateway: {}\n keycloak: {}\n showback: false\n integrations:\n vault:\n accessInfo:\n accessorPath: \"\"\n address: \"\"\n roleName: \"\"\n secretRef:\n name: \"\"\n namespace: \"\"\n authMethod: kubernetes\n config:\n ssoClient: \"\"\n enabled: false\n metadata:\n groups: {}\n namespaces: {}\n sandboxes:\n labels:\n app.kubernetes.io/part-of: che.eclipse.org\n app.kubernetes.io/component: workspaces-namespace\n annotations:\n che.eclipse.org/username: \"{{ TENANT.USERNAME }}\"\n
For more info on templated value \"{{ TENANT.USERNAME }}\"
, see here
Bill, the cluster admin, wants to extend the default access for tenant members. As an admin of an OpenShift cluster, Bill can extend the admin, edit, and view ClusterRoles using aggregation. Bill first creates a ClusterRole with privileges to the resources he wants to grant, and then adds the aggregation label to the newly created ClusterRole so that it extends the corresponding default ClusterRole.
kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: extend-admin-role\n labels:\n rbac.authorization.k8s.io/aggregate-to-admin: 'true'\nrules:\n - verbs:\n - create\n - update\n - patch\n - delete\n apiGroups:\n - user.openshift.io\n resources:\n - groups\n
Note: You can learn more about aggregated-cluster-roles
here
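Since the aggregation controller copies the rules from the labelled ClusterRole into the default admin ClusterRole, a hedged way to confirm the extension is to inspect the aggregated role and look for the new rule; the grep filter below is only illustrative and the output will vary per cluster.
kubectl get clusterrole admin -o yaml | grep -B2 -A6 user.openshift.io\n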
Effortlessly associate tenants with their respective resources using the enhanced graph feature on the MTO Console. This dynamic graph illustrates the relationships between tenants and the resources they create, encompassing both MTO's proprietary resources and native Kubernetes/OpenShift elements.
Example Graph:
graph LR;\n A(alpha)-->B(dev);\n A-->C(prod);\n B-->D(limitrange);\n B-->E(owner-rolebinding);\n B-->F(editor-rolebinding);\n B-->G(viewer-rolebinding);\n C-->H(limitrange);\n C-->I(owner-rolebinding);\n C-->J(editor-rolebinding);\n C-->K(viewer-rolebinding);
Explore with an intuitive graph that showcases the relationships between tenants and their resources. The MTO Console's graph feature simplifies the understanding of complex structures, providing you with a visual representation of your tenant's organization.
To view the graph of your tenant, follow the steps below:
Tenants
page on the MTO Console using the left navigation bar. View
of the tenant for which you want to view the graph. Graph
tab on the tenant details page. MTO Console uses Keycloak for authentication and authorization. By default, the MTO Console uses an internal Keycloak instance that is provisioned by the Multi Tenant Operator in its own namespace. However, you can also integrate an external Keycloak instance with the MTO Console.
This guide will help you integrate an external Keycloak instance with the MTO Console.
"},{"location":"how-to-guides/integrating-external-keycloak.html#prerequisites","title":"Prerequisites","text":"Navigate to the Keycloak console.
Clients
.Create
button to create a new client.Create a new client.
Client ID
, Client Name
and Client Protocol
fields.Valid Redirect URIs
and Web Origins
for the client.Note: The Valid Redirect URIs
and Web Origins
should be the URL of the MTO Console.
Save
button.IntegrationConfig
CR with the following configuration.integrations: \n keycloak:\n realm: <realm>\n address: <keycloak-address>\n clientName: <client-name>\n
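For illustration, a filled-in configuration might look like the following; the realm, address, and client name shown here are hypothetical placeholders and must match the client you created in your own Keycloak instance.
integrations:\n  keycloak:\n    realm: mto\n    address: https://keycloak.apps.example.com/\n    clientName: mto-console\n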
This guide walks you through the process of adding new users in Keycloak and granting them access to Multi Tenant Operator (MTO) Console.
"},{"location":"how-to-guides/keycloak.html#accessing-keycloak-console","title":"Accessing Keycloak Console","text":"mto
realm.Users
section in the mto
realm.Now, at this point, a user will be authenticated to the MTO Console. But in order to get access to view any Tenant resources, the user will need to be part of a Tenant.
"},{"location":"how-to-guides/keycloak.html#granting-access-to-tenant-resources","title":"Granting Access to Tenant Resources","text":"apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: arsenal\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - gabriel@arsenal.com\n groups:\n - arsenal\n editors:\n users:\n - hakimi@arsenal.com\n viewers:\n users:\n - neymar@arsenal.com\n
john@arsenal.com
and wish to add them as an editor, the edited section would look like this:editors:\n users:\n - hakimi@arsenal.com\n - john@arsenal.com\n
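One hedged way to apply this change from the CLI, assuming you have permission to modify Tenant custom resources, is to edit the resource directly and update the editors list:
kubectl edit tenant arsenal\n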
Once the above steps are completed, John should be able to access the MTO Console and see the arsenal Tenant's details, along with all the other resources, such as namespaces and templates, that he has access to.
"},{"location":"how-to-guides/mattermost.html","title":"Creating Mattermost Teams for your tenant","text":""},{"location":"how-to-guides/mattermost.html#requirements","title":"Requirements","text":"MTO-Mattermost-Integration-Operator
Please contact stakater to install the Mattermost integration operator before following the below-mentioned steps.
"},{"location":"how-to-guides/mattermost.html#steps-to-enable-integration","title":"Steps to enable integration","text":"Bill wants some tenants to also have their own Mattermost Teams. To make sure this happens correctly, Bill will first add the stakater.com/mattermost: true
label to the tenants. The label will enable the mto-mattermost-integration-operator
to create and manage Mattermost Teams based on Tenants.
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: sigma\n labels:\n stakater.com/mattermost: 'true'\nspec:\n quota: medium\n accessControl:\n owners:\n users:\n - user\n editors:\n users:\n - user1\n namespaces:\n sandboxes:\n enabled: false\n withTenantPrefix:\n - dev\n - build\n - prod\n
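If the tenant already exists, the same label can also be applied from the CLI; a minimal sketch, assuming kubectl access to Tenant custom resources:
kubectl label tenant sigma stakater.com/mattermost=true\n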
Now users can log in to Mattermost to see their Team and the relevant channels associated with it.
The Team name is based on the Tenant name. Notification channels are pre-configured for every team and can be modified.
"},{"location":"how-to-guides/resource-sync-by-tgi.html","title":"Sync Resources Deployed by TemplateGroupInstance","text":"The TemplateGroupInstance CR provides two types of resource sync for the resources mentioned in Template
For the given example, let's consider we want to apply the following template
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n\n - apiVersion: v1\n kind: ServiceAccount\n metadata:\n name: example-automated-thing\n secrets:\n - name: example-automated-thing-token-zyxwv\n
And the following TemplateGroupInstance is used to deploy these resources to namespaces having label kind: build
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
As we can see, in our TGI, we have a field spec.sync
which is set to true
. This will update the resources on two conditions:
The TemplateGroupInstance CR is reconciled/updated
If, for any reason, the underlying resource gets updated or deleted, TemplateGroupInstance
CR will try to revert it back to the state mentioned in the Template
CR.
Note
Updates to ServiceAccounts are ignored by both the reconciler and the informers, in order to avoid conflicts between the TGI controller and the Kube Controller Manager. ServiceAccounts are only reverted in case of unexpected deletions when sync is true.
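A quick, hedged way to observe the sync behaviour is to delete one of the distributed resources and watch the controller recreate it; the namespace below is illustrative and assumes it carries the kind: build label targeted by the TemplateGroupInstance.
kubectl delete secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nkubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\n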
"},{"location":"how-to-guides/resource-sync-by-tgi.html#ignore-resources-updates-on-resources","title":"Ignore Resources Updates on Resources","text":"If the resources mentioned in Template
CR conflict with another controller/operator, and you want TemplateGroupInstance to not actively revert the resource updates, you can add the following label to the conflicting resource multi-tenant-operator/ignore-resource-updates: \"\"
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n\n - apiVersion: v1\n kind: ServiceAccount\n metadata:\n name: example-automated-thing\n labels:\n multi-tenant-operator/ignore-resource-updates: \"\"\n secrets:\n - name: example-automated-thing-token-zyxwv\n
Note
However, this label will not stop Multi Tenant Operator from updating the resource under the following conditions: - the Template gets updated - the TemplateGroupInstance gets updated - the resource gets deleted
If you don't want to sync the resources in any case, you can disable sync via sync: false
in TemplateGroupInstance
spec.
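For reference, a minimal sketch of the same TemplateGroupInstance with sync disabled looks like this:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-secret\n  selector:\n    matchLabels:\n      kind: build\n  sync: false\n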
You can uninstall MTO by following these steps:
Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set spec.onDelete.cleanNamespaces
to false
for all those tenants whose namespaces you want to retain, and spec.onDelete.cleanAppProject
to false
for all those tenants whose AppProject you want to retain. For more details check out onDelete
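A hedged way to make this change from the CLI, using the field names referenced above (the tenant name is illustrative; verify the exact field path against the onDelete documentation for your MTO version):
kubectl patch tenant bluesky --type merge -p '{\"spec\":{\"onDelete\":{\"cleanNamespaces\":false,\"cleanAppProject\":false}}}'\n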
In case you have enabled console, you will have to disable it first by navigating to Search
-> IntegrationConfig
-> tenant-operator-config
and set spec.components.console
and spec.components.showback
to false
.
Remove IntegrationConfig CR from the cluster by navigating to Search
-> IntegrationConfig
-> tenant-operator-config
and select Delete
from actions dropdown.
After making the required changes, open the OpenShift console and click on Operators
, followed by Installed Operators
from the side menu
Now the operator has been uninstalled.
Optional:
you can also manually remove MTO's CRDs and its resources from the cluster.
This document contains instructions on installing, uninstalling and configuring Multi Tenant Operator using the OpenShift Marketplace.
OpenShift OperatorHub UI
CLI/GitOps
Enabling Console
License configuration
Uninstall
Operators
, followed by OperatorHub
from the side menuMulti Tenant Operator
and then click on Multi Tenant Operator
tileinstall
buttonUpdated channel
. Select multi-tenant-operator
to install the operator in multi-tenant-operator
namespace from Installed Namespace
dropdown menu. After configuring Update approval
click on the install
button.Note: Use stable
channel for seamless upgrades. For Production Environment
prefer Manual
approval and use Automatic
for Development Environment
Note: MTO will be installed in multi-tenant-operator
namespace.
multi-tenant-operator
oc create namespace multi-tenant-operator\nnamespace/multi-tenant-operator created\n
multi-tenant-operator
namespace.oc create -f - << EOF\napiVersion: operators.coreos.com/v1\nkind: OperatorGroup\nmetadata:\n name: tenant-operator\n namespace: multi-tenant-operator\nEOF\noperatorgroup.operators.coreos.com/tenant-operator created\n
multi-tenant-operator
namespace. To enable console set .spec.config.env[].ENABLE_CONSOLE
to true
. This will create a route resource, which can be used to access the Multi-Tenant-Operator console.oc create -f - << EOF\napiVersion: operators.coreos.com/v1alpha1\nkind: Subscription\nmetadata:\n name: tenant-operator\n namespace: multi-tenant-operator\nspec:\n channel: stable\n installPlanApproval: Automatic\n name: tenant-operator\n source: certified-operators\n sourceNamespace: openshift-marketplace\n startingCSV: tenant-operator.v0.10.0\nEOF\nsubscription.operators.coreos.com/tenant-operator created\n
Note: To bring MTO via GitOps, add the above files in GitOps repository.
subscription
custom resource open OpenShift console and click on Operators
, followed by Installed Operators
from the side menuWorkloads
, followed by Pods
from the side menu and select multi-tenant-operator
projectFor more details and configurations check out IntegrationConfig.
"},{"location":"installation/openshift.html#enabling-console","title":"Enabling Console","text":"To enable console GUI for MTO, go to Search
-> IntegrationConfig
-> tenant-operator-config
and make sure the following fields are set to true
:
spec:\n components:\n console: true\n showback: true\n
Note: If your InstallPlan
approval is set to Manual
then you will have to manually approve the InstallPlan
for MTO console components to be installed.
Operators
, followed by Installed Operators
from the side menu.Upgrade available
in front of mto-opencost
or mto-prometheus
.Preview InstallPlan
on top.Approve
button.InstallPlan
will be approved, and MTO console components will be installed.We offer a free license with installation, and you can create max 2 Tenants with it.
We offer a paid license as well. You need to have a configmap license
created in MTO's namespace (multi-tenant-operator). To get this configmap, you can contact sales@stakater.com
. It would look like this:
apiVersion: v1\nkind: ConfigMap\nmetadata:\n name: license\n namespace: multi-tenant-operator\ndata:\n payload.json: |\n {\n \"metaData\": {\n \"tier\" : \"paid\",\n \"company\": \"<company name here>\"\n }\n }\n signature.base64.txt: <base64 signature here.>\n
"},{"location":"installation/openshift.html#uninstall-via-operatorhub-ui","title":"Uninstall via OperatorHub UI","text":"You can uninstall MTO by following these steps:
Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set spec.onDelete.cleanNamespaces
to false
for all those tenants whose namespaces you want to retain, and spec.onDelete.cleanAppProject
to false
for all those tenants whose AppProject you want to retain. For more details check out onDelete
After making the required changes, open the OpenShift console and click on Operators
, followed by Installed Operators
from the side menu
Now the operator has been uninstalled.
Optional:
you can also manually remove MTO's CRDs and its resources from the cluster.
Bill is a cluster admin who wants to map a docker-pull-secret
, present in a build
namespace, into tenant namespaces where certain labels exist.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n resourceMappings:\n secrets:\n - name: docker-pull-secret\n namespace: build\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
Afterward, Bill can see that the secret has been successfully mapped in all matching namespaces.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"tutorials/distributing-resources/copying-resources.html#mapping-resources-within-tenant-namespaces-via-ti","title":"Mapping Resources within Tenant Namespaces via TI","text":"Anna is a tenant owner who wants to map a docker-pull-secret
, present in the bluesky-build
namespace, to bluesky-anna-aurora-sandbox
namespace.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n resourceMappings:\n secrets:\n - name: docker-pull-secret\n namespace: bluesky-build\n
Once the template has been created, Anna creates a TemplateInstance
in bluesky-anna-aurora-sandbox
namespace, referring to the Template
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: docker-secret-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: docker-pull-secret\n sync: true\n
Afterward, Bill can see that the secret has been successfully mapped into the target namespace.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"tutorials/distributing-resources/distributing-manifests.html","title":"Distributing Resources in Namespaces","text":"Multi Tenant Operator has two Custom Resources which can cover this need using the Template
CR, depending upon the conditions and preference.
Bill, the cluster admin, wants to deploy a docker pull secret in namespaces where certain labels exist.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with template
field, and the namespaces where resources are needed, using selector
field:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-secret\n selector:\n matchExpressions:\n - key: kind\n operator: In\n values:\n - build\n sync: true\n
Afterward, Bill can see that secrets have been successfully created in all label matching namespaces.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 2m\n
TemplateGroupInstance
can also target specific tenants or all tenant namespaces under a single YAML definition.
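For example, a hedged sketch of a TemplateGroupInstance that targets the namespaces of specific tenants by matching on the stakater.com/tenant namespace label; the tenant names listed here are illustrative:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-secret\n  selector:\n    matchExpressions:\n      - key: stakater.com/tenant\n        operator: In\n        values:\n          - alpha\n          - bluesky\n  sync: true\n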
In the v1beta3 version of the Tenant Custom Resource (CR), metadata assignment has been refined to offer granular control over labels and annotations across different namespaces associated with a tenant. This functionality enables precise and flexible management of metadata, catering to both general and specific needs.
"},{"location":"tutorials/tenant/assigning-metadata.html#distributing-common-labels-and-annotations","title":"Distributing Common Labels and Annotations","text":"To apply common labels and annotations across all namespaces within a tenant, the namespaces.metadata.common
field in the Tenant CR is utilized. This approach ensures that essential metadata is uniformly present across all namespaces, supporting consistent identification, management, and policy enforcement.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n metadata:\n common:\n labels:\n app.kubernetes.io/managed-by: tenant-operator\n app.kubernetes.io/part-of: tenant-alpha\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/infra=\nEOF\n
By configuring the namespaces.metadata.common
field as shown, all namespaces within the tenant will inherit the specified labels and annotations.
For scenarios requiring targeted application of labels and annotations to specific namespaces, the Tenant CR's namespaces.metadata.specific
field is designed. This feature enables the assignment of unique metadata to designated namespaces, accommodating specialized configurations and requirements.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n metadata:\n specific:\n - namespaces:\n - bluesky-dev\n labels:\n app.kubernetes.io/is-sandbox: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\nEOF\n
This configuration directs the specific labels and annotations solely to the enumerated namespaces, enabling distinct settings for particular environments.
"},{"location":"tutorials/tenant/assigning-metadata.html#assigning-metadata-to-sandbox-namespaces","title":"Assigning Metadata to Sandbox Namespaces","text":"To specifically address sandbox namespaces within the tenant, the namespaces.metadata.sandbox
property of the Tenant CR is employed. This section allows for the distinct management of sandbox namespaces, enhancing security and differentiation in development or testing environments.
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n sandboxes:\n enabled: true\n private: true\n metadata:\n sandbox:\n labels:\n app.kubernetes.io/part-of: che.eclipse.org\n annotations:\n che.eclipse.org/username: \"{{ TENANT.USERNAME }}\" # templated placeholder\n
This setup ensures that all sandbox namespaces receive the designated metadata, with support for templated values, such as {{ TENANT.USERNAME }}, allowing dynamic customization based on the tenant or user context.
These enhancements in metadata management within the v1beta3
version of the Tenant CR provide comprehensive and flexible tools for labeling and annotating namespaces, supporting a wide range of organizational, security, and operational objectives.
Sandbox namespaces offer a personal development and testing space for users within a tenant. This guide covers how to enable and configure sandbox namespaces for tenant users, along with setting privacy and applying metadata specifically for these sandboxes.
"},{"location":"tutorials/tenant/create-sandbox.html#enabling-sandbox-namespaces","title":"Enabling Sandbox Namespaces","text":"Bill has assigned the ownership of the tenant bluesky to Anna and Anthony. To provide them with their sandbox namespaces, he must enable the sandbox functionality in the tenant's configuration.
To enable sandbox namespaces, Bill updates the Tenant Custom Resource (CR) with sandboxes.enabled: true:
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n sandboxes:\n enabled: true\nEOF\n
This configuration automatically generates sandbox namespaces for Anna, Anthony, and even John (as an editor) with the naming convention <tenantName>-<userName>-sandbox
.
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\nbluesky-anthony-aurora-sandbox Active 5d5h\nbluesky-john-aurora-sandbox Active 5d5h\n
"},{"location":"tutorials/tenant/create-sandbox.html#creating-private-sandboxes","title":"Creating Private Sandboxes","text":"To address privacy concerns where users require their sandbox namespaces to be visible only to themselves, Bill can set the sandboxes.private: true
in the Tenant CR:
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n sandboxes:\n enabled: true\n private: true\nEOF\n
With private: true
, each sandbox namespace is accessible and visible only to its designated user, enhancing privacy and security.
With the above configuration, Anna
and Anthony
will now have new sandboxes created
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\nbluesky-anthony-aurora-sandbox Active 5d5h\nbluesky-john-aurora-sandbox Active 5d5h\n
However, from the perspective of Anna
, only their sandbox will be visible
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\n
"},{"location":"tutorials/tenant/create-sandbox.html#applying-metadata-to-sandbox-namespaces","title":"Applying Metadata to Sandbox Namespaces","text":"For uniformity or to apply specific policies, Bill might need to add common metadata, such as labels or annotations, to all sandbox namespaces. This is achievable through the namespaces.metadata.sandbox
configuration:
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n sandboxes:\n enabled: true\n private: true\n metadata:\n sandbox:\n labels:\n app.kubernetes.io/part-of: che.eclipse.org\n annotations:\n che.eclipse.org/username: \"{{ TENANT.USERNAME }}\"\nEOF\n
The templated annotation \"{{ TENANT.USERNAME }}\" dynamically inserts the username of the sandbox owner, personalizing the sandbox environment. This capability is particularly useful for integrating with other systems or applications that might utilize this metadata for configuration or access control.
Through the examples demonstrated, Bill can efficiently manage sandbox namespaces for tenant users, ensuring they have the necessary resources for development and testing while maintaining privacy and organizational policies.
"},{"location":"tutorials/tenant/create-tenant.html","title":"Creating a Tenant","text":"Bill, a cluster admin, has been tasked by the CTO of Nordmart to set up a new tenant for Anna's team. Following the request, Bill proceeds to create a new tenant named bluesky in the Kubernetes cluster.
"},{"location":"tutorials/tenant/create-tenant.html#setting-up-the-tenant","title":"Setting Up the Tenant","text":"To establish the tenant, Bill crafts a Tenant Custom Resource (CR) with the necessary specifications:
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n sandboxes:\n enabled: false\nEOF\n
In this configuration, Bill specifies anna@aurora.org as the owner, giving her full administrative rights over the tenant. The editor role is assigned to john@aurora.org and the group alpha, providing them with editing capabilities within the tenant's scope.
"},{"location":"tutorials/tenant/create-tenant.html#verifying-the-tenant-creation","title":"Verifying the Tenant Creation","text":"After creating the tenant, Bill checks its status to confirm it's active and operational:
kubectl get tenants.tenantoperator.stakater.com bluesky\nNAME STATE AGE\nbluesky Active 3m\n
This output indicates that the tenant bluesky is successfully created and in an active state.
"},{"location":"tutorials/tenant/create-tenant.html#checking-user-permissions","title":"Checking User Permissions","text":"To ensure the roles and permissions are correctly assigned, Anna logs into the cluster to verify her capabilities:
Namespace Creation:
kubectl auth can-i create namespaces\nyes\n
Anna is confirmed to have the ability to create namespaces within the tenant's scope.
Cluster Resources Access:
kubectl auth can-i get namespaces\nno\n\nkubectl auth can-i get persistentvolumes\nno\n
As expected, Anna does not have access to broader cluster resources outside the tenant's confines.
Tenant Resource Access:
kubectl auth can-i get tenants.tenantoperator.stakater.com\nno\n
Access to the Tenant resource itself is also restricted, aligning with the principle of least privilege.
"},{"location":"tutorials/tenant/create-tenant.html#adding-multiple-owners-to-a-tenant","title":"Adding Multiple Owners to a Tenant","text":"Later, if there's a need to grant administrative privileges to another user, such as Anthony, Bill can easily update the tenant's configuration to include multiple owners:
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n sandboxes:\n enabled: false\nEOF\n
With this update, both Anna and Anthony can administer the tenant bluesky, including the creation of namespaces:
kubectl auth can-i create namespaces\nyes\n
This flexible approach allows Bill to manage tenant access control efficiently, ensuring that the team's operational needs are met while maintaining security and governance standards.
"},{"location":"tutorials/tenant/creating-namespaces.html","title":"Creating Namespaces through Tenant Custom Resource","text":"Bill, tasked with structuring namespaces for different environments within a tenant, utilizes the Tenant Custom Resource (CR) to streamline this process efficiently. Here's how Bill can orchestrate the creation of dev
, build
, and production
environments for the tenant members directly through the Tenant CR.
To facilitate the environment setup, Bill decides to categorize the namespaces based on their association with the tenant's name. He opts to use the namespaces.withTenantPrefix
field for namespaces that should carry the tenant name as a prefix, enhancing clarity and organization. For namespaces that do not require a tenant name prefix, Bill employs the namespaces.withoutTenantPrefix
field.
Here's how Bill configures the Tenant CR to create these namespaces:
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n withTenantPrefix:\n - dev\n - build\n withoutTenantPrefix:\n - prod\nEOF\n
This configuration ensures the creation of the desired namespaces, directly correlating them with the bluesky tenant.
Upon applying the above configuration, Bill and the tenant members observe the creation of the following namespaces:
kubectl get namespaces\nNAME STATUS AGE\nbluesky-dev Active 5m\nbluesky-build Active 5m\nprod Active 5m\n
Anna, as a tenant owner, gains the capability to further customize or create new namespaces within her tenant's scope. For example, creating a bluesky-production namespace with the necessary tenant label:
apiVersion: v1\nkind: Namespace\nmetadata:\n name: bluesky-production\n labels:\n stakater.com/tenant: bluesky\n
\u26a0\ufe0f It's crucial for Anna to include the tenant label stakater.com/tenant: bluesky
to ensure the namespace is recognized as part of the bluesky tenant. Failure to do so, or if Anna is not associated with the bluesky tenant, will result in Multi Tenant Operator (MTO) denying the namespace creation.
Following the creation, the MTO dynamically assigns roles to Anna and other tenant members according to their designated user types, ensuring proper access control and operational capabilities within these namespaces.
"},{"location":"tutorials/tenant/creating-namespaces.html#incorporating-existing-namespaces-into-the-tenant-via-argocd","title":"Incorporating Existing Namespaces into the Tenant via ArgoCD","text":"For teams practicing GitOps, existing namespaces can be seamlessly integrated into the Tenant structure by appending the tenant label to the namespace's manifest within the GitOps repository. This approach allows for efficient, automated management of namespace affiliations and access controls, ensuring a cohesive tenant ecosystem.
"},{"location":"tutorials/tenant/creating-namespaces.html#add-existing-namespaces-to-tenant-via-gitops","title":"Add Existing Namespaces to Tenant via GitOps","text":"Using GitOps as your preferred development workflow, you can add existing namespaces for your tenants by including the tenant label.
To add an existing namespace to your tenant via GitOps:
To disassociate or remove namespaces from the cluster through GitOps, the namespace configuration should be eliminated from the GitOps repository. Additionally, detaching the namespace from any ArgoCD-managed applications by removing the app.kubernetes.io/instance
label ensures a clean removal without residual dependencies.
Synchronizing the repository post-removal finalizes the deletion process, effectively managing the lifecycle of namespaces within a tenant-operated Kubernetes environment.
"},{"location":"tutorials/tenant/deleting-tenant.html","title":"Deleting a Tenant While Preserving Resources","text":"When managing tenant lifecycles within Kubernetes, certain scenarios require the deletion of a tenant without removing associated namespaces or ArgoCD AppProjects. This ensures that resources and configurations tied to the tenant remain intact for archival or transition purposes.
"},{"location":"tutorials/tenant/deleting-tenant.html#configuration-for-retaining-resources","title":"Configuration for Retaining Resources","text":"Bill decides to decommission the bluesky tenant but needs to preserve all related namespaces for continuity. To achieve this, he adjusts the Tenant Custom Resource (CR) to prevent the automatic cleanup of these resources upon tenant deletion.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n namespaces:\n sandboxes:\n enabled: true\n withTenantPrefix:\n - dev\n - build\n - prod\n onDeletePurgeNamespaces: false\nEOF\n
With the onDeletePurgeNamespaces
fields set to false, Bill ensures that the deletion of the bluesky tenant does not trigger the removal of its namespaces. This setup is crucial for cases where the retention of environment setups and deployments is necessary post-tenant deletion.
It's important to note the default behavior of the Tenant Operator regarding resource cleanup:
Namespaces: By default, onDeletePurgeNamespaces
is set to false, implying that namespaces are not automatically deleted with the tenant unless explicitly configured.
Once the Tenant CR is configured as desired, Bill can proceed to delete the bluesky tenant:
kubectl delete tenant bluesky\n
This command removes the tenant resource from the cluster while leaving the specified namespaces untouched, adhering to the configured onDeletePurgeNamespaces
policies. This approach provides flexibility in managing the lifecycle of tenant resources, catering to various operational strategies and compliance requirements.
Implementing hibernation for tenants' namespaces efficiently manages cluster resources by temporarily reducing workload activities during off-peak hours. This guide demonstrates how to configure hibernation schedules for tenant namespaces, leveraging Tenant and ResourceSupervisor for precise control.
"},{"location":"tutorials/tenant/tenant-hibernation.html#configuring-hibernation-for-tenant-namespaces","title":"Configuring Hibernation for Tenant Namespaces","text":"You can manage workloads in your cluster with MTO by implementing a hibernation schedule for your tenants. Hibernation downsizes the running Deployments and StatefulSets in a tenant\u2019s namespace according to a defined cron schedule. You can set a hibernation schedule for your tenants by adding the \u2018spec.hibernation\u2019 field to the tenant's respective Custom Resource.
hibernation:\n sleepSchedule: 23 * * * *\n wakeSchedule: 26 * * * *\n
spec.hibernation.sleepSchedule
accepts a cron expression indicating the time to put the workloads in your tenant\u2019s namespaces to sleep.
spec.hibernation.wakeSchedule
accepts a cron expression indicating the time to wake the workloads in your tenant\u2019s namespaces up.
Note
Both sleep and wake schedules must be specified for your Hibernation schedule to be valid.
Additionally, adding the hibernation.stakater.com/exclude: 'true'
annotation to a namespace excludes it from hibernating.
Note
This is only true for hibernation applied via the Tenant Custom Resource, and does not apply for hibernation done by manually creating a ResourceSupervisor (details about that below).
Note
This will not wake up an already sleeping namespace before the wake schedule.
"},{"location":"tutorials/tenant/tenant-hibernation.html#resource-supervisor","title":"Resource Supervisor","text":"Adding a Hibernation Schedule to a Tenant creates an accompanying ResourceSupervisor Custom Resource. The Resource Supervisor stores the Hibernation schedules and manages the current and previous states of all the applications, whether they are sleeping or awake.
When the sleep timer is activated, the Resource Supervisor controller stores the details of your applications (including the number of replicas, configurations, etc.) in the applications' namespaces and then puts your applications to sleep. When the wake timer is activated, the controller wakes up the applications using their stored details.
Enabling ArgoCD support for Tenants will also hibernate applications in the tenants' appProjects
.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: sigma\nspec:\n argocd:\n appProjects:\n - sigma\n namespace: openshift-gitops\n hibernation:\n sleepSchedule: 42 * * * *\n wakeSchedule: 45 * * * *\n namespaces:\n - tenant-ns1\n - tenant-ns2\n
Currently, Hibernation is available only for StatefulSets and Deployments.
"},{"location":"tutorials/tenant/tenant-hibernation.html#manual-creation-of-resourcesupervisor","title":"Manual creation of ResourceSupervisor","text":"Hibernation can also be applied by creating a ResourceSupervisor resource manually. The ResourceSupervisor definition will contain the hibernation cron schedule, the names of the namespaces to be hibernated, and the names of the ArgoCD AppProjects whose ArgoCD Applications have to be hibernated (as per the given schedule).
This method can be used to hibernate:
As an example, the following ResourceSupervisor could be created manually, to apply hibernation explicitly to the 'ns1' and 'ns2' namespaces, and to the 'sample-app-project' AppProject.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: hibernator\nspec:\n argocd:\n appProjects:\n - sample-app-project\n namespace: openshift-gitops\n hibernation:\n sleepSchedule: 42 * * * *\n wakeSchedule: 45 * * * *\n namespaces:\n - ns1\n - ns2\n
"},{"location":"tutorials/tenant/tenant-hibernation.html#freeing-up-unused-resources-with-hibernation","title":"Freeing up unused resources with hibernation","text":""},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-a-tenant_1","title":"Hibernating a tenant","text":"Bill is a cluster administrator who wants to free up unused cluster resources at nighttime, in an effort to reduce costs (when the cluster isn't being used).
First, Bill creates a tenant with the hibernation
schedules mentioned in the spec, or adds the hibernation field to an existing tenant:
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: sigma\nspec:\n hibernation:\n sleepSchedule: \"0 20 * * 1-5\" # Sleep at 8 PM on weekdays\n wakeSchedule: \"0 8 * * 1-5\" # Wake at 8 AM on weekdays\n owners:\n users:\n - user@example.com\n quota: medium\n namespaces:\n withoutTenantPrefix:\n - dev\n - stage\n - build\n
The schedules above will put all the Deployments
and StatefulSets
within the tenant's namespaces to sleep, by reducing their pod count to 0 at 8 PM every weekday. At 8 AM on weekdays, the namespaces will then wake up by restoring their applications' previous pod counts.
Bill can verify this behaviour by checking the newly created ResourceSupervisor resource at run time:
oc get ResourceSupervisor -A\nNAME AGE\nsigma 5m\n
The ResourceSupervisor will look like this at 'running' time (as per the schedule):
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - build\n - stage\n - dev\nstatus:\n currentStatus: running\n nextReconcileTime: '2022-10-12T20:00:00Z'\n
The ResourceSupervisor will look like this at 'sleeping' time (as per the schedule):
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - build\n - stage\n - dev\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: build\n kind: Deployment\n name: example\n replicas: 3\n - Namespace: stage\n kind: Deployment\n name: example\n replicas: 3\n
Bill wants to prevent the build
namespace from going to sleep, so he can add the hibernation.stakater.com/exclude: 'true'
annotation to it. The ResourceSupervisor will now look like this after reconciling:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - stage\n - dev\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: stage\n kind: Deployment\n name: example\n replicas: 3\n
"},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-namespaces-andor-argocd-applications-with-resourcesupervisor","title":"Hibernating namespaces and/or ArgoCD Applications with ResourceSupervisor","text":"Bill, the cluster administrator, wants to hibernate a collection of namespaces and AppProjects belonging to multiple different tenants. He can do so by creating a ResourceSupervisor manually, specifying the hibernation schedule in its spec, the namespaces and ArgoCD Applications that need to be hibernated as per the mentioned schedule. Bill can also use the same method to hibernate some namespaces and ArgoCD Applications that do not belong to any tenant on his cluster.
The example given below will hibernate the ArgoCD Applications in the 'test-app-project' AppProject; and it will also hibernate the 'ns2' and 'ns4' namespaces.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: test-resource-supervisor\nspec:\n argocd:\n appProjects:\n - test-app-project\n namespace: argocd-ns\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - ns2\n - ns4\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: ns2\n kind: Deployment\n name: test-deployment\n replicas: 3\n
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"Introduction","text":"Kubernetes is designed to support a single tenant platform; OpenShift brings some improvements with its \"Secure by default\" concepts but it is still very complex to design and orchestrate all the moving parts involved in building a secure multi-tenant platform hence making it difficult for cluster admins to host multi-tenancy in a single OpenShift cluster. If multi-tenancy is achieved by sharing a cluster, it can have many advantages, e.g. efficient resource utilization, less configuration effort and easier sharing of cluster-internal resources among different tenants. OpenShift and all managed applications provide enough primitive resources to achieve multi-tenancy, but it requires professional skills and deep knowledge of OpenShift.
This is where Multi Tenant Operator (MTO) comes in and provides easy to manage/configure multi-tenancy. MTO provides wrappers around OpenShift resources to provide a higher level of abstraction to users. With MTO admins can configure Network and Security Policies, Resource Quotas, Limit Ranges, RBAC for every tenant, which are automatically inherited by all the namespaces and users in the tenant. Depending on the user's role, they are free to operate within their tenants in complete autonomy. MTO supports initializing new tenants using GitOps management pattern. Changes can be managed via PRs just like a typical GitOps workflow, so tenants can request changes, add new users, or remove users.
The idea of MTO is to use namespaces as independent sandboxes, where tenant applications can run independently of each other. Cluster admins shall configure MTO's custom resources, which then become a self-service system for tenants. This minimizes the efforts of the cluster admins.
MTO enables cluster admins to host multiple tenants in a single OpenShift Cluster, i.e.:
MTO is also OpenShift certified
"},{"location":"index.html#features","title":"Features","text":"The major features of Multi Tenant Operator (MTO) are described below.
"},{"location":"index.html#kubernetes-multitenancy","title":"Kubernetes Multitenancy","text":"RBAC is one of the most complicated and error-prone parts of Kubernetes. With Multi Tenant Operator, you can rest assured that RBAC is configured with the \"least privilege\" mindset and all rules are kept up-to-date with zero manual effort.
Multi Tenant Operator binds existing ClusterRoles to the Tenant's Namespaces used for managing access to the Namespaces and the resources they contain. You can also modify the default roles or create new roles to have full control and customize access control for your users and teams.
Multi Tenant Operator is also able to leverage existing OpenShift groups or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system.
"},{"location":"index.html#hashicorp-vault-multitenancy","title":"HashiCorp Vault Multitenancy","text":"Multi Tenant Operator extends the tenants permission model to HashiCorp Vault where it can create Vault paths and greatly ease the overhead of managing RBAC in Vault. Tenant users can manage their own secrets without the concern of someone else having access to their Vault paths.
More details on Vault Multitenancy
"},{"location":"index.html#argocd-multitenancy","title":"ArgoCD Multitenancy","text":"Multi Tenant Operator is not only providing strong Multi Tenancy for the OpenShift internals but also extends the tenants permission model to ArgoCD were it can provision AppProjects and Allowed Repositories for your tenants greatly ease the overhead of managing RBAC in ArgoCD.
More details on ArgoCD Multitenancy
"},{"location":"index.html#resource-management","title":"Resource Management","text":"Multi Tenant Operator provides a mechanism for defining Resource Quotas at the tenant scope, meaning all namespaces belonging to a particular tenant share the defined quota, which is why you are able to safely enable dev teams to self serve their namespaces whilst being confident that they can only use the resources allocated based on budget and business needs.
More details on Quota
"},{"location":"index.html#templates-and-template-distribution","title":"Templates and Template distribution","text":"Multi Tenant Operator allows admins/users to define templates for namespaces, so that others can instantiate these templates to provision namespaces with batteries loaded. A template could pre-populate a namespace for certain use cases or with basic tooling required. Templates allow you to define Kubernetes manifests, Helm chart and more to be applied when the template is used to create a namespace.
It also allows the parameterizing of these templates for flexibility and ease of use. It also provides the option to enforce the presence of templates in one tenant's or all the tenants' namespaces for configuring secure defaults.
Common use cases for namespace templates may be:
More details on Distributing Template Resources
"},{"location":"index.html#mto-console","title":"MTO Console","text":"Multi Tenant Operator Console is a comprehensive user interface designed for both administrators and tenant users to manage multi-tenant environments. The MTO Console simplifies the complexity involved in handling various aspects of tenants and their related resources. It serves as a centralized monitoring hub, offering insights into the current state of tenants, namespaces, templates and quotas. It is designed to provide a quick summary/snapshot of MTO's status and facilitates easier interaction with various resources such as tenants, namespaces, templates, and quotas.
More details on Console
"},{"location":"index.html#showback","title":"Showback","text":"The showback functionality in Multi Tenant Operator (MTO) Console is a significant feature designed to enhance the management of resources and costs in multi-tenant Kubernetes environments. This feature focuses on accurately tracking the usage of resources by each tenant, and/or namespace, enabling organizations to monitor and optimize their expenditures. Furthermore, this functionality supports financial planning and budgeting by offering a clear view of operational costs associated with each tenant. This can be particularly beneficial for organizations that chargeback internal departments or external clients based on resource usage, ensuring that billing is fair and reflective of actual consumption.
More details on Showback
"},{"location":"index.html#hibernation","title":"Hibernation","text":"Multi Tenant Operator can downscale Deployments and StatefulSets in a tenant's Namespace according to a defined sleep schedule. The Deployments and StatefulSets are brought back to their required replicas according to the provided wake schedule.
More details on Hibernation
"},{"location":"index.html#mattermost-multitenancy","title":"Mattermost Multitenancy","text":"Multi Tenant Operator can manage Mattermost to create Teams for tenant users. All tenant users get a unique team and a list of predefined channels gets created. When a user is removed from the tenant, the user is also removed from the Mattermost team corresponding to tenant.
More details on Mattermost
"},{"location":"index.html#remote-development-namespaces","title":"Remote Development Namespaces","text":"Multi Tenant Operator can be configured to automatically provision a namespace in the cluster for every member of the specific tenant, that will also be preloaded with any selected templates and consume the same pool of resources from the tenants quota creating safe remote dev namespaces that teams can use as scratch namespace for rapid prototyping and development. So, every developer gets a Kubernetes-based cloud development environment that feel like working on localhost.
More details on Sandboxes
"},{"location":"index.html#cross-namespace-resource-distribution","title":"Cross Namespace Resource Distribution","text":"Multi Tenant Operator supports cloning of secrets and configmaps from one namespace to another namespace based on label selectors. It uses templates to enable users to provide reference to secrets and configmaps. It uses a template group instance to distribute those secrets and namespaces in matching namespaces, even if namespaces belong to different tenants. If template instance is used then the resources will only be mapped if namespaces belong to same tenant.
More details on Copying Secrets and ConfigMaps
"},{"location":"index.html#self-service","title":"Self-Service","text":"With Multi Tenant Operator, you can empower your users to safely provision namespaces for themselves and their teams (typically mapped to SSO groups). Team-owned namespaces and the resources inside them count towards the team's quotas rather than the user's individual limits and are automatically shared with all team members according to the access rules you configure in Multi Tenant Operator.
Also, by leveraging Multi Tenant Operator's templating mechanism, namespaces can be provisioned and automatically pre-populated with any kind of resource or multiple resources such as network policies, docker pull secrets or even Helm charts etc
"},{"location":"index.html#everything-as-codegitops-ready","title":"Everything as Code/GitOps Ready","text":"Multi Tenant Operator is designed and built to be 100% OpenShift-native and to be configured and managed the same familiar way as native OpenShift resources so is perfect for modern shops that are dedicated to GitOps as it is fully configurable using Custom Resources.
"},{"location":"index.html#preventing-clusters-sprawl","title":"Preventing Clusters Sprawl","text":"As companies look to further harness the power of cloud-native, they are adopting container technologies at rapid speed, increasing the number of clusters and workloads. As the number of Kubernetes clusters grows, this is an increasing work for the Ops team. When it comes to patching security issues or upgrading clusters, teams are doing five times the amount of work.
With Multi Tenant Operator teams can share a single cluster with multiple teams, groups of users, or departments by saving operational and management efforts. This prevents you from Kubernetes cluster sprawl.
"},{"location":"index.html#native-experience","title":"Native Experience","text":"Multi Tenant Operator provides multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries.
"},{"location":"changelog.html","title":"Changelog","text":""},{"location":"changelog.html#v012x","title":"v0.12.x","text":""},{"location":"changelog.html#v01219","title":"v0.12.19","text":""},{"location":"changelog.html#fix","title":"Fix","text":"onDeletePurgeAppProject
field value.kubernetes
authentication method.TemplateGroupInstance
controller now correctly updates the TemplateGroupInstance
custom resource status and the namespace count upon the deletion of a namespace.TemplateGroupInstance
controller and kube-contoller-manager
over mentioning of secret names in secrets
or imagePullSecrets
field in ServiceAccounts
has been fixed by temporarily ignoring updates to or from ServiceAccounts
.IntegrationConfig
have now access over all types of namespaces. Previously operations were denied on orphaned namespaces (the namespaces which are not part of both privileged and tenant scope). More info in Troubleshooting GuideTemplateGroupInstance
controller now ensures that its underlying resources are force-synced when a namespace is created or deleted.TemplateGroupInstance
reconcile flow has been refined to process only the namespace for which the event was received, streamlining resource creation/deletion and improving overall efficiency.mto-admin
user for Console.resourceVersion
and UID when converting oldObject
to newObject
. This prevents problems when the object is edited by another controller.kube:admin
is now bypassed by default to perform operations, earlier kube:admin
needed to be mentioned in respective tenants to give it access over namespaces.spec.quota
, if quota.tenantoperator.stakater.com/is-default: \"true\"
annotation is presentMore information about TemplateGroupInstance's sync at Sync Resources Deployed by TemplateGroupInstance
"},{"location":"changelog.html#v092","title":"v0.9.2","text":"feat: Add tenant webhook for spec validation
fix: TemplateGroupInstance now cleans up leftover Template resources from namespaces that are no longer part of TGI namespace selector
fix: Fixed hibernation sync issue
enhance: Update tenant spec for applying common/specific namespace labels/annotations. For more details check out commonMetadata & SpecificMetadata
enhance: Add support for multi-pod architecture for Operator-Hub
chore: Remove conversion webhook for Quota and Tenant
privilegedNamespaces
regexgroup-{Template.Name}
)\u26a0\ufe0f Known Issues
caBundle
field in validation webhooks is not being populated for newly added webhooks. A temporary fix is to edit the validation webhook configuration manifest without the caBundle
field added in any webhook, so OpenShift can add it to all fields simultaneouslyValidatingWebhookConfiguration
multi-tenant-operator-validating-webhook-configuration
by removing all the caBundle
fields of all webhookscaBundle
fields have been populated\u26a0\ufe0f ApiVersion v1alpha1
of Tenant and Quota custom resources has been deprecated and is scheduled to be removed in the future. The following links contain the updated structure of both resources
destinationNamespaces
created by Multi Tenant Operatorkube-RBAC-proxy
Last revision date: 12 December 2022
IMPORTANT: THIS SOFTWARE END-USER LICENSE AGREEMENT (\"EULA\") IS A LEGAL AGREEMENT (\"Agreement\") BETWEEN YOU (THE CUSTOMER, EITHER AS AN INDIVIDUAL OR, IF PURCHASED OR OTHERWISE ACQUIRED BY OR FOR AN ENTITY, AS AN ENTITY) AND Stakater AB OR ITS SUBSIDUARY (\"COMPANY\"). READ IT CAREFULLY BEFORE COMPLETING THE INSTALLATION PROCESS AND USING MULTI TENANT OPERATOR (\"SOFTWARE\"). IT PROVIDES A LICENSE TO USE THE SOFTWARE AND CONTAINS WARRANTY INFORMATION AND LIABILITY DISCLAIMERS. BY INSTALLING AND USING THE SOFTWARE, YOU ARE CONFIRMING YOUR ACCEPTANCE OF THE SOFTWARE AND AGREEING TO BECOME BOUND BY THE TERMS OF THIS AGREEMENT.
In order to use the Software under this Agreement, you must receive a license key at the time of purchase, in accordance with the scope of use and other terms specified and as set forth in Section 1 of this Agreement.
"},{"location":"eula.html#1-license-grant","title":"1. License Grant","text":"1.1 General Use. This Agreement grants you a non-exclusive, non-transferable, limited license to the use rights for the Software, subject to the terms and conditions in this Agreement. The Software is licensed, not sold.
1.2 Electronic Delivery. All Software and license documentation shall be delivered by electronic means unless otherwise specified on the applicable invoice or at the time of purchase. Software shall be deemed delivered when it is made available for download for you by the Company (\"Delivery\").
2.1 No Modifications may be created of the original Software. \"Modification\" means:
(a) Any addition to or deletion from the contents of a file included in the original Software
(b) Any new file that contains any part of the original Software
3.1 You shall not (and shall not allow any third party to):
(a) reverse engineer the Software or attempt to reconstruct or discover any source code, underlying ideas, algorithms, file formats or programming interfaces of the Software by any means whatsoever (except and only to the extent that applicable law prohibits or restricts reverse engineering restrictions);
(b) distribute, sell, sub-license, rent, lease or use the Software for time sharing, hosting, service provider or like purposes, except as expressly permitted under this Agreement;
(c) redistribute the Software;
(d) remove any product identification, proprietary, copyright or other notices contained in the Software;
(e) modify any part of the Software, create a derivative work of any part of the Software (except as permitted in Section 4), or incorporate the Software, except to the extent expressly authorized in writing by the Company;
(f) publicly disseminate performance information or analysis (including, without limitation, benchmarks) from any source relating to the Software;
(g) utilize any equipment, device, software, or other means designed to circumvent or remove any form of Source URL or copy protection used by the Company in connection with the Software, or use the Software together with any authorization code, Source URL, serial number, or other copy protection device not supplied by the Company;
(h) use the Software to develop a product which is competitive with any of the Company's product offerings;
(i) use unauthorized Source URLs or license key(s) or distribute or publish Source URLs or license key(s), except as may be expressly permitted by the Company in writing. If your unique license is ever published, the Company reserves the right to terminate your access without notice.
3.2 Under no circumstances may you use the Software as part of a product or service that provides similar functionality to the Software itself.
7.1 The Software is provided \"as is\", with all faults, defects and errors, and without warranty of any kind. The Company does not warrant that the Software will be free of bugs, errors, or other defects, and the Company shall have no liability of any kind for the use of or inability to use the Software, the Software content or any associated service, and you acknowledge that it is not technically practicable for the Company to do so.
7.2 To the maximum extent permitted by applicable law, the Company disclaims all warranties, express, implied, arising by law or otherwise, regarding the Software, the Software content and their respective performance or suitability for your intended use, including without limitation any implied warranty of merchantability, fitness for a particular purpose.
8.1 In no event will the Company be liable for any direct, indirect, consequential, incidental, special, exemplary, or punitive damages or liabilities whatsoever arising from or relating to the Software, the Software content or this Agreement, whether based on contract, tort (including negligence), strict liability or other theory, even if the Company has been advised of the possibility of such damages.
8.2 In no event will the Company's liability exceed the Software license price as indicated in the invoice. The existence of more than one claim will not enlarge or extend this limit.
9.1 Your exclusive remedy and the Company's entire liability for breach of this Agreement shall be limited, at the Company's sole and exclusive discretion, to:
(a) replacement of any defective software or documentation; or
(b) refund of the license fee paid to the Company
10.1 Consent to the Use of Data. You agree that the Company and its affiliates may collect and use technical information gathered as part of the product support services. The Company may use this information solely to improve products and services and will not disclose this information in a form that personally identifies individuals or organizations.
10.2 Government End Users. If the Software and related documentation are supplied to or purchased by or on behalf of a Government, then the Software is deemed to be \"commercial software\" as that term is used in the acquisition regulation system.
11.1 Examples included in Software may provide links to third party libraries or code (collectively \"Third Party Software\") to implement various functions. Third Party Software does not comprise part of the Software. In some cases, access to Third Party Software may be included along with the Software delivery as a convenience for demonstration purposes. Licensee acknowledges:
(1) That some part of Third Party Software may require additional licensing of copyright and patents from the owners of such, and
(2) That distribution of any of the Software referencing or including any portion of a Third Party Software may require appropriate licensing from such third parties
12.1 Entire Agreement. This Agreement sets forth our entire agreement with respect to the Software and the subject matter hereof and supersedes all prior and contemporaneous understandings and agreements whether written or oral.
12.2 Amendment. The Company reserves the right, in its sole discretion, to amend this Agreement from time to time. Amendments are managed as described in General Provisions.
12.3 Assignment. You may not assign this Agreement or any of its rights under this Agreement without the prior written consent of The Company and any attempted assignment without such consent shall be void.
12.4 Export Compliance. You agree to comply with all applicable laws and regulations, including laws, regulations, orders or other restrictions on export, re-export or redistribution of software.
12.5 Indemnification. You agree to defend, indemnify, and hold harmless the Company from and against any lawsuits, claims, losses, damages, fines and expenses (including attorneys' fees and costs) arising out of your use of the Software or breach of this Agreement.
12.6 Attorneys' Fees and Costs. The prevailing party in any action to enforce this Agreement will be entitled to recover its attorneys' fees and costs in connection with such action.
12.7 Severability. If any provision of this Agreement is held by a court of competent jurisdiction to be invalid, illegal, or unenforceable, the remainder of this Agreement will remain in full force and effect.
12.8 Waiver. Failure or neglect by either party to enforce at any time any of the provisions of this license Agreement shall not be construed or deemed to be a waiver of that party's rights under this Agreement.
12.9 Audit. The Company may, at its expense, appoint its own personnel or an independent third party to audit the numbers of installations of the Software in use by you. Any such audit shall be conducted upon thirty (30) days prior notice, during regular business hours and shall not unreasonably interfere with your business activities.
12.10 Headings. The headings of sections and paragraphs of this Agreement are for convenience of reference only and are not intended to restrict, affect or be of any weight in the interpretation or construction of the provisions of such sections or paragraphs.
sales@stakater.com
.If operator upgrade is set to Automatic Approval on OperatorHub, there may be scenarios where it gets blocked.
"},{"location":"troubleshooting.html#resolution","title":"Resolution","text":"Information
If upgrade approval is set to manual and you want to skip the upgrade of a specific version, delete the InstallPlan created for that specific version. Operator Lifecycle Manager (OLM) will then create the latest available InstallPlan, which can be approved.\n
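A hedged example of skipping a version this way (the namespace and InstallPlan name are placeholders; use the ones from your installation):
kubectl get installplans -n <operator-namespace>\nkubectl delete installplan <install-plan-name> -n <operator-namespace>\n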
As OLM does not allow upgrading or downgrading from a version that is stuck because of an error, the only possible fix is to uninstall the operator from the cluster. When the operator is uninstalled, it removes all of its resources (ClusterRoles, ClusterRoleBindings, Deployments, etc.) except Custom Resource Definitions (CRDs), so none of the Custom Resources (CRs), such as Tenants and Templates, will be removed from the cluster. If any CRD has a conversion webhook defined, then that webhook should be removed before installing the stable version of the operator. This can be achieved by removing the .spec.conversion
block from the CRD schema.
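For example, a minimal sketch of removing the conversion block with a JSON patch (the CRD name tenants.tenantoperator.stakater.com is an assumption here; target whichever CRD still carries the webhook):
kubectl patch crd tenants.tenantoperator.stakater.com --type=json -p='[{\"op\": \"remove\", \"path\": \"/spec/conversion\"}]'\n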
As an example, if you have installed v0.8.0 of Multi Tenant Operator on your cluster, it will get stuck with the error error validating existing CRs against new CRD's schema for \"integrationconfigs.tenantoperator.stakater.com\": error validating custom resource against new schema for IntegrationConfig multi-tenant-operator/tenant-operator-config: [].spec.tenantRoles: Required value
. To resolve this issue, first uninstall MTO from the cluster. Once MTO is uninstalled, check the Tenant CRD, which will have a conversion block that needs to be removed. After removing the conversion block from the Tenant CRD, install the latest available version of MTO from OperatorHub.
If a user is added to a tenant resource, and the user does not exist in RHSSO, then RHSSO is not updated with the user's Vault permissions.
"},{"location":"troubleshooting.html#reproduction-steps","title":"Reproduction steps","text":"If the user does not exist in RHSSO, then MTO does not create the tenant access for Vault in RHSSO.
The user now needs to go to Vault, and sign up using OIDC. Then the user needs to wait for MTO to reconcile the updated tenant (reconciliation period is currently 1 hour). After reconciliation, MTO will add relevant access for the user in RHSSO.
If the user needs to be added immediately and it is not feasible to wait for the next MTO reconciliation, then either add a label or annotation to the user, or restart the Tenant controller pod to force immediate reconciliation.
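For instance, a quick way to force reconciliation is to restart the controller pod (assuming MTO is installed in the multi-tenant-operator namespace; the label selector is a placeholder):
kubectl -n multi-tenant-operator delete pod -l <tenant-controller-pod-label>\n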
"},{"location":"troubleshooting.html#pod-creation-error","title":"Pod Creation Error","text":""},{"location":"troubleshooting.html#q-errors-in-replicaset-events-about-pods-not-being-able-to-schedule-on-openshift-because-scc-annotation-is-not-found","title":"Q. Errors in ReplicaSet Events about pods not being able to schedule on OpenShift because scc annotation is not found","text":"unable to find annotation openshift.io/sa.scc.uid-range\n
Answer. OpenShift recently updated its process of handling SCC, and it's now managed by annotations like openshift.io/sa.scc.uid-range
on the namespaces. Without them, pods won't be scheduled. The fix for the above error is to make sure the ServiceAccount system:serviceaccount:openshift-infra.
regex is always mentioned in Privileged.serviceAccounts
section of IntegrationConfig
. This regex will allow operations from all ServiceAccounts
present in openshift-infra
namespace. More info at Privileged Service Accounts
Cannot CREATE namespace test-john without label stakater.com/tenant\n
Answer. This error occurs when a user tries to perform a create, update, or delete action on a namespace without the required stakater.com/tenant
label. This label is used by the operator to verify that only authorized users can perform actions on the namespace. Just add the label with the tenant name so that MTO knows which tenant the namespace belongs to and who is authorized to perform create/update/delete operations, as shown below. For more details please refer to Namespace use-case.
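For example, a namespace manifest carrying the required label might look like this (the tenant name bluesky is only illustrative):
kubectl create -f - << EOF\napiVersion: v1\nkind: Namespace\nmetadata:\n name: test-john\n labels:\n stakater.com/tenant: bluesky\nEOF\n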
Cannot CREATE namespace testing without label stakater.com/tenant. User: system:serviceaccount:openshift-apiserver:openshift-apiserver-sa\n
Answer. This error occurs because Tenant members are not allowed to perform operations on OpenShift Projects. Whenever an operation is performed on a project, openshift-apiserver-sa
makes the same request against the underlying namespace. That's why the user sees the openshift-apiserver-sa
Service Account instead of its own user in the error message.
The fix is to try the same operation on the namespace manifest instead.
"},{"location":"troubleshooting.html#q-error-received-while-doing-kubectl-apply-f-namespaceyaml","title":"Q. Error received while doingkubectl apply -f namespace.yaml
","text":"Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"/v1, Resource=namespaces\", GroupVersionKind: \"/v1, Kind=Namespace\"\nName: \"ns1\", Namespace: \"\"\nfrom server for: \"namespace.yaml\": namespaces \"ns1\" is forbidden: User \"muneeb\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"ns1\"\n
Answer. Tenant members will not be able to use kubectl apply
because apply
first gets all the instances of that resource, in this case namespaces, and then does the required operation on the selected resource. To maintain tenancy, tenant members do not have access to get or list all namespaces.
The fix is to create namespaces with kubectl create
instead.
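For example:
kubectl create -f namespace.yaml\n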
Answer. Multi-Tenant Operator's ArgoCD Integration allows configuration of which cluster-scoped resources can be deployed, both globally and on a per-tenant basis. For a global allow-list that applies to all tenants, you can add both resource group
and kind
to the IntegrationConfig's spec.integrations.argocd.clusterResourceWhitelist
field. Alternatively, you can set this up on a tenant level by configuring the same details within a Tenant's spec.integrations.argocd.appProject.clusterResourceWhitelist
field. For more details, check out the ArgoCD integration use cases
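As an illustration, a global allow-list entry in the IntegrationConfig could look like the following (the group and kind shown are only examples):
integrations:\n argocd:\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n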
Answer. The above error can occur if the ArgoCD Application is syncing from a source that is not allowed by the referenced AppProject. To solve this, verify that you have referred to the correct project in the given ArgoCD Application, and that the repoURL used for the Application's source is valid. If the error still appears, you can add the URL to the relevant Tenant's spec.integrations.argocd.sourceRepos
array.
mto-showback-*
pods failing in my cluster?","text":"Answer. The mto-showback-*
pods are used to calculate the cost of the resources used by each tenant. These pods are created by the Multi-Tenant Operator and are scheduled to run every 10 minutes. If the pods are failing, it is likely that the operators necessary for cost calculation are not present in the cluster. To solve this, you can navigate to Operators
-> Installed Operators
in the OpenShift console and check if the MTO-OpenCost and MTO-Prometheus operators are installed. If they are in a pending state, you can manually approve them to install them in the cluster.
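To quickly locate the failing pods, something like the following can help:
kubectl get pods -A | grep mto-showback\n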
Extensions in MTO enhance its functionality by allowing integration with external services. Currently, MTO supports integration with ArgoCD, enabling you to synchronize your repositories and configure AppProjects directly through MTO. Future updates will include support for additional integrations.
"},{"location":"crds-api-reference/extensions.html#configuring-argocd-integration","title":"Configuring ArgoCD Integration","text":"Let us take a look at how you can create an Extension CR and integrate ArgoCD with MTO.
Before you create an Extension CR, you need to modify the Integration Config resource and add the ArgoCD configuration.
integrations:\n argocd:\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n namespaceResourceBlacklist:\n - group: ''\n kind: ResourceQuota\n namespace: openshift-operators\n
The above configuration will allow the EnvironmentProvisioner
CRD and blacklist the ResourceQuota
resource. Also note that the namespace
field is mandatory and should be set to the namespace where ArgoCD is deployed.
Every Extension CR is associated with a specific Tenant. Here's an example of an Extension CR that is associated with a Tenant named tenant-sample
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Extensions\nmetadata:\n name: extensions-sample\nspec:\n tenantName: tenant-sample\n argoCD:\n onDeletePurgeAppProject: true\n appProject:\n sourceRepos:\n - \"github.com/stakater/repo\"\n clusterResourceWhitelist:\n - group: \"\"\n kind: \"Pod\"\n namespaceResourceBlacklist:\n - group: \"v1\"\n kind: \"ConfigMap\"\n
The above CR creates an Extension for the Tenant named tenant-sample
with the following configurations:
onDeletePurgeAppProject
: If set to true
, the AppProject will be deleted when the Extension is deleted.sourceRepos
: List of repositories to sync with ArgoCD.appProject
: Configuration for the AppProject.clusterResourceWhitelist
: List of cluster-scoped resources to sync.namespaceResourceBlacklist
: List of namespace-scoped resources to ignore.In the backend, MTO will create an ArgoCD AppProject with the specified configurations.
"},{"location":"crds-api-reference/integration-config.html","title":"Integration Config","text":"IntegrationConfig is used to configure settings of multi-tenancy for Multi Tenant Operator.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n components:\n console: true\n showback: true\n ingress:\n ingressClassName: 'nginx'\n keycloak:\n host: tenant-operator-keycloak.apps.mycluster-ams.abcdef.cloud\n tlsSecretName: tenant-operator-tls\n console:\n host: tenant-operator-console.apps.mycluster-ams.abcdef.cloud\n tlsSecretName: tenant-operator-tls\n gateway:\n host: tenant-operator-gateway.apps.mycluster-ams.abcdef.cloud\n tlsSecretName: tenant-operator-tls\n customPricingModel:\n CPU: \"0.031611\"\n spotCPU: \"0.006655\"\n RAM: \"0.004237\"\n spotRAM: \"0.000892\"\n GPU: \"0.95\"\n storage: \"0.00005479452\"\n zoneNetworkEgress: \"0.01\"\n regionNetworkEgress: \"0.01\"\n internetNetworkEgress: \"0.12\"\n accessControl:\n rbac:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - admin\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n editor:\n clusterRoles:\n - custom-editor\n viewer:\n clusterRoles:\n - custom-view\n namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n groups:\n - cluster-admins\n privileged:\n namespaces:\n - ^default$\n - ^openshift.*\n - ^kube.*\n serviceAccounts:\n - ^system:serviceaccount:openshift.*\n - ^system:serviceaccount:kube.*\n users:\n - ''\n groups:\n - cluster-admins\n metadata:\n groups:\n labels:\n role: customer-reader\n annotations: \n openshift.io/node-selector: node-role.kubernetes.io/worker=\n namespaces:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n sandboxes:\n labels:\n stakater.com/kind: sandbox\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n integrations:\n keycloak:\n realm: mto\n address: https://keycloak.apps.prod.abcdefghi.kubeapp.cloud\n clientName: mto-console\n argocd:\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n namespaceResourceBlacklist:\n - group: '' # all groups\n kind: ResourceQuota\n namespace: openshift-operators\n vault:\n enabled: true\n authMethod: kubernetes #enum: {kubernetes:default, token}\n accessInfo: \n accessorPath: oidc/\n address: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n roleName: mto\n secretRef: \n name: ''\n namespace: ''\n config: \n ssoClient: vault\n
Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator.
"},{"location":"crds-api-reference/integration-config.html#components","title":"Components","text":" components:\n console: true\n showback: true\n ingress:\n ingressClassName: nginx\n keycloak:\n host: tenant-operator-keycloak.apps.mycluster-ams.abcdef.cloud\n tlsSecretName: tenant-operator-tls\n console:\n host: tenant-operator-console.apps.mycluster-ams.abcdef.cloud\n tlsSecretName: tenant-operator-tls\n gateway:\n host: tenant-operator-gateway.apps.mycluster-ams.abcdef.cloud\n tlsSecretName: tenant-operator-tls\n
components.console:
Enables or disables the console GUI for MTO.components.showback:
Enables or disables the showback feature on the console.components.ingress:
Configures the ingress settings for various components:ingressClassName:
Ingress class to be used for the ingress.console:
Settings for the console's ingress.host:
hostname for the console's ingress.tlsSecretName:
Name of the secret containing the TLS certificate and key for the console's ingress.gateway:
Settings for the gateway's ingress.host:
hostname for the gateway's ingress.tlsSecretName:
Name of the secret containing the TLS certificate and key for the gateway's ingress.keycloak:
Settings for the Keycloak's ingress.host:
hostname for the Keycloak's ingress.tlsSecretName:
Name of the secret containing the TLS certificate and key for the Keycloak's ingress.Here's an example of how to generate the secrets required to configure MTO:
TLS Secret for Ingress:
Create a TLS secret containing your SSL/TLS certificate and key for secure communication. This secret will be used for the Console, Gateway, and Keycloak ingresses.
kubectl -n multi-tenant-operator create secret tls <tls-secret-name> --key=<path-to-key.pem> --cert=<path-to-cert.pem>\n
The Integration Config manages the following resources required for the console GUI:
MTO Postgresql
resources.MTO Prometheus
resources.MTO Opencost
resources.MTO Console, Gateway, Keycloak
resources.Showback
cronjob.Details on console GUI and showback can be found here
"},{"location":"crds-api-reference/integration-config.html#access-control","title":"Access Control","text":"accessControl:\n rbac:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - admin\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n editor:\n clusterRoles:\n - custom-editor\n viewer:\n clusterRoles:\n - custom-view\n namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n groups:\n - cluster-admins\n privileged:\n namespaces:\n - ^default$\n - ^openshift.*\n - ^kube.*\n serviceAccounts:\n - ^system:serviceaccount:openshift.*\n - ^system:serviceaccount:kube.*\n users:\n - ''\n groups:\n - cluster-admins\n
"},{"location":"crds-api-reference/integration-config.html#rbac","title":"RBAC","text":"RBAC is used to configure the roles that will be applied to each Tenant namespace. The field allows optional custom roles, that are then used to create RoleBindings for namespaces that match a labelSelector.
"},{"location":"crds-api-reference/integration-config.html#tenantroles","title":"TenantRoles","text":"TenantRoles are required within the IntegrationConfig, as they are used for defining what roles will be applied to each Tenant namespace. The field allows optional custom roles, that are then used to create RoleBindings for namespaces that match a labelSelector.
\u26a0\ufe0f If you do not configure roles in any way, then the default OpenShift roles of owner
, edit
, and view
will apply to Tenant members. Their details can be found here
rbac:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - admin\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n editor:\n clusterRoles:\n - custom-editor\n viewer:\n clusterRoles:\n - custom-view\n
"},{"location":"crds-api-reference/integration-config.html#default","title":"Default","text":"This field contains roles that will be used to create default roleBindings for each namespace that belongs to tenants. These roleBindings are only created for a namespace if that namespace isn't already matched by the custom
field below it. Therefore, it is required to have at least one role mentioned within each of its three subfields: owner
, editor
, and viewer
. These 3 subfields also correspond to the member fields of the Tenant CR
An array of custom roles. Similar to the default
field, you can mention roles within this field as well. However, the custom roles also require the use of a labelSelector
for each iteration within the array. The roles mentioned here will only apply to the namespaces that are matched by the labelSelector. If a namespace is matched by 2 different labelSelectors, then both roles will apply to it. Additionally, roles can be skipped within the labelSelector. These missing roles are then inherited from the default
roles field. For example, if the following custom roles arrangement is used:
custom:\n- labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n
Then the editor
and viewer
roles will be taken from the default
roles field, as the default field is required to have at least one role mentioned in each subfield.
Namespace Access Policy is used to configure the namespaces that are allowed to be created by tenants. It also allows the configuration of namespaces that are ignored by MTO.
namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n groups:\n - cluster-admins\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n privileged:\n namespaces:\n - ^default$\n - ^openshift.*\n - ^kube.*\n serviceAccounts:\n - ^system:serviceaccount:openshift.*\n - ^system:serviceaccount:kube.*\n users:\n - ''\n groups:\n - cluster-admins\n
"},{"location":"crds-api-reference/integration-config.html#deny","title":"Deny","text":"namespaceAccessPolicy.Deny:
Can be used to restrict privileged users/groups from performing CRUD operations on managed namespaces.
privileged.namespaces:
Contains the list of namespaces
ignored by MTO. MTO will not manage the namespaces
in this list. Privileged namespaces are not subject to the integrations or finalizer processing that apply to normal namespaces. Values in this list are regex patterns.
For example:
default
namespace, we can specify ^default$
openshift-
prefix, we can specify ^openshift-.*
.stakater
in its name, we can specify ^stakater.
. (A constant word given as a regex pattern will match any namespace containing that word.)privileged.serviceAccounts:
Contains the list of ServiceAccounts
ignored by MTO. MTO will not manage the ServiceAccounts
in this list. Values in this list are regex patterns. For example, to ignore all ServiceAccounts
starting with the system:serviceaccount:openshift-
prefix, we can use ^system:serviceaccount:openshift-.*
; and to ignore a specific service account like system:serviceaccount:builder
, we can use ^system:serviceaccount:builder$.
Note
stakater
, stakater.
and stakater.*
will have the same effect. To check out the combinations, go to Regex101, select Golang, and type your expected regex and test string.
privileged.users:
Contains the list of users
ignored by MTO. MTO will not manage the users
in this list. Values in this list are regex patterns.
privileged.groups:
Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way. MTO does not interfere even with the deletion of privilegedNamespaces.
Note
User kube:admin
is bypassed by default to perform operations as a cluster admin; this includes operations on all the namespaces.
\u26a0\ufe0f If you want to use a more complex regex pattern (for the privileged.namespaces
or privileged.serviceAccounts
field), it is recommended that you test the regex pattern first - either locally or using a platform such as https://regex101.com/.
metadata:\n groups:\n labels:\n role: customer-reader\n annotations: {}\n namespaces:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n sandboxes:\n labels:\n stakater.com/kind: sandbox\n annotations: {}\n
"},{"location":"crds-api-reference/integration-config.html#namespaces-group-and-sandbox","title":"Namespaces, group and sandbox","text":"We can use the metadata.namespaces
, metadata.group
and metadata.sandbox
fields to automatically add labels
and annotations
to the Namespaces and Groups managed via MTO.
If we want to add default labels/annotations to sandbox namespaces of tenants, then we simply add them in metadata.namespaces.labels
/metadata.namespaces.annotations
respectively.
Whenever a project is created, it will have the labels and annotations mentioned above.
kind: Project\napiVersion: project.openshift.io/v1\nmetadata:\n name: bluesky-build\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n labels:\n workload-monitoring: 'true'\n stakater.com/tenant: bluesky\nspec:\n finalizers:\n - kubernetes\nstatus:\n phase: Active\n
kind: Group\napiVersion: user.openshift.io/v1\nmetadata:\n name: bluesky-owner-group\n labels:\n role: customer-reader\nusers:\n - andrew@stakater.com\n
"},{"location":"crds-api-reference/integration-config.html#integrations","title":"Integrations","text":"Integrations are used to configure the integrations that MTO has with other tools. Currently, MTO supports the following integrations:
integrations:\n keycloak:\n realm: mto\n address: https://keycloak.apps.prod.abcdefghi.kubeapp.cloud\n clientName: mto-console\n argocd:\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n namespaceResourceBlacklist:\n - group: '' # all groups\n kind: ResourceQuota\n namespace: openshift-operators\n vault:\n enabled: true\n authMethod: kubernetes #enum: {kubernetes:default, Token}\n accessInfo: \n accessorPath: oidc/\n address: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n roleName: mto\n secretRef: \n name: ''\n namespace: ''\n config: \n ssoClient: vault\n
"},{"location":"crds-api-reference/integration-config.html#keycloak","title":"Keycloak","text":"Keycloak is an open-source Identity and Access Management solution aimed at modern applications and services. It makes it easy to secure applications and services with little to no code.
If a Keycloak
instance is already set up within your cluster, configure it for MTO by enabling the following configuration:
keycloak:\n realm: mto\n address: https://keycloak.apps.prod.abcdefghi.kubeapp.cloud/\n clientName: mto-console\n
keycloak.realm:
The realm in Keycloak where the client is configured.keycloak.address:
The address of the Keycloak instance.keycloak.clientName:
The name of the client in Keycloak.For more details around enabling Keycloak in MTO, visit here
"},{"location":"crds-api-reference/integration-config.html#argocd","title":"ArgoCD","text":"ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes. It follows the GitOps pattern of using Git repositories as the source of truth for defining the desired application state. ArgoCD uses Kubernetes manifests and configures the applications on the cluster.
If argocd
is configured on a cluster, then ArgoCD configuration can be enabled.
argocd:\n enabled: bool\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n namespaceResourceBlacklist:\n - group: '' # all groups\n kind: ResourceQuota\n namespace: openshift-operators\n
argocd.clusterResourceWhitelist
allows ArgoCD to sync the listed cluster scoped resources from your GitOps repo.argocd.namespaceResourceBlacklist
prevents ArgoCD from syncing the listed resources from your GitOps repo.argocd.namespace
is an optional field used to specify the namespace where ArgoCD Applications and AppProjects are deployed. The field should be populated when you want to create an ArgoCD AppProject for each tenant.Vault is used to secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.
If vault
is configured on a cluster, then Vault configuration can be enabled.
vault:\n enabled: true\n authMethod: kubernetes #enum: {kubernetes:default, token}\n accessInfo: \n accessorPath: oidc/\n address: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n roleName: mto\n secretRef:\n name: ''\n namespace: ''\n config: \n ssoClient: vault\n
If enabled, then admins have to specify the authMethod
to be used for authentication. MTO supports two authentication methods:
kubernetes
: This is the default authentication method. It uses the Kubernetes authentication method to authenticate with Vault.token
: This method uses a Vault token to authenticate with Vault.If authMethod
is set to kubernetes
, then admins have to specify the following fields:
accessorPath:
Accessor Path within Vault to fetch SSO accessorIDaddress:
Valid Vault address reachable within cluster.roleName:
Vault's Kubernetes authentication rolesso.clientName:
SSO client name.If authMethod
is set to token
, then admins have to specify the following fields:
accessorPath:
Accessor Path within Vault to fetch SSO accessorIDaddress:
Valid Vault address reachable within cluster.secretRef:
Secret containing Vault token.name:
Name of the secret containing Vault token.namespace:
Namespace of the secret containing Vault token.For more details around enabling Kubernetes auth in Vault, visit here
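A sketch of the token-based configuration, assuming a secret named vault-token-secret in the multi-tenant-operator namespace (both values are placeholders):
vault:\n enabled: true\n authMethod: token\n accessInfo:\n accessorPath: oidc/\n address: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n secretRef:\n name: vault-token-secret\n namespace: multi-tenant-operator\n config:\n ssoClient: vault\n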
The role created within Vault for Kubernetes authentication should have the following permissions:
path \"secret/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"sys/mounts\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"sys/mounts/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"managed-addons/*\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"auth/kubernetes/role/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"sys/auth\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"sys/policies/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"identity/group\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"identity/group-alias\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"identity/group/name/*\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"identity/group/id/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\n
"},{"location":"crds-api-reference/integration-config.html#custom-pricing-model","title":"Custom Pricing Model","text":"You can modify IntegrationConfig to customise the default pricing model. Here is what you need at IntegrationConfig.spec.components
:
components:\n console: true # should be enabled\n showback: true # should be enabled\n # add below and override any default value\n # you can also remove the ones you do not need\n customPricingModel:\n CPU: \"0.031611\"\n spotCPU: \"0.006655\"\n RAM: \"0.004237\"\n spotRAM: \"0.000892\"\n GPU: \"0.95\"\n storage: \"0.00005479452\"\n zoneNetworkEgress: \"0.01\"\n regionNetworkEgress: \"0.01\"\n internetNetworkEgress: \"0.12\"\n
After modifying your default IntegrationConfig in multi-tenant-operator
namespace, a configmap named opencost-custom-pricing
will be updated. You will be able to see updated pricing info in mto-console
.
Using Multi Tenant Operator, the cluster-admin can set and enforce cluster resource quotas and limit ranges for tenants.
"},{"location":"crds-api-reference/quota.html#assigning-resource-quotas","title":"Assigning Resource Quotas","text":"Bill is a cluster admin who will first create Quota
CR where he sets the maximum resource limits that Anna's tenant will have. Here limitrange
is an optional field; the cluster admin can skip it if not needed.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: small\nspec:\n resourcequota:\n hard:\n requests.cpu: '5'\n requests.memory: '5Gi'\n configmaps: \"10\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\n limitrange:\n limits:\n - type: \"Pod\"\n max:\n cpu: \"2\"\n memory: \"1Gi\"\n min:\n cpu: \"200m\"\n memory: \"100Mi\"\nEOF\n
For more details please refer to Quotas.
kubectl get quota small\nNAME STATE AGE\nsmall Active 3m\n
Bill then proceeds to create a tenant for Anna, while also linking the newly created Quota
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n namespaces:\n sandboxes:\n enabled: true\nEOF\n
Now that the quota is linked with Anna's tenant, Anna can create any resource within the values of resource quota and limit range.
kubectl -n bluesky-production create deployment nginx --image nginx:latest --replicas 4\n
Once the resource quota assigned to the tenant has been reached, Anna cannot create further resources.
kubectl create pods bluesky-training\nError from server (Cannot exceed Namespace quota: please, reach out to the system administrators)\n
"},{"location":"crds-api-reference/quota.html#limiting-persistentvolume-for-tenant","title":"Limiting PersistentVolume for Tenant","text":"Bill, as a cluster admin, wants to restrict the amount of storage a Tenant can use. For that he'll add the requests.storage
field to quota.spec.resourcequota.hard
. If Bill wants to restrict tenant bluesky
to use only 50Gi
of storage, he'll first create a quota with requests.storage
field set to 50Gi
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: medium\nspec:\n resourcequota:\n hard:\n requests.cpu: '5'\n requests.memory: '10Gi'\n requests.storage: '50Gi'\nEOF\n
Once the quota is created, Bill will create the tenant and set the quota field to the one he created.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: medium\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n namespaces:\n sandboxes:\n enabled: true\nEOF\n
Now, the combined storage used by all tenant namespaces will not exceed 50Gi
.
Now, Bill, as a cluster admin, wants to make sure that no Tenant can provision more than a fixed amount of storage from a StorageClass. Bill can restrict that using <storage-class-name>.storageclass.storage.k8s.io/requests.storage
field in quota.spec.resourcequota.hard
field. If Bill wants to restrict tenant sigma
to use only 20Gi
of storage from storage class stakater
, he'll first create a StorageClass stakater
and then create the relevant Quota with stakater.storageclass.storage.k8s.io/requests.storage
field set to 20Gi
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: small\nspec:\n resourcequota:\n hard:\n requests.cpu: '2'\n requests.memory: '4Gi'\n stakater.storageclass.storage.k8s.io/requests.storage: '20Gi'\nEOF\n
Once the quota is created, Bill will create the tenant and set the quota field to the one he created.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n namespaces:\n sandboxes:\n enabled: true\nEOF\n
Now, the combined storage provisioned from StorageClass stakater
used by all tenant namespaces will not exceed 20Gi
.
The 20Gi
limit will only be applied to StorageClass stakater
. If a tenant member creates a PVC with some other StorageClass, he will not be restricted.
Tip
More details about Resource Quota
can be found here
Cluster scoped resource:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: namespace-parameterized-restrictions-tgi\nspec:\n template: namespace-parameterized-restrictions\n sync: true\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n
TemplateGroupInstance distributes a template across multiple namespaces which are selected by labelSelector.
"},{"location":"crds-api-reference/template-instance.html","title":"TemplateInstance","text":"Namespace scoped resource:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: networkpolicy\n namespace: build\nspec:\n template: networkpolicy\n sync: true\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n
TemplateInstances are used to keep track of resources created from Templates, which are being instantiated inside a Namespace. Generally, a TemplateInstance is created from a Template and is then not updated when the Template changes later on. To change this behavior, it is possible to set spec.sync: true
in a TemplateInstance. Setting this option keeps the TemplateInstance in sync with the underlying Template (similar to a Helm upgrade).
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: redis\nresources:\n helm:\n releaseName: redis\n chart:\n repository:\n name: redis\n repoUrl: https://charts.bitnami.com/bitnami\n values: |\n redisPort: 6379\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: networkpolicy\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\nresources:\n manifests:\n - kind: NetworkPolicy\n apiVersion: networking.k8s.io/v1\n metadata:\n name: deny-cross-ns-traffic\n spec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: \"${{CIDR_IP}}\"\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: resource-mapping\nresources:\n resourceMappings:\n secrets:\n - name: secret-s1\n namespace: namespace-n1\n configMaps:\n - name: configmap-c1\n namespace: namespace-n2\n
Templates are used to initialize Namespaces, share common resources across namespaces, and map secrets/configmaps from one namespace to other namespaces.
Also, you can define custom variables in Template
and TemplateInstance
. The parameters defined in TemplateInstance
overwrite the values defined in Template
.
Manifest Templates: The easiest option to define a Template is by specifying an array of Kubernetes manifests which should be applied when the Template is being instantiated.
Helm Chart Templates: Instead of manifests, a Template can specify a Helm chart that will be installed (using Helm template) when the Template is being instantiated.
Resource Mapping Templates: A template can be used to map secrets and configmaps from one tenant's namespace to another tenant's namespace, or within a tenant's namespace.
"},{"location":"crds-api-reference/template.html#mandatory-and-optional-templates","title":"Mandatory and Optional Templates","text":"Templates can either be mandatory or optional. By default, all Templates are optional. Cluster Admins can make Templates mandatory by adding them to the spec.templateInstances
array within the Tenant configuration. All Templates listed in spec.templateInstances
will always be instantiated within every Namespace
that is created for the respective Tenant.
A minimal Tenant definition requires only a quota field, essential for limiting resource consumption:
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: alpha\nspec:\n quota: small\n
For a more comprehensive setup, a detailed Tenant definition includes various configurations:
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: tenant-sample\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - kubeadmin\n groups:\n - admin-group\n editors:\n users:\n - devuser1\n - devuser2\n groups:\n - dev-group\n viewers:\n users:\n - viewuser\n groups:\n - view-group\n hibernation:\n # UTC time\n sleepSchedule: \"20 * * * *\"\n wakeSchedule: \"40 * * * *\" \n namespaces:\n sandboxes:\n enabled: true\n private: true\n withoutTenantPrefix:\n - analytics\n - marketing\n withTenantPrefix:\n - dev\n - staging\n onDeletePurgeNamespaces: true\n metadata:\n common:\n labels:\n common-label: common-value\n annotations:\n common-annotation: common-value\n sandbox:\n labels:\n sandbox-label: sandbox-value\n annotations:\n sandbox-annotation: sandbox-value\n specific:\n - namespaces:\n - tenant-sample-dev\n labels:\n specific-label: specific-dev-value\n annotations:\n specific-annotation: specific-dev-value\n desc: \"This is a sample tenant setup for the v1beta3 version.\"\n
"},{"location":"crds-api-reference/tenant.html#access-control","title":"Access Control","text":"Structured access control is critical for managing roles and permissions within a tenant effectively. It divides users into three categories, each with customizable privileges. This design enables precise role-based access management.
These roles are obtained from IntegrationConfig's TenantRoles field.
Owners
: Have full administrative rights, including resource management and namespace creation. Their roles are crucial for high-level management tasks.Editors
: Granted permissions to modify resources, enabling them to support day-to-day operations without full administrative access.Viewers
: Provide read-only access, suitable for oversight and auditing without the ability to alter resources.Users and groups are linked to these roles by specifying their usernames or group names in the respective fields under owners
, editors
, and viewers
.
The quota
field sets the resource limits for the tenant, such as CPU and memory usage, to prevent any single tenant from consuming a disproportionate amount of resources. This mechanism ensures efficient resource allocation and fosters fair usage practices across all tenants.
For more information on quotas, please refer here.
"},{"location":"crds-api-reference/tenant.html#namespaces","title":"Namespaces","text":"Controls the creation and management of namespaces within the tenant:
sandboxes
:
private
to true will make the sandboxes visible only to the user they belong to. By default, sandbox namespaces are visible to all tenant members.withoutTenantPrefix
: Lists the namespaces to be created without automatically prefixing them with the tenant name, useful for shared or common resources.
withTenantPrefix
: Namespaces listed here will be prefixed with the tenant name, ensuring easy identification and isolation.onDeletePurgeNamespaces
: Determines whether namespaces associated with the tenant should be deleted upon the tenant's deletion, enabling clean up and resource freeing.metadata
: Configures metadata like labels and annotations that are applied to namespaces managed by the tenant:common
: Applies specified labels and annotations across all namespaces within the tenant, ensuring consistent metadata for resources and workloads.sandbox
: Special metadata for sandbox namespaces, which can include templated annotations or labels for dynamic information.{{ TENANT.USERNAME }}
. This template can be utilized to dynamically insert the tenant's username value into annotations, for example, as username: {{ TENANT.USERNAME }}
.specific
: Allows applying unique labels and annotations to specified tenant namespaces, enabling custom configurations for particular workloads or environments.hibernation
allows for the scheduling of inactive periods for namespaces associated with the tenant, effectively putting them into a \"sleep\" mode. This capability is designed to conserve resources during known periods of inactivity.
sleepSchedule
and wakeSchedule
, both of which accept strings formatted according to cron syntax.desc
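For example, a hibernation block that sleeps namespaces overnight could look like this (the schedules are illustrative and interpreted in UTC):
hibernation:\n sleepSchedule: \"0 20 * * *\"\n wakeSchedule: \"0 8 * * *\"\n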
provides a human-readable description of the tenant, aiding in documentation and at-a-glance understanding of the tenant's purpose and configuration.
\u26a0\ufe0f If same label or annotation key is being applied using different methods provided, then the highest precedence will be given to namespaces.metadata.specific
followed by namespaces.metadata.common
and in the end would be the ones applied from openshift.project.labels
/openshift.project.annotations
in IntegrationConfig
The Multi Tenant Operator (MTO) Console is a comprehensive user interface designed for both administrators and tenant users to manage multi-tenant environments. The MTO Console simplifies the complexity involved in handling various aspects of tenants and their related resources.
"},{"location":"explanation/console.html#dashboard-overview","title":"Dashboard Overview","text":"The dashboard serves as a centralized monitoring hub, offering insights into the current state of tenants, namespaces, and quotas. It is designed to provide a quick summary/snapshot of MTO resources' status. Additionally, it includes a Showback graph that presents a quick glance of the seven-day cost trends associated with the namespaces/tenants based on the logged-in user.
By default, MTO Console will be disabled and has to be enabled by setting the below configuration in IntegrationConfig.
components:\n console: true\n ingress:\n ingressClassName: <ingress-class-name>\n console:\n host: tenant-operator-console.<hostname>\n tlsSecretName: <tls-secret-name>\n gateway:\n host: tenant-operator-gateway.<hostname>\n tlsSecretName: <tls-secret-name>\n keycloak:\n host: tenant-operator-keycloak.<hostname>\n tlsSecretName: <tls-secret-name>\n showback: true\n trustedRootCert: <root-ca-secret-name>\n
<hostname>
: hostname of the cluster <ingress-class-name>
: name of the ingress class <tls-secret-name>
: name of the secret that contains the TLS certificate and key <root-ca-secret-name>
: name of the secret that contains the root CA certificate
Note: trustedRootCert
and tls-secret-name
are optional. If not provided, MTO will use the default root CA certificate and secrets respectively.
Once the above configuration is set on the IntegrationConfig, MTO would start provisioning the required resources for MTO Console to be ready. In a few moments, you should be able to see the Console Ingress in the multi-tenant-operator
namespace which gives you access to the Console.
For more details on the configuration, please visit here.
"},{"location":"explanation/console.html#tenants","title":"Tenants","text":"Here, admins have a bird's-eye view of all tenants, with the ability to delve into each one for detailed examination and management. This section is pivotal for observing the distribution and organization of tenants within the system. More information on each tenant can be accessed by clicking the view option against each tenant name.
"},{"location":"explanation/console.html#tenantsquota","title":"Tenants/Quota","text":""},{"location":"explanation/console.html#viewing-quota-in-the-tenant-console","title":"Viewing Quota in the Tenant Console","text":"In this view, users can access a dedicated tab to review the quota utilization for their Tenants. Within this tab, users have the option to toggle between two different views: Aggregated Quota and Namespace Quota.
"},{"location":"explanation/console.html#aggregated-quota-view","title":"Aggregated Quota View","text":"This view provides users with an overview of the combined resource allocation and usage across all namespaces within their tenant. It offers a comprehensive look at the total limits and usage of resources such as CPU, memory, and other defined quotas. Users can easily monitor and manage resource distribution across their entire tenant environment from this aggregated perspective.
"},{"location":"explanation/console.html#namespace-quota-view","title":"Namespace Quota View","text":"Alternatively, users can opt to view quota settings on a per-namespace basis. This view allows users to focus specifically on the resource allocation and usage within individual namespaces. By selecting this option, users gain granular insights into the resource constraints and utilization for each namespace, facilitating more targeted management and optimization of resources at the namespace level.
"},{"location":"explanation/console.html#tenantsutilization","title":"Tenants/Utilization","text":"In the Utilization tab of the tenant console, users are presented with a detailed table listing all namespaces within their tenant. This table provides essential metrics for each namespace, including CPU and memory utilization. The metrics shown include:
Users can adjust the interval window using the provided selector to customize the time frame for the displayed data. This table allows users to quickly assess resource utilization across all namespaces, facilitating efficient resource management and cost tracking.
Upon selecting a specific namespace from the utilization table, users are directed to a detailed view that includes CPU and memory utilization graphs along with a workload table. This detailed view provides:
This detailed view provides users with in-depth insights into resource utilization at the workload level, enabling precise monitoring and optimization of resource allocation within the selected namespace.
"},{"location":"explanation/console.html#namespaces","title":"Namespaces","text":"Users can view all the namespaces that belong to their tenant, offering a comprehensive perspective of the accessible namespaces for tenant members. This section also provides options for detailed exploration.
"},{"location":"explanation/console.html#quotas","title":"Quotas","text":"MTO's Quotas are crucial for managing resource allocation. In this section, administrators can assess the quotas assigned to each tenant, ensuring a balanced distribution of resources in line with operational requirements.
"},{"location":"explanation/console.html#templates","title":"Templates","text":"The Templates section acts as a repository for standardized resource deployment patterns, which can be utilized to maintain consistency and reliability across tenant environments. Few examples include provisioning specific k8s manifests, helm charts, secrets or configmaps across a set of namespaces.
"},{"location":"explanation/console.html#showback","title":"Showback","text":"
The Showback feature is an essential financial governance tool, providing detailed insights into the cost implications of resource usage by tenant or namespace or other filters. This facilitates a transparent cost management and internal chargeback or showback process, enabling informed decision-making regarding resource consumption and budgeting.
"},{"location":"explanation/console.html#user-roles-and-permissions","title":"User Roles and Permissions","text":""},{"location":"explanation/console.html#administrators","title":"Administrators","text":"Administrators have overarching access to the console, including the ability to view all namespaces and tenants. They have exclusive access to the IntegrationConfig, allowing them to view all the settings and integrations.
"},{"location":"explanation/console.html#tenant-users","title":"Tenant Users","text":"Regular tenant users can monitor and manage their allocated resources. However, they do not have access to the IntegrationConfig and cannot view resources across different tenants, ensuring data privacy and operational integrity.
"},{"location":"explanation/console.html#live-yaml-configuration-and-graph-view","title":"Live YAML Configuration and Graph View","text":"In the MTO Console, each resource section is equipped with a \"View\" button, revealing the live YAML configuration for complete information on the resource. For Tenant resources, a supplementary \"Graph\" option is available, illustrating the relationships and dependencies of all resources under a Tenant. This dual-view approach empowers users with both the detailed control of YAML and the holistic oversight of the graph view.
You can find more details on graph visualization here: Graph Visualization
"},{"location":"explanation/console.html#caching-and-database","title":"Caching and Database","text":"MTO integrates a dedicated database to streamline resource management. Now, all resources managed by MTO are efficiently stored in a Postgres database, enhancing the MTO Console's ability to efficiently retrieve all the resources for optimal presentation.
The implementation of this feature is facilitated by the Bootstrap controller, streamlining the deployment process. This controller creates the PostgreSQL Database, establishes a service for inter-pod communication, and generates a secret to ensure secure connectivity to the database.
Furthermore, the introduction of a dedicated cache layer ensures that there is no added burden on the Kube API server when responding to MTO Console requests. This enhancement not only improves response times but also contributes to a more efficient and responsive resource management system.
"},{"location":"explanation/console.html#authentication-and-authorization","title":"Authentication and Authorization","text":""},{"location":"explanation/console.html#keycloak-for-authentication","title":"Keycloak for Authentication","text":"MTO Console incorporates Keycloak, a leading authentication module, to manage user access securely and efficiently. Keycloak is provisioned automatically by our controllers, setting up a new realm, client, and a default user named mto
.
MTO Console leverages PostgreSQL as the persistent storage solution for Keycloak, enhancing the reliability and flexibility of the authentication system.
It offers benefits such as enhanced data reliability, easy data export and import.
"},{"location":"explanation/console.html#benefits_1","title":"Benefits","text":"The MTO Console is equipped with an authorization module, designed to manage access rights intelligently and securely.
"},{"location":"explanation/console.html#benefits_2","title":"Benefits","text":"The MTO Console is engineered to simplify complex multi-tenant management. The current iteration focuses on providing comprehensive visibility. Future updates could include direct CUD (Create/Update/Delete) capabilities from the dashboard, enhancing the console\u2019s functionality. The Showback feature remains a standout, offering critical cost tracking and analysis. The delineation of roles between administrators and tenant users ensures a secure and organized operational framework.
"},{"location":"explanation/logs-metrics.html","title":"Metrics and Logs Documentation","text":"This document offers an overview of the Prometheus metrics implemented by the multi_tenant_operator
controllers, along with an interpretation guide for the logs and statuses generated by these controllers. Each metric is designed to provide specific insights into the controllers' operational performance, while the log interpretation guide aids in understanding their behavior and workflow processes. Additionally, the status descriptions for custom resources provide operational snapshots. Together, these elements form a comprehensive toolkit for monitoring and enhancing the performance and health of the controllers.
The following metrics are exposed by the controllers, along with their labels:
multi_tenant_operator_resources_deployed_total (labels: kind, name, namespace)
multi_tenant_operator_resources_deployed (labels: kind, name, namespace, type)
multi_tenant_operator_reconcile_error (labels: kind, name, namespace, state, errors)
multi_tenant_operator_reconcile_count (labels: kind, name)
multi_tenant_operator_reconcile_seconds (labels: kind, name)
multi_tenant_operator_reconcile_seconds_total (labels: kind, name)
In this section, we delve into the status of various custom resources managed by our controllers. The kubectl describe
command can be used to fetch the status of these resources.
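For example, assuming a TemplateGroupInstance named docker-secret-group-instance (the name used in later examples in these docs), its status could be fetched like this:
kubectl describe templategroupinstances.tenantoperator.stakater.com docker-secret-group-instance\n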
Status from the templategroupinstances.tenantoperator.stakater.com
custom resource:
InstallSucceeded: Indicates the success of the instance's installation.
Ready: Shows the readiness of the instance, with details on the last reconciliation process, its duration, and relevant messages.
Running: Reports on active processes like ongoing resource reconciliation.
The status also includes a Template Manifests Hash and a Resource Mapping Hash, which provide versioning and change tracking for template manifests and resource mappings.
Logs from the tenant-operator-templategroupinstance-controller:
Log entries like Reconciling! mark the beginning of a reconciliation process for a TemplateGroupInstance. Subsequent actions like Creating/Updating TemplateGroupInstance and Retrieving list of namespaces Matching to TGI outline the reconciliation steps.
Entries such as Namespaces test-namespace-1 is new or failed... and Creating/Updating resource... detail the management of Kubernetes resources in specific namespaces.
Entries tagged with [Worker X] show tasks being processed in parallel, including steps like Validating parameters, Gathering objects from manifest, and Apply manifests.
End Reconciling and Defering XXth Reconciling, with duration XXXms indicate the end of a reconciliation process and its duration, aiding in performance analysis.
Entries from the Watcher, such as Delete call received for object... and Following resource is recreated..., are key for tracking changes to Kubernetes objects.
These logs are crucial for tracking the system's behavior, diagnosing issues, and comprehending the resource management workflow.
"},{"location":"explanation/multi-tenancy-vault.html","title":"Multi-Tenancy in Vault","text":""},{"location":"explanation/multi-tenancy-vault.html#vault-multitenancy","title":"Vault Multitenancy","text":"HashiCorp Vault is an identity-based secret and encryption management system. Vault validates and authorizes a system's clients (users, machines, apps) before providing them access to secrets or stored sensitive data.
"},{"location":"explanation/multi-tenancy-vault.html#vault-integration-in-multi-tenant-operator","title":"Vault integration in Multi Tenant Operator","text":""},{"location":"explanation/multi-tenancy-vault.html#service-account-auth-in-vault","title":"Service Account Auth in Vault","text":"MTO enables the Kubernetes auth method which can be used to authenticate with Vault using a Kubernetes Service Account Token. When enabled, for every tenant namespace, MTO automatically creates policies and roles that allow the service accounts present in those namespaces to read secrets at tenant's path in Vault. The name of the role is the same as namespace name.
These service accounts are required to have stakater.com/vault-access: true
label, so they can be authenticated with Vault via MTO.
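For illustration, a ServiceAccount that should be allowed to authenticate with Vault via MTO could carry the label as sketched below; the name and namespace here are only examples:
apiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: example-app\n namespace: bluesky-dev\n labels:\n stakater.com/vault-access: \"true\"\n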
The Diagram shows how MTO enables ServiceAccounts to read secrets from Vault.
"},{"location":"explanation/multi-tenancy-vault.html#user-oidc-auth-in-vault","title":"User OIDC Auth in Vault","text":"This requires a running RHSSO(RedHat Single Sign On)
instance integrated with Vault over the OIDC login method.
MTO integration with Vault and RHSSO provides a way for users to log in to Vault where they only have access to relevant tenant paths.
Once both integrations are set up with IntegrationConfig CR, MTO links tenant users to specific client roles named after their tenant under Vault client in RHSSO.
After that, MTO creates specific policies in Vault for its tenant users.
Mapping of tenant roles to Vault is shown below
Tenant Role | Vault Path | Vault Capabilities
Owner, Editor | (tenantName)/* | Create, Read, Update, Delete, List
Owner, Editor | sys/mounts/(tenantName)/* | Create, Read, Update, Delete, List
Owner, Editor | managed-addons/* | Read, List
Viewer | (tenantName)/* | Read
A simple user login workflow is shown in the diagram below.
"},{"location":"explanation/template.html","title":"Understanding and Utilizing Template","text":""},{"location":"explanation/template.html#creating-templates","title":"Creating Templates","text":"Anna wants to create a Template that she can use to initialize or share common resources across namespaces (e.g. PullSecrets).
Anna can either create a template using manifests
field, covering Kubernetes or custom resources.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Or by using Helm Charts
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: redis\nresources:\n helm:\n releaseName: redis\n chart:\n repository:\n name: redis\n repoUrl: https://charts.bitnami.com/bitnami\n version: 0.0.15\n values: |\n redisPort: 6379\n
She can also use resourceMapping
field to copy over secrets and configmaps from one namespace to others.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: resource-mapping\nresources:\n resourceMappings:\n secrets:\n - name: docker-secret\n namespace: bluesky-build\n configMaps:\n - name: tronador-configMap\n namespace: stakater-tronador\n
Note: Resource mapping can be used via TGI to map resources within tenant namespaces or to some other tenant's namespace. If used with TI, the resources will only be mapped if namespaces belong to same tenant.
"},{"location":"explanation/template.html#using-templates-with-default-parameters","title":"Using Templates with Default Parameters","text":"apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: namespace-parameterized-restrictions\nparameters:\n # Name of the parameter\n - name: DEFAULT_CPU_LIMIT\n # The default value of the parameter\n value: \"1\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"0.5\"\n # If a parameter is required the template instance will need to set it\n # required: true\n # Make sure only values are entered for this parameter\n validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n manifests:\n - apiVersion: v1\n kind: LimitRange\n metadata:\n name: namespace-limit-range-${namespace}\n spec:\n limits:\n - default:\n cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n defaultRequest:\n cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n type: Container\n
Parameters can be used with both manifests
and helm charts
Templated values are placeholders in your configuration that get replaced with actual data when the CR is processed. Below is a list of currently supported templated values, their descriptions, and where they can be used.
"},{"location":"explanation/templated-metadata-values.html#supported-templated-values","title":"Supported templated values","text":"\"{{ TENANT.USERNAME }}\"
Owners
and Editors
.Tenant
: Under sandboxMetadata.labels
and sandboxMetadata.annotations
.IntegrationConfig
: Under metadata.sandboxs.labels
and metadata.sandboxs.annotations
. annotation:\n che.eclipse.org/username: \"{{ TENANT.USERNAME }}\" # double quotes are required\n
Bill is a cluster admin who wants to configure network policies to provide multi-tenant network isolation.
First, Bill creates a template for network policies:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: tenant-network-policy\nresources:\n manifests:\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-same-namespace\n spec:\n podSelector: {}\n ingress:\n - from:\n - podSelector: {}\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-from-openshift-monitoring\n spec:\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n network.openshift.io/policy-group: monitoring\n podSelector: {}\n policyTypes:\n - Ingress\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-from-openshift-ingress\n spec:\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n network.openshift.io/policy-group: ingress\n podSelector: {}\n policyTypes:\n - Ingress\n
Once the template has been created, Bill edits the IntegrationConfig to add unique label to all tenant projects:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n metadata:\n namespaces:\n labels:\n stakater.com/workload-monitoring: \"true\"\n tenant-network-policy: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n sandbox:\n labels:\n stakater.com/kind: sandbox\n privileged:\n namespaces:\n - default\n - ^openshift.*\n - ^kube.*\n serviceAccounts:\n - ^system:serviceaccount:openshift.*\n - ^system:serviceaccount:kube.*\n
Bill has added a new label tenant-network-policy: \"true\"
in project section of IntegrationConfig, now MTO will add that label in all tenant projects.
Finally, Bill creates a TemplateGroupInstance
which will distribute the network policies using the newly added project label and template.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: tenant-network-policy-group\nspec:\n template: tenant-network-policy\n selector:\n matchLabels:\n tenant-network-policy: \"true\"\n sync: true\n
MTO will now deploy the network policies mentioned in Template
to all projects matching the label selector mentioned in the TemplateGroupInstance.
Secrets like registry
credentials often need to exist in multiple Namespaces, so that Pods within different namespaces can have access to those credentials in form of secrets.
Manually creating secrets within different namespaces could lead to challenges, such as:
With the help of Multi-Tenant Operator's Template feature we can make this secret distribution experience easy.
For example, to copy a Secret called registry
which exists in the example
to new Namespaces whenever they are created, we will first create a Template which will have reference of the registry secret.
It will also push updates to the copied Secrets and keep the propagated secrets always sync and updated with parent namespaces.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: registry-secret\nresources:\n resourceMappings:\n secrets:\n - name: registry\n namespace: example\n
Now using this Template we can propagate registry secret to different namespaces that have some common set of labels.
For example, will just add one label kind: registry
and all namespaces with this label will get this secret.
For propagating it on different namespaces dynamically will have to create another resource called TemplateGroupInstance
. TemplateGroupInstance
will have Template
and matchLabel
mapping as shown below:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: registry-secret-group-instance\nspec:\n template: registry-secret\n selector:\n matchLabels:\n kind: registry\n sync: true\n
After reconciliation, you will be able to see those secrets in namespaces having mentioned label.
MTO will keep injecting this secret to the new namespaces created with that label.
kubectl get secret registry-secret -n example-ns-1\nNAME STATE AGE\nregistry-secret Active 3m\n\nkubectl get secret registry-secret -n example-ns-2\nNAME STATE AGE\nregistry-secret Active 3m\n
"},{"location":"how-to-guides/custom-metrics.html","title":"Custom Metrics Support","text":"Multi Tenant Operator now supports custom metrics for templates, template instances and template group instances. This feature allows users to monitor the usage of templates and template instances in their cluster.
To enable custom metrics and view them in your OpenShift cluster, you need to follow the steps below:
Observe
-> Metrics
in the OpenShift console.Administration
-> Namespaces
in the OpenShift console. Select the namespace where you have installed Multi Tenant Operator.openshift.io/cluster-monitoring=true
. This will enable cluster monitoring for the namespace.Observe
-> Targets
in the OpenShift console. You should see the namespace in the list of targets.Observe
-> Metrics
in the OpenShift console. You should see the custom metrics for templates, template instances and template group instances in the list of metrics.Details of metrics can be found at Metrics and Logs
"},{"location":"how-to-guides/custom-roles.html","title":"Changing the default access level for tenant owners","text":"This feature allows the cluster admins to change the default roles assigned to Tenant owner, editor, viewer groups.
For example, if Bill as the cluster admin wants to reduce the privileges that tenant owners have, so they cannot create or edit Roles or bind them. As an admin of an OpenShift cluster, Bill can do this by assigning the edit
role to all tenant owners. This is easily achieved by modifying the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n accessControl:\n rbac:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - edit\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n
Once all namespaces reconcile, the old admin
RoleBindings should get replaced with the edit
ones for each tenant owner.
Bill now wants the owners of the tenants bluesky
and alpha
to have admin
permissions over their namespaces. Custom roles feature will allow Bill to do this, by modifying the IntegrationConfig like this:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n accessControl:\n rbac:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - edit\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n owner:\n clusterRoles:\n - admin\n - labelSelector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - bluesky\n owner:\n clusterRoles:\n - admin\n
New Bindings will be created for the Tenant owners of bluesky
and alpha
, corresponding to the admin
Role. Bindings for editors and viewer will be inherited from the default roles
. All other Tenant owners will have an edit
Role bound to them within their namespaces
Multi Tenant Operator uses its helm
functionality from Template
and TemplateGroupInstance
to deploy private and public charts to multiple namespaces.
Bill, the cluster admin, wants to deploy a helm chart from OCI
registry in namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: chart-deploy\nresources:\n helm:\n releaseName: random-release\n chart:\n repository:\n name: random-chart\n repoUrl: 'oci://ghcr.io/stakater/charts/random-chart'\n version: 0.0.15\n password:\n key: password\n name: repo-user\n namespace: shared-ns\n username:\n key: username\n name: repo-user\n namespace: shared-ns\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: chart-deploy\nspec:\n selector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - system\n sync: true\n template: chart-deploy\n
Multi Tenant Operator will pick up the credentials from the mentioned namespace to pull the chart and apply it.
Afterward, Bill can see that manifests in the chart have been successfully created in all label matching namespaces.
"},{"location":"how-to-guides/deploying-private-helm-charts.html#deploying-helm-chart-to-namespaces-via-templategroupinstances-from-https-registry","title":"Deploying Helm Chart to Namespaces via TemplateGroupInstances from HTTPS Registry","text":"Bill, the cluster admin, wants to deploy a helm chart from HTTPS
registry in namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: chart-deploy\nresources:\n helm:\n releaseName: random-release\n chart:\n repository:\n name: random-chart\n repoUrl: 'nexus-helm-url/registry'\n version: 0.0.15\n password:\n key: password\n name: repo-user\n namespace: shared-ns\n username:\n key: username\n name: repo-user\n namespace: shared-ns\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: chart-deploy\nspec:\n selector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - system\n sync: true\n template: chart-deploy\n
Multi Tenant Operator will pick up the credentials from the mentioned namespace to pull the chart and apply it.
Afterward, Bill can see that manifests in the chart have been successfully created in all label matching namespaces.
"},{"location":"how-to-guides/deploying-templates.html","title":"Distributing Resources in Namespaces","text":"Multi Tenant Operator has two Custom Resources which can cover this need using the Template
CR, depending upon the conditions and preference.
Bill, the cluster admin, wants to deploy a docker pull secret in namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
Afterward, Bill can see that secrets have been successfully created in all label matching namespaces.
kubectl get secret docker-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-secret Active 3m\n\nkubectl get secret docker-secret -n alpha-dave-aurora-sandbox\nNAME STATE AGE\ndocker-secret Active 2m\n
TemplateGroupInstance
can also target specific tenants or all tenant namespaces under a single YAML definition.
It can be done by using the matchExpressions
field, dividing the tenant label in key and values.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\n sync: true\n
"},{"location":"how-to-guides/deploying-templates.html#templategroupinstance-for-all-tenants","title":"TemplateGroupInstance for all Tenants","text":"This can also be done by using the matchExpressions
field, using just the tenant label key stakater.com/tenant
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: Exists\n sync: true\n
"},{"location":"how-to-guides/deploying-templates.html#deploying-template-to-a-namespace-via-templateinstance","title":"Deploying Template to a Namespace via TemplateInstance","text":"Anna wants to deploy a docker pull secret in her namespace.
First Anna asks Bill, the cluster admin, to create a template of the secret for her:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Anna creates a TemplateInstance
in her namespace referring to the Template
she wants to deploy:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: docker-pull-secret-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: docker-pull-secret\n sync: true\n
Once this is created, Anna can see that the secret has been successfully applied.
kubectl get secret docker-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"how-to-guides/deploying-templates.html#passing-parameters-to-template-via-templateinstance-templategroupinstance","title":"Passing Parameters to Template via TemplateInstance, TemplateGroupInstance","text":"Anna wants to deploy a LimitRange resource to certain namespaces.
First Anna asks Bill, the cluster admin, to create template with parameters for LimitRange for her:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: namespace-parameterized-restrictions\nparameters:\n # Name of the parameter\n - name: DEFAULT_CPU_LIMIT\n # The default value of the parameter\n value: \"1\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"0.5\"\n # If a parameter is required the template instance will need to set it\n # required: true\n # Make sure only values are entered for this parameter\n validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n manifests:\n - apiVersion: v1\n kind: LimitRange\n metadata:\n name: namespace-limit-range-${namespace}\n spec:\n limits:\n - default:\n cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n defaultRequest:\n cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n type: Container\n
Afterward, Anna creates a TemplateInstance
in her namespace referring to the Template
she wants to deploy:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: namespace-parameterized-restrictions-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: namespace-parameterized-restrictions\n sync: true\nparameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n
If she wants to distribute the same Template over multiple namespaces, she can use TemplateGroupInstance
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: namespace-parameterized-restrictions-tgi\nspec:\n template: namespace-parameterized-restrictions\n sync: true\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\nparameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n
"},{"location":"how-to-guides/distributing-secrets-using-sealed-secret-template.html","title":"Distributing Secrets Using Sealed Secrets Template","text":"Bill is a cluster admin who wants to provide a mechanism for distributing secrets in multiple namespaces. For this, he wants to use Sealed Secrets as the solution by adding them to MTO Template CR
First, Bill creates a Template in which Sealed Secret is mentioned:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: tenant-sealed-secret\nresources:\n manifests:\n - kind: SealedSecret\n apiVersion: bitnami.com/v1alpha1\n metadata:\n name: mysecret\n spec:\n encryptedData:\n .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....\n template:\n type: kubernetes.io/dockerconfigjson\n # this is an example of labels and annotations that will be added to the output secret\n metadata:\n labels:\n \"jenkins.io/credentials-type\": usernamePassword\n annotations:\n \"jenkins.io/credentials-description\": credentials from Kubernetes\n
Once the template has been created, Bill has to edit the Tenant
to add unique label to namespaces in which the secret has to be deployed. For this, he can use the support for common and specific labels across namespaces.
Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n sandboxes:\n enabled: false\n withTenantPrefix:\n - dev\n - build\n - prod\n withoutTenantPrefix: []\n metadata:\n specific:\n - namespaces:\n - bluesky-test-namespace\n labels:\n distribute-image-pull-secret: true\n common:\n labels:\n distribute-image-pull-secret: true\n
Bill has added support for a new label distribute-image-pull-secret: true\"
for tenant projects/namespaces, now MTO will add that label depending on the used field.
Finally, Bill creates a TemplateGroupInstance
which will deploy the sealed secrets using the newly created project label and template.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: tenant-sealed-secret\nspec:\n template: tenant-sealed-secret\n selector:\n matchLabels:\n distribute-image-pull-secret: true\n sync: true\n
MTO will now deploy the sealed secrets mentioned in Template
to namespaces which have the mentioned label. The rest of the work to deploy secret from a sealed secret has to be done by Sealed Secrets Controller.
With the Multi-Tenant Operator (MTO), cluster administrators can configure multi-tenancy within their cluster. The integration of ArgoCD with MTO allows for the configuration of multi-tenancy in ArgoCD applications and AppProjects.
MTO can be configured to create AppProjects for each tenant. These AppProjects enable tenants to create ArgoCD Applications that can be synced to namespaces owned by them. Cluster admins can blacklist certain namespace resources and allow specific cluster-scoped resources as needed (see the NamespaceResourceBlacklist and ClusterResourceWhitelist sections in Integration Config docs and Tenant Custom Resource docs).
Note that ArgoCD integration in MTO is optional.
"},{"location":"how-to-guides/enabling-multi-tenancy-argocd.html#default-argocd-configuration","title":"Default ArgoCD configuration","text":"We have set a default ArgoCD configuration in Multi Tenant Operator that fulfils the following use cases:
To ensure each tenant has their own ArgoCD AppProjects, administrators must first specify the ArgoCD namespace in the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n argocd:\n namespace: openshift-operators\n ...\n
Administrators then create an Extension CR associated with the tenant:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Extensions\nmetadata:\n name: extensions-sample\nspec:\n tenantName: tenant-sample\n argoCD:\n onDeletePurgeAppProject: true\n appProject:\n sourceRepos:\n - \"github.com/stakater/repo\"\n clusterResourceWhitelist:\n - group: \"\"\n kind: \"Pod\"\n namespaceResourceBlacklist:\n - group: \"v1\"\n kind: \"ConfigMap\"\n
This creates an AppProject for the tenant:
oc get AppProject -A\nNAMESPACE NAME AGE\nopenshift-operators tenant-sample 5d15h\n
Example of the created AppProject:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: tenant-sample\n namespace: openshift-operators\nspec:\n destinations:\n - namespace: tenant-sample-build\n server: \"https://kubernetes.default.svc\"\n - namespace: tenant-sample-dev\n server: \"https://kubernetes.default.svc\"\n - namespace: tenant-sample-stage\n server: \"https://kubernetes.default.svc\"\n roles:\n - description: >-\n Role that gives full access to all resources inside the tenant's\n namespace to the tenant owner groups\n groups:\n - saap-cluster-admins\n - stakater-team\n - tenant-sample-owner-group\n name: tenant-sample-owner\n policies:\n - \"p, proj:tenant-sample:tenant-sample-owner, *, *, tenant-sample/*, allow\"\n - description: >-\n Role that gives edit access to all resources inside the tenant's\n namespace to the tenant owner group\n groups:\n - saap-cluster-admins\n - stakater-team\n - tenant-sample-edit-group\n name: tenant-sample-edit\n policies:\n - \"p, proj:tenant-sample:tenant-sample-edit, *, *, tenant-sample/*, allow\"\n - description: >-\n Role that gives view access to all resources inside the tenant's\n namespace to the tenant owner group\n groups:\n - saap-cluster-admins\n - stakater-team\n - tenant-sample-view-group\n name: tenant-sample-view\n policies:\n - \"p, proj:tenant-sample:tenant-sample-view, *, get, tenant-sample/*, allow\"\n sourceRepos:\n - \"https://github.com/stakater/gitops-config\"\n
Users belonging to the tenant group will now see only applications created by them in the ArgoCD frontend:
Note
For ArgoCD Multi Tenancy to work properly, any default roles or policies attached to all users must be removed.
"},{"location":"how-to-guides/enabling-multi-tenancy-argocd.html#preventing-argocd-from-syncing-certain-namespaced-resources","title":"Preventing ArgoCD from Syncing Certain Namespaced Resources","text":"To prevent tenants from syncing ResourceQuota and LimitRange resources to their namespaces, administrators can specify these resources in the blacklist section of the ArgoCD configuration in the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n integrations:\n argocd:\n namespace: openshift-operators\n namespaceResourceBlacklist:\n - group: \"\"\n kind: ResourceQuota\n - group: \"\"\n kind: LimitRange\n ...\n
This configuration ensures these resources are not synced by ArgoCD if added to any tenant's project directory in GitOps. The AppProject will include the blacklisted resources:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: tenant-sample\n namespace: openshift-operators\nspec:\n ...\n namespaceResourceBlacklist:\n - group: ''\n kind: ResourceQuota\n - group: ''\n kind: LimitRange\n ...\n
"},{"location":"how-to-guides/enabling-multi-tenancy-argocd.html#allowing-argocd-to-sync-certain-cluster-wide-resources","title":"Allowing ArgoCD to Sync Certain Cluster-Wide Resources","text":"To allow tenants to sync the Environment cluster-scoped resource, administrators can specify this resource in the allow-list section of the ArgoCD configuration in the IntegrationConfig's spec:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n integrations:\n argocd:\n namespace: openshift-operators\n clusterResourceWhitelist:\n - group: \"\"\n kind: Environment\n ...\n
This configuration ensures these resources are synced by ArgoCD if added to any tenant's project directory in GitOps. The AppProject will include the allow-listed resources:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: tenant-sample\n namespace: openshift-operators\nspec:\n ...\n clusterResourceWhitelist:\n - group: \"\"\n kind: Environment\n ...\n
"},{"location":"how-to-guides/enabling-multi-tenancy-argocd.html#overriding-namespaceresourceblacklist-andor-clusterresourcewhitelist-per-tenant","title":"Overriding NamespaceResourceBlacklist and/or ClusterResourceWhitelist Per Tenant","text":"To override the namespaceResourceBlacklist
and/or clusterResourceWhitelist
set via Integration Config for a specific tenant, administrators can specify these in the argoCD
section of the Extension CR:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Extensions\nmetadata:\n name: extensions-blue-sky\nspec:\n tenantName: blue-sky\n argoCD:\n onDeletePurgeAppProject: true\n appProject:\n sourceRepos:\n - \"github.com/stakater/repo\"\n clusterResourceWhitelist:\n - group: \"\"\n kind: \"Pod\"\n namespaceResourceBlacklist:\n - group: \"v1\"\n kind: \"ConfigMap\"\n
This configuration allows for tailored settings for each tenant, ensuring flexibility and control over ArgoCD resources.
"},{"location":"how-to-guides/enabling-multi-tenancy-vault.html","title":"Configuring Vault in IntegrationConfig","text":"Vault is used to secure, store and tightly control access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.
To enable Vault multi-tenancy, a role has to be created in Vault under Kubernetes authentication with the following permissions:
path \"secret/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"sys/mounts\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"sys/mounts/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"managed-addons/*\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"auth/kubernetes/role/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"sys/auth\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"sys/policies/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"identity/group\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"identity/group-alias\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\npath \"identity/group/name/*\" {\n capabilities = [\"read\", \"list\"]\n}\npath \"identity/group/id/*\" {\n capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\n
If Bill (the cluster admin) has Vault configured in his cluster, then he can take benefit from MTO's integration with Vault.
MTO automatically creates Vault secret paths for tenants, where tenant members can securely save their secrets. It also authorizes tenant members to access these secrets via OIDC.
Bill would first have to integrate Vault with MTO by adding the details in IntegrationConfig. For more details
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n integrations:\n vault:\n enabled: true\n authMethod: kubernetes\n accessInfo: \n accessorPath: oidc/\n address: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n roleName: mto\n secretRef: \n name: ''\n namespace: ''\n config: \n ssoClient: vault\n
Bill then creates a tenant for Anna and John:
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n accessControl:\n owners:\n users:\n - anna@acme.org\n viewers:\n users:\n - john@acme.org\n quota: small\n namespaces:\n sandboxes:\n enabled: false\n
Now Bill goes to Vault
and sees that a path for tenant
has been made under the name bluesky/kv
, confirming that Tenant members with the Owner or Edit roles now have access to the tenant's Vault path.
Now if Anna sign's in to the Vault via OIDC, she can see her tenants path and secrets. Whereas if John sign's in to the Vault via OIDC, he can't see his tenants path or secrets as he doesn't have the access required to view them.
For more details around enabling Kubernetes auth in Vault, visit here
"},{"location":"how-to-guides/enabling-openshift-dev-workspace.html","title":"Enabling DevWorkspace for Tenant's sandbox in OpenShift","text":""},{"location":"how-to-guides/enabling-openshift-dev-workspace.html#devworkspaces-metadata-via-multi-tenant-operator","title":"DevWorkspaces metadata via Multi Tenant Operator","text":"DevWorkspaces require specific metadata on a namespace for it to work in it. With Multi Tenant Operator (MTO), you can create sandbox namespaces for users of a Tenant, and then add the required metadata automatically on all sandboxes.
"},{"location":"how-to-guides/enabling-openshift-dev-workspace.html#required-metadata-for-enabling-devworkspace-on-sandbox","title":"Required metadata for enabling DevWorkspace on sandbox","text":" labels:\n app.kubernetes.io/part-of: che.eclipse.org\n app.kubernetes.io/component: workspaces-namespace\n annotations:\n che.eclipse.org/username: <username>\n
"},{"location":"how-to-guides/enabling-openshift-dev-workspace.html#automate-sandbox-metadata-for-all-tenant-users-via-tenant-cr","title":"Automate sandbox metadata for all Tenant users via Tenant CR","text":"With Multi Tenant Operator (MTO), you can set sandboxMetadata
like below to automate metadata for all sandboxes:
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@acme.org\n editors:\n users:\n - erik@acme.org\n viewers:\n users:\n - john@acme.org\n namespaces:\n sandboxes:\n enabled: true\n private: false\n metadata:\n sandbox:\n labels:\n app.kubernetes.io/part-of: che.eclipse.org\n app.kubernetes.io/component: workspaces-namespace\n annotations:\n che.eclipse.org/username: \"{{ TENANT.USERNAME }}\"\n
It will create sandbox namespaces and also apply the sandboxMetadata
for owners and editors. Notice the template {{ TENANT.USERNAME }}
, it will resolve the username as value of the corresponding annotation. For more info on templated value, see here
You can also automate the metadata on all sandbox namespaces by using IntegrationConfig, notice metadata.sandboxes
:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n accessControl:\n namespaceAccessPolicy:\n deny:\n privilegedNamespaces: {}\n privileged:\n namespaces:\n - ^default$\n - ^openshift.*\n - ^kube.*\n serviceAccounts:\n - ^system:serviceaccount:openshift.*\n - ^system:serviceaccount:kube.*\n - ^system:serviceaccount:stakater-actions-runner-controller:actions-runner-controller-runner-deployment$\n rbac:\n tenantRoles:\n default:\n editor:\n clusterRoles:\n - edit\n owner:\n clusterRoles:\n - admin\n viewer:\n clusterRoles:\n - view\n components:\n console: false\n ingress:\n console: {}\n gateway: {}\n keycloak: {}\n showback: false\n integrations:\n vault:\n accessInfo:\n accessorPath: \"\"\n address: \"\"\n roleName: \"\"\n secretRef:\n name: \"\"\n namespace: \"\"\n authMethod: kubernetes\n config:\n ssoClient: \"\"\n enabled: false\n metadata:\n groups: {}\n namespaces: {}\n sandboxes:\n labels:\n app.kubernetes.io/part-of: che.eclipse.org\n app.kubernetes.io/component: workspaces-namespace\n annotations:\n che.eclipse.org/username: \"{{ TENANT.USERNAME }}\"\n
For more info on templated value \"{{ TENANT.USERNAME }}\"
, see here
Bill as the cluster admin wants to extend the default access for tenant members. As an admin of an OpenShift Cluster, Bill can extend the admin, edit, and view ClusterRole using aggregation. Bill will first create a ClusterRole with privileges to resources which Bill wants to extend. Bill will add the aggregation label to the newly created ClusterRole for extending the default ClusterRoles.
kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: extend-admin-role\n labels:\n rbac.authorization.k8s.io/aggregate-to-admin: 'true'\nrules:\n - verbs:\n - create\n - update\n - patch\n - delete\n apiGroups:\n - user.openshift.io\n resources:\n - groups\n
Note: You can learn more about aggregated-cluster-roles
here
Effortlessly associate tenants with their respective resources using the enhanced graph feature on the MTO Console. This dynamic graph illustrates the relationships between tenants and the resources they create, encompassing both MTO's proprietary resources and native Kubernetes/OpenShift elements.
Example Graph:
graph LR;\n A(alpha)-->B(dev);\n A-->C(prod);\n B-->D(limitrange);\n B-->E(owner-rolebinding);\n B-->F(editor-rolebinding);\n B-->G(viewer-rolebinding);\n C-->H(limitrange);\n C-->I(owner-rolebinding);\n C-->J(editor-rolebinding);\n C-->K(viewer-rolebinding);
Explore with an intuitive graph that showcases the relationships between tenants and their resources. The MTO Console's graph feature simplifies the understanding of complex structures, providing you with a visual representation of your tenant's organization.
To view the graph of your tenant, follow the steps below:
Tenants
page on the MTO Console using the left navigation bar. View
of the tenant for which you want to view the graph. Graph
tab on the tenant details page. MTO Console uses Keycloak for authentication and authorization. By default, the MTO Console uses an internal Keycloak instance that is provisioned by the Multi Tenant Operator in its own namespace. However, you can also integrate an external Keycloak instance with the MTO Console.
This guide will help you integrate an external Keycloak instance with the MTO Console.
"},{"location":"how-to-guides/integrating-external-keycloak.html#prerequisites","title":"Prerequisites","text":"Navigate to the Keycloak console.
Clients
.Create
button to create a new client.Create a new client.
Client ID
, Client Name
and Client Protocol
fields.Valid Redirect URIs
and Web Origins
for the client.Note: The Valid Redirect URIs
and Web Origins
should be the URL of the MTO Console.
Save
button.IntegrationConfig
CR with the following configuration.integrations: \n keycloak:\n realm: <realm>\n address: <keycloak-address>\n clientName: <client-name>\n
This guide walks you through the process of adding new users in Keycloak and granting them access to Multi Tenant Operator (MTO) Console.
"},{"location":"how-to-guides/keycloak.html#accessing-keycloak-console","title":"Accessing Keycloak Console","text":"mto
realm.Users
section in the mto
realm.Now, at this point, a user will be authenticated to the MTO Console. But in order to get access to view any Tenant resources, the user will need to be part of a Tenant.
"},{"location":"how-to-guides/keycloak.html#granting-access-to-tenant-resources","title":"Granting Access to Tenant Resources","text":"apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: arsenal\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - gabriel@arsenal.com\n groups:\n - arsenal\n editors:\n users:\n - hakimi@arsenal.com\n viewers:\n users:\n - neymar@arsenal.com\n
john@arsenal.com
and wish to add them as an editor, the edited section would look like this:editors:\n users:\n - gabriel@arsenal.com\n - benzema@arsenal.com\n
Once the above steps are completed, you should be able to access the MTO Console now and see alpha Tenant's details along with all the other resources such as namespaces and templates that John has access to.
"},{"location":"how-to-guides/mattermost.html","title":"Creating Mattermost Teams for your tenant","text":""},{"location":"how-to-guides/mattermost.html#requirements","title":"Requirements","text":"MTO-Mattermost-Integration-Operator
Please contact stakater to install the Mattermost integration operator before following the below-mentioned steps.
"},{"location":"how-to-guides/mattermost.html#steps-to-enable-integration","title":"Steps to enable integration","text":"Bill wants some tenants to also have their own Mattermost Teams. To make sure this happens correctly, Bill will first add the stakater.com/mattermost: true
label to the tenants. The label will enable the mto-mattermost-integration-operator
to create and manage Mattermost Teams based on Tenants.
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: sigma\n labels:\n stakater.com/mattermost: 'true'\nspec:\n quota: medium\n accessControl:\n owners:\n users:\n - user\n editors:\n users:\n - user1\n namespaces:\n sandboxes:\n enabled: false\n withTenantPrefix:\n - dev\n - build\n - prod\n
Now user can log In to Mattermost to see their Team and relevant channels associated with it.
The name of the Team is similar to the Tenant name. Notification channels are pre-configured for every team, and can be modified.
"},{"location":"how-to-guides/resource-sync-by-tgi.html","title":"Sync Resources Deployed by TemplateGroupInstance","text":"The TemplateGroupInstance CR provides two types of resource sync for the resources mentioned in Template
For the given example, let's consider we want to apply the following template
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n\n - apiVersion: v1\n kind: ServiceAccount\n metadata:\n name: example-automated-thing\n secrets:\n - name: example-automated-thing-token-zyxwv\n
And the following TemplateGroupInstance is used to deploy these resources to namespaces having label kind: build
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
As we can see, in our TGI, we have a field spec.sync
which is set to true
. This will update the resources on two conditions:
The TemplateGroupInstance CR is reconciled/updated
If, for any reason, the underlying resource gets updated or deleted, TemplateGroupInstance
CR will try to revert it back to the state mentioned in the Template
CR.
Note
Updates to ServiceAccounts are ignored by both, reconciler and informers, in an attempt to avoid conflict between the TGI controller and Kube Controller Manager. ServiceAccounts are only reverted in case of unexpected deletions when sync is true.
"},{"location":"how-to-guides/resource-sync-by-tgi.html#ignore-resources-updates-on-resources","title":"Ignore Resources Updates on Resources","text":"If the resources mentioned in Template
CR conflict with another controller/operator, and you want TemplateGroupInstance to not actively revert the resource updates, you can add the following label to the conflicting resource multi-tenant-operator/ignore-resource-updates: \"\"
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n\n - apiVersion: v1\n kind: ServiceAccount\n metadata:\n name: example-automated-thing\n labels:\n multi-tenant-operator/ignore-resource-updates: \"\"\n secrets:\n - name: example-automated-thing-token-zyxwv\n
Note
However, this label will not stop Multi Tenant Operator from updating the resource on following conditions: - Template gets updated - TemplateGroupInstance gets updated - Resource gets deleted
If you don't want to sync the resources in any case, you can disable sync via sync: false
in TemplateGroupInstance
spec.
You can uninstall MTO by following these steps:
Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set spec.onDelete.cleanNamespaces
to false
for all those tenants whose namespaces you want to retain, and spec.onDelete.cleanAppProject
to false
for all those tenants whose AppProject you want to retain. For more details check out onDelete
In case you have enabled console, you will have to disable it first by navigating to Search
-> IntegrationConfig
-> tenant-operator-config
and set spec.provision.console
and spec.provision.showback
to false
.
Remove IntegrationConfig CR from the cluster by navigating to Search
-> IntegrationConfig
-> tenant-operator-config
and select Delete
from actions dropdown.
After making the required changes open OpenShift console and click on Operators
, followed by Installed Operators
from the side menu
Now the operator has been uninstalled.
Optional:
you can also manually remove MTO's CRDs and its resources from the cluster.
This document contains instructions on installing, uninstalling and configuring Multi Tenant Operator using OpenShift MarketPlace.
OpenShift OperatorHub UI
CLI/GitOps
Enabling Console
License configuration
Uninstall
Operators
, followed by OperatorHub
from the side menuMulti Tenant Operator
and then click on Multi Tenant Operator
tileinstall
buttonUpdated channel
. Select multi-tenant-operator
to install the operator in multi-tenant-operator
namespace from Installed Namespace
dropdown menu. After configuring Update approval
click on the install
button.Note: Use stable
channel for seamless upgrades. For Production Environment
prefer Manual
approval and use Automatic
for Development Environment
Note: MTO will be installed in multi-tenant-operator
namespace.
multi-tenant-operator
oc create namespace multi-tenant-operator\nnamespace/multi-tenant-operator created\n
multi-tenant-operator
namespace.oc create -f - << EOF\napiVersion: operators.coreos.com/v1\nkind: OperatorGroup\nmetadata:\n name: tenant-operator\n namespace: multi-tenant-operator\nEOF\noperatorgroup.operators.coreos.com/tenant-operator created\n
multi-tenant-operator
namespace. To enable console set .spec.config.env[].ENABLE_CONSOLE
to true
. This will create a route resource, which can be used to access the Multi-Tenant-Operator console.oc create -f - << EOF\napiVersion: operators.coreos.com/v1alpha1\nkind: Subscription\nmetadata:\n name: tenant-operator\n namespace: multi-tenant-operator\nspec:\n channel: stable\n installPlanApproval: Automatic\n name: tenant-operator\n source: certified-operators\n sourceNamespace: openshift-marketplace\n startingCSV: tenant-operator.v0.10.0\nEOF\nsubscription.operators.coreos.com/tenant-operator created\n
Note: To bring MTO via GitOps, add the above files in GitOps repository.
subscription
custom resource open OpenShift console and click on Operators
, followed by Installed Operators
from the side menuWorkloads
, followed by Pods
from the side menu and select multi-tenant-operator
projectFor more details and configurations check out IntegrationConfig.
"},{"location":"installation/openshift.html#enabling-console","title":"Enabling Console","text":"To enable console GUI for MTO, go to Search
-> IntegrationConfig
-> tenant-operator-config
and make sure the following fields are set to true
:
spec:\n components:\n console: true\n showback: true\n
Note: If your InstallPlan
approval is set to Manual
then you will have to manually approve the InstallPlan
for MTO console components to be installed.
Operators
, followed by Installed Operators
from the side menu.Upgrade available
in front of mto-opencost
or mto-prometheus
.Preview InstallPlan
on top.Approve
button.InstallPlan
will be approved, and MTO console components will be installed.We offer a free license with installation, and you can create max 2 Tenants with it.
We offer a paid license as well. You need to have a configmap license
created in MTO's namespace (multi-tenant-operator). To get this configmap, you can contact sales@stakater.com
. It would look like this:
apiVersion: v1\nkind: ConfigMap\nmetadata:\n name: license\n namespace: multi-tenant-operator\ndata:\n payload.json: |\n {\n \"metaData\": {\n \"tier\" : \"paid\",\n \"company\": \"<company name here>\"\n }\n }\n signature.base64.txt: <base64 signature here.>\n
"},{"location":"installation/openshift.html#uninstall-via-operatorhub-ui","title":"Uninstall via OperatorHub UI","text":"You can uninstall MTO by following these steps:
Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set spec.onDelete.cleanNamespaces
to false
for all those tenants whose namespaces you want to retain, and spec.onDelete.cleanAppProject
to false
for all those tenants whose AppProject you want to retain. For more details check out onDelete
After making the required changes open OpenShift console and click on Operators
, followed by Installed Operators
from the side menu
Now the operator has been uninstalled.
Optional:
you can also manually remove MTO's CRDs and its resources from the cluster.
Bill is a cluster admin who wants to map a docker-pull-secret
, present in a build
namespace, in tenant namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n resourceMappings:\n secrets:\n - name: docker-pull-secret\n namespace: build\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
Afterward, Bill can see that secrets has been successfully mapped in all matching namespaces.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"tutorials/distributing-resources/copying-resources.html#mapping-resources-within-tenant-namespaces-via-ti","title":"Mapping Resources within Tenant Namespaces via TI","text":"Anna is a tenant owner who wants to map a docker-pull-secret
, present in bluseky-build
namespace, to bluesky-anna-aurora-sandbox
namespace.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n resourceMappings:\n secrets:\n - name: docker-pull-secret\n namespace: bluesky-build\n
Once the template has been created, Anna creates a TemplateInstance
in bluesky-anna-aurora-sandbox
namespace, referring to the Template
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: docker-secret-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: docker-pull-secret\n sync: true\n
Afterward, Anna can see that the secret has been successfully mapped in her namespace.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"tutorials/distributing-resources/distributing-manifests.html","title":"Distributing Resources in Namespaces","text":"Multi Tenant Operator has two Custom Resources which can cover this need using the Template
 CR, depending upon the conditions and your preference: TemplateGroupInstance for distributing resources across multiple matching namespaces, and TemplateInstance for a single namespace.
Bill, the cluster admin, wants to deploy a docker pull secret in namespaces where certain labels exist.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
 he wants to deploy via the template field, and selects the namespaces where resources are needed using the selector
field:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-secret\n selector:\n matchExpressions:\n - key: kind\n operator: In\n values:\n - build\n sync: true\n
Afterward, Bill can see that the secret has been successfully created in all label-matching namespaces.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 2m\n
TemplateGroupInstance
can also target specific tenants or all tenant namespaces under a single YAML definition.
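For example, a sketch of a TemplateGroupInstance that targets the namespaces of specific tenants by matching on the stakater.com/tenant label used to associate namespaces with a tenant, as shown in the namespace examples in this documentation; the tenant names in the selector are assumptions:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-tenants-instance\nspec:\n template: docker-secret\n selector:\n matchExpressions:\n - key: stakater.com/tenant # tenant label on namespaces\n operator: In\n values:\n - bluesky # assumed tenant names\n - alpha\n sync: true\n
Under the same assumption, using operator: Exists on that key would target all tenant namespaces with a single definition.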
In the v1beta3 version of the Tenant Custom Resource (CR), metadata assignment has been refined to offer granular control over labels and annotations across different namespaces associated with a tenant. This functionality enables precise and flexible management of metadata, catering to both general and specific needs.
"},{"location":"tutorials/tenant/assigning-metadata.html#distributing-common-labels-and-annotations","title":"Distributing Common Labels and Annotations","text":"To apply common labels and annotations across all namespaces within a tenant, the namespaces.metadata.common
field in the Tenant CR is utilized. This approach ensures that essential metadata is uniformly present across all namespaces, supporting consistent identification, management, and policy enforcement.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n metadata:\n common:\n labels:\n app.kubernetes.io/managed-by: tenant-operator\n app.kubernetes.io/part-of: tenant-alpha\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/infra=\nEOF\n
By configuring the namespaces.metadata.common
field as shown, all namespaces within the tenant will inherit the specified labels and annotations.
For scenarios requiring targeted application of labels and annotations to specific namespaces, the Tenant CR's namespaces.metadata.specific
field is designed. This feature enables the assignment of unique metadata to designated namespaces, accommodating specialized configurations and requirements.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n metadata:\n specific:\n - namespaces:\n - bluesky-dev\n labels:\n app.kubernetes.io/is-sandbox: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\nEOF\n
This configuration directs the specific labels and annotations solely to the enumerated namespaces, enabling distinct settings for particular environments.
"},{"location":"tutorials/tenant/assigning-metadata.html#assigning-metadata-to-sandbox-namespaces","title":"Assigning Metadata to Sandbox Namespaces","text":"To specifically address sandbox namespaces within the tenant, the namespaces.metadata.sandbox
property of the Tenant CR is employed. This section allows for the distinct management of sandbox namespaces, enhancing security and differentiation in development or testing environments.
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n sandboxes:\n enabled: true\n private: true\n metadata:\n sandbox:\n labels:\n app.kubernetes.io/part-of: che.eclipse.org\n annotations:\n che.eclipse.org/username: \"{{ TENANT.USERNAME }}\" # templated placeholder\n
This setup ensures that all sandbox namespaces receive the designated metadata, with support for templated values, such as {{ TENANT.USERNAME }}, allowing dynamic customization based on the tenant or user context.
These enhancements in metadata management within the v1beta3
version of the Tenant CR provide comprehensive and flexible tools for labeling and annotating namespaces, supporting a wide range of organizational, security, and operational objectives.
Sandbox namespaces offer a personal development and testing space for users within a tenant. This guide covers how to enable and configure sandbox namespaces for tenant users, along with setting privacy and applying metadata specifically for these sandboxes.
"},{"location":"tutorials/tenant/create-sandbox.html#enabling-sandbox-namespaces","title":"Enabling Sandbox Namespaces","text":"Bill has assigned the ownership of the tenant bluesky to Anna and Anthony. To provide them with their sandbox namespaces, he must enable the sandbox functionality in the tenant's configuration.
To enable sandbox namespaces, Bill updates the Tenant Custom Resource (CR) with sandboxes.enabled: true:
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n sandboxes:\n enabled: true\nEOF\n
This configuration automatically generates sandbox namespaces for Anna, Anthony, and even John (as an editor) with the naming convention <tenantName>-<userName>-sandbox
.
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\nbluesky-anthony-aurora-sandbox Active 5d5h\nbluesky-john-aurora-sandbox Active 5d5h\n
"},{"location":"tutorials/tenant/create-sandbox.html#creating-private-sandboxes","title":"Creating Private Sandboxes","text":"To address privacy concerns where users require their sandbox namespaces to be visible only to themselves, Bill can set the sandboxes.private: true
in the Tenant CR:
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n sandboxes:\n enabled: true\n private: true\nEOF\n
With private: true
, each sandbox namespace is accessible and visible only to its designated user, enhancing privacy and security.
With the above configuration, Anna and Anthony will now have new sandboxes created:
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\nbluesky-anthony-aurora-sandbox Active 5d5h\nbluesky-john-aurora-sandbox Active 5d5h\n
However, from the perspective of Anna
, only her own sandbox will be visible:
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\n
"},{"location":"tutorials/tenant/create-sandbox.html#applying-metadata-to-sandbox-namespaces","title":"Applying Metadata to Sandbox Namespaces","text":"For uniformity or to apply specific policies, Bill might need to add common metadata, such as labels or annotations, to all sandbox namespaces. This is achievable through the namespaces.metadata.sandbox
configuration:
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n sandboxes:\n enabled: true\n private: true\n metadata:\n sandbox:\n labels:\n app.kubernetes.io/part-of: che.eclipse.org\n annotations:\n che.eclipse.org/username: \"{{ TENANT.USERNAME }}\"\nEOF\n
The templated annotation \"{{ TENANT.USERNAME }}\" dynamically inserts the username of the sandbox owner, personalizing the sandbox environment. This capability is particularly useful for integrating with other systems or applications that might utilize this metadata for configuration or access control.
Through the examples demonstrated, Bill can efficiently manage sandbox namespaces for tenant users, ensuring they have the necessary resources for development and testing while maintaining privacy and organizational policies.
"},{"location":"tutorials/tenant/create-tenant.html","title":"Creating a Tenant","text":"Bill, a cluster admin, has been tasked by the CTO of Nordmart to set up a new tenant for Anna's team. Following the request, Bill proceeds to create a new tenant named bluesky in the Kubernetes cluster.
"},{"location":"tutorials/tenant/create-tenant.html#setting-up-the-tenant","title":"Setting Up the Tenant","text":"To establish the tenant, Bill crafts a Tenant Custom Resource (CR) with the necessary specifications:
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n sandboxes:\n enabled: false\nEOF\n
In this configuration, Bill specifies anna@aurora.org as the owner, giving her full administrative rights over the tenant. The editor role is assigned to john@aurora.org and the group alpha, providing them with editing capabilities within the tenant's scope.
"},{"location":"tutorials/tenant/create-tenant.html#verifying-the-tenant-creation","title":"Verifying the Tenant Creation","text":"After creating the tenant, Bill checks its status to confirm it's active and operational:
kubectl get tenants.tenantoperator.stakater.com bluesky\nNAME STATE AGE\nbluesky Active 3m\n
This output indicates that the tenant bluesky is successfully created and in an active state.
"},{"location":"tutorials/tenant/create-tenant.html#checking-user-permissions","title":"Checking User Permissions","text":"To ensure the roles and permissions are correctly assigned, Anna logs into the cluster to verify her capabilities:
Namespace Creation:
kubectl auth can-i create namespaces\nyes\n
Anna is confirmed to have the ability to create namespaces within the tenant's scope.
Cluster Resources Access:
kubectl auth can-i get namespaces\nno\n\nkubectl auth can-i get persistentvolumes\nno\n
As expected, Anna does not have access to broader cluster resources outside the tenant's confines.
Tenant Resource Access:
kubectl auth can-i get tenants.tenantoperator.stakater.com\nno\n
Access to the Tenant resource itself is also restricted, aligning with the principle of least privilege.
"},{"location":"tutorials/tenant/create-tenant.html#adding-multiple-owners-to-a-tenant","title":"Adding Multiple Owners to a Tenant","text":"Later, if there's a need to grant administrative privileges to another user, such as Anthony, Bill can easily update the tenant's configuration to include multiple owners:
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n sandboxes:\n enabled: false\nEOF\n
With this update, both Anna and Anthony can administer the tenant bluesky, including the creation of namespaces:
kubectl auth can-i create namespaces\nyes\n
This flexible approach allows Bill to manage tenant access control efficiently, ensuring that the team's operational needs are met while maintaining security and governance standards.
"},{"location":"tutorials/tenant/creating-namespaces.html","title":"Creating Namespaces through Tenant Custom Resource","text":"Bill, tasked with structuring namespaces for different environments within a tenant, utilizes the Tenant Custom Resource (CR) to streamline this process efficiently. Here's how Bill can orchestrate the creation of dev
, build
, and production
environments for the tenant members directly through the Tenant CR.
To facilitate the environment setup, Bill decides to categorize the namespaces based on their association with the tenant's name. He opts to use the namespaces.withTenantPrefix
field for namespaces that should carry the tenant name as a prefix, enhancing clarity and organization. For namespaces that do not require a tenant name prefix, Bill employs the namespaces.withoutTenantPrefix
field.
Here's how Bill configures the Tenant CR to create these namespaces:
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n namespaces:\n withTenantPrefix:\n - dev\n - build\n withoutTenantPrefix:\n - prod\nEOF\n
This configuration ensures the creation of the desired namespaces, directly correlating them with the bluesky tenant.
Upon applying the above configuration, Bill and the tenant members observe the creation of the following namespaces:
kubectl get namespaces\nNAME STATUS AGE\nbluesky-dev Active 5m\nbluesky-build Active 5m\nprod Active 5m\n
Anna, as a tenant owner, gains the capability to further customize or create new namespaces within her tenant's scope. For example, creating a bluesky-production namespace with the necessary tenant label:
apiVersion: v1\nkind: Namespace\nmetadata:\n name: bluesky-production\n labels:\n stakater.com/tenant: bluesky\n
\u26a0\ufe0f It's crucial for Anna to include the tenant label stakater.com/tenant: bluesky
to ensure the namespace is recognized as part of the bluesky tenant. Failure to do so, or if Anna is not associated with the bluesky tenant, will result in Multi Tenant Operator (MTO) denying the namespace creation.
Following the creation, the MTO dynamically assigns roles to Anna and other tenant members according to their designated user types, ensuring proper access control and operational capabilities within these namespaces.
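To observe this, the role bindings MTO creates in the new namespace could be listed; a sketch (binding names depend on the MTO configuration, so output is omitted):
kubectl get rolebindings -n bluesky-production\n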
"},{"location":"tutorials/tenant/creating-namespaces.html#incorporating-existing-namespaces-into-the-tenant-via-argocd","title":"Incorporating Existing Namespaces into the Tenant via ArgoCD","text":"For teams practicing GitOps, existing namespaces can be seamlessly integrated into the Tenant structure by appending the tenant label to the namespace's manifest within the GitOps repository. This approach allows for efficient, automated management of namespace affiliations and access controls, ensuring a cohesive tenant ecosystem.
"},{"location":"tutorials/tenant/creating-namespaces.html#add-existing-namespaces-to-tenant-via-gitops","title":"Add Existing Namespaces to Tenant via GitOps","text":"Using GitOps as your preferred development workflow, you can add existing namespaces for your tenants by including the tenant label.
To add an existing namespace to your tenant via GitOps:
To disassociate or remove namespaces from the cluster through GitOps, the namespace configuration should be eliminated from the GitOps repository. Additionally, detaching the namespace from any ArgoCD-managed applications by removing the app.kubernetes.io/instance
label ensures a clean removal without residual dependencies.
Synchronizing the repository post-removal finalizes the deletion process, effectively managing the lifecycle of namespaces within a tenant-operated Kubernetes environment.
"},{"location":"tutorials/tenant/deleting-tenant.html","title":"Deleting a Tenant While Preserving Resources","text":"When managing tenant lifecycles within Kubernetes, certain scenarios require the deletion of a tenant without removing associated namespaces or ArgoCD AppProjects. This ensures that resources and configurations tied to the tenant remain intact for archival or transition purposes.
"},{"location":"tutorials/tenant/deleting-tenant.html#configuration-for-retaining-resources","title":"Configuration for Retaining Resources","text":"Bill decides to decommission the bluesky tenant but needs to preserve all related namespaces for continuity. To achieve this, he adjusts the Tenant Custom Resource (CR) to prevent the automatic cleanup of these resources upon tenant deletion.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n quota: small\n accessControl:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n namespaces:\n sandboxes:\n enabled: true\n withTenantPrefix:\n - dev\n - build\n - prod\n onDeletePurgeNamespaces: false\nEOF\n
With the onDeletePurgeNamespaces
fields set to false, Bill ensures that the deletion of the bluesky tenant does not trigger the removal of its namespaces. This setup is crucial for cases where the retention of environment setups and deployments is necessary post-tenant deletion.
It's important to note the default behavior of the Tenant Operator regarding resource cleanup:
Namespaces: By default, onDeletePurgeNamespaces
is set to false, implying that namespaces are not automatically deleted with the tenant unless explicitly configured.
Once the Tenant CR is configured as desired, Bill can proceed to delete the bluesky tenant:
kubectl delete tenant bluesky\n
This command removes the tenant resource from the cluster while leaving the specified namespaces untouched, adhering to the configured onDeletePurgeNamespaces
policies. This approach provides flexibility in managing the lifecycle of tenant resources, catering to various operational strategies and compliance requirements.
Implementing hibernation for tenants' namespaces efficiently manages cluster resources by temporarily reducing workload activities during off-peak hours. This guide demonstrates how to configure hibernation schedules for tenant namespaces, leveraging Tenant and ResourceSupervisor for precise control.
"},{"location":"tutorials/tenant/tenant-hibernation.html#configuring-hibernation-for-tenant-namespaces","title":"Configuring Hibernation for Tenant Namespaces","text":"You can manage workloads in your cluster with MTO by implementing a hibernation schedule for your tenants. Hibernation downsizes the running Deployments and StatefulSets in a tenant\u2019s namespace according to a defined cron schedule. You can set a hibernation schedule for your tenants by adding the \u2018spec.hibernation\u2019 field to the tenant's respective Custom Resource.
hibernation:\n sleepSchedule: 23 * * * *\n wakeSchedule: 26 * * * *\n
spec.hibernation.sleepSchedule
accepts a cron expression indicating the time to put the workloads in your tenant\u2019s namespaces to sleep.
spec.hibernation.wakeSchedule
accepts a cron expression indicating the time to wake the workloads in your tenant\u2019s namespaces up.
Note
Both sleep and wake schedules must be specified for your Hibernation schedule to be valid.
Additionally, adding the hibernation.stakater.com/exclude: 'true'
annotation to a namespace excludes it from hibernating.
Note
This is only true for hibernation applied via the Tenant Custom Resource, and does not apply for hibernation done by manually creating a ResourceSupervisor (details about that below).
Note
This will not wake up an already sleeping namespace before the wake schedule.
"},{"location":"tutorials/tenant/tenant-hibernation.html#resource-supervisor","title":"Resource Supervisor","text":"Adding a Hibernation Schedule to a Tenant creates an accompanying ResourceSupervisor Custom Resource. The Resource Supervisor stores the Hibernation schedules and manages the current and previous states of all the applications, whether they are sleeping or awake.
When the sleep timer is activated, the Resource Supervisor controller stores the details of your applications (including the number of replicas, configurations, etc.) in the applications' namespaces and then puts your applications to sleep. When the wake timer is activated, the controller wakes up the applications using their stored details.
Enabling ArgoCD support for Tenants will also hibernate applications in the tenants' appProjects
.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: sigma\nspec:\n argocd:\n appProjects:\n - sigma\n namespace: openshift-gitops\n hibernation:\n sleepSchedule: 42 * * * *\n wakeSchedule: 45 * * * *\n namespaces:\n - tenant-ns1\n - tenant-ns2\n
Currently, Hibernation is available only for StatefulSets and Deployments.
"},{"location":"tutorials/tenant/tenant-hibernation.html#manual-creation-of-resourcesupervisor","title":"Manual creation of ResourceSupervisor","text":"Hibernation can also be applied by creating a ResourceSupervisor resource manually. The ResourceSupervisor definition will contain the hibernation cron schedule, the names of the namespaces to be hibernated, and the names of the ArgoCD AppProjects whose ArgoCD Applications have to be hibernated (as per the given schedule).
This method can be used to hibernate:
As an example, the following ResourceSupervisor could be created manually, to apply hibernation explicitly to the 'ns1' and 'ns2' namespaces, and to the 'sample-app-project' AppProject.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: hibernator\nspec:\n argocd:\n appProjects:\n - sample-app-project\n namespace: openshift-gitops\n hibernation:\n sleepSchedule: 42 * * * *\n wakeSchedule: 45 * * * *\n namespaces:\n - ns1\n - ns2\n
"},{"location":"tutorials/tenant/tenant-hibernation.html#freeing-up-unused-resources-with-hibernation","title":"Freeing up unused resources with hibernation","text":""},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-a-tenant_1","title":"Hibernating a tenant","text":"Bill is a cluster administrator who wants to free up unused cluster resources at nighttime, in an effort to reduce costs (when the cluster isn't being used).
First, Bill creates a tenant with the hibernation
schedules mentioned in the spec, or adds the hibernation field to an existing tenant:
apiVersion: tenantoperator.stakater.com/v1beta3\nkind: Tenant\nmetadata:\n name: sigma\nspec:\n hibernation:\n sleepSchedule: \"0 20 * * 1-5\" # Sleep at 8 PM on weekdays\n wakeSchedule: \"0 8 * * 1-5\" # Wake at 8 AM on weekdays\n owners:\n users:\n - user@example.com\n quota: medium\n namespaces:\n withoutTenantPrefix:\n - dev\n - stage\n - build\n
The schedules above will put all the Deployments
and StatefulSets
within the tenant's namespaces to sleep, by reducing their pod count to 0 at 8 PM every weekday. At 8 AM on weekdays, the namespaces will then wake up by restoring their applications' previous pod counts.
Bill can verify this behaviour by checking the newly created ResourceSupervisor resource at run time:
oc get ResourceSupervisor -A\nNAME AGE\nsigma 5m\n
The ResourceSupervisor will look like this at 'running' time (as per the schedule):
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - build\n - stage\n - dev\nstatus:\n currentStatus: running\n nextReconcileTime: '2022-10-12T20:00:00Z'\n
The ResourceSupervisor will look like this at 'sleeping' time (as per the schedule):
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - build\n - stage\n - dev\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: build\n kind: Deployment\n name: example\n replicas: 3\n - Namespace: stage\n kind: Deployment\n name: example\n replicas: 3\n
Bill wants to prevent the build
namespace from going to sleep, so he can add the hibernation.stakater.com/exclude: 'true'
annotation to it. The ResourceSupervisor will now look like this after reconciling:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - stage\n - dev\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: stage\n kind: Deployment\n name: example\n replicas: 3\n
"},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-namespaces-andor-argocd-applications-with-resourcesupervisor","title":"Hibernating namespaces and/or ArgoCD Applications with ResourceSupervisor","text":"Bill, the cluster administrator, wants to hibernate a collection of namespaces and AppProjects belonging to multiple different tenants. He can do so by creating a ResourceSupervisor manually, specifying the hibernation schedule in its spec, the namespaces and ArgoCD Applications that need to be hibernated as per the mentioned schedule. Bill can also use the same method to hibernate some namespaces and ArgoCD Applications that do not belong to any tenant on his cluster.
The example given below will hibernate the ArgoCD Applications in the 'test-app-project' AppProject; and it will also hibernate the 'ns2' and 'ns4' namespaces.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: test-resource-supervisor\nspec:\n argocd:\n appProjects:\n - test-app-project\n namespace: argocd-ns\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - ns2\n - ns4\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: ns2\n kind: Deployment\n name: test-deployment\n replicas: 3\n
"}]}
\ No newline at end of file
diff --git a/0.12/sitemap.xml b/0.12/sitemap.xml
index f97a81cd8..ff37398b3 100644
--- a/0.12/sitemap.xml
+++ b/0.12/sitemap.xml
@@ -2,212 +2,212 @@