diff --git a/0.10/explanation/auth.html b/0.10/explanation/auth.html
new file mode 100644
index 000000000..bacbd6704
--- /dev/null
+++ b/0.10/explanation/auth.html
@@ -0,0 +1,1638 @@
+
+
+
+
+
+
@@ -1576,10 +1609,10 @@ Showback
The Showback feature is an essential financial governance tool, providing detailed insights into the cost implications of resource usage by tenant, namespace, or other filters. This facilitates transparent cost management and an internal chargeback or showback process, enabling informed decision-making regarding resource consumption and budgeting.
User Roles and Permissions
-Administrators :
+Administrators
Administrators have overarching access to the console, including the ability to view all namespaces and tenants. They have exclusive access to the IntegrationConfig, allowing them to view all the settings and integrations.
-Tenant Users :
+Tenant Users
Regular tenant users can monitor and manage their allocated resources. However, they do not have access to the IntegrationConfig and cannot view resources across different tenants, ensuring data privacy and operational integrity.
Live YAML Configuration and Graph View
In the MTO Console, each resource section is equipped with a "View" button, revealing the live YAML configuration for complete information on the resource. For Tenant resources, a supplementary "Graph" option is available, illustrating the relationships and dependencies of all resources under a Tenant. This dual-view approach empowers users with both the detailed control of YAML and the holistic oversight of the graph view.
@@ -1589,6 +1622,13 @@ Caching and Database
MTO integrates a dedicated database to streamline resource management. All resources managed by MTO are now stored in a PostgreSQL database, enhancing the MTO Console's ability to retrieve them quickly for presentation.
The implementation of this feature is facilitated by the Bootstrap controller, streamlining the deployment process. This controller creates the PostgreSQL Database, establishes a service for inter-pod communication, and generates a secret to ensure secure connectivity to the database.
Furthermore, the introduction of a dedicated cache layer ensures that there is no added burden on the kube API server when responding to MTO Console requests. This enhancement not only improves response times but also contributes to a more efficient and responsive resource management system.
+Authentication and Authorization
+The MTO Console ensures secure access control using a combination of Keycloak for authentication and a custom-built authorization module.
+Keycloak Integration
+Keycloak, an industry-standard authentication tool, is integrated for secure user login and management. It supports seamless integration with existing Active Directory or SSO systems and grants administrators complete control over user access.
+Custom Authorization Module
+Complementing Keycloak, our custom authorization module intelligently controls access based on user roles and their association with tenants. Special checks are in place for admin users, granting them comprehensive permissions.
+For more details on Keycloak's integration, PostgreSQL as persistent storage, and the intricacies of our authorization module, please visit here.
Conclusion
The MTO Console is engineered to simplify complex multi-tenant management. The current iteration focuses on providing comprehensive visibility. Future updates could include direct CUD (Create/Update/Delete) capabilities from the dashboard, enhancing the console’s functionality. The Showback feature remains a standout, offering critical cost tracking and analysis. The delineation of roles between administrators and tenant users ensures a secure and organized operational framework.
diff --git a/0.10/search/search_index.json b/0.10/search/search_index.json
index fdc973f12..c71fbb3eb 100644
--- a/0.10/search/search_index.json
+++ b/0.10/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"Introduction","text":"Kubernetes is designed to support a single tenant platform; OpenShift brings some improvements with its \"Secure by default\" concepts but it is still very complex to design and orchestrate all the moving parts involved in building a secure multi-tenant platform hence making it difficult for cluster admins to host multi-tenancy in a single OpenShift cluster. If multi-tenancy is achieved by sharing a cluster, it can have many advantages, e.g. efficient resource utilization, less configuration effort and easier sharing of cluster-internal resources among different tenants. OpenShift and all managed applications provide enough primitive resources to achieve multi-tenancy, but it requires professional skills and deep knowledge of OpenShift.
This is where Multi Tenant Operator (MTO) comes in and provides easy-to-manage, easy-to-configure multi-tenancy. MTO provides wrappers around OpenShift resources to provide a higher level of abstraction to users. With MTO, admins can configure Network and Security Policies, Resource Quotas, Limit Ranges, and RBAC for every tenant, which are automatically inherited by all the namespaces and users in the tenant. Depending on their role, users are free to operate within their tenants in complete autonomy. MTO supports initializing new tenants using a GitOps management pattern. Changes can be managed via PRs just like a typical GitOps workflow, so tenants can request changes, add new users, or remove users.
The idea of MTO is to use namespaces as independent sandboxes, where tenant applications can run independently of each other. Cluster admins shall configure MTO's custom resources, which then become a self-service system for tenants. This minimizes the efforts of the cluster admins.
MTO enables cluster admins to host multiple tenants in a single OpenShift Cluster, i.e.:
- Share an OpenShift cluster with multiple tenants
- Share managed applications with multiple tenants
- Configure and manage tenants and their sandboxes
MTO is also OpenShift certified
"},{"location":"index.html#features","title":"Features","text":"The major features of Multi Tenant Operator (MTO) are described below.
"},{"location":"index.html#kubernetes-multitenancy","title":"Kubernetes Multitenancy","text":"RBAC is one of the most complicated and error-prone parts of Kubernetes. With Multi Tenant Operator, you can rest assured that RBAC is configured with the \"least privilege\" mindset and all rules are kept up-to-date with zero manual effort.
Multi Tenant Operator binds existing ClusterRoles to the Tenant's Namespaces used for managing access to the Namespaces and the resources they contain. You can also modify the default roles or create new roles to have full control and customize access control for your users and teams.
Multi Tenant Operator is also able to leverage existing OpenShift groups or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system.
"},{"location":"index.html#hashicorp-vault-multitenancy","title":"HashiCorp Vault Multitenancy","text":"Multi Tenant Operator extends the tenants permission model to HashiCorp Vault where it can create Vault paths and greatly ease the overhead of managing RBAC in Vault. Tenant users can manage their own secrets without the concern of someone else having access to their Vault paths.
More details on Vault Multitenancy
"},{"location":"index.html#argocd-multitenancy","title":"ArgoCD Multitenancy","text":"Multi Tenant Operator is not only providing strong Multi Tenancy for the OpenShift internals but also extends the tenants permission model to ArgoCD were it can provision AppProjects and Allowed Repositories for your tenants greatly ease the overhead of managing RBAC in ArgoCD.
More details on ArgoCD Multitenancy
"},{"location":"index.html#resource-management","title":"Resource Management","text":"Multi Tenant Operator provides a mechanism for defining Resource Quotas at the tenant scope, meaning all namespaces belonging to a particular tenant share the defined quota, which is why you are able to safely enable dev teams to self serve their namespaces whilst being confident that they can only use the resources allocated based on budget and business needs.
More details on Quota
"},{"location":"index.html#templates-and-template-distribution","title":"Templates and Template distribution","text":"Multi Tenant Operator allows admins/users to define templates for namespaces, so that others can instantiate these templates to provision namespaces with batteries loaded. A template could pre-populate a namespace for certain use cases or with basic tooling required. Templates allow you to define Kubernetes manifests, Helm chart and more to be applied when the template is used to create a namespace.
Templates can also be parameterized for flexibility and ease of use, and their presence can be enforced in one tenant's or all tenants' namespaces to configure secure defaults.
Common use cases for namespace templates may be:
- Adding networking policies for multitenancy
- Adding development tooling to a namespace
- Deploying pre-populated databases with test data
- Injecting new namespaces with optional credentials such as image pull secrets
More details on Distributing Template Resources
"},{"location":"index.html#mto-console","title":"MTO Console","text":"Multi Tenant Operator Console is a comprehensive user interface designed for both administrators and tenant users to manage multi-tenant environments. The MTO Console simplifies the complexity involved in handling various aspects of tenants and their related resources. It serves as a centralized monitoring hub, offering insights into the current state of tenants, namespaces, templates and quotas. It is designed to provide a quick summary/snapshot of MTO's status and facilitates easier interaction with various resources such as tenants, namespaces, templates, and quotas.
More details on Console
"},{"location":"index.html#showback","title":"Showback","text":"The showback functionality in Multi Tenant Operator (MTO) Console is a significant feature designed to enhance the management of resources and costs in multi-tenant Kubernetes environments. This feature focuses on accurately tracking the usage of resources by each tenant, and/or namespace, enabling organizations to monitor and optimize their expenditures. Furthermore, this functionality supports financial planning and budgeting by offering a clear view of operational costs associated with each tenant. This can be particularly beneficial for organizations that chargeback internal departments or external clients based on resource usage, ensuring that billing is fair and reflective of actual consumption.
More details on Showback
"},{"location":"index.html#hibernation","title":"Hibernation","text":"Multi Tenant Operator can downscale Deployments and StatefulSets in a tenant's Namespace according to a defined sleep schedule. The Deployments and StatefulSets are brought back to their required replicas according to the provided wake schedule.
More details on Hibernation
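For illustration, here is a minimal sketch of how a hibernation schedule can be declared on a Tenant; the field layout and cron values follow the Tenant example later in these docs, while the tenant name and quota are placeholders:
apiVersion: tenantoperator.stakater.com/v1beta2
kind: Tenant
metadata:
  name: alpha                  # placeholder tenant name
spec:
  quota: small                 # placeholder Quota reference
  hibernation:
    sleepSchedule: 23 * * * *  # cron expression: when Deployments/StatefulSets are scaled down
    wakeSchedule: 26 * * * *   # cron expression: when the original replica counts are restored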
"},{"location":"index.html#mattermost-multitenancy","title":"Mattermost Multitenancy","text":"Multi Tenant Operator can manage Mattermost to create Teams for tenant users. All tenant users get a unique team and a list of predefined channels gets created. When a user is removed from the tenant, the user is also removed from the Mattermost team corresponding to tenant.
More details on Mattermost
"},{"location":"index.html#remote-development-namespaces","title":"Remote Development Namespaces","text":"Multi Tenant Operator can be configured to automatically provision a namespace in the cluster for every member of the specific tenant, that will also be preloaded with any selected templates and consume the same pool of resources from the tenants quota creating safe remote dev namespaces that teams can use as scratch namespace for rapid prototyping and development. So, every developer gets a Kubernetes-based cloud development environment that feel like working on localhost.
More details on Sandboxes
"},{"location":"index.html#cross-namespace-resource-distribution","title":"Cross Namespace Resource Distribution","text":"Multi Tenant Operator supports cloning of secrets and configmaps from one namespace to another namespace based on label selectors. It uses templates to enable users to provide reference to secrets and configmaps. It uses a template group instance to distribute those secrets and namespaces in matching namespaces, even if namespaces belong to different tenants. If template instance is used then the resources will only be mapped if namespaces belong to same tenant.
More details on Distributing Secrets and ConfigMaps
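Below is a minimal sketch of this mechanism, assuming a hypothetical Secret named docker-pull-secret in the build namespace and a kind: build label on the target namespaces; the field layout follows the Template and TemplateGroupInstance examples later in these docs:
apiVersion: tenantoperator.stakater.com/v1alpha1
kind: Template
metadata:
  name: docker-pull-secret          # hypothetical Template name
resources:
  resourceMappings:
    secrets:
      - name: docker-pull-secret    # source Secret (assumed to exist)
        namespace: build            # source namespace (assumed)
---
apiVersion: tenantoperator.stakater.com/v1alpha1
kind: TemplateGroupInstance
metadata:
  name: docker-pull-secret-tgi
spec:
  template: docker-pull-secret
  sync: true
  selector:
    matchLabels:
      kind: build                   # hypothetical label carried by the target namespaces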
"},{"location":"index.html#self-service","title":"Self-Service","text":"With Multi Tenant Operator, you can empower your users to safely provision namespaces for themselves and their teams (typically mapped to SSO groups). Team-owned namespaces and the resources inside them count towards the team's quotas rather than the user's individual limits and are automatically shared with all team members according to the access rules you configure in Multi Tenant Operator.
Also, by leveraging Multi Tenant Operator's templating mechanism, namespaces can be provisioned and automatically pre-populated with any kind of resource, or multiple resources, such as network policies, Docker pull secrets, or even Helm charts.
"},{"location":"index.html#everything-as-codegitops-ready","title":"Everything as Code/GitOps Ready","text":"Multi Tenant Operator is designed and built to be 100% OpenShift-native and to be configured and managed the same familiar way as native OpenShift resources so is perfect for modern shops that are dedicated to GitOps as it is fully configurable using Custom Resources.
"},{"location":"index.html#preventing-clusters-sprawl","title":"Preventing Clusters Sprawl","text":"As companies look to further harness the power of cloud-native, they are adopting container technologies at rapid speed, increasing the number of clusters and workloads. As the number of Kubernetes clusters grows, this is an increasing work for the Ops team. When it comes to patching security issues or upgrading clusters, teams are doing five times the amount of work.
With Multi Tenant Operator, a single cluster can be shared by multiple teams, groups of users, or departments, saving operational and management effort. This prevents Kubernetes cluster sprawl.
"},{"location":"index.html#native-experience","title":"Native Experience","text":"Multi Tenant Operator provides multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries.
"},{"location":"argocd-multitenancy.html","title":"ArgoCD Multi-tenancy","text":"ArgoCD is a declarative GitOps tool built to deploy applications to Kubernetes. While the continuous delivery (CD) space is seen by some as crowded these days, ArgoCD does bring some interesting capabilities to the table. Unlike other tools, ArgoCD is lightweight and easy to configure.
"},{"location":"argocd-multitenancy.html#why-argocd","title":"Why ArgoCD?","text":"Application definitions, configurations, and environments should be declarative and version controlled. Application deployment and lifecycle management should be automated, auditable, and easy to understand.
"},{"location":"argocd-multitenancy.html#argocd-integration-in-multi-tenant-operator","title":"ArgoCD integration in Multi Tenant Operator","text":"With Multi Tenant Operator (MTO), cluster admins can configure multi tenancy in their cluster. Now with ArgoCD integration, multi tenancy can be configured in ArgoCD applications and AppProjects.
MTO (if configured to) will create AppProjects for each tenant. The AppProject will allow tenants to create ArgoCD Applications that can be synced to namespaces owned by those tenants. Cluster admins will also be able to blacklist certain namespace resources if they want, and allow certain cluster-scoped resources as well (see the NamespaceResourceBlacklist
and ClusterResourceWhitelist
sections in Integration Config docs and Tenant Custom Resource docs).
Note that ArgoCD integration in MTO is completely optional.
"},{"location":"argocd-multitenancy.html#default-argocd-configuration","title":"Default ArgoCD configuration","text":"We have set a default ArgoCD configuration in Multi Tenant Operator that fulfils the following use cases:
- Tenants are able to see only their ArgoCD applications in the ArgoCD frontend
- Tenant 'Owners' and 'Editors' will have full access to their ArgoCD applications
- Tenants in the 'Viewers' group will have read-only access to their ArgoCD applications
- Tenants can sync all namespace-scoped resources, except those that are blacklisted in the spec
- Tenants can only sync cluster-scoped resources that are allow-listed in the spec
- Tenant 'Owners' can configure their own GitOps source repos at a tenant level
- Cluster admins can prevent specific resources from syncing via ArgoCD
- Cluster admins have full access to all ArgoCD applications and AppProjects
- Since ArgoCD integration is on a per-tenant level, namespace-scoped applications are only synced to Tenant's namespaces
Detailed use cases showing how to create AppProjects are mentioned in use cases for ArgoCD.
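For orientation, here is a minimal sketch of the argocd block on a Tenant, from which MTO can generate the AppProject; the field layout and example repository are taken from the Tenant example later in these docs, while the tenant name and quota are placeholders:
apiVersion: tenantoperator.stakater.com/v1beta2
kind: Tenant
metadata:
  name: alpha                # placeholder tenant name
spec:
  quota: small               # placeholder Quota reference
  argocd:
    sourceRepos:
      - https://github.com/stakater/gitops-config
    appProject:
      clusterResourceWhitelist:
        - group: tronador.stakater.com
          kind: Environment
      namespaceResourceBlacklist:
        - group: ""
          kind: ConfigMap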
"},{"location":"changelog.html","title":"Changelog","text":""},{"location":"changelog.html#v010x","title":"v0.10.x","text":""},{"location":"changelog.html#v0100","title":"v0.10.0","text":""},{"location":"changelog.html#feature","title":"Feature","text":" - Added support for caching for MTO Console using PostgreSQL as caching layer.
- Added support for custom metrics with Template, Template Instance and Template Group Instance.
- Graph visualization of Tenant and its associated resources on MTO Console.
- Tenant and Admin level authz/authn support within MTO Console and Gateway.
- Now in the MTO Console you can view the cost of different Tenant resources with date, resource type, and additional filters.
- MTO can now create a default keycloak realm, client and
mto-admin
user for Console. - Implemented Cluster Resource Quota for vanilla Kubernetes platform type.
- Dependency of TLS secrets for MTO Webhook.
- Added Helm Chart that would be used for installing MTO over Kubernetes.
- And it comes with default Cert Manager manifests for certificates.
- Support for MTO e2e.
"},{"location":"changelog.html#fix","title":"Fix","text":" - Updated CreateMergePatch to MergeMergePatches to address issues caused by losing
resourceVersion
and UID when converting oldObject
to newObject
. This prevents problems when the object is edited by another controller. - In Template Resource distribution for Secret type, we now consider the source's Secret field type, preventing default creation as Opaque regardless of the source's actual type.
- Enhanced admin permissions for tenant role in Vault to include Create, Update, Delete alongside existing Read and List privileges for the common-shared-secrets path. Viewers now have Read permission.
"},{"location":"changelog.html#enhanced","title":"Enhanced","text":" - Started to support Kubernetes along with OpenShift as platform type.
- Support of MTO's PostgreSQL instance as persistent storage for keycloak.
kube:admin
is now bypassed by default to perform operations, earlier kube:admin
needed to be mentioned in respective tenants to give it access over namespaces.
"},{"location":"changelog.html#v09x","title":"v0.9.x","text":""},{"location":"changelog.html#v094","title":"v0.9.4","text":" - enhance: Removed Quota's default support of adding it to Tenant CR in
spec.quota
, if quota.tenantoperator.stakater.com/is-default: \"true\"
annotation is present - fix: ValidatingWebhookConfiguration CRs are now owned by OLM, to handle cleanup upon operator uninstall
- enhance: TemplateGroupInstance CRs now actively watch the resources they apply, and perform functions to make sure they are in sync with the state mentioned in their respective Templates
More information about TemplateGroupInstance's sync at Sync Resources Deployed by TemplateGroupInstance
"},{"location":"changelog.html#v092","title":"v0.9.2","text":" - fix: Values within TemplateInstances created via Tenants will no longer be duplicated on Tenant CR update
- fix: Fixed a bug that made private namespaces become public
"},{"location":"changelog.html#v091","title":"v0.9.1","text":" - fix: Allow namespace controller to reconcile without crashing, if no IC exists
- fix: In case a group mentioned in IC doesn't exist, it won't block reconciliation or editing of MTO's manifests
"},{"location":"changelog.html#v090","title":"v0.9.0","text":" - feat: Added console for tenants, templates and integration config
- feat: Added support for custom realm name for RHSSO integration in Integration Config
- feat: Add multiple status conditions to tenant and TGI for success and failure cases
- feat: Show error messages with tenant and TGI status
- fix: Stop reconciliation breaking for tenant and TGI, instead continue and show warnings
- fix: Disable TGI/TI reconcile if mentioned template is not found.
- fix: Disable repeated users webhook in tenant
- enhance: Reduced API calls
- enhance: General enhancements and improvements
- chore: Update dependencies
"},{"location":"changelog.html#enabling-console","title":"Enabling console","text":" - To enable console visit Installation, and add config to subscription for OperatorHub based installation.
"},{"location":"changelog.html#v08x","title":"v0.8.x","text":""},{"location":"changelog.html#v083","title":"v0.8.3","text":" - fix: Reconcile namespaces when the group spec for tenants is changed, so new rolebindings can be created for them
"},{"location":"changelog.html#v081","title":"v0.8.1","text":" - fix: Updated release pipelines
"},{"location":"changelog.html#v080","title":"v0.8.0","text":" - feat: Allow custom roles for each tenant via label selector, more details in custom roles document
- Roles mapping is a required field in MTO's IntegrationConfig. By default, it will always be filled with OpenShift's admin/edit/view roles
- Ensure that mentioned roles exist within the cluster
- Remove coupling with OpenShift's built-in admin/edit/view roles
- feat: Removed coupling of ResourceSupervisor and Tenant resources
- Added list of namespaces to hibernate within the ResourceSupervisor resource
- Ensured that the same namespace cannot be added to two different Resource Supervisors
- Moved ResourceSupervisor into a separate pod
- Improved logs
- fix: Remove bug from tenant's common and specific metadata
- fix: Add missing field to Tenant's conversion webhook
- fix: Fix panic in ResourceSupervisor sleep functionality due to sending on closed channel
- chore: Update dependencies
"},{"location":"changelog.html#v07x","title":"v0.7.x","text":""},{"location":"changelog.html#v074","title":"v0.7.4","text":" - maintain: Automate certification of new MTO releases on RedHat's Operator Hub
"},{"location":"changelog.html#v073","title":"v0.7.3","text":" - feat: Updated Tenant CR to provide Tenant level AppProject permissions
"},{"location":"changelog.html#v072","title":"v0.7.2","text":" - feat: Add support to map secrets/configmaps from one namespace to other namespaces using TI. Secrets/configmaps will only be mapped if their namespaces belong to same Tenant
"},{"location":"changelog.html#v071","title":"v0.7.1","text":" - feat: Add option to keep AppProjects created by Multi Tenant Operator in case Tenant is deleted. By default, AppProjects get deleted
- fix: Status now updates after namespaces are created
- maintain: Changes to Helm chart's default behaviour
"},{"location":"changelog.html#v070","title":"v0.7.0","text":" - feat: Add support to map secrets/configmaps from one namespace to other namespaces using TGI. Resources can be mapped from one Tenant's namespaces to some other Tenant's namespaces
- feat: Allow creation of sandboxes that are private to the user
- feat: Allow creation of namespaces without tenant prefix from within tenant spec
- fix: Webhook changes will now be updated without manual intervention
- maintain: Updated Tenant CR version from v1beta1 to v1beta2. Conversion webhook is added to facilitate transition to new version
- see Tenant spec for updated spec
- enhance: Better automated testing
"},{"location":"changelog.html#v06x","title":"v0.6.x","text":""},{"location":"changelog.html#v061","title":"v0.6.1","text":" - fix: Update MTO service-account name in environment variable
"},{"location":"changelog.html#v060","title":"v0.6.0","text":" - feat: Add support to ArgoCD AppProjects created by Tenant Controller to have their sync disabled when relevant namespaces are hibernating
- feat: Add validation webhook for ResourceSupervisor
- fix: Delete ResourceSupervisor when hibernation is removed from tenant CR
- fix: CRQ and limit range not updating when quota changes
- fix: ArgoCD AppProjects created by Tenant Controller not updating when Tenant label is added to an existing namespace
- fix: Namespace workflow for TGI
- fix: ResourceSupervisor deletion workflow
- fix: Update RHSSO user filter for Vault integration
- fix: Update regex of namespace names in tenant CRD
- enhance: Optimize TGI and TI performance under load
- maintain: Bump Operator-SDK and Dependencies version
"},{"location":"changelog.html#v05x","title":"v0.5.x","text":""},{"location":"changelog.html#v054","title":"v0.5.4","text":" - fix: Update Helm dependency to v3.8.2
"},{"location":"changelog.html#v053","title":"v0.5.3","text":" - fix: Add support for parameters in Helm chartRepository in templates
"},{"location":"changelog.html#v052","title":"v0.5.2","text":" - fix: Add service name prefix for webhooks
"},{"location":"changelog.html#v051","title":"v0.5.1","text":" - fix: ResourceSupervisor CR no longer requires a field for the Tenant name
"},{"location":"changelog.html#v050","title":"v0.5.0","text":" - feat: Add support for tenant namespaces off-boarding. For more details check out onDelete
-
feat: Add tenant webhook for spec validation
-
fix: TemplateGroupInstance now cleans up leftover Template resources from namespaces that are no longer part of TGI namespace selector
-
fix: Fixed hibernation sync issue
-
enhance: Update tenant spec for applying common/specific namespace labels/annotations. For more details check out commonMetadata & SpecificMetadata
-
enhance: Add support for multi-pod architecture for Operator-Hub
-
chore: Remove conversion webhook for Quota and Tenant
"},{"location":"changelog.html#v04x","title":"v0.4.x","text":""},{"location":"changelog.html#v047","title":"v0.4.7","text":" - feat: Add hibernation of StatefulSets and Deployments based on a timer
- feat: New custom resource that handles hibernation
"},{"location":"changelog.html#v046","title":"v0.4.6","text":""},{"location":"changelog.html#v045","title":"v0.4.5","text":" - feat: Add support for applying labels/annotation on specific namespaces
"},{"location":"changelog.html#v044","title":"v0.4.4","text":" - fix: Update
privilegedNamespaces
regex
"},{"location":"changelog.html#v043","title":"v0.4.3","text":" - fix: IntegrationConfig will now be synced in all pods
"},{"location":"changelog.html#v042","title":"v0.4.2","text":" - feat: Added support to distribute common labels and annotations to tenant namespaces
"},{"location":"changelog.html#v041","title":"v0.4.1","text":" - fix: Update dependencies to latest version
"},{"location":"changelog.html#v040","title":"v0.4.0","text":" - feat: Controllers are now separated into individual pods
"},{"location":"changelog.html#v03x","title":"v0.3.x","text":""},{"location":"changelog.html#v0333","title":"v0.3.33","text":" - fix: Optimize namespace reconciliation
"},{"location":"changelog.html#v0333_1","title":"v0.3.33","text":" - fix: Revert v0.3.29 change till webhook network issue isn't resolved
"},{"location":"changelog.html#v0333_2","title":"v0.3.33","text":" - fix: Execute webhook and controller of matching custom resource in same pod
"},{"location":"changelog.html#v0330","title":"v0.3.30","text":" - feat: Namespace controller will now trigger TemplateGroupInstance when a new matching namespace is created
"},{"location":"changelog.html#v0329","title":"v0.3.29","text":" - feat: Controllers are now separated into individual pods
"},{"location":"changelog.html#v0328","title":"v0.3.28","text":" - fix: Enhancement of TemplateGroupInstance Namespace event listener
"},{"location":"changelog.html#v0327","title":"v0.3.27","text":" - feat: TemplateGroupInstance will create resources instantly whenever a Namespace with matching labels is created
"},{"location":"changelog.html#v0326","title":"v0.3.26","text":" - fix: Update reconciliation frequency of TemplateGroupInstance
"},{"location":"changelog.html#v0325","title":"v0.3.25","text":" - feat: TemplateGroupInstance will now directly create template resources instead of creating TemplateInstances
"},{"location":"changelog.html#migrating-from-pervious-version","title":"Migrating from pervious version","text":" - To migrate to Tenant-Operator:v0.3.25 perform the following steps
- Downscale Tenant-Operator deployment by setting the replicas count to 0
- Delete TemplateInstances created by TemplateGroupInstance (Naming convention of TemplateInstance created by TemplateGroupInstance is
group-{Template.Name}
) - Update the version of Tenant-Operator to v0.3.25 and set the replicas count to 2. After the Tenant-Operator pods are up, TemplateGroupInstance will create the missing resources
"},{"location":"changelog.html#v0324","title":"v0.3.24","text":" - feat: Add feature to allow ArgoCD to sync specific cluster scoped custom resources, configurable via Integration Config. More details in relevant docs
"},{"location":"changelog.html#v0323","title":"v0.3.23","text":" - feat: Added concurrent reconcilers for template instance controller
"},{"location":"changelog.html#v0322","title":"v0.3.22","text":" - feat: Added validation webhook to prevent Tenant owners from creating RoleBindings with kind 'Group' or 'User'
- fix: Removed redundant logs for namespace webhook
- fix: Added missing check for users in a tenant owner's groups in namespace validation webhook
- fix: General enhancements and improvements
\u26a0\ufe0f Known Issues
caBundle
field in validation webhooks is not being populated for newly added webhooks. A temporary fix is to edit the validation webhook configuration manifest without the caBundle
field added in any webhook, so OpenShift can add it to all fields simultaneously - Edit the
ValidatingWebhookConfiguration
multi-tenant-operator-validating-webhook-configuration
by removing all the caBundle
fields of all webhooks - Save the manifest
- Verify that all
caBundle
fields have been populated - Restart Tenant-Operator pods
"},{"location":"changelog.html#v0321","title":"v0.3.21","text":" - feat: Added ClusterRole manager rules extension
"},{"location":"changelog.html#v0320","title":"v0.3.20","text":" - fix: Fixed the recreation of underlying template resources, if resources were deleted
"},{"location":"changelog.html#v0319","title":"v0.3.19","text":" - feat: Namespace webhook FailurePolicy is now set to Ignore instead of Fail
- fix: Fixed config not being updated in namespace webhook when Integration Config is updated
- fix: Fixed a crash that occurred in case of ArgoCD in Integration Config was not set during deletion of Tenant resource
\u26a0\ufe0f ApiVersion v1alpha1
of Tenant and Quota custom resources has been deprecated and is scheduled to be removed in the future. The following links contain the updated structure of both resources
- Quota v1beta1
- Tenant v1beta1
"},{"location":"changelog.html#v0318","title":"v0.3.18","text":" - fix: Add ArgoCD namespace to destination namespaces for App Projects
"},{"location":"changelog.html#v0317","title":"v0.3.17","text":" - fix: Cluster administrator's permission will now have higher precedence on privileged namespaces
"},{"location":"changelog.html#v0316","title":"v0.3.16","text":" - fix: Add groups mentioned in Tenant CR to ArgoCD App Project manifests' RBAC
"},{"location":"changelog.html#v0315","title":"v0.3.15","text":" - feat: Add validation webhook for TemplateInstance & TemplateGroupInstance to prevent their creation in case the Template they reference does not exist
"},{"location":"changelog.html#v0314","title":"v0.3.14","text":" - feat: Added Validation Webhook for Quota to prevent its deletion when a reference to it exists in any Tenant
- feat: Added Validation Webhook for Template to prevent its deletion when a reference to it exists in any Tenant, TemplateGroupInstance or TemplateInstance
- fix: Fixed a crash that occurred in case Integration Config was not found
"},{"location":"changelog.html#v0313","title":"v0.3.13","text":" - feat: Multi Tenant Operator will now consider all namespaces to be managed if any default Integration Config is not found
"},{"location":"changelog.html#v0312","title":"v0.3.12","text":" - fix: General enhancements and improvements
"},{"location":"changelog.html#v0311","title":"v0.3.11","text":" - fix: Fix Quota's conversion webhook converting the wrong LimitRange field
"},{"location":"changelog.html#v0310","title":"v0.3.10","text":" - fix: Fix Quota's LimitRange to its intended design by being an optional field
"},{"location":"changelog.html#v039","title":"v0.3.9","text":" - feat: Add ability to prevent certain resources from syncing via ArgoCD
"},{"location":"changelog.html#v038","title":"v0.3.8","text":" - feat: Add default annotation to OpenShift Projects that show description about the Project
"},{"location":"changelog.html#v037","title":"v0.3.7","text":" - fix: Fix a typo in Multi Tenant Operator's Helm release
"},{"location":"changelog.html#v036","title":"v0.3.6","text":" - fix: Fix ArgoCD's
destinationNamespaces
created by Multi Tenant Operator
"},{"location":"changelog.html#v035","title":"v0.3.5","text":" - fix: Change sandbox creation from 1 for each group to 1 for each user in a group
"},{"location":"changelog.html#v034","title":"v0.3.4","text":" - feat: Support creation of sandboxes for each group
"},{"location":"changelog.html#v033","title":"v0.3.3","text":" - feat: Add ability to create namespaces from a list of namespace prefixes listed in the Tenant CR
"},{"location":"changelog.html#v032","title":"v0.3.2","text":" - refactor: Restructure Quota CR, more details in relevant docs
- feat: Add support for adding LimitRanges in Quota
- feat: Add conversion webhook to convert existing v1alpha1 versions of quota to v1beta1
"},{"location":"changelog.html#v031","title":"v0.3.1","text":" - feat: Add ability to create ArgoCD AppProjects per tenant, more details in relevant docs
"},{"location":"changelog.html#v030","title":"v0.3.0","text":" - feat: Add support to add groups in addition to users as tenant members
"},{"location":"changelog.html#v02x","title":"v0.2.x","text":""},{"location":"changelog.html#v0233","title":"v0.2.33","text":" - refactor: Restructure Tenant spec, more details in relevant docs
- feat: Add conversion webhook to convert existing v1alpha1 versions of tenant to v1beta1
"},{"location":"changelog.html#v0232","title":"v0.2.32","text":" - refactor: Restructure integration config spec, more details in relevant docs
- feat: Allow users to input custom regex in certain fields inside of integration config, more details in relevant docs
"},{"location":"changelog.html#v0231","title":"v0.2.31","text":" - feat: Add limit range for
kube-RBAC-proxy
"},{"location":"customresources.html","title":"Custom Resources","text":"Below is the detailed explanation about Custom Resources of MTO
"},{"location":"customresources.html#1-quota","title":"1. Quota","text":"Cluster scoped resource:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: medium\nspec:\n resourcequota:\n hard:\n requests.cpu: '5'\n limits.cpu: '10'\n requests.memory: '5Gi'\n limits.memory: '10Gi'\n configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\n limitrange:\n limits:\n - type: \"Pod\"\n max:\n cpu: \"2\"\n memory: \"1Gi\"\n min:\n cpu: \"200m\"\n memory: \"100Mi\"\n - type: \"Container\"\n max:\n cpu: \"2\"\n memory: \"1Gi\"\n min:\n cpu: \"100m\"\n memory: \"50Mi\"\n default:\n cpu: \"300m\"\n memory: \"200Mi\"\n defaultRequest:\n cpu: \"200m\"\n memory: \"100Mi\"\n maxLimitRequestRatio:\n cpu: \"10\"\n
When several tenants share a single cluster with a fixed number of resources, there is a concern that one tenant could use more than its fair share of resources. Quota is a wrapper around OpenShift ClusterResourceQuota
and LimitRange
which allows administrators to limit resource consumption per Tenant
. For more details, see Quota.Spec and LimitRange.Spec
"},{"location":"customresources.html#2-tenant","title":"2. Tenant","text":"Cluster scoped resource:
The smallest valid Tenant definition is given below (with just one field in its spec):
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: alpha\nspec:\n quota: small\n
Here is a more detailed Tenant definition, explained below:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: alpha\nspec:\n owners: # optional\n users: # optional\n - dave@stakater.com\n groups: # optional\n - alpha\n editors: # optional\n users: # optional\n - jack@stakater.com\n viewers: # optional\n users: # optional\n - james@stakater.com\n quota: medium # required\n sandboxConfig: # optional\n enabled: true # optional\n private: true # optional\n onDelete: # optional\n cleanNamespaces: false # optional\n cleanAppProject: true # optional\n argocd: # optional\n sourceRepos: # required\n - https://github.com/stakater/gitops-config\n appProject: # optional\n clusterResourceWhitelist: # optional\n - group: tronador.stakater.com\n kind: Environment\n namespaceResourceBlacklist: # optional\n - group: \"\"\n kind: ConfigMap\n hibernation: # optional\n sleepSchedule: 23 * * * * # required\n wakeSchedule: 26 * * * * # required\n namespaces: # optional\n withTenantPrefix: # optional\n - dev\n - build\n withoutTenantPrefix: # optional\n - preview\n commonMetadata: # optional\n labels: # optional\n stakater.com/team: alpha\n annotations: # optional\n openshift.io/node-selector: node-role.kubernetes.io/infra=\n specificMetadata: # optional\n - annotations: # optional\n stakater.com/user: dave\n labels: # optional\n stakater.com/sandbox: true\n namespaces: # optional\n - alpha-dave-stakater-sandbox\n templateInstances: # optional\n - spec: # optional\n template: networkpolicy # required\n sync: true # optional\n parameters: # optional\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n selector: # optional\n matchLabels: # optional\n policy: network-restriction\n
-
Tenant has 3 kinds of Members
. Each member type should have different roles assigned to them. These roles are taken from the IntegrationConfig's TenantRoles field. You can customize these roles to your liking, but by default the following configuration applies:
Owners:
Users who will be owners of a tenant. They will have OpenShift admin-role assigned to their users, with additional access to create namespaces as well. Editors:
Users who will be editors of a tenant. They will have OpenShift edit-role assigned to their users. Viewers:
Users who will be viewers of a tenant. They will have OpenShift view-role assigned to their users. - For more details, check out their definitions.
-
Users
can be linked to the tenant by specifying their usernames in owners.users
, editors.users
and viewers.users
respectively.
-
Groups
can be linked to the tenant by specifying the group name in owners.groups
, editors.groups
and viewers.groups
respectively.
-
Tenant will have a Quota
to limit resource consumption.
-
sandboxConfig
is used to configure the tenant user sandbox feature
- Setting
enabled
to true will create sandbox namespaces for owners and editors. - Sandboxes follow the naming convention {TenantName}-{UserName}-sandbox.
- In case of groups, the sandbox namespaces will be created for each member of the group.
- Setting
private
to true will make those sandboxes visible only to the user they belong to. By default, sandbox namespaces are visible to all tenant members
-
onDelete
is used to tell Multi Tenant Operator what to do when a Tenant is deleted.
cleanNamespaces
if the value is set to true, MTO deletes all tenant namespaces when a Tenant
is deleted. Default value is false. cleanAppProject
will keep the generated ArgoCD AppProject if the value is set to false. By default, the value is true.
-
argocd
is required if you want to create an ArgoCD AppProject for the tenant.
sourceRepos
contain a list of repositories that point to your GitOps. appProject
is used to set the clusterResourceWhitelist
and namespaceResourceBlacklist
resources. If these are also applied via IntegrationConfig
then those applied via the Tenant CR will have higher precedence for the given Tenant.
-
hibernation
can be used to create a schedule during which the namespaces belonging to the tenant will be put to sleep. The values of the sleepSchedule
and wakeSchedule
fields must be a string in a cron format.
-
Namespaces can also be created via tenant CR by specifying names in namespaces
.
- Multi Tenant Operator will append tenant name prefix while creating namespaces if the list of namespaces is under the
withTenantPrefix
field, so the format will be {TenantName}-{Name}. - Namespaces listed under the
withoutTenantPrefix
will be created with the given name. Listing namespaces here that already exist within the cluster is not allowed. stakater.com/kind: {Name}
label will also be added to the namespaces.
-
commonMetadata
can be used to distribute common labels and annotations among tenant namespaces.
labels
distributes provided labels among all tenant namespaces annotations
distributes provided annotations among all tenant namespaces
-
specificMetadata
can be used to distribute specific labels and annotations among specific tenant namespaces.
labels
distributes given labels among specific tenant namespaces annotations
distributes given annotations among specific tenant namespaces namespaces
consists of a list of specific tenant namespaces across which the labels and annotations will be distributed
-
Tenant automatically deploys template
resource mentioned in templateInstances
to matching tenant namespaces.
Template
resources are created in those namespaces
which belong to a tenant
and contain matching labels
. Template
resources are created in all namespaces
of a tenant
if selector
field is empty.
\u26a0\ufe0f If same label or annotation key is being applied using different methods provided, then the highest precedence will be given to specificMetadata
followed by commonMetadata
and in the end would be the ones applied from openshift.project.labels
/openshift.project.annotations
in IntegrationConfig
"},{"location":"customresources.html#3-template","title":"3. Template","text":"Cluster scoped resource:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: redis\nresources:\n helm:\n releaseName: redis\n chart:\n repository:\n name: redis\n repoUrl: https://charts.bitnami.com/bitnami\n values: |\n redisPort: 6379\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: networkpolicy\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\nresources:\n manifests:\n - kind: NetworkPolicy\n apiVersion: networking.k8s.io/v1\n metadata:\n name: deny-cross-ns-traffic\n spec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: \"${{CIDR_IP}}\"\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: resource-mapping\nresources:\n resourceMappings:\n secrets:\n - name: secret-s1\n namespace: namespace-n1\n configMaps:\n - name: configmap-c1\n namespace: namespace-n2\n
Templates are used to initialize Namespaces, share common resources across namespaces, and map secrets/configmaps from one namespace to other namespaces.
- They either contain one or more Kubernetes manifests, a reference to secrets/configmaps, or a Helm chart.
- They are being tracked by TemplateInstances in each Namespace they are applied to.
- They can contain pre-defined parameters such as ${namespace}/${tenant} or user-defined ${MY_PARAMETER} that can be specified within a TemplateInstance.
You can also define custom variables in Template
and TemplateInstance
. The parameters defined in TemplateInstance
overwrite the values defined in Template
.
Manifest Templates: The easiest option to define a Template is by specifying an array of Kubernetes manifests which should be applied when the Template is being instantiated.
Helm Chart Templates: Instead of manifests, a Template can specify a Helm chart that will be installed (using Helm template) when the Template is being instantiated.
Resource Mapping Templates: A template can be used to map secrets and configmaps from one tenant's namespace to another tenant's namespace, or within a tenant's namespace.
"},{"location":"customresources.html#mandatory-and-optional-templates","title":"Mandatory and Optional Templates","text":"Templates can either be mandatory or optional. By default, all Templates are optional. Cluster Admins can make Templates mandatory by adding them to the spec.templateInstances
array within the Tenant configuration. All Templates listed in spec.templateInstances
will always be instantiated within every Namespace
that is created for the respective Tenant.
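For illustration, here is a minimal sketch of a Tenant that makes the networkpolicy Template from the example above mandatory; the field layout follows the Tenant example earlier in these docs, and the tenant name and quota are placeholders:
apiVersion: tenantoperator.stakater.com/v1beta2
kind: Tenant
metadata:
  name: alpha                     # placeholder tenant name
spec:
  quota: small                    # placeholder Quota reference
  templateInstances:
    - spec:
        template: networkpolicy   # listed here, so it is instantiated in every Tenant namespace
        sync: true                # keep the instantiated resources in sync with the Template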
"},{"location":"customresources.html#4-templateinstance","title":"4. TemplateInstance","text":"Namespace scoped resource:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: networkpolicy\n namespace: build\nspec:\n template: networkpolicy\n sync: true\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n
TemplateInstances are used to keep track of resources created from Templates, which are being instantiated inside a Namespace. Generally, a TemplateInstance is created from a Template and the TemplateInstance will not be updated when the Template changes later on. To change this behavior, it is possible to set spec.sync: true
in a TemplateInstance. Setting this option keeps the TemplateInstance in sync with the underlying template (similar to a Helm upgrade).
"},{"location":"customresources.html#5-templategroupinstance","title":"5. TemplateGroupInstance","text":"Cluster scoped resource:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: namespace-parameterized-restrictions-tgi\nspec:\n template: namespace-parameterized-restrictions\n sync: true\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n
TemplateGroupInstance distributes a template across multiple namespaces which are selected by labelSelector.
"},{"location":"customresources.html#6-resourcesupervisor","title":"6. ResourceSupervisor","text":"Cluster scoped resource:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: tenant-sample\nspec:\n argocd:\n appProjects:\n - tenant-sample\n hibernation:\n sleepSchedule: 23 * * * *\n wakeSchedule: 26 * * * *\n namespaces:\n - stage\n - dev\nstatus:\n currentStatus: running\n nextReconcileTime: '2022-07-07T11:23:00Z'\n
The ResourceSupervisor
is a resource created by MTO in case the Hibernation feature is enabled. The Resource manages the sleep/wake schedule of the namespaces owned by the tenant, and manages the previous state of any sleeping application. Currently, only StatefulSets and Deployments are put to sleep. Additionally, ArgoCD AppProjects that belong to the tenant have a deny
SyncWindow added to them.
The ResourceSupervisor
can be created both via the Tenant
or manually. For more details, check some of its use cases
"},{"location":"customresources.html#namespace","title":"Namespace","text":"apiVersion: v1\nkind: Namespace\nmetadata:\n labels:\n stakater.com/tenant: blue-sky\n name: build\n
- A Namespace should have the label
stakater.com/tenant
which contains the name of the tenant to which it belongs. The labels and annotations specified in the operator config, ocp.labels.project
and ocp.annotations.project
are inserted in the namespace by the controller.
"},{"location":"customresources.html#notes","title":"Notes","text":" tenant.spec.users.owner
: Can only create Namespaces with required tenant label and can delete Projects. To edit Namespace use GitOps/ArgoCD
"},{"location":"eula.html","title":"Multi Tenant Operator End User License Agreement","text":"Last revision date: 12 December 2022
IMPORTANT: THIS SOFTWARE END-USER LICENSE AGREEMENT (\"EULA\") IS A LEGAL AGREEMENT (\"Agreement\") BETWEEN YOU (THE CUSTOMER, EITHER AS AN INDIVIDUAL OR, IF PURCHASED OR OTHERWISE ACQUIRED BY OR FOR AN ENTITY, AS AN ENTITY) AND Stakater AB OR ITS SUBSIDUARY (\"COMPANY\"). READ IT CAREFULLY BEFORE COMPLETING THE INSTALLATION PROCESS AND USING MULTI TENANT OPERATOR (\"SOFTWARE\"). IT PROVIDES A LICENSE TO USE THE SOFTWARE AND CONTAINS WARRANTY INFORMATION AND LIABILITY DISCLAIMERS. BY INSTALLING AND USING THE SOFTWARE, YOU ARE CONFIRMING YOUR ACCEPTANCE OF THE SOFTWARE AND AGREEING TO BECOME BOUND BY THE TERMS OF THIS AGREEMENT.
In order to use the Software under this Agreement, you must receive a license key at the time of purchase, in accordance with the scope of use and other terms specified and as set forth in Section 1 of this Agreement.
"},{"location":"eula.html#1-license-grant","title":"1. License Grant","text":" -
1.1 General Use. This Agreement grants you a non-exclusive, non-transferable, limited license to the use rights for the Software, subject to the terms and conditions in this Agreement. The Software is licensed, not sold.
-
1.2 Electronic Delivery. All Software and license documentation shall be delivered by electronic means unless otherwise specified on the applicable invoice or at the time of purchase. Software shall be deemed delivered when it is made available for download for you by the Company (\"Delivery\").
"},{"location":"eula.html#2-modifications","title":"2. Modifications","text":""},{"location":"eula.html#3-restricted-uses","title":"3. Restricted Uses","text":" -
3.1 You shall not (and shall not allow any third party to):
-
(a) reverse engineer the Software or attempt to reconstruct or discover any source code, underlying ideas, algorithms, file formats or programming interfaces of the Software by any means whatsoever (except and only to the extent that applicable law prohibits or restricts reverse engineering restrictions);
-
(b) distribute, sell, sub-license, rent, lease or use the Software for time sharing, hosting, service provider or like purposes, except as expressly permitted under this Agreement;
-
(c) redistribute the Software;
-
(d) remove any product identification, proprietary, copyright or other notices contained in the Software;
-
(e) modify any part of the Software, create a derivative work of any part of the Software (except as permitted in Section 4), or incorporate the Software, except to the extent expressly authorized in writing by the Company;
-
(f) publicly disseminate performance information or analysis (including, without limitation, benchmarks) from any source relating to the Software;
-
(g) utilize any equipment, device, software, or other means designed to circumvent or remove any form of Source URL or copy protection used by the Company in connection with the Software, or use the Software together with any authorization code, Source URL, serial number, or other copy protection device not supplied by the Company;
-
(h) use the Software to develop a product which is competitive with any of the Company's product offerings;
-
(i) use unauthorized Source URLs or license key(s) or distribute or publish Source URLs or license key(s), except as may be expressly permitted by the Company in writing. If your unique license is ever published, the Company reserves the right to terminate your access without notice.
-
3.2 Under no circumstances may you use the Software as part of a product or service that provides similar functionality to the Software itself.
"},{"location":"eula.html#4-ownership","title":"4. Ownership","text":" - 4.1 Notwithstanding anything to the contrary contained herein, except for the limited license rights expressly provided herein, the Company and its suppliers have and will retain all rights, title and interest (including, without limitation, all patent, copyright, trademark, trade secret and other intellectual property rights) in and to the Software and all copies, modifications and derivative works thereof (including any changes which incorporate any of your ideas, feedback or suggestions). You acknowledge that you are obtaining only a limited license right to the Software, and that irrespective of any use of the words \"purchase\", \"sale\" or like terms hereunder no ownership rights are being conveyed to you under this Agreement or otherwise.
"},{"location":"eula.html#5-fees-and-payment","title":"5. Fees and Payment","text":" - 5.1 The Software license fees will be due and payable in full as set forth in the applicable invoice or at the time of purchase. You shall be responsible for all taxes, with-holdings, duties and levies arising from the order (excluding taxes based on the net income of the Company).
"},{"location":"eula.html#6-support-maintenance-and-services","title":"6. Support, Maintenance and Services","text":" - 6.1 Subject to the terms and conditions of this Agreement, as set forth in your invoice, and as set forth on the Stakater support page, support and maintenance services may be included with the purchase of your license subscription.
"},{"location":"eula.html#7-disclaimer-of-warranties","title":"7. Disclaimer of Warranties","text":" -
7.1 The Software is provided \"as is\", with all faults, defects and errors, and without warranty of any kind. The Company does not warrant that the Software will be free of bugs, errors, or other defects, and the Company shall have no liability of any kind for the use of or inability to use the Software, the Software content or any associated service, and you acknowledge that it is not technically practicable for the Company to do so.
-
7.2 To the maximum extent permitted by applicable law, the Company disclaims all warranties, express, implied, arising by law or otherwise, regarding the Software, the Software content and their respective performance or suitability for your intended use, including without limitation any implied warranty of merchantability, fitness for a particular purpose.
"},{"location":"eula.html#8-limitation-of-liability","title":"8. Limitation of Liability","text":" -
8.1 In no event will the Company be liable for any direct, indirect, consequential, incidental, special, exemplary, or punitive damages or liabilities whatsoever arising from or relating to the Software, the Software content or this Agreement, whether based on contract, tort (including negligence), strict liability or other theory, even if the Company has been advised of the possibility of such damages.
-
8.2 In no event will the Company's liability exceed the Software license price as indicated in the invoice. The existence of more than one claim will not enlarge or extend this limit.
"},{"location":"eula.html#9-remedies","title":"9. Remedies","text":""},{"location":"eula.html#10-acknowledgements","title":"10. Acknowledgements","text":" -
10.1 Consent to the Use of Data. You agree that the Company and its affiliates may collect and use technical information gathered as part of the product support services. The Company may use this information solely to improve products and services and will not disclose this information in a form that personally identifies individuals or organizations.
-
10.2 Government End Users. If the Software and related documentation are supplied to or purchased by or on behalf of a Government, then the Software is deemed to be \"commercial software\" as that term is used in the acquisition regulation system.
"},{"location":"eula.html#11-third-party-software","title":"11. Third Party Software","text":" -
11.1 Examples included in Software may provide links to third party libraries or code (collectively \"Third Party Software\") to implement various functions. Third Party Software does not comprise part of the Software. In some cases, access to Third Party Software may be included along with the Software delivery as a convenience for demonstration purposes. Licensee acknowledges:
-
(1) That some part of Third Party Software may require additional licensing of copyright and patents from the owners of such, and
-
(2) That distribution of any of the Software referencing or including any portion of a Third Party Software may require appropriate licensing from such third parties
"},{"location":"eula.html#12-miscellaneous","title":"12. Miscellaneous","text":" -
12.1 Entire Agreement. This Agreement sets forth our entire agreement with respect to the Software and the subject matter hereof and supersedes all prior and contemporaneous understandings and agreements whether written or oral.
-
12.2 Amendment. The Company reserves the right, in its sole discretion, to amend this Agreement from time to time. Amendments are managed as described in General Provisions.
-
12.3 Assignment. You may not assign this Agreement or any of its rights under this Agreement without the prior written consent of The Company and any attempted assignment without such consent shall be void.
-
12.4 Export Compliance. You agree to comply with all applicable laws and regulations, including laws, regulations, orders or other restrictions on export, re-export or redistribution of software.
-
12.5 Indemnification. You agree to defend, indemnify, and hold harmless the Company from and against any lawsuits, claims, losses, damages, fines and expenses (including attorneys' fees and costs) arising out of your use of the Software or breach of this Agreement.
-
12.6 Attorneys' Fees and Costs. The prevailing party in any action to enforce this Agreement will be entitled to recover its attorneys' fees and costs in connection with such action.
-
12.7 Severability. If any provision of this Agreement is held by a court of competent jurisdiction to be invalid, illegal, or unenforceable, the remainder of this Agreement will remain in full force and effect.
-
12.8 Waiver. Failure or neglect by either party to enforce at any time any of the provisions of this license Agreement shall not be construed or deemed to be a waiver of that party's rights under this Agreement.
-
12.9 Audit. The Company may, at its expense, appoint its own personnel or an independent third party to audit the numbers of installations of the Software in use by you. Any such audit shall be conducted upon thirty (30) days prior notice, during regular business hours and shall not unreasonably interfere with your business activities.
-
12.10 Headings. The headings of sections and paragraphs of this Agreement are for convenience of reference only and are not intended to restrict, affect or be of any weight in the interpretation or construction of the provisions of such sections or paragraphs.
"},{"location":"eula.html#13-contact-information","title":"13. Contact Information","text":" - 13.1 If you have any questions about this EULA, or if you want to contact the Company for any reason, please direct correspondence to
sales@stakater.com
.
"},{"location":"faq.html","title":"FAQs","text":""},{"location":"faq.html#namespace-admission-webhook","title":"Namespace Admission Webhook","text":""},{"location":"faq.html#q-error-received-while-performing-create-update-or-delete-action-on-namespace","title":"Q. Error received while performing Create, Update or Delete action on Namespace","text":"Cannot CREATE namespace test-john without label stakater.com/tenant\n
Answer. This error occurs when a user tries to perform a create, update, or delete action on a namespace without the required stakater.com/tenant
label. The operator uses this label to verify that only authorized users can perform actions on the namespace. Add the label with the tenant name so that MTO knows which tenant the namespace belongs to, and who is authorized to perform create/update/delete operations. For more details please refer to Namespace use-case.
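For example, a minimal namespace manifest carrying the required label could look like the sketch below (the tenant name john-tenant is illustrative and must match an existing Tenant on your cluster):
apiVersion: v1\nkind: Namespace\nmetadata:\n name: test-john\n labels:\n stakater.com/tenant: john-tenant\n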
"},{"location":"faq.html#q-error-received-while-performing-create-update-or-delete-action-on-openshift-project","title":"Q. Error received while performing Create, Update or Delete action on OpenShift Project","text":"Cannot CREATE namespace testing without label stakater.com/tenant. User: system:serviceaccount:openshift-apiserver:openshift-apiserver-sa\n
Answer. This error occurs because Tenant members are not allowed to perform operations on OpenShift Projects directly. Whenever an operation is done on a project, openshift-apiserver-sa
tries to perform the same request on the underlying namespace. That's why the user sees the openshift-apiserver-sa
Service Account instead of their own user in the error message.
The fix is to perform the same operation on the namespace manifest instead.
"},{"location":"faq.html#q-error-received-while-doing-kubectl-apply-f-namespaceyaml","title":"Q. Error received while doing \"kubectl apply -f namespace.yaml\"","text":"Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"/v1, Resource=namespaces\", GroupVersionKind: \"/v1, Kind=Namespace\"\nName: \"ns1\", Namespace: \"\"\nfrom server for: \"namespace.yaml\": namespaces \"ns1\" is forbidden: User \"muneeb\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"ns1\"\n
Answer. Tenant members will not be able to use kubectl apply
because apply
first gets all the instances of that resource, in this case namespaces, and then does the required operation on the selected resource. To maintain tenancy, tenant members do not have access to get or list all the namespaces.
The fix is to create namespaces with kubectl create
instead.
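For example, assuming the same namespace.yaml from the error above, a tenant member could run the following instead of kubectl apply:
kubectl create -f namespace.yaml\n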
"},{"location":"faq.html#mto-argocd-integration","title":"MTO - ArgoCD Integration","text":""},{"location":"faq.html#q-how-do-i-deploy-cluster-scoped-resource-via-the-argocd-integration","title":"Q. How do I deploy cluster-scoped resource via the ArgoCD integration?","text":"Answer. Multi-Tenant Operator's ArgoCD Integration allows configuration of which cluster-scoped resources can be deployed, both globally and on a per-tenant basis. For a global allow-list that applies to all tenants, you can add both resource group
and kind
to the IntegrationConfig's spec.argocd.clusterResourceWhitelist
field. Alternatively, you can set this up on a tenant level by configuring the same details within a Tenant's spec.argocd.appProject.clusterResourceWhitelist
field. For more details, check out the ArgoCD integration use cases
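As a minimal sketch of the global variant, the allow-list fragment below would sit under the IntegrationConfig's spec.argocd field (the group and kind shown are the same illustrative values used elsewhere in this documentation):
argocd:\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n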
"},{"location":"faq.html#q-invalidspecerror-application-repo-repo-is-not-permitted-in-project-project","title":"Q. InvalidSpecError: application repo \\<repo> is not permitted in project \\<project>","text":"Answer. The above error can occur if the ArgoCD Application is syncing from a source that is not allowed the referenced AppProject. To solve this, verify that you have referred to the correct project in the given ArgoCD Application, and that the repoURL used for the Application's source is valid. If the error still appears, you can add the URL to the relevant Tenant's spec.argocd.sourceRepos
array.
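As a hedged sketch (the repository URL is hypothetical), the relevant fragment of a Tenant's spec could look like:
argocd:\n sourceRepos:\n - 'https://github.com/example-org/gitops-apps.git'\n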
"},{"location":"faq.html#q-why-are-there-mto-showback-pods-failing-in-my-cluster","title":"Q. Why are there mto-showback-*
pods failing in my cluster?","text":"Answer. The mto-showback-*
pods are used to calculate the cost of the resources used by each tenant. These pods are created by the Multi-Tenant Operator and are scheduled to run every 10 minutes. If the pods are failing, it is likely that the operators necessary for calculating cost are not present in the cluster. To solve this, you can navigate to Operators
-> Installed Operators
in the OpenShift console and check if the MTO-OpenCost and MTO-Prometheus operators are installed. If they are in a pending state, you can manually approve them to install them in the cluster.
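Alternatively, as a rough sketch from the CLI (assuming the operators were installed into the multi-tenant-operator namespace; adjust the namespace if yours differs), you can inspect the ClusterServiceVersions and InstallPlans directly:
oc get csv -n multi-tenant-operator\noc get installplan -n multi-tenant-operator\n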
"},{"location":"features.html","title":"Features","text":"The major features of Multi Tenant Operator (MTO) are described below.
"},{"location":"features.html#kubernetes-multitenancy","title":"Kubernetes Multitenancy","text":"RBAC is one of the most complicated and error-prone parts of Kubernetes. With Multi Tenant Operator, you can rest assured that RBAC is configured with the \"least privilege\" mindset and all rules are kept up-to-date with zero manual effort.
Multi Tenant Operator binds existing ClusterRoles to the Tenant's Namespaces used for managing access to the Namespaces and the resources they contain. You can also modify the default roles or create new roles to have full control and customize access control for your users and teams.
Multi Tenant Operator is also able to leverage existing OpenShift groups or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system.
"},{"location":"features.html#hashicorp-vault-multitenancy","title":"HashiCorp Vault Multitenancy","text":"Multi Tenant Operator extends the tenants permission model to HashiCorp Vault where it can create Vault paths and greatly ease the overhead of managing RBAC in Vault. Tenant users can manage their own secrets without the concern of someone else having access to their Vault paths.
More details on Vault Multitenancy
"},{"location":"features.html#argocd-multitenancy","title":"ArgoCD Multitenancy","text":"Multi Tenant Operator is not only providing strong Multi Tenancy for the OpenShift internals but also extends the tenants permission model to ArgoCD were it can provision AppProjects and Allowed Repositories for your tenants greatly ease the overhead of managing RBAC in ArgoCD.
More details on ArgoCD Multitenancy
"},{"location":"features.html#mattermost-multitenancy","title":"Mattermost Multitenancy","text":"Multi Tenant Operator can manage Mattermost to create Teams for tenant users. All tenant users get a unique team and a list of predefined channels gets created. When a user is removed from the tenant, the user is also removed from the Mattermost team corresponding to tenant.
More details on Mattermost
"},{"location":"features.html#costresource-optimization","title":"Cost/Resource Optimization","text":"Multi Tenant Operator provides a mechanism for defining Resource Quotas at the tenant scope, meaning all namespaces belonging to a particular tenant share the defined quota, which is why you are able to safely enable dev teams to self serve their namespaces whilst being confident that they can only use the resources allocated based on budget and business needs.
More details on Quota
"},{"location":"features.html#remote-development-namespaces","title":"Remote Development Namespaces","text":"Multi Tenant Operator can be configured to automatically provision a namespace in the cluster for every member of the specific tenant, that will also be preloaded with any selected templates and consume the same pool of resources from the tenants quota creating safe remote dev namespaces that teams can use as scratch namespace for rapid prototyping and development. So, every developer gets a Kubernetes-based cloud development environment that feel like working on localhost.
More details on Sandboxes
"},{"location":"features.html#templates-and-template-distribution","title":"Templates and Template distribution","text":"Multi Tenant Operator allows admins/users to define templates for namespaces, so that others can instantiate these templates to provision namespaces with batteries loaded. A template could pre-populate a namespace for certain use cases or with basic tooling required. Templates allow you to define Kubernetes manifests, Helm chart and more to be applied when the template is used to create a namespace.
It also allows the parameterizing of these templates for flexibility and ease of use. It also provides the option to enforce the presence of templates in one tenant's or all the tenants' namespaces for configuring secure defaults.
Common use cases for namespace templates may be:
- Adding networking policies for multitenancy
- Adding development tooling to a namespace
- Deploying pre-populated databases with test data
- Injecting new namespaces with optional credentials such as image pull secrets
More details on Distributing Template Resources
"},{"location":"features.html#hibernation","title":"Hibernation","text":"Multi Tenant Operator can downscale Deployments and StatefulSets in a tenant's Namespace according to a defined sleep schedule. The Deployments and StatefulSets are brought back to their required replicas according to the provided wake schedule.
More details on Hibernation
"},{"location":"features.html#cross-namespace-resource-distribution","title":"Cross Namespace Resource Distribution","text":"Multi Tenant Operator supports cloning of secrets and configmaps from one namespace to another namespace based on label selectors. It uses templates to enable users to provide reference to secrets and configmaps. It uses a template group instance to distribute those secrets and namespaces in matching namespaces, even if namespaces belong to different tenants. If template instance is used then the resources will only be mapped if namespaces belong to same tenant.
More details on Distributing Secrets and ConfigMaps
"},{"location":"features.html#self-service","title":"Self-Service","text":"With Multi Tenant Operator, you can empower your users to safely provision namespaces for themselves and their teams (typically mapped to SSO groups). Team-owned namespaces and the resources inside them count towards the team's quotas rather than the user's individual limits and are automatically shared with all team members according to the access rules you configure in Multi Tenant Operator.
Also, by leveraging Multi Tenant Operator's templating mechanism, namespaces can be provisioned and automatically pre-populated with any kind of resource or multiple resources, such as network policies, docker pull secrets or even Helm charts.
"},{"location":"features.html#everything-as-codegitops-ready","title":"Everything as Code/GitOps Ready","text":"Multi Tenant Operator is designed and built to be 100% OpenShift-native and to be configured and managed the same familiar way as native OpenShift resources so is perfect for modern shops that are dedicated to GitOps as it is fully configurable using Custom Resources.
"},{"location":"features.html#preventing-clusters-sprawl","title":"Preventing Clusters Sprawl","text":"As companies look to further harness the power of cloud-native, they are adopting container technologies at rapid speed, increasing the number of clusters and workloads. As the number of Kubernetes clusters grows, this is an increasing work for the Ops team. When it comes to patching security issues or upgrading clusters, teams are doing five times the amount of work.
With Multi Tenant Operator, a single cluster can be shared by multiple teams, groups of users, or departments, saving operational and management effort and preventing Kubernetes cluster sprawl.
"},{"location":"features.html#native-experience","title":"Native Experience","text":"Multi Tenant Operator provides multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries.
"},{"location":"features.html#custom-metrics-support","title":"Custom Metrics Support","text":"Multi Tenant Operator now supports custom metrics for templates, template instances and template group instances.
Exposed metrics include the number of resources deployed, the number of resources that failed, and the total number of resources deployed for template instances and template group instances. These metrics can be used to monitor the usage of templates and template instances in the cluster.
Additionally, this allows us to expose other performance metrics listed here.
More details on Enabling Custom Metrics
"},{"location":"features.html#graph-visualization-for-tenants","title":"Graph Visualization for Tenants","text":"Multi Tenant Operator now supports graph visualization for tenants on the MTO Console. Effortlessly associate tenants with their respective resources using the enhanced graph feature on the MTO Console. This dynamic graph illustrates the relationships between tenants and the resources they create, encompassing both MTO's proprietary resources and native Kubernetes/OpenShift elements.
More details on Graph Visualization
"},{"location":"hibernation.html","title":"Hibernating Namespaces","text":"You can manage workloads in your cluster with MTO by implementing a hibernation schedule for your tenants. Hibernation downsizes the running Deployments and StatefulSets in a tenant\u2019s namespace according to a defined cron schedule. You can set a hibernation schedule for your tenants by adding the \u2018spec.hibernation\u2019 field to the tenant's respective Custom Resource.
hibernation:\n sleepSchedule: 23 * * * *\n wakeSchedule: 26 * * * *\n
spec.hibernation.sleepSchedule
accepts a cron expression indicating the time to put the workloads in your tenant\u2019s namespaces to sleep.
spec.hibernation.wakeSchedule
accepts a cron expression indicating the time to wake the workloads in your tenant\u2019s namespaces up.
Note
Both sleep and wake schedules must be specified for your Hibernation schedule to be valid.
Additionally, adding the hibernation.stakater.com/exclude: 'true'
annotation to a namespace excludes it from hibernating.
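As a minimal sketch (the namespace name is illustrative), the exclusion annotation can be set directly on the namespace manifest:
apiVersion: v1\nkind: Namespace\nmetadata:\n name: dev-scratch\n annotations:\n hibernation.stakater.com/exclude: 'true'\n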
Note
This is only true for hibernation applied via the Tenant Custom Resource, and does not apply for hibernation done by manually creating a ResourceSupervisor (details about that below).
Note
This will not wake up an already sleeping namespace before the wake schedule.
"},{"location":"hibernation.html#resource-supervisor","title":"Resource Supervisor","text":"Adding a Hibernation Schedule to a Tenant creates an accompanying ResourceSupervisor Custom Resource. The Resource Supervisor stores the Hibernation schedules and manages the current and previous states of all the applications, whether they are sleeping or awake.
When the sleep timer is activated, the Resource Supervisor controller stores the details of your applications (including the number of replicas, configurations, etc.) in the applications' namespaces and then puts your applications to sleep. When the wake timer is activated, the controller wakes up the applications using their stored details.
Enabling ArgoCD support for Tenants will also hibernate applications in the tenants' appProjects
.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: sigma\nspec:\n argocd:\n appProjects:\n - sigma\n namespace: openshift-gitops\n hibernation:\n sleepSchedule: 42 * * * *\n wakeSchedule: 45 * * * *\n namespaces:\n - tenant-ns1\n - tenant-ns2\n
Currently, Hibernation is available only for StatefulSets and Deployments.
"},{"location":"hibernation.html#manual-creation-of-resourcesupervisor","title":"Manual creation of ResourceSupervisor","text":"Hibernation can also be applied by creating a ResourceSupervisor resource manually. The ResourceSupervisor definition will contain the hibernation cron schedule, the names of the namespaces to be hibernated, and the names of the ArgoCD AppProjects whose ArgoCD Applications have to be hibernated (as per the given schedule).
This method can be used to hibernate:
- Some specific namespaces and AppProjects in a tenant
- A set of namespaces and AppProjects belonging to different tenants
- Namespaces and AppProjects belonging to a tenant that the cluster admin is not a member of
- Non-tenant namespaces and ArgoCD AppProjects
As an example, the following ResourceSupervisor could be created manually, to apply hibernation explicitly to the 'ns1' and 'ns2' namespaces, and to the 'sample-app-project' AppProject.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: hibernator\nspec:\n argocd:\n appProjects:\n - sample-app-project\n namespace: openshift-gitops\n hibernation:\n sleepSchedule: 42 * * * *\n wakeSchedule: 45 * * * *\n namespaces:\n - ns1\n - ns2\n
"},{"location":"installation.html","title":"Installation","text":"This document contains instructions on installing, uninstalling and configuring Multi Tenant Operator using OpenShift MarketPlace.
-
OpenShift OperatorHub UI
-
CLI/GitOps
-
Uninstall
"},{"location":"installation.html#requirements","title":"Requirements","text":" - An OpenShift cluster [v4.7 - v4.12]
"},{"location":"installation.html#installing-via-operatorhub-ui","title":"Installing via OperatorHub UI","text":" - After opening OpenShift console click on
Operators
, followed by OperatorHub
from the side menu
- Now search for
Multi Tenant Operator
and then click on Multi Tenant Operator
tile
- Click on the
install
button
- Select
Update channel
. Select multi-tenant-operator
to install the operator in multi-tenant-operator
namespace from Installed Namespace
dropdown menu. After configuring Update approval
click on the install
button.
Note: Use stable
channel for seamless upgrades. For Production Environment
prefer Manual
approval and use Automatic
for Development Environment
- Wait for the operator to be installed
- Once successfully installed, MTO will be ready to enforce multi-tenancy in your cluster
Note: MTO will be installed in multi-tenant-operator
namespace.
"},{"location":"installation.html#configuring-integrationconfig","title":"Configuring IntegrationConfig","text":"IntegrationConfig is required to configure the settings of multi-tenancy for MTO.
- We recommend using the following IntegrationConfig as a starting point
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedNamespaces:\n - default\n - ^openshift-*\n - ^kube-*\n - ^redhat-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:default-*\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n - ^system:serviceaccount:redhat-*\n
For more details and configurations check out IntegrationConfig.
"},{"location":"installation.html#installing-via-cli-or-gitops","title":"Installing via CLI OR GitOps","text":" - Create namespace
multi-tenant-operator
oc create namespace multi-tenant-operator\nnamespace/multi-tenant-operator created\n
- Create an OperatorGroup YAML for MTO and apply it in
multi-tenant-operator
namespace.
oc create -f - << EOF\napiVersion: operators.coreos.com/v1\nkind: OperatorGroup\nmetadata:\n name: tenant-operator\n namespace: multi-tenant-operator\nEOF\noperatorgroup.operators.coreos.com/tenant-operator created\n
- Create a subscription YAML for MTO and apply it in
multi-tenant-operator
namespace. To enable console set .spec.config.env[].ENABLE_CONSOLE
to true
. This will create a route resource, which can be used to access the Multi-Tenant-Operator console.
oc create -f - << EOF\napiVersion: operators.coreos.com/v1alpha1\nkind: Subscription\nmetadata:\n name: tenant-operator\n namespace: multi-tenant-operator\nspec:\n channel: stable\n installPlanApproval: Automatic\n name: tenant-operator\n source: certified-operators\n sourceNamespace: openshift-marketplace\n startingCSV: tenant-operator.v0.9.1\n config:\n env:\n - name: ENABLE_CONSOLE\n value: 'true'\nEOF\nsubscription.operators.coreos.com/tenant-operator created\n
Note: To bring MTO via GitOps, add the above files in GitOps repository.
- After creating the
subscription
custom resource open OpenShift console and click on Operators
, followed by Installed Operators
from the side menu
- Wait for the installation to complete
- Once the installation is complete click on
Workloads
, followed by Pods
from the side menu and select multi-tenant-operator
project
- Once pods are up and running, MTO will be ready to enforce multi-tenancy in your cluster
"},{"location":"installation.html#configuring-integrationconfig_1","title":"Configuring IntegrationConfig","text":"IntegrationConfig is required to configure the settings of multi-tenancy for MTO.
- We recommend using the following IntegrationConfig as a starting point:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedNamespaces:\n - default\n - ^openshift-*\n - ^kube-*\n - ^redhat-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:default-*\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n - ^system:serviceaccount:redhat-*\n
For more details and configurations check out IntegrationConfig.
"},{"location":"installation.html#uninstall-via-operatorhub-ui","title":"Uninstall via OperatorHub UI","text":"You can uninstall MTO by following these steps:
-
Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set spec.onDelete.cleanNamespaces
to false
for all those tenants whose namespaces you want to retain, and spec.onDelete.cleanAppProject
to false
for all those tenants whose AppProject you want to retain (a minimal spec fragment is sketched after these steps). For more details check out onDelete
-
After making the required changes open OpenShift console and click on Operators
, followed by Installed Operators
from the side menu
- Now click on uninstall and confirm uninstall.
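For step 1, a hedged sketch of the relevant fragment of a Tenant's spec (all other Tenant fields omitted) could look like:
onDelete:\n cleanNamespaces: false\n cleanAppProject: false\n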
"},{"location":"installation.html#notes","title":"Notes","text":" - For more details on how to use MTO please refer use-cases.
- For more details on how to extend your MTO manager ClusterRole please refer extend-admin-clusterrole.
"},{"location":"integration-config.html","title":"Integration Config","text":"IntegrationConfig is used to configure settings of multi-tenancy for Multi Tenant Operator.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - admin\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n - viewer\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n editor:\n clusterRoles:\n - custom-editor\n viewer:\n clusterRoles:\n - custom-viewer\n - custom-view\n openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n group:\n labels:\n role: customer-reader\n sandbox:\n labels:\n stakater.com/kind: sandbox\n clusterAdminGroups:\n - cluster-admins\n privilegedNamespaces:\n - ^default$\n - ^openshift-*\n - ^kube-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n groups:\n - cluster-admins\n argocd:\n namespace: openshift-operators\n namespaceResourceBlacklist:\n - group: '' # all groups\n kind: ResourceQuota\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n rhsso:\n enabled: true\n realm: customer\n endpoint:\n url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n secretReference:\n name: auth-secrets\n namespace: openshift-auth\n vault:\n enabled: true\n endpoint:\n url: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n secretReference:\n name: vault-root-token\n namespace: vault\n sso:\n clientName: vault\n accessorID: <ACCESSOR_ID_TOKEN>\n
Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator.
"},{"location":"integration-config.html#tenantroles","title":"TenantRoles","text":"TenantRoles are required within the IntegrationConfig, as they are used for defining what roles will be applied to each Tenant namespace. The field allows optional custom roles, that are then used to create RoleBindings for namespaces that match a labelSelector.
\u26a0\ufe0f If you do not configure roles in any way, then the default OpenShift roles of owner
, edit
, and view
will apply to Tenant members. Their details can be found here
tenantRoles:\n default:\n owner:\n clusterRoles:\n - admin\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n - viewer\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n editor:\n clusterRoles:\n - custom-editor\n viewer:\n clusterRoles:\n - custom-viewer\n - custom-view\n
"},{"location":"integration-config.html#default","title":"Default","text":"This field contains roles that will be used to create default roleBindings for each namespace that belongs to tenants. These roleBindings are only created for a namespace if that namespaces isn't already matched by the custom
field below it. Therefore, it is required to have at least one role mentioned within each of its three subfields: owner
, editor
, and viewer
. These 3 subfields also correspond to the member fields of the Tenant CR
"},{"location":"integration-config.html#custom","title":"Custom","text":"An array of custom roles. Similar to the default
field, you can mention roles within this field as well. However, the custom roles also require the use of a labelSelector
for each iteration within the array. The roles mentioned here will only apply to the namespaces that are matched by the labelSelector. If a namespace is matched by 2 different labelSelectors, then both roles will apply to it. Additionally, roles can be skipped within the labelSelector. These missing roles are then inherited from the default
roles field. For example, if the following custom roles arrangement is used:
custom:\n- labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n
Then the editor
and viewer
roles will be taken from the default
roles field, as that is required to have at least one role mentioned.
"},{"location":"integration-config.html#openshift","title":"OpenShift","text":"openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n group:\n labels:\n role: customer-reader\n sandbox:\n labels:\n stakater.com/kind: sandbox\n clusterAdminGroups:\n - cluster-admins\n privilegedNamespaces:\n - ^default$\n - ^openshift-*\n - ^kube-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n groups:\n - cluster-admins\n
"},{"location":"integration-config.html#project-group-and-sandbox","title":"Project, group and sandbox","text":"We can use the openshift.project
, openshift.group
and openshift.sandbox
fields to automatically add labels
and annotations
to the Projects and Groups managed via MTO.
openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n group:\n labels:\n role: customer-reader\n sandbox:\n labels:\n stakater.com/kind: sandbox\n
If we want to add default labels/annotations to sandbox namespaces of tenants, then we simply add them in openshift.project.labels
/openshift.project.annotations
respectively.
Whenever a project is created, it will have the labels and annotations mentioned above.
kind: Project\napiVersion: project.openshift.io/v1\nmetadata:\n name: bluesky-build\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n labels:\n workload-monitoring: 'true'\n stakater.com/tenant: bluesky\nspec:\n finalizers:\n - kubernetes\nstatus:\n phase: Active\n
kind: Group\napiVersion: user.openshift.io/v1\nmetadata:\n name: bluesky-owner-group\n labels:\n role: customer-reader\nusers:\n - andrew@stakater.com\n
"},{"location":"integration-config.html#cluster-admin-groups","title":"Cluster Admin Groups","text":"clusterAdminGroups:
Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way. MTO does not interfere even with the deletion of privilegedNamespaces.
Note
User kube:admin
is bypassed by default to perform operations as a cluster admin, this includes operations on all the namespaces.
"},{"location":"integration-config.html#privileged-namespaces","title":"Privileged Namespaces","text":"privilegedNamespaces:
Contains the list of namespaces
ignored by MTO. MTO will not manage the namespaces
in this list. Unlike normal namespaces, privileged namespaces are not processed further by MTO's integrations or finalizers. Values in this list are regex patterns. For example:
- To ignore the
default
namespace, we can specify ^default$
- To ignore all namespaces starting with the
openshift-
prefix, we can specify ^openshift-*
. - To ignore any namespace containing
stakater
in its name, we can specify stakater
. (A constant word given as a regex pattern will match any namespace containing that word.) All three patterns are combined in the sketch below.
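As a small illustrative fragment for the IntegrationConfig's spec.openshift section, the three examples above would be listed like this:
privilegedNamespaces:\n - ^default$\n - ^openshift-*\n - stakater\n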
"},{"location":"integration-config.html#privileged-serviceaccounts","title":"Privileged ServiceAccounts","text":"privilegedServiceAccounts:
Contains the list of ServiceAccounts
ignored by MTO. MTO will not manage the ServiceAccounts
in this list. Values in this list are regex patterns. For example, to ignore all ServiceAccounts
starting with the system:serviceaccount:openshift-
prefix, we can use ^system:serviceaccount:openshift-*
; and to ignore the system:serviceaccount:builder
service account we can use ^system:serviceaccount:builder$.
"},{"location":"integration-config.html#namespace-access-policy","title":"Namespace Access Policy","text":"namespaceAccessPolicy.Deny:
Can be used to restrict privileged users'/groups' CRUD operations over managed namespaces.
namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n groups:\n - cluster-admins\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n
\u26a0\ufe0f If you want to use a more complex regex pattern (for the openshift.privilegedNamespaces
or openshift.privilegedServiceAccounts
field), it is recommended that you test the regex pattern first - either locally or using a platform such as https://regex101.com/.
"},{"location":"integration-config.html#argocd","title":"ArgoCD","text":""},{"location":"integration-config.html#namespace","title":"Namespace","text":"argocd.namespace
is an optional field used to specify the namespace where ArgoCD Applications and AppProjects are deployed. The field should be populated when you want to create an ArgoCD AppProject for each tenant.
"},{"location":"integration-config.html#namespaceresourceblacklist","title":"NamespaceResourceBlacklist","text":"argocd:\n namespaceResourceBlacklist:\n - group: '' # all resource groups\n kind: ResourceQuota\n - group: ''\n kind: LimitRange\n - group: ''\n kind: NetworkPolicy\n
argocd.namespaceResourceBlacklist
prevents ArgoCD from syncing the listed resources from your GitOps repo.
"},{"location":"integration-config.html#clusterresourcewhitelist","title":"ClusterResourceWhitelist","text":"argocd:\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n
argocd.clusterResourceWhitelist
allows ArgoCD to sync the listed cluster scoped resources from your GitOps repo.
"},{"location":"integration-config.html#rhsso-red-hat-single-sign-on","title":"RHSSO (Red Hat Single Sign-On)","text":"Red Hat Single Sign-On RHSSO is based on the Keycloak project and enables you to secure your web applications by providing Web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0.
If RHSSO
is configured on a cluster, then RHSSO configuration can be enabled.
rhsso:\n enabled: true\n realm: customer\n endpoint:\n secretReference:\n name: auth-secrets\n namespace: openshift-auth\n url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n
If enabled, then admins have to provide the secret and URL of RHSSO.
secretReference.name:
Will contain the name of the secret. secretReference.namespace:
Will contain the namespace of the secret. realm:
Will contain the realm name which is configured for users. url:
Will contain the URL of RHSSO.
"},{"location":"integration-config.html#vault","title":"Vault","text":"Vault is used to secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.
If vault
is configured on a cluster, then Vault configuration can be enabled.
vault:\n enabled: true\n endpoint:\n secretReference:\n name: vault-root-token\n namespace: vault\n url: >-\n https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n sso:\n accessorID: <ACCESSOR_ID_TOKEN>\n clientName: vault\n
If enabled, then admins have to provide the secret, URL and SSO accessorID of Vault.
secretReference.name:
Will contain the name of the secret. secretReference.namespace:
Will contain the namespace of the secret. url:
Will contain the URL of Vault. sso.accessorID:
Will contain the SSO accessorID. sso.clientName:
Will contain the client name.
For more details please refer use-cases
"},{"location":"tenant-roles.html","title":"Tenant Member Roles","text":"After adding support for custom roles within MTO, this page is only applicable if you use OpenShift and its default owner
, edit
, and view
roles. For more details, see the IntegrationConfig spec
MTO tenant members can have one of the following 3 roles:
- Owner
- Editor
- Viewer
"},{"location":"tenant-roles.html#1-owner","title":"1. Owner","text":" fig 2. Shows how tenant owners manage their tenant using MTO
Owner is an admin of a tenant with some restrictions. Owners have the privilege to see all resources in their Tenant, along with some additional privileges. They can also create new namespaces
.
Owners will also inherit roles from Edit
and View
.
"},{"location":"tenant-roles.html#access-permissions","title":"Access Permissions","text":" - Role and RoleBinding access in
Project
: - delete
- create
- list
- get
- update
- patch
"},{"location":"tenant-roles.html#quotas-permissions","title":"Quotas Permissions","text":""},{"location":"tenant-roles.html#resources-permissions","title":"Resources Permissions","text":" - CRUD access on Template, TemplateInstance and TemplateGroupInstance of MTO custom resources
- CRUD access on ImageStreamTags in
Project
- Get access on CustomResourceDefinitions in
Project
- Get, list, watch access on Builds, BuildConfigs in
Project
- CRUD access on following resources in
Project
: - Prometheuses
- Prometheusrules
- ServiceMonitors
- PodMonitors
- ThanosRulers
- Permission to create Namespaces.
- Restricted to perform actions on cluster resource Quotas and Limits.
"},{"location":"tenant-roles.html#2-editor","title":"2. Editor","text":" fig 3. Shows editors role in a tenant using MTO
Edit role will have edit access on their Projects
, but they wont have access on Roles
or RoleBindings
.
Editors will also inherit View
role.
"},{"location":"tenant-roles.html#access-permissions_1","title":"Access Permissions","text":" - ServiceAccount access in
Project
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- impersonate
"},{"location":"tenant-roles.html#quotas-permissions_1","title":"Quotas Permissions","text":" - AppliedClusterResourceQuotas and ResourceQuotaUsages access in
Project
"},{"location":"tenant-roles.html#builds-pods-pvc-permissions","title":"Builds ,Pods , PVC Permissions","text":" - Pod, PodDisruptionBudgets and PVC access in
Project
- get
- list
- watch
- create
- delete
- deletecollection
- patch
- update
- Build, BuildConfig, BuildLog, DeploymentConfig, Deployment, ConfigMap, ImageStream , ImageStreamImage and ImageStreamMapping access in
Project
- get
- list
- watch
- create
- delete
- deletecollection
- patch
- update
"},{"location":"tenant-roles.html#resources-permissions_1","title":"Resources Permissions","text":" - CRUD access on Template, TemplateInstance and TemplateGroupInstance of MTO custom resources
- Job, CronJob, Task, Trigger and Pipeline access in
Project
- get
- list
- watch
- create
- delete
- deletecollection
- patch
- update
- Get access on projects
- Route and NetworkPolicies access in
Project
- get
- list
- watch
- create
- delete
- deletecollection
- patch
- update
- Template, ReplicaSet, StatefulSet and DaemonSet access in
Project
- get
- list
- watch
- create
- delete
- deletecollection
- patch
- update
- CRUD access on all Projects related to
- Elasticsearch
- Logging
- Kibana
- Istio
- Jaeger
- Kiali
- Tekton.dev
- Get access on CustomResourceDefinitions in
Project
- Edit and view permission on
jenkins.build.openshift.io
- InstallPlan access in
Project
- Subscription and PackageManifest access in
Project
- get
- list
- watch
- create
- delete
- deletecollection
- patch
- update
"},{"location":"tenant-roles.html#3-viewer","title":"3. Viewer","text":" fig 4. Shows viewers role in a tenant using MTO
Viewer role will only have view access on their Project
.
"},{"location":"tenant-roles.html#access-permissions_2","title":"Access Permissions","text":" - ServiceAccount access in
Project
"},{"location":"tenant-roles.html#quotas-permissions_2","title":"Quotas Permissions","text":" - AppliedClusterResourceQuotas access in
Project
"},{"location":"tenant-roles.html#builds-pods-pvc-permissions_1","title":"Builds ,Pods , PVC Permissions","text":" - Pod, PodDisruptionBudget and PVC access in
Project
- Build, BuildConfig, BuildLog, DeploymentConfig, ConfigMap, ImageStream, ImageStreamImage and ImageStreamMapping access in
Project
"},{"location":"tenant-roles.html#resources-permissions_2","title":"Resources Permissions","text":" - Get, list, view access on Template, TemplateInstance and TemplateGroupInstance of MTO custom resources
- Job, CronJob, Task, Trigger and Pipeline access in
Project
- Get access on projects
- Routes, NetworkPolicies and Daemonset access in
Project
- Template, ReplicaSet, StatefulSet and Daemonset in
Project
- Get,list,watch access on all projects related to
- Elasticsearch
- Logging
- Kibana
- Istio
- Jaeger
- Kiali
- Tekton.dev
- Get, list, watch access on ImageStream, ImageStreamImage and ImageStreamMapping in
Project
- Get access on CustomResourceDefinition in
Project
- View permission on
Jenkins.Build.Openshift.io
- Subscription, PackageManifest and InstallPlan access in
Project
"},{"location":"troubleshooting.html","title":"Troubleshooting Guide","text":""},{"location":"troubleshooting.html#operatorhub-upgrade-error","title":"OperatorHub Upgrade Error","text":""},{"location":"troubleshooting.html#operator-is-stuck-in-upgrade-if-upgrade-approval-is-set-to-automatic","title":"Operator is stuck in upgrade if upgrade approval is set to Automatic","text":""},{"location":"troubleshooting.html#problem","title":"Problem","text":"If operator upgrade is set to Automatic Approval on OperatorHub, there may be scenarios where it gets blocked.
"},{"location":"troubleshooting.html#resolution","title":"Resolution","text":"Information
If upgrade approval is set to manual, and you want to skip upgrade of a specific version, then delete the InstallPlan created for that specific version. Operator Lifecycle Manager (OLM) will create the latest available InstallPlan which can be approved then.\n
As OLM does not allow upgrading or downgrading from a version that is stuck because of an error, the only possible fix is to uninstall the operator from the cluster. When the operator is uninstalled, it removes all of its resources, i.e., ClusterRoles, ClusterRoleBindings, and Deployments etc., except Custom Resource Definitions (CRDs), so none of the Custom Resources (CRs), Tenants, Templates etc., will be removed from the cluster. If any CRD has a conversion webhook defined, then that webhook should be removed before installing the stable version of the operator. This can be achieved by removing the .spec.conversion
block from the CRD schema.
As an example, if you have installed v0.8.0 of Multi Tenant Operator on your cluster, it will be stuck with the error error validating existing CRs against new CRD's schema for \"integrationconfigs.tenantoperator.stakater.com\": error validating custom resource against new schema for IntegrationConfig multi-tenant-operator/tenant-operator-config: [].spec.tenantRoles: Required value
. To resolve this issue, first uninstall MTO from the cluster. Once MTO is uninstalled, check the Tenant CRD, which will have a conversion block that needs to be removed. After removing the conversion block from the Tenant CRD, install the latest available version of MTO from OperatorHub.
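As a hedged sketch (the CRD name below assumes the Tenant CRD is named tenants.tenantoperator.stakater.com; verify the actual name with oc get crd), the conversion block can be removed with a JSON patch instead of editing the CRD by hand:
oc patch crd tenants.tenantoperator.stakater.com --type=json -p '[{\"op\": \"remove\", \"path\": \"/spec/conversion\"}]'\n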
"},{"location":"troubleshooting.html#permission-issues","title":"Permission Issues","text":""},{"location":"troubleshooting.html#vault-user-permissions-are-not-updated-if-the-user-is-added-to-a-tenant-and-the-user-does-not-exist-in-rhsso","title":"Vault user permissions are not updated if the user is added to a Tenant, and the user does not exist in RHSSO","text":""},{"location":"troubleshooting.html#problem_1","title":"Problem","text":"If a user is added to tenant resource, and the user does not exist in RHSSO, then RHSSO is not updated with the user's Vault permission.
"},{"location":"troubleshooting.html#reproduction-steps","title":"Reproduction steps","text":" - Add a new user to Tenant CR
- Attempt to log in to Vault with the added user
- Vault denies that the user exists, and signs the user up via RHSSO. User is now created on RHSSO (you may check for the user on RHSSO).
"},{"location":"troubleshooting.html#resolution_1","title":"Resolution","text":"If the user does not exist in RHSSO, then MTO does not create the tenant access for Vault in RHSSO.
The user now needs to go to Vault, and sign up using OIDC. Then the user needs to wait for MTO to reconcile the updated tenant (reconciliation period is currently 1 hour). After reconciliation, MTO will add relevant access for the user in RHSSO.
If the user needs to be added immediately and it is not feasible to wait for next MTO reconciliation, then: add a label or annotation to the user, or restart the Tenant controller pod to force immediate reconciliation.
"},{"location":"vault-multitenancy.html","title":"Vault Multitenancy","text":"HashiCorp Vault is an identity-based secret and encryption management system. Vault validates and authorizes a system's clients (users, machines, apps) before providing them access to secrets or stored sensitive data.
"},{"location":"vault-multitenancy.html#vault-integration-in-multi-tenant-operator","title":"Vault integration in Multi Tenant Operator","text":""},{"location":"vault-multitenancy.html#service-account-auth-in-vault","title":"Service Account Auth in Vault","text":"MTO enables the Kubernetes auth method which can be used to authenticate with Vault using a Kubernetes Service Account Token. When enabled, for every tenant namespace, MTO automatically creates policies and roles that allow the service accounts present in those namespaces to read secrets at tenant's path in Vault. The name of the role is the same as namespace name.
These service accounts are required to have stakater.com/vault-access: true
label, so they can be authenticated with Vault via MTO.
The Diagram shows how MTO enables ServiceAccounts to read secrets from Vault.
"},{"location":"vault-multitenancy.html#user-oidc-auth-in-vault","title":"User OIDC Auth in Vault","text":"This requires a running RHSSO(RedHat Single Sign On)
instance integrated with Vault over OIDC login method.
MTO integration with Vault and RHSSO provides a way for users to log in to Vault where they only have access to relevant tenant paths.
Once both integrations are set-up with IntegrationConfig CR, MTO links tenant users to specific client roles named after their tenant under Vault client in RHSSO.
After that, MTO creates specific policies in Vault for its tenant users.
Mapping of tenant roles to Vault is shown below
Tenant Role Vault Path Vault Capabilities Owner, Editor (tenantName)/* Create, Read, Update, Delete, List Owner, Editor sys/mounts/(tenantName)/* Create, Read, Update, Delete, List Owner, Editor managed-addons/* Read, List Viewer (tenantName)/* Read A simple user login workflow is shown in the diagram below.
"},{"location":"explanation/console.html","title":"MTO Console","text":""},{"location":"explanation/console.html#introduction","title":"Introduction","text":"The Multi Tenant Operator (MTO) Console is a comprehensive user interface designed for both administrators and tenant users to manage multi-tenant environments. The MTO Console simplifies the complexity involved in handling various aspects of tenants and their related resources.
"},{"location":"explanation/console.html#dashboard-overview","title":"Dashboard Overview","text":"The dashboard serves as a centralized monitoring hub, offering insights into the current state of tenants, namespaces, and quotas. It is designed to provide a quick summary/snapshot of MTO resources' status. Additionally, it includes a Showback graph that presents a quick glance of the seven-day cost trends associated with the namespaces/tenants based on the logged-in user.
"},{"location":"explanation/console.html#tenants","title":"Tenants","text":"Here, admins have a bird's-eye view of all tenants, with the ability to delve into each one for detailed examination and management. This section is pivotal for observing the distribution and organization of tenants within the system. More information on each tenant can be accessed by clicking the view option against each tenant name.
"},{"location":"explanation/console.html#namespaces","title":"Namespaces","text":"Users can view all the namespaces that belong to their tenant, offering a comprehensive perspective of the accessible namespaces for tenant members. This section also provides options for detailed exploration.
"},{"location":"explanation/console.html#quotas","title":"Quotas","text":"MTO's Quotas are crucial for managing resource allocation. In this section, administrators can assess the quotas assigned to each tenant, ensuring a balanced distribution of resources in line with operational requirements.
"},{"location":"explanation/console.html#templates","title":"Templates","text":"The Templates section acts as a repository for standardized resource deployment patterns, which can be utilized to maintain consistency and reliability across tenant environments. Few examples include provisioning specific k8s manifests, helm charts, secrets or configmaps across a set of namespaces.
"},{"location":"explanation/console.html#showback","title":"Showback","text":"The Showback feature is an essential financial governance tool, providing detailed insights into the cost implications of resource usage by tenant or namespace or other filters. This facilitates a transparent cost management and internal chargeback or showback process, enabling informed decision-making regarding resource consumption and budgeting.
"},{"location":"explanation/console.html#user-roles-and-permissions","title":"User Roles and Permissions","text":""},{"location":"explanation/console.html#administrators","title":"Administrators :
","text":"Administrators have overarching access to the console, including the ability to view all namespaces and tenants. They have exclusive access to the IntegrationConfig, allowing them to view all the settings and integrations.
"},{"location":"explanation/console.html#tenant-users","title":"Tenant Users :
","text":"Regular tenant users can monitor and manage their allocated resources. However, they do not have access to the IntegrationConfig and cannot view resources across different tenants, ensuring data privacy and operational integrity.
"},{"location":"explanation/console.html#live-yaml-configuration-and-graph-view","title":"Live YAML Configuration and Graph View","text":"In the MTO Console, each resource section is equipped with a \"View\" button, revealing the live YAML configuration for complete information on the resource. For Tenant resources, a supplementary \"Graph\" option is available, illustrating the relationships and dependencies of all resources under a Tenant. This dual-view approach empowers users with both the detailed control of YAML and the holistic oversight of the graph view.
You can find more details on graph visualization here: Graph Visualization
"},{"location":"explanation/console.html#caching-and-database","title":"Caching and Database","text":"MTO integrates a dedicated database to streamline resource management. Now, all resources managed by MTO are efficiently stored in a Postgres database, enhancing the MTO Console's ability to efficiently retrieve all the resources for optimal presentation.
The implementation of this feature is facilitated by the Bootstrap controller, streamlining the deployment process. This controller creates the PostgreSQL Database, establishes a service for inter-pod communication, and generates a secret to ensure secure connectivity to the database.
Furthermore, the introduction of a dedicated cache layer ensures that there is no added burden on the kube API server when responding to MTO Console requests. This enhancement not only improves response times but also contributes to a more efficient and responsive resource management system.
"},{"location":"explanation/console.html#conclusion","title":"Conclusion","text":"The MTO Console is engineered to simplify complex multi-tenant management. The current iteration focuses on providing comprehensive visibility. Future updates could include direct CUD (Create/Update/Delete) capabilities from the dashboard, enhancing the console\u2019s functionality. The Showback feature remains a standout, offering critical cost tracking and analysis. The delineation of roles between administrators and tenant users ensures a secure and organized operational framework.
"},{"location":"explanation/why-argocd-multi-tenancy.html","title":"Need for Multi-Tenancy in ArgoCD","text":""},{"location":"explanation/why-argocd-multi-tenancy.html#argocd-multi-tenancy","title":"ArgoCD Multi-tenancy","text":"ArgoCD is a declarative GitOps tool built to deploy applications to Kubernetes. While the continuous delivery (CD) space is seen by some as crowded these days, ArgoCD does bring some interesting capabilities to the table. Unlike other tools, ArgoCD is lightweight and easy to configure.
"},{"location":"explanation/why-argocd-multi-tenancy.html#why-argocd","title":"Why ArgoCD?","text":"Application definitions, configurations, and environments should be declarative and version controlled. Application deployment and lifecycle management should be automated, auditable, and easy to understand.
"},{"location":"explanation/why-vault-multi-tenancy.html","title":"Need for Multi-Tenancy in Vault","text":""},{"location":"faq/index.html","title":"Index","text":""},{"location":"how-to-guides/integration-config.html","title":"Integration Config","text":"IntegrationConfig is used to configure settings of multi-tenancy for Multi Tenant Operator.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - admin\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n - viewer\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n editor:\n clusterRoles:\n - custom-editor\n viewer:\n clusterRoles:\n - custom-viewer\n - custom-view\n openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n group:\n labels:\n role: customer-reader\n sandbox:\n labels:\n stakater.com/kind: sandbox\n clusterAdminGroups:\n - cluster-admins\n privilegedNamespaces:\n - ^default$\n - ^openshift-*\n - ^kube-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n groups:\n - cluster-admins\n argocd:\n namespace: openshift-operators\n namespaceResourceBlacklist:\n - group: '' # all groups\n kind: ResourceQuota\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n rhsso:\n enabled: true\n realm: customer\n endpoint:\n url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n secretReference:\n name: auth-secrets\n namespace: openshift-auth\n vault:\n enabled: true\n endpoint:\n url: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n secretReference:\n name: vault-root-token\n namespace: vault\n sso:\n clientName: vault\n accessorID: <ACCESSOR_ID_TOKEN>\n
Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator.
"},{"location":"how-to-guides/integration-config.html#tenantroles","title":"TenantRoles","text":"TenantRoles are required within the IntegrationConfig, as they are used for defining what roles will be applied to each Tenant namespace. The field allows optional custom roles, that are then used to create RoleBindings for namespaces that match a labelSelector.
\u26a0\ufe0f If you do not configure roles in any way, then the default OpenShift roles of admin
, edit
, and view
will apply to Tenant members. Their details can be found here
tenantRoles:\n default:\n owner:\n clusterRoles:\n - admin\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n - viewer\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n editor:\n clusterRoles:\n - custom-editor\n viewer:\n clusterRoles:\n - custom-viewer\n - custom-view\n
"},{"location":"how-to-guides/integration-config.html#default","title":"Default","text":"This field contains roles that will be used to create default roleBindings for each namespace that belongs to tenants. These roleBindings are only created for a namespace if that namespace isn't already matched by the custom
field below it. Therefore, it is required to have at least one role mentioned within each of its three subfields: owner
, editor
, and viewer
. These 3 subfields also correspond to the member fields of the Tenant CR
"},{"location":"how-to-guides/integration-config.html#custom","title":"Custom","text":"An array of custom roles. Similar to the default
field, you can mention roles within this field as well. However, the custom roles also require the use of a labelSelector
for each iteration within the array. The roles mentioned here will only apply to the namespaces that are matched by the labelSelector. If a namespace is matched by 2 different labelSelectors, then both roles will apply to it. Additionally, roles can be skipped within the labelSelector. These missing roles are then inherited from the default
roles field. For example, if the following custom roles arrangement is used:
custom:\n- labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n
Then the editor
and viewer
roles will be taken from the default
roles field, as that field is required to have at least one role mentioned.
"},{"location":"how-to-guides/integration-config.html#openshift","title":"OpenShift","text":"openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n group:\n labels:\n role: customer-reader\n sandbox:\n labels:\n stakater.com/kind: sandbox\n clusterAdminGroups:\n - cluster-admins\n privilegedNamespaces:\n - ^default$\n - ^openshift-*\n - ^kube-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n groups:\n - cluster-admins\n
"},{"location":"how-to-guides/integration-config.html#project-group-and-sandbox","title":"Project, group and sandbox","text":"We can use the openshift.project
, openshift.group
and openshift.sandbox
fields to automatically add labels
and annotations
to the Projects and Groups managed via MTO.
openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n group:\n labels:\n role: customer-reader\n sandbox:\n labels:\n stakater.com/kind: sandbox\n
If we want to add default labels/annotations to sandbox namespaces of tenants, then we simply add them in openshift.project.labels
/openshift.project.annotations
respectively.
Whenever a project is created, it will have the labels and annotations mentioned above.
kind: Project\napiVersion: project.openshift.io/v1\nmetadata:\n name: bluesky-build\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n labels:\n workload-monitoring: 'true'\n stakater.com/tenant: bluesky\nspec:\n finalizers:\n - kubernetes\nstatus:\n phase: Active\n
kind: Group\napiVersion: user.openshift.io/v1\nmetadata:\n name: bluesky-owner-group\n labels:\n role: customer-reader\nusers:\n - andrew@stakater.com\n
"},{"location":"how-to-guides/integration-config.html#cluster-admin-groups","title":"Cluster Admin Groups","text":"clusterAdminGroups:
Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way.
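For example, a minimal sketch reusing the cluster-admins group from the IntegrationConfig example above:
openshift:\n clusterAdminGroups:\n - cluster-admins\n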
"},{"location":"how-to-guides/integration-config.html#privileged-namespaces","title":"Privileged Namespaces","text":"privilegedNamespaces:
Contains the list of namespaces
ignored by MTO. MTO will not manage the namespaces
in this list. Values in this list are regex patterns. For example:
- To ignore the
default
namespace, we can specify ^default$
- To ignore all namespaces starting with the
openshift-
prefix, we can specify ^openshift-*
. - To ignore any namespace containing
stakater
in its name, we can specify stakater
. (A constant word given as a regex pattern will match any namespace containing that word.)
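Putting the examples above together, a sketch of the field could look like this:
openshift:\n privilegedNamespaces:\n - ^default$\n - ^openshift-*\n - stakater\n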
"},{"location":"how-to-guides/integration-config.html#privileged-serviceaccounts","title":"Privileged ServiceAccounts","text":"privilegedServiceAccounts:
Contains the list of ServiceAccounts
ignored by MTO. MTO will not manage the ServiceAccounts
in this list. Values in this list are regex patterns. For example, to ignore all ServiceAccounts
starting with the system:serviceaccount:openshift-
prefix, we can use ^system:serviceaccount:openshift-*
; and to ignore the system:serviceaccount:builder
service account we can use ^system:serviceaccount:builder$.
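A sketch combining the two examples above:
openshift:\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:builder$\n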
"},{"location":"how-to-guides/integration-config.html#namespace-access-policy","title":"Namespace Access Policy","text":"namespaceAccessPolicy.Deny:
Can be used to restrict privileged users/groups from performing CRUD operations on managed namespaces.
namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n groups:\n - cluster-admins\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n
\u26a0\ufe0f If you want to use a more complex regex pattern (for the openshift.privilegedNamespaces
or openshift.privilegedServiceAccounts
field), it is recommended that you test the regex pattern first - either locally or using a platform such as https://regex101.com/.
"},{"location":"how-to-guides/integration-config.html#argocd","title":"ArgoCD","text":""},{"location":"how-to-guides/integration-config.html#namespace","title":"Namespace","text":"argocd.namespace
is an optional field used to specify the namespace where ArgoCD Applications and AppProjects are deployed. The field should be populated when you want to create an ArgoCD AppProject for each tenant.
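For example, a minimal sketch pointing MTO at the namespace used in the IntegrationConfig example above:
argocd:\n namespace: openshift-operators\n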
"},{"location":"how-to-guides/integration-config.html#namespaceresourceblacklist","title":"NamespaceResourceBlacklist","text":"argocd:\n namespaceResourceBlacklist:\n - group: '' # all resource groups\n kind: ResourceQuota\n - group: ''\n kind: LimitRange\n - group: ''\n kind: NetworkPolicy\n
argocd.namespaceResourceBlacklist
prevents ArgoCD from syncing the listed resources from your GitOps repo.
"},{"location":"how-to-guides/integration-config.html#clusterresourcewhitelist","title":"ClusterResourceWhitelist","text":"argocd:\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n
argocd.clusterResourceWhitelist
allows ArgoCD to sync the listed cluster scoped resources from your GitOps repo.
"},{"location":"how-to-guides/integration-config.html#rhsso-red-hat-single-sign-on","title":"RHSSO (Red Hat Single Sign-On)","text":"Red Hat Single Sign-On RHSSO is based on the Keycloak project and enables you to secure your web applications by providing Web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0.
If RHSSO
is configured on a cluster, then RHSSO configuration can be enabled.
rhsso:\n enabled: true\n realm: customer\n endpoint:\n secretReference:\n name: auth-secrets\n namespace: openshift-auth\n url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n
If enabled, then admins have to provide the secret and URL of RHSSO.
secretReference.name:
Will contain the name of the secret. secretReference.namespace:
Will contain the namespace of the secret. realm:
Will contain the realm name which is configured for users. url:
Will contain the URL of RHSSO.
"},{"location":"how-to-guides/integration-config.html#vault","title":"Vault","text":"Vault is used to secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.
If vault
is configured on a cluster, then Vault configuration can be enabled.
vault:\n enabled: true\n endpoint:\n secretReference:\n name: vault-root-token\n namespace: vault\n url: >-\n https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n sso:\n accessorID: <ACCESSOR_ID_TOKEN>\n clientName: vault\n
If enabled, then admins have to provide the secret, URL, and SSO accessorID of Vault.
secretReference.name:
Will contain the name of the secret. secretReference.namespace:
Will contain the namespace of the secret. url:
Will contain the URL of Vault. sso.accessorID:
Will contain the SSO accessorID. sso.clientName:
Will contain the client name.
"},{"location":"how-to-guides/quota.html","title":"Quota","text":"Using Multi Tenant Operator, the cluster-admin can set and enforce cluster resource quotas and limit ranges for tenants.
"},{"location":"how-to-guides/quota.html#assigning-resource-quotas","title":"Assigning Resource Quotas","text":"Bill is a cluster admin who will first create Quota
CR where he sets the maximum resource limits that Anna's tenant will have. Here limitrange
is an optional field; the cluster admin can skip it if not needed.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: small\nspec:\n resourcequota:\n hard:\n requests.cpu: '5'\n requests.memory: '5Gi'\n configmaps: \"10\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\n limitrange:\n limits:\n - type: \"Pod\"\n max:\n cpu: \"2\"\n memory: \"1Gi\"\n min:\n cpu: \"200m\"\n memory: \"100Mi\"\nEOF\n
For more details please refer to Quotas.
kubectl get quota small\nNAME STATE AGE\nsmall Active 3m\n
Bill then proceeds to create a tenant for Anna, while also linking the newly created Quota
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@stakater.com\n quota: small\n sandbox: false\nEOF\n
Now that the quota is linked with Anna's tenant, Anna can create resources as long as they stay within the limits of the resource quota and limit range.
kubectl -n bluesky-production create deployment nginx --image nginx:latest --replicas 4\n
Once the resource quota assigned to the tenant has been reached, Anna cannot create further resources.
kubectl -n bluesky-production run bluesky-training --image nginx:latest\nError from server (Cannot exceed Namespace quota: please, reach out to the system administrators)\n
"},{"location":"how-to-guides/quota.html#limiting-persistentvolume-for-tenant","title":"Limiting PersistentVolume for Tenant","text":"Bill, as a cluster admin, wants to restrict the amount of storage a Tenant can use. For that he'll add the requests.storage
field to quota.spec.resourcequota.hard
. If Bill wants to restrict tenant bluesky
to use only 50Gi
of storage, he'll first create a quota with requests.storage
field set to 50Gi
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: medium\nspec:\n resourcequota:\n hard:\n requests.cpu: '5'\n requests.memory: '10Gi'\n requests.storage: '50Gi'\nEOF\n
Once the quota is created, Bill will create the tenant and set the quota field to the one he created.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n quota: medium\n sandbox: true\nEOF\n
Now, the combined storage used by all tenant namespaces will not exceed 50Gi
.
"},{"location":"how-to-guides/quota.html#adding-storageclass-restrictions-for-tenant","title":"Adding StorageClass Restrictions for Tenant","text":"Now, Bill, as a cluster admin, wants to make sure that no Tenant can provision more than a fixed amount of storage from a StorageClass. Bill can restrict that using <storage-class-name>.storageclass.storage.k8s.io/requests.storage
field in quota.spec.resourcequota.hard
field. If Bill wants to restrict tenant sigma
to use only 20Gi
of storage from storage class stakater
, he'll first create a StorageClass stakater
and then create the relevant Quota with stakater.storageclass.storage.k8s.io/requests.storage
field set to 20Gi
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: small\nspec:\n resourcequota:\n hard:\n requests.cpu: '2'\n requests.memory: '4Gi'\n stakater.storageclass.storage.k8s.io/requests.storage: '20Gi'\nEOF\n
Once the quota is created, Bill will create the tenant and set the quota field to the one he created.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\nspec:\n owners:\n users:\n - dave@aurora.org\n quota: small\n sandbox: true\nEOF\n
Now, the combined storage provisioned from StorageClass stakater
used by all tenant namespaces will not exceed 20Gi
.
The 20Gi
limit will only be applied to StorageClass stakater
. If a tenant member creates a PVC with some other StorageClass, he will not be restricted.
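As an illustrative sketch (the PVC name and namespace are hypothetical), a claim that would be counted against the 20Gi limit because it uses the stakater StorageClass:
kind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n name: sigma-data # hypothetical claim in one of tenant sigma's namespaces\n namespace: sigma-dave-aurora-sandbox\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: stakater\n resources:\n requests:\n storage: 5Gi\n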
Tip
More details about Resource Quota
can be found here
"},{"location":"how-to-guides/template-group-instance.html","title":"TemplateGroupInstance","text":"Cluster scoped resource:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: namespace-parameterized-restrictions-tgi\nspec:\n template: namespace-parameterized-restrictions\n sync: true\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n
TemplateGroupInstance distributes a template across multiple namespaces which are selected by labelSelector.
"},{"location":"how-to-guides/template-instance.html","title":"TemplateInstance","text":"Namespace scoped resource:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: networkpolicy\n namespace: build\nspec:\n template: networkpolicy\n sync: true\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n
TemplateInstances are used to keep track of resources created from Templates that are instantiated inside a Namespace. Generally, a TemplateInstance is created from a Template and is then not updated when the Template changes later on. To change this behavior, it is possible to set spec.sync: true
in a TemplateInstance. Setting this option keeps the TemplateInstance in sync with the underlying Template (similar to a Helm upgrade).
"},{"location":"how-to-guides/template.html","title":"Template","text":""},{"location":"how-to-guides/template.html#cluster-scoped-resource","title":"Cluster scoped resource","text":"apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: redis\nresources:\n helm:\n releaseName: redis\n chart:\n repository:\n name: redis\n repoUrl: https://charts.bitnami.com/bitnami\n values: |\n redisPort: 6379\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: networkpolicy\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\nresources:\n manifests:\n - kind: NetworkPolicy\n apiVersion: networking.k8s.io/v1\n metadata:\n name: deny-cross-ns-traffic\n spec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: \"${{CIDR_IP}}\"\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: resource-mapping\nresources:\n resourceMappings:\n secrets:\n - name: secret-s1\n namespace: namespace-n1\n configMaps:\n - name: configmap-c1\n namespace: namespace-n2\n
Templates are used to initialize Namespaces, share common resources across namespaces, and map secrets/configmaps from one namespace to other namespaces.
- They either contain one or more Kubernetes manifests, a reference to secrets/configmaps, or a Helm chart.
- They are being tracked by TemplateInstances in each Namespace they are applied to.
- They can contain pre-defined parameters such as ${namespace}/${tenant} or user-defined ${MY_PARAMETER} that can be specified within a TemplateInstance.
Also, you can define custom variables in Template
and TemplateInstance
. The parameters defined in TemplateInstance
override the values defined in Template
.
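For instance, a sketch of a TemplateInstance that overrides the CIDR_IP parameter of the networkpolicy Template above (the overriding value 10.10.0.0/16 is hypothetical):
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: networkpolicy\n namespace: build\nspec:\n template: networkpolicy\n sync: true\nparameters:\n - name: CIDR_IP\n value: \"10.10.0.0/16\"\n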
Manifest Templates: The easiest option to define a Template is by specifying an array of Kubernetes manifests which should be applied when the Template is being instantiated.
Helm Chart Templates: Instead of manifests, a Template can specify a Helm chart that will be installed (using Helm template) when the Template is being instantiated.
Resource Mapping Templates: A template can be used to map secrets and configmaps from one tenant's namespace to another tenant's namespace, or within a tenant's namespace.
"},{"location":"how-to-guides/template.html#mandatory-and-optional-templates","title":"Mandatory and Optional Templates","text":"Templates can either be mandatory or optional. By default, all Templates are optional. Cluster Admins can make Templates mandatory by adding them to the spec.templateInstances
array within the Tenant configuration. All Templates listed in spec.templateInstances
will always be instantiated within every Namespace
that is created for the respective Tenant.
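For example, a minimal sketch of a Tenant that makes the networkpolicy Template (shown earlier in this document) mandatory for all of its namespaces:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: alpha\nspec:\n quota: small\n templateInstances:\n - spec:\n template: networkpolicy\n sync: true\n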
"},{"location":"how-to-guides/tenant.html","title":"Tenant","text":"Cluster scoped resource:
The smallest valid Tenant definition is given below (with just one field in its spec):
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: alpha\nspec:\n quota: small\n
Here is a more detailed Tenant definition, explained below:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: alpha\nspec:\n owners: # optional\n users: # optional\n - dave@stakater.com\n groups: # optional\n - alpha\n editors: # optional\n users: # optional\n - jack@stakater.com\n viewers: # optional\n users: # optional\n - james@stakater.com\n quota: medium # required\n sandboxConfig: # optional\n enabled: true # optional\n private: true # optional\n onDelete: # optional\n cleanNamespaces: false # optional\n cleanAppProject: true # optional\n argocd: # optional\n sourceRepos: # required\n - https://github.com/stakater/gitops-config\n appProject: # optional\n clusterResourceWhitelist: # optional\n - group: tronador.stakater.com\n kind: Environment\n namespaceResourceBlacklist: # optional\n - group: \"\"\n kind: ConfigMap\n hibernation: # optional\n sleepSchedule: 23 * * * * # required\n wakeSchedule: 26 * * * * # required\n namespaces: # optional\n withTenantPrefix: # optional\n - dev\n - build\n withoutTenantPrefix: # optional\n - preview\n commonMetadata: # optional\n labels: # optional\n stakater.com/team: alpha\n annotations: # optional\n openshift.io/node-selector: node-role.kubernetes.io/infra=\n specificMetadata: # optional\n - annotations: # optional\n stakater.com/user: dave\n labels: # optional\n stakater.com/sandbox: true\n namespaces: # optional\n - alpha-dave-stakater-sandbox\n templateInstances: # optional\n - spec: # optional\n template: networkpolicy # required\n sync: true # optional\n parameters: # optional\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n selector: # optional\n matchLabels: # optional\n policy: network-restriction\n
-
Tenant has 3 kinds of Members
. Each member type should have different roles assigned to them. These roles are taken from the IntegrationConfig's TenantRoles field. You can customize these roles to your liking, but by default the following configuration applies:
Owners:
Users who will be owners of a tenant. They will have OpenShift admin-role assigned to their users, with additional access to create namespaces as well. Editors:
Users who will be editors of a tenant. They will have OpenShift edit-role assigned to their users. Viewers:
Users who will be viewers of a tenant. They will have OpenShift view-role assigned to their users. - For more details, check out their definitions.
-
Users
can be linked to the tenant by specifying their usernames in owners.users
, editors.users
and viewers.users
respectively.
-
Groups
can be linked to the tenant by specifying the group name in owners.groups
, editors.groups
and viewers.groups
respectively.
-
Tenant will have a Quota
to limit resource consumption.
-
sandboxConfig
is used to configure the tenant user sandbox feature
- Setting
enabled
to true will create sandbox namespaces for owners and editors. - Sandbox will follow the following naming convention {TenantName}-{UserName}-sandbox.
- In case of groups, the sandbox namespaces will be created for each member of the group.
- Setting
private
to true will make those sandboxes be only visible to the user they belong to. By default, sandbox namespaces are visible to all tenant members
-
onDelete
is used to tell Multi Tenant Operator what to do when a Tenant is deleted.
cleanNamespaces
if the value is set to true, MTO deletes all tenant namespaces when a Tenant
is deleted. The default value is false. cleanAppProject
will keep the generated ArgoCD AppProject if the value is set to false. By default, the value is true.
-
argocd
is required if you want to create an ArgoCD AppProject for the tenant.
sourceRepos
contains a list of repositories that point to your GitOps configuration. appProject
is used to set the clusterResourceWhitelist
and namespaceResourceBlacklist
resources. If these are also applied via IntegrationConfig
then those applied via the Tenant CR will have higher precedence for the given Tenant.
-
hibernation
can be used to create a schedule during which the namespaces belonging to the tenant will be put to sleep. The values of the sleepSchedule
and wakeSchedule
fields must be a string in a cron format.
-
Namespaces can also be created via tenant CR by specifying names in namespaces
.
- Multi Tenant Operator will append tenant name prefix while creating namespaces if the list of namespaces is under the
withTenantPrefix
field, so the format will be {TenantName}-{Name}. - Namespaces listed under the
withoutTenantPrefix
will be created with the given name. Listing namespaces here that already exist within the cluster is not allowed. stakater.com/kind: {Name}
label will also be added to the namespaces.
-
commonMetadata
can be used to distribute common labels and annotations among tenant namespaces.
labels
distributes provided labels among all tenant namespaces annotations
distributes provided annotations among all tenant namespaces
-
specificMetadata
can be used to distribute specific labels and annotations among specific tenant namespaces.
labels
distributes given labels among specific tenant namespaces annotations
distributes given annotations among specific tenant namespaces namespaces
consists of a list of specific tenant namespaces across which the labels and annotations will be distributed
-
Tenant automatically deploys template
resource mentioned in templateInstances
to matching tenant namespaces.
Template
resources are created in those namespaces
which belong to a tenant
and contain matching labels
. Template
resources are created in all namespaces
of a tenant
if selector
field is empty.
\u26a0\ufe0f If the same label or annotation key is applied using more than one of the methods provided, then the highest precedence will be given to specificMetadata
followed by commonMetadata
and finally the values applied from openshift.project.labels
/openshift.project.annotations
in IntegrationConfig
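As an illustrative sketch of this precedence (the alpha-sandbox value is hypothetical), the sandbox namespace listed under specificMetadata ends up with stakater.com/team: alpha-sandbox, while all other tenant namespaces get stakater.com/team: alpha:
commonMetadata:\n labels:\n stakater.com/team: alpha\nspecificMetadata:\n - labels:\n stakater.com/team: alpha-sandbox\n namespaces:\n - alpha-dave-stakater-sandbox\n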
"},{"location":"how-to-guides/offboarding/uninstalling.html","title":"Uninstall via OperatorHub UI","text":"You can uninstall MTO by following these steps:
-
Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set spec.onDelete.cleanNamespaces
to false
for all those tenants whose namespaces you want to retain, and spec.onDelete.cleanAppProject
to false
for all those tenants whose AppProject you want to retain. For more details check out onDelete
-
After making the required changes, open the OpenShift console and click on Operators
, followed by Installed Operators
from the side menu
- Now click on uninstall and confirm uninstall.
"},{"location":"how-to-guides/offboarding/uninstalling.html#notes","title":"Notes","text":" - For more details on how to use MTO please refer Tenant's tutorial.
- For more details on how to extend your MTO manager ClusterRole, please refer to extend-admin-clusterrole.
"},{"location":"reference-guides/add-remove-namespace-gitops.html","title":"Add/Remove Namespace from Tenant via GitOps","text":""},{"location":"reference-guides/admin-clusterrole.html","title":"Extending Admin ClusterRole","text":"Bill as the cluster admin want to add additional rules for admin ClusterRole.
Bill can extend the admin
role for MTO using the aggregation label for admin ClusterRole. Bill will create a new ClusterRole with all the permissions he needs to extend for MTO and add the aggregation label on the newly created ClusterRole.
kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: extend-admin-role\n labels:\n rbac.authorization.k8s.io/aggregate-to-admin: 'true'\nrules:\n - verbs:\n - create\n - update\n - patch\n - delete\n apiGroups:\n - user.openshift.io\n resources:\n - groups\n
Note: You can learn more about aggregated-cluster-roles
here
"},{"location":"reference-guides/admin-clusterrole.html#whats-next","title":"What\u2019s next","text":"See how Bill can hibernate unused namespaces at night
"},{"location":"reference-guides/configuring-multitenant-network-isolation.html","title":"Configuring Multi-Tenant Isolation with Network Policy Template","text":"Bill is a cluster admin who wants to configure network policies to provide multi-tenant network isolation.
First, Bill creates a template for network policies:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: tenant-network-policy\nresources:\n manifests:\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-same-namespace\n spec:\n podSelector: {}\n ingress:\n - from:\n - podSelector: {}\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-from-openshift-monitoring\n spec:\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n network.openshift.io/policy-group: monitoring\n podSelector: {}\n policyTypes:\n - Ingress\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-from-openshift-ingress\n spec:\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n network.openshift.io/policy-group: ingress\n podSelector: {}\n policyTypes:\n - Ingress\n
Once the template has been created, Bill edits the IntegrationConfig to add unique label to all tenant projects:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n tenant-network-policy: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n sandbox:\n labels:\n stakater.com/kind: sandbox\n privilegedNamespaces:\n - default\n - ^openshift-*\n - ^kube-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n
Bill has added a new label tenant-network-policy: \"true\"
in the project section of the IntegrationConfig; MTO will now add that label to all tenant projects.
Finally, Bill creates a TemplateGroupInstance
which will distribute the network policies using the newly added project label and template.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: tenant-network-policy-group\nspec:\n template: tenant-network-policy\n selector:\n matchLabels:\n tenant-network-policy: \"true\"\n sync: true\n
MTO will now deploy the network policies mentioned in Template
to all projects matching the label selector mentioned in the TemplateGroupInstance.
"},{"location":"reference-guides/custom-metrics.html","title":"Custom Metrics Support","text":"Multi Tenant Operator now supports custom metrics for templates, template instances and template group instances. This feature allows users to monitor the usage of templates and template instances in their cluster.
To enable custom metrics and view them in your OpenShift cluster, you need to follow the steps below:
- Ensure that cluster monitoring is enabled in your cluster. You can check this by going to
Observe
-> Metrics
in the OpenShift console. - Navigate to
Administration
-> Namespaces
in the OpenShift console. Select the namespace where you have installed Multi Tenant Operator. - Add the following label to the namespace:
openshift.io/cluster-monitoring=true
. This will enable cluster monitoring for the namespace. - To ensure that the metrics are being scraped for the namespace, navigate to
Observe
-> Targets
in the OpenShift console. You should see the namespace in the list of targets. - To view the custom metrics, navigate to
Observe
-> Metrics
in the OpenShift console. You should see the custom metrics for templates, template instances and template group instances in the list of metrics.
"},{"location":"reference-guides/custom-roles.html","title":"Changing the default access level for tenant owners","text":"This feature allows the cluster admins to change the default roles assigned to Tenant owner, editor, viewer groups.
For example, if Bill as the cluster admin wants to reduce the privileges that tenant owners have, so they cannot create or edit Roles or bind them. As an admin of an OpenShift cluster, Bill can do this by assigning the edit
role to all tenant owners. This is easily achieved by modifying the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - edit\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n
Once all namespaces reconcile, the old admin
RoleBindings should get replaced with the edit
ones for each tenant owner.
"},{"location":"reference-guides/custom-roles.html#giving-specific-permissions-to-some-tenants","title":"Giving specific permissions to some tenants","text":"Bill now wants the owners of the tenants bluesky
and alpha
to have admin
permissions over their namespaces. Custom roles feature will allow Bill to do this, by modifying the IntegrationConfig like this:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - edit\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n owner:\n clusterRoles:\n - admin\n - labelSelector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - bluesky\n owner:\n clusterRoles:\n - admin\n
New Bindings will be created for the Tenant owners of bluesky
and alpha
, corresponding to the admin
Role. Bindings for editors and viewers will be inherited from the default roles
. All other Tenant owners will have an edit
Role bound to them within their namespaces
"},{"location":"reference-guides/deploying-templates.html","title":"Distributing Resources in Namespaces","text":"Multi Tenant Operator has three Custom Resources which can cover this need using the Template
CR, depending upon the conditions and preference.
- TemplateGroupInstance
- TemplateInstance
- Tenant
Stakater Team, however, encourages the use of TemplateGroupInstance
to distribute resources in multiple namespaces as it is optimized for better performance.
"},{"location":"reference-guides/deploying-templates.html#deploying-template-to-namespaces-via-templategroupinstances","title":"Deploying Template to Namespaces via TemplateGroupInstances","text":"Bill, the cluster admin, wants to deploy a docker pull secret in namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
Afterward, Bill can see that secrets have been successfully created in all label matching namespaces.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 2m\n
TemplateGroupInstance
can also target specific tenants or all tenant namespaces under a single yaml definition.
"},{"location":"reference-guides/deploying-templates.html#templategroupinstance-for-multiple-tenants","title":"TemplateGroupInstance for multiple Tenants","text":"It can be done by using the matchExpressions
field, dividing the tenant label in key and values.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\n sync: true\n
"},{"location":"reference-guides/deploying-templates.html#templategroupinstance-for-all-tenants","title":"TemplateGroupInstance for all Tenants","text":"This can also be done by using the matchExpressions
field, using just the tenant label key stakater.com/tenant
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: Exists\n sync: true\n
"},{"location":"reference-guides/deploying-templates.html#deploying-template-to-namespaces-via-tenant","title":"Deploying Template to Namespaces via Tenant","text":"Bill is a cluster admin who wants to deploy a docker-pull-secret in Anna's tenant namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Bill edits Anna's tenant and populates the templateInstances
field:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n editors:\n users:\n - john@aurora.org\n quota: small\n sandboxConfig:\n enabled: true\n templateInstances:\n - spec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n
Multi Tenant Operator will deploy TemplateInstances
mentioned in templateInstances
field, TemplateInstances
will only be applied in those namespaces
which belong to Anna's tenant
and have the matching label of kind: build
.
So now Anna adds label kind: build
to her existing namespace bluesky-anna-aurora-sandbox
, and after adding the label she sees that the secret has been created.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"reference-guides/deploying-templates.html#deploying-template-to-a-namespace-via-templateinstance","title":"Deploying Template to a Namespace via TemplateInstance","text":"Anna wants to deploy a docker pull secret in her namespace.
First Anna asks Bill, the cluster admin, to create a template of the secret for her:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Anna creates a TemplateInstance
in her namespace referring to the Template
she wants to deploy:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: docker-pull-secret-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: docker-pull-secret\n sync: true\n
Once this is created, Anna can see that the secret has been successfully applied.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"reference-guides/deploying-templates.html#passing-parameters-to-template-via-templateinstance-templategroupinstance-or-tenant","title":"Passing Parameters to Template via TemplateInstance, TemplateGroupInstance or Tenant","text":"Anna wants to deploy a LimitRange resource to certain namespaces.
First Anna asks Bill, the cluster admin, to create template with parameters for LimitRange for her:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: namespace-parameterized-restrictions\nparameters:\n # Name of the parameter\n - name: DEFAULT_CPU_LIMIT\n # The default value of the parameter\n value: \"1\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"0.5\"\n # If a parameter is required the template instance will need to set it\n # required: true\n # Make sure only values are entered for this parameter\n validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n manifests:\n - apiVersion: v1\n kind: LimitRange\n metadata:\n name: namespace-limit-range-${namespace}\n spec:\n limits:\n - default:\n cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n defaultRequest:\n cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n type: Container\n
Afterward, Anna creates a TemplateInstance
in her namespace referring to the Template
she wants to deploy:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: namespace-parameterized-restrictions-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: namespace-parameterized-restrictions\n sync: true\nparameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n
If she wants to distribute the same Template over multiple namespaces, she can use TemplateGroupInstance
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: namespace-parameterized-restrictions-tgi\nspec:\n template: namespace-parameterized-restrictions\n sync: true\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\nparameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n
Or she can use her tenant to cover only the tenant namespaces.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n editors:\n users:\n - john@aurora.org\n quota: small\n sandboxConfig:\n enabled: true\n templateInstances:\n - spec:\n template: namespace-parameterized-restrictions\n sync: true\n parameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n selector:\n matchLabels:\n kind: build\n
"},{"location":"reference-guides/distributing-resources.html","title":"Copying Secrets and Configmaps across Tenant Namespaces via TGI","text":"Bill is a cluster admin who wants to map a docker-pull-secret
, present in a build
namespace, into tenant namespaces where certain labels exist.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n resourceMappings:\n secrets:\n - name: docker-pull-secret\n namespace: build\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
Afterward, Bill can see that the secret has been successfully mapped in all matching namespaces.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"reference-guides/distributing-resources.html#mapping-resources-within-tenant-namespaces-via-ti","title":"Mapping Resources within Tenant Namespaces via TI","text":"Anna is a tenant owner who wants to map a docker-pull-secret
, present in bluesky-build
namespace, to bluesky-anna-aurora-sandbox
namespace.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n resourceMappings:\n secrets:\n - name: docker-pull-secret\n namespace: bluesky-build\n
Once the template has been created, Anna creates a TemplateInstance
in bluesky-anna-aurora-sandbox
namespace, referring to the Template
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: docker-secret-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: docker-pull-secret\n sync: true\n
Afterward, Bill can see that the secret has been successfully mapped in the target namespace.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"reference-guides/distributing-secrets-using-sealed-secret-template.html","title":"Distributing Secrets Using Sealed Secrets Template","text":"Bill is a cluster admin who wants to provide a mechanism for distributing secrets in multiple namespaces. For this, he wants to use Sealed Secrets as the solution by adding them to MTO Template CR
First, Bill creates a Template in which Sealed Secret is mentioned:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: tenant-sealed-secret\nresources:\n manifests:\n - kind: SealedSecret\n apiVersion: bitnami.com/v1alpha1\n metadata:\n name: mysecret\n spec:\n encryptedData:\n .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....\n template:\n type: kubernetes.io/dockerconfigjson\n # this is an example of labels and annotations that will be added to the output secret\n metadata:\n labels:\n \"jenkins.io/credentials-type\": usernamePassword\n annotations:\n \"jenkins.io/credentials-description\": credentials from Kubernetes\n
Once the template has been created, Bill has to edit the Tenant
to add unique label to namespaces in which the secret has to be deployed. For this, he can use the support for common and specific labels across namespaces.
Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n\n # use this if you want to add label to some specific namespaces\n specificMetadata:\n - namespaces:\n - test-namespace\n labels:\n distribute-image-pull-secret: true\n\n # use this if you want to add label to all namespaces under your tenant\n commonMetadata:\n labels:\n distribute-image-pull-secret: true\n
Bill has added support for a new label distribute-image-pull-secret: true
for tenant projects/namespaces; MTO will now add that label depending on the field used.
Finally, Bill creates a TemplateGroupInstance
which will deploy the sealed secrets using the newly created project label and template.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: tenant-sealed-secret\nspec:\n template: tenant-sealed-secret\n selector:\n matchLabels:\n distribute-image-pull-secret: true\n sync: true\n
MTO will now deploy the sealed secrets mentioned in Template
to namespaces which have the mentioned label. The rest of the work to deploy secret from a sealed secret has to be done by Sealed Secrets Controller.
"},{"location":"reference-guides/distributing-secrets.html","title":"Distributing Secrets","text":"Bill is a cluster admin who wants to provide a mechanism for distributing secrets in multiple namespaces. For this, he wants to use Sealed Secrets as the solution by adding them to MTO Template CR
First, Bill creates a Template in which Sealed Secret is mentioned:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: tenant-sealed-secret\nresources:\n manifests:\n - kind: SealedSecret\n apiVersion: bitnami.com/v1alpha1\n metadata:\n name: mysecret\n spec:\n encryptedData:\n .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....\n template:\n type: kubernetes.io/dockerconfigjson\n # this is an example of labels and annotations that will be added to the output secret\n metadata:\n labels:\n \"jenkins.io/credentials-type\": usernamePassword\n annotations:\n \"jenkins.io/credentials-description\": credentials from Kubernetes\n
Once the template has been created, Bill has to edit the Tenant
to add unique label to namespaces in which the secret has to be deployed. For this, he can use the support for common and specific labels across namespaces.
Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n\n # use this if you want to add label to some specific namespaces\n specificMetadata:\n - namespaces:\n - test-namespace\n labels:\n distribute-image-pull-secret: true\n\n # use this if you want to add label to all namespaces under your tenant\n commonMetadata:\n labels:\n distribute-image-pull-secret: true\n
Bill has added support for a new label distribute-image-pull-secret: true
for tenant projects/namespaces; MTO will now add that label depending on the field used.
Finally, Bill creates a TemplateGroupInstance
which will deploy the sealed secrets using the newly created project label and template.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: tenant-sealed-secret\nspec:\n template: tenant-sealed-secret\n selector:\n matchLabels:\n distribute-image-pull-secret: true\n sync: true\n
MTO will now deploy the sealed secrets mentioned in Template
to namespaces which have the mentioned label. The rest of the work to deploy secret from a sealed secret has to be done by Sealed Secrets Controller.
"},{"location":"reference-guides/extend-default-roles.html","title":"Extending the default access level for tenant members","text":"Bill as the cluster admin wants to extend the default access for tenant members. As an admin of an OpenShift Cluster, Bill can extend the admin, edit, and view ClusterRole using aggregation. Bill will first create a ClusterRole with privileges to resources which Bill wants to extend. Bill will add the aggregation label to the newly created ClusterRole for extending the default ClusterRoles provided by OpenShift.
kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: extend-view-role\n labels:\n rbac.authorization.k8s.io/aggregate-to-view: 'true'\nrules:\n - verbs:\n - get\n - list\n - watch\n apiGroups:\n - user.openshift.io\n resources:\n - groups\n
Note: You can learn more about aggregated-cluster-roles
here
"},{"location":"reference-guides/graph-visualization.html","title":"Graph Visualization on MTO Console","text":"Effortlessly associate tenants with their respective resources using the enhanced graph feature on the MTO Console. This dynamic graph illustrates the relationships between tenants and the resources they create, encompassing both MTO's proprietary resources and native Kubernetes/OpenShift elements.
Example Graph:
graph LR;\n A(alpha)-->B(dev);\n A-->C(prod);\n B-->D(limitrange);\n B-->E(owner-rolebinding);\n B-->F(editor-rolebinding);\n B-->G(viewer-rolebinding);\n C-->H(limitrange);\n C-->I(owner-rolebinding);\n C-->J(editor-rolebinding);\n C-->K(viewer-rolebinding);\n
Explore with an intuitive graph that showcases the relationships between tenants and their resources. The MTO Console's graph feature simplifies the understanding of complex structures, providing you with a visual representation of your tenant's organization.
To view the graph of your tenant, follow the steps below:
- Navigate to
Tenants
page on the MTO Console using the left navigation bar. - Click on
View
of the tenant for which you want to view the graph. - Click on
Graph
tab on the tenant details page.
"},{"location":"reference-guides/integrationconfig.html","title":"Configuring Managed Namespaces and ServiceAccounts in IntegrationConfig","text":"Bill is a cluster admin who can use IntegrationConfig
to configure how Multi Tenant Operator (MTO)
manages the cluster.
By default, MTO watches all namespaces and will enforce all the governing policies on them. All namespaces managed by MTO require the stakater.com/tenant
label. MTO ignores privileged namespaces that are mentioned in the IntegrationConfig, and does not manage them. Therefore, any tenant label on such namespaces will be ignored.
oc create namespace stakater-test\nError from server (Cannot Create namespace stakater-test without label stakater.com/tenant. User: Bill): admission webhook \"vnamespace.kb.io\" denied the request: Cannot CREATE namespace stakater-test without label stakater.com/tenant. User: Bill\n
Bill is trying to create a namespace without the stakater.com/tenant
label. Creating a namespace without this label is only allowed if the namespace is privileged. Privileged namespaces will be ignored by MTO and do not require the said label. Therefore, Bill will add the required regex in the IntegrationConfig, along with any other namespaces which are privileged and should be ignored by MTO - like default
, or namespaces with prefixes like openshift
, kube
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedNamespaces:\n - ^default$\n - ^openshift-.*\n - ^kube-.*\n - ^stakater-.*\n
After mentioning the required regex (^stakater-.*
) under privilegedNamespaces
, Bill can create the namespace without interference.
oc create namespace stakater-test\nnamespace/stakater-test created\n
MTO will also disallow all users who are not tenant owners from performing CRUD operations on namespaces. This also prevents Service Accounts from performing CRUD operations.
If Bill wants MTO to ignore Service Accounts, then he would simply have to add them in the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedServiceAccounts:\n - system:serviceaccount:openshift\n - system:serviceaccount:stakater\n - system:serviceaccount:kube\n - system:serviceaccount:redhat\n - system:serviceaccount:hive\n
Bill can also use regex patterns to ignore a set of service accounts:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-.*\n - ^system:serviceaccount:stakater-.*\n
"},{"location":"reference-guides/integrationconfig.html#configuring-vault-in-integrationconfig","title":"Configuring Vault in IntegrationConfig","text":"Vault is used to secure, store and tightly control access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.
If Bill (the cluster admin) has Vault configured in his cluster, then he can benefit from MTO's integration with Vault.
MTO automatically creates Vault secret paths for tenants, where tenant members can securely save their secrets. It also authorizes tenant members to access these secrets via OIDC.
Bill would first have to integrate Vault with MTO by adding the details in IntegrationConfig. For more details
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n vault:\n enabled: true\n endpoint:\n secretReference:\n name: vault-root-token\n namespace: vault\n url: >-\n https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n sso:\n accessorID: auth_oidc_aa6aa9aa\n clientName: vault\n
Bill then creates a tenant for Anna and John:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@acme.org\n viewers:\n users:\n - john@acme.org\n quota: small\n sandbox: false\n
Now Bill goes to Vault
and sees that a path for tenant
has been made under the name bluesky/kv
, confirming that Tenant members with the Owner or Edit roles now have access to the tenant's Vault path.
Now if Anna signs in to Vault via OIDC, she can see her tenant's path and secrets, whereas if John signs in to Vault via OIDC, he can't see his tenant's path or secrets as he doesn't have the access required to view them.
"},{"location":"reference-guides/integrationconfig.html#configuring-rhsso-red-hat-single-sign-on-in-integrationconfig","title":"Configuring RHSSO (Red Hat Single Sign-On) in IntegrationConfig","text":"Red Hat Single Sign-On RHSSO is based on the Keycloak project and enables you to secure your web applications by providing Web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0.
If Bill, the cluster admin, has RHSSO configured in his cluster, then he can benefit from MTO's integration with RHSSO and Vault.
MTO automatically allows tenant members to access Vault via OIDC (RHSSO authentication and authorization), giving them access to the tenant secret paths where they can securely save their secrets.
Bill would first have to integrate RHSSO with MTO by adding the details in IntegrationConfig. Visit here for more details.
rhsso:\n enabled: true\n realm: customer\n endpoint:\n secretReference:\n name: auth-secrets\n namespace: openshift-auth\n url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n
"},{"location":"reference-guides/mattermost.html","title":"Creating Mattermost Teams for your tenant","text":""},{"location":"reference-guides/mattermost.html#requirements","title":"Requirements","text":"MTO-Mattermost-Integration-Operator
Please contact Stakater to install the Mattermost integration operator before following the steps below.
"},{"location":"reference-guides/mattermost.html#steps-to-enable-integration","title":"Steps to enable integration","text":"Bill wants some tenants to also have their own Mattermost Teams. To make sure this happens correctly, Bill will first add the stakater.com/mattermost: true
label to the tenants. The label will enable the mto-mattermost-integration-operator
to create and manage Mattermost Teams based on Tenants.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\n labels:\n stakater.com/mattermost: 'true'\nspec:\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n sandbox: false\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n
Now users can log in to Mattermost to see their Team and the relevant channels associated with it.
The name of the Team is based on the Tenant name. Notification channels are pre-configured for every team, and can be modified.
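For an existing Tenant, the label could also be added directly from the CLI; a minimal sketch, assuming a Tenant named sigma:
kubectl label tenants.tenantoperator.stakater.com sigma stakater.com/mattermost='true'\n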
"},{"location":"reference-guides/resource-sync-by-tgi.html","title":"Sync Resources Deployed by TemplateGroupInstance","text":"The TemplateGroupInstance CR provides two types of resource sync for the resources mentioned in Template
For the given example, let's consider we want to apply the following template
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n\n - apiVersion: v1\n kind: ServiceAccount\n metadata:\n name: example-automated-thing\n secrets:\n - name: example-automated-thing-token-zyxwv\n
And the following TemplateGroupInstance is used to deploy these resources to namespaces having label kind: build
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
As we can see, in our TGI, we have a field spec.sync
which is set to true
. This will update the resources on two conditions:
- The Template CR is updated
- The TemplateGroupInstance CR is reconciled/updated

If, for any reason, the underlying resource gets updated or deleted, the TemplateGroupInstance CR will try to revert it back to the state mentioned in the Template CR.
Note
If the updated field of the deployed manifest is not mentioned in the Template, it will not get reverted. For example, if the secrets field is not mentioned in the ServiceAccount in the above Template, it will not get reverted if changed.
"},{"location":"reference-guides/resource-sync-by-tgi.html#ignore-resources-updates-on-resources","title":"Ignore Resources Updates on Resources","text":"If the resources mentioned in Template
CR conflict with another controller/operator, and you want TemplateGroupInstance to not actively revert the resource updates, you can add the following label to the conflicting resource multi-tenant-operator/ignore-resource-updates: \"\"
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n\n - apiVersion: v1\n kind: ServiceAccount\n metadata:\n name: example-automated-thing\n labels:\n multi-tenant-operator/ignore-resource-updates: \"\"\n secrets:\n - name: example-automated-thing-token-zyxwv\n
Note
However, this label will not stop Multi Tenant Operator from updating the resource under the following conditions:
- Template gets updated
- TemplateGroupInstance gets updated
- Resource gets deleted
If you don't want to sync the resources in any case, you can disable sync via sync: false
in TemplateGroupInstance
spec.
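For example, the TemplateGroupInstance shown earlier could be updated to stop syncing entirely; a minimal sketch:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-secret\n selector:\n matchLabels:\n kind: build\n sync: false\n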
"},{"location":"reference-guides/secret-distribution.html","title":"Propagate Secrets from Parent to Descendant namespaces","text":"Secrets like registry
credentials often need to exist in multiple Namespaces, so that Pods within different namespaces can have access to those credentials in form of secrets.
Manually creating secrets within different namespaces could lead to challenges, such as:
- Someone will have to create the secret either manually or via GitOps each time there is a new descendant namespace that needs the secret
- If the parent secret is updated, the secret will have to be updated in all descendant namespaces
- This could be time-consuming, and a small mistake while creating or updating the secret could lead to unnecessary debugging
With the help of Multi-Tenant Operator's Template feature we can make this secret distribution experience easy.
For example, to copy a Secret called registry, which exists in the example namespace, to new Namespaces whenever they are created, we will first create a Template that references the registry secret.
It will also push updates to the copied Secrets and keep the propagated secrets in sync with the parent secret.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: registry-secret\nresources:\n resourceMappings:\n secrets:\n - name: registry\n namespace: example\n
Now, using this Template, we can propagate the registry secret to different namespaces that share a common set of labels. For example, we will just add one label kind: registry, and all namespaces with this label will get this secret.
To propagate it to different namespaces dynamically, we will have to create another resource called TemplateGroupInstance. The TemplateGroupInstance will reference the Template and a matchLabels selector, as shown below:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: registry-secret-group-instance\nspec:\n template: registry-secret\n selector:\n matchLabels:\n kind: registry\n sync: true\n
After reconciliation, you will be able to see the secret in namespaces having the mentioned label.
MTO will keep injecting this secret into any new namespaces created with that label.
kubectl get secret registry -n example-ns-1\nNAME TYPE DATA AGE\nregistry Opaque 1 3m\n\nkubectl get secret registry -n example-ns-2\nNAME TYPE DATA AGE\nregistry Opaque 1 3m\n
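Any new namespace that carries the matching label will receive the secret as well; a minimal sketch of such a namespace (the name example-ns-3 is illustrative):
apiVersion: v1\nkind: Namespace\nmetadata:\n name: example-ns-3\n labels:\n kind: registry\n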
"},{"location":"tutorials/installation.html","title":"Installation","text":"This document contains instructions on installing, uninstalling and configuring Multi Tenant Operator using OpenShift MarketPlace.
- OpenShift OperatorHub UI
- CLI/GitOps
- Uninstall
"},{"location":"tutorials/installation.html#requirements","title":"Requirements","text":" - An OpenShift cluster [v4.7 - v4.12]
"},{"location":"tutorials/installation.html#installing-via-operatorhub-ui","title":"Installing via OperatorHub UI","text":" - After opening OpenShift console click on
Operators
, followed by OperatorHub
from the side menu
- Now search for
Multi Tenant Operator
and then click on Multi Tenant Operator
tile
- Click on the
install
button
- Select
Updated channel
. Select multi-tenant-operator
to install the operator in multi-tenant-operator
namespace from Installed Namespace
dropdown menu. After configuring Update approval
click on the install
button.
Note: Use stable
channel for seamless upgrades. For Production Environment
prefer Manual
approval and use Automatic
for Development Environment
- Wait for the operator to be installed
- Once successfully installed, MTO will be ready to enforce multi-tenancy in your cluster
Note: MTO will be installed in multi-tenant-operator
namespace.
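The installation can also be verified from the CLI by checking that the operator pods are running; a minimal check:
oc get pods -n multi-tenant-operator\n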
"},{"location":"tutorials/installation.html#configuring-integrationconfig","title":"Configuring IntegrationConfig","text":"IntegrationConfig is required to configure the settings of multi-tenancy for MTO.
- We recommend using the following IntegrationConfig as a starting point
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedNamespaces:\n - default\n - ^openshift-*\n - ^kube-*\n - ^redhat-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:default-*\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n - ^system:serviceaccount:redhat-*\n
For more details and configurations check out IntegrationConfig.
"},{"location":"tutorials/installation.html#installing-via-cli-or-gitops","title":"Installing via CLI OR GitOps","text":" - Create namespace
multi-tenant-operator
oc create namespace multi-tenant-operator\nnamespace/multi-tenant-operator created\n
- Create an OperatorGroup YAML for MTO and apply it in
multi-tenant-operator
namespace.
oc create -f - << EOF\napiVersion: operators.coreos.com/v1\nkind: OperatorGroup\nmetadata:\n name: tenant-operator\n namespace: multi-tenant-operator\nEOF\noperatorgroup.operators.coreos.com/tenant-operator created\n
- Create a subscription YAML for MTO and apply it in the multi-tenant-operator namespace. To enable the console, set .spec.config.env[].ENABLE_CONSOLE to true. This will create a route resource, which can be used to access the Multi-Tenant-Operator console.
oc create -f - << EOF\napiVersion: operators.coreos.com/v1alpha1\nkind: Subscription\nmetadata:\n name: tenant-operator\n namespace: multi-tenant-operator\nspec:\n channel: stable\n installPlanApproval: Automatic\n name: tenant-operator\n source: certified-operators\n sourceNamespace: openshift-marketplace\n startingCSV: tenant-operator.v0.9.1\n config:\n env:\n - name: ENABLE_CONSOLE\n value: 'true'\nEOF\nsubscription.operators.coreos.com/tenant-operator created\n
Note: To install MTO via GitOps, add the above files to your GitOps repository.
- After creating the subscription custom resource, open the OpenShift console and click on Operators, followed by Installed Operators from the side menu
- Wait for the installation to complete
- Once the installation is complete, click on Workloads, followed by Pods from the side menu, and select the multi-tenant-operator project
- Once pods are up and running, MTO will be ready to enforce multi-tenancy in your cluster
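Alternatively, the installation status can be checked from the CLI; a minimal check of the CSV and the operator pods:
oc get csv -n multi-tenant-operator\noc get pods -n multi-tenant-operator\n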
"},{"location":"tutorials/installation.html#configuring-integrationconfig_1","title":"Configuring IntegrationConfig","text":"IntegrationConfig is required to configure the settings of multi-tenancy for MTO.
- We recommend using the following IntegrationConfig as a starting point:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedNamespaces:\n - default\n - ^openshift-*\n - ^kube-*\n - ^redhat-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:default-*\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n - ^system:serviceaccount:redhat-*\n
For more details and configurations check out IntegrationConfig.
"},{"location":"tutorials/installation.html#uninstall-via-operatorhub-ui","title":"Uninstall via OperatorHub UI","text":"You can uninstall MTO by following these steps:
- Decide whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set spec.onDelete.cleanNamespaces to false for all those tenants whose namespaces you want to retain, and spec.onDelete.cleanAppProject to false for all those tenants whose AppProject you want to retain. For more details check out onDelete
- After making the required changes, open the OpenShift console and click on Operators, followed by Installed Operators from the side menu
- Now click on uninstall and confirm the uninstall.
"},{"location":"tutorials/installation.html#notes","title":"Notes","text":" - For more details on how to use MTO please refer Tenant tutorial.
- For more details on how to extend your MTO manager ClusterRole please refer extend-admin-clusterrole.
"},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html","title":"Enabling Multi-Tenancy in ArgoCD","text":""},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#argocd-integration-in-multi-tenant-operator","title":"ArgoCD integration in Multi Tenant Operator","text":"With Multi Tenant Operator (MTO), cluster admins can configure multi tenancy in their cluster. Now with ArgoCD integration, multi tenancy can be configured in ArgoCD applications and AppProjects.
MTO (if configured to) will create AppProjects for each tenant. The AppProject will allow tenants to create ArgoCD Applications that can be synced to namespaces owned by those tenants. Cluster admins will also be able to blacklist certain namespaces resources if they want, and allow certain cluster scoped resources as well (see the NamespaceResourceBlacklist
and ClusterResourceWhitelist
sections in Integration Config docs and Tenant Custom Resource docs).
Note that ArgoCD integration in MTO is completely optional.
"},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#default-argocd-configuration","title":"Default ArgoCD configuration","text":"We have set a default ArgoCD configuration in Multi Tenant Operator that fulfils the following use cases:
- Tenants are able to see only their ArgoCD applications in the ArgoCD frontend
- Tenant 'Owners' and 'Editors' will have full access to their ArgoCD applications
- Tenants in the 'Viewers' group will have read-only access to their ArgoCD applications
- Tenants can sync all namespace-scoped resources, except those that are blacklisted in the spec
- Tenants can only sync cluster-scoped resources that are allow-listed in the spec
- Tenant 'Owners' can configure their own GitOps source repos at a tenant level
- Cluster admins can prevent specific resources from syncing via ArgoCD
- Cluster admins have full access to all ArgoCD applications and AppProjects
- Since ArgoCD integration is on a per-tenant level, namespace-scoped applications are only synced to Tenant's namespaces
"},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#creating-argocd-appprojects-for-your-tenant","title":"Creating ArgoCD AppProjects for your tenant","text":"Bill wants each tenant to also have their own ArgoCD AppProjects. To make sure this happens correctly, Bill will first specify the ArgoCD namespace in the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n argocd:\n namespace: openshift-operators\n ...\n
Afterward, Bill must specify the source GitOps repos for the tenant inside the tenant CR like so:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\nspec:\n argocd:\n sourceRepos:\n # specify source repos here\n - \"https://github.com/stakater/GitOps-config\"\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n sandbox: false\n namespaces:\n withTenantPrefix:\n - build\n - stage\n - dev\n
Now Bill can see that an AppProject has been created for the tenant:
oc get AppProject -A\nNAMESPACE NAME AGE\nopenshift-operators sigma 5d15h\n
The following AppProject is created:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: sigma\n namespace: openshift-operators\nspec:\n destinations:\n - namespace: sigma-build\n server: \"https://kubernetes.default.svc\"\n - namespace: sigma-dev\n server: \"https://kubernetes.default.svc\"\n - namespace: sigma-stage\n server: \"https://kubernetes.default.svc\"\n roles:\n - description: >-\n Role that gives full access to all resources inside the tenant's\n namespace to the tenant owner groups\n groups:\n - saap-cluster-admins\n - stakater-team\n - sigma-owner-group\n name: sigma-owner\n policies:\n - \"p, proj:sigma:sigma-owner, *, *, sigma/*, allow\"\n - description: >-\n Role that gives edit access to all resources inside the tenant's\n namespace to the tenant owner group\n groups:\n - saap-cluster-admins\n - stakater-team\n - sigma-edit-group\n name: sigma-edit\n policies:\n - \"p, proj:sigma:sigma-edit, *, *, sigma/*, allow\"\n - description: >-\n Role that gives view access to all resources inside the tenant's\n namespace to the tenant owner group\n groups:\n - saap-cluster-admins\n - stakater-team\n - sigma-view-group\n name: sigma-view\n policies:\n - \"p, proj:sigma:sigma-view, *, get, sigma/*, allow\"\n sourceRepos:\n - \"https://github.com/stakater/gitops-config\"\n
Users belonging to the Sigma group will now only see applications created by them in the ArgoCD frontend.
"},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#prevent-argocd-from-syncing-certain-namespaced-resources","title":"Prevent ArgoCD from syncing certain namespaced resources","text":"Bill wants tenants to not be able to sync ResourceQuota
and LimitRange
resources to their namespaces. To do this correctly, Bill will specify these resources to blacklist in the ArgoCD portion of the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n argocd:\n namespace: openshift-operators\n namespaceResourceBlacklist:\n - group: \"\"\n kind: ResourceQuota\n - group: \"\"\n kind: LimitRange\n ...\n
Now, if these resources are added to any tenant's project directory in GitOps, ArgoCD will not sync them to the cluster. The AppProject will also have the blacklisted resources added to it:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: sigma\n namespace: openshift-operators\nspec:\n ...\n namespaceResourceBlacklist:\n - group: ''\n kind: ResourceQuota\n - group: ''\n kind: LimitRange\n ...\n
"},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#allow-argocd-to-sync-certain-cluster-wide-resources","title":"Allow ArgoCD to sync certain cluster-wide resources","text":"Bill now wants tenants to be able to sync the Environment
cluster scoped resource to the cluster. To do this correctly, Bill will specify the resource to allow-list in the ArgoCD portion of the Integration Config's Spec:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n argocd:\n namespace: openshift-operators\n clusterResourceWhitelist:\n - group: \"\"\n kind: Environment\n ...\n
Now, if the resource is added to any tenant's project directory in GitOps, ArgoCD will sync them to the cluster. The AppProject will also have the allow-listed resources added to it:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: sigma\n namespace: openshift-operators\nspec:\n ...\n clusterResourceWhitelist:\n - group: \"\"\n kind: Environment\n ...\n
"},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#override-namespaceresourceblacklist-andor-clusterresourcewhitelist-per-tenant","title":"Override NamespaceResourceBlacklist and/or ClusterResourceWhitelist per Tenant","text":"Bill now wants a specific tenant to override the namespaceResourceBlacklist
and/or clusterResourceWhitelist
set via Integration Config. Bill will specify these in argoCD.appProjects
section of Tenant spec.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: blue-sky\nspec:\n argocd:\n sourceRepos:\n # specify source repos here\n - \"https://github.com/stakater/GitOps-config\"\n appProject:\n clusterResourceWhitelist:\n - group: admissionregistration.k8s.io\n kind: validatingwebhookconfigurations\n namespaceResourceBlacklist:\n - group: \"\"\n kind: ConfigMap\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n sandbox: false\n namespaces:\n withTenantPrefix:\n - build\n - stage\n
"},{"location":"tutorials/template/template-group-instance.html","title":"More about TemplateGroupInstance","text":""},{"location":"tutorials/template/template-instance.html","title":"More about TemplateInstances","text":""},{"location":"tutorials/template/template.html","title":"Understanding and Utilizing Template","text":""},{"location":"tutorials/template/template.html#creating-templates","title":"Creating Templates","text":"Anna wants to create a Template that she can use to initialize or share common resources across namespaces (e.g. PullSecrets).
Anna can either create a template using manifests
field, covering Kubernetes or custom resources.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Or by using Helm Charts
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: redis\nresources:\n helm:\n releaseName: redis\n chart:\n repository:\n name: redis\n repoUrl: https://charts.bitnami.com/bitnami\n values: |\n redisPort: 6379\n
She can also use resourceMapping
field to copy over secrets and configmaps from one namespace to others.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: resource-mapping\nresources:\n resourceMappings:\n secrets:\n - name: docker-secret\n namespace: bluesky-build\n configMaps:\n - name: tronador-configMap\n namespace: stakater-tronador\n
Note: Resource mapping can be used via TGI to map resources within tenant namespaces or to some other tenant's namespace. If used with TI, the resources will only be mapped if the namespaces belong to the same tenant.
"},{"location":"tutorials/template/template.html#using-templates-with-default-parameters","title":"Using Templates with Default Parameters","text":"apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: namespace-parameterized-restrictions\nparameters:\n # Name of the parameter\n - name: DEFAULT_CPU_LIMIT\n # The default value of the parameter\n value: \"1\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"0.5\"\n # If a parameter is required the template instance will need to set it\n # required: true\n # Make sure only values are entered for this parameter\n validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n manifests:\n - apiVersion: v1\n kind: LimitRange\n metadata:\n name: namespace-limit-range-${namespace}\n spec:\n limits:\n - default:\n cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n defaultRequest:\n cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n type: Container\n
Parameters can be used with both manifests
and helm charts
"},{"location":"tutorials/tenant/assign-quota-tenant.html","title":"Assign Quota to a Tenant","text":""},{"location":"tutorials/tenant/assigning-metadata.html","title":"Assigning Common/Specific Metadata","text":""},{"location":"tutorials/tenant/assigning-metadata.html#distributing-common-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing common labels and annotations to tenant namespaces via Tenant Custom Resource","text":"Bill now wants to add labels/annotations to all the namespaces for a tenant. To create those labels/annotations Bill will just add them into commonMetadata.labels
/commonMetadata.annotations
field in the tenant CR.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n commonMetadata:\n labels:\n app.kubernetes.io/managed-by: tenant-operator\n app.kubernetes.io/part-of: tenant-alpha\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/infra=\nEOF\n
With the above configuration all tenant namespaces will now contain the mentioned labels and annotations.
"},{"location":"tutorials/tenant/assigning-metadata.html#distributing-specific-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing specific labels and annotations to tenant namespaces via Tenant Custom Resource","text":"Bill now wants to add labels/annotations to specific namespaces for a tenant. To create those labels/annotations Bill will just add them into specificMetadata.labels
/specificMetadata.annotations
and specific namespaces in specificMetadata.namespaces
field in the tenant CR.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandboxConfig:\n enabled: true\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n specificMetadata:\n - namespaces:\n - bluesky-anna-aurora-sandbox\n labels:\n app.kubernetes.io/is-sandbox: 'true'\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\nEOF\n
With the above configuration, the specified namespaces (here, bluesky-anna-aurora-sandbox) will now contain the mentioned labels and annotations.
"},{"location":"tutorials/tenant/create-sandbox.html","title":"Create Sandbox Namespaces for Tenant Users","text":""},{"location":"tutorials/tenant/create-sandbox.html#assigning-users-sandbox-namespace","title":"Assigning Users Sandbox Namespace","text":"Bill assigned the ownership of bluesky
to Anna
and Anthony
. Now if the users want sandboxes to be made for them, they'll have to ask Bill
to enable sandbox
functionality.
To enable that, Bill will just set enabled: true
within the sandboxConfig
field
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandboxConfig:\n enabled: true\nEOF\n
With the above configuration Anna
and Anthony
will now have new sandboxes created
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\nbluesky-anthony-aurora-sandbox Active 5d5h\nbluesky-john-aurora-sandbox Active 5d5h\n
If Bill wants to make sure that only the sandbox owner can view his sandbox namespace, he can achieve this by setting private: true
within the sandboxConfig
filed.
"},{"location":"tutorials/tenant/create-sandbox.html#create-private-sandboxes","title":"Create Private Sandboxes","text":"Bill assigned the ownership of bluesky
to Anna
and Anthony
. Now if the users want sandboxes to be made for them, they'll have to ask Bill
to enable sandbox
functionality. The Users also want to make sure that the sandboxes that are created for them are also only visible to the user they belong to. To enable that, Bill will just set enabled: true
and private: true
within the sandboxConfig
field
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandboxConfig:\n enabled: true\n private: true\nEOF\n
With the above configuration Anna
and Anthony
will now have new sandboxes created
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\nbluesky-anthony-aurora-sandbox Active 5d5h\nbluesky-john-aurora-sandbox Active 5d5h\n
However, from the perspective of Anna
, only their sandbox will be visible
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\n
"},{"location":"tutorials/tenant/create-tenant.html","title":"Creating a Tenant","text":"Bill is a cluster admin who receives a new request from Aurora Solutions CTO asking for a new tenant for Anna's team.
Bill creates a new tenant called bluesky
in the cluster:
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandbox: false\nEOF\n
Bill checks if the new tenant is created:
kubectl get tenants.tenantoperator.stakater.com bluesky\nNAME STATE AGE\nbluesky Active 3m\n
Anna can now log in to the cluster and check if she can create namespaces
kubectl auth can-i create namespaces\nyes\n
However, cluster resources are not accessible to Anna
kubectl auth can-i get namespaces\nno\n\nkubectl auth can-i get persistentvolumes\nno\n
Including the Tenant
resource
kubectl auth can-i get tenants.tenantoperator.stakater.com\nno\n
"},{"location":"tutorials/tenant/create-tenant.html#assign-multiple-users-as-tenant-owner","title":"Assign multiple users as tenant owner","text":"In the example above, Bill assigned the ownership of bluesky
to Anna
. If another user, e.g. Anthony
needs to administer bluesky
, then Bill can assign the ownership of the tenant to that user as well:
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandbox: false\nEOF\n
With the configuration above, Anthony can log in to the cluster and execute
kubectl auth can-i create namespaces\nyes\n
"},{"location":"tutorials/tenant/creating-namespaces.html","title":"Creating Namespaces","text":""},{"location":"tutorials/tenant/creating-namespaces.html#creating-namespaces-via-tenant-custom-resource","title":"Creating Namespaces via Tenant Custom Resource","text":"Bill now wants to create namespaces for dev
, build
and production
environments for the tenant members. To create those namespaces Bill will just add those names within the namespaces
field in the tenant CR. If Bill wants to append the tenant name as a prefix in namespace name, then he can use namespaces.withTenantPrefix
field. Else he can use namespaces.withoutTenantPrefix
for namespaces for which he does not need tenant name as a prefix.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n namespaces:\n withTenantPrefix:\n - dev\n - build\n withoutTenantPrefix:\n - prod\nEOF\n
With the above configuration tenant members will now see new namespaces have been created.
kubectl get namespaces\nNAME STATUS AGE\nbluesky-dev Active 5d5h\nbluesky-build Active 5d5h\nprod Active 5d5h\n
Anna as the tenant owner can create new namespaces for her tenant.
apiVersion: v1\nkind: Namespace\nmetadata:\n name: bluesky-production\n labels:\n stakater.com/tenant: bluesky\n
\u26a0\ufe0f Anna is required to add the tenant label stakater.com/tenant: bluesky
which contains the name of her tenant bluesky
, while creating the namespace. If this label is not added or if Anna does not belong to the bluesky
tenant, then Multi Tenant Operator will not allow the creation of that namespace.
When Anna creates the namespace, MTO assigns Anna and other tenant members the roles based on their user types, such as a tenant owner getting the OpenShift admin
role for that namespace.
As a tenant owner, Anna is able to create namespaces.
If you have enabled ArgoCD Multitenancy, our preferred solution is to create tenant namespaces by using Tenant spec to avoid syncing issues in ArgoCD console during namespace creation.
"},{"location":"tutorials/tenant/creating-namespaces.html#add-existing-namespaces-to-tenant-via-gitops","title":"Add Existing Namespaces to Tenant via GitOps","text":"Using GitOps as your preferred development workflow, you can add existing namespaces for your tenants by including the tenant label.
To add an existing namespace to your tenant via GitOps:
- First, migrate your namespace resource to your \u201cwatched\u201d git repository
- Edit your namespace
yaml
to include the tenant label - Tenant label follows the naming convention
stakater.com/tenant: <TENANT_NAME>
- Sync your GitOps repository with your cluster and allow changes to be propagated
- Verify that your Tenant users now have access to the namespace
For example, If Anna, a tenant owner, wants to add the namespace bluesky-dev
to her tenant via GitOps, after migrating her namespace manifest to a \u201cwatched repository\u201d
apiVersion: v1\nkind: Namespace\nmetadata:\n name: bluesky-dev\n
She can then add the tenant label
...\n labels:\n stakater.com/tenant: bluesky\n
Now all the users of the Bluesky
tenant now have access to the existing namespace.
Additionally, to remove namespaces from a tenant, simply remove the tenant label from the namespace resource and sync your changes to your cluster.
"},{"location":"tutorials/tenant/creating-namespaces.html#remove-namespaces-from-your-cluster-via-gitops","title":"Remove Namespaces from your Cluster via GitOps","text":"GitOps is a quick and efficient way to automate the management of your K8s resources.
To remove namespaces from your cluster via GitOps;
- Remove the
yaml
file containing your namespace configurations from your \u201cwatched\u201d git repository. - ArgoCD automatically sets the
[app.kubernetes.io/instance](http://app.kubernetes.io/instance)
label on resources it manages. It uses this label it to select resources which inform the basis of an app. To remove a namespace from a managed ArgoCD app, remove the ArgoCD label app.kubernetes.io/instance
from the namespace manifest. - You can edit your namespace manifest through the OpenShift Web Console or with the OpenShift command line tool.
- Now that you have removed your namespace manifest from your watched git repository, and from your managed ArgoCD apps, sync your git repository and allow your changes be propagated.
- Verify that your namespace has been deleted.
"},{"location":"tutorials/tenant/custom-rbac.html","title":"Applying Custom RBAC to a Tenant","text":""},{"location":"tutorials/tenant/deleting-tenant.html","title":"Deleting a Tenant","text":""},{"location":"tutorials/tenant/deleting-tenant.html#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted","title":"Retaining tenant namespaces and AppProject when a tenant is being deleted","text":"Bill now wants to delete tenant bluesky
and wants to retain all namespaces and AppProject of the tenant. To retain the namespaces Bill will set spec.onDelete.cleanNamespaces
, and spec.onDelete.cleanAppProjects
to false
.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n quota: small\n sandboxConfig:\n enabled: true\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n onDelete:\n cleanNamespaces: false\n cleanAppProject: false\n
With the above configuration all tenant namespaces and AppProject will not be deleted when tenant bluesky
is deleted. By default, the value of spec.onDelete.cleanNamespaces
is also false
and spec.onDelete.cleanAppProject
is true
"},{"location":"tutorials/tenant/tenant-hibernation.html","title":"Hibernating a Tenant","text":""},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-namespaces","title":"Hibernating Namespaces","text":"You can manage workloads in your cluster with MTO by implementing a hibernation schedule for your tenants. Hibernation downsizes the running Deployments and StatefulSets in a tenant\u2019s namespace according to a defined cron schedule. You can set a hibernation schedule for your tenants by adding the \u2018spec.hibernation\u2019 field to the tenant's respective Custom Resource.
hibernation:\n sleepSchedule: 23 * * * *\n wakeSchedule: 26 * * * *\n
spec.hibernation.sleepSchedule
accepts a cron expression indicating the time to put the workloads in your tenant\u2019s namespaces to sleep.
spec.hibernation.wakeSchedule
accepts a cron expression indicating the time to wake the workloads in your tenant\u2019s namespaces up.
Note
Both sleep and wake schedules must be specified for your Hibernation schedule to be valid.
Additionally, adding the hibernation.stakater.com/exclude: 'true'
annotation to a namespace excludes it from hibernating.
Note
This is only true for hibernation applied via the Tenant Custom Resource, and does not apply for hibernation done by manually creating a ResourceSupervisor (details about that below).
Note
This will not wake up an already sleeping namespace before the wake schedule.
"},{"location":"tutorials/tenant/tenant-hibernation.html#resource-supervisor","title":"Resource Supervisor","text":"Adding a Hibernation Schedule to a Tenant creates an accompanying ResourceSupervisor Custom Resource. The Resource Supervisor stores the Hibernation schedules and manages the current and previous states of all the applications, whether they are sleeping or awake.
When the sleep timer is activated, the Resource Supervisor controller stores the details of your applications (including the number of replicas, configurations, etc.) in the applications' namespaces and then puts your applications to sleep. When the wake timer is activated, the controller wakes up the applications using their stored details.
Enabling ArgoCD support for Tenants will also hibernate applications in the tenants' appProjects
.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: sigma\nspec:\n argocd:\n appProjects:\n - sigma\n namespace: openshift-gitops\n hibernation:\n sleepSchedule: 42 * * * *\n wakeSchedule: 45 * * * *\n namespaces:\n - tenant-ns1\n - tenant-ns2\n
Currently, Hibernation is available only for StatefulSets and Deployments.
"},{"location":"tutorials/tenant/tenant-hibernation.html#manual-creation-of-resourcesupervisor","title":"Manual creation of ResourceSupervisor","text":"Hibernation can also be applied by creating a ResourceSupervisor resource manually. The ResourceSupervisor definition will contain the hibernation cron schedule, the names of the namespaces to be hibernated, and the names of the ArgoCD AppProjects whose ArgoCD Applications have to be hibernated (as per the given schedule).
This method can be used to hibernate:
- Some specific namespaces and AppProjects in a tenant
- A set of namespaces and AppProjects belonging to different tenants
- Namespaces and AppProjects belonging to a tenant that the cluster admin is not a member of
- Non-tenant namespaces and ArgoCD AppProjects
As an example, the following ResourceSupervisor could be created manually, to apply hibernation explicitly to the 'ns1' and 'ns2' namespaces, and to the 'sample-app-project' AppProject.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: hibernator\nspec:\n argocd:\n appProjects:\n - sample-app-project\n namespace: openshift-gitops\n hibernation:\n sleepSchedule: 42 * * * *\n wakeSchedule: 45 * * * *\n namespaces:\n - ns1\n - ns2\n
"},{"location":"tutorials/tenant/tenant-hibernation.html#freeing-up-unused-resources-with-hibernation","title":"Freeing up unused resources with hibernation","text":""},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-a-tenant_1","title":"Hibernating a tenant","text":"Bill is a cluster administrator who wants to free up unused cluster resources at nighttime, in an effort to reduce costs (when the cluster isn't being used).
First, Bill creates a tenant with the hibernation
schedules mentioned in the spec, or adds the hibernation field to an existing tenant:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\nspec:\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n namespaces:\n withoutTenantPrefix:\n - build\n - stage\n - dev\n
The schedules above will put all the Deployments
and StatefulSets
within the tenant's namespaces to sleep, by reducing their pod count to 0 at 8 PM every weekday. At 8 AM on weekdays, the namespaces will then wake up by restoring their applications' previous pod counts.
Bill can verify this behaviour by checking the newly created ResourceSupervisor resource at run time:
oc get ResourceSupervisor -A\nNAME AGE\nsigma 5m\n
The ResourceSupervisor will look like this at 'running' time (as per the schedule):
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - build\n - stage\n - dev\nstatus:\n currentStatus: running\n nextReconcileTime: '2022-10-12T20:00:00Z'\n
The ResourceSupervisor will look like this at 'sleeping' time (as per the schedule):
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - build\n - stage\n - dev\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: build\n kind: Deployment\n name: example\n replicas: 3\n - Namespace: stage\n kind: Deployment\n name: example\n replicas: 3\n
Bill wants to prevent the build
namespace from going to sleep, so he can add the hibernation.stakater.com/exclude: 'true'
annotation to it. The ResourceSupervisor will now look like this after reconciling:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - stage\n - dev\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: stage\n kind: Deployment\n name: example\n replicas: 3\n
"},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-namespaces-andor-argocd-applications-with-resourcesupervisor","title":"Hibernating namespaces and/or ArgoCD Applications with ResourceSupervisor","text":"Bill, the cluster administrator, wants to hibernate a collection of namespaces and AppProjects belonging to multiple different tenants. He can do so by creating a ResourceSupervisor manually, specifying the hibernation schedule in its spec, the namespaces and ArgoCD Applications that need to be hibernated as per the mentioned schedule. Bill can also use the same method to hibernate some namespaces and ArgoCD Applications that do not belong to any tenant on his cluster.
The example given below will hibernate the ArgoCD Applications in the 'test-app-project' AppProject; and it will also hibernate the 'ns2' and 'ns4' namespaces.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: test-resource-supervisor\nspec:\n argocd:\n appProjects:\n - test-app-project\n namespace: argocd-ns\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - ns2\n - ns4\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: ns2\n kind: Deployment\n name: test-deployment\n replicas: 3\n
"},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html","title":"Enabling Multi-Tenancy in Vault","text":""},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#vault-multitenancy","title":"Vault Multitenancy","text":"HashiCorp Vault is an identity-based secret and encryption management system. Vault validates and authorizes a system's clients (users, machines, apps) before providing them access to secrets or stored sensitive data.
"},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#vault-integration-in-multi-tenant-operator","title":"Vault integration in Multi Tenant Operator","text":""},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#service-account-auth-in-vault","title":"Service Account Auth in Vault","text":"MTO enables the Kubernetes auth method which can be used to authenticate with Vault using a Kubernetes Service Account Token. When enabled, for every tenant namespace, MTO automatically creates policies and roles that allow the service accounts present in those namespaces to read secrets at tenant's path in Vault. The name of the role is the same as namespace name.
These service accounts are required to have stakater.com/vault-access: true
label, so they can be authenticated with Vault via MTO.
The Diagram shows how MTO enables ServiceAccounts to read secrets from Vault.
"},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#user-oidc-auth-in-vault","title":"User OIDC Auth in Vault","text":"This requires a running RHSSO(RedHat Single Sign On)
instance integrated with Vault over OIDC login method.
MTO integration with Vault and RHSSO provides a way for users to log in to Vault where they only have access to relevant tenant paths.
Once both integrations are set up with IntegrationConfig CR, MTO links tenant users to specific client roles named after their tenant under Vault client in RHSSO.
After that, MTO creates specific policies in Vault for its tenant users.
Mapping of tenant roles to Vault is shown below
Tenant Role Vault Path Vault Capabilities Owner, Editor (tenantName)/* Create, Read, Update, Delete, List Owner, Editor sys/mounts/(tenantName)/* Create, Read, Update, Delete, List Owner, Editor managed-addons/* Read, List Viewer (tenantName)/* Read A simple user login workflow is shown in the diagram below.
"},{"location":"usecases/admin-clusterrole.html","title":"Extending Admin ClusterRole","text":"Bill as the cluster admin want to add additional rules for admin ClusterRole.
Bill can extend the admin
role for MTO using the aggregation label for admin ClusterRole. Bill will create a new ClusterRole with all the permissions he needs to extend for MTO and add the aggregation label on the newly created ClusterRole.
kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: extend-admin-role\n labels:\n rbac.authorization.k8s.io/aggregate-to-admin: 'true'\nrules:\n - verbs:\n - create\n - update\n - patch\n - delete\n apiGroups:\n - user.openshift.io\n resources:\n - groups\n
Note: You can learn more about aggregated-cluster-roles
here
"},{"location":"usecases/admin-clusterrole.html#whats-next","title":"What\u2019s next","text":"See how Bill can hibernate unused namespaces at night
"},{"location":"usecases/argocd.html","title":"ArgoCD","text":""},{"location":"usecases/argocd.html#creating-argocd-appprojects-for-your-tenant","title":"Creating ArgoCD AppProjects for your tenant","text":"Bill wants each tenant to also have their own ArgoCD AppProjects. To make sure this happens correctly, Bill will first specify the ArgoCD namespace in the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n argocd:\n namespace: openshift-operators\n ...\n
Afterwards, Bill must specify the source GitOps repos for the tenant inside the tenant CR like so:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\nspec:\n argocd:\n sourceRepos:\n # specify source repos here\n - \"https://github.com/stakater/GitOps-config\"\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n sandbox: false\n namespaces:\n withTenantPrefix:\n - build\n - stage\n - dev\n
Now Bill can see an AppProject will be created for the tenant
oc get AppProject -A\nNAMESPACE NAME AGE\nopenshift-operators sigma 5d15h\n
The following AppProject is created:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: sigma\n namespace: openshift-operators\nspec:\n destinations:\n - namespace: sigma-build\n server: \"https://kubernetes.default.svc\"\n - namespace: sigma-dev\n server: \"https://kubernetes.default.svc\"\n - namespace: sigma-stage\n server: \"https://kubernetes.default.svc\"\n roles:\n - description: >-\n Role that gives full access to all resources inside the tenant's\n namespace to the tenant owner group\n groups:\n - saap-cluster-admins\n - stakater-team\n - sigma-owner-group\n name: sigma-owner\n policies:\n - \"p, proj:sigma:sigma-owner, *, *, sigma/*, allow\"\n - description: >-\n Role that gives edit access to all resources inside the tenant's\n namespace to the tenant owner group\n groups:\n - saap-cluster-admins\n - stakater-team\n - sigma-edit-group\n name: sigma-edit\n policies:\n - \"p, proj:sigma:sigma-edit, *, *, sigma/*, allow\"\n - description: >-\n Role that gives view access to all resources inside the tenant's\n namespace to the tenant owner group\n groups:\n - saap-cluster-admins\n - stakater-team\n - sigma-view-group\n name: sigma-view\n policies:\n - \"p, proj:sigma:sigma-view, *, get, sigma/*, allow\"\n sourceRepos:\n - \"https://github.com/stakater/gitops-config\"\n
Users belonging to the Sigma group will now only see applications created by them in the ArgoCD frontend now:
"},{"location":"usecases/argocd.html#prevent-argocd-from-syncing-certain-namespaced-resources","title":"Prevent ArgoCD from syncing certain namespaced resources","text":"Bill wants tenants to not be able to sync ResourceQuota
and LimitRange
resources to their namespaces. To do this correctly, Bill will specify these resources to blacklist in the ArgoCD portion of the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n argocd:\n namespace: openshift-operators\n namespaceResourceBlacklist:\n - group: \"\"\n kind: ResourceQuota\n - group: \"\"\n kind: LimitRange\n ...\n
Now, if these resources are added to any tenant's project directory in GitOps, ArgoCD will not sync them to the cluster. The AppProject will also have the blacklisted resources added to it:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: sigma\n namespace: openshift-operators\nspec:\n ...\n namespaceResourceBlacklist:\n - group: ''\n kind: ResourceQuota\n - group: ''\n kind: LimitRange\n ...\n
"},{"location":"usecases/argocd.html#allow-argocd-to-sync-certain-cluster-wide-resources","title":"Allow ArgoCD to sync certain cluster-wide resources","text":"Bill now wants tenants to be able to sync the Environment
cluster scoped resource to the cluster. To do this correctly, Bill will specify the resource to allow-list in the ArgoCD portion of the Integration Config's Spec:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n argocd:\n namespace: openshift-operators\n clusterResourceWhitelist:\n - group: \"\"\n kind: Environment\n ...\n
Now, if the resource is added to any tenant's project directory in GitOps, ArgoCD will sync them to the cluster. The AppProject will also have the allow-listed resources added to it:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: sigma\n namespace: openshift-operators\nspec:\n ...\n clusterResourceWhitelist:\n - group: \"\"\n kind: Environment\n ...\n
"},{"location":"usecases/argocd.html#override-namespaceresourceblacklist-andor-clusterresourcewhitelist-per-tenant","title":"Override NamespaceResourceBlacklist and/or ClusterResourceWhitelist per Tenant","text":"Bill now wants a specific tenant to override the namespaceResourceBlacklist
and/or clusterResourceWhitelist
set via Integration Config. Bill will specify these in argoCD.appProjects
section of Tenant spec.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: blue-sky\nspec:\n argocd:\n sourceRepos:\n # specify source repos here\n - \"https://github.com/stakater/GitOps-config\"\n appProject:\n clusterResourceWhitelist:\n - group: admissionregistration.k8s.io\n kind: validatingwebhookconfigurations\n namespaceResourceBlacklist:\n - group: \"\"\n kind: ConfigMap\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n sandbox: false\n namespaces:\n withTenantPrefix:\n - build\n - stage\n
"},{"location":"usecases/configuring-multitenant-network-isolation.html","title":"Configuring Multi-Tenant Isolation with Network Policy Template","text":"Bill is a cluster admin who wants to configure network policies to provide multi-tenant network isolation.
First, Bill creates a template for network policies:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: tenant-network-policy\nresources:\n manifests:\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-same-namespace\n spec:\n podSelector: {}\n ingress:\n - from:\n - podSelector: {}\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-from-openshift-monitoring\n spec:\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n network.openshift.io/policy-group: monitoring\n podSelector: {}\n policyTypes:\n - Ingress\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-from-openshift-ingress\n spec:\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n network.openshift.io/policy-group: ingress\n podSelector: {}\n policyTypes:\n - Ingress\n
Once the template has been created, Bill edits the IntegrationConfig to add unique label to all tenant projects:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n tenant-network-policy: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n sandbox:\n labels:\n stakater.com/kind: sandbox\n privilegedNamespaces:\n - default\n - ^openshift-*\n - ^kube-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n
Bill has added a new label tenant-network-policy: \"true\"
in project section of IntegrationConfig, now MTO will add that label in all tenant projects.
Finally Bill creates a TemplateGroupInstance
which will distribute the network policies using the newly added project label and template.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: tenant-network-policy-group\nspec:\n template: tenant-network-policy\n selector:\n matchLabels:\n tenant-network-policy: \"true\"\n sync: true\n
MTO will now deploy the network policies mentioned in Template
to all projects matching the label selector mentioned in the TemplateGroupInstance.
"},{"location":"usecases/custom-roles.html","title":"Changing the default access level for tenant owners","text":"This feature allows the cluster admins to change the default roles assigned to Tenant owner, editor, viewer groups.
For example, if Bill as the cluster admin wants to reduce the privileges that tenant owners have, so they cannot create or edit Roles or bind them. As an admin of an OpenShift cluster, Bill can do this by assigning the edit
role to all tenant owners. This is easily achieved by modifying the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - edit\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n
Once all namespaces reconcile, the old admin
RoleBindings should get replaced with the edit
ones for each tenant owner.
"},{"location":"usecases/custom-roles.html#giving-specific-permissions-to-some-tenants","title":"Giving specific permissions to some tenants","text":"Bill now wants the owners of the tenants bluesky
and alpha
to have admin
permissions over their namespaces. Custom roles feature will allow Bill to do this, by modifying the IntegrationConfig like this:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - edit\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n owner:\n clusterRoles:\n - admin\n - labelSelector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - bluesky\n owner:\n clusterRoles:\n - admin\n
New Bindings will be created for the Tenant owners of bluesky
and alpha
, corresponding to the admin
Role. Bindings for editors and viewers will be inherited from the default roles
. All other Tenant owners will have an edit
Role bound to them within their namespaces.
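Bill could spot-check the effective access with impersonation (a sketch; impersonation requires cluster-admin rights, the namespace name is illustrative, and an owner of a tenant still on the default edit role would get no instead):
kubectl auth can-i create roles --as anna@aurora.org -n bluesky-dev\nyes\n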
"},{"location":"usecases/deploying-templates.html","title":"Distributing Resources in Namespaces","text":"Multi Tenant Operator has three Custom Resources which can cover this need using the Template
CR, depending on the conditions and preferences:
- TemplateGroupInstance
- TemplateInstance
- Tenant
Stakater Team, however, encourages the use of TemplateGroupInstance
to distribute resources in multiple namespaces as it is optimized for better performance.
"},{"location":"usecases/deploying-templates.html#deploying-template-to-namespaces-via-templategroupinstances","title":"Deploying Template to Namespaces via TemplateGroupInstances","text":"Bill, the cluster admin, wants to deploy a docker pull secret in namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
Afterwards, Bill can see that secrets have been successfully created in all label-matching namespaces.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 2m\n
TemplateGroupInstance
can also target specific tenants or all tenant namespaces under a single yaml definition.
"},{"location":"usecases/deploying-templates.html#templategroupinstance-for-multiple-tenants","title":"TemplateGroupInstance for multiple Tenants","text":"It can be done by using the matchExpressions
field, dividing the tenant label into key and values.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\n sync: true\n
"},{"location":"usecases/deploying-templates.html#templategroupinstance-for-all-tenants","title":"TemplateGroupInstance for all Tenants","text":"This can also be done by using the matchExpressions
field, using just the tenant label key stakater.com/tenant
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: Exists\n sync: true\n
"},{"location":"usecases/deploying-templates.html#deploying-template-to-namespaces-via-tenant","title":"Deploying Template to Namespaces via Tenant","text":"Bill is a cluster admin who wants to deploy a docker-pull-secret in Anna's tenant namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Bill edits Anna's tenant and populates the templateInstances
field:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n editors:\n users:\n - john@aurora.org\n quota: small\n sandboxConfig:\n enabled: true\n templateInstances:\n - spec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n
Multi Tenant Operator will deploy the TemplateInstances mentioned in the templateInstances field. These TemplateInstances will only be applied in those namespaces which belong to Anna's tenant and have the matching label kind: build.
So now Anna adds the label kind: build to her existing namespace bluesky-anna-aurora-sandbox (a sketch of the command is shown below), and after adding the label she sees that the secret has been created.
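A minimal way to add the label with kubectl (a sketch, assuming Anna has edit rights on her sandbox namespace):
kubectl label namespace bluesky-anna-aurora-sandbox kind=build\n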
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"usecases/deploying-templates.html#deploying-template-to-a-namespace-via-templateinstance","title":"Deploying Template to a Namespace via TemplateInstance","text":"Anna wants to deploy a docker pull secret in her namespace.
First Anna asks Bill, the cluster admin, to create a template of the secret for her:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Anna creates a TemplateInstance
in her namespace referring to the Template
she wants to deploy:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: docker-pull-secret-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: docker-pull-secret\n sync: true\n
Once this is created, Anna can see that the secret has been successfully applied.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"usecases/deploying-templates.html#passing-parameters-to-template-via-templateinstance-templategroupinstance-or-tenant","title":"Passing Parameters to Template via TemplateInstance, TemplateGroupInstance or Tenant","text":"Anna wants to deploy a LimitRange resource to certain namespaces.
First, Anna asks Bill, the cluster admin, to create a template with parameters for a LimitRange for her:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: namespace-parameterized-restrictions\nparameters:\n # Name of the parameter\n - name: DEFAULT_CPU_LIMIT\n # The default value of the parameter\n value: \"1\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"0.5\"\n # If a parameter is required the template instance will need to set it\n # required: true\n # Make sure only values are entered for this parameter\n validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n manifests:\n - apiVersion: v1\n kind: LimitRange\n metadata:\n name: namespace-limit-range-${namespace}\n spec:\n limits:\n - default:\n cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n defaultRequest:\n cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n type: Container\n
Afterwards, Anna creates a TemplateInstance
in her namespace referring to the Template
she wants to deploy:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: namespace-parameterized-restrictions-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: namespace-parameterized-restrictions\n sync: true\nparameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n
If she wants to distribute the same Template over multiple namespaces, she can use TemplateGroupInstance
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: namespace-parameterized-restrictions-tgi\nspec:\n template: namespace-parameterized-restrictions\n sync: true\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\nparameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n
Or she can use her tenant to cover only the tenant namespaces.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n editors:\n users:\n - john@aurora.org\n quota: small\n sandboxConfig:\n enabled: true\n templateInstances:\n - spec:\n template: namespace-parameterized-restrictions\n sync: true\n parameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n selector:\n matchLabels:\n kind: build\n
"},{"location":"usecases/distributing-resources.html","title":"Copying Secrets and Configmaps across Tenant Namespaces via TGI","text":"Bill is a cluster admin who wants to map a docker-pull-secret
, present in a build
namespace, to tenant namespaces where certain labels exist.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n resourceMappings:\n secrets:\n - name: docker-pull-secret\n namespace: build\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
Afterwards, Bill can see that the secret has been successfully mapped into all matching namespaces.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"usecases/distributing-resources.html#mapping-resources-within-tenant-namespaces-via-ti","title":"Mapping Resources within Tenant Namespaces via TI","text":"Anna is a tenant owner who wants to map a docker-pull-secret
, present in bluesky-build
namespace, to bluesky-anna-aurora-sandbox
namespace.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n resourceMappings:\n secrets:\n - name: docker-pull-secret\n namespace: bluesky-build\n
Once the template has been created, Anna creates a TemplateInstance
in bluesky-anna-aurora-sandbox
namespace, referring to the Template
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: docker-secret-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: docker-pull-secret\n sync: true\n
Afterwards, Anna can see that the secret has been successfully mapped into her namespace.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"usecases/distributing-secrets-using-sealed-secret-template.html","title":"Distributing Secrets Using Sealed Secrets Template","text":"Bill is a cluster admin who wants to provide a mechanism for distributing secrets in multiple namespaces. For this, he wants to use Sealed Secrets as the solution by adding them to MTO Template CR
First, Bill creates a Template in which Sealed Secret is mentioned:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: tenant-sealed-secret\nresources:\n manifests:\n - kind: SealedSecret\n apiVersion: bitnami.com/v1alpha1\n metadata:\n name: mysecret\n spec:\n encryptedData:\n .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....\n template:\n type: kubernetes.io/dockerconfigjson\n # this is an example of labels and annotations that will be added to the output secret\n metadata:\n labels:\n \"jenkins.io/credentials-type\": usernamePassword\n annotations:\n \"jenkins.io/credentials-description\": credentials from Kubernetes\n
Once the template has been created, Bill has to edit the Tenant
to add a unique label to the namespaces in which the secret has to be deployed. For this, he can use the support for common and specific labels across namespaces.
Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n\n # use this if you want to add label to some specific namespaces\n specificMetadata:\n - namespaces:\n - test-namespace\n labels:\n distribute-image-pull-secret: true\n\n # use this if you want to add label to all namespaces under your tenant\n commonMetadata:\n labels:\n distribute-image-pull-secret: true\n
Bill has added a new label distribute-image-pull-secret: true
for tenant projects/namespaces; MTO will now add that label to the namespaces targeted by whichever field he used.
Finally, Bill creates a TemplateGroupInstance
which will deploy the sealed secrets using the newly created label and the Template.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: tenant-sealed-secret\nspec:\n template: tenant-sealed-secret\n selector:\n matchLabels:\n distribute-image-pull-secret: true\n sync: true\n
MTO will now deploy the sealed secrets mentioned in Template
to the namespaces which have the mentioned label. The rest of the work, deploying a Secret from a SealedSecret, is done by the Sealed Secrets controller.
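Once the Sealed Secrets controller has unsealed it, the resulting Secret should appear next to the SealedSecret in every labelled namespace (a sketch; the namespace name is illustrative):
kubectl get sealedsecret mysecret -n bluesky-dev\nkubectl get secret mysecret -n bluesky-dev\n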
"},{"location":"usecases/extend-default-roles.html","title":"Extending the default access level for tenant members","text":"Bill as the cluster admin wants to extend the default access for tenant members. As an admin of an OpenShift Cluster, Bill can extend the admin, edit, and view ClusterRole using aggregation. Bill will first create a ClusterRole with privileges to resources which Bill wants to extend. Bill will add the aggregation label to the newly created ClusterRole for extending the default ClusterRoles provided by OpenShift.
kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: extend-view-role\n labels:\n rbac.authorization.k8s.io/aggregate-to-view: 'true'\nrules:\n - verbs:\n - get\n - list\n - watch\n apiGroups:\n - user.openshift.io\n resources:\n - groups\n
Note: You can learn more about aggregated-cluster-roles
here
"},{"location":"usecases/hibernation.html","title":"Freeing up unused resources with hibernation","text":""},{"location":"usecases/hibernation.html#hibernating-a-tenant","title":"Hibernating a tenant","text":"Bill is a cluster administrator who wants to free up unused cluster resources at nighttime, in an effort to reduce costs (when the cluster isn't being used).
First, Bill creates a tenant with the hibernation
schedules mentioned in the spec, or adds the hibernation field to an existing tenant:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\nspec:\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n namespaces:\n withoutTenantPrefix:\n - build\n - stage\n - dev\n
The schedules above will put all the Deployments
and StatefulSets
within the tenant's namespaces to sleep, by reducing their pod count to 0 at 8 PM every weekday. At 8 AM on weekdays, the namespaces will then wake up by restoring their applications' previous pod counts.
Bill can verify this behaviour by checking the newly created ResourceSupervisor resource at run time:
oc get ResourceSupervisor -A\nNAME AGE\nsigma 5m\n
The ResourceSupervisor will look like this at 'running' time (as per the schedule):
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - build\n - stage\n - dev\nstatus:\n currentStatus: running\n nextReconcileTime: '2022-10-12T20:00:00Z'\n
The ResourceSupervisor will look like this at 'sleeping' time (as per the schedule):
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - build\n - stage\n - dev\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: build\n kind: Deployment\n name: example\n replicas: 3\n - Namespace: stage\n kind: Deployment\n name: example\n replicas: 3\n
Bill wants to prevent the build
namespace from going to sleep, so he can add the hibernation.stakater.com/exclude: 'true'
annotation to it. The ResourceSupervisor will now look like this after reconciling:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - stage\n - dev\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: stage\n kind: Deployment\n name: example\n replicas: 3\n
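For reference, the exclusion annotation from the step above can be applied with a single command (standard kubectl annotate syntax):
kubectl annotate namespace build hibernation.stakater.com/exclude=true\n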
"},{"location":"usecases/hibernation.html#hibernating-namespaces-andor-argocd-applications-with-resourcesupervisor","title":"Hibernating namespaces and/or ArgoCD Applications with ResourceSupervisor","text":"Bill, the cluster administrator, wants to hibernate a collection of namespaces and AppProjects belonging to multiple different tenants. He can do so by creating a ResourceSupervisor manually, specifying the hibernation schedule in its spec, the namespaces and ArgoCD Applications that need to be hibernated as per the mentioned schedule. Bill can also use the same method to hibernate some namespaces and ArgoCD Applications that do not belong to any tenant on his cluster.
The example given below will hibernate the ArgoCD Applications in the 'test-app-project' AppProject; and it will also hibernate the 'ns2' and 'ns4' namespaces.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: test-resource-supervisor\nspec:\n argocd:\n appProjects:\n - test-app-project\n namespace: argocd-ns\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - ns2\n - ns4\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: ns2\n kind: Deployment\n name: test-deployment\n replicas: 3\n
"},{"location":"usecases/integrationconfig.html","title":"Configuring Managed Namespaces and ServiceAccounts in IntegrationConfig","text":"Bill is a cluster admin who can use IntegrationConfig
to configure how Multi Tenant Operator (MTO)
manages the cluster.
By default, MTO watches all namespaces and will enforce all the governing policies on them. All namespaces managed by MTO require the stakater.com/tenant
label. MTO ignores privileged namespaces that are mentioned in the IntegrationConfig, and does not manage them. Therefore, any tenant label on such namespaces will be ignored.
oc create namespace stakater-test\nError from server (Cannot Create namespace stakater-test without label stakater.com/tenant. User: Bill): admission webhook \"vnamespace.kb.io\" denied the request: Cannot CREATE namespace stakater-test without label stakater.com/tenant. User: Bill\n
Bill is trying to create a namespace without the stakater.com/tenant
label. Creating a namespace without this label is only allowed if the namespace is privileged. Privileged namespaces will be ignored by MTO and do not require the said label. Therefore, Bill will add the required regex in the IntegrationConfig, along with any other namespaces which are privileged and should be ignored by MTO - like default
, or namespaces with prefixes like openshift
, kube
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedNamespaces:\n - ^default$\n - ^openshift*\n - ^kube*\n - ^stakater*\n
After mentioning the required regex (^stakater*
) under privilegedNamespaces
, Bill can create the namespace without interference.
oc create namespace stakater-test\nnamespace/stakater-test created\n
MTO will also disallow all users who are not tenant owners from performing CRUD operations on namespaces. This also prevents Service Accounts from performing CRUD operations.
If Bill wants MTO to ignore certain Service Accounts, he simply has to add them to the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedServiceAccounts:\n - system:serviceaccount:openshift\n - system:serviceaccount:stakater\n - system:serviceaccount:kube\n - system:serviceaccount:redhat\n - system:serviceaccount:hive\n
Bill can also use regex patterns to ignore a set of service accounts:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift*\n - ^system:serviceaccount:stakater*\n
"},{"location":"usecases/integrationconfig.html#configuring-vault-in-integrationconfig","title":"Configuring Vault in IntegrationConfig","text":"Vault is used to secure, store and tightly control access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.
If Bill (the cluster admin) has Vault configured in his cluster, then he can benefit from MTO's integration with Vault.
MTO automatically creates Vault secret paths for tenants, where tenant members can securely save their secrets. It also authorizes tenant members to access these secrets via OIDC.
Bill would first have to integrate Vault with MTO by adding the details to the IntegrationConfig. For more details, refer to the IntegrationConfig documentation.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n vault:\n enabled: true\n endpoint:\n secretReference:\n name: vault-root-token\n namespace: vault\n url: >-\n https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n sso:\n accessorID: auth_oidc_aa6aa9aa\n clientName: vault\n
Bill then creates a tenant for Anna and John:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@acme.org\n viewers:\n users:\n - john@acme.org\n quota: small\n sandbox: false\n
Now Bill goes to Vault
and sees that a path for the tenant
has been created under the name bluesky/kv
, confirming that Tenant members with the Owner or Edit roles now have access to the tenant's Vault path.
Now if Anna signs in to Vault via OIDC, she can see her tenant's path and secrets, whereas if John signs in to Vault via OIDC, he can't see the tenant's path or secrets as he doesn't have the access required to view them.
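For illustration, Anna's flow from the CLI might look like this (a sketch; it assumes the Vault CLI is pointed at the cluster's Vault via VAULT_ADDR and that the tenant's KV path is bluesky/kv as created by MTO):
vault login -method=oidc\nvault kv list bluesky/kv\n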
"},{"location":"usecases/integrationconfig.html#configuring-rhsso-red-hat-single-sign-on-in-integrationconfig","title":"Configuring RHSSO (Red Hat Single Sign-On) in IntegrationConfig","text":"Red Hat Single Sign-On RHSSO is based on the Keycloak project and enables you to secure your web applications by providing Web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0.
If Bill, the cluster admin, has RHSSO configured in his cluster, then he can benefit from MTO's integration with RHSSO and Vault.
MTO automatically allows tenant members to access Vault via OIDC (RHSSO authentication and authorization), giving them access to the tenant secret paths where they can securely save their secrets.
Bill would first have to integrate RHSSO with MTO by adding the details in IntegrationConfig. Visit here for more details.
rhsso:\n enabled: true\n realm: customer\n endpoint:\n secretReference:\n name: auth-secrets\n namespace: openshift-auth\n url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n
"},{"location":"usecases/mattermost.html","title":"Creating Mattermost Teams for your tenant","text":""},{"location":"usecases/mattermost.html#requirements","title":"Requirements","text":"MTO-Mattermost-Integration-Operator
Please contact Stakater to install the Mattermost integration operator before following the steps mentioned below.
"},{"location":"usecases/mattermost.html#steps-to-enable-integration","title":"Steps to enable integration","text":"Bill wants some of the tenants to also have their own Mattermost Teams. To make sure this happens correctly, Bill will first add the stakater.com/mattermost: true
label to the tenants. The label will enable the mto-mattermost-integration-operator
to create and manage Mattermost Teams based on Tenants.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\n labels:\n stakater.com/mattermost: 'true'\nspec:\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n sandbox: false\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n
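For an already existing Tenant, the same label could also be added imperatively (a sketch using the fully qualified resource name used elsewhere in this guide):
kubectl label tenants.tenantoperator.stakater.com sigma stakater.com/mattermost=true\n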
Now users can log in to Mattermost to see their Team and the relevant channels associated with it.
The name of the Team is similar to the Tenant name. Notification channels are pre-configured for every team, and can be modified.
"},{"location":"usecases/namespace.html","title":"Creating Namespace","text":"Anna as the tenant owner can create new namespaces for her tenant.
apiVersion: v1\nkind: Namespace\nmetadata:\n name: bluesky-production\n labels:\n stakater.com/tenant: bluesky\n
\u26a0\ufe0f Anna is required to add the tenant label stakater.com/tenant: bluesky
which contains the name of her tenant bluesky
, while creating the namespace. If this label is not added or if Anna does not belong to the bluesky
tenant, then Multi Tenant Operator will not allow the creation of that namespace.
When Anna creates the namespace, MTO assigns Anna and other tenant members the roles based on their user types, such as a tenant owner getting the OpenShift admin
role for that namespace.
As a tenant owner, Anna is able to create namespaces.
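Anna can quickly confirm her access in the new namespace from her own session (a sketch; the exact verbs available depend on the ClusterRole bound for owners):
kubectl auth can-i create deployments -n bluesky-production\nyes\n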
If you have enabled ArgoCD Multitenancy, our preferred solution is to create tenant namespaces by using the Tenant spec to avoid syncing issues in the ArgoCD console during namespace creation.
"},{"location":"usecases/namespace.html#add-existing-namespaces-to-tenant-via-gitops","title":"Add Existing Namespaces to Tenant via GitOps","text":"Using GitOps as your preferred development workflow, you can add existing namespaces for your tenants by including the tenant label.
To add an existing namespace to your tenant via GitOps:
- First, migrate your namespace resource to your \u201cwatched\u201d git repository
- Edit your namespace
yaml
to include the tenant label - Tenant label follows the naming convention
stakater.com/tenant: <TENANT_NAME>
- Sync your GitOps repository with your cluster and allow changes to be propagated
- Verify that your Tenant users now have access to the namespace
For example, if Anna, a tenant owner, wants to add the namespace bluesky-dev
to her tenant via GitOps, she first migrates her namespace manifest to a \u201cwatched repository\u201d:
apiVersion: v1\nkind: Namespace\nmetadata:\n name: bluesky-dev\n
She can then add the tenant label
...\n labels:\n stakater.com/tenant: bluesky\n
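Putting the two snippets together, the manifest in the watched repository ends up looking like this:
apiVersion: v1\nkind: Namespace\nmetadata:\n name: bluesky-dev\n labels:\n stakater.com/tenant: bluesky\n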
All the users of the bluesky
tenant now have access to the existing namespace.
Additionally, to remove namespaces from a tenant, simply remove the tenant label from the namespace resource and sync your changes to your cluster.
"},{"location":"usecases/namespace.html#remove-namespaces-from-your-cluster-via-gitops","title":"Remove Namespaces from your Cluster via GitOps","text":"GitOps is a quick and efficient way to automate the management of your K8s resources.
To remove namespaces from your cluster via GitOps:
- Remove the
yaml
file containing your namespace configurations from your \u201cwatched\u201d git repository. - ArgoCD automatically sets the
[app.kubernetes.io/instance](http://app.kubernetes.io/instance)
label on resources it manages. It uses this label it to select resources which inform the basis of an app. To remove a namespace from a managed ArgoCD app, remove the ArgoCD label app.kubernetes.io/instance
from the namespace manifest. - You can edit your namespace manifest through the OpenShift Web Console or with the OpenShift command line tool.
- Now that you have removed your namespace manifest from your watched git repository, and from your managed ArgoCD apps, sync your git repository and allow your changes be propagated.
- Verify that your namespace has been deleted.
"},{"location":"usecases/private-sandboxes.html","title":"Create Private Sandboxes","text":"Bill assigned the ownership of bluesky
to Anna
and Anthony
. Now if the users want sandboxes to be made for them, they'll have to ask Bill
to enable sandbox
functionality. The users also want to make sure that the sandboxes created for them are only visible to the user they belong to. To enable that, Bill will just set enabled: true
and private: true
within the sandboxConfig
field
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandboxConfig:\n enabled: true\n private: true\nEOF\n
With the above configuration Anna
and Anthony
will now have new sandboxes created
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\nbluesky-anthony-aurora-sandbox Active 5d5h\nbluesky-john-aurora-sandbox Active 5d5h\n
However, from the perspective of Anna
, only their sandbox will be visible
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\n
"},{"location":"usecases/quota.html","title":"Enforcing Quotas","text":"Using Multi Tenant Operator, the cluster-admin can set and enforce cluster resource quotas and limit ranges for tenants.
"},{"location":"usecases/quota.html#assigning-resource-quotas","title":"Assigning Resource Quotas","text":"Bill is a cluster admin who will first create Quota
CR where he sets the maximum resource limits that Anna's tenant will have. Here limitrange
is an optional field; the cluster admin can skip it if not needed.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: small\nspec:\n resourcequota:\n hard:\n requests.cpu: '5'\n requests.memory: '5Gi'\n configmaps: \"10\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\n limitrange:\n limits:\n - type: \"Pod\"\n max:\n cpu: \"2\"\n memory: \"1Gi\"\n min:\n cpu: \"200m\"\n memory: \"100Mi\"\nEOF\n
For more details please refer to Quotas.
kubectl get quota small\nNAME STATE AGE\nsmall Active 3m\n
Bill then proceeds to create a tenant for Anna, while also linking the newly created Quota
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@stakater.com\n quota: small\n sandbox: false\nEOF\n
Now that the quota is linked with Anna's tenant, Anna can create any resource within the values of resource quota and limit range.
kubectl -n bluesky-production create deployment nginx --image nginx:latest --replicas 4\n
Once the resource quota assigned to the tenant has been reached, Anna cannot create further resources.
kubectl run bluesky-training --image nginx:latest -n bluesky-production\nError from server (Cannot exceed Namespace quota: please, reach out to the system administrators)\n
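To see how much of the quota has been consumed, Anna can inspect the ResourceQuota object that MTO creates in her namespace (a sketch; the object's name may differ per installation):
kubectl get resourcequota -n bluesky-production\nkubectl describe resourcequota -n bluesky-production\n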
"},{"location":"usecases/secret-distribution.html","title":"Propagate Secrets from Parent to Descendant namespaces","text":"Secrets like registry
credentials often need to exist in multiple Namespaces, so that Pods within different namespaces can have access to those credentials in the form of secrets.
Manually creating secrets within different namespaces could lead to challenges, such as:
- Someone will have to create the secret either manually or via GitOps each time there is a new descendant namespace that needs the secret
- If we update the parent secret, we will have to update it in all descendant namespaces
- This could be time-consuming, and a small mistake while creating or updating the secret could lead to unnecessary debugging
With the help of Multi-Tenant Operator's Template feature we can make this secret distribution experience easy.
For example, to copy a Secret called registry
which exists in the example
namespace, to new Namespaces whenever they are created, we will first create a Template which will have a reference to the registry secret.
It will also push updates to the copied Secrets and keep the propagated secrets always in sync with the parent secret.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: registry-secret\nresources:\n resourceMappings:\n secrets:\n - name: registry\n namespace: example\n
Now, using this Template, we can propagate the registry secret to different namespaces that have a common set of labels.
For example, we will just add one label kind: registry
and all namespaces with this label will get this secret.
To propagate it to different namespaces dynamically, we will have to create another resource called TemplateGroupInstance
. TemplateGroupInstance
will have Template
and matchLabel
mapping as shown below:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: registry-secret-group-instance\nspec:\n template: registry-secret\n selector:\n matchLabels:\n kind: registry\n sync: true\n
After reconciliation, you will be able to see those secrets in the namespaces having the mentioned label.
MTO will keep injecting this secret into new namespaces created with that label.
kubectl get secret registry -n example-ns-1\nNAME STATE AGE\nregistry Active 3m\n\nkubectl get secret registry -n example-ns-2\nNAME STATE AGE\nregistry Active 3m\n
"},{"location":"usecases/template.html","title":"Creating Templates","text":"Anna wants to create a Template that she can use to initialize or share common resources across namespaces (e.g. PullSecrets).
Anna can either create a template using the manifests
field, covering Kubernetes or custom resources.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Or by using Helm Charts
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: redis\nresources:\n helm:\n releaseName: redis\n chart:\n repository:\n name: redis\n repoUrl: https://charts.bitnami.com/bitnami\n values: |\n redisPort: 6379\n
She can also use resourceMapping
field to copy over secrets and configmaps from one namespace to others.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: resource-mapping\nresources:\n resourceMappings:\n secrets:\n - name: docker-secret\n namespace: bluesky-build\n configMaps:\n - name: tronador-configMap\n namespace: stakater-tronador\n
Note: Resource mapping can be used via TGI to map resources within tenant namespaces or to some other tenant's namespace. If used with TI, the resources will only be mapped if the namespaces belong to the same tenant.
"},{"location":"usecases/template.html#using-templates-with-default-parameters","title":"Using Templates with Default Parameters","text":"apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: namespace-parameterized-restrictions\nparameters:\n # Name of the parameter\n - name: DEFAULT_CPU_LIMIT\n # The default value of the parameter\n value: \"1\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"0.5\"\n # If a parameter is required the template instance will need to set it\n # required: true\n # Make sure only values are entered for this parameter\n validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n manifests:\n - apiVersion: v1\n kind: LimitRange\n metadata:\n name: namespace-limit-range-${namespace}\n spec:\n limits:\n - default:\n cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n defaultRequest:\n cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n type: Container\n
Parameters can be used with both manifests
and helm charts
"},{"location":"usecases/tenant.html","title":"Creating Tenant","text":"Bill is a cluster admin who receives a new request from Aurora Solutions CTO asking for a new tenant for Anna's team.
Bill creates a new tenant called bluesky
in the cluster:
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandbox: false\nEOF\n
Bill checks if the new tenant is created:
kubectl get tenants.tenantoperator.stakater.com bluesky\nNAME STATE AGE\nbluesky Active 3m\n
Anna can now log in to the cluster and check if she can create namespaces
kubectl auth can-i create namespaces\nyes\n
However, cluster resources are not accessible to Anna
kubectl auth can-i get namespaces\nno\n\nkubectl auth can-i get persistentvolumes\nno\n
Including the Tenant
resource
kubectl auth can-i get tenants.tenantoperator.stakater.com\nno\n
"},{"location":"usecases/tenant.html#assign-multiple-users-as-tenant-owner","title":"Assign multiple users as tenant owner","text":"In the example above, Bill assigned the ownership of bluesky
to Anna
. If another user, e.g. Anthony
needs to administer bluesky
, then Bill can assign the ownership of the tenant to that user as well:
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandbox: false\nEOF\n
With the configuration above, Anthony can log-in to the cluster and execute
kubectl auth can-i create namespaces\nyes\n
"},{"location":"usecases/tenant.html#assigning-users-sandbox-namespace","title":"Assigning Users Sandbox Namespace","text":"Bill assigned the ownership of bluesky
to Anna
and Anthony
. Now if the users want sandboxes to be made for them, they'll have to ask Bill
to enable sandbox
functionality.
To enable that, Bill will just set enabled: true
within the sandboxConfig
field
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandboxConfig:\n enabled: true\nEOF\n
With the above configuration Anna
and Anthony
will now have new sandboxes created
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\nbluesky-anthony-aurora-sandbox Active 5d5h\nbluesky-john-aurora-sandbox Active 5d5h\n
If Bill wants to make sure that only the sandbox owner can view his sandbox namespace, he can achieve this by setting private: true
within the sandboxConfig
field.
"},{"location":"usecases/tenant.html#creating-namespaces-via-tenant-custom-resource","title":"Creating Namespaces via Tenant Custom Resource","text":"Bill now wants to create namespaces for dev
, build
and production
environments for the tenant members. To create those namespaces Bill will just add those names within the namespaces
field in the tenant CR. If Bill wants to append the tenant name as a prefix to the namespace name, then he can use namespaces.withTenantPrefix
field. Otherwise, he can use namespaces.withoutTenantPrefix
for namespaces for which he does not need the tenant name as a prefix.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n namespaces:\n withTenantPrefix:\n - dev\n - build\n withoutTenantPrefix:\n - prod\nEOF\n
With the above configuration tenant members will now see new namespaces have been created.
kubectl get namespaces\nNAME STATUS AGE\nbluesky-dev Active 5d5h\nbluesky-build Active 5d5h\nprod Active 5d5h\n
"},{"location":"usecases/tenant.html#distributing-common-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing common labels and annotations to tenant namespaces via Tenant Custom Resource","text":"Bill now wants to add labels/annotations to all the namespaces for a tenant. To create those labels/annotations Bill will just add them into commonMetadata.labels
/commonMetadata.annotations
field in the tenant CR.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n commonMetadata:\n labels:\n app.kubernetes.io/managed-by: tenant-operator\n app.kubernetes.io/part-of: tenant-alpha\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/infra=\nEOF\n
With the above configuration all tenant namespaces will now contain the mentioned labels and annotations.
"},{"location":"usecases/tenant.html#distributing-specific-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing specific labels and annotations to tenant namespaces via Tenant Custom Resource","text":"Bill now wants to add labels/annotations to specific namespaces for a tenant. To create those labels/annotations Bill will just add them into specificMetadata.labels
/specificMetadata.annotations
and specific namespaces in specificMetadata.namespaces
field in the tenant CR.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandboxConfig:\n enabled: true\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n specificMetadata:\n - namespaces:\n - bluesky-anna-aurora-sandbox\n labels:\n app.kubernetes.io/is-sandbox: true\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\nEOF\n
With the above configuration all tenant namespaces will now contain the mentioned labels and annotations.
"},{"location":"usecases/tenant.html#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted","title":"Retaining tenant namespaces and AppProject when a tenant is being deleted","text":"Bill now wants to delete tenant bluesky
and wants to retain all namespaces and AppProject of the tenant. To retain the namespaces Bill will set spec.onDelete.cleanNamespaces
, and spec.onDelete.cleanAppProject
to false
.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n quota: small\n sandboxConfig:\n enabled: true\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n onDelete:\n cleanNamespaces: false\n cleanAppProject: false\n
With the above configuration all tenant namespaces and AppProject will not be deleted when tenant bluesky
is deleted. By default, the value of spec.onDelete.cleanNamespaces
is also false
and spec.onDelete.cleanAppProject
is true
"},{"location":"usecases/volume-limits.html","title":"Limiting PersistentVolume for Tenant","text":"Bill, as a cluster admin, wants to restrict the amount of storage a Tenant can use. For that he'll add the requests.storage
field to quota.spec.resourcequota.hard
. If Bill wants to restrict tenant bluesky
to use only 50Gi
of storage, he'll first create a quota with requests.storage
field set to 50Gi
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: medium\nspec:\n resourcequota:\n hard:\n requests.cpu: '5'\n requests.memory: '10Gi'\n requests.storage: '50Gi'\nEOF\n
Once the quota is created, Bill will create the tenant and set the quota field to the one he created.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n quota: medium\n sandbox: true\nEOF\n
Now, the combined storage used by all tenant namespaces will not exceed 50Gi
.
"},{"location":"usecases/volume-limits.html#adding-storageclass-restrictions-for-tenant","title":"Adding StorageClass Restrictions for Tenant","text":"Now, Bill, as a cluster admin, wants to make sure that no Tenant can provision more than a fixed amount of storage from a StorageClass. Bill can restrict that using <storage-class-name>.storageclass.storage.k8s.io/requests.storage
field in quota.spec.resourcequota.hard
field. If Bill wants to restrict tenant sigma
to use only 20Gi
of storage from storage class stakater
, he'll first create a StorageClass stakater
and then create the relevant Quota with stakater.storageclass.storage.k8s.io/requests.storage
field set to 20Gi
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: small\nspec:\n resourcequota:\n hard:\n requests.cpu: '2'\n requests.memory: '4Gi'\n stakater.storageclass.storage.k8s.io/requests.storage: '20Gi'\nEOF\n
Once the quota is created, Bill will create the tenant and set the quota field to the one he created.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\nspec:\n owners:\n users:\n - dave@aurora.org\n quota: small\n sandbox: true\nEOF\n
Now, the combined storage provisioned from StorageClass stakater
used by all tenant namespaces will not exceed 20Gi
.
The 20Gi
limit will only be applied to StorageClass stakater
. If a tenant member creates a PVC with some other StorageClass, he will not be restricted.
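For example, a PVC like the following, created in any of the tenant's namespaces, counts towards the 20Gi stakater budget, while a PVC that sets a different storageClassName would not (a sketch with illustrative names):
apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: example-pvc\n namespace: sigma-dev\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: stakater\n resources:\n requests:\n storage: 5Gi\n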
Tip
More details about Resource Quota
can be found here
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"Introduction","text":"Kubernetes is designed to support a single tenant platform; OpenShift brings some improvements with its \"Secure by default\" concepts but it is still very complex to design and orchestrate all the moving parts involved in building a secure multi-tenant platform hence making it difficult for cluster admins to host multi-tenancy in a single OpenShift cluster. If multi-tenancy is achieved by sharing a cluster, it can have many advantages, e.g. efficient resource utilization, less configuration effort and easier sharing of cluster-internal resources among different tenants. OpenShift and all managed applications provide enough primitive resources to achieve multi-tenancy, but it requires professional skills and deep knowledge of OpenShift.
This is where Multi Tenant Operator (MTO) comes in and provides easy to manage/configure multi-tenancy. MTO provides wrappers around OpenShift resources to provide a higher level of abstraction to users. With MTO admins can configure Network and Security Policies, Resource Quotas, Limit Ranges, RBAC for every tenant, which are automatically inherited by all the namespaces and users in the tenant. Depending on the user's role, they are free to operate within their tenants in complete autonomy. MTO supports initializing new tenants using GitOps management pattern. Changes can be managed via PRs just like a typical GitOps workflow, so tenants can request changes, add new users, or remove users.
The idea of MTO is to use namespaces as independent sandboxes, where tenant applications can run independently of each other. Cluster admins shall configure MTO's custom resources, which then become a self-service system for tenants. This minimizes the efforts of the cluster admins.
MTO enables cluster admins to host multiple tenants in a single OpenShift Cluster, i.e.:
- Share an OpenShift cluster with multiple tenants
- Share managed applications with multiple tenants
- Configure and manage tenants and their sandboxes
MTO is also OpenShift certified
"},{"location":"index.html#features","title":"Features","text":"The major features of Multi Tenant Operator (MTO) are described below.
"},{"location":"index.html#kubernetes-multitenancy","title":"Kubernetes Multitenancy","text":"RBAC is one of the most complicated and error-prone parts of Kubernetes. With Multi Tenant Operator, you can rest assured that RBAC is configured with the \"least privilege\" mindset and all rules are kept up-to-date with zero manual effort.
Multi Tenant Operator binds existing ClusterRoles to the Tenant's Namespaces used for managing access to the Namespaces and the resources they contain. You can also modify the default roles or create new roles to have full control and customize access control for your users and teams.
Multi Tenant Operator is also able to leverage existing OpenShift groups or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system.
"},{"location":"index.html#hashicorp-vault-multitenancy","title":"HashiCorp Vault Multitenancy","text":"Multi Tenant Operator extends the tenants permission model to HashiCorp Vault where it can create Vault paths and greatly ease the overhead of managing RBAC in Vault. Tenant users can manage their own secrets without the concern of someone else having access to their Vault paths.
More details on Vault Multitenancy
"},{"location":"index.html#argocd-multitenancy","title":"ArgoCD Multitenancy","text":"Multi Tenant Operator is not only providing strong Multi Tenancy for the OpenShift internals but also extends the tenants permission model to ArgoCD were it can provision AppProjects and Allowed Repositories for your tenants greatly ease the overhead of managing RBAC in ArgoCD.
More details on ArgoCD Multitenancy
"},{"location":"index.html#resource-management","title":"Resource Management","text":"Multi Tenant Operator provides a mechanism for defining Resource Quotas at the tenant scope, meaning all namespaces belonging to a particular tenant share the defined quota, which is why you are able to safely enable dev teams to self serve their namespaces whilst being confident that they can only use the resources allocated based on budget and business needs.
More details on Quota
"},{"location":"index.html#templates-and-template-distribution","title":"Templates and Template distribution","text":"Multi Tenant Operator allows admins/users to define templates for namespaces, so that others can instantiate these templates to provision namespaces with batteries loaded. A template could pre-populate a namespace for certain use cases or with basic tooling required. Templates allow you to define Kubernetes manifests, Helm chart and more to be applied when the template is used to create a namespace.
It also allows the parameterizing of these templates for flexibility and ease of use. It also provides the option to enforce the presence of templates in one tenant's or all the tenants' namespaces for configuring secure defaults.
Common use cases for namespace templates may be:
- Adding networking policies for multitenancy
- Adding development tooling to a namespace
- Deploying pre-populated databases with test data
- Injecting new namespaces with optional credentials such as image pull secrets
More details on Distributing Template Resources
"},{"location":"index.html#mto-console","title":"MTO Console","text":"Multi Tenant Operator Console is a comprehensive user interface designed for both administrators and tenant users to manage multi-tenant environments. The MTO Console simplifies the complexity involved in handling various aspects of tenants and their related resources. It serves as a centralized monitoring hub, offering insights into the current state of tenants, namespaces, templates and quotas. It is designed to provide a quick summary/snapshot of MTO's status and facilitates easier interaction with various resources such as tenants, namespaces, templates, and quotas.
More details on Console
"},{"location":"index.html#showback","title":"Showback","text":"The showback functionality in Multi Tenant Operator (MTO) Console is a significant feature designed to enhance the management of resources and costs in multi-tenant Kubernetes environments. This feature focuses on accurately tracking the usage of resources by each tenant, and/or namespace, enabling organizations to monitor and optimize their expenditures. Furthermore, this functionality supports financial planning and budgeting by offering a clear view of operational costs associated with each tenant. This can be particularly beneficial for organizations that chargeback internal departments or external clients based on resource usage, ensuring that billing is fair and reflective of actual consumption.
More details on Showback
"},{"location":"index.html#hibernation","title":"Hibernation","text":"Multi Tenant Operator can downscale Deployments and StatefulSets in a tenant's Namespace according to a defined sleep schedule. The Deployments and StatefulSets are brought back to their required replicas according to the provided wake schedule.
More details on Hibernation
"},{"location":"index.html#mattermost-multitenancy","title":"Mattermost Multitenancy","text":"Multi Tenant Operator can manage Mattermost to create Teams for tenant users. All tenant users get a unique team and a list of predefined channels gets created. When a user is removed from the tenant, the user is also removed from the Mattermost team corresponding to tenant.
More details on Mattermost
"},{"location":"index.html#remote-development-namespaces","title":"Remote Development Namespaces","text":"Multi Tenant Operator can be configured to automatically provision a namespace in the cluster for every member of the specific tenant, that will also be preloaded with any selected templates and consume the same pool of resources from the tenants quota creating safe remote dev namespaces that teams can use as scratch namespace for rapid prototyping and development. So, every developer gets a Kubernetes-based cloud development environment that feel like working on localhost.
More details on Sandboxes
"},{"location":"index.html#cross-namespace-resource-distribution","title":"Cross Namespace Resource Distribution","text":"Multi Tenant Operator supports cloning of secrets and configmaps from one namespace to another namespace based on label selectors. It uses templates to enable users to provide reference to secrets and configmaps. It uses a template group instance to distribute those secrets and namespaces in matching namespaces, even if namespaces belong to different tenants. If template instance is used then the resources will only be mapped if namespaces belong to same tenant.
More details on Distributing Secrets and ConfigMaps
"},{"location":"index.html#self-service","title":"Self-Service","text":"With Multi Tenant Operator, you can empower your users to safely provision namespaces for themselves and their teams (typically mapped to SSO groups). Team-owned namespaces and the resources inside them count towards the team's quotas rather than the user's individual limits and are automatically shared with all team members according to the access rules you configure in Multi Tenant Operator.
Also, by leveraging Multi Tenant Operator's templating mechanism, namespaces can be provisioned and automatically pre-populated with any kind of resource or multiple resources such as network policies, docker pull secrets or even Helm charts etc
"},{"location":"index.html#everything-as-codegitops-ready","title":"Everything as Code/GitOps Ready","text":"Multi Tenant Operator is designed and built to be 100% OpenShift-native and to be configured and managed the same familiar way as native OpenShift resources so is perfect for modern shops that are dedicated to GitOps as it is fully configurable using Custom Resources.
"},{"location":"index.html#preventing-clusters-sprawl","title":"Preventing Clusters Sprawl","text":"As companies look to further harness the power of cloud-native, they are adopting container technologies at rapid speed, increasing the number of clusters and workloads. As the number of Kubernetes clusters grows, this is an increasing work for the Ops team. When it comes to patching security issues or upgrading clusters, teams are doing five times the amount of work.
With Multi Tenant Operator, multiple teams, groups of users, or departments can share a single cluster, saving operational and management effort and preventing Kubernetes cluster sprawl.
"},{"location":"index.html#native-experience","title":"Native Experience","text":"Multi Tenant Operator provides multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries.
"},{"location":"argocd-multitenancy.html","title":"ArgoCD Multi-tenancy","text":"ArgoCD is a declarative GitOps tool built to deploy applications to Kubernetes. While the continuous delivery (CD) space is seen by some as crowded these days, ArgoCD does bring some interesting capabilities to the table. Unlike other tools, ArgoCD is lightweight and easy to configure.
"},{"location":"argocd-multitenancy.html#why-argocd","title":"Why ArgoCD?","text":"Application definitions, configurations, and environments should be declarative and version controlled. Application deployment and lifecycle management should be automated, auditable, and easy to understand.
"},{"location":"argocd-multitenancy.html#argocd-integration-in-multi-tenant-operator","title":"ArgoCD integration in Multi Tenant Operator","text":"With Multi Tenant Operator (MTO), cluster admins can configure multi tenancy in their cluster. Now with ArgoCD integration, multi tenancy can be configured in ArgoCD applications and AppProjects.
MTO (if configured to) will create an AppProject for each tenant. The AppProject allows tenants to create ArgoCD Applications that can be synced to namespaces owned by those tenants. Cluster admins can also blacklist certain namespace-scoped resources and allow certain cluster-scoped resources if needed (see the NamespaceResourceBlacklist
and ClusterResourceWhitelist
sections in Integration Config docs and Tenant Custom Resource docs).
Note that ArgoCD integration in MTO is completely optional.
"},{"location":"argocd-multitenancy.html#default-argocd-configuration","title":"Default ArgoCD configuration","text":"We have set a default ArgoCD configuration in Multi Tenant Operator that fulfils the following use cases:
- Tenants are able to see only their ArgoCD applications in the ArgoCD frontend
- Tenant 'Owners' and 'Editors' will have full access to their ArgoCD applications
- Tenants in the 'Viewers' group will have read-only access to their ArgoCD applications
- Tenants can sync all namespace-scoped resources, except those that are blacklisted in the spec
- Tenants can only sync cluster-scoped resources that are allow-listed in the spec
- Tenant 'Owners' can configure their own GitOps source repos at a tenant level
- Cluster admins can prevent specific resources from syncing via ArgoCD
- Cluster admins have full access to all ArgoCD applications and AppProjects
- Since ArgoCD integration is on a per-tenant level, namespace-scoped applications are only synced to Tenant's namespaces
Detailed use cases showing how to create AppProjects are mentioned in use cases for ArgoCD.
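To make the defaults above concrete, the AppProject generated for a tenant can be pictured roughly as the sketch below. This is an illustration only: the tenant name alpha, the openshift-gitops namespace, the destination namespaces, and the repository URL are assumptions, and the exact fields MTO sets may differ.
# illustrative sketch; values are assumptions, not the exact manifest MTO generates\napiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n  name: alpha\n  namespace: openshift-gitops\nspec:\n  destinations:\n    - namespace: alpha-dev\n      server: https://kubernetes.default.svc\n    - namespace: alpha-build\n      server: https://kubernetes.default.svc\n  sourceRepos:\n    - https://github.com/stakater/gitops-config\n  clusterResourceWhitelist:\n    - group: tronador.stakater.com\n      kind: Environment\n  namespaceResourceBlacklist:\n    - group: \"\"\n      kind: ConfigMap\n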
"},{"location":"changelog.html","title":"Changelog","text":""},{"location":"changelog.html#v010x","title":"v0.10.x","text":""},{"location":"changelog.html#v0100","title":"v0.10.0","text":""},{"location":"changelog.html#feature","title":"Feature","text":" - Added support for caching for MTO Console using PostgreSQL as caching layer.
- Added support for custom metrics with Template, Template Instance and Template Group Instance.
- Graph visualization of Tenant and its associated resources on MTO Console.
- Tenant and Admin level authz/authn support within MTO Console and Gateway.
- The MTO Console now lets you view the cost of different Tenant resources, with filters for date, resource type, and more.
- MTO can now create a default keycloak realm, client and
mto-admin
user for Console. - Implemented Cluster Resource Quota for vanilla Kubernetes platform type.
- Dependency of TLS secrets for MTO Webhook.
- Added Helm Chart that would be used for installing MTO over Kubernetes.
- And it comes with default Cert Manager manifests for certificates.
- Support for MTO e2e.
"},{"location":"changelog.html#fix","title":"Fix","text":" - Updated CreateMergePatch to MergeMergePatches to address issues caused by losing
resourceVersion
and UID when converting oldObject
to newObject
. This prevents problems when the object is edited by another controller. - In Template resource distribution for the Secret type, the source Secret's type field is now respected, preventing the distributed Secret from defaulting to Opaque regardless of the source's actual type.
- Enhanced admin permissions for tenant role in Vault to include Create, Update, Delete alongside existing Read and List privileges for the common-shared-secrets path. Viewers now have Read permission.
"},{"location":"changelog.html#enhanced","title":"Enhanced","text":" - Started to support Kubernetes along with OpenShift as platform type.
- Support of MTO's PostgreSQL instance as persistent storage for keycloak.
kube:admin
is now bypassed by default when performing operations; previously, kube:admin
had to be mentioned in the respective tenants to give it access to namespaces.
"},{"location":"changelog.html#v09x","title":"v0.9.x","text":""},{"location":"changelog.html#v094","title":"v0.9.4","text":" - enhance: Removed Quota's default support of adding it to Tenant CR in
spec.quota
, if quota.tenantoperator.stakater.com/is-default: \"true\"
annotation is present - fix: ValidatingWebhookConfiguration CRs are now owned by OLM, to handle cleanup upon operator uninstall
- enhance: TemplateGroupInstance CRs now actively watch the resources they apply, and perform functions to make sure they are in sync with the state mentioned in their respective Templates
More information about TemplateGroupInstance's sync at Sync Resources Deployed by TemplateGroupInstance
"},{"location":"changelog.html#v092","title":"v0.9.2","text":" - fix: Values within TemplateInstances created via Tenants will no longer be duplicated on Tenant CR update
- fix: Fixed a bug that made private namespaces become public
"},{"location":"changelog.html#v091","title":"v0.9.1","text":" - fix: Allow namespace controller to reconcile without crashing, if no IC exists
- fix: In case a group mentioned in IC doesn't exist, it won't block reconciliation or editing of MTO's manifests
"},{"location":"changelog.html#v090","title":"v0.9.0","text":" - feat: Added console for tenants, templates and integration config
- feat: Added support for custom realm name for RHSSO integration in Integration Config
- feat: Add multiple status conditions to tenant and TGI for success and failure cases
- feat: Show error messages with tenant and TGI status
- fix: Stop reconciliation breaking for tenant and TGI, instead continue and show warnings
- fix: Disable TGI/TI reconcile if mentioned template is not found.
- fix: Disable repeated users webhook in tenant
- enhance: Reduced API calls
- enhance: General enhancements and improvements
- chore: Update dependencies
"},{"location":"changelog.html#enabling-console","title":"Enabling console","text":" - To enable console visit Installation, and add config to subscription for OperatorHub based installation.
"},{"location":"changelog.html#v08x","title":"v0.8.x","text":""},{"location":"changelog.html#v083","title":"v0.8.3","text":" - fix: Reconcile namespaces when the group spec for tenants is changed, so new rolebindings can be created for them
"},{"location":"changelog.html#v081","title":"v0.8.1","text":" - fix: Updated release pipelines
"},{"location":"changelog.html#v080","title":"v0.8.0","text":" - feat: Allow custom roles for each tenant via label selector, more details in custom roles document
- Roles mapping is a required field in MTO's IntegrationConfig. By default, it will always be filled with OpenShift's admin/edit/view roles
- Ensure that mentioned roles exist within the cluster
- Remove coupling with OpenShift's built-in admin/edit/view roles
- feat: Removed coupling of ResourceSupervisor and Tenant resources
- Added list of namespaces to hibernate within the ResourceSupervisor resource
- Ensured that the same namespace cannot be added to two different Resource Supervisors
- Moved ResourceSupervisor into a separate pod
- Improved logs
- fix: Remove bug from tenant's common and specific metadata
- fix: Add missing field to Tenant's conversion webhook
- fix: Fix panic in ResourceSupervisor sleep functionality due to sending on closed channel
- chore: Update dependencies
"},{"location":"changelog.html#v07x","title":"v0.7.x","text":""},{"location":"changelog.html#v074","title":"v0.7.4","text":" - maintain: Automate certification of new MTO releases on RedHat's Operator Hub
"},{"location":"changelog.html#v073","title":"v0.7.3","text":" - feat: Updated Tenant CR to provide Tenant level AppProject permissions
"},{"location":"changelog.html#v072","title":"v0.7.2","text":" - feat: Add support to map secrets/configmaps from one namespace to other namespaces using TI. Secrets/configmaps will only be mapped if their namespaces belong to same Tenant
"},{"location":"changelog.html#v071","title":"v0.7.1","text":" - feat: Add option to keep AppProjects created by Multi Tenant Operator in case Tenant is deleted. By default, AppProjects get deleted
- fix: Status now updates after namespaces are created
- maintain: Changes to Helm chart's default behaviour
"},{"location":"changelog.html#v070","title":"v0.7.0","text":" - feat: Add support to map secrets/configmaps from one namespace to other namespaces using TGI. Resources can be mapped from one Tenant's namespaces to some other Tenant's namespaces
- feat: Allow creation of sandboxes that are private to the user
- feat: Allow creation of namespaces without tenant prefix from within tenant spec
- fix: Webhook changes will now be updated without manual intervention
- maintain: Updated Tenant CR version from v1beta1 to v1beta2. Conversion webhook is added to facilitate transition to new version
- see Tenant spec for updated spec
- enhance: Better automated testing
"},{"location":"changelog.html#v06x","title":"v0.6.x","text":""},{"location":"changelog.html#v061","title":"v0.6.1","text":" - fix: Update MTO service-account name in environment variable
"},{"location":"changelog.html#v060","title":"v0.6.0","text":" - feat: Add support to ArgoCD AppProjects created by Tenant Controller to have their sync disabled when relevant namespaces are hibernating
- feat: Add validation webhook for ResourceSupervisor
- fix: Delete ResourceSupervisor when hibernation is removed from tenant CR
- fix: CRQ and limit range not updating when quota changes
- fix: ArgoCD AppProjects created by Tenant Controller not updating when Tenant label is added to an existing namespace
- fix: Namespace workflow for TGI
- fix: ResourceSupervisor deletion workflow
- fix: Update RHSSO user filter for Vault integration
- fix: Update regex of namespace names in tenant CRD
- enhance: Optimize TGI and TI performance under load
- maintain: Bump Operator-SDK and Dependencies version
"},{"location":"changelog.html#v05x","title":"v0.5.x","text":""},{"location":"changelog.html#v054","title":"v0.5.4","text":" - fix: Update Helm dependency to v3.8.2
"},{"location":"changelog.html#v053","title":"v0.5.3","text":" - fix: Add support for parameters in Helm chartRepository in templates
"},{"location":"changelog.html#v052","title":"v0.5.2","text":" - fix: Add service name prefix for webhooks
"},{"location":"changelog.html#v051","title":"v0.5.1","text":" - fix: ResourceSupervisor CR no longer requires a field for the Tenant name
"},{"location":"changelog.html#v050","title":"v0.5.0","text":" - feat: Add support for tenant namespaces off-boarding. For more details check out onDelete
-
feat: Add tenant webhook for spec validation
-
fix: TemplateGroupInstance now cleans up leftover Template resources from namespaces that are no longer part of TGI namespace selector
-
fix: Fixed hibernation sync issue
-
enhance: Update tenant spec for applying common/specific namespace labels/annotations. For more details check out commonMetadata & SpecificMetadata
-
enhance: Add support for multi-pod architecture for Operator-Hub
-
chore: Remove conversion webhook for Quota and Tenant
"},{"location":"changelog.html#v04x","title":"v0.4.x","text":""},{"location":"changelog.html#v047","title":"v0.4.7","text":" - feat: Add hibernation of StatefulSets and Deployments based on a timer
- feat: New custom resource that handles hibernation
"},{"location":"changelog.html#v046","title":"v0.4.6","text":""},{"location":"changelog.html#v045","title":"v0.4.5","text":" - feat: Add support for applying labels/annotation on specific namespaces
"},{"location":"changelog.html#v044","title":"v0.4.4","text":" - fix: Update
privilegedNamespaces
regex
"},{"location":"changelog.html#v043","title":"v0.4.3","text":" - fix: IntegrationConfig will now be synced in all pods
"},{"location":"changelog.html#v042","title":"v0.4.2","text":" - feat: Added support to distribute common labels and annotations to tenant namespaces
"},{"location":"changelog.html#v041","title":"v0.4.1","text":" - fix: Update dependencies to latest version
"},{"location":"changelog.html#v040","title":"v0.4.0","text":" - feat: Controllers are now separated into individual pods
"},{"location":"changelog.html#v03x","title":"v0.3.x","text":""},{"location":"changelog.html#v0333","title":"v0.3.33","text":" - fix: Optimize namespace reconciliation
"},{"location":"changelog.html#v0333_1","title":"v0.3.33","text":" - fix: Revert v0.3.29 change till webhook network issue isn't resolved
"},{"location":"changelog.html#v0333_2","title":"v0.3.33","text":" - fix: Execute webhook and controller of matching custom resource in same pod
"},{"location":"changelog.html#v0330","title":"v0.3.30","text":" - feat: Namespace controller will now trigger TemplateGroupInstance when a new matching namespace is created
"},{"location":"changelog.html#v0329","title":"v0.3.29","text":" - feat: Controllers are now separated into individual pods
"},{"location":"changelog.html#v0328","title":"v0.3.28","text":" - fix: Enhancement of TemplateGroupInstance Namespace event listener
"},{"location":"changelog.html#v0327","title":"v0.3.27","text":" - feat: TemplateGroupInstance will create resources instantly whenever a Namespace with matching labels is created
"},{"location":"changelog.html#v0326","title":"v0.3.26","text":" - fix: Update reconciliation frequency of TemplateGroupInstance
"},{"location":"changelog.html#v0325","title":"v0.3.25","text":" - feat: TemplateGroupInstance will now directly create template resources instead of creating TemplateInstances
"},{"location":"changelog.html#migrating-from-pervious-version","title":"Migrating from pervious version","text":" - To migrate to Tenant-Operator:v0.3.25 perform the following steps
- Downscale Tenant-Operator deployment by setting the replicas count to 0
- Delete TemplateInstances created by TemplateGroupInstance (Naming convention of TemplateInstance created by TemplateGroupInstance is
group-{Template.Name}
) - Update version of Tenant-Operator to v0.3.25 and set the replicas count to 2. After Tenant-Operator pods are up TemplateGroupInstance will create the missing resources
"},{"location":"changelog.html#v0324","title":"v0.3.24","text":" - feat: Add feature to allow ArgoCD to sync specific cluster scoped custom resources, configurable via Integration Config. More details in relevant docs
"},{"location":"changelog.html#v0323","title":"v0.3.23","text":" - feat: Added concurrent reconcilers for template instance controller
"},{"location":"changelog.html#v0322","title":"v0.3.22","text":" - feat: Added validation webhook to prevent Tenant owners from creating RoleBindings with kind 'Group' or 'User'
- fix: Removed redundant logs for namespace webhook
- fix: Added missing check for users in a tenant owner's groups in namespace validation webhook
- fix: General enhancements and improvements
\u26a0\ufe0f Known Issues
caBundle
field in validation webhooks is not being populated for newly added webhooks. A temporary fix is to remove the caBundle
field from every webhook in the validating webhook configuration manifest, so OpenShift can repopulate it for all webhooks simultaneously - Edit the
ValidatingWebhookConfiguration
multi-tenant-operator-validating-webhook-configuration
by removing all the caBundle
fields of all webhooks - Save the manifest
- Verify that all
caBundle
fields have been populated - Restart Tenant-Operator pods
"},{"location":"changelog.html#v0321","title":"v0.3.21","text":" - feat: Added ClusterRole manager rules extension
"},{"location":"changelog.html#v0320","title":"v0.3.20","text":" - fix: Fixed the recreation of underlying template resources, if resources were deleted
"},{"location":"changelog.html#v0319","title":"v0.3.19","text":" - feat: Namespace webhook FailurePolicy is now set to Ignore instead of Fail
- fix: Fixed config not being updated in namespace webhook when Integration Config is updated
- fix: Fixed a crash that occurred in case of ArgoCD in Integration Config was not set during deletion of Tenant resource
\u26a0\ufe0f ApiVersion v1alpha1
of Tenant and Quota custom resources has been deprecated and is scheduled to be removed in the future. The following links contain the updated structure of both resources
- Quota v1beta1
- Tenant v1beta1
"},{"location":"changelog.html#v0318","title":"v0.3.18","text":" - fix: Add ArgoCD namespace to destination namespaces for App Projects
"},{"location":"changelog.html#v0317","title":"v0.3.17","text":" - fix: Cluster administrator's permission will now have higher precedence on privileged namespaces
"},{"location":"changelog.html#v0316","title":"v0.3.16","text":" - fix: Add groups mentioned in Tenant CR to ArgoCD App Project manifests' RBAC
"},{"location":"changelog.html#v0315","title":"v0.3.15","text":" - feat: Add validation webhook for TemplateInstance & TemplateGroupInstance to prevent their creation in case the Template they reference does not exist
"},{"location":"changelog.html#v0314","title":"v0.3.14","text":" - feat: Added Validation Webhook for Quota to prevent its deletion when a reference to it exists in any Tenant
- feat: Added Validation Webhook for Template to prevent its deletion when a reference to it exists in any Tenant, TemplateGroupInstance or TemplateInstance
- fix: Fixed a crash that occurred in case Integration Config was not found
"},{"location":"changelog.html#v0313","title":"v0.3.13","text":" - feat: Multi Tenant Operator will now consider all namespaces to be managed if any default Integration Config is not found
"},{"location":"changelog.html#v0312","title":"v0.3.12","text":" - fix: General enhancements and improvements
"},{"location":"changelog.html#v0311","title":"v0.3.11","text":" - fix: Fix Quota's conversion webhook converting the wrong LimitRange field
"},{"location":"changelog.html#v0310","title":"v0.3.10","text":" - fix: Fix Quota's LimitRange to its intended design by being an optional field
"},{"location":"changelog.html#v039","title":"v0.3.9","text":" - feat: Add ability to prevent certain resources from syncing via ArgoCD
"},{"location":"changelog.html#v038","title":"v0.3.8","text":" - feat: Add default annotation to OpenShift Projects that show description about the Project
"},{"location":"changelog.html#v037","title":"v0.3.7","text":" - fix: Fix a typo in Multi Tenant Operator's Helm release
"},{"location":"changelog.html#v036","title":"v0.3.6","text":" - fix: Fix ArgoCD's
destinationNamespaces
created by Multi Tenant Operator
"},{"location":"changelog.html#v035","title":"v0.3.5","text":" - fix: Change sandbox creation from 1 for each group to 1 for each user in a group
"},{"location":"changelog.html#v034","title":"v0.3.4","text":" - feat: Support creation of sandboxes for each group
"},{"location":"changelog.html#v033","title":"v0.3.3","text":" - feat: Add ability to create namespaces from a list of namespace prefixes listed in the Tenant CR
"},{"location":"changelog.html#v032","title":"v0.3.2","text":" - refactor: Restructure Quota CR, more details in relevant docs
- feat: Add support for adding LimitRanges in Quota
- feat: Add conversion webhook to convert existing v1alpha1 versions of quota to v1beta1
"},{"location":"changelog.html#v031","title":"v0.3.1","text":" - feat: Add ability to create ArgoCD AppProjects per tenant, more details in relevant docs
"},{"location":"changelog.html#v030","title":"v0.3.0","text":" - feat: Add support to add groups in addition to users as tenant members
"},{"location":"changelog.html#v02x","title":"v0.2.x","text":""},{"location":"changelog.html#v0233","title":"v0.2.33","text":" - refactor: Restructure Tenant spec, more details in relevant docs
- feat: Add conversion webhook to convert existing v1alpha1 versions of tenant to v1beta1
"},{"location":"changelog.html#v0232","title":"v0.2.32","text":" - refactor: Restructure integration config spec, more details in relevant docs
- feat: Allow users to input custom regex in certain fields inside of integration config, more details in relevant docs
"},{"location":"changelog.html#v0231","title":"v0.2.31","text":" - feat: Add limit range for
kube-RBAC-proxy
"},{"location":"customresources.html","title":"Custom Resources","text":"Below is the detailed explanation about Custom Resources of MTO
"},{"location":"customresources.html#1-quota","title":"1. Quota","text":"Cluster scoped resource:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: medium\nspec:\n resourcequota:\n hard:\n requests.cpu: '5'\n limits.cpu: '10'\n requests.memory: '5Gi'\n limits.memory: '10Gi'\n configmaps: \"10\"\n persistentvolumeclaims: \"4\"\n replicationcontrollers: \"20\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\n limitrange:\n limits:\n - type: \"Pod\"\n max:\n cpu: \"2\"\n memory: \"1Gi\"\n min:\n cpu: \"200m\"\n memory: \"100Mi\"\n - type: \"Container\"\n max:\n cpu: \"2\"\n memory: \"1Gi\"\n min:\n cpu: \"100m\"\n memory: \"50Mi\"\n default:\n cpu: \"300m\"\n memory: \"200Mi\"\n defaultRequest:\n cpu: \"200m\"\n memory: \"100Mi\"\n maxLimitRequestRatio:\n cpu: \"10\"\n
When several tenants share a single cluster with a fixed number of resources, there is a concern that one tenant could use more than its fair share of resources. Quota is a wrapper around OpenShift ClusterResourceQuota
and LimitRange
which enables administrators to limit resource consumption per Tenant
. For more details, see Quota.Spec and LimitRange.Spec
"},{"location":"customresources.html#2-tenant","title":"2. Tenant","text":"Cluster scoped resource:
The smallest valid Tenant definition is given below (with just one field in its spec):
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: alpha\nspec:\n quota: small\n
Here is a more detailed Tenant definition, explained below:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: alpha\nspec:\n owners: # optional\n users: # optional\n - dave@stakater.com\n groups: # optional\n - alpha\n editors: # optional\n users: # optional\n - jack@stakater.com\n viewers: # optional\n users: # optional\n - james@stakater.com\n quota: medium # required\n sandboxConfig: # optional\n enabled: true # optional\n private: true # optional\n onDelete: # optional\n cleanNamespaces: false # optional\n cleanAppProject: true # optional\n argocd: # optional\n sourceRepos: # required\n - https://github.com/stakater/gitops-config\n appProject: # optional\n clusterResourceWhitelist: # optional\n - group: tronador.stakater.com\n kind: Environment\n namespaceResourceBlacklist: # optional\n - group: \"\"\n kind: ConfigMap\n hibernation: # optional\n sleepSchedule: 23 * * * * # required\n wakeSchedule: 26 * * * * # required\n namespaces: # optional\n withTenantPrefix: # optional\n - dev\n - build\n withoutTenantPrefix: # optional\n - preview\n commonMetadata: # optional\n labels: # optional\n stakater.com/team: alpha\n annotations: # optional\n openshift.io/node-selector: node-role.kubernetes.io/infra=\n specificMetadata: # optional\n - annotations: # optional\n stakater.com/user: dave\n labels: # optional\n stakater.com/sandbox: true\n namespaces: # optional\n - alpha-dave-stakater-sandbox\n templateInstances: # optional\n - spec: # optional\n template: networkpolicy # required\n sync: true # optional\n parameters: # optional\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n selector: # optional\n matchLabels: # optional\n policy: network-restriction\n
-
Tenant has 3 kinds of Members
. Each member type should have different roles assigned to them. These roles are sourced from the IntegrationConfig's TenantRoles field. You can customize these roles to your liking, but by default the following configuration applies:
Owners:
Users who will be owners of a tenant. They will have OpenShift admin-role assigned to their users, with additional access to create namespaces as well. Editors:
Users who will be editors of a tenant. They will have OpenShift edit-role assigned to their users. Viewers:
Users who will be viewers of a tenant. They will have OpenShift view-role assigned to their users. - For more details, check out their definitions.
-
Users
can be linked to the tenant by specifying their username in owners.users
, editors.users
and viewers.users
respectively.
-
Groups
can be linked to the tenant by specifying the group name in owners.groups
, editors.groups
and viewers.groups
respectively.
-
Tenant will have a Quota
to limit resource consumption.
-
sandboxConfig
is used to configure the tenant user sandbox feature
- Setting
enabled
to true will create sandbox namespaces for owners and editors. - Sandbox will follow the following naming convention {TenantName}-{UserName}-sandbox.
- In case of groups, the sandbox namespaces will be created for each member of the group.
- Setting
private
to true will make those sandboxes be only visible to the user they belong to. By default, sandbox namespaces are visible to all tenant members
-
onDelete
is used to tell Multi Tenant Operator what to do when a Tenant is deleted.
cleanNamespaces
if the value is set to true MTO deletes all tenant namespaces when a Tenant
is deleted. Default value is false. cleanAppProject
will keep the generated ArgoCD AppProject if the value is set to false. By default, the value is true.
-
argocd
is required if you want to create an ArgoCD AppProject for the tenant.
sourceRepos
contain a list of repositories that point to your GitOps. appProject
is used to set the clusterResourceWhitelist
and namespaceResourceBlacklist
resources. If these are also applied via IntegrationConfig
then those applied via Tenant CR will have higher precedence for given Tenant.
-
hibernation
can be used to create a schedule during which the namespaces belonging to the tenant will be put to sleep. The values of the sleepSchedule
and wakeSchedule
fields must be a string in a cron format.
-
Namespaces can also be created via tenant CR by specifying names in namespaces
.
- Multi Tenant Operator will append tenant name prefix while creating namespaces if the list of namespaces is under the
withTenantPrefix
field, so the format will be {TenantName}-{Name}. - Namespaces listed under the
withoutTenantPrefix
will be created with the given name. Listing namespaces here that already exist within the cluster is not allowed. stakater.com/kind: {Name}
label will also be added to the namespaces.
-
commonMetadata
can be used to distribute common labels and annotations among tenant namespaces.
labels
distributes provided labels among all tenant namespaces annotations
distributes provided annotations among all tenant namespaces
-
specificMetadata
can be used to distribute specific labels and annotations among specific tenant namespaces.
labels
distributes given labels among specific tenant namespaces annotations
distributes given annotations among specific tenant namespaces namespaces
consists of a list of specific tenant namespaces across which the labels and annotations will be distributed
-
Tenant automatically deploys template
resource mentioned in templateInstances
to matching tenant namespaces.
Template
resources are created in those namespaces
which belong to a tenant
and contain matching labels
. Template
resources are created in all namespaces
of a tenant
if selector
field is empty.
\u26a0\ufe0f If same label or annotation key is being applied using different methods provided, then the highest precedence will be given to specificMetadata
followed by commonMetadata
and in the end would be the ones applied from openshift.project.labels
/openshift.project.annotations
in IntegrationConfig
"},{"location":"customresources.html#3-template","title":"3. Template","text":"Cluster scoped resource:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: redis\nresources:\n helm:\n releaseName: redis\n chart:\n repository:\n name: redis\n repoUrl: https://charts.bitnami.com/bitnami\n values: |\n redisPort: 6379\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: networkpolicy\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\nresources:\n manifests:\n - kind: NetworkPolicy\n apiVersion: networking.k8s.io/v1\n metadata:\n name: deny-cross-ns-traffic\n spec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: \"${{CIDR_IP}}\"\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: resource-mapping\nresources:\n resourceMappings:\n secrets:\n - name: secret-s1\n namespace: namespace-n1\n configMaps:\n - name: configmap-c1\n namespace: namespace-n2\n
Templates are used to initialize Namespaces, share common resources across namespaces, and map secrets/configmaps from one namespace to other namespaces.
- They either contain one or more Kubernetes manifests, a reference to secrets/configmaps, or a Helm chart.
- They are being tracked by TemplateInstances in each Namespace they are applied to.
- They can contain pre-defined parameters such as ${namespace}/${tenant} or user-defined ${MY_PARAMETER} that can be specified within a TemplateInstance.
You can also define custom variables in Template
and TemplateInstance
. The parameters defined in TemplateInstance
override the values defined in Template
.
Manifest Templates: The easiest option to define a Template is by specifying an array of Kubernetes manifests which should be applied when the Template is being instantiated.
Helm Chart Templates: Instead of manifests, a Template can specify a Helm chart that will be installed (using Helm template) when the Template is being instantiated.
Resource Mapping Templates: A template can be used to map secrets and configmaps from one tenant's namespace to another tenant's namespace, or within a tenant's namespace.
"},{"location":"customresources.html#mandatory-and-optional-templates","title":"Mandatory and Optional Templates","text":"Templates can either be mandatory or optional. By default, all Templates are optional. Cluster Admins can make Templates mandatory by adding them to the spec.templateInstances
array within the Tenant configuration. All Templates listed in spec.templateInstances
will always be instantiated within every Namespace
that is created for the respective Tenant.
"},{"location":"customresources.html#4-templateinstance","title":"4. TemplateInstance","text":"Namespace scoped resource:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: networkpolicy\n namespace: build\nspec:\n template: networkpolicy\n sync: true\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n
TemplateInstances are used to keep track of resources created from Templates, which are being instantiated inside a Namespace. Generally, a TemplateInstance is created from a Template and the TemplateInstance will not be updated when the Template changes later on. To change this behavior, it is possible to set spec.sync: true
in a TemplateInstance. Setting this option keeps the TemplateInstance in sync with the underlying Template (similar to a Helm upgrade).
"},{"location":"customresources.html#5-templategroupinstance","title":"5. TemplateGroupInstance","text":"Cluster scoped resource:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: namespace-parameterized-restrictions-tgi\nspec:\n template: namespace-parameterized-restrictions\n sync: true\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n
TemplateGroupInstance distributes a template across multiple namespaces which are selected by labelSelector.
"},{"location":"customresources.html#6-resourcesupervisor","title":"6. ResourceSupervisor","text":"Cluster scoped resource:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: tenant-sample\nspec:\n argocd:\n appProjects:\n - tenant-sample\n hibernation:\n sleepSchedule: 23 * * * *\n wakeSchedule: 26 * * * *\n namespaces:\n - stage\n - dev\nstatus:\n currentStatus: running\n nextReconcileTime: '2022-07-07T11:23:00Z'\n
The ResourceSupervisor
is a resource created by MTO in case the Hibernation feature is enabled. The Resource manages the sleep/wake schedule of the namespaces owned by the tenant, and manages the previous state of any sleeping application. Currently, only StatefulSets and Deployments are put to sleep. Additionally, ArgoCD AppProjects that belong to the tenant have a deny
SyncWindow added to them.
The ResourceSupervisor
can be created both via the Tenant
or manually. For more details, check some of its use cases
"},{"location":"customresources.html#namespace","title":"Namespace","text":"apiVersion: v1\nkind: Namespace\nmetadata:\n labels:\n stakater.com/tenant: blue-sky\n name: build\n
- Namespace should have label
stakater.com/tenant
which contains the name of the tenant to which it belongs. The labels and annotations specified in the operator config, ocp.labels.project
and ocp.annotations.project
are inserted in the namespace by the controller.
"},{"location":"customresources.html#notes","title":"Notes","text":" tenant.spec.users.owner
: Can only create Namespaces with the required tenant label and can delete Projects. To edit a Namespace, use GitOps/ArgoCD
"},{"location":"eula.html","title":"Multi Tenant Operator End User License Agreement","text":"Last revision date: 12 December 2022
IMPORTANT: THIS SOFTWARE END-USER LICENSE AGREEMENT (\"EULA\") IS A LEGAL AGREEMENT (\"Agreement\") BETWEEN YOU (THE CUSTOMER, EITHER AS AN INDIVIDUAL OR, IF PURCHASED OR OTHERWISE ACQUIRED BY OR FOR AN ENTITY, AS AN ENTITY) AND Stakater AB OR ITS SUBSIDUARY (\"COMPANY\"). READ IT CAREFULLY BEFORE COMPLETING THE INSTALLATION PROCESS AND USING MULTI TENANT OPERATOR (\"SOFTWARE\"). IT PROVIDES A LICENSE TO USE THE SOFTWARE AND CONTAINS WARRANTY INFORMATION AND LIABILITY DISCLAIMERS. BY INSTALLING AND USING THE SOFTWARE, YOU ARE CONFIRMING YOUR ACCEPTANCE OF THE SOFTWARE AND AGREEING TO BECOME BOUND BY THE TERMS OF THIS AGREEMENT.
In order to use the Software under this Agreement, you must receive a license key at the time of purchase, in accordance with the scope of use and other terms specified and as set forth in Section 1 of this Agreement.
"},{"location":"eula.html#1-license-grant","title":"1. License Grant","text":" -
1.1 General Use. This Agreement grants you a non-exclusive, non-transferable, limited license to the use rights for the Software, subject to the terms and conditions in this Agreement. The Software is licensed, not sold.
-
1.2 Electronic Delivery. All Software and license documentation shall be delivered by electronic means unless otherwise specified on the applicable invoice or at the time of purchase. Software shall be deemed delivered when it is made available for download for you by the Company (\"Delivery\").
"},{"location":"eula.html#2-modifications","title":"2. Modifications","text":""},{"location":"eula.html#3-restricted-uses","title":"3. Restricted Uses","text":" -
3.1 You shall not (and shall not allow any third party to):
-
(a) reverse engineer the Software or attempt to reconstruct or discover any source code, underlying ideas, algorithms, file formats or programming interfaces of the Software by any means whatsoever (except and only to the extent that applicable law prohibits or restricts reverse engineering restrictions);
-
(b) distribute, sell, sub-license, rent, lease or use the Software for time sharing, hosting, service provider or like purposes, except as expressly permitted under this Agreement;
-
(c) redistribute the Software;
-
(d) remove any product identification, proprietary, copyright or other notices contained in the Software;
-
(e) modify any part of the Software, create a derivative work of any part of the Software (except as permitted in Section 4), or incorporate the Software, except to the extent expressly authorized in writing by the Company;
-
(f) publicly disseminate performance information or analysis (including, without limitation, benchmarks) from any source relating to the Software;
-
(g) utilize any equipment, device, software, or other means designed to circumvent or remove any form of Source URL or copy protection used by the Company in connection with the Software, or use the Software together with any authorization code, Source URL, serial number, or other copy protection device not supplied by the Company;
-
(h) use the Software to develop a product which is competitive with any of the Company's product offerings;
-
(i) use unauthorized Source URLs or license key(s) or distribute or publish Source URLs or license key(s), except as may be expressly permitted by the Company in writing. If your unique license is ever published, the Company reserves the right to terminate your access without notice.
-
3.2 Under no circumstances may you use the Software as part of a product or service that provides similar functionality to the Software itself.
"},{"location":"eula.html#4-ownership","title":"4. Ownership","text":" - 4.1 Notwithstanding anything to the contrary contained herein, except for the limited license rights expressly provided herein, the Company and its suppliers have and will retain all rights, title and interest (including, without limitation, all patent, copyright, trademark, trade secret and other intellectual property rights) in and to the Software and all copies, modifications and derivative works thereof (including any changes which incorporate any of your ideas, feedback or suggestions). You acknowledge that you are obtaining only a limited license right to the Software, and that irrespective of any use of the words \"purchase\", \"sale\" or like terms hereunder no ownership rights are being conveyed to you under this Agreement or otherwise.
"},{"location":"eula.html#5-fees-and-payment","title":"5. Fees and Payment","text":" - 5.1 The Software license fees will be due and payable in full as set forth in the applicable invoice or at the time of purchase. You shall be responsible for all taxes, with-holdings, duties and levies arising from the order (excluding taxes based on the net income of the Company).
"},{"location":"eula.html#6-support-maintenance-and-services","title":"6. Support, Maintenance and Services","text":" - 6.1 Subject to the terms and conditions of this Agreement, as set forth in your invoice, and as set forth on the Stakater support page, support and maintenance services may be included with the purchase of your license subscription.
"},{"location":"eula.html#7-disclaimer-of-warranties","title":"7. Disclaimer of Warranties","text":" -
7.1 The Software is provided \"as is\", with all faults, defects and errors, and without warranty of any kind. The Company does not warrant that the Software will be free of bugs, errors, or other defects, and the Company shall have no liability of any kind for the use of or inability to use the Software, the Software content or any associated service, and you acknowledge that it is not technically practicable for the Company to do so.
-
7.2 To the maximum extent permitted by applicable law, the Company disclaims all warranties, express, implied, arising by law or otherwise, regarding the Software, the Software content and their respective performance or suitability for your intended use, including without limitation any implied warranty of merchantability, fitness for a particular purpose.
"},{"location":"eula.html#8-limitation-of-liability","title":"8. Limitation of Liability","text":" -
8.1 In no event will the Company be liable for any direct, indirect, consequential, incidental, special, exemplary, or punitive damages or liabilities whatsoever arising from or relating to the Software, the Software content or this Agreement, whether based on contract, tort (including negligence), strict liability or other theory, even if the Company has been advised of the possibility of such damages.
-
8.2 In no event will the Company's liability exceed the Software license price as indicated in the invoice. The existence of more than one claim will not enlarge or extend this limit.
"},{"location":"eula.html#9-remedies","title":"9. Remedies","text":""},{"location":"eula.html#10-acknowledgements","title":"10. Acknowledgements","text":" -
10.1 Consent to the Use of Data. You agree that the Company and its affiliates may collect and use technical information gathered as part of the product support services. The Company may use this information solely to improve products and services and will not disclose this information in a form that personally identifies individuals or organizations.
-
10.2 Government End Users. If the Software and related documentation are supplied to or purchased by or on behalf of a Government, then the Software is deemed to be \"commercial software\" as that term is used in the acquisition regulation system.
"},{"location":"eula.html#11-third-party-software","title":"11. Third Party Software","text":" -
11.1 Examples included in Software may provide links to third party libraries or code (collectively \"Third Party Software\") to implement various functions. Third Party Software does not comprise part of the Software. In some cases, access to Third Party Software may be included along with the Software delivery as a convenience for demonstration purposes. Licensee acknowledges:
-
(1) That some part of Third Party Software may require additional licensing of copyright and patents from the owners of such, and
-
(2) That distribution of any of the Software referencing or including any portion of a Third Party Software may require appropriate licensing from such third parties
"},{"location":"eula.html#12-miscellaneous","title":"12. Miscellaneous","text":" -
12.1 Entire Agreement. This Agreement sets forth our entire agreement with respect to the Software and the subject matter hereof and supersedes all prior and contemporaneous understandings and agreements whether written or oral.
-
12.2 Amendment. The Company reserves the right, in its sole discretion, to amend this Agreement from time. Amendments are managed as described in General Provisions.
-
12.3 Assignment. You may not assign this Agreement or any of its rights under this Agreement without the prior written consent of The Company and any attempted assignment without such consent shall be void.
-
12.4 Export Compliance. You agree to comply with all applicable laws and regulations, including laws, regulations, orders or other restrictions on export, re-export or redistribution of software.
-
12.5 Indemnification. You agree to defend, indemnify, and hold harmless the Company from and against any lawsuits, claims, losses, damages, fines and expenses (including attorneys' fees and costs) arising out of your use of the Software or breach of this Agreement.
-
12.6 Attorneys' Fees and Costs. The prevailing party in any action to enforce this Agreement will be entitled to recover its attorneys' fees and costs in connection with such action.
-
12.7 Severability. If any provision of this Agreement is held by a court of competent jurisdiction to be invalid, illegal, or unenforceable, the remainder of this Agreement will remain in full force and effect.
-
12.8 Waiver. Failure or neglect by either party to enforce at any time any of the provisions of this license Agreement shall not be construed or deemed to be a waiver of that party's rights under this Agreement.
-
12.9 Audit. The Company may, at its expense, appoint its own personnel or an independent third party to audit the numbers of installations of the Software in use by you. Any such audit shall be conducted upon thirty (30) days prior notice, during regular business hours and shall not unreasonably interfere with your business activities.
-
12.10 Headings. The headings of sections and paragraphs of this Agreement are for convenience of reference only and are not intended to restrict, affect or be of any weight in the interpretation or construction of the provisions of such sections or paragraphs.
"},{"location":"eula.html#13-contact-information","title":"13. Contact Information","text":" - 13.1 If you have any questions about this EULA, or if you want to contact the Company for any reason, please direct correspondence to
sales@stakater.com
.
"},{"location":"faq.html","title":"FAQs","text":""},{"location":"faq.html#namespace-admission-webhook","title":"Namespace Admission Webhook","text":""},{"location":"faq.html#q-error-received-while-performing-create-update-or-delete-action-on-namespace","title":"Q. Error received while performing Create, Update or Delete action on Namespace","text":"Cannot CREATE namespace test-john without label stakater.com/tenant\n
Answer. Error occurs when a user is trying to perform create, update, delete action on a namespace without the required stakater.com/tenant
label. This label is used by the operator to verify that only authorized users can perform actions on the namespace. Just add the label with the tenant name so that MTO knows which tenant the namespace belongs to, and who is authorized to perform create/update/delete operations. For more details, please refer to Namespace use-case.
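For example, assuming a tenant named alpha already exists, a namespace manifest that passes the webhook would look like the following (the tenant name is illustrative):
apiVersion: v1\nkind: Namespace\nmetadata:\n  name: test-john\n  labels:\n    stakater.com/tenant: alpha   # tenant name is illustrative\n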
"},{"location":"faq.html#q-error-received-while-performing-create-update-or-delete-action-on-openshift-project","title":"Q. Error received while performing Create, Update or Delete action on OpenShift Project","text":"Cannot CREATE namespace testing without label stakater.com/tenant. User: system:serviceaccount:openshift-apiserver:openshift-apiserver-sa\n
Answer. This error occurs because Tenant members are not allowed to perform operations on OpenShift Projects directly; whenever an operation is done on a project, openshift-apiserver-sa
tries to do the same request onto a namespace. That's why the user sees openshift-apiserver-sa
Service Account instead of its own user in the error message.
The fix is to try the same operation on the namespace manifest instead.
"},{"location":"faq.html#q-error-received-while-doing-kubectl-apply-f-namespaceyaml","title":"Q. Error received while doing \"kubectl apply -f namespace.yaml\"","text":"Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"/v1, Resource=namespaces\", GroupVersionKind: \"/v1, Kind=Namespace\"\nName: \"ns1\", Namespace: \"\"\nfrom server for: \"namespace.yaml\": namespaces \"ns1\" is forbidden: User \"muneeb\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"ns1\"\n
Answer. Tenant members will not be able to use kubectl apply
because apply
first gets all the instances of that resource, in this case namespaces, and then does the required operation on the selected resource. To maintain tenancy, tenant members do not have access to get or list all the namespaces.
The fix is to create namespaces with kubectl create
instead.
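For example, once namespace.yaml carries the required stakater.com/tenant label, creating the namespace directly avoids the list call that apply performs:
kubectl create -f namespace.yaml\n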
"},{"location":"faq.html#mto-argocd-integration","title":"MTO - ArgoCD Integration","text":""},{"location":"faq.html#q-how-do-i-deploy-cluster-scoped-resource-via-the-argocd-integration","title":"Q. How do I deploy cluster-scoped resource via the ArgoCD integration?","text":"Answer. Multi-Tenant Operator's ArgoCD Integration allows configuration of which cluster-scoped resources can be deployed, both globally and on a per-tenant basis. For a global allow-list that applies to all tenants, you can add both resource group
and kind
to the IntegrationConfig's spec.argocd.clusterResourceWhitelist
field. Alternatively, you can set this up on a tenant level by configuring the same details within a Tenant's spec.argocd.appProject.clusterResourceWhitelist
field. For more details, check out the ArgoCD integration use cases
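As a sketch, a global allow-list inside the IntegrationConfig could look like the fragment below; the group and kind are taken from the Tenant example elsewhere in these docs and stand in for whatever cluster-scoped resources you actually need, and surrounding IntegrationConfig fields are omitted.
# fragment of an IntegrationConfig spec; values are placeholders\nargocd:\n  clusterResourceWhitelist:\n    - group: tronador.stakater.com\n      kind: Environment\n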
"},{"location":"faq.html#q-invalidspecerror-application-repo-repo-is-not-permitted-in-project-project","title":"Q. InvalidSpecError: application repo \\<repo> is not permitted in project \\<project>","text":"Answer. The above error can occur if the ArgoCD Application is syncing from a source that is not allowed the referenced AppProject. To solve this, verify that you have referred to the correct project in the given ArgoCD Application, and that the repoURL used for the Application's source is valid. If the error still appears, you can add the URL to the relevant Tenant's spec.argocd.sourceRepos
array.
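For instance, on a Tenant named alpha the repository could be allowed like this (a fragment only; the repository URL is the example one used elsewhere in these docs):
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: alpha\nspec:\n  quota: medium\n  argocd:\n    sourceRepos:\n      - https://github.com/stakater/gitops-config\n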
"},{"location":"faq.html#q-why-are-there-mto-showback-pods-failing-in-my-cluster","title":"Q. Why are there mto-showback-*
pods failing in my cluster?","text":"Answer. The mto-showback-*
pods are used to calculate the cost of the resources used by each tenant. These pods are created by the Multi-Tenant Operator and are scheduled to run every 10 minutes. If the pods are failing, it is likely that the operators necessary for cost calculation are not present in the cluster. To solve this, you can navigate to Operators
-> Installed Operators
in the OpenShift console and check if the MTO-OpenCost and MTO-Prometheus operators are installed. If they are in a pending state, you can manually approve them to install them in the cluster.
"},{"location":"features.html","title":"Features","text":"The major features of Multi Tenant Operator (MTO) are described below.
"},{"location":"features.html#kubernetes-multitenancy","title":"Kubernetes Multitenancy","text":"RBAC is one of the most complicated and error-prone parts of Kubernetes. With Multi Tenant Operator, you can rest assured that RBAC is configured with the \"least privilege\" mindset and all rules are kept up-to-date with zero manual effort.
Multi Tenant Operator binds existing ClusterRoles to the Tenant's Namespaces used for managing access to the Namespaces and the resources they contain. You can also modify the default roles or create new roles to have full control and customize access control for your users and teams.
Multi Tenant Operator is also able to leverage existing OpenShift groups or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system.
"},{"location":"features.html#hashicorp-vault-multitenancy","title":"HashiCorp Vault Multitenancy","text":"Multi Tenant Operator extends the tenants permission model to HashiCorp Vault where it can create Vault paths and greatly ease the overhead of managing RBAC in Vault. Tenant users can manage their own secrets without the concern of someone else having access to their Vault paths.
More details on Vault Multitenancy
"},{"location":"features.html#argocd-multitenancy","title":"ArgoCD Multitenancy","text":"Multi Tenant Operator is not only providing strong Multi Tenancy for the OpenShift internals but also extends the tenants permission model to ArgoCD were it can provision AppProjects and Allowed Repositories for your tenants greatly ease the overhead of managing RBAC in ArgoCD.
More details on ArgoCD Multitenancy
"},{"location":"features.html#mattermost-multitenancy","title":"Mattermost Multitenancy","text":"Multi Tenant Operator can manage Mattermost to create Teams for tenant users. All tenant users get a unique team and a list of predefined channels gets created. When a user is removed from the tenant, the user is also removed from the Mattermost team corresponding to tenant.
More details on Mattermost
"},{"location":"features.html#costresource-optimization","title":"Cost/Resource Optimization","text":"Multi Tenant Operator provides a mechanism for defining Resource Quotas at the tenant scope, meaning all namespaces belonging to a particular tenant share the defined quota, which is why you are able to safely enable dev teams to self serve their namespaces whilst being confident that they can only use the resources allocated based on budget and business needs.
More details on Quota
"},{"location":"features.html#remote-development-namespaces","title":"Remote Development Namespaces","text":"Multi Tenant Operator can be configured to automatically provision a namespace in the cluster for every member of the specific tenant, that will also be preloaded with any selected templates and consume the same pool of resources from the tenants quota creating safe remote dev namespaces that teams can use as scratch namespace for rapid prototyping and development. So, every developer gets a Kubernetes-based cloud development environment that feel like working on localhost.
More details on Sandboxes
"},{"location":"features.html#templates-and-template-distribution","title":"Templates and Template distribution","text":"Multi Tenant Operator allows admins/users to define templates for namespaces, so that others can instantiate these templates to provision namespaces with batteries loaded. A template could pre-populate a namespace for certain use cases or with basic tooling required. Templates allow you to define Kubernetes manifests, Helm chart and more to be applied when the template is used to create a namespace.
It also allows the parameterizing of these templates for flexibility and ease of use. It also provides the option to enforce the presence of templates in one tenant's or all the tenants' namespaces for configuring secure defaults.
Common use cases for namespace templates may be:
- Adding networking policies for multitenancy
- Adding development tooling to a namespace
- Deploying pre-populated databases with test data
- Injecting new namespaces with optional credentials such as image pull secrets
More details on Distributing Template Resources
"},{"location":"features.html#hibernation","title":"Hibernation","text":"Multi Tenant Operator can downscale Deployments and StatefulSets in a tenant's Namespace according to a defined sleep schedule. The Deployments and StatefulSets are brought back to their required replicas according to the provided wake schedule.
More details on Hibernation
"},{"location":"features.html#cross-namespace-resource-distribution","title":"Cross Namespace Resource Distribution","text":"Multi Tenant Operator supports cloning of secrets and configmaps from one namespace to another namespace based on label selectors. It uses templates to enable users to provide reference to secrets and configmaps. It uses a template group instance to distribute those secrets and namespaces in matching namespaces, even if namespaces belong to different tenants. If template instance is used then the resources will only be mapped if namespaces belong to same tenant.
More details on Distributing Secrets and ConfigMaps
"},{"location":"features.html#self-service","title":"Self-Service","text":"With Multi Tenant Operator, you can empower your users to safely provision namespaces for themselves and their teams (typically mapped to SSO groups). Team-owned namespaces and the resources inside them count towards the team's quotas rather than the user's individual limits and are automatically shared with all team members according to the access rules you configure in Multi Tenant Operator.
Also, by leveraging Multi Tenant Operator's templating mechanism, namespaces can be provisioned and automatically pre-populated with any kind of resource or multiple resources, such as network policies, Docker pull secrets, or even Helm charts.
"},{"location":"features.html#everything-as-codegitops-ready","title":"Everything as Code/GitOps Ready","text":"Multi Tenant Operator is designed and built to be 100% OpenShift-native and to be configured and managed the same familiar way as native OpenShift resources so is perfect for modern shops that are dedicated to GitOps as it is fully configurable using Custom Resources.
"},{"location":"features.html#preventing-clusters-sprawl","title":"Preventing Clusters Sprawl","text":"As companies look to further harness the power of cloud-native, they are adopting container technologies at rapid speed, increasing the number of clusters and workloads. As the number of Kubernetes clusters grows, this is an increasing work for the Ops team. When it comes to patching security issues or upgrading clusters, teams are doing five times the amount of work.
With Multi Tenant Operator, a single cluster can be shared by multiple teams, groups of users, or departments, saving operational and management effort and preventing Kubernetes cluster sprawl.
"},{"location":"features.html#native-experience","title":"Native Experience","text":"Multi Tenant Operator provides multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries.
"},{"location":"features.html#custom-metrics-support","title":"Custom Metrics Support","text":"Multi Tenant Operator now supports custom metrics for templates, template instances and template group instances.
Exposed metrics include the number of resources deployed, the number of resources that failed, and the total number of resources deployed for template instances and template group instances. These metrics can be used to monitor the usage of templates and template instances in the cluster.
Additionally, this allows us to expose other performance metrics listed here.
More details on Enabling Custom Metrics
"},{"location":"features.html#graph-visualization-for-tenants","title":"Graph Visualization for Tenants","text":"Multi Tenant Operator now supports graph visualization for tenants on the MTO Console. Effortlessly associate tenants with their respective resources using the enhanced graph feature on the MTO Console. This dynamic graph illustrates the relationships between tenants and the resources they create, encompassing both MTO's proprietary resources and native Kubernetes/OpenShift elements.
More details on Graph Visualization
"},{"location":"hibernation.html","title":"Hibernating Namespaces","text":"You can manage workloads in your cluster with MTO by implementing a hibernation schedule for your tenants. Hibernation downsizes the running Deployments and StatefulSets in a tenant\u2019s namespace according to a defined cron schedule. You can set a hibernation schedule for your tenants by adding the \u2018spec.hibernation\u2019 field to the tenant's respective Custom Resource.
hibernation:\n sleepSchedule: 23 * * * *\n wakeSchedule: 26 * * * *\n
spec.hibernation.sleepSchedule
accepts a cron expression indicating the time to put the workloads in your tenant\u2019s namespaces to sleep.
spec.hibernation.wakeSchedule
accepts a cron expression indicating the time to wake the workloads in your tenant\u2019s namespaces up.
Note
Both sleep and wake schedules must be specified for your Hibernation schedule to be valid.
Additionally, adding the hibernation.stakater.com/exclude: 'true'
annotation to a namespace excludes it from hibernating.
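For example, a namespace carrying the exclusion annotation could look like the following (the namespace name is illustrative):
apiVersion: v1\nkind: Namespace\nmetadata:\n  name: bluesky-scratch\n  annotations:\n    hibernation.stakater.com/exclude: 'true'\n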
Note
This is only true for hibernation applied via the Tenant Custom Resource, and does not apply to hibernation done by manually creating a ResourceSupervisor (details on that below).
Note
This will not wake up an already sleeping namespace before the wake schedule.
"},{"location":"hibernation.html#resource-supervisor","title":"Resource Supervisor","text":"Adding a Hibernation Schedule to a Tenant creates an accompanying ResourceSupervisor Custom Resource. The Resource Supervisor stores the Hibernation schedules and manages the current and previous states of all the applications, whether they are sleeping or awake.
When the sleep timer is activated, the Resource Supervisor controller stores the details of your applications (including the number of replicas, configurations, etc.) in the applications' namespaces and then puts your applications to sleep. When the wake timer is activated, the controller wakes up the applications using their stored details.
Enabling ArgoCD support for Tenants will also hibernate applications in the tenants' appProjects
.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: sigma\nspec:\n argocd:\n appProjects:\n - sigma\n namespace: openshift-gitops\n hibernation:\n sleepSchedule: 42 * * * *\n wakeSchedule: 45 * * * *\n namespaces:\n - tenant-ns1\n - tenant-ns2\n
Currently, Hibernation is available only for StatefulSets and Deployments.
"},{"location":"hibernation.html#manual-creation-of-resourcesupervisor","title":"Manual creation of ResourceSupervisor","text":"Hibernation can also be applied by creating a ResourceSupervisor resource manually. The ResourceSupervisor definition will contain the hibernation cron schedule, the names of the namespaces to be hibernated, and the names of the ArgoCD AppProjects whose ArgoCD Applications have to be hibernated (as per the given schedule).
This method can be used to hibernate:
- Some specific namespaces and AppProjects in a tenant
- A set of namespaces and AppProjects belonging to different tenants
- Namespaces and AppProjects belonging to a tenant that the cluster admin is not a member of
- Non-tenant namespaces and ArgoCD AppProjects
As an example, the following ResourceSupervisor could be created manually, to apply hibernation explicitly to the 'ns1' and 'ns2' namespaces, and to the 'sample-app-project' AppProject.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: hibernator\nspec:\n argocd:\n appProjects:\n - sample-app-project\n namespace: openshift-gitops\n hibernation:\n sleepSchedule: 42 * * * *\n wakeSchedule: 45 * * * *\n namespaces:\n - ns1\n - ns2\n
"},{"location":"installation.html","title":"Installation","text":"This document contains instructions on installing, uninstalling and configuring Multi Tenant Operator using OpenShift MarketPlace.
-
OpenShift OperatorHub UI
-
CLI/GitOps
-
Uninstall
"},{"location":"installation.html#requirements","title":"Requirements","text":" - An OpenShift cluster [v4.7 - v4.12]
"},{"location":"installation.html#installing-via-operatorhub-ui","title":"Installing via OperatorHub UI","text":" - After opening OpenShift console click on
Operators
, followed by OperatorHub
from the side menu
- Now search for
Multi Tenant Operator
and then click on Multi Tenant Operator
tile
- Click on the
install
button
- Select
Updated channel
. Select multi-tenant-operator
to install the operator in multi-tenant-operator
namespace from Installed Namespace
dropdown menu. After configuring Update approval
click on the install
button.
Note: Use stable
channel for seamless upgrades. For Production Environment
prefer Manual
approval and use Automatic
for Development Environment
- Wait for the operator to be installed
- Once successfully installed, MTO will be ready to enforce multi-tenancy in your cluster
Note: MTO will be installed in multi-tenant-operator
namespace.
"},{"location":"installation.html#configuring-integrationconfig","title":"Configuring IntegrationConfig","text":"IntegrationConfig is required to configure the settings of multi-tenancy for MTO.
- We recommend using the following IntegrationConfig as a starting point
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedNamespaces:\n - default\n - ^openshift-*\n - ^kube-*\n - ^redhat-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:default-*\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n - ^system:serviceaccount:redhat-*\n
For more details and configurations check out IntegrationConfig.
"},{"location":"installation.html#installing-via-cli-or-gitops","title":"Installing via CLI OR GitOps","text":" - Create namespace
multi-tenant-operator
oc create namespace multi-tenant-operator\nnamespace/multi-tenant-operator created\n
- Create an OperatorGroup YAML for MTO and apply it in
multi-tenant-operator
namespace.
oc create -f - << EOF\napiVersion: operators.coreos.com/v1\nkind: OperatorGroup\nmetadata:\n name: tenant-operator\n namespace: multi-tenant-operator\nEOF\noperatorgroup.operators.coreos.com/tenant-operator created\n
- Create a subscription YAML for MTO and apply it in
multi-tenant-operator
namespace. To enable console set .spec.config.env[].ENABLE_CONSOLE
to true
. This will create a route resource, which can be used to access the Multi-Tenant-Operator console.
oc create -f - << EOF\napiVersion: operators.coreos.com/v1alpha1\nkind: Subscription\nmetadata:\n name: tenant-operator\n namespace: multi-tenant-operator\nspec:\n channel: stable\n installPlanApproval: Automatic\n name: tenant-operator\n source: certified-operators\n sourceNamespace: openshift-marketplace\n startingCSV: tenant-operator.v0.9.1\n config:\n env:\n - name: ENABLE_CONSOLE\n value: 'true'\nEOF\nsubscription.operators.coreos.com/tenant-operator created\n
Note: To manage MTO via GitOps, add the above files to your GitOps repository.
- After creating the
subscription
custom resource open OpenShift console and click on Operators
, followed by Installed Operators
from the side menu
- Wait for the installation to complete
- Once the installation is complete click on
Workloads
, followed by Pods
from the side menu and select multi-tenant-operator
project
- Once pods are up and running, MTO will be ready to enforce multi-tenancy in your cluster
"},{"location":"installation.html#configuring-integrationconfig_1","title":"Configuring IntegrationConfig","text":"IntegrationConfig is required to configure the settings of multi-tenancy for MTO.
- We recommend using the following IntegrationConfig as a starting point:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedNamespaces:\n - default\n - ^openshift-*\n - ^kube-*\n - ^redhat-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:default-*\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n - ^system:serviceaccount:redhat-*\n
For more details and configurations check out IntegrationConfig.
"},{"location":"installation.html#uninstall-via-operatorhub-ui","title":"Uninstall via OperatorHub UI","text":"You can uninstall MTO by following these steps:
-
Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set spec.onDelete.cleanNamespaces
to false
for all those tenants whose namespaces you want to retain, and spec.onDelete.cleanAppProject
to false
for all those tenants whose AppProject you want to retain (a patch sketch follows these steps). For more details check out onDelete
-
After making the required changes open OpenShift console and click on Operators
, followed by Installed Operators
from the side menu
- Now click on uninstall and confirm uninstall.
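For step 1 above, one way to flip those fields is a merge patch against each Tenant CR (a sketch; the tenant name bluesky is illustrative):
kubectl patch tenant bluesky --type=merge -p '{\"spec\": {\"onDelete\": {\"cleanNamespaces\": false, \"cleanAppProject\": false}}}'\n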
"},{"location":"installation.html#notes","title":"Notes","text":" - For more details on how to use MTO please refer use-cases.
- For more details on how to extend your MTO manager ClusterRole please refer extend-admin-clusterrole.
"},{"location":"integration-config.html","title":"Integration Config","text":"IntegrationConfig is used to configure settings of multi-tenancy for Multi Tenant Operator.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - admin\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n - viewer\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n editor:\n clusterRoles:\n - custom-editor\n viewer:\n clusterRoles:\n - custom-viewer\n - custom-view\n openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n group:\n labels:\n role: customer-reader\n sandbox:\n labels:\n stakater.com/kind: sandbox\n clusterAdminGroups:\n - cluster-admins\n privilegedNamespaces:\n - ^default$\n - ^openshift-*\n - ^kube-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n groups:\n - cluster-admins\n argocd:\n namespace: openshift-operators\n namespaceResourceBlacklist:\n - group: '' # all groups\n kind: ResourceQuota\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n rhsso:\n enabled: true\n realm: customer\n endpoint:\n url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n secretReference:\n name: auth-secrets\n namespace: openshift-auth\n vault:\n enabled: true\n endpoint:\n url: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n secretReference:\n name: vault-root-token\n namespace: vault\n sso:\n clientName: vault\n accessorID: <ACCESSOR_ID_TOKEN>\n
Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator.
"},{"location":"integration-config.html#tenantroles","title":"TenantRoles","text":"TenantRoles are required within the IntegrationConfig, as they are used for defining what roles will be applied to each Tenant namespace. The field allows optional custom roles, that are then used to create RoleBindings for namespaces that match a labelSelector.
\u26a0\ufe0f If you do not configure roles in any way, then the default OpenShift roles of owner
, edit
, and view
will apply to Tenant members. Their details can be found here
tenantRoles:\n default:\n owner:\n clusterRoles:\n - admin\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n - viewer\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n editor:\n clusterRoles:\n - custom-editor\n viewer:\n clusterRoles:\n - custom-viewer\n - custom-view\n
"},{"location":"integration-config.html#default","title":"Default","text":"This field contains roles that will be used to create default roleBindings for each namespace that belongs to tenants. These roleBindings are only created for a namespace if that namespaces isn't already matched by the custom
field below it. Therefore, it is required to have at least one role mentioned within each of its three subfields: owner
, editor
, and viewer
. These 3 subfields also correspond to the member fields of the Tenant CR
"},{"location":"integration-config.html#custom","title":"Custom","text":"An array of custom roles. Similar to the default
field, you can mention roles within this field as well. However, the custom roles also require the use of a labelSelector
for each iteration within the array. The roles mentioned here will only apply to the namespaces that are matched by the labelSelector. If a namespace is matched by 2 different labelSelectors, then both roles will apply to it. Additionally, roles can be skipped within the labelSelector. These missing roles are then inherited from the default
roles field. For example, if the following custom roles arrangement is used:
custom:\n- labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n
Then the editor
and viewer
roles will be taken from the default
roles field, as that is required to have at least one role mentioned.
"},{"location":"integration-config.html#openshift","title":"OpenShift","text":"openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n group:\n labels:\n role: customer-reader\n sandbox:\n labels:\n stakater.com/kind: sandbox\n clusterAdminGroups:\n - cluster-admins\n privilegedNamespaces:\n - ^default$\n - ^openshift-*\n - ^kube-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n groups:\n - cluster-admins\n
"},{"location":"integration-config.html#project-group-and-sandbox","title":"Project, group and sandbox","text":"We can use the openshift.project
, openshift.group
and openshift.sandbox
fields to automatically add labels
and annotations
to the Projects and Groups managed via MTO.
openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n group:\n labels:\n role: customer-reader\n sandbox:\n labels:\n stakater.com/kind: sandbox\n
If we want to add default labels/annotations to sandbox namespaces of tenants, then we simply add them in openshift.project.labels
/openshift.project.annotations
respectively.
Whenever a project is made it will have the labels and annotations as mentioned above.
kind: Project\napiVersion: project.openshift.io/v1\nmetadata:\n name: bluesky-build\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n labels:\n workload-monitoring: 'true'\n stakater.com/tenant: bluesky\nspec:\n finalizers:\n - kubernetes\nstatus:\n phase: Active\n
kind: Group\napiVersion: user.openshift.io/v1\nmetadata:\n name: bluesky-owner-group\n labels:\n role: customer-reader\nusers:\n - andrew@stakater.com\n
"},{"location":"integration-config.html#cluster-admin-groups","title":"Cluster Admin Groups","text":"clusterAdminGroups:
Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way. MTO does not interfere even with the deletion of privilegedNamespaces.
Note
User kube:admin
is bypassed by default to perform operations as a cluster admin; this includes operations on all the namespaces.
"},{"location":"integration-config.html#privileged-namespaces","title":"Privileged Namespaces","text":"privilegedNamespaces:
Contains the list of namespaces
ignored by MTO. MTO will not manage the namespaces
in this list. Privileged namespaces are not put through the further integrations or finalizer processing that normal namespaces go through. Values in this list are regex patterns. For example:
- To ignore the
default
namespace, we can specify ^default$
- To ignore all namespaces starting with the
openshift-
prefix, we can specify ^openshift-*
. - To ignore any namespace containing
stakater
in its name, we can specify stakater
. (A constant word given as a regex pattern will match any namespace containing that word.)
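Taken together, those example patterns would appear in the IntegrationConfig roughly as follows (a sketch; tune the regex patterns to your own cluster):
openshift:\n  privilegedNamespaces:\n    - ^default$\n    - ^openshift-*\n    - stakater\n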
"},{"location":"integration-config.html#privileged-serviceaccounts","title":"Privileged ServiceAccounts","text":"privilegedServiceAccounts:
Contains the list of ServiceAccounts
ignored by MTO. MTO will not manage the ServiceAccounts
in this list. Values in this list are regex patterns. For example, to ignore all ServiceAccounts
starting with the system:serviceaccount:openshift-
prefix, we can use ^system:serviceaccount:openshift-*
; and to ignore the system:serviceaccount:builder
service account we can use ^system:serviceaccount:builder$.
"},{"location":"integration-config.html#namespace-access-policy","title":"Namespace Access Policy","text":"namespaceAccessPolicy.Deny:
Can be used to restrict privileged users/groups CRUD operation over managed namespaces.
namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n groups:\n - cluster-admins\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n
\u26a0\ufe0f If you want to use a more complex regex pattern (for the openshift.privilegedNamespaces
or openshift.privilegedServiceAccounts
field), it is recommended that you test the regex pattern first - either locally or using a platform such as https://regex101.com/.
"},{"location":"integration-config.html#argocd","title":"ArgoCD","text":""},{"location":"integration-config.html#namespace","title":"Namespace","text":"argocd.namespace
is an optional field used to specify the namespace where ArgoCD Applications and AppProjects are deployed. The field should be populated when you want to create an ArgoCD AppProject for each tenant.
"},{"location":"integration-config.html#namespaceresourceblacklist","title":"NamespaceResourceBlacklist","text":"argocd:\n namespaceResourceBlacklist:\n - group: '' # all resource groups\n kind: ResourceQuota\n - group: ''\n kind: LimitRange\n - group: ''\n kind: NetworkPolicy\n
argocd.namespaceResourceBlacklist
prevents ArgoCD from syncing the listed resources from your GitOps repo.
"},{"location":"integration-config.html#clusterresourcewhitelist","title":"ClusterResourceWhitelist","text":"argocd:\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n
argocd.clusterResourceWhitelist
allows ArgoCD to sync the listed cluster scoped resources from your GitOps repo.
"},{"location":"integration-config.html#rhsso-red-hat-single-sign-on","title":"RHSSO (Red Hat Single Sign-On)","text":"Red Hat Single Sign-On RHSSO is based on the Keycloak project and enables you to secure your web applications by providing Web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0.
If RHSSO
is configured on a cluster, then RHSSO configuration can be enabled.
rhsso:\n enabled: true\n realm: customer\n endpoint:\n secretReference:\n name: auth-secrets\n namespace: openshift-auth\n url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n
If enabled, then admins have to provide the secret and URL of RHSSO.
secretReference.name:
Will contain the name of the secret. secretReference.namespace:
Will contain the namespace of the secret. realm:
Will contain the realm name which is configured for users. url:
Will contain the URL of RHSSO.
"},{"location":"integration-config.html#vault","title":"Vault","text":"Vault is used to secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.
If vault
is configured on a cluster, then Vault configuration can be enabled.
Vault:\n enabled: true\n endpoint:\n secretReference:\n name: vault-root-token\n namespace: vault\n url: >-\n https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n sso:\n accessorID: <ACCESSOR_ID_TOKEN>\n clientName: vault\n
If enabled, then admins have to provide the secret, URL and SSO accessorID of Vault.
secretReference.name:
Will contain the name of the secret. secretReference.namespace:
Will contain the namespace of the secret. url:
Will contain the URL of Vault. sso.accessorID:
Will contain the SSO accessorID. sso.clientName:
Will contain the client name.
For more details, please refer to use-cases
"},{"location":"tenant-roles.html","title":"Tenant Member Roles","text":"After adding support for custom roles within MTO, this page is only applicable if you use OpenShift and its default owner
, edit
, and view
roles. For more details, see the IntegrationConfig spec
MTO tenant members can have one of following 3 roles:
- Owner
- Editor
- Viewer
"},{"location":"tenant-roles.html#1-owner","title":"1. Owner","text":" fig 2. Shows how tenant owners manage their tenant using MTO
Owner is an admin of a tenant with some restrictions. It has privilege to see all resources in their Tenant with some additional privileges. They can also create new namespaces
.
Owners will also inherit roles from Edit
and View
.
"},{"location":"tenant-roles.html#access-permissions","title":"Access Permissions","text":" - Role and RoleBinding access in
Project
: - delete
- create
- list
- get
- update
- patch
"},{"location":"tenant-roles.html#quotas-permissions","title":"Quotas Permissions","text":""},{"location":"tenant-roles.html#resources-permissions","title":"Resources Permissions","text":" - CRUD access on Template, TemplateInstance and TemplateGroupInstance of MTO custom resources
- CRUD access on ImageStreamTags in
Project
- Get access on CustomResourceDefinitions in
Project
- Get, list, watch access on Builds, BuildConfigs in
Project
- CRUD access on following resources in
Project
: - Prometheuses
- Prometheusrules
- ServiceMonitors
- PodMonitors
- ThanosRulers
- Permission to create Namespaces.
- Restricted to perform actions on cluster resource Quotas and Limits.
"},{"location":"tenant-roles.html#2-editor","title":"2. Editor","text":" fig 3. Shows editors role in a tenant using MTO
Edit role will have edit access on their Projects
, but they wont have access on Roles
or RoleBindings
.
Editors will also inherit View
role.
"},{"location":"tenant-roles.html#access-permissions_1","title":"Access Permissions","text":" - ServiceAccount access in
Project
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- impersonate
"},{"location":"tenant-roles.html#quotas-permissions_1","title":"Quotas Permissions","text":" - AppliedClusterResourceQuotas and ResourceQuotaUsages access in
Project
"},{"location":"tenant-roles.html#builds-pods-pvc-permissions","title":"Builds ,Pods , PVC Permissions","text":" - Pod, PodDisruptionBudgets and PVC access in
Project
- get
- list
- watch
- create
- delete
- deletecollection
- patch
- update
- Build, BuildConfig, BuildLog, DeploymentConfig, Deployment, ConfigMap, ImageStream, ImageStreamImage and ImageStreamMapping access in
Project
- get
- list
- watch
- create
- delete
- deletecollection
- patch
- update
"},{"location":"tenant-roles.html#resources-permissions_1","title":"Resources Permissions","text":" - CRUD access on Template, TemplateInstance and TemplateGroupInstance of MTO custom resources
- Job, CronJob, Task, Trigger and Pipeline access in
Project
- get
- list
- watch
- create
- delete
- deletecollection
- patch
- update
- Get access on projects
- Route and NetworkPolicies access in
Project
- get
- list
- watch
- create
- delete
- deletecollection
- patch
- update
- Template, ReplicaSet, StatefulSet and DaemonSet access in
Project
- get
- list
- watch
- create
- delete
- deletecollection
- patch
- update
- CRUD access on all Projects related to
- Elasticsearch
- Logging
- Kibana
- Istio
- Jaeger
- Kiali
- Tekton.dev
- Get access on CustomResourceDefinitions in
Project
- Edit and view permission on
jenkins.build.openshift.io
- InstallPlan access in
Project
- Subscription and PackageManifest access in
Project
- get
- list
- watch
- create
- delete
- deletecollection
- patch
- update
"},{"location":"tenant-roles.html#3-viewer","title":"3. Viewer","text":" fig 4. Shows viewers role in a tenant using MTO
Viewer role will only have view access on their Project
.
"},{"location":"tenant-roles.html#access-permissions_2","title":"Access Permissions","text":" - ServiceAccount access in
Project
"},{"location":"tenant-roles.html#quotas-permissions_2","title":"Quotas Permissions","text":" - AppliedClusterResourceQuotas access in
Project
"},{"location":"tenant-roles.html#builds-pods-pvc-permissions_1","title":"Builds ,Pods , PVC Permissions","text":" - Pod, PodDisruptionBudget and PVC access in
Project
- Build, BuildConfig, BuildLog, DeploymentConfig, ConfigMap, ImageStream, ImageStreamImage and ImageStreamMapping access in
Project
"},{"location":"tenant-roles.html#resources-permissions_2","title":"Resources Permissions","text":" - Get, list, view access on Template, TemplateInstance and TemplateGroupInstance of MTO custom resources
- Job, CronJob, Task, Trigger and Pipeline access in
Project
- Get access on projects
- Routes, NetworkPolicies and Daemonset access in
Project
- Template, ReplicaSet, StatefulSet and Daemonset in
Project
- Get,list,watch access on all projects related to
- Elasticsearch
- Logging
- Kibana
- Istio
- Jaeger
- Kiali
- Tekton.dev
- Get, list, watch access on ImageStream, ImageStreamImage and ImageStreamMapping in
Project
- Get access on CustomResourceDefinition in
Project
- View permission on
Jenkins.Build.Openshift.io
- Subscription, PackageManifest and InstallPlan access in
Project
"},{"location":"troubleshooting.html","title":"Troubleshooting Guide","text":""},{"location":"troubleshooting.html#operatorhub-upgrade-error","title":"OperatorHub Upgrade Error","text":""},{"location":"troubleshooting.html#operator-is-stuck-in-upgrade-if-upgrade-approval-is-set-to-automatic","title":"Operator is stuck in upgrade if upgrade approval is set to Automatic","text":""},{"location":"troubleshooting.html#problem","title":"Problem","text":"If operator upgrade is set to Automatic Approval on OperatorHub, there may be scenarios where it gets blocked.
"},{"location":"troubleshooting.html#resolution","title":"Resolution","text":"Information
If upgrade approval is set to manual, and you want to skip upgrade of a specific version, then delete the InstallPlan created for that specific version. Operator Lifecycle Manager (OLM) will create the latest available InstallPlan which can be approved then.\n
As OLM does not allow to upgrade or downgrade from a version stuck because of error, the only possible fix is to uninstall the operator from the cluster. When the operator is uninstalled it removes all of its resources i.e., ClusterRoles, ClusterRoleBindings, and Deployments etc., except Custom Resource Definitions (CRDs), so none of the Custom Resources (CRs), Tenants, Templates etc., will be removed from the cluster. If any CRD has a conversion webhook defined then that webhook should be removed before installing the stable version of the operator. This can be achieved via removing the .spec.conversion
block from the CRD schema.
As an example, if you have installed v0.8.0 of Multi Tenant Operator on your cluster, then it'll stuck in an error error validating existing CRs against new CRD's schema for \"integrationconfigs.tenantoperator.stakater.com\": error validating custom resource against new schema for IntegrationConfig multi-tenant-operator/tenant-operator-config: [].spec.tenantRoles: Required value
. To resolve this issue, you'll first uninstall the MTO from the cluster. Once you uninstall the MTO, check Tenant CRD which will have a conversion block, which needs to be removed. After removing the conversion block from the Tenant CRD, install the latest available version of MTO from OperatorHub.
"},{"location":"troubleshooting.html#permission-issues","title":"Permission Issues","text":""},{"location":"troubleshooting.html#vault-user-permissions-are-not-updated-if-the-user-is-added-to-a-tenant-and-the-user-does-not-exist-in-rhsso","title":"Vault user permissions are not updated if the user is added to a Tenant, and the user does not exist in RHSSO","text":""},{"location":"troubleshooting.html#problem_1","title":"Problem","text":"If a user is added to tenant resource, and the user does not exist in RHSSO, then RHSSO is not updated with the user's Vault permission.
"},{"location":"troubleshooting.html#reproduction-steps","title":"Reproduction steps","text":" - Add a new user to Tenant CR
- Attempt to log in to Vault with the added user
- Vault denies that the user exists, and signs the user up via RHSSO. User is now created on RHSSO (you may check for the user on RHSSO).
"},{"location":"troubleshooting.html#resolution_1","title":"Resolution","text":"If the user does not exist in RHSSO, then MTO does not create the tenant access for Vault in RHSSO.
The user now needs to go to Vault, and sign up using OIDC. Then the user needs to wait for MTO to reconcile the updated tenant (reconciliation period is currently 1 hour). After reconciliation, MTO will add relevant access for the user in RHSSO.
If the user needs to be added immediately and it is not feasible to wait for next MTO reconciliation, then: add a label or annotation to the user, or restart the Tenant controller pod to force immediate reconciliation.
"},{"location":"vault-multitenancy.html","title":"Vault Multitenancy","text":"HashiCorp Vault is an identity-based secret and encryption management system. Vault validates and authorizes a system's clients (users, machines, apps) before providing them access to secrets or stored sensitive data.
"},{"location":"vault-multitenancy.html#vault-integration-in-multi-tenant-operator","title":"Vault integration in Multi Tenant Operator","text":""},{"location":"vault-multitenancy.html#service-account-auth-in-vault","title":"Service Account Auth in Vault","text":"MTO enables the Kubernetes auth method which can be used to authenticate with Vault using a Kubernetes Service Account Token. When enabled, for every tenant namespace, MTO automatically creates policies and roles that allow the service accounts present in those namespaces to read secrets at tenant's path in Vault. The name of the role is the same as namespace name.
These service accounts are required to have stakater.com/vault-access: true
label, so they can be authenticated with Vault via MTO.
The Diagram shows how MTO enables ServiceAccounts to read secrets from Vault.
"},{"location":"vault-multitenancy.html#user-oidc-auth-in-vault","title":"User OIDC Auth in Vault","text":"This requires a running RHSSO(RedHat Single Sign On)
instance integrated with Vault over OIDC login method.
MTO integration with Vault and RHSSO provides a way for users to log in to Vault where they only have access to relevant tenant paths.
Once both integrations are set-up with IntegrationConfig CR, MTO links tenant users to specific client roles named after their tenant under Vault client in RHSSO.
After that, MTO creates specific policies in Vault for its tenant users.
Mapping of tenant roles to Vault is shown below
Tenant Role Vault Path Vault Capabilities Owner, Editor (tenantName)/* Create, Read, Update, Delete, List Owner, Editor sys/mounts/(tenantName)/* Create, Read, Update, Delete, List Owner, Editor managed-addons/* Read, List Viewer (tenantName)/* Read A simple user login workflow is shown in the diagram below.
"},{"location":"explanation/auth.html","title":"Authentication and Authorization in MTO Console","text":""},{"location":"explanation/auth.html#keycloak-for-authentication","title":"Keycloak for Authentication","text":"MTO Console incorporates Keycloak, a leading authentication module, to manage user access securely and efficiently. Keycloak is provisioned automatically by our controllers, setting up a new realm, client, and a default user named mto
.
"},{"location":"explanation/auth.html#benefits","title":"Benefits","text":" - Industry Standard: Offers robust, reliable authentication in line with industry standards.
- Integration with Existing Systems: Enables easy linkage with existing Active Directories or SSO systems, avoiding the need for redundant user management.
- Administrative Control: Grants administrators full authority over user access to the console, enhancing security and operational integrity.
"},{"location":"explanation/auth.html#postgresql-as-persistent-storage-for-keycloak","title":"PostgreSQL as Persistent Storage for Keycloak","text":"MTO Console leverages PostgreSQL as the persistent storage solution for Keycloak, enhancing the reliability and flexibility of the authentication system.
It offers benefits such as enhanced data reliability, easy data export and import.
"},{"location":"explanation/auth.html#benefits_1","title":"Benefits","text":" - Persistent Data Storage: By using PostgreSQL, Keycloak's data, including realms, clients, and user information, is preserved even in the event of a pod restart. This ensures continuous availability and stability of the authentication system.
- Data Exportability: Customers can easily export Keycloak configurations and data from the PostgreSQL database.
- Transferability Across Environments: The exported data can be conveniently imported into another cluster or Keycloak instance, facilitating smooth transitions and backups.
- No Data Loss: Ensures that critical authentication data is not lost during system updates or maintenance.
- Operational Flexibility: Provides customers with greater control over their authentication data, enabling them to manage and migrate their configurations as needed.
"},{"location":"explanation/auth.html#built-in-module-for-authorization","title":"Built-in module for Authorization","text":"The MTO Console is equipped with an authorization module, designed to manage access rights intelligently and securely.
"},{"location":"explanation/auth.html#benefits_2","title":"Benefits","text":" - User and Tenant Based: Authorization decisions are made based on the user's membership in specific tenants, ensuring appropriate access control.
- Role-Specific Access: The module considers the roles assigned to users, granting permissions accordingly to maintain operational integrity.
- Elevated Privileges for Admins: Users identified as administrators or members of the clusterAdminGroups are granted comprehensive permissions across the console.
- Database Caching: Authorization decisions are cached in the database, reducing reliance on the Kubernetes API server.
- Faster, Reliable Access: This caching mechanism ensures quicker and more reliable access for users, enhancing the overall responsiveness of the MTO Console.
"},{"location":"explanation/console.html","title":"MTO Console","text":""},{"location":"explanation/console.html#introduction","title":"Introduction","text":"The Multi Tenant Operator (MTO) Console is a comprehensive user interface designed for both administrators and tenant users to manage multi-tenant environments. The MTO Console simplifies the complexity involved in handling various aspects of tenants and their related resources.
"},{"location":"explanation/console.html#dashboard-overview","title":"Dashboard Overview","text":"The dashboard serves as a centralized monitoring hub, offering insights into the current state of tenants, namespaces, and quotas. It is designed to provide a quick summary/snapshot of MTO resources' status. Additionally, it includes a Showback graph that presents a quick glance of the seven-day cost trends associated with the namespaces/tenants based on the logged-in user.
"},{"location":"explanation/console.html#tenants","title":"Tenants","text":"Here, admins have a bird's-eye view of all tenants, with the ability to delve into each one for detailed examination and management. This section is pivotal for observing the distribution and organization of tenants within the system. More information on each tenant can be accessed by clicking the view option against each tenant name.
"},{"location":"explanation/console.html#namespaces","title":"Namespaces","text":"Users can view all the namespaces that belong to their tenant, offering a comprehensive perspective of the accessible namespaces for tenant members. This section also provides options for detailed exploration.
"},{"location":"explanation/console.html#quotas","title":"Quotas","text":"MTO's Quotas are crucial for managing resource allocation. In this section, administrators can assess the quotas assigned to each tenant, ensuring a balanced distribution of resources in line with operational requirements.
"},{"location":"explanation/console.html#templates","title":"Templates","text":"The Templates section acts as a repository for standardized resource deployment patterns, which can be utilized to maintain consistency and reliability across tenant environments. Few examples include provisioning specific k8s manifests, helm charts, secrets or configmaps across a set of namespaces.
"},{"location":"explanation/console.html#showback","title":"Showback","text":"The Showback feature is an essential financial governance tool, providing detailed insights into the cost implications of resource usage by tenant or namespace or other filters. This facilitates a transparent cost management and internal chargeback or showback process, enabling informed decision-making regarding resource consumption and budgeting.
"},{"location":"explanation/console.html#user-roles-and-permissions","title":"User Roles and Permissions","text":""},{"location":"explanation/console.html#administrators","title":"Administrators","text":"Administrators have overarching access to the console, including the ability to view all namespaces and tenants. They have exclusive access to the IntegrationConfig, allowing them to view all the settings and integrations.
"},{"location":"explanation/console.html#tenant-users","title":"Tenant Users","text":"Regular tenant users can monitor and manage their allocated resources. However, they do not have access to the IntegrationConfig and cannot view resources across different tenants, ensuring data privacy and operational integrity.
"},{"location":"explanation/console.html#live-yaml-configuration-and-graph-view","title":"Live YAML Configuration and Graph View","text":"In the MTO Console, each resource section is equipped with a \"View\" button, revealing the live YAML configuration for complete information on the resource. For Tenant resources, a supplementary \"Graph\" option is available, illustrating the relationships and dependencies of all resources under a Tenant. This dual-view approach empowers users with both the detailed control of YAML and the holistic oversight of the graph view.
You can find more details on graph visualization here: Graph Visualization
"},{"location":"explanation/console.html#caching-and-database","title":"Caching and Database","text":"MTO integrates a dedicated database to streamline resource management. Now, all resources managed by MTO are efficiently stored in a Postgres database, enhancing the MTO Console's ability to efficiently retrieve all the resources for optimal presentation.
The implementation of this feature is facilitated by the Bootstrap controller, streamlining the deployment process. This controller creates the PostgreSQL Database, establishes a service for inter-pod communication, and generates a secret to ensure secure connectivity to the database.
Furthermore, the introduction of a dedicated cache layer ensures that there is no added burden on the kube API server when responding to MTO Console requests. This enhancement not only improves response times but also contributes to a more efficient and responsive resource management system.
"},{"location":"explanation/console.html#authentication-and-authorization","title":"Authentication and Authorization","text":"MTO Console ensures secure access control using a robust combination of Keycloak for authentication and a custom-built authorization module.
"},{"location":"explanation/console.html#keycloak-integration","title":"Keycloak Integration","text":"Keycloak, an industry-standard authentication tool, is integrated for secure user login and management. It supports seamless integration with existing ADs or SSO systems and grants administrators complete control over user access.
"},{"location":"explanation/console.html#custom-authorization-module","title":"Custom Authorization Module","text":"Complementing Keycloak, our custom authorization module intelligently controls access based on user roles and their association with tenants. Special checks are in place for admin users, granting them comprehensive permissions.
For more details on Keycloak's integration, PostgreSQL as persistent storage, and the intricacies of our authorization module, please visit here.
"},{"location":"explanation/console.html#conclusion","title":"Conclusion","text":"The MTO Console is engineered to simplify complex multi-tenant management. The current iteration focuses on providing comprehensive visibility. Future updates could include direct CUD (Create/Update/Delete) capabilities from the dashboard, enhancing the console\u2019s functionality. The Showback feature remains a standout, offering critical cost tracking and analysis. The delineation of roles between administrators and tenant users ensures a secure and organized operational framework.
"},{"location":"explanation/why-argocd-multi-tenancy.html","title":"Need for Multi-Tenancy in ArgoCD","text":""},{"location":"explanation/why-argocd-multi-tenancy.html#argocd-multi-tenancy","title":"ArgoCD Multi-tenancy","text":"ArgoCD is a declarative GitOps tool built to deploy applications to Kubernetes. While the continuous delivery (CD) space is seen by some as crowded these days, ArgoCD does bring some interesting capabilities to the table. Unlike other tools, ArgoCD is lightweight and easy to configure.
"},{"location":"explanation/why-argocd-multi-tenancy.html#why-argocd","title":"Why ArgoCD?","text":"Application definitions, configurations, and environments should be declarative and version controlled. Application deployment and lifecycle management should be automated, auditable, and easy to understand.
"},{"location":"explanation/why-vault-multi-tenancy.html","title":"Need for Multi-Tenancy in Vault","text":""},{"location":"faq/index.html","title":"Index","text":""},{"location":"how-to-guides/integration-config.html","title":"Integration Config","text":"IntegrationConfig is used to configure settings of multi-tenancy for Multi Tenant Operator.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - admin\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n - viewer\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n editor:\n clusterRoles:\n - custom-editor\n viewer:\n clusterRoles:\n - custom-viewer\n - custom-view\n openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n group:\n labels:\n role: customer-reader\n sandbox:\n labels:\n stakater.com/kind: sandbox\n clusterAdminGroups:\n - cluster-admins\n privilegedNamespaces:\n - ^default$\n - ^openshift-*\n - ^kube-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n groups:\n - cluster-admins\n argocd:\n namespace: openshift-operators\n namespaceResourceBlacklist:\n - group: '' # all groups\n kind: ResourceQuota\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n rhsso:\n enabled: true\n realm: customer\n endpoint:\n url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n secretReference:\n name: auth-secrets\n namespace: openshift-auth\n vault:\n enabled: true\n endpoint:\n url: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n secretReference:\n name: vault-root-token\n namespace: vault\n sso:\n clientName: vault\n accessorID: <ACCESSOR_ID_TOKEN>\n
Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator.
"},{"location":"how-to-guides/integration-config.html#tenantroles","title":"TenantRoles","text":"TenantRoles are required within the IntegrationConfig, as they are used for defining what roles will be applied to each Tenant namespace. The field allows optional custom roles, that are then used to create RoleBindings for namespaces that match a labelSelector.
\u26a0\ufe0f If you do not configure roles in any way, then the default OpenShift roles of owner
, edit
, and view
will apply to Tenant members. Their details can be found here
tenantRoles:\n default:\n owner:\n clusterRoles:\n - admin\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n - viewer\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n editor:\n clusterRoles:\n - custom-editor\n viewer:\n clusterRoles:\n - custom-viewer\n - custom-view\n
"},{"location":"how-to-guides/integration-config.html#default","title":"Default","text":"This field contains roles that will be used to create default roleBindings for each namespace that belongs to tenants. These roleBindings are only created for a namespace if that namespace isn't already matched by the custom
field below it. Therefore, it is required to have at least one role mentioned within each of its three subfields: owner
, editor
, and viewer
. These 3 subfields also correspond to the member fields of the Tenant CR
"},{"location":"how-to-guides/integration-config.html#custom","title":"Custom","text":"An array of custom roles. Similar to the default
field, you can mention roles within this field as well. However, the custom roles also require the use of a labelSelector
for each iteration within the array. The roles mentioned here will only apply to the namespaces that are matched by the labelSelector. If a namespace is matched by 2 different labelSelectors, then both roles will apply to it. Additionally, roles can be skipped within the labelSelector. These missing roles are then inherited from the default
roles field . For example, if the following custom roles arrangement is used:
custom:\n- labelSelector:\n matchExpressions:\n - key: stakater.com/kind\n operator: In\n values:\n - build\n matchLabels:\n stakater.com/kind: dev\n owner:\n clusterRoles:\n - custom-owner\n
Then the editor
and viewer
roles will be taken from the default
roles field, as that is required to have at least one role mentioned.
"},{"location":"how-to-guides/integration-config.html#openshift","title":"OpenShift","text":"openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n group:\n labels:\n role: customer-reader\n sandbox:\n labels:\n stakater.com/kind: sandbox\n clusterAdminGroups:\n - cluster-admins\n privilegedNamespaces:\n - ^default$\n - ^openshift-*\n - ^kube-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n groups:\n - cluster-admins\n
"},{"location":"how-to-guides/integration-config.html#project-group-and-sandbox","title":"Project, group and sandbox","text":"We can use the openshift.project
, openshift.group
and openshift.sandbox
fields to automatically add labels
and annotations
to the Projects and Groups managed via MTO.
openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n group:\n labels:\n role: customer-reader\n sandbox:\n labels:\n stakater.com/kind: sandbox\n
If we want to add default labels/annotations to sandbox namespaces of tenants than we just simply add them in openshift.project.labels
/openshift.project.annotations
respectively.
Whenever a project is made it will have the labels and annotations as mentioned above.
kind: Project\napiVersion: project.openshift.io/v1\nmetadata:\n name: bluesky-build\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n labels:\n workload-monitoring: 'true'\n stakater.com/tenant: bluesky\nspec:\n finalizers:\n - kubernetes\nstatus:\n phase: Active\n
kind: Group\napiVersion: user.openshift.io/v1\nmetadata:\n name: bluesky-owner-group\n labels:\n role: customer-reader\nusers:\n - andrew@stakater.com\n
"},{"location":"how-to-guides/integration-config.html#cluster-admin-groups","title":"Cluster Admin Groups","text":"clusterAdminGroups:
Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way
"},{"location":"how-to-guides/integration-config.html#privileged-namespaces","title":"Privileged Namespaces","text":"privilegedNamespaces:
Contains the list of namespaces
ignored by MTO. MTO will not manage the namespaces
in this list. Values in this list are regex patterns. For example:
- To ignore the
default
namespace, we can specify ^default$
- To ignore all namespaces starting with the
openshift-
prefix, we can specify ^openshift-*
. - To ignore any namespace containing
stakater
in its name, we can specify stakater
. (A constant word given as a regex pattern will match any namespace containing that word.)
"},{"location":"how-to-guides/integration-config.html#privileged-serviceaccounts","title":"Privileged ServiceAccounts","text":"privilegedServiceAccounts:
Contains the list of ServiceAccounts
ignored by MTO. MTO will not manage the ServiceAccounts
in this list. Values in this list are regex patterns. For example, to ignore all ServiceAccounts
starting with the system:serviceaccount:openshift-
prefix, we can use ^system:serviceaccount:openshift-*
; and to ignore the system:serviceaccount:builder
service account we can use ^system:serviceaccount:builder$.
"},{"location":"how-to-guides/integration-config.html#namespace-access-policy","title":"Namespace Access Policy","text":"namespaceAccessPolicy.Deny:
Can be used to restrict privileged users/groups CRUD operation over managed namespaces.
namespaceAccessPolicy:\n deny:\n privilegedNamespaces:\n groups:\n - cluster-admins\n users:\n - system:serviceaccount:openshift-argocd:argocd-application-controller\n - adam@stakater.com\n
\u26a0\ufe0f If you want to use a more complex regex pattern (for the openshift.privilegedNamespaces
or openshift.privilegedServiceAccounts
field), it is recommended that you test the regex pattern first - either locally or using a platform such as https://regex101.com/.
"},{"location":"how-to-guides/integration-config.html#argocd","title":"ArgoCD","text":""},{"location":"how-to-guides/integration-config.html#namespace","title":"Namespace","text":"argocd.namespace
is an optional field used to specify the namespace where ArgoCD Applications and AppProjects are deployed. The field should be populated when you want to create an ArgoCD AppProject for each tenant.
"},{"location":"how-to-guides/integration-config.html#namespaceresourceblacklist","title":"NamespaceResourceBlacklist","text":"argocd:\n namespaceResourceBlacklist:\n - group: '' # all resource groups\n kind: ResourceQuota\n - group: ''\n kind: LimitRange\n - group: ''\n kind: NetworkPolicy\n
argocd.namespaceResourceBlacklist
prevents ArgoCD from syncing the listed resources from your GitOps repo.
"},{"location":"how-to-guides/integration-config.html#clusterresourcewhitelist","title":"ClusterResourceWhitelist","text":"argocd:\n clusterResourceWhitelist:\n - group: tronador.stakater.com\n kind: EnvironmentProvisioner\n
argocd.clusterResourceWhitelist
allows ArgoCD to sync the listed cluster scoped resources from your GitOps repo.
"},{"location":"how-to-guides/integration-config.html#rhsso-red-hat-single-sign-on","title":"RHSSO (Red Hat Single Sign-On)","text":"Red Hat Single Sign-On RHSSO is based on the Keycloak project and enables you to secure your web applications by providing Web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0.
If RHSSO
is configured on a cluster, then RHSSO configuration can be enabled.
rhsso:\n enabled: true\n realm: customer\n endpoint:\n secretReference:\n name: auth-secrets\n namespace: openshift-auth\n url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n
If enabled, then admins have to provide secret and URL of RHSSO.
secretReference.name:
Will contain the name of the secret. secretReference.namespace:
Will contain the namespace of the secret. realm:
Will contain the realm name which is configured for users. url:
Will contain the URL of RHSSO.
"},{"location":"how-to-guides/integration-config.html#vault","title":"Vault","text":"Vault is used to secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.
If vault
is configured on a cluster, then Vault configuration can be enabled.
vault:\n enabled: true\n endpoint:\n secretReference:\n name: vault-root-token\n namespace: vault\n url: >-\n https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n sso:\n accessorID: <ACCESSOR_ID_TOKEN>\n clientName: vault\n
If enabled, then admins have to provide the secret, URL and SSO accessorID of Vault.
secretReference.name:
Will contain the name of the secret. secretReference.namespace:
Will contain the namespace of the secret. url:
Will contain the URL of Vault. sso.accessorID:
Will contain the SSO accessorID. sso.clientName:
Will contain the client name.
"},{"location":"how-to-guides/quota.html","title":"Quota","text":"Using Multi Tenant Operator, the cluster-admin can set and enforce cluster resource quotas and limit ranges for tenants.
"},{"location":"how-to-guides/quota.html#assigning-resource-quotas","title":"Assigning Resource Quotas","text":"Bill is a cluster admin who will first create Quota
CR where he sets the maximum resource limits that Anna's tenant will have. Here limitrange
is an optional field, cluster admin can skip it if not needed.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: small\nspec:\n resourcequota:\n hard:\n requests.cpu: '5'\n requests.memory: '5Gi'\n configmaps: \"10\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\n limitrange:\n limits:\n - type: \"Pod\"\n max:\n cpu: \"2\"\n memory: \"1Gi\"\n min:\n cpu: \"200m\"\n memory: \"100Mi\"\nEOF\n
For more details please refer to Quotas.
kubectl get quota small\nNAME STATE AGE\nsmall Active 3m\n
Bill then proceeds to create a tenant for Anna, while also linking the newly created Quota
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@stakater.com\n quota: small\n sandbox: false\nEOF\n
Now that the quota is linked with Anna's tenant, Anna can create any resource within the values of resource quota and limit range.
kubectl -n bluesky-production create deployment nginx --image nginx:latest --replicas 4\n
Once the resource quota assigned to the tenant has been reached, Anna cannot create further resources.
kubectl -n bluesky-production run bluesky-training --image nginx:latest\nError from server (Cannot exceed Namespace quota: please, reach out to the system administrators)\n
"},{"location":"how-to-guides/quota.html#limiting-persistentvolume-for-tenant","title":"Limiting PersistentVolume for Tenant","text":"Bill, as a cluster admin, wants to restrict the amount of storage a Tenant can use. For that he'll add the requests.storage
field to quota.spec.resourcequota.hard
. If Bill wants to restrict tenant bluesky
to use only 50Gi
of storage, he'll first create a quota with requests.storage
field set to 50Gi
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: medium\nspec:\n resourcequota:\n hard:\n requests.cpu: '5'\n requests.memory: '10Gi'\n requests.storage: '50Gi'\nEOF\n
Once the quota is created, Bill will create the tenant and set the quota field to the one he created.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n quota: medium\n sandbox: true\nEOF\n
Now, the combined storage used by all tenant namespaces will not exceed 50Gi
.
"},{"location":"how-to-guides/quota.html#adding-storageclass-restrictions-for-tenant","title":"Adding StorageClass Restrictions for Tenant","text":"Now, Bill, as a cluster admin, wants to make sure that no Tenant can provision more than a fixed amount of storage from a StorageClass. Bill can restrict that using <storage-class-name>.storageclass.storage.k8s.io/requests.storage
field in quota.spec.resourcequota.hard
. If Bill wants to restrict tenant sigma
to use only 20Gi
of storage from storage class stakater
, he'll first create a StorageClass stakater
and then create the relevant Quota with stakater.storageclass.storage.k8s.io/requests.storage
field set to 20Gi
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: small\nspec:\n resourcequota:\n hard:\n requests.cpu: '2'\n requests.memory: '4Gi'\n stakater.storageclass.storage.k8s.io/requests.storage: '20Gi'\nEOF\n
Once the quota is created, Bill will create the tenant and set the quota field to the one he created.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\nspec:\n owners:\n users:\n - dave@aurora.org\n quota: small\n sandbox: true\nEOF\n
Now, the combined storage provisioned from StorageClass stakater
used by all tenant namespaces will not exceed 20Gi
.
The 20Gi
limit will only be applied to StorageClass stakater
. If a tenant member creates a PVC with some other StorageClass, he will not be restricted.
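To illustrate, a PVC like the following sketch (the PVC name and the sandbox namespace are illustrative) would count against the 20Gi limit because it requests the stakater StorageClass, while a PVC using any other StorageClass would not:
kubectl create -f - << EOF\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: example-pvc\n namespace: sigma-dave-aurora-sandbox\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: stakater\n resources:\n requests:\n storage: 5Gi\nEOF\n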
Tip
More details about Resource Quota
can be found here
"},{"location":"how-to-guides/template-group-instance.html","title":"TemplateGroupInstance","text":"Cluster scoped resource:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: namespace-parameterized-restrictions-tgi\nspec:\n template: namespace-parameterized-restrictions\n sync: true\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n
TemplateGroupInstance distributes a template across multiple namespaces which are selected by labelSelector.
"},{"location":"how-to-guides/template-instance.html","title":"TemplateInstance","text":"Namespace scoped resource:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: networkpolicy\n namespace: build\nspec:\n template: networkpolicy\n sync: true\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n
TemplateInstances are used to keep track of resources created from Templates, which are instantiated inside a Namespace. Generally, a TemplateInstance is created from a Template and the TemplateInstance will not be updated when the Template changes later on. To change this behavior, it is possible to set spec.sync: true in a TemplateInstance. Setting this option keeps the TemplateInstance in sync with the underlying template (similar to Helm upgrade).
"},{"location":"how-to-guides/template.html","title":"Template","text":""},{"location":"how-to-guides/template.html#cluster-scoped-resource","title":"Cluster scoped resource","text":"apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: redis\nresources:\n helm:\n releaseName: redis\n chart:\n repository:\n name: redis\n repoUrl: https://charts.bitnami.com/bitnami\n values: |\n redisPort: 6379\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: networkpolicy\nparameters:\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\nresources:\n manifests:\n - kind: NetworkPolicy\n apiVersion: networking.k8s.io/v1\n metadata:\n name: deny-cross-ns-traffic\n spec:\n podSelector:\n matchLabels:\n role: db\n policyTypes:\n - Ingress\n - Egress\n ingress:\n - from:\n - ipBlock:\n cidr: \"${{CIDR_IP}}\"\n except:\n - 172.17.1.0/24\n - namespaceSelector:\n matchLabels:\n project: myproject\n - podSelector:\n matchLabels:\n role: frontend\n ports:\n - protocol: TCP\n port: 6379\n egress:\n - to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 5978\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: resource-mapping\nresources:\n resourceMappings:\n secrets:\n - name: secret-s1\n namespace: namespace-n1\n configMaps:\n - name: configmap-c1\n namespace: namespace-n2\n
Templates are used to initialize Namespaces, share common resources across namespaces, and map secrets/configmaps from one namespace to other namespaces.
- They either contain one or more Kubernetes manifests, a reference to secrets/configmaps, or a Helm chart.
- They are being tracked by TemplateInstances in each Namespace they are applied to.
- They can contain pre-defined parameters such as ${namespace}/${tenant} or user-defined ${MY_PARAMETER} that can be specified within a TemplateInstance.
Also, you can define custom variables in Template and TemplateInstance. The parameters defined in TemplateInstance overwrite the values defined in Template.
Manifest Templates: The easiest option to define a Template is by specifying an array of Kubernetes manifests which should be applied when the Template is being instantiated.
Helm Chart Templates: Instead of manifests, a Template can specify a Helm chart that will be installed (using Helm template) when the Template is being instantiated.
Resource Mapping Templates: A template can be used to map secrets and configmaps from one tenant's namespace to another tenant's namespace, or within a tenant's namespace.
"},{"location":"how-to-guides/template.html#mandatory-and-optional-templates","title":"Mandatory and Optional Templates","text":"Templates can either be mandatory or optional. By default, all Templates are optional. Cluster Admins can make Templates mandatory by adding them to the spec.templateInstances
array within the Tenant configuration. All Templates listed in spec.templateInstances
will always be instantiated within every Namespace
that is created for the respective Tenant.
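For instance, a Tenant could make the networkpolicy Template mandatory with a spec like the sketch below, mirroring the fuller Tenant example later in this guide:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: alpha\nspec:\n quota: small\n templateInstances:\n - spec:\n template: networkpolicy\n sync: true\n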
"},{"location":"how-to-guides/tenant.html","title":"Tenant","text":"Cluster scoped resource:
The smallest valid Tenant definition is given below (with just one field in its spec):
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: alpha\nspec:\n quota: small\n
Here is a more detailed Tenant definition, explained below:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: alpha\nspec:\n owners: # optional\n users: # optional\n - dave@stakater.com\n groups: # optional\n - alpha\n editors: # optional\n users: # optional\n - jack@stakater.com\n viewers: # optional\n users: # optional\n - james@stakater.com\n quota: medium # required\n sandboxConfig: # optional\n enabled: true # optional\n private: true # optional\n onDelete: # optional\n cleanNamespaces: false # optional\n cleanAppProject: true # optional\n argocd: # optional\n sourceRepos: # required\n - https://github.com/stakater/gitops-config\n appProject: # optional\n clusterResourceWhitelist: # optional\n - group: tronador.stakater.com\n kind: Environment\n namespaceResourceBlacklist: # optional\n - group: \"\"\n kind: ConfigMap\n hibernation: # optional\n sleepSchedule: 23 * * * * # required\n wakeSchedule: 26 * * * * # required\n namespaces: # optional\n withTenantPrefix: # optional\n - dev\n - build\n withoutTenantPrefix: # optional\n - preview\n commonMetadata: # optional\n labels: # optional\n stakater.com/team: alpha\n annotations: # optional\n openshift.io/node-selector: node-role.kubernetes.io/infra=\n specificMetadata: # optional\n - annotations: # optional\n stakater.com/user: dave\n labels: # optional\n stakater.com/sandbox: true\n namespaces: # optional\n - alpha-dave-stakater-sandbox\n templateInstances: # optional\n - spec: # optional\n template: networkpolicy # required\n sync: true # optional\n parameters: # optional\n - name: CIDR_IP\n value: \"172.17.0.0/16\"\n selector: # optional\n matchLabels: # optional\n policy: network-restriction\n
-
Tenant has 3 kinds of Members
. Each member type should have different roles assigned to them. These roles are sourced from the IntegrationConfig's TenantRoles field. You can customize these roles to your liking, but by default the following configuration applies:
Owners:
Users who will be owners of a tenant. They will have OpenShift admin-role assigned to their users, with additional access to create namespaces as well. Editors:
Users who will be editors of a tenant. They will have OpenShift edit-role assigned to their users. Viewers:
Users who will be viewers of a tenant. They will have OpenShift view-role assigned to their users. - For more details, check out their definitions.
-
Users
can be linked to the tenant by specifying their username in owners.users
, editors.users
and viewers.users
respectively.
-
Groups
can be linked to the tenant by specifying the group name in owners.groups
, editors.groups
and viewers.groups
respectively.
-
Tenant will have a Quota
to limit resource consumption.
-
sandboxConfig
is used to configure the tenant user sandbox feature
- Setting
enabled
to true will create sandbox namespaces for owners and editors. - Sandbox will follow the following naming convention {TenantName}-{UserName}-sandbox.
- In case of groups, the sandbox namespaces will be created for each member of the group.
- Setting
private
to true will make those sandboxes visible only to the user they belong to. By default, sandbox namespaces are visible to all tenant members.
-
onDelete
is used to tell Multi Tenant Operator what to do when a Tenant is deleted.
cleanNamespaces
if set to true, MTO deletes all tenant namespaces when a Tenant
is deleted. Default value is false. cleanAppProject
if set to false, the generated ArgoCD AppProject is retained when the Tenant is deleted. Default value is true.
-
argocd
is required if you want to create an ArgoCD AppProject for the tenant.
sourceRepos
contains a list of repositories that point to your GitOps repo. appProject
is used to set the clusterResourceWhitelist
and namespaceResourceBlacklist
resources. If these are also applied via IntegrationConfig
then those applied via the Tenant CR will have higher precedence for the given Tenant.
-
hibernation
can be used to create a schedule during which the namespaces belonging to the tenant will be put to sleep. The values of the sleepSchedule
and wakeSchedule
fields must be strings in cron format.
-
Namespaces can also be created via tenant CR by specifying names in namespaces
.
- Multi Tenant Operator will append tenant name prefix while creating namespaces if the list of namespaces is under the
withTenantPrefix
field, so the format will be {TenantName}-{Name}. - Namespaces listed under the
withoutTenantPrefix
will be created with the given name. Listing namespaces here that already exist within the cluster is not allowed. stakater.com/kind: {Name}
label will also be added to the namespaces.
-
commonMetadata
can be used to distribute common labels and annotations among tenant namespaces.
labels
distributes provided labels among all tenant namespaces annotations
distributes provided annotations among all tenant namespaces
-
specificMetadata
can be used to distribute specific labels and annotations among specific tenant namespaces.
labels
distributes given labels among specific tenant namespaces annotations
distributes given annotations among specific tenant namespaces namespaces
consists of a list of specific tenant namespaces across which the labels and annotations will be distributed
-
Tenant automatically deploys the template resources mentioned in templateInstances
to matching tenant namespaces.
Template
resources are created in those namespaces
which belong to a tenant
and contain matching labels
. Template
resources are created in all namespaces
of a tenant
if selector
field is empty.
\u26a0\ufe0f If the same label or annotation key is applied using more than one of the methods above, the highest precedence is given to specificMetadata, followed by commonMetadata, and finally the ones applied from openshift.project.labels/openshift.project.annotations in IntegrationConfig
"},{"location":"how-to-guides/offboarding/uninstalling.html","title":"Uninstall via OperatorHub UI","text":"You can uninstall MTO by following these steps:
-
Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set spec.onDelete.cleanNamespaces
to false
for all those tenants whose namespaces you want to retain, and spec.onDelete.cleanAppProject
to false
for all those tenants whose AppProject you want to retain. For more details check out onDelete
-
After making the required changes, open the OpenShift console and click on Operators
, followed by Installed Operators
from the side menu
- Now click on uninstall and confirm uninstall.
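As a reference for the first step, a tenant's onDelete fields could be set with a command like the sketch below (the tenant name bluesky and the use of the tenant short name with kubectl are assumptions):
kubectl patch tenant bluesky --type merge -p '{\"spec\":{\"onDelete\":{\"cleanNamespaces\":false,\"cleanAppProject\":false}}}'\n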
"},{"location":"how-to-guides/offboarding/uninstalling.html#notes","title":"Notes","text":" - For more details on how to use MTO please refer Tenant's tutorial.
- For more details on how to extend your MTO manager ClusterRole please refer extend-admin-clusterrole.
"},{"location":"reference-guides/add-remove-namespace-gitops.html","title":"Add/Remove Namespace from Tenant via GitOps","text":""},{"location":"reference-guides/admin-clusterrole.html","title":"Extending Admin ClusterRole","text":"Bill as the cluster admin want to add additional rules for admin ClusterRole.
Bill can extend the admin
role for MTO using the aggregation label for admin ClusterRole. Bill will create a new ClusterRole with all the permissions he needs to extend for MTO and add the aggregation label on the newly created ClusterRole.
kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: extend-admin-role\n labels:\n rbac.authorization.k8s.io/aggregate-to-admin: 'true'\nrules:\n - verbs:\n - create\n - update\n - patch\n - delete\n apiGroups:\n - user.openshift.io\n resources:\n - groups\n
Note: You can learn more about aggregated-cluster-roles
here
"},{"location":"reference-guides/admin-clusterrole.html#whats-next","title":"What\u2019s next","text":"See how Bill can hibernate unused namespaces at night
"},{"location":"reference-guides/configuring-multitenant-network-isolation.html","title":"Configuring Multi-Tenant Isolation with Network Policy Template","text":"Bill is a cluster admin who wants to configure network policies to provide multi-tenant network isolation.
First, Bill creates a template for network policies:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: tenant-network-policy\nresources:\n manifests:\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-same-namespace\n spec:\n podSelector: {}\n ingress:\n - from:\n - podSelector: {}\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-from-openshift-monitoring\n spec:\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n network.openshift.io/policy-group: monitoring\n podSelector: {}\n policyTypes:\n - Ingress\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-from-openshift-ingress\n spec:\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n network.openshift.io/policy-group: ingress\n podSelector: {}\n policyTypes:\n - Ingress\n
Once the template has been created, Bill edits the IntegrationConfig to add a unique label to all tenant projects:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n tenant-network-policy: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n sandbox:\n labels:\n stakater.com/kind: sandbox\n privilegedNamespaces:\n - default\n - ^openshift-*\n - ^kube-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n
Bill has added a new label tenant-network-policy: \"true\"
in the project section of IntegrationConfig; MTO will now add that label to all tenant projects.
Finally, Bill creates a TemplateGroupInstance
which will distribute the network policies using the newly added project label and template.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: tenant-network-policy-group\nspec:\n template: tenant-network-policy\n selector:\n matchLabels:\n tenant-network-policy: \"true\"\n sync: true\n
MTO will now deploy the network policies mentioned in Template
to all projects matching the label selector mentioned in the TemplateGroupInstance.
"},{"location":"reference-guides/custom-metrics.html","title":"Custom Metrics Support","text":"Multi Tenant Operator now supports custom metrics for templates, template instances and template group instances. This feature allows users to monitor the usage of templates and template instances in their cluster.
To enable custom metrics and view them in your OpenShift cluster, you need to follow the steps below:
- Ensure that cluster monitoring is enabled in your cluster. You can check this by going to
Observe
-> Metrics
in the OpenShift console. - Navigate to
Administration
-> Namespaces
in the OpenShift console. Select the namespace where you have installed Multi Tenant Operator. - Add the following label to the namespace:
openshift.io/cluster-monitoring=true
. This will enable cluster monitoring for the namespace. - To ensure that the metrics are being scraped for the namespace, navigate to
Observe
-> Targets
in the OpenShift console. You should see the namespace in the list of targets. - To view the custom metrics, navigate to
Observe
-> Metrics
in the OpenShift console. You should see the custom metrics for templates, template instances and template group instances in the list of metrics.
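For example, the label from the third step above can be applied with a command like the following (assuming MTO is installed in the multi-tenant-operator namespace):
oc label namespace multi-tenant-operator openshift.io/cluster-monitoring=true\n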
"},{"location":"reference-guides/custom-roles.html","title":"Changing the default access level for tenant owners","text":"This feature allows the cluster admins to change the default roles assigned to Tenant owner, editor, viewer groups.
For example, Bill as the cluster admin wants to reduce the privileges that tenant owners have, so that they cannot create or edit Roles or bind them. As an admin of an OpenShift cluster, Bill can do this by assigning the edit
role to all tenant owners. This is easily achieved by modifying the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - edit\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n
Once all namespaces reconcile, the old admin
RoleBindings should get replaced with the edit
ones for each tenant owner.
"},{"location":"reference-guides/custom-roles.html#giving-specific-permissions-to-some-tenants","title":"Giving specific permissions to some tenants","text":"Bill now wants the owners of the tenants bluesky
and alpha
to have admin
permissions over their namespaces. Custom roles feature will allow Bill to do this, by modifying the IntegrationConfig like this:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - edit\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n owner:\n clusterRoles:\n - admin\n - labelSelector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - bluesky\n owner:\n clusterRoles:\n - admin\n
New Bindings will be created for the Tenant owners of bluesky
and alpha
, corresponding to the admin
Role. Bindings for editors and viewers will be inherited from the default roles. All other Tenant owners will have an edit Role bound to them within their namespaces.
"},{"location":"reference-guides/deploying-templates.html","title":"Distributing Resources in Namespaces","text":"Multi Tenant Operator has three Custom Resources which can cover this need using the Template
CR, depending upon the conditions and preference.
- TemplateGroupInstance
- TemplateInstance
- Tenant
Stakater Team, however, encourages the use of TemplateGroupInstance
to distribute resources in multiple namespaces as it is optimized for better performance.
"},{"location":"reference-guides/deploying-templates.html#deploying-template-to-namespaces-via-templategroupinstances","title":"Deploying Template to Namespaces via TemplateGroupInstances","text":"Bill, the cluster admin, wants to deploy a docker pull secret in namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
Afterward, Bill can see that secrets have been successfully created in all label-matching namespaces.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 2m\n
TemplateGroupInstance
can also target specific tenants or all tenant namespaces under a single yaml definition.
"},{"location":"reference-guides/deploying-templates.html#templategroupinstance-for-multiple-tenants","title":"TemplateGroupInstance for multiple Tenants","text":"It can be done by using the matchExpressions
field, splitting the tenant label into key and values.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\n sync: true\n
"},{"location":"reference-guides/deploying-templates.html#templategroupinstance-for-all-tenants","title":"TemplateGroupInstance for all Tenants","text":"This can also be done by using the matchExpressions
field, using just the tenant label key stakater.com/tenant
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: Exists\n sync: true\n
"},{"location":"reference-guides/deploying-templates.html#deploying-template-to-namespaces-via-tenant","title":"Deploying Template to Namespaces via Tenant","text":"Bill is a cluster admin who wants to deploy a docker-pull-secret in Anna's tenant namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Bill edits Anna's tenant and populates the templateInstances
field:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n editors:\n users:\n - john@aurora.org\n quota: small\n sandboxConfig:\n enabled: true\n templateInstances:\n - spec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n
Multi Tenant Operator will deploy the TemplateInstances mentioned in the templateInstances field. TemplateInstances will only be applied in those namespaces which belong to Anna's tenant and have the matching label kind: build.
So now Anna adds label kind: build
to her existing namespace bluesky-anna-aurora-sandbox
, and after adding the label she sees that the secret has been created.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"reference-guides/deploying-templates.html#deploying-template-to-a-namespace-via-templateinstance","title":"Deploying Template to a Namespace via TemplateInstance","text":"Anna wants to deploy a docker pull secret in her namespace.
First Anna asks Bill, the cluster admin, to create a template of the secret for her:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Anna creates a TemplateInstance
in her namespace referring to the Template
she wants to deploy:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: docker-pull-secret-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: docker-pull-secret\n sync: true\n
Once this is created, Anna can see that the secret has been successfully applied.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"reference-guides/deploying-templates.html#passing-parameters-to-template-via-templateinstance-templategroupinstance-or-tenant","title":"Passing Parameters to Template via TemplateInstance, TemplateGroupInstance or Tenant","text":"Anna wants to deploy a LimitRange resource to certain namespaces.
First, Anna asks Bill, the cluster admin, to create a template with parameters for a LimitRange for her:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: namespace-parameterized-restrictions\nparameters:\n # Name of the parameter\n - name: DEFAULT_CPU_LIMIT\n # The default value of the parameter\n value: \"1\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"0.5\"\n # If a parameter is required the template instance will need to set it\n # required: true\n # Make sure only values are entered for this parameter\n validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n manifests:\n - apiVersion: v1\n kind: LimitRange\n metadata:\n name: namespace-limit-range-${namespace}\n spec:\n limits:\n - default:\n cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n defaultRequest:\n cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n type: Container\n
Afterward, Anna creates a TemplateInstance
in her namespace referring to the Template
she wants to deploy:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: namespace-parameterized-restrictions-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: namespace-parameterized-restrictions\n sync: true\nparameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n
If she wants to distribute the same Template over multiple namespaces, she can use TemplateGroupInstance
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: namespace-parameterized-restrictions-tgi\nspec:\n template: namespace-parameterized-restrictions\n sync: true\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\nparameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n
Or she can use her tenant to cover only the tenant namespaces.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n editors:\n users:\n - john@aurora.org\n quota: small\n sandboxConfig:\n enabled: true\n templateInstances:\n - spec:\n template: namespace-parameterized-restrictions\n sync: true\n parameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n selector:\n matchLabels:\n kind: build\n
"},{"location":"reference-guides/distributing-resources.html","title":"Copying Secrets and Configmaps across Tenant Namespaces via TGI","text":"Bill is a cluster admin who wants to map a docker-pull-secret
, present in a build
namespace, to tenant namespaces where certain labels exist.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n resourceMappings:\n secrets:\n - name: docker-pull-secret\n namespace: build\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
Afterward, Bill can see that the secret has been successfully mapped in all matching namespaces.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"reference-guides/distributing-resources.html#mapping-resources-within-tenant-namespaces-via-ti","title":"Mapping Resources within Tenant Namespaces via TI","text":"Anna is a tenant owner who wants to map a docker-pull-secret
, present in bluseky-build
namespace, to bluesky-anna-aurora-sandbox
namespace.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n resourceMappings:\n secrets:\n - name: docker-pull-secret\n namespace: bluesky-build\n
Once the template has been created, Anna creates a TemplateInstance
in bluesky-anna-aurora-sandbox
namespace, referring to the Template
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: docker-secret-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: docker-pull-secret\n sync: true\n
Afterward, Bill can see that the secret has been successfully mapped in the target namespace.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"reference-guides/distributing-secrets-using-sealed-secret-template.html","title":"Distributing Secrets Using Sealed Secrets Template","text":"Bill is a cluster admin who wants to provide a mechanism for distributing secrets in multiple namespaces. For this, he wants to use Sealed Secrets as the solution by adding them to MTO Template CR
First, Bill creates a Template in which Sealed Secret is mentioned:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: tenant-sealed-secret\nresources:\n manifests:\n - kind: SealedSecret\n apiVersion: bitnami.com/v1alpha1\n metadata:\n name: mysecret\n spec:\n encryptedData:\n .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....\n template:\n type: kubernetes.io/dockerconfigjson\n # this is an example of labels and annotations that will be added to the output secret\n metadata:\n labels:\n \"jenkins.io/credentials-type\": usernamePassword\n annotations:\n \"jenkins.io/credentials-description\": credentials from Kubernetes\n
Once the template has been created, Bill has to edit the Tenant
to add a unique label to namespaces in which the secret has to be deployed. For this, he can use the support for common and specific labels across namespaces.
Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n\n # use this if you want to add label to some specific namespaces\n specificMetadata:\n - namespaces:\n - test-namespace\n labels:\n distribute-image-pull-secret: true\n\n # use this if you want to add label to all namespaces under your tenant\n commonMetadata:\n labels:\n distribute-image-pull-secret: true\n
Bill has added support for a new label distribute-image-pull-secret: true for tenant projects/namespaces; MTO will now add that label depending on the field used.
Finally, Bill creates a TemplateGroupInstance
which will deploy the sealed secrets using the newly created project label and template.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: tenant-sealed-secret\nspec:\n template: tenant-sealed-secret\n selector:\n matchLabels:\n distribute-image-pull-secret: true\n sync: true\n
MTO will now deploy the sealed secrets mentioned in Template
to namespaces which have the mentioned label. The rest of the work to deploy secret from a sealed secret has to be done by Sealed Secrets Controller.
"},{"location":"reference-guides/distributing-secrets.html","title":"Distributing Secrets","text":"Bill is a cluster admin who wants to provide a mechanism for distributing secrets in multiple namespaces. For this, he wants to use Sealed Secrets as the solution by adding them to MTO Template CR
First, Bill creates a Template in which Sealed Secret is mentioned:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: tenant-sealed-secret\nresources:\n manifests:\n - kind: SealedSecret\n apiVersion: bitnami.com/v1alpha1\n metadata:\n name: mysecret\n spec:\n encryptedData:\n .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....\n template:\n type: kubernetes.io/dockerconfigjson\n # this is an example of labels and annotations that will be added to the output secret\n metadata:\n labels:\n \"jenkins.io/credentials-type\": usernamePassword\n annotations:\n \"jenkins.io/credentials-description\": credentials from Kubernetes\n
Once the template has been created, Bill has to edit the Tenant
to add a unique label to namespaces in which the secret has to be deployed. For this, he can use the support for common and specific labels across namespaces.
Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n\n # use this if you want to add label to some specific namespaces\n specificMetadata:\n - namespaces:\n - test-namespace\n labels:\n distribute-image-pull-secret: true\n\n # use this if you want to add label to all namespaces under your tenant\n commonMetadata:\n labels:\n distribute-image-pull-secret: true\n
Bill has added support for a new label distribute-image-pull-secret: true for tenant projects/namespaces; MTO will now add that label depending on the field used.
Finally, Bill creates a TemplateGroupInstance
which will deploy the sealed secrets using the newly created project label and template.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: tenant-sealed-secret\nspec:\n template: tenant-sealed-secret\n selector:\n matchLabels:\n distribute-image-pull-secret: true\n sync: true\n
MTO will now deploy the sealed secrets mentioned in Template
to namespaces which have the mentioned label. The rest of the work to deploy secret from a sealed secret has to be done by Sealed Secrets Controller.
"},{"location":"reference-guides/extend-default-roles.html","title":"Extending the default access level for tenant members","text":"Bill as the cluster admin wants to extend the default access for tenant members. As an admin of an OpenShift Cluster, Bill can extend the admin, edit, and view ClusterRole using aggregation. Bill will first create a ClusterRole with privileges to resources which Bill wants to extend. Bill will add the aggregation label to the newly created ClusterRole for extending the default ClusterRoles provided by OpenShift.
kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: extend-view-role\n labels:\n rbac.authorization.k8s.io/aggregate-to-view: 'true'\nrules:\n - verbs:\n - get\n - list\n - watch\n apiGroups:\n - user.openshift.io\n resources:\n - groups\n
Note: You can learn more about aggregated-cluster-roles
here
"},{"location":"reference-guides/graph-visualization.html","title":"Graph Visualization on MTO Console","text":"Effortlessly associate tenants with their respective resources using the enhanced graph feature on the MTO Console. This dynamic graph illustrates the relationships between tenants and the resources they create, encompassing both MTO's proprietary resources and native Kubernetes/OpenShift elements.
Example Graph:
graph LR;\n A(alpha)-->B(dev);\n A-->C(prod);\n B-->D(limitrange);\n B-->E(owner-rolebinding);\n B-->F(editor-rolebinding);\n B-->G(viewer-rolebinding);\n C-->H(limitrange);\n C-->I(owner-rolebinding);\n C-->J(editor-rolebinding);\n C-->K(viewer-rolebinding);\n
Explore with an intuitive graph that showcases the relationships between tenants and their resources. The MTO Console's graph feature simplifies the understanding of complex structures, providing you with a visual representation of your tenant's organization.
To view the graph of your tenant, follow the steps below:
- Navigate to
Tenants
page on the MTO Console using the left navigation bar. - Click on
View
of the tenant for which you want to view the graph. - Click on
Graph
tab on the tenant details page.
"},{"location":"reference-guides/integrationconfig.html","title":"Configuring Managed Namespaces and ServiceAccounts in IntegrationConfig","text":"Bill is a cluster admin who can use IntegrationConfig
to configure how Multi Tenant Operator (MTO)
manages the cluster.
By default, MTO watches all namespaces and will enforce all the governing policies on them. All namespaces managed by MTO require the stakater.com/tenant
label. MTO ignores privileged namespaces that are mentioned in the IntegrationConfig, and does not manage them. Therefore, any tenant label on such namespaces will be ignored.
oc create namespace stakater-test\nError from server (Cannot Create namespace stakater-test without label stakater.com/tenant. User: Bill): admission webhook \"vnamespace.kb.io\" denied the request: Cannot CREATE namespace stakater-test without label stakater.com/tenant. User: Bill\n
Bill is trying to create a namespace without the stakater.com/tenant
label. Creating a namespace without this label is only allowed if the namespace is privileged. Privileged namespaces will be ignored by MTO and do not require the said label. Therefore, Bill will add the required regex in the IntegrationConfig, along with any other namespaces which are privileged and should be ignored by MTO - like default
, or namespaces with prefixes like openshift
, kube
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedNamespaces:\n - ^default$\n - ^openshift-.*\n - ^kube-.*\n - ^stakater-.*\n
After mentioning the required regex (^stakater-.*
) under privilegedNamespaces
, Bill can create the namespace without interference.
oc create namespace stakater-test\nnamespace/stakater-test created\n
MTO will also disallow all users who are not tenant owners from performing CRUD operations on namespaces. This will also prevent Service Accounts from performing CRUD operations.
If Bill wants MTO to ignore Service Accounts, then he would simply have to add them in the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedServiceAccounts:\n - system:serviceaccount:openshift\n - system:serviceaccount:stakater\n - system:serviceaccount:kube\n - system:serviceaccount:redhat\n - system:serviceaccount:hive\n
Bill can also use regex patterns to ignore a set of service accounts:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-.*\n - ^system:serviceaccount:stakater-.*\n
"},{"location":"reference-guides/integrationconfig.html#configuring-vault-in-integrationconfig","title":"Configuring Vault in IntegrationConfig","text":"Vault is used to secure, store and tightly control access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.
If Bill (the cluster admin) has Vault configured in his cluster, then he can take benefit from MTO's integration with Vault.
MTO automatically creates Vault secret paths for tenants, where tenant members can securely save their secrets. It also authorizes tenant members to access these secrets via OIDC.
Bill would first have to integrate Vault with MTO by adding the details in IntegrationConfig. For more details
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n vault:\n enabled: true\n endpoint:\n secretReference:\n name: vault-root-token\n namespace: vault\n url: >-\n https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n sso:\n accessorID: auth_oidc_aa6aa9aa\n clientName: vault\n
Bill then creates a tenant for Anna and John:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@acme.org\n viewers:\n users:\n - john@acme.org\n quota: small\n sandbox: false\n
Now Bill goes to Vault
and sees that a path for tenant
has been made under the name bluesky/kv
, confirming that Tenant members with the Owner or Edit roles now have access to the tenant's Vault path.
Now if Anna signs in to Vault via OIDC, she can see her tenant's path and secrets, whereas if John signs in to Vault via OIDC, he can't see his tenant's path or secrets as he doesn't have the access required to view them.
"},{"location":"reference-guides/integrationconfig.html#configuring-rhsso-red-hat-single-sign-on-in-integrationconfig","title":"Configuring RHSSO (Red Hat Single Sign-On) in IntegrationConfig","text":"Red Hat Single Sign-On RHSSO is based on the Keycloak project and enables you to secure your web applications by providing Web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0.
If Bill, the cluster admin, has RHSSO configured in his cluster, then he can benefit from MTO's integration with RHSSO and Vault.
MTO automatically allows tenant members to access Vault via OIDC (RHSSO authentication and authorization), giving them access to the tenant secret paths where they can securely save their secrets.
Bill would first have to integrate RHSSO with MTO by adding the details in IntegrationConfig. Visit here for more details.
rhsso:\n enabled: true\n realm: customer\n endpoint:\n secretReference:\n name: auth-secrets\n namespace: openshift-auth\n url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n
"},{"location":"reference-guides/mattermost.html","title":"Creating Mattermost Teams for your tenant","text":""},{"location":"reference-guides/mattermost.html#requirements","title":"Requirements","text":"MTO-Mattermost-Integration-Operator
Please contact Stakater to install the Mattermost integration operator before following the steps below.
"},{"location":"reference-guides/mattermost.html#steps-to-enable-integration","title":"Steps to enable integration","text":"Bill wants some tenants to also have their own Mattermost Teams. To make sure this happens correctly, Bill will first add the stakater.com/mattermost: true
label to the tenants. The label will enable the mto-mattermost-integration-operator
to create and manage Mattermost Teams based on Tenants.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\n labels:\n stakater.com/mattermost: 'true'\nspec:\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n sandbox: false\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n
Now users can log in to Mattermost to see their Team and the relevant channels associated with it.
The name of the Team is similar to the Tenant name. Notification channels are pre-configured for every team, and can be modified.
"},{"location":"reference-guides/resource-sync-by-tgi.html","title":"Sync Resources Deployed by TemplateGroupInstance","text":"The TemplateGroupInstance CR provides two types of resource sync for the resources mentioned in Template
For the given example, let's consider we want to apply the following template
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n\n - apiVersion: v1\n kind: ServiceAccount\n metadata:\n name: example-automated-thing\n secrets:\n - name: example-automated-thing-token-zyxwv\n
And the following TemplateGroupInstance is used to deploy these resources to namespaces having label kind: build
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
As we can see, in our TGI, we have a field spec.sync
which is set to true
. This will update the resources on two conditions:
- The Template CR is updated
-
The TemplateGroupInstance CR is reconciled/updated
-
If, for any reason, the underlying resource gets updated or deleted, TemplateGroupInstance
CR will try to revert it back to the state mentioned in the Template
CR.
Note
If the updated field of the deployed manifest is not mentioned in the Template, it will not get reverted. For example, if secrets
field is not mentioned in ServiceAcoount in the above Template, it will not get reverted if changed
"},{"location":"reference-guides/resource-sync-by-tgi.html#ignore-resources-updates-on-resources","title":"Ignore Resources Updates on Resources","text":"If the resources mentioned in Template
CR conflict with another controller/operator, and you want TemplateGroupInstance to not actively revert the resource updates, you can add the following label to the conflicting resource multi-tenant-operator/ignore-resource-updates: \"\"
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n\n - apiVersion: v1\n kind: ServiceAccount\n metadata:\n name: example-automated-thing\n labels:\n multi-tenant-operator/ignore-resource-updates: \"\"\n secrets:\n - name: example-automated-thing-token-zyxwv\n
Note
However, this label will not stop Multi Tenant Operator from updating the resource under the following conditions: - Template gets updated - TemplateGroupInstance gets updated - Resource gets deleted
If you don't want to sync the resources in any case, you can disable sync via sync: false
in TemplateGroupInstance
spec.
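A minimal sketch of a TemplateGroupInstance with sync disabled, based on the earlier example, would look like this:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-secret\n selector:\n matchLabels:\n kind: build\n sync: false\n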
"},{"location":"reference-guides/secret-distribution.html","title":"Propagate Secrets from Parent to Descendant namespaces","text":"Secrets like registry
credentials often need to exist in multiple Namespaces, so that Pods within different namespaces can have access to those credentials in the form of secrets.
Manually creating secrets within different namespaces could lead to challenges, such as:
- Someone will have to create secret either manually or via GitOps each time there is a new descendant namespace that needs the secret
- If we update the parent secret, they will have to update the secret in all descendant namespaces
- This could be time-consuming, and a small mistake while creating or updating the secret could lead to unnecessary debugging
With the help of Multi-Tenant Operator's Template feature we can make this secret distribution experience easy.
For example, to copy a Secret called registry
which exists in the example namespace, to new Namespaces whenever they are created, we will first create a Template which has a reference to the registry secret.
It will also push updates to the copied Secrets and keep the propagated secrets in sync with the parent namespace.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: registry-secret\nresources:\n resourceMappings:\n secrets:\n - name: registry\n namespace: example\n
Now using this Template we can propagate the registry secret to different namespaces that have some common set of labels.
For example, we will just add one label kind: registry and all namespaces with this label will get this secret.
For propagating it to different namespaces dynamically, we will have to create another resource called TemplateGroupInstance. TemplateGroupInstance will have the Template and matchLabel mapping as shown below:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: registry-secret-group-instance\nspec:\n template: registry-secret\n selector:\n matchLabels:\n kind: registry\n sync: true\n
After reconciliation, you will be able to see those secrets in the namespaces having the mentioned label.
MTO will keep injecting this secret into new namespaces created with that label.
kubectl get secret registry-secret -n example-ns-1\nNAME STATE AGE\nregistry-secret Active 3m\n\nkubectl get secret registry-secret -n example-ns-2\nNAME STATE AGE\nregistry-secret Active 3m\n
"},{"location":"tutorials/installation.html","title":"Installation","text":"This document contains instructions on installing, uninstalling and configuring Multi Tenant Operator using OpenShift MarketPlace.
-
OpenShift OperatorHub UI
-
CLI/GitOps
-
Uninstall
"},{"location":"tutorials/installation.html#requirements","title":"Requirements","text":" - An OpenShift cluster [v4.7 - v4.12]
"},{"location":"tutorials/installation.html#installing-via-operatorhub-ui","title":"Installing via OperatorHub UI","text":" - After opening OpenShift console click on
Operators
, followed by OperatorHub
from the side menu
- Now search for
Multi Tenant Operator
and then click on Multi Tenant Operator
tile
- Click on the
install
button
- Select
Updated channel
. Select multi-tenant-operator
to install the operator in multi-tenant-operator
namespace from Installed Namespace
dropdown menu. After configuring Update approval
click on the install
button.
Note: Use stable
channel for seamless upgrades. For Production Environment
prefer Manual
approval and use Automatic
for Development Environment
- Wait for the operator to be installed
- Once successfully installed, MTO will be ready to enforce multi-tenancy in your cluster
Note: MTO will be installed in multi-tenant-operator
namespace.
"},{"location":"tutorials/installation.html#configuring-integrationconfig","title":"Configuring IntegrationConfig","text":"IntegrationConfig is required to configure the settings of multi-tenancy for MTO.
- We recommend using the following IntegrationConfig as a starting point
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedNamespaces:\n - default\n - ^openshift-*\n - ^kube-*\n - ^redhat-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:default-*\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n - ^system:serviceaccount:redhat-*\n
For more details and configurations check out IntegrationConfig.
"},{"location":"tutorials/installation.html#installing-via-cli-or-gitops","title":"Installing via CLI OR GitOps","text":" - Create namespace
multi-tenant-operator
oc create namespace multi-tenant-operator\nnamespace/multi-tenant-operator created\n
- Create an OperatorGroup YAML for MTO and apply it in
multi-tenant-operator
namespace.
oc create -f - << EOF\napiVersion: operators.coreos.com/v1\nkind: OperatorGroup\nmetadata:\n name: tenant-operator\n namespace: multi-tenant-operator\nEOF\noperatorgroup.operators.coreos.com/tenant-operator created\n
- Create a subscription YAML for MTO and apply it in
multi-tenant-operator
namespace. To enable console set .spec.config.env[].ENABLE_CONSOLE
to true
. This will create a route resource, which can be used to access the Multi-Tenant-Operator console.
oc create -f - << EOF\napiVersion: operators.coreos.com/v1alpha1\nkind: Subscription\nmetadata:\n name: tenant-operator\n namespace: multi-tenant-operator\nspec:\n channel: stable\n installPlanApproval: Automatic\n name: tenant-operator\n source: certified-operators\n sourceNamespace: openshift-marketplace\n startingCSV: tenant-operator.v0.9.1\n config:\n env:\n - name: ENABLE_CONSOLE\n value: 'true'\nEOF\nsubscription.operators.coreos.com/tenant-operator created\n
Note: To install MTO via GitOps, add the above manifests to your GitOps repository.
- After creating the
subscription
custom resource open OpenShift console and click on Operators
, followed by Installed Operators
from the side menu
- Wait for the installation to complete
- Once the installation is complete click on
Workloads
, followed by Pods
from the side menu and select multi-tenant-operator
project
- Once pods are up and running, MTO will be ready to enforce multi-tenancy in your cluster
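As an optional check from the CLI (a sketch; the exact pod names will differ per installation), you can confirm that the operator pods are running in the multi-tenant-operator namespace:
oc get pods -n multi-tenant-operator\n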
"},{"location":"tutorials/installation.html#configuring-integrationconfig_1","title":"Configuring IntegrationConfig","text":"IntegrationConfig is required to configure the settings of multi-tenancy for MTO.
- We recommend using the following IntegrationConfig as a starting point:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedNamespaces:\n - default\n - ^openshift-*\n - ^kube-*\n - ^redhat-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:default-*\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n - ^system:serviceaccount:redhat-*\n
For more details and configurations check out IntegrationConfig.
"},{"location":"tutorials/installation.html#uninstall-via-operatorhub-ui","title":"Uninstall via OperatorHub UI","text":"You can uninstall MTO by following these steps:
-
Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set spec.onDelete.cleanNamespaces
to false
for all those tenants whose namespaces you want to retain, and spec.onDelete.cleanAppProject
to false
for all those tenants whose AppProject you want to retain (a minimal example follows these steps). For more details check out onDelete
-
After making the required changes open OpenShift console and click on Operators
, followed by Installed Operators
from the side menu
- Now click on uninstall and confirm uninstall.
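For reference, a minimal excerpt of a Tenant spec with both retention flags set (the tenant name is illustrative and the rest of the spec is omitted for brevity):
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n onDelete:\n cleanNamespaces: false\n cleanAppProject: false\n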
"},{"location":"tutorials/installation.html#notes","title":"Notes","text":" - For more details on how to use MTO please refer Tenant tutorial.
- For more details on how to extend your MTO manager ClusterRole please refer extend-admin-clusterrole.
"},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html","title":"Enabling Multi-Tenancy in ArgoCD","text":""},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#argocd-integration-in-multi-tenant-operator","title":"ArgoCD integration in Multi Tenant Operator","text":"With Multi Tenant Operator (MTO), cluster admins can configure multi tenancy in their cluster. Now with ArgoCD integration, multi tenancy can be configured in ArgoCD applications and AppProjects.
MTO (if configured to) will create AppProjects for each tenant. The AppProject will allow tenants to create ArgoCD Applications that can be synced to namespaces owned by those tenants. Cluster admins will also be able to blacklist certain namespaced resources if they want, and allow certain cluster-scoped resources as well (see the NamespaceResourceBlacklist
and ClusterResourceWhitelist
sections in Integration Config docs and Tenant Custom Resource docs).
Note that ArgoCD integration in MTO is completely optional.
"},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#default-argocd-configuration","title":"Default ArgoCD configuration","text":"We have set a default ArgoCD configuration in Multi Tenant Operator that fulfils the following use cases:
- Tenants are able to see only their ArgoCD applications in the ArgoCD frontend
- Tenant 'Owners' and 'Editors' will have full access to their ArgoCD applications
- Tenants in the 'Viewers' group will have read-only access to their ArgoCD applications
- Tenants can sync all namespace-scoped resources, except those that are blacklisted in the spec
- Tenants can only sync cluster-scoped resources that are allow-listed in the spec
- Tenant 'Owners' can configure their own GitOps source repos at a tenant level
- Cluster admins can prevent specific resources from syncing via ArgoCD
- Cluster admins have full access to all ArgoCD applications and AppProjects
- Since ArgoCD integration is on a per-tenant level, namespace-scoped applications are only synced to Tenant's namespaces
"},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#creating-argocd-appprojects-for-your-tenant","title":"Creating ArgoCD AppProjects for your tenant","text":"Bill wants each tenant to also have their own ArgoCD AppProjects. To make sure this happens correctly, Bill will first specify the ArgoCD namespace in the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n argocd:\n namespace: openshift-operators\n ...\n
Afterward, Bill must specify the source GitOps repos for the tenant inside the tenant CR like so:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\nspec:\n argocd:\n sourceRepos:\n # specify source repos here\n - \"https://github.com/stakater/GitOps-config\"\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n sandbox: false\n namespaces:\n withTenantPrefix:\n - build\n - stage\n - dev\n
Now Bill can see that an AppProject has been created for the tenant:
oc get AppProject -A\nNAMESPACE NAME AGE\nopenshift-operators sigma 5d15h\n
The following AppProject is created:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: sigma\n namespace: openshift-operators\nspec:\n destinations:\n - namespace: sigma-build\n server: \"https://kubernetes.default.svc\"\n - namespace: sigma-dev\n server: \"https://kubernetes.default.svc\"\n - namespace: sigma-stage\n server: \"https://kubernetes.default.svc\"\n roles:\n - description: >-\n Role that gives full access to all resources inside the tenant's\n namespace to the tenant owner groups\n groups:\n - saap-cluster-admins\n - stakater-team\n - sigma-owner-group\n name: sigma-owner\n policies:\n - \"p, proj:sigma:sigma-owner, *, *, sigma/*, allow\"\n - description: >-\n Role that gives edit access to all resources inside the tenant's\n namespace to the tenant owner group\n groups:\n - saap-cluster-admins\n - stakater-team\n - sigma-edit-group\n name: sigma-edit\n policies:\n - \"p, proj:sigma:sigma-edit, *, *, sigma/*, allow\"\n - description: >-\n Role that gives view access to all resources inside the tenant's\n namespace to the tenant owner group\n groups:\n - saap-cluster-admins\n - stakater-team\n - sigma-view-group\n name: sigma-view\n policies:\n - \"p, proj:sigma:sigma-view, *, get, sigma/*, allow\"\n sourceRepos:\n - \"https://github.com/stakater/gitops-config\"\n
Users belonging to the Sigma group will now only see applications created by them in the ArgoCD frontend.
"},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#prevent-argocd-from-syncing-certain-namespaced-resources","title":"Prevent ArgoCD from syncing certain namespaced resources","text":"Bill wants tenants to not be able to sync ResourceQuota
and LimitRange
resources to their namespaces. To do this correctly, Bill will specify these resources to blacklist in the ArgoCD portion of the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n argocd:\n namespace: openshift-operators\n namespaceResourceBlacklist:\n - group: \"\"\n kind: ResourceQuota\n - group: \"\"\n kind: LimitRange\n ...\n
Now, if these resources are added to any tenant's project directory in GitOps, ArgoCD will not sync them to the cluster. The AppProject will also have the blacklisted resources added to it:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: sigma\n namespace: openshift-operators\nspec:\n ...\n namespaceResourceBlacklist:\n - group: ''\n kind: ResourceQuota\n - group: ''\n kind: LimitRange\n ...\n
"},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#allow-argocd-to-sync-certain-cluster-wide-resources","title":"Allow ArgoCD to sync certain cluster-wide resources","text":"Bill now wants tenants to be able to sync the Environment
cluster scoped resource to the cluster. To do this correctly, Bill will specify the resource to allow-list in the ArgoCD portion of the Integration Config's Spec:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n argocd:\n namespace: openshift-operators\n clusterResourceWhitelist:\n - group: \"\"\n kind: Environment\n ...\n
Now, if the resource is added to any tenant's project directory in GitOps, ArgoCD will sync them to the cluster. The AppProject will also have the allow-listed resources added to it:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: sigma\n namespace: openshift-operators\nspec:\n ...\n clusterResourceWhitelist:\n - group: \"\"\n kind: Environment\n ...\n
"},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#override-namespaceresourceblacklist-andor-clusterresourcewhitelist-per-tenant","title":"Override NamespaceResourceBlacklist and/or ClusterResourceWhitelist per Tenant","text":"Bill now wants a specific tenant to override the namespaceResourceBlacklist
and/or clusterResourceWhitelist
set via Integration Config. Bill will specify these in the argocd.appProject section of the Tenant spec.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: blue-sky\nspec:\n argocd:\n sourceRepos:\n # specify source repos here\n - \"https://github.com/stakater/GitOps-config\"\n appProject:\n clusterResourceWhitelist:\n - group: admissionregistration.k8s.io\n kind: validatingwebhookconfigurations\n namespaceResourceBlacklist:\n - group: \"\"\n kind: ConfigMap\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n sandbox: false\n namespaces:\n withTenantPrefix:\n - build\n - stage\n
"},{"location":"tutorials/template/template-group-instance.html","title":"More about TemplateGroupInstance","text":""},{"location":"tutorials/template/template-instance.html","title":"More about TemplateInstances","text":""},{"location":"tutorials/template/template.html","title":"Understanding and Utilizing Template","text":""},{"location":"tutorials/template/template.html#creating-templates","title":"Creating Templates","text":"Anna wants to create a Template that she can use to initialize or share common resources across namespaces (e.g. PullSecrets).
Anna can either create a template using manifests
field, covering Kubernetes or custom resources.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Or by using Helm Charts
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: redis\nresources:\n helm:\n releaseName: redis\n chart:\n repository:\n name: redis\n repoUrl: https://charts.bitnami.com/bitnami\n values: |\n redisPort: 6379\n
She can also use the resourceMappings field to copy over secrets and configmaps from one namespace to others.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: resource-mapping\nresources:\n resourceMappings:\n secrets:\n - name: docker-secret\n namespace: bluesky-build\n configMaps:\n - name: tronador-configMap\n namespace: stakater-tronador\n
Note: Resource mapping can be used via TGI to map resources within tenant namespaces or to some other tenant's namespace. If used with TI, the resources will only be mapped if the namespaces belong to the same tenant.
"},{"location":"tutorials/template/template.html#using-templates-with-default-parameters","title":"Using Templates with Default Parameters","text":"apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: namespace-parameterized-restrictions\nparameters:\n # Name of the parameter\n - name: DEFAULT_CPU_LIMIT\n # The default value of the parameter\n value: \"1\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"0.5\"\n # If a parameter is required the template instance will need to set it\n # required: true\n # Make sure only values are entered for this parameter\n validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n manifests:\n - apiVersion: v1\n kind: LimitRange\n metadata:\n name: namespace-limit-range-${namespace}\n spec:\n limits:\n - default:\n cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n defaultRequest:\n cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n type: Container\n
Parameters can be used with both manifests
and helm charts
"},{"location":"tutorials/tenant/assign-quota-tenant.html","title":"Assign Quota to a Tenant","text":""},{"location":"tutorials/tenant/assigning-metadata.html","title":"Assigning Common/Specific Metadata","text":""},{"location":"tutorials/tenant/assigning-metadata.html#distributing-common-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing common labels and annotations to tenant namespaces via Tenant Custom Resource","text":"Bill now wants to add labels/annotations to all the namespaces for a tenant. To create those labels/annotations Bill will just add them into commonMetadata.labels
/commonMetadata.annotations
field in the tenant CR.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n commonMetadata:\n labels:\n app.kubernetes.io/managed-by: tenant-operator\n app.kubernetes.io/part-of: tenant-alpha\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/infra=\nEOF\n
With the above configuration all tenant namespaces will now contain the mentioned labels and annotations.
"},{"location":"tutorials/tenant/assigning-metadata.html#distributing-specific-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing specific labels and annotations to tenant namespaces via Tenant Custom Resource","text":"Bill now wants to add labels/annotations to specific namespaces for a tenant. To create those labels/annotations Bill will just add them into specificMetadata.labels
/specificMetadata.annotations
and specific namespaces in specificMetadata.namespaces
field in the tenant CR.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandboxConfig:\n enabled: true\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n specificMetadata:\n - namespaces:\n - bluesky-anna-aurora-sandbox\n labels:\n app.kubernetes.io/is-sandbox: true\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\nEOF\n
With the above configuration, only the namespaces listed under specificMetadata (here, bluesky-anna-aurora-sandbox) will now contain the mentioned labels and annotations.
"},{"location":"tutorials/tenant/create-sandbox.html","title":"Create Sandbox Namespaces for Tenant Users","text":""},{"location":"tutorials/tenant/create-sandbox.html#assigning-users-sandbox-namespace","title":"Assigning Users Sandbox Namespace","text":"Bill assigned the ownership of bluesky
to Anna
and Anthony
. Now if the users want sandboxes to be made for them, they'll have to ask Bill
to enable sandbox
functionality.
To enable that, Bill will just set enabled: true
within the sandboxConfig
field
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandboxConfig:\n enabled: true\nEOF\n
With the above configuration Anna
and Anthony
will now have new sandboxes created
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\nbluesky-anthony-aurora-sandbox Active 5d5h\nbluesky-john-aurora-sandbox Active 5d5h\n
If Bill wants to make sure that only the sandbox owner can view his sandbox namespace, he can achieve this by setting private: true
within the sandboxConfig
field.
"},{"location":"tutorials/tenant/create-sandbox.html#create-private-sandboxes","title":"Create Private Sandboxes","text":"Bill assigned the ownership of bluesky
to Anna
and Anthony
. Now if the users want sandboxes to be made for them, they'll have to ask Bill
to enable sandbox
functionality. The Users also want to make sure that the sandboxes that are created for them are also only visible to the user they belong to. To enable that, Bill will just set enabled: true
and private: true
within the sandboxConfig
field
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandboxConfig:\n enabled: true\n private: true\nEOF\n
With the above configuration Anna
and Anthony
will now have new sandboxes created
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\nbluesky-anthony-aurora-sandbox Active 5d5h\nbluesky-john-aurora-sandbox Active 5d5h\n
However, from the perspective of Anna
, only her sandbox will be visible
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\n
"},{"location":"tutorials/tenant/create-tenant.html","title":"Creating a Tenant","text":"Bill is a cluster admin who receives a new request from Aurora Solutions CTO asking for a new tenant for Anna's team.
Bill creates a new tenant called bluesky
in the cluster:
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandbox: false\nEOF\n
Bill checks if the new tenant is created:
kubectl get tenants.tenantoperator.stakater.com bluesky\nNAME STATE AGE\nbluesky Active 3m\n
Anna can now log in to the cluster and check if she can create namespaces
kubectl auth can-i create namespaces\nyes\n
However, cluster resources are not accessible to Anna
kubectl auth can-i get namespaces\nno\n\nkubectl auth can-i get persistentvolumes\nno\n
Including the Tenant
resource
kubectl auth can-i get tenants.tenantoperator.stakater.com\nno\n
"},{"location":"tutorials/tenant/create-tenant.html#assign-multiple-users-as-tenant-owner","title":"Assign multiple users as tenant owner","text":"In the example above, Bill assigned the ownership of bluesky
to Anna
. If another user, e.g. Anthony
needs to administer bluesky
, then Bill can assign the ownership of the tenant to that user as well:
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandbox: false\nEOF\n
With the configuration above, Anthony can log in to the cluster and execute
kubectl auth can-i create namespaces\nyes\n
"},{"location":"tutorials/tenant/creating-namespaces.html","title":"Creating Namespaces","text":""},{"location":"tutorials/tenant/creating-namespaces.html#creating-namespaces-via-tenant-custom-resource","title":"Creating Namespaces via Tenant Custom Resource","text":"Bill now wants to create namespaces for dev
, build
and production
environments for the tenant members. To create those namespaces Bill will just add those names within the namespaces
field in the tenant CR. If Bill wants to append the tenant name as a prefix to the namespace name, then he can use the namespaces.withTenantPrefix
field. Otherwise, he can use namespaces.withoutTenantPrefix
for namespaces that do not need the tenant name as a prefix.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n namespaces:\n withTenantPrefix:\n - dev\n - build\n withoutTenantPrefix:\n - prod\nEOF\n
With the above configuration tenant members will now see new namespaces have been created.
kubectl get namespaces\nNAME STATUS AGE\nbluesky-dev Active 5d5h\nbluesky-build Active 5d5h\nprod Active 5d5h\n
Anna as the tenant owner can create new namespaces for her tenant.
apiVersion: v1\nkind: Namespace\nmetadata:\n name: bluesky-production\n labels:\n stakater.com/tenant: bluesky\n
\u26a0\ufe0f Anna is required to add the tenant label stakater.com/tenant: bluesky
which contains the name of her tenant bluesky
, while creating the namespace. If this label is not added or if Anna does not belong to the bluesky
tenant, then Multi Tenant Operator will not allow the creation of that namespace.
When Anna creates the namespace, MTO assigns Anna and other tenant members the roles based on their user types, such as a tenant owner getting the OpenShift admin
role for that namespace.
As a tenant owner, Anna is able to create namespaces.
If you have enabled ArgoCD Multitenancy, our preferred solution is to create tenant namespaces by using the Tenant spec, to avoid syncing issues in the ArgoCD console during namespace creation.
"},{"location":"tutorials/tenant/creating-namespaces.html#add-existing-namespaces-to-tenant-via-gitops","title":"Add Existing Namespaces to Tenant via GitOps","text":"Using GitOps as your preferred development workflow, you can add existing namespaces for your tenants by including the tenant label.
To add an existing namespace to your tenant via GitOps:
- First, migrate your namespace resource to your \u201cwatched\u201d git repository
- Edit your namespace
yaml
to include the tenant label - Tenant label follows the naming convention
stakater.com/tenant: <TENANT_NAME>
- Sync your GitOps repository with your cluster and allow changes to be propagated
- Verify that your Tenant users now have access to the namespace
For example, if Anna, a tenant owner, wants to add the namespace bluesky-dev
to her tenant via GitOps, after migrating her namespace manifest to a \u201cwatched repository\u201d
apiVersion: v1\nkind: Namespace\nmetadata:\n name: bluesky-dev\n
She can then add the tenant label
...\n labels:\n stakater.com/tenant: bluesky\n
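Putting the two snippets together, the complete manifest in the repository would look something like this:
apiVersion: v1\nkind: Namespace\nmetadata:\n name: bluesky-dev\n labels:\n stakater.com/tenant: bluesky\n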
All the users of the Bluesky tenant now have access to the existing namespace.
Additionally, to remove namespaces from a tenant, simply remove the tenant label from the namespace resource and sync your changes to your cluster.
"},{"location":"tutorials/tenant/creating-namespaces.html#remove-namespaces-from-your-cluster-via-gitops","title":"Remove Namespaces from your Cluster via GitOps","text":"GitOps is a quick and efficient way to automate the management of your K8s resources.
To remove namespaces from your cluster via GitOps:
- Remove the
yaml
file containing your namespace configurations from your \u201cwatched\u201d git repository. - ArgoCD automatically sets the
app.kubernetes.io/instance label on resources it manages. It uses this label to select the resources that form the basis of an app. To remove a namespace from a managed ArgoCD app, remove the ArgoCD label app.kubernetes.io/instance from the namespace manifest (see the sketch after this list). - You can edit your namespace manifest through the OpenShift Web Console or with the OpenShift command line tool.
- Now that you have removed your namespace manifest from your watched git repository, and from your managed ArgoCD apps, sync your git repository and allow your changes to be propagated.
- Verify that your namespace has been deleted.
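As a sketch (the namespace name is illustrative), the ArgoCD tracking label can also be removed from a live namespace with the OpenShift CLI:
oc label namespace bluesky-dev app.kubernetes.io/instance-\n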
"},{"location":"tutorials/tenant/custom-rbac.html","title":"Applying Custom RBAC to a Tenant","text":""},{"location":"tutorials/tenant/deleting-tenant.html","title":"Deleting a Tenant","text":""},{"location":"tutorials/tenant/deleting-tenant.html#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted","title":"Retaining tenant namespaces and AppProject when a tenant is being deleted","text":"Bill now wants to delete tenant bluesky
and wants to retain all namespaces and AppProject of the tenant. To retain the namespaces Bill will set spec.onDelete.cleanNamespaces
, and spec.onDelete.cleanAppProject
to false
.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n quota: small\n sandboxConfig:\n enabled: true\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n onDelete:\n cleanNamespaces: false\n cleanAppProject: false\n
With the above configuration all tenant namespaces and AppProject will not be deleted when tenant bluesky
is deleted. By default, the value of spec.onDelete.cleanNamespaces
is also false
and spec.onDelete.cleanAppProject
is true
"},{"location":"tutorials/tenant/tenant-hibernation.html","title":"Hibernating a Tenant","text":""},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-namespaces","title":"Hibernating Namespaces","text":"You can manage workloads in your cluster with MTO by implementing a hibernation schedule for your tenants. Hibernation downsizes the running Deployments and StatefulSets in a tenant\u2019s namespace according to a defined cron schedule. You can set a hibernation schedule for your tenants by adding the \u2018spec.hibernation\u2019 field to the tenant's respective Custom Resource.
hibernation:\n sleepSchedule: 23 * * * *\n wakeSchedule: 26 * * * *\n
spec.hibernation.sleepSchedule
accepts a cron expression indicating the time to put the workloads in your tenant\u2019s namespaces to sleep.
spec.hibernation.wakeSchedule
accepts a cron expression indicating the time to wake the workloads in your tenant\u2019s namespaces up.
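For instance, a schedule that puts workloads to sleep at 8 PM and wakes them at 8 AM on weekdays uses the same cron syntax:
hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n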
Note
Both sleep and wake schedules must be specified for your Hibernation schedule to be valid.
Additionally, adding the hibernation.stakater.com/exclude: 'true'
annotation to a namespace excludes it from hibernating.
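For example, an existing namespace can be excluded with a single command (a sketch; the namespace name is illustrative):
kubectl annotate namespace build hibernation.stakater.com/exclude='true'\nnamespace/build annotated\n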
Note
This is only true for hibernation applied via the Tenant Custom Resource, and does not apply for hibernation done by manually creating a ResourceSupervisor (details about that below).
Note
This will not wake up an already sleeping namespace before the wake schedule.
"},{"location":"tutorials/tenant/tenant-hibernation.html#resource-supervisor","title":"Resource Supervisor","text":"Adding a Hibernation Schedule to a Tenant creates an accompanying ResourceSupervisor Custom Resource. The Resource Supervisor stores the Hibernation schedules and manages the current and previous states of all the applications, whether they are sleeping or awake.
When the sleep timer is activated, the Resource Supervisor controller stores the details of your applications (including the number of replicas, configurations, etc.) in the applications' namespaces and then puts your applications to sleep. When the wake timer is activated, the controller wakes up the applications using their stored details.
Enabling ArgoCD support for Tenants will also hibernate applications in the tenants' appProjects
.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: sigma\nspec:\n argocd:\n appProjects:\n - sigma\n namespace: openshift-gitops\n hibernation:\n sleepSchedule: 42 * * * *\n wakeSchedule: 45 * * * *\n namespaces:\n - tenant-ns1\n - tenant-ns2\n
Currently, Hibernation is available only for StatefulSets and Deployments.
"},{"location":"tutorials/tenant/tenant-hibernation.html#manual-creation-of-resourcesupervisor","title":"Manual creation of ResourceSupervisor","text":"Hibernation can also be applied by creating a ResourceSupervisor resource manually. The ResourceSupervisor definition will contain the hibernation cron schedule, the names of the namespaces to be hibernated, and the names of the ArgoCD AppProjects whose ArgoCD Applications have to be hibernated (as per the given schedule).
This method can be used to hibernate:
- Some specific namespaces and AppProjects in a tenant
- A set of namespaces and AppProjects belonging to different tenants
- Namespaces and AppProjects belonging to a tenant that the cluster admin is not a member of
- Non-tenant namespaces and ArgoCD AppProjects
As an example, the following ResourceSupervisor could be created manually, to apply hibernation explicitly to the 'ns1' and 'ns2' namespaces, and to the 'sample-app-project' AppProject.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: hibernator\nspec:\n argocd:\n appProjects:\n - sample-app-project\n namespace: openshift-gitops\n hibernation:\n sleepSchedule: 42 * * * *\n wakeSchedule: 45 * * * *\n namespaces:\n - ns1\n - ns2\n
"},{"location":"tutorials/tenant/tenant-hibernation.html#freeing-up-unused-resources-with-hibernation","title":"Freeing up unused resources with hibernation","text":""},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-a-tenant_1","title":"Hibernating a tenant","text":"Bill is a cluster administrator who wants to free up unused cluster resources at nighttime, in an effort to reduce costs (when the cluster isn't being used).
First, Bill creates a tenant with the hibernation
schedules mentioned in the spec, or adds the hibernation field to an existing tenant:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\nspec:\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n namespaces:\n withoutTenantPrefix:\n - build\n - stage\n - dev\n
The schedules above will put all the Deployments
and StatefulSets
within the tenant's namespaces to sleep, by reducing their pod count to 0 at 8 PM every weekday. At 8 AM on weekdays, the namespaces will then wake up by restoring their applications' previous pod counts.
Bill can verify this behaviour by checking the newly created ResourceSupervisor resource at run time:
oc get ResourceSupervisor -A\nNAME AGE\nsigma 5m\n
The ResourceSupervisor will look like this at 'running' time (as per the schedule):
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - build\n - stage\n - dev\nstatus:\n currentStatus: running\n nextReconcileTime: '2022-10-12T20:00:00Z'\n
The ResourceSupervisor will look like this at 'sleeping' time (as per the schedule):
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - build\n - stage\n - dev\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: build\n kind: Deployment\n name: example\n replicas: 3\n - Namespace: stage\n kind: Deployment\n name: example\n replicas: 3\n
Bill wants to prevent the build
namespace from going to sleep, so he can add the hibernation.stakater.com/exclude: 'true'
annotation to it. The ResourceSupervisor will now look like this after reconciling:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - stage\n - dev\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: stage\n kind: Deployment\n name: example\n replicas: 3\n
"},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-namespaces-andor-argocd-applications-with-resourcesupervisor","title":"Hibernating namespaces and/or ArgoCD Applications with ResourceSupervisor","text":"Bill, the cluster administrator, wants to hibernate a collection of namespaces and AppProjects belonging to multiple different tenants. He can do so by creating a ResourceSupervisor manually, specifying the hibernation schedule in its spec, the namespaces and ArgoCD Applications that need to be hibernated as per the mentioned schedule. Bill can also use the same method to hibernate some namespaces and ArgoCD Applications that do not belong to any tenant on his cluster.
The example given below will hibernate the ArgoCD Applications in the 'test-app-project' AppProject; and it will also hibernate the 'ns2' and 'ns4' namespaces.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: test-resource-supervisor\nspec:\n argocd:\n appProjects:\n - test-app-project\n namespace: argocd-ns\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - ns2\n - ns4\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: ns2\n kind: Deployment\n name: test-deployment\n replicas: 3\n
"},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html","title":"Enabling Multi-Tenancy in Vault","text":""},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#vault-multitenancy","title":"Vault Multitenancy","text":"HashiCorp Vault is an identity-based secret and encryption management system. Vault validates and authorizes a system's clients (users, machines, apps) before providing them access to secrets or stored sensitive data.
"},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#vault-integration-in-multi-tenant-operator","title":"Vault integration in Multi Tenant Operator","text":""},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#service-account-auth-in-vault","title":"Service Account Auth in Vault","text":"MTO enables the Kubernetes auth method which can be used to authenticate with Vault using a Kubernetes Service Account Token. When enabled, for every tenant namespace, MTO automatically creates policies and roles that allow the service accounts present in those namespaces to read secrets at tenant's path in Vault. The name of the role is the same as namespace name.
These service accounts are required to have the stakater.com/vault-access: true label, so they can be authenticated with Vault via MTO.
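As a sketch, a ServiceAccount that should be able to authenticate with Vault would carry this label (the name and namespace are illustrative):
apiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: example-sa\n namespace: bluesky-dev\n labels:\n stakater.com/vault-access: \"true\"\n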
The diagram below shows how MTO enables ServiceAccounts to read secrets from Vault.
"},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#user-oidc-auth-in-vault","title":"User OIDC Auth in Vault","text":"This requires a running RHSSO(RedHat Single Sign On)
instance integrated with Vault over OIDC login method.
MTO integration with Vault and RHSSO provides a way for users to log in to Vault where they only have access to relevant tenant paths.
Once both integrations are set up with IntegrationConfig CR, MTO links tenant users to specific client roles named after their tenant under Vault client in RHSSO.
After that, MTO creates specific policies in Vault for its tenant users.
Mapping of tenant roles to Vault is shown below
| Tenant Role | Vault Path | Vault Capabilities |
| --- | --- | --- |
| Owner, Editor | (tenantName)/* | Create, Read, Update, Delete, List |
| Owner, Editor | sys/mounts/(tenantName)/* | Create, Read, Update, Delete, List |
| Owner, Editor | managed-addons/* | Read, List |
| Viewer | (tenantName)/* | Read |
A simple user login workflow is shown in the diagram below.
"},{"location":"usecases/admin-clusterrole.html","title":"Extending Admin ClusterRole","text":"Bill as the cluster admin want to add additional rules for admin ClusterRole.
Bill can extend the admin
role for MTO using the aggregation label for the admin ClusterRole. Bill will create a new ClusterRole with all the permissions he needs to add for MTO, and apply the aggregation label to the newly created ClusterRole.
kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: extend-admin-role\n labels:\n rbac.authorization.k8s.io/aggregate-to-admin: 'true'\nrules:\n - verbs:\n - create\n - update\n - patch\n - delete\n apiGroups:\n - user.openshift.io\n resources:\n - groups\n
Note: You can learn more about aggregated-cluster-roles
here
"},{"location":"usecases/admin-clusterrole.html#whats-next","title":"What\u2019s next","text":"See how Bill can hibernate unused namespaces at night
"},{"location":"usecases/argocd.html","title":"ArgoCD","text":""},{"location":"usecases/argocd.html#creating-argocd-appprojects-for-your-tenant","title":"Creating ArgoCD AppProjects for your tenant","text":"Bill wants each tenant to also have their own ArgoCD AppProjects. To make sure this happens correctly, Bill will first specify the ArgoCD namespace in the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n argocd:\n namespace: openshift-operators\n ...\n
Afterwards, Bill must specify the source GitOps repos for the tenant inside the tenant CR like so:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\nspec:\n argocd:\n sourceRepos:\n # specify source repos here\n - \"https://github.com/stakater/GitOps-config\"\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n sandbox: false\n namespaces:\n withTenantPrefix:\n - build\n - stage\n - dev\n
Now Bill can see that an AppProject has been created for the tenant:
oc get AppProject -A\nNAMESPACE NAME AGE\nopenshift-operators sigma 5d15h\n
The following AppProject is created:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: sigma\n namespace: openshift-operators\nspec:\n destinations:\n - namespace: sigma-build\n server: \"https://kubernetes.default.svc\"\n - namespace: sigma-dev\n server: \"https://kubernetes.default.svc\"\n - namespace: sigma-stage\n server: \"https://kubernetes.default.svc\"\n roles:\n - description: >-\n Role that gives full access to all resources inside the tenant's\n namespace to the tenant owner group\n groups:\n - saap-cluster-admins\n - stakater-team\n - sigma-owner-group\n name: sigma-owner\n policies:\n - \"p, proj:sigma:sigma-owner, *, *, sigma/*, allow\"\n - description: >-\n Role that gives edit access to all resources inside the tenant's\n namespace to the tenant owner group\n groups:\n - saap-cluster-admins\n - stakater-team\n - sigma-edit-group\n name: sigma-edit\n policies:\n - \"p, proj:sigma:sigma-edit, *, *, sigma/*, allow\"\n - description: >-\n Role that gives view access to all resources inside the tenant's\n namespace to the tenant owner group\n groups:\n - saap-cluster-admins\n - stakater-team\n - sigma-view-group\n name: sigma-view\n policies:\n - \"p, proj:sigma:sigma-view, *, get, sigma/*, allow\"\n sourceRepos:\n - \"https://github.com/stakater/gitops-config\"\n
Users belonging to the Sigma group will now only see applications created by them in the ArgoCD frontend.
"},{"location":"usecases/argocd.html#prevent-argocd-from-syncing-certain-namespaced-resources","title":"Prevent ArgoCD from syncing certain namespaced resources","text":"Bill wants tenants to not be able to sync ResourceQuota
and LimitRange
resources to their namespaces. To do this correctly, Bill will specify these resources to blacklist in the ArgoCD portion of the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n argocd:\n namespace: openshift-operators\n namespaceResourceBlacklist:\n - group: \"\"\n kind: ResourceQuota\n - group: \"\"\n kind: LimitRange\n ...\n
Now, if these resources are added to any tenant's project directory in GitOps, ArgoCD will not sync them to the cluster. The AppProject will also have the blacklisted resources added to it:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: sigma\n namespace: openshift-operators\nspec:\n ...\n namespaceResourceBlacklist:\n - group: ''\n kind: ResourceQuota\n - group: ''\n kind: LimitRange\n ...\n
"},{"location":"usecases/argocd.html#allow-argocd-to-sync-certain-cluster-wide-resources","title":"Allow ArgoCD to sync certain cluster-wide resources","text":"Bill now wants tenants to be able to sync the Environment
cluster scoped resource to the cluster. To do this correctly, Bill will specify the resource to allow-list in the ArgoCD portion of the Integration Config's Spec:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n ...\n argocd:\n namespace: openshift-operators\n clusterResourceWhitelist:\n - group: \"\"\n kind: Environment\n ...\n
Now, if the resource is added to any tenant's project directory in GitOps, ArgoCD will sync them to the cluster. The AppProject will also have the allow-listed resources added to it:
apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n name: sigma\n namespace: openshift-operators\nspec:\n ...\n clusterResourceWhitelist:\n - group: \"\"\n kind: Environment\n ...\n
"},{"location":"usecases/argocd.html#override-namespaceresourceblacklist-andor-clusterresourcewhitelist-per-tenant","title":"Override NamespaceResourceBlacklist and/or ClusterResourceWhitelist per Tenant","text":"Bill now wants a specific tenant to override the namespaceResourceBlacklist
and/or clusterResourceWhitelist
set via Integration Config. Bill will specify these in the argocd.appProject section of the Tenant spec.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: blue-sky\nspec:\n argocd:\n sourceRepos:\n # specify source repos here\n - \"https://github.com/stakater/GitOps-config\"\n appProject:\n clusterResourceWhitelist:\n - group: admissionregistration.k8s.io\n kind: validatingwebhookconfigurations\n namespaceResourceBlacklist:\n - group: \"\"\n kind: ConfigMap\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n sandbox: false\n namespaces:\n withTenantPrefix:\n - build\n - stage\n
"},{"location":"usecases/configuring-multitenant-network-isolation.html","title":"Configuring Multi-Tenant Isolation with Network Policy Template","text":"Bill is a cluster admin who wants to configure network policies to provide multi-tenant network isolation.
First, Bill creates a template for network policies:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: tenant-network-policy\nresources:\n manifests:\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-same-namespace\n spec:\n podSelector: {}\n ingress:\n - from:\n - podSelector: {}\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-from-openshift-monitoring\n spec:\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n network.openshift.io/policy-group: monitoring\n podSelector: {}\n policyTypes:\n - Ingress\n - apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: allow-from-openshift-ingress\n spec:\n ingress:\n - from:\n - namespaceSelector:\n matchLabels:\n network.openshift.io/policy-group: ingress\n podSelector: {}\n policyTypes:\n - Ingress\n
Once the template has been created, Bill edits the IntegrationConfig to add unique label to all tenant projects:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n project:\n labels:\n stakater.com/workload-monitoring: \"true\"\n tenant-network-policy: \"true\"\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\n sandbox:\n labels:\n stakater.com/kind: sandbox\n privilegedNamespaces:\n - default\n - ^openshift-*\n - ^kube-*\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift-*\n - ^system:serviceaccount:kube-*\n
Bill has added a new label tenant-network-policy: \"true\"
in the project section of the IntegrationConfig; MTO will now add that label to all tenant projects.
Finally Bill creates a TemplateGroupInstance
which will distribute the network policies using the newly added project label and template.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: tenant-network-policy-group\nspec:\n template: tenant-network-policy\n selector:\n matchLabels:\n tenant-network-policy: \"true\"\n sync: true\n
MTO will now deploy the network policies mentioned in Template
to all projects matching the label selector mentioned in the TemplateGroupInstance.
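Bill can then spot-check any labelled project to confirm that the policies were created (a sketch; the namespace name is illustrative):
oc get networkpolicy -n bluesky-dev\n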
"},{"location":"usecases/custom-roles.html","title":"Changing the default access level for tenant owners","text":"This feature allows the cluster admins to change the default roles assigned to Tenant owner, editor, viewer groups.
For example, Bill as the cluster admin may want to reduce the privileges that tenant owners have, so that they cannot create or edit Roles or bind them. As an admin of an OpenShift cluster, Bill can do this by assigning the edit
role to all tenant owners. This is easily achieved by modifying the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - edit\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n
Once all namespaces reconcile, the old admin
RoleBindings should get replaced with the edit
ones for each tenant owner.
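Bill can confirm the change in any tenant namespace (a sketch; the namespace name is illustrative):
oc get rolebindings -n bluesky-dev\n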
"},{"location":"usecases/custom-roles.html#giving-specific-permissions-to-some-tenants","title":"Giving specific permissions to some tenants","text":"Bill now wants the owners of the tenants bluesky
and alpha
to have admin
permissions over their namespaces. Custom roles feature will allow Bill to do this, by modifying the IntegrationConfig like this:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n tenantRoles:\n default:\n owner:\n clusterRoles:\n - edit\n editor:\n clusterRoles:\n - edit\n viewer:\n clusterRoles:\n - view\n custom:\n - labelSelector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n owner:\n clusterRoles:\n - admin\n - labelSelector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - bluesky\n owner:\n clusterRoles:\n - admin\n
New Bindings will be created for the Tenant owners of bluesky
and alpha
, corresponding to the admin
Role. Bindings for editors and viewers will be inherited from the default roles. All other Tenant owners will have an edit Role bound to them within their namespaces.
"},{"location":"usecases/deploying-templates.html","title":"Distributing Resources in Namespaces","text":"Multi Tenant Operator has three Custom Resources which can cover this need using the Template
CR, depending upon the conditions and preference.
- TemplateGroupInstance
- TemplateInstance
- Tenant
Stakater Team, however, encourages the use of TemplateGroupInstance
to distribute resources in multiple namespaces as it is optimized for better performance.
"},{"location":"usecases/deploying-templates.html#deploying-template-to-namespaces-via-templategroupinstances","title":"Deploying Template to Namespaces via TemplateGroupInstances","text":"Bill, the cluster admin, wants to deploy a docker pull secret in namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
Afterwards, Bill can see that secrets have been successfully created in all label matching namespaces.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 2m\n
TemplateGroupInstance
can also target specific tenants or all tenant namespaces under a single YAML definition.
"},{"location":"usecases/deploying-templates.html#templategroupinstance-for-multiple-tenants","title":"TemplateGroupInstance for multiple Tenants","text":"It can be done by using the matchExpressions
field, dividing the tenant label into key and values.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\n sync: true\n
"},{"location":"usecases/deploying-templates.html#templategroupinstance-for-all-tenants","title":"TemplateGroupInstance for all Tenants","text":"This can also be done by using the matchExpressions
field, using just the tenant label key stakater.com/tenant
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: Exists\n sync: true\n
"},{"location":"usecases/deploying-templates.html#deploying-template-to-namespaces-via-tenant","title":"Deploying Template to Namespaces via Tenant","text":"Bill is a cluster admin who wants to deploy a docker-pull-secret in Anna's tenant namespaces where certain labels exists.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Bill edits Anna's tenant and populates the templateInstances field:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n editors:\n users:\n - john@aurora.org\n quota: small\n sandboxConfig:\n enabled: true\n templateInstances:\n - spec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n
Multi Tenant Operator will deploy the TemplateInstances mentioned in the templateInstances field. TemplateInstances will only be applied in those namespaces which belong to Anna's tenant and have the matching label of kind: build.
So now Anna adds the label kind: build to her existing namespace bluesky-anna-aurora-sandbox, and after adding the label she sees that the secret has been created.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
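For reference, a minimal sketch of how Anna could add that label with kubectl:
kubectl label namespace bluesky-anna-aurora-sandbox kind=build\nnamespace/bluesky-anna-aurora-sandbox labeled\n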
"},{"location":"usecases/deploying-templates.html#deploying-template-to-a-namespace-via-templateinstance","title":"Deploying Template to a Namespace via TemplateInstance","text":"Anna wants to deploy a docker pull secret in her namespace.
First Anna asks Bill, the cluster admin, to create a template of the secret for her:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Once the template has been created, Anna creates a TemplateInstance
in her namespace referring to the Template
she wants to deploy:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: docker-pull-secret-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: docker-pull-secret\n sync: true\n
Once this is created, Anna can see that the secret has been successfully applied.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"usecases/deploying-templates.html#passing-parameters-to-template-via-templateinstance-templategroupinstance-or-tenant","title":"Passing Parameters to Template via TemplateInstance, TemplateGroupInstance or Tenant","text":"Anna wants to deploy a LimitRange resource to certain namespaces.
First Anna asks Bill, the cluster admin, to create a template with parameters for LimitRange for her:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: namespace-parameterized-restrictions\nparameters:\n # Name of the parameter\n - name: DEFAULT_CPU_LIMIT\n # The default value of the parameter\n value: \"1\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"0.5\"\n # If a parameter is required the template instance will need to set it\n # required: true\n # Make sure only values are entered for this parameter\n validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n manifests:\n - apiVersion: v1\n kind: LimitRange\n metadata:\n name: namespace-limit-range-${namespace}\n spec:\n limits:\n - default:\n cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n defaultRequest:\n cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n type: Container\n
Afterwards, Anna creates a TemplateInstance
in her namespace referring to the Template
she wants to deploy:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: namespace-parameterized-restrictions-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: namespace-parameterized-restrictions\n sync: true\nparameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n
If she wants to distribute the same Template over multiple namespaces, she can use TemplateGroupInstance
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: namespace-parameterized-restrictions-tgi\nspec:\n template: namespace-parameterized-restrictions\n sync: true\n selector:\n matchExpressions:\n - key: stakater.com/tenant\n operator: In\n values:\n - alpha\n - beta\nparameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n
Or she can use her tenant to cover only the tenant namespaces.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n editors:\n users:\n - john@aurora.org\n quota: small\n sandboxConfig:\n enabled: true\n templateInstances:\n - spec:\n template: namespace-parameterized-restrictions\n sync: true\n parameters:\n - name: DEFAULT_CPU_LIMIT\n value: \"1.5\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"1\"\n selector:\n matchLabels:\n kind: build\n
"},{"location":"usecases/distributing-resources.html","title":"Copying Secrets and Configmaps across Tenant Namespaces via TGI","text":"Bill is a cluster admin who wants to map a docker-pull-secret
, present in a build
namespace, to tenant namespaces where certain labels exist.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n resourceMappings:\n secrets:\n - name: docker-pull-secret\n namespace: build\n
Once the template has been created, Bill makes a TemplateGroupInstance
referring to the Template
he wants to deploy with MatchLabels
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: docker-secret-group-instance\nspec:\n template: docker-pull-secret\n selector:\n matchLabels:\n kind: build\n sync: true\n
Afterwards, Bill can see that the secret has been successfully mapped in all matching namespaces.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"usecases/distributing-resources.html#mapping-resources-within-tenant-namespaces-via-ti","title":"Mapping Resources within Tenant Namespaces via TI","text":"Anna is a tenant owner who wants to map a docker-pull-secret
, present in the bluesky-build
namespace, to bluesky-anna-aurora-sandbox
namespace.
First, Bill creates a template:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n resourceMappings:\n secrets:\n - name: docker-pull-secret\n namespace: bluesky-build\n
Once the template has been created, Anna creates a TemplateInstance
in bluesky-anna-aurora-sandbox
namespace, referring to the Template
.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n name: docker-secret-instance\n namespace: bluesky-anna-aurora-sandbox\nspec:\n template: docker-pull-secret\n sync: true\n
Afterwards, Anna can see that the secret has been successfully mapped in her namespace.
kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME STATE AGE\ndocker-pull-secret Active 3m\n
"},{"location":"usecases/distributing-secrets-using-sealed-secret-template.html","title":"Distributing Secrets Using Sealed Secrets Template","text":"Bill is a cluster admin who wants to provide a mechanism for distributing secrets in multiple namespaces. For this, he wants to use Sealed Secrets as the solution by adding them to MTO Template CR
First, Bill creates a Template in which Sealed Secret is mentioned:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: tenant-sealed-secret\nresources:\n manifests:\n - kind: SealedSecret\n apiVersion: bitnami.com/v1alpha1\n metadata:\n name: mysecret\n spec:\n encryptedData:\n .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....\n template:\n type: kubernetes.io/dockerconfigjson\n # this is an example of labels and annotations that will be added to the output secret\n metadata:\n labels:\n \"jenkins.io/credentials-type\": usernamePassword\n annotations:\n \"jenkins.io/credentials-description\": credentials from Kubernetes\n
Once the template has been created, Bill has to edit the Tenant
to add a unique label to the namespaces in which the secret has to be deployed. For this, he can use the support for common and specific labels across namespaces.
Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n\n # use this if you want to add label to some specific namespaces\n specificMetadata:\n - namespaces:\n - test-namespace\n labels:\n distribute-image-pull-secret: true\n\n # use this if you want to add label to all namespaces under your tenant\n commonMetadata:\n labels:\n distribute-image-pull-secret: true\n
Bill has added support for the new label distribute-image-pull-secret: true
for tenant namespaces; MTO will now apply that label to the namespaces selected by whichever metadata field he used.
Finally, Bill creates a TemplateGroupInstance
which will deploy the sealed secrets using the newly created project label and template.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: tenant-sealed-secret\nspec:\n template: tenant-sealed-secret\n selector:\n matchLabels:\n distribute-image-pull-secret: true\n sync: true\n
MTO will now deploy the sealed secrets mentioned in Template
to namespaces which have the mentioned label. The rest of the work, unsealing the SealedSecret into an actual Secret, is done by the Sealed Secrets controller.
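As a rough end-to-end check (the namespace name below is only an example of a labelled tenant namespace, and this assumes the Sealed Secrets CRDs are installed), Bill could confirm that both the SealedSecret and the unsealed Secret exist:
kubectl get sealedsecret mysecret -n bluesky-dev\nkubectl get secret mysecret -n bluesky-dev\n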
"},{"location":"usecases/extend-default-roles.html","title":"Extending the default access level for tenant members","text":"Bill as the cluster admin wants to extend the default access for tenant members. As an admin of an OpenShift Cluster, Bill can extend the admin, edit, and view ClusterRole using aggregation. Bill will first create a ClusterRole with privileges to resources which Bill wants to extend. Bill will add the aggregation label to the newly created ClusterRole for extending the default ClusterRoles provided by OpenShift.
kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: extend-view-role\n labels:\n rbac.authorization.k8s.io/aggregate-to-view: 'true'\nrules:\n - verbs:\n - get\n - list\n - watch\n apiGroups:\n - user.openshift.io\n resources:\n - groups\n
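As a quick, non-authoritative sanity check (the exact output depends on the cluster), Bill can confirm that the aggregation was picked up by inspecting the built-in view ClusterRole and looking for the new rule on groups from user.openshift.io:
oc get clusterrole view -o yaml | grep -B2 -A6 'user.openshift.io'\n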
Note: You can learn more about aggregated-cluster-roles
here
"},{"location":"usecases/hibernation.html","title":"Freeing up unused resources with hibernation","text":""},{"location":"usecases/hibernation.html#hibernating-a-tenant","title":"Hibernating a tenant","text":"Bill is a cluster administrator who wants to free up unused cluster resources at nighttime, in an effort to reduce costs (when the cluster isn't being used).
First, Bill creates a tenant with the hibernation
schedules mentioned in the spec, or adds the hibernation field to an existing tenant:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\nspec:\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n namespaces:\n withoutTenantPrefix:\n - build\n - stage\n - dev\n
The schedules above will put all the Deployments
and StatefulSets
within the tenant's namespaces to sleep, by reducing their pod count to 0 at 8 PM every weekday. At 8 AM on weekdays, the namespaces will then wake up by restoring their applications' previous pod counts.
Bill can verify this behaviour by checking the newly created ResourceSupervisor resource at run time:
oc get ResourceSupervisor -A\nNAME AGE\nsigma 5m\n
The ResourceSupervisor will look like this at 'running' time (as per the schedule):
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - build\n - stage\n - dev\nstatus:\n currentStatus: running\n nextReconcileTime: '2022-10-12T20:00:00Z'\n
The ResourceSupervisor will look like this at 'sleeping' time (as per the schedule):
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - build\n - stage\n - dev\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: build\n kind: Deployment\n name: example\n replicas: 3\n - Namespace: stage\n kind: Deployment\n name: example\n replicas: 3\n
Bill wants to prevent the build
namespace from going to sleep, so he can add the hibernation.stakater.com/exclude: 'true'
annotation to it. The ResourceSupervisor will now look like this after reconciling:
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: example\nspec:\n argocd:\n appProjects: []\n namespace: ''\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - stage\n - dev\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: stage\n kind: Deployment\n name: example\n replicas: 3\n
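For reference, a minimal way for Bill to set that exclusion annotation directly on the namespace is:
kubectl annotate namespace build hibernation.stakater.com/exclude=true\n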
"},{"location":"usecases/hibernation.html#hibernating-namespaces-andor-argocd-applications-with-resourcesupervisor","title":"Hibernating namespaces and/or ArgoCD Applications with ResourceSupervisor","text":"Bill, the cluster administrator, wants to hibernate a collection of namespaces and AppProjects belonging to multiple different tenants. He can do so by creating a ResourceSupervisor manually, specifying the hibernation schedule in its spec, the namespaces and ArgoCD Applications that need to be hibernated as per the mentioned schedule. Bill can also use the same method to hibernate some namespaces and ArgoCD Applications that do not belong to any tenant on his cluster.
The example given below will hibernate the ArgoCD Applications in the 'test-app-project' AppProject; and it will also hibernate the 'ns2' and 'ns4' namespaces.
apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n name: test-resource-supervisor\nspec:\n argocd:\n appProjects:\n - test-app-project\n namespace: argocd-ns\n hibernation:\n sleepSchedule: 0 20 * * 1-5\n wakeSchedule: 0 8 * * 1-5\n namespaces:\n - ns2\n - ns4\nstatus:\n currentStatus: sleeping\n nextReconcileTime: '2022-10-13T08:00:00Z'\n sleepingApplications:\n - Namespace: ns2\n kind: Deployment\n name: test-deployment\n replicas: 3\n
"},{"location":"usecases/integrationconfig.html","title":"Configuring Managed Namespaces and ServiceAccounts in IntegrationConfig","text":"Bill is a cluster admin who can use IntegrationConfig
to configure how Multi Tenant Operator (MTO)
manages the cluster.
By default, MTO watches all namespaces and will enforce all the governing policies on them. All namespaces managed by MTO require the stakater.com/tenant
label. MTO ignores privileged namespaces that are mentioned in the IntegrationConfig, and does not manage them. Therefore, any tenant label on such namespaces will be ignored.
oc create namespace stakater-test\nError from server (Cannot Create namespace stakater-test without label stakater.com/tenant. User: Bill): admission webhook \"vnamespace.kb.io\" denied the request: Cannot CREATE namespace stakater-test without label stakater.com/tenant. User: Bill\n
Bill is trying to create a namespace without the stakater.com/tenant
label. Creating a namespace without this label is only allowed if the namespace is privileged. Privileged namespaces will be ignored by MTO and do not require the said label. Therefore, Bill will add the required regex in the IntegrationConfig, along with any other namespaces which are privileged and should be ignored by MTO - like default
, or namespaces with prefixes like openshift
, kube
:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedNamespaces:\n - ^default$\n - ^openshift*\n - ^kube*\n - ^stakater*\n
After mentioning the required regex (^stakater*
) under privilegedNamespaces
, Bill can create the namespace without interference.
oc create namespace stakater-test\nnamespace/stakater-test created\n
MTO will also disallow all users who are not tenant owners from performing CRUD operations on namespaces. This will also prevent Service Accounts from performing CRUD operations.
If Bill wants MTO to ignore Service Accounts, then he would simply have to add them in the IntegrationConfig:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedServiceAccounts:\n - system:serviceaccount:openshift\n - system:serviceaccount:stakater\n - system:serviceaccount:kube\n - system:serviceaccount:redhat\n - system:serviceaccount:hive\n
Bill can also use regex patterns to ignore a set of service accounts:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n openshift:\n privilegedServiceAccounts:\n - ^system:serviceaccount:openshift*\n - ^system:serviceaccount:stakater*\n
"},{"location":"usecases/integrationconfig.html#configuring-vault-in-integrationconfig","title":"Configuring Vault in IntegrationConfig","text":"Vault is used to secure, store and tightly control access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.
If Bill (the cluster admin) has Vault configured in his cluster, then he can benefit from MTO's integration with Vault.
MTO automatically creates Vault secret paths for tenants, where tenant members can securely save their secrets. It also authorizes tenant members to access these secrets via OIDC.
Bill would first have to integrate Vault with MTO by adding the details in the IntegrationConfig. For more details, see the IntegrationConfig documentation.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n name: tenant-operator-config\n namespace: multi-tenant-operator\nspec:\n vault:\n enabled: true\n endpoint:\n secretReference:\n name: vault-root-token\n namespace: vault\n url: >-\n https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n sso:\n accessorID: auth_oidc_aa6aa9aa\n clientName: vault\n
Bill then creates a tenant for Anna and John:
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@acme.org\n viewers:\n users:\n - john@acme.org\n quota: small\n sandbox: false\n
Now Bill goes to Vault
and sees that a path for tenant
has been made under the name bluesky/kv
, confirming that Tenant members with the Owner or Edit roles now have access to the tenant's Vault path.
Now if Anna signs in to Vault via OIDC, she can see her tenant's path and secrets. Whereas if John signs in to Vault via OIDC, he can't see the tenant's path or secrets, as he doesn't have the access required to view them.
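As a rough illustration only (the exact KV mount layout and OIDC role configuration depend on the Vault setup), Anna could verify this from a terminal with the Vault CLI:
vault login -method=oidc\nvault kv list bluesky/kv\n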
"},{"location":"usecases/integrationconfig.html#configuring-rhsso-red-hat-single-sign-on-in-integrationconfig","title":"Configuring RHSSO (Red Hat Single Sign-On) in IntegrationConfig","text":"Red Hat Single Sign-On RHSSO is based on the Keycloak project and enables you to secure your web applications by providing Web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0.
If Bill, the cluster admin, has RHSSO configured in his cluster, then he can benefit from MTO's integration with RHSSO and Vault.
MTO automatically allows tenant members to access Vault via OIDC (RHSSO authentication and authorization), giving them access to their tenant's secret paths where they can securely save their secrets.
Bill would first have to integrate RHSSO with MTO by adding the details in IntegrationConfig. Visit here for more details.
rhsso:\n enabled: true\n realm: customer\n endpoint:\n secretReference:\n name: auth-secrets\n namespace: openshift-auth\n url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n
"},{"location":"usecases/mattermost.html","title":"Creating Mattermost Teams for your tenant","text":""},{"location":"usecases/mattermost.html#requirements","title":"Requirements","text":"MTO-Mattermost-Integration-Operator
Please contact Stakater to install the Mattermost integration operator before following the steps mentioned below.
"},{"location":"usecases/mattermost.html#steps-to-enable-integration","title":"Steps to enable integration","text":"Bill wants some of the tenants to also have their own Mattermost Teams. To make sure this happens correctly, Bill will first add the stakater.com/mattermost: true
label to the tenants. The label will enable the mto-mattermost-integration-operator
to create and manage Mattermost Teams based on Tenants.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\n labels:\n stakater.com/mattermost: 'true'\nspec:\n owners:\n users:\n - user\n editors:\n users:\n - user1\n quota: medium\n sandbox: false\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n
Now users can log in to Mattermost to see their Team and the relevant channels associated with it.
The name of the Team is similar to the Tenant name. Notification channels are pre-configured for every team, and can be modified.
"},{"location":"usecases/namespace.html","title":"Creating Namespace","text":"Anna as the tenant owner can create new namespaces for her tenant.
apiVersion: v1\nkind: Namespace\nmetadata:\n name: bluesky-production\n labels:\n stakater.com/tenant: bluesky\n
\u26a0\ufe0f Anna is required to add the tenant label stakater.com/tenant: bluesky
which contains the name of her tenant bluesky
, while creating the namespace. If this label is not added or if Anna does not belong to the bluesky
tenant, then Multi Tenant Operator will not allow the creation of that namespace.
When Anna creates the namespace, MTO assigns Anna and other tenant members the roles based on their user types, such as a tenant owner getting the OpenShift admin
role for that namespace.
As a tenant owner, Anna is able to create namespaces.
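For instance (illustrative only; the exact binding names depend on the MTO version), Bill can list the RoleBindings MTO manages in the new namespace to see which roles were assigned:
kubectl get rolebindings -n bluesky-production\n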
If you have enabled ArgoCD Multitenancy, our preferred solution is to create tenant namespaces by using the Tenant spec, to avoid syncing issues in the ArgoCD console during namespace creation.
"},{"location":"usecases/namespace.html#add-existing-namespaces-to-tenant-via-gitops","title":"Add Existing Namespaces to Tenant via GitOps","text":"Using GitOps as your preferred development workflow, you can add existing namespaces for your tenants by including the tenant label.
To add an existing namespace to your tenant via GitOps:
- First, migrate your namespace resource to your \u201cwatched\u201d git repository
- Edit your namespace
yaml
to include the tenant label - Tenant label follows the naming convention
stakater.com/tenant: <TENANT_NAME>
- Sync your GitOps repository with your cluster and allow changes to be propagated
- Verify that your Tenant users now have access to the namespace
For example, if Anna, a tenant owner, wants to add the namespace bluesky-dev
to her tenant via GitOps, she would first migrate her namespace manifest to a \u201cwatched repository\u201d:
apiVersion: v1\nkind: Namespace\nmetadata:\n name: bluesky-dev\n
She can then add the tenant label
...\n labels:\n stakater.com/tenant: bluesky\n
Now all the users of the Bluesky
tenant have access to the existing namespace.
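As a simple, illustrative way to cover the verification step, someone with impersonation rights can check a tenant user's access:
kubectl auth can-i get pods -n bluesky-dev --as anna@aurora.org\n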
Additionally, to remove namespaces from a tenant, simply remove the tenant label from the namespace resource and sync your changes to your cluster.
"},{"location":"usecases/namespace.html#remove-namespaces-from-your-cluster-via-gitops","title":"Remove Namespaces from your Cluster via GitOps","text":"GitOps is a quick and efficient way to automate the management of your K8s resources.
To remove namespaces from your cluster via GitOps:
- Remove the
yaml
file containing your namespace configurations from your \u201cwatched\u201d git repository. - ArgoCD automatically sets the
app.kubernetes.io/instance
label on resources it manages. It uses this label to select the resources that form the basis of an app. To remove a namespace from a managed ArgoCD app, remove the ArgoCD label app.kubernetes.io/instance
from the namespace manifest (see the example after this list). - You can edit your namespace manifest through the OpenShift Web Console or with the OpenShift command line tool.
- Now that you have removed your namespace manifest from your watched git repository and from your managed ArgoCD apps, sync your git repository and allow your changes to be propagated.
- Verify that your namespace has been deleted.
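For illustration, assuming the namespace is named bluesky-dev and is being edited directly with the OpenShift CLI rather than in git, the ArgoCD tracking label can be removed like this:
oc label namespace bluesky-dev app.kubernetes.io/instance-\n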
"},{"location":"usecases/private-sandboxes.html","title":"Create Private Sandboxes","text":"Bill assigned the ownership of bluesky
to Anna
and Anthony
. Now if the users want sandboxes to be made for them, they'll have to ask Bill
to enable sandbox
functionality. The users also want to make sure that the sandboxes created for them are only visible to the user they belong to. To enable that, Bill will just set enabled: true
and private: true
within the sandboxConfig
field
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandboxConfig:\n enabled: true\n private: true\nEOF\n
With the above configuration Anna
and Anthony
will now have new sandboxes created
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\nbluesky-anthony-aurora-sandbox Active 5d5h\nbluesky-john-aurora-sandbox Active 5d5h\n
However, from the perspective of Anna
, only their sandbox will be visible
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\n
"},{"location":"usecases/quota.html","title":"Enforcing Quotas","text":"Using Multi Tenant Operator, the cluster-admin can set and enforce cluster resource quotas and limit ranges for tenants.
"},{"location":"usecases/quota.html#assigning-resource-quotas","title":"Assigning Resource Quotas","text":"Bill is a cluster admin who will first create Quota
CR where he sets the maximum resource limits that Anna's tenant will have. Here limitrange
is an optional field; the cluster admin can skip it if it is not needed.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: small\nspec:\n resourcequota:\n hard:\n requests.cpu: '5'\n requests.memory: '5Gi'\n configmaps: \"10\"\n secrets: \"10\"\n services: \"10\"\n services.loadbalancers: \"2\"\n limitrange:\n limits:\n - type: \"Pod\"\n max:\n cpu: \"2\"\n memory: \"1Gi\"\n min:\n cpu: \"200m\"\n memory: \"100Mi\"\nEOF\n
For more details please refer to Quotas.
kubectl get quota small\nNAME STATE AGE\nsmall Active 3m\n
Bill then proceeds to create a tenant for Anna, while also linking the newly created Quota
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@stakater.com\n quota: small\n sandbox: false\nEOF\n
Now that the quota is linked with Anna's tenant, Anna can create any resource within the values of resource quota and limit range.
kubectl -n bluesky-production create deployment nginx --image nginx:latest --replicas 4\n
Once the resource quota assigned to the tenant has been reached, Anna cannot create further resources.
kubectl create pods bluesky-training\nError from server (Cannot exceed Namespace quota: please, reach out to the system administrators)\n
"},{"location":"usecases/secret-distribution.html","title":"Propagate Secrets from Parent to Descendant namespaces","text":"Secrets like registry
credentials often need to exist in multiple namespaces, so that Pods within different namespaces can have access to those credentials in the form of Secrets.
Manually creating secrets within different namespaces could lead to challenges, such as:
- Someone will have to create the secret either manually or via GitOps each time a new descendant namespace needs it
- If the parent secret is updated, someone will have to update the secret in all descendant namespaces
- This could be time-consuming, and a small mistake while creating or updating the secret could lead to unnecessary debugging
With the help of Multi-Tenant Operator's Template feature we can make this secret distribution experience easy.
For example, to copy a Secret called registry
which exists in the example
namespace, to new namespaces whenever they are created, we will first create a Template which references the registry secret.
It will also push updates to the copied Secrets, keeping the propagated secrets in sync with the parent secret.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: registry-secret\nresources:\n resourceMappings:\n secrets:\n - name: registry\n namespace: example\n
Now, using this Template, we can propagate the registry secret to any namespaces that share a common set of labels.
For example, we will just add one label kind: registry
and all namespaces with this label will get this secret.
To propagate it to different namespaces dynamically, we will have to create another resource called TemplateGroupInstance
. TemplateGroupInstance
will have Template
and matchLabel
mapping as shown below:
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n name: registry-secret-group-instance\nspec:\n template: registry-secret\n selector:\n matchLabels:\n kind: registry\n sync: true\n
After reconciliation, you will be able to see the secret in all namespaces that have the mentioned label.
MTO will also keep injecting this secret into any new namespaces created with that label.
kubectl get secret registry -n example-ns-1\nNAME STATE AGE\nregistry Active 3m\n\nkubectl get secret registry -n example-ns-2\nNAME STATE AGE\nregistry Active 3m\n
"},{"location":"usecases/template.html","title":"Creating Templates","text":"Anna wants to create a Template that she can use to initialize or share common resources across namespaces (e.g. PullSecrets).
Anna can either create a template using the manifests
field, covering Kubernetes or custom resources.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: docker-pull-secret\nresources:\n manifests:\n - kind: Secret\n apiVersion: v1\n metadata:\n name: docker-pull-secret\n data:\n .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n type: kubernetes.io/dockercfg\n
Or by using Helm Charts
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: redis\nresources:\n helm:\n releaseName: redis\n chart:\n repository:\n name: redis\n repoUrl: https://charts.bitnami.com/bitnami\n values: |\n redisPort: 6379\n
She can also use the resourceMapping
field to copy over secrets and configmaps from one namespace to others.
apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: resource-mapping\nresources:\n resourceMappings:\n secrets:\n - name: docker-secret\n namespace: bluesky-build\n configMaps:\n - name: tronador-configMap\n namespace: stakater-tronador\n
Note: Resource mapping can be used via TGI to map resources within tenant namespaces or to some other tenant's namespace. If used with TI, the resources will only be mapped if the namespaces belong to the same tenant.
"},{"location":"usecases/template.html#using-templates-with-default-parameters","title":"Using Templates with Default Parameters","text":"apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n name: namespace-parameterized-restrictions\nparameters:\n # Name of the parameter\n - name: DEFAULT_CPU_LIMIT\n # The default value of the parameter\n value: \"1\"\n - name: DEFAULT_CPU_REQUESTS\n value: \"0.5\"\n # If a parameter is required the template instance will need to set it\n # required: true\n # Make sure only values are entered for this parameter\n validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n manifests:\n - apiVersion: v1\n kind: LimitRange\n metadata:\n name: namespace-limit-range-${namespace}\n spec:\n limits:\n - default:\n cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n defaultRequest:\n cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n type: Container\n
Parameters can be used with both manifests
and helm charts
"},{"location":"usecases/tenant.html","title":"Creating Tenant","text":"Bill is a cluster admin who receives a new request from Aurora Solutions CTO asking for a new tenant for Anna's team.
Bill creates a new tenant called bluesky
in the cluster:
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandbox: false\nEOF\n
Bill checks if the new tenant is created:
kubectl get tenants.tenantoperator.stakater.com bluesky\nNAME STATE AGE\nbluesky Active 3m\n
Anna can now login to the cluster and check if she can create namespaces
kubectl auth can-i create namespaces\nyes\n
However, cluster resources are not accessible to Anna
kubectl auth can-i get namespaces\nno\n\nkubectl auth can-i get persistentvolumes\nno\n
Including the Tenant
resource
kubectl auth can-i get tenants.tenantoperator.stakater.com\nno\n
"},{"location":"usecases/tenant.html#assign-multiple-users-as-tenant-owner","title":"Assign multiple users as tenant owner","text":"In the example above, Bill assigned the ownership of bluesky
to Anna
. If another user, e.g. Anthony
needs to administer bluesky
, then Bill can assign the ownership of the tenant to that user as well:
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandbox: false\nEOF\n
With the configuration above, Anthony can log in to the cluster and execute
kubectl auth can-i create namespaces\nyes\n
"},{"location":"usecases/tenant.html#assigning-users-sandbox-namespace","title":"Assigning Users Sandbox Namespace","text":"Bill assigned the ownership of bluesky
to Anna
and Anthony
. Now if the users want sandboxes to be made for them, they'll have to ask Bill
to enable sandbox
functionality.
To enable that, Bill will just set enabled: true
within the sandboxConfig
field
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandboxConfig:\n enabled: true\nEOF\n
With the above configuration Anna
and Anthony
will now have new sandboxes created
kubectl get namespaces\nNAME STATUS AGE\nbluesky-anna-aurora-sandbox Active 5d5h\nbluesky-anthony-aurora-sandbox Active 5d5h\nbluesky-john-aurora-sandbox Active 5d5h\n
If Bill wants to make sure that only the sandbox owner can view his sandbox namespace, he can achieve this by setting private: true
within the sandboxConfig
field.
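For example, the relevant portion of the Tenant spec would then look like this (a minimal sketch; the rest of the spec stays as before):
sandboxConfig:\n enabled: true\n private: true\n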
"},{"location":"usecases/tenant.html#creating-namespaces-via-tenant-custom-resource","title":"Creating Namespaces via Tenant Custom Resource","text":"Bill now wants to create namespaces for dev
, build
and production
environments for the tenant members. To create those namespaces Bill will just add those names within the namespaces
field in the tenant CR. If Bill wants to append the tenant name as a prefix to the namespace name, he can use namespaces.withTenantPrefix
field. Otherwise, he can use namespaces.withoutTenantPrefix
for namespaces that do not need the tenant name as a prefix.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n namespaces:\n withTenantPrefix:\n - dev\n - build\n withoutTenantPrefix:\n - prod\nEOF\n
With the above configuration tenant members will now see new namespaces have been created.
kubectl get namespaces\nNAME STATUS AGE\nbluesky-dev Active 5d5h\nbluesky-build Active 5d5h\nprod Active 5d5h\n
"},{"location":"usecases/tenant.html#distributing-common-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing common labels and annotations to tenant namespaces via Tenant Custom Resource","text":"Bill now wants to add labels/annotations to all the namespaces for a tenant. To create those labels/annotations Bill will just add them into commonMetadata.labels
/commonMetadata.annotations
field in the tenant CR.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n commonMetadata:\n labels:\n app.kubernetes.io/managed-by: tenant-operator\n app.kubernetes.io/part-of: tenant-alpha\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/infra=\nEOF\n
With the above configuration all tenant namespaces will now contain the mentioned labels and annotations.
"},{"location":"usecases/tenant.html#distributing-specific-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing specific labels and annotations to tenant namespaces via Tenant Custom Resource","text":"Bill now wants to add labels/annotations to specific namespaces for a tenant. To create those labels/annotations Bill will just add them into specificMetadata.labels
/specificMetadata.annotations
and specific namespaces in specificMetadata.namespaces
field in the tenant CR.
kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n editors:\n users:\n - john@aurora.org\n groups:\n - alpha\n quota: small\n sandboxConfig:\n enabled: true\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n specificMetadata:\n - namespaces:\n - bluesky-anna-aurora-sandbox\n labels:\n app.kubernetes.io/is-sandbox: true\n annotations:\n openshift.io/node-selector: node-role.kubernetes.io/worker=\nEOF\n
With the above configuration all tenant namespaces will now contain the mentioned labels and annotations.
"},{"location":"usecases/tenant.html#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted","title":"Retaining tenant namespaces and AppProject when a tenant is being deleted","text":"Bill now wants to delete tenant bluesky
and wants to retain all namespaces and the AppProject of the tenant. To retain them, Bill will set spec.onDelete.cleanNamespaces
and spec.onDelete.cleanAppProject
to false
.
apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n quota: small\n sandboxConfig:\n enabled: true\n namespaces:\n withTenantPrefix:\n - dev\n - build\n - prod\n onDelete:\n cleanNamespaces: false\n cleanAppProject: false\n
With the above configuration all tenant namespaces and AppProject will not be deleted when tenant bluesky
is deleted. By default, the value of spec.onDelete.cleanNamespaces
is also false
and spec.onDelete.cleanAppProject
is true.
"},{"location":"usecases/volume-limits.html","title":"Limiting PersistentVolume for Tenant","text":"Bill, as a cluster admin, wants to restrict the amount of storage a Tenant can use. For that he'll add the requests.storage
field to quota.spec.resourcequota.hard
. If Bill wants to restrict tenant bluesky
to use only 50Gi
of storage, he'll first create a quota with requests.storage
field set to 50Gi
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: medium\nspec:\n resourcequota:\n hard:\n requests.cpu: '5'\n requests.memory: '10Gi'\n requests.storage: '50Gi'\nEOF\n
Once the quota is created, Bill will create the tenant and set the quota field to the one he created.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: bluesky\nspec:\n owners:\n users:\n - anna@aurora.org\n - anthony@aurora.org\n quota: medium\n sandbox: true\nEOF\n
Now, the combined storage used by all tenant namespaces will not exceed 50Gi
.
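To see how much of that 50Gi has already been consumed, Bill or Anna can describe the ResourceQuota that MTO propagates into a tenant namespace (the namespace name here is illustrative):
kubectl describe resourcequota -n bluesky-anna-aurora-sandbox\n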
"},{"location":"usecases/volume-limits.html#adding-storageclass-restrictions-for-tenant","title":"Adding StorageClass Restrictions for Tenant","text":"Now, Bill, as a cluster admin, wants to make sure that no Tenant can provision more than a fixed amount of storage from a StorageClass. Bill can restrict that using <storage-class-name>.storageclass.storage.k8s.io/requests.storage
field in quota.spec.resourcequota.hard
field. If Bill wants to restrict tenant sigma
to use only 20Gi
of storage from storage class stakater
, he'll first create a StorageClass stakater
and then create the relevant Quota with stakater.storageclass.storage.k8s.io/requests.storage
field set to 20Gi
.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n name: small\nspec:\n resourcequota:\n hard:\n requests.cpu: '2'\n requests.memory: '4Gi'\n stakater.storageclass.storage.k8s.io/requests.storage: '20Gi'\nEOF\n
Once the quota is created, Bill will create the tenant and set the quota field to the one he created.
kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n name: sigma\nspec:\n owners:\n users:\n - dave@aurora.org\n quota: small\n sandbox: true\nEOF\n
Now, the combined storage provisioned from StorageClass stakater
used by all tenant namespaces will not exceed 20Gi
.
The 20Gi
limit will only be applied to StorageClass stakater
. If a tenant member creates a PVC with some other StorageClass, he will not be restricted.
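For instance, a PVC like the following illustrative manifest (the namespace name is assumed for the example) would count against the 20Gi budget because it explicitly requests the stakater StorageClass:
apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: example-pvc\n namespace: sigma-dev\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: stakater\n resources:\n requests:\n storage: 5Gi\n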
Tip
More details about Resource Quota
can be found here
"}]}
\ No newline at end of file
diff --git a/0.10/sitemap.xml b/0.10/sitemap.xml
index aca2da611..d8dfd9e82 100644
--- a/0.10/sitemap.xml
+++ b/0.10/sitemap.xml
@@ -2,357 +2,362 @@
https://docs.stakater.com/0.10/index.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/argocd-multitenancy.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/changelog.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/customresources.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/eula.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/faq.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/features.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/hibernation.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/installation.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/integration-config.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tenant-roles.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/troubleshooting.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/vault-multitenancy.html
- 2023-12-06
+ 2023-12-07
+ daily
+
+
+ https://docs.stakater.com/0.10/explanation/auth.html
+ 2023-12-07
daily
https://docs.stakater.com/0.10/explanation/console.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/explanation/why-argocd-multi-tenancy.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/explanation/why-vault-multi-tenancy.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/faq/index.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/how-to-guides/integration-config.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/how-to-guides/quota.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/how-to-guides/template-group-instance.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/how-to-guides/template-instance.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/how-to-guides/template.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/how-to-guides/tenant.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/how-to-guides/offboarding/uninstalling.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/add-remove-namespace-gitops.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/admin-clusterrole.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/configuring-multitenant-network-isolation.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/custom-metrics.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/custom-roles.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/deploying-templates.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/distributing-resources.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/distributing-secrets-using-sealed-secret-template.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/distributing-secrets.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/extend-default-roles.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/graph-visualization.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/integrationconfig.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/mattermost.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/resource-sync-by-tgi.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/reference-guides/secret-distribution.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tutorials/installation.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tutorials/argocd/enabling-multi-tenancy-argocd.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tutorials/template/template-group-instance.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tutorials/template/template-instance.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tutorials/template/template.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tutorials/tenant/assign-quota-tenant.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tutorials/tenant/assigning-metadata.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tutorials/tenant/create-sandbox.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tutorials/tenant/create-tenant.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tutorials/tenant/creating-namespaces.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tutorials/tenant/custom-rbac.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tutorials/tenant/deleting-tenant.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tutorials/tenant/tenant-hibernation.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/tutorials/vault/enabling-multi-tenancy-vault.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/admin-clusterrole.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/argocd.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/configuring-multitenant-network-isolation.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/custom-roles.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/deploying-templates.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/distributing-resources.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/distributing-secrets-using-sealed-secret-template.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/extend-default-roles.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/hibernation.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/integrationconfig.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/mattermost.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/namespace.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/private-sandboxes.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/quota.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/secret-distribution.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/template.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/tenant.html
- 2023-12-06
+ 2023-12-07
daily
https://docs.stakater.com/0.10/usecases/volume-limits.html
- 2023-12-06
+ 2023-12-07
daily
\ No newline at end of file
diff --git a/0.10/sitemap.xml.gz b/0.10/sitemap.xml.gz
index 6ef7ca31b..f3dbf3396 100644
Binary files a/0.10/sitemap.xml.gz and b/0.10/sitemap.xml.gz differ