From 3b4f6885acebdef941ef5dcb95237be6df5c0ad4 Mon Sep 17 00:00:00 2001 From: stakater-user Date: Thu, 7 Dec 2023 08:58:24 +0000 Subject: [PATCH] Deployed 9a38516 to 0.10 with MkDocs 1.5.3 and mike 2.0.0 --- 0.10/explanation/auth.html | 1638 +++++++++++++++++++++++++++++++++ 0.10/explanation/console.html | 48 +- 0.10/search/search_index.json | 2 +- 0.10/sitemap.xml | 147 +-- 0.10/sitemap.xml.gz | Bin 841 -> 845 bytes 5 files changed, 1759 insertions(+), 76 deletions(-) create mode 100644 0.10/explanation/auth.html diff --git a/0.10/explanation/auth.html b/0.10/explanation/auth.html new file mode 100644 index 000000000..bacbd6704 --- /dev/null +++ b/0.10/explanation/auth.html @@ -0,0 +1,1638 @@ + + + + + + + + + + + + + + + + + + + + + Authentication and Authorization in MTO Console - Multi Tenant Operator + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Authentication and Authorization in MTO Console


Keycloak for Authentication


MTO Console incorporates Keycloak, a leading open-source identity and access management solution, to manage user access securely and efficiently. Keycloak is provisioned automatically by our controllers, which set up a new realm, a client, and a default user named mto.
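Keycloak comes up as part of the console stack. A minimal sketch of how that stack is switched on, based on the Subscription example in the installation docs; the assumption here is that the ENABLE_CONSOLE environment variable is what triggers provisioning of the console, and with it Keycloak. Channel and source values may differ in your environment:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: tenant-operator
  namespace: multi-tenant-operator
spec:
  channel: stable
  name: tenant-operator
  source: certified-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
      - name: ENABLE_CONSOLE   # assumption: enabling the console provisions Keycloak (realm, client, default user)
        value: 'true'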


Benefits

  • Industry Standard: Offers robust, reliable authentication in line with industry standards.
  • Integration with Existing Systems: Enables easy linkage with existing Active Directories or SSO systems, avoiding the need for redundant user management (see the sketch after this list).
  • Administrative Control: Grants administrators full authority over user access to the console, enhancing security and operational integrity.
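One place this linkage surfaces is the rhsso section of the IntegrationConfig, shown in full in the Integration Config docs. A minimal sketch; the realm name, endpoint URL, and secret reference are illustrative values to replace with your own:

apiVersion: tenantoperator.stakater.com/v1alpha1
kind: IntegrationConfig
metadata:
  name: tenant-operator-config
  namespace: multi-tenant-operator
spec:
  rhsso:
    enabled: true
    realm: customer                        # illustrative realm name
    endpoint:
      url: https://keycloak.example.com/   # illustrative endpoint URL
      secretReference:
        name: auth-secrets                 # secret holding the connection credentials
        namespace: openshift-auth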

PostgreSQL as Persistent Storage for Keycloak


MTO Console leverages PostgreSQL as the persistent storage solution for Keycloak, enhancing the reliability and flexibility of the authentication system.


It offers benefits such as enhanced data reliability and easy data export and import.


Benefits

  • Persistent Data Storage: By using PostgreSQL, Keycloak's data, including realms, clients, and user information, is preserved even in the event of a pod restart. This ensures continuous availability and stability of the authentication system.
  • Data Exportability: Customers can easily export Keycloak configurations and data from the PostgreSQL database.
  • Transferability Across Environments: The exported data can be conveniently imported into another cluster or Keycloak instance, facilitating smooth transitions and backups.
  • No Data Loss: Ensures that critical authentication data is not lost during system updates or maintenance.
  • Operational Flexibility: Provides customers with greater control over their authentication data, enabling them to manage and migrate their configurations as needed.

Built-in module for Authorization


The MTO Console is equipped with an authorization module, designed to manage access rights intelligently and securely.


Benefits

  • User and Tenant Based: Authorization decisions are made based on the user's membership in specific tenants, ensuring appropriate access control.
  • Role-Specific Access: The module considers the roles assigned to users, granting permissions accordingly to maintain operational integrity.
  • Elevated Privileges for Admins: Users identified as administrators or members of the clusterAdminGroups are granted comprehensive permissions across the console (see the sketch after this list).
  • Database Caching: Authorization decisions are cached in the database, reducing reliance on the Kubernetes API server.
  • Faster, Reliable Access: This caching mechanism ensures quicker and more reliable access for users, enhancing the overall responsiveness of the MTO Console.
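The clusterAdminGroups mentioned above are configured on the IntegrationConfig. A minimal sketch based on the example in the Integration Config docs; the group name is an illustrative value:

apiVersion: tenantoperator.stakater.com/v1alpha1
kind: IntegrationConfig
metadata:
  name: tenant-operator-config
  namespace: multi-tenant-operator
spec:
  openshift:
    clusterAdminGroups:
      - cluster-admins   # members of this group receive elevated permissions across the console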
\ No newline at end of file
diff --git a/0.10/explanation/console.html b/0.10/explanation/console.html
index f6bb3dc0f..31279385c 100644
--- a/0.10/explanation/console.html
+++ b/0.10/explanation/console.html
@@ -1488,7 +1488,7 @@
  • - Administrators : + Administrators @@ -1497,7 +1497,7 @@
  • - Tenant Users : + Tenant Users @@ -1524,6 +1524,39 @@ +
  • + +
  • + + + Authentication and Authorization + + + + +
  • @@ -1576,10 +1609,10 @@

    Showback

    The Showback feature is an essential financial governance tool, providing detailed insights into the cost implications of resource usage by tenant, namespace, or other filters. This facilitates transparent cost management and an internal chargeback or showback process, enabling informed decision-making regarding resource consumption and budgeting.

    image

    User Roles and Permissions

    -

    Administrators :

    +

    Administrators

    Administrators have overarching access to the console, including the ability to view all namespaces and tenants. They have exclusive access to the IntegrationConfig, allowing them to view all the settings and integrations.

    image

    -

    Tenant Users :

    +

    Tenant Users

    Regular tenant users can monitor and manage their allocated resources. However, they do not have access to the IntegrationConfig and cannot view resources across different tenants, ensuring data privacy and operational integrity.

    Live YAML Configuration and Graph View

    In the MTO Console, each resource section is equipped with a "View" button, revealing the live YAML configuration for complete information on the resource. For Tenant resources, a supplementary "Graph" option is available, illustrating the relationships and dependencies of all resources under a Tenant. This dual-view approach empowers users with both the detailed control of YAML and the holistic oversight of the graph view.

    @@ -1589,6 +1622,13 @@

    Caching and Database

    MTO integrates a dedicated database to streamline resource management. All resources managed by MTO are now stored in a Postgres database, enhancing the MTO Console's ability to efficiently retrieve them for presentation.

    The implementation of this feature is facilitated by the Bootstrap controller, streamlining the deployment process. This controller creates the PostgreSQL Database, establishes a service for inter-pod communication, and generates a secret to ensure secure connectivity to the database.
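    As a rough illustration of the kind of connection Secret the Bootstrap controller generates, the sketch below uses hypothetical names, keys, and values; the Secret actually created in your cluster will differ:

    apiVersion: v1
    kind: Secret
    metadata:
      name: mto-postgres-credentials            # hypothetical name
      namespace: multi-tenant-operator
    type: Opaque
    stringData:
      username: mto                             # hypothetical keys and values;
      password: <generated-password>            # the controller generates its own
      host: mto-postgres.multi-tenant-operator.svc
      database: mto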

    Furthermore, the introduction of a dedicated cache layer ensures that there is no added burden on the kube API server when responding to MTO Console requests. This enhancement not only improves response times but also contributes to a more efficient and responsive resource management system.

    +

    Authentication and Authorization

    +

    MTO Console ensures secure access control using a robust combination of Keycloak for authentication and a custom-built authorization module.

    +

    Keycloak Integration

    +

    Keycloak, an industry-standard authentication tool, is integrated for secure user login and management. It supports seamless integration with existing Active Directory or SSO systems and grants administrators complete control over user access.

    +

    Custom Authorization Module

    +

    Complementing Keycloak, our custom authorization module intelligently controls access based on user roles and their association with tenants. Special checks are in place for admin users, granting them comprehensive permissions.

    +

    For more details on Keycloak's integration, PostgreSQL as persistent storage, and the intricacies of our authorization module, please visit the Authentication and Authorization page.

    Conclusion

    The MTO Console is engineered to simplify complex multi-tenant management. The current iteration focuses on providing comprehensive visibility. Future updates could include direct CUD (Create/Update/Delete) capabilities from the dashboard, enhancing the console’s functionality. The Showback feature remains a standout, offering critical cost tracking and analysis. The delineation of roles between administrators and tenant users ensures a secure and organized operational framework.

    diff --git a/0.10/search/search_index.json b/0.10/search/search_index.json index fdc973f12..c71fbb3eb 100644 --- a/0.10/search/search_index.json +++ b/0.10/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"Introduction","text":"

    Kubernetes is designed to support a single tenant platform; OpenShift brings some improvements with its \"Secure by default\" concepts, but it is still very complex to design and orchestrate all the moving parts involved in building a secure multi-tenant platform, making it difficult for cluster admins to host multi-tenancy in a single OpenShift cluster. If multi-tenancy is achieved by sharing a cluster, it can have many advantages, e.g. efficient resource utilization, less configuration effort, and easier sharing of cluster-internal resources among different tenants. OpenShift and all managed applications provide enough primitive resources to achieve multi-tenancy, but it requires professional skills and deep knowledge of OpenShift.

    This is where Multi Tenant Operator (MTO) comes in and provides easy-to-manage and easy-to-configure multi-tenancy. MTO provides wrappers around OpenShift resources to provide a higher level of abstraction to users. With MTO, admins can configure Network and Security Policies, Resource Quotas, Limit Ranges, and RBAC for every tenant, which are automatically inherited by all the namespaces and users in the tenant. Depending on their role, users are free to operate within their tenants in complete autonomy. MTO supports initializing new tenants using a GitOps management pattern. Changes can be managed via PRs just like a typical GitOps workflow, so tenants can request changes, add new users, or remove users.

    The idea of MTO is to use namespaces as independent sandboxes, where tenant applications can run independently of each other. Cluster admins shall configure MTO's custom resources, which then become a self-service system for tenants. This minimizes the efforts of the cluster admins.

    MTO enables cluster admins to host multiple tenants in a single OpenShift Cluster, i.e.:

    MTO is also OpenShift certified

    "},{"location":"index.html#features","title":"Features","text":"

    The major features of Multi Tenant Operator (MTO) are described below.

    "},{"location":"index.html#kubernetes-multitenancy","title":"Kubernetes Multitenancy","text":"

    RBAC is one of the most complicated and error-prone parts of Kubernetes. With Multi Tenant Operator, you can rest assured that RBAC is configured with the \"least privilege\" mindset and all rules are kept up-to-date with zero manual effort.

    Multi Tenant Operator binds existing ClusterRoles to the Tenant's Namespaces used for managing access to the Namespaces and the resources they contain. You can also modify the default roles or create new roles to have full control and customize access control for your users and teams.

    Multi Tenant Operator is also able to leverage existing OpenShift groups or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system.

    "},{"location":"index.html#hashicorp-vault-multitenancy","title":"HashiCorp Vault Multitenancy","text":"

    Multi Tenant Operator extends the tenants permission model to HashiCorp Vault where it can create Vault paths and greatly ease the overhead of managing RBAC in Vault. Tenant users can manage their own secrets without the concern of someone else having access to their Vault paths.

    More details on Vault Multitenancy

    "},{"location":"index.html#argocd-multitenancy","title":"ArgoCD Multitenancy","text":"

    Multi Tenant Operator not only provides strong multi-tenancy for the OpenShift internals but also extends the tenants' permission model to ArgoCD, where it can provision AppProjects and Allowed Repositories for your tenants, greatly easing the overhead of managing RBAC in ArgoCD.

    More details on ArgoCD Multitenancy

    "},{"location":"index.html#resource-management","title":"Resource Management","text":"

    Multi Tenant Operator provides a mechanism for defining Resource Quotas at the tenant scope, meaning all namespaces belonging to a particular tenant share the defined quota, which is why you are able to safely enable dev teams to self serve their namespaces whilst being confident that they can only use the resources allocated based on budget and business needs.

    More details on Quota

    "},{"location":"index.html#templates-and-template-distribution","title":"Templates and Template distribution","text":"

    Multi Tenant Operator allows admins/users to define templates for namespaces, so that others can instantiate these templates to provision namespaces with batteries loaded. A template could pre-populate a namespace for certain use cases or with basic tooling required. Templates allow you to define Kubernetes manifests, Helm chart and more to be applied when the template is used to create a namespace.

    It also allows the parameterizing of these templates for flexibility and ease of use. It also provides the option to enforce the presence of templates in one tenant's or all the tenants' namespaces for configuring secure defaults.

    Common use cases for namespace templates may be:

    More details on Distributing Template Resources

    "},{"location":"index.html#mto-console","title":"MTO Console","text":"

    Multi Tenant Operator Console is a comprehensive user interface designed for both administrators and tenant users to manage multi-tenant environments. The MTO Console simplifies the complexity involved in handling various aspects of tenants and their related resources. It serves as a centralized monitoring hub, offering insights into the current state of tenants, namespaces, templates and quotas. It is designed to provide a quick summary/snapshot of MTO's status and facilitates easier interaction with various resources such as tenants, namespaces, templates, and quotas.

    More details on Console

    "},{"location":"index.html#showback","title":"Showback","text":"

    The showback functionality in Multi Tenant Operator (MTO) Console is a significant feature designed to enhance the management of resources and costs in multi-tenant Kubernetes environments. This feature focuses on accurately tracking the usage of resources by each tenant, and/or namespace, enabling organizations to monitor and optimize their expenditures. Furthermore, this functionality supports financial planning and budgeting by offering a clear view of operational costs associated with each tenant. This can be particularly beneficial for organizations that chargeback internal departments or external clients based on resource usage, ensuring that billing is fair and reflective of actual consumption.

    More details on Showback

    "},{"location":"index.html#hibernation","title":"Hibernation","text":"

    Multi Tenant Operator can downscale Deployments and StatefulSets in a tenant's Namespace according to a defined sleep schedule. The Deployments and StatefulSets are brought back to their required replicas according to the provided wake schedule.

    More details on Hibernation

    "},{"location":"index.html#mattermost-multitenancy","title":"Mattermost Multitenancy","text":"

    Multi Tenant Operator can manage Mattermost to create Teams for tenant users. All tenant users get a unique team, and a list of predefined channels gets created. When a user is removed from the tenant, the user is also removed from the Mattermost team corresponding to the tenant.

    More details on Mattermost

    "},{"location":"index.html#remote-development-namespaces","title":"Remote Development Namespaces","text":"

    Multi Tenant Operator can be configured to automatically provision a namespace in the cluster for every member of a specific tenant. These namespaces are preloaded with any selected templates and consume the same pool of resources from the tenant's quota, creating safe remote dev namespaces that teams can use as scratch namespaces for rapid prototyping and development. Every developer gets a Kubernetes-based cloud development environment that feels like working on localhost.

    More details on Sandboxes

    "},{"location":"index.html#cross-namespace-resource-distribution","title":"Cross Namespace Resource Distribution","text":"

    Multi Tenant Operator supports cloning of secrets and configmaps from one namespace to another based on label selectors. It uses templates to enable users to provide references to secrets and configmaps, and a template group instance to distribute those secrets and configmaps to matching namespaces, even if the namespaces belong to different tenants. If a template instance is used, the resources will only be mapped if the namespaces belong to the same tenant.

    More details on Distributing Secrets and ConfigMaps

    "},{"location":"index.html#self-service","title":"Self-Service","text":"

    With Multi Tenant Operator, you can empower your users to safely provision namespaces for themselves and their teams (typically mapped to SSO groups). Team-owned namespaces and the resources inside them count towards the team's quotas rather than the user's individual limits and are automatically shared with all team members according to the access rules you configure in Multi Tenant Operator.

    Also, by leveraging Multi Tenant Operator's templating mechanism, namespaces can be provisioned and automatically pre-populated with any kind of resource or multiple resources, such as network policies, docker pull secrets, or even Helm charts.

    "},{"location":"index.html#everything-as-codegitops-ready","title":"Everything as Code/GitOps Ready","text":"

    Multi Tenant Operator is designed and built to be 100% OpenShift-native and to be configured and managed the same familiar way as native OpenShift resources, so it is perfect for modern shops that are dedicated to GitOps, as it is fully configurable using Custom Resources.

    "},{"location":"index.html#preventing-clusters-sprawl","title":"Preventing Clusters Sprawl","text":"

    As companies look to further harness the power of cloud-native, they are adopting container technologies at rapid speed, increasing the number of clusters and workloads. As the number of Kubernetes clusters grows, this means increasing work for the Ops team. When it comes to patching security issues or upgrading clusters, teams are doing five times the amount of work.

    With Multi Tenant Operator, a single cluster can be shared by multiple teams, groups of users, or departments, saving operational and management effort. This prevents Kubernetes cluster sprawl.

    "},{"location":"index.html#native-experience","title":"Native Experience","text":"

    Multi Tenant Operator provides multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries.

    "},{"location":"argocd-multitenancy.html","title":"ArgoCD Multi-tenancy","text":"

    ArgoCD is a declarative GitOps tool built to deploy applications to Kubernetes. While the continuous delivery (CD) space is seen by some as crowded these days, ArgoCD does bring some interesting capabilities to the table. Unlike other tools, ArgoCD is lightweight and easy to configure.

    "},{"location":"argocd-multitenancy.html#why-argocd","title":"Why ArgoCD?","text":"

    Application definitions, configurations, and environments should be declarative and version controlled. Application deployment and lifecycle management should be automated, auditable, and easy to understand.

    "},{"location":"argocd-multitenancy.html#argocd-integration-in-multi-tenant-operator","title":"ArgoCD integration in Multi Tenant Operator","text":"

    With Multi Tenant Operator (MTO), cluster admins can configure multi tenancy in their cluster. Now with ArgoCD integration, multi tenancy can be configured in ArgoCD applications and AppProjects.

    MTO (if configured to) will create AppProjects for each tenant. The AppProject will allow tenants to create ArgoCD Applications that can be synced to namespaces owned by those tenants. Cluster admins will also be able to blacklist certain namespaced resources if they want, and allow certain cluster-scoped resources as well (see the NamespaceResourceBlacklist and ClusterResourceWhitelist sections in the Integration Config docs and Tenant Custom Resource docs).

    Note that ArgoCD integration in MTO is completely optional.

    "},{"location":"argocd-multitenancy.html#default-argocd-configuration","title":"Default ArgoCD configuration","text":"

    We have set a default ArgoCD configuration in Multi Tenant Operator that fulfils the following use cases:

    Detailed use cases showing how to create AppProjects are mentioned in use cases for ArgoCD.

    "},{"location":"changelog.html","title":"Changelog","text":""},{"location":"changelog.html#v010x","title":"v0.10.x","text":""},{"location":"changelog.html#v0100","title":"v0.10.0","text":""},{"location":"changelog.html#feature","title":"Feature","text":""},{"location":"changelog.html#fix","title":"Fix","text":""},{"location":"changelog.html#enhanced","title":"Enhanced","text":""},{"location":"changelog.html#v09x","title":"v0.9.x","text":""},{"location":"changelog.html#v094","title":"v0.9.4","text":"

    More information about TemplateGroupInstance's sync at Sync Resources Deployed by TemplateGroupInstance

    "},{"location":"changelog.html#v092","title":"v0.9.2","text":""},{"location":"changelog.html#v091","title":"v0.9.1","text":""},{"location":"changelog.html#v090","title":"v0.9.0","text":""},{"location":"changelog.html#enabling-console","title":"Enabling console","text":""},{"location":"changelog.html#v08x","title":"v0.8.x","text":""},{"location":"changelog.html#v083","title":"v0.8.3","text":""},{"location":"changelog.html#v081","title":"v0.8.1","text":""},{"location":"changelog.html#v080","title":"v0.8.0","text":""},{"location":"changelog.html#v07x","title":"v0.7.x","text":""},{"location":"changelog.html#v074","title":"v0.7.4","text":""},{"location":"changelog.html#v073","title":"v0.7.3","text":""},{"location":"changelog.html#v072","title":"v0.7.2","text":""},{"location":"changelog.html#v071","title":"v0.7.1","text":""},{"location":"changelog.html#v070","title":"v0.7.0","text":""},{"location":"changelog.html#v06x","title":"v0.6.x","text":""},{"location":"changelog.html#v061","title":"v0.6.1","text":""},{"location":"changelog.html#v060","title":"v0.6.0","text":""},{"location":"changelog.html#v05x","title":"v0.5.x","text":""},{"location":"changelog.html#v054","title":"v0.5.4","text":""},{"location":"changelog.html#v053","title":"v0.5.3","text":""},{"location":"changelog.html#v052","title":"v0.5.2","text":""},{"location":"changelog.html#v051","title":"v0.5.1","text":""},{"location":"changelog.html#v050","title":"v0.5.0","text":""},{"location":"changelog.html#v04x","title":"v0.4.x","text":""},{"location":"changelog.html#v047","title":"v0.4.7","text":""},{"location":"changelog.html#v046","title":"v0.4.6","text":""},{"location":"changelog.html#v045","title":"v0.4.5","text":""},{"location":"changelog.html#v044","title":"v0.4.4","text":""},{"location":"changelog.html#v043","title":"v0.4.3","text":""},{"location":"changelog.html#v042","title":"v0.4.2","text":""},{"location":"changelog.html#v041","title":"v0.4.1","text":""},{"location":"changelog.html#v040","title":"v0.4.0","text":""},{"location":"changelog.html#v03x","title":"v0.3.x","text":""},{"location":"changelog.html#v0333","title":"v0.3.33","text":""},{"location":"changelog.html#v0333_1","title":"v0.3.33","text":""},{"location":"changelog.html#v0333_2","title":"v0.3.33","text":""},{"location":"changelog.html#v0330","title":"v0.3.30","text":""},{"location":"changelog.html#v0329","title":"v0.3.29","text":""},{"location":"changelog.html#v0328","title":"v0.3.28","text":""},{"location":"changelog.html#v0327","title":"v0.3.27","text":""},{"location":"changelog.html#v0326","title":"v0.3.26","text":""},{"location":"changelog.html#v0325","title":"v0.3.25","text":""},{"location":"changelog.html#migrating-from-pervious-version","title":"Migrating from pervious version","text":""},{"location":"changelog.html#v0324","title":"v0.3.24","text":""},{"location":"changelog.html#v0323","title":"v0.3.23","text":""},{"location":"changelog.html#v0322","title":"v0.3.22","text":"

    \u26a0\ufe0f Known Issues

    "},{"location":"changelog.html#v0321","title":"v0.3.21","text":""},{"location":"changelog.html#v0320","title":"v0.3.20","text":""},{"location":"changelog.html#v0319","title":"v0.3.19","text":"

    \u26a0\ufe0f ApiVersion v1alpha1 of Tenant and Quota custom resources has been deprecated and is scheduled to be removed in the future. The following links contain the updated structure of both resources

    "},{"location":"changelog.html#v0318","title":"v0.3.18","text":""},{"location":"changelog.html#v0317","title":"v0.3.17","text":""},{"location":"changelog.html#v0316","title":"v0.3.16","text":""},{"location":"changelog.html#v0315","title":"v0.3.15","text":""},{"location":"changelog.html#v0314","title":"v0.3.14","text":""},{"location":"changelog.html#v0313","title":"v0.3.13","text":""},{"location":"changelog.html#v0312","title":"v0.3.12","text":""},{"location":"changelog.html#v0311","title":"v0.3.11","text":""},{"location":"changelog.html#v0310","title":"v0.3.10","text":""},{"location":"changelog.html#v039","title":"v0.3.9","text":""},{"location":"changelog.html#v038","title":"v0.3.8","text":""},{"location":"changelog.html#v037","title":"v0.3.7","text":""},{"location":"changelog.html#v036","title":"v0.3.6","text":""},{"location":"changelog.html#v035","title":"v0.3.5","text":""},{"location":"changelog.html#v034","title":"v0.3.4","text":""},{"location":"changelog.html#v033","title":"v0.3.3","text":""},{"location":"changelog.html#v032","title":"v0.3.2","text":""},{"location":"changelog.html#v031","title":"v0.3.1","text":""},{"location":"changelog.html#v030","title":"v0.3.0","text":""},{"location":"changelog.html#v02x","title":"v0.2.x","text":""},{"location":"changelog.html#v0233","title":"v0.2.33","text":""},{"location":"changelog.html#v0232","title":"v0.2.32","text":""},{"location":"changelog.html#v0231","title":"v0.2.31","text":""},{"location":"customresources.html","title":"Custom Resources","text":"

    Below is a detailed explanation of the Custom Resources of MTO

    "},{"location":"customresources.html#1-quota","title":"1. Quota","text":"

    Cluster scoped resource:

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: medium\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '5'\n      limits.cpu: '10'\n      requests.memory: '5Gi'\n      limits.memory: '10Gi'\n      configmaps: \"10\"\n      persistentvolumeclaims: \"4\"\n      replicationcontrollers: \"20\"\n      secrets: \"10\"\n      services: \"10\"\n      services.loadbalancers: \"2\"\n  limitrange:\n    limits:\n      - type: \"Pod\"\n        max:\n          cpu: \"2\"\n          memory: \"1Gi\"\n        min:\n          cpu: \"200m\"\n          memory: \"100Mi\"\n      - type: \"Container\"\n        max:\n          cpu: \"2\"\n          memory: \"1Gi\"\n        min:\n          cpu: \"100m\"\n          memory: \"50Mi\"\n        default:\n          cpu: \"300m\"\n          memory: \"200Mi\"\n        defaultRequest:\n          cpu: \"200m\"\n          memory: \"100Mi\"\n        maxLimitRequestRatio:\n          cpu: \"10\"\n

    When several tenants share a single cluster with a fixed number of resources, there is a concern that one tenant could use more than its fair share of resources. Quota is a wrapper around OpenShift ClusterResourceQuota and LimitRange which allows administrators to limit resource consumption per Tenant. For more details, see Quota.Spec and LimitRange.Spec

    "},{"location":"customresources.html#2-tenant","title":"2. Tenant","text":"

    Cluster scoped resource:

    The smallest valid Tenant definition is given below (with just one field in its spec):

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: alpha\nspec:\n  quota: small\n

    Here is a more detailed Tenant definition, explained below:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: alpha\nspec:\n  owners: # optional\n    users: # optional\n      - dave@stakater.com\n    groups: # optional\n      - alpha\n  editors: # optional\n    users: # optional\n      - jack@stakater.com\n  viewers: # optional\n    users: # optional\n      - james@stakater.com\n  quota: medium # required\n  sandboxConfig: # optional\n    enabled: true # optional\n    private: true # optional\n  onDelete: # optional\n    cleanNamespaces: false # optional\n    cleanAppProject: true # optional\n  argocd: # optional\n    sourceRepos: # required\n      - https://github.com/stakater/gitops-config\n    appProject: # optional\n      clusterResourceWhitelist: # optional\n        - group: tronador.stakater.com\n          kind: Environment\n      namespaceResourceBlacklist: # optional\n        - group: \"\"\n          kind: ConfigMap\n  hibernation: # optional\n    sleepSchedule: 23 * * * * # required\n    wakeSchedule: 26 * * * * # required\n  namespaces: # optional\n    withTenantPrefix: # optional\n      - dev\n      - build\n    withoutTenantPrefix: # optional\n      - preview\n  commonMetadata: # optional\n    labels: # optional\n      stakater.com/team: alpha\n    annotations: # optional\n      openshift.io/node-selector: node-role.kubernetes.io/infra=\n  specificMetadata: # optional\n    - annotations: # optional\n        stakater.com/user: dave\n      labels: # optional\n        stakater.com/sandbox: true\n      namespaces: # optional\n        - alpha-dave-stakater-sandbox\n  templateInstances: # optional\n  - spec: # optional\n      template: networkpolicy # required\n      sync: true  # optional\n      parameters: # optional\n        - name: CIDR_IP\n          value: \"172.17.0.0/16\"\n    selector: # optional\n      matchLabels: # optional\n        policy: network-restriction\n

    \u26a0\ufe0f If the same label or annotation key is applied using more than one of the provided methods, the highest precedence is given to specificMetadata, followed by commonMetadata, and finally the values applied from openshift.project.labels/openshift.project.annotations in the IntegrationConfig

    "},{"location":"customresources.html#3-template","title":"3. Template","text":"

    Cluster scoped resource:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: redis\nresources:\n  helm:\n    releaseName: redis\n    chart:\n      repository:\n        name: redis\n        repoUrl: https://charts.bitnami.com/bitnami\n    values: |\n      redisPort: 6379\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: networkpolicy\nparameters:\n  - name: CIDR_IP\n    value: \"172.17.0.0/16\"\nresources:\n  manifests:\n    - kind: NetworkPolicy\n      apiVersion: networking.k8s.io/v1\n      metadata:\n        name: deny-cross-ns-traffic\n      spec:\n        podSelector:\n          matchLabels:\n            role: db\n        policyTypes:\n        - Ingress\n        - Egress\n        ingress:\n        - from:\n          - ipBlock:\n              cidr: \"${{CIDR_IP}}\"\n              except:\n              - 172.17.1.0/24\n          - namespaceSelector:\n              matchLabels:\n                project: myproject\n          - podSelector:\n              matchLabels:\n                role: frontend\n          ports:\n          - protocol: TCP\n            port: 6379\n        egress:\n        - to:\n          - ipBlock:\n              cidr: 10.0.0.0/24\n          ports:\n          - protocol: TCP\n            port: 5978\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: resource-mapping\nresources:\n  resourceMappings:\n    secrets:\n      - name: secret-s1\n        namespace: namespace-n1\n    configMaps:\n      - name: configmap-c1\n        namespace: namespace-n2\n

    Templates are used to initialize Namespaces, share common resources across namespaces, and map secrets/configmaps from one namespace to other namespaces.

    You can also define custom variables in Template and TemplateInstance. The parameters defined in TemplateInstance overwrite the values defined in Template.

    Manifest Templates: The easiest option to define a Template is by specifying an array of Kubernetes manifests which should be applied when the Template is being instantiated.

    Helm Chart Templates: Instead of manifests, a Template can specify a Helm chart that will be installed (using Helm template) when the Template is being instantiated.

    Resource Mapping Templates: A template can be used to map secrets and configmaps from one tenant's namespace to another tenant's namespace, or within a tenant's namespace.

    "},{"location":"customresources.html#mandatory-and-optional-templates","title":"Mandatory and Optional Templates","text":"

    Templates can either be mandatory or optional. By default, all Templates are optional. Cluster Admins can make Templates mandatory by adding them to the spec.templateInstances array within the Tenant configuration. All Templates listed in spec.templateInstances will always be instantiated within every Namespace that is created for the respective Tenant.

    "},{"location":"customresources.html#4-templateinstance","title":"4. TemplateInstance","text":"

    Namespace scoped resource:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: networkpolicy\n  namespace: build\nspec:\n  template: networkpolicy\n  sync: true\nparameters:\n  - name: CIDR_IP\n    value: \"172.17.0.0/16\"\n

    TemplateInstances are used to keep track of resources created from Templates that are instantiated inside a Namespace. Generally, a TemplateInstance is created from a Template, and the TemplateInstance is not updated when the Template changes later on. To change this behavior, it is possible to set spec.sync: true in a TemplateInstance. Setting this option keeps the TemplateInstance in sync with the underlying Template (similar to Helm upgrade).

    "},{"location":"customresources.html#5-templategroupinstance","title":"5. TemplateGroupInstance","text":"

    Cluster scoped resource:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: namespace-parameterized-restrictions-tgi\nspec:\n  template: namespace-parameterized-restrictions\n  sync: true\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: In\n      values:\n        - alpha\n        - beta\nparameters:\n  - name: CIDR_IP\n    value: \"172.17.0.0/16\"\n

    TemplateGroupInstance distributes a template across multiple namespaces which are selected by labelSelector.

    "},{"location":"customresources.html#6-resourcesupervisor","title":"6. ResourceSupervisor","text":"

    Cluster scoped resource:

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: tenant-sample\nspec:\n argocd:\n   appProjects:\n     - tenant-sample\n  hibernation:\n    sleepSchedule: 23 * * * *\n    wakeSchedule: 26 * * * *\n  namespaces:\n    - stage\n    - dev\nstatus:\n  currentStatus: running\n  nextReconcileTime: '2022-07-07T11:23:00Z'\n

    The ResourceSupervisor is a resource created by MTO in case the Hibernation feature is enabled. The Resource manages the sleep/wake schedule of the namespaces owned by the tenant, and manages the previous state of any sleeping application. Currently, only StatefulSets and Deployments are put to sleep. Additionally, ArgoCD AppProjects that belong to the tenant have a deny SyncWindow added to them.

    The ResourceSupervisor can be created both via the Tenant or manually. For more details, check some of its use cases

    "},{"location":"customresources.html#namespace","title":"Namespace","text":"
    apiVersion: v1\nkind: Namespace\nmetadata:\n  labels:\n    stakater.com/tenant: blue-sky\n  name: build\n
    "},{"location":"customresources.html#notes","title":"Notes","text":""},{"location":"eula.html","title":"Multi Tenant Operator End User License Agreement","text":"

    Last revision date: 12 December 2022

    IMPORTANT: THIS SOFTWARE END-USER LICENSE AGREEMENT (\"EULA\") IS A LEGAL AGREEMENT (\"Agreement\") BETWEEN YOU (THE CUSTOMER, EITHER AS AN INDIVIDUAL OR, IF PURCHASED OR OTHERWISE ACQUIRED BY OR FOR AN ENTITY, AS AN ENTITY) AND Stakater AB OR ITS SUBSIDUARY (\"COMPANY\"). READ IT CAREFULLY BEFORE COMPLETING THE INSTALLATION PROCESS AND USING MULTI TENANT OPERATOR (\"SOFTWARE\"). IT PROVIDES A LICENSE TO USE THE SOFTWARE AND CONTAINS WARRANTY INFORMATION AND LIABILITY DISCLAIMERS. BY INSTALLING AND USING THE SOFTWARE, YOU ARE CONFIRMING YOUR ACCEPTANCE OF THE SOFTWARE AND AGREEING TO BECOME BOUND BY THE TERMS OF THIS AGREEMENT.

    In order to use the Software under this Agreement, you must receive a license key at the time of purchase, in accordance with the scope of use and other terms specified and as set forth in Section 1 of this Agreement.

    "},{"location":"eula.html#1-license-grant","title":"1. License Grant","text":""},{"location":"eula.html#2-modifications","title":"2. Modifications","text":""},{"location":"eula.html#3-restricted-uses","title":"3. Restricted Uses","text":""},{"location":"eula.html#4-ownership","title":"4. Ownership","text":""},{"location":"eula.html#5-fees-and-payment","title":"5. Fees and Payment","text":""},{"location":"eula.html#6-support-maintenance-and-services","title":"6. Support, Maintenance and Services","text":""},{"location":"eula.html#7-disclaimer-of-warranties","title":"7. Disclaimer of Warranties","text":""},{"location":"eula.html#8-limitation-of-liability","title":"8. Limitation of Liability","text":""},{"location":"eula.html#9-remedies","title":"9. Remedies","text":""},{"location":"eula.html#10-acknowledgements","title":"10. Acknowledgements","text":""},{"location":"eula.html#11-third-party-software","title":"11. Third Party Software","text":""},{"location":"eula.html#12-miscellaneous","title":"12. Miscellaneous","text":""},{"location":"eula.html#13-contact-information","title":"13. Contact Information","text":""},{"location":"faq.html","title":"FAQs","text":""},{"location":"faq.html#namespace-admission-webhook","title":"Namespace Admission Webhook","text":""},{"location":"faq.html#q-error-received-while-performing-create-update-or-delete-action-on-namespace","title":"Q. Error received while performing Create, Update or Delete action on Namespace","text":"
    Cannot CREATE namespace test-john without label stakater.com/tenant\n

    Answer. This error occurs when a user tries to perform a create, update, or delete action on a namespace without the required stakater.com/tenant label. This label is used by the operator to verify that the user is authorized to perform that action on the namespace. Just add the label with the tenant name so that MTO knows which tenant the namespace belongs to and who is authorized to perform create/update/delete operations. For more details, please refer to the Namespace use-case.

    "},{"location":"faq.html#q-error-received-while-performing-create-update-or-delete-action-on-openshift-project","title":"Q. Error received while performing Create, Update or Delete action on OpenShift Project","text":"
    Cannot CREATE namespace testing without label stakater.com/tenant. User: system:serviceaccount:openshift-apiserver:openshift-apiserver-sa\n

    Answer. This error occurs because we don't allow Tenant members to perform operations on OpenShift Projects directly. Whenever an operation is done on a project, openshift-apiserver-sa performs the same request on the underlying namespace. That's why the error message shows the openshift-apiserver-sa Service Account instead of the user's own account.

    The fix is to try the same operation on the namespace manifest instead.

    "},{"location":"faq.html#q-error-received-while-doing-kubectl-apply-f-namespaceyaml","title":"Q. Error received while doing \"kubectl apply -f namespace.yaml\"","text":"
    Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"/v1, Resource=namespaces\", GroupVersionKind: \"/v1, Kind=Namespace\"\nName: \"ns1\", Namespace: \"\"\nfrom server for: \"namespace.yaml\": namespaces \"ns1\" is forbidden: User \"muneeb\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"ns1\"\n

    Answer. Tenant members will not be able to use kubectl apply because apply first gets all the instances of that resource, in this case namespaces, and then performs the required operation on the selected resource. To maintain tenancy, tenant members do not have access to get or list all the namespaces.

    The fix is to create namespaces with kubectl create instead.

    "},{"location":"faq.html#mto-argocd-integration","title":"MTO - ArgoCD Integration","text":""},{"location":"faq.html#q-how-do-i-deploy-cluster-scoped-resource-via-the-argocd-integration","title":"Q. How do I deploy cluster-scoped resource via the ArgoCD integration?","text":"

    Answer. Multi-Tenant Operator's ArgoCD Integration allows configuration of which cluster-scoped resources can be deployed, both globally and on a per-tenant basis. For a global allow-list that applies to all tenants, you can add both resource group and kind to the IntegrationConfig's spec.argocd.clusterResourceWhitelist field. Alternatively, you can set this up on a tenant level by configuring the same details within a Tenant's spec.argocd.appProject.clusterResourceWhitelist field. For more details, check out the ArgoCD integration use cases

    "},{"location":"faq.html#q-invalidspecerror-application-repo-repo-is-not-permitted-in-project-project","title":"Q. InvalidSpecError: application repo \\<repo> is not permitted in project \\<project>","text":"

    Answer. The above error can occur if the ArgoCD Application is syncing from a source that is not permitted by the referenced AppProject. To solve this, verify that you have referred to the correct project in the given ArgoCD Application, and that the repoURL used for the Application's source is valid. If the error still appears, you can add the URL to the relevant Tenant's spec.argocd.sourceRepos array.

    "},{"location":"faq.html#q-why-are-there-mto-showback-pods-failing-in-my-cluster","title":"Q. Why are there mto-showback-* pods failing in my cluster?","text":"

    Answer. The mto-showback-* pods are used to calculate the cost of the resources used by each tenant. These pods are created by the Multi-Tenant Operator and are scheduled to run every 10 minutes. If the pods are failing, it is likely that the operators necessary to calculate cost are not present in the cluster. To solve this, you can navigate to Operators -> Installed Operators in the OpenShift console and check if the MTO-OpenCost and MTO-Prometheus operators are installed. If they are in a pending state, you can manually approve them to install them in the cluster.

    "},{"location":"features.html","title":"Features","text":"

    The major features of Multi Tenant Operator (MTO) are described below.

    "},{"location":"features.html#kubernetes-multitenancy","title":"Kubernetes Multitenancy","text":"

    RBAC is one of the most complicated and error-prone parts of Kubernetes. With Multi Tenant Operator, you can rest assured that RBAC is configured with the \"least privilege\" mindset and all rules are kept up-to-date with zero manual effort.

    Multi Tenant Operator binds existing ClusterRoles to the Tenant's Namespaces used for managing access to the Namespaces and the resources they contain. You can also modify the default roles or create new roles to have full control and customize access control for your users and teams.

    Multi Tenant Operator is also able to leverage existing OpenShift groups or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system.

    "},{"location":"features.html#hashicorp-vault-multitenancy","title":"HashiCorp Vault Multitenancy","text":"

    Multi Tenant Operator extends the tenants permission model to HashiCorp Vault where it can create Vault paths and greatly ease the overhead of managing RBAC in Vault. Tenant users can manage their own secrets without the concern of someone else having access to their Vault paths.

    More details on Vault Multitenancy

    "},{"location":"features.html#argocd-multitenancy","title":"ArgoCD Multitenancy","text":"

    Multi Tenant Operator not only provides strong multi-tenancy for the OpenShift internals but also extends the tenants' permission model to ArgoCD, where it can provision AppProjects and Allowed Repositories for your tenants, greatly easing the overhead of managing RBAC in ArgoCD.

    More details on ArgoCD Multitenancy

    "},{"location":"features.html#mattermost-multitenancy","title":"Mattermost Multitenancy","text":"

    Multi Tenant Operator can manage Mattermost to create Teams for tenant users. All tenant users get a unique team, and a list of predefined channels gets created. When a user is removed from the tenant, the user is also removed from the Mattermost team corresponding to the tenant.

    More details on Mattermost

    "},{"location":"features.html#costresource-optimization","title":"Cost/Resource Optimization","text":"

    Multi Tenant Operator provides a mechanism for defining Resource Quotas at the tenant scope, meaning all namespaces belonging to a particular tenant share the defined quota, which is why you are able to safely enable dev teams to self serve their namespaces whilst being confident that they can only use the resources allocated based on budget and business needs.

    More details on Quota

    "},{"location":"features.html#remote-development-namespaces","title":"Remote Development Namespaces","text":"

    Multi Tenant Operator can be configured to automatically provision a namespace in the cluster for every member of a specific tenant. These namespaces are preloaded with any selected templates and consume the same pool of resources from the tenant's quota, creating safe remote dev namespaces that teams can use as scratch namespaces for rapid prototyping and development. Every developer gets a Kubernetes-based cloud development environment that feels like working on localhost.

    More details on Sandboxes

    "},{"location":"features.html#templates-and-template-distribution","title":"Templates and Template distribution","text":"

    Multi Tenant Operator allows admins/users to define templates for namespaces, so that others can instantiate these templates to provision namespaces with batteries loaded. A template could pre-populate a namespace for certain use cases or with basic tooling required. Templates allow you to define Kubernetes manifests, Helm chart and more to be applied when the template is used to create a namespace.

    It also allows the parameterizing of these templates for flexibility and ease of use. It also provides the option to enforce the presence of templates in one tenant's or all the tenants' namespaces for configuring secure defaults.

    Common use cases for namespace templates may be:

    More details on Distributing Template Resources

    "},{"location":"features.html#hibernation","title":"Hibernation","text":"

    Multi Tenant Operator can downscale Deployments and StatefulSets in a tenant's Namespace according to a defined sleep schedule. The Deployments and StatefulSets are brought back to their required replicas according to the provided wake schedule.

    More details on Hibernation

    "},{"location":"features.html#cross-namespace-resource-distribution","title":"Cross Namespace Resource Distribution","text":"

    Multi Tenant Operator supports cloning of secrets and configmaps from one namespace to another based on label selectors. It uses templates to enable users to provide references to secrets and configmaps, and a template group instance to distribute those secrets and configmaps to matching namespaces, even if the namespaces belong to different tenants. If a template instance is used, the resources will only be mapped if the namespaces belong to the same tenant.

    More details on Distributing Secrets and ConfigMaps

    "},{"location":"features.html#self-service","title":"Self-Service","text":"

    With Multi Tenant Operator, you can empower your users to safely provision namespaces for themselves and their teams (typically mapped to SSO groups). Team-owned namespaces and the resources inside them count towards the team's quotas rather than the user's individual limits and are automatically shared with all team members according to the access rules you configure in Multi Tenant Operator.

    Also, by leveraging Multi Tenant Operator's templating mechanism, namespaces can be provisioned and automatically pre-populated with any kind of resource or multiple resources, such as network policies, docker pull secrets, or even Helm charts.

    "},{"location":"features.html#everything-as-codegitops-ready","title":"Everything as Code/GitOps Ready","text":"

    Multi Tenant Operator is designed and built to be 100% OpenShift-native and to be configured and managed the same familiar way as native OpenShift resources, so it is perfect for modern shops that are dedicated to GitOps, as it is fully configurable using Custom Resources.

    "},{"location":"features.html#preventing-clusters-sprawl","title":"Preventing Clusters Sprawl","text":"

    As companies look to further harness the power of cloud-native, they are adopting container technologies at rapid speed, increasing the number of clusters and workloads. As the number of Kubernetes clusters grows, this means increasing work for the Ops team. When it comes to patching security issues or upgrading clusters, teams are doing five times the amount of work.

    With Multi Tenant Operator, a single cluster can be shared by multiple teams, groups of users, or departments, saving operational and management effort. This prevents Kubernetes cluster sprawl.

    "},{"location":"features.html#native-experience","title":"Native Experience","text":"

    Multi Tenant Operator provides multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries.

    "},{"location":"features.html#custom-metrics-support","title":"Custom Metrics Support","text":"

    Multi Tenant Operator now supports custom metrics for templates, template instances and template group instances.

    Exposed metrics include the number of resources deployed, the number of resources that failed, and the total number of resources deployed for template instances and template group instances. These metrics can be used to monitor the usage of templates and template instances in the cluster.

    Additionally, this allows us to expose other performance metrics listed here.

    More details on Enabling Custom Metrics

    "},{"location":"features.html#graph-visualization-for-tenants","title":"Graph Visualization for Tenants","text":"

    Multi Tenant Operator now supports graph visualization for tenants on the MTO Console. Effortlessly associate tenants with their respective resources using the enhanced graph feature on the MTO Console. This dynamic graph illustrates the relationships between tenants and the resources they create, encompassing both MTO's proprietary resources and native Kubernetes/OpenShift elements.

    More details on Graph Visualization

    "},{"location":"hibernation.html","title":"Hibernating Namespaces","text":"

    You can manage workloads in your cluster with MTO by implementing a hibernation schedule for your tenants. Hibernation downsizes the running Deployments and StatefulSets in a tenant\u2019s namespace according to a defined cron schedule. You can set a hibernation schedule for your tenants by adding the \u2018spec.hibernation\u2019 field to the tenant's respective Custom Resource.

    hibernation:\n  sleepSchedule: 23 * * * *\n  wakeSchedule: 26 * * * *\n

    spec.hibernation.sleepSchedule accepts a cron expression indicating the time to put the workloads in your tenant\u2019s namespaces to sleep.

    spec.hibernation.wakeSchedule accepts a cron expression indicating the time to wake the workloads in your tenant\u2019s namespaces up.

    Note

    Both sleep and wake schedules must be specified for your Hibernation schedule to be valid.

    Additionally, adding the hibernation.stakater.com/exclude: 'true' annotation to a namespace excludes it from hibernating.

    Note

    This is only true for hibernation applied via the Tenant Custom Resource, and does not apply for hibernation done by manually creating a ResourceSupervisor (details about that below).

    Note

    This will not wake up an already sleeping namespace before the wake schedule.

    "},{"location":"hibernation.html#resource-supervisor","title":"Resource Supervisor","text":"

    Adding a Hibernation Schedule to a Tenant creates an accompanying ResourceSupervisor Custom Resource. The Resource Supervisor stores the Hibernation schedules and manages the current and previous states of all the applications, whether they are sleeping or awake.

    When the sleep timer is activated, the Resource Supervisor controller stores the details of your applications (including the number of replicas, configurations, etc.) in the applications' namespaces and then puts your applications to sleep. When the wake timer is activated, the controller wakes up the applications using their stored details.

    Enabling ArgoCD support for Tenants will also hibernate applications in the tenants' appProjects.

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: sigma\nspec:\n  argocd:\n    appProjects:\n      - sigma\n    namespace: openshift-gitops\n  hibernation:\n    sleepSchedule: 42 * * * *\n    wakeSchedule: 45 * * * *\n  namespaces:\n    - tenant-ns1\n    - tenant-ns2\n

    Currently, Hibernation is available only for StatefulSets and Deployments.

    "},{"location":"hibernation.html#manual-creation-of-resourcesupervisor","title":"Manual creation of ResourceSupervisor","text":"

    Hibernation can also be applied by creating a ResourceSupervisor resource manually. The ResourceSupervisor definition will contain the hibernation cron schedule, the names of the namespaces to be hibernated, and the names of the ArgoCD AppProjects whose ArgoCD Applications have to be hibernated (as per the given schedule).

    This method can be used to hibernate:

    As an example, the following ResourceSupervisor could be created manually, to apply hibernation explicitly to the 'ns1' and 'ns2' namespaces, and to the 'sample-app-project' AppProject.

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: hibernator\nspec:\n  argocd:\n    appProjects:\n      - sample-app-project\n    namespace: openshift-gitops\n  hibernation:\n    sleepSchedule: 42 * * * *\n    wakeSchedule: 45 * * * *\n  namespaces:\n    - ns1\n    - ns2\n
    "},{"location":"installation.html","title":"Installation","text":"

    This document contains instructions on installing, uninstalling and configuring Multi Tenant Operator using OpenShift MarketPlace.

    1. OpenShift OperatorHub UI

    2. CLI/GitOps

    3. Uninstall

    "},{"location":"installation.html#requirements","title":"Requirements","text":""},{"location":"installation.html#installing-via-operatorhub-ui","title":"Installing via OperatorHub UI","text":"

    Note: Use the stable channel for seamless upgrades. For production environments, prefer Manual approval; use Automatic for development environments.

    Note: MTO will be installed in multi-tenant-operator namespace.

    "},{"location":"installation.html#configuring-integrationconfig","title":"Configuring IntegrationConfig","text":"

    IntegrationConfig is required to configure the settings of multi-tenancy for MTO.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedNamespaces:\n      - default\n      - ^openshift-*\n      - ^kube-*\n      - ^redhat-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:default-*\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n      - ^system:serviceaccount:redhat-*\n

    For more details and configuration options, check out IntegrationConfig.

    "},{"location":"installation.html#installing-via-cli-or-gitops","title":"Installing via CLI OR GitOps","text":"
    oc create namespace multi-tenant-operator\nnamespace/multi-tenant-operator created\n
    oc create -f - << EOF\napiVersion: operators.coreos.com/v1\nkind: OperatorGroup\nmetadata:\n  name: tenant-operator\n  namespace: multi-tenant-operator\nEOF\noperatorgroup.operators.coreos.com/tenant-operator created\n
    oc create -f - << EOF\napiVersion: operators.coreos.com/v1alpha1\nkind: Subscription\nmetadata:\n  name: tenant-operator\n  namespace: multi-tenant-operator\nspec:\n  channel: stable\n  installPlanApproval: Automatic\n  name: tenant-operator\n  source: certified-operators\n  sourceNamespace: openshift-marketplace\n  startingCSV: tenant-operator.v0.9.1\n  config:\n    env:\n      - name: ENABLE_CONSOLE\n        value: 'true'\nEOF\nsubscription.operators.coreos.com/tenant-operator created\n

    Note: To install MTO via GitOps, add the above manifests to your GitOps repository.

    "},{"location":"installation.html#configuring-integrationconfig_1","title":"Configuring IntegrationConfig","text":"

    IntegrationConfig is required to configure the settings of multi-tenancy for MTO.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedNamespaces:\n      - default\n      - ^openshift-*\n      - ^kube-*\n      - ^redhat-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:default-*\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n      - ^system:serviceaccount:redhat-*\n

    For more details and configuration options, check out IntegrationConfig.

    "},{"location":"installation.html#uninstall-via-operatorhub-ui","title":"Uninstall via OperatorHub UI","text":"

    You can uninstall MTO by following these steps:

    "},{"location":"installation.html#notes","title":"Notes","text":""},{"location":"integration-config.html","title":"Integration Config","text":"

    IntegrationConfig is used to configure settings of multi-tenancy for Multi Tenant Operator.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  tenantRoles:\n    default:\n      owner:\n        clusterRoles:\n          - admin\n      editor:\n        clusterRoles:\n          - edit\n      viewer:\n        clusterRoles:\n          - view\n          - viewer\n    custom:\n    - labelSelector:\n        matchExpressions:\n        - key: stakater.com/kind\n          operator: In\n          values:\n            - build\n        matchLabels:\n          stakater.com/kind: dev\n      owner:\n        clusterRoles:\n          - custom-owner\n      editor:\n        clusterRoles:\n          - custom-editor\n      viewer:\n        clusterRoles:\n          - custom-viewer\n          - custom-view\n  openshift:\n    project:\n      labels:\n        stakater.com/workload-monitoring: \"true\"\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\n    group:\n      labels:\n        role: customer-reader\n    sandbox:\n      labels:\n        stakater.com/kind: sandbox\n    clusterAdminGroups:\n      - cluster-admins\n    privilegedNamespaces:\n      - ^default$\n      - ^openshift-*\n      - ^kube-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n    namespaceAccessPolicy:\n      deny:\n        privilegedNamespaces:\n          users:\n            - system:serviceaccount:openshift-argocd:argocd-application-controller\n            - adam@stakater.com\n          groups:\n            - cluster-admins\n  argocd:\n    namespace: openshift-operators\n    namespaceResourceBlacklist:\n      - group: '' # all groups\n        kind: ResourceQuota\n    clusterResourceWhitelist:\n      - group: tronador.stakater.com\n        kind: EnvironmentProvisioner\n  rhsso:\n    enabled: true\n    realm: customer\n    endpoint:\n      url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n      secretReference:\n        name: auth-secrets\n        namespace: openshift-auth\n  vault:\n    enabled: true\n    endpoint:\n      url: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n      secretReference:\n        name: vault-root-token\n        namespace: vault\n    sso:\n      clientName: vault\n      accessorID: <ACCESSOR_ID_TOKEN>\n

    Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator.

    "},{"location":"integration-config.html#tenantroles","title":"TenantRoles","text":"

    TenantRoles are required within the IntegrationConfig, as they are used for defining what roles will be applied to each Tenant namespace. The field allows optional custom roles, which are then used to create RoleBindings for namespaces that match a labelSelector.

    \u26a0\ufe0f If you do not configure roles in any way, then the default OpenShift roles of owner, edit, and view will apply to Tenant members. Their details can be found here

    tenantRoles:\n  default:\n    owner:\n      clusterRoles:\n        - admin\n    editor:\n      clusterRoles:\n        - edit\n    viewer:\n      clusterRoles:\n        - view\n        - viewer\n  custom:\n  - labelSelector:\n      matchExpressions:\n      - key: stakater.com/kind\n        operator: In\n        values:\n          - build\n      matchLabels:\n        stakater.com/kind: dev\n    owner:\n      clusterRoles:\n        - custom-owner\n    editor:\n      clusterRoles:\n        - custom-editor\n    viewer:\n      clusterRoles:\n        - custom-viewer\n        - custom-view\n
    "},{"location":"integration-config.html#default","title":"Default","text":"

    This field contains roles that will be used to create default roleBindings for each namespace that belongs to tenants. These roleBindings are only created for a namespace if that namespace isn't already matched by the custom field below it. Therefore, it is required to have at least one role mentioned within each of its three subfields: owner, editor, and viewer. These 3 subfields also correspond to the member fields of the Tenant CR.

    "},{"location":"integration-config.html#custom","title":"Custom","text":"

    An array of custom roles. Similar to the default field, you can mention roles within this field as well. However, the custom roles also require the use of a labelSelector for each iteration within the array. The roles mentioned here will only apply to the namespaces that are matched by the labelSelector. If a namespace is matched by 2 different labelSelectors, then both roles will apply to it. Additionally, roles can be skipped within the labelSelector. These missing roles are then inherited from the default roles field. For example, if the following custom roles arrangement is used:

    custom:\n- labelSelector:\n    matchExpressions:\n    - key: stakater.com/kind\n      operator: In\n      values:\n        - build\n    matchLabels:\n      stakater.com/kind: dev\n  owner:\n    clusterRoles:\n      - custom-owner\n

    Then the editor and viewer roles will be taken from the default roles field, since the default field is required to have at least one role mentioned for each subfield.

    "},{"location":"integration-config.html#openshift","title":"OpenShift","text":"
    openshift:\n  project:\n    labels:\n      stakater.com/workload-monitoring: \"true\"\n    annotations:\n      openshift.io/node-selector: node-role.kubernetes.io/worker=\n  group:\n    labels:\n      role: customer-reader\n  sandbox:\n    labels:\n      stakater.com/kind: sandbox\n  clusterAdminGroups:\n    - cluster-admins\n  privilegedNamespaces:\n    - ^default$\n    - ^openshift-*\n    - ^kube-*\n  privilegedServiceAccounts:\n    - ^system:serviceaccount:openshift-*\n    - ^system:serviceaccount:kube-*\n  namespaceAccessPolicy:\n    deny:\n      privilegedNamespaces:\n        users:\n          - system:serviceaccount:openshift-argocd:argocd-application-controller\n          - adam@stakater.com\n        groups:\n          - cluster-admins\n
    "},{"location":"integration-config.html#project-group-and-sandbox","title":"Project, group and sandbox","text":"

    We can use the openshift.project, openshift.group and openshift.sandbox fields to automatically add labels and annotations to the Projects and Groups managed via MTO.

      openshift:\n    project:\n      labels:\n        stakater.com/workload-monitoring: \"true\"\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\n    group:\n      labels:\n        role: customer-reader\n    sandbox:\n      labels:\n        stakater.com/kind: sandbox\n

    If we want to add default labels/annotations to the sandbox namespaces of tenants, then we simply add them in openshift.project.labels/openshift.project.annotations respectively.

    Whenever a project is created, it will have the labels and annotations mentioned above.

    kind: Project\napiVersion: project.openshift.io/v1\nmetadata:\n  name: bluesky-build\n  annotations:\n    openshift.io/node-selector: node-role.kubernetes.io/worker=\n  labels:\n    workload-monitoring: 'true'\n    stakater.com/tenant: bluesky\nspec:\n  finalizers:\n    - kubernetes\nstatus:\n  phase: Active\n
    kind: Group\napiVersion: user.openshift.io/v1\nmetadata:\n  name: bluesky-owner-group\n  labels:\n    role: customer-reader\nusers:\n  - andrew@stakater.com\n
    "},{"location":"integration-config.html#cluster-admin-groups","title":"Cluster Admin Groups","text":"

    clusterAdminGroups: Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way. MTO does not interfere even with the deletion of privilegedNamespaces.

    Note

    The kube:admin user is bypassed by default to perform operations as a cluster admin; this includes operations on all namespaces.

    "},{"location":"integration-config.html#privileged-namespaces","title":"Privileged Namespaces","text":"

    privilegedNamespaces: Contains the list of namespaces ignored by MTO. MTO will not manage the namespaces in this list. Privileged namespaces are not subject to the integrations or finalizer processing applied to normal namespaces. Values in this list are regex patterns. For example, to ignore the default namespace, use ^default$; and to ignore all namespaces starting with the openshift- prefix, use ^openshift-*.

    "},{"location":"integration-config.html#privileged-serviceaccounts","title":"Privileged ServiceAccounts","text":"

    privilegedServiceAccounts: Contains the list of ServiceAccounts ignored by MTO. MTO will not manage the ServiceAccounts in this list. Values in this list are regex patterns. For example, to ignore all ServiceAccounts starting with the system:serviceaccount:openshift- prefix, we can use ^system:serviceaccount:openshift-*; and to ignore the system:serviceaccount:builder service account we can use ^system:serviceaccount:builder$.

    "},{"location":"integration-config.html#namespace-access-policy","title":"Namespace Access Policy","text":"

    namespaceAccessPolicy.Deny: Can be used to restrict CRUD operations on managed namespaces for the listed privileged users/groups.

    namespaceAccessPolicy:\n  deny:\n    privilegedNamespaces:\n      groups:\n        - cluster-admins\n      users:\n        - system:serviceaccount:openshift-argocd:argocd-application-controller\n        - adam@stakater.com\n

    \u26a0\ufe0f If you want to use a more complex regex pattern (for the openshift.privilegedNamespaces or openshift.privilegedServiceAccounts field), it is recommended that you test the regex pattern first - either locally or using a platform such as https://regex101.com/.

    "},{"location":"integration-config.html#argocd","title":"ArgoCD","text":""},{"location":"integration-config.html#namespace","title":"Namespace","text":"

    argocd.namespace is an optional field used to specify the namespace where ArgoCD Applications and AppProjects are deployed. The field should be populated when you want to create an ArgoCD AppProject for each tenant.
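
    For example, to point MTO at an ArgoCD instance running in openshift-operators (the value used in the sample IntegrationConfig above), the field can be set like this:

    argocd:\n  namespace: openshift-operators\n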

    "},{"location":"integration-config.html#namespaceresourceblacklist","title":"NamespaceResourceBlacklist","text":"
    argocd:\n  namespaceResourceBlacklist:\n  - group: '' # all resource groups\n    kind: ResourceQuota\n  - group: ''\n    kind: LimitRange\n  - group: ''\n    kind: NetworkPolicy\n

    argocd.namespaceResourceBlacklist prevents ArgoCD from syncing the listed resources from your GitOps repo.

    "},{"location":"integration-config.html#clusterresourcewhitelist","title":"ClusterResourceWhitelist","text":"
    argocd:\n  clusterResourceWhitelist:\n  - group: tronador.stakater.com\n    kind: EnvironmentProvisioner\n

    argocd.clusterResourceWhitelist allows ArgoCD to sync the listed cluster scoped resources from your GitOps repo.

    "},{"location":"integration-config.html#rhsso-red-hat-single-sign-on","title":"RHSSO (Red Hat Single Sign-On)","text":"

    Red Hat Single Sign-On (RHSSO) is based on the Keycloak project and enables you to secure your web applications by providing web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect, and OAuth 2.0.

    If RHSSO is configured on a cluster, then RHSSO configuration can be enabled.

    rhsso:\n  enabled: true\n  realm: customer\n  endpoint:\n    secretReference:\n      name: auth-secrets\n      namespace: openshift-auth\n    url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n

    If enabled, then admins have to provide the secret and URL of RHSSO.

    "},{"location":"integration-config.html#vault","title":"Vault","text":"

    Vault is used to secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.

    If Vault is configured on a cluster, then the Vault configuration can be enabled.

    vault:\n  enabled: true\n  endpoint:\n    secretReference:\n      name: vault-root-token\n      namespace: vault\n    url: >-\n      https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n  sso:\n    accessorID: <ACCESSOR_ID_TOKEN>\n    clientName: vault\n

    If enabled, then admins have to provide the secret, URL, and SSO accessorID of Vault.

    For more details, please refer to use-cases.

    "},{"location":"tenant-roles.html","title":"Tenant Member Roles","text":"

    After adding support for custom roles within MTO, this page is only applicable if you use OpenShift and its default owner, edit, and view roles. For more details, see the IntegrationConfig spec

    MTO tenant members can have one of the following 3 roles:

    1. Owner
    2. Editor
    3. Viewer
    "},{"location":"tenant-roles.html#1-owner","title":"1. Owner","text":"

    fig 2. Shows how tenant owners manage their tenant using MTO

    An Owner is an admin of a tenant, with some restrictions. Owners have the privilege to see all resources in their Tenant and can also create new namespaces.

    Owners will also inherit the Editor and Viewer roles.

    "},{"location":"tenant-roles.html#access-permissions","title":"Access Permissions","text":""},{"location":"tenant-roles.html#quotas-permissions","title":"Quotas Permissions","text":""},{"location":"tenant-roles.html#resources-permissions","title":"Resources Permissions","text":""},{"location":"tenant-roles.html#2-editor","title":"2. Editor","text":"

    fig 3. Shows editors role in a tenant using MTO

    Editors have edit access on their Projects, but they won't have access to Roles or RoleBindings.

    Editors will also inherit the Viewer role.

    "},{"location":"tenant-roles.html#access-permissions_1","title":"Access Permissions","text":""},{"location":"tenant-roles.html#quotas-permissions_1","title":"Quotas Permissions","text":""},{"location":"tenant-roles.html#builds-pods-pvc-permissions","title":"Builds ,Pods , PVC Permissions","text":""},{"location":"tenant-roles.html#resources-permissions_1","title":"Resources Permissions","text":""},{"location":"tenant-roles.html#3-viewer","title":"3. Viewer","text":"

    fig 4. Shows viewers role in a tenant using MTO

    Viewers will only have view access on their Projects.

    "},{"location":"tenant-roles.html#access-permissions_2","title":"Access Permissions","text":""},{"location":"tenant-roles.html#quotas-permissions_2","title":"Quotas Permissions","text":""},{"location":"tenant-roles.html#builds-pods-pvc-permissions_1","title":"Builds ,Pods , PVC Permissions","text":""},{"location":"tenant-roles.html#resources-permissions_2","title":"Resources Permissions","text":""},{"location":"troubleshooting.html","title":"Troubleshooting Guide","text":""},{"location":"troubleshooting.html#operatorhub-upgrade-error","title":"OperatorHub Upgrade Error","text":""},{"location":"troubleshooting.html#operator-is-stuck-in-upgrade-if-upgrade-approval-is-set-to-automatic","title":"Operator is stuck in upgrade if upgrade approval is set to Automatic","text":""},{"location":"troubleshooting.html#problem","title":"Problem","text":"

    If the operator's upgrade approval is set to Automatic on OperatorHub, there may be scenarios where the upgrade gets blocked.

    "},{"location":"troubleshooting.html#resolution","title":"Resolution","text":"

    Information

    If upgrade approval is set to Manual and you want to skip the upgrade of a specific version, delete the InstallPlan created for that version. Operator Lifecycle Manager (OLM) will then create the latest available InstallPlan, which can be approved.\n

    As OLM does not allow upgrading or downgrading from a version that is stuck because of an error, the only possible fix is to uninstall the operator from the cluster. When the operator is uninstalled, it removes all of its resources (ClusterRoles, ClusterRoleBindings, Deployments, etc.) except Custom Resource Definitions (CRDs), so none of the Custom Resources (CRs), such as Tenants and Templates, will be removed from the cluster. If any CRD has a conversion webhook defined, that webhook should be removed before installing the stable version of the operator. This can be achieved by removing the .spec.conversion block from the CRD schema.

    As an example, if you have installed v0.8.0 of Multi Tenant Operator on your cluster, it will get stuck with the error error validating existing CRs against new CRD's schema for \"integrationconfigs.tenantoperator.stakater.com\": error validating custom resource against new schema for IntegrationConfig multi-tenant-operator/tenant-operator-config: [].spec.tenantRoles: Required value. To resolve this issue, first uninstall MTO from the cluster. Once MTO is uninstalled, check the Tenant CRD, which will have a conversion block that needs to be removed. After removing the conversion block from the Tenant CRD, install the latest available version of MTO from OperatorHub.
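
    A minimal sketch of removing that block with kubectl, assuming the Tenant CRD is named tenants.tenantoperator.stakater.com (verify the exact CRD name on your cluster first):

    kubectl get crd | grep tenantoperator.stakater.com\nkubectl patch crd tenants.tenantoperator.stakater.com --type=json -p='[{\"op\": \"remove\", \"path\": \"/spec/conversion\"}]'\n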

    "},{"location":"troubleshooting.html#permission-issues","title":"Permission Issues","text":""},{"location":"troubleshooting.html#vault-user-permissions-are-not-updated-if-the-user-is-added-to-a-tenant-and-the-user-does-not-exist-in-rhsso","title":"Vault user permissions are not updated if the user is added to a Tenant, and the user does not exist in RHSSO","text":""},{"location":"troubleshooting.html#problem_1","title":"Problem","text":"

    If a user is added to a Tenant resource and the user does not exist in RHSSO, then RHSSO is not updated with the user's Vault permissions.

    "},{"location":"troubleshooting.html#reproduction-steps","title":"Reproduction steps","text":"
    1. Add a new user to Tenant CR
    2. Attempt to log in to Vault with the added user
    3. Vault denies that the user exists and signs the user up via RHSSO. The user is now created in RHSSO (you may check for the user in RHSSO).
    "},{"location":"troubleshooting.html#resolution_1","title":"Resolution","text":"

    If the user does not exist in RHSSO, then MTO does not create the tenant access for Vault in RHSSO.

    The user now needs to go to Vault, and sign up using OIDC. Then the user needs to wait for MTO to reconcile the updated tenant (reconciliation period is currently 1 hour). After reconciliation, MTO will add relevant access for the user in RHSSO.

    If the user needs to be added immediately and it is not feasible to wait for the next MTO reconciliation, then either add a label or annotation to the user, or restart the Tenant controller pod to force an immediate reconciliation.
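
    For the second option, a minimal sketch using oc (the deployment name is a placeholder; list the deployments in the multi-tenant-operator namespace first to find the Tenant controller):

    oc -n multi-tenant-operator get deployments\noc -n multi-tenant-operator rollout restart deployment <tenant-controller-deployment>\n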

    "},{"location":"vault-multitenancy.html","title":"Vault Multitenancy","text":"

    HashiCorp Vault is an identity-based secret and encryption management system. Vault validates and authorizes a system's clients (users, machines, apps) before providing them access to secrets or stored sensitive data.

    "},{"location":"vault-multitenancy.html#vault-integration-in-multi-tenant-operator","title":"Vault integration in Multi Tenant Operator","text":""},{"location":"vault-multitenancy.html#service-account-auth-in-vault","title":"Service Account Auth in Vault","text":"

    MTO enables the Kubernetes auth method which can be used to authenticate with Vault using a Kubernetes Service Account Token. When enabled, for every tenant namespace, MTO automatically creates policies and roles that allow the service accounts present in those namespaces to read secrets at the tenant's path in Vault. The name of the role is the same as the namespace name.
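
    For instance, the role created for a namespace named bluesky-build could be inspected as follows (a sketch that assumes the Kubernetes auth method is mounted at the default kubernetes/ path):

    vault read auth/kubernetes/role/bluesky-build\n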

    These service accounts are required to have the stakater.com/vault-access: true label so that they can be authenticated with Vault via MTO.
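
    A minimal sketch of such a ServiceAccount (the name and namespace are illustrative):

    apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: vault-reader\n  namespace: bluesky-build\n  labels:\n    stakater.com/vault-access: \"true\"\n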

    The Diagram shows how MTO enables ServiceAccounts to read secrets from Vault.

    "},{"location":"vault-multitenancy.html#user-oidc-auth-in-vault","title":"User OIDC Auth in Vault","text":"

    This requires a running RHSSO (Red Hat Single Sign-On) instance integrated with Vault over the OIDC login method.

    MTO integration with Vault and RHSSO provides a way for users to log in to Vault where they only have access to relevant tenant paths.

    Once both integrations are set up with the IntegrationConfig CR, MTO links tenant users to specific client roles named after their tenant under the Vault client in RHSSO.

    After that, MTO creates specific policies in Vault for its tenant users.

    The mapping of tenant roles to Vault paths and capabilities is shown below.

    | Tenant Role | Vault Path | Vault Capabilities |
    |---|---|---|
    | Owner, Editor | (tenantName)/* | Create, Read, Update, Delete, List |
    | Owner, Editor | sys/mounts/(tenantName)/* | Create, Read, Update, Delete, List |
    | Owner, Editor | managed-addons/* | Read, List |
    | Viewer | (tenantName)/* | Read |

    A simple user login workflow is shown in the diagram below.

    "},{"location":"explanation/console.html","title":"MTO Console","text":""},{"location":"explanation/console.html#introduction","title":"Introduction","text":"

    The Multi Tenant Operator (MTO) Console is a comprehensive user interface designed for both administrators and tenant users to manage multi-tenant environments. The MTO Console simplifies the complexity involved in handling various aspects of tenants and their related resources.

    "},{"location":"explanation/console.html#dashboard-overview","title":"Dashboard Overview","text":"

    The dashboard serves as a centralized monitoring hub, offering insights into the current state of tenants, namespaces, and quotas. It is designed to provide a quick summary/snapshot of MTO resources' status. Additionally, it includes a Showback graph that presents a quick glance of the seven-day cost trends associated with the namespaces/tenants based on the logged-in user.

    "},{"location":"explanation/console.html#tenants","title":"Tenants","text":"

    Here, admins have a bird's-eye view of all tenants, with the ability to delve into each one for detailed examination and management. This section is pivotal for observing the distribution and organization of tenants within the system. More information on each tenant can be accessed by clicking the view option against each tenant name.

    "},{"location":"explanation/console.html#namespaces","title":"Namespaces","text":"

    Users can view all the namespaces that belong to their tenant, offering a comprehensive perspective of the accessible namespaces for tenant members. This section also provides options for detailed exploration.

    "},{"location":"explanation/console.html#quotas","title":"Quotas","text":"

    MTO's Quotas are crucial for managing resource allocation. In this section, administrators can assess the quotas assigned to each tenant, ensuring a balanced distribution of resources in line with operational requirements.

    "},{"location":"explanation/console.html#templates","title":"Templates","text":"

    The Templates section acts as a repository for standardized resource deployment patterns, which can be utilized to maintain consistency and reliability across tenant environments. A few examples include provisioning specific Kubernetes manifests, Helm charts, secrets, or ConfigMaps across a set of namespaces.

    "},{"location":"explanation/console.html#showback","title":"Showback","text":"

    The Showback feature is an essential financial governance tool, providing detailed insights into the cost implications of resource usage by tenant, namespace, or other filters. This facilitates transparent cost management and an internal chargeback or showback process, enabling informed decision-making regarding resource consumption and budgeting.

    "},{"location":"explanation/console.html#user-roles-and-permissions","title":"User Roles and Permissions","text":""},{"location":"explanation/console.html#administrators","title":"Administrators :","text":"

    Administrators have overarching access to the console, including the ability to view all namespaces and tenants. They have exclusive access to the IntegrationConfig, allowing them to view all the settings and integrations.

    "},{"location":"explanation/console.html#tenant-users","title":"Tenant Users :","text":"

    Regular tenant users can monitor and manage their allocated resources. However, they do not have access to the IntegrationConfig and cannot view resources across different tenants, ensuring data privacy and operational integrity.

    "},{"location":"explanation/console.html#live-yaml-configuration-and-graph-view","title":"Live YAML Configuration and Graph View","text":"

    In the MTO Console, each resource section is equipped with a \"View\" button, revealing the live YAML configuration for complete information on the resource. For Tenant resources, a supplementary \"Graph\" option is available, illustrating the relationships and dependencies of all resources under a Tenant. This dual-view approach empowers users with both the detailed control of YAML and the holistic oversight of the graph view.

    You can find more details on graph visualization here: Graph Visualization

    "},{"location":"explanation/console.html#caching-and-database","title":"Caching and Database","text":"

    MTO integrates a dedicated database to streamline resource management. Now, all resources managed by MTO are efficiently stored in a Postgres database, enhancing the MTO Console's ability to efficiently retrieve all the resources for optimal presentation.

    The implementation of this feature is facilitated by the Bootstrap controller, streamlining the deployment process. This controller creates the PostgreSQL Database, establishes a service for inter-pod communication, and generates a secret to ensure secure connectivity to the database.

    Furthermore, the introduction of a dedicated cache layer ensures that there is no added burden on the kube API server when responding to MTO Console requests. This enhancement not only improves response times but also contributes to a more efficient and responsive resource management system.

    "},{"location":"explanation/console.html#conclusion","title":"Conclusion","text":"

    The MTO Console is engineered to simplify complex multi-tenant management. The current iteration focuses on providing comprehensive visibility. Future updates could include direct CUD (Create/Update/Delete) capabilities from the dashboard, enhancing the console\u2019s functionality. The Showback feature remains a standout, offering critical cost tracking and analysis. The delineation of roles between administrators and tenant users ensures a secure and organized operational framework.

    "},{"location":"explanation/why-argocd-multi-tenancy.html","title":"Need for Multi-Tenancy in ArgoCD","text":""},{"location":"explanation/why-argocd-multi-tenancy.html#argocd-multi-tenancy","title":"ArgoCD Multi-tenancy","text":"

    ArgoCD is a declarative GitOps tool built to deploy applications to Kubernetes. While the continuous delivery (CD) space is seen by some as crowded these days, ArgoCD does bring some interesting capabilities to the table. Unlike other tools, ArgoCD is lightweight and easy to configure.

    "},{"location":"explanation/why-argocd-multi-tenancy.html#why-argocd","title":"Why ArgoCD?","text":"

    Application definitions, configurations, and environments should be declarative and version controlled. Application deployment and lifecycle management should be automated, auditable, and easy to understand.

    "},{"location":"explanation/why-vault-multi-tenancy.html","title":"Need for Multi-Tenancy in Vault","text":""},{"location":"faq/index.html","title":"Index","text":""},{"location":"how-to-guides/integration-config.html","title":"Integration Config","text":"

    IntegrationConfig is used to configure settings of multi-tenancy for Multi Tenant Operator.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  tenantRoles:\n    default:\n      owner:\n        clusterRoles:\n          - admin\n      editor:\n        clusterRoles:\n          - edit\n      viewer:\n        clusterRoles:\n          - view\n          - viewer\n    custom:\n    - labelSelector:\n        matchExpressions:\n        - key: stakater.com/kind\n          operator: In\n          values:\n            - build\n        matchLabels:\n          stakater.com/kind: dev\n      owner:\n        clusterRoles:\n          - custom-owner\n      editor:\n        clusterRoles:\n          - custom-editor\n      viewer:\n        clusterRoles:\n          - custom-viewer\n          - custom-view\n  openshift:\n    project:\n      labels:\n        stakater.com/workload-monitoring: \"true\"\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\n    group:\n      labels:\n        role: customer-reader\n    sandbox:\n      labels:\n        stakater.com/kind: sandbox\n    clusterAdminGroups:\n      - cluster-admins\n    privilegedNamespaces:\n      - ^default$\n      - ^openshift-*\n      - ^kube-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n    namespaceAccessPolicy:\n      deny:\n        privilegedNamespaces:\n          users:\n            - system:serviceaccount:openshift-argocd:argocd-application-controller\n            - adam@stakater.com\n          groups:\n            - cluster-admins\n  argocd:\n    namespace: openshift-operators\n    namespaceResourceBlacklist:\n      - group: '' # all groups\n        kind: ResourceQuota\n    clusterResourceWhitelist:\n      - group: tronador.stakater.com\n        kind: EnvironmentProvisioner\n  rhsso:\n    enabled: true\n    realm: customer\n    endpoint:\n      url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n      secretReference:\n        name: auth-secrets\n        namespace: openshift-auth\n  vault:\n    enabled: true\n    endpoint:\n      url: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n      secretReference:\n        name: vault-root-token\n        namespace: vault\n    sso:\n      clientName: vault\n      accessorID: <ACCESSOR_ID_TOKEN>\n

    Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator.

    "},{"location":"how-to-guides/integration-config.html#tenantroles","title":"TenantRoles","text":"

    TenantRoles are required within the IntegrationConfig, as they are used for defining what roles will be applied to each Tenant namespace. The field allows optional custom roles, which are then used to create RoleBindings for namespaces that match a labelSelector.

    \u26a0\ufe0f If you do not configure roles in any way, then the default OpenShift roles of owner, edit, and view will apply to Tenant members. Their details can be found here

    tenantRoles:\n  default:\n    owner:\n      clusterRoles:\n        - admin\n    editor:\n      clusterRoles:\n        - edit\n    viewer:\n      clusterRoles:\n        - view\n        - viewer\n  custom:\n  - labelSelector:\n      matchExpressions:\n      - key: stakater.com/kind\n        operator: In\n        values:\n          - build\n      matchLabels:\n        stakater.com/kind: dev\n    owner:\n      clusterRoles:\n        - custom-owner\n    editor:\n      clusterRoles:\n        - custom-editor\n    viewer:\n      clusterRoles:\n        - custom-viewer\n        - custom-view\n
    "},{"location":"how-to-guides/integration-config.html#default","title":"Default","text":"

    This field contains roles that will be used to create default roleBindings for each namespace that belongs to tenants. These roleBindings are only created for a namespace if that namespace isn't already matched by the custom field below it. Therefore, it is required to have at least one role mentioned within each of its three subfields: owner, editor, and viewer. These 3 subfields also correspond to the member fields of the Tenant CR

    "},{"location":"how-to-guides/integration-config.html#custom","title":"Custom","text":"

    An array of custom roles. Similar to the default field, you can mention roles within this field as well. However, the custom roles also require the use of a labelSelector for each iteration within the array. The roles mentioned here will only apply to the namespaces that are matched by the labelSelector. If a namespace is matched by 2 different labelSelectors, then both roles will apply to it. Additionally, roles can be skipped within the labelSelector. These missing roles are then inherited from the default roles field. For example, if the following custom roles arrangement is used:

    custom:\n- labelSelector:\n    matchExpressions:\n    - key: stakater.com/kind\n      operator: In\n      values:\n        - build\n    matchLabels:\n      stakater.com/kind: dev\n  owner:\n    clusterRoles:\n      - custom-owner\n

    Then the editor and viewer roles will be taken from the default roles field, since the default field is required to have at least one role mentioned for each subfield.

    "},{"location":"how-to-guides/integration-config.html#openshift","title":"OpenShift","text":"
    openshift:\n  project:\n    labels:\n      stakater.com/workload-monitoring: \"true\"\n    annotations:\n      openshift.io/node-selector: node-role.kubernetes.io/worker=\n  group:\n    labels:\n      role: customer-reader\n  sandbox:\n    labels:\n      stakater.com/kind: sandbox\n  clusterAdminGroups:\n    - cluster-admins\n  privilegedNamespaces:\n    - ^default$\n    - ^openshift-*\n    - ^kube-*\n  privilegedServiceAccounts:\n    - ^system:serviceaccount:openshift-*\n    - ^system:serviceaccount:kube-*\n  namespaceAccessPolicy:\n    deny:\n      privilegedNamespaces:\n        users:\n          - system:serviceaccount:openshift-argocd:argocd-application-controller\n          - adam@stakater.com\n        groups:\n          - cluster-admins\n
    "},{"location":"how-to-guides/integration-config.html#project-group-and-sandbox","title":"Project, group and sandbox","text":"

    We can use the openshift.project, openshift.group and openshift.sandbox fields to automatically add labels and annotations to the Projects and Groups managed via MTO.

      openshift:\n    project:\n      labels:\n        stakater.com/workload-monitoring: \"true\"\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\n    group:\n      labels:\n        role: customer-reader\n    sandbox:\n      labels:\n        stakater.com/kind: sandbox\n

    If we want to add default labels/annotations to the sandbox namespaces of tenants, then we simply add them in openshift.project.labels/openshift.project.annotations respectively.

    Whenever a project is created, it will have the labels and annotations mentioned above.

    kind: Project\napiVersion: project.openshift.io/v1\nmetadata:\n  name: bluesky-build\n  annotations:\n    openshift.io/node-selector: node-role.kubernetes.io/worker=\n  labels:\n    workload-monitoring: 'true'\n    stakater.com/tenant: bluesky\nspec:\n  finalizers:\n    - kubernetes\nstatus:\n  phase: Active\n
    kind: Group\napiVersion: user.openshift.io/v1\nmetadata:\n  name: bluesky-owner-group\n  labels:\n    role: customer-reader\nusers:\n  - andrew@stakater.com\n
    "},{"location":"how-to-guides/integration-config.html#cluster-admin-groups","title":"Cluster Admin Groups","text":"

    clusterAdminGroups: Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way.

    "},{"location":"how-to-guides/integration-config.html#privileged-namespaces","title":"Privileged Namespaces","text":"

    privilegedNamespaces: Contains the list of namespaces ignored by MTO. MTO will not manage the namespaces in this list. Values in this list are regex patterns. For example, to ignore the default namespace, use ^default$; and to ignore all namespaces starting with the openshift- prefix, use ^openshift-*.

    "},{"location":"how-to-guides/integration-config.html#privileged-serviceaccounts","title":"Privileged ServiceAccounts","text":"

    privilegedServiceAccounts: Contains the list of ServiceAccounts ignored by MTO. MTO will not manage the ServiceAccounts in this list. Values in this list are regex patterns. For example, to ignore all ServiceAccounts starting with the system:serviceaccount:openshift- prefix, we can use ^system:serviceaccount:openshift-*; and to ignore the system:serviceaccount:builder service account we can use ^system:serviceaccount:builder$.

    "},{"location":"how-to-guides/integration-config.html#namespace-access-policy","title":"Namespace Access Policy","text":"

    namespaceAccessPolicy.Deny: Can be used to restrict CRUD operations on managed namespaces for the listed privileged users/groups.

    namespaceAccessPolicy:\n  deny:\n    privilegedNamespaces:\n      groups:\n        - cluster-admins\n      users:\n        - system:serviceaccount:openshift-argocd:argocd-application-controller\n        - adam@stakater.com\n

    \u26a0\ufe0f If you want to use a more complex regex pattern (for the openshift.privilegedNamespaces or openshift.privilegedServiceAccounts field), it is recommended that you test the regex pattern first - either locally or using a platform such as https://regex101.com/.

    "},{"location":"how-to-guides/integration-config.html#argocd","title":"ArgoCD","text":""},{"location":"how-to-guides/integration-config.html#namespace","title":"Namespace","text":"

    argocd.namespace is an optional field used to specify the namespace where ArgoCD Applications and AppProjects are deployed. The field should be populated when you want to create an ArgoCD AppProject for each tenant.

    "},{"location":"how-to-guides/integration-config.html#namespaceresourceblacklist","title":"NamespaceResourceBlacklist","text":"
    argocd:\n  namespaceResourceBlacklist:\n  - group: '' # all resource groups\n    kind: ResourceQuota\n  - group: ''\n    kind: LimitRange\n  - group: ''\n    kind: NetworkPolicy\n

    argocd.namespaceResourceBlacklist prevents ArgoCD from syncing the listed resources from your GitOps repo.

    "},{"location":"how-to-guides/integration-config.html#clusterresourcewhitelist","title":"ClusterResourceWhitelist","text":"
    argocd:\n  clusterResourceWhitelist:\n  - group: tronador.stakater.com\n    kind: EnvironmentProvisioner\n

    argocd.clusterResourceWhitelist allows ArgoCD to sync the listed cluster scoped resources from your GitOps repo.

    "},{"location":"how-to-guides/integration-config.html#rhsso-red-hat-single-sign-on","title":"RHSSO (Red Hat Single Sign-On)","text":"

    Red Hat Single Sign-On (RHSSO) is based on the Keycloak project and enables you to secure your web applications by providing web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect, and OAuth 2.0.

    If RHSSO is configured on a cluster, then RHSSO configuration can be enabled.

    rhsso:\n  enabled: true\n  realm: customer\n  endpoint:\n    secretReference:\n      name: auth-secrets\n      namespace: openshift-auth\n    url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n

    If enabled, then admins have to provide the secret and URL of RHSSO.

    "},{"location":"how-to-guides/integration-config.html#vault","title":"Vault","text":"

    Vault is used to secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.

    If Vault is configured on a cluster, then the Vault configuration can be enabled.

    vault:\n  enabled: true\n  endpoint:\n    secretReference:\n      name: vault-root-token\n      namespace: vault\n    url: >-\n      https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n  sso:\n    accessorID: <ACCESSOR_ID_TOKEN>\n    clientName: vault\n

    If enabled, then admins have to provide the secret, URL, and SSO accessorID of Vault.

    "},{"location":"how-to-guides/quota.html","title":"Quota","text":"

    Using Multi Tenant Operator, the cluster-admin can set and enforce cluster resource quotas and limit ranges for tenants.

    "},{"location":"how-to-guides/quota.html#assigning-resource-quotas","title":"Assigning Resource Quotas","text":"

    Bill is a cluster admin who will first create a Quota CR in which he sets the maximum resource limits that Anna's tenant will have. Here, limitrange is an optional field; the cluster admin can skip it if not needed.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: small\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '5'\n      requests.memory: '5Gi'\n      configmaps: \"10\"\n      secrets: \"10\"\n      services: \"10\"\n      services.loadbalancers: \"2\"\n  limitrange:\n    limits:\n      - type: \"Pod\"\n        max:\n          cpu: \"2\"\n          memory: \"1Gi\"\n        min:\n          cpu: \"200m\"\n          memory: \"100Mi\"\nEOF\n

    For more details please refer to Quotas.

    kubectl get quota small\nNAME       STATE    AGE\nsmall      Active   3m\n

    Bill then proceeds to create a tenant for Anna, while also linking the newly created Quota.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@stakater.com\n  quota: small\n  sandbox: false\nEOF\n

    Now that the quota is linked with Anna's tenant, Anna can create any resource within the values of resource quota and limit range.

    kubectl -n bluesky-production create deployment nginx --image nginx:latest --replicas 4\n

    Once the resource quota assigned to the tenant has been reached, Anna cannot create further resources.

    kubectl -n bluesky-production run bluesky-training --image nginx:latest\nError from server (Cannot exceed Namespace quota: please, reach out to the system administrators)\n
    "},{"location":"how-to-guides/quota.html#limiting-persistentvolume-for-tenant","title":"Limiting PersistentVolume for Tenant","text":"

    Bill, as a cluster admin, wants to restrict the amount of storage a Tenant can use. For that he'll add the requests.storage field to quota.spec.resourcequota.hard. If Bill wants to restrict tenant bluesky to use only 50Gi of storage, he'll first create a quota with requests.storage field set to 50Gi.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: medium\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '5'\n      requests.memory: '10Gi'\n      requests.storage: '50Gi'\nEOF\n

    Once the quota is created, Bill will create the tenant and set the quota field to the one he created.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  quota: medium\n  sandbox: true\nEOF\n

    Now, the combined storage used by all tenant namespaces will not exceed 50Gi.

    "},{"location":"how-to-guides/quota.html#adding-storageclass-restrictions-for-tenant","title":"Adding StorageClass Restrictions for Tenant","text":"

    Now, Bill, as a cluster admin, wants to make sure that no Tenant can provision more than a fixed amount of storage from a StorageClass. Bill can restrict that using the <storage-class-name>.storageclass.storage.k8s.io/requests.storage field in quota.spec.resourcequota.hard. If Bill wants to restrict tenant sigma to use only 20Gi of storage from the storage class stakater, he'll first create a StorageClass stakater and then create the relevant Quota with the stakater.storageclass.storage.k8s.io/requests.storage field set to 20Gi.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: small\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '2'\n      requests.memory: '4Gi'\n      stakater.storageclass.storage.k8s.io/requests.storage: '20Gi'\nEOF\n

    Once the quota is created, Bill will create the tenant and set the quota field to the one he created.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\nspec:\n  owners:\n    users:\n    - dave@aurora.org\n  quota: small\n  sandbox: true\nEOF\n

    Now, the combined storage provisioned from StorageClass stakater used by all tenant namespaces will not exceed 20Gi.

    The 20Gi limit will only be applied to the StorageClass stakater. If a tenant member creates a PVC with some other StorageClass, it will not be restricted.
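
    For illustration, a claim like the following counts against the 20Gi stakater quota, whereas the same claim with a different storageClassName would not (the PVC name, namespace and size are illustrative):

    kubectl create -f - << EOF\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: sigma-data\n  namespace: sigma-dev\nspec:\n  storageClassName: stakater\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 5Gi\nEOF\n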

    Tip

    More details about Resource Quota can be found here

    "},{"location":"how-to-guides/template-group-instance.html","title":"TemplateGroupInstance","text":"

    Cluster scoped resource:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: namespace-parameterized-restrictions-tgi\nspec:\n  template: namespace-parameterized-restrictions\n  sync: true\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: In\n      values:\n        - alpha\n        - beta\n  parameters:\n    - name: CIDR_IP\n      value: \"172.17.0.0/16\"\n

    TemplateGroupInstance distributes a template across multiple namespaces which are selected by labelSelector.

    "},{"location":"how-to-guides/template-instance.html","title":"TemplateInstance","text":"

    Namespace scoped resource:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: networkpolicy\n  namespace: build\nspec:\n  template: networkpolicy\n  sync: true\n  parameters:\n    - name: CIDR_IP\n      value: \"172.17.0.0/16\"\n

    TemplateInstances are used to keep track of resources created from Templates, which are being instantiated inside a Namespace. Generally, a TemplateInstance is created from a Template and is then not updated when the Template changes later on. To change this behavior, it is possible to set spec.sync: true in a TemplateInstance. Setting this option keeps the TemplateInstance in sync with the underlying Template (similar to a Helm upgrade).

    "},{"location":"how-to-guides/template.html","title":"Template","text":""},{"location":"how-to-guides/template.html#cluster-scoped-resource","title":"Cluster scoped resource","text":"
    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: redis\nresources:\n  helm:\n    releaseName: redis\n    chart:\n      repository:\n        name: redis\n        repoUrl: https://charts.bitnami.com/bitnami\n    values: |\n      redisPort: 6379\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: networkpolicy\nparameters:\n  - name: CIDR_IP\n    value: \"172.17.0.0/16\"\nresources:\n  manifests:\n    - kind: NetworkPolicy\n      apiVersion: networking.k8s.io/v1\n      metadata:\n        name: deny-cross-ns-traffic\n      spec:\n        podSelector:\n          matchLabels:\n            role: db\n        policyTypes:\n        - Ingress\n        - Egress\n        ingress:\n        - from:\n          - ipBlock:\n              cidr: \"${{CIDR_IP}}\"\n              except:\n              - 172.17.1.0/24\n          - namespaceSelector:\n              matchLabels:\n                project: myproject\n          - podSelector:\n              matchLabels:\n                role: frontend\n          ports:\n          - protocol: TCP\n            port: 6379\n        egress:\n        - to:\n          - ipBlock:\n              cidr: 10.0.0.0/24\n          ports:\n          - protocol: TCP\n            port: 5978\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: resource-mapping\nresources:\n  resourceMappings:\n    secrets:\n      - name: secret-s1\n        namespace: namespace-n1\n    configMaps:\n      - name: configmap-c1\n        namespace: namespace-n2\n

    Templates are used to initialize Namespaces, share common resources across namespaces, and map secrets/configmaps from one namespace to other namespaces.

    Also, you can define custom parameters in a Template and a TemplateInstance. The parameters defined in a TemplateInstance overwrite the values defined in the Template.
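
    For example, this TemplateInstance overrides the CIDR_IP default of 172.17.0.0/16 defined in the networkpolicy Template above (the override value is illustrative):

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: networkpolicy\n  namespace: build\nspec:\n  template: networkpolicy\n  sync: true\n  parameters:\n    - name: CIDR_IP\n      value: \"10.20.0.0/16\"\n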

    Manifest Templates: The easiest option to define a Template is by specifying an array of Kubernetes manifests which should be applied when the Template is being instantiated.

    Helm Chart Templates: Instead of manifests, a Template can specify a Helm chart that will be installed (using Helm template) when the Template is being instantiated.

    Resource Mapping Templates: A template can be used to map secrets and configmaps from one tenant's namespace to another tenant's namespace, or within a tenant's namespace.

    "},{"location":"how-to-guides/template.html#mandatory-and-optional-templates","title":"Mandatory and Optional Templates","text":"

    Templates can either be mandatory or optional. By default, all Templates are optional. Cluster Admins can make Templates mandatory by adding them to the spec.templateInstances array within the Tenant configuration. All Templates listed in spec.templateInstances will always be instantiated within every Namespace that is created for the respective Tenant.
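
    A trimmed sketch of a Tenant that makes the networkpolicy Template mandatory (only the relevant fields are shown; the values mirror the detailed Tenant example in this documentation):

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: alpha\nspec:\n  quota: small\n  templateInstances:\n    - spec:\n        template: networkpolicy\n        sync: true\n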

    "},{"location":"how-to-guides/tenant.html","title":"Tenant","text":"

    Cluster scoped resource:

    The smallest valid Tenant definition is given below (with just one field in its spec):

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: alpha\nspec:\n  quota: small\n

    Here is a more detailed Tenant definition, explained below:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: alpha\nspec:\n  owners: # optional\n    users: # optional\n      - dave@stakater.com\n    groups: # optional\n      - alpha\n  editors: # optional\n    users: # optional\n      - jack@stakater.com\n  viewers: # optional\n    users: # optional\n      - james@stakater.com\n  quota: medium # required\n  sandboxConfig: # optional\n    enabled: true # optional\n    private: true # optional\n  onDelete: # optional\n    cleanNamespaces: false # optional\n    cleanAppProject: true # optional\n  argocd: # optional\n    sourceRepos: # required\n      - https://github.com/stakater/gitops-config\n    appProject: # optional\n      clusterResourceWhitelist: # optional\n        - group: tronador.stakater.com\n          kind: Environment\n      namespaceResourceBlacklist: # optional\n        - group: \"\"\n          kind: ConfigMap\n  hibernation: # optional\n    sleepSchedule: 23 * * * * # required\n    wakeSchedule: 26 * * * * # required\n  namespaces: # optional\n    withTenantPrefix: # optional\n      - dev\n      - build\n    withoutTenantPrefix: # optional\n      - preview\n  commonMetadata: # optional\n    labels: # optional\n      stakater.com/team: alpha\n    annotations: # optional\n      openshift.io/node-selector: node-role.kubernetes.io/infra=\n  specificMetadata: # optional\n    - annotations: # optional\n        stakater.com/user: dave\n      labels: # optional\n        stakater.com/sandbox: true\n      namespaces: # optional\n        - alpha-dave-stakater-sandbox\n  templateInstances: # optional\n  - spec: # optional\n      template: networkpolicy # required\n      sync: true  # optional\n      parameters: # optional\n        - name: CIDR_IP\n          value: \"172.17.0.0/16\"\n    selector: # optional\n      matchLabels: # optional\n        policy: network-restriction\n

    \u26a0\ufe0f If the same label or annotation key is applied using more than one of the methods provided, the highest precedence will be given to specificMetadata, followed by commonMetadata, and finally the values applied from openshift.project.labels/openshift.project.annotations in the IntegrationConfig.

    "},{"location":"how-to-guides/offboarding/uninstalling.html","title":"Uninstall via OperatorHub UI","text":"

    You can uninstall MTO by following these steps:

    "},{"location":"how-to-guides/offboarding/uninstalling.html#notes","title":"Notes","text":""},{"location":"reference-guides/add-remove-namespace-gitops.html","title":"Add/Remove Namespace from Tenant via GitOps","text":""},{"location":"reference-guides/admin-clusterrole.html","title":"Extending Admin ClusterRole","text":"

    Bill, as the cluster admin, wants to add additional rules to the admin ClusterRole.

    Bill can extend the admin role for MTO using the aggregation label for the admin ClusterRole. Bill will create a new ClusterRole with all the permissions he needs to add for MTO and apply the aggregation label to the newly created ClusterRole.

    kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: extend-admin-role\n  labels:\n    rbac.authorization.k8s.io/aggregate-to-admin: 'true'\nrules:\n  - verbs:\n      - create\n      - update\n      - patch\n      - delete\n    apiGroups:\n      - user.openshift.io\n    resources:\n      - groups\n

    Note: You can learn more about aggregated-cluster-roles here

    "},{"location":"reference-guides/admin-clusterrole.html#whats-next","title":"What\u2019s next","text":"

    See how Bill can hibernate unused namespaces at night

    "},{"location":"reference-guides/configuring-multitenant-network-isolation.html","title":"Configuring Multi-Tenant Isolation with Network Policy Template","text":"

    Bill is a cluster admin who wants to configure network policies to provide multi-tenant network isolation.

    First, Bill creates a template for network policies:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: tenant-network-policy\nresources:\n  manifests:\n  - apiVersion: networking.k8s.io/v1\n    kind: NetworkPolicy\n    metadata:\n      name: allow-same-namespace\n    spec:\n      podSelector: {}\n      ingress:\n      - from:\n        - podSelector: {}\n  - apiVersion: networking.k8s.io/v1\n    kind: NetworkPolicy\n    metadata:\n      name: allow-from-openshift-monitoring\n    spec:\n      ingress:\n      - from:\n        - namespaceSelector:\n            matchLabels:\n              network.openshift.io/policy-group: monitoring\n      podSelector: {}\n      policyTypes:\n      - Ingress\n  - apiVersion: networking.k8s.io/v1\n    kind: NetworkPolicy\n    metadata:\n      name: allow-from-openshift-ingress\n    spec:\n      ingress:\n      - from:\n        - namespaceSelector:\n            matchLabels:\n              network.openshift.io/policy-group: ingress\n      podSelector: {}\n      policyTypes:\n      - Ingress\n

    Once the template has been created, Bill edits the IntegrationConfig to add unique label to all tenant projects:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    project:\n      labels:\n        stakater.com/workload-monitoring: \"true\"\n        tenant-network-policy: \"true\"\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\n    sandbox:\n      labels:\n        stakater.com/kind: sandbox\n    privilegedNamespaces:\n      - default\n      - ^openshift-*\n      - ^kube-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n

Bill has added a new label tenant-network-policy: \"true\" in the project section of the IntegrationConfig; MTO will now add that label to all tenant projects.

    Finally, Bill creates a TemplateGroupInstance which will distribute the network policies using the newly added project label and template.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: tenant-network-policy-group\nspec:\n  template: tenant-network-policy\n  selector:\n    matchLabels:\n      tenant-network-policy: \"true\"\n  sync: true\n

MTO will now deploy the network policies mentioned in the Template to all projects matching the label selector mentioned in the TemplateGroupInstance.

    "},{"location":"reference-guides/custom-metrics.html","title":"Custom Metrics Support","text":"

    Multi Tenant Operator now supports custom metrics for templates, template instances and template group instances. This feature allows users to monitor the usage of templates and template instances in their cluster.

    To enable custom metrics and view them in your OpenShift cluster, you need to follow the steps below:
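The exact steps are listed in the guide; as a hedged sketch, one common prerequisite on OpenShift is enabling monitoring for user-defined projects so that Prometheus can scrape the operator's metrics (this assumes the standard cluster-monitoring-config approach; the MTO-specific settings may differ from this sketch):

apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: cluster-monitoring-config\n  namespace: openshift-monitoring\ndata:\n  config.yaml: |\n    # enable Prometheus/Thanos for user-defined (non-platform) workloads\n    enableUserWorkload: true\n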

    "},{"location":"reference-guides/custom-roles.html","title":"Changing the default access level for tenant owners","text":"

This feature allows cluster admins to change the default roles assigned to the Tenant owner, editor, and viewer groups.

For example, Bill, as the cluster admin, wants to reduce the privileges that tenant owners have so that they cannot create or edit Roles or bind them. As an admin of an OpenShift cluster, Bill can do this by assigning the edit role to all tenant owners. This is easily achieved by modifying the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  tenantRoles:\n    default:\n      owner:\n        clusterRoles:\n          - edit\n      editor:\n        clusterRoles:\n          - edit\n      viewer:\n        clusterRoles:\n          - view\n

    Once all namespaces reconcile, the old admin RoleBindings should get replaced with the edit ones for each tenant owner.

    "},{"location":"reference-guides/custom-roles.html#giving-specific-permissions-to-some-tenants","title":"Giving specific permissions to some tenants","text":"

Bill now wants the owners of the tenants bluesky and alpha to have admin permissions over their namespaces. The custom roles feature allows Bill to do this by modifying the IntegrationConfig like this:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  tenantRoles:\n    default:\n      owner:\n        clusterRoles:\n          - edit\n      editor:\n        clusterRoles:\n          - edit\n      viewer:\n        clusterRoles:\n          - view\n    custom:\n    - labelSelector:\n        matchExpressions:\n        - key: stakater.com/tenant\n          operator: In\n          values:\n            - alpha\n      owner:\n        clusterRoles:\n          - admin\n    - labelSelector:\n        matchExpressions:\n        - key: stakater.com/tenant\n          operator: In\n          values:\n            - bluesky\n      owner:\n        clusterRoles:\n          - admin\n

New Bindings will be created for the Tenant owners of bluesky and alpha, corresponding to the admin Role. Bindings for editors and viewers will be inherited from the default roles. All other Tenant owners will have an edit Role bound to them within their namespaces.

    "},{"location":"reference-guides/deploying-templates.html","title":"Distributing Resources in Namespaces","text":"

Multi Tenant Operator has three Custom Resources that can cover this need using the Template CR, depending on the conditions and preference:

    1. TemplateGroupInstance
    2. TemplateInstance
    3. Tenant

The Stakater team, however, encourages the use of TemplateGroupInstance to distribute resources across multiple namespaces, as it is optimized for better performance.

    "},{"location":"reference-guides/deploying-templates.html#deploying-template-to-namespaces-via-templategroupinstances","title":"Deploying Template to Namespaces via TemplateGroupInstances","text":"

Bill, the cluster admin, wants to deploy a docker pull secret in namespaces where certain labels exist.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Once the template has been created, Bill makes a TemplateGroupInstance referring to the Template he wants to deploy with MatchLabels:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchLabels:\n      kind: build\n  sync: true\n

Afterward, Bill can see that the secret has been successfully created in all label-matching namespaces.

kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME                  STATE    AGE\ndocker-pull-secret    Active   3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME                  STATE    AGE\ndocker-pull-secret    Active   2m\n

    TemplateGroupInstance can also target specific tenants or all tenant namespaces under a single yaml definition.

    "},{"location":"reference-guides/deploying-templates.html#templategroupinstance-for-multiple-tenants","title":"TemplateGroupInstance for multiple Tenants","text":"

This can be done by using the matchExpressions field, splitting the tenant label into its key and values.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: In\n      values:\n        - alpha\n        - beta\n  sync: true\n
    "},{"location":"reference-guides/deploying-templates.html#templategroupinstance-for-all-tenants","title":"TemplateGroupInstance for all Tenants","text":"

This can also be done by using the matchExpressions field with just the tenant label key stakater.com/tenant and the Exists operator.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: Exists\n  sync: true\n
    "},{"location":"reference-guides/deploying-templates.html#deploying-template-to-namespaces-via-tenant","title":"Deploying Template to Namespaces via Tenant","text":"

Bill is a cluster admin who wants to deploy a docker-pull-secret in Anna's tenant namespaces where certain labels exist.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

Once the template has been created, Bill edits Anna's tenant and populates the templateInstances field:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n  quota: small\n  sandboxConfig:\n    enabled: true\n  templateInstances:\n  - spec:\n      template: docker-pull-secret\n    selector:\n      matchLabels:\n        kind: build\n

Multi Tenant Operator will deploy the TemplateInstances mentioned in the templateInstances field. The TemplateInstances will only be applied in those namespaces which belong to Anna's tenant and have the matching label kind: build.

Anna now adds the label kind: build to her existing namespace bluesky-anna-aurora-sandbox, and after adding the label she sees that the secret has been created.
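A minimal sketch of how that label could be applied from the command line (assuming Anna has edit rights on her sandbox namespace):

# add the label that the TemplateInstance selector matches on\nkubectl label namespace bluesky-anna-aurora-sandbox kind=build\n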

kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME                  STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"reference-guides/deploying-templates.html#deploying-template-to-a-namespace-via-templateinstance","title":"Deploying Template to a Namespace via TemplateInstance","text":"

    Anna wants to deploy a docker pull secret in her namespace.

    First Anna asks Bill, the cluster admin, to create a template of the secret for her:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Once the template has been created, Anna creates a TemplateInstance in her namespace referring to the Template she wants to deploy:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: docker-pull-secret-instance\n  namespace: bluesky-anna-aurora-sandbox\nspec:\n  template: docker-pull-secret\n  sync: true\n

    Once this is created, Anna can see that the secret has been successfully applied.

kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME                  STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"reference-guides/deploying-templates.html#passing-parameters-to-template-via-templateinstance-templategroupinstance-or-tenant","title":"Passing Parameters to Template via TemplateInstance, TemplateGroupInstance or Tenant","text":"

    Anna wants to deploy a LimitRange resource to certain namespaces.

First, Anna asks Bill, the cluster admin, to create a template with parameters for a LimitRange for her:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: namespace-parameterized-restrictions\nparameters:\n  # Name of the parameter\n  - name: DEFAULT_CPU_LIMIT\n    # The default value of the parameter\n    value: \"1\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"0.5\"\n    # If a parameter is required the template instance will need to set it\n    # required: true\n    # Make sure only values are entered for this parameter\n    validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n  manifests:\n    - apiVersion: v1\n      kind: LimitRange\n      metadata:\n        name: namespace-limit-range-${namespace}\n      spec:\n        limits:\n          - default:\n              cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n            defaultRequest:\n              cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n            type: Container\n

    Afterward, Anna creates a TemplateInstance in her namespace referring to the Template she wants to deploy:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: namespace-parameterized-restrictions-instance\n  namespace: bluesky-anna-aurora-sandbox\nspec:\n  template: namespace-parameterized-restrictions\n  sync: true\nparameters:\n  - name: DEFAULT_CPU_LIMIT\n    value: \"1.5\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"1\"\n

    If she wants to distribute the same Template over multiple namespaces, she can use TemplateGroupInstance.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: namespace-parameterized-restrictions-tgi\nspec:\n  template: namespace-parameterized-restrictions\n  sync: true\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: In\n      values:\n        - alpha\n        - beta\nparameters:\n  - name: DEFAULT_CPU_LIMIT\n    value: \"1.5\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"1\"\n

    Or she can use her tenant to cover only the tenant namespaces.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n  quota: small\n  sandboxConfig:\n    enabled: true\n  templateInstances:\n  - spec:\n      template: namespace-parameterized-restrictions\n      sync: true\n    parameters:\n      - name: DEFAULT_CPU_LIMIT\n        value: \"1.5\"\n      - name: DEFAULT_CPU_REQUESTS\n        value: \"1\"\n    selector:\n      matchLabels:\n        kind: build\n
    "},{"location":"reference-guides/distributing-resources.html","title":"Copying Secrets and Configmaps across Tenant Namespaces via TGI","text":"

Bill is a cluster admin who wants to map a docker-pull-secret, present in a build namespace, into tenant namespaces where certain labels exist.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  resourceMappings:\n    secrets:\n      - name: docker-pull-secret\n        namespace: build\n

    Once the template has been created, Bill makes a TemplateGroupInstance referring to the Template he wants to deploy with MatchLabels:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchLabels:\n      kind: build\n  sync: true\n

Afterward, Bill can see that the secret has been successfully mapped into all matching namespaces.

    kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME             STATE    AGE\ndocker-pull-secret    Active   3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME             STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"reference-guides/distributing-resources.html#mapping-resources-within-tenant-namespaces-via-ti","title":"Mapping Resources within Tenant Namespaces via TI","text":"

Anna is a tenant owner who wants to map a docker-pull-secret, present in the bluesky-build namespace, to the bluesky-anna-aurora-sandbox namespace.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  resourceMappings:\n    secrets:\n      - name: docker-pull-secret\n        namespace: bluesky-build\n

    Once the template has been created, Anna creates a TemplateInstance in bluesky-anna-aurora-sandbox namespace, referring to the Template.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: docker-secret-instance\n  namespace: bluesky-anna-aurora-sandbox\nspec:\n  template: docker-pull-secret\n  sync: true\n

Afterward, Anna can see that the secret has been successfully mapped into her namespace.

    kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME             STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"reference-guides/distributing-secrets-using-sealed-secret-template.html","title":"Distributing Secrets Using Sealed Secrets Template","text":"

Bill is a cluster admin who wants to provide a mechanism for distributing secrets in multiple namespaces. For this, he wants to use Sealed Secrets as the solution by adding them to the MTO Template CR.

    First, Bill creates a Template in which Sealed Secret is mentioned:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: tenant-sealed-secret\nresources:\n  manifests:\n  - kind: SealedSecret\n    apiVersion: bitnami.com/v1alpha1\n    metadata:\n      name: mysecret\n    spec:\n      encryptedData:\n        .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....\n      template:\n        type: kubernetes.io/dockerconfigjson\n        # this is an example of labels and annotations that will be added to the output secret\n        metadata:\n          labels:\n            \"jenkins.io/credentials-type\": usernamePassword\n          annotations:\n            \"jenkins.io/credentials-description\": credentials from Kubernetes\n

Once the template has been created, Bill has to edit the Tenant to add a unique label to the namespaces in which the secret has to be deployed. For this, he can use the support for common and specific labels across namespaces.

    Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.

apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n\n  # use this if you want to add label to some specific namespaces\n  specificMetadata:\n    - namespaces:\n        - test-namespace\n      labels:\n        distribute-image-pull-secret: \"true\"\n\n  # use this if you want to add label to all namespaces under your tenant\n  commonMetadata:\n    labels:\n      distribute-image-pull-secret: \"true\"\n

Bill has added a new label distribute-image-pull-secret: \"true\" for tenant projects/namespaces; MTO will now add that label depending on the field used.

    Finally, Bill creates a TemplateGroupInstance which will deploy the sealed secrets using the newly created project label and template.

apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: tenant-sealed-secret\nspec:\n  template: tenant-sealed-secret\n  selector:\n    matchLabels:\n      distribute-image-pull-secret: \"true\"\n  sync: true\n

MTO will now deploy the sealed secrets mentioned in the Template to the namespaces that have the mentioned label. The rest of the work to deploy the secret from a sealed secret has to be done by the Sealed Secrets Controller.

    "},{"location":"reference-guides/distributing-secrets.html","title":"Distributing Secrets","text":"

Bill is a cluster admin who wants to provide a mechanism for distributing secrets in multiple namespaces. For this, he wants to use Sealed Secrets as the solution by adding them to the MTO Template CR.

    First, Bill creates a Template in which Sealed Secret is mentioned:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: tenant-sealed-secret\nresources:\n  manifests:\n  - kind: SealedSecret\n    apiVersion: bitnami.com/v1alpha1\n    metadata:\n      name: mysecret\n    spec:\n      encryptedData:\n        .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....\n      template:\n        type: kubernetes.io/dockerconfigjson\n        # this is an example of labels and annotations that will be added to the output secret\n        metadata:\n          labels:\n            \"jenkins.io/credentials-type\": usernamePassword\n          annotations:\n            \"jenkins.io/credentials-description\": credentials from Kubernetes\n

Once the template has been created, Bill has to edit the Tenant to add a unique label to the namespaces in which the secret has to be deployed. For this, he can use the support for common and specific labels across namespaces.

    Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.

apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n\n  # use this if you want to add label to some specific namespaces\n  specificMetadata:\n    - namespaces:\n        - test-namespace\n      labels:\n        distribute-image-pull-secret: \"true\"\n\n  # use this if you want to add label to all namespaces under your tenant\n  commonMetadata:\n    labels:\n      distribute-image-pull-secret: \"true\"\n

Bill has added a new label distribute-image-pull-secret: \"true\" for tenant projects/namespaces; MTO will now add that label depending on the field used.

    Finally, Bill creates a TemplateGroupInstance which will deploy the sealed secrets using the newly created project label and template.

apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: tenant-sealed-secret\nspec:\n  template: tenant-sealed-secret\n  selector:\n    matchLabels:\n      distribute-image-pull-secret: \"true\"\n  sync: true\n

MTO will now deploy the sealed secrets mentioned in the Template to the namespaces that have the mentioned label. The rest of the work to deploy the secret from a sealed secret has to be done by the Sealed Secrets Controller.

    "},{"location":"reference-guides/extend-default-roles.html","title":"Extending the default access level for tenant members","text":"

Bill, as the cluster admin, wants to extend the default access for tenant members. As an admin of an OpenShift cluster, Bill can extend the admin, edit, and view ClusterRoles using aggregation. He will first create a ClusterRole with privileges to the resources he wants to add, and then attach the aggregation label to the newly created ClusterRole to extend the default ClusterRoles provided by OpenShift.

    kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: extend-view-role\n  labels:\n    rbac.authorization.k8s.io/aggregate-to-view: 'true'\nrules:\n  - verbs:\n      - get\n      - list\n      - watch\n    apiGroups:\n      - user.openshift.io\n    resources:\n      - groups\n

    Note: You can learn more about aggregated-cluster-roles here
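An illustrative way to see which ClusterRoles will be folded into the default view role by the aggregation controller (not part of the original note):

# list the ClusterRoles carrying the aggregate-to-view label, including extend-view-role\nkubectl get clusterroles -l rbac.authorization.k8s.io/aggregate-to-view=true\n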

    "},{"location":"reference-guides/graph-visualization.html","title":"Graph Visualization on MTO Console","text":"

    Effortlessly associate tenants with their respective resources using the enhanced graph feature on the MTO Console. This dynamic graph illustrates the relationships between tenants and the resources they create, encompassing both MTO's proprietary resources and native Kubernetes/OpenShift elements.

    Example Graph:

      graph LR;\n      A(alpha)-->B(dev);\n      A-->C(prod);\n      B-->D(limitrange);\n      B-->E(owner-rolebinding);\n      B-->F(editor-rolebinding);\n      B-->G(viewer-rolebinding);\n      C-->H(limitrange);\n      C-->I(owner-rolebinding);\n      C-->J(editor-rolebinding);\n      C-->K(viewer-rolebinding);\n

    Explore with an intuitive graph that showcases the relationships between tenants and their resources. The MTO Console's graph feature simplifies the understanding of complex structures, providing you with a visual representation of your tenant's organization.

    To view the graph of your tenant, follow the steps below:

    "},{"location":"reference-guides/integrationconfig.html","title":"Configuring Managed Namespaces and ServiceAccounts in IntegrationConfig","text":"

    Bill is a cluster admin who can use IntegrationConfig to configure how Multi Tenant Operator (MTO) manages the cluster.

    By default, MTO watches all namespaces and will enforce all the governing policies on them. All namespaces managed by MTO require the stakater.com/tenant label. MTO ignores privileged namespaces that are mentioned in the IntegrationConfig, and does not manage them. Therefore, any tenant label on such namespaces will be ignored.

    oc create namespace stakater-test\nError from server (Cannot Create namespace stakater-test without label stakater.com/tenant. User: Bill): admission webhook \"vnamespace.kb.io\" denied the request: Cannot CREATE namespace stakater-test without label stakater.com/tenant. User: Bill\n

    Bill is trying to create a namespace without the stakater.com/tenant label. Creating a namespace without this label is only allowed if the namespace is privileged. Privileged namespaces will be ignored by MTO and do not require the said label. Therefore, Bill will add the required regex in the IntegrationConfig, along with any other namespaces which are privileged and should be ignored by MTO - like default, or namespaces with prefixes like openshift, kube:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedNamespaces:\n      - ^default$\n      - ^openshift-.*\n      - ^kube-.*\n      - ^stakater-.*\n

    After mentioning the required regex (^stakater-.*) under privilegedNamespaces, Bill can create the namespace without interference.

    oc create namespace stakater-test\nnamespace/stakater-test created\n

MTO will also disallow all users who are not tenant owners from performing CRUD operations on namespaces. This also prevents Service Accounts from performing CRUD operations.

If Bill wants MTO to ignore Service Accounts, he simply has to add them to the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedServiceAccounts:\n      - system:serviceaccount:openshift\n      - system:serviceaccount:stakater\n      - system:serviceaccount:kube\n      - system:serviceaccount:redhat\n      - system:serviceaccount:hive\n

    Bill can also use regex patterns to ignore a set of service accounts:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:openshift-.*\n      - ^system:serviceaccount:stakater-.*\n
    "},{"location":"reference-guides/integrationconfig.html#configuring-vault-in-integrationconfig","title":"Configuring Vault in IntegrationConfig","text":"

    Vault is used to secure, store and tightly control access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.

If Bill (the cluster admin) has Vault configured in his cluster, then he can benefit from MTO's integration with Vault.

    MTO automatically creates Vault secret paths for tenants, where tenant members can securely save their secrets. It also authorizes tenant members to access these secrets via OIDC.

Bill would first have to integrate Vault with MTO by adding the details in the IntegrationConfig (see the IntegrationConfig documentation for more details):

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  vault:\n    enabled: true\n    endpoint:\n      secretReference:\n        name: vault-root-token\n        namespace: vault\n      url: >-\n        https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n    sso:\n      accessorID: auth_oidc_aa6aa9aa\n      clientName: vault\n

    Bill then creates a tenant for Anna and John:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@acme.org\n  viewers:\n    users:\n    - john@acme.org\n  quota: small\n  sandbox: false\n

Now Bill goes to Vault and sees that a path for the tenant has been created under the name bluesky/kv, confirming that Tenant members with the Owner or Edit roles now have access to the tenant's Vault path.

Now if Anna signs in to Vault via OIDC, she can see her tenant's path and secrets. Whereas if John signs in to Vault via OIDC, he can't see the tenant's path or secrets, as he doesn't have the access required to view them.

    "},{"location":"reference-guides/integrationconfig.html#configuring-rhsso-red-hat-single-sign-on-in-integrationconfig","title":"Configuring RHSSO (Red Hat Single Sign-On) in IntegrationConfig","text":"

Red Hat Single Sign-On (RHSSO) is based on the Keycloak project and enables you to secure your web applications by providing web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0.

If Bill, the cluster admin, has RHSSO configured in his cluster, then he can benefit from MTO's integration with RHSSO and Vault.

MTO automatically allows tenant members to access Vault via OIDC (RHSSO authentication and authorization), giving them access to the tenant secret paths where they can securely save their secrets.

    Bill would first have to integrate RHSSO with MTO by adding the details in IntegrationConfig. Visit here for more details.

    rhsso:\n  enabled: true\n  realm: customer\n  endpoint:\n    secretReference:\n      name: auth-secrets\n      namespace: openshift-auth\n    url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n
    "},{"location":"reference-guides/mattermost.html","title":"Creating Mattermost Teams for your tenant","text":""},{"location":"reference-guides/mattermost.html#requirements","title":"Requirements","text":"

    MTO-Mattermost-Integration-Operator

Please contact Stakater to install the Mattermost integration operator before following the steps below.

    "},{"location":"reference-guides/mattermost.html#steps-to-enable-integration","title":"Steps to enable integration","text":"

    Bill wants some tenants to also have their own Mattermost Teams. To make sure this happens correctly, Bill will first add the stakater.com/mattermost: true label to the tenants. The label will enable the mto-mattermost-integration-operator to create and manage Mattermost Teams based on Tenants.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\n  labels:\n    stakater.com/mattermost: 'true'\nspec:\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  sandbox: false\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n

Now users can log in to Mattermost to see their Team and the relevant channels associated with it.

    The name of the Team is similar to the Tenant name. Notification channels are pre-configured for every team, and can be modified.

    "},{"location":"reference-guides/resource-sync-by-tgi.html","title":"Sync Resources Deployed by TemplateGroupInstance","text":"

The TemplateGroupInstance CR provides two types of resource sync for the resources mentioned in the Template.

For the given example, let's consider that we want to apply the following template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-secret\nresources:\n  manifests:\n\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n\n    - apiVersion: v1\n      kind: ServiceAccount\n      metadata:\n        name: example-automated-thing\n      secrets:\n        - name: example-automated-thing-token-zyxwv\n

    And the following TemplateGroupInstance is used to deploy these resources to namespaces having label kind: build

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-secret\n  selector:\n    matchLabels:\n      kind: build\n  sync: true\n

As we can see, our TGI has a field spec.sync which is set to true. This will update the resources under two conditions:

    Note

If the updated field of the deployed manifest is not mentioned in the Template, it will not get reverted. For example, if the secrets field is not mentioned in the ServiceAccount in the above Template, it will not get reverted if changed.

    "},{"location":"reference-guides/resource-sync-by-tgi.html#ignore-resources-updates-on-resources","title":"Ignore Resources Updates on Resources","text":"

If the resources mentioned in the Template CR conflict with another controller/operator, and you want TemplateGroupInstance to not actively revert the resource updates, you can add the following label to the conflicting resource: multi-tenant-operator/ignore-resource-updates: \"\".

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-secret\nresources:\n  manifests:\n\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n\n    - apiVersion: v1\n      kind: ServiceAccount\n      metadata:\n        name: example-automated-thing\n        labels:\n          multi-tenant-operator/ignore-resource-updates: \"\"\n      secrets:\n        - name: example-automated-thing-token-zyxwv\n

    Note

However, this label will not stop Multi Tenant Operator from updating the resource under the following conditions: - Template gets updated - TemplateGroupInstance gets updated - Resource gets deleted

If you don't want the resources to be synced in any case, you can disable sync by setting sync: false in the TemplateGroupInstance spec, as shown below.
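For example, a minimal sketch of the same TemplateGroupInstance with sync disabled:

apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-secret\n  selector:\n    matchLabels:\n      kind: build\n  # resources are deployed once but not actively kept in sync afterwards\n  sync: false\n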

    "},{"location":"reference-guides/secret-distribution.html","title":"Propagate Secrets from Parent to Descendant namespaces","text":"

Secrets like registry credentials often need to exist in multiple namespaces, so that Pods within different namespaces can have access to those credentials in the form of secrets.

    Manually creating secrets within different namespaces could lead to challenges, such as:

With the help of Multi Tenant Operator's Template feature, we can make this secret distribution experience easy.

For example, to copy a Secret called registry, which exists in the example namespace, to new Namespaces whenever they are created, we will first create a Template which has a reference to the registry secret.

It will also push updates to the copied Secrets and keep the propagated secrets in sync with the parent namespace.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: registry-secret\nresources:\n  resourceMappings:\n    secrets:\n      - name: registry\n        namespace: example\n

Now, using this Template, we can propagate the registry secret to different namespaces that share a common set of labels.

For example, we will just add one label, kind: registry, and all namespaces with this label will get this secret.

To propagate it to different namespaces dynamically, we will have to create another resource called TemplateGroupInstance. The TemplateGroupInstance will have the Template and matchLabel mapping as shown below:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: registry-secret-group-instance\nspec:\n  template: registry-secret\n  selector:\n    matchLabels:\n      kind: registry\n  sync: true\n

After reconciliation, you will be able to see those secrets in the namespaces that have the mentioned label.

MTO will keep injecting this secret into new namespaces created with that label.

kubectl get secret registry -n example-ns-1\nNAME       STATE    AGE\nregistry   Active   3m\n\nkubectl get secret registry -n example-ns-2\nNAME       STATE    AGE\nregistry   Active   3m\n
    "},{"location":"tutorials/installation.html","title":"Installation","text":"

    This document contains instructions on installing, uninstalling and configuring Multi Tenant Operator using OpenShift MarketPlace.

    1. OpenShift OperatorHub UI

    2. CLI/GitOps

    3. Uninstall

    "},{"location":"tutorials/installation.html#requirements","title":"Requirements","text":""},{"location":"tutorials/installation.html#installing-via-operatorhub-ui","title":"Installing via OperatorHub UI","text":"

Note: Use the stable channel for seamless upgrades. For production environments, prefer Manual approval; use Automatic for development environments.

Note: MTO will be installed in the multi-tenant-operator namespace.
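A quick, illustrative way to confirm the operator is running after installation (pod names will vary per version):

oc get pods -n multi-tenant-operator\n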

    "},{"location":"tutorials/installation.html#configuring-integrationconfig","title":"Configuring IntegrationConfig","text":"

    IntegrationConfig is required to configure the settings of multi-tenancy for MTO.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedNamespaces:\n      - default\n      - ^openshift-*\n      - ^kube-*\n      - ^redhat-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:default-*\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n      - ^system:serviceaccount:redhat-*\n

    For more details and configurations check out IntegrationConfig.

    "},{"location":"tutorials/installation.html#installing-via-cli-or-gitops","title":"Installing via CLI OR GitOps","text":"
    oc create namespace multi-tenant-operator\nnamespace/multi-tenant-operator created\n
    oc create -f - << EOF\napiVersion: operators.coreos.com/v1\nkind: OperatorGroup\nmetadata:\n  name: tenant-operator\n  namespace: multi-tenant-operator\nEOF\noperatorgroup.operators.coreos.com/tenant-operator created\n
    oc create -f - << EOF\napiVersion: operators.coreos.com/v1alpha1\nkind: Subscription\nmetadata:\n  name: tenant-operator\n  namespace: multi-tenant-operator\nspec:\n  channel: stable\n  installPlanApproval: Automatic\n  name: tenant-operator\n  source: certified-operators\n  sourceNamespace: openshift-marketplace\n  startingCSV: tenant-operator.v0.9.1\n  config:\n    env:\n      - name: ENABLE_CONSOLE\n        value: 'true'\nEOF\nsubscription.operators.coreos.com/tenant-operator created\n

Note: To install MTO via GitOps, add the above manifests to your GitOps repository.

    "},{"location":"tutorials/installation.html#configuring-integrationconfig_1","title":"Configuring IntegrationConfig","text":"

    IntegrationConfig is required to configure the settings of multi-tenancy for MTO.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedNamespaces:\n      - default\n      - ^openshift-*\n      - ^kube-*\n      - ^redhat-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:default-*\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n      - ^system:serviceaccount:redhat-*\n

    For more details and configurations check out IntegrationConfig.

    "},{"location":"tutorials/installation.html#uninstall-via-operatorhub-ui","title":"Uninstall via OperatorHub UI","text":"

    You can uninstall MTO by following these steps:

    "},{"location":"tutorials/installation.html#notes","title":"Notes","text":""},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html","title":"Enabling Multi-Tenancy in ArgoCD","text":""},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#argocd-integration-in-multi-tenant-operator","title":"ArgoCD integration in Multi Tenant Operator","text":"

With Multi Tenant Operator (MTO), cluster admins can configure multi-tenancy in their cluster. Now, with ArgoCD integration, multi-tenancy can be configured in ArgoCD applications and AppProjects.

MTO (if configured to) will create AppProjects for each tenant. The AppProject will allow tenants to create ArgoCD Applications that can be synced to namespaces owned by those tenants. Cluster admins will also be able to blacklist certain namespaced resources if they want, and allow certain cluster-scoped resources as well (see the NamespaceResourceBlacklist and ClusterResourceWhitelist sections in the Integration Config docs and Tenant Custom Resource docs).

    Note that ArgoCD integration in MTO is completely optional.

    "},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#default-argocd-configuration","title":"Default ArgoCD configuration","text":"

    We have set a default ArgoCD configuration in Multi Tenant Operator that fulfils the following use cases:

    "},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#creating-argocd-appprojects-for-your-tenant","title":"Creating ArgoCD AppProjects for your tenant","text":"

    Bill wants each tenant to also have their own ArgoCD AppProjects. To make sure this happens correctly, Bill will first specify the ArgoCD namespace in the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  ...\n  argocd:\n    namespace: openshift-operators\n  ...\n

    Afterward, Bill must specify the source GitOps repos for the tenant inside the tenant CR like so:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\nspec:\n  argocd:\n    sourceRepos:\n      # specify source repos here\n      - \"https://github.com/stakater/GitOps-config\"\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  sandbox: false\n  namespaces:\n    withTenantPrefix:\n      - build\n      - stage\n      - dev\n

Now Bill can see that an AppProject has been created for the tenant:

    oc get AppProject -A\nNAMESPACE             NAME           AGE\nopenshift-operators   sigma        5d15h\n

    The following AppProject is created:

    apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n  name: sigma\n  namespace: openshift-operators\nspec:\n  destinations:\n    - namespace: sigma-build\n      server: \"https://kubernetes.default.svc\"\n    - namespace: sigma-dev\n      server: \"https://kubernetes.default.svc\"\n    - namespace: sigma-stage\n      server: \"https://kubernetes.default.svc\"\n  roles:\n    - description: >-\n        Role that gives full access to all resources inside the tenant's\n        namespace to the tenant owner groups\n      groups:\n        - saap-cluster-admins\n        - stakater-team\n        - sigma-owner-group\n      name: sigma-owner\n      policies:\n        - \"p, proj:sigma:sigma-owner, *, *, sigma/*, allow\"\n    - description: >-\n        Role that gives edit access to all resources inside the tenant's\n        namespace to the tenant owner group\n      groups:\n        - saap-cluster-admins\n        - stakater-team\n        - sigma-edit-group\n      name: sigma-edit\n      policies:\n        - \"p, proj:sigma:sigma-edit, *, *, sigma/*, allow\"\n    - description: >-\n        Role that gives view access to all resources inside the tenant's\n        namespace to the tenant owner group\n      groups:\n        - saap-cluster-admins\n        - stakater-team\n        - sigma-view-group\n      name: sigma-view\n      policies:\n        - \"p, proj:sigma:sigma-view, *, get, sigma/*, allow\"\n  sourceRepos:\n    - \"https://github.com/stakater/gitops-config\"\n

Users belonging to the Sigma group will now only see applications created by them in the ArgoCD frontend:

    "},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#prevent-argocd-from-syncing-certain-namespaced-resources","title":"Prevent ArgoCD from syncing certain namespaced resources","text":"

    Bill wants tenants to not be able to sync ResourceQuota and LimitRange resources to their namespaces. To do this correctly, Bill will specify these resources to blacklist in the ArgoCD portion of the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  ...\n  argocd:\n    namespace: openshift-operators\n    namespaceResourceBlacklist:\n      - group: \"\"\n        kind: ResourceQuota\n      - group: \"\"\n        kind: LimitRange\n  ...\n

    Now, if these resources are added to any tenant's project directory in GitOps, ArgoCD will not sync them to the cluster. The AppProject will also have the blacklisted resources added to it:

    apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n  name: sigma\n  namespace: openshift-operators\nspec:\n  ...\n  namespaceResourceBlacklist:\n    - group: ''\n      kind: ResourceQuota\n    - group: ''\n      kind: LimitRange\n  ...\n
    "},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#allow-argocd-to-sync-certain-cluster-wide-resources","title":"Allow ArgoCD to sync certain cluster-wide resources","text":"

    Bill now wants tenants to be able to sync the Environment cluster scoped resource to the cluster. To do this correctly, Bill will specify the resource to allow-list in the ArgoCD portion of the Integration Config's Spec:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  ...\n  argocd:\n    namespace: openshift-operators\n    clusterResourceWhitelist:\n      - group: \"\"\n        kind: Environment\n  ...\n

Now, if the resource is added to any tenant's project directory in GitOps, ArgoCD will sync it to the cluster. The AppProject will also have the allow-listed resources added to it:

    apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n  name: sigma\n  namespace: openshift-operators\nspec:\n  ...\n  clusterResourceWhitelist:\n  - group: \"\"\n    kind: Environment\n  ...\n
    "},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#override-namespaceresourceblacklist-andor-clusterresourcewhitelist-per-tenant","title":"Override NamespaceResourceBlacklist and/or ClusterResourceWhitelist per Tenant","text":"

Bill now wants a specific tenant to override the namespaceResourceBlacklist and/or clusterResourceWhitelist set via the Integration Config. Bill will specify these in the argocd.appProject section of the Tenant spec.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: blue-sky\nspec:\n  argocd:\n    sourceRepos:\n      # specify source repos here\n      - \"https://github.com/stakater/GitOps-config\"\n    appProject:\n      clusterResourceWhitelist:\n        - group: admissionregistration.k8s.io\n          kind: validatingwebhookconfigurations\n      namespaceResourceBlacklist:\n        - group: \"\"\n          kind: ConfigMap\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  sandbox: false\n  namespaces:\n    withTenantPrefix:\n      - build\n      - stage\n
    "},{"location":"tutorials/template/template-group-instance.html","title":"More about TemplateGroupInstance","text":""},{"location":"tutorials/template/template-instance.html","title":"More about TemplateInstances","text":""},{"location":"tutorials/template/template.html","title":"Understanding and Utilizing Template","text":""},{"location":"tutorials/template/template.html#creating-templates","title":"Creating Templates","text":"

    Anna wants to create a Template that she can use to initialize or share common resources across namespaces (e.g. PullSecrets).

Anna can either create a template using the manifests field, covering Kubernetes or custom resources:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Or by using Helm Charts

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: redis\nresources:\n  helm:\n    releaseName: redis\n    chart:\n      repository:\n        name: redis\n        repoUrl: https://charts.bitnami.com/bitnami\n    values: |\n      redisPort: 6379\n

She can also use the resourceMappings field to copy over secrets and configmaps from one namespace to others.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: resource-mapping\nresources:\n  resourceMappings:\n    secrets:\n      - name: docker-secret\n        namespace: bluesky-build\n    configMaps:\n      - name: tronador-configMap\n        namespace: stakater-tronador\n

Note: Resource mapping can be used via TGI to map resources within tenant namespaces or to some other tenant's namespace. If used with TI, the resources will only be mapped if the namespaces belong to the same tenant.

    "},{"location":"tutorials/template/template.html#using-templates-with-default-parameters","title":"Using Templates with Default Parameters","text":"
    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: namespace-parameterized-restrictions\nparameters:\n  # Name of the parameter\n  - name: DEFAULT_CPU_LIMIT\n    # The default value of the parameter\n    value: \"1\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"0.5\"\n    # If a parameter is required the template instance will need to set it\n    # required: true\n    # Make sure only values are entered for this parameter\n    validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n  manifests:\n    - apiVersion: v1\n      kind: LimitRange\n      metadata:\n        name: namespace-limit-range-${namespace}\n      spec:\n        limits:\n          - default:\n              cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n            defaultRequest:\n              cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n            type: Container\n

Parameters can be used with both manifests and Helm charts, as sketched below.
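As a hedged sketch only (it assumes the same ${{PARAM}} substitution shown above for manifests also works inside Helm values, which this page does not spell out), a parameterized Helm-based Template could look like:

apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: redis-parameterized\nparameters:\n  # parameter consumed inside the Helm values below\n  - name: REDIS_PORT\n    value: \"6379\"\nresources:\n  helm:\n    releaseName: redis\n    chart:\n      repository:\n        name: redis\n        repoUrl: https://charts.bitnami.com/bitnami\n    values: |\n      redisPort: ${{REDIS_PORT}}\n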

    "},{"location":"tutorials/tenant/assign-quota-tenant.html","title":"Assign Quota to a Tenant","text":""},{"location":"tutorials/tenant/assigning-metadata.html","title":"Assigning Common/Specific Metadata","text":""},{"location":"tutorials/tenant/assigning-metadata.html#distributing-common-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing common labels and annotations to tenant namespaces via Tenant Custom Resource","text":"

    Bill now wants to add labels/annotations to all the namespaces for a tenant. To create those labels/annotations Bill will just add them into commonMetadata.labels/commonMetadata.annotations field in the tenant CR.

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n  commonMetadata:\n    labels:\n      app.kubernetes.io/managed-by: tenant-operator\n      app.kubernetes.io/part-of: tenant-alpha\n    annotations:\n      openshift.io/node-selector: node-role.kubernetes.io/infra=\nEOF\n

    With the above configuration all tenant namespaces will now contain the mentioned labels and annotations.

    "},{"location":"tutorials/tenant/assigning-metadata.html#distributing-specific-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing specific labels and annotations to tenant namespaces via Tenant Custom Resource","text":"

    Bill now wants to add labels/annotations to specific namespaces for a tenant. To create those labels/annotations Bill will just add them into specificMetadata.labels/specificMetadata.annotations and specific namespaces in specificMetadata.namespaces field in the tenant CR.

kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandboxConfig:\n    enabled: true\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n  specificMetadata:\n    - namespaces:\n        - bluesky-anna-aurora-sandbox\n      labels:\n        app.kubernetes.io/is-sandbox: \"true\"\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\nEOF\n

With the above configuration, the namespaces listed under specificMetadata (here, bluesky-anna-aurora-sandbox) will now contain the mentioned labels and annotations.

    "},{"location":"tutorials/tenant/create-sandbox.html","title":"Create Sandbox Namespaces for Tenant Users","text":""},{"location":"tutorials/tenant/create-sandbox.html#assigning-users-sandbox-namespace","title":"Assigning Users Sandbox Namespace","text":"

    Bill assigned the ownership of bluesky to Anna and Anthony. Now if the users want sandboxes to be made for them, they'll have to ask Bill to enable sandbox functionality.

    To enable that, Bill will just set enabled: true within the sandboxConfig field

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandboxConfig:\n    enabled: true\nEOF\n

    With the above configuration Anna and Anthony will now have new sandboxes created

    kubectl get namespaces\nNAME                             STATUS   AGE\nbluesky-anna-aurora-sandbox      Active   5d5h\nbluesky-anthony-aurora-sandbox   Active   5d5h\nbluesky-john-aurora-sandbox      Active   5d5h\n

If Bill wants to make sure that only the sandbox owner can view his sandbox namespace, he can achieve this by setting private: true within the sandboxConfig field.

    "},{"location":"tutorials/tenant/create-sandbox.html#create-private-sandboxes","title":"Create Private Sandboxes","text":"

Bill assigned the ownership of bluesky to Anna and Anthony. Now if the users want sandboxes to be made for them, they'll have to ask Bill to enable sandbox functionality. The users also want to make sure that the sandboxes created for them are only visible to the user they belong to. To enable that, Bill will just set enabled: true and private: true within the sandboxConfig field:

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandboxConfig:\n    enabled: true\n    private: true\nEOF\n

    With the above configuration Anna and Anthony will now have new sandboxes created

    kubectl get namespaces\nNAME                             STATUS   AGE\nbluesky-anna-aurora-sandbox      Active   5d5h\nbluesky-anthony-aurora-sandbox   Active   5d5h\nbluesky-john-aurora-sandbox      Active   5d5h\n

However, from Anna's perspective, only her own sandbox will be visible:

    kubectl get namespaces\nNAME                             STATUS   AGE\nbluesky-anna-aurora-sandbox      Active   5d5h\n
    "},{"location":"tutorials/tenant/create-tenant.html","title":"Creating a Tenant","text":"

    Bill is a cluster admin who receives a new request from Aurora Solutions CTO asking for a new tenant for Anna's team.

    Bill creates a new tenant called bluesky in the cluster:

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandbox: false\nEOF\n

    Bill checks if the new tenant is created:

    kubectl get tenants.tenantoperator.stakater.com bluesky\nNAME       STATE    AGE\nbluesky    Active   3m\n

    Anna can now log in to the cluster and check if she can create namespaces

    kubectl auth can-i create namespaces\nyes\n

    However, cluster resources are not accessible to Anna

    kubectl auth can-i get namespaces\nno\n\nkubectl auth can-i get persistentvolumes\nno\n

    Including the Tenant resource

    kubectl auth can-i get tenants.tenantoperator.stakater.com\nno\n
    "},{"location":"tutorials/tenant/create-tenant.html#assign-multiple-users-as-tenant-owner","title":"Assign multiple users as tenant owner","text":"

    In the example above, Bill assigned the ownership of bluesky to Anna. If another user, e.g. Anthony, needs to administer bluesky, then Bill can assign the ownership of the tenant to that user as well:

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandbox: false\nEOF\n

    With the configuration above, Anthony can log in to the cluster and execute

    kubectl auth can-i create namespaces\nyes\n
    "},{"location":"tutorials/tenant/creating-namespaces.html","title":"Creating Namespaces","text":""},{"location":"tutorials/tenant/creating-namespaces.html#creating-namespaces-via-tenant-custom-resource","title":"Creating Namespaces via Tenant Custom Resource","text":"

    Bill now wants to create namespaces for dev, build and production environments for the tenant members. To create those namespaces, Bill will add their names to the namespaces field in the tenant CR. If Bill wants the tenant name to be prepended to a namespace name, he can list it under the namespaces.withTenantPrefix field; otherwise, he can list it under namespaces.withoutTenantPrefix.

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n    withoutTenantPrefix:\n      - prod\nEOF\n

    With the above configuration, tenant members will now see that new namespaces have been created:

    kubectl get namespaces\nNAME             STATUS   AGE\nbluesky-dev      Active   5d5h\nbluesky-build    Active   5d5h\nprod             Active   5d5h\n

    Anna as the tenant owner can create new namespaces for her tenant.

    apiVersion: v1\nkind: Namespace\nmetadata:\n  name: bluesky-production\n  labels:\n    stakater.com/tenant: bluesky\n

    \u26a0\ufe0f Anna is required to add the tenant label stakater.com/tenant: bluesky which contains the name of her tenant bluesky, while creating the namespace. If this label is not added or if Anna does not belong to the bluesky tenant, then Multi Tenant Operator will not allow the creation of that namespace.

    When Anna creates the namespace, MTO assigns Anna and other tenant members the roles based on their user types, such as a tenant owner getting the OpenShift admin role for that namespace.
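
    As a rough check of the assigned role, Anna could run kubectl auth can-i against the new namespace. This is a minimal sketch; the namespace and the resources being checked are only illustrative and assume the admin role grants them, they are not output captured from a real cluster.

    # illustrative check of the admin role assigned by MTO in bluesky-production\nkubectl auth can-i create deployments -n bluesky-production\nyes\n\nkubectl auth can-i create rolebindings -n bluesky-production\nyes\n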

    As a tenant owner, Anna is able to create namespaces.

    If you have enabled ArgoCD Multitenancy, our preferred solution is to create tenant namespaces by using Tenant spec to avoid syncing issues in ArgoCD console during namespace creation.

    "},{"location":"tutorials/tenant/creating-namespaces.html#add-existing-namespaces-to-tenant-via-gitops","title":"Add Existing Namespaces to Tenant via GitOps","text":"

    Using GitOps as your preferred development workflow, you can add existing namespaces for your tenants by including the tenant label.

    To add an existing namespace to your tenant via GitOps:

    1. First, migrate your namespace resource to your \u201cwatched\u201d git repository
    2. Edit your namespace yaml to include the tenant label
    3. Tenant label follows the naming convention stakater.com/tenant: <TENANT_NAME>
    4. Sync your GitOps repository with your cluster and allow changes to be propagated
    5. Verify that your Tenant users now have access to the namespace

    For example, if Anna, a tenant owner, wants to add the namespace bluesky-dev to her tenant via GitOps, she first migrates her namespace manifest to a \u201cwatched repository\u201d:

    apiVersion: v1\nkind: Namespace\nmetadata:\n  name: bluesky-dev\n

    She can then add the tenant label

     ...\n  labels:\n    stakater.com/tenant: bluesky\n

    All the users of the bluesky tenant will now have access to the existing namespace.

    Additionally, to remove namespaces from a tenant, simply remove the tenant label from the namespace resource and sync your changes to your cluster.
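
    Outside of a GitOps workflow, the same effect can be achieved imperatively. A minimal sketch, assuming the bluesky-dev namespace from above; the trailing dash removes the label:

    # illustrative only: remove the tenant label directly instead of via GitOps\nkubectl label namespace bluesky-dev stakater.com/tenant-\n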

    "},{"location":"tutorials/tenant/creating-namespaces.html#remove-namespaces-from-your-cluster-via-gitops","title":"Remove Namespaces from your Cluster via GitOps","text":"

    GitOps is a quick and efficient way to automate the management of your K8s resources.

    To remove namespaces from your cluster via GitOps, remove the namespace manifest from your \u201cwatched\u201d git repository and sync your GitOps repository with your cluster.

    "},{"location":"tutorials/tenant/custom-rbac.html","title":"Applying Custom RBAC to a Tenant","text":""},{"location":"tutorials/tenant/deleting-tenant.html","title":"Deleting a Tenant","text":""},{"location":"tutorials/tenant/deleting-tenant.html#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted","title":"Retaining tenant namespaces and AppProject when a tenant is being deleted","text":"

    Bill now wants to delete tenant bluesky and wants to retain all namespaces and the AppProject of the tenant. To retain them, Bill will set spec.onDelete.cleanNamespaces and spec.onDelete.cleanAppProject to false.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  quota: small\n  sandboxConfig:\n    enabled: true\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n  onDelete:\n    cleanNamespaces: false\n    cleanAppProject: false\n

    With the above configuration, the tenant namespaces and AppProject will not be deleted when tenant bluesky is deleted. By default, the value of spec.onDelete.cleanNamespaces is also false, while spec.onDelete.cleanAppProject is true.

    "},{"location":"tutorials/tenant/tenant-hibernation.html","title":"Hibernating a Tenant","text":""},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-namespaces","title":"Hibernating Namespaces","text":"

    You can manage workloads in your cluster with MTO by implementing a hibernation schedule for your tenants. Hibernation downsizes the running Deployments and StatefulSets in a tenant\u2019s namespace according to a defined cron schedule. You can set a hibernation schedule for your tenants by adding the \u2018spec.hibernation\u2019 field to the tenant's respective Custom Resource.

    hibernation:\n  sleepSchedule: 23 * * * *\n  wakeSchedule: 26 * * * *\n

    spec.hibernation.sleepSchedule accepts a cron expression indicating the time to put the workloads in your tenant\u2019s namespaces to sleep.

    spec.hibernation.wakeSchedule accepts a cron expression indicating the time to wake the workloads in your tenant\u2019s namespaces up.

    Note

    Both sleep and wake schedules must be specified for your Hibernation schedule to be valid.

    Additionally, adding the hibernation.stakater.com/exclude: 'true' annotation to a namespace excludes it from hibernating.
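
    A minimal sketch of applying that annotation directly to a namespace (the namespace name is illustrative):

    # illustrative namespace name\nkubectl annotate namespace bluesky-dev hibernation.stakater.com/exclude=true\n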

    Note

    This is only true for hibernation applied via the Tenant Custom Resource, and does not apply for hibernation done by manually creating a ResourceSupervisor (details about that below).

    Note

    This will not wake up an already sleeping namespace before the wake schedule.

    "},{"location":"tutorials/tenant/tenant-hibernation.html#resource-supervisor","title":"Resource Supervisor","text":"

    Adding a Hibernation Schedule to a Tenant creates an accompanying ResourceSupervisor Custom Resource. The Resource Supervisor stores the Hibernation schedules and manages the current and previous states of all the applications, whether they are sleeping or awake.

    When the sleep timer is activated, the Resource Supervisor controller stores the details of your applications (including the number of replicas, configurations, etc.) in the applications' namespaces and then puts your applications to sleep. When the wake timer is activated, the controller wakes up the applications using their stored details.

    Enabling ArgoCD support for Tenants will also hibernate applications in the tenants' appProjects.

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: sigma\nspec:\n  argocd:\n    appProjects:\n      - sigma\n    namespace: openshift-gitops\n  hibernation:\n    sleepSchedule: 42 * * * *\n    wakeSchedule: 45 * * * *\n  namespaces:\n    - tenant-ns1\n    - tenant-ns2\n

    Currently, Hibernation is available only for StatefulSets and Deployments.

    "},{"location":"tutorials/tenant/tenant-hibernation.html#manual-creation-of-resourcesupervisor","title":"Manual creation of ResourceSupervisor","text":"

    Hibernation can also be applied by creating a ResourceSupervisor resource manually. The ResourceSupervisor definition will contain the hibernation cron schedule, the names of the namespaces to be hibernated, and the names of the ArgoCD AppProjects whose ArgoCD Applications have to be hibernated (as per the given schedule).

    This method can be used to hibernate namespaces and ArgoCD AppProjects regardless of whether they belong to a tenant.

    As an example, the following ResourceSupervisor could be created manually, to apply hibernation explicitly to the 'ns1' and 'ns2' namespaces, and to the 'sample-app-project' AppProject.

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: hibernator\nspec:\n  argocd:\n    appProjects:\n      - sample-app-project\n    namespace: openshift-gitops\n  hibernation:\n    sleepSchedule: 42 * * * *\n    wakeSchedule: 45 * * * *\n  namespaces:\n    - ns1\n    - ns2\n
    "},{"location":"tutorials/tenant/tenant-hibernation.html#freeing-up-unused-resources-with-hibernation","title":"Freeing up unused resources with hibernation","text":""},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-a-tenant_1","title":"Hibernating a tenant","text":"

    Bill is a cluster administrator who wants to free up unused cluster resources at nighttime, in an effort to reduce costs (when the cluster isn't being used).

    First, Bill creates a tenant with the hibernation schedules mentioned in the spec, or adds the hibernation field to an existing tenant:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\nspec:\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  namespaces:\n    withoutTenantPrefix:\n      - build\n      - stage\n      - dev\n

    The schedules above will put all the Deployments and StatefulSets within the tenant's namespaces to sleep, by reducing their pod count to 0 at 8 PM every weekday. At 8 AM on weekdays, the namespaces will then wake up by restoring their applications' previous pod counts.

    Bill can verify this behaviour by checking the newly created ResourceSupervisor resource at run time:

    oc get ResourceSupervisor -A\nNAME           AGE\nsigma          5m\n

    The ResourceSupervisor will look like this at 'running' time (as per the schedule):

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: example\nspec:\n  argocd:\n    appProjects: []\n    namespace: ''\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - build\n    - stage\n    - dev\nstatus:\n  currentStatus: running\n  nextReconcileTime: '2022-10-12T20:00:00Z'\n

    The ResourceSupervisor will look like this at 'sleeping' time (as per the schedule):

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: example\nspec:\n  argocd:\n    appProjects: []\n    namespace: ''\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - build\n    - stage\n    - dev\nstatus:\n  currentStatus: sleeping\n  nextReconcileTime: '2022-10-13T08:00:00Z'\n  sleepingApplications:\n    - Namespace: build\n      kind: Deployment\n      name: example\n      replicas: 3\n    - Namespace: stage\n      kind: Deployment\n      name: example\n      replicas: 3\n

    Bill wants to prevent the build namespace from going to sleep, so he can add the hibernation.stakater.com/exclude: 'true' annotation to it. The ResourceSupervisor will now look like this after reconciling:

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: example\nspec:\n  argocd:\n    appProjects: []\n    namespace: ''\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - stage\n    - dev\nstatus:\n  currentStatus: sleeping\n  nextReconcileTime: '2022-10-13T08:00:00Z'\n  sleepingApplications:\n    - Namespace: stage\n      kind: Deployment\n      name: example\n      replicas: 3\n
    "},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-namespaces-andor-argocd-applications-with-resourcesupervisor","title":"Hibernating namespaces and/or ArgoCD Applications with ResourceSupervisor","text":"

    Bill, the cluster administrator, wants to hibernate a collection of namespaces and AppProjects belonging to multiple different tenants. He can do so by creating a ResourceSupervisor manually, specifying in its spec the hibernation schedule along with the namespaces and ArgoCD Applications that need to be hibernated according to that schedule. Bill can also use the same method to hibernate some namespaces and ArgoCD Applications that do not belong to any tenant on his cluster.

    The example given below will hibernate the ArgoCD Applications in the 'test-app-project' AppProject; and it will also hibernate the 'ns2' and 'ns4' namespaces.

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: test-resource-supervisor\nspec:\n  argocd:\n    appProjects:\n      - test-app-project\n    namespace: argocd-ns\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - ns2\n    - ns4\nstatus:\n  currentStatus: sleeping\n  nextReconcileTime: '2022-10-13T08:00:00Z'\n  sleepingApplications:\n    - Namespace: ns2\n      kind: Deployment\n      name: test-deployment\n      replicas: 3\n
    "},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html","title":"Enabling Multi-Tenancy in Vault","text":""},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#vault-multitenancy","title":"Vault Multitenancy","text":"

    HashiCorp Vault is an identity-based secret and encryption management system. Vault validates and authorizes a system's clients (users, machines, apps) before providing them access to secrets or stored sensitive data.

    "},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#vault-integration-in-multi-tenant-operator","title":"Vault integration in Multi Tenant Operator","text":""},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#service-account-auth-in-vault","title":"Service Account Auth in Vault","text":"

    MTO enables the Kubernetes auth method which can be used to authenticate with Vault using a Kubernetes Service Account Token. When enabled, for every tenant namespace, MTO automatically creates policies and roles that allow the service accounts present in those namespaces to read secrets at the tenant's path in Vault. The name of the role is the same as the namespace name.

    These service accounts are required to have the stakater.com/vault-access: true label, so that they can be authenticated with Vault via MTO.
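
    A minimal sketch of labelling a service account for Vault access (the service account and namespace names are illustrative):

    # illustrative service account and namespace names\nkubectl label serviceaccount default stakater.com/vault-access=true -n bluesky-dev\n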

    The diagram below shows how MTO enables ServiceAccounts to read secrets from Vault.

    "},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#user-oidc-auth-in-vault","title":"User OIDC Auth in Vault","text":"

    This requires a running RHSSO (Red Hat Single Sign-On) instance integrated with Vault over the OIDC login method.

    MTO integration with Vault and RHSSO provides a way for users to log in to Vault where they only have access to relevant tenant paths.

    Once both integrations are set up with the IntegrationConfig CR, MTO links tenant users to specific client roles named after their tenant under the Vault client in RHSSO.

    After that, MTO creates specific policies in Vault for its tenant users.

    The mapping of tenant roles to Vault paths and capabilities is shown below:

    | Tenant Role | Vault Path | Vault Capabilities |
    | --- | --- | --- |
    | Owner, Editor | (tenantName)/* | Create, Read, Update, Delete, List |
    | Owner, Editor | sys/mounts/(tenantName)/* | Create, Read, Update, Delete, List |
    | Owner, Editor | managed-addons/* | Read, List |
    | Viewer | (tenantName)/* | Read |

    A simple user login workflow is shown in the diagram below.

    "},{"location":"usecases/admin-clusterrole.html","title":"Extending Admin ClusterRole","text":"

    Bill, as the cluster admin, wants to add additional rules to the admin ClusterRole.

    Bill can extend the admin role for MTO using the aggregation label for the admin ClusterRole. Bill will create a new ClusterRole with all the permissions he wants to add, and attach the aggregation label to the newly created ClusterRole:

    kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: extend-admin-role\n  labels:\n    rbac.authorization.k8s.io/aggregate-to-admin: 'true'\nrules:\n  - verbs:\n      - create\n      - update\n      - patch\n      - delete\n    apiGroups:\n      - user.openshift.io\n    resources:\n      - groups\n

    Note: You can learn more about aggregated-cluster-roles here

    "},{"location":"usecases/admin-clusterrole.html#whats-next","title":"What\u2019s next","text":"

    See how Bill can hibernate unused namespaces at night

    "},{"location":"usecases/argocd.html","title":"ArgoCD","text":""},{"location":"usecases/argocd.html#creating-argocd-appprojects-for-your-tenant","title":"Creating ArgoCD AppProjects for your tenant","text":"

    Bill wants each tenant to also have their own ArgoCD AppProjects. To make sure this happens correctly, Bill will first specify the ArgoCD namespace in the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  ...\n  argocd:\n    namespace: openshift-operators\n  ...\n

    Afterwards, Bill must specify the source GitOps repos for the tenant inside the tenant CR like so:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\nspec:\n  argocd:\n    sourceRepos:\n      # specify source repos here\n      - \"https://github.com/stakater/GitOps-config\"\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  sandbox: false\n  namespaces:\n    withTenantPrefix:\n      - build\n      - stage\n      - dev\n

    Now Bill can see that an AppProject has been created for the tenant:

    oc get AppProject -A\nNAMESPACE             NAME           AGE\nopenshift-operators   sigma        5d15h\n

    The following AppProject is created:

    apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n  name: sigma\n  namespace: openshift-operators\nspec:\n  destinations:\n    - namespace: sigma-build\n      server: \"https://kubernetes.default.svc\"\n    - namespace: sigma-dev\n      server: \"https://kubernetes.default.svc\"\n    - namespace: sigma-stage\n      server: \"https://kubernetes.default.svc\"\n  roles:\n    - description: >-\n        Role that gives full access to all resources inside the tenant's\n        namespace to the tenant owner group\n      groups:\n        - saap-cluster-admins\n        - stakater-team\n        - sigma-owner-group\n      name: sigma-owner\n      policies:\n        - \"p, proj:sigma:sigma-owner, *, *, sigma/*, allow\"\n    - description: >-\n        Role that gives edit access to all resources inside the tenant's\n        namespace to the tenant owner group\n      groups:\n        - saap-cluster-admins\n        - stakater-team\n        - sigma-edit-group\n      name: sigma-edit\n      policies:\n        - \"p, proj:sigma:sigma-edit, *, *, sigma/*, allow\"\n    - description: >-\n        Role that gives view access to all resources inside the tenant's\n        namespace to the tenant owner group\n      groups:\n        - saap-cluster-admins\n        - stakater-team\n        - sigma-view-group\n      name: sigma-view\n      policies:\n        - \"p, proj:sigma:sigma-view, *, get, sigma/*, allow\"\n  sourceRepos:\n    - \"https://github.com/stakater/gitops-config\"\n

    Users belonging to the Sigma group will now only see the applications created by them in the ArgoCD frontend.

    "},{"location":"usecases/argocd.html#prevent-argocd-from-syncing-certain-namespaced-resources","title":"Prevent ArgoCD from syncing certain namespaced resources","text":"

    Bill wants tenants to not be able to sync ResourceQuota and LimitRange resources to their namespaces. To do this correctly, Bill will specify these resources to blacklist in the ArgoCD portion of the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  ...\n  argocd:\n    namespace: openshift-operators\n    namespaceResourceBlacklist:\n      - group: \"\"\n        kind: ResourceQuota\n      - group: \"\"\n        kind: LimitRange\n  ...\n

    Now, if these resources are added to any tenant's project directory in GitOps, ArgoCD will not sync them to the cluster. The AppProject will also have the blacklisted resources added to it:

    apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n  name: sigma\n  namespace: openshift-operators\nspec:\n  ...\n  namespaceResourceBlacklist:\n    - group: ''\n      kind: ResourceQuota\n    - group: ''\n      kind: LimitRange\n  ...\n
    "},{"location":"usecases/argocd.html#allow-argocd-to-sync-certain-cluster-wide-resources","title":"Allow ArgoCD to sync certain cluster-wide resources","text":"

    Bill now wants tenants to be able to sync the Environment cluster scoped resource to the cluster. To do this correctly, Bill will specify the resource to allow-list in the ArgoCD portion of the Integration Config's Spec:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  ...\n  argocd:\n    namespace: openshift-operators\n    clusterResourceWhitelist:\n      - group: \"\"\n        kind: Environment\n  ...\n

    Now, if the resource is added to any tenant's project directory in GitOps, ArgoCD will sync it to the cluster. The AppProject will also have the allow-listed resources added to it:

    apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n  name: sigma\n  namespace: openshift-operators\nspec:\n  ...\n  clusterResourceWhitelist:\n  - group: \"\"\n    kind: Environment\n  ...\n
    "},{"location":"usecases/argocd.html#override-namespaceresourceblacklist-andor-clusterresourcewhitelist-per-tenant","title":"Override NamespaceResourceBlacklist and/or ClusterResourceWhitelist per Tenant","text":"

    Bill now wants a specific tenant to override the namespaceResourceBlacklist and/or clusterResourceWhitelist set via the Integration Config. Bill will specify these in the argocd.appProject section of the Tenant spec.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: blue-sky\nspec:\n  argocd:\n    sourceRepos:\n      # specify source repos here\n      - \"https://github.com/stakater/GitOps-config\"\n    appProject:\n      clusterResourceWhitelist:\n        - group: admissionregistration.k8s.io\n          kind: validatingwebhookconfigurations\n      namespaceResourceBlacklist:\n        - group: \"\"\n          kind: ConfigMap\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  sandbox: false\n  namespaces:\n    withTenantPrefix:\n      - build\n      - stage\n
    "},{"location":"usecases/configuring-multitenant-network-isolation.html","title":"Configuring Multi-Tenant Isolation with Network Policy Template","text":"

    Bill is a cluster admin who wants to configure network policies to provide multi-tenant network isolation.

    First, Bill creates a template for network policies:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: tenant-network-policy\nresources:\n  manifests:\n  - apiVersion: networking.k8s.io/v1\n    kind: NetworkPolicy\n    metadata:\n      name: allow-same-namespace\n    spec:\n      podSelector: {}\n      ingress:\n      - from:\n        - podSelector: {}\n  - apiVersion: networking.k8s.io/v1\n    kind: NetworkPolicy\n    metadata:\n      name: allow-from-openshift-monitoring\n    spec:\n      ingress:\n      - from:\n        - namespaceSelector:\n            matchLabels:\n              network.openshift.io/policy-group: monitoring\n      podSelector: {}\n      policyTypes:\n      - Ingress\n  - apiVersion: networking.k8s.io/v1\n    kind: NetworkPolicy\n    metadata:\n      name: allow-from-openshift-ingress\n    spec:\n      ingress:\n      - from:\n        - namespaceSelector:\n            matchLabels:\n              network.openshift.io/policy-group: ingress\n      podSelector: {}\n      policyTypes:\n      - Ingress\n

    Once the template has been created, Bill edits the IntegrationConfig to add a unique label to all tenant projects:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    project:\n      labels:\n        stakater.com/workload-monitoring: \"true\"\n        tenant-network-policy: \"true\"\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\n    sandbox:\n      labels:\n        stakater.com/kind: sandbox\n    privilegedNamespaces:\n      - default\n      - ^openshift-*\n      - ^kube-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n

    Bill has added a new label tenant-network-policy: \"true\" in the project section of the IntegrationConfig; MTO will now add that label to all tenant projects.

    Finally, Bill creates a TemplateGroupInstance which will distribute the network policies using the newly added project label and the template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: tenant-network-policy-group\nspec:\n  template: tenant-network-policy\n  selector:\n    matchLabels:\n      tenant-network-policy: \"true\"\n  sync: true\n

    MTO will now deploy the network policies mentioned in Template to all projects matching the label selector mentioned in the TemplateGroupInstance.
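
    Bill can spot-check any labelled project to confirm the policies were distributed. A sketch, assuming a tenant namespace named bluesky-dev; the output simply lists the three policies defined in the template above:

    # illustrative namespace name\noc get networkpolicy -n bluesky-dev\nNAME                              POD-SELECTOR   AGE\nallow-from-openshift-ingress      <none>         1m\nallow-from-openshift-monitoring   <none>         1m\nallow-same-namespace              <none>         1m\n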

    "},{"location":"usecases/custom-roles.html","title":"Changing the default access level for tenant owners","text":"

    This feature allows the cluster admins to change the default roles assigned to the Tenant owner, editor, and viewer groups.

    For example, Bill as the cluster admin wants to reduce the privileges that tenant owners have, so that they cannot create or edit Roles or bind them. As an admin of an OpenShift cluster, Bill can do this by assigning the edit role to all tenant owners. This is easily achieved by modifying the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  tenantRoles:\n    default:\n      owner:\n        clusterRoles:\n          - edit\n      editor:\n        clusterRoles:\n          - edit\n      viewer:\n        clusterRoles:\n          - view\n

    Once all namespaces reconcile, the old admin RoleBindings should get replaced with the edit ones for each tenant owner.
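
    A tenant owner could confirm the reduced access from their side with kubectl auth can-i. This is a minimal sketch with an illustrative namespace name; the answers assume the edit ClusterRole, which allows creating workloads but not RoleBindings:

    # illustrative namespace name\nkubectl auth can-i create rolebindings -n bluesky-dev\nno\n\nkubectl auth can-i create deployments -n bluesky-dev\nyes\n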

    "},{"location":"usecases/custom-roles.html#giving-specific-permissions-to-some-tenants","title":"Giving specific permissions to some tenants","text":"

    Bill now wants the owners of the tenants bluesky and alpha to have admin permissions over their namespaces. The custom roles feature allows Bill to do this by modifying the IntegrationConfig like this:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  tenantRoles:\n    default:\n      owner:\n        clusterRoles:\n          - edit\n      editor:\n        clusterRoles:\n          - edit\n      viewer:\n        clusterRoles:\n          - view\n    custom:\n    - labelSelector:\n        matchExpressions:\n        - key: stakater.com/tenant\n          operator: In\n          values:\n            - alpha\n      owner:\n        clusterRoles:\n          - admin\n    - labelSelector:\n        matchExpressions:\n        - key: stakater.com/tenant\n          operator: In\n          values:\n            - bluesky\n      owner:\n        clusterRoles:\n          - admin\n

    New Bindings will be created for the Tenant owners of bluesky and alpha, corresponding to the admin Role. Bindings for editors and viewers will be inherited from the default roles. All other Tenant owners will have an edit Role bound to them within their namespaces.

    "},{"location":"usecases/deploying-templates.html","title":"Distributing Resources in Namespaces","text":"

    Multi Tenant Operator has three Custom Resources that can cover this need by referencing a Template CR, depending upon the conditions and preference:

    1. TemplateGroupInstance
    2. TemplateInstance
    3. Tenant

    Stakater Team, however, encourages the use of TemplateGroupInstance to distribute resources in multiple namespaces as it is optimized for better performance.

    "},{"location":"usecases/deploying-templates.html#deploying-template-to-namespaces-via-templategroupinstances","title":"Deploying Template to Namespaces via TemplateGroupInstances","text":"

    Bill, the cluster admin, wants to deploy a docker pull secret in namespaces where certain labels exist.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Once the template has been created, Bill makes a TemplateGroupInstance referring to the Template he wants to deploy with MatchLabels:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchLabels:\n      kind: build\n  sync: true\n

    Afterwards, Bill can see that secrets have been successfully created in all label matching namespaces.

    kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME                  STATE    AGE\ndocker-pull-secret    Active   3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME                  STATE    AGE\ndocker-pull-secret    Active   2m\n

    TemplateGroupInstance can also target specific tenants or all tenant namespaces under a single yaml definition.

    "},{"location":"usecases/deploying-templates.html#templategroupinstance-for-multiple-tenants","title":"TemplateGroupInstance for multiple Tenants","text":"

    It can be done by using the matchExpressions field, dividing the tenant label into key and values.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: In\n      values:\n        - alpha\n        - beta\n  sync: true\n
    "},{"location":"usecases/deploying-templates.html#templategroupinstance-for-all-tenants","title":"TemplateGroupInstance for all Tenants","text":"

    This can also be done by using the matchExpressions field, using just the tenant label key stakater.com/tenant.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: Exists\n  sync: true\n
    "},{"location":"usecases/deploying-templates.html#deploying-template-to-namespaces-via-tenant","title":"Deploying Template to Namespaces via Tenant","text":"

    Bill is a cluster admin who wants to deploy a docker-pull-secret in Anna's tenant namespaces where certain labels exist.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Once the template has been created, Bill edits Anna's tenant and populates the templateInstances field:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n  quota: small\n  sandboxConfig:\n    enabled: true\n  templateInstances:\n  - spec:\n      template: docker-pull-secret\n    selector:\n      matchLabels:\n        kind: build\n

    Multi Tenant Operator will deploy the TemplateInstances mentioned in the templateInstances field. These TemplateInstances will only be applied in those namespaces which belong to Anna's tenant and have the matching label kind: build.

    So now Anna adds the label kind: build to her existing namespace bluesky-anna-aurora-sandbox, and after adding the label she sees that the secret has been created.

    kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME                  STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"usecases/deploying-templates.html#deploying-template-to-a-namespace-via-templateinstance","title":"Deploying Template to a Namespace via TemplateInstance","text":"

    Anna wants to deploy a docker pull secret in her namespace.

    First Anna asks Bill, the cluster admin, to create a template of the secret for her:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Once the template has been created, Anna creates a TemplateInstance in her namespace referring to the Template she wants to deploy:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: docker-pull-secret-instance\n  namespace: bluesky-anna-aurora-sandbox\nspec:\n  template: docker-pull-secret\n  sync: true\n

    Once this is created, Anna can see that the secret has been successfully applied.

    kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME                  STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"usecases/deploying-templates.html#passing-parameters-to-template-via-templateinstance-templategroupinstance-or-tenant","title":"Passing Parameters to Template via TemplateInstance, TemplateGroupInstance or Tenant","text":"

    Anna wants to deploy a LimitRange resource to certain namespaces.

    First Anna asks Bill, the cluster admin, to create template with parameters for LimitRange for her:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: namespace-parameterized-restrictions\nparameters:\n  # Name of the parameter\n  - name: DEFAULT_CPU_LIMIT\n    # The default value of the parameter\n    value: \"1\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"0.5\"\n    # If a parameter is required the template instance will need to set it\n    # required: true\n    # Make sure only values are entered for this parameter\n    validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n  manifests:\n    - apiVersion: v1\n      kind: LimitRange\n      metadata:\n        name: namespace-limit-range-${namespace}\n      spec:\n        limits:\n          - default:\n              cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n            defaultRequest:\n              cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n            type: Container\n

    Afterwards, Anna creates a TemplateInstance in her namespace referring to the Template she wants to deploy:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: namespace-parameterized-restrictions-instance\n  namespace: bluesky-anna-aurora-sandbox\nspec:\n  template: namespace-parameterized-restrictions\n  sync: true\nparameters:\n  - name: DEFAULT_CPU_LIMIT\n    value: \"1.5\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"1\"\n

    If she wants to distribute the same Template over multiple namespaces, she can use TemplateGroupInstance.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: namespace-parameterized-restrictions-tgi\nspec:\n  template: namespace-parameterized-restrictions\n  sync: true\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: In\n      values:\n        - alpha\n        - beta\nparameters:\n  - name: DEFAULT_CPU_LIMIT\n    value: \"1.5\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"1\"\n

    Or she can use her tenant to cover only the tenant namespaces.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n  quota: small\n  sandboxConfig:\n    enabled: true\n  templateInstances:\n  - spec:\n      template: namespace-parameterized-restrictions\n      sync: true\n    parameters:\n      - name: DEFAULT_CPU_LIMIT\n        value: \"1.5\"\n      - name: DEFAULT_CPU_REQUESTS\n        value: \"1\"\n    selector:\n      matchLabels:\n        kind: build\n
    "},{"location":"usecases/distributing-resources.html","title":"Copying Secrets and Configmaps across Tenant Namespaces via TGI","text":"

    Bill is a cluster admin who wants to map a docker-pull-secret, present in a build namespace, to tenant namespaces where certain labels exist.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  resourceMappings:\n    secrets:\n      - name: docker-pull-secret\n        namespace: build\n

    Once the template has been created, Bill makes a TemplateGroupInstance referring to the Template he wants to deploy with MatchLabels:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchLabels:\n      kind: build\n  sync: true\n

    Afterwards, Bill can see that the secret has been successfully mapped in all matching namespaces.

    kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME             STATE    AGE\ndocker-pull-secret    Active   3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME             STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"usecases/distributing-resources.html#mapping-resources-within-tenant-namespaces-via-ti","title":"Mapping Resources within Tenant Namespaces via TI","text":"

    Anna is a tenant owner who wants to map a docker-pull-secret, present in the bluesky-build namespace, to the bluesky-anna-aurora-sandbox namespace.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  resourceMappings:\n    secrets:\n      - name: docker-pull-secret\n        namespace: bluesky-build\n

    Once the template has been created, Anna creates a TemplateInstance in bluesky-anna-aurora-sandbox namespace, referring to the Template.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: docker-secret-instance\n  namespace: bluesky-anna-aurora-sandbox\nspec:\n  template: docker-pull-secret\n  sync: true\n

    Afterwards, Anna can see that the secret has been successfully mapped into her namespace.

    kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME             STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"usecases/distributing-secrets-using-sealed-secret-template.html","title":"Distributing Secrets Using Sealed Secrets Template","text":"

    Bill is a cluster admin who wants to provide a mechanism for distributing secrets in multiple namespaces. For this, he wants to use Sealed Secrets as the solution, by adding them to an MTO Template CR.

    First, Bill creates a Template in which Sealed Secret is mentioned:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: tenant-sealed-secret\nresources:\n  manifests:\n  - kind: SealedSecret\n    apiVersion: bitnami.com/v1alpha1\n    metadata:\n      name: mysecret\n    spec:\n      encryptedData:\n        .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....\n      template:\n        type: kubernetes.io/dockerconfigjson\n        # this is an example of labels and annotations that will be added to the output secret\n        metadata:\n          labels:\n            \"jenkins.io/credentials-type\": usernamePassword\n          annotations:\n            \"jenkins.io/credentials-description\": credentials from Kubernetes\n

    Once the template has been created, Bill has to edit the Tenant to add a unique label to the namespaces in which the secret has to be deployed. For this, he can use the support for common and specific labels across namespaces.

    Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n\n  # use this if you want to add label to some specific namespaces\n  specificMetadata:\n    - namespaces:\n        - test-namespace\n      labels:\n        distribute-image-pull-secret: true\n\n  # use this if you want to add label to all namespaces under your tenant\n  commonMetadata:\n    labels:\n      distribute-image-pull-secret: true\n

    Bill has added support for a new label distribute-image-pull-secret: true for tenant projects/namespaces; MTO will now add that label depending on the field used.

    Finally, Bill creates a TemplateGroupInstance which will deploy the sealed secrets using the newly created label and template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: tenant-sealed-secret\nspec:\n  template: tenant-sealed-secret\n  selector:\n    matchLabels:\n      distribute-image-pull-secret: true\n  sync: true\n

    MTO will now deploy the sealed secrets mentioned in the Template to the namespaces which have the mentioned label. The remaining work of unsealing the SealedSecret into a regular Secret is done by the Sealed Secrets controller.
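
    Once the controller has unsealed the secret, both objects should be visible in a labelled namespace. A sketch, assuming bluesky-dev carries the distribute-image-pull-secret label; the namespace name is illustrative:

    # illustrative namespace name; mysecret is the SealedSecret defined in the Template above\nkubectl get sealedsecret mysecret -n bluesky-dev\nkubectl get secret mysecret -n bluesky-dev\n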

    "},{"location":"usecases/extend-default-roles.html","title":"Extending the default access level for tenant members","text":"

    Bill, as the cluster admin, wants to extend the default access for tenant members. As an admin of an OpenShift cluster, Bill can extend the admin, edit, and view ClusterRoles using aggregation. Bill will first create a ClusterRole with privileges to the resources he wants to add, and then apply the aggregation label to the newly created ClusterRole to extend the default ClusterRoles provided by OpenShift:

    kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: extend-view-role\n  labels:\n    rbac.authorization.k8s.io/aggregate-to-view: 'true'\nrules:\n  - verbs:\n      - get\n      - list\n      - watch\n    apiGroups:\n      - user.openshift.io\n    resources:\n      - groups\n

    Note: You can learn more about aggregated-cluster-roles here

    "},{"location":"usecases/hibernation.html","title":"Freeing up unused resources with hibernation","text":""},{"location":"usecases/hibernation.html#hibernating-a-tenant","title":"Hibernating a tenant","text":"

    Bill is a cluster administrator who wants to free up unused cluster resources at nighttime, in an effort to reduce costs (when the cluster isn't being used).

    First, Bill creates a tenant with the hibernation schedules mentioned in the spec, or adds the hibernation field to an existing tenant:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\nspec:\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  namespaces:\n    withoutTenantPrefix:\n      - build\n      - stage\n      - dev\n

    The schedules above will put all the Deployments and StatefulSets within the tenant's namespaces to sleep, by reducing their pod count to 0 at 8 PM every weekday. At 8 AM on weekdays, the namespaces will then wake up by restoring their applications' previous pod counts.

    Bill can verify this behaviour by checking the newly created ResourceSupervisor resource at run time:

    oc get ResourceSupervisor -A\nNAME           AGE\nsigma          5m\n

    The ResourceSupervisor will look like this at 'running' time (as per the schedule):

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: example\nspec:\n  argocd:\n    appProjects: []\n    namespace: ''\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - build\n    - stage\n    - dev\nstatus:\n  currentStatus: running\n  nextReconcileTime: '2022-10-12T20:00:00Z'\n

    The ResourceSupervisor will look like this at 'sleeping' time (as per the schedule):

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: example\nspec:\n  argocd:\n    appProjects: []\n    namespace: ''\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - build\n    - stage\n    - dev\nstatus:\n  currentStatus: sleeping\n  nextReconcileTime: '2022-10-13T08:00:00Z'\n  sleepingApplications:\n    - Namespace: build\n      kind: Deployment\n      name: example\n      replicas: 3\n    - Namespace: stage\n      kind: Deployment\n      name: example\n      replicas: 3\n

    Bill wants to prevent the build namespace from going to sleep, so he can add the hibernation.stakater.com/exclude: 'true' annotation to it. The ResourceSupervisor will now look like this after reconciling:

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: example\nspec:\n  argocd:\n    appProjects: []\n    namespace: ''\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - stage\n    - dev\nstatus:\n  currentStatus: sleeping\n  nextReconcileTime: '2022-10-13T08:00:00Z'\n  sleepingApplications:\n    - Namespace: stage\n      kind: Deployment\n      name: example\n      replicas: 3\n
    "},{"location":"usecases/hibernation.html#hibernating-namespaces-andor-argocd-applications-with-resourcesupervisor","title":"Hibernating namespaces and/or ArgoCD Applications with ResourceSupervisor","text":"

    Bill, the cluster administrator, wants to hibernate a collection of namespaces and AppProjects belonging to multiple different tenants. He can do so by creating a ResourceSupervisor manually, specifying in its spec the hibernation schedule along with the namespaces and ArgoCD Applications that need to be hibernated according to that schedule. Bill can also use the same method to hibernate some namespaces and ArgoCD Applications that do not belong to any tenant on his cluster.

    The example given below will hibernate the ArgoCD Applications in the 'test-app-project' AppProject; and it will also hibernate the 'ns2' and 'ns4' namespaces.

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: test-resource-supervisor\nspec:\n  argocd:\n    appProjects:\n      - test-app-project\n    namespace: argocd-ns\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - ns2\n    - ns4\nstatus:\n  currentStatus: sleeping\n  nextReconcileTime: '2022-10-13T08:00:00Z'\n  sleepingApplications:\n    - Namespace: ns2\n      kind: Deployment\n      name: test-deployment\n      replicas: 3\n
    "},{"location":"usecases/integrationconfig.html","title":"Configuring Managed Namespaces and ServiceAccounts in IntegrationConfig","text":"

    Bill is a cluster admin who can use IntegrationConfig to configure how Multi Tenant Operator (MTO) manages the cluster.

    By default, MTO watches all namespaces and will enforce all the governing policies on them. All namespaces managed by MTO require the stakater.com/tenant label. MTO ignores privileged namespaces that are mentioned in the IntegrationConfig, and does not manage them. Therefore, any tenant label on such namespaces will be ignored.

    oc create namespace stakater-test\nError from server (Cannot Create namespace stakater-test without label stakater.com/tenant. User: Bill): admission webhook \"vnamespace.kb.io\" denied the request: Cannot CREATE namespace stakater-test without label stakater.com/tenant. User: Bill\n

    Bill is trying to create a namespace without the stakater.com/tenant label. Creating a namespace without this label is only allowed if the namespace is privileged. Privileged namespaces will be ignored by MTO and do not require the said label. Therefore, Bill will add the required regex in the IntegrationConfig, along with any other namespaces which are privileged and should be ignored by MTO - like default, or namespaces with prefixes like openshift, kube:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedNamespaces:\n      - ^default$\n      - ^openshift*\n      - ^kube*\n      - ^stakater*\n

    After mentioning the required regex (^stakater*) under privilegedNamespaces, Bill can create the namespace without interference.

    oc create namespace stakater-test\nnamespace/stakater-test created\n

    MTO will also disallow all users who are not tenant owners from performing CRUD operations on namespaces. This will also prevent Service Accounts from performing CRUD operations.

    If Bill wants MTO to ignore Service Accounts, then he would simply have to add them in the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedServiceAccounts:\n      - system:serviceaccount:openshift\n      - system:serviceaccount:stakater\n      - system:serviceaccount:kube\n      - system:serviceaccount:redhat\n      - system:serviceaccount:hive\n

    Bill can also use regex patterns to ignore a set of service accounts:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:openshift*\n      - ^system:serviceaccount:stakater*\n
    "},{"location":"usecases/integrationconfig.html#configuring-vault-in-integrationconfig","title":"Configuring Vault in IntegrationConfig","text":"

    Vault is used to secure, store and tightly control access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.

    If Bill (the cluster admin) has Vault configured in his cluster, then he can benefit from MTO's integration with Vault.

    MTO automatically creates Vault secret paths for tenants, where tenant members can securely save their secrets. It also authorizes tenant members to access these secrets via OIDC.

    Bill would first have to integrate Vault with MTO by adding the details in the IntegrationConfig (see the IntegrationConfig documentation for more details).

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  vault:\n    enabled: true\n    endpoint:\n      secretReference:\n        name: vault-root-token\n        namespace: vault\n      url: >-\n        https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n    sso:\n      accessorID: auth_oidc_aa6aa9aa\n      clientName: vault\n

    Bill then creates a tenant for Anna and John:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@acme.org\n  viewers:\n    users:\n    - john@acme.org\n  quota: small\n  sandbox: false\n

    Now Bill goes to Vault and sees that a path for the tenant has been created under the name bluesky/kv, confirming that Tenant members with the Owner or Edit roles now have access to the tenant's Vault path.

    Now, if Anna signs in to Vault via OIDC, she can see her tenant's path and secrets. Whereas if John signs in to Vault via OIDC, he can't see his tenant's path or secrets, as he doesn't have the access required to view them.
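
    For reference, an OIDC login from the Vault CLI looks roughly like the sketch below; it assumes a default OIDC role is configured, and the actual authentication is completed through the browser-based RHSSO login:

    # minimal sketch; assumes a default OIDC role is configured in Vault\nvault login -method=oidc\n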

    "},{"location":"usecases/integrationconfig.html#configuring-rhsso-red-hat-single-sign-on-in-integrationconfig","title":"Configuring RHSSO (Red Hat Single Sign-On) in IntegrationConfig","text":"

    Red Hat Single Sign-On (RHSSO) is based on the Keycloak project and enables you to secure your web applications by providing Web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0.

    If Bill, the cluster admin, has RHSSO configured in his cluster, then he can benefit from MTO's integration with RHSSO and Vault.

    MTO automatically allows tenant members to access Vault via OIDC (RHSSO authentication and authorization), giving them access to the secret paths of their tenants where they can securely save their secrets.

    Bill would first have to integrate RHSSO with MTO by adding the details in IntegrationConfig. Visit here for more details.

    rhsso:\n  enabled: true\n  realm: customer\n  endpoint:\n    secretReference:\n      name: auth-secrets\n      namespace: openshift-auth\n    url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n
    "},{"location":"usecases/mattermost.html","title":"Creating Mattermost Teams for your tenant","text":""},{"location":"usecases/mattermost.html#requirements","title":"Requirements","text":"

    MTO-Mattermost-Integration-Operator

    Please contact Stakater to install the Mattermost integration operator before following the steps mentioned below.

    "},{"location":"usecases/mattermost.html#steps-to-enable-integration","title":"Steps to enable integration","text":"

    Bill wants some of the tenants to also have their own Mattermost Teams. To make sure this happens correctly, Bill will first add the stakater.com/mattermost: true label to the tenants. The label will enable the mto-mattermost-integration-operator to create and manage Mattermost Teams based on Tenants.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\n  labels:\n    stakater.com/mattermost: 'true'\nspec:\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  sandbox: false\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n

    Now users can log in to Mattermost to see their Team and the relevant channels associated with it.

    The name of the Team is similar to the Tenant name. Notification channels are pre-configured for every team, and can be modified.

    "},{"location":"usecases/namespace.html","title":"Creating Namespace","text":"

    Anna as the tenant owner can create new namespaces for her tenant.

    apiVersion: v1\nkind: Namespace\nmetadata:\n  name: bluesky-production\n  labels:\n    stakater.com/tenant: bluesky\n

    \u26a0\ufe0f Anna is required to add the tenant label stakater.com/tenant: bluesky, which contains the name of her tenant bluesky, when creating the namespace. If this label is not added, or if Anna does not belong to the bluesky tenant, then Multi Tenant Operator will not allow the creation of that namespace.

    When Anna creates the namespace, MTO assigns Anna and other tenant members the roles based on their user types, such as a tenant owner getting the OpenShift admin role for that namespace.

    As a tenant owner, Anna is able to create namespaces.

    If you have enabled ArgoCD Multitenancy, our preferred solution is to create tenant namespaces by using the Tenant spec, to avoid syncing issues in the ArgoCD console during namespace creation.

    "},{"location":"usecases/namespace.html#add-existing-namespaces-to-tenant-via-gitops","title":"Add Existing Namespaces to Tenant via GitOps","text":"

    Using GitOps as your preferred development workflow, you can add existing namespaces for your tenants by including the tenant label.

    To add an existing namespace to your tenant via GitOps:

    1. First, migrate your namespace resource to your \u201cwatched\u201d git repository
    2. Edit your namespace yaml to include the tenant label
    3. Tenant label follows the naming convention stakater.com/tenant: <TENANT_NAME>
    4. Sync your GitOps repository with your cluster and allow changes to be propagated
    5. Verify that your Tenant users now have access to the namespace

    For example, if Anna, a tenant owner, wants to add the namespace bluesky-dev to her tenant via GitOps, she first migrates her namespace manifest to a \u201cwatched repository\u201d:

    apiVersion: v1\nkind: Namespace\nmetadata:\n  name: bluesky-dev\n

    She can then add the tenant label

     ...\n  labels:\n    stakater.com/tenant: bluesky\n

    Now all the users of the bluesky tenant have access to the existing namespace.
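
    Putting the two snippets above together, the complete namespace manifest in the GitOps repository would look like this:

    apiVersion: v1\nkind: Namespace\nmetadata:\n  name: bluesky-dev\n  labels:\n    stakater.com/tenant: bluesky\n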

    Additionally, to remove namespaces from a tenant, simply remove the tenant label from the namespace resource and sync your changes to your cluster.

    "},{"location":"usecases/namespace.html#remove-namespaces-from-your-cluster-via-gitops","title":"Remove Namespaces from your Cluster via GitOps","text":"

    GitOps is a quick and efficient way to automate the management of your K8s resources.

    To remove namespaces from your cluster via GitOps:

    "},{"location":"usecases/private-sandboxes.html","title":"Create Private Sandboxes","text":"

    Bill assigned the ownership of bluesky to Anna and Anthony. Now if the users want sandboxes to be made for them, they'll have to ask Bill to enable the sandbox functionality. The users also want to make sure that the sandboxes created for them are only visible to the user they belong to. To enable that, Bill will just set enabled: true and private: true within the sandboxConfig field:

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandboxConfig:\n    enabled: true\n    private: true\nEOF\n

    With the above configuration, Anna and Anthony will now have new sandboxes created:

    kubectl get namespaces\nNAME                             STATUS   AGE\nbluesky-anna-aurora-sandbox      Active   5d5h\nbluesky-anthony-aurora-sandbox   Active   5d5h\nbluesky-john-aurora-sandbox      Active   5d5h\n

    However, from the perspective of Anna, only her own sandbox will be visible:

    kubectl get namespaces\nNAME                             STATUS   AGE\nbluesky-anna-aurora-sandbox      Active   5d5h\n
    "},{"location":"usecases/quota.html","title":"Enforcing Quotas","text":"

    Using Multi Tenant Operator, the cluster-admin can set and enforce cluster resource quotas and limit ranges for tenants.

    "},{"location":"usecases/quota.html#assigning-resource-quotas","title":"Assigning Resource Quotas","text":"

    Bill is a cluster admin who will first create a Quota CR, where he sets the maximum resource limits that Anna's tenant will have. Here, limitrange is an optional field; the cluster admin can skip it if not needed.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: small\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '5'\n      requests.memory: '5Gi'\n      configmaps: \"10\"\n      secrets: \"10\"\n      services: \"10\"\n      services.loadbalancers: \"2\"\n  limitrange:\n    limits:\n      - type: \"Pod\"\n        max:\n          cpu: \"2\"\n          memory: \"1Gi\"\n        min:\n          cpu: \"200m\"\n          memory: \"100Mi\"\nEOF\n

    For more details please refer to Quotas.

    kubectl get quota small\nNAME       STATE    AGE\nsmall      Active   3m\n

    Bill then proceeds to create a tenant for Anna, while also linking the newly created Quota.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@stakater.com\n  quota: small\n  sandbox: false\nEOF\n

    Now that the quota is linked with Anna's tenant, Anna can create any resource within the limits of the resource quota and limit range.

    kubectl -n bluesky-production create deployment nginx --image nginx:latest --replicas 4\n

    Once the resource quota assigned to the tenant has been reached, Anna cannot create further resources.

    kubectl create pods bluesky-training\nError from server (Cannot exceed Namespace quota: please, reach out to the system administrators)\n
    "},{"location":"usecases/secret-distribution.html","title":"Propagate Secrets from Parent to Descendant namespaces","text":"

    Secrets like registry credentials often need to exist in multiple Namespaces, so that Pods within different namespaces can have access to those credentials in the form of secrets.

    Manually creating secrets within different namespaces could lead to challenges, such as:

    With the help of Multi-Tenant Operator's Template feature we can make this secret distribution experience easy.

    For example, to copy a Secret called registry, which exists in the example namespace, to new Namespaces whenever they are created, we will first create a Template which references the registry secret.

    It will also push updates to the copied Secrets and keep the propagated secrets always in sync with the parent namespace.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: registry-secret\nresources:\n  resourceMappings:\n    secrets:\n      - name: registry\n        namespace: example\n

    Now, using this Template, we can propagate the registry secret to different namespaces that share a common set of labels.

    For example, we will just add one label, kind: registry, and all namespaces with this label will get this secret.

    To propagate it to different namespaces dynamically, we will have to create another resource called TemplateGroupInstance. The TemplateGroupInstance will have the Template and matchLabels mapping, as shown below:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: registry-secret-group-instance\nspec:\n  template: registry-secret\n  selector:\n    matchLabels:\n      kind: registry\n  sync: true\n

    After reconciliation, you will be able to see those secrets in the namespaces that have the mentioned label.

    MTO will keep injecting this secret into new namespaces created with that label.
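
    For example, a namespace that should receive the secret simply carries the matching label (example-ns-1 here mirrors the verification output below; in practice the namespace would also carry any tenant label that MTO requires for namespace creation):

    apiVersion: v1\nkind: Namespace\nmetadata:\n  name: example-ns-1\n  labels:\n    kind: registry\n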

    kubectl get secret registry -n example-ns-1\nNAME       TYPE                      DATA   AGE\nregistry   kubernetes.io/dockercfg   1      3m\n\nkubectl get secret registry -n example-ns-2\nNAME       TYPE                      DATA   AGE\nregistry   kubernetes.io/dockercfg   1      3m\n
    "},{"location":"usecases/template.html","title":"Creating Templates","text":"

    Anna wants to create a Template that she can use to initialize or share common resources across namespaces (e.g. PullSecrets).

    Anna can either create a template using the manifests field, covering Kubernetes or custom resources:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Or by using Helm Charts

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: redis\nresources:\n  helm:\n    releaseName: redis\n    chart:\n      repository:\n        name: redis\n        repoUrl: https://charts.bitnami.com/bitnami\n    values: |\n      redisPort: 6379\n

    She can also use the resourceMappings field to copy over secrets and configmaps from one namespace to others:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: resource-mapping\nresources:\n  resourceMappings:\n    secrets:\n      - name: docker-secret\n        namespace: bluesky-build\n    configMaps:\n      - name: tronador-configMap\n        namespace: stakater-tronador\n

    Note: Resource mapping can be used via TGI to map resources within tenant namespaces or to some other tenant's namespace. If used with TI, the resources will only be mapped if the namespaces belong to the same tenant.
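
    As a sketch of the TemplateInstance variant, a TI created in a namespace of the same tenant could instantiate the resource-mapping Template shown above; the namespace bluesky-dev is illustrative:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: resource-mapping\n  namespace: bluesky-dev\nspec:\n  template: resource-mapping\n  sync: true\n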

    "},{"location":"usecases/template.html#using-templates-with-default-parameters","title":"Using Templates with Default Parameters","text":"
    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: namespace-parameterized-restrictions\nparameters:\n  # Name of the parameter\n  - name: DEFAULT_CPU_LIMIT\n    # The default value of the parameter\n    value: \"1\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"0.5\"\n    # If a parameter is required the template instance will need to set it\n    # required: true\n    # Make sure only values are entered for this parameter\n    validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n  manifests:\n    - apiVersion: v1\n      kind: LimitRange\n      metadata:\n        name: namespace-limit-range-${namespace}\n      spec:\n        limits:\n          - default:\n              cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n            defaultRequest:\n              cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n            type: Container\n

    Parameters can be used with both manifests and Helm charts.
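
    As a sketch, a TemplateInstance could override the default parameter values of this Template; the namespace bluesky-dev and the overridden values are illustrative:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: namespace-parameterized-restrictions\n  namespace: bluesky-dev\nspec:\n  template: namespace-parameterized-restrictions\n  sync: true\nparameters:\n  - name: DEFAULT_CPU_LIMIT\n    value: \"2\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"1\"\n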

    "},{"location":"usecases/tenant.html","title":"Creating Tenant","text":"

    Bill is a cluster admin who receives a new request from Aurora Solutions CTO asking for a new tenant for Anna's team.

    Bill creates a new tenant called bluesky in the cluster:

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandbox: false\nEOF\n

    Bill checks if the new tenant is created:

    kubectl get tenants.tenantoperator.stakater.com bluesky\nNAME       STATE    AGE\nbluesky    Active   3m\n

    Anna can now log in to the cluster and check if she can create namespaces:

    kubectl auth can-i create namespaces\nyes\n

    However, cluster resources are not accessible to Anna

    kubectl auth can-i get namespaces\nno\n\nkubectl auth can-i get persistentvolumes\nno\n

    Including the Tenant resource

    kubectl auth can-i get tenants.tenantoperator.stakater.com\nno\n
    "},{"location":"usecases/tenant.html#assign-multiple-users-as-tenant-owner","title":"Assign multiple users as tenant owner","text":"

    In the example above, Bill assigned the ownership of bluesky to Anna. If another user, e.g. Anthony, needs to administer bluesky, then Bill can assign the ownership of the tenant to that user as well:

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandbox: false\nEOF\n

    With the configuration above, Anthony can log in to the cluster and execute:

    kubectl auth can-i create namespaces\nyes\n
    "},{"location":"usecases/tenant.html#assigning-users-sandbox-namespace","title":"Assigning Users Sandbox Namespace","text":"

    Bill assigned the ownership of bluesky to Anna and Anthony. Now if the users want sandboxes to be made for them, they'll have to ask Bill to enable sandbox functionality.

    To enable that, Bill will just set enabled: true within the sandboxConfig field

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandboxConfig:\n    enabled: true\nEOF\n

    With the above configuration, Anna and Anthony will now have new sandboxes created:

    kubectl get namespaces\nNAME                             STATUS   AGE\nbluesky-anna-aurora-sandbox      Active   5d5h\nbluesky-anthony-aurora-sandbox   Active   5d5h\nbluesky-john-aurora-sandbox      Active   5d5h\n

    If Bill wants to make sure that only the sandbox owner can view his sandbox namespace, he can achieve this by setting private: true within the sandboxConfig field.
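
    A minimal sketch of the relevant portion of the Tenant spec with both options enabled:

    sandboxConfig:\n  enabled: true\n  private: true\n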

    "},{"location":"usecases/tenant.html#creating-namespaces-via-tenant-custom-resource","title":"Creating Namespaces via Tenant Custom Resource","text":"

    Bill now wants to create namespaces for the dev, build and production environments for the tenant members. To create those namespaces, Bill will just add those names within the namespaces field in the tenant CR. If Bill wants to append the tenant name as a prefix to the namespace name, he can use the namespaces.withTenantPrefix field. Otherwise, he can use namespaces.withoutTenantPrefix for namespaces that do not need the tenant name as a prefix.

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n    withoutTenantPrefix:\n      - prod\nEOF\n

    With the above configuration, tenant members will now see that new namespaces have been created:

    kubectl get namespaces\nNAME             STATUS   AGE\nbluesky-dev      Active   5d5h\nbluesky-build    Active   5d5h\nprod             Active   5d5h\n
    "},{"location":"usecases/tenant.html#distributing-common-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing common labels and annotations to tenant namespaces via Tenant Custom Resource","text":"

    Bill now wants to add labels/annotations to all the namespaces of a tenant. To create those labels/annotations, Bill will just add them to the commonMetadata.labels/commonMetadata.annotations fields in the tenant CR.

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n  commonMetadata:\n    labels:\n      app.kubernetes.io/managed-by: tenant-operator\n      app.kubernetes.io/part-of: tenant-alpha\n    annotations:\n      openshift.io/node-selector: node-role.kubernetes.io/infra=\nEOF\n

    With the above configuration all tenant namespaces will now contain the mentioned labels and annotations.

    "},{"location":"usecases/tenant.html#distributing-specific-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing specific labels and annotations to tenant namespaces via Tenant Custom Resource","text":"

    Bill now wants to add labels/annotations to specific namespaces of a tenant. To create those labels/annotations, Bill will just add them to the specificMetadata.labels/specificMetadata.annotations fields, and list the target namespaces in the specificMetadata.namespaces field in the tenant CR.

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandboxConfig:\n    enabled: true\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n  specificMetadata:\n    - namespaces:\n        - bluesky-anna-aurora-sandbox\n      labels:\n        app.kubernetes.io/is-sandbox: true\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\nEOF\n

    With the above configuration, the namespaces listed under specificMetadata.namespaces (here, bluesky-anna-aurora-sandbox) will now contain the mentioned labels and annotations.

    "},{"location":"usecases/tenant.html#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted","title":"Retaining tenant namespaces and AppProject when a tenant is being deleted","text":"

    Bill now wants to delete tenant bluesky while retaining all namespaces and the AppProject of the tenant. To retain them, Bill will set spec.onDelete.cleanNamespaces and spec.onDelete.cleanAppProject to false.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  quota: small\n  sandboxConfig:\n    enabled: true\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n  onDelete:\n    cleanNamespaces: false\n    cleanAppProject: false\n

    With the above configuration, the tenant namespaces and AppProject will not be deleted when tenant bluesky is deleted. By default, the value of spec.onDelete.cleanNamespaces is also false and spec.onDelete.cleanAppProject is true.

    "},{"location":"usecases/volume-limits.html","title":"Limiting PersistentVolume for Tenant","text":"

    Bill, as a cluster admin, wants to restrict the amount of storage a Tenant can use. For that, he'll add the requests.storage field to quota.spec.resourcequota.hard. If Bill wants to restrict tenant bluesky to use only 50Gi of storage, he'll first create a quota with the requests.storage field set to 50Gi.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: medium\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '5'\n      requests.memory: '10Gi'\n      requests.storage: '50Gi'\nEOF\n

    Once the quota is created, Bill will create the tenant and set the quota field to the one he created.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  quota: medium\n  sandbox: true\nEOF\n

    Now, the combined storage used by all tenant namespaces will not exceed 50Gi.

    "},{"location":"usecases/volume-limits.html#adding-storageclass-restrictions-for-tenant","title":"Adding StorageClass Restrictions for Tenant","text":"

    Now Bill, as a cluster admin, wants to make sure that no Tenant can provision more than a fixed amount of storage from a StorageClass. Bill can restrict that using the <storage-class-name>.storageclass.storage.k8s.io/requests.storage field in quota.spec.resourcequota.hard. If Bill wants to restrict tenant sigma to use only 20Gi of storage from the storage class stakater, he'll first create a StorageClass stakater and then create the relevant Quota with the stakater.storageclass.storage.k8s.io/requests.storage field set to 20Gi.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: small\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '2'\n      requests.memory: '4Gi'\n      stakater.storageclass.storage.k8s.io/requests.storage: '20Gi'\nEOF\n

    Once the quota is created, Bill will create the tenant and set the quota field to the one he created.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\nspec:\n  owners:\n    users:\n    - dave@aurora.org\n  quota: small\n  sandbox: true\nEOF\n

    Now, the combined storage provisioned from StorageClass stakater used by all tenant namespaces will not exceed 20Gi.

    The 20Gi limit will only be applied to StorageClass stakater. If a tenant member creates a PVC with some other StorageClass, he will not be restricted.
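
    As an illustration (the PVC name and requested size are hypothetical), a claim like the following counts towards the 20Gi limit only because it explicitly requests the stakater StorageClass:

    apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: sigma-data\nspec:\n  storageClassName: stakater\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 5Gi\n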

    Tip

    More details about Resource Quota can be found here

    "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"index.html","title":"Introduction","text":"

    Kubernetes is designed to support a single-tenant platform; OpenShift brings some improvements with its \"Secure by default\" concepts, but it is still very complex to design and orchestrate all the moving parts involved in building a secure multi-tenant platform, which makes it difficult for cluster admins to host multi-tenancy in a single OpenShift cluster. If multi-tenancy is achieved by sharing a cluster, it can have many advantages, e.g. efficient resource utilization, less configuration effort and easier sharing of cluster-internal resources among different tenants. OpenShift and all managed applications provide enough primitive resources to achieve multi-tenancy, but it requires professional skills and deep knowledge of OpenShift.

    This is where Multi Tenant Operator (MTO) comes in and provides easy to manage/configure multi-tenancy. MTO provides wrappers around OpenShift resources to provide a higher level of abstraction to users. With MTO admins can configure Network and Security Policies, Resource Quotas, Limit Ranges, RBAC for every tenant, which are automatically inherited by all the namespaces and users in the tenant. Depending on the user's role, they are free to operate within their tenants in complete autonomy. MTO supports initializing new tenants using GitOps management pattern. Changes can be managed via PRs just like a typical GitOps workflow, so tenants can request changes, add new users, or remove users.

    The idea of MTO is to use namespaces as independent sandboxes, where tenant applications can run independently of each other. Cluster admins shall configure MTO's custom resources, which then become a self-service system for tenants. This minimizes the efforts of the cluster admins.

    MTO enables cluster admins to host multiple tenants in a single OpenShift Cluster, i.e.:

    MTO is also OpenShift certified

    "},{"location":"index.html#features","title":"Features","text":"

    The major features of Multi Tenant Operator (MTO) are described below.

    "},{"location":"index.html#kubernetes-multitenancy","title":"Kubernetes Multitenancy","text":"

    RBAC is one of the most complicated and error-prone parts of Kubernetes. With Multi Tenant Operator, you can rest assured that RBAC is configured with the \"least privilege\" mindset and all rules are kept up-to-date with zero manual effort.

    Multi Tenant Operator binds existing ClusterRoles to the Tenant's Namespaces used for managing access to the Namespaces and the resources they contain. You can also modify the default roles or create new roles to have full control and customize access control for your users and teams.

    Multi Tenant Operator is also able to leverage existing OpenShift groups or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system.

    "},{"location":"index.html#hashicorp-vault-multitenancy","title":"HashiCorp Vault Multitenancy","text":"

    Multi Tenant Operator extends the tenants permission model to HashiCorp Vault where it can create Vault paths and greatly ease the overhead of managing RBAC in Vault. Tenant users can manage their own secrets without the concern of someone else having access to their Vault paths.

    More details on Vault Multitenancy

    "},{"location":"index.html#argocd-multitenancy","title":"ArgoCD Multitenancy","text":"

    Multi Tenant Operator not only provides strong multi-tenancy for the OpenShift internals but also extends the tenants' permission model to ArgoCD, where it can provision AppProjects and Allowed Repositories for your tenants, greatly easing the overhead of managing RBAC in ArgoCD.

    More details on ArgoCD Multitenancy

    "},{"location":"index.html#resource-management","title":"Resource Management","text":"

    Multi Tenant Operator provides a mechanism for defining Resource Quotas at the tenant scope, meaning all namespaces belonging to a particular tenant share the defined quota, which is why you are able to safely enable dev teams to self serve their namespaces whilst being confident that they can only use the resources allocated based on budget and business needs.

    More details on Quota

    "},{"location":"index.html#templates-and-template-distribution","title":"Templates and Template distribution","text":"

    Multi Tenant Operator allows admins/users to define templates for namespaces, so that others can instantiate these templates to provision namespaces with batteries loaded. A template could pre-populate a namespace for certain use cases or with basic tooling required. Templates allow you to define Kubernetes manifests, Helm chart and more to be applied when the template is used to create a namespace.

    It also allows the parameterizing of these templates for flexibility and ease of use. It also provides the option to enforce the presence of templates in one tenant's or all the tenants' namespaces for configuring secure defaults.

    Common use cases for namespace templates may be:

    More details on Distributing Template Resources

    "},{"location":"index.html#mto-console","title":"MTO Console","text":"

    Multi Tenant Operator Console is a comprehensive user interface designed for both administrators and tenant users to manage multi-tenant environments. The MTO Console simplifies the complexity involved in handling various aspects of tenants and their related resources. It serves as a centralized monitoring hub, offering insights into the current state of tenants, namespaces, templates and quotas. It is designed to provide a quick summary/snapshot of MTO's status and facilitates easier interaction with various resources such as tenants, namespaces, templates, and quotas.

    More details on Console

    "},{"location":"index.html#showback","title":"Showback","text":"

    The showback functionality in Multi Tenant Operator (MTO) Console is a significant feature designed to enhance the management of resources and costs in multi-tenant Kubernetes environments. This feature focuses on accurately tracking the usage of resources by each tenant, and/or namespace, enabling organizations to monitor and optimize their expenditures. Furthermore, this functionality supports financial planning and budgeting by offering a clear view of operational costs associated with each tenant. This can be particularly beneficial for organizations that chargeback internal departments or external clients based on resource usage, ensuring that billing is fair and reflective of actual consumption.

    More details on Showback

    "},{"location":"index.html#hibernation","title":"Hibernation","text":"

    Multi Tenant Operator can downscale Deployments and StatefulSets in a tenant's Namespace according to a defined sleep schedule. The Deployments and StatefulSets are brought back to their required replicas according to the provided wake schedule.
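
    A minimal sketch of how a sleep/wake schedule is declared on a Tenant, based on the hibernation fields shown in the Tenant custom resource; the cron expressions are illustrative:

    hibernation:\n  sleepSchedule: 23 * * * *\n  wakeSchedule: 26 * * * *\n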

    More details on Hibernation

    "},{"location":"index.html#mattermost-multitenancy","title":"Mattermost Multitenancy","text":"

    Multi Tenant Operator can manage Mattermost to create Teams for tenant users. All tenant users get a unique team and a list of predefined channels gets created. When a user is removed from the tenant, the user is also removed from the Mattermost team corresponding to the tenant.

    More details on Mattermost

    "},{"location":"index.html#remote-development-namespaces","title":"Remote Development Namespaces","text":"

    Multi Tenant Operator can be configured to automatically provision a namespace in the cluster for every member of a specific tenant. These namespaces are preloaded with any selected templates and consume the same pool of resources from the tenant's quota, creating safe remote dev namespaces that teams can use as scratch space for rapid prototyping and development. So, every developer gets a Kubernetes-based cloud development environment that feels like working on localhost.

    More details on Sandboxes

    "},{"location":"index.html#cross-namespace-resource-distribution","title":"Cross Namespace Resource Distribution","text":"

    Multi Tenant Operator supports cloning of secrets and configmaps from one namespace to another namespace based on label selectors. It uses templates to enable users to provide a reference to the secrets and configmaps. It uses a TemplateGroupInstance to distribute those secrets and configmaps to matching namespaces, even if the namespaces belong to different tenants. If a TemplateInstance is used instead, the resources will only be mapped if the namespaces belong to the same tenant.

    More details on Distributing Secrets and ConfigMaps

    "},{"location":"index.html#self-service","title":"Self-Service","text":"

    With Multi Tenant Operator, you can empower your users to safely provision namespaces for themselves and their teams (typically mapped to SSO groups). Team-owned namespaces and the resources inside them count towards the team's quotas rather than the user's individual limits and are automatically shared with all team members according to the access rules you configure in Multi Tenant Operator.

    Also, by leveraging Multi Tenant Operator's templating mechanism, namespaces can be provisioned and automatically pre-populated with any kind of resource or multiple resources such as network policies, docker pull secrets or even Helm charts.

    "},{"location":"index.html#everything-as-codegitops-ready","title":"Everything as Code/GitOps Ready","text":"

    Multi Tenant Operator is designed and built to be 100% OpenShift-native and to be configured and managed in the same familiar way as native OpenShift resources. Since it is fully configurable using Custom Resources, it is perfect for modern shops that are dedicated to GitOps.

    "},{"location":"index.html#preventing-clusters-sprawl","title":"Preventing Clusters Sprawl","text":"

    As companies look to further harness the power of cloud-native, they are adopting container technologies at rapid speed, increasing the number of clusters and workloads. As the number of Kubernetes clusters grows, so does the work for the Ops team. When it comes to patching security issues or upgrading clusters, teams are doing five times the amount of work.

    With Multi Tenant Operator, a single cluster can be shared by multiple teams, groups of users, or departments, saving operational and management effort. This prevents Kubernetes cluster sprawl.

    "},{"location":"index.html#native-experience","title":"Native Experience","text":"

    Multi Tenant Operator provides multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries.

    "},{"location":"argocd-multitenancy.html","title":"ArgoCD Multi-tenancy","text":"

    ArgoCD is a declarative GitOps tool built to deploy applications to Kubernetes. While the continuous delivery (CD) space is seen by some as crowded these days, ArgoCD does bring some interesting capabilities to the table. Unlike other tools, ArgoCD is lightweight and easy to configure.

    "},{"location":"argocd-multitenancy.html#why-argocd","title":"Why ArgoCD?","text":"

    Application definitions, configurations, and environments should be declarative and version controlled. Application deployment and lifecycle management should be automated, auditable, and easy to understand.

    "},{"location":"argocd-multitenancy.html#argocd-integration-in-multi-tenant-operator","title":"ArgoCD integration in Multi Tenant Operator","text":"

    With Multi Tenant Operator (MTO), cluster admins can configure multi tenancy in their cluster. Now with ArgoCD integration, multi tenancy can be configured in ArgoCD applications and AppProjects.

    MTO (if configured to) will create AppProjects for each tenant. The AppProject will allow tenants to create ArgoCD Applications that can be synced to namespaces owned by those tenants. Cluster admins will also be able to blacklist certain namespaced resources if they want, and allow certain cluster-scoped resources as well (see the NamespaceResourceBlacklist and ClusterResourceWhitelist sections in the Integration Config docs and Tenant Custom Resource docs).

    Note that ArgoCD integration in MTO is completely optional.

    "},{"location":"argocd-multitenancy.html#default-argocd-configuration","title":"Default ArgoCD configuration","text":"

    We have set a default ArgoCD configuration in Multi Tenant Operator that fulfils the following use cases:

    Detailed use cases showing how to create AppProjects are mentioned in use cases for ArgoCD.

    "},{"location":"changelog.html","title":"Changelog","text":""},{"location":"changelog.html#v010x","title":"v0.10.x","text":""},{"location":"changelog.html#v0100","title":"v0.10.0","text":""},{"location":"changelog.html#feature","title":"Feature","text":""},{"location":"changelog.html#fix","title":"Fix","text":""},{"location":"changelog.html#enhanced","title":"Enhanced","text":""},{"location":"changelog.html#v09x","title":"v0.9.x","text":""},{"location":"changelog.html#v094","title":"v0.9.4","text":"

    More information about TemplateGroupInstance's sync at Sync Resources Deployed by TemplateGroupInstance

    "},{"location":"changelog.html#v092","title":"v0.9.2","text":""},{"location":"changelog.html#v091","title":"v0.9.1","text":""},{"location":"changelog.html#v090","title":"v0.9.0","text":""},{"location":"changelog.html#enabling-console","title":"Enabling console","text":""},{"location":"changelog.html#v08x","title":"v0.8.x","text":""},{"location":"changelog.html#v083","title":"v0.8.3","text":""},{"location":"changelog.html#v081","title":"v0.8.1","text":""},{"location":"changelog.html#v080","title":"v0.8.0","text":""},{"location":"changelog.html#v07x","title":"v0.7.x","text":""},{"location":"changelog.html#v074","title":"v0.7.4","text":""},{"location":"changelog.html#v073","title":"v0.7.3","text":""},{"location":"changelog.html#v072","title":"v0.7.2","text":""},{"location":"changelog.html#v071","title":"v0.7.1","text":""},{"location":"changelog.html#v070","title":"v0.7.0","text":""},{"location":"changelog.html#v06x","title":"v0.6.x","text":""},{"location":"changelog.html#v061","title":"v0.6.1","text":""},{"location":"changelog.html#v060","title":"v0.6.0","text":""},{"location":"changelog.html#v05x","title":"v0.5.x","text":""},{"location":"changelog.html#v054","title":"v0.5.4","text":""},{"location":"changelog.html#v053","title":"v0.5.3","text":""},{"location":"changelog.html#v052","title":"v0.5.2","text":""},{"location":"changelog.html#v051","title":"v0.5.1","text":""},{"location":"changelog.html#v050","title":"v0.5.0","text":""},{"location":"changelog.html#v04x","title":"v0.4.x","text":""},{"location":"changelog.html#v047","title":"v0.4.7","text":""},{"location":"changelog.html#v046","title":"v0.4.6","text":""},{"location":"changelog.html#v045","title":"v0.4.5","text":""},{"location":"changelog.html#v044","title":"v0.4.4","text":""},{"location":"changelog.html#v043","title":"v0.4.3","text":""},{"location":"changelog.html#v042","title":"v0.4.2","text":""},{"location":"changelog.html#v041","title":"v0.4.1","text":""},{"location":"changelog.html#v040","title":"v0.4.0","text":""},{"location":"changelog.html#v03x","title":"v0.3.x","text":""},{"location":"changelog.html#v0333","title":"v0.3.33","text":""},{"location":"changelog.html#v0333_1","title":"v0.3.33","text":""},{"location":"changelog.html#v0333_2","title":"v0.3.33","text":""},{"location":"changelog.html#v0330","title":"v0.3.30","text":""},{"location":"changelog.html#v0329","title":"v0.3.29","text":""},{"location":"changelog.html#v0328","title":"v0.3.28","text":""},{"location":"changelog.html#v0327","title":"v0.3.27","text":""},{"location":"changelog.html#v0326","title":"v0.3.26","text":""},{"location":"changelog.html#v0325","title":"v0.3.25","text":""},{"location":"changelog.html#migrating-from-pervious-version","title":"Migrating from pervious version","text":""},{"location":"changelog.html#v0324","title":"v0.3.24","text":""},{"location":"changelog.html#v0323","title":"v0.3.23","text":""},{"location":"changelog.html#v0322","title":"v0.3.22","text":"

    \u26a0\ufe0f Known Issues

    "},{"location":"changelog.html#v0321","title":"v0.3.21","text":""},{"location":"changelog.html#v0320","title":"v0.3.20","text":""},{"location":"changelog.html#v0319","title":"v0.3.19","text":"

    \u26a0\ufe0f ApiVersion v1alpha1 of Tenant and Quota custom resources has been deprecated and is scheduled to be removed in the future. The following links contain the updated structure of both resources

    "},{"location":"changelog.html#v0318","title":"v0.3.18","text":""},{"location":"changelog.html#v0317","title":"v0.3.17","text":""},{"location":"changelog.html#v0316","title":"v0.3.16","text":""},{"location":"changelog.html#v0315","title":"v0.3.15","text":""},{"location":"changelog.html#v0314","title":"v0.3.14","text":""},{"location":"changelog.html#v0313","title":"v0.3.13","text":""},{"location":"changelog.html#v0312","title":"v0.3.12","text":""},{"location":"changelog.html#v0311","title":"v0.3.11","text":""},{"location":"changelog.html#v0310","title":"v0.3.10","text":""},{"location":"changelog.html#v039","title":"v0.3.9","text":""},{"location":"changelog.html#v038","title":"v0.3.8","text":""},{"location":"changelog.html#v037","title":"v0.3.7","text":""},{"location":"changelog.html#v036","title":"v0.3.6","text":""},{"location":"changelog.html#v035","title":"v0.3.5","text":""},{"location":"changelog.html#v034","title":"v0.3.4","text":""},{"location":"changelog.html#v033","title":"v0.3.3","text":""},{"location":"changelog.html#v032","title":"v0.3.2","text":""},{"location":"changelog.html#v031","title":"v0.3.1","text":""},{"location":"changelog.html#v030","title":"v0.3.0","text":""},{"location":"changelog.html#v02x","title":"v0.2.x","text":""},{"location":"changelog.html#v0233","title":"v0.2.33","text":""},{"location":"changelog.html#v0232","title":"v0.2.32","text":""},{"location":"changelog.html#v0231","title":"v0.2.31","text":""},{"location":"customresources.html","title":"Custom Resources","text":"

    Below is a detailed explanation of the Custom Resources of MTO.

    "},{"location":"customresources.html#1-quota","title":"1. Quota","text":"

    Cluster scoped resource:

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: medium\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '5'\n      limits.cpu: '10'\n      requests.memory: '5Gi'\n      limits.memory: '10Gi'\n      configmaps: \"10\"\n      persistentvolumeclaims: \"4\"\n      replicationcontrollers: \"20\"\n      secrets: \"10\"\n      services: \"10\"\n      services.loadbalancers: \"2\"\n  limitrange:\n    limits:\n      - type: \"Pod\"\n        max:\n          cpu: \"2\"\n          memory: \"1Gi\"\n        min:\n          cpu: \"200m\"\n          memory: \"100Mi\"\n      - type: \"Container\"\n        max:\n          cpu: \"2\"\n          memory: \"1Gi\"\n        min:\n          cpu: \"100m\"\n          memory: \"50Mi\"\n        default:\n          cpu: \"300m\"\n          memory: \"200Mi\"\n        defaultRequest:\n          cpu: \"200m\"\n          memory: \"100Mi\"\n        maxLimitRequestRatio:\n          cpu: \"10\"\n

    When several tenants share a single cluster with a fixed amount of resources, there is a concern that one tenant could use more than its fair share. Quota is a wrapper around OpenShift ClusterResourceQuota and LimitRange which allows administrators to limit resource consumption per Tenant. For more details, see Quota.Spec and LimitRange.Spec.

    "},{"location":"customresources.html#2-tenant","title":"2. Tenant","text":"

    Cluster scoped resource:

    The smallest valid Tenant definition is given below (with just one field in its spec):

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: alpha\nspec:\n  quota: small\n

    Here is a more detailed Tenant definition, explained below:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: alpha\nspec:\n  owners: # optional\n    users: # optional\n      - dave@stakater.com\n    groups: # optional\n      - alpha\n  editors: # optional\n    users: # optional\n      - jack@stakater.com\n  viewers: # optional\n    users: # optional\n      - james@stakater.com\n  quota: medium # required\n  sandboxConfig: # optional\n    enabled: true # optional\n    private: true # optional\n  onDelete: # optional\n    cleanNamespaces: false # optional\n    cleanAppProject: true # optional\n  argocd: # optional\n    sourceRepos: # required\n      - https://github.com/stakater/gitops-config\n    appProject: # optional\n      clusterResourceWhitelist: # optional\n        - group: tronador.stakater.com\n          kind: Environment\n      namespaceResourceBlacklist: # optional\n        - group: \"\"\n          kind: ConfigMap\n  hibernation: # optional\n    sleepSchedule: 23 * * * * # required\n    wakeSchedule: 26 * * * * # required\n  namespaces: # optional\n    withTenantPrefix: # optional\n      - dev\n      - build\n    withoutTenantPrefix: # optional\n      - preview\n  commonMetadata: # optional\n    labels: # optional\n      stakater.com/team: alpha\n    annotations: # optional\n      openshift.io/node-selector: node-role.kubernetes.io/infra=\n  specificMetadata: # optional\n    - annotations: # optional\n        stakater.com/user: dave\n      labels: # optional\n        stakater.com/sandbox: true\n      namespaces: # optional\n        - alpha-dave-stakater-sandbox\n  templateInstances: # optional\n  - spec: # optional\n      template: networkpolicy # required\n      sync: true  # optional\n      parameters: # optional\n        - name: CIDR_IP\n          value: \"172.17.0.0/16\"\n    selector: # optional\n      matchLabels: # optional\n        policy: network-restriction\n

    \u26a0\ufe0f If the same label or annotation key is applied using more than one of the provided methods, the highest precedence is given to specificMetadata, followed by commonMetadata, and finally the values applied from openshift.project.labels/openshift.project.annotations in the IntegrationConfig.

    "},{"location":"customresources.html#3-template","title":"3. Template","text":"

    Cluster scoped resource:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: redis\nresources:\n  helm:\n    releaseName: redis\n    chart:\n      repository:\n        name: redis\n        repoUrl: https://charts.bitnami.com/bitnami\n    values: |\n      redisPort: 6379\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: networkpolicy\nparameters:\n  - name: CIDR_IP\n    value: \"172.17.0.0/16\"\nresources:\n  manifests:\n    - kind: NetworkPolicy\n      apiVersion: networking.k8s.io/v1\n      metadata:\n        name: deny-cross-ns-traffic\n      spec:\n        podSelector:\n          matchLabels:\n            role: db\n        policyTypes:\n        - Ingress\n        - Egress\n        ingress:\n        - from:\n          - ipBlock:\n              cidr: \"${{CIDR_IP}}\"\n              except:\n              - 172.17.1.0/24\n          - namespaceSelector:\n              matchLabels:\n                project: myproject\n          - podSelector:\n              matchLabels:\n                role: frontend\n          ports:\n          - protocol: TCP\n            port: 6379\n        egress:\n        - to:\n          - ipBlock:\n              cidr: 10.0.0.0/24\n          ports:\n          - protocol: TCP\n            port: 5978\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: resource-mapping\nresources:\n  resourceMappings:\n    secrets:\n      - name: secret-s1\n        namespace: namespace-n1\n    configMaps:\n      - name: configmap-c1\n        namespace: namespace-n2\n

    Templates are used to initialize Namespaces, share common resources across namespaces, and map secrets/configmaps from one namespace to other namespaces.

    You can also define custom parameters in Template and TemplateInstance. The parameters defined in the TemplateInstance overwrite the values defined in the Template.

    Manifest Templates: The easiest option to define a Template is by specifying an array of Kubernetes manifests which should be applied when the Template is being instantiated.

    Helm Chart Templates: Instead of manifests, a Template can specify a Helm chart that will be installed (using Helm template) when the Template is being instantiated.

    Resource Mapping Templates: A template can be used to map secrets and configmaps from one tenant's namespace to another tenant's namespace, or within a tenant's namespace.

    "},{"location":"customresources.html#mandatory-and-optional-templates","title":"Mandatory and Optional Templates","text":"

    Templates can either be mandatory or optional. By default, all Templates are optional. Cluster Admins can make Templates mandatory by adding them to the spec.templateInstances array within the Tenant configuration. All Templates listed in spec.templateInstances will always be instantiated within every Namespace that is created for the respective Tenant.
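
    For example, following the Tenant definition shown above, a mandatory Template can be listed like this; the networkpolicy template, parameter value and selector are illustrative:

    spec:\n  templateInstances:\n  - spec:\n      template: networkpolicy\n      sync: true\n      parameters:\n        - name: CIDR_IP\n          value: \"172.17.0.0/16\"\n    selector:\n      matchLabels:\n        policy: network-restriction\n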

    "},{"location":"customresources.html#4-templateinstance","title":"4. TemplateInstance","text":"

    Namespace scoped resource:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: networkpolicy\n  namespace: build\nspec:\n  template: networkpolicy\n  sync: true\nparameters:\n  - name: CIDR_IP\n    value: \"172.17.0.0/16\"\n

    TemplateInstances are used to keep track of resources created from Templates, which are instantiated inside a Namespace. Generally, a TemplateInstance is created from a Template and is then not updated when the Template changes later on. To change this behavior, it is possible to set spec.sync: true in a TemplateInstance. Setting this option keeps the TemplateInstance in sync with the underlying Template (similar to a Helm upgrade).

    "},{"location":"customresources.html#5-templategroupinstance","title":"5. TemplateGroupInstance","text":"

    Cluster scoped resource:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: namespace-parameterized-restrictions-tgi\nspec:\n  template: namespace-parameterized-restrictions\n  sync: true\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: In\n      values:\n        - alpha\n        - beta\nparameters:\n  - name: CIDR_IP\n    value: \"172.17.0.0/16\"\n

    TemplateGroupInstance distributes a template across multiple namespaces which are selected by labelSelector.

    "},{"location":"customresources.html#6-resourcesupervisor","title":"6. ResourceSupervisor","text":"

    Cluster scoped resource:

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: tenant-sample\nspec:\n  argocd:\n    appProjects:\n      - tenant-sample\n  hibernation:\n    sleepSchedule: 23 * * * *\n    wakeSchedule: 26 * * * *\n  namespaces:\n    - stage\n    - dev\nstatus:\n  currentStatus: running\n  nextReconcileTime: '2022-07-07T11:23:00Z'\n

    The ResourceSupervisor is a resource created by MTO in case the Hibernation feature is enabled. The Resource manages the sleep/wake schedule of the namespaces owned by the tenant, and manages the previous state of any sleeping application. Currently, only StatefulSets and Deployments are put to sleep. Additionally, ArgoCD AppProjects that belong to the tenant have a deny SyncWindow added to them.

    The ResourceSupervisor can be created either via the Tenant or manually. For more details, check some of its use cases.

    "},{"location":"customresources.html#namespace","title":"Namespace","text":"
    apiVersion: v1\nkind: Namespace\nmetadata:\n  labels:\n    stakater.com/tenant: blue-sky\n  name: build\n
    "},{"location":"customresources.html#notes","title":"Notes","text":""},{"location":"eula.html","title":"Multi Tenant Operator End User License Agreement","text":"

    Last revision date: 12 December 2022

    IMPORTANT: THIS SOFTWARE END-USER LICENSE AGREEMENT (\"EULA\") IS A LEGAL AGREEMENT (\"Agreement\") BETWEEN YOU (THE CUSTOMER, EITHER AS AN INDIVIDUAL OR, IF PURCHASED OR OTHERWISE ACQUIRED BY OR FOR AN ENTITY, AS AN ENTITY) AND Stakater AB OR ITS SUBSIDIARY (\"COMPANY\"). READ IT CAREFULLY BEFORE COMPLETING THE INSTALLATION PROCESS AND USING MULTI TENANT OPERATOR (\"SOFTWARE\"). IT PROVIDES A LICENSE TO USE THE SOFTWARE AND CONTAINS WARRANTY INFORMATION AND LIABILITY DISCLAIMERS. BY INSTALLING AND USING THE SOFTWARE, YOU ARE CONFIRMING YOUR ACCEPTANCE OF THE SOFTWARE AND AGREEING TO BECOME BOUND BY THE TERMS OF THIS AGREEMENT.

    In order to use the Software under this Agreement, you must receive a license key at the time of purchase, in accordance with the scope of use and other terms specified and as set forth in Section 1 of this Agreement.

    "},{"location":"eula.html#1-license-grant","title":"1. License Grant","text":""},{"location":"eula.html#2-modifications","title":"2. Modifications","text":""},{"location":"eula.html#3-restricted-uses","title":"3. Restricted Uses","text":""},{"location":"eula.html#4-ownership","title":"4. Ownership","text":""},{"location":"eula.html#5-fees-and-payment","title":"5. Fees and Payment","text":""},{"location":"eula.html#6-support-maintenance-and-services","title":"6. Support, Maintenance and Services","text":""},{"location":"eula.html#7-disclaimer-of-warranties","title":"7. Disclaimer of Warranties","text":""},{"location":"eula.html#8-limitation-of-liability","title":"8. Limitation of Liability","text":""},{"location":"eula.html#9-remedies","title":"9. Remedies","text":""},{"location":"eula.html#10-acknowledgements","title":"10. Acknowledgements","text":""},{"location":"eula.html#11-third-party-software","title":"11. Third Party Software","text":""},{"location":"eula.html#12-miscellaneous","title":"12. Miscellaneous","text":""},{"location":"eula.html#13-contact-information","title":"13. Contact Information","text":""},{"location":"faq.html","title":"FAQs","text":""},{"location":"faq.html#namespace-admission-webhook","title":"Namespace Admission Webhook","text":""},{"location":"faq.html#q-error-received-while-performing-create-update-or-delete-action-on-namespace","title":"Q. Error received while performing Create, Update or Delete action on Namespace","text":"
    Cannot CREATE namespace test-john without label stakater.com/tenant\n

    Answer. This error occurs when a user tries to perform a create, update or delete action on a namespace without the required stakater.com/tenant label. This label is used by the operator to verify that only authorized users can perform the action on the namespace. Just add the label with the tenant name so that MTO knows which tenant the namespace belongs to, and who is authorized to perform create/update/delete operations. For more details, please refer to the Namespace use-case.
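
    For example, a namespace manifest that satisfies the webhook carries the tenant label:

    apiVersion: v1\nkind: Namespace\nmetadata:\n  name: test-john\n  labels:\n    stakater.com/tenant: <TENANT_NAME>\n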

    "},{"location":"faq.html#q-error-received-while-performing-create-update-or-delete-action-on-openshift-project","title":"Q. Error received while performing Create, Update or Delete action on OpenShift Project","text":"
    Cannot CREATE namespace testing without label stakater.com/tenant. User: system:serviceaccount:openshift-apiserver:openshift-apiserver-sa\n

    Answer. This error occurs because we don't allow Tenant members to perform operations on OpenShift Projects. Whenever an operation is done on a project, openshift-apiserver-sa tries to make the same request on the underlying namespace. That's why the user sees the openshift-apiserver-sa Service Account instead of their own user in the error message.

    The fix is to try the same operation on the namespace manifest instead.

    "},{"location":"faq.html#q-error-received-while-doing-kubectl-apply-f-namespaceyaml","title":"Q. Error received while doing \"kubectl apply -f namespace.yaml\"","text":"
    Error from server (Forbidden): error when retrieving current configuration of:\nResource: \"/v1, Resource=namespaces\", GroupVersionKind: \"/v1, Kind=Namespace\"\nName: \"ns1\", Namespace: \"\"\nfrom server for: \"namespace.yaml\": namespaces \"ns1\" is forbidden: User \"muneeb\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"ns1\"\n

    Answer. Tenant members will not be able to use kubectl apply because apply first gets all the instances of that resource, in this case namespaces, and then performs the required operation on the selected resource. To maintain tenancy, tenant members do not have access to get or list all the namespaces.

    The fix is to create namespaces with kubectl create instead.
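
    For example, the same manifest from the question can be applied with kubectl create instead:

    kubectl create -f namespace.yaml\n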

    "},{"location":"faq.html#mto-argocd-integration","title":"MTO - ArgoCD Integration","text":""},{"location":"faq.html#q-how-do-i-deploy-cluster-scoped-resource-via-the-argocd-integration","title":"Q. How do I deploy cluster-scoped resource via the ArgoCD integration?","text":"

    Answer. Multi-Tenant Operator's ArgoCD Integration allows configuration of which cluster-scoped resources can be deployed, both globally and on a per-tenant basis. For a global allow-list that applies to all tenants, you can add both resource group and kind to the IntegrationConfig's spec.argocd.clusterResourceWhitelist field. Alternatively, you can set this up on a tenant level by configuring the same details within a Tenant's spec.argocd.appProject.clusterResourceWhitelist field. For more details, check out the ArgoCD integration use cases
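
    A sketch of the tenant-level option, using the clusterResourceWhitelist structure from the Tenant custom resource; the tenant name, source repository, group and kind shown are illustrative:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: alpha\nspec:\n  quota: small\n  argocd:\n    sourceRepos:\n      - https://github.com/stakater/gitops-config\n    appProject:\n      clusterResourceWhitelist:\n        - group: tronador.stakater.com\n          kind: Environment\n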

    "},{"location":"faq.html#q-invalidspecerror-application-repo-repo-is-not-permitted-in-project-project","title":"Q. InvalidSpecError: application repo \\<repo> is not permitted in project \\<project>","text":"

    Answer. The above error can occur if the ArgoCD Application is syncing from a source that is not allowed by the referenced AppProject. To solve this, verify that you have referred to the correct project in the given ArgoCD Application, and that the repoURL used for the Application's source is valid. If the error still appears, you can add the URL to the relevant Tenant's spec.argocd.sourceRepos array.

    "},{"location":"faq.html#q-why-are-there-mto-showback-pods-failing-in-my-cluster","title":"Q. Why are there mto-showback-* pods failing in my cluster?","text":"

    Answer. The mto-showback-* pods are used to calculate the cost of the resources used by each tenant. These pods are created by the Multi-Tenant Operator and are scheduled to run every 10 minutes. If the pods are failing, it is likely that the operators necessary to calculate cost are not present in the cluster. To solve this, you can navigate to Operators -> Installed Operators in the OpenShift console and check if the MTO-OpenCost and MTO-Prometheus operators are installed. If they are in a pending state, you can manually approve them to install them in the cluster.

    "},{"location":"features.html","title":"Features","text":"

    The major features of Multi Tenant Operator (MTO) are described below.

    "},{"location":"features.html#kubernetes-multitenancy","title":"Kubernetes Multitenancy","text":"

    RBAC is one of the most complicated and error-prone parts of Kubernetes. With Multi Tenant Operator, you can rest assured that RBAC is configured with the \"least privilege\" mindset and all rules are kept up-to-date with zero manual effort.

    Multi Tenant Operator binds existing ClusterRoles to the Tenant's Namespaces used for managing access to the Namespaces and the resources they contain. You can also modify the default roles or create new roles to have full control and customize access control for your users and teams.

    Multi Tenant Operator is also able to leverage existing OpenShift groups or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system.

    "},{"location":"features.html#hashicorp-vault-multitenancy","title":"HashiCorp Vault Multitenancy","text":"

    Multi Tenant Operator extends the tenants permission model to HashiCorp Vault where it can create Vault paths and greatly ease the overhead of managing RBAC in Vault. Tenant users can manage their own secrets without the concern of someone else having access to their Vault paths.

    More details on Vault Multitenancy

    "},{"location":"features.html#argocd-multitenancy","title":"ArgoCD Multitenancy","text":"

    Multi Tenant Operator not only provides strong multi-tenancy for the OpenShift internals but also extends the tenant permission model to ArgoCD, where it can provision AppProjects and Allowed Repositories for your tenants, greatly easing the overhead of managing RBAC in ArgoCD.

    More details on ArgoCD Multitenancy

    "},{"location":"features.html#mattermost-multitenancy","title":"Mattermost Multitenancy","text":"

    Multi Tenant Operator can manage Mattermost to create Teams for tenant users. All tenant users get a unique team, and a list of predefined channels gets created. When a user is removed from the tenant, the user is also removed from the Mattermost team corresponding to that tenant.

    More details on Mattermost

    "},{"location":"features.html#costresource-optimization","title":"Cost/Resource Optimization","text":"

    Multi Tenant Operator provides a mechanism for defining Resource Quotas at the tenant scope, meaning all namespaces belonging to a particular tenant share the defined quota. This lets you safely enable dev teams to self-serve their namespaces while being confident that they can only use the resources allocated based on budget and business needs.

    More details on Quota

    "},{"location":"features.html#remote-development-namespaces","title":"Remote Development Namespaces","text":"

    Multi Tenant Operator can be configured to automatically provision a namespace in the cluster for every member of a specific tenant. These namespaces are preloaded with any selected templates and consume resources from the same pool as the tenant's quota, creating safe remote dev namespaces that teams can use as scratch space for rapid prototyping and development. As a result, every developer gets a Kubernetes-based cloud development environment that feels like working on localhost.

    More details on Sandboxes

    "},{"location":"features.html#templates-and-template-distribution","title":"Templates and Template distribution","text":"

    Multi Tenant Operator allows admins/users to define templates for namespaces, so that others can instantiate these templates to provision namespaces with batteries included. A template could pre-populate a namespace for certain use cases or with the basic tooling required. Templates allow you to define Kubernetes manifests, Helm charts, and more to be applied when the template is used to create a namespace.

    Templates can also be parameterized for flexibility and ease of use, and their presence can be enforced in one tenant's or all tenants' namespaces to configure secure defaults.

    Common use cases for namespace templates may be:

    More details on Distributing Template Resources

    "},{"location":"features.html#hibernation","title":"Hibernation","text":"

    Multi Tenant Operator can downscale Deployments and StatefulSets in a tenant's Namespace according to a defined sleep schedule. The Deployments and StatefulSets are brought back to their required replicas according to the provided wake schedule.

    More details on Hibernation

    "},{"location":"features.html#cross-namespace-resource-distribution","title":"Cross Namespace Resource Distribution","text":"

    Multi Tenant Operator supports cloning of secrets and configmaps from one namespace to other namespaces based on label selectors. Templates are used to reference the secrets and configmaps, and a TemplateGroupInstance distributes those secrets and configmaps to all matching namespaces, even if the namespaces belong to different tenants. If a TemplateInstance is used instead, the resources are only mapped to namespaces that belong to the same tenant.
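
    A rough sketch of such a distribution is shown below, assuming a TemplateGroupInstance that references an existing Template named docker-pull-secret and targets namespaces by label; the resource names, label, and API version here are illustrative assumptions rather than a confirmed schema.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-pull-secret-tgi\nspec:\n  template: docker-pull-secret\n  selector:\n    matchLabels:\n      kind: build\n  sync: true\n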

    More details on Distributing Secrets and ConfigMaps

    "},{"location":"features.html#self-service","title":"Self-Service","text":"

    With Multi Tenant Operator, you can empower your users to safely provision namespaces for themselves and their teams (typically mapped to SSO groups). Team-owned namespaces and the resources inside them count towards the team's quotas rather than the user's individual limits and are automatically shared with all team members according to the access rules you configure in Multi Tenant Operator.

    Also, by leveraging Multi Tenant Operator's templating mechanism, namespaces can be provisioned and automatically pre-populated with any kind of resource, or multiple resources, such as network policies, docker pull secrets, or even Helm charts.

    "},{"location":"features.html#everything-as-codegitops-ready","title":"Everything as Code/GitOps Ready","text":"

    Multi Tenant Operator is designed and built to be 100% OpenShift-native and to be configured and managed in the same familiar way as native OpenShift resources. Since it is fully configurable using Custom Resources, it is a perfect fit for modern shops that are dedicated to GitOps.

    "},{"location":"features.html#preventing-clusters-sprawl","title":"Preventing Clusters Sprawl","text":"

    As companies look to further harness the power of cloud-native, they are adopting container technologies at rapid speed, increasing the number of clusters and workloads. As the number of Kubernetes clusters grows, so does the work for the Ops team; when it comes to patching security issues or upgrading clusters, teams can end up doing five times the amount of work.

    With Multi Tenant Operator, a single cluster can be shared by multiple teams, groups of users, or departments, saving operational and management effort and preventing Kubernetes cluster sprawl.

    "},{"location":"features.html#native-experience","title":"Native Experience","text":"

    Multi Tenant Operator provides multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries.

    "},{"location":"features.html#custom-metrics-support","title":"Custom Metrics Support","text":"

    Multi Tenant Operator now supports custom metrics for templates, template instances and template group instances.

    Exposed metrics include the number of resources deployed, the number of resources that failed, and the total number of resources deployed for template instances and template group instances. These metrics can be used to monitor the usage of templates and template instances in the cluster.

    Additionally, this allows us to expose other performance metrics listed here.

    More details on Enabling Custom Metrics

    "},{"location":"features.html#graph-visualization-for-tenants","title":"Graph Visualization for Tenants","text":"

    Multi Tenant Operator now supports graph visualization for tenants on the MTO Console. Effortlessly associate tenants with their respective resources using the enhanced graph feature on the MTO Console. This dynamic graph illustrates the relationships between tenants and the resources they create, encompassing both MTO's proprietary resources and native Kubernetes/OpenShift elements.

    More details on Graph Visualization

    "},{"location":"hibernation.html","title":"Hibernating Namespaces","text":"

    You can manage workloads in your cluster with MTO by implementing a hibernation schedule for your tenants. Hibernation downsizes the running Deployments and StatefulSets in a tenant\u2019s namespace according to a defined cron schedule. You can set a hibernation schedule for your tenants by adding the \u2018spec.hibernation\u2019 field to the tenant's respective Custom Resource.

    hibernation:\n  sleepSchedule: 23 * * * *\n  wakeSchedule: 26 * * * *\n

    spec.hibernation.sleepSchedule accepts a cron expression indicating the time to put the workloads in your tenant\u2019s namespaces to sleep.

    spec.hibernation.wakeSchedule accepts a cron expression indicating the time to wake the workloads in your tenant\u2019s namespaces up.

    Note

    Both sleep and wake schedules must be specified for your Hibernation schedule to be valid.

    Additionally, adding the hibernation.stakater.com/exclude: 'true' annotation to a namespace excludes it from hibernating.

    Note

    This is only true for hibernation applied via the Tenant Custom Resource, and does not apply to hibernation done by manually creating a ResourceSupervisor (details about that below).

    Note

    This will not wake up an already sleeping namespace before the wake schedule.
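
    As a concrete example of the exclusion annotation mentioned above, a namespace can be annotated directly; the namespace name is a placeholder.

    kubectl annotate namespace <namespace-name> hibernation.stakater.com/exclude='true'\n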

    "},{"location":"hibernation.html#resource-supervisor","title":"Resource Supervisor","text":"

    Adding a Hibernation Schedule to a Tenant creates an accompanying ResourceSupervisor Custom Resource. The Resource Supervisor stores the Hibernation schedules and manages the current and previous states of all the applications, whether they are sleeping or awake.

    When the sleep timer is activated, the Resource Supervisor controller stores the details of your applications (including the number of replicas, configurations, etc.) in the applications' namespaces and then puts your applications to sleep. When the wake timer is activated, the controller wakes up the applications using their stored details.

    Enabling ArgoCD support for Tenants will also hibernate applications in the tenants' appProjects.

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: sigma\nspec:\n  argocd:\n    appProjects:\n      - sigma\n    namespace: openshift-gitops\n  hibernation:\n    sleepSchedule: 42 * * * *\n    wakeSchedule: 45 * * * *\n  namespaces:\n    - tenant-ns1\n    - tenant-ns2\n

    Currently, Hibernation is available only for StatefulSets and Deployments.

    "},{"location":"hibernation.html#manual-creation-of-resourcesupervisor","title":"Manual creation of ResourceSupervisor","text":"

    Hibernation can also be applied by creating a ResourceSupervisor resource manually. The ResourceSupervisor definition will contain the hibernation cron schedule, the names of the namespaces to be hibernated, and the names of the ArgoCD AppProjects whose ArgoCD Applications have to be hibernated (as per the given schedule).

    This method can be used to hibernate:

    As an example, the following ResourceSupervisor could be created manually, to apply hibernation explicitly to the 'ns1' and 'ns2' namespaces, and to the 'sample-app-project' AppProject.

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: hibernator\nspec:\n  argocd:\n    appProjects:\n      - sample-app-project\n    namespace: openshift-gitops\n  hibernation:\n    sleepSchedule: 42 * * * *\n    wakeSchedule: 45 * * * *\n  namespaces:\n    - ns1\n    - ns2\n
    "},{"location":"installation.html","title":"Installation","text":"

    This document contains instructions on installing, uninstalling and configuring Multi Tenant Operator using OpenShift MarketPlace.

    1. OpenShift OperatorHub UI

    2. CLI/GitOps

    3. Uninstall

    "},{"location":"installation.html#requirements","title":"Requirements","text":""},{"location":"installation.html#installing-via-operatorhub-ui","title":"Installing via OperatorHub UI","text":"

    Note: Use the stable channel for seamless upgrades. For production environments, prefer Manual approval; use Automatic for development environments.

    Note: MTO will be installed in multi-tenant-operator namespace.

    "},{"location":"installation.html#configuring-integrationconfig","title":"Configuring IntegrationConfig","text":"

    IntegrationConfig is required to configure the settings of multi-tenancy for MTO.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedNamespaces:\n      - default\n      - ^openshift-*\n      - ^kube-*\n      - ^redhat-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:default-*\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n      - ^system:serviceaccount:redhat-*\n

    For more details and configurations check out IntegrationConfig.

    "},{"location":"installation.html#installing-via-cli-or-gitops","title":"Installing via CLI OR GitOps","text":"
    oc create namespace multi-tenant-operator\nnamespace/multi-tenant-operator created\n
    oc create -f - << EOF\napiVersion: operators.coreos.com/v1\nkind: OperatorGroup\nmetadata:\n  name: tenant-operator\n  namespace: multi-tenant-operator\nEOF\noperatorgroup.operators.coreos.com/tenant-operator created\n
    oc create -f - << EOF\napiVersion: operators.coreos.com/v1alpha1\nkind: Subscription\nmetadata:\n  name: tenant-operator\n  namespace: multi-tenant-operator\nspec:\n  channel: stable\n  installPlanApproval: Automatic\n  name: tenant-operator\n  source: certified-operators\n  sourceNamespace: openshift-marketplace\n  startingCSV: tenant-operator.v0.9.1\n  config:\n    env:\n      - name: ENABLE_CONSOLE\n        value: 'true'\nEOF\nsubscription.operators.coreos.com/tenant-operator created\n

    Note: To install MTO via GitOps, add the above manifests to your GitOps repository.

    "},{"location":"installation.html#configuring-integrationconfig_1","title":"Configuring IntegrationConfig","text":"

    IntegrationConfig is required to configure the settings of multi-tenancy for MTO.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedNamespaces:\n      - default\n      - ^openshift-*\n      - ^kube-*\n      - ^redhat-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:default-*\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n      - ^system:serviceaccount:redhat-*\n

    For more details and configurations check out IntegrationConfig.

    "},{"location":"installation.html#uninstall-via-operatorhub-ui","title":"Uninstall via OperatorHub UI","text":"

    You can uninstall MTO by following these steps:

    "},{"location":"installation.html#notes","title":"Notes","text":""},{"location":"integration-config.html","title":"Integration Config","text":"

    IntegrationConfig is used to configure settings of multi-tenancy for Multi Tenant Operator.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  tenantRoles:\n    default:\n      owner:\n        clusterRoles:\n          - admin\n      editor:\n        clusterRoles:\n          - edit\n      viewer:\n        clusterRoles:\n          - view\n          - viewer\n    custom:\n    - labelSelector:\n        matchExpressions:\n        - key: stakater.com/kind\n          operator: In\n          values:\n            - build\n        matchLabels:\n          stakater.com/kind: dev\n      owner:\n        clusterRoles:\n          - custom-owner\n      editor:\n        clusterRoles:\n          - custom-editor\n      viewer:\n        clusterRoles:\n          - custom-viewer\n          - custom-view\n  openshift:\n    project:\n      labels:\n        stakater.com/workload-monitoring: \"true\"\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\n    group:\n      labels:\n        role: customer-reader\n    sandbox:\n      labels:\n        stakater.com/kind: sandbox\n    clusterAdminGroups:\n      - cluster-admins\n    privilegedNamespaces:\n      - ^default$\n      - ^openshift-*\n      - ^kube-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n    namespaceAccessPolicy:\n      deny:\n        privilegedNamespaces:\n          users:\n            - system:serviceaccount:openshift-argocd:argocd-application-controller\n            - adam@stakater.com\n          groups:\n            - cluster-admins\n  argocd:\n    namespace: openshift-operators\n    namespaceResourceBlacklist:\n      - group: '' # all groups\n        kind: ResourceQuota\n    clusterResourceWhitelist:\n      - group: tronador.stakater.com\n        kind: EnvironmentProvisioner\n  rhsso:\n    enabled: true\n    realm: customer\n    endpoint:\n      url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n      secretReference:\n        name: auth-secrets\n        namespace: openshift-auth\n  vault:\n    enabled: true\n    endpoint:\n      url: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n      secretReference:\n        name: vault-root-token\n        namespace: vault\n    sso:\n      clientName: vault\n      accessorID: <ACCESSOR_ID_TOKEN>\n

    Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator.

    "},{"location":"integration-config.html#tenantroles","title":"TenantRoles","text":"

    TenantRoles are required within the IntegrationConfig, as they are used for defining what roles will be applied to each Tenant namespace. The field allows optional custom roles, that are then used to create RoleBindings for namespaces that match a labelSelector.

    \u26a0\ufe0f If you do not configure roles in any way, then the default OpenShift roles of owner, edit, and view will apply to Tenant members. Their details can be found here

    tenantRoles:\n  default:\n    owner:\n      clusterRoles:\n        - admin\n    editor:\n      clusterRoles:\n        - edit\n    viewer:\n      clusterRoles:\n        - view\n        - viewer\n  custom:\n  - labelSelector:\n      matchExpressions:\n      - key: stakater.com/kind\n        operator: In\n        values:\n          - build\n      matchLabels:\n        stakater.com/kind: dev\n    owner:\n      clusterRoles:\n        - custom-owner\n    editor:\n      clusterRoles:\n        - custom-editor\n    viewer:\n      clusterRoles:\n        - custom-viewer\n        - custom-view\n
    "},{"location":"integration-config.html#default","title":"Default","text":"

    This field contains roles that will be used to create default roleBindings for each namespace that belongs to tenants. These roleBindings are only created for a namespace if that namespace isn't already matched by the custom field below it. Therefore, it is required to have at least one role mentioned within each of its three subfields: owner, editor, and viewer. These 3 subfields also correspond to the member fields of the Tenant CR.

    "},{"location":"integration-config.html#custom","title":"Custom","text":"

    An array of custom roles. Similar to the default field, you can mention roles within this field as well. However, the custom roles also require the use of a labelSelector for each iteration within the array. The roles mentioned here will only apply to the namespaces that are matched by the labelSelector. If a namespace is matched by 2 different labelSelectors, then both roles will apply to it. Additionally, roles can be skipped within the labelSelector; these missing roles are then inherited from the default roles field. For example, if the following custom roles arrangement is used:

    custom:\n- labelSelector:\n    matchExpressions:\n    - key: stakater.com/kind\n      operator: In\n      values:\n        - build\n    matchLabels:\n      stakater.com/kind: dev\n  owner:\n    clusterRoles:\n      - custom-owner\n

    Then the editor and viewer roles will be taken from the default roles field, as that field is required to have at least one role mentioned for each of its subfields.

    "},{"location":"integration-config.html#openshift","title":"OpenShift","text":"
    openshift:\n  project:\n    labels:\n      stakater.com/workload-monitoring: \"true\"\n    annotations:\n      openshift.io/node-selector: node-role.kubernetes.io/worker=\n  group:\n    labels:\n      role: customer-reader\n  sandbox:\n    labels:\n      stakater.com/kind: sandbox\n  clusterAdminGroups:\n    - cluster-admins\n  privilegedNamespaces:\n    - ^default$\n    - ^openshift-*\n    - ^kube-*\n  privilegedServiceAccounts:\n    - ^system:serviceaccount:openshift-*\n    - ^system:serviceaccount:kube-*\n  namespaceAccessPolicy:\n    deny:\n      privilegedNamespaces:\n        users:\n          - system:serviceaccount:openshift-argocd:argocd-application-controller\n          - adam@stakater.com\n        groups:\n          - cluster-admins\n
    "},{"location":"integration-config.html#project-group-and-sandbox","title":"Project, group and sandbox","text":"

    We can use the openshift.project, openshift.group and openshift.sandbox fields to automatically add labels and annotations to the Projects and Groups managed via MTO.

      openshift:\n    project:\n      labels:\n        stakater.com/workload-monitoring: \"true\"\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\n    group:\n      labels:\n        role: customer-reader\n    sandbox:\n      labels:\n        stakater.com/kind: sandbox\n

    If we want to add default labels/annotations to sandbox namespaces of tenants, then we simply add them in openshift.project.labels/openshift.project.annotations respectively.

    Whenever a project is created, it will have the labels and annotations mentioned above.

    kind: Project\napiVersion: project.openshift.io/v1\nmetadata:\n  name: bluesky-build\n  annotations:\n    openshift.io/node-selector: node-role.kubernetes.io/worker=\n  labels:\n    workload-monitoring: 'true'\n    stakater.com/tenant: bluesky\nspec:\n  finalizers:\n    - kubernetes\nstatus:\n  phase: Active\n
    kind: Group\napiVersion: user.openshift.io/v1\nmetadata:\n  name: bluesky-owner-group\n  labels:\n    role: customer-reader\nusers:\n  - andrew@stakater.com\n
    "},{"location":"integration-config.html#cluster-admin-groups","title":"Cluster Admin Groups","text":"

    clusterAdminGroups: Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way. MTO does not interfere even with the deletion of privilegedNamespaces.

    Note

    The user kube:admin is bypassed by default to perform operations as a cluster admin; this includes operations on all the namespaces.

    "},{"location":"integration-config.html#privileged-namespaces","title":"Privileged Namespaces","text":"

    privilegedNamespaces: Contains the list of namespaces ignored by MTO. MTO will not manage the namespaces in this list. Privileged namespaces do not go through the integrations or finalizer processing applied to normal namespaces. Values in this list are regex patterns. For example:
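
    The patterns below mirror the privilegedNamespaces entries from the IntegrationConfig sample above: the default namespace is matched exactly, while namespaces starting with openshift- or kube- are matched by prefix.

    privilegedNamespaces:\n  - ^default$\n  - ^openshift-*\n  - ^kube-*\n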

    "},{"location":"integration-config.html#privileged-serviceaccounts","title":"Privileged ServiceAccounts","text":"

    privilegedServiceAccounts: Contains the list of ServiceAccounts ignored by MTO. MTO will not manage the ServiceAccounts in this list. Values in this list are regex patterns. For example, to ignore all ServiceAccounts starting with the system:serviceaccount:openshift- prefix, we can use ^system:serviceaccount:openshift-*; and to ignore the system:serviceaccount:builder service account we can use ^system:serviceaccount:builder$.

    "},{"location":"integration-config.html#namespace-access-policy","title":"Namespace Access Policy","text":"

    namespaceAccessPolicy.Deny: Can be used to restrict privileged users'/groups' CRUD operations on managed namespaces.

    namespaceAccessPolicy:\n  deny:\n    privilegedNamespaces:\n      groups:\n        - cluster-admins\n      users:\n        - system:serviceaccount:openshift-argocd:argocd-application-controller\n        - adam@stakater.com\n

    \u26a0\ufe0f If you want to use a more complex regex pattern (for the openshift.privilegedNamespaces or openshift.privilegedServiceAccounts field), it is recommended that you test the regex pattern first - either locally or using a platform such as https://regex101.com/.

    "},{"location":"integration-config.html#argocd","title":"ArgoCD","text":""},{"location":"integration-config.html#namespace","title":"Namespace","text":"

    argocd.namespace is an optional field used to specify the namespace where ArgoCD Applications and AppProjects are deployed. The field should be populated when you want to create an ArgoCD AppProject for each tenant.
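
    A minimal sketch, reusing the namespace from the IntegrationConfig sample above:

    argocd:\n  namespace: openshift-operators\n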

    "},{"location":"integration-config.html#namespaceresourceblacklist","title":"NamespaceResourceBlacklist","text":"
    argocd:\n  namespaceResourceBlacklist:\n  - group: '' # all resource groups\n    kind: ResourceQuota\n  - group: ''\n    kind: LimitRange\n  - group: ''\n    kind: NetworkPolicy\n

    argocd.namespaceResourceBlacklist prevents ArgoCD from syncing the listed resources from your GitOps repo.

    "},{"location":"integration-config.html#clusterresourcewhitelist","title":"ClusterResourceWhitelist","text":"
    argocd:\n  clusterResourceWhitelist:\n  - group: tronador.stakater.com\n    kind: EnvironmentProvisioner\n

    argocd.clusterResourceWhitelist allows ArgoCD to sync the listed cluster scoped resources from your GitOps repo.

    "},{"location":"integration-config.html#rhsso-red-hat-single-sign-on","title":"RHSSO (Red Hat Single Sign-On)","text":"

    Red Hat Single Sign-On (RHSSO) is based on the Keycloak project and enables you to secure your web applications by providing web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect, and OAuth 2.0.

    If RHSSO is configured on a cluster, then RHSSO configuration can be enabled.

    rhsso:\n  enabled: true\n  realm: customer\n  endpoint:\n    secretReference:\n      name: auth-secrets\n      namespace: openshift-auth\n    url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n

    If enabled, admins have to provide the secret and URL of RHSSO.

    "},{"location":"integration-config.html#vault","title":"Vault","text":"

    Vault is used to secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.

    If vault is configured on a cluster, then Vault configuration can be enabled.

    vault:\n  enabled: true\n  endpoint:\n    secretReference:\n      name: vault-root-token\n      namespace: vault\n    url: >-\n      https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n  sso:\n    accessorID: <ACCESSOR_ID_TOKEN>\n    clientName: vault\n

    If enabled, admins have to provide the secret, URL, and SSO accessorID of Vault.

    For more details please refer use-cases

    "},{"location":"tenant-roles.html","title":"Tenant Member Roles","text":"

    After adding support for custom roles within MTO, this page is only applicable if you use OpenShift and its default owner, edit, and view roles. For more details, see the IntegrationConfig spec.

    MTO tenant members can have one of the following 3 roles:

    1. Owner
    2. Editor
    3. Viewer
    "},{"location":"tenant-roles.html#1-owner","title":"1. Owner","text":"

    fig 2. Shows how tenant owners manage their tenant using MTO

    An Owner is an admin of a tenant with some restrictions. Owners have the privilege to see all resources in their tenant, hold some additional privileges, and can also create new namespaces.

    Owners will also inherit roles from Edit and View.

    "},{"location":"tenant-roles.html#access-permissions","title":"Access Permissions","text":""},{"location":"tenant-roles.html#quotas-permissions","title":"Quotas Permissions","text":""},{"location":"tenant-roles.html#resources-permissions","title":"Resources Permissions","text":""},{"location":"tenant-roles.html#2-editor","title":"2. Editor","text":"

    fig 3. Shows editors role in a tenant using MTO

    The Editor role has edit access on their Projects, but does not have access to Roles or RoleBindings.

    Editors also inherit the View role.

    "},{"location":"tenant-roles.html#access-permissions_1","title":"Access Permissions","text":""},{"location":"tenant-roles.html#quotas-permissions_1","title":"Quotas Permissions","text":""},{"location":"tenant-roles.html#builds-pods-pvc-permissions","title":"Builds ,Pods , PVC Permissions","text":""},{"location":"tenant-roles.html#resources-permissions_1","title":"Resources Permissions","text":""},{"location":"tenant-roles.html#3-viewer","title":"3. Viewer","text":"

    fig 4. Shows viewers role in a tenant using MTO

    The Viewer role only has view access on their Project.

    "},{"location":"tenant-roles.html#access-permissions_2","title":"Access Permissions","text":""},{"location":"tenant-roles.html#quotas-permissions_2","title":"Quotas Permissions","text":""},{"location":"tenant-roles.html#builds-pods-pvc-permissions_1","title":"Builds ,Pods , PVC Permissions","text":""},{"location":"tenant-roles.html#resources-permissions_2","title":"Resources Permissions","text":""},{"location":"troubleshooting.html","title":"Troubleshooting Guide","text":""},{"location":"troubleshooting.html#operatorhub-upgrade-error","title":"OperatorHub Upgrade Error","text":""},{"location":"troubleshooting.html#operator-is-stuck-in-upgrade-if-upgrade-approval-is-set-to-automatic","title":"Operator is stuck in upgrade if upgrade approval is set to Automatic","text":""},{"location":"troubleshooting.html#problem","title":"Problem","text":"

    If operator upgrade is set to Automatic Approval on OperatorHub, there may be scenarios where it gets blocked.

    "},{"location":"troubleshooting.html#resolution","title":"Resolution","text":"

    Information

    If upgrade approval is set to manual, and you want to skip the upgrade of a specific version, then delete the InstallPlan created for that specific version. Operator Lifecycle Manager (OLM) will then create the latest available InstallPlan, which can be approved.\n

    As OLM does not allow upgrading or downgrading from a version that is stuck because of an error, the only possible fix is to uninstall the operator from the cluster. When the operator is uninstalled, it removes all of its resources (ClusterRoles, ClusterRoleBindings, Deployments, etc.) except Custom Resource Definitions (CRDs), so none of the Custom Resources (CRs), such as Tenants and Templates, will be removed from the cluster. If any CRD has a conversion webhook defined, that webhook should be removed before installing the stable version of the operator. This can be achieved by removing the .spec.conversion block from the CRD schema.

    As an example, if you have installed v0.8.0 of Multi Tenant Operator on your cluster, it'll get stuck with the error error validating existing CRs against new CRD's schema for \"integrationconfigs.tenantoperator.stakater.com\": error validating custom resource against new schema for IntegrationConfig multi-tenant-operator/tenant-operator-config: [].spec.tenantRoles: Required value. To resolve this issue, first uninstall MTO from the cluster. Once MTO is uninstalled, check the Tenant CRD, which will have a conversion block that needs to be removed. After removing the conversion block from the Tenant CRD, install the latest available version of MTO from OperatorHub.
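
    As a sketch, the conversion block could be removed with a JSON patch like the following; the CRD name tenants.tenantoperator.stakater.com is an assumption based on the tenantoperator.stakater.com API group and should be verified against the CRDs present on your cluster.

    kubectl patch crd tenants.tenantoperator.stakater.com --type=json -p='[{\"op\": \"remove\", \"path\": \"/spec/conversion\"}]'\n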

    "},{"location":"troubleshooting.html#permission-issues","title":"Permission Issues","text":""},{"location":"troubleshooting.html#vault-user-permissions-are-not-updated-if-the-user-is-added-to-a-tenant-and-the-user-does-not-exist-in-rhsso","title":"Vault user permissions are not updated if the user is added to a Tenant, and the user does not exist in RHSSO","text":""},{"location":"troubleshooting.html#problem_1","title":"Problem","text":"

    If a user is added to a Tenant resource and the user does not exist in RHSSO, then RHSSO is not updated with the user's Vault permission.

    "},{"location":"troubleshooting.html#reproduction-steps","title":"Reproduction steps","text":"
    1. Add a new user to Tenant CR
    2. Attempt to log in to Vault with the added user
    3. Vault denies that the user exists, and signs the user up via RHSSO. User is now created on RHSSO (you may check for the user on RHSSO).
    "},{"location":"troubleshooting.html#resolution_1","title":"Resolution","text":"

    If the user does not exist in RHSSO, then MTO does not create the tenant access for Vault in RHSSO.

    The user now needs to go to Vault, and sign up using OIDC. Then the user needs to wait for MTO to reconcile the updated tenant (reconciliation period is currently 1 hour). After reconciliation, MTO will add relevant access for the user in RHSSO.

    If the user needs to be added immediately and it is not feasible to wait for the next MTO reconciliation, then either add a label or annotation to the user, or restart the Tenant controller pod, to force immediate reconciliation.
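
    For illustration, either of the following could be used; the label key/value are arbitrary, and the controller deployment name is a placeholder that should be looked up in the multi-tenant-operator namespace.

    # Touch the User object so it is reconciled again; the label key/value are arbitrary\noc label user <username> reconcile-trigger=now --overwrite\n\n# Or restart the Tenant controller pod; the deployment name is a placeholder\noc -n multi-tenant-operator rollout restart deployment <tenant-controller-deployment>\n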

    "},{"location":"vault-multitenancy.html","title":"Vault Multitenancy","text":"

    HashiCorp Vault is an identity-based secret and encryption management system. Vault validates and authorizes a system's clients (users, machines, apps) before providing them access to secrets or stored sensitive data.

    "},{"location":"vault-multitenancy.html#vault-integration-in-multi-tenant-operator","title":"Vault integration in Multi Tenant Operator","text":""},{"location":"vault-multitenancy.html#service-account-auth-in-vault","title":"Service Account Auth in Vault","text":"

    MTO enables the Kubernetes auth method, which can be used to authenticate with Vault using a Kubernetes Service Account Token. When enabled, for every tenant namespace, MTO automatically creates policies and roles that allow the service accounts present in those namespaces to read secrets at the tenant's path in Vault. The name of the role is the same as the namespace name.

    These service accounts are required to have the stakater.com/vault-access: true label so they can authenticate with Vault via MTO.
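
    For illustration, a service account labelled for Vault access might look like the following; the name and namespace are placeholders.

    apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: example-sa\n  namespace: bluesky-dev\n  labels:\n    stakater.com/vault-access: \"true\"\n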

    The Diagram shows how MTO enables ServiceAccounts to read secrets from Vault.

    "},{"location":"vault-multitenancy.html#user-oidc-auth-in-vault","title":"User OIDC Auth in Vault","text":"

    This requires a running RHSSO (Red Hat Single Sign-On) instance integrated with Vault over the OIDC login method.

    MTO integration with Vault and RHSSO provides a way for users to log in to Vault where they only have access to relevant tenant paths.

    Once both integrations are set up via the IntegrationConfig CR, MTO links tenant users to specific client roles named after their tenant under the Vault client in RHSSO.

    After that, MTO creates specific policies in Vault for its tenant users.

    Mapping of tenant roles to Vault is shown below

    Tenant Role | Vault Path | Vault Capabilities\nOwner, Editor | (tenantName)/* | Create, Read, Update, Delete, List\nOwner, Editor | sys/mounts/(tenantName)/* | Create, Read, Update, Delete, List\nOwner, Editor | managed-addons/* | Read, List\nViewer | (tenantName)/* | Read\n

    A simple user login workflow is shown in the diagram below.

    "},{"location":"explanation/auth.html","title":"Authentication and Authorization in MTO Console","text":""},{"location":"explanation/auth.html#keycloak-for-authentication","title":"Keycloak for Authentication","text":"

    MTO Console incorporates Keycloak, a leading authentication module, to manage user access securely and efficiently. Keycloak is provisioned automatically by our controllers, setting up a new realm, client, and a default user named mto.

    "},{"location":"explanation/auth.html#benefits","title":"Benefits","text":""},{"location":"explanation/auth.html#postgresql-as-persistent-storage-for-keycloak","title":"PostgreSQL as Persistent Storage for Keycloak","text":"

    MTO Console leverages PostgreSQL as the persistent storage solution for Keycloak, enhancing the reliability and flexibility of the authentication system.

    It offers benefits such as enhanced data reliability, easy data export and import.

    "},{"location":"explanation/auth.html#benefits_1","title":"Benefits","text":""},{"location":"explanation/auth.html#built-in-module-for-authorization","title":"Built-in module for Authorization","text":"

    The MTO Console is equipped with an authorization module, designed to manage access rights intelligently and securely.

    "},{"location":"explanation/auth.html#benefits_2","title":"Benefits","text":""},{"location":"explanation/console.html","title":"MTO Console","text":""},{"location":"explanation/console.html#introduction","title":"Introduction","text":"

    The Multi Tenant Operator (MTO) Console is a comprehensive user interface designed for both administrators and tenant users to manage multi-tenant environments. The MTO Console simplifies the complexity involved in handling various aspects of tenants and their related resources.

    "},{"location":"explanation/console.html#dashboard-overview","title":"Dashboard Overview","text":"

    The dashboard serves as a centralized monitoring hub, offering insights into the current state of tenants, namespaces, and quotas. It is designed to provide a quick summary/snapshot of MTO resources' status. Additionally, it includes a Showback graph that presents a quick glance at the seven-day cost trends associated with the namespaces/tenants, based on the logged-in user.

    "},{"location":"explanation/console.html#tenants","title":"Tenants","text":"

    Here, admins have a bird's-eye view of all tenants, with the ability to delve into each one for detailed examination and management. This section is pivotal for observing the distribution and organization of tenants within the system. More information on each tenant can be accessed by clicking the view option against each tenant name.

    "},{"location":"explanation/console.html#namespaces","title":"Namespaces","text":"

    Users can view all the namespaces that belong to their tenant, offering a comprehensive perspective of the accessible namespaces for tenant members. This section also provides options for detailed exploration.

    "},{"location":"explanation/console.html#quotas","title":"Quotas","text":"

    MTO's Quotas are crucial for managing resource allocation. In this section, administrators can assess the quotas assigned to each tenant, ensuring a balanced distribution of resources in line with operational requirements.

    "},{"location":"explanation/console.html#templates","title":"Templates","text":"

    The Templates section acts as a repository for standardized resource deployment patterns, which can be utilized to maintain consistency and reliability across tenant environments. A few examples include provisioning specific k8s manifests, Helm charts, secrets, or configmaps across a set of namespaces.

    "},{"location":"explanation/console.html#showback","title":"Showback","text":"

    The Showback feature is an essential financial governance tool, providing detailed insights into the cost implications of resource usage by tenant, namespace, or other filters. This facilitates transparent cost management and an internal chargeback or showback process, enabling informed decision-making regarding resource consumption and budgeting.

    "},{"location":"explanation/console.html#user-roles-and-permissions","title":"User Roles and Permissions","text":""},{"location":"explanation/console.html#administrators","title":"Administrators","text":"

    Administrators have overarching access to the console, including the ability to view all namespaces and tenants. They have exclusive access to the IntegrationConfig, allowing them to view all the settings and integrations.

    "},{"location":"explanation/console.html#tenant-users","title":"Tenant Users","text":"

    Regular tenant users can monitor and manage their allocated resources. However, they do not have access to the IntegrationConfig and cannot view resources across different tenants, ensuring data privacy and operational integrity.

    "},{"location":"explanation/console.html#live-yaml-configuration-and-graph-view","title":"Live YAML Configuration and Graph View","text":"

    In the MTO Console, each resource section is equipped with a \"View\" button, revealing the live YAML configuration for complete information on the resource. For Tenant resources, a supplementary \"Graph\" option is available, illustrating the relationships and dependencies of all resources under a Tenant. This dual-view approach empowers users with both the detailed control of YAML and the holistic oversight of the graph view.

    You can find more details on graph visualization here: Graph Visualization

    "},{"location":"explanation/console.html#caching-and-database","title":"Caching and Database","text":"

    MTO integrates a dedicated database to streamline resource management. All resources managed by MTO are stored in a Postgres database, enhancing the MTO Console's ability to efficiently retrieve them for optimal presentation.

    The implementation of this feature is facilitated by the Bootstrap controller, streamlining the deployment process. This controller creates the PostgreSQL Database, establishes a service for inter-pod communication, and generates a secret to ensure secure connectivity to the database.

    Furthermore, the introduction of a dedicated cache layer ensures that there is no added burden on the kube API server when responding to MTO Console requests. This enhancement not only improves response times but also contributes to a more efficient and responsive resource management system.

    "},{"location":"explanation/console.html#authentication-and-authorization","title":"Authentication and Authorization","text":"

    MTO Console ensures secure access control using a robust combination of Keycloak for authentication and a custom-built authorization module.

    "},{"location":"explanation/console.html#keycloak-integration","title":"Keycloak Integration","text":"

    Keycloak, an industry-standard authentication tool, is integrated for secure user login and management. It supports seamless integration with existing ADs or SSO systems and grants administrators complete control over user access.

    "},{"location":"explanation/console.html#custom-authorization-module","title":"Custom Authorization Module","text":"

    Complementing Keycloak, our custom authorization module intelligently controls access based on user roles and their association with tenants. Special checks are in place for admin users, granting them comprehensive permissions.

    For more details on Keycloak's integration, PostgreSQL as persistent storage, and the intricacies of our authorization module, please visit here.

    "},{"location":"explanation/console.html#conclusion","title":"Conclusion","text":"

    The MTO Console is engineered to simplify complex multi-tenant management. The current iteration focuses on providing comprehensive visibility. Future updates could include direct CUD (Create/Update/Delete) capabilities from the dashboard, enhancing the console\u2019s functionality. The Showback feature remains a standout, offering critical cost tracking and analysis. The delineation of roles between administrators and tenant users ensures a secure and organized operational framework.

    "},{"location":"explanation/why-argocd-multi-tenancy.html","title":"Need for Multi-Tenancy in ArgoCD","text":""},{"location":"explanation/why-argocd-multi-tenancy.html#argocd-multi-tenancy","title":"ArgoCD Multi-tenancy","text":"

    ArgoCD is a declarative GitOps tool built to deploy applications to Kubernetes. While the continuous delivery (CD) space is seen by some as crowded these days, ArgoCD does bring some interesting capabilities to the table. Unlike other tools, ArgoCD is lightweight and easy to configure.

    "},{"location":"explanation/why-argocd-multi-tenancy.html#why-argocd","title":"Why ArgoCD?","text":"

    Application definitions, configurations, and environments should be declarative and version controlled. Application deployment and lifecycle management should be automated, auditable, and easy to understand.

    "},{"location":"explanation/why-vault-multi-tenancy.html","title":"Need for Multi-Tenancy in Vault","text":""},{"location":"faq/index.html","title":"Index","text":""},{"location":"how-to-guides/integration-config.html","title":"Integration Config","text":"

    IntegrationConfig is used to configure settings of multi-tenancy for Multi Tenant Operator.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  tenantRoles:\n    default:\n      owner:\n        clusterRoles:\n          - admin\n      editor:\n        clusterRoles:\n          - edit\n      viewer:\n        clusterRoles:\n          - view\n          - viewer\n    custom:\n    - labelSelector:\n        matchExpressions:\n        - key: stakater.com/kind\n          operator: In\n          values:\n            - build\n        matchLabels:\n          stakater.com/kind: dev\n      owner:\n        clusterRoles:\n          - custom-owner\n      editor:\n        clusterRoles:\n          - custom-editor\n      viewer:\n        clusterRoles:\n          - custom-viewer\n          - custom-view\n  openshift:\n    project:\n      labels:\n        stakater.com/workload-monitoring: \"true\"\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\n    group:\n      labels:\n        role: customer-reader\n    sandbox:\n      labels:\n        stakater.com/kind: sandbox\n    clusterAdminGroups:\n      - cluster-admins\n    privilegedNamespaces:\n      - ^default$\n      - ^openshift-*\n      - ^kube-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n    namespaceAccessPolicy:\n      deny:\n        privilegedNamespaces:\n          users:\n            - system:serviceaccount:openshift-argocd:argocd-application-controller\n            - adam@stakater.com\n          groups:\n            - cluster-admins\n  argocd:\n    namespace: openshift-operators\n    namespaceResourceBlacklist:\n      - group: '' # all groups\n        kind: ResourceQuota\n    clusterResourceWhitelist:\n      - group: tronador.stakater.com\n        kind: EnvironmentProvisioner\n  rhsso:\n    enabled: true\n    realm: customer\n    endpoint:\n      url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n      secretReference:\n        name: auth-secrets\n        namespace: openshift-auth\n  vault:\n    enabled: true\n    endpoint:\n      url: https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n      secretReference:\n        name: vault-root-token\n        namespace: vault\n    sso:\n      clientName: vault\n      accessorID: <ACCESSOR_ID_TOKEN>\n

    Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator.

    "},{"location":"how-to-guides/integration-config.html#tenantroles","title":"TenantRoles","text":"

    TenantRoles are required within the IntegrationConfig, as they are used for defining what roles will be applied to each Tenant namespace. The field allows optional custom roles, that are then used to create RoleBindings for namespaces that match a labelSelector.

    \u26a0\ufe0f If you do not configure roles in any way, then the default OpenShift roles of owner, edit, and view will apply to Tenant members. Their details can be found here

    tenantRoles:\n  default:\n    owner:\n      clusterRoles:\n        - admin\n    editor:\n      clusterRoles:\n        - edit\n    viewer:\n      clusterRoles:\n        - view\n        - viewer\n  custom:\n  - labelSelector:\n      matchExpressions:\n      - key: stakater.com/kind\n        operator: In\n        values:\n          - build\n      matchLabels:\n        stakater.com/kind: dev\n    owner:\n      clusterRoles:\n        - custom-owner\n    editor:\n      clusterRoles:\n        - custom-editor\n    viewer:\n      clusterRoles:\n        - custom-viewer\n        - custom-view\n
    "},{"location":"how-to-guides/integration-config.html#default","title":"Default","text":"

    This field contains roles that will be used to create default roleBindings for each namespace that belongs to tenants. These roleBindings are only created for a namespace if that namespace isn't already matched by the custom field below it. Therefore, it is required to have at least one role mentioned within each of its three subfields: owner, editor, and viewer. These 3 subfields also correspond to the member fields of the Tenant CR

    "},{"location":"how-to-guides/integration-config.html#custom","title":"Custom","text":"

    An array of custom roles. Similar to the default field, you can mention roles within this field as well. However, the custom roles also require the use of a labelSelector for each iteration within the array. The roles mentioned here will only apply to the namespaces that are matched by the labelSelector. If a namespace is matched by 2 different labelSelectors, then both roles will apply to it. Additionally, roles can be skipped within the labelSelector; these missing roles are then inherited from the default roles field. For example, if the following custom roles arrangement is used:

    custom:\n- labelSelector:\n    matchExpressions:\n    - key: stakater.com/kind\n      operator: In\n      values:\n        - build\n    matchLabels:\n      stakater.com/kind: dev\n  owner:\n    clusterRoles:\n      - custom-owner\n

    Then the editor and viewer roles will be taken from the default roles field, as that field is required to have at least one role mentioned for each of its subfields.

    "},{"location":"how-to-guides/integration-config.html#openshift","title":"OpenShift","text":"
    openshift:\n  project:\n    labels:\n      stakater.com/workload-monitoring: \"true\"\n    annotations:\n      openshift.io/node-selector: node-role.kubernetes.io/worker=\n  group:\n    labels:\n      role: customer-reader\n  sandbox:\n    labels:\n      stakater.com/kind: sandbox\n  clusterAdminGroups:\n    - cluster-admins\n  privilegedNamespaces:\n    - ^default$\n    - ^openshift-*\n    - ^kube-*\n  privilegedServiceAccounts:\n    - ^system:serviceaccount:openshift-*\n    - ^system:serviceaccount:kube-*\n  namespaceAccessPolicy:\n    deny:\n      privilegedNamespaces:\n        users:\n          - system:serviceaccount:openshift-argocd:argocd-application-controller\n          - adam@stakater.com\n        groups:\n          - cluster-admins\n
    "},{"location":"how-to-guides/integration-config.html#project-group-and-sandbox","title":"Project, group and sandbox","text":"

    We can use the openshift.project, openshift.group and openshift.sandbox fields to automatically add labels and annotations to the Projects and Groups managed via MTO.

      openshift:\n    project:\n      labels:\n        stakater.com/workload-monitoring: \"true\"\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\n    group:\n      labels:\n        role: customer-reader\n    sandbox:\n      labels:\n        stakater.com/kind: sandbox\n

    If we want to add default labels/annotations to sandbox namespaces of tenants, then we simply add them in openshift.project.labels/openshift.project.annotations respectively.

    Whenever a project is created, it will have the labels and annotations mentioned above.

    kind: Project\napiVersion: project.openshift.io/v1\nmetadata:\n  name: bluesky-build\n  annotations:\n    openshift.io/node-selector: node-role.kubernetes.io/worker=\n  labels:\n    workload-monitoring: 'true'\n    stakater.com/tenant: bluesky\nspec:\n  finalizers:\n    - kubernetes\nstatus:\n  phase: Active\n
    kind: Group\napiVersion: user.openshift.io/v1\nmetadata:\n  name: bluesky-owner-group\n  labels:\n    role: customer-reader\nusers:\n  - andrew@stakater.com\n
    "},{"location":"how-to-guides/integration-config.html#cluster-admin-groups","title":"Cluster Admin Groups","text":"

    clusterAdminGroups: Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way.

    "},{"location":"how-to-guides/integration-config.html#privileged-namespaces","title":"Privileged Namespaces","text":"

    privilegedNamespaces: Contains the list of namespaces ignored by MTO. MTO will not manage the namespaces in this list. Values in this list are regex patterns. For example:

    "},{"location":"how-to-guides/integration-config.html#privileged-serviceaccounts","title":"Privileged ServiceAccounts","text":"

    privilegedServiceAccounts: Contains the list of ServiceAccounts ignored by MTO. MTO will not manage the ServiceAccounts in this list. Values in this list are regex patterns. For example, to ignore all ServiceAccounts starting with the system:serviceaccount:openshift- prefix, we can use ^system:serviceaccount:openshift-*; and to ignore the system:serviceaccount:builder service account we can use ^system:serviceaccount:builder$.

    "},{"location":"how-to-guides/integration-config.html#namespace-access-policy","title":"Namespace Access Policy","text":"

    namespaceAccessPolicy.Deny: Can be used to restrict privileged users'/groups' CRUD operations on managed namespaces.

    namespaceAccessPolicy:\n  deny:\n    privilegedNamespaces:\n      groups:\n        - cluster-admins\n      users:\n        - system:serviceaccount:openshift-argocd:argocd-application-controller\n        - adam@stakater.com\n

    \u26a0\ufe0f If you want to use a more complex regex pattern (for the openshift.privilegedNamespaces or openshift.privilegedServiceAccounts field), it is recommended that you test the regex pattern first - either locally or using a platform such as https://regex101.com/.

    "},{"location":"how-to-guides/integration-config.html#argocd","title":"ArgoCD","text":""},{"location":"how-to-guides/integration-config.html#namespace","title":"Namespace","text":"

    argocd.namespace is an optional field used to specify the namespace where ArgoCD Applications and AppProjects are deployed. The field should be populated when you want to create an ArgoCD AppProject for each tenant.

    "},{"location":"how-to-guides/integration-config.html#namespaceresourceblacklist","title":"NamespaceResourceBlacklist","text":"
    argocd:\n  namespaceResourceBlacklist:\n  - group: '' # all resource groups\n    kind: ResourceQuota\n  - group: ''\n    kind: LimitRange\n  - group: ''\n    kind: NetworkPolicy\n

    argocd.namespaceResourceBlacklist prevents ArgoCD from syncing the listed resources from your GitOps repo.

    "},{"location":"how-to-guides/integration-config.html#clusterresourcewhitelist","title":"ClusterResourceWhitelist","text":"
    argocd:\n  clusterResourceWhitelist:\n  - group: tronador.stakater.com\n    kind: EnvironmentProvisioner\n

    argocd.clusterResourceWhitelist allows ArgoCD to sync the listed cluster scoped resources from your GitOps repo.

    "},{"location":"how-to-guides/integration-config.html#rhsso-red-hat-single-sign-on","title":"RHSSO (Red Hat Single Sign-On)","text":"

    Red Hat Single Sign-On (RHSSO) is based on the Keycloak project and enables you to secure your web applications by providing web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect, and OAuth 2.0.

    If RHSSO is configured on the cluster, its configuration can be enabled in the IntegrationConfig:

    rhsso:\n  enabled: true\n  realm: customer\n  endpoint:\n    secretReference:\n      name: auth-secrets\n      namespace: openshift-auth\n    url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n

    If enabled, admins have to provide the secret reference and URL of RHSSO.

    "},{"location":"how-to-guides/integration-config.html#vault","title":"Vault","text":"

    Vault is used to secure, store and tightly control access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.

    If Vault is configured on the cluster, its configuration can be enabled in the IntegrationConfig:

    vault:\n  enabled: true\n  endpoint:\n    secretReference:\n      name: vault-root-token\n      namespace: vault\n    url: >-\n      https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n  sso:\n    accessorID: <ACCESSOR_ID_TOKEN>\n    clientName: vault\n

    If enabled, admins have to provide the secret reference, URL, and SSO accessorID of Vault.

    "},{"location":"how-to-guides/quota.html","title":"Quota","text":"

    Using Multi Tenant Operator, the cluster-admin can set and enforce cluster resource quotas and limit ranges for tenants.

    "},{"location":"how-to-guides/quota.html#assigning-resource-quotas","title":"Assigning Resource Quotas","text":"

    Bill is a cluster admin who will first create a Quota CR in which he sets the maximum resource limits that Anna's tenant will have. Here, limitrange is an optional field; the cluster admin can skip it if not needed.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: small\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '5'\n      requests.memory: '5Gi'\n      configmaps: \"10\"\n      secrets: \"10\"\n      services: \"10\"\n      services.loadbalancers: \"2\"\n  limitrange:\n    limits:\n      - type: \"Pod\"\n        max:\n          cpu: \"2\"\n          memory: \"1Gi\"\n        min:\n          cpu: \"200m\"\n          memory: \"100Mi\"\nEOF\n

    For more details please refer to Quotas.

    kubectl get quota small\nNAME       STATE    AGE\nsmall      Active   3m\n

    Bill then proceeds to create a tenant for Anna, while also linking the newly created Quota.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@stakater.com\n  quota: small\n  sandbox: false\nEOF\n

    Now that the quota is linked with Anna's tenant, Anna can create any resource within the values of resource quota and limit range.

    kubectl -n bluesky-production create deployment nginx --image nginx:latest --replicas 4\n

    Once the resource quota assigned to the tenant has been reached, Anna cannot create further resources.

    kubectl -n bluesky-production run bluesky-training --image nginx:latest\nError from server (Cannot exceed Namespace quota: please, reach out to the system administrators)\n
    "},{"location":"how-to-guides/quota.html#limiting-persistentvolume-for-tenant","title":"Limiting PersistentVolume for Tenant","text":"

    Bill, as a cluster admin, wants to restrict the amount of storage a Tenant can use. For that, he'll add the requests.storage field to quota.spec.resourcequota.hard. If Bill wants to restrict tenant bluesky to use only 50Gi of storage, he'll first create a quota with the requests.storage field set to 50Gi.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: medium\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '5'\n      requests.memory: '10Gi'\n      requests.storage: '50Gi'\nEOF\n

    Once the quota is created, Bill will create the tenant and set the quota field to the one he created.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  quota: medium\n  sandbox: true\nEOF\n

    Now, the combined storage used by all tenant namespaces will not exceed 50Gi.

    "},{"location":"how-to-guides/quota.html#adding-storageclass-restrictions-for-tenant","title":"Adding StorageClass Restrictions for Tenant","text":"

    Now, Bill, as a cluster admin, wants to make sure that no Tenant can provision more than a fixed amount of storage from a StorageClass. Bill can restrict that using <storage-class-name>.storageclass.storage.k8s.io/requests.storage field in quota.spec.resourcequota.hard field. If Bill wants to restrict tenant sigma to use only 20Gi of storage from storage class stakater, he'll first create a StorageClass stakater and then create the relevant Quota with stakater.storageclass.storage.k8s.io/requests.storage field set to 20Gi.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: small\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '2'\n      requests.memory: '4Gi'\n      stakater.storageclass.storage.k8s.io/requests.storage: '20Gi'\nEOF\n

    Once the quota is created, Bill will create the tenant and set the quota field to the one he created.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\nspec:\n  owners:\n    users:\n    - dave@aurora.org\n  quota: small\n  sandbox: true\nEOF\n

    Now, the combined storage provisioned from StorageClass stakater used by all tenant namespaces will not exceed 20Gi.

    The 20Gi limit will only be applied to StorageClass stakater. If a tenant member creates a PVC with some other StorageClass, they will not be restricted by this quota.
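
    For illustration, a PersistentVolumeClaim like the following sketch (hypothetical name, namespace, and size) would count against the 20Gi stakater quota, while the same claim with a different storageClassName would not:

    apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: data-pvc\n  namespace: sigma-dave-aurora-sandbox\nspec:\n  storageClassName: stakater\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 5Gi\n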

    Tip

    More details about Resource Quota can be found here

    "},{"location":"how-to-guides/template-group-instance.html","title":"TemplateGroupInstance","text":"

    Cluster scoped resource:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: namespace-parameterized-restrictions-tgi\nspec:\n  template: namespace-parameterized-restrictions\n  sync: true\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: In\n      values:\n        - alpha\n        - beta\nparameters:\n  - name: CIDR_IP\n    value: \"172.17.0.0/16\"\n

    TemplateGroupInstance distributes a template across multiple namespaces which are selected by labelSelector.

    "},{"location":"how-to-guides/template-instance.html","title":"TemplateInstance","text":"

    Namespace scoped resource:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: networkpolicy\n  namespace: build\nspec:\n  template: networkpolicy\n  sync: true\nparameters:\n  - name: CIDR_IP\n    value: \"172.17.0.0/16\"\n

    TemplateInstances are used to keep track of resources created from Templates that are instantiated inside a Namespace. Generally, a TemplateInstance is created from a Template and is then not updated when the Template changes later on. To change this behavior, it is possible to set spec.sync: true in a TemplateInstance. Setting this option keeps the TemplateInstance in sync with the underlying Template (similar to a Helm upgrade).

    "},{"location":"how-to-guides/template.html","title":"Template","text":""},{"location":"how-to-guides/template.html#cluster-scoped-resource","title":"Cluster scoped resource","text":"
    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: redis\nresources:\n  helm:\n    releaseName: redis\n    chart:\n      repository:\n        name: redis\n        repoUrl: https://charts.bitnami.com/bitnami\n    values: |\n      redisPort: 6379\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: networkpolicy\nparameters:\n  - name: CIDR_IP\n    value: \"172.17.0.0/16\"\nresources:\n  manifests:\n    - kind: NetworkPolicy\n      apiVersion: networking.k8s.io/v1\n      metadata:\n        name: deny-cross-ns-traffic\n      spec:\n        podSelector:\n          matchLabels:\n            role: db\n        policyTypes:\n        - Ingress\n        - Egress\n        ingress:\n        - from:\n          - ipBlock:\n              cidr: \"${{CIDR_IP}}\"\n              except:\n              - 172.17.1.0/24\n          - namespaceSelector:\n              matchLabels:\n                project: myproject\n          - podSelector:\n              matchLabels:\n                role: frontend\n          ports:\n          - protocol: TCP\n            port: 6379\n        egress:\n        - to:\n          - ipBlock:\n              cidr: 10.0.0.0/24\n          ports:\n          - protocol: TCP\n            port: 5978\n---\napiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: resource-mapping\nresources:\n  resourceMappings:\n    secrets:\n      - name: secret-s1\n        namespace: namespace-n1\n    configMaps:\n      - name: configmap-c1\n        namespace: namespace-n2\n

    Templates are used to initialize Namespaces, share common resources across namespaces, and map secrets/configmaps from one namespace to other namespaces.

    Also, you can define custom variables in a Template and a TemplateInstance. The parameters defined in a TemplateInstance overwrite the values defined in the Template.
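
    For example, a TemplateInstance can override the CIDR_IP default declared in the networkpolicy Template above; the sketch below reuses the names from this page with a different CIDR value:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: networkpolicy\n  namespace: build\nspec:\n  template: networkpolicy\n  sync: true\nparameters:\n  - name: CIDR_IP\n    value: \"10.10.0.0/16\"\n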

    Manifest Templates: The easiest option to define a Template is by specifying an array of Kubernetes manifests which should be applied when the Template is being instantiated.

    Helm Chart Templates: Instead of manifests, a Template can specify a Helm chart that will be installed (using Helm template) when the Template is being instantiated.

    Resource Mapping Templates: A template can be used to map secrets and configmaps from one tenant's namespace to another tenant's namespace, or within a tenant's namespace.

    "},{"location":"how-to-guides/template.html#mandatory-and-optional-templates","title":"Mandatory and Optional Templates","text":"

    Templates can either be mandatory or optional. By default, all Templates are optional. Cluster Admins can make Templates mandatory by adding them to the spec.templateInstances array within the Tenant configuration. All Templates listed in spec.templateInstances will always be instantiated within every Namespace that is created for the respective Tenant.
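
    A sketch of how a Template can be made mandatory by listing it under spec.templateInstances in a Tenant, reusing names from the examples on this page:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: alpha\nspec:\n  quota: small\n  templateInstances:\n  - spec:\n      template: networkpolicy\n      sync: true\n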

    "},{"location":"how-to-guides/tenant.html","title":"Tenant","text":"

    Cluster scoped resource:

    The smallest valid Tenant definition is given below (with just one field in its spec):

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: alpha\nspec:\n  quota: small\n

    Here is a more detailed Tenant definition, explained below:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: alpha\nspec:\n  owners: # optional\n    users: # optional\n      - dave@stakater.com\n    groups: # optional\n      - alpha\n  editors: # optional\n    users: # optional\n      - jack@stakater.com\n  viewers: # optional\n    users: # optional\n      - james@stakater.com\n  quota: medium # required\n  sandboxConfig: # optional\n    enabled: true # optional\n    private: true # optional\n  onDelete: # optional\n    cleanNamespaces: false # optional\n    cleanAppProject: true # optional\n  argocd: # optional\n    sourceRepos: # required\n      - https://github.com/stakater/gitops-config\n    appProject: # optional\n      clusterResourceWhitelist: # optional\n        - group: tronador.stakater.com\n          kind: Environment\n      namespaceResourceBlacklist: # optional\n        - group: \"\"\n          kind: ConfigMap\n  hibernation: # optional\n    sleepSchedule: 23 * * * * # required\n    wakeSchedule: 26 * * * * # required\n  namespaces: # optional\n    withTenantPrefix: # optional\n      - dev\n      - build\n    withoutTenantPrefix: # optional\n      - preview\n  commonMetadata: # optional\n    labels: # optional\n      stakater.com/team: alpha\n    annotations: # optional\n      openshift.io/node-selector: node-role.kubernetes.io/infra=\n  specificMetadata: # optional\n    - annotations: # optional\n        stakater.com/user: dave\n      labels: # optional\n        stakater.com/sandbox: true\n      namespaces: # optional\n        - alpha-dave-stakater-sandbox\n  templateInstances: # optional\n  - spec: # optional\n      template: networkpolicy # required\n      sync: true  # optional\n      parameters: # optional\n        - name: CIDR_IP\n          value: \"172.17.0.0/16\"\n    selector: # optional\n      matchLabels: # optional\n        policy: network-restriction\n

    \u26a0\ufe0f If the same label or annotation key is applied using more than one of the provided methods, the highest precedence is given to specificMetadata, followed by commonMetadata, and finally the values applied from openshift.project.labels/openshift.project.annotations in the IntegrationConfig.

    "},{"location":"how-to-guides/offboarding/uninstalling.html","title":"Uninstall via OperatorHub UI","text":"

    You can uninstall MTO by following these steps:

    "},{"location":"how-to-guides/offboarding/uninstalling.html#notes","title":"Notes","text":""},{"location":"reference-guides/add-remove-namespace-gitops.html","title":"Add/Remove Namespace from Tenant via GitOps","text":""},{"location":"reference-guides/admin-clusterrole.html","title":"Extending Admin ClusterRole","text":"

    Bill, as the cluster admin, wants to add additional rules to the admin ClusterRole.

    Bill can extend the admin role for MTO using the aggregation label for the admin ClusterRole. Bill will create a new ClusterRole with all the permissions he needs to extend for MTO, and add the aggregation label to the newly created ClusterRole.

    kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: extend-admin-role\n  labels:\n    rbac.authorization.k8s.io/aggregate-to-admin: 'true'\nrules:\n  - verbs:\n      - create\n      - update\n      - patch\n      - delete\n    apiGroups:\n      - user.openshift.io\n    resources:\n      - groups\n

    Note: You can learn more about aggregated-cluster-roles here

    "},{"location":"reference-guides/admin-clusterrole.html#whats-next","title":"What\u2019s next","text":"

    See how Bill can hibernate unused namespaces at night

    "},{"location":"reference-guides/configuring-multitenant-network-isolation.html","title":"Configuring Multi-Tenant Isolation with Network Policy Template","text":"

    Bill is a cluster admin who wants to configure network policies to provide multi-tenant network isolation.

    First, Bill creates a template for network policies:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: tenant-network-policy\nresources:\n  manifests:\n  - apiVersion: networking.k8s.io/v1\n    kind: NetworkPolicy\n    metadata:\n      name: allow-same-namespace\n    spec:\n      podSelector: {}\n      ingress:\n      - from:\n        - podSelector: {}\n  - apiVersion: networking.k8s.io/v1\n    kind: NetworkPolicy\n    metadata:\n      name: allow-from-openshift-monitoring\n    spec:\n      ingress:\n      - from:\n        - namespaceSelector:\n            matchLabels:\n              network.openshift.io/policy-group: monitoring\n      podSelector: {}\n      policyTypes:\n      - Ingress\n  - apiVersion: networking.k8s.io/v1\n    kind: NetworkPolicy\n    metadata:\n      name: allow-from-openshift-ingress\n    spec:\n      ingress:\n      - from:\n        - namespaceSelector:\n            matchLabels:\n              network.openshift.io/policy-group: ingress\n      podSelector: {}\n      policyTypes:\n      - Ingress\n

    Once the template has been created, Bill edits the IntegrationConfig to add unique label to all tenant projects:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    project:\n      labels:\n        stakater.com/workload-monitoring: \"true\"\n        tenant-network-policy: \"true\"\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\n    sandbox:\n      labels:\n        stakater.com/kind: sandbox\n    privilegedNamespaces:\n      - default\n      - ^openshift-*\n      - ^kube-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n

    Bill has added a new label tenant-network-policy: \"true\" in the project section of the IntegrationConfig; MTO will now add that label to all tenant projects.

    Finally, Bill creates a TemplateGroupInstance which will distribute the network policies using the newly added project label and template.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: tenant-network-policy-group\nspec:\n  template: tenant-network-policy\n  selector:\n    matchLabels:\n      tenant-network-policy: \"true\"\n  sync: true\n

    MTO will now deploy the network policies mentioned in Template to all projects matching the label selector mentioned in the TemplateGroupInstance.

    "},{"location":"reference-guides/custom-metrics.html","title":"Custom Metrics Support","text":"

    Multi Tenant Operator now supports custom metrics for templates, template instances and template group instances. This feature allows users to monitor the usage of templates and template instances in their cluster.

    To enable custom metrics and view them in your OpenShift cluster, you need to follow the steps below:

    "},{"location":"reference-guides/custom-roles.html","title":"Changing the default access level for tenant owners","text":"

    This feature allows the cluster admins to change the default roles assigned to Tenant owner, editor, viewer groups.

    For example, Bill as the cluster admin wants to reduce the privileges that tenant owners have, so that they cannot create or edit Roles or bind them. As an admin of an OpenShift cluster, Bill can do this by assigning the edit role to all tenant owners. This is easily achieved by modifying the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  tenantRoles:\n    default:\n      owner:\n        clusterRoles:\n          - edit\n      editor:\n        clusterRoles:\n          - edit\n      viewer:\n        clusterRoles:\n          - view\n

    Once all namespaces reconcile, the old admin RoleBindings should get replaced with the edit ones for each tenant owner.

    "},{"location":"reference-guides/custom-roles.html#giving-specific-permissions-to-some-tenants","title":"Giving specific permissions to some tenants","text":"

    Bill now wants the owners of the tenants bluesky and alpha to have admin permissions over their namespaces. The custom roles feature allows Bill to do this by modifying the IntegrationConfig like this:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  tenantRoles:\n    default:\n      owner:\n        clusterRoles:\n          - edit\n      editor:\n        clusterRoles:\n          - edit\n      viewer:\n        clusterRoles:\n          - view\n    custom:\n    - labelSelector:\n        matchExpressions:\n        - key: stakater.com/tenant\n          operator: In\n          values:\n            - alpha\n      owner:\n        clusterRoles:\n          - admin\n    - labelSelector:\n        matchExpressions:\n        - key: stakater.com/tenant\n          operator: In\n          values:\n            - bluesky\n      owner:\n        clusterRoles:\n          - admin\n

    New Bindings will be created for the Tenant owners of bluesky and alpha, corresponding to the admin Role. Bindings for editors and viewers will be inherited from the default roles. All other Tenant owners will have an edit Role bound to them within their namespaces.

    "},{"location":"reference-guides/deploying-templates.html","title":"Distributing Resources in Namespaces","text":"

    Multi Tenant Operator has three Custom Resources that can cover this need using the Template CR, depending on the conditions and preference:

    1. TemplateGroupInstance
    2. TemplateInstance
    3. Tenant

    Stakater Team, however, encourages the use of TemplateGroupInstance to distribute resources in multiple namespaces as it is optimized for better performance.

    "},{"location":"reference-guides/deploying-templates.html#deploying-template-to-namespaces-via-templategroupinstances","title":"Deploying Template to Namespaces via TemplateGroupInstances","text":"

    Bill, the cluster admin, wants to deploy a docker pull secret in namespaces where certain labels exist.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Once the template has been created, Bill makes a TemplateGroupInstance referring to the Template he wants to deploy with MatchLabels:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-secret\n  selector:\n    matchLabels:\n      kind: build\n  sync: true\n

    Afterward, Bill can see that the secret has been successfully created in all label-matching namespaces.

    kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME                  STATE    AGE\ndocker-pull-secret    Active   3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME                  STATE    AGE\ndocker-pull-secret    Active   2m\n

    TemplateGroupInstance can also target specific tenants or all tenant namespaces under a single yaml definition.

    "},{"location":"reference-guides/deploying-templates.html#templategroupinstance-for-multiple-tenants","title":"TemplateGroupInstance for multiple Tenants","text":"

    This can be done by using the matchExpressions field, splitting the tenant label into a key and values.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: In\n      values:\n        - alpha\n        - beta\n  sync: true\n
    "},{"location":"reference-guides/deploying-templates.html#templategroupinstance-for-all-tenants","title":"TemplateGroupInstance for all Tenants","text":"

    This can also be done by using the matchExpressions field, using just the tenant label key stakater.com/tenant.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: Exists\n  sync: true\n
    "},{"location":"reference-guides/deploying-templates.html#deploying-template-to-namespaces-via-tenant","title":"Deploying Template to Namespaces via Tenant","text":"

    Bill is a cluster admin who wants to deploy a docker-pull-secret in Anna's tenant namespaces where certain labels exist.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Once the template has been created, Bill edits Anna's tenant and populates the templateInstances field:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n  quota: small\n  sandboxConfig:\n    enabled: true\n  templateInstances:\n  - spec:\n      template: docker-pull-secret\n    selector:\n      matchLabels:\n        kind: build\n

    Multi Tenant Operator will deploy the TemplateInstances mentioned in the templateInstances field. The TemplateInstances will only be applied in those namespaces which belong to Anna's tenant and have the matching label kind: build.

    So now Anna adds the label kind: build to her existing namespace bluesky-anna-aurora-sandbox, and after adding the label she sees that the secret has been created.
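
    For instance, Anna can add the label with a standard kubectl command (namespace name taken from the example above):

    kubectl label namespace bluesky-anna-aurora-sandbox kind=build\nnamespace/bluesky-anna-aurora-sandbox labeled\n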

    kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME                  STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"reference-guides/deploying-templates.html#deploying-template-to-a-namespace-via-templateinstance","title":"Deploying Template to a Namespace via TemplateInstance","text":"

    Anna wants to deploy a docker pull secret in her namespace.

    First Anna asks Bill, the cluster admin, to create a template of the secret for her:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Once the template has been created, Anna creates a TemplateInstance in her namespace referring to the Template she wants to deploy:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: docker-pull-secret-instance\n  namespace: bluesky-anna-aurora-sandbox\nspec:\n  template: docker-pull-secret\n  sync: true\n

    Once this is created, Anna can see that the secret has been successfully applied.

    kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME                  STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"reference-guides/deploying-templates.html#passing-parameters-to-template-via-templateinstance-templategroupinstance-or-tenant","title":"Passing Parameters to Template via TemplateInstance, TemplateGroupInstance or Tenant","text":"

    Anna wants to deploy a LimitRange resource to certain namespaces.

    First Anna asks Bill, the cluster admin, to create a template with parameters for a LimitRange for her:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: namespace-parameterized-restrictions\nparameters:\n  # Name of the parameter\n  - name: DEFAULT_CPU_LIMIT\n    # The default value of the parameter\n    value: \"1\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"0.5\"\n    # If a parameter is required the template instance will need to set it\n    # required: true\n    # Make sure only values are entered for this parameter\n    validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n  manifests:\n    - apiVersion: v1\n      kind: LimitRange\n      metadata:\n        name: namespace-limit-range-${namespace}\n      spec:\n        limits:\n          - default:\n              cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n            defaultRequest:\n              cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n            type: Container\n

    Afterward, Anna creates a TemplateInstance in her namespace referring to the Template she wants to deploy:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: namespace-parameterized-restrictions-instance\n  namespace: bluesky-anna-aurora-sandbox\nspec:\n  template: namespace-parameterized-restrictions\n  sync: true\nparameters:\n  - name: DEFAULT_CPU_LIMIT\n    value: \"1.5\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"1\"\n

    If she wants to distribute the same Template over multiple namespaces, she can use TemplateGroupInstance.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: namespace-parameterized-restrictions-tgi\nspec:\n  template: namespace-parameterized-restrictions\n  sync: true\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: In\n      values:\n        - alpha\n        - beta\nparameters:\n  - name: DEFAULT_CPU_LIMIT\n    value: \"1.5\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"1\"\n

    Or she can use her tenant to cover only the tenant namespaces.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n  quota: small\n  sandboxConfig:\n    enabled: true\n  templateInstances:\n  - spec:\n      template: namespace-parameterized-restrictions\n      sync: true\n    parameters:\n      - name: DEFAULT_CPU_LIMIT\n        value: \"1.5\"\n      - name: DEFAULT_CPU_REQUESTS\n        value: \"1\"\n    selector:\n      matchLabels:\n        kind: build\n
    "},{"location":"reference-guides/distributing-resources.html","title":"Copying Secrets and Configmaps across Tenant Namespaces via TGI","text":"

    Bill is a cluster admin who wants to map a docker-pull-secret, present in a build namespace, into tenant namespaces where certain labels exist.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  resourceMappings:\n    secrets:\n      - name: docker-pull-secret\n        namespace: build\n

    Once the template has been created, Bill makes a TemplateGroupInstance referring to the Template he wants to deploy with MatchLabels:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchLabels:\n      kind: build\n  sync: true\n

    Afterward, Bill can see that the secret has been successfully mapped in all matching namespaces.

    kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME             STATE    AGE\ndocker-pull-secret    Active   3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME             STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"reference-guides/distributing-resources.html#mapping-resources-within-tenant-namespaces-via-ti","title":"Mapping Resources within Tenant Namespaces via TI","text":"

    Anna is a tenant owner who wants to map a docker-pull-secret, present in the bluesky-build namespace, to the bluesky-anna-aurora-sandbox namespace.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  resourceMappings:\n    secrets:\n      - name: docker-pull-secret\n        namespace: bluesky-build\n

    Once the template has been created, Anna creates a TemplateInstance in bluesky-anna-aurora-sandbox namespace, referring to the Template.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: docker-secret-instance\n  namespace: bluesky-anna-aurora-sandbox\nspec:\n  template: docker-pull-secret\n  sync: true\n

    Afterward, Anna can see that the secret has been successfully mapped in her namespace.

    kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME             STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"reference-guides/distributing-secrets-using-sealed-secret-template.html","title":"Distributing Secrets Using Sealed Secrets Template","text":"

    Bill is a cluster admin who wants to provide a mechanism for distributing secrets in multiple namespaces. For this, he wants to use Sealed Secrets as the solution by adding them to the MTO Template CR.

    First, Bill creates a Template in which Sealed Secret is mentioned:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: tenant-sealed-secret\nresources:\n  manifests:\n  - kind: SealedSecret\n    apiVersion: bitnami.com/v1alpha1\n    metadata:\n      name: mysecret\n    spec:\n      encryptedData:\n        .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....\n      template:\n        type: kubernetes.io/dockerconfigjson\n        # this is an example of labels and annotations that will be added to the output secret\n        metadata:\n          labels:\n            \"jenkins.io/credentials-type\": usernamePassword\n          annotations:\n            \"jenkins.io/credentials-description\": credentials from Kubernetes\n

    Once the template has been created, Bill has to edit the Tenant to add a unique label to the namespaces in which the secret has to be deployed. For this, he can use the support for common and specific labels across namespaces.

    Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n\n  # use this if you want to add label to some specific namespaces\n  specificMetadata:\n    - namespaces:\n        - test-namespace\n      labels:\n        distribute-image-pull-secret: true\n\n  # use this if you want to add label to all namespaces under your tenant\n  commonMetadata:\n    labels:\n      distribute-image-pull-secret: true\n

    Bill has added a new label distribute-image-pull-secret: true for tenant projects/namespaces; MTO will now add that label depending on the field used.

    Finally, Bill creates a TemplateGroupInstance which will deploy the sealed secrets using the newly created project label and template.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: tenant-sealed-secret\nspec:\n  template: tenant-sealed-secret\n  selector:\n    matchLabels:\n      distribute-image-pull-secret: true\n  sync: true\n

    MTO will now deploy the sealed secrets mentioned in the Template to namespaces which have the mentioned label. The rest of the work to deploy a secret from a sealed secret has to be done by the Sealed Secrets controller.

    "},{"location":"reference-guides/distributing-secrets.html","title":"Distributing Secrets","text":"

    Bill is a cluster admin who wants to provide a mechanism for distributing secrets in multiple namespaces. For this, he wants to use Sealed Secrets as the solution by adding them to the MTO Template CR.

    First, Bill creates a Template in which Sealed Secret is mentioned:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: tenant-sealed-secret\nresources:\n  manifests:\n  - kind: SealedSecret\n    apiVersion: bitnami.com/v1alpha1\n    metadata:\n      name: mysecret\n    spec:\n      encryptedData:\n        .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....\n      template:\n        type: kubernetes.io/dockerconfigjson\n        # this is an example of labels and annotations that will be added to the output secret\n        metadata:\n          labels:\n            \"jenkins.io/credentials-type\": usernamePassword\n          annotations:\n            \"jenkins.io/credentials-description\": credentials from Kubernetes\n

    Once the template has been created, Bill has to edit the Tenant to add a unique label to the namespaces in which the secret has to be deployed. For this, he can use the support for common and specific labels across namespaces.

    Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n\n  # use this if you want to add label to some specific namespaces\n  specificMetadata:\n    - namespaces:\n        - test-namespace\n      labels:\n        distribute-image-pull-secret: true\n\n  # use this if you want to add label to all namespaces under your tenant\n  commonMetadata:\n    labels:\n      distribute-image-pull-secret: true\n

    Bill has added a new label distribute-image-pull-secret: true for tenant projects/namespaces; MTO will now add that label depending on the field used.

    Finally, Bill creates a TemplateGroupInstance which will deploy the sealed secrets using the newly created project label and template.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: tenant-sealed-secret\nspec:\n  template: tenant-sealed-secret\n  selector:\n    matchLabels:\n      distribute-image-pull-secret: true\n  sync: true\n

    MTO will now deploy the sealed secrets mentioned in the Template to namespaces which have the mentioned label. The rest of the work to deploy a secret from a sealed secret has to be done by the Sealed Secrets controller.

    "},{"location":"reference-guides/extend-default-roles.html","title":"Extending the default access level for tenant members","text":"

    Bill as the cluster admin wants to extend the default access for tenant members. As an admin of an OpenShift Cluster, Bill can extend the admin, edit, and view ClusterRole using aggregation. Bill will first create a ClusterRole with privileges to resources which Bill wants to extend. Bill will add the aggregation label to the newly created ClusterRole for extending the default ClusterRoles provided by OpenShift.

    kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: extend-view-role\n  labels:\n    rbac.authorization.k8s.io/aggregate-to-view: 'true'\nrules:\n  - verbs:\n      - get\n      - list\n      - watch\n    apiGroups:\n      - user.openshift.io\n    resources:\n      - groups\n

    Note: You can learn more about aggregated-cluster-roles here

    "},{"location":"reference-guides/graph-visualization.html","title":"Graph Visualization on MTO Console","text":"

    Effortlessly associate tenants with their respective resources using the enhanced graph feature on the MTO Console. This dynamic graph illustrates the relationships between tenants and the resources they create, encompassing both MTO's proprietary resources and native Kubernetes/OpenShift elements.

    Example Graph:

      graph LR;\n      A(alpha)-->B(dev);\n      A-->C(prod);\n      B-->D(limitrange);\n      B-->E(owner-rolebinding);\n      B-->F(editor-rolebinding);\n      B-->G(viewer-rolebinding);\n      C-->H(limitrange);\n      C-->I(owner-rolebinding);\n      C-->J(editor-rolebinding);\n      C-->K(viewer-rolebinding);\n

    Explore with an intuitive graph that showcases the relationships between tenants and their resources. The MTO Console's graph feature simplifies the understanding of complex structures, providing you with a visual representation of your tenant's organization.

    To view the graph of your tenant, follow the steps below:

    "},{"location":"reference-guides/integrationconfig.html","title":"Configuring Managed Namespaces and ServiceAccounts in IntegrationConfig","text":"

    Bill is a cluster admin who can use IntegrationConfig to configure how Multi Tenant Operator (MTO) manages the cluster.

    By default, MTO watches all namespaces and will enforce all the governing policies on them. All namespaces managed by MTO require the stakater.com/tenant label. MTO ignores privileged namespaces that are mentioned in the IntegrationConfig, and does not manage them. Therefore, any tenant label on such namespaces will be ignored.
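
    For reference, a namespace managed by MTO carries the tenant label, roughly like the following sketch (hypothetical names):

    apiVersion: v1\nkind: Namespace\nmetadata:\n  name: bluesky-dev\n  labels:\n    stakater.com/tenant: bluesky\n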

    oc create namespace stakater-test\nError from server (Cannot Create namespace stakater-test without label stakater.com/tenant. User: Bill): admission webhook \"vnamespace.kb.io\" denied the request: Cannot CREATE namespace stakater-test without label stakater.com/tenant. User: Bill\n

    Bill is trying to create a namespace without the stakater.com/tenant label. Creating a namespace without this label is only allowed if the namespace is privileged. Privileged namespaces will be ignored by MTO and do not require the said label. Therefore, Bill will add the required regex in the IntegrationConfig, along with any other namespaces which are privileged and should be ignored by MTO - like default, or namespaces with prefixes like openshift, kube:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedNamespaces:\n      - ^default$\n      - ^openshift-.*\n      - ^kube-.*\n      - ^stakater-.*\n

    After mentioning the required regex (^stakater-.*) under privilegedNamespaces, Bill can create the namespace without interference.

    oc create namespace stakater-test\nnamespace/stakater-test created\n

    MTO will also disallow all users who are not tenant owners from performing CRUD operations on namespaces. This also prevents Service Accounts from performing CRUD operations.

    If Bill wants MTO to ignore Service Accounts, then he would simply have to add them in the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedServiceAccounts:\n      - system:serviceaccount:openshift\n      - system:serviceaccount:stakater\n      - system:serviceaccount:kube\n      - system:serviceaccount:redhat\n      - system:serviceaccount:hive\n

    Bill can also use regex patterns to ignore a set of service accounts:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:openshift-.*\n      - ^system:serviceaccount:stakater-.*\n
    "},{"location":"reference-guides/integrationconfig.html#configuring-vault-in-integrationconfig","title":"Configuring Vault in IntegrationConfig","text":"

    Vault is used to secure, store and tightly control access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.

    If Bill (the cluster admin) has Vault configured in his cluster, then he can benefit from MTO's integration with Vault.

    MTO automatically creates Vault secret paths for tenants, where tenant members can securely save their secrets. It also authorizes tenant members to access these secrets via OIDC.

    Bill would first have to integrate Vault with MTO by adding the details in the IntegrationConfig. For more details, see the Vault section of the IntegrationConfig documentation.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  vault:\n    enabled: true\n    endpoint:\n      secretReference:\n        name: vault-root-token\n        namespace: vault\n      url: >-\n        https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n    sso:\n      accessorID: auth_oidc_aa6aa9aa\n      clientName: vault\n

    Bill then creates a tenant for Anna and John:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@acme.org\n  viewers:\n    users:\n    - john@acme.org\n  quota: small\n  sandbox: false\n

    Now Bill goes to Vault and sees that a path for the tenant has been made under the name bluesky/kv, confirming that Tenant members with the Owner or Edit roles now have access to the tenant's Vault path.

    Now if Anna signs in to Vault via OIDC, she can see her tenant's path and secrets, whereas if John signs in to Vault via OIDC, he can't see his tenant's path or secrets as he doesn't have the access required to view them.

    "},{"location":"reference-guides/integrationconfig.html#configuring-rhsso-red-hat-single-sign-on-in-integrationconfig","title":"Configuring RHSSO (Red Hat Single Sign-On) in IntegrationConfig","text":"

    Red Hat Single Sign-On (RHSSO) is based on the Keycloak project and enables you to secure your web applications by providing web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect, and OAuth 2.0.

    If Bill, the cluster admin, has RHSSO configured in his cluster, then he can benefit from MTO's integration with RHSSO and Vault.

    MTO automatically allows tenant members to access Vault via OIDC (RHSSO authentication and authorization), giving them access to the tenant secret paths where they can securely save their secrets.

    Bill would first have to integrate RHSSO with MTO by adding the details in IntegrationConfig. Visit here for more details.

    rhsso:\n  enabled: true\n  realm: customer\n  endpoint:\n    secretReference:\n      name: auth-secrets\n      namespace: openshift-auth\n    url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n
    "},{"location":"reference-guides/mattermost.html","title":"Creating Mattermost Teams for your tenant","text":""},{"location":"reference-guides/mattermost.html#requirements","title":"Requirements","text":"

    MTO-Mattermost-Integration-Operator

    Please contact Stakater to install the Mattermost integration operator before following the steps mentioned below.

    "},{"location":"reference-guides/mattermost.html#steps-to-enable-integration","title":"Steps to enable integration","text":"

    Bill wants some tenants to also have their own Mattermost Teams. To make sure this happens correctly, Bill will first add the stakater.com/mattermost: true label to the tenants. The label will enable the mto-mattermost-integration-operator to create and manage Mattermost Teams based on Tenants.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\n  labels:\n    stakater.com/mattermost: 'true'\nspec:\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  sandbox: false\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n

    Now users can log in to Mattermost to see their Team and the relevant channels associated with it.

    The name of the Team is similar to the Tenant name. Notification channels are pre-configured for every team, and can be modified.

    "},{"location":"reference-guides/resource-sync-by-tgi.html","title":"Sync Resources Deployed by TemplateGroupInstance","text":"

    The TemplateGroupInstance CR provides two types of resource sync for the resources mentioned in the Template.

    For the given example, let's consider that we want to apply the following Template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-secret\nresources:\n  manifests:\n\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n\n    - apiVersion: v1\n      kind: ServiceAccount\n      metadata:\n        name: example-automated-thing\n      secrets:\n        - name: example-automated-thing-token-zyxwv\n

    And the following TemplateGroupInstance is used to deploy these resources to namespaces having the label kind: build:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-secret\n  selector:\n    matchLabels:\n      kind: build\n  sync: true\n

    As we can see, in our TGI, we have a field spec.sync which is set to true. This will update the resources on two conditions:

    Note

    If the updated field of the deployed manifest is not mentioned in the Template, it will not get reverted. For example, if the secrets field is not mentioned in the ServiceAccount in the above Template, it will not get reverted if changed.

    "},{"location":"reference-guides/resource-sync-by-tgi.html#ignore-resources-updates-on-resources","title":"Ignore Resources Updates on Resources","text":"

    If the resources mentioned in the Template CR conflict with another controller/operator, and you want the TemplateGroupInstance to not actively revert the resource updates, you can add the following label to the conflicting resource: multi-tenant-operator/ignore-resource-updates: \"\".

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-secret\nresources:\n  manifests:\n\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n\n    - apiVersion: v1\n      kind: ServiceAccount\n      metadata:\n        name: example-automated-thing\n        labels:\n          multi-tenant-operator/ignore-resource-updates: \"\"\n      secrets:\n        - name: example-automated-thing-token-zyxwv\n

    Note

    However, this label will not stop Multi Tenant Operator from updating the resource under the following conditions: the Template gets updated, the TemplateGroupInstance gets updated, or the resource gets deleted.

    If you don't want to sync the resources in any case, you can disable sync via sync: false in the TemplateGroupInstance spec.
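
    A sketch of a TemplateGroupInstance with sync disabled, reusing the example above:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-secret\n  selector:\n    matchLabels:\n      kind: build\n  sync: false\n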

    "},{"location":"reference-guides/secret-distribution.html","title":"Propagate Secrets from Parent to Descendant namespaces","text":"

    Secrets like registry credentials often need to exist in multiple Namespaces, so that Pods within different namespaces can have access to those credentials in the form of Secrets.

    Manually creating secrets within different namespaces could lead to challenges, such as:

    With the help of Multi-Tenant Operator's Template feature we can make this secret distribution experience easy.

    For example, to copy a Secret called registry, which exists in the example namespace, to new Namespaces whenever they are created, we will first create a Template that references the registry secret.

    It will also push updates to the copied Secrets and keep the propagated secrets always in sync with the parent namespace.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: registry-secret\nresources:\n  resourceMappings:\n    secrets:\n      - name: registry\n        namespace: example\n

    Now, using this Template, we can propagate the registry secret to different namespaces that have some common set of labels.

    For example, we will just add one label, kind: registry, and all namespaces with this label will get this secret.
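
    For instance, labeling a namespace could look like this (hypothetical namespace name, matching the output shown further below):

    kubectl label namespace example-ns-1 kind=registry\nnamespace/example-ns-1 labeled\n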

    To propagate it to different namespaces dynamically, we will have to create another resource called TemplateGroupInstance. The TemplateGroupInstance will have the Template and matchLabels mapping as shown below:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: registry-secret-group-instance\nspec:\n  template: registry-secret\n  selector:\n    matchLabels:\n      kind: registry\n  sync: true\n

    After reconciliation, you will be able to see those secrets in the namespaces having the mentioned label.

    MTO will keep injecting this secret into new namespaces created with that label.

    kubectl get secret registry -n example-ns-1\nNAME        STATE    AGE\nregistry    Active   3m\n\nkubectl get secret registry -n example-ns-2\nNAME        STATE    AGE\nregistry    Active   3m\n
    "},{"location":"tutorials/installation.html","title":"Installation","text":"

    This document contains instructions on installing, uninstalling and configuring Multi Tenant Operator using OpenShift MarketPlace.

    1. OpenShift OperatorHub UI

    2. CLI/GitOps

    3. Uninstall

    "},{"location":"tutorials/installation.html#requirements","title":"Requirements","text":""},{"location":"tutorials/installation.html#installing-via-operatorhub-ui","title":"Installing via OperatorHub UI","text":"

    Note: Use the stable channel for seamless upgrades. For production environments, prefer Manual approval; use Automatic for development environments.

    Note: MTO will be installed in multi-tenant-operator namespace.

    "},{"location":"tutorials/installation.html#configuring-integrationconfig","title":"Configuring IntegrationConfig","text":"

    IntegrationConfig is required to configure the settings of multi-tenancy for MTO.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedNamespaces:\n      - default\n      - ^openshift-*\n      - ^kube-*\n      - ^redhat-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:default-*\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n      - ^system:serviceaccount:redhat-*\n

    For more details and configurations check out IntegrationConfig.

    "},{"location":"tutorials/installation.html#installing-via-cli-or-gitops","title":"Installing via CLI OR GitOps","text":"
    oc create namespace multi-tenant-operator\nnamespace/multi-tenant-operator created\n
    oc create -f - << EOF\napiVersion: operators.coreos.com/v1\nkind: OperatorGroup\nmetadata:\n  name: tenant-operator\n  namespace: multi-tenant-operator\nEOF\noperatorgroup.operators.coreos.com/tenant-operator created\n
    oc create -f - << EOF\napiVersion: operators.coreos.com/v1alpha1\nkind: Subscription\nmetadata:\n  name: tenant-operator\n  namespace: multi-tenant-operator\nspec:\n  channel: stable\n  installPlanApproval: Automatic\n  name: tenant-operator\n  source: certified-operators\n  sourceNamespace: openshift-marketplace\n  startingCSV: tenant-operator.v0.9.1\n  config:\n    env:\n      - name: ENABLE_CONSOLE\n        value: 'true'\nEOF\nsubscription.operators.coreos.com/tenant-operator created\n

    Note: To install MTO via GitOps, add the above manifests to your GitOps repository.

    "},{"location":"tutorials/installation.html#configuring-integrationconfig_1","title":"Configuring IntegrationConfig","text":"

    IntegrationConfig is required to configure the settings of multi-tenancy for MTO.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedNamespaces:\n      - default\n      - ^openshift-*\n      - ^kube-*\n      - ^redhat-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:default-*\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n      - ^system:serviceaccount:redhat-*\n

    For more details and configurations check out IntegrationConfig.

    "},{"location":"tutorials/installation.html#uninstall-via-operatorhub-ui","title":"Uninstall via OperatorHub UI","text":"

    You can uninstall MTO by following these steps:

    "},{"location":"tutorials/installation.html#notes","title":"Notes","text":""},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html","title":"Enabling Multi-Tenancy in ArgoCD","text":""},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#argocd-integration-in-multi-tenant-operator","title":"ArgoCD integration in Multi Tenant Operator","text":"

    With Multi Tenant Operator (MTO), cluster admins can configure multi tenancy in their cluster. Now with ArgoCD integration, multi tenancy can be configured in ArgoCD applications and AppProjects.

    MTO (if configured to) will create AppProjects for each tenant. The AppProject will allow tenants to create ArgoCD Applications that can be synced to namespaces owned by those tenants. Cluster admins will also be able to blacklist certain namespaced resources if they want, and allow certain cluster-scoped resources as well (see the NamespaceResourceBlacklist and ClusterResourceWhitelist sections in the Integration Config docs and Tenant Custom Resource docs).

    Note that ArgoCD integration in MTO is completely optional.

    "},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#default-argocd-configuration","title":"Default ArgoCD configuration","text":"

    We have set a default ArgoCD configuration in Multi Tenant Operator that fulfils the following use cases:

    "},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#creating-argocd-appprojects-for-your-tenant","title":"Creating ArgoCD AppProjects for your tenant","text":"

    Bill wants each tenant to also have their own ArgoCD AppProjects. To make sure this happens correctly, Bill will first specify the ArgoCD namespace in the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  ...\n  argocd:\n    namespace: openshift-operators\n  ...\n

    Afterward, Bill must specify the source GitOps repos for the tenant inside the tenant CR like so:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\nspec:\n  argocd:\n    sourceRepos:\n      # specify source repos here\n      - \"https://github.com/stakater/GitOps-config\"\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  sandbox: false\n  namespaces:\n    withTenantPrefix:\n      - build\n      - stage\n      - dev\n

    Now Bill can see that an AppProject has been created for the tenant:

    oc get AppProject -A\nNAMESPACE             NAME           AGE\nopenshift-operators   sigma        5d15h\n

    The following AppProject is created:

    apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n  name: sigma\n  namespace: openshift-operators\nspec:\n  destinations:\n    - namespace: sigma-build\n      server: \"https://kubernetes.default.svc\"\n    - namespace: sigma-dev\n      server: \"https://kubernetes.default.svc\"\n    - namespace: sigma-stage\n      server: \"https://kubernetes.default.svc\"\n  roles:\n    - description: >-\n        Role that gives full access to all resources inside the tenant's\n        namespace to the tenant owner groups\n      groups:\n        - saap-cluster-admins\n        - stakater-team\n        - sigma-owner-group\n      name: sigma-owner\n      policies:\n        - \"p, proj:sigma:sigma-owner, *, *, sigma/*, allow\"\n    - description: >-\n        Role that gives edit access to all resources inside the tenant's\n        namespace to the tenant owner group\n      groups:\n        - saap-cluster-admins\n        - stakater-team\n        - sigma-edit-group\n      name: sigma-edit\n      policies:\n        - \"p, proj:sigma:sigma-edit, *, *, sigma/*, allow\"\n    - description: >-\n        Role that gives view access to all resources inside the tenant's\n        namespace to the tenant owner group\n      groups:\n        - saap-cluster-admins\n        - stakater-team\n        - sigma-view-group\n      name: sigma-view\n      policies:\n        - \"p, proj:sigma:sigma-view, *, get, sigma/*, allow\"\n  sourceRepos:\n    - \"https://github.com/stakater/gitops-config\"\n

    Users belonging to the Sigma group will now only see applications created by them in the ArgoCD frontend:

    "},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#prevent-argocd-from-syncing-certain-namespaced-resources","title":"Prevent ArgoCD from syncing certain namespaced resources","text":"

    Bill wants tenants to not be able to sync ResourceQuota and LimitRange resources to their namespaces. To do this correctly, Bill will specify these resources to blacklist in the ArgoCD portion of the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  ...\n  argocd:\n    namespace: openshift-operators\n    namespaceResourceBlacklist:\n      - group: \"\"\n        kind: ResourceQuota\n      - group: \"\"\n        kind: LimitRange\n  ...\n

    Now, if these resources are added to any tenant's project directory in GitOps, ArgoCD will not sync them to the cluster. The AppProject will also have the blacklisted resources added to it:

    apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n  name: sigma\n  namespace: openshift-operators\nspec:\n  ...\n  namespaceResourceBlacklist:\n    - group: ''\n      kind: ResourceQuota\n    - group: ''\n      kind: LimitRange\n  ...\n
    "},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#allow-argocd-to-sync-certain-cluster-wide-resources","title":"Allow ArgoCD to sync certain cluster-wide resources","text":"

    Bill now wants tenants to be able to sync the Environment cluster scoped resource to the cluster. To do this correctly, Bill will specify the resource to allow-list in the ArgoCD portion of the Integration Config's Spec:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  ...\n  argocd:\n    namespace: openshift-operators\n    clusterResourceWhitelist:\n      - group: \"\"\n        kind: Environment\n  ...\n

    Now, if the resource is added to any tenant's project directory in GitOps, ArgoCD will sync it to the cluster. The AppProject will also have the allow-listed resources added to it:

    apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n  name: sigma\n  namespace: openshift-operators\nspec:\n  ...\n  clusterResourceWhitelist:\n  - group: \"\"\n    kind: Environment\n  ...\n
    "},{"location":"tutorials/argocd/enabling-multi-tenancy-argocd.html#override-namespaceresourceblacklist-andor-clusterresourcewhitelist-per-tenant","title":"Override NamespaceResourceBlacklist and/or ClusterResourceWhitelist per Tenant","text":"

    Bill now wants a specific tenant to override the namespaceResourceBlacklist and/or clusterResourceWhitelist set via Integration Config. Bill will specify these in the argocd.appProject section of the Tenant spec.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: blue-sky\nspec:\n  argocd:\n    sourceRepos:\n      # specify source repos here\n      - \"https://github.com/stakater/GitOps-config\"\n    appProject:\n      clusterResourceWhitelist:\n        - group: admissionregistration.k8s.io\n          kind: validatingwebhookconfigurations\n      namespaceResourceBlacklist:\n        - group: \"\"\n          kind: ConfigMap\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  sandbox: false\n  namespaces:\n    withTenantPrefix:\n      - build\n      - stage\n
    "},{"location":"tutorials/template/template-group-instance.html","title":"More about TemplateGroupInstance","text":""},{"location":"tutorials/template/template-instance.html","title":"More about TemplateInstances","text":""},{"location":"tutorials/template/template.html","title":"Understanding and Utilizing Template","text":""},{"location":"tutorials/template/template.html#creating-templates","title":"Creating Templates","text":"

    Anna wants to create a Template that she can use to initialize or share common resources across namespaces (e.g. PullSecrets).

    Anna can either create a template using the manifests field, covering Kubernetes or custom resources:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Or by using Helm charts:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: redis\nresources:\n  helm:\n    releaseName: redis\n    chart:\n      repository:\n        name: redis\n        repoUrl: https://charts.bitnami.com/bitnami\n    values: |\n      redisPort: 6379\n

    She can also use the resourceMappings field to copy secrets and ConfigMaps from one namespace to others:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: resource-mapping\nresources:\n  resourceMappings:\n    secrets:\n      - name: docker-secret\n        namespace: bluesky-build\n    configMaps:\n      - name: tronador-configMap\n        namespace: stakater-tronador\n

    Note: Resource mapping can be used via TGI to map resources within tenant namespaces or to some other tenant's namespace. If used with TI, the resources will only be mapped if the namespaces belong to the same tenant.

    "},{"location":"tutorials/template/template.html#using-templates-with-default-parameters","title":"Using Templates with Default Parameters","text":"
    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: namespace-parameterized-restrictions\nparameters:\n  # Name of the parameter\n  - name: DEFAULT_CPU_LIMIT\n    # The default value of the parameter\n    value: \"1\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"0.5\"\n    # If a parameter is required the template instance will need to set it\n    # required: true\n    # Make sure only values are entered for this parameter\n    validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n  manifests:\n    - apiVersion: v1\n      kind: LimitRange\n      metadata:\n        name: namespace-limit-range-${namespace}\n      spec:\n        limits:\n          - default:\n              cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n            defaultRequest:\n              cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n            type: Container\n

    Parameters can be used with both manifests and Helm charts.
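
    As an illustration only, here is a minimal sketch of a parameterized Helm-based Template; the parameter name is hypothetical, and it assumes the same ${{PARAM}} substitution shown for manifests also applies inside the Helm values block.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: redis-parameterized\nparameters:\n  - name: REDIS_PORT\n    value: '6379'\nresources:\n  helm:\n    releaseName: redis\n    chart:\n      repository:\n        name: redis\n        repoUrl: https://charts.bitnami.com/bitnami\n    # assumption: parameters are substituted in values the same way as in manifests\n    values: |\n      redisPort: ${{REDIS_PORT}}\n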

    "},{"location":"tutorials/tenant/assign-quota-tenant.html","title":"Assign Quota to a Tenant","text":""},{"location":"tutorials/tenant/assigning-metadata.html","title":"Assigning Common/Specific Metadata","text":""},{"location":"tutorials/tenant/assigning-metadata.html#distributing-common-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing common labels and annotations to tenant namespaces via Tenant Custom Resource","text":"

    Bill now wants to add labels/annotations to all the namespaces for a tenant. To do so, Bill adds them to the commonMetadata.labels and commonMetadata.annotations fields in the tenant CR.

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n  commonMetadata:\n    labels:\n      app.kubernetes.io/managed-by: tenant-operator\n      app.kubernetes.io/part-of: tenant-alpha\n    annotations:\n      openshift.io/node-selector: node-role.kubernetes.io/infra=\nEOF\n

    With the above configuration all tenant namespaces will now contain the mentioned labels and annotations.

    "},{"location":"tutorials/tenant/assigning-metadata.html#distributing-specific-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing specific labels and annotations to tenant namespaces via Tenant Custom Resource","text":"

    Bill now wants to add labels/annotations to specific namespaces for a tenant. To do so, Bill adds them to the specificMetadata.labels and specificMetadata.annotations fields, and lists the target namespaces in the specificMetadata.namespaces field of the tenant CR.

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandboxConfig:\n    enabled: true\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n  specificMetadata:\n    - namespaces:\n        - bluesky-anna-aurora-sandbox\n      labels:\n        app.kubernetes.io/is-sandbox: true\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\nEOF\n

    With the above configuration, the specified namespaces (bluesky-anna-aurora-sandbox in this case) will now contain the mentioned labels and annotations.

    "},{"location":"tutorials/tenant/create-sandbox.html","title":"Create Sandbox Namespaces for Tenant Users","text":""},{"location":"tutorials/tenant/create-sandbox.html#assigning-users-sandbox-namespace","title":"Assigning Users Sandbox Namespace","text":"

    Bill assigned the ownership of bluesky to Anna and Anthony. Now if the users want sandboxes to be made for them, they'll have to ask Bill to enable sandbox functionality.

    To enable that, Bill will just set enabled: true within the sandboxConfig field

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandboxConfig:\n    enabled: true\nEOF\n

    With the above configuration Anna and Anthony will now have new sandboxes created

    kubectl get namespaces\nNAME                             STATUS   AGE\nbluesky-anna-aurora-sandbox      Active   5d5h\nbluesky-anthony-aurora-sandbox   Active   5d5h\nbluesky-john-aurora-sandbox      Active   5d5h\n

    If Bill wants to make sure that only the sandbox owner can view their sandbox namespace, he can achieve this by setting private: true within the sandboxConfig field.

    "},{"location":"tutorials/tenant/create-sandbox.html#create-private-sandboxes","title":"Create Private Sandboxes","text":"

    Bill assigned the ownership of bluesky to Anna and Anthony. Now, if the users want sandboxes to be made for them, they'll have to ask Bill to enable sandbox functionality. The users also want to make sure that the sandboxes created for them are visible only to the user they belong to. To enable that, Bill will set enabled: true and private: true within the sandboxConfig field:

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandboxConfig:\n    enabled: true\n    private: true\nEOF\n

    With the above configuration Anna and Anthony will now have new sandboxes created

    kubectl get namespaces\nNAME                             STATUS   AGE\nbluesky-anna-aurora-sandbox      Active   5d5h\nbluesky-anthony-aurora-sandbox   Active   5d5h\nbluesky-john-aurora-sandbox      Active   5d5h\n

    However, from Anna's perspective, only her own sandbox will be visible:

    kubectl get namespaces\nNAME                             STATUS   AGE\nbluesky-anna-aurora-sandbox      Active   5d5h\n
    "},{"location":"tutorials/tenant/create-tenant.html","title":"Creating a Tenant","text":"

    Bill is a cluster admin who receives a new request from Aurora Solutions CTO asking for a new tenant for Anna's team.

    Bill creates a new tenant called bluesky in the cluster:

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandbox: false\nEOF\n

    Bill checks if the new tenant is created:

    kubectl get tenants.tenantoperator.stakater.com bluesky\nNAME       STATE    AGE\nbluesky    Active   3m\n

    Anna can now log in to the cluster and check if she can create namespaces

    kubectl auth can-i create namespaces\nyes\n

    However, cluster resources are not accessible to Anna

    kubectl auth can-i get namespaces\nno\n\nkubectl auth can-i get persistentvolumes\nno\n

    Including the Tenant resource

    kubectl auth can-i get tenants.tenantoperator.stakater.com\nno\n
    "},{"location":"tutorials/tenant/create-tenant.html#assign-multiple-users-as-tenant-owner","title":"Assign multiple users as tenant owner","text":"

    In the example above, Bill assigned the ownership of bluesky to Anna. If another user, e.g. Anthony, needs to administer bluesky, then Bill can assign the ownership of the tenant to that user as well:

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandbox: false\nEOF\n

    With the configuration above, Anthony can log in to the cluster and execute

    kubectl auth can-i create namespaces\nyes\n
    "},{"location":"tutorials/tenant/creating-namespaces.html","title":"Creating Namespaces","text":""},{"location":"tutorials/tenant/creating-namespaces.html#creating-namespaces-via-tenant-custom-resource","title":"Creating Namespaces via Tenant Custom Resource","text":"

    Bill now wants to create namespaces for the dev, build and production environments for the tenant members. To create those namespaces, Bill adds their names to the namespaces field in the tenant CR. If Bill wants the tenant name appended as a prefix to a namespace name, he can use the namespaces.withTenantPrefix field; otherwise, he can use namespaces.withoutTenantPrefix for namespaces that do not need the tenant name as a prefix.

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n    withoutTenantPrefix:\n      - prod\nEOF\n

    With the above configuration, tenant members will now see that new namespaces have been created.

    kubectl get namespaces\nNAME             STATUS   AGE\nbluesky-dev      Active   5d5h\nbluesky-build    Active   5d5h\nprod             Active   5d5h\n

    Anna as the tenant owner can create new namespaces for her tenant.

    apiVersion: v1\nkind: Namespace\nmetadata:\n  name: bluesky-production\n  labels:\n    stakater.com/tenant: bluesky\n

    \u26a0\ufe0f Anna is required to add the tenant label stakater.com/tenant: bluesky which contains the name of her tenant bluesky, while creating the namespace. If this label is not added or if Anna does not belong to the bluesky tenant, then Multi Tenant Operator will not allow the creation of that namespace.

    When Anna creates the namespace, MTO assigns Anna and other tenant members the roles based on their user types, such as a tenant owner getting the OpenShift admin role for that namespace.
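
    A quick way to verify this is to list the RoleBindings in the new namespace (a hedged example; the exact binding names created by MTO depend on your configuration):

    kubectl get rolebindings -n bluesky-production\n# expect a binding granting the admin ClusterRole to anna@aurora.org, the tenant owner\n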

    As a tenant owner, Anna is able to create namespaces.

    If you have enabled ArgoCD multi-tenancy, our preferred approach is to create tenant namespaces through the Tenant spec, to avoid sync issues in the ArgoCD console during namespace creation.

    "},{"location":"tutorials/tenant/creating-namespaces.html#add-existing-namespaces-to-tenant-via-gitops","title":"Add Existing Namespaces to Tenant via GitOps","text":"

    Using GitOps as your preferred development workflow, you can add existing namespaces for your tenants by including the tenant label.

    To add an existing namespace to your tenant via GitOps:

    1. First, migrate your namespace resource to your \u201cwatched\u201d git repository
    2. Edit your namespace YAML to include the tenant label, which follows the naming convention stakater.com/tenant: <TENANT_NAME>
    3. Sync your GitOps repository with your cluster and allow the changes to be propagated
    4. Verify that your tenant users now have access to the namespace

    For example, if Anna, a tenant owner, wants to add the namespace bluesky-dev to her tenant via GitOps, she first migrates her namespace manifest to a \u201cwatched repository\u201d:

    apiVersion: v1\nkind: Namespace\nmetadata:\n  name: bluesky-dev\n

    She can then add the tenant label

     ...\n  labels:\n    stakater.com/tenant: bluesky\n

    All the users of the bluesky tenant now have access to the existing namespace.

    Additionally, to remove namespaces from a tenant, simply remove the tenant label from the namespace resource and sync your changes to your cluster.
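
    For illustration, a sketch of the earlier bluesky-dev manifest with the tenant label removed; after the next sync, the namespace would no longer be part of the bluesky tenant.

    apiVersion: v1\nkind: Namespace\nmetadata:\n  name: bluesky-dev\n  # the stakater.com/tenant: bluesky label has been removed\n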

    "},{"location":"tutorials/tenant/creating-namespaces.html#remove-namespaces-from-your-cluster-via-gitops","title":"Remove Namespaces from your Cluster via GitOps","text":"

    GitOps is a quick and efficient way to automate the management of your K8s resources.

    To remove namespaces from your cluster via GitOps:

    "},{"location":"tutorials/tenant/custom-rbac.html","title":"Applying Custom RBAC to a Tenant","text":""},{"location":"tutorials/tenant/deleting-tenant.html","title":"Deleting a Tenant","text":""},{"location":"tutorials/tenant/deleting-tenant.html#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted","title":"Retaining tenant namespaces and AppProject when a tenant is being deleted","text":"

    Bill now wants to delete tenant bluesky while retaining all namespaces and the AppProject of the tenant. To do so, Bill will set spec.onDelete.cleanNamespaces and spec.onDelete.cleanAppProject to false.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  quota: small\n  sandboxConfig:\n    enabled: true\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n  onDelete:\n    cleanNamespaces: false\n    cleanAppProject: false\n

    With the above configuration, the tenant's namespaces and AppProject will not be deleted when tenant bluesky is deleted. By default, the value of spec.onDelete.cleanNamespaces is also false, while spec.onDelete.cleanAppProject is true.

    "},{"location":"tutorials/tenant/tenant-hibernation.html","title":"Hibernating a Tenant","text":""},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-namespaces","title":"Hibernating Namespaces","text":"

    You can manage workloads in your cluster with MTO by implementing a hibernation schedule for your tenants. Hibernation downsizes the running Deployments and StatefulSets in a tenant\u2019s namespace according to a defined cron schedule. You can set a hibernation schedule for your tenants by adding the \u2018spec.hibernation\u2019 field to the tenant's respective Custom Resource.

    hibernation:\n  sleepSchedule: 23 * * * *\n  wakeSchedule: 26 * * * *\n

    spec.hibernation.sleepSchedule accepts a cron expression indicating the time to put the workloads in your tenant\u2019s namespaces to sleep.

    spec.hibernation.wakeSchedule accepts a cron expression indicating the time to wake the workloads in your tenant\u2019s namespaces up.

    Note

    Both sleep and wake schedules must be specified for your Hibernation schedule to be valid.

    Additionally, adding the hibernation.stakater.com/exclude: 'true' annotation to a namespace excludes it from hibernating.

    Note

    This is only true for hibernation applied via the Tenant Custom Resource, and does not apply to hibernation done by manually creating a ResourceSupervisor (details about that below).

    Note

    This will not wake up an already sleeping namespace before the wake schedule.
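
    As an illustration of the exclusion annotation mentioned above (the namespace name is hypothetical):

    apiVersion: v1\nkind: Namespace\nmetadata:\n  name: sigma-build\n  annotations:\n    hibernation.stakater.com/exclude: 'true'\n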

    "},{"location":"tutorials/tenant/tenant-hibernation.html#resource-supervisor","title":"Resource Supervisor","text":"

    Adding a Hibernation Schedule to a Tenant creates an accompanying ResourceSupervisor Custom Resource. The Resource Supervisor stores the Hibernation schedules and manages the current and previous states of all the applications, whether they are sleeping or awake.

    When the sleep timer is activated, the Resource Supervisor controller stores the details of your applications (including the number of replicas, configurations, etc.) in the applications' namespaces and then puts your applications to sleep. When the wake timer is activated, the controller wakes up the applications using their stored details.

    Enabling ArgoCD support for Tenants will also hibernate applications in the tenants' appProjects.

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: sigma\nspec:\n  argocd:\n    appProjects:\n      - sigma\n    namespace: openshift-gitops\n  hibernation:\n    sleepSchedule: 42 * * * *\n    wakeSchedule: 45 * * * *\n  namespaces:\n    - tenant-ns1\n    - tenant-ns2\n

    Currently, hibernation is available only for StatefulSets and Deployments.

    "},{"location":"tutorials/tenant/tenant-hibernation.html#manual-creation-of-resourcesupervisor","title":"Manual creation of ResourceSupervisor","text":"

    Hibernation can also be applied by creating a ResourceSupervisor resource manually. The ResourceSupervisor definition will contain the hibernation cron schedule, the names of the namespaces to be hibernated, and the names of the ArgoCD AppProjects whose ArgoCD Applications have to be hibernated (as per the given schedule).

    This method can be used to hibernate:

    As an example, the following ResourceSupervisor could be created manually, to apply hibernation explicitly to the 'ns1' and 'ns2' namespaces, and to the 'sample-app-project' AppProject.

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: hibernator\nspec:\n  argocd:\n    appProjects:\n      - sample-app-project\n    namespace: openshift-gitops\n  hibernation:\n    sleepSchedule: 42 * * * *\n    wakeSchedule: 45 * * * *\n  namespaces:\n    - ns1\n    - ns2\n
    "},{"location":"tutorials/tenant/tenant-hibernation.html#freeing-up-unused-resources-with-hibernation","title":"Freeing up unused resources with hibernation","text":""},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-a-tenant_1","title":"Hibernating a tenant","text":"

    Bill is a cluster administrator who wants to free up unused cluster resources at nighttime, in an effort to reduce costs (when the cluster isn't being used).

    First, Bill creates a tenant with the hibernation schedules mentioned in the spec, or adds the hibernation field to an existing tenant:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\nspec:\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  namespaces:\n    withoutTenantPrefix:\n      - build\n      - stage\n      - dev\n

    The schedules above will put all the Deployments and StatefulSets within the tenant's namespaces to sleep, by reducing their pod count to 0 at 8 PM every weekday. At 8 AM on weekdays, the namespaces will then wake up by restoring their applications' previous pod counts.

    Bill can verify this behaviour by checking the newly created ResourceSupervisor resource at run time:

    oc get ResourceSupervisor -A\nNAME           AGE\nsigma          5m\n

    The ResourceSupervisor will look like this at 'running' time (as per the schedule):

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: example\nspec:\n  argocd:\n    appProjects: []\n    namespace: ''\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - build\n    - stage\n    - dev\nstatus:\n  currentStatus: running\n  nextReconcileTime: '2022-10-12T20:00:00Z'\n

    The ResourceSupervisor will look like this at 'sleeping' time (as per the schedule):

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: example\nspec:\n  argocd:\n    appProjects: []\n    namespace: ''\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - build\n    - stage\n    - dev\nstatus:\n  currentStatus: sleeping\n  nextReconcileTime: '2022-10-13T08:00:00Z'\n  sleepingApplications:\n    - Namespace: build\n      kind: Deployment\n      name: example\n      replicas: 3\n    - Namespace: stage\n      kind: Deployment\n      name: example\n      replicas: 3\n

    Bill wants to prevent the build namespace from going to sleep, so he can add the hibernation.stakater.com/exclude: 'true' annotation to it. The ResourceSupervisor will now look like this after reconciling:

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: example\nspec:\n  argocd:\n    appProjects: []\n    namespace: ''\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - stage\n    - dev\nstatus:\n  currentStatus: sleeping\n  nextReconcileTime: '2022-10-13T08:00:00Z'\n  sleepingApplications:\n    - Namespace: stage\n      kind: Deployment\n      name: example\n      replicas: 3\n
    "},{"location":"tutorials/tenant/tenant-hibernation.html#hibernating-namespaces-andor-argocd-applications-with-resourcesupervisor","title":"Hibernating namespaces and/or ArgoCD Applications with ResourceSupervisor","text":"

    Bill, the cluster administrator, wants to hibernate a collection of namespaces and AppProjects belonging to multiple different tenants. He can do so by creating a ResourceSupervisor manually, specifying the hibernation schedule in its spec, along with the namespaces and ArgoCD Applications that need to be hibernated as per that schedule. Bill can also use the same method to hibernate namespaces and ArgoCD Applications that do not belong to any tenant on his cluster.

    The example given below will hibernate the ArgoCD Applications in the 'test-app-project' AppProject, and it will also hibernate the 'ns2' and 'ns4' namespaces.

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: test-resource-supervisor\nspec:\n  argocd:\n    appProjects:\n      - test-app-project\n    namespace: argocd-ns\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - ns2\n    - ns4\nstatus:\n  currentStatus: sleeping\n  nextReconcileTime: '2022-10-13T08:00:00Z'\n  sleepingApplications:\n    - Namespace: ns2\n      kind: Deployment\n      name: test-deployment\n      replicas: 3\n
    "},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html","title":"Enabling Multi-Tenancy in Vault","text":""},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#vault-multitenancy","title":"Vault Multitenancy","text":"

    HashiCorp Vault is an identity-based secret and encryption management system. Vault validates and authorizes a system's clients (users, machines, apps) before providing them access to secrets or stored sensitive data.

    "},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#vault-integration-in-multi-tenant-operator","title":"Vault integration in Multi Tenant Operator","text":""},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#service-account-auth-in-vault","title":"Service Account Auth in Vault","text":"

    MTO enables the Kubernetes auth method, which can be used to authenticate with Vault using a Kubernetes Service Account Token. When enabled, for every tenant namespace, MTO automatically creates policies and roles that allow the service accounts present in those namespaces to read secrets at the tenant's path in Vault. The name of the role is the same as the namespace name.

    These service accounts are required to have the stakater.com/vault-access: true label so that they can be authenticated with Vault via MTO.
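
    A minimal sketch of such a service account (the account name and namespace are illustrative):

    apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: app-sa\n  namespace: bluesky-dev\n  labels:\n    stakater.com/vault-access: 'true'\n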

    The diagram below shows how MTO enables ServiceAccounts to read secrets from Vault.

    "},{"location":"tutorials/vault/enabling-multi-tenancy-vault.html#user-oidc-auth-in-vault","title":"User OIDC Auth in Vault","text":"

    This requires a running RHSSO(RedHat Single Sign On) instance integrated with Vault over OIDC login method.

    MTO integration with Vault and RHSSO provides a way for users to log in to Vault where they only have access to relevant tenant paths.

    Once both integrations are set up with IntegrationConfig CR, MTO links tenant users to specific client roles named after their tenant under Vault client in RHSSO.

    After that, MTO creates specific policies in Vault for its tenant users.

    Mapping of tenant roles to Vault is shown below

    | Tenant Role | Vault Path | Vault Capabilities |\n| --- | --- | --- |\n| Owner, Editor | (tenantName)/* | Create, Read, Update, Delete, List |\n| Owner, Editor | sys/mounts/(tenantName)/* | Create, Read, Update, Delete, List |\n| Owner, Editor | managed-addons/* | Read, List |\n| Viewer | (tenantName)/* | Read |\n

    A simple user login workflow is shown in the diagram below.

    "},{"location":"usecases/admin-clusterrole.html","title":"Extending Admin ClusterRole","text":"

    Bill, as the cluster admin, wants to add additional rules to the admin ClusterRole.

    Bill can extend the admin role for MTO using the aggregation label for the admin ClusterRole. Bill will create a new ClusterRole with all the additional permissions and add the aggregation label to the newly created ClusterRole:

    kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: extend-admin-role\n  labels:\n    rbac.authorization.k8s.io/aggregate-to-admin: 'true'\nrules:\n  - verbs:\n      - create\n      - update\n      - patch\n      - delete\n    apiGroups:\n      - user.openshift.io\n    resources:\n      - groups\n

    Note: You can learn more about aggregated ClusterRoles here

    "},{"location":"usecases/admin-clusterrole.html#whats-next","title":"What\u2019s next","text":"

    See how Bill can hibernate unused namespaces at night

    "},{"location":"usecases/argocd.html","title":"ArgoCD","text":""},{"location":"usecases/argocd.html#creating-argocd-appprojects-for-your-tenant","title":"Creating ArgoCD AppProjects for your tenant","text":"

    Bill wants each tenant to also have their own ArgoCD AppProjects. To make sure this happens correctly, Bill will first specify the ArgoCD namespace in the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  ...\n  argocd:\n    namespace: openshift-operators\n  ...\n

    Afterwards, Bill must specify the source GitOps repos for the tenant inside the tenant CR like so:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\nspec:\n  argocd:\n    sourceRepos:\n      # specify source repos here\n      - \"https://github.com/stakater/GitOps-config\"\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  sandbox: false\n  namespaces:\n    withTenantPrefix:\n      - build\n      - stage\n      - dev\n

    Now Bill can see that an AppProject has been created for the tenant:

    oc get AppProject -A\nNAMESPACE             NAME           AGE\nopenshift-operators   sigma        5d15h\n

    The following AppProject is created:

    apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n  name: sigma\n  namespace: openshift-operators\nspec:\n  destinations:\n    - namespace: sigma-build\n      server: \"https://kubernetes.default.svc\"\n    - namespace: sigma-dev\n      server: \"https://kubernetes.default.svc\"\n    - namespace: sigma-stage\n      server: \"https://kubernetes.default.svc\"\n  roles:\n    - description: >-\n        Role that gives full access to all resources inside the tenant's\n        namespace to the tenant owner group\n      groups:\n        - saap-cluster-admins\n        - stakater-team\n        - sigma-owner-group\n      name: sigma-owner\n      policies:\n        - \"p, proj:sigma:sigma-owner, *, *, sigma/*, allow\"\n    - description: >-\n        Role that gives edit access to all resources inside the tenant's\n        namespace to the tenant owner group\n      groups:\n        - saap-cluster-admins\n        - stakater-team\n        - sigma-edit-group\n      name: sigma-edit\n      policies:\n        - \"p, proj:sigma:sigma-edit, *, *, sigma/*, allow\"\n    - description: >-\n        Role that gives view access to all resources inside the tenant's\n        namespace to the tenant owner group\n      groups:\n        - saap-cluster-admins\n        - stakater-team\n        - sigma-view-group\n      name: sigma-view\n      policies:\n        - \"p, proj:sigma:sigma-view, *, get, sigma/*, allow\"\n  sourceRepos:\n    - \"https://github.com/stakater/gitops-config\"\n

    Users belonging to the Sigma group will now only see applications created by them in the ArgoCD frontend:

    "},{"location":"usecases/argocd.html#prevent-argocd-from-syncing-certain-namespaced-resources","title":"Prevent ArgoCD from syncing certain namespaced resources","text":"

    Bill wants tenants to not be able to sync ResourceQuota and LimitRange resources to their namespaces. To do this correctly, Bill will specify these resources to blacklist in the ArgoCD portion of the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  ...\n  argocd:\n    namespace: openshift-operators\n    namespaceResourceBlacklist:\n      - group: \"\"\n        kind: ResourceQuota\n      - group: \"\"\n        kind: LimitRange\n  ...\n

    Now, if these resources are added to any tenant's project directory in GitOps, ArgoCD will not sync them to the cluster. The AppProject will also have the blacklisted resources added to it:

    apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n  name: sigma\n  namespace: openshift-operators\nspec:\n  ...\n  namespaceResourceBlacklist:\n    - group: ''\n      kind: ResourceQuota\n    - group: ''\n      kind: LimitRange\n  ...\n
    "},{"location":"usecases/argocd.html#allow-argocd-to-sync-certain-cluster-wide-resources","title":"Allow ArgoCD to sync certain cluster-wide resources","text":"

    Bill now wants tenants to be able to sync the Environment cluster scoped resource to the cluster. To do this correctly, Bill will specify the resource to allow-list in the ArgoCD portion of the Integration Config's Spec:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  ...\n  argocd:\n    namespace: openshift-operators\n    clusterResourceWhitelist:\n      - group: \"\"\n        kind: Environment\n  ...\n

    Now, if the resource is added to any tenant's project directory in GitOps, ArgoCD will sync it to the cluster. The AppProject will also have the allow-listed resources added to it:

    apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n  name: sigma\n  namespace: openshift-operators\nspec:\n  ...\n  clusterResourceWhitelist:\n  - group: \"\"\n    kind: Environment\n  ...\n
    "},{"location":"usecases/argocd.html#override-namespaceresourceblacklist-andor-clusterresourcewhitelist-per-tenant","title":"Override NamespaceResourceBlacklist and/or ClusterResourceWhitelist per Tenant","text":"

    Bill now wants a specific tenant to override the namespaceResourceBlacklist and/or clusterResourceWhitelist set via Integration Config. Bill will specify these in the argocd.appProject section of the Tenant spec.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: blue-sky\nspec:\n  argocd:\n    sourceRepos:\n      # specify source repos here\n      - \"https://github.com/stakater/GitOps-config\"\n    appProject:\n      clusterResourceWhitelist:\n        - group: admissionregistration.k8s.io\n          kind: validatingwebhookconfigurations\n      namespaceResourceBlacklist:\n        - group: \"\"\n          kind: ConfigMap\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  sandbox: false\n  namespaces:\n    withTenantPrefix:\n      - build\n      - stage\n
    "},{"location":"usecases/configuring-multitenant-network-isolation.html","title":"Configuring Multi-Tenant Isolation with Network Policy Template","text":"

    Bill is a cluster admin who wants to configure network policies to provide multi-tenant network isolation.

    First, Bill creates a template for network policies:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: tenant-network-policy\nresources:\n  manifests:\n  - apiVersion: networking.k8s.io/v1\n    kind: NetworkPolicy\n    metadata:\n      name: allow-same-namespace\n    spec:\n      podSelector: {}\n      ingress:\n      - from:\n        - podSelector: {}\n  - apiVersion: networking.k8s.io/v1\n    kind: NetworkPolicy\n    metadata:\n      name: allow-from-openshift-monitoring\n    spec:\n      ingress:\n      - from:\n        - namespaceSelector:\n            matchLabels:\n              network.openshift.io/policy-group: monitoring\n      podSelector: {}\n      policyTypes:\n      - Ingress\n  - apiVersion: networking.k8s.io/v1\n    kind: NetworkPolicy\n    metadata:\n      name: allow-from-openshift-ingress\n    spec:\n      ingress:\n      - from:\n        - namespaceSelector:\n            matchLabels:\n              network.openshift.io/policy-group: ingress\n      podSelector: {}\n      policyTypes:\n      - Ingress\n

    Once the template has been created, Bill edits the IntegrationConfig to add a unique label to all tenant projects:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    project:\n      labels:\n        stakater.com/workload-monitoring: \"true\"\n        tenant-network-policy: \"true\"\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\n    sandbox:\n      labels:\n        stakater.com/kind: sandbox\n    privilegedNamespaces:\n      - default\n      - ^openshift-*\n      - ^kube-*\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:openshift-*\n      - ^system:serviceaccount:kube-*\n

    Bill has added a new label tenant-network-policy: \"true\" in the project section of the IntegrationConfig, so MTO will now add that label to all tenant projects.

    Finally, Bill creates a TemplateGroupInstance which will distribute the network policies using the newly added project label and template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: tenant-network-policy-group\nspec:\n  template: tenant-network-policy\n  selector:\n    matchLabels:\n      tenant-network-policy: \"true\"\n  sync: true\n

    MTO will now deploy the network policies mentioned in Template to all projects matching the label selector mentioned in the TemplateGroupInstance.
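
    To spot-check the result, Bill can list the network policies in any labelled tenant namespace (the namespace name below is illustrative):

    kubectl get networkpolicy -n sigma-dev\n# expect allow-same-namespace, allow-from-openshift-monitoring and allow-from-openshift-ingress\n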

    "},{"location":"usecases/custom-roles.html","title":"Changing the default access level for tenant owners","text":"

    This feature allows cluster admins to change the default roles assigned to tenant owner, editor, and viewer groups.

    For example, Bill as the cluster admin may want to reduce the privileges that tenant owners have, so that they cannot create or edit Roles or bind them. As an admin of an OpenShift cluster, Bill can do this by assigning the edit role to all tenant owners. This is easily achieved by modifying the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  tenantRoles:\n    default:\n      owner:\n        clusterRoles:\n          - edit\n      editor:\n        clusterRoles:\n          - edit\n      viewer:\n        clusterRoles:\n          - view\n

    Once all namespaces reconcile, the old admin RoleBindings should get replaced with the edit ones for each tenant owner.

    "},{"location":"usecases/custom-roles.html#giving-specific-permissions-to-some-tenants","title":"Giving specific permissions to some tenants","text":"

    Bill now wants the owners of the tenants bluesky and alpha to have admin permissions over their namespaces. Custom roles feature will allow Bill to do this, by modifying the IntegrationConfig like this:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  tenantRoles:\n    default:\n      owner:\n        clusterRoles:\n          - edit\n      editor:\n        clusterRoles:\n          - edit\n      viewer:\n        clusterRoles:\n          - view\n    custom:\n    - labelSelector:\n        matchExpressions:\n        - key: stakater.com/tenant\n          operator: In\n          values:\n            - alpha\n      owner:\n        clusterRoles:\n          - admin\n    - labelSelector:\n        matchExpressions:\n        - key: stakater.com/tenant\n          operator: In\n          values:\n            - bluesky\n      owner:\n        clusterRoles:\n          - admin\n

    New Bindings will be created for the Tenant owners of bluesky and alpha, corresponding to the admin Role. Bindings for editors and viewers will be inherited from the default roles. All other Tenant owners will have an edit Role bound to them within their namespaces.

    "},{"location":"usecases/deploying-templates.html","title":"Distributing Resources in Namespaces","text":"

    Multi Tenant Operator has three custom resources which, together with the Template CR, can cover this need, depending upon the conditions and preference:

    1. TemplateGroupInstance
    2. TemplateInstance
    3. Tenant

    Stakater Team, however, encourages the use of TemplateGroupInstance to distribute resources in multiple namespaces as it is optimized for better performance.

    "},{"location":"usecases/deploying-templates.html#deploying-template-to-namespaces-via-templategroupinstances","title":"Deploying Template to Namespaces via TemplateGroupInstances","text":"

    Bill, the cluster admin, wants to deploy a docker pull secret in namespaces where certain labels exist.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Once the template has been created, Bill makes a TemplateGroupInstance referring to the Template he wants to deploy with MatchLabels:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchLabels:\n      kind: build\n  sync: true\n

    Afterwards, Bill can see that the secret has been successfully created in all label-matching namespaces.

    kubectl get secret docker-secret -n bluesky-anna-aurora-sandbox\nNAME             STATE    AGE\ndocker-secret    Active   3m\n\nkubectl get secret docker-secret -n alpha-dave-aurora-sandbox\nNAME             STATE    AGE\ndocker-secret    Active   2m\n

    TemplateGroupInstance can also target specific tenants or all tenant namespaces with a single YAML definition.

    "},{"location":"usecases/deploying-templates.html#templategroupinstance-for-multiple-tenants","title":"TemplateGroupInstance for multiple Tenants","text":"

    This can be done by using the matchExpressions field, splitting the tenant label into its key and values.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: In\n      values:\n        - alpha\n        - beta\n  sync: true\n
    "},{"location":"usecases/deploying-templates.html#templategroupinstance-for-all-tenants","title":"TemplateGroupInstance for all Tenants","text":"

    This can also be done by using the matchExpressions field, using just the tenant label key stakater.com/tenant.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: Exists\n  sync: true\n
    "},{"location":"usecases/deploying-templates.html#deploying-template-to-namespaces-via-tenant","title":"Deploying Template to Namespaces via Tenant","text":"

    Bill is a cluster admin who wants to deploy a docker-pull-secret in Anna's tenant namespaces where certain labels exist.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Once the template has been created, Bill edits Anna's tenant and populates the templateInstances field:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n  quota: small\n  sandboxConfig:\n    enabled: true\n  templateInstances:\n  - spec:\n      template: docker-pull-secret\n    selector:\n      matchLabels:\n        kind: build\n

    Multi Tenant Operator will deploy the TemplateInstances mentioned in the templateInstances field. The TemplateInstances will only be applied in those namespaces which belong to Anna's tenant and have the matching label kind: build.

    So now Anna adds the label kind: build to her existing namespace bluesky-anna-aurora-sandbox, and after adding the label she sees that the secret has been created.

    kubectl get secret docker-secret -n bluesky-anna-aurora-sandbox\nNAME                  STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"usecases/deploying-templates.html#deploying-template-to-a-namespace-via-templateinstance","title":"Deploying Template to a Namespace via TemplateInstance","text":"

    Anna wants to deploy a docker pull secret in her namespace.

    First Anna asks Bill, the cluster admin, to create a template of the secret for her:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Once the template has been created, Anna creates a TemplateInstance in her namespace referring to the Template she wants to deploy:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: docker-pull-secret-instance\n  namespace: bluesky-anna-aurora-sandbox\nspec:\n  template: docker-pull-secret\n  sync: true\n

    Once this is created, Anna can see that the secret has been successfully applied.

    kubectl get secret docker-secret -n bluesky-anna-aurora-sandbox\nNAME                  STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"usecases/deploying-templates.html#passing-parameters-to-template-via-templateinstance-templategroupinstance-or-tenant","title":"Passing Parameters to Template via TemplateInstance, TemplateGroupInstance or Tenant","text":"

    Anna wants to deploy a LimitRange resource to certain namespaces.

    First, Anna asks Bill, the cluster admin, to create a template with parameters for a LimitRange for her:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: namespace-parameterized-restrictions\nparameters:\n  # Name of the parameter\n  - name: DEFAULT_CPU_LIMIT\n    # The default value of the parameter\n    value: \"1\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"0.5\"\n    # If a parameter is required the template instance will need to set it\n    # required: true\n    # Make sure only values are entered for this parameter\n    validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n  manifests:\n    - apiVersion: v1\n      kind: LimitRange\n      metadata:\n        name: namespace-limit-range-${namespace}\n      spec:\n        limits:\n          - default:\n              cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n            defaultRequest:\n              cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n            type: Container\n

    Afterwards, Anna creates a TemplateInstance in her namespace referring to the Template she wants to deploy:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: namespace-parameterized-restrictions-instance\n  namespace: bluesky-anna-aurora-sandbox\nspec:\n  template: namespace-parameterized-restrictions\n  sync: true\nparameters:\n  - name: DEFAULT_CPU_LIMIT\n    value: \"1.5\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"1\"\n

    If she wants to distribute the same Template over multiple namespaces, she can use TemplateGroupInstance.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: namespace-parameterized-restrictions-tgi\nspec:\n  template: namespace-parameterized-restrictions\n  sync: true\n  selector:\n    matchExpressions:\n    - key: stakater.com/tenant\n      operator: In\n      values:\n        - alpha\n        - beta\nparameters:\n  - name: DEFAULT_CPU_LIMIT\n    value: \"1.5\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"1\"\n

    Or she can use her tenant to cover only the tenant namespaces.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n  quota: small\n  sandboxConfig:\n    enabled: true\n  templateInstances:\n  - spec:\n      template: namespace-parameterized-restrictions\n      sync: true\n    parameters:\n      - name: DEFAULT_CPU_LIMIT\n        value: \"1.5\"\n      - name: DEFAULT_CPU_REQUESTS\n        value: \"1\"\n    selector:\n      matchLabels:\n        kind: build\n
    "},{"location":"usecases/distributing-resources.html","title":"Copying Secrets and Configmaps across Tenant Namespaces via TGI","text":"

    Bill is a cluster admin who wants to map a docker-pull-secret, present in a build namespace, into tenant namespaces where certain labels exist.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  resourceMappings:\n    secrets:\n      - name: docker-pull-secret\n        namespace: build\n

    Once the template has been created, Bill makes a TemplateGroupInstance referring to the Template he wants to deploy with MatchLabels:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: docker-secret-group-instance\nspec:\n  template: docker-pull-secret\n  selector:\n    matchLabels:\n      kind: build\n  sync: true\n

    Afterwards, Bill can see that the secret has been successfully mapped in all matching namespaces.

    kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME             STATE    AGE\ndocker-pull-secret    Active   3m\n\nkubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox\nNAME             STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"usecases/distributing-resources.html#mapping-resources-within-tenant-namespaces-via-ti","title":"Mapping Resources within Tenant Namespaces via TI","text":"

    Anna is a tenant owner who wants to map a docker-pull-secret, present in the bluesky-build namespace, to the bluesky-anna-aurora-sandbox namespace.

    First, Bill creates a template:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  resourceMappings:\n    secrets:\n      - name: docker-pull-secret\n        namespace: bluesky-build\n

    Once the template has been created, Anna creates a TemplateInstance in bluesky-anna-aurora-sandbox namespace, referring to the Template.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateInstance\nmetadata:\n  name: docker-secret-instance\n  namespace: bluesky-anna-aurora-sandbox\nspec:\n  template: docker-pull-secret\n  sync: true\n

    Afterwards, Bill can see that the secret has been successfully mapped into the bluesky-anna-aurora-sandbox namespace.

    kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox\nNAME             STATE    AGE\ndocker-pull-secret    Active   3m\n
    "},{"location":"usecases/distributing-secrets-using-sealed-secret-template.html","title":"Distributing Secrets Using Sealed Secrets Template","text":"

    Bill is a cluster admin who wants to provide a mechanism for distributing secrets across multiple namespaces. For this, he wants to use Sealed Secrets, adding them to the MTO Template CR.

    First, Bill creates a Template in which the SealedSecret is mentioned:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: tenant-sealed-secret\nresources:\n  manifests:\n  - kind: SealedSecret\n    apiVersion: bitnami.com/v1alpha1\n    metadata:\n      name: mysecret\n    spec:\n      encryptedData:\n        .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq.....\n      template:\n        type: kubernetes.io/dockerconfigjson\n        # this is an example of labels and annotations that will be added to the output secret\n        metadata:\n          labels:\n            \"jenkins.io/credentials-type\": usernamePassword\n          annotations:\n            \"jenkins.io/credentials-description\": credentials from Kubernetes\n

    Once the template has been created, Bill has to edit the Tenant to add a unique label to the namespaces in which the secret has to be deployed. For this, he can use the support for common and specific labels across namespaces.

    Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n\n  # use this if you want to add label to some specific namespaces\n  specificMetadata:\n    - namespaces:\n        - test-namespace\n      labels:\n        distribute-image-pull-secret: \"true\"\n\n  # use this if you want to add label to all namespaces under your tenant\n  commonMetadata:\n    labels:\n      distribute-image-pull-secret: \"true\"\n

    Bill has now added the label distribute-image-pull-secret: 'true' for the tenant's projects/namespaces; MTO will apply it to all namespaces or only to specific ones, depending on which field he used.

    Finally, Bill creates a TemplateGroupInstance, which will deploy the sealed secrets using the newly created namespace label and the Template.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: tenant-sealed-secret\nspec:\n  template: tenant-sealed-secret\n  selector:\n    matchLabels:\n      distribute-image-pull-secret: true\n  sync: true\n

    MTO will now deploy the sealed secrets mentioned in the Template to the namespaces that have the mentioned label. The remaining work of unsealing them into regular Secrets is done by the Sealed Secrets controller.
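
    Once the Sealed Secrets controller has unsealed it, a plain Secret named mysecret should appear in the labelled namespaces. A quick check (a sketch, assuming the bluesky-dev namespace from the Tenant above carries the distribute-image-pull-secret label):

    kubectl get secret mysecret -n bluesky-dev\n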

    "},{"location":"usecases/extend-default-roles.html","title":"Extending the default access level for tenant members","text":"

    Bill, as the cluster admin, wants to extend the default access for tenant members. As an admin of an OpenShift cluster, Bill can extend the default admin, edit, and view ClusterRoles using aggregation. Bill will first create a ClusterRole with privileges to the resources he wants to add, and then attach the relevant aggregation label to the newly created ClusterRole so that it extends the default ClusterRoles provided by OpenShift.

    kind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: extend-view-role\n  labels:\n    rbac.authorization.k8s.io/aggregate-to-view: 'true'\nrules:\n  - verbs:\n      - get\n      - list\n      - watch\n    apiGroups:\n      - user.openshift.io\n    resources:\n      - groups\n

    Note: You can learn more about aggregated ClusterRoles here

    "},{"location":"usecases/hibernation.html","title":"Freeing up unused resources with hibernation","text":""},{"location":"usecases/hibernation.html#hibernating-a-tenant","title":"Hibernating a tenant","text":"

    Bill is a cluster administrator who wants to free up unused cluster resources at nighttime, in an effort to reduce costs (when the cluster isn't being used).

    First, Bill creates a tenant with the hibernation schedules mentioned in the spec, or adds the hibernation field to an existing tenant:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\nspec:\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  namespaces:\n    withoutTenantPrefix:\n      - build\n      - stage\n      - dev\n

    The schedules above will put all the Deployments and StatefulSets within the tenant's namespaces to sleep, by reducing their pod count to 0 at 8 PM every weekday. At 8 AM on weekdays, the namespaces will then wake up by restoring their applications' previous pod counts.

    Bill can verify this behaviour by checking the newly created ResourceSupervisor resource at run time:

    oc get ResourceSupervisor -A\nNAME           AGE\nsigma          5m\n

    The ResourceSupervisor will look like this at 'running' time (as per the schedule):

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: example\nspec:\n  argocd:\n    appProjects: []\n    namespace: ''\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - build\n    - stage\n    - dev\nstatus:\n  currentStatus: running\n  nextReconcileTime: '2022-10-12T20:00:00Z'\n

    The ResourceSupervisor will look like this at 'sleeping' time (as per the schedule):

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: example\nspec:\n  argocd:\n    appProjects: []\n    namespace: ''\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - build\n    - stage\n    - dev\nstatus:\n  currentStatus: sleeping\n  nextReconcileTime: '2022-10-13T08:00:00Z'\n  sleepingApplications:\n    - Namespace: build\n      kind: Deployment\n      name: example\n      replicas: 3\n    - Namespace: stage\n      kind: Deployment\n      name: example\n      replicas: 3\n

    Bill wants to prevent the build namespace from going to sleep, so he can add the hibernation.stakater.com/exclude: 'true' annotation to it. The ResourceSupervisor will now look like this after reconciling:

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: example\nspec:\n  argocd:\n    appProjects: []\n    namespace: ''\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - stage\n    - dev\nstatus:\n  currentStatus: sleeping\n  nextReconcileTime: '2022-10-13T08:00:00Z'\n  sleepingApplications:\n    - Namespace: stage\n      kind: Deployment\n      name: example\n      replicas: 3\n
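
    For reference, a minimal way to apply that exclusion annotation (a sketch, assuming Bill uses kubectl against the cluster and the namespace is the build namespace from the tenant above):

    kubectl annotate namespace build hibernation.stakater.com/exclude='true'\n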
    "},{"location":"usecases/hibernation.html#hibernating-namespaces-andor-argocd-applications-with-resourcesupervisor","title":"Hibernating namespaces and/or ArgoCD Applications with ResourceSupervisor","text":"

    Bill, the cluster administrator, wants to hibernate a collection of namespaces and AppProjects belonging to multiple different tenants. He can do so by manually creating a ResourceSupervisor, specifying the hibernation schedule in its spec along with the namespaces and ArgoCD Applications that need to be hibernated according to that schedule. Bill can also use the same method to hibernate namespaces and ArgoCD Applications that do not belong to any tenant on his cluster.

    The example given below will hibernate the ArgoCD Applications in the 'test-app-project' AppProject; and it will also hibernate the 'ns2' and 'ns4' namespaces.

    apiVersion: tenantoperator.stakater.com/v1beta1\nkind: ResourceSupervisor\nmetadata:\n  name: test-resource-supervisor\nspec:\n  argocd:\n    appProjects:\n      - test-app-project\n    namespace: argocd-ns\n  hibernation:\n    sleepSchedule: 0 20 * * 1-5\n    wakeSchedule: 0 8 * * 1-5\n  namespaces:\n    - ns2\n    - ns4\nstatus:\n  currentStatus: sleeping\n  nextReconcileTime: '2022-10-13T08:00:00Z'\n  sleepingApplications:\n    - Namespace: ns2\n      kind: Deployment\n      name: test-deployment\n      replicas: 3\n
    "},{"location":"usecases/integrationconfig.html","title":"Configuring Managed Namespaces and ServiceAccounts in IntegrationConfig","text":"

    Bill is a cluster admin who can use IntegrationConfig to configure how Multi Tenant Operator (MTO) manages the cluster.

    By default, MTO watches all namespaces and will enforce all the governing policies on them. All namespaces managed by MTO require the stakater.com/tenant label. MTO ignores privileged namespaces that are mentioned in the IntegrationConfig, and does not manage them. Therefore, any tenant label on such namespaces will be ignored.

    oc create namespace stakater-test\nError from server (Cannot Create namespace stakater-test without label stakater.com/tenant. User: Bill): admission webhook \"vnamespace.kb.io\" denied the request: Cannot CREATE namespace stakater-test without label stakater.com/tenant. User: Bill\n

    Bill is trying to create a namespace without the stakater.com/tenant label. Creating a namespace without this label is only allowed if the namespace is privileged. Privileged namespaces will be ignored by MTO and do not require the said label. Therefore, Bill will add the required regex in the IntegrationConfig, along with any other namespaces which are privileged and should be ignored by MTO - like default, or namespaces with prefixes like openshift, kube:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedNamespaces:\n      - ^default$\n      - ^openshift*\n      - ^kube*\n      - ^stakater*\n

    After mentioning the required regex (^stakater*) under privilegedNamespaces, Bill can create the namespace without interference.

    oc create namespace stakater-test\nnamespace/stakater-test created\n

    MTO will also disallow all users who are not tenant owners from performing CRUD operations on namespaces. This also prevents ServiceAccounts from performing CRUD operations on them.

    If Bill wants MTO to ignore Service Accounts, then he would simply have to add them in the IntegrationConfig:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedServiceAccounts:\n      - system:serviceaccount:openshift\n      - system:serviceaccount:stakater\n      - system:serviceaccount:kube\n      - system:serviceaccount:redhat\n      - system:serviceaccount:hive\n

    Bill can also use regex patterns to ignore a set of service accounts:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  openshift:\n    privilegedServiceAccounts:\n      - ^system:serviceaccount:openshift*\n      - ^system:serviceaccount:stakater*\n
    "},{"location":"usecases/integrationconfig.html#configuring-vault-in-integrationconfig","title":"Configuring Vault in IntegrationConfig","text":"

    Vault is used to secure, store and tightly control access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.

    If Bill (the cluster admin) has Vault configured in his cluster, then he can benefit from MTO's integration with Vault.

    MTO automatically creates Vault secret paths for tenants, where tenant members can securely save their secrets. It also authorizes tenant members to access these secrets via OIDC.

    Bill would first have to integrate Vault with MTO by adding the details to the IntegrationConfig (see the IntegrationConfig documentation for more details).

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: IntegrationConfig\nmetadata:\n  name: tenant-operator-config\n  namespace: multi-tenant-operator\nspec:\n  vault:\n    enabled: true\n    endpoint:\n      secretReference:\n        name: vault-root-token\n        namespace: vault\n      url: >-\n        https://vault.apps.prod.abcdefghi.kubeapp.cloud/\n    sso:\n      accessorID: auth_oidc_aa6aa9aa\n      clientName: vault\n

    Bill then creates a tenant for Anna and John:

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@acme.org\n  viewers:\n    users:\n    - john@acme.org\n  quota: small\n  sandbox: false\n

    Now Bill goes to Vault and sees that a path for the tenant has been created under the name bluesky/kv, confirming that tenant members with the Owner or Editor roles now have access to the tenant's Vault path.

    Now if Anna signs in to Vault via OIDC, she can see her tenant's path and secrets. Whereas if John signs in to Vault via OIDC, he can't see the tenant's path or secrets, as he doesn't have the access required to view them.
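
    As a quick confirmation (a sketch, assuming the Vault CLI is logged in against this Vault instance), Bill can list the enabled secret engines and look for the tenant's bluesky/kv mount:

    vault secrets list\n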

    "},{"location":"usecases/integrationconfig.html#configuring-rhsso-red-hat-single-sign-on-in-integrationconfig","title":"Configuring RHSSO (Red Hat Single Sign-On) in IntegrationConfig","text":"

    Red Hat Single Sign-On (RHSSO) is based on the Keycloak project and enables you to secure your web applications by providing web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect, and OAuth 2.0.

    If Bill, the cluster admin, has RHSSO configured in his cluster, then he can benefit from MTO's integration with RHSSO and Vault.

    MTO automatically allows tenant members to access Vault via OIDC (RHSSO authentication and authorization), granting them access to the tenant secret paths where they can securely store their secrets.

    Bill would first have to integrate RHSSO with MTO by adding the details in IntegrationConfig. Visit here for more details.

    rhsso:\n  enabled: true\n  realm: customer\n  endpoint:\n    secretReference:\n      name: auth-secrets\n      namespace: openshift-auth\n    url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/\n
    "},{"location":"usecases/mattermost.html","title":"Creating Mattermost Teams for your tenant","text":""},{"location":"usecases/mattermost.html#requirements","title":"Requirements","text":"

    MTO-Mattermost-Integration-Operator

    Please contact Stakater to install the Mattermost integration operator before following the steps mentioned below.

    "},{"location":"usecases/mattermost.html#steps-to-enable-integration","title":"Steps to enable integration","text":"

    Bill wants some of the tenants to also have their own Mattermost Teams. To make sure this happens correctly, Bill will first add the stakater.com/mattermost: true label to the tenants. The label will enable the mto-mattermost-integration-operator to create and manage Mattermost Teams based on Tenants.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\n  labels:\n    stakater.com/mattermost: 'true'\nspec:\n  owners:\n    users:\n      - user\n  editors:\n    users:\n      - user1\n  quota: medium\n  sandbox: false\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n

    Now users can log in to Mattermost to see their Team and the relevant channels associated with it.

    The name of the Team is similar to the Tenant name. Notification channels are pre-configured for every team, and can be modified.

    "},{"location":"usecases/namespace.html","title":"Creating Namespace","text":"

    Anna as the tenant owner can create new namespaces for her tenant.

    apiVersion: v1\nkind: Namespace\nmetadata:\n  name: bluesky-production\n  labels:\n    stakater.com/tenant: bluesky\n

    \u26a0\ufe0f Anna is required to add the tenant label stakater.com/tenant: bluesky which contains the name of her tenant bluesky, while creating the namespace. If this label is not added or if Anna does not belong to the bluesky tenant, then Multi Tenant Operator will not allow the creation of that namespace.

    When Anna creates the namespace, MTO assigns Anna and the other tenant members roles based on their user types; for example, a tenant owner gets the OpenShift admin role for that namespace.
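
    To see what was granted (a sketch, assuming Anna's kubeconfig points at this cluster), she can list the RoleBindings created in the new namespace:

    kubectl get rolebindings -n bluesky-production\n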

    As a tenant owner, Anna is able to create namespaces.

    If you have enabled ArgoCD Multitenancy, our preferred solution is to create tenant namespaces through the Tenant spec, to avoid syncing issues in the ArgoCD console during namespace creation.
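
    For example, a minimal Tenant-spec-driven version of the namespace above would look like this (a sketch reusing the bluesky tenant; namespaces listed under withTenantPrefix are rendered with the tenant prefix, so production becomes bluesky-production):

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - production\n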

    "},{"location":"usecases/namespace.html#add-existing-namespaces-to-tenant-via-gitops","title":"Add Existing Namespaces to Tenant via GitOps","text":"

    Using GitOps as your preferred development workflow, you can add existing namespaces for your tenants by including the tenant label.

    To add an existing namespace to your tenant via GitOps:

    1. First, migrate your namespace resource to your \u201cwatched\u201d git repository
    2. Edit your namespace yaml to include the tenant label
    3. Tenant label follows the naming convention stakater.com/tenant: <TENANT_NAME>
    4. Sync your GitOps repository with your cluster and allow changes to be propagated
    5. Verify that your Tenant users now have access to the namespace

    For example, if Anna, a tenant owner, wants to add the namespace bluesky-dev to her tenant via GitOps, she starts by migrating her namespace manifest to a \u201cwatched\u201d repository:

    apiVersion: v1\nkind: Namespace\nmetadata:\n  name: bluesky-dev\n

    She can then add the tenant label:

     ...\n  labels:\n    stakater.com/tenant: bluesky\n

    All the users of the bluesky tenant now have access to the existing namespace.

    Additionally, to remove namespaces from a tenant, simply remove the tenant label from the namespace resource and sync your changes to your cluster.

    "},{"location":"usecases/namespace.html#remove-namespaces-from-your-cluster-via-gitops","title":"Remove Namespaces from your Cluster via GitOps","text":"

    GitOps is a quick and efficient way to automate the management of your K8s resources.

    To remove namespaces from your cluster via GitOps, remove the namespace manifest from your \u201cwatched\u201d git repository and sync your GitOps repository with your cluster so the deletion is propagated.

    "},{"location":"usecases/private-sandboxes.html","title":"Create Private Sandboxes","text":"

    Bill assigned the ownership of bluesky to Anna and Anthony. Now, if the users want sandboxes to be created for them, they'll have to ask Bill to enable the sandbox functionality. The users also want to make sure that the sandboxes created for them are only visible to the user they belong to. To enable that, Bill will just set enabled: true and private: true within the sandboxConfig field:

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandboxConfig:\n    enabled: true\n    private: true\nEOF\n

    With the above configuration Anna and Anthony will now have new sandboxes created

    kubectl get namespaces\nNAME                             STATUS   AGE\nbluesky-anna-aurora-sandbox      Active   5d5h\nbluesky-anthony-aurora-sandbox   Active   5d5h\nbluesky-john-aurora-sandbox      Active   5d5h\n

    However, from Anna's perspective, only her own sandbox will be visible:

    kubectl get namespaces\nNAME                             STATUS   AGE\nbluesky-anna-aurora-sandbox      Active   5d5h\n
    "},{"location":"usecases/quota.html","title":"Enforcing Quotas","text":"

    Using Multi Tenant Operator, the cluster-admin can set and enforce cluster resource quotas and limit ranges for tenants.

    "},{"location":"usecases/quota.html#assigning-resource-quotas","title":"Assigning Resource Quotas","text":"

    Bill is a cluster admin who will first create a Quota CR, where he sets the maximum resource limits that Anna's tenant will have. Here, limitrange is an optional field; the cluster admin can skip it if it is not needed.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: small\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '5'\n      requests.memory: '5Gi'\n      configmaps: \"10\"\n      secrets: \"10\"\n      services: \"10\"\n      services.loadbalancers: \"2\"\n  limitrange:\n    limits:\n      - type: \"Pod\"\n        max:\n          cpu: \"2\"\n          memory: \"1Gi\"\n        min:\n          cpu: \"200m\"\n          memory: \"100Mi\"\nEOF\n

    For more details please refer to Quotas.

    kubectl get quota.tenantoperator.stakater.com small\nNAME       STATE    AGE\nsmall      Active   3m\n

    Bill then proceeds to create a tenant for Anna, while also linking the newly created Quota.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@stakater.com\n  quota: small\n  sandbox: false\nEOF\n

    Now that the quota is linked with Anna's tenant, Anna can create any resource within the limits of the resource quota and limit range.

    kubectl -n bluesky-production create deployment nginx --image nginx:latest --replicas 4\n

    Once the resource quota assigned to the tenant has been reached, Anna cannot create further resources.

    kubectl run bluesky-training --image nginx -n bluesky-production\nError from server (Cannot exceed Namespace quota: please, reach out to the system administrators)\n
    "},{"location":"usecases/secret-distribution.html","title":"Propagate Secrets from Parent to Descendant namespaces","text":"

    Secrets like registry credentials often need to exist in multiple namespaces, so that Pods within different namespaces can access those credentials in the form of secrets.

    Manually creating secrets within different namespaces could lead to challenges, such as duplicated manual effort and secrets drifting out of sync when the credentials change.

    With the help of Multi-Tenant Operator's Template feature we can make this secret distribution experience easy.

    For example, to copy a Secret called registry, which exists in the example namespace, to new namespaces whenever they are created, we will first create a Template that references the registry secret.

    It will also push updates to the copied Secrets, keeping the propagated secrets in sync with the parent namespace.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: registry-secret\nresources:\n  resourceMappings:\n    secrets:\n      - name: registry\n        namespace: example\n

    Now, using this Template, we can propagate the registry secret to different namespaces that have a common set of labels.

    For example, we will just add one label, kind: registry, and all namespaces with this label will get this secret.

    To propagate it to different namespaces dynamically, we will have to create another resource called TemplateGroupInstance. The TemplateGroupInstance will map the Template to a matchLabels selector, as shown below:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: TemplateGroupInstance\nmetadata:\n  name: registry-secret-group-instance\nspec:\n  template: registry-secret\n  selector:\n    matchLabels:\n      kind: registry\n  sync: true\n

    After reconciliation, you will be able to see those secrets in the namespaces that have the mentioned label.

    MTO will keep injecting this secret into new namespaces created with that label.

    kubectl get secret registry-secret -n example-ns-1\nNAME             STATE    AGE\nregistry-secret    Active   3m\n\nkubectl get secret registry-secret -n example-ns-2\nNAME             STATE    AGE\nregistry-secret    Active   3m\n
    "},{"location":"usecases/template.html","title":"Creating Templates","text":"

    Anna wants to create a Template that she can use to initialize or share common resources across namespaces (e.g. PullSecrets).

    Anna can either create a template using the manifests field, covering Kubernetes or custom resources:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: docker-pull-secret\nresources:\n  manifests:\n    - kind: Secret\n      apiVersion: v1\n      metadata:\n        name: docker-pull-secret\n      data:\n        .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K\n      type: kubernetes.io/dockercfg\n

    Or by using Helm Charts

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: redis\nresources:\n  helm:\n    releaseName: redis\n    chart:\n      repository:\n        name: redis\n        repoUrl: https://charts.bitnami.com/bitnami\n    values: |\n      redisPort: 6379\n

    She can also use the resourceMappings field to copy secrets and ConfigMaps from one namespace to others.

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: resource-mapping\nresources:\n  resourceMappings:\n    secrets:\n      - name: docker-secret\n        namespace: bluesky-build\n    configMaps:\n      - name: tronador-configMap\n        namespace: stakater-tronador\n

    Note: Resource mapping can be used via TGI to map resources within tenant namespaces or to some other tenant's namespaces. If used with TI, the resources will only be mapped if the namespaces belong to the same tenant.

    "},{"location":"usecases/template.html#using-templates-with-default-parameters","title":"Using Templates with Default Parameters","text":"
    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: namespace-parameterized-restrictions\nparameters:\n  # Name of the parameter\n  - name: DEFAULT_CPU_LIMIT\n    # The default value of the parameter\n    value: \"1\"\n  - name: DEFAULT_CPU_REQUESTS\n    value: \"0.5\"\n    # If a parameter is required the template instance will need to set it\n    # required: true\n    # Make sure only values are entered for this parameter\n    validation: \"^[0-9]*\\\\.?[0-9]+$\"\nresources:\n  manifests:\n    - apiVersion: v1\n      kind: LimitRange\n      metadata:\n        name: namespace-limit-range-${namespace}\n      spec:\n        limits:\n          - default:\n              cpu: \"${{DEFAULT_CPU_LIMIT}}\"\n            defaultRequest:\n              cpu: \"${{DEFAULT_CPU_REQUESTS}}\"\n            type: Container\n

    Parameters can be used with both manifests and Helm charts; see the sketch below.
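
    For instance, a parameter can also be referenced from a Helm-based Template. The following is a sketch that combines the two examples above and assumes MTO substitutes ${{...}} parameters inside the helm values block the same way it does inside manifests:

    apiVersion: tenantoperator.stakater.com/v1alpha1\nkind: Template\nmetadata:\n  name: redis-parameterized\nparameters:\n  - name: REDIS_PORT\n    value: \"6379\"\nresources:\n  helm:\n    releaseName: redis\n    chart:\n      repository:\n        name: redis\n        repoUrl: https://charts.bitnami.com/bitnami\n    values: |\n      # assumption: parameter substitution also applies inside helm values\n      redisPort: ${{REDIS_PORT}}\n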

    "},{"location":"usecases/tenant.html","title":"Creating Tenant","text":"

    Bill is a cluster admin who receives a new request from Aurora Solutions CTO asking for a new tenant for Anna's team.

    Bill creates a new tenant called bluesky in the cluster:

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandbox: false\nEOF\n

    Bill checks if the new tenant is created:

    kubectl get tenants.tenantoperator.stakater.com bluesky\nNAME       STATE    AGE\nbluesky    Active   3m\n

    Anna can now log in to the cluster and check if she can create namespaces:

    kubectl auth can-i create namespaces\nyes\n

    However, cluster resources are not accessible to Anna

    kubectl auth can-i get namespaces\nno\n\nkubectl auth can-i get persistentvolumes\nno\n

    Including the Tenant resource

    kubectl auth can-i get tenants.tenantoperator.stakater.com\nno\n
    "},{"location":"usecases/tenant.html#assign-multiple-users-as-tenant-owner","title":"Assign multiple users as tenant owner","text":"

    In the example above, Bill assigned the ownership of bluesky to Anna. If another user, e.g. Anthony, needs to administer bluesky, then Bill can assign the ownership of the tenant to that user as well:

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandbox: false\nEOF\n

    With the configuration above, Anthony can log in to the cluster and execute:

    kubectl auth can-i create namespaces\nyes\n
    "},{"location":"usecases/tenant.html#assigning-users-sandbox-namespace","title":"Assigning Users Sandbox Namespace","text":"

    Bill assigned the ownership of bluesky to Anna and Anthony. Now if the users want sandboxes to be made for them, they'll have to ask Bill to enable sandbox functionality.

    To enable that, Bill will just set enabled: true within the sandboxConfig field

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandboxConfig:\n    enabled: true\nEOF\n

    With the above configuration Anna and Anthony will now have new sandboxes created

    kubectl get namespaces\nNAME                             STATUS   AGE\nbluesky-anna-aurora-sandbox      Active   5d5h\nbluesky-anthony-aurora-sandbox   Active   5d5h\nbluesky-john-aurora-sandbox      Active   5d5h\n

    If Bill wants to make sure that only the sandbox owner can view his sandbox namespace, he can achieve this by setting private: true within the sandboxConfig field.
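
    A minimal sketch of the relevant fields, matching the private sandboxes example elsewhere in these docs:

    sandboxConfig:\n  enabled: true\n  private: true\n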

    "},{"location":"usecases/tenant.html#creating-namespaces-via-tenant-custom-resource","title":"Creating Namespaces via Tenant Custom Resource","text":"

    Bill now wants to create namespaces for the dev, build, and production environments for the tenant members. To create those namespaces, Bill will just add their names to the namespaces field in the tenant CR. If Bill wants to append the tenant name as a prefix to a namespace name, he can use the namespaces.withTenantPrefix field; otherwise, he can use namespaces.withoutTenantPrefix for namespaces that do not need the tenant name as a prefix.

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n    withoutTenantPrefix:\n      - prod\nEOF\n

    With the above configuration, tenant members will now see that new namespaces have been created:

    kubectl get namespaces\nNAME             STATUS   AGE\nbluesky-dev      Active   5d5h\nbluesky-build    Active   5d5h\nprod             Active   5d5h\n
    "},{"location":"usecases/tenant.html#distributing-common-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing common labels and annotations to tenant namespaces via Tenant Custom Resource","text":"

    Bill now wants to add labels/annotations to all the namespaces for a tenant. To create those labels/annotations Bill will just add them into commonMetadata.labels/commonMetadata.annotations field in the tenant CR.

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n  commonMetadata:\n    labels:\n      app.kubernetes.io/managed-by: tenant-operator\n      app.kubernetes.io/part-of: tenant-alpha\n    annotations:\n      openshift.io/node-selector: node-role.kubernetes.io/infra=\nEOF\n

    With the above configuration all tenant namespaces will now contain the mentioned labels and annotations.
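
    Tenant members can verify the labels (a sketch, assuming the bluesky-dev namespace from the example above):

    kubectl get namespace bluesky-dev --show-labels\n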

    "},{"location":"usecases/tenant.html#distributing-specific-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource","title":"Distributing specific labels and annotations to tenant namespaces via Tenant Custom Resource","text":"

    Bill now wants to add labels/annotations to specific namespaces for a tenant. To create those labels/annotations Bill will just add them into specificMetadata.labels/specificMetadata.annotations and specific namespaces in specificMetadata.namespaces field in the tenant CR.

    kubectl apply -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  editors:\n    users:\n    - john@aurora.org\n    groups:\n    - alpha\n  quota: small\n  sandboxConfig:\n    enabled: true\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n  specificMetadata:\n    - namespaces:\n        - bluesky-anna-aurora-sandbox\n      labels:\n        app.kubernetes.io/is-sandbox: \"true\"\n      annotations:\n        openshift.io/node-selector: node-role.kubernetes.io/worker=\nEOF\n

    With the above configuration all tenant namespaces will now contain the mentioned labels and annotations.

    "},{"location":"usecases/tenant.html#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted","title":"Retaining tenant namespaces and AppProject when a tenant is being deleted","text":"

    Bill now wants to delete the tenant bluesky while retaining all namespaces and the AppProject of the tenant. To retain them, Bill will set spec.onDelete.cleanNamespaces and spec.onDelete.cleanAppProject to false.

    apiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  quota: small\n  sandboxConfig:\n    enabled: true\n  namespaces:\n    withTenantPrefix:\n      - dev\n      - build\n      - prod\n  onDelete:\n    cleanNamespaces: false\n    cleanAppProject: false\n

    With the above configuration, the tenant's namespaces and AppProject will not be deleted when the tenant bluesky is deleted. By default, spec.onDelete.cleanNamespaces is false and spec.onDelete.cleanAppProject is true.

    "},{"location":"usecases/volume-limits.html","title":"Limiting PersistentVolume for Tenant","text":"

    Bill, as a cluster admin, wants to restrict the amount of storage a Tenant can use. For that he'll add the requests.storage field to quota.spec.resourcequota.hard. If Bill wants to restrict tenant bluesky to use only 50Gi of storage, he'll first create a quota with requests.storage field set to 50Gi.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: medium\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '5'\n      requests.memory: '10Gi'\n      requests.storage: '50Gi'\nEOF\n

    Once the quota is created, Bill will create the tenant and set the quota field to the one he created.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: bluesky\nspec:\n  owners:\n    users:\n    - anna@aurora.org\n    - anthony@aurora.org\n  quota: medium\n  sandbox: true\nEOF\n

    Now, the combined storage used by all tenant namespaces will not exceed 50Gi.

    "},{"location":"usecases/volume-limits.html#adding-storageclass-restrictions-for-tenant","title":"Adding StorageClass Restrictions for Tenant","text":"

    Now, Bill, as a cluster admin, wants to make sure that no Tenant can provision more than a fixed amount of storage from a StorageClass. Bill can restrict that using <storage-class-name>.storageclass.storage.k8s.io/requests.storage field in quota.spec.resourcequota.hard field. If Bill wants to restrict tenant sigma to use only 20Gi of storage from storage class stakater, he'll first create a StorageClass stakater and then create the relevant Quota with stakater.storageclass.storage.k8s.io/requests.storage field set to 20Gi.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta1\nkind: Quota\nmetadata:\n  name: small\nspec:\n  resourcequota:\n    hard:\n      requests.cpu: '2'\n      requests.memory: '4Gi'\n      stakater.storageclass.storage.k8s.io/requests.storage: '20Gi'\nEOF\n

    Once the quota is created, Bill will create the tenant and set the quota field to the one he created.

    kubectl create -f - << EOF\napiVersion: tenantoperator.stakater.com/v1beta2\nkind: Tenant\nmetadata:\n  name: sigma\nspec:\n  owners:\n    users:\n    - dave@aurora.org\n  quota: small\n  sandbox: true\nEOF\n

    Now, the combined storage provisioned from StorageClass stakater used by all tenant namespaces will not exceed 20Gi.

    The 20Gi limit will only be applied to StorageClass stakater. If a tenant member creates a PVC with some other StorageClass, he will not be restricted.
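
    For illustration, a PVC that would count against the 20Gi stakater quota might look like the sketch below (the namespace name is hypothetical):

    apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: example-pvc\n  # hypothetical tenant namespace\n  namespace: sigma-dev\nspec:\n  accessModes:\n    - ReadWriteOnce\n  storageClassName: stakater\n  resources:\n    requests:\n      storage: 5Gi\n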

    Tip

    More details about Resource Quota can be found here

    "}]} \ No newline at end of file diff --git a/0.10/sitemap.xml b/0.10/sitemap.xml index aca2da611..d8dfd9e82 100644 --- a/0.10/sitemap.xml +++ b/0.10/sitemap.xml @@ -2,357 +2,362 @@ https://docs.stakater.com/0.10/index.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/argocd-multitenancy.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/changelog.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/customresources.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/eula.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/faq.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/features.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/hibernation.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/installation.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/integration-config.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tenant-roles.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/troubleshooting.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/vault-multitenancy.html - 2023-12-06 + 2023-12-07 + daily + + + https://docs.stakater.com/0.10/explanation/auth.html + 2023-12-07 daily https://docs.stakater.com/0.10/explanation/console.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/explanation/why-argocd-multi-tenancy.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/explanation/why-vault-multi-tenancy.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/faq/index.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/how-to-guides/integration-config.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/how-to-guides/quota.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/how-to-guides/template-group-instance.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/how-to-guides/template-instance.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/how-to-guides/template.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/how-to-guides/tenant.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/how-to-guides/offboarding/uninstalling.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/add-remove-namespace-gitops.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/admin-clusterrole.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/configuring-multitenant-network-isolation.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/custom-metrics.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/custom-roles.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/deploying-templates.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/distributing-resources.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/distributing-secrets-using-sealed-secret-template.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/distributing-secrets.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/extend-default-roles.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/graph-visualization.html - 
2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/integrationconfig.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/mattermost.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/resource-sync-by-tgi.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/reference-guides/secret-distribution.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tutorials/installation.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tutorials/argocd/enabling-multi-tenancy-argocd.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tutorials/template/template-group-instance.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tutorials/template/template-instance.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tutorials/template/template.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tutorials/tenant/assign-quota-tenant.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tutorials/tenant/assigning-metadata.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tutorials/tenant/create-sandbox.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tutorials/tenant/create-tenant.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tutorials/tenant/creating-namespaces.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tutorials/tenant/custom-rbac.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tutorials/tenant/deleting-tenant.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tutorials/tenant/tenant-hibernation.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/tutorials/vault/enabling-multi-tenancy-vault.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/admin-clusterrole.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/argocd.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/configuring-multitenant-network-isolation.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/custom-roles.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/deploying-templates.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/distributing-resources.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/distributing-secrets-using-sealed-secret-template.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/extend-default-roles.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/hibernation.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/integrationconfig.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/mattermost.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/namespace.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/private-sandboxes.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/quota.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/secret-distribution.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/template.html - 2023-12-06 + 2023-12-07 daily https://docs.stakater.com/0.10/usecases/tenant.html - 2023-12-06 + 2023-12-07 daily 
https://docs.stakater.com/0.10/usecases/volume-limits.html - 2023-12-06 + 2023-12-07 daily \ No newline at end of file diff --git a/0.10/sitemap.xml.gz b/0.10/sitemap.xml.gz index 6ef7ca31b77df1cf5f6c9971cb16cb53343b026d..f3dbf3396c416d262e351c15266923a0e29002fd 100644 GIT binary patch delta 830 zcmV-E1Ht^s2F(TsABzYGFNtxH2OfWV=mN*N^$FSspvbXAgp0x%dg;@bS}dz@6BIoZ zocdy$qJ-~&@yOyb9X*=4ti)+gI^v^}M^=rf3o9K5zE; zv}%dEFI9Cs9@l6Hoa}HNVpE}N^B1h9#cH$uyxQH}=U0D$@ze0y+Mss*RWN^FMhAs! zB>5r<;#vil&wJQZ#%nmOTXJ?=_1l+cOC%Tc?s4<@K|DN)&BwfDT3c*Wt@I7-BfRdk zH1@o$7AqrGHFNz1p^QyXTDW8>3-;2hbJ4ux)of_Wrc6k|MZl1v0xX&pl9fdR?&WLQ zd>|>s!hhw~)DXR-oFa?nG(LZ?S<3;Gpos&gP{Hp_*%@ib?IdEbg$F9dkm?NChCsQ1 z7tMN*xoq;Oop9=`9CJV=6SZt$^ZkW6aM|!4+p}0UR)Sk}^a2*a&4NF-2@Lw}HXH>7 z(Ile*xfPLL;a^iA$%r@4b2-8ZM3YNQCx%YLE8wVrF*_^dzGq?t9QJ?vI>^`?A64>G zD_wdM6=4q%axh<1NUcQ#7Y-o2bby`gj~NObauecA&g(W|-WN-6U~cfof|Ozophb^M z!UG*cd=V!1WaZCkx3UIrMmV5o6tZM5e~P;y*U;H;?nCKP2)GF_m;ay|B*ai3gDD;tQH#f!|=uMKF`xk*NTI|B(3?uuskeAG6oab z4Zf3D6(K5AyK2$>0(X8KwxUf>!m&^Xca3!BNy;__I;m4EnK?ybgWI9Rlhl-p#9 z?99a4bJ3TYU~Wm%L1ISnOJ`_ z@7%ES!0nc70<4S{w)0b66y IIao9R08tF25C8xG delta 826 zcmV-A1I7H!2FV5oABzYGkZf>~2OfV46m8)+w?0Ap02Dcvh;UIjLoa>$Qj293Zi1qR zf>U2?)3osY_Gp%0+CHD0y+1%i6a3TaVZB-1gI7Tt-#o3pfBPyvt)6#x+Y~JV-RI37 zpH?kV_ob?i$Kx6efs-AsLu@KkZGMB*v{-G{pI5uP`~2t+FuocdTN~7_KMH^5!|0%J zjU-z16|R{i$nw6dLC@oyFlm&a~)wyWi@n|+QWm6`k;38m1Q2`ds3dzc%0r&E? zY(9{bV&VUCYifvIQcjUYa~gl2*R16LO3=iCQ>ftgrtFL~8)*$Xqu0)J`~cR*pHK${~w67TE9}+p}2qDuUY&-V0d73k&{Sc^R~@Z8!=FqDe*r za?8uV!oQ|Kk`Zqnk#d9+h$d(4P7KA{E8wVrF*_^dz71gn9QOM<$k=~-303k_cUpR1 z5@8P!axh<1NUcQ#7Y-o2bby`gKQk0Myp} zBIQzRS^5+2$>1)44R=44RT9xOUJRz3VBlQ5Az(O}fuxp&dy9WJ{5kvgQIFE=I-FRt za}1jSy^osND?(-hrkTE$iWj&B3p7sj{K97Pvbfx7aOEE^%e~e33?CLPB;__)B0F>R zid}wdIkWyB2JTf37xQx_=4N_c!TFh;zh`G}a?Y~YS%WhsK(>mk_WKv?6