diff --git a/content/argocd-multitenancy.md b/content/argocd-multitenancy.md deleted file mode 100644 index 540250cd5..000000000 --- a/content/argocd-multitenancy.md +++ /dev/null @@ -1,31 +0,0 @@ -# ArgoCD Multi-tenancy - -ArgoCD is a declarative GitOps tool built to deploy applications to Kubernetes. While the continuous delivery (CD) space is seen by some as crowded these days, ArgoCD does bring some interesting capabilities to the table. Unlike other tools, ArgoCD is lightweight and easy to configure. - -## Why ArgoCD? - -Application definitions, configurations, and environments should be declarative and version controlled. Application deployment and lifecycle management should be automated, auditable, and easy to understand. - -## ArgoCD integration in Multi Tenant Operator - -With Multi Tenant Operator (MTO), cluster admins can configure multi tenancy in their cluster. Now with ArgoCD integration, multi tenancy can be configured in ArgoCD applications and AppProjects. - -MTO (if configured to) will create AppProjects for each tenant. The AppProject will allow tenants to create ArgoCD Applications that can be synced to namespaces owned by those tenants. Cluster admins will also be able to blacklist certain namespaces resources if they want, and allow certain cluster scoped resources as well (see the `NamespaceResourceBlacklist` and `ClusterResourceWhitelist` sections in [Integration Config docs](./integration-config.md) and [Tenant Custom Resource docs](./customresources.md)). - -Note that ArgoCD integration in MTO is completely optional. - -## Default ArgoCD configuration - -We have set a default ArgoCD configuration in Multi Tenant Operator that fulfils the following use cases: - -- Tenants are able to see only their ArgoCD applications in the ArgoCD frontend -- Tenant 'Owners' and 'Editors' will have full access to their ArgoCD applications -- Tenants in the 'Viewers' group will have read-only access to their ArgoCD applications -- Tenants can sync all namespace-scoped resources, except those that are blacklisted in the spec -- Tenants can only sync cluster-scoped resources that are allow-listed in the spec -- Tenant 'Owners' can configure their own GitOps source repos at a tenant level -- Cluster admins can prevent specific resources from syncing via ArgoCD -- Cluster admins have full access to all ArgoCD applications and AppProjects -- Since ArgoCD integration is on a per-tenant level, namespace-scoped applications are only synced to Tenant's namespaces - -Detailed use cases showing how to create AppProjects are mentioned in [use cases for ArgoCD](./usecases/argocd.md). diff --git a/content/changelog.md b/content/changelog.md index c3c2324ad..5ef4634d0 100644 --- a/content/changelog.md +++ b/content/changelog.md @@ -4,7 +4,7 @@ ### v0.10.0 -### Feature +#### Feature - Added support for caching for MTO Console using PostgreSQL as caching layer. - Added support for custom metrics with Template, Template Instance and Template Group Instance. @@ -18,13 +18,13 @@ - And it comes with default Cert Manager manifests for certificates. - Support for MTO e2e. -### Fix +#### Fix - Updated CreateMergePatch to MergeMergePatches to address issues caused by losing `resourceVersion` and UID when converting `oldObject` to `newObject`. This prevents problems when the object is edited by another controller. - In Template Resource distribution for Secret type, we now consider the source's Secret field type, preventing default creation as Opaque regardless of the source's actual type. 
- Enhanced admin permissions for tenant role in Vault to include Create, Update, Delete alongside existing Read and List privileges for the common-shared-secrets path. Viewers now have Read permission. -### Enhanced +#### Enhanced - Started to support Kubernetes along with OpenShift as platform type. - Support of MTO's PostgreSQL instance as persistent storage for keycloak. diff --git a/content/customresources.md b/content/customresources.md deleted file mode 100644 index 1f4155d70..000000000 --- a/content/customresources.md +++ /dev/null @@ -1,372 +0,0 @@ -# Custom Resources - -Below is the detailed explanation about Custom Resources of MTO - -## 1. Quota - -Cluster scoped resource: - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta1 -kind: Quota -metadata: - name: medium -spec: - resourcequota: - hard: - requests.cpu: '5' - limits.cpu: '10' - requests.memory: '5Gi' - limits.memory: '10Gi' - configmaps: "10" - persistentvolumeclaims: "4" - replicationcontrollers: "20" - secrets: "10" - services: "10" - services.loadbalancers: "2" - limitrange: - limits: - - type: "Pod" - max: - cpu: "2" - memory: "1Gi" - min: - cpu: "200m" - memory: "100Mi" - - type: "Container" - max: - cpu: "2" - memory: "1Gi" - min: - cpu: "100m" - memory: "50Mi" - default: - cpu: "300m" - memory: "200Mi" - defaultRequest: - cpu: "200m" - memory: "100Mi" - maxLimitRequestRatio: - cpu: "10" -``` - -When several tenants share a single cluster with a fixed number of resources, there is a concern that one tenant could use more than its fair share of resources. Quota is a wrapper around OpenShift `ClusterResourceQuota` and `LimitRange` which provides administrators to limit resource consumption per `Tenant`. -For more details [Quota.Spec](https://kubernetes.io/docs/concepts/policy/resource-quotas/) , [LimitRange.Spec](https://kubernetes.io/docs/concepts/policy/limit-range/) - -## 2. 
Tenant - -Cluster scoped resource: - -The smallest valid Tenant definition is given below (with just one field in its spec): - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 -kind: Tenant -metadata: - name: alpha -spec: - quota: small -``` - -Here is a more detailed Tenant definition, explained below: - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 -kind: Tenant -metadata: - name: alpha -spec: - owners: # optional - users: # optional - - dave@stakater.com - groups: # optional - - alpha - editors: # optional - users: # optional - - jack@stakater.com - viewers: # optional - users: # optional - - james@stakater.com - quota: medium # required - sandboxConfig: # optional - enabled: true # optional - private: true # optional - onDelete: # optional - cleanNamespaces: false # optional - cleanAppProject: true # optional - argocd: # optional - sourceRepos: # required - - https://github.com/stakater/gitops-config - appProject: # optional - clusterResourceWhitelist: # optional - - group: tronador.stakater.com - kind: Environment - namespaceResourceBlacklist: # optional - - group: "" - kind: ConfigMap - hibernation: # optional - sleepSchedule: 23 * * * * # required - wakeSchedule: 26 * * * * # required - namespaces: # optional - withTenantPrefix: # optional - - dev - - build - withoutTenantPrefix: # optional - - preview - commonMetadata: # optional - labels: # optional - stakater.com/team: alpha - annotations: # optional - openshift.io/node-selector: node-role.kubernetes.io/infra= - specificMetadata: # optional - - annotations: # optional - stakater.com/user: dave - labels: # optional - stakater.com/sandbox: true - namespaces: # optional - - alpha-dave-stakater-sandbox - templateInstances: # optional - - spec: # optional - template: networkpolicy # required - sync: true # optional - parameters: # optional - - name: CIDR_IP - value: "172.17.0.0/16" - selector: # optional - matchLabels: # optional - policy: network-restriction -``` - -* Tenant has 3 kinds of `Members`. Each member type should have different roles assigned to them. These roles are gotten from the [IntegrationConfig's TenantRoles field](integration-config.md#tenantroles). You can customize these roles to your liking, but by default the following configuration applies: - * `Owners:` Users who will be owners of a tenant. They will have OpenShift admin-role assigned to their users, with additional access to create namespaces as well. - * `Editors:` Users who will be editors of a tenant. They will have OpenShift edit-role assigned to their users. - * `Viewers:` Users who will be viewers of a tenant. They will have OpenShift view-role assigned to their users. - * For more details, check out [their definitions](./tenant-roles.md). - -* `Users` can be linked to the tenant by specifying there username in `owners.users`, `editors.users` and `viewers.users` respectively. - -* `Groups` can be linked to the tenant by specifying the group name in `owners.groups`, `editors.groups` and `viewers.groups` respectively. - -* Tenant will have a `Quota` to limit resource consumption. - -* `sandboxConfig` is used to configure the tenant user sandbox feature - * Setting `enabled` to *true* will create *sandbox namespaces* for owners and editors. - * Sandbox will follow the following naming convention **{TenantName}**-**{UserName}**-*sandbox*. - * In case of groups, the sandbox namespaces will be created for each member of the group. - * Setting `private` to *true* will make those sandboxes be only visible to the user they belong to. 
By default, sandbox namespaces are visible to all tenant members - -* `onDelete` is used to tell Multi Tenant Operator what to do when a Tenant is deleted. - * `cleanNamespaces` if the value is set to **true** *MTO* deletes all *tenant namespaces* when a `Tenant` is deleted. Default value is **false**. - * `cleanAppProject` will keep the generated ArgoCD AppProject if the value is set to **false**. By default, the value is **true**. - -* `argocd` is required if you want to create an ArgoCD AppProject for the tenant. - * `sourceRepos` contain a list of repositories that point to your GitOps. - * `appProject` is used to set the `clusterResourceWhitelist` and `namespaceResourceBlacklist` resources. If these are also applied via `IntegrationConfig` then those applied via Tenant CR will have higher precedence for given Tenant. - -* `hibernation` can be used to create a schedule during which the namespaces belonging to the tenant will be put to sleep. The values of the `sleepSchedule` and `wakeSchedule` fields must be a string in a cron format. - -* Namespaces can also be created via tenant CR by *specifying names* in `namespaces`. - * Multi Tenant Operator will append *tenant name* prefix while creating namespaces if the list of namespaces is under the `withTenantPrefix` field, so the format will be **{TenantName}**-**{Name}**. - * Namespaces listed under the `withoutTenantPrefix` will be created with the given name. Writing down namespaces here that already exist within the cluster are not allowed. - * `stakater.com/kind: {Name}` label will also be added to the namespaces. - -* `commonMetadata` can be used to distribute common labels and annotations among tenant namespaces. - * `labels` distributes provided labels among all tenant namespaces - * `annotations` distributes provided annotations among all tenant namespaces - -* `specificMetadata` can be used to distribute specific labels and annotations among specific tenant namespaces. - * `labels` distributes given labels among specific tenant namespaces - * `annotations` distributes given annotations among specific tenant namespaces - * `namespaces` consists a list of specific tenant namespaces across which the labels and annotations will be distributed - -* Tenant automatically deploys `template` resource mentioned in `templateInstances` to matching tenant namespaces. - * `Template` resources are created in those `namespaces` which belong to a `tenant` and contain `matching labels`. - * `Template` resources are created in all `namespaces` of a `tenant` if `selector` field is empty. - -> ⚠️ If same label or annotation key is being applied using different methods provided, then the highest precedence will be given to `specificMetadata` followed by `commonMetadata` and in the end would be the ones applied from `openshift.project.labels`/`openshift.project.annotations` in `IntegrationConfig` - -## 3. 
Template - -Cluster scoped resource: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: redis -resources: - helm: - releaseName: redis - chart: - repository: - name: redis - repoUrl: https://charts.bitnami.com/bitnami - values: | - redisPort: 6379 ---- -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: networkpolicy -parameters: - - name: CIDR_IP - value: "172.17.0.0/16" -resources: - manifests: - - kind: NetworkPolicy - apiVersion: networking.k8s.io/v1 - metadata: - name: deny-cross-ns-traffic - spec: - podSelector: - matchLabels: - role: db - policyTypes: - - Ingress - - Egress - ingress: - - from: - - ipBlock: - cidr: "${{CIDR_IP}}" - except: - - 172.17.1.0/24 - - namespaceSelector: - matchLabels: - project: myproject - - podSelector: - matchLabels: - role: frontend - ports: - - protocol: TCP - port: 6379 - egress: - - to: - - ipBlock: - cidr: 10.0.0.0/24 - ports: - - protocol: TCP - port: 5978 ---- -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: resource-mapping -resources: - resourceMappings: - secrets: - - name: secret-s1 - namespace: namespace-n1 - configMaps: - - name: configmap-c1 - namespace: namespace-n2 -``` - -Templates are used to initialize Namespaces, share common resources across namespaces, and map secrets/configmaps from one namespace to other namespaces. - -* They either contain one or more Kubernetes manifests, a reference to secrets/configmaps, or a Helm chart. -* They are being tracked by TemplateInstances in each Namespace they are applied to. -* They can contain pre-defined parameters such as ${namespace}/${tenant} or user-defined ${MY_PARAMETER} that can be specified within an TemplateInstance. - -Also you can define custom variables in `Template` and `TemplateInstance` . The parameters defined in `TemplateInstance` are overwritten the values defined in `Template` . - -Manifest Templates: The easiest option to define a Template is by specifying an array of Kubernetes manifests which should be applied when the Template is being instantiated. - -Helm Chart Templates: Instead of manifests, a Template can specify a Helm chart that will be installed (using Helm template) when the Template is being instantiated. - -Resource Mapping Templates: A template can be used to map secrets and configmaps from one tenant's namespace to another tenant's namespace, or within a tenant's namespace. - -### Mandatory and Optional Templates - - Templates can either be mandatory or optional. By default, all Templates are optional. Cluster Admins can make Templates mandatory by adding them to the `spec.templateInstances` array within the Tenant configuration. All Templates listed in `spec.templateInstances` will always be instantiated within every `Namespace` that is created for the respective Tenant. - -## 4. TemplateInstance - -Namespace scoped resource: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: TemplateInstance -metadata: - name: networkpolicy - namespace: build -spec: - template: networkpolicy - sync: true -parameters: - - name: CIDR_IP - value: "172.17.0.0/16" -``` - -TemplateInstance are used to keep track of resources created from Templates, which are being instantiated inside a Namespace. -Generally, a TemplateInstance is created from a Template and then the TemplateInstances will not be updated when the Template changes later on. To change this behavior, it is possible to set `spec.sync: true` in a TemplateInstance. 
Setting this option, means to keep this TemplateInstance in sync with the underlying template (similar to Helm upgrade). - -## 5. TemplateGroupInstance - -Cluster scoped resource: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: TemplateGroupInstance -metadata: - name: namespace-parameterized-restrictions-tgi -spec: - template: namespace-parameterized-restrictions - sync: true - selector: - matchExpressions: - - key: stakater.com/tenant - operator: In - values: - - alpha - - beta -parameters: - - name: CIDR_IP - value: "172.17.0.0/16" -``` - -TemplateGroupInstance distributes a template across multiple namespaces which are selected by labelSelector. - -## 6. ResourceSupervisor - -Cluster scoped resource: - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta1 -kind: ResourceSupervisor -metadata: - name: tenant-sample -spec: - argocd: - appProjects: - - tenant-sample - hibernation: - sleepSchedule: 23 * * * * - wakeSchedule: 26 * * * * - namespaces: - - stage - - dev -status: - currentStatus: running - nextReconcileTime: '2022-07-07T11:23:00Z' -``` - -The `ResourceSupervisor` is a resource created by MTO in case the [Hibernation](./hibernation.md) feature is enabled. The Resource manages the sleep/wake schedule of the namespaces owned by the tenant, and manages the previous state of any sleeping application. Currently, only StatefulSets and Deployments are put to sleep. Additionally, ArgoCD AppProjects that belong to the tenant have a `deny` SyncWindow added to them. - -The `ResourceSupervisor` can be created both via the `Tenant` or manually. For more details, check some of its [use cases](./usecases/hibernation.md) - -## Namespace - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - labels: - stakater.com/tenant: blue-sky - name: build -``` - -* Namespace should have label `stakater.com/tenant` which contains the name of tenant to which it belongs to. The labels and annotations specified in the operator config, `ocp.labels.project` and `ocp.annotations.project` are inserted in the namespace by the controller. - -## Notes - -* `tenant.spec.users.owner`: Can only create *Namespaces* with required *tenant label* and can delete *Projects*. To edit *Namespace* use `GitOps/ArgoCD` diff --git a/content/explanation/why-vault-multi-tenancy.md b/content/explanation/why-vault-multi-tenancy.md deleted file mode 100644 index 616d5048b..000000000 --- a/content/explanation/why-vault-multi-tenancy.md +++ /dev/null @@ -1 +0,0 @@ -# Need for Multi-Tenancy in Vault diff --git a/content/faq/index.md b/content/faq/index.md deleted file mode 100644 index 8b013d6a6..000000000 --- a/content/faq/index.md +++ /dev/null @@ -1 +0,0 @@ -# Index diff --git a/content/features.md b/content/features.md deleted file mode 100644 index 56427f52a..000000000 --- a/content/features.md +++ /dev/null @@ -1,104 +0,0 @@ -# Features - -The major features of Multi Tenant Operator (MTO) are described below. - -## Kubernetes Multitenancy - -RBAC is one of the most complicated and error-prone parts of Kubernetes. With Multi Tenant Operator, you can rest assured that RBAC is configured with the "least privilege" mindset and all rules are kept up-to-date with zero manual effort. - -Multi Tenant Operator binds existing ClusterRoles to the Tenant's Namespaces used for managing access to the Namespaces and the resources they contain. You can also modify the default roles or create new roles to have full control and customize access control for your users and teams. 
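As an illustrative sketch (not prescriptive), this role mapping is driven by the `tenantRoles` field of the IntegrationConfig described in these docs; the `extended-owner` ClusterRole below is a hypothetical custom role, while the rest follows the documented schema:

```yaml
# Sketch: bind ClusterRoles to tenant member types via the IntegrationConfig.
# "extended-owner" is a hypothetical custom ClusterRole, shown for illustration only.
apiVersion: tenantoperator.stakater.com/v1alpha1
kind: IntegrationConfig
metadata:
  name: tenant-operator-config
  namespace: multi-tenant-operator
spec:
  tenantRoles:
    default:
      owner:
        clusterRoles:
          - admin           # built-in OpenShift role
          - extended-owner  # hypothetical custom role
      editor:
        clusterRoles:
          - edit
      viewer:
        clusterRoles:
          - view
```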
- -Multi Tenant Operator is also able to leverage existing OpenShift groups or external groups synced from 3rd party identity management systems, for maintaining Tenant membership in your organization's current user management system. - -## HashiCorp Vault Multitenancy - -Multi Tenant Operator extends the tenants permission model to HashiCorp Vault where it can create Vault paths and greatly ease the overhead of managing RBAC in Vault. Tenant users can manage their own secrets without the concern of someone else having access to their Vault paths. - -More details on [Vault Multitenancy](./tutorials/vault/enabling-multi-tenancy-vault.md) - -## ArgoCD Multitenancy - -Multi Tenant Operator is not only providing strong Multi Tenancy for the OpenShift internals but also extends the tenants permission model to ArgoCD were it can provision AppProjects and Allowed Repositories for your tenants greatly ease the overhead of managing RBAC in ArgoCD. - -More details on [ArgoCD Multitenancy](./tutorials/argocd/enabling-multi-tenancy-argocd.md) - -## Mattermost Multitenancy - -Multi Tenant Operator can manage Mattermost to create Teams for tenant users. All tenant users get a unique team and a list of predefined channels gets created. When a user is removed from the tenant, the user is also removed from the Mattermost team corresponding to tenant. - -More details on [Mattermost](./reference-guides/mattermost.md) - -## Cost/Resource Optimization - -Multi Tenant Operator provides a mechanism for defining Resource Quotas at the tenant scope, meaning all namespaces belonging to a particular tenant share the defined quota, which is why you are able to safely enable dev teams to self serve their namespaces whilst being confident that they can only use the resources allocated based on budget and business needs. - -More details on [Quota](./how-to-guides/quota.md) - -## Remote Development Namespaces - -Multi Tenant Operator can be configured to automatically provision a namespace in the cluster for every member of the specific tenant, that will also be preloaded with any selected templates and consume the same pool of resources from the tenants quota creating safe remote dev namespaces that teams can use as scratch namespace for rapid prototyping and development. So, every developer gets a Kubernetes-based cloud development environment that feel like working on localhost. - -More details on [Sandboxes](./tutorials/tenant/create-sandbox.md#create-private-sandboxes) - -## Templates and Template distribution - -Multi Tenant Operator allows admins/users to define templates for namespaces, so that others can instantiate these templates to provision namespaces with batteries loaded. A template could pre-populate a namespace for certain use cases or with basic tooling required. Templates allow you to define Kubernetes manifests, Helm chart and more to be applied when the template is used to create a namespace. - -It also allows the parameterizing of these templates for flexibility and ease of use. It also provides the option to enforce the presence of templates in one tenant's or all the tenants' namespaces for configuring secure defaults. 
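A minimal sketch of such a template is shown below; the `TEAM_NAME` parameter and the ConfigMap contents are illustrative only, while the `Template` schema and parameter substitution syntax follow the Custom Resources docs:

```yaml
# Sketch: a parameterized namespace Template.
# TEAM_NAME and the ConfigMap are hypothetical, used only to illustrate parameterization.
apiVersion: tenantoperator.stakater.com/v1alpha1
kind: Template
metadata:
  name: team-defaults
parameters:
  - name: TEAM_NAME
    value: "alpha"
resources:
  manifests:
    - apiVersion: v1
      kind: ConfigMap
      metadata:
        name: team-defaults
      data:
        team: "${{TEAM_NAME}}"
```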
- -Common use cases for namespace templates may be: - -- Adding networking policies for multitenancy -- Adding development tooling to a namespace -- Deploying pre-populated databases with test data -- Injecting new namespaces with optional credentials such as image pull secrets - -More details on [Distributing Template Resources](./reference-guides/deploying-templates.md) - -## Hibernation - -Multi Tenant Operator can downscale Deployments and StatefulSets in a tenant's Namespace according to a defined sleep schedule. The Deployments and StatefulSets are brought back to their required replicas according to the provided wake schedule. - -More details on [Hibernation](./tutorials/tenant/tenant-hibernation.md) - -## Cross Namespace Resource Distribution - -Multi Tenant Operator supports cloning of secrets and configmaps from one namespace to another namespace based on label selectors. It uses templates to enable users to provide reference to secrets and configmaps. It uses a template group instance to distribute those secrets and namespaces in matching namespaces, even if namespaces belong to different tenants. If template instance is used then the resources will only be mapped if namespaces belong to same tenant. - -More details on [Distributing Secrets and ConfigMaps](./reference-guides/distributing-resources.md) - -## Self-Service - -With Multi Tenant Operator, you can empower your users to safely provision namespaces for themselves and their teams (typically mapped to SSO groups). Team-owned namespaces and the resources inside them count towards the team's quotas rather than the user's individual limits and are automatically shared with all team members according to the access rules you configure in Multi Tenant Operator. - -Also, by leveraging Multi Tenant Operator's templating mechanism, namespaces can be provisioned and automatically pre-populated with any kind of resource or multiple resources such as network policies, docker pull secrets or even Helm charts etc - -## Everything as Code/GitOps Ready - -Multi Tenant Operator is designed and built to be 100% OpenShift-native and to be configured and managed the same familiar way as native OpenShift resources so is perfect for modern shops that are dedicated to GitOps as it is fully configurable using Custom Resources. - -## Preventing Clusters Sprawl - -As companies look to further harness the power of cloud-native, they are adopting container technologies at rapid speed, increasing the number of clusters and workloads. As the number of Kubernetes clusters grows, this is an increasing work for the Ops team. When it comes to patching security issues or upgrading clusters, teams are doing five times the amount of work. - -With Multi Tenant Operator teams can share a single cluster with multiple teams, groups of users, or departments by saving operational and management efforts. This prevents you from Kubernetes cluster sprawl. - -## Native Experience - -Multi Tenant Operator provides multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries. - -## Custom Metrics Support - -Multi Tenant Operator now supports custom metrics for templates, template instances and template group instances. - -Exposed metrics contain, number of resources deployed, number of resources failed, total number of resources deployed for template instances and template group instances. These metrics can be used to monitor the usage of templates and template instances in the cluster. 
- -Additionally, this allows us to expose other performance metrics listed [here](https://book.kubebuilder.io/reference/metrics-reference.html). - -More details on [Enabling Custom Metrics](./reference-guides/custom-metrics.md) - -## Graph Visualization for Tenants - -Multi Tenant Operator now supports graph visualization for tenants on the MTO Console. Effortlessly associate tenants with their respective resources using the enhanced graph feature on the MTO Console. This dynamic graph illustrates the relationships between tenants and the resources they create, encompassing both MTO's proprietary resources and native Kubernetes/OpenShift elements. - -More details on [Graph Visualization](./reference-guides/graph-visualization.md) diff --git a/content/hibernation.md b/content/hibernation.md deleted file mode 100644 index 5171793b6..000000000 --- a/content/hibernation.md +++ /dev/null @@ -1,86 +0,0 @@ -# Hibernating Namespaces - -You can manage workloads in your cluster with MTO by implementing a hibernation schedule for your tenants. -Hibernation downsizes the running Deployments and StatefulSets in a tenant’s namespace according to a defined cron schedule. You can set a hibernation schedule for your tenants by adding the ‘spec.hibernation’ field to the tenant's respective Custom Resource. - -```yaml -hibernation: - sleepSchedule: 23 * * * * - wakeSchedule: 26 * * * * -``` - -`spec.hibernation.sleepSchedule` accepts a cron expression indicating the time to put the workloads in your tenant’s namespaces to sleep. - -`spec.hibernation.wakeSchedule` accepts a cron expression indicating the time to wake the workloads in your tenant’s namespaces up. - -!!! note - Both sleep and wake schedules must be specified for your Hibernation schedule to be valid. - -Additionally, adding the `hibernation.stakater.com/exclude: 'true'` annotation to a namespace excludes it from hibernating. - -!!! note - This is only true for hibernation applied via the Tenant Custom Resource, and does not apply for hibernation done by manually creating a ResourceSupervisor (details about that below). - -!!! note - This will not wake up an already sleeping namespace before the wake schedule. - -## Resource Supervisor - -Adding a Hibernation Schedule to a Tenant creates an accompanying ResourceSupervisor Custom Resource. -The Resource Supervisor stores the Hibernation schedules and manages the current and previous states of all the applications, whether they are sleeping or awake. - -When the sleep timer is activated, the Resource Supervisor controller stores the details of your applications (including the number of replicas, configurations, etc.) in the applications' namespaces and then puts your applications to sleep. When the wake timer is activated, the controller wakes up the applications using their stored details. - -Enabling ArgoCD support for Tenants will also hibernate applications in the tenants' `appProjects`. - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta1 -kind: ResourceSupervisor -metadata: - name: sigma -spec: - argocd: - appProjects: - - sigma - namespace: openshift-gitops - hibernation: - sleepSchedule: 42 * * * * - wakeSchedule: 45 * * * * - namespaces: - - tenant-ns1 - - tenant-ns2 -``` - -> Currently, Hibernation is available only for StatefulSets and Deployments. - -### Manual creation of ResourceSupervisor - -Hibernation can also be applied by creating a ResourceSupervisor resource manually. 
-The ResourceSupervisor definition will contain the hibernation cron schedule, the names of the namespaces to be hibernated, and the names of the ArgoCD AppProjects whose ArgoCD Applications have to be hibernated (as per the given schedule). - -This method can be used to hibernate: - -- Some specific namespaces and AppProjects in a tenant -- A set of namespaces and AppProjects belonging to different tenants -- Namespaces and AppProjects belonging to a tenant that the cluster admin is not a member of -- Non-tenant namespaces and ArgoCD AppProjects - -As an example, the following ResourceSupervisor could be created manually, to apply hibernation explicitly to the 'ns1' and 'ns2' namespaces, and to the 'sample-app-project' AppProject. - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta1 -kind: ResourceSupervisor -metadata: - name: hibernator -spec: - argocd: - appProjects: - - sample-app-project - namespace: openshift-gitops - hibernation: - sleepSchedule: 42 * * * * - wakeSchedule: 45 * * * * - namespaces: - - ns1 - - ns2 -``` diff --git a/content/how-to-guides/integration-config.md b/content/how-to-guides/integration-config.md index 30e9cb580..5336d6886 100644 --- a/content/how-to-guides/integration-config.md +++ b/content/how-to-guides/integration-config.md @@ -100,7 +100,7 @@ Following are the different components that can be used to configure multi-tenan TenantRoles are required within the IntegrationConfig, as they are used for defining what roles will be applied to each Tenant namespace. The field allows optional custom roles, that are then used to create RoleBindings for namespaces that match a labelSelector. -> ⚠️ If you do not configure roles in any way, then the default OpenShift roles of `owner`, `edit`, and `view` will apply to Tenant members. Their details can be found [here](../tenant-roles.md) +> ⚠️ If you do not configure roles in any way, then the default OpenShift roles of `owner`, `edit`, and `view` will apply to Tenant members. Their details can be found [here](../reference-guides/custom-roles.md) ```yaml tenantRoles: @@ -248,11 +248,17 @@ users: ### Cluster Admin Groups -`clusterAdminGroups:` Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way +### Cluster Admin Groups + +`clusterAdminGroups:` Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way. MTO does not interfere even with the deletion of privilegedNamespaces. + +!!! note + User `kube:admin` is bypassed by default to perform operations as a cluster admin, this includes operations on all the namespaces. ### Privileged Namespaces -`privilegedNamespaces:` Contains the list of `namespaces` ignored by MTO. MTO will not manage the `namespaces` in this list. Values in this list are regex patterns. +`privilegedNamespaces:` Contains the list of `namespaces` ignored by MTO. MTO will not manage the `namespaces` in this list. Treatment for privileged namespaces does not involve further integrations or finalizers processing as with normal namespaces. Values in this list are regex patterns. 
+ For example: - To ignore the `default` namespace, we can specify `^default$` @@ -312,6 +318,27 @@ argocd: `argocd.clusterResourceWhitelist` allows ArgoCD to sync the listed cluster scoped resources from your GitOps repo. +## Provision + +```yaml +provision: + console: true + showback: true +``` + +`provision.console:` Can be used to enable/disable console GUI for MTO. +`provision.showback:` Can be used to enable/disable showback feature on the console. + +Integration config will be managing the following resources required for console GUI: + +- `Showback` cronjob. +- `Keycloak` deployment. +- `MTO-OpenCost` operator. +- `MTO-Prometheus` operator. +- `MTO-Postgresql` stateful set. + +Details on console GUI and showback can be found [here](../explanation/console.md) + ## RHSSO (Red Hat Single Sign-On) Red Hat Single Sign-On [RHSSO](https://access.redhat.com/products/red-hat-single-sign-on) is based on the Keycloak project and enables you to secure your web applications by providing Web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0. @@ -345,9 +372,9 @@ If `vault` is configured on a cluster, then Vault configuration can be enabled. ```yaml Vault: enabled: true - accessorPath: oidc/ - address: 'https://vault.apps.prod.abcdefghi.kubeapp.cloud/' - roleName: mto + accessorPath: oidc/ + address: 'https://vault.apps.prod.abcdefghi.kubeapp.cloud/' + roleName: mto sso: clientName: vault ``` diff --git a/content/installation.md b/content/installation.md deleted file mode 100644 index 295c154d9..000000000 --- a/content/installation.md +++ /dev/null @@ -1,163 +0,0 @@ -# Installation - -This document contains instructions on installing, uninstalling and configuring Multi Tenant Operator using OpenShift MarketPlace. - -1. [OpenShift OperatorHub UI](#installing-via-operatorhub-ui) - -1. [CLI/GitOps](#installing-via-cli-or-gitops) - -1. [Enabling Console](#enabling-console) - -1. [Uninstall](#uninstall-via-operatorhub-ui) - -## Requirements - -* An **OpenShift** cluster [v4.8 - v4.13] - -## Installing via OperatorHub UI - -* After opening OpenShift console click on `Operators`, followed by `OperatorHub` from the side menu - -![image](./images/operatorHub.png) - -* Now search for `Multi Tenant Operator` and then click on `Multi Tenant Operator` tile - -![image](./images/search_tenant_operator_operatorHub.png) - -* Click on the `install` button - -![image](./images/to_install_1.png) - -* Select `Updated channel`. Select `multi-tenant-operator` to install the operator in `multi-tenant-operator` namespace from `Installed Namespace` dropdown menu. After configuring `Update approval` click on the `install` button. - -> Note: Use `stable` channel for seamless upgrades. For `Production Environment` prefer `Manual` approval and use `Automatic` for `Development Environment` - -![image](./images/to_install_2.png) - -* Wait for the operator to be installed - -![image](./images/to_install_wait.png) - -* Once successfully installed, MTO will be ready to enforce multi-tenancy in your cluster - -![image](./images/to_installed_successful.png) - -> Note: MTO will be installed in `multi-tenant-operator` namespace. - -## Installing via CLI OR GitOps - -* Create namespace `multi-tenant-operator` - -```bash -oc create namespace multi-tenant-operator -namespace/multi-tenant-operator created -``` - -* Create an OperatorGroup YAML for MTO and apply it in `multi-tenant-operator` namespace. 
- -```bash -oc create -f - << EOF -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: tenant-operator - namespace: multi-tenant-operator -EOF -operatorgroup.operators.coreos.com/tenant-operator created -``` - -* Create a subscription YAML for MTO and apply it in `multi-tenant-operator` namespace. To enable console set `.spec.config.env[].ENABLE_CONSOLE` to `true`. This will create a route resource, which can be used to access the Multi-Tenant-Operator console. - -```bash -oc create -f - << EOF -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: tenant-operator - namespace: multi-tenant-operator -spec: - channel: stable - installPlanApproval: Automatic - name: tenant-operator - source: certified-operators - sourceNamespace: openshift-marketplace - startingCSV: tenant-operator.v0.10.0 -EOF -subscription.operators.coreos.com/tenant-operator created -``` - -> Note: To bring MTO via GitOps, add the above files in GitOps repository. - -* After creating the `subscription` custom resource open OpenShift console and click on `Operators`, followed by `Installed Operators` from the side menu - -![image](./images/to_sub_installation_wait.png) - -* Wait for the installation to complete - -![image](./images/to_sub_installation_successful.png) - -* Once the installation is complete click on `Workloads`, followed by `Pods` from the side menu and select `multi-tenant-operator` project - -![image](./images/select_multi_tenant_operator_project.png) - -* Once pods are up and running, MTO will be ready to enforce multi-tenancy in your cluster - -![image](./images/to_installed_successful_pod.png) - -For more details and configurations check out [IntegrationConfig](./integration-config.md). - -## Enabling Console - -To enable console GUI for MTO, go to `Search` -> `IntegrationConfig` -> `tenant-operator-config` and make sure the following fields are set to `true`: - -```yaml -spec: - provision: - console: true - showback: true -``` - -> Note: If your `InstallPlan` approval is set to `Manual` then you will have to manually approve the `InstallPlan` for MTO console components to be installed. - -### Manual Approval - -* Open OpenShift console and click on `Operators`, followed by `Installed Operators` from the side menu. - -![image](./images/manual-approve-1.png) - -* Now click on `Upgrade available` in front of `mto-opencost` or `mto-prometheus`. - -![image](./images/manual-approve-2.png) - -* Now click on `Preview InstallPlan` on top. - -![image](./images/manual-approve-3.png) - -* Now click on `Approve` button. - -![image](./images/manual-approve-4.png) - -* Now the `InstallPlan` will be approved, and MTO console components will be installed. - -## Uninstall via OperatorHub UI - -You can uninstall MTO by following these steps: - -* Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set `spec.onDelete.cleanNamespaces` to `false` for all those tenants whose namespaces you want to retain, and `spec.onDelete.cleanAppProject` to `false` for all those tenants whose AppProject you want to retain. For more details check out [onDelete](./usecases/tenant.md#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted) - -* After making the required changes open OpenShift console and click on `Operators`, followed by `Installed Operators` from the side menu - -![image](./images/installed-operators.png) - -* Now click on uninstall and confirm uninstall. 
- -![image](./images/uninstall-from-ui.png) - -* Now the operator has been uninstalled. - -* `Optional:` you can also manually remove MTO's CRDs and its resources from the cluster. - -## Notes - -* For more details on how to use MTO please refer [use-cases](./usecases/quota.md). -* For more details on how to extend your MTO manager ClusterRole please refer [extend-admin-clusterrole](./usecases/admin-clusterrole.md). diff --git a/content/integration-config.md b/content/integration-config.md deleted file mode 100644 index 1cc42b036..000000000 --- a/content/integration-config.md +++ /dev/null @@ -1,397 +0,0 @@ -# Integration Config - -IntegrationConfig is used to configure settings of multi-tenancy for Multi Tenant Operator. - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator -spec: - tenantRoles: - default: - owner: - clusterRoles: - - admin - editor: - clusterRoles: - - edit - viewer: - clusterRoles: - - view - - viewer - custom: - - labelSelector: - matchExpressions: - - key: stakater.com/kind - operator: In - values: - - build - matchLabels: - stakater.com/kind: dev - owner: - clusterRoles: - - custom-owner - editor: - clusterRoles: - - custom-editor - viewer: - clusterRoles: - - custom-viewer - - custom-view - openshift: - project: - labels: - stakater.com/workload-monitoring: "true" - annotations: - openshift.io/node-selector: node-role.kubernetes.io/worker= - group: - labels: - role: customer-reader - sandbox: - labels: - stakater.com/kind: sandbox - clusterAdminGroups: - - cluster-admins - privilegedNamespaces: - - ^default$ - - ^openshift-* - - ^kube-* - privilegedServiceAccounts: - - ^system:serviceaccount:openshift-* - - ^system:serviceaccount:kube-* - namespaceAccessPolicy: - deny: - privilegedNamespaces: - users: - - system:serviceaccount:openshift-argocd:argocd-application-controller - - adam@stakater.com - groups: - - cluster-admins - argocd: - namespace: openshift-operators - namespaceResourceBlacklist: - - group: '' # all groups - kind: ResourceQuota - clusterResourceWhitelist: - - group: tronador.stakater.com - kind: EnvironmentProvisioner - rhsso: - enabled: true - realm: customer - endpoint: - url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/ - secretReference: - name: auth-secrets - namespace: openshift-auth - vault: - enabled: true - endpoint: - url: https://vault.apps.prod.abcdefghi.kubeapp.cloud/ - secretReference: - name: vault-root-token - namespace: vault - sso: - clientName: vault - accessorID: - provision: - console: true - showback: true -``` - -Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator. - -## TenantRoles - -TenantRoles are required within the IntegrationConfig, as they are used for defining what roles will be applied to each Tenant namespace. The field allows optional custom roles, that are then used to create RoleBindings for namespaces that match a labelSelector. - -> ⚠️ If you do not configure roles in any way, then the default OpenShift roles of `owner`, `edit`, and `view` will apply to Tenant members. 
Their details can be found [here](./tenant-roles.md) - -```yaml -tenantRoles: - default: - owner: - clusterRoles: - - admin - editor: - clusterRoles: - - edit - viewer: - clusterRoles: - - view - - viewer - custom: - - labelSelector: - matchExpressions: - - key: stakater.com/kind - operator: In - values: - - build - matchLabels: - stakater.com/kind: dev - owner: - clusterRoles: - - custom-owner - editor: - clusterRoles: - - custom-editor - viewer: - clusterRoles: - - custom-viewer - - custom-view -``` - -### Default - -This field contains roles that will be used to create default roleBindings for each namespace that belongs to tenants. These roleBindings are only created for a namespace if that namespaces isn't already matched by the `custom` field below it. Therefore, it is required to have at least one role mentioned within each of its three subfields: `owner`, `editor`, and `viewer`. These 3 subfields also correspond to the member fields of the [Tenant CR](./customresources.md#_2-tenant) - -### Custom - -An array of custom roles. Similar to the `default` field, you can mention roles within this field as well. However, the custom roles also require the use of a `labelSelector` for each iteration within the array. The roles mentioned here will only apply to the namespaces that are matched by the labelSelector. If a namespace is matched by 2 different labelSelectors, then both roles will apply to it. Additionally, roles can be skipped within the labelSelector. These missing roles are then inherited from the `default` roles field . For example, if the following custom roles arrangement is used: - -```yaml -custom: -- labelSelector: - matchExpressions: - - key: stakater.com/kind - operator: In - values: - - build - matchLabels: - stakater.com/kind: dev - owner: - clusterRoles: - - custom-owner -``` - -Then the `editor` and `viewer` roles will be taken from the `default` roles field, as that is required to have at least one role mentioned. - -## OpenShift - -``` yaml -openshift: - project: - labels: - stakater.com/workload-monitoring: "true" - annotations: - openshift.io/node-selector: node-role.kubernetes.io/worker= - group: - labels: - role: customer-reader - sandbox: - labels: - stakater.com/kind: sandbox - clusterAdminGroups: - - cluster-admins - privilegedNamespaces: - - ^default$ - - ^openshift-* - - ^kube-* - privilegedServiceAccounts: - - ^system:serviceaccount:openshift-* - - ^system:serviceaccount:kube-* - namespaceAccessPolicy: - deny: - privilegedNamespaces: - users: - - system:serviceaccount:openshift-argocd:argocd-application-controller - - adam@stakater.com - groups: - - cluster-admins -``` - -### Project, group and sandbox - -We can use the `openshift.project`, `openshift.group` and `openshift.sandbox` fields to automatically add `labels` and `annotations` to the **Projects** and **Groups** managed via MTO. - -```yaml - openshift: - project: - labels: - stakater.com/workload-monitoring: "true" - annotations: - openshift.io/node-selector: node-role.kubernetes.io/worker= - group: - labels: - role: customer-reader - sandbox: - labels: - stakater.com/kind: sandbox -``` - -If we want to add default *labels/annotations* to sandbox namespaces of tenants than we just simply add them in `openshift.project.labels`/`openshift.project.annotations` respectively. - -Whenever a project is made it will have the labels and annotations as mentioned above. 
- -```yaml -kind: Project -apiVersion: project.openshift.io/v1 -metadata: - name: bluesky-build - annotations: - openshift.io/node-selector: node-role.kubernetes.io/worker= - labels: - workload-monitoring: 'true' - stakater.com/tenant: bluesky -spec: - finalizers: - - kubernetes -status: - phase: Active -``` - -```yaml -kind: Group -apiVersion: user.openshift.io/v1 -metadata: - name: bluesky-owner-group - labels: - role: customer-reader -users: - - andrew@stakater.com -``` - -### Cluster Admin Groups - -`clusterAdminGroups:` Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way. MTO does not interfere even with the deletion of privilegedNamespaces. - -!!! note - User `kube:admin` is bypassed by default to perform operations as a cluster admin, this includes operations on all the namespaces. - -### Privileged Namespaces - -`privilegedNamespaces:` Contains the list of `namespaces` ignored by MTO. MTO will not manage the `namespaces` in this list. Treatment for privileged namespaces does not involve further integrations or finalizers processing as with normal namespaces. Values in this list are regex patterns. -For example: - -- To ignore the `default` namespace, we can specify `^default$` -- To ignore all namespaces starting with the `openshift-` prefix, we can specify `^openshift-*`. -- To ignore any namespace containing `stakater` in its name, we can specify `stakater`. (A constant word given as a regex pattern will match any namespace containing that word.) - -### Privileged ServiceAccounts - -`privilegedServiceAccounts:` Contains the list of `ServiceAccounts` ignored by MTO. MTO will not manage the `ServiceAccounts` in this list. Values in this list are regex patterns. For example, to ignore all `ServiceAccounts` starting with the `system:serviceaccount:openshift-` prefix, we can use `^system:serviceaccount:openshift-*`; and to ignore the `system:serviceaccount:builder` service account we can use `^system:serviceaccount:builder$.` - -### Namespace Access Policy - -`namespaceAccessPolicy.Deny:` Can be used to restrict privileged *users/groups* CRUD operation over managed namespaces. - -```yaml -namespaceAccessPolicy: - deny: - privilegedNamespaces: - groups: - - cluster-admins - users: - - system:serviceaccount:openshift-argocd:argocd-application-controller - - adam@stakater.com -``` - -> ⚠️ If you want to use a more complex regex pattern (for the `openshift.privilegedNamespaces` or `openshift.privilegedServiceAccounts` field), it is recommended that you test the regex pattern first - either locally or using a platform such as . - -## ArgoCD - -### Namespace - -`argocd.namespace` is an optional field used to specify the namespace where ArgoCD Applications and AppProjects are deployed. The field should be populated when you want to create an ArgoCD AppProject for each tenant. - -### NamespaceResourceBlacklist - -```yaml -argocd: - namespaceResourceBlacklist: - - group: '' # all resource groups - kind: ResourceQuota - - group: '' - kind: LimitRange - - group: '' - kind: NetworkPolicy -``` - -`argocd.namespaceResourceBlacklist` prevents ArgoCD from syncing the listed resources from your GitOps repo. 
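The same blacklist can also be set per tenant through the Tenant CR's `argocd.appProject` field; per the Tenant Custom Resource docs, values set on the Tenant take precedence over the IntegrationConfig for that tenant. A minimal sketch (the repository URL and blacklisted kind are illustrative):

```yaml
# Sketch: tenant-level blacklist overriding the IntegrationConfig for this tenant.
apiVersion: tenantoperator.stakater.com/v1beta2
kind: Tenant
metadata:
  name: alpha
spec:
  quota: medium
  argocd:
    sourceRepos:
      - https://github.com/stakater/gitops-config
    appProject:
      namespaceResourceBlacklist:
        - group: ""        # core API group
          kind: ConfigMap
```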
- -### ClusterResourceWhitelist - -```yaml -argocd: - clusterResourceWhitelist: - - group: tronador.stakater.com - kind: EnvironmentProvisioner -``` - -`argocd.clusterResourceWhitelist` allows ArgoCD to sync the listed cluster scoped resources from your GitOps repo. - -## RHSSO (Red Hat Single Sign-On) - -Red Hat Single Sign-On [RHSSO](https://access.redhat.com/products/red-hat-single-sign-on) is based on the Keycloak project and enables you to secure your web applications by providing Web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0. - -If `RHSSO` is configured on a cluster, then RHSSO configuration can be enabled. - -```yaml -rhsso: - enabled: true - realm: customer - endpoint: - secretReference: - name: auth-secrets - namespace: openshift-auth - url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/ -``` - -If enabled, than admins have to provide secret and URL of RHSSO. - -- `secretReference.name:` Will contain the name of the secret. -- `secretReference.namespace:` Will contain the namespace of the secret. -- `realm:` Will contain the realm name which is configured for users. -- `url:` Will contain the URL of RHSSO. - -## Vault - -[Vault](https://www.vaultproject.io/) is used to secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API. - -If `vault` is configured on a cluster, then Vault configuration can be enabled. - -```yaml -Vault: - enabled: true - endpoint: - secretReference: - name: vault-root-token - namespace: vault - url: >- - https://vault.apps.prod.abcdefghi.kubeapp.cloud/ - sso: - accessorID: - clientName: vault -``` - -If enabled, than admins have to provide secret, URL and SSO accessorID of Vault. - -- `secretReference.name:` Will contain the name of the secret. -- `secretReference.namespace:` Will contain the namespace of the secret. -- `url:` Will contain the URL of Vault. -- `sso.accessorID:` Will contain the SSO accessorID. -- `sso.clientName:` Will contain the client name. - -For more details please refer [use-cases](./usecases/integrationconfig.md) - -## Provision - -```yaml -provision: - console: true - showback: true -``` - -`provision.console:` Can be used to enable/disable console GUI for MTO. -`provision.showback:` Can be used to enable/disable showback feature on the console. - -Integration config will be managing the following resources required for console GUI: - -- `Showback` cronjob. -- `Keycloak` deployment. -- `MTO-OpenCost` operator. -- `MTO-Prometheus` operator. -- `MTO-Postgresql` stateful set. - -Details on console GUI and showback can be found [here](explanation/console.md) diff --git a/content/reference-guides/add-remove-namespace-gitops.md b/content/reference-guides/add-remove-namespace-gitops.md deleted file mode 100644 index d223c731b..000000000 --- a/content/reference-guides/add-remove-namespace-gitops.md +++ /dev/null @@ -1 +0,0 @@ -# Add/Remove Namespace from Tenant via GitOps diff --git a/content/tenant-roles.md b/content/tenant-roles.md deleted file mode 100644 index e0076e8a8..000000000 --- a/content/tenant-roles.md +++ /dev/null @@ -1,229 +0,0 @@ -# Tenant Member Roles - -> After adding support for custom roles within MTO, this page is only applicable if you use OpenShift and its default `owner`, `edit`, and `view` roles. 
For more details, see the [IntegrationConfig spec](./integration-config.md) - -MTO tenant members can have one of following 3 roles: - -1. Owner -1. Editor -1. Viewer - -## 1. Owner - -![image](./images/tenant-operator-owner-overview.jpg) -fig 2. Shows how tenant owners manage their tenant using MTO - -Owner is an admin of a tenant with some restrictions. It has privilege to see all resources in their Tenant with some additional privileges. They can also create new `namespaces`. - -*Owners will also inherit roles from `Edit` and `View`.* - -### Access Permissions - -* Role and RoleBinding access in `Project` : - * delete - * create - * list - * get - * update - * patch - -### Quotas Permissions - -* LimitRange and ResourceQuota access in `Project` - * get - * list - * watch - -* Daemonset access in `Project` - * create - * delete - * get - * list - * patch - * update - * watch - -### Resources Permissions - -* CRUD access on Template, TemplateInstance and TemplateGroupInstance of MTO custom resources -* CRUD access on ImageStreamTags in `Project` -* Get access on CustomResourceDefinitions in `Project` -* Get, list, watch access on Builds, BuildConfigs in `Project` -* CRUD access on following resources in `Project`: - * Prometheuses - * Prometheusrules - * ServiceMonitors - * PodMonitors - * ThanosRulers -* Permission to create Namespaces. -* Restricted to perform actions on cluster resource Quotas and Limits. - -## 2. Editor - -![image](./images/tenant-operator-edit-overview.jpg) -fig 3. Shows editors role in a tenant using MTO - -Edit role will have edit access on their `Projects`, but they wont have access on `Roles` or `RoleBindings`. - -*Editors will also inherit `View` role.* - -### Access Permissions - -* ServiceAccount access in `Project` - * create - * delete - * deletecollection - * get - * list - * patch - * update - * watch - * impersonate - -### Quotas Permissions - -* AppliedClusterResourceQuotas and ResourceQuotaUsages access in `Project` - * get - * list - * watch - -### Builds ,Pods , PVC Permissions - -* Pod, PodDisruptionBudgets and PVC access in `Project` - * get - * list - * watch - * create - * delete - * deletecollection - * patch - * update -* Build, BuildConfig, BuildLog, DeploymentConfig, Deployment, ConfigMap, ImageStream , ImageStreamImage and ImageStreamMapping access in `Project` - * get - * list - * watch - * create - * delete - * deletecollection - * patch - * update - -### Resources Permissions - -* CRUD access on Template, TemplateInstance and TemplateGroupInstance of MTO custom resources -* Job, CronJob, Task, Trigger and Pipeline access in `Project` - * get - * list - * watch - * create - * delete - * deletecollection - * patch - * update -* Get access on projects -* Route and NetworkPolicies access in `Project` - * get - * list - * watch - * create - * delete - * deletecollection - * patch - * update -* Template, ReplicaSet, StatefulSet and DaemonSet access in `Project` - * get - * list - * watch - * create - * delete - * deletecollection - * patch - * update -* CRUD access on all Projects related to - * Elasticsearch - * Logging - * Kibana - * Istio - * Jaeger - * Kiali - * Tekton.dev -* Get access on CustomResourceDefinitions in `Project` -* Edit and view permission on `jenkins.build.openshift.io` -* InstallPlan access in `Project` - * get - * list - * watch - * delete -* Subscription and PackageManifest access in `Project` - * get - * list - * watch - * create - * delete - * deletecollection - * patch - * update - -## 3. 
Viewer - -![image](./images/tenant-operator-view-overview.jpg) -fig 4. Shows viewers role in a tenant using MTO - -Viewer role will only have view access on their `Project`. - -### Access Permissions - -* ServiceAccount access in `Project` - * get - * list - * watch - -### Quotas Permissions - -* AppliedClusterResourceQuotas access in `Project` - * get - * list - * watch - -### Builds ,Pods , PVC Permissions - -* Pod, PodDisruptionBudget and PVC access in `Project` - * get - * list - * watch -* Build, BuildConfig, BuildLog, DeploymentConfig, ConfigMap, ImageStream, ImageStreamImage and ImageStreamMapping access in `Project` - * get - * list - * watch - -### Resources Permissions - -* Get, list, view access on Template, TemplateInstance and TemplateGroupInstance of MTO custom resources -* Job, CronJob, Task, Trigger and Pipeline access in `Project` - * get - * list - * watch -* Get access on projects -* Routes, NetworkPolicies and Daemonset access in `Project` - * get - * list - * watch -* Template, ReplicaSet, StatefulSet and Daemonset in `Project` - * get - * list - * watch -* Get,list,watch access on all projects related to - * Elasticsearch - * Logging - * Kibana - * Istio - * Jaeger - * Kiali - * Tekton.dev -* Get, list, watch access on ImageStream, ImageStreamImage and ImageStreamMapping in `Project` -* Get access on CustomResourceDefinition in `Project` -* View permission on `Jenkins.Build.Openshift.io` -* Subscription, PackageManifest and InstallPlan access in `Project` - * get - * list - * watch diff --git a/content/tutorials/installation.md b/content/tutorials/installation.md index 1ab5bda16..1d72e321e 100644 --- a/content/tutorials/installation.md +++ b/content/tutorials/installation.md @@ -6,11 +6,13 @@ This document contains instructions on installing, uninstalling and configuring 1. [CLI/GitOps](#installing-via-cli-or-gitops) +1. [Enabling Console](#enabling-console) + 1. [Uninstall](#uninstall-via-operatorhub-ui) ## Requirements -* An **OpenShift** cluster [v4.7 - v4.12] +* An **OpenShift** cluster [v4.8 - v4.13] ## Installing via OperatorHub UI @@ -42,34 +44,6 @@ This document contains instructions on installing, uninstalling and configuring > Note: MTO will be installed in `multi-tenant-operator` namespace. -### Configuring IntegrationConfig - -IntegrationConfig is required to configure the settings of multi-tenancy for MTO. - -* We recommend using the following IntegrationConfig as a starting point - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator -spec: - openshift: - privilegedNamespaces: - - default - - ^openshift-* - - ^kube-* - - ^redhat-* - privilegedServiceAccounts: - - ^system:serviceaccount:default-* - - ^system:serviceaccount:openshift-* - - ^system:serviceaccount:kube-* - - ^system:serviceaccount:redhat-* -``` - -For more details and configurations check out [IntegrationConfig](../how-to-guides/integration-config.md). 
- ## Installing via CLI OR GitOps * Create namespace `multi-tenant-operator` @@ -107,11 +81,7 @@ spec: name: tenant-operator source: certified-operators sourceNamespace: openshift-marketplace - startingCSV: tenant-operator.v0.9.1 - config: - env: - - name: ENABLE_CONSOLE - value: 'true' + startingCSV: tenant-operator.v0.10.0 EOF subscription.operators.coreos.com/tenant-operator created ``` @@ -134,39 +104,46 @@ subscription.operators.coreos.com/tenant-operator created ![image](../images/to_installed_successful_pod.png) -### Configuring IntegrationConfig +For more details and configurations check out [IntegrationConfig](../how-to-guides/integration-config.md). -IntegrationConfig is required to configure the settings of multi-tenancy for MTO. +## Enabling Console -* We recommend using the following IntegrationConfig as a starting point: +To enable console GUI for MTO, go to `Search` -> `IntegrationConfig` -> `tenant-operator-config` and make sure the following fields are set to `true`: ```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator spec: - openshift: - privilegedNamespaces: - - default - - ^openshift-* - - ^kube-* - - ^redhat-* - privilegedServiceAccounts: - - ^system:serviceaccount:default-* - - ^system:serviceaccount:openshift-* - - ^system:serviceaccount:kube-* - - ^system:serviceaccount:redhat-* + provision: + console: true + showback: true ``` -For more details and configurations check out [IntegrationConfig](../how-to-guides/integration-config.md). +> Note: If your `InstallPlan` approval is set to `Manual` then you will have to manually approve the `InstallPlan` for MTO console components to be installed. + +### Manual Approval + +* Open OpenShift console and click on `Operators`, followed by `Installed Operators` from the side menu. + +![image](../images/manual-approve-1.png) + +* Now click on `Upgrade available` in front of `mto-opencost` or `mto-prometheus`. + +![image](../images/manual-approve-2.png) + +* Now click on `Preview InstallPlan` on top. + +![image](../images/manual-approve-3.png) + +* Now click on `Approve` button. + +![image](../images/manual-approve-4.png) + +* Now the `InstallPlan` will be approved, and MTO console components will be installed. ## Uninstall via OperatorHub UI You can uninstall MTO by following these steps: -* Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set `spec.onDelete.cleanNamespaces` to `false` for all those tenants whose namespaces you want to retain, and `spec.onDelete.cleanAppProject` to `false` for all those tenants whose AppProject you want to retain. For more details check out [onDelete](./tenant/deleting-tenant.md#retaining-tenant-namespaces-and-appproject-when-a-tenant-is-being-deleted) +* Decide on whether you want to retain tenant namespaces and ArgoCD AppProjects or not. If yes, please set `spec.onDelete.cleanNamespaces` to `false` for all those tenants whose namespaces you want to retain, and `spec.onDelete.cleanAppProject` to `false` for all those tenants whose AppProject you want to retain. 
For more details check out [onDelete](../tutorials/tenant/deleting-tenant.md) * After making the required changes open OpenShift console and click on `Operators`, followed by `Installed Operators` from the side menu diff --git a/content/tutorials/template/template-group-instance.md b/content/tutorials/template/template-group-instance.md deleted file mode 100644 index e9e9c2a93..000000000 --- a/content/tutorials/template/template-group-instance.md +++ /dev/null @@ -1 +0,0 @@ -# More about TemplateGroupInstance diff --git a/content/tutorials/template/template-instance.md b/content/tutorials/template/template-instance.md deleted file mode 100644 index 62c983627..000000000 --- a/content/tutorials/template/template-instance.md +++ /dev/null @@ -1 +0,0 @@ -# More about TemplateInstances diff --git a/content/tutorials/tenant/assign-quota-tenant.md b/content/tutorials/tenant/assign-quota-tenant.md deleted file mode 100644 index 5e1dba39b..000000000 --- a/content/tutorials/tenant/assign-quota-tenant.md +++ /dev/null @@ -1 +0,0 @@ -# Assign Quota to a Tenant diff --git a/content/tutorials/tenant/custom-rbac.md b/content/tutorials/tenant/custom-rbac.md deleted file mode 100644 index 7b0bd772d..000000000 --- a/content/tutorials/tenant/custom-rbac.md +++ /dev/null @@ -1 +0,0 @@ -# Applying Custom RBAC to a Tenant diff --git a/content/usecases/admin-clusterrole.md b/content/usecases/admin-clusterrole.md deleted file mode 100644 index 1e0dc8fa2..000000000 --- a/content/usecases/admin-clusterrole.md +++ /dev/null @@ -1,30 +0,0 @@ -# Extending Admin ClusterRole - -Bill as the cluster admin want to add additional rules for admin ClusterRole. - -Bill can extend the `admin` role for MTO using the aggregation label for admin ClusterRole. Bill will create a new ClusterRole with all the permissions he needs to extend for MTO and add the aggregation label on the newly created ClusterRole. - -```yaml -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: extend-admin-role - labels: - rbac.authorization.k8s.io/aggregate-to-admin: 'true' -rules: - - verbs: - - create - - update - - patch - - delete - apiGroups: - - user.openshift.io - resources: - - groups -``` - -> Note: You can learn more about `aggregated-cluster-roles` [here](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) - -## What’s next - -See how Bill can [hibernate unused namespaces at night](../tutorials/tenant/tenant-hibernation.md) diff --git a/content/usecases/argocd.md b/content/usecases/argocd.md deleted file mode 100644 index 219ab84f4..000000000 --- a/content/usecases/argocd.md +++ /dev/null @@ -1,219 +0,0 @@ -# ArgoCD - -## Creating ArgoCD AppProjects for your tenant - -Bill wants each tenant to also have their own ArgoCD AppProjects. To make sure this happens correctly, Bill will first specify the ArgoCD namespace in the [IntegrationConfig](./../integration-config.md): - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator -spec: - ... - argocd: - namespace: openshift-operators - ... 
-``` - -Afterwards, Bill must specify the source GitOps repos for the tenant inside the tenant CR like so: - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 -kind: Tenant -metadata: - name: sigma -spec: - argocd: - sourceRepos: - # specify source repos here - - "https://github.com/stakater/GitOps-config" - owners: - users: - - user - editors: - users: - - user1 - quota: medium - sandbox: false - namespaces: - withTenantPrefix: - - build - - stage - - dev -``` - -Now Bill can see an AppProject will be created for the tenant - -```bash -oc get AppProject -A -NAMESPACE NAME AGE -openshift-operators sigma 5d15h -``` - -The following AppProject is created: - -```yaml -apiVersion: argoproj.io/v1alpha1 -kind: AppProject -metadata: - name: sigma - namespace: openshift-operators -spec: - destinations: - - namespace: sigma-build - server: "https://kubernetes.default.svc" - - namespace: sigma-dev - server: "https://kubernetes.default.svc" - - namespace: sigma-stage - server: "https://kubernetes.default.svc" - roles: - - description: >- - Role that gives full access to all resources inside the tenant's - namespace to the tenant owner group - groups: - - saap-cluster-admins - - stakater-team - - sigma-owner-group - name: sigma-owner - policies: - - "p, proj:sigma:sigma-owner, *, *, sigma/*, allow" - - description: >- - Role that gives edit access to all resources inside the tenant's - namespace to the tenant owner group - groups: - - saap-cluster-admins - - stakater-team - - sigma-edit-group - name: sigma-edit - policies: - - "p, proj:sigma:sigma-edit, *, *, sigma/*, allow" - - description: >- - Role that gives view access to all resources inside the tenant's - namespace to the tenant owner group - groups: - - saap-cluster-admins - - stakater-team - - sigma-view-group - name: sigma-view - policies: - - "p, proj:sigma:sigma-view, *, get, sigma/*, allow" - sourceRepos: - - "https://github.com/stakater/gitops-config" -``` - -Users belonging to the Sigma group will now only see applications created by them in the ArgoCD frontend now: - -![image](./../images/argocd.png) - -## Prevent ArgoCD from syncing certain namespaced resources - -Bill wants tenants to not be able to sync `ResourceQuota` and `LimitRange` resources to their namespaces. To do this correctly, Bill will specify these resources to blacklist in the ArgoCD portion of the [IntegrationConfig](./../integration-config.md): - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator -spec: - ... - argocd: - namespace: openshift-operators - namespaceResourceBlacklist: - - group: "" - kind: ResourceQuota - - group: "" - kind: LimitRange - ... -``` - -Now, if these resources are added to any tenant's project directory in GitOps, ArgoCD will not sync them to the cluster. The AppProject will also have the blacklisted resources added to it: - -```yaml -apiVersion: argoproj.io/v1alpha1 -kind: AppProject -metadata: - name: sigma - namespace: openshift-operators -spec: - ... - namespaceResourceBlacklist: - - group: '' - kind: ResourceQuota - - group: '' - kind: LimitRange - ... -``` - -## Allow ArgoCD to sync certain cluster-wide resources - -Bill now wants tenants to be able to sync the `Environment` cluster scoped resource to the cluster. 
To do this correctly, Bill will specify the resource to allow-list in the ArgoCD portion of the Integration Config's Spec: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator -spec: - ... - argocd: - namespace: openshift-operators - clusterResourceWhitelist: - - group: "" - kind: Environment - ... -``` - -Now, if the resource is added to any tenant's project directory in GitOps, ArgoCD will sync them to the cluster. The AppProject will also have the allow-listed resources added to it: - -```yaml -apiVersion: argoproj.io/v1alpha1 -kind: AppProject -metadata: - name: sigma - namespace: openshift-operators -spec: - ... - clusterResourceWhitelist: - - group: "" - kind: Environment - ... -``` - -## Override NamespaceResourceBlacklist and/or ClusterResourceWhitelist per Tenant - -Bill now wants a specific tenant to override the `namespaceResourceBlacklist` and/or `clusterResourceWhitelist` set via Integration Config. Bill will specify these in `argoCD.appProjects` section of Tenant spec. - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 -kind: Tenant -metadata: - name: blue-sky -spec: - argocd: - sourceRepos: - # specify source repos here - - "https://github.com/stakater/GitOps-config" - appProject: - clusterResourceWhitelist: - - group: admissionregistration.k8s.io - kind: validatingwebhookconfigurations - namespaceResourceBlacklist: - - group: "" - kind: ConfigMap - owners: - users: - - user - editors: - users: - - user1 - quota: medium - sandbox: false - namespaces: - withTenantPrefix: - - build - - stage -``` diff --git a/content/usecases/configuring-multitenant-network-isolation.md b/content/usecases/configuring-multitenant-network-isolation.md deleted file mode 100644 index 8d751f3c3..000000000 --- a/content/usecases/configuring-multitenant-network-isolation.md +++ /dev/null @@ -1,96 +0,0 @@ -# Configuring Multi-Tenant Isolation with Network Policy Template - -Bill is a cluster admin who wants to configure network policies to provide multi-tenant network isolation. 
- -First, Bill creates a template for network policies: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: tenant-network-policy -resources: - manifests: - - apiVersion: networking.k8s.io/v1 - kind: NetworkPolicy - metadata: - name: allow-same-namespace - spec: - podSelector: {} - ingress: - - from: - - podSelector: {} - - apiVersion: networking.k8s.io/v1 - kind: NetworkPolicy - metadata: - name: allow-from-openshift-monitoring - spec: - ingress: - - from: - - namespaceSelector: - matchLabels: - network.openshift.io/policy-group: monitoring - podSelector: {} - policyTypes: - - Ingress - - apiVersion: networking.k8s.io/v1 - kind: NetworkPolicy - metadata: - name: allow-from-openshift-ingress - spec: - ingress: - - from: - - namespaceSelector: - matchLabels: - network.openshift.io/policy-group: ingress - podSelector: {} - policyTypes: - - Ingress -``` - -Once the template has been created, Bill edits the [IntegrationConfig](./../integration-config.md) to add unique label to all tenant projects: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator -spec: - openshift: - project: - labels: - stakater.com/workload-monitoring: "true" - tenant-network-policy: "true" - annotations: - openshift.io/node-selector: node-role.kubernetes.io/worker= - sandbox: - labels: - stakater.com/kind: sandbox - privilegedNamespaces: - - default - - ^openshift-* - - ^kube-* - privilegedServiceAccounts: - - ^system:serviceaccount:openshift-* - - ^system:serviceaccount:kube-* -``` - -Bill has added a new label `tenant-network-policy: "true"` in project section of IntegrationConfig, now MTO will add that label in all tenant projects. - -Finally Bill creates a `TemplateGroupInstance` which will distribute the network policies using the newly added project label and template. - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: TemplateGroupInstance -metadata: - name: tenant-network-policy-group -spec: - template: tenant-network-policy - selector: - matchLabels: - tenant-network-policy: "true" - sync: true -``` - -MTO will now deploy the network policies mentioned in `Template` to all projects matching the label selector mentioned in the TemplateGroupInstance. diff --git a/content/usecases/custom-roles.md b/content/usecases/custom-roles.md deleted file mode 100644 index ace50dc37..000000000 --- a/content/usecases/custom-roles.md +++ /dev/null @@ -1,72 +0,0 @@ -# Changing the default access level for tenant owners - -This feature allows the cluster admins to change the default roles assigned to Tenant owner, editor, viewer groups. - -For example, if Bill as the cluster admin wants to reduce the privileges that tenant owners have, so they cannot create or edit Roles or bind them. As an admin of an OpenShift cluster, Bill can do this by assigning the `edit` role to all tenant owners. This is easily achieved by modifying the [IntegrationConfig](./../integration-config.md): - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator -spec: - tenantRoles: - default: - owner: - clusterRoles: - - edit - editor: - clusterRoles: - - edit - viewer: - clusterRoles: - - view -``` - -Once all namespaces reconcile, the old `admin` RoleBindings should get replaced with the `edit` ones for each tenant owner. 
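As a quick sanity check, Bill can list the RoleBindings in any of the tenant's namespaces and confirm that the owner group's binding now references the `edit` ClusterRole. This is only a sketch: the namespace name `bluesky-dev` is an example, and the column layout depends on your kubectl version.

```bash
# Show each RoleBinding in a tenant namespace together with the ClusterRole it references
kubectl get rolebindings -n bluesky-dev \
  -o custom-columns='NAME:.metadata.name,ROLE:.roleRef.name,SUBJECTS:.subjects[*].name'
```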
- -## Giving specific permissions to some tenants - -Bill now wants the owners of the tenants `bluesky` and `alpha` to have `admin` permissions over their namespaces. Custom roles feature will allow Bill to do this, by modifying the IntegrationConfig like this: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator -spec: - tenantRoles: - default: - owner: - clusterRoles: - - edit - editor: - clusterRoles: - - edit - viewer: - clusterRoles: - - view - custom: - - labelSelector: - matchExpressions: - - key: stakater.com/tenant - operator: In - values: - - alpha - owner: - clusterRoles: - - admin - - labelSelector: - matchExpressions: - - key: stakater.com/tenant - operator: In - values: - - bluesky - owner: - clusterRoles: - - admin -``` - -New Bindings will be created for the Tenant owners of `bluesky` and `alpha`, corresponding to the `admin` Role. Bindings for editors and viewer will be inherited from the `default roles`. All other Tenant owners will have an `edit` Role bound to them within their namespaces diff --git a/content/usecases/deploying-templates.md b/content/usecases/deploying-templates.md deleted file mode 100644 index 5e10e0dd8..000000000 --- a/content/usecases/deploying-templates.md +++ /dev/null @@ -1,309 +0,0 @@ -# Distributing Resources in Namespaces - -Multi Tenant Operator has three Custom Resources which can cover this need using the `Template` CR, depending upon the conditions and preference. - -1. TemplateGroupInstance -1. TemplateInstance -1. Tenant - -Stakater Team, however, encourages the use of `TemplateGroupInstance` to distribute resources in multiple namespaces as it is optimized for better performance. - -## Deploying Template to Namespaces via TemplateGroupInstances - -Bill, the cluster admin, wants to deploy a docker pull secret in namespaces where certain labels exists. - -First, Bill creates a template: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: docker-secret -resources: - manifests: - - kind: Secret - apiVersion: v1 - metadata: - name: docker-pull-secret - data: - .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K - type: kubernetes.io/dockercfg -``` - -Once the template has been created, Bill makes a `TemplateGroupInstance` referring to the `Template` he wants to deploy with `MatchLabels`: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: TemplateGroupInstance -metadata: - name: docker-secret-group-instance -spec: - template: docker-pull-secret - selector: - matchLabels: - kind: build - sync: true -``` - -Afterwards, Bill can see that secrets have been successfully created in all label matching namespaces. - -```bash -kubectl get secret docker-secret -n bluesky-anna-aurora-sandbox -NAME STATE AGE -docker-secret Active 3m - -kubectl get secret docker-secret -n alpha-dave-aurora-sandbox -NAME STATE AGE -docker-secret Active 2m -``` - -`TemplateGroupInstance` can also target specific tenants or all tenant namespaces under a single yaml definition. - -### TemplateGroupInstance for multiple Tenants - -It can be done by using the `matchExpressions` field, dividing the tenant label in key and values. 
- -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: TemplateGroupInstance -metadata: - name: docker-secret-group-instance -spec: - template: docker-pull-secret - selector: - matchExpressions: - - key: stakater.com/tenant - operator: In - values: - - alpha - - beta - sync: true -``` - -### TemplateGroupInstance for all Tenants - -This can also be done by using the `matchExpressions` field, using just the tenant label key `stakater.com/tenant`. - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: TemplateGroupInstance -metadata: - name: docker-secret-group-instance -spec: - template: docker-pull-secret - selector: - matchExpressions: - - key: stakater.com/tenant - operator: Exists - sync: true -``` - -## Deploying Template to Namespaces via Tenant - -Bill is a cluster admin who wants to deploy a docker-pull-secret in Anna's tenant namespaces where certain labels exists. - -First, Bill creates a template: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: docker-pull-secret -resources: - manifests: - - kind: Secret - apiVersion: v1 - metadata: - name: docker-pull-secret - data: - .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K - type: kubernetes.io/dockercfg -``` - -Once the template has been created, Bill edits Anna's tenant and populates the `namespacetemplate` field: - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 -kind: Tenant -metadata: - name: bluesky -spec: - owners: - users: - - anna@aurora.org - editors: - users: - - john@aurora.org - quota: small - sandboxConfig: - enabled: true - templateInstances: - - spec: - template: docker-pull-secret - selector: - matchLabels: - kind: build -``` - -Multi Tenant Operator will deploy `TemplateInstances` mentioned in `templateInstances` field, `TemplateInstances` will only be applied in those `namespaces` which belong to Anna's `tenant` and have the matching label of `kind: build`. - -So now Anna adds label `kind: build` to her existing namespace `bluesky-anna-aurora-sandbox`, and after adding the label she see's that the secret has been created. - -```bash -kubectl get secret docker-secret -n bluesky-anna-aurora-sandbox -NAME STATE AGE -docker-pull-secret Active 3m -``` - -## Deploying Template to a Namespace via TemplateInstance - -Anna wants to deploy a docker pull secret in her namespace. - -First Anna asks Bill, the cluster admin, to create a template of the secret for her: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: docker-pull-secret -resources: - manifests: - - kind: Secret - apiVersion: v1 - metadata: - name: docker-pull-secret - data: - .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K - type: kubernetes.io/dockercfg -``` - -Once the template has been created, Anna creates a `TemplateInstance` in her namespace referring to the `Template` she wants to deploy: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: TemplateInstance -metadata: - name: docker-pull-secret-instance - namespace: bluesky-anna-aurora-sandbox -spec: - template: docker-pull-secret - sync: true -``` - -Once this is created, Anna can see that the secret has been successfully applied. 
- -```bash -kubectl get secret docker-secret -n bluesky-anna-aurora-sandbox -NAME STATE AGE -docker-pull-secret Active 3m -``` - -## Passing Parameters to Template via TemplateInstance, TemplateGroupInstance or Tenant - -Anna wants to deploy a LimitRange resource to certain namespaces. - -First Anna asks Bill, the cluster admin, to create template with parameters for LimitRange for her: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: namespace-parameterized-restrictions -parameters: - # Name of the parameter - - name: DEFAULT_CPU_LIMIT - # The default value of the parameter - value: "1" - - name: DEFAULT_CPU_REQUESTS - value: "0.5" - # If a parameter is required the template instance will need to set it - # required: true - # Make sure only values are entered for this parameter - validation: "^[0-9]*\\.?[0-9]+$" -resources: - manifests: - - apiVersion: v1 - kind: LimitRange - metadata: - name: namespace-limit-range-${namespace} - spec: - limits: - - default: - cpu: "${{DEFAULT_CPU_LIMIT}}" - defaultRequest: - cpu: "${{DEFAULT_CPU_REQUESTS}}" - type: Container -``` - -Afterwards, Anna creates a `TemplateInstance` in her namespace referring to the `Template` she wants to deploy: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: TemplateInstance -metadata: - name: namespace-parameterized-restrictions-instance - namespace: bluesky-anna-aurora-sandbox -spec: - template: namespace-parameterized-restrictions - sync: true -parameters: - - name: DEFAULT_CPU_LIMIT - value: "1.5" - - name: DEFAULT_CPU_REQUESTS - value: "1" -``` - -If she wants to distribute the same Template over multiple namespaces, she can use `TemplateGroupInstance`. - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: TemplateGroupInstance -metadata: - name: namespace-parameterized-restrictions-tgi -spec: - template: namespace-parameterized-restrictions - sync: true - selector: - matchExpressions: - - key: stakater.com/tenant - operator: In - values: - - alpha - - beta -parameters: - - name: DEFAULT_CPU_LIMIT - value: "1.5" - - name: DEFAULT_CPU_REQUESTS - value: "1" -``` - -Or she can use her tenant to cover only the tenant namespaces. - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 -kind: Tenant -metadata: - name: bluesky -spec: - owners: - users: - - anna@aurora.org - editors: - users: - - john@aurora.org - quota: small - sandboxConfig: - enabled: true - templateInstances: - - spec: - template: namespace-parameterized-restrictions - sync: true - parameters: - - name: DEFAULT_CPU_LIMIT - value: "1.5" - - name: DEFAULT_CPU_REQUESTS - value: "1" - selector: - matchLabels: - kind: build -``` diff --git a/content/usecases/distributing-resources.md b/content/usecases/distributing-resources.md deleted file mode 100644 index 6f26e8ff4..000000000 --- a/content/usecases/distributing-resources.md +++ /dev/null @@ -1,83 +0,0 @@ -# Copying Secrets and Configmaps across Tenant Namespaces via TGI - -Bill is a cluster admin who wants to map a `docker-pull-secret`, present in a `build` namespace, in tenant namespaces where certain labels exists. 
- -First, Bill creates a template: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: docker-pull-secret -resources: - resourceMappings: - secrets: - - name: docker-pull-secret - namespace: build -``` - -Once the template has been created, Bill makes a `TemplateGroupInstance` referring to the `Template` he wants to deploy with `MatchLabels`: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: TemplateGroupInstance -metadata: - name: docker-secret-group-instance -spec: - template: docker-pull-secret - selector: - matchLabels: - kind: build - sync: true -``` - -Afterwards, Bill can see that secrets has been successfully mapped in all matching namespaces. - -```bash -kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox -NAME STATE AGE -docker-pull-secret Active 3m - -kubectl get secret docker-pull-secret -n alpha-dave-aurora-sandbox -NAME STATE AGE -docker-pull-secret Active 3m -``` - -## Mapping Resources within Tenant Namespaces via TI - -Anna is a tenant owner who wants to map a `docker-pull-secret`, present in `bluseky-build` namespace, to `bluesky-anna-aurora-sandbox` namespace. - -First, Bill creates a template: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: docker-pull-secret -resources: - resourceMappings: - secrets: - - name: docker-pull-secret - namespace: bluesky-build -``` - -Once the template has been created, Anna creates a `TemplateInstance` in `bluesky-anna-aurora-sandbox` namespace, referring to the `Template`. - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: TemplateInstance -metadata: - name: docker-secret-instance - namespace: bluesky-anna-aurora-sandbox -spec: - template: docker-pull-secret - sync: true -``` - -Afterwards, Bill can see that secrets has been successfully mapped in all matching namespaces. - -```bash -kubectl get secret docker-pull-secret -n bluesky-anna-aurora-sandbox -NAME STATE AGE -docker-pull-secret Active 3m -``` diff --git a/content/usecases/distributing-secrets-using-sealed-secret-template.md b/content/usecases/distributing-secrets-using-sealed-secret-template.md deleted file mode 100644 index 2f203ef70..000000000 --- a/content/usecases/distributing-secrets-using-sealed-secret-template.md +++ /dev/null @@ -1,89 +0,0 @@ -# Distributing Secrets Using Sealed Secrets Template - -Bill is a cluster admin who wants to provide a mechanism for distributing secrets in multiple namespaces. For this, he wants to use [Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets#sealed-secrets-for-kubernetes) as the solution by adding them to MTO Template CR - -First, Bill creates a Template in which Sealed Secret is mentioned: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: tenant-sealed-secret -resources: - manifests: - - kind: SealedSecret - apiVersion: bitnami.com/v1alpha1 - metadata: - name: mysecret - spec: - encryptedData: - .dockerconfigjson: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq..... - template: - type: kubernetes.io/dockerconfigjson - # this is an example of labels and annotations that will be added to the output secret - metadata: - labels: - "jenkins.io/credentials-type": usernamePassword - annotations: - "jenkins.io/credentials-description": credentials from Kubernetes -``` - -Once the template has been created, Bill has to edit the `Tenant` to add unique label to namespaces in which the secret has to be deployed. 
-For this, he can use the support for [common](./tenant.md#distributing-common-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource) and [specific](./tenant.md#distributing-specific-labels-and-annotations-to-tenant-namespaces-via-tenant-custom-resource) labels across namespaces. - -Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case. - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 -kind: Tenant -metadata: - name: bluesky -spec: - owners: - users: - - anna@aurora.org - - anthony@aurora.org - editors: - users: - - john@aurora.org - groups: - - alpha - quota: small - namespaces: - withTenantPrefix: - - dev - - build - - prod - - # use this if you want to add label to some specific namespaces - specificMetadata: - - namespaces: - - test-namespace - labels: - distribute-image-pull-secret: true - - # use this if you want to add label to all namespaces under your tenant - commonMetadata: - labels: - distribute-image-pull-secret: true - -``` - -Bill has added support for a new label `distribute-image-pull-secret: true"` for tenant projects/namespaces, now MTO will add that label depending on the used field. - -Finally Bill creates a `TemplateGroupInstance` which will deploy the sealed secrets using the newly created project label and template. - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: TemplateGroupInstance -metadata: - name: tenant-sealed-secret -spec: - template: tenant-sealed-secret - selector: - matchLabels: - distribute-image-pull-secret: true - sync: true -``` - -MTO will now deploy the sealed secrets mentioned in `Template` to namespaces which have the mentioned label. The rest of the work to deploy secret from a sealed secret has to be done by Sealed Secrets Controller. diff --git a/content/usecases/extend-default-roles.md b/content/usecases/extend-default-roles.md deleted file mode 100644 index bb277edf9..000000000 --- a/content/usecases/extend-default-roles.md +++ /dev/null @@ -1,23 +0,0 @@ -# Extending the default access level for tenant members - -Bill as the cluster admin wants to extend the default access for tenant members. As an admin of an OpenShift Cluster, Bill can extend the admin, edit, and view ClusterRole using aggregation. Bill will first create a ClusterRole with privileges to resources which Bill wants to extend. Bill will add the aggregation label to the newly created ClusterRole for extending the default ClusterRoles provided by OpenShift. - -```yaml -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: extend-view-role - labels: - rbac.authorization.k8s.io/aggregate-to-view: 'true' -rules: - - verbs: - - get - - list - - watch - apiGroups: - - user.openshift.io - resources: - - groups -``` - -> Note: You can learn more about `aggregated-cluster-roles` [here](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) diff --git a/content/usecases/hibernation.md b/content/usecases/hibernation.md deleted file mode 100644 index b8b97b1fb..000000000 --- a/content/usecases/hibernation.md +++ /dev/null @@ -1,155 +0,0 @@ -# Freeing up unused resources with hibernation - -## Hibernating a tenant - -Bill is a cluster administrator who wants to free up unused cluster resources at nighttime, in an effort to reduce costs (when the cluster isn't being used). 
- -First, Bill creates a tenant with the `hibernation` schedules mentioned in the spec, or adds the hibernation field to an existing tenant: - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 -kind: Tenant -metadata: - name: sigma -spec: - hibernation: - sleepSchedule: 0 20 * * 1-5 - wakeSchedule: 0 8 * * 1-5 - owners: - users: - - user - editors: - users: - - user1 - quota: medium - namespaces: - withoutTenantPrefix: - - build - - stage - - dev -``` - -The schedules above will put all the `Deployments` and `StatefulSets` within the tenant's namespaces to sleep, by reducing their pod count to 0 at 8 PM every weekday. At 8 AM on weekdays, the namespaces will then wake up by restoring their applications' previous pod counts. - -Bill can verify this behaviour by checking the newly created ResourceSupervisor resource at run time: - -```bash -oc get ResourceSupervisor -A -NAME AGE -sigma 5m -``` - -The ResourceSupervisor will look like this at 'running' time (as per the schedule): - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta1 -kind: ResourceSupervisor -metadata: - name: example -spec: - argocd: - appProjects: [] - namespace: '' - hibernation: - sleepSchedule: 0 20 * * 1-5 - wakeSchedule: 0 8 * * 1-5 - namespaces: - - build - - stage - - dev -status: - currentStatus: running - nextReconcileTime: '2022-10-12T20:00:00Z' -``` - -The ResourceSupervisor will look like this at 'sleeping' time (as per the schedule): - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta1 -kind: ResourceSupervisor -metadata: - name: example -spec: - argocd: - appProjects: [] - namespace: '' - hibernation: - sleepSchedule: 0 20 * * 1-5 - wakeSchedule: 0 8 * * 1-5 - namespaces: - - build - - stage - - dev -status: - currentStatus: sleeping - nextReconcileTime: '2022-10-13T08:00:00Z' - sleepingApplications: - - Namespace: build - kind: Deployment - name: example - replicas: 3 - - Namespace: stage - kind: Deployment - name: example - replicas: 3 -``` - -Bill wants to prevent the `build` namespace from going to sleep, so he can add the `hibernation.stakater.com/exclude: 'true'` annotation to it. The ResourceSupervisor will now look like this after reconciling: - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta1 -kind: ResourceSupervisor -metadata: - name: example -spec: - argocd: - appProjects: [] - namespace: '' - hibernation: - sleepSchedule: 0 20 * * 1-5 - wakeSchedule: 0 8 * * 1-5 - namespaces: - - stage - - dev -status: - currentStatus: sleeping - nextReconcileTime: '2022-10-13T08:00:00Z' - sleepingApplications: - - Namespace: stage - kind: Deployment - name: example - replicas: 3 -``` - -## Hibernating namespaces and/or ArgoCD Applications with ResourceSupervisor - -Bill, the cluster administrator, wants to hibernate a collection of namespaces and AppProjects belonging to multiple different tenants. He can do so by creating a ResourceSupervisor manually, specifying the hibernation schedule in its spec, the namespaces and ArgoCD Applications that need to be hibernated as per the mentioned schedule. -Bill can also use the same method to hibernate some namespaces and ArgoCD Applications that do not belong to any tenant on his cluster. - -The example given below will hibernate the ArgoCD Applications in the 'test-app-project' AppProject; and it will also hibernate the 'ns2' and 'ns4' namespaces. 
- -```yaml -apiVersion: tenantoperator.stakater.com/v1beta1 -kind: ResourceSupervisor -metadata: - name: test-resource-supervisor -spec: - argocd: - appProjects: - - test-app-project - namespace: argocd-ns - hibernation: - sleepSchedule: 0 20 * * 1-5 - wakeSchedule: 0 8 * * 1-5 - namespaces: - - ns2 - - ns4 -status: - currentStatus: sleeping - nextReconcileTime: '2022-10-13T08:00:00Z' - sleepingApplications: - - Namespace: ns2 - kind: Deployment - name: test-deployment - replicas: 3 -``` diff --git a/content/usecases/integrationconfig.md b/content/usecases/integrationconfig.md deleted file mode 100644 index 49380d157..000000000 --- a/content/usecases/integrationconfig.md +++ /dev/null @@ -1,140 +0,0 @@ -# Configuring Managed Namespaces and ServiceAccounts in IntegrationConfig - -Bill is a cluster admin who can use `IntegrationConfig` to configure how `Multi Tenant Operator (MTO)` manages the cluster. - -By default, MTO watches all namespaces and will enforce all the governing policies on them. -All namespaces managed by MTO require the `stakater.com/tenant` label. -MTO ignores privileged namespaces that are mentioned in the IntegrationConfig, and does not manage them. Therefore, any tenant label on such namespaces will be ignored. - -```bash -oc create namespace stakater-test -Error from server (Cannot Create namespace stakater-test without label stakater.com/tenant. User: Bill): admission webhook "vnamespace.kb.io" denied the request: Cannot CREATE namespace stakater-test without label stakater.com/tenant. User: Bill -``` - -Bill is trying to create a namespace without the `stakater.com/tenant` label. Creating a namespace without this label is only allowed if the namespace is privileged. Privileged namespaces will be ignored by MTO and do not require the said label. Therefore, Bill will add the required regex in the IntegrationConfig, along with any other namespaces which are privileged and should be ignored by MTO - like `default`, or namespaces with prefixes like `openshift`, `kube`: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator -spec: - openshift: - privilegedNamespaces: - - ^default$ - - ^openshift* - - ^kube* - - ^stakater* -``` - -After mentioning the required regex (`^stakater*`) under `privilegedNamespaces`, Bill can create the namespace without interference. - -```bash -oc create namespace stakater-test -namespace/stakater-test created -``` - -MTO will also disallow all users which are not tenant owners to perform CRUD operations on namespaces. This will also prevent Service Accounts from performing CRUD operations. 
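For illustration, a ServiceAccount that is not in the privileged list is rejected in the same way as an ordinary user. This is only a sketch: the impersonated ServiceAccount name is made up, and the exact webhook error text may differ between MTO versions.

```bash
# Impersonate a non-privileged ServiceAccount and attempt to create a namespace
oc --as=system:serviceaccount:build:builder create namespace stakater-test
# Expected outcome: the request is denied by the "vnamespace.kb.io" admission webhook
```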
- -If Bill wants MTO to ignore Service Accounts, then he would simply have to add them in the IntegrationConfig: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator -spec: - openshift: - privilegedServiceAccounts: - - system:serviceaccount:openshift - - system:serviceaccount:stakater - - system:serviceaccount:kube - - system:serviceaccount:redhat - - system:serviceaccount:hive -``` - -Bill can also use regex patterns to ignore a set of service accounts: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator -spec: - openshift: - privilegedServiceAccounts: - - ^system:serviceaccount:openshift* - - ^system:serviceaccount:stakater* -``` - -## Configuring Vault in IntegrationConfig - -[Vault](https://www.vaultproject.io/) is used to secure, store and tightly control access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API. - -If Bill (the cluster admin) has Vault configured in his cluster, then he can take benefit from MTO's integration with Vault. - -MTO automatically creates Vault secret paths for tenants, where tenant members can securely save their secrets. It also authorizes tenant members to access these secrets via OIDC. - -Bill would first have to integrate Vault with MTO by adding the details in IntegrationConfig. For more [details](./../integration-config.md#vault) - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: IntegrationConfig -metadata: - name: tenant-operator-config - namespace: multi-tenant-operator -spec: - vault: - enabled: true - accessorPath: oidc/ - address: 'https://vault.apps.prod.abcdefghi.kubeapp.cloud/' - roleName: mto - sso: - clientName: vault -``` - -Bill then creates a tenant for Anna and John: - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 -kind: Tenant -metadata: - name: bluesky -spec: - owners: - users: - - anna@acme.org - viewers: - users: - - john@acme.org - quota: small - sandbox: false -``` - -Now Bill goes to `Vault` and sees that a path for `tenant` has been made under the name `bluesky/kv`, confirming that Tenant members with the Owner or Edit roles now have access to the tenant's Vault path. - -Now if Anna sign's in to the Vault via OIDC, she can see her tenants path and secrets. Whereas if John sign's in to the Vault via OIDC, he can't see his tenants path or secrets as he doesn't have the access required to view them. - -## Configuring RHSSO (Red Hat Single Sign-On) in IntegrationConfig - -Red Hat Single Sign-On [RHSSO](https://access.redhat.com/products/red-hat-single-sign-on) is based on the Keycloak project and enables you to secure your web applications by providing Web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0. - -If Bill the cluster admin has RHSSO configured in his cluster, then he can take benefit from MTO's integration with RHSSO and Vault. - -MTO automatically allows tenant members to access Vault via OIDC(RHSSO authentication and authorization) to access secret paths for tenants where tenant members can securely save their secrets. - -Bill would first have to integrate RHSSO with MTO by adding the details in IntegrationConfig. [Visit here](./../integration-config.md#rhsso-red-hat-single-sign-on) for more details. 
- -```yaml -rhsso: - enabled: true - realm: customer - endpoint: - secretReference: - name: auth-secrets - namespace: openshift-auth - url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/ -``` diff --git a/content/usecases/mattermost.md b/content/usecases/mattermost.md deleted file mode 100644 index f576c8d04..000000000 --- a/content/usecases/mattermost.md +++ /dev/null @@ -1,41 +0,0 @@ -# Creating Mattermost Teams for your tenant - -## Requirements - -`MTO-Mattermost-Integration-Operator` - -Please contact stakater to install the Mattermost integration operator before following the below mentioned steps. - -## Steps to enable integration - -Bill wants some of the tenants to also have their own Mattermost Teams. To make sure this happens correctly, Bill will first add the `stakater.com/mattermost: true` label to the tenants. -The label will enable the `mto-mattermost-integration-operator` to create and manage Mattermost Teams based on Tenants. - -```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 -kind: Tenant -metadata: - name: sigma - labels: - stakater.com/mattermost: 'true' -spec: - owners: - users: - - user - editors: - users: - - user1 - quota: medium - sandbox: false - namespaces: - withTenantPrefix: - - dev - - build - - prod -``` - -Now user can logIn to Mattermost to see their Team and relevant channels associated with it. - -![image](./../images/mattermost-tenant-team.png) - -The name of the Team is similar to the Tenant name. Notification channels are pre-configured for every team, and can be modified. diff --git a/content/usecases/namespace.md b/content/usecases/namespace.md deleted file mode 100644 index 6f9d454a1..000000000 --- a/content/usecases/namespace.md +++ /dev/null @@ -1,65 +0,0 @@ -# Creating Namespace - -Anna as the tenant owner can create new namespaces for her tenant. - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: bluesky-production - labels: - stakater.com/tenant: bluesky -``` - -> ⚠️ Anna is required to add the tenant label `stakater.com/tenant: bluesky` which contains the name of her tenant `bluesky`, while creating the namespace. If this label is not added or if Anna does not belong to the `bluesky` tenant, then Multi Tenant Operator will not allow the creation of that namespace. - -When Anna creates the namespace, MTO assigns Anna and other tenant members the roles based on their user types, such as a tenant owner getting the OpenShift `admin` role for that namespace. - -As a tenant owner, Anna is able to create namespaces. - -If you have enabled [ArgoCD Multitenancy](./../argocd-multitenancy.md), our preferred solution is to create tenant namespaces by using [Tenant](./tenant.md) spec to avoid syncing issues in ArgoCD console during namespace creation. - -## Add Existing Namespaces to Tenant via GitOps - -Using GitOps as your preferred development workflow, you can add existing namespaces for your tenants by including the tenant label. - -To add an existing namespace to your tenant via GitOps: - -1. First, migrate your namespace resource to your “watched” git repository -1. Edit your namespace `yaml` to include the tenant label -1. Tenant label follows the naming convention `stakater.com/tenant: ` -1. Sync your GitOps repository with your cluster and allow changes to be propagated -1. 
Verify that your Tenant users now have access to the namespace - -For example, If Anna, a tenant owner, wants to add the namespace `bluesky-dev` to her tenant via GitOps, after migrating her namespace manifest to a “watched repository” - -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: bluesky-dev -``` - -She can then add the tenant label - -```yaml - ... - labels: - stakater.com/tenant: bluesky -``` - -Now all the users of the `Bluesky` tenant now have access to the existing namespace. - -Additionally, to remove namespaces from a tenant, simply remove the tenant label from the namespace resource and sync your changes to your cluster. - -## Remove Namespaces from your Cluster via GitOps - - GitOps is a quick and efficient way to automate the management of your K8s resources. - -To remove namespaces from your cluster via GitOps; - -- Remove the `yaml` file containing your namespace configurations from your “watched” git repository. -- ArgoCD automatically sets the `[app.kubernetes.io/instance](http://app.kubernetes.io/instance)` label on resources it manages. It uses this label it to select resources which inform the basis of an app. To remove a namespace from a managed ArgoCD app, remove the ArgoCD label `app.kubernetes.io/instance` from the namespace manifest. -- You can edit your namespace manifest through the OpenShift Web Console or with the OpenShift command line tool. -- Now that you have removed your namespace manifest from your watched git repository, and from your managed ArgoCD apps, sync your git repository and allow your changes be propagated. -- Verify that your namespace has been deleted. diff --git a/content/usecases/private-sandboxes.md b/content/usecases/private-sandboxes.md deleted file mode 100644 index 4bb62b8ca..000000000 --- a/content/usecases/private-sandboxes.md +++ /dev/null @@ -1,44 +0,0 @@ -# Create Private Sandboxes - -Bill assigned the ownership of `bluesky` to `Anna` and `Anthony`. Now if the users want sandboxes to be made for them, they'll have to ask `Bill` to enable `sandbox` functionality. The Users also want to make sure that the sandboxes that are created for them are also only visible to the user they belong to. To enable that, Bill will just set `enabled: true` and `private: true` within the `sandboxConfig` field - -```yaml -kubectl apply -f - << EOF -apiVersion: tenantoperator.stakater.com/v1beta2 -kind: Tenant -metadata: - name: bluesky -spec: - owners: - users: - - anna@aurora.org - - anthony@aurora.org - editors: - users: - - john@aurora.org - groups: - - alpha - quota: small - sandboxConfig: - enabled: true - private: true -EOF -``` - -With the above configuration `Anna` and `Anthony` will now have new sandboxes created - -```bash -kubectl get namespaces -NAME STATUS AGE -bluesky-anna-aurora-sandbox Active 5d5h -bluesky-anthony-aurora-sandbox Active 5d5h -bluesky-john-aurora-sandbox Active 5d5h -``` - -However, from the perspective of `Anna`, only their sandbox will be visible - -```bash -kubectl get namespaces -NAME STATUS AGE -bluesky-anna-aurora-sandbox Active 5d5h -``` diff --git a/content/usecases/quota.md b/content/usecases/quota.md deleted file mode 100644 index 53b0e7ffb..000000000 --- a/content/usecases/quota.md +++ /dev/null @@ -1,73 +0,0 @@ -# Enforcing Quotas - -Using Multi Tenant Operator, the cluster-admin can set and enforce cluster resource quotas and limit ranges for tenants. 
- -## Assigning Resource Quotas - -Bill is a cluster admin who will first create `Quota` CR where he sets the maximum resource limits that Anna's tenant will have. -Here `limitrange` is an optional field, cluster admin can skip it if not needed. - -```yaml -kubectl create -f - << EOF -apiVersion: tenantoperator.stakater.com/v1beta1 -kind: Quota -metadata: - name: small -spec: - resourcequota: - hard: - requests.cpu: '5' - requests.memory: '5Gi' - configmaps: "10" - secrets: "10" - services: "10" - services.loadbalancers: "2" - limitrange: - limits: - - type: "Pod" - max: - cpu: "2" - memory: "1Gi" - min: - cpu: "200m" - memory: "100Mi" -EOF -``` - -For more details please refer to [Quotas](../customresources.md#_1-quota). - -```bash -kubectl get quota small -NAME STATE AGE -small Active 3m -``` - -Bill then proceeds to create a tenant for Anna, while also linking the newly created `Quota`. - -```yaml -kubectl create -f - << EOF -apiVersion: tenantoperator.stakater.com/v1beta2 -kind: Tenant -metadata: - name: bluesky -spec: - owners: - users: - - anna@stakater.com - quota: small - sandbox: false -EOF -``` - -Now that the quota is linked with Anna's tenant, Anna can create any resource within the values of resource quota and limit range. - -```bash -kubectl -n bluesky-production create deployment nginx --image nginx:latest --replicas 4 -``` - -Once the resource quota assigned to the tenant has been reached, Anna cannot create further resources. - -```bash -kubectl create pods bluesky-training -Error from server (Cannot exceed Namespace quota: please, reach out to the system administrators) -``` diff --git a/content/usecases/secret-distribution.md b/content/usecases/secret-distribution.md deleted file mode 100644 index 061750328..000000000 --- a/content/usecases/secret-distribution.md +++ /dev/null @@ -1,63 +0,0 @@ -# Propagate Secrets from Parent to Descendant namespaces - -Secrets like `registry` credentials often need to exist in multiple Namespaces, so that Pods within different namespaces can have access to those credentials in form of secrets. - -Manually creating secrets within different namespaces could lead to challenges, such as: - -- Someone will have to create secret either manually or via GitOps each time there is a new descendant namespace that needs the secret -- If we update the parent secret, they will have to update the secret in all descendant namespaces -- This could be time-consuming, and a small mistake while creating or updating the secret could lead to unnecessary debugging - -With the help of Multi-Tenant Operator's Template feature we can make this secret distribution experience easy. - ---- - -For example, to copy a Secret called `registry` which exists in the `example` to new Namespaces whenever they are created, we will first create a Template which will have reference of the registry secret. - -It will also push updates to the copied Secrets and keep the propagated secrets always sync and updated with parent namespaces. - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: registry-secret -resources: - resourceMappings: - secrets: - - name: registry - namespace: example -``` - -Now using this Template we can propagate registry secret to different namespaces that has some common set of labels. - -For example, will just add one label `kind: registry` and all namespaces with this label will get this secret. 
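As a small illustration (the namespace name is the one used in the example output further below and is otherwise arbitrary), the label can be added to an existing namespace with:

```bash
kubectl label namespace example-ns-1 kind=registry
```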
- -For propagating it on different namespaces dynamically will have to create another resource called `TemplateGroupInstance`. -`TemplateGroupInstance` will have `Template` and `matchLabel` mapping as shown below: - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: TemplateGroupInstance -metadata: - name: registry-secret-group-instance -spec: - template: registry-secret - selector: - matchLabels: - kind: registry - sync: true -``` - -After reconciliation, you will be able to see those secrets in namespaces having mentioned label. - -MTO will keep injecting this secret to the new namespaces created with that label. - -```bash -kubectl get secret registry-secret -n example-ns-1 -NAME STATE AGE -registry-secret Active 3m - -kubectl get secret registry-secret -n example-ns-2 -NAME STATE AGE -registry-secret Active 3m -``` diff --git a/content/usecases/template.md b/content/usecases/template.md deleted file mode 100644 index 9e5f6aaed..000000000 --- a/content/usecases/template.md +++ /dev/null @@ -1,93 +0,0 @@ -# Creating Templates - -Anna wants to create a Template that she can use to initialize or share common resources across namespaces (e.g. PullSecrets). - -Anna can either create a template using `manifests` field, covering Kubernetes or custom resources. - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: docker-pull-secret -resources: - manifests: - - kind: Secret - apiVersion: v1 - metadata: - name: docker-pull-secret - data: - .dockercfg: eyAKICAiaHR0cHM6IC8vaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsgImF1dGgiOiAiYzNSaGEyRjBaWEk2VjI5M1YyaGhkRUZIY21WaGRGQmhjM04zYjNKayJ9Cn0K - type: kubernetes.io/dockercfg -``` - -Or by using `Helm Charts` - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: redis -resources: - helm: - releaseName: redis - chart: - repository: - name: redis - repoUrl: https://charts.bitnami.com/bitnami - values: | - redisPort: 6379 -``` - -She can also use `resourceMapping` field to copy over secrets and configmaps from one namespace to others. - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: resource-mapping -resources: - resourceMappings: - secrets: - - name: docker-secret - namespace: bluesky-build - configMaps: - - name: tronador-configMap - namespace: stakater-tronador -``` - -**Note:** Resource mapping can be used via TGI to map resources within tenant namespaces or to some other tenant's namespace. If used with TI, the resources will only be mapped if namespaces belong to same tenant. 
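For instance, a tenant member could consume the `resource-mapping` template above from one of their own namespaces with a `TemplateInstance`. This sketch assumes the namespace `bluesky-dev` belongs to the same tenant as `bluesky-build`; otherwise the secret will not be mapped.

```yaml
apiVersion: tenantoperator.stakater.com/v1alpha1
kind: TemplateInstance
metadata:
  name: resource-mapping-instance
  # Must be a namespace of the same tenant that owns 'bluesky-build'
  namespace: bluesky-dev
spec:
  template: resource-mapping
  sync: true
```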
- -## Using Templates with Default Parameters - -```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 -kind: Template -metadata: - name: namespace-parameterized-restrictions -parameters: - # Name of the parameter - - name: DEFAULT_CPU_LIMIT - # The default value of the parameter - value: "1" - - name: DEFAULT_CPU_REQUESTS - value: "0.5" - # If a parameter is required the template instance will need to set it - # required: true - # Make sure only values are entered for this parameter - validation: "^[0-9]*\\.?[0-9]+$" -resources: - manifests: - - apiVersion: v1 - kind: LimitRange - metadata: - name: namespace-limit-range-${namespace} - spec: - limits: - - default: - cpu: "${{DEFAULT_CPU_LIMIT}}" - defaultRequest: - cpu: "${{DEFAULT_CPU_REQUESTS}}" - type: Container -``` - -Parameters can be used with both `manifests` and `helm charts` diff --git a/content/usecases/tenant.md b/content/usecases/tenant.md deleted file mode 100644 index e3282bbee..000000000 --- a/content/usecases/tenant.md +++ /dev/null @@ -1,275 +0,0 @@ -# Creating Tenant - -Bill is a cluster admin who receives a new request from Aurora Solutions CTO asking for a new tenant for Anna's team. - -Bill creates a new tenant called `bluesky` in the cluster: - -```yaml -kubectl create -f - << EOF -apiVersion: tenantoperator.stakater.com/v1beta2 -kind: Tenant -metadata: - name: bluesky -spec: - owners: - users: - - anna@aurora.org - editors: - users: - - john@aurora.org - groups: - - alpha - quota: small - sandbox: false -EOF -``` - -Bill checks if the new tenant is created: - -```bash -kubectl get tenants.tenantoperator.stakater.com bluesky -NAME STATE AGE -bluesky Active 3m -``` - -Anna can now login to the cluster and check if she can create namespaces - -```bash -kubectl auth can-i create namespaces -yes -``` - -However, cluster resources are not accessible to Anna - -```bash -kubectl auth can-i get namespaces -no - -kubectl auth can-i get persistentvolumes -no -``` - -Including the `Tenant` resource - -```bash -kubectl auth can-i get tenants.tenantoperator.stakater.com -no -``` - -## Assign multiple users as tenant owner - -In the example above, Bill assigned the ownership of `bluesky` to `Anna`. If another user, e.g. `Anthony` needs to administer `bluesky`, than Bill can assign the ownership of tenant to that user as well: - -```yaml -kubectl apply -f - << EOF -apiVersion: tenantoperator.stakater.com/v1beta2 -kind: Tenant -metadata: - name: bluesky -spec: - owners: - users: - - anna@aurora.org - - anthony@aurora.org - editors: - users: - - john@aurora.org - groups: - - alpha - quota: small - sandbox: false -EOF -``` - -With the configuration above, Anthony can log-in to the cluster and execute - -```bash -kubectl auth can-i create namespaces -yes -``` - -## Assigning Users Sandbox Namespace - -Bill assigned the ownership of `bluesky` to `Anna` and `Anthony`. Now if the users want sandboxes to be made for them, they'll have to ask `Bill` to enable `sandbox` functionality. 
-
-To enable that, Bill will just set `enabled: true` within the `sandboxConfig` field:
-
-```yaml
-kubectl apply -f - << EOF
-apiVersion: tenantoperator.stakater.com/v1beta2
-kind: Tenant
-metadata:
-  name: bluesky
-spec:
-  owners:
-    users:
-      - anna@aurora.org
-      - anthony@aurora.org
-  editors:
-    users:
-      - john@aurora.org
-    groups:
-      - alpha
-  quota: small
-  sandboxConfig:
-    enabled: true
-EOF
-```
-
-With the above configuration, sandbox namespaces will now be created for the tenant users:
-
-```bash
-kubectl get namespaces
-NAME                             STATUS   AGE
-bluesky-anna-aurora-sandbox      Active   5d5h
-bluesky-anthony-aurora-sandbox   Active   5d5h
-bluesky-john-aurora-sandbox      Active   5d5h
-```
-
-If Bill wants to make sure that only the sandbox owner can view his sandbox namespace, he can achieve this by setting `private: true` within the `sandboxConfig` field.
-
-## Creating Namespaces via Tenant Custom Resource
-
-Bill now wants to create namespaces for `dev`, `build` and `production` environments for the tenant members. To create those namespaces, Bill will just add those names to the `namespaces` field in the tenant CR. If Bill wants the tenant name appended as a prefix to a namespace name, he can list it under `namespaces.withTenantPrefix`; otherwise, he can list it under `namespaces.withoutTenantPrefix`.
-
-```yaml
-kubectl apply -f - << EOF
-apiVersion: tenantoperator.stakater.com/v1beta2
-kind: Tenant
-metadata:
-  name: bluesky
-spec:
-  owners:
-    users:
-      - anna@aurora.org
-      - anthony@aurora.org
-  editors:
-    users:
-      - john@aurora.org
-    groups:
-      - alpha
-  quota: small
-  namespaces:
-    withTenantPrefix:
-      - dev
-      - build
-    withoutTenantPrefix:
-      - prod
-EOF
-```
-
-With the above configuration, tenant members will now see that the new namespaces have been created:
-
-```bash
-kubectl get namespaces
-NAME            STATUS   AGE
-bluesky-dev     Active   5d5h
-bluesky-build   Active   5d5h
-prod            Active   5d5h
-```
-
-## Distributing common labels and annotations to tenant namespaces via Tenant Custom Resource
-
-Bill now wants to add labels/annotations to all the namespaces of a tenant. To distribute those labels/annotations, Bill will just add them to the `commonMetadata.labels`/`commonMetadata.annotations` fields in the tenant CR:
-
-```yaml
-kubectl apply -f - << EOF
-apiVersion: tenantoperator.stakater.com/v1beta2
-kind: Tenant
-metadata:
-  name: bluesky
-spec:
-  owners:
-    users:
-      - anna@aurora.org
-      - anthony@aurora.org
-  editors:
-    users:
-      - john@aurora.org
-    groups:
-      - alpha
-  quota: small
-  namespaces:
-    withTenantPrefix:
-      - dev
-      - build
-      - prod
-  commonMetadata:
-    labels:
-      app.kubernetes.io/managed-by: tenant-operator
-      app.kubernetes.io/part-of: tenant-alpha
-    annotations:
-      openshift.io/node-selector: node-role.kubernetes.io/infra=
-EOF
-```
-
-With the above configuration, all tenant namespaces will now contain the mentioned labels and annotations.
-
-## Distributing specific labels and annotations to tenant namespaces via Tenant Custom Resource
-
-Bill now wants to add labels/annotations to specific namespaces of a tenant. To do that, Bill will just add the labels/annotations to the `specificMetadata.labels`/`specificMetadata.annotations` fields and list the target namespaces in the `specificMetadata.namespaces` field of the tenant CR.
-
-```yaml
-kubectl apply -f - << EOF
-apiVersion: tenantoperator.stakater.com/v1beta2
-kind: Tenant
-metadata:
-  name: bluesky
-spec:
-  owners:
-    users:
-      - anna@aurora.org
-      - anthony@aurora.org
-  editors:
-    users:
-      - john@aurora.org
-    groups:
-      - alpha
-  quota: small
-  sandboxConfig:
-    enabled: true
-  namespaces:
-    withTenantPrefix:
-      - dev
-      - build
-      - prod
-  specificMetadata:
-    - namespaces:
-        - bluesky-anna-aurora-sandbox
-      labels:
-        app.kubernetes.io/is-sandbox: "true"
-      annotations:
-        openshift.io/node-selector: node-role.kubernetes.io/worker=
-EOF
-```
-
-With the above configuration, only the namespaces listed under `specificMetadata.namespaces` (here, Anna's sandbox) will receive the mentioned labels and annotations.
-
-## Retaining tenant namespaces and AppProject when a tenant is being deleted
-
-Bill now wants to delete tenant `bluesky` but retain all namespaces and the AppProject of the tenant. To retain them, Bill will set `spec.onDelete.cleanNamespaces` and `spec.onDelete.cleanAppProject` to `false`.
-
-```yaml
-apiVersion: tenantoperator.stakater.com/v1beta2
-kind: Tenant
-metadata:
-  name: bluesky
-spec:
-  owners:
-    users:
-      - anna@aurora.org
-      - anthony@aurora.org
-  quota: small
-  sandboxConfig:
-    enabled: true
-  namespaces:
-    withTenantPrefix:
-      - dev
-      - build
-      - prod
-  onDelete:
-    cleanNamespaces: false
-    cleanAppProject: false
-```
-
-With the above configuration, the tenant namespaces and the AppProject will not be deleted when tenant `bluesky` is deleted. By default, `spec.onDelete.cleanNamespaces` is `false` and `spec.onDelete.cleanAppProject` is `true`.
diff --git a/content/usecases/volume-limits.md b/content/usecases/volume-limits.md
deleted file mode 100644
index f34930940..000000000
--- a/content/usecases/volume-limits.md
+++ /dev/null
@@ -1,79 +0,0 @@
-# Limiting PersistentVolume for Tenant
-
-Bill, as a cluster admin, wants to restrict the amount of storage a Tenant can use. For that he'll add the `requests.storage` field to `quota.spec.resourcequota.hard`. If Bill wants to restrict tenant `bluesky` to use only `50Gi` of storage, he'll first create a quota with the `requests.storage` field set to `50Gi`:
-
-```yaml
-kubectl create -f - << EOF
-apiVersion: tenantoperator.stakater.com/v1beta1
-kind: Quota
-metadata:
-  name: medium
-spec:
-  resourcequota:
-    hard:
-      requests.cpu: '5'
-      requests.memory: '10Gi'
-      requests.storage: '50Gi'
-EOF
-```
-
-Once the quota is created, Bill will create the tenant and set its `quota` field to the quota he just created:
-
-```yaml
-kubectl create -f - << EOF
-apiVersion: tenantoperator.stakater.com/v1beta2
-kind: Tenant
-metadata:
-  name: bluesky
-spec:
-  owners:
-    users:
-      - anna@aurora.org
-      - anthony@aurora.org
-  quota: medium
-  sandbox: true
-EOF
-```
-
-Now, the combined storage used by all tenant namespaces will not exceed `50Gi`.
-
-## Adding StorageClass Restrictions for Tenant
-
-Now, Bill, as a cluster admin, wants to make sure that no Tenant can provision more than a fixed amount of storage from a given StorageClass. Bill can restrict that using the `<storageclass-name>.storageclass.storage.k8s.io/requests.storage` field in `quota.spec.resourcequota.hard`. If Bill wants to restrict tenant `sigma` to use only `20Gi` of storage from the storage class `stakater`, he'll first create a StorageClass `stakater` and then create the relevant Quota with the `stakater.storageclass.storage.k8s.io/requests.storage` field set to `20Gi`.
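-
-If the `stakater` StorageClass does not exist yet, Bill can create it first. The following is only a minimal sketch: the provisioner and reclaim policy shown here are assumptions and must be replaced with whatever the cluster's storage backend actually uses.
-
-```yaml
-kubectl create -f - << EOF
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: stakater
-provisioner: kubernetes.io/aws-ebs   # assumption: use your cluster's actual provisioner
-reclaimPolicy: Delete                # assumption: pick the policy that fits your environment
-EOF
-```
-
-The Quota below then references this StorageClass in its `hard` limits: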
-
-```yaml
-kubectl create -f - << EOF
-apiVersion: tenantoperator.stakater.com/v1beta1
-kind: Quota
-metadata:
-  name: small
-spec:
-  resourcequota:
-    hard:
-      requests.cpu: '2'
-      requests.memory: '4Gi'
-      stakater.storageclass.storage.k8s.io/requests.storage: '20Gi'
-EOF
-```
-
-Once the quota is created, Bill will create the tenant and set its `quota` field to the quota he just created:
-
-```yaml
-kubectl create -f - << EOF
-apiVersion: tenantoperator.stakater.com/v1beta2
-kind: Tenant
-metadata:
-  name: sigma
-spec:
-  owners:
-    users:
-      - dave@aurora.org
-  quota: small
-  sandbox: true
-EOF
-```
-
-Now, the combined storage provisioned from StorageClass `stakater` and used by all tenant namespaces will not exceed `20Gi`.
-
-> The `20Gi` limit only applies to the StorageClass `stakater`. If a tenant member creates a PVC with some other StorageClass, it will not be restricted by this quota.
-
-!!! tip
-    More details about `Resource Quota` can be found [here](https://kubernetes.io/docs/concepts/policy/resource-quotas/)
diff --git a/content/vault-multitenancy.md b/content/vault-multitenancy.md
deleted file mode 100644
index e851bf69c..000000000
--- a/content/vault-multitenancy.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# Vault Multitenancy
-
-HashiCorp Vault is an identity-based secret and encryption management system. Vault validates and authorizes a system's clients (users, machines, apps) before providing them access to secrets or stored sensitive data.
-
-## Vault integration in Multi Tenant Operator
-
-### Service Account Auth in Vault
-
-MTO enables the [Kubernetes auth method](https://www.Vaultproject.io/docs/auth/kubernetes), which can be used to authenticate with Vault using a Kubernetes Service Account Token. When enabled, for every tenant namespace, MTO automatically creates policies and roles that allow the service accounts present in those namespaces to **read** secrets at the tenant's path in Vault. The name of the role is the same as the **namespace** name.
-
-These service accounts are required to have the `stakater.com/vault-access: true` label so that they can authenticate with Vault via MTO.
-
-The diagram below shows how MTO enables ServiceAccounts to read secrets from Vault.
-
-![image](./images/mto-vault-k8s-auth-workflow.png)
-
-### User OIDC Auth in Vault
-
-This requires a running `RHSSO (Red Hat Single Sign-On)` instance integrated with Vault over the [OIDC](https://developer.hashicorp.com/vault/docs/auth/jwt) login method.
-
-MTO's integration with Vault and RHSSO provides a way for users to log in to Vault where they only have access to the relevant tenant paths.
-
-Once both integrations are set up with the [IntegrationConfig CR](/content/integration-config.md), MTO links tenant users to specific client roles named after their tenant under the Vault client in RHSSO.
-
-After that, MTO creates specific policies in Vault for its tenant users.
-
-The mapping of tenant roles to Vault paths and capabilities is shown below:
-
-| Tenant Role   | Vault Path                | Vault Capabilities                 |
-|:--------------|:--------------------------|:-----------------------------------|
-| Owner, Editor | (tenantName)/*            | Create, Read, Update, Delete, List |
-| Owner, Editor | sys/mounts/(tenantName)/* | Create, Read, Update, Delete, List |
-| Owner, Editor | managed-addons/*          | Read, List                         |
-| Viewer        | (tenantName)/*            | Read                               |
-
-A simple user login workflow is shown in the diagram below.
-
-![image](./images/mto-vault-integration-user-workflow.png)
diff --git a/mkdocs.yml b/mkdocs.yml
index 427bca647..21a8430c7 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -48,21 +48,16 @@ nav:
       - tutorials/installation.md
       - Create your first Tenant:
           - tutorials/tenant/create-tenant.md
-#          - tutorials/tenant/assign-quota-tenant.md
           - tutorials/tenant/create-sandbox.md
           - tutorials/tenant/creating-namespaces.md
           - tutorials/tenant/assigning-metadata.md
           - tutorials/tenant/tenant-hibernation.md
-#          - tutorials/tenant/custom-rbac.md
           - tutorials/tenant/deleting-tenant.md
       - "Template: Definition and Usage Guide":
          - tutorials/template/template.md
-#          - tutorials/template/template-instance.md
-#          - tutorials/template/template-group-instance.md
       - ArgoCD Multi-tenancy:
          - tutorials/argocd/enabling-multi-tenancy-argocd.md
       - Vault Multi-Tenancy:
-#          - tutorials/vault/why-vault-multi-tenancy.md
          - tutorials/vault/enabling-multi-tenancy-vault.md
   - How-to Guides:
       - how-to-guides/tenant.md
@@ -74,7 +69,6 @@
   - Offboarding:
       - how-to-guides/offboarding/uninstalling.md
   - Reference guides:
-#      - reference-guides/add-remove-namespace-gitops.md
       - reference-guides/admin-clusterrole.md
       - reference-guides/configuring-multitenant-network-isolation.md
       - reference-guides/custom-roles.md
@@ -87,12 +81,13 @@
       - reference-guides/integrationconfig.md
       - reference-guides/mattermost.md
       - reference-guides/secret-distribution.md
+      - reference-guides/custom-metrics.md
+      - reference-guides/graph-visualization.md
   - Explanation:
+      - explanation/console.md
+      - explanation/auth.md
       - explanation/why-argocd-multi-tenancy.md
-#      - explanation/why-vault-multi-tenancy.md
   - faq.md
-#  - FAQ:
-#      - faq/index.md
   - changelog.md
   - eula.md
   - troubleshooting.md