*Component Architecture*

### Piped

`piped` is a single-binary component that you run as an agent in your cluster or local network to handle deployment tasks.
It can be run inside a Kubernetes cluster by simply starting a Pod or a Deployment.
This component is designed to be stateless, so it can also be run on a single VM or even your local machine.

### Control Plane

A centralized component that manages deployment data and provides the gRPC API for connecting `piped`s, as well as all web functionalities of PipeCD such as
authentication, showing deployment lists/details, application lists/details, delivery insights...

### Project

A project is a logical group of applications to be managed by a group of users.
Each project can have multiple `piped` instances from different clouds or environments.

There are three types of project roles:

- **Viewer** has view-only permissions for deployments and applications in the project.
- **Editor** has all Viewer permissions, plus permissions for actions that modify state, such as manually triggering or cancelling deployments.
- **Admin** has all Editor permissions, plus permissions for managing project data and project `piped`s.

### Application

A collection of resources (containers, services, infrastructure components...) and configurations that are managed together.
PipeCD supports multiple kinds of applications such as `KUBERNETES`, `TERRAFORM`, `ECS`, `CLOUDRUN`, `LAMBDA`...

### Application Configuration

A YAML file that contains the information to define and configure an application.
Each application requires one such file in its application directory stored in the Git repository.
The default file name is `app.pipecd.yaml`.

### Application Directory

A directory in the Git repository containing the application configuration file and application manifests.
Each application must have one application directory.

### Deployment

A deployment is a process that transitions a specific application from its current state (running state) to the desired state (the state specified in Git).
When a deployment succeeds, the running state has been synced with the desired state specified in the target commit.

### Sync Strategy

PipeCD supports three strategies for syncing your application state with its configuration stored in Git:
- Quick Sync: a fast way to make the running application state the same as its configuration stored in Git. The generated pipeline contains only one predefined `SYNC` stage.
- Pipeline Sync: syncs the running application state with its configuration stored in Git through a pipeline defined in its application configuration.
- Auto Sync: depending on your application configuration, `piped` decides the best way to sync your application state with its configuration stored in Git.

### Platform Provider

Note: The previous name of this concept was Cloud Provider.

PipeCD supports multiple platforms and multiple kinds of applications.
A Platform Provider defines which platform and cloud the application should be deployed to, and where.

Currently, PipeCD supports these five platform providers: `KUBERNETES`, `ECS`, `TERRAFORM`, `CLOUDRUN`, `LAMBDA`.

### Analysis Provider

An external product that provides metrics/logs to evaluate deployments, such as `Prometheus`, `Datadog`, `Stackdriver`, `CloudWatch` and so on.
It is mainly used in the [Automated deployment analysis](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) context.
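To make the Application Configuration and Sync Strategy concepts concrete, below is a minimal sketch of an `app.pipecd.yaml` for a Kubernetes application. The application name (`demo-app`), the particular stage list, and the canary percentage are illustrative placeholders, not a prescribed setup; see the configuration reference for the exact schema of each application kind.

``` yaml
# app.pipecd.yaml — an illustrative sketch, not a full reference.
apiVersion: pipecd.dev/v1beta1
kind: KubernetesApp
spec:
  name: demo-app            # hypothetical application name
  # Defining a pipeline means this application is deployed with Pipeline Sync.
  # Without this block, piped generates a single predefined SYNC stage (Quick Sync).
  pipeline:
    stages:
      - name: K8S_CANARY_ROLLOUT
        with:
          replicas: 25%     # roll out a canary with 25% of the desired replicas
      - name: WAIT_APPROVAL # pause until someone approves on the web console
      - name: K8S_PRIMARY_ROLLOUT
      - name: K8S_CANARY_CLEAN
```

This file lives in the application directory in Git, so changing it (or the manifests next to it) is what triggers a new deployment toward the desired state.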

diff --git a/docs/content/en/docs/contribution-guidelines/_index.md b/docs/content/en/docs/contribution-guidelines/_index.md
deleted file mode 100755

---
title: "Contributor Guide"
linkTitle: "Contributor Guide"
weight: 6
description: >
  This guide is for anyone who wants to contribute to the PipeCD project. We are so excited to have you!
---

diff --git a/docs/content/en/docs/contribution-guidelines/architectural-overview.md b/docs/content/en/docs/contribution-guidelines/architectural-overview.md
deleted file mode 100644

---
title: "Architectural overview"
linkTitle: "Architectural overview"
weight: 3
description: >
  This page describes the architecture of PipeCD.
---

![](/images/architecture-overview.png)
*Component Architecture*
- -### Piped - -A single binary component runs in your cluster, your local network to handle the deployment tasks. -It can be run inside a Kubernetes cluster by simply starting a Pod or a Deployment. -This component is designed to be stateless, so it can also be run in a single VM or even your local machine. - -### Control Plane - -A centralized component manages deployment data and provides gRPC API for connecting `piped`s as well as all web-functionalities of PipeCD such as -authentication, showing deployment list/details, application list/details, delivery insights... - -Control Plane contains the following components: -- `server`: a service to provide api for piped, web and serve static assets for web. -- `ops`: a service to provide administrative features for Control Plane owner like adding/managing projects. -- `cache`: a redis cache service for caching internal data. -- `datastore`: data storage for storing deployment, application data - - this can be a fully-managed service such as `Firestore`, `Cloud SQL`... - - or a self-managed such as `MySQL` -- `filestore`: file storage for storing logs, application states - - this can a fully-managed service such as `GCS`, `S3`... - - or a self-managed service such as `Minio` - -For more information, see [Architecture overview of Control Plane](../../user-guide/managing-controlplane/architecture-overview/). diff --git a/docs/content/en/docs/contribution-guidelines/contributing.md b/docs/content/en/docs/contribution-guidelines/contributing.md deleted file mode 100644 index 87eb1a51c0..0000000000 --- a/docs/content/en/docs/contribution-guidelines/contributing.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: "Contributing" -linkTitle: "Contributing" -weight: 1 -description: > - This page describes how to contribute to PipeCD. ---- - -PipeCD is an open source project that anyone in the community can use, improve, and enjoy. We'd love you to join us! [Contributing to PipeCD](https://github.com/pipe-cd/pipecd/tree/master/CONTRIBUTING.md) is the best place to start with. \ No newline at end of file diff --git a/docs/content/en/docs/examples/_index.md b/docs/content/en/docs/examples/_index.md deleted file mode 100755 index 8030751054..0000000000 --- a/docs/content/en/docs/examples/_index.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -title: "Examples" -linkTitle: "Examples" -weight: 7 -description: > - Some examples of PipeCD in action! ---- - -One of the best ways to see what PipeCD can do, and learn how to deploy your applications with it, is to see some real examples. - -We have prepared some examples for each kind of application. -The examples can be found at the following repository: - -https://github.com/pipe-cd/examples - -### Kubernetes Applications - -| Name | Description | -|-----------------------------------------------------------------------------|-------------| -| [simple](https://github.com/pipe-cd/examples/tree/master/kubernetes/simple) | Deploy plain-yaml manifests in application directory without using pipeline. | -| [helm-local-chart](https://github.com/pipe-cd/examples/tree/master/kubernetes/helm-local-chart) | Deploy a helm chart sourced from the same Git repository. | -| [helm-remote-chart](https://github.com/pipe-cd/examples/tree/master/kubernetes/helm-remote-chart) | Deploy a helm chart sourced from a [Helm Chart Repository](https://helm.sh/docs/topics/chart_repository/). 
| -| [helm-remote-git-chart](https://github.com/pipe-cd/examples/tree/master/kubernetes/helm-remote-git-chart) | Deploy a helm chart sourced from another Git repository. | -| [kustomize-local-base](https://github.com/pipe-cd/examples/tree/master/kubernetes/kustomize-local-base) | Deploy a kustomize package that just uses the local bases from the same Git repository. | -| [kustomize-remote-base](https://github.com/pipe-cd/examples/tree/master/kubernetes/kustomize-remote-base) | Deploy a kustomize package that uses remote bases from other Git repositories. | -| [canary](https://github.com/pipe-cd/examples/tree/master/kubernetes/canary) | Deployment pipeline with canary strategy. | -| [canary-by-config-change](https://github.com/pipe-cd/examples/tree/master/kubernetes/canary-by-config-change) | Deployment pipeline with canary strategy when ConfigMap was changed. | -| [canary-patch](https://github.com/pipe-cd/examples/tree/master/kubernetes/canary-patch) | Demonstrate how to customize manifests for Canary variant using [patches](../user-guide/configuration-reference/#kubernetescanaryrolloutstageoptions) option. | -| [bluegreen](https://github.com/pipe-cd/examples/tree/master/kubernetes/bluegreen) | Deployment pipeline with bluegreen strategy. This also contains a manual approval stage. | -| [mesh-istio-canary](https://github.com/pipe-cd/examples/tree/master/kubernetes/mesh-istio-canary) | Deployment pipeline with canary strategy by using Istio for traffic routing. | -| [mesh-istio-bluegreen](https://github.com/pipe-cd/examples/tree/master/kubernetes/mesh-istio-bluegreen) | Deployment pipeline with bluegreen strategy by using Istio for traffic routing. | -| [mesh-smi-canary](https://github.com/pipe-cd/examples/tree/master/kubernetes/mesh-smi-canary) | Deployment pipeline with canary strategy by using SMI for traffic routing. | -| [mesh-smi-bluegreen](https://github.com/pipe-cd/examples/tree/master/kubernetes/mesh-smi-bluegreen) | Deployment pipeline with bluegreen strategy by using SMI for traffic routing. | -| [wait-approval](https://github.com/pipe-cd/examples/tree/master/kubernetes/wait-approval) | Deployment pipeline that contains a manual approval stage. | -| [multi-steps-canary](https://github.com/pipe-cd/examples/tree/master/kubernetes/multi-steps-canary) | Deployment pipeline with multiple canary steps. | -| [analysis-by-metrics](https://github.com/pipe-cd/examples/tree/master/kubernetes/analysis-by-metrics) | Deployment pipeline with analysis stage by metrics. | -| [analysis-by-http](https://github.com/pipe-cd/examples/tree/master/kubernetes/analysis-by-http) | Deployment pipeline with analysis stage by running http requests. | -| [analysis-by-log](https://github.com/pipe-cd/examples/tree/master/kubernetes/analysis-by-log) | Deployment pipeline with analysis stage by checking logs. | -| [analysis-with-baseline](https://github.com/pipe-cd/examples/tree/master/kubernetes/analysis-with-baseline) | Deployment pipeline with analysis stage by comparing baseline and canary. | -| [secret-management](https://github.com/pipe-cd/examples/tree/master/kubernetes/secret-management) | Demonstrate how to manage sensitive data by using [Secret Management](../user-guide/managing-application/secret-management/) feature. | - -### Terraform Applications - -| Name | Description | -|-----------------------------------------------------------------------------|-------------| -| [simple](https://github.com/pipe-cd/examples/tree/master/terraform/simple) | Automatically applies when any changes were detected. 
| -| [local-module](https://github.com/pipe-cd/examples/tree/master/terraform/local-module) | Deploy application that using local terraform modules from the same Git repository. | -| [remote-module](https://github.com/pipe-cd/examples/tree/master/terraform/remote-module) | Deploy application that using remote terraform modules from other Git repositories. | -| [wait-approval](https://github.com/pipe-cd/examples/tree/master/terraform/wait-approval) | Deployment pipeline that contains a manual approval stage. | -| [autorollback](https://github.com/pipe-cd/examples/tree/master/terraform/auto-rollback) | Automatically rollback the changes when deployment was failed. | -| [secret-management](https://github.com/pipe-cd/examples/tree/master/terraform/secret-management) | Demonstrate how to manage sensitive data by using [Secret Management](../user-guide/managing-application/secret-management/) feature. | - -### Cloud Run Applications - -| Name | Description | -|-----------------------------------------------------------------------------|-------------| -| [simple](https://github.com/pipe-cd/examples/tree/master/cloudrun/simple) | Quick sync by rolling out the new version and switching all traffic to it. | -| [canary](https://github.com/pipe-cd/examples/tree/master/cloudrun/canary) | Deployment pipeline with canary strategy. | -| [analysis](https://github.com/pipe-cd/examples/tree/master/cloudrun/analysis) | Deployment pipeline that contains an analysis stage. | -| [secret-management](https://github.com/pipe-cd/examples/tree/master/cloudrun/secret-management) | Demonstrate how to manage sensitive data by using [Secret Management](../user-guide/managing-application/secret-management/) feature. | -| [wait-approval](https://github.com/pipe-cd/examples/tree/master/cloudrun/wait-approval) | Deployment pipeline that contains a manual approval stage. | - -### Lambda Applications - -| Name | Description | -|-----------------------------------------------------------------------------|-------------| -| [simple](https://github.com/pipe-cd/examples/tree/master/lambda/simple) | Quick sync by rolling out the new version and switching all traffic to it. | -| [canary](https://github.com/pipe-cd/examples/tree/master/lambda/canary) | Deployment pipeline with canary strategy. | -| [analysis](https://github.com/pipe-cd/examples/tree/master/lambda/analysis) | Deployment pipeline that contains an analysis stage. | -| [secret-management](https://github.com/pipe-cd/examples/tree/master/lambda/secret-management) | Demonstrate how to manage sensitive data by using [Secret Management](../user-guide/managing-application/secret-management/) feature. | -| [wait-approval](https://github.com/pipe-cd/examples/tree/master/lambda/wait-approval) | Deployment pipeline that contains a manual approval stage. | -| [remote-git](https://github.com/pipe-cd/examples/tree/master/lambda/remote-git) | Deploy the lambda code sourced from another Git repository. | -| [zip-packing-s3](https://github.com/pipe-cd/examples/tree/master/lambda/zip-packing-s3) | Deployment pipeline of kind Lambda which uses s3 stored zip file as function code. | - -### ECS Applications - -| Name | Description | -|-----------------------------------------------------------------------------|-------------| -| [simple](https://github.com/pipe-cd/examples/tree/master/ecs/simple) | Quick sync by rolling out the new version and switching all traffic to it. | -| [canary](https://github.com/pipe-cd/examples/tree/master/ecs/canary) | Deployment pipeline with canary strategy. 
| -| [bluegreen](https://github.com/pipe-cd/examples/tree/master/ecs/bluegreen) | Deployment pipeline with blue-green strategy. | -| [secret-management](https://github.com/pipe-cd/examples/tree/master/ecs/secret-management) | Demonstrate how to manage sensitive data by using [Secret Management](../user-guide/managing-application/secret-management/) feature. | -| [wait-approval](https://github.com/pipe-cd/examples/tree/master/ecs/wait-approval) | Deployment pipeline that contains a manual approval stage. | -| [standalone-task](https://github.com/pipe-cd/examples/tree/master/ecs/standalone-task) | Deployment Standalone Task. (`Standalone task is only supported for Quick sync`) | - - -### Deployment chain - -| Name | Description | -|-----------------------------------------------------------------------------|-------------| -| [simple](https://github.com/pipe-cd/examples/tree/master/deployment-chain/simple) | Simple deployment chain which uses application name as a filter in chain configuration. | diff --git a/docs/content/en/docs/faq/_index.md b/docs/content/en/docs/faq/_index.md deleted file mode 100644 index 1a58110ddd..0000000000 --- a/docs/content/en/docs/faq/_index.md +++ /dev/null @@ -1,51 +0,0 @@ ---- -title: "FAQ" -linkTitle: "FAQ" -weight: 9 -description: > - List of frequently asked questions. ---- - -If you have any other questions, please feel free to create the issue in the [pipe-cd/pipecd](https://github.com/pipe-cd/pipecd/issues/new/choose) repository or contact us on [Cloud Native Slack](https://slack.cncf.io) (channel [#pipecd](https://app.slack.com/client/T08PSQ7BQ/C01B27F9T0X)). - -### 1. What kind of application (platform provider) will be supported? - -Currently, PipeCD can be used to deploy `Kubernetes`, `ECS`, `Terraform`, `CloudRun`, `Lambda` applications. - -In the near future we also want to support `Crossplane`... - -### 2. What kind of templating methods for Kubernetes application will be supported? - -Currently, PipeCD is supporting `Helm` and `Kustomize` as templating method for Kubernetes applications. - -### 3. Istio is supported now? - -Yes, you can use PipeCD for both mesh (Istio, SMI) applications and non-mesh applications. - -### 4. What are the differences between PipeCD and FluxCD? - -- Not just Kubernetes applications, PipeCD also provides a unified interface for other cloud services (CloudRun, AWS Lamda...) and Terraform -- One tool for both GitOps sync and progressive deployment -- Supports multiple Git repositories -- Has web UI for better visibility - - Log viewer for each deployment - - Visualization of application component/state in realtime - - Show configuration drift in realtime -- Also supports Canary and BlueGreen for non-mesh applications -- Has built-in secrets management -- Supports gradual rollout of a single app to multiple clusters -- Shows the delivery performance insights - -### 5. What are the differences between PipeCD and ArgoCD? - -- Not just Kubernetes applications, PipeCD also provides a unified interface for other cloud services (GCP CloudRun, AWS Lamda...) and Terraform -- One tool for both GitOps sync and progressive deployment -- Don't need another CRD or changing the existing manifests for doing Canary/BlueGreen. 
PipeCD just uses the standard Kubernetes deployment object -- Easier and safer to operate multi-tenancy, multi-cluster for multiple teams (even some teams are running in a private/restricted network) -- Has built-in secrets management -- Supports gradual rollout of a single app to multiple clusters -- Shows the delivery performance insights - -### 6. What should I do if I lost my Piped key? - -You can create a new Piped key. Go to the `Piped` tab at `Settings` page, and click the vertical ellipsis of the Piped that you would like to create the new Piped key. Don't forget deleting the old Key, too. diff --git a/docs/content/en/docs/feature-status/_index.md b/docs/content/en/docs/feature-status/_index.md deleted file mode 100644 index 77bae8873e..0000000000 --- a/docs/content/en/docs/feature-status/_index.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -title: "Feature Status" -linkTitle: "Feature Status" -weight: 8 -description: > - This page lists the relative maturity of every PipeCD features. ---- - -Please note that the phases (Incubating, Alpha, Beta, and Stable) are applied to individual features within the project, not to the project as a whole. - -## Feature Phase Definitions - -| Phase | Definition | -|-|-| -| Incubating | Under planning/developing the prototype and still not ready to be used. | -| Alpha | Demo-able, works end-to-end but has limitations. No guarantees on backward compatibility. | -| Beta | **Usable in production**. Documented. | -| Stable | Production hardened. Backward compatibility. Documented. | - -## Provider - -### Kubernetes - -| Feature | Phase | -|-|-| -| Quick sync deployment | Beta | -| Deployment with a defined pipeline (e.g. canary, analysis) | Beta | -| [Automated rollback](../user-guide/managing-application/rolling-back-a-deployment/) | Beta | -| [Automated configuration drift detection](../user-guide/managing-application/configuration-drift-detection/) | Beta | -| [Application live state](../user-guide/managing-application/application-live-state/) | Beta | -| Support Helm | Beta | -| Support Kustomize | Beta | -| Support Istio service mesh | Beta | -| Support SMI service mesh | Incubating | -| Support [AWS App Mesh](https://aws.amazon.com/app-mesh/) | Incubating | -| [Plan preview](../user-guide/plan-preview) | Beta | -| [Manifest attachment](../user-guide/managing-application/manifest-attachment) | Alpha | - -### Terraform - -| Feature | Phase | -|-|-| -| Quick sync deployment | Beta | -| Deployment with a defined pipeline (e.g. manual-approval) | Beta | -| [Automated rollback](../user-guide/managing-application/rolling-back-a-deployment/) | Beta | -| [Automated configuration drift detection](../user-guide/managing-application/configuration-drift-detection/) | Alpha | -| [Application live state](../user-guide/managing-application/application-live-state/) | Incubating | -| [Plan preview](../user-guide/plan-preview) | Beta | -| [Manifest attachment](../user-guide/managing-application/manifest-attachment) | Alpha | - -### Cloud Run - -| Feature | Phase | -|-|-| -| Quick sync deployment | Beta | -| Deployment with a defined pipeline (e.g. 
canary, analysis) | Beta | -| [Automated rollback](../user-guide/managing-application/rolling-back-a-deployment/) | Beta | -| [Automated configuration drift detection](../user-guide/managing-application/configuration-drift-detection/) | Beta | -| [Application live state](../user-guide/managing-application/application-live-state/) | Beta | -| [Plan preview](../user-guide/plan-preview) | Beta | -| [Manifest attachment](../user-guide/managing-application/manifest-attachment) | Alpha | - -### Lambda - -| Feature | Phase | -|-|-| -| Quick sync deployment | Beta | -| Deployment with a defined pipeline (e.g. canary, analysis) | Beta | -| [Automated rollback](../user-guide/managing-application/rolling-back-a-deployment/) | Beta | -| [Automated configuration drift detection](../user-guide/managing-application/configuration-drift-detection/) | Incubating | -| [Application live state](../user-guide/managing-application/application-live-state/) | Incubating | -| [Plan preview](../user-guide/plan-preview) | Alpha | -| [Manifest attachment](../user-guide/managing-application/manifest-attachment) | Alpha | - -### Amazon ECS - -| Feature | Phase | -|-|-| -| Quick sync deployment | Alpha | -| Deployment with a defined pipeline (e.g. canary, analysis) | Alpha | -| [Automated rollback](../user-guide/managing-application/rolling-back-a-deployment/) | Beta | -| [Automated configuration drift detection](../user-guide/managing-application/configuration-drift-detection/) | Incubating | -| [Application live state](../user-guide/managing-application/application-live-state/) | Incubating | -| Support [AWS App Mesh](https://aws.amazon.com/app-mesh/) | Incubating | -| [Plan preview](../user-guide/plan-preview) | Alpha | -| [Manifest attachment](../user-guide/managing-application/manifest-attachment) | Alpha | - -## Piped Agent - -| Feature | Phase | -|-|-| -| [Deployment wait stage](../user-guide/managing-application/customizing-deployment/adding-a-wait-stage/) | Beta | -| [Deployment manual approval stage](../user-guide/managing-application/customizing-deployment/adding-a-manual-approval/) | Beta | -| [Notification](../user-guide/managing-piped/configuring-notifications/) to Slack | Beta | -| [Notification](../user-guide/managing-piped/configuring-notifications/) to external service via webhook | Beta | -| [Secrets management](../user-guide/managing-application/secret-management/) - Storing secrets safely in the Git repository | Beta | -| [Event watcher](../user-guide/event-watcher/) - Updating files in Git automatically for given events | Beta | -| [Pipectl](../user-guide/command-line-tool/) - Command-line tool for interacting with Control Plane | Beta | -| Deployment plugin - Allow executing user-created deployment plugin | Incubating | -| [ADA](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) (Automated Deployment Analysis) by Prometheus metrics | Alpha | -| [ADA](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) by Datadog metrics | Alpha | -| [ADA](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) by Stackdriver metrics | Incubating | -| [ADA](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) by Stackdriver log | Incubating | -| [ADA](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) by CloudWatch metrics | Incubating | -| 
[ADA](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) by CloudWatch log | Incubating | -| [ADA](../user-guide/managing-application/customizing-deployment/automated-deployment-analysis/) by HTTP request (smoke test...) | Incubating | -| [Remote upgrade](../user-guide/managing-piped/remote-upgrade-remote-config/#remote-upgrade) - Ability to upgrade Piped from the web console | Beta | -| [Remote config](../user-guide/managing-piped/remote-upgrade-remote-config/#remote-config) - Watch and reload configuration from a remote location such as Git | Beta | - -## Control Plane - -| Feature | Phase | -|-|-| -| Project/Piped/Application/Deployment management | Beta | -| Rendering deployment pipeline in realtime | Beta | -| Canceling a deployment from console | Beta | -| Triggering a deployment manually from console | Beta | -| RBAC on PipeCD resources such as Application, Piped... | Alpha | -| Authentication by username/password for static admin | Beta | -| GitHub & GitHub Enterprise Server SSO | Beta | -| Google SSO | Incubating | -| Support GCP [Firestore](https://cloud.google.com/firestore) as data store | Beta | -| Support [MySQL v8.0](https://www.mysql.com/) as data store | Beta | -| Support GCP [GCS](https://cloud.google.com/storage) as file store | Beta | -| Support AWS [S3](https://aws.amazon.com/s3/) as file store | Beta | -| Support [Minio](https://github.com/minio/minio) as file store | Beta | -| Support using file storage such as GCS, S3, Minio for both data store and file store (It means no database is required to run control plane) | Incubating | -| [Insights](../user-guide/insights/) - Show the delivery performance of a team or an application | Incubating | -| [Deployment Chain](../user-guide/managing-application/deployment-chain/) - Allow rolling out to multiple clusters gradually or promoting across environments | Alpha | -| [Metrics](../user-guide/managing-controlplane/metrics/) - Dashboards for PipeCD and Piped metrics | Beta | diff --git a/docs/content/en/docs/installation/_index.md b/docs/content/en/docs/installation/_index.md deleted file mode 100644 index 76a1629a37..0000000000 --- a/docs/content/en/docs/installation/_index.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: "Installation" -linkTitle: "Installation" -weight: 4 -description: > - Complete guideline for installing and configuring PipeCD on your own. ---- - -Before starting to install PipeCD, let’s have a look at PipeCD’s components, determine your role, and which components you will interact with while installing/using PipeCD. You’re recommended to read about PipeCD’s [Control Plane](../concepts/#control-plane) and [Piped](../concepts/#piped) on the concepts page. - -![](/images/architecture-overview-with-roles.png) --PipeCD's components with roles -

Basically, there are two types of users/roles in the PipeCD system:
- Developers/Production team: Users who use PipeCD to manage their applications' deployments. You will interact with Piped and may or may not need to install Piped by yourself.
- Operators/Platform team: Users who operate PipeCD so that other developers can use it. You will interact with both the Control Plane and Piped; you will be the one who installs the Control Plane and keeps it up for Pipeds to connect to while they manage their applications' deployments.

This section contains the guidelines for installing PipeCD's Control Plane and Piped step by step. You can choose what to read based on your role.

diff --git a/docs/content/en/docs/installation/install-controlplane.md b/docs/content/en/docs/installation/install-controlplane.md
deleted file mode 100644

---
title: "Install Control Plane"
linkTitle: "Install Control Plane"
weight: 2
description: >
  This page describes how to install the Control Plane on a Kubernetes cluster.
---

## Prerequisites

- A running Kubernetes cluster
- [Helm](https://helm.sh/docs/intro/install/) installed (3.8.0 or later)

## Installation

### 1. Preparing an encryption key

PipeCD requires a key for encrypting sensitive data and signing JWT tokens during authentication. You can use one of the following commands to generate an encryption key.

``` console
openssl rand 64 | base64 > encryption-key

# or
cat /dev/urandom | head -c64 | base64 > encryption-key
```

### 2. Preparing a Control Plane configuration file and installing

![](/images/control-plane-components.png)
*Control Plane Architecture*
- -The Control Plane of PipeCD is constructed by several components, as shown in the above graph (for more in detail please read [Control Plane architecture overview docs](../../user-guide/managing-controlplane/architecture-overview/)). As mentioned in the graph, the PipeCD's data can be stored in one of the provided fully-managed or self-managed services. So you have to decide which kind of [data store](../../user-guide/managing-controlplane/architecture-overview/#data-store) and [file store](../../user-guide/managing-controlplane/architecture-overview/#file-store) you want to use and prepare a Control Plane configuration file suitable for that choice. - -#### Using Firestore and GCS - -PipeCD requires a GCS bucket and service account files to access Firestore and GCS service. Here is an example of configuration file: - -``` yaml -apiVersion: "pipecd.dev/v1beta1" -kind: ControlPlane -spec: - stateKey: {RANDOM_STRING} - datastore: - type: FIRESTORE - config: - namespace: pipecd - environment: dev - project: {YOUR_GCP_PROJECT_NAME} - # Must be a service account with "Cloud Datastore User" and "Cloud Datastore Index Admin" roles - # since PipeCD needs them to creates the needed Firestore composite indexes in the background. - credentialsFile: /etc/pipecd-secret/firestore-service-account - filestore: - type: GCS - config: - bucket: {YOUR_BUCKET_NAME} - # Must be a service account with "Storage Object Admin (roles/storage.objectAdmin)" role on the given bucket - # since PipeCD need to write file object such as deployment log file to that bucket. - credentialsFile: /etc/pipecd-secret/gcs-service-account -``` - -See [ConfigurationReference](../../user-guide/managing-controlplane/configuration-reference/) for the full configuration. - -After all, install the Control Plane as bellow: - -``` console -helm upgrade -i pipecd oci://ghcr.io/pipe-cd/chart/pipecd --version {{< blocks/latest_version >}} --namespace={NAMESPACE} \ - --set-file config.data=path-to-control-plane-configuration-file \ - --set-file secret.encryptionKey.data=path-to-encryption-key-file \ - --set-file secret.firestoreServiceAccount.data=path-to-service-account-file \ - --set-file secret.gcsServiceAccount.data=path-to-service-account-file -``` - -Currently, besides `Firestore` PipeCD supports other databases as its datastore such as `MySQL`. Also as for filestore, PipeCD supports `AWS S3` and `MINIO` either. - -For example, in case of using `MySQL` as datastore and `MINIO` as filestore, the ControlPlane configuration will be as follow: - -```yaml -apiVersion: "pipecd.dev/v1beta1" -kind: ControlPlane -spec: - stateKey: {RANDOM_STRING} - datastore: - type: MYSQL - config: - url: {YOUR_MYSQL_ADDRESS} - database: {YOUR_DATABASE_NAME} - filestore: - type: MINIO - config: - endpoint: {YOUR_MINIO_ADDRESS} - bucket: {YOUR_BUCKET_NAME} - accessKeyFile: /etc/pipecd-secret/minio-access-key - secretKeyFile: /etc/pipecd-secret/minio-secret-key - autoCreateBucket: true -``` - -You can find required configurations to use other datastores and filestores from [ConfigurationReference](../../user-guide/managing-controlplane/configuration-reference/). - -__Caution__: In case of using `MySQL` as Control Plane's datastore, please note that the implementation of PipeCD requires some features that only available on [MySQL v8](https://dev.mysql.com/doc/refman/8.0/en/), make sure your MySQL service is satisfied the requirement. - -### 3. 
Accessing the PipeCD web

If your installation includes an [ingress](https://github.com/pipe-cd/pipecd/blob/master/manifests/pipecd/values.yaml#L7), the PipeCD web can be accessed via the ingress's IP address or domain.
Otherwise, a private PipeCD web can be accessed by using `kubectl port-forward` to expose the installed Control Plane on your localhost:

``` console
kubectl port-forward svc/pipecd 8080 --namespace={NAMESPACE}
```

Now go to [http://localhost:8080](http://localhost:8080) in your browser; you will see a page to log in to your project.

At this point, you have an installed PipeCD Control Plane. To log in, you need to initialize a new project.

### 4. Initialize a new project

To create a new project, you need to access the `ops` pod in your installed PipeCD Control Plane, using the `kubectl port-forward` command:

```console
kubectl port-forward service/pipecd-ops 9082 --namespace={NAMESPACE}
```

Then, access [http://localhost:9082](http://localhost:9082).

On that page, you will see the list of registered projects and a link to register new projects. Registering a new project requires only a unique ID string and an optional description text.

Once a new project has been registered, a static admin (username, password) is automatically generated for the project admin. You can use it to log in via the login form mentioned in the section above.

For more detail about adding a new project, please read the following [docs](../../user-guide/managing-controlplane/adding-a-project/).

### 4'. Upgrade Control Plane version

To upgrade the PipeCD Control Plane, the preparations and commands remain the same as when installing it. You only need to change the version flag in the command to the version you want to upgrade your Control Plane to.

``` console
helm upgrade -i pipecd oci://ghcr.io/pipe-cd/chart/pipecd --version {NEW_VERSION} --namespace={NAMESPACE} \
  --set-file config.data=path-to-control-plane-configuration-file \
  --set-file secret.encryptionKey.data=path-to-encryption-key-file \
  --set-file secret.firestoreServiceAccount.data=path-to-service-account-file \
  --set-file secret.gcsServiceAccount.data=path-to-service-account-file
```

## Production Hardening

This part provides guidance for a production-hardened deployment of the Control Plane.

- Publishing the control plane

  You can allow external access to the control plane by enabling the [ingress](https://github.com/pipe-cd/pipecd/blob/master/manifests/pipecd/values.yaml#L7) configuration.

- End-to-End TLS

  After switching to HTTPS, do not forget to set the `api.args.secureCookie` parameter to `true` to disallow using cookies over unsecured HTTP connections.

  Alternatively, in the case of GKE Ingress, PipeCD also requires a TLS certificate for internal use. This can be a self-signed certificate generated by this command:

  ``` console
  openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN={YOUR_DOMAIN}"
  ```

  The key and cert can be configured via [`secret.internalTLSKey.data`](https://github.com/pipe-cd/pipecd/blob/master/manifests/pipecd/values.yaml#L118) and [`secret.internalTLSCert.data`](https://github.com/pipe-cd/pipecd/blob/master/manifests/pipecd/values.yaml#L121).

  To enable the internal TLS connection, please set the `gateway.internalTLS.enabled` parameter to `true`.
- - Otherwise, the `cloud.google.com/app-protocols` annotation is also should be configured as the following: - - ``` yaml - service: - port: 443 - annotations: - cloud.google.com/app-protocols: '{"service":"HTTP2"}' - ``` diff --git a/docs/content/en/docs/installation/install-piped/_index.md b/docs/content/en/docs/installation/install-piped/_index.md deleted file mode 100644 index 71a5199f66..0000000000 --- a/docs/content/en/docs/installation/install-piped/_index.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: "Install Piped" -linkTitle: "Install Piped" -weight: 3 -description: > - This page describes how to install a Piped. ---- - -Since Piped is a stateless agent, no database or storage is required to run. In addition, a Piped can interact with one or multiple platform providers, so the number of Piped and where they should run is entirely up to your preference. For example, you can run your Pipeds in a Kubernetes cluster to deploy not just Kubernetes applications but your Terraform and Cloud Run applications as well. diff --git a/docs/content/en/docs/installation/install-piped/installing-on-cloudrun.md b/docs/content/en/docs/installation/install-piped/installing-on-cloudrun.md deleted file mode 100644 index 2919f6ef2e..0000000000 --- a/docs/content/en/docs/installation/install-piped/installing-on-cloudrun.md +++ /dev/null @@ -1,174 +0,0 @@ ---- -title: "Installing on Cloud Run" -linkTitle: "Installing on Cloud Run" -weight: 3 -description: > - This page describes how to install Piped on Cloud Run. ---- - -## Prerequisites - -##### Having piped's ID and Key strings -- Ensure that the `piped` has been registered and you are having its PIPED_ID and PIPED_KEY strings. -- If you are not having them, this [page](../../../user-guide/managing-controlplane/registering-a-piped/) guides you how to register a new one. - -##### Preparing SSH key -- If your Git repositories are private, `piped` requires a private SSH key to access those repositories. -- Please checkout [this documentation](https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) for how to generate a new SSH key pair. Then add the public key to your repositories. (If you are using GitHub, you can add it to Deploy Keys at the repository's Settings page.) - -## Installation - -- Preparing a piped configuration file as the following: - - ``` yaml - apiVersion: pipecd.dev/v1beta1 - kind: Piped - spec: - projectID: {PROJECT_ID} - pipedID: {PIPED_ID} - pipedKeyData: {BASE64_ENCODED_PIPED_KEY} - # Write in a format like "host:443" because the communication is done via gRPC. - apiAddress: {CONTROL_PLANE_API_ADDRESS} - - git: - sshKeyData: {BASE64_ENCODED_PRIVATE_SSH_KEY} - - repositories: - - repoId: {REPO_ID_OR_NAME} - remote: git@github.com:{GIT_ORG}/{GIT_REPO}.git - branch: {GIT_BRANCH} - - # Optional - # Enable this Piped to handle Cloud Run application. - platformProviders: - - name: cloudrun-in-project - type: CLOUDRUN - config: - project: {GCP_PROJECT_ID} - region: {GCP_PROJECT_REGION} - - # Optional - # Uncomment this if you want to enable this Piped to handle Terraform application. - # - name: terraform-gcp - # type: TERRAFORM - - # Optional - # Uncomment this if you want to enable SecretManagement feature. 
- # https://pipecd.dev//docs/user-guide/managing-application/secret-management/ - # secretManagement: - # type: KEY_PAIR - # config: - # privateKeyData: {BASE64_ENCODED_PRIVATE_KEY} - # publicKeyData: {BASE64_ENCODED_PUBLIC_KEY} - ``` - -See [ConfigurationReference](../../../user-guide/managing-piped/configuration-reference/) for the full configuration. - -- Creating a new secret in [SecretManager](https://cloud.google.com/secret-manager/docs/creating-and-accessing-secrets) to store above configuration data securely - - ``` console - gcloud secrets create cloudrun-piped-config --data-file={PATH_TO_CONFIG_FILE} - ``` - - then make sure that Cloud Run has the ability to access that secret as [this guide](https://cloud.google.com/run/docs/configuring/secrets#access-secret). - -- Running Piped in Cloud Run - - Prepare a Cloud Run service manifest file as below. - - {{< tabpane >}} - {{< tab lang="yaml" header="Piped with Remote-upgrade" >}} -# Enable remote-upgrade feature of Piped. -# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-upgrade -# This allows upgrading Piped to a new version from the web console. - -apiVersion: serving.knative.dev/v1 -kind: Service -metadata: - name: piped - annotaions: - run.googleapis.com/ingress: internal - run.googleapis.com/ingress-status: internal -spec: - template: - metadata: - annotations: - autoscaling.knative.dev/maxScale: '1' # This must be 1. - autoscaling.knative.dev/minScale: '1' # This must be 1. - run.googleapis.com/cpu-throttling: "false" # This is required. - spec: - containerConcurrency: 1 # This must be 1 to ensure Piped work correctly. - containers: - - image: ghcr.io/pipe-cd/launcher:{{< blocks/latest_version >}} - args: - - launcher - - --launcher-admin-port=9086 - - --config-file=/etc/piped-config/config.yaml - ports: - - containerPort: 9086 - volumeMounts: - - mountPath: /etc/piped-config - name: piped-config - resources: - limits: - cpu: 1000m - memory: 2Gi - volumes: - - name: piped-config - secret: - secretName: cloudrun-piped-config - items: - - path: config.yaml - key: latest - {{< /tab >}} - {{< tab lang="yaml" header="Piped" >}} -# This just installs a Piped with the specified version. -# Whenever you want to upgrade that Piped to a new version or update its config data you have to restart it. - -apiVersion: serving.knative.dev/v1 -kind: Service -metadata: - name: piped - annotaions: - run.googleapis.com/ingress: internal - run.googleapis.com/ingress-status: internal -spec: - template: - metadata: - annotations: - autoscaling.knative.dev/maxScale: '1' # This must be 1. - autoscaling.knative.dev/minScale: '1' # This must be 1. - run.googleapis.com/cpu-throttling: "false" # This is required. - spec: - containerConcurrency: 1 # This must be 1. - containers: - - image: ghcr.io/pipe-cd/piped:{{< blocks/latest_version >}} - args: - - piped - - --config-file=/etc/piped-config/config.yaml - ports: - - containerPort: 9085 - volumeMounts: - - mountPath: /etc/piped-config - name: piped-config - resources: - limits: - cpu: 1000m - memory: 2Gi - volumes: - - name: piped-config - secret: - secretName: cloudrun-piped-config - items: - - path: config.yaml - key: latest - {{< /tab >}} - {{< /tabpane >}} - - Run Piped service on Cloud Run with the following command: - - ``` console - gcloud beta run services replace cloudrun-piped-service.yaml - ``` - - Note: Make sure that the created secret is accessible from this Piped service. 
See more [here](https://cloud.google.com/run/docs/configuring/secrets#access-secret). diff --git a/docs/content/en/docs/installation/install-piped/installing-on-fargate.md b/docs/content/en/docs/installation/install-piped/installing-on-fargate.md deleted file mode 100644 index bc6cee74fc..0000000000 --- a/docs/content/en/docs/installation/install-piped/installing-on-fargate.md +++ /dev/null @@ -1,199 +0,0 @@ ---- -title: "Installing on ECS Fargate" -linkTitle: "Installing on ECS Fargate" -weight: 4 -description: > - This page describes how to install Piped as a task on ECS cluster backed by AWS Fargate. ---- - -## Prerequisites - -##### Having piped's ID and Key strings -- Ensure that the `piped` has been registered and you are having its PIPED_ID and PIPED_KEY strings. -- If you are not having them, this [page](../../../user-guide/managing-controlplane/registering-a-piped/) guides you how to register a new one. - -##### Preparing SSH key -- If your Git repositories are private, `piped` requires a private SSH key to access those repositories. -- Please checkout [this documentation](https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) for how to generate a new SSH key pair. Then add the public key to your repositories. (If you are using GitHub, you can add it to Deploy Keys at the repository's Settings page.) - -## Installation - -- Preparing a piped configuration file as follows: - - ``` yaml - apiVersion: pipecd.dev/v1beta1 - kind: Piped - spec: - projectID: {PROJECT_ID} - pipedID: {PIPED_ID} - pipedKeyData: {BASE64_ENCODED_PIPED_KEY} - # Write in a format like "host:443" because the communication is done via gRPC. - apiAddress: {CONTROL_PLANE_API_ADDRESS} - - git: - sshKeyData: {BASE64_ENCODED_PRIVATE_SSH_KEY} - - repositories: - - repoId: {REPO_ID_OR_NAME} - remote: git@github.com:{GIT_ORG}/{GIT_REPO}.git - branch: {GIT_BRANCH} - - # Optional - # Enable this Piped to handle ECS application. - platformProviders: - - name: ecs-dev - type: ECS - config: - region: {ECS_RUNNING_REGION} - - # Optional - # Uncomment this if you want to enable this Piped to handle Terraform application. - # - name: terraform-dev - # type: TERRAFORM - - # Optional - # Uncomment this if you want to enable SecretManagement feature. - # https://pipecd.dev//docs/user-guide/managing-application/secret-management/ - # secretManagement: - # type: KEY_PAIR - # config: - # privateKeyData: {BASE64_ENCODED_PRIVATE_KEY} - # publicKeyData: {BASE64_ENCODED_PUBLIC_KEY} - ``` - -See [ConfigurationReference](../../../user-guide/managing-piped/configuration-reference/) for the full configuration. - -- Store the above configuration data to AWS to enable using it while creating piped task. Both [AWS SecretManager](https://aws.amazon.com/secrets-manager/) and [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) are available to address this task. - - {{< tabpane >}} - {{< tab lang="bash" header="Store in AWS SecretManager" >}} - aws secretsmanager create-secret --name PipedConfig \ - --description "Configuration of piped running as ECS Fargate task" \ - --secret-string `base64 piped-config.yaml` - {{< /tab >}} - {{< tab lang="bash" header="Store in AWS Systems Manager Parameter Store" >}} - aws ssm put-parameter \ - --name PipedConfig \ - --value `base64 piped-config.yaml` \ - --type SecureString - {{< /tab >}} - {{< /tabpane >}} - -- Prepare task definition for your piped task. 
Basically, you can just define your piped TaskDefinition as normal TaskDefinition, the only thing that needs to be beware is, in case you used [AWS SecretManager](https://aws.amazon.com/secrets-manager/) to store piped configuration, to enable your piped accesses it's configuration we created as a secret on above, you need to add `secretsmanager:GetSecretValue` policy to your piped task `executionRole`. Read more in [Required IAM permissions for Amazon ECS secrets](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-secrets.html). - - A sample TaskDefinition for Piped as follows - - {{< tabpane >}} - {{< tab lang="json" header="Piped with Remote-upgrade" >}} -# Enable remote-upgrade feature of Piped. -# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-upgrade -# This allows upgrading Piped to a new version from the web console. - -{ - "family": "piped", - "executionRoleArn": "{PIPED_TASK_EXECUTION_ROLE_ARN}", - "containerDefinitions": [ - { - "name": "piped", - "essential": true, - "image": "ghcr.io/pipe-cd/launcher:{{< blocks/latest_version >}}", - "entryPoint": [ - "sh", - "-c" - ], - "command": [ - "/bin/sh -c \"launcher launcher --config-data=$(echo $CONFIG_DATA)\"" - ], - "secrets": [ - { - "valueFrom": "{PIPED_SECRET_MANAGER_ARN}", - "name": "CONFIG_DATA" - } - ], - } - ], - "requiresCompatibilities": [ - "FARGATE" - ], - "networkMode": "awsvpc", - "memory": "512", - "cpu": "256" -} - {{< /tab >}} - {{< tab lang="json" header="Piped" >}} -# This just installs a Piped with the specified version. -# Whenever you want to upgrade that Piped to a new version or update its config data you have to restart it. - -{ - "family": "piped", - "executionRoleArn": "{PIPED_TASK_EXECUTION_ROLE_ARN}", - "containerDefinitions": [ - { - "name": "piped", - "essential": true, - "image": "ghcr.io/pipe-cd/piped:{{< blocks/latest_version >}}", - "entryPoint": [ - "sh", - "-c" - ], - "command": [ - "/bin/sh -c \"piped piped --config-data=$(echo $CONFIG_DATA)\"" - ], - "secrets": [ - { - "valueFrom": "{PIPED_SECRET_MANAGER_ARN}", - "name": "CONFIG_DATA" - } - ], - } - ], - "requiresCompatibilities": [ - "FARGATE" - ], - "networkMode": "awsvpc", - "memory": "512", - "cpu": "256" -} - {{< /tab >}} - {{< /tabpane >}} - - Register this piped task definition and start piped task: - - ```console - aws ecs register-task-definition --cli-input-json file://taskdef.json - aws ecs run-task --cluster {ECS_CLUSTER} --task-definition piped - ``` - - Once the task is created, it will run continuously because of the piped execution. Since this task is run without [startedBy](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_StartTask.html#API_StartTask_RequestSyntax) setting, in case the piped is stopped, it will not automatically be restarted. To do so, you must define an ECS service to control piped task deployment. - - A sample Service definition to control piped task deployment. - - ```json - { - "cluster": "{ECS_CLUSTER}", - "serviceName": "piped", - "desiredCount": 1, # This must be 1. - "taskDefinition": "{PIPED_TASK_DEFINITION_ARN}", - "deploymentConfiguration": { - "minimumHealthyPercent": 0, - "maximumPercent": 100 - }, - "schedulingStrategy": "REPLICA", - "launchType": "FARGATE", - "networkConfiguration": { - "awsvpcConfiguration": { - "assignPublicIp": "ENABLED", # This is need to enable ECS deployment to pull piped container images. - ... - } - } - } - ``` - - Then start your piped task controller service. 
- - ```console - aws ecs create-service \ - --cluster {ECS_CLUSTER} \ - --service-name piped \ - --cli-input-json file://service.json - ``` diff --git a/docs/content/en/docs/installation/install-piped/installing-on-google-cloud-vm.md b/docs/content/en/docs/installation/install-piped/installing-on-google-cloud-vm.md deleted file mode 100644 index 84cb85160f..0000000000 --- a/docs/content/en/docs/installation/install-piped/installing-on-google-cloud-vm.md +++ /dev/null @@ -1,138 +0,0 @@ ---- -title: "Installing on Google Cloud VM" -linkTitle: "Installing on Google Cloud VM" -weight: 2 -description: > - This page describes how to install Piped on Google Cloud VM. ---- - -## Prerequisites - -##### Having piped's ID and Key strings -- Ensure that the `piped` has been registered and you are having its `PIPED_ID` and `PIPED_KEY` strings. -- If you are not having them, this [page](../../../user-guide/managing-controlplane/registering-a-piped/) guides you how to register a new one. - -##### Preparing SSH key -- If your Git repositories are private, `piped` requires a private SSH key to access those repositories. -- Please checkout [this documentation](https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) for how to generate a new SSH key pair. Then add the public key to your repositories. (If you are using GitHub, you can add it to Deploy Keys at the repository's Settings page.) - -## Installation - -- Preparing a piped configuration file as the following: - - ``` yaml - apiVersion: pipecd.dev/v1beta1 - kind: Piped - spec: - projectID: {PROJECT_ID} - pipedID: {PIPED_ID} - pipedKeyData: {BASE64_ENCODED_PIPED_KEY} - # Write in a format like "host:443" because the communication is done via gRPC. - apiAddress: {CONTROL_PLANE_API_ADDRESS} - - git: - sshKeyData: {BASE64_ENCODED_PRIVATE_SSH_KEY} - - repositories: - - repoId: {REPO_ID_OR_NAME} - remote: git@github.com:{GIT_ORG}/{GIT_REPO}.git - branch: {GIT_BRANCH} - - # Optional - # Uncomment this if you want to enable this Piped to handle Cloud Run application. - # platformProviders: - # - name: cloudrun-in-project - # type: CLOUDRUN - # config: - # project: {GCP_PROJECT_ID} - # region: {GCP_PROJECT_REGION} - - # Optional - # Uncomment this if you want to enable this Piped to handle Terraform application. - # - name: terraform-gcp - # type: TERRAFORM - - # Optional - # Uncomment this if you want to enable SecretManagement feature. - # https://pipecd.dev//docs/user-guide/managing-application/secret-management/ - # secretManagement: - # type: KEY_PAIR - # config: - # privateKeyData: {BASE64_ENCODED_PRIVATE_KEY} - # publicKeyData: {BASE64_ENCODED_PUBLIC_KEY} - ``` - -See [ConfigurationReference](../../../user-guide/managing-piped/configuration-reference/) for the full configuration. - -- Creating a new secret in [SecretManager](https://cloud.google.com/secret-manager/docs/creating-and-accessing-secrets) to store above configuration data securely - - ``` shell - gcloud secrets create vm-piped-config --data-file={PATH_TO_CONFIG_FILE} - ``` - -- Creating a new Service Account for Piped and giving it needed roles - - ``` shell - gcloud iam service-accounts create vm-piped \ - --description="Using by Piped running on Google Cloud VM" \ - --display-name="vm-piped" - - # Allow Piped to access the created secret. 
- gcloud secrets add-iam-policy-binding vm-piped-config \ - --member="serviceAccount:vm-piped@{GCP_PROJECT_ID}.iam.gserviceaccount.com" \ - --role="roles/secretmanager.secretAccessor" - - # Allow Piped to write its log messages to Google Cloud Logging service. - gcloud projects add-iam-policy-binding {GCP_PROJECT_ID} \ - --member="serviceAccount:vm-piped@{GCP_PROJECT_ID}.iam.gserviceaccount.com" \ - --role="roles/logging.logWriter" - - # Optional - # If you want to use this Piped to handle Cloud Run application - # run the following command to give it the needed roles. - # https://cloud.google.com/run/docs/reference/iam/roles#additional-configuration - # - # gcloud projects add-iam-policy-binding {GCP_PROJECT_ID} \ - # --member="serviceAccount:vm-piped@{GCP_PROJECT_ID}.iam.gserviceaccount.com" \ - # --role="roles/run.developer" - # - # gcloud iam service-accounts add-iam-policy-binding {GCP_PROJECT_NUMBER}-compute@developer.gserviceaccount.com \ - # --member="serviceAccount:vm-piped@{GCP_PROJECT_ID}.iam.gserviceaccount.com" \ - # --role="roles/iam.serviceAccountUser" - ``` - -- Running Piped on a Google Cloud VM - - {{< tabpane >}} - {{< tab lang="console" header="Piped with Remote-upgrade" >}} -# Enable remote-upgrade feature of Piped. -# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-upgrade -# This allows upgrading Piped to a new version from the web console. - - gcloud compute instances create-with-container vm-piped \ - --container-image="ghcr.io/pipe-cd/launcher:{{< blocks/latest_version >}}" \ - --container-arg="launcher" \ - --container-arg="--config-from-gcp-secret=true" \ - --container-arg="--gcp-secret-id=projects/{GCP_PROJECT_ID}/secrets/vm-piped-config/versions/{SECRET_VERSION}" \ - --network="{VPC_NETWORK}" \ - --subnet="{VPC_SUBNET}" \ - --scopes="cloud-platform" \ - --service-account="vm-piped@{GCP_PROJECT_ID}.iam.gserviceaccount.com" - {{< /tab >}} - {{< tab lang="console" header="Piped" >}} -# This just installs a Piped with the specified version. -# Whenever you want to upgrade that Piped to a new version or update its config data you have to restart it. - - gcloud compute instances create-with-container vm-piped \ - --container-image="ghcr.io/pipe-cd/piped:{{< blocks/latest_version >}}" \ - --container-arg="piped" \ - --container-arg="--config-gcp-secret=projects/{GCP_PROJECT_ID}/secrets/vm-piped-config/versions/{SECRET_VERSION}" \ - --network="{VPC_NETWORK}" \ - --subnet="{VPC_SUBNET}" \ - --scopes="cloud-platform" \ - --service-account="vm-piped@{GCP_PROJECT_ID}.iam.gserviceaccount.com" - {{< /tab >}} - {{< /tabpane >}} - -After that, you can see on PipeCD web at `Settings` page that Piped is connecting to the Control Plane. -You can also view Piped log as described [here](https://cloud.google.com/compute/docs/containers/deploying-containers#viewing_logs). diff --git a/docs/content/en/docs/installation/install-piped/installing-on-kubernetes.md b/docs/content/en/docs/installation/install-piped/installing-on-kubernetes.md deleted file mode 100644 index d72c124fd5..0000000000 --- a/docs/content/en/docs/installation/install-piped/installing-on-kubernetes.md +++ /dev/null @@ -1,246 +0,0 @@ ---- -title: "Installing on Kubernetes cluster" -linkTitle: "Installing on Kubernetes cluster" -weight: 1 -description: > - This page describes how to install Piped on Kubernetes cluster. 
---- - -## Prerequisites - -##### Having piped's ID and Key strings -- Ensure that the `piped` has been registered and you are having its PIPED_ID and PIPED_KEY strings. -- If you are not having them, this [page](../../../user-guide/managing-controlplane/registering-a-piped/) guides you how to register a new one. - -##### Preparing SSH key -- If your Git repositories are private, `piped` requires a private SSH key to access those repositories. -- Please checkout [this documentation](https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) for how to generate a new SSH key pair. Then add the public key to your repositories. (If you are using GitHub, you can add it to Deploy Keys at the repository's Settings page.) - -## In the cluster-wide mode -This way requires installing cluster-level resources. Piped installed with this way can perform deployment workloads against any other namespaces than the where Piped runs on. - -- Preparing a piped configuration file as the following - - ``` yaml - apiVersion: pipecd.dev/v1beta1 - kind: Piped - spec: - projectID: {PROJECT_ID} - pipedID: {PIPED_ID} - pipedKeyFile: /etc/piped-secret/piped-key - # Write in a format like "host:443" because the communication is done via gRPC. - apiAddress: {CONTROL_PLANE_API_ADDRESS} - git: - sshKeyFile: /etc/piped-secret/ssh-key - repositories: - - repoId: {REPO_ID_OR_NAME} - remote: git@github.com:{GIT_ORG}/{GIT_REPO}.git - branch: {GIT_BRANCH} - syncInterval: 1m - ``` - -See [ConfigurationReference](../../../user-guide/managing-piped/configuration-reference/) for the full configuration. - -- Installing by using [Helm](https://helm.sh/docs/intro/install/) (3.8.0 or later) - - {{< tabpane >}} - {{< tab lang="bash" header="Piped" >}} -# This command just installs a Piped with the specified version. -# Whenever you want to upgrade that Piped to a new version or update its config data -# you have to restart it by re-running this command. - -helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \ - --set-file config.data={PATH_TO_PIPED_CONFIG_FILE} \ - --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \ - --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} - {{< /tab >}} - {{< tab lang="bash" header="Piped with Remote-upgrade" >}} -# Enable remote-upgrade feature of Piped. -# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-upgrade -# This allows upgrading Piped to a new version from the web console. -# But we still need to restart Piped when we want to update its config data. - -helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \ - --set launcher.enabled=true \ - --set-file config.data={PATH_TO_PIPED_CONFIG_FILE} \ - --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \ - --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} - {{< /tab >}} - {{< tab lang="bash" header="Piped with Remote-upgrade and Remote-config" >}} -# Enable both remote-upgrade and remote-config features of Piped. -# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-config -# Beside of the ability to upgrade Piped to a new version from the web console, -# remote-config allows loading the Piped config stored in a remote location such as a Git repository. 
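# (With the flags below, the Piped configuration file itself is versioned in that
# Git repository; only the piped key and the SSH key remain as local secret values.)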
-# Whenever the config data is changed, it loads the new config and restarts Piped to use that new config. - -helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \ - --set launcher.enabled=true \ - --set launcher.configFromGitRepo.enabled=true \ - --set launcher.configFromGitRepo.repoUrl=git@github.com:{GIT_ORG}/{GIT_REPO}.git \ - --set launcher.configFromGitRepo.branch={GIT_BRANCH} \ - --set launcher.configFromGitRepo.configFile={RELATIVE_PATH_TO_PIPED_CONFIG_FILE_IN_GIT_REPO} \ - --set launcher.configFromGitRepo.sshKeyFile=/etc/piped-secret/ssh-key \ - --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \ - --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} - {{< /tab >}} - {{< /tabpane >}} - - Note: Be sure to set `--set args.insecure=true` if your Control Plane has not TLS-enabled yet. - - See [values.yaml](https://github.com/pipe-cd/pipecd/blob/master/manifests/piped/values.yaml) for the full values. - -## In the namespaced mode -The previous way requires installing cluster-level resources. If you want to restrict Piped's permission within the namespace where Piped runs on, this way is for you. -Most parts are identical to the previous way, but some are slightly different. - -- Adding a new cloud provider like below to the previous piped configuration file - - ``` yaml - apiVersion: pipecd.dev/v1beta1 - kind: Piped - spec: - projectID: {PROJECT_ID} - pipedID: {PIPED_ID} - pipedKeyFile: /etc/piped-secret/piped-key - # Write in a format like "host:443" because the communication is done via gRPC. - apiAddress: {CONTROL_PLANE_API_ADDRESS} - git: - sshKeyFile: /etc/piped-secret/ssh-key - repositories: - - repoId: REPO_ID_OR_NAME - remote: git@github.com:{GIT_ORG}/{GIT_REPO}.git - branch: {GIT_BRANCH} - syncInterval: 1m - # This is needed to restrict to limit the access range to within a namespace. - platformProviders: - - name: my-kubernetes - type: KUBERNETES - config: - appStateInformer: - namespace: {NAMESPACE} - ``` - -- Installing by using [Helm](https://helm.sh/docs/intro/install/) (3.8.0 or later) - - {{< tabpane >}} - {{< tab lang="bash" header="Piped" >}} -# This command just installs a Piped with the specified version. -# Whenever you want to upgrade that Piped to a new version or update its config data -# you have to restart it by re-running this command. - -helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \ - --set-file config.data={PATH_TO_PIPED_CONFIG_FILE} \ - --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \ - --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} \ - --set args.enableDefaultKubernetesCloudProvider=false \ - --set rbac.scope=namespace - {{< /tab >}} - {{< tab lang="bash" header="Piped with Remote-upgrade" >}} -# Enable remote-upgrade feature of Piped. -# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-upgrade -# This allows upgrading Piped to a new version from the web console. -# But we still need to restart Piped when we want to update its config data. 
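# (Remote-upgrade relies on the launcher, which is turned on by the
# launcher.enabled=true flag below.)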
- -helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \ - --set launcher.enabled=true \ - --set-file config.data={PATH_TO_PIPED_CONFIG_FILE} \ - --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \ - --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} \ - --set args.enableDefaultKubernetesCloudProvider=false \ - --set rbac.scope=namespace - {{< /tab >}} - {{< tab lang="bash" header="Piped with Remote-upgrade and Remote-config" >}} -# Enable both remote-upgrade and remote-config features of Piped. -# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-config -# Beside of the ability to upgrade Piped to a new version from the web console, -# remote-config allows loading the Piped config stored in a remote location such as a Git repository. -# Whenever the config data is changed, it loads the new config and restarts Piped to use that new config. - -helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \ - --set launcher.enabled=true \ - --set launcher.configFromGitRepo.enabled=true \ - --set launcher.configFromGitRepo.repoUrl=git@github.com:{GIT_ORG}/{GIT_REPO}.git \ - --set launcher.configFromGitRepo.branch={GIT_BRANCH} \ - --set launcher.configFromGitRepo.configFile={RELATIVE_PATH_TO_PIPED_CONFIG_FILE_IN_GIT_REPO} \ - --set launcher.configFromGitRepo.sshKeyFile=/etc/piped-secret/ssh-key \ - --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \ - --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} \ - --set args.enableDefaultKubernetesCloudProvider=false \ - --set rbac.scope=namespace - {{< /tab >}} - {{< /tabpane >}} - -#### In case on OpenShift less than 4.2 - -OpenShift uses an arbitrarily assigned user ID when it starts a container. -Starting from OpenShift 4.2, it also inserts that user into `/etc/passwd` for using by the application inside the container, -but before that version, the assigned user is missing in that file. That blocks workloads of `gcr.io/pipecd/piped` image. -Therefore if you are running on OpenShift with a version before 4.2, please use `gcr.io/pipecd/piped-okd` image with the following command: - -- Installing by using [Helm](https://helm.sh/docs/intro/install/) (3.8.0 or later) - - {{< tabpane >}} - {{< tab lang="bash" header="Piped" >}} -# This command just installs a Piped with the specified version. -# Whenever you want to upgrade that Piped to a new version or update its config data -# you have to restart it by re-running this command. - -helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \ - --set-file config.data={PATH_TO_PIPED_CONFIG_FILE} \ - --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \ - --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} \ - --set args.enableDefaultKubernetesCloudProvider=false \ - --set rbac.scope=namespace - --set args.addLoginUserToPasswd=true \ - --set securityContext.runAsNonRoot=true \ - --set securityContext.runAsUser={UID} \ - --set securityContext.fsGroup={FS_GROUP} \ - --set securityContext.runAsGroup=0 \ - --set image.repository="ghcr.io/pipe-cd/piped-okd" - {{< /tab >}} - {{< tab lang="bash" header="Piped with Remote-upgrade" >}} -# Enable remote-upgrade feature of Piped. 
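# (The args.addLoginUserToPasswd flag and the securityContext values below work
# around the /etc/passwd issue described above by running Piped as a declared,
# non-root user.)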
-# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-upgrade -# This allows upgrading Piped to a new version from the web console. -# But we still need to restart Piped when we want to update its config data. - -helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \ - --set launcher.enabled=true \ - --set-file config.data={PATH_TO_PIPED_CONFIG_FILE} \ - --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \ - --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} \ - --set args.enableDefaultKubernetesCloudProvider=false \ - --set rbac.scope=namespace - --set args.addLoginUserToPasswd=true \ - --set securityContext.runAsNonRoot=true \ - --set securityContext.runAsUser={UID} \ - --set securityContext.fsGroup={FS_GROUP} \ - --set securityContext.runAsGroup=0 \ - --set launcher.image.repository="ghcr.io/pipe-cd/launcher-okd" - {{< /tab >}} - {{< tab lang="bash" header="Piped with Remote-upgrade and Remote-config" >}} -# Enable both remote-upgrade and remote-config features of Piped. -# https://pipecd.dev/docs/user-guide/managing-piped/remote-upgrade-remote-config/#remote-config -# Beside of the ability to upgrade Piped to a new version from the web console, -# remote-config allows loading the Piped config stored in a remote location such as a Git repository. -# Whenever the config data is changed, it loads the new config and restarts Piped to use that new config. - -helm upgrade -i dev-piped oci://ghcr.io/pipe-cd/chart/piped --version={{< blocks/latest_version >}} --namespace={NAMESPACE} \ - --set launcher.enabled=true \ - --set launcher.configFromGitRepo.enabled=true \ - --set launcher.configFromGitRepo.repoUrl=git@github.com:{GIT_ORG}/{GIT_REPO}.git \ - --set launcher.configFromGitRepo.branch={GIT_BRANCH} \ - --set launcher.configFromGitRepo.configFile={RELATIVE_PATH_TO_PIPED_CONFIG_FILE_IN_GIT_REPO} \ - --set launcher.configFromGitRepo.sshKeyFile=/etc/piped-secret/ssh-key \ - --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \ - --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} \ - --set args.enableDefaultKubernetesCloudProvider=false \ - --set rbac.scope=namespace - --set args.addLoginUserToPasswd=true \ - --set securityContext.runAsNonRoot=true \ - --set securityContext.runAsUser={UID} \ - --set securityContext.fsGroup={FS_GROUP} \ - --set securityContext.runAsGroup=0 \ - --set launcher.image.repository="ghcr.io/pipe-cd/launcher-okd" - {{< /tab >}} - {{< /tabpane >}} diff --git a/docs/content/en/docs/installation/install-piped/installing-on-single-machine.md b/docs/content/en/docs/installation/install-piped/installing-on-single-machine.md deleted file mode 100644 index 018d9cf55e..0000000000 --- a/docs/content/en/docs/installation/install-piped/installing-on-single-machine.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -title: "Installing on a single machine" -linkTitle: "Installing on a single machine" -weight: 5 -description: > - This page describes how to install a Piped on a single machine. ---- - -## Prerequisites - -##### Having piped's ID and Key strings -- Ensure that the `piped` has been registered and you are having its PIPED_ID and PIPED_KEY strings. -- If you are not having them, this [page](../../../user-guide/managing-controlplane/registering-a-piped/) guides you how to register a new one. - -##### Preparing SSH key -- If your Git repositories are private, `piped` requires a private SSH key to access those repositories. 
-- Please checkout [this documentation](https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) for how to generate a new SSH key pair. Then add the public key to your repositories. (If you are using GitHub, you can add it to Deploy Keys at the repository's Settings page.) - -## Installation - -- Downloading the latest `piped` binary for your machine - - https://github.com/pipe-cd/pipecd/releases - -- Preparing a piped configuration file as the following: - - ``` yaml - apiVersion: pipecd.dev/v1beta1 - kind: Piped - spec: - projectID: {PROJECT_ID} - pipedID: {PIPED_ID} - pipedKeyFile: {PATH_TO_PIPED_KEY_FILE} - # Write in a format like "host:443" because the communication is done via gRPC. - apiAddress: {CONTROL_PLANE_API_ADDRESS} - git: - sshKeyFile: {PATH_TO_SSH_KEY_FILE} - repositories: - - repoId: {REPO_ID_OR_NAME} - remote: git@github.com:{GIT_ORG}/{GIT_REPO}.git - branch: {GIT_BRANCH} - syncInterval: 1m - ``` - -See [ConfigurationReference](../../../user-guide/managing-piped/configuration-reference/) for the full configuration. - -- Start running the `piped` - - ``` console - ./piped piped --config-file={PATH_TO_PIPED_CONFIG_FILE} - ``` - diff --git a/docs/content/en/docs/overview/_index.md b/docs/content/en/docs/overview/_index.md deleted file mode 100644 index 724cbec785..0000000000 --- a/docs/content/en/docs/overview/_index.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -title: "Overview" -linkTitle: "Overview" -weight: 1 -description: > - Overview about PipeCD. ---- - -![](/images/pipecd-explanation.png) --PipeCD - a Gitops style continuous delivery solution -
- -## What Is PipeCD? - -{{% pageinfo %}} -PipeCD provides a unified continuous delivery solution for multiple application kinds on multi-cloud that empowers engineers to deploy faster with more confidence, a GitOps tool that enables doing deployment operations by pull request on Git. -{{% /pageinfo %}} - -## Why PipeCD? - -**Visibility** -- Deployment pipeline UI shows clarify what is happening -- Separate logs viewer for each individual deployment -- Realtime visualization of application state -- Deployment notifications to slack, webhook endpoints -- Insights show metrics like lead time, deployment frequency, MTTR and change failure rate to measure delivery performance - -**Automation** -- Automated deployment analysis to measure deployment impact based on metrics, logs, emitted requests -- Automatically roll back to the previous state as soon as analysis or a pipeline stage fails -- Automatically detect configuration drift to notify and render the changes -- Automatically trigger a new deployment when a defined event has occurred (e.g. container image pushed, helm chart published, etc) - -**Safety and Security** -- Support single sign-on and role-based access control -- Credentials are not exposed outside the cluster and not saved in the Control Plane -- Piped makes only outbound requests and can run inside a restricted network -- Built-in secrets management - -**Multi-provider & Multi-Tenancy** -- Support multiple application kinds on multi-cloud including Kubernetes, Terraform, Cloud Run, AWS Lambda, Amazon ECS -- Support multiple analysis providers including Prometheus, Datadog, Stackdriver, and more -- Easy to operate multi-cluster, multi-tenancy by separating Control Plane and Piped - -**Open Source** - -- Released as an Open Source project -- Under APACHE 2.0 license, see [LICENSE](https://github.com/pipe-cd/pipecd/blob/master/LICENSE) - -## Where should I go next? - -For a good understanding of the PipeCD's components, see the [Concepts](../concepts) page. - -If you are an **operator** wanting to install and configure PipeCD for other developers. -- [Quickstart](../quickstart/) -- [Managing Control Plane](../user-guide/managing-controlplane/) -- [Managing Piped](../user-guide/managing-piped/) - -If you are a **user** using PipeCD to deploy your application/infrastructure: -- [User Guide](../user-guide/) -- [Examples](../user-guide/examples) - -If you want to be a **contributor**: -- [Contributor Guide](../contribution-guidelines/) diff --git a/docs/content/en/docs/quickstart/_index.md b/docs/content/en/docs/quickstart/_index.md deleted file mode 100644 index 4ee7296907..0000000000 --- a/docs/content/en/docs/quickstart/_index.md +++ /dev/null @@ -1,129 +0,0 @@ ---- -title: "Quickstart" -linkTitle: "Quickstart" -weight: 3 -description: > - This page describes how to quickly get started with PipeCD on Kubernetes. ---- - -This page is a guideline for installing PipeCD into your Kubernetes cluster and deploying a "hello world" application to that same Kubernetes cluster. - -Note: It's not required to install the PipeCD control plane to the cluster where your applications are running. Please read this [blog post](/blog/2021/12/29/pipecd-best-practice-01-operate-your-own-pipecd-cluster/) to understand more about PipeCD in real life use cases. - -### Prerequisites -- Having a Kubernetes cluster and connect to it via [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/). -- Forked the [Examples](https://github.com/pipe-cd/examples) repository - -### 1. 
Installing PipeCD in quickstart mode - -#### 1.1. Installing PipeCD client - -##### Method 1: Official Installation -The official PipeCD client named `pipectl` can be installed using the following command - -``` console -# OS="darwin" or "linux" -curl -Lo ./pipectl https://github.com/pipe-cd/pipecd/releases/download/{{< blocks/latest_version >}}/pipectl_{{< blocks/latest_version >}}_{OS}_amd64 -``` - -Then make the pipectl binary executable - -``` console -chmod +x ./pipectl -``` - -You can also move the pipectl binary to the $PATH for later use - -```console -sudo mv ./pipectl /usr/local/bin/pipectl -``` - -##### Method 2: [Asdf](https://asdf-vm.com/) Supported Installation - -```console -asdf plugin add pipectl && asdf install pipectl latest && asdf global pipectl latest -``` - -#### 1.2. Installing PipeCD's components - -We can simply use __pipectl quickstart__ command to start the PipeCD installation process and follow the instruction - -```console -pipectl quickstart --version {{< blocks/latest_version >}} -``` - -Follow the instruction, the PipeCD control plane will be installed with a default project named `quickstart`. You can access to the PipeCD console at [http://localhost:8080](http://localhost:8080?project=quickstart) and pipectl command will open the PipeCD console automatically on your browser. - -To login, you can use the configured static admin account as below: -- username: `hello-pipecd` -- password: `hello-pipecd` - -![](/images/quickstart-login-form.png) - -After logged in successfully, the browser will redirect you to the PipeCD console settings page at `piped` settings tab. You will find the `+ADD` button on the top of this page, click there and insert information to register the deployment runner for PipeCD (called `piped`). - -![](/images/quickstart-adding-piped.png) - -Click on the `Save` button, and then you can see the piped-id and secret-key. - -![](/images/quickstart-piped-registered.png) - -Use the above value to fill the form showing on the terminal you run __pipectl quickstart__ command - -```console -... -Fill up your registered Piped information: -✔ ID: 2bf655c6-d7a8-4b97-8480-43fb0155539e -✔ Key: 02s3b0b6bo07kvzr8662tke4i292uo5n8w1x9pn8q9rww5lk0b -GitRemoteRepo: https://github.com/{FORKED_GITHUB_ORG}/examples.git - -``` - -That's all! - -Note: The __pipectl quickstart__ command will keep running to expose your PipeCD console on `localhost:8080`. If you stop the process, the installed PipeCD components will not lost, you can access to the PipeCD console anytime using __kubectl port-forward__ command - -```console -kubectl -n pipecd port-forward svc/pipecd 8080 -``` - -### 2. Deploy a kubernetes application with PipeCD - -Above are all we need to set up your own PipeCD (both control plane and agent), let's use the installed one to deploy your first Kubernetes application with PipeCD. - -#### 2.1. Registering a Kubernetes application -Navigate to the `Applications` page, click on the `+ADD` button on the top left corner. - -Go to the `ADD FROM SUGGESTIONS` tab, then select: -- Piped: `dev` (you just registered) -- PlatformProvider: `kubernetes-default` - -You should see a lot of suggested applications. Select the `canary` application and click the `SAVE` button to register. - -![](/images/quickstart-adding-application-from-suggestions.png) - -After a bit, the first deployment would be complete automatically to sync the application to the state specified in the current Git commit. - -![](/images/quickstart-first-deployment.png) - -#### 2.2. Let's deploy! 
-Let's get started with deployment! All you have to do is to make a PR to update the image tag, scale the replicas, or change the manifests. - -For instance, open the `kubernetes/canary/deployment.yaml` under the forked examples' repository, then change the tag from `v0.1.0` to `v0.2.0`. - -![](/images/quickstart-update-image-tag.png) - -After a short wait, a new deployment will be started to update to `v0.2.0`. - -![](/images/quickstart-deploying.png) - -### 3. Cleanup -When you’re finished experimenting with PipeCD quickstart mode, you can uninstall it using: - -``` console -pipectl quickstart --uninstall -``` - -### What's next? - -To prepare your PipeCD for a production environment, please visit the [Installation](../installation/) guideline. For guidelines to use PipeCD to deploy your application in daily usage, please visit the [User guide](../user-guide/) docs. diff --git a/docs/content/en/docs/user-guide/_index.md b/docs/content/en/docs/user-guide/_index.md deleted file mode 100755 index 5482b97115..0000000000 --- a/docs/content/en/docs/user-guide/_index.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: "User Guide" -linkTitle: "User Guide" -weight: 5 -description: > - Guideline to use PipeCD, from installation to common features for daily usage. ---- - - diff --git a/docs/content/en/docs/user-guide/command-line-tool.md b/docs/content/en/docs/user-guide/command-line-tool.md deleted file mode 100644 index 04d222392f..0000000000 --- a/docs/content/en/docs/user-guide/command-line-tool.md +++ /dev/null @@ -1,316 +0,0 @@ ---- -title: "Command-line tool: pipectl" -linkTitle: "Command-line tool: pipectl" -weight: 8 -description: > - This page describes how to install and use pipectl to manage PipeCD's resources. ---- - -Besides using web UI, PipeCD also provides a command-line tool, pipectl, which allows you to run commands against your project's resources. -You can use pipectl to add and sync applications, wait for a deployment status. - -## Installation - -### Binary - -1. Download the appropriate version for your platform from [PipeCD Releases](https://github.com/pipe-cd/pipecd/releases). - - We recommend using the latest version of pipectl to avoid unforeseen issues. - Run the following script: - - ``` console - # OS="darwin" or "linux" - curl -Lo ./pipectl https://github.com/pipe-cd/pipecd/releases/download/{{< blocks/latest_version >}}/pipectl_{{< blocks/latest_version >}}_{OS}_amd64 - ``` - -2. Make the pipectl binary executable. - - ``` console - chmod +x ./pipectl - ``` - -3. Move the binary to your PATH. - - ``` console - sudo mv ./pipectl /usr/local/bin/pipectl - ``` - -4. Test to ensure the version you installed is up-to-date. - - ``` console - pipectl version - ``` - -### [Asdf](https://asdf-vm.com/) - -1. Add pipectl plugin to asdf. (If you have not yet `asdf add plugin add pipectl`.) - ```console - asdf add plugin pipectl - ``` - -2. Install pipectl. Available versions are [here](https://github.com/pipe-cd/pipecd/releases). - ```console - asdf install pipectl {VERSION} - ``` - -3. Set a version. - ```console - asdf global pipectl {VERSION} - ``` - -4. Test to ensure the version you installed is up-to-date. - - ``` console - pipectl version - ``` - -### Docker -We are storing every version of docker image for pipectl on Google Cloud Container Registry. -Available versions are [here](https://github.com/pipe-cd/pipecd/releases). 

```
docker run --rm gcr.io/pipecd/pipectl:{VERSION} -h
```

## Authentication

In order for pipectl to authenticate with PipeCD's Control Plane, it needs an API key, which can be created from the `Settings/API Key` tab on the web UI.
There are two kinds of key roles: `READ_ONLY` and `READ_WRITE`. Depending on the command, an appropriate role may be required to execute it.

![](/images/settings-api-key.png)
<p style="text-align: center;">
Adding a new API key from Settings tab
</p>
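For a quick illustration, suppose you have created a key and saved it to a local file. Listing the Kubernetes applications of your project could then look like the following sketch, where every `{...}` value is a placeholder to replace:

``` console
pipectl application list \
  --address={CONTROL_PLANE_API_ADDRESS} \
  --api-key-file={PATH_TO_API_KEY_FILE} \
  --app-kind=KUBERNETES
```

A `READ_ONLY` key should be enough for read-only commands like this one.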
- -When executing a command of pipectl you have to specify either a string of API key via `--api-key` flag or a path to the API key file via `--api-key-file` flag. - -## Usage - -### Help - -Run `help` to know the available commands: - -``` console -$ pipectl --help - -The command line tool for PipeCD. - -Usage: - pipectl [command] - -Available Commands: - application Manage application resources. - deployment Manage deployment resources. - encrypt Encrypt the plaintext entered in either stdin or the --input-file flag. - event Manage event resources. - help Help about any command - piped Manage piped resources. - plan-preview Show plan preview against the specified commit. - quickstart Quick prepare PipeCD control plane in quickstart mode. - version Print the information of current binary. - -Flags: - -h, --help help for pipectl - --log-encoding string The encoding type for logger [json|console|humanize]. (default "humanize") - --log-level string The minimum enabled logging level. (default "info") - --metrics Whether metrics is enabled or not. (default true) - --profile If true enables uploading the profiles to Stackdriver. - --profile-debug-logging If true enables logging debug information of profiler. - --profiler-credentials-file string The path to the credentials file using while sending profiles to Stackdriver. - -Use "pipectl [command] --help" for more information about a command. -``` - -### Adding a new application - -Add a new application into the project: - -``` console -pipectl application add \ - --address=CONTROL_PLANE_API_ADDRESS \ - --api-key=API_KEY \ - --app-name=simple \ - --app-kind=KUBERNETES \ - --piped-id=PIPED_ID \ - --platform-provider=kubernetes-default \ - --repo-id=examples \ - --app-dir=kubernetes/simple -``` - -Run `help` to know what command flags should be specified: - -``` console -$ pipectl application add --help - -Add a new application. - -Usage: - pipectl application add [flags] - -Flags: - --app-dir string The relative path from the root of repository to the application directory. - --app-kind string The kind of application. (KUBERNETES|TERRAFORM|LAMBDA|CLOUDRUN) - --app-name string The application name. - --platform-provider string The platform provider name. One of the registered providers in the piped configuration. The previous name of this field is cloud-provider. - --config-file-name string The configuration file name. (default "app.pipecd.yaml") - --description string The description of the application. - -h, --help help for add - --piped-id string The ID of piped that should handle this application. - --repo-id string The repository ID. One the registered repositories in the piped configuration. - -Global Flags: - --address string The address to Control Plane api. - --api-key string The API key used while authenticating with Control Plane. - --api-key-file string Path to the file containing API key used while authenticating with Control Plane. - --cert-file string The path to the TLS certificate file. - --insecure Whether disabling transport security while connecting to Control Plane. - --log-encoding string The encoding type for logger [json|console|humanize]. (default "humanize") - --log-level string The minimum enabled logging level. (default "info") - --metrics Whether metrics is enabled or not. (default true) - --profile If true enables uploading the profiles to Stackdriver. - --profile-debug-logging If true enables logging debug information of profiler. 
- --profiler-credentials-file string The path to the credentials file using while sending profiles to Stackdriver. -``` - -### Syncing an application - -- Send a request to sync an application and exit immediately when the deployment is triggered: - - ``` console - pipectl application sync \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --app-id={APPLICATION_ID} - ``` - -- Send a request to sync an application and wait until the triggered deployment reaches one of the specified statuses: - - ``` console - pipectl application sync \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --app-id={APPLICATION_ID} \ - --wait-status=DEPLOYMENT_SUCCESS,DEPLOYMENT_FAILURE - ``` - -### Getting an application - -Display the information of a given application in JSON format: - -``` console -pipectl application get \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --app-id={APPLICATION_ID} -``` - -### Listing applications - -Find and display the information of matching applications in JSON format: - -``` console -pipectl application list \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --app-name={APPLICATION_NAME} \ - --app-kind=KUBERNETES \ -``` - -### Disable an application - -Disable an application with given id: - -``` console -pipectl application disable \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --app-id={APPLICATION_ID} -``` - -### Deleting an application - -Delete an application with given id: - -``` console -pipectl application delete \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --app-id={APPLICATION_ID} -``` - -### List deployments - -Show the list of deployments based on filters. - -```console -pipectl deployment list \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --app-id={APPLICATION_ID} -``` - -### Waiting a deployment status - -Wait until a given deployment reaches one of the specified statuses: - -``` console -pipectl deployment wait-status \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --deployment-id={DEPLOYMENT_ID} \ - --status=DEPLOYMENT_SUCCESS -``` - -### Get deployment stages log - -Get deployment stages log. - -```console -pipectl deployment logs \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --deployment-id={DEPLOYMENT_ID} -``` - -### Registering an event for EventWatcher - -Register an event that can be used by EventWatcher: - -``` console -pipectl event register \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --name=example-image-pushed \ - --data=gcr.io/pipecd/example:v0.1.0 -``` - -### Encrypting the data you want to use when deploying - -Encrypt the plaintext entered either in stdin or via the `--input-file` flag. - -You can encrypt it the same way you do [from the web](../managing-application/secret-management/#encrypting-secret-data). - -- From stdin: - - ``` console - pipectl encrypt \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --piped-id={PIPED_ID} <{PATH_TO_SECRET_FILE} - ``` - -- From the `--input-file` flag: - - ``` console - pipectl encrypt \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --piped-id={PIPED_ID} \ - --input-file={PATH_TO_SECRET_FILE} - ``` - -Note: The docs for pipectl available command is maybe outdated, we suggest users use the `help` command for the updated usage while using pipectl. - -### You want more? - -We always want to add more needed commands into pipectl. 
Please let us know what command you want to add by creating issues in the [pipe-cd/pipecd](https://github.com/pipe-cd/pipecd/issues) repository. We also welcome your pull request to add the command. diff --git a/docs/content/en/docs/user-guide/configuration-reference.md b/docs/content/en/docs/user-guide/configuration-reference.md deleted file mode 100644 index cf298d5d9d..0000000000 --- a/docs/content/en/docs/user-guide/configuration-reference.md +++ /dev/null @@ -1,712 +0,0 @@ ---- -title: "Configuration reference" -linkTitle: "Configuration reference" -weight: 9 -description: > - This page describes all configurable fields in the application configuration and analysis template. ---- - -## Kubernetes Application - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - input: - pipeline: - ... -``` - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The application name. | Yes (if you want to create PipeCD application through the application configuration file) | -| labels | map[string]string | Additional attributes to identify applications. | No | -| description | string | Notes on the Application. | No | -| input | [KubernetesDeploymentInput](#kubernetesdeploymentinput) | Input for Kubernetes deployment such as kubectl version, helm version, manifests filter... | No | -| trigger | [DeploymentTrigger](#deploymenttrigger) | Configuration for trigger used to determine should we trigger a new deployment or not. | No | -| planner | [DeploymentPlanner](#deploymentplanner) | Configuration for planner used while planning deployment. | No | -| commitMatcher | [CommitMatcher](#commitmatcher) | Forcibly use QuickSync or Pipeline when commit message matched the specified pattern. | No | -| quickSync | [KubernetesQuickSync](#kubernetesquicksync) | Configuration for quick sync. | No | -| pipeline | [Pipeline](#pipeline) | Pipeline for deploying progressively. | No | -| service | [KubernetesService](#kubernetesservice) | Which Kubernetes resource should be considered as the Service of application. Empty means the first Service resource will be used. | No | -| workloads | [][KubernetesWorkload](#kubernetesworkload) | Which Kubernetes resources should be considered as the Workloads of application. Empty means all Deployment resources. | No | -| trafficRouting | [KubernetesTrafficRouting](#kubernetestrafficrouting) | How to change traffic routing percentages. | No | -| encryption | [SecretEncryption](#secretencryption) | List of encrypted secrets and targets that should be decrypted before using. | No | -| attachment | [Attachment](#attachment) | List of attachment sources and targets that should be attached to manifests before using. | No | -| timeout | duration | The maximum length of time to execute deployment before giving up. Default is 6h. | No | -| notification | [DeploymentNotification](#deploymentnotification) | Additional configuration used while sending notification to external services. | No | -| postSync | [PostSync](#postsync) | Additional configuration used as extra actions once the deployment is triggered. | No | -| variantLabel | [KubernetesVariantLabel](#kubernetesvariantlabel) | The label will be configured to variant manifests used to distinguish them. | No | -| eventWatcher | [][EventWatcher](#eventwatcher) | List of configurations for event watcher. | No | -| driftDetection | [DriftDetection](#driftdetection) | Configuration for drift detection. 
| No | - -## Terraform application - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: TerraformApp -spec: - input: - pipeline: - ... -``` - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The application name. | Yes if you set the application through the application configuration file | -| labels | map[string]string | Additional attributes to identify applications. | No | -| description | string | Notes on the Application. | No | -| input | [TerraformDeploymentInput](#terraformdeploymentinput) | Input for Terraform deployment such as terraform version, workspace... | No | -| trigger | [DeploymentTrigger](#deploymenttrigger) | Configuration for trigger used to determine should we trigger a new deployment or not. | No | -| planner | [DeploymentPlanner](#deploymentplanner) | Configuration for planner used while planning deployment. | No | -| quickSync | [TerraformQuickSync](#terraformquicksync) | Configuration for quick sync. | No | -| pipeline | [Pipeline](#pipeline) | Pipeline for deploying progressively. | No | -| encryption | [SecretEncryption](#secretencryption) | List of encrypted secrets and targets that should be decrypted before using. | No | -| attachment | [Attachment](#attachment) | List of attachment sources and targets that should be attached to manifests before using. | No | -| timeout | duration | The maximum length of time to execute deployment before giving up. Default is 6h. | No | -| notification | [DeploymentNotification](#deploymentnotification) | Additional configuration used while sending notification to external services. | No | -| postSync | [PostSync](#postsync) | Additional configuration used as extra actions once the deployment is triggered. | No | -| eventWatcher | [][EventWatcher](#eventwatcher) | List of configurations for event watcher. | No | - -## Cloud Run application - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: CloudRunApp -spec: - input: - pipeline: - ... -``` - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The application name. | Yes if you set the application through the application configuration file | -| labels | map[string]string | Additional attributes to identify applications. | No | -| description | string | Notes on the Application. | No | -| input | [CloudRunDeploymentInput](#cloudrundeploymentinput) | Input for Cloud Run deployment such as docker image... | No | -| trigger | [DeploymentTrigger](#deploymenttrigger) | Configuration for trigger used to determine should we trigger a new deployment or not. | No | -| planner | [DeploymentPlanner](#deploymentplanner) | Configuration for planner used while planning deployment. | No | -| quickSync | [CloudRunQuickSync](#cloudrunquicksync) | Configuration for quick sync. | No | -| pipeline | [Pipeline](#pipeline) | Pipeline for deploying progressively. | No | -| encryption | [SecretEncryption](#secretencryption) | List of encrypted secrets and targets that should be decrypted before using. | No | -| attachment | [Attachment](#attachment) | List of attachment sources and targets that should be attached to manifests before using. | No | -| timeout | duration | The maximum length of time to execute deployment before giving up. Default is 6h. | No | -| notification | [DeploymentNotification](#deploymentnotification) | Additional configuration used while sending notification to external services. | No | -| postSync | [PostSync](#postsync) | Additional configuration used as extra actions once the deployment is triggered. 
| No | -| eventWatcher | [][EventWatcher](#eventwatcher) | List of configurations for event watcher. | No | - -## Lambda application - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: LambdaApp -spec: - pipeline: - ... -``` - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The application name. | Yes if you set the application through the application configuration file | -| labels | map[string]string | Additional attributes to identify applications. | No | -| description | string | Notes on the Application. | No | -| input | [LambdaDeploymentInput](#lambdadeploymentinput) | Input for Lambda deployment such as path to function manifest file... | No | -| trigger | [DeploymentTrigger](#deploymenttrigger) | Configuration for trigger used to determine should we trigger a new deployment or not. | No | -| planner | [DeploymentPlanner](#deploymentplanner) | Configuration for planner used while planning deployment. | No | -| quickSync | [LambdaQuickSync](#lambdaquicksync) | Configuration for quick sync. | No | -| pipeline | [Pipeline](#pipeline) | Pipeline for deploying progressively. | No | -| encryption | [SecretEncryption](#secretencryption) | List of encrypted secrets and targets that should be decrypted before using. | No | -| attachment | [Attachment](#attachment) | List of attachment sources and targets that should be attached to manifests before using. | No | -| timeout | duration | The maximum length of time to execute deployment before giving up. Default is 6h. | No | -| notification | [DeploymentNotification](#deploymentnotification) | Additional configuration used while sending notification to external services. | No | -| postSync | [PostSync](#postsync) | Additional configuration used as extra actions once the deployment is triggered. | No | -| eventWatcher | [][EventWatcher](#eventwatcher) | List of configurations for event watcher. | No | - -## ECS application - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: ECSApp -spec: - input: - pipeline: - ... -``` - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The application name. | Yes if you set the application through the application configuration file | -| labels | map[string]string | Additional attributes to identify applications. | No | -| description | string | Notes on the Application. | No | -| input | [ECSDeploymentInput](#ecsdeploymentinput) | Input for ECS deployment such as path to TaskDefinition, Service... | No | -| trigger | [DeploymentTrigger](#deploymenttrigger) | Configuration for trigger used to determine should we trigger a new deployment or not. | No | -| planner | [DeploymentPlanner](#deploymentplanner) | Configuration for planner used while planning deployment. | No | -| quickSync | [ECSQuickSync](#ecsquicksync) | Configuration for quick sync. | No | -| pipeline | [Pipeline](#pipeline) | Pipeline for deploying progressively. | No | -| encryption | [SecretEncryption](#secretencryption) | List of encrypted secrets and targets that should be decrypted before using. | No | -| attachment | [Attachment](#attachment) | List of attachment sources and targets that should be attached to manifests before using. | No | -| timeout | duration | The maximum length of time to execute deployment before giving up. Default is 6h. | No | -| notification | [DeploymentNotification](#deploymentnotification) | Additional configuration used while sending notification to external services. 
| No | -| postSync | [PostSync](#postsync) | Additional configuration used as extra actions once the deployment is triggered. | No | -| eventWatcher | [][EventWatcher](#eventwatcher) | List of configurations for event watcher. | No | - -## Analysis Template Configuration - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: AnalysisTemplate -spec: - metrics: - grpc_error_rate_percentage: - interval: 1m - provider: prometheus-dev - failureLimit: 1 - expected: - max: 10 - query: awesome_query -``` - -| Field | Type | Description | Required | -|-|-|-|-| -| metrics | map[string][AnalysisMetrics](#analysismetrics) | Template for metrics. | No | - -## Event Watcher Configuration (deprecated) - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: EventWatcher -spec: - events: - - name: helloworld-image-update - replacements: - - file: helloworld/deployment.yaml - yamlField: $.spec.template.spec.containers[0].image -``` - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The event name. | Yes | -| labels | map[string]string | Additional attributes of event. This can make an event definition unique even if the one with the same name exists. | No | -| replacements | [][EventWatcherReplacement](#eventwatcherreplacement) | List of places where will be replaced when the new event matches. | Yes | - -### EventWatcherReplacement -One of `yamlField` or `regex` is required. - -| Field | Type | Description | Required | -|-|-|-|-| -| file | string | The relative path from the repository root to the file to be updated. | Yes | -| yamlField | string | The yaml path to the field to be updated. It requires to start with `$` which represents the root element. e.g. `$.foo.bar[0].baz`. | No | -| regex | string | The regex string that specify what should be replaced. The only first capturing group enclosed by `()` will be replaced with the new value. e.g. `host.xz/foo/bar:(v[0-9].[0-9].[0-9])` | No | - -## CommitMatcher - -| Field | Type | Description | Required | -|-|-|-|-| -| quickSync | string | Regular expression string to forcibly do QuickSync when it matches the commit message. | No | -| pipeline | string | Regular expression string to forcibly do Pipeline when it matches the commit message. | No | - -## SecretEncryption - -| Field | Type | Description | Required | -|-|-|-|-| -| encryptedSecrets | map[string]string | List of encrypted secrets. | No | -| decryptionTargets | []string | List of files to be decrypted before using. | No | - -## Attachment - -| Field | Type | Description | Required | -|-|-|-|-| -| sources | map[string]string | List of attaching files with key is its refer name. | No | -| targets | []string | List of files which should contain the attachments. | No | - -## DeploymentPlanner - -| Field | Type | Description | Required | -|-|-|-|-| -| alwaysUsePipeline | bool | Always use the defined pipeline to deploy the application in all deployments. Default is `false`. | No | - -## DeploymentTrigger - -| Field | Type | Description | Required | -|-|-|-|-| -| onCommit | [OnCommit](#oncommit) | Controls triggering new deployment when new Git commits touched the application. | No | -| onCommand | [OnCommand](#oncommand) | Controls triggering new deployment when received a new `SYNC` command. | No | -| onOutOfSync | [OnOutOfSync](#onoutofsync) | Controls triggering new deployment when application is at `OUT_OF_SYNC` state. | No | -| onChain | [OnChain](#onchain) | Controls triggering new deployment when the application is counted as a node of some chains. 
| No | - -### OnCommit - -| Field | Type | Description | Required | -|-|-|-|-| -| disabled | bool | Whether to exclude application from triggering target when new Git commits touched it. Default is `false`. | No | -| paths | []string | List of directories or files where any changes of them will be considered as touching the application. Regular expression can be used. Empty means watching all changes under the application directory. | No | -| ignores | []string | List of directories or files where any changes of them will NOT be considered as touching the application. Regular expression can be used. This config has a higher priority compare to `paths`. | No | - -### OnCommand - -| Field | Type | Description | Required | -|-|-|-|-| -| disabled | bool | Whether to exclude application from triggering target when received a new `SYNC` command. Default is `false`. | No | - -### OnOutOfSync - -| Field | Type | Description | Required | -|-|-|-|-| -| disabled | bool | Whether to exclude application from triggering target when application is at `OUT_OF_SYNC` state. Default is `true`. | No | -| minWindow | duration | Minimum amount of time must be elapsed since the last deployment. This can be used to avoid triggering unnecessary continuous deployments based on `OUT_OF_SYNC` status. Default is `5m`. | No | - -### OnChain - -| Field | Type | Description | Required | -|-|-|-|-| -| disabled | bool | Whether to exclude application from triggering target when application is counted as a node of some chains. Default is `true`. | No | - -## Pipeline - -| Field | Type | Description | Required | -|-|-|-|-| -| stages | [][PipelineStage](#pipelinestage) | List of deployment pipeline stages. | No | - -### PipelineStage - -| Field | Type | Description | Required | -|-|-|-|-| -| id | string | The unique ID of the stage. | No | -| name | string | One of the provided stage names. | Yes | -| desc | string | The description about the stage. | No | -| timeout | duration | The maximum time the stage can be taken to run. | No | -| with | [StageOptions](#stageoptions) | Specific configuration for the stage. This must be one of these [StageOptions](#stageoptions). | No | - -## DeploymentNotification - -| Field | Type | Description | Required | -|-|-|-|-| -| mentions | [][NotificationMention](#notificationmention) | List of users to be notified for each event. | No | - -### NotificationMention - -| Field | Type | Description | Required | -|-|-|-|-| -| event | string | The event to be notified to users. | Yes | -| slack | []string | List of user IDs for mentioning in Slack. See [here](https://api.slack.com/reference/surfaces/formatting#mentioning-users) for more information on how to check them. | No | - -## KubernetesDeploymentInput - -| Field | Type | Description | Required | -|-|-|-|-| -| manifests | []string | List of manifest files in the application directory used to deploy. Empty means all manifest files in the directory will be used. | No | -| kubectlVersion | string | Version of kubectl will be used. Empty means the version set on [piped config](../managing-piped/configuration-reference/#platformproviderkubernetesconfig) or [default version](https://github.com/pipe-cd/pipecd/blob/master/tool/piped-base/install-kubectl.sh#L24) will be used. | No | -| kustomizeVersion | string | Version of kustomize will be used. Empty means the [default version](https://github.com/pipe-cd/pipecd/blob/master/tool/piped-base/install-kustomize.sh#L24) will be used. 
| No | -| kustomizeOptions | map[string]string | List of options that should be used by Kustomize commands. | No | -| helmVersion | string | Version of helm will be used. Empty means the [default version](https://github.com/pipe-cd/pipecd/blob/master/tool/piped-base/install-helm.sh#L24) will be used. | No | -| helmChart | [HelmChart](#helmchart) | Where to fetch helm chart. | No | -| helmOptions | [HelmOptions](#helmoptions) | Configurable parameters for helm commands. | No | -| namespace | string | The namespace where manifests will be applied. | No | -| autoRollback | bool | Automatically reverts all deployment changes on failure. Default is `true`. | No | - -### HelmChart - -| Field | Type | Description | Required | -|-|-|-|-| -| gitRemote | string | Git remote address where the chart is placing. Empty means the same repository. | No | -| ref | string | The commit SHA or tag value. Only valid when gitRemote is not empty. | No | -| path | string | Relative path from the repository root to the chart directory. | No | -| repository | string | The name of a registered Helm Chart Repository. | No | -| name | string | The chart name. | No | -| version | string | The chart version. | No | - -### HelmOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| releaseName | string | The release name of helm deployment. By default, the release name is equal to the application name. | No | -| valueFiles | []string | List of value files should be loaded. Only local files stored under the application directory or remote files served at the http(s) endpoint are allowed. | No | -| setFiles | map[string]string | List of file path for values. | No | -| apiVersions | []string | Kubernetes api versions used for Capabilities.APIVersions. | No | -| kubeVersion | string | Kubernetes version used for Capabilities.KubeVersion. | No | - -## KubernetesVariantLabel - -| Field | Type | Description | Required | -|-|-|-|-| -| key | string | The key of the label. Default is `pipecd.dev/variant`. | No | -| primaryValue | string | The label value for PRIMARY variant. Default is `primary`. | No | -| canaryValue | string | The label value for CANARY variant. Default is `canary`. | No | -| baselineValue | string | The label value for BASELINE variant. Default is `baseline`. | No | - -## KubernetesQuickSync - -| Field | Type | Description | Required | -|-|-|-|-| -| addVariantLabelToSelector | bool | Whether the PRIMARY variant label should be added to manifests if they were missing. Default is `false`. | No | -| prune | bool | Whether the resources that are no longer defined in Git should be removed or not. Default is `false` | No | - -## KubernetesService - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The name of Service manifest. | No | - -## KubernetesWorkload - -| Field | Type | Description | Required | -|-|-|-|-| -| kind | string | The kind name of workload manifests. Currently, only `Deployment` is supported. In the future, we also want to support `ReplicationController`, `DaemonSet`, `StatefulSet`. | No | -| name | string | The name of workload manifest. | No | - -## KubernetesTrafficRouting - -| Field | Type | Description | Required | -|-|-|-|-| -| method | string | Which traffic routing method will be used. Available values are `istio`, `smi`, `podselector`. Default is `podselector`. | No | -| istio | [IstioTrafficRouting](#istiotrafficrouting)| Istio configuration when the method is `istio`. 
| No | - -### IstioTrafficRouting - -| Field | Type | Description | Required | -|-|-|-|-| -| editableRoutes | []string | List of routes in the VirtualService that can be changed to update traffic routing. Empty means all routes should be updated. | No | -| host | string | The service host. | No | -| virtualService | [IstioVirtualService](#istiovirtualservice) | The reference to VirtualService manifest. Empty means the first VirtualService resource will be used. | No | - -#### IstioVirtualService - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The name of VirtualService manifest. | No | - -## TerraformDeploymentInput - -| Field | Type | Description | Required | -|-|-|-|-| -| workspace | string | The terraform workspace name. Empty means `default` workspace. | No | -| terraformVersion | string | The version of terraform should be used. Empty means the pre-installed version will be used. | No | -| vars | []string | List of variables that will be set directly on terraform commands with `-var` flag. The variable must be formatted by `key=value`. | No | -| varFiles | []string | List of variable files that will be set on terraform commands with `-var-file` flag. | No | -| commandFlags | [TerraformCommandFlags](#terraformcommandflags) | List of additional flags will be used while executing terraform commands. | No | -| commandEnvs | [TerraformCommandEnvs](#terraformcommandenvs) | List of additional environment variables will be used while executing terraform commands. | No | -| autoRollback | bool | Automatically reverts all changes from all stages when one of them failed. | No | - -### TerraformCommandFlags - -| Field | Type | Description | Required | -|-|-|-|-| -| shared | []string | List of additional flags used for all Terraform commands. | No | -| init | []string | List of additional flags used for Terraform `init` command. | No | -| plan | []string | List of additional flags used for Terraform `plan` command. | No | -| apply | []string | List of additional flags used for Terraform `apply` command. | No | - -### TerraformCommandEnvs - -| Field | Type | Description | Required | -|-|-|-|-| -| shared | []string | List of additional environment variables used for all Terraform commands. | No | -| init | []string | List of additional environment variables used for Terraform `init` command. | No | -| plan | []string | List of additional environment variables used for Terraform `plan` command. | No | -| apply | []string | List of additional environment variables used for Terraform `apply` command. | No | - -## TerraformQuickSync - -| Field | Type | Description | Required | -|-|-|-|-| -| retries | int | How many times to retry applying terraform changes. Default is `0`. | No | - -## CloudRunDeploymentInput - -| Field | Type | Description | Required | -|-|-|-|-| -| serviceManifestFile | string | The name of service manifest file placing in application directory. Default is `service.yaml`. | No | -| autoRollback | bool | Automatically reverts to the previous state when the deployment is failed. Default is `true`. | No | - -## CloudRunQuickSync - -| Field | Type | Description | Required | -|-|-|-|-| - -## LambdaDeploymentInput - -| Field | Type | Description | Required | -|-|-|-|-| -| functionManifestFile | string | The name of function manifest file placing in application directory. Default is `function.yaml`. | No | -| autoRollback | bool | Automatically reverts to the previous state when the deployment is failed. Default is `true`. 
| No | - -## LambdaQuickSync - -| Field | Type | Description | Required | -|-|-|-|-| - -## ECSDeploymentInput - -| Field | Type | Description | Required | -|-|-|-|-| -| serviceDefinitionFile | string | The path ECS Service configuration file. Allow file in both `yaml` and `json` format. The default value is `service.json`. See [here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_definition_parameters.html) for parameters.| No | -| taskDefinitionFile | string | The path to ECS TaskDefinition configuration file. Allow file in both `yaml` and `json` format. The default value is `taskdef.json`. See [here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html) for parameters. | No | -| targetGroups | [ECSTargetGroupInput](#ecstargetgroupinput) | The target groups configuration, will be used to routing traffic to created task sets. | Yes (if you want to perform progressive delivery) | - -### ECSTargetGroupInput - -| Field | Type | Description | Required | -|-|-|-|-| -| primary | ECSTargetGroupObject | The PRIMARY target group, will be used to register the PRIMARY ECS task set. | Yes | -| canary | ECSTargetGroupObject | The CANARY target group, will be used to register the CANARY ECS task set if exist. It's required to enable PipeCD to perform the multi-stage deployment. | No | - -Note: You can get examples for those object from [here](../../examples/#ecs-applications). - -## ECSQuickSync - -| Field | Type | Description | Required | -|-|-|-|-| - -## AnalysisMetrics - -| Field | Type | Description | Required | -|-|-|-|-| -| provider | string | The unique name of provider defined in the Piped Configuration. | Yes | -| strategy | string | The strategy name. One of `THRESHOLD` or `PREVIOUS` or `CANARY_BASELINE` or `CANARY_PRIMARY` is available. Defaults to `THRESHOLD`. | No | -| query | string | A query performed against the [Analysis Provider](../../concepts/#analysis-provider). The stage will be skipped if no data points were returned. | Yes | -| expected | [AnalysisExpected](#analysisexpected) | The statically defined expected query result. This field is ignored if there was no data point as a result of the query. | Yes if the strategy is `THRESHOLD` | -| interval | duration | Run a query at specified intervals. | Yes | -| failureLimit | int | Acceptable number of failures. e.g. If 1 is set, the `ANALYSIS` stage will end with failure after two queries results failed. Defaults to 1. | No | -| skipOnNoData | bool | If true, it considers as a success when no data returned from the analysis provider. Defaults to false. | No | -| deviation | string | The stage fails on deviation in the specified direction. One of `LOW` or `HIGH` or `EITHER` is available. This can be used only for `PREVIOUS`, `CANARY_BASELINE` or `CANARY_PRIMARY`. Defaults to `EITHER`. | No | -| baselineArgs | map[string][string] | The custom arguments to be populated for the Baseline query. They can be reffered as `{{ .VariantCustomArgs.xxx }}`. | No | -| canaryArgs | map[string][string] | The custom arguments to be populated for the Canary query. They can be reffered as `{{ .VariantCustomArgs.xxx }}`. | No | -| primaryArgs | map[string][string] | The custom arguments to be populated for the Primary query. They can be reffered as `{{ .VariantCustomArgs.xxx }}`. | No | -| timeout | duration | How long after which the query times out. | No | -| template | [AnalysisTemplateRef](#analysistemplateref) | Reference to the template to be used. 
| No | - - -### AnalysisExpected - -| Field | Type | Description | Required | -|-|-|-|-| -| min | float64 | Failure, if the query result is less than this value. | No | -| max | float64 | Failure, if the query result is larger than this value. | No | - -### AnalysisTemplateRef - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The template name to refer. | Yes | -| appArgs | map[string]string | The arguments for custom-args. | No | - -## AnalysisLog - -| Field | Type | Description | Required | -|-|-|-|-| - -## AnalysisHttp - -| Field | Type | Description | Required | -|-|-|-|-| - -## StageOptions - -### KubernetesPrimaryRolloutStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| suffix | string | Suffix that should be used when naming the PRIMARY variant's resources. Default is `primary`. | No | -| createService | bool | Whether the PRIMARY service should be created. Default is `false`. | No | -| addVariantLabelToSelector | bool | Whether the PRIMARY variant label should be added to manifests if they were missing. Default is `false`. | No | -| prune | bool | Whether the resources that are no longer defined in Git should be removed or not. Default is `false` | No | - -### KubernetesCanaryRolloutStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| replicas | int | How many pods for CANARY workloads. Default is `1` pod. Alternatively, can be specified a string suffixed by "%" to indicate a percentage value compared to the pod number of PRIMARY | No | -| suffix | string | Suffix that should be used when naming the CANARY variant's resources. Default is `canary`. | No | -| createService | bool | Whether the CANARY service should be created. Default is `false`. | No | -| patches | [][KubernetesResourcePatch](#kubernetesresourcepatch) | List of patches used to customize manifests for CANARY variant. | No | - -### KubernetesCanaryCleanStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| | | | | - -### KubernetesBaselineRolloutStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| replicas | int | How many pods for BASELINE workloads. Default is `1` pod. Alternatively, can be specified a string suffixed by "%" to indicate a percentage value compared to the pod number of PRIMARY | No | -| suffix | string | Suffix that should be used when naming the BASELINE variant's resources. Default is `baseline`. | No | -| createService | bool | Whether the BASELINE service should be created. Default is `false`. | No | - -### KubernetesBaselineCleanStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| | | | | - -### KubernetesTrafficRoutingStageOptions -This stage routes traffic with the method specified in [KubernetesTrafficRouting](#kubernetestrafficrouting). -When using `podselector` method as a traffic routing method, routing is done by updating the Service selector. -Therefore, note that all traffic will be routed to the primary if the the primary variant's service is rolled out by running the `K8S_PRIMARY_ROLLOUT` stage. - -| Field | Type | Description | Required | -|-|-|-|-| -| all | string | Which variant should receive all traffic. Available values are "primary", "canary", "baseline". Default is `primary`. | No | -| primary | [Percentage](#percentage) | The percentage of traffic should be routed to PRIMARY variant. | No | -| canary | [Percentage](#percentage) | The percentage of traffic should be routed to CANARY variant. 
| No | -| baseline | [Percentage](#percentage) | The percentage of traffic should be routed to BASELINE variant. | No | - -### TerraformPlanStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| exitOnNoChanges | bool | Whether exiting the pipeline when the result has no changes | No | - -### TerraformApplyStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| retries | int | How many times to retry applying terraform changes. Default is `0`. | No | - -### CloudRunPromoteStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| percent | [Percentage](#percentage) | Percentage of traffic should be routed to the new version. | No | - -### LambdaCanaryRolloutStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| - -### LambdaPromoteStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| percent | [Percentage](#percentage) | Percentage of traffic should be routed to the new version. | No | - -### ECSPrimaryRolloutStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| - -### ECSCanaryRolloutStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| scale | [Percentage](#percentage) | The percentage of workloads should be rolled out as CANARY variant's workload. | Yes | - -### ECSTrafficRoutingStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| primary | [Percentage](#percentage) | The percentage of traffic should be routed to PRIMARY variant. | No | -| canary | [Percentage](#percentage) | The percentage of traffic should be routed to CANARY variant. | No | - -Note: By default, the sum of traffic is rounded to 100. If both `primary` and `canary` numbers are not set, the PRIMARY variant will receive 100% while the CANARY variant will receive 0% of the traffic. - -### AnalysisStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| duration | duration | Maximum time to perform the analysis. | Yes | -| metrics | [][AnalysisMetrics](#analysismetrics) | Configuration for analysis by metrics. | No | - -### WaitStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| duration | duration | Time to wait. | Yes | - -### WaitApprovalStageOptions - -| Field | Type | Description | Required | -|-|-|-|-| -| timeout | duration | The maximum length of time to wait before giving up. Default is 6h. | No | -| approvers | []string | List of username who has permission to approve. | Yes | -| minApproverNum | int | Number of minimum needed approvals to make this stage complete. Default is 1. | No | - -### CustomSyncStageOptions -| Field | Type | Description | Required | -|-|-|-|-| -| timeout | duration | The maximum time the stage can be taken to run. Default is `6h`| No | -| envs | map[string]string | Environment variables used with scripts. | No | -| run | string | Script run on this stage. | Yes | - -## PostSync - -| Field | Type | Description | Required | -|-|-|-|-| -| chain | [DeploymentChain](#deploymentchain) | Deployment chain configuration, used to determine and build deployments that should be triggered once the current deployment is triggered. | No | - -### DeploymentChain - -| Field | Type | Description | Required | -|-|-|-|-| -| applications | [][DeploymentChainApplication](#deploymentchainapplication) | The list of applications which should be triggered once deployment of this application rolled out successfully. 
| Yes | - -#### DeploymentChainApplication - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The name of PipeCD application, note that application name is not unique in PipeCD datastore | No | -| kind | string | The kind of the PipeCD application, which should be triggered as a node in deployment chain. The value will be one of: KUBERNETES, TERRAFORM, CLOUDRUN, LAMBDA, ECS. | No | - -## EventWatcher - -| Field | Type | Description | Required | -|-|-|-|-| -| matcher | [EventWatcherMatcher](#eventwatchermatcher) | Which event will be handled. | Yes | -| handler | [EventWatcherHandler](#eventwatcherhandler) | What to do for the event which matched by the above matcher. | Yes | - -### EventWatcherMatcher - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The event name. | Yes | -| labels | map[string]string | Additional attributes of event. This can make an event definition unique even if the one with the same name exists. | No | - -### EventWatcherHandler - -| Field | Type | Description | Required | -|-|-|-|-| -| type | string | The handler type. Currently, only `GIT_UPDATE` is supported. | Yes | -| config | [EventWatcherHandlerConfig](#eventwatcherhandlerconfig) | Configuration for the event watcher handler. | Yes | - -### EventWatcherHandlerConfig - -| Field | Type | Description | Required | -|-|-|-|-| -| commitMessage | string | The commit message used to push after replacing values. Default message is used if not given. | No | -| replacements | [][EventWatcherReplacement](#eventwatcherreplacement) | List of places where will be replaced when the new event matches. | Yes | - -## DriftDetection - -| Field | Type | Description | Required | -|-|-|-|-| -| ignoreFields | []string | List of fields path in manifests, which its diff should be ignored. | No | - -## PipeCD rich defined types - -### Percentage -A wrapper of type `int` to represent percentage data. Basically, you can pass `10` or `"10"` or `10%` and they will be treated as `10%` in PipeCD. - -### KubernetesResourcePatch - -| Field | Type | Description | Required | -|-|-|-|-| -| target | [KubernetesResourcePatchTarget](#kubernetesresourcepatchtarget) | Which manifest, which field will be the target of patch operations. | Yes | -| ops | [][KubernetesResourcePatchOp](#kubernetesresourcepatchop) | List of operations should be applied to the above target. | No | - -### KubernetesResourcePatchTarget - -| Field | Type | Description | Required | -|-|-|-|-| -| kind | string | The resource kind. e.g. `ConfigMap` | Yes | -| name | string | The resource name. e.g. `config-map-name` | Yes | -| documentRoot | string | In case you want to manipulate the YAML or JSON data specified in a field of the manfiest, specify that field's path. The string value of that field will be used as input for the patch operations. Otherwise, the whole manifest will be the target of patch operations. e.g. `$.data.envoy-config` | No | - -### KubernetesResourcePatchOp - -| Field | Type | Description | Required | -|-|-|-|-| -| op | string | The operation type. This must be one of `yaml-replace`, `yaml-add`, `yaml-remove`, `json-replace`, `text-regex`. Default is `yaml-replace`. | No | -| path | string | The path string pointing to the manipulated field. For yaml operations it looks like `$.foo.array[0].bar`. | No | -| value | string | The value string whose content will be used as new value for the field. 
| No | diff --git a/docs/content/en/docs/user-guide/event-watcher.md b/docs/content/en/docs/user-guide/event-watcher.md deleted file mode 100644 index ba32f9fc21..0000000000 --- a/docs/content/en/docs/user-guide/event-watcher.md +++ /dev/null @@ -1,233 +0,0 @@ ---- -title: "Connect between CI and CD with event watcher" -linkTitle: "Event watcher" -weight: 3 -description: > - A helper facility to automatically update files when it finds out a new event. ---- - -![](/images/diff-by-eventwatcher.png) - -The only way to upgrade your application with PipeCD is modifying configuration files managed by the Git repositories. -It brings benefits quite a bit, but it can be painful to manually update them every time in some cases (e.g. continuous deployment to your development environment for debugging, the latest prerelease to the staging environment). - -If you're experiencing any of the above pains, Event watcher is for you. -Event watcher works as a helper facility to seamlessly link CI and CD. This feature lets you automatically update files managed by your Piped when an arbitrary event has occurred. -While it empowers you to build pretty versatile workflows, the canonical use case is that you trigger a new deployment by image updates, package releases, etc. - -This guide walks you through configuring Event watcher and how to push an Event. - -## Prerequisites -Before we get into configuring EventWatcher, be sure to configure Piped. See [here](../managing-piped/configuring-event-watcher/) for more details. - -## Usage -File updating can be done by registering the latest value corresponding to the Event in the Control Plane and comparing it with the current value. - -Therefore, you mainly need to: -1. define which values in which files should be updated when a new Event found. -1. integrate a step to push an Event to the Control Plane using `pipectl` into your CI workflow. - -### 1. Defining Events -#### Use the `.pipe/` directory ->NOTE: This way is deprecated and will be removed in the future, so please use the application configuration. - -Prepare EventWatcher configuration files under the `.pipe/` directory at the root of your Git repository. -In that files, you define which values in which files should be updated when the Piped found out a new Event. - -For instance, suppose you want to update the Kubernetes manifest defined in `helloworld/deployment.yaml` when an Event with the name `helloworld-image-update` occurs: - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: EventWatcher -spec: - events: - - name: helloworld-image-update - replacements: - - file: helloworld/deployment.yaml - yamlField: $.spec.template.spec.containers[0].image -``` - -The full list of configurable `EventWatcher` fields are [here](../configuration-reference/#event-watcher-configuration-deprecated). - -#### Use the application configuration - -Define what to do for which event in the application configuration file of the target application. - -- `matcher`: Which event should be handled. -- `handler`: What to do for the event which is specified by matcher. 
- -For instance, suppose you want to update the Kubernetes manifest defined in `helloworld/deployment.yaml` when an Event with the name `helloworld-image-update` occurs: -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - name: helloworld - eventWatcher: - - matcher: - name: helloworld-image-update - handler: - type: GIT_UPDATE - config: - replacements: - - file: deployment.yaml - yamlField: $.spec.template.spec.containers[0].image -``` - -The full list of configurable `eventWatcher` fields are [here](../configuration-reference/#eventwatcher). - -### 2. Pushing an Event with `pipectl` -To register a new value corresponding to Event such as the above in the Control Plane, you need to perform `pipectl`. -And we highly recommend integrating a step for that into your CI workflow. - -You first need to set-up the `pipectl`: - -- Install it on your CI system or where you want to run according to [this guide](../command-line-tool/#installation). -- Grab the API key to which the `READ_WRITE` role is attached according to [this guide](../command-line-tool/#authentication). - -Once you're all set up, pushing a new Event to the Control Plane by the following command: - -```bash -pipectl event register \ - --address={CONTROL_PLANE_API_ADDRESS} \ - --api-key={API_KEY} \ - --name=helloworld-image-update \ - --data=gcr.io/pipecd/helloworld:v0.2.0 -``` - -You can see the status on the event list page. - -![](/images/event-list-page.png) - - -After a while, Piped will create a commit as shown below: - -```diff - spec: - containers: - - name: helloworld -- image: gcr.io/pipecd/helloworld:v0.1.0 -+ image: gcr.io/pipecd/helloworld:v0.2.0 -``` - -NOTE: Keep in mind that it may take a little while because Piped periodically fetches the new events from the Control Plane. You can change its interval according to [here](../managing-piped/configuration-reference/#eventwatcher). - -### [optional] Using labels -Event watcher is a project-wide feature, hence an event name is unique inside a project. That is, you can update multiple repositories at the same time if you use the same event name for different events. - -On the contrary, if you want to explicitly distinguish those, we recommend using labels. You can make an event definition unique by using any number of labels with arbitrary keys and values. -Suppose you define an event with the labels `env: dev` and `appName: helloworld`: - -When you use the `.pipe/` directory, you can configure like below. -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: EventWatcher -spec: - events: - - name: image-update - labels: - env: dev - appName: helloworld - replacements: - - file: helloworld/deployment.yaml - yamlField: $.spec.template.spec.containers[0].image -``` - -The other example is like below. -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: ApplicationKind -spec: - name: helloworld - eventWatcher: - - matcher: - name: image-update - labels: - env: dev - appName: helloworld - handler: - type: GIT_UPDATE - config: - replacements: - - file: deployment.yaml - yamlField: $.spec.template.spec.containers[0].image -``` - -The file update will be executed only when the labels are explicitly specified with the `--labels` flag. - -```bash -pipectl event register \ - --address=CONTROL_PLANE_API_ADDRESS \ - --api-key=API_KEY \ - --name=image-update \ - --labels env=dev,appName=helloworld \ - --data=gcr.io/pipecd/helloworld:v0.2.0 -``` - -Note that it is considered a match only when labels are an exact match. 
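To make the label scoping concrete, a second application that should only react to production pushes would match on a different label value. The sketch below reuses the `eventWatcher` fields shown above; the application name, file name, and label values are only illustrative:

```yaml
# app.pipecd.yaml of the production application.
apiVersion: pipecd.dev/v1beta1
kind: KubernetesApp
spec:
  name: helloworld-prod
  eventWatcher:
    - matcher:
        name: image-update
        labels:
          # Only an event registered with --labels env=prod,appName=helloworld
          # updates this application; the dev event above will not match.
          env: prod
          appName: helloworld
      handler:
        type: GIT_UPDATE
        config:
          replacements:
            - file: deployment.yaml
              yamlField: $.spec.template.spec.containers[0].image
```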
- -## Examples -Suppose you want to update your configuration file after releasing a new Helm chart. - -You define the configuration for event watcher in `helloworld/app.pipecd.yaml` file like: - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - input: - helmChart: - name: helloworld - version: 0.1.0 - eventWatcher: - - matcher: - name: image-update - labels: - env: dev - appName: helloworld - handler: - type: GIT_UPDATE - config: - replacements: - - file: app.pipecd.yaml - yamlField: $.spec.input.helmChart.version -``` - -Push a new version `0.2.0` as data when the Helm release is completed. - -```bash -pipectl event register \ - --address=CONTROL_PLANE_API_ADDRESS \ - --api-key=API_KEY \ - --name=helm-release \ - --labels env=dev,appName=helloworld \ - --data=0.2.0 -``` - -Then you'll see that Piped updates as: - -```diff -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - input: - helmChart: - name: helloworld -- version: 0.1.0 -+ version: 0.2.0 - eventWatcher: - - matcher: - name: image-update - labels: - env: dev - appName: helloworld - handler: - type: GIT_UPDATE - config: - replacements: - - file: app.pipecd.yaml - yamlField: $.spec.input.helmChart.version -``` - -## Github Actions -If you're using Github Actions in your CI workflow, [actions-event-register](https://github.com/marketplace/actions/pipecd-register-event) is for you! -With it, you can easily register events without any installation. diff --git a/docs/content/en/docs/user-guide/examples/_index.md b/docs/content/en/docs/user-guide/examples/_index.md deleted file mode 100755 index 9a6c69f276..0000000000 --- a/docs/content/en/docs/user-guide/examples/_index.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: "Examples" -linkTitle: "Examples" -weight: 10 -description: > - Some examples of PipeCD in action! ---- - -One of the best ways to see what PipeCD can do, and learn how to deploy your applications with it, is to see some real examples. - -We have prepared some examples for each kind of application, please visit the [PipeCD examples](../../examples/) page for details. diff --git a/docs/content/en/docs/user-guide/examples/k8s-app-bluegreen-with-istio.md b/docs/content/en/docs/user-guide/examples/k8s-app-bluegreen-with-istio.md deleted file mode 100644 index 7544f8ca79..0000000000 --- a/docs/content/en/docs/user-guide/examples/k8s-app-bluegreen-with-istio.md +++ /dev/null @@ -1,126 +0,0 @@ ---- -title: "BlueGreen deployment for Kubernetes app with Istio" -linkTitle: "BlueGreen k8s app with Istio" -weight: 2 -description: > - How to enable blue-green deployment for Kubernetes application with Istio. ---- - -Similar to [canary deployment](../k8s-app-canary-with-istio/), PipeCD allows you to enable and automate the blue-green deployment strategy for your application based on Istio's weighted routing feature. - -In both canary and blue-green strategies, the old version and the new version of the application get deployed at the same time. -But while the canary strategy slowly routes the traffic to the new version, the blue-green strategy quickly routes all traffic to one of the versions. - -In this guide, we will show you how to configure the application configuration file to apply the blue-green strategy. - -Complete source code for this example is hosted in [pipe-cd/examples](https://github.com/pipe-cd/examples/tree/master/kubernetes/mesh-istio-bluegreen) repository. 
- -## Before you begin - -- Add a new Kubernetes application by following the instructions in [this guide](../../managing-application/adding-an-application/) -- Ensure having `pipecd.dev/variant: primary` [label](https://github.com/pipe-cd/examples/blob/master/kubernetes/mesh-istio-bluegreen/deployment.yaml#L17) and [selector](https://github.com/pipe-cd/examples/blob/master/kubernetes/mesh-istio-bluegreen/deployment.yaml#L12) in the workload template -- Ensure having at least one Istio's `DestinationRule` and defining the needed subsets (`primary` and `canary`) with `pipecd.dev/variant` label - -``` yaml -apiVersion: networking.istio.io/v1beta1 -kind: DestinationRule -metadata: - name: mesh-istio-bluegreen -spec: - host: mesh-istio-bluegreen - subsets: - - name: primary - labels: - pipecd.dev/variant: primary - - name: canary - labels: - pipecd.dev/variant: canary - trafficPolicy: - tls: - mode: ISTIO_MUTUAL -``` - -- Ensure having at least one Istio's `VirtualService` manifest and all traffic is routed to the `primary` - -``` yaml -apiVersion: networking.istio.io/v1beta1 -kind: VirtualService -metadata: - name: mesh-istio-bluegreen -spec: - hosts: - - mesh-istio-bluegreen.pipecd.dev - gateways: - - mesh-istio-bluegreen - http: - - route: - - destination: - host: mesh-istio-bluegreen - subset: primary - weight: 100 -``` - -## Enabling blue-green strategy - -- Add the following application configuration file into the application directory in the Git repository. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: K8S_CANARY_ROLLOUT - with: - replicas: 100% - - name: K8S_TRAFFIC_ROUTING - with: - all: canary - - name: WAIT_APPROVAL - - name: K8S_PRIMARY_ROLLOUT - - name: K8S_TRAFFIC_ROUTING - with: - all: primary - - name: K8S_CANARY_CLEAN - trafficRouting: - method: istio - istio: - host: mesh-istio-bluegreen -``` - -- Send a PR to update the container image version in the Deployment manifest and merge it to trigger a new deployment. PipeCD will plan the deployment with the specified blue-green strategy. - -![](/images/example-bluegreen-kubernetes-istio.png) --Deployment Details Page -
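For reference, the commit that triggers this pipeline is typically just the image tag bump in the Deployment manifest; a sketch of such a change (the image tags are only illustrative):

```yaml
# deployment.yaml - only the container image line changes in the triggering commit.
spec:
  template:
    spec:
      containers:
        - name: helloworld
          # was: gcr.io/pipecd/helloworld:v0.1.0
          image: gcr.io/pipecd/helloworld:v0.2.0
```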
- -- Now you have an automated blue-green deployment for your application. 🎉 - -## Understanding what happened - -In this example, you configured the application configuration file to switch all traffic from an old to a new version of the application using Istio's weighted routing feature. - -- Stage 1: `K8S_CANARY_ROLLOUT` ensures that the workloads of canary variant (new version) should be deployed. But at this time, they still handle nothing, all traffic is handled by workloads of primary variant. -The number of workloads (e.g. pod) for canary variant is configured to be 100% of the replicas number of primary varant. - -![](/images/example-bluegreen-kubernetes-istio-stage-1.png) - -- Stage 2: `K8S_TRAFFIC_ROUTING` ensures that all traffic should be routed to canary variant. Because the `trafficRouting` is configured to use Istio, PipeCD will find Istio's VirtualService resource of this application to control the traffic percentage. -(You can add an [ANALYSIS](../../managing-application/customizing-deployment/automated-deployment-analysis/) stage after this to validate the new version. When any negative impacts are detected, an auto-rollback stage will be executed to switch all traffic back to the primary variant.) - -![](/images/example-bluegreen-kubernetes-istio-stage-2.png) - -- Stage 3: `WAIT_APPROVAL` waits for a manual approval from someone in your team. - -- Stage 4: `K8S_PRIMARY_ROLLOUT` ensures that all resources of primary variant will be updated to the new version. - -![](/images/example-bluegreen-kubernetes-istio-stage-4.png) - -- Stage 5: `K8S_TRAFFIC_ROUTING` ensures that all traffic should be routed to primary variant. Now primary variant is running the new version so it means all traffic is handled by the new version. - -![](/images/example-bluegreen-kubernetes-istio-stage-5.png) - -- Stage 6: `K8S_CANARY_CLEAN` ensures all created resources for canary variant should be destroyed. - -![](/images/example-bluegreen-kubernetes-istio-stage-6.png) diff --git a/docs/content/en/docs/user-guide/examples/k8s-app-bluegreen-with-pod-selector.md b/docs/content/en/docs/user-guide/examples/k8s-app-bluegreen-with-pod-selector.md deleted file mode 100644 index c303b64cbe..0000000000 --- a/docs/content/en/docs/user-guide/examples/k8s-app-bluegreen-with-pod-selector.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: "BlueGreen deployment for Kubernetes app with PodSelector" -linkTitle: "BlueGreen k8s app with PodSelector" -weight: 4 -description: > - How to enable blue-green deployment for Kubernetes application with PodSelector. ---- - -> TBA - -For applications that are not deployed on a service mesh, PipeCD can enable blue-green deployment with Kubernetes L4 networking. diff --git a/docs/content/en/docs/user-guide/examples/k8s-app-canary-with-istio.md b/docs/content/en/docs/user-guide/examples/k8s-app-canary-with-istio.md deleted file mode 100644 index 286b361ded..0000000000 --- a/docs/content/en/docs/user-guide/examples/k8s-app-canary-with-istio.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: "Canary deployment for Kubernetes app with Istio" -linkTitle: "Canary k8s app with Istio" -weight: 1 -description: > - How to enable canary deployment for Kubernetes application with Istio. ---- - -> Canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure and making it available to everybody. 
-> -- [martinfowler.com/canaryrelease](https://martinfowler.com/bliki/CanaryRelease.html) - -With Istio, we can accomplish this goal by configuring a sequence of rules that route a percentage of traffic to each [variant](../../managing-application/defining-app-configuration/kubernetes/#sync-with-the-specified-pipeline) of the application. -And with PipeCD, you can enable and automate the canary strategy for your Kubernetes application even easier. - -In this guide, we will show you how to configure the application configuration file to send 10% of traffic to the new version and keep 90% to the primary variant. Then after waiting for manual approval, you will complete the migration by sending 100% of traffic to the new version. - -Complete source code for this example is hosted in [pipe-cd/examples](https://github.com/pipe-cd/examples/tree/master/kubernetes/mesh-istio-canary) repository. - -## Before you begin - -- Add a new Kubernetes application by following the instructions in [this guide](../../managing-application/adding-an-application/) -- Ensure having `pipecd.dev/variant: primary` [label](https://github.com/pipe-cd/examples/blob/master/kubernetes/mesh-istio-canary/deployment.yaml#L17) and [selector](https://github.com/pipe-cd/examples/blob/master/kubernetes/mesh-istio-canary/deployment.yaml#L12) in the workload template -- Ensure having at least one Istio's `DestinationRule` and defining the needed subsets (`primary` and `canary`) with `pipecd.dev/variant` label - -``` yaml -apiVersion: networking.istio.io/v1beta1 -kind: DestinationRule -metadata: - name: mesh-istio-canary -spec: - host: mesh-istio-canary.default.svc.cluster.local - subsets: - - name: primary - labels: - pipecd.dev/variant: primary - - name: canary - labels: - pipecd.dev/variant: canary -``` - -- Ensure having at least one Istio's `VirtualService` manifest and all traffic is routed to the `primary` - -``` yaml -apiVersion: networking.istio.io/v1beta1 -kind: VirtualService -metadata: - name: mesh-istio-canary -spec: - hosts: - - mesh-istio-canary.pipecd.dev - gateways: - - mesh-istio-canary - http: - - route: - - destination: - host: mesh-istio-canary.default.svc.cluster.local - subset: primary - weight: 100 -``` - -## Enabling canary strategy - -- Add the following application configuration file into the application directory in Git. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: K8S_CANARY_ROLLOUT - with: - replicas: 50% - - name: K8S_TRAFFIC_ROUTING - with: - canary: 10 - primary: 90 - - name: WAIT_APPROVAL - - name: K8S_PRIMARY_ROLLOUT - - name: K8S_TRAFFIC_ROUTING - with: - primary: 100 - - name: K8S_CANARY_CLEAN - trafficRouting: - method: istio - istio: - host: mesh-istio-canary.default.svc.cluster.local -``` - -- Send a PR to update the container image version in the Deployment manifest and merge it to trigger a new deployment. PipeCD will plan the deployment with the specified canary strategy. - -![](/images/example-canary-kubernetes-istio.png) --Deployment Details Page -
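If you later want to automate the promotion decision instead of relying on a manual approval, the `WAIT_APPROVAL` stage in the pipeline above can be replaced with an `ANALYSIS` stage (see the [Automated deployment analysis](../../managing-application/customizing-deployment/automated-deployment-analysis/) guide). The sketch below shows only the relevant stages and assumes an analysis provider named `my-prometheus` is registered in the Piped configuration:

```yaml
      - name: K8S_TRAFFIC_ROUTING
        with:
          canary: 10
          primary: 90
      # Replaces WAIT_APPROVAL: promote automatically while the error rate stays low.
      - name: ANALYSIS
        with:
          duration: 30m
          metrics:
            - strategy: THRESHOLD
              provider: my-prometheus
              interval: 5m
              expected:
                max: 0.01
              query: |
                sum (rate(http_requests_total{status=~"5.*"}[5m]))
                /
                sum (rate(http_requests_total[5m]))
      - name: K8S_PRIMARY_ROLLOUT
```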
- Now you have an automated canary deployment for your application. 🎉

## Understanding what happened

In this example, you configured the application configuration file to migrate traffic from an old to a new version of the application using Istio's weighted routing feature.

- Stage 1: `K8S_CANARY_ROLLOUT` ensures that the workloads of the canary variant (the new version) are deployed. At this point they still handle nothing; all traffic is handled by the workloads of the primary variant.
The number of workloads (e.g. pods) for the canary variant is configured to be 50% of the replicas number of the primary variant.

![](/images/example-canary-kubernetes-istio-stage-1.png)

- Stage 2: `K8S_TRAFFIC_ROUTING` ensures that 10% of the traffic is routed to the canary variant and 90% to the primary variant. Because `trafficRouting` is configured to use Istio, PipeCD will find this application's Istio VirtualService resource to control the traffic percentage.

![](/images/example-canary-kubernetes-istio-stage-2.png)

- Stage 3: `WAIT_APPROVAL` waits for a manual approval from someone in your team.

- Stage 4: `K8S_PRIMARY_ROLLOUT` ensures that all resources of the primary variant are updated to the new version.

![](/images/example-canary-kubernetes-istio-stage-4.png)

- Stage 5: `K8S_TRAFFIC_ROUTING` ensures that all traffic is routed to the primary variant. Since the primary variant is now running the new version, all traffic is handled by the new version.

![](/images/example-canary-kubernetes-istio-stage-5.png)

- Stage 6: `K8S_CANARY_CLEAN` ensures that all resources created for the canary variant are destroyed.

![](/images/example-canary-kubernetes-istio-stage-6.png)
diff --git a/docs/content/en/docs/user-guide/examples/k8s-app-canary-with-pod-selector.md b/docs/content/en/docs/user-guide/examples/k8s-app-canary-with-pod-selector.md
deleted file mode 100644
index 5993bc101e..0000000000
--- a/docs/content/en/docs/user-guide/examples/k8s-app-canary-with-pod-selector.md
+++ /dev/null
@@ -1,122 +0,0 @@
---
title: "Canary deployment for Kubernetes app with PodSelector"
linkTitle: "Canary k8s app with PodSelector"
weight: 3
description: >
  How to enable canary deployment for Kubernetes application with PodSelector.
---

Using a service mesh like [Istio](../k8s-app-canary-with-istio/) makes canary deployment easier and offers many powerful features, but not every team is ready to run a service mesh in their environment. This page walks you through using PipeCD to enable canary deployment for a Kubernetes application running in a non-mesh environment.

The basic idea is described in this [Kubernetes document](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments): the Service resource uses a common label set to route traffic to both the canary and primary workloads, and the percentage of traffic for each variant is determined by its number of replicas.
- -## Enabling canary strategy - -Assume your application has the following `Service` and `Deployment` manifests: - -- service.yaml - -``` yaml -apiVersion: v1 -kind: Service -metadata: - name: helloworld -spec: - selector: - app: helloworld - ports: - - protocol: TCP - port: 9085 -``` - -- deployment.yaml - -``` yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: helloworld - labels: - app: helloworld - pipecd.dev/variant: primary -spec: - replicas: 30 - revisionHistoryLimit: 2 - selector: - matchLabels: - app: helloworld - pipecd.dev/variant: primary - template: - metadata: - labels: - app: helloworld - pipecd.dev/variant: primary - spec: - containers: - - name: helloworld - image: gcr.io/pipecd/helloworld:v0.1.0 - args: - - server - ports: - - containerPort: 9085 -``` - -In PipeCD context, manifests defined in Git are the manifests for primary variant, so please note to ensure that your deployment manifest contains `pipecd.dev/variant: primary` label and selector in the spec. - -To enable canary strategy for this Kubernetes application, you will update your application configuration file to be as below: - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - # Deploy the workloads of CANARY variant. In this case, the number of - # workload replicas of CANARY variant is 50% of the replicas number of PRIMARY variant. - - name: K8S_CANARY_ROLLOUT - with: - replicas: 50% - - name: WAIT_APPROVAL - with: - duration: 10s - # Update the workload of PRIMARY variant to the new version. - - name: K8S_PRIMARY_ROLLOUT - # Destroy all workloads of CANARY variant. - - name: K8S_CANARY_CLEAN -``` - -That is all, now let try to send a PR to update the container image version in the Deployment manifest and merge it to trigger a new deployment. Then, PipeCD will plan the deployment with the specified canary strategy. - -![](/images/example-canary-kubernetes.png) --Deployment Details Page -
- -Complete source code for this example is hosted in [pipe-cd/examples](https://github.com/pipe-cd/examples/tree/master/kubernetes/canary) repository. - -## Understanding what happened - -In this example, you configured your application to be deployed with a canary strategy using a native feature of Kubernetes: pod selector. -The traffic will be routed to both canary and primary workloads because they are sharing the same label: `app: helloworld`. -The percentage of traffic for each variant is based on the respective number of pods. - -Here are what happened in details: - -- Before deploying, all traffic gets routed to primary workloads. - - - -- Stage 1: `K8S_CANARY_ROLLOUT` ensures that the workloads of canary variant (new version) should be deployed. -The number of workloads (e.g. pod) for canary variant is configured to be 50% of the replicas number of primary variant. It means 15 canary pods will be started, and they receive 33.3% traffic while primary workloads receive the remaining 66.7% traffic. - - - -- Stage 2: `WAIT_APPROVAL` waits for a manual approval from someone in your team. - -- Stage 3: `K8S_PRIMARY_ROLLOUT` ensures that all resources of primary variant will be updated to the new version. - - - -- Stage 4: `K8S_CANARY_CLEAN` ensures all created resources for canary variant should be destroyed. After that, the primary workloads running in with the new version will receive all traffic. - - diff --git a/docs/content/en/docs/user-guide/insights.md b/docs/content/en/docs/user-guide/insights.md deleted file mode 100644 index fb2e46241c..0000000000 --- a/docs/content/en/docs/user-guide/insights.md +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: "Insights" -linkTitle: "Insights" -weight: 5 -description: > - This page describes how to see delivery performance. ---- - -![](/images/insights.png) - -### Application metrics - -The topmost block helps you understand how many applications your project has. - -### Deployment metrics - -Based on your executed deployment data, PipeCD provides charts that help you better understand the delivery performance of your organization. - -You can view daily, and monthly data visualizations of your entire project, a specific application, or a group of applications that match a list of labels. - -#### Deployment Frequency -How often does your application/project deploy code to production. - -#### Change Failure Rate -How often deployment failures occur in production that requires an immediate remedy (fix, rollback...). - -#### Lead Time for Changes -How long does it take to go from code committed to code successfully running on production. - -> WIP - -#### Mean Time To Restore -How long does it generally take to restore service when a service incident occurs. - -> WIP diff --git a/docs/content/en/docs/user-guide/managing-application/_index.md b/docs/content/en/docs/user-guide/managing-application/_index.md deleted file mode 100644 index 99468227f5..0000000000 --- a/docs/content/en/docs/user-guide/managing-application/_index.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: "Managing application" -linkTitle: "Managing application" -weight: 2 -description: > - This guide is for developers who have PipeCD installed for them and are using PipeCD to deploy their applications. ---- - -> Note: You must have at least one activated/running Piped to enable using any of the following features of PipeCD. Please refer to [Piped installation docs](../../installation/install-piped/) if you do not have any Piped in your pocket. 
diff --git a/docs/content/en/docs/user-guide/managing-application/adding-an-application.md b/docs/content/en/docs/user-guide/managing-application/adding-an-application.md
deleted file mode 100644
index 822b446c99..0000000000
--- a/docs/content/en/docs/user-guide/managing-application/adding-an-application.md
+++ /dev/null
@@ -1,140 +0,0 @@
---
title: "Adding an application"
linkTitle: "Adding an application"
weight: 1
description: >
  This page describes how to add a new application.
---

An application is a collection of resources and configurations that are managed together.
It represents the service which you are going to deploy. With PipeCD, all of an application's manifests and its application configuration (`app.pipecd.yaml`) must be committed into a directory of a Git repository. That directory is called the application directory.

Each application can be handled by one and only one `piped`. Currently, PipeCD supports 5 kinds of applications: Kubernetes, Terraform, CloudRun, Lambda, ECS.

Before deploying an application, it must be registered so that PipeCD knows
- where the application configuration is placed
- which `piped` should handle it and which platform the application should be deployed to

Through the web console, you can register a new application in one of the following ways:
- Picking from a list of unused apps suggested by Pipeds while scanning Git repositories (Recommended)
- Manually configuring application information

(If you prefer to use the [`pipectl`](../../command-line-tool/#adding-a-new-application) command-line tool, see its usage for the details.)

## Picking from a list of unused apps suggested by Pipeds

To add a new application this way, you first have to __prepare a configuration file__ that contains your application configuration and store that file in the Git repository which your Piped is watching.

The application configuration file name must be suffixed by `.pipecd.yaml` because Piped periodically checks for files with this suffix.
- -{{< tabpane >}} -{{< tab lang="yaml" header="KubernetesApp" >}} -# For application's configuration in detail for KubernetesApp, please visit -# https://pipecd.dev/docs/user-guide/managing-application/defining-app-configuration/kubernetes/ - -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - name: foo - labels: - team: bar -{{< /tab >}} -{{< tab lang="yaml" header="TerraformApp" >}} -# For application's configuration in detail for TerraformApp, please visit -# https://pipecd.dev/docs/user-guide/managing-application/defining-app-configuration/terraform/ - -apiVersion: pipecd.dev/v1beta1 -kind: TerraformApp -spec: - name: foo - labels: - team: bar -{{< /tab >}} -{{< tab lang="yaml" header="LambdaApp" >}} -# For application's configuration in detail for LambdaApp, please visit -# https://pipecd.dev/docs/user-guide/managing-application/defining-app-configuration/lambda/ - -apiVersion: pipecd.dev/v1beta1 -kind: LambdaApp -spec: - name: foo - labels: - team: bar -{{< /tab >}} -{{< tab lang="yaml" header="CloudRunApp" >}} -# For application's configuration in detail for CloudRunApp, please visit -# https://pipecd.dev/docs/user-guide/managing-application/defining-app-configuration/cloudrun/ - -apiVersion: pipecd.dev/v1beta1 -kind: CloudRunApp -spec: - name: foo - labels: - team: bar -{{< /tab >}} -{{< tab lang="yaml" header="ECSApp" >}} -# For application's configuration in detail for ECSApp, please visit -# https://pipecd.dev/docs/user-guide/managing-application/defining-app-configuration/ecs/ - -apiVersion: pipecd.dev/v1beta1 -kind: ECSApp -spec: - name: foo - labels: - team: bar -{{< /tab >}} -{{< /tabpane >}} - -To define your application deployment pipeline which contains the guideline to show Piped how to deploy your application, please visit [Defining app configuration](../defining-app-configuration/). - -Go to the PipeCD web console on application list page, click the `+ADD` button at the top left corner of the application list page and then go to the `ADD FROM GIT` tab. - -Select the Piped and Platform Provider that you deploy to, once the Piped that's watching your Git repository catches the new unregistered application configuration file, it will be listed up in this panel. Click `ADD` to complete the registration. - -![](/images/registering-an-application-from-suggestions-new.png) --
## Manually configuring application information

With this method, you submit all the necessary information about your application on the web console first and can prepare its configuration file afterwards.

Clicking the `+ADD` button on the application list page reveals a popup from the right side, as below:

![](/images/registering-an-application-manually-new.png)
After filling in all the required fields, click the `Save` button to complete the application registration.

Here is the list of fields in the registration form:

| Field | Description | Required |
|-|-|-|
| Name | The application name | Yes |
| Kind | The application kind. Select one of these values: `Kubernetes`, `Terraform`, `CloudRun`, `Lambda` and `ECS`. | Yes |
| Piped | The piped that handles this application. Select one of the registered `piped`s at the `Settings/Piped` page. | Yes |
| Repository | The Git repository that contains the application configuration and application manifests. Select one of the registered repositories in the `piped` configuration. | Yes |
| Path | The relative path from the root of the Git repository to the directory containing the application configuration and application manifests. Use `./` to mean the repository root. | Yes |
| Config Filename | The name of the application configuration file. Default is `app.pipecd.yaml`. | No |
| Platform Provider | Where the application will be deployed to. Select one of the registered cloud/platform providers in the `piped` configuration. This field was previously named `Cloud Provider`. | Yes |

> Note: Labels cannot be set via this form. If you need them, register via the application configuration defined in the Git repository instead.

After submitting the form, one step is left: adding the application configuration file for that application into the application directory in the Git repository, the same as prepared in [the above method](../adding-an-application/#picking-from-a-list-of-unused-apps-suggested-by-pipeds).

Please refer to [Define your app's configuration](../defining-app-configuration/) or [pipecd/examples](../../examples/) for examples of each supported application kind.

## Updating an application
Regardless of which method you used to register the application, the web console can only be used to disable/enable/delete the application, besides the adding operation. All updates to application information must be done via the application configuration file stored in Git as a single source of truth.

```yaml
apiVersion: pipecd.dev/v1beta1
kind: AppKind
spec:
  name: new-name
  labels:
    team: new-team
```

Refer to the [configuration reference](../../configuration-reference/) to see the full list of configurable fields.
diff --git a/docs/content/en/docs/user-guide/managing-application/application-live-state.md b/docs/content/en/docs/user-guide/managing-application/application-live-state.md
deleted file mode 100644
index 6cab5cd950..0000000000
--- a/docs/content/en/docs/user-guide/managing-application/application-live-state.md
+++ /dev/null
@@ -1,18 +0,0 @@
---
title: "Application live state"
linkTitle: "Application live state"
weight: 7
description: >
  The live states of application components as well as their health status.
---

By default, `piped` continuously monitors the running resources/components of all deployed applications to determine their state and then sends those results to the Control Plane. The application state is visualized and rendered on the application details page in realtime, which helps developers see what is running in the cluster as well as its health status. The application state includes:
- a visual graph of application resources/components. Each resource/component node includes its metadata and health status.
- the health status of the whole application.
Application health status is `HEALTHY` if and only if the health statuses of all of its resources/components are `HEALTHY`.

![](/images/application-details.png)

*Application Details Page*
By clicking on a resource/component node, a popup is revealed from the right side to show more details about that resource/component.
diff --git a/docs/content/en/docs/user-guide/managing-application/cancelling-a-deployment.md b/docs/content/en/docs/user-guide/managing-application/cancelling-a-deployment.md
deleted file mode 100644
index 457a305e70..0000000000
--- a/docs/content/en/docs/user-guide/managing-application/cancelling-a-deployment.md
+++ /dev/null
@@ -1,17 +0,0 @@
---
title: "Cancelling a deployment"
linkTitle: "Cancelling a deployment"
weight: 5
description: >
  This page describes how to cancel a running deployment.
---

A running deployment can be cancelled from the web UI at the deployment details page.

If application rollback is enabled in the application configuration, the rollback process will be executed after the cancellation. You can also explicitly choose whether or not to roll back after cancelling by clicking the `▼` mark on the right side of the `CANCEL` button and selecting your option.

![](/images/cancel-deployment.png)

*Cancel a Deployment from web UI*
- diff --git a/docs/content/en/docs/user-guide/managing-application/configuration-drift-detection.md b/docs/content/en/docs/user-guide/managing-application/configuration-drift-detection.md deleted file mode 100644 index 3d2a3b4bc1..0000000000 --- a/docs/content/en/docs/user-guide/managing-application/configuration-drift-detection.md +++ /dev/null @@ -1,99 +0,0 @@ ---- -title: "Configuration drift detection" -linkTitle: "Configuration drift detection" -weight: 8 -description: > - Automatically detecting the configuration drift. ---- - -Configuration Drift is a phenomenon where running resources of service become more and more different from the definitions in Git as time goes on, due to manual ad-hoc changes and updates. -As PipeCD is using Git as a single source of truth, all application resources and infrastructure changes should be done by making a pull request to Git. Whenever a configuration drift occurs it should be notified to the developers and be fixed. - -PipeCD includes `Configuration Drift Detection` feature, which periodically compares running resources/configurations with the definitions in Git to detect the configuration drift and shows the comparing result in the application details web page as well as sends the notifications to the developers. - -### Detection Result -There are three statuses for the drift detection result: `SYNCED`, `OUT_OF_SYNC`, `DEPLOYING`. - -###### SYNCED - -This status means no configuration drift was detected. All resources/configurations are synced from the definitions in Git. From the application details page, this status is shown by a green "Synced" mark. - -![](/images/application-synced.png) --Application is in SYNCED state -
###### OUT_OF_SYNC

This status means a configuration drift was detected. An application is in this status when at least one of the following conditions is satisfied:
- at least one resource is defined in Git but NOT running in the cluster
- at least one resource is NOT defined in Git but running in the cluster
- at least one resource is both defined in Git and running in the cluster, but NOT with the same configuration

This status is shown by a red "Out of Sync" mark on the application details page.

![](/images/application-out-of-sync.png)

*Application is in OUT_OF_SYNC state*
Click the "SHOW DETAILS" button to see more details about why the application is in the `OUT_OF_SYNC` status. In the example below, the replicas number of a Deployment did not match: it was `300` in Git but `3` in the cluster.

![](/images/application-out-of-sync-details.png)

*The details show why the application is in OUT_OF_SYNC state*
###### DEPLOYING

This status means the application is being deployed and the configuration drift detection is not running for a while. Whenever a new deployment of the application is started, the detection process is temporarily stopped until that deployment finishes, and it resumes after that.

### How to enable

This feature is automatically enabled for all applications.

You can change the checking interval as well as [configure the notification](../../managing-piped/configuring-notifications/) for these events in the `piped` configuration.

### Ignore drift detection for specific fields

> Note: This feature is currently supported only for Kubernetes applications.

You can also ignore drift detection for specified fields in your application manifests. In other words, even if the selected fields have different values between the live state and Git, the application status will not be set to `Out of Sync`.

For example, suppose you have the application manifest below:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple
spec:
  replicas: 2
  template:
    spec:
      containers:
        - args:
            - hi
            - hello
          image: gcr.io/pipecd/helloworld:v1.0.0
          name: helloworld
```

If you want to ignore the drift detection for the following two fields
- the pod replicas
- the `helloworld` container's args

add the following statements to `app.pipecd.yaml` to ignore diffs on those fields.

```yaml
spec:
  ...
  driftDetection:
    ignoreFields:
      - apps/v1:Deployment:default:simple#spec.replicas
      - apps/v1:Deployment:default:simple#spec.template.spec.containers.0.args
```

Note: `ignoreFields` entries are in the format `apiVersion:kind:namespace:name#yamlFieldPath`.

For more information, see the [configuration reference](../../configuration-reference/#driftdetection).
diff --git a/docs/content/en/docs/user-guide/managing-application/customizing-deployment/_index.md b/docs/content/en/docs/user-guide/managing-application/customizing-deployment/_index.md
deleted file mode 100644
index 3f42bbdd32..0000000000
--- a/docs/content/en/docs/user-guide/managing-application/customizing-deployment/_index.md
+++ /dev/null
@@ -1,14 +0,0 @@
---
title: "Customizing application's deployment pipeline"
linkTitle: "Customizing deployment"
weight: 3
description: >
  This page describes how to customize an application's deployment pipeline with PipeCD defined stages.
---

In the previous section, we learned how to use the stages supported for each application kind to build up a pipeline that defines how Piped should deploy your application. In this section, aside from those application-kind-specific stages, we will talk about some commonly used pipeline stages that can be used to build up a more sophisticated deployment pipeline for your application.

![](/images/deployment-wait-stage.png)

*Example deployment with a WAIT stage*
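As a preview of how these generic stages compose with the application-kind stages, here is a sketch of a canary pipeline that gives the canary some soak time and then waits for a human sign-off before promoting (the durations and stage ordering are only illustrative):

```yaml
apiVersion: pipecd.dev/v1beta1
kind: KubernetesApp
spec:
  pipeline:
    stages:
      - name: K8S_CANARY_ROLLOUT
        with:
          replicas: 50%
      # Let the canary run for a while before asking anyone to look at it.
      - name: WAIT
        with:
          duration: 10m
      # Pause until someone with permission approves the promotion.
      - name: WAIT_APPROVAL
        with:
          timeout: 6h
      - name: K8S_PRIMARY_ROLLOUT
      - name: K8S_CANARY_CLEAN
```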
diff --git a/docs/content/en/docs/user-guide/managing-application/customizing-deployment/adding-a-manual-approval.md b/docs/content/en/docs/user-guide/managing-application/customizing-deployment/adding-a-manual-approval.md
deleted file mode 100644
index 3ee946b5fd..0000000000
--- a/docs/content/en/docs/user-guide/managing-application/customizing-deployment/adding-a-manual-approval.md
+++ /dev/null
@@ -1,39 +0,0 @@
---
title: "Adding a manual approval stage"
linkTitle: "Manual approval stage"
weight: 2
description: >
  This page describes how to add a manual approval stage.
---

While deploying an application to production environments, some teams require manual approvals before continuing.
The manual approval stage enables you to control when the deployment is allowed to continue by requiring a specific person or team to approve.
This stage is named `WAIT_APPROVAL`, and you can add it to your pipeline before any stage that should be approved before it can be executed.

``` yaml
apiVersion: pipecd.dev/v1beta1
kind: KubernetesApp
spec:
  pipeline:
    stages:
      - name: K8S_CANARY_ROLLOUT
      - name: WAIT_APPROVAL
        with:
          timeout: 6h
          approvers:
            - user-abc
      - name: K8S_PRIMARY_ROLLOUT
```

In the above example, the deployment requires an approval from `user-abc` before the `K8S_PRIMARY_ROLLOUT` stage can be executed.

The value of a user ID in the `approvers` list depends on your [SSO configuration](../../../managing-controlplane/auth/): it must be a GitHub user ID if your SSO was configured to use the GitHub provider, and a Gmail account if your SSO was configured to use the Google provider.

In case the `approvers` field was not configured, anyone in the project who has the `Editor` or `Admin` role can approve the deployment pipeline.

Also, the stage will end with failure when the time specified in `timeout` has elapsed. Default is `6h`.

![](/images/deployment-wait-approval-stage.png)

*Deployment with a WAIT_APPROVAL stage*
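When a single approver is not enough, the `minApproverNum` field listed in the configuration reference for `WaitApprovalStageOptions` can require several of the listed users to approve. A minimal sketch (the user IDs are placeholders):

```yaml
      - name: WAIT_APPROVAL
        with:
          timeout: 6h
          # The stage completes only after two of the listed users have approved.
          minApproverNum: 2
          approvers:
            - user-abc
            - user-def
            - user-xyz
```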
diff --git a/docs/content/en/docs/user-guide/managing-application/customizing-deployment/adding-a-wait-stage.md b/docs/content/en/docs/user-guide/managing-application/customizing-deployment/adding-a-wait-stage.md deleted file mode 100644 index f2d381d8f8..0000000000 --- a/docs/content/en/docs/user-guide/managing-application/customizing-deployment/adding-a-wait-stage.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -title: "Adding a wait stage" -linkTitle: "Wait stage" -weight: 1 -description: > - This page describes how to add a WAIT stage. ---- - -In addition to waiting for approvals from specific people, the deployment pipeline can be configured to wait for an amount of time before continuing. -This can be done by adding the `WAIT` stage to the pipeline. This stage has one configurable field, `duration`, which specifies how long to wait. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: K8S_CANARY_ROLLOUT - - name: WAIT - with: - duration: 5m - - name: K8S_PRIMARY_ROLLOUT - - name: K8S_CANARY_CLEAN -``` - -![](/images/deployment-wait-stage.png) --Deployment with a WAIT stage -
diff --git a/docs/content/en/docs/user-guide/managing-application/customizing-deployment/automated-deployment-analysis.md b/docs/content/en/docs/user-guide/managing-application/customizing-deployment/automated-deployment-analysis.md deleted file mode 100644 index 2d16a427c4..0000000000 --- a/docs/content/en/docs/user-guide/managing-application/customizing-deployment/automated-deployment-analysis.md +++ /dev/null @@ -1,297 +0,0 @@ ---- -title: "Adding an automated deployment analysis stage" -linkTitle: "Automated deployment analysis stage" -weight: 3 -description: > - This page describes how to configure Automated Deployment Analysis feature. ---- - ->NOTE: This feature is currently alpha status. - -Automated Deployment Analysis (ADA) evaluates the impact of the deployment you are in the middle of by analyzing the metrics data, log entries, and the responses of the configured HTTP requests. -The analysis of the newly deployed application is often carried out in a manual, ad-hoc or statistically incorrect manner. -ADA automates that and helps to build a robust deployment process. -ADA is available as a stage in the pipeline specified in the application configuration file. - -ADA does the analysis by periodically performing queries against the [Analysis Provider](../../../../concepts/#analysis-provider) and evaluating the results to know the impact of the deployment. Then based on these evaluating results, the deployment can be rolled back immediately to minimize any negative impacts. - -The canonical use case for this stage is to determine if your canary deployment should proceed. - -![](/images/deployment-analysis-stage.png) --Automatic rollback based on the analysis result -
- -## Prerequisites -Before enabling ADA inside the pipeline, all required Analysis Providers must be configured in the Piped Configuration according to [this guide](../../../managing-piped/adding-an-analysis-provider/). - -## Analysis by metrics -### Strategies -You can choose one of the four strategies to fit your use case. - -- `THRESHOLD`: A simple method to compare against a statically defined threshold (same as the typical analysis method up to `v0.18.0`). -- `PREVIOUS`: A method to compare metrics with the last successful deployment. -- `CANARY_BASELINE`: A method to compare the metrics between the Canary and Baseline variants. -- `CANARY_PRIMARY`(not recommended): A method to compare the metrics between the Canary and Primary variants. - -`THRESHOLD` is the simplest strategy, so it's for you if you attempt to evaluate this feature. - -`THRESHOLD` only checks if the query result falls within the statically specified range, whereas others evaluate by checking the deviation of two time-series data. -Therefore, those configuration fields are slightly different from each other. The next section covers how to configure the ADA stage for each strategy. - -### Configuration -Here is an example for the `THRESHOLD` strategy. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: ANALYSIS - with: - duration: 30m - metrics: - - strategy: THRESHOLD - provider: my-prometheus - interval: 5m - expected: - max: 0.01 - query: | - sum (rate(http_requests_total{status=~"5.*"}[5m])) - / - sum (rate(http_requests_total[5m])) -``` - -In the `provider` field, put the name of the provider in Piped configuration prepared in the [Prerequisites](#prerequisites) section. - -The `ANALYSIS` stage will continue to run for the period specified in the `duration` field. -In the meantime, Piped sends the given `query` to the Analysis Provider at each specified `interval`. - -For each query, it checks if the result is within the expected range. If it's not expected, this `ANALYSIS` stage will fail (typically the rollback stage will be started). -You can change the acceptable number of failures by setting the `failureLimit` field. - -The other strategies are basically the same, but there are slight differences. Let's take a look at them. - -##### PREVIOUS strategy -In the `PREVIOUS` strategy, Piped queries the analysis provider with the time range when the deployment was previously successful, and compares that metrics with the current metrics. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: ANALYSIS - with: - duration: 30m - metrics: - - strategy: PREVIOUS - provider: my-prometheus - deviation: HIGH - interval: 5m - query: | - sum (rate(http_requests_total{status=~"5.*"}[5m])) - / - sum (rate(http_requests_total[5m])) -``` - -In the `THRESHOLD` strategy, we used `expected` to evaluate the deployment, but here we use `deviation` instead. -The stage fails on deviation in the specified direction. In the above example, it fails if the current metrics is higher than the previous. - -##### CANARY strategy - -**With baseline**: - -In the `CANARY_BASELINE` strategy, Piped checks if there is a significant difference between the metrics of the two running variants, Canary and Baseline. 
- -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: ANALYSIS - with: - duration: 30m - metrics: - - strategy: CANARY_BASELINE - provider: my-prometheus - deviation: HIGH - interval: 5m - query: | - sum (rate(http_requests_total{job="foo-{{ .Variant.Name }}", status=~"5.*"}[5m])) - / - sum (rate(http_requests_total{job="foo-{{ .Variant.Name }}"}[5m])) -``` - -Like `PREVIOUS`, you specify the conditions for failure with `deviation`. - -It generates different queries for Canary and Baseline to compare the metrics. You can use the Variant args to template the queries. -Analysis Template uses the [Go templating engine](https://golang.org/pkg/text/template/) which only replaces values. This allows variant-specific data to be embedded in the query. - -The available built-in args currently are: - -| Property | Type | Description | -|-|-|-| -| Variant.Name | string | "canary", "baseline", or "primary" will be populated | - -Also, you can define the custom args using `baselineArgs` and `canaryArgs`, and refer them like `{{ .VariantCustom.Args.job }}`. - -```yaml - metrics: - - strategy: CANARY_BASELINE - provider: my-prometheus - deviation: HIGH - baselineArgs: - job: bar - canaryArgs: - job: baz - interval: 5m - query: cpu_usage{job="{{ .VariantCustomArgs.job }}", status=~"5.*"} -``` - -**With primary (not recommended)**: - -If for some reason you cannot provide the Baseline variant, you can also compare Canary and Primary. -However, we recommend that you compare it with Baseline that is a variant launched at the same time as Canary as much as possible. - -##### Comparison algorithm -The metric comparison algorithm in PipeCD uses a nonparametric statistical test called [Mann-Whitney U test](https://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test) to check for a significant difference between two metrics collection (like Canary and Baseline, or the previous deployment and the current metrics). - -### Example pipelines - -**Analyze the canary variant using the `THRESHOLD` strategy:** - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: K8S_CANARY_ROLLOUT - with: - replicas: 20% - - name: ANALYSIS - with: - duration: 30m - metrics: - - provider: my-prometheus - interval: 10m - expected: - max: 0.1 - query: rate(cpu_usage_total{app="foo"}[10m]) - - name: K8S_PRIMARY_ROLLOUT - - name: K8S_CANARY_CLEAN -``` - -**Analyze the primary variant using the `PREVIOUS` strategy:** - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: K8S_PRIMARY_ROLLOUT - - name: ANALYSIS - with: - duration: 30m - metrics: - - strategy: PREVIOUS - provider: my-prometheus - interval: 5m - deviation: HIGH - query: rate(cpu_usage_total{app="foo"}[5m]) -``` - -**Analyze the canary variant using the `CANARY_BASELINE` strategy:** - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: K8S_CANARY_ROLLOUT - with: - replicas: 20% - - name: K8S_BASELINE_ROLLOUT - with: - replicas: 20% - - name: ANALYSIS - with: - duration: 30m - metrics: - - strategy: CANARY_BASELINE - provider: my-prometheus - interval: 10m - deviation: HIGH - query: rate(cpu_usage_total{app="foo", variant="{{ .Variant.Name }}"}[10m]) - - name: K8S_PRIMARY_ROLLOUT - - name: K8S_CANARY_CLEAN - - name: K8S_BASELINE_CLEAN -``` - -The full list of configurable `ANALYSIS` stage fields are [here](../../../configuration-reference/#analysisstageoptions). 
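The `failureLimit` field mentioned earlier does not appear in the examples above, so here is a minimal sketch of a `THRESHOLD` analysis that tolerates up to two unexpected query results before the stage is marked as failed (the provider name and query are placeholders):

``` yaml
apiVersion: pipecd.dev/v1beta1
kind: KubernetesApp
spec:
  pipeline:
    stages:
      - name: ANALYSIS
        with:
          duration: 30m
          metrics:
            - strategy: THRESHOLD
              provider: my-prometheus
              interval: 5m
              # Accept up to 2 unexpected results before failing the stage.
              failureLimit: 2
              expected:
                max: 0.01
              query: |
                sum (rate(http_requests_total{status=~"5.*"}[5m]))
                /
                sum (rate(http_requests_total[5m]))
```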
- -See more the [example](https://github.com/pipe-cd/examples/blob/master/kubernetes/analysis-by-metrics/app.pipecd.yaml). - -## Analysis by logs - ->TBA - -## Analysis by http - ->TBA - -### [Optional] Analysis Template -Analysis Templating is a feature that allows you to define some shared analysis configurations to be used by multiple applications. These templates must be placed at the `.pipe` directory at the root of the Git repository. Any application in that Git repository can use to the defined template by specifying the name of the template in the application configuration file. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: AnalysisTemplate -spec: - metrics: - http_error_rate: - interval: 30m - provider: my-prometheus - expected: - max: 0 - query: | - sum without(status) (rate(http_requests_total{status=~"5.*", job="{{ .App.Name }}"}[1m])) - / - sum without(status) (rate(http_requests_total{job="{{ .App.Name }}"}[1m])) -``` - -Once the AnalysisTemplate is defined, you can reference from the application configuration using the `template` field. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - pipeline: - stages: - - name: ANALYSIS - with: - duration: 30m - metrics: - - template: - name: http_error_rate -``` - -Analysis Template uses the [Go templating engine](https://golang.org/pkg/text/template/) which only replaces values. This allows deployment-specific data to be embedded in the analysis template. - -The available built-in args are: - -| Property | Type | Description | -|-|-|-| -| App.Name | string | Application Name. | -| K8s.Namespace | string | The Kubernetes namespace where manifests will be applied. | - -Also, custom args is supported. Custom args placeholders can be defined as `{{ .AppCustomArgs.-A deployment was rolled back -
- -Alternatively, manually rolling back a running deployment can be done from web UI by clicking on `Cancel with rollback` button. diff --git a/docs/content/en/docs/user-guide/managing-application/secret-management.md b/docs/content/en/docs/user-guide/managing-application/secret-management.md deleted file mode 100755 index c1ddc15912..0000000000 --- a/docs/content/en/docs/user-guide/managing-application/secret-management.md +++ /dev/null @@ -1,122 +0,0 @@ ---- -title: "Secret management" -linkTitle: "Secret management" -weight: 9 -description: > - Storing secrets safely in the Git repository. ---- - -When doing GitOps, user wants to use Git as a single source of truth. But storing credentials like Kubernetes Secret or Terraform's credentials directly in Git is not safe. -This feature helps you keep that sensitive information safely in Git, right next to your application manifests. - -Basically, the flow will look like this: -- user encrypts their secret data via the PipeCD's Web UI and stores the encrypted data in Git -- `Piped` decrypts them before doing deployment tasks - -## Prerequisites - -Before using this feature, `Piped` needs to be started with a key pair for secret encryption. - -You can use the following command to generate a key pair: - -``` console -openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private-key -openssl pkey -in private-key -pubout -out public-key -``` - -Then specify them while [installing](../../../installation/install-piped/installing-on-kubernetes) the `Piped` with these options: - -``` console ---set-file secret.data.secret-public-key=PATH_TO_PUBLIC_KEY_FILE \ ---set-file secret.data.secret-private-key=PATH_TO_PRIVATE_KEY_FILE -``` - -Finally, enable this feature in Piped configuration file with `secretManagement` field as below: - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - pipedID: your-piped-id - ... - secretManagement: - type: KEY_PAIR - config: - privateKeyFile: /etc/piped-secret/secret-private-key - publicKeyFile: /etc/piped-secret/secret-public-key -``` - -## Encrypting secret data - -In order to encrypt the secret data, go to the application list page and click on the options icon at the right side of the application row, choose "Encrypt Secret" option. -After that, input your secret data and click on "ENCRYPT" button. -The encrypted data should be shown for you. Copy it to store in Git. - -![](/images/sealed-secret-application-list.png) --Application list page -
- --The form for encrypting secret data -
- -## Storing encrypted secrets in Git - -To make encrypted secrets available to an application, they must be specified in the application configuration file of that application. - -- `encryptedSecrets` contains a list of the encrypted secrets. -- `decryptionTargets` contains a list of files that are using one of the encrypted secrets and should be decrypted by `Piped`. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -# One of Piped defined app kind such as: KubernetesApp -kind: {APPLICATION_KIND} -spec: - encryption: - encryptedSecrets: - password: encrypted-data - decryptionTargets: - - secret.yaml -``` - -## Accessing encrypted secrets - -Any file in the application directory can use the `.encryptedSecrets` context to access secrets you have encrypted and stored in the application configuration. - -For example, - -- Accessing from a Kubernetes Secret manifest - -``` yaml -apiVersion: v1 -kind: Secret -metadata: - name: simple-sealed-secret -data: - password: "{{ .encryptedSecrets.password }}" -``` - -- Configuring an environment variable of a Lambda function to use an encrypted secret - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: LambdaFunction -spec: - name: HelloFunction - environments: - KEY: "{{ .encryptedSecrets.key }}" -``` - -In all cases, `Piped` will decrypt the encrypted secrets and render the decryption target files before using them to handle any deployment tasks. - -## Examples - -- [examples/kubernetes/secret-management](https://github.com/pipe-cd/examples/tree/master/kubernetes/secret-management) -- [examples/cloudrun/secret-management](https://github.com/pipe-cd/examples/tree/master/cloudrun/secret-management) -- [examples/lambda/secret-management](https://github.com/pipe-cd/examples/tree/master/lambda/secret-management) -- [examples/terraform/secret-management](https://github.com/pipe-cd/examples/tree/master/terraform/secret-management) -- [examples/ecs/secret-management](https://github.com/pipe-cd/examples/tree/master/ecs/secret-management) diff --git a/docs/content/en/docs/user-guide/managing-application/triggering-a-deployment.md b/docs/content/en/docs/user-guide/managing-application/triggering-a-deployment.md deleted file mode 100644 index 3fcb5559ab..0000000000 --- a/docs/content/en/docs/user-guide/managing-application/triggering-a-deployment.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: "Triggering a deployment" -linkTitle: "Triggering a deployment" -weight: 4 -description: > - This page describes when a deployment is triggered automatically and how to manually trigger a deployment. ---- - -PipeCD uses Git as a single source of truth; all application resources are defined declaratively and immutably in Git. Whenever a developer wants to update the application or infrastructure, they will send a pull request to that Git repository to propose the change. The state defined in Git is the desired state for the application and infrastructure running in the cluster. - -PipeCD applies the proposed changes to running resources in the cluster by triggering needed deployments for applications. The deployment's mission is to sync all running resources of the application in the cluster to the state specified in the newest commit in Git. - -By default, when a new merged pull request touches an application, a new deployment for that application will be triggered to execute the sync process. But users can configure the application to control when a new deployment should be triggered or not.
For example, using [`onOutOfSync`](#trigger-configuration) to enable the ability to attempt to resolve `OUT_OF_SYNC` state whenever a configuration drift has been detected. - -### Trigger configuration - -Configuration for the trigger used to determine whether we trigger a new deployment. There are several configurable types: -- `onCommit`: Controls triggering new deployment when new Git commits touched the application. -- `onCommand`: Controls triggering new deployment when received a new `SYNC` command. -- `onOutOfSync`: Controls triggering new deployment when application is at `OUT_OF_SYNC` state. -- `onChain`: Controls triggering new deployment when the application is counted as a node of some chains. - -See [Configuration Reference](../../configuration-reference/#deploymenttrigger) for the full configuration. - -After a new deployment was triggered, it will be queued to handle by the appropriate `piped`. And at this time the deployment pipeline was not decided yet. -`piped` schedules all deployments of applications to ensure that for each application only one deployment will be executed at the same time. -When no deployment of an application is running, `piped` picks queueing one to plan the deploying pipeline. -`piped` plans the deploying pipeline based on the application configuration and the diff between the running state and the specified state in the newest commit. -For example: - -- when the merged pull request updated a Deployment's container image or updated a mounting ConfigMap or Secret, `piped` planner will decide that the deployment should use the specified pipeline to do a progressive deployment. -- when the merged pull request just updated the `replicas` number, `piped` planner will decide to use a quick sync to scale the resources. - -You can force `piped` planner to decide to use the [QuickSync](../../../concepts/#sync-strategy) or the specified pipeline based on the commit message by configuring [CommitMatcher](../../configuration-reference/#commitmatcher) in the application configuration. - -After being planned, the deployment will be executed as the decided pipeline. The deployment execution including the state of each stage as well as their logs can be viewed in realtime at the deployment details page. - -![](/images/deployment-details.png) --A Running Deployment at the Deployment Details Page -
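For reference, here is a minimal sketch of what the trigger block described above might look like in `app.pipecd.yaml`. The field layout follows the [DeploymentTrigger](../../configuration-reference/#deploymenttrigger) reference; the `disabled` flags and their defaults are assumptions to be confirmed there.

``` yaml
apiVersion: pipecd.dev/v1beta1
kind: KubernetesApp
spec:
  trigger:
    # Keep triggering a new deployment when a merged commit touches this application.
    onCommit:
      disabled: false
    # Also trigger a new deployment to resolve an OUT_OF_SYNC state caused by drift.
    onOutOfSync:
      disabled: false
```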
- -As explained above, by default all deployments will be triggered automatically by checking the merged commits but you also can manually trigger a new deployment from web UI. -By clicking on `SYNC` button at the application details page, a new deployment for that application will be triggered to sync the application to be the state specified at the newest commit of the master branch (default branch). - -![](/images/application-details.png) --Application Details Page -
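Returning to the `CommitMatcher` mentioned above, the following is an illustrative sketch only: it assumes the matcher exposes `quickSync` and `pipeline` regular expressions that are matched against the commit message, so double-check the field names in the [CommitMatcher](../../configuration-reference/#commitmatcher) reference before relying on them.

``` yaml
apiVersion: pipecd.dev/v1beta1
kind: KubernetesApp
spec:
  commitMatcher:
    # Commits whose message matches this pattern are deployed with a quick sync.
    quickSync: "^quick-sync:"
    # Commits whose message matches this pattern always go through the full pipeline.
    pipeline: "^force-pipeline:"
  pipeline:
    stages:
      - name: K8S_CANARY_ROLLOUT
      - name: K8S_PRIMARY_ROLLOUT
      - name: K8S_CANARY_CLEAN
```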
- diff --git a/docs/content/en/docs/user-guide/managing-controlplane/_index.md b/docs/content/en/docs/user-guide/managing-controlplane/_index.md deleted file mode 100644 index efdfe70387..0000000000 --- a/docs/content/en/docs/user-guide/managing-controlplane/_index.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -title: "Managing Control Plane" -linkTitle: "Managing Control Plane" -weight: 6 -description: > - This guide is for administrators and operators wanting to install and configure PipeCD for other developers. ---- diff --git a/docs/content/en/docs/user-guide/managing-controlplane/adding-a-project.md b/docs/content/en/docs/user-guide/managing-controlplane/adding-a-project.md deleted file mode 100644 index e162c6adf5..0000000000 --- a/docs/content/en/docs/user-guide/managing-controlplane/adding-a-project.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -title: "Adding a project" -linkTitle: "Adding a project" -weight: 2 -description: > - This page describes how to set up a new project. ---- - -The control plane ops can add a new project for a team. -Project adding can be simply done from an internal web page prepared for the ops. -Because that web service is running in an `ops` pod, so in order to access it, using `kubectl port-forward` command to forward a local port to a port on the `ops` pod as following: - -``` console -kubectl port-forward service/pipecd-ops 9082 --namespace={NAMESPACE} -``` - -Then, access to [http://localhost:9082](http://localhost:9082). - -On that page, you will see the list of registered projects and a link to register new projects. -Registering a new project requires only a unique ID string and an optional description text. - -Once a new project has been registered, a static admin (username, password) will be automatically generated for the project admin. You can send that information to the project admin. The project admin first uses the provided static admin information to log in to PipeCD. After that, they can change the static admin information, configure the SSO, RBAC or disable static admin user. - -__Caution:__ The Role-Based Access Control (RBAC) setting is required to enable your team login using SSO, please make sure you have that setup before disable static admin user. \ No newline at end of file diff --git a/docs/content/en/docs/user-guide/managing-controlplane/architecture-overview.md b/docs/content/en/docs/user-guide/managing-controlplane/architecture-overview.md deleted file mode 100644 index 4166700b69..0000000000 --- a/docs/content/en/docs/user-guide/managing-controlplane/architecture-overview.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -title: "Architecture overview" -linkTitle: "Architecture overview" -weight: 1 -description: > - This page describes the architecture of control plane. ---- - -![](/images/control-plane-components.png) --Component Architecture -
- -The control plane is a centralized part of PipeCD. It contains several services as below to manage the application, deployment data and handle all requests from `piped`s and web clients: - -##### Server - -`server` handles all incoming gRPC requests from `piped`s, web clients, incoming HTTP requests such as auth callback from third party services. -It also serves all web assets including HTML, JS, CSS... -This service can be easily scaled by updating the pod number. - -##### Cache - -`cache` is a single pod service for caching internal data used by `server` service. Currently, this `cache` service is powered by `redis`. -You can configure the control plane to use a fully-managed redis cache service instead of launching a cache pod in your cluster. - -##### Ops - -`ops` is a single pod service for operating PipeCD owner's tasks. -For example, it provides an internal web page for adding and managing projects; it periodically removes the old data; it collects and saves the deployment insights. - -##### Data Store - -`Data store` is a storage for storing model data such as applications and deployments. This can be a fully-managed service such as GCP [Firestore](https://cloud.google.com/firestore), GCP [Cloud SQL](https://cloud.google.com/sql) or AWS [RDS](https://aws.amazon.com/rds/) (currently we choose [MySQL v8](https://www.mysql.com/) as supported relational data store). You can also configure the control plane to use a self-managed MySQL server. -When installing the control plane, you have to choose one of the provided data store services. - -##### File Store - -`File store` is a storage for storing stage logs, application live states. This can be a fully-managed service such as GCP [GCS](https://cloud.google.com/storage), AWS [S3](https://aws.amazon.com/s3/), or a self-managed service such as [Minio](https://github.com/minio/minio). -When installing the control plane, you have to choose one of the provided file store services. diff --git a/docs/content/en/docs/user-guide/managing-controlplane/auth.md b/docs/content/en/docs/user-guide/managing-controlplane/auth.md deleted file mode 100644 index 8b055895e9..0000000000 --- a/docs/content/en/docs/user-guide/managing-controlplane/auth.md +++ /dev/null @@ -1,74 +0,0 @@ ---- -title: "Authentication and authorization" -linkTitle: "Authentication and authorization" -weight: 3 -description: > - This page describes about PipeCD Authentication and Authorization. ---- - -![](/images/settings-project-v0.38.x.png) - -### Static Admin - -When the PipeCD owner [adds a new project](../adding-a-project/), an admin account will be automatically generated for the project. After that, PipeCD owner sends that static admin information including username, password strings to the project admin, who can use that information to log in to PipeCD web with the admin role. - -After logging, the project admin should change the provided username and password. Or disable the static admin account after configuring the single sign-on for the project. - -### Single Sign-On (SSO) - -Single sign-on (SSO) allows users to log in to PipeCD by relying on a trusted third-party service such as GitHub, GitHub Enterprise Server, Google Gmail, Bitbucket... - -Before configuring the SSO, you need an OAuth application of the using service. 
For example, GitHub SSO requires creating a GitHub OAuth application as described in this page: - -https://docs.github.com/en/developers/apps/creating-an-oauth-app - -The authorization callback URL should be `https://YOUR_PIPECD_ADDRESS/auth/callback`. - -![](/images/settings-update-sso.png) - -The project can be configured to use a shared SSO configuration (shared OAuth application) instead of needing a new one. In that case, while creating the project, the PipeCD owner specifies the name of the shared SSO configuration should be used, and then the project admin can skip configuring SSO at the settings page. - -### Role-Based Access Control (RBAC) - -Role-based access control (RBAC) allows restricting access on the PipeCD web-based on the roles of user groups within the project. Before using this feature, the SSO must be configured. - -PipeCD provides three built-in roles: - -- `Viewer`: has only permissions to view existing resources or data. -- `Editor`: has all viewer permissions, plus permissions for actions that modify state, such as manually syncing application, canceling deployment... -- `Admin`: has all editor permissions, plus permissions for updating project configurations. - -#### Configuring the PipeCD's roles - -The below table represents PipeCD's resources with actions on those resources. - -| resource | get | list | create | update | delete | -|:--------------------|:------:|:-------:|:-------:|:-------:|:-------:| -| application | ○ | ○ | ○ | ○ | ○ | -| deployment | ○ | ○ | | ○ | | -| event | | ○ | | | | -| piped | ○ | ○ | ○ | ○ | | -| project | ○ | | | ○ | | -| apiKey | | ○ | ○ | ○ | | -| insight | ○ | | | | | - - -Each role is defined as a combination of multiple policies under this format. -``` -resources=RESOURCE_NAMES;actions=ACTION_NAMES -``` - -The `*` represents all resources and all actions for a resource. -``` -resources=*;actions=ACTION_NAMES -resources=RESOURCE_NAMES;actions=* -resources=*;actions=* -``` - -#### Configuring the PipeCD's user groups - -User Group represents a relation with a specific team (GitHub)/group (Google) and an arbitrary role. All users belong to a team/group will have all permissions of that team/group. - -You cannot assign multiple roles to a team/group. - -![](/images/settings-add-user-group.png) diff --git a/docs/content/en/docs/user-guide/managing-controlplane/configuration-reference.md b/docs/content/en/docs/user-guide/managing-controlplane/configuration-reference.md deleted file mode 100644 index 721fe46cd0..0000000000 --- a/docs/content/en/docs/user-guide/managing-controlplane/configuration-reference.md +++ /dev/null @@ -1,169 +0,0 @@ ---- -title: "Configuration reference" -linkTitle: "Configuration reference" -weight: 6 -description: > - This page describes all configurable fields in the Control Plane configuration. ---- - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: ControlPlane -spec: - address: https://your-pipecd-address - ... -``` - -## Control Plane Configuration - -| Field | Type | Description | Required | -|-|-|-|-| -| stateKey | string | A randomly generated string used to sign oauth state. | Yes | -| datastore | [DataStore](#datastore) | Storage for storing application, deployment data. | Yes | -| filestore | [FileStore](#filestore) | File storage for storing deployment logs and application states. | Yes | -| cache | [Cache](#cache) | Internal cache configuration. | No | -| address | string | The address to the control plane. This is required if SSO is enabled. 
| No | -| insightCollector | [InsightCollector](#insightcollector) | Option to run collector of Insights feature. | No | -| sharedSSOConfigs | [][SharedSSOConfig](#sharedssoconfig) | List of shared SSO configurations that can be used by any project. | No | -| projects | [][Project](#project) | List of debugging/quickstart projects. Please note: do not use this to configure projects running in production. | No | - -## DataStore - -| Field | Type | Description | Required | -|-|-|-|-| -| type | string | Which type of data store should be used. Can be one of the following values-Monitoring Architecture -
- -Both the Control plane and piped agent have their own "admin servers" (the default port number is 9085), which are simple HTTP servers providing operational information such as health status, running version, go profile, and monitoring metrics. - -The piped agent collects its metrics and periodically sends them to the Control plane. The Control plane then compacts its resource usage and cluster information with the metrics sent by the piped agent and re-publishes them via its admin server. When the PipeCD monitoring feature is turned on, Prometheus, Alertmanager, and Grafana are deployed with the Control plane, and Prometheus retrieves metrics information from the Control plane's admin server. - -Developers managing the piped agent can also get metrics directly from the piped agent and monitor them with their custom monitoring service. - -## Enable monitoring system -To enable monitoring system for PipeCD, you first need to set the following value to `helm install` when [installing](../../../installation/install-controlplane/#2-preparing-control-plane-configuration-file-and-installing). - -``` ---set monitoring.enabled=true -``` - -## Dashboards -If you've already enabled monitoring system in the previous section, you can access Grafana using port forwarding: - -``` -kubectl port-forward -n {NAMESPACE} svc/{PIPECD_RELEASE_NAME}-grafana 3000:80 -``` - -#### Control Plane dashboards -There are three dashboards related to Control Plane: -- Overview - usage stats of PipeCD -- Incoming Requests - gRPC and HTTP requests stats to check for any negative impact on users -- Go - processes stats of PipeCD components - -#### Piped dashboards -Visualize the metrics of Piped registered in the Control plane. -- Overview - usage stats of piped agents -- Process - resource usage of piped agent -- Go - processes stats of piped agents. - -#### Cluster dashboards -Because cluster dashboards tracks cluster-wide metrics, defaults to disable. You can enable it with: - -``` ---monitoring.clusterStats=true -``` - -There are three dashboards that track metrics for: -- Node - nodes stats within the Kubernetes cluster where PipeCD runs on -- Pod - stats for pods that make PipeCD up -- Prometheus - stats for Prometheus itself - -## Alert notifications -If you want to send alert notifications to external services like Slack, you need to set an alertmanager configuration file. - -For example, let's say you use Slack as a receiver. Create `values.yaml` and put the following configuration to there. - -```yaml -prometheus: - alertmanagerFiles: - alertmanager.yml: - global: - slack_api_url: {YOUR_WEBHOOK_URL} - route: - receiver: slack-notifications - receivers: - - name: slack-notifications - slack_configs: - - channel: '#your-channel' -``` - -And give it to the `helm install` command when [installing](../../../installation/install-controlplane/#2-preparing-control-plane-configuration-file-and-installing). - -``` ---values=values.yaml -``` - -See [here](https://prometheus.io/docs/alerting/latest/configuration/) for more details on AlertManager's configuration. 
diff --git a/docs/content/en/docs/user-guide/managing-controlplane/registering-a-piped.md b/docs/content/en/docs/user-guide/managing-controlplane/registering-a-piped.md deleted file mode 100644 index 9719f26f8d..0000000000 --- a/docs/content/en/docs/user-guide/managing-controlplane/registering-a-piped.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: "Registering a piped" -linkTitle: "Registering a piped" -weight: 4 -description: > - This page describes how to register a new piped to a project. ---- - -The list of pipeds are shown in the Settings page. Anyone who has the project admin role can register a new piped by clicking on the `+ADD` button. - --Registering a new piped -
diff --git a/docs/content/en/docs/user-guide/managing-piped/_index.md b/docs/content/en/docs/user-guide/managing-piped/_index.md deleted file mode 100644 index ef848b8856..0000000000 --- a/docs/content/en/docs/user-guide/managing-piped/_index.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: "Managing Piped" -linkTitle: "Managing Piped" -weight: 7 -description: > - This guide is for administrators and operators wanting to install and configure piped for other developers. ---- - -In order to use Piped you need to register through PipeCD control plane, so please refer [register a Piped docs](../managing-controlplane/registering-a-piped/) if you do not have already. After registering successfully, you can monitor your Piped live state via the PipeCD console on the settings page. - -![piped-list-page](/images/piped-list-page.png) diff --git a/docs/content/en/docs/user-guide/managing-piped/adding-a-cloud-provider.md b/docs/content/en/docs/user-guide/managing-piped/adding-a-cloud-provider.md deleted file mode 100644 index e05aad45af..0000000000 --- a/docs/content/en/docs/user-guide/managing-piped/adding-a-cloud-provider.md +++ /dev/null @@ -1,134 +0,0 @@ ---- -title: "Adding a cloud provider" -linkTitle: "Adding cloud provider" -weight: 3 -description: > - This page describes how to add a cloud provider to enable its applications. ---- - -> NOTE: Starting from version v0.35.0, the CloudProvider concept is being replaced by PlatformProvider. It's a name change due to the PipeCD vision improvement. __The CloudProvider configuration is marked as deprecated, please migrate your piped agent configuration to use PlatformProvider__. - -PipeCD supports multiple clouds and multiple application kinds. -Cloud provider defines which cloud and where the application should be deployed to. -So while registering a new application, the name of a configured cloud provider is required. - -Currently, PipeCD is supporting these five kinds of cloud providers: `KUBERNETES`, `ECS`, `TERRAFORM`, `CLOUDRUN`, `LAMBDA`. -A new cloud provider can be enabled by adding a [CloudProvider](../configuration-reference/#cloudprovider) struct to the piped configuration file. -A piped can have one or multiple cloud provider instances from the same or different cloud provider kind. - -The next sections show the specific configuration for each kind of cloud provider. - -### Configuring Kubernetes cloud provider - -By default, piped deploys Kubernetes application to the cluster where the piped is running in. An external cluster can be connected by specifying the `masterURL` and `kubeConfigPath` in the [configuration](../configuration-reference/#cloudproviderkubernetesconfig). - -And, the default resources (defined at [here](https://github.com/pipe-cd/pipecd/blob/master/pkg/app/piped/platformprovider/kubernetes/resourcekey.go)) from all namespaces of the Kubernetes cluster will be watched for rendering the application state in realtime and detecting the configuration drift. In case you want to restrict piped to watch only a single namespace, let specify the namespace in the [KubernetesAppStateInformer](../configuration-reference/#kubernetesappstateinformer) field. You can also add other resources or exclude resources to/from the watching targets by that field. - -Below configuration snippet just specifies a name and type of cloud provider. 
It means the cloud provider `kubernetes-dev` will connect to the Kubernetes cluster where the piped is running in, and this cloud provider watches all of the predefined resources from all namespaces inside that cluster. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - cloudProviders: - - name: kubernetes-dev - type: KUBERNETES -``` - -See [ConfigurationReference](../configuration-reference/#cloudproviderkubernetesconfig) for the full configuration. - -### Configuring Terraform cloud provider - -A terraform cloud provider contains a list of shared terraform variables that will be applied while running the deployment of its applications. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - cloudProviders: - - name: terraform-dev - type: TERRAFORM - config: - vars: - - "project=pipecd" -``` - -See [ConfigurationReference](../configuration-reference/#cloudproviderterraformconfig) for the full configuration. - -### Configuring Cloud Run cloud provider - -Adding a Cloud Run provider requires the name of the Google Cloud project and the region name where Cloud Run service is running. A service account file for accessing to Cloud Run is also required if the machine running the piped does not have enough permissions to access. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - cloudProviders: - - name: cloudrun-dev - type: CLOUDRUN - config: - project: {GCP_PROJECT} - region: {CLOUDRUN_REGION} - credentialsFile: {PATH_TO_THE_SERVICE_ACCOUNT_FILE} -``` - -See [ConfigurationReference](../configuration-reference/#cloudprovidercloudrunconfig) for the full configuration. - -### Configuring Lambda cloud provider - -Adding a Lambda provider requires the region name where Lambda service is running. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - cloudProviders: - - name: lambda-dev - type: LAMBDA - config: - region: {LAMBDA_REGION} - profile: default - credentialsFile: {PATH_TO_THE_CREDENTIAL_FILE} -``` - -You will generally need your AWS credentials to authenticate with Lambda. Piped provides multiple methods of loading these credentials. -It attempts to retrieve credentials in the following order: -1. From the environment variables. Available environment variables are `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` and `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`. -2. From the given credentials file. (the `credentialsFile field in above sample`) -3. From the pod running in EKS cluster via STS (SecurityTokenService). -4. From the EC2 Instance Role. - -Therefore, you don't have to set credentialsFile if you use the environment variables or the EC2 Instance Role. Keep in mind the IAM role/user that you use with your Piped must possess the IAM policy permission for at least `Lambda.Function` and `Lambda.Alias` resources controll (list/read/write). - -See [ConfigurationReference](../configuration-reference/#cloudproviderlambdaconfig) for the full configuration. - -### Configuring ECS cloud provider - -Adding a ECS provider requires the region name where ECS cluster is running. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - cloudProviders: - - name: ecs-dev - type: ECS - config: - region: {ECS_CLUSTER_REGION} - profile: default - credentialsFile: {PATH_TO_THE_CREDENTIAL_FILE} -``` - -Just same as Lambda cloud provider, there are several ways to authorize Piped agent to enable it performs deployment jobs. -It attempts to retrieve credentials in the following order: -1. From the environment variables. 
Available environment variables are `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` and `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`. -2. From the given credentials file. (the `credentialsFile field in above sample`) -3. From the pod running in EKS cluster via STS (SecurityTokenService). -4. From the EC2 Instance Role. - -See [ConfigurationReference](../configuration-reference/#cloudproviderecsconfig) for the full configuration. diff --git a/docs/content/en/docs/user-guide/managing-piped/adding-a-git-repository.md b/docs/content/en/docs/user-guide/managing-piped/adding-a-git-repository.md deleted file mode 100644 index 97bf68b200..0000000000 --- a/docs/content/en/docs/user-guide/managing-piped/adding-a-git-repository.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: "Adding a git repository" -linkTitle: "Adding git repository" -weight: 2 -description: > - This page describes how to add a new Git repository. ---- - -In the `piped` configuration file, we specify the list of Git repositories should be handled by the `piped`. -A Git repository contains one or more deployable applications where each application is put inside a directory called as [application directory](../../../concepts/#application-directory). -That directory contains an application configuration file as well as application manifests. -The `piped` periodically checks the new commits and fetches the needed manifests from those repositories for executing the deployment. - -A single `piped` can be configured to handle one or more Git repositories. -In order to enable a new Git repository, let's add a new [GitRepository](../configuration-reference/#gitrepository) block to the `repositories` field in the `piped` configuration file. - -For example, with the following snippet, `piped` will take the `master` branch of [pipe-cd/examples](https://github.com/pipe-cd/examples) repository as a target Git repository for doing deployments. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - repositories: - - repoId: examples - remote: git@github.com:pipe-cd/examples.git - branch: master -``` - -In most of the cases, we want to deal with private Git repositories. For accessing those private repositories, `piped` needs a private SSH key, which can be configured while [installing](../../../installation/install-piped/installing-on-kubernetes/) with `secret.sshKey` in the Helm chart. - -``` console -helm install dev-piped pipecd/piped --version={VERSION} \ - --set-file config.data={PATH_TO_PIPED_CONFIG_FILE} \ - --set-file secret.data.piped-key={PATH_TO_PIPED_KEY_FILE} \ - --set-file secret.data.ssh-key={PATH_TO_PRIVATE_SSH_KEY_FILE} -``` - -You can see this [configuration reference](../configuration-reference/#git) for more configurable fields about Git commands. - -Currently, `piped` allows configuring only one private SSH key for all specified Git repositories. So you can configure the same SSH key for all of those private repositories, or break them into separate `piped`s. In the near future, we also want to update `piped` to support loading multiple SSH keys. 
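For example, a single `piped` that handles two private repositories with the one configured SSH key could be sketched as below. The `examples` entry comes from the snippet above, while the second repository and the key path under `/etc/piped-secret/` are placeholders for your own setup.

``` yaml
apiVersion: pipecd.dev/v1beta1
kind: Piped
spec:
  git:
    # The same private SSH key is used to clone every repository listed below.
    sshKeyFile: /etc/piped-secret/ssh-key
  repositories:
    - repoId: examples
      remote: git@github.com:pipe-cd/examples.git
      branch: master
    - repoId: manifests
      remote: git@github.com:your-org/manifests.git
      branch: main
```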
diff --git a/docs/content/en/docs/user-guide/managing-piped/adding-a-platform-provider.md b/docs/content/en/docs/user-guide/managing-piped/adding-a-platform-provider.md deleted file mode 100644 index d231c26e38..0000000000 --- a/docs/content/en/docs/user-guide/managing-piped/adding-a-platform-provider.md +++ /dev/null @@ -1,132 +0,0 @@ ---- -title: "Adding a platform provider" -linkTitle: "Adding platform provider" -weight: 4 -description: > - This page describes how to add a platform provider to enable its applications. ---- - -PipeCD supports multiple platforms and multiple application kinds which run on those platforms. -Platform provider defines which platform and where the application should be deployed to. -So while registering a new application, the name of a configured platform provider is required. - -Currently, PipeCD is supporting these five kinds of platform providers: `KUBERNETES`, `ECS`, `TERRAFORM`, `CLOUDRUN`, `LAMBDA`. -A new platform provider can be enabled by adding a [PlatformProvider](../configuration-reference/#platformprovider) struct to the piped configuration file. -A piped can have one or multiple platform provider instances from the same or different platform provider kind. - -The next sections show the specific configuration for each kind of platform provider. - -### Configuring Kubernetes platform provider - -By default, piped deploys Kubernetes application to the cluster where the piped is running in. An external cluster can be connected by specifying the `masterURL` and `kubeConfigPath` in the [configuration](../configuration-reference/#platformproviderkubernetesconfig). - -And, the default resources (defined at [here](https://github.com/pipe-cd/pipecd/blob/master/pkg/app/piped/platformprovider/kubernetes/resourcekey.go)) from all namespaces of the Kubernetes cluster will be watched for rendering the application state in realtime and detecting the configuration drift. In case you want to restrict piped to watch only a single namespace, let specify the namespace in the [KubernetesAppStateInformer](../configuration-reference/#kubernetesappstateinformer) field. You can also add other resources or exclude resources to/from the watching targets by that field. - -Below configuration snippet just specifies a name and type of platform provider. It means the platform provider `kubernetes-dev` will connect to the Kubernetes cluster where the piped is running in, and this platform provider watches all of the predefined resources from all namespaces inside that cluster. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - platformProviders: - - name: kubernetes-dev - type: KUBERNETES -``` - -See [ConfigurationReference](../configuration-reference/#platformproviderkubernetesconfig) for the full configuration. - -### Configuring Terraform platform provider - -A terraform platform provider contains a list of shared terraform variables that will be applied while running the deployment of its applications. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - platformProviders: - - name: terraform-dev - type: TERRAFORM - config: - vars: - - "project=pipecd" -``` - -See [ConfigurationReference](../configuration-reference/#platformproviderterraformconfig) for the full configuration. - -### Configuring Cloud Run platform provider - -Adding a Cloud Run provider requires the name of the Google Cloud project and the region name where Cloud Run service is running. 
A service account file for accessing to Cloud Run is also required if the machine running the piped does not have enough permissions to access. - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - platformProviders: - - name: cloudrun-dev - type: CLOUDRUN - config: - project: {GCP_PROJECT} - region: {CLOUDRUN_REGION} - credentialsFile: {PATH_TO_THE_SERVICE_ACCOUNT_FILE} -``` - -See [ConfigurationReference](../configuration-reference/#platformprovidercloudrunconfig) for the full configuration. - -### Configuring Lambda platform provider - -Adding a Lambda provider requires the region name where Lambda service is running. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - platformProviders: - - name: lambda-dev - type: LAMBDA - config: - region: {LAMBDA_REGION} - profile: default - credentialsFile: {PATH_TO_THE_CREDENTIAL_FILE} -``` - -You will generally need your AWS credentials to authenticate with Lambda. Piped provides multiple methods of loading these credentials. -It attempts to retrieve credentials in the following order: -1. From the environment variables. Available environment variables are `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` and `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`. -2. From the given credentials file. (the `credentialsFile field in above sample`) -3. From the pod running in EKS cluster via STS (SecurityTokenService). -4. From the EC2 Instance Role. - -Therefore, you don't have to set credentialsFile if you use the environment variables or the EC2 Instance Role. Keep in mind the IAM role/user that you use with your Piped must possess the IAM policy permission for at least `Lambda.Function` and `Lambda.Alias` resources controll (list/read/write). - -See [ConfigurationReference](../configuration-reference/#platformproviderlambdaconfig) for the full configuration. - -### Configuring ECS platform provider - -Adding a ECS provider requires the region name where ECS cluster is running. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - platformProviders: - - name: ecs-dev - type: ECS - config: - region: {ECS_CLUSTER_REGION} - profile: default - credentialsFile: {PATH_TO_THE_CREDENTIAL_FILE} -``` - -Just same as Lambda platform provider, there are several ways to authorize Piped agent to enable it performs deployment jobs. -It attempts to retrieve credentials in the following order: -1. From the environment variables. Available environment variables are `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` and `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`. -2. From the given credentials file. (the `credentialsFile field in above sample`) -3. From the pod running in EKS cluster via STS (SecurityTokenService). -4. From the EC2 Instance Role. - -See [ConfigurationReference](../configuration-reference/#platformproviderecsconfig) for the full configuration. diff --git a/docs/content/en/docs/user-guide/managing-piped/adding-an-analysis-provider.md b/docs/content/en/docs/user-guide/managing-piped/adding-an-analysis-provider.md deleted file mode 100644 index cc87d3a416..0000000000 --- a/docs/content/en/docs/user-guide/managing-piped/adding-an-analysis-provider.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: "Adding an analysis provider" -linkTitle: "Adding analysis provider" -weight: 6 -description: > - This page describes how to add an analysis provider for doing deployment analysis. 
---- - -To enable [Automated deployment analysis](../../managing-application/customizing-deployment/automated-deployment-analysis/) feature, you have to set the needed information for Piped to connect to the [Analysis Provider](../../../concepts/#analysis-provider). - -Currently, PipeCD supports the following providers: -- [Prometheus](https://prometheus.io/) -- [Datadog](https://datadoghq.com/) - - -## Prometheus -Piped queries the [range query endpoint](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries) to obtain metrics used to evaluate the deployment. - -You need to define the Prometheus server address accessible to Piped. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - analysisProviders: - - name: prometheus-dev - type: PROMETHEUS - config: - address: https://your-prometheus.dev -``` -The full list of configurable fields are [here](../configuration-reference/#analysisproviderprometheusconfig). - -## Datadog -Piped queries the [MetricsApi.QueryMetrics](https://docs.datadoghq.com/api/latest/metrics/#query-timeseries-points) endpoint to obtain metrics used to evaluate the deployment. - -```yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - analysisProviders: - - name: datadog-dev - type: DATADOG - config: - apiKeyFile: /etc/piped-secret/datadog-api-key - applicationKeyFile: /etc/piped-secret/datadog-application-key -``` - -The full list of configurable fields are [here](../configuration-reference/#analysisproviderdatadogconfig). - -If you choose `Helm` as the installation method, we recommend using `--set-file` to mount the key files while performing the [upgrading process](../../../installation/install-piped/installing-on-kubernetes/#in-the-cluster-wide-mode). - -```console ---set-file secret.data.datadog-api-key={PATH_TO_API_KEY_FILE} \ ---set-file secret.data.datadog-application-key={PATH_TO_APPLICATION_KEY_FILE} -``` diff --git a/docs/content/en/docs/user-guide/managing-piped/adding-helm-chart-repository-or-registry.md b/docs/content/en/docs/user-guide/managing-piped/adding-helm-chart-repository-or-registry.md deleted file mode 100644 index 79581d2d65..0000000000 --- a/docs/content/en/docs/user-guide/managing-piped/adding-helm-chart-repository-or-registry.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -title: "Adding a Helm chart repository or registry" -linkTitle: "Adding Helm chart repo or registry" -weight: 5 -description: > - This page describes how to add a new Helm chart repository or registry. ---- - -PipeCD supports Kubernetes applications that are using Helm for templating and packaging. In addition to being able to deploy a Helm chart that is sourced from the same Git repository (`local chart`) or from a different Git repository (`remote git chart`), an application can use a chart sourced from a Helm chart repository. - -### Adding Helm chart repository - -A Helm [chart repository](https://helm.sh/docs/topics/chart_repository/) is a location backed by an HTTP server where packaged charts can be stored and shared. Before an application can be configured to use a chart from a Helm chart repository, that chart repository must be enabled in the related `piped` by adding the [ChartRepository](../configuration-reference/#chartrepository) struct to the piped configuration file. - -``` yaml -# piped configuration file -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - chartRepositories: - - name: pipecd - address: https://charts.pipecd.dev -``` - -For example, the above snippet enables the official chart repository of PipeCD project. 
After that, you can configure the Kubernetes application to load a chart from that chart repository for executing the deployment. - -``` yaml -# Application configuration file. -apiVersion: pipecd.dev/v1beta1 -kind: KubernetesApp -spec: - input: - # Helm chart sourced from a Helm Chart Repository. - helmChart: - repository: pipecd - name: helloworld - version: v0.5.0 -``` - -In case the chart repository is backed by HTTP basic authentication, the username and password strings are required in [configuration](../configuration-reference/#chartrepository). - -### Adding Helm chart registry - -A Helm chart [registry](https://helm.sh/docs/topics/registries/) is a mechanism enabled by default in Helm 3.8.0 and later that allows the OCI registry to be used for storage and distribution of Helm charts. - -Before an application can be configured to use a chart from a registry, that registry must be enabled in the related `piped` by adding the [ChartRegistry](../configuration-reference/#chartregistry) struct to the piped configuration file if authentication is enabled at the registry. - -``` yaml -# piped configuration file -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - ... - chartRegistries: - - type: OCI - address: registry.example.com - username: sample-username - password: sample-password -``` diff --git a/docs/content/en/docs/user-guide/managing-piped/configuration-reference.md b/docs/content/en/docs/user-guide/managing-piped/configuration-reference.md deleted file mode 100644 index 003776225c..0000000000 --- a/docs/content/en/docs/user-guide/managing-piped/configuration-reference.md +++ /dev/null @@ -1,269 +0,0 @@ ---- -title: "Configuration reference" -linkTitle: "Configuration reference" -weight: 9 -description: > - This page describes all configurable fields in the piped configuration. ---- - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - projectID: ... - pipedID: ... - ... -``` - -## Piped Configuration - -| Field | Type | Description | Required | -|-|-|-|-| -| projectID | string | The identifier of the PipeCD project where this piped belongs to. | Yes | -| pipedID | string | The generated ID for this piped. | Yes | -| pipedKeyFile | string | The path to the file containing the generated key string for this piped. | Yes | -| pipedKeyData | string | Base64 encoded string of Piped key. Either pipedKeyFile or pipedKeyData must be set. | Yes | -| apiAddress | string | The address used to connect to the Control Plane's API in format `host:port`. | Yes | -| syncInterval | duration | How often to check whether an application should be synced. Default is `1m`. | No | -| appConfigSyncInterval | duration | How often to check whether application configuration files should be synced. Default is `1m`. | No | -| git | [Git](#git) | Git configuration needed for Git commands. | No | -| repositories | [][Repository](#gitrepository) | List of Git repositories this piped will handle. | No | -| chartRepositories | [][ChartRepository](#chartrepository) | List of Helm chart repositories that should be added while starting up. | No | -| chartRegistries | [][ChartRegistry](#chartregistry) | List of helm chart registries that should be logged in while starting up. | No | -| cloudProviders | [][CloudProvider](#cloudprovider) | List of cloud providers can be used by this piped. This field is deprecated, use `platformProviders` instead. | No | -| platformProviders | [][PlatformProvider](#platformprovider) | List of platform providers can be used by this piped. 
| No | -| analysisProviders | [][AnalysisProvider](#analysisprovider) | List of analysis providers that can be used by this piped. | No | -| eventWatcher | [EventWatcher](#eventwatcher) | Optional Event watcher settings. | No | -| secretManagement | [SecretManagement](#secretmanagement) | The secret management method to be used. | No | -| notifications | [Notifications](#notifications) | Sending notifications to Slack, Webhook... | No | -| appSelector | map[string]string | List of labels to filter all applications this piped will handle. Currently, it is only used to filter the applications suggested for adding from the control plane. | No | - -## Git - -| Field | Type | Description | Required | -|-|-|-|-| -| username | string | The username that will be configured for the `git` user. Default is `piped`. | No | -| email | string | The email that will be configured for the `git` user. Default is `pipecd.dev@gmail.com`. | No | -| sshConfigFilePath | string | Where to write the ssh config file. Default is `$HOME/.ssh/config`. | No | -| host | string | The host name. Default is `github.com`. | No | -| hostName | string | The hostname or IP address of the remote git server. Default is the same value as host. | No | -| sshKeyFile | string | The path to the private ssh key file. This will be used to clone the source code of the specified git repositories. | No | -| sshKeyData | string | Base64 encoded string of the SSH key. | No | - -## GitRepository - -| Field | Type | Description | Required | -|-|-|-|-| -| repoID | string | Unique identifier of the repository. This must be unique in the piped scope. | Yes | -| remote | string | Remote address of the repository used to clone the source code. e.g. `git@github.com:org/repo.git` | Yes | -| branch | string | The branch to be handled. | Yes | - -## ChartRepository - -| Field | Type | Description | Required | -|-|-|-|-| -| type | string | The repository type. Currently, HTTP and GIT are supported. Default is HTTP. | No | -| name | string | The name of the Helm chart repository. Note that this is not a Git repository but a [Helm chart repository](https://helm.sh/docs/topics/chart_repository/). | Yes if type is HTTP | -| address | string | The address of the Helm chart repository. | Yes if type is HTTP | -| username | string | Username used for a repository protected by HTTP basic authentication. | No | -| password | string | Password used for a repository protected by HTTP basic authentication. | No | -| insecure | bool | Whether to skip TLS certificate checks for the repository or not. | No | -| gitRemote | string | Remote address of the Git repository used to clone Helm charts. | Yes if type is GIT | -| sshKeyFile | string | The path to the private ssh key file used while cloning Helm charts from the above Git repository. | No | - -## ChartRegistry - -| Field | Type | Description | Required | -|-|-|-|-| -| type | string | The registry type. Currently, only OCI is supported. Default is OCI. | No | -| address | string | The address of the registry. | Yes | -| username | string | Username used for the registry authentication. | No | -| password | string | Password used for the registry authentication. | No | - -## CloudProvider - -This field is deprecated, please use [PlatformProvider](#platformprovider) instead. - -## PlatformProvider - -| Field | Type | Description | Required | -|-|-|-|-| -| name | string | The name of the platform provider. | Yes | -| type | string | The platform provider type. 
Must be one of the following values: `KUBERNETES`, `ECS`, `TERRAFORM`, `CLOUDRUN`, `LAMBDA`. -Deployment was triggered, planned and completed successfully -
- -![](/images/slack-notification-piped-started.png) --A piped has been started -
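The screenshots above are Slack notifications. A Slack channel is configured as a notification receiver in the same `notifications` block used for webhooks below; the following is only a rough sketch with placeholder names and values, and the exact field names (for example `hookURL`) should be double-checked against the configuration reference for Notifications.

``` yaml
apiVersion: pipecd.dev/v1beta1
kind: Piped
spec:
  notifications:
    routes:
      # Sending all events to a Slack channel (placeholder names).
      - name: all-events-to-slack
        receiver: dev-slack-channel
    receivers:
      - name: dev-slack-channel
        slack:
          # Incoming webhook URL of the Slack channel (placeholder value).
          hookURL: https://hooks.slack.com/services/XXXXX
```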
- - -For detailed configuration, please check the [configuration reference for Notifications](../configuration-reference/#notifications) section. - -### Sending notifications to external services via webhook - -``` yaml -apiVersion: pipecd.dev/v1beta1 -kind: Piped -spec: - notifications: - routes: - # Sending all events to an external service. - - name: all-events-to-an-external-service - receiver: a-webhook-service - receivers: - - name: a-webhook-service - webhook: - url: {WEBHOOK_SERVICE_URL} - signatureValue: {RANDOM_SIGNATURE_STRING} -``` - -For detailed configuration, please check the [configuration reference for NotificationReceiverWebhook](../configuration-reference/#notificationreceiverwebhook) section. diff --git a/docs/content/en/docs/user-guide/managing-piped/remote-upgrade-remote-config.md b/docs/content/en/docs/user-guide/managing-piped/remote-upgrade-remote-config.md deleted file mode 100644 index eec51632dd..0000000000 --- a/docs/content/en/docs/user-guide/managing-piped/remote-upgrade-remote-config.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: "Remote upgrade and remote config" -linkTitle: "Remote upgrade and remote config" -weight: 1 -description: > - This page describes how to use the remote upgrade and remote config features. ---- - -## Remote upgrade - -Remote upgrade is the ability to restart the currently running Piped with another version from the web console. -This reduces the effort involved in updating Piped to newer versions. -All Pipeds that are run with the provided Piped container image can use this feature. -It means Pipeds running on a Kubernetes cluster, a virtual machine, or a serverless service can be upgraded remotely from the web console. - -To use this feature, you must run Piped with the `/launcher` command instead of the usual `/piped` command (a minimal sketch is shown after the screenshot below). -Please check the [installation](../../../installation/install-piped/) guide for each environment to see the details. - -After starting Piped with the remote-upgrade feature, you can go to the Settings page and click the `UPGRADE` button in the top-right corner. -A dialog will be shown for selecting which Pipeds you want to upgrade and which version they should run. - -![](/images/settings-remote-upgrade.png) --Select a list of Pipeds to upgrade from Settings page -
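The sketch below illustrates what starting Piped through the launcher could look like in the container spec of a Piped Deployment on Kubernetes. The image name, the `{VERSION}` placeholder, and the flags are assumptions for illustration only; follow the installation guide for your environment for the exact manifest.

``` yaml
# Illustrative fragment of a Piped Deployment manifest (assumed values).
containers:
  - name: piped
    # Launcher image instead of the plain piped image (assumed image name).
    image: ghcr.io/pipe-cd/launcher:{VERSION}
    command: ["/launcher"]
    args:
      - launcher
      # Assumed flag: load the piped configuration from a mounted file.
      - --config-file=/etc/piped-config/config.yaml
```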
- -## Remote config - -Although remote-upgrade allows you to remotely restart your Pipeds with any new version you want, if your Piped loads its config from the local environment where it is running, you still need to restart Piped manually after making any change to that config data. Remote-config removes that kind of manual operation. - -Remote-config is the ability to load Piped config data from a remote location such as a Git repository. It also watches the config periodically to detect any changes and restarts Piped to reflect the new configuration automatically. - -This feature requires the remote-upgrade feature to be enabled simultaneously. Currently, we only support remote config from a Git repository, but other remote locations could be supported in the future. Please check the [installation](../../../installation/install-piped/) guide for each environment to see how to configure Piped to load a remote config file. - - -## Summary - -- With `remote-upgrade` you can upgrade your Piped to a newer version with a click on the web console -- With `remote-config` you can make your Piped use the latest config data just by updating its config file stored in a Git repository diff --git a/docs/content/en/docs/user-guide/plan-preview.md b/docs/content/en/docs/user-guide/plan-preview.md deleted file mode 100644 index bbcafab16e..0000000000 --- a/docs/content/en/docs/user-guide/plan-preview.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -title: "Confidently review your changes with Plan Preview" -linkTitle: "Plan preview" -weight: 4 -description: > - Enables the ability to preview the deployment plan against a given commit before merging. ---- - -To help developers review a pull request with a better experience and more confidence before approving it to trigger the actual deployments, -PipeCD provides a way to preview the deployment plan of all applications updated by that pull request. - -The result of the plan-preview process currently includes: - -- which applications will be deployed once the pull request gets merged -- which deployment strategy (QUICK_SYNC or PIPELINE_SYNC) will be used -- which resources will be added, deleted, or modified - -This feature is available for all application kinds: KUBERNETES, TERRAFORM, CLOUD_RUN, LAMBDA and Amazon ECS. - -![](/images/plan-preview-comment.png) --PlanPreview with GitHub actions pipe-cd/actions-plan-preview -
- -## Prerequisites - -- Ensure the version of your Piped is at least `v0.11.0`. -- Have an API key with the `READ_WRITE` role to authenticate with PipeCD's Control Plane. A new key can be generated from the `settings/api-key` page of your PipeCD web console. - -## Usage - -The plan-preview result can be requested by using the `pipectl` command-line tool as below: - -``` console -pipectl plan-preview \ - --address={ PIPECD_CONTROL_PLANE_ADDRESS } \ - --api-key={ PIPECD_API_KEY } \ - --repo-remote-url={ REPO_REMOTE_GIT_SSH_URL } \ - --head-branch={ HEAD_BRANCH } \ - --head-commit={ HEAD_COMMIT } \ - --base-branch={ BASE_BRANCH } -``` - -You can run it locally or integrate it into your CI system to run automatically when a new pull request is opened/updated. Use `--help` to see more options. - -``` console -pipectl plan-preview --help -``` - -## GitHub Actions - -If you are using GitHub Actions, you can seamlessly integrate our prepared [actions-plan-preview](https://github.com/pipe-cd/actions-plan-preview) into your workflows. This automatically comments the plan-preview result on the pull request when it is opened or updated. You can also trigger a plan-preview run manually by leaving a comment `/pipecd plan-preview` on the pull request. diff --git a/docs/main.go b/docs/main.go index 9bcd1471af..3546d033a3 100644 --- a/docs/main.go +++ b/docs/main.go @@ -20,12 +20,16 @@ import ( "net/http" "os" "os/signal" + "strings" "syscall" "time" ) const dir = "/public" +// Don't update here manually. /hack/gen-release-docs.sh does. +const latestPath = "/docs-v0.44.x/" + func main() { var ( doneCh = make(chan error) @@ -38,6 +42,12 @@ func main() { ) mux.Handle("/", fs) + // Redirect /docs/ to /docs-{latest-version}/ + mux.HandleFunc("/docs/", func(w http.ResponseWriter, r *http.Request) { + latestPattern := strings.Replace(r.URL.Path, "/docs/", latestPath, 1) + http.Redirect(w, r, latestPattern, 307) + }) + defer func() { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() diff --git a/hack/gen-release-docs.sh b/hack/gen-release-docs.sh index 7aeaa0d99e..a368117d89 100755 --- a/hack/gen-release-docs.sh +++ b/hack/gen-release-docs.sh @@ -74,4 +74,7 @@ EOT tail -n +$LINE_NUM docs/config.toml >> docs/config.toml.tmp mv docs/config.toml.tmp docs/config.toml +# Update docs/main.go +sed -i '' "s/const latestPath.*/const latestPath = \"\/docs-"$VERSION"\/\"/g" docs/main.go + echo "Version docs has been prepared successfully at $CONTENT_DIR/docs-$VERSION/" diff --git a/hack/gen-stable-docs.sh b/hack/gen-stable-docs.sh deleted file mode 100755 index 4990806179..0000000000 --- a/hack/gen-stable-docs.sh +++ /dev/null @@ -1,48 +0,0 @@ -#!/usr/bin/env bash - -# Copyright 2023 The PipeCD Authors. - -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# get version from the RELEASE file and create the version docs string -# for example, with the release file content: -# version: v0.21.0 -# the version docs string will be v0.21.x -LATEST_DOCS_VERSION="$(head -n 1 RELEASE | cut -d ' ' -f 2 | cut -d '.' 
-f -2).x" - -# parse params -if [[ -z "$1" ]] -then - STABLE_DOCS_VERSION=$LATEST_DOCS_VERSION -else - STABLE_DOCS_VERSION=$1 -fi - -echo "Sync stable docs with docs at version $STABLE_DOCS_VERSION" - -CONTENT_DIR=docs/content/en - -rm -rf $CONTENT_DIR/docs -cp -rf $CONTENT_DIR/docs-$STABLE_DOCS_VERSION $CONTENT_DIR/docs -cat <