Merge pull request #782 from aws-quickstart/task/1.10.1-release-prep
1.10.1 release - updated helm charts, libs, cdk
shapirov103 authored Jul 20, 2023
2 parents d878d2b + 671bf35 commit 1345a33
Showing 36 changed files with 88 additions and 83 deletions.
18 changes: 8 additions & 10 deletions .github/workflows/linkcheck.json
@@ -13,15 +13,13 @@
    }
  ],
  "ignorePatterns": [
-    {
-      "pattern": [
-        "localhost"
-      ]
-    },
-    {
-      "pattern": [
-        "127.0.0.1"
-      ]
-    }
+    { "pattern": "localhost" },
+    { "pattern": "127.0.0.1" },
+    { "pattern": "../api" },
+    { "pattern": "https://helm.datadoghq.com" },
+    { "pattern": "https://sqs" },
+    { "pattern": "www.rsa-2048.example.com" },
+    { "pattern": "rsa-2048.example.com" },
+    { "pattern": "https://ingress-red-saas.instana.io/" }
  ]
}
3 changes: 3 additions & 0 deletions Makefile
@@ -31,6 +31,9 @@ list:
	$(DEPS)
	$(CDK) list

+markdown-link-check:
+	find docs -name "*.md" | xargs -n 1 markdown-link-check -q -c .github/workflows/linkcheck.json

run-test:
npm test

4 changes: 2 additions & 2 deletions README.md
@@ -44,14 +44,14 @@ aws --version
Install CDK matching the current version of the Blueprints QuickStart (which can be found in package.json).

```bash
-npm install -g aws-cdk@2.86.0
+npm install -g aws-cdk@2.88.0
```

Verify the installation.

```bash
cdk --version
-# must output 2.86.0
+# must output 2.88.0
```

Create a new CDK project. We use `typescript` for this example.
4 changes: 2 additions & 2 deletions docs/README.md
@@ -44,14 +44,14 @@ aws --version
Install CDK matching the current version of the Blueprints QuickStart (which can be found in package.json).

```bash
-npm install -g aws-cdk@2.86.0
+npm install -g aws-cdk@2.88.0
```

Verify the installation.

```bash
cdk --version
-# must output 2.86.0
+# must output 2.88.0
```

Create a new CDK project. We use `typescript` for this example.
6 changes: 3 additions & 3 deletions docs/addons/ack-addon.md
@@ -25,7 +25,7 @@ const blueprint = blueprints.EksBlueprint.builder()
.build(app, 'my-stack-name');
```

-> Pattern #2: This installs the AWS Controllers for Kubernetes (ACK) controller for EC2 using the service name, internally referencing the service mapping values for helm options. After installing this EC2 ACK controller, the instructions in [Provision ACK Resource](https://preview--eksworkshop-v2-next.netlify.app/docs/gitops/controlplanes/ack/configureResources) can be used to provision EC2 namespace `SecurityGroup` resources required for creating an Amazon RDS database, as an example.
+> Pattern #2: This installs the AWS Controllers for Kubernetes (ACK) controller for EC2 using the service name, internally referencing the service mapping values for helm options. After installing this EC2 ACK controller, the instructions in [Provision ACK Resource](https://eksworkshop-v2-next.netlify.app/docs/gitops/controlplanes/ack/configureResources) can be used to provision EC2 namespace `SecurityGroup` resources required for creating an Amazon RDS database, as an example.
```typescript
import * as cdk from 'aws-cdk-lib';
@@ -44,7 +44,7 @@ const blueprint = blueprints.EksBlueprint.builder()
.build(app, 'my-stack-name');
```
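
The add-on construction elided above might look roughly like the following sketch. The `AckServiceName` enum and the `serviceName` option are assumptions based on the service-mapping description, not code shown in this diff:

```typescript
import * as blueprints from '@aws-quickstart/eks-blueprints';

// A minimal sketch, assuming an AckServiceName enum keyed by AWS service;
// helm options are resolved internally from the service mapping values.
const ec2AckAddOn = new blueprints.addons.AckAddOn({
    serviceName: blueprints.AckServiceName.EC2,
});
```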

-> Pattern #3: This installs the AWS Controllers for Kubernetes (ACK) controller for RDS with user-specified values. After installing this RDS ACK controller, the instructions in [Provision ACK Resource](https://preview--eksworkshop-v2-next.netlify.app/docs/gitops/controlplanes/ack/configureResources) can be used to provision an Amazon RDS database using the RDS ACK controller, as an example.
+> Pattern #3: This installs the AWS Controllers for Kubernetes (ACK) controller for RDS with user-specified values. After installing this RDS ACK controller, the instructions in [Provision ACK Resource](https://eksworkshop-v2-next.netlify.app/docs/gitops/controlplanes/ack/configureResources) can be used to provision an Amazon RDS database using the RDS ACK controller, as an example.
```typescript
import * as cdk from 'aws-cdk-lib';
@@ -111,7 +111,7 @@ replicaset.apps/rds-chart-5f6f5b8fc7 1 1 1 5m36s
## aws-controllers-k8s references

Please refer to the following aws-controllers-k8s references for more information:
-- [ACK Workshop](https://preview--eksworkshop-v2-next.netlify.app/docs/gitops/controlplanes/ack/)
+- [ACK Workshop](https://eksworkshop-v2-next.netlify.app/docs/gitops/controlplanes/ack/)
- [ECR Gallery for ACK](https://gallery.ecr.aws/aws-controllers-k8s/)
- [ACK GitHub](https://github.com/aws-controllers-k8s/community)

16 changes: 8 additions & 8 deletions docs/addons/argo-cd.md
@@ -1,12 +1,12 @@
# Argo CD Add-on

-[Argo CD](https://argoproj.github.io/argo-cd/) is a declarative, GitOps continuous delivery tool for Kubernetes. The Argo CD add-on provisions [Argo CD](https://argoproj.github.io/argo-cd/) into an EKS cluster, and can optionally bootstrap your workloads from public and private Git repositories.
+[Argo CD](https://argo-cd.readthedocs.io/en/stable/) is a declarative, GitOps continuous delivery tool for Kubernetes. The Argo CD add-on provisions [Argo CD](https://argo-cd.readthedocs.io/en/stable/) into an EKS cluster, and can optionally bootstrap your workloads from public and private Git repositories.

The Argo CD add-on allows platform administrators to combine cluster provisioning and workload bootstrapping in a single step and enables use cases such as replicating an existing running production cluster in a different region in a matter of minutes. This is important for business continuity and disaster recovery cases as well as for cross-regional availability and geographical expansion.

-Please see the documentation below for details on automatic bootstrapping with the ArgoCD add-on. If you prefer manual bootstrapping (once your cluster is deployed with this add-on included), you can find instructions on getting started with Argo CD in our [Getting Started](/getting-started/#deploy-workloads-with-argocd) guide.
+Please see the documentation below for details on automatic bootstrapping with the ArgoCD add-on. If you prefer manual bootstrapping (once your cluster is deployed with this add-on included), you can find instructions on getting started with Argo CD in our [Getting Started](../getting-started.md#deploy-workloads-with-argocd) guide.

-Full Argo CD project documentation [can be found here](https://argoproj.github.io/argo-cd/).
+Full Argo CD project documentation [can be found here](https://argo-cd.readthedocs.io/en/stable/).

## Usage

@@ -26,12 +26,12 @@ const blueprint = blueprints.EksBlueprint.builder()
.build(app, 'my-stack-name');
```

-The above will create an `argocd` namespace and install all Argo CD components. In order to bootstrap workloads, you will need to change the default ArgoCD admin password and add repositories as specified in the [Getting Started](https://argoproj.github.io/argo-cd/getting_started/#port-forwarding) documentation.
+The above will create an `argocd` namespace and install all Argo CD components. In order to bootstrap workloads, you will need to change the default ArgoCD admin password and add repositories as specified in the [Getting Started](https://argo-cd.readthedocs.io/en/stable/getting_started/#port-forwarding) documentation.

## Functionality

1. Creates the namespace specified in the construction parameter (`argocd` by default).
-2. Deploys the [`argo-cd`](https://argoproj.github.io/argo-helm) Helm chart into the cluster.
+2. Deploys the [`argo-cd`](https://argoproj.github.io/argo-helm/) Helm chart into the cluster.
3. Allows specifying an `ApplicationRepository`, selecting the required authentication method (SSH key, username/password, or username/token). Credentials are expected to be set in AWS Secrets Manager and replicated to the desired region. If a bootstrap repository is specified, creates the initial bootstrap application, which may be leveraged to bootstrap workloads and/or other add-ons through GitOps.
4. Allows setting the initial admin password through AWS Secrets Manager, replicating to the desired region.
5. Supports [standard helm configuration options](./index.md#standard-helm-add-on-configuration-options).
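
As a rough sketch of items 1–4 above — a hypothetical construction assuming the `bootstrapRepo` and `adminPasswordSecretName` options; the repository URL, path, revision, and secret name are illustrative placeholders, not values from this commit:

```typescript
import * as blueprints from '@aws-quickstart/eks-blueprints';

// A minimal sketch: bootstrap an App of Apps from a public repository and
// source the initial admin password from a Secrets Manager secret
// replicated to the target region. All values are placeholders.
const argoCdAddOn = new blueprints.addons.ArgoCDAddOn({
    namespace: 'argocd',
    adminPasswordSecretName: 'argo-admin-secret',
    bootstrapRepo: {
        repoUrl: 'https://github.com/aws-samples/eks-blueprints-add-ons.git',
        path: 'chart',
        targetRevision: 'main',
    },
});
```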
@@ -55,7 +55,7 @@ You can change the admin password through the Secrets Manager, but it will requi

## Bootstrapping

-The Blueprints framework provides an approach to bootstrap workloads and/or additional add-ons from a customer GitOps repository. In the general case, the bootstrap GitOps repository may contain an [App of Apps](https://argoproj.github.io/argo-cd/operator-manual/cluster-bootstrapping/#app-of-apps-pattern) that points to all workloads and add-ons.
+The Blueprints framework provides an approach to bootstrap workloads and/or additional add-ons from a customer GitOps repository. In the general case, the bootstrap GitOps repository may contain an [App of Apps](https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/#app-of-apps-pattern) that points to all workloads and add-ons.

In order to enable bootstrapping, the add-on allows passing an `ApplicationRepository` at construction time. The following repository types are supported at present:

@@ -124,7 +124,7 @@ The application promotion process in the above example is handled entirely throu

By default, all AddOns defined in a blueprint are deployed to the cluster via CDK. You can opt in to deploy them following the GitOps model via ArgoCD. You will need a repository that contains all the AddOns you would like to deploy via ArgoCD, such as [eks-blueprints-add-ons](https://github.com/aws-samples/eks-blueprints-add-ons). You then configure ArgoCD bootstrapping with this repository as shown above.

-There are two types of GitOps deployments via ArgoCD, depending on whether you would like to adopt the [App of Apps](https://argoproj.github.io/argo-cd/operator-manual/cluster-bootstrapping/#app-of-apps-pattern) strategy:
+There are two types of GitOps deployments via ArgoCD, depending on whether you would like to adopt the [App of Apps](https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/#app-of-apps-pattern) strategy:

- CDK deploys the `Application` resource for each AddOn enabled, and ArgoCD deploys the actual AddOn via GitOps based on the `Application` resource. Example:

@@ -270,7 +270,7 @@ import * as bcrypt from "bcrypt";
}))
```

-For more information, please refer to the [ArgoCD official documentation](https://github.com/argoproj/argo-helm/tree/master/charts/argo-cd).
+For more information, please refer to the [ArgoCD official documentation](https://github.com/argoproj/argo-helm/tree/main/charts/argo-cd).

## Known Issues

1. Destruction of the cluster with provisioned applications may cause CloudFormation to get stuck deleting the ArgoCD namespace. This happens because the server component that handles the Application CRD resource is destroyed before it has a chance to clean up applications that were provisioned through GitOps (of which CloudFormation is unaware). To address this issue at the moment, the App of Apps application should be destroyed manually before destroying the stack.
6 changes: 3 additions & 3 deletions docs/addons/karpenter.md
@@ -84,7 +84,7 @@ blueprints-addon-karpenter-54fd978b89-hclmp 2/2 Running 0 99m
2. Creates the `karpenter` namespace.
3. Creates a Kubernetes service account, and associates an AWS IAM role with the Karpenter controller policy attached, using [IRSA](https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html).
4. Deploys the Karpenter helm chart in the `karpenter` namespace, configuring the cluster name and cluster endpoint on the controller by default.
-5. (Optionally) provisions a default Karpenter Provisioner and AWSNodeTemplate CRD based on user-provided parameters such as [spec.requirements](https://karpenter.sh/docs/concepts/provisioners/#specrequirements), [AMI type](https://karpenter.sh/v0.12.1/aws/provisioning/#amazon-machine-image-ami-family), [weight](https://karpenter.sh/docs/concepts/provisioners/#specweight), [Subnet Selector](https://karpenter.sh/docs/concepts/node-templates/#specsubnetselector), and [Security Group Selector](https://karpenter.sh/docs/concepts/node-templates/#specsecuritygroupselector). If created, the provisioner will discover the EKS VPC subnets and security groups to launch the nodes with.
+5. (Optionally) provisions a default Karpenter Provisioner and AWSNodeTemplate CRD based on user-provided parameters such as [spec.requirements](https://karpenter.sh/docs/concepts/provisioners/#specrequirements), [AMI type](https://karpenter.sh/docs/concepts/instance-types/), [weight](https://karpenter.sh/docs/concepts/provisioners/#specweight), [Subnet Selector](https://karpenter.sh/v0.26/concepts/node-templates/#specsubnetselector), and [Security Group Selector](https://karpenter.sh/v0.28/concepts/node-templates/#specsecuritygroupselector). If created, the provisioner will discover the EKS VPC subnets and security groups to launch the nodes with.

**NOTE:**
1. The default provisioner is created only if both the subnet tags and the security group tags are provided.
@@ -95,7 +95,7 @@ blueprints-addon-karpenter-54fd978b89-hclmp 2/2 Running 0 99m

## Using Karpenter

-To use Karpenter, you need to provision a Karpenter [provisioner CRD](https://karpenter.sh/docs/provisioner/). A single provisioner is capable of handling many different pod shapes.
+To use Karpenter, you need to provision a Karpenter [provisioner CRD](https://karpenter.sh/docs/concepts/provisioners/). A single provisioner is capable of handling many different pod shapes.

This can be done in 2 ways:

@@ -225,7 +225,7 @@ requirements: [

The property is changed to align with the naming convention of the provisioner, and to allow multiple operators (In vs NotIn). The values correspond similarly between the two, with type change being the only difference.
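
For example, a hypothetical add-on configuration using the new shape might look like this — the keys, operators, and tag values below are illustrative assumptions, not values from this commit:

```typescript
import * as blueprints from '@aws-quickstart/eks-blueprints';

// A sketch, assuming the requirements/subnetTags/securityGroupTags options
// of KarpenterAddOn; note both In and NotIn operators are allowed.
const karpenterAddOn = new blueprints.addons.KarpenterAddOn({
    requirements: [
        { key: 'karpenter.sh/capacity-type', op: 'In', vals: ['spot', 'on-demand'] },
        { key: 'kubernetes.io/arch', op: 'NotIn', vals: ['arm64'] },
    ],
    // Tags used to discover the EKS VPC subnets and security groups.
    subnetTags: { 'kubernetes.io/cluster/my-cluster': 'shared' },
    securityGroupTags: { 'kubernetes.io/cluster/my-cluster': 'owned' },
});
```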

-2. Certain upgrades require reapplying the CRDs, since Helm does not maintain the lifecycle of CRDs. Please see the [official documentation](https://karpenter.sh/v0.16.0/upgrade-guide/#custom-resource-definition-crd-upgrades) for details.
+2. Certain upgrades require reapplying the CRDs, since Helm does not maintain the lifecycle of CRDs. Please see the [official documentation](https://karpenter.sh/v0.28/upgrade-guide/) for details.

3. Starting with v0.17.0, Karpenter's Helm chart package is stored in an OCI (Open Container Initiative) registry. With this change, [charts.karpenter.sh](https://charts.karpenter.sh/) is no longer updated to preserve older versions. You have to adjust for the following:

7 changes: 5 additions & 2 deletions docs/addons/kasten-k10.md
@@ -3,7 +3,9 @@
**Kasten K10 by Veeam Overview**

The K10 data management platform, purpose-built for Kubernetes, provides enterprise operations teams an easy-to-use, scalable, and secure system for backup/restore, disaster recovery, and mobility of Kubernetes applications.
-![Kasten-K10 Overview](/docs/assets/images/kastenk10_image1.png)
+
+## Kasten-K10 Overview
+
K10’s application-centric approach and deep integrations with relational and NoSQL databases, Amazon EKS, and AWS services provide teams the freedom of infrastructure choice without sacrificing operational simplicity. Policy-driven and extensible, K10 provides a native Kubernetes API and includes features such as full-spectrum consistency, database integrations, automatic application discovery, application mobility, and a powerful web-based user interface.

Given K10’s extensive ecosystem support you have the flexibility to choose environments (public/ private/ hybrid cloud/ on-prem) and Kubernetes distributions (cloud vendor managed or self managed) in support of three principal use cases:
@@ -13,7 +15,8 @@
- [Disaster Recovery](https://www.kasten.io/kubernetes/use-cases/disaster-recovery/)

- [Application Mobility](https://www.kasten.io/kubernetes/use-cases/application-mobility/)
-![Kasten-K10 Use Cases ](/docs/assets/images/kastenk10_image2.png)
+
+## Kasten-K10 Use Cases

The Kasten K10 add-on installs Kasten K10 into your Amazon EKS cluster.

2 changes: 1 addition & 1 deletion docs/addons/keda.md
@@ -135,7 +135,7 @@ done
7) Purge the SQS queue to test the scale-in event.
Replace ${AWS_REGION} with your target region.
```shell
-aws sqs purge-queue --queue-url https://sqs.${AWS_REGION}.amazonaws.com/ACCOUNT_NUMBER/sqs-consumer
+aws sqs purge-queue --queue-url "https://sqs.${AWS_REGION}.amazonaws.com/ACCOUNT_NUMBER/sqs-consumer"
```
6) Verify that the nginx pod is scaled in from 2 to 1 after the cooldown period set (500 in this case)
```shell
2 changes: 1 addition & 1 deletion docs/addons/knative-operator.md
@@ -63,4 +63,4 @@ documentation.
### Applying KNative Functions
Currently, the Knative Operator does not support deploying Knative functions directly, as they are run directly as services.
-For better instructions check (their documentation.)[https://knative.dev/docs/functions/deploying-functions/]
+For better instructions check [their documentation](https://knative.dev/docs/functions/deploying-functions).
8 changes: 4 additions & 4 deletions docs/addons/kubecost.md
@@ -48,22 +48,22 @@ Custom values to pass to the chart. Config options: https://github.com/kubecost/
#### `customPrometheus: string` (optional)

Kubecost comes bundled with a Prometheus installation. However, if you wish to integrate with an external Prometheus deployment, provide your local Prometheus service address in the format `http://<service>.<namespace>.svc`.
-Note: integrating with an existing Prometheus is only officially supported under Kubecost paid plans and requires some extra configurations on your Prometheus: https://docs.kubecost.com/custom-prom.html
+Note: integrating with an existing Prometheus is only officially supported under Kubecost paid plans and requires some extra configurations on your Prometheus: https://docs.kubecost.com/install-and-configure/install/custom-prom

#### `installPrometheusNodeExporter: boolean` (optional)

Set to false to use an existing Node Exporter DaemonSet.
Note: this requires your existing Node Exporter endpoint to be visible from the namespace where Kubecost is installed.
-https://github.com/kubecost/docs/blob/main/getting-started.md#using-an-existing-node-exporter
+https://docs.kubecost.com/install-and-configure/install/getting-started#using-an-existing-node-exporter
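
Putting the two options above together, a hypothetical configuration that reuses existing monitoring components might look roughly like this — the option names mirror the headings above, and the Prometheus service address is a placeholder:

```typescript
import * as blueprints from '@aws-quickstart/eks-blueprints';

// A sketch: integrate with an existing Prometheus and skip the bundled
// node exporter; the in-cluster address below is a placeholder.
const kubecostAddOn = new blueprints.addons.KubecostAddOn({
    customPrometheus: 'http://prometheus-server.monitoring.svc',
    installPrometheusNodeExporter: false,
});
```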

#### `repository: string`, `release: string`, `chart: string` (optional)

Additional options for customers who may need to supply their own private Helm repository.

## Support

-If you have any questions about Kubecost, get in touch with the team [on Slack](https://docs.kubecost.com/support-channels.html).
+If you have any questions about Kubecost, get in touch with the team [on Slack](https://docs.kubecost.com/kubecost-cloud/receiving-kubecost-cloud-support).

## License

-The Kubecost Blueprints AddOn is licensed under the Apache 2.0 license. [Project repository](https://github.com/kubecost/kubecost-blueprints-addon)
+The Kubecost Blueprints AddOn is licensed under the Apache 2.0 license. [Project repository](https://github.com/kubecost/kubecost-eks-blueprints-addon/)
5 changes: 3 additions & 2 deletions docs/addons/nginx.md
@@ -118,8 +118,9 @@ spec:
After the above ingresses are applied (ideally through a GitOps engine), you can now navigate to the specified hosts respectively:
-[http://riker.dev.my-domain.com](http://riker.dev.my-domain.com)
-[http://troi.dev.my-domain.com](http://troi.dev.my-domain.com)
+`http://riker.dev.my-domain.com`
+`http://troi.dev.my-domain.com`
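
For context, host names like the ones above are typically derived from the parent domain the add-on is configured with — a hypothetical sketch, assuming the `internetFacing` and `externalDnsHostname` options of the blueprints NGINX add-on (the domain is a placeholder):

```typescript
import * as blueprints from '@aws-quickstart/eks-blueprints';

// A sketch: an internet-facing NGINX ingress wired to an external DNS
// hostname under which hosts such as riker.dev.my-domain.com resolve.
const nginxAddOn = new blueprints.addons.NginxAddOn({
    internetFacing: true,
    externalDnsHostname: 'dev.my-domain.com',
});
```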


## TLS Termination and Certificates

2 changes: 1 addition & 1 deletion docs/addons/pixie.md
@@ -103,7 +103,7 @@ Namespace to deploy Pixie to. Default: `pl`

#### `cloudAddr?: string` (optional)

-The address of Pixie Cloud. This should only be modified if you have deployed your own self-hosted Pixie Cloud. By default, it will be set to [Community Cloud for Pixie](https://work.withpixie.dev).
+The address of Pixie Cloud. This should only be modified if you have deployed your own self-hosted Pixie Cloud. By default, it will be set to [Community Cloud for Pixie](https://work.withpixie.ai).
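
A hypothetical override for a self-hosted deployment might look like this — only `cloudAddr` is documented here; the `deployKey` option and both values are assumptions used as placeholders:

```typescript
import * as blueprints from '@aws-quickstart/eks-blueprints';

// A sketch: point the add-on at a self-hosted Pixie Cloud instead of the
// default Community Cloud; deployKey is assumed and both values are
// placeholders.
const pixieAddOn = new blueprints.addons.PixieAddOn({
    deployKey: 'px-dep-XXXX',
    cloudAddr: 'pixie.my-domain.com:443',
});
```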

#### `devCloudNamespace?: string` (optional)

2 changes: 1 addition & 1 deletion docs/cluster-providers/asg-cluster-provider.md
@@ -46,7 +46,7 @@ Configuration can also be supplied via context variables (specify in cdk.json, c

Configuration of the EC2 parameters through context parameters makes sense if you would like to apply default configuration to multiple clusters without the need to explicitly pass `AsgClusterProviderProps` to each cluster blueprint.

-You can find more details on the supported configuration options in the API documentation for the [AsgClusterProviderProps](../api/interfaces/AsgClusterProviderProps.html).
+You can find more details on the supported configuration options in the API documentation for the [AsgClusterProviderProps](../api/interfaces/clusters.AsgClusterProviderProps.html).
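
As a sketch of passing the props explicitly rather than via context — the version, sizes, and instance type below are illustrative assumptions, not values from this commit:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();

// A sketch, assuming AsgClusterProvider accepts these props directly
// instead of reading them from context variables.
const clusterProvider = new blueprints.AsgClusterProvider({
    version: eks.KubernetesVersion.V1_27,
    minSize: 1,
    maxSize: 3,
    instanceType: new ec2.InstanceType('m5.large'),
});

blueprints.EksBlueprint.builder()
    .clusterProvider(clusterProvider)
    .build(app, 'my-asg-stack');
```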

## Bottlerocket ASG
