replace all remaining doc: refs (#21)
osterman authored and goruha committed May 8, 2018
1 parent 0dbcb72 commit c22c333
Showing 16 changed files with 50 additions and 22 deletions.
1 change: 0 additions & 1 deletion content/aws/aws-well-architected-framework.md
Original file line number Diff line number Diff line change
@@ -1,5 +1,4 @@
---
title: AWS Well-Architected Framework
excerpt: ''
draft: true
---
1 change: 0 additions & 1 deletion content/aws/organizations/best-practices.md
@@ -1,7 +1,6 @@
---
title: AWS Organizations Best Practices
excerpt: ''
draft: true
tags:
- organizations
- aws
Expand Down
1 change: 0 additions & 1 deletion content/development/12-factor-pattern.md
@@ -1,5 +1,4 @@
---
title: 12 Factor Pattern
excerpt: ''
draft: true
---
4 changes: 2 additions & 2 deletions content/geodesic/geodesic-design.md
@@ -8,7 +8,7 @@ weight: -1

We designed this shell as the last layer of abstraction. It stitches together tools like `make`, `aws-cli`, `kops`, `helm`, `kubectl`, and `terraform`. As time progresses, even more tools will undoubtedly come into play. For this reason, we chose a combination of `bash` and `make`, which together are ideally suited to combining the strengths of all these wonderful tools into one powerful shell, without raising the barrier to entry too high.

For the default environment variables, check out `Dockerfile`. We believe using ENVs this way is both consistent with the "cloud" ([12 Factor Pattern](doc:12-factor-pattern)) way of doing things, as well as a clear way of communicating what values are being passed without using a complicated convention. Additionally, you can set & forget these ENVs in your shell.
For the default environment variables, check out `Dockerfile`. We believe using ENVs this way is both consistent with the "cloud" ([12 Factor Pattern]({{< relref "development/12-factor-pattern.md" >}})) way of doing things, as well as a clear way of communicating what values are being passed without using a complicated convention. Additionally, you can set & forget these ENVs in your shell.
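As a hedged illustration (the variable names and values below are examples, not the actual `Dockerfile` defaults), "set & forget" ENVs in your shell look like this:

```shell
# Illustrative only: example "set & forget" ENVs in the 12-factor spirit.
# The real defaults live in the Dockerfile; these names/values are assumptions.
export CLUSTER_NAME="staging.example.com"   # which module shell these tools target
export AWS_DEFAULT_REGION="us-west-2"       # region picked up by aws-cli and friends
echo "cluster=${CLUSTER_NAME} region=${AWS_DEFAULT_REGION}"
```

Because the values live in the environment rather than in per-tool config files, every tool in the shell sees the same settings without any extra convention.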

# Layout Inside the Shell

@@ -24,4 +24,4 @@ We leverage as many semantics of the linux shell as we can to make the experienc
| `/localhost` | is where we house the local state (like your temporary AWS credentials). This is your native `$HOME` directory mounted into the container. |
| `/s3` | is where we mount S3 buckets; these files are never written to disk and only kept in memory for security |

You can easily change almost any aspect of how the shell works simply by extending it with [Geodesic Module](doc:module)
You can easily change almost any aspect of how the shell works simply by extending it with [Geodesic Module](/geodesic/module)
4 changes: 2 additions & 2 deletions content/geodesic/module/quickstart.md
@@ -35,7 +35,7 @@ docker run -e CLUSTER_NAME \

# Configure Project

Customize module files as necessary. Edit the `Dockerfile` to reflect your settings. The files are installed to the `$CLUSTER_NAME/` folder. We recommend creating a [GitHub](doc:github) repo to store this configuration.
Customize module files as necessary. Edit the `Dockerfile` to reflect your settings. The files are installed to the `$CLUSTER_NAME/` folder. We recommend creating a [GitHub]({{< relref "documentation/our-github.md" >}}) repo to store this configuration.

```
cd $CLUSTER_NAME
```

@@ -64,7 +64,7 @@ make install
## Run the shell

The shell can now be easily started any time by simply running `$CLUSTER_NAME`, which is a shell script in `/usr/local/bin`. Make sure this path is in your `PATH` environment variable.
For more information, see [Use](doc:use).
For more information, see [Use](/geodesic/module/usage/).
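A minimal sketch (not part of the module itself) of the `PATH` check this relies on, assuming the wrapper was installed to `/usr/local/bin`:

```shell
# Sketch: check whether a directory appears in a PATH-like string, so the
# $CLUSTER_NAME wrapper script installed to /usr/local/bin can be found.
check_path() {
  case ":$2:" in
    *":$1:"*) echo "on PATH" ;;
    *)        echo "missing" ;;
  esac
}
check_path /usr/local/bin "/usr/bin:/usr/local/bin"   # → on PATH
check_path /usr/local/bin "/usr/bin:/bin"             # → missing
```

If the second case applies on your machine, `export PATH=/usr/local/bin:$PATH` in your shell profile fixes it.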

# Authorize on AWS

2 changes: 1 addition & 1 deletion content/geodesic/module/usage/with-kops.md
@@ -8,7 +8,7 @@ excerpt: ""
# Create a cluster

Follow the [Provision a Cluster](doc:provision-a-cluster) process.
Follow the [Provision a Cluster]({{< relref "geodesic/module/usage/with-kops.md" >}}) process.

# Provision Platform Backing Services

10 changes: 5 additions & 5 deletions content/geodesic/module/usage/with-terraform.md
@@ -29,7 +29,7 @@ Then run these commands:
5. Re-run `init-terraform`, answer `yes` when asked to import state

{{% dialog type="warning" icon="fa-exclamation-circle" title="Prerequisites" %}}
Follow the "Use geodesic module" guide ([Use](doc:use)) to learn how to use the module shell.
Follow the "Use geodesic module" guide ([Use](/geodesic/module/usage/)) to learn how to use the module shell.
{{% /dialog %}}

# Create terraform state bucket
@@ -120,13 +120,13 @@ ENV TF_VAR_tfstate_region=us-west-2
```
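The `TF_VAR_` prefix is Terraform's standard mechanism for passing input variables through the environment: `TF_VAR_tfstate_region` populates `var.tfstate_region`. A minimal sketch:

```shell
# Terraform maps TF_VAR_-prefixed environment variables to input variables,
# so baking them into the Dockerfile configures terraform inside the shell.
export TF_VAR_tfstate_region="us-west-2"
# terraform would then see: var.tfstate_region = "us-west-2"
echo "${TF_VAR_tfstate_region}"
```

Setting these as Dockerfile `ENV`s means every terraform run inside the module shell inherits them automatically.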

## Rebuild module
[Rebuild](doc:use) the module
[Rebuild](/geodesic/module/usage/) the module
```bash
> make build
```

## Run the module shell
Run Geodesic Shell in [development mode](doc:use#section-development-mode)
Run Geodesic Shell in [development mode](/geodesic/module/usage/#section-development-mode)
```bash
> $CLUSTER_NAME
```
@@ -250,13 +250,13 @@ ENV TF_DYNAMODB_TABLE "example-staging-terraform-state-lock"
```
## Rebuild the module
[Rebuild](doc:use) the module
[Rebuild](/geodesic/module/usage/) the module
```bash
> make build
```
## Run the module shell and authorize on AWS
Run Geodesic Shell in [development mode](doc:use#section-development-mode)
Run Geodesic Shell in [development mode](/geodesic/module/usage/#section-development-mode)
```bash
> $CLUSTER_NAME
> assume-role
```
11 changes: 11 additions & 0 deletions content/glossary/12-factor.md
@@ -0,0 +1,11 @@
---
title: 12-Factor
terms:
- "12f"
- "12-factor"
- "12 factor"
- "12 factor pattern"
- "12-factor pattern"
excerpt: ""
---
The 12 Factor Pattern is a software methodology for building cloud-friendly (or cloud-native), scalable, maintainable applications that deploy easily on a Platform-as-a-Service (aka PaaS).
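For example, factor III ("Config") of the pattern stores configuration in the environment rather than in the deployed artifact; a minimal shell sketch (the variable name is illustrative):

```shell
# 12-factor config: read settings from the environment, with a fallback
# default for local development (DATABASE_URL here is illustrative).
unset DATABASE_URL                                       # start clean for the demo
DATABASE_URL="${DATABASE_URL:-postgres://localhost/dev}" # env wins if set
echo "connecting to ${DATABASE_URL}"
```

The same artifact then runs unchanged in every environment; only the environment variables differ.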
9 changes: 9 additions & 0 deletions content/glossary/paas.md
@@ -0,0 +1,9 @@
---
title: Platform-as-a-Service (PaaS)
terms:
- "PaaS"
- "Platform as a Service"
- "Platform-as-a-Service"
excerpt: ""
---
A Platform-as-a-Service is a type of cloud platform that offers black-box services, enabling developers to build applications on top of compute infrastructure without dealing with its day-to-day maintenance. This might include developer tools offered as a service, data access and database services, or billing services.
11 changes: 11 additions & 0 deletions content/glossary/saas.md
@@ -0,0 +1,11 @@
---
title: Software-as-a-Service (SaaS)
terms:
- "SaaS"
- "Software as a Service"
- "Software-as-a-Service"
excerpt: ""
---
Software-as-a-Service is a form of cloud services platform whereby the computing platform (operating system and associated services) is delivered as a service over the Internet by the provider.


2 changes: 1 addition & 1 deletion content/helm-charts/quickstart.md
@@ -32,7 +32,7 @@ Install Github Authorized Keys if you want to enable users to login to the clust

To install [`github-authorized-keys`](https://github.com/cloudposse/github-authorized-keys/) on all nodes (including master nodes), you can run the following commands.

**NOTE**: The [Kops](doc:kops) `bastion` is not part of the kubernetes cluster, thus `DaemonSets` cannot be deployed to this instance. One alternative is to deploy a [`bastion`](https://github.com/cloudposse/charts/tree/master/incubator/bastion) helm chart.
**NOTE**: The [Kops]({{< relref "tools/kops.md" >}}) `bastion` is not part of the kubernetes cluster, thus `DaemonSets` cannot be deployed to this instance. One alternative is to deploy a [`bastion`](https://github.com/cloudposse/charts/tree/master/incubator/bastion) helm chart.

Simply run,
@@ -8,7 +8,7 @@ Nginx Ingress Controller is a type of [Ingress controller](https://kubernetes.io
None
# Install

Add this code to your [Kubernetes Backing Services](doc:backing-services) Helmfile:
Add this code to your [Kubernetes Backing Services](/kubernetes-backing-services) Helmfile:

##### Helmfile
```yaml
```
6 changes: 3 additions & 3 deletions content/learn-by-example/agenda.md
@@ -9,7 +9,7 @@ weight: -1
Company "Example, LLC" owns a portal `example.com` that provides documentation, roadmaps and examples for a lot of activities we do in real life.

The company wants to migrate to AWS cloud hosting and use Kubernetes as its container management and deployment system.

They need multiple environments:
* Production
@@ -20,13 +20,13 @@ As a continuous integration platform, they choose Codefresh.io.

## Game Plan

Following the [AWS Well-Architected Framework](doc:aws-well-architected-framework) and [Best Practices](doc:aws-organizations-best-practices), we will create 3 AWS organizations belonging to the root AWS account and 4 Geodesic Modules:
Following the [AWS Well-Architected Framework]({{< relref "aws/aws-well-architected-framework.md" >}}) and [Best Practices]({{< relref "aws/organizations/best-practices.md" >}}), we will create 3 AWS organizations belonging to the root AWS account and 4 Geodesic Modules:
* `root.example.com` - Module for root AWS account
* `staging.example.com` - Module for the staging environment
* `development.example.com` - Module for the development environment
* `production.example.com` - Module for the production environment

----------

`root.example.com` - will be responsible for managing users, creating [Organizations](doc:organizations) for environments, and granting access to them.
`root.example.com` - will be responsible for managing users, creating [Organizations](/aws/organizations) for environments, and granting access to them.
All other Modules will spin up Kubernetes clusters where the applications will run.
2 changes: 1 addition & 1 deletion content/local-dev-environments/vagrant.md
@@ -8,7 +8,7 @@ excerpt: ""
Vagrant by HashiCorp is responsible for setting up development environments under VirtualBox. Vagrant handles all configuration management and makes it easy for developers to share development environments.

{{% dialog type="info" icon="fa-info-circle" title="Important" %}}
> Vagrant is no longer recommended as a means of provisioning local development environments. We recommend using [Docker Compose](doc:docker-compose) instead.
> Vagrant is no longer recommended as a means of provisioning local development environments. We recommend using [Docker Compose]({{< relref "local-dev-environments/docker-compose.md" >}}) instead.
{{% /dialog %}}

VirtualBox by Oracle is responsible for running Linux Virtual Machines.
@@ -49,5 +49,5 @@ module "tf_ami_from_instance" {

### :no_entry_sign: CAVEATS

> - Terraform will only keep the latest AMI snapshot (Terraform will delete the previously generated AMI). See our Lambda-based solution, which avoids this pitfall: [terraform-aws-ec2-ami-backup](doc:terraform-aws-ec2-ami-backup)
> - Terraform will only keep the latest AMI snapshot (Terraform will delete the previously generated AMI). See our Lambda-based solution, which avoids this pitfall: [terraform-aws-ec2-ami-backup]({{< relref "terraform-modules/backups/terraform-aws-ec2-ami-backup.md" >}})
> - This is not compatible with autoscaling groups.
4 changes: 2 additions & 2 deletions content/terraform-modules/cdn/terraform-aws-cloudfront-cdn.md
@@ -84,7 +84,7 @@ module "cdn" {

### :information_source: NOTE

> Pass the `deployment_arns` parameter to the `terraform-aws-s3-website` module to enable a [CI/CD](doc:terraform-aws-iam-system-user) user to upload assets to the bucket.
> Pass the `deployment_arns` parameter to the `terraform-aws-s3-website` module to enable a [CI/CD]({{< relref "terraform-modules/security/terraform-aws-iam-system-user.md" >}}) user to upload assets to the bucket.
# More Examples

@@ -94,7 +94,7 @@ A complete example of setting up CloudFront Distribution with Cache Behaviors fo

There are two options:

1. Use our [terraform-aws-acm-request-certificate](doc:terraform-aws-acm-request-certificate) module to generate certificates.
1. Use our [terraform-aws-acm-request-certificate]({{< relref "terraform-modules/security/terraform-aws-iam-system-user.md" >}}) module to generate certificates.

2. Use the AWS CLI to [request new ACM certificates](http://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request.html) (requires email validation).

