diff --git a/examples/terraform/README.md b/examples/terraform/README.md
new file mode 100644
index 000000000..cba77324e
--- /dev/null
+++ b/examples/terraform/README.md
@@ -0,0 +1,14 @@
+# Terraform Playground
This repository contains a collection of Terraform configurations that we used to learn and experiment with Terraform.

## Install Terraform
Follow the [Install Terraform](https://developer.hashicorp.com/terraform/install) page to install Terraform on your machine.

## Setting up Terraform with Artifactory
The recommended way to manage Terraform state is to use a remote backend.
Some of the examples in this repository use JFrog Artifactory as the remote backend (commented out).

To set up Terraform with Artifactory, follow the instructions in the [Terraform Artifactory Backend](https://jfrog.com/help/r/jfrog-artifactory-documentation/terraform-backend-repository-structure) documentation.

## Examples
1. Create the [AWS infrastructure needed for running JFrog Artifactory and Xray in AWS](jfrog-platform-aws-install) using RDS, S3, and EKS. This uses the [JFrog Platform Helm Chart](https://github.com/jfrog/charts/tree/master/stable/jfrog-platform) to install Artifactory and Xray.
diff --git a/examples/terraform/jfrog-platform-aws-install/README.md b/examples/terraform/jfrog-platform-aws-install/README.md
new file mode 100644
index 000000000..72ba5e1c6
--- /dev/null
+++ b/examples/terraform/jfrog-platform-aws-install/README.md
@@ -0,0 +1,80 @@
+# JFrog Platform Installation in AWS with Terraform
This example prepares the AWS infrastructure and services required to run Artifactory and Xray (installed with the [jfrog-platform Helm Chart](https://github.com/jfrog/charts/tree/master/stable/jfrog-platform)) using Terraform:
1. The AWS VPC
2. RDS (PostgreSQL) as the database for each application
3. S3 as the Artifactory object storage
4. EKS as the Kubernetes cluster for running Artifactory and Xray, with pre-defined node groups for the different services

The resources are split across individual files for easy and clear separation.

## Prepare the JFrog Platform Configurations
Ensure that the AWS CLI is set up and properly configured before starting with Terraform.
A configured AWS account with the necessary permissions is required to provision and manage resources successfully.

The [jfrog-values.yaml](jfrog-values.yaml) file has the values that Helm will use to configure the JFrog Platform installation.

The [artifactory-license-template.yaml](artifactory-license-template.yaml) file has the license key(s) template that you need to copy to an `artifactory-license.yaml` file.
```shell
cp artifactory-license-template.yaml artifactory-license.yaml
```

If you plan on skipping the license key(s) for now, you can leave the `artifactory-license.yaml` file empty. Terraform will create an empty one for you if you don't create it.

## JFrog Platform Sizing
Artifactory and Xray have pre-defined sizing templates that you can use to deploy them. The supported sizing templates in this project are `small`, `medium`, `large`, `xlarge`, and `2xlarge`.

The sizing templates will be pulled from the [official Helm Charts](https://github.com/jfrog/charts) during the execution of the Terraform configuration.
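Instead of passing variables on the command line with `-var` (as the steps below do), you can keep your choices in a `terraform.tfvars` file, which Terraform loads automatically. A minimal sketch — the values are illustrative, and the variable names come from [variables.tf](variables.tf):

```hcl
# terraform.tfvars - example overrides (values are illustrative)
region = "us-east-1"
sizing = "medium"

# Restrict cluster and database access instead of relying on the defaults
cluster_public_access_cidrs = ["203.0.113.10/32"]
artifactory_db_password     = "change-me-to-a-strong-password"
xray_db_password            = "change-me-to-a-strong-password"
```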
## Terraform

1. Initialize the Terraform configuration by running the following command
```shell
terraform init
```

2. Plan the Terraform configuration by running the following command
```shell
terraform plan -var 'sizing=small'
```

3. Apply the Terraform configuration by running the following command
```shell
terraform apply -var 'sizing=small'
```

4. When you are done, you can destroy the resources by running the following command
```shell
terraform destroy
```

## Accessing the EKS Cluster and Artifactory Installation
To get the `kubectl` configuration for the EKS cluster, run the following command
```shell
aws eks --region $(terraform output -raw _01_region) update-kubeconfig --name $(terraform output -raw _03_eks_cluster_name)
```

### Add JFrog Helm repository
Before installing the JFrog Helm charts, you need to add the [JFrog Helm repository](https://charts.jfrog.io) to your Helm client

```shell
helm repo add jfrog https://charts.jfrog.io
helm repo update
```

### Install JFrog Platform
Once done, install the JFrog Platform (Artifactory and Xray) using the Helm Chart with the following command.

Terraform creates the values files needed for the `helm install` command.
The full command, with the actual chart version, namespace, and sizing filled in, is auto-generated and written to the console when you run `terraform apply` (see the `_07_jfrog_platform_install_command` output).
```shell
helm upgrade --install jfrog jfrog/jfrog-platform \
      --version <chart version> \
      --namespace <namespace> --create-namespace \
      -f ./jfrog-values.yaml \
      -f ./artifactory-license.yaml \
      -f ./jfrog-artifactory-<sizing>-adjusted.yaml \
      -f ./jfrog-xray-<sizing>-adjusted.yaml \
      -f ./jfrog-custom.yaml \
      --timeout 600s
```
diff --git a/examples/terraform/jfrog-platform-aws-install/artifactory-license-template.yaml b/examples/terraform/jfrog-platform-aws-install/artifactory-license-template.yaml
new file mode 100644
index 000000000..52b74e092
--- /dev/null
+++ b/examples/terraform/jfrog-platform-aws-install/artifactory-license-template.yaml
@@ -0,0 +1,11 @@
+## A template for the Artifactory license as a helm value.
## Copy this file to artifactory-license.yaml and fill in the full license key(s).
artifactory:
  artifactory:
    license:
      licenseKey: |
        cHJvZHVjdHM6CiAgYXJ1aWZhY3Rvcnk6CiAgICBwcm9kdWN0OiBaWGh3YVhKbGN6b2dNakF5TlMx
        TFRGaFpXTmlNRGs1T0dRMVpncHZkMjVsY2j...

        cHJvZHVjdHM6CiAgYXJ0aWZhY3Rvcnk6CiAgIBBwcm9kdWN0OiBaWGh3YVhKbGN6b2dNakF5TlMv
        d05DMHdObFF5TURvMU9UbzFPVm9LYVdRNkl...
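The `eks.tf` file that follows selects node instance types, counts, and disk settings with chained ternaries keyed on `var.sizing`. As a design note, the same mapping could be expressed with a lookup map; the sketch below is a hypothetical alternative (the local names are not part of this configuration) that mirrors the Artifactory node group values used in `eks.tf`:

```hcl
# Hypothetical alternative to the chained ternaries in eks.tf (not used by this example)
locals {
  artifactory_nodes_by_sizing = {
    small     = { instance_type = var.artifactory_node_size_default, desired_size = 1 }
    medium    = { instance_type = var.artifactory_node_size_default, desired_size = 2 }
    large     = { instance_type = var.artifactory_node_size_large, desired_size = 3 }
    xlarge    = { instance_type = var.artifactory_node_size_large, desired_size = 4 }
    "2xlarge" = { instance_type = var.artifactory_node_size_large, desired_size = 6 }
  }

  # Resolve the settings for the selected sizing template
  artifactory_nodes = local.artifactory_nodes_by_sizing[var.sizing]
}

# Inside the node group definition this would be referenced as:
#   instance_types = [local.artifactory_nodes.instance_type]
#   desired_size   = local.artifactory_nodes.desired_size
```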
diff --git a/examples/terraform/jfrog-platform-aws-install/eks.tf b/examples/terraform/jfrog-platform-aws-install/eks.tf new file mode 100644 index 000000000..16cd9955c --- /dev/null +++ b/examples/terraform/jfrog-platform-aws-install/eks.tf @@ -0,0 +1,237 @@ +# This file is used to create an AWS EKS cluster and the managed node group(s) + +locals { + cluster_name = "${var.cluster_name}-${random_pet.unique_name.id}" +} + +resource "aws_security_group_rule" "allow_management_from_my_ip" { + type = "ingress" + from_port = 0 + to_port = 65535 + protocol = "-1" + cidr_blocks = var.cluster_public_access_cidrs + security_group_id = module.eks.cluster_security_group_id + description = "Allow all traffic from my public IP for management" +} + +module "eks" { + source = "terraform-aws-modules/eks/aws" + + cluster_name = local.cluster_name + cluster_version = "1.31" + + enable_cluster_creator_admin_permissions = true + cluster_endpoint_public_access = true + cluster_endpoint_public_access_cidrs = var.cluster_public_access_cidrs + + cluster_addons = { + aws-ebs-csi-driver = { + most_recent = true + service_account_role_arn = module.ebs_csi_irsa_role.iam_role_arn + } + } + + vpc_id = module.vpc.vpc_id + subnet_ids = module.vpc.private_subnets + + eks_managed_node_group_defaults = { + ami_type = "AL2_ARM_64" + iam_role_additional_policies = { + AmazonS3FullAccess = "arn:aws:iam::aws:policy/AmazonS3FullAccess" + AmazonEBSCSIDriverPolicy = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy" + } + pre_bootstrap_user_data = <<-EOF + # This script will run on all nodes before the kubelet starts + echo "It works!" > /tmp/pre_bootstrap_user_data.txt + EOF + block_device_mappings = { + xvda = { + device_name = "/dev/xvda" + ebs = { + volume_type = "gp3" + volume_size = 50 + throughput = 125 + delete_on_termination = true + } + } + } + tags = { + Group = var.common_tag + } + } + + eks_managed_node_groups = { + artifactory = { + name = "artifactory-node-group" + + instance_types = [( + var.sizing == "large" ? var.artifactory_node_size_large : + var.sizing == "xlarge" ? var.artifactory_node_size_large : + var.sizing == "2xlarge" ? var.artifactory_node_size_large : + var.artifactory_node_size_default + )] + min_size = 1 + max_size = 10 + desired_size = ( + var.sizing == "medium" ? 2 : + var.sizing == "large" ? 3 : + var.sizing == "xlarge" ? 4 : + var.sizing == "2xlarge" ? 6 : + 1 + ) + block_device_mappings = { + xvda = { + device_name = "/dev/xvda" + ebs = { + volume_type = "gp3" + volume_size = ( + var.sizing == "large" ? var.artifactory_disk_size_large : + var.sizing == "xlarge" ? var.artifactory_disk_size_large : + var.sizing == "2xlarge" ? var.artifactory_disk_size_large : + var.artifactory_disk_size_default + ) + iops = ( + var.sizing == "large" ? var.artifactory_disk_iops_large : + var.sizing == "xlarge" ? var.artifactory_disk_iops_large : + var.sizing == "2xlarge" ? var.artifactory_disk_iops_large : + var.artifactory_disk_iops_default + ) + throughput = ( + var.sizing == "large" ? var.artifactory_disk_throughput_large : + var.sizing == "xlarge" ? var.artifactory_disk_throughput_large : + var.sizing == "2xlarge" ? var.artifactory_disk_throughput_large : + var.artifactory_disk_throughput_default + ) + delete_on_termination = true + } + } + } + labels = { + "group" = "artifactory" + } + } + + nginx = { + name = "nginx-node-group" + + instance_types = [( + var.sizing == "xlarge" ? var.nginx_node_size_large : + var.sizing == "2xlarge" ? 
var.nginx_node_size_large : + var.nginx_node_size_default + )] + + min_size = 1 + max_size = 10 + desired_size = ( + var.sizing == "medium" ? 2 : + var.sizing == "large" ? 2 : + var.sizing == "xlarge" ? 2 : + var.sizing == "2xlarge" ? 3 : + 1 + ) + + labels = { + "group" = "nginx" + } + } + + xray = { + name = "xray-node-group" + + instance_types = [( + var.sizing == "xlarge" ? var.xray_node_size_xlarge : + var.sizing == "2xlarge" ? var.xray_node_size_xlarge : + var.xray_node_size_default + )] + min_size = 1 + max_size = 10 + desired_size = ( + var.sizing == "medium" ? 2 : + var.sizing == "large" ? 3 : + var.sizing == "xlarge" ? 4 : + var.sizing == "2xlarge" ? 6 : + 1 + ) + block_device_mappings = { + xvda = { + device_name = "/dev/xvda" + ebs = { + volume_type = "gp3" + volume_size = ( + var.sizing == "large" ? var.xray_disk_size_large : + var.sizing == "xlarge" ? var.xray_disk_size_large : + var.sizing == "2xlarge" ? var.xray_disk_size_large : + var.xray_disk_size_default + ) + iops = ( + var.sizing == "large" ? var.xray_disk_iops_large : + var.sizing == "xlarge" ? var.xray_disk_iops_large : + var.sizing == "2xlarge" ? var.xray_disk_iops_large : + var.xray_disk_iops_default + ) + throughput = ( + var.sizing == "large" ? var.xray_disk_throughput_large : + var.sizing == "xlarge" ? var.xray_disk_throughput_large : + var.sizing == "2xlarge" ? var.xray_disk_throughput_large : + var.xray_disk_throughput_default + ) + delete_on_termination = true + } + } + } + labels = { + "group" = "xray" + } + } + + ## Create an extra node group for testing + extra = { + name = "extra-node-group" + + instance_types = [var.extra_node_size] + + min_size = 1 + max_size = 3 + desired_size = var.extra_node_count + + labels = { + "group" = "extra" + } + } + } + + tags = { + Group = var.common_tag + } +} + +# Create the gp3 storage class and make it the default +resource "kubernetes_storage_class" "gp3_storage_class" { + metadata { + name = "gp3" + annotations = { + "storageclass.kubernetes.io/is-default-class" = "true" + } + } + storage_provisioner = "ebs.csi.aws.com" + volume_binding_mode = "WaitForFirstConsumer" + allow_volume_expansion = true + parameters = { + "fsType" = "ext4" + "type" = "gp3" + } +} + +module "ebs_csi_irsa_role" { + source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks" + + role_name = "ebs-csi-${module.eks.cluster_name}-${var.region}" + attach_ebs_csi_policy = true + + oidc_providers = { + ex = { + provider_arn = module.eks.oidc_provider_arn + namespace_service_accounts = ["kube-system:ebs-csi-controller-sa"] + } + } +} diff --git a/examples/terraform/jfrog-platform-aws-install/jfrog-platform.tf b/examples/terraform/jfrog-platform-aws-install/jfrog-platform.tf new file mode 100644 index 000000000..ee3cf8664 --- /dev/null +++ b/examples/terraform/jfrog-platform-aws-install/jfrog-platform.tf @@ -0,0 +1,217 @@ +# Terraform script to deploy Artifactory on the AWS EKS created earlier + +data "aws_eks_cluster_auth" "jfrog_cluster" { + name = module.eks.cluster_name +} + +# Configure the Kubernetes provider to use the EKS cluster +provider "kubernetes" { + host = module.eks.cluster_endpoint + token = data.aws_eks_cluster_auth.jfrog_cluster.token + cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data) +} + +## Until sizing specs are part of the jfrog-platform chart, we pull them from the individual charts that are inside the platform chart +# Fetch the JFrog Platform Helm chart and untar it to the current directory so we can use the sizing 
files to create the final values files +resource "null_resource" "fetch_platform_chart" { + provisioner "local-exec" { + command = "rm -rf jfrog-platform-*.tgz" + } + provisioner "local-exec" { + command = "helm fetch jfrog-platform --version ${var.jfrog_platform_chart_version} --repo https://charts.jfrog.io --untar" + } +} + +################### Artifactory sizing +## Prepare the final values files for the JFrog Platform sizing +data "local_file" "artifactory_sizing" { + filename = "${path.module}/jfrog-platform/charts/artifactory/sizing/artifactory-${var.sizing}.yaml" + depends_on = [null_resource.fetch_platform_chart] +} + +# Inject two spaces before all lines and load into a variable +locals { + indented_artifactory_sizing = join("\n", [for line in split("\n", data.local_file.artifactory_sizing.content) : " ${line}"]) +} + +# Create the new artifactory sizing YAML string +locals { + new_artifactory_sizing = <<-EOT + artifactory: + ${local.indented_artifactory_sizing} + EOT +} + +# Write the new Artifactory YAML to a file +resource "local_file" "new_artifactory_sizing" { + filename = "${path.module}/jfrog-artifactory-${var.sizing}-adjusted.yaml" + content = trimspace(local.new_artifactory_sizing) +} + +################### Xray sizing +## Prepare the final values files for the JFrog Platform sizing +data "local_file" "xray_sizing" { + filename = "${path.module}/jfrog-platform/charts/xray/sizing/xray-${var.sizing}.yaml" + depends_on = [null_resource.fetch_platform_chart] +} + +# Inject two spaces before all lines and load into a variable +locals { + indented_xray_sizing = join("\n", [for line in split("\n", data.local_file.xray_sizing.content) : " ${line}"]) +} + +# Create the new Xray sizing YAML string +locals { + new_xray_sizing = <<-EOT + xray: + ${local.indented_xray_sizing} + EOT +} + +# Write the new Xray YAML to a file +resource "local_file" "new_xray_sizing" { + filename = "${path.module}/jfrog-xray-${var.sizing}-adjusted.yaml" + content = trimspace(local.new_xray_sizing) +} + +# Create an empty artifactory-license.yaml if missing +resource "local_file" "empty_license" { + count = fileexists("${path.module}/artifactory-license.yaml") ? 0 : 1 + filename = "${path.module}/artifactory-license.yaml" + content = "## Empty file to satisfy Helm requirements" +} + +# Set the cache-fs-size based on the sizing variable to 80% of the disk size +locals { + cache-fs-size = (var.sizing == "large" ? var.artifactory_disk_size_large * 0.8 : + var.sizing == "xlarge" ? var.artifactory_disk_size_large * 0.8 : + var.sizing == "2xlarge" ? 
var.artifactory_disk_size_large * 0.8 : + var.artifactory_disk_size_default * 0.8) +} + +# Write the artifactory-custom.yaml file with the variables needed +resource "local_file" "jfrog_platform_values" { + content = <<-EOT + artifactory: + artifactory: + persistence: + maxCacheSize: "${local.cache-fs-size}000000000" + awsS3V3: + region: "${var.region}" + bucketName: "${local.artifactory_s3_bucket_name}" + + database: + url: "jdbc:postgresql://${aws_db_instance.artifactory_db.endpoint}/${var.artifactory_db_name}?sslmode=require" + user: "${var.artifactory_db_username}" + password: "${var.artifactory_db_password}" + + xray: + database: + url: "postgres://${aws_db_instance.xray_db.endpoint}/${var.xray_db_name}?sslmode=require" + user: "${var.xray_db_username}" + password: "${var.xray_db_password}" + + EOT + filename = "${path.module}/jfrog-custom.yaml" + + depends_on = [ + aws_db_instance.artifactory_db, + aws_db_instance.xray_db, + aws_s3_bucket.artifactory_binarystore, + module.eks, + helm_release.metrics_server + ] +} + +## Create a Helm release for the JFrog Platform +## Leaving this as an example of how to deploy the JFrog Platform with Helm using multiple values files + +# resource "kubernetes_namespace" "jfrog_namespace" { +# metadata { +# annotations = { +# name = var.namespace +# } +# +# labels = { +# app = "jfrog" +# } +# +# name = var.namespace +# } +# } +# +# # Create a Helm release for the JFrog Platform +# resource "helm_release" "jfrog_platform" { +# name = var.namespace +# chart = "jfrog/jfrog-platform" +# version = var.jfrog_platform_chart_version +# namespace = var.namespace +# +# depends_on = [ +# aws_db_instance.artifactory_db, +# aws_s3_bucket.artifactory_binarystore, +# module.eks, +# helm_release.metrics_server +# ] +# +# values = [ +# file("${path.module}/jfrog-values.yaml") +# ] +# +# set { +# name = "artifactory.artifactory.persistence.awsS3V3.region" +# value = var.region +# } +# +# set { +# name = "artifactory.artifactory.persistence.awsS3V3.bucketName" +# value = aws_s3_bucket.artifactory_binarystore.bucket +# } +# +# set { +# name = "artifactory.database.url" +# value = "jdbc:postgresql://${aws_db_instance.artifactory_db.endpoint}/${var.artifactory_db_name}" +# } +# +# set { +# name = "artifactory.database.user" +# value = var.artifactory_db_username +# } +# +# set { +# name = "artifactory.database.password" +# value = var.artifactory_db_password +# } +# +# set { +# name = "xray.database.url" +# value = "postgres://${aws_db_instance.xray_db.endpoint}/${var.xray_db_name}?sslmode=" +# } +# +# set { +# name = "xray.database.user" +# value = var.xray_db_username +# } +# +# set { +# name = "xray.database.password" +# value = var.xray_db_password +# } +# +# # Wait for the release to complete deployment +# wait = true +# +# # Increase the timeout to 10 minutes for the JFrog Platform to deploy +# timeout = 600 +# } +# +# data "kubernetes_resources" "nginx_service" { +# api_version = "v1" +# kind = "Service" +# namespace = var.namespace +# label_selector = "component=nginx" +# +# depends_on = [ +# helm_release.jfrog_platform +# ] +# } diff --git a/examples/terraform/jfrog-platform-aws-install/jfrog-values.yaml b/examples/terraform/jfrog-platform-aws-install/jfrog-values.yaml new file mode 100644 index 000000000..193742260 --- /dev/null +++ b/examples/terraform/jfrog-platform-aws-install/jfrog-values.yaml @@ -0,0 +1,226 @@ +## Custom values for the JFrog Platform Helm Chart + +global: + ## IMPORTANT: Artifactory masterKey and joinKey are immutable and should not be 
changed after the first installation. + # Generate a random join key with 'openssl rand -hex 32' + joinKey: AAAEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE456 + # Generate a random master key with 'openssl rand -hex 32' + masterKey: aaabbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb456 + + # Since we are using an external postgresql, skip the initDBCreation + database: + initDBCreation: false + +# Disable the PostgreSQL deployment +postgresql: + enabled: false + +artifactory: + artifactory: + + ## To provide support for HA + extraEnvironmentVariables: + - name : JF_SHARED_NODE_HAENABLED + value: "true" + + ## Artifactory to use S3 for filestore + persistence: + enabled: false + type: s3-storage-v3-direct + awsS3V3: + testConnection: false + endpoint: s3.amazonaws.com + path: artifactory/filestore + useInstanceCredentials: true + + ## Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + ## Run on nodes marked with the label "group=artifactory" + nodeSelector: + group: "artifactory" + + ## Nginx + nginx: + disableProxyBuffering: true + + ## Logs to stdout and stderr + logs: + stderr: true + stdout: true + level: warn + + ## Run on nodes marked with the label "group=nginx" + nodeSelector: + group: "nginx" + + service: + ## Use an NLB for the Nginx service for better performance + annotations: + service.beta.kubernetes.io/aws-load-balancer-type: "nlb" + service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance" + service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing" + service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "TCP" + service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "traffic-port" + + ## Custom Nginx configuration for better performance + mainConf: | + # Main Nginx configuration file + worker_processes auto; + error_log stderr warn; + pid /var/run/nginx.pid; + + events { + worker_connections 8192; + multi_accept on; + use epoll; + } + + http { + include /etc/nginx/mime.types; + default_type application/octet-stream; + + variables_hash_max_size 1024; + variables_hash_bucket_size 64; + server_names_hash_max_size 4096; + server_names_hash_bucket_size 128; + types_hash_max_size 2048; + types_hash_bucket_size 64; + proxy_read_timeout 2400s; + client_header_timeout 2400s; + client_body_timeout 2400s; + proxy_connect_timeout 75s; + proxy_send_timeout 2400s; + proxy_buffer_size 128k; + proxy_buffers 40 128k; + proxy_busy_buffers_size 128k; + proxy_temp_file_write_size 250m; + proxy_http_version 1.1; + client_body_buffer_size 128k; + + log_format main '$remote_addr - $remote_user [$time_local] "$request" ' + '$status $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; + + log_format timing 'ip = $remote_addr ' + 'user = \"$remote_user\" ' + 'local_time = \"$time_local\" ' + 'host = $host ' + 'request = \"$request\" ' + 'status = $status ' + 'bytes = $body_bytes_sent ' + 'upstream = \"$upstream_addr\" ' + 'upstream_time = $upstream_response_time ' + 'request_time = $request_time ' + 'referer = \"$http_referer\" ' + 'UA = \"$http_user_agent\"'; + access_log /dev/stdout timing; + + sendfile on; + #tcp_nopush on; + + keepalive_timeout 65; + + #gzip on; + + include /etc/nginx/conf.d/*.conf; + } + + + artifactoryConf: | + ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; + ssl_certificate /var/opt/jfrog/nginx/ssl/tls.crt; + ssl_certificate_key /var/opt/jfrog/nginx/ssl/tls.key; + ssl_session_cache shared:SSL:1m; + ssl_prefer_server_ciphers on; + + 
    upstream artifactory {
      server jfrog-artifactory:8082;
      keepalive 1000;
      keepalive_requests 10000;
    }

    ## server configuration
    server {
      listen 8443 ssl;
      listen 8080;
      server_name ~(?<repo>.+)\.artifactory artifactory;

      if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
      }
      set $host_port 443;
      if ( $scheme = "http" ) {
        set $host_port 80;
      }
      ## Application specific logs
      ## access_log /var/log/nginx/artifactory-access.log timing;
      ## error_log /var/log/nginx/artifactory-error.log;
      rewrite ^/artifactory/?$ / redirect;
      if ( $repo != "" ) {
        rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2 break;
      }
      chunked_transfer_encoding on;
      client_max_body_size 0;

      location / {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://artifactory;
        proxy_set_header Connection "";
        proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_request_buffering off;
        proxy_buffering off;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
      }
    }

  ## Don't use the PostgreSQL chart
  postgresql:
    enabled: false

  database:
    type: postgresql
    driver: org.postgresql.Driver

  databaseUpgradeReady: true

# Enable Xray
xray:
  enabled: true
  unifiedUpgradeAllowed: true
  postgresql:
    enabled: false
  common:
    persistence:
      enabled: false

  ## Run on nodes marked with the label "group=xray"
  global:
    nodeSelector:
      group: "xray"

# observability:
#   extraEnvVars: |
#     - name: JF_DUMMY
#       value: "true"

# RabbitMQ is required for Xray
rabbitmq:
  enabled: true

  # Run on nodes marked with the label "group=xray"
  nodeSelector:
    group: "xray"

# Disable other services
distribution:
  enabled: false
diff --git a/examples/terraform/jfrog-platform-aws-install/metrics.tf b/examples/terraform/jfrog-platform-aws-install/metrics.tf
new file mode 100644
index 000000000..9d1a6ce8e
--- /dev/null
+++ b/examples/terraform/jfrog-platform-aws-install/metrics.tf
@@ -0,0 +1,23 @@
+# Configure the Helm provider to use the EKS cluster
provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    token                  = data.aws_eks_cluster_auth.jfrog_cluster.token
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  }
}

# Install the metrics server
resource "helm_release" "metrics_server" {
  count = var.deploy_metrics_server ?
1 : 0 + + name = "metrics-server" + chart = "metrics-server" + namespace = "kube-system" + + # Repository to install the chart from + repository = "https://kubernetes-sigs.github.io/metrics-server/" + + # Don't wait for the release to complete deployment + wait = false +} diff --git a/examples/terraform/jfrog-platform-aws-install/outputs.tf b/examples/terraform/jfrog-platform-aws-install/outputs.tf new file mode 100644 index 000000000..a3e74d381 --- /dev/null +++ b/examples/terraform/jfrog-platform-aws-install/outputs.tf @@ -0,0 +1,37 @@ +output "_01_region" { + description = "AWS region" + value = var.region +} + +output "_02_eks_cluster_endpoint" { + description = "Endpoint for EKS control plane" + value = module.eks.cluster_endpoint +} + +output "_03_eks_cluster_name" { + description = "Kubernetes Cluster Name" + value = module.eks.cluster_name +} + +output "_04_resources_tag" { + description = "The common tag applied on all resources" + value = "Group: ${var.common_tag}" +} + +# Output the command to configure kubectl config to the newly created EKS cluster +output "_05_setting_cluster_kubectl_context" { + description = "Connect kubectl to Kubernetes Cluster" + value = "aws eks --region ${var.region} update-kubeconfig --name ${module.eks.cluster_name}" +} + +# Output the command to add JFrog helm repository to helm client +output "_06_setting_helm_configuration" { + description = "Add JFrog helm repository to helm client" + value = "helm repo add jfrog https://charts.jfrog.io && helm repo update " +} + +# Output the command to install Artifactory with Helm +output "_07_jfrog_platform_install_command" { + description = "The Helm command to install the JFrog Platform (after setting up kubectl context)" + value = "helm upgrade --install jfrog jfrog/jfrog-platform --version ${var.jfrog_platform_chart_version} --namespace ${var.namespace} --create-namespace -f ${path.module}/jfrog-values.yaml -f ${path.module}/artifactory-license.yaml -f ${path.module}/jfrog-artifactory-${var.sizing}-adjusted.yaml -f ${path.module}/jfrog-custom.yaml --timeout 600s" +} diff --git a/examples/terraform/jfrog-platform-aws-install/providers.tf b/examples/terraform/jfrog-platform-aws-install/providers.tf new file mode 100644 index 000000000..4214b78e9 --- /dev/null +++ b/examples/terraform/jfrog-platform-aws-install/providers.tf @@ -0,0 +1,33 @@ +# Setup the providers +terraform { + ## Configure the remote backend (Artifactory) + ## This will store the state file in Artifactory. 
+ ## Follow https://jfrog.com/help/r/jfrog-artifactory-documentation/terraform-backend-repository + ## Create a new terraform workspace in Artifactory named "jfrog" + # backend "remote" { + # hostname = "eldada.jfrog.io" + # organization = "terraform-backend" + # workspaces { + # prefix = "jfrog" + # } + # } + + required_providers { + # Kubernetes provider + aws = { + source = "hashicorp/aws" + } + # Kubernetes provider + kubernetes = { + source = "hashicorp/kubernetes" + } + # Helm provider + helm = { + source = "hashicorp/helm" + } + } +} + +provider "aws" { + region = var.region +} diff --git a/examples/terraform/jfrog-platform-aws-install/rds.tf b/examples/terraform/jfrog-platform-aws-install/rds.tf new file mode 100644 index 000000000..4bb74f3b2 --- /dev/null +++ b/examples/terraform/jfrog-platform-aws-install/rds.tf @@ -0,0 +1,109 @@ +# This file creates the RDS instances for Artifactory and Xray + +resource "aws_db_subnet_group" "jfrog_subnet_group" { + name = "jfrog-subnet-group" + subnet_ids = module.vpc.private_subnets + + tags = { + Group = var.common_tag + } +} + +resource "aws_db_instance" "artifactory_db" { + identifier = "artifactory-db" + engine = "postgres" + engine_version = var.rds_postgres_version + + # Set the instance class based on the sizing variable + instance_class = ( + var.sizing == "medium" ? var.artifactory_rds_size_medium : + var.sizing == "large" ? var.artifactory_rds_size_large : + var.sizing == "xlarge" ? var.artifactory_rds_size_xlarge : + var.sizing == "2xlarge" ? var.artifactory_rds_size_2xlarge : + var.artifactory_rds_size_default + ) + + storage_type = "gp3" + allocated_storage = ( + var.sizing == "medium" ? var.artifactory_rds_disk_size_medium : + var.sizing == "large" ? var.artifactory_rds_disk_size_large : + var.sizing == "xlarge" ? var.artifactory_rds_disk_size_xlarge : + var.sizing == "2xlarge" ? var.artifactory_rds_disk_size_2xlarge : + var.artifactory_rds_disk_size_default + ) + + max_allocated_storage = var.artifactory_rds_disk_max_size + storage_encrypted = true + + db_name = var.artifactory_db_name + username = var.artifactory_db_username + password = var.artifactory_db_password + + vpc_security_group_ids = [aws_security_group.rds_sg.id] + db_subnet_group_name = aws_db_subnet_group.jfrog_subnet_group.name + skip_final_snapshot = true + + tags = { + Group = var.common_tag + } +} + +resource "aws_db_instance" "xray_db" { + identifier = "xray-db" + engine = "postgres" + engine_version = var.rds_postgres_version + # Set the instance class based on the sizing variable + instance_class = ( + var.sizing == "medium" ? var.xray_rds_size_medium : + var.sizing == "large" ? var.xray_rds_size_large : + var.sizing == "xlarge" ? var.xray_rds_size_xlarge : + var.sizing == "2xlarge" ? var.xray_rds_size_2xlarge : + var.xray_rds_size_default + ) + + storage_type = "gp3" + allocated_storage = ( + var.sizing == "medium" ? var.xray_rds_disk_size_medium : + var.sizing == "large" ? var.xray_rds_disk_size_large : + var.sizing == "xlarge" ? var.xray_rds_disk_size_xlarge : + var.sizing == "2xlarge" ? 
var.xray_rds_disk_size_2xlarge : + var.xray_rds_disk_size_default + ) + + max_allocated_storage = var.xray_rds_disk_max_size + storage_encrypted = true + + db_name = var.xray_db_name + username = var.xray_db_username + password = var.xray_db_password + + vpc_security_group_ids = [aws_security_group.rds_sg.id] + db_subnet_group_name = aws_db_subnet_group.jfrog_subnet_group.name + skip_final_snapshot = true + + tags = { + Group = var.common_tag + } +} + +resource "aws_security_group" "rds_sg" { + vpc_id = module.vpc.vpc_id + + ingress { + from_port = 5432 + to_port = 5432 + protocol = "tcp" + cidr_blocks = var.private_subnet_cidrs + } + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags = { + Group = var.common_tag + } +} diff --git a/examples/terraform/jfrog-platform-aws-install/s3.tf b/examples/terraform/jfrog-platform-aws-install/s3.tf new file mode 100644 index 000000000..8d62da2b7 --- /dev/null +++ b/examples/terraform/jfrog-platform-aws-install/s3.tf @@ -0,0 +1,20 @@ +# This file is used to create an S3 bucket for Artifactory to store binaries + +locals { + artifactory_s3_bucket_name = "artifactory-${var.region}-${var.s3_bucket_name_suffix}-${random_pet.unique_name.id}" +} + +resource "aws_s3_bucket" "artifactory_binarystore" { + bucket = local.artifactory_s3_bucket_name + + # WARNING: This will force the bucket to be destroyed even if it's not empty + force_destroy = true + + tags = { + Group = var.common_tag + } + + lifecycle { + prevent_destroy = false + } +} diff --git a/examples/terraform/jfrog-platform-aws-install/variables.tf b/examples/terraform/jfrog-platform-aws-install/variables.tf new file mode 100644 index 000000000..c9919a84d --- /dev/null +++ b/examples/terraform/jfrog-platform-aws-install/variables.tf @@ -0,0 +1,262 @@ +# Setup the required variables + +variable "region" { + default = "us-east-1" +} + +# WARNING: CIDR "0.0.0.0/0" is full public access to the cluster. 
You should use a more restrictive CIDR +variable "cluster_public_access_cidrs" { + default = ["0.0.0.0/0"] +} + +variable "vpc_cidr" { + default = "10.0.0.0/16" +} + +variable "public_subnet_cidrs" { + default = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"] +} + +variable "private_subnet_cidrs" { + default = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"] +} + +variable "rds_postgres_version" { + default = "16.4" +} + +variable "s3_bucket_name_suffix" { + default = "jfrog-demo" +} + +variable "artifactory_rds_size_default" { + default = "db.m7g.2xlarge" +} + +variable "artifactory_rds_size_medium" { + default = "db.m7g.4xlarge" +} + +variable "artifactory_rds_size_large" { + default = "db.m7g.8xlarge" +} + +variable "artifactory_rds_size_xlarge" { + default = "db.m7g.12xlarge" +} + +variable "artifactory_rds_size_2xlarge" { + default = "db.m7g.16xlarge" +} + +variable "artifactory_rds_disk_size_default" { + default = 100 +} + +variable "artifactory_rds_disk_size_medium" { + default = 250 +} + +variable "artifactory_rds_disk_size_large" { + default = 500 +} + +variable "artifactory_rds_disk_size_xlarge" { + default = 1000 +} + +variable "artifactory_rds_disk_size_2xlarge" { + default = 1500 +} + +variable "artifactory_rds_disk_max_size" { + default = 2000 +} + +variable "xray_rds_size_default" { + default = "db.m7g.xlarge" +} + +variable "xray_rds_size_medium" { + default = "db.m7g.2xlarge" +} + +variable "xray_rds_size_large" { + default = "db.m7g.4xlarge" +} + +variable "xray_rds_size_xlarge" { + default = "db.m7g.8xlarge" +} + +variable "xray_rds_size_2xlarge" { + default = "db.m7g.12xlarge" +} + +variable "xray_rds_disk_size_default" { + default = 100 +} + +variable "xray_rds_disk_size_medium" { + default = 250 +} + +variable "xray_rds_disk_size_large" { + default = 500 +} + +variable "xray_rds_disk_size_xlarge" { + default = 1000 +} + +variable "xray_rds_disk_size_2xlarge" { + default = 1500 +} + +variable "xray_rds_disk_max_size" { + default = 2000 +} + +variable "artifactory_node_size_default" { + default = "m7g.2xlarge" +} + +variable "artifactory_node_size_large" { + default = "m7g.4xlarge" +} + +variable "artifactory_disk_size_default" { + default = 500 +} + +variable "artifactory_disk_size_large" { + default = 1000 +} + +variable "artifactory_disk_iops_default" { + default = 3000 +} + +variable "artifactory_disk_iops_large" { + default = 6000 +} + +variable "artifactory_disk_throughput_default" { + default = 500 +} + +variable "artifactory_disk_throughput_large" { + default = 1000 +} + +variable "xray_node_size_default" { + default = "c7g.2xlarge" +} + +variable "xray_node_size_xlarge" { + default = "c7g.4xlarge" +} + +variable "xray_disk_size_default" { + default = 100 +} + +variable "xray_disk_size_large" { + default = 200 +} + +variable "xray_disk_iops_default" { + default = 3000 +} + +variable "xray_disk_iops_large" { + default = 6000 +} + +variable "xray_disk_throughput_default" { + default = 500 +} + +variable "xray_disk_throughput_large" { + default = 1000 +} + +variable "nginx_node_size_default" { + default = "c7g.xlarge" +} + +variable "nginx_node_size_large" { + default = "c7g.2xlarge" +} + +variable "extra_node_count" { + default = "3" +} + +variable "extra_node_size" { + default = "c7g.xlarge" +} + +variable "artifactory_db_name" { + description = "The database name" + default = "artifactory" +} + +variable "artifactory_db_username" { + description = "The username for the database" + default = "artifactory" +} + +variable "artifactory_db_password" { + description = "The 
password for the database"
  sensitive   = true
  default     = "Password321"
}

variable "xray_db_name" {
  description = "The database name"
  default     = "xray"
}

variable "xray_db_username" {
  description = "The username for the database"
  default     = "xray"
}

variable "xray_db_password" {
  description = "The password for the database"
  sensitive   = true
  default     = "PasswordX321"
}

variable "cluster_name" {
  default = "jfrog"
}

variable "namespace" {
  default = "jfrog"
}

variable "jfrog_platform_chart_version" {
  default = "11.0.0"
}

variable "deploy_metrics_server" {
  default = true
}

variable "common_tag" {
  description = "The 'Group' tag to apply to all resources"
  default     = "jfrog"
}

variable "sizing" {
  type        = string
  description = "The sizing template for the infrastructure and Artifactory"
  default     = "small"

  validation {
    condition     = contains(["small", "medium", "large", "xlarge", "2xlarge"], var.sizing)
    error_message = "Invalid sizing. Supported sizings are: 'small', 'medium', 'large', 'xlarge' or '2xlarge'."
  }
}
diff --git a/examples/terraform/jfrog-platform-aws-install/vpc.tf b/examples/terraform/jfrog-platform-aws-install/vpc.tf
new file mode 100644
index 000000000..d861ec35c
--- /dev/null
+++ b/examples/terraform/jfrog-platform-aws-install/vpc.tf
@@ -0,0 +1,43 @@
+# This file is used to create the AWS VPC

data "aws_availability_zones" "available" {
  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}

resource "random_pet" "unique_name" {
  keepers = {
    # Generate a new pet name each time we switch to a new region
    string = var.region
  }
}

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "jfrog-vpc-${random_pet.unique_name.id}"

  cidr = var.vpc_cidr
  azs  = slice(data.aws_availability_zones.available.names, 0, 3)

  private_subnets = var.private_subnet_cidrs
  public_subnets  = var.public_subnet_cidrs

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }

  tags = {
    Group = var.common_tag
  }
}
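The RDS instances in `rds.tf` are configured for easy teardown (`skip_final_snapshot = true`, no deletion protection), which suits a demo environment. For a longer-lived installation you would likely want to harden them; a hedged sketch of the arguments involved (the values are illustrative and not part of this example):

```hcl
# Hypothetical hardening of aws_db_instance.artifactory_db for non-demo use (illustrative values)
resource "aws_db_instance" "artifactory_db" {
  # ... existing arguments from rds.tf ...

  # Keep a final snapshot and protect the instance from accidental deletion
  skip_final_snapshot       = false
  final_snapshot_identifier = "artifactory-db-final"
  deletion_protection       = true

  # Enable automated backups and a standby replica in a second AZ
  backup_retention_period = 7
  multi_az                = true
}
```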