update ec2 type to c7g.2xlarge
Closes #1060

Signed-off-by: Kamesh Akella <[email protected]>
kami619 authored Dec 3, 2024
1 parent caf4f71 commit b82b2c1
Showing 8 changed files with 37 additions and 20 deletions.
37 changes: 27 additions & 10 deletions .github/actions/prepare-horreum-report/action.yml
@@ -16,25 +16,42 @@ runs:
       name: Create and Prepare Report File
       if: ${{ inputs.createReportFile == 'true' }}
       shell: bash
       # language=bash
       run: |
+        set -e # Exit on error
         output_file_prefix="result-"
         cur_date=$(date)
         cur_date_iso=$(date -d "$cur_date" --iso-8601=seconds)
         cur_date_iso_compressed=$(date -d "$cur_date" '+%Y%m%d-%H%M%S')
         uuid=$(uuidgen)
+        # Extract compute machine type
+        echo "Getting compute machine type..."
+        nodes_output=$(oc get nodes -o custom-columns=TYPE:.metadata.labels."node\.kubernetes\.io/instance-type" || { echo "Failed to get nodes info"; exit 1; })
+        compute_machine_type=$(echo "$nodes_output" | tail -n +2 | sort -u | tr '\n' ',' | sed 's/,$//' || { echo "Failed to parse machine type"; exit 1; })
+        if [[ -z "$compute_machine_type" ]]; then
+          echo "Warning: No compute machine type found."
+          compute_machine_type="unknown"
+        fi
         OUTPUT_FILE_NAME="${output_file_prefix}${cur_date_iso_compressed}-${uuid}.json"
         echo "HORREUM_OUTPUT_FILE_NAME=$OUTPUT_FILE_NAME" >> $GITHUB_ENV
-        jq -n --arg current_date "${cur_date_iso}" --arg id "${uuid}" \
+        # Create the JSON file with the initial structure, using --arg for computeMachineType
+        jq -n --arg current_date "${cur_date_iso}" --arg id "${uuid}" --arg compute_machine_type "${compute_machine_type}" \
         '{"$schema": "urn:keycloak-benchmark:0.2", "uuid": ($id), "name": "ROSA Scalability Benchmark Run Results",
-        "start": ($current_date), "end": ""}' > ${OUTPUT_FILE_NAME}
-        #Reading configmap with environment data
-        configJson=$(oc get configmap env-config -n ${{ env.PROJECT }} -o "jsonpath={ .data['environment_data\.json']}'" | rev | cut -d\' -f2- | rev | jq)
-        jq '. + {"context":{}}' ${OUTPUT_FILE_NAME} > tmp.json && \
-        mv tmp.json ${OUTPUT_FILE_NAME}
-        #Putting environment parameters into JSON
-        jq --argjson configJson "${configJson}" '.context = ($configJson)' ${OUTPUT_FILE_NAME} > tmp.json && \
-        mv tmp.json ${OUTPUT_FILE_NAME}
+        "start": ($current_date), "end": "", "computeMachineType": ($compute_machine_type)}' > ${OUTPUT_FILE_NAME}
+        # Read configmap with environment data
+        echo "Reading environment data from configmap..."
+        configJson=$(oc get configmap env-config -n ${{ env.PROJECT }} -o "jsonpath={ .data['environment_data\.json']}'" || { echo "Failed to get configmap data"; exit 1; })
+        configJson=$(echo "$configJson" | rev | cut -d\' -f2- | rev | jq || { echo "Failed to parse configmap JSON"; exit 1; })
+        # Add environment parameters to JSON
+        echo "Updating report JSON with context data..."
+        jq '. + {"context":{}}' ${OUTPUT_FILE_NAME} > tmp.json && mv tmp.json ${OUTPUT_FILE_NAME}
+        jq --argjson configJson "${configJson}" '.context = ($configJson)' ${OUTPUT_FILE_NAME} > tmp.json && mv tmp.json ${OUTPUT_FILE_NAME}
     - id: finalize-report-file
       name: Add end time to the report
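The jq pipeline in the action above can be exercised locally without a cluster. A minimal sketch, assuming `jq` and GNU `date` are available; the `oc` lookups are replaced with hard-coded stand-ins, and the environment data is inlined instead of being read from the `env-config` ConfigMap:

```shell
#!/usr/bin/env bash
set -e

output_file_prefix="result-"
cur_date_iso=$(date --iso-8601=seconds)           # GNU date
cur_date_iso_compressed=$(date '+%Y%m%d-%H%M%S')
uuid="00000000-0000-0000-0000-000000000000"       # normally $(uuidgen)
compute_machine_type="c7g.2xlarge"                # normally derived from `oc get nodes`

OUTPUT_FILE_NAME="${output_file_prefix}${cur_date_iso_compressed}-${uuid}.json"

# Initial report structure, including the new computeMachineType field.
jq -n --arg current_date "$cur_date_iso" --arg id "$uuid" \
   --arg compute_machine_type "$compute_machine_type" \
   '{"$schema": "urn:keycloak-benchmark:0.2", "uuid": $id,
     "name": "ROSA Scalability Benchmark Run Results",
     "start": $current_date, "end": "",
     "computeMachineType": $compute_machine_type}' > "$OUTPUT_FILE_NAME"

# Stand-in for the data the action reads from the env-config ConfigMap.
configJson='{"region": "eu-central-1", "replicas": 3}'

# Merge the environment data into the report under .context.
jq --argjson configJson "$configJson" '.context = $configJson' \
   "$OUTPUT_FILE_NAME" > tmp.json && mv tmp.json "$OUTPUT_FILE_NAME"

jq . "$OUTPUT_FILE_NAME"
```

The two-step merge (`.context = $configJson`) mirrors the action; the initial `. + {"context":{}}` step in the action is redundant once the assignment runs, but is kept there for robustness.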
2 changes: 1 addition & 1 deletion .github/workflows/README.md
@@ -14,7 +14,7 @@
 2. Click on Run workflow button
 3. Fill in the form and click on Run workflow button
    1. Name of the cluster - the name of the cluster that will be later used for other workflows. Default value is `gh-${{ github.repository_owner }}`, this results in `gh-<owner of fork>`.
-   2. Instance type for compute nodes - see [AWS EC2 instance types](https://aws.amazon.com/ec2/instance-types/). Default value is `m7g.2xlarge`.
+   2. Instance type for compute nodes - see [AWS EC2 instance types](https://aws.amazon.com/ec2/instance-types/). Default value is `c7g.2xlarge`.
    3. Deploy to multiple availability zones in the region - if checked, the cluster will be deployed to multiple availability zones in the region. Default value is `false`.
    4. Number of worker nodes to provision - number of compute nodes in the cluster. Default value is `2`.
 4. Wait for the workflow to finish.
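The same form inputs can also be supplied from the command line with the GitHub CLI instead of the web UI. A sketch only: it assumes an authenticated `gh` run from a checkout of the repository, and `computeMachineType` is the input name defined by the `rosa-cluster-create.yml` workflow; the command is printed rather than executed so the sketch runs without credentials:

```shell
# Build the workflow-dispatch invocation; `-f key=value` passes an input.
workflow_file="rosa-cluster-create.yml"
machine_type="c7g.2xlarge"
cmd=(gh workflow run "$workflow_file" -f "computeMachineType=${machine_type}")

# Print instead of execute in this sketch (gh requires authentication):
echo "${cmd[*]}"
```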
4 changes: 2 additions & 2 deletions .github/workflows/rosa-cluster-create.yml
@@ -11,7 +11,7 @@ on:
         type: string
       computeMachineType:
         description: 'Instance type for the compute nodes'
-        default: 'm7g.2xlarge'
+        default: 'c7g.2xlarge'
         type: string
       availabilityZones:
         description: 'Availability zones to deploy to'
@@ -35,7 +35,7 @@ on:
         default: 10.0.0.0/24
       computeMachineType:
         description: 'Instance type for the compute nodes'
-        default: 'm7g.2xlarge'
+        default: 'c7g.2xlarge'
         type: string
       availabilityZones:
         description: 'Availability zones to deploy to'
4 changes: 2 additions & 2 deletions .github/workflows/rosa-multi-az-cluster-create.yml
@@ -33,7 +33,7 @@ on:
         type: string
       computeMachineType:
         description: 'Instance type for the compute nodes'
-        default: 'm7g.2xlarge'
+        default: 'c7g.2xlarge'
         type: string
   workflow_call:
     inputs:
@@ -67,7 +67,7 @@ on:
         type: string
       computeMachineType:
         description: 'Instance type for the compute nodes'
-        default: 'm7g.2xlarge'
+        default: 'c7g.2xlarge'
         type: string
 
 env:
@@ -14,7 +14,7 @@ Collecting the CPU usage for refreshing a token is currently performed manually
 This setup is run https://github.com/keycloak/keycloak-benchmark/blob/main/.github/workflows/rosa-cluster-auto-provision-on-schedule.yml[daily on a GitHub action schedule]:
 
 * OpenShift 4.15.x deployed on AWS via ROSA with two AWS availability zones in AWS one region.
-* Machinepool with `m7g.2xlarge` instances.
+* Machinepool with `c7g.2xlarge` instances.
 * Keycloak 25 release candidate build deployed with Operator and 3 pods in each site as an active/passive setup, and Infinispan connecting the two sites.
 * Default user password hashing with Argon2 and 5 hash iterations and minimum memory size 7 MiB https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html#argon2id[as recommended by OWASP].
 * Database seeded with 100,000 users and 100,000 clients.
@@ -36,7 +36,7 @@ After the installation process is finished, it creates a new admin user.
 CLUSTER_NAME=rosa-kcb
 VERSION=4.13.8
 REGION=eu-central-1
-COMPUTE_MACHINE_TYPE=m7g.2xlarge
+COMPUTE_MACHINE_TYPE=c7g.2xlarge
 MULTI_AZ=false
 REPLICAS=3
 ----
@@ -85,7 +85,7 @@ The above installation script creates an admin user automatically but in case th
 == Scaling the cluster's nodes on demand
 
 The standard setup of nodes might be too small for running a load test, at the same time using a different instance type and rebuilding the cluster takes a lot of time (about 45 minutes).
-To scale the cluster on demand, the standard setup has a machine pool named `scaling` with instances of type `m7g.2xlarge` which is auto-scaled based on the current demand from 4 to 15 instances.
+To scale the cluster on demand, the standard setup has a machine pool named `scaling` with instances of type `c7g.2xlarge` which is auto-scaled based on the current demand from 4 to 15 instances.
 However, auto-scaling of worker nodes is quite time-consuming as nodes are scaled one by one.
 
 To use different instance types, use `rosa create machinepool` to create additional machine pools
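A concrete sketch of such an additional machine pool: the pool name `scaling-x86` and the `m5.2xlarge` instance type are illustrative values, not repository defaults, and the command is assembled and printed rather than executed, since `rosa` needs an authenticated session:

```shell
# Additional machine pool with a different (x86) instance type;
# scaling-x86 and m5.2xlarge are illustrative, not repository defaults.
cluster_name="rosa-kcb"
create_cmd=(rosa create machinepool -c "$cluster_name"
  --instance-type m5.2xlarge --name scaling-x86
  --min-replicas 1 --max-replicas 15
  --enable-autoscaling --autorepair)

# Print instead of execute in this sketch (rosa requires a login session):
echo "${create_cmd[*]}"
```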
2 changes: 1 addition & 1 deletion provision/aws/rosa_create_cluster.sh
@@ -63,7 +63,7 @@ fi
 
 SCALING_MACHINE_POOL=$(rosa list machinepools -c "${CLUSTER_NAME}" -o json | jq -r '.[] | select(.id == "scaling") | .id')
 if [[ "${SCALING_MACHINE_POOL}" != "scaling" ]]; then
-  rosa create machinepool -c "${CLUSTER_NAME}" --instance-type "${COMPUTE_MACHINE_TYPE:-m7g.2xlarge}" --max-replicas 15 --min-replicas 1 --name scaling --enable-autoscaling --autorepair
+  rosa create machinepool -c "${CLUSTER_NAME}" --instance-type "${COMPUTE_MACHINE_TYPE:-c7g.2xlarge}" --max-replicas 15 --min-replicas 1 --name scaling --enable-autoscaling --autorepair
 fi
 
 cd ${SCRIPT_DIR}
2 changes: 1 addition & 1 deletion provision/opentofu/modules/rosa/hcp/variables.tf
@@ -61,7 +61,7 @@ variable "openshift_version" {
 
 variable "instance_type" {
   type     = string
-  default  = "m7g.2xlarge"
+  default  = "c7g.2xlarge"
   nullable = false
 }
 
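This OpenTofu default can also be overridden per run without editing `variables.tf`, for example through a `*.auto.tfvars` file, which OpenTofu/Terraform load automatically from the working directory. A sketch; the file name is arbitrary:

```shell
# Write an auto-loaded variable override; any *.auto.tfvars file in the
# module directory is picked up automatically by `tofu plan` / `tofu apply`.
cat > instance-type.auto.tfvars <<'EOF'
instance_type = "c7g.2xlarge"
EOF

cat instance-type.auto.tfvars
```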
