Add GKE 1.6 CIS benchmark for GCP environment #1672
Conversation
Hi @ttousai
Hello @deven0t, I have added the selection based on the k8s version and also updated various documents for gke-1.6.0 support.
Hi @ttousai
there are several errors while running the benchmark - the tests do not complete successfully. Can you make the changes and verify again on a GKE cluster that the tests complete successfully?
- flag: "--anonymous-auth" | ||
path: '{.authentication.anonymous.enabled}' | ||
compare: | ||
op: eq |
@ttousai
.authentication.anonymous.enabled does not appear in kubelet-config.yaml at all:
guy_jerby@gke-gke-test-cluster-bas-default-pool-ba74cdf0-qbln /etc/kubernetes $ cat kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  webhook:
    enabled: false
authorization:
  mode: AlwaysAllow
enableServer: false
podCIDR: 10.42.0.0/24
staticPodPath: /etc/kubernetes/manifests
staticPodURL: http://metadata.google.internal/computeMetadata/v1/instance/attributes/google-container-manifest
staticPodURLHeader:
  Metadata-Flavor: [Google]
cgroupDriver: systemd
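For reference, here is one hedged sketch of how this test item could be written so that it also handles the key being absent from the config file, reusing the set/bin_op semantics already used elsewhere in this PR. It is not necessarily the fix that was merged, and whether an absent key should count as a PASS for anonymous auth is an assumption the maintainers would need to confirm:

    tests:
      bin_op: or
      test_items:
        # PASS if anonymous auth is explicitly disabled ...
        - flag: "--anonymous-auth"
          path: '{.authentication.anonymous.enabled}'
          compare:
            op: eq
            value: false
        # ... or (assumption) if the key is absent from the kubelet config,
        # as in the config dump above.
        - flag: "--anonymous-auth"
          path: '{.authentication.anonymous.enabled}'
          set: false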
  - flag: --streaming-connection-idle-timeout
    path: '{.streamingConnectionIdleTimeout}'
    set: false
bin_op: or
@ttousai, the result that kube-bench shows is not logical: '{.streamingConnectionIdleTimeout}' is present OR '{.streamingConnectionIdleTimeout}' is not present.
This check should PASS:
- if --streaming-connection-idle-timeout is set to any value not equal to 0 on the command line or,
- if streamingConnectionIdleTimeout is set to any value not equal to 0 in the config file or,
- if --streaming-connection-idle-timeout is not set on the command line or,
- if streamingConnectionIdleTimeout is not set in the config file.
In our case it should pass because --streaming-connection-idle-timeout is not set on the command line and it is also not set in the config file (the correct config file is /home/kubernetes/kubelet-config.yaml).
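A hedged sketch of how that logic could be expressed with kube-bench's compare and set fields (this assumes the noteq comparison operator, and the YAML actually merged in the PR may be shaped differently):

    tests:
      bin_op: or
      test_items:
        # PASS if the timeout is explicitly set to something other than 0 ...
        - flag: --streaming-connection-idle-timeout
          path: '{.streamingConnectionIdleTimeout}'
          compare:
            op: noteq
            value: 0
        # ... or PASS if it is not set on the command line or in the config file.
        - flag: --streaming-connection-idle-timeout
          path: '{.streamingConnectionIdleTimeout}'
          set: false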
tests:
  test_items:
    - flag: --make-iptables-util-chains
      path: '{.makeIPTablesUtilChains}'
@ttousai, here is the result:
makeIPTablesUtilChains exists in the kubelet-config, but the test fails and the expected result is empty - probably a typo in the test?
{
"test_number": "3.2.6",
"test_desc": "Ensure that the --make-iptables-util-chains argument is set to true (Automated)",
"audit": "/bin/ps -fC kubelet",
"AuditEnv": "",
"AuditConfig": "/bin/cat /etc/kubernetes/kubelet-config.yaml",
"type": "",
"remediation": "Remediation Method 1:\nIf modifying the Kubelet config file, edit the kubelet-config.json file\n/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to\ntrue\n\n "makeIPTablesUtilChains": true\n\nEnsure that /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf\ndoes not set the --make-iptables-util-chains argument because that would\noverride your Kubelet config file.\n\nRemediation Method 2:\nIf using executable arguments, edit the kubelet service file\n/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each\nworker node and add the below parameter at the end of the KUBELET_ARGS variable\nstring.\n\n --make-iptables-util-chains:true\n\nRemediation Method 3:\nIf using the api configz endpoint consider searching for the status of\n"makeIPTablesUtilChains.: true by extracting the live configuration from the nodes\nrunning kubelet.\n\nSee detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a\nLive Cluster (https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/),\nand then rerun the curl statement from audit process to check for kubelet\nconfiguration changes\n\n kubectl proxy --port=8001 \u0026\n export HOSTNAME_PORT=localhost:8001 (example host and port number)\n export NODE_NAME=gke-cluster-1-pool1-5e572947-r2hg (example node name from\n "kubectl get nodes")\n curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"\n\nFor all three remediations:\nBased on your system, restart the kubelet service and check status\n\n systemctl daemon-reload\n systemctl restart kubelet.service\n systemctl status kubelet -l\n",
"test_info": ["Remediation Method 1:\nIf modifying the Kubelet config file, edit the kubelet-config.json file\n/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to\ntrue\n\n "makeIPTablesUtilChains": true\n\nEnsure that /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf\ndoes not set the --make-iptables-util-chains argument because that would\noverride your Kubelet config file.\n\nRemediation Method 2:\nIf using executable arguments, edit the kubelet service file\n/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each\nworker node and add the below parameter at the end of the KUBELET_ARGS variable\nstring.\n\n --make-iptables-util-chains:true\n\nRemediation Method 3:\nIf using the api configz endpoint consider searching for the status of\n"makeIPTablesUtilChains.: true by extracting the live configuration from the nodes\nrunning kubelet.\n\nSee detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a\nLive Cluster (https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/),\nand then rerun the curl statement from audit process to check for kubelet\nconfiguration changes\n\n kubectl proxy --port=8001 \u0026\n export HOSTNAME_PORT=localhost:8001 (example host and port number)\n export NODE_NAME=gke-cluster-1-pool1-5e572947-r2hg (example node name from\n "kubectl get nodes")\n curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"\n\nFor all three remediations:\nBased on your system, restart the kubelet service and check status\n\n systemctl daemon-reload\n systemctl restart kubelet.service\n systemctl status kubelet -l\n"],
"status": "FAIL",
"actual_value": "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nauthentication:\n webhook:\n enabled: false\nauthorization:\n mode: AlwaysAllow\nenableServer: false\nmakeIPTablesUtilChains:true\npodCIDR: 10.42.0.0/24\nstaticPodPath: /etc/kubernetes/manifests\nstaticPodURL: http://metadata.google.internal/computeMetadata/v1/instance/attributes/google-container-manifest\nstaticPodURLHeader:\n Metadata-Flavor: [Google]\ncgroupDriver: systemd",
"scored": true,
"IsMultiple": false,
"expected_result": ""
}
No, it was a bad test. The test should PASS if makeIPTablesUtilChains is not set on either the command line or in the config file, which is our case, so it should pass.
I have added a fix for the test.
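For clarity, a hedged sketch of what the fixed check could look like (PASS when the flag is explicitly true, or when it is not set at all); the actual fix committed in this PR may differ:

    tests:
      bin_op: or
      test_items:
        # PASS if explicitly set to true ...
        - flag: --make-iptables-util-chains
          path: '{.makeIPTablesUtilChains}'
          compare:
            op: eq
            value: true
        # ... or PASS if not set on the command line or in the config file.
        - flag: --make-iptables-util-chains
          path: '{.makeIPTablesUtilChains}'
          set: false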
Hi guys!
Hi @afdesk - we are still validating the benchmark and fixing the last found issues. Soon we will be able to merge it and I will notify you. Will you be able to merge it and also build a new kube-bench release and image?
cfg/gke-1.6.0/node.yaml
Outdated
    systemctl daemon-reload
    systemctl restart kubelet.service
    systemctl status kubelet -l
scored: false
@ttousai - scored should be true for all automated tests
hi @guyjerby! thanks for the answer.
hi guys! I left a few comments. As for me, only the mark about 3.1.3 is critical. Thanks for understanding.
- id: 2.1.1
  text: "Client certificate authentication should not be used for users (Manual)"
  type: "manual"
this check is marked as automated. If there is no way to automate it, we should add a command to test in the remediation section:
$ kubectl get secrets --namespace kube-system
# Look for secrets with names starting with gke-. These secrets contain the client certificates.
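A hedged sketch of how that audit command could be folded into the check's remediation text (the exact wording in the merged YAML may differ):

    - id: 2.1.1
      text: "Client certificate authentication should not be used for users (Manual)"
      type: "manual"
      remediation: |
        Audit manually:
          kubectl get secrets --namespace kube-system
        Look for secrets with names starting with gke-; these secrets contain the
        client certificates. Review whether any users still authenticate with them.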
text: "Worker Node Configuration Files" | ||
checks: | ||
- id: 3.1.1 | ||
text: "Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive (Manual)" |
this check is marked as automated.
    scored: true

  - id: 3.1.2
    text: "Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)"
this check is marked as automated and there is an audit command here.
    scored: true

  - id: 3.1.3
    text: "Ensure that the kubelet configuration file has permissions set to 600 (Manual)"
this check is marked as automated and there is an audit command.
    scored: true

  - id: 3.1.4
    text: "Ensure that the kubelet configuration file ownership is set to root:root (Manual)"
this check is marked as automated and there is an audit command.
cfg/gke-1.6.0/node.yaml
Outdated
- flag: "permissions" | ||
compare: | ||
op: bitmask | ||
value: "644" |
it was changed to 600
@ttousai - can you change to 600?
done
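For reference, a minimal sketch of the updated test item after the change to 600, mirroring the 644 fragment quoted above (the merged YAML may differ slightly):

    - flag: "permissions"
      compare:
        op: bitmask
        value: "600"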
test_items:
  - flag: "--read-only-port"
    path: '{.readOnlyPort}'
    set: false
  - flag: "--read-only-port"
    path: '{.readOnlyPort}'
    compare:
      op: eq
      value: 0
bin_op: or
if --read-only-port isn't set, the check will pass, right? Is that the correct behavior? Just making sure, since the benchmark says: "Verify that the --read-only-port argument exists and is set to 0."
@afdesk yes, this is the correct behavior. According to the kubelet config doc, if readOnlyPort is not set, it defaults to 0, which means disabled.
- id: 4.1.1
  text: "Ensure that the cluster-admin role is only used where required (Automated)"
  type: "manual"
  remediation: |
does it make sense to add an audit tip into the remediation block? wdyt?

Audit:
Obtain a list of the principals who have access to the cluster-admin role by reviewing the clusterrolebinding output for each role binding that has access to the cluster-admin role.

kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name

Review each principal listed and ensure that cluster-admin privilege is required for it.
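A hedged sketch of how 4.1.1 could carry that audit tip in its remediation block, keeping type: "manual" as in the PR (the merged wording may differ):

    - id: 4.1.1
      text: "Ensure that the cluster-admin role is only used where required (Automated)"
      type: "manual"
      remediation: |
        Audit manually:
          kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name
        Review each principal listed and ensure that cluster-admin privilege is
        required for it, and remove or scope down any binding that does not need it.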
Thanks @afdesk for your review. Regarding the automated tests that cannot be added - this is because we never ran kubectl commands from the kube-bench pod in previous releases, and yes, if we converted a check to manual, the remediation must cover it. In any case, I am going to test the kubectl functionality from the kube-bench pod - if it works well, we will create another PR to automate them. @ttousai - can you make the relevant fixes for the remediations if missing, and then we will finalize it? Thanks!
@afdesk, how can I contact you? Can you share your mail address or send it to me? [email protected]?
* Add config entries for GKE 1.6 controls
* Add gke1.6 control plane recommendations
* Add gke-1.6.0 worker node recommendations
* Add gke-1.6.0 policy recommendations
* Add managed services and policy recommendation
* Add master recommendations
* Fix formatting across gke-1.6.0 files
* Add gke-1.6.0 benchmark selection based on k8s version
* Workaround: hardcode kubelet config path for gke-1.6.0
* Fix tests for makeIPTablesUtilChaings
* Change scored field for all node tests to true
* Fix kubelet file permission to check for

---------

Co-authored-by: afdesk <[email protected]>
Implements #1662