
BSI APP.4.4.A14+A15 #12158

Open
sluetze wants to merge 10 commits into master
Conversation

@sluetze (Contributor) commented Jul 16, 2024

Description:

To check against BSI APP.4.4.A14, this PR adds a new rule:

master_taint_noschedule.

This rule checks whether the master taint is set on master nodes. Since we never know what kind of setup it is, it only checks that the taint is set at_least_once: we can assume it is mostly controlled by the scheduler component and is therefore identical on each master node.

This PR also adds a missing identifier and sets the BSI profile for automatic referencing.

Rationale:

  • Our customers requested the BSI profile

Review Hints:

While we could also check whether .spec.mastersSchedulable is set in the schedulers.config.openshift.io manifest, this key is not set by default. That's why I moved to checking the effect instead of the configuration. The commands below sketch the difference.
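For reviewers, the two approaches can be compared quickly with the following illustrative commands (not part of the rule itself; the jsonpath filters are my assumptions about the cluster layout):

# Configuration: the key is absent by default, so this usually prints nothing
$ oc get schedulers.config.openshift.io cluster -o jsonpath='{.spec.mastersSchedulable}'

# Effect: the taint shows up on each control plane node regardless of how it was set
$ oc get nodes -l node-role.kubernetes.io/master -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.taints[?(@.key=="node-role.kubernetes.io/master")].effect}{"\n"}{end}'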

@openshift-merge-robot openshift-merge-robot added the needs-rebase Used by openshift-ci bot. label Jul 16, 2024

openshift-ci bot commented Jul 16, 2024

Hi @sluetze. Thanks for your PR.

I'm waiting for a ComplianceAsCode member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-ci openshift-ci bot added the needs-ok-to-test Used by openshift-ci bot. label Jul 16, 2024

github-actions bot commented Jul 16, 2024

Start a new ephemeral environment with changes proposed in this pull request:

ocp4 (from CTF) Environment (using Fedora as testing environment)
Open in Gitpod

Fedora Testing Environment
Open in Gitpod

Oracle Linux 8 Environment
Open in Gitpod


github-actions bot commented Jul 16, 2024

🤖 A k8s content image for this PR is available at:
ghcr.io/complianceascode/k8scontent:12158
This image was built from commit: 024b942

Click here to see how to deploy it

If you already have Compliance Operator deployed:
utils/build_ds_container.py -i ghcr.io/complianceascode/k8scontent:12158

Otherwise deploy the content and operator together by checking out ComplianceAsCode/compliance-operator and:
CONTENT_IMAGE=ghcr.io/complianceascode/k8scontent:12158 make deploy-local

@marcusburghardt marcusburghardt added OpenShift OpenShift product related. BSI PRs or issues for the BSI profile. labels Jul 31, 2024
@BhargaviGudi (Collaborator) commented:
QE: /lgtm
Verification passed with 4.17.0-0.nightly-2024-08-18-131731 + compliance-operator + #12158
Verified the content and confirmed that the rule instructions work as expected.

$ oc get ccr | grep no-clusterrolebin
upstream-ocp4-bsi-accounts-no-clusterrolebindings-default-service-account   PASS     medium
$ oc get ccr upstream-ocp4-bsi-accounts-no-clusterrolebindings-default-service-account -o=jsonpath={.instructions}
Run the following command to retrieve a list of ClusterRoleBindings that are
associated to the default service account:
$ oc get clusterrolebindings -o json | jq '[.items[] | select ( .subjects[]?.name == "default" ) | select(.subjects[].namespace | startswith("kube-") or startswith("openshift-") | not) | .metadata.name ] | unique'
There should be no ClusterRoleBindings associated with the default service account
in any namespace.
Is it the case that default service account is given permissions using ClusterRoleBindings?$ 
$ oc get clusterrolebindings -o json | jq '[.items[] | select ( .subjects[]?.name == "default" ) | select(.subjects[].namespace | startswith("kube-") or startswith("openshift-") | not) | .metadata.name ] | unique'
[]
$ oc get ccr | grep no-rolebinding
upstream-ocp4-bsi-accounts-no-rolebindings-default-service-account          PASS     medium
$ oc get ccr upstream-ocp4-bsi-accounts-no-rolebindings-default-service-account -o=jsonpath={.instructions}
Run the following command to retrieve a list of RoleBindings that are
associated to the default service account:
$ oc get rolebindings --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select ( .subjects[]?.name == "default" ) | .metadata.namespace + "/" + .metadata.name ] | unique'
There should be no RoleBindings associated with the default service account
in any namespace.
Is it the case that default service account is given permissions using RoleBindings?$ 
$ oc get rolebindings --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select ( .subjects[]?.name == "default" ) | .metadata.namespace + "/" + .metadata.name ] | unique'
[]
$ oc get ccr | grep ristrict-service
$ oc get ccr | grep restrict-service
upstream-ocp4-bsi-accounts-restrict-service-account-tokens                  MANUAL   medium
$ oc get ccr upstream-ocp4-bsi-accounts-restrict-service-account-tokens -o=jsonpath={.instructions}
For each pod in the cluster, review the pod specification and
ensure that pods that do not need to explicitly communicate with
the API server have automountServiceAccountToken
configured to false.
Is it the case that service account token usage needs review?$ 
$ oc get ccr | grep account-unique-service
$ oc get ccr | grep accounts-unique-service
upstream-ocp4-bsi-accounts-unique-service-account                           MANUAL   medium
$ oc get ccr | grep general-node
upstream-ocp4-bsi-general-node-separation                                   MANUAL   medium
$ oc get ccr upstream-ocp4-bsi-general-node-separation -o=jsonpath={.instructions}
Run the following command and review the pods and how they are deployed on nodes. $ oc get pod -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,APP:.metadata.labels.app\.kubernetes\.io/name,NODE:.spec.nodeName --all-namespaces | grep -v "openshift-"
You can use labels or other data as custom field which helps you to identify parts of an application.
Ensure that applications with high protection requirements are not colocated on nodes or in clusters with workloads of lower protection requirements.
Is it the case that Application placement on Nodes and Clusters needs review?$ 
$ oc get pod -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,APP:.metadata.labels.app\.kubernetes\.io/name,NODE:.spec.nodeName --all-namespaces | grep -v "openshift-"
NAME                                                                                          NAMESPACE                                          APP      NODE
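One hypothetical way to implement the separation this rule asks you to review is a dedicated node pool selected via labels and taints (node and label names below are illustrative, not taken from the rule):

# Reserve a node for high-protection workloads
$ oc adm taint nodes <node> workload-class=high-protection:NoSchedule
$ oc label node <node> workload-class=high-protection

Sensitive pods then opt in with a matching nodeSelector and toleration in their spec:

  nodeSelector:
    workload-class: high-protection
  tolerations:
  - key: workload-class
    operator: Equal
    value: high-protection
    effect: NoSchedule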
$ oc get ccr | grep liveness-readiness
upstream-ocp4-bsi-liveness-readiness-probe-in-workload                      MANUAL   medium
$ oc get ccr upstream-ocp4-bsi-liveness-readiness-probe-in-workload -o=jsonpath={.instructions}
Run the following command to retrieve a list of deployments, daemonsets and statefulsets that
do not have liveness or readiness probes set for their containers:
$ oc get deployments,statefulsets,daemonsets --all-namespaces -o json | jq '[ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select( .spec.template.spec.containers[].readinessProbe != null and .spec.template.spec.containers[].livenessProbe != null ) | "\(.kind): \(.metadata.namespace)/\(.metadata.name)" ] | unique'

Make sure that nothing is output in the result, or that there is a valid reason not to set a
readiness or liveness probe for those workloads.
Is it the case that Liveness or readiness probe is not set?$ 
$ oc get deployments,statefulsets,daemonsets --all-namespaces -o json | jq '[ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select( .spec.template.spec.containers[].readinessProbe != null and .spec.template.spec.containers[].livenessProbe != null ) | "\(.kind): \(.metadata.namespace)/\(.metadata.name)" ] | unique'
[]
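For workloads flagged by this check, a minimal probe configuration might look like the following sketch (path, port, and timings are illustrative):

  containers:
  - name: app
    image: quay.io/example/app:latest    # hypothetical image
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10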
$ oc get ccr | grep master-taint
upstream-ocp4-bsi-master-taint-noschedule                                   PASS     medium
$ oc get ccr upstream-ocp4-bsi-master-taint-noschedule -o=jsonpath={.instructions}
Run the following command to see if control planes are schedulable
$ oc get --raw /api/v1/nodes | jq '.items[] | select(.metadata.labels."node-role.kubernetes.io/master" == "" or .metadata.labels."node-role.kubernetes.io/control-plane" == "" ) | .spec.taints[] | select(.key == "node-role.kubernetes.io/master" and .effect == "NoSchedule" )'
For each master node, there should be an output of a key with the NoSchedule effect.

By editing the cluster scheduler you can centrally configure the masters as schedulable or not
by setting .spec.mastersSchedulable to true.
Use $ oc edit schedulers.config.openshift.io cluster to configure the scheduling.
Is it the case that Control Plane is schedulable?$ 
$ oc get --raw /api/v1/nodes | jq '.items[] | select(.metadata.labels."node-role.kubernetes.io/master" == "" or .metadata.labels."node-role.kubernetes.io/control-plane" == "" ) | .spec.taints[] | select(.key == "node-role.kubernetes.io/master" and .effect == "NoSchedule" )'
{
  "key": "node-role.kubernetes.io/master",
  "effect": "NoSchedule"
}
{
  "key": "node-role.kubernetes.io/master",
  "effect": "NoSchedule"
}
{
  "key": "node-role.kubernetes.io/master",
  "effect": "NoSchedule"
}
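As a non-interactive alternative to oc edit, the same scheduler setting described in the instructions can be applied with a merge patch (a sketch; false keeps the control plane unschedulable):

$ oc patch schedulers.config.openshift.io cluster --type=merge -p '{"spec":{"mastersSchedulable":false}}'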
$ oc get ccr | grep scansetting-has-auto
upstream-ocp4-bsi-scansetting-has-autoapplyremediations                     PASS     medium
$ oc get ccr upstream-ocp4-bsi-scansetting-has-autoapplyremediations -o=jsonpath={.instructions}
Run the following command to retrieve the scansettingbindings in the system:
oc get scansettings -ojson | jq '.items[].autoApplyRemediations'
If a scansetting is defined to set the autoApplyRemediation attribute, the above
filter will return at least one 'true'. Run the following jq query to identify the non-compliant scansettings objects:
oc get scansettings -ojson | jq -r '[.items[] | select(.autoApplyRemediation != "" or .autoApplyRemediation != null) | .metadata.name]'
Is it the case that compliance operator is not automatically remediating the cluster?$ 
$ oc get scansettings -ojson | jq '.items[].autoApplyRemediations'
true
null
true
$ oc get scansettings -ojson | jq -r '[.items[] | select(.autoApplyRemediation != "" or .autoApplyRemediation != null) | .metadata.name]'
[
  "auto-rem-ss",
  "default",
  "default-auto-apply"
]
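Note that the second filter in the instructions (.autoApplyRemediation != "" or .autoApplyRemediation != null) is always true, which is why all three names are returned. A stricter sketch, assuming the plural field name that the first query above confirms, would list only the ScanSettings that actually enable auto-remediation:

$ oc get scansettings -ojson | jq -r '[.items[] | select(.autoApplyRemediations == true) | .metadata.name]'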
$ oc get ccr | grep no-clusterbindings-de
$ oc get ccr | grep no-clusterrole
upstream-ocp4-bsi-accounts-no-clusterrolebindings-default-service-account   PASS     medium
$ oc get ccr upstream-ocp4-bsi-accounts-no-clusterrolebindings-default-service-account -o=jsonpath={.instructions}
Run the following command to retrieve a list of ClusterRoleBindings that are
associated to the default service account:
$ oc get clusterrolebindings -o json | jq '[.items[] | select ( .subjects[]?.name == "default" ) | select(.subjects[].namespace | startswith("kube-") or startswith("openshift-") | not) | .metadata.name ] | unique'
There should be no ClusterRoleBindings associated with the default service account
in any namespace.
Is it the case that default service account is given permissions using ClusterRoleBindings?$ 
$ oc get clusterrolebindings -o json | jq '[.items[] | select ( .subjects[]?.name == "default" ) | select(.subjects[].namespace | startswith("kube-") or startswith("openshift-") | not) | .metadata.name ] | unique'
[]
$ oc get ccr | grep no-rolebindings
upstream-ocp4-bsi-accounts-no-rolebindings-default-service-account          PASS     medium
$ oc get ccr upstream-ocp4-bsi-accounts-no-rolebindings-default-service-account -o=jsonpath={.instructions}
Run the following command to retrieve a list of RoleBindings that are
associated to the default service account:
$ oc get rolebindings --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select ( .subjects[]?.name == "default" ) | .metadata.namespace + "/" + .metadata.name ] | unique'
There should be no RoleBindings associated with the the default service account
in any namespace.
Is it the case that default service account is given permissions using RoleBindings?$ 
$ oc get rolebindings --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select ( .subjects[]?.name == "default" ) | .metadata.namespace + "/" + .metadata.name ] | unique'
[]
$ oc get ccr | grep general-network
upstream-ocp4-bsi-general-network-separation                                MANUAL   medium
$ oc get ccr upstream-ocp4-bsi-general-network-separation -o=jsonpath={.instructions}
Create separate Ingress Controllers for the API and your Applications. Also set up your environment in such a way that Control Plane Nodes are in a different network than your worker nodes. If you implement multiple Nodes for different purposes, evaluate whether these should be in different network segments (i.e. Infra-Nodes, Storage-Nodes, ...).
Is it the case that Network separation needs review?$ 
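As an illustration of the first recommendation, a second IngressController dedicated to internal application traffic could look like this sketch (name and domain are hypothetical):

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal-apps              # hypothetical name
  namespace: openshift-ingress-operator
spec:
  domain: apps-int.example.com     # hypothetical domain
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal              # keeps this traffic off the public load balancer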
$ oc get ccr | grep probe-in-workload
upstream-ocp4-bsi-liveness-readiness-probe-in-workload                      MANUAL   medium
$ oc get ccr upstream-ocp4-bsi-liveness-readiness-probe-in-workload -o=jsonpath={.instructions}
Run the following command to retrieve a list of deployments, daemonsets and statefulsets that
do not have liveness or readiness probes set for their containers:
$ oc get deployments,statefulsets,daemonsets --all-namespaces -o json | jq '[ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select( .spec.template.spec.containers[].readinessProbe != null and .spec.template.spec.containers[].livenessProbe != null ) | "\(.kind): \(.metadata.namespace)/\(.metadata.name)" ] | unique'

Make sure that nothing is output in the result, or that there is a valid reason not to set a
readiness or liveness probe for those workloads.
Is it the case that Liveness or readiness probe is not set?$ 
$ oc get deployments,statefulsets,daemonsets --all-namespaces -o json | jq '[ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select( .spec.template.spec.containers[].readinessProbe != null and .spec.template.spec.containers[].livenessProbe != null ) | "\(.kind): \(.metadata.namespace)/\(.metadata.name)" ] | unique'
[]

@yuumasato yuumasato self-assigned this Sep 17, 2024
@yuumasato yuumasato added this to the 0.1.75 milestone Sep 20, 2024
@yuumasato (Member) left a comment

@sluetze This looks great.
It just needs a rebase/conflict resolution though.

controls/bsi_app_4_4.yml: 3 review threads (outdated, resolved)

github-actions bot commented Oct 9, 2024

This datastream diff is auto generated by the check Compare DS/Generate Diff

Click here to see the full diff
New content has different text for rule 'xccdf_org.ssgproject.content_rule_general_network_separation'.
--- xccdf_org.ssgproject.content_rule_general_network_separation
+++ xccdf_org.ssgproject.content_rule_general_network_separation
@@ -7,6 +7,9 @@
 
 [reference]:
 APP.4.4.A7
+
+[reference]:
+APP.4.4.A14
 
 [reference]:
 SYS.1.6.A3

OCIL for rule 'xccdf_org.ssgproject.content_rule_general_network_separation' differs.
--- ocil:ssg-general_network_separation_ocil:questionnaire:1
+++ ocil:ssg-general_network_separation_ocil:questionnaire:1
@@ -1,3 +1,5 @@
 Create separate Ingress Controllers for the API and your Applications. Also setup your environment in a way, that Control Plane Nodes are in another network than your worker nodes. If you implement multiple Nodes for different purposes evaluate if these should be in different network segments (i.e. Infra-Nodes, Storage-Nodes, ...).
+Also evaluate how you handle outgoing connections and if they have to be pinned to
+specific nodes or IPs.
       Is it the case that Network separation needs review?
       
New content has different text for rule 'xccdf_org.ssgproject.content_rule_general_node_separation'.
--- xccdf_org.ssgproject.content_rule_general_node_separation
+++ xccdf_org.ssgproject.content_rule_general_node_separation
@@ -5,9 +5,14 @@
 [description]:
 Use Nodes or Clusters to isolate Workloads with high protection requirements.
 
-Run the following command and review the pods and how they are deployed on Nodes. $ oc get pod -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,APP:.metadata.labels.app\.kubernetes\.io/name,NODE:.spec.nodeName --all-namespaces | grep -v "openshift-" 
+Run the following command and review the pods and how they are deployed on Nodes.
+$ oc get pod -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,APP:.metadata.labels.app\.kubernetes\.io/name,NODE:.spec.nodeName --all-namespaces | grep -v "openshift-" 
 You can use labels or other data as custom field which helps you to identify parts of an application.
-Ensure that Applications with high protection requirements are not colocated on Nodes or in Clusters with workloads of lower protection requirements.
+Ensure that Applications with high protection requirements are not colocated on Nodes or in Clusters
+with workloads of lower protection requirements.
+
+[reference]:
+APP.4.4.A14
 
 [reference]:
 APP.4.4.A15
@@ -16,4 +21,10 @@
 SYS.1.6.A3
 
 [rationale]:
-Assigning workloads with high protection requirements to specific nodes creates and additional boundary (the node) between workloads of high protection requirements and workloads which might follow less strict requirements. An adversary which attacked a lighter protected workload now has additional obstacles for their movement towards the higher protected workloads.
+Assigning workloads with high protection requirements to specific nodes creates and additional
+boundary (the node) between workloads of high protection requirements and workloads which might
+follow less strict requirements. An adversary which attacked a lighter protected workload now has
+additional obstacles for their movement towards the higher protected workloads.
+
+[ident]:
+CCE-88903-0


codeclimate bot commented Oct 14, 2024

Code Climate has analyzed commit 33a1da5 and detected 0 issues on this pull request.

The test coverage on the diff in this pull request is 100.0% (50% is the threshold).

This pull request will bring the total coverage in the repository to 59.5% (0.0% change).

View more on Code Climate.
