There are two ways to check the functionality of KubeArmor: 1) testing KubeArmor manually and 2) using the auto-testing framework.
0. Make sure that the annotation controller is installed on the cluster (applicable to Steps 1 and 2)

   - To install the controller from the KubeArmor Docker repository to your cluster, run

     ```
     $ cd KubeArmor/pkg/KubeArmorAnnotation
     ~/KubeArmor/pkg/KubeArmorAnnotation$ make deploy
     ```

   - To install the controller (local version) to your cluster, run

     ```
     $ cd KubeArmor/pkg/KubeArmorAnnotation
     ~/KubeArmor/pkg/KubeArmorAnnotation$ make docker-build deploy
     ```
1. Test KubeArmor manually

   - Run 'kubectl proxy' in the background

     ```
     $ kubectl proxy &
     ```

   - Compile KubeArmor

     ```
     $ cd KubeArmor/KubeArmor
     ~/KubeArmor/KubeArmor$ make clean && make
     ```

   - Run KubeArmor

     ```
     ~/KubeArmor/KubeArmor$ sudo -E ./kubearmor -gRPC=[gRPC port number] -logPath=[log file path] -enableKubeArmorPolicy=[true|false] -enableKubeArmorHostPolicy=[true|false]
     ```
   - Apply security policies

     Beforehand, check whether the KubeArmorPolicy and KubeArmorHostPolicy CRDs are already applied.

     ```
     $ kubectl explain KubeArmorPolicy
     ```

     If they are not applied yet, do so.

     ```
     $ kubectl apply -f ~/KubeArmor/deployments/CRD/
     ```
     Now you can apply specific policies.

     ```
     $ kubectl apply -f [policy file]
     ```

     You can refer to the security policies defined for the example microservices in examples.
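     If you need a starting point before looking at the examples, a minimal policy looks like the following sketch. The policy name, the `multiubuntu` namespace, the `group: group-1` label, and the blocked binary are illustrative assumptions, not part of any shipped example:

     ```yaml
     apiVersion: security.kubearmor.com/v1
     kind: KubeArmorPolicy
     metadata:
       name: block-sleep        # illustrative name
       namespace: multiubuntu   # assumed namespace of the target pods
     spec:
       selector:
         matchLabels:
           group: group-1       # illustrative pod label
       process:
         matchPaths:
           - path: /bin/sleep   # executions of this binary are blocked
       action: Block
     ```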
   - Run a command inside a target pod to trigger a policy violation

     ```
     $ kubectl -n [namespace name] exec -it [pod name] -- bash -c [command]
     ```
   - Watch alerts using the karmor CLI tool

     ```
     $ karmor log [flags]
     ```

     flags:

     ```
     --gRPC string        gRPC server information
     --help               help for log
     --json               print alerts and logs in the JSON format
     --logFilter string   what kinds of alerts and logs to receive, {policy|system|all} (default "policy")
     --logPath string     output location for alerts and logs, {path|stdout|none} (default "stdout")
     --msgPath string     output location for messages, {path|stdout|none} (default "none")
     ```
     Note that you will only see alerts and logs generated after `karmor log` starts running; thus, we recommend running the above command in another terminal to watch logs live.
2. Test KubeArmor using the auto-testing framework

   The auto-testing framework operates based on two things: microservices and test scenarios for each microservice.
   - Microservices

     Create a directory for a microservice in microservices.

     ```
     $ cd KubeArmor/tests/microservices
     ~/KubeArmor/tests/microservices$ mkdir [microservice name]
     ```

     Then, create YAML files for the microservice.

     ```
     $ cd KubeArmor/tests/microservices/[microservice name]
     ~/KubeArmor/tests/microservices/[microservice name]$ ...
     ```

     As an example, we created 'multiubuntu' in microservices and defined 'multiubuntu-deployment.yaml' in multiubuntu.
   - Test scenarios

     Create a directory whose name is like '[microservice name]_[scenario name]' in scenarios.

     ```
     $ cd KubeArmor/tests/scenarios
     ~/KubeArmor/tests/scenarios$ mkdir [microservice name]_[scenario name]
     ```

     Then, define a YAML file for a test policy in the directory.

     ```
     ~/KubeArmor/tests/scenarios$ cd [microservice name]_[scenario name]
     .../[microservice name]_[scenario name]$ vi [policy name].yaml
     ```

     Create cmd files whose names are like 'cmd#'.

     ```
     .../[microservice name]_[scenario name]$ vi cmd1 / cmd2 / ...
     ```

     Here is a template for a cmd file.

     ```
     source: [pod name]
     cmd: [command to trigger a policy violation]
     result: [expected result], { passed | failed }
     ---
     operation: [operation], { Process | File | Network }
     condition: [matching string]
     action: [action in a policy], { Allow | Audit | Block }
     ```
     This is a cmd example of a test scenario.

     ```
     source: ubuntu-1-deployment
     cmd: sleep 1
     result: failed
     ---
     operation: Process
     condition: sleep
     action: Block
     ```

     You can refer to the predefined testcases in scenarios.
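     The scenario-creation steps above can be sketched end to end. This is only an illustration: the scenario name `multiubuntu_proc-block` is an assumption, and the files are written under a temporary directory rather than the real KubeArmor/tests/scenarios tree.

     ```shell
     # Sketch: build a '[microservice name]_[scenario name]' directory with one cmd file.
     # The names multiubuntu, proc-block, and ubuntu-1-deployment are assumptions.
     set -eu

     SCENARIOS_DIR="$(mktemp -d)"          # stand-in for KubeArmor/tests/scenarios
     SCENARIO="multiubuntu_proc-block"     # '[microservice name]_[scenario name]'
     mkdir -p "$SCENARIOS_DIR/$SCENARIO"

     # cmd1 follows the template: source/cmd/result, then '---', then the match block.
     cat > "$SCENARIOS_DIR/$SCENARIO/cmd1" <<'EOF'
     source: ubuntu-1-deployment
     cmd: sleep 1
     result: failed
     ---
     operation: Process
     condition: sleep
     action: Block
     EOF

     # Quick sanity check on the generated file.
     grep -q '^result: failed' "$SCENARIOS_DIR/$SCENARIO/cmd1" && echo "cmd1 ok"
     ```

     A real scenario would also place the test policy YAML next to the cmd files in the same directory.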
   - The case where KubeArmor is directly running on the host

     Compile KubeArmor.

     ```
     $ cd KubeArmor/KubeArmor
     ~/KubeArmor/KubeArmor$ make clean && make
     ```

     Run the auto-testing framework.

     ```
     $ cd KubeArmor/tests
     ~/KubeArmor/tests$ ./test-scenarios-local.sh
     ```

     Check the test report.

     ```
     ~/KubeArmor/tests$ cat /tmp/kubearmor.test
     ```
   - The case where KubeArmor is running as a DaemonSet in Kubernetes

     Run the testing framework.

     ```
     $ cd KubeArmor/tests
     ~/KubeArmor/tests$ ./test-scenarios-in-runtime.sh
     ```

     Check the test report.

     ```
     ~/KubeArmor/tests$ cat /tmp/kubearmor.test
     ```