Kube Hunter
What is kube-hunter?
kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was developed to increase awareness and visibility of security issues in Kubernetes environments. You should NOT run kube-hunter on a Kubernetes cluster that you don't own!
To learn more about the kube-hunter scanner itself, visit the kube-hunter GitHub repository or the kube-hunter website.
Deployment
The kube-hunter chart can be deployed via helm:
# Install HelmChart (use -n to configure another namespace)
helm upgrade --install kube-hunter secureCodeBox/kube-hunter
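For example, to install the chart into its own namespace (the namespace name below is only an illustration):
# Install into a dedicated namespace, creating it if it does not exist yet
helm upgrade --install kube-hunter secureCodeBox/kube-hunter -n kube-hunter --create-namespace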
Scanner Configuration
The following security scan configuration examples are based on the kube-hunter documentation; please refer to the original documentation for further configuration examples. A sketch of how these options can be passed to a Scan follows the list.
- To specify remote machines for hunting, select option 1 in kube-hunter's interactive mode or use the --remote option. Example:
kube-hunter --remote some.node.com
- To specify interface scanning, you can use the --interface option (this will scan all the machine's network interfaces). Example:
kube-hunter --interface
- To specify a specific CIDR to scan, use the --cidr option. Example:
kube-hunter --cidr 192.168.0.0/24
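These command line options map onto the parameters of a Scan. The following is a minimal sketch, assuming you are authorized to scan the target network; the scan name and CIDR are placeholders:
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "kube-hunter-cidr-example"
spec:
  scanType: "kube-hunter"
  parameters:
    # Placeholder network range; replace with a CIDR you own and are allowed to scan
    - "--cidr"
    - "192.168.0.0/24"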
Requirements
Kubernetes: >=v1.11.0-0
Values
Key | Type | Default | Description |
---|---|---|---|
cascadingRules.enabled | bool | false | Enables or disables the installation of the default cascading rules for this scanner |
imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) |
parser.affinity | object | {} | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) |
parser.env | list | [] | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) |
parser.image.pullPolicy | string | "IfNotPresent" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
parser.image.repository | string | "docker.io/securecodebox/parser-kube-hunter" | Parser image repository |
parser.image.tag | string | defaults to the chart's version | Parser image tag |
parser.resources | object | { requests: { cpu: "200m", memory: "100Mi" }, limits: { cpu: "400m", memory: "200Mi" } } | Optional resources lets you control resource limits and requests for the parser container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
parser.scopeLimiterAliases | object | {} | Optional finding aliases to be used in the scopeLimiter. |
parser.tolerations | list | [] | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
parser.ttlSecondsAfterFinished | string | nil | Seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ |
scanner.activeDeadlineSeconds | string | nil | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) when considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) |
scanner.affinity | object | {} | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) |
scanner.backoffLimit | int | 3 | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) |
scanner.env | list | [] | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) |
scanner.extraContainers | list | [] | Optional additional Containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) |
scanner.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) |
scanner.extraVolumes | list | [] | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) |
scanner.image.pullPolicy | string | "IfNotPresent" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
scanner.image.repository | string | "docker.io/securecodebox/scanner-kube-hunter" | Container Image to run the scan |
scanner.image.tag | string | nil | Defaults to the chart's appVersion |
scanner.nameAppend | string | nil | Append a string to the default scantype name. |
scanner.podSecurityContext | object | {} | Optional securityContext set on scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) |
scanner.resources | object | {} | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) |
scanner.securityContext | object | {"allowPrivilegeEscalation":false,"capabilities":{"drop":["all"]},"privileged":false,"readOnlyRootFilesystem":true,"runAsNonRoot":false} | Optional securityContext set on scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) |
scanner.securityContext.allowPrivilegeEscalation | bool | false | Ensures that user privileges cannot be escalated |
scanner.securityContext.capabilities.drop[0] | string | "all" | Drops all Linux privileges from the container. |
scanner.securityContext.privileged | bool | false | Ensures that the scanner container is not run in privileged mode |
scanner.securityContext.readOnlyRootFilesystem | bool | true | Prevents write access to the container's file system |
scanner.securityContext.runAsNonRoot | bool | false | Enforces that the scanner image is run as a non-root user |
scanner.suspend | bool | false | If set to true, the scan job will be suspended after creation. You can then resume the job by setting its .spec.suspend field back to false (for example with kubectl patch), or let a job scheduler like Kueue resume it |
scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
scanner.ttlSecondsAfterFinished | string | nil | Seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ |
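Most of these values can be overridden at install time via a custom values file. The following is a minimal sketch; the resource figures mirror the parser defaults above and are illustrative, not recommendations, and the file name is a placeholder:
scanner:
  # Fail the scan Job after a single retry instead of the default 3
  backoffLimit: 1
  # Resource requests/limits for the scanner container (illustrative values)
  resources:
    requests:
      cpu: "200m"
      memory: "100Mi"
    limits:
      cpu: "400m"
      memory: "200Mi"
Apply the overrides when installing the chart:
helm upgrade --install kube-hunter secureCodeBox/kube-hunter --values ./kube-hunter-values.yaml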
License
Code of secureCodeBox is licensed under the Apache License 2.0.
CPU architectures
The scanner is currently supported on the following CPU architectures:
- linux/amd64
Examples
in-cluster
Scan
# SPDX-FileCopyrightText: the secureCodeBox authors
#
# SPDX-License-Identifier: Apache-2.0
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
name: "kube-hunter-in-cluster"
spec:
scanType: "kube-hunter"
parameters:
- "--pod"
Findings
# SPDX-FileCopyrightText: the secureCodeBox authors
#
# SPDX-License-Identifier: Apache-2.0
{
"nodes":
[
{"type": "Node/Master", "location": "10.0.0.1"},
{"type": "Node/Master", "location": "10.96.0.1"},
],
"services":
[
{
"service": "Kubelet API",
"location": "10.0.0.1:10250",
"description": "The Kubelet is the main component in every Node, all pod operations goes through the kubelet",
},
{
"service": "Metrics Server",
"location": "10.0.0.1:6443",
"description": "The Metrics server is in charge of providing resource usage metrics for pods and nodes to the API server.",
},
{
"service": "API Server",
"location": "10.96.0.1:443",
"description": "The API server is in charge of all operations on the cluster.",
},
],
"vulnerabilities":
[
{
"location": "Local to Pod(scan-kube-hunter-in-cluster-4rfff)",
"vid": "KHV050",
"category": "Access Risk",
"severity": "low",
"vulnerability": "Read access to pod's service account token",
"description": " Accessing the pod service account token gives an attacker the option to use the server API ",
"evidence": "eyJhbGciOiJSUzI1NiIsImtpZCI6IkNabmY2NVgxUmR1ZnQzbHJVQVAzZFFUNjBiR0hUVE9SRDNPcURyenlkODgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Imx1cmNoZXItdG9rZW4tcGpmNGIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibHVyY2hlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjUzOGVhYjdmLTY1YjAtNDE4Yy04MGI2LTI1NGQxNDQ4ODU3NiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Omx1cmNoZXIifQ.cGtQHagQ2xxlAFnWwFRNgGJIkaeZIKnqoYYb8GmxN94ry0wwxCbgBm4Kg33A903wDBxd8iuITTk-r8UPZyYJHoxlVu0pHt-3SAc4NT0ob50R2acVXQ2qj_yJOOQHurCWeOJMkGqtCyUoZ8Xcnc6z32Ao-NWzKD-0wV7ndpKm-ytHP0YpHb9bLUPcQGvFoh_UF132yjeJqzwLPRX6hStMYOa8LNhJGyhdejW3BIOylzVPNkKE5lEjWv9f853qnTKG-TzXHBbth7qV8UHwSoY8YFoMezK3zazQt4dN1VG_wYmZ0ujikTC7TRTGr500kFxfpACKwdQ1M1fXgKJhNv9UgA",
"hunter": "Access Secrets",
},
{
"location": "Local to Pod(scan-kube-hunter-in-cluster-4rfff)",
"vid": "None",
"category": "Access Risk",
"severity": "low",
"vulnerability": "CAP_NET_RAW Enabled",
"description": "CAP_NET_RAW is enabled by default for pods. If an attacker manages to compromise a pod, they could potentially take advantage of this capability to perform network attacks on other pods running on the same node",
"evidence": "",
"hunter": "Pod Capabilities Hunter",
},
{
"location": "Local to Pod(scan-kube-hunter-in-cluster-4rfff)",
"vid": "None",
"category": "Access Risk",
"severity": "low",
"vulnerability": "Access to pod's secrets",
"description": " Accessing the pod's secrets within a compromised pod might disclose valuable data to a potential attacker",
"evidence": "['/var/run/secrets/kubernetes.io/serviceaccount/namespace', '/var/run/secrets/kubernetes.io/serviceaccount/ca.crt', '/var/run/secrets/kubernetes.io/serviceaccount/token', '/var/run/secrets/kubernetes.io/serviceaccount/..2020_04_03_14_52_24.460746409/ca.crt', '/var/run/secrets/kubernetes.io/serviceaccount/..2020_04_03_14_52_24.460746409/token', '/var/run/secrets/kubernetes.io/serviceaccount/..2020_04_03_14_52_24.460746409/namespace']",
"hunter": "Access Secrets",
},
{
"location": "10.96.0.1:443",
"vid": "KHV002",
"category": "Information Disclosure",
"severity": "medium",
"vulnerability": "K8s Version Disclosure",
"description": "The kubernetes version could be obtained from the /version endpoint ",
"evidence": "v1.18.0",
"hunter": "Api Version Hunter",
},
{
"location": "10.96.0.1:443",
"vid": "KHV005",
"category": "Information Disclosure",
"severity": "medium",
"vulnerability": "Access to API using service account token",
"description": " The API Server port is accessible. Depending on your RBAC settings this could expose access to or control of your cluster. ",
"evidence": "b'{\"kind\":\"APIVersions\",\"versions\":[\"v1\"],\"serverAddressByClientCIDRs\":[{\"clientCIDR\":\"0.0.0.0/0\",\"serverAddress\":\"172.17.0.2:6443\"}]}\\n'",
"hunter": "API Server Hunter",
},
],
"kburl": "https://aquasecurity.github.io/kube-hunter/kb/{vid}",
}
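The kburl template above resolves a finding's vid to its knowledge base entry; for example, KHV002 maps to https://aquasecurity.github.io/kube-hunter/kb/KHV002.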