Installing Kubescape in your cluster

Prerequisites

  • Make sure you have a Kubescape Cloud account - if not, sign up here
  • You need installation access to your cluster (you should be able to create Deployments, CronJobs, ConfigMaps, and Secrets)
  • You must have kubectl and Helm installed

Cluster requirements

The Kubescape operator components require a minimum of 400 MiB of RAM and 400m CPU.

Install a pre-registered cluster

  1. Navigate to the Kubescape Cloud Platform
  2. Click on "Add Cluster"
  3. Select the in-cluster installation and follow the steps

Install without pre-registering the cluster

  1. Add the Kubescape operator Helm repo:

helm repo add kubescape https://kubescape.github.io/helm-charts/

Or, if the repo is already added, update it:

helm repo update

  2. Install the Helm chart:

helm upgrade --install kubescape kubescape/kubescape-cloud-operator -n kubescape --create-namespace --set clusterName=`kubectl config current-context` --set account=<account ID>

Post-install validation

After installation, verify that all components are running correctly:

% kubectl -n kubescape get deployments,statefulsets
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/gateway     1/1     1            1           48s
deployment.apps/kubescape   1/1     1            1           48s
deployment.apps/kubevuln    1/1     1            1           48s
deployment.apps/operator    1/1     1            1           48s

NAME                         READY   AGE
statefulset.apps/kollector   1/1     48s

The armo-system namespace is deprecated. If it still exists in your cluster, delete it manually:

kubectl delete namespace armo-system

Prometheus Exporter

Read more about the integration with Prometheus

Adjusting Resource Usage for Your Cluster

By default, Kubescape is configured for small- to medium-sized clusters.
If you have a larger cluster and you experience slowdowns or see Kubernetes evicting components, increase the resources allocated to the affected component.

Taking Kubescape as an example, we found that our defaults of 500 MiB of memory and 500m CPU work well for clusters of up to 1250 total resources.
If you have more total resources, or are already experiencing resource pressure, first count how many resources are in your cluster by running the following command:

kubectl get all -A --no-headers | wc -l

The command should print an approximate count of resources in your cluster.
Then, based on the number you see, allocate 0.4 MiB of memory per resource (roughly 100 MiB for every 250 resources), but no less than 128 MiB in total.
The formula for memory is as follows:

MemoryLimit := max(128, 0.4 * YOUR_AMOUNT_OF_RESOURCES)

For example, if your cluster has 500 resources, a sensible memory limit would be:

kubescape:
  resources:
    limits:
      memory: 200Mi  # max(128, 0.4 * 500) == 200

If your cluster has 50 resources, we still recommend allocating at least 128 MiB of memory.
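The sizing rule above can be sketched as a small shell computation. COUNT is a stand-in here; in practice it would come from the kubectl command above:

```shell
# Hypothetical resource count; in a real cluster you would use:
# COUNT=$(kubectl get all -A --no-headers | wc -l)
COUNT=500

# 0.4 MiB of memory per resource (computed as COUNT * 2 / 5)...
LIMIT=$(( COUNT * 2 / 5 ))

# ...floored at 128 MiB
if [ "$LIMIT" -lt 128 ]; then
  LIMIT=128
fi

echo "memory limit: ${LIMIT}Mi"
```

With COUNT=500 this prints a 200Mi limit, matching the YAML example above; with COUNT=50 the 128 MiB floor applies.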

When it comes to CPU, the more you allocate, the faster Kubescape will scan your cluster.
This is especially true for clusters that have a large number of resources.
However, we recommend giving Kubescape no less than 500m CPU regardless of cluster size, so it can scan a relatively large number of resources quickly.
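Putting the memory and CPU guidance together, the overrides might look like this in a values file. This is a sketch: the 200Mi limit assumes the 500-resource example above, and the request values are illustrative assumptions, not chart defaults:

```yaml
kubescape:
  resources:
    requests:
      cpu: 500m       # assumed request; tune to your cluster
      memory: 128Mi   # assumed request; the recommended floor
    limits:
      cpu: 500m
      memory: 200Mi   # max(128, 0.4 * 500) for a 500-resource cluster
```

Adjust the memory limit to your own resource count using the formula above.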

Supported Helm values

| Key | Type | Default | Description |
|---|---|---|---|
| kollector.affinity | object | {} | Assign custom affinity rules to the StatefulSet |
| kollector.enabled | bool | true | enable/disable the kollector |
| kollector.env[0] | object | {"name":"PRINT_REPORT","value":"false"} | print in verbose mode (print all reported data) |
| kollector.image.repository | string | "quay.io/kubescape/kollector" | source code |
| kollector.nodeSelector | object | {} | Node selector |
| kollector.volumes | object | [] | Additional volumes for the collector |
| kollector.volumeMounts | object | [] | Additional volumeMounts for the collector |
| kubescape.affinity | object | {} | Assign custom affinity rules to the deployment |
| kubescape.downloadArtifacts | bool | true | download policies on every scan; we recommend keeping this true. Set to false when running in an air-gapped environment or when scanning at high frequency (e.g. with Prometheus) |
| kubescape.enableHostScan | bool | true | enable the host scanner feature |
| kubescape.enabled | bool | true | enable/disable kubescape scanning |
| kubescape.image.repository | string | "quay.io/kubescape/kubescape" | source code (public repo) |
| kubescape.nodeSelector | object | {} | Node selector |
| kubescape.serviceMonitor.enabled | bool | false | enable/disable the service monitor for the Prometheus (operator) integration |
| kubescape.skipUpdateCheck | bool | false | skip the check for a newer version |
| kubescape.submit | bool | true | submit results to Kubescape SaaS: https://cloud.armosec.io/ |
| kubescape.volumes | object | [] | Additional volumes for Kubescape |
| kubescape.volumeMounts | object | [] | Additional volumeMounts for Kubescape |
| kubescapeScheduler.enabled | bool | true | enable/disable a scheduled Kubescape scan using a CronJob |
| kubescapeScheduler.image.repository | string | "quay.io/kubescape/http_request" | source code (public repo) |
| kubescapeScheduler.scanSchedule | string | "0 0 * * *" | scan schedule frequency |
| kubescapeScheduler.volumes | object | [] | Additional volumes for the scan scheduler |
| kubescapeScheduler.volumeMounts | object | [] | Additional volumeMounts for the scan scheduler |
| gateway.affinity | object | {} | Assign custom affinity rules to the deployment |
| gateway.enabled | bool | true | enable/disable passing notifications from Kubescape SaaS to the Operator microservice; notifications include on-demand scans and scan schedule settings |
| gateway.image.repository | string | "quay.io/kubescape/gateway" | source code |
| gateway.nodeSelector | object | {} | Node selector |
| gateway.volumes | object | [] | Additional volumes for the notification service |
| gateway.volumeMounts | object | [] | Additional volumeMounts for the notification service |
| kubevuln.affinity | object | {} | Assign custom affinity rules to the deployment |
| kubevuln.enabled | bool | true | enable/disable image vulnerability scanning |
| kubevuln.image.repository | string | "quay.io/kubescape/kubevuln" | source code |
| kubevuln.nodeSelector | object | {} | Node selector |
| kubevuln.volumes | object | [] | Additional volumes for image vulnerability scanning |
| kubevuln.volumeMounts | object | [] | Additional volumeMounts for image vulnerability scanning |
| kubevulnScheduler.enabled | bool | true | enable/disable a scheduled image vulnerability scan using a CronJob |
| kubevulnScheduler.image.repository | string | "quay.io/kubescape/http_request" | source code (public repo) |
| kubevulnScheduler.scanSchedule | string | "0 0 * * *" | scan schedule frequency |
| kubevulnScheduler.volumes | object | [] | Additional volumes for the scan scheduler |
| kubevulnScheduler.volumeMounts | object | [] | Additional volumeMounts for the scan scheduler |
| operator.affinity | object | {} | Assign custom affinity rules to the deployment |
| operator.enabled | bool | true | enable/disable Kubescape and image vulnerability scanning |
| operator.image.repository | string | "quay.io/kubescape/operator" | source code |
| operator.nodeSelector | object | {} | Node selector |
| operator.volumes | object | [] | Additional volumes for the web socket |
| operator.volumeMounts | object | [] | Additional volumeMounts for the web socket |
| kubescapeHostScanner.volumes | object | [] | Additional volumes for the host scanner |
| kubescapeHostScanner.volumeMounts | object | [] | Additional volumeMounts for the host scanner |
| awsIamRoleArn | string | nil | AWS IAM role ARN |
| clientID | string | "" | client ID, read more |
| addRevisionLabel | bool | true | Add a revision label to the components; this ensures the components restart when the Helm chart is upgraded |
| cloudRegion | string | nil | cloud region |
| cloudProviderEngine | string | nil | cloud provider engine |
| gkeProject | string | nil | GKE project |
| gkeServiceAccount | string | nil | GKE service account |
| secretKey | string | "" | secret key, read more |
| triggerNewImageScan | bool | false | enable/disable triggering an image scan for new images |
| volumes | object | [] | Additional volumes for all containers |
| volumeMounts | object | [] | Additional volumeMounts for all containers |
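As an illustration, a minimal values file overriding a few of the values above might look like the sketch below; the specific choices are examples, not recommendations. You would pass it to the install command with -f values.yaml:

```yaml
# Example overrides (illustrative values only)
kubescape:
  submit: false              # keep results local instead of sending them to the SaaS
kubescapeScheduler:
  scanSchedule: "0 8 * * *"  # scan daily at 08:00 instead of midnight
kubevuln:
  nodeSelector:              # pin vulnerability scanning to amd64 nodes
    kubernetes.io/arch: amd64
```

helm upgrade --install kubescape kubescape/kubescape-cloud-operator -n kubescape --create-namespace -f values.yaml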