Configuration of Cloud Providers

Azure - AKS

Prerequisites

  1. Run
    az identity list | grep <cluster_name>
    
  2. Take the “id”
  3. Run
    az identity show --ids <id_from_step_2>
    
  4. Take the principalId
  5. Run
    az aks list | grep <cluster_name>
    
  6. Take the “id” (this is the cluster id)
  7. Run
    az role assignment create --assignee "<principal_id_from_step_4>" --role "Reader" --scope "<cluster_id_from_step_6>"
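The steps above can be condensed into a single sketch. The JMESPath `--query` filters replace the manual grep-and-copy steps and assume the managed identity is named after the cluster, so adjust them to match your naming:

```shell
# Placeholder: replace with your AKS cluster name
CLUSTER_NAME=<cluster_name>

# Steps 1-4: find the managed identity and take its principalId
IDENTITY_ID=$(az identity list --query "[?contains(name, '$CLUSTER_NAME')].id" --output tsv)
PRINCIPAL_ID=$(az identity show --ids "$IDENTITY_ID" --query principalId --output tsv)

# Steps 5-6: take the cluster id
CLUSTER_ID=$(az aks list --query "[?name=='$CLUSTER_NAME'].id" --output tsv)

# Step 7: grant the identity Reader access to the cluster
az role assignment create --assignee "$PRINCIPAL_ID" --role "Reader" --scope "$CLUSTER_ID"
```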
    

In-cluster scan (Helm)

helm repo add kubescape https://kubescape.github.io/helm-charts/
helm repo update
helm upgrade --install kubescape kubescape/kubescape-cloud-operator \
  -n kubescape --create-namespace \
  --set clusterName=$(kubectl config current-context) \
  --set account=1e3a88bf-92ce-44f8-914e-cbe71830d566 \
  --set environment=dev \
  --set cloudProviderMetadata.aksSubscriptionID=$(az account show --query id --output tsv) \
  --set cloudProviderMetadata.aksResourceGroup=$(az resource list --name $(kubectl config current-context) --query "[].resourceGroup" --output tsv) \
  --set cloudProviderMetadata.cloudProviderEngine=aks
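Once the Helm release is installed, a quick sanity check (assuming kubectl points at the same cluster) is to confirm that the operator workloads came up:

```shell
# Check the Helm release and the operator pods in the kubescape namespace
helm status kubescape -n kubescape
kubectl get pods -n kubescape
```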

Integrate with Kubescape CLI

To configure your environment for working with Azure AKS, you need to set the following environment variables:

  • AZURE_SUBSCRIPTION_ID - the Azure subscription ID
export AZURE_SUBSCRIPTION_ID=$(az account show --query id --output tsv)
  • AZURE_RESOURCE_GROUP - the Azure resource group linked to the cluster
export AZURE_RESOURCE_GROUP=$(az resource list --name $(kubectl config current-context) --query "[].resourceGroup" --output tsv)
  • KS_CLOUD_PROVIDER - Set to "aks"
export KS_CLOUD_PROVIDER=aks
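Putting the three variables together, a typical session might look like the following sketch; it assumes az and kubectl already point at the AKS cluster you want to scan:

```shell
export AZURE_SUBSCRIPTION_ID=$(az account show --query id --output tsv)
export AZURE_RESOURCE_GROUP=$(az resource list --name $(kubectl config current-context) --query "[].resourceGroup" --output tsv)
export KS_CLOUD_PROVIDER=aks

# Kubescape picks the variables up from the environment
kubescape scan
```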

AWS - EKS

Integrate with Kubescape CLI

Kubescape's EKS integration is based on the official AWS Go SDK and supports authentication based on the local execution context of the CLI.
For Kubescape to run properly, the following items must be configured:

  1. From the CLI, run aws configure
    1. AWS Access Key ID [****************XXXX]:
    2. AWS Secret Access Key [****************XXXX]:
    3. Default region name []: (the cluster's default region)
    4. Hit Enter to approve
  2. Run cat ~/.aws/credentials
    1. Make sure that aws_access_key_id and aws_secret_access_key are configured properly.
  3. Run cat ~/.aws/config
    1. Make sure that the region is configured properly.
  4. In the case of EC2 instances, the IAM role is accessed through the EC2 metadata service.

Because of the way EKS authentication is constructed, the Kubescape EKS integration should work automatically from any shell from which you access your cluster.
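For reference, the two files checked in steps 2 and 3 typically look like this (all values are placeholders):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = <access key id>
aws_secret_access_key = <secret access key>

# ~/.aws/config
[default]
region = <cluster region>
```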

Troubleshooting

Make sure that you have cluster access through:

kubectl get nodes

Make sure you have the proper EKS-related IAM roles in AWS CLI itself:

aws eks describe-cluster --name <cluster name> --region <cluster region>

Kubescape first looks for the KS_CLOUD_REGION environment variable to get your cluster region. If this variable is not set, Kubescape tries to get the cluster region from the cluster's name. So if Kubescape is not able to identify your cluster region, make sure you set this environment variable.
On top of that, set the KS_CLOUD_PROVIDER environment variable to eks.
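For example, before running a scan against an EKS cluster (the region value is a placeholder):

```shell
export KS_CLOUD_REGION=<region>      # e.g. the region from ~/.aws/config
export KS_CLOUD_PROVIDER=eks
kubescape scan
```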

Integrate with Kubescape Microservice

The Kubescape microservice is based on the CLI and therefore expects the same mechanisms in its execution context.
You can add the environment variables to the Kubescape cronjob with the following command:

kubectl patch -n kubescape cronjob kubescape  -p='{"spec": {"jobTemplate": {"spec": {"template": {"spec": {"containers": [{"name": "kubescape","env": [{"name":"KS_CLOUD_REGION", "value": "<region>"}, {"name":"KS_CLOUD_PROVIDER", "value": "eks"}]}]}}}}}}'
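To confirm the patch took effect, one quick check (not part of the official flow) is to read the environment back from the cronjob spec:

```shell
# Print the env entries of the first container in the patched cronjob
kubectl get cronjob kubescape -n kubescape \
  -o jsonpath='{.spec.jobTemplate.spec.template.spec.containers[0].env}'
```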

We are going to add an example of authorization via EKS IAM roles for the ServiceAccount. Stay tuned!

GCP - GKE

Integrate with Kubescape CLI

Kubescape's GKE integration is based on the official GCP SDK and supports authentication based on the local execution context of the CLI:

  • GOOGLE_APPLICATION_CREDENTIALS environment variable or
  • ~/.config/gcloud/application_default_credentials.json file

Make sure that one of them is defined properly in the execution context of Kubescape.

If you're missing the application_default_credentials.json, but you do have GCP access from the shell, run the following command to create it:

gcloud auth application-default login

Troubleshooting

Make sure that this command works:

gcloud container clusters describe <cluster name> --zone <cluster zone> --project <GCP project>

Kubescape first looks for the KS_CLOUD_REGION and KS_GKE_PROJECT environment variables to get your cluster region and project, respectively. If these variables are not set, Kubescape tries to get the cluster region and project from the cluster name. So if Kubescape is not able to identify your cluster's region/project, make sure you set the proper environment variable.
On top of that, set the KS_CLOUD_PROVIDER environment variable to gke.
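For example, before running a scan against a GKE cluster (the region and project values are placeholders):

```shell
export KS_CLOUD_REGION=<region>
export KS_GKE_PROJECT=<project>
export KS_CLOUD_PROVIDER=gke
kubescape scan
```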

Integrate with Kubescape Microservice

The Kubescape microservice is based on the CLI and therefore expects the same mechanisms in its execution context.
You can add the environment variables to the Kubescape cronjob with the following command:

kubectl patch -n kubescape cronjob kubescape  -p='{"spec": {"jobTemplate": {"spec": {"template": {"spec": {"containers": [{"name": "kubescape","env": [{"name":"KS_CLOUD_REGION", "value": "<region>"},{"name":"KS_GKE_PROJECT", "value": "<project>"}, {"name":"KS_CLOUD_PROVIDER", "value": "gke"}]}]}}}}}}'

We are going to add an example of authorization via a service account. Stay tuned!