Configuration of Cloud Providers
Azure - AKS
Prerequisites
1. Run:
az identity list | grep <cluster_name>
2. Take the "id".
3. Run:
az identity show --ids <id_from_step_2>
4. Take the principalId.
5. Run:
az aks list | grep <cluster_name>
6. Take the "id" (this is the cluster ID).
7. Run:
az role assignment create --assignee "<principal_id_from_step_4>" --role "Reader" --scope "<cluster_id_from_step_6>"
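The steps above can be chained into a single script. This is a sketch, not an official helper: it assumes the managed identity's name contains the cluster name, and CLUSTER_NAME is a hypothetical placeholder you must replace.

```shell
# Hypothetical cluster name; substitute your own.
CLUSTER_NAME=my-aks-cluster

# Steps 1-4: find the identity whose name contains the cluster name
# and take its principalId.
PRINCIPAL_ID=$(az identity list \
  --query "[?contains(name, '$CLUSTER_NAME')].principalId | [0]" --output tsv)

# Steps 5-6: take the cluster id.
CLUSTER_ID=$(az aks list \
  --query "[?name=='$CLUSTER_NAME'].id | [0]" --output tsv)

# Step 7: grant the identity Reader access on the cluster.
az role assignment create --assignee "$PRINCIPAL_ID" --role "Reader" --scope "$CLUSTER_ID"
```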
In-cluster scan (Helm)
helm repo add kubescape https://kubescape.github.io/helm-charts/
helm repo update
helm upgrade --install kubescape kubescape/kubescape-cloud-operator \
  --namespace kubescape --create-namespace \
  --set clusterName=$(kubectl config current-context) \
  --set account=1e3a88bf-92ce-44f8-914e-cbe71830d566 \
  --set environment=dev \
  --set cloudProviderMetadata.aksSubscriptionID=$(az account show --query id --output tsv) \
  --set cloudProviderMetadata.aksResourceGroup=$(az resource list --name $(kubectl config current-context) --query "[].resourceGroup" --output tsv) \
  --set cloudProviderMetadata.cloudProviderEngine=aks
Integrate with Kubescape CLI
To configure your environment for working with Azure AKS, you need to set the following environment variables:
AZURE_SUBSCRIPTION_ID
- Azure's subscription ID
export AZURE_SUBSCRIPTION_ID=$(az account show --query id --output tsv)
AZURE_RESOURCE_GROUP
- Azure's resource group linked to the cluster
export AZURE_RESOURCE_GROUP=$(az resource list --name $(kubectl config current-context) --query "[].resourceGroup" --output tsv)
KS_CLOUD_PROVIDER
- Set to "aks"
export KS_CLOUD_PROVIDER=aks
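A quick way to confirm the variables are actually exported before running Kubescape — a generic shell check, not part of Kubescape itself:

```shell
# Report any variable from the list that is missing or empty in the environment.
check_env() {
  for v in "$@"; do
    if [ -z "$(printenv "$v")" ]; then
      echo "missing: $v"
    fi
  done
}

check_env AZURE_SUBSCRIPTION_ID AZURE_RESOURCE_GROUP KS_CLOUD_PROVIDER
```

If all three variables are set, the check prints nothing.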
AWS - EKS
Integrate with Kubescape CLI
Kubescape's EKS integration is based on the official AWS Go SDK, and it supports authentication based on the local execution context of the CLI.
To make Kubescape run properly, the items below must be configured:
- From the CLI, run aws configure and fill in the prompts, pressing Enter to approve each value:
AWS Access Key ID [****************XXXX]:
AWS Secret Access Key [****************XXXX]:
Default region name []: (the cluster's default region)
- Run cat ~/.aws/credentials and make sure that aws_access_key_id and aws_secret_access_key are configured properly.
- Run cat ~/.aws/config and make sure that the region is configured properly.
- On EC2 instances, Kubescape can access the IAM role through the EC2 metadata service.
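The file checks in the steps above can be sketched as a small helper. The paths are the AWS CLI defaults, and the key names are the standard INI keys used in those files; this is a convenience check, not part of Kubescape:

```shell
# Return success only if both credential keys and a region are present
# in the given credentials and config files.
aws_cli_configured() {
  creds_file=$1
  config_file=$2
  grep -q '^aws_access_key_id' "$creds_file" 2>/dev/null &&
  grep -q '^aws_secret_access_key' "$creds_file" 2>/dev/null &&
  grep -q '^region' "$config_file" 2>/dev/null
}

aws_cli_configured ~/.aws/credentials ~/.aws/config \
  && echo "AWS CLI looks configured" \
  || echo "AWS CLI credentials or region missing"
```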
Because of the way EKS authentication is constructed, the Kubescape EKS integration should work automatically from any shell from which you access your cluster.
Troubleshooting
Make sure that you have cluster access through:
kubectl get nodes
Make sure you have the proper EKS-related IAM roles in AWS CLI itself:
aws eks describe-cluster --name <cluster name> --region <cluster region>
Kubescape first looks for the KS_CLOUD_REGION environment variable to get your cluster region. If this variable is not set, Kubescape tries to get the cluster region from the cluster's name. So if Kubescape is not able to identify your cluster region, make sure you set this environment variable.
On top of that, set the KS_CLOUD_PROVIDER environment variable to eks.
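The "region from the cluster's name" fallback relies on the context name carrying the region. As an illustration, ARN-style context names (such as those created by aws eks update-kubeconfig) embed the region as the fourth colon-separated field; this sketch assumes that format and is not Kubescape's exact logic:

```shell
# Extract the region field from an EKS context name of the form
# arn:aws:eks:<region>:<account>:cluster/<name>.
region_from_context() {
  echo "$1" | cut -d: -f4
}

region_from_context "arn:aws:eks:us-east-1:111122223333:cluster/demo"
# -> us-east-1
```

If your context name does not follow this format, set KS_CLOUD_REGION explicitly.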
Integrate with Kubescape Microservice
The Kubescape microservice is based on the CLI and therefore expects the same mechanisms in its execution context.
You can add the environment variables to the Kubescape cronjob with the following command:
kubectl patch -n kubescape cronjob kubescape -p='{"spec": {"jobTemplate": {"spec": {"template": {"spec": {"containers": [{"name": "kubescape","env": [{"name":"KS_CLOUD_REGION", "value": "<region>"}, {"name":"KS_CLOUD_PROVIDER", "value": "eks"}]}]}}}}}}'
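To confirm the variables landed on the CronJob, one option (a sketch using the namespace and names from the command above) is to read them back with a JSONPath query:

```shell
# Print the env entries of the first container in the patched CronJob.
kubectl get cronjob kubescape -n kubescape \
  -o jsonpath='{.spec.jobTemplate.spec.template.spec.containers[0].env}'
```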
We are going to add an example of authorization via EKS IAM roles for the ServiceAccount, stay tuned!
GCP - GKE
Integrate with Kubescape CLI
Kubescape's GKE integration is based on the official GCP SDK, and it supports authentication based on the local execution context of the CLI, via either:
- the GOOGLE_APPLICATION_CREDENTIALS environment variable, or
- the ~/.config/gcloud/application_default_credentials.json file
Make sure that one of them is defined properly in the execution context of Kubescape.
If you're missing the application_default_credentials.json file but you do have GCP access from the shell, run the following command to create it:
gcloud auth application-default login
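A way to check which credential source (if any) is present before running Kubescape — a generic shell sketch, not Kubescape's own detection code:

```shell
# Succeed if either GOOGLE_APPLICATION_CREDENTIALS points at an existing file,
# or the application-default credentials file exists under the given home dir.
gcp_creds_available() {
  home_dir=$1
  { [ -n "$GOOGLE_APPLICATION_CREDENTIALS" ] && [ -f "$GOOGLE_APPLICATION_CREDENTIALS" ]; } ||
  [ -f "$home_dir/.config/gcloud/application_default_credentials.json" ]
}

gcp_creds_available "$HOME" \
  && echo "GCP credentials found" \
  || echo "run: gcloud auth application-default login"
```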
Troubleshooting
Make sure that this command works:
gcloud container clusters describe <cluster name> --zone <cluster zone> --project <GCP project>
Kubescape first looks for the KS_CLOUD_REGION and KS_GKE_PROJECT environment variables to get your cluster region and project, respectively. If these variables are not set, Kubescape tries to get the cluster region and project from the cluster name. So if Kubescape is not able to identify your cluster's region/project, make sure you set the proper environment variable.
On top of that, set the KS_CLOUD_PROVIDER environment variable to gke.
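Putting the three variables together (the zone and project values below are hypothetical placeholders; substitute your own):

```shell
export KS_CLOUD_REGION=us-central1-a   # hypothetical zone/region
export KS_GKE_PROJECT=my-gcp-project   # hypothetical GCP project ID
export KS_CLOUD_PROVIDER=gke
```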
Integrate with Kubescape Microservice
The Kubescape microservice is based on the CLI and therefore expects the same mechanisms in its execution context.
You can add the environment variables to the Kubescape cronjob with the following command:
kubectl patch -n kubescape cronjob kubescape -p='{"spec": {"jobTemplate": {"spec": {"template": {"spec": {"containers": [{"name": "kubescape","env": [{"name":"KS_CLOUD_REGION", "value": "<region>"},{"name":"KS_GKE_PROJECT", "value": "<project>"}, {"name":"KS_CLOUD_PROVIDER", "value": "gke"}]}]}}}}}}'
We are going to add an example of authorization via service account, stay tuned!