Migration from Kubescape Helm Chart 1.2x to ARMO Helm Chart 1.3x
Safely migrate from Kubescape Helm chart 1.2x to the ARMO Helm chart 1.3x, including required cleanup steps and GitOps considerations.
Overview
ARMO Helm chart 1.3x introduces a unified, production-ready operator experience that replaces the legacy Kubescape 1.2x deployment model.
Due to architectural and schema changes, upgrading from Kubescape Helm chart 1.2x to ARMO Helm chart 1.3x requires a clean migration, including removal of legacy CRDs, persistent volumes, and an aggregated APIService.
Don't be alarmed by the length of this document; the process is straightforward. We've just documented everything 😀
The main steps are:
- Uninstall the existing installation
- Clean up legacy resources (PVs, CRDs, and the APIService)
- Install ARMO Helm Chart
This guide covers:
- What changed in 1.3x
- Why cleanup is required
- Migration steps
- ArgoCD/GitOps considerations
What changed in 1.3x
Unified ARMO Operator chart
Historically, ARMO SaaS customers deployed Kubescape Operator + ARMO Node Agent, while open-source users deployed Kubescape Operator only. The new ARMO Operator Helm chart unifies the operator experience into a single, preconfigured chart designed for production deployments.
CEL-based detection engine
The operator now uses a CEL (Common Expression Language) rule engine rather than hard-coded Go-based detection rules. This improves iteration speed and enables user-authored custom detections.
Image-based eBPF gadgets
The operator uses image-based eBPF gadgets (via the Inspektor Gadget framework), improving modularity, isolation, and versioning of eBPF capabilities.
Breaking change: cleanup is required
Upgrading from 1.2x → 1.3x requires removing old CRDs, PVs, and API services due to changes in storage format, CRD/API groups, eBPF infrastructure, and rule engine structure. Skipping cleanup can lead to upgrade failures or inconsistent behavior.
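Before starting, it can help to inventory exactly what the cleanup will remove. A minimal pre-flight check, reusing the same filters as the cleanup commands in this guide:

# List legacy PersistentVolumes, CRDs, and the aggregated APIService
kubectl get pv | grep kubescape
kubectl get crds | grep kubescape.io
kubectl get apiservices | grep kubescape/storage

Any command that prints nothing means the corresponding cleanup step is a no-op on your cluster.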
Migration steps
These steps apply to:
- Kubescape OSS users upgrading from 1.2x to 1.3x
- ARMO customers migrating to the new ARMO Operator chart
- ArgoCD users (see GitOps notes below)
Uninstall the old Helm release
helm uninstall -n kubescape kubescape

This removes the deployments and services but leaves CRDs and persistent storage, which must be cleaned up next.
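To confirm the release is gone (assuming it was installed as kubescape in the kubescape namespace, as in the command above; adjust both names if yours differ):

# The release should no longer be listed, and its workloads should terminate
helm list -n kubescape
kubectl get pods -n kubescape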
Delete legacy PersistentVolumes
kubectl get pv | grep kubescape | awk '{print $1}' | xargs kubectl delete pv

Why: the storage format changed in 1.3x; legacy PVs can cause schema conflicts and failures.
Some environments may return NotFound depending on the cloud provider; this is expected.
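If you want to avoid the NotFound errors entirely, one variant (assuming GNU xargs, whose -r flag skips the delete when grep matches nothing) is:

# Review the matches first, then delete; --ignore-not-found tolerates
# volumes that disappear between the two commands
kubectl get pv | grep kubescape
kubectl get pv | grep kubescape | awk '{print $1}' | xargs -r kubectl delete pv --ignore-not-found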
Delete legacy CRDs
kubectl get crds | grep kubescape.io | awk '{print $1}' | xargs kubectl delete crds

Why: 1.3x uses updated CRD versions and schemas that are incompatible with 1.2x CRDs.
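A CRD can occasionally get stuck in Terminating if leftover custom resources still hold finalizers. In that case, clearing the CRD's finalizers forces the deletion through; use this with care, and replace the placeholder <crd-name> with the actual stuck CRD:

# Force-remove a CRD stuck in Terminating by clearing its finalizers
kubectl patch crd <crd-name> --type=merge -p '{"metadata":{"finalizers":[]}}'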
Delete the legacy APIService
kubectl get apiservices | grep kubescape/storage | awk '{print $1}' | xargs kubectl delete apiservices

Why: the old aggregated APIService can block the new API group from responding properly if it remains.
Some environments may return NotFound depending on the cloud provider; this is expected.
Install the ARMO Helm chart (1.3x)
Add and update the ARMO Helm repository:
helm repo add armosec https://armosec.github.io/helm-charts/
helm repo update

Install/upgrade using the ARMO operator chart:
helm upgrade --install armosec armosec/armosec-kubescape-operator \
-n kubescape \
--create-namespace \
--set kubescape-operator.clusterName="$(kubectl config current-context)" \
--set kubescape-operator.account="<ARMO_ACCOUNT_ID>" \
--set kubescape-operator.accessKey="<ARMO_ACCESS_KEY>" \
--set kubescape-operator.server="<ARMO_BACKEND_URL>" \
--set kubescape-operator.imagePullSecret.password="<ARMO_IMAGE_PULL_SECRET>"

Parameters:
- kubescape-operator.account: ARMO Account ID
- kubescape-operator.accessKey: ARMO Access Key
- kubescape-operator.server: ARMO Backend URL
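To verify the installation (release and namespace names as in the command above):

# The release should report "deployed" and pods should reach Running
helm status armosec -n kubescape
kubectl get pods -n kubescape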
ArgoCD / GitOps notes
If you deploy via ArgoCD, use the following approach to avoid drift and race conditions during cleanup.
Disable pruning during migration
Prevent ArgoCD from recreating resources while you delete them manually:
metadata:
  annotations:
    argocd.argoproj.io/sync-options: Prune=false

After migration, remove the annotation and resume normal sync.
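If automated sync is enabled on the Application, you may also want to pause it for the duration of the migration. One way to do that (assuming an Application named kubescape in the argocd namespace; both names are examples) is to clear its syncPolicy:

# Pause automated sync on the Application during the migration
kubectl patch application kubescape -n argocd --type=merge -p '{"spec":{"syncPolicy":null}}'

Restore your original syncPolicy once the migration is complete.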
Update the chart reference
Change the chart from:
kubescape/kubescape-operator
To:
armosec/armosec-kubescape-operator
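In an ArgoCD Application manifest, the updated source block might look like the following sketch (standard Application fields; the targetRevision placeholder should be pinned to the 1.3x chart version you intend to deploy):

spec:
  source:
    # New ARMO chart repository and chart name
    repoURL: https://armosec.github.io/helm-charts/
    chart: armosec-kubescape-operator
    targetRevision: <1.3x chart version>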
Ensure CRDs are not managed by ArgoCD
CRD replacement with schema changes is not handled cleanly by ArgoCD. Configure ArgoCD to ignore CRD spec diffs:
spec:
  ignoreDifferences:
    - group: apiextensions.k8s.io
      kind: CustomResourceDefinition
      jsonPointers:
        - /spec

This prevents CRD "fights" during the migration.
Recommended GitOps upgrade flow
- Disable pruning (Prune=false)
- Perform the cleanup steps (PV/CRD/APIService)
- Update your repo to the new chart reference/version
- Sync
- Re-enable pruning
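With the argocd CLI, the sync and a post-migration drift check might look like this (application name is an example):

# Sync the updated chart, then confirm there is no remaining drift
argocd app sync kubescape
argocd app diff kubescape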
Kubescape OSS users
If you are upgrading open-source Kubescape without ARMO SaaS configuration, the cleanup steps are still required. You can reinstall using the upstream Kubescape chart:
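If the upstream repository is not already configured, add it first (repository URL per the open-source Kubescape project):

helm repo add kubescape https://kubescape.github.io/helm-charts/
helm repo update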
helm upgrade --install kubescape-operator kubescape/kubescape-operator -n kubescape

FAQ
Why do I need to delete CRDs?
1.3x introduces major schema updates that are incompatible with 1.2x CRDs.
Will I lose historical detections or profiles?
Yes. The storage format changed as part of the 1.3x redesign.
Is there downtime?
Yes. Threat detection is paused until the new operator is running, typically 1–3 minutes.
Can I run the old and new operators together?
No. They share CRDs and cannot coexist on the same cluster.