Deploying ARMO Platform on OpenShift
This guide provides instructions for deploying the ARMO Platform on OpenShift clusters, including Azure Red Hat OpenShift (ARO), Red Hat OpenShift on AWS (ROSA), OpenShift Container Platform (OCP), and OKD.
Prerequisites
- OpenShift cluster (version 4.10 or later)
- Cluster admin access
- Helm 3.x installed
- oc CLI installed and configured
Understanding OpenShift Security Context Constraints
OpenShift uses Security Context Constraints (SCCs) to control pod permissions. ARMO's Kubescape components require specific SCCs:
- node-agent: requires the `privileged` SCC (for runtime detection and eBPF probes)
- Other components (operator, scanner, storage, etc.): use the `nonroot-v2` SCC
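Conceptually, granting an SCC to a workload means binding its service account to the ClusterRole that permits that SCC. A minimal sketch of such a binding follows; the service account name is an assumption for illustration, and the Helm chart generates the actual resources when SCC support is enabled:

```yaml
# Illustrative RoleBinding letting a node-agent service account use the
# privileged SCC via OpenShift's system:openshift:scc:privileged ClusterRole.
# The service account name "node-agent" is an assumption; the chart creates
# its own bindings automatically.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: node-agent-privileged-scc
  namespace: kubescape
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:privileged
subjects:
  - kind: ServiceAccount
    name: node-agent
    namespace: kubescape
```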
Installation
Step 1: Get Your Installation Command from ARMO UI
- Log in to your ARMO account at hub.armosec.io
- Navigate to Settings → Clusters → Add Cluster
- Copy the provided Helm installation command
The command will look similar to:

```shell
helm repo add armosec https://armosec.github.io/helm-charts/
helm repo update
helm upgrade --install armosec armosec/armosec-kubescape-operator \
  -n kubescape --create-namespace \
  --set kubescape-operator.account=<your-account-id> \
  --set kubescape-operator.accessKey=<your-access-key> \
  --set kubescape-operator.imagePullSecret.password=<your-image-pull-secret-password> \
  --set kubescape-operator.server=<your-server-url> \
  --set kubescape-operator.clusterName=`kubectl config current-context`
```

Step 2: Add OpenShift SCC Support
Critical: Append the following parameter to enable OpenShift SCC support:
```shell
--set kubescape-operator.global.openshift.scc.enabled=true
```

Complete Installation Command
Your final command should look like:
```shell
helm upgrade --install armosec armosec/armosec-kubescape-operator \
  -n kubescape --create-namespace \
  --set kubescape-operator.account=<your-account-id> \
  --set kubescape-operator.accessKey=<your-access-key> \
  --set kubescape-operator.imagePullSecret.password=<your-image-pull-secret-password> \
  --set kubescape-operator.server=<your-server-url> \
  --set kubescape-operator.clusterName=`kubectl config current-context` \
  --set kubescape-operator.global.openshift.scc.enabled=true
```

Note: The kubescape-operator. prefix is required because the Kubescape Operator is a subchart within the ARMO platform chart.
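If you prefer not to pass a long list of --set flags, the same settings can be kept in a values file and applied with -f. This is a sketch; the placeholder values are the same ones you fill in from the ARMO UI:

```yaml
# values.yaml — equivalent to the --set flags above
kubescape-operator:
  account: <your-account-id>
  accessKey: <your-access-key>
  imagePullSecret:
    password: <your-image-pull-secret-password>
  server: <your-server-url>
  clusterName: my-openshift-cluster  # or the output of `kubectl config current-context`
  global:
    openshift:
      scc:
        enabled: true
```

Install with `helm upgrade --install armosec armosec/armosec-kubescape-operator -n kubescape --create-namespace -f values.yaml`.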
Verification
1. Check Pod Status
```shell
oc get pods -n kubescape
```

All pods should be in the Running state within 2-3 minutes.
2. Verify SCC Assignments
```shell
oc get pod -n kubescape -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.openshift\.io/scc}{"\n"}{end}'
```

Expected output:
- Node-agent pods: `privileged`
- All other pods: `nonroot-v2`
3. Confirm SCC Role Bindings
```shell
oc get rolebinding -n kubescape | grep scc
```

You should see role bindings for each component referencing the appropriate SCCs.
Troubleshooting
Pods Failing with SCC Errors
Symptom: Pods stuck in CreateContainerConfigError or CrashLoopBackOff
Check events:
```shell
oc describe pod <pod-name> -n kubescape
```

Solution: Verify that the OpenShift SCC parameter was included in the installation:
```shell
helm get values armosec -n kubescape | grep openshift
```

If it is missing, upgrade with the correct parameter:
```shell
helm upgrade armosec armosec/armosec-kubescape-operator \
  -n kubescape \
  --reuse-values \
  --set kubescape-operator.global.openshift.scc.enabled=true
```

Node-Agent Not Starting
Symptom: Node-agent pods not starting or restarting frequently
Verify SCC assignment:
```shell
oc get pod -n kubescape -l app=node-agent -o yaml | grep "openshift.io/scc"
```

The node-agent must use the privileged SCC. If it is using a different SCC, the OpenShift parameter was not properly applied.
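For reference, OpenShift's admission controller records the SCC that admitted a pod in an annotation, so a healthy node-agent pod should contain a fragment roughly like:

```yaml
# Fragment of a correctly admitted node-agent pod (illustrative)
metadata:
  annotations:
    openshift.io/scc: privileged
```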
Viewing Logs
```shell
# Operator logs
oc logs -n kubescape -l app=operator

# Node-agent logs
oc logs -n kubescape -l app=node-agent

# Scanner logs
oc logs -n kubescape -l app=kubescape
```

Platform-Specific Notes
Azure Red Hat OpenShift (ARO)
- Fully supported - tested on ARO 4.17+
- All ARMO features available
Red Hat OpenShift on AWS (ROSA)
- Fully supported
- Same configuration as ARO
OpenShift Container Platform (OCP)
- Supported on self-managed OCP clusters
- Ensure cluster has sufficient resources
OKD (Community Distribution)
- Supported with same SCC requirements
- May have different default configurations
Support
For issues or questions:
- Contact ARMO Support through the platform
- Check cluster connectivity in ARMO UI
- Review pod logs for specific error messages
Uninstallation
To remove the ARMO platform:
```shell
helm uninstall armosec -n kubescape
oc delete namespace kubescape
```
