
SOC 2 Kubernetes Security: Controls for Container Workloads

SOC 2 Kubernetes security controls covering RBAC, network policies, pod security standards, secrets management, audit logging, and runtime threat detection for CC6 and CC7.

Key Takeaways
  • Kubernetes RBAC with namespace isolation is the primary CC6.1 and CC6.2 control for container environments.
  • Pod Security Standards (Restricted profile) prevent privilege escalation and container escape attacks.
  • Kubernetes audit logs capture every API server request and are essential CC7.2 monitoring evidence.
  • Network policies implement micro-segmentation between pods, satisfying CC6.6 boundary protection.
  • Never store secrets as plain Kubernetes Secrets — use Vault or AWS Secrets Manager with the CSI driver.
  • Runtime security tools like Falco detect anomalous container behavior and satisfy CC7.3 criteria.

Kubernetes and SOC 2: What Auditors Examine

Kubernetes introduces a new attack surface that traditional SOC 2 control frameworks were not designed for. The AICPA Trust Services Criteria predate container orchestration, but auditors apply them by analogy: RBAC is logical access control, network policies are boundary protection, pod security is environment hardening, and audit logs are monitoring evidence.

The CIS Kubernetes Benchmark (published by the Center for Internet Security) provides 300+ specific recommendations that map to SOC 2 criteria. Running `kube-bench` against your cluster produces a scored report that you can present as evidence for multiple CC controls simultaneously. For EKS, GKE, and AKS, the managed control plane components are covered by the cloud provider's SOC 2 report — your responsibility is the configuration of the workload plane.

RBAC Controls (CC6.1, CC6.2)

Kubernetes RBAC uses Roles (namespace-scoped) and ClusterRoles (cluster-scoped), bound to users, groups, or service accounts via RoleBindings and ClusterRoleBindings. Never bind `cluster-admin` to application service accounts; it is the Kubernetes equivalent of running as root. Audit all ClusterRoleBindings with `kubectl get clusterrolebindings -o json | jq '.items[] | select(.roleRef.name == "cluster-admin")'` and remove any binding that grants cluster-admin to a non-admin service account or user.

Create namespace-scoped Roles for application workloads with only the permissions they actually need. A typical API server pod needs `get`, `list` on its own ConfigMaps and Secrets, nothing more. Use `kubectl auth can-i --list --as=system:serviceaccount:[namespace]:[sa-name]` to audit what a service account can actually do. Document each service account's permissions and the justification — this is your least-privilege evidence for CC6.2.
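
A least-privilege Role for that pattern might look like the following sketch; the `payments` namespace, service account, and resource names are placeholders:

```yaml
# Hypothetical read-only Role for an API workload in the "payments"
# namespace: it can read ConfigMaps and Secrets there, nothing else.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: api-server-config-reader
  namespace: payments
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["get", "list"]
---
# Bind the Role to the workload's service account (no ClusterRoleBinding).
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: api-server-config-reader
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: api-server
    namespace: payments
roleRef:
  kind: Role
  name: api-server-config-reader
  apiGroup: rbac.authorization.k8s.io
```

To tighten further, add `resourceNames` so `get` only works on the specific ConfigMaps and Secrets the workload owns.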

For human access to the Kubernetes API, integrate with your identity provider rather than issuing static credentials. EKS maps IAM principals through Access Entries (or the legacy aws-auth ConfigMap) and also supports external OIDC identity providers; GKE authenticates Google Workspace identities natively. Require MFA for all users accessing the cluster. This ensures that Kubernetes API access goes through your Okta/Entra ID MFA enforcement, satisfying CC6.1 without maintaining separate Kubernetes credentials.

Pod Security Standards (CC6.7)

Kubernetes Pod Security Standards replace the deprecated Pod Security Policy (PSP) and are enforced via the built-in Pod Security Admission controller. Three levels: Privileged (unrestricted), Baseline (prevents known privilege escalations), and Restricted (enforces security best practices). Apply the Restricted profile to all production namespaces. Add labels to your namespaces: `pod-security.kubernetes.io/enforce: restricted` and `pod-security.kubernetes.io/audit: restricted`.
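
As a sketch, with `payments` as a placeholder namespace:

```yaml
# Namespace with the Restricted profile enforced and audited; warn mode
# additionally surfaces violations at kubectl apply time.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```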

The Restricted profile enforces: no privilege escalation (`allowPrivilegeEscalation: false`), non-root user (`runAsNonRoot: true`, and `runAsUser` must not be 0), all capabilities dropped (`capabilities.drop: [ALL]`, with only `NET_BIND_SERVICE` allowed back), no host network/PID/IPC sharing, and a seccomp profile of RuntimeDefault or Localhost. A read-only root filesystem (`readOnlyRootFilesystem: true`) is not required by the profile but is worth enforcing alongside it. Add these settings to your Pod or Deployment specs and validate with `kubectl apply --dry-run=server`.
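
A minimal pod template that passes the Restricted profile might look like this sketch (names and the image digest are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
  namespace: payments
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      securityContext:
        runAsNonRoot: true          # required by Restricted
        seccompProfile:
          type: RuntimeDefault      # required by Restricted
      containers:
        - name: api-server
          image: registry.example.com/api-server@sha256:<digest>
          securityContext:
            allowPrivilegeEscalation: false   # required by Restricted
            readOnlyRootFilesystem: true      # hardening beyond the profile
            capabilities:
              drop: ["ALL"]                   # required by Restricted
```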

For legacy workloads that cannot run as non-root immediately, use the Baseline profile as a transitional measure and document exceptions with a target remediation date. Create an exception register that lists each workload with a privileged requirement, the business justification, compensating controls (e.g., network isolation, no sensitive data access), and review date. Auditors accept documented exceptions with compensating controls far better than undocumented privileged workloads.

Secrets Management (CC6.1)

Kubernetes Secrets are base64-encoded by default, not encrypted. Any user with `kubectl get secret` access in a namespace can decode all secrets in that namespace. Enable encryption at rest for etcd by configuring an EncryptionConfiguration in your API server with an AES-GCM or KMS provider. For managed clusters, use the native KMS encryption: EKS → Cluster → Enable secrets encryption with a KMS key, GKE → Enable application-layer secrets encryption.
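
For self-managed clusters, a minimal `EncryptionConfiguration` sketch looks like this; the key material is a placeholder, and a `kms` provider is preferable where available:

```yaml
# Passed to the API server via --encryption-provider-config.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aesgcm:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}   # fallback so existing plaintext data stays readable
```

After enabling encryption, rewrite existing Secrets so they are stored encrypted: `kubectl get secrets --all-namespaces -o json | kubectl replace -f -`.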

Better still, do not store sensitive secrets as Kubernetes Secrets at all. Use the Secrets Store CSI Driver with a provider backed by AWS Secrets Manager, Azure Key Vault, GCP Secret Manager, or HashiCorp Vault. Pods mount secrets as volumes directly from the secrets manager at runtime — the secret is never stored in etcd. This approach satisfies CC6.1 and provides a complete audit trail of which workloads accessed which secrets in your secrets manager.
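
A sketch of the pattern with AWS Secrets Manager, assuming the Secrets Store CSI Driver and its AWS provider are installed and the pod's service account has IAM access to the secret; names and the secret path are placeholders:

```yaml
# Declares which external secret to expose to pods in this namespace.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: api-server-secrets
  namespace: payments
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "prod/api-server/db-password"   # placeholder path
        objectType: "secretsmanager"
---
# The pod mounts the secret as a read-only volume; nothing lands in etcd.
apiVersion: v1
kind: Pod
metadata:
  name: api-server
  namespace: payments
spec:
  serviceAccountName: api-server
  containers:
    - name: api-server
      image: registry.example.com/api-server@sha256:<digest>
      volumeMounts:
        - name: app-secrets
          mountPath: /mnt/secrets
          readOnly: true
  volumes:
    - name: app-secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: api-server-secrets
```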

Scan your Git repositories and Helm chart values for hardcoded secrets using `trufflehog` or `detect-secrets`. Scan your running pods for environment variables containing secrets using `kubectl get pods -o json | jq '.items[].spec.containers[].env'`. Secrets in environment variables are visible to anyone with pod exec access and can appear in logs. Migrate all secrets to the CSI driver or to Kubernetes Secrets with proper RBAC restrictions.

Network Policies (CC6.6)

By default, all pods in a Kubernetes cluster can communicate with all other pods across all namespaces. This is the opposite of least-privilege networking. Kubernetes NetworkPolicy resources restrict ingress and egress traffic at the pod level; note that enforcement depends on your CNI plugin (Calico and Cilium enforce policies, while some defaults, such as classic flannel, silently ignore them). Start with a default-deny policy in each namespace: a NetworkPolicy with an empty `podSelector` (which selects every pod) and both `Ingress` and `Egress` listed in `policyTypes` with no allow rules. Then add specific allow policies for required traffic.
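
The default-deny policy itself is short (`payments` is a placeholder namespace):

```yaml
# Empty podSelector selects every pod in the namespace; listing both
# policyTypes with no allow rules denies all ingress and egress.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```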

Create NetworkPolicies that allow only the required traffic paths: frontend pods can talk to API server pods on port 8080, API server pods can talk to database pods on port 5432, database pods have no egress (except DNS). Label pods with `app` and `tier` labels and use `podSelector` matchLabels in NetworkPolicies to target specific tiers. Verify policies with `kubectl describe networkpolicy -n [namespace]` and test with a debug pod using `nc -zv [target-pod-ip] [port]`.
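
As a sketch of the first two paths, with placeholder labels and namespace:

```yaml
# Frontend pods may reach API pods on 8080; nothing else may.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: payments
spec:
  podSelector:
    matchLabels:
      tier: api
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend
      ports:
        - protocol: TCP
          port: 8080
---
# API pods may reach the database on 5432.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: payments
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: api
      ports:
        - protocol: TCP
          port: 5432
```

Remember that the default-deny policy above also blocks DNS, so each namespace needs an egress rule allowing port 53 to your cluster DNS.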

For cluster-to-external traffic, restrict egress using NetworkPolicies that only allow traffic to known external endpoints. For CNI plugins that support DNS-based egress (Calico, Cilium), create policies that allow `*.internal-api.com` rather than IP ranges, which change over time. Cilium's `CiliumNetworkPolicy` supports L7 HTTP policies — you can restrict pods to specific URL paths and HTTP methods, providing application-layer boundary protection.
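
A sketch of such a policy, assuming the Cilium CNI; the DNS rule is needed so Cilium's proxy can observe lookups and resolve the FQDN pattern, and `*.internal-api.com` is the example domain from above:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-egress-internal-api
  namespace: payments
spec:
  endpointSelector:
    matchLabels:
      tier: api
  egress:
    # Allow DNS to kube-dns and let Cilium inspect the queries.
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
          rules:
            dns:
              - matchPattern: "*"
    # Allow HTTPS egress only to the named domain pattern.
    - toFQDNs:
        - matchPattern: "*.internal-api.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```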

Kubernetes Audit Logging (CC7.2)

The Kubernetes API server generates an audit log of every request. Configure an audit policy (`--audit-policy-file`) to capture the events relevant for SOC 2. A minimal audit policy logs at the `Metadata` level for all resources and raises RBAC resources to the `RequestResponse` level; keep secrets and configmaps at `Metadata`, because `RequestResponse` would write secret values into the audit log. Log to a file backed by a persistent volume and ship to your SIEM using a log forwarder like Fluentd or the OpenTelemetry Collector.
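
A minimal policy file along those lines:

```yaml
# Passed via --audit-policy-file on the API server.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Full request/response bodies for RBAC changes.
  - level: RequestResponse
    resources:
      - group: "rbac.authorization.k8s.io"
        resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
  # Secrets and ConfigMaps at Metadata only, so values never hit the log.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # Everything else at Metadata.
  - level: Metadata
```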

For managed clusters: EKS → Cluster → Logging → enable the "Audit" and "Authenticator" log types (delivered to CloudWatch Logs). GKE → Operations → Logs → Cloud Audit Logs are enabled by default. AKS → Diagnostic settings → enable the `kube-audit` and `kube-audit-admin` log categories. Configure log retention to at least 365 days so the logs cover a full audit period.

Key events to alert on from Kubernetes audit logs: anonymous authentication requests (`user.username: system:anonymous`), use of sensitive verbs on secrets (`verb: get/list/watch` on `resources: secrets`), exec into pods (`subresource: exec`), creation of privileged pods, and changes to RBAC resources (ClusterRoleBinding create/update). Forward these to PagerDuty or your SIEM for real-time alerting.

Runtime Threat Detection (CC7.3)

Falco is the CNCF-graduated runtime security tool for Kubernetes. It monitors system calls from containers and alerts on anomalous behavior defined by rules. Deploy Falco as a DaemonSet with Helm: `helm repo add falcosecurity https://falcosecurity.github.io/charts && helm install falco falcosecurity/falco`. Out of the box, Falco detects: a shell spawned in a container, sensitive file reads (`/etc/shadow`, `/etc/kubernetes/admin.conf`), unexpected network connections, privilege escalation, and process anomalies.
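
Rules are plain YAML. As a sketch, a custom rule in `falco_rules.local.yaml` scoped to a placeholder namespace (Falco's default ruleset ships a similar shell-detection rule):

```yaml
- rule: Shell Spawned in Payments Namespace
  desc: Detect an interactive shell started inside a container in payments
  condition: >
    spawned_process and container
    and proc.name in (bash, sh, zsh)
    and k8s.ns.name = "payments"
  output: >
    Shell in container (user=%user.name container=%container.name
    namespace=%k8s.ns.name command=%proc.cmdline)
  priority: WARNING
  tags: [container, shell, soc2]
```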

Configure Falco to route alerts to your SIEM via Falcosidekick, deployed alongside Falco (the Helm chart enables it with `--set falcosidekick.enabled=true`). Falcosidekick supports 50+ output backends including Slack, PagerDuty, Datadog, and Elasticsearch. Set up alerts for high-priority Falco rules (priority Critical or Warning) to trigger immediate on-call notification. Include Falco alert counts and top rules fired in your monthly security report as CC7.3 monitoring evidence.

Complement Falco with admission controllers that enforce policy at deployment time. OPA Gatekeeper or Kyverno provide policy-as-code enforcement: block container images from untrusted registries, require resource limits on all pods (availability control), require `runAsNonRoot`, and enforce label standards. Define policies in Git, apply via ArgoCD or Flux, and audit policy violations monthly.
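
A Kyverno sketch covering two of those checks; the registry hostname is a placeholder:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: baseline-workload-policy
spec:
  validationFailureAction: Enforce
  rules:
    # Block images from anywhere but the approved registry.
    - name: trusted-registry-only
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Images must come from registry.example.com."
        pattern:
          spec:
            containers:
              - image: "registry.example.com/*"
    # Require CPU and memory limits on every container.
    - name: require-resource-limits
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "CPU and memory limits are required."
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    memory: "?*"
                    cpu: "?*"
```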

Container Image Security (CC7.1)

CC7.1 requires identifying and managing vulnerabilities. Container images are a primary vulnerability source — every layer adds packages, and packages have CVEs. Integrate image scanning into your CI pipeline using Trivy, Grype, or Snyk Container. In GitHub Actions: `trivy image --exit-code 1 --severity CRITICAL my-image:latest` — this fails the pipeline if a critical CVE is found in the image.
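
As a sketch of that step using the `aquasecurity/trivy-action` wrapper (image name is a placeholder; pin the action to a released tag in practice):

```yaml
name: image-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - name: Fail the build on critical CVEs
        uses: aquasecurity/trivy-action@master   # pin to a release tag
        with:
          image-ref: registry.example.com/my-image:latest
          exit-code: "1"
          severity: CRITICAL
```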

Set an admission policy that blocks deployment of images with critical vulnerabilities left unresolved for more than 30 days; with Kyverno this can be implemented, for example, by requiring a recent, signed vulnerability-scan attestation on the image at admission time. Configure your container registry (ECR, GCR, ACR) to scan images on push and block promotion to production unless the scan is clean. Export weekly vulnerability reports as CC7.1 evidence.

Use a minimal base image strategy: prefer `distroless`, `scratch`, or Alpine-based images over Ubuntu or Debian. Fewer packages = fewer CVEs. Pin image tags to specific digest hashes (`image: myapp@sha256:abc123`) rather than mutable tags like `latest` — this ensures the exact image you tested is the image deployed and provides evidence of change control (CC8.1).

Frequently Asked Questions

Does using EKS/GKE/AKS reduce our Kubernetes SOC 2 responsibilities?
Yes, for the control plane only. Managed Kubernetes services handle the security of the API server, etcd, and control plane components under their shared responsibility model. Your responsibilities remain: RBAC configuration, pod security, network policies, secrets management, node OS security, and workload-level controls. Run kube-bench against your node configuration to identify what still needs attention.
Is kube-bench output acceptable as SOC 2 evidence?
Yes. kube-bench runs the CIS Kubernetes Benchmark checks and produces a scored report. Include the report as evidence for multiple CC controls. Not every check will be passing — document exceptions with compensating controls. Auditors are sophisticated enough to understand that some benchmark checks have valid reasons for being disabled in managed cluster environments.
What is the easiest way to get CC7.2 evidence from Kubernetes?
Enable audit logging on your managed cluster (EKS CloudWatch audit logs, GKE Cloud Audit Logs) and ship to your SIEM. Then create three dashboards: authentication failures, exec-into-pod events, and secret access events. Export these dashboards monthly as evidence. This takes about a day to set up and provides continuous CC7.2 evidence for the audit period.
How do we handle privileged containers that are operationally required (e.g., log shippers)?
Document each privileged container exception in a formal exception register. For each, specify the workload name, namespace, reason for privilege requirement, compensating controls (e.g., dedicated node, network isolation), risk level, and review date. Apply additional monitoring to privileged containers via Falco rules. Auditors accept documented exceptions; they do not accept undocumented privileged workloads.
Should we encrypt etcd for SOC 2?
Yes, if you manage your own etcd (self-managed Kubernetes). For managed clusters like EKS and GKE, etcd encryption is handled by the cloud provider at the infrastructure level and is covered by their SOC 2 report. If you use EKS, enable application-layer secrets encryption via a KMS key for an additional layer of protection for Kubernetes Secrets specifically.

Automate your compliance today

AuditPath runs 86+ automated checks across AWS, GitHub, Okta, and 14 more integrations. SOC 2 and DPDP Act. Free plan available.

Start for free