
SOC 2 Docker and Container Security Controls

A guide to SOC 2 Docker security controls: image hardening, Dockerfile best practices, registry access, vulnerability scanning, and runtime security mapped to the CC6 and CC7 criteria.

Key Takeaways
  • Distroless or minimal base images reduce the CVE attack surface and simplify CC7.1 vulnerability management.
  • Running containers as non-root with read-only filesystems satisfies CC6.7 environment hardening criteria.
  • Docker Content Trust and image signing ensure containers deployed to production are verified and unchanged.
  • Private container registries with access controls satisfy CC6.1 for container artifact access.
  • Scanning images in CI with Trivy or Snyk provides continuous CC7.1 vulnerability evidence.
  • Docker daemon configuration (no privilege escalation, user namespaces) hardens the container runtime.

Docker in SOC 2 Scope

Docker containers are the unit of deployment for most modern SaaS applications. From a SOC 2 perspective, containers are in scope because they run your application code, process customer data, and are the runtime environment where vulnerabilities can be exploited. The AICPA does not have Docker-specific criteria, but auditors apply CC6 (logical access), CC6.7 (environment hardening), CC7.1 (vulnerability management), and CC7.2 (monitoring) to container environments.

The CIS Docker Benchmark, published by the Center for Internet Security, provides 100+ specific recommendations for Docker host and container configuration. Running `docker-bench-security` against your Docker hosts produces a scored report that maps to SOC 2 criteria. This tool is the fastest way to identify Docker-specific audit findings before an auditor does.
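A typical invocation, following the project's README (run it directly on each Docker host; paths are illustrative):

```bash
# Clone and run the CIS Docker Benchmark checks on this host
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```

The script prints PASS/WARN/INFO results per CIS control; save the output with each quarterly run so you can show score improvement over time.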

Secure Dockerfile Practices

Every Dockerfile should follow these security patterns:
  1. Pin the base image to a specific digest: `FROM node:20.11.1-alpine3.19@sha256:abc123def456`. Tags are mutable; digests are not. Pinning to a digest ensures the exact tested image is used in production.
  2. Run as a non-root user: create a dedicated user in the Dockerfile (`RUN addgroup -S appgroup && adduser -S appuser -G appgroup`) and switch to it before the `CMD` instruction (`USER appuser`).
  3. Use multi-stage builds to exclude build tools from the final image. The build stage installs compiler tools and dependencies; the production stage copies only the compiled binary.
  4. Remove unnecessary packages: after installing runtime dependencies, run `apt-get clean && rm -rf /var/lib/apt/lists/*` (Debian) or `apk del build-deps` (Alpine) to remove temporary files.
  5. Do not copy credentials into the image: never use `COPY .env .` or `ARG AWS_SECRET_ACCESS_KEY`. Use build secrets (`--mount=type=secret`) for values needed only at build time.
  6. Set `WORKDIR` to a specific path rather than running from the root directory.
  7. Use `COPY` instead of `ADD` unless you specifically need tar extraction — `ADD` is less transparent.
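Putting these practices together, a minimal multi-stage Dockerfile sketch for a Node.js service (paths and the npm scripts are illustrative; in real use, also pin each `FROM` line to a digest — omitted here because a placeholder digest would not resolve):

```dockerfile
# --- Build stage: compilers and dev dependencies live only here ---
FROM node:20.11.1-alpine3.19 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build && npm prune --omit=dev

# --- Production stage: runtime only, non-root, no build tools ---
FROM node:20.11.1-alpine3.19
WORKDIR /app
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=build --chown=appuser:appgroup /app/dist ./dist
COPY --from=build --chown=appuser:appgroup /app/node_modules ./node_modules
USER appuser
CMD ["node", "dist/server.js"]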

Lint Dockerfiles in your CI pipeline using Hadolint (`hadolint Dockerfile` or as a GitHub Action). Hadolint checks Dockerfile best practices and integrates with ShellCheck to also validate embedded shell commands. Configure Hadolint rules in `.hadolint.yaml` and set it as a required CI check. This provides automated evidence that Dockerfile security practices are enforced across all images.
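A sketch of a Hadolint step as a GitHub Actions workflow, using the project's published action (the action version and workflow trigger are assumptions — check the Hadolint releases page for the current tag):

```yaml
# .github/workflows/lint.yml -- run Hadolint as a required PR check
name: dockerfile-lint
on: [pull_request]
jobs:
  hadolint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hadolint/hadolint-action@v3.1.0
        with:
          dockerfile: Dockerfile
```

Mark the `hadolint` job as a required status check in branch protection so a failing lint blocks the merge.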

Base Image Strategy and CC7.1

The base image is the largest CVE attack surface in a container. A full Ubuntu 22.04 base image contains thousands of packages, hundreds of which typically have known CVEs. Strategy options by security level:
  • Distroless images (`gcr.io/distroless/nodejs`, `gcr.io/distroless/java`) — contain only the language runtime and the application: no shell, no package manager. Smallest attack surface.
  • Alpine-based images — roughly 5 MB, using musl libc and ~50 packages. A good balance of size and compatibility.
  • Chainguard images — distroless-style images with zero known CVEs at build time, rebuilt daily.

Avoid `latest` tags on base images — they pull different digests on different days and make your builds non-reproducible. Establish a base image update cadence: review new versions monthly, test in staging, and roll out to production. Use Dependabot for Dockerfiles (enabled under repository → Settings → Code security and analysis → Dependabot version updates → add `docker` ecosystem) to automatically open PRs when new base image versions are published.
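The corresponding `.github/dependabot.yml` entry might look like this (the weekly interval is an assumption — pick whatever matches your update cadence):

```yaml
# .github/dependabot.yml -- open PRs when new base image versions appear
version: 2
updates:
  - package-ecosystem: "docker"
    directory: "/"        # location of the Dockerfile
    schedule:
      interval: "weekly"
```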

Container Registry Security (CC6.1)

Use a private container registry — AWS ECR, Google Artifact Registry, Azure Container Registry, or GitHub Container Registry (ghcr.io). Never store production images in public Docker Hub repositories unless they are genuinely open-source components with no proprietary code. Configure registry access controls: only the CI/CD pipeline IAM role should have push access; application deployment roles should have pull-only access.
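As one illustration, a pull-only ECR repository policy statement could look like the following (the account ID and role name are placeholders for your deployment role):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPullOnly",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:role/app-deploy-role" },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
```

Push actions (`ecr:PutImage`, the upload-layer actions) stay on a separate statement scoped to the CI role only.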

Enable vulnerability scanning on push in your registry. ECR Basic Scanning uses Clair; ECR Enhanced Scanning uses Amazon Inspector with EPSS severity scoring. GCR and Artifact Registry support Artifact Analysis with continuous background scanning that re-alerts on newly published CVEs even for images pushed months ago. Enable image immutability: ECR → Image tag mutability → Immutable. This prevents tags from being overwritten, ensuring that the image deployed in production is the exact image that was scanned and approved.
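With the AWS CLI, both ECR settings can be applied per repository (repository name is a placeholder):

```bash
# Prevent tags from being overwritten after push
aws ecr put-image-tag-mutability \
  --repository-name my-app \
  --image-tag-mutability IMMUTABLE

# Enable basic scan-on-push for the repository
aws ecr put-image-scanning-configuration \
  --repository-name my-app \
  --image-scanning-configuration scanOnPush=true
```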

Implement a promotion workflow: images are built and scanned in a development registry, then promoted (by adding a tag or copying the manifest) to a production registry only after the scan is clean and the deployment is approved. This separation ensures that unscanned or vulnerable images cannot be accidentally deployed to production.
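One way to promote without rebuilding is to copy the manifest between registries, for example with `crane` from go-containerregistry (registry hostnames and the tag are placeholders):

```bash
# Copy the exact scanned image (same digest) from dev to prod registry
crane copy \
  dev-registry.example.com/my-app:sha-abc1234 \
  prod-registry.example.com/my-app:sha-abc1234
```

Because the copy preserves the digest, the image that passed scanning is provably the image that runs in production.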

Image Scanning in CI/CD (CC7.1)

Integrate Trivy into your CI pipeline. In GitHub Actions:

```yaml
- name: Run Trivy vulnerability scanner
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: my-image:${{ github.sha }}
    format: sarif
    output: trivy-results.sarif
    severity: CRITICAL,HIGH
    exit-code: "1"
```

The `exit-code: "1"` setting fails the pipeline on CRITICAL or HIGH findings. Upload the SARIF to GitHub Code Scanning for historical trending.

Define a vulnerability management SLA: CRITICAL CVEs must be remediated within 7 days, HIGH within 30 days, MEDIUM within 90 days. Track open vulnerabilities in a spreadsheet or your compliance platform. For CVEs that cannot be remediated within SLA (e.g., no patched version exists), file a formal exception with compensating controls (network isolation, WAF rule, defense-in-depth). Monthly vulnerability reports showing CVE counts by severity over time are strong CC7.1 evidence.

Runtime Hardening (CC6.7)

When running Docker containers directly (not via Kubernetes), apply these runtime flags: `--read-only` (read-only root filesystem), `--cap-drop=ALL` (drop all Linux capabilities), `--security-opt no-new-privileges` (prevent privilege escalation via setuid/setgid binaries), and `--user 1000:1000` (run as a non-root UID). For sensitive workloads, add `--security-opt apparmor=docker-default` or a custom AppArmor profile.
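Combined into a single `docker run` invocation (image name and the tmpfs mount are illustrative):

```bash
# Read-only root FS with a writable /tmp, all capabilities dropped,
# no privilege escalation, running as an unprivileged UID:GID
docker run -d \
  --read-only \
  --tmpfs /tmp \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --user 1000:1000 \
  my-app:sha-abc1234
```

The `--tmpfs /tmp` mount gives the application scratch space without weakening the read-only root filesystem.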

In Docker Compose, these translate to: `security_opt: [no-new-privileges:true]`, `cap_drop: [ALL]`, `read_only: true`, `user: "1000:1000"`. Use a base `docker-compose.security.yml` overlay that applies these defaults and require it to be included in all service definitions. Run `docker-bench-security` quarterly and track the improvement in benchmark scores as evidence of environment hardening.
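A sketch of such an overlay file (the service name must match the one in your main compose file):

```yaml
# docker-compose.security.yml -- hardening defaults applied as an overlay:
#   docker compose -f docker-compose.yml -f docker-compose.security.yml up
services:
  app:
    read_only: true
    user: "1000:1000"
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true
    tmpfs:
      - /tmp
```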

Handling Secrets in Containers

The most common Docker secrets mistake is environment variables in Compose files or Dockerfile ENV instructions. Environment variables are visible to any process in the container, included in `docker inspect` output (accessible to anyone with Docker daemon access), and often leak into application logs. Treat environment variables as ephemeral configuration only.

For secrets, use one of: (1) Docker secrets (for Swarm mode) — secrets are mounted as files in `/run/secrets/`, not environment variables. (2) AWS Secrets Manager / Vault with a sidecar or init container that fetches secrets at runtime and writes to a shared tmpfs volume. (3) The Docker BuildKit `--mount=type=secret` for build-time secrets only. Never pass secrets as `--build-arg` — they are visible in the image history with `docker history --no-trunc`.
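A BuildKit build-time secret sketch — the secret is mounted for a single `RUN` instruction and never written to an image layer (the secret id and `.npmrc` usage are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json .npmrc ./
# The token is readable only at /run/secrets/npm_token during this RUN;
# it does not appear in any layer or in `docker history` output.
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
```

Supply the value at build time with `docker build --secret id=npm_token,src=./npm_token.txt .`.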

Scan your Git history for accidentally committed Docker Compose files with hardcoded secrets using `trufflehog git file://path/to/repo`. If you find historical secrets, rotate them immediately and document the exposure window, the credentials affected, and the rotation action taken — this incident response record is evidence for CC7.3.

CIS Docker Benchmark Controls

Run `docker-bench-security` (github.com/docker/docker-bench-security) against each Docker host quarterly. The tool checks 100+ CIS controls across five sections: host configuration, Docker daemon configuration, Docker daemon files, container images, and container runtime. Key failures to address: Docker daemon not running with a dedicated non-root OS user, Docker socket not restricted (any process with socket access has root-equivalent control), `--privileged` containers, containers sharing the host network namespace, and sensitive host paths mounted as volumes.

For the Docker daemon configuration, edit `/etc/docker/daemon.json`: `{"icc": false, "userns-remap": "default", "no-new-privileges": true, "log-driver": "json-file", "log-opts": {"max-size": "10m", "max-file": "3"}}`. `icc: false` disables inter-container communication by default (must be explicitly allowed via network). `userns-remap` maps container root to an unprivileged host user, significantly reducing the impact of container escapes.

Frequently Asked Questions

Are Docker containers required to run as non-root for SOC 2?
SOC 2 does not explicitly require non-root containers, but running as root violates the principle of least privilege (CC6.7) and auditors will flag privileged containers as a finding. Non-root is industry best practice and expected by any competent auditor. Start with new containers running non-root and migrate legacy containers with a documented timeline.
Can we use Docker Desktop for development without it being in SOC 2 scope?
Development environments are generally out of SOC 2 scope unless they process real customer data. However, ensure that developers are not connecting Docker Desktop to production AWS accounts, and that production container images are built in CI, not on developer laptops. Include a policy statement clarifying that production image builds are CI-only.
What is the difference between image scanning and runtime scanning for SOC 2?
Image scanning (Trivy, Snyk Container) checks the image at rest for known CVEs before deployment. Runtime scanning (Falco, Aqua, Sysdig Secure) monitors running containers for anomalous behavior like shell spawning or unexpected network connections. Both are complementary: image scanning prevents deploying vulnerable software (CC7.1), runtime scanning detects exploitation attempts (CC7.2/CC7.3).
Do we need to scan base images separately from application images?
No — your full application image inherits all base image CVEs. When you scan `myapp:1.0`, Trivy scans all layers including the base image layers. The base image CVEs appear in the same report. However, you should also monitor for new CVEs in base images between application rebuilds — ECR Enhanced Scanning and Google Artifact Analysis do this continuously.
How do we prevent developers from deploying unscanned images to production?
Use an admission controller in Kubernetes (Kyverno, Gatekeeper) or container registry immutability to enforce this. A Kyverno policy can require that all images come from a specific registry and were pushed with a CI-generated tag pattern (e.g., `sha-[commit]`). Combined with CI pipeline scanning that must pass before tagging, this ensures only scanned images reach production.
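A Kyverno policy sketch restricting Pod images to a private registry and a CI-generated tag pattern (the registry hostname and tag wildcard are assumptions for illustration):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-source
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-private-registry
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Images must come from the private registry with a CI-built tag."
        pattern:
          spec:
            containers:
              - image: "registry.example.com/*:sha-*"
```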
