Container Security Best Practices

Container security encompasses the policies, tooling, configurations, and operational controls applied to containerized workloads across their full lifecycle — from image build through runtime deployment. As container adoption accelerates across enterprise and government infrastructure, the attack surface expands correspondingly, making structured security practices a regulatory and operational necessity rather than an optional enhancement. This page covers the defining scope of container security, its structural mechanics, classification boundaries, inherent tradeoffs, and a reference framework aligned with published standards from NIST, CIS, and federal agencies.


Definition and scope

Container security refers to the discipline of protecting containerized applications, their runtime environments, orchestration layers, and supporting infrastructure from unauthorized access, exploitation, misconfiguration, and data exfiltration. A container is a lightweight, portable execution unit that packages application code with its dependencies, sharing the host operating system kernel rather than running on a dedicated hypervisor.

The scope spans four primary domains: image security (what is packaged and from where), runtime security (behavior during execution), orchestration security (how containers are scheduled and networked, most commonly via Kubernetes), and host security (the underlying OS and kernel). Regulatory frameworks treat this scope as part of the broader cloud workload protection landscape.

NIST Special Publication 800-190, "Application Container Security Guide," establishes the authoritative federal reference for container security architecture. It organizes risk into five major categories: image risks (including vulnerabilities, configuration defects, and embedded secrets), registry risks, orchestrator risks, container runtime risks, and host OS risks. The Center for Internet Security (CIS) maintains Docker and Kubernetes Benchmarks that operationalize NIST's conceptual framing into testable controls.

Organizationally, container security intersects with DevSecOps practices, cloud-native application security, and supply chain integrity, particularly given the prevalence of public registry images containing unpatched dependencies.


Core mechanics or structure

Container security operates across three structural layers: the build layer, the orchestration layer, and the runtime layer.

Build layer: Security controls at this stage govern what enters a container image. Static analysis tools scan images against known vulnerability databases, such as the National Vulnerability Database (NVD) maintained by NIST. Signed base images from verified publishers, combined with cryptographic image signing (e.g., via The Update Framework or Notary), establish provenance. The CIS Docker Benchmark organizes its recommendations into sections covering host configuration, daemon configuration, image build configuration, container runtime, and security operations.
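The kind of build-stage check described above can be sketched as a small Dockerfile linter. This is a minimal illustration, not a real scanner: the two rules shown (pinned base-image tags, no ADD of remote URLs) are only a tiny subset of what CIS-aligned tooling enforces, and the function name is hypothetical.

```python
import re

def lint_dockerfile(text: str) -> list[str]:
    """Flag two illustrative build-stage issues: unpinned or 'latest'
    base images, and ADD instructions fetching remote URLs."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), 1):
        stripped = line.strip()
        if stripped.upper().startswith("FROM "):
            image = stripped.split()[1]
            # An image with no tag (implicit :latest) or an explicit
            # :latest tag has no stable, reviewable provenance.
            if ":" not in image or image.endswith(":latest"):
                findings.append(f"line {lineno}: base image '{image}' is not pinned to a version")
        if stripped.upper().startswith("ADD ") and "http" in stripped:
            findings.append(f"line {lineno}: ADD fetches a remote URL; provenance cannot be verified")
    return findings

sample = "FROM alpine:latest\nRUN apk add --no-cache curl\n"
print(lint_dockerfile(sample))
```

In practice, checks like these run as a CI gate before the image ever reaches a registry.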

Orchestration layer: In Kubernetes environments, the orchestration layer controls how containers are scheduled, networked, and granted access to cluster resources. Role-Based Access Control (RBAC), Pod Security Admission (which replaced Pod Security Policy in Kubernetes 1.25), and Network Policies define the permission boundaries. The NSA/CISA Kubernetes Hardening Guide (updated August 2022) outlines scanning requirements, privilege controls, and network segmentation specific to this layer.
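As one concrete example of an orchestration-layer control, the sketch below builds a default-deny NetworkPolicy manifest. Kubernetes accepts JSON as well as YAML manifests, so the policy is emitted as JSON with the standard library only; the namespace name is a placeholder.

```python
import json

def default_deny_policy(namespace: str) -> str:
    """Build a NetworkPolicy that selects every pod in the namespace and
    permits no ingress or egress until more specific policies are added."""
    manifest = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-all", "namespace": namespace},
        "spec": {
            # An empty podSelector matches all pods in the namespace.
            "podSelector": {},
            # Listing both policy types with no allow rules denies all traffic.
            "policyTypes": ["Ingress", "Egress"],
        },
    }
    return json.dumps(manifest, indent=2)

# "payments" is a hypothetical namespace used for illustration.
print(default_deny_policy("payments"))
```

A manifest like this is typically applied to every namespace first, with narrowly scoped allow rules layered on top.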

Runtime layer: Runtime security monitors container behavior during execution, detecting anomalous system calls, unexpected network connections, and privilege escalation attempts. Linux kernel features — specifically seccomp profiles, AppArmor or SELinux policies, and Linux namespaces — enforce isolation at the process level. Runtime threat detection integrates with cloud SIEM and logging pipelines for correlation and alerting.

The interaction between these three layers creates defense-in-depth: a vulnerability missed at the build stage may be constrained by runtime controls, and a runtime escape attempt may be blocked at the host kernel level.


Causal relationships or drivers

The primary driver of container security risk is the shared-kernel architecture. Unlike virtual machines, containers do not have independent kernels; a kernel vulnerability exploited from within a container can compromise the host and all co-resident containers. The 2019 runC vulnerability (CVE-2019-5736) demonstrated this pathway: a malicious container could overwrite the host runc binary and gain root-level code execution on the host.

Supply chain exposure compounds this structural risk. Public registries such as Docker Hub host images with known vulnerabilities; a 2020 analysis by Snyk found that 44% of official Docker Hub images contained at least 10 high-severity vulnerabilities. This dependency on external, uncontrolled build artifacts feeds directly into downstream supply chain risk.

Misconfiguration is the most consistently observed driver. Common examples include containers running as root (UID 0), Docker sockets mounted into containers (granting effective host root access), and unrestricted inter-pod communication in Kubernetes clusters. NIST SP 800-190's threat model repeatedly identifies misconfiguration as a leading root cause of container-related incidents.
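Two of the misconfigurations named above (root containers and a mounted Docker socket) are mechanical enough to check from a pod spec. The sketch below is a toy auditor under simplified assumptions: it inspects only pod-level securityContext and volumes, whereas real policy engines also evaluate per-container settings.

```python
def audit_pod_spec(spec: dict) -> list[str]:
    """Flag two common pod misconfigurations: running as root (UID 0)
    and mounting the host Docker socket into the pod."""
    findings = []
    ctx = spec.get("securityContext", {})
    # Treat UID 0, or the absence of an explicit runAsNonRoot, as risky.
    if ctx.get("runAsUser", 0) == 0 or not ctx.get("runAsNonRoot", False):
        findings.append("pod may run as root (UID 0)")
    for vol in spec.get("volumes", []):
        # A hostPath mount of the Docker socket grants effective host root.
        if vol.get("hostPath", {}).get("path", "") == "/var/run/docker.sock":
            findings.append("Docker socket mounted into pod (effective host root)")
    return findings

risky = {
    "securityContext": {"runAsUser": 0},
    "volumes": [{"name": "dock", "hostPath": {"path": "/var/run/docker.sock"}}],
}
print(audit_pod_spec(risky))
```

Admission controllers apply checks of this shape at deployment time, rejecting the pod before it is scheduled.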

Regulatory pressure also shapes adoption. Federal agencies operating under FedRAMP authorization requirements must demonstrate container security controls aligned with NIST SP 800-53 Rev 5 control families, particularly CM (Configuration Management), SI (System and Information Integrity), and AC (Access Control). The FedRAMP authorization framework mandates continuous monitoring that extends to containerized workloads.


Classification boundaries

Container security controls are classified along two primary axes: lifecycle phase and enforcement layer.

By lifecycle phase:
- Pre-deployment (Shift-left): Static image scanning, software composition analysis (SCA), secret detection, Dockerfile linting
- Deployment-time: Admission controllers, image signature verification, policy enforcement (e.g., Open Policy Agent / Gatekeeper)
- Runtime: Behavioral monitoring, anomaly detection, network policy enforcement, incident response

By enforcement layer:
- Application layer: Dependency pinning, code signing, secret management via vaults (not hardcoded variables)
- Container engine layer: Daemon hardening, user namespace mapping, read-only root filesystems
- Orchestration layer: RBAC, namespace isolation, resource quotas, pod security contexts
- Host/OS layer: Kernel hardening, minimal host OS (e.g., Container-Optimized OS, Flatcar Linux), auditd configuration

These boundaries are not mutually exclusive — a single control (e.g., running containers as non-root) operates simultaneously at the application layer and the container engine layer. The CIS Benchmarks for Docker and Kubernetes maintain explicit mapping between controls and these classification categories.


Tradeoffs and tensions

Container security generates documented operational tensions across four dimensions.

Immutability vs. operational flexibility: Immutable containers — where the filesystem is read-only and runtime changes are prohibited — provide stronger security guarantees but conflict with applications that write logs, temporary files, or configuration state to local paths. Enforcing read-only root filesystems (a CIS Docker Benchmark recommendation) requires application refactoring that many lift-and-shift legacy workloads cannot accommodate without significant cost.

Least-privilege vs. compatibility: Removing capabilities from containers using Linux capability dropping (e.g., CAP_NET_ADMIN, CAP_SYS_PTRACE) breaks application functionality that was not designed for constrained environments. Security teams enforcing capability restrictions encounter pushback from development teams whose applications depend on elevated permissions for legitimate functions.
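The capability-dropping approach described above is usually expressed as a container securityContext that drops everything and adds back only what the application demonstrably needs. The sketch below builds such a fragment; NET_BIND_SERVICE is an illustrative example of a capability added back (for binding ports below 1024), not a recommendation.

```python
import json

def least_privilege_context(needed: list[str]) -> dict:
    """Kubernetes container securityContext that drops every Linux
    capability and re-adds only an explicit, reviewed allow list."""
    return {
        "allowPrivilegeEscalation": False,
        "capabilities": {
            "drop": ["ALL"],
            "add": needed,  # e.g. NET_BIND_SERVICE for ports < 1024
        },
    }

print(json.dumps(least_privilege_context(["NET_BIND_SERVICE"]), indent=2))
```

The tension in the text shows up precisely here: each entry in the add list is a negotiation between the security team and the application owners.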

Scanning frequency vs. pipeline velocity: Comprehensive vulnerability scanning, including Software Bill of Materials (SBOM) generation and multi-layer image analysis, adds measurable latency to CI/CD pipelines. Organizations optimizing for deployment speed often reduce scan depth or defer scanning to asynchronous processes, creating windows where vulnerable images reach staging or production.

Visibility vs. overhead: Runtime security agents — eBPF-based tools, kernel modules, or sidecar containers — generate telemetry that feeds cloud security posture management platforms but impose CPU and memory overhead. In high-density container environments with hundreds of containers per node, this overhead affects application performance in measurable ways.


Common misconceptions

Misconception 1: Containers are inherently isolated. Containers provide process-level isolation through Linux namespaces and cgroups, not hardware-level isolation. A container and its host share the same kernel. Kernel exploits — such as the Dirty COW vulnerability (CVE-2016-5195) — can be leveraged from within containers to gain host access. NIST SP 800-190 cautions that containers do not provide the same degree of isolation as virtual machines.

Misconception 2: Private registries eliminate supply chain risk. A private registry that mirrors unvetted public images introduces the same vulnerabilities as pulling directly from Docker Hub. Provenance controls — including image signing, SBOM attestation, and vulnerability gates — are required regardless of registry privacy status.

Misconception 3: Kubernetes RBAC secures the entire cluster. RBAC controls API server access but does not govern container-to-container network traffic, runtime system calls, or host-level access. RBAC misconfiguration is one vector; unrestricted pod-to-pod communication and the absence of network policies represent equally significant gaps. The NSA/CISA Kubernetes Hardening Guide treats RBAC as one of several hardening areas, not a comprehensive solution.

Misconception 4: Scanning at build time is sufficient. New vulnerabilities are disclosed continuously; the NVD has logged more than 25,000 new CVE entries per year in recent years. An image scanned clean at build time may contain critical vulnerabilities within days of deployment if runtime and registry rescanning is not implemented.


Checklist or steps (non-advisory)

The following sequence reflects the operational phases of a container security program as structured by NIST SP 800-190 and CIS Benchmarks. These are descriptive reference points, not prescriptive guidance.

Phase 1 — Image Hardening
- [ ] Base image sourced from a verified, minimal distribution (e.g., Alpine, distroless)
- [ ] Image built from a pinned, versioned Dockerfile with no latest tag references
- [ ] Static vulnerability scan completed against NVD and vendor advisories
- [ ] No secrets, credentials, or API keys present in image layers
- [ ] Image signed using Notary or Sigstore Cosign
- [ ] SBOM generated and attached to image metadata
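The secret-detection item in Phase 1 can be illustrated with a pattern-based scan. The three patterns below are a well-known but deliberately tiny subset (an AWS access key ID prefix, a PEM private-key header, and a hardcoded password assignment); production scanners add entropy analysis and many more credential formats.

```python
import re

# Illustrative subset only; real scanners cover far more formats.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded password assignment": re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of any secret patterns found in a file's contents."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

# The key below is a fabricated example, not a real credential.
print(scan_for_secrets('aws_key = "AKIAABCDEFGHIJKLMNOP"\n'))
```

Checks like this run over every file destined for an image layer, since secrets baked into intermediate layers remain extractable even if deleted in later layers.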

Phase 2 — Registry Controls
- [ ] Private registry configured with access control and audit logging
- [ ] Automated rescan policy active for stored images (minimum weekly cadence)
- [ ] Admission policy blocks images failing signature verification
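The rescan-cadence item above reduces to a staleness check per stored image. A minimal sketch, assuming the registry records each image's last scan time and using the checklist's weekly cadence as the default window:

```python
from datetime import datetime, timedelta, timezone

def scan_is_stale(last_scan: datetime, max_age_days: int = 7) -> bool:
    """True when an image's most recent vulnerability scan is older
    than the rescan window (weekly cadence by default)."""
    return datetime.now(timezone.utc) - last_scan > timedelta(days=max_age_days)

old_scan = datetime.now(timezone.utc) - timedelta(days=10)
print(scan_is_stale(old_scan))
```

A registry automation job would iterate over stored images, re-queue any stale ones for scanning, and quarantine images that fail.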

Phase 3 — Orchestration Hardening (Kubernetes)
- [ ] RBAC configured with least-privilege service account bindings
- [ ] Pod Security Admission enforcing restricted or baseline policy
- [ ] Network Policies defined for all namespaces, denying inter-namespace traffic by default
- [ ] Secrets stored in dedicated secret management systems (not Kubernetes Secrets in plaintext)
- [ ] Resource limits (CPU, memory) set on all pods

Phase 4 — Runtime Security
- [ ] Seccomp profile applied, restricting system call surface
- [ ] AppArmor or SELinux policy active on host
- [ ] Read-only root filesystem enforced where application permits
- [ ] Runtime anomaly detection agent deployed and integrated with logging pipeline
- [ ] Container escape and privilege escalation alerts configured in SIEM
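The anomaly-detection item in Phase 4 rests on a simple idea: learn the set of system calls a workload normally makes, then alert on anything outside it. The sketch below reduces that idea to set arithmetic; real agents learn baselines over time and weigh severity rather than flagging every deviation.

```python
def detect_anomalies(baseline: set[str], observed: list[str]) -> set[str]:
    """Syscalls seen at runtime that were never part of the learned baseline."""
    return set(observed) - baseline

# ptrace appearing in a web workload's syscall stream is a classic
# signal of debugging or injection activity.
baseline = {"read", "write", "openat", "close"}
print(detect_anomalies(baseline, ["read", "write", "ptrace"]))  # → {'ptrace'}
```

In a full pipeline, the deviation set feeds the SIEM alerts listed above rather than being printed.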

Phase 5 — Audit and Compliance
- [ ] CIS Benchmark automated assessment run against Docker daemon and Kubernetes configuration
- [ ] Audit logs from Kubernetes API server retained per NIST SP 800-92 log management requirements
- [ ] Compliance posture mapped to applicable frameworks (FedRAMP, SOC 2, PCI DSS)


Reference table or matrix

| Control Domain | Standard/Framework | Key Controls | Enforcement Layer |
|---|---|---|---|
| Image vulnerability management | NIST SP 800-190; NVD | CVE scanning, base image hardening | Build / Registry |
| Image signing and provenance | CNCF Sigstore; Notary v2 | Cosign signing, SBOM attestation | Build / Registry |
| Container engine hardening | CIS Docker Benchmark v1.6 | Daemon TLS, user namespaces, seccomp | Container Engine |
| Orchestration access control | NSA/CISA Kubernetes Hardening Guide | RBAC, Pod Security Admission, audit logging | Orchestration |
| Network segmentation | Kubernetes Network Policy; NIST SP 800-204 | Default-deny ingress/egress, namespace isolation | Orchestration |
| Runtime threat detection | NIST SP 800-190; CIS Benchmark | Behavioral baselines, syscall filtering (seccomp) | Runtime / Host |
| Secret management | NIST SP 800-57; HashiCorp Vault | Dynamic secrets, no plaintext env vars | Application / Orchestration |
| Host OS hardening | CIS OS Benchmarks; SELinux/AppArmor | Minimal host OS, mandatory access control | Host |
| Compliance mapping | FedRAMP (NIST SP 800-53 Rev 5) | CM-7, SI-2, AC-6, AU-12 | All layers |
| Logging and monitoring | NIST SP 800-92; cloud-native SIEM integration | API audit logs, runtime event streams | Orchestration / Runtime |
