Container Security Best Practices
Container security encompasses the policies, controls, and technical mechanisms that protect containerized workloads across their full lifecycle — from image creation through runtime operation and decommissioning. This page covers the structural components of container security, the regulatory frameworks that govern containerized environments, classification boundaries between security domains, and the known tensions in applying traditional security models to ephemeral container infrastructure.
- Definition and Scope
- Core Mechanics or Structure
- Causal Relationships or Drivers
- Classification Boundaries
- Tradeoffs and Tensions
- Common Misconceptions
- Checklist or Steps
- Reference Table or Matrix
Definition and Scope
Container security refers to the set of practices, tools, and governance mechanisms applied to software containers — most commonly those built on the Open Container Initiative (OCI) specification and executed through runtimes such as containerd or CRI-O. The scope extends from the base image layer through the orchestration control plane, covering build pipelines, registry storage, cluster configuration, network policy, and runtime behavior monitoring.
The Cloud Defense Providers network covers service providers operating in this space, spanning managed Kubernetes security, container image scanning, and runtime protection categories.
Regulatory framing for container security derives from multiple sources. The National Institute of Standards and Technology (NIST) published Special Publication 800-190, Application Container Security Guide, which establishes the authoritative federal baseline for container security risk. The Federal Risk and Authorization Management Program (FedRAMP) applies to container-based cloud services used by federal agencies, requiring alignment with control baselines drawn from NIST SP 800-53 Rev 5. The Defense Information Systems Agency (DISA) publishes Security Technical Implementation Guides (STIGs) for container platforms, including a Kubernetes STIG that mandates specific pod security admission configurations.
The Center for Internet Security (CIS) publishes benchmarks for Docker and Kubernetes — the CIS Kubernetes Benchmark and CIS Docker Benchmark — that serve as de facto configuration standards in regulated industries. The Cloud Security Alliance (CSA) addresses containers within its Cloud Controls Matrix (CCM), mapping container risks to control domains including change control, vulnerability management, and logging.
Core Mechanics or Structure
Container security operates across five discrete layers, each with distinct attack surfaces and corresponding control categories.
Image Layer — The container image is the immutable build artifact. Security at this layer focuses on base image provenance, vulnerability scanning, and software bill of materials (SBOM) generation. The NIST Cybersecurity Framework 2.0 maps image integrity to the Identify and Protect functions. Images built from unverified upstream sources introduce supply chain risk documented in the CISA/NSA joint guidance Kubernetes Hardening Guidance.
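One concrete provenance control at this layer is pinning base images to an immutable digest rather than a mutable tag. The Python sketch below illustrates the check; the regex and function name are assumptions for illustration, not drawn from any particular tool.

```python
import re

# A sha256 digest reference is immutable; a tag like ":latest" is not.
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def is_digest_pinned(image_ref: str) -> bool:
    """Return True if the image reference ends in a sha256 digest."""
    return bool(DIGEST_RE.search(image_ref))
```

A pipeline gate of this shape rejects `alpine:latest` while accepting `gcr.io/distroless/static@sha256:<digest>`.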
Registry Layer — Container registries store and distribute images. Security controls at this layer include access control, content trust (image signing via Notary or Sigstore Cosign), and automated re-scanning of stored images as new CVEs are published. The SLSA (Supply-chain Levels for Software Artifacts) framework, hosted under the OpenSSF, provides a maturity model for registry and build provenance assurance.
Orchestration Layer — Kubernetes and equivalent platforms introduce a control plane with significant privilege. Misconfigurations in Role-Based Access Control (RBAC), admission controllers, and API server exposure are the primary attack vectors at this layer. NIST SP 800-190 identifies overly permissive RBAC as among the highest-impact container security risks.
Runtime Layer — At runtime, containers execute processes on shared kernel infrastructure. Security controls include seccomp profiles (restricting system call surface), AppArmor or SELinux mandatory access control profiles, and runtime threat detection tools that monitor for anomalous process or network behavior.
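A minimal seccomp profile in the JSON format consumed by OCI runtimes can be sketched as a Python dict. The syscall allowlist below is illustrative only, not a vetted baseline; real workloads need a profile derived from observed syscall usage.

```python
import json

# Deny every syscall by default; allow only an explicit list.
SECCOMP_PROFILE = {
    "defaultAction": "SCMP_ACT_ERRNO",
    "architectures": ["SCMP_ARCH_X86_64"],
    "syscalls": [
        {
            # Illustrative allowlist, not a production baseline.
            "names": ["read", "write", "exit", "exit_group", "futex"],
            "action": "SCMP_ACT_ALLOW",
        }
    ],
}

def render_profile(profile: dict) -> str:
    """Serialize the profile to the JSON file the runtime consumes."""
    return json.dumps(profile, indent=2)
```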
Network Layer — Container network security encompasses pod-to-pod communication policy (enforced via Kubernetes NetworkPolicy objects), service mesh mutual TLS (mTLS), and egress filtering. The NIST SP 800-204 series covers microservices security, directly applicable to container network architecture.
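A namespace-wide default-deny policy of the kind enforced through NetworkPolicy objects can be sketched as the manifest dict a client library would serialize. Field names follow the networking.k8s.io/v1 schema; the helper function itself is illustrative.

```python
def default_deny_policy(namespace: str) -> dict:
    """Build a NetworkPolicy manifest denying all pod traffic by default."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-all", "namespace": namespace},
        "spec": {
            "podSelector": {},  # empty selector matches every pod
            "policyTypes": ["Ingress", "Egress"],  # deny both directions
        },
    }
```

With this in place, each allowed flow must be added back with an explicit allow policy.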
Causal Relationships or Drivers
The primary driver of container security complexity is the shared kernel model. Unlike virtual machines, containers do not run isolated kernels — all containers on a host share the host OS kernel. A kernel-level vulnerability exploitable from within a container can, under certain conditions, enable container escape, as documented in CVE-2019-5736 (runc container escape) and CVE-2022-0185 (Linux kernel heap overflow affecting container environments).
Ephemeral container lifecycles compress the window for detection and forensic analysis. A container that runs for under 60 seconds — common in batch and CI/CD workloads — may complete a malicious operation and terminate before a security alert is generated. This detection gap is one reason the CISA/NSA Kubernetes Hardening Guidance emphasizes log aggregation and audit retention that outlive individual workloads.
Supply chain compromise is a dominant causal vector. The 2020 SolarWinds incident and subsequent attention on software supply chain integrity accelerated adoption of SBOM requirements, formalized in Executive Order 14028 (May 2021), which mandates SBOM generation for software sold to the federal government. Container images distributed through public registries are a direct subject of this requirement for federal contractors.
Organizational drivers include the acceleration of DevOps pipelines, where security gates are frequently bypassed under release pressure, and the proliferation of third-party Helm charts and operator manifests that embed default credentials, disabled authentication, or over-privileged service accounts.
For a broader framing of the regulatory landscape governing cloud workload security, see the overview page describing coverage across cloud security domains.
Classification Boundaries
Container security is a sub-domain of cloud workload protection but is technically distinct from adjacent categories in ways that affect tool selection, compliance mapping, and vendor classification.
Container Security vs. VM Security — Virtual machine security controls assume kernel isolation between workloads. Container security must account for the shared kernel attack surface, requiring kernel hardening controls (seccomp, namespaces, cgroups) that have no direct VM equivalent.
Container Security vs. Kubernetes Security — Container security addresses the container runtime and image lifecycle. Kubernetes security addresses the orchestration control plane: API server hardening, etcd encryption, admission controller policy, and cluster RBAC. The two domains overlap at the pod specification level (pod security standards, security contexts) but require distinct tooling categories.
Container Security vs. Serverless Security — Serverless functions (AWS Lambda, Google Cloud Functions) may internally execute within containers but expose no container management surface to the operator. Serverless security focuses on function permissions, event injection, and dependency management rather than runtime process monitoring or image scanning.
Container Security vs. Application Security — Application security (SAST, DAST, dependency scanning) addresses code-level vulnerabilities. Container security addresses the packaging and runtime environment. The two disciplines intersect at the SBOM and dependency manifest layer.
Tradeoffs and Tensions
Security Scanning vs. Pipeline Velocity — Comprehensive vulnerability scanning at build time adds latency to CI/CD pipelines. Teams operating under continuous deployment models may set scanner thresholds that permit deployment of images containing medium-severity CVEs to meet release schedules. The tension is structural: organizations must define explicit risk acceptance policies, not rely on scanners to block all findings.
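As an illustration of such an explicit policy, the sketch below encodes a gate that blocks high and critical findings outright while allowing a bounded number of mediums. The findings format and the medium-severity budget are assumptions for illustration.

```python
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings: list, medium_budget: int = 5) -> bool:
    """Return True if the image may deploy under this risk-acceptance policy."""
    mediums = 0
    for finding in findings:
        rank = SEVERITY_RANK[finding["severity"]]
        if rank >= SEVERITY_RANK["high"]:
            return False  # hard block on high/critical findings
        if rank == SEVERITY_RANK["medium"]:
            mediums += 1
    return mediums <= medium_budget
```

Making the budget an explicit parameter forces the risk acceptance to be a documented decision rather than a scanner default.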
Least-Privilege vs. Operational Functionality — Enforcing minimal Linux capabilities and restrictive seccomp profiles can break applications that rely on non-standard system calls. The CIS Kubernetes Benchmark recommends disabling the NET_RAW capability by default, but doing so breaks ping and certain network diagnostic tools commonly used in operational workflows.
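The sketch below shows what this tradeoff looks like in a container securityContext, expressed as the Python dict a client would serialize. The exact field combination is illustrative, not a complete hardening baseline.

```python
# Dropping NET_RAW removes raw-socket access, which breaks ping and
# similar diagnostics inside the container.
SECURITY_CONTEXT = {
    "allowPrivilegeEscalation": False,
    "capabilities": {"drop": ["NET_RAW"]},
    "runAsNonRoot": True,
}

def drops_net_raw(ctx: dict) -> bool:
    """Check whether a securityContext drops the NET_RAW capability."""
    return "NET_RAW" in ctx.get("capabilities", {}).get("drop", [])
```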
Immutability vs. Incident Response — Immutable containers — those that cannot be modified after deployment — are a security best practice per NIST SP 800-190. However, immutability complicates live forensics: responders cannot install debugging tools in a running container without violating the immutability model. Organizations must pre-stage forensic tooling in sidecar containers or accept that live analysis will require image-level inspection post-termination.
Centralized Registry Control vs. Developer Autonomy — Enforcing a single approved image registry reduces supply chain risk but creates friction in developer workflows that rely on pulling upstream images directly from Docker Hub or GitHub Container Registry. The CISA/NSA Kubernetes Hardening Guidance explicitly recommends restricting image sources, a policy that requires developer workflow changes.
Common Misconceptions
Misconception: Containers are inherently isolated from the host.
Containers provide namespace and cgroup-based isolation, not hypervisor-level isolation. The host kernel is shared. A container running as root with the --privileged flag or mounted host filesystem paths can directly access host resources. NIST SP 800-190, Section 3.1, explicitly addresses this distinction.
Misconception: Scanning images at build time is sufficient.
New CVEs are published continuously. The National Vulnerability Database (NVD) receives hundreds of new CVE entries per week. An image that passes scanning at build time may contain critical vulnerabilities within days of deployment as new disclosures emerge. Registry-level continuous re-scanning is required for ongoing assurance.
Misconception: Kubernetes RBAC provides complete access control.
Kubernetes RBAC controls API server access but does not govern container-level Linux process permissions, network policy, or host filesystem access. A pod with a permissive security context can operate outside RBAC boundaries entirely by exploiting container escape vulnerabilities.
Misconception: Using a managed Kubernetes service (EKS, AKS, GKE) eliminates security responsibility.
Cloud provider managed Kubernetes services handle control plane patching and availability but do not configure pod security standards, network policies, RBAC, or image provenance controls. The shared responsibility model, as defined by each major CSP and referenced in NIST SP 800-145, places workload security with the customer.
The How to Use This Cloud Defense Resource page describes how the provider network categorizes managed container security service providers within the broader cloud defense taxonomy.
Checklist or Steps
The following sequence represents the operationally standard phases of a container security implementation lifecycle, derived from NIST SP 800-190 and the CIS Kubernetes Benchmark structure. This is a reference sequence, not prescriptive instruction.
Phase 1 — Image Hardening
- Base images are sourced from verified, minimal distributions (e.g., distroless, Alpine) with documented provenance
- Images are built without embedded secrets, credentials, or development tools
- SBOM is generated at build time using a toolchain compliant with NTIA minimum SBOM elements
- Vulnerability scanner is integrated into the CI pipeline with defined severity thresholds
- Images are cryptographically signed before registry push
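The SBOM step above can be approximated by a build-time completeness check against the NTIA minimum elements. The field names below paraphrase those elements, and the flat record layout is an assumption, not a real SBOM schema.

```python
# Paraphrased NTIA minimum SBOM data fields (illustrative key names).
NTIA_MINIMUM_FIELDS = {
    "supplier", "component_name", "version", "unique_identifier",
    "dependency_relationship", "sbom_author", "timestamp",
}

def missing_fields(component: dict) -> set:
    """Return the NTIA minimum fields absent from one SBOM record."""
    return NTIA_MINIMUM_FIELDS - set(component)
```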
Phase 2 — Registry Controls
- A single approved registry is designated; external registry pulls are restricted via admission controller policy
- Image pull policies are set to prevent use of the latest tag in production
- Registry access is governed by role-based permissions with audit logging enabled
- Automated re-scanning is scheduled at minimum every 72 hours for production images
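The 72-hour re-scan requirement above reduces to a staleness check over each image's last scan timestamp. The function and window constant below are an illustrative sketch.

```python
from datetime import datetime, timedelta, timezone

RESCAN_WINDOW = timedelta(hours=72)

def needs_rescan(last_scanned: datetime, now: datetime = None) -> bool:
    """Return True if the image's last scan is older than the policy window."""
    if now is None:
        now = datetime.now(timezone.utc)
    return now - last_scanned > RESCAN_WINDOW
```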
Phase 3 — Orchestration Hardening
- Kubernetes API server is configured per CIS Kubernetes Benchmark Level 1 or Level 2 as appropriate to the environment
- RBAC policies follow least-privilege; service account token auto-mounting is disabled where not required
- Pod Security Standards are enforced at the namespace level (Restricted profile for production workloads per Kubernetes Pod Security Standards)
- Admission controllers (OPA/Gatekeeper or Kyverno) enforce policy at deploy time
- etcd is encrypted at rest; access is restricted to the API server
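Several of the items above are enforceable at admission time. The function below is an illustrative stand-in for the kind of rule an OPA/Gatekeeper or Kyverno policy would express; field names follow the Kubernetes pod spec, but the checker itself is a sketch.

```python
def policy_violations(pod_spec: dict) -> list:
    """Flag pod specs that auto-mount service account tokens or run as root."""
    issues = []
    # Kubernetes auto-mounts the token unless explicitly disabled.
    if pod_spec.get("automountServiceAccountToken", True):
        issues.append("service account token auto-mounted")
    sec = pod_spec.get("securityContext", {})
    if not sec.get("runAsNonRoot", False):
        issues.append("runAsNonRoot not enforced")
    return issues
```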
Phase 4 — Runtime Protection
- Seccomp profiles are applied to all production workloads; the RuntimeDefault profile is the minimum baseline
- AppArmor or SELinux profiles are applied where the host OS supports them
- Runtime threat detection is deployed to monitor process execution, file system writes, and network connections against established baselines
- Privileged containers are prohibited; host namespace sharing (hostPID, hostNetwork, hostIPC) is blocked by admission policy
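The privileged-container and host-namespace prohibitions above reduce to a deterministic check over the pod spec. The sketch below illustrates the admission logic; it is not a real admission webhook.

```python
HOST_NAMESPACE_FIELDS = ("hostPID", "hostNetwork", "hostIPC")

def admit(pod_spec: dict) -> bool:
    """Reject pods requesting privileged mode or host namespace sharing."""
    if any(pod_spec.get(field, False) for field in HOST_NAMESPACE_FIELDS):
        return False
    for container in pod_spec.get("containers", []):
        if container.get("securityContext", {}).get("privileged", False):
            return False
    return True
```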
Phase 5 — Network Controls
- Default-deny NetworkPolicy is applied at the namespace level
- East-west traffic between services is encrypted via mTLS (service mesh or application-layer TLS)
- Egress traffic is filtered and logged; unexpected external connections trigger alerts
- Ingress controllers are hardened per the applicable CIS benchmark
Phase 6 — Logging and Monitoring
- Kubernetes audit logging is enabled at the RequestResponse level for sensitive API groups
- Container stdout/stderr logs are shipped to a centralized SIEM with retention meeting applicable compliance requirements (under FedRAMP, the AU-11 audit record retention control sets a minimum period defined in the authorization boundary)
- Anomaly detection baselines are established within 30 days of workload deployment
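A hypothetical audit Policy consistent with the items above can be expressed as a Python dict in the audit.k8s.io/v1 schema. It keeps Secrets at Metadata level, so secret payloads never enter the audit log, while capturing full request/response bodies for RBAC changes; the rule set is illustrative, not a complete policy.

```python
AUDIT_POLICY = {
    "apiVersion": "audit.k8s.io/v1",
    "kind": "Policy",
    "rules": [
        # Never log secret payloads; metadata only.
        {"level": "Metadata",
         "resources": [{"group": "", "resources": ["secrets"]}]},
        # Full bodies for RBAC changes, a sensitive API group.
        {"level": "RequestResponse",
         "resources": [{"group": "rbac.authorization.k8s.io",
                        "resources": ["roles", "rolebindings",
                                      "clusterroles", "clusterrolebindings"]}]},
    ],
}
```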
Reference Table or Matrix
Container Security Control Mapping by Layer
| Security Layer | Primary Standard | Key Controls | Regulatory Reference |
|---|---|---|---|
| Image Build | NIST SP 800-190 | Vulnerability scanning, SBOM generation, image signing | EO 14028, FedRAMP CM family |
| Container Registry | CIS Docker Benchmark | Access control, content trust, continuous re-scanning | NIST SP 800-53 Rev 5 CM-3, SI-3 |
| Orchestration (Kubernetes) | CIS Kubernetes Benchmark | RBAC, Pod Security Standards, admission control, etcd encryption | DISA Kubernetes STIG, FedRAMP AC family |
| Container Runtime | NIST SP 800-190, OCI Runtime Spec | Seccomp, AppArmor/SELinux, no-privileged enforcement | NIST SP 800-53 SI-7, SC-39 |
| Container Networking | NIST SP 800-204 | NetworkPolicy, mTLS, egress filtering | FedRAMP SC family |
| Logging and Monitoring | NIST SP 800-92 | Audit logging, SIEM integration, anomaly detection | FedRAMP AU family, HIPAA §164.312(b) |
Regulatory Framework Applicability to Containerized Workloads
| Framework | Governing Body | Container-Specific Applicability | Key Publication |
|---|---|---|---|
| FedRAMP | GSA | Federal cloud services; container workloads require ATO boundary documentation | NIST SP 800-53 Rev 5 |
| HIPAA Security Rule | HHS | Covered entities running ePHI in containers; encryption and audit controls required | 45 CFR Part 164 |
| PCI DSS v4.0 | PCI SSC | Containers in cardholder data environments; Requirement 6 (secure development) directly applicable | PCI DSS v4.0 |
| NIST CSF 2.0 | NIST | Voluntary framework; Identify/Protect/Detect functions map directly to container security lifecycle | NIST CSF 2.0 |
| Executive Order 14028 | OMB / CISA | Federal software supply chain; SBOM requirements apply to containerized software sold to federal agencies | EO 14028 (Federal Register, May 2021) |