Serverless Security: Risks and Controls
Serverless computing shifts infrastructure management to cloud providers, allowing organizations to deploy function-level code without provisioning or maintaining servers. This operational model introduces a distinct security profile — one in which traditional perimeter defenses are largely irrelevant and attack surfaces shift toward application logic, event triggers, identity and access controls, and third-party dependencies. The security controls applicable to serverless environments draw from frameworks including NIST SP 800-53 and guidance from the Cloud Security Alliance, and they apply across all major cloud providers offering function-as-a-service platforms. The Cloud Defense Providers section catalogs qualified service providers operating across these control domains.
Definition and scope
Serverless security refers to the policies, controls, and architectural practices that protect applications built on function-as-a-service (FaaS) and event-driven execution platforms from exploitation, data exposure, privilege escalation, and logic abuse. The term "serverless" describes the customer's experience — servers exist but are fully abstracted by the cloud provider — rather than the underlying infrastructure.
The security scope of serverless environments is defined by three primary boundaries:
- Provider-managed layer — physical infrastructure, hypervisor isolation, operating system patching, and runtime environments. Under FedRAMP authorization frameworks, cloud service providers carry documented responsibility for these controls.
- Customer-managed layer — function code, environment variable configuration, IAM role assignments, event source permissions, and dependency chains.
- Shared boundary — runtime configuration, logging enablement, network egress controls, and API gateway settings, which require active coordination between provider defaults and customer configuration choices.
NIST SP 800-145 establishes the foundational service model taxonomy that governs this division. Serverless platforms occupy a position closer to PaaS than IaaS, meaning customers surrender more infrastructure control but retain full responsibility for application-layer security. The Cloud Defense Providers section describes how qualified security providers are categorized within this model.
How it works
Serverless functions execute in stateless, ephemeral containers that are instantiated on demand and terminated after execution — often within milliseconds to seconds. The security implications of this execution model are structurally different from those governing long-running virtual machines or containers.
The attack surface in serverless environments is organized around four discrete phases:
- Trigger and invocation — Functions are invoked by event sources: HTTP requests via API gateways, message queue events, storage bucket updates, or scheduled timers. Each event source represents a potential injection point. The Open Web Application Security Project (OWASP) identifies event-data injection as a primary serverless attack vector in its Serverless Top 10 publication.
- Execution context — During execution, a function operates with an assigned IAM role. Overly permissive roles — a condition NIST SP 800-190 flags as a misconfiguration risk in cloud-native environments — allow compromised functions to access unrelated resources, escalate privileges, or exfiltrate data.
- Dependency resolution — Serverless functions commonly import third-party libraries. Supply chain attacks targeting open-source packages, as documented by the Cybersecurity and Infrastructure Security Agency (CISA AA22-137A), propagate directly into function execution environments.
- Output and downstream integration — Function outputs often feed databases, notification services, or downstream APIs. Insecure deserialization or insufficient output validation at this phase can compromise downstream systems.
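The injection risk at the trigger phase can be illustrated with a minimal allow-list validation sketch. The event shape, field names, and `handler` function below are hypothetical and not tied to any specific provider's schema:

```python
import json
import re

# Hypothetical allow-list for a field carried in an HTTP-triggered event.
# The pattern, field names, and event shape are illustrative only.
ORDER_ID_PATTERN = re.compile(r"^[A-Z0-9]{8}$")

def handler(event):
    """Reject any payload that does not match the expected schema
    before it reaches business logic or downstream queries."""
    try:
        body = json.loads(event.get("body", ""))
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": "malformed JSON"}

    order_id = body.get("order_id", "")
    if not ORDER_ID_PATTERN.fullmatch(order_id):
        # Fail closed: unexpected input is dropped, not interpreted.
        return {"statusCode": 400, "body": "invalid order_id"}

    # Only validated data proceeds to downstream integrations.
    return {"statusCode": 200, "body": json.dumps({"order_id": order_id})}

handler({"body": json.dumps({"order_id": "ABC12345"})})  # accepted (200)
handler({"body": '{"order_id": "1 OR 1=1"}'})            # rejected (400)
```

Because the function executes with the full permissions of its IAM role, validation that fails closed at the trigger boundary is the cheapest place to stop event-data injection.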
Unlike traditional applications, serverless functions do not maintain persistent network connections or long-lived processes, which limits some lateral movement techniques but also eliminates visibility mechanisms that depend on persistent agents or host-level logging.
Common scenarios
Serverless security failures concentrate in identifiable operational patterns. The following represent the most structurally significant risk scenarios documented in public threat intelligence:
- Overprivileged function roles — A function assigned an IAM policy granting `s3:*` permissions when only `s3:GetObject` on a single bucket is required. AWS IAM best practices and NIST SP 800-53 Rev 5, AC-6 (Least Privilege) both call for privilege minimization, but automated deployment pipelines frequently generate permissive roles by default.
- Secrets in environment variables — Hardcoded API keys or database credentials stored in function environment variables are exposed through misconfigured logging or unauthorized invocation. The Cloud Security Alliance's Cloud Controls Matrix v4 addresses secrets management under control domain EKM-03.
- Unvalidated event inputs — Functions triggered by external HTTP events that fail to validate or sanitize input data are vulnerable to injection attacks. An attacker controlling a message queue or storage event can inject malicious payloads that the function executes with its assigned permissions.
- Insecure function-to-function communication — Architectures in which one function invokes another using synchronous API calls without authentication allow an attacker who compromises one function to chain invocations horizontally across the application.
- Cold-start logging exposure — Some platforms log initialization data, including environment state, during cold starts. Without explicit log filtering, sensitive configuration data enters log streams accessible to anyone with logging service permissions.
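The overprivileged-role pattern above lends itself to automated detection in a deployment pipeline. The following sketch flags wildcard actions in an IAM-style policy document; the policy contents and bucket name are hypothetical examples in the common JSON policy shape:

```python
# Hypothetical pipeline check that flags wildcard actions in an
# IAM-style policy document before deployment.
def find_wildcard_actions(policy: dict) -> list[str]:
    """Return every action containing '*' from Allow statements."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        flagged.extend(a for a in actions if "*" in a)
    return flagged

overprivileged = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
}
least_privilege = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",  # illustrative bucket
    }],
}

find_wildcard_actions(overprivileged)    # → ['s3:*']
find_wildcard_actions(least_privilege)   # → []
```

Wiring a check like this into the deployment stage catches the roles that pipelines generate permissively by default, before they reach production.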
Decision boundaries
Serverless security controls differ from general cloud security controls in scope and applicability. The following boundaries distinguish where serverless-specific controls apply versus where standard cloud controls govern:
| Dimension | Serverless-Specific | Standard Cloud Applies |
|---|---|---|
| Patch management | Provider-managed; customer has no control | Customer-managed in IaaS/PaaS contexts |
| IAM scope | Per-function role assignment; short-lived execution tokens | Per-instance or per-service role; longer session duration |
| Network controls | Egress filtering via VPC configuration; no inbound persistent connections | Full inbound/outbound firewall rules on persistent instances |
| Logging | Requires explicit opt-in and filtering; ephemeral execution reduces default log completeness | Persistent agents can capture full session data |
| Dependency risk | Runtime package imports without OS-level isolation between tenants | OS-level container isolation provides one additional boundary |
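The logging row above — explicit opt-in and filtering — can be approximated in code. The sketch below shows one way to redact secret-like values from a function's log stream before records are emitted; the key names in the pattern are illustrative assumptions, not an exhaustive list:

```python
import logging
import re

# Hypothetical redaction filter for function log streams: masks values
# of keys that commonly hold secrets (key names are illustrative).
SECRET_KEY_PATTERN = re.compile(
    r"(?i)\b(api_key|token|password|secret)\s*=\s*\S+"
)

class RedactingFilter(logging.Filter):
    """Rewrite log messages so secret-like values never reach the sink."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SECRET_KEY_PATTERN.sub(r"\1=[REDACTED]", str(record.msg))
        return True

logger = logging.getLogger("function")
logger.addFilter(RedactingFilter())
logger.warning("cold start: API_KEY=abc123 loaded")  # emits API_KEY=[REDACTED]
```

A filter like this addresses the cold-start exposure scenario: initialization logging can stay enabled for observability while configuration values are masked before they enter a log stream readable by anyone with logging service permissions.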
The How to Use This Cloud Defense Resource page describes how provider-specific service categories map to these control boundaries within the network structure.
For compliance alignment, serverless deployments subject to the HIPAA Security Rule (45 CFR §164.312) must ensure access controls, audit controls, and integrity controls are implemented at the function level — a requirement that cannot be delegated to the infrastructure provider regardless of the service model. FedRAMP-authorized platforms satisfy infrastructure-layer controls but leave application-layer HIPAA obligations with the covered entity or business associate deploying the functions.
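Because those HIPAA obligations sit at the function level, access control and audit logging must live inside the handler itself. The following is a simplified sketch of that pattern, not a HIPAA-compliant implementation; the token check, hash-based allow list, and audit sink are stand-ins for a managed identity service and durable audit store:

```python
import hashlib
import json
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Hypothetical set of token hashes authorized for this function.
# In practice these would come from a managed identity service.
AUTHORIZED_TOKEN_HASHES = {
    hashlib.sha256(b"example-token").hexdigest(),
}

def handler(event):
    """Enforce access control and emit an audit record inside the
    function itself, since the platform cannot do this for us."""
    token = event.get("headers", {}).get("authorization", "")
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    allowed = token_hash in AUTHORIZED_TOKEN_HASHES

    # Audit record: who (a hash, never the raw token), what, and outcome.
    audit_log.info(json.dumps({
        "action": "read_record",
        "principal": token_hash[:12],
        "allowed": allowed,
    }))

    if not allowed:
        return {"statusCode": 403, "body": "forbidden"}
    return {"statusCode": 200, "body": "record contents"}
```

Note that the audit record is written on both the allowed and denied paths — audit controls under §164.312(b) require a trail of access attempts, not just successes.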