Intro
At Include Security one type of assessment we perform is cloud configuration reviews. Most commonly, our clients have AWS cloud accounts provisioned using Terraform (or the community fork OpenTofu). Our reviews usually involve two parallel views: the live account in the AWS Console and a static review of Infrastructure as Code (IaC) config files. We look for issues like privilege escalation paths, missing encryption, and cross-account attack surfaces.
Over recent years, the AWS Console has improved significantly at setting secure defaults when creating cloud resources. But when the same resources are created with Terraform and other API-driven tools, those guardrails often disappear. In many places, Terraform’s AWS provider inherits legacy AWS API defaults that are surprising to encounter today. As a result, teams provision infrastructure that’s weaker than what the AWS Console would have produced with the same intent.
This post explores the growing security gap between Console-created resources and API-created resources. The problem is not exclusive to Terraform; it’s just by far the most popular tool we see used to manage cloud resources. The post highlights misconfigurations we often see in assessments, examines why they occur, and recommends the best approaches for preventing them.
Example A: RDS Encryption
An AWS Relational Database Service (RDS) instance can be created in Terraform with an aws_db_instance resource block like the following:
resource "aws_db_instance" "example" {
  engine                 = "mysql"
  identifier             = "mydb"
  instance_class         = "db.t3.micro"
  allocated_storage      = 20
  publicly_accessible    = false
  username               = var.db_username
  password               = var.db_password
  vpc_security_group_ids = [aws_security_group.example.id]
  skip_final_snapshot    = true

  # bad: no storage_encrypted argument
}
Surprisingly, the database created by this configuration does not have encryption at rest enabled. This leaves data in cleartext for an attacker who gains access to the underlying storage. The issue extends to RDS snapshots and backups, which inherit the unencrypted state of the database instance. Unencrypted snapshots can then be shared with and restored in other AWS accounts.
While unlikely, unauthorized access could also occur due to an inside attacker, a flaw in AWS’s multi-tenant architecture, or mishandling of storage drives during maintenance. Compliance standards therefore typically require or strongly recommend encryption for sensitive data at rest.
It’s also operationally annoying to fix an unencrypted instance: it’s not possible to directly encrypt one, meaning a new encrypted database must be created from a snapshot.
The screenshot below of the AWS console shows the storage options of the instance created by Terraform with encryption “Not enabled”:

Meanwhile, when creating an RDS instance in the AWS Console, under advanced options “Enable encryption” is selected by default:

The fix in Terraform is to set the storage_encrypted option to true in the aws_db_instance resource block, which must be done when the database is created:
resource "aws_db_instance" "example" {
  [...]
  storage_encrypted = true
}
Further, in more mature organizations customer-managed keys (CMK) could be used to encrypt the data (using the kms_key_id argument) rather than AWS-managed keys, providing more control over key rotation and revocation.
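As an illustrative sketch (the resource names here are hypothetical), a CMK-backed setup might look like:

```hcl
# Hypothetical sketch: a customer-managed key with automatic rotation,
# referenced from the RDS instance via kms_key_id.
resource "aws_kms_key" "rds" {
  description         = "CMK for RDS storage encryption"
  enable_key_rotation = true
}

resource "aws_db_instance" "example" {
  # ... other arguments as before ...
  storage_encrypted = true
  kms_key_id        = aws_kms_key.rds.arn
}
```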
Many online Terraform tutorials and examples omit arguments like storage_encrypted when demonstrating how to create resources. Consequently, LLM-generated code often does too. We asked ChatGPT 5.2 to “show how to use an aws_db_instance resource block to create an RDS instance in Terraform” and it omitted the storage_encrypted argument. When asked to consider security, it spotted the omission.
The equivalent Azure and GCP providers in Terraform do not share this problem, thanks to the more consistent secure-by-default approach in Azure and GCP, which were able to learn from some of AWS’s early design decisions.
In Azure, transparent data encryption (TDE) is now enabled on all database resources by default. This is reflected in the Terraform azurerm MSSQL database resource, where TDE defaults to on.
Going one step further, the Terraform GCP SQL database resource doesn’t even expose a toggle for encryption at rest, since all data is encrypted at rest in GCP.
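For comparison, a minimal Cloud SQL instance in Terraform (names illustrative) has no encryption argument to forget:

```hcl
# Illustrative sketch: a minimal GCP Cloud SQL instance. Storage is always
# encrypted at rest; there is no toggle to omit.
resource "google_sql_database_instance" "example" {
  name             = "mydb"
  database_version = "MYSQL_8_0"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
  }
}
```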
Example B: Lambda resource-based policies
AWS Lambda functions use resource-based policies to define which principals, including AWS service principals, can invoke them. A common pattern is to allow an API Gateway to invoke a Lambda.
In the AWS Console, the UI enforces an important guardrail: it requires you to specify a “Source ARN”, to ensure that only the intended API Gateway can invoke the Lambda function:

However, neither the AWS API, nor Terraform, nor the AWS CLI requires the Source ARN value to be set. The source_arn condition can simply be omitted when setting up the aws_lambda_permission in Terraform:
resource "aws_lambda_permission" "vm_restart" {
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.vm_restart.function_name
  principal     = "apigateway.amazonaws.com"

  # bad: no source_arn
}
This leads to a “Confused Deputy” vulnerability: any API Gateway resource in the same region, even one owned by an attacker in a different AWS account, can invoke the Lambda function. The attack is detailed in research on Dangers of a Service as a Principal in AWS Resource-Based Policies, and its severity varies depending on what the Lambda function does and how easily an attacker can discover it.
The documentation for the aws_lambda_permission resource notes that the source_arn argument is optional. The docs also mention the risk, but this is easily missed. A fixed example looks like the following, with source_arn constrained to a specific API Gateway:
resource "aws_lambda_permission" "vm_restart" {
  …
  source_arn = "arn:aws:execute-api:us-east-1:012345678901:aqnku8akd0/*/*/*"
}
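When the API Gateway is managed in the same Terraform configuration, the ARN can be derived from the resource’s execution_arn attribute rather than hardcoded (the vm_api resource name here is illustrative):

```hcl
resource "aws_lambda_permission" "vm_restart" {
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.vm_restart.function_name
  principal     = "apigateway.amazonaws.com"

  # Restrict invocation to any stage/method/path of this specific API Gateway
  source_arn = "${aws_api_gateway_rest_api.vm_api.execution_arn}/*/*/*"
}
```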
Example C: Password Policy
Here’s a fun one that we saw once. AWS IAM account settings define the minimum requirements for AWS IAM user passwords. The AWS default for new accounts is a minimum password length of 8 characters, with at least three of the four character classes: uppercase, lowercase, numbers, and symbols:

The aws_iam_account_password_policy resource can be used to modify this. In the following resource block, a developer intended to increase the minimum password length from the default of 8 characters to 10 characters:
resource "aws_iam_account_password_policy" "this" {
  minimum_password_length = 10

  # bad: missing other arguments
}
However, this has the unintended effect of disabling all other password strength requirements:

This is again due to built-in behavior of the AWS SDK: creating a custom password policy replaces the existing policy entirely. Any omitted fields fall back to the API defaults (false if not specified), not to the previous policy’s values.
The “terraform plan” output does show that the password strength arguments are being changed. But the values are only “known after apply”, as shown in the following terminal output, so it’s not obvious that the requirements are being removed:
Terraform will perform the following actions:

  # aws_iam_account_password_policy.this will be created
  + resource "aws_iam_account_password_policy" "this" {
      + allow_users_to_change_password = true
      + expire_passwords               = (known after apply)
      + hard_expiry                    = (known after apply)
      + id                             = (known after apply)
      + max_password_age               = (known after apply)
      + minimum_password_length        = 10
      + password_reuse_prevention      = (known after apply)
      + require_lowercase_characters   = (known after apply)
      + require_numbers                = (known after apply)
      + require_symbols                = (known after apply)
      + require_uppercase_characters   = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
The fix is to specify all requirements as arguments:
resource "aws_iam_account_password_policy" "this" {
  minimum_password_length        = 14 # CIS baseline suggests at least 14 characters
  require_uppercase_characters   = true
  require_lowercase_characters   = true
  require_numbers                = true
  require_symbols                = true
  allow_users_to_change_password = true
  password_reuse_prevention      = 24
}
Ideally, IAM users with passwords should not be used at all. Instead, the AWS account would be plugged into an existing identity provider such as Okta, with SAML used to federate access into the account via a particular IAM role.
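A sketch of that setup in Terraform (the provider name, metadata file path, and role name are all hypothetical):

```hcl
# Hypothetical sketch: SAML federation via an external IdP such as Okta.
resource "aws_iam_saml_provider" "okta" {
  name                   = "okta"
  saml_metadata_document = file("${path.module}/okta-metadata.xml")
}

resource "aws_iam_role" "federated_readonly" {
  name = "federated-readonly"

  # Trust policy: only SAML assertions from the IdP can assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_saml_provider.okta.arn }
      Action    = "sts:AssumeRoleWithSAML"
      Condition = {
        StringEquals = { "SAML:aud" = "https://signin.aws.amazon.com/saml" }
      }
    }]
  })
}
```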
Why is Terraform like this?
Terraform uses the AWS SDK under the hood, and the AWS SDK seems committed to backwards compatibility in its interface. As such, the SDK does not apply security best practices like encryption at rest when creating legacy resources such as RDS instances, Redshift clusters, or EBS volumes (although it’s now possible to configure a region to encrypt EBS volumes by default).
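The regional EBS opt-in mentioned above is itself a one-line Terraform resource:

```hcl
# Opt the current region into EBS encryption by default, so volumes created
# without an explicit encryption setting are still encrypted.
resource "aws_ebs_encryption_by_default" "this" {
  enabled = true
}
```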
Further, it’s hard to fix this behavior a layer above in Terraform due to the way Terraform works on existing infrastructure. After Terraform has been run successfully, a second run with the same configuration should be idempotent – there should be no further changes to existing cloud resources. If the Terraform provider code changed its defaults, this would have the potential to break existing infrastructure, as Terraform would try to apply unexpected changes to previously provisioned resources.
Terraform is not alone here. Terraform competitor Pulumi, which enables writing IaC in different languages, also follows the AWS SDK in its default values for arguments like storageEncrypted for RDS instances.
Counterexample: S3
In contrast to the examples above, AWS S3 has much better secure defaults these days. Using the Terraform aws_s3_bucket resource, let’s declare a bucket with the minimum possible configuration:
resource "aws_s3_bucket" "app" {
  bucket = "terraform-test12312321313"
}
Despite the lack of arguments, and despite no attached aws_s3_bucket_public_access_block resource, the created bucket still blocks all public access, as shown in the following screenshot:

The default block of all public access has been a secure default enforced by AWS since April 2023, in response to breaches involving accidentally exposed buckets and objects. New AWS accounts also have public access blocks on DynamoDB, EFS, EC2, and EMR, but not EBS resources by default.
In S3, encryption at rest using SSE-S3 is enabled for all new buckets and objects created, as of early 2023:

The test bucket is still missing some other security best practices for important buckets like versioning, MFA delete, and access logs, but these are lower risk and optional.
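For instance, versioning can be switched on with a companion resource referencing the test bucket above:

```hcl
# Enable versioning on the minimal bucket from the earlier example.
resource "aws_s3_bucket_versioning" "app" {
  bucket = aws_s3_bucket.app.id

  versioning_configuration {
    status = "Enabled"
  }
}
```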
Solutions
We’ve established that there are a number of footguns when creating AWS infrastructure using Terraform. What are the best options to avoid them?
Static Analysis
There are several static analysis security tools for Terraform: Trivy, Checkov, and Terrascan, and Semgrep offers a Terraform ruleset.
The tool we like the most is Trivy which includes the functionality of the deprecated tfsec. Trivy scans against a database of common Terraform cloud misconfigurations. We recommend that everyone writing Terraform uses this tool in their CI/CD or pre-commit hooks.
The main disadvantages are the number of false positives and the overstatement of risk. For instance, Trivy marks as “Critical” the finding that a security group allows egress to 0.0.0.0/0. Such security groups are often used for public-facing websites, and even where they are not, the finding is hard for an attacker to exploit and annoying for a defender to fully remediate. Similarly, “Bucket does not encrypt data with a customer managed key” is marked as a High risk item, which we think is debatable.
Trivy has rules for “Instance does not have storage encryption enabled”, “Ensure that lambda function permission has a source arn specified”, and “IAM Password policy should have requirement for at least one uppercase character”, which would catch all the examples in this article.
The “Ensure that lambda function permission has a source arn specified” rule is broad: only a few service principals (like API Gateway) are actually exploitable, so again the “Critical” risk is often overstated. But setting a source ARN is still best practice.
Trivy results therefore require a lot of interpretation and adjustment of risk based on the specific infrastructure tested. Static analysis will also not catch complex, context-dependent IAM logic, which is often where the main vulnerabilities in a cloud environment live. However, running this tool provides a decent baseline for catching common Terraform misconfigurations.
Policy as Code
Once a decent baseline has been achieved with Trivy, policy as code is the next step in increasing the security of Terraform configuration. Trivy comes with convenient built-in checks but doesn’t evaluate higher-level conditions like “Only the security team can create IAM roles” or “Prod resources must use approved KMS keys”. Such rules are traditionally written down in documents, but automating their enforcement through policy is better. Tools like Open Policy Agent and Sentinel can do this.
The downside is that writing policy as code is complex, and possibly only worthwhile for larger and more mature organizations.
Organization SCPs
In addition to static analysis helping to prevent insecure defaults from being committed, defence-in-depth can be added with AWS Organizations Service Control Policies (SCPs) that block undesired API actions. For instance, if a Terraform module omits storage_encrypted = true, or someone clicks through a risky Console workflow, the underlying create call can be blocked with a policy such as this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "rds:CreateDBInstance",
        "rds:CreateDBCluster"
      ],
      "Resource": "*",
      "Condition": {
        "Bool": {
          "rds:StorageEncrypted": "false"
        }
      }
    }
  ]
}
A limitation of SCPs is that they can cause Terraform to fail with generic AccessDenied errors that don’t clearly inform about the source of the failure.
Centralized Modules
Another best practice, which can be layered on the approaches above, is to provide “golden modules”. Instead of developers writing raw aws_db_instance resources, the organization provides a hardened module with good defaults hardcoded. This can be combined with a policy that prevents writing raw resource blocks.
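A sketch of what consuming such a module might look like (the module source URL and interface here are made up):

```hcl
# Hypothetical hardened internal module: callers pick sizing and naming,
# while encryption, CMK usage, and backup settings are fixed inside it.
module "database" {
  source = "git::https://git.example.com/modules/rds-hardened"

  identifier     = "mydb"
  engine         = "mysql"
  instance_class = "db.t3.micro"
}
```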
Conclusion
IaC is still the right answer for provisioning repeatable, reviewable infrastructure. But when used to provision AWS resources, tools like Terraform require a higher degree of security vigilance than you might expect. The Console is increasingly adding guardrails and safer defaults, while API-driven provisioning keeps legacy behaviors for compatibility.
The gap can be closed by combining baseline static scanning (e.g., Trivy rules), Service Control Policies, targeted policy-as-code for org-specific requirements, and centralized “golden modules” that bake in the best argument choices.
Note
Examples were tested with Terraform AWS provider version v6.28.0.