Avoid the 5 Most Common Amazon Web Services Misconfigurations in Build-Time

Infrastructure as code (IaC) makes cloud provisioning faster, simpler, and more scalable. It also gives us the opportunity to make relatively simple changes that can have a lasting impact on our cloud security posture.

To demonstrate this, we analyzed the most common Amazon Web Services (AWS) security errors across IaC modules in the wild. In this post, we’re looking at the most common non-compliant AWS policies and the risks associated with them. We’ll also share the simple build-time Terraform configuration needed to fix each error.

Ensure all data stored in S3 Bucket is securely encrypted at rest

S3 supports easy, free encryption using the AES-256 encryption standard. Encrypting S3 buckets at rest prevents your data from being exposed to anyone who gains access to the hard drives that store it.

To be compliant with this policy, which is required for PCI-DSS and NIST-800, encryption needs to be set by default on the relevant bucket(s). This will cause all subsequent items saved to that S3 bucket to be encrypted automatically.

Add the following block to a Terraform S3 resource to add AES-256 encryption:

server_side_encryption_configuration {
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
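Note that in version 4 and later of the Terraform AWS provider, bucket encryption is managed through a standalone resource rather than an inline block. A minimal sketch, using an illustrative bucket name:

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket" # illustrative name
}

# Standalone encryption resource (AWS provider v4+)
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```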

Ensure all data stored in the Launch Configuration EBS is securely encrypted

Amazon Elastic Block Store (EBS) volumes support built-in encryption, but it is not enabled by default. A launch configuration specifies the settings an Amazon EC2 Auto Scaling group uses to launch EC2 instances, including their EBS volumes.

When the entire EBS volume is encrypted, data stored at rest on the volume, disk I/O, snapshots created from the volume, and data-in-transit between EBS and EC2 are all encrypted.

Keeping your data encrypted at rest ensures that no unauthorized entities gain access to it. Compliance with this policy is also required for PCI-DSS. To prevent this AWS error in your Terraform module, make sure that encryption is enabled for EBS Launch Configurations:

resource "aws_launch_configuration" "as_conf" {
  name_prefix = "terraform-lc-example-"
  image_id = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
+ root_block_device {
+   encrypted = true
+ }
}
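Alternatively, you can enforce encryption for all new EBS volumes in a region by opting in to encryption by default at the account level; a sketch:

```hcl
# Opt the account (per region) into EBS encryption by default,
# so new volumes and snapshot copies are encrypted automatically.
resource "aws_ebs_encryption_by_default" "example" {
  enabled = true
}
```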

Ensure rotation for customer created CMKs is enabled

AWS Key Management Service (KMS) allows customers to rotate backing keys, the key material stored within KMS and tied to the key ID of the customer master key (CMK). The backing key is used to perform cryptographic operations such as encryption and decryption. Automatic key rotation retains all prior backing keys, so decryption of previously encrypted data continues to work transparently.

The longer a key goes un-rotated, the more data gets encrypted with it and the more likely it is to be compromised. If the key is compromised, all data encrypted with it is exposed, so it is highly recommended to rotate encryption keys at least yearly.

By default, automatic CMK rotation is not enabled (it is in Google Cloud!), but it is recommended to help reduce a compromised key’s potential impact. It is also a requirement for PCI-DSS, CIS, and ISO27001 compliance.

To fix this misconfiguration in Terraform, turn on key rotation:

resource "aws_kms_key" "kms_key_1" {
  description = "kms_key_1"
  deletion_window_in_days = 10
  key_usage = "ENCRYPT_DECRYPT"
  is_enabled = true
+ enable_key_rotation = true
}
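A rotated CMK can then be referenced wherever encryption is configured. For example, the S3 default-encryption block shown earlier can point at this key by switching to the aws:kms algorithm; a sketch:

```hcl
server_side_encryption_configuration {
  rule {
    apply_server_side_encryption_by_default {
      # Reference the rotated CMK defined above
      kms_master_key_id = aws_kms_key.kms_key_1.arn
      sse_algorithm     = "aws:kms"
    }
  }
}
```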

Ensure DynamoDB Point-in-Time Recovery (backup) is enabled

Point-in-Time Recovery (PITR) for Amazon DynamoDB allows you to restore your DynamoDB table data with a single click. This gives you a fail-safe when digging into data breaches and data corruption attacks, and is a requirement for PCI-DSS, CIS, and ISO27001.

To create and access DynamoDB backups, however, you need to enable PITR, which provides continuous backups that can be controlled using various programmatic parameters.

Fix this misconfiguration by configuring the point_in_time_recovery block on your DynamoDB table:

resource "aws_dynamodb_table" "basic-dynamodb-table" {
  name = "GameScores"
  billing_mode = "PROVISIONED"
  read_capacity = 20
  write_capacity = 20
  hash_key = "UserId"
  range_key = "GameTitle"
+ point_in_time_recovery {
+   enabled = true
+ }
}

Ensure ECR image scanning on push is enabled

Amazon ECR supports scanning your container images for vulnerabilities using the Common Vulnerabilities and Exposures (CVEs) database. It is recommended that you enable scanning on every push, to help identify bad images and the specific tags that introduced vulnerabilities into the image.

Enabling ECR scanning on every push is required as part of ISO27001 compliance. To fix this at build time, set scan_on_push to true in your Terraform ECR resource:

resource "aws_ecr_repository" "foo" {
  name = "bar"
  image_tag_mutability = "MUTABLE"
  image_scanning_configuration {
+   scan_on_push = true
  }
}
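In version 4 and later of the AWS provider, scan-on-push can also be enforced registry-wide instead of per repository; a sketch, assuming basic scanning:

```hcl
# Registry-level scanning configuration (applies to all matching repositories)
resource "aws_ecr_registry_scanning_configuration" "example" {
  scan_type = "BASIC"

  rule {
    scan_frequency = "SCAN_ON_PUSH"
    repository_filter {
      filter      = "*"
      filter_type = "WILDCARD"
    }
  }
}
```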

Ensure all data stored in the SQS Queue is encrypted

Amazon Simple Queue Service (Amazon SQS) supports encrypting the messages sent through each queue. This adds another layer of data access management, since access to message contents can be denied based on encryption, and protects sensitive data at rest.

If you operate in a regulated industry subject to frameworks such as HIPAA for healthcare, PCI DSS for finance, or FedRAMP for government, you need to ensure that sensitive messages passed through this service are encrypted at rest.

Avoid this misconfiguration by specifying, in the SQS configuration block, the KMS key that SQS should use to encrypt the data.

In Terraform, also set the length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again:

resource "aws_sqs_queue" "terraform_queue" {
  name = "terraform-example-queue"
+ kms_master_key_id = "alias/aws/sqs"
+ kms_data_key_reuse_period_seconds = 300
}
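The example above uses the AWS-managed alias/aws/sqs key. If you need control over rotation and access policy, a customer-managed CMK can be used instead; a sketch:

```hcl
# Customer-managed CMK with rotation enabled (see the KMS section above)
resource "aws_kms_key" "sqs_key" {
  description         = "CMK for SQS queue encryption"
  enable_key_rotation = true
}

resource "aws_sqs_queue" "encrypted_queue" {
  name                              = "terraform-example-queue"
  kms_master_key_id                 = aws_kms_key.sqs_key.arn
  kms_data_key_reuse_period_seconds = 300
}
```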

Conclusion

As you can see, fixing IaC misconfigurations often entails adding simple missing configuration arguments to already-existing blocks, or changing incorrect values to the compliant state. Making those small changes, however, can have a significant impact as they will inform future deployments.

By enforcing common security policies in IaC templates and modules at build-time, you can fix existing issues and prevent new misconfigurations from being deployed. It’s also a great way to save time hunting down issues in production that keep resurfacing when new infrastructure gets spun up. That’s why we at Bridgecrew believe IaC is a must for organizations with a growing presence in the cloud.

This post originally appeared on The New Stack.