Deploying Scout Suite Automation to AWS using Terraform

Airman
6 min read · Dec 16, 2023

This post is part 3 of a series.

Updates

2024-02-04: Code in the repository has been updated to use SNS instead of SES.

Background

In Part 3 we will explore the same automation as in Part 2, but will use Terraform as our Infrastructure as Code (IaC) tool.

Terraform

HashiCorp Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. Terraform uses HashiCorp Configuration Language (HCL) to define infrastructure. This post is not intended to be an introduction to Terraform; if you’re new to it, check out the official Terraform tutorials.

Architecture

A recap of the architecture:

  • A Scout Suite scanning Lambda that uses a container image pulled from Elastic Container Registry (ECR).
  • An EventBridge rule invokes the scanning Lambda daily based on a cron schedule.
  • Once the scan is completed, the scanning Lambda uploads the scan report to S3.
  • On object upload events, S3 is configured to invoke a notifier Lambda. The notifier Lambda is a simple Python script that sends an email notification using Amazon Simple Email Service (SES).

Terraform Code

Full source code is available at: https://github.com/airman604/aws-scan-automation. Below are explanations of the Terraform snippets that deploy the automation.

The Terraform code is in the terraform directory and is broken down into the following files:

  • main.tf — defines Terraform providers and some variables.
  • s3_bucket.tf — defines the S3 bucket and configures event notifications on object upload to trigger the notification Lambda.
  • scanner.tf — defines the Scout Suite scanner Lambda, including the container image.
  • schedule.tf — defines the EventBridge rule that triggers daily.
  • notifications.tf — defines the SES email identity and the Lambda that sends email notifications.
  • variables.tf — defines the notification_recipient variable used as the email address for notifications. You can create a terraform.tfvars file to provide this value so you don’t need to enter it each time you run Terraform.

I used Terraform modules wherever possible to simplify the code.

S3 Bucket

The S3 bucket will be used to store scan reports:

# bucket to store scan reports
resource "aws_s3_bucket" "s3_report_bucket" {
  # bucket name - random identifier will be added at the end
  bucket_prefix = "scout-scan-reports-"
}

Scanning Lambda

The Scout Suite Lambda is created from a container image built from the ../lambda-scout directory.

# ScoutSuite Docker image
module "scout_docker_image" {
  source = "terraform-aws-modules/lambda/aws//modules/docker-build"

  # create ECR repo and push image
  create_ecr_repo = true
  ecr_repo        = "scout-lambda"

  use_image_tag = false # If false, sha of the image will be used as tag

  # path with Dockerfile and context to build the image
  source_path = "../lambda-scout"
  # specify platform to ensure portability of Terraform code
  platform = "linux/amd64"
}

Similar to Part 1, we set the memory size to 2 GB and the maximum execution time to 15 minutes. The S3 bucket name and the path in the bucket where reports are saved are passed to the Lambda function as environment variables.

The scanning Lambda function needs permissions to run scans. We attach the SecurityAudit and ReadOnlyAccess AWS managed policies and allow write access to the S3 bucket created earlier.

# ScoutSuite Lambda Function that will execute the scans
module "scout_scan_lambda" {
  # use lambda Terraform module
  source = "terraform-aws-modules/lambda/aws"

  function_name = "scout-scanner"

  # allow schedule-based triggers
  allowed_triggers = {
    ScoutScanEventBridgeRule = {
      principal  = "events.amazonaws.com"
      source_arn = module.scout_scan_schedule.eventbridge_rule_arns["scout_scan"]
    }
  }
  # get errors without this, see https://github.com/terraform-aws-modules/terraform-aws-lambda/issues/36
  create_current_version_allowed_triggers = false

  # don't need to create the package - using container image defined above
  create_package = false
  package_type   = "Image"
  # must match image architecture!
  architectures = ["x86_64"]
  image_uri     = module.scout_docker_image.image_uri

  # add permissions for ScoutSuite scans
  attach_policies = true
  policies = [
    "arn:aws:iam::aws:policy/ReadOnlyAccess",
    "arn:aws:iam::aws:policy/SecurityAudit"
  ]
  number_of_policies = 2

  # add access to S3 bucket with reports
  attach_policy_statements = true
  policy_statements = {
    s3_write = {
      effect    = "Allow"
      actions   = ["s3:PutObject"]
      resources = ["${aws_s3_bucket.s3_report_bucket.arn}/${local.s3_prefix}/*"]
    }
  }

  # pass parameters to Lambda for report upload location
  environment_variables = {
    S3_BUCKET = aws_s3_bucket.s3_report_bucket.id
    S3_PREFIX = local.s3_prefix
  }

  memory_size = 2048
  # max possible Lambda execution time 15 min
  timeout = 15 * 60
}

EventBridge Rule

An Amazon EventBridge rule is defined to trigger daily at 10:00 UTC and invoke the scanning Lambda function.

# EventBridge schedule
module "scout_scan_schedule" {
  # use eventbridge Terraform module
  source = "terraform-aws-modules/eventbridge/aws"

  # default bus must be used for scheduled events
  create_bus = false

  rules = {
    scout_scan = {
      description = "scout-scan-daily"
      # daily 10am - time is in UTC
      schedule_expression = "cron(0 10 * * ? *)"
    }
  }

  # target is scanner Lambda function
  targets = {
    scout_scan = [
      {
        name = "scout-scan-daily"
        arn  = module.scout_scan_lambda.lambda_function_arn
      }
    ]
  }
}

Notification Lambda

The notification Lambda is invoked when a new report is uploaded to S3. This invocation is configured in s3_bucket.tf:

# configure notification Lambda invocation when new report is uploaded to the S3 bucket
# Lambda resources are defined in notifications.tf
module "report_notifications" {
  # use notification sub-module of s3-bucket module
  source = "terraform-aws-modules/s3-bucket/aws//modules/notification"

  # bucket defined earlier
  bucket = aws_s3_bucket.s3_report_bucket.id

  # notification target is Lambda that will send emails using SES
  lambda_notifications = {
    scout_report_notifications = {
      function_arn  = module.notification_lambda.lambda_function_arn
      function_name = module.notification_lambda.lambda_function_name
      events        = ["s3:ObjectCreated:*"]
      filter_prefix = local.s3_prefix
      filter_suffix = ".tar.gz"
    }
  }
}

notifications.tf defines:

  • The SES email identity (i.e. a verified email address) to use as both the sender and the recipient for email notifications. When the code is deployed, SES will send an email to verify that you have access to the email address. To receive scan notifications you will need to confirm the address. The notification_recipient variable from variables.tf is used in the code for the email address.
  • The notification Lambda, built from Python code in the ../lambda-notifications directory. The email address is passed to the Lambda code as an environment variable.
  • IAM permissions for the Lambda to send emails using SES and read reports from S3.
# SES identity (email) to send notifications to
resource "aws_sesv2_email_identity" "notification_recipient" {
  email_identity = var.notification_recipient
}

# Lambda that will send SES notifications when new scan reports are uploaded to S3
module "notification_lambda" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "scout-notifications"
  handler       = "index.handler"
  runtime       = "python3.11"

  # Lambda function source code location
  source_path = "../lambda-notifications"

  # add permissions for generation of S3 pre-signed URLs and sending emails
  attach_policy_statements = true
  policy_statements = {
    s3_read = {
      effect = "Allow"
      actions = [
        "s3:GetObject"
      ]
      resources = ["${aws_s3_bucket.s3_report_bucket.arn}/${local.s3_prefix}/*"]
    },
    ses_send = {
      effect = "Allow"
      actions = [
        "ses:SendEmail",
        "ses:SendRawEmail"
      ]
      resources = [
        aws_sesv2_email_identity.notification_recipient.arn
      ]
    }
  }

  # Lambda gets sender and recipient email addresses from environment
  environment_variables = {
    SENDER    = var.notification_recipient
    RECIPIENT = var.notification_recipient
  }
}
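The Python side of the notifier is not shown in the post. A minimal sketch of what the handler in index.py might look like (the parse_s3_event helper and message wording are assumptions for illustration, not the repository’s actual code):

```python
import os
import urllib.parse


def parse_s3_event(event):
    """Extract (bucket, key) from an S3 ObjectCreated event record."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # S3 URL-encodes object keys in event payloads
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    return bucket, key


def handler(event, context):
    """Email a pre-signed link to the newly uploaded report via SES."""
    import boto3  # imported lazily so the parser above is testable without AWS

    sender = os.environ["SENDER"]
    recipient = os.environ["RECIPIENT"]
    bucket, key = parse_s3_event(event)

    # pre-signed URL lets the recipient download the report without AWS credentials
    s3 = boto3.client("s3")
    url = s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=24 * 3600
    )

    ses = boto3.client("ses")
    ses.send_email(
        Source=sender,
        Destination={"ToAddresses": [recipient]},
        Message={
            "Subject": {"Data": f"New Scout Suite report: {key}"},
            "Body": {"Text": {"Data": f"A new scan report is available:\n{url}"}},
        },
    )
    return {"statusCode": 200}
```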

Deployment

The full code is available here: https://github.com/airman604/aws-scan-automation. All of the Terraform code is in the terraform directory.

IMPORTANT:

  • Terraform stores its state locally by default. It is a good idea to store the state remotely in durable storage, or to use Terraform Enterprise to manage state (and more). Read about Terraform state in Gruntwork’s How to manage Terraform state.
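As one example of remote state, an S3 backend can be declared in a terraform block like the sketch below (the bucket, table, and region names are placeholders you would create and choose yourself):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # pre-created state bucket (placeholder)
    key            = "aws-scan-automation/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # optional, enables state locking
    encrypt        = true
  }
}
```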

Deploying:

  • Install pre-requisites: AWS CLI, Terraform, Docker.
  • Configure your AWS CLI credentials and default region (aws configure).
  • Clone the repository.
  • Change to the terraform sub-directory of the repository.
  • Download the required providers and modules using terraform init.
  • Deploy the stack using terraform apply.
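The steps above boil down to a short command sequence (the -var flag is optional; without it, or a terraform.tfvars file, Terraform will prompt for the value):

```shell
git clone https://github.com/airman604/aws-scan-automation.git
cd aws-scan-automation/terraform
terraform init
terraform plan    # review the changes before applying
terraform apply -var="notification_recipient=you@example.com"
```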

Check out README.md in the repository for more details.


Airman

Random rumblings about #InfoSec. The opinions expressed here are my own and not necessarily those of my employer.