Peer Review – Using Terraform and Lambda to automate VPC Peering


Terraform as a Foundation

Terraform is really great. As someone who has spent the better part of the past decade codifying cloud infrastructure, there is an inexorable groan from my vicinity when someone insists we use anything else.

However, great as it is, even Terraform has certain limitations: enter cross-account and cross-region VPC peering in Amazon Web Services.

Late last year, Foghorn announced a port of our VPC-in-a-Box to a Terraform module. In just three lines of code, our partners can deploy a best-practices networking solution to any region Amazon provides. It comes with features like highly available Network Address Translation that scales as your network spans more Availability Zones. As our partners’ needs change, our module is flexible, providing a wide array of options so that your cloud network adapts as your project grows.

Add Multi-Region

But what happens when we need multi-region connectivity or cross-account access? In November, Amazon introduced support for inter-region VPC peering. About a month later, it was added to Terraform. That’s when the engineering team at Foghorn set out to build a simple, one-command module to connect all these pieces.

Setting up the peering connection itself is relatively simple: one resource for the requester’s side, one resource for the accepter’s side, and you’re done. Except that when you go to actually connect to anything, you’ll find the last piece of the puzzle is setting up route tables. Happily, our VPC module will give you a list of route tables; but due to a long-standing issue in how Terraform’s core performs interpolations, Terraform can’t use the length of a computed list to determine how many routes to build.
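
For a sense of what that first half looks like, here is a minimal sketch of the two sides of a cross-account, cross-region peering connection in the Terraform 0.11-era syntax this post uses. The variable names, the peer region, and the commented-out route are illustrative rather than our module’s actual internals:

# Requester's half of the connection (variable names are illustrative).
resource "aws_vpc_peering_connection" "requester" {
  provider      = "aws.peer1"
  vpc_id        = "${var.vpc1}"
  peer_vpc_id   = "${var.vpc2}"
  peer_owner_id = "${var.peer_account_id}"
  peer_region   = "us-west-2"
}

# Accepter's half, accepted from the other account and region.
resource "aws_vpc_peering_connection_accepter" "accepter" {
  provider                  = "aws.peer2"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.requester.id}"
  auto_accept               = true
}

# What you would like to write next, but can't: the route table list is a
# computed value, so Terraform 0.11 can't use its length to drive count.
# This is exactly the gap the Lambda function fills.
# resource "aws_route" "peer" {
#   count                     = "${length(module.aws_vpc.route_tables)}"
#   route_table_id            = "${element(module.aws_vpc.route_tables, count.index)}"
#   destination_cidr_block    = "10.1.0.0/16"
#   vpc_peering_connection_id = "${aws_vpc_peering_connection.requester.id}"
# }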

Lambda to the Rescue!

So instead of relying on Terraform directly to manage the routes in each table, our module uses a Lambda function. When calling our module, you merely need to pass it two VPC IDs and two AWS providers, or, if the VPCs are in the same account and region, the same provider twice. On a terraform apply, the module goes to each VPC, looks up the relevant details, and builds out the peering connection. It then creates a CloudWatch Event in the same region as each VPC and sets it to trigger a Lambda function every two minutes. That function retrieves the list of route tables and checks whether the relevant peering connection is correctly set up. After checking periodically for 10 minutes, in case some route tables were being created concurrently with the peering connection, the function disables its own CloudWatch event.
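
To make that wiring concrete, here is a rough sketch of how a scheduled CloudWatch Event rule can trigger a Lambda function every two minutes in one region. The resource names are hypothetical, and the aws_lambda_function.route_check they reference is assumed to be defined elsewhere:

# Fire the route-checking Lambda every two minutes in this region.
resource "aws_cloudwatch_event_rule" "route_check" {
  provider            = "aws.peer1"
  name                = "peering-route-check"
  schedule_expression = "rate(2 minutes)"
}

# Point the rule at the Lambda function (defined elsewhere in the module).
resource "aws_cloudwatch_event_target" "route_check" {
  provider = "aws.peer1"
  rule     = "${aws_cloudwatch_event_rule.route_check.name}"
  arn      = "${aws_lambda_function.route_check.arn}"
}

# Allow CloudWatch Events to invoke the function.
resource "aws_lambda_permission" "route_check" {
  provider      = "aws.peer1"
  statement_id  = "AllowCloudWatchInvoke"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.route_check.function_name}"
  principal     = "events.amazonaws.com"
  source_arn    = "${aws_cloudwatch_event_rule.route_check.arn}"
}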

By utilizing Lambda, we can deploy this production-ready network with just the following:

provider "aws" {
region  = "us-east-1"
profile = "production1"
}

provider "aws" {
alias   = "dos"
region  = "us-west-2"
  profile = "production2”
}

module "aws_vpc" {
  source ="git::ssh://source.fogops.io/v1/repos/m-vpc?ref=v0.2.2"
}

module "aws_vpc2" {
  providers {
    aws = "aws.dos"
  }
  source ="git::ssh://source.fogops.io/v1/repos/m-vpc?ref=v0.2.2"
  cidr_block = "10.1.0.0/16"
}

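# Both aliased providers are passed to the peering module so it can look up
# and configure resources in each account and region.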
module "peering" {
  providers {
    aws.peer1 = "aws"
    aws.peer2 = "aws.dos"
  }
  source = "git::ssh://source.fogops.io/v1/repos/m-peering?ref=v0.1.1"
  vpc1 = "${module.aws_vpc.id}"
  vpc2 = "${module.aws_vpc2.id}"
}

And with that, you have flexible, best-practice, peered VPCs across two regions and two accounts. This provides an excellent jumping-off point for things like building a highly available application, monitoring your networks from a central account, or shifting your workloads to spot markets with better pricing throughout the day.

If you have any questions on this or our other work with Terraform, please don’t hesitate to get in touch. We love feature requests!
