Securing Egress Traffic for HITRUST Compliance


Several of our clients have struggled in the past to meet HITRUST requirements, specifically around DLP and domain-based whitelisting of egress traffic in AWS.  Recently we implemented a great set of controls for a healthcare SaaS company.  I'll explain some of the challenges as well as the solution we came up with.

Problem 1:  NAT doesn’t do enough

Most customers route egress traffic through an AWS NAT gateway.  This is a great way to build a low-maintenance network infrastructure that lets workloads that need to initiate outbound connections live in the safety of private subnets.  But a NAT gateway, as you know, is a layer 3 device: we can control network traffic by IP address and subnet, but not by domain name.  We need to get higher up the stack to get the functionality HITRUST requires.
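To see why, consider the VPC-native controls that sit around a NAT gateway: security groups, network ACLs, and route tables all match on IP protocols, ports, and CIDR blocks.  A hypothetical Terraform rule (the resource names here are illustrative, not from a real environment) makes the limitation obvious:

# A security group egress rule can only match protocol, ports, and CIDR
# blocks -- there is no way to say "allow *.example.com" at this layer.
resource "aws_security_group_rule" "egress_https" {
  type              = "egress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["203.0.113.0/24"]          # an IP range, not a domain name
  security_group_id = aws_security_group.app.id   # illustrative reference
}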

Problem 2:  Network appliances are challenging to manage

Since network appliances on AWS run on EC2 instances, each of these appliances is another server you have to manage.  They are also probably unique servers in your environment, and your existing team might not have the specific expertise to run them.  We wanted a solution that was as easy for our customers to manage as NAT Gateway, with no chance of a failed instance causing a production outage.

Solution:  Automated, Scalable Squid Proxy

Squid is a well-documented, well-tested way to proxy egress traffic from an AWS VPC.  To make the solution scalable and highly available, we used an autoscaling group behind an internal ALB.  This gives us a cost-effective proxy that can scale up when necessary but, most importantly, is always up.  By occasionally cycling the instances in the group, we know we have a resilient and dependable egress solution.
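As a rough sketch of that shape (not our actual module; the AMI variable, subnet list, and target group ARN are placeholders), the Terraform boils down to a launch template plus an autoscaling group, with max_instance_lifetime handling the periodic instance cycling:

resource "aws_launch_template" "squid" {
  name_prefix   = "egress-squid-"
  image_id      = var.squid_ami_id # hypothetical variable, e.g. an Amazon Linux 2 AMI
  instance_type = "t3.small"
  user_data     = base64encode(local.squid_userdata) # whitelist refresh script, sketched below
}

resource "aws_autoscaling_group" "squid" {
  name                = "egress-squid"
  min_size            = 2
  max_size            = 4
  vpc_zone_identifier = var.private_subnet_ids # hypothetical variable

  launch_template {
    id      = aws_launch_template.squid.id
    version = "$Latest"
  }

  # Register with the internal load balancer and let its health checks
  # replace any instance that stops proxying.
  target_group_arns = [var.proxy_target_group_arn] # hypothetical variable
  health_check_type = "ELB"

  # Quietly recycle every instance within a week so no proxy gets stale.
  max_instance_lifetime = 604800
}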

But this still doesn't solve the problem that our customer isn't a Squid expert, and doesn't really want to become one.  So we needed to make the domain-based whitelist easy to manage.  We solved this with a little userdata that regularly pulls the list down from an S3 bucket, reconfigures the existing instances, and reloads the Squid config.
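Here is roughly what that userdata looks like, written as a Terraform local so it can feed the launch template above.  Treat it as a sketch: the bucket name and file paths are placeholders, and it assumes the instances have an IAM role allowing s3:GetObject on the whitelist object.

locals {
  squid_userdata = <<-EOT
    #!/bin/bash
    set -euo pipefail
    yum install -y squid

    # Minimal explicit-proxy config: only whitelisted destination domains get out.
    cat <<'CONF' > /etc/squid/squid.conf
    http_port 3128
    acl allowed_domains dstdomain "/etc/squid/whitelist.txt"
    http_access allow allowed_domains
    http_access deny all
    CONF

    # Refresh script: pull the whitelist (one domain per line) from S3 and
    # reload Squid only when it changes.
    cat <<'SCRIPT' > /usr/local/bin/refresh-whitelist
    #!/bin/bash
    aws s3 cp s3://example-egress-config/whitelist.txt /tmp/whitelist.txt
    if ! cmp -s /tmp/whitelist.txt /etc/squid/whitelist.txt; then
      mv /tmp/whitelist.txt /etc/squid/whitelist.txt
      squid -k reconfigure
    fi
    SCRIPT
    chmod +x /usr/local/bin/refresh-whitelist

    # Seed the whitelist, start Squid, then keep the list fresh every few minutes.
    aws s3 cp s3://example-egress-config/whitelist.txt /etc/squid/whitelist.txt
    systemctl enable --now squid
    echo '*/5 * * * * root /usr/local/bin/refresh-whitelist' > /etc/cron.d/refresh-whitelist
  EOT
}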

What about Infrastructure as Code?

We always preach infrastructure as code, and we practiced what we preach here.  All of the configuration lives in userdata and Terraform.  And since we've done this more than once, our client got to benefit from m-squid, our pre-written Squid module for Terraform, which cut the cost and time of this project by about 75%.
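For a sense of what the client actually maintains, consuming the module comes down to a short root-module block.  The input names below are illustrative only, not m-squid's real interface:

module "egress_proxy" {
  source = "./modules/m-squid" # illustrative path

  vpc_id             = var.vpc_id
  private_subnet_ids = var.private_subnet_ids
  whitelist_bucket   = "example-egress-config" # S3 bucket holding whitelist.txt
}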

Want to try it out?

Give us a ring; we're happy to share more or help you solve a similar problem!