The DynamoDB Design Dilemma

For a recent project, our customer wanted to build a website to show the differences between two widgets and their corresponding data sets. We knew the traffic to the website would be low, with spikes at certain times of the day, but the site needed to be highly available within a region while keeping costs low. Additionally, we expected the data sets to be comprehensive, containing over 20,000 items, with each item amounting to 350 KB or greater.

To keep costs low, we designed the serverless solution below, containing the following:

      • Application Load Balancer (ALB)
        • Selection: ALB was selected over API Gateway because we already had internal compliance controls around ALBs, but not around API Gateways.
      • Lambda
      • DynamoDB
        • Selection: DynamoDB was selected over a relational database because each widget type could have different data in its data set.
      • Schema (an example item is sketched after this list)
          • Partition Key: widgetid (example: wig-123, wig-234)
          • Sort key: region (example: us-east-1, us-east-2)
          • Other Data: widgettype (example: standard, professional)
          • Other Data: description
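
For illustration, here is a minimal boto3 sketch of one item under this original schema. The table name widgets and the attribute values are assumptions for the example, not details from the project:

```python
import boto3

# Hypothetical table name; the attribute names match the schema above.
table = boto3.resource("dynamodb").Table("widgets")

table.put_item(
    Item={
        "widgetid": "wig-123",             # partition key
        "region": "us-east-1",             # sort key
        "widgettype": "standard",
        "description": "Standard widget",  # hypothetical value
        # ...plus the ~350 KB data set payload stored on the item itself,
        # which is what caused the problems described below.
    }
)
```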

Website:

[Screenshot: Widget Diff Tool]

Design:

[Diagram: dynamodb-design]

Roadblocks:

As we started to build this out, we realized we had a few concerns with our DynamoDB design:

  • Data Size: The initial data sets for the widgets were already 350 KB, with the potential to grow over time, but DynamoDB has an item size limit of 400 KB.
  • Read Unit Consumption: 
    • We needed to be able to perform the following queries on the data sets:
      • A list of all widgets located in a region that are of a certain widgettype
      • A specific widget based on the widgetid
    • If we queried for a single piece of metadata in a specific item, such as the description, DynamoDB would still read the whole 350 KB item, consuming 88 read units (a read unit covers only 4 KB, so ceil(350/4) = 88 read units).
    • Our schema was forcing us into the scan operation instead of queries. Scans are not efficient when you have thousands of items in a table, since they consume one read unit per 4 KB of every item scanned. For only 200 items, each being 350 KB, a single scan would consume 17,600 read units (ceil(350/4) = 88 read units × 200 items = 17,600 read units). The sketch after this list works through the arithmetic.
    • Thus, given the cost limitations, neither queries nor scans were viable options for search.
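
A quick sketch of that read unit arithmetic (this assumes strongly consistent reads at one read unit per 4 KB; eventually consistent reads would halve these numbers):

```python
import math

ITEM_SIZE_KB = 350   # size of one widget item
READ_UNIT_KB = 4     # a strongly consistent read unit covers 4 KB
ITEM_COUNT = 200     # items touched by a scan

rcus_per_item = math.ceil(ITEM_SIZE_KB / READ_UNIT_KB)
print(rcus_per_item)               # 88    -> reading one full item
print(rcus_per_item * ITEM_COUNT)  # 17600 -> scanning 200 such items
```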

The Redesign

Data Size: The solution to the size limitation within DynamoDB was to store the majority of the data in a unique S3 object (one per widget) rather than storing the whole data set within the DynamoDB table. To programmatically connect the DynamoDB item with the S3 object, the DynamoDB item pointed to the S3 location. This allowed the data inside S3 to grow to hundreds of MB (or up to 5 TB, the S3 maximum object size) without affecting the DynamoDB item size. Additionally, this helped with read unit consumption, because each item shrank from 350 KB to roughly 4 KB.
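A minimal sketch of that offload pattern, still using the original keys. The bucket name widget-datasets and the save_widget function are hypothetical:

```python
import json
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("widgets")  # hypothetical name

def save_widget(widget_id, region, widget_type, description, dataset):
    key = f"{widget_id}/{region}.json"
    # The full data set (which can grow to hundreds of MB) lives in S3...
    s3.put_object(Bucket="widget-datasets", Key=key, Body=json.dumps(dataset))
    # ...while the DynamoDB item stays small: metadata plus a pointer.
    table.put_item(Item={
        "widgetid": widget_id,
        "region": region,
        "widgettype": widget_type,
        "description": description,
        "s3_url": f"s3://widget-datasets/{key}",
    })
```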

Read Unit Consumption: The data size solution above lowered our read unit consumption, but we still needed to execute scan operations. Thus, we adjusted the partition and sort keys. Since the widget types and regions were static sets of options that did not change often, we were able to combine them into a single partition key. With this change, every query we needed could be executed with the Query operation rather than Scan. Ultimately, this helped DynamoDB scale better and avoid hot partition key issues.
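With region and widgettype combined into the partition key, both access patterns become Query calls. A sketch under the same assumed table name:

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("widgets")

# All widgets of a certain widgettype in a region: one partition, one Query.
standard_widgets = table.query(
    KeyConditionExpression=Key("region_widgettype").eq("us-east-1_standard")
)["Items"]

# A specific widget: the same partition key plus the sort key.
one_widget = table.query(
    KeyConditionExpression=Key("region_widgettype").eq("us-east-1_standard")
    & Key("widgets_id").eq("wig-123")
)["Items"]
```

One consequence of the composite key is that fetching a specific widget now requires knowing its region and widget type in addition to its ID.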

Final Design

After all of the changes, the final solution was the following (the read path is sketched after the list):

  • ALB
  • Lambda
  • DynamoDB 
    • Partition Key: region_widgettype (example: us-east-1_standard, us-east-2_standard, us-east-1_professional, us-east-2_professional)
    • Sort key: widgets_id (example: wig-123, wig-234)
    • Other Data: s3_url
    • Other Data: description
  • S3 Bucket with objects
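
A sketch of the read path a Lambda handler could follow under this final design: one cheap Query for the small item, then one S3 GetObject for the full data set it points to (table and bucket names as in the earlier sketches; load_widget is hypothetical):

```python
import json
import boto3
from boto3.dynamodb.conditions import Key

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("widgets")

def load_widget(region_widgettype: str, widget_id: str) -> dict:
    # The item is now only a few KB, so this Query costs a handful of RCUs.
    item = table.query(
        KeyConditionExpression=Key("region_widgettype").eq(region_widgettype)
        & Key("widgets_id").eq(widget_id)
    )["Items"][0]
    # Follow the pointer to the full data set in S3 (Python 3.9+ for removeprefix).
    bucket, _, key = item["s3_url"].removeprefix("s3://").partition("/")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    return {"metadata": item, "dataset": json.loads(body)}
```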

Finally, this design, compared with the original:

  • Reduced the cost of DynamoDB data/operations by over 90%
  • Decreased web page interaction/load time by over 60%
  • Enabled large data sets without fear of running into size limitations

[Diagram: DynamoDB final design]