AWS Databases: New Auto Scaling for Amazon DynamoDB

Auto Scaling for Amazon DynamoDB

Amazon DynamoDB is a very popular NoSQL database service. Among AWS databases, the Flux7 AWS consultants like DynamoDB for its fast, reliable performance, especially for real-time applications that need low-latency access to data, e.g. commerce, big data analytics, and IoT applications. Moreover, as a managed service, AWS takes care of the administrative burden (hardware provisioning, setup and configuration, replication, software patching, etc.) for you. While this database is well loved, one feature that has been in high demand, and that we are happy to say AWS just released, is Auto Scaling for DynamoDB. We are excited to see AWS deliver Auto Scaling to DynamoDB customers: it makes administration and capacity management even easier and helps maximize application availability, both of which translate into cost savings.

This was a much-requested feature because, without auto scaling, you had to build a workaround with Lambda functions. In that workaround, when consumed read/write throughput approached the provisioned capacity, a CloudWatch alarm fired and triggered a Lambda function, which then tuned the provisioned capacity of the DynamoDB table to meet the throughput demand. Now the auto scaling feature built into DynamoDB responds to those CloudWatch alarms for you, which saves administrative work, removes room for error, and streamlines the process.

Application Auto Scaling is the AWS service that scales DynamoDB throughput up and down. Enabling auto scaling for a DynamoDB table involves creating three additional resources:

  1. An IAM role that gives the Application Auto Scaling service the specific permissions it needs (a minimal sketch follows this list)
  2. A scalable target
  3. A scaling policy
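
For reference, a minimal CloudFormation sketch of the first resource, the IAM role, might look like the following; the resource and policy names are illustrative rather than taken from our template, and the permissions shown are the ones the Application Auto Scaling service needs to adjust DynamoDB capacity and manage its CloudWatch alarms:

  # Service role that Application Auto Scaling assumes to manage DynamoDB capacity
  ScalingRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: application-autoscaling.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: DynamoDBAutoScalingPolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - dynamodb:DescribeTable
                  - dynamodb:UpdateTable
                  - cloudwatch:PutMetricAlarm
                  - cloudwatch:DescribeAlarms
                  - cloudwatch:GetMetricStatistics
                  - cloudwatch:SetAlarmState
                  - cloudwatch:DeleteAlarms
                Resource: "*"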

As strong supporters of AWS automation, we created a simple CloudFormation template to enable auto scaling for DynamoDB. Here is a snippet of the template which creates a scalable target and a scaling policy to scale DynamoDB read throughput.

 
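Below is a minimal sketch of those two resources as they might appear in the template's Resources section. The resource names, the table reference (MyDynamoDBTable), the capacity bounds, and the 70 percent target value are illustrative rather than copied from our template; the full source is in the repository linked below.

  # Scalable target: registers the table's read capacity with Application Auto Scaling
  ReadCapacityScalableTarget:
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    Properties:
      ServiceNamespace: dynamodb
      ResourceId: !Sub table/${MyDynamoDBTable}    # MyDynamoDBTable is the DynamoDB table resource in the same template
      ScalableDimension: dynamodb:table:ReadCapacityUnits
      MinCapacity: 5
      MaxCapacity: 100
      RoleARN: !GetAtt ScalingRole.Arn             # the IAM role sketched above

  # Target-tracking policy: keeps consumed reads near 70% of provisioned read capacity
  ReadCapacityScalingPolicy:
    Type: AWS::ApplicationAutoScaling::ScalingPolicy
    Properties:
      PolicyName: ReadAutoScalingPolicy
      PolicyType: TargetTrackingScaling
      ScalingTargetId: !Ref ReadCapacityScalableTarget
      TargetTrackingScalingPolicyConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: DynamoDBReadCapacityUtilization
        TargetValue: 70.0
        ScaleInCooldown: 60
        ScaleOutCooldown: 60

With a target value of 70, Application Auto Scaling raises or lowers provisioned read capacity to keep consumed read capacity at roughly 70 percent of what is provisioned, staying within the minimum and maximum bounds; a parallel pair of resources handles write capacity.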

For the full source code, please refer to our GitHub repository here: https://github.com/Flux7Labs/blog-code-samples/tree/master/dynamodb_autoscaling

Now, with auto scaling for DynamoDB, we can automate capacity management for tables and global secondary indexes: we set minimum and maximum capacity along with a target utilization, and DynamoDB, working with CloudWatch, monitors consumed throughput against that target. When an alarm is triggered, provisioned capacity is automatically adjusted up or down as needed, within the bounds we set. Note that AWS reports that auto scaling is on by default for all new tables and indexes created through the console; you will need to configure it for existing ones.
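
The same pattern covers a global secondary index; only the ResourceId and ScalableDimension change. In a sketch like the one above (the index name my-gsi is hypothetical), the scalable target for the index's read capacity would look like this, paired with its own target-tracking scaling policy:

  # Scalable target for a global secondary index's read capacity
  IndexReadScalableTarget:
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    Properties:
      ServiceNamespace: dynamodb
      ResourceId: !Sub table/${MyDynamoDBTable}/index/my-gsi
      ScalableDimension: dynamodb:index:ReadCapacityUnits
      MinCapacity: 5
      MaxCapacity: 100
      RoleARN: !GetAtt ScalingRole.Arn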

This new capability has saved several of our customers significant time and resources. One in particular, a marketing services group, had to engineer a complex solution before auto scaling was available. They used AWS ElastiCache to create, manage, and scale a multi-level cache environment in AWS that could absorb variance in traffic: ElastiCache served as the primary store that the application read from and where most data was retained, while data was written through to DynamoDB at a slower rate for durable storage. Now, with DynamoDB auto scaling, they can significantly reduce the complexity of that solution, decreasing their DynamoDB costs and bandwidth needs in the process.

Did you find this useful? If so, stay on top of future AWS news, tips, tricks, and best practices by clicking the button below and signing up to receive our blog in your mailbox.

Sign Me Up!

September 19, 2017 / Scalability, Autoscaling, Amazon DynamoDB

About the Author

Flux7 Labs
