AWS re:Invent 2016 and Container-Powered Migrations

At the re:Invent conference in Las Vegas last week, we had the opportunity to present a Flux7-powered case study of a successful containerized migration to AWS. As part of the session, “Getting Technically Inspired by Container Powered Migrations,” Flux7 CEO Aater Suleman shared Flux7’s recent work with Rent-A-Center to perform a Hybris migration from their datacenter to AWS.

This session was a technical journey through application migration and refactoring using container technologies. Mandus Momberg, AWS Partner Solutions Architect, led off by explaining how containers can help organizations bring repeatability, scalability, efficiency and agility to their migrations. Specifically, he addressed how containers are an ideal foundation for migration factories, allowing organizations to create a reproducible workflow that lets teams easily transition their applications into new environments.

Within the framework of a migration factory, AWS recommends businesses embrace two key concepts:

  • Business factory. The business factory is the overarching engine that drives the migration process for all teams. It governs every micro factory that your teams launch. Its orientation is business, not technical: it owns the dependency mapping and the control of every environment.
  • Micro factory. Each micro factory is independent of the others and should be tailored to one team’s specific migration, allowing that team to migrate as quickly as possible. Micro factories focus on the technical migration, not on business rules.

Factories can reuse building blocks and containers from other teams. Containers make this safe because each one starts from a fresh environment; there is no chance of inheriting stale or sensitive data from another team, because the container’s ephemeral state was destroyed when that team’s migration completed.

Container-Powered Migration in Action

Illustrating how these concepts came together for Rent-A-Center, Aater Suleman shared how Flux7 used Docker with Amazon ECS and Auto Scaling to meet the customer’s business objectives. Specifically, Flux7 helped Rent-A-Center migrate its ecommerce platform to AWS with several important stated goals.

With AWS, containers and a CI/CD pipeline already in use at Rent-A-Center, Flux7 started working with the RAC team to build a solution that would address all the requirements. The two teams worked to leverage as many AWS services as they could in order to maximize agility, speed and automation. They then containerized SAP Hybris, running it on top of Amazon ECS and using AWS WAF, Amazon CloudFront and Amazon Aurora.

As you can see in the diagram below, there are two ECS clusters, both of which span multiple Availability Zones. The underlying EC2 instances are part of an Auto Scaling group. As a result, the solution is fully automated, with failover of the underlying nodes in the cluster. Running on top of this substrate of EC2 instances are the Docker containers for each of the services, with auto scaling at each layer. The result: the containers could scale up or down for an individual service depending on the load.

In addition, if the number of containers grew to the point where the existing EC2 instances could no longer support them, the underlying Auto Scaling group would add instances to create room for additional containers. Similarly, both layers would scale down in coordination with each other. For more information about this auto scaling, please see our blog post with Rent-A-Center at AWS.
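
One common way to wire up the instance-layer half of this (not necessarily the exact mechanism used for Rent-A-Center, which is described in the Q&A below) is a CloudWatch alarm on the cluster’s CPUReservation metric that drives an Auto Scaling policy. A minimal boto3 sketch, with all names hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

CLUSTER = "hybris-prod"            # hypothetical ECS cluster name
ASG = "hybris-prod-ecs-nodes"      # hypothetical Auto Scaling group name

# Scale-out policy: add one EC2 instance to the cluster when triggered.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG,
    PolicyName="scale-out-on-reservation",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# Alarm on the cluster's committed (reserved) CPU rather than actual usage.
cloudwatch.put_metric_alarm(
    AlarmName="hybris-prod-cpu-reservation-high",
    Namespace="AWS/ECS",
    MetricName="CPUReservation",
    Dimensions=[{"Name": "ClusterName", "Value": CLUSTER}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=75.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```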

Last, the separation created by Docker containers makes the pipeline completely reusable at Rent-A-Center. In fact, many teams have since used it, because the configuration is externalized: it is carried with the application itself. The app carries the Dockerfile, and the software, the instructions to build it and its prerequisites all travel as part of that Dockerfile. As a result, Rent-A-Center has a pipeline to which you can add any app; the app will go through the same pipeline and be deployed on the same cluster with no changes needed. This approach effectively enables organizations to speed migrations and increase their agility, pushing multiple applications through once the pipeline is established.
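
Because each application carries its own Dockerfile, the shared build step of such a pipeline can stay completely generic. A minimal sketch of what that step might look like (registry and application names are hypothetical):

```python
import subprocess

def build_and_push(app_dir: str, registry: str, app_name: str, version: str) -> str:
    """Generic pipeline step: build whatever Dockerfile the app carries, then push."""
    tag = f"{registry}/{app_name}:{version}"
    # The app's own Dockerfile defines the software, its build steps and its
    # prerequisites, so this step needs no per-application logic at all.
    subprocess.run(["docker", "build", "-t", tag, app_dir], check=True)
    subprocess.run(["docker", "push", tag], check=True)
    return tag

# e.g. build_and_push(".", "123456789012.dkr.ecr.us-east-1.amazonaws.com",
#                     "storefront", "1.0.0")
```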

Questions

The presenters received several insightful questions that are not included in the recording, but we have answered them here for you:

Q: Where in the migration process do you convert to a container--before, during or after the migration?

A: Like every technical decision, the real answer is, ‘it depends.’ It depends on what exactly you are trying to achieve. The two critical considerations are your timeline and the availability of the right skills and resources. With a tight deadline, if you decouple the two tasks of preparing the applications and creating a landing zone in AWS for your services, you can start throwing applications at that landing zone very quickly. The reason is that you can work in parallel, with the application teams prepping their applications to be deployed onto the landing zone. Your application teams don’t need to worry about what the real landing zone looks like; what they have to worry about is whether they can get their containers up and running. Meanwhile, the DevOps team can focus on building the underlying substrate where the containers are going to run.

This approach can breed quick success as the development and DevOps teams work in tandem. The dev team doesn’t need the full infrastructure built out: because they are working in containers, they can test the environment locally very easily. If you are in a situation where the dev team has to do some work, containers are a great way to do it. You can containerize in parallel with the substrate being put together, and assembling the two at the end is very easy. In this way, containers really shine.

Q: Do you have use cases where you used Code* services?

A: At Flux7, we have numerous projects where we have used Code* services, such as building full end-to-end pipelines from CodeCommit. Now that AWS CloudFormation supports specifying that entire pipeline in a CloudFormation template, we are big fans of creating one-click AWS Service Catalog buttons: you click the button and your full CI/CD pipeline is set up for you, all the way from the repo to the ECS cluster. We use Code* services very heavily. (Please find additional Code* reading here.)
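
To illustrate the one-click idea, launching such a pipeline stack programmatically reduces to a single CloudFormation call. A sketch with a hypothetical template URL and parameters:

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Launch a pipeline stack from a template that defines the whole CI/CD chain
# (repo, build, deploy to ECS). Template URL and parameters are hypothetical.
cloudformation.create_stack(
    StackName="storefront-pipeline",
    TemplateURL="https://s3.amazonaws.com/example-bucket/pipeline.template",
    Parameters=[
        {"ParameterKey": "RepositoryName", "ParameterValue": "storefront"},
        {"ParameterKey": "ClusterName", "ParameterValue": "hybris-prod"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # the template creates IAM roles
)
```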

Q: How do you achieve ECS and EC2 Auto Scaling based on metrics that are not in AWS?

A: The typical mindset doesn’t work here: you can’t simply measure how much CPU and memory is currently in use and, based on that, decide how big you want your ECS cluster to be. What counts is the committed resources: every time you spin up a container, you specify how much memory and CPU that container is going to take, and if the cluster does not have enough uncommitted capacity left, ECS will fail to place new containers.
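
Those commitments are declared per container in the task definition. A minimal boto3 sketch (family, image and sizes are hypothetical) showing the CPU and memory reservations that ECS schedules against:

```python
import boto3

ecs = boto3.client("ecs")

# The cpu/memory values below are the "committed resources" the scheduler
# books against each instance, regardless of what the process actually uses.
ecs.register_task_definition(
    family="storefront",
    containerDefinitions=[{
        "name": "storefront",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/storefront:1.0.0",
        "cpu": 256,        # CPU units (1024 = one vCPU)
        "memory": 512,     # hard memory limit in MiB
        "essential": True,
        "portMappings": [{"containerPort": 8080}],
    }],
)
```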

In the Rent-A-Center example, scaling up and scaling down was triggered by a Lambda function that queried ECS through the ECS API. It tracked how the containers were currently scheduled, and if it detected that there were not enough resources left to scale any of the individual services, it would spin up additional EC2 instances and add them to the ECS cluster. That’s the bottom layer, and it is different from how we typically think of auto scaling.
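
A minimal sketch of such a Lambda function, assuming hypothetical cluster and Auto Scaling group names (the production implementation naturally carried more logic than this):

```python
import boto3

ecs = boto3.client("ecs")
autoscaling = boto3.client("autoscaling")

CLUSTER = "hybris-prod"            # hypothetical names
ASG = "hybris-prod-ecs-nodes"
TASK_CPU = 256                     # CPU units the largest task commits

def handler(event, context):
    """Add an EC2 instance when no node has room left for another task."""
    arns = ecs.list_container_instances(cluster=CLUSTER)["containerInstanceArns"]
    if not arns:
        return
    nodes = ecs.describe_container_instances(
        cluster=CLUSTER, containerInstances=arns)["containerInstances"]
    # remainingResources is what ECS has NOT yet committed to containers.
    free_cpu_per_node = [
        next(r["integerValue"] for r in node["remainingResources"]
             if r["name"] == "CPU")
        for node in nodes
    ]
    if max(free_cpu_per_node) < TASK_CPU:
        group = autoscaling.describe_auto_scaling_groups(
            AutoScalingGroupNames=[ASG])["AutoScalingGroups"][0]
        autoscaling.set_desired_capacity(
            AutoScalingGroupName=ASG,
            DesiredCapacity=group["DesiredCapacity"] + 1)
```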

Regarding how we arrived at the number of containers for each service, and how they scaled: initially, we used CloudWatch metrics and a Lambda function that would look at CPU and memory usage; if the containers for an individual service were running hot, it would increase the number of containers for that service.
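
A sketch of that service-level loop, again with hypothetical names and thresholds:

```python
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch")
ecs = boto3.client("ecs")

CLUSTER = "hybris-prod"       # hypothetical names and thresholds
SERVICE = "storefront"

def handler(event, context):
    """Scale an individual ECS service on its average CPU utilization."""
    now = datetime.datetime.utcnow()
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/ECS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "ClusterName", "Value": CLUSTER},
                    {"Name": "ServiceName", "Value": SERVICE}],
        StartTime=now - datetime.timedelta(minutes=10),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    if not points:
        return
    avg = sum(p["Average"] for p in points) / len(points)
    desired = ecs.describe_services(
        cluster=CLUSTER, services=[SERVICE])["services"][0]["desiredCount"]
    if avg > 70:                     # running hot: add a container
        ecs.update_service(cluster=CLUSTER, service=SERVICE,
                           desiredCount=desired + 1)
    elif avg < 20 and desired > 2:   # mostly idle: remove one
        ecs.update_service(cluster=CLUSTER, service=SERVICE,
                           desiredCount=desired - 1)
```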

Q: What was the level of control that was implemented (other than the WAF) at the container level?

A: There is a long list of controls that we implement at Flux7. In this case, as a starting point, we built on top of the PCI Quick Start from AWS and inherited whatever we could from it. We also used a number of separate AWS accounts to create a security separation of duties (e.g., for dev, prod, etc.).

From a security standpoint, containers make it a lot easier to pass security and compliance checks. From an auditability and defensibility standpoint, IS auditors care about three things:

  1. Do you have a process?
    Yes, you have a process, because containers are built using the Dockerfile and the Docker tooling.
  2. Are you following the process? Can you guarantee that people are following the process?
    As it so happens, with containers nobody needs to be allowed SSH access into any production instance; the only way you can actually get a container deployed is through the Jenkins instance. If you go through the Jenkins instance, the process is guaranteed to be followed, because that is how Jenkins is set up. The follow-up question to that is: how do you know that Jenkins is secure and your AWS credentials are secure? AWS makes this answer easy with IAM roles and the ability to rotate credentials.
  3. Can someone tamper with the process? How do you ensure that a Byzantine entity can’t come in and tamper with your process?
    Containers make that really easy because you are guaranteed that no one is logging in. Your container by definition needs to be stateless, which means its file system is effectively read only; logs are shipped out of the container, so an intruder would first have to break through that and create a file system they could actually write to. (A run-time sketch of this read-only guarantee follows this list.)
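
One concrete way to enforce that read-only guarantee at run time is Docker’s own flag for it. An illustrative sketch; the image name is hypothetical, and this is not necessarily how the Rent-A-Center containers were launched:

```python
import subprocess

# Run the application container with a read-only root filesystem.
# Logs go to stdout/stderr, where the Docker logging driver ships them
# out of the container, so nothing needs to be written inside it.
subprocess.run(
    ["docker", "run", "--rm", "--read-only",
     "example/hybris-storefront:1.0.0"],   # hypothetical image name
    check=True,
)
```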

From that perspective, the story is actually a lot stronger and more defensible as a process. There are a couple of other layers to it: one is detecting vulnerabilities, and another is making sure you are using the right software. And since a container image cannot be tampered with once it is built, you can run static analysis on it: several open source tools will do a full static analysis of commonly known vulnerabilities on your container images. At Flux7 we do a lot of PCI and HIPAA projects, and we find that a container pipeline is easier to defend and audit than a traditional pipeline. That is, once you prove the pipeline was built correctly, you are essentially done with the audit.

Q: With regard to using Lambda to create security rules for WAF, who does that, the customer or Flux7?

A: Our philosophy at Flux7 is to teach you how to fish. We definitely help in creating the initial rules, and from there the customer creates the rules.
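
For context, a Lambda-driven WAF rule change of the kind discussed here usually boils down to inserting IP descriptors into a WAF IP set. A minimal boto3 sketch using the classic WAF API, with a hypothetical IP set ID (the detection logic that decides what to block is omitted):

```python
import boto3

waf = boto3.client("waf")          # classic WAF API, used with CloudFront
IP_SET_ID = "example-ip-set-id"    # hypothetical IP set ID

def block_ip(cidr):
    """Add a CIDR block to the WAF IP set referenced by a block rule."""
    token = waf.get_change_token()["ChangeToken"]
    waf.update_ip_set(
        IPSetId=IP_SET_ID,
        ChangeToken=token,
        Updates=[{
            "Action": "INSERT",
            "IPSetDescriptor": {"Type": "IPV4", "Value": cidr},
        }],
    )

# e.g. block_ip("192.0.2.0/24")
```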

At Flux7, we have been focused on container consulting, implementing Docker container technology in conjunction with AWS since 2013 and helping companies use its benefits for migration and much more. Please contact us today if we can assess how a container-based AWS migration can help your business. Did you find the insights in this article helpful? Sign up for our newsletter to get regular news and analysis like this in your inbox.
