At Flux7, we believe in high productivity, so each of our engineers handles multiple AWS client accounts, and sometimes multiple engineers handle one client. As a team leader who manages tens of client accounts, I need to switch in and out of each account several times an hour. That is a real challenge because so much customer-specific information must be loaded into the files and environments that we call “customer profiles”. Each profile includes the following:
AWS credentials on the CLI.
AWS IAM login in the browser and the login link.
ssh config file for each customer.
ssh keys to login to the servers.
ssh autocompletions for hostnames.
Customer-specific files related to other tools, e.g., config.groovy file for Netflix Asgard.
We like to let ssh autocomplete AWS instance names. For instance, if we type “ssh prod[tab]r[tab]”, it autocompletes to “ssh production-rails”. This requires that every instance have a Name tag; we use that name in both the ssh config and the bash autocompletion.
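A minimal sketch of how such completion can be wired up in bash (the function names and the config-file parsing here are our illustration, not Flux7's actual implementation):

```shell
# Collect Host entries (ignoring wildcard patterns) from an ssh config file.
_ssh_hosts() {
  awk '/^Host / { for (i = 2; i <= NF; i++) if ($i !~ /[*?]/) print $i }' \
    "${1:-$HOME/.ssh/config}"
}

# Complete the current word against those hostnames.
_ssh_complete() {
  local cur="${COMP_WORDS[COMP_CWORD]}"
  COMPREPLY=( $(compgen -W "$(_ssh_hosts)" -- "$cur") )
}

# Register the completion for the ssh command.
complete -F _ssh_complete ssh
```

With a `Host production-rails` entry in the ssh config, typing `ssh prod<tab>` then completes to `ssh production-rails`.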
As you might imagine, switching all of this customer-specific data manually is tedious and time consuming. It can easily take several minutes, and even then it’s error-prone.
To address this issue, we tried a few obvious things. Each solution worked for a certain period of time, but finally we settled on just one for the reasons outlined below.
For the Browser
Initially we tried using LastPass to handle the saved username, password, and URL. But we shifted to opening incognito windows from the command line and filling in the username and password fields via Selenium.
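A sketch of the command-line side of this (the function name, the `BROWSER_CMD` override, and the browser choice are illustrative assumptions; driving Selenium to fill in the login form is omitted):

```shell
# Open a client's IAM sign-in page in a throwaway incognito window so
# sessions never leak between customer accounts. BROWSER_CMD lets you
# substitute a different browser (or a stub when testing).
open_console() {
  local url="${1:?usage: open_console <iam-login-url>}"
  "${BROWSER_CMD:-google-chrome}" --incognito "$url"
}
```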
For the Command-Line Interface (CLI)
The CLI was a bit more challenging. We tried six different approaches before finding the right one.
First, we used the profile feature of the AWS CLI tools, but it didn’t work for us. There were several reasons, but the killer was that it only worked with the AWS CLI itself, when we needed a solution that also worked with tools like the knife ec2 plugin.
Next, we put a Workstation instance in every account with all of the customer data loaded onto it. The Workstation’s ssh key and ssh-config entries were added manually to each of our engineers’ computers. This was an easy solution, but it raised three issues. First, since all of our engineers shared the same account on the Workstation, two people working on a client’s account concurrently could interfere with one another, and there was no tracking of who did what. Second, the Workstation, even as a micro instance, cost $15 per month, and it was an additional instance to maintain, keep up, and so on. Third, any scripts or code developed to aid our work in general had to be copied to every client’s Workstation instance before we could leverage it for all of them. That was problematic.
We then made a VM setup for every client’s data, but this raised four issues. First, VMs are so large that transferring one to an engineer was time consuming. Second, VMs are resource hogs, so it was hard to run many of them concurrently. Third, VMs are slow to start and stop. Fourth, any general code or bug fix had to be applied to every copy of every VM, i.e., to each account’s VM on each engineer’s computer. That was worse than the Workstation approach, where only one update was required per account.
We tried using a Docker container for every client, which looked quite promising at first. Containers are typically a few hundred MB, so they’re easier to transfer, and they’re lightweight and cheap to start and stop. But the downsides were the still-sizable footprint (around 600 MB per client) and the fact that code sharing remained a manual task, so we still couldn’t easily share automation code across clients.
Then we tried a custom solution with the internal code name jamoora (meaning “very obedient slave”). We wrote a simple bash script that made the switch for us: running “account xyz” simply loaded all of the data related to a particular account. For example, environment variables were set, the ssh config was populated with the client’s machines, ssh keys were added via ssh-agent, and any other files were handled in a customized fashion. That worked very well. It was extremely lightweight, since only the relevant customer data was saved for each customer, and transferring a customer profile was super easy via tar | gzip | bcrypt. Since the automation scripts were the same for all clients, the code-sharing issue was resolved too. The only problem was that we were reinventing the wheel, debugging things that someone had already debugged for us!
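A minimal sketch of what such an “account” function might look like (the directory layout, file names, and `PROFILE_DIR` variable are assumptions for illustration, not the original jamoora script):

```shell
account() {
  local name="${1:?usage: account <customer>}"
  local dir="${PROFILE_DIR:-$HOME/.customer-profiles}/$name"
  [ -d "$dir" ] || { echo "no profile for '$name'" >&2; return 1; }

  # Credentials for the AWS CLI, knife ec2, and friends
  export AWS_ACCESS_KEY_ID="$(cat "$dir/access_key")"
  export AWS_SECRET_ACCESS_KEY="$(cat "$dir/secret_key")"

  # Customer-specific ssh config and keys
  cp "$dir/ssh_config" "$HOME/.ssh/config"
  [ -d "$dir/keys" ] && ssh-add "$dir"/keys/* 2>/dev/null

  # Make the active client visible in the prompt
  export PS1="($name) ${PS1:-}"
}
```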
Finally, we set up AWSma using virtualenv. Rather than developing our own tool to create, activate, deactivate, and remove customer profiles, we stole virtualenv and virtualenvwrapper from Python. What follows is a description of the AWSma project.
Quick Tutorial on Python virtualenv
(Read more here for a detailed tutorial)
Virtualenv is a Python package that creates a new, isolated Python environment. It lets developers test their Python code in isolation from the rest of the machine. Each virtual environment consists of folders like environment_name/bin, environment_name/lib, etc. The bin folder contains scripts for loading and unloading the environment. For example, when the “activate” script is sourced, it sets environment variables so that the newly created virtual environment becomes the default location for Python to install and load packages. Sounds very much like customer profiles, right? Except that instead of Python-related variables, we need our own variables and files loaded.
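In essence, activate does something like the following (a simplified sketch of the mechanism, not virtualenv's actual script):

```shell
activate_sketch() {
  local env_dir="$1"
  export VIRTUAL_ENV="$env_dir"
  _OLD_PATH="$PATH"
  _OLD_PS1="${PS1:-}"
  # The env's bin/ goes first, so its python and pip win every PATH lookup
  export PATH="$env_dir/bin:$PATH"
  # Prefix the prompt with the environment name
  export PS1="($(basename "$env_dir")) ${PS1:-}"
}

deactivate_sketch() {
  export PATH="$_OLD_PATH"
  export PS1="$_OLD_PS1"
  unset VIRTUAL_ENV
}
```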
Quick Tutorial on virtualenvwrapper
(Read more here)
Thankfully, virtualenv is simply a building block that’s typically used in conjunction with an orchestration tool called virtualenvwrapper, which provides plumbing and housekeeping for virtual environments. Running the command “workon xyz” activates the virtualenv xyz by sourcing its bin/activate script. Running the “deactivate” command unloads the environment.
virtualenvwrapper also does other useful things, like providing an autocomplete interface and changing the bash prompt to “(environment name) <your regular prompt>”. That’s excellent, as developers always know which environment they are in.
How We Used virtualenvwrapper
Virtualenv is solid, well debugged, and widely used in the Python community, so we didn’t need to recreate our own version of it. Instead, we found ways to build on it.
Fortunately, virtualenvwrapper was built with support for extensions. It provides many hooks that can be used to inject user-defined behavior when an environment is created, activated, deactivated, or removed. There are also CLI options to skip setting up the Python-related packages in a virtualenv. Thus, our task became very simple: we only needed to write simple hooks to be called by virtualenvwrapper. Our activate hook loads everything in a client profile. Our deactivate hook undoes everything the activate hook did. Our remove hook is left empty. The hook for mkvirtualenv was the only one that required major modifications. Why? We wanted special behavior, like copying setup files and ssh keys into the environment upon completion. So we created the simple bash function “mkaws”, a wrapper around mkvirtualenv. It takes as its arguments the location of the files containing the AWS and ssh credentials, configs, etc. It calls mkvirtualenv and, once the environment is built, copies the AWSma-specific files into the newly created virtual environment.
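A sketch of what such a wrapper might look like (the function body and the `awsma/` directory layout are our illustration, not the actual AWSma code; virtualenvwrapper's real per-environment hooks live in `$WORKON_HOME/<env>/bin` as `postactivate`, `predeactivate`, and so on):

```shell
# Create a virtualenvwrapper environment and seed it with a client's
# AWS/ssh credentials and config files.
mkaws() {
  local name="${1:?usage: mkaws <customer> <credentials-dir>}"
  local creds_dir="${2:?usage: mkaws <customer> <credentials-dir>}"
  mkvirtualenv "$name" || return 1
  # Copy the customer profile into the new environment; the activate
  # hook can then load everything from this directory.
  mkdir -p "$WORKON_HOME/$name/awsma"
  cp -r "$creds_dir"/. "$WORKON_HOME/$name/awsma/"
}
```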
The Result: The AWSma Project As Part of the Flux7 OSS
We built AWSma and now use it for our day-to-day tasks, which speeds up our work significantly. Check out the project here at github:
AWSma is open source and free to use. We are looking for test users, collaborators, and contributors to help us improve AWSma and make it a better tool for the AWS community as a whole. Please hit me up on Twitter @futurechips or on LinkedIn/Google+ as Aater Suleman to discuss this further.