We have been working closely with a customer who is undergoing a business transformation. As a multimedia equipment manufacturer, the organization has earned a loyal following for its high-quality devices. However, like many companies facing converging markets and new customer demands, it has embarked on a metamorphosis. Traditionally very focused on hardware, the company largely ignored its software, even though it offered customers real value. Part of the company's transformation was a move to treat its software as a full-fledged offering rather than a free supplement. An upcoming product release marked the first (and biggest) step in cementing this change in company direction.
In our last blog post, we discussed how Ansible's configuration management tools can benefit Amazon Web Services (AWS) environments – especially for DevOps-focused organizations. Today we'd like to share how to realize those benefits with Ansible Playbooks.
Playbooks are Ansible’s configuration, deployment, and orchestration language. Keeping in line with Ansible’s focus on simplicity without sacrificing security and reliability, Playbooks purposefully have a minimum of syntax because they aren’t meant to be a programming language or script, but rather a model of a configuration or a process.
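To make that model concrete, here is a minimal sketch of a playbook. The host group ("webservers") and the package and service names are assumptions chosen for illustration, not from any particular environment:

```yaml
---
# Hypothetical playbook: ensure nginx is installed, running, and
# enabled at boot on every host in the "webservers" group.
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure nginx is installed
      yum:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: yes
```

Note how the playbook reads as a description of desired state rather than a script of steps; it would typically be applied with `ansible-playbook -i inventory site.yml`.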
Code Spaces. Its story is sending shivers up and down the spines of businesses and developers alike, and for good reason. But that doesn't mean it should stop the progress of cloud migration or significantly change your strategy. In fact, the story shines a bright light on an issue that is avoidable, and serves as a warning of what can happen in the complex world of cloud architecture.
Previously, we have shared several posts that explore benchmarking and the analysis of the r3 instance family, as well as other instance types. You can find those posts here.
In our previous posts, we talked about CPU and disk I/O performance of c3 instances. In this post, we will focus on network performance. Amazon offers an Enhanced Networking feature for c3 instances running HVM AMIs with the right drivers. In all our tests, we used an HVM AMI with enhanced networking enabled.
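For readers reproducing this setup, enhanced networking can be verified both from the AWS CLI and from inside the instance. This is a sketch assuming the AWS CLI is configured; the instance ID below is a placeholder:

```shell
# Check whether SR-IOV (enhanced networking) is enabled on the instance.
# "i-0123456789abcdef0" is a placeholder instance ID.
aws ec2 describe-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --attribute sriovNetSupport

# From inside the instance, confirm the network interface is using the
# ixgbevf driver that enhanced networking relies on.
ethtool -i eth0 | grep driver
```

If `sriovNetSupport` reports "simple" and `ethtool` shows the ixgbevf driver, the instance is using enhanced networking.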
As we expected, the AWS Summit this week was an excellent experience. We talked to a lot of interesting people, both new and familiar. We gathered several customer leads, shared technology best practices, talked about business development strategies, and explored several partnership opportunities. As promised, I am now sharing my experience in San Francisco, so please read on.
After my post last week about using AWS in the cloud, I thought I'd share the sessions at the upcoming AWS Summit in San Francisco that have us excited. These selections are heavily influenced by my own interests, which come from my role within Flux7 and the technology development I work on both internally at Flux7 and for our professional services clients.
In our previous post, we ran benchmarks to measure the network performance of 'i' instances using the iperf tool. In this post we will talk about iperf benchmarks run on 'm1' and 'm3' instances. Unlike 'i' instances, 'm1' and 'm3' instances do not support enhanced networking, which led us to run both the client and the server in EC2-Classic. However, we ensured that the machines under test (client and server) were in the same availability zone.
Those migrating to the cloud are often confused by performance on AWS. With so many metrics, they’re not sure which ones pertain to their application needs. The biggest questions seem to center on network performance characterized as “low,” “medium,” and “high,” which is why we’ve focused on network performance for this series on AWS performance benchmarking. To measure network performance we chose the industry-standard benchmark Iperf and used i2.8xlarge instances.
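As a sketch of how such a measurement is typically run with iperf (version 2 flags shown; the server's private IP address below is a placeholder):

```shell
# On the server instance: listen for incoming TCP throughput tests.
iperf -s

# On the client instance: run a 60-second test with 8 parallel
# streams against the server's private IP (placeholder address).
iperf -c 10.0.0.10 -P 8 -t 60
```

Multiple parallel streams (`-P`) help saturate the link on large instances, where a single TCP stream often cannot reach the instance's full network capacity.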