AWS Network Performance With Iperf On High I/O Instance i2.8xlarge

Those migrating to the cloud are often confused by performance on AWS. With so many metrics, they’re not sure which ones pertain to their application needs. The biggest questions seem to center on network performance, characterized as “low,” “medium,” and “high,” which is why we’ve made it the focus of this series on AWS performance benchmarking. To measure network performance we chose the industry-standard benchmark Iperf and used i2.8xlarge instances.


Over the course of our Iperf experiments, we learned about many variables for optimizing AWS network performance. Before describing our test results, we want to share what we've learned about how various configurations affect performance.

Okay, here we go…

About Iperf

Iperf is a popular network-benchmarking tool that creates TCP and UDP data streams to measure bandwidth and other network characteristics, and is also useful for network tuning. More details on Iperf can be found at http://en.wikipedia.org/wiki/Iperf.

Installation and Setup

Iperf is available in two versions: the earlier Iperf2.x line and the newer iperf3. You can install iperf3 from http://stats.es.net/software/iperf-3.0.1.tar.gz, while Iperf2 is available as a standard package in most Linux distributions. You can install Iperf2 on Ubuntu as follows:

Command to Install Iperf2

sudo apt-get install iperf

Command to Install Iperf3

wget http://stats.es.net/software/iperf-3.0.1.tar.gz
tar zxf iperf-3.0.1.tar.gz
cd iperf-3.0.1
./configure
make && sudo make install

Iperf runs as either a client or server and allows you to tune various network parameters, details for which can be found by typing:

iperf --help or iperf3 --help

To run Iperf as a server

  • Iperf3:

         iperf3 -s -D

         -s → run in server mode
         -D → run as a daemon

  • Iperf2:

         iperf -s -D
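
Once a server is listening, throughput is measured from the client side. Here is a minimal sketch of a client run; the server IP below is a placeholder, and the duration and stream count are illustrative rather than the exact values we used:

# Run a 30-second test against the server; -P opens parallel
# streams, which helps saturate a high-bandwidth link
iperf3 -c 10.0.0.10 -t 30 -P 8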

Iperf3 offers the same features as Iperf2, but also includes some programmatic enhancements. Since we only used Iperf to measure throughput between two nodes, it didn’t matter whether we used Iperf2 or Iperf3; we chose the latter except when using the public Iperf server. For all experiments, we used an i2.8xlarge instance as the client machine, running Ubuntu 12.04 LTS.

Take One – Public Server

We wanted to keep the setup as simple as possible, so we began (in hindsight, quite naively) by using a public Iperf server hosted at iperf.scottlinux.com. We were surprised by the measured bandwidth, which, at just 4.678 Mbits/sec, was far too low. That’s likely because the public server was loaded. We quickly recognized that this wasn’t an appropriate benchmarking route.
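
For reference, a run against the public server looks roughly like the following; this is a sketch (we used Iperf2 here, and the 30-second duration is our assumption, not a recorded detail of the test):

# Point the Iperf2 client at the public server for a 30-second test
iperf -c iperf.scottlinux.com -t 30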

Take Two – Our Own Iperf Server

Next we set up two i2.8xlarge instances, with Iperf running in server mode on one and in client mode on the other. We made sure that both client and server were set up within the same placement group. This setup resulted in 2.16 Gbits/sec of bandwidth. It looked as if AWS had put a cap in place to restrict network bandwidth to 2.16 Gbits/sec; in fact, this was the same performance we got from an i2.4xlarge instance. However, AWS advertises that i2.8xlarge instances within the same placement group are connected by a 10-Gigabit network. While 2.16 Gbits/sec is certainly respectable throughput, it was significantly below our expectations for an i2.8xlarge instance.
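
For completeness, here is roughly how such a pair can be launched into a cluster placement group using the modern AWS CLI (not the tooling shown elsewhere in this post); the group name, AMI ID, and key name are placeholders, not the values we used:

# Create a cluster placement group and launch two instances into it
aws ec2 create-placement-group --group-name iperf-test --strategy cluster
aws ec2 run-instances --image-id ami-xxxxxxxx --count 2 \
    --instance-type i2.8xlarge --key-name my-key \
    --placement GroupName=iperf-test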

Take Three – Using Bare Metal

Disappointed by our initial results, we set up a bare-metal server outside AWS. Our server machine had a 64-bit architecture and a 4-core Intel Xeon CPU E3-1245 V2 @ 3.40GHz with hyper-threading (8 vCPUs), 32GB of RAM, and ran Ubuntu Server 12.04 LTS. We found bandwidth significantly higher than with the public server, averaging 189.83 Mbits/sec. We chose not to pursue this line any further because there were too many moving parts: with client and server in two geographically distant locations, bottlenecks can occur at any point in the chain. In addition, we used TCP packets and found that increased latency can cause the TCP memory buffer to fill up.
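
The underlying issue is the bandwidth-delay product: on a high-latency path, TCP cannot keep more unacknowledged data in flight than its window allows, so enlarging the window can help. A hedged sketch (the server IP and 4MB window are illustrative, not values from our tests):

# Request a larger TCP window (-w) to cover more of the
# bandwidth-delay product on a high-latency WAN path
iperf3 -c 203.0.113.5 -t 30 -w 4M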

Take Four – Enabling “Enhanced Networking”

After digging more deeply, we discovered a feature called “Enhanced Networking” that’s needed to get the full 10-Gigabit network. It’s enabled by default in Amazon’s latest Linux AMI, but for other AMIs you have to install the required driver and register the image with a special flag in order to use enhanced networking.

Here are the steps used to enable Enhanced Networking for Ubuntu 12.04 LTS:

Requirements

  • Instance launched using an HVM AMI.

  • Instance launched in a VPC.

  • Version 1.6.12.0 of the AWS EC2 Command Line Interface (CLI) tool.

Enable Enhanced Networking

  • Launch an HVM instance in a VPC and connect to that instance.

  • Download and extract the ixgbevf driver.

$ wget "http://downloads.sourceforge.net/project/e1000/ixgbevf
stable/2.11.3/ixgbevf-2.11.3.tar.gz"
...
$ tar -zxf ./ixgbevf-2.11.3.tar.gz
$ cd ixgbevf-2.11.3.tar.gz
$ make &amp;&amp; sudo make install
  • For best performance, enable dynamic interrupt throttling for the installed driver as follows:

# Use tee so the write into /etc/modprobe.d happens with root privileges
$ echo "options ixgbevf InterruptThrottleRate=1,1,1,1,1,1,1,1" | sudo tee /etc/modprobe.d/ixgbevf.conf
$ sudo update-initramfs -c -k all
# Stop the instance, then run this from a machine with the EC2 CLI tools:
$ ec2-modify-instance-attribute instance_id --sriov simple
  • Create an AMI from the instance using the Amazon EC2 console or the ec2-create-image command as usual.
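
Before benchmarking, it’s worth confirming that an instance launched from the new image is actually using the ixgbevf driver. A quick check from inside the instance (the interface name may differ on your system):

# The "driver" field should report ixgbevf when enhanced networking is active
ethtool -i eth0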

With Enhanced Networking enabled, we launched two instances in a VPC from the new image and measured bandwidth at around 9.81 Gbits/sec, which was very close to the advertised 10 Gbits/sec and a significant improvement over the previous 2.16 Gbits/sec.

Summary

The table below shows network bandwidth for i2.8xlarge in our different scenarios:

Scenario                                        Measured Bandwidth
Public Iperf server                             4.678 Mbits/sec
Bare-metal server outside AWS                   189.83 Mbits/sec
Two i2.8xlarge in the same placement group      2.16 Gbits/sec
Two i2.8xlarge with Enhanced Networking         9.81 Gbits/sec

The most interesting result was that once we enabled the enhanced networking driver, we actually saw the hardware utilized to its maximum capacity. That’s quite remarkable considering that our experiments were carried out in a virtualized environment. So, a couple of observations: first, Amazon is not time-slicing the larger instances; and second, hardware virtualization has come a long way over the years.

If you have any thoughts, ideas or questions, please leave a comment. Our expert technical staff monitors our chat during daytime hours and we’re always interested in hearing about issues that others are facing. We’ll be more than happy to provide assistance.

Special thanks to Watson Coburn of Netflix for guiding us in using AWS enhanced networking.

 

