We thought it would be interesting to compare CoreMark and FIO benchmark results across different instance families. The comparison gives a good overall picture of relative performance and may help you decide which instance type to use.
In our earlier posts, we used the FIO tool to benchmark I/O on various EC2 instances. In this post we explore the effect of parallelism on I/O performance on a single EC2 instance. In other words, we try to find the number of parallel I/O-bound processes that yields the best I/O throughput.
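To make the idea of the experiment concrete, here is a minimal sketch (not the actual FIO-based methodology from the post) that measures aggregate write throughput while varying the number of parallel writer processes. All sizes and process counts below are illustrative assumptions.

```python
import multiprocessing as mp
import os
import tempfile
import time

def writer(path: str, total_mb: int) -> None:
    # Each worker writes `total_mb` MiB of zeros to its own file and syncs it.
    block = b"\0" * (1 << 20)  # 1 MiB block
    with open(path, "wb") as f:
        for _ in range(total_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())

def throughput(num_procs: int, mb_per_proc: int = 8) -> float:
    """Return aggregate write throughput (MiB/s) for `num_procs` parallel writers."""
    with tempfile.TemporaryDirectory() as d:
        paths = [os.path.join(d, f"w{i}.dat") for i in range(num_procs)]
        procs = [mp.Process(target=writer, args=(p, mb_per_proc)) for p in paths]
        start = time.perf_counter()
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        elapsed = time.perf_counter() - start
    return num_procs * mb_per_proc / elapsed

if __name__ == "__main__":
    # Sweep the degree of parallelism and report throughput at each level.
    for n in (1, 2, 4, 8):
        print(f"{n} writers: {throughput(n):.1f} MiB/s")
```

On a real instance you would expect throughput to climb with the process count until the device or the instance's I/O limits saturate, then flatten or drop, which is the curve this kind of sweep is meant to expose.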
In a recent post we discussed our methodology for measuring bandwidth with Iperf and how to use Intel’s hardware virtualization drivers to take full advantage of the network card. Iperf is a popular network-benchmarking tool that creates TCP/UDP data streams and measures bandwidth, packet loss, and delay jitter for network tuning. Read our previous post to learn more about Iperf, including its installation and setup.
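The core of what Iperf measures in TCP mode can be illustrated with a self-contained toy: open a TCP connection, stream a known number of bytes, and divide by elapsed time. The sketch below runs entirely over loopback and is an illustration of the principle, not a substitute for Iperf.

```python
import socket
import threading
import time

def run_benchmark(total_bytes: int) -> float:
    """Stream `total_bytes` over a loopback TCP connection and return the
    measured throughput in Mbit/s -- a toy version of what Iperf does."""
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
        srv.listen(1)
        port = srv.getsockname()[1]

        def drain() -> None:
            # Receiver side: accept one connection and discard everything,
            # analogous to an Iperf server in TCP mode.
            conn, _ = srv.accept()
            with conn:
                while conn.recv(65536):
                    pass

        receiver = threading.Thread(target=drain)
        receiver.start()

        chunk = b"\0" * 65536
        with socket.create_connection(("127.0.0.1", port)) as s:
            start = time.perf_counter()
            sent = 0
            while sent < total_bytes:
                s.sendall(chunk)
                sent += len(chunk)
        elapsed = time.perf_counter() - start
        receiver.join()

    return sent * 8 / elapsed / 1e6

if __name__ == "__main__":
    print(f"{run_benchmark(64 * 1024 * 1024):.0f} Mbit/s over loopback")
```

Loopback numbers only exercise the kernel's network stack; measuring a real NIC, as in the post, requires a client and server on separate machines.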
In a previous post, the FIO benchmark was used for four types of I/O operations on storage-optimized instances:
In previous posts we talked about the micro-benchmarks we ran on storage-optimized instances. Here we’ll discuss the same benchmarks run on general-purpose m1 and m3 instances. The m1 is a previous-generation general-purpose instance type, while the m3 is the current-generation version. One major difference between the two is that m1 instances are based on Intel Xeon processors, while on m3 instances each vCPU is a hardware hyperthread of an Intel Xeon E5-2670 processor. An m3 can be launched with either paravirtual or hardware-assisted virtualization; for this post we used a paravirtual image for both the m1 and m3 instances, running the 64-bit Ubuntu 12.04 OS. Here are the details of all available instance types in the general-purpose category:
In a previous post we discussed CoreMark, an industry-standard benchmark for CPU performance. In this post we’ll run I/O benchmarks on i instances using the Flexible IO (FIO) tool.
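For readers unfamiliar with FIO, a run is typically described by an ini-style job file. The fragment below is a minimal sketch of such a file; the parameters (engine, block size, size, runtime) are illustrative assumptions, not the settings used in these posts.

```ini
; illustrative fio job file -- values are assumptions, not this post's settings
[global]
ioengine=libaio
direct=1          ; bypass the page cache to measure the device itself
bs=4k
size=1g
runtime=60
time_based

[random-read]
rw=randread       ; other patterns: read, write, randwrite
```

Running `fio jobfile.ini` executes each job section and reports per-job bandwidth, IOPS, and latency percentiles.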
I’ve been doing a lot of latency and throughput analysis recently as part of benchmarking work on databases, and I thought I’d share some insights on how the two are related. For an overview of what these terms mean, check out Aater’s post describing the differences between them here.
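One standard way to connect the two quantities is Little's law: the mean number of requests in flight equals throughput times mean latency, so at a fixed concurrency, throughput is concurrency divided by latency. The numbers below are hypothetical, purely to show the arithmetic.

```python
def throughput(concurrency: int, latency_s: float) -> float:
    """Little's law: with `concurrency` requests in flight and a mean
    per-request latency of `latency_s` seconds, steady-state throughput
    is concurrency / latency, in requests per second."""
    return concurrency / latency_s

# Hypothetical database with a 5 ms mean request latency:
print(throughput(1, 0.005))   # 200.0 req/s with a single client
print(throughput(16, 0.005))  # 3200.0 req/s if latency stays flat at 16 clients
```

The catch, and the subject of most real benchmarking work, is that latency rarely stays flat as concurrency rises: queueing pushes it up, so measured throughput grows sublinearly and eventually plateaus.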