Application Benchmarking on AWS Using WikiBench


In many of our previous posts, we have talked about micro-benchmarking AWS instances, including benchmarking the instances for CPU, Disk I/O and Network performance. In this post, we will discuss our methodology for macro-benchmarking.

While micro-benchmarks give you deep insight into low-level performance metrics in isolation, we need application benchmarks to understand the performance of an instance at the application level, especially when CPU, Disk I/O and Network I/O are all working together in an application.

We used WikiBench for application benchmarking. The reasons we chose WikiBench are:

  1. WikiBench uses a real application: MediaWiki, powered by Apache and MySQL in the backend.
  2. The website is populated with real data from Wikipedia, and the benchmark replays traces of real users of Wikipedia.
  3. Many of our customers host websites, so it is a workload relevant to their requirements.
  4. It is a distributed application and scales easily.

About WikiBench


To run a benchmark, we need three things:

  1. A MediaWiki installation with a Wikipedia dump. Instructions for downloading and installing MediaWiki are available on the MediaWiki site.
  2. Wikipedia access traces, which can be downloaded from the WikiBench project site.
  3. The WikiBench software, which can be downloaded from the same site.

There is a README.txt that comes with the WikiBench package; it provides good documentation for using WikiBench.

WikiBench has three primary parts: WikiLoader, TraceBench and WikiJector. We will now go into more detail on each of them.

WikiLoader


WikiLoader is used to load the Wikipedia dumps into the MySQL database of a MediaWiki installation. Its typical usage, and the parameters it supports, are documented in the README.txt that comes with the WikiBench package.

There is also another tool that loads the dump, which can be obtained from the MediaWiki site together with instructions for using it.
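If that tool is MWDumper (an assumption on our part), a typical import looks roughly like this, with the dump file name, database name and credentials adjusted to your installation:

    # Convert the XML dump to SQL statements and pipe them straight into the
    # MediaWiki database. File, database and user names below are examples only.
    java -jar mwdumper.jar --format=sql:1.5 enwiki-latest-pages-articles.xml.bz2 | \
        mysql -u root -p wikidb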

TraceBench


This tool is used to sample the traces. Sampling is done to control the traffic to the System Under Test (SUT). Before running TraceBench, we need to sort the traces; this is done with the sort script included in the package. Here are the steps to run TraceBench:

1. Sort the traces: Run the sort script on the downloaded trace files. By default, the sorted trace file is stored in gzipped form in the home folder of the user running the script ( ~/traces.txt.gz ).
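As a rough equivalent using only standard tools (this assumes the timestamp is the second whitespace-separated field of each trace line, which should be verified against your trace files), the step amounts to something like:

    # Merge the downloaded trace files and sort them by timestamp, writing the
    # result where the bundled sort script would put it ( ~/traces.txt.gz ).
    zcat /path/to/wikipedia-traces/*.gz | sort -n -k2,2 | gzip > ~/traces.txt.gz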

2. Build TraceBench: To build TraceBench, the following dependencies must be met:

  • Java 1.6 or later
  • Java / MySQL connector (which can be obtained from libmysql-java)
  • Ant

With those dependencies in place, TraceBench can be built using Ant.
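Assuming the TraceBench sources ship with a standard Ant build file (build.xml), which the Ant dependency suggests, the build step is simply:

    # Build TraceBench from its source directory (the directory name may differ
    # in your copy); Ant picks up the build.xml automatically.
    cd tracebench/
    ant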

3. Run TraceBench: Once TraceBench is built, we can run it; an example invocation is sketched after the parameter list below. These are the parameters:

  • <reduction in permil> → reduction in permil (parts per thousand), an integer between 0 and 1000. If it is 0, the tool runs faster: it only removes unwanted trace lines without any further sampling. If it is 1000, the resulting number of requests will be zero.
  • <db uri> → standard MySQL URI for the MediaWiki database. ( ex: jdbc:mysql://localhost/wikidb?user=root&password=pass )
  • <plsampling|sampling> → sampling method
    • plsampling → page-level sampling. The number of pages in the traces is reduced by removing selected page names from the trace completely.
    • sampling → The x most popular wiki pages are considered and sampled like static files.
  • <date_ts|epoch_ts> → the timestamp format of the traces. Typically ‘epoch_ts’ is used, but later traces use ‘date_ts’.

TraceBench uses standard input and output. Its output can be further archived or piped directly into WikiBench.
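As a sketch only (the jar and class names are placeholders and the argument order simply follows the parameter list above; take the exact invocation from the WikiBench README), a run that removes 900 permil of the requests, keeping roughly 10% of the traffic, might look like:

    # Read the sorted trace on stdin, sample it, and write the reduced trace to a file.
    # tracebench.jar and the TraceBench class name are placeholders for illustration;
    # the MySQL connector jar path is where the libmysql-java package installs it.
    zcat ~/traces.txt.gz | \
        java -cp tracebench.jar:/usr/share/java/mysql-connector-java.jar TraceBench \
            900 "jdbc:mysql://localhost/wikidb?user=root&password=pass" \
            plsampling epoch_ts | gzip > sampled-trace.txt.gz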

WikiJector


 

This tool takes the sampled traces as input and sends HTTP requests to the SUT. WikiJector has two types of processes: a controller and a number of workers. The controller-worker approach is used so that the load generation can be scaled up or down as desired. To run WikiJector, we need HttpComponents-Client and HttpComponents-Core, which can be downloaded from http://hc.apache.org/downloads.cgi. WikiJector is run in two steps:

  1. Run the controller

The controller can be run in two modes:

a. Quiet mode

b. Verbose mode

The exact command for each mode is given in the WikiBench README. The input to the controller is provided via standard input; a typical run pipes the sampled trace into the controller process, roughly as sketched below.
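A minimal sketch of such a controller run, with placeholder jar and class names (the real ones, and the quiet/verbose option, are in the README):

    # Feed the sampled trace to the controller process on standard input.
    # WikiJector.jar, the Controller class name and the HttpComponents jar names
    # are placeholders for illustration.
    zcat sampled-trace.txt.gz | \
        java -cp WikiJector.jar:httpclient.jar:httpcore.jar Controller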

  2. Run the worker

The worker connects to the controller process and receives from it the trace data used to send HTTP requests to the SUT; a sketch of a worker invocation follows.
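A matching sketch for a worker, again with placeholder names (the worker takes the controller's address and a log file, in line with the post-processing section below):

    # Start a worker that connects to the controller and writes its results to a log file.
    # Class and argument names are placeholders for illustration only.
    java -cp WikiJector.jar:httpclient.jar:httpcore.jar Worker <controller-host> worker1.log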

Postprocessing


 

The worker processes write their results into the log file that is specified as one of the arguments when starting the process. A typical log file looks something like the example below.
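Reconstructed from the field descriptions that follow (real WikiJector output may differ slightly in formatting), the first line of such a log would read:

    2362 GET 132 200 0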

Here is an explanation of what each field means. The first line is taken for the purpose of illustration.

  • 2362 → Relative timestamp of the request in milliseconds from the start of the trace; i.e., the first request in the tracefile would have a relative timestamp of 0.
  • GET → The type of HTTP request sent. It can be either GET or POST.
  • 132 → Response time in milliseconds.
  • 200 → The status code of the http response. For more information, one can refer to http://www.w3.org/Protocols/HTTP/HTRESP.html.
  • 0 → Size of the HTTP response received.

To process these logs, we have written a custom Python script that scans them and computes, on a per-minute basis, the number of requests, the median response time, and the mean response time. In the next post, we will discuss benchmarking an m3.xlarge instance using WikiBench. In the meantime, if you have questions about application benchmarking on AWS using WikiBench, send your inquiries to [email protected], or visit us for more information at www.flux7.com.
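A minimal sketch of this kind of post-processing (not our exact script; it assumes the space-separated log format described above) looks like this:

    #!/usr/bin/env python
    # Summarize WikiJector worker logs on a per-minute basis: request count,
    # median response time and mean response time. Assumes space-separated lines:
    #   <relative-ms> <GET|POST> <response-ms> <status> <size>
    import sys
    from collections import defaultdict

    def summarize(path):
        buckets = defaultdict(list)      # minute index -> response times in ms
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) < 5:
                    continue             # skip malformed lines
                ts_ms, resp_ms = int(parts[0]), int(parts[2])
                buckets[ts_ms // 60000].append(resp_ms)
        for minute in sorted(buckets):
            times = sorted(buckets[minute])
            median = times[len(times) // 2]
            mean = sum(times) / float(len(times))
            print("minute {0}: {1} requests, median {2} ms, mean {3:.1f} ms".format(
                minute, len(times), median, mean))

    if __name__ == "__main__":
        summarize(sys.argv[1])

It can be run directly on a worker log, for example: python summarize.py worker1.log (the script and log file names are just examples).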

May 20, 2014 / Benchmarking

About the Author

Flux7 Labs