Several analytic frameworks have been announced in the last six months. Among them are inexpensive data-warehousing solutions based on traditional Massively Parallel Processing (MPP) architectures (Redshift), systems which impose MPP-like execution engines on top of Hadoop (Impala, HAWQ), and systems which optimize MapReduce to improve performance on analytical workloads (Shark, Stinger). This benchmark provides quantitative and qualitative comparisons of four systems. It is entirely hosted on EC2 and can be reproduced directly from your computer.
This remains a work in progress and will evolve to include additional frameworks and new capabilities. We welcome contributions.
This benchmark measures response time on a handful of relational queries: scans, aggregations, joins, and UDFs, across different data sizes. Keep in mind that these systems have very different sets of capabilities. MapReduce-like systems (Shark/Hive) target flexible, large-scale computation, supporting complex User Defined Functions (UDFs), tolerating failures, and scaling to thousands of nodes. Traditional MPP databases are strictly SQL compliant and heavily optimized for relational queries. The workload here is simply one set of queries that most of these systems can complete.
Our dataset and queries are inspired by the benchmark contained in "A Comparison of Approaches to Large-Scale Data Analysis" (Pavlo et al.). The input data set consists of a set of unstructured HTML documents and two SQL tables which contain summary information. It was generated using Intel's Hadoop benchmark tools and data sampled from the Common Crawl document corpus. There are three datasets with the following schemas:
| Documents | Rankings | UserVisits |
|---|---|---|
| Unstructured HTML documents | Lists websites and their page rank | Stores server logs for each web page |
Query 1 and Query 2 are exploratory SQL queries. We vary the size of the result to expose scaling properties of each system.
Query 3 is a join query with a small result set, but varying sizes of joins.
Query 4 is a bulk UDF query. It calculates a simplified version of PageRank using a sample of the Common Crawl dataset.
| Framework | Instance Type | Memory | Storage | Virtual Cores | $/hour |
|---|---|---|---|---|---|
| Impala, Hive, Shark | m2.4xlarge | 68.4 GB | 1680 GB (2 HDD) | 8 | $1.64 |
| Redshift | dw.hs1.xlarge | 15 GB | 2 TB (3 HDD) | 2 | $0.85 |
| Framework | Instance Type | Instances | Memory | Storage | Virtual Cores | Cluster $/hour |
|---|---|---|---|---|---|---|
| Impala, Hive, Shark | m2.4xlarge | 5 | 342 GB | 8.4 TB (10 HDD) | 40 | $8.20 |
| Redshift | dw.hs1.xlarge | 10 | 150 GB | 20 TB (30 HDD) | 20 | $8.50 |
We launch EC2 clusters and run each query several times, reporting the median response time. Except for Redshift, all data is stored on HDFS in compressed SequenceFile format using CDH 4.2.0. Each query is run against six framework configurations:
| Configuration | Description |
|---|---|
| Redshift | Amazon Redshift with default options. |
| Shark - disk | Input and output tables are on disk, compressed with gzip. OS buffer cache is cleared before each run. |
| Impala - disk | Input and output tables are on disk, compressed with snappy. OS buffer cache is cleared before each run. |
| Shark - mem | Input and output tables are stored in the Spark cache. |
| Impala - mem | Input tables are coerced into the OS buffer cache. Output tables are on disk (Impala has no notion of a cached table). |
| Hive | Hive with default options. Input and output tables are on disk, compressed with snappy. OS buffer cache is cleared before each run. |
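For context, here is a minimal sketch of how the "Shark - mem" configuration can pin an input table in memory, assuming Shark's convention of the time that tables whose names end in _cached are stored in the Spark cache; the table names below are illustrative:

-- Illustrative only: Shark stores tables whose names end in "_cached" in the Spark cache.
CREATE TABLE rankings_cached AS SELECT * FROM rankings;

-- Subsequent queries read the in-memory copy instead of HDFS:
SELECT COUNT(*) FROM rankings_cached;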
SELECT pageURL, pageRank FROM rankings WHERE pageRank > X
| Median Response Time (s) | Query 1A (32,888 results) | Query 1B (3,331,851 results) | Query 1C (89,974,976 results) |
|---|---|---|---|
| Redshift | 2.4 | 2.5 | 12.2 |
| Impala - disk | 9.9 | 12 | 104 |
| Impala - mem | 0.75 | 4.48 | 108 |
| Shark - disk | 11.8 | 11.9 | 24.9 |
| Shark - mem | 1.1 | 1.1 | 3.5 |
| Hive | 45 | 63 | 70 |
This query scans and filters the dataset and stores the results.
This query primarily tests the throughput with which each framework can read and write table data. The best performers are Impala (mem) and Shark (mem), which see excellent throughput by avoiding disk. For on-disk data, Redshift sees the best throughput for two reasons. First, the Redshift clusters have more disks; second, Redshift uses columnar compression, which allows it to skip fields that are not used in the query. Shark and Impala scan at HDFS throughput with fewer disks.
Both Shark and Impala outperform Hive by 3-4X due in part to more efficient task launching and scheduling. As the result sets get larger, Impala becomes bottlenecked on the ability to persist the results back to disk. It seems as if writing large tables is not yet optimized in Impala, presumably because its core focus is BI-style queries.
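To make the pattern concrete, here is a hedged sketch of how Query 1's output can be persisted as a table; the cutoff value and output table name are hypothetical (the actual cutoffs produce the 1A/1B/1C result sizes above):

-- Hypothetical cutoff and output table name; stores the filtered result back to the warehouse.
CREATE TABLE query1_result AS
SELECT pageURL, pageRank FROM rankings WHERE pageRank > 1000;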
SELECT SUBSTR(sourceIP, 1, X), SUM(adRevenue) FROM uservisits GROUP BY SUBSTR(sourceIP, 1, X)
| Median Response Time (s) | Query 2A (2,067,313 groups) | Query 2B (31,348,913 groups) | Query 2C (253,890,330 groups) |
|---|---|---|---|
| Redshift | 28 | 65 | 92 |
| Impala - disk | 130 | 216 | 565 |
| Impala - mem | 121 | 208 | 557 |
| Shark - disk | 210 | 238 | 279 |
| Shark - mem | 111 | 141 | 156 |
| Hive | 466 | 490 | 552 |
This query applies string parsing to each input tuple then performs a high-cardinality aggregation.
Redshift's columnar storage provides a greater benefit than in Query 1 since several columns of the UserVisits table are unused. While Shark's in-memory tables are also columnar, it is bottlenecked here on the speed at which it evaluates the SUBSTR expression. Since Impala is reading from the OS buffer cache, it must read and decompress entire rows. Unlike Shark, however, Impala evaluates this expression using very efficient compiled code. These two factors offset each other, and Impala and Shark achieve roughly the same raw throughput for in-memory tables. For larger result sets, Impala again sees high latency due to the speed of materializing output tables.
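As a concrete (hypothetical) instantiation of the pattern, with an 8-character prefix and the output materialized as a table; the actual prefix lengths used for 2A-2C determine the group counts shown above:

-- Hypothetical prefix length and output table name.
CREATE TABLE query2_result AS
SELECT SUBSTR(sourceIP, 1, 8) AS sourceIPPrefix,
       SUM(adRevenue) AS totalRevenue
FROM uservisits
GROUP BY SUBSTR(sourceIP, 1, 8);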
SELECT sourceIP, totalRevenue, avgPageRank
FROM
(SELECT sourceIP,
AVG(pageRank) as avgPageRank,
SUM(adRevenue) as totalRevenue
FROM Rankings AS R, UserVisits AS UV
WHERE R.pageURL = UV.destURL
AND UV.visitDate BETWEEN Date('1980-01-01') AND Date('X')
GROUP BY UV.sourceIP)
ORDER BY totalRevenue DESC LIMIT 1
| Median Response Time (s) | Query 3A (485,312 rows) | Query 3B (53,332,015 rows) | Query 3C (533,287,121 rows) |
|---|---|---|---|
| Redshift | 42 | 47 | 200 |
| Impala - disk | 158 | 168 | 345 |
| Impala - mem | 74 | 90 | 337 |
| Shark - disk | 253 | 277 | 538 |
| Shark - mem | 131 | 172 | 447 |
| Hive | 423 | 638 | 1822 |
This query joins a smaller table to a larger table then sorts the results.
When the join is small (3A), all frameworks spend the majority of time scanning the large table and performing date comparisons. For larger joins, the initial scan becomes a less significant fraction of overall response time. For this reason the gap between in-memory and on-disk representations diminishes in query 3C. All frameworks perform partitioned joins to answer this query. CPU (due to hashing join keys) and network IO (due to shuffling data) are the primary bottlenecks. Redshift has an edge in this case because the overall network capacity in the cluster is higher.
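For reference, a hedged instantiation of Query 3 in HiveQL-style syntax with an explicit join and a hypothetical cutoff date; the actual dates for 3A-3C control the join sizes, and comparing visitDate as an ISO-formatted string is an assumption about the stored type:

-- Hypothetical date cutoff and output table name.
CREATE TABLE query3_result AS
SELECT sourceIP, totalRevenue, avgPageRank
FROM (
  SELECT UV.sourceIP AS sourceIP,
         AVG(R.pageRank) AS avgPageRank,
         SUM(UV.adRevenue) AS totalRevenue
  FROM rankings R JOIN uservisits UV ON R.pageURL = UV.destURL
  WHERE UV.visitDate BETWEEN '1980-01-01' AND '1980-04-01'
  GROUP BY UV.sourceIP
) t
ORDER BY totalRevenue DESC
LIMIT 1;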
CREATE TABLE url_counts_partial AS
SELECT TRANSFORM (line)
USING "python /root/url_count.py" as (sourcePage, destPage, cnt)
FROM documents;
CREATE TABLE url_counts_total AS
SELECT SUM(cnt) AS totalCount, destPage
FROM url_counts_partial
GROUP BY destPage;
| Median Response Time (s) | Query 4 (phase 1) | Query 4 (phase 2) | Query 4 (total) |
|---|---|---|---|
| Redshift | not supported | - | - |
| Impala - mem | not supported | - | - |
| Impala - disk | not supported | - | - |
| Shark - mem | 156 | 34 | 189 |
| Shark - disk | 583 | 133 | 716 |
| Hive | 659 | 358 | 1017 |
This query calls an external Python function which extracts and aggregates URL information from a web crawl dataset. It then aggregates a total count per URL.
Impala and Redshift do not currently support calling this type of UDF, so they are omitted from the result set. Shark's (disk) advantage over Hive is less pronounced in this query than in Queries 1, 2, and 3 because the shuffle and reduce phases take a relatively small amount of time (this query shuffles only a small amount of data), so Hive's task-launch overhead matters less here. Also note that when the data is in memory, Shark is bottlenecked by the speed at which it can pipe tuples to the Python process rather than by memory throughput. This makes the speedup relative to disk around 5X (rather than the 10X or more seen in other queries).
These numbers compare performance on SQL workloads, but raw performance is just one of many important attributes of an analytic framework. Systems like Hive, Impala, and Shark are used because they offer a high degree of flexibility, both in terms of the underlying format of the data and the type of computation employed. Below we summarize a few qualitative points of comparison:
System | SQL variant | Execution engine | UDF Support | Mid-query fault tolerance | Open source | Commercial support | HDFS Compatible |
---|---|---|---|---|---|---|---|
Hive | Hive QL (HQL) | MapReduce | Yes | Yes | Yes | Yes | Yes |
Shark | Hive QL (HQL) | Spark | Yes | Yes | Yes | No | Yes |
Impala | Some HQL + some extensions | DBMS | No | No | Yes | Yes | Yes |
Redshift | Full SQL 92 (?) | DBMS | No | No | No | Yes | No |
We would like to include the columnar storage formats for Hadoop-based systems, such as Parquet and RCFile. We would also like to run the suite at higher scale factors, using different types of nodes, and/or inducing failures during execution. Finally, we plan to re-evaluate on a regular basis as new versions are released.
We wanted to begin with a relatively well known workload, so we chose a variant of the Pavlo benchmark. This benchmark is heavily influenced by relational queries (SQL) and leaves out other types of analytics, such as machine learning and graph processing. The largest table also has fewer columns than in many modern RDBMS warehouses. In future iterations of this benchmark, we may extend the workload to address these gaps.
This benchmark is not an attempt to exactly recreate the environment of the Pavlo et al. benchmark. Instead, it draws on that benchmark for inspiration in the dataset and workload, though there are several notable differences.
We’ve started with a small number of EC2-hosted query engines because our primary goal is producing verifiable results. Over time we’d like to grow the set of frameworks. We actively welcome contributions!
We’ve tried to cover a set of fundamental operations in this benchmark, but of course, it may not correspond to your own workload. The prepare scripts provided with this benchmark will load sample data sets into each framework. From there, you are welcome to run your own types of queries against these tables. Because these are all easy to launch on EC2, you can also load your own datasets.
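For example, once the prepare scripts have loaded the sample tables, a simple ad-hoc query of your own might look like the following (illustrative only):

-- Illustrative ad-hoc query against the loaded sample tables.
SELECT COUNT(*) AS visits, SUM(adRevenue) AS totalRevenue
FROM uservisits;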
For now, no. The idea is to test “out of the box” performance on these queries even if you haven’t done a bunch of up-front work at the loading stage to optimize for specific access patterns. We may relax this requirement in the future.
We did, but the results were very hard to stabilize. The reason is that it is hard to coerce the entire input into the buffer cache because of the way Hive uses HDFS: each file in HDFS has three replicas, and Hive's underlying scheduler may choose to launch a task at any replica on a given run. As a result, you would need 3X the amount of buffer cache (which exceeds the capacity of these clusters) and/or precise control over which node runs a given task (which the MapReduce scheduler does not offer).
We plan to run this benchmark regularly and may introduce additional workloads over time. We welcome the addition of new frameworks as well. The only requirement is that running the benchmark be reproducible and verifiable in similar fashion to those already included. The best place to start is by contacting Patrick Wendell from the U.C. Berkeley AMPLab.
Since Redshift, Shark, Hive, and Impala all provide tools to easily provision a cluster on EC2, this benchmark can be easily replicated.
To allow this benchmark to be easily reproduced, we've prepared various sizes of the input dataset in S3. The scale factor is defined such that each node in a cluster of the given size will hold ~25GB of the UserVisits table, ~1GB of the Rankings table, and ~30GB of the web crawl, uncompressed. The datasets are encoded in TextFile and SequenceFile format, along with corresponding compressed versions. They are available publicly at s3n://big-data-benchmark/pavlo/[text|text-deflate|sequence|sequence-snappy]/[suffix].
| S3 Suffix | Scale Factor | Rankings (rows) | Rankings (bytes) | UserVisits (rows) | UserVisits (bytes) | Documents (bytes) |
|---|---|---|---|---|---|---|
| /tiny/ | small | 1200 | 77.6KB | 10000 | 1.7MB | 6.8MB |
| /1node/ | 1 | 18 Million | 1.28GB | 155 Million | 25.4GB | 29.0GB |
| /5nodes/ | 5 | 90 Million | 6.38GB | 775 Million | 126.8GB | 136.9GB |
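As a sketch (not part of the official prepare scripts), the text-format Rankings data could be registered directly as an external Hive table. The column names follow the Pavlo schema (pageURL, pageRank, avgDuration); the comma delimiter and the rankings/ subdirectory within the suffix path are assumptions:

-- Hedged sketch: delimiter, column types, and exact S3 path layout are assumptions.
CREATE EXTERNAL TABLE rankings (
  pageURL STRING,
  pageRank INT,
  avgDuration INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 's3n://big-data-benchmark/pavlo/text/1node/rankings/';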
Create an Impala, Redshift, Hive or Shark cluster using their provided provisioning tools.
$> ec2/spark-ec2 -s 5 -k [KEY PAIR NAME] -i [IDENTITY FILE] --hadoop-major-version=2 -t "m2.4xlarge" launch [CLUSTER NAME]
Scripts for preparing data are included in the benchmark github repo. Use the provided prepare-benchmark.sh to load an appropriately sized dataset into the cluster.
./prepare-benchmark.sh --help
Example invocations showing the options used for Redshift, Shark, and Impala/Hive in this benchmark are included in the benchmark repository.