“Next generation” data acquisition technologies are allowing scientists to collect exponentially more data at a lower cost. These trends are impacting many scientific fields, including genomics, astronomy, and neuroscience. We can address the problem posed by exponential data growth by applying horizontally scalable techniques from modern analytics systems to accelerate scientific processing pipelines.
In this paper, we describe ADAM, an example genomics pipeline that leverages the open-source Apache Spark and Parquet systems to achieve a 28× speedup over current genomics pipelines while reducing cost by 63%. From building this system, we distill a set of techniques for efficiently implementing scientific analyses on commodity “big data” systems. To demonstrate the generality of our architecture, we then implement a scalable astronomy image processing system that achieves a 2.8–8.9× speedup over the state-of-the-art MPI-based system.
Full author list:
Frank Austin Nothaft∗, Matt Massie∗, Timothy Danford∗‡, Zhao Zhang∗, Uri Laserson◦, Carl Yeksigian‡, Jey Kottalam∗, Arun Ahuja†, Jeff Hammerbacher†◦, Michael Linderman†, Michael J. Franklin∗, Anthony D. Joseph∗, David A. Patterson∗
∗AMPLab, University of California, Berkeley; ◦Cloudera, San Francisco, CA; †Icahn School of Medicine at Mount Sinai, New York, NY; ‡Genomebridge, Cambridge, MA