

Accelerating Large-Scale Data Analysis by Offloading to High-Performance Computing Libraries using Alchemist

Alex Gittens (Rensselaer Polytechnic Institute); Kai Rothauge (University of California, Berkeley); Shusen Wang (University of California, Berkeley); Michael Mahoney (University of California, Berkeley); Lisa Gerhardt (NERSC/LBNL); Prabhat (NERSC/LBNL); Jey Kottalam (University of California, Berkeley); Michael Ringenburg (Cray Inc.); Kristyn Maschhoff (Cray Inc.)

Apache Spark is a popular system aimed at the analysis of large data sets, but recent studies have shown that certain computations, in particular many linear algebra computations that are the basis for solving common machine learning problems, are significantly slower in Spark than when done using libraries written in a high-performance computing framework such as the Message-Passing Interface (MPI).

To remedy this, we introduce Alchemist, a system designed to call MPI-based libraries from Apache Spark. Using Alchemist with Spark helps accelerate linear algebra, machine learning, and related computations, while still retaining the benefits of working within the Spark environment. We discuss the motivation behind the development of Alchemist, and we provide a brief overview of its design and implementation.

We also compare the performance of pure Spark implementations with that of Spark implementations that leverage MPI-based codes via Alchemist. To do so, we use two data science case studies: a large-scale application of the conjugate gradient method to solve very large linear systems arising in a speech classification problem, where we see an improvement of an order of magnitude; and the truncated singular value decomposition (SVD) of a 400GB three-dimensional ocean temperature data set, where we see a speedup of up to 7.9x. We also illustrate that the truncated SVD computation easily scales to terabyte-sized data by applying it to data sets of sizes up to 17.6TB.
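For readers unfamiliar with the two kernels named in the case studies, the following is a minimal single-node sketch in NumPy/SciPy of the conjugate gradient method and a rank-k truncated SVD. It only illustrates the algorithms themselves; it is not the distributed Spark or MPI-based implementation evaluated in the paper, and the test matrices are synthetic placeholders.

```python
import numpy as np
from scipy.sparse.linalg import svds

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small synthetic symmetric positive-definite system (stand-in for the
# much larger linear systems from the speech classification problem).
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100 * np.eye(100)
b = rng.standard_normal(100)
x = conjugate_gradient(A, b)
print("CG residual norm:", np.linalg.norm(A @ x - b))

# Rank-k truncated SVD of a small synthetic matrix via SciPy's
# iterative solver (stand-in for the 400GB ocean temperature data).
X = rng.standard_normal((2000, 50))
U, s, Vt = svds(X, k=10)
print("top singular values:", np.sort(s)[::-1])
```

In the paper's setting, both operations are applied to matrices far too large for a single node, which is exactly where offloading from Spark to MPI-based libraries pays off.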
