Relational datasets are being generated at a rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and the relationships between attributes.

We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop SQUISH, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rates. SQUISH also supports user-defined attributes: users can add new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: SQUISH achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets.
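To illustrate the entropy-coding half of this combination, below is a minimal sketch of arithmetic coding: each symbol narrows an interval in proportion to its probability, so likely symbols cost few bits. The fixed probability table here is purely illustrative; in SQUISH the probabilities would instead be supplied per-attribute by the learned Bayesian Network.

```python
# Minimal arithmetic-coding sketch (illustrative; not SQUISH's implementation).
# Exact rational arithmetic via Fraction avoids floating-point precision issues.
from fractions import Fraction

def encode(symbols, probs):
    """Narrow [low, high) by each symbol's probability sub-interval."""
    low, high = Fraction(0), Fraction(1)
    for s in symbols:
        span = high - low
        cum = Fraction(0)
        for sym, p in probs.items():
            if sym == s:
                low, high = low + span * cum, low + span * (cum + p)
                break
            cum += p
    # Any number inside [low, high) identifies the whole sequence.
    return (low + high) / 2

def decode(code, probs, n):
    """Recover n symbols by finding which sub-interval contains the code."""
    out = []
    low, high = Fraction(0), Fraction(1)
    for _ in range(n):
        span = high - low
        cum = Fraction(0)
        for sym, p in probs.items():
            if low + span * cum <= code < low + span * (cum + p):
                out.append(sym)
                low, high = low + span * cum, low + span * (cum + p)
                break
            cum += p
    return out

# Toy model: symbol probabilities 1/2, 1/4, 1/4.
probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
msg = ["a", "b", "a", "c"]
code = encode(msg, probs)
assert decode(code, probs, len(msg)) == msg
```

A sequence of probability-1/2 symbols halves the interval each step, matching the one-bit-per-symbol entropy bound; this is the sense in which arithmetic coding approaches entropy when the model's probabilities are accurate.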
