A Sample-and-Clean Framework for Fast and Accurate Query Processing on Dirty Data

In emerging Big Data scenarios, obtaining timely, high-quality answers to aggregate queries is difficult due to the challenges of processing and cleaning large, dirty data sets. To increase the speed of query processing, there has been a resurgence of interest in sampling-based approximate query processing (SAQP). In its usual formulation, however, SAQP does not address data cleaning at all, and in fact exacerbates answer quality problems by introducing sampling error. In this paper, we explore an intriguing opportunity: using sampling to actually improve answer quality. We introduce the Sample-and-Clean framework, which applies data cleaning to a relatively small subset of the data and uses the results of the cleaning process to lessen the impact of dirty data on aggregate query answers. We derive confidence intervals as a function of sample size and show how our approach addresses error bias. We evaluate the Sample-and-Clean framework using data from three sources: the TPC-H benchmark with synthetic noise, a subset of the Microsoft Academic citation index, and a sensor data set. Our results are consistent with the theoretical confidence intervals and suggest that the Sample-and-Clean framework can produce significant improvements in accuracy over query processing without data cleaning, and in speed over data cleaning without sampling.
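The core idea described above, cleaning only a small random sample and using the observed per-record corrections to debias the full-data aggregate, can be sketched as follows. This is a minimal illustration, not the paper's exact estimators: the `clean_fn` oracle and the parameter names are hypothetical, and the confidence interval is a plain CLT-based interval whose width shrinks with the sample size `k`, as the abstract indicates.

```python
import math
import random

def sample_and_clean_mean(dirty, clean_fn, k, z=1.96):
    """Estimate the mean of the *cleaned* data while cleaning only a
    random sample of k records.

    dirty    : list of (possibly dirty) numeric values
    clean_fn : hypothetical cleaning oracle mapping a dirty value to
               its cleaned value (stands in for a real cleaning step)
    k        : number of records to clean
    z        : normal quantile for the confidence interval
    """
    n = len(dirty)
    sample = random.sample(range(n), k)
    # Per-record errors observed on the cleaned sample.
    diffs = [dirty[i] - clean_fn(dirty[i]) for i in sample]
    mean_diff = sum(diffs) / k
    # Correct the full dirty mean by the estimated average error,
    # lessening the bias that dirty records introduce.
    estimate = sum(dirty) / n - mean_diff
    # CLT-based half-width: the interval narrows as k grows.
    var = sum((d - mean_diff) ** 2 for d in diffs) / max(k - 1, 1)
    half_width = z * math.sqrt(var / k)
    return estimate, half_width
```

For instance, if every record carries a constant additive error, the sampled corrections recover that offset and the debiased estimate matches the true clean mean, while only `k` of the `n` records were ever cleaned.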