Number of pages: 66
Journal: Knowledge and Information Systems
Publication status: Published - 02 May 2020
A major challenge in the knowledge discovery process is to perform data pre-processing, specifically feature selection, on large amounts of data with high-dimensional attribute sets. A variety of techniques have been proposed in the literature to deal with this challenge, with different degrees of success, as most of these techniques need further information about the given input data for thresholding, need noise levels to be specified, or rely on feature ranking procedures. To overcome these limitations, Rough Set Theory (RST) can be used to discover the dependency within the data and reduce the number of attributes enclosed in an input data set, using the data alone and requiring no supplementary information. However, when it comes to massive data sets, RST reaches its limits, as it is highly computationally expensive. In this paper, we propose a scalable and effective rough set theory based approach for large-scale data pre-processing, specifically for feature selection, under the Spark framework. In our detailed experiments, data sets with up to 10,000 attributes have been considered, revealing that our proposed solution achieves a good speedup and performs its feature selection task well without sacrificing performance. This makes our approach relevant to big data.
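The dependency measure at the heart of RST-based feature selection can be illustrated with a small sketch. The snippet below is a minimal, single-machine toy (not the paper's Spark implementation): it computes the rough-set dependency degree gamma(C, D) = |POS_C(D)| / |U|, i.e. the fraction of objects whose equivalence class under the chosen condition attributes maps to a single decision value. The decision table, attribute indices, and function name are illustrative assumptions.

```python
from collections import defaultdict

def dependency_degree(rows, cond_idx, dec_idx):
    """Toy rough-set dependency of the decision attribute on condition attributes.

    gamma = |POS_C(D)| / |U|: the fraction of objects whose condition-attribute
    equivalence class is consistent (all members share one decision value).
    """
    decisions = defaultdict(set)  # condition-value tuple -> decision values seen
    counts = defaultdict(int)     # condition-value tuple -> number of objects
    for row in rows:
        key = tuple(row[i] for i in cond_idx)
        decisions[key].add(row[dec_idx])
        counts[key] += 1
    # Positive region: objects in equivalence classes with exactly one decision.
    pos = sum(counts[k] for k, decs in decisions.items() if len(decs) == 1)
    return pos / len(rows)

# Hypothetical decision table: columns 0-1 are condition attributes, 2 is the decision.
data = [
    ("sunny", "hot", "no"),
    ("sunny", "hot", "no"),
    ("rainy", "hot", "yes"),
    ("rainy", "cool", "yes"),
    ("sunny", "cool", "yes"),
]
print(dependency_degree(data, [0, 1], 2))  # -> 1.0: both attributes fully determine the decision
print(dependency_degree(data, [0], 2))     # -> 0.4: only the 'rainy' class is consistent
```

Feature selection then proceeds by searching for a minimal attribute subset (a reduct) whose dependency degree equals that of the full attribute set; it is this search over equivalence classes that becomes prohibitively expensive on massive data and motivates distributing the computation under Spark.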
Final published version, 2.03 MB, PDF
Licence: CC BY