False Discovery Rate Control via Data Splitting

20 Feb 2020 · Chenguang Dai, Buyu Lin, Xin Xing, Jun S. Liu

Selecting the features relevant to a given response variable is an important problem in many scientific fields. Quantifying the quality and uncertainty of such selections via false discovery rate (FDR) control has attracted considerable recent interest. This paper introduces a data-splitting strategy that asymptotically controls the FDR for a variety of feature selection techniques while maintaining high power. For each feature, the method estimates two independent significance coefficients via data splitting and constructs a contrast statistic. FDR control is achieved by exploiting the fact that, for any null feature, the sampling distribution of its contrast statistic is symmetric about 0. We further propose a strategy that aggregates multiple data splits (MDS) to stabilize the selection result and boost power. Notably, this multiple data-splitting approach appears capable of overcoming the power loss caused by splitting the data while still keeping the FDR under control. The proposed framework applies to canonical statistical models including linear models, Gaussian graphical models, and deep neural networks. Simulation results, as well as a real-data application, show that the proposed approaches, especially the multiple data-splitting strategy, control the FDR well and are often more powerful than existing methods, including the Benjamini-Hochberg procedure and the knockoff filter.
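The single-split procedure described in the abstract can be sketched as follows. This is a minimal illustration, assuming a linear model with a cross-validated Lasso fit on one half of the data and an OLS refit on the other half; the particular contrast statistic sign(b1*b2)*(|b1|+|b2|), the threshold rule, and the helper name `ds_select` are one natural instantiation of the symmetric-null idea rather than a verbatim reproduction of the paper's algorithm.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def ds_select(X, y, q=0.1, seed=None):
    """Single data-split FDR control sketch (hypothetical helper).

    Splits the rows of (X, y) in half, estimates coefficients independently
    on each half, forms a signed contrast statistic per feature, and returns
    the features passing the data-driven threshold with estimated FDR <= q.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    idx = rng.permutation(n)
    half1, half2 = idx[: n // 2], idx[n // 2:]

    # Half 1: sparse estimate (cross-validated Lasso).
    beta1 = LassoCV(cv=5).fit(X[half1], y[half1]).coef_

    # Half 2: independent estimate, here OLS restricted to features kept by half 1.
    support = np.flatnonzero(beta1 != 0)
    beta2 = np.zeros(p)
    if support.size > 0:
        beta2[support] = LinearRegression().fit(X[half2][:, support], y[half2]).coef_

    # Contrast statistic: tends to be large and positive for true signals,
    # and symmetric about 0 for null features.
    M = np.sign(beta1 * beta2) * (np.abs(beta1) + np.abs(beta2))

    # Smallest threshold t whose estimated false discovery proportion is <= q.
    for t in np.sort(np.abs(M[M != 0])):
        fdp_hat = (M <= -t).sum() / max((M >= t).sum(), 1)
        if fdp_hat <= q:
            return np.flatnonzero(M >= t)
    return np.array([], dtype=int)
```

The multiple data-splitting (MDS) strategy mentioned above would, roughly, repeat this procedure over many random splits and aggregate the resulting selection sets, for example by retaining features whose inclusion rate across splits exceeds a data-driven cutoff.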
