no code implementations • 10 Mar 2024 • Yipei Wang, Bing He, Shannon Risacher, Andrew Saykin, Jingwen Yan, Xiaoqian Wang
Specifically, we introduce a monotonicity constraint that encourages the model to predict disease risk in a consistent and ordered manner across follow-up visits.
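The monotonicity constraint described above can be realized as a differentiable penalty on the sequence of per-visit risk predictions. A minimal sketch, assuming a hinge-style penalty that punishes any drop in predicted risk between consecutive follow-up visits (the paper's exact formulation may differ):

```python
import numpy as np

def monotonicity_penalty(risks):
    """Hinge penalty that is zero when predicted risks are non-decreasing
    across consecutive follow-up visits and grows with each violation.
    Illustrative form only, not necessarily the paper's exact constraint."""
    risks = np.asarray(risks, dtype=float)
    diffs = risks[:-1] - risks[1:]        # positive entry = a risk decrease
    return float(np.maximum(diffs, 0.0).sum())

# Ordered predictions incur no penalty; a dip between visits is penalized.
ordered = monotonicity_penalty([0.1, 0.2, 0.35, 0.5])   # 0.0
dipped = monotonicity_penalty([0.1, 0.4, 0.3, 0.5])     # penalizes the 0.4 -> 0.3 drop
```

In training, such a term would be added to the main prediction loss with a weighting coefficient, steering the model toward consistent, ordered risk trajectories.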
no code implementations • 21 Oct 2019 • Jingwen Yan, Zixin Xie, Jingyao Chen, Yinan Liu, Lei Liu
Sparse models are widely used in hyperspectral image (HSI) classification. However, different choices of sparsity and regularization parameters have a great influence on the classification results. In this paper, a novel adaptive sparse deep network is proposed, which learns the optimal sparse representation and regularization parameters through a deep network. First, a data-flow graph is designed to represent each update iteration of the Alternating Direction Method of Multipliers (ADMM) algorithm, from which the forward network and the back-propagation network are derived. All parameters are updated by gradient descent during back-propagation. We then propose an Adaptive Sparse Deep Network. Compared with several traditional classifiers and other sparse-model algorithms, experimental results indicate that our method achieves a great improvement in HSI classification.
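The core idea of unrolling ADMM iterations into network layers can be sketched as follows. Each "layer" performs one ADMM update for the sparse coding problem min_x 0.5·||Dx − y||² + λ||x||₁; in the paper's adaptive network the per-layer parameters (λ, ρ) would be learned by back-propagation, whereas this sketch fixes them for illustration:

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_admm_sparse_code(D, y, n_layers=20, lam=0.1, rho=1.0):
    """Forward pass of an unrolled ADMM network for sparse coding.
    Each layer is one ADMM iteration; lam and rho are fixed here, but in an
    adaptive deep network they would be trainable per-layer parameters."""
    n = D.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # Pre-factor the x-update system (D^T D + rho*I) once, reused by all layers.
    A_inv = np.linalg.inv(D.T @ D + rho * np.eye(n))
    Dty = D.T @ y
    for _ in range(n_layers):
        x = A_inv @ (Dty + rho * (z - u))     # quadratic x-update
        z = soft_threshold(x + u, lam / rho)  # sparsifying z-update
        u = u + x - z                         # dual (scaled multiplier) update
    return z  # sparse code

# Toy usage: recover a sparse vector from noisy random measurements.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[[3, 40, 77]] = [1.5, -2.0, 1.0]
y = D @ x_true + 0.01 * rng.standard_normal(50)
z = unrolled_admm_sparse_code(D, y, n_layers=50, lam=0.5)
```

Because every update is differentiable (a linear solve, a soft-threshold, and an addition), gradients can flow back through all layers, which is what lets the network tune the sparsity and regularization parameters instead of hand-picking them.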
no code implementations • NeurIPS 2012 • Hua Wang, Feiping Nie, Heng Huang, Jingwen Yan, Sungeun Kim, Shannon Risacher, Andrew Saykin, Li Shen
Alzheimer disease (AD) is a neurodegenerative disorder characterized by progressive impairment of memory and other cognitive functions.