no code implementations • 19 Nov 2016 • Yanbin Wu, Li Wang, Fan Cui, Hongbin Zhai, Baoming Dong, Jim Jing-Yan Wang
A novel data representation method based on the convolutional neural network (CNN) is proposed in this paper to represent data of different modalities.
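For intuition, a minimal sketch of the general idea, assuming two modality-specific CNN branches projected into one shared space; the architecture and layer sizes here are invented for the example and are not the paper's model:

    import torch
    import torch.nn as nn

    class TwoBranchCNN(nn.Module):
        """Map two modalities (e.g., RGB images and single-channel maps) into one space."""
        def __init__(self, shared_dim=128):
            super().__init__()
            # One small convolutional branch per modality.
            self.branch_a = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.branch_b = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            # Linear projections into the common representation space.
            self.proj_a = nn.Linear(16, shared_dim)
            self.proj_b = nn.Linear(16, shared_dim)

        def forward(self, x_a, x_b):
            return self.proj_a(self.branch_a(x_a)), self.proj_b(self.branch_b(x_b))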
no code implementations • 16 Aug 2016 • Ru-Ze Liang, Wei Xie, Weizhi Li, Hongqi Wang, Jim Jing-Yan Wang, Lisa Taylor
We map the data of the two domains into a single common space and learn a classifier in this common space.
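A toy sketch of this common-space idea, assuming linear maps W1 and W2, a shared linear classifier w, and joint hinge-loss gradient descent; the paper's actual objective may differ:

    import numpy as np

    def fit_common_space(X1, y1, X2, y2, d=10, lr=1e-3, iters=500, lam=0.1):
        # X1, X2: samples from the two domains; y1, y2: labels in {-1, +1}.
        rng = np.random.default_rng(0)
        W1 = 0.01 * rng.standard_normal((X1.shape[1], d))
        W2 = 0.01 * rng.standard_normal((X2.shape[1], d))
        w = np.zeros(d)
        for _ in range(iters):
            for X, y, W in ((X1, y1, W1), (X2, y2, W2)):
                Z = X @ W                        # map this domain into the common space
                active = y * (Z @ w) < 1         # samples violating the margin
                w -= lr * (lam * w - (y[active, None] * Z[active]).sum(0))
                W -= lr * (lam * W - X[active].T @ (y[active, None] * w[None, :]))
        return W1, W2, w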
no code implementations • 7 Jun 2016 • Ru-Ze Liang, Wei Xie, Weizhi Li, Xin Du, Jim Jing-Yan Wang, Jingbin Wang
The existing semi-supervised structured output prediction methods learn a global predictor for all the data points in a data set, which ignores the differences among the local distributions of the data set and their effects on the structured output prediction.
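As a toy stand-in for the local-predictor idea (using ridge regression in place of a structured output model), one can fit a separate small predictor on each point's neighborhood instead of one global model:

    import numpy as np

    def local_predict(X_train, Y_train, x, k=10, ridge=1e-2):
        # Fit a ridge-regression predictor on the k nearest neighbors of x only.
        dist = np.linalg.norm(X_train - x, axis=1)
        idx = np.argsort(dist)[:k]
        Xn, Yn = X_train[idx], Y_train[idx]
        A = Xn.T @ Xn + ridge * np.eye(X_train.shape[1])
        W = np.linalg.solve(A, Xn.T @ Yn)        # local model, used for x alone
        return x @ W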
no code implementations • 22 Apr 2016 • Ru-Ze Liang, Lihui Shi, Haoxiang Wang, Jiandong Meng, Jim Jing-Yan Wang, Qingquan Sun, Yi Gu
To fill this gap, in this paper, we propose a novel similarity learning method to maximize the top precision measure.
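For reference, the quantity being maximized: with a bilinear similarity s(x, y) = x' M y, the top precision is the fraction of true matches among the k highest-scoring database items. A minimal sketch:

    import numpy as np

    def top_precision(M, query, database, relevant, k=10):
        # Bilinear similarity of every database item to the query.
        scores = database @ (M @ query)
        top = np.argsort(scores)[::-1][:k]       # k highest-scoring items
        return np.isin(top, relevant).mean()     # precision at the top of the list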
no code implementations • 18 Feb 2015 • Jingbin Wang, Yihua Zhou, Kanghong Duan, Jim Jing-Yan Wang, Halima Bensmail
In this problem, each document is composed of two different modalities of data, i.e., an image and a text.
no code implementations • 9 Feb 2015 • Mohua Zhang, Jianhua Peng, Xuejie Liu, Jim Jing-Yan Wang
It attempts to represent the feature vector of a data sample by reconstructing it as a sparse linear combination of some basis elements, and an $L_2$ norm distance function is usually used as the loss function for the reconstruction error.
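A minimal sketch of that standard objective, min_s ||x - D s||_2^2 + alpha ||s||_1, solved here by a few ISTA iterations (illustrative, not this paper's solver):

    import numpy as np

    def sparse_code(x, D, alpha=0.1, iters=100):
        # ISTA: gradient step on the L2 reconstruction loss, then soft-thresholding.
        L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
        s = np.zeros(D.shape[1])
        for _ in range(iters):
            z = s - D.T @ (D @ s - x) / L
            s = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)
        return s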
no code implementations • 18 Jan 2015 • Jim Jing-Yan Wang, Yunji Wang, Bing-Yi Jing, Xin Gao
To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework.
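A sketch of the idea under simplifying assumptions (a linear predictor, a Gaussian kernel of width sigma, and an L2 regularizer); the gradient ascent below maximizes the mean correntropy between predictions and labels:

    import numpy as np

    def fit_mcc(X, y, sigma=1.0, lam=0.1, lr=0.5, iters=200):
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            r = X @ w - y                                  # prediction residuals
            k = np.exp(-r ** 2 / (2 * sigma ** 2))         # Gaussian kernel weights
            grad = -X.T @ (k * r) / (sigma ** 2 * len(y)) - 2 * lam * w
            w += lr * grad                                 # ascend the objective
        return w

Note how the kernel weights k shrink toward zero for large residuals, which is what makes correntropy-based learning robust to outliers and label noise.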
no code implementations • 15 Jan 2015 • Jim Jing-Yan Wang
We propose to learn a linear discriminant function for each view and combine them to construct an overall multivariate mapping function for multi-view data.
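Schematically, the combination could look like the following, where each view gets its own discriminant vector and learned weights beta fuse the per-view scores (the learning rules for W and beta are omitted):

    import numpy as np

    def multiview_score(views, W, beta):
        # views: list of (n, d_v) matrices, one per view; W: per-view discriminant
        # vectors; beta: nonnegative combination weights over the views.
        return sum(b * (X @ w) for b, X, w in zip(beta, views, W))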
no code implementations • 3 Oct 2014 • Jim Jing-Yan Wang, Xin Gao
Recently, manifold-regularized NMF, which uses a nearest-neighbor graph to regularize the learning of the factorization parameter matrices, has shown its advantage over traditional NMF methods for data representation problems.
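A sketch of graph-regularized NMF in the style of Cai et al.'s GNMF, where X ≈ U V' and the coefficient matrix V is smoothed over a nearest-neighbor affinity graph W via multiplicative updates:

    import numpy as np

    def gnmf(X, W, k=10, lam=1.0, iters=200, eps=1e-9):
        # X: (n_features, n_samples) nonnegative data; W: (n_samples, n_samples) affinity.
        n_feat, n_samp = X.shape
        D = np.diag(W.sum(axis=1))                 # degree matrix of the graph
        rng = np.random.default_rng(0)
        U = rng.random((n_feat, k))
        V = rng.random((n_samp, k))
        for _ in range(iters):
            U *= (X @ V) / (U @ (V.T @ V) + eps)
            V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
        return U, V

The lam * (W @ V) and lam * (D @ V) terms implement the graph Laplacian penalty tr(V' L V) that pulls neighboring samples toward similar coefficients.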
no code implementations • 27 Sep 2014 • Jim Jing-Yan Wang, Yi Wang, Shiguang Zhao, Xin Gao
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label.
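The regularizer's core quantity is the empirical mutual information between the (discrete) classifier responses and the true labels; a direct computation:

    import numpy as np

    def mutual_information(pred, true):
        # pred, true: integer class arrays of the same length.
        mi = 0.0
        for cp in np.unique(pred):
            for ct in np.unique(true):
                p_joint = np.mean((pred == cp) & (true == ct))
                if p_joint > 0:
                    p_p, p_t = np.mean(pred == cp), np.mean(true == ct)
                    mi += p_joint * np.log(p_joint / (p_p * p_t))
        return mi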
no code implementations • Elsevier Ltd 2014 • Jim Jing-Yan Wang, Jianhua Z. Huang, Yijun Sun, Xin Gao
To overcome these bottlenecks, we propose two novel graph-regularized NMF methods, AGNMFFS and AGNMFMK, by introducing feature selection and multiple-kernel learning into graph-regularized NMF, respectively.
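The multiple-kernel ingredient can be pictured as building the affinity graph from a convex combination of candidate kernels; a minimal sketch (the weight-update rule is omitted):

    import numpy as np

    def combined_affinity(kernels, mu):
        # kernels: list of (n, n) kernel matrices; mu: nonnegative weights summing to 1.
        return sum(m * K for m, K in zip(mu, kernels))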
no code implementations • 8 Sep 2014 • Jim Jing-Yan Wang, Xuefeng Cui, Ge Yu, Lili Guo, Xin Gao
In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm.
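A toy sketch of the joint idea, alternating between sparse coding the samples and fitting a linear ranking score on the resulting codes; this illustrates the coupling only, not the paper's algorithm (which would also update the dictionary):

    import numpy as np

    def ista(x, D, alpha=0.1, iters=100):
        L = np.linalg.norm(D, 2) ** 2
        s = np.zeros(D.shape[1])
        for _ in range(iters):
            z = s - D.T @ (D @ s - x) / L
            s = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)
        return s

    def joint_code_and_rank(X, D, r, alpha=0.1, rounds=3):
        # X: (n, p) samples; D: (p, m) dictionary; r: (n,) relevance supervision.
        for _ in range(rounds):
            S = np.array([ista(x, D, alpha) for x in X])   # sparse codes
            u, *_ = np.linalg.lstsq(S, r, rcond=None)      # ranking weights on codes
            # A full method would also update D using u; omitted here.
        return S @ u                                       # learned ranking scores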
no code implementations • 3 Sep 2014 • Jim Jing-Yan Wang
The problem is to learn a predictor for the target domain that predicts the structured outputs from the inputs.
no code implementations • 22 Apr 2014 • Jim Jing-Yan Wang, Majed Alzahrani, Xin Gao
In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets.
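As a crude stand-in (the paper's set representation is more involved), one can summarize each image set by its mean feature vector and train a max-margin linear SVM on the set-level vectors:

    import numpy as np
    from sklearn.svm import LinearSVC

    def fit_set_classifier(image_sets, labels):
        # image_sets: list of (m_i, d) arrays of frame features; one label per set.
        means = np.stack([s.mean(axis=0) for s in image_sets])
        return LinearSVC(C=1.0).fit(means, labels)         # max-margin on set means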
no code implementations • 5 Dec 2013 • Jim Jing-Yan Wang
It tries to decompose a nonnegative matrix of data samples into the product of a nonnegative basis matrix and a nonnegative coefficient matrix, and the coefficient matrix is used as the new representation.
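The decomposition described here, using scikit-learn's NMF for brevity: X ≈ W H, where each row of W becomes the new low-dimensional representation of a sample (the data below are random placeholders):

    import numpy as np
    from sklearn.decomposition import NMF

    X = np.random.default_rng(0).random((100, 50))   # nonnegative data, one sample per row
    model = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(X)    # coefficient matrix: the new 10-dim representations
    H = model.components_         # nonnegative basis matrix, X ~ W @ H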
no code implementations • 27 Nov 2013 • Jim Jing-Yan Wang
Sparse coding has shown its power as an effective data representation method.
no code implementations • 26 Nov 2013 • Jim Jing-Yan Wang, Xin Gao
Sparse coding approximates a data sample as a sparse linear combination of some basic codewords and uses the sparse codes as the new representations.
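A library version of this sentence, encoding samples against a fixed dictionary of codewords and taking the sparse codes as the new representation (dictionary and data are random placeholders):

    import numpy as np
    from sklearn.decomposition import SparseCoder

    rng = np.random.default_rng(0)
    D = rng.standard_normal((20, 50))                 # 20 codewords of dimension 50
    D /= np.linalg.norm(D, axis=1, keepdims=True)     # unit-norm atoms
    X = rng.standard_normal((5, 50))                  # 5 samples to re-represent
    coder = SparseCoder(dictionary=D, transform_algorithm="lasso_lars",
                        transform_alpha=0.1)
    codes = coder.transform(X)                        # (5, 20) sparse codes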
no code implementations • 18 Aug 2012 • Jim Jing-Yan Wang, Halima Bensmail, Xin Gao
However, the existing graph-regularized ranking methods are very sensitive to the choice of the graph model and its parameters, and this remains a difficult problem for most protein domain ranking methods.
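For context, the kind of graph-regularized ranking such methods build on (manifold ranking in the style of Zhou et al.) propagates query scores over a normalized affinity graph:

    import numpy as np

    def graph_rank(W, y, alpha=0.9):
        # W: (n, n) affinity graph; y: query indicator over the n items.
        d = W.sum(axis=1)
        S = W / np.sqrt(np.outer(d, d))               # symmetric normalization
        return np.linalg.solve(np.eye(len(y)) - alpha * S, (1 - alpha) * y)

Changing W or alpha changes the entire ranking, which is exactly the sensitivity to the graph model and parameters that the sentence refers to.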