no code implementations • IJCNLP 2017 • Peng Zhong, Jingbin Wang
In our system, the predictive MAE values for Valence and Arousal were 0.811 and 0.996, respectively, for the sentiment dimension prediction of Chinese words.
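The reported numbers are mean absolute errors (MAE) over the predicted Valence/Arousal scores. A minimal sketch of that metric, with made-up scores for illustration (the word scores and scale are not from the paper):

```python
# Minimal sketch of the mean absolute error (MAE) metric reported above.
# The predicted and gold valence scores below are illustrative only.

def mae(predicted, actual):
    """Mean absolute error between two equal-length score lists."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Hypothetical valence predictions for four words (e.g. on a 1-9 scale).
predicted_valence = [6.2, 3.1, 7.8, 4.5]
gold_valence = [6.9, 2.5, 7.0, 5.3]

print(round(mae(predicted_valence, gold_valence), 3))  # → 0.725
```

Lower MAE is better; a value of 0.811 on Valence means predictions deviate from the gold ratings by about 0.8 on average.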
no code implementations • 2 Mar 2017 • Yanyan Geng, Guohui Zhang, Weizhi Li, Yi Gu, Ru-Ze Liang, Gaoyuan Liang, Jingbin Wang, Yanbin Wu, Nitin Patil, Jing-Yan Wang
In this paper, we study the problem of image tag completion and propose a novel method for this problem based on a popular image representation method, the convolutional neural network (CNN).
no code implementations • 27 Sep 2016 • Yanyan Geng, Ru-Ze Liang, Weizhi Li, Jingbin Wang, Gaoyuan Liang, Chenhao Xu, Jing-Yan Wang
The CNN model is used to represent the multi-instance data point, and a classifier function is used to predict the label from its CNN representation.
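The pipeline described above — represent each instance, aggregate the bag, then classify — can be sketched as follows. This is a hedged illustration, not the paper's model: the CNN is replaced by a stand-in feature function, and the pooling, weights, and data are all made up:

```python
# Sketch of a multi-instance classification pipeline: instance features
# are extracted, pooled into one bag-level vector, and fed to a linear
# classifier. The CNN is replaced by a toy feature function here.
import math

def instance_features(instance):
    # Stand-in for the CNN: maps a raw instance (a list of numbers)
    # to a fixed-length feature vector.
    return [sum(instance), max(instance)]

def bag_representation(bag):
    # Max-pool the instance features into one vector per bag.
    feats = [instance_features(x) for x in bag]
    return [max(col) for col in zip(*feats)]

def classify(bag, weights, bias):
    # Linear classifier (with a sigmoid) on the pooled bag representation.
    z = sum(w * f for w, f in zip(weights, bag_representation(bag))) + bias
    return 1 if 1 / (1 + math.exp(-z)) > 0.5 else 0

bag = [[0.2, 0.1], [0.9, 0.4]]  # one bag containing two instances
print(classify(bag, [1.0, -0.5], -1.0))  # → 0
```

The key multi-instance idea is that the label attaches to the bag, not to any single instance, so the pooling step is what connects instance-level features to the bag-level classifier.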
no code implementations • 7 Jun 2016 • Ru-Ze Liang, Wei Xie, Weizhi Li, Xin Du, Jim Jing-Yan Wang, Jingbin Wang
The existing semi-supervised structured output prediction methods learn a global predictor for all the data points in a data set, which ignores the differences among local distributions of the data set and their effects on structured output prediction.
no code implementations • 25 Aug 2015 • Jingbin Wang, Haoxiang Wang, Yihua Zhou, Nancy McDonald
The learning of the classifier parameter and the kernel weight is unified in a single objective function that minimizes the upper bound of the given multivariate performance measure.
no code implementations • 18 Aug 2015 • Xuejie Liu, Jingbin Wang, Ming Yin, Benjamin Edwards, Peijuan Xu
The context of a data point, usually defined as the other data points in the same data set, has been found to play an important role in data representation and classification.
no code implementations • 18 Feb 2015 • Jingbin Wang, Yihua Zhou, Kanghong Duan, Jim Jing-Yan Wang, Halima Bensmail
In this problem, each document is composed of two different modalities of data, i.e., an image and a text.
no code implementations • 30 Jan 2015 • Lan Yang, Jingbin Wang, Yujin Tu, Prarthana Mahapatra, Nelson Cardoso
This paper proposes a new method for vector quantization that minimizes the Kullback-Leibler divergence between the class label distributions over the quantization inputs, which are the original vectors, and over the outputs, which are the quantization subsets of the vector set.
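The core quantity in that objective is a KL divergence between two class-label distributions: one over the full vector set and one induced by a quantization subset. A minimal sketch under made-up labels and an arbitrary candidate subset (none of this data is from the paper, and the full method would search over partitions rather than score one subset):

```python
# Illustrative sketch: compare the class label distribution of the full
# vector set against the distribution induced by one quantization subset,
# using KL divergence. Labels and the subset are made up for illustration.
import math
from collections import Counter

def label_distribution(labels):
    # Empirical class label distribution of a set of points.
    counts = Counter(labels)
    total = len(labels)
    return {c: counts[c] / total for c in counts}

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) over the union of class labels; eps guards labels
    # missing from one of the two distributions.
    keys = set(p) | set(q)
    return sum(p.get(k, eps) * math.log(p.get(k, eps) / q.get(k, eps))
               for k in keys)

labels = ["a", "a", "b", "b", "b", "c"]
subset = [0, 2, 3]  # indices of one candidate quantization subset

p = label_distribution(labels)                     # full set
q = label_distribution([labels[i] for i in subset])  # subset
print(kl_divergence(p, q) >= 0)  # → True (KL is non-negative)
```

A quantization whose subsets preserve the class label distribution of the original vectors drives this divergence toward zero, which is the intuition behind minimizing it.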