no code implementations • Findings (EMNLP) 2021 • Feifan Yang, Tao Yang, Xiaojun Quan, Qinliang Su
We argue that the posts created by a user contain critical content that could help answer the questions in a questionnaire, enabling an assessment of the user's personality by linking the texts to the questionnaire.
no code implementations • 8 May 2023 • Zenan Xu, Xiaojun Meng, Yasheng Wang, Qinliang Su, Zexuan Qiu, Xin Jiang, Qun Liu
Multimodal abstractive summarization for videos (MAS) requires generating a concise textual summary to describe the highlights of a video according to multimodal resources, in our case, the video content and its transcript.
2 code implementations • 3 Feb 2023 • Bowen Tian, Qinliang Su, Jianxing Yu
When training on such datasets, existing GANs will learn a mixture distribution of desired and contaminated instances, rather than the distribution of the desired data only (the target distribution).
Semi-supervised Anomaly Detection
1 code implementation • 31 Oct 2022 • Zexuan Qiu, Qinliang Su, Jianxing Yu, Shijing Si
Efficient document retrieval heavily relies on the technique of semantic hashing, which learns a binary code for every document and employs Hamming distance to evaluate document distances.
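The appeal of semantic hashing is that, once every document is mapped to a binary code, nearest-neighbor search reduces to XOR and bit counting. A minimal sketch (the function names and toy codes below are illustrative, not from the paper):

```python
# Hamming-distance retrieval over binary document codes packed into ints.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary codes."""
    return bin(a ^ b).count("1")

def nearest(query: int, corpus: dict) -> str:
    """Return the doc id whose code is closest to the query in Hamming distance."""
    return min(corpus, key=lambda doc_id: hamming(query, corpus[doc_id]))

codes = {"doc_a": 0b1010, "doc_b": 0b1110, "doc_c": 0b0101}
print(nearest(0b1011, codes))  # doc_a: differs from the query in one bit
```

In practice the XOR/popcount pair is a single hardware instruction per machine word, which is what makes Hamming-space retrieval fast at scale.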
no code implementations • 26 May 2022 • Shijing Si, Jianzong Wang, Ruiyi Zhang, Qinliang Su, Jing Xiao
Non-negative matrix factorization (NMF) based topic modeling is widely used in natural language processing (NLP) to uncover hidden topics of short text documents.
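The core of NMF-based topic modeling is factoring a non-negative term-document matrix V into non-negative factors W (terms × topics) and H (topics × documents). A toy sketch using the classic multiplicative updates; all names and the toy matrix are illustrative, and this is not the paper's specific method:

```python
import numpy as np

def nmf(V, k, iters=200, eps=1e-9, seed=0):
    """Factor V ~= W @ H with non-negative W, H via multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update topic weights per doc
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update term weights per topic
    return W, H

# Toy term-document counts built from two "topics" (exactly rank 2).
V = np.array([[3, 0, 1.5],
              [2, 0, 1.0],
              [0, 4, 2.0],
              [0, 3, 1.5]])
W, H = nmf(V, k=2)
print(np.round(W @ H, 1))  # roughly reconstructs V
```

Columns of W with the largest entries per topic give the topic's top terms; for short texts, as the snippet above notes, the sparsity of V is what makes plain NMF struggle.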
no code implementations • 13 May 2022 • Zenan Xu, Wanjun Zhong, Qinliang Su, Zijing Ou, Fuwei Zhang
A key challenge in video question answering is how to realize the cross-modal semantic alignment between textual concepts and corresponding visual objects.
1 code implementation • 28 Apr 2022 • Bowen Tian, Qinliang Su, Jian Yin
The goal of anomaly detection is to identify anomalous samples from normal ones.
1 code implementation • 3 Mar 2022 • Zijing Ou, Tingyang Xu, Qinliang Su, Yingzhen Li, Peilin Zhao, Yatao Bian
Learning neural set functions is becoming increasingly important in many applications, such as product recommendation and compound selection in AI-aided drug discovery.
1 code implementation • Findings (EMNLP) 2021 • Zijing Ou, Qinliang Su, Jianxing Yu, Ruihui Zhao, Yefeng Zheng, Bang Liu
As a first try, we modify existing generative hashing models to accommodate the BERT embeddings.
1 code implementation • ACL 2021 • Zijing Ou, Qinliang Su, Jianxing Yu, Bang Liu, Jingwen Wang, Ruihui Zhao, Changyou Chen, Yefeng Zheng
With the need of fast retrieval speed and small memory footprint, document hashing has been playing a crucial role in large-scale information retrieval.
1 code implementation • 13 May 2021 • Zexuan Qiu, Qinliang Su, Zijing Ou, Jianxing Yu, Changyou Chen
Many unsupervised hashing methods are implicitly established on the idea of reconstructing the input data, which encourages the hashing codes to retain as much information of the original data as possible.
1 code implementation • ACL 2021 • Zenan Xu, Daya Guo, Duyu Tang, Qinliang Su, Linjun Shou, Ming Gong, Wanjun Zhong, Xiaojun Quan, Nan Duan, Daxin Jiang
We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa.
1 code implementation • COLING 2020 • Yunyi Yang, Kun Li, Xiaojun Quan, Weizhou Shen, Qinliang Su
One of the remaining challenges for aspect term extraction in sentiment analysis resides in the extraction of phrase-level aspect terms, for which determining the term boundaries is non-trivial.
Aspect Term Extraction and Sentiment Classification
no code implementations • ACL 2020 • Jianxing Yu, Wei Liu, Shuang Qiu, Qinliang Su, Kai Wang, Xiaojun Quan, Jian Yin
Specifically, we first build a multi-hop generation model and guide it to satisfy the logical rationality by the reasoning chain extracted from a given text.
no code implementations • ACL 2020 • Lin Zheng, Qinliang Su, Dinghan Shen, Changyou Chen
Generative semantic hashing is a promising technique for large-scale information retrieval thanks to its fast retrieval speed and small memory footprint.
no code implementations • 22 Apr 2020 • Yang Zhao, Ping Yu, Suchismit Mahapatra, Qinliang Su, Changyou Chen
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
no code implementations • IJCNLP 2019 • Wei Dong, Qinliang Su, Dinghan Shen, Changyou Chen
Hashing is promising for large-scale information retrieval tasks thanks to the efficiency of distance evaluation between binary codes.
no code implementations • IJCNLP 2019 • Zenan Xu, Qinliang Su, Xiaojun Quan, Weijia Zhang
Textual network embeddings aim to learn a low-dimensional representation for every node in the network so that both the structural and textual information from the network can be well preserved in the representations.
2 code implementations • ACL 2018 • Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, Lawrence Carin
Many deep learning architectures have been proposed to model the compositionality in text sequences, requiring a substantial number of parameters and expensive computations.
Ranked #1 on Named Entity Recognition (NER) on CoNLL 2000
1 code implementation • ACL 2018 • Dinghan Shen, Qinliang Su, Paidamoyo Chapfuwa, Wenlin Wang, Guoyin Wang, Lawrence Carin, Ricardo Henao
Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems.
no code implementations • ICLR 2018 • Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Ricardo Henao, Lawrence Carin
In this paper, we conduct an extensive comparative study of Simple Word Embeddings-based Models (SWEMs), which have no compositional parameters, against RNN/CNN-based models that operate on word embeddings.
no code implementations • 21 Sep 2017 • Dinghan Shen, Yizhe Zhang, Ricardo Henao, Qinliang Su, Lawrence Carin
A latent-variable model is introduced for text matching, inferring sentence representations by jointly optimizing generative and discriminative objectives.
no code implementations • NeurIPS 2017 • Qinliang Su, Xuejun Liao, Lawrence Carin
We present a probabilistic framework for nonlinearities, based on doubly truncated Gaussian distributions.
2 code implementations • 6 Sep 2017 • Liqun Chen, Shuyang Dai, Yunchen Pu, Chunyuan Li, Qinliang Su, Lawrence Carin
A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence.
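Unlike the standard (asymmetric) KL term in a VAE, the symmetric divergence KL(p||q) + KL(q||p) penalizes mismatch in both directions. A sketch for 1-D Gaussians, using the closed-form Gaussian KL (illustrative only, not the paper's full VAE objective):

```python
import math

def kl_gauss(mu_p, sig_p, mu_q, sig_q):
    """KL(N(mu_p, sig_p^2) || N(mu_q, sig_q^2)) in closed form."""
    return (math.log(sig_q / sig_p)
            + (sig_p**2 + (mu_p - mu_q)**2) / (2 * sig_q**2) - 0.5)

def sym_kl(mu_p, sig_p, mu_q, sig_q):
    """Symmetric KL: sum of the divergence in both directions."""
    return kl_gauss(mu_p, sig_p, mu_q, sig_q) + kl_gauss(mu_q, sig_q, mu_p, sig_p)

print(sym_kl(0.0, 1.0, 0.0, 1.0))  # identical distributions -> 0.0
print(sym_kl(0.0, 1.0, 1.0, 1.0))  # same value if p and q are swapped
```

Symmetry means neither distribution is privileged as the "reference", which is the structural change the symmetric-KL VAE exploits.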
no code implementations • 4 Sep 2017 • Changyou Chen, Wenlin Wang, Yizhe Zhang, Qinliang Su, Lawrence Carin
However, there has been little theoretical analysis of the impact of minibatch size on the algorithm's convergence rate.
no code implementations • ACL 2017 • Zhe Gan, Chunyuan Li, Changyou Chen, Yunchen Pu, Qinliang Su, Lawrence Carin
Recurrent neural networks (RNNs) have shown promising performance for language modeling.
no code implementations • 15 Nov 2016 • Qinliang Su, Xuejun Liao, Chunyuan Li, Zhe Gan, Lawrence Carin
Gaussian graphical models (GGMs) are widely used for statistical modeling because of the ease of inference and the ubiquitous use of the normal distribution in practical approximations.
no code implementations • 2 Jun 2016 • Qinliang Su, Xuejun Liao, Changyou Chen, Lawrence Carin
We introduce the truncated Gaussian graphical model (TGGM) as a novel framework for designing statistical models for nonlinear learning.