no code implementations • EMNLP 2021 • Yuxian Meng, Xiang Ao, Qing He, Xiaofei Sun, Qinghong Han, Fei Wu, Chun Fan, Jiwei Li
A long-standing issue with paraphrase generation is how to obtain reliable supervision signals.
1 code implementation • 12 May 2021 • Yuxiao Lin, Yuxian Meng, Xiaofei Sun, Qinghong Han, Kun Kuang, Jiwei Li, Fei Wu
In this work, we propose BertGCN, a model that combines large-scale pretraining and transductive learning for text classification.
Ranked #1 on Text Classification on 20 Newsgroups
1 code implementation • 30 Dec 2020 • Yuxian Meng, Shuhe Wang, Qinghong Han, Xiaofei Sun, Fei Wu, Rui Yan, Jiwei Li
Based on this dataset, we propose a family of encoder-decoder models leveraging both textual and visual contexts, from coarse-grained image features extracted from CNNs to fine-grained object features extracted from Faster R-CNNs.
1 code implementation • 3 Dec 2020 • Zijun Sun, Chun Fan, Qinghong Han, Xiaofei Sun, Yuxian Meng, Fei Wu, Jiwei Li
The proposed model comes with the following merits: (1) span weights make the model self-explainable and do not require an additional probing model for interpretation; (2) the proposed model is general and can be adapted to any existing deep learning structures in NLP; (3) the weight associated with each text span provides direct importance scores for higher-level text units such as phrases and sentences.
Ranked #2 on Sentiment Analysis on SST-5 Fine-grained classification (using extra training data)
no code implementations • NeurIPS 2020 • Xiaoya Li, Yuxian Meng, Mingxin Zhou, Qinghong Han, Fei Wu, Jiwei Li
In this way, the model is able to select the most salient nodes and reduce the quadratic complexity regardless of the sequence length.
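The idea of selecting the most salient nodes can be sketched as follows. This is a minimal illustration, not the paper's architecture: the saliency score here is just the L2 norm of each token representation (a hypothetical stand-in for a learned scorer), and attention is restricted to the top-k selected nodes, making the attention step O(n·k) rather than O(n²).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def salient_attention(X, k):
    """Attend each token only to the k most salient tokens.

    X: (n, d) token representations. The L2-norm saliency score is a
    hypothetical placeholder for a learned scoring function; the point
    is that the score matrix is (n, k) instead of (n, n).
    """
    n, d = X.shape
    k = min(k, n)
    saliency = np.linalg.norm(X, axis=1)      # (n,) per-token saliency
    idx = np.argsort(-saliency)[:k]           # indices of the top-k nodes
    K = X[idx]                                # (k, d) selected keys/values
    scores = X @ K.T / np.sqrt(d)             # (n, k), sub-quadratic
    return softmax(scores, axis=1) @ K        # (n, d) contextualized output
```

Because only k columns of the score matrix exist, cost no longer grows quadratically with sequence length, matching the claim in the snippet above.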
no code implementations • 11 Feb 2020 • Qinghong Han, Yuxian Meng, Fei Wu, Jiwei Li
Unfortunately, under the framework of the sequence-to-sequence model, direct decoding from $\log p(y|x) + \log p(x|y)$ is infeasible, since the second term (i.e., $p(x|y)$) can only be computed once target generation is complete, and the search space for $y$ is enormous.
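A common workaround for this infeasibility (used in prior reranking work, and sketched here only as an illustration, not necessarily this paper's exact method) is to first collect an n-best list of complete hypotheses from the forward model, then add the backward score to each finished candidate. The scoring functions below are hypothetical stand-ins for the two trained models:

```python
def rerank(candidates, forward_logp, backward_logp):
    """Pick the hypothesis maximizing log p(y|x) + log p(x|y).

    Direct decoding with the backward term is infeasible because
    p(x|y) requires a *finished* target y, so we score only complete
    candidates drawn from the forward model's n-best list.
    `forward_logp` and `backward_logp` are placeholders for the two
    directional models.
    """
    scored = [(forward_logp(y) + backward_logp(y), y) for y in candidates]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[0][1]
```

The n-best list also sidesteps the enormous search space over $y$: only a handful of complete candidates ever need the expensive backward score.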
no code implementations • ICML 2020 • Duo Chai, Wei Wu, Qinghong Han, Fei Wu, Jiwei Li
We observe significant performance boosts over strong baselines on a wide range of text classification tasks including single-label classification, multi-label classification and multi-aspect sentiment analysis.
7 code implementations • ACL 2020 • Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, Jiwei Li
Instead of treating the task of NER as a sequence labeling problem, we propose to formulate it as a machine reading comprehension (MRC) task.
Ranked #2 on Nested Mention Recognition on ACE 2004 (using extra training data)
Chinese Named Entity Recognition • Entity Extraction using GAN
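The MRC formulation above can be sketched in a few lines: each entity type is turned into a natural-language query, and entity mentions are extracted as answer spans from (query, context) pairs. The queries and the type-to-query mapping below are illustrative assumptions, not the paper's exact ones (which are built from annotation guidelines):

```python
# Hypothetical type->query mapping for illustration only.
QUERIES = {
    "PER": "Find person names in the text.",
    "LOC": "Find location names in the text.",
}

def ner_as_mrc(context, gold_spans):
    """Turn one sentence into MRC-style (query, context, spans) examples.

    gold_spans: {entity_type: [(start, end), ...]} with token offsets.
    One example per entity type; a type with no mentions gets an empty
    span list, which is how "no entity of this type" is represented.
    """
    examples = []
    for etype, query in QUERIES.items():
        examples.append({
            "query": query,
            "context": context,
            "spans": gold_spans.get(etype, []),
        })
    return examples
```

A side effect of this framing is that nested mentions are handled naturally: overlapping spans of different types simply live in different (query, context) examples, which sequence labeling cannot express.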
no code implementations • ACL 2019 • Xiaoya Li, Yuxian Meng, Xiaofei Sun, Qinghong Han, Arianna Yuan, Jiwei Li
Based on these observations, we conduct comprehensive experiments to study why word-based models underperform char-based models in these deep learning-based NLP tasks.
2 code implementations • NeurIPS 2019 • Yuxian Meng, Wei Wu, Fei Wang, Xiaoya Li, Ping Nie, Fan Yin, Muyu Li, Qinghong Han, Xiaofei Sun, Jiwei Li
However, due to the lack of rich pictographic evidence in glyphs and the weak generalization ability of standard computer vision models on character data, an effective way to utilize the glyph information remains to be found.
Ranked #1 on Chinese Word Segmentation on AS
Chinese Dependency Parsing • Chinese Named Entity Recognition