Search Results for author: Fei Huang

Found 60 papers, 29 papers with code

PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation

no code implementations EMNLP 2020 Bin Bi, Chenliang Li, Chen Wu, Ming Yan, Wei Wang, Songfang Huang, Fei Huang, Luo Si

An extensive set of experiments shows that PALM achieves new state-of-the-art results on a variety of language generation benchmarks covering generative question answering (Rank 1 on the official MARCO leaderboard), abstractive summarization on CNN/DailyMail as well as Gigaword, question generation on SQuAD, and conversational response generation on Cornell Movie Dialogues.

Abstractive Text Summarization Conversational Response Generation +6

MuVER: Improving First-Stage Entity Retrieval with Multi-View Entity Representations

1 code implementation13 Sep 2021 Xinyin Ma, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Weiming Lu

Entity retrieval, which aims at disambiguating mentions to canonical entities from massive KBs, is essential for many tasks in natural language processing.

Entity Linking Entity Retrieval

LightNER: A Lightweight Generative Framework with Prompt-guided Attention for Low-resource NER

no code implementations31 Aug 2021 Xiang Chen, Ningyu Zhang, Lei LI, Xin Xie, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen

Most existing NER methods rely on extensive labeled data for model training and struggle in low-resource scenarios with limited training data.

Few-Shot Learning Language Modelling +2

Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners

no code implementations30 Aug 2021 Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, Huajun Chen

Large-scale pre-trained language models have contributed significantly to natural language processing by demonstrating remarkable abilities as few-shot learners.

Language Modelling

Product-oriented Machine Translation with Cross-modal Cross-lingual Pre-training

1 code implementation25 Aug 2021 Yuqing Song, ShiZhe Chen, Qin Jin, Wei Luo, Jun Xie, Fei Huang

Firstly, product descriptions contain many specialized jargon terms, which are ambiguous to translate without the product image.

Machine Translation

Risk Minimization for Zero-shot Sequence Labeling

no code implementations ACL 2021 Zechuan Hu, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu

In this paper, we propose a novel unified framework for zero-shot sequence labeling with minimum risk training and design a new decomposable risk function that models the relations between the predicted labels from the source models and the true labels.
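
The paper's exact decomposable risk function is not reproduced in this excerpt; as a rough illustration of minimum risk training for sequence labeling, the toy loss below scores the model's label distribution against aggregated source-model predictions. The function names and the specific risk definition are assumptions, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def minimum_risk_loss(target_logits, source_probs):
    """Toy minimum-risk objective: the expected risk of the target model's
    label distribution, where per-label risk is derived from aggregated
    source-model probabilities (labels the sources agree on incur low risk).

    target_logits: (seq_len, num_labels) scores from the model being trained
    source_probs:  (seq_len, num_labels) aggregated source-model probabilities
    """
    target_probs = F.softmax(target_logits, dim=-1)
    risk = 1.0 - source_probs                        # high risk for labels the sources reject
    expected_risk = (target_probs * risk).sum(dim=-1)
    return expected_risk.mean()

# Toy usage: 5 tokens, 3 labels
logits = torch.randn(5, 3, requires_grad=True)
source = F.softmax(torch.randn(5, 3), dim=-1)
minimum_risk_loss(logits, source).backward()
```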

Multi-View Cross-Lingual Structured Prediction with Minimum Supervision

no code implementations ACL 2021 Zechuan Hu, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu

In structured prediction problems, cross-lingual transfer learning is an efficient way to train quality models for low-resource languages, and further improvement can be obtained by learning from multiple source languages.

Cross-Lingual Transfer Structured Prediction +1

Document-level Relation Extraction as Semantic Segmentation

2 code implementations7 Jun 2021 Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Chuanqi Tan, Mosha Chen, Fei Huang, Luo Si, Huajun Chen

Specifically, we leverage an encoder module to capture the context information of entities and a U-shaped segmentation module over the image-style feature map to capture global interdependency among triples.

Document-level Relation Extraction +1
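
As a rough illustration of the "semantic segmentation over an entity-pair feature map" idea above, the sketch below runs a tiny U-shaped convolutional module over an (entities × entities) grid of pair features; the layer sizes and names are hypothetical and not the paper's architecture.

```python
import torch
import torch.nn as nn

class PairMapSegmenter(nn.Module):
    """Toy U-shaped module over an entity-pair 'image': one down-sampling and
    one up-sampling stage, ending with per-pair relation logits."""
    def __init__(self, d_in, n_relations, hidden=64):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(d_in, hidden, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, 2, stride=2), nn.ReLU(),
        )
        self.head = nn.Conv2d(hidden, n_relations, 1)

    def forward(self, pair_map):            # (batch, d_in, n_entities, n_entities)
        x = self.down(pair_map)
        x = self.up(x)
        return self.head(x)                 # (batch, n_relations, n_entities, n_entities)

# Toy usage: 8 entities, 128-dim pair features, 4 relation types
pair_features = torch.randn(1, 128, 8, 8)
relation_logits = PairMapSegmenter(128, 4)(pair_features)
```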

NAST: A Non-Autoregressive Generator with Word Alignment for Unsupervised Text Style Transfer

1 code implementation4 Jun 2021 Fei Huang, Zikai Chen, Chen Henry Wu, Qihan Guo, Xiaoyan Zhu, Minlie Huang

First, we observe that most words in the transferred sentence can be aligned with related words in the source sentence, so we explicitly model word alignments to suppress irrelevant words.

Style Transfer Text Style Transfer +2
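
Purely as an illustration of the word-alignment intuition above (not NAST's actual generator), the toy function below aligns each candidate output word to its most similar source word by cosine similarity and suppresses outputs that no source word supports.

```python
import torch
import torch.nn.functional as F

def align_and_filter(src_emb, out_emb, threshold=0.5):
    """Toy word-alignment filter for style transfer candidates.

    src_emb: (src_len, d) source word embeddings
    out_emb: (out_len, d) candidate output word embeddings
    returns: best-aligned source index per output word and a keep-mask
    """
    src = F.normalize(src_emb, dim=-1)
    out = F.normalize(out_emb, dim=-1)
    sim = out @ src.T                       # (out_len, src_len) cosine similarities
    best_sim, best_src = sim.max(dim=-1)    # most similar source word per output word
    keep = best_sim > threshold             # suppress output words unrelated to the source
    return best_src, keep
```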

A Unified Span-Based Approach for Opinion Mining with Syntactic Constituents

1 code implementation NAACL 2021 Qingrong Xia, Bo Zhang, Rui Wang, Zhenghua Li, Yue Zhang, Fei Huang, Luo Si, Min Zhang

Fine-grained opinion mining (OM), which aims to find the opinion structures of "Who expressed what opinions towards what" in one sentence, has attracted increasing attention in the natural language processing (NLP) community.

Multi-Task Learning Opinion Mining

Preview, Attend and Review: Schema-Aware Curriculum Learning for Multi-Domain Dialog State Tracking

no code implementations1 Jun 2021 Yinpei Dai, Hangyu Li, Yongbin Li, Jian Sun, Fei Huang, Luo Si, Xiaodan Zhu

Existing dialog state tracking (DST) models are trained with dialog data in a random order, neglecting rich structural information in a dataset.

 Ranked #1 on Multi-domain Dialogue State Tracking on MULTIWOZ 2.1 (using extra training data)

Curriculum Learning Multi-domain Dialogue State Tracking

OntoED: Low-resource Event Detection with Ontology Embedding

1 code implementation ACL 2021 Shumin Deng, Ningyu Zhang, Luoqiu Li, Hui Chen, Huaixiao Tou, Mosha Chen, Fei Huang, Huajun Chen

Most current event detection (ED) methods rely heavily on training instances and almost entirely ignore the correlations among event types.

Event Detection

Improving Named Entity Recognition by External Context Retrieving and Cooperative Learning

1 code implementation ACL 2021 Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu

We find empirically that the contextual representations computed on the retrieval-based input view, constructed through the concatenation of a sentence and its external contexts, can achieve significantly improved performance compared to the original input view based only on the sentence.

Document-level Named Entity Recognition +1
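
The retrieval-based input view described above can be illustrated with Hugging Face `transformers`; the encoder checkpoint and the hard-coded "retrieved" context below are placeholders, not the paper's retrieval pipeline.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder encoder; the paper's actual model and retrieval system differ.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")

sentence = "Fei Huang is a researcher at DAMO Academy."
retrieved_contexts = [
    "DAMO Academy is a research institute founded by Alibaba.",  # stand-in for search results
]

# Retrieval-based input view: the sentence concatenated with its external contexts.
enc = tokenizer(sentence, " ".join(retrieved_contexts),
                return_tensors="pt", truncation=True)
with torch.no_grad():
    hidden = model(**enc).last_hidden_state              # (1, seq_len, d)

# Only the representations of the original sentence's tokens would feed a
# downstream NER tagging layer; the appended contexts just enrich them.
sentence_positions = [i for i, s in enumerate(enc.sequence_ids(0)) if s == 0]
sentence_states = hidden[0, sentence_positions]
```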

Relational Learning with Gated and Attentive Neighbor Aggregator for Few-Shot Knowledge Graph Completion

1 code implementation27 Apr 2021 Guanglin Niu, Yang Li, Chengguang Tang, Ruiying Geng, Jian Dai, Qiao Liu, Hao Wang, Jian Sun, Fei Huang, Luo Si

Moreover, modeling and inferring complex relations of one-to-many (1-N), many-to-one (N-1), and many-to-many (N-N) with previous knowledge graph completion approaches requires high model complexity and a large number of training instances.

Few-Shot Learning Knowledge Graph Completion +1

Improving Biomedical Pretrained Language Models with Knowledge

1 code implementation21 Apr 2021 Zheng Yuan, Yijia Liu, Chuanqi Tan, Songfang Huang, Fei Huang

To this end, we propose KeBioLM, a biomedical pretrained language model that explicitly leverages knowledge from the UMLS knowledge bases.

Entity Linking Language Modelling +2

Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples for Relation Extraction

1 code implementation1 Apr 2021 Luoqiu Li, Xiang Chen, Ningyu Zhang, Shumin Deng, Xin Xie, Chuanqi Tan, Mosha Chen, Fei Huang, Huajun Chen

Recent neural-based relation extraction approaches, though achieving promising improvements on benchmark datasets, have been shown to be vulnerable to adversarial attacks.

Relation Extraction

Photoproduction $\gamma p \to K^+\Lambda(1520)$ in an effective Lagrangian approach

no code implementations22 Jan 2021 Neng-Chang Wei, Yu Zhang, Fei Huang, De-Min Li

In addition to the $t$-channel $K$ and $K^\ast$ exchanges, the $u$-channel $\Lambda$ exchange, the $s$-channel nucleon exchange, and the interaction current, a minimal number of nucleon resonances in the $s$ channel are introduced in constructing the reaction amplitudes to describe the data.

High Energy Physics - Phenomenology Nuclear Theory

A Text GAN for Language Generation with Non-Autoregressive Generator

no code implementations1 Jan 2021 Fei Huang, Jian Guan, Pei Ke, Qihan Guo, Xiaoyan Zhu, Minlie Huang

Despite the great success of Generative Adversarial Networks (GANs) in generating high-quality images, GANs for text generation still face two major challenges: first, most text GANs are unstable in training mainly due to ineffective optimization of the generator, and they heavily rely on maximum likelihood pretraining; second, most text GANs adopt autoregressive generators without latent variables, which largely limits the ability to learn latent representations for natural language text.

Decipherment Representation Learning +1

Bearings degradation monitoring indicators based on discarded projected space information and piecewise linear representation

no code implementations7 Dec 2020 Fei Huang, Alexandre Sava, Kondo H. Adjallah, Wang Zhouhang

To extract efficient indicators, we propose a method based on discarded projected-space information and piecewise linear representation (PLR) to build three bearing degradation monitoring indicators, named SDHT2, VSDHT2, and NVSDHT2.

Aspect Sentiment Classification with Aspect-Specific Opinion Spans

1 code implementation EMNLP 2020 Lu Xu, Lidong Bing, Wei Lu, Fei Huang

Such a design allows the model to extract aspect-specific opinion spans and then evaluate sentiment polarity by exploiting the extracted opinion features.

Classification Extract Aspect

VECO: Variable and Flexible Cross-lingual Pre-training for Language Understanding and Generation

1 code implementation ACL 2021 Fuli Luo, Wei Wang, Jiahao Liu, Yijia Liu, Bin Bi, Songfang Huang, Fei Huang, Luo Si

Existing work in multilingual pretraining has demonstrated the potential of cross-lingual transferability by training a unified Transformer encoder for multiple languages.

Language Modelling Question Answering +1

Keyphrase Extraction with Dynamic Graph Convolutional Networks and Diversified Inference

no code implementations24 Oct 2020 Haoyu Zhang, Dingkun Long, Guangwei Xu, Pengjun Xie, Fei Huang, Ji Wang

Keyphrase extraction (KE) aims to summarize a set of phrases that accurately express a concept or a topic covered in a given document.

Keyphrase Extraction Representation Learning

Aspect Based Sentiment Analysis with Aspect-Specific Opinion Spans

1 code implementation EMNLP 2020 Lu Xu, Lidong Bing, Wei Lu, Fei Huang

Such a design allows the model to extract aspect-specific opinion spans and then evaluate sentiment polarity by exploiting the extracted opinion features.

Extract Aspect

FINDINGS OF THE IWSLT 2020 EVALUATION CAMPAIGN

no code implementations WS 2020 Ebrahim Ansari, Amittai Axelrod, Nguyen Bach, Ondřej Bojar, Roldano Cattoni, Fahim Dalvi, Nadir Durrani, Marcello Federico, Christian Federmann, Jiatao Gu, Fei Huang, Kevin Knight, Xutai Ma, Ajay Nagesh, Matteo Negri, Jan Niehues, Juan Pino, Elizabeth Salesky, Xing Shi, Sebastian Stüker, Marco Turchi, Alexander Waibel, Changhan Wang

The evaluation campaign of the International Conference on Spoken Language Translation (IWSLT 2020) featured this year six challenge tracks: (i) Simultaneous speech translation, (ii) Video speech translation, (iii) Offline speech translation, (iv) Conversational speech translation, (v) Open domain translation, and (vi) Non-native speech translation.

A Joint Neural Model for Information Extraction with Global Features

no code implementations ACL 2020 Ying Lin, Heng Ji, Fei Huang, Lingfei Wu

OneIE performs end-to-end IE in four stages: (1) Encoding a given sentence as contextualized word representations; (2) Identifying entity mentions and event triggers as nodes; (3) Computing label scores for all nodes and their pairwise links using local classifiers; (4) Searching for the globally optimal graph with a beam decoder.
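
The four stages above map naturally onto a pipeline skeleton; the following is only a structural sketch with hypothetical placeholder callables, not the OneIE implementation.

```python
def one_ie_style_pipeline(sentence, encoder, identify_nodes, score_graph, beam_decode):
    """Structural sketch of a joint IE pipeline in four stages.
    All callables are hypothetical placeholders for the real components."""
    # (1) Encode the sentence as contextualized word representations
    word_reprs = encoder(sentence)
    # (2) Identify entity mentions and event triggers as graph nodes
    nodes = identify_nodes(word_reprs)
    # (3) Compute label scores for all nodes and their pairwise links with local classifiers
    node_scores, edge_scores = score_graph(word_reprs, nodes)
    # (4) Search for the globally optimal information graph with a beam decoder
    return beam_decode(nodes, node_scores, edge_scores)
```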

PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation

2 code implementations14 Apr 2020 Bin Bi, Chenliang Li, Chen Wu, Ming Yan, Wei Wang, Songfang Huang, Fei Huang, Luo Si

An extensive set of experiments shows that PALM achieves new state-of-the-art results on a variety of language generation benchmarks covering generative question answering (Rank 1 on the official MARCO leaderboard), abstractive summarization on CNN/DailyMail as well as Gigaword, question generation on SQuAD, and conversational response generation on Cornell Movie Dialogues.

Abstractive Text Summarization Conversational Response Generation +6

CoTK: An Open-Source Toolkit for Fast Development and Fair Evaluation of Text Generation

1 code implementation3 Feb 2020 Fei Huang, Dazhen Wan, Zhihong Shao, Pei Ke, Jian Guan, Yilin Niu, Xiaoyan Zhu, Minlie Huang

In text generation evaluation, many practical issues, such as inconsistent experimental settings and metric implementations, are often ignored but lead to unfair evaluation and untenable conclusions.

Text Generation

A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation

1 code implementation TACL 2020 Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, Minlie Huang

To further capture the causal and temporal dependencies between the sentences in a reasonable story, we employ multi-task learning which combines a discriminative objective to distinguish true and fake stories during fine-tuning.

Multi-Task Learning Story Generation
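
The multi-task fine-tuning described above (a language-modeling objective combined with a discriminative true-vs-fake story objective) can be illustrated with a generic combined loss; the weighting and tensor shapes below are assumptions, not the paper's setup.

```python
import torch
import torch.nn.functional as F

def multitask_story_loss(lm_logits, lm_targets, clf_logits, clf_targets, alpha=0.5):
    """Toy multi-task objective: next-token LM loss on stories plus a binary
    classification loss distinguishing true from fake (e.g. shuffled) stories."""
    lm_loss = F.cross_entropy(lm_logits.view(-1, lm_logits.size(-1)),
                              lm_targets.view(-1))
    clf_loss = F.cross_entropy(clf_logits, clf_targets)
    return lm_loss + alpha * clf_loss

# Toy usage: vocab of 50, batch of 2 stories of length 6, binary story labels
lm_logits = torch.randn(2, 6, 50, requires_grad=True)
lm_targets = torch.randint(0, 50, (2, 6))
clf_logits = torch.randn(2, 2, requires_grad=True)
clf_targets = torch.tensor([1, 0])
multitask_story_loss(lm_logits, lm_targets, clf_logits, clf_targets).backward()
```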

Event Ticket Price Prediction with Deep Neural Network on Spatial-Temporal Sparse Data

no code implementations3 Dec 2019 Fei Huang, Hao Huang

However, given all the historical transaction records, it is challenging to predict the sale price of the remaining seats at any future timestamp, not only because the sale price depends on many features (seat locations, date-to-event of the transaction, event date, team performance, etc.).

ARAML: A Stable Adversarial Training Framework for Text Generation

1 code implementation IJCNLP 2019 Pei Ke, Fei Huang, Minlie Huang, Xiaoyan Zhu

The generator is optimized with maximum likelihood estimation augmented by the discriminator's rewards instead of policy gradient.

Text Generation
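
As a rough illustration of "maximum likelihood estimation augmented by the discriminator's rewards", the toy loss below weights per-sequence cross-entropy by normalized rewards; this is a generic sketch, not the exact ARAML objective.

```python
import torch
import torch.nn.functional as F

def reward_augmented_mle_loss(logits, samples, rewards):
    """Toy reward-augmented MLE: token-level cross-entropy on sampled
    sequences, weighted per sequence by discriminator rewards.

    logits:  (batch, seq_len, vocab) generator scores for the samples
    samples: (batch, seq_len) token ids of sampled sequences
    rewards: (batch,) discriminator rewards for the samples
    """
    nll = F.cross_entropy(logits.transpose(1, 2), samples, reduction="none")  # (batch, seq_len)
    seq_nll = nll.mean(dim=1)                    # per-sequence negative log-likelihood
    weights = F.softmax(rewards, dim=0)          # normalize rewards into sequence weights
    return (weights * seq_nll).sum()

# Toy usage: batch of 4 sequences of length 7 over a 100-word vocabulary
logits = torch.randn(4, 7, 100, requires_grad=True)
samples = torch.randint(0, 100, (4, 7))
rewards = torch.randn(4)
reward_augmented_mle_loss(logits, samples, rewards).backward()
```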

Unsupervised Multi-modal Neural Machine Translation

no code implementations CVPR 2019 Yuanhang Su, Kai Fan, Nguyen Bach, C. -C. Jay Kuo, Fei Huang

Unsupervised neural machine translation (UNMT) has recently achieved remarkable results with only large monolingual corpora in each language.

Machine Translation

Using Relevant Public Posts to Enhance News Article Summarization

no code implementations COLING 2016 Chen Li, Zhongyu Wei, Yang Liu, Yang Jin, Fei Huang

A news article summary usually consists of 2-3 key sentences that reflect the gist of that news article.

Sentence Compression
