Search Results for author: Shaojun Wang

Found 20 papers, 0 papers with code

PINGAN Omini-Sinitic at SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning

no code implementations • SEMEVAL 2021 • Ye Wang, Yanmeng Wang, Haijun Zhu, Bo Zeng, Zhenghong Hao, Shaojun Wang, Jing Xiao

This paper describes the winning system for subtask 2 and the second-placed system for subtask 1 in SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning.

Denoising • Language Modelling • +1

Structure Controllable Text Generation

no code implementations • 1 Jan 2021 • Liming Deng, Long Wang, Binzhu Wang, Jiang Qian, Bojin Zhuang, Shaojun Wang, Jing Xiao

Controlling the presented form (or structure) of generated text is as important as controlling the generated content during neural text generation.

Text Generation

KETG: A Knowledge Enhanced Text Generation Framework

no code implementations • 1 Jan 2021 • Yan Cui, Xi Chen, Jiang Qian, Bojin Zhuang, Shaojun Wang, Jing Xiao

Embedding logical knowledge into text generation is a challenging NLP task.

Text Generation

Contextualized Emotion Recognition in Conversation as Sequence Tagging

no code implementations • 1 Jul 2020 • Yan Wang, Jiayu Zhang, Jun Ma, Shaojun Wang, Jing Xiao

Emotion recognition in conversation (ERC) is an important topic for developing empathetic machines in a variety of areas, including social opinion mining and health care.

Emotion Classification • Emotion Recognition in Conversation • +1

BS-NAS: Broadening-and-Shrinking One-Shot NAS with Searchable Numbers of Channels

no code implementations • 22 Mar 2020 • Zan Shen, Jiang Qian, Bojin Zhuang, Shaojun Wang, Jing Xiao

One-shot methods have become one of the most popular approaches in Neural Architecture Search (NAS) because they share weights and train only a single supernet.

Neural Architecture Search
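For readers new to one-shot NAS, the weight-sharing idea behind searchable channel numbers can be illustrated with a toy dense layer whose candidate widths all slice into one shared parameter tensor. A minimal sketch under assumed toy dimensions; this is not the BS-NAS broadening-and-shrinking procedure itself:

```python
import numpy as np

rng = np.random.default_rng(0)

MAX_C, IN_F = 64, 128
# one shared supernet weight matrix for a dense layer: max_channels x in_features
W_super = rng.normal(0.0, 0.05, size=(MAX_C, IN_F)).astype(np.float32)

def forward(x, c):
    """Apply the layer with only its first `c` output channels active,
    slicing the shared supernet weights (the weight-sharing idea)."""
    return x @ W_super[:c].T

# candidate channel counts reuse the same underlying parameters
x = rng.normal(size=(4, IN_F)).astype(np.float32)
for c in (16, 32, 64):
    print(c, forward(x, c).shape)   # (4, 16), (4, 32), (4, 64)
```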

A simple discriminative training method for machine translation with large-scale features

no code implementations • 15 Sep 2019 • Tian Xia, Shaodan Zhai, Shaojun Wang

Margin-infused relaxed algorithms (MIRAs) dominate model tuning in statistical machine translation when large-scale features are used, but they are also notorious for their implementation complexity.

Machine Translation
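As background for the contrast the abstract draws, one standard single-constraint MIRA update (the baseline the paper aims to simplify away from, not its proposed method) can be sketched as follows; the regularizer `C` and the dense feature-vector representation are illustrative assumptions:

```python
import numpy as np

def mira_update(w, feat_gold, feat_pred, loss, C=0.01):
    """One passive-aggressive MIRA step: move w just enough that the gold
    hypothesis outscores the model's prediction by a margin of `loss`."""
    delta = feat_gold - feat_pred            # feature difference vector
    violation = loss - w @ delta             # target margin minus current margin
    norm_sq = delta @ delta
    if violation <= 0 or norm_sq == 0:
        return w                             # constraint already satisfied
    alpha = min(C, violation / norm_sq)      # clipped step size
    return w + alpha * delta
```

A tuning loop would call this once per decoded sentence, e.g. `w = mira_update(w, f_gold, f_pred, loss=metric_gap)`.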

Plackett-Luce model for learning-to-rank task

no code implementations • 15 Sep 2019 • Tian Xia, Shaodan Zhai, Shaojun Wang

List-wise learning-to-rank methods are generally expected to perform better than point-wise and pair-wise ones.

Learning-To-Rank
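The Plackett-Luce model in the title assigns a probability to a whole ranking by repeatedly choosing the best remaining item with probability proportional to its exponentiated score. A minimal negative log-likelihood sketch under that standard definition (the paper's exact training objective may differ):

```python
import numpy as np

def plackett_luce_nll(scores, ranking):
    """Negative log-likelihood of a complete ranking under Plackett-Luce:
    P(ranking) = prod_i exp(s[r_i]) / sum_{j >= i} exp(s[r_j])."""
    s = np.asarray(scores, dtype=float)[list(ranking)]  # scores in ranked order
    nll = 0.0
    for i in range(len(s)):
        rem = s[i:]                                      # items still unplaced
        m = rem.max()
        logz = m + np.log(np.exp(rem - m).sum())         # stable log-sum-exp
        nll -= s[i] - logz                               # log P(pick item i next)
    return nll

# item 0 ranked first, then item 2, then item 1
print(plackett_luce_nll([2.0, 0.5, 1.0], ranking=[0, 2, 1]))
```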

Analysis of Regression Tree Fitting Algorithms in Learning to Rank

no code implementations • 12 Sep 2019 • Tian Xia, Shaodan Zhai, Shaojun Wang

In the learning-to-rank area, industrial applications have been dominated by the gradient boosting framework, which fits each tree using the least-squares error principle.

Learning-To-Rank
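To make the least-squares principle concrete: at each tree node, gradient boosting's regression trees pick the split that minimizes the children's summed squared error around their means. A minimal single-feature sketch of that criterion, not any specific fitting algorithm analyzed in the paper:

```python
import numpy as np

def best_split(x, y):
    """CART-style split on one feature: choose the threshold minimizing the
    summed squared error when each side predicts its own mean."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_t, best_err = None, np.inf
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue                              # no threshold between ties
        left, right = ys[:i], ys[i:]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if err < best_err:
            best_t, best_err = (xs[i - 1] + xs[i]) / 2.0, err
    return best_t, best_err
```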

Automatic Acrostic Couplet Generation with Three-Stage Neural Network Pipelines

no code implementations • 15 Jun 2019 • Haoshen Fan, Jie Wang, Bojin Zhuang, Shaojun Wang, Jing Xiao

In this paper, we comprehensively study the automatic generation of acrostic couplets whose first characters are defined by users.

Re-Ranking

A Syllable-Structured, Contextually-Based Conditionally Generation of Chinese Lyrics

no code implementations • 15 Jun 2019 • Xu Lu, Jie Wang, Bojin Zhuang, Shaojun Wang, Jing Xiao

This paper presents a novel, syllable-structured Chinese lyrics generation model conditioned on a piece of original melody.

A Hierarchical Attention Based Seq2seq Model for Chinese Lyrics Generation

no code implementations • 15 Jun 2019 • Haoshen Fan, Jie Wang, Bojin Zhuang, Shaojun Wang, Jing Xiao

In this paper, we comprehensively study the context-aware generation of Chinese song lyrics.

Slim Embedding Layers for Recurrent Neural Language Models

no code implementations • 27 Nov 2017 • Zhongliang Li, Raymond Kulhanek, Shaojun Wang, Yunxin Zhao, Shuang Wu

When the vocabulary size is large, the space taken to store the model parameters becomes the bottleneck for the use of recurrent neural language models.

Language Modelling
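The bottleneck arises because a full embedding table costs vocabulary × dimension parameters. One way to shrink it, in the spirit of this paper, is to build each word vector by concatenating sub-vectors drawn from a small shared pool via a fixed random assignment; the pool size and dimensions below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM, PARTS, POOL = 100_000, 256, 8, 4096
SUB = DIM // PARTS                                 # each sub-vector is 32-dim

# shared pool of sub-vectors instead of a full VOCAB x DIM table
pool = rng.normal(0.0, 0.1, size=(POOL, SUB)).astype(np.float32)
# fixed random hashing: which pool row fills each part of each word
assignment = rng.integers(0, POOL, size=(VOCAB, PARTS))

def embed(word_id):
    """Concatenate the word's shared sub-vectors into one embedding."""
    return pool[assignment[word_id]].reshape(DIM)

# full table: 100k x 256 = 25.6M floats; shared pool: 4096 x 32 = 131k floats
print(embed(42).shape)                             # (256,)
```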

Une méthode discriminant formation simple pour la traduction automatique avec Grands Caractéristiques

no code implementations • JEPTALNRECITAL 2015 • Tian Xia, Shaodan Zhai, Zhongliang Li, Shaojun Wang

Margin-infused relaxed algorithms (MIRAs) dominate model tuning in statistical machine translation in the case of large-scale features, but they are also famous for their implementation complexity.

Direct 0-1 Loss Minimization and Margin Maximization with Boosting

no code implementations • NeurIPS 2013 • Shaodan Zhai, Tian Xia, Ming Tan, Shaojun Wang

We propose DirectBoost, a boosting method based on a greedy coordinate descent algorithm that builds an ensemble of weak classifiers by directly minimizing the empirical classification error over labeled training examples. Once the training classification error is reduced to a local coordinatewise minimum, DirectBoost runs a greedy coordinate ascent algorithm that continuously adds weak classifiers to maximize any targeted, arbitrarily defined margins until reaching a local coordinatewise maximum of the margins in a certain sense.

Classification • General Classification
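To illustrate the first phase the abstract describes, the sketch below performs one greedy coordinate-descent move on the empirical 0-1 loss, approximating exact coordinate-wise minimization with a grid line search; the grid, data layout, and ±1 output convention are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def zero_one(scores, y):
    """Empirical 0-1 loss with sign(score) predictions in {-1, +1}."""
    return np.mean(np.where(scores >= 0, 1, -1) != y)

def directboost_01_step(H, y, alpha, grid=np.linspace(-2.0, 2.0, 81)):
    """One greedy coordinate-descent step on 0-1 loss.
    H: (n_samples, n_weak) outputs of the weak classifiers in {-1, +1}
    y: (n_samples,) labels in {-1, +1}; alpha: current ensemble weights."""
    scores = H @ alpha
    best_err, best_j, best_a = zero_one(scores, y), None, None
    for j in range(H.shape[1]):                  # try every coordinate...
        base = scores - alpha[j] * H[:, j]       # ensemble without classifier j
        for a in grid:                           # ...with a grid line search
            err = zero_one(base + a * H[:, j], y)
            if err < best_err:
                best_err, best_j, best_a = err, j, a
    if best_j is not None:
        alpha[best_j] = best_a                   # take the single best move
    return alpha, best_err
```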
