Search Results for author: Si Wei

Found 20 papers, 11 papers with code

A Tree-Structured Decoder for Image-to-Markup Generation

1 code implementation • ICML 2020 • Jianshu Zhang, Jun Du, Yongxin Yang, Yi-Zhe Song, Si Wei, Li-Rong Dai

Recent encoder-decoder approaches typically employ string decoders to convert images into serialized strings for image-to-markup generation.

Math
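For context, here is a minimal sketch of the string-decoder baseline the abstract contrasts with, written in PyTorch. The pooled CNN encoder, GRU decoder, and all sizes are illustrative assumptions, not the paper's architecture (which replaces exactly this kind of string decoder with a tree-structured one).

import torch
import torch.nn as nn

# Hypothetical string decoder for image-to-markup: a CNN pools the image
# into one context vector, and a GRU emits markup tokens one at a time.
class StringDecoderModel(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # (B, 64, 1, 1)
        )
        self.proj = nn.Linear(64, hidden)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, images, token_ids):
        ctx = self.proj(self.encoder(images).flatten(1))  # (B, hidden)
        emb = self.embed(token_ids)                       # (B, T, hidden)
        dec, _ = self.gru(emb, ctx.unsqueeze(0))          # image conditions the GRU
        return self.out(dec)                              # (B, T, vocab)

model = StringDecoderModel(vocab_size=100)
logits = model(torch.randn(2, 1, 64, 256), torch.randint(0, 100, (2, 10)))
print(logits.shape)  # torch.Size([2, 10, 100])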

SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning

1 code implementation • SEMEVAL 2021 • Boyuan Zheng, Xiaoyu Yang, Yu-Ping Ruan, ZhenHua Ling, Quan Liu, Si Wei, Xiaodan Zhu

Given a passage and the corresponding question, a participating system is expected to choose the correct answer from five abstract-concept candidates in a cloze-style machine reading comprehension setup.

Machine Reading Comprehension
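A minimal sketch of this cloze-style setup: substitute each of the five candidates into the question's blank and keep the best-scoring fill. The token-overlap scorer and the "@placeholder" blank marker are illustrative assumptions standing in for the neural scorers participating systems actually used.

# Placeholder scorer: lexical overlap between passage and filled question.
# A real system would score the fill with a pretrained language model.
def score(passage, filled_question):
    p = set(passage.lower().split())
    q = set(filled_question.lower().split())
    return len(p & q) / len(q)

def answer(passage, question, candidates):
    # The question carries a blank marker; "@placeholder" is assumed here.
    filled = [question.replace("@placeholder", c) for c in candidates]
    scores = [score(passage, f) for f in filled]
    return max(range(len(candidates)), key=lambda i: scores[i])

passage = "The committee postponed the vote until more evidence arrived."
question = "They delayed the decision out of @placeholder ."
candidates = ["caution", "anger", "joy", "speed", "hunger"]
print(candidates[answer(passage, question, candidates)])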

DialBERT: A Hierarchical Pre-Trained Model for Conversation Disentanglement

1 code implementation • 8 Apr 2020 • Tianda Li, Jia-Chen Gu, Xiaodan Zhu, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei

Disentanglement is the problem that arises when multiple conversations occur simultaneously in the same channel, and a listener must decide which utterances belong to the conversation they intend to respond to.

Conversation Disentanglement • Disentanglement
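A minimal sketch of the disentanglement task as framed above: score each incoming utterance against the existing threads and attach it to the best match, or start a new thread when nothing matches. The regex tokeniser, Jaccard scorer, and threshold are placeholder assumptions standing in for DialBERT's hierarchical BERT pair matching.

import re

def tokens(s):
    return set(re.findall(r"[a-z0-9.]+", s.lower()))

def pair_score(a, b):
    # Placeholder Jaccard overlap; DialBERT scores pairs with BERT instead.
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def disentangle(utterances, threshold=0.15):
    threads = []                           # each thread is a list of indices
    for i, utt in enumerate(utterances):
        best, best_score = None, threshold
        for t, thread in enumerate(threads):
            s = max(pair_score(utt, utterances[j]) for j in thread)
            if s > best_score:
                best, best_score = t, s
        if best is None:
            threads.append([i])            # no thread matches: start a new one
        else:
            threads[best].append(i)
    return threads

chat = ["does anyone know how to mount a usb drive?",
        "the build fails on python 3.10 for me",
        "try `mount /dev/sdb1 /mnt` for the usb drive",
        "same build failure here on 3.10"]
print(disentangle(chat))                   # [[0, 2], [1, 3]]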

Exploring Unsupervised Pretraining and Sentence Structure Modelling for Winograd Schema Challenge

no code implementations • 22 Apr 2019 • Yu-Ping Ruan, Xiaodan Zhu, Zhen-Hua Ling, Zhan Shi, Quan Liu, Si Wei

The Winograd Schema Challenge (WSC) was proposed as an AI-hard problem for testing computers' intelligence in common-sense representation and reasoning.

Common Sense Reasoning • Sentence

Spelling Error Correction Using a Nested RNN Model and Pseudo Training Data

no code implementations • 1 Nov 2018 • Hao Li, Yang Wang, Xinyu Liu, Zhichao Sheng, Si Wei

We propose a nested recurrent neural network (nested RNN) model for English spelling error correction and generate pseudo data based on phonetic similarity to train it.

Feature Engineering
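A minimal sketch of the pseudo-data idea: corrupt clean sentences with phonetically plausible substitutions to obtain (noisy, clean) pairs for training a corrector. The confusion table and corruption rate are illustrative assumptions, not the paper's phonetic-similarity model.

import random

# Toy table of phonetically similar substring swaps (an assumption).
CONFUSIONS = {
    "ph": ["f"], "c": ["k", "s"], "ei": ["ie"],
    "tion": ["sion"], "ee": ["ea"], "ght": ["t"],
}

def corrupt(word, rng):
    # Apply at most one phonetic swap per word, with probability 0.5.
    for src, alts in CONFUSIONS.items():
        if src in word and rng.random() < 0.5:
            return word.replace(src, rng.choice(alts), 1)
    return word

def make_pairs(sentences, seed=0):
    rng = random.Random(seed)
    pairs = []
    for clean in sentences:
        noisy = " ".join(corrupt(w, rng) for w in clean.split())
        if noisy != clean:
            pairs.append((noisy, clean))   # (input, target) for the corrector
    return pairs

print(make_pairs(["the phone receives light perfectly"]))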

Neural Natural Language Inference Models Enhanced with External Knowledge

1 code implementation • ACL 2018 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei

With the availability of large annotated data, it has recently become feasible to train complex models such as neural-network-based inference models, which have been shown to achieve state-of-the-art performance.

Natural Language Inference

Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference

2 code implementations • WS 2017 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen

The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixed-length vector with neural networks and the quality of the representation is tested with a natural language inference task.

Natural Language Inference • Natural Language Understanding +1
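A minimal PyTorch sketch of the RepEval setup described above: each sentence is encoded into a fixed-length vector, and the NLI classifier sees only the two vectors, combined through the common [u; v; |u - v|; u * v] matching features. The mean-pooled BiLSTM stands in for the paper's gated-attention encoder, and all sizes are assumptions.

import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, vocab, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)

    def forward(self, ids):                   # (B, T) -> (B, 2*dim)
        out, _ = self.lstm(self.embed(ids))
        return out.mean(dim=1)                # fixed-length sentence vector

class NLIClassifier(nn.Module):
    def __init__(self, vocab, dim=128, n_classes=3):
        super().__init__()
        self.enc = SentenceEncoder(vocab, dim)
        self.mlp = nn.Sequential(nn.Linear(8 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, n_classes))

    def forward(self, premise, hypothesis):
        u, v = self.enc(premise), self.enc(hypothesis)
        feats = torch.cat([u, v, (u - v).abs(), u * v], dim=-1)
        return self.mlp(feats)                # entail / neutral / contradict

model = NLIClassifier(vocab=1000)
logits = model(torch.randint(0, 1000, (4, 12)), torch.randint(0, 1000, (4, 9)))
print(logits.shape)  # torch.Size([4, 3])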

Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering

no code implementations • 14 Mar 2017 • Junbei Zhang, Xiaodan Zhu, Qian Chen, Li-Rong Dai, Si Wei, Hui Jiang

The last several years have seen intensive interest in exploring neural-network-based models for machine comprehension (MC) and question answering (QA).

Question Answering • Reading Comprehension

Neural Networks Models for Entity Discovery and Linking

no code implementations • 11 Nov 2016 • Dan Liu, Wei Lin, Shiliang Zhang, Si Wei, Hui Jiang

This paper describes the USTC_NELSLIP systems submitted to the Trilingual Entity Detection and Linking (EDL) track of the 2016 TAC Knowledge Base Population (KBP) contest.

Clustering • Entity Linking +1

Distraction-Based Neural Networks for Document Summarization

1 code implementation • 26 Oct 2016 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang

Distributed representations learned with neural networks have recently been shown to be effective in modeling natural language at fine granularities such as words, phrases, and even sentences.

Document Summarization

Feedforward Sequential Memory Networks: A New Structure to Learn Long-term Dependency

no code implementations • 28 Dec 2015 • Shiliang Zhang, Cong Liu, Hui Jiang, Si Wei, Li-Rong Dai, Yu Hu

In this paper, we propose a novel neural network structure, the feedforward sequential memory network (FSMN), to model long-term dependency in time series without using recurrent feedback.

Language Modelling • Speech Recognition +3
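A minimal sketch of the FSMN idea: a feedforward hidden layer augmented with learned coefficients over a fixed window of past hidden states, so long-range context is captured without any recurrent feedback. Layer sizes and the per-unit tap coefficients (in the spirit of the vectorized variant) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FSMNLayer(nn.Module):
    def __init__(self, in_dim, hidden, taps=4):
        super().__init__()
        self.linear = nn.Linear(in_dim, hidden)
        # One coefficient per tap and hidden unit (assumed vectorized form).
        self.memory = nn.Parameter(0.1 * torch.randn(taps + 1, hidden))

    def forward(self, x):                     # x: (B, T, in_dim)
        h = torch.tanh(self.linear(x))        # (B, T, hidden)
        n = self.memory.shape[0]              # current step plus past taps
        padded = F.pad(h, (0, 0, n - 1, 0))   # left-pad the time axis
        mem = sum(self.memory[k] * padded[:, n - 1 - k : padded.shape[1] - k]
                  for k in range(n))
        return h + mem                        # context without recurrence

layer = FSMNLayer(in_dim=40, hidden=64)
out = layer(torch.randn(2, 100, 40))
print(out.shape)  # torch.Size([2, 100, 64])

Because the memory taps are fixed-width weighted sums over past activations rather than a recurrent state, the layer trains with plain backpropagation instead of backpropagation through time.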

Feedforward Sequential Memory Neural Networks without Recurrent Feedback

no code implementations • 9 Oct 2015 • Shiliang Zhang, Hui Jiang, Si Wei, Li-Rong Dai

We introduce a new structure for memory neural networks, called feedforward sequential memory networks (FSMN), which can learn long-term dependency without using recurrent feedback.

Language Modelling
