Search Results for author: Xing Wang

Found 34 papers, 4 papers with code

Self-Training Sampling with Monolingual Data Uncertainty for Neural Machine Translation

1 code implementation ACL 2021 Wenxiang Jiao, Xing Wang, Zhaopeng Tu, Shuming Shi, Michael R. Lyu, Irwin King

In this work, we propose to improve the sampling procedure by selecting the most informative monolingual sentences to complement the parallel data.
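
As an illustration only (not the authors' exact procedure), the idea of selecting informative monolingual sentences can be sketched as below; the `uncertainty` scorer and all names are assumptions for the sketch.

```python
# Illustrative sketch of uncertainty-based sampling for self-training (not the
# paper's exact method): score monolingual sentences with a hypothetical
# uncertainty() function and keep the most informative ones; the selected
# sentences would then be translated to build synthetic parallel data.
from typing import Callable, List


def select_for_self_training(
    monolingual: List[str],
    uncertainty: Callable[[str], float],  # hypothetical scorer, e.g. model entropy
    budget: int,
) -> List[str]:
    """Return the `budget` most informative monolingual sentences."""
    ranked = sorted(monolingual, key=uncertainty, reverse=True)
    return ranked[:budget]
```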

Machine Translation, Translation

Benchmarking Graph Neural Networks on Link Prediction

no code implementations 24 Feb 2021 Xing Wang, Alexander Vinel

In this paper, we benchmark several existing graph neural network (GNN) models on different datasets for link predictions.

Graph Attention, Link Prediction

Adaptive Spatial-Temporal Inception Graph Convolutional Networks for Multi-step Spatial-Temporal Network Data Forecasting

no code implementations 1 Jan 2021 Xing Wang, Lin Zhu, Juan Zhao, Zhou Xu, Zhao Li, Junlan Feng, Chao Deng

Spatial-temporal data forecasting is of great importance for industries such as telecom network operation and transportation management.

Hierarchical Representation via Message Propagation for Robust Model Fitting

no code implementations 29 Dec 2020 Shuyuan Lin, Xing Wang, Guobao Xiao, Yan Yan, Hanzi Wang

In this paper, we propose a novel hierarchical representation via message propagation (HRMP) method for robust model fitting, which simultaneously takes advantage of both consensus analysis and preference analysis to estimate the parameters of multiple model instances from data corrupted by outliers.

Non-Newtonian and poroelastic effects in simulations of arterial flows

no code implementations 27 Oct 2020 Tongtong Li, Xing Wang, Ivan Yotov

In this paper, we investigate how different coupled computational models, describing the interaction between an incompressible fluid and two symmetric elastic or poroelastic structures, influence hydrodynamic factors.

Fluid Dynamics, Numerical Analysis

Multi-Task Learning with Shared Encoder for Non-Autoregressive Machine Translation

1 code implementation NAACL 2021 Yongchang Hao, Shilin He, Wenxiang Jiao, Zhaopeng Tu, Michael Lyu, Xing Wang

In addition, experimental results demonstrate that our Multi-Task NAT is complementary to knowledge distillation, the standard knowledge transfer method for NAT.

Knowledge Distillation, Machine Translation +2

Data Rejuvenation: Exploiting Inactive Training Examples for Neural Machine Translation

1 code implementation EMNLP 2020 Wenxiang Jiao, Xing Wang, Shilin He, Irwin King, Michael R. Lyu, Zhaopeng Tu

First, we train an identification model on the original training data, and use it to distinguish inactive examples and active examples by their sentence-level output probabilities.
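
A minimal sketch of the identification step described above, assuming a hypothetical `sentence_log_prob` scorer from the identification model and an illustrative threshold; this is not the paper's exact implementation.

```python
# Illustrative sketch (names and threshold are hypothetical): split training
# examples into "inactive" and "active" by the sentence-level output
# probability assigned by an identification model trained on the original data.
from typing import Callable, List, Tuple

Pair = Tuple[str, str]  # (source, target) example


def split_by_activity(
    pairs: List[Pair],
    sentence_log_prob: Callable[[str, str], float],  # from the identification model
    threshold: float,
) -> Tuple[List[Pair], List[Pair]]:
    """Examples scoring below the threshold are treated as inactive."""
    inactive, active = [], []
    for src, tgt in pairs:
        (inactive if sentence_log_prob(src, tgt) < threshold else active).append((src, tgt))
    return inactive, active

# The inactive examples would then be rejuvenated (re-labeled) before the
# final model is retrained on the full data.
```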

Machine Translation, Translation

Reannealing of Decaying Exploration Based On Heuristic Measure in Deep Q-Network

no code implementations 29 Sep 2020 Xing Wang, Alexander Vinel

Existing exploration strategies in reinforcement learning (RL) often either ignore the history or feedback of search, or are complicated to implement.

Cross Learning in Deep Q-Networks

no code implementations 29 Sep 2020 Xing Wang, Alexander Vinel

In this work, we propose a novel cross Q-learning algorithm aimed at alleviating the well-known overestimation problem in value-based reinforcement learning methods, particularly in deep Q-networks, where the overestimation is exaggerated by function approximation errors.
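
For illustration only, the overestimation issue and the flavor of a cross-network remedy can be sketched as follows: one set of action values picks the greedy action while a second set evaluates it, in the spirit of double Q-learning. This is an assumed analogue, not the paper's algorithm, and the function names are hypothetical.

```python
# Illustrative target computation in the spirit of cross/double Q-learning:
# one Q-network selects the greedy next action, a different one evaluates it,
# which dampens the max-operator overestimation of a single network.
import numpy as np


def cross_q_target(q_select: np.ndarray, q_eval: np.ndarray,
                   reward: float, gamma: float = 0.99) -> float:
    """q_select / q_eval: next-state action values from two different networks."""
    a_star = int(np.argmax(q_select))        # action chosen by the selecting network
    return reward + gamma * q_eval[a_star]   # value taken from the other network

# With a single network the target would be reward + gamma * max(q_select),
# which systematically overestimates under noisy value estimates.
```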

Q-Learning

How Does Selective Mechanism Improve Self-Attention Networks?

1 code implementation ACL 2020 Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu

Self-attention networks (SANs) with a selective mechanism have produced substantial improvements in various NLP tasks by concentrating on a subset of input words.

Machine Translation, Natural Language Inference +1

Assessing the Bilingual Knowledge Learned by Neural Machine Translation Models

no code implementations 28 Apr 2020 Shilin He, Xing Wang, Shuming Shi, Michael R. Lyu, Zhaopeng Tu

In this paper, we bridge the gap by assessing the bilingual knowledge learned by NMT models with a phrase table, an interpretable table of bilingual lexicons.

Machine Translation, Translation

Neuron Interaction Based Representation Composition for Neural Machine Translation

no code implementations 22 Nov 2019 Jian Li, Xing Wang, Baosong Yang, Shuming Shi, Michael R. Lyu, Zhaopeng Tu

Starting from this intuition, we propose a novel approach to compose representations learned by different components in neural machine translation (e.g., multi-layer networks or multi-head attention), based on modeling strong interactions among neurons in the representation vectors.

Machine Translation, Translation

Multi-Granularity Self-Attention for Neural Machine Translation

no code implementations IJCNLP 2019 Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, Zhaopeng Tu

Current state-of-the-art neural machine translation (NMT) uses a deep multi-head self-attention network with no explicit phrase information.

Machine Translation, Translation

Towards Better Modeling Hierarchical Structure for Self-Attention with Ordered Neurons

no code implementations IJCNLP 2019 Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, Zhaopeng Tu

Recent studies have shown that a hybrid of self-attention networks (SANs) and recurrent neural networks (RNNs) outperforms both individual architectures, while not much is known about why the hybrid models work.

Hierarchical Structure, Machine Translation +1

Self-Attention with Structural Position Representations

no code implementations IJCNLP 2019 Xing Wang, Zhaopeng Tu, Long-Yue Wang, Shuming Shi

Although self-attention networks (SANs) have advanced the state-of-the-art on various NLP tasks, one criticism of SANs is their limited ability to encode the positions of input words (Shaw et al., 2018).

Translation

Towards Understanding Neural Machine Translation with Word Importance

no code implementations IJCNLP 2019 Shilin He, Zhaopeng Tu, Xing Wang, Long-Yue Wang, Michael R. Lyu, Shuming Shi

Although neural machine translation (NMT) has advanced the state-of-the-art on various language pairs, the interpretability of NMT remains unsatisfactory.

Machine Translation, Translation

Multiple Independent Subspace Clusterings

no code implementations 10 May 2019 Xing Wang, Jun Wang, Carlotta Domeniconi, Guoxian Yu, Guo-Qiang Xiao, Maozu Guo

To ease this process, we consider diverse clusterings embedded in different subspaces, and analyze the embedding subspaces to shed light on the structure of each clustering.

Modeling Recurrence for Transformer

no code implementations NAACL 2019 Jie Hao, Xing Wang, Baosong Yang, Long-Yue Wang, Jinfeng Zhang, Zhaopeng Tu

In addition to the standard recurrent neural network, we introduce a novel attentive recurrent network to leverage the strengths of both attention and recurrent networks.

Machine Translation, Translation

Information Aggregation for Multi-Head Attention with Routing-by-Agreement

no code implementations NAACL 2019 Jian Li, Baosong Yang, Zi-Yi Dou, Xing Wang, Michael R. Lyu, Zhaopeng Tu

Multi-head attention is appealing for its ability to jointly extract different types of information from multiple representation subspaces.

Machine Translation, Translation

Context-Aware Self-Attention Networks

no code implementations 15 Feb 2019 Baosong Yang, Jian Li, Derek Wong, Lidia S. Chao, Xing Wang, Zhaopeng Tu

Self-attention models have shown their flexibility in parallel computation and their effectiveness in modeling both long- and short-term dependencies.

Translation

Dynamic Layer Aggregation for Neural Machine Translation with Routing-by-Agreement

no code implementations 15 Feb 2019 Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Long-Yue Wang, Shuming Shi, Tong Zhang

With the promising progress of deep neural networks, layer aggregation has been used to fuse information across layers in various fields, such as computer vision and machine translation.

Machine Translation, Translation

Learning to Refine Source Representations for Neural Machine Translation

no code implementations 26 Dec 2018 Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu

Neural machine translation (NMT) models generally adopt an encoder-decoder architecture for modeling the entire translation process.

Machine Translation, Translation

Exploiting Deep Representations for Neural Machine Translation

no code implementations EMNLP 2018 Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Shuming Shi, Tong Zhang

Advanced neural machine translation (NMT) models generally implement encoder and decoder as multiple layers, which allows systems to model complex functions and capture complicated linguistic structures.

Machine Translation, Translation

Network Modeling and Pathway Inference from Incomplete Data ("PathInf")

no code implementations 1 Oct 2018 Xiang Li, Qitian Chen, Xing Wang, Ning Guo, Nan Wu, Quanzheng Li

In this work, we developed a network inference method from incomplete data ("PathInf"), as massive and non-uniformly distributed missing values are a common challenge in practical problems.

Data Summarization

Neural Machine Translation Advised by Statistical Machine Translation

no code implementations 17 Oct 2016 Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, Min Zhang

Neural Machine Translation (NMT) is a new approach to machine translation that has made great progress in recent years.

Machine Translation, Translation

Scalable Compression of Deep Neural Networks

no code implementations 26 Aug 2016 Xing Wang, Jie Liang

Deep neural networks generally involve some layers with millions of parameters, making them difficult to deploy and update on devices with limited resources such as mobile phones and other smart embedded systems.
