Search Results for author: Xing Wang

Found 58 papers, 13 papers with code

Tencent AI Lab Machine Translation Systems for the WMT21 Biomedical Translation Task

no code implementations WMT (EMNLP) 2021 Xing Wang, Zhaopeng Tu, Shuming Shi

This paper describes the Tencent AI Lab submission to the WMT2021 shared task on biomedical translation in eight language directions: English↔German, English↔French, English↔Spanish and English↔Russian.

Machine Translation Translation

Tencent Translation System for the WMT21 News Translation Task

no code implementations WMT (EMNLP) 2021 Longyue Wang, Mu Li, Fangxu Liu, Shuming Shi, Zhaopeng Tu, Xing Wang, Shuangzhi Wu, Jiali Zeng, Wen Zhang

Building on our success in the last WMT, we continued to employ advanced techniques such as large-batch training, data selection and data filtering.

Data Augmentation Translation

Tencent AI Lab Machine Translation Systems for the WMT20 Biomedical Translation Task

1 code implementation WMT (EMNLP) 2020 Xing Wang, Zhaopeng Tu, Longyue Wang, Shuming Shi

This paper describes the Tencent AI Lab submission to the WMT2020 shared task on biomedical translation in four language directions: German→English, English→German, Chinese→English and English→Chinese.

Machine Translation Translation

Is ChatGPT A Good Translator? A Preliminary Study

1 code implementation 20 Jan 2023 Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, Zhaopeng Tu

This report provides a preliminary evaluation of ChatGPT for machine translation, covering translation prompts, multilingual translation, and translation robustness (a prompt sketch follows below).

Machine Translation Translation
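The report's findings depend on how the translation request is phrased to the chat model. Below is a minimal, hypothetical sketch of composing such a prompt in Python; the template wording and the `build_translation_prompt` helper are illustrative assumptions, not the exact prompts evaluated in the report.

```python
# Hypothetical translation-prompt builder; the template wording is an
# assumption, not the report's exact prompt.

def build_translation_prompt(src_lang: str, tgt_lang: str, sentence: str) -> str:
    """Compose a plain-text translation instruction for a chat model."""
    return (
        f"Translate the following sentence from {src_lang} to {tgt_lang}. "
        f"Reply with the translation only.\n\n{sentence}"
    )

if __name__ == "__main__":
    prompt = build_translation_prompt(
        "French", "English", "Le temps est magnifique aujourd'hui."
    )
    print(prompt)  # This string would be sent to the chat model's API.
```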

Scaling Back-Translation with Domain Text Generation for Sign Language Gloss Translation

1 code implementation 13 Oct 2022 Jinhui Ye, Wenxiang Jiao, Xing Wang, Zhaopeng Tu

In this paper, to overcome the limitation, we propose a Prompt-based domain text Generation (PGEN) approach to produce large-scale in-domain spoken language text data (a generation sketch follows below).

Language Modelling Text Generation +1
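PGEN's core idea is to seed a language model with in-domain text so that its continuations stay in-domain. The sketch below imitates that idea with a generic GPT-2 pipeline from Hugging Face `transformers`; the model choice, seed sentences, and prompt layout are stand-in assumptions, not the paper's setup.

```python
# Hedged sketch of prompt-based in-domain text generation in the spirit of
# PGEN; GPT-2 and the prompt layout are stand-ins, not the paper's setup.
from transformers import pipeline

# Seed the model with a few in-domain (weather-forecast) sentences so its
# continuation stays close to that domain.
seed_sentences = [
    "Tomorrow will bring scattered showers across the north.",
    "Temperatures drop to five degrees overnight.",
]
prompt = "\n".join(seed_sentences) + "\n"

generator = pipeline("text-generation", model="gpt2")
outputs = generator(prompt, max_new_tokens=40, do_sample=True, num_return_sequences=2)
for sample in outputs:
    print(sample["generated_text"])
```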

STAD: Self-Training with Ambiguous Data for Low-Resource Relation Extraction

1 code implementation COLING 2022 Junjie Yu, Xing Wang, Jiangjiang Zhao, Chunjie Yang, Wenliang Chen

The approach first classifies the auto-annotated instances into two groups, confident instances and uncertain instances, according to the probabilities predicted by a teacher model (a minimal sketch of this split follows below).

Relation Extraction
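A minimal sketch of that first step, assuming each auto-annotated instance carries a probability from the teacher model; the 0.9 threshold is an illustrative assumption, not the paper's setting.

```python
# Split auto-annotated instances into confident and uncertain groups by
# teacher probability; the 0.9 threshold is an illustrative assumption.
from typing import List, Tuple

def split_by_teacher_confidence(
    instances: List[dict], threshold: float = 0.9
) -> Tuple[List[dict], List[dict]]:
    confident, uncertain = [], []
    for inst in instances:
        (confident if inst["teacher_prob"] >= threshold else uncertain).append(inst)
    return confident, uncertain

auto_annotated = [
    {"text": "Paris is in France.", "label": "located_in", "teacher_prob": 0.97},
    {"text": "Alice met Bob.", "label": "located_in", "teacher_prob": 0.42},
]
confident, uncertain = split_by_teacher_confidence(auto_annotated)
print(len(confident), len(uncertain))  # -> 1 1
```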

Semantic Segmentation of Fruits on Multi-sensor Fused Data in Natural Orchards

no code implementations 4 Aug 2022 Hanwen Kang, Xing Wang

In this work, we propose a deep-learning-based segmentation method to perform accurate semantic segmentation on fused data from a LiDAR-Camera visual sensor.

Semantic Segmentation

Next-ViT: Next Generation Vision Transformer for Efficient Deployment in Realistic Industrial Scenarios

2 code implementations 12 Jul 2022 Jiashi Li, Xin Xia, Wei Li, Huixia Li, Xing Wang, Xuefeng Xiao, Rui Wang, Min Zheng, Xin Pan

Then, the Next Hybrid Strategy (NHS) is designed to stack NCB and NTB in an efficient hybrid paradigm, boosting performance in various downstream tasks (a stage-pattern sketch follows below).

Image Classification
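A hedged sketch of what a hybrid stacking schedule could look like: several convolution blocks (NCB) followed by one transformer block (NTB) per stage. The per-stage counts are illustrative assumptions, not Next-ViT's published configuration.

```python
# Emit a hybrid block schedule in the spirit of NHS: n convolution blocks
# (NCB) then one transformer block (NTB) per stage. Counts are illustrative.
from typing import List

def hybrid_schedule(ncb_per_stage: List[int]) -> List[str]:
    blocks: List[str] = []
    for n in ncb_per_stage:
        blocks += ["NCB"] * n + ["NTB"]
    return blocks

print(hybrid_schedule([2, 2, 4]))
# ['NCB', 'NCB', 'NTB', 'NCB', 'NCB', 'NTB', 'NCB', 'NCB', 'NCB', 'NCB', 'NTB']
```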

Asynchronous Hierarchical Federated Learning

no code implementations 31 May 2022 Xing Wang, Yijun Wang

Federated Learning is a rapidly growing area of research with various benefits and industry applications.

Federated Learning Image Classification

Sepsis Prediction with Temporal Convolutional Networks

no code implementations 31 May 2022 Xing Wang, Yuntian He

We design and implement a temporal convolutional network model to predict sepsis onset.
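For context, a minimal causal 1-D convolution block of the kind a temporal convolutional network stacks; the channel sizes and dilation below are illustrative assumptions, not the authors' architecture.

```python
# Minimal causal temporal-convolution block; left-only padding keeps each
# output from seeing future time steps. Sizes here are illustrative only.
import torch
import torch.nn as nn

class CausalConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel: int = 3, dilation: int = 1):
        super().__init__()
        self.pad = (kernel - 1) * dilation  # amount of left padding
        self.conv = nn.Conv1d(in_ch, out_ch, kernel, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = nn.functional.pad(x, (self.pad, 0))  # causal (left-only) padding
        return self.relu(self.conv(x))

x = torch.randn(8, 40, 100)           # 8 patients, 40 vital-sign channels, 100 steps
block = CausalConvBlock(40, 64, dilation=2)
print(block(x).shape)                 # torch.Size([8, 64, 100])
```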

MoCoViT: Mobile Convolutional Vision Transformer

1 code implementation 25 May 2022 Hailong Ma, Xin Xia, Xing Wang, Xuefeng Xiao, Jiashi Li, Min Zheng

Recently, Transformer networks have achieved impressive results on a variety of vision tasks.

object-detection Object Detection

TRT-ViT: TensorRT-oriented Vision Transformer

no code implementations 19 May 2022 Xin Xia, Jiashi Li, Jie Wu, Xing Wang, Xuefeng Xiao, Min Zheng, Rui Wang

We revisit existing high-performing Transformers from the perspective of practical application.

Image Classification object-detection +2

Network Topology Optimization via Deep Reinforcement Learning

no code implementations 19 Apr 2022 Zhuoran Li, Xing Wang, Ling Pan, Lin Zhu, Zhendong Wang, Junlan Feng, Chao Deng, Longbo Huang

A2C-GS consists of three novel components: a verifier to validate the correctness of a generated network topology, a graph neural network (GNN) to efficiently approximate topology ratings, and a DRL actor layer to conduct the topology search.

Management reinforcement-learning +1

SepViT: Separable Vision Transformer

1 code implementation 29 Mar 2022 Wei Li, Xing Wang, Xin Xia, Jie Wu, Xuefeng Xiao, Min Zheng, Shiping Wen

SepViT carries out the information interaction within and among the windows via depthwise separable self-attention.
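A hedged numpy sketch of the two interaction steps, plain softmax attention inside each window and then attention across per-window summaries; it illustrates the within/among idea only and is not SepViT's depthwise separable design.

```python
# Toy illustration of attention WITHIN windows, then AMONG windows via one
# summary vector per window. Not SepViT's depthwise separable formulation.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attend(q, k, v):
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 32))      # 16 tokens, dimension 32
windows = tokens.reshape(4, 4, 32)      # 4 windows of 4 tokens each

within = np.stack([attend(w, w, w) for w in windows])  # interaction within windows
summaries = within.mean(axis=1)                        # one summary per window
among = attend(summaries, summaries, summaries)        # interaction among windows
print(within.shape, among.shape)                       # (4, 4, 32) (4, 32)
```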

Understanding and Improving Sequence-to-Sequence Pretraining for Neural Machine Translation

no code implementations ACL 2022 Wenxuan Wang, Wenxiang Jiao, Yongchang Hao, Xing Wang, Shuming Shi, Zhaopeng Tu, Michael Lyu

In this paper, we present a substantial step in better understanding the SOTA sequence-to-sequence (Seq2Seq) pretraining for neural machine translation (NMT).

Machine Translation NMT +1

Bridging the Data Gap between Training and Inference for Unsupervised Neural Machine Translation

1 code implementation ACL 2022 Zhiwei He, Xing Wang, Rui Wang, Shuming Shi, Zhaopeng Tu

By carefully designing experiments, we identify two representative characteristics of the data gap in source: (1) style gap (i.e., translated vs. natural text style) that leads to poor generalization capability; (2) content gap that induces the model to produce hallucination content biased towards the target language.

Machine Translation Translation

Adaptive Multi-receptive Field Spatial-Temporal Graph Convolutional Network for Traffic Forecasting

no code implementations 1 Nov 2021 Xing Wang, Juan Zhao, Lin Zhu, Xu Zhou, Zhao Li, Junlan Feng, Chao Deng, Yong Zhang

AMF-STGCN extends GCN by (1) jointly modeling the complex spatial-temporal dependencies in mobile networks, (2) applying attention mechanisms to capture the varied receptive fields of heterogeneous base stations, and (3) introducing an extra decoder based on a fully connected deep network to overcome the error-propagation challenge in multi-step forecasting.

Failure-averse Active Learning for Physics-constrained Systems

no code implementations 27 Oct 2021 Cheolhei Lee, Xing Wang, Jianguo Wu, Xiaowei Yue

Active learning is a subfield of machine learning devised for the design and modeling of systems whose sampling is highly expensive.

Active Learning

Self-Training Sampling with Monolingual Data Uncertainty for Neural Machine Translation

1 code implementation ACL 2021 Wenxiang Jiao, Xing Wang, Zhaopeng Tu, Shuming Shi, Michael R. Lyu, Irwin King

In this work, we propose to improve the sampling procedure by selecting the most informative monolingual sentences to complement the parallel data (a selection sketch follows below).

Machine Translation NMT +1
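A hedged sketch of such a selection, scoring each monolingual sentence by mean per-token negative log-probability as an uncertainty proxy; this scoring choice is an assumption, not necessarily the paper's exact measure.

```python
# Rank monolingual sentences by an uncertainty proxy (mean per-token negative
# log-probability) and keep the top k. The proxy is an illustrative assumption.
import math
from typing import Callable, List

def select_most_uncertain(
    sentences: List[str],
    token_probs: Callable[[str], List[float]],
    k: int,
) -> List[str]:
    def uncertainty(sent: str) -> float:
        probs = token_probs(sent)
        return sum(-math.log(p) for p in probs) / len(probs)
    return sorted(sentences, key=uncertainty, reverse=True)[:k]

# Toy stand-in for a model's per-token probabilities.
fake_probs = lambda s: [1.0 / (1 + len(w)) for w in s.split()]
pool = ["the cat sat", "quantum chromodynamics perturbation expansions", "hi"]
print(select_most_uncertain(pool, fake_probs, k=1))
```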

Benchmarking Graph Neural Networks on Link Prediction

no code implementations 24 Feb 2021 Xing Wang, Alexander Vinel

In this paper, we benchmark several existing graph neural network (GNN) models on different datasets for link prediction.

Graph Attention Link Prediction

Adaptive Spatial-Temporal Inception Graph Convolutional Networks for Multi-step Spatial-Temporal Network Data Forecasting

no code implementations 1 Jan 2021 Xing Wang, Lin Zhu, Juan Zhao, Zhou Xu, Zhao Li, Junlan Feng, Chao Deng

Spatial-temporal data forecasting is of great importance for industries such as telecom network operation and transportation management.

Management

Hierarchical Representation via Message Propagation for Robust Model Fitting

no code implementations 29 Dec 2020 Shuyuan Lin, Xing Wang, Guobao Xiao, Yan Yan, Hanzi Wang

In this paper, we propose a novel hierarchical representation via message propagation (HRMP) method for robust model fitting, which simultaneously takes advantage of both consensus analysis and preference analysis to estimate the parameters of multiple model instances from data corrupted by outliers.

Non-Newtonian and poroelastic effects in simulations of arterial flows

no code implementations 27 Oct 2020 Tongtong Li, Xing Wang, Ivan Yotov

In this paper, we investigate how different coupled computational models, which describe the interaction between an incompressible fluid and two symmetric elastic or poroelastic structures, influence hydrodynamic factors.

Fluid Dynamics Numerical Analysis

Multi-Task Learning with Shared Encoder for Non-Autoregressive Machine Translation

1 code implementation NAACL 2021 Yongchang Hao, Shilin He, Wenxiang Jiao, Zhaopeng Tu, Michael Lyu, Xing Wang

In addition, experimental results demonstrate that our Multi-Task NAT is complementary to knowledge distillation, the standard knowledge transfer method for NAT.

Knowledge Distillation Machine Translation +2

Data Rejuvenation: Exploiting Inactive Training Examples for Neural Machine Translation

1 code implementation EMNLP 2020 Wenxiang Jiao, Xing Wang, Shilin He, Irwin King, Michael R. Lyu, Zhaopeng Tu

First, we train an identification model on the original training data and use it to distinguish inactive examples from active examples by their sentence-level output probabilities (a minimal sketch of this split follows below).

Machine Translation NMT +1
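A minimal sketch of that split, treating the examples with the lowest sentence-level probabilities as inactive; the bottom-10% cutoff is an illustrative assumption, not the paper's setting.

```python
# Split training examples into active and inactive sets by the identification
# model's sentence-level probability; the 10% cutoff is illustrative only.
from typing import List, Tuple

def split_active_inactive(
    examples: List[dict], inactive_fraction: float = 0.1
) -> Tuple[List[dict], List[dict]]:
    ranked = sorted(examples, key=lambda ex: ex["sent_prob"])
    n_inactive = int(len(ranked) * inactive_fraction)
    return ranked[n_inactive:], ranked[:n_inactive]  # active, inactive

data = [{"pair": f"ex{i}", "sent_prob": p}
        for i, p in enumerate([0.9, 0.05, 0.7, 0.4, 0.8, 0.6, 0.3, 0.95, 0.5, 0.2])]
active, inactive = split_active_inactive(data)
print(len(active), [ex["pair"] for ex in inactive])  # -> 9 ['ex1']
```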

Reannealing of Decaying Exploration Based On Heuristic Measure in Deep Q-Network

no code implementations 29 Sep 2020 Xing Wang, Alexander Vinel

Existing exploration strategies in reinforcement learning (RL) often either ignore the history or feedback of the search or are complicated to implement (a reannealing sketch follows below).

reinforcement-learning Reinforcement Learning
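One plausible reading of reannealing, sketched below: decay epsilon as usual, but boost it back up when a heuristic measure, here stagnating recent returns, suggests exploration collapsed too early. The trigger and every constant are illustrative assumptions, not the paper's rule.

```python
# Decaying epsilon with heuristic reannealing: if recent episode returns have
# stagnated, raise epsilon again. Trigger and constants are assumptions.
from collections import deque

class ReannealingEpsilon:
    def __init__(self, eps=1.0, eps_min=0.05, decay=0.995,
                 window=50, stall_tol=1e-3, reanneal_to=0.5):
        self.eps, self.eps_min, self.decay = eps, eps_min, decay
        self.returns = deque(maxlen=window)
        self.stall_tol, self.reanneal_to = stall_tol, reanneal_to

    def step(self, episode_return: float) -> float:
        self.returns.append(episode_return)
        self.eps = max(self.eps_min, self.eps * self.decay)
        full = len(self.returns) == self.returns.maxlen
        if full and max(self.returns) - min(self.returns) < self.stall_tol:
            self.eps = max(self.eps, self.reanneal_to)  # reanneal exploration
            self.returns.clear()
        return self.eps

sched = ReannealingEpsilon()
for _ in range(200):
    eps = sched.step(episode_return=1.0)  # flat returns keep triggering reannealing
print(round(eps, 3))
```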

Cross Learning in Deep Q-Networks

no code implementations 29 Sep 2020 Xing Wang, Alexander Vinel

In this work, we propose a novel cross Q-learning algorithm aimed at alleviating the well-known overestimation problem in value-based reinforcement learning methods, particularly in deep Q-networks, where the overestimation is exaggerated by function-approximation errors (a tabular sketch follows below).

Q-Learning reinforcement-learning +1
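A hedged tabular sketch of a cross update among several Q-tables: the table being updated picks the bootstrap action, while a different, randomly chosen table evaluates it, damping the maximization bias in the spirit of double Q-learning. The details are assumptions, not the paper's exact algorithm.

```python
# Tabular "cross" Q-update: table i chooses the greedy next action, a different
# table j values it. Transitions below are toy data to exercise the update.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, K = 5, 3, 4
Q = [np.zeros((n_states, n_actions)) for _ in range(K)]
alpha, gamma = 0.1, 0.99

def cross_update(s: int, a: int, r: float, s_next: int) -> None:
    i = int(rng.integers(K))                    # table to update
    j = (i + 1 + int(rng.integers(K - 1))) % K  # a different table for evaluation
    a_star = int(np.argmax(Q[i][s_next]))       # action chosen by table i ...
    target = r + gamma * Q[j][s_next, a_star]   # ... valued by table j
    Q[i][s, a] += alpha * (target - Q[i][s, a])

for _ in range(1000):
    s, a = int(rng.integers(n_states)), int(rng.integers(n_actions))
    cross_update(s, a, r=float(rng.normal()), s_next=int(rng.integers(n_states)))
print(np.round(Q[0], 2))
```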

Progressive Automatic Design of Search Space for One-Shot Neural Architecture Search

no code implementations 15 May 2020 Xin Xia, Xuefeng Xiao, Xing Wang, Min Zheng

In this way, PAD-NAS can automatically design the operations for each layer and achieve a trade-off between search space quality and model diversity.

Neural Architecture Search

How Does Selective Mechanism Improve Self-Attention Networks?

1 code implementation ACL 2020 Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu

Self-attention networks (SANs) with a selective mechanism have produced substantial improvements in various NLP tasks by concentrating on a subset of input words.

Machine Translation Natural Language Inference +1

Assessing the Bilingual Knowledge Learned by Neural Machine Translation Models

no code implementations 28 Apr 2020 Shilin He, Xing Wang, Shuming Shi, Michael R. Lyu, Zhaopeng Tu

In this paper, we bridge the gap by assessing the bilingual knowledge learned by NMT models with a phrase table -- an interpretable table of bilingual lexicons.

Machine Translation NMT +1

Neuron Interaction Based Representation Composition for Neural Machine Translation

no code implementations 22 Nov 2019 Jian Li, Xing Wang, Baosong Yang, Shuming Shi, Michael R. Lyu, Zhaopeng Tu

Starting from this intuition, we propose a novel approach to compose representations learned by different components in neural machine translation (e.g., multi-layer networks or multi-head attention), based on modeling strong interactions among neurons in the representation vectors.

Machine Translation Translation

Multi-Granularity Self-Attention for Neural Machine Translation

no code implementations IJCNLP 2019 Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, Zhaopeng Tu

Current state-of-the-art neural machine translation (NMT) uses a deep multi-head self-attention network with no explicit phrase information.

Machine Translation NMT +1

Towards Better Modeling Hierarchical Structure for Self-Attention with Ordered Neurons

no code implementations IJCNLP 2019 Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, Zhaopeng Tu

Recent studies have shown that a hybrid of self-attention networks (SANs) and recurrent neural networks (RNNs) outperforms both individual architectures, while not much is known about why the hybrid models work.

Inductive Bias Machine Translation +1

Self-Attention with Structural Position Representations

no code implementations IJCNLP 2019 Xing Wang, Zhaopeng Tu, Long-Yue Wang, Shuming Shi

Although self-attention networks (SANs) have advanced the state-of-the-art on various NLP tasks, one criticism of SANs is their limited ability to encode the positions of input words (Shaw et al., 2018).

Translation

Towards Understanding Neural Machine Translation with Word Importance

no code implementations IJCNLP 2019 Shilin He, Zhaopeng Tu, Xing Wang, Long-Yue Wang, Michael R. Lyu, Shuming Shi

Although neural machine translation (NMT) has advanced the state-of-the-art on various language pairs, the interpretability of NMT remains unsatisfactory.

Machine Translation NMT +1

Exploiting Sentential Context for Neural Machine Translation

no code implementations ACL 2019 Xing Wang, Zhaopeng Tu, Long-Yue Wang, Shuming Shi

In this work, we present novel approaches to exploit sentential context for neural machine translation (NMT).

Machine Translation NMT +1

Multiple Independent Subspace Clusterings

no code implementations 10 May 2019 Xing Wang, Jun Wang, Carlotta Domeniconi, Guoxian Yu, Guo-Qiang Xiao, Maozu Guo

To ease this process, we consider diverse clusterings embedded in different subspaces, and analyze the embedding subspaces to shed light on the structure of each clustering.

Modeling Recurrence for Transformer

no code implementations NAACL 2019 Jie Hao, Xing Wang, Baosong Yang, Long-Yue Wang, Jinfeng Zhang, Zhaopeng Tu

In addition to the standard recurrent neural network, we introduce a novel attentive recurrent network to leverage the strengths of both attention and recurrent networks.

Machine Translation Translation

Information Aggregation for Multi-Head Attention with Routing-by-Agreement

no code implementations NAACL 2019 Jian Li, Baosong Yang, Zi-Yi Dou, Xing Wang, Michael R. Lyu, Zhaopeng Tu

Multi-head attention is appealing for its ability to jointly extract different types of information from multiple representation subspaces.

Machine Translation Translation

Dynamic Layer Aggregation for Neural Machine Translation with Routing-by-Agreement

no code implementations 15 Feb 2019 Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Long-Yue Wang, Shuming Shi, Tong Zhang

With the promising progress of deep neural networks, layer aggregation has been used to fuse information across layers in various fields, such as computer vision and machine translation.

Machine Translation Translation

Context-Aware Self-Attention Networks

no code implementations 15 Feb 2019 Baosong Yang, Jian Li, Derek Wong, Lidia S. Chao, Xing Wang, Zhaopeng Tu

Self-attention models have shown their flexibility in parallel computation and their effectiveness in modeling both long- and short-term dependencies.

Translation

Learning to Refine Source Representations for Neural Machine Translation

no code implementations 26 Dec 2018 Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu

Neural machine translation (NMT) models generally adopt an encoder-decoder architecture for modeling the entire translation process.

Machine Translation NMT +1

Exploiting Deep Representations for Neural Machine Translation

no code implementations EMNLP 2018 Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Shuming Shi, Tong Zhang

Advanced neural machine translation (NMT) models generally implement encoder and decoder as multiple layers, which allows systems to model complex functions and capture complicated linguistic structures.

Machine Translation NMT +1

Network Modeling and Pathway Inference from Incomplete Data ("PathInf")

no code implementations 1 Oct 2018 Xiang Li, Qitian Chen, Xing Wang, Ning Guo, Nan Wu, Quanzheng Li

In this work, we developed a network inference method from incomplete data ("PathInf"), as massive and non-uniformly distributed missing values are a common challenge in practical problems.

Data Summarization

Neural Machine Translation Advised by Statistical Machine Translation

no code implementations 17 Oct 2016 Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, Min Zhang

Neural Machine Translation (NMT) is a new approach to machine translation that has made great progress in recent years.

Machine Translation NMT +1

Scalable Compression of Deep Neural Networks

no code implementations 26 Aug 2016 Xing Wang, Jie Liang

Deep neural networks generally involve some layers with millions of parameters, making them difficult to deploy and update on devices with limited resources such as mobile phones and other smart embedded systems.
