Search Results for author: Xinyi Wang

Found 54 papers, 25 papers with code

CMU’s IWSLT 2022 Dialect Speech Translation System

no code implementations • IWSLT (ACL) 2022 • Brian Yan, Patrick Fernandes, Siddharth Dalmia, Jiatong Shi, Yifan Peng, Dan Berrebbi, Xinyi Wang, Graham Neubig, Shinji Watanabe

We use additional paired Modern Standard Arabic data (MSA) to directly improve the speech recognition (ASR) and machine translation (MT) components of our cascaded systems.

Knowledge Distillation • Machine Translation +3

mmT5: Modular Multilingual Pre-Training Solves Source Language Hallucinations

no code implementations • 23 May 2023 • Jonas Pfeiffer, Francesco Piccinno, Massimo Nicosia, Xinyi Wang, Machel Reid, Sebastian Ruder

Multilingual sequence-to-sequence models perform poorly with increased language coverage and fail to consistently generate text in the correct target language in few-shot settings.

Natural Language Understanding

TheoremQA: A Theorem-driven Question Answering dataset

1 code implementation • 21 May 2023 • Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan, Xueguang Ma, Jianyu Xu, Xinyi Wang, Tony Xia

We evaluate a wide spectrum of 16 large language and code models with different prompting strategies like Chain-of-Thoughts and Program-of-Thoughts.

GSM8K • Question Answering

Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning

1 code implementation • 20 May 2023 • Liangming Pan, Alon Albalak, Xinyi Wang, William Yang Wang

We also introduce a self-refinement stage, which utilizes the symbolic solver's error messages to revise symbolic formalizations.

Logical Reasoning
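
The self-refinement stage described above can be sketched as a bounded loop in which the solver's error message is fed back to revise the formalization. The toy solver and the `llm_refine` stand-in below are hypothetical illustrations, not the paper's implementation:

```python
def solve(formalization: str):
    """Toy 'symbolic solver': accepts only well-parenthesized formulas."""
    if formalization.count("(") != formalization.count(")"):
        return None, "SyntaxError: unbalanced parentheses"
    return "SAT", None

def llm_refine(formalization: str, error: str) -> str:
    """Stand-in for re-prompting the LLM with the solver's error message."""
    return formalization + ")"  # pretend the model repairs the syntax

formalization = "And(Rain, Implies(Rain, Wet)"  # one ')' missing
for _ in range(3):  # bounded number of refinement rounds
    result, error = solve(formalization)
    if error is None:
        break
    formalization = llm_refine(formalization, error)

print(result)  # SAT
```

The point of the loop is that the solver's error message, not the model's own judgment, drives the revision.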

Collaborative Generative AI: Integrating GPT-k for Efficient Editing in Text-to-Image Generation

no code implementations • 18 May 2023 • Wanrong Zhu, Xinyi Wang, Yujie Lu, Tsu-Jui Fu, Xin Eric Wang, Miguel Eckstein, William Yang Wang

We conduct a series of experiments to compare the common edits made by humans and GPT-k, evaluate the performance of GPT-k in prompting T2I, and examine factors that may influence this process.

Text Generation • Text-to-Image Generation

ArtGPT-4: Artistic Vision-Language Understanding with Adapter-enhanced MiniGPT-4

no code implementations • 12 May 2023 • Zhengqing Yuan, Huiwen Xue, Xinyi Wang, Yongming Liu, Zhuanzhe Zhao, Kun Wang

However, training models on such a large scale is challenging, and finding datasets that match the model's scale is often difficult.

Serial Contrastive Knowledge Distillation for Continual Few-shot Relation Extraction

1 code implementation • 11 May 2023 • Xinyi Wang, Zitao Wang, Wei Hu

Continual few-shot relation extraction (RE) aims to continuously train a model for new relations with few labeled training data, of which the major challenges are the catastrophic forgetting of old relations and the overfitting caused by data sparsity.

Contrastive Learning • Knowledge Distillation +2

Air-Ground Integrated Sensing and Communications: Opportunities and Challenges

no code implementations • 13 Feb 2023 • Zesong Fei, Xinyi Wang, Nan Wu, Jingxuan Huang, J. Andrew Zhang

The air-ground integrated sensing and communications (AG-ISAC) network, which consists of unmanned aerial vehicles (UAVs) and ground terrestrial networks, offers unique capabilities and demands special design techniques.

Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Learning

1 code implementation • 27 Jan 2023 • Xinyi Wang, Wanrong Zhu, Michael Saxon, Mark Steyvers, William Yang Wang

In this study, we aim to examine the in-context learning phenomenon through a Bayesian lens, viewing large language models as topic models that implicitly infer task-related information from demonstrations.

Few-Shot Learning • Language Modelling +2

A Survey of Face Recognition

no code implementations • 26 Dec 2022 • Xinyi Wang, Jianteng Peng, Sufang Zhang, Bihui Chen, Yi Wang, Yandong Guo

Recent years have witnessed breakthroughs in face recognition driven by deep convolutional neural networks.

Face Recognition

Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks

3 code implementations • 22 Nov 2022 • Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen

By combining PoT with self-consistency decoding, we can achieve SoTA performance on all math problem datasets and near-SoTA performance on financial datasets.
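
The entry above names two mechanisms worth unpacking: Program of Thoughts (PoT) has the model emit an executable program instead of free-text reasoning, and self-consistency decoding majority-votes over several sampled outputs. A minimal sketch of the voting step, with hand-written strings standing in for programs sampled from an LLM:

```python
from collections import Counter

def run_program(src: str):
    """Execute one model-generated program and return its `ans` variable."""
    scope = {}
    exec(src, scope)
    return scope.get("ans")

# Hypothetical sampled programs for one word problem:
# "A shirt costs $20 and is discounted 25%; what is the final price?"
samples = [
    "ans = 20 * (1 - 0.25)",
    "ans = 20 - 20 * 0.25",
    "ans = 20 * 0.25",  # a faulty sample, outvoted below
]

# Self-consistency: execute every sampled program, then majority-vote.
answers = [run_program(s) for s in samples]
final = Counter(answers).most_common(1)[0][0]
print(final)  # 15.0
```

Delegating the arithmetic to an interpreter is what distinguishes PoT from chain-of-thought prompting, where the model computes in natural language.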

TW-BAG: Tensor-wise Brain-aware Gate Network for Inpainting Disrupted Diffusion Tensor Imaging

no code implementations • 31 Oct 2022 • Zihao Tang, Xinyi Wang, Lihaowen Zhu, Mariano Cabezas, Dongnan Liu, Michael Barnett, Weidong Cai, Chengyu Wang

Diffusion Weighted Imaging (DWI) is an advanced imaging technique commonly used in neuroscience and neurological clinical research through a Diffusion Tensor Imaging (DTI) model.

A Multi-dimensional Evaluation of Tokenizer-free Multilingual Pretrained Models

no code implementations • 13 Oct 2022 • Jimin Sun, Patrick Fernandes, Xinyi Wang, Graham Neubig

Recent work on tokenizer-free multilingual pretrained models show promising results in improving cross-lingual transfer and reducing engineering overhead (Clark et al., 2022; Xue et al., 2022).

Cross-Lingual Transfer

Enhancing Document-level Relation Extraction by Entity Knowledge Injection

1 code implementation • 23 Jul 2022 • Xinyi Wang, Zitao Wang, Weijian Sun, Wei Hu

Document-level relation extraction (RE) aims to identify the relations between entities throughout an entire document.

Document-level Relation Extraction • Knowledge Graphs

A Comprehensive Review on Deep Supervision: Theories and Applications

no code implementations • 6 Jul 2022 • Renjie Li, Xinyi Wang, Guan Huang, Wenli Yang, Kaining Zhang, Xiaotong Gu, Son N. Tran, Saurabh Garg, Jane Alty, Quan Bai

Deep supervision, also known as 'intermediate supervision' or 'auxiliary supervision', adds supervision signals at the hidden layers of a neural network.

Causal Balancing for Domain Generalization

1 code implementation • 10 Jun 2022 • Xinyi Wang, Michael Saxon, Jiachen Li, Hongyang Zhang, Kun Zhang, William Yang Wang

While machine learning models rapidly advance the state-of-the-art on various real-world tasks, out-of-domain (OOD) generalization remains a challenging problem given the vulnerability of these models to spurious correlations.

Domain Generalization

Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation

1 code implementation • ACL 2022 • Xinyi Wang, Sebastian Ruder, Graham Neubig

The performance of multilingual pretrained models is highly dependent on the availability of monolingual or parallel text present in a target language.

Towards Bi-directional Skip Connections in Encoder-Decoder Architectures and Beyond

no code implementations • 11 Mar 2022 • Tiange Xiang, Chaoyi Zhang, Xinyi Wang, Yang Song, Dongnan Liu, Heng Huang, Weidong Cai

With the backward skip connections, we propose a U-Net based network family, namely Bi-directional O-shape networks, which set new benchmarks on multiple public medical imaging segmentation datasets.

Medical Image Segmentation • Neural Architecture Search

PECO: Examining Single Sentence Label Leakage in Natural Language Inference Datasets through Progressive Evaluation of Cluster Outliers

no code implementations • 16 Dec 2021 • Michael Saxon, Xinyi Wang, Wenda Xu, William Yang Wang

Building natural language inference (NLI) benchmarks that are both challenging for modern techniques and free from shortcut biases is difficult.

Natural Language Inference

BiX-NAS: Searching Efficient Bi-directional Architecture for Medical Image Segmentation

1 code implementation • 26 Jun 2021 • Xinyi Wang, Tiange Xiang, Chaoyi Zhang, Yang Song, Dongnan Liu, Heng Huang, Weidong Cai

We evaluate BiX-NAS on two segmentation tasks using three different medical image datasets, and the experimental results show that our BiX-NAS searched architecture achieves the state-of-the-art performance with significantly lower computational cost.

Image Segmentation • Medical Image Segmentation +2

Innovations Autoencoder and its Application in One-class Anomalous Sequence Detection

no code implementations • 23 Jun 2021 • Xinyi Wang, Lang Tong

An innovations sequence of a time series is a sequence of independent and identically distributed random variables with which the original time series has a causal representation.

Anomaly Detection • Gaussian Processes +1
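
The definition above can be made concrete in the classical linear case: for a known AR(1) model, the innovations are exactly the one-step prediction residuals. The paper's contribution is to learn such a representation with an autoencoder when the model is unknown; the sketch below only illustrates the definition, with made-up parameters:

```python
import random

random.seed(0)
a = 0.8   # AR(1) coefficient (illustrative)
n = 1000

# Simulate x_t = a * x_{t-1} + v_t with i.i.d. Gaussian innovations v_t.
v = [random.gauss(0, 1) for _ in range(n)]
x = [0.0] * n
for t in range(1, n):
    x[t] = a * x[t - 1] + v[t]

# For a known linear model, the innovations are recovered causally
# from the past of the series: v_t = x_t - a * x_{t-1}.
v_hat = [x[t] - a * x[t - 1] for t in range(1, n)]
print(max(abs(vh - vt) for vh, vt in zip(v_hat, v[1:])))  # ~0.0
```

The causal direction matters: each recovered innovation depends only on present and past observations, which is what makes the representation usable for online anomaly detection.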

RefBERT: Compressing BERT by Referencing to Pre-computed Representations

no code implementations • 11 Jun 2021 • Xinyi Wang, Haiqin Yang, Liang Zhao, Yang Mo, Jianping Shen

In contrast, in this paper we propose RefBERT, which leverages the knowledge learned from the teacher, i.e., it exploits the pre-computed BERT representation of a reference sample to compress BERT into a smaller student model.

Knowledge Distillation

Counterfactual Maximum Likelihood Estimation for Training Deep Networks

1 code implementation • NeurIPS 2021 • Xinyi Wang, Wenhu Chen, Michael Saxon, William Yang Wang

Although deep learning models have driven state-of-the-art performance on a wide array of tasks, they are prone to spurious correlations that should not be learned as predictive clues.

Domain Generalization • Image Captioning +1

Multi-view Subword Regularization

1 code implementation • NAACL 2021 • Xinyi Wang, Sebastian Ruder, Graham Neubig

Multilingual pretrained representations generally rely on subword segmentation algorithms to create a shared multilingual vocabulary.

Cross-Lingual Transfer

Gradient-guided Loss Masking for Neural Machine Translation

no code implementations • 26 Feb 2021 • Xinyi Wang, Ankur Bapna, Melvin Johnson, Orhan Firat

To mitigate the negative effect of low quality training data on the performance of neural machine translation models, most existing strategies focus on filtering out harmful data before training starts.

Machine Translation • Translation

Meta Back-translation

1 code implementation • ICLR 2021 • Hieu Pham, Xinyi Wang, Yiming Yang, Graham Neubig

Back-translation is an effective strategy to improve the performance of Neural Machine Translation (NMT) by generating pseudo-parallel data.

Machine Translation • Meta-Learning +2

Fast Dynamics in a Model Metallic Glass-forming Material

no code implementations • 28 Jan 2021 • Hao Zhang, Xinyi Wang, Hai-Bin Yu, Jack F. Douglas

We investigate the fast $\beta$- and Johari-Goldstein (JG) $\beta$-relaxation processes, along with the elastic scattering response of glass-forming (GF) liquids and the Boson peak, in a simulated Al-Sm GF material exhibiting a fragile-strong (FS) transition.

Materials Science

Dynamic Heterogeneity, Cooperative Motion, and Johari-Goldstein $β$-Relaxation in a Metallic Glass-Forming Material Exhibiting a Fragile to Strong Transition

no code implementations • 27 Jan 2021 • Hao Zhang, Xinyi Wang, Hai-Bin Yu, Jack F. Douglas

We investigate the Johari-Goldstein (JG) $\beta$-relaxation process in a model metallic glass-forming (GF) material (Al90Sm10) that has previously been studied extensively by frequency-dependent mechanical measurements and by simulations of its equilibrium properties. Using molecular dynamics simulations based on validated and optimized interatomic potentials, we aim to better understand the nature of this universal relaxation process from a dynamic heterogeneity (DH) perspective.

Materials Science

Modeling Disclosive Transparency in NLP Application Descriptions

1 code implementation • EMNLP 2021 • Michael Saxon, Sharon Levy, Xinyi Wang, Alon Albalak, William Yang Wang

Broader disclosive transparency (truth and clarity in communication regarding the function of AI systems) is widely considered desirable.

Fairness • Language Modelling

A Deep Learning Approach to Anomaly Sequence Detection for High-Resolution Monitoring of Power Systems

no code implementations • 9 Dec 2020 • Kursat Rasim Mestav, Xinyi Wang, Lang Tong

A deep learning approach is proposed to detect data and system anomalies using high-resolution continuous point-on-wave (CPOW) or phasor measurements.

Anomaly Detection

Improving Target-side Lexical Transfer in Multilingual Neural Machine Translation

no code implementations • Findings of the Association for Computational Linguistics 2020 • Luyu Gao, Xinyi Wang, Graham Neubig

To improve the performance of Neural Machine Translation (NMT) for low-resource languages (LRL), one effective strategy is to leverage parallel data from a related high-resource language (HRL).

Machine Translation • NMT +1

Adaptive Subband Compression for Streaming of Continuous Point-on-Wave and PMU Data

no code implementations • 23 Aug 2020 • Xinyi Wang, Yilu Liu, Lang Tong

A data compression system capable of providing real-time streaming of high-resolution continuous point-on-wave (CPOW) and phasor measurement unit (PMU) measurements is proposed.

Data Compression

Balancing Training for Multilingual Neural Machine Translation

2 code implementations • ACL 2020 • Xinyi Wang, Yulia Tsvetkov, Graham Neubig

When training multilingual machine translation (MT) models that can translate to/from multiple languages, we are faced with imbalanced training sets: some languages have much more training data than others.

Machine Translation • Translation
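
For context on the imbalance described above, the common fixed baseline, which this paper replaces with learned sampling weights, is temperature-based sampling over corpus sizes. The corpora and temperature below are made up for illustration:

```python
# Temperature-based sampling: flatten corpus-size proportions so that
# low-resource languages are sampled more often during training.
sizes = {"fr": 10_000_000, "tr": 500_000, "az": 10_000}  # hypothetical corpus sizes
T = 5  # temperature; T=1 gives proportional sampling, large T approaches uniform

probs = {lang: n ** (1 / T) for lang, n in sizes.items()}
total = sum(probs.values())
probs = {lang: p / total for lang, p in probs.items()}
print(probs)
```

With proportional sampling "az" would be drawn about 0.1% of the time; tempering raises that substantially, at the cost of repeating its small corpus more often.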

A Probabilistic Formulation of Unsupervised Text Style Transfer

5 code implementations • ICLR 2020 • Junxian He, Xinyi Wang, Graham Neubig, Taylor Berg-Kirkpatrick

Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes.

Decipherment • Language Modelling +6

Optimizing Data Usage via Differentiable Rewards

1 code implementation • ICML 2020 • Xinyi Wang, Hieu Pham, Paul Michel, Antonios Anastasopoulos, Jaime Carbonell, Graham Neubig

To acquire a new skill, humans learn better and faster if a tutor, based on their current knowledge level, informs them of how much attention they should pay to particular content or practice problems.

Image Classification • Machine Translation

Improving Conditioning in Context-Aware Sequence to Sequence Models

no code implementations • 21 Nov 2019 • Xinyi Wang, Jason Weston, Michael Auli, Yacine Jernite

Neural sequence to sequence models are well established for applications which can be cast as mapping a single input sequence into a single output sequence.

abstractive question answering • Data Augmentation +2

Target Conditioned Sampling: Optimizing Data Selection for Multilingual Neural Machine Translation

no code implementations • ACL 2019 • Xinyi Wang, Graham Neubig

To improve low-resource Neural Machine Translation (NMT) with multilingual corpora, training on the most related high-resource language only is often more effective than using all data available (Neubig and Hu, 2018).

Low-Resource Neural Machine Translation • NMT +1

compare-mt: A Tool for Holistic Comparison of Language Generation Systems

2 code implementations • NAACL 2019 • Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, Xinyi Wang, John Wieting

In this paper, we describe compare-mt, a tool for holistic analysis and comparison of the results of systems for language generation tasks such as machine translation.

Machine Translation • Text Generation +1

The ARIEL-CMU Systems for LoReHLT18

no code implementations • 24 Feb 2019 • Aditi Chaudhary, Siddharth Dalmia, Junjie Hu, Xinjian Li, Austin Matthews, Aldrian Obaja Muis, Naoki Otani, Shruti Rijhwani, Zaid Sheikh, Nidhi Vyas, Xinyi Wang, Jiateng Xie, Ruochen Xu, Chunting Zhou, Peter J. Jansen, Yiming Yang, Lori Levin, Florian Metze, Teruko Mitamura, David R. Mortensen, Graham Neubig, Eduard Hovy, Alan W. Black, Jaime Carbonell, Graham V. Horwood, Shabnam Tafreshi, Mona Diab, Efsun S. Kayi, Noura Farra, Kathleen McKeown

This paper describes the ARIEL-CMU submissions to the Low Resource Human Language Technologies (LoReHLT) 2018 evaluations for the tasks Machine Translation (MT), Entity Discovery and Linking (EDL), and detection of Situation Frames in Text and Speech (SF Text and Speech).

Machine Translation • Translation

Multilingual Neural Machine Translation With Soft Decoupled Encoding

1 code implementation • ICLR 2019 • Xinyi Wang, Hieu Pham, Philip Arthur, Graham Neubig

Multilingual training of neural machine translation (NMT) systems has led to impressive accuracy improvements on low-resource languages.

Machine Translation • NMT +1

A Tree-based Decoder for Neural Machine Translation

1 code implementation • EMNLP 2018 • Xinyi Wang, Hieu Pham, Pengcheng Yin, Graham Neubig

Recent advances in Neural Machine Translation (NMT) show that adding syntactic information to NMT systems can improve the quality of their translations.

Machine Translation • NMT +1

XNMT: The eXtensible Neural Machine Translation Toolkit

1 code implementation • WS 2018 • Graham Neubig, Matthias Sperber, Xinyi Wang, Matthieu Felix, Austin Matthews, Sarguna Padmanabhan, Ye Qi, Devendra Singh Sachan, Philip Arthur, Pierre Godard, John Hewitt, Rachid Riad, Liming Wang

In this paper we describe the design of XNMT and its experiment configuration system, and demonstrate its utility on the tasks of machine translation, speech recognition, and multi-tasked machine translation/parsing.

Machine Translation • NMT +3
