Search Results for author: Runxin Xu

Found 23 papers, 14 papers with code

S^4-Tuning: A Simple Cross-lingual Sub-network Tuning Method

no code implementations ACL 2022 Runxin Xu, Fuli Luo, Baobao Chang, Songfang Huang, Fei Huang

The emergence of multilingual pre-trained language models makes it possible to adapt to target languages with only a few labeled examples. However, vanilla fine-tuning tends to achieve degenerate and unstable results, owing to Language Interference among different languages and Parameter Overload under few-sample transfer learning scenarios. To address these two problems elegantly, we propose S^4-Tuning, a Simple Cross-lingual Sub-network Tuning method.

Transfer Learning
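
The snippet above names the mechanism but not the recipe. Below is a minimal, hypothetical PyTorch sketch of the general cross-lingual sub-network tuning idea: select a small language-specific sub-network and update only those weights, freezing the rest of the pre-trained model. The selection rule (top-k gradient magnitude on the few target-language examples), keep ratio, and learning rate are illustrative assumptions, not the paper's exact procedure.

    import torch
    import torch.nn as nn

    def select_subnetwork(model, loss, keep_ratio=0.1):
        # Score each parameter by gradient magnitude on the target-language
        # examples; keep the top keep_ratio fraction per tensor (an assumed
        # criterion, not necessarily the paper's exact selection rule).
        loss.backward()
        masks = {}
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            scores = p.grad.abs().flatten()
            k = max(1, int(keep_ratio * scores.numel()))
            threshold = torch.topk(scores, k).values.min()
            masks[name] = (p.grad.abs() >= threshold).float()
        model.zero_grad()
        return masks

    def masked_update(model, masks, lr=1e-4):
        # Gradient step that touches only the selected sub-network;
        # all other weights stay frozen at their pre-trained values.
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks and p.grad is not None:
                    p -= lr * masks[name] * p.grad

    # Toy usage with a tiny stand-in for a multilingual PLM.
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
    masks = select_subnetwork(model, nn.functional.cross_entropy(model(x), y))
    nn.functional.cross_entropy(model(x), y).backward()  # fresh gradients
    masked_update(model, masks)                          # tune only the sub-network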

DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models

1 code implementation 5 Feb 2024 Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li, Y. Wu, Daya Guo

Mathematical reasoning poses a significant challenge for language models due to its complex and structured nature.

Ranked #11 on Math Word Problem Solving on MATH (using extra training data)

Arithmetic Reasoning · Math +1

A Two-Stream AMR-enhanced Model for Document-level Event Argument Extraction

1 code implementation NAACL 2022 Runxin Xu, Peiyi Wang, Tianyu Liu, Shuang Zeng, Baobao Chang, Zhifang Sui

In this paper, we focus on extracting event arguments from an entire document, which mainly faces two critical problems: a) the long-distance dependency between triggers and arguments across sentences; b) the distracting context around an event in the document.

Document-level Event Extraction · Event Argument Extraction +2

ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs

2 code implementations Findings (NAACL) 2022 Liang Chen, Peiyi Wang, Runxin Xu, Tianyu Liu, Zhifang Sui, Baobao Chang

As Abstract Meaning Representation (AMR) implicitly involves compound semantic annotations, we hypothesize that auxiliary tasks which are semantically or formally related can better enhance AMR parsing.

Ranked #7 on AMR Parsing on LDC2020T02 (using extra training data)

AMR Parsing · Dependency Parsing +1

Knowledgeable Salient Span Mask for Enhancing Language Models as Knowledge Base

no code implementations 17 Apr 2022 Cunxiang Wang, Fuli Luo, Yanyang Li, Runxin Xu, Fei Huang, Yue Zhang

Pre-trained language models (PLMs) like BERT have made significant progress in various downstream NLP tasks.

Self-Supervised Learning

Probing Structured Pruning on Multilingual Pre-trained Models: Settings, Algorithms, and Efficiency

no code implementations ACL 2022 Yanyang Li, Fuli Luo, Runxin Xu, Songfang Huang, Fei Huang, Liwei Wang

Structured pruning has been extensively studied on monolingual pre-trained language models and is yet to be fully evaluated on their multilingual counterparts.

Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning

1 code implementation 1 Apr 2022 Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu, Songfang Huang, Jun Huang

Pre-trained Language Models (PLMs) have achieved remarkable performance on various language understanding tasks in IR systems, which require a fine-tuning process based on labeled training data.

Contrastive Learning

Focus on the Target's Vocabulary: Masked Label Smoothing for Machine Translation

2 code implementations 6 Mar 2022 Liang Chen, Runxin Xu, Baobao Chang

Label smoothing and vocabulary sharing are two widely used techniques in neural machine translation models.

Machine Translation · Translation
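
With a shared source-target vocabulary, vanilla label smoothing spreads probability mass onto source-only tokens that can never appear in the target sentence. Below is a minimal sketch of the masked variant the title describes, assuming a precomputed set of target-side token ids; the tensor layout and normalization details are assumptions rather than the paper's exact formulation.

    import torch

    def masked_label_smoothing(gold, vocab_size, target_token_ids, eps=0.1):
        # Spread the smoothing mass eps only over tokens that can actually
        # occur in the target language, not over the whole shared vocabulary.
        mask = torch.zeros(vocab_size)
        mask[target_token_ids] = 1.0
        n = gold.size(0)
        dist = mask.repeat(n, 1) * (eps / (mask.sum() - 1))  # non-gold target tokens
        dist[torch.arange(n), gold] = 1.0 - eps              # gold token keeps 1 - eps
        return dist  # each row sums to 1, with zero mass on source-only tokens

    # Toy usage: 10-token shared vocabulary where ids 0-5 are target-side.
    gold = torch.tensor([2, 4])
    dist = masked_label_smoothing(gold, vocab_size=10, target_token_ids=list(range(6)))
    # Training would minimize -(dist * log_probs).sum(-1).mean() against model log-probs.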

From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression

2 code implementations 14 Dec 2021 Runxin Xu, Fuli Luo, Chengyu Wang, Baobao Chang, Jun Huang, Songfang Huang, Fei Huang

Unified in contrastive learning, CAP enables the pruned model to learn from the pre-trained model for task-agnostic knowledge, and from the fine-tuned model for task-specific knowledge.

Contrastive Learning · Language Modelling +2
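
A minimal sketch of the two-teacher contrastive idea the snippet describes: the pruned model's representations are pulled toward both the pre-trained model (task-agnostic knowledge) and the fine-tuned model (task-specific knowledge) with an in-batch contrastive loss. The InfoNCE form, temperature, and equal weighting of the two terms are assumptions, not the paper's exact objective.

    import torch
    import torch.nn.functional as F

    def info_nce(queries, keys, temperature=0.07):
        # In-batch contrastive loss: the i-th query should match the i-th key,
        # with the other batch items serving as negatives.
        q = F.normalize(queries, dim=-1)
        k = F.normalize(keys, dim=-1)
        logits = q @ k.t() / temperature
        return F.cross_entropy(logits, torch.arange(q.size(0)))

    def cap_style_loss(pruned, pretrained, finetuned):
        # Pull the pruned model toward both teachers (assumed equal weighting).
        return info_nce(pruned, pretrained) + info_nce(pruned, finetuned)

    # Toy usage with random 128-dim sentence representations for a batch of 8.
    loss = cap_style_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128))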

An Enhanced Span-based Decomposition Method for Few-Shot Sequence Labeling

1 code implementation NAACL 2022 Peiyi Wang, Runxin Xu, Tianyu Liu, Qingyu Zhou, Yunbo Cao, Baobao Chang, Zhifang Sui

Few-Shot Sequence Labeling (FSSL) is a canonical paradigm for tagging models, e.g., named entity recognition and slot filling, to generalize to an emerging, resource-scarce domain.

Few-shot NER · Meta-Learning +4
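
A rough sketch of the generic span-based, metric-learning recipe behind FSSL methods like this one: enumerate candidate spans, embed them, and label each query span by its nearest class prototype built from support-set spans. Mean-pooled span representations and nearest-prototype matching are illustrative assumptions; the paper's enhanced decomposition is more elaborate.

    import torch
    import torch.nn.functional as F

    def enumerate_spans(token_embs, max_len=3):
        # Represent every span of up to max_len tokens by mean-pooling its
        # token embeddings (a common, simple span representation).
        n = token_embs.size(0)
        spans, reprs = [], []
        for i in range(n):
            for j in range(i, min(n, i + max_len)):
                spans.append((i, j))
                reprs.append(token_embs[i:j + 1].mean(0))
        return spans, torch.stack(reprs)

    def nearest_prototype(span_reprs, prototypes):
        # Label each query span by its most similar class prototype, where each
        # prototype is the mean representation of that class's support spans.
        sims = F.normalize(span_reprs, dim=-1) @ F.normalize(prototypes, dim=-1).t()
        return sims.argmax(-1)

    # Toy usage: 5 tokens, 64-dim embeddings, 3 entity-class prototypes.
    spans, reprs = enumerate_spans(torch.randn(5, 64))
    labels = nearest_prototype(reprs, torch.randn(3, 64))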

Behind the Scenes: An Exploration of Trigger Biases Problem in Few-Shot Event Classification

1 code implementation 29 Aug 2021 Peiyi Wang, Runxin Xu, Tianyu Liu, Damai Dai, Baobao Chang, Zhifang Sui

However, we find that they suffer from trigger biases that signify statistical homogeneity between some trigger words and target event types, which we summarize as trigger overlapping and trigger separability.

Explicit Interaction Network for Aspect Sentiment Triplet Extraction

no code implementations 21 Jun 2021 Peiyi Wang, Tianyu Liu, Damai Dai, Runxin Xu, Baobao Chang, Zhifang Sui

The table encoder extracts sentiment at the token-pair level, so that compositional features between targets and opinions can be easily captured.

Aspect Sentiment Triplet Extraction · Sentence +1
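
A minimal sketch of a generic token-pair table encoder of the kind the snippet describes: cell (i, j) jointly encodes tokens i and j, so target-opinion pairs can be scored directly over the whole sentence. The concatenation-based scorer and layer sizes are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class TokenPairTable(nn.Module):
        # Cell (i, j) encodes the token pair (i, j), letting the model score
        # target-opinion interactions at the token-pair level.
        def __init__(self, hidden, num_labels):
            super().__init__()
            self.scorer = nn.Sequential(
                nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, num_labels)
            )

        def forward(self, token_embs):                       # (n, hidden)
            n, h = token_embs.shape
            rows = token_embs.unsqueeze(1).expand(n, n, h)   # token i along rows
            cols = token_embs.unsqueeze(0).expand(n, n, h)   # token j along columns
            return self.scorer(torch.cat([rows, cols], -1))  # (n, n, num_labels)

    # Toy usage: 7 tokens, 128-dim embeddings, 4 pair labels.
    table = TokenPairTable(128, 4)(torch.randn(7, 128))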

Document-level Event Extraction via Heterogeneous Graph-based Interaction Model with a Tracker

2 code implementations ACL 2021 Runxin Xu, Tianyu Liu, Lei Li, Baobao Chang

Existing methods are not effective due to two challenges of this task: a) the target event arguments are scattered across sentences; b) the correlation among events in a document is non-trivial to model.

Document-level Event Extraction · Event Extraction

Volctrans Parallel Corpus Filtering System for WMT 2020

no code implementations WMT (EMNLP) 2020 Runxin Xu, Zhuo Zhi, Jun Cao, Mingxuan Wang, Lei Li

In this paper, we describe our submissions to the WMT20 shared task on parallel corpus filtering and alignment for low-resource conditions.

Sentence · Word Alignment

Xiaomingbot: A Multilingual Robot News Reporter

no code implementations ACL 2020 Runxin Xu, Jun Cao, Mingxuan Wang, Jiaze Chen, Hao Zhou, Ying Zeng, Yu-Ping Wang, Li Chen, Xiang Yin, Xijin Zhang, Songcheng Jiang, Yuxuan Wang, Lei Li

This paper presents Xiaomingbot, an intelligent, multilingual and multimodal software robot equipped with four integral capabilities: news generation, news translation, news reading and avatar animation.

News Generation · Translation +1

ACMo: Angle-Calibrated Moment Methods for Stochastic Optimization

1 code implementation 12 Jun 2020 Xunpeng Huang, Runxin Xu, Hao Zhou, Zhe Wang, Zhengyang Liu, Lei Li

Due to its simplicity and outstanding ability to generalize, stochastic gradient descent (SGD) is still the most widely used optimization method despite its slow convergence.

BIG-bench Machine Learning · Stochastic Optimization

Adaptive Gradient Methods Can Be Provably Faster than SGD after Finite Epochs

no code implementations 12 Jun 2020 Xunpeng Huang, Hao Zhou, Runxin Xu, Zhe Wang, Lei Li

Adaptive gradient methods have attracted much attention in the machine learning community due to their high efficiency.
