Search Results for author: Xiaoli Wang

Found 15 papers, 8 papers with code

Tencent submission for WMT20 Quality Estimation Shared Task

no code implementations · WMT (EMNLP) 2020 · Haijiang Wu, Zixuan Wang, Qingsong Ma, Xinjie Wen, Ruichen Wang, Xiaoli Wang, Yulin Zhang, Zhipeng Yao, Siyao Peng

This paper presents Tencent’s submission to the WMT20 Quality Estimation (QE) Shared Task: Sentence-Level Post-editing Effort for English-Chinese in Task 2.

Machine Translation · Sentence · +2

Rethinking Multi-view Representation Learning via Distilled Disentangling

1 code implementation · 16 Mar 2024 · Guanzhou Ke, Bo Wang, Xiaoli Wang, Shengfeng He

To this end, we propose an innovative framework for multi-view representation learning, which incorporates a technique we term 'distilled disentangling'.

Representation Learning

Fine-tuning Large Language Models for Domain-specific Machine Translation

no code implementations · 23 Feb 2024 · Jiawei Zheng, Hanghai Hong, Xiaoli Wang, Jingsong Su, Yonggui Liang, Shikai Wu

Second, fine-tuning LLMs on domain-specific data often incurs high training costs for domain adaptation, and may weaken the zero-shot MT capabilities of LLMs due to over-specialization.

Domain Adaptation · In-Context Learning · +2

BESTMVQA: A Benchmark Evaluation System for Medical Visual Question Answering

no code implementations · 13 Dec 2023 · Xiaojie Hong, Zixin Song, Liangzhi Li, Xiaoli Wang, Feiyan Liu

Medical Visual Question Answering (Med-VQA) is an important task in the healthcare industry, in which a natural language question is answered based on a medical image.

Medical Visual Question Answering · Question Answering · +1

Disentangling Multi-view Representations Beyond Inductive Bias

1 code implementation · 3 Aug 2023 · Guanzhou Ke, Yang Yu, Guoqing Chao, Xiaoli Wang, Chenyang Xu, Shengfeng He

In this paper, we propose a novel multi-view representation disentangling method that aims to go beyond inductive biases, ensuring both interpretability and generalizability of the resulting representations.

Clustering · Inductive Bias · +2

A Sequence-to-Sequence&Set Model for Text-to-Table Generation

1 code implementation · 31 May 2023 · Tong Li, Zhihao Wang, Liangying Shao, Xuling Zheng, Xiaoli Wang, Jinsong Su

Specifically, in addition to a text encoder encoding the input text, our model is equipped with a table header generator that first outputs a table header, i.e., the first row of the table, in the manner of sequence generation.

Search-Map-Search: A Frame Selection Paradigm for Action Recognition

no code implementations · CVPR 2023 · Mingjun Zhao, Yakun Yu, Xiaoli Wang, Lei Yang, Di Niu

To overcome the limitations of existing methods, we propose a Search-Map-Search learning paradigm which combines the advantages of heuristic search and supervised learning to select the best combination of frames from a video as one entity.

Action Recognition · Video Understanding

LA3: Efficient Label-Aware AutoAugment

1 code implementation · 20 Apr 2023 · Mingjun Zhao, Shan Lu, Zixuan Wang, Xiaoli Wang, Di Niu

Automated augmentation is an emerging and effective technique for searching data augmentation policies that improve the generalizability of deep neural network training.

Bayesian Optimization · Data Augmentation

A Clustering-guided Contrastive Fusion for Multi-view Representation Learning

1 code implementation · 28 Dec 2022 · Guanzhou Ke, Guoqing Chao, Xiaoli Wang, Chenyang Xu, Yongqi Zhu, Yang Yu

To this end, we utilize a deep fusion network to fuse view-specific representations into a view-common representation, extracting high-level semantics to obtain robust representations.

Clustering · Multi-view Learning · +1

WR-ONE2SET: Towards Well-Calibrated Keyphrase Generation

1 code implementation · 13 Nov 2022 · Binbin Xie, Xiangpeng Wei, Baosong Yang, Huan Lin, Jun Xie, Xiaoli Wang, Min Zhang, Jinsong Su

Keyphrase generation aims to automatically generate short phrases summarizing an input document.

Keyphrase Generation

Verdi: Quality Estimation and Error Detection for Bilingual Corpora

1 code implementation · 31 May 2021 · Mingjun Zhao, Haijiang Wu, Di Niu, Zixuan Wang, Xiaoli Wang

Verdi adopts two word predictors to enable diverse features to be extracted from a pair of sentences for subsequent quality estimation, including a transformer-based neural machine translation (NMT) model and a pre-trained cross-lingual language model (XLM).

Language Modelling · Machine Translation · +3
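The Verdi snippet above describes pairing a transformer-based NMT word predictor with a pre-trained XLM to extract diverse features for quality estimation. As a hedged illustration only (not the authors' code; the extractor internals, feature dimensions, and the linear scoring head here are all stand-ins), the feature-combination idea can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def nmt_features(n_words: int, dim: int = 4) -> np.ndarray:
    """Stand-in for per-word features from an NMT-based word predictor."""
    return rng.normal(size=(n_words, dim))

def xlm_features(n_words: int, dim: int = 4) -> np.ndarray:
    """Stand-in for per-word features from a cross-lingual LM predictor."""
    return rng.normal(size=(n_words, dim))

def sentence_qe_features(n_words: int) -> np.ndarray:
    """Concatenate both predictors' per-word features, then mean-pool
    over words to get one fixed-size vector per sentence pair."""
    per_word = np.concatenate(
        [nmt_features(n_words), xlm_features(n_words)], axis=1
    )
    return per_word.mean(axis=0)

# A (hypothetical) linear head maps the pooled vector to a quality score.
feats = sentence_qe_features(n_words=7)
w = rng.normal(size=feats.shape[0])
score = float(feats @ w)
```

In the actual system, the scoring head and both predictors are learned jointly on bilingual corpora; the sketch only shows how two heterogeneous feature sources can be fused into a single sentence-level estimate.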

Reinforced Curriculum Learning on Pre-trained Neural Machine Translation Models

no code implementations · 13 Apr 2020 · Mingjun Zhao, Haijiang Wu, Di Niu, Xiaoli Wang

Specifically, we propose a data selection framework based on Deterministic Actor-Critic, in which a critic network predicts the expected change of model performance due to a certain sample, while an actor network learns to select the best sample out of a random batch of samples presented to it.

Machine Translation · NMT · +1
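The snippet above outlines a data selection loop: a critic predicts the expected change in model performance from training on a sample, and an actor picks the best sample from a random batch. A minimal sketch of that selection step (a toy linear critic and a greedy stand-in for the learned actor, not the paper's networks):

```python
import numpy as np

rng = np.random.default_rng(1)

def critic(sample: np.ndarray, w: np.ndarray) -> float:
    """Toy critic: predicts the expected change in model performance
    if this sample were used for training (here, a linear score)."""
    return float(sample @ w)

def select_sample(batch: np.ndarray, w: np.ndarray) -> int:
    """Toy stand-in for the actor: greedily pick the sample the critic
    rates highest from a random batch (the real actor is a learned
    network trained jointly with the critic)."""
    scores = [critic(s, w) for s in batch]
    return int(np.argmax(scores))

batch = rng.normal(size=(8, 4))  # 8 candidate samples, 4 features each
w = rng.normal(size=4)           # stand-in critic weights
best = select_sample(batch, w)
```

In the full method, the critic is trained from observed performance changes and the actor's selections feed continued training of a pre-trained NMT model; the sketch captures only the "score the batch, pick one sample" interface.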
