Search Results for author: Songming Zhang

Found 6 papers, 4 papers with code

Data Mixture in Training Un-assures Out-of-Distribution Generalization

no code implementations • 25 Dec 2023 • Songming Zhang, Yuxiao Luo, Qizhou Wang, Haoang Chi, Weikai Li, Bo Han, Jinyan Li

We study the out-of-distribution (OOD) generalization capability of models by exploring the relationship between generalization error and training set size.
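The snippet does not give the paper's experimental protocol; as a rough, hypothetical illustration of how such a relationship can be probed, the sketch below trains on nested subsets of increasing size and records error on a held-out OOD set (the `train_model` routine and datasets are assumed, user-supplied inputs, not the paper's setup).

import numpy as np

def ood_error_vs_train_size(train_X, train_y, ood_X, ood_y, train_model, sizes, seed=0):
    """Fit a model on nested training subsets and record its 0/1 error on an OOD test set."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(train_X))  # one fixed shuffle so the subsets are nested
    results = []
    for n in sizes:
        idx = order[:n]
        model = train_model(train_X[idx], train_y[idx])      # user-supplied training routine
        preds = model.predict(ood_X)
        results.append((n, float(np.mean(preds != ood_y))))  # error on the OOD set
    return results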

Data Augmentation • Out-of-Distribution Generalization

Revisiting Knowledge Distillation under Distribution Shift

1 code implementation • 25 Dec 2023 • Songming Zhang, Ziyu Lyu, Xiaofeng Chen

Knowledge distillation transfers knowledge from large models to small models and has recently achieved remarkable results.
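As a reference point for the distillation setups the paper revisits, here is a minimal, generic logit-distillation loss in PyTorch (temperature-scaled KL divergence plus cross-entropy); the temperature and mixing weight are illustrative defaults, not values from the paper.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: match the teacher's temperature-softened distribution.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce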

Data Augmentation • Knowledge Distillation

A Quality-based Syntactic Template Retriever for Syntactically-controlled Paraphrase Generation

1 code implementation • 20 Oct 2023 • Xue Zhang, Songming Zhang, Yunlong Liang, Yufeng Chen, Jian Liu, Wenjuan Han, Jinan Xu

Furthermore, for situations requiring multiple paraphrases for each source sentence, we design a Diverse Templates Search (DTS) algorithm, which can enhance the diversity between paraphrases without sacrificing quality.
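The snippet does not spell out how DTS works, so the following is only an assumption-laden sketch of one common way to trade quality against diversity (a maximal-marginal-relevance style greedy selection); the `quality` scores and `similarity` function are placeholders for whatever retriever and metric are actually used.

def select_diverse_templates(templates, quality, similarity, k=3, lam=0.7):
    """Pick k templates, balancing quality (higher is better) against similarity to already-chosen ones."""
    selected = []
    candidates = list(range(len(templates)))
    while candidates and len(selected) < k:
        def score(i):
            max_sim = max((similarity(templates[i], templates[j]) for j in selected),
                          default=0.0)
            return lam * quality[i] - (1.0 - lam) * max_sim
        best = max(candidates, key=score)   # greedy: best quality/diversity trade-off
        selected.append(best)
        candidates.remove(best)
    return [templates[i] for i in selected]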

Data Augmentation • Paraphrase Generation • +2

Towards Understanding and Improving Knowledge Distillation for Neural Machine Translation

1 code implementation • 14 May 2023 • Songming Zhang, Yunlong Liang, Shuaibo Wang, Wenjuan Han, Jian Liu, Jinan Xu, Yufeng Chen

In this work, we first unravel this mystery from an empirical perspective and show that the knowledge comes from the top-1 predictions of teachers, which also helps us build a potential connection between word- and sequence-level KD.
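A hedged sketch of the word-level variant suggested by the snippet, where the student is supervised only by the teacher's top-1 token at each target position; this is illustrative and not necessarily the paper's exact objective.

import torch.nn.functional as F

def top1_word_kd_loss(student_logits, teacher_logits, target_ids, pad_id=0):
    """Logits: (batch, seq_len, vocab); target_ids is used only to locate padding."""
    top1 = teacher_logits.argmax(dim=-1)                     # teacher's most likely token per position
    top1 = top1.masked_fill(target_ids.eq(pad_id), pad_id)   # padded positions get the pad id...
    return F.cross_entropy(
        student_logits.reshape(-1, student_logits.size(-1)),
        top1.reshape(-1),
        ignore_index=pad_id,                                 # ...so they are ignored by the loss
    )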

Knowledge Distillation • Machine Translation • +2

Conditional Bilingual Mutual Information Based Adaptive Training for Neural Machine Translation

1 code implementation • ACL 2022 • Songming Zhang, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jian Liu, Jie Zhou

Token-level adaptive training approaches can alleviate the token imbalance problem and thus improve neural machine translation by re-weighting the losses of different target tokens based on specific statistical metrics (e.g., token frequency or mutual information).
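To make the re-weighting idea concrete, here is a generic token-level adaptive cross-entropy in PyTorch; `token_weights` stands in for whatever statistic (frequency, mutual information, or the paper's CBMI) produces the per-token weights, which are assumed to be computed elsewhere.

import torch.nn.functional as F

def weighted_token_ce(logits, target_ids, token_weights, pad_id=0):
    """logits: (batch, seq_len, vocab); target_ids and token_weights: (batch, seq_len)."""
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        target_ids.reshape(-1),
        reduction="none",                      # keep one loss value per target token
    ).view(target_ids.shape)
    mask = target_ids.ne(pad_id).float()       # zero out padding positions
    weighted = per_token * token_weights * mask
    return weighted.sum() / mask.sum().clamp(min=1.0)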

Language Modelling • Machine Translation • +2
