Search Results for author: Chenyu Zhao

Found 6 papers, 1 paper with code

Pre-training Tasks for User Intent Detection and Embedding Retrieval in E-commerce Search

1 code implementation • 12 Aug 2022 • Yiming Qiu, Chenyu Zhao, Han Zhang, Jingwei Zhuo, TianHao Li, Xiaowei Zhang, Songlin Wang, Sulong Xu, Bo Long, Wen-Yun Yang

BERT-style models, pre-trained on a general corpus (e.g., Wikipedia) and fine-tuned on a task-specific corpus, have recently emerged as breakthrough techniques in many NLP tasks such as question answering, text classification, and sequence labeling.

Intent Detection • Question Answering +3
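
For orientation, the pre-train/fine-tune recipe the abstract refers to can be sketched in a few lines of Python with the Hugging Face transformers library. The checkpoint name, the number of intent classes, and the example query below are illustrative placeholders, not details taken from the paper:

    # Minimal sketch: fine-tune a BERT-style encoder for query intent
    # detection. Checkpoint, num_labels, and the query are hypothetical
    # placeholders, not choices made in the paper.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=8  # hypothetical intent label count
    )

    # One gradient step on a toy (query, intent) pair.
    batch = tokenizer(["running shoes for women"], return_tensors="pt")
    labels = torch.tensor([3])  # hypothetical intent id
    loss = model(**batch, labels=labels).loss
    loss.backward()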

Polynomial Approximation of Discounted Moments

no code implementations • 30 Oct 2021 • Chenyu Zhao, Misha van Beek, Peter Spreij, Makhtar Ba

We introduce an approximation strategy for the discounted moments of a stochastic process that can, for a large class of problems, approximate the true moments.
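
To fix ideas, one standard formulation of a discounted moment (a common definition from the literature, shown only for orientation; the paper's exact setup may differ) is, for a process $X_t$, discount rate $\delta > 0$, and order $k$:

    % k-th discounted moment of X_t under discount rate \delta
    % (a standard formulation for illustration; not quoted from the paper)
    m_k(x) = \mathbb{E}\left[ \int_0^\infty e^{-\delta t}\, X_t^{k}\, \mathrm{d}t \,\middle|\, X_0 = x \right]

Per the title, the strategy then approximates such moments by a polynomial in the state variable, $m_k(x) \approx \sum_{j=0}^n c_j x^j$.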

Differentiable Retrieval Augmentation via Generative Language Modeling for E-commerce Query Intent Classification

no code implementations • 18 Aug 2023 • Chenyu Zhao, Yunjiang Jiang, Yiming Qiu, Han Zhang, Wen-Yun Yang

Retrieval augmentation, which enhances downstream models with a knowledge retriever and an external corpus rather than by merely increasing the number of model parameters, has been successfully applied to many natural language processing (NLP) tasks, such as text classification and question answering.

Intent Classification +5
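
The generic retrieval-augmentation pattern described above can be sketched as follows. This is an illustrative skeleton of the general pattern, not the paper's differentiable, generative-LM-based method; the retriever and classifier are toy stand-ins:

    # Generic retrieval augmentation: retrieve supporting text for a query,
    # then classify the query together with the retrieved context. Toy
    # skeleton of the general pattern, not the paper's method.

    def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
        """Toy lexical retriever: rank corpus entries by token overlap."""
        q_tokens = set(query.lower().split())
        ranked = sorted(corpus,
                        key=lambda doc: -len(q_tokens & set(doc.lower().split())))
        return ranked[:k]

    def classify_with_retrieval(query: str, corpus: list[str], classifier) -> int:
        """Concatenate the query with retrieved evidence before classifying."""
        context = " [SEP] ".join(retrieve(query, corpus))
        return classifier(query + " [SEP] " + context)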

F$^2$AT: Feature-Focusing Adversarial Training via Disentanglement of Natural and Perturbed Patterns

no code implementations • 23 Oct 2023 • Yaguan Qian, Chenyu Zhao, Zhaoquan Gu, Bin Wang, Shouling Ji, Wei Wang, Boyang Zhou, Pan Zhou

We propose Feature-Focusing Adversarial Training (F$^2$AT), which differs from previous work in that it forces the model to focus on core features from natural patterns and to reduce the impact of spurious features from perturbed patterns.

Adversarial Robustness • Disentanglement +2
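
For context, the baseline that feature-focusing methods refine is standard adversarial training on PGD-perturbed inputs. The sketch below is that vanilla baseline, not F$^2$AT's disentanglement of natural and perturbed patterns, and the hyperparameters are conventional defaults rather than the paper's settings:

    # Vanilla PGD adversarial training step (the common baseline, not F^2AT).
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        """Craft a perturbation within an eps-ball that maximizes the loss."""
        x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
        for _ in range(steps):
            x_adv = x_adv.detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back to the ball
        return x_adv.detach().clamp(0, 1)

    def adversarial_training_step(model, x, y, optimizer):
        """Train on adversarial examples instead of clean ones."""
        optimizer.zero_grad()
        loss = F.cross_entropy(model(pgd_attack(model, x, y)), y)
        loss.backward()
        optimizer.step()
        return loss.item()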
