Search Results for author: Bowen Yang

Found 8 papers, 3 papers with code

Improving Conversational Recommendation Systems’ Quality with Context-Aware Item Meta-Information

no code implementations • Findings (NAACL) 2022 • Bowen Yang, Cong Han, Yu Li, Lei Zuo, Zhou Yu

In this paper, we propose a simple yet effective architecture comprising a pre-trained language model (PLM) and an item metadata encoder to better integrate recommendation and dialog generation.

Knowledge Graphs • Language Modelling • +2

ZeroGen: Zero-shot Multimodal Controllable Text Generation with Multiple Oracles

1 code implementation • 29 Jun 2023 • Haoqin Tu, Bowen Yang, Xianfeng Zhao

Automatically generating textual content with desired attributes is an ambitious task that people have long pursued.

News Generation • Sentence

Improving Conversational Recommendation Systems' Quality with Context-Aware Item Meta Information

1 code implementation • 15 Dec 2021 • Bowen Yang, Cong Han, Yu Li, Lei Zuo, Zhou Yu

The encoder learns to map item metadata to embeddings that can reflect the semantic information in the dialog context.
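As a hedged illustration of the idea in this snippet (mapping item metadata to embeddings that can be scored against a dialog context), the sketch below substitutes a hashed bag-of-words encoder for the paper's learned encoder; all names here are illustrative and not from the authors' code.

```python
# Illustrative sketch (not the paper's implementation): embed item-metadata
# text and a dialog context into the same vector space, then score their
# relevance by cosine similarity. A hashed bag-of-words stands in for the
# learned metadata encoder.
import math

DIM = 16  # embedding size for the sketch


def embed(text):
    """Hash each token into a DIM-dimensional bag-of-words vector, L2-normalized."""
    vec = [0.0] * DIM
    for token in text.lower().split():
        vec[hash(token) % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def relevance(item_metadata, dialog_context):
    """Cosine similarity between the metadata and context embeddings."""
    a, b = embed(item_metadata), embed(dialog_context)
    return sum(x * y for x, y in zip(a, b))


score = relevance("sci-fi space adventure film", "I want a space adventure movie")
```

In the paper's setting the encoder is trained so that these embeddings reflect dialog semantics; here, shared surface tokens simply produce a higher cosine score.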

Language Modelling • Recommendation Systems • +1

AllWOZ: Towards Multilingual Task-Oriented Dialog Systems for All

no code implementations • 15 Dec 2021 • Lei Zuo, Kun Qian, Bowen Yang, Zhou Yu

A commonly observed problem with state-of-the-art natural language technologies, such as Amazon Alexa and Apple Siri, is that their services do not extend to citizens of most developing countries due to language barriers.

Meta-Learning

Few-Shot Domain Expansion for Face Anti-Spoofing

no code implementations • 27 Jun 2021 • Bowen Yang, Jing Zhang, Zhenfei Yin, Jing Shao

In practice, given a handful of labeled samples from a new deployment scenario (the target domain) and abundant labeled face images in the existing source domain, the FAS system is expected to perform well in the new scenario without sacrificing performance on the original domain.

Face Anti-Spoofing • Face Recognition • +1

PipeMare: Asynchronous Pipeline Parallel DNN Training

no code implementations • 9 Oct 2019 • Bowen Yang, Jian Zhang, Jonathan Li, Christopher Ré, Christopher R. Aberger, Christopher De Sa

Pipeline parallelism (PP), when used to train neural networks, enables larger models to be partitioned spatially, leading to both lower network communication and higher overall hardware utilization.
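The spatial partitioning mentioned in this snippet can be illustrated with a hedged sketch (this is a generic greedy load balancer, not PipeMare's algorithm): split a sequence of layers into contiguous pipeline stages with roughly equal compute, where `layer_costs` is a hypothetical per-layer cost estimate.

```python
# Minimal sketch of spatial pipeline partitioning. Assumption: the model is
# a linear sequence of layers with known relative compute costs; this greedy
# balancer is illustrative only and is not from the PipeMare paper.

def partition_layers(layer_costs, num_stages):
    """Split layers into contiguous stages of roughly equal total cost."""
    total = sum(layer_costs)
    target = total / num_stages
    stages, current, acc = [], [], 0.0
    for i, cost in enumerate(layer_costs):
        current.append(i)
        acc += cost
        # Close this stage once it reaches its share of the compute, while
        # leaving at least one layer for each remaining stage.
        remaining_layers = len(layer_costs) - i - 1
        remaining_stages = num_stages - len(stages) - 1
        if (len(stages) < num_stages - 1
                and acc >= target
                and remaining_layers >= remaining_stages):
            stages.append(current)
            current, acc = [], 0.0
    if current:
        stages.append(current)
    return stages


# Two balanced stages of total cost 4 each:
print(partition_layers([1, 1, 2, 2, 1, 1], 2))  # → [[0, 1, 2], [3, 4, 5]]
```

In a real PP system each stage would live on a separate device and activations would flow between them; asynchronous schemes like PipeMare additionally relax the synchronization between stages.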
