Search Results for author: Haoyu Song

Found 8 papers, 7 papers with code

Language Models are General-Purpose Interfaces

1 code implementation • 13 Jun 2022 • Yaru Hao, Haoyu Song, Li Dong, Shaohan Huang, Zewen Chi, Wenhui Wang, Shuming Ma, Furu Wei

Experimental results across various language-only and vision-language benchmarks show that our model outperforms or is competitive with specialized models on finetuning, zero-shot generalization, and few-shot learning.

Tasks: Causal Language Modeling • Few-Shot Learning • +4

Visually-Augmented Language Modeling

1 code implementation • 20 May 2022 • Weizhi Wang, Li Dong, Hao Cheng, Haoyu Song, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei

With the visually-augmented context, VaLM uses a visual knowledge fusion layer to enable multimodal grounded language modeling by attending to both the text context and visual knowledge in images.

Tasks: Image Retrieval • Language Modelling • +1
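The fusion layer described above can be pictured as cross-attention in which each text position attends over both the text context and the retrieved image features. The following is a minimal NumPy sketch under that assumption; the shapes, single-head attention, and projection matrices are illustrative stand-ins, not VaLM's actual layer.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fusion_layer(text_h, image_feats, W_q, W_k, W_v):
    """Toy cross-attention sketch: text hidden states (queries) attend
    over the concatenation of text context and image features (keys/values).
    A single head, no residual/LayerNorm -- illustrative only."""
    context = np.concatenate([text_h, image_feats], axis=0)  # (T+I, d)
    q = text_h @ W_q                                         # (T, d)
    k = context @ W_k                                        # (T+I, d)
    v = context @ W_v                                        # (T+I, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])                  # (T, T+I)
    return softmax(scores) @ v                               # (T, d)

rng = np.random.default_rng(0)
d = 8
text_h = rng.standard_normal((5, d))       # hidden states for 5 tokens
image_feats = rng.standard_normal((3, d))  # features of 3 retrieved images
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))
fused = fusion_layer(text_h, image_feats, W_q, W_k, W_v)
print(fused.shape)  # one fused vector per text position
```

The key design point the abstract names is that the same attention mechanism mixes textual and visual evidence, so the language model's next-token prediction can be grounded in retrieved images.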

CLIP Models are Few-shot Learners: Empirical Studies on VQA and Visual Entailment

1 code implementation • ACL 2022 • Haoyu Song, Li Dong, Wei-Nan Zhang, Ting Liu, Furu Wei

We first evaluate CLIP's zero-shot performance on a typical visual question answering task and demonstrate a zero-shot cross-modality transfer capability of CLIP on the visual entailment task.

Tasks: Question Answering • Visual Entailment • +1
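The zero-shot evaluation mentioned above rests on CLIP's basic scoring rule: embed the image and a set of candidate text prompts, then rank prompts by cosine similarity with the image embedding. A minimal NumPy sketch of that scoring rule follows; the random embeddings and prompt strings are stand-ins (real CLIP embeddings would come from its image and text encoders), and the logit scale of 100 is CLIP's published convention.

```python
import numpy as np

def zero_shot_probs(image_emb, text_embs, logit_scale=100.0):
    """Toy sketch of CLIP-style zero-shot scoring: L2-normalize the
    embeddings, take cosine similarities, and softmax into a
    distribution over candidate prompts."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = logit_scale * (txt @ img)          # one logit per prompt
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(1)
image_emb = rng.standard_normal(512)            # stand-in image embedding
prompts = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
text_embs = rng.standard_normal((len(prompts), 512))  # stand-in text embeddings
probs = zero_shot_probs(image_emb, text_embs)
best = prompts[int(np.argmax(probs))]
print(best, probs.round(3))
```

For VQA, a common adaptation (and an assumption here, not a claim about the paper's exact method) is to turn each candidate answer into a prompt and pick the highest-scoring one.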

Profile Consistency Identification for Open-domain Dialogue Agents

1 code implementation • EMNLP 2020 • Haoyu Song, Yan Wang, Wei-Nan Zhang, Zhengyu Zhao, Ting Liu, Xiaojiang Liu

Maintaining a consistent attribute profile is crucial for dialogue agents to naturally converse with humans.

Exploiting Persona Information for Diverse Generation of Conversational Responses

1 code implementation • 29 May 2019 • Haoyu Song, Wei-Nan Zhang, Yiming Cui, Dong Wang, Ting Liu

Given a conversational context with persona information, how a chatbot should exploit that information to generate diverse and sustainable conversations remains a non-trivial task.

Tasks: Chatbot
