Search Results for author: Wenliang Dai

Found 22 papers, 17 papers with code

Dimsum @LaySumm 20: BART-based Approach for Scientific Document Summarization

1 code implementation • EMNLP (sdp) 2020 • Tiezheng Yu, Dan Su, Wenliang Dai, Pascale Fung

Lay summarization aims to generate lay summaries of scientific papers automatically.

Lay Summarization
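
As a hedged illustration of the task (not the authors' system), lay summarization can be prototyped with any pretrained abstractive summarizer from Hugging Face transformers; the checkpoint below is an assumption chosen for availability:

```python
from transformers import pipeline

# Generic abstractive summarizer as a stand-in for a lay-summarization model;
# the checkpoint is an assumption, not the Dimsum system itself.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

abstract = (
    "Lay summarization aims to generate lay summaries of scientific papers "
    "automatically, so that non-experts can grasp the key findings."
)
print(summarizer(abstract, max_length=60, min_length=15)[0]["summary_text"])
```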

InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning

1 code implementation • 11 May 2023 • Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi

In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pre-trained BLIP-2 models.
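
InstructBLIP checkpoints are available through Hugging Face transformers; a minimal inference sketch (the checkpoint name and prompt are illustrative, and smaller variants also exist on the Hub):

```python
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

# Checkpoint name assumed; other instruction-tuned variants exist on the Hub.
name = "Salesforce/instructblip-vicuna-7b"
processor = InstructBlipProcessor.from_pretrained(name)
model = InstructBlipForConditionalGeneration.from_pretrained(name)

image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, text="Describe this image in detail.",
                   return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out, skip_special_tokens=True)[0].strip())
```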

Plausible May Not Be Faithful: Probing Object Hallucination in Vision-Language Pre-training

1 code implementation • 14 Oct 2022 • Wenliang Dai, Zihan Liu, Ziwei Ji, Dan Su, Pascale Fung

Large-scale vision-language pre-trained (VLP) models are prone to hallucinate non-existent visual objects when generating text based on visual information.

Image Augmentation • Language Modelling • +1
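
A common way to quantify object hallucination is a CHAIR-style metric: the fraction of objects mentioned in generated text that are absent from the image annotations. A minimal sketch, with object extraction and synonym matching omitted:

```python
def object_hallucination_rate(mentioned_objects, annotated_objects):
    """CHAIR-style score: share of mentioned objects that do not appear
    in the image's ground-truth annotations. Synonym handling omitted."""
    mentioned = set(mentioned_objects)
    hallucinated = mentioned - set(annotated_objects)
    return len(hallucinated) / max(len(mentioned), 1)

# "dog" is hallucinated: the annotations contain only a cat and a sofa.
print(object_hallucination_rate(["cat", "dog", "sofa"], ["cat", "sofa"]))  # ~0.33
```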

Kaggle Competition: Cantonese Audio-Visual Speech Recognition for In-car Commands

no code implementations • 6 Jul 2022 • Wenliang Dai, Samuel Cahyawijaya, Tiezheng Yu, Elham J. Barezi, Pascale Fung

With the rise of deep learning and intelligent vehicles, the smart assistant has become an essential in-car component to facilitate driving and provide extra functionalities.

Audio-Visual Speech Recognition • speech-recognition • +1

Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation

no code implementations • Findings (ACL) 2022 • Wenliang Dai, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Pascale Fung

Furthermore, the original textual language understanding and generation ability of the PLM is maintained after VLKD, which makes our model versatile for both multimodal and unimodal tasks.

Image Captioning • Knowledge Distillation • +4
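
VLKD distills CLIP's multimodal embedding space into a pretrained language model. A simplified sketch of one plausible alignment objective, an MSE term plus an in-batch contrastive term; the paper's actual losses differ in detail, and tensor names are illustrative:

```python
import torch
import torch.nn.functional as F

def alignment_loss(student_emb, teacher_emb, temperature=0.1):
    """Pull the PLM's (student) sentence embeddings toward a frozen CLIP
    (teacher) space: MSE plus in-batch contrastive alignment."""
    student = F.normalize(student_emb, dim=-1)
    teacher = F.normalize(teacher_emb, dim=-1)
    mse = F.mse_loss(student, teacher)
    logits = student @ teacher.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return mse + F.cross_entropy(logits, targets)

loss = alignment_loss(torch.randn(8, 512), torch.randn(8, 512))
```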

Survey of Hallucination in Natural Language Generation

no code implementations • 8 Feb 2022 • Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Wenliang Dai, Andrea Madotto, Pascale Fung

This advancement has led to more fluent and coherent natural language generation, which in turn has improved downstream tasks such as abstractive summarization, dialogue generation, and data-to-text generation.

Abstractive Text Summarization • Data-to-Text Generation • +3

Automatic Speech Recognition Datasets in Cantonese: A Survey and New Dataset

1 code implementation • LREC 2022 • Tiezheng Yu, Rita Frieske, Peng Xu, Samuel Cahyawijaya, Cheuk Tung Shadow Yiu, Holy Lovenia, Wenliang Dai, Elham J. Barezi, Qifeng Chen, Xiaojuan Ma, Bertram E. Shi, Pascale Fung

We further conduct experiments with Fairseq S2T Transformer, a state-of-the-art ASR model, on the biggest existing dataset, Common Voice zh-HK, and our proposed MDCC, and the results show the effectiveness of our dataset.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) • +3

ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in Multi-turn Conversation

2 code implementations • LREC 2022 • Holy Lovenia, Samuel Cahyawijaya, Genta Indra Winata, Peng Xu, Xu Yan, Zihan Liu, Rita Frieske, Tiezheng Yu, Wenliang Dai, Elham J. Barezi, Qifeng Chen, Xiaojuan Ma, Bertram E. Shi, Pascale Fung

ASCEND (A Spontaneous Chinese-English Dataset) is a high-quality Mandarin Chinese-English code-switching corpus built on spontaneous multi-turn conversational dialogue sources collected in Hong Kong.
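
The corpus is distributed through the Hugging Face Hub; a loading sketch in which the repository id and column names are assumptions to verify against the dataset card:

```python
from datasets import load_dataset

# Repository id and column names assumed; check the dataset card before use.
ascend = load_dataset("CAiRE/ASCEND", split="train")
sample = ascend[0]
print(sample["transcription"])  # a code-switched Mandarin-English utterance
```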

Greenformer: Factorization Toolkit for Efficient Deep Neural Networks

no code implementations • 14 Sep 2021 • Samuel Cahyawijaya, Genta Indra Winata, Holy Lovenia, Bryan Wilie, Wenliang Dai, Etsuko Ishii, Pascale Fung

While the recent advances in deep neural networks (DNN) bring remarkable success, the computational cost also increases considerably.
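
The core idea behind such factorization toolkits is replacing a dense weight matrix with a low-rank product, trading a small accuracy loss for fewer parameters and FLOPs. A minimal PyTorch sketch using truncated SVD; Greenformer's own API differs:

```python
import torch
import torch.nn as nn

def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace a dense nn.Linear with two low-rank linears via truncated SVD,
    cutting parameters from d_in*d_out to rank*(d_in + d_out)."""
    W = layer.weight.data                      # (d_out, d_in)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]               # (d_out, rank)
    V_r = Vh[:rank, :]                         # (rank, d_in)
    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = V_r
    second.weight.data = U_r
    if layer.bias is not None:
        second.bias.data = layer.bias.data.clone()
    return nn.Sequential(first, second)

# Usage: a rank-64 approximation of a 768x768 projection.
approx = factorize_linear(nn.Linear(768, 768), rank=64)
```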

Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization

1 code implementation • EMNLP 2021 • Tiezheng Yu, Wenliang Dai, Zihan Liu, Pascale Fung

Multimodal abstractive summarization (MAS) models that summarize videos (vision modality) and their corresponding transcripts (text modality) are able to extract the essential information from massive multimodal data on the Internet.

Abstractive Text Summarization • Text Generation
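
A common way to "vision-guide" a text generator is cross-attention from the decoder's text states to video features. A simplified fusion block, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class VisionGuidedFusion(nn.Module):
    """Inject video features into text decoder states via cross-attention;
    a simplified stand-in for a vision-guided decoder layer."""
    def __init__(self, d_model, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_states, video_feats):
        # text_states: (B, T, d); video_feats: (B, F, d)
        attended, _ = self.attn(text_states, video_feats, video_feats)
        return self.norm(text_states + attended)

fused = VisionGuidedFusion(512)(torch.randn(2, 30, 512), torch.randn(2, 16, 512))
```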

Weakly-supervised Multi-task Learning for Multimodal Affect Recognition

no code implementations • 23 Apr 2021 • Wenliang Dai, Samuel Cahyawijaya, Yejin Bang, Pascale Fung

In this paper, we propose to leverage these datasets using weakly-supervised multi-task learning to improve the generalization performance on each of them.

Emotion Recognition • Multi-Task Learning • +1
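
The usual architecture for this setting is a shared encoder with one head per dataset or task, trained on whichever labels each batch provides. A minimal sketch; dimensions and task names are illustrative:

```python
import torch
import torch.nn as nn

class MultiTaskAffectModel(nn.Module):
    """Shared encoder with one classification head per task; each training
    batch only supervises the head its dataset has labels for."""
    def __init__(self, input_dim, hidden_dim, task_num_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden_dim, n) for task, n in task_num_classes.items()
        })

    def forward(self, x, task):
        return self.heads[task](self.encoder(x))

model = MultiTaskAffectModel(128, 64, {"sentiment": 3, "emotion": 7})
logits = model(torch.randn(4, 128), task="emotion")  # shape (4, 7)
```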

Multimodal End-to-End Sparse Model for Emotion Recognition

1 code implementation • NAACL 2021 • Wenliang Dai, Samuel Cahyawijaya, Zihan Liu, Pascale Fung

Existing works on multimodal affective computing tasks, such as emotion recognition, generally adopt a two-phase pipeline, first extracting feature representations for each single modality with hand-crafted algorithms and then performing end-to-end learning with the extracted features.

Emotion Recognition

Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-Task Learning for Offensive Language Detection

1 code implementation • SEMEVAL 2020 • Wenliang Dai, Tiezheng Yu, Zihan Liu, Pascale Fung

Nowadays, offensive content in social media has become a serious problem, and automatically detecting offensive language is an essential task.

Language Modelling • Multi-Task Learning
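
Such a system couples a shared BERT encoder with one classification head per OLID sub-task (e.g. sub-task A, offensive vs. not; sub-task B, targeted vs. untargeted). A simplified sketch; the head layout is illustrative, not the authors' exact configuration:

```python
import torch.nn as nn
from transformers import BertModel

class OffensiveLanguageMTL(nn.Module):
    """Shared BERT encoder with two task heads for OLID sub-tasks A and B."""
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        self.head_a = nn.Linear(hidden, 2)  # offensive vs. not offensive
        self.head_b = nn.Linear(hidden, 2)  # targeted vs. untargeted

    def forward(self, input_ids, attention_mask):
        pooled = self.bert(input_ids, attention_mask=attention_mask).pooler_output
        return self.head_a(pooled), self.head_b(pooled)
```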

Multi-hop Question Generation with Graph Convolutional Network

1 code implementation • Findings of the Association for Computational Linguistics 2020 • Dan Su, Yan Xu, Wenliang Dai, Ziwei Ji, Tiezheng Yu, Pascale Fung

Multi-hop Question Generation (QG) aims to generate answer-related questions by aggregating and reasoning over multiple scattered pieces of evidence from different paragraphs.

Question Generation • Question-Generation
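
At the heart of such models is a graph convolution over an entity/evidence graph that propagates information between paragraphs. A minimal GCN layer in PyTorch, with mean-over-neighbors normalization as a simplification of the usual symmetric normalization:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution step: average neighbor features through the
    adjacency matrix, then apply a learned linear projection."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) adjacency with self-loops
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        return torch.relu(self.linear(adj @ x / deg))

h = GCNLayer(64, 64)(torch.randn(10, 64), torch.eye(10))
```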

Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition

1 code implementation • Asian Chapter of the Association for Computational Linguistics 2020 • Wenliang Dai, Zihan Liu, Tiezheng Yu, Pascale Fung

Despite the recent achievements made in the multi-modal emotion recognition task, two problems still exist and have not been well investigated: 1) the relationships between different emotion categories are not utilized, which leads to sub-optimal performance; and 2) current models fail to cope well with low-resource emotions, especially unseen emotions.

Multimodal Emotion Recognition • Word Embeddings
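
Handling unseen emotions typically works by scoring the utterance representation against the word embeddings of the emotion labels, so a new emotion needs only its label embedding rather than labeled training data. A minimal sketch; names and dimensions are illustrative:

```python
import torch
import torch.nn.functional as F

def zero_shot_emotion(utterance_emb, label_embs, labels):
    """Pick the emotion whose label word embedding is most similar (cosine)
    to the utterance representation; unseen emotions need no training data."""
    sims = F.cosine_similarity(utterance_emb.unsqueeze(0), label_embs, dim=-1)
    return labels[sims.argmax().item()]

labels = ["happy", "sad", "angry", "excited"]  # "excited" could be unseen
pred = zero_shot_emotion(torch.randn(300), torch.randn(4, 300), labels)
```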

Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-Task Learning for Offensive Language Detection

1 code implementation • 28 Apr 2020 • Wenliang Dai, Tiezheng Yu, Zihan Liu, Pascale Fung

Nowadays, offensive content in social media has become a serious problem, and automatically detecting offensive language is an essential task.

Abuse Detection • Language Modelling • +1
