Search Results for author: Wenliang Dai

Found 26 papers, 19 papers with code

Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-Task Learning for Offensive Language Detection

1 code implementation 28 Apr 2020 Wenliang Dai, Tiezheng Yu, Zihan Liu, Pascale Fung

Nowadays, offensive content in social media has become a serious problem, and automatically detecting offensive language is an essential task.

Abuse Detection · Language Modelling +1

Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition

1 code implementation Asian Chapter of the Association for Computational Linguistics 2020 Wenliang Dai, Zihan Liu, Tiezheng Yu, Pascale Fung

Despite the recent achievements made in the multi-modal emotion recognition task, two problems still exist and have not been well investigated: 1) the relationships between different emotion categories are not utilized, which leads to sub-optimal performance; and 2) current models fail to cope well with low-resource emotions, especially for unseen emotions.

Multimodal Emotion Recognition · Word Embeddings

Multi-hop Question Generation with Graph Convolutional Network

1 code implementation Findings of the Association for Computational Linguistics 2020 Dan Su, Yan Xu, Wenliang Dai, Ziwei Ji, Tiezheng Yu, Pascale Fung

Multi-hop Question Generation (QG) aims to generate answer-related questions by aggregating and reasoning over multiple scattered evidence from different paragraphs.

Question Generation · Question-Generation +1

Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-Task Learning for Offensive Language Detection

1 code implementation SEMEVAL 2020 Wenliang Dai, Tiezheng Yu, Zihan Liu, Pascale Fung

Nowadays, offensive content in social media has become a serious problem, and automatically detecting offensive language is an essential task.

Language Modelling · Multi-Task Learning

Multimodal End-to-End Sparse Model for Emotion Recognition

1 code implementation NAACL 2021 Wenliang Dai, Samuel Cahyawijaya, Zihan Liu, Pascale Fung

Existing works on multimodal affective computing tasks, such as emotion recognition, generally adopt a two-phase pipeline, first extracting feature representations for each single modality with hand-crafted algorithms and then performing end-to-end learning with the extracted features.

Emotion Recognition

Weakly-supervised Multi-task Learning for Multimodal Affect Recognition

no code implementations 23 Apr 2021 Wenliang Dai, Samuel Cahyawijaya, Yejin Bang, Pascale Fung

In this paper, we propose to leverage these datasets using weakly-supervised multi-task learning to improve the generalization performance on each of them.

Emotion Recognition · Multi-Task Learning +1

Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization

1 code implementation EMNLP 2021 Tiezheng Yu, Wenliang Dai, Zihan Liu, Pascale Fung

Multimodal abstractive summarization (MAS) models that summarize videos (vision modality) and their corresponding transcripts (text modality) are able to extract the essential information from massive multimodal data on the Internet.

Abstractive Text Summarization · Text Generation

Greenformer: Factorization Toolkit for Efficient Deep Neural Networks

no code implementations 14 Sep 2021 Samuel Cahyawijaya, Genta Indra Winata, Holy Lovenia, Bryan Wilie, Wenliang Dai, Etsuko Ishii, Pascale Fung

While the recent advances in deep neural networks (DNN) bring remarkable success, the computational cost also increases considerably.

ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in Multi-turn Conversation

2 code implementations LREC 2022 Holy Lovenia, Samuel Cahyawijaya, Genta Indra Winata, Peng Xu, Xu Yan, Zihan Liu, Rita Frieske, Tiezheng Yu, Wenliang Dai, Elham J. Barezi, Qifeng Chen, Xiaojuan Ma, Bertram E. Shi, Pascale Fung

ASCEND (A Spontaneous Chinese-English Dataset) is a high-quality Mandarin Chinese-English code-switching corpus built on spontaneous multi-turn conversational dialogue sources collected in Hong Kong.

Automatic Speech Recognition Datasets in Cantonese: A Survey and New Dataset

1 code implementation LREC 2022 Tiezheng Yu, Rita Frieske, Peng Xu, Samuel Cahyawijaya, Cheuk Tung Shadow Yiu, Holy Lovenia, Wenliang Dai, Elham J. Barezi, Qifeng Chen, Xiaojuan Ma, Bertram E. Shi, Pascale Fung

We further conduct experiments with Fairseq S2T Transformer, a state-of-the-art ASR model, on the largest existing dataset, Common Voice zh-HK, and on our proposed MDCC, and the results show the effectiveness of our dataset.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) +3

Survey of Hallucination in Natural Language Generation

no code implementations 8 Feb 2022 Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Delong Chen, Ho Shu Chan, Wenliang Dai, Andrea Madotto, Pascale Fung

This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation.

Abstractive Text Summarization · Data-to-Text Generation +4

Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation

no code implementations Findings (ACL) 2022 Wenliang Dai, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Pascale Fung

Furthermore, the original textual language understanding and generation ability of the PLM is maintained after VLKD, which makes our model versatile for both multimodal and unimodal tasks.

Image Captioning · Knowledge Distillation +4

Kaggle Competition: Cantonese Audio-Visual Speech Recognition for In-car Commands

no code implementations 6 Jul 2022 Wenliang Dai, Samuel Cahyawijaya, Tiezheng Yu, Elham J. Barezi, Pascale Fung

With the rise of deep learning and intelligent vehicles, the smart assistant has become an essential in-car component to facilitate driving and provide extra functionalities.

Audio-Visual Speech Recognition · speech-recognition +1

Plausible May Not Be Faithful: Probing Object Hallucination in Vision-Language Pre-training

1 code implementation 14 Oct 2022 Wenliang Dai, Zihan Liu, Ziwei Ji, Dan Su, Pascale Fung

Large-scale vision-language pre-trained (VLP) models are prone to hallucinate non-existent visual objects when generating text based on visual information.

Hallucination · Image Augmentation +3

Visual Instruction Tuning with Polite Flamingo

2 code implementations 3 Jul 2023 Delong Chen, Jianfeng Liu, Wenliang Dai, Baoyuan Wang

This side effect negatively impacts the model's ability to format responses appropriately -- for instance, its "politeness" -- due to the overly succinct and unformatted nature of raw annotations, resulting in reduced human preference.

mCLIP: Multilingual CLIP via Cross-lingual Transfer

1 code implementation ACL 2023 Guanhua Chen, Lu Hou, Yun Chen, Wenliang Dai, Lifeng Shang, Xin Jiang, Qun Liu, Jia Pan, Wenping Wang

Furthermore, to enhance the token- and sentence-level multilingual representation of the MTE, we propose to train it with machine translation and contrastive learning jointly before the TriKD to provide a better initialization.

Contrastive Learning · Cross-Lingual Transfer +7

Survey of Social Bias in Vision-Language Models

no code implementations 24 Sep 2023 Nayeon Lee, Yejin Bang, Holy Lovenia, Samuel Cahyawijaya, Wenliang Dai, Pascale Fung

This survey aims to provide researchers with a high-level insight into the similarities and differences of social bias studies in pre-trained models across NLP, CV, and VL.

Fairness

Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models

no code implementations 9 Oct 2023 Holy Lovenia, Wenliang Dai, Samuel Cahyawijaya, Ziwei Ji, Pascale Fung

Object hallucination poses a significant challenge in vision-language (VL) models, often leading to the generation of nonsensical or unfaithful responses with non-existent objects.

Hallucination · Object +2

Dimsum @LaySumm 20

1 code implementation EMNLP (sdp) 2020 Tiezheng Yu, Dan Su, Wenliang Dai, Pascale Fung

Lay summarization aims to generate lay summaries of scientific papers automatically.

Lay Summarization · Sentence
