Search Results for author: Zihan Liu

Found 46 papers, 30 papers with code

CAiRE in DialDoc21: Data Augmentation for Information-Seeking Dialogue System

1 code implementation ACL (dialdoc) 2021 Yan Xu, Etsuko Ishii, Genta Indra Winata, Zhaojiang Lin, Andrea Madotto, Zihan Liu, Peng Xu, Pascale Fung

Information-seeking dialogue systems, including knowledge identification and response generation, aim to respond to users with fluent, coherent, and informative responses based on users’ needs.

Data Augmentation Response Generation

Learning the Evolutionary and Multi-scale Graph Structure for Multivariate Time Series Forecasting

no code implementations 28 Jun 2022 Junchen Ye, Zihan Liu, Bowen Du, Leilei Sun, Weimiao Li, Yanjie Fu, Hui Xiong

To equip the graph neural network with a flexible and practical graph structure, in this paper, we investigate how to model the evolutionary and multi-scale interactions of time series.

Multivariate Time Series Forecasting Self-Learning +1
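
For the multivariate time series forecasting entry above, the core idea is that the graph connecting the series is learned rather than fixed in advance. The sketch below is a minimal, generic PyTorch illustration of a learnable adjacency matrix feeding a single graph-mixing step; the class name, layer sizes, and overall layout are assumptions for illustration and do not reproduce the paper's evolutionary, multi-scale architecture.

import torch
import torch.nn as nn

class LearnableGraphForecaster(nn.Module):
    """Toy forecaster: a learned adjacency matrix mixes information across series."""
    def __init__(self, num_series, window, horizon, emb_dim=16):
        super().__init__()
        # Node embeddings whose inner products define a learnable adjacency matrix.
        self.node_emb = nn.Parameter(torch.randn(num_series, emb_dim))
        self.temporal = nn.Linear(window, horizon)   # per-series temporal projection
        self.mix = nn.Linear(horizon, horizon)       # transform after graph aggregation

    def forward(self, x):                            # x: (batch, num_series, window)
        adj = torch.softmax(self.node_emb @ self.node_emb.t(), dim=-1)  # (N, N)
        h = self.temporal(x)                         # (batch, N, horizon)
        h = torch.einsum("ij,bjh->bih", adj, h)      # aggregate neighbours via the learned graph
        return self.mix(torch.relu(h))               # (batch, N, horizon) forecasts

model = LearnableGraphForecaster(num_series=8, window=24, horizon=6)
pred = model(torch.randn(32, 8, 24))                 # -> shape (32, 8, 6)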

SNP2Vec: Scalable Self-Supervised Pre-Training for Genome-Wide Association Study

1 code implementation BioNLP (ACL) 2022 Samuel Cahyawijaya, Tiezheng Yu, Zihan Liu, Tiffany T. W. Mak, Xiaopu Zhou, Nancy Y. Ip, Pascale Fung

We apply SNP2Vec to perform long-sequence genomics modeling, and we evaluate the effectiveness of our approach on predicting Alzheimer's disease risk in a Chinese cohort.

ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in Multi-turn Conversation

1 code implementation 12 Dec 2021 Holy Lovenia, Samuel Cahyawijaya, Genta Indra Winata, Peng Xu, Xu Yan, Zihan Liu, Rita Frieske, Tiezheng Yu, Wenliang Dai, Elham J. Barezi, Qifeng Chen, Xiaojuan Ma, Bertram E. Shi, Pascale Fung

ASCEND (A Spontaneous Chinese-English Dataset) is a high-quality Mandarin Chinese-English code-switching corpus built on spontaneous multi-turn conversational dialogue sources collected in Hong Kong.

NER-BERT: A Pre-trained Model for Low-Resource Entity Tagging

no code implementations1 Dec 2021 Zihan Liu, Feijun Jiang, Yuxiang Hu, Chen Shi, Pascale Fung

Named entity recognition (NER) models generally perform poorly when large training datasets are unavailable for low-resource domains.

Language Modelling named-entity-recognition +1

Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup

1 code implementation 30 Nov 2021 Siyuan Li, Zicheng Liu, Di Wu, Zihan Liu, Stan Z. Li

Mixup is a popular data-dependent augmentation technique for deep neural networks, which contains two sub-tasks, mixup generation and classification.

Data Augmentation Image Classification +2
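
The snippet above describes mixup as two sub-tasks, generating mixed samples and classifying them. For orientation, here is a minimal sketch of the textbook mixup generation step; it is the standard formulation, not the scenario-agnostic variant proposed in the paper, and the function name is illustrative.

import numpy as np
import torch

def mixup_batch(x, y, alpha=1.0):
    """Standard mixup: convex combinations of pairs of samples and their labels.
    x: (batch, ...) inputs, y: (batch, num_classes) one-hot labels."""
    lam = np.random.beta(alpha, alpha)            # mixing ratio ~ Beta(alpha, alpha)
    perm = torch.randperm(x.size(0))              # random pairing within the batch
    x_mixed = lam * x + (1.0 - lam) * x[perm]     # mixed inputs
    y_mixed = lam * y + (1.0 - lam) * y[perm]     # correspondingly mixed soft labels
    return x_mixed, y_mixed

x = torch.randn(16, 3, 32, 32)
y = torch.eye(10)[torch.randint(0, 10, (16,))]    # one-hot labels for 10 classes
x_m, y_m = mixup_batch(x, y, alpha=0.2)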

Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks

no code implementations 20 Oct 2021 Zihan Liu, Yun Luo, Zelin Zang, Stan Z. Li

Gray-box graph attacks aim at disrupting the performance of the victim model by using inconspicuous attacks with limited knowledge of the victim model.

Node Classification Representation Learning

Powering Comparative Classification with Sentiment Analysis via Domain Adaptive Knowledge Transfer

1 code implementation EMNLP 2021 Zeyu Li, Yilong Qin, Zihan Liu, Wei Wang

We study Comparative Preference Classification (CPC) which aims at predicting whether a preference comparison exists between two entities in a given sentence and, if so, which entity is preferred over the other.

Question Answering Sentiment Analysis +1

Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization

1 code implementation EMNLP 2021 Tiezheng Yu, Wenliang Dai, Zihan Liu, Pascale Fung

Multimodal abstractive summarization (MAS) models that summarize videos (vision modality) and their corresponding transcripts (text modality) are able to extract the essential information from massive multimodal data on the Internet.

Abstractive Text Summarization Text Generation

CAiRE in DialDoc21: Data Augmentation for Information-Seeking Dialogue System

1 code implementation 7 Jun 2021 Etsuko Ishii, Yan Xu, Genta Indra Winata, Zhaojiang Lin, Andrea Madotto, Zihan Liu, Peng Xu, Pascale Fung

Information-seeking dialogue systems, including knowledge identification and response generation, aim to respond to users with fluent, coherent, and informative responses based on users' needs.

Data Augmentation Response Generation

X2Parser: Cross-Lingual and Cross-Domain Framework for Task-Oriented Compositional Semantic Parsing

1 code implementation ACL (RepL4NLP) 2021 Zihan Liu, Genta Indra Winata, Peng Xu, Pascale Fung

Experimental results illustrate that our model can significantly outperform existing strong baselines in cross-lingual and cross-domain settings, and that it also generalizes well to target languages in target domains.

Semantic Parsing

Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters

1 code implementation dialdoc (ACL) 2022 Yan Xu, Etsuko Ishii, Samuel Cahyawijaya, Zihan Liu, Genta Indra Winata, Andrea Madotto, Dan Su, Pascale Fung

This paper proposes KnowExpert, a framework to bypass the explicit retrieval process and inject knowledge into the pre-trained language models with lightweight adapters and adapt to the knowledge-grounded dialogue task.

Response Generation
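
KnowExpert, described above, injects knowledge through lightweight adapters instead of an explicit retrieval step. The sketch below shows the commonly used bottleneck adapter layout (down-projection, non-linearity, up-projection, residual connection) that adapter-based approaches typically build on; the class name, hidden sizes, and placement are assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic adapter block inserted after a transformer sub-layer.
    Only the adapter parameters are trained; the backbone stays frozen."""
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        # The residual connection keeps the frozen backbone's behaviour recoverable.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

adapter = BottleneckAdapter()
out = adapter(torch.randn(2, 10, 768))   # (batch, seq_len, hidden) -> same shape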

Unveiling the Power of Mixup for Stronger Classifiers

1 code implementation 24 Mar 2021 Zicheng Liu, Siyuan Li, Di Wu, Zihan Liu, ZhiYuan Chen, Lirong Wu, Stan Z. Li

Specifically, AutoMix reformulates the mixup classification into two sub-tasks (i.e., mixed sample generation and mixup classification) with corresponding sub-networks and solves them in a bi-level optimization framework.

Classification Data Augmentation +3
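
AutoMix, as described above, splits mixup into a sample-generation sub-network and a classification sub-network that are optimized jointly. The toy loop below sketches only the general pattern of alternating updates between two such sub-networks; the placeholder models, fixed mixing ratio, and plain alternating schedule stand in for, and simplify, the paper's actual bi-level optimization.

import torch
import torch.nn as nn

# Placeholder sub-networks; a real generator would condition on features and a mixing ratio.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
generator = nn.Sequential(nn.Flatten(), nn.Linear(2 * 3 * 32 * 32, 3 * 32 * 32))

opt_cls = torch.optim.SGD(classifier.parameters(), lr=0.1)
opt_gen = torch.optim.SGD(generator.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):                                  # toy data in place of a real loader
    x, y = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))
    perm = torch.randperm(x.size(0))
    pair = torch.cat([x, x[perm]], dim=1)               # concatenate the two samples to mix
    lam = 0.5                                           # fixed mixing ratio for this sketch

    # (1) update the classifier on generated mixed samples
    x_mix = generator(pair).view_as(x)
    logits = classifier(x_mix.detach())
    loss_cls = lam * loss_fn(logits, y) + (1 - lam) * loss_fn(logits, y[perm])
    opt_cls.zero_grad(); loss_cls.backward(); opt_cls.step()

    # (2) update the generator so its mixtures remain useful to the classifier
    logits = classifier(generator(pair).view_as(x))
    loss_gen = lam * loss_fn(logits, y) + (1 - lam) * loss_fn(logits, y[perm])
    opt_gen.zero_grad(); loss_gen.backward(); opt_gen.step()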

AdaptSum: Towards Low-Resource Domain Adaptation for Abstractive Summarization

1 code implementation NAACL 2021 Tiezheng Yu, Zihan Liu, Pascale Fung

State-of-the-art abstractive summarization models generally rely on extensive labeled data, which lowers their generalization ability on domains where such data are not available.

Abstractive Text Summarization Domain Adaptation

Multimodal End-to-End Sparse Model for Emotion Recognition

1 code implementation NAACL 2021 Wenliang Dai, Samuel Cahyawijaya, Zihan Liu, Pascale Fung

Existing works on multimodal affective computing tasks, such as emotion recognition, generally adopt a two-phase pipeline, first extracting feature representations for each single modality with hand-crafted algorithms and then performing end-to-end learning with the extracted features.

Emotion Recognition

Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-Task Learning for Offensive Language Detection

1 code implementation SEMEVAL 2020 Wenliang Dai, Tiezheng Yu, Zihan Liu, Pascale Fung

Nowadays, offensive content in social media has become a serious problem, and automatically detecting offensive language is an essential task.

Language Modelling Multi-Task Learning

Cross-lingual Spoken Language Understanding with Regularized Representation Alignment

1 code implementation EMNLP 2020 Zihan Liu, Genta Indra Winata, Peng Xu, Zhaojiang Lin, Pascale Fung

Despite the promising results of current cross-lingual models for spoken language understanding systems, they still suffer from imperfect cross-lingual representation alignments between the source and target languages, which makes the performance sub-optimal.

Spoken Language Understanding
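
The entry above attributes the remaining cross-lingual gap to imperfect alignment between source- and target-language representations. One generic way to regularize toward alignment is to penalize the distance between pooled sentence representations of paired utterances; the helper below illustrates that idea only and is not the paper's specific regularizer.

import torch
import torch.nn.functional as F

def alignment_penalty(src_hidden, tgt_hidden, src_mask, tgt_mask):
    """Mean-squared distance between mean-pooled sentence representations.
    src_hidden/tgt_hidden: (batch, seq_len, hidden); masks: (batch, seq_len), 1 for real tokens."""
    src_vec = (src_hidden * src_mask.unsqueeze(-1)).sum(1) / src_mask.sum(1, keepdim=True)
    tgt_vec = (tgt_hidden * tgt_mask.unsqueeze(-1)).sum(1) / tgt_mask.sum(1, keepdim=True)
    return F.mse_loss(src_vec, tgt_vec)

penalty = alignment_penalty(torch.randn(4, 12, 256), torch.randn(4, 12, 256),
                            torch.ones(4, 12), torch.ones(4, 12))
# total_loss = task_loss + weight * penalty    # the weight is a tunable hyperparameter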

Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition

1 code implementation AACL 2020 Wenliang Dai, Zihan Liu, Tiezheng Yu, Pascale Fung

Despite the recent achievements made in the multi-modal emotion recognition task, two problems still exist and have not been well investigated: 1) the relationships between different emotion categories are not utilized, which leads to sub-optimal performance; and 2) current models fail to cope well with low-resource emotions, especially unseen emotions.

Multimodal Emotion Recognition Word Embeddings
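
In the spirit of the entry above, one way to handle low-resource or unseen emotions is to score an utterance representation against word embeddings of the emotion labels, so that a new emotion only needs its word embedding rather than a retrained classifier head. The sketch below illustrates that generic similarity-based scoring, with random vectors standing in for pre-trained embeddings and an already-projected utterance vector; it is not the paper's exact model.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
# Stand-ins for pre-trained word embeddings of emotion labels (e.g., GloVe vectors).
emotion_words = ["happy", "sad", "angry", "surprised"]
emotion_emb = F.normalize(torch.randn(len(emotion_words), 300), dim=-1)

# Stand-in for an utterance encoding projected into the same 300-d space.
utterance_vec = F.normalize(torch.randn(1, 300), dim=-1)

scores = utterance_vec @ emotion_emb.t()          # cosine similarity to each emotion word
print(emotion_words[int(scores.argmax())])        # predicted emotion

# Adding an unseen emotion only requires its word embedding, not classifier retraining:
emotion_words.append("fearful")
emotion_emb = torch.cat([emotion_emb, F.normalize(torch.randn(1, 300), dim=-1)])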

EmoGraph: Capturing Emotion Correlations using Graph Networks

no code implementations 21 Aug 2020 Peng Xu, Zihan Liu, Genta Indra Winata, Zhaojiang Lin, Pascale Fung

Most emotion recognition methods tackle the emotion understanding task by considering each emotion independently, ignoring their fuzzy nature and the interconnections among them.

Classification Emotion Classification +3
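
EmoGraph, per the entry above, captures the interconnections among emotion labels with graph networks rather than treating each label independently. A common way to realize that general idea is to propagate label embeddings over a co-occurrence graph before scoring; the sketch below follows that pattern with a random stand-in adjacency matrix and does not reproduce the paper's architecture.

import torch
import torch.nn as nn

class LabelGraphClassifier(nn.Module):
    """Propagates emotion-label embeddings over a co-occurrence adjacency matrix,
    then scores an utterance representation against the refined label embeddings."""
    def __init__(self, num_labels, hidden=256):
        super().__init__()
        self.label_emb = nn.Parameter(torch.randn(num_labels, hidden))
        self.gcn = nn.Linear(hidden, hidden)

    def forward(self, utterance_vec, adj):
        # adj: (num_labels, num_labels) row-normalized label co-occurrence graph
        refined = torch.relu(self.gcn(adj @ self.label_emb))   # one propagation step
        return utterance_vec @ refined.t()                     # (batch, num_labels) logits

num_labels = 6
adj = torch.softmax(torch.randn(num_labels, num_labels), dim=-1)  # stand-in for real co-occurrence stats
model = LabelGraphClassifier(num_labels)
logits = model(torch.randn(8, 256), adj)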

Exploring Fine-tuning Techniques for Pre-trained Cross-lingual Models via Continual Learning

no code implementations 29 Apr 2020 Zihan Liu, Genta Indra Winata, Andrea Madotto, Pascale Fung

Recently, fine-tuning pre-trained language models (e.g., multilingual BERT) to downstream cross-lingual tasks has shown promising results.

Continual Learning named-entity-recognition +2

Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-Task Learning for Offensive Language Detection

1 code implementation 28 Apr 2020 Wenliang Dai, Tiezheng Yu, Zihan Liu, Pascale Fung

Nowadays, offensive content in social media has become a serious problem, and automatically detecting offensive language is an essential task.

Abuse Detection Language Modelling +1

Variational Transformers for Diverse Response Generation

2 code implementations 28 Mar 2020 Zhaojiang Lin, Genta Indra Winata, Peng Xu, Zihan Liu, Pascale Fung

Despite the great promise of Transformers in many sequence modeling tasks (e.g., machine translation), their deterministic nature hinders them from generalizing to high-entropy tasks such as dialogue response generation.

Machine Translation Response Generation +1
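
The Variational Transformer entry above targets this deterministic-decoder limitation by introducing latent variables. The sketch below shows only the standard CVAE-style building block such models rely on: sampling a latent vector from a diagonal Gaussian via the reparameterization trick and computing the KL term. The module name and sizes are illustrative, and the surrounding encoder-decoder is omitted.

import torch
import torch.nn as nn

class LatentSampler(nn.Module):
    """Maps an encoder summary to a diagonal Gaussian and samples with reparameterization."""
    def __init__(self, hidden=512, latent=64):
        super().__init__()
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)

    def forward(self, enc_summary):
        mu, logvar = self.to_mu(enc_summary), self.to_logvar(enc_summary)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
        return z, kl    # z conditions the decoder; kl is added to the training loss

sampler = LatentSampler()
z, kl = sampler(torch.randn(4, 512))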

XPersona: Evaluating Multilingual Personalized Chatbot

1 code implementation EMNLP (NLP4ConvAI) 2021 Zhaojiang Lin, Zihan Liu, Genta Indra Winata, Samuel Cahyawijaya, Andrea Madotto, Yejin Bang, Etsuko Ishii, Pascale Fung

Experimental results show that the multilingual trained models outperform the translation-pipeline approach and that they are on par with the monolingual models, with the advantage of having a single model across multiple languages.

Chatbot Translation

Learning Fast Adaptation on Cross-Accented Speech Recognition

1 code implementation 4 Mar 2020 Genta Indra Winata, Samuel Cahyawijaya, Zihan Liu, Zhaojiang Lin, Andrea Madotto, Peng Xu, Pascale Fung

The great variability and complex characteristics of accents create a major challenge for training a robust and accent-agnostic automatic speech recognition (ASR) system.

Audio and Speech Processing Sound

Zero-Resource Cross-Domain Named Entity Recognition

1 code implementation WS 2020 Zihan Liu, Genta Indra Winata, Pascale Fung

Existing models for cross-domain named entity recognition (NER) rely on numerous unlabeled corpora or labeled NER training data in target domains.

Cross-Domain Named Entity Recognition Domain Adaptation +3

On the Importance of Word Order Information in Cross-lingual Sequence Labeling

no code implementations 30 Jan 2020 Zihan Liu, Genta Indra Winata, Samuel Cahyawijaya, Andrea Madotto, Zhaojiang Lin, Pascale Fung

To verify this hypothesis, we investigate whether making models insensitive to the word order of the source language can improve the adaptation performance in target languages.

named-entity-recognition Named Entity Recognition +2
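
A simple way to probe the word-order question raised above is to reduce order information in the source-language training data, for example by shuffling tokens while keeping token-label pairs aligned. The helper below is a generic illustration of that kind of order-reduced training example, not the paper's exact procedure.

import random

def shuffle_example(tokens, labels, seed=None):
    """Randomly permute a token sequence while keeping token-label pairs aligned."""
    rng = random.Random(seed)
    paired = list(zip(tokens, labels))
    rng.shuffle(paired)
    shuffled_tokens, shuffled_labels = zip(*paired)
    return list(shuffled_tokens), list(shuffled_labels)

tokens = ["Alice", "works", "at", "Acme", "Corp"]
labels = ["B-PER", "O", "O", "B-ORG", "I-ORG"]
print(shuffle_example(tokens, labels, seed=0))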

Universal Adversarial Perturbations for CNN Classifiers in EEG-Based BCIs

1 code implementation 3 Dec 2019 Zihan Liu, Lubin Meng, Xiao Zhang, Weili Fang, Dongrui Wu

Multiple convolutional neural network (CNN) classifiers have been proposed for electroencephalogram (EEG) based brain-computer interfaces (BCIs).

EEG

Zero-shot Cross-lingual Dialogue Systems with Transferable Latent Variables

no code implementations IJCNLP 2019 Zihan Liu, Jamin Shin, Yan Xu, Genta Indra Winata, Peng Xu, Andrea Madotto, Pascale Fung

Despite the surging demands for multilingual task-oriented dialog systems (e.g., Alexa, Google Home), there has been less research done in multilingual or cross-lingual scenarios.

Intent Detection Natural Language Understanding +1

Generalizing Question Answering System with Pre-trained Language Model Fine-tuning

no code implementations WS 2019 Dan Su, Yan Xu, Genta Indra Winata, Peng Xu, Hyeondey Kim, Zihan Liu, Pascale Fung

With a large number of datasets being released and new techniques being proposed, question answering (QA) systems have witnessed great breakthroughs in reading comprehension (RC) tasks.

Language Modelling Multi-Task Learning +2

Lightweight and Efficient End-to-End Speech Recognition Using Low-Rank Transformer

no code implementations 30 Oct 2019 Genta Indra Winata, Samuel Cahyawijaya, Zhaojiang Lin, Zihan Liu, Pascale Fung

Highly performing deep neural networks come at the cost of computational complexity that limits their practicality for deployment on portable devices.

Speech Recognition
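
As the title above suggests, the Low-Rank Transformer shrinks the model by factorizing weight matrices. The sketch below shows the generic building block, a rank-r factorization of a dense linear layer, which reduces the parameter count from roughly in*out to r*(in + out); the class name and sizes are illustrative rather than the paper's configuration.

import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Approximates a dense linear layer with a rank-r factorization W ≈ U @ V."""
    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.V = nn.Linear(in_features, rank, bias=False)   # (rank x in)
        self.U = nn.Linear(rank, out_features)               # (out x rank) + bias

    def forward(self, x):
        return self.U(self.V(x))

dense = nn.Linear(512, 512)                  # 262,656 parameters
low_rank = LowRankLinear(512, 512, rank=64)  # 66,048 parameters
out = low_rank(torch.randn(8, 512))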

Multi-Task Deep Learning with Dynamic Programming for Embryo Early Development Stage Classification from Time-Lapse Videos

no code implementations 22 Aug 2019 Zihan Liu, Bo Huang, Yuqi Cui, Yifan Xu, Bo Zhang, Lixia Zhu, Yang Wang, Lei Jin, Dongrui Wu

Accurate classification of embryo early development stages can provide embryologists valuable information for assessing the embryo quality, and hence is critical to the success of IVF.

General Classification
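
The title above indicates that per-frame network predictions are combined with dynamic programming. A natural constraint for embryo development is that the stage never moves backwards in time, so one common decoding step is to find the highest-scoring non-decreasing stage sequence over the frames. The function below implements that generic monotonic decoding; it is an illustrative assumption about how such a dynamic-programming step could look, not the paper's exact algorithm.

import numpy as np

def monotonic_decode(frame_scores):
    """frame_scores: (num_frames, num_stages) per-frame scores (e.g., log-probabilities).
    Returns the best stage sequence that never decreases over time."""
    T, S = frame_scores.shape
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0] = frame_scores[0]
    for t in range(1, T):
        for s in range(S):
            prev = int(np.argmax(dp[t - 1, : s + 1]))   # best earlier stage <= s
            dp[t, s] = dp[t - 1, prev] + frame_scores[t, s]
            back[t, s] = prev
    stages = [int(np.argmax(dp[-1]))]
    for t in range(T - 1, 0, -1):
        stages.append(back[t, stages[-1]])
    return stages[::-1]

scores = np.log(np.random.dirichlet(np.ones(4), size=10))  # 10 frames, 4 stages
print(monotonic_decode(scores))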
