Search Results for author: Zhengyuan Liu

Found 39 papers, 17 papers with code

Entity-based De-noising Modeling for Controllable Dialogue Summarization

no code implementations SIGDIAL (ACL) 2022 Zhengyuan Liu, Nancy Chen

Although fine-tuning pre-trained backbones produces fluent and grammatically-correct text in various language generation tasks, factual consistency in abstractive summarization remains challenging.

Abstractive Text Summarization, Hallucination, +1

Singlish Message Paraphrasing: A Joint Task of Creole Translation and Text Normalization

no code implementations COLING 2022 Zhengyuan Liu, Shikang Ni, Ai Ti Aw, Nancy F. Chen

In this work, we introduce a joint paraphrasing task of creole translation and text normalization of Singlish messages, which can shed light on how to process other language varieties and dialects.

Stance Detection, Translation

In2Core: Leveraging Influence Functions for Coreset Selection in Instruction Finetuning of Large Language Models

no code implementations 7 Aug 2024 Ayrton San Joaquin, Bin Wang, Zhengyuan Liu, Nicholas Asher, Brian Lim, Philippe Muller, Nancy F. Chen

By applying our algorithm to instruction fine-tuning data of LLMs, we can achieve similar performance with just 50% of the training data.
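
The In2Core entry above hinges on scoring training examples with influence functions and keeping only the most useful subset for instruction fine-tuning. A minimal sketch of the general idea, assuming a first-order gradient-similarity proxy for influence (the scoring rule, model interface, and the 50% budget below are illustrative, not the authors' exact algorithm):

    import torch

    def influence_scores(model, loss_fn, train_examples, val_batch):
        # First-order proxy: dot product between each training example's
        # gradient and the gradient of the validation loss.
        params = [p for p in model.parameters() if p.requires_grad]

        def flat_grad(loss):
            grads = torch.autograd.grad(loss, params)
            return torch.cat([g.reshape(-1) for g in grads])

        val_x, val_y = val_batch
        g_val = flat_grad(loss_fn(model(val_x), val_y))  # "target" direction

        scores = []
        for x, y in train_examples:  # one example (or micro-batch) at a time
            g_train = flat_grad(loss_fn(model(x), y))
            scores.append(torch.dot(g_train, g_val).item())
        return scores

    def select_coreset(scores, keep_ratio=0.5):
        # Keep the indices with the highest influence scores (e.g. 50% of the data).
        k = max(1, int(len(scores) * keep_ratio))
        return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]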

Decompose and Aggregate: A Step-by-Step Interpretable Evaluation Framework

no code implementations 24 May 2024 Minzhi Li, Zhengyuan Liu, Shumin Deng, Shafiq Joty, Nancy F. Chen, Min-Yen Kan

The acceleration of Large Language Models (LLMs) research has opened up new possibilities for evaluating generated texts.

CRAFT: Extracting and Tuning Cultural Instructions from the Wild

2 code implementations 6 May 2024 Bin Wang, Geyu Lin, Zhengyuan Liu, Chengwei Wei, Nancy F. Chen

Large language models (LLMs) have rapidly evolved as the foundation of various natural language processing (NLP) applications.

CrossIn: An Efficient Instruction Tuning Approach for Cross-Lingual Knowledge Alignment

1 code implementation 18 Apr 2024 Geyu Lin, Bin Wang, Zhengyuan Liu, Nancy F. Chen

This performance discrepancy mainly stems from the imbalanced distribution of training data across languages during pre-training and instruction tuning stages.

Resilience of Large Language Models for Noisy Instructions

no code implementations 15 Apr 2024 Bin Wang, Chengwei Wei, Zhengyuan Liu, Geyu Lin, Nancy F. Chen

As the domain of natural language processing (NLP) advances rapidly, large language models (LLMs) have emerged as powerful tools for interpreting human commands and generating text across various tasks.

Automatic Speech Recognition, Optical Character Recognition, +3
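
The resilience study above probes how LLMs behave when instructions carry noise of the kind produced by ASR, OCR, or typing errors. A minimal, hypothetical noise injector for such stress tests (simple character-level perturbations only; not the paper's actual corruption pipeline):

    import random
    import string

    def add_char_noise(text: str, rate: float = 0.05, seed: int = 0) -> str:
        # Randomly delete, swap, or substitute characters to mimic
        # keyboard/OCR-style corruption of an instruction.
        rng = random.Random(seed)
        chars = list(text)
        i = 0
        while i < len(chars):
            if rng.random() < rate:
                op = rng.choice(["delete", "swap", "substitute"])
                if op == "delete":
                    chars.pop(i)
                    continue
                if op == "swap" and i + 1 < len(chars):
                    chars[i], chars[i + 1] = chars[i + 1], chars[i]
                elif op == "substitute":
                    chars[i] = rng.choice(string.ascii_lowercase)
            i += 1
        return "".join(chars)

    print(add_char_noise("Summarize the following meeting transcript."))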

Personality-aware Student Simulation for Conversational Intelligent Tutoring Systems

no code implementations 10 Apr 2024 Zhengyuan Liu, Stella Xin Yin, Geyu Lin, Nancy F. Chen

Intelligent Tutoring Systems (ITSs) can provide personalized and self-paced learning experiences.

Math

Scaffolding Language Learning via Multi-modal Tutoring Systems with Pedagogical Instructions

no code implementations 4 Apr 2024 Zhengyuan Liu, Stella Xin Yin, Carolyn Lee, Nancy F. Chen

Intelligent tutoring systems (ITSs) that imitate human tutors and aim to provide immediate and customized instructions or feedback to learners have shown their effectiveness in education.

Learning Planning-based Reasoning by Trajectories Collection and Process Reward Synthesizing

no code implementations 1 Feb 2024 Fangkai Jiao, Chengwei Qin, Zhengyuan Liu, Nancy F. Chen, Shafiq Joty

Large Language Models (LLMs) have demonstrated significant potential in handling complex reasoning tasks through step-by-step rationale generation.

Hallucination, Logical Reasoning

Picking the Underused Heads: A Network Pruning Perspective of Attention Head Selection for Fusing Dialogue Coreference Information

no code implementations 15 Dec 2023 Zhengyuan Liu, Nancy F. Chen

In this work, we investigate the attention head selection and manipulation strategy for feature injection from a network pruning perspective, and conduct a case study on dialogue summarization.

Network Pruning
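
The head-selection work above identifies "underused" attention heads as injection points for coreference features, judged from a pruning perspective. A minimal sketch of one standard importance proxy (sensitivity of the loss to a per-head mask, as in common head-pruning analyses); the backbone and the exact criterion here are illustrative assumptions, not the paper's setup:

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "bert-base-uncased"  # illustrative encoder that accepts a `head_mask`
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

    batch = tok(["the meeting was moved to friday"], return_tensors="pt")
    labels = torch.tensor([1])

    n_layers = model.config.num_hidden_layers
    n_heads = model.config.num_attention_heads
    head_mask = torch.ones(n_layers, n_heads, requires_grad=True)

    loss = model(**batch, labels=labels, head_mask=head_mask).loss
    loss.backward()

    # |d loss / d mask| approximates how much each head contributes; heads with
    # the smallest scores are candidates for reuse or feature injection.
    importance = head_mask.grad.abs()
    least_used = torch.argsort(importance.flatten())[:5]
    print([(int(i) // n_heads, int(i) % n_heads) for i in least_used])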

Multi-label and Multi-target Sampling of Machine Annotation for Computational Stance Detection

1 code implementation 8 Nov 2023 Zhengyuan Liu, Hai Leong Chieu, Nancy F. Chen

Data collection from manual labeling provides domain-specific and task-aligned supervision for data-driven approaches, and a critical mass of well-annotated resources is required to achieve reasonable performance in natural language processing tasks.

Stance Detection

CoAnnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation

1 code implementation 24 Oct 2023 Minzhi Li, Taiwei Shi, Caleb Ziems, Min-Yen Kan, Nancy F. Chen, Zhengyuan Liu, Diyi Yang

Annotated data plays a critical role in Natural Language Processing (NLP) in training models and evaluating their performance.

text annotation
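
CoAnnotating, listed above, allocates annotation work between humans and LLMs according to how uncertain the LLM is about each instance. A minimal sketch of one common uncertainty signal (entropy over repeated LLM labels); the threshold and sampling scheme are illustrative, not the paper's exact recipe:

    import math
    from collections import Counter

    def label_entropy(labels):
        # Entropy of the empirical label distribution from repeated LLM queries.
        counts = Counter(labels)
        total = sum(counts.values())
        return -sum((c / total) * math.log(c / total) for c in counts.values())

    def route(llm_labels, threshold=0.5):
        # Send high-entropy (uncertain) items to human annotators; keep the rest for the LLM.
        return "human" if label_entropy(llm_labels) > threshold else "llm"

    # Hypothetical outputs from querying the LLM five times per instance.
    print(route(["positive"] * 5))                                             # confident -> llm
    print(route(["positive", "negative", "neutral", "positive", "negative"]))  # uncertain -> human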

Instructive Dialogue Summarization with Query Aggregations

1 code implementation 17 Oct 2023 Bin Wang, Zhengyuan Liu, Nancy F. Chen

With the advancement of instruction-finetuned language models, we introduce instruction-tuning to dialogues to expand the capability set of dialogue summarization models.

Machine Reading Comprehension, Text Summarization

Guiding Computational Stance Detection with Expanded Stance Triangle Framework

1 code implementation 31 May 2023 Zhengyuan Liu, Yong Keong Yap, Hai Leong Chieu, Nancy F. Chen

Stance detection determines whether the author of a piece of text is in favor of, against, or neutral towards a specified target, and can be used to gain valuable insights into social media.

Stance Detection
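
As the entry above notes, stance detection is a three-way classification over a (target, text) pair. A minimal sketch of that standard formulation with a Transformer encoder; the checkpoint is illustrative and fine-tuning on stance data is assumed, so this is not the paper's expanded stance-triangle method:

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    LABELS = ["favor", "against", "neutral"]
    name = "bert-base-uncased"  # illustrative backbone; fine-tuning on stance data is assumed
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=len(LABELS))

    def predict_stance(target: str, text: str) -> str:
        # Encode the pair as [CLS] target [SEP] text [SEP].
        enc = tok(target, text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**enc).logits
        return LABELS[int(logits.argmax(dim=-1))]

    print(predict_stance("carbon tax", "Pricing emissions is the only way to hit our targets."))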

Multi-source adversarial transfer learning for ultrasound image segmentation with limited similarity

no code implementations 30 May 2023 Yifu Zhang, Hongru Li, Tao Yang, Rui Tao, Zhengyuan Liu, Shimeng Shi, Jiansong Zhang, Ning Ma, Wujin Feng, Zhanhu Zhang, Xinyu Zhang

Transfer learning offers a way to address this problem, but natural images contain many features that are unrelated to the target domain.

Image Segmentation, Lesion Segmentation, +2
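
The multi-source adversarial transfer entry above needs to suppress natural-image features that do not carry over to the ultrasound target domain. One standard adversarial ingredient for this is a gradient reversal layer feeding a domain classifier; the sketch below assumes that building block and is not necessarily the authors' exact multi-source design:

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        # Identity on the forward pass; flips (and scales) gradients on the backward
        # pass so the feature extractor is pushed toward domain-invariant features.
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    def grad_reverse(x, lambd=1.0):
        return GradReverse.apply(x, lambd)

    # Toy usage: features pass through the reversal layer into a domain classifier.
    features = torch.randn(8, 128, requires_grad=True)
    domain_head = nn.Linear(128, 2)  # source vs. target domain
    logits = domain_head(grad_reverse(features))
    loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))
    loss.backward()
    print(features.grad.shape)  # gradients arrive sign-flipped at the features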

Exploring Self-supervised Logic-enhanced Training for Large Language Models

2 code implementations 23 May 2023 Fangkai Jiao, Zhiyang Teng, Bosheng Ding, Zhengyuan Liu, Nancy F. Chen, Shafiq Joty

Existing efforts to improve logical reasoning ability of language models have predominantly relied on supervised fine-tuning, hindering generalization to new domains and/or tasks.

In-Context Learning, Logical Reasoning

Learning from Bootstrapping and Stepwise Reinforcement Reward: A Semi-Supervised Framework for Text Style Transfer

1 code implementation Findings (NAACL) 2022 Zhengyuan Liu, Nancy F. Chen

To take advantage of both supervised and unsupervised paradigms and tackle the challenges, in this work, we propose a semi-supervised framework for text style transfer.

Sentence, Style Transfer, +1

DMRST: A Joint Framework for Document-Level Multilingual RST Discourse Segmentation and Parsing

1 code implementation CODI 2021 Zhengyuan Liu, Ke Shi, Nancy F. Chen

While previous work significantly improves the performance of RST discourse parsing, these models are not readily applicable to practical use cases: (1) EDU segmentation is not integrated into most existing tree parsing frameworks, so it is not straightforward to apply such models to newly arriving data.

Ranked #2 on End-to-End RST Parsing on RST-DT (using extra training data)

Discourse Segmentation, End-to-End RST Parsing, +2

Improving Multi-Party Dialogue Discourse Parsing via Domain Integration

1 code implementation CODI 2021 Zhengyuan Liu, Nancy F. Chen

While multi-party conversations are often less structured than monologues and documents, they are implicitly organized by semantic-level correlations across the interactive turns. Dialogue discourse analysis can be applied to predict the dependency structure and relations between the elementary discourse units, providing feature-rich structural information for downstream tasks.

Discourse Parsing, Domain Adaptation, +1

Controllable Neural Dialogue Summarization with Personal Named Entity Planning

1 code implementation EMNLP 2021 Zhengyuan Liu, Nancy F. Chen

The conditional sequences are modulated to decide what types of information or what perspective to focus on when forming summaries to tackle the under-constrained problem in summarization tasks.

dialogue summary, Hallucination

Dynamic Sliding Window for Meeting Summarization

no code implementations 31 Aug 2021 Zhengyuan Liu, Nancy F. Chen

In this work, we first analyze the linguistic characteristics of meeting transcripts on a representative corpus, and find that the sentences comprising the summary correlate with the meeting agenda.

Meeting Summarization
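
The meeting-summarization entry above motivates processing long transcripts window by window, with boundaries informed by the agenda. A minimal sketch of a plain fixed-stride sliding window over turns, as a simplification of the paper's dynamic variant (window size and stride are illustrative):

    def sliding_windows(turns, window_size=30, stride=15):
        # Split a list of dialogue turns into overlapping windows. The paper's
        # dynamic version adjusts boundaries per segment; the stride here is fixed.
        windows = []
        for start in range(0, max(1, len(turns)), stride):
            chunk = turns[start:start + window_size]
            if chunk:
                windows.append(chunk)
            if start + window_size >= len(turns):
                break
        return windows

    transcript = [f"speaker_{i % 3}: utterance {i}" for i in range(70)]
    for w in sliding_windows(transcript):
        print(len(w), w[0])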

Coreference-Aware Dialogue Summarization

1 code implementation SIGDIAL (ACL) 2021 Zhengyuan Liu, Ke Shi, Nancy F. Chen

Summarizing conversations via neural approaches has been gaining research traction lately, yet it is still challenging to obtain practical solutions.

Abstractive Dialogue Summarization

N-Shot Learning for Augmenting Task-Oriented Dialogue State Tracking

no code implementations Findings (ACL) 2022 Taha Aksu, Zhengyuan Liu, Min-Yen Kan, Nancy F. Chen

Augmentation of task-oriented dialogues has followed standard methods used for plain text, such as back-translation, word-level manipulation, and paraphrasing, despite their richly annotated structure.

Data Augmentation, Dialogue State Tracking, +3

An End-to-End Document-Level Neural Discourse Parser Exploiting Multi-Granularity Representations

no code implementations 21 Dec 2020 Ke Shi, Zhengyuan Liu, Nancy F. Chen

Document-level discourse parsing, in accordance with the Rhetorical Structure Theory (RST), remains notoriously challenging.

Decoder, Discourse Parsing, +1

Multilingual Neural RST Discourse Parsing

1 code implementation COLING 2020 Zhengyuan Liu, Ke Shi, Nancy F. Chen

Text discourse parsing plays an important role in understanding information flow and argumentative structure in natural language.

Discourse Parsing, Translation

Uncertainty Modeling for Machine Comprehension Systems using Efficient Bayesian Neural Networks

no code implementations COLING 2020 Zhengyuan Liu, Pavitra Krishnaswamy, Ai Ti Aw, Nancy Chen

While neural approaches have achieved significant improvements in machine comprehension tasks, models often operate as black boxes, resulting in low interpretability, which requires special attention in domains such as healthcare or education.

Active Learning, Dialogue Generation, +2
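
The uncertainty-modeling entry above uses efficient Bayesian neural networks to expose how confident a machine comprehension model is. A minimal sketch of one popular lightweight approximation, Monte Carlo dropout (keep dropout active at inference and measure the spread of predictions); this is a stand-in illustration, not necessarily the authors' exact Bayesian formulation:

    import torch
    import torch.nn as nn

    class TinyClassifier(nn.Module):
        def __init__(self, dim=16, n_classes=3, p=0.3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, 32), nn.ReLU(), nn.Dropout(p), nn.Linear(32, n_classes)
            )

        def forward(self, x):
            return self.net(x)

    def mc_dropout_predict(model, x, n_samples=20):
        # Keep dropout on at inference; the spread across stochastic passes
        # serves as a cheap proxy for predictive uncertainty.
        model.train()  # enables dropout
        with torch.no_grad():
            probs = torch.stack(
                [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
            )
        return probs.mean(dim=0), probs.std(dim=0)

    model = TinyClassifier()
    mean, std = mc_dropout_predict(model, torch.randn(1, 16))
    print("prediction:", mean.argmax(dim=-1).item(), "uncertainty:", std.max().item())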

Conditional Neural Generation using Sub-Aspect Functions for Extractive News Summarization

no code implementations Findings of the Association for Computational Linguistics 2020 Zhengyuan Liu, Ke Shi, Nancy F. Chen

In this paper, we propose a neural framework that can flexibly control summary generation by introducing a set of sub-aspect functions (i.e., importance, diversity, position).

Diversity, News Summarization, +2
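
The sub-aspect framework above scores candidate sentences along importance, diversity, and position. A minimal sketch of how such scoring functions could be combined for extractive selection (cosine importance, MMR-style diversity penalty, lead-position bias); the weights and scorers are illustrative stand-ins, not the paper's learned conditioning:

    import numpy as np

    def importance(sent_vecs, doc_vec):
        # Cosine similarity of each sentence to the whole document.
        sims = sent_vecs @ doc_vec
        return sims / (np.linalg.norm(sent_vecs, axis=1) * np.linalg.norm(doc_vec) + 1e-8)

    def position_bias(n_sents):
        # Earlier sentences get higher scores (lead bias in news).
        return 1.0 / (1.0 + np.arange(n_sents))

    def select(sent_vecs, doc_vec, k=3, w=(1.0, 0.5, 0.5)):
        imp, pos = importance(sent_vecs, doc_vec), position_bias(len(sent_vecs))
        chosen = []
        for _ in range(k):
            best, best_score = None, -np.inf
            for i in range(len(sent_vecs)):
                if i in chosen:
                    continue
                # Diversity: penalize similarity to already selected sentences.
                div = 0.0 if not chosen else -max(
                    float(sent_vecs[i] @ sent_vecs[j]) for j in chosen
                )
                score = w[0] * imp[i] + w[1] * pos[i] + w[2] * div
                if score > best_score:
                    best, best_score = i, score
            chosen.append(best)
        return chosen

    sent_vecs = np.random.rand(10, 64)
    print(select(sent_vecs, sent_vecs.mean(axis=0)))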

Retinal Vessel Segmentation based on Fully Convolutional Networks

2 code implementations 22 Nov 2019 Zhengyuan Liu

The morphological attributes of retinal vessels, such as length, width, tortuosity and branching pattern and angles, play an important role in diagnosis, screening, treatment, and evaluation of various cardiovascular and ophthalmologic diseases such as diabetes, hypertension and arteriosclerosis.

Retinal Vessel Segmentation, Segmentation

Exploiting Discourse-Level Segmentation for Extractive Summarization

no code implementations WS 2019 Zhengyuan Liu, Nancy Chen

We investigate how sub-sentential segmentation improves extractive summarization performance when content selection is modeled through two basic neural network architectures and a deep bi-directional transformer.

Descriptive, Extractive Summarization, +1

Topic-aware Pointer-Generator Networks for Summarizing Spoken Conversations

no code implementations 3 Oct 2019 Zhengyuan Liu, Angela Ng, Sheldon Lee, Ai Ti Aw, Nancy F. Chen

Such linguistic characteristics of dialogue topics make sentence-level extractive summarization approaches used in spoken documents ill-suited for summarizing conversations.

Conversation Summarization, Extractive Summarization, +2

Reading Turn by Turn: Hierarchical Attention Architecture for Spoken Dialogue Comprehension

no code implementations ACL 2019 Zhengyuan Liu, Nancy Chen

Comprehending multi-turn spoken conversations is an emerging research area, presenting challenges different from reading comprehension of passages due to the interactive nature of information exchange from at least two speakers.

Reading Comprehension
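
The dialogue-comprehension entry above reads conversations turn by turn with a hierarchical attention architecture. A compact sketch of the two-level idea (attention pooling over tokens within each turn, then over turns); the dimensions and pooling form are illustrative, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class AttnPool(nn.Module):
        # Additive attention pooling: score items, softmax, weighted sum.
        def __init__(self, dim):
            super().__init__()
            self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

        def forward(self, x):                              # x: (n_items, dim)
            weights = torch.softmax(self.score(x), dim=0)  # (n_items, 1)
            return (weights * x).sum(dim=0)                # (dim,)

    class HierarchicalDialogueEncoder(nn.Module):
        def __init__(self, vocab_size=1000, dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.word_pool = AttnPool(dim)  # tokens -> one vector per turn
            self.turn_pool = AttnPool(dim)  # turns  -> one vector per dialogue

        def forward(self, dialogue):        # dialogue: list of LongTensors of token ids
            turn_vecs = torch.stack([self.word_pool(self.embed(t)) for t in dialogue])
            return self.turn_pool(turn_vecs)

    enc = HierarchicalDialogueEncoder()
    turns = [torch.randint(0, 1000, (n,)) for n in (7, 12, 5)]  # three turns of token ids
    print(enc(turns).shape)  # torch.Size([64])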

Deep Dilated Convolutional Nets for the Automatic Segmentation of Retinal Vessels

no code implementations 28 May 2019 Ali Hatamizadeh, Hamid Hosseini, Zhengyuan Liu, Steven D. Schwartz, Demetri Terzopoulos

The reliable segmentation of retinal vasculature can provide the means to diagnose and monitor the progression of a variety of diseases affecting the blood vessel network, including diabetes and hypertension.

Decoder, Retinal Vessel Segmentation, +1
