no code implementations • EMNLP (NLP4ConvAI) 2021 • Jin Qu, Kazuma Hashimoto, Wenhao Liu, Caiming Xiong, Yingbo Zhou
Compared with DNNC, our proposed method is more efficient in both training and serving, since it is based on entailment between the query utterance and the labels rather than between the query and all training examples.
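For illustration, a minimal sketch of the label-entailment idea using an off-the-shelf NLI pipeline; the checkpoint and intent labels here are assumptions, not the paper's setup:

```python
# Sketch: intent detection via entailment between an utterance and label
# descriptions, instead of pairing the utterance with every training example.
from transformers import pipeline

# Any off-the-shelf NLI model serves for illustration; this checkpoint is
# an assumption, not the one used in the paper.
nli = pipeline("zero-shot-classification", model="roberta-large-mnli")

utterance = "I need to move my reservation to 8pm"
candidate_intents = ["change booking", "cancel booking", "make payment"]

result = nli(utterance, candidate_labels=candidate_intents)
print(result["labels"][0])  # highest-entailment intent
```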
no code implementations • EMNLP 2020 • Semih Yavuz, Kazuma Hashimoto, Wenhao Liu, Nitish Shirish Keskar, Richard Socher, Caiming Xiong
The concept of Dialogue Act (DA) is universal across different task-oriented dialogue domains - the act of "request" carries the same speaker intention whether it is for restaurant reservation or flight booking.
no code implementations • 25 Sep 2024 • Wenhao Liu, Siyu An, Junru Lu, Muling Wu, Tianlong Li, Xiaohua Wang, Xiaoqing Zheng, Di Yin, Xing Sun, Xuanjing Huang
To investigate RPAs' performance when faced with different types of conflicting requests, we develop an evaluation benchmark comprising contextual-knowledge-conflicting requests, parametric-knowledge-conflicting requests, and non-conflicting requests, and use it to assess RPAs' ability to identify conflicts and to refuse to answer appropriately without over-refusing.
no code implementations • 23 Jun 2024 • Changze Lv, Yufei Gu, Zhengkang Guo, Zhibo Xu, Yixin Wu, Feiran Zhang, Tianyuan Shi, Zhenghua Wang, Ruicheng Yin, Yu Shang, Siqi Zhong, Xiaohua Wang, Muling Wu, Wenhao Liu, Tianlong Li, Jianhao Zhu, Cenyuan Zhang, Zixuan Ling, Xiaoqing Zheng
Backpropagation is a cornerstone algorithm for training neural networks in supervised learning; it uses gradient descent to update network weights by minimizing the discrepancy between actual and desired outputs.
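As a reference point for the update rule this sentence describes, a toy one-layer example with illustrative shapes and learning rate (not the paper's training scheme):

```python
# Sketch: one gradient-descent step of backpropagation on a tiny one-layer
# linear network, minimizing squared error between actual and desired outputs.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))        # weights: 2 inputs -> 3 outputs
x = np.array([0.5, -1.0])          # input
y = np.array([1.0, 0.0, 0.0])      # desired output
lr = 0.1                           # learning rate

y_hat = W @ x                      # forward pass (linear layer)
err = y_hat - y                    # discrepancy: actual vs. desired
loss = 0.5 * np.sum(err ** 2)

grad_W = np.outer(err, x)          # backward pass: dL/dW = err * x^T
W -= lr * grad_W                   # gradient-descent weight update
print(loss)
```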
no code implementations • 16 Jun 2024 • Jianhao Zhu, Changze Lv, Xiaohua Wang, Muling Wu, Wenhao Liu, Tianlong Li, Zixuan Ling, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang
Conventional federated learning primarily aims to secure the privacy of data distributed across multiple edge devices, with the global model dispatched to edge devices for parameter updates during the learning process.
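A minimal sketch of that conventional loop, FedAvg-style, with a placeholder local objective; the data and model are illustrative, not the paper's federated setup:

```python
# Sketch: the server dispatches the global model, clients update locally on
# private data, and the server averages the returned parameters.
import numpy as np

def local_update(weights, private_data, lr=0.01):
    """Placeholder client step: one gradient update on local data."""
    X, y = private_data
    grad = X.T @ (X @ weights - y) / len(y)   # squared-error gradient
    return weights - lr * grad

rng = np.random.default_rng(0)
global_w = np.zeros(4)
clients = [(rng.normal(size=(8, 4)), rng.normal(size=8)) for _ in range(3)]

for _ in range(5):                            # communication rounds
    updates = [local_update(global_w.copy(), data) for data in clients]
    global_w = np.mean(updates, axis=0)       # server-side averaging
```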
1 code implementation • 23 Feb 2024 • Muling Wu, Wenhao Liu, Xiaohua Wang, Tianlong Li, Changze Lv, Zixuan Ling, Jianhao Zhu, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang
Parameter Efficient Fine-Tuning (PEFT) techniques have drawn significant attention due to their ability to yield competitive results while updating only a small portion of the adjustable parameters.
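A small sketch of the PEFT principle, here as a LoRA-style low-rank update; the specific PEFT technique the paper studies may differ:

```python
# Sketch: freeze the pre-trained weights and train only a small number of
# added parameters (a low-rank update on a single linear layer).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                   # frozen pre-trained weights
        self.A = nn.Parameter(torch.zeros(rank, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        nn.init.normal_(self.A, std=0.01)

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T  # W x + B A x

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction: {trainable / total:.2%}")  # small portion only
```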
no code implementations • 12 Jan 2024 • Tianlong Li, Shihan Dou, Wenhao Liu, Muling Wu, Changze Lv, Rui Zheng, Xiaoqing Zheng, Xuanjing Huang
The recent surge in jailbreaking methods has revealed the vulnerability of Large Language Models (LLMs) to malicious inputs.
1 code implementation • 26 Dec 2023 • Wenhao Liu, Xiaohua Wang, Muling Wu, Tianlong Li, Changze Lv, Zixuan Ling, Jianhao Zhu, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang
Aligning large language models (LLMs) with human preferences is crucial for enhancing their utility in terms of helpfulness, truthfulness, safety, harmlessness, and interestingness.
no code implementations • 25 Oct 2023 • Tianlong Li, Shihan Dou, Changze Lv, Wenhao Liu, Jianhan Xu, Muling Wu, Zixuan Ling, Xiaoqing Zheng, Xuanjing Huang
Users can utilize UBPL to adjust the probability vectors of predicted words in the decoding phase of LLMs, thus influencing the personality expression of LLMs.
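A hedged sketch of what adjusting the predicted-word probability vector at decoding time can look like; the bias values and token indices are hypothetical, not UBPL's actual vectors:

```python
# Sketch: nudge the next-token distribution with a fixed bias vector added
# to the logits before sampling.
import torch
import torch.nn.functional as F

def biased_sample(logits: torch.Tensor, bias: torch.Tensor, scale: float = 1.0):
    """Add a per-token bias to the logits, then sample the next token."""
    probs = F.softmax(logits + scale * bias, dim=-1)
    return torch.multinomial(probs, num_samples=1)

vocab_size = 50_257
logits = torch.randn(vocab_size)    # model's next-token logits
bias = torch.zeros(vocab_size)
bias[[2000, 2001]] = 3.0            # hypothetical personality-related tokens
next_token = biased_sample(logits, bias)
```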
no code implementations • 10 Oct 2023 • Tianlong Li, Wenhao Liu, Changze Lv, Yufei Gu, Jianhan Xu, Cenyuan Zhang, Muling Wu, Xiaoqing Zheng, Xuanjing Huang
Spiking Neural Networks (SNNs) have emerged as a promising alternative to conventional Artificial Neural Networks (ANNs), demonstrating comparable performance in both visual and linguistic tasks while offering the advantage of improved energy efficiency.
1 code implementation • 3 Apr 2023 • Lifu Tu, Jin Qu, Semih Yavuz, Shafiq Joty, Wenhao Liu, Caiming Xiong, Yingbo Zhou
Our results demonstrate the strong and efficient modeling ability of NLI-based classifiers and the large cross-lingual transfer improvements achieved by our aligned prompts, particularly in few-shot settings.
no code implementations • 23 Oct 2022 • Prafulla Kumar Choubey, Yu Bai, Chien-Sheng Wu, Wenhao Liu, Nazneen Rajani
Pre-trained language models (PLMs) have been shown effective for zero-shot (0shot) text classification.
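One common 0shot recipe scores label verbalizers with a masked LM; the prompt and verbalizer mapping below are assumptions for illustration, not necessarily the paper's setup:

```python
# Sketch: zero-shot text classification by scoring label verbalizers at a
# [MASK] position appended to the input.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = "The battery dies after an hour of use."
verbalizers = {"positive": "great", "negative": "terrible"}  # assumed mapping

prompt = f"{text} Overall, the product was {tok.mask_token}."
inputs = tok(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = mlm(**inputs).logits[0, mask_pos]

scores = {label: logits[tok.convert_tokens_to_ids(word)].item()
          for label, word in verbalizers.items()}
print(max(scores, key=scores.get))  # predicted label
```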
1 code implementation • 13 May 2022 • Philippe Laban, Chien-Sheng Wu, Wenhao Liu, Caiming Xiong
Precisely assessing progress in natural language generation (NLG) tasks is challenging, and human evaluation to establish a preference for one model's output over another's is often necessary.
no code implementations • Findings (NAACL) 2022 • Philippe Laban, Chien-Sheng Wu, Lidiya Murakhovs'ka, Wenhao Liu, Caiming Xiong
Question generation (QGen) models are often evaluated with standardized NLG metrics that are based on n-gram overlap.
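For concreteness, the kind of n-gram-overlap metric meant here, computed with sacrebleu; the hypothesis and reference strings are illustrative:

```python
# Sketch: BLEU, a standard n-gram-overlap NLG metric, on a single pair.
import sacrebleu

hypothesis = ["What year did the company launch its first product?"]
references = [["When did the company release its first product?"]]
print(sacrebleu.corpus_bleu(hypothesis, references).score)
```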
1 code implementation • Findings (NAACL) 2022 • Ehsan Hosseini-Asl, Wenhao Liu, Caiming Xiong
Our evaluation results on single-task polarity prediction show that our approach outperforms the previous state-of-the-art (based on BERT) by a large margin in average performance, in both few-shot and full-shot settings.
1 code implementation • 23 Mar 2022 • Tian Xie, Xinyi Yang, Angela S. Lin, Feihong Wu, Kazuma Hashimoto, Jin Qu, Young Mo Kang, Wenpeng Yin, Huan Wang, Semih Yavuz, Gang Wu, Michael Jones, Richard Socher, Yingbo Zhou, Wenhao Liu, Caiming Xiong
At the core of the struggle is the need to script every single turn of interactions between the bot and the human user.
2 code implementations • 28 Feb 2022 • Liang Qiu, Chien-Sheng Wu, Wenhao Liu, Caiming Xiong
Extracting structure information from dialogue data can help us better understand user and system behaviors.
1 code implementation • NAACL 2022 • Alexander R. Fabbri, Chien-Sheng Wu, Wenhao Liu, Caiming Xiong
Factual consistency is an essential quality of text summarization models in practical settings.
1 code implementation • Findings (NAACL) 2022 • Jesse Vig, Alexander R. Fabbri, Wojciech Kryściński, Chien-Sheng Wu, Wenhao Liu
Query-focused summarization (QFS) aims to produce summaries that answer particular questions of interest, enabling greater user control and personalization.
1 code implementation • 18 Nov 2021 • Mingfei Gao, Chen Xing, Juan Carlos Niebles, Junnan Li, Ran Xu, Wenhao Liu, Caiming Xiong
To enlarge the set of base classes, we propose a method to automatically generate pseudo bounding-box annotations of diverse objects from large-scale image-caption pairs.
2 code implementations • ACL 2022 • Prakhar Gupta, Chien-Sheng Wu, Wenhao Liu, Caiming Xiong
Fact-checking is an essential tool to mitigate the spread of misinformation and disinformation.
1 code implementation • Findings (NAACL) 2022 • Lidiya Murakhovs'ka, Chien-Sheng Wu, Philippe Laban, Tong Niu, Wenhao Liu, Caiming Xiong
Asking good questions is an essential ability for both human and machine intelligence.
no code implementations • 14 Oct 2021 • Prafulla Kumar Choubey, Alexander R. Fabbri, Jesse Vig, Chien-Sheng Wu, Wenhao Liu, Nazneen Fatema Rajani
Then, we fine-tune a base summarization model, which is trained on all training samples, on the clean (noisy) subset to obtain an expert (anti-expert) model.
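One plausible way to use such an expert/anti-expert pair at decoding time is a DExperts-style logit combination; the paper's exact combination may differ:

```python
# Sketch: boost tokens the expert favors and penalize those the anti-expert
# favors when computing next-token probabilities.
import torch
import torch.nn.functional as F

def combined_logits(base, expert, anti_expert, alpha=0.5):
    """base/expert/anti_expert: next-token logits from the three models."""
    return base + alpha * (expert - anti_expert)

vocab = 32_000
base = torch.randn(vocab)
expert = torch.randn(vocab)   # model fine-tuned on the clean subset
anti = torch.randn(vocab)     # model fine-tuned on the noisy subset
probs = F.softmax(combined_logits(base, expert, anti), dim=-1)
```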
no code implementations • 11 Oct 2021 • Zahra Fatemi, Chen Xing, Wenhao Liu, Caiming Xiong
In this work, we empirically show that catastrophic forgetting occurs in such methods by evaluating them with general NLP tasks in GLUE.
1 code implementation • 8 Oct 2021 • Tanya Goyal, Nazneen Fatema Rajani, Wenhao Liu, Wojciech Kryściński
Summarization systems make numerous "decisions" about summary properties during inference, e.g., degree of copying, specificity, and length of outputs.
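A few of those inference-time decisions surface directly as decoding arguments in common toolkits; a sketch with illustrative values, not the paper's control mechanism:

```python
# Sketch: steering summary length and repetition via generate() arguments.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

article = "The quarterly report showed revenue growth across all regions..."
inputs = tok(article, return_tensors="pt", truncation=True)

summary_ids = model.generate(
    **inputs,
    max_length=60,            # cap on output length
    min_length=20,
    length_penalty=2.0,       # favor longer vs. shorter summaries
    no_repeat_ngram_size=3,   # curbs verbatim repetition
)
print(tok.decode(summary_ids[0], skip_special_tokens=True))
```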
no code implementations • 29 Sep 2021 • Tanya Goyal, Nazneen Rajani, Wenhao Liu, Wojciech Maciej Kryscinski
Existing abstractive summarization models lack explicit control mechanisms that would allow users to influence the stylistic features of the model outputs.
no code implementations • 29 Sep 2021 • Ben Krause, Nikhil Naik, Wenhao Liu, Ali Madani
Predicting the fitness, i.e., functional value, of a protein sequence is an important and challenging task in biology, particularly due to the scarcity of assay-labeled data.
1 code implementation • Findings (ACL) 2021 • Chien-Sheng Wu, Linqing Liu, Wenhao Liu, Pontus Stenetorp, Caiming Xiong
In this paper, we aim to improve abstractive dialogue summarization quality and, at the same time, enable granularity control.
1 code implementation • ACL 2022 • Chien-Sheng Wu, Andrea Madotto, Wenhao Liu, Pascale Fung, Caiming Xiong
This paper introduces QAConv, a new question answering (QA) dataset that uses conversations as a knowledge source.
no code implementations • 1 Jan 2021 • Chien-Sheng Wu, Linqing Liu, Wenhao Liu, Pontus Stenetorp, Caiming Xiong
2) A simple strategy to control the granularity of the final summary.
no code implementations • 16 Dec 2020 • Chen Xing, Wenhao Liu, Caiming Xiong
According to recent studies and our empirical observations, one possible reason is that some easy-to-fit patterns in the training data, such as frequently co-occurring word combinations, dominate and harm pre-training, making it hard for the model to fit more complex information.
1 code implementation • EMNLP 2020 • Jian-Guo Zhang, Kazuma Hashimoto, Wenhao Liu, Chien-Sheng Wu, Yao Wan, Philip S. Yu, Richard Socher, Caiming Xiong
Intent detection is one of the core components of goal-oriented dialog systems, and detecting out-of-scope (OOS) intents is also a practically important skill.