Search Results for author: Bing Liu

Found 184 papers, 73 papers with code

Human-Human Health Coaching via Text Messages: Corpus, Annotation, and Analysis

no code implementations SIGDIAL (ACL) 2020 Itika Gupta, Barbara Di Eugenio, Brian Ziebart, Aiswarya Baiju, Bing Liu, Ben Gerber, Lisa Sharp, Nadia Nabulsi, Mary Smart

In this paper, we discuss these schemas and briefly talk about their application for automatically extracting activity goals and annotating the second round of data, collected with different health coaches and patients.

Summarizing Behavioral Change Goals from SMS Exchanges to Support Health Coaches

no code implementations SIGDIAL (ACL) 2021 Itika Gupta, Barbara Di Eugenio, Brian D. Ziebart, Bing Liu, Ben S. Gerber, Lisa K. Sharp

In this paper, we present our work towards assisting health coaches by extracting the physical activity goal the user and coach negotiate via text messages.

Continual Learning Using Only Large Language Model Prompting

no code implementations 20 Dec 2024 Jiabao Qiu, Zixuan Ke, Bing Liu

We introduce CLOB, a novel continual learning (CL) paradigm wherein a large language model (LLM) is regarded as a black box.

Continual Learning Language Modeling +2

In-context Continual Learning Assisted by an External Continual Learner

1 code implementation 20 Dec 2024 Saleh Momeni, Sahisnu Mazumder, Zixuan Ke, Bing Liu

However, incrementally learning each new task in ICL necessitates adding training examples from each class of the task to the prompt, which hampers scalability as the prompt length increases.

Continual Learning In-Context Learning

Continual Learning Using a Kernel-Based Method Over Foundation Models

1 code implementation 20 Dec 2024 Saleh Momeni, Sahisnu Mazumder, Bing Liu

This paper proposes a novel CIL method, called Kernel Linear Discriminant Analysis (KLDA), that can effectively avoid CF and ICS problems.

class-incremental learning Class Incremental Learning +2
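The KLDA idea summarized above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the random-Fourier-feature kernel map, the dimensions, and the class names are all assumed. The point it demonstrates is that class-incremental learning can proceed by storing only per-class means plus one shared covariance over frozen features, so no replay of old data is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_RFF = 8, 64  # input feature dim and random-feature dim (illustrative values)
W = rng.normal(size=(D_IN, D_RFF))
b = rng.uniform(0, 2 * np.pi, size=D_RFF)

def rff(x):
    # Random Fourier features: a standard approximation of an RBF kernel map
    return np.sqrt(2.0 / D_RFF) * np.cos(np.asarray(x) @ W + b)

class KLDASketch:
    """Incremental LDA over kernelized features: per-class means + shared covariance."""
    def __init__(self):
        self.means = {}
        self.scatter = np.zeros((D_RFF, D_RFF))
        self.n = 0

    def learn_class(self, X, label):
        # Each class is seen once; only its statistics are stored (no replay)
        Z = rff(X)
        mu = Z.mean(axis=0)
        self.means[label] = mu
        self.scatter += (Z - mu).T @ (Z - mu)
        self.n += len(Z)

    def predict(self, x):
        z = rff(x)
        cov = self.scatter / max(self.n - 1, 1) + 1e-3 * np.eye(D_RFF)
        prec = np.linalg.inv(cov)
        # LDA discriminant score per stored class mean (equal priors assumed)
        scores = {c: z @ prec @ mu - 0.5 * mu @ prec @ mu
                  for c, mu in self.means.items()}
        return max(scores, key=scores.get)

clf = KLDASketch()
clf.learn_class(rng.normal(0.0, 0.1, size=(40, D_IN)), "cats")
clf.learn_class(rng.normal(3.0, 0.1, size=(40, D_IN)), "dogs")
```

Because old classes are never revisited and the shared covariance only accumulates, catastrophic forgetting cannot occur by construction in this sketch.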

DESIRE: Dynamic Knowledge Consolidation for Rehearsal-Free Continual Learning

no code implementations 28 Nov 2024 Haiyang Guo, Fei Zhu, Fanhu Zeng, Bing Liu, Xu-Yao Zhang

On the one hand, we retain only two sets of LoRA parameters for merging and propose dynamic representation consolidation to calibrate the merged feature representation.

Continual Learning parameter-efficient fine-tuning

Privacy-Preserving Resilient Vector Consensus

no code implementations 6 Nov 2024 Bing Liu, Chengcheng Zhao, Li Chai, Peng Cheng, Jiming Chen

This paper studies privacy-preserving resilient vector consensus in multi-agent systems against faulty agents, where normal agents can achieve consensus within the convex hull of their initial states while protecting state vectors from being disclosed.

Privacy Preserving
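A minimal sketch of the resilient-consensus ingredient described above, using a coordinate-wise trimmed mean (a standard resilience primitive; the paper's privacy-preserving mechanism is not reproduced here, and the agent values are invented for illustration):

```python
import numpy as np

def trimmed_mean_step(states, f):
    """One resilient update: per coordinate, discard the f largest and f
    smallest received values, then average the remainder. With at most f
    faulty agents, the result stays inside the per-coordinate convex hull
    of the honest agents' values."""
    S = np.sort(np.asarray(states, dtype=float), axis=0)  # sorts each coordinate independently
    return S[f:len(states) - f].mean(axis=0)

honest = [np.array([0.1, 0.9]), np.array([0.4, 0.2]), np.array([0.8, 0.5])]
faulty = [np.array([100.0, -100.0])]  # adversarial outlier trying to drag the consensus away
agreed = trimmed_mean_step(honest + faulty, f=1)
```

Trimming guarantees the outlier is discarded in every coordinate where it is extreme, which is exactly why the consensus value remains in the hull of the normal agents' states.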

Boosting LLM Translation Skills without General Ability Loss via Rationale Distillation

no code implementations 17 Oct 2024 Junhong Wu, Yang Zhao, Yangyifan Xu, Bing Liu, Chengqing Zong

These abilities, which are developed using proprietary and unavailable training data, make existing continual instruction tuning methods ineffective.

General Knowledge Instruction Following +2

Facial Action Unit Detection by Adaptively Constraining Self-Attention and Causally Deconfounding Sample

1 code implementation 2 Oct 2024 Zhiwen Shao, Hancheng Zhu, Yong Zhou, Xiang Xiang, Bing Liu, Rui Yao, Lizhuang Ma

Specifically, we explore the mechanism of self-attention weight distribution, in which the self-attention weight distribution of each AU is regarded as spatial distribution and is adaptively learned under the constraint of location-predefined attention and the guidance of AU detection.

Action Unit Detection Causal Inference +1

Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models

no code implementations 2 Oct 2024 Lucas Bandarkar, Benjamin Muller, Pritish Yuvraj, Rui Hou, Nayan Singhal, Hongjiang Lv, Bing Liu

We focus on mathematical reasoning and, without in-language math data, facilitate cross-lingual transfer by composing language and math capabilities.

Math Mathematical Reasoning +1

Predicting Lung Cancer Patient Prognosis with Large Language Models

no code implementations 15 Aug 2024 Danqing Hu, Bing Liu, Xiang Li, Xiaofeng Zhu, Nan Wu

The experimental results demonstrate that LLMs can achieve competitive, and in some tasks superior, performance in lung cancer prognosis prediction compared to data-driven logistic regression models despite not using additional patient data.

regression

The Llama 3 Herd of Models

2 code implementations31 Jul 2024 Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny Livshits, Danny Wyatt, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Francisco Guzmán, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Anderson, Govind Thattai, Graeme Nail, Gregoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel Kloumann, Ishan Misra, Ivan Evtimov, Jack Zhang, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer Van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Alwala, Karthik Prasad, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, Khalid El-Arini, Krithika Iyer, Kshitiz Malik, Kuenley Chiu, Kunal Bhalla, Kushal Lakhotia, Lauren Rantala-Yeary, Laurens van der Maaten, Lawrence Chen, Liang Tan, Liz Jenkins, Louis Martin, Lovish Madaan, Lubo Malo, Lukas Blecher, Lukas Landzaat, Luke de Oliveira, Madeline Muzzi, Mahesh Pasupuleti, Mannat Singh, Manohar Paluri, Marcin Kardas, Maria 
Tsimpoukelli, Mathew Oldham, Mathieu Rita, Maya Pavlova, Melanie Kambadur, Mike Lewis, Min Si, Mitesh Kumar Singh, Mona Hassan, Naman Goyal, Narjes Torabi, Nikolay Bashlykov, Nikolay Bogoychev, Niladri Chatterji, Ning Zhang, Olivier Duchenne, Onur Çelebi, Patrick Alrassy, Pengchuan Zhang, Pengwei Li, Petar Vasic, Peter Weng, Prajjwal Bhargava, Pratik Dubal, Praveen Krishnan, Punit Singh Koura, Puxin Xu, Qing He, Qingxiao Dong, Ragavan Srinivasan, Raj Ganapathy, Ramon Calderer, Ricardo Silveira Cabral, Robert Stojnic, Roberta Raileanu, Rohan Maheswari, Rohit Girdhar, Rohit Patel, Romain Sauvestre, Ronnie Polidoro, Roshan Sumbaly, Ross Taylor, Ruan Silva, Rui Hou, Rui Wang, Saghar Hosseini, Sahana Chennabasappa, Sanjay Singh, Sean Bell, Seohyun Sonia Kim, Sergey Edunov, Shaoliang Nie, Sharan Narang, Sharath Raparthy, Sheng Shen, Shengye Wan, Shruti Bhosale, Shun Zhang, Simon Vandenhende, Soumya Batra, Spencer Whitman, Sten Sootla, Stephane Collot, Suchin Gururangan, Sydney Borodinsky, Tamar Herman, Tara Fowler, Tarek Sheasha, Thomas Georgiou, Thomas Scialom, Tobias Speckbacher, Todor Mihaylov, Tong Xiao, Ujjwal Karn, Vedanuj Goswami, Vibhor Gupta, Vignesh Ramanathan, Viktor Kerkez, Vincent Gonguet, Virginie Do, Vish Vogeti, Vítor Albiero, Vladan Petrovic, Weiwei Chu, Wenhan Xiong, Wenyin Fu, Whitney Meers, Xavier Martinet, Xiaodong Wang, Xiaofang Wang, Xiaoqing Ellen Tan, Xide Xia, Xinfeng Xie, Xuchao Jia, Xuewei Wang, Yaelle Goldschlag, Yashesh Gaur, Yasmine Babaei, Yi Wen, Yiwen Song, Yuchen Zhang, Yue Li, Yuning Mao, Zacharie Delpierre Coudert, Zheng Yan, Zhengxing Chen, Zoe Papakipos, Aaditya Singh, Aayushi Srivastava, Abha Jain, Adam Kelsey, Adam Shajnfeld, Adithya Gangidi, Adolfo Victoria, Ahuva Goldstand, Ajay Menon, Ajay Sharma, Alex Boesenberg, Alexei Baevski, Allie Feinstein, Amanda Kallet, Amit Sangani, Amos Teo, Anam Yunus, Andrei Lupu, Andres Alvarado, Andrew Caples, Andrew Gu, Andrew Ho, Andrew Poulton, Andrew Ryan, Ankit Ramchandani, Annie Dong, Annie 
Franco, Anuj Goyal, Aparajita Saraf, Arkabandhu Chowdhury, Ashley Gabriel, Ashwin Bharambe, Assaf Eisenman, Azadeh Yazdan, Beau James, Ben Maurer, Benjamin Leonhardi, Bernie Huang, Beth Loyd, Beto De Paola, Bhargavi Paranjape, Bing Liu, Bo Wu, Boyu Ni, Braden Hancock, Bram Wasti, Brandon Spence, Brani Stojkovic, Brian Gamido, Britt Montalvo, Carl Parker, Carly Burton, Catalina Mejia, Ce Liu, Changhan Wang, Changkyu Kim, Chao Zhou, Chester Hu, Ching-Hsiang Chu, Chris Cai, Chris Tindal, Christoph Feichtenhofer, Cynthia Gao, Damon Civin, Dana Beaty, Daniel Kreymer, Daniel Li, David Adkins, David Xu, Davide Testuggine, Delia David, Devi Parikh, Diana Liskovich, Didem Foss, Dingkang Wang, Duc Le, Dustin Holland, Edward Dowling, Eissa Jamil, Elaine Montgomery, Eleonora Presani, Emily Hahn, Emily Wood, Eric-Tuan Le, Erik Brinkman, Esteban Arcaute, Evan Dunbar, Evan Smothers, Fei Sun, Felix Kreuk, Feng Tian, Filippos Kokkinos, Firat Ozgenel, Francesco Caggioni, Frank Kanayet, Frank Seide, Gabriela Medina Florez, Gabriella Schwarz, Gada Badeer, Georgia Swee, Gil Halpern, Grant Herman, Grigory Sizov, Guangyi, Zhang, Guna Lakshminarayanan, Hakan Inan, Hamid Shojanazeri, Han Zou, Hannah Wang, Hanwen Zha, Haroun Habeeb, Harrison Rudolph, Helen Suk, Henry Aspegren, Hunter Goldman, Hongyuan Zhan, Ibrahim Damlaj, Igor Molybog, Igor Tufanov, Ilias Leontiadis, Irina-Elena Veliche, Itai Gat, Jake Weissman, James Geboski, James Kohli, Janice Lam, Japhet Asher, Jean-Baptiste Gaya, Jeff Marcus, Jeff Tang, Jennifer Chan, Jenny Zhen, Jeremy Reizenstein, Jeremy Teboul, Jessica Zhong, Jian Jin, Jingyi Yang, Joe Cummings, Jon Carvill, Jon Shepard, Jonathan McPhie, Jonathan Torres, Josh Ginsburg, Junjie Wang, Kai Wu, Kam Hou U, Karan Saxena, Kartikay Khandelwal, Katayoun Zand, Kathy Matosich, Kaushik Veeraraghavan, Kelly Michelena, Keqian Li, Kiran Jagadeesh, Kun Huang, Kunal Chawla, Kyle Huang, Lailin Chen, Lakshya Garg, Lavender A, Leandro Silva, Lee Bell, Lei Zhang, Liangpeng Guo, Licheng 
Yu, Liron Moshkovich, Luca Wehrstedt, Madian Khabsa, Manav Avalani, Manish Bhatt, Martynas Mankus, Matan Hasson, Matthew Lennie, Matthias Reso, Maxim Groshev, Maxim Naumov, Maya Lathi, Meghan Keneally, Miao Liu, Michael L. Seltzer, Michal Valko, Michelle Restrepo, Mihir Patel, Mik Vyatskov, Mikayel Samvelyan, Mike Clark, Mike Macey, Mike Wang, Miquel Jubert Hermoso, Mo Metanat, Mohammad Rastegari, Munish Bansal, Nandhini Santhanam, Natascha Parks, Natasha White, Navyata Bawa, Nayan Singhal, Nick Egebo, Nicolas Usunier, Nikhil Mehta, Nikolay Pavlovich Laptev, Ning Dong, Norman Cheng, Oleg Chernoguz, Olivia Hart, Omkar Salpekar, Ozlem Kalinli, Parkin Kent, Parth Parekh, Paul Saab, Pavan Balaji, Pedro Rittner, Philip Bontrager, Pierre Roux, Piotr Dollar, Polina Zvyagina, Prashant Ratanchandani, Pritish Yuvraj, Qian Liang, Rachad Alao, Rachel Rodriguez, Rafi Ayub, Raghotham Murthy, Raghu Nayani, Rahul Mitra, Rangaprabhu Parthasarathy, Raymond Li, Rebekkah Hogan, Robin Battey, Rocky Wang, Russ Howes, Ruty Rinott, Sachin Mehta, Sachin Siby, Sai Jayesh Bondu, Samyak Datta, Sara Chugh, Sara Hunt, Sargun Dhillon, Sasha Sidorov, Satadru Pan, Saurabh Mahajan, Saurabh Verma, Seiji Yamamoto, Sharadh Ramaswamy, Shaun Lindsay, Sheng Feng, Shenghao Lin, Shengxin Cindy Zha, Shishir Patil, Shiva Shankar, Shuqiang Zhang, Sinong Wang, Sneha Agarwal, Soji Sajuyigbe, Soumith Chintala, Stephanie Max, Stephen Chen, Steve Kehoe, Steve Satterfield, Sudarshan Govindaprasad, Sumit Gupta, Summer Deng, Sungmin Cho, Sunny Virk, Suraj Subramanian, Sy Choudhury, Sydney Goldman, Tal Remez, Tamar Glaser, Tamara Best, Thilo Koehler, Thomas Robinson, Tianhe Li, Tianjun Zhang, Tim Matthews, Timothy Chou, Tzook Shaked, Varun Vontimitta, Victoria Ajayi, Victoria Montanez, Vijai Mohan, Vinay Satish Kumar, Vishal Mangla, Vlad Ionescu, Vlad Poenaru, Vlad Tiberiu Mihailescu, Vladimir Ivanov, Wei Li, Wenchen Wang, WenWen Jiang, Wes Bouaziz, Will Constable, Xiaocheng Tang, Xiaojian Wu, Xiaolan Wang, Xilun Wu, 
Xinbo Gao, Yaniv Kleinman, Yanjun Chen, Ye Hu, Ye Jia, Ye Qi, Yenda Li, Yilin Zhang, Ying Zhang, Yossi Adi, Youngjin Nam, Yu, Wang, Yu Zhao, Yuchen Hao, Yundi Qian, Yunlu Li, Yuzi He, Zach Rait, Zachary DeVito, Zef Rosnbrick, Zhaoduo Wen, Zhenyu Yang, Zhiwei Zhao, Zhiyu Ma

This paper presents a new set of foundation models, called Llama 3.

Ranked #3 on Multi-task Language Understanding on MMLU (using extra training data)

Language Modeling Language Modelling +3

Talk With Human-like Agents: Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction

2 code implementations 18 Jun 2024 Haoqiu Yan, Yongxin Zhu, Kai Zheng, Bing Liu, Haoyu Cao, Deqiang Jiang, Linli Xu

This oversight can lead to misinterpretations of speakers' intentions, resulting in inconsistent or even contradictory responses within dialogues.

Language Modeling Language Modelling +1

Relational Learning in Pre-Trained Models: A Theory from Hypergraph Recovery Perspective

no code implementations 17 Jun 2024 Yang Chen, Cong Fang, Zhouchen Lin, Bing Liu

Foundation Models (FMs) have demonstrated remarkable insights into the relational dynamics of the world, leading to the crucial question: how do these models acquire an understanding of world hybrid relations?

Entity Alignment Relational Reasoning

The current status of large language models in summarizing radiology report impressions

no code implementations 4 Jun 2024 Danqing Hu, Shanyuan Zhang, Qing Liu, Xiaofeng Zhu, Bing Liu

Besides the automatic quantitative evaluation metrics, we define five human evaluation metrics, i.e., completeness, correctness, conciseness, verisimilitude, and replaceability, to evaluate the semantics of the generated impressions.

Text Generation

Probing Language Models for Pre-training Data Detection

1 code implementation 3 Jun 2024 Zhenhua Liu, Tong Zhu, Chuanyuan Tan, Haonan Lu, Bing Liu, Wenliang Chen

Large Language Models (LLMs) have shown their impressive capabilities, while also raising concerns about the data contamination problems due to privacy issues and leakage of benchmark datasets in the pre-training phase.

Probing Language Models

Modeling Low-Resource Health Coaching Dialogues via Neuro-Symbolic Goal Summarization and Text-Units-Text Generation

1 code implementation 16 Apr 2024 Yue Zhou, Barbara Di Eugenio, Brian Ziebart, Lisa Sharp, Bing Liu, Nikolaos Agadakos

Health coaching helps patients achieve personalized and lifestyle-related goals, effectively managing chronic conditions and alleviating mental health issues.

Dialogue Generation

Towards Enhancing Health Coaching Dialogue in Low-Resource Settings

1 code implementation COLING 2022 Yue Zhou, Barbara Di Eugenio, Brian Ziebart, Lisa Sharp, Bing Liu, Ben Gerber, Nikolaos Agadakos, Shweta Yadav

In this paper, we propose to build a dialogue system that converses with the patients, helps them create and accomplish specific goals, and can address their emotions with empathy.

Empathetic Response Generation Response Generation

A Theory for Length Generalization in Learning to Reason

no code implementations 31 Mar 2024 Changnan Xiao, Bing Liu

Length generalization (LG) is a challenging problem in learning to reason.

Retrieval Augmented Cross-Modal Tag Recommendation in Software Q&A Sites

no code implementations 6 Feb 2024 Sijin Lu, Pengyu Xu, Bing Liu, Hongjian Sun, Liping Jing, Jian Yu

For the retrieval-augmented representations, we employ a cross-modal context-aware attention to leverage the main modality description for targeted feature extraction across the submodalities title and code.

feature selection Retrieval +1

How to Forget Clients in Federated Online Learning to Rank?

1 code implementation 24 Jan 2024 Shuyi Wang, Bing Liu, Guido Zuccon

In a FOLTR system, a ranker is learned by aggregating local updates to the global ranking model.

Learning-To-Rank
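The FOLTR setup described above (local updates to a shared linear ranker, aggregated server-side) can be sketched as follows. This is a generic federated-averaging illustration with invented data and a least-squares relevance objective; it does not implement the paper's client-forgetting method:

```python
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([1.0, -2.0, 0.5])  # hypothetical "ideal" ranker weights

# Three clients, each holding private (features, relevance) interaction data
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, steps=5):
    # A few local gradient steps on a least-squares ranking objective;
    # raw data never leaves the client, only the updated weights do
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

w_global = np.zeros(3)
for _ in range(50):                        # federated rounds
    updates = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(updates, axis=0)    # server aggregates into the global ranker
```

The unlearning question the paper asks then becomes: how to remove one client's contribution from `w_global` after many such aggregation rounds.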

SciConNav: Knowledge navigation through contextual learning of extensive scientific research trajectories

1 code implementation 22 Jan 2024 Shibing Xiang, Xin Jiang, Bing Liu, Yurui Huang, Chaolin Tian, Yifang Ma

New knowledge builds upon existing foundations, which means an interdependent relationship exists between knowledge, manifested in the historical development of the scientific system for hundreds of years.

Attribute

Grounding for Artificial Intelligence

no code implementations 15 Dec 2023 Bing Liu

A core function of intelligence is grounding, which is the process of connecting natural language and abstract knowledge to the internal representation of the real world in an intelligent being, e.g., a human.

Conditions for Length Generalization in Learning Reasoning Skills

no code implementations 22 Nov 2023 Changnan Xiao, Bing Liu

However, numerous evaluations of the reasoning capabilities of LLMs have also showed some limitations.

Open-source Large Language Models are Strong Zero-shot Query Likelihood Models for Document Ranking

1 code implementation 20 Oct 2023 Shengyao Zhuang, Bing Liu, Bevan Koopman, Guido Zuccon

In the field of information retrieval, Query Likelihood Models (QLMs) rank documents based on the probability of generating the query given the content of a document.

Document Ranking Information Retrieval +3
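The query-likelihood principle stated above (rank documents by the probability of generating the query given the document) can be illustrated with a classic Dirichlet-smoothed unigram instantiation. This toy scorer stands in for the paper's LLM-based one; the documents, query, and smoothing values are invented:

```python
import math
from collections import Counter

def query_likelihood(query, doc, mu=10, vocab_size=1000):
    """Log P(query | doc) under a Dirichlet-smoothed unigram document model."""
    tokens = doc.split()
    counts = Counter(tokens)  # Counter returns 0 for unseen terms
    score = 0.0
    for term in query.split():
        # Smoothing avoids log(0) for query terms absent from the document
        p = (counts[term] + mu * (1.0 / vocab_size)) / (len(tokens) + mu)
        score += math.log(p)
    return score

docs = ["the cat sat on the mat", "stock markets fell sharply today"]
ranked = sorted(docs, key=lambda d: query_likelihood("cat mat", d), reverse=True)
```

An LLM-based QLM replaces the unigram model with the model's token-level log-probabilities of the query conditioned on the document, but the ranking rule is the same.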

Sequential Tag Recommendation

no code implementations 9 Oct 2023 Bing Liu, Pengyu Xu, Sijin Lu, Shijing Wang, Hongjian Sun, Liping Jing

With the development of Internet technology and the expansion of social networks, online platforms have become an important way for people to obtain information.

Recommendation Systems Retrieval +1

Class Incremental Learning via Likelihood Ratio Based Task Prediction

2 code implementations 26 Sep 2023 Haowei Lin, Yijia Shao, Weinan Qian, Ningxin Pan, Yiduo Guo, Bing Liu

An emerging theory-guided approach (called TIL+OOD) is to train a task-specific model for each task in a shared network for all tasks based on a task-incremental learning (TIL) method to deal with catastrophic forgetting.

class-incremental learning Class Incremental Learning +1

Zero-shot information extraction from radiological reports using ChatGPT

no code implementations 4 Sep 2023 Danqing Hu, Bing Liu, Xiaofeng Zhu, Xudong Lu, Nan Wu

Information extraction is the strategy to transform the sequence of characters into structured data, which can be employed for secondary analysis.

Language Modelling Large Language Model +3

CT-Net: Arbitrary-Shaped Text Detection via Contour Transformer

no code implementations 25 Jul 2023 Zhiwen Shao, Yuchen Su, Yong Zhou, Fanrong Meng, Hancheng Zhu, Bing Liu, Rui Yao

Contour based scene text detection methods have rapidly developed recently, but still suffer from inaccurate frontend contour initialization, multi-stage error accumulation, or deficient local information aggregation.

Scene Text Detection Text Detection

Parameter-Level Soft-Masking for Continual Learning

1 code implementation 26 Jun 2023 Tatsuya Konishi, Mori Kurokawa, Chihiro Ono, Zixuan Ke, Gyuhak Kim, Bing Liu

Although several techniques have achieved learning with no CF, they attain it by letting each task monopolize a sub-network in a shared network, which seriously limits knowledge transfer (KT) and causes over-consumption of the network capacity, i.e., as more tasks are learned, the performance deteriorates.

Continual Learning Incremental Learning +1

Class-Incremental Learning based on Label Generation

1 code implementation 22 Jun 2023 Yijia Shao, Yiduo Guo, Dongyan Zhao, Bing Liu

Despite the great success of pre-trained language models, it is still a challenge to use these models for continual learning, especially for the class-incremental learning (CIL) setting due to catastrophic forgetting (CF).

class-incremental learning Class Incremental Learning +1

Dealing with Cross-Task Class Discrimination in Online Continual Learning

1 code implementation CVPR 2023 Yiduo Guo, Bing Liu, Dongyan Zhao

A novel optimization objective with a gradient-based adaptive method is proposed to dynamically deal with the problem in the online CL process.

class-incremental learning Class Incremental Learning +1

Sentiment Analysis in the Era of Large Language Models: A Reality Check

1 code implementation 24 May 2023 Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, Lidong Bing

This paper aims to provide a comprehensive investigation into the capabilities of LLMs in performing various sentiment analysis tasks, from conventional sentiment classification to aspect-based sentiment analysis and multifaceted analysis of subjective texts.

Aspect-Based Sentiment Analysis Few-Shot Learning +2

Do We Need an Encoder-Decoder to Model Dynamical Systems on Networks?

1 code implementation 20 May 2023 Bing Liu, Wei Luo, Gang Li, Jing Huang, Bo Yang

As deep learning gains popularity in modelling dynamical systems, we expose an underappreciated misunderstanding relevant to modelling dynamics on networks.

Decoder Time Series

Analyzing and Reducing the Performance Gap in Cross-Lingual Transfer with Fine-tuning Slow and Fast

no code implementations 19 May 2023 Yiduo Guo, Yaobo Liang, Dongyan Zhao, Bing Liu, Duan Nan

Existing research has shown that a multilingual pre-trained language model fine-tuned with one (source) language also performs well on downstream tasks for non-source languages, even though no fine-tuning is done on these languages.

Cross-Lingual Transfer Language Modeling +1

A Unified Evaluation Framework for Novelty Detection and Accommodation in NLP with an Instantiation in Authorship Attribution

no code implementations 8 May 2023 Neeraj Varshney, Himanshu Gupta, Eric Robertson, Bing Liu, Chitta Baral

To initiate a systematic research in this important area of 'dealing with novelties', we introduce 'NoveltyTask', a multi-stage task to evaluate a system's performance on pipelined novelty 'detection' and 'accommodation' tasks.

Authorship Attribution Novelty Detection

Open-World Continual Learning: Unifying Novelty Detection and Continual Learning

no code implementations 20 Apr 2023 Gyuhak Kim, Changnan Xiao, Tatsuya Konishi, Zixuan Ke, Bing Liu

The paper then proves that the theory can be generalized or extended to open-world CIL, the proposed open-world continual learning setting, which can perform CIL in the open world and detect future or open-world OOD data.

class-incremental learning Class Incremental Learning +3

Grab What You Need: Rethinking Complex Table Structure Recognition with Flexible Components Deliberation

no code implementations 16 Mar 2023 Hao Liu, Xin Li, Mingming Gong, Bing Liu, Yunfei Wu, Deqiang Jiang, Yinsong Liu, Xing Sun

Recently, Table Structure Recognition (TSR) task, aiming at identifying table structure into machine readable formats, has received increasing interest in the community.

Adapting a Language Model While Preserving its General Knowledge

2 code implementations 21 Jan 2023 Zixuan Ke, Yijia Shao, Haowei Lin, Hu Xu, Lei Shu, Bing Liu

This paper shows that the existing methods are suboptimal and proposes a novel method to perform a more informed adaptation of the knowledge in the LM by (1) soft-masking the attention heads based on their importance to best preserve the general knowledge in the LM and (2) contrasting the representations of the general and the full (both general and domain knowledge) to learn an integrated representation with both general and domain-specific knowledge.

Continual Learning General Knowledge +2
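The soft-masking mechanism summarized above can be shown schematically. This sketch applies the idea to generic parameters rather than attention heads, and the importance scores are invented; the paper's importance estimation procedure is not reproduced:

```python
import numpy as np

params = np.array([1.0, 1.0, 1.0, 1.0])
grad = np.array([1.0, 1.0, 1.0, 1.0])
# Assumed per-unit importance to general knowledge, in [0, 1]
importance = np.array([0.95, 0.5, 0.1, 0.0])

def soft_masked_step(params, grad, importance, lr=0.1):
    # Scale each unit's gradient by (1 - importance): units important to
    # general knowledge barely move, while unimportant units adapt freely
    # to the new domain. Unlike a hard binary mask, every unit can still
    # receive some update.
    return params - lr * (1.0 - importance) * grad

updated = soft_masked_step(params, grad, importance)
```

The continuous mask is the key contrast with hard parameter isolation: protection is graded by importance instead of all-or-nothing.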

Joint Spatio-Temporal Modeling for the Semantic Change Detection in Remote Sensing Images

3 code implementations 10 Dec 2022 Lei Ding, Jing Zhang, Kai Zhang, Haitao Guo, Bing Liu, Lorenzo Bruzzone

Semantic Change Detection (SCD) refers to the task of simultaneously extracting the changed areas and the semantic categories (before and after the changes) in Remote Sensing Images (RSIs).

Change Detection

Guiding Neural Entity Alignment with Compatibility

1 code implementation 29 Nov 2022 Bing Liu, Harrisen Scells, Wen Hua, Guido Zuccon, Genghong Zhao, Xia Zhang

Making compatible predictions thus should be one of the goals of training an EA model along with fitting the labelled data: this aspect however is neglected in current methods.

Entity Alignment Knowledge Graphs

Dependency-aware Self-training for Entity Alignment

1 code implementation 29 Nov 2022 Bing Liu, Tiancheng Lan, Wen Hua, Guido Zuccon

Entity Alignment (EA), which aims to detect entity mappings (i.e., equivalent entity pairs) in different Knowledge Graphs (KGs), is critical for KG fusion.

Entity Alignment Knowledge Graphs

Continual Learning of Natural Language Processing Tasks: A Survey

1 code implementation 23 Nov 2022 Zixuan Ke, Bing Liu

Continual learning (CL) is a learning paradigm that emulates the human capability of learning and accumulating knowledge continually without forgetting the previously learned knowledge and also transferring the learned knowledge to help learn new tasks better.

Continual Learning Survey +1

Lifelong and Continual Learning Dialogue Systems

no code implementations 12 Nov 2022 Sahisnu Mazumder, Bing Liu

This book introduces the new paradigm of lifelong learning dialogue systems to endow chatbots with the ability to learn continually by themselves through their own self-initiated interactions with their users and working environments to improve themselves.

Continual Learning

Semantic Novelty Detection and Characterization in Factual Text Involving Named Entities

1 code implementation 31 Oct 2022 Nianzu Ma, Sahisnu Mazumder, Alexander Politowicz, Bing Liu, Eric Robertson, Scott Grigsby

Much of the existing work on text novelty detection has been studied at the topic level, i.e., identifying whether the topic of a document or a sentence is novel or not.

Novelty Detection Sentence

Knowledge-Guided Exploration in Deep Reinforcement Learning

no code implementations 26 Oct 2022 Sahisnu Mazumder, Bing Liu, Shuai Wang, Yingxuan Zhu, Xiaotian Yin, Lifeng Liu, Jian Li

This paper proposes a new method to drastically speed up deep reinforcement learning (deep RL) training for problems that have the property of state-action permissibility (SAP).

Deep Reinforcement Learning reinforcement-learning +1

Continual Training of Language Models for Few-Shot Learning

3 code implementations 11 Oct 2022 Zixuan Ke, Haowei Lin, Yijia Shao, Hu Xu, Lei Shu, Bing Liu

Recent work on applying large language models (LMs) achieves impressive performance in many NLP applications.

Continual Learning Continual Pretraining +2

High-quality Task Division for Large-scale Entity Alignment

1 code implementation 22 Aug 2022 Bing Liu, Wen Hua, Guido Zuccon, Genghong Zhao, Xia Zhang

To include in the EA subtasks a high proportion of the potential mappings originally present in the large EA task, we devise a counterpart discovery method that exploits the locality principle of the EA task and the power of trained EA models.

Entity Alignment Informativeness +1

A Multi-Head Model for Continual Learning via Out-of-Distribution Replay

3 code implementations 20 Aug 2022 Gyuhak Kim, Zixuan Ke, Bing Liu

Instead of using the saved samples in memory to update the network for previous tasks/classes in the existing approach, MORE leverages the saved samples to build a task specific classifier (adding a new classification head) without updating the network learned for previous tasks/classes.

class-incremental learning Class Incremental Learning +2

TextDCT: Arbitrary-Shaped Text Detection via Discrete Cosine Transform Mask

no code implementations 27 Jun 2022 Yuchen Su, Zhiwen Shao, Yong Zhou, Fanrong Meng, Hancheng Zhu, Bing Liu, Rui Yao

Arbitrary-shaped scene text detection is a challenging task due to the variety of text changes in font, size, color, and orientation.

Scene Text Detection Text Detection

Beyond Opinion Mining: Summarizing Opinions of Customer Reviews

1 code implementation 3 Jun 2022 Reinald Kim Amplayo, Arthur Bražinskas, Yoshi Suhara, Xiaolan Wang, Bing Liu

In this tutorial, we present various aspects of opinion summarization that are useful for researchers and practitioners.

Opinion Mining Opinion Summarization +2

Unsupervised Meta Learning With Multiview Constraints for Hyperspectral Image Small Sample set Classification

1 code implementation IEEE Transactions on Image Processing 2022 Kuiliang Gao, Bing Liu, Xuchu Yu, and Anzhu Yu

However, the existing methods based on meta learning still need to construct a labeled source data set with several pre-collected HSIs, and must utilize a large number of labeled samples for meta-training, which is time-consuming and labor-intensive.

Classification domain classification +2

Open-set Recognition via Augmentation-based Similarity Learning

no code implementations 24 Mar 2022 Sepideh Esmaeilpour, Lei Shu, Bing Liu

In many practical scenarios, this is not the case because there are unknowns or unseen class samples in the test data, which is called the open set scenario, and the unknowns need to be detected.

Open Set Learning

Continual Learning Based on OOD Detection and Task Masking

1 code implementation 17 Mar 2022 Gyuhak Kim, Sepideh Esmaeilpour, Changnan Xiao, Bing Liu

Existing continual learning techniques focus on either task incremental learning (TIL) or class incremental learning (CIL) problem, but not both.

class-incremental learning Class Incremental Learning +2

AI Autonomy: Self-Initiated Open-World Continual Learning and Adaptation

no code implementations 17 Mar 2022 Bing Liu, Sahisnu Mazumder, Eric Robertson, Scott Grigsby

As more and more AI agents are used in practice, it is time to think about how to make these agents fully autonomous so that they can (1) learn by themselves continually in a self-motivated and self-initiated manner rather than being retrained offline periodically on the initiation of human engineers and (2) accommodate or adapt to unexpected or novel circumstances.

AI Agent Continual Learning

Ensemble Semi-supervised Entity Alignment via Cycle-teaching

1 code implementation 12 Mar 2022 Kexuan Xin, Zequn Sun, Wen Hua, Bing Liu, Wei Hu, Jianfeng Qu, Xiaofang Zhou

We also design a conflict resolution mechanism to resolve the alignment conflict when combining the new alignment of an aligner and that from its teacher.

Entity Alignment Knowledge Graphs

Zero-Shot Aspect-Based Sentiment Analysis

no code implementations 4 Feb 2022 Lei Shu, Hu Xu, Bing Liu, Jiahua Chen

Aspect-based sentiment analysis (ABSA) typically requires in-domain annotated data for supervised training/fine-tuning.

Aspect-Based Sentiment Analysis Aspect Extraction +2

Show, Deconfound and Tell: Image Captioning With Causal Inference

1 code implementation CVPR 2022 Bing Liu, Dong Wang, Xu Yang, Yong Zhou, Rui Yao, Zhiwen Shao, Jiaqi Zhao

In the encoding stage, the IOD is able to disentangle the region-based visual features by deconfounding the visual confounder.

Causal Inference Decoder +1

Continual Learning with Knowledge Transfer for Sentiment Classification

2 code implementations 18 Dec 2021 Zixuan Ke, Bing Liu, Hao Wang, Lei Shu

In this setting, the CL system learns a sequence of SC tasks incrementally in a neural network, where each task builds a classifier to classify the sentiment of reviews of a particular product category or domain.

Classification Continual Learning +4

Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks

2 code implementations NeurIPS 2020 Zixuan Ke, Bing Liu, Xingchang Huang

To the best of our knowledge, no technique has been proposed to learn a sequence of mixed similar and dissimilar tasks that can deal with forgetting and also transfer knowledge forward and backward.

Continual Learning

Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning

1 code implementation NeurIPS 2021 Zixuan Ke, Bing Liu, Nianzu Ma, Hu Xu, Lei Shu

Although several papers have tried to deal with both CF and KT, our experiments show that they suffer from serious CF when the tasks do not have much shared knowledge.

Continual Learning Language Modeling +3

CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks

1 code implementation EMNLP 2021 Zixuan Ke, Bing Liu, Hu Xu, Lei Shu

The key novelty is a contrastive continual learning method that enables both knowledge transfer across tasks and knowledge distillation from old tasks to the new task, which eliminates the need for task ids in testing.

Classification Continual Learning +6
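The CLASSIC snippet above mentions knowledge distillation from old tasks to the new task. As a minimal, hedged sketch of that standard ingredient (not CLASSIC's full contrastive method), the classic distillation loss is a KL divergence between temperature-softened teacher and student outputs; all names below are illustrative:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())  # numerically stable softmax
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs --
    the standard way old-task knowledge is distilled into a new model."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

loss_same = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
loss_diff = distillation_loss([-1.0, 0.5, 2.0], [2.0, 0.5, -1.0])
print(loss_same < loss_diff)  # matching the teacher gives lower loss
```

The temperature `T > 1` softens both distributions so the student also learns from the teacher's relative confidences on non-target classes.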

Neural Collaborative Graph Machines for Table Structure Recognition

no code implementations CVPR 2022 Hao Liu, Xin Li, Bing Liu, Deqiang Jiang, Yinsong Liu, Bo Ren

We also show that the proposed NCGM can modulate collaborative pattern of different modalities conditioned on the context of intra-modality cues, which is vital for diversified table cases.

Table Recognition

Self-Initiated Open World Learning for Autonomous AI Agents

no code implementations21 Oct 2021 Bing Liu, Eric Robertson, Scott Grigsby, Sahisnu Mazumder

As more and more AI agents are used in practice, it is time to think about how to make these agents fully autonomous so that they can learn by themselves in a self-motivated and self-supervised manner rather than being retrained periodically at the initiative of human engineers using expanded training data.

AI Agent

ActiveEA: Active Learning for Neural Entity Alignment

1 code implementation EMNLP 2021 Bing Liu, Harrisen Scells, Guido Zuccon, Wen Hua, Genghong Zhao

Entity Alignment (EA) aims to match equivalent entities across different Knowledge Graphs (KGs) and is an essential step of KG fusion.

Active Learning Entity Alignment +1

Partially Relaxed Masks for Lightweight Knowledge Transfer without Forgetting in Continual Learning

no code implementations29 Sep 2021 Tatsuya Konishi, Mori Kurokawa, Roberto Legaspi, Chihiro Ono, Zixuan Ke, Gyuhak Kim, Bing Liu

The goal of this work is to endow such systems with the additional ability to transfer knowledge among tasks when the tasks are similar and have shared knowledge to achieve higher accuracy.

Continual Learning Incremental Learning +1

Efficient Out-of-Distribution Detection via CVAE data Generation

no code implementations29 Sep 2021 Mengyu Wang, Yijia Shao, Haowei Lin, Wenpeng Hu, Bing Liu

Recently, contrastive loss with data augmentation and pseudo class creation has been shown to produce markedly better results for out-of-distribution (OOD) detection than previous methods.

Data Augmentation Out-of-Distribution Detection +1

Continual Learning Using Pseudo-Replay via Latent Space Sampling

no code implementations29 Sep 2021 Gyuhak Kim, Sepideh Esmaeilpour, Zixuan Ke, Tatsuya Konishi, Bing Liu

PLS is not only simple and efficient but also does not compromise data privacy, because it works in the latent feature space.

class-incremental learning Class Incremental Learning +1
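The PLS snippet above stresses that pseudo-replay in the latent feature space avoids storing raw data. A common way to realize that idea (a hedged sketch, not necessarily the paper's exact procedure) is to summarize each old class by a Gaussian over its latent features and sample pseudo-features for replay; all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_latent_gaussian(features):
    """Summarize an old class by the mean/covariance of its latent features."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    return mu, cov

def sample_pseudo_features(mu, cov, n):
    # Pseudo-replay: draw latent vectors instead of storing raw inputs.
    return rng.multivariate_normal(mu, cov, size=n)

# Toy latent features for one old class (500 samples, 8-dim).
old_feats = rng.normal(loc=2.0, scale=0.5, size=(500, 8))
mu, cov = fit_latent_gaussian(old_feats)
replay = sample_pseudo_features(mu, cov, 64)
print(replay.shape)  # (64, 8)
```

Only the per-class statistics (`mu`, `cov`) need to be retained, which is both compact and privacy-preserving relative to storing raw examples.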

Zero-Shot Dialogue State Tracking via Cross-Task Transfer

1 code implementation EMNLP 2021 Zhaojiang Lin, Bing Liu, Andrea Madotto, Seungwhan Moon, Paul Crook, Zhenpeng Zhou, Zhiguang Wang, Zhou Yu, Eunjoon Cho, Rajen Subba, Pascale Fung

Zero-shot transfer learning for dialogue state tracking (DST) enables us to handle a variety of task-oriented dialogue domains without the expense of collecting in-domain data.

Dialogue State Tracking Question Answering +1

Zero-Shot Out-of-Distribution Detection Based on the Pre-trained Model CLIP

2 code implementations6 Sep 2021 Sepideh Esmaeilpour, Bing Liu, Eric Robertson, Lei Shu

In an out-of-distribution (OOD) detection problem, samples of known classes (also called in-distribution classes) are used to train a special classifier.

Out-of-Distribution Detection Out of Distribution (OOD) Detection +2

Concept-Based Label Embedding via Dynamic Routing for Hierarchical Text Classification

1 code implementation ACL 2021 Xuepeng Wang, Li Zhao, Bing Liu, Tao Chen, Feng Zhang, Di Wang

In this paper, we propose a novel concept-based label embedding method that can explicitly represent the concept and model the sharing mechanism among classes for the hierarchical text classification.

text-classification Text Classification

Leveraging Slot Descriptions for Zero-Shot Cross-Domain Dialogue State Tracking

2 code implementations10 May 2021 Zhaojiang Lin, Bing Liu, Seungwhan Moon, Paul Crook, Zhenpeng Zhou, Zhiguang Wang, Zhou Yu, Andrea Madotto, Eunjoon Cho, Rajen Subba

Zero-shot cross-domain dialogue state tracking (DST) enables us to handle task-oriented dialogue in unseen domains without the expense of collecting in-domain data.

Dialogue State Tracking Transfer Learning

Learning to Dynamically Select Between Reward Shaping Signals

no code implementations1 Jan 2021 Alexander Politowicz, Bing Liu

Automatic reward shaping is one approach to solving this problem: it automatically identifies and modulates shaping reward signals that are more informative about how agents should behave in a given scenario, helping agents learn and adapt faster.

Reinforcement Learning (RL)
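The snippet above concerns choosing among shaping reward signals. As background (a hedged sketch of the standard potential-based formulation, not this paper's selection mechanism), a shaping signal built from a potential function preserves the optimal policy; the corridor example below is illustrative:

```python
def shaped_reward(reward, phi_s, phi_s_next, gamma=0.99):
    """Potential-based shaping: adds gamma*phi(s') - phi(s) to the
    environment reward, which preserves the optimal policy."""
    return reward + gamma * phi_s_next - phi_s

# Distance-to-goal potential on a 1-D corridor with the goal at x = 10.
phi = lambda x: -abs(10 - x)
r = shaped_reward(0.0, phi(3), phi(4), gamma=1.0)
print(r)  # moving toward the goal earns positive shaping reward
```

Different candidate potentials (`phi`) yield different shaping signals, which is exactly the space a dynamic-selection method would choose from.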

Continual Learning in Task-Oriented Dialogue Systems

1 code implementation EMNLP 2021 Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul Crook, Bing Liu, Zhou Yu, Eunjoon Cho, Zhiguang Wang

Continual learning in task-oriented dialogue systems can allow us to add new domains and functionalities through time without incurring the high cost of a whole system retraining.

Continual Learning Intent Recognition +3

A Deep Reinforcement Learning Approach for Ramp Metering Based on Traffic Video Data

no code implementations9 Dec 2020 Bing Liu, Yu Tang, Yuxiong Ji, Yu Shen, Yuchuan Du

Ramp metering, which uses traffic signals to regulate vehicle flows from on-ramps, has been widely implemented to improve vehicle mobility on freeways.

Deep Reinforcement Learning Reinforcement Learning (RL)

HRN: A Holistic Approach to One Class Learning

1 code implementation NeurIPS 2020 Wenpeng Hu, Mengyu Wang, Qi Qin, Jinwen Ma, Bing Liu

Existing neural-network-based one-class learning methods mainly use various forms of auto-encoders or GAN-style adversarial training to learn a latent representation of the given one class of data.

Anomaly Detection Image Classification

Attention Aware Cost Volume Pyramid Based Multi-view Stereo Network for 3D Reconstruction

1 code implementation25 Nov 2020 Anzhu Yu, Wenyue Guo, Bing Liu, Xin Chen, Xin Wang, Xuefeng Cao, Bingchuan Jiang

This strategy estimates the depth map at the coarsest level, while the depth maps at finer levels are computed as the upsampled depth map from the previous level plus a pixel-wise depth residual.

3D Reconstruction
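The coarse-to-fine scheme described above can be sketched in a few lines. This is a minimal illustration of the upsample-plus-residual step only; the residual here is a placeholder for what the network would predict, and all names are assumptions:

```python
import numpy as np

def upsample2x(depth):
    # Nearest-neighbour 2x upsampling of a depth map.
    return np.repeat(np.repeat(depth, 2, axis=0), 2, axis=1)

def refine(coarse_depth, residual):
    """Finer-level depth = upsampled coarse depth + pixel-wise residual."""
    return upsample2x(coarse_depth) + residual

coarse = np.array([[2.0, 3.0], [4.0, 5.0]])  # coarsest-level estimate
residual = 0.1 * np.ones((4, 4))             # stand-in for a predicted residual
fine = refine(coarse, residual)
print(fine.shape)  # (4, 4)
```

Repeating this step across pyramid levels keeps each residual prediction small and cheap compared to regressing full-resolution depth directly.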

Lifelong Knowledge Learning in Rule-based Dialogue Systems

no code implementations19 Nov 2020 Bing Liu, Chuhe Mei

One of the main weaknesses of current chatbots or dialogue systems is that they do not learn online during conversations after they are deployed.

Chatbot

Using the Past Knowledge to Improve Sentiment Classification

no code implementations Findings of the Association for Computational Linguistics 2020 Qi Qin, Wenpeng Hu, Bing Liu

It proposes a new lifelong learning model (called L2PG) that can retain and selectively transfer the knowledge learned in the past to help learn the new task.

Classification Knowledge Distillation +2

Understanding Pre-trained BERT for Aspect-based Sentiment Analysis

2 code implementations COLING 2020 Hu Xu, Lei Shu, Philip S. Yu, Bing Liu

Most features in the representation of an aspect are dedicated to the fine-grained semantics of the domain (or product category) and the aspect itself, instead of carrying summarized opinions from its context.

Aspect-Based Sentiment Analysis Aspect-Based Sentiment Analysis (ABSA) +3

NUANCED: Natural Utterance Annotation for Nuanced Conversation with Estimated Distributions

1 code implementation Findings (EMNLP) 2021 Zhiyu Chen, Honglei Liu, Hu Xu, Seungwhan Moon, Hao Zhou, Bing Liu

As there is no clean mapping from a user's free-form utterance to an ontology, we first model the user preferences as estimated distributions over the system ontology and map the users' utterances to such distributions.

Conversational Recommendation Dialogue State Tracking

Adding Chit-Chat to Enhance Task-Oriented Dialogues

1 code implementation NAACL 2021 Kai Sun, Seungwhan Moon, Paul Crook, Stephen Roller, Becka Silvert, Bing Liu, Zhiguang Wang, Honglei Liu, Eunjoon Cho, Claire Cardie

Existing dialogue corpora and models are typically designed under two disjoint motives: while task-oriented systems focus on achieving functional goals (e.g., booking hotels), open-domain chatbots aim at making socially engaging conversations.

Dialogue Generation Dialogue Understanding +1

A Knowledge-Driven Approach to Classifying Object and Attribute Coreferences in Opinion Mining

no code implementations Findings of the Association for Computational Linguistics 2020 Jiahua Chen, Shuai Wang, Sahisnu Mazumder, Bing Liu

Classifying and resolving coreferences of objects (e.g., product names) and attributes (e.g., product aspects) in opinionated reviews is crucial for improving the opinion mining performance.

Attribute Opinion Mining

Text Classification with Novelty Detection

no code implementations23 Sep 2020 Qi Qin, Wenpeng Hu, Bing Liu

In this paper, we propose a significantly more effective approach that converts the original problem to a pair-wise matching problem and then outputs how likely it is that two instances belong to the same class.

General Classification Novelty Detection +2
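The pair-wise matching idea above can be sketched concretely: a test instance is flagged as novel if it does not match any seen-class example strongly. This is a hedged toy illustration, with a cosine-similarity stand-in for the paper's trained matching network; all names and thresholds below are assumptions:

```python
import numpy as np

def pair_match_prob(x, y):
    # Stand-in for a trained matching network: squash the cosine
    # similarity of two feature vectors into (0, 1).
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return 1.0 / (1.0 + np.exp(-5.0 * cos))

def is_novel(x, seen_examples, threshold=0.5):
    """Flag x as novel if it matches no seen-class example strongly."""
    best = max(pair_match_prob(x, s) for s in seen_examples)
    return best < threshold

seen = [np.array([1.0, 0.0]), np.array([0.9, 0.1])]
print(is_novel(np.array([1.0, 0.05]), seen))  # close to a seen class
print(is_novel(np.array([-1.0, 0.2]), seen))  # far from any seen class
```

The appeal of the pair-wise formulation is that the matcher is trained once on pairs and then applies to classes never seen during training.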

Lifelong Learning Dialogue Systems: Chatbots that Self-Learn On the Job

no code implementations22 Sep 2020 Bing Liu, Sahisnu Mazumder

Due to the huge amount of manual effort involved, they are difficult to scale and also tend to produce many errors owing to their limited ability to understand natural language and the limited knowledge in their KBs.

World Knowledge

Feature Projection for Improved Text Classification

no code implementations ACL 2020 Qi Qin, Wenpeng Hu, Bing Liu

In this paper, we propose a novel angle to further improve this representation learning, i.e., feature projection.

General Classification Representation Learning +4

User Memory Reasoning for Conversational Recommendation

no code implementations COLING 2020 Hu Xu, Seungwhan Moon, Honglei Liu, Pararth Shah, Bing Liu, Philip S. Yu

We study a conversational recommendation model which dynamically manages users' past (offline) preferences and current (online) requests through a structured and cumulative user memory knowledge graph, to allow for natural interactions and accurate recommendations.

Conversational Recommendation

DomBERT: Domain-oriented Language Model for Aspect-based Sentiment Analysis

1 code implementation Findings of the Association for Computational Linguistics 2020 Hu Xu, Bing Liu, Lei Shu, Philip S. Yu

This paper focuses on learning domain-oriented language models driven by end tasks, which aims to combine the worlds of both general-purpose language models (such as ELMo and BERT) and domain-specific language understanding.

Aspect-Based Sentiment Analysis Aspect-Based Sentiment Analysis (ABSA) +2

Computational Performance of a Germline Variant Calling Pipeline for Next Generation Sequencing

no code implementations1 Apr 2020 Jie Liu, Xiaotian Wu, Kai Zhang, Bing Liu, Renyi Bao, Xiao Chen, Yiran Cai, Yiming Shen, Xinjun He, Jun Yan, Weixing Ji

With the booming of next generation sequencing technology and its implementation in clinical practice and life science research, the need for faster and more efficient data analysis methods becomes pressing in the field of sequencing.

A Failure of Aspect Sentiment Classifiers and an Adaptive Re-weighting Solution

1 code implementation4 Nov 2019 Hu Xu, Bing Liu, Lei Shu, Philip S. Yu

Aspect-based sentiment classification (ASC) is an important task in fine-grained sentiment analysis. Deep supervised ASC approaches typically model this task as a pair-wise classification task that takes an aspect and a sentence containing the aspect and outputs the polarity of the aspect in that sentence.

General Classification Sentence +2

Building an Application Independent Natural Language Interface

no code implementations30 Oct 2019 Sahisnu Mazumder, Bing Liu, Shuai Wang, Sepideh Esmaeilpour

Traditional approaches to building natural language (NL) interfaces typically use a semantic parser to parse the user command and convert it to a logical form, which is then translated to an executable action in an application.

Analyzing the Forgetting Problem in the Pretrain-Finetuning of Dialogue Response Models

no code implementations16 Oct 2019 Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, Fuchun Peng

We find that mix-review effectively regularizes the finetuning process, and the forgetting problem is alleviated to some extent.

Decoder Response Generation +2

Learning from Positive and Unlabeled Data with Adversarial Training

no code implementations25 Sep 2019 Wenpeng Hu, Ran Le, Bing Liu, Feng Ji, Haiqing Chen, Dongyan Zhao, Jinwen Ma, Rui Yan

Positive-unlabeled (PU) learning learns a binary classifier using only positive and unlabeled examples without labeled negative examples.
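The PU setting described above can be made concrete with a simple baseline: treat the unlabeled set as down-weighted noisy negatives. This is a hedged sketch of a common PU baseline, not the paper's adversarial-training method; the weighting scheme and all names are assumptions:

```python
import numpy as np

def train_pu_linear(pos, unlabeled, lr=0.1, epochs=200, unlabeled_weight=0.5):
    """Logistic regression treating unlabeled data as down-weighted
    negatives -- a simple PU baseline."""
    X = np.vstack([pos, unlabeled])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(unlabeled))])
    w = np.concatenate([np.ones(len(pos)),
                        unlabeled_weight * np.ones(len(unlabeled))])
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        theta -= lr * (X.T @ (w * (p - y))) / len(y)  # weighted gradient step
    return theta

rng = np.random.default_rng(1)
pos = rng.normal(2.0, 1.0, size=(100, 2))
unl = np.vstack([rng.normal(2.0, 1.0, size=(30, 2)),    # hidden positives
                 rng.normal(-2.0, 1.0, size=(70, 2))])  # true negatives
theta = train_pu_linear(pos, unl)
score = 1.0 / (1.0 + np.exp(-np.array([2.0, 2.0]) @ theta))
print(score > 0.5)  # a point near the positive cluster scores high
```

Down-weighting softens the damage done by the hidden positives inside the unlabeled set, which is the core difficulty that more sophisticated PU methods (including adversarial ones) address directly.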

Continual Learning via Principal Components Projection

no code implementations25 Sep 2019 Gyuhak Kim, Bing Liu

The idea is that in learning a new task, if we can ensure that the gradient updates will only occur in the orthogonal directions to the input vectors of the previous tasks, then the weight updates for learning the new task will not affect the previous tasks.

Continual Learning
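The snippet above describes constraining gradient updates to directions orthogonal to previous tasks' input vectors. A minimal NumPy sketch of that projection step (an illustration of the general idea, not the paper's full PCA-based method; all names are assumptions):

```python
import numpy as np

def project_orthogonal(grad, prev_inputs):
    """Project a gradient onto the orthogonal complement of the
    subspace spanned by previous tasks' input vectors."""
    # Orthonormal basis of the previous-task input subspace (via SVD).
    U, _, _ = np.linalg.svd(prev_inputs.T, full_matrices=False)
    # Remove the component of the gradient lying in that subspace.
    return grad - U @ (U.T @ grad)

# Toy check: the projected gradient is orthogonal to every old input.
prev = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # two old inputs in R^3
g = np.array([3.0, 4.0, 5.0])
g_proj = project_orthogonal(g, prev)
print(np.round(g_proj, 6))  # only the third component survives
```

Because the projected update has zero component along old inputs, the network's outputs on those inputs are (to first order) unchanged, which is exactly the forgetting-prevention argument in the snippet.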

Learning with Noisy Labels for Sentence-level Sentiment Classification

no code implementations IJCNLP 2019 Hao Wang, Bing Liu, Chaozhuo Li, Yan Yang, Tianrui Li

We propose a novel DNN model called NetAb (as shorthand for convolutional neural Networks with Ab-networks) to handle noisy labels during training.

Classification General Classification +4

Modeling Multi-Action Policy for Task-Oriented Dialogues

1 code implementation IJCNLP 2019 Lei Shu, Hu Xu, Bing Liu, Piero Molino

Dialogue management (DM) plays a key role in the quality of the interaction with the user in a task-oriented dialogue system.

Dialogue Management Management

Flexibly-Structured Model for Task-Oriented Dialogues

1 code implementation WS 2019 Lei Shu, Piero Molino, Mahdi Namazifar, Hu Xu, Bing Liu, Huaixiu Zheng, Gokhan Tur

It is based on a simple and practical yet very effective sequence-to-sequence approach, where language understanding and state tracking tasks are modeled jointly with a structured copy-augmented sequential decoder and a multi-label decoder for each slot.

Decoder Task-Oriented Dialogue Systems +1

Lifelong and Interactive Learning of Factual Knowledge in Dialogues

no code implementations WS 2019 Sahisnu Mazumder, Bing Liu, Shuai Wang, Nianzu Ma

Dialogue systems are increasingly using knowledge bases (KBs) storing real-world facts to help generate quality responses.

Forward and Backward Knowledge Transfer for Sentiment Classification

no code implementations8 Jun 2019 Hao Wang, Bing Liu, Shuai Wang, Nianzu Ma, Yan Yang

That is, it is possible to improve the NB classifier for a task by directly improving its model parameters using the retained knowledge from other tasks.

Classification General Classification +3

DOER: Dual Cross-Shared RNN for Aspect Term-Polarity Co-Extraction

1 code implementation ACL 2019 Huaishao Luo, Tianrui Li, Bing Liu, Junbo Zhang

This paper focuses on two related subtasks of aspect-based sentiment analysis, namely aspect term extraction and aspect sentiment classification, which we call aspect term-polarity co-extraction.

Aspect-Based Sentiment Analysis Aspect-Based Sentiment Analysis (ABSA) +3

GSN: A Graph-Structured Network for Multi-Party Dialogues

1 code implementation31 May 2019 Wenpeng Hu, Zhangming Chan, Bing Liu, Dongyan Zhao, Jinwen Ma, Rui Yan

Existing neural models for dialogue response generation assume that utterances are sequentially organized.

Response Generation

Spectral Perturbation Meets Incomplete Multi-view Data

no code implementations31 May 2019 Hao Wang, Linlin Zong, Bing Liu, Yan Yang, Wei Zhou

In this work, we show a strong link between perturbation risk bounds and incomplete multi-view clustering.

Clustering Incomplete multi-view clustering +1