no code implementations • SIGDIAL (ACL) 2021 • Itika Gupta, Barbara Di Eugenio, Brian D. Ziebart, Bing Liu, Ben S. Gerber, Lisa K. Sharp
In this paper, we present our work towards assisting health coaches by extracting the physical activity goal the user and coach negotiate via text messages.
no code implementations • COLING 2022 • Yue Zhou, Barbara Di Eugenio, Brian Ziebart, Lisa Sharp, Bing Liu, Ben Gerber, Nikolaos Agadakos, Shweta Yadav
In this paper, we propose to build a dialogue system that converses with the patients, helps them create and accomplish specific goals, and can address their emotions with empathy.
1 code implementation • EMNLP 2021 • Nianzu Ma, Alexander Politowicz, Sahisnu Mazumder, Jiahua Chen, Bing Liu, Eric Robertson, Scott Grigsby
This paper proposes to study a fine-grained semantic novelty detection task, which can be illustrated with the following example.
no code implementations • SIGDIAL (ACL) 2020 • Itika Gupta, Barbara Di Eugenio, Brian Ziebart, Aiswarya Baiju, Bing Liu, Ben Gerber, Lisa Sharp, Nadia Nabulsi, Mary Smart
In this paper, we discuss these schemas and briefly talk about their application for automatically extracting activity goals and annotating the second round of data, collected with different health coaches and patients.
1 code implementation • 26 Sep 2023 • Haowei Lin, Yijia Shao, Weinan Qian, Ningxin Pan, Yiduo Guo, Bing Liu
An emerging, theoretically justified and effective approach is to train a task-specific model for each task in a shared network for all tasks, based on a task-incremental learning (TIL) method, so as to deal with forgetting.
no code implementations • 4 Sep 2023 • Danqing Hu, Bing Liu, Xiaofeng Zhu, Xudong Lu, Nan Wu
Information extraction is the strategy of transforming sequences of characters into structured data, which can then be employed for secondary analysis.
no code implementations • 25 Jul 2023 • Zhiwen Shao, Yuchen Su, Yong Zhou, Fanrong Meng, Hancheng Zhu, Bing Liu, Rui Yao
Contour-based scene text detection methods have developed rapidly in recent years, but they still suffer from inaccurate frontend contour initialization, multi-stage error accumulation, or deficient local information aggregation.
1 code implementation • 26 Jun 2023 • Tatsuya Konishi, Mori Kurokawa, Chihiro Ono, Zixuan Ke, Gyuhak Kim, Bing Liu
Although several techniques have achieved learning with no CF, they attain it by letting each task monopolize a sub-network in a shared network, which seriously limits knowledge transfer (KT) and causes over-consumption of the network capacity, i.e., as more tasks are learned, the performance deteriorates.
1 code implementation • 22 Jun 2023 • Yijia Shao, Yiduo Guo, Dongyan Zhao, Bing Liu
Despite the great success of pre-trained language models, it is still a challenge to use these models for continual learning, especially for the class-incremental learning (CIL) setting due to catastrophic forgetting (CF).
1 code implementation • 22 Jun 2023 • Gyuhak Kim, Changnan Xiao, Tatsuya Konishi, Bing Liu
This paper shows that CIL is learnable.
1 code implementation • CVPR 2023 • Yiduo Guo, Bing Liu, Dongyan Zhao
A novel optimization objective with a gradient-based adaptive method is proposed to dynamically deal with the problem in the online CL process.
1 code implementation • 24 May 2023 • Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, Lidong Bing
This paper aims to provide a comprehensive investigation into the capabilities of LLMs in performing various sentiment analysis tasks, from conventional sentiment classification to aspect-based sentiment analysis and multifaceted analysis of subjective texts.
no code implementations • 20 May 2023 • Bing Liu, Wei Luo, Gang Li, Jing Huang, Bo Yang
As deep learning gains popularity in modelling dynamical systems, we expose an underappreciated misunderstanding relevant to modelling dynamics on networks.
no code implementations • 19 May 2023 • Yiduo Guo, Yaobo Liang, Dongyan Zhao, Bing Liu, Duan Nan
Existing research has shown that a multilingual pre-trained language model fine-tuned with one (source) language also performs well on downstream tasks for non-source languages, even though no fine-tuning is done on these languages.
no code implementations • 8 May 2023 • Neeraj Varshney, Himanshu Gupta, Eric Robertson, Bing Liu, Chitta Baral
To initiate a systematic research in this important area of 'dealing with novelties', we introduce 'NoveltyTask', a multi-stage task to evaluate a system's performance on pipelined novelty 'detection' and 'accommodation' tasks.
no code implementations • 20 Apr 2023 • Gyuhak Kim, Changnan Xiao, Tatsuya Konishi, Zixuan Ke, Bing Liu
The key theoretical result is that, regardless of whether WP and OOD detection (or TP) are defined explicitly or implicitly by a CIL algorithm, good WP and good OOD detection are necessary and sufficient conditions for good CIL. This unifies novelty/OOD detection and continual learning (CIL in particular).
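A minimal sketch of this WP × TP decomposition (our illustration, not the paper's code; the models and scoring functions are hypothetical stand-ins):

```python
import torch

def cil_predict(x, task_models, task_scores):
    """Compose within-task prediction (WP) with task prediction (TP):
    p(class c of task t | x) = p(c | x, t) * p(t | x), where p(t | x)
    is derived from per-task OOD/in-distribution scores."""
    tp = torch.softmax(torch.stack([s(x) for s in task_scores]), dim=0)  # p(t | x)
    joint = []
    for t, model in enumerate(task_models):
        wp = torch.softmax(model(x), dim=-1)   # within-task prediction p(c | x, t)
        joint.append(wp * tp[t])               # joint probability over (t, c)
    return torch.cat(joint)                    # distribution over all classes seen so far

# toy usage: two 2-class tasks, 4-dim input; the scores are stand-in OOD detectors
x = torch.randn(4)
models = [torch.nn.Linear(4, 2), torch.nn.Linear(4, 2)]
scores = [lambda v: -v.norm(), lambda v: -(v - 1.0).norm()]
print(cil_predict(x, models, scores))          # sums to 1 over the 4 classes
```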
no code implementations • 16 Mar 2023 • Hao Liu, Xin Li, Mingming Gong, Bing Liu, Yunfei Wu, Deqiang Jiang, Yinsong Liu, Xing Sun
Recently, the Table Structure Recognition (TSR) task, which aims to identify table structure and convert it into machine-readable formats, has received increasing interest in the community.
2 code implementations • 7 Feb 2023 • Zixuan Ke, Yijia Shao, Haowei Lin, Tatsuya Konishi, Gyuhak Kim, Bing Liu
A novel proxy is also proposed to preserve the general knowledge in the original LM.
Ranked #1 on Continual Pretraining on SciERC
2 code implementations • 21 Jan 2023 • Zixuan Ke, Yijia Shao, Haowei Lin, Hu Xu, Lei Shu, Bing Liu
This paper shows that the existing methods are suboptimal and proposes a novel method to perform a more informed adaptation of the knowledge in the LM by (1) soft-masking the attention heads based on their importance to best preserve the general knowledge in the LM and (2) contrasting the representations of the general and the full (both general and domain knowledge) to learn an integrated representation with both general and domain-specific knowledge.
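A minimal sketch of the gradient soft-masking idea, assuming per-head importance scores in [0, 1] have already been computed (the grouping of parameters per head and the importance scores are our assumptions, not the paper's code):

```python
import torch

def soft_mask_attention_grads(attention_heads, importance):
    """Soft-mask gradients per attention head: heads important for general
    knowledge (importance near 1) are protected from change, while less
    important heads remain free to absorb domain-specific knowledge."""
    for params, imp in zip(attention_heads, importance):
        for p in params:
            if p.grad is not None:
                p.grad.mul_(1.0 - imp)   # scale rather than zero: a *soft* mask

# usage: call after loss.backward() and before optimizer.step()
```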
1 code implementation • 10 Dec 2022 • Lei Ding, Jing Zhang, Kai Zhang, Haitao Guo, Bing Liu, Lorenzo Bruzzone
Semantic Change Detection (SCD) refers to the task of simultaneously extracting the changed areas and the semantic categories (before and after the changes) in Remote Sensing Images (RSIs).
1 code implementation • 29 Nov 2022 • Bing Liu, Tiancheng Lan, Wen Hua, Guido Zuccon
Entity Alignment (EA), which aims to detect entity mappings (i.e., equivalent entity pairs) in different Knowledge Graphs (KGs), is critical for KG fusion.
1 code implementation • 29 Nov 2022 • Bing Liu, Harrisen Scells, Wen Hua, Guido Zuccon, Genghong Zhao, Xia Zhang
Making compatible predictions thus should be one of the goals of training an EA model along with fitting the labelled data: this aspect however is neglected in current methods.
1 code implementation • 23 Nov 2022 • Zixuan Ke, Bing Liu
Continual learning (CL) is a learning paradigm that emulates the human capability of learning and accumulating knowledge continually without forgetting previously learned knowledge, while also transferring that knowledge to help learn new tasks better.
no code implementations • 12 Nov 2022 • Sahisnu Mazumder, Bing Liu
This book introduces the new paradigm of lifelong learning dialogue systems, which endows chatbots with the ability to learn continually by themselves, through their own self-initiated interactions with their users and working environments, so that they improve over time.
1 code implementation • 4 Nov 2022 • Gyuhak Kim, Changnan Xiao, Tatsuya Konishi, Zixuan Ke, Bing Liu
Continual learning (CL) learns a sequence of tasks incrementally.
1 code implementation • 31 Oct 2022 • Nianzu Ma, Sahisnu Mazumder, Alexander Politowicz, Bing Liu, Eric Robertson, Scott Grigsby
Much of the existing work on text novelty detection has been studied at the topic level, i.e., identifying whether the topic of a document or a sentence is novel or not.
no code implementations • 26 Oct 2022 • Sahisnu Mazumder, Bing Liu, Shuai Wang, Yingxuan Zhu, Xiaotian Yin, Lifeng Liu, Jian Li
This paper proposes a new method to drastically speed up deep reinforcement learning (deep RL) training for problems that have the property of state-action permissibility (SAP).
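A sketch of how an SAP-style permissibility predictor could guide action selection (our illustration under assumed names; the `is_permissible` predictor is hypothetical):

```python
import random

def select_action(q_values, state, is_permissible, epsilon=0.1):
    """Exclude actions the permissibility predictor rules out for `state`
    from both exploration and exploitation, shrinking the search space."""
    allowed = [a for a in range(len(q_values)) if is_permissible(state, a)]
    if not allowed:                       # fail-safe if everything is ruled out
        allowed = list(range(len(q_values)))
    if random.random() < epsilon:         # epsilon-greedy over permissible actions only
        return random.choice(allowed)
    return max(allowed, key=lambda a: q_values[a])
```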
3 code implementations • 11 Oct 2022 • Zixuan Ke, Haowei Lin, Yijia Shao, Hu Xu, Lei Shu, Bing Liu
Recent work applying large language models (LMs) has achieved impressive performance in many NLP applications.
Ranked #1 on Continual Pretraining on AG News
1 code implementation • 22 Aug 2022 • Bing Liu, Wen Hua, Guido Zuccon, Genghong Zhao, Xia Zhang
To include in the EA subtasks a high proportion of the potential mappings originally present in the large EA task, we devise a counterpart discovery method that exploits the locality principle of the EA task and the power of trained EA models.
3 code implementations • 20 Aug 2022 • Gyuhak Kim, Zixuan Ke, Bing Liu
Instead of using the saved samples in memory to update the network for previous tasks/classes as in existing approaches, MORE leverages the saved samples to build a task-specific classifier (adding a new classification head) without updating the network learned for previous tasks/classes.
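A simplified sketch of that idea (our reading, not the paper's code; feature dimensions and the training loop are assumptions): replay samples from earlier tasks train only the new head's extra out-of-distribution logit, and the shared backbone is never touched.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_task_head(feats_new, y_new, feats_replay, n_classes, d=128, steps=100):
    """Build a task-specific head: the new task's classes plus one extra
    'OOD' logit (index n_classes) trained on replay samples from earlier
    tasks, treated as out-of-distribution."""
    head = nn.Linear(d, n_classes + 1)
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    ood_y = torch.full((len(feats_replay),), n_classes, dtype=torch.long)
    x = torch.cat([feats_new, feats_replay])
    y = torch.cat([y_new, ood_y])
    for _ in range(steps):                # note: the shared backbone is never updated
        opt.zero_grad()
        F.cross_entropy(head(x), y).backward()
        opt.step()
    return head
```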
no code implementations • 3rd Conversational AI Workshop at 33rd Conference on Neural Information Processing Systems (NeurIPS 2019) 2019 • Jorge A. Mendez, Alborz Geramifard, Mohammad Ghavamzadeh, Bing Liu
Learning task-oriented dialog policies via reinforcement learning typically requires large amounts of interaction with users, which in practice renders such methods unusable for real-world applications.
no code implementations • 27 Jun 2022 • Yuchen Su, Zhiwen Shao, Yong Zhou, Fanrong Meng, Hancheng Zhu, Bing Liu, Rui Yao
Arbitrary-shaped scene text detection is a challenging task due to the wide variation of text in font, size, color, and orientation.
1 code implementation • 3 Jun 2022 • Reinald Kim Amplayo, Arthur Bražinskas, Yoshi Suhara, Xiaolan Wang, Bing Liu
In this tutorial, we present various aspects of opinion summarization that are useful for researchers and practitioners.
no code implementations • Findings (NAACL) 2022 • Zhiyu Chen, Bing Liu, Seungwhan Moon, Chinnadhurai Sankar, Paul Crook, William Yang Wang
We also propose two new models, SimpleToDPlus and Combiner, for the proposed task.
1 code implementation • IEEE Transactions on Image Processing 2022 • Kuiliang Gao, Bing Liu, Xuchu Yu, Anzhu Yu
However, existing methods based on meta learning still need to construct a labeled source data set from several pre-collected HSIs, and must utilize a large number of labeled samples for meta-training, which is time-consuming and labor-intensive.
no code implementations • 24 Mar 2022 • Sepideh Esmaeilpour, Lei Shu, Bing Liu
In many practical scenarios, this is not the case because there are unknowns or unseen class samples in the test data, which is called the open set scenario, and the unknowns need to be detected.
1 code implementation • 17 Mar 2022 • Gyuhak Kim, Sepideh Esmaeilpour, Changnan Xiao, Bing Liu
Existing continual learning techniques focus on either task incremental learning (TIL) or class incremental learning (CIL) problem, but not both.
no code implementations • 17 Mar 2022 • Bing Liu, Sahisnu Mazumder, Eric Robertson, Scott Grigsby
As more and more AI agents are used in practice, it is time to think about how to make these agents fully autonomous so that they can (1) learn by themselves continually in a self-motivated and self-initiated manner rather than being retrained offline periodically on the initiation of human engineers and (2) accommodate or adapt to unexpected or novel circumstances.
1 code implementation • 12 Mar 2022 • Kexuan Xin, Zequn Sun, Wen Hua, Bing Liu, Wei Hu, Jianfeng Qu, Xiaofang Zhou
We also design a conflict resolution mechanism to resolve the alignment conflict when combining the new alignment of an aligner and that from its teacher.
no code implementations • 4 Feb 2022 • Lei Shu, Hu Xu, Bing Liu, Jiahua Chen
Aspect-based sentiment analysis (ABSA) typically requires in-domain annotated data for supervised training/fine-tuning.
1 code implementation • CVPR 2022 • Bing Liu, Dong Wang, Xu Yang, Yong Zhou, Rui Yao, Zhiwen Shao, Jiaqi Zhao
In the encoding stage, the IOD is able to disentangle the region-based visual features by deconfounding the visual confounder.
2 code implementations • NeurIPS 2020 • Zixuan Ke, Bing Liu, Xingchang Huang
To the best of our knowledge, no technique has been proposed to learn a sequence of mixed similar and dissimilar tasks that can deal with forgetting and also transfer knowledge forward and backward.
Ranked #1 on Continual Learning on F-CelebA (10 tasks)
2 code implementations • 18 Dec 2021 • Zixuan Ke, Bing Liu, Hao Wang, Lei Shu
In this setting, the CL system learns a sequence of SC tasks incrementally in a neural network, where each task builds a classifier to classify the sentiment of reviews of a particular product category or domain.
Ranked #4 on Continual Learning on DSC (10 tasks)
1 code implementation • NAACL 2021 • Zixuan Ke, Hu Xu, Bing Liu
This paper studies continual learning (CL) of a sequence of aspect sentiment classification (ASC) tasks.
Ranked #3 on Continual Learning on ASC (19 tasks)
1 code implementation • EMNLP 2021 • Zixuan Ke, Bing Liu, Hu Xu, Lei Shu
The key novelty is a contrastive continual learning method that enables both knowledge transfer across tasks and knowledge distillation from old tasks to the new task, which eliminates the need for task ids in testing.
1 code implementation • NeurIPS 2021 • Zixuan Ke, Bing Liu, Nianzu Ma, Hu Xu, Lei Shu
Although several papers have tried to deal with both CF and KT, our experiments show that they suffer from serious CF when the tasks do not have much shared knowledge.
Ranked #1 on Continual Learning on DSC (10 tasks)
no code implementations • NeurIPS 2021 • Qi Qin, Wenpeng Hu, Han Peng, Dongyan Zhao, Bing Liu
Continual learning (CL) of a sequence of tasks is often accompanied by the catastrophic forgetting (CF) problem.
no code implementations • CVPR 2022 • Hao Liu, Xin Li, Bing Liu, Deqiang Jiang, Yinsong Liu, Bo Ren
We also show that the proposed NCGM can modulate collaborative pattern of different modalities conditioned on the context of intra-modality cues, which is vital for diversified table cases.
Ranked #5 on Table Recognition on PubTabNet
no code implementations • 19 Nov 2021 • Yanni Li, Bing Liu, Kaicheng Yao, Xiaoli Kou, Pengfan Lv, Yueshen Xu, Jiangtao Cui
What is the upper bound on the number of tasks that a given CL method can learn sequentially?
no code implementations • 21 Oct 2021 • Bing Liu, Eric Robertson, Scott Grigsby, Sahisnu Mazumder
As more and more AI agents are used in practice, it is time to think about how to make these agents fully autonomous so that they can learn by themselves in a self-motivated and self-supervised manner rather than being retrained periodically on the initiation of human engineers using expanded training data.
1 code implementation • EMNLP 2021 • Bing Liu, Harrisen Scells, Guido Zuccon, Wen Hua, Genghong Zhao
Entity Alignment (EA) aims to match equivalent entities across different Knowledge Graphs (KGs) and is an essential step of KG fusion.
no code implementations • 29 Sep 2021 • Mengyu Wang, Yijia Shao, Haowei Lin, Wenpeng Hu, Bing Liu
Recently, contrastive loss with data augmentation and pseudo class creation has been shown to produce markedly better results for out-of-distribution (OOD) detection than previous methods.
no code implementations • 29 Sep 2021 • Yiduo Guo, Dongyan Zhao, Bing Liu
Most existing techniques for online continual learning are based on experience-replay.
no code implementations • 29 Sep 2021 • Gyuhak Kim, Sepideh Esmaeilpour, Zixuan Ke, Tatsuya Konishi, Bing Liu
PLS is not only simple and efficient but also does not invade data privacy, because it works in the latent feature space.
no code implementations • 29 Sep 2021 • Tatsuya Konishi, Mori Kurokawa, Roberto Legaspi, Chihiro Ono, Zixuan Ke, Gyuhak Kim, Bing Liu
The goal of this work is to endow such systems with the additional ability to transfer knowledge among tasks when the tasks are similar and have shared knowledge to achieve higher accuracy.
1 code implementation • EMNLP 2021 • Zhaojiang Lin, Bing Liu, Andrea Madotto, Seungwhan Moon, Paul Crook, Zhenpeng Zhou, Zhiguang Wang, Zhou Yu, Eunjoon Cho, Rajen Subba, Pascale Fung
Zero-shot transfer learning for dialogue state tracking (DST) enables us to handle a variety of task-oriented dialogue domains without the expense of collecting in-domain data.
1 code implementation • 6 Sep 2021 • Sepideh Esmaeilpour, Bing Liu, Eric Robertson, Lei Shu
In an out-of-distribution (OOD) detection problem, samples of known classes (also called in-distribution classes) are used to train a special classifier.
Out-of-Distribution Detection
1 code implementation • ACL 2021 • Xuepeng Wang, Li Zhao, Bing Liu, Tao Chen, Feng Zhang, Di Wang
In this paper, we propose a novel concept-based label embedding method that can explicitly represent the concept and model the sharing mechanism among classes for the hierarchical text classification.
1 code implementation • NAACL 2021 • Zhaojiang Lin, Bing Liu, Seungwhan Moon, Paul Crook, Zhenpeng Zhou, Zhiguang Wang, Zhou Yu, Andrea Madotto, Eunjoon Cho, Rajen Subba
Zero-shot cross-domain dialogue state tracking (DST) enables us to handle unseen domains without the expense of collecting in-domain data.
2 code implementations • 10 May 2021 • Zhaojiang Lin, Bing Liu, Seungwhan Moon, Paul Crook, Zhenpeng Zhou, Zhiguang Wang, Zhou Yu, Andrea Madotto, Eunjoon Cho, Rajen Subba
Zero-shot cross-domain dialogue state tracking (DST) enables us to handle task-oriented dialogue in unseen domains without the expense of collecting in-domain data.
no code implementations • EACL 2021 • Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, Fuchun Peng
We find that mix-review effectively regularizes the finetuning process, and the forgetting problem is alleviated to some extent.
no code implementations • 1 Jan 2021 • Alexander Politowicz, Bing Liu
Automatic reward shaping is one approach to solving this problem: it automatically identifies and modulates shaping reward signals that are more informative about how agents should behave in any given scenario, so that agents learn and adapt faster.
1 code implementation • EMNLP 2021 • Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul Crook, Bing Liu, Zhou Yu, Eunjoon Cho, Zhiguang Wang
Continual learning in task-oriented dialogue systems can allow us to add new domains and functionalities through time without incurring the high cost of a whole system retraining.
no code implementations • 9 Dec 2020 • Bing Liu, Yu Tang, Yuxiong Ji, Yu Shen, Yuchuan Du
Ramp metering, which uses traffic signals to regulate vehicle flows from on-ramps, has been widely implemented to improve vehicle mobility on freeways.
no code implementations • COLING 2020 • Wenpeng Hu, Ran Le, Bing Liu, Jinwen Ma, Dongyan Zhao, Rui Yan
Understanding neural models is a major topic of interest in the deep learning community.
1 code implementation • NeurIPS 2020 • Wenpeng Hu, Mengyu Wang, Qi Qin, Jinwen Ma, Bing Liu
Existing neural network based one-class learning methods mainly use various forms of auto-encoders or GAN style adversarial training to learn a latent representation of the given one class of data.
no code implementations • COLING 2020 • Hao Wang, Shuai Wang, Sahisnu Mazumder, Bing Liu, Yan Yang, Tianrui Li
After each sentiment classification task is learned, its knowledge is retained to help future task learning.
1 code implementation • 25 Nov 2020 • Anzhu Yu, Wenyue Guo, Bing Liu, Xin Chen, Xin Wang, Xuefeng Cao, Bingchuan Jiang
This strategy estimates the depth map at the coarsest level, while the depth maps at finer levels are obtained by upsampling the depth map from the previous level and adding a pixel-wise depth residual.
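A minimal sketch of one such coarse-to-fine refinement step (our illustration; `residual_net` is a hypothetical network assumed to output a one-channel residual at the finer resolution):

```python
import torch
import torch.nn.functional as F

def refine_depth(prev_depth, features, residual_net):
    """Upsample the previous level's depth map and add a predicted
    pixel-wise depth residual at the finer resolution."""
    up = F.interpolate(prev_depth, scale_factor=2, mode="bilinear",
                       align_corners=False)              # (B, 1, 2H, 2W)
    return up + residual_net(torch.cat([up, features], dim=1))
```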
no code implementations • 19 Nov 2020 • Bing Liu, Chuhe Mei
One of the main weaknesses of current chatbots or dialogue systems is that they do not learn online during conversations after they are deployed.
1 code implementation • 10 Nov 2020 • Lei Ding, Kai Zheng, Dong Lin, Yuxing Chen, Bing Liu, Jiansheng Li, Lorenzo Bruzzone
This CNN architecture can be used as a baseline method for future studies on the semantic segmentation of PolSAR images.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Qi Qin, Wenpeng Hu, Bing Liu
It proposes a new lifelong learning model (called L2PG) that can retain and selectively transfer the knowledge learned in the past to help learn the new task.
2 code implementations • COLING 2020 • Hu Xu, Lei Shu, Philip S. Yu, Bing Liu
Most features in the representation of an aspect are dedicated to the fine-grained semantics of the domain (or product category) and the aspect itself, instead of carrying summarized opinions from its context.
Aspect-Based Sentiment Analysis (ABSA) • Language Modelling
1 code implementation • NAACL 2021 • Kai Sun, Seungwhan Moon, Paul Crook, Stephen Roller, Becka Silvert, Bing Liu, Zhiguang Wang, Honglei Liu, Eunjoon Cho, Claire Cardie
Existing dialogue corpora and models are typically designed under two disjoint motives: while task-oriented systems focus on achieving functional goals (e.g., booking hotels), open-domain chatbots aim at making socially engaging conversations.
1 code implementation • Findings (EMNLP) 2021 • Zhiyu Chen, Honglei Liu, Hu Xu, Seungwhan Moon, Hao Zhou, Bing Liu
As there is no clean mapping for a user's free form utterance to an ontology, we first model the user preferences as estimated distributions over the system ontology and map the users' utterances to such distributions.
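One simple way such a mapping could be realized (an assumption on our part, not the paper's exact model): score each ontology value against the utterance embedding and normalize with a softmax.

```python
import torch

def preference_distribution(utterance_emb, value_embs, temperature=0.1):
    """Map a free-form utterance to an estimated distribution over
    ontology values via softmax on embedding similarity."""
    sims = value_embs @ utterance_emb          # (num_values,) similarity scores
    return torch.softmax(sims / temperature, dim=0)

# toy usage: 5 ontology values, 16-dim embeddings
dist = preference_distribution(torch.randn(16), torch.randn(5, 16))
print(dist, dist.sum())                        # a proper distribution (sums to 1)
```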
no code implementations • Findings of the Association for Computational Linguistics 2020 • Jiahua Chen, Shuai Wang, Sahisnu Mazumder, Bing Liu
Classifying and resolving coreferences of objects (e.g., product names) and attributes (e.g., product aspects) in opinionated reviews is crucial for improving the opinion mining performance.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Lei Shu, Alexandros Papangelis, Yi-Chia Wang, Gokhan Tur, Hu Xu, Zhaleh Feizollahi, Bing Liu, Piero Molino
This work introduces Focused-Variation Network (FVN), a novel model to control language generation.
no code implementations • 23 Sep 2020 • Qi Qin, Wenpeng Hu, Bing Liu
In this paper, we propose a significantly more effective approach that converts the original problem to a pair-wise matching problem and then outputs how probable two instances belong to the same class.
no code implementations • 22 Sep 2020 • Bing Liu, Sahisnu Mazumder
Due to the huge amount of manual effort involved, they are difficult to scale and also tend to produce many errors, owing to their limited ability to understand natural language and the limited knowledge in their KBs.
no code implementations • 1 Sep 2020 • Bing Liu, Anzhu Yu, Pengqiang Zhang, Lei Ding, Wenyue Guo, Kuiliang Gao, Xibing Zuo
First, a deep densely connected convolutional network is considered for hyperspectral image classification.
no code implementations • ACL 2020 • Qi Qin, Wenpeng Hu, Bing Liu
In this paper, we propose a novel angle to further improve this representation learning, i.e., feature projection.
no code implementations • ACL 2020 • Nianzu Ma, Sahisnu Mazumder, Hao Wang, Bing Liu
This paper studies the task of comparative preference classification (CPC).
no code implementations • COLING 2020 • Hu Xu, Seungwhan Moon, Honglei Liu, Pararth Shah, Bing Liu, Philip S. Yu
We study a conversational recommendation model which dynamically manages users' past (offline) preferences and current (online) requests through a structured and cumulative user memory knowledge graph, to allow for natural interactions and accurate recommendations.
no code implementations • Findings (ACL) 2021 • Shuai Wang, Guangyi Lv, Sahisnu Mazumder, Bing Liu
We refer to this problem as domain polarity-changes of words.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Hu Xu, Bing Liu, Lei Shu, Philip S. Yu
This paper focuses on learning domain-oriented language models driven by end tasks, which aims to combine the worlds of both general-purpose language models (such as ELMo and BERT) and domain-specific language understanding.
no code implementations • 1 Apr 2020 • Jie Liu, Xiaotian Wu, Kai Zhang, Bing Liu, Renyi Bao, Xiao Chen, Yiran Cai, Yiming Shen, Xinjun He, Jun Yan, Weixing Ji
With the boom of next-generation sequencing technology and its implementation in clinical practice and life science research, the need for faster and more efficient data analysis methods has become pressing in the field of sequencing.
1 code implementation • COLING 2020 • Wenpeng Hu, Mengyu Wang, Bing Liu, Feng Ji, Haiqing Chen, Dongyan Zhao, Jinwen Ma, Rui Yan
The key idea of the proposed approach is to use a Forward Transformation to transform dense representations to sparse representations.
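One plausible way to realize a dense-to-sparse forward transformation (an assumption on our part, not the paper's exact operator): keep the top-k activations, zero out the rest, and clamp to non-negative values.

```python
import torch

def forward_transform(dense, k=32):
    """Transform a dense representation into a sparse one: retain the
    k largest activations (rectified), zero everything else."""
    vals, idx = dense.topk(min(k, dense.size(-1)), dim=-1)
    sparse = torch.zeros_like(dense)
    return sparse.scatter_(-1, idx, torch.relu(vals))
```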
1 code implementation • 4 Nov 2019 • Hu Xu, Bing Liu, Lei Shu, Philip S. Yu
Aspect-based sentiment classification (ASC) is an important task in fine-grained sentiment analysis. Deep supervised ASC approaches typically model this task as a pair-wise classification task that takes an aspect and a sentence containing the aspect and outputs the polarity of the aspect in that sentence.
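The pair-wise formulation can be sketched with an off-the-shelf sequence-pair classifier (illustrative only: the paper post-trains BERT first, and this vanilla `bert-base-uncased` head is untrained, so the outputs are meaningful only after fine-tuning on ASC data):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Pair-wise ASC: encode (aspect, sentence) as one sequence pair and classify
# the polarity of the aspect in that sentence.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)   # negative / neutral / positive

inputs = tok("battery life",
             "The battery life is great but the screen is too dim.",
             return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits.softmax(-1))  # meaningful only after fine-tuning
```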
no code implementations • 30 Oct 2019 • Sahisnu Mazumder, Bing Liu, Shuai Wang, Sepideh Esmaeilpour
Traditional approaches to building natural language (NL) interfaces typically use a semantic parser to parse the user command and convert it to a logical form, which is then translated to an executable action in an application.
no code implementations • 16 Oct 2019 • Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, Fuchun Peng
We find that mix-review effectively regularizes the finetuning process, and the forgetting problem is alleviated to some extent.
no code implementations • 25 Sep 2019 • Gyuhak Kim, Bing Liu
The idea is that in learning a new task, if we can ensure that the gradient updates will only occur in the orthogonal directions to the input vectors of the previous tasks, then the weight updates for learning the new task will not affect the previous tasks.
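A minimal sketch of such an orthogonal-projection update (our illustration for a single layer's gradient vector; the stored-input bookkeeping is assumed):

```python
import torch

def orthogonal_update(grad, prev_inputs):
    """Project a gradient onto the orthogonal complement of the span of
    previous tasks' input vectors, so the update leaves outputs on those
    inputs unchanged."""
    A = torch.stack(prev_inputs, dim=1)        # (d, n): old inputs as columns
    Q, _ = torch.linalg.qr(A)                  # orthonormal basis of their span
    return grad - Q @ (Q.T @ grad)             # remove the in-span component

# toy check: the projected gradient is orthogonal to every stored input
g = torch.randn(8)
prev = [torch.randn(8) for _ in range(3)]
g_proj = orthogonal_update(g, prev)
print([float(g_proj @ v) for v in prev])       # all approximately zero
```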
no code implementations • 25 Sep 2019 • Wenpeng Hu, Ran Le, Bing Liu, Feng Ji, Haiqing Chen, Dongyan Zhao, Jinwen Ma, Rui Yan
Positive-unlabeled (PU) learning learns a binary classifier using only positive and unlabeled examples without labeled negative examples.
no code implementations • IJCNLP 2019 • Hao Wang, Bing Liu, Chaozhuo Li, Yan Yang, Tianrui Li
We propose a novel DNN model called NetAb (as shorthand for convolutional neural Networks with Ab-networks) to handle noisy labels during training.
1 code implementation • IJCNLP 2019 • Lei Shu, Hu Xu, Bing Liu, Piero Molino
Dialogue management (DM) plays a key role in the quality of the interaction with the user in a task-oriented dialogue system.
1 code implementation • WS 2019 • Lei Shu, Piero Molino, Mahdi Namazifar, Hu Xu, Bing Liu, Huaixiu Zheng, Gokhan Tur
It is based on a simple and practical yet very effective sequence-to-sequence approach, where language understanding and state tracking tasks are modeled jointly with a structured copy-augmented sequential decoder and a multi-label decoder for each slot.
no code implementations • WS 2019 • Sahisnu Mazumder, Bing Liu, Shuai Wang, Nianzu Ma
Dialogue systems are increasingly using knowledge bases (KBs) storing real-world facts to help generate quality responses.
no code implementations • 8 Jun 2019 • Hao Wang, Bing Liu, Shuai Wang, Nianzu Ma, Yan Yang
That is, it is possible to improve the NB classifier for a task by improving its model parameters directly by using the retained knowledge from other tasks.
1 code implementation • ACL 2019 • Huaishao Luo, Tianrui Li, Bing Liu, Junbo Zhang
This paper focuses on two related subtasks of aspect-based sentiment analysis, namely aspect term extraction and aspect sentiment classification, which we call aspect term-polarity co-extraction.
Aspect-Based Sentiment Analysis (ABSA) • Sentiment Classification
1 code implementation • 31 May 2019 • Wenpeng Hu, Zhangming Chan, Bing Liu, Dongyan Zhao, Jinwen Ma, Rui Yan
Existing neural models for dialogue response generation assume that utterances are sequentially organized.
no code implementations • 31 May 2019 • Hao Wang, Linlin Zong, Bing Liu, Yan Yang, Wei Zhou
In this work, we show a strong link between perturbation risk bounds and incomplete multi-view clustering.
no code implementations • 15 May 2019 • Lei Shu, Hu Xu, Bing Liu
The modified CNN has two types of control modules.
no code implementations • ICLR 2019 • Wenpeng Hu, Zhengwei Tao, Zhanxing Zhu, Bing Liu, Zhou Lin, Jinwen Ma, Dongyan Zhao, Rui Yan
A large amount of parallel data is needed to train a strong neural machine translation (NMT) system.
no code implementations • ICLR 2019 • Wenpeng Hu, Zhou Lin, Bing Liu, Chongyang Tao, Zhengwei Tao, Jinwen Ma, Dongyan Zhao, Rui Yan
Several continual learning methods have been proposed to address the problem.
no code implementations • 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019) • Dan Jin, Jian Xu, Kun Zhao, Fangzhou Hu, Zhengyi Yang, Bing Liu, Tianzi Jiang, Yong Liu
Modern advancements in deep learning provide a powerful framework for disease classification based on neuroimaging data.
1 code implementation • NAACL 2019 • Hu Xu, Bing Liu, Lei Shu, Philip S. Yu
Since ReviewRC has limited training examples for RRC (and also for aspect-based sentiment analysis), we then explore a novel post-training approach on the popular language model BERT to enhance the performance of fine-tuning of BERT for RRC.
1 code implementation • 3 Feb 2019 • Hu Xu, Bing Liu, Lei Shu, Philip S. Yu
Inspired by conversational reading comprehension (CRC), this paper studies a novel task of leveraging reviews as a source to build an agent that can answer multi-turn questions from potential consumers of online businesses.
no code implementations • 27 Sep 2018 • Sahisnu Mazumder, Bing Liu, Shuai Wang, Yingxuan Zhu, Xiaotian Yin, Lifeng Liu, Jian Li, Yongbing Huang
This paper proposes a new method to drastically speed up deep reinforcement learning (deep RL) training for problems that have the property of state-action permissibility (SAP).
1 code implementation • 17 Sep 2018 • Hu Xu, Bing Liu, Lei Shu, P. Yu
Classic supervised learning makes the closed-world assumption, meaning that classes seen in testing must have been seen in training.
no code implementations • COLING 2018 • Zhenni You, Tieyun Qian, Bing Liu
With the abundant attributes in existing entities and knowledge in other domains, we successfully solve the problem of data scarcity in the cold-start settings.
no code implementations • ACL 2018 • Shuai Wang, Sahisnu Mazumder, Bing Liu, Mianwei Zhou, Yi Chang
In MNs, attention mechanism plays a crucial role in detecting the sentiment context for the given target.
no code implementations • NAACL 2018 • Pararth Shah, Dilek Hakkani-Tür, Bing Liu, Gokhan Tür
End-to-end neural models show great promise towards building conversational agents that are trained from data and on-line experience using supervised and reinforcement learning.
no code implementations • NAACL 2018 • Bing Liu, Ian Lane
In this thesis proposal, we address the limitations of conventional pipeline design of task-oriented dialog systems and propose end-to-end learning solutions.
no code implementations • WS 2018 • Bing Liu, Ian Lane
We further discuss the covariate shift problem in online adversarial dialog learning and show how we can address that with partial access to user feedback.
1 code implementation • 25 May 2018 • Hu Xu, Bing Liu, Lei Shu, Philip S. Yu
Learning high-quality domain word embeddings is important for achieving good performance in many NLP tasks.
1 code implementation • 21 May 2018 • Huaishao Luo, Tianrui Li, Bing Liu, Bin Wang, Herwig Unger
The key idea is to explicitly incorporate both representations gained separately from the bottom-up and top-down propagation on the given dependency syntactic tree.
2 code implementations • ACL 2018 • Hu Xu, Bing Liu, Lei Shu, Philip S. Yu
Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings.
1 code implementation • NAACL 2018 • Bing Liu, Gokhan Tur, Dilek Hakkani-Tur, Pararth Shah, Larry Heck
To address this challenge, we propose a hybrid imitation and reinforcement learning method, with which a dialogue agent can effectively learn from its interaction with users by learning from human teaching and feedback.
no code implementations • 16 Feb 2018 • Shuai Wang, Mianwei Zhou, Sahisnu Mazumder, Bing Liu, Yi Chang
Stage one extracts/groups the target-related words (called t-words) for a given target.
no code implementations • 16 Feb 2018 • Sahisnu Mazumder, Nianzu Ma, Bing Liu
We model the task as an open-world knowledge base completion problem and propose a novel technique called lifelong interactive learning and inference (LiLi) to solve it.
1 code implementation • 24 Jan 2018 • Lei Zhang, Shuai Wang, Bing Liu
Deep learning has emerged as a powerful machine learning technique that learns multiple layers of representations or features of the data and produces state-of-the-art prediction results.
no code implementations • 18 Jan 2018 • Shuai Wang, Mianwei Zhou, Geli Fei, Yi Chang, Bing Liu
While existing machine learning models have achieved great success for sentiment classification, they typically do not explicitly capture sentiment-oriented word interaction, which can lead to poor results for fine-grained analysis at the snippet level (a phrase or sentence).
1 code implementation • ICLR 2018 • Lei Shu, Hu Xu, Bing Liu
It is reasonable to assume that this knowledge can be transferred to the rejected examples and used to discover the hidden unseen classes in them.
no code implementations • IJCNLP 2015 • Zhiyuan Chen, Nianzu Ma, Bing Liu
This paper proposes a novel lifelong learning (LL) approach to sentiment classification.
no code implementations • ICLR 2018 • Hu Xu, Bing Liu, Lei Shu, Philip S. Yu
We observe that domains are not isolated and a small domain corpus can leverage the learned knowledge from many past domains to augment that corpus in order to generate high-quality embeddings.
no code implementations • ICLR 2018 • Wenpeng Hu, Bing Liu, Rui Yan, Dongyan Zhao, Jinwen Ma
In the paper, we propose a new question generation problem, which also requires the input of a target topic in addition to a piece of descriptive text.
no code implementations • 20 Dec 2017 • Sahisnu Mazumder, Bing Liu
PR algorithms enumerate paths between entity pairs in a KB and use those paths as features to train a model for missing fact prediction.
no code implementations • 30 Nov 2017 • Bing Liu, Ian Lane
A model that produces such shared representations can be combined with models trained on individual-domain SLU data to reduce the number of training samples required for developing a new domain.
no code implementations • 29 Nov 2017 • Bing Liu, Gokhan Tur, Dilek Hakkani-Tur, Pararth Shah, Larry Heck
We show that deep RL based optimization leads to significant improvement in task success rate and reduction in dialogue length compared to the supervised training model.
no code implementations • 22 Nov 2017 • Bing Liu, Tong Yu, Ian Lane, Ole J. Mengshoel
Moreover, we report encouraging response selection performance of the proposed neural bandit model using the Recall@k metric for a small set of online training samples.
no code implementations • 8 Oct 2017 • Zeng Yu, Tianrui Li, Ning Yu, Yi Pan, Hongmei Chen, Bing Liu
We believe that minimizing the reconstruction error of the hidden representation is more robust than minimizing the Frobenius norm of the Jacobian matrix of the hidden representation.
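A sketch of that loss as we read it (our simplification, not the paper's code; architecture and weighting are assumptions): re-encode the decoded output and penalize the reconstruction error of the hidden representation itself, in place of a Jacobian Frobenius-norm penalty.

```python
import torch
import torch.nn as nn

class HiddenReconAE(nn.Module):
    """Autoencoder whose regularizer is the reconstruction error of the
    hidden representation, rather than a contractive Jacobian penalty."""
    def __init__(self, d_in=784, d_h=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_h), nn.Tanh())
        self.dec = nn.Linear(d_h, d_in)

    def loss(self, x, lam=0.1):
        h = self.enc(x)
        x_hat = self.dec(h)
        h_hat = self.enc(x_hat)               # hidden representation of x_hat
        return ((x - x_hat) ** 2).mean() + lam * ((h - h_hat) ** 2).mean()
```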
no code implementations • EMNLP 2017 • Lei Shu, Hu Xu, Bing Liu
As learning is used increasingly in dynamic open environments where some new/test documents may not belong to any of the training classes, identifying these novel documents during classification presents an important problem.
no code implementations • 18 Sep 2017 • Bing Liu, Ian Lane
In this paper, we present a deep reinforcement learning (RL) framework for iterative dialog policy optimization in end-to-end task-oriented dialog systems.
no code implementations • EMNLP 2017 • Yasheng Wang, Yang Zhang, Bing Liu
Although many sentiment lexicons in different languages exist, most are not comprehensive.
no code implementations • 20 Aug 2017 • Bing Liu, Ian Lane
We present a novel end-to-end trainable neural network model for task-oriented dialog systems.
no code implementations • ACL 2017 • Lei Shu, Hu Xu, Bing Liu
This paper makes a focused contribution to supervised aspect extraction.
6 code implementations • 4 Apr 2017 • Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, Bing Liu
Perception and expression of emotion are key factors to the success of dialogue systems or conversational agents.
no code implementations • 15 Jan 2017 • Bing Liu, Ian Lane
In this work, we propose contextual language models that incorporate dialog level discourse information into language modeling.
no code implementations • 23 Dec 2016 • Lei Shu, Bing Liu, Hu Xu, Annice Kim
When "screen" appears in a review of a new domain (or product), it is likely to be an aspect too.
no code implementations • WS 2016 • Bing Liu, Ian Lane
On SLU tasks, our joint model outperforms the independent task training model by 22.3% on intent detection error rate, with slight degradation on slot filling F1 score.
Ranked #3 on Intent Detection on ATIS
6 code implementations • 6 Sep 2016 • Bing Liu, Ian Lane
Attention-based encoder-decoder neural network models have recently shown promising results in machine translation and speech recognition.
Ranked #2 on Intent Detection on ATIS
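A simplified sketch of joint SLU modeling (ours, under stated assumptions: the paper uses an attention-based encoder-decoder, while this sketch substitutes a shared BiLSTM encoder with two heads, so slot filling and intent detection are trained jointly on one representation):

```python
import torch
import torch.nn as nn

class JointSLU(nn.Module):
    """One shared utterance encoder feeds both slot tagging (per token)
    and intent classification (per utterance)."""
    def __init__(self, vocab_size, n_slots, n_intents, d=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)
        self.enc = nn.LSTM(d, d, bidirectional=True, batch_first=True)
        self.slot_head = nn.Linear(2 * d, n_slots)      # per-token slot labels
        self.intent_head = nn.Linear(2 * d, n_intents)  # per-utterance intent

    def forward(self, tokens):                           # tokens: (B, T) int64
        h, _ = self.enc(self.emb(tokens))                # (B, T, 2d)
        return self.slot_head(h), self.intent_head(h.mean(dim=1))
```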