no code implementations • SIGDIAL (ACL) 2020 • Itika Gupta, Barbara Di Eugenio, Brian Ziebart, Aiswarya Baiju, Bing Liu, Ben Gerber, Lisa Sharp, Nadia Nabulsi, Mary Smart
In this paper, we discuss these schemas and briefly talk about their application for automatically extracting activity goals and annotating the second round of data, collected with different health coaches and patients.
no code implementations • SIGDIAL (ACL) 2021 • Itika Gupta, Barbara Di Eugenio, Brian D. Ziebart, Bing Liu, Ben S. Gerber, Lisa K. Sharp
In this paper, we present our work towards assisting health coaches by extracting the physical activity goal the user and coach negotiate via text messages.
1 code implementation • EMNLP 2021 • Nianzu Ma, Alexander Politowicz, Sahisnu Mazumder, Jiahua Chen, Bing Liu, Eric Robertson, Scott Grigsby
This paper proposes to study a fine-grained semantic novelty detection task, which can be illustrated with the following example.
no code implementations • 20 Dec 2024 • Jiabao Qiu, Zixuan Ke, Bing Liu
We introduce CLOB, a novel continual learning (CL) paradigm wherein a large language model (LLM) is regarded as a black box.
1 code implementation • 20 Dec 2024 • Saleh Momeni, Sahisnu Mazumder, Zixuan Ke, Bing Liu
However, incrementally learning each new task in ICL necessitates adding training examples from each class of the task to the prompt, which hampers scalability as the prompt length increases.
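The scalability issue described above can be illustrated with a small sketch (this is a generic illustration of ICL-based class-incremental learning, not the paper's method): every new task appends its labeled examples to the prompt, so the prompt grows with each task learned.

```python
# Illustrative sketch: in ICL-based continual learning, each new task's
# labeled examples are appended to the prompt, so prompt length grows
# with every task. Task data below is made up.

def build_icl_prompt(task_examples):
    """task_examples: list of tasks, each a list of (text, label) pairs."""
    lines = ["Classify the input into one of the labels shown below.", ""]
    for examples in task_examples:
        for text, label in examples:
            lines.append(f"Input: {text}\nLabel: {label}")
    lines.append("Input: {query}\nLabel:")  # slot for the test input
    return "\n".join(lines)

task1 = [("great movie", "positive"), ("boring plot", "negative")]
task2 = [("runs 10km daily", "sports"), ("stocks fell 2%", "finance")]

p1 = build_icl_prompt([task1])
p2 = build_icl_prompt([task1, task2])
assert len(p2) > len(p1)  # the prompt keeps growing as tasks accumulate
```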
1 code implementation • 20 Dec 2024 • Saleh Momeni, Sahisnu Mazumder, Bing Liu
This paper proposes a novel CIL method, called Kernel Linear Discriminant Analysis (KLDA), that can effectively avoid CF and ICS problems.
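A minimal sketch of the idea behind kernel LDA: map inputs through a random Fourier feature (RFF) approximation of an RBF kernel, then apply classic LDA with class means and a shared covariance. All data and hyperparameters here are illustrative; this is not the paper's exact KLDA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(X, W, b):
    # random Fourier feature map approximating an RBF kernel
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

# toy two-class data
X0 = rng.normal(loc=-3.0, size=(100, 4))
X1 = rng.normal(loc=+3.0, size=(100, 4))
D = 32                                   # number of random features
W = 0.3 * rng.normal(size=(4, D))        # 0.3 sets the kernel bandwidth
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

Z0, Z1 = rff(X0, W, b), rff(X1, W, b)
means = [Z0.mean(axis=0), Z1.mean(axis=0)]
centered = np.vstack([Z0 - means[0], Z1 - means[1]])
cov = np.cov(centered.T) + 0.1 * np.eye(D)  # shared, regularized covariance
P = np.linalg.inv(cov)

def predict(x):
    z = rff(x[None, :], W, b)[0]
    # LDA decision: closest class mean under the shared covariance metric
    return int(np.argmin([(z - m) @ P @ (z - m) for m in means]))

assert predict(np.full(4, -3.0)) == 0
assert predict(np.full(4, +3.0)) == 1
```

Because each class is summarized by a mean and a shared covariance in the fixed feature space, adding a new class does not require revisiting old data, which is what makes this family of methods attractive for CIL.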
no code implementations • 28 Nov 2024 • Haiyang Guo, Fei Zhu, Fanhu Zeng, Bing Liu, Xu-Yao Zhang
On the one hand, we retain only two sets of LoRA parameters for merging and propose dynamic representation consolidation to calibrate the merged feature representation.
no code implementations • 6 Nov 2024 • Bing Liu, Chengcheng Zhao, Li Chai, Peng Cheng, Jiming Chen
This paper studies privacy-preserving resilient vector consensus in multi-agent systems against faulty agents, where normal agents can achieve consensus within the convex hull of their initial states while protecting state vectors from being disclosed.
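A standard building block for resilient vector consensus is the coordinate-wise trimmed mean, sketched below as a generic illustration (this is NOT the paper's privacy-preserving protocol): each normal agent discards the f largest and f smallest received values per coordinate before averaging, which keeps the update inside the range of normal agents' states despite up to f faulty peers.

```python
import numpy as np

def trimmed_mean_update(states, f):
    # sort per coordinate, drop the f extremes on each side, then average
    states = np.sort(np.asarray(states), axis=0)
    return states[f:len(states) - f].mean(axis=0)

normal = [np.array([1.0, 2.0]), np.array([2.0, 3.0]), np.array([3.0, 4.0])]
faulty = [np.array([100.0, -100.0])]  # adversarial outlier
new_state = trimmed_mean_update(normal + faulty, f=1)

# the outlier is trimmed away; the update stays within the normal agents' range
assert np.all(new_state >= np.min(normal, axis=0))
assert np.all(new_state <= np.max(normal, axis=0))
```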
no code implementations • 17 Oct 2024 • Junhong Wu, Yang Zhao, Yangyifan Xu, Bing Liu, Chengqing Zong
These abilities, which are developed using proprietary and unavailable training data, make existing continual instruction tuning methods ineffective.
1 code implementation • 14 Oct 2024 • Chaoxi Niu, Guansong Pang, Ling Chen, Bing Liu
Graph CIL (GCIL) follows the same setting but needs to deal with graph tasks (e.g., node classification in a graph).
1 code implementation • 2 Oct 2024 • Zhiwen Shao, Hancheng Zhu, Yong Zhou, Xiang Xiang, Bing Liu, Rui Yao, Lizhuang Ma
Specifically, we explore the mechanism of self-attention weight distribution, in which the self-attention weight distribution of each AU is regarded as spatial distribution and is adaptively learned under the constraint of location-predefined attention and the guidance of AU detection.
no code implementations • 2 Oct 2024 • Lucas Bandarkar, Benjamin Muller, Pritish Yuvraj, Rui Hou, Nayan Singhal, Hongjiang Lv, Bing Liu
We focus on mathematical reasoning and without in-language math data, facilitate cross-lingual transfer by composing language and math capabilities.
no code implementations • 15 Aug 2024 • Danqing Hu, Bing Liu, Xiang Li, Xiaofeng Zhu, Nan Wu
The experimental results demonstrate that LLMs can achieve competitive, and in some tasks superior, performance in lung cancer prognosis prediction compared to data-driven logistic regression models despite not using additional patient data.
2 code implementations • 31 Jul 2024 • Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny Livshits, Danny Wyatt, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Francisco Guzmán, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Anderson, Govind Thattai, Graeme Nail, Gregoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel Kloumann, Ishan Misra, Ivan Evtimov, Jack Zhang, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer Van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Alwala, Karthik Prasad, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, Khalid El-Arini, Krithika Iyer, Kshitiz Malik, Kuenley Chiu, Kunal Bhalla, Kushal Lakhotia, Lauren Rantala-Yeary, Laurens van der Maaten, Lawrence Chen, Liang Tan, Liz Jenkins, Louis Martin, Lovish Madaan, Lubo Malo, Lukas Blecher, Lukas Landzaat, Luke de Oliveira, Madeline Muzzi, Mahesh Pasupuleti, Mannat Singh, Manohar Paluri, Marcin Kardas, 
Maria Tsimpoukelli, Mathew Oldham, Mathieu Rita, Maya Pavlova, Melanie Kambadur, Mike Lewis, Min Si, Mitesh Kumar Singh, Mona Hassan, Naman Goyal, Narjes Torabi, Nikolay Bashlykov, Nikolay Bogoychev, Niladri Chatterji, Ning Zhang, Olivier Duchenne, Onur Çelebi, Patrick Alrassy, Pengchuan Zhang, Pengwei Li, Petar Vasic, Peter Weng, Prajjwal Bhargava, Pratik Dubal, Praveen Krishnan, Punit Singh Koura, Puxin Xu, Qing He, Qingxiao Dong, Ragavan Srinivasan, Raj Ganapathy, Ramon Calderer, Ricardo Silveira Cabral, Robert Stojnic, Roberta Raileanu, Rohan Maheswari, Rohit Girdhar, Rohit Patel, Romain Sauvestre, Ronnie Polidoro, Roshan Sumbaly, Ross Taylor, Ruan Silva, Rui Hou, Rui Wang, Saghar Hosseini, Sahana Chennabasappa, Sanjay Singh, Sean Bell, Seohyun Sonia Kim, Sergey Edunov, Shaoliang Nie, Sharan Narang, Sharath Raparthy, Sheng Shen, Shengye Wan, Shruti Bhosale, Shun Zhang, Simon Vandenhende, Soumya Batra, Spencer Whitman, Sten Sootla, Stephane Collot, Suchin Gururangan, Sydney Borodinsky, Tamar Herman, Tara Fowler, Tarek Sheasha, Thomas Georgiou, Thomas Scialom, Tobias Speckbacher, Todor Mihaylov, Tong Xiao, Ujjwal Karn, Vedanuj Goswami, Vibhor Gupta, Vignesh Ramanathan, Viktor Kerkez, Vincent Gonguet, Virginie Do, Vish Vogeti, Vítor Albiero, Vladan Petrovic, Weiwei Chu, Wenhan Xiong, Wenyin Fu, Whitney Meers, Xavier Martinet, Xiaodong Wang, Xiaofang Wang, Xiaoqing Ellen Tan, Xide Xia, Xinfeng Xie, Xuchao Jia, Xuewei Wang, Yaelle Goldschlag, Yashesh Gaur, Yasmine Babaei, Yi Wen, Yiwen Song, Yuchen Zhang, Yue Li, Yuning Mao, Zacharie Delpierre Coudert, Zheng Yan, Zhengxing Chen, Zoe Papakipos, Aaditya Singh, Aayushi Srivastava, Abha Jain, Adam Kelsey, Adam Shajnfeld, Adithya Gangidi, Adolfo Victoria, Ahuva Goldstand, Ajay Menon, Ajay Sharma, Alex Boesenberg, Alexei Baevski, Allie Feinstein, Amanda Kallet, Amit Sangani, Amos Teo, Anam Yunus, Andrei Lupu, Andres Alvarado, Andrew Caples, Andrew Gu, Andrew Ho, Andrew Poulton, Andrew Ryan, Ankit Ramchandani, Annie Dong, 
Annie Franco, Anuj Goyal, Aparajita Saraf, Arkabandhu Chowdhury, Ashley Gabriel, Ashwin Bharambe, Assaf Eisenman, Azadeh Yazdan, Beau James, Ben Maurer, Benjamin Leonhardi, Bernie Huang, Beth Loyd, Beto De Paola, Bhargavi Paranjape, Bing Liu, Bo Wu, Boyu Ni, Braden Hancock, Bram Wasti, Brandon Spence, Brani Stojkovic, Brian Gamido, Britt Montalvo, Carl Parker, Carly Burton, Catalina Mejia, Ce Liu, Changhan Wang, Changkyu Kim, Chao Zhou, Chester Hu, Ching-Hsiang Chu, Chris Cai, Chris Tindal, Christoph Feichtenhofer, Cynthia Gao, Damon Civin, Dana Beaty, Daniel Kreymer, Daniel Li, David Adkins, David Xu, Davide Testuggine, Delia David, Devi Parikh, Diana Liskovich, Didem Foss, Dingkang Wang, Duc Le, Dustin Holland, Edward Dowling, Eissa Jamil, Elaine Montgomery, Eleonora Presani, Emily Hahn, Emily Wood, Eric-Tuan Le, Erik Brinkman, Esteban Arcaute, Evan Dunbar, Evan Smothers, Fei Sun, Felix Kreuk, Feng Tian, Filippos Kokkinos, Firat Ozgenel, Francesco Caggioni, Frank Kanayet, Frank Seide, Gabriela Medina Florez, Gabriella Schwarz, Gada Badeer, Georgia Swee, Gil Halpern, Grant Herman, Grigory Sizov, Guangyi, Zhang, Guna Lakshminarayanan, Hakan Inan, Hamid Shojanazeri, Han Zou, Hannah Wang, Hanwen Zha, Haroun Habeeb, Harrison Rudolph, Helen Suk, Henry Aspegren, Hunter Goldman, Hongyuan Zhan, Ibrahim Damlaj, Igor Molybog, Igor Tufanov, Ilias Leontiadis, Irina-Elena Veliche, Itai Gat, Jake Weissman, James Geboski, James Kohli, Janice Lam, Japhet Asher, Jean-Baptiste Gaya, Jeff Marcus, Jeff Tang, Jennifer Chan, Jenny Zhen, Jeremy Reizenstein, Jeremy Teboul, Jessica Zhong, Jian Jin, Jingyi Yang, Joe Cummings, Jon Carvill, Jon Shepard, Jonathan McPhie, Jonathan Torres, Josh Ginsburg, Junjie Wang, Kai Wu, Kam Hou U, Karan Saxena, Kartikay Khandelwal, Katayoun Zand, Kathy Matosich, Kaushik Veeraraghavan, Kelly Michelena, Keqian Li, Kiran Jagadeesh, Kun Huang, Kunal Chawla, Kyle Huang, Lailin Chen, Lakshya Garg, Lavender A, Leandro Silva, Lee Bell, Lei Zhang, Liangpeng Guo, 
Licheng Yu, Liron Moshkovich, Luca Wehrstedt, Madian Khabsa, Manav Avalani, Manish Bhatt, Martynas Mankus, Matan Hasson, Matthew Lennie, Matthias Reso, Maxim Groshev, Maxim Naumov, Maya Lathi, Meghan Keneally, Miao Liu, Michael L. Seltzer, Michal Valko, Michelle Restrepo, Mihir Patel, Mik Vyatskov, Mikayel Samvelyan, Mike Clark, Mike Macey, Mike Wang, Miquel Jubert Hermoso, Mo Metanat, Mohammad Rastegari, Munish Bansal, Nandhini Santhanam, Natascha Parks, Natasha White, Navyata Bawa, Nayan Singhal, Nick Egebo, Nicolas Usunier, Nikhil Mehta, Nikolay Pavlovich Laptev, Ning Dong, Norman Cheng, Oleg Chernoguz, Olivia Hart, Omkar Salpekar, Ozlem Kalinli, Parkin Kent, Parth Parekh, Paul Saab, Pavan Balaji, Pedro Rittner, Philip Bontrager, Pierre Roux, Piotr Dollar, Polina Zvyagina, Prashant Ratanchandani, Pritish Yuvraj, Qian Liang, Rachad Alao, Rachel Rodriguez, Rafi Ayub, Raghotham Murthy, Raghu Nayani, Rahul Mitra, Rangaprabhu Parthasarathy, Raymond Li, Rebekkah Hogan, Robin Battey, Rocky Wang, Russ Howes, Ruty Rinott, Sachin Mehta, Sachin Siby, Sai Jayesh Bondu, Samyak Datta, Sara Chugh, Sara Hunt, Sargun Dhillon, Sasha Sidorov, Satadru Pan, Saurabh Mahajan, Saurabh Verma, Seiji Yamamoto, Sharadh Ramaswamy, Shaun Lindsay, Sheng Feng, Shenghao Lin, Shengxin Cindy Zha, Shishir Patil, Shiva Shankar, Shuqiang Zhang, Sinong Wang, Sneha Agarwal, Soji Sajuyigbe, Soumith Chintala, Stephanie Max, Stephen Chen, Steve Kehoe, Steve Satterfield, Sudarshan Govindaprasad, Sumit Gupta, Summer Deng, Sungmin Cho, Sunny Virk, Suraj Subramanian, Sy Choudhury, Sydney Goldman, Tal Remez, Tamar Glaser, Tamara Best, Thilo Koehler, Thomas Robinson, Tianhe Li, Tianjun Zhang, Tim Matthews, Timothy Chou, Tzook Shaked, Varun Vontimitta, Victoria Ajayi, Victoria Montanez, Vijai Mohan, Vinay Satish Kumar, Vishal Mangla, Vlad Ionescu, Vlad Poenaru, Vlad Tiberiu Mihailescu, Vladimir Ivanov, Wei Li, Wenchen Wang, WenWen Jiang, Wes Bouaziz, Will Constable, Xiaocheng Tang, Xiaojian Wu, Xiaolan Wang, 
Xilun Wu, Xinbo Gao, Yaniv Kleinman, Yanjun Chen, Ye Hu, Ye Jia, Ye Qi, Yenda Li, Yilin Zhang, Ying Zhang, Yossi Adi, Youngjin Nam, Yu, Wang, Yu Zhao, Yuchen Hao, Yundi Qian, Yunlu Li, Yuzi He, Zach Rait, Zachary DeVito, Zef Rosnbrick, Zhaoduo Wen, Zhenyu Yang, Zhiwei Zhao, Zhiyu Ma
This paper presents a new set of foundation models, called Llama 3.
Ranked #3 on Multi-task Language Understanding on MMLU (using extra training data)
no code implementations • 25 Jul 2024 • Danqing Hu, Bing Liu, Xiaofeng Zhu, Nan Wu
We then designed a prompt template to integrate the patient data with the predicted probability from the machine learning model.
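A hypothetical template in the spirit described above: the patient data and the machine learning model's predicted probability are interpolated into the LLM prompt. The field names and wording below are illustrative, not taken from the paper.

```python
# Illustrative prompt template combining patient data with an ML model's
# predicted probability; all field names here are made up for the sketch.
PROMPT_TEMPLATE = (
    "Patient record:\n"
    "  Age: {age}\n"
    "  Tumor size (cm): {tumor_size}\n"
    "  Smoking history: {smoking}\n\n"
    "A logistic regression model estimates the probability of the outcome "
    "at {prob:.2f}.\n"
    "Considering both the record and the model's estimate, give a final "
    "prognosis assessment."
)

prompt = PROMPT_TEMPLATE.format(age=63, tumor_size=2.4, smoking="former", prob=0.37)
assert "0.37" in prompt and "Age: 63" in prompt
```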
2 code implementations • 18 Jun 2024 • Haoqiu Yan, Yongxin Zhu, Kai Zheng, Bing Liu, Haoyu Cao, Deqiang Jiang, Linli Xu
This oversight can lead to misinterpretations of speakers' intentions, resulting in inconsistent or even contradictory responses within dialogues.
no code implementations • 17 Jun 2024 • Yang Chen, Cong Fang, Zhouchen Lin, Bing Liu
Foundation Models (FMs) have demonstrated remarkable insights into the relational dynamics of the world, leading to the crucial question: how do these models acquire an understanding of world hybrid relations?
no code implementations • 4 Jun 2024 • Danqing Hu, Shanyuan Zhang, Qing Liu, Xiaofeng Zhu, Bing Liu
Besides the automatic quantitative evaluation metrics, we define five human evaluation metrics, i.e., completeness, correctness, conciseness, verisimilitude, and replaceability, to evaluate the semantics of the generated impressions.
1 code implementation • 3 Jun 2024 • Zhenhua Liu, Tong Zhu, Chuanyuan Tan, Haonan Lu, Bing Liu, Wenliang Chen
Large Language Models (LLMs) have shown their impressive capabilities, while also raising concerns about the data contamination problems due to privacy issues and leakage of benchmark datasets in the pre-training phase.
no code implementations • 29 May 2024 • Alexander Politowicz, Sahisnu Mazumder, Bing Liu
Designing Reinforcement Learning (RL) solutions for real-life problems remains a significant challenge.
1 code implementation • 16 Apr 2024 • Yue Zhou, Barbara Di Eugenio, Brian Ziebart, Lisa Sharp, Bing Liu, Nikolaos Agadakos
Health coaching helps patients achieve personalized and lifestyle-related goals, effectively managing chronic conditions and alleviating mental health issues.
1 code implementation • COLING 2022 • Yue Zhou, Barbara Di Eugenio, Brian Ziebart, Lisa Sharp, Bing Liu, Ben Gerber, Nikolaos Agadakos, Shweta Yadav
In this paper, we propose to build a dialogue system that converses with the patients, helps them create and accomplish specific goals, and can address their emotions with empathy.
no code implementations • 31 Mar 2024 • Changnan Xiao, Bing Liu
Length generalization (LG) is a challenging problem in learning to reason.
no code implementations • 6 Feb 2024 • Sijin Lu, Pengyu Xu, Bing Liu, Hongjian Sun, Liping Jing, Jian Yu
For the retrieval-augmented representations, we employ a cross-modal context-aware attention to leverage the main modality description for targeted feature extraction across the submodalities title and code.
1 code implementation • 24 Jan 2024 • Shuyi Wang, Bing Liu, Guido Zuccon
In a FOLTR system, a ranker is learned by aggregating local updates to the global ranking model.
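The aggregation step mentioned above can be sketched as a FedAvg-style weighted average of clients' local ranker weights; this is a generic illustration of federated aggregation, not the exact FOLTR algorithm studied in the paper.

```python
import numpy as np

def aggregate(local_weights, n_interactions):
    # weight each client's local ranker by how much interaction data it saw
    w = np.asarray(local_weights, dtype=float)
    a = np.asarray(n_interactions, dtype=float)
    return (w * a[:, None]).sum(axis=0) / a.sum()

clients = [np.array([0.2, 0.8]), np.array([0.4, 0.6]), np.array([0.3, 0.7])]
global_ranker = aggregate(clients, n_interactions=[10, 30, 60])
assert global_ranker.shape == (2,)
```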
1 code implementation • 22 Jan 2024 • Shibing Xiang, Xin Jiang, Bing Liu, Yurui Huang, Chaolin Tian, Yifang Ma
New knowledge builds upon existing foundations; hence an interdependent relationship exists among pieces of knowledge, as manifested in the historical development of the scientific system over hundreds of years.
no code implementations • 15 Dec 2023 • Bing Liu
A core function of intelligence is grounding, which is the process of connecting the natural language and abstract knowledge to the internal representation of the real world in an intelligent being, e.g., a human.
no code implementations • 22 Nov 2023 • Changnan Xiao, Bing Liu
However, numerous evaluations of the reasoning capabilities of LLMs have also shown some limitations.
no code implementations • 20 Nov 2023 • Eli Verwimp, Rahaf Aljundi, Shai Ben-David, Matthias Bethge, Andrea Cossu, Alexander Gepperth, Tyler L. Hayes, Eyke Hüllermeier, Christopher Kanan, Dhireesha Kudithipudi, Christoph H. Lampert, Martin Mundt, Razvan Pascanu, Adrian Popescu, Andreas S. Tolias, Joost Van de Weijer, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, Gido M. van de Ven
Continual learning is a subfield of machine learning, which aims to allow machine learning models to continuously learn on new data, by accumulating knowledge without forgetting what was learned in the past.
1 code implementation • 20 Oct 2023 • Shengyao Zhuang, Bing Liu, Bevan Koopman, Guido Zuccon
In the field of information retrieval, Query Likelihood Models (QLMs) rank documents based on the probability of generating the query given the content of a document.
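The classic instantiation of the QLM idea is a unigram language model with Dirichlet smoothing, sketched below (mu = 2000 is a conventional default, not a value taken from the paper):

```python
import math
from collections import Counter

def query_log_likelihood(query, doc, collection, mu=2000):
    # score = sum over query terms of log P(term | doc), with the document
    # language model smoothed toward the collection language model
    doc_tf = Counter(doc)
    col_tf = Counter(collection)
    col_len = len(collection)
    score = 0.0
    for term in query:
        p_col = col_tf[term] / col_len  # collection language model
        score += math.log((doc_tf[term] + mu * p_col) / (len(doc) + mu))
    return score

collection = "the cat sat on the mat the dog ran in the park".split()
d1 = "the cat sat on the mat".split()
d2 = "the dog ran in the park".split()
q = "cat mat".split()

# the document that actually contains the query terms scores higher
assert query_log_likelihood(q, d1, collection) > query_log_likelihood(q, d2, collection)
```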
1 code implementation • 13 Oct 2023 • Zixuan Ke, Bing Liu, Wenhan Xiong, Asli Celikyilmaz, Haoran Li
To our knowledge, only one method has been proposed to learn a sequence of mixed tasks.
no code implementations • 9 Oct 2023 • Bing Liu, Pengyu Xu, Sijin Lu, Shijing Wang, Hongjian Sun, Liping Jing
With the development of Internet technology and the expansion of social networks, online platforms have become an important way for people to obtain information.
2 code implementations • 26 Sep 2023 • Haowei Lin, Yijia Shao, Weinan Qian, Ningxin Pan, Yiduo Guo, Bing Liu
An emerging theory-guided approach (called TIL+OOD) is to train a task-specific model for each task in a shared network for all tasks based on a task-incremental learning (TIL) method to deal with catastrophic forgetting.
no code implementations • 4 Sep 2023 • Danqing Hu, Bing Liu, Xiaofeng Zhu, Xudong Lu, Nan Wu
Information extraction is the strategy of transforming sequences of characters into structured data, which can then be employed for secondary analysis.
no code implementations • 25 Jul 2023 • Zhiwen Shao, Yuchen Su, Yong Zhou, Fanrong Meng, Hancheng Zhu, Bing Liu, Rui Yao
Contour based scene text detection methods have rapidly developed recently, but still suffer from inaccurate frontend contour initialization, multi-stage error accumulation, or deficient local information aggregation.
1 code implementation • 26 Jun 2023 • Tatsuya Konishi, Mori Kurokawa, Chihiro Ono, Zixuan Ke, Gyuhak Kim, Bing Liu
Although several techniques have achieved learning with no CF, they attain it by letting each task monopolize a sub-network in a shared network, which seriously limits knowledge transfer (KT) and causes over-consumption of the network capacity, i.e., as more tasks are learned, the performance deteriorates.
1 code implementation • 22 Jun 2023 • Yijia Shao, Yiduo Guo, Dongyan Zhao, Bing Liu
Despite the great success of pre-trained language models, it is still a challenge to use these models for continual learning, especially for the class-incremental learning (CIL) setting due to catastrophic forgetting (CF).
1 code implementation • 22 Jun 2023 • Gyuhak Kim, Changnan Xiao, Tatsuya Konishi, Bing Liu
This paper shows that CIL is learnable.
1 code implementation • CVPR 2023 • Yiduo Guo, Bing Liu, Dongyan Zhao
A novel optimization objective with a gradient-based adaptive method is proposed to dynamically deal with the problem in the online CL process.
1 code implementation • 24 May 2023 • Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, Lidong Bing
This paper aims to provide a comprehensive investigation into the capabilities of LLMs in performing various sentiment analysis tasks, from conventional sentiment classification to aspect-based sentiment analysis and multifaceted analysis of subjective texts.
1 code implementation • 20 May 2023 • Bing Liu, Wei Luo, Gang Li, Jing Huang, Bo Yang
As deep learning gains popularity in modelling dynamical systems, we expose an underappreciated misunderstanding relevant to modelling dynamics on networks.
no code implementations • 19 May 2023 • Yiduo Guo, Yaobo Liang, Dongyan Zhao, Bing Liu, Duan Nan
Existing research has shown that a multilingual pre-trained language model fine-tuned with one (source) language also performs well on downstream tasks for non-source languages, even though no fine-tuning is done on these languages.
no code implementations • 8 May 2023 • Neeraj Varshney, Himanshu Gupta, Eric Robertson, Bing Liu, Chitta Baral
To initiate a systematic research in this important area of 'dealing with novelties', we introduce 'NoveltyTask', a multi-stage task to evaluate a system's performance on pipelined novelty 'detection' and 'accommodation' tasks.
no code implementations • 20 Apr 2023 • Gyuhak Kim, Changnan Xiao, Tatsuya Konishi, Zixuan Ke, Bing Liu
The paper then proves that the theory can be generalized to open-world CIL, i.e., the proposed open-world continual learning, which can perform CIL in the open world and detect future or open-world OOD data.
no code implementations • 16 Mar 2023 • Hao Liu, Xin Li, Mingming Gong, Bing Liu, Yunfei Wu, Deqiang Jiang, Yinsong Liu, Xing Sun
Recently, Table Structure Recognition (TSR) task, aiming at identifying table structure into machine readable formats, has received increasing interest in the community.
2 code implementations • 7 Feb 2023 • Zixuan Ke, Yijia Shao, Haowei Lin, Tatsuya Konishi, Gyuhak Kim, Bing Liu
A novel proxy is also proposed to preserve the general knowledge in the original LM.
Ranked #1 on Continual Pretraining on ACL-ARC
2 code implementations • 21 Jan 2023 • Zixuan Ke, Yijia Shao, Haowei Lin, Hu Xu, Lei Shu, Bing Liu
This paper shows that the existing methods are suboptimal and proposes a novel method to perform a more informed adaptation of the knowledge in the LM by (1) soft-masking the attention heads based on their importance to best preserve the general knowledge in the LM and (2) contrasting the representations of the general and the full (both general and domain knowledge) to learn an integrated representation with both general and domain-specific knowledge.
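The soft-masking idea in point (1) can be sketched as scaling the gradients flowing to each attention head in inverse proportion to the head's estimated importance, so that heads important to the general knowledge change least. The importance scores below are made up; the actual method derives them differently (see the paper).

```python
import numpy as np

# one importance score per attention head, assumed to lie in [0, 1]
importance = np.array([0.9, 0.1, 0.5])
# toy per-head gradients (rows = heads)
grads = np.array([[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]])

# soft mask: the more important a head, the smaller its update
masked_grads = (1.0 - importance)[:, None] * grads
assert masked_grads[0, 0] < masked_grads[1, 0]  # head 0 (0.9) updates least
```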
3 code implementations • 10 Dec 2022 • Lei Ding, Jing Zhang, Kai Zhang, Haitao Guo, Bing Liu, Lorenzo Bruzzone
Semantic Change Detection (SCD) refers to the task of simultaneously extracting the changed areas and the semantic categories (before and after the changes) in Remote Sensing Images (RSIs).
Ranked #2 on Change Detection on SECOND
1 code implementation • 29 Nov 2022 • Bing Liu, Harrisen Scells, Wen Hua, Guido Zuccon, Genghong Zhao, Xia Zhang
Making compatible predictions thus should be one of the goals of training an EA model along with fitting the labelled data: this aspect however is neglected in current methods.
1 code implementation • 29 Nov 2022 • Bing Liu, Tiancheng Lan, Wen Hua, Guido Zuccon
Entity Alignment (EA), which aims to detect entity mappings (i.e., equivalent entity pairs) in different Knowledge Graphs (KGs), is critical for KG fusion.
1 code implementation • 23 Nov 2022 • Zixuan Ke, Bing Liu
Continual learning (CL) is a learning paradigm that emulates the human capability of learning and accumulating knowledge continually without forgetting the previously learned knowledge and also transferring the learned knowledge to help learn new tasks better.
no code implementations • 12 Nov 2022 • Sahisnu Mazumder, Bing Liu
This book introduces the new paradigm of lifelong learning dialogue systems to endow chatbots with the ability to learn continually by themselves, through their own self-initiated interactions with their users and working environments, in order to improve themselves.
1 code implementation • 4 Nov 2022 • Gyuhak Kim, Changnan Xiao, Tatsuya Konishi, Zixuan Ke, Bing Liu
Continual learning (CL) learns a sequence of tasks incrementally.
1 code implementation • 31 Oct 2022 • Nianzu Ma, Sahisnu Mazumder, Alexander Politowicz, Bing Liu, Eric Robertson, Scott Grigsby
Much of the existing work on text novelty detection has been studied at the topic level, i.e., identifying whether the topic of a document or a sentence is novel or not.
no code implementations • 26 Oct 2022 • Sahisnu Mazumder, Bing Liu, Shuai Wang, Yingxuan Zhu, Xiaotian Yin, Lifeng Liu, Jian Li
This paper proposes a new method to drastically speed up deep reinforcement learning (deep RL) training for problems that have the property of state-action permissibility (SAP).
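The intuition behind state-action permissibility can be sketched generically: actions known to be bad (impermissible) in a state are filtered out before selection, shrinking the exploration space and speeding up learning. The permissibility rule below is a made-up example, not the paper's learned predictor.

```python
import random

def permissible(state, action):
    # toy rule, e.g. in lane keeping: don't steer further off-road
    return not (state == "off_road_left" and action == "steer_left")

def select_action(state, actions, rng):
    # explore only among permissible actions
    allowed = [a for a in actions if permissible(state, a)]
    return rng.choice(allowed)

rng = random.Random(0)
actions = ["steer_left", "steer_right", "straight"]
picks = {select_action("off_road_left", actions, rng) for _ in range(50)}
assert "steer_left" not in picks  # the impermissible action is never explored
```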
3 code implementations • 11 Oct 2022 • Zixuan Ke, Haowei Lin, Yijia Shao, Hu Xu, Lei Shu, Bing Liu
Recent work on applying large language models (LMs) achieves impressive performance in many NLP applications.
Ranked #1 on Continual Pretraining on AG News
1 code implementation • 22 Aug 2022 • Bing Liu, Wen Hua, Guido Zuccon, Genghong Zhao, Xia Zhang
To include in the EA subtasks a high proportion of the potential mappings originally present in the large EA task, we devise a counterpart discovery method that exploits the locality principle of the EA task and the power of trained EA models.
3 code implementations • 20 Aug 2022 • Gyuhak Kim, Zixuan Ke, Bing Liu
Instead of using the saved samples in memory to update the network for previous tasks/classes in the existing approach, MORE leverages the saved samples to build a task specific classifier (adding a new classification head) without updating the network learned for previous tasks/classes.
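The head-per-task idea described above can be sketched as follows: replayed memory samples are used only to fit a new task-specific classification head on frozen features, leaving the shared network untouched. The features and the least-squares head below are toy stand-ins for the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
heads = {}  # task id -> weight matrix of that task's own classifier head

def add_task_head(task_id, feats, labels, n_classes):
    # fit a simple least-squares head on (frozen) features;
    # the shared feature extractor is never updated
    Y = np.eye(n_classes)[labels]
    W, *_ = np.linalg.lstsq(feats, Y, rcond=None)
    heads[task_id] = W

feats = rng.normal(size=(20, 8))
labels = rng.integers(0, 2, size=20)
add_task_head("task_1", feats, labels, 2)
add_task_head("task_2", rng.normal(size=(20, 8)), rng.integers(0, 3, size=20), 3)

assert set(heads) == {"task_1", "task_2"}  # one head per task
assert heads["task_1"].shape == (8, 2)
```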
no code implementations • 3rd Conversational AI Workshop at 33rd Conference on Neural Information Processing Systems (NeurIPS 2019) 2019 • Jorge A. Mendez, Alborz Geramifard, Mohammad Ghavamzadeh, Bing Liu
Learning task-oriented dialog policies via reinforcement learning typically requires large amounts of interaction with users, which in practice renders such methods unusable for real-world applications.
no code implementations • 27 Jun 2022 • Yuchen Su, Zhiwen Shao, Yong Zhou, Fanrong Meng, Hancheng Zhu, Bing Liu, Rui Yao
Arbitrary-shaped scene text detection is a challenging task due to the wide variation of text in font, size, color, and orientation.
1 code implementation • 3 Jun 2022 • Reinald Kim Amplayo, Arthur Bražinskas, Yoshi Suhara, Xiaolan Wang, Bing Liu
In this tutorial, we present various aspects of opinion summarization that are useful for researchers and practitioners.
no code implementations • Findings (NAACL) 2022 • Zhiyu Chen, Bing Liu, Seungwhan Moon, Chinnadhurai Sankar, Paul Crook, William Yang Wang
We also propose two new models, SimpleToDPlus and Combiner, for the proposed task.
1 code implementation • IEEE Transactions on Image Processing 2022 • Kuiliang Gao, Bing Liu, Xuchu Yu, Anzhu Yu
However, the existing methods based on meta learning still need to construct a labeled source data set with several pre-collected HSIs, and must utilize a large number of labeled samples for meta-training, which is actually time-consuming and labor-intensive.
no code implementations • 24 Mar 2022 • Sepideh Esmaeilpour, Lei Shu, Bing Liu
In many practical scenarios, this is not the case because there are unknowns or unseen class samples in the test data, which is called the open set scenario, and the unknowns need to be detected.
1 code implementation • 17 Mar 2022 • Gyuhak Kim, Sepideh Esmaeilpour, Changnan Xiao, Bing Liu
Existing continual learning techniques focus on either task incremental learning (TIL) or class incremental learning (CIL) problem, but not both.
no code implementations • 17 Mar 2022 • Bing Liu, Sahisnu Mazumder, Eric Robertson, Scott Grigsby
As more and more AI agents are used in practice, it is time to think about how to make these agents fully autonomous so that they can (1) learn by themselves continually in a self-motivated and self-initiated manner rather than being retrained offline periodically on the initiation of human engineers and (2) accommodate or adapt to unexpected or novel circumstances.
1 code implementation • 12 Mar 2022 • Kexuan Xin, Zequn Sun, Wen Hua, Bing Liu, Wei Hu, Jianfeng Qu, Xiaofang Zhou
We also design a conflict resolution mechanism to resolve the alignment conflict when combining the new alignment of an aligner and that from its teacher.
no code implementations • 4 Feb 2022 • Lei Shu, Hu Xu, Bing Liu, Jiahua Chen
Aspect-based sentiment analysis (ABSA) typically requires in-domain annotated data for supervised training/fine-tuning.
1 code implementation • CVPR 2022 • Bing Liu, Dong Wang, Xu Yang, Yong Zhou, Rui Yao, Zhiwen Shao, Jiaqi Zhao
In the encoding stage, the IOD is able to disentangle the region-based visual features by deconfounding the visual confounder.
2 code implementations • 18 Dec 2021 • Zixuan Ke, Bing Liu, Hao Wang, Lei Shu
In this setting, the CL system learns a sequence of SC tasks incrementally in a neural network, where each task builds a classifier to classify the sentiment of reviews of a particular product category or domain.
Ranked #4 on Continual Learning on DSC (10 tasks)
2 code implementations • NeurIPS 2020 • Zixuan Ke, Bing Liu, Xingchang Huang
To the best of our knowledge, no technique has been proposed to learn a sequence of mixed similar and dissimilar tasks that can deal with forgetting and also transfer knowledge forward and backward.
Ranked #1 on Continual Learning on F-CelebA (10 tasks)
1 code implementation • NAACL 2021 • Zixuan Ke, Hu Xu, Bing Liu
This paper studies continual learning (CL) of a sequence of aspect sentiment classification (ASC) tasks.
Ranked #3 on Continual Learning on ASC (19 tasks)
1 code implementation • NeurIPS 2021 • Zixuan Ke, Bing Liu, Nianzu Ma, Hu Xu, Lei Shu
Although several papers have tried to deal with both CF and KT, our experiments show that they suffer from serious CF when the tasks do not have much shared knowledge.
Ranked #1 on Continual Learning on DSC (10 tasks)
1 code implementation • EMNLP 2021 • Zixuan Ke, Bing Liu, Hu Xu, Lei Shu
The key novelty is a contrastive continual learning method that enables both knowledge transfer across tasks and knowledge distillation from old tasks to the new task, which eliminates the need for task ids in testing.
no code implementations • NeurIPS 2021 • Qi Qin, Wenpeng Hu, Han Peng, Dongyan Zhao, Bing Liu
Continual learning (CL) of a sequence of tasks is often accompanied by the catastrophic forgetting (CF) problem.
no code implementations • CVPR 2022 • Hao Liu, Xin Li, Bing Liu, Deqiang Jiang, Yinsong Liu, Bo Ren
We also show that the proposed NCGM can modulate collaborative pattern of different modalities conditioned on the context of intra-modality cues, which is vital for diversified table cases.
Ranked #7 on Table Recognition on PubTabNet
no code implementations • 19 Nov 2021 • Yanni Li, Bing Liu, Kaicheng Yao, Xiaoli Kou, Pengfan Lv, Yueshen Xu, Jiangtao Cui
What is the upper bound on the number of tasks that a given CL method can learn sequentially?
no code implementations • 21 Oct 2021 • Bing Liu, Eric Robertson, Scott Grigsby, Sahisnu Mazumder
As more and more AI agents are used in practice, it is time to think about how to make these agents fully autonomous so that they can learn by themselves in a self-motivated and self-supervised manner rather than being retrained periodically on the initiation of human engineers using expanded training data.
1 code implementation • EMNLP 2021 • Bing Liu, Harrisen Scells, Guido Zuccon, Wen Hua, Genghong Zhao
Entity Alignment (EA) aims to match equivalent entities across different Knowledge Graphs (KGs) and is an essential step of KG fusion.
no code implementations • 29 Sep 2021 • Tatsuya Konishi, Mori Kurokawa, Roberto Legaspi, Chihiro Ono, Zixuan Ke, Gyuhak Kim, Bing Liu
The goal of this work is to endow such systems with the additional ability to transfer knowledge among tasks when the tasks are similar and have shared knowledge to achieve higher accuracy.
no code implementations • 29 Sep 2021 • Yiduo Guo, Dongyan Zhao, Bing Liu
Most existing techniques for online continual learning are based on experience-replay.
no code implementations • 29 Sep 2021 • Mengyu Wang, Yijia Shao, Haowei Lin, Wenpeng Hu, Bing Liu
Recently, contrastive loss with data augmentation and pseudo class creation has been shown to produce markedly better results for out-of-distribution (OOD) detection than previous methods.
no code implementations • 29 Sep 2021 • Gyuhak Kim, Sepideh Esmaeilpour, Zixuan Ke, Tatsuya Konishi, Bing Liu
PLS is not only simple and efficient but also does not invade data privacy due to the fact that it works in the latent feature space.
1 code implementation • EMNLP 2021 • Zhaojiang Lin, Bing Liu, Andrea Madotto, Seungwhan Moon, Paul Crook, Zhenpeng Zhou, Zhiguang Wang, Zhou Yu, Eunjoon Cho, Rajen Subba, Pascale Fung
Zero-shot transfer learning for dialogue state tracking (DST) enables us to handle a variety of task-oriented dialogue domains without the expense of collecting in-domain data.
2 code implementations • 6 Sep 2021 • Sepideh Esmaeilpour, Bing Liu, Eric Robertson, Lei Shu
In an out-of-distribution (OOD) detection problem, samples of known classes (also called in-distribution classes) are used to train a special classifier.
Out-of-Distribution Detection
1 code implementation • ACL 2021 • Xuepeng Wang, Li Zhao, Bing Liu, Tao Chen, Feng Zhang, Di Wang
In this paper, we propose a novel concept-based label embedding method that can explicitly represent the concept and model the sharing mechanism among classes for the hierarchical text classification.
1 code implementation • NAACL 2021 • Zhaojiang Lin, Bing Liu, Seungwhan Moon, Paul Crook, Zhenpeng Zhou, Zhiguang Wang, Zhou Yu, Andrea Madotto, Eunjoon Cho, Rajen Subba
Zero-shot cross-domain dialogue state tracking (DST) enables us to handle unseen domains without the expense of collecting in-domain data.
2 code implementations • 10 May 2021 • Zhaojiang Lin, Bing Liu, Seungwhan Moon, Paul Crook, Zhenpeng Zhou, Zhiguang Wang, Zhou Yu, Andrea Madotto, Eunjoon Cho, Rajen Subba
Zero-shot cross-domain dialogue state tracking (DST) enables us to handle task-oriented dialogue in unseen domains without the expense of collecting in-domain data.
no code implementations • EACL 2021 • Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, Fuchun Peng
We find that mix-review effectively regularizes the finetuning process, and the forgetting problem is alleviated to some extent.
no code implementations • 1 Jan 2021 • Alexander Politowicz, Bing Liu
Automatic reward shaping is one approach to solving this problem, using automatic identification and modulation of shaping reward signals that are more informative about how agents should behave in any given scenario to learn and adapt faster.
1 code implementation • EMNLP 2021 • Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul Crook, Bing Liu, Zhou Yu, Eunjoon Cho, Zhiguang Wang
Continual learning in task-oriented dialogue systems can allow us to add new domains and functionalities through time without incurring the high cost of a whole system retraining.
no code implementations • 9 Dec 2020 • Bing Liu, Yu Tang, Yuxiong Ji, Yu Shen, Yuchuan Du
Ramp metering, which uses traffic signals to regulate vehicle flows from on-ramps, has been widely implemented to improve vehicle mobility on freeways.
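For context, the classic local feedback controller for ramp metering is ALINEA, which adjusts the metering rate in proportion to the gap between measured and target downstream occupancy. The sketch below shows that control law; the gain and rate bounds are illustrative assumptions, and the paper's own (learning-based) controller may differ:

```python
def alinea_step(prev_rate, occupancy, target_occupancy, gain=70.0,
                r_min=200.0, r_max=2000.0):
    """One step of the ALINEA local ramp-metering feedback law.

    r(k) = r(k-1) + K_R * (o_target - o_measured(k)), clamped to a
    feasible metering range (vehicles/hour).
    """
    rate = prev_rate + gain * (target_occupancy - occupancy)
    return max(r_min, min(r_max, rate))
```

When occupancy exceeds the target, the metering rate drops, holding vehicles on the ramp; when the freeway is underutilized, the rate rises.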
1 code implementation • NeurIPS 2020 • Wenpeng Hu, Mengyu Wang, Qi Qin, Jinwen Ma, Bing Liu
Existing neural network based one-class learning methods mainly use various forms of auto-encoders or GAN style adversarial training to learn a latent representation of the given one class of data.
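A minimal stand-in for the autoencoder-style one-class approaches described above is a linear autoencoder (equivalently, PCA): fit a low-dimensional subspace to the one class of data and score new samples by reconstruction error. This is a simplified sketch of the general idea, not the paper's method; the class name and component count are illustrative:

```python
import numpy as np

class LinearAEOneClass:
    """One-class scorer via a linear autoencoder (PCA reconstruction error).

    Fit on normal data only; samples far from the learned subspace get
    a high reconstruction error and can be flagged as anomalous.
    """
    def __init__(self, n_components=2):
        self.k = n_components

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        Xc = X - self.mean_
        # Top-k right singular vectors span the principal subspace.
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        self.components_ = vt[: self.k]
        return self

    def score(self, X):
        Xc = X - self.mean_
        recon = Xc @ self.components_.T @ self.components_
        return np.linalg.norm(Xc - recon, axis=1)  # higher = more anomalous
```

A nonlinear autoencoder or GAN-based critic replaces the SVD step in the neural methods the abstract refers to, but the train-on-one-class, score-by-reconstruction pattern is the same.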
no code implementations • COLING 2020 • Hao Wang, Shuai Wang, Sahisnu Mazumder, Bing Liu, Yan Yang, Tianrui Li
After each sentiment classification task is learned, its knowledge is retained to help future task learning.
no code implementations • COLING 2020 • Wenpeng Hu, Ran Le, Bing Liu, Jinwen Ma, Dongyan Zhao, Rui Yan
Understanding neural models is a major topic of interest in the deep learning community.
1 code implementation • 25 Nov 2020 • Anzhu Yu, Wenyue Guo, Bing Liu, Xin Chen, Xin Wang, Xuefeng Cao, Bingchuan Jiang
This strategy estimates the depth map at the coarsest level, while the depth maps at finer levels are obtained by upsampling the depth map from the previous level and adding a pixel-wise depth residual.
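The coarse-to-fine refinement step can be sketched in a few lines: upsample the coarser depth map and add a residual predicted at the finer resolution. In practice the upsampling is usually bilinear and the residual comes from a network; nearest-neighbour upsampling here is a simplifying assumption:

```python
import numpy as np

def refine_depth(coarse_depth, residual):
    """Coarse-to-fine depth refinement: upsample the coarser depth map
    (nearest-neighbour, factor 2) and add a pixel-wise residual
    predicted at the finer level."""
    up = np.repeat(np.repeat(coarse_depth, 2, axis=0), 2, axis=1)
    return up + residual
```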
no code implementations • 19 Nov 2020 • Bing Liu, Chuhe Mei
One of the main weaknesses of current chatbots or dialogue systems is that they do not learn online during conversations after they are deployed.
1 code implementation • 10 Nov 2020 • Lei Ding, Kai Zheng, Dong Lin, Yuxing Chen, Bing Liu, Jiansheng Li, Lorenzo Bruzzone
This CNN architecture can be used as a baseline method for future studies on the semantic segmentation of PolSAR images.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Qi Qin, Wenpeng Hu, Bing Liu
It proposes a new lifelong learning model (called L2PG) that can retain and selectively transfer the knowledge learned in the past to help learn the new task.
2 code implementations • COLING 2020 • Hu Xu, Lei Shu, Philip S. Yu, Bing Liu
Most features in the representation of an aspect are dedicated to the fine-grained semantics of the domain (or product category) and the aspect itself, instead of carrying summarized opinions from its context.
Aspect-Based Sentiment Analysis
1 code implementation • Findings (EMNLP) 2021 • Zhiyu Chen, Honglei Liu, Hu Xu, Seungwhan Moon, Hao Zhou, Bing Liu
As there is no clean mapping for a user's free form utterance to an ontology, we first model the user preferences as estimated distributions over the system ontology and map the users' utterances to such distributions.
1 code implementation • NAACL 2021 • Kai Sun, Seungwhan Moon, Paul Crook, Stephen Roller, Becka Silvert, Bing Liu, Zhiguang Wang, Honglei Liu, Eunjoon Cho, Claire Cardie
Existing dialogue corpora and models are typically designed under two disjoint motives: while task-oriented systems focus on achieving functional goals (e.g., booking hotels), open-domain chatbots aim at making socially engaging conversations.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Jiahua Chen, Shuai Wang, Sahisnu Mazumder, Bing Liu
Classifying and resolving coreferences of objects (e.g., product names) and attributes (e.g., product aspects) in opinionated reviews is crucial for improving the opinion mining performance.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Lei Shu, Alexandros Papangelis, Yi-Chia Wang, Gokhan Tur, Hu Xu, Zhaleh Feizollahi, Bing Liu, Piero Molino
This work introduces Focused-Variation Network (FVN), a novel model to control language generation.
no code implementations • 23 Sep 2020 • Qi Qin, Wenpeng Hu, Bing Liu
In this paper, we propose a significantly more effective approach that converts the original problem to a pair-wise matching problem and then outputs how probable two instances belong to the same class.
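The pair-wise reformulation described above can be sketched as follows: turn labeled instances into (feature-difference, same-class) pairs, then train a probabilistic matcher on those pairs. This is a generic toy realization with logistic regression on absolute feature differences; the pairing scheme, features, and hyperparameters are assumptions, not the paper's architecture:

```python
import numpy as np

def make_pairs(X, y, rng, n_pairs_factor=4):
    """Convert a labeled set into (|xi - xj|, same_class) training pairs."""
    feats, labels = [], []
    n = len(X)
    for _ in range(n_pairs_factor * n):
        i, j = rng.integers(0, n, size=2)
        feats.append(np.abs(X[i] - X[j]))
        labels.append(1.0 if y[i] == y[j] else 0.0)
    return np.array(feats), np.array(labels)

def train_matcher(F, t, lr=0.5, steps=500):
    """Logistic regression on pair features -> P(same class)."""
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(steps):
        z = np.clip(F @ w + b, -30, 30)       # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))
        g = p - t                              # gradient of log-loss
        w -= lr * F.T @ g / len(t)
        b -= lr * g.mean()
    return w, b

def match_prob(w, b, x1, x2):
    """How probable is it that x1 and x2 belong to the same class?"""
    z = np.clip(np.abs(x1 - x2) @ w + b, -30, 30)
    return 1.0 / (1.0 + np.exp(-z))
```

A query instance can then be classified by comparing its match probability against exemplars of each class, which is what makes the formulation usable when classes are scarce or evolving.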
no code implementations • 22 Sep 2020 • Bing Liu, Sahisnu Mazumder
Due to the huge amount of manual effort involved, they are difficult to scale and also tend to produce many errors owing to their limited ability to understand natural language and the limited knowledge in their KBs.
no code implementations • 1 Sep 2020 • Bing Liu, Anzhu Yu, Pengqiang Zhang, Lei Ding, Wenyue Guo, Kuiliang Gao, Xibing Zuo
First, a deep densely connected convolutional network is considered for hyperspectral image classification.
no code implementations • ACL 2020 • Nianzu Ma, Sahisnu Mazumder, Hao Wang, Bing Liu
This paper studies the task of comparative preference classification (CPC).
no code implementations • ACL 2020 • Qi Qin, Wenpeng Hu, Bing Liu
In this paper, we propose a novel angle to further improve this representation learning, i. e., feature projection.
no code implementations • COLING 2020 • Hu Xu, Seungwhan Moon, Honglei Liu, Pararth Shah, Bing Liu, Philip S. Yu
We study a conversational recommendation model which dynamically manages users' past (offline) preferences and current (online) requests through a structured and cumulative user memory knowledge graph, to allow for natural interactions and accurate recommendations.
no code implementations • Findings (ACL) 2021 • Shuai Wang, Guangyi Lv, Sahisnu Mazumder, Bing Liu
We refer to this problem as domain polarity-changes of words.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Hu Xu, Bing Liu, Lei Shu, Philip S. Yu
This paper focuses on learning domain-oriented language models driven by end tasks, which aims to combine the worlds of both general-purpose language models (such as ELMo and BERT) and domain-specific language understanding.
Aspect-Based Sentiment Analysis
no code implementations • 1 Apr 2020 • Jie Liu, Xiaotian Wu, Kai Zhang, Bing Liu, Renyi Bao, Xiao Chen, Yiran Cai, Yiming Shen, Xinjun He, Jun Yan, Weixing Ji
With the boom in next-generation sequencing technology and its adoption in clinical practice and life-science research, the need for faster and more efficient data analysis methods is becoming pressing in the sequencing field.
1 code implementation • COLING 2020 • Wenpeng Hu, Mengyu Wang, Bing Liu, Feng Ji, Haiqing Chen, Dongyan Zhao, Jinwen Ma, Rui Yan
The key idea of the proposed approach is to use a Forward Transformation to transform dense representations to sparse representations.
1 code implementation • 4 Nov 2019 • Hu Xu, Bing Liu, Lei Shu, Philip S. Yu
Aspect-based sentiment classification (ASC) is an important task in fine-grained sentiment analysis. Deep supervised ASC approaches typically model this task as a pair-wise classification task that takes an aspect and a sentence containing the aspect and outputs the polarity of the aspect in that sentence.
no code implementations • 30 Oct 2019 • Sahisnu Mazumder, Bing Liu, Shuai Wang, Sepideh Esmaeilpour
Traditional approaches to building natural language (NL) interfaces typically use a semantic parser to parse the user command and convert it to a logical form, which is then translated to an executable action in an application.
no code implementations • 16 Oct 2019 • Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, Fuchun Peng
We find that mix-review effectively regularizes the finetuning process, and the forgetting problem is alleviated to some extent.
no code implementations • 25 Sep 2019 • Wenpeng Hu, Ran Le, Bing Liu, Feng Ji, Haiqing Chen, Dongyan Zhao, Jinwen Ma, Rui Yan
Positive-unlabeled (PU) learning learns a binary classifier using only positive and unlabeled examples without labeled negative examples.
no code implementations • 25 Sep 2019 • Gyuhak Kim, Bing Liu
The idea is that in learning a new task, if we can ensure that the gradient updates will only occur in the orthogonal directions to the input vectors of the previous tasks, then the weight updates for learning the new task will not affect the previous tasks.
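The orthogonal-update idea described above has a compact linear-algebra core: project each gradient onto the subspace orthogonal to the stored input vectors of previous tasks, so that (for a linear layer) the update leaves previous tasks' outputs unchanged. The sketch below shows that projection; it mirrors the general OGD-style mechanism, and the paper's exact procedure may differ:

```python
import numpy as np

def project_orthogonal(grad, prev_inputs):
    """Project a gradient onto the subspace orthogonal to previous
    tasks' input vectors.

    prev_inputs: (k, d) array whose rows span the protected subspace.
    For a linear layer w @ x, updating w along the returned direction
    leaves w @ x unchanged for every stored x.
    """
    A = prev_inputs.T                     # (d, k)
    Q, _ = np.linalg.qr(A)                # orthonormal basis of the span
    return grad - Q @ (Q.T @ grad)        # remove the protected component
```

Because the projected gradient has zero inner product with every stored input, the weight update cannot change the pre-activations those inputs produce, which is the mechanism for avoiding interference with previous tasks.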
no code implementations • IJCNLP 2019 • Hao Wang, Bing Liu, Chaozhuo Li, Yan Yang, Tianrui Li
We propose a novel DNN model called NetAb (as shorthand for convolutional neural Networks with Ab-networks) to handle noisy labels during training.
1 code implementation • IJCNLP 2019 • Lei Shu, Hu Xu, Bing Liu, Piero Molino
Dialogue management (DM) plays a key role in the quality of the interaction with the user in a task-oriented dialogue system.
1 code implementation • WS 2019 • Lei Shu, Piero Molino, Mahdi Namazifar, Hu Xu, Bing Liu, Huaixiu Zheng, Gokhan Tur
It is based on a simple and practical yet very effective sequence-to-sequence approach, where language understanding and state tracking tasks are modeled jointly with a structured copy-augmented sequential decoder and a multi-label decoder for each slot.
no code implementations • WS 2019 • Sahisnu Mazumder, Bing Liu, Shuai Wang, Nianzu Ma
Dialogue systems are increasingly using knowledge bases (KBs) storing real-world facts to help generate quality responses.
no code implementations • 8 Jun 2019 • Hao Wang, Bing Liu, Shuai Wang, Nianzu Ma, Yan Yang
That is, it is possible to improve the NB classifier for a task by improving its model parameters directly by using the retained knowledge from other tasks.
1 code implementation • ACL 2019 • Huaishao Luo, Tianrui Li, Bing Liu, Junbo Zhang
This paper focuses on two related subtasks of aspect-based sentiment analysis, namely aspect term extraction and aspect sentiment classification, which we call aspect term-polarity co-extraction.
Aspect-Based Sentiment Analysis
1 code implementation • 31 May 2019 • Wenpeng Hu, Zhangming Chan, Bing Liu, Dongyan Zhao, Jinwen Ma, Rui Yan
Existing neural models for dialogue response generation assume that utterances are sequentially organized.
no code implementations • 31 May 2019 • Hao Wang, Linlin Zong, Bing Liu, Yan Yang, Wei Zhou
In this work, we show a strong link between perturbation risk bounds and incomplete multi-view clustering.
no code implementations • 15 May 2019 • Lei Shu, Hu Xu, Bing Liu
The modified CNN has two types of control modules.
no code implementations • ICLR 2019 • Wenpeng Hu, Zhengwei Tao, Zhanxing Zhu, Bing Liu, Zhou Lin, Jinwen Ma, Dongyan Zhao, Rui Yan
A large amount of parallel data is needed to train a strong neural machine translation (NMT) system.
no code implementations • ICLR 2019 • Wenpeng Hu, Zhou Lin, Bing Liu, Chongyang Tao, Zhengwei Tao, Jinwen Ma, Dongyan Zhao, Rui Yan
Several continual learning methods have been proposed to address the problem.