no code implementations • 5 Jul 2023 • Jiaqi Xu, Cheng Luo, Weicheng Xie, Linlin Shen, Xiaofeng Liu, Lu Liu, Hatice Gunes, Siyang Song
Verbal and non-verbal human reaction generation is a challenging task, as different reactions could be appropriate for responding to the same behaviour.
1 code implementation • 24 May 2023 • Tong Xu, Micol Spitale, Hao Tang, Lu Liu, Hatice Gunes, Siyang Song
This means that we approach the problem by generating a distribution of the listener's appropriate facial reactions, rather than multiple different appropriate facial reactions, i.e., 'many' appropriate facial reaction labels are summarised as 'one' distribution label during training.
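A minimal sketch of summarising many appropriate reaction labels as one distribution label, assuming each label is a fixed-length vector and the distribution is a per-dimension Gaussian (the Gaussian form and the helper name are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def summarise_labels(reaction_labels):
    """Summarise 'many' appropriate reaction labels as 'one'
    per-dimension Gaussian distribution label (mean, std)."""
    labels = np.asarray(reaction_labels, dtype=float)  # shape: (n_labels, dim)
    return labels.mean(axis=0), labels.std(axis=0)

# Three appropriate facial-reaction labels for the same speaker behaviour
labels = [[0.2, 0.8], [0.4, 0.6], [0.3, 0.7]]
mean, std = summarise_labels(labels)
```

During training, the model would then be supervised with `(mean, std)` rather than with any single reaction label.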
1 code implementation • 23 May 2023 • Ridong Han, Tao Peng, Chaohao Yang, Benyou Wang, Lu Liu, Xiang Wan
ChatGPT has stimulated the research boom in the field of large language models.
no code implementations • 20 Mar 2023 • Ly V. Nguyen, Lu Liu, Nguyen Linh-Trung, A. Lee Swindlehurst
While block-wise processing (BWP) can effectively address the inter-symbol-interference (ISI) in frequency-selective fading channels, its computational complexity and processing delay can be too high for practical implementation.
no code implementations • 21 Feb 2023 • Lu Liu, Lei Zhou, Yuhan Dong
This allows the camera to capture images with shallow depth-of-field, in which only a small area of the image is in sharp focus, while the rest of the image is blurred.
no code implementations • 20 Jan 2023 • Lu Liu, Christos Masouros, A. Lee Swindlehurst
This paper focuses on designing robust symbol-level precoding (SLP) in an overlay cognitive radio (CR) network, where the primary and secondary networks transmit signals concurrently.
1 code implementation • 20 Dec 2022 • Ridong Han, Tao Peng, Benyou Wang, Lu Liu, Xiang Wan
Document-level relation extraction faces two overlooked challenges: long-tail problem and multi-label problem.
3 code implementations • 9 Nov 2022 • BigScience Workshop, :, Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, Dragomir Radev, Eduardo González Ponferrada, Efrat Levkovizh, Ethan Kim, Eyal Bar Natan, Francesco De Toni, Gérard Dupont, Germán Kruszewski, Giada Pistilli, Hady Elsahar, Hamza Benyamina, Hieu Tran, Ian Yu, Idris Abdulmumin, Isaac Johnson, Itziar Gonzalez-Dios, Javier de la Rosa, Jenny Chim, Jesse Dodge, Jian Zhu, Jonathan Chang, Jörg Frohberg, Joseph Tobing, Joydeep Bhattacharjee, Khalid Almubarak, Kimbo Chen, Kyle Lo, Leandro von Werra, Leon Weber, Long Phan, Loubna Ben allal, Ludovic Tanguy, Manan Dey, Manuel Romero Muñoz, Maraim Masoud, María Grandury, Mario Šaško, Max Huang, Maximin Coavoux, Mayank Singh, Mike Tian-Jian Jiang, Minh Chien Vu, Mohammad A. 
Jauhar, Mustafa Ghaleb, Nishant Subramani, Nora Kassner, Nurulaqilla Khamis, Olivier Nguyen, Omar Espejel, Ona de Gibert, Paulo Villegas, Peter Henderson, Pierre Colombo, Priscilla Amuok, Quentin Lhoest, Rheza Harliman, Rishi Bommasani, Roberto Luis López, Rui Ribeiro, Salomey Osei, Sampo Pyysalo, Sebastian Nagel, Shamik Bose, Shamsuddeen Hassan Muhammad, Shanya Sharma, Shayne Longpre, Somaieh Nikpoor, Stanislav Silberberg, Suhas Pai, Sydney Zink, Tiago Timponi Torrent, Timo Schick, Tristan Thrush, Valentin Danchev, Vassilina Nikoulina, Veronika Laippala, Violette Lepercq, Vrinda Prabhu, Zaid Alyafeai, Zeerak Talat, Arun Raja, Benjamin Heinzerling, Chenglei Si, Davut Emre Taşar, Elizabeth Salesky, Sabrina J. Mielke, Wilson Y. Lee, Abheesht Sharma, Andrea Santilli, Antoine Chaffin, Arnaud Stiegler, Debajyoti Datta, Eliza Szczechla, Gunjan Chhablani, Han Wang, Harshit Pandey, Hendrik Strobelt, Jason Alan Fries, Jos Rozen, Leo Gao, Lintang Sutawika, M Saiful Bari, Maged S. Al-shaibani, Matteo Manica, Nihal Nayak, Ryan Teehan, Samuel Albanie, Sheng Shen, Srulik Ben-David, Stephen H. 
Bach, Taewoon Kim, Tali Bers, Thibault Fevry, Trishala Neeraj, Urmish Thakker, Vikas Raunak, Xiangru Tang, Zheng-Xin Yong, Zhiqing Sun, Shaked Brody, Yallow Uri, Hadar Tojarieh, Adam Roberts, Hyung Won Chung, Jaesung Tae, Jason Phang, Ofir Press, Conglong Li, Deepak Narayanan, Hatim Bourfoune, Jared Casper, Jeff Rasley, Max Ryabinin, Mayank Mishra, Minjia Zhang, Mohammad Shoeybi, Myriam Peyrounette, Nicolas Patry, Nouamane Tazi, Omar Sanseviero, Patrick von Platen, Pierre Cornette, Pierre François Lavallée, Rémi Lacroix, Samyam Rajbhandari, Sanchit Gandhi, Shaden Smith, Stéphane Requena, Suraj Patil, Tim Dettmers, Ahmed Baruwa, Amanpreet Singh, Anastasia Cheveleva, Anne-Laure Ligozat, Arjun Subramonian, Aurélie Névéol, Charles Lovering, Dan Garrette, Deepak Tunuguntla, Ehud Reiter, Ekaterina Taktasheva, Ekaterina Voloshina, Eli Bogdanov, Genta Indra Winata, Hailey Schoelkopf, Jan-Christoph Kalo, Jekaterina Novikova, Jessica Zosa Forde, Jordan Clive, Jungo Kasai, Ken Kawamura, Liam Hazan, Marine Carpuat, Miruna Clinciu, Najoung Kim, Newton Cheng, Oleg Serikov, Omer Antverg, Oskar van der Wal, Rui Zhang, Ruochen Zhang, Sebastian Gehrmann, Shachar Mirkin, Shani Pais, Tatiana Shavrina, Thomas Scialom, Tian Yun, Tomasz Limisiewicz, Verena Rieser, Vitaly Protasov, Vladislav Mikhailov, Yada Pruksachatkun, Yonatan Belinkov, Zachary Bamberger, Zdeněk Kasner, Alice Rueda, Amanda Pestana, Amir Feizpour, Ammar Khan, Amy Faranak, Ana Santos, Anthony Hevia, Antigona Unldreaj, Arash Aghagol, Arezoo Abdollahi, Aycha Tammour, Azadeh HajiHosseini, Bahareh Behroozi, Benjamin Ajibade, Bharat Saxena, Carlos Muñoz Ferrandis, Daniel McDuff, Danish Contractor, David Lansky, Davis David, Douwe Kiela, Duong A. 
Nguyen, Edward Tan, Emi Baylor, Ezinwanne Ozoani, Fatima Mirza, Frankline Ononiwu, Habib Rezanejad, Hessie Jones, Indrani Bhattacharya, Irene Solaiman, Irina Sedenko, Isar Nejadgholi, Jesse Passmore, Josh Seltzer, Julio Bonis Sanz, Livia Dutra, Mairon Samagaio, Maraim Elbadri, Margot Mieskes, Marissa Gerchick, Martha Akinlolu, Michael McKenna, Mike Qiu, Muhammed Ghauri, Mykola Burynok, Nafis Abrar, Nazneen Rajani, Nour Elkott, Nour Fahmy, Olanrewaju Samuel, Ran An, Rasmus Kromann, Ryan Hao, Samira Alizadeh, Sarmad Shubber, Silas Wang, Sourav Roy, Sylvain Viguier, Thanh Le, Tobi Oyebade, Trieu Le, Yoyo Yang, Zach Nguyen, Abhinav Ramesh Kashyap, Alfredo Palasciano, Alison Callahan, Anima Shukla, Antonio Miranda-Escalada, Ayush Singh, Benjamin Beilharz, Bo wang, Caio Brito, Chenxi Zhou, Chirag Jain, Chuxin Xu, Clémentine Fourrier, Daniel León Periñán, Daniel Molano, Dian Yu, Enrique Manjavacas, Fabio Barth, Florian Fuhrimann, Gabriel Altay, Giyaseddin Bayrak, Gully Burns, Helena U. Vrabec, Imane Bello, Ishani Dash, Jihyun Kang, John Giorgi, Jonas Golde, Jose David Posada, Karthik Rangasai Sivaraman, Lokesh Bulchandani, Lu Liu, Luisa Shinzato, Madeleine Hahn de Bykhovetz, Maiko Takeuchi, Marc Pàmies, Maria A Castillo, Marianna Nezhurina, Mario Sänger, Matthias Samwald, Michael Cullan, Michael Weinberg, Michiel De Wolf, Mina Mihaljcic, Minna Liu, Moritz Freidank, Myungsun Kang, Natasha Seelam, Nathan Dahlberg, Nicholas Michio Broad, Nikolaus Muellner, Pascale Fung, Patrick Haller, Ramya Chandrasekhar, Renata Eisenberg, Robert Martin, Rodrigo Canalli, Rosaline Su, Ruisi Su, Samuel Cahyawijaya, Samuele Garda, Shlok S Deshmukh, Shubhanshu Mishra, Sid Kiblawi, Simon Ott, Sinee Sang-aroonsiri, Srishti Kumar, Stefan Schweter, Sushil Bharati, Tanmay Laud, Théo Gigant, Tomoya Kainuma, Wojciech Kusa, Yanis Labrak, Yash Shailesh Bajaj, Yash Venkatraman, Yifan Xu, Yingxin Xu, Yu Xu, Zhe Tan, Zhongli Xie, Zifan Ye, Mathilde Bras, Younes Belkada, Thomas Wolf
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions.
2 code implementations • 21 Sep 2022 • Yue Tan, Guodong Long, Jie Ma, Lu Liu, Tianyi Zhou, Jing Jiang
To prevent these issues from hindering the deployment of FL systems, we propose a lightweight framework where clients jointly learn to fuse the representations generated by multiple fixed pre-trained models rather than training a large-scale model from scratch.
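A minimal sketch of the fusion idea under stated assumptions: each client keeps several fixed pre-trained encoders, learns only a small per-model weight vector to fuse their representations, and the server averages just those lightweight weights (function names and the softmax fusion rule are illustrative, not the paper's API):

```python
import numpy as np

def client_fuse(reps, weights):
    """Fuse representations from multiple fixed pre-trained models
    with learnable per-model weights (softmax-normalised)."""
    w = np.exp(weights) / np.exp(weights).sum()
    return np.tensordot(w, reps, axes=1)  # weighted sum over models

def server_aggregate(client_weights):
    """Aggregate only the lightweight fusion weights across clients,
    never the large pre-trained models themselves."""
    return np.mean(client_weights, axis=0)

reps = np.array([[1.0, 1.0],   # representation from model 1
                 [3.0, 3.0]])  # representation from model 2
fused = client_fuse(reps, np.zeros(2))  # equal weights -> plain average
```

Because only the small weight vectors travel between clients and server, communication stays cheap regardless of how large the frozen backbone models are.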
no code implementations • 16 Aug 2022 • Jiayan Gu, Ashiq Anjum, Yan Wu, Lu Liu, John Panneerselvam, Yao Lu, Bo Yuan
The experimental results show that, under three indexing models, the proposed least-used key selection method significantly improves service retrieval efficiency compared with the designated key selection method when parameters appear in service retrieval requests with unequal probability.
no code implementations • 16 Nov 2021 • Jin Zhang, Lu Liu, Xinghu Wang, Haibo Ji
In this note, a novel observer-based output feedback control approach is proposed to address the distributed optimal output consensus problem of uncertain nonlinear multi-agent systems in the normal form over unbalanced directed graphs.
2 code implementations • NeurIPS 2021 • Xinyang Jiang, Lu Liu, Caihua Shan, Yifei Shen, Xuanyi Dong, Dongsheng Li
In this paper, we consider a different data format for images: vector graphics.
no code implementations • 7 Oct 2021 • Dongyang Dai, Yuanzhe Chen, Li Chen, Ming Tu, Lu Liu, Rui Xia, Qiao Tian, Yuping Wang, Yuxuan Wang
(2) How to clone a person's voice while controlling the style and prosody.
no code implementations • 30 Aug 2021 • Bo Li, Xinyang Jiang, Donglin Bai, Yuge Zhang, Ningxin Zheng, Xuanyi Dong, Lu Liu, Yuqing Yang, Dongsheng Li
The energy consumption of deep learning models is increasing at a breathtaking rate, which raises concerns due to potential negative effects on carbon neutrality in the context of global warming and climate change.
no code implementations • 7 Jun 2021 • William Hebgen Guss, Stephanie Milani, Nicholay Topin, Brandon Houghton, Sharada Mohanty, Andrew Melnik, Augustin Harter, Benoit Buschmaas, Bjarne Jaster, Christoph Berganski, Dennis Heitkamp, Marko Henning, Helge Ritter, Chengjie WU, Xiaotian Hao, Yiming Lu, Hangyu Mao, Yihuan Mao, Chao Wang, Michal Opanowicz, Anssi Kanervisto, Yanick Schraner, Christian Scheller, Xiren Zhou, Lu Liu, Daichi Nishio, Toi Tsuneda, Karolis Ramanauskas, Gabija Juceviciute
Reinforcement learning competitions have formed the basis for standard research benchmarks, galvanized advances in the state-of-the-art, and shaped the direction of the field.
1 code implementation • 18 May 2021 • Ridong Han, Tao Peng, Jiayu Han, Hai Cui, Lu Liu
Based on the above, in this paper, we design a novel Recursive Hierarchy-Interactive Attention network (RHIA) to further handle long-tail relations, which models the heuristic effect between relation levels.
no code implementations • 7 May 2021 • Lu Liu, Robby T. Tan
At inference, we propose a human-object regrouping approach by considering the object-exclusive property of an action, where the target object should not be shared by more than one human.
3 code implementations • 1 May 2021 • Yue Tan, Guodong Long, Lu Liu, Tianyi Zhou, Qinghua Lu, Jing Jiang, Chengqi Zhang
Heterogeneity across clients in federated learning (FL) usually hinders the optimization convergence and generalization performance when the aggregation of clients' knowledge occurs in the gradient space.
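The paper's title (FedProto) suggests aggregating in the representation space instead; a minimal sketch of class-prototype aggregation, assuming each client reports only the mean embedding per class it holds (helper names are illustrative, not the paper's API):

```python
import numpy as np

def local_prototypes(embeddings, labels, num_classes):
    """Per-class mean embedding (prototype) computed on one client."""
    protos = {}
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = embeddings[mask].mean(axis=0)
    return protos

def aggregate_prototypes(client_protos):
    """Server averages prototypes class-by-class across clients,
    skipping classes a client does not hold (data heterogeneity)."""
    agg = {}
    for protos in client_protos:
        for c, p in protos.items():
            agg.setdefault(c, []).append(p)
    return {c: np.mean(ps, axis=0) for c, ps in agg.items()}

client_1 = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
client_2 = {0: np.array([3.0, 0.0])}  # this client has no class-1 data
global_protos = aggregate_prototypes([client_1, client_2])
```

Exchanging prototypes rather than gradients sidesteps the gradient-space aggregation that the snippet identifies as the source of convergence and generalization problems.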
no code implementations • ICLR 2021 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Xuanyi Dong, Chengqi Zhang
To resolve this problem, we propose Isometric Propagation Network (IPN), which learns to strengthen the relation between classes within each space and align the class dependency in the two spaces.
no code implementations • 24 Jan 2021 • Xiaohan Zhang, Lu Liu, Guodong Long, Jing Jiang, Shenquan Liu
A typical way to study cognitive function is to record the electrical activity of animal neurons while the animals are trained to perform behavioral tasks.
5 code implementations • ICLR 2021 • Shuo Yang, Lu Liu, Min Xu
In this paper, we calibrate the distribution of these few-sample classes by transferring statistics from the classes with sufficient examples, then an adequate number of examples can be sampled from the calibrated distribution to expand the inputs to the classifier.
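A minimal sketch of this distribution-calibration idea, assuming class-conditional Gaussian features: borrow mean and covariance statistics from the nearest base classes, then sample extra examples for the few-sample class (hyperparameters `k` and `alpha` and the helper name are illustrative assumptions):

```python
import numpy as np

def calibrate_and_sample(support, base_means, base_covs,
                         k=2, alpha=0.2, n_samples=100, rng=None):
    """Calibrate a few-sample class's distribution using statistics
    transferred from the k nearest base classes, then sample from it."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = support.mean(axis=0)                      # few-shot class centre
    dists = np.linalg.norm(base_means - x, axis=1)
    nearest = np.argsort(dists)[:k]               # k closest base classes
    mean = (base_means[nearest].sum(axis=0) + x) / (k + 1)
    cov = base_covs[nearest].mean(axis=0) + alpha * np.eye(len(x))
    return rng.multivariate_normal(mean, cov, size=n_samples)

support = np.array([[0.0, 0.0]])                  # one labelled example
base_means = np.array([[0.1, 0.1], [5.0, 5.0]])
base_covs = np.array([np.eye(2), np.eye(2)])
samples = calibrate_and_sample(support, base_means, base_covs, k=1, n_samples=50)
```

The generated `samples` can then be fed to an ordinary classifier as if they were real training examples for the rare class.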
no code implementations • 1 Jan 2021 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Xuanyi Dong, Chengqi Zhang
Few-shot learning aims to train a classifier given only a few samples per class that are highly insufficient to describe the whole data distribution.
no code implementations • 1 Jan 2021 • Iddo Drori, Brandon Kates, Anant Kharkar, Lu Liu, Qiang Ma, Jonah Deykin, Nihar Sidhu, Madeleine Udell
We train a graph neural network in which each node represents a dataset to predict the best machine learning pipeline for a new test dataset.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Lu Liu, Yi Zhou, Jianhan Xu, Xiaoqing Zheng, Kai-Wei Chang, Xuanjing Huang
The words in each sentence of a source language corpus are rearranged to meet the word order in a target language under the guidance of a part-of-speech based language model (LM).
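A toy sketch of picking the reordering a POS-based LM prefers — brute-force permutation scoring over a POS bigram model (real systems restrict the candidate reorderings; the scoring table and default log-probability here are illustrative assumptions):

```python
import itertools
import math

def best_reordering(words, pos_tags, pos_bigram_logprob):
    """Rearrange a source sentence to maximise the score of its POS
    sequence under a target-language POS bigram language model."""
    best, best_score = list(words), -math.inf
    for perm in itertools.permutations(range(len(words))):
        tags = [pos_tags[i] for i in perm]
        score = sum(pos_bigram_logprob.get((a, b), -10.0)  # unseen pairs penalised
                    for a, b in zip(tags, tags[1:]))
        if score > best_score:
            best_score, best = score, [words[i] for i in perm]
    return best

# Target-language LM strongly prefers PRON -> VERB -> NOUN order
lm = {("PRON", "VERB"): -0.1, ("VERB", "NOUN"): -0.1}
reordered = best_reordering(["ate", "he", "apples"], ["VERB", "PRON", "NOUN"], lm)
```

The scrambled source words come out in the word order the target-language POS model assigns the highest score.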
no code implementations • 24 Sep 2020 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
To address this challenging task, most ZSL methods relate unseen test classes to seen (training) classes via a pre-defined set of attributes that can describe all classes in the same semantic space, so the knowledge learned on the training classes can be adapted to unseen classes.
no code implementations • 10 Sep 2020 • Lu Liu, Zhenqiao Song, Xiaoqing Zheng, Jun He
One of the major challenges in coreference resolution is how to make use of entity-level features defined over clusters of mentions rather than mention pairs.
2 code implementations • 28 Aug 2020 • Xuanyi Dong, Lu Liu, Katarzyna Musial, Bogdan Gabrys
In this paper, we propose NATS-Bench, a unified benchmark on searching for both topology and size, for (almost) any up-to-date NAS algorithm.
no code implementations • 20 Aug 2020 • Lu Liu, Ke Yang, Guangyu Wang, Hua Wu
Two-dimensional (2D) ferromagnets (FMs) have attracted widespread attention due to their prospects in spintronic applications.
Materials Science · Strongly Correlated Electrons
1 code implementation • 28 Jun 2020 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
We study many-class few-shot (MCFS) problem in both supervised learning and meta-learning settings.
1 code implementation • ICLR 2021 • Lu Liu, William Hamilton, Guodong Long, Jing Jiang, Hugo Larochelle
We consider the problem of multi-domain few-shot image classification, where unseen classes and examples come from diverse data sources.
Ranked #1 on Few-Shot Image Classification on Meta-Dataset
1 code implementation • 3 Jun 2020 • Wensi Tang, Lu Liu, Guodong Long
Recent few-shot learning works focus on training a model with prior meta-knowledge to fast adapt to new tasks with unseen classes and samples.
no code implementations • 15 Apr 2020 • Jiehang Zeng, Lu Liu, Xiaoqing Zheng
A generative network (GN) takes two elements of a (subject, predicate, object) triple as input and generates the vector representation of the missing element.
3 code implementations • ICLR 2022 • Wensi Tang, Guodong Long, Lu Liu, Tianyi Zhou, Michael Blumenstein, Jing Jiang
Specifically, it is a set of kernel sizes that efficiently covers the best RF size across different datasets, built from multiple prime numbers chosen according to the length of the time series.
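A minimal sketch of building such a prime-number kernel-size set (the fraction-of-length cutoff and the inclusion of sizes 1 and 2 are assumptions for illustration, not the paper's exact rule):

```python
def prime_kernel_sizes(series_length, max_ratio=0.5):
    """Kernel-size set of 1, 2, and all odd primes up to a fraction of
    the series length; sums of these sizes cover many receptive-field
    sizes across datasets."""
    limit = max(2, int(series_length * max_ratio))
    sizes = [1, 2]
    for n in range(3, limit + 1, 2):
        # trial division by odd numbers up to sqrt(n)
        if all(n % p for p in range(3, int(n ** 0.5) + 1, 2)):
            sizes.append(n)
    return sizes
```

Because every integer greater than 1 is a sum of primes, stacking convolutions with these kernel sizes can reach a dense range of receptive-field sizes without hand-tuning per dataset.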
2 code implementations • 8 Oct 2019 • Iddo Drori, Lu Liu, Yi Nian, Sharath C. Koorathota, Jie S. Li, Antonio Khalil Moretti, Juliana Freire, Madeleine Udell
We use these embeddings in a neural architecture to learn the distance between best-performing pipelines.
1 code implementation • NeurIPS 2019 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
It can significantly improve tasks that suffer from insufficient training data, e.g., few-shot learning.
no code implementations • 19 Jul 2019 • Shuqiang Lu, Lingyun Ying, Wenjie Lin, Yu Wang, Meining Nie, Kaiwen Shen, Lu Liu, Haixin Duan
With the development of artificial intelligence algorithms such as deep learning models, and their successful application in many different fields, similar trials of deep learning technology have been made in the cyber security area.
no code implementations • ACL 2019 • Zhenqiao Song, Xiaoqing Zheng, Lu Liu, Mu Xu, Xuanjing Huang
It is desirable for dialog systems to have capability to express specific emotions during a conversation, which has a direct, quantifiable impact on improvement of their usability and user satisfaction.
1 code implementation • 20 Jun 2019 • Athar Sefid, Jian Wu, Allen C. Ge, Jing Zhao, Lu Liu, Cornelia Caragea, Prasenjit Mitra, C. Lee Giles
We introduce a system designed to match scholarly document entities with noisy metadata against a reference dataset.
2 code implementations • 10 May 2019 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Lina Yao, Chengqi Zhang
The resulting graph of prototypes can be continually re-used and updated for new tasks and classes.
no code implementations • ICLR 2019 • Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
It addresses the "many-class" problem by exploring the class hierarchy, e.g., the coarse-class label that covers a subset of fine classes, which helps to narrow down the candidates for the fine class and is cheaper to obtain.
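A minimal sketch of this coarse-to-fine narrowing (the dictionary hierarchy and function name are illustrative assumptions): the cheap coarse label restricts which fine classes are even considered before taking the highest-scoring one.

```python
def coarse_to_fine_predict(fine_scores, coarse_label, hierarchy):
    """Use a cheap coarse-class label to narrow down the fine-class
    candidates before picking the highest-scoring fine class."""
    candidates = hierarchy[coarse_label]  # fine classes under this coarse class
    return max(candidates, key=lambda c: fine_scores[c])

hierarchy = {"animal": [0, 1], "vehicle": [2, 3]}
scores = [0.1, 0.2, 0.9, 0.3]  # fine-class scores from some classifier
pred = coarse_to_fine_predict(scores, "animal", hierarchy)
```

Note that the globally highest-scoring fine class (index 2) is ruled out because it falls outside the given coarse class, which is exactly how the hierarchy narrows the candidates.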
no code implementations • 17 Jan 2019 • Lu Liu, Robby T. Tan
Specifically, we propose two approaches, i.e., Filtering CCL and Temperature CCL, to either filter out uncertain predictions or pay less attention to them in the consistency regularization.
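A minimal sketch of the filtering variant only (the threshold value, squared-error form, and function name are assumptions for illustration; the temperature variant, which down-weights rather than discards, is not shown):

```python
import numpy as np

def filtered_consistency_loss(teacher_probs, student_probs, threshold=0.8):
    """Certainty-aware consistency: keep only samples whose teacher
    confidence exceeds a threshold, then penalise the squared difference
    between teacher and student predictions on the kept samples."""
    conf = teacher_probs.max(axis=1)          # per-sample confidence
    mask = conf >= threshold                  # filter out uncertain predictions
    if not mask.any():
        return 0.0
    diff = teacher_probs[mask] - student_probs[mask]
    return float((diff ** 2).sum(axis=1).mean())

teacher = np.array([[0.9, 0.1], [0.6, 0.4]])  # second sample is uncertain
student = np.array([[0.8, 0.2], [0.5, 0.5]])
loss = filtered_consistency_loss(teacher, student)
```

Only the first sample passes the confidence filter, so the uncertain second prediction contributes nothing to the consistency term.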
no code implementations • 11 Jan 2019 • Yan Zhao, Lu Liu, Chunhua Liu, Ruoyao Yang, Dong Yu
We introduce a new task named Story Ending Generation (SEG), which aims at generating a coherent story ending from a given story plot.
no code implementations • 11 Dec 2018 • Lu Liu, Robby T. Tan, ShaoDi You
This requirement of bounding boxes as part of the input is needed to enable the methods to ignore irrelevant contexts and extract only human features.
no code implementations • SEMEVAL 2018 • Meiqian Zhao, Chunhua Liu, Lu Liu, Yan Zhao, Dong Yu
To comprehend an argument and fill the gap between claims and reasons, it is vital to find the implicit supporting warrants behind them.
1 code implementation • 24 Oct 2016 • Lu Liu, Zhiguang Wang
Time series and signals are attracting more attention across statistics, machine learning and pattern recognition, as they appear widely in industry, especially in sensor- and IoT-related research and applications; yet few advances have been achieved in effective time-series visual analytics and interaction, due to their temporal dimensionality and complex dynamics.
1 code implementation • 24 Oct 2016 • Zhiguang Wang, Wei Song, Lu Liu, Fan Zhang, Junxiao Xue, Yangdong Ye, Ming Fan, Mingliang Xu
We propose a new model based on the deconvolutional networks and SAX discretization to learn the representation for multivariate time series.
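The SAX discretization step mentioned here is standard and can be sketched concretely (the deconvolutional network is not shown; the 4-symbol alphabet and segment count below are illustrative choices):

```python
import numpy as np

# Gaussian breakpoints for a 4-symbol alphabet (standard SAX values)
BREAKPOINTS = [-0.6745, 0.0, 0.6745]

def sax(series, n_segments, alphabet="abcd"):
    """SAX discretisation: z-normalise, reduce with Piecewise Aggregate
    Approximation (PAA), then map each segment mean to a symbol via
    equiprobable Gaussian breakpoints."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-8)           # z-normalise
    segments = np.array_split(x, n_segments)        # PAA segments
    paa = np.array([s.mean() for s in segments])
    idx = np.searchsorted(BREAKPOINTS, paa)         # breakpoint bucket per segment
    return "".join(alphabet[i] for i in idx)

word = sax(range(8), 4)  # a rising series maps to rising symbols
```

The resulting symbolic word is what the deconvolutional model would then learn representations over, in place of the raw multivariate series.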