no code implementations • COLING 2022 • Ziming Li, Yan Zhou, Weibo Zhang, Yaxin Liu, Chuanpeng Yang, Zheng Lian, Songlin Hu
Our model also achieves state-of-the-art performance on a widely used sarcasm dataset.
1 code implementation • 27 Oct 2024 • Ziming Li, Qianbo Zang, David Ma, Jiawei Guo, Tuney Zheng, Minghao Liu, Xinyao Niu, Yue Wang, Jian Yang, Jiaheng Liu, Wanjun Zhong, Wangchunshu Zhou, Wenhao Huang, Ge Zhang
Data science tasks involving tabular data present complex challenges that require sophisticated problem-solving approaches.
1 code implementation • 25 Sep 2024 • Jinchuan Zhang, Yan Zhou, Yaxin Liu, Ziming Li, Songlin Hu
Automated red teaming is an effective method for identifying misaligned behaviors in large language models (LLMs).
1 code implementation • 2 Sep 2024 • Ruojun Zhou, Lisha Qu, Lei Zhang, Ziming Li, Hongwei Yu, Bing Luo
To address the above challenges, we propose a novel multi-modal federated learning (FL) framework for brain tumor segmentation, Fed-MUnet, that is suitable for FL training.
no code implementations • 12 Jun 2024 • Amogh Mannekote, Jinseok Nam, Ziming Li, Jian Gao, Kristy Elizabeth Boyer, Bonnie J. Dorr
Indirect User Requests (IURs), such as "It's cold in here" instead of "Could you please increase the temperature?"
no code implementations • 20 May 2024 • Yaxin Liu, Yan Zhou, Ziming Li, Jinchuan Zhang, Yu Shang, Chenyang Zhang, Songlin Hu
As an important multimodal sentiment analysis task, Joint Multimodal Aspect-Sentiment Analysis (JMASA), which aims to jointly extract aspect terms and their associated sentiment polarities from given text-image pairs, has gained increasing attention.
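As a purely illustrative sketch of the JMASA input/output format (the field names and example below are hypothetical, not taken from the paper), one text-image pair maps to a set of aspect-sentiment pairs:

```python
# Hypothetical JMASA example: given a text-image pair, the model must jointly
# extract aspect terms and their sentiment polarities.
example = {
    "text": "The camera on this phone is stunning, but the battery drains fast.",
    "image": "review_photo.jpg",  # path to the paired image
    "targets": [
        ("camera", "positive"),   # (aspect term, sentiment polarity)
        ("battery", "negative"),
    ],
}
```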
1 code implementation • 4 Apr 2024 • Jiawei Guo, Ziming Li, Xueling Liu, Kaijing Ma, Tianyu Zheng, Zhouliang Yu, Ding Pan, Yizhi Li, Ruibo Liu, Yue Wang, Shuyue Guo, Xingwei Qu, Xiang Yue, Ge Zhang, Wenhu Chen, Jie Fu
Large Language Models (LLMs) for code are rapidly evolving, with code editing emerging as a critical capability.
1 code implementation • 16 Nov 2023 • Xiangru Tang, Anni Zou, Zhuosheng Zhang, Ziming Li, Yilun Zhao, Xingyao Zhang, Arman Cohan, Mark Gerstein
Large language models (LLMs), despite their remarkable progress across various general domains, encounter significant barriers in medicine and healthcare.
1 code implementation • CVPR 2023 • Haoran Geng, Ziming Li, Yiran Geng, Jiayi Chen, Hao Dong, He Wang
Learning a generalizable object manipulation policy is vital for an embodied agent to work in complex real-world scenes.
1 code implementation • 27 May 2022 • Julia Kiseleva, Alexey Skrynnik, Artem Zholus, Shrestha Mohanty, Negar Arabzadeh, Marc-Alexandre Côté, Mohammad Aliannejadi, Milagro Teruel, Ziming Li, Mikhail Burtsev, Maartje ter Hoeve, Zoya Volovikova, Aleksandr Panov, Yuxuan Sun, Kavya Srinet, Arthur Szlam, Ahmed Awadallah
Starting from a very young age, humans acquire new skills and learn how to solve new tasks either by imitating the behavior of others or by following provided natural language instructions.
no code implementations • 5 May 2022 • Julia Kiseleva, Ziming Li, Mohammad Aliannejadi, Shrestha Mohanty, Maartje ter Hoeve, Mikhail Burtsev, Alexey Skrynnik, Artem Zholus, Aleksandr Panov, Kavya Srinet, Arthur Szlam, Yuxuan Sun, Marc-Alexandre Côté, Katja Hofmann, Ahmed Awadallah, Linar Abdrazakov, Igor Churin, Putra Manggala, Kata Naszadi, Michiel van der Meer, Taewoon Kim
The primary goal of the competition is to approach the problem of how to build interactive agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment.
no code implementations • 13 Oct 2021 • Julia Kiseleva, Ziming Li, Mohammad Aliannejadi, Shrestha Mohanty, Maartje ter Hoeve, Mikhail Burtsev, Alexey Skrynnik, Artem Zholus, Aleksandr Panov, Kavya Srinet, Arthur Szlam, Yuxuan Sun, Katja Hofmann, Michel Galley, Ahmed Awadallah
Starting from a very young age, humans acquire new skills and learn how to solve new tasks either by imitating the behavior of others or by following provided natural language instructions.
1 code implementation • 30 Apr 2021 • Ziming Li, Julia Kiseleva, Maarten de Rijke
The proposed backward reasoning step pushes the model to produce more informative and coherent content because the forward generation step's output is used to infer the dialogue context in the backward direction.
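A minimal sketch of how such a forward-plus-backward objective could be combined, assuming one model scores p(response | context) and another scores p(context | response); the function and tensor names are illustrative rather than the paper's implementation:

```python
import torch
import torch.nn as nn

def bidirectional_generation_loss(forward_logits, response_ids,
                                  backward_logits, context_ids, alpha=0.5):
    """Forward generation loss plus a backward context-reconstruction loss.

    forward_logits:  (batch, resp_len, vocab) logits for generating the response from the context
    backward_logits: (batch, ctx_len, vocab)  logits for reconstructing the context from the response
    """
    ce = nn.CrossEntropyLoss()
    # Standard forward objective: generate the response given the dialogue context.
    forward_loss = ce(forward_logits.reshape(-1, forward_logits.size(-1)),
                      response_ids.reshape(-1))
    # Backward objective: the response should carry enough information to let the
    # model infer (reconstruct) the dialogue context in the reverse direction.
    backward_loss = ce(backward_logits.reshape(-1, backward_logits.size(-1)),
                       context_ids.reshape(-1))
    return forward_loss + alpha * backward_loss
```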
no code implementations • 1 Mar 2021 • Ziming Li, Dookun Park, Julia Kiseleva, Young-Bum Kim, Sungjin Lee
Digital assistants are experiencing rapid growth due to their ability to assist users with day-to-day tasks in which most dialogues are multi-turn.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Ziming Li, Sungjin Lee, Baolin Peng, Jinchao Li, Julia Kiseleva, Maarten de Rijke, Shahin Shayandeh, Jianfeng Gao
Reinforcement learning methods have emerged as a popular choice for training an efficient and effective dialogue policy.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Ziming Li, Julia Kiseleva, Maarten de Rijke
Then, the traditional multi-label classification solution for dialogue policy learning is extended by adding dense layers to improve the dialogue agent performance.
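A minimal sketch of a multi-label dialogue policy with extra dense layers, assuming the dialogue state is already encoded as a fixed-size vector; the dimensions and names are illustrative, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class MultiLabelDialoguePolicy(nn.Module):
    """Predicts a multi-label vector over dialogue acts from an encoded dialogue state."""

    def __init__(self, state_dim, hidden_dim, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),   # additional dense layer
            nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),  # one logit per dialogue act
        )

    def forward(self, state):
        return self.net(state)  # raw logits; apply a sigmoid at inference time

# Multi-label training objective: each dialogue act is an independent binary label.
policy = MultiLabelDialoguePolicy(state_dim=128, hidden_dim=256, num_actions=50)
criterion = nn.BCEWithLogitsLoss()
logits = policy(torch.randn(4, 128))
loss = criterion(logits, torch.randint(0, 2, (4, 50)).float())
```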
no code implementations • 19 Jun 2020 • Ziming Li, Julia Kiseleva, Alekh Agarwal, Maarten de Rijke, Ryen W. White
Effective optimization is essential for real-world interactive systems to provide a satisfactory user experience in response to changing user behavior.
1 code implementation • 7 Apr 2020 • Ziming Li, Sungjin Lee, Baolin Peng, Jinchao Li, Julia Kiseleva, Maarten de Rijke, Shahin Shayandeh, Jianfeng Gao
Reinforcement Learning (RL) methods have emerged as a popular choice for training an efficient and effective dialogue policy.
1 code implementation • 6 Feb 2020 • Hao Du, Jing Guo, Ziming Li, Elaine Wong
We consider the additive decomposition problem in primitive towers and present an algorithm to decompose a function in an S-primitive tower as a sum of a derivative in the tower and a remainder which is minimal in some sense.
Symbolic Computation
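Schematically, the additive decomposition computed in the entry above has the shape below, where the prime denotes the derivation of the tower; the minimality notion and the precise integrability criterion are as defined in the paper (a generic sketch, not the paper's exact statement):

```latex
% Additive decomposition in an S-primitive tower (schematic form):
% f is written as a derivative in the tower plus a "small" remainder r;
% in particular, f has an antiderivative in the tower whenever r = 0.
f \;=\; g' \;+\; r
```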
1 code implementation • 9 Dec 2018 • Ziming Li, Julia Kiseleva, Maarten de Rijke
The performance of adversarial dialogue generation models relies on the quality of the reward signal produced by the discriminator.
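A minimal sketch of the usual way a discriminator score is turned into a generator reward via policy gradient (illustrative names; not the paper's exact training loop):

```python
import torch

def generator_policy_gradient_loss(log_probs, discriminator_scores):
    """REINFORCE-style loss where the discriminator score of a sampled response is the reward.

    log_probs:            (batch,) summed log-probabilities of the sampled response tokens
    discriminator_scores: (batch,) discriminator's probability that each response is human-like
    """
    # Subtract a simple batch-mean baseline to reduce the variance of the gradient estimate.
    rewards = discriminator_scores.detach()
    rewards = rewards - rewards.mean()
    # Maximising expected reward == minimising the negative reward-weighted log-likelihood.
    return -(rewards * log_probs).mean()
```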
no code implementations • 17 Feb 2018 • Ziming Li, Julia Kiseleva, Alekh Agarwal, Maarten de Rijke
Effective optimization is essential for interactive systems to provide a satisfactory user experience.
1 code implementation • 7 Feb 2018 • Shaoshi Chen, Hao Du, Ziming Li
This paper extends the classical Ostrogradsky-Hermite reduction for rational functions to more general functions in primitive extensions of certain types.
Symbolic Computation • MSC 33F10, 68W30, 12H05
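As background, the classical Ostrogradsky-Hermite reduction that the entry above generalizes writes a rational function as a derivative plus a fraction with squarefree denominator (schematic statement of the classical case only):

```latex
% Classical Ostrogradsky-Hermite reduction: every rational function f admits
%   f = g' + a/b,
% with g, a/b rational, deg(a) < deg(b), and b squarefree (gcd(b, b') = 1),
% so only the a/b part contributes logarithms to the integral of f.
f \;=\; g' \;+\; \frac{a}{b}, \qquad \gcd(b,\, b') = 1
```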
1 code implementation • 21 Jan 2013 • Alin Bostan, Shaoshi Chen, Frédéric Chyzak, Ziming Li, Guoce Xin
Based on this reduction algorithm, we design a new method to compute minimal telescopers for bivariate hyperexponential functions.
Symbolic Computation • Combinatorics • MSC 33F10
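For context, a standard formulation of the telescoper notion used in this bivariate hyperexponential setting is sketched below (background only; the paper's contribution is the reduction-based method for computing a telescoper of minimal order):

```latex
% For a hyperexponential function f(x, y), a telescoper is a nonzero linear
% differential operator L in D_x with coefficients in C(x) such that
%   L(x, D_x)(f) = D_y(g)
% for some hyperexponential certificate g; a minimal telescoper has least order.
L(x, D_x)\,(f) \;=\; D_y(g)
```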