no code implementations • 3 Mar 2020 • Jingyuan Yang, Guang Liu, Yuzhao Mao, Zhiwei Zhao, Weiguo Gao, Xuan Li, Haiqin Yang, Jianping Shen
Task 1 of the DSTC8-track1 challenge aims to develop an end-to-end multi-domain dialogue system that accomplishes users' complex goals in a tourist-information-desk setting.
no code implementations • 7 Jul 2020 • Guang Liu, Gang Tu, Zheng Li, Yi-Jian Liu
At present, most Natural Language Processing systems perform Dependency Parsing on the results of Word Segmentation, mainly using end-to-end methods based on supervised learning.
no code implementations • 8 Jul 2020 • Zheng Li, Gang Tu, Guang Liu, Zhi-Qiang Zhan, Yi-Jian Liu
The algorithm not only introduces background knowledge and recognizes all kinds of nested phrases in sentences, but also recognizes the dependencies between phrases.
no code implementations • 15 Oct 2020 • Yuzhao Mao, Qi Sun, Guang Liu, Xiaojie Wang, Weiguo Gao, Xuan Li, Jianping Shen
Emotion Recognition in Conversations (ERC) is essential for building empathetic human-machine systems.
1 code implementation • EMNLP 2021 • Guang Liu, Yuzhao Mao, Hailong Huang, Weiguo Gao, Xuan Li
To address these issues, we propose the Adversarial Mixing Policy (AMP), organized in a min-max-rand formulation, to relax the Locally Linear Constraints in Mixup.
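For context, the Locally Linear Constraint that AMP relaxes comes from standard Mixup, which interpolates pairs of examples and their labels with a Beta-distributed coefficient. A minimal sketch of that baseline is below; the AMP-specific min-max-rand perturbation of the mixing coefficient is not described in this excerpt, so it is not reproduced here.

```python
import numpy as np

def mixup_pair(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard Mixup baseline: convexly interpolate a pair of
    examples (x1, x2) and their one-hot labels (y1, y2) using a
    coefficient drawn from Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2       # mixed input
    y = lam * y1 + (1.0 - lam) * y2       # mixed (soft) label
    return x, y, lam
```

AMP, per the abstract, replaces this purely random coefficient with one chosen adversarially inside a min-max-rand game, loosening the strict linearity of the interpolated targets.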
no code implementations • 24 Sep 2021 • Guang Liu, Hailong Huang, Yuzhao Mao, Weiguo Gao, Xuan Li, Jianping Shen
Previous studies mostly use a fine-tuned Language Model (LM) to strengthen the constraints, but ignore the fact that greater diversity could improve the effectiveness of the generated data.
no code implementations • 29 Sep 2021 • Wenming Cao, Qifan Liu, Guang Liu, Zhihai He
We construct a prime-dual network structure for few-shot learning that establishes a commutative relationship between the support set and the query set, and introduce a new self-supervision constraint for highly effective few-shot learning.
no code implementations • 15 Sep 2022 • Guang Liu, Jie Yang, Ledell Wu
The learning of an effective contextual representation requires meaningful features and a large amount of data.
1 code implementation • 12 Nov 2022 • Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu
In this work, we present a conceptually simple and effective method to train a strong bilingual/multilingual multimodal representation model.
no code implementations • 18 Jul 2023 • Yazheng Yang, Yuqi Wang, Guang Liu, Ledell Wu, Qi Liu
This research primarily centers on classification and regression tasks involving tabular data, and conducts rigorous experiments and analyses to validate the effectiveness of our methodology.
1 code implementation • 19 Aug 2023 • Fulong Ye, Guang Liu, Xinya Wu, Ledell Wu
Specifically, we first train a multilingual text encoder using knowledge distillation.
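One common way to distill a multilingual text encoder is to minimize the distance between the student's embedding of a sentence (or its translation) and a frozen teacher's embedding of the English original. The excerpt does not specify the paper's exact objective, so the loss below is only an illustrative, commonly used choice.

```python
import numpy as np

def distill_loss(student_emb, teacher_emb):
    """Mean-squared error between the student's sentence embedding and
    the frozen teacher's embedding -- a typical knowledge-distillation
    objective for aligning a multilingual student encoder with a
    monolingual teacher (hypothetical sketch, not the paper's exact loss)."""
    diff = np.asarray(student_emb) - np.asarray(teacher_emb)
    return float(np.mean(diff * diff))
```

Training then iterates over parallel sentence pairs, encoding the translation with the student and the English side with the teacher, and backpropagating this loss through the student only.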
no code implementations • 13 Dec 2023 • Zhenduo Zhang, Bo-Wen Zhang, Guang Liu
Current text-to-image editing models often encounter challenges with smoothly manipulating multiple attributes using a single instruction.
1 code implementation • 22 Dec 2023 • Rongao Li, Jie Fu, Bo-Wen Zhang, Tao Huang, Zhihong Sun, Chen Lyu, Guang Liu, Zhi Jin, Ge Li
Moreover, each TACO problem includes several fine-grained labels such as task topics, algorithms, programming skills, and difficulty levels, providing a more precise reference for the training and evaluation of code generation models.
Ranked #1 on Code Generation on TACO-Code
1 code implementation • 24 Jan 2024 • Zhaohu Xing, Tian Ye, Yijun Yang, Guang Liu, Lei Zhu
Our SegMamba, in contrast to Transformer-based methods, excels in whole volume feature modeling from a state space model standpoint, maintaining superior processing speed, even with volume features at a resolution of $64\times 64\times 64$.
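The state space model standpoint mentioned above rests on a discrete linear recurrence of the form h_t = A h_{t-1} + B x_t, y_t = C h_t. The sketch below shows only this generic building block; Mamba-style models (including SegMamba) additionally make the parameters input-dependent via a selective scan, which is beyond this excerpt.

```python
import numpy as np

def linear_ssm(x, A, B, C):
    """Generic discrete linear state-space recurrence:
        h_t = A @ h_{t-1} + B @ x_t
        y_t = C @ h_t
    Illustrative only -- SegMamba's selective, input-dependent scan
    is more involved than this fixed-parameter version."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:
        h = A @ h + B @ np.atleast_1d(x_t)  # update hidden state
        ys.append(C @ h)                    # read out the output
    return np.array(ys)
```

With A = 0.5, B = C = 1 (all 1x1), an impulse input decays geometrically, which is the characteristic long-range, linear-time behavior these models exploit.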
no code implementations • 25 Jan 2024 • Zheqi He, Xinya Wu, Pengfei Zhou, Richeng Xuan, Guang Liu, Xi Yang, Qiannan Zhu, Hua Huang
Current multi-modal benchmarks for domain-specific knowledge concentrate on multiple-choice questions and are predominantly available in English, which imposes limitations on the comprehensiveness of the evaluation.
no code implementations • Findings (EMNLP) 2021 • Yuzhao Mao, Guang Liu, Xiaojie Wang, Weiguo Gao, Xuan Li
Emotion dynamics formulates principles explaining the emotional fluctuation during conversations.