1 code implementation • 8 Aug 2024 • Junbin Xiao, Nanxin Huang, Hangyu Qin, Dongyang Li, Yicong Li, Fengbin Zhu, Zhulin Tao, Jianxing Yu, Liang Lin, Tat-Seng Chua, Angela Yao
Video Large Language Models (Video-LLMs) are flourishing and have advanced many video-language tasks.
no code implementations • 24 Jun 2024 • Dongyang Li, Junbing Yan, Taolin Zhang, Chengyu Wang, Xiaofeng He, Longtao Huang, Hui Xue, Jun Huang
Retrieval augmented generation (RAG) exhibits outstanding performance in promoting the knowledge capabilities of large language models (LLMs) with retrieved documents related to user queries.
1 code implementation • 24 Jun 2024 • Dongyang Li, Taolin Zhang, Jiali Deng, Longtao Huang, Chengyu Wang, Xiaofeng He, Hui Xue
Specifically, to retrieve the tokens with similar meanings for the semantic data augmentation across different languages, we propose a sequential clustering process in 3 stages: within a single language, across multiple languages of a language family, and across languages from multiple language families.
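The three-stage widening of scope described above can be sketched as a union-find pass whose merge condition loosens at each stage. This is a minimal illustration, not the paper's implementation: the helper names (`language_of`, `family_of`, `are_similar`) and the union-find formulation are assumptions for the sketch.

```python
from collections import defaultdict

def sequential_cluster(tokens, language_of, family_of, are_similar):
    """Sketch of 3-stage sequential clustering: merge similar tokens
    within one language, then across languages of one family, then
    across language families."""
    parent = {t: t for t in tokens}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path compression
            t = parent[t]
        return t

    def union(a, b):
        parent[find(a)] = find(b)

    def same_scope(stage, a, b):
        if stage == 0:   # stage 1: within a single language
            return language_of[a] == language_of[b]
        if stage == 1:   # stage 2: across languages of one family
            return family_of[language_of[a]] == family_of[language_of[b]]
        return True      # stage 3: across language families

    for stage in range(3):
        for i, a in enumerate(tokens):
            for b in tokens[i + 1:]:
                if same_scope(stage, a, b) and are_similar(a, b):
                    union(a, b)

    clusters = defaultdict(list)
    for t in tokens:
        clusters[find(t)].append(t)
    return list(clusters.values())
```

In practice `are_similar` would compare token embeddings; here it stands in for any similarity test.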
1 code implementation • 24 Jun 2024 • Dongyang Li, Taolin Zhang, Longtao Huang, Chengyu Wang, Xiaofeng He, Hui Xue
Knowledge-enhanced pre-trained language models (KEPLMs) leverage relation triples from knowledge graphs (KGs) and integrate these external data sources into language models via self-supervised learning.
1 code implementation • 31 May 2024 • Taolin Zhang, Qizhou Chen, Dongyang Li, Chengyu Wang, Xiaofeng He, Longtao Huang, Hui Xue, Jun Huang
(2) Considering that auxiliary parameters are required to store the knowledge for sequential editing, we construct a new dataset named DAFSet, satisfying the recent, popular, long-tail, and robust properties to enhance the generality of sequential editing.
no code implementations • 6 May 2024 • Qizhou Chen, Taolin Zhang, Xiaofeng He, Dongyang Li, Chengyu Wang, Longtao Huang, Hui Xue
Model editing aims to correct outdated or erroneous knowledge in large language models (LLMs) without the need for costly retraining.
no code implementations • 4 May 2024 • Taolin Zhang, Dongyang Li, Qizhou Chen, Chengyu Wang, Longtao Huang, Hui Xue, Xiaofeng He, Jun Huang
The reordering learning process is divided into two steps according to the quality of the generated responses: document order adjustment and document representation enhancement.
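The two-step process above can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's method: `quality` (a per-document response-quality score) and the mean-score threshold are assumptions made for the example.

```python
def reorder_documents(docs, quality):
    """Toy two-step sketch: (1) order adjustment - sort documents by
    the quality of the responses they produced; (2) representation
    enhancement - flag below-average documents for re-encoding."""
    # Step 1: document order adjustment (best-performing docs first)
    ordered = sorted(docs, key=lambda d: quality[d], reverse=True)
    # Step 2: mark documents whose response quality fell below the
    # mean as candidates for representation enhancement
    threshold = sum(quality.values()) / len(quality)
    to_enhance = [d for d in ordered if quality[d] < threshold]
    return ordered, to_enhance
```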
1 code implementation • 25 Mar 2024 • Qian Chen, Dongyang Li, Xiaofeng He, Hongzhao Li, Hongyu Yi
The research focus has shifted to Hierarchical Attribution (HA) for its ability to model feature interactions.
1 code implementation • 27 Feb 2024 • Zhaoyang Wang, Dongyang Li, Mingyang Zhang, Hao Luo, Maoguo Gong
Existing hyperspectral image (HSI) super-resolution (SR) methods struggle to effectively capture the complex spectral-spatial relationships and low-level details, while diffusion models represent a promising generative approach known for their exceptional performance in modeling complex relations and learning high- and low-level visual features.
no code implementations • 14 Jan 2024 • Weian Guo, Zecheng Kang, Dongyang Li, Lun Zhang, Li Li
Therefore, the deployment of roadside units (RSUs) is of utmost importance in ensuring the quality of communication services.
1 code implementation • 13 Dec 2023 • Qian Chen, Taolin Zhang, Dongyang Li, Xiaofeng He
The minimal feature removal problem in the post-hoc explanation area aims to identify the minimal feature set (MFS).
1 code implementation • 8 Jun 2023 • Daojun Liang, Haixia Zhang, Dongfeng Yuan, Xiaoyan Ma, Dongyang Li, Minggao Zhang
MABO allocates a process to each GPU via a queue mechanism, and then creates multiple trials at a time for asynchronous parallel search, which greatly reduces the search time.
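The queue-based allocation described above can be sketched with a shared work queue and one worker per device. This is an illustrative sketch only, not the paper's MABO code; the `objective` callback and the thread-per-GPU simplification are assumptions made for the example.

```python
import queue
import threading

def parallel_search(objective, trials, num_gpus):
    """Sketch of queue-based asynchronous search: a shared queue hands
    the next trial to whichever worker (one per GPU) is free, so
    devices search asynchronously rather than in synchronized rounds."""
    work = queue.Queue()
    for trial in trials:
        work.put(trial)

    results, lock = [], threading.Lock()

    def worker(gpu_id):
        while True:
            try:
                trial = work.get_nowait()
            except queue.Empty:
                return  # no trials left for this worker
            score = objective(trial, gpu_id)  # would run on GPU `gpu_id`
            with lock:
                results.append((trial, score))

    threads = [threading.Thread(target=worker, args=(g,))
               for g in range(num_gpus)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because each worker pulls its next trial as soon as it finishes, fast trials never wait on slow ones, which is the source of the reported search-time reduction.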
no code implementations • 11 May 2023 • Dongyang Li, Ruixue Ding, Qiang Zhang, Zheng Li, Boli Chen, Pengjun Xie, Yao Xu, Xin Li, Ning Guo, Fei Huang, Xiaofeng He
Given the fast-developing pace of geographic applications, it is essential to design automated and intelligent models to handle the large volume of geographic information.
1 code implementation • Findings (ACL) 2022 • Dongyang Li, Taolin Zhang, Nan Hu, Chengyu Wang, Xiaofeng He
In this paper, we propose a Hierarchical Contrastive Learning framework for distantly supervised Relation Extraction (HiCLRE) to reduce noisy sentences, which integrates global structural information and local fine-grained interactions.
no code implementations • 5 Dec 2021 • Yichi Zhang, Rushi Jiao, Qingcheng Liao, Dongyang Li, Jicong Zhang
In this paper, we propose a novel uncertainty-guided mutual consistency learning framework that effectively exploits unlabeled data by integrating intra-task consistency learning from up-to-date predictions for self-ensembling with cross-task consistency learning from task-level regularization to exploit geometric shape information.
1 code implementation • 20 Sep 2021 • Zhenhong Sun, Zhiyu Tan, Xiuyu Sun, Fangyi Zhang, Yichen Qian, Dongyang Li, Hao Li
Compression standards have been used to reduce the cost of image storage and transmission for decades.
1 code implementation • 13 Apr 2021 • Zhenhong Sun, Zhiyu Tan, Xiuyu Sun, Fangyi Zhang, Dongyang Li, Yichen Qian, Hao Li
The framework of dominant learned video compression methods is usually composed of motion prediction modules as well as motion vector and residual image compression modules, and suffers from structural complexity and error propagation.
2 code implementations • ICLR 2021 • Yichen Qian, Zhiyu Tan, Xiuyu Sun, Ming Lin, Dongyang Li, Zhenhong Sun, Hao Li, Rong Jin
In this work, we propose a novel Global Reference Model for image compression to effectively leverage both the local and the global context information, leading to an enhanced compression rate.
no code implementations • 1 Nov 2019 • Lingling Yang, Dongyang Li, Yao Lu
In this paper, we propose a new T+2 churn customer prediction model, in which churn customers are recognized two months ahead and the one-month window T+1 is reserved for carrying out churn management strategies.
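The T+2 windowing idea can be made concrete with a small labeling sketch. This is an assumed formulation for illustration, not the paper's exact definition: `active` (a month-indexed activity flag) and the single-month churn criterion are simplifications.

```python
def label_t_plus_2(active, t):
    """Sketch of T+2 labeling: using data observed up to month t, a
    customer is labeled a churner if inactive in month t + 2, leaving
    month t + 1 free for retention campaigns.
    `active` maps a month index to an activity flag."""
    return not active.get(t + 2, False)
```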
no code implementations • 27 Feb 2019 • Yao Liu, Ying Tai, Jilin Li, Shouhong Ding, Chengjie Wang, Feiyue Huang, Dongyang Li, Wenshuai Qi, Rongrong Ji
In this paper, we propose a light-reflection-based face anti-spoofing method named Aurora Guard (AG), which is fast, simple, yet effective, and has already been deployed in real-world systems serving millions of users.