no code implementations • Findings (EMNLP) 2021 • Boda Lin, Mingzheng Li, Si Li, Yong Luo
Unsupervised cross-domain dependency parsing aims to accomplish domain adaptation for dependency parsing without using labeled data in the target domain.
1 code implementation • 16 Jan 2025 • Anke Tang, Enneng Yang, Li Shen, Yong Luo, Han Hu, Bo Du, DaCheng Tao
In this study, we propose a training-free projection-based continual merging method that processes models sequentially through orthogonal projections of weight matrices and adaptive scaling mechanisms.
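The mechanism described here lends itself to a toy sketch: fold each model in sequentially, keeping only the component of its task vector orthogonal to directions already merged. Flattening the matrices and using a fixed scale are simplifications on our part, not the authors' adaptive scaling.

```python
import numpy as np

def continual_merge(pretrained_w, task_ws, scale=1.0):
    """Fold fine-tuned models into a merge one at a time.

    Each task vector (fine-tuned minus pretrained weights) is projected onto
    the orthogonal complement of directions occupied by earlier tasks, so a
    new model does not overwrite what was already merged.
    """
    merged = pretrained_w.copy()
    basis = []  # orthonormal directions already in use
    for w in task_ws:
        delta = (w - pretrained_w).ravel()
        for b in basis:
            delta -= (delta @ b) * b  # remove overlap with earlier tasks
        norm = np.linalg.norm(delta)
        if norm > 1e-8:
            basis.append(delta / norm)
            merged += scale * delta.reshape(pretrained_w.shape)
    return merged
```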
no code implementations • 14 Jan 2025 • Shuai Wang, Liang Ding, Yibing Zhan, Yong Luo, Zheng He, Dapeng Tao
Automated code generation using large language models (LLMs) has gained attention due to its efficiency and adaptability.
1 code implementation • 9 Jan 2025 • Yapeng Li, Yong Luo, Lefei Zhang, Zengmao Wang, Bo Du
To remedy these drawbacks, we propose a novel HSI classification model based on Mamba, named MambaHSI, which can simultaneously model long-range interactions across the whole image and integrate spatial and spectral information in an adaptive manner.
1 code implementation • 18 Nov 2024 • Ziyi Zhang, Li Shen, Sen Zhang, Deheng Ye, Yong Luo, Miaojing Shi, Bo Du, DaCheng Tao
Experimental results demonstrate that SDPO consistently outperforms prior methods in reward-based alignment across diverse step configurations, underscoring its robust step generalization capabilities.
no code implementations • 29 Oct 2024 • Li Shen, Anke Tang, Enneng Yang, Guibing Guo, Yong Luo, Lefei Zhang, Xiaochun Cao, Bo Du, DaCheng Tao
Building on WEMoE, we further introduce an efficient-and-effective WEMoE (E-WEMoE) method, whose core mechanism eliminates non-essential elements in the critical modules of WEMoE and shares routing across multiple MoE modules, thereby significantly reducing the trainable parameters, the overall parameter count, and the computational overhead of the merged model compared with WEMoE.
1 code implementation • 23 Oct 2024 • Zhiwei Hao, Jianyuan Guo, Li Shen, Yong Luo, Han Hu, Yonggang Wen
To bridge this gap, we propose ADEM-VL, an efficient vision-language method that tunes VL models based on pretrained large language models (LLMs) by adopting a parameter-free cross-attention mechanism for similarity measurements in multimodal fusion.
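To make "parameter-free cross-attention" concrete, here is a minimal sketch: similarity is computed directly between raw language and vision features, with no learned Q/K/V matrices. The residual fusion and tensor layout are our assumptions, not the ADEM-VL interface.

```python
import torch
import torch.nn.functional as F

def parameter_free_cross_attention(lang_tokens, vision_tokens):
    """Cross-attention with no trainable projections.

    Similarity is the scaled dot product between raw language and vision
    features, so the fusion step adds zero parameters to the tuned model.
    """
    d = vision_tokens.shape[-1]
    scores = lang_tokens @ vision_tokens.transpose(-2, -1) / d ** 0.5
    attn = F.softmax(scores, dim=-1)
    return lang_tokens + attn @ vision_tokens  # residual fusion (assumed)
```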
1 code implementation • 27 Sep 2024 • Yang Qian, Xinbiao Wang, Yuxuan Du, Yong Luo, DaCheng Tao
To address this dilemma, here we first analyze the convergence behavior of QAOA, uncovering the origins of this dilemma and elucidating the intricate relationship between the employed mixer Hamiltonian, the specific problem at hand, and the permissible maximum circuit depth.
no code implementations • 9 Sep 2024 • Shuai Wang, Liang Ding, Li Shen, Yong Luo, Zheng He, Wei Yu, DaCheng Tao
Then, we selectively eliminate output noise induced by lame prompts based on the uncertainty of the prediction distribution from the standard prompt.
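One plausible reading of this step, sketched below under our own assumptions: positions where the standard prompt's predictive entropy is high are treated as noise-prone, and the output reverts to the standard prompt there. Both the threshold and the fallback rule are illustrative.

```python
import torch
import torch.nn.functional as F

def denoise_logits(std_logits, ensemble_logits, entropy_threshold=2.0):
    # Predictive entropy of the standard prompt flags uncertain positions.
    probs = F.softmax(std_logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    # Keep the prompt-ensemble output only where the standard prompt is
    # confident; elsewhere fall back to the standard prompt alone.
    confident = (entropy <= entropy_threshold).unsqueeze(-1)
    return torch.where(confident, ensemble_logits, std_logits)
```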
no code implementations • 9 Sep 2024 • Shuai Wang, Yibing Zhan, Yong Luo, Han Hu, Wei Yu, Yonggang Wen, DaCheng Tao
This mechanism assigns different weights to different categories of data according to the gradient of the output score, and uses knowledge distillation (KD) to reduce the mutual interference between the outputs of old and new tasks.
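A rough stand-in for such a loss, assuming the per-class weight comes from the softmax gradient magnitude (p - y) and the KD term is a temperature-scaled KL divergence; the paper's actual weighting scheme may differ.

```python
import torch
import torch.nn.functional as F

def gradient_weighted_kd_loss(new_logits, old_logits, labels, temperature=2.0):
    probs = F.softmax(new_logits, dim=-1)
    one_hot = F.one_hot(labels, new_logits.shape[-1]).float()
    # Per-class weight from the magnitude of the output-score gradient (p - y).
    class_w = (probs - one_hot).abs().mean(dim=0).detach()
    ce = F.cross_entropy(new_logits, labels, reduction='none')
    ce = (ce * class_w[labels]).mean()
    # Distill old-task outputs to limit interference between old and new tasks.
    kd = F.kl_div(F.log_softmax(new_logits / temperature, dim=-1),
                  F.softmax(old_logits / temperature, dim=-1),
                  reduction='batchmean') * temperature ** 2
    return ce + kd
```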
1 code implementation • 28 Aug 2024 • Wenbin Wang, Liang Ding, Minyan Zeng, Xiabin Zhou, Li Shen, Yong Luo, DaCheng Tao
Building upon this insight, we propose Divide, Conquer and Combine (DC$^2$), a novel training-free framework for enhancing MLLM perception of HR images.
no code implementations • 19 Aug 2024 • Xingrun Yan, Shiyuan Zuo, Rongfei Fan, Han Hu, Li Shen, Puning Zhao, Yong Luo
In a real federated learning (FL) system, communication overhead for passing model parameters between the clients and the parameter server (PS) is often a bottleneck.
1 code implementation • 19 Aug 2024 • Anke Tang, Li Shen, Yong Luo, Shuai Xie, Han Hu, Lefei Zhang, Bo Du, DaCheng Tao
Deep model training on extensive datasets is increasingly becoming cost-prohibitive, prompting the widespread adoption of deep model fusion techniques to leverage knowledge from pre-existing models.
1 code implementation • 14 Jun 2024 • Anke Tang, Li Shen, Yong Luo, Shiwei Liu, Han Hu, Bo Du
Once the routers are learned and a preference vector is set, the MoE module can be unloaded, thus no additional computational cost is introduced during inference.
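The unloading step admits a small sketch: once the preference vector is fixed, the router's mixing coefficients let the expert deltas collapse into one weight matrix, so inference pays no MoE overhead. `router` and the shapes below are illustrative, not the paper's API.

```python
import torch

def unload_moe(base_weight, expert_deltas, router, preference):
    # router: any module mapping the preference vector to per-expert scores,
    # e.g. torch.nn.Linear(pref_dim, num_experts).
    coeff = torch.softmax(router(preference), dim=-1)        # (num_experts,)
    delta = torch.einsum('e,eij->ij', coeff, expert_deltas)  # weighted sum
    return base_weight + delta  # a single dense layer remains for inference
```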
1 code implementation • 5 Jun 2024 • Anke Tang, Li Shen, Yong Luo, Han Hu, Bo Du, DaCheng Tao
These techniques range from model ensemble methods, which combine predictions to improve overall performance, to model merging, which integrates different models into a single one, to model mixing methods, which upscale or recombine the components of the original models.
no code implementations • 12 May 2024 • Xinbiao Wang, Yuxuan Du, Kecheng Liu, Yong Luo, Bo Du, DaCheng Tao
The No-Free-Lunch (NFL) theorem, which quantifies problem- and data-independent generalization errors regardless of the optimization process, provides a foundational framework for comprehending diverse learning protocols' potential.
no code implementations • 24 Apr 2024 • Xuming An, Dui Wang, Li Shen, Yong Luo, Han Hu, Bo Du, Yonggang Wen, DaCheng Tao
Specifically, FedALC estimates the label correlations between different label pairs during class embedding learning and utilizes them to improve model training.
1 code implementation • 18 Apr 2024 • WenHao Zhang, Jun Wang, Yong Luo, Lei Yu, Wei Yu, Zheng He, Jialie Shen
Then we design a spatio-temporal fusion module based on temporal granularity alignment, where the global spatial features extracted from event frames, together with the local relative spatial and temporal features contained in the voxel graph list, are effectively aligned and integrated.
1 code implementation • 13 Feb 2024 • Ziyi Zhang, Sen Zhang, Yibing Zhan, Yong Luo, Yonggang Wen, DaCheng Tao
Then, we surprisingly discover that dormant neurons in our critic model act as a regularization against reward overoptimization while active neurons reflect primacy bias.
1 code implementation • 1 Feb 2024 • Anke Tang, Li Shen, Yong Luo, Nan Yin, Lefei Zhang, DaCheng Tao
A notable challenge is mitigating the interference between parameters of different models, which can substantially deteriorate performance.
no code implementations • 12 Jan 2024 • Wenbin Wang, Liang Ding, Li Shen, Yong Luo, Han Hu, DaCheng Tao
Sentiment analysis is rapidly advancing by utilizing various data modalities (e.g., text, image).
1 code implementation • 12 Jan 2024 • Shuai Wang, Liang Ding, Li Shen, Yong Luo, Bo Du, DaCheng Tao
Advancing automated programming necessitates robust and comprehensive code generation benchmarks, yet current evaluation frameworks largely neglect object-oriented programming (OOP) in favor of functional programming (FP), e.g., HumanEval and MBPP.
1 code implementation • CVPR 2024 • Yapeng Li, Yong Luo, Zengmao Wang, Bo Du
This motivates us to study GZSL in the more practical setting where unseen classes can be either similar or dissimilar to seen classes.
1 code implementation • 11 Dec 2023 • Anke Tang, Li Shen, Yong Luo, Liang Ding, Han Hu, Bo Du, DaCheng Tao
At the upper level, we focus on learning a shared Concrete mask to identify the subspace, while at the inner level, model merging is performed to maximize the performance of the merged model.
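For readers unfamiliar with Concrete masks, a minimal relaxed-Bernoulli sampler of the kind such a shared mask could rely on is sketched below; the logistic-noise parameterization is standard, but this is not the paper's exact construction.

```python
import torch

def sample_concrete_mask(logits, temperature=0.5):
    # Binary Concrete relaxation: adding logistic noise to the mask logits
    # keeps the (near-binary) subspace selection differentiable.
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    logistic_noise = torch.log(u) - torch.log1p(-u)
    return torch.sigmoid((logits + logistic_noise) / temperature)
```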
1 code implementation • 12 Oct 2023 • Hongling Zheng, Li Shen, Anke Tang, Yong Luo, Han Hu, Bo Du, DaCheng Tao
LFM focuses on the research, modification, and design of FMs based on the model interface, so as to better understand the model structure and weights (in a black-box environment) and to generalize the model to downstream tasks.
1 code implementation • 7 Oct 2023 • Anke Tang, Li Shen, Yong Luo, Yibing Zhan, Han Hu, Bo Du, Yixin Chen, DaCheng Tao
We demonstrate that our partial linearization technique enables a more effective fusion of multiple tasks into a single model, outperforming standard adapter tuning and task arithmetic alone.
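Linearization here generally means a first-order Taylor expansion of the network around the pretrained weights; a generic sketch with torch.func follows. Note the paper linearizes only part of the model (the adapter modules), whereas this toy version linearizes whatever parameters are passed in.

```python
import torch
from torch.func import functional_call, jvp

def linearized_forward(model, pretrained_params, finetuned_params, x):
    # f_lin(x) = f(x; theta0) + J_theta f(x; theta0) @ (theta - theta0)
    delta = {k: finetuned_params[k] - pretrained_params[k]
             for k in pretrained_params}
    f = lambda params: functional_call(model, params, (x,))
    out, jvp_out = jvp(f, (pretrained_params,), (delta,))
    return out + jvp_out
```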
1 code implementation • 5 Oct 2023 • Kun Li, Yong Luo, Xiantao Cai, Wenbin Hu, Bo Du
In this paper, we propose a zero-shot learning solution for the DRP task in preclinical drug screening.
Ranked #1 on Zero-Shot Learning on GDSCv2
no code implementations • 18 Sep 2023 • Xingyu Yang, Daqing Liu, Heng Zhang, Yong Luo, Chaoyue Wang, Jing Zhang
Composed image retrieval is a type of image retrieval task in which the user provides a reference image as a starting point and a text specifying how to shift from that starting point to the desired target image.
no code implementations • 10 Sep 2023 • Guanyu Xu, Zhiwei Hao, Yong Luo, Han Hu, Jianping An, Shiwen Mao
Our objective is to achieve fast and energy-efficient collaborative inference while maintaining accuracy comparable to that of large ViTs.
no code implementations • 24 Aug 2023 • Mengya Han, Heliang Zheng, Chaoyue Wang, Yong Luo, Han Hu, Jing Zhang, Yonggang Wen
In this work, we address the task of few-shot part segmentation, which aims to segment the different parts of an unseen object using very few labeled examples.
no code implementations • 11 Aug 2023 • Rui Xu, Yong Luo, Han Hu, Bo Du, Jialie Shen, Yonggang Wen
Weakly supervised object localization (WSOL) is one of the most popular and challenging tasks in computer vision.
1 code implementation • 1 Aug 2023 • Guanyu Xu, Jiawei Hao, Li Shen, Han Hu, Yong Luo, Hui Lin, Jialie Shen
Recently, the efficient deployment and acceleration of powerful vision transformers (ViTs) on resource-limited edge devices for providing multimedia services have become attractive tasks.
2 code implementations • 19 Jun 2023 • Ting Zhe, Jing Zhang, YongQian Li, Yong Luo, Han Hu, DaCheng Tao
To fill this gap, we introduce the FHA-Kitchens (Fine-Grained Hand Actions in Kitchen Scenes) dataset, providing both coarse- and fine-grained hand action categories along with localization annotations.
1 code implementation • 6 Jun 2023 • Xinbiao Wang, Yuxuan Du, Zhuozhuo Tu, Yong Luo, Xiao Yuan, DaCheng Tao
Recent progress has highlighted its positive impact on learning quantum dynamics, wherein the integration of entanglement into quantum operations or measurements of quantum machine learning (QML) models leads to substantial reductions in training data size, surpassing a specified prediction error threshold.
1 code implementation • 23 May 2023 • Anke Tang, Yong Luo, Han Hu, Fengxiang He, Kehua Su, Bo Du, Yixin Chen, DaCheng Tao
This paper studies multiparty learning, aiming to learn a model using the private data of different participants.
1 code implementation • 3 Apr 2023 • Rui Xu, Yong Luo, Bo Du
Cross-domain pulmonary nodule detection suffers from performance degradation due to a large shift of data distributions between the source and target domains.
1 code implementation • 7 Mar 2023 • Rui Xu, Zhi Liu, Yong Luo, Han Hu, Li Shen, Bo Du, Kaiming Kuang, Jiancheng Yang
To address this issue, we propose a slice grouped domain attention (SGDA) module to enhance the generalization capability of the pulmonary nodule detection networks.
no code implementations • 15 Feb 2023 • Dui Wang, Li Shen, Yong Luo, Han Hu, Kehua Su, Yonggang Wen, DaCheng Tao
In particular, we adopt the "one-vs-all" training strategy in each client to alleviate the unfair competition between classes by constructing a personalized binary classification problem for each class.
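The one-vs-all idea is easy to make concrete: each class becomes its own binary problem, so a client's loss avoids direct softmax competition between classes it may not even observe. A minimal sketch:

```python
import torch.nn.functional as F

def one_vs_all_loss(logits, labels):
    # One independent sigmoid/BCE problem per class instead of a softmax.
    targets = F.one_hot(labels, logits.shape[-1]).float()
    return F.binary_cross_entropy_with_logits(logits, targets)
```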
1 code implementation • 1 Jan 2023 • Huaizheng Zhang, Yuanming Li, Wencong Xiao, Yizheng Huang, Xing Di, Jianxiong Yin, Simon See, Yong Luo, Chiew Tong Lau, Yang You
The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts.
no code implementations • 31 Dec 2022 • Chengbo Yuan, Qianhui Xu, Yong Luo
Multimodal learning is a popular solution for automatic diagnosis of depression, but existing works suffer from two main drawbacks: 1) the high-order interactions between different modalities cannot be well exploited; and 2) the interpretability of the models is weak.
no code implementations • 7 Sep 2022 • Mengya Han, Yibing Zhan, Yong Luo, Bo Du, Han Hu, Yonggang Wen, DaCheng Tao
To address the above issues, we propose a novel metric-based meta-learning framework termed instance-adaptive class representation learning network (ICRL-Net) for few-shot visual recognition.
no code implementations • 30 Aug 2022 • Xinbiao Wang, Junyu Liu, Tongliang Liu, Yong Luo, Yuxuan Du, DaCheng Tao
To fill this knowledge gap, here we propose the effective quantum neural tangent kernel (EQNTK) and connect this concept with over-parameterization theory to quantify the convergence of QNNs towards the global optima.
1 code implementation • 3 Aug 2022 • Rui Xu, Yong Luo, Bo Du, Kaiming Kuang, Jiancheng Yang
Convolutional neural networks (CNNs) have been demonstrated to be highly effective in the field of pulmonary nodule detection.
1 code implementation • 27 Jul 2022 • Mengya Han, Heliang Zheng, Chaoyue Wang, Yong Luo, Han Hu, Bo Du
Overall, this work is an attempt to explore the internal relevance between generation tasks and perception tasks through prompt design.
1 code implementation • 15 Jun 2022 • Xiaowen Wei, Xiuwen Gong, Yibing Zhan, Bo Du, Yong Luo, Wenbin Hu
Experimental results on real-world networks demonstrate that CLNode is a general framework that can be combined with various GNNs to improve their accuracy and robustness.
1 code implementation • 24 May 2022 • Zhiwei Hao, Yong Luo, Zhi Wang, Han Hu, Jianping An
To tackle this challenge, we propose a framework termed collaborative data-free knowledge distillation via multi-level feature sharing (CDFKD-MFS), which consists of a multi-header student module, an asymmetric adversarial data-free KD module, and an attention-based aggregation module.
1 code implementation • 24 May 2022 • Zhiwei Hao, Guanyu Xu, Yong Luo, Han Hu, Jianping An, Shiwen Mao
In this paper, we study the multi-agent collaborative inference scenario, where a single edge server coordinates the inference of multiple UEs.
no code implementations • 7 Mar 2022 • Peipei Zhu, Xiao Wang, Yong Luo, Zhenglong Sun, Wei-Shi Zheng, YaoWei Wang, Changwen Chen
The image-level labels are utilized to train a weakly-supervised object recognition model to extract object information (e.g., instance) in an image, and the extracted instances are adopted to infer the relationships among different objects based on an enhanced graph neural network (GNN).
no code implementations • 15 Feb 2022 • Yibing Zhan, Zhi Chen, Jun Yu, Baosheng Yu, DaCheng Tao, Yong Luo
As a result, HLN significantly improves the performance of scene graph generation by integrating and reasoning from object interactions, relationship interactions, and transitive inference of hyper-relationships.
no code implementations • 28 Jan 2022 • Boda Lin, Zijun Yao, Jiaxin Shi, Shulin Cao, Binghao Tang, Si Li, Yong Luo, Juanzi Li, Lei Hou
To remedy these drawbacks, we propose DPSG, which achieves universal and schema-free Dependency Parsing (DP) via Sequence Generation (SG), utilizing only a pre-trained language model (PLM) without any auxiliary structures or parsing algorithms.
1 code implementation • 18 Jan 2022 • Chao Chen, Yibing Zhan, Baosheng Yu, Liu Liu, Yong Luo, Bo Du
To address this problem, we propose Resistance Training using Prior Bias (RTPB) for the scene graph generation.
no code implementations • 2 Dec 2021 • Jingyi Feng, Yong Luo, Shuang Song
Neural decoding plays a vital role in the interaction between the brain and the outside world.
1 code implementation • 18 May 2021 • Yuanming Li, Huaizheng Zhang, Shanshan Jiang, Fan Yang, Yonggang Wen, Yong Luo
AI engineering has emerged as a crucial discipline to democratize deep neural network (DNN) models among software developers with a diverse background.
no code implementations • 31 Mar 2021 • Xinbiao Wang, Yuxuan Du, Yong Luo, DaCheng Tao
In this study, we fill this knowledge gap by exploiting the power of quantum kernels when the quantum system noise and sample error are considered.
no code implementations • 5 Feb 2021 • Huaizheng Zhang, Meng Shen, Yizheng Huang, Yonggang Wen, Yong Luo, Guanyu Gao, Kyle Guan
To save bandwidth and reduce RTT, VPaaS provides a new video streaming protocol that only sends low-quality video to the cloud.
no code implementations • ICCV 2021 • Lin Zhang, Yong Luo, Yan Bai, Bo Du, Ling-Yu Duan
Federated Learning (FL) aims to establish a shared model across decentralized clients under the privacy-preserving constraint.
2 code implementations • 9 Jun 2020 • Huaizheng Zhang, Yuanming Li, Qiming Ai, Yong Luo, Yonggang Wen, Yichao Jin, Nguyen Binh Duong Ta
Combining video streaming and online retailing (V2R) has been a growing trend recently.
no code implementations • 21 Dec 2019 • Huaizheng Zhang, Yong Luo, Qiming Ai, Yonggang Wen
A multitask loss function is also designed to train both the topic and sentiment prediction models jointly in an end-to-end manner.
1 code implementation • 31 Jul 2019 • Yihang Lou, Ling-Yu Duan, Yong Luo, Ziqian Chen, Tongliang Liu, Shiqi Wang, Wen Gao
The digital retina in smart cities aims to select what the City Eye tells the City Brain, converting the visual data acquired by front-end sensors into features in an intelligent sensing manner.
no code implementations • 8 Apr 2019 • Yong Luo, Tongliang Liu, DaCheng Tao, Chao Xu
In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics.
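The combination itself is straightforward; in DTDML the nonnegative weights would be learned under a sparsity penalty, whereas this sketch simply applies given weights:

```python
import numpy as np

def combine_base_metrics(base_metrics, alphas):
    # Target metric as a sparse nonnegative combination: M = sum_k a_k * M_k.
    alphas = np.asarray(alphas, dtype=float)
    assert (alphas >= 0).all(), "metric weights must be nonnegative"
    return sum(a * M for a, M in zip(alphas, base_metrics))
```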
no code implementations • 8 Apr 2019 • Yong Luo, DaCheng Tao, Chang Xu, Chao Xu, Hong Liu, Yonggang Wen
In computer vision, image datasets used for classification are naturally associated with multiple labels and comprised of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape).
no code implementations • 8 Apr 2019 • Yong Luo, Yonggang Wen, Tongliang Liu, DaCheng Tao
Some existing heterogeneous transfer learning (HTL) approaches can learn a target distance metric, usually by transforming the samples of the source and target domains into a common subspace.
no code implementations • 8 Apr 2019 • Yong Luo, Yonggang Wen, DaCheng Tao, Jie Gui, Chao Xu
The features used in many image analysis-based applications are frequently of very high dimension.
no code implementations • 8 Apr 2019 • Yong Luo, Tongliang Liu, DaCheng Tao, Chao Xu
Therefore, we propose to weightedly combine the MC outputs of different views, and present the multi-view matrix completion (MVMC) framework for transductive multi-label image classification.
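A minimal sketch of the weighted fusion, assuming each view's matrix-completion step has already produced a label-score matrix:

```python
import numpy as np

def weighted_view_combination(view_scores, view_weights):
    # view_scores: list of (n_samples, n_labels) matrices, one per view.
    w = np.asarray(view_weights, dtype=float)
    w = w / w.sum()  # normalize so the view weights sum to one
    return sum(wi * S for wi, S in zip(w, view_scores))
```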
no code implementations • 8 Apr 2019 • Yong Luo, Yonggang Wen, DaCheng Tao
Heterogeneous transfer learning approaches can be adopted to remedy this drawback by deriving a metric from the learned transformation across different domains.
no code implementations • 4 Apr 2019 • Meng Liu, Chang Xu, Yong Luo, Chao Xu, Yonggang Wen, DaCheng Tao
Feature selection is beneficial for improving the performance of general machine learning tasks by extracting an informative subset from the high-dimensional features.
no code implementations • 9 Oct 2018 • Yong Luo, Yonggang Wen, Ling-Yu Duan, DaCheng Tao
Distance metric learning (DML) aims to find an appropriate way to reveal the underlying data relationship.
no code implementations • 5 Oct 2018 • Yong Luo, Huaizheng Zhang, Yongjie Wang, Yonggang Wen, Xinwen Zhang
We compare the different variants with our baseline model.
3 code implementations • 9 Feb 2015 • Yong Luo, DaCheng Tao, Yonggang Wen, Kotagiri Ramamohanarao, Chao Xu
As a consequence, the high-order correlation information contained in the different views is explored, and thus a more reliable common subspace shared by all features can be obtained.