no code implementations • Findings (EMNLP) 2021 • Boda Lin, Mingzheng Li, Si Li, Yong Luo
Unsupervised cross-domain dependency parsing aims to accomplish domain adaptation for dependency parsing without using labeled data in the target domain.
no code implementations • 24 Apr 2024 • Xuming An, Dui Wang, Li Shen, Yong Luo, Han Hu, Bo Du, Yonggang Wen, DaCheng Tao
Specifically, FedALC estimates the label correlations between different label pairs in class embedding learning and utilizes them to improve model training.
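A minimal Python sketch of the idea described above, under loose assumptions: label correlations are estimated from co-occurrence statistics and used to weight a pairwise class-embedding loss. The function names and the exact loss form are illustrative, not taken from the FedALC paper.

```python
# Hypothetical sketch: weight pairwise class-embedding distances by
# estimated label correlations. Correlated labels are pulled together;
# uncorrelated ones are pushed apart with a margin-style penalty.

def correlation(label_sets, i, j):
    """Estimate label correlation as co-occurrence / union over samples."""
    both = sum(1 for s in label_sets if i in s and j in s)
    either = sum(1 for s in label_sets if i in s or j in s)
    return both / either if either else 0.0

def pairwise_embedding_loss(embeddings, label_sets):
    """Correlation-weighted loss over all class-embedding pairs."""
    loss = 0.0
    labels = sorted(embeddings)
    for a in range(len(labels)):
        for b in range(a + 1, len(labels)):
            i, j = labels[a], labels[b]
            dist = sum((x - y) ** 2 for x, y in zip(embeddings[i], embeddings[j]))
            c = correlation(label_sets, i, j)
            loss += c * dist + (1 - c) * max(0.0, 1.0 - dist)
    return loss
```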
no code implementations • 18 Apr 2024 • WenHao Zhang, Jun Wang, Yong Luo, Lei Yu, Wei Yu, Zheng He
Then we design a spatio-temporal fusion module based on temporal granularity alignment, where the global spatial features extracted from event frames, together with the local relative spatial and temporal features contained in the voxel graph list, are effectively aligned and integrated.
no code implementations • 13 Feb 2024 • Ziyi Zhang, Sen Zhang, Yibing Zhan, Yong Luo, Yonggang Wen, DaCheng Tao
Then, we surprisingly discover that dormant neurons in our critic model act as a regularization against overoptimization, while active neurons reflect primacy bias in this setting.
no code implementations • 1 Feb 2024 • Anke Tang, Li Shen, Yong Luo, Nan Yin, Lefei Zhang, DaCheng Tao
A notable challenge is mitigating the interference between parameters of different models, which can substantially deteriorate performance.
1 code implementation • 12 Jan 2024 • Shuai Wang, Liang Ding, Li Shen, Yong Luo, Bo Du, DaCheng Tao
Advancing automated programming necessitates robust and comprehensive code generation benchmarks, yet current evaluation frameworks largely neglect object-oriented programming (OOP) in favor of functional programming (FP), e.g., HumanEval and MBPP.
no code implementations • 12 Jan 2024 • Wenbin Wang, Liang Ding, Li Shen, Yong Luo, Han Hu, DaCheng Tao
Sentiment analysis is rapidly advancing by utilizing various data modalities (e.g., text, image).
1 code implementation • 11 Dec 2023 • Anke Tang, Li Shen, Yong Luo, Liang Ding, Han Hu, Bo Du, DaCheng Tao
At the upper level, we focus on learning a shared Concrete mask to identify the subspace, while at the inner level, model merging is performed to maximize the performance of the merged model.
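The merging step at the inner level can be sketched in a few lines of Python, under simplifying assumptions: a hard 0/1 mask stands in for the learned Concrete mask (real Concrete masks are relaxed and trained by gradient descent), and merging is a masked average of task vectors. Names are illustrative, not from the paper.

```python
# Assumed sketch: merge task vectors (finetuned - pretrained) inside the
# subspace selected by a shared binary mask; parameters outside the mask
# stay at their pretrained values.

def merge_with_mask(pretrained, finetuned_models, mask):
    """Return merged parameters; all inputs are flat parameter lists."""
    n = len(finetuned_models)
    merged = []
    for k, theta0 in enumerate(pretrained):
        task_vec = sum(m[k] - theta0 for m in finetuned_models) / n
        merged.append(theta0 + mask[k] * task_vec)
    return merged
```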
1 code implementation • 12 Oct 2023 • Hongling Zheng, Li Shen, Anke Tang, Yong Luo, Han Hu, Bo Du, DaCheng Tao
LFM focuses on the research, modification, and design of foundation models (FMs) through the model interface, so as to better understand the model structure and weights (in a black-box environment) and to generalize the model to downstream tasks.
1 code implementation • 7 Oct 2023 • Anke Tang, Li Shen, Yong Luo, Yibing Zhan, Han Hu, Bo Du, Yixin Chen, DaCheng Tao
We demonstrate that our partial linearization technique enables a more effective fusion of multiple tasks into a single model, outperforming standard adapter tuning and task arithmetic alone.
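Plain task arithmetic, which the entry above builds on, has a simple standard form: add scaled task vectors to the pretrained weights. The partial-linearization variant computes these vectors in a linearized (tangent) space; this sketch shows only the plain arithmetic step, with illustrative names.

```python
# Standard task arithmetic: theta_merged = theta_pre + scale * sum_t (theta_t - theta_pre).

def task_arithmetic(pretrained, finetuned_models, scale=0.3):
    """Merge finetuned models into one parameter list via scaled task vectors."""
    merged = list(pretrained)
    for model in finetuned_models:
        for k, theta0 in enumerate(pretrained):
            merged[k] += scale * (model[k] - theta0)
    return merged
```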
1 code implementation • 5 Oct 2023 • Kun Li, Yong Luo, Xiantao Cai, Wenbin Hu, Bo Du
In this paper, we propose a zero-shot learning solution for the DRP task in preclinical drug screening.
no code implementations • 18 Sep 2023 • Xingyu Yang, Daqing Liu, Heng Zhang, Yong Luo, Chaoyue Wang, Jing Zhang
Composed image retrieval is a type of image retrieval task where the user provides a reference image as a starting point and a text specifying how to shift from the starting point to the desired target image.
no code implementations • 10 Sep 2023 • Guanyu Xu, Zhiwei Hao, Yong Luo, Han Hu, Jianping An, Shiwen Mao
Our objective is to achieve fast and energy-efficient collaborative inference while maintaining accuracy comparable to that of large ViTs.
no code implementations • 24 Aug 2023 • Mengya Han, Heliang Zheng, Chaoyue Wang, Yong Luo, Han Hu, Jing Zhang, Yonggang Wen
In this work, we address the task of few-shot part segmentation, which aims to segment the different parts of an unseen object using very few labeled examples.
no code implementations • 11 Aug 2023 • Rui Xu, Yong Luo, Han Hu, Bo Du, Jialie Shen, Yonggang Wen
Weakly supervised object localization (WSOL) is one of the most popular and challenging tasks in computer vision.
1 code implementation • 1 Aug 2023 • Guanyu Xu, Jiawei Hao, Li Shen, Han Hu, Yong Luo, Hui Lin, Jialie Shen
Recently, the efficient deployment and acceleration of powerful vision transformers (ViTs) on resource-limited edge devices for providing multimedia services have become attractive tasks.
1 code implementation • 19 Jun 2023 • Ting Zhe, YongQian Li, Jing Zhang, Yong Luo, Han Hu, Bo Du, Yonggang Wen, DaCheng Tao
We represent the action information in each hand interaction region as a triplet, resulting in a total of 878 action triplets.
no code implementations • 6 Jun 2023 • Xinbiao Wang, Yuxuan Du, Zhuozhuo Tu, Yong Luo, Xiao Yuan, DaCheng Tao
Recent progress has highlighted its positive impact on learning quantum dynamics, wherein integrating entanglement into the quantum operations or measurements of quantum machine learning (QML) models leads to substantial reductions in the training data size required to reach a specified prediction error threshold.
1 code implementation • 23 May 2023 • Anke Tang, Yong Luo, Han Hu, Fengxiang He, Kehua Su, Bo Du, Yixin Chen, DaCheng Tao
This paper studies multiparty learning, aiming to learn a model using the private data of different participants.
no code implementations • 3 Apr 2023 • Rui Xu, Yong Luo, Bo Du
This motivates us to propose a Source-free Unsupervised cross-domain method for Pulmonary nodule detection (SUP).
1 code implementation • 7 Mar 2023 • Rui Xu, Zhi Liu, Yong Luo, Han Hu, Li Shen, Bo Du, Kaiming Kuang, Jiancheng Yang
To address this issue, we propose a slice grouped domain attention (SGDA) module to enhance the generalization capability of the pulmonary nodule detection networks.
no code implementations • 15 Feb 2023 • Dui Wang, Li Shen, Yong Luo, Han Hu, Kehua Su, Yonggang Wen, DaCheng Tao
In particular, we adopt the "one-vs-all" training strategy in each client to alleviate the unfair competition between classes by constructing a personalized binary classification problem for each class.
1 code implementation • 1 Jan 2023 • Huaizheng Zhang, Yuanming Li, Wencong Xiao, Yizheng Huang, Xing Di, Jianxiong Yin, Simon See, Yong Luo, Chiew Tong Lau, Yang You
The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts.
no code implementations • 31 Dec 2022 • Chengbo Yuan, Qianhui Xu, Yong Luo
Multimodal learning is a popular solution for automatic diagnosis of depression, but existing works suffer from two main drawbacks: 1) the high-order interactions between different modalities cannot be well exploited; and 2) the interpretability of the models is weak.
no code implementations • 7 Sep 2022 • Mengya Han, Yibing Zhan, Yong Luo, Bo Du, Han Hu, Yonggang Wen, DaCheng Tao
To address the above issues, we propose a novel metric-based meta-learning framework termed instance-adaptive class representation learning network (ICRL-Net) for few-shot visual recognition.
no code implementations • 30 Aug 2022 • Xinbiao Wang, Junyu Liu, Tongliang Liu, Yong Luo, Yuxuan Du, DaCheng Tao
To fill this knowledge gap, here we propose the effective quantum neural tangent kernel (EQNTK) and connect this concept with over-parameterization theory to quantify the convergence of QNNs towards the global optima.
1 code implementation • 3 Aug 2022 • Rui Xu, Yong Luo, Bo Du, Kaiming Kuang, Jiancheng Yang
Convolutional neural networks (CNNs) have been demonstrated to be highly effective in the field of pulmonary nodule detection.
1 code implementation • 27 Jul 2022 • Mengya Han, Heliang Zheng, Chaoyue Wang, Yong Luo, Han Hu, Bo Du
Overall, this work is an attempt to explore the internal relevance between generation tasks and perception tasks through prompt design.
1 code implementation • 15 Jun 2022 • Xiaowen Wei, Xiuwen Gong, Yibing Zhan, Bo Du, Yong Luo, Wenbin Hu
Experimental results on real-world networks demonstrate that CLNode is a general framework that can be combined with various GNNs to improve their accuracy and robustness.
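The curriculum idea behind CLNode can be sketched as a schedule that trains on the easiest nodes first and grows the training set over epochs. The difficulty scores here are assumed inputs (CLNode derives them from the graph itself), and the linear pacing function is one common choice, not necessarily the paper's.

```python
# Assumed sketch of a curriculum pacing schedule: given per-node difficulty
# scores, return the easiest fraction of nodes available at this epoch,
# growing linearly from lam0 to the full set.

def curriculum_subset(difficulties, epoch, total_epochs, lam0=0.25):
    """Indices of the easiest nodes to train on at the given epoch."""
    frac = min(1.0, lam0 + (1 - lam0) * epoch / total_epochs)
    order = sorted(range(len(difficulties)), key=lambda i: difficulties[i])
    k = max(1, int(frac * len(difficulties)))
    return order[:k]
```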
1 code implementation • 24 May 2022 • Zhiwei Hao, Yong Luo, Zhi Wang, Han Hu, Jianping An
To tackle this challenge, we propose a framework termed collaborative data-free knowledge distillation via multi-level feature sharing (CDFKD-MFS), which consists of a multi-header student module, an asymmetric adversarial data-free KD module, and an attention-based aggregation module.
1 code implementation • 24 May 2022 • Zhiwei Hao, Guanyu Xu, Yong Luo, Han Hu, Jianping An, Shiwen Mao
In this paper, we study the multi-agent collaborative inference scenario, where a single edge server coordinates the inference of multiple UEs.
no code implementations • 7 Mar 2022 • Peipei Zhu, Xiao Wang, Yong Luo, Zhenglong Sun, Wei-Shi Zheng, YaoWei Wang, Changwen Chen
The image-level labels are utilized to train a weakly-supervised object recognition model to extract object information (e.g., instance) in an image, and the extracted instances are adopted to infer the relationships among different objects based on an enhanced graph neural network (GNN).
no code implementations • 15 Feb 2022 • Yibing Zhan, Zhi Chen, Jun Yu, Baosheng Yu, DaCheng Tao, Yong Luo
As a result, HLN significantly improves the performance of scene graph generation by integrating and reasoning from object interactions, relationship interactions, and transitive inference of hyper-relationships.
no code implementations • 28 Jan 2022 • Boda Lin, Zijun Yao, Jiaxin Shi, Shulin Cao, Binghao Tang, Si Li, Yong Luo, Juanzi Li, Lei Hou
To remedy these drawbacks, we propose to achieve universal and schema-free Dependency Parsing (DP) via Sequence Generation (SG), termed DPSG, utilizing only a pre-trained language model (PLM) without any auxiliary structures or parsing algorithms.
1 code implementation • 18 Jan 2022 • Chao Chen, Yibing Zhan, Baosheng Yu, Liu Liu, Yong Luo, Bo Du
To address this problem, we propose Resistance Training using Prior Bias (RTPB) for the scene graph generation.
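One generic way to realize a prior bias against frequent predicates is class-balanced inverse-frequency weighting. This sketch is in that spirit; the exact bias form used in RTPB differs, and the formula below (the "effective number" weighting of Cui et al.) is a stand-in assumption.

```python
# Assumed sketch: compute per-predicate loss weights from training counts
# so that rare predicates receive larger weights, resisting the head-class
# prior during training.

def prior_bias_weights(counts, beta=0.999):
    """Class-balanced-style weights, normalized to mean 1."""
    weights = [(1 - beta) / (1 - beta ** c) if c > 0 else 0.0 for c in counts]
    total = sum(weights)
    return [w / total * len(counts) for w in weights]
```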
no code implementations • 2 Dec 2021 • Jingyi Feng, Yong Luo, Shuang Song
Neural decoding plays a vital role in the interaction between the brain and the outside world.
1 code implementation • 18 May 2021 • Yuanming Li, Huaizheng Zhang, Shanshan Jiang, Fan Yang, Yonggang Wen, Yong Luo
AI engineering has emerged as a crucial discipline to democratize deep neural network (DNN) models among software developers with diverse backgrounds.
no code implementations • 31 Mar 2021 • Xinbiao Wang, Yuxuan Du, Yong Luo, DaCheng Tao
In this study, we fill this knowledge gap by exploiting the power of quantum kernels when the quantum system noise and sample error are considered.
no code implementations • 5 Feb 2021 • Huaizheng Zhang, Meng Shen, Yizheng Huang, Yonggang Wen, Yong Luo, Guanyu Gao, Kyle Guan
To save bandwidth and reduce RTT, VPaaS provides a new video streaming protocol that only sends low-quality video to the cloud.
no code implementations • ICCV 2021 • Lin Zhang, Yong Luo, Yan Bai, Bo Du, Ling-Yu Duan
Federated Learning (FL) aims to establish a shared model across decentralized clients under the privacy-preserving constraint.
2 code implementations • 9 Jun 2020 • Huaizheng Zhang, Yuanming Li, Qiming Ai, Yong Luo, Yonggang Wen, Yichao Jin, Nguyen Binh Duong Ta
Combining video streaming and online retailing (V2R) has been a growing trend recently.
no code implementations • 21 Dec 2019 • Huaizheng Zhang, Yong Luo, Qiming Ai, Yonggang Wen
A multitask loss function is also designed to train both the topic and sentiment prediction models jointly in an end-to-end manner.
1 code implementation • 31 Jul 2019 • Yihang Lou, Ling-Yu Duan, Yong Luo, Ziqian Chen, Tongliang Liu, Shiqi Wang, Wen Gao
The digital retina in smart cities selects what the City Eye tells the City Brain, converting the visual data acquired from front-end visual sensors into features in an intelligent sensing manner.
no code implementations • 8 Apr 2019 • Yong Luo, Tongliang Liu, DaCheng Tao, Chao Xu
In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics.
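The combination step in DTDML has a simple linear form: the target metric is a weighted sum of base metrics. This sketch uses fixed weights; DTDML learns them with a sparsity regularizer, which is omitted here, and all names are illustrative.

```python
# Sketch: M_target = sum_i w_i * M_i over d x d base metrics, plus the
# squared Mahalanobis-style distance the combined metric induces.

def combine_metrics(base_metrics, weights):
    """Elementwise weighted sum of base metric matrices."""
    d = len(base_metrics[0])
    target = [[0.0] * d for _ in range(d)]
    for w, M in zip(weights, base_metrics):
        for r in range(d):
            for c in range(d):
                target[r][c] += w * M[r][c]
    return target

def metric_distance(M, x, y):
    """(x - y)^T M (x - y) for a metric matrix M."""
    diff = [a - b for a, b in zip(x, y)]
    d = len(diff)
    return sum(diff[r] * M[r][c] * diff[c] for r in range(d) for c in range(d))
```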
no code implementations • 8 Apr 2019 • Yong Luo, DaCheng Tao, Chang Xu, Chao Xu, Hong Liu, Yonggang Wen
In computer vision, image datasets used for classification are naturally associated with multiple labels and composed of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape).
no code implementations • 8 Apr 2019 • Yong Luo, Yonggang Wen, Tongliang Liu, DaCheng Tao
Some existing heterogeneous transfer learning (HTL) approaches can learn a target distance metric, usually by transforming the samples of the source and target domains into a common subspace.
no code implementations • 8 Apr 2019 • Yong Luo, Yonggang Wen, DaCheng Tao, Jie Gui, Chao Xu
The features used in many image analysis-based applications are frequently of very high dimension.
no code implementations • 8 Apr 2019 • Yong Luo, Tongliang Liu, DaCheng Tao, Chao Xu
Therefore, we propose to weightedly combine the MC outputs of different views, and present the multi-view matrix completion (MVMC) framework for transductive multi-label image classification.
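The weighted fusion step in MVMC can be sketched directly: combine the per-view completed label score matrices with nonnegative weights summing to one. The matrix completion itself, and how MVMC learns the weights, are omitted; names are illustrative.

```python
# Sketch: weighted average of per-view label score matrices (n samples x L labels).

def fuse_views(view_predictions, weights):
    """Return the weighted combination of the views' completed label matrices."""
    n, L = len(view_predictions[0]), len(view_predictions[0][0])
    fused = [[0.0] * L for _ in range(n)]
    for w, P in zip(weights, view_predictions):
        for i in range(n):
            for j in range(L):
                fused[i][j] += w * P[i][j]
    return fused
```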
no code implementations • 8 Apr 2019 • Yong Luo, Yonggang Wen, DaCheng Tao
Heterogeneous transfer learning approaches can be adopted to remedy this drawback by deriving a metric from the learned transformation across different domains.
no code implementations • 4 Apr 2019 • Meng Liu, Chang Xu, Yong Luo, Chao Xu, Yonggang Wen, DaCheng Tao
Feature selection is beneficial for improving the performance of general machine learning tasks by extracting an informative subset from the high-dimensional features.
no code implementations • 9 Oct 2018 • Yong Luo, Yonggang Wen, Ling-Yu Duan, DaCheng Tao
Distance metric learning (DML) aims to find an appropriate way to reveal the underlying data relationship.
no code implementations • 5 Oct 2018 • Yong Luo, Huaizheng Zhang, Yongjie Wang, Yonggang Wen, Xinwen Zhang
We compare the different variants with our baseline model.
3 code implementations • 9 Feb 2015 • Yong Luo, DaCheng Tao, Yonggang Wen, Kotagiri Ramamohanarao, Chao Xu
As a consequence, the high-order correlation information contained in the different views is explored, and thus a more reliable common subspace shared by all features can be obtained.