no code implementations • insights (ACL) 2022 • Yue Ding, Karolis Martinkus, Damian Pascual, Simon Clematide, Roger Wattenhofer
Different studies of the embedding space of transformer models suggest that the distribution of contextual representations is highly anisotropic: the embeddings are distributed in a narrow cone.
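The degree of anisotropy can be probed with a simple statistic: the average pairwise cosine similarity of the embeddings, which approaches 1 when all vectors lie in a narrow cone. A minimal, illustrative sketch (not from the paper):

```python
import math

def avg_pairwise_cosine(vectors):
    """Average pairwise cosine similarity of a set of vectors.

    Values near 1 indicate the vectors occupy a narrow cone,
    i.e. the distribution is highly anisotropic.
    """
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    n = len(vectors)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cos(vectors[i], vectors[j]) for i, j in pairs) / len(pairs)
```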
no code implementations • 28 Mar 2024 • Weihao Jiang, Zhaozhi Xie, Yuxiang Lu, Longjie Qi, Jingyong Cai, Hiroyuki Uchiyama, Bin Chen, Yue Ding, Hongtao Lu
Our framework and model introduce the following key aspects: (1) to learn real-world adaptive semantic representation for objects with diverse and complex structures under real-world scenes, we introduce extra semantic segmentation and edge detection tasks on more diverse real-world data with segmentation annotations; (2) to avoid overfitting on low-level details, we propose a module to utilize the inconsistency between learned segmentation and matting representations to regularize detail refinement; (3) we propose a novel background line detection task into our auxiliary learning framework, to suppress interference of background lines or textures.
no code implementations • 26 Mar 2024 • Yue Ding, Sen Yan, Maqsood Hussain Shah, Hongyuan Fang, Ji Li, Mingming Liu
Furthermore, we provide a comprehensive analysis of energy consumption modelling based on the dataset using a set of representative machine learning algorithms and compare their performance against contemporary mathematical models as baselines.
no code implementations • 18 Mar 2024 • Yue Ding, Hongqiao Shi, Shuang Song, Yonghui Wang, Ya Li
The integration of local elements into shape contours is critical for target detection and identification in cluttered scenes.
no code implementations • 12 Mar 2024 • Maqsood Hussain Shah, Yue Ding, Shaoshu Zhu, Yingqi Gu, Mingming Liu
In response to the escalating global challenge of increasing emissions and pollution in transportation, shared electric mobility services, encompassing e-cars, e-bikes, and e-scooters, have emerged as a popular strategy.
no code implementations • 1 Mar 2024 • Suizhi Huang, Shalayiding Sirejiding, Yuxiang Lu, Yue Ding, Leheng Liu, Hui Zhou, Hongtao Lu
Object detection and semantic segmentation are pivotal components in biomedical image analysis.
no code implementations • 1 Mar 2024 • Yuxiang Lu, Shalayiding Sirejiding, Bayram Bayramli, Suizhi Huang, Yue Ding, Hongtao Lu
The task-conditional model is a distinctive approach to efficient multi-task learning.
1 code implementation • 20 Feb 2024 • Yuwen Yang, Yuxiang Lu, Suizhi Huang, Shalayiding Sirejiding, Hongtao Lu, Yue Ding
The innovative Federated Multi-Task Learning (FMTL) approach consolidates the benefits of Federated Learning (FL) and Multi-Task Learning (MTL), enabling collaborative model training on multi-task learning datasets.
1 code implementation • 18 Feb 2024 • Huayi Zhou, Mukun Luo, Fei Jiang, Yue Ding, Hongtao Lu
2D human pose estimation (HPE) is a fundamental visual problem.
no code implementations • 24 Jan 2024 • Ziru Zeng, Yue Ding, Hongtao Lu
Recently, the detection transformer has gained substantial attention for its inherently minimal post-processing requirement. However, this paradigm relies on abundant training data, yet in the context of cross-domain adaptation, insufficient labels in the target domain exacerbate issues of class imbalance and model performance degradation. To address these challenges, we propose a novel class-aware cross-domain detection transformer based on adversarial learning and the mean-teacher framework. First, considering the inconsistencies between the classification and regression tasks, we introduce an IoU-aware prediction branch and exploit the consistency of classification and localization scores to filter and reweight pseudo labels. Second, we devise dynamic category threshold refinement to adaptively manage model confidence. Third, to alleviate class imbalance, an instance-level class-aware contrastive learning module is presented to encourage the generation of discriminative features for each class, particularly benefiting minority classes. Experimental results across diverse domain-adaptive scenarios validate our method's effectiveness in improving performance and alleviating class imbalance, outperforming state-of-the-art transformer-based methods.
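The first step, filtering and reweighting pseudo labels by the agreement of classification and IoU scores, could be sketched as follows (hypothetical key names and threshold; not the paper's implementation):

```python
import math

def reweight_pseudo_labels(predictions, threshold=0.5):
    """Keep only detections whose classification and IoU scores agree.

    Each prediction is a dict with hypothetical keys 'cls_score' and
    'iou_score'; their geometric mean serves as a joint confidence
    that both filters and weights the surviving pseudo labels.
    """
    kept = []
    for p in predictions:
        joint = math.sqrt(p["cls_score"] * p["iou_score"])
        if joint >= threshold:
            kept.append({**p, "weight": joint})
    return kept
```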
1 code implementation • 23 Jan 2024 • Zhaozhi Xie, Bochen Guan, Weihao Jiang, Muyang Yi, Yue Ding, Hongtao Lu, Lei Zhang
In this paper, we introduce a novel prompt-driven adapter into SAM, namely Prompt Adapter Segment Anything Model (PA-SAM), aiming to enhance the segmentation mask quality of the original SAM.
1 code implementation • 12 Dec 2023 • Jiyuan Yang, Yue Ding, Yidan Wang, Pengjie Ren, Zhumin Chen, Fei Cai, Jun Ma, Rui Zhang, Zhaochun Ren, Xin Xin
Then, we introduce a penalty to items with high exposure probability to avoid the overestimation of user preference for biased samples.
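Such an exposure penalty can be illustrated with a toy re-ranking function (an additive penalty with made-up names; the paper's actual objective may differ):

```python
def rank_items(preferences, exposure_probs, penalty_weight=0.5):
    """Rank items by estimated preference minus an exposure penalty.

    Items users see very often (high exposure probability) have their
    scores discounted, so popularity bias does not inflate them.
    """
    scored = {item: pref - penalty_weight * exposure_probs[item]
              for item, pref in preferences.items()}
    return sorted(scored, key=scored.get, reverse=True)
```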
no code implementations • 22 Nov 2023 • Yuxiang Lu, Suizhi Huang, Yuwen Yang, Shalayiding Sirejiding, Yue Ding, Hongtao Lu
Moreover, we employ learnable Hyper Aggregation Weights for each client to customize personalized parameter updates.
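With fixed numbers standing in for the learnable weights, per-client weighted aggregation might look like this (an illustrative sketch, not the paper's code):

```python
def aggregate(client_params, weights):
    """Weighted aggregation of client parameter vectors.

    In the paper the per-client weights are learnable (Hyper
    Aggregation Weights); here they are fixed for illustration.
    """
    total = sum(weights)
    dim = len(client_params[0])
    return [sum(w * p[i] for w, p in zip(weights, client_params)) / total
            for i in range(dim)]
```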
no code implementations • 16 Sep 2023 • Yuwen Yang, Chang Liu, Xun Cai, Suizhi Huang, Hongtao Lu, Yue Ding
Federated Learning (FL) has emerged as a promising approach to enable collaborative learning among multiple clients while preserving data privacy.
no code implementations • 28 Jul 2023 • Yuxiang Lu, Shalayiding Sirejiding, Yue Ding, Chunlin Wang, Hongtao Lu
The task-conditional architecture offers an advantage in parameter efficiency but falls short of state-of-the-art multi-decoder methods in performance.
1 code implementation • 21 Apr 2023 • Huayi Zhou, Fei Jiang, Jiaxin Si, Yue Ding, Hongtao Lu
In this paper, we focus on the joint detection of human body and its parts.
no code implementations • 19 Nov 2022 • Chang Liu, Yuwen Yang, Yue Ding, Hongtao Lu
While most existing message-passing graph neural networks (MPNNs) are permutation-invariant in graph-level representation learning and permutation-equivariant in node- and edge-level representation learning, their expressive power is commonly limited by the 1-Weisfeiler-Lehman (1-WL) graph isomorphism test.
1 code implementation • 1 Nov 2022 • Chang Liu, Yuwen Yang, Zhe Xie, Hongtao Lu, Yue Ding
2) Prevailing graph augmentation methods for GEL, including rule-based, sample-based, adaptive, and automated methods, are not suitable for augmenting subgraphs because a subgraph contains fewer nodes but richer information such as position, neighbor, and structure.
no code implementations • 28 Oct 2022 • Chang Liu, Yuwen Yang, Xun Cai, Yue Ding, Hongtao Lu
Federated learning (FL) faces three major difficulties: cross-domain settings, heterogeneous models, and non-i.i.d. data.
no code implementations • 13 Oct 2022 • Chang Liu, Yuwen Yang, Yue Ding, Hongtao Lu
The normalization layer has become one of the basic components of deep learning models, but it still suffers from computational inefficiency, limited interpretability, and low generality.
no code implementations • 11 Aug 2022 • Yuxiang Shi, Yue Ding, Bo Chen, YuYang Huang, Ruiming Tang, Dong Wang
In this paper, we propose a Task aligned Meta-learning based Augmented Graph (TMAG) to address cold-start recommendation.
no code implementations • 24 Mar 2022 • Jiawei Sun, Ruoxin Chen, Jie Li, Chentao Wu, Yue Ding, Junchi Yan
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
no code implementations • 21 Nov 2021 • Runyuan Cai, Yue Ding, Hongtao Lu
A specialized pipeline is designed, and we further propose a frequency loss function to fit the nature of our frequency-domain task.
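A generic frequency-domain loss, the L1 distance between magnitude spectra, gives the flavor of such an objective (the paper's exact formulation may weight frequencies differently):

```python
import cmath

def dft(signal):
    """Naive discrete Fourier transform of a real-valued sequence."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def frequency_loss(pred, target):
    """L1 distance between the magnitude spectra of two sequences."""
    fp, ft = dft(pred), dft(target)
    return sum(abs(abs(a) - abs(b)) for a, b in zip(fp, ft)) / len(pred)
```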
no code implementations • 28 Sep 2021 • Yunzhe Li, Yue Ding, Bo Chen, Xin Xin, Yule Wang, Yuxiang Shi, Ruiming Tang, Dong Wang
In this paper, we propose a novel time-aware sequential recommendation framework called Social Temporal Excitation Networks (STEN), which introduces temporal point processes to model the fine-grained impact of friends' behaviors on the user's dynamic interests in an event-level direct paradigm.
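A Hawkes-style conditional intensity captures this event-level excitation: a base rate plus exponentially decaying contributions from friends' past events (parameter names are illustrative, not the paper's):

```python
import math

def social_intensity(t, friend_event_times, mu=0.1, alpha=0.5, beta=1.0):
    """Conditional intensity of a user's interest at time t.

    A Hawkes-style process: base rate mu plus an exponentially
    decaying excitation alpha * exp(-beta * (t - t_i)) for each
    friend event at time t_i < t.
    """
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in friend_event_times if ti < t)
```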
no code implementations • 27 Sep 2021 • Yule Wang, Xin Xin, Yue Ding, Yunzhe Li, Dong Wang
In detail, we define our item cluster-wise optimization target as the recommender model should balance all item clusters that differ in popularity, thus we set the model learning on each item cluster as a unique optimization objective.
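Averaging within each cluster before averaging across clusters realizes this balance; a minimal sketch with hypothetical inputs:

```python
def cluster_balanced_loss(item_losses, item_cluster):
    """Average losses within each popularity cluster, then average
    uniformly over clusters, so no cluster dominates the objective.
    """
    per_cluster = {}
    for item, loss in item_losses.items():
        per_cluster.setdefault(item_cluster[item], []).append(loss)
    means = [sum(v) / len(v) for v in per_cluster.values()]
    return sum(means) / len(means)
```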
no code implementations • 26 Sep 2021 • Yule Wang, Qiang Luo, Yue Ding, Yunzhe Li, Dong Wang, Hongbo Deng
In this paper, we propose a novel model named DemiNet (short for DEpendency-Aware Multi-Interest Network) to address the above two issues.
1 code implementation • 19 Mar 2021 • Zhe Xie, Chengxuan Liu, Yichi Zhang, Hongtao Lu, Dong Wang, Yue Ding
To solve the above problem, in this work, we propose a novel method called Adversarial and Contrastive Variational Autoencoder (ACVAE) for sequential recommendation.