Search Results for author: Yue Ding

Found 29 papers, 10 papers with code

On Isotropy Calibration of Transformer Models

no code implementations • insights (ACL) 2022 • Yue Ding, Karolis Martinkus, Damian Pascual, Simon Clematide, Roger Wattenhofer

Different studies of the embedding space of transformer models suggest that the distribution of contextual representations is highly anisotropic - the embeddings are distributed in a narrow cone.
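One way to see what "highly anisotropic" means in practice: anisotropy is commonly quantified as the mean pairwise cosine similarity of the contextual embeddings, which is close to 1 when they sit in a narrow cone and near 0 when they are spread isotropically. A minimal pure-Python sketch on toy 2-D vectors (not real transformer embeddings):

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def anisotropy(embeddings):
    # Mean pairwise cosine similarity: ~1 => narrow cone, ~0 => isotropic.
    n = len(embeddings)
    sims = [cosine(embeddings[i], embeddings[j])
            for i in range(n) for j in range(i + 1, n)]
    return sum(sims) / len(sims)

# Vectors clustered in a narrow cone score much higher than spread-out ones.
cone = [[1.0, 0.1], [1.0, -0.1], [0.9, 0.05]]
spread = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
```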

Learning Multiple Representations with Inconsistency-Guided Detail Regularization for Mask-Guided Matting

no code implementations • 28 Mar 2024 • Weihao Jiang, Zhaozhi Xie, Yuxiang Lu, Longjie Qi, Jingyong Cai, Hiroyuki Uchiyama, Bin Chen, Yue Ding, Hongtao Lu

Our framework and model introduce the following key aspects: (1) to learn real-world adaptive semantic representation for objects with diverse and complex structures in real-world scenes, we introduce extra semantic segmentation and edge detection tasks on more diverse real-world data with segmentation annotations; (2) to avoid overfitting on low-level details, we propose a module that utilizes the inconsistency between the learned segmentation and matting representations to regularize detail refinement; (3) we introduce a novel background line detection task into our auxiliary learning framework, to suppress interference from background lines or textures.

Auxiliary Learning Edge Detection +4

Data-driven Energy Consumption Modelling for Electric Micromobility using an Open Dataset

no code implementations • 26 Mar 2024 • Yue Ding, Sen Yan, Maqsood Hussain Shah, Hongyuan Fang, Ji Li, Mingming Liu

Furthermore, we provide a comprehensive analysis of energy consumption modelling based on the dataset using a set of representative machine learning algorithms and compare their performance against the contemporary mathematical models as a baseline.
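The kind of comparison described here can be sketched in a few lines (synthetic numbers for illustration only, not the open dataset used in the paper): a data-driven model fitted by least squares against a fixed nominal consumption rate standing in for the mathematical baseline, compared by RMSE:

```python
def rmse(pred, actual):
    # Root-mean-square error between predictions and observations.
    return (sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)) ** 0.5

# Synthetic example: energy (Wh) roughly linear in distance (km).
distance = [1.0, 2.0, 3.0, 4.0, 5.0]
energy = [12.1, 23.8, 36.2, 47.9, 60.3]

# "Mathematical" baseline: an assumed nominal rate of 11 Wh/km.
baseline_pred = [11.0 * d for d in distance]

# Data-driven model: least-squares slope fitted to the observations.
slope = sum(d * e for d, e in zip(distance, energy)) / sum(d * d for d in distance)
fitted_pred = [slope * d for d in distance]
```

On this toy data the fitted model recovers the true consumption rate and beats the fixed-rate baseline; on real trips, features such as speed, slope, and rider mass would enter the model as well.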

Perceptual learning in contour detection transfer across changes in contour path and orientation

no code implementations • 18 Mar 2024 • Yue Ding, Hongqiao Shi, Shuang Song, Yonghui Wang, Ya Li

The integration of local elements into shape contours is critical for target detection and identification in cluttered scenes.

Contour Detection Specificity

Optimal Design and Implementation of an Open-source Emulation Platform for User-Centric Shared E-mobility Services

no code implementations • 12 Mar 2024 • Maqsood Hussain Shah, Yue Ding, Shaoshu Zhu, Yingqi Gu, Mingming Liu

In response to the escalating global challenge of increasing emissions and pollution in transportation, shared electric mobility services, encompassing e-cars, e-bikes, and e-scooters, have emerged as a popular strategy.

Federated Multi-Task Learning on Non-IID Data Silos: An Experimental Study

1 code implementation • 20 Feb 2024 • Yuwen Yang, Yuxiang Lu, Suizhi Huang, Shalayiding Sirejiding, Hongtao Lu, Yue Ding

The innovative Federated Multi-Task Learning (FMTL) approach consolidates the benefits of Federated Learning (FL) and Multi-Task Learning (MTL), enabling collaborative model training on multi-task learning datasets.

Federated Learning Multi-Task Learning

Enhancing cross-domain detection: adaptive class-aware contrastive transformer

no code implementations • 24 Jan 2024 • Ziru Zeng, Yue Ding, Hongtao Lu

Recently, the detection transformer has gained substantial attention for its inherent minimal post-processing requirement. However, this paradigm relies on abundant training data, yet in the context of cross-domain adaptation, insufficient labels in the target domain exacerbate issues of class imbalance and model performance degradation. To address these challenges, we propose a novel class-aware cross-domain detection transformer based on adversarial learning and the mean-teacher framework. First, considering the inconsistencies between the classification and regression tasks, we introduce an IoU-aware prediction branch and exploit the consistency of classification and localization scores to filter and reweight pseudo labels. Second, we devise a dynamic category threshold refinement to adaptively manage model confidence. Third, to alleviate the class imbalance, an instance-level class-aware contrastive learning module is presented to encourage the generation of discriminative features for each class, particularly benefiting minority classes. Experimental results across diverse domain-adaptive scenarios validate our method's effectiveness in improving performance and alleviating class imbalance, outperforming state-of-the-art transformer-based methods.

Contrastive Learning Domain Adaptation
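The pseudo-label filtering and reweighting idea above can be sketched as follows: a box is kept only when its classification score clears a threshold, and each kept box is weighted by the agreement between its classification and IoU (localization) scores. The weighting formula below is an assumption for illustration, not the paper's exact rule:

```python
def reweight_pseudo_labels(preds, cls_threshold=0.5):
    # preds: list of (cls_score, iou_score, box) tuples from the teacher.
    # Keep boxes whose classification score clears the threshold, then
    # weight each by how consistent its classification and localization
    # scores are -- disagreeing boxes contribute less to the student loss.
    kept = []
    for cls_score, iou_score, box in preds:
        if cls_score < cls_threshold:
            continue  # filter low-confidence pseudo labels
        consistency = 1.0 - abs(cls_score - iou_score)
        weight = cls_score * iou_score * consistency  # illustrative weighting
        kept.append((box, weight))
    return kept

# Confident, consistent box; confident but badly localized box; rejected box.
preds = [(0.9, 0.9, "box_a"), (0.9, 0.2, "box_b"), (0.3, 0.9, "box_c")]
```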

PA-SAM: Prompt Adapter SAM for High-Quality Image Segmentation

1 code implementation • 23 Jan 2024 • Zhaozhi Xie, Bochen Guan, Weihao Jiang, Muyang Yi, Yue Ding, Hongtao Lu, Lei Zhang

In this paper, we introduce a novel prompt-driven adapter into SAM, namely Prompt Adapter Segment Anything Model (PA-SAM), aiming to enhance the segmentation mask quality of the original SAM.

Decoder Image Segmentation +2

FedHCA2: Towards Hetero-Client Federated Multi-Task Learning

1 code implementation • CVPR 2024 • Yuxiang Lu, Suizhi Huang, Yuwen Yang, Shalayiding Sirejiding, Yue Ding, Hongtao Lu

Federated Multi-Task Learning (FMTL) builds on FL to handle multiple tasks, assuming model congruity, i.e., that an identical model architecture is deployed on each client.

Decoder Federated Learning +1

Debiasing Sequential Recommenders through Distributionally Robust Optimization over System Exposure

1 code implementation • 12 Dec 2023 • Jiyuan Yang, Yue Ding, Yidan Wang, Pengjie Ren, Zhumin Chen, Fei Cai, Jun Ma, Rui Zhang, Zhaochun Ren, Xin Xin

Then, we introduce a penalty to items with high exposure probability to avoid the overestimation of user preference for biased samples.

Sequential Recommendation
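The exposure penalty described above can be sketched as an extra loss term that grows with an item's exposure probability, discouraging the model from reading frequent exposure as genuine preference. The `alpha * exposure * score` form below is an illustrative assumption, not the paper's distributionally robust objective:

```python
import math

def exposure_penalized_loss(pred_scores, labels, exposure_probs, alpha=1.0):
    # Binary cross-entropy plus an illustrative exposure penalty: the more
    # often the system exposed an item, the more its predicted preference
    # is penalised, curbing overestimation on biased samples.
    total = 0.0
    for p, y, e in zip(pred_scores, labels, exposure_probs):
        bce = -(y * math.log(p) + (1 - y) * math.log(1 - p))
        total += bce + alpha * e * p  # penalise high scores on high-exposure items
    return total / len(pred_scores)
```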

FedHCA$^2$: Towards Hetero-Client Federated Multi-Task Learning

no code implementations • 22 Nov 2023 • Yuxiang Lu, Suizhi Huang, Yuwen Yang, Shalayiding Sirejiding, Yue Ding, Hongtao Lu

Moreover, we employ learnable Hyper Aggregation Weights for each client to customize personalized parameter updates.

Decoder Federated Learning +1

UNIDEAL: Curriculum Knowledge Distillation Federated Learning

no code implementations • 16 Sep 2023 • Yuwen Yang, Chang Liu, Xun Cai, Suizhi Huang, Hongtao Lu, Yue Ding

Federated Learning (FL) has emerged as a promising approach to enable collaborative learning among multiple clients while preserving data privacy.

Federated Learning Knowledge Distillation

Prompt Guided Transformer for Multi-Task Dense Prediction

1 code implementation • 28 Jul 2023 • Yuxiang Lu, Shalayiding Sirejiding, Yue Ding, Chunlin Wang, Hongtao Lu

Task-conditional architectures offer an advantage in parameter efficiency but fall short in performance compared to state-of-the-art multi-decoder methods.

Boundary Detection Decoder +4

EDEN: A Plug-in Equivariant Distance Encoding to Beyond the 1-WL Test

no code implementations • 19 Nov 2022 • Chang Liu, Yuwen Yang, Yue Ding, Hongtao Lu

While most existing message-passing graph neural networks (MPNNs) are permutation-invariant in graph-level representation learning and permutation-equivariant in node- and edge-level representation learning, their expressive power is commonly limited by the 1-Weisfeiler-Lehman (1-WL) graph isomorphism test.

Graph Representation Learning
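The 1-WL test referenced above is the classic colour-refinement procedure; a compact sketch shows both how it works and why its expressive power is limited (a 6-cycle and two disjoint triangles receive identical colour histograms, so 1-WL cannot tell them apart):

```python
def wl_colors(adj, rounds=3):
    # 1-WL colour refinement: each round, a node's new colour is determined
    # by its own colour plus the sorted multiset of its neighbours' colours.
    # Differing colour histograms prove non-isomorphism; equal histograms
    # are inconclusive.
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in adj}
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: relabel[signatures[v]] for v in adj}
    return sorted(colors.values())

# Classic failure case: a 6-cycle vs. two disjoint triangles -- both are
# 2-regular, so 1-WL assigns every node the same colour in both graphs.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
```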

Position-Aware Subgraph Neural Networks with Data-Efficient Learning

1 code implementation • 1 Nov 2022 • Chang Liu, Yuwen Yang, Zhe Xie, Hongtao Lu, Yue Ding

2) Prevailing graph augmentation methods for GEL, including rule-based, sample-based, adaptive, and automated methods, are not suitable for augmenting subgraphs because a subgraph contains fewer nodes but richer information such as position, neighbor, and structure.

Contrastive Learning Position +1

Completely Heterogeneous Federated Learning

no code implementations • 28 Oct 2022 • Chang Liu, Yuwen Yang, Xun Cai, Yue Ding, Hongtao Lu

Federated learning (FL) faces three major difficulties: cross-domain data, heterogeneous models, and non-i.i.d. data.

Data-free Knowledge Distillation Federated Learning

NoMorelization: Building Normalizer-Free Models from a Sample's Perspective

no code implementations • 13 Oct 2022 • Chang Liu, Yuwen Yang, Yue Ding, Hongtao Lu

The normalizing layer has become one of the basic configurations of deep learning models, but it still suffers from computational inefficiency, interpretability difficulties, and low generality.

Task Aligned Meta-learning based Augmented Graph for Cold-Start Recommendation

no code implementations • 11 Aug 2022 • Yuxiang Shi, Yue Ding, Bo Chen, YuYang Huang, Yule Wang, Ruiming Tang, Dong Wang

In this paper, we propose a Task aligned Meta-learning based Augmented Graph (TMAG) to address cold-start recommendation.

Graph Neural Network Meta-Learning +1

On Understanding and Mitigating the Dimensional Collapse of Graph Contrastive Learning: a Non-Maximum Removal Approach

no code implementations • 24 Mar 2022 • Jiawei Sun, Ruoxin Chen, Jie Li, Chentao Wu, Yue Ding, Junchi Yan

Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.

Contrastive Learning Graph Classification +1

FreqNet: A Frequency-domain Image Super-Resolution Network with Discrete Cosine Transform

no code implementations • 21 Nov 2021 • Runyuan Cai, Yue Ding, Hongtao Lu

A specialized pipeline is designed, and we further propose a frequency loss function to fit the nature of our frequency-domain task.

Image Super-Resolution
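A frequency loss of the kind mentioned above can be sketched by measuring differences between DCT coefficients rather than pixels, so reconstruction errors are attributed to frequency bands. The naive 1-D DCT-II and the unweighted L1 form below are illustrative assumptions; the paper's loss may be defined over 2-D patches with band-dependent weights:

```python
import math

def dct_1d(signal):
    # Naive, unnormalised 1-D DCT-II -- a stand-in for the 2-D DCT a
    # frequency-domain super-resolution network would apply to patches.
    n = len(signal)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(signal))
            for k in range(n)]

def frequency_loss(pred, target):
    # Mean L1 distance between DCT coefficients: errors are penalised per
    # frequency band rather than per pixel.
    fp, ft = dct_1d(pred), dct_1d(target)
    return sum(abs(a - b) for a, b in zip(fp, ft)) / len(fp)
```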

Extracting Attentive Social Temporal Excitation for Sequential Recommendation

no code implementations • 28 Sep 2021 • Yunzhe Li, Yue Ding, Bo Chen, Xin Xin, Yule Wang, Yuxiang Shi, Ruiming Tang, Dong Wang

In this paper, we propose a novel time-aware sequential recommendation framework called Social Temporal Excitation Networks (STEN), which introduces temporal point processes to model the fine-grained impact of friends' behaviors on the user's dynamic interests in an event-level direct paradigm.

Collaborative Filtering Graph Embedding +2
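A temporal point process of the kind STEN builds on can be sketched as an exponential-kernel Hawkes intensity: each past friend event adds an excitation to the user's interest that decays with elapsed time. The parameters below are illustrative, not the model's learned values:

```python
import math

def hawkes_intensity(t, friend_event_times, base=0.1, alpha=0.5, beta=1.0):
    # Conditional intensity of a user's interest at time t: a base rate
    # plus exponentially decaying excitations from friends' past events.
    # base, alpha (excitation strength), beta (decay rate) are illustrative.
    return base + sum(alpha * math.exp(-beta * (t - ti))
                      for ti in friend_event_times if ti < t)
```

A recent friend event excites the intensity far more than a distant one, which is exactly the fine-grained, event-level influence the snippet describes.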

ICPE: An Item Cluster-Wise Pareto-Efficient Framework for Recommendation Debiasing

no code implementations • 27 Sep 2021 • Yule Wang, Xin Xin, Yue Ding, Yunzhe Li, Dong Wang

In detail, we define our item cluster-wise optimization target as the recommender model should balance all item clusters that differ in popularity, thus we set the model learning on each item cluster as a unique optimization objective.

counterfactual Counterfactual Inference +2

On Isotropy Calibration of Transformers

no code implementations • 27 Sep 2021 • Yue Ding, Karolis Martinkus, Damian Pascual, Simon Clematide, Roger Wattenhofer

Different studies of the embedding space of transformer models suggest that the distribution of contextual representations is highly anisotropic - the embeddings are distributed in a narrow cone.

Adversarial and Contrastive Variational Autoencoder for Sequential Recommendation

1 code implementation • 19 Mar 2021 • Zhe Xie, Chengxuan Liu, Yichi Zhang, Hongtao Lu, Dong Wang, Yue Ding

To solve the above problem, in this work, we propose a novel method called Adversarial and Contrastive Variational Autoencoder (ACVAE) for sequential recommendation.

Collaborative Filtering Sequential Recommendation
