Search Results for author: Ximei Wang

Found 17 papers, 10 papers with code

Understanding the Ranking Loss for Recommendation with Sparse User Feedback

1 code implementation • 21 Mar 2024 • Zhutian Lin, Junwei Pan, Shangyu Zhang, Ximei Wang, Xi Xiao, Shudong Huang, Lei Xiao, Jie Jiang

In this paper, we uncover a new challenge associated with BCE loss in scenarios with sparse positive feedback, such as CTR prediction: the gradient vanishing for negative samples.

Binary Classification • Click-Through Rate Prediction
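
The gradient-vanishing claim follows directly from the BCE gradient: for a logit z and label y, dL/dz = sigma(z) − y, so a negative sample (y = 0) contributes sigma(z), which decays toward zero as the predicted probability drops. A minimal numpy sketch of this effect (illustrative, not the paper's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of BCE w.r.t. the logit z is sigma(z) - y.
# For a negative sample (y = 0) the gradient is sigma(z) itself,
# so once the model predicts a low probability, updates vanish.
logits = np.array([0.0, -2.0, -4.0, -6.0, -8.0])
grad_negative = sigmoid(logits) - 0.0  # y = 0

for z, g in zip(logits, grad_negative):
    print(f"logit={z:+.1f}  p={sigmoid(z):.4f}  |grad|={g:.4f}")

# In sparse-feedback settings such as CTR prediction, most samples are
# negatives with tiny predicted probabilities, so their gradients
# contribute almost nothing -- the issue the paper attributes to BCE.
```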

Ad Recommendation in a Collapsed and Entangled World

no code implementations • 22 Feb 2024 • Junwei Pan, Wei Xue, Ximei Wang, Haibin Yu, Xun Liu, Shijie Quan, Xueming Qiu, Dapeng Liu, Lei Xiao, Jie Jiang

In this paper, we present an industry ad recommendation system, paying attention to the challenges and practices of learning appropriate representations.

Feature Correlation • Model Optimization

On the Embedding Collapse when Scaling up Recommendation Models

no code implementations • 6 Oct 2023 • Xingzhuo Guo, Junwei Pan, Ximei Wang, Baixu Chen, Jie Jiang, Mingsheng Long

Recent advances in deep foundation models have led to a promising trend of developing large recommendation models to leverage vast amounts of available data.

Decoupled Training: Return of Frustratingly Easy Multi-Domain Learning

no code implementations • 19 Sep 2023 • Ximei Wang, Junwei Pan, Xingzhuo Guo, Dapeng Liu, Jie Jiang

Multi-domain learning (MDL) aims to train a model with minimal average risk across multiple overlapping but non-identical domains.

Recommendation Systems
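
As background for the snippet above, the MDL objective of minimizing average risk reduces to averaging per-domain losses. A hedged PyTorch sketch of that setup follows; the shared-backbone-plus-per-domain-heads split is an illustrative assumption, not necessarily the paper's decoupled architecture:

```python
import torch
import torch.nn as nn

# Hypothetical MDL setup: a shared backbone with one head per domain,
# trained to minimize the average risk over all domains. The
# architecture split here is an assumption for illustration.
class MultiDomainModel(nn.Module):
    def __init__(self, in_dim, hidden, n_domains):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_domains))

    def forward(self, x, domain):
        return self.heads[domain](self.backbone(x))

model = MultiDomainModel(in_dim=16, hidden=32, n_domains=3)
loss_fn = nn.BCEWithLogitsLoss()

# One batch per domain; the objective is the mean of per-domain risks.
batches = [(torch.randn(8, 16), torch.randint(0, 2, (8, 1)).float()) for _ in range(3)]
loss = torch.stack([loss_fn(model(x, d), y) for d, (x, y) in enumerate(batches)]).mean()
loss.backward()
```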

STEM: Unleashing the Power of Embeddings for Multi-task Recommendation

1 code implementation • 16 Aug 2023 • Liangcai Su, Junwei Pan, Ximei Wang, Xi Xiao, Shijie Quan, Xihua Chen, Jie Jiang

Surprisingly, negative transfer still occurs in existing MTL methods on samples that receive comparable feedback across tasks.

Multi-Task Learning • Recommendation Systems
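
The title suggests per-task embeddings alongside shared ones for multi-task recommendation. A hedged sketch of one way to realize that idea; the exact STEM architecture may differ:

```python
import torch
import torch.nn as nn

# Illustrative sketch only: each task gets its own embedding table in
# addition to a shared one, so tasks with conflicting feedback need not
# squeeze into a single representation. Details are assumptions, not
# the paper's exact STEM design.
class SharedPlusTaskEmbeddings(nn.Module):
    def __init__(self, n_items, dim, n_tasks):
        super().__init__()
        self.shared = nn.Embedding(n_items, dim)
        self.task_specific = nn.ModuleList(nn.Embedding(n_items, dim) for _ in range(n_tasks))
        self.towers = nn.ModuleList(nn.Linear(2 * dim, 1) for _ in range(n_tasks))

    def forward(self, item_ids):
        shared = self.shared(item_ids)
        # One logit per task, each from the shared plus its own embedding.
        return [tower(torch.cat([shared, emb(item_ids)], dim=-1))
                for emb, tower in zip(self.task_specific, self.towers)]

model = SharedPlusTaskEmbeddings(n_items=1000, dim=16, n_tasks=2)
logits_per_task = model(torch.randint(0, 1000, (4,)))
```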

CLIPood: Generalizing CLIP to Out-of-Distributions

1 code implementation • 2 Feb 2023 • Yang Shu, Xingzhuo Guo, Jialong Wu, Ximei Wang, Jianmin Wang, Mingsheng Long

This paper aims at generalizing CLIP to out-of-distribution test data on downstream tasks.

ForkMerge: Mitigating Negative Transfer in Auxiliary-Task Learning

1 code implementation • NeurIPS 2023 • Junguang Jiang, Baixu Chen, Junwei Pan, Ximei Wang, Dapeng Liu, Jie Jiang, Mingsheng Long

Auxiliary-Task Learning (ATL) aims to improve the performance of the target task by leveraging the knowledge obtained from related tasks.
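
As the name suggests, ForkMerge forks training into branches and later merges them. A heavily hedged sketch of the merge step as a convex combination of branch parameters; the forking schedule and how merge weights are chosen (e.g., on validation data) are omitted and may differ from the paper:

```python
import copy
import torch
import torch.nn as nn

def merge_parameters(branches, weights):
    """Convex combination of identically shaped models' parameters.

    Illustrative only: ForkMerge's actual forking procedure and the
    selection of merge weights are not shown here.
    """
    merged = copy.deepcopy(branches[0])
    with torch.no_grad():
        for name, p in merged.named_parameters():
            p.copy_(sum(w * dict(b.named_parameters())[name]
                        for w, b in zip(weights, branches)))
    return merged

branch_a = nn.Linear(4, 2)           # e.g., trained on the target task only
branch_b = copy.deepcopy(branch_a)   # e.g., trained jointly with auxiliary tasks
merged = merge_parameters([branch_a, branch_b], weights=[0.3, 0.7])
```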

AdaTask: A Task-aware Adaptive Learning Rate Approach to Multi-task Learning

no code implementations • 28 Nov 2022 • Enneng Yang, Junwei Pan, Ximei Wang, Haibin Yu, Li Shen, Xihua Chen, Lei Xiao, Jie Jiang, Guibing Guo

In this paper, we propose to measure the task dominance degree of a parameter by the total updates of each task on this parameter.

Multi-Task Learning • Recommendation Systems
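
The per-task-updates idea in the snippet can be sketched as an Adagrad-style rule with one accumulator per task, so a dominant task does not swamp the adaptive step size. This is a simplified reading, not the paper's full algorithm:

```python
import torch

# Simplified sketch: keep a separate Adagrad accumulator per task so the
# adaptive denominator reflects each task's own update history on each
# parameter. AdaTask itself may differ in detail.
def adatask_style_step(param, per_task_grads, accumulators, lr=0.01, eps=1e-10):
    for task_id, grad in enumerate(per_task_grads):
        accumulators[task_id] += grad ** 2            # task-wise accumulation
        param -= lr * grad / (accumulators[task_id].sqrt() + eps)
    return param

param = torch.zeros(5)
accumulators = [torch.zeros(5), torch.zeros(5)]
grads = [torch.randn(5) * 10.0, torch.randn(5) * 0.1]  # one dominant task
param = adatask_style_step(param, grads, accumulators)
```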

X-model: Improving Data Efficiency in Deep Learning with A Minimax Model

no code implementations • ICLR 2022 • Ximei Wang, Xinyang Chen, Jianmin Wang, Mingsheng Long

To take the power of both worlds, we propose a novel X-model by simultaneously encouraging the invariance to data stochasticity and model stochasticity.

Age Estimation • Object Recognition • +2
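
One generic way to encourage the two invariances named in the snippet is a consistency loss between predictions under stochastic input noise (data) and stochastic dropout (model). The sketch below follows that recipe and is not the paper's minimax formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generic consistency sketch: two forward passes differ both in the input
# noise (data stochasticity) and in dropout masks (model stochasticity);
# penalizing their disagreement encourages invariance to both. X-model's
# actual minimax objective is more involved than this.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Dropout(0.5), nn.Linear(32, 10))
model.train()  # keep dropout active for both passes

x = torch.randn(8, 16)
view1 = x + 0.1 * torch.randn_like(x)  # stochastic "augmentation" (noise)
view2 = x + 0.1 * torch.randn_like(x)

p1 = F.log_softmax(model(view1), dim=-1)
p2 = F.softmax(model(view2), dim=-1)
consistency_loss = F.kl_div(p1, p2, reduction="batchmean")
consistency_loss.backward()
```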

Regressive Domain Adaptation for Unsupervised Keypoint Detection

2 code implementations • CVPR 2021 • Junguang Jiang, Yifei Ji, Ximei Wang, Yufeng Liu, Jianmin Wang, Mingsheng Long

First, based on our observation that the probability density of the output space is sparse, we introduce a spatial probability distribution to describe this sparsity and then use it to guide the learning of the adversarial regressor.

Domain Adaptation • Keypoint Detection

Self-Tuning for Data-Efficient Deep Learning

2 code implementations • 25 Feb 2021 • Ximei Wang, Jinghan Gao, Mingsheng Long, Jianmin Wang

Deep learning has made revolutionary advances to diverse applications in the presence of large-scale labeled datasets.

Transfer Learning

Bi-tuning of Pre-trained Representations

no code implementations • 12 Nov 2020 • Jincheng Zhong, Ximei Wang, Zhi Kou, Jianmin Wang, Mingsheng Long

It is common within the deep learning community to first pre-train a deep neural network from a large-scale dataset and then fine-tune the pre-trained model to a specific downstream task.

Contrastive Learning • Unsupervised Pre-training
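
The pre-train-then-fine-tune workflow the snippet describes looks like the following standard PyTorch recipe; Bi-tuning itself adds contrastive objectives on top, which are not shown:

```python
import torch
import torch.nn as nn
from torchvision import models

# Standard fine-tuning recipe the snippet refers to: load weights
# pre-trained on a large dataset, swap the classification head for the
# downstream task, then train end-to-end.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)  # new head for 10 classes

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images, labels = torch.randn(4, 3, 224, 224), torch.randint(0, 10, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```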

Transferable Calibration with Lower Bias and Variance in Domain Adaptation

no code implementations • NeurIPS 2020 • Ximei Wang, Mingsheng Long, Jianmin Wang, Michael I. Jordan

In this paper, we delve into the open problem of Calibration in DA, which is extremely challenging due to the coexistence of domain shift and the lack of target labels.

Decision Making • Domain Adaptation
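
Calibration here means predicted confidences should match empirical accuracies. A standard way to quantify the gap is the expected calibration error (ECE), sketched below; this is the usual diagnostic, not the paper's transferable-calibration estimator:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by confidence and average the
    |accuracy - confidence| gap per bin, weighted by bin size."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

conf = np.random.uniform(0.5, 1.0, size=1000)
correct = (np.random.uniform(size=1000) < conf * 0.8).astype(float)  # overconfident model
print(f"ECE = {expected_calibration_error(conf, correct):.3f}")
```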

Minimum Class Confusion for Versatile Domain Adaptation

3 code implementations • ECCV 2020 • Ying Jin, Ximei Wang, Mingsheng Long, Jianmin Wang

It can be characterized as (1) a non-adversarial DA method without explicitly deploying domain alignment, enjoying faster convergence speed; (2) a versatile approach that can handle four existing scenarios: Closed-Set, Partial-Set, Multi-Source, and Multi-Target DA, outperforming the state-of-the-art methods in these scenarios, especially on one of the largest and hardest datasets to date (7.3% on DomainNet).

Inductive Bias • Multi-target Domain Adaptation • +1
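
The class-confusion idea can be sketched by building the pairwise class-correlation matrix from a batch of target-domain softmax predictions and penalizing its off-diagonal mass. This simplified version omits the paper's certainty weighting and normalization details:

```python
import torch
import torch.nn.functional as F

def class_confusion_penalty(logits):
    """Simplified class-confusion loss: form the C x C correlation of
    batch predictions and penalize off-diagonal (between-class) mass.
    MCC additionally weights samples by prediction certainty; that
    detail is omitted here."""
    probs = F.softmax(logits, dim=1)               # (B, C)
    confusion = probs.t() @ probs                  # (C, C)
    confusion = confusion / confusion.sum(dim=1, keepdim=True)
    off_diag = confusion.sum() - confusion.diag().sum()
    return off_diag / logits.size(1)

target_logits = torch.randn(32, 7, requires_grad=True)  # unlabeled target batch
loss = class_confusion_penalty(target_logits)
loss.backward()
```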

Towards Accurate Model Selection in Deep Unsupervised Domain Adaptation

2 code implementations • ICML 2019 • Kaichao You, Ximei Wang, Mingsheng Long, Michael I. Jordan

Deep unsupervised domain adaptation (Deep UDA) methods successfully leverage rich labeled data in a source domain to boost the performance on related but unlabeled data in a target domain.

Model Selection • Unsupervised Domain Adaptation
