1 code implementation • 24 Nov 2024 • Kexin Zhang, Fuyuan Lyu, Xing Tang, Dugang Liu, Chen Ma, Kaize Ding, Xiuqiang He, Xue Liu
To bridge this gap, we introduce OptFusion, a method that automates fusion learning, encompassing both connection learning and operation selection.
no code implementations • 16 Oct 2024 • Ziqiang Cui, Yunpeng Weng, Xing Tang, Fuyuan Lyu, Dugang Liu, Xiuqiang He, Chen Ma
Furthermore, to utilize the global information of the KG, we construct an item-item graph using these semantic embeddings, which can directly capture higher-order associations between items.
no code implementations • 7 Oct 2024 • Qiyuan Zhang, YuFei Wang, Tiezheng Yu, Yuxin Jiang, Chuhan Wu, Liangyou Li, Yasheng Wang, Xin Jiang, Lifeng Shang, Ruiming Tang, Fuyuan Lyu, Chen Ma
With significant efforts in recent studies, LLM-as-a-Judge has become a cost-effective alternative to human evaluation for assessing the text generation quality in a wide range of tasks.
no code implementations • 16 Aug 2024 • Yunpeng Weng, Xing Tang, Zhenhao Xu, Fuyuan Lyu, Dugang Liu, Zexu Sun, Xiuqiang He
In this paper, we propose a novel optimal distribution selection model OptDist for CLTV prediction, which utilizes an adaptive optimal sub-distribution selection mechanism to improve the accuracy of complex distribution modeling.
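The adaptive sub-distribution selection idea above can be illustrated with a minimal sketch. The candidate parameters, the lognormal family, and the per-sample NLL selection rule are all assumptions for illustration, not OptDist's actual design.

```python
import math

def lognormal_nll(x, mu, sigma):
    """Negative log-likelihood of x under LogNormal(mu, sigma)."""
    z = (math.log(x) - mu) / sigma
    return 0.5 * z * z + math.log(sigma * x * math.sqrt(2 * math.pi))

def select_sub_distribution(x, candidates):
    """Pick the candidate (mu, sigma) that best explains an observed LTV x."""
    return min(candidates, key=lambda p: lognormal_nll(x, p[0], p[1]))

# Three assumed sub-distributions covering low-, mid-, and high-value users.
candidates = [(0.0, 1.0), (2.0, 0.5), (4.0, 1.5)]
best = select_sub_distribution(20.0, candidates)
```

The selection step is per-sample: a heavy spender is routed to a candidate with a larger mean, which is the intuition behind modeling a complex LTV distribution as a mixture of simpler ones.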
1 code implementation • 1 Jul 2024 • Qiyuan Zhang, Fuyuan Lyu, Xue Liu, Chen Ma
Pioneering work on downstream scaling laws demonstrated intrinsic similarities within model families and exploited those similarities for performance prediction.
no code implementations • 10 Apr 2024 • Shaoxiang Qin, Fuyuan Lyu, Wenhui Peng, Dingyang Geng, Ju Wang, Xing Tang, Sylvie Leroyer, Naiping Gao, Xue Liu, Liangzhu Leon Wang
In solving partial differential equations (PDEs), Fourier Neural Operators (FNOs) have exhibited notable effectiveness.
1 code implementation • 26 Mar 2024 • Xing Tang, Yang Qiao, Fuyuan Lyu, Dugang Liu, Xiuqiang He
In this paper, we study the MTL problem with hybrid targets for the first time and propose the model named Hybrid Targets Learning Network (HTLNet) to explore task dependence and enhance optimization.
no code implementations • 28 Feb 2024 • Tianze Yang, Tianyi Yang, Fuyuan Lyu, Shaoshan Liu, Xue Liu
This study unveils the In-Context Evolutionary Search (ICE-SEARCH) method, which is among the first works to meld large language models (LLMs) with evolutionary algorithms for feature selection (FS) tasks, and demonstrates its effectiveness in Medical Predictive Analytics (MPA) applications.
no code implementations • 6 Nov 2023 • Fuyuan Lyu, Yaochen Hu, Xing Tang, Yingxue Zhang, Ruiming Tang, Xue Liu
Hence, we propose a hypothesis that the negative sampler should align with the capacity of the recommendation models as well as the statistics of the datasets to achieve optimal performance.
1 code implementation • NeurIPS 2023 • Fuyuan Lyu, Xing Tang, Dugang Liu, Chen Ma, Weihong Luo, Liang Chen, Xiuqiang He, Xue Liu
In this work, we introduce a hybrid-grained feature interaction selection approach that targets both feature field and feature value for deep sparse networks.
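The hybrid-grained idea above can be sketched as two cooperating gates: a coarse field-level gate deciding whether two feature fields may interact at all, and a fine value-level gate refining that decision for individual feature values. The gate values and the thresholding rule here are illustrative assumptions, not the paper's actual mechanism.

```python
def keep_interaction(field_gate, value_gate, threshold=0.5):
    """An interaction survives only if both granularities agree.

    field_gate: learned score for a pair of feature fields (e.g. city x ad_category)
    value_gate: learned score for a specific pair of feature values
    """
    return (field_gate * value_gate) >= threshold

# The field pair is promising (0.9), but only one value pair is useful:
kept = keep_interaction(0.9, 0.8)     # strong value-level evidence -> kept
pruned = keep_interaction(0.9, 0.3)   # weak value-level evidence -> pruned
```

Combining the two granularities lets a model prune most value-level interactions inside a field pair while still keeping the few that matter, which a purely field-grained selector cannot express.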
no code implementations • 23 Jun 2023 • Xing Tang, Yang Qiao, Yuwen Fu, Fuyuan Lyu, Dugang Liu, Xiuqiang He
Existing approaches for multi-scenario CTR prediction generally consist of two main modules: i) a scenario-aware learning module that learns a set of multi-functional representations with scenario-shared and scenario-specific information from input features, and ii) a scenario-specific prediction module that serves each scenario based on these representations.
no code implementations • 1 Jun 2023 • Dugang Liu, Xing Tang, Han Gao, Fuyuan Lyu, Xiuqiang He
Our EFIN includes four customized modules: 1) a feature encoding module encodes not only the user and contextual features, but also the treatment features; 2) a self-interaction module aims to accurately model the user's natural response with all but the treatment features; 3) a treatment-aware interaction module accurately models the degree to which a particular treatment motivates a user through interactions between the treatment features and other features, i.e., ITE; and 4) an intervention constraint module is used to balance the ITE distribution of users between the control and treatment groups, so that the model still achieves an accurate uplift ranking on data collected from a non-random intervention marketing scenario.
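The module structure described above can be sketched in miniature. The module names follow the text, but the arithmetic (dot-product interactions, an additive head) is a hypothetical stand-in for the paper's actual layers.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def self_interaction(feature_embs):
    """Natural response from pairwise interactions of non-treatment features."""
    n = len(feature_embs)
    return sum(dot(feature_embs[i], feature_embs[j])
               for i in range(n) for j in range(i + 1, n))

def treatment_interaction(feature_embs, treatment_emb):
    """Treatment-aware term: how strongly the treatment modulates each feature."""
    return sum(dot(f, treatment_emb) for f in feature_embs)

def efin_score(feature_embs, treatment_emb, treated):
    base = self_interaction(feature_embs)               # natural response
    ite = treatment_interaction(feature_embs, treatment_emb)
    return base + (ite if treated else 0.0)
```

Under this toy decomposition the uplift for a user is simply `efin_score(..., treated=True) - efin_score(..., treated=False)`, i.e., the treatment-aware interaction term in isolation, which mirrors how the text separates natural response from treatment effect.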
no code implementations • 4 Feb 2023 • Fuyuan Lyu, Xing Tang, Dugang Liu, Haolun Wu, Chen Ma, Xiuqiang He, Xue Liu
Representation learning has been a critical topic in machine learning.
1 code implementation • 26 Jan 2023 • Fuyuan Lyu, Xing Tang, Dugang Liu, Liang Chen, Xiuqiang He, Xue Liu
Because of the large search space, we develop a learning-by-continuation training scheme to learn such gates.
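A common learning-by-continuation pattern, which may be what is meant here (the schedule and values below are illustrative assumptions), is to relax each binary gate into a sigmoid and anneal a temperature so the soft gate gradually hardens into an open/closed decision:

```python
import math

def gate(logit, temperature):
    """Soft relaxation of a binary gate; hardens as temperature -> 0."""
    return 1.0 / (1.0 + math.exp(-logit / temperature))

logit = 0.8  # a learnable gate parameter (assumed value)
for t in (1.0, 0.1, 0.01):   # continuation: anneal temperature toward zero
    g = gate(logit, t)        # 0.69 -> 0.9997 -> ~1.0 for a positive logit
```

This keeps the selection differentiable early in training while converging to a near-discrete choice, avoiding a combinatorial search over gate configurations.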
Ranked #4 on Click-Through Rate Prediction on KDD12
1 code implementation • 29 Dec 2022 • Haolun Wu, Yansen Zhang, Chen Ma, Fuyuan Lyu, Bowei He, Bhaskar Mitra, Xue Liu
Diversifying returned results is an important research topic in retrieval systems, serving both the varied interests of customers and equal market exposure for providers.
1 code implementation • 9 Aug 2022 • Fuyuan Lyu, Xing Tang, Hong Zhu, Huifeng Guo, Yingxue Zhang, Ruiming Tang, Xue Liu
To this end, we propose an optimal embedding table learning framework OptEmbed, which provides a practical and general method to find an optimal embedding table for various base CTR models.
Ranked #3 on Click-Through Rate Prediction on KDD12
no code implementations • 20 Mar 2022 • Yuecai Zhu, Fuyuan Lyu, Chengming Hu, Xi Chen, Xue Liu
However, the temporal information embedded in the dynamic graphs brings new challenges in analyzing and deploying them.
1 code implementation • 3 Aug 2021 • Fuyuan Lyu, Xing Tang, Huifeng Guo, Ruiming Tang, Xiuqiang He, Rui Zhang, Xue Liu
As feature interactions bring in non-linearity, they are widely adopted to improve the performance of CTR prediction models.
Ranked #1 on Click-Through Rate Prediction on Avazu
no code implementations • 18 May 2020 • Fuyuan Lyu, Shien Zhu, Weichen Liu
However, these filter-wise quantization methods face a natural upper limit imposed by the kernel size.