no code implementations • 14 Aug 2024 • Zhiming Yang, Haining Gao, Dehong Gao, Luwei Yang, Libin Yang, Xiaoyan Cai, Wei Ning, Guannan Zhang
In this paper, we propose a Multi-domain Low-Rank Adaptive network (MLoRA) for CTR prediction, where we introduce a specialized LoRA module for each domain.
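The per-domain adapter idea can be sketched in a few lines. This is a minimal illustration of low-rank adaptation with one adapter pair per domain, not the MLoRA network itself; the domain names, dimensions, and initialization scales are hypothetical.

```python
import numpy as np

# Sketch of per-domain low-rank adaptation: a shared weight W is
# specialized for each domain by a rank-r update B @ A, so only
# (d_out + d_in) * r parameters per domain are trained instead of
# a full d_out x d_in matrix.
rng = np.random.default_rng(0)
d_in, d_out, rank = 16, 8, 2
W = rng.normal(size=(d_out, d_in))  # shared backbone weight

domains = ["domain_a", "domain_b"]  # hypothetical domain names
adapters = {
    d: (rng.normal(scale=0.01, size=(d_out, rank)),   # B
        rng.normal(scale=0.01, size=(rank, d_in)))    # A
    for d in domains
}

def forward(x, domain):
    # Apply the domain-specific effective weight W + B @ A.
    B, A = adapters[domain]
    return (W + B @ A) @ x

y = forward(rng.normal(size=d_in), "domain_a")
```

Each domain's adapter here holds 48 parameters versus 128 for the full weight, which is the storage/training saving that motivates the low-rank form.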
1 code implementation • 6 Aug 2024 • Jianxing Ma, Zhibo Xiao, Luwei Yang, Hansheng Xue, Xuanzhou Liu, Wen Jiang, Wei Ning, Guannan Zhang
To cater to users' desire for an immersive browsing experience, numerous e-commerce platforms provide various recommendation scenarios, with a focus on Trigger-Induced Recommendation (TIR) tasks.
no code implementations • 16 Jul 2024 • Junqi Yin, Siming Liang, Siyan Liu, Feng Bao, Hristo G. Chipilski, Dan Lu, Guannan Zhang
While these models show considerable potential, they are not ready yet for operational use in weather forecasting or climate prediction.
1 code implementation • 15 Jul 2024 • Kaiming Shen, Xichen Ding, Zixiang Zheng, Yuqi Gong, Qianqian Li, Zhongyi Liu, Guannan Zhang
To address these challenges, we propose a unified lifelong multi-modal sequence model called SEMINAR (Search Enhanced Multi-Modal Interest Network and Approximate Retrieval).
1 code implementation • 31 Mar 2024 • Minglei Yang, Pengjun Wang, Ming Fan, Dan Lu, Yanzhao Cao, Guannan Zhang
We introduce a conditional pseudo-reversible normalizing flow for constructing surrogate models of a physical model polluted by additive noise to efficiently quantify forward and inverse uncertainty propagation.
no code implementations • CVPR 2024 • Yichen Li, Qunwei Li, Haozhao Wang, Ruixuan Li, Wenliang Zhong, Guannan Zhang
Then, the client trains the local model with both the cached samples and the samples from the new task.
no code implementations • 12 Feb 2024 • Mingzhe Li, Xiuying Chen, Jing Xiang, Qishen Zhang, Changsheng Ma, Chenchen Dai, Jinxiong Chang, Zhongyi Liu, Guannan Zhang
Since attributes from two ends are often not aligned in terms of number and type, we propose to exploit the benefit of attributes by multiple-intent modeling.
no code implementations • 31 Jan 2024 • Zhitian Xie, Yinger Zhang, Chenyi Zhuang, Qitao Shi, Zhining Liu, Jinjie Gu, Guannan Zhang
However, the gate's routing mechanism also gives rise to a narrow-vision problem: each expert in the MoE sees only the samples routed to its allocated sub-task, which in turn limits how far the MoE can improve its generalization ability.
1 code implementation • 11 Jan 2024 • Yue Liu, Shihao Zhu, Jun Xia, Yingwei Ma, Jian Ma, Wenliang Zhong, Xinwang Liu, Guannan Zhang, Kejun Zhang
Concretely, we encode users' behavior sequences and initialize the cluster centers (latent intents) as learnable neurons.
no code implementations • 19 Dec 2023 • Zezhong Zhang, Feng Bao, Guannan Zhang
The impressive expressive power of deep neural networks (DNNs) underlies their widespread applicability.
no code implementations • 15 Dec 2023 • Xingyu Lu, Zhining Liu, Yanchu Guan, Hongxuan Zhang, Chenyi Zhuang, Wenqi Ma, Yize Tan, Jinjie Gu, Guannan Zhang
In a cascade RS, when a user triggers a request, we define two actions that determine the computation: (1) which trained model instance, among versions of different computational complexity, to use; and (2) the number of items to be inferred in the stage.
no code implementations • 15 Dec 2023 • Yao Zhao, Haipeng Zhang, Shiwei Lyu, Ruiying Jiang, Jinjie Gu, Guannan Zhang
Uplift modeling is widely used in performance marketing to estimate the effects of promotion campaigns (e.g., an increase in customer retention rate).
no code implementations • 8 Dec 2023 • Chunjing Gan, Dan Yang, Binbin Hu, Ziqi Liu, Yue Shen, Zhiqiang Zhang, Jinjie Gu, Jun Zhou, Guannan Zhang
In this paper, we seek to carefully prompt a Large Language Model (LLM) with domain-level knowledge to act as a better marketing-oriented knowledge miner for marketing-oriented knowledge graph construction. This is non-trivial, as real-world marketing scenarios suffer from several inevitable issues: uncontrollable relation generation by LLMs, the insufficient prompting ability of a single prompt, and the unaffordable deployment cost of LLMs.
1 code implementation • 5 Dec 2023 • Xiaojie Sun, Keping Bi, Jiafeng Guo, Sihui Yang, Qishen Zhang, Zhongyi Liu, Guannan Zhang, Xueqi Cheng
Dense retrieval methods have mostly focused on unstructured text, and less attention has been paid to structured data with various aspects, e.g., products with aspects such as category and brand.
1 code implementation • 5 Dec 2023 • Tianchi Cai, Xierui Song, Jiyan Jiang, Fei Teng, Jinjie Gu, Guannan Zhang
Aligning language models to human expectations, e.g., being helpful and harmless, has become a pressing challenge for large language models.
no code implementations • 4 Dec 2023 • Chunjing Gan, Bo Huang, Binbin Hu, Jian Ma, Ziqi Liu, Zhiqiang Zhang, Jun Zhou, Guannan Zhang, Wenliang Zhong
To help merchants and customers provide and access a variety of services through miniapps, online service platforms have come to occupy a critical position in effective content delivery, making it increasingly urgent to recommend items in new domains launched by service providers.
no code implementations • 23 Nov 2023 • Yiming Wang, Yu Lin, Xiaodong Zeng, Guannan Zhang
To our knowledge, our proposed framework is the first efficient and privacy-preserving LLM solution in the literature.
no code implementations • 20 Nov 2023 • Yiming Wang, Yu Lin, Xiaodong Zeng, Guannan Zhang
Further investigation into weight update matrices of MultiLoRA exhibits reduced dependency on top singular vectors and more democratic unitary transform contributions.
no code implementations • 15 Nov 2023 • Lei Liu, Xiaoyan Yang, Yue Shen, Binbin Hu, Zhiqiang Zhang, Jinjie Gu, Guannan Zhang
Memory-augmented Large Language Models (LLMs) have demonstrated remarkable performance in long-term human-machine interactions, which basically relies on iteratively recalling and reasoning over history to generate high-quality responses.
no code implementations • 1 Nov 2023 • You Zhou, Xiujing Lin, Xiang Zhang, Maolin Wang, Gangwei Jiang, Huakang Lu, Yupeng Wu, Kai Zhang, Zhe Yang, Kehang Wang, Yongduo Sui, Fengwei Jia, Zuoli Tang, Yao Zhao, Hongxuan Zhang, Tiannuo Yang, Weibo Chen, Yunong Mao, Yi Li, De Bao, Yu Li, Hongrui Liao, Ting Liu, Jingwen Liu, Jinchi Guo, Xiangyu Zhao, Ying Wei, Hong Qian, Qi Liu, Xiang Wang, Wai Kin Chan, Chenliang Li, Yusen Li, Shiyu Yang, Jining Yan, Chao Mou, Shuai Han, Wuxia Jin, Guannan Zhang, Xiaodong Zeng
To tackle the challenges of computing resources and environmental impact of AI, Green Computing has become a hot research topic.
no code implementations • 22 Oct 2023 • Yanfang Liu, Minglei Yang, Zezhong Zhang, Feng Bao, Yanzhao Cao, Guannan Zhang
Unlike existing diffusion models that train neural networks to learn the score function, we develop a training-free score estimation method.
no code implementations • 16 Sep 2023 • Yuqi Gong, Xichen Ding, Yehui Su, Kaiming Shen, Zhongyi Liu, Guannan Zhang
With the development of large language models, LLMs can extract global domain-invariant text features that serve both search and recommendation tasks.
no code implementations • 6 Sep 2023 • Tianchi Cai, Jiyan Jiang, Wenpeng Zhang, Shiji Zhou, Xierui Song, Li Yu, Lihong Gu, Xiaodong Zeng, Jinjie Gu, Guannan Zhang
We further show that this method is guaranteed to converge to the optimal policy, which cannot be achieved by previous value-based reinforcement learning methods for marketing budget allocation.
1 code implementation • 2 Sep 2023 • Feng Bao, Zezhong Zhang, Guannan Zhang
EnSF stores the information of the recursively updated filtering density function in the score function, instead of storing the information in a set of finite Monte Carlo samples (used in particle filters and ensemble Kalman filters).
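The core idea of encoding a density in its score rather than in samples can be shown with a toy example. This is a hedged illustration, not the EnSF algorithm: it just shows that a Gaussian filtering density N(m, P) is fully determined by its score function s(x) = -P⁻¹(x - m), with no Monte Carlo samples stored, and that the state estimate m can be recovered from the score alone. The mean and covariance values are made up.

```python
import numpy as np

# Hypothetical Gaussian filtering density N(m, P), represented only
# through its score function -- no particle ensemble is kept.
m = np.array([1.5, -0.5])    # hypothetical filtering mean
P = np.diag([0.2, 0.4])      # hypothetical filtering covariance

def score(x):
    # Score of N(m, P): gradient of log p(x) = -P^{-1} (x - m).
    return -np.linalg.solve(P, x - m)

# Recover the mode/mean by following the score (gradient ascent
# on log p); the score's root is exactly the state estimate m.
x = np.zeros(2)
for _ in range(300):
    x = x + 0.05 * score(x)
```

The point of the sketch is the representational shift: all the information a particle ensemble would approximate is carried here by a single function of the state.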
no code implementations • 31 Aug 2023 • ZhaoXin Huan, Ke Ding, Ang Li, Xiaolu Zhang, Xu Min, Yong He, Liang Zhang, Jun Zhou, Linjian Mo, Jinjie Gu, Zhongyi Liu, Wenliang Zhong, Guannan Zhang
AntM$^{2}$C provides 1 billion CTR data samples with 200 features, covering 200 million users and 6 million items.
no code implementations • 25 Aug 2023 • Tianchi Cai, Shenliao Bao, Jiyan Jiang, Shiji Zhou, Wenpeng Zhang, Lihong Gu, Jinjie Gu, Guannan Zhang
Model-free RL-based recommender systems have recently received increasing research attention due to their capability to handle partial feedback and long-term rewards.
no code implementations • 24 Aug 2023 • Yue Wang, Xinrui Wang, Juntao Li, Jinxiong Chang, Qishen Zhang, Zhongyi Liu, Guannan Zhang, Min Zhang
Instruction tuning is instrumental in enabling Large Language Models (LLMs) to follow user instructions to complete various open-domain tasks.
no code implementations • 30 May 2023 • Dan Yang, Binbin Hu, Xiaoyan Yang, Yue Shen, Zhiqiang Zhang, Jinjie Gu, Guannan Zhang
At the online stage, the system offers the ability of user targeting in real-time based on the entity graph from the offline stage.
no code implementations • 25 Apr 2023 • Weifan Wang, Binbin Hu, Zhicheng Peng, Mingjie Zhong, Zhiqiang Zhang, Zhongyi Liu, Guannan Zhang, Jun Zhou
Finally, we conduct extensive experiments in both offline and online environments, demonstrating the superior capability of GARCIA in improving tail queries and overall performance in service search scenarios.
no code implementations • 12 Apr 2023 • Zexi Li, Qunwei Li, Yi Zhou, Wenliang Zhong, Guannan Zhang, Chao Wu
Federated learning (FL) is a popular edge-computing paradigm that does not compromise users' privacy.
no code implementations • 27 Jan 2023 • Zezhong Zhang, Feng Bao, Lili Ju, Guannan Zhang
Transfer learning for partial differential equations (PDEs) aims to develop a pre-trained neural network that can be used to solve a wide class of PDEs.
no code implementations • 18 Feb 2022 • Majdi I. Radaideh, Hoang Tran, Lianshan Lin, Hao Jiang, Drew Winder, Sarma Gorti, Guannan Zhang, Justin Mach, Sarah Cousineau
Given that some of the calibrated parameters that agree well with the experimental data can correspond to nonphysical mercury properties, a more advanced two-phase flow model is needed to capture bubble dynamics and mercury cavitation.
2 code implementations • 2 Dec 2021 • Yuankai Teng, Zhu Wang, Lili Ju, Anthony Gruber, Guannan Zhang
Our method contains two major components: one is the pseudo-reversible neural network (PRNN) module that effectively transforms high-dimensional input variables to low-dimensional active variables, and the other is the synthesized regression module for approximating function values based on the transformed data in the low-dimensional space.
1 code implementation • ICLR 2022 • Siyan Liu, Pei Zhang, Dan Lu, Guannan Zhang
First, existing PI methods require retraining of neural networks (NNs) for every given confidence level and suffer from the crossing issue in calculating multiple PIs.
no code implementations • 14 Mar 2021 • Jiaxin Zhang, Sirui Bi, Guannan Zhang
However, the approach requires a sampling path to compute the pathwise gradient of the MI lower bound with respect to the design variables, and such a pathwise gradient is usually inaccessible for implicit models.
no code implementations • 14 Mar 2021 • Jiaxin Zhang, Sirui Bi, Guannan Zhang
However, the approach in Kleinegesse et al., 2020 requires a sampling path to compute the pathwise gradient of the MI lower bound with respect to the design variables, and such a pathwise gradient is usually inaccessible for implicit models.
no code implementations • 28 Nov 2020 • Sirui Bi, Jiaxin Zhang, Guannan Zhang
Unlike existing studies of DL for TO, our framework accelerates TO by learning from iterative history data while simultaneously training on the mapping between a given design and its gradient.
1 code implementation • 3 Nov 2020 • Hoang Tran, Guannan Zhang
The local gradient points to the direction of the steepest slope in an infinitesimal neighborhood.
no code implementations • 21 Feb 2020 • Jiaxing Zhang, Hoang Tran, Guannan Zhang
Evolution strategy (ES) has shown great promise in many challenging reinforcement learning (RL) tasks, rivaling other state-of-the-art deep RL methods.
1 code implementation • 7 Feb 2020 • Jiaxin Zhang, Hoang Tran, Dan Lu, Guannan Zhang
Standard ES methods with $d$-dimensional Gaussian smoothing suffer from the curse of dimensionality due to the high variance of Monte Carlo (MC) based gradient estimators.
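The standard estimator whose variance is at issue can be written down directly. The following is a minimal sketch (not the paper's method) of the Monte Carlo estimator for the gradient of the Gaussian-smoothed objective F_σ(x) = E_u[f(x + σu)], u ~ N(0, I); the function, dimensions, and sample count are illustrative.

```python
import numpy as np

def gs_gradient(f, x, sigma=0.1, n_samples=20000, seed=0):
    """Monte Carlo estimate of grad F_sigma(x) for the Gaussian-smoothed
    objective F_sigma(x) = E_u[f(x + sigma*u)], u ~ N(0, I):
        grad F_sigma(x) ~= (1/(N*sigma)) * sum_i (f(x + sigma*u_i) - f_bar) * u_i.
    Subtracting the sample mean f_bar is a standard variance-reduction step;
    even so, the variance grows with the dimension d."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((n_samples, x.size))
    fvals = np.apply_along_axis(f, 1, x + sigma * u)
    return ((fvals - fvals.mean())[:, None] * u).mean(axis=0) / sigma

# Sanity check on a quadratic: for f(x) = ||x||^2 the smoothed gradient
# equals the true gradient 2x exactly, so the estimate should be close.
x = np.ones(5)
g = gs_gradient(lambda z: np.dot(z, z), x)
```

Even in this low-dimensional, well-behaved case the estimate is only accurate to a few percent with 20,000 samples, which hints at why $d$-dimensional smoothing becomes impractical as $d$ grows.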
no code implementations • 10 Aug 2019 • Jiaxin Zhang, Xianglin Liu, Sirui Bi, Junqi Yin, Guannan Zhang, Markus Eisenbach
In this study, a robust data-driven framework based on Bayesian approaches is proposed and demonstrated for the accurate and efficient prediction of the configurational energy of high entropy alloys.
no code implementations • NeurIPS 2019 • Guannan Zhang, Jiaxin Zhang, Jacob Hinkle
We developed a Nonlinear Level-set Learning (NLL) method for dimensionality reduction in high-dimensional function approximation with small data.