1 code implementation • 3 Feb 2025 • Gaurush Hiranandani, Haolun Wu, Subhojyoti Mukherjee, Sanmi Koyejo
In this paper, we propose a token-level probability reweighting framework that, given access to logits and a small amount of task-specific data, can effectively steer black-box LLMs toward application-specific content generation.
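A minimal sketch of this idea, assuming the reweighting takes the simple form of a learned per-token log-weight added to the black-box model's logits (the vocabulary size and fitting procedure here are placeholders, not the paper's exact method):

```python
import torch

vocab_size = 32000                        # placeholder vocabulary size
logits = torch.randn(vocab_size)          # stand-in for logits from the black-box LLM

# Learnable reweighting parameters, to be fit on a small task-specific dataset.
log_w = torch.zeros(vocab_size, requires_grad=True)

def reweighted_distribution(logits, log_w):
    # p'(t) ∝ p(t) · w(t), i.e. add per-token log-weights to the logits.
    return torch.softmax(logits + log_w, dim=-1)

probs = reweighted_distribution(logits, log_w)
next_token = torch.multinomial(probs.detach(), num_samples=1)  # detach for sampling
```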
no code implementations • 25 Oct 2024 • Emiliano Penaloza, Olivier Gouvert, Haolun Wu, Laurent Charlin
We find that the summaries uniquely capture user preferences.
no code implementations • 18 Jul 2024 • Shangyu Wu, Ying Xiong, Yufei Cui, Haolun Wu, Can Chen, Ye Yuan, Lianming Huang, Xue Liu, Tei-Wei Kuo, Nan Guan, Chun Jason Xue
Large language models (LLMs) have demonstrated great success in various fields, benefiting from the huge number of parameters that store knowledge.
no code implementations • 22 May 2024 • Ye Yuan, Youyuan Zhang, Can Chen, Haolun Wu, Zixuan Li, Jianmo Li, James J. Clark, Xue Liu
Offline model-based optimization (MBO) aims to maximize a black-box objective function using only an offline dataset of designs and scores.
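As a rough illustration of the offline MBO setting (a generic sketch, not the paper's method), one can fit a surrogate on the offline (design, score) pairs and then run gradient ascent on a design against the surrogate; the well-known failure mode is that the surrogate overestimates scores away from the data, which is what MBO methods must guard against:

```python
import torch
import torch.nn as nn

# Offline dataset of designs and scores (synthetic stand-ins).
X = torch.randn(128, 8)                 # designs, assumed 8-dimensional
y = -(X ** 2).sum(dim=1, keepdim=True)  # stand-in black-box scores

# 1) Fit a surrogate to the offline data.
surrogate = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    nn.functional.mse_loss(surrogate(X), y).backward()
    opt.step()

# 2) Gradient-ascend a candidate design against the surrogate,
#    warm-starting from the best design in the dataset.
x = X[y.squeeze().argmax()].clone().requires_grad_(True)
design_opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(100):
    design_opt.zero_grad()
    (-surrogate(x)).backward()          # maximize the predicted score
    design_opt.step()
```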
no code implementations • 17 May 2024 • Hao Zhou, Chengming Hu, Ye Yuan, Yufei Cui, Yili Jin, Can Chen, Haolun Wu, Dun Yuan, Li Jiang, Di Wu, Xue Liu, Charlie Zhang, Xianbin Wang, Jiangchuan Liu
Then, we introduce key LLM-enabled techniques and telecom applications for generation, classification, optimization, and prediction problems.
1 code implementation • 15 May 2024 • Ziqiang Cui, Haolun Wu, Bowei He, Ji Cheng, Chen Ma
Most existing approaches generate augmented views of the same user sequence through random augmentation and subsequently maximize their agreement in the representation space.
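The agreement objective in such methods is typically an InfoNCE/NT-Xent-style contrastive loss over the two views; a generic sketch, with the sequence encoder and the random augmentations assumed:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    """z1, z2: (batch, dim) encodings of two augmented views of the same users."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature   # pairwise cosine similarities
    targets = torch.arange(z1.size(0))   # the matching view is the positive
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(32, 64), torch.randn(32, 64)   # stand-in encoder outputs
loss = info_nce(z1, z2)
```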
no code implementations • 26 Apr 2024 • Haolun Wu, Bhaskar Mitra, Nick Craswell
Traditional measures of search success often overlook the varying information needs of different demographic groups.
1 code implementation • 6 Feb 2024 • Haolun Wu, Ye Yuan, Liana Mikaelyan, Alexander Meulemans, Xue Liu, James Hensman, Bhaskar Mitra
Recent advances in machine learning have significantly impacted the field of information extraction, with Language Models (LMs) playing a pivotal role in extracting structured information from unstructured text.
no code implementations • 22 Dec 2023 • Chengming Hu, Haolun Wu, Xuan Li, Chen Ma, Xi Chen, Jun Yan, Boyu Wang, Xue Liu
A simple neural network then learns the implicit mapping from the intra- and inter-sample relations to an adaptive, sample-wise knowledge fusion ratio in a bilevel-optimization manner.
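A simplified sketch of a sample-wise fusion ratio (assumed form: the paper's intra- and inter-sample relation features and the bilevel training loop are omitted, and the per-sample features used here are placeholders):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small network mapping per-sample features to a fusion ratio in (0, 1).
ratio_net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

def fused_kd_loss(student_logits, teacher_logits, labels, T=2.0):
    ce = F.cross_entropy(student_logits, labels, reduction="none")
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="none").sum(-1) * T * T
    feats = torch.stack([ce, kd], dim=-1).detach()   # placeholder features
    alpha = ratio_net(feats).squeeze(-1)             # sample-wise fusion ratio
    return ((1 - alpha) * ce + alpha * kd).mean()
```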
no code implementations • 31 Oct 2023 • Haolun Wu, Ofer Meshi, Masrour Zoghi, Fernando Diaz, Xue Liu, Craig Boutilier, Maryam Karimzadehgan
Accurate modeling of the diverse and dynamic interests of users remains a significant challenge in the design of personalized recommender systems.
no code implementations • 8 Aug 2023 • Chengming Hu, Xuan Li, Dan Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu
Recently, Teacher-Student architectures have been effectively and widely adopted for various knowledge distillation (KD) objectives, including knowledge compression, knowledge expansion, knowledge adaptation, and knowledge enhancement.
no code implementations • 4 Feb 2023 • Fuyuan Lyu, Xing Tang, Dugang Liu, Haolun Wu, Chen Ma, Xiuqiang He, Xue Liu
Representation learning has been a critical topic in machine learning.
1 code implementation • 29 Dec 2022 • Haolun Wu, Yansen Zhang, Chen Ma, Fuyuan Lyu, Bowei He, Bhaskar Mitra, Xue Liu
Diversifying returned results is an important research topic in retrieval systems, as it serves both the varied interests of customers and equal market exposure for providers.
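For context, a classic diversification baseline is greedy maximal marginal relevance (MMR), which trades off relevance against redundancy with the already-selected items; this is shown only as background, not the method proposed in this paper:

```python
import numpy as np

def mmr(relevance, similarity, k, lam=0.7):
    """relevance: (n,) scores; similarity: (n, n) item-item similarities."""
    selected, candidates = [], list(range(len(relevance)))
    while len(selected) < k and candidates:
        def gain(i):
            redundancy = max((similarity[i, j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=gain)
        selected.append(best)
        candidates.remove(best)
    return selected

rel = np.array([0.9, 0.8, 0.75])
sim = np.array([[1.0, 0.95, 0.1], [0.95, 1.0, 0.1], [0.1, 0.1, 1.0]])
mmr(rel, sim, k=2)   # -> [0, 2]: the near-duplicate item 1 is skipped
```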
no code implementations • 11 Nov 2022 • Haolun Wu, Yingxue Zhang, Chen Ma, Wei Guo, Ruiming Tang, Xue Liu, Mark Coates
To offer accurate and diverse recommendation services, recent methods use auxiliary information to foster the learning process of user and item representations.
no code implementations • 3 Aug 2022 • Chang Meng, Ziqi Zhao, Wei Guo, Yingxue Zhang, Haolun Wu, Chen Gao, Dong Li, Xiu Li, Ruiming Tang
More specifically, we propose a novel Coarse-to-fine Knowledge-enhanced Multi-interest Learning (CKML) framework to learn shared and behavior-specific interests for different behaviors.
1 code implementation • 2 Aug 2022 • Haolun Wu, Chen Ma, Yingxue Zhang, Xue Liu, Ruiming Tang, Mark Coates
In order to effectively utilize such information, most research adopts the pairwise ranking method on constructed training triplets (user, positive item, negative item) and aims to distinguish between positive items and negative items for each user.
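The canonical instance of this pairwise approach is the Bayesian Personalized Ranking (BPR) loss, sketched here over a batch of (user, positive item, negative item) embedding triplets:

```python
import torch
import torch.nn.functional as F

def bpr_loss(user_emb, pos_emb, neg_emb):
    """Each argument: (batch, dim) embeddings for a batch of training triplets."""
    pos_scores = (user_emb * pos_emb).sum(-1)
    neg_scores = (user_emb * neg_emb).sum(-1)
    # Maximize the (log-sigmoid) margin by which positives outrank negatives.
    return -F.logsigmoid(pos_scores - neg_scores).mean()
```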
1 code implementation • 29 Apr 2022 • Haolun Wu, Bhaskar Mitra, Chen Ma, Fernando Diaz, Xue Liu
Prior research on exposure fairness in the context of recommender systems has focused mostly on disparities in the exposure of individual or groups of items to individual users of the system.
no code implementations • 6 May 2021 • Haolun Wu, Chen Ma, Bhaskar Mitra, Fernando Diaz, Xue Liu
To address these limitations, we propose a multi-objective optimization framework for fairness-aware recommendation, Multi-FR, that adaptively balances accuracy and fairness for various stakeholders with a Pareto optimality guarantee.
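One standard building block for Pareto-stationary updates in two-objective problems is the closed-form min-norm (MGDA-style) weighting of the accuracy and fairness gradients; a generic sketch, not Multi-FR itself:

```python
import torch

def min_norm_weight(g_acc, g_fair):
    """Closed-form argmin over a in [0, 1] of ||a * g_acc + (1 - a) * g_fair||."""
    diff = g_acc - g_fair
    a = torch.dot(g_fair, -diff) / diff.dot(diff).clamp_min(1e-12)
    return a.clamp(0.0, 1.0)

g_acc, g_fair = torch.randn(100), torch.randn(100)   # flattened gradients
a = min_norm_weight(g_acc, g_fair)
update = a * g_acc + (1 - a) * g_fair    # Pareto-stationary descent direction
```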
no code implementations • 13 Jan 2021 • Chen Ma, Liheng Ma, Yingxue Zhang, Haolun Wu, Xue Liu, Mark Coates
To effectively make use of the knowledge graph, we propose a recommendation model in the hyperbolic space, which facilitates the learning of the hierarchical structure of knowledge graphs.
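The basic ingredient of such models is the hyperbolic distance in the Poincaré ball, used to score pairs of embedded entities; the standalone formula as code (not the full model):

```python
import torch

def poincare_distance(u, v, eps=1e-5):
    """u, v: points inside the unit ball, shape (..., dim)."""
    sq_dist = ((u - v) ** 2).sum(-1)
    norm_u = (1 - (u ** 2).sum(-1)).clamp_min(eps)
    norm_v = (1 - (v ** 2).sum(-1)).clamp_min(eps)
    return torch.acosh(1 + 2 * sq_dist / (norm_u * norm_v))
```

Distances grow rapidly near the boundary of the ball, which is what lets hyperbolic space embed tree-like hierarchies, such as those in knowledge graphs, with low distortion.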