1 code implementation • 12 Nov 2015 • Wu-Jun Li, Sheng Wang, Wang-Cheng Kang
For another common application scenario, with pairwise labels, no existing methods perform simultaneous feature learning and hash-code learning.
no code implementations • AAAI 2016 • Wang-Cheng Kang, Wu-Jun Li, Zhi-Hua Zhou
COSDISH is an iterative method: in each iteration, several columns are sampled from the semantic similarity matrix, and the hashing code is decomposed into two parts that are alternately optimized in a discrete way.
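The sampling-plus-alternating scheme above can be illustrated with a toy sketch. This is an assumption-laden simplification, not the paper's exact algorithm: each iteration samples a few columns of the similarity matrix, splits the binary codes into the sampled block and the rest, and greedily flips individual bits (a discrete update) whenever the flip reduces the squared error between scaled similarities and code inner products on the sample.

```python
import numpy as np

def sampled_loss(B, S, cols, n_bits):
    # Squared error between scaled similarities and code inner products,
    # restricted to the sampled rows of S.
    return float(np.sum((n_bits * S[cols] - B[cols] @ B.T) ** 2))

def cosdish_sketch(S, n_bits=8, n_iters=5, n_sampled=4, seed=0):
    """Toy illustration of COSDISH-style discrete optimization (hypothetical
    simplification, not the published algorithm)."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    B = rng.choice([-1.0, 1.0], size=(n, n_bits))  # codes in {-1, +1}
    for _ in range(n_iters):
        cols = rng.choice(n, size=n_sampled, replace=False)  # sampled columns
        rest = np.setdiff1d(np.arange(n), cols)
        for block in (cols, rest):          # alternate between the two parts
            for i in block:
                for k in range(n_bits):     # greedy discrete coordinate update
                    before = sampled_loss(B, S, cols, n_bits)
                    B[i, k] = -B[i, k]      # try flipping this bit
                    if sampled_loss(B, S, cols, n_bits) >= before:
                        B[i, k] = -B[i, k]  # revert if the flip did not help
    return B
```

The key property the sketch preserves is that codes stay strictly binary throughout, rather than being relaxed to continuous values and quantized afterwards.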
1 code implementation • 8 Jul 2017 • Ruining He, Wang-Cheng Kang, Julian McAuley
Modeling the complex interactions between users and items as well as amongst items themselves is at the core of designing successful recommender systems.
no code implementations • 7 Nov 2017 • Wang-Cheng Kang, Chen Fang, Zhaowen Wang, Julian McAuley
Here, we extend this line of work by showing that recommendation performance can be significantly improved by learning 'fashion aware' image representations directly, i.e., by training the image representation (from the pixel level) jointly with the recommender system. This contribution is related to recent work using Siamese CNNs, though we show improvements over state-of-the-art recommendation techniques such as BPR and variants that use pre-trained visual features.
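The BPR objective mentioned above can be sketched in a few lines. This is the standard pairwise ranking loss, not this paper's full model: for each (user, positive item, negative item) triple, the positive item should score higher than the negative one. In the joint 'fashion aware' setup, the item embeddings would come from a CNN trained end-to-end rather than from fixed pre-trained features.

```python
import numpy as np

def bpr_loss(user_emb, pos_item_emb, neg_item_emb):
    """BPR pairwise ranking loss: -log sigmoid of the score margin between
    a positive and a negative item for the same user."""
    x = np.sum(user_emb * (pos_item_emb - neg_item_emb), axis=-1)
    return float(np.mean(-np.log(1.0 / (1.0 + np.exp(-x)))))
```

Minimizing this loss pushes each user's dot-product score for observed items above the scores of sampled unobserved items.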
8 code implementations • 20 Aug 2018 • Wang-Cheng Kang, Julian McAuley
Sequential dynamics are a key feature of many modern recommender systems, which seek to capture the 'context' of users' activities on the basis of actions they have performed recently.
Ranked #1 on Recommendation Systems on Steam
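A minimal sketch of the self-attentive sequential approach this entry describes: causally masked self-attention over the embeddings of a user's recent actions, with next-item scores computed against the full item embedding table. This is a stripped-down, single-head, single-block illustration; the published model additionally uses learned query/key/value projections, feed-forward layers, layer normalization, and dropout.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sasrec_scores(item_seq, item_emb, pos_emb):
    """Score all items as the next action given a user's recent item sequence
    (toy single-head sketch of a self-attentive sequential recommender)."""
    L = len(item_seq)
    x = item_emb[item_seq] + pos_emb[:L]       # (L, d): item + position inputs
    logits = x @ x.T / np.sqrt(x.shape[1])     # scaled dot-product attention
    mask = np.triu(np.ones((L, L), dtype=bool), k=1)
    logits = np.where(mask, -1e9, logits)      # causal mask: no future items
    h = softmax(logits) @ x                    # attended sequence representations
    return h[-1] @ item_emb.T                  # next-item scores for every item
```

The causal mask is what makes the model usable for next-item prediction: position t attends only to positions up to t.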
2 code implementations • 29 Aug 2018 • Wang-Cheng Kang, Mengting Wan, Julian McAuley
Recommender Systems have proliferated as general-purpose approaches to model a wide variety of consumer interaction data.
1 code implementation • CVPR 2019 • Wang-Cheng Kang, Eric Kim, Jure Leskovec, Charles Rosenberg, Julian McAuley
We design an approach to extract training data for this task, and propose a novel way to learn the scene-product compatibility from fashion or interior design images.
2 code implementations • 27 Aug 2019 • An Yan, Shuo Cheng, Wang-Cheng Kang, Mengting Wan, Julian McAuley
Sequential patterns play an important role in building modern recommender systems.
no code implementations • 12 Sep 2019 • Wang-Cheng Kang, Julian McAuley
Generating the Top-N recommendations from a large corpus is computationally expensive to perform at scale.
no code implementations • 20 Feb 2020 • Wang-Cheng Kang, Derek Zhiyuan Cheng, Ting Chen, Xinyang Yi, Dong Lin, Lichan Hong, Ed H. Chi
In this paper, we seek to learn highly compact embeddings for large-vocab sparse features in recommender systems (recsys).
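One common family of approaches to compacting large-vocab embeddings, shown here purely as illustrative background and not necessarily this paper's method, composes each ID's embedding from two small shared tables via the quotient-remainder trick, cutting memory from O(V) rows to roughly O(2·sqrt(V)).

```python
import numpy as np

def compact_embedding(ids, table_a, table_b):
    """Compose embeddings for a large vocabulary from two small tables
    (quotient-remainder trick; hypothetical illustration)."""
    m = table_a.shape[0]          # rows per small table
    q, r = ids // m, ids % m      # each ID maps to a (quotient, remainder) pair
    return table_a[q] + table_b[r]  # element-wise sum composes the embedding
```

With m = sqrt(V), every ID gets a distinct (q, r) pair, so distinct IDs receive distinct table-row combinations even though no full V-row table is stored.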
no code implementations • 21 Oct 2020 • Wang-Cheng Kang, Derek Zhiyuan Cheng, Tiansheng Yao, Xinyang Yi, Ting Chen, Lichan Hong, Ed H. Chi
Embedding learning of categorical features (e.g., user/item IDs) is at the core of various recommendation models, including matrix factorization and neural collaborative filtering.
no code implementations • 10 May 2023 • Wang-Cheng Kang, Jianmo Ni, Nikhil Mehta, Maheswaran Sathiamoorthy, Lichan Hong, Ed Chi, Derek Zhiyuan Cheng
In this paper, we conduct a thorough examination of both CF and LLMs within the classic task of user rating prediction, which involves predicting a user's rating for a candidate item based on their past ratings.
no code implementations • NeurIPS 2023 • Benjamin Coleman, Wang-Cheng Kang, Matthew Fahrbach, Ruoxi Wang, Lichan Hong, Ed H. Chi, Derek Zhiyuan Cheng
Learning high-quality feature embeddings efficiently and effectively is critical for the performance of web-scale machine learning systems.
no code implementations • 15 Oct 2023 • Noveen Sachdeva, Zexue He, Wang-Cheng Kang, Jianmo Ni, Derek Zhiyuan Cheng, Julian McAuley
We study data distillation for auto-regressive machine learning tasks, where the input and output have a strict left-to-right causal structure.
no code implementations • 15 Feb 2024 • Noveen Sachdeva, Benjamin Coleman, Wang-Cheng Kang, Jianmo Ni, Lichan Hong, Ed H. Chi, James Caverlee, Julian McAuley, Derek Zhiyuan Cheng
The training of large language models (LLMs) is expensive.