no code implementations • 11 Jul 2021 • Noveen Sachdeva, Carole-Jean Wu, Julian McAuley
As we demonstrate, commonly used data sampling schemes can have significant consequences for algorithm performance -- masking performance deficiencies in algorithms or altering the relative ranking of algorithms, compared to models trained on the complete dataset.
no code implementations • 15 Oct 2023 • Noveen Sachdeva, Zexue He, Wang-Cheng Kang, Jianmo Ni, Derek Zhiyuan Cheng, Julian McAuley
We study data distillation for auto-regressive machine learning tasks, where the input and output have a strict left-to-right causal structure.
no code implementations • 24 Oct 2023 • Noveen Sachdeva, Lequn Wang, Dawen Liang, Nathan Kallus, Julian McAuley
To address these challenges, we introduce the Policy Convolution (PC) family of estimators.
no code implementations • 15 Feb 2024 • Noveen Sachdeva, Benjamin Coleman, Wang-Cheng Kang, Jianmo Ni, Lichan Hong, Ed H. Chi, James Caverlee, Julian McAuley, Derek Zhiyuan Cheng
The training of large language models (LLMs) is expensive.
1 code implementation • 16 Jun 2020 • Noveen Sachdeva, Yi Su, Thorsten Joachims
Learning effective contextual-bandit policies from past actions of a deployed system is highly desirable in many settings (e.g. voice assistants, recommendation, search), since it enables the reuse of large amounts of log data.
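The core idea of learning from logged bandit feedback can be illustrated with the classic inverse propensity scoring (IPS) estimator, which reweights each logged reward by the ratio of the target policy's action probability to the logging policy's. This is a minimal sketch of that standard estimator, not the specific method proposed in the paper; the toy data below is hypothetical.

```python
import numpy as np

def ips_estimate(rewards, logging_probs, target_probs):
    """IPS estimate of a target policy's value from logged bandit feedback:
    reweight each observed reward by target_prob / logging_prob, so actions
    the target policy favors count more, actions it avoids count less."""
    weights = target_probs / logging_probs
    return float(np.mean(weights * rewards))

# Hypothetical log: one row per logged interaction -- the observed reward,
# the propensity of the logged action under the logging policy, and the
# probability the target policy would assign to that same action.
rewards = np.array([1.0, 0.0, 1.0, 1.0])
logging_probs = np.array([0.5, 0.25, 0.5, 0.25])
target_probs = np.array([0.25, 0.5, 0.75, 0.5])

value = ips_estimate(rewards, logging_probs, target_probs)
```

The estimate is unbiased when the logging propensities are correct and nonzero wherever the target policy has support, but its variance grows with the mismatch between the two policies.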
1 code implementation • 13 Jan 2022 • Noveen Sachdeva, Carole-Jean Wu, Julian McAuley
We study the practical consequences of dataset sampling strategies on the ranking performance of recommendation algorithms.
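Two sampling strategies commonly applied to interaction datasets differ in what they preserve: sampling interactions uniformly thins every user's history, while sampling users keeps full histories for a subset of users. A minimal sketch of both, with hypothetical helper names and a toy interaction log; the paper's actual sampler taxonomy is broader.

```python
import numpy as np

def sample_interactions(interactions, frac, rng):
    """Keep a uniformly random fraction of (user, item) interaction rows,
    thinning every user's history."""
    n_keep = int(len(interactions) * frac)
    idx = rng.choice(len(interactions), size=n_keep, replace=False)
    return [interactions[i] for i in sorted(idx)]

def sample_users(interactions, frac, rng):
    """Keep a random fraction of users, retaining each sampled user's
    full interaction history."""
    users = sorted({u for u, _ in interactions})
    keep = set(rng.choice(users, size=int(len(users) * frac), replace=False))
    return [(u, i) for u, i in interactions if u in keep]

rng = np.random.default_rng(0)
log = [(u, i) for u in range(10) for i in range(5)]  # 10 users x 5 items

by_row = sample_interactions(log, 0.5, rng)   # sparser histories, all users
by_user = sample_users(log, 0.5, rng)         # dense histories, fewer users
```

Both samplers retain the same number of interactions here, yet they induce different data distributions, which is exactly the kind of discrepancy that can reorder algorithm rankings downstream.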
1 code implementation • 25 Nov 2018 • Noveen Sachdeva, Giuseppe Manco, Ettore Ritacco, Vikram Pudi
We introduce a recurrent version of the VAE: rather than passing a subset of the user's whole history without regard to temporal dependencies, we pass the consumption sequence through a recurrent neural network.
Ranked #2 on Recommendation Systems on MovieLens 1M (nDCG@100 metric)
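The encoder side of such a recurrent VAE can be sketched with a single GRU cell that folds the consumption sequence, in order, into a hidden state, followed by a standard VAE reparameterization. This NumPy sketch uses a GRU as one plausible recurrent cell and random weights purely for illustration; the actual architecture and dimensions in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update the hidden state with the next item embedding,
    so the final state summarizes the sequence in temporal order."""
    z = 1.0 / (1.0 + np.exp(-(Wz @ x + Uz @ h)))      # update gate
    r = 1.0 / (1.0 + np.exp(-(Wr @ x + Ur @ h)))      # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))          # candidate state
    return (1 - z) * h + z * h_tilde

d_in, d_h, d_z = 8, 16, 4  # hypothetical item-embedding, hidden, latent dims
params = [rng.normal(scale=0.1, size=(d_h, d_in)) if i % 2 == 0
          else rng.normal(scale=0.1, size=(d_h, d_h)) for i in range(6)]

# Fold a consumption sequence (one embedded item per step) into h_T.
sequence = rng.normal(size=(5, d_in))
h = np.zeros(d_h)
for x in sequence:
    h = gru_step(h, x, *params)

# VAE encoder head: h_T -> (mu, log_var), then sample z by reparameterization.
W_mu = rng.normal(scale=0.1, size=(d_z, d_h))
W_lv = rng.normal(scale=0.1, size=(d_z, d_h))
mu, log_var = W_mu @ h, W_lv @ h
z = mu + np.exp(0.5 * log_var) * rng.normal(size=d_z)
```

A decoder would then map `z` back to scores over the item catalog; the key contrast with a plain VAE is that `h` depends on the order of the sequence, not just its contents.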
1 code implementation • 31 Jul 2021 • Anshul Mittal, Noveen Sachdeva, Sheshansh Agrawal, Sumeet Agarwal, Purushottam Kar, Manik Varma
This paper presents ECLARE, a scalable deep learning architecture that incorporates not only label text, but also label correlations, to offer accurate real-time predictions within a few milliseconds.
1 code implementation • 25 May 2020 • Noveen Sachdeva, Julian McAuley
We investigate a growing body of work that seeks to improve recommender systems through the use of review text.
5 code implementations • 3 Jun 2022 • Noveen Sachdeva, Mehak Preet Dhaliwal, Carole-Jean Wu, Julian McAuley
We leverage the Neural Tangent Kernel and its equivalence to training infinitely-wide neural networks to devise $\infty$-AE: an autoencoder with infinitely-wide bottleneck layers.
Ranked #1 on Recommendation Systems on Douban (AUC metric)
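The practical payoff of the infinite-width equivalence is that training collapses to a closed-form kernelized ridge regression: reconstructions come from solving a linear system in the kernel matrix rather than running gradient descent. This sketch uses an RBF kernel as a stand-in for the actual Neural Tangent Kernel, and the function name and toy interaction matrix are hypothetical.

```python
import numpy as np

def kernel_autoencode(X, k, reg=1e-1):
    """Closed-form autoencoding via kernelized ridge regression:
    scores = K (K + reg*I)^{-1} X, where K is the kernel matrix over
    user rows. With the NTK, this is the infinitely-wide autoencoder's
    prediction; here k is a stand-in kernel."""
    K = k(X, X)
    alpha = np.linalg.solve(K + reg * np.eye(len(X)), X)
    return K @ alpha

# Stand-in RBF kernel between row sets (the paper uses the NTK instead).
rbf = lambda A, B: np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

# Toy binary user-item interaction matrix: 3 users, 4 items.
X = np.array([[1., 0., 1., 0.],
              [1., 1., 0., 0.],
              [0., 0., 1., 1.]])
scores = kernel_autoencode(X, rbf)  # dense reconstruction scores per user
```

The regularizer `reg` controls smoothing: as it shrinks toward zero the model interpolates the observed interactions, while larger values pull scores toward generalizable structure shared across users.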
1 code implementation • 11 Jan 2023 • Noveen Sachdeva, Julian McAuley
The popularity of deep learning has led to the curation of a vast number of massive and multifarious datasets.