Search Results for author: Dong Lin

Found 12 papers, 5 papers with code

Learning to Rank when Grades Matter

no code implementations • 14 Jun 2023 • Le Yan, Zhen Qin, Gil Shamir, Dong Lin, Xuanhui Wang, Mike Bendersky

In this paper, we conduct a rigorous study of learning to rank with grades, where both ranking performance and grade prediction performance are important.

Learning-To-Rank
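
The abstract names two objectives, ranking quality and grade prediction. As a hedged illustration (not the paper's formulation), the sketch below combines a listwise softmax ranking loss with a pointwise squared-error loss on grades, mixed by a hypothetical weight `alpha`:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def combined_ranking_grade_loss(scores, grades, alpha=0.5):
    """Hypothetical multi-objective loss: a listwise softmax cross-entropy
    term for ranking plus a mean squared error term for grade prediction.

    scores: model scores for the documents of one query, shape (n,)
    grades: ground-truth relevance grades, shape (n,)
    alpha:  assumed mixing weight between the two objectives
    """
    # Listwise ranking term: cross-entropy between the grade distribution
    # and the score distribution (both via softmax over the list).
    target = softmax(grades.astype(float))
    pred = softmax(scores)
    ranking_loss = -np.sum(target * np.log(pred + 1e-12))

    # Pointwise grade term: treat the scores directly as grade predictions.
    grade_loss = np.mean((scores - grades) ** 2)

    return alpha * ranking_loss + (1.0 - alpha) * grade_loss

# Example: four documents for one query with grades 0-3.
scores = np.array([2.1, 0.3, 1.5, -0.2])
grades = np.array([3, 0, 2, 1])
print(combined_ranking_grade_loss(scores, grades))
```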

Real World Large Scale Recommendation Systems Reproducibility and Smooth Activations

2 code implementations • 14 Feb 2022 • Gil I. Shamir, Dong Lin

We describe a novel family of smooth activations, Smooth ReLU (SmeLU), designed to improve reproducibility with mathematical simplicity and a potentially cheaper implementation.

Click-Through Rate Prediction • Recommendation Systems
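
A minimal sketch of the activation family described above, assuming the piecewise form reported in the SmeLU papers: zero below -beta, a quadratic blend on [-beta, beta], and the identity above beta:

```python
import numpy as np

def smelu(x, beta=1.0):
    """Smooth ReLU (SmeLU)-style activation, assuming the piecewise form
    0 for x <= -beta, (x + beta)^2 / (4 * beta) for |x| <= beta,
    and x for x >= beta."""
    x = np.asarray(x, dtype=float)
    quad = (x + beta) ** 2 / (4.0 * beta)
    return np.where(x <= -beta, 0.0, np.where(x >= beta, x, quad))

# The transition is continuous and has a continuous first derivative,
# unlike ReLU's kink at zero.
print(smelu(np.array([-2.0, -1.0, 0.0, 1.0, 2.0]), beta=1.0))
```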

Dropout Prediction Uncertainty Estimation Using Neuron Activation Strength

no code implementations • 13 Oct 2021 • Haichao Yu, Zhe Chen, Dong Lin, Gil Shamir, Jie Han

Dropout has been commonly used to quantify prediction uncertainty, i.e., the variations of model predictions on a given input example.
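
The abstract refers to the standard Monte Carlo dropout procedure for quantifying prediction uncertainty. A minimal, self-contained sketch (a toy two-layer network with made-up weights, not the paper's model) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with fixed (made-up) weights.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def predict_with_dropout(x, drop_rate=0.5):
    """One stochastic forward pass: hidden units are randomly dropped."""
    h = np.maximum(x @ W1, 0.0)             # ReLU hidden layer
    mask = rng.random(h.shape) > drop_rate  # random dropout mask
    h = h * mask / (1.0 - drop_rate)        # inverted dropout scaling
    return 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid output

x = rng.normal(size=(1, 8))                 # a single input example
samples = np.array([predict_with_dropout(x) for _ in range(100)])

# Dropout uncertainty: the spread of predictions across stochastic passes.
print("mean prediction:", samples.mean())
print("prediction std :", samples.std())
```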

Smooth activations and reproducibility in deep networks

1 code implementation • 20 Oct 2020 • Gil I. Shamir, Dong Lin, Lorenzo Coviello

We propose a new family of activations, Smooth ReLU (SmeLU), designed to give such better tradeoffs while keeping the mathematical expression simple and thus the implementation cheap.

Beyond Point Estimate: Inferring Ensemble Prediction Variation from Neuron Activation Strength in Recommender Systems

no code implementations • 17 Aug 2020 • Zhe Chen, Yuyan Wang, Dong Lin, Derek Zhiyuan Cheng, Lichan Hong, Ed H. Chi, Claire Cui

Despite the impressive prediction performance of deep neural networks (DNNs) in various domains, it is now well known that a set of DNN models trained with the same model specification and the same data can produce very different prediction results.

Model-based Reinforcement Learning • Recommendation Systems
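
The phenomenon in the abstract above, identically specified models trained on the same data disagreeing on individual predictions, can be reproduced with a small sketch. This uses scikit-learn MLPs that differ only in their random seed (not the paper's recommender models) and measures the per-example spread of predicted probabilities:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Same data and same model specification for every ensemble member;
# only the random seed (initialization / shuffling) differs.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

probs = []
for seed in range(5):
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                          random_state=seed)
    model.fit(X, y)
    probs.append(model.predict_proba(X)[:, 1])

probs = np.stack(probs)              # shape: (n_models, n_examples)
per_example_std = probs.std(axis=0)  # prediction variation per example

print("mean prediction std across examples:", per_example_std.mean())
print("max  prediction std across examples:", per_example_std.max())
```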

Small Towers Make Big Differences

no code implementations • 13 Aug 2020 • Yuyan Wang, Zhe Zhao, Bo Dai, Christopher Fifty, Dong Lin, Lichan Hong, Ed H. Chi

A delicate balance between multi-task generalization and multi-objective optimization is therefore needed to find a better trade-off between efficiency and generalization.

Multi-Task Learning

Understanding and Improving Knowledge Distillation

no code implementations • 10 Feb 2020 • Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H. Chi, Sagar Jain

Knowledge Distillation (KD) is a model-agnostic technique to improve model quality while having a fixed capacity budget.

Knowledge Distillation • Model Compression
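
As a hedged illustration of the standard distillation objective the abstract refers to (Hinton-style soft targets at a temperature, blended with the hard-label loss; not necessarily the weighting this paper analyzes):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Standard KD objective: alpha * hard-label cross-entropy
    + (1 - alpha) * T^2 * KL(teacher_T || student_T)."""
    # Hard-label term: cross-entropy against the ground-truth classes.
    p_student = softmax(student_logits)
    n = len(labels)
    hard = -np.log(p_student[np.arange(n), labels] + 1e-12).mean()

    # Soft-target term at temperature T (scaled by T^2 as is conventional).
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    soft = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)),
                  axis=-1).mean()

    return alpha * hard + (1.0 - alpha) * temperature ** 2 * soft

# Example: 2 examples, 3 classes.
student = np.array([[1.0, 0.2, -0.5], [0.1, 2.0, 0.3]])
teacher = np.array([[2.0, 0.0, -1.0], [0.0, 3.0, 0.5]])
labels = np.array([0, 1])
print(distillation_loss(student, teacher, labels))
```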
