Search Results for author: Xiaobing Liu

Found 13 papers, 4 papers with code

One Backward from Ten Forward, Subsampling for Large-Scale Deep Learning

no code implementations · 27 Apr 2021 · Chaosheng Dong, Xiaojie Jin, Weihao Gao, Yijia Wang, Hongyi Zhang, Xiang Wu, Jianchao Yang, Xiaobing Liu

Deep learning models in large-scale machine learning systems are often continuously trained with enormous data from production environments.
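The title suggests amortizing one backward pass over many forward passes. A minimal sketch of one such subsampling rule, assuming (illustratively, not from the paper) that examples are ranked by forward-pass loss and only the highest-loss fraction is kept for the costly backward pass:

```python
# Hedged sketch: run cheap forward passes on a large batch, then select a
# small subset of examples for the expensive backward pass. The top-loss
# selection rule and the 20% keep ratio are illustrative assumptions.

def select_for_backward(losses, keep_ratio=0.1):
    """Return sorted indices of the highest-loss examples to backprop on."""
    k = max(1, int(len(losses) * keep_ratio))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return sorted(ranked[:k])

# Ten forward-pass losses; keep two examples for the backward pass.
losses = [0.2, 1.5, 0.1, 0.9, 0.3, 2.0, 0.05, 0.4, 0.7, 1.1]
chosen = select_for_backward(losses, keep_ratio=0.2)
```

In a real training loop, `chosen` would index into the batch before calling the optimizer, so gradient computation touches only the selected examples.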

g-SiC6 Monolayer: A New Graphene-like Dirac Cone Material with a High Fermi Velocity

no code implementations · 9 Feb 2021 · Tao Yang, Xingang Jiang, Wencai Yi, Xiaomin Cheng, Xiaobing Liu

In this work, using first-principles calculations, we have predicted a new Dirac cone material: a silicon carbide monolayer with a new stoichiometry, named the g-SiC6 monolayer, which is composed of sp2-hybridized atoms in a graphene-like structure.

Materials Science

Deep Retrieval: An End-to-End Structure Model for Large-Scale Recommendations

no code implementations · 1 Jan 2021 · Weihao Gao, Xiangjun Fan, Jiankai Sun, Kai Jia, Wenzhi Xiao, Chong Wang, Xiaobing Liu

With the model learnt, a beam search over the latent codes is performed to retrieve the top candidates.
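The beam search described in the snippet can be sketched generically: keep the best B partial code paths at each layer of the latent structure and extend them layer by layer. The layer/code counts, beam width, and scoring function below are illustrative assumptions, not the paper's actual model:

```python
# Hedged sketch of beam search over discrete latent codes: at each layer,
# extend every surviving path with every code, score it, and keep the top
# `beam_width` paths by cumulative log-score.
import math

def beam_search(score_fn, num_layers, num_codes, beam_width):
    """Return the top `beam_width` code paths as (path, log-score) pairs."""
    beams = [((), 0.0)]  # (codes chosen so far, cumulative log-score)
    for layer in range(num_layers):
        candidates = []
        for path, logp in beams:
            for code in range(num_codes):
                candidates.append((path + (code,),
                                   logp + score_fn(layer, path, code)))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]  # prune to the beam width
    return beams

# Toy scorer that prefers lower code indices, so the best path is all zeros.
def toy_score(layer, path, code):
    return math.log(1.0 / (1 + code))

top = beam_search(toy_score, num_layers=3, num_codes=4, beam_width=2)
```

In a retrieval setting, each surviving path would map back to the set of candidate items assigned to those latent codes.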

Jointly Learning to Recommend and Advertise

no code implementations · 28 Feb 2020 · Xiangyu Zhao, Xudong Zheng, Xiwang Yang, Xiaobing Liu, Jiliang Tang

Online recommendation and advertising are two major income channels for online recommendation platforms (e.g., e-commerce and news-feed sites).

AutoEmb: Automated Embedding Dimensionality Search in Streaming Recommendations

no code implementations · 26 Feb 2020 · Xiangyu Zhao, Chong Wang, Ming Chen, Xudong Zheng, Xiaobing Liu, Jiliang Tang

Deep learning based recommender systems (DLRSs) often have embedding layers, which are used to reduce the dimensionality of categorical variables (e.g., user/item identifiers) and map them into a meaningful low-dimensional space.

AutoML Recommendation Systems
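The role of the embedding layer the snippet describes can be shown with a toy lookup: a high-cardinality categorical id indexes into a table of dense low-dimensional vectors. The vocabulary size, dimensionality, and random initialization below are illustrative assumptions:

```python
# Toy sketch of an embedding layer: a categorical id (e.g., a user id out
# of 1000) becomes a dense 8-dimensional vector via table lookup. Values
# and sizes are illustrative; real systems learn the table via SGD.
import random

random.seed(0)

def make_embedding_table(vocab_size, dim):
    """Randomly initialize a vocab_size x dim embedding table."""
    return [[random.uniform(-0.1, 0.1) for _ in range(dim)]
            for _ in range(vocab_size)]

table = make_embedding_table(vocab_size=1000, dim=8)
user_id = 42
vec = table[user_id]  # dense low-dimensional representation of the id
```

AutoEmb's contribution, per the title, is searching over the dimensionality (`dim` here) automatically rather than fixing it by hand.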

Learning to Structure Long-term Dependence for Sequential Recommendation

no code implementations · 30 Jan 2020 · Renqin Cai, Qinglei Wang, Chong Wang, Xiaobing Liu

To better model the long-term dependence structure, we propose GatedLongRec in this work.

Sequential Recommendation

DEAR: Deep Reinforcement Learning for Online Advertising Impression in Recommender Systems

no code implementations · 9 Sep 2019 · Xiangyu Zhao, Changsheng Gu, Haoshenglun Zhang, Xiwang Yang, Xiaobing Liu, Jiliang Tang, Hui Liu

However, most RL-based advertising algorithms focus on optimizing ads' revenue while ignoring the possible negative influence of ads on the user experience with recommended items (products, articles, and videos).

Recommendation Systems reinforcement-learning

Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling

3 code implementations · 21 Feb 2019 · Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, Yanzhang He, Jan Chorowski, Smit Hinsu, Stella Laurenzo, James Qin, Orhan Firat, Wolfgang Macherey, Suyog Gupta, Ankur Bapna, Shuyuan Zhang, Ruoming Pang, Ron J. Weiss, Rohit Prabhavalkar, Qiao Liang, Benoit Jacob, Bowen Liang, HyoukJoong Lee, Ciprian Chelba, Sébastien Jean, Bo Li, Melvin Johnson, Rohan Anil, Rajat Tibrewal, Xiaobing Liu, Akiko Eriguchi, Navdeep Jaitly, Naveen Ari, Colin Cherry, Parisa Haghani, Otavio Good, Youlong Cheng, Raziel Alvarez, Isaac Caswell, Wei-Ning Hsu, Zongheng Yang, Kuan-Chieh Wang, Ekaterina Gonina, Katrin Tomanek, Ben Vanik, Zelin Wu, Llion Jones, Mike Schuster, Yanping Huang, Dehao Chen, Kazuki Irie, George Foster, John Richardson, Klaus Macherey, Antoine Bruguier, Heiga Zen, Colin Raffel, Shankar Kumar, Kanishka Rao, David Rybach, Matthew Murray, Vijayaditya Peddinti, Maxim Krikun, Michiel A. U. Bacchiani, Thomas B. Jablin, Rob Suderman, Ian Williams, Benjamin Lee, Deepti Bhatia, Justin Carlson, Semih Yavuz, Yu Zhang, Ian McGraw, Max Galkin, Qi Ge, Golan Pundak, Chad Whipkey, Todd Wang, Uri Alon, Dmitry Lepikhin, Ye Tian, Sara Sabour, William Chan, Shubham Toshniwal, Baohua Liao, Michael Nirschl, Pat Rondon

Lingvo is a TensorFlow framework offering a complete solution for collaborative deep learning research, with a particular focus on sequence-to-sequence models.

Sequence-To-Sequence Speech Recognition

Wide & Deep Learning for Recommender Systems

32 code implementations · 24 Jun 2016 · Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, Hemal Shah

Memorization of feature interactions through a wide set of cross-product feature transformations is effective and interpretable, while generalization requires more feature engineering effort.

Click-Through Rate Prediction Feature Engineering +2
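The cross-product transformation the snippet refers to can be sketched simply: a binary feature that fires only when all of its constituent categorical features are active in an example. The feature names below are illustrative, not taken from the paper:

```python
# Hedged sketch of the cross-product feature transformation used by the
# "wide" component: each crossed feature is 1 iff every constituent
# categorical value is present in the example. Feature names are made up.

def cross_product_features(active, pairs):
    """Map each (a, b) pair to a binary crossed feature for this example."""
    out = {}
    for a, b in pairs:
        out[f"{a}_AND_{b}"] = int(a in active and b in active)
    return out

active = {"gender=female", "language=en"}
crosses = cross_product_features(
    active,
    pairs=[("gender=female", "language=en"),
           ("gender=female", "country=us")],
)
```

These sparse crossed features feed a linear ("wide") model that memorizes co-occurrences, while the "deep" component generalizes via embeddings.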
