Search Results for author: Yingxue Zhou

Found 10 papers, 0 papers with code

Differentially Private Online Learning for Cloud-Based Video Recommendation with Multimedia Big Data in Social Networks

no code implementations • 1 Sep 2015 • Pan Zhou, Yingxue Zhou, Dapeng Wu, Hai Jin

In addition, none of them has considered both the privacy of users' contexts (e.g., social status, age, and hobbies) and video service vendors' repositories, which are extremely sensitive and of significant commercial value.

Privacy Preserving Recommendation Systems

Hessian based analysis of SGD for Deep Nets: Dynamics and Generalization

no code implementations • 24 Jul 2019 • Xinyan Li, Qilong Gu, Yingxue Zhou, Tiancong Chen, Arindam Banerjee

(2) How can we characterize the stochastic optimization dynamics of SGD with fixed and adaptive step sizes and diagonal pre-conditioning, based on the first and second moments of the stochastic gradients (SGs)?
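
A minimal sketch of the update this abstract alludes to: SGD with diagonal pre-conditioning built from first- and second-moment estimates of the stochastic gradients (Adam-style; the function name and hyperparameter values below are illustrative assumptions, not taken from the paper).

import numpy as np

def preconditioned_sgd_step(w, g, m, v, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One diagonally pre-conditioned SGD step (illustrative, Adam-style)."""
    m = beta1 * m + (1 - beta1) * g         # first-moment (mean) estimate of SGs
    v = beta2 * v + (1 - beta2) * g ** 2    # second-moment estimate of SGs
    w = w - lr * m / (np.sqrt(v) + eps)     # diagonal pre-conditioning
    return w, m, v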

Stochastic Optimization

De-randomized PAC-Bayes Margin Bounds: Applications to Non-convex and Non-smooth Predictors

no code implementations • 23 Feb 2020 • Arindam Banerjee, Tiancong Chen, Yingxue Zhou

Existing approaches for deterministic non-smooth deep nets typically need to bound the Lipschitz constant of such deep nets, but such bounds are quite large and may even increase with the training set size, yielding vacuous generalization bounds.
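
To see why such Lipschitz bounds are loose: the standard upper bound for a feedforward net with 1-Lipschitz activations is the product of the layers' spectral norms, which grows multiplicatively with depth. A short sketch under that standard assumption (not the paper's method):

import numpy as np

def lipschitz_upper_bound(weight_matrices):
    """Product of spectral norms: a standard, typically very loose, Lipschitz
    upper bound for a feedforward net with 1-Lipschitz activations."""
    bound = 1.0
    for W in weight_matrices:
        bound *= np.linalg.norm(W, 2)  # largest singular value of the layer
    return bound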

Generalization Bounds

Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds

no code implementations • 24 Jun 2020 • Yingxue Zhou, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, Arindam Banerjee

We obtain this rate by providing the first analyses of a collection of private gradient-based methods, including the adaptive algorithms DP RMSProp and DP Adam.
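
Such private gradient methods typically follow the Gaussian-mechanism recipe: clip per-example gradients, average, add calibrated noise, then apply the adaptive update. A hedged sketch of one DP-RMSProp-style step (the clip norm, noise scale, and moment handling below are illustrative assumptions, not the paper's exact procedure):

import numpy as np

def dp_rmsprop_step(w, per_example_grads, v, lr=1e-3, clip=1.0,
                    noise_mult=1.0, beta2=0.999, eps=1e-8):
    """Clip each per-example gradient, average, add Gaussian noise, then
    take an RMSProp-style pre-conditioned step (illustrative sketch)."""
    n = len(per_example_grads)
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    g = sum(clipped) / n
    g = g + np.random.normal(0.0, noise_mult * clip / n, size=g.shape)  # DP noise
    v = beta2 * v + (1 - beta2) * g ** 2   # second-moment estimate
    w = w - lr * g / (np.sqrt(v) + eps)
    return w, v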

Generalization Bounds

Towards Better Generalization of Adaptive Gradient Methods

no code implementations • NeurIPS 2020 • Yingxue Zhou, Belhal Karimi, Jinxing Yu, Zhiqiang Xu, Ping Li

Adaptive gradient methods such as AdaGrad, RMSprop, and Adam have been the optimizers of choice for deep learning due to their fast training speed.

Noisy Truncated SGD: Optimization and Generalization

no code implementations • 26 Feb 2021 • Yingxue Zhou, Xinyan Li, Arindam Banerjee

Our experiments on a variety of benchmark datasets (MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100) with various networks (VGG and ResNet) validate the theoretical properties of NT-SGD, i.e., NT-SGD matches the speed and accuracy of vanilla SGD while effectively working with sparse gradients, and can successfully escape poor local minima.
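
A minimal sketch of one plausible NT-SGD step consistent with the abstract: keep only the largest-magnitude gradient coordinates, zero out the rest, and inject Gaussian noise before the update (the sparsity fraction and noise scale are assumed parameters, not the paper's specification):

import numpy as np

def nt_sgd_step(w, g, lr=0.1, keep_frac=0.1, noise_std=0.01):
    """Truncate small gradient coordinates to zero, add isotropic
    Gaussian noise, then take an SGD step (illustrative sketch)."""
    k = max(1, int(keep_frac * g.size))
    thresh = np.sort(np.abs(g), axis=None)[-k]        # k-th largest magnitude
    g_sparse = np.where(np.abs(g) >= thresh, g, 0.0)  # truncated (sparse) gradient
    g_noisy = g_sparse + np.random.normal(0.0, noise_std, size=g.shape)
    return w - lr * g_noisy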

Stability Based Generalization Bounds for Exponential Family Langevin Dynamics

no code implementations • 9 Jan 2022 • Arindam Banerjee, Tiancong Chen, Xinyan Li, Yingxue Zhou

Recent years have seen advances in generalization bounds for noisy stochastic algorithms, especially stochastic gradient Langevin dynamics (SGLD) based on stability (Mou et al., 2018; Li et al., 2020) and information theoretic approaches (Xu and Raginsky, 2017; Negrea et al., 2019; Steinke and Zakynthinou, 2020).
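
For reference, the SGLD update that these stability analyses study is plain SGD plus isotropic Gaussian noise whose variance scales with the step size and an inverse temperature (a standard formulation, not specific to this paper):

import numpy as np

def sgld_step(w, g, lr=1e-3, beta=1e4):
    """One SGLD step: gradient descent plus Gaussian noise with
    variance 2 * lr / beta, where beta is the inverse temperature."""
    noise = np.random.normal(0.0, np.sqrt(2.0 * lr / beta), size=w.shape)
    return w - lr * g + noise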

Generalization Bounds

RecMind: Large Language Model Powered Agent For Recommendation

no code implementations • 28 Aug 2023 • Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang

While recommendation systems (RS) have advanced significantly through deep learning, current RS approaches usually train and fine-tune models on task-specific datasets, limiting their generalizability to new recommendation tasks and their ability to leverage external knowledge due to model scale and data size constraints.

Explanation Generation • Language Modelling +2
