1 code implementation • ICLR 2022 • Wei Deng, Siqi Liang, Botao Hao, Guang Lin, Faming Liang
We propose an interacting contour stochastic gradient Langevin dynamics (ICSGLD) sampler, an embarrassingly parallel multiple-chain contour stochastic gradient Langevin dynamics (CSGLD) sampler with efficient interactions.
1 code implementation • 23 Dec 2021 • Xiang Ling, Lingfei Wu, Jiangyu Zhang, Zhenqing Qu, Wei Deng, Xiang Chen, Chunming Wu, Shouling Ji, Tianyue Luo, Jingzheng Wu, Yanjun Wu
In this paper, we focus on malware with the file format of portable executable (PE) in the family of Windows operating systems, namely Windows PE malware, as a representative case to study the adversarial attack methods in such adversarial settings.
no code implementations • 9 Dec 2021 • Wei Deng, Yi-An Ma, Zhao Song, Qian Zhang, Guang Lin
A key property of our approach is that communication efficiency does not deteriorate with the noise injected by the Langevin algorithms.
no code implementations • 29 Sep 2021 • Wei Deng, Qian Zhang, Qi Feng, Faming Liang, Guang Lin
Parallel tempering (PT), also known as replica exchange, is the go-to workhorse for simulations of multi-modal distributions.
no code implementations • NeurIPS 2021 • Botao Hao, Tor Lattimore, Wei Deng
Stochastic sparse linear bandits offer a practical model for high-dimensional online decision-making problems and have a rich information-regret structure.
2 code implementations • NeurIPS 2020 • Wei Deng, Guang Lin, Faming Liang
We propose an adaptively weighted variant of stochastic gradient Langevin dynamics (SGLD), called contour stochastic gradient Langevin dynamics (CSGLD), for Bayesian learning in big-data statistics.
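CSGLD builds on the standard SGLD update, which perturbs a stochastic gradient step with Gaussian noise whose scale is matched to the step size. A minimal sketch of that base update on a toy one-dimensional Gaussian target (the target, step size, and iteration counts are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld_step(theta, grad_log_post, step_size):
    """One SGLD update: gradient ascent on the log-posterior plus
    Gaussian noise with variance 2 * step_size."""
    noise = rng.normal(0.0, np.sqrt(2.0 * step_size), size=theta.shape)
    return theta + step_size * grad_log_post(theta) + noise

# Toy target: standard Gaussian, log p(x) = -x^2/2, so grad log p(x) = -x.
theta = np.array([5.0])
samples = []
for _ in range(5000):
    theta = sgld_step(theta, lambda x: -x, 1e-2)
    samples.append(theta[0])

# After burn-in the chain should fluctuate around the mode at 0
# with roughly unit variance.
```

CSGLD additionally reweights this dynamics with an adaptively estimated density of states, which flattens the energy landscape and helps the sampler escape local modes.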
no code implementations • 3 Oct 2020 • Yating Wang, Wei Deng, Guang Lin
The bias introduced by stochastic approximation is controllable and can be analyzed theoretically.
1 code implementation • ICLR 2021 • Wei Deng, Qi Feng, Georgios Karagiannis, Guang Lin, Faming Liang
Replica exchange stochastic gradient Langevin dynamics (reSGLD) has shown promise in accelerating the convergence in non-convex learning; however, an excessively large correction for avoiding biases from noisy energy estimators has limited the potential of the acceleration.
2 code implementations • ICML 2020 • Wei Deng, Qi Feng, Liyao Gao, Faming Liang, Guang Lin
Replica exchange Monte Carlo (reMC), also known as parallel tempering, is an important technique for accelerating the convergence of the conventional Markov Chain Monte Carlo (MCMC) algorithms.
Ranked #65 on Image Classification on CIFAR-100
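Replica exchange runs chains at several temperatures and proposes swaps between adjacent temperature levels, so the hot chains' free exploration propagates down to the cold chain. A minimal random-walk Metropolis sketch on a toy bimodal target (the temperature ladder, proposal scale, and target are illustrative assumptions, not the paper's setup):

```python
import math
import random

random.seed(1)

def log_target(x):
    # Toy bimodal density: equal mixture of unit Gaussians at -4 and +4.
    return math.log(0.5 * math.exp(-0.5 * (x - 4.0) ** 2)
                    + 0.5 * math.exp(-0.5 * (x + 4.0) ** 2))

temps = [1.0, 4.0, 16.0]   # temperature ladder; chain 0 targets the true density
chains = [0.0 for _ in temps]

def mh_step(x, temp):
    """Random-walk Metropolis step on the tempered density p(x)^(1/temp)."""
    prop = x + random.gauss(0.0, 1.0)
    if math.log(random.random()) < (log_target(prop) - log_target(x)) / temp:
        return prop
    return x

visited = {"left": False, "right": False}
for _ in range(20000):
    chains = [mh_step(x, t) for x, t in zip(chains, temps)]
    # Propose a swap between a random adjacent temperature pair; the
    # acceptance ratio is (beta_i - beta_j) * (log p(x_j) - log p(x_i)).
    i = random.randrange(len(temps) - 1)
    log_acc = ((1.0 / temps[i] - 1.0 / temps[i + 1])
               * (log_target(chains[i + 1]) - log_target(chains[i])))
    if math.log(random.random()) < log_acc:
        chains[i], chains[i + 1] = chains[i + 1], chains[i]
    if chains[0] > 2.0:
        visited["right"] = True
    if chains[0] < -2.0:
        visited["left"] = True
```

Without the swaps, the cold chain would typically stay trapped in one of the two modes; with them, it visits both. The reSGLD line of work replaces the exact Metropolis steps with stochastic-gradient Langevin updates and corrects the swap test for the noise in the energy estimates.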
no code implementations • 29 Jun 2020 • Yating Wang, Wei Deng, Guang Lin
The algorithm utilizes a set of spike-and-slab priors for the parameters in the deep neural network.
5 code implementations • 5 May 2020 • Andreas Lugmayr, Martin Danelljan, Radu Timofte, Namhyuk Ahn, Dongwoon Bai, Jie Cai, Yun Cao, Junyang Chen, Kaihua Cheng, SeYoung Chun, Wei Deng, Mostafa El-Khamy, Chiu Man Ho, Xiaozhong Ji, Amin Kheradmand, Gwantae Kim, Hanseok Ko, Kanghyu Lee, Jungwon Lee, Hao Li, Ziluan Liu, Zhi-Song Liu, Shuai Liu, Yunhua Lu, Zibo Meng, Pablo Navarrete Michelini, Christian Micheloni, Kalpesh Prajapati, Haoyu Ren, Yong Hyeok Seo, Wan-Chi Siu, Kyung-Ah Sohn, Ying Tai, Rao Muhammad Umer, Shuangquan Wang, Huibing Wang, Timothy Haoning Wu, Hao-Ning Wu, Biao Yang, Fuzhi Yang, Jaejun Yoo, Tongtong Zhao, Yuanbo Zhou, Haijie Zhuo, Ziyao Zong, Xueyi Zou
This paper reviews the NTIRE 2020 challenge on real world super-resolution.
2 code implementations • 17 Feb 2020 • Wei Deng, Junwei Pan, Tian Zhou, Deguang Kong, Aaron Flores, Guang Lin
To address the significantly increased serving latency and high memory usage of ad serving in production, this paper presents DeepLight, a framework that accelerates CTR predictions in three ways: 1) accelerating model inference by explicitly searching for informative feature interactions in the shallow component; 2) pruning redundant layers and parameters at both the intra-layer and inter-layer levels of the DNN component; and 3) promoting sparsity in the embedding layer to preserve the most discriminative signals.
Ranked #2 on Click-Through Rate Prediction on Criteo
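As a rough illustration of the pruning idea in point 2), the sketch below zeroes out the smallest-magnitude entries of a weight matrix. This generic magnitude pruning is a stand-in assumption for exposition, not DeepLight's actual pruning criterion:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero the smallest-magnitude entries, keeping the top
    (1 - sparsity) fraction of weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(7)
w = rng.normal(size=(100, 100))
p = magnitude_prune(w, 0.9)
# About 90% of the entries are now exactly zero, which is what lets a
# sparse model cut inference latency and memory.
```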
1 code implementation • NeurIPS 2019 • Wei Deng, Xiao Zhang, Faming Liang, Guang Lin
We propose a novel adaptive empirical Bayesian method for sparse deep learning, where the sparsity is ensured via a class of self-adaptive spike-and-slab priors.
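A spike-and-slab prior mixes a narrow Gaussian "spike" (shrinking pruned weights toward zero) with a wide "slab" (leaving active weights loosely constrained). A minimal sketch of sampling from such a mixture, where the mixture weight and the two scales are illustrative assumptions rather than the paper's self-adaptive values:

```python
import numpy as np

rng = np.random.default_rng(42)

def spike_and_slab_sample(n, p_slab=0.3, sigma_spike=0.01, sigma_slab=1.0):
    """Draw n weights from a two-component Gaussian mixture: a narrow
    spike near zero (inactive weights) and a wide slab (active weights)."""
    is_slab = rng.random(n) < p_slab
    scale = np.where(is_slab, sigma_slab, sigma_spike)
    return rng.normal(0.0, scale), is_slab

w, active = spike_and_slab_sample(10000)
# Most of the prior mass sits in the spike, so the bulk of sampled
# weights are effectively zero -- the mechanism that induces sparsity.
```

In the empirical Bayesian method, the mixture parameters are not fixed as above but adapted from the data during training.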
no code implementations • ICLR 2019 • Wei Deng, Xiao Zhang, Faming Liang, Guang Lin
We propose a robust Bayesian deep learning algorithm to infer complex posteriors with latent variables.
no code implementations • 9 Aug 2017 • Rongrong Zhang, Wei Deng, Michael Yu Zhu
We propose to use deep neural networks to automate the SA process.
no code implementations • 6 Feb 2013 • Fei Yang, Hong Jiang, Zuowei Shen, Wei Deng, Dimitris Metaxas
We address the problem of reconstructing and analyzing surveillance videos using compressive sensing.