1 code implementation • 19 Oct 2023 • Zipeng Xiao, Zhongkai Hao, Bokai Lin, Zhijie Deng, Hang Su
Neural operators, as efficient surrogate models for learning the solutions of PDEs, have received extensive attention in the field of scientific machine learning.
no code implementations • 17 Oct 2023 • Siqi Kou, Lei Gan, Dequan Wang, Chongxuan Li, Zhijie Deng
In particular, we derive a novel uncertainty iteration principle to characterize the uncertainty dynamics in diffusion, and leverage the last-layer Laplace approximation for efficient Bayesian inference.
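As a rough illustration of the last-layer Laplace idea mentioned above, the sketch below fits a Gaussian posterior over a final linear regression head via a GGN approximation and reads off a predictive variance; the PyTorch names (`feature_extractor`, `last_layer`, `loader`) are hypothetical placeholders, and this is not the paper's implementation.

```python
# Minimal last-layer Laplace sketch for a regression head (illustrative only).
import torch

@torch.no_grad()
def fit_last_layer_laplace(feature_extractor, last_layer, loader,
                           prior_precision=1.0, noise_var=1.0):
    """Gaussian posterior over the last-layer weights via a GGN approximation."""
    d = last_layer.weight.shape[1]                    # feature dimension
    precision = prior_precision * torch.eye(d)        # prior precision
    for x, _ in loader:
        phi = feature_extractor(x)                    # (batch, d) features
        precision += phi.T @ phi / noise_var          # GGN term for the MSE loss
    return torch.linalg.inv(precision)                # posterior covariance (shared across outputs)

@torch.no_grad()
def predict_with_uncertainty(feature_extractor, last_layer, x, covariance, noise_var=1.0):
    phi = feature_extractor(x)                        # (batch, d)
    mean = last_layer(phi)                            # MAP prediction
    # Predictive variance of the linear-Gaussian head: phi^T Sigma phi + observation noise.
    var = (phi @ covariance * phi).sum(-1, keepdim=True) + noise_var
    return mean, var
```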
1 code implementation • 15 Oct 2023 • Zhenyi Liao, Zhijie Deng
Leveraging pre-trained conditional diffusion models for video editing without further tuning has gained increasing attention due to its promise in film production, advertising, etc.
no code implementations • 11 Oct 2023 • Xiaoxuan Liu, Lanxiang Hu, Peter Bailis, Ion Stoica, Zhijie Deng, Alvin Cheung, Hao Zhang
We develop a prototype of online speculative decoding based on online knowledge distillation and evaluate it using both synthetic and real query data on several popular LLMs.
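For context, a toy sketch of the speculative-decoding loop that online speculative decoding builds on is given below; the online distillation of the draft model is omitted, and `draft_probs` / `target_probs` are hypothetical callables mapping a token prefix to a next-token distribution, not the paper's prototype.

```python
# Toy speculative decoding: the draft model proposes k tokens, the target model verifies them.
import numpy as np

def speculative_step(prefix, draft_probs, target_probs, k=4, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # 1) Draft model proposes k tokens autoregressively.
    drafted, q_list, ctx = [], [], list(prefix)
    for _ in range(k):
        q = draft_probs(ctx)
        t = rng.choice(len(q), p=q)
        drafted.append(t); q_list.append(q); ctx.append(t)
    # 2) Target model scores every drafted position (a single parallel pass in practice).
    p_list = [target_probs(list(prefix) + drafted[:i]) for i in range(k + 1)]
    # 3) Accept each drafted token with probability min(1, p/q); resample on rejection.
    accepted = []
    for i, t in enumerate(drafted):
        p, q = p_list[i], q_list[i]
        if rng.random() < min(1.0, p[t] / q[t]):
            accepted.append(t)
        else:
            residual = np.maximum(p - q, 0.0)          # resample from the residual distribution
            residual /= residual.sum()
            accepted.append(rng.choice(len(residual), p=residual))
            return accepted
    # 4) All drafts accepted: take one bonus token from the target distribution.
    accepted.append(rng.choice(len(p_list[k]), p=p_list[k]))
    return accepted
```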
1 code implementation • 29 Aug 2023 • Feng Zhou, Quyu Kong, Zhijie Deng, Fengxiang He, Peng Cui, Jun Zhu
This paper presents a novel extension of multi-task Gaussian Cox processes for modeling multiple heterogeneous correlated tasks jointly, e.g., classification and regression, via multi-output Gaussian processes (MOGP).
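To illustrate only the multi-output-GP ingredient (not the Cox-process model itself), a minimal intrinsic-coregionalization-model (ICM) kernel might look as follows; `W` and `v` parameterize an assumed task-covariance matrix, and the RBF kernel is an illustrative choice.

```python
# ICM kernel: K[(x,i),(y,j)] = B[i,j] * k_rbf(x,y), with task covariance B = W W^T + diag(v).
import numpy as np

def rbf(X, Y, lengthscale=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def icm_kernel(X, tasks_x, Y, tasks_y, W, v, lengthscale=1.0):
    """Cross-covariance between points X with task labels tasks_x and Y with tasks_y."""
    B = W @ W.T + np.diag(v)                          # task-covariance matrix
    return B[np.ix_(tasks_x, tasks_y)] * rbf(X, Y, lengthscale)
```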
no code implementations • 16 Jun 2023 • Hongcheng Gao, Hao Zhang, Yinpeng Dong, Zhijie Deng
Text-to-image (T2I) diffusion models (DMs) have shown promise in generating high-quality images from textual descriptions.
no code implementations • 26 May 2023 • Zhijie Deng, Hongcheng Gao, Yibo Miao, Hao Zhang
The detection of machine-generated text, especially from large language models (LLMs), is crucial in preventing serious social problems resulting from their misuse.
no code implementations • ICCV 2023 • Zhijie Deng, Yucen Luo
Unsupervised semantic segmentation is a long-standing and significant challenge in computer vision.
no code implementations • 10 Feb 2023 • Peng Cui, Yang Yue, Zhijie Deng, Jun Zhu
Deep neural networks (DNNs) have achieved remarkable success in a variety of computer vision tasks, where massive labeled images are routinely required for model optimization.
1 code implementation • 23 Oct 2022 • Zhijie Deng, Feng Zhou, Jun Zhu
Laplace approximation (LA) and its linearized variant (LLA) enable effortless adaptation of pretrained deep neural networks to Bayesian neural networks.
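As a reminder of what the LLA predictive looks like functionally (not the paper's accelerated estimator), a naive sketch is given below; `jacobian_fn` returns the per-parameter Jacobian at a test input and `posterior_cov` is a dense covariance over all parameters, which is only feasible for small models and is exactly the bottleneck accelerated variants address.

```python
# Naive linearized-Laplace predictive: the network is linearized around the MAP
# weights, so the predictive is Gaussian with covariance J(x) Sigma J(x)^T.
def lla_predictive(f_map, jacobian_fn, posterior_cov, x):
    mean = f_map(x)                      # MAP prediction f(x; theta*)
    J = jacobian_fn(x)                   # (num_outputs, num_params) Jacobian at theta*
    cov = J @ posterior_cov @ J.T        # Gaussian predictive covariance
    return mean, cov
```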
1 code implementation • 23 Oct 2022 • Zhijie Deng, Jiaxin Shi, Hao Zhang, Peng Cui, Cewu Lu, Jun Zhu
Unlike prior spectral methods such as Laplacian Eigenmap that operate in a nonparametric manner, Neural Eigenmap leverages NeuralEF to parametrically model eigenfunctions using a neural network.
1 code implementation • 30 Apr 2022 • Zhijie Deng, Jiaxin Shi, Jun Zhu
Learning the principal eigenfunctions of an integral operator defined by a kernel and a data distribution is at the core of many machine learning problems.
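For orientation, the classical Nystrom baseline for this problem can be sketched in a few lines: it approximates the top eigenpairs of the operator (T f)(x) = E_{x' ~ p}[k(x, x') f(x')] from samples. It is shown only as a point of reference, not the paper's neural parameterization; the RBF kernel and lengthscale are illustrative choices.

```python
# Nystrom approximation of the top eigenfunctions of a kernel integral operator.
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def nystrom_eigenfunctions(X, k=3, lengthscale=1.0):
    """Return approximate top-k operator eigenvalues and a callable evaluating eigenfunctions."""
    n = len(X)
    K = rbf_kernel(X, X, lengthscale)
    evals, evecs = np.linalg.eigh(K)                  # ascending order
    evals, evecs = evals[::-1][:k], evecs[:, ::-1][:, :k]
    op_evals = evals / n                              # eigenvalues of the integral operator

    def eigenfunctions(Xnew):
        # psi_j(x) ~ (sqrt(n) / lambda_j(K)) * sum_i k(x, x_i) u_{ij}
        return rbf_kernel(Xnew, X, lengthscale) @ evecs * (np.sqrt(n) / evals)

    return op_evals, eigenfunctions
```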
no code implementations • 30 Apr 2022 • Zhijie Deng, Feng Zhou, Jianfei Chen, Guoqiang Wu, Jun Zhu
In this way, we relate DE to Bayesian inference so that it enjoys reliable Bayesian uncertainty estimates.
no code implementations • 29 Sep 2021 • Zhijie Deng, Feng Zhou, Jianfei Chen, Guoqiang Wu, Jun Zhu
Deep Ensemble (DE) is a flexible, feasible, and effective alternative to Bayesian neural networks (BNNs) for uncertainty estimation in deep learning.
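A minimal sketch of how a deep ensemble yields predictive uncertainty, assuming an illustrative list `members` of trained PyTorch classifiers (this is generic practice, not the specific method of the papers above): average the member softmax outputs and take the entropy of the mixture.

```python
# Deep-ensemble prediction: mixture of member softmaxes plus predictive entropy.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_predict(members, x):
    probs = torch.stack([F.softmax(m(x), dim=-1) for m in members])       # (M, batch, classes)
    mean_probs = probs.mean(0)                                            # mixture predictive
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(-1)   # total uncertainty
    return mean_probs, entropy
```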
1 code implementation • ICLR 2022 • Yinpeng Dong, Ke Xu, Xiao Yang, Tianyu Pang, Zhijie Deng, Hang Su, Jun Zhu
In this paper, we explore the memorization effect in adversarial training (AT) for promoting a deeper understanding of model capacity, convergence, generalization, and especially robust overfitting of the adversarially trained models.
no code implementations • 28 Mar 2021 • Peng Cui, Zhijie Deng, WenBo Hu, Jun Zhu
It is critical yet challenging for deep learning models to properly characterize uncertainty that is pervasive in real-world environments.
1 code implementation • CVPR 2021 • Zhijie Deng, Xiao Yang, Shizhen Xu, Hang Su, Jun Zhu
Despite their appealing flexibility, deep neural networks (DNNs) are vulnerable to adversarial examples.
no code implementations • ICCV 2021 • Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu
Although deep neural networks (DNNs) have made rapid progress in recent years, they are vulnerable in adversarial environments.
1 code implementation • NeurIPS 2020 • Zhijie Deng, Yinpeng Dong, Shifeng Zhang, Jun Zhu
In this work, we decouple the training of a network with stochastic architectures (NSA) from NAS and provide a first systematic investigation of it as a stand-alone problem.
no code implementations • NeurIPS 2020 • Hao Zhang, Yuan Li, Zhijie Deng, Xiaodan Liang, Lawrence Carin, Eric Xing
Synchronization is a key step in data-parallel distributed machine learning (ML).
1 code implementation • 5 Oct 2020 • Zhijie Deng, Jun Zhu
Despite their theoretical appeal, Bayesian neural networks (BNNs) are left behind in real-world adoption, mainly due to persistent concerns about their scalability, accessibility, and reliability.
no code implementations • 28 Sep 2020 • Zhijie Deng, Xiao Yang, Hao Zhang, Yinpeng Dong, Jun Zhu
Despite their theoretical appeal, Bayesian neural networks (BNNs) fall far behind ordinary NNs in real-world adoption, mainly due to their limited scalability in training and the low fidelity of their uncertainty estimates.
1 code implementation • NeurIPS 2020 • Yinpeng Dong, Zhijie Deng, Tianyu Pang, Hang Su, Jun Zhu
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
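For reference, a generic PGD-based adversarial training step is sketched below, which is what "augmenting training data with adversarial examples" typically amounts to; the hyperparameters and names are illustrative and not taken from the paper.

```python
# Generic PGD adversarial training step (illustrative defaults for images in [0, 1]).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)    # random start in the L_inf ball
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()                 # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)            # project back into the ball
        x_adv = x_adv.clamp(0, 1)                           # keep a valid image
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    model.train()
    x_adv = pgd_attack(model, x, y)                         # craft adversarial examples
    loss = F.cross_entropy(model(x_adv), y)                 # train on them
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```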
1 code implementation • 22 Nov 2019 • Zhijie Deng, Yucen Luo, Jun Zhu, Bo Zhang
Bayesian neural networks (BNNs) augment deep networks with uncertainty quantification by Bayesian treatment of the network weights.
1 code implementation • 25 Sep 2019 • Zhijie Deng, Yucen Luo, Jun Zhu, Bo Zhang
Bayesian neural networks (BNNs) introduce uncertainty estimation to deep networks by performing Bayesian inference on network weights.
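To make "Bayesian inference on network weights" concrete, a textbook mean-field variational linear layer is sketched below; it is a generic construction with illustrative initializations, not the implementation behind these papers.

```python
# Mean-field variational Bayesian linear layer with the reparameterization trick.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(0.05 * torch.randn(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.b = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        w_std = F.softplus(self.w_rho)                      # ensure positive std
        w = self.w_mu + w_std * torch.randn_like(w_std)     # reparameterized weight sample
        return F.linear(x, w, self.b)

    def kl_to_standard_normal(self):
        # KL(q(w) || N(0, I)) for a diagonal Gaussian posterior, added to the training loss.
        w_std = F.softplus(self.w_rho)
        return 0.5 * (w_std.pow(2) + self.w_mu.pow(2) - 1 - 2 * w_std.log()).sum()
```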
1 code implementation • ICCV 2019 • Zhijie Deng, Yucen Luo, Jun Zhu
Deep learning methods have shown promise in unsupervised domain adaptation, which aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
Ranked #3 on Domain Adaptation on SVHN-to-MNIST
no code implementations • 25 Feb 2019 • Zhijie Deng, Yinpeng Dong, Jun Zhu
We present batch virtual adversarial training (BVAT), a novel regularization method for graph convolutional networks (GCNs).
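A generic (non-batch) virtual adversarial training regularizer on node features is sketched below to convey the kind of smoothness BVAT enforces; the `gcn(features, adj)` interface and the hyperparameters are assumptions, and the actual BVAT sampling and batching differ.

```python
# Generic VAT regularizer for a GCN: find the most sensitive feature perturbation
# with one power-iteration step, then penalize the prediction change it causes.
import torch
import torch.nn.functional as F

def vat_loss(gcn, features, adj, xi=1e-6, eps=1.0):
    with torch.no_grad():
        logp = F.log_softmax(gcn(features, adj), dim=-1)              # clean predictions
    d = torch.randn_like(features)
    d = xi * d / d.norm(dim=-1, keepdim=True).clamp_min(1e-12)        # tiny random direction
    d.requires_grad_(True)
    logp_hat = F.log_softmax(gcn(features + d, adj), dim=-1)
    adv_kl = F.kl_div(logp_hat, logp.exp(), reduction="batchmean")
    grad = torch.autograd.grad(adv_kl, d)[0]
    r_adv = eps * grad / grad.norm(dim=-1, keepdim=True).clamp_min(1e-12)
    # Consistency loss against predictions under the virtual adversarial perturbation.
    logp_adv = F.log_softmax(gcn(features + r_adv.detach(), adj), dim=-1)
    return F.kl_div(logp_adv, logp.exp(), reduction="batchmean")
```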
1 code implementation • NeurIPS 2017 • Zhijie Deng, Hao Zhang, Xiaodan Liang, Luona Yang, Shizhen Xu, Jun Zhu, Eric P. Xing
We study the problem of conditional generative modeling based on designated semantics or structures.