no code implementations • ICML 2020 • Kun Xu, Chongxuan Li, Jun Zhu, Bo Zhang
There are existing efforts that model the training dynamics of GANs in the parameter space but the analysis cannot directly motivate practically effective stabilizing methods.
no code implementations • ECCV 2020 • Yueru Li, Shuyu Cheng, Hang Su, Jun Zhu
Based on our investigation, we further present a new robust learning algorithm which encourages a larger gradient component in the tangent space of the data manifold, thereby suppressing the gradient-leaking phenomenon.
no code implementations • ICML 2020 • Michael Zhu, Chang Liu, Jun Zhu
Particle-based Variational Inference methods (ParVIs), like Stein Variational Gradient Descent, are nonparametric variational inference methods that optimize a set of particles to best approximate a target distribution.
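To make the ParVI idea concrete, here is a minimal NumPy sketch of the Stein Variational Gradient Descent update with an RBF kernel; the kernel choice, bandwidth, and step size are illustrative assumptions, not the paper's proposal.

```python
import numpy as np

def rbf_kernel(x, bandwidth=1.0):
    # Pairwise RBF kernel values and their gradients w.r.t. the first argument.
    diff = x[:, None, :] - x[None, :, :]             # (n, n, d)
    k = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * bandwidth ** 2))  # (n, n)
    grad_k = -diff / bandwidth ** 2 * k[:, :, None]  # d k(x_i, x_j) / d x_i
    return k, grad_k

def svgd_step(particles, score_fn, step_size=0.1, bandwidth=1.0):
    """One SVGD update: phi(x_i) = mean_j [k(x_j, x_i) score(x_j) + grad_{x_j} k(x_j, x_i)]."""
    n = particles.shape[0]
    k, grad_k = rbf_kernel(particles, bandwidth)
    scores = score_fn(particles)                     # grad log p at each particle
    # Driving term pulls particles toward high density; the kernel-gradient
    # term is repulsive and keeps the particle set spread out.
    phi = (k @ scores + grad_k.sum(axis=0)) / n
    return particles + step_size * phi

# Example: approximate a standard 2D Gaussian, whose score is -x.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2)) * 3 + 5
for _ in range(200):
    x = svgd_step(x, score_fn=lambda p: -p)
```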
no code implementations • 31 Jan 2023 • Liyuan Wang, Xingxing Zhang, Hang Su, Jun Zhu
To cope with real-world dynamics, an intelligent agent needs to incrementally acquire, update, accumulate, and exploit knowledge throughout its lifetime.
no code implementations • 10 Jan 2023 • Kexuan Li, Jun Zhu, Anthony R. Ives, Volker C. Radeloff, Fangfang Wang
To be specific, we use a sparsely connected deep neural network with rectified linear unit (ReLU) activation function to estimate the unknown regression function that describes the relationship between response and covariates in the presence of spatial dependence.
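As a rough illustration of the estimator described above, the sketch below fits a ReLU network on covariates plus 2D spatial coordinates, with an L1 penalty as a simple stand-in for the paper's sparse-connectivity construction; all names and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class SpatialReLUNet(nn.Module):
    """ReLU MLP taking covariates and spatial coordinates as joint inputs,
    so the learned regression function can capture spatial dependence."""
    def __init__(self, n_covariates, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_covariates + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, covariates, coords):
        return self.net(torch.cat([covariates, coords], dim=-1)).squeeze(-1)

def loss_fn(model, covariates, coords, y, l1=1e-4):
    # L1 penalty encourages sparse weights; the paper's exact sparsity
    # mechanism may differ from this simple surrogate.
    pred = model(covariates, coords)
    sparsity = sum(p.abs().sum() for p in model.parameters())
    return nn.functional.mse_loss(pred, y) + l1 * sparsity
```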
no code implementations • 1 Dec 2022 • Fan Bao, Chongxuan Li, Jiacheng Sun, Jun Zhu
Extensive empirical evidence demonstrates that conditional generative models are easier to train and perform better than unconditional ones by exploiting the labels of data.
1 code implementation • 28 Nov 2022 • Shilong Liu, Yaoyuan Liang, Feng Li, Shijia Huang, Hao Zhang, Hang Su, Jun Zhu, Lei Zhang
As phrase extraction can be regarded as a $1$D text segmentation problem, we formulate PEG as a dual detection problem and propose a novel DQ-DETR model, which introduces dual queries to probe different features from image and text for object prediction and phrase mask prediction.
Ranked #2 on Referring Expression Comprehension on RefCOCO
no code implementations • 15 Nov 2022 • Zhongkai Hao, Songming Liu, Yichi Zhang, Chengyang Ying, Yao Feng, Hang Su, Jun Zhu
Recent work shows that it provides potential benefits for machine learning models by incorporating the physical prior and collected data, which makes the intersection of machine learning and physics become a prevailing paradigm.
1 code implementation • 2 Nov 2022 • Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, Jun Zhu
The commonly-used fast sampler for guided sampling is DDIM, a first-order diffusion ODE solver that generally needs 100 to 250 steps for high-quality samples.
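For reference, a minimal sketch of the DDIM baseline mentioned above: one deterministic (eta = 0) first-order update, plus the standard classifier-free guidance blend. The model call signature is an assumption for illustration, and the alpha-bar values are scalar tensors from the cumulative noise schedule.

```python
import torch

def ddim_step(x_t, eps_hat, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM update: predict x_0 from the noise estimate,
    then re-noise to the previous timestep's noise level."""
    x0_pred = (x_t - (1 - alpha_bar_t).sqrt() * eps_hat) / alpha_bar_t.sqrt()
    return alpha_bar_prev.sqrt() * x0_pred + (1 - alpha_bar_prev).sqrt() * eps_hat

def guided_eps(model, x_t, t, cond, guidance_scale):
    """Classifier-free guidance: blend conditional and unconditional noise
    predictions; `model(x, t, cond)` is an assumed interface."""
    eps_cond = model(x_t, t, cond)
    eps_uncond = model(x_t, t, None)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```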
no code implementations • 2 Nov 2022 • Jinali Zhang, Yinpeng Dong, Jun Zhu, Jihong Zhu, Minchi Kuang, Xiaming Yuan
Extensive experiments show that the SS attack proposed in this paper can be seamlessly combined with existing state-of-the-art (SOTA) 3D point cloud attack methods to form more powerful attack methods, and the SS attack improves transferability by over 3.6 times compared to the baseline.
no code implementations • 2 Nov 2022 • Yao Feng, Yuhong Jiang, Hang Su, Dong Yan, Jun Zhu
Model-based reinforcement learning usually suffers from a high sample complexity in training the world model, especially for the environments with complex dynamics.
Model-based Reinforcement Learning • reinforcement-learning
no code implementations • 29 Oct 2022 • Ziyu Wang, Yucen Luo, Yueru Li, Jun Zhu, Bernhard Schölkopf
For nonparametric conditional moment models, efficient estimation often relies on preimposed conditions on various measures of ill-posedness of the hypothesis space, which are hard to validate when flexible models are used.
no code implementations • 27 Oct 2022 • Yibo Miao, Yinpeng Dong, Jun Zhu, Xiao-Shan Gao
For naturalness, we constrain the adversarial example to be $\epsilon$-isometric to the original one by adopting the Gaussian curvature as a surrogate metric guaranteed by a theoretical analysis.
1 code implementation • 23 Oct 2022 • Zhijie Deng, Jiaxin Shi, Hao Zhang, Peng Cui, Cewu Lu, Jun Zhu
In this paper, we introduce a scalable method for learning structured, adaptive-length deep representations.
1 code implementation • 23 Oct 2022 • Zhijie Deng, Feng Zhou, Jun Zhu
Laplace approximation (LA) and its linearized variant (LLA) enable effortless adaptation of pretrained deep neural networks to Bayesian neural networks.
1 code implementation • 8 Oct 2022 • Yinpeng Dong, Shouwei Ruan, Hang Su, Caixin Kang, Xingxing Wei, Jun Zhu
Recent studies have demonstrated that visual recognition models lack robustness to distribution shift.
1 code implementation • 6 Oct 2022 • Songming Liu, Zhongkai Hao, Chengyang Ying, Hang Su, Jun Zhu, Ze Cheng
We present a unified hard-constraint framework for solving geometrically complex PDEs with neural networks, where the most commonly used Dirichlet, Neumann, and Robin boundary conditions (BCs) are considered.
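One classical way to impose a boundary condition exactly, in the spirit of (though much simpler than) the unified framework above, is to build the BC into the network ansatz. The 1D Dirichlet sketch below assumes the Lagaris-style construction, not the paper's own formulation.

```python
import torch

def hard_constrained_u(net, x, g0=0.0, g1=1.0):
    """u(x) = g(x) + x(1-x) * net(x): u(0)=g0 and u(1)=g1 hold by construction,
    so no boundary penalty term is needed in the training loss."""
    g = g0 + (g1 - g0) * x   # any smooth function matching the boundary values
    dist = x * (1.0 - x)     # vanishes exactly on the boundary {0, 1}
    return g + dist * net(x)
```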
2 code implementations • 30 Sep 2022 • Fan Bao, Min Zhao, Zhongkai Hao, Peiyao Li, Chongxuan Li, Jun Zhu
Inverse molecular design is critical in material science and drug discovery, where the generated molecules should satisfy certain desirable properties.
no code implementations • 30 Sep 2022 • Jianyun Xu, Zhenwei Miao, Da Zhang, Hongyu Pan, Kaixuan Liu, Peihan Hao, Jun Zhu, Zhengyang Sun, Hongmin Li, Xin Zhan
By employing INT on CenterPoint, we can get around 7% (Waymo) and 15% (nuScenes) performance boosts with only 2–4 ms of latency overhead, and INT is currently SOTA on the Waymo 3D Detection leaderboard.
no code implementations • 29 Sep 2022 • Huayu Chen, Cheng Lu, Chengyang Ying, Hang Su, Jun Zhu
To address this problem, we adopt a generative approach by decoupling the learned policy into two parts: an expressive generative behavior model and an action evaluation model.
2 code implementations • 25 Sep 2022 • Fan Bao, Shen Nie, Kaiwen Xue, Yue Cao, Chongxuan Li, Hang Su, Jun Zhu
In particular, a latent diffusion model with a small U-ViT achieves a record-breaking FID of 5.48 in text-to-image generation on MS-COCO, among methods without accessing large external datasets during the training of generative models.
no code implementations • 15 Sep 2022 • Zhongkai Hao, Chengyang Ying, Hang Su, Jun Zhu, Jian Song, Ze Cheng
In this paper, we present a novel bi-level optimization framework to resolve the challenge by decoupling the optimization of the targets and constraints.
no code implementations • 15 Sep 2022 • Chengyang Ying, Zhongkai Hao, Xinning Zhou, Hang Su, Dong Yan, Jun Zhu
In this paper, we reveal that the instability is also related to a new notion of Reuse Bias of IS -- the bias in off-policy evaluation caused by the reuse of the replay buffer for evaluation and optimization.
no code implementations • 11 Aug 2022 • Qihan Guo, Siwei Wang, Jun Zhu
We study an extension of the standard bandit problem in which there are R layers of experts.
no code implementations • 3 Aug 2022 • Wenkai Li, Cheng Feng, Ting Chen, Jun Zhu
In this work, to tackle this important challenge, we first investigate the robustness of commonly used deep TSAD methods with contaminated training data, which provides a guideline for applying these methods when the provided training data are not guaranteed to be anomaly-free.
1 code implementation • 14 Jul 2022 • Min Zhao, Fan Bao, Chongxuan Li, Jun Zhu
Further, we provide an alternative explanation of the EGSDE as a product of experts, where each of the three experts (corresponding to the SDE and two feature extractors) solely contributes to faithfulness or realism.
Ranked #1 on Image-to-Image Translation on AFHQ (Wild to Dog)
1 code implementation • 13 Jul 2022 • Liyuan Wang, Xingxing Zhang, Qian Li, Jun Zhu, Yi Zhong
Continual learning requires incremental compatibility with a sequence of tasks.
1 code implementation • 12 Jul 2022 • Wenze Chen, Shiyu Huang, Yuan Chiang, Ting Chen, Jun Zhu
Recent algorithms designed for reinforcement learning tasks focus on finding a single optimal solution.
no code implementations • 18 Jun 2022 • Siwei Wang, Jun Zhu
To make the algorithm efficient, they usually use the sum of the upper confidence bounds of the arms in set $S$ to represent the upper confidence bound of $S$, which can be much larger than the tight upper confidence bound of $S$ and leads to much higher complexity than necessary, since the empirical means of different arms in $S$ are independent.
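A toy calculation of the gap being exploited: with m independent arms, summing per-arm confidence radii gives a set-level bound of width m times r, whereas a tight radius that uses independence scales like sqrt(m) times r because variances add. The numbers below are purely illustrative.

```python
import numpy as np

m, r = 20, 0.5                 # arms in S, per-arm confidence radius
naive_radius = m * r           # sum of individual upper confidence bounds
tight_radius = np.sqrt(m) * r  # radius for the sum of independent estimates
print(naive_radius, tight_radius)  # 10.0 vs ~2.24
```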
1 code implementation • 17 Jun 2022 • Siyu Wang, Jianfei Chen, Chongxuan Li, Jun Zhu, Bo Zhang
In this work, we propose Integer-only Discrete Flows (IODF), an efficient neural compressor with integer-only arithmetic.
1 code implementation • 16 Jun 2022 • Cheng Lu, Kaiwen Zheng, Fan Bao, Jianfei Chen, Chongxuan Li, Jun Zhu
To fill this gap, we show that the negative likelihood of the ODE can be bounded by controlling the first-, second-, and third-order score matching errors; we further present a novel high-order denoising score matching method to enable maximum likelihood training of score-based diffusion ODEs.
1 code implementation • 15 Jun 2022 • Fan Bao, Chongxuan Li, Jiacheng Sun, Jun Zhu, Bo Zhang
Thus, the generation performance on a subset of timesteps is crucial, which is greatly influenced by the covariance design in DPMs.
no code implementations • 12 Jun 2022 • You Qiaoben, Chengyang Ying, Xinning Zhou, Hang Su, Jun Zhu, Bo Zhang
However, deep neural networks are vulnerable to malicious adversarial noises, which may potentially cause catastrophic failures in Embodied Vision Navigation.
no code implementations • 9 Jun 2022 • Weikai Yang, Xi Ye, Xingxing Zhang, Lanxi Xiao, Jiazhi Xia, Zhongyuan Wang, Jun Zhu, Hanspeter Pfister, Shixia Liu
The base learners and labeled samples (shots) in an ensemble few-shot classifier greatly affect the model performance.
no code implementations • 9 Jun 2022 • Zhongkai Hao, Chengyang Ying, Yinpeng Dong, Hang Su, Jun Zhu, Jian Song
Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation.
no code implementations • 9 Jun 2022 • Chengyang Ying, Xinning Zhou, Hang Su, Dong Yan, Ning Chen, Jun Zhu
Though deep reinforcement learning (DRL) has obtained substantial success, it may encounter catastrophic failures due to the intrinsic uncertainty of both transition and observation.
1 code implementation • 2 Jun 2022 • Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, Jun Zhu
In this work, we propose an exact formulation of the solution of diffusion ODEs.
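A hedged sketch of the first-order case of such an exact formulation: an exponential-integrator step in which the linear part of the diffusion ODE is solved exactly and only the noise-prediction term is approximated (this first-order step coincides with DDIM). The alpha/sigma notation is the usual noise-schedule parameterization and may differ from the paper's exact presentation.

```python
import torch

def solver_1_step(x_s, eps_hat, alpha_s, alpha_t, sigma_s, sigma_t):
    """First-order exponential-integrator step from time s to t, with
    lambda = log(alpha / sigma) as the half-log-SNR; inputs are scalar tensors."""
    lam_s = torch.log(alpha_s / sigma_s)
    lam_t = torch.log(alpha_t / sigma_t)
    h = lam_t - lam_s
    return (alpha_t / alpha_s) * x_s - sigma_t * torch.expm1(h) * eps_hat
```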
no code implementations • 28 May 2022 • Shih-Han Chan, Yinpeng Dong, Jun Zhu, Xiaolu Zhang, Jun Zhou
We propose four kinds of backdoor attacks for the object detection task: 1) Object Generation Attack: a trigger can falsely generate an object of the target class; 2) Regional Misclassification Attack: a trigger can change the prediction of a surrounding object to the target class; 3) Global Misclassification Attack: a single trigger can change the predictions of all objects in an image to the target class; and 4) Object Disappearance Attack: a trigger can make the detector fail to detect objects of the target class.
1 code implementation • 26 May 2022 • Tim Pearce, Jong-Hyeon Jeong, Yichen Jia, Jun Zhu
To offer theoretical insight into our algorithm, we show firstly that it can be interpreted as a form of expectation-maximisation, and secondly that it exhibits a desirable `self-correcting' property.
1 code implementation • 22 May 2022 • Ziyu Wang, Yuhao Zhou, Jun Zhu
We investigate nonlinear instrumental variable (IV) regression given high-dimensional instruments.
1 code implementation • Findings (NAACL) 2022 • Jun Zhu, Céline Hudelot
Works on learning job title representation are mainly based on the Job-Transition Graph, built from the working history of talents.
no code implementations • 30 Apr 2022 • Zhijie Deng, Feng Zhou, Jianfei Chen, Guoqiang Wu, Jun Zhu
In this way, we relate DE to Bayesian inference to enjoy reliable Bayesian uncertainty.
1 code implementation • 30 Apr 2022 • Zhijie Deng, Jiaxin Shi, Jun Zhu
Learning the principal eigenfunctions of an integral operator defined by a kernel and a data distribution is at the core of many machine learning problems.
no code implementations • 26 Mar 2022 • Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han, Zhenghao Liu, Ning Ding, Yongming Rao, Yizhao Gao, Liang Zhang, Ming Ding, Cong Fang, Yisen Wang, Mingsheng Long, Jing Zhang, Yinpeng Dong, Tianyu Pang, Peng Cui, Lingxiao Huang, Zheng Liang, HuaWei Shen, HUI ZHANG, Quanshi Zhang, Qingxiu Dong, Zhixing Tan, Mingxuan Wang, Shuo Wang, Long Zhou, Haoran Li, Junwei Bao, Yingwei Pan, Weinan Zhang, Zhou Yu, Rui Yan, Chence Shi, Minghao Xu, Zuobai Zhang, Guoqiang Wang, Xiang Pan, Mengjie Li, Xiaoyu Chu, Zijun Yao, Fangwei Zhu, Shulin Cao, Weicheng Xue, Zixuan Ma, Zhengyan Zhang, Shengding Hu, Yujia Qin, Chaojun Xiao, Zheni Zeng, Ganqu Cui, Weize Chen, Weilin Zhao, Yuan YAO, Peng Li, Wenzhao Zheng, Wenliang Zhao, Ziyi Wang, Borui Zhang, Nanyi Fei, Anwen Hu, Zenan Ling, Haoyang Li, Boxi Cao, Xianpei Han, Weidong Zhan, Baobao Chang, Hao Sun, Jiawen Deng, Chujie Zheng, Juanzi Li, Lei Hou, Xigang Cao, Jidong Zhai, Zhiyuan Liu, Maosong Sun, Jiwen Lu, Zhiwu Lu, Qin Jin, Ruihua Song, Ji-Rong Wen, Zhouchen Lin, LiWei Wang, Hang Su, Jun Zhu, Zhifang Sui, Jiajun Zhang, Yang Liu, Xiaodong He, Minlie Huang, Jian Tang, Jie Tang
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks becomes a popular paradigm.
no code implementations • 13 Mar 2022 • Jialian Li, Tongzheng Ren, Dong Yan, Hang Su, Jun Zhu
Our goal is to identify a near-optimal robust policy for the perturbed testing environment, which introduces additional technical difficulties as we need to simultaneously estimate the training environment uncertainty from samples and find the worst-case perturbation for testing.
1 code implementation • 13 Mar 2022 • Yinpeng Dong, Shuyu Cheng, Tianyu Pang, Hang Su, Jun Zhu
However, the existing methods inevitably suffer from low attack success rates or poor query efficiency since it is difficult to estimate the gradient in a high-dimensional input space with limited information.
no code implementations • 9 Mar 2022 • Xiao Yang, Yinpeng Dong, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu
It is therefore imperative to develop a framework that can enable a comprehensive evaluation of the vulnerability of face recognition in the physical world.
8 code implementations • 7 Mar 2022 • Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel M. Ni, Heung-Yeung Shum
Compared to other models on the leaderboard, DINO significantly reduces its model size and pre-training data size while achieving better results.
Ranked #1 on Object Detection on COCO 2017 val (box AP metric)
1 code implementation • 21 Feb 2022 • Tianyu Pang, Min Lin, Xiao Yang, Jun Zhu, Shuicheng Yan
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
1 code implementation • ICLR 2022 • Liyuan Wang, Xingxing Zhang, Kuo Yang, Longhui Yu, Chongxuan Li, Lanqing Hong, Shifeng Zhang, Zhenguo Li, Yi Zhong, Jun Zhu
In this work, we propose memory replay with data compression (MRDC) to reduce the storage cost of old training samples and thus increase their amount that can be stored in the memory buffer.
4 code implementations • ICLR 2022 • Shilong Liu, Feng Li, Hao Zhang, Xiao Yang, Xianbiao Qi, Hang Su, Jun Zhu, Lei Zhang
We present in this paper a novel query formulation using dynamic anchor boxes for DETR (DEtection TRansformer) and offer a deeper understanding of the role of queries in DETR.
2 code implementations • ICLR 2022 • Fan Bao, Chongxuan Li, Jun Zhu, Bo Zhang
In this work, we present a surprising result that both the optimal reverse variance and the corresponding optimal KL divergence of a DPM have analytic forms w.r.t. its score function.
no code implementations • CVPR 2022 • Yunlong Wang, Hongyu Pan, Jun Zhu, Yu-Huan Wu, Xin Zhan, Kun Jiang, Diange Yang
In this paper, we propose a novel Spatial-Temporal Integrated network with Bidirectional Enhancement, BE-STI, to improve the temporal motion prediction performance by spatial semantic features, which points out an efficient way to combine semantic segmentation and motion prediction.
no code implementations • CVPR 2022 • Hongyang Gu, Jianmin Li, Guangyuan Fu, Chifong Wong, Xinghao Chen, Jun Zhu
In this paper, we propose a novel method, AutoLoss-GMS, to automatically search for a better loss function in the space of generalized margin-based softmax loss functions for person re-identification.
no code implementations • 9 Nov 2021 • Jun Zhu, Gautier Viaud, Céline Hudelot
The second module learns job seeker representations.
1 code implementation • NeurIPS 2021 • Liyuan Wang, Mingtian Zhang, Zhongfan Jia, Qian Li, Chenglong Bao, Kaisheng Ma, Jun Zhu, Yi Zhong
Without access to the old training samples, knowledge transfer from the old tasks to each new task is difficult to determine, and might be either positive or negative.
1 code implementation • 17 Oct 2021 • Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Jun Zhu, Fangcheng Liu, Chao Zhang, Hongyang Zhang, Yichi Zhang, Shilong Liu, Chang Liu, Wenzhao Xiang, Yajie Wang, Huipeng Zhou, Haoran Lyu, Yidan Xu, Zixuan Xu, Taoyu Zhu, Wenjun Li, Xianfeng Gao, Guoqiu Wang, Huanqian Yan, Ying Guo, Chaoning Zhang, Zheng Fang, Yang Wang, Bingyang Fu, Yunfei Zheng, Yekui Wang, Haorong Luo, Zhen Yang
Many works have investigated the adversarial attacks or defenses under the settings where a bounded and imperceptible perturbation can be added to the input.
1 code implementation • 15 Oct 2021 • Yinpeng Dong, Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu, Jiayu Tang, Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Ye Liu, Qilong Zhang, Lianli Gao, Yunrui Yu, Xitong Gao, Zhe Zhao, Daquan Lin, Jiadong Lin, Chuanbiao Song, ZiHao Wang, Zhennan Wu, Yang Guo, Jiequan Cui, Xiaogang Xu, Pengguang Chen
Due to the vulnerability of deep neural networks (DNNs) to adversarial examples, a large number of defense techniques have been proposed to alleviate this problem in recent years.
no code implementations • 13 Oct 2021 • Xiao Yang, Yinpeng Dong, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu
The vulnerability of deep neural networks to adversarial examples has motivated an increasing number of defense strategies for promoting model robustness.
1 code implementation • 9 Oct 2021 • Shiyu Huang, Wenze Chen, Longfei Zhang, Shizhen Xu, Ziyang Li, Fengming Zhu, Deheng Ye, Ting Chen, Jun Zhu
To the best of our knowledge, Tikick is the first learning-based AI system that can take over the multi-agent Google Research Football full game, while previous work could either control a single agent or experiment on toy academic scenarios.
1 code implementation • 8 Oct 2021 • Shiyu Huang, Bin Wang, Dong Li, Jianye Hao, Ting Chen, Jun Zhu
In this work, we propose a new algorithm for circuit routing, named Ranking Cost, which innovatively combines search-based methods (i.e., the A* algorithm) and learning-based methods (i.e., Evolution Strategies) to form an efficient and trainable router.
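A minimal grid-router sketch of the search half of this idea: plain A* whose per-cell traversal cost is augmented by a cost map. In the full method the cost map would be tuned by Evolution Strategies so that sequentially routed nets avoid blocking one another; here it is just a given array, and the whole sketch is an illustrative reading rather than the authors' implementation.

```python
import heapq
import numpy as np

def astar_with_cost_map(grid, cost_map, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = blocked) where stepping into a
    cell costs 1 + cost_map[cell]; raising cost_map values steers paths away."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), start)]
    g_best = {start: 0.0}
    came_from = {}
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node == goal:  # walk parents back to the start
            path = [node]
            while path[-1] in came_from:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1] \
                    and grid[nxt] == 0:
                ng = g_best[node] + 1.0 + cost_map[nxt]
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    came_from[nxt] = node
                    heapq.heappush(open_heap, (ng + h(nxt), nxt))
    return None  # goal unreachable

grid, cost_map = np.zeros((8, 8)), np.zeros((8, 8))
cost_map[3, :6] = 10.0  # make a band of cells expensive; paths will detour
print(astar_with_cost_map(grid, cost_map, (0, 0), (7, 7)))
```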
no code implementations • ICML Workshop AML 2021 • Yichi Zhang, Zijian Zhu, Xiao Yang, Jun Zhu
To address this issue, we propose a novel method of Adversarial Semantic Contour (ASC) guided by object contour as prior.
no code implementations • 29 Sep 2021 • Zhijie Deng, Feng Zhou, Jianfei Chen, Guoqiang Wu, Jun Zhu
Deep Ensemble (DE) is a flexible, feasible, and effective alternative to Bayesian neural networks (BNNs) for uncertainty estimation in deep learning.
no code implementations • 29 Sep 2021 • Yichi Zhou, Shihong Song, Huishuai Zhang, Jun Zhu, Wei Chen, Tie-Yan Liu
In contextual bandit, one major challenge is to develop theoretically solid and empirically efficient algorithms for general function classes.
1 code implementation • ICML Workshop AML 2021 • Zhengyi Wang, Zhongkai Hao, Ziqiao Wang, Hang Su, Jun Zhu
In this work, we propose Cluster Attack -- a Graph Injection Attack (GIA) on node classification, which injects fake nodes into the original graph to degenerate the performance of graph neural networks (GNNs) on certain victim nodes while affecting the other nodes as little as possible.
1 code implementation • 29 Jul 2021 • Jiayi Weng, Huayu Chen, Dong Yan, Kaichao You, Alexis Duburcq, Minghao Zhang, Yi Su, Hang Su, Jun Zhu
In this paper, we present Tianshou, a highly modularized Python library for deep reinforcement learning (DRL) that uses PyTorch as its backend.
2 code implementations • 22 Jul 2021 • Shilong Liu, Lei Zhang, Xiao Yang, Hang Su, Jun Zhu
The use of Transformer is rooted in the need of extracting local discriminative features adaptively for different labels, which is a strongly desired property due to the existence of multiple objects in one image.
Ranked #1 on Multi-Label Classification on PASCAL VOC 2012
1 code implementation • NeurIPS 2021 • Shuyu Cheng, Guoqiang Wu, Jun Zhu
Finally, our theoretical results are confirmed by experiments on several numerical benchmarks as well as adversarial attacks.
1 code implementation • ICML Workshop AML 2021 • Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu
Transfer-based adversarial attacks can evaluate model robustness in the black-box setting.
no code implementations • 30 Jun 2021 • You Qiaoben, Chengyang Ying, Xinning Zhou, Hang Su, Jun Zhu, Bo Zhang
In this paper, we provide a framework to better understand the existing methods by reformulating the problem of adversarial attacks on reinforcement learning in the function space.
no code implementations • CVPR 2021 • Zihao Xiao, Xianfeng Gao, Chilin Fu, Yinpeng Dong, Wei Gao, Xiaolu Zhang, Jun Zhou, Jun Zhu
However, deep CNNs are vulnerable to adversarial patches, which are physically realizable and stealthy, raising new security concerns on the real-world applications of these models.
no code implementations • 29 Jun 2021 • Yichi Zhou, Shihong Song, Huishuai Zhang, Jun Zhu, Wei Chen, Tie-Yan Liu
However, it is in general unknown how to derive efficient and effective EE trade-off methods for non-linear complex tasks, such as contextual bandit with a deep neural network as the reward function.
no code implementations • CVPR 2021 • Zhenwei Miao, Jikai Chen, Hongyu Pan, Ruiwen Zhang, Kaixuan Liu, Peihan Hao, Jun Zhu, Yang Wang, Xin Zhan
Quantization-based methods are widely used in LiDAR point cloud 3D object detection for their efficiency in extracting context information.
no code implementations • ICML Workshop AML 2021 • You Qiaoben, Xinning Zhou, Chengyang Ying, Jun Zhu
Deep reinforcement learning (DRL) policies are vulnerable to the adversarial attack on their observations, which may mislead real-world RL agents to catastrophic failures.
no code implementations • ICML Workshop AML 2021 • Chengyang Ying, Xinning Zhou, Dong Yan, Jun Zhu
Though deep reinforcement learning (DRL) has obtained substantial success, it may encounter catastrophic failures due to the intrinsic uncertainty caused by stochastic policies and environment variability.
1 code implementation • NeurIPS 2021 • Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu
Collecting training data from untrusted sources exposes machine learning services to poisoning adversaries, who maliciously manipulate training data to degrade the model accuracy.
1 code implementation • NeurIPS 2021 • Ziyu Wang, Yuhao Zhou, Tongzheng Ren, Jun Zhu
Recent years have witnessed an upsurge of interest in employing flexible machine learning models for instrumental variable (IV) regression, but the development of uncertainty quantification methodology is still lacking.
no code implementations • 14 Jun 2021 • Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan YAO, Ao Zhang, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved great success and become a milestone in the field of artificial intelligence (AI).
no code implementations • 9 Jun 2021 • Tim Pearce, Alexandra Brintrup, Jun Zhu
It is often remarked that neural networks fail to increase their uncertainty when predicting on data far from the training distribution.
no code implementations • 9 Jun 2021 • Feng Zhou, Quyu Kong, Yixuan Zhang, Cheng Feng, Jun Zhu
Hawkes processes are a class of point processes that can model self- and mutual-exciting phenomena.
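For concreteness, here is a small simulation of a univariate Hawkes process via Ogata's thinning algorithm with an exponential kernel; this parameterization is a common textbook choice, not necessarily the one used in the paper.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Ogata thinning for intensity lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Self-excitation: each accepted event raises the intensity, producing clusters."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < horizon:
        # Intensity decays between events, so its current value dominates the future.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)       # candidate next event time
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:      # accept with prob lambda(t)/lam_bar
            events.append(t)
    return [ti for ti in events if ti < horizon]

print(len(simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=100.0)))
```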
1 code implementation • NeurIPS 2021 • Fan Bao, Guoqiang Wu, Chongxuan Li, Jun Zhu, Bo Zhang
Our results can explain some mysterious behaviours of the bilevel programming in practice, for instance, overfitting to the validation set.
1 code implementation • ICLR 2022 • Yinpeng Dong, Ke Xu, Xiao Yang, Tianyu Pang, Zhijie Deng, Hang Su, Jun Zhu
In this paper, we explore the memorization effect in adversarial training (AT) for promoting a deeper understanding of model capacity, convergence, generalization, and especially robust overfitting of the adversarially trained models.
no code implementations • 2 Jun 2021 • Yingtao Luo, Qiang Liu, Yuntian Chen, WenBo Hu, Jun Zhu
Especially, the discovery of PDEs with highly nonlinear coefficients from low-quality data remains largely under-addressed.
1 code implementation • CVPR 2022 • Tianyu Pang, Huishuai Zhang, Di He, Yinpeng Dong, Hang Su, Wei Chen, Jun Zhu, Tie-Yan Liu
Along with this routine, we find that confidence and a rectified confidence (R-Con) can form two coupled rejection metrics, which could provably distinguish wrongly classified inputs from correctly classified ones.
no code implementations • CVPR 2021 • Shilong Liu, Lei Zhang, Xiao Yang, Hang Su, Jun Zhu
We study the problem of unsupervised discovery and segmentation of object parts, which, as an intermediate local representation, are capable of finding intrinsic object structure and providing more explainable recognition results.
no code implementations • NeurIPS 2021 • Guoqiang Wu, Chongxuan Li, Kun Xu, Jun Zhu
Our results show that learning algorithms with the consistent univariate loss have an error bound of $O(c)$ ($c$ is the number of labels), while algorithms with the inconsistent pairwise loss depend on $O(\sqrt{c})$ as shown in prior work.
no code implementations • 9 May 2021 • Qi-An Fu, Yinpeng Dong, Hang Su, Jun Zhu
Deep learning models are vulnerable to adversarial examples, which can fool a target classifier by imposing imperceptible perturbations onto natural examples.
1 code implementation • ICLR 2021 • Tsung Wei Tsai, Chongxuan Li, Jun Zhu
We present Mixture of Contrastive Experts (MiCE), a unified probabilistic clustering framework that simultaneously exploits the discriminative representations learned by contrastive learning and the semantic structures captured by a latent mixture model.
Ranked #7 on Image Clustering on Imagenet-dog-15
no code implementations • 19 Apr 2021 • Liyuan Wang, Qian Li, Yi Zhong, Jun Zhu
Our solution is based on the observation that continual learning of a task sequence inevitably interferes with few-shot generalization, which makes it highly nontrivial to extend few-shot learning strategies to continual learning scenarios.
3 code implementations • 9 Apr 2021 • Tim Pearce, Jun Zhu
This paper describes an AI agent that plays the popular first-person-shooter (FPS) video game `Counter-Strike: Global Offensive' (CSGO) from pixel input.
no code implementations • 28 Mar 2021 • Peng Cui, Zhijie Deng, WenBo Hu, Jun Zhu
It is critical yet challenging for deep learning models to properly characterize uncertainty that is pervasive in real-world environments.
1 code implementation • CVPR 2021 • Zhijie Deng, Xiao Yang, Shizhen Xu, Hang Su, Jun Zhu
Despite their appealing flexibility, deep neural networks (DNNs) are vulnerable against adversarial examples.
no code implementations • ICCV 2021 • Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu
Although deep neural networks (DNNs) have made rapid progress in recent years, they are vulnerable in adversarial environments.
1 code implementation • ICLR 2021 • Cheng Lu, Jianfei Chen, Chongxuan Li, Qiuhao Wang, Jun Zhu
Through theoretical analysis, we show that the function space of ImpFlow is strictly richer than that of ResFlows.
no code implementations • 24 Feb 2021 • Qiang Liu, Zhaocheng Liu, Haoli Zhang, Yuntian Chen, Jun Zhu
Accordingly, we can design an automatic feature crossing method to find feature interactions in DNN, and use them as cross features in LR.
1 code implementation • 23 Feb 2021 • Xiao Li, Jianmin Li, Ting Dai, Jie Shi, Jun Zhu, Xiaolin Hu
A detection model based on the classification model EfficientNet-B7 achieved a top-1 accuracy of 53.95%, surpassing previous state-of-the-art classification models trained on ImageNet, suggesting that accurate localization information can significantly boost the performance of classification models on ImageNet-A.
no code implementations • 25 Jan 2021 • Jun Zhu, Ye Chen, Frank Brinker, Winfried Decking, Sergey Tomin, Holger Schlarb
We also show the scalability and interpretability of the model by sharing the same decoder with more than one encoder used for different setups of the photoinjector, and propose a pragmatic way to model a facility with various diagnostics and working points.
no code implementations • 11 Jan 2021 • Yuanyuan Ding, Junchi Yan, Guoqiang Hu, Jun Zhu
This paper discloses a novel visual inspection system for liquid crystal display (LCD), which is currently a dominant type in the FPD industry.
no code implementations • 5 Jan 2021 • Qijun Luo, Zhili Liu, Lanqing Hong, Chongxuan Li, Kuo Yang, Liyuan Wang, Fengwei Zhou, Guilin Li, Zhenguo Li, Jun Zhu
Semi-supervised domain adaptation (SSDA), which aims to learn models in a partially labeled target domain with the assistance of a fully labeled source domain, has attracted increasing attention in recent years.
no code implementations • CVPR 2021 • Liyuan Wang, Kuo Yang, Chongxuan Li, Lanqing Hong, Zhenguo Li, Jun Zhu
Continual learning usually assumes the incoming data are fully labeled, which might not be applicable in real applications.
no code implementations • 1 Jan 2021 • Guan Wang, Dong Yan, Hang Su, Jun Zhu
In this work, we point out that the optimal value of n actually differs across data points, while the fixed value of n is only a rough average of them.
no code implementations • 1 Jan 2021 • Shiyu Huang, Bin Wang, Dong Li, Jianye Hao, Jun Zhu, Ting Chen
In our method, we introduce a new set of variables called cost maps, which can help the A* router find proper paths to achieve the global objective.
no code implementations • 16 Dec 2020 • Qingyi Pan, WenBo Hu, Jun Zhu
Though deep learning methods have recently been developed to give superior forecasting results, it is crucial to improve the interpretability of time series models.
1 code implementation • 14 Dec 2020 • Qipeng Guo, Zhijing Jin, Ziyu Wang, Xipeng Qiu, Weinan Zhang, Jun Zhu, Zheng Zhang, David Wipf
Cycle-consistent training is widely used for jointly learning a forward and inverse mapping between two domains of interest without the cumbersome requirement of collecting matched pairs within each domain.
1 code implementation • NeurIPS 2020 • Zhijie Deng, Yinpeng Dong, Shifeng Zhang, Jun Zhu
In this work, we decouple the training of a network with stochastic architectures (NSA) from NAS and provide a first systematical investigation on it as a stand-alone problem.
1 code implementation • NeurIPS 2020 • Guoqiang Wu, Jun Zhu
On the other hand, when directly optimizing SA with its surrogate loss, it has learning guarantees that depend on $O(\sqrt{c})$ for both HL and SA measures.
1 code implementation • NeurIPS 2020 • Ziyu Wang, Bin Dai, David Wipf, Jun Zhu
The recent, counter-intuitive discovery that deep generative models (DGMs) can frequently assign a higher likelihood to outliers has implications for both outlier detection applications as well as our overall understanding of generative modeling.
1 code implementation • NeurIPS Workshop ICBINB 2020 • Fan Bao, Kun Xu, Chongxuan Li, Lanqing Hong, Jun Zhu, Bo Zhang
The learning and evaluation of energy-based latent variable models (EBLVMs) without any structural assumptions are highly challenging, because the true posteriors and the partition functions in such models are generally intractable.
1 code implementation • NeurIPS 2020 • Fan Bao, Chongxuan Li, Kun Xu, Hang Su, Jun Zhu, Bo Zhang
This paper presents a bi-level score matching (BiSM) method to learn EBLVMs with general structures by reformulating SM as a bi-level optimization problem.
1 code implementation • 5 Oct 2020 • Zhijie Deng, Jun Zhu
Despite their theoretical appeal, Bayesian neural networks (BNNs) are left behind in real-world adoption, mainly due to persistent concerns about their scalability, accessibility, and reliability.
2 code implementations • ICLR 2021 • Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu
Adversarial training (AT) is one of the most effective strategies for promoting model robustness.
no code implementations • 28 Sep 2020 • Zhijie Deng, Xiao Yang, Hao Zhang, Yinpeng Dong, Jun Zhu
Despite their theoretical appeal, Bayesian neural networks (BNNs) fall far behind normal NNs in terms of adoption in real-world applications, mainly due to their limited scalability in training and low fidelity in their uncertainty estimates.
no code implementations • 15 Sep 2020 • Chen Ma, Shuyu Cheng, Li Chen, Jun Zhu, Junhai Yong
In each iteration, SWITCH first tries to update the current sample along the direction of $\hat{\mathbf{g}}$, but considers switching to its opposite direction $-\hat{\mathbf{g}}$ if our algorithm detects that it does not increase the value of the attack objective function.
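A stripped-down sketch of that switching rule; the real attack also handles step-size schedules and projection onto the allowed perturbation set, which this toy version omits, and `loss_fn` stands in for the query-based attack objective on the target model.

```python
def switch_update(x, g_hat, loss_fn, step_size):
    """Move along the surrogate gradient g_hat, but flip to -g_hat if the
    trial step fails to increase the attack objective."""
    trial = x + step_size * g_hat
    if loss_fn(trial) > loss_fn(x):
        return trial
    return x + step_size * (-g_hat)  # switch to the opposite direction
```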
1 code implementation • ECCV 2020 • Haoyu Liang, Zhihao Ouyang, Yuyuan Zeng, Hang Su, Zihao He, Shu-Tao Xia, Jun Zhu, Bo Zhang
Most existing works attempt post-hoc interpretation on a pre-trained model, while neglecting to reduce the entanglement underlying the model.
2 code implementations • 8 Jul 2020 • Xiao Yang, Dingcheng Yang, Yinpeng Dong, Hang Su, Wenjian Yu, Jun Zhu
Based on large-scale evaluations, the commercial FR API services fail to exhibit acceptable performance on robustness evaluation, and we also draw several important conclusions for understanding the adversarial robustness of FR models and providing insights for the design of robust FR models.
1 code implementation • NeurIPS 2020 • Tianyu Pang, Kun Xu, Chongxuan Li, Yang song, Stefano Ermon, Jun Zhu
Several machine learning applications involve the optimization of higher-order derivatives (e.g., gradients of gradients) during training, which can be expensive with respect to memory and computation even with automatic differentiation.
no code implementations • ICLR 2021 • Feng Zhou, Yixuan Zhang, Jun Zhu
The Hawkes process provides an effective statistical framework for analyzing the time-dependent interaction of neuronal spiking activities.
no code implementations • NeurIPS 2020 • Peng Cui, Wen-Bo Hu, Jun Zhu
Accurate quantification of uncertainty is crucial for real-world applications of machine learning.
no code implementations • 14 Jun 2020 • Zhiheng Zhang, Wen-Bo Hu, Tian Tian, Jun Zhu
In this paper, we present the dynamic window-level Granger causality method (DWGC) for multi-channel time series data.
no code implementations • 5 Jun 2020 • Yujie Wu, Rong Zhao, Jun Zhu, Feng Chen, Mingkun Xu, Guoqi Li, Sen Song, Lei Deng, Guanrui Wang, Hao Zheng, Jing Pei, Youhui Zhang, Mingguo Zhao, Luping Shi
We demonstrate the advantages of this model in multiple different tasks, including few-shot learning, continual learning, and fault-tolerance learning in neuromorphic vision sensors.
1 code implementation • ICML 2020 • Yuhao Zhou, Jiaxin Shi, Jun Zhu
Estimating the score, i.e., the gradient of the log density function, from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models that involve flexible yet intractable densities.
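One classical member of the family this line of work unifies is the Stein gradient estimator, sketched below from Stein's identity; the RBF bandwidth and ridge term are illustrative hyperparameters, and the paper's estimators generalize this construction.

```python
import numpy as np

def stein_score_estimator(samples, bandwidth=1.0, reg=0.1):
    """Stein gradient estimator: Stein's identity gives K @ G + grad_K ~ 0
    in Monte Carlo form, so solve for the scores G with ridge regularization."""
    n, d = samples.shape
    diff = samples[:, None, :] - samples[None, :, :]      # (n, n, d)
    k = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * bandwidth ** 2))
    # sum over i of d k(x_i, x_j) / d x_i, for each j: shape (n, d)
    grad_k = np.einsum('ij,ijd->jd', k, -diff / bandwidth ** 2)
    return -np.linalg.solve(k + reg * np.eye(n), grad_k)  # (n, d) score estimates

# Example: for N(0, 1) samples the estimated scores should be roughly -x.
x = np.random.default_rng(0).normal(size=(200, 1))
print(np.corrcoef(stein_score_estimator(x)[:, 0], -x[:, 0])[0, 1])
```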
no code implementations • ICLR 2020 • Yichi Zhou, Tongzheng Ren, Jialian Li, Dong Yan, Jun Zhu
In this paper, we present Lazy-CFR, a CFR algorithm that adopts a lazy update strategy to avoid traversing the whole game tree in each round.
no code implementations • ICLR 2020 • Yichi Zhou, Jialian Li, Jun Zhu
Posterior sampling for reinforcement learning (PSRL) is a useful framework for making decisions in an unknown environment.
Multi-agent Reinforcement Learning • reinforcement-learning
no code implementations • ICLR 2020 • Yucen Luo, Alex Beatson, Mohammad Norouzi, Jun Zhu, David Duvenaud, Ryan P. Adams, Ricky T. Q. Chen
Standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest.
1 code implementation • ICCV 2021 • Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu, Yuefeng Chen, Hui Xue
As billions of pieces of personal data are shared through social media and networks, data privacy and security have drawn increasing attention.
no code implementations • 6 Mar 2020 • Liyuan Wang, Bo Lei, Qian Li, Hang Su, Jun Zhu, Yi Zhong
Continual acquisition of novel experience without interfering with previously learned knowledge, i.e., continual learning, is critical for artificial neural networks, but is limited by catastrophic forgetting.
1 code implementation • ICML 2020 • Jianfei Chen, Cheng Lu, Biqi Chenli, Jun Zhu, Tian Tian
Generative flows are promising tractable models for density modeling that define probabilistic distributions with invertible transformations.
Ranked #25 on Image Generation on CIFAR-10 (bits/dimension metric)
1 code implementation • NeurIPS 2020 • Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, Hang Su
Adversarial training (AT) is one of the most effective defenses against adversarial attacks for deep learning models.
1 code implementation • Approximate Inference AABI Symposium 2019 • Ziyu Wang, Shuyu Cheng, Yueru Li, Jun Zhu, Bo Zhang
Score matching provides an effective approach to learning flexible unnormalized models, but its scalability is limited by the need to evaluate a second-order derivative.
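A common way to sidestep that second-order derivative, used by the related sliced score matching approach (shown here only as background, not as this paper's estimator), is a Hutchinson-style random projection of the Jacobian trace:

```python
import torch

def sliced_sm_loss(score_net, x, n_slices=1):
    """Score matching loss with tr(Jacobian) estimated as E_v[v^T J v],
    so only Jacobian-vector products (first-order autograd) are needed."""
    x = x.detach().requires_grad_(True)
    s = score_net(x)                                  # (batch, d) score estimates
    loss = 0.5 * (s ** 2).sum(dim=-1).mean()
    for _ in range(n_slices):
        v = torch.randn_like(x)
        sv = (s * v).sum()                            # scalar for autograd
        grad_sv = torch.autograd.grad(sv, x, create_graph=True)[0]
        loss = loss + (grad_sv * v).sum(dim=-1).mean() / n_slices
    return loss
```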
1 code implementation • NeurIPS 2020 • Yinpeng Dong, Zhijie Deng, Tianyu Pang, Hang Su, Jun Zhu
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
no code implementations • 26 Jan 2020 • Kelei Cao, Mengchen Liu, Hang Su, Jing Wu, Jun Zhu, Shixia Liu
The key is to compare and analyze the datapaths of both the adversarial and normal examples.
no code implementations • ICLR 2020 • Zhaocheng Liu, Qiang Liu, Haoli Zhang, Jun Zhu
In recent years, substantial progress has been made on graph convolutional networks (GCN).
no code implementations • ICLR 2020 • Shiyu Huang, Hang Su, Jun Zhu, Ting Chen
Partially Observable Markov Decision Processes (POMDPs) are popular and flexible models for real-world decision-making applications that demand the information from past observations to make optimal decisions.
no code implementations • 26 Dec 2019 • Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu
Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning.
1 code implementation • 20 Dec 2019 • Chongxuan Li, Kun Xu, Jiashuo Liu, Jun Zhu, Bo Zhang
It is formulated as a three-player minimax game consisting of a generator, a classifier and a discriminator, and therefore is referred to as Triple Generative Adversarial Network (Triple-GAN).
1 code implementation • 5 Dec 2019 • Justin Cosentino, Federico Zaiter, Dan Pei, Jun Zhu
Recent work on deep neural network pruning has shown there exist sparse subnetworks that achieve equal or improved accuracy, training time, and loss using fewer network parameters when compared to their dense counterparts.
no code implementations • ECCV 2020 • Xiao Yang, Fangyun Wei, Hongyang Zhang, Jun Zhu
We consider universal adversarial patches for faces -- small visual elements whose addition to a face image reliably destroys the performance of face detectors.
1 code implementation • 22 Nov 2019 • Zhijie Deng, Yucen Luo, Jun Zhu, Bo Zhang
Bayesian neural networks (BNNs) augment deep networks with uncertainty quantification by Bayesian treatment of the network weights.
no code implementations • NeurIPS 2019 • Justin Cosentino, Jun Zhu
We propose Generative Well-intentioned Networks (GWINs), a novel framework for increasing the accuracy of certainty-based, closed-world classifiers.
1 code implementation • 29 Sep 2019 • Kun Xu, Chongxuan Li, Jun Zhu, Bo Zhang
There are existing efforts that model the training dynamics of GANs in the parameter space but the analysis cannot directly motivate practically effective stabilizing methods.
Ranked #32 on Image Generation on CIFAR-10 (Inception score metric)
1 code implementation • 25 Sep 2019 • Zhijie Deng, Yucen Luo, Jun Zhu, Bo Zhang
Bayesian neural networks (BNNs) introduce uncertainty estimation to deep networks by performing Bayesian inference on network weights.
no code implementations • 25 Sep 2019 • Haoyu Liang, Zhihao Ouyang, Hang Su, Yuyuan Zeng, Zihao He, Shu-Tao Xia, Jun Zhu, Bo Zhang
Convolutional neural networks (CNNs) have often been treated as “black-box” and successfully used in a range of tasks.
1 code implementation • ICLR 2020 • Tianyu Pang, Kun Xu, Jun Zhu
Our experiments on CIFAR-10 and CIFAR-100 demonstrate that MI can further improve the adversarial robustness for the models trained by mixup and its variants.
no code implementations • 20 Sep 2019 • Yucen Luo, Jun Zhu, Tomas Pfister
Recently deep neural networks have shown their capacity to memorize training data, even with noisy labels, which hurts generalization performance.
no code implementations • 18 Sep 2019 • Zheng Zhang, Ruiqing Yin, Jun Zhu, Pierre Zweigenbaum
Recent work in cross-lingual contextual word embedding learning cannot handle multi-sense words well.
no code implementations • 15 Sep 2019 • Zheyu Yang, Yujie Wu, Guanrui Wang, Yukuan Yang, Guoqi Li, Lei Deng, Jun Zhu, Luping Shi
To the best of our knowledge, DashNet is the first framework that can integrate and process ANNs and SNNs in a hybrid paradigm, which provides a novel solution to achieve both effectiveness and efficiency for high-speed object tracking.
2 code implementations • NeurIPS 2019 • Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu
We consider the black-box adversarial setting, where the adversary has to generate adversarial perturbations without access to the target models to compute gradients.
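The standard query-based baseline in this setting is random gradient-free (finite-difference) estimation, sketched below; the paper's contribution is to bias such estimates with a transfer-based prior, which this plain version omits.

```python
import torch

def rgf_gradient_estimate(loss_fn, x, n_queries=50, sigma=1e-3):
    """Estimate grad loss_fn(x) from queries only: probe random unit
    directions u and accumulate finite-difference slopes along them."""
    g = torch.zeros_like(x)
    base = loss_fn(x)
    for _ in range(n_queries):
        u = torch.randn_like(x)
        u = u / u.norm()
        g = g + (loss_fn(x + sigma * u) - base) / sigma * u
    return g / n_queries
```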
1 code implementation • NeurIPS 2019 • Kun Xu, Chongxuan Li, Jun Zhu, Bo Zhang
Deep generative models (DGMs) have shown promise in image generation.
2 code implementations • 27 May 2019 • Jiaxin Shi, Mohammad Emtiyaz Khan, Jun Zhu
Inference in Gaussian process (GP) models is computationally challenging for large data, and often difficult to approximate with a small number of inducing points.
2 code implementations • ICLR 2020 • Tianyu Pang, Kun Xu, Yinpeng Dong, Chao Du, Ning Chen, Jun Zhu
Previous work shows that adversarially robust generalization requires larger sample complexity, and the same dataset, e.g., CIFAR-10, which enables good standard accuracy may not suffice to train robust models.
no code implementations • 23 May 2019 • Tsung Wei Tsai, Chongxuan Li, Jun Zhu
We consider the learning from noisy labels (NL) problem which emerges in many real-world applications.
1 code implementation • 11 May 2019 • Fan Bao, Hang Su, Jun Zhu
Besides, our framework can be extended to semi-supervised boosting, where the boosted model learns a joint distribution of data and labels.
no code implementations • ICLR 2019 • Jialian Li, Hang Su, Jun Zhu
We can solve these tasks by first building models for other agents and then finding the optimal policy with these models.
no code implementations • ICLR 2019 • Yichi Zhou, Jun Zhu
We provide insights into the relationship between $A^*$ sampling and probability matching by analyzing a nontrivial special case in which the state space is partitioned into two subsets.
1 code implementation • CVPR 2019 • Yinpeng Dong, Hang Su, Baoyuan Wu, Zhifeng Li, Wei Liu, Tong Zhang, Jun Zhu
In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model.
1 code implementation • CVPR 2019 • Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu
In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models.
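The computational core of the translation-invariant trick can be written in a few lines: smooth the input gradient with a fixed kernel before the sign step, which approximates attacking an ensemble of shifted copies of the image. A uniform kernel is used below for simplicity (the paper also considers Gaussian kernels).

```python
import torch
import torch.nn.functional as F

def translation_invariant_grad(grad, kernel_size=15):
    """Depthwise-convolve the (batch, c, h, w) input gradient with a uniform
    kernel; the smoothed gradient then feeds a standard FGSM/MI-FGSM update."""
    c = grad.shape[1]
    kernel = torch.ones(c, 1, kernel_size, kernel_size, device=grad.device)
    kernel = kernel / kernel_size ** 2
    return F.conv2d(grad, kernel, padding=kernel_size // 2, groups=c)
```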
1 code implementation • ICCV 2019 • Zhijie Deng, Yucen Luo, Jun Zhu
Deep learning methods have shown promise in unsupervised domain adaptation, which aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
Ranked #3 on Domain Adaptation on SVNH-to-MNIST
1 code implementation • ICLR 2019 • Ziyu Wang, Tongzheng Ren, Jun Zhu, Bo Zhang
While Bayesian neural networks (BNNs) have drawn increasing attention, their posterior inference remains challenging, due to the high-dimensional and over-parameterized nature.
no code implementations • 25 Feb 2019 • Zhijie Deng, Yinpeng Dong, Jun Zhu
We present batch virtual adversarial training (BVAT), a novel regularization method for graph convolutional networks (GCNs).
1 code implementation • 1 Feb 2019 • Chang Liu, Jingwei Zhuo, Jun Zhu
It is known that the Langevin dynamics used in MCMC is the gradient flow of the KL divergence on the Wasserstein space, which helps convergence analysis and inspires recent particle-based variational inference methods (ParVIs).
no code implementations • 27 Jan 2019 • Haosheng Zou, Tongzheng Ren, Dong Yan, Hang Su, Jun Zhu
Reward shaping is one of the most effective methods to tackle the crucial yet challenging problem of credit assignment in Reinforcement Learning (RL).
6 code implementations • 25 Jan 2019 • Tianyu Pang, Kun Xu, Chao Du, Ning Chen, Jun Zhu
Though deep neural networks have achieved significant progress on various tasks, often enhanced by model ensemble, existing high-performance models can be vulnerable to adversarial attacks.
no code implementations • 25 Jan 2019 • Yinpeng Dong, Fan Bao, Hang Su, Jun Zhu
3) We propose to improve the consistency of neurons on adversarial example subset by an adversarial training algorithm with a consistent loss.
no code implementations • ICLR 2020 • Chongxuan Li, Chao Du, Kun Xu, Max Welling, Jun Zhu, Bo Zhang
We propose a black-box algorithm called {\it Adversarial Variational Inference and Learning} (AdVIL) to perform inference and learning on a general Markov random field (MRF).
no code implementations • NeurIPS 2018 • Jianfei Chen, Jun Zhu, Yee Whye Teh, Tong Zhang
However, sEM has a slower asymptotic convergence rate than batch EM, and requires a decreasing sequence of step sizes, which is difficult to tune.