Search Results for author: Peng Zhao

Found 40 papers, 1 paper with code

Learning with Feature and Distribution Evolvable Streams

no code implementations ICML 2020 Zhen-Yu Zhang, Peng Zhao, Yuan Jiang, Zhi-Hua Zhou

Besides the feature space evolving, it is noteworthy that the data distribution often changes in streaming data.

Factorized Fusion Shrinkage for Dynamic Relational Data

no code implementations 30 Sep 2022 Peng Zhao, Anirban Bhattacharya, Debdeep Pati, Bani K. Mallick

Comparing estimated latent factors involves both adjacent and long-term comparisons, with the time range of comparison considered as a variable.

Variational Inference

Structured Optimal Variational Inference for Dynamic Latent Space Models

no code implementations 29 Sep 2022 Peng Zhao, Anirban Bhattacharya, Debdeep Pati, Bani K. Mallick

We consider a latent space model for dynamic networks, where our objective is to estimate the pairwise inner products of the latent positions.

Variational Inference

Dynamic Regret of Online Markov Decision Processes

no code implementations 26 Aug 2022 Peng Zhao, Long-Fei Li, Zhi-Hua Zhou

For these three models, we propose novel online ensemble algorithms and establish their dynamic regret guarantees respectively, in which the results for episodic (loop-free) SSP are provably minimax optimal in terms of time horizon and certain non-stationarity measure.

Adapting to Online Label Shift with Provable Guarantees

no code implementations 5 Jul 2022 Yong Bai, Yu-Jie Zhang, Peng Zhao, Masashi Sugiyama, Zhi-Hua Zhou

In this paper, we formulate and investigate the problem of \emph{online label shift} (OLaS): the learner trains an initial model from the labeled offline data and then deploys it to an unlabeled online environment where the underlying label distribution changes over time but the label-conditional density does not.
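The OLaS setting sketched in this abstract can be stated compactly (notation ours, added for clarity): across rounds $t$, the marginal label distribution may drift while the label-conditional density stays fixed.

```latex
% Online label shift (OLaS): the label marginal p_t(y) may change
% from round to round, but the label-conditional density does not.
\[
  p_t(y) \;\neq\; p_{t+1}(y) \quad \text{(in general)},
  \qquad
  p_t(x \mid y) \;=\; p(x \mid y) \quad \text{for all } t .
\]
```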

FACM: Correct the Output of Deep Neural Network with Middle Layers Features against Adversarial Samples

no code implementations 2 Jun 2022 Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

In strong adversarial attacks against deep neural networks (DNNs), the output of a DNN is misclassified if and only if the last feature layer of the DNN is completely destroyed by adversarial samples; our studies found that the middle feature layers of the DNN can still extract effective features of the original normal category under these attacks.

Improving the Robustness and Generalization of Deep Neural Network with Confidence Threshold Reduction

no code implementations 2 Jun 2022 Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

The empirical and theoretical analysis demonstrates that the MDL loss improves the robustness and generalization of the model simultaneously for natural training.

Enhancing the Transferability of Adversarial Examples via a Few Queries

no code implementations 19 May 2022 Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

Due to the vulnerability of deep neural networks, the black-box attack has drawn great attention from the community.

Contrastive Multi-view Hyperbolic Hierarchical Clustering

no code implementations 5 May 2022 Fangfei Lin, Bing Bai, Kun Bai, Yazhou Ren, Peng Zhao, Zenglin Xu

Then, we embed the representations into a hyperbolic space and optimize the hyperbolic embeddings via a continuous relaxation of hierarchical clustering loss.

Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits

no code implementations 12 Feb 2022 Haipeng Luo, Mengxiao Zhang, Peng Zhao, Zhi-Hua Zhou

The CORRAL algorithm of Agarwal et al. (2017) and its variants (Foster et al., 2020a) achieve this goal with a regret overhead of order $\widetilde{O}(\sqrt{MT})$ where $M$ is the number of base algorithms and $T$ is the time horizon.

Adaptive Bandit Convex Optimization with Heterogeneous Curvature

no code implementations 12 Feb 2022 Haipeng Luo, Mengxiao Zhang, Peng Zhao

We consider the problem of adversarial bandit convex optimization, that is, online learning over a sequence of arbitrary convex loss functions with only one function evaluation for each of them.

online learning

No-Regret Learning in Time-Varying Zero-Sum Games

no code implementations 30 Jan 2022 Mengxiao Zhang, Peng Zhao, Haipeng Luo, Zhi-Hua Zhou

Learning from repeated play in a fixed two-player zero-sum game is a classic problem in game theory and online learning.

online learning

Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization

no code implementations 29 Dec 2021 Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou

We investigate online convex optimization in non-stationary environments and choose the \emph{dynamic regret} as the performance measure, defined as the difference between cumulative loss incurred by the online algorithm and that of any feasible comparator sequence.
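In symbols, the dynamic regret described above can be written as follows (notation ours; the paper's own symbols may differ slightly):

```latex
% Dynamic regret against an arbitrary comparator sequence u_1, ..., u_T:
% x_t is the learner's decision and f_t the convex loss at round t.
\[
  \mathrm{D\mbox{-}Regret}_T(u_1,\dots,u_T)
  \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t).
\]
```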

EMDS-7: Environmental Microorganism Image Dataset Seventh Version for Multiple Object Detection Evaluation

no code implementations 11 Oct 2021 Hechen Yang, Chen Li, Xin Zhao, Bencheng Cai, Jiawei Zhang, Pingli Ma, Peng Zhao, Ao Chen, Hongzan Sun, Yueyang Teng, Shouliang Qi, Tao Jiang, Marcin Grzegorzek

The Environmental Microorganism Image Dataset Seventh Version (EMDS-7) is a microscopic image dataset, including the original Environmental Microorganism images (EMs) and the corresponding object labeling files in ".XML" format.

Object Detection
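The exact schema of the EMDS-7 ".XML" labeling files is not given here, so the following is a hedged sketch that assumes a Pascal-VOC-style annotation layout; the tag names `object`, `name`, and `bndbox` are assumptions, not confirmed by the source.

```python
# Hedged sketch: parse a VOC-style object-labeling XML file into
# (class_name, bounding_box) pairs. Assumes the schema, purely for
# illustration of how such ".XML" label files are typically consumed.
import xml.etree.ElementTree as ET

def parse_voc_boxes(xml_text: str):
    """Return a list of (class_name, (xmin, ymin, xmax, ymax)) per object."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        coords = tuple(int(bb.findtext(tag))
                       for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, coords))
    return boxes

sample = """<annotation>
  <object><name>microorganism</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>110</xmax><ymax>120</ymax></bndbox>
  </object>
</annotation>"""
print(parse_voc_boxes(sample))  # [('microorganism', (10, 20, 110, 120))]
```

For a real annotation file, one would pass the file contents (or use `ET.parse`) instead of the inline sample string.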

A Comparison for Patch-level Classification of Deep Learning Methods on Transparent Environmental Microorganism Images: from Convolutional Neural Networks to Visual Transformers

no code implementations 22 Jun 2021 Hechen Yang, Chen Li, Jinghua Zhang, Peng Zhao, Ao Chen, Xin Zhao, Tao Jiang, Marcin Grzegorzek

We conclude that ViT performs the worst in classifying 8×8 pixel patches, but it outperforms most convolutional neural networks in classifying 224×224 pixel patches.

Optimal Rates of (Locally) Differentially Private Heavy-tailed Multi-Armed Bandits

no code implementations 4 Jun 2021 Youming Tao, Yulian Wu, Peng Zhao, Di Wang

Finally, we establish the lower bound to show that the instance-dependent regret of our improved algorithm is optimal.

Multi-Armed Bandits

A Comparison for Anti-noise Robustness of Deep Learning Classification Methods on a Tiny Object Image Dataset: from Convolutional Neural Network to Visual Transformer and Performer

no code implementations 3 Jun 2021 Ao Chen, Chen Li, HaoYuan Chen, Hechen Yang, Peng Zhao, Weiming Hu, Wanli Liu, Shuojia Zou, Marcin Grzegorzek

In this paper, we first briefly review the development of Convolutional Neural Network and Visual Transformer in deep learning, and introduce the sources and development of conventional noises and adversarial attacks.

Classification Image Classification

Pinpointing the Memory Behaviors of DNN Training

no code implementations 1 Apr 2021 Jiansong Li, Xiao Dong, Guangli Li, Peng Zhao, Xueying Wang, Xiaobing Chen, Xianzhi Yu, Yongxin Yang, Zihan Jiang, Wei Cao, Lei Liu, Xiaobing Feng

The training of deep neural networks (DNNs) is usually memory-hungry due to the limited device memory capacity of DNN accelerators.

Large Motion Video Super-Resolution with Dual Subnet and Multi-Stage Communicated Upsampling

no code implementations 22 Mar 2021 Hongying Liu, Peng Zhao, Zhubo Ruan, Fanhua Shang, Yuanyuan Liu

In this paper, we propose a novel deep neural network with Dual Subnet and Multi-stage Communicated Upsampling (DSMC) for super-resolution of videos with large motion.

Motion Compensation Motion Estimation +1

Modeling Multivariate Cyber Risks: Deep Learning Dating Extreme Value Theory

no code implementations 15 Mar 2021 Mingyue Zhang Wu, Jinzhu Luo, Xing Fang, Maochao Xu, Peng Zhao

The proposed model not only enjoys highly accurate point predictions via deep learning but also provides satisfactory high-quantile predictions via extreme value theory.

Non-stationary Linear Bandits Revisited

no code implementations 9 Mar 2021 Peng Zhao, Lijun Zhang

Existing studies develop various algorithms and show that they enjoy an $\widetilde{O}(T^{2/3}(1+P_T)^{1/3})$ dynamic regret, where $T$ is the time horizon and $P_T$ is the path-length that measures the fluctuation of the evolving unknown parameter.
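The path-length $P_T$ in the bound above is the standard non-stationarity measure; a common definition (the exact norm may vary by paper) is:

```latex
% Path-length of the evolving unknown parameter theta_t:
% total movement of the parameter over the horizon T.
\[
  P_T \;=\; \sum_{t=2}^{T} \bigl\lVert \theta_t - \theta_{t-1} \bigr\rVert_2 .
\]
```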

Non-stationary Online Learning with Memory and Non-stochastic Control

no code implementations 7 Feb 2021 Peng Zhao, Yu-Hu Yan, Yu-Xiang Wang, Zhi-Hua Zhou

We study the problem of Online Convex Optimization (OCO) with memory, which allows loss functions to depend on past decisions and thus captures temporal effects of learning problems.

online learning

Latent Dirichlet Allocation Model Training with Differential Privacy

no code implementations 9 Oct 2020 Fangyuan Zhao, Xuebin Ren, Shusen Yang, Qing Han, Peng Zhao, Xinyu Yang

To address the privacy issue in LDA, we systematically investigate the privacy protection of the main-stream LDA training algorithm based on Collapsed Gibbs Sampling (CGS) and propose several differentially private LDA algorithms for typical training scenarios.

Privacy Preserving

A Single Frame and Multi-Frame Joint Network for 360-degree Panorama Video Super-Resolution

2 code implementations 24 Aug 2020 Hongying Liu, Zhubo Ruan, Chaowei Fang, Peng Zhao, Fanhua Shang, Yuanyuan Liu, Lijun Wang

Spherical videos, also known as 360° (panorama) videos, can be viewed with various virtual reality devices such as computers and head-mounted displays.

Video Super-Resolution

Video Super Resolution Based on Deep Learning: A Comprehensive Survey

no code implementations 25 Jul 2020 Hongying Liu, Zhubo Ruan, Peng Zhao, Chao Dong, Fanhua Shang, Yuanyuan Liu, Linlin Yang, Radu Timofte

To the best of our knowledge, this work is the first systematic review of VSR tasks, and it is expected to contribute to the development of recent studies in this area and potentially deepen our understanding of VSR techniques based on deep learning.

Speech Recognition +1

Storage Fit Learning with Feature Evolvable Streams

no code implementations 22 Jul 2020 Bo-Jian Hou, Yu-Hu Yan, Peng Zhao, Zhi-Hua Zhou

Our framework is able to fit its behavior to different storage budgets when learning with feature evolvable streams with unlabeled data.

Dynamic Regret of Convex and Smooth Functions

no code implementations NeurIPS 2020 Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou

We investigate online convex optimization in non-stationary environments and choose the dynamic regret as the performance measure, defined as the difference between cumulative loss incurred by the online algorithm and that of any feasible comparator sequence.

Improved Analysis for Dynamic Regret of Strongly Convex and Smooth Functions

no code implementations 10 Jun 2020 Peng Zhao, Lijun Zhang

In this paper, we present an improved analysis for dynamic regret of strongly convex and smooth functions.

CDC: Classification Driven Compression for Bandwidth Efficient Edge-Cloud Collaborative Deep Learning

no code implementations 4 May 2020 Yuanrui Dong, Peng Zhao, Hanqiao Yu, Cong Zhao, Shusen Yang

The emerging edge-cloud collaborative Deep Learning (DL) paradigm aims at improving the performance of practical DL implementations in terms of cloud bandwidth consumption, response latency, and data privacy preservation.

Classification General Classification +1

Exploratory Machine Learning with Unknown Unknowns

no code implementations 5 Feb 2020 Yu-Jie Zhang, Peng Zhao, Zhi-Hua Zhou

In conventional supervised learning, a training dataset is given with ground-truth labels from a known label set, and the learned model will classify unseen instances to the known labels.

BIG-bench Machine Learning

Improving deep forest by confidence screening

no code implementations the 18th IEEE International Conference on Data Mining 2019 Ming Pang, Kai-Ming Ting, Peng Zhao, Zhi-Hua Zhou

Most studies about deep learning are based on neural network models, where many layers of parameterized nonlinear differentiable modules are trained by back propagation.

Representation Learning

An Unbiased Risk Estimator for Learning with Augmented Classes

no code implementations NeurIPS 2020 Yu-Jie Zhang, Peng Zhao, Zhi-Hua Zhou

This paper studies the problem of learning with augmented classes (LAC), where augmented classes unobserved in the training data might emerge in the testing phase.

Bandit Convex Optimization in Non-stationary Environments

no code implementations 29 Jul 2019 Peng Zhao, Guanghui Wang, Lijun Zhang, Zhi-Hua Zhou

In this paper, we investigate BCO in non-stationary environments and choose the \emph{dynamic regret} as the performance measure, which is defined as the difference between the cumulative loss incurred by the algorithm and that of any feasible comparator sequence.

Decision Making

High-Dimensional Linear Regression via Implicit Regularization

no code implementations 22 Mar 2019 Peng Zhao, Yun Yang, Qiao-Chu He

Many statistical estimators for high-dimensional linear regression are M-estimators, formed through minimizing a data-dependent square loss function plus a regularizer.
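The M-estimator form described in this abstract can be written generically (notation ours): a data-dependent square loss plus a regularizer $R$ with weight $\lambda$.

```latex
% Generic regularized M-estimator for high-dimensional linear regression:
% y is the response vector, X the design matrix, R a regularizer.
\[
  \hat{\beta} \;\in\; \arg\min_{\beta}\;
  \frac{1}{2n}\,\lVert y - X\beta \rVert_2^2 \;+\; \lambda\, R(\beta).
\]
```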

regression

Auto-tuning Neural Network Quantization Framework for Collaborative Inference Between the Cloud and Edge

no code implementations 16 Dec 2018 Guangli Li, Lei Liu, Xueying Wang, Xiao Dong, Peng Zhao, Xiaobing Feng

By analyzing the characteristics of layers in DNNs, an auto-tuning neural network quantization framework for collaborative inference is proposed.

Quantization

Handling Concept Drift via Model Reuse

no code implementations 8 Sep 2018 Peng Zhao, Le-Wen Cai, Zhi-Hua Zhou

In many real-world applications, data are often collected in the form of a stream, and thus the underlying distribution often changes, which is referred to as concept drift in the literature.

Distribution-Free One-Pass Learning

no code implementations 8 Jun 2017 Peng Zhao, Zhi-Hua Zhou

Moreover, as the whole data volume is unknown when constructing the model, it is desirable to scan each data item only once, with storage independent of the data volume.
