no code implementations • ICML 2020 • Zhen-Yu Zhang, Peng Zhao, Yuan Jiang, Zhi-Hua Zhou
Besides the evolving feature space, it is noteworthy that the data distribution of streaming data often changes as well.
no code implementations • 7 Mar 2024 • Long-Fei Li, Peng Zhao, Zhi-Hua Zhou
We study reinforcement learning with linear function approximation, unknown transition, and adversarial losses in the bandit feedback setting.
1 code implementation • 3 Feb 2024 • Guangmo Tong, Peng Zhao, Mina Samizadeh
Considering a graph with unknown weights, can we find the shortest path for a pair of nodes if we know the minimal Steiner trees associated with some subset of nodes?
1 code implementation • 1 Feb 2024 • Xiang Zhang, Jingyang Huang, Huan Yan, Peng Zhao, Guohang Zhuang, Zhi Liu, Bin Liu
This uncertainty, resulting from noise and domains, leads to widely scattered and irregular data distributions in collected Wi-Fi sensing data.
no code implementations • 29 Dec 2023 • Jie Shen, Shusen Yang, Cong Zhao, Xuebin Ren, Peng Zhao, Yuqian Yang, Qing Han, Shuaijun Wu
Intelligent equipment fault diagnosis based on Federated Transfer Learning (FTL) attracts considerable attention from both academia and industry.
no code implementations • 21 Dec 2023 • Peng Zhao, Jiehua Zhang, Bowen Peng, Longguang Wang, YingMei Wei, Yu Liu, Li Liu
2) BNNs consistently exhibit better adversarial robustness under black-box attacks.
1 code implementation • 14 Dec 2023 • Guiqin Wang, Peng Zhao, Yanjiang Shi, Cong Zhao, Shusen Yang
Addressing this gap, our paper introduces an innovative knowledge distillation framework, with the generative model for training a lightweight student model.
1 code implementation • 27 Nov 2023 • Yutian Pang, Peng Zhao, Jueming Hu, Yongming Liu
This paper addresses aircraft delays, emphasizing their impact on safety and financial losses.
no code implementations • 16 Sep 2023 • Peng Zhao, Yan-Feng Xie, Lijun Zhang, Zhi-Hua Zhou
In this paper, we present efficient methods for optimizing dynamic regret and adaptive regret, which reduce the number of projections per round from $\mathcal{O}(\log T)$ to $1$.
no code implementations • ICCV 2023 • Guiqin Wang, Peng Zhao, Cong Zhao, Shusen Yang, Jie Cheng, Luziwei Leng, Jianxing Liao, Qinghai Guo
To address this problem, we propose a novel attention-based hierarchically-structured latent model to learn the temporal variations of feature semantics.
no code implementations • 14 Aug 2023 • Peng Zhao
A more realistic view is that planning ought to take into consideration partial observability beforehand and aim for a more flexible and robust solution.
no code implementations • NeurIPS 2023 • Yu-Hu Yan, Peng Zhao, Zhi-Hua Zhou
Our approach is based on a multi-layer online ensemble framework incorporating novel ingredients, including a carefully designed optimism for unifying diverse function types and cascaded corrections for algorithmic stability.
no code implementations • 13 May 2023 • Kiyeob Lee, Peng Zhao, Anirban Bhattacharya, Bani K. Mallick, Le Xie
Hosting capacity analysis (HCA) examines the amount of DERs that can be safely integrated into the grid and is a challenging task in full generality because there are many possible DER integrations to anticipate.
2 code implementations • 27 Mar 2023 • Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao
Although considerable efforts have been devoted to improving the transferability of adversarial examples generated by transfer-based adversarial attacks, our investigation found that the large deviation between the actual and steepest update directions of current transfer-based adversarial attacks is caused by the large update step length, which prevents the generated adversarial examples from converging well.
no code implementations • 17 Mar 2023 • Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao
In this paper, we first systematically investigated this issue and found that the enormous difference in attack success rates between the surrogate model and the victim model is caused by the existence of a special area (termed the fuzzy domain in our paper), in which adversarial examples are classified wrongly by the surrogate model but correctly by the victim model.
no code implementations • 5 Mar 2023 • Jing Wang, Peng Zhao, Zhi-Hua Zhou
We propose a refined analysis framework, which simplifies the derivation and importantly produces a simpler weight-based algorithm that is as efficient as window/restart-based algorithms while retaining the same regret as previous studies.
no code implementations • NeurIPS 2023 • Lijun Zhang, Peng Zhao, Zhen-Hua Zhuang, Tianbao Yang, Zhi-Hua Zhou
First, we formulate GDRO as a stochastic convex-concave saddle-point problem, and demonstrate that stochastic mirror descent (SMD), using $m$ samples in each iteration, achieves an $O(m (\log m)/\epsilon^2)$ sample complexity for finding an $\epsilon$-optimal solution, which matches the $\Omega(m/\epsilon^2)$ lower bound up to a logarithmic factor.
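The SMD scheme described above can be sketched as follows: a Euclidean gradient step for the decision variable and an entropic (multiplicative-weights) ascent step for the group weights on the simplex, drawing one stochastic sample per group each iteration. This is a minimal illustrative sketch under assumed geometries; the function names (`gdro_smd`, `risk_grad`, `risk_val`) and step sizes are hypothetical, not from the paper.

```python
import numpy as np

def gdro_smd(risk_grad, risk_val, d, m, T, eta_x=0.05, eta_q=0.05, seed=0):
    """Stochastic mirror descent for the saddle-point problem
    min_x max_{q in simplex} sum_i q_i R_i(x).

    risk_grad(i, x, rng): stochastic gradient of group i's risk at x
    risk_val(i, x, rng):  stochastic estimate of group i's risk at x
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    q = np.full(m, 1.0 / m)       # uniform weights on the m groups
    x_avg = np.zeros(d)
    for _ in range(T):
        # x-player: descend the q-weighted stochastic risk gradient
        g = sum(q[i] * risk_grad(i, x, rng) for i in range(m))
        x = x - eta_x * g
        # q-player: entropic ascent (multiplicative weights) on risks
        r = np.array([risk_val(i, x, rng) for i in range(m)])
        q = q * np.exp(eta_q * r)
        q = q / q.sum()           # re-project onto the simplex
        x_avg += x / T            # average iterate as the output
    return x_avg, q
```

For example, with two symmetric quadratic group risks $R_i(x) = \tfrac{1}{2}\lVert x - c_i \rVert^2$ and $c_1 = -c_2$, the averaged iterate stays at the balanced point between the groups.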
no code implementations • 9 Feb 2023 • Sijia Chen, Yu-Jie Zhang, Wei-Wei Tu, Peng Zhao, Lijun Zhang
Inspired by their work, we investigate the theoretical guarantees of optimistic online mirror descent (OMD) for the SEA model.
no code implementations • 30 Sep 2022 • Peng Zhao, Anirban Bhattacharya, Debdeep Pati, Bani K. Mallick
Comparing estimated latent factors involves both adjacent and long-term comparisons, with the time range of comparison considered as a variable.
no code implementations • 29 Sep 2022 • Peng Zhao, Anirban Bhattacharya, Debdeep Pati, Bani K. Mallick
We consider a latent space model for dynamic networks, where our objective is to estimate the pairwise inner products of the latent positions.
no code implementations • 26 Aug 2022 • Peng Zhao, Long-Fei Li, Zhi-Hua Zhou
For these three models, we propose novel online ensemble algorithms and establish their dynamic regret guarantees respectively, in which the results for episodic (loop-free) SSP are provably minimax optimal in terms of time horizon and certain non-stationarity measure.
no code implementations • 5 Jul 2022 • Yong Bai, Yu-Jie Zhang, Peng Zhao, Masashi Sugiyama, Zhi-Hua Zhou
In this paper, we formulate and investigate the problem of online label shift (OLaS): the learner trains an initial model from the labeled offline data and then deploys it to an unlabeled online environment where the underlying label distribution changes over time but the label-conditional density does not.
no code implementations • 2 Jun 2022 • Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao
The empirical and theoretical analysis demonstrates that the MDL loss improves the robustness and generalization of the model simultaneously for natural training.
no code implementations • 2 Jun 2022 • Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao
To enhance the robustness of the classifier, in our paper, a Feature Analysis and Conditional Matching prediction distribution (FACM) model is proposed to utilize the features of intermediate layers to correct the classification.
no code implementations • 19 May 2022 • Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao
Specifically, we propose a gradient aligned mechanism to ensure that the derivatives of the loss function with respect to the logit vector have the same weight coefficients between the surrogate and victim models.
no code implementations • 5 May 2022 • Fangfei Lin, Bing Bai, Kun Bai, Yazhou Ren, Peng Zhao, Zenglin Xu
Then, we embed the representations into a hyperbolic space and optimize the hyperbolic embeddings via a continuous relaxation of hierarchical clustering loss.
no code implementations • 12 Feb 2022 • Haipeng Luo, Mengxiao Zhang, Peng Zhao
We consider the problem of adversarial bandit convex optimization, that is, online learning over a sequence of arbitrary convex loss functions with only one function evaluation for each of them.
no code implementations • 12 Feb 2022 • Haipeng Luo, Mengxiao Zhang, Peng Zhao, Zhi-Hua Zhou
The CORRAL algorithm of Agarwal et al. (2017) and its variants (Foster et al., 2020a) achieve this goal with a regret overhead of order $\widetilde{O}(\sqrt{MT})$ where $M$ is the number of base algorithms and $T$ is the time horizon.
no code implementations • 30 Jan 2022 • Mengxiao Zhang, Peng Zhao, Haipeng Luo, Zhi-Hua Zhou
Learning from repeated play in a fixed two-player zero-sum game is a classic problem in game theory and online learning.
no code implementations • 29 Dec 2021 • Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou
Specifically, we introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities: the variation in gradients of loss functions, the cumulative loss of the comparator sequence, and the minimum of these two terms.
no code implementations • 14 Dec 2021 • Peng Zhao, Chen Li, Md Mamunur Rahaman, Hao Xu, Pingli Ma, Hechen Yang, Hongzan Sun, Tao Jiang, Ning Xu, Marcin Grzegorzek
Each type of EM contains 40 original and 40 GT images, in total 1680 EM images.
no code implementations • 11 Oct 2021 • Hechen Yang, Chen Li, Xin Zhao, Bencheng Cai, Jiawei Zhang, Pingli Ma, Peng Zhao, Ao Chen, Hongzan Sun, Yueyang Teng, Shouliang Qi, Tao Jiang, Marcin Grzegorzek
The Environmental Microorganism Image Dataset Seventh Version (EMDS-7) is a microscopic image dataset, including the original Environmental Microorganism (EM) images and the corresponding object labeling files in ".XML" format.
no code implementations • 16 Jul 2021 • Peng Zhao, Chen Li, Md Mamunur Rahaman, Hao Xu, Hechen Yang, Hongzan Sun, Tao Jiang, Marcin Grzegorzek
In recent years, deep learning has made brilliant achievements in Environmental Microorganism (EM) image classification.
no code implementations • 22 Jun 2021 • Hechen Yang, Chen Li, Jinghua Zhang, Peng Zhao, Ao Chen, Xin Zhao, Tao Jiang, Marcin Grzegorzek
We conclude that ViT performs the worst in classifying 8 * 8 pixel patches, but it outperforms most convolutional neural networks in classifying 224 * 224 pixel patches.
no code implementations • 4 Jun 2021 • Youming Tao, Yulian Wu, Peng Zhao, Di Wang
Finally, we establish the lower bound to show that the instance-dependent regret of our improved algorithm is optimal.
no code implementations • 3 Jun 2021 • Ao Chen, Chen Li, HaoYuan Chen, Hechen Yang, Peng Zhao, Weiming Hu, Wanli Liu, Shuojia Zou, Marcin Grzegorzek
In this paper, we first briefly review the development of Convolutional Neural Network and Visual Transformer in deep learning, and introduce the sources and development of conventional noises and adversarial attacks.
no code implementations • 1 Apr 2021 • Jiansong Li, Xiao Dong, Guangli Li, Peng Zhao, Xueying Wang, Xiaobing Chen, Xianzhi Yu, Yongxin Yang, Zihan Jiang, Wei Cao, Lei Liu, Xiaobing Feng
The training of deep neural networks (DNNs) is usually memory-hungry due to the limited device memory capacity of DNN accelerators.
no code implementations • 22 Mar 2021 • Hongying Liu, Peng Zhao, Zhubo Ruan, Fanhua Shang, Yuanyuan Liu
In this paper, we propose a novel deep neural network with Dual Subnet and Multi-stage Communicated Upsampling (DSMC) for super-resolution of videos with large motion.
no code implementations • 15 Mar 2021 • Mingyue Zhang Wu, Jinzhu Luo, Xing Fang, Maochao Xu, Peng Zhao
The proposed model not only enjoys highly accurate point predictions via deep learning but also provides satisfactory high-quantile predictions via extreme value theory.
no code implementations • 9 Mar 2021 • Peng Zhao, Lijun Zhang
Existing studies develop various algorithms and show that they enjoy an $\widetilde{O}(T^{2/3}(1+P_T)^{1/3})$ dynamic regret, where $T$ is the time horizon and $P_T$ is the path-length that measures the fluctuation of the evolving unknown parameter.
no code implementations • 7 Feb 2021 • Peng Zhao, Yu-Hu Yan, Yu-Xiang Wang, Zhi-Hua Zhou
We study the problem of Online Convex Optimization (OCO) with memory, which allows loss functions to depend on past decisions and thus captures temporal effects of learning problems.
no code implementations • 9 Oct 2020 • Fangyuan Zhao, Xuebin Ren, Shusen Yang, Qing Han, Peng Zhao, Xinyu Yang
To address the privacy issue in LDA, we systematically investigate the privacy protection of the main-stream LDA training algorithm based on Collapsed Gibbs Sampling (CGS) and propose several differentially private LDA algorithms for typical training scenarios.
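A generic building block for privatizing a sampling step such as the per-token topic draw in CGS is the exponential mechanism, which samples a candidate with probability proportional to the exponentiated, privacy-scaled utility. The sketch below illustrates that mechanism in isolation; the function name and parameters are hypothetical, and this is not claimed to be the paper's algorithm.

```python
import numpy as np

def exp_mechanism_sample(log_weights, epsilon, sensitivity, rng):
    """Exponential mechanism: sample index k with probability
    proportional to exp(epsilon * u_k / (2 * sensitivity)), where
    u_k is the (log-scale) utility of candidate k. In a CGS sweep,
    the utilities would be the log unnormalized topic probabilities."""
    scores = epsilon * np.asarray(log_weights, dtype=float) / (2.0 * sensitivity)
    scores -= scores.max()        # shift for numerical stability
    p = np.exp(scores)
    p /= p.sum()                  # normalize to a distribution
    return int(rng.choice(len(p), p=p))
```

With a large privacy budget the draw concentrates on the highest-utility candidate; with a small budget it approaches uniform sampling, which is the privacy/utility trade-off the mechanism encodes.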
2 code implementations • 24 Aug 2020 • Hongying Liu, Zhubo Ruan, Chaowei Fang, Peng Zhao, Fanhua Shang, Yuanyuan Liu, Lijun Wang
Spherical videos, also known as 360° (panorama) videos, can be viewed with various virtual reality devices such as computers and head-mounted displays.
no code implementations • 25 Jul 2020 • Hongying Liu, Zhubo Ruan, Peng Zhao, Chao Dong, Fanhua Shang, Yuanyuan Liu, Linlin Yang, Radu Timofte
To the best of our knowledge, this work is the first systematic review of VSR tasks, and it is expected to contribute to the development of recent studies in this area and potentially deepen our understanding of deep-learning-based VSR techniques.
no code implementations • 22 Jul 2020 • Bo-Jian Hou, Yu-Hu Yan, Peng Zhao, Zhi-Hua Zhou
Our framework is able to fit its behavior to different storage budgets when learning with feature evolvable streams with unlabeled data.
no code implementations • NeurIPS 2020 • Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou
We investigate online convex optimization in non-stationary environments and choose the dynamic regret as the performance measure, defined as the difference between cumulative loss incurred by the online algorithm and that of any feasible comparator sequence.
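In standard notation (symbols $x_t$, $u_t$, $f_t$ as commonly used in this line of work, assumed here rather than quoted from the paper), the dynamic regret just described is

```latex
\text{D-Regret}_T(u_1,\dots,u_T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t),
```

where $x_t$ is the decision of the online algorithm, $f_t$ is the convex loss at round $t$, and $u_1,\dots,u_T$ is any feasible comparator sequence.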
no code implementations • 4 Jul 2020 • Se Yoon Lee, Peng Zhao, Debdeep Pati, Bani K. Mallick
In this paper, we propose a robust sparse estimation method under diverse sparsity regimes, which has a tail-adaptive shrinkage property.
no code implementations • 10 Jun 2020 • Peng Zhao, Lijun Zhang
In this paper, we present an improved analysis for dynamic regret of strongly convex and smooth functions.
no code implementations • 4 May 2020 • Yuanrui Dong, Peng Zhao, Hanqiao Yu, Cong Zhao, Shusen Yang
The emerging edge-cloud collaborative Deep Learning (DL) paradigm aims at improving the performance of practical DL implementations in terms of cloud bandwidth consumption, response latency, and data privacy preservation.
no code implementations • 5 Feb 2020 • Yu-Jie Zhang, Peng Zhao, Zhi-Hua Zhou
In conventional supervised learning, a training dataset is given with ground-truth labels from a known label set, and the learned model will classify unseen instances to the known labels.
no code implementations • the 18th IEEE International Conference on Data Mining 2019 • Ming Pang, Kai-Ming Ting, Peng Zhao, Zhi-Hua Zhou
Most studies about deep learning are based on neural network models, where many layers of parameterized nonlinear differentiable modules are trained by back propagation.
no code implementations • NeurIPS 2020 • Yu-Jie Zhang, Peng Zhao, Zhi-Hua Zhou
This paper studies the problem of learning with augmented classes (LAC), where augmented classes unobserved in the training data might emerge in the testing phase.
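As a point of contrast only, a common confidence-thresholding baseline for flagging instances of unseen classes (not the LAC method proposed in the paper; the function name and threshold are hypothetical) can be sketched as:

```python
import numpy as np

def predict_with_rejection(probs, tau=0.7):
    """Predict the known class with the highest score, but return -1
    ("possible augmented class") when the top score falls below tau."""
    preds = []
    for p in np.asarray(probs, dtype=float):
        k = int(np.argmax(p))
        preds.append(k if p[k] >= tau else -1)
    return preds
```

For instance, `predict_with_rejection([[0.9, 0.1], [0.5, 0.5]])` assigns the first instance to class 0 and rejects the second as potentially belonging to an unseen class.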
no code implementations • 29 Jul 2019 • Peng Zhao, Guanghui Wang, Lijun Zhang, Zhi-Hua Zhou
In this paper, we investigate BCO in non-stationary environments and choose the dynamic regret as the performance measure, which is defined as the difference between the cumulative loss incurred by the algorithm and that of any feasible comparator sequence.
no code implementations • 22 Mar 2019 • Peng Zhao, Yun Yang, Qiao-Chu He
Many statistical estimators for high-dimensional linear regression are M-estimators, formed through minimizing a data-dependent square loss function plus a regularizer.
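Concretely, with notation assumed here for illustration (design matrix $X \in \mathbb{R}^{n \times d}$, response $y \in \mathbb{R}^{n}$, regularizer $\mathcal{R}$ with weight $\lambda$), such an M-estimator minimizes a data-dependent square loss plus a regularizer:

```latex
\hat{\theta} \;\in\; \operatorname*{arg\,min}_{\theta \in \mathbb{R}^{d}} \; \frac{1}{2n}\,\lVert y - X\theta \rVert_2^2 \;+\; \lambda\, \mathcal{R}(\theta).
```

Familiar special cases include the Lasso ($\mathcal{R}(\theta) = \lVert \theta \rVert_1$) and ridge regression ($\mathcal{R}(\theta) = \lVert \theta \rVert_2^2$).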
no code implementations • 17 Jan 2019 • Xueying Wang, Lei Liu, Guangli Li, Xiao Dong, Peng Zhao, Xiaobing Feng
Background subtraction is a significant component of computer vision systems.
no code implementations • 16 Dec 2018 • Guangli Li, Lei Liu, Xueying Wang, Xiao Dong, Peng Zhao, Xiaobing Feng
By analyzing the characteristics of layers in DNNs, an auto-tuning neural network quantization framework for collaborative inference is proposed.
no code implementations • 8 Sep 2018 • Peng Zhao, Le-Wen Cai, Zhi-Hua Zhou
In many real-world applications, data are often collected in the form of streams, and thus the distribution usually changes over time, which is referred to as concept drift in the literature.
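As a minimal illustration of such distribution change, the sketch below flags positions in a numeric stream where the mean of the most recent window deviates from the preceding window; the function name, window size, and threshold are illustrative choices, not the paper's method.

```python
def detect_mean_drift(stream, window=50, threshold=0.5):
    """Flag positions where the mean of the most recent `window` items
    deviates from the mean of the preceding `window` items by more
    than `threshold`; the buffer is trimmed after each alarm."""
    buf, alarms = [], []
    for i, x in enumerate(stream):
        buf.append(x)
        if len(buf) >= 2 * window:
            recent = sum(buf[-window:]) / window
            past = sum(buf[-2 * window:-window]) / window
            if abs(recent - past) > threshold:
                alarms.append(i)
                buf = buf[-window:]  # restart comparison after an alarm
    return alarms
```

On a stream whose mean jumps from 0 to 1 at position 100, the detector raises an alarm shortly after the change point, once enough post-change items enter the recent window.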
no code implementations • 8 Jun 2017 • Peng Zhao, Zhi-Hua Zhou
Moreover, as the whole data volume is unknown when constructing the model, it is desirable to scan each data item only once, with storage independent of the data volume.
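Welford's running-statistics update is a classic instance of this regime: each item is scanned exactly once, and the state is a few scalars regardless of how many items the stream yields. It is shown here as a generic illustration of one-pass, constant-storage computation, not as the paper's model.

```python
def streaming_mean_var(stream):
    """One-pass (Welford) estimate of count, mean, and sample variance
    using O(1) storage, independent of the stream's length."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in stream:
        n += 1
        delta = x - mean
        mean += delta / n              # running mean update
        m2 += delta * (x - mean)       # running sum of squared deviations
    var = m2 / (n - 1) if n > 1 else 0.0
    return n, mean, var
```

This formulation is also numerically stabler than accumulating raw sums of squares, which matters when the stream is long and the values are large.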