no code implementations • 28 Sep 2023 • Yingzhen Yang
A carefully designed variance operator is used to ensure that the bound for the test loss on unlabeled test data in the transductive setting bears a remarkable similarity to the classical LRC bound in the inductive setting.
no code implementations • 28 Aug 2023 • Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang
Recent advancements in instructing Large Language Models (LLMs) to utilize external tools and execute multi-step plans have significantly enhanced their ability to solve intricate tasks, ranging from mathematical problems to creative writing.
no code implementations • 20 Apr 2023 • Yingzhen Yang, Ping Li
It is proved that PPGD achieves a fast convergence rate of $\mathcal{O}(1/k^2)$ when the iteration number $k \ge k_0$ for a finite $k_0$ on a class of nonconvex and nonsmooth problems under mild assumptions, which is locally Nesterov's optimal convergence rate for first-order methods on smooth and convex objective functions with Lipschitz continuous gradients.
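For orientation, here is a minimal NumPy sketch of the generic accelerated (Nesterov-momentum) proximal gradient template, shown on $\ell_1$-regularized least squares where the proximal step is soft-thresholding; PPGD targets a harder nonconvex, nonsmooth class, so this illustrates only the iteration structure, not the proposed algorithm.

```python
# A minimal sketch of the generic accelerated proximal gradient iteration
# (FISTA-style), demonstrated on l1-regularized least squares; PPGD in the
# paper addresses a nonconvex, nonsmooth class, so this only shows the template.
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def accelerated_proximal_gradient(A, b, lam, num_iters=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(num_iters):
        grad = A.T @ (A @ y - b)           # gradient of 0.5 * ||A y - b||^2
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum step
        x, t = x_new, t_new
    return x
```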
no code implementations • 19 Jan 2023 • Utkarsh Nath, Yancheng Wang, Yingzhen Yang
In this paper, we propose Robust Neural Architecture Search by Cross-Layer Knowledge Distillation (RNAS-CL), a novel NAS algorithm that improves the robustness of NAS by learning from a robust teacher through cross-layer knowledge distillation.
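As an illustration of the cross-layer distillation ingredient, the following PyTorch sketch shows a single distillation term that aligns a student feature map with a chosen, frozen teacher layer; the fixed layer pairing and the `projector` module are assumptions made here for illustration, whereas RNAS-CL searches over which teacher layer each student layer should learn from.

```python
# Illustrative cross-layer distillation term: a student feature map is aligned with a
# chosen (frozen) teacher feature map through a projector and an MSE loss. The fixed
# layer pairing and the projector are assumptions; RNAS-CL searches the pairing itself.
import torch.nn.functional as F

def cross_layer_kd_loss(student_feat, teacher_feat, projector):
    """student_feat: (B, Cs, H, W); teacher_feat: (B, Ct, Ht, Wt); projector maps Cs -> Ct."""
    teacher_feat = teacher_feat.detach()                       # robust teacher is frozen
    student_feat = projector(student_feat)                     # match channel dimension
    if student_feat.shape[-2:] != teacher_feat.shape[-2:]:
        student_feat = F.interpolate(student_feat, size=teacher_feat.shape[-2:])
    return F.mse_loss(student_feat, teacher_feat)
```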
no code implementations • 22 Jun 2022 • Yingzhen Yang, Ping Li
Our results provide a theoretical guarantee on the correctness of noisy $\ell^{0}$-SSC in terms of SDP on noisy data for the first time, revealing the advantage of noisy $\ell^{0}$-SSC in requiring a much less restrictive condition on subspace affinity.
no code implementations • 1 Jun 2022 • Lixi Zhou, Arindam Jain, Zijie Wang, Amitabh Das, Yingzhen Yang, Jia Zou
Deep learning has become the most popular direction in machine learning and artificial intelligence.
1 code implementation • 27 May 2022 • Yancheng Wang, Yingzhen Yang
In this work, we propose a novel and robust method, Bayesian Robust Graph Contrastive Learning (BRGCL), which trains a GNN encoder to learn robust node representations.
1 code implementation • 4 Mar 2022 • Yancheng Wang, Ning Xu, Yingzhen Yang
The non-local attention module has been proven to be crucial for image restoration.
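For readers unfamiliar with the mechanism, the compact PyTorch sketch below implements a standard embedded-Gaussian non-local block that aggregates features from all spatial positions; the specific attention module proposed in the paper may differ in its details.

```python
# A compact sketch of a standard (embedded-Gaussian) non-local block with a residual
# connection; the paper's attention module may differ, so this is only illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    def __init__(self, channels, reduction=2):
        super().__init__()
        inner = channels // reduction
        self.theta = nn.Conv2d(channels, inner, kernel_size=1)
        self.phi = nn.Conv2d(channels, inner, kernel_size=1)
        self.g = nn.Conv2d(channels, inner, kernel_size=1)
        self.out = nn.Conv2d(inner, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, inner)
        k = self.phi(x).flatten(2)                     # (b, inner, hw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, hw, inner)
        attn = F.softmax(q @ k, dim=-1)                # affinities over all spatial positions
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection
```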
1 code implementation • 17 Feb 2022 • Kaize Ding, Yancheng Wang, Yingzhen Yang, Huan Liu
In general, the contrastive learning process in GCL is performed on top of the representations learned by a graph neural network (GNN) backbone, which transforms and propagates the node contextual information based on its local neighborhoods.
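A minimal sketch of this pattern, assuming an InfoNCE-style objective between two augmented views of the GNN node embeddings (the exact contrastive objective used on top of the backbone may differ):

```python
# Minimal sketch of an InfoNCE-style node-level contrastive loss between two augmented
# views of GNN embeddings; the exact objective used on top of the GNN backbone may differ.
import torch
import torch.nn.functional as F

def node_contrastive_loss(z1, z2, temperature=0.5):
    """z1, z2: (num_nodes, dim) embeddings of the same nodes under two graph augmentations."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                   # cross-view similarity of all node pairs
    labels = torch.arange(z1.size(0), device=z1.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```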
no code implementations • ICLR 2022 • Yingzhen Yang, Ping Li
Similarity-based clustering methods separate data into clusters according to the pairwise similarity between the data, and the pairwise similarity is crucial for their performance.
1 code implementation • 10 Jun 2020 • Utkarsh Nath, Shrinu Kushagra, Yingzhen Yang
In this paper, we introduce Adjoined Networks, or AN, a learning paradigm that trains both the original base network and the smaller compressed network together.
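A hedged sketch of what one joint training step could look like, assuming a shared objective of task losses for both networks plus a logit-matching term; this combined objective is an illustrative assumption, not the paper's exact Adjoined Networks formulation.

```python
# Hedged sketch of one joint training step for a base network and its smaller compressed
# counterpart: both receive the task loss, and the small network additionally matches the
# base network's logits. This combined objective is an assumption for illustration only.
import torch.nn.functional as F

def adjoined_step(base_net, small_net, x, y, optimizer, match_weight=1.0):
    logits_base = base_net(x)
    logits_small = small_net(x)
    loss = (F.cross_entropy(logits_base, y)
            + F.cross_entropy(logits_small, y)
            + match_weight * F.mse_loss(logits_small, logits_base.detach()))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```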
no code implementations • ICLR 2020 • Yingzhen Yang, Jiahui Yu, Nebojsa Jojic, Jun Huan, Thomas S. Huang
FSNet has the same architecture as that of the baseline CNN to be compressed, and each convolution layer of FSNet has the same number of filters from the FS as that of the baseline CNN in the forward process.
no code implementations • 3 Feb 2019 • Yingzhen Yang, Jiahui Yu, Xingjian Li, Jun Huan, Thomas S. Huang
In this paper, we investigate the role of Rademacher complexity in improving generalization of DNNs and propose a novel regularizer rooted in Local Rademacher Complexity (LRC).
no code implementations • 5 Jan 2018 • Yingzhen Yang, Jianchao Yang, Ning Xu, Wei Han
Due to the weight sharing scheme, the parameter size of the $3$D-FilterMap is much smaller than that of the filters to be learned in a conventional convolution layer when the $3$D-FilterMap generates the same number of filters.
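A hypothetical NumPy sketch of the weight-sharing idea, assuming filters are read out as overlapping sub-volumes of one shared 3D tensor; the offsets and strides here are illustrative, not the paper's extraction scheme.

```python
# Hypothetical sketch of the weight-sharing idea: convolutional filters are read out as
# overlapping sub-volumes of a single shared 3D tensor, so the number of stored parameters
# is far smaller than the number of generated filter weights. Offsets/strides are illustrative.
import numpy as np

def extract_filters(filter_map, num_filters, filter_shape, stride=1):
    """filter_map: (D, H, W) shared tensor; returns (num_filters, d, h, w) with wrap-around."""
    D, _, _ = filter_map.shape
    d, h, w = filter_shape
    filters = []
    for i in range(num_filters):
        offset = (i * stride) % D
        idx = np.arange(offset, offset + d) % D        # wrap around the depth axis
        filters.append(filter_map[idx][:, :h, :w])
    return np.stack(filters)

shared = np.random.randn(64, 3, 3)                      # 576 stored parameters
conv_filters = extract_filters(shared, num_filters=128, filter_shape=(16, 3, 3))
print(conv_filters.shape)                               # (128, 16, 3, 3): 18432 generated weights
```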
no code implementations • ICLR 2018 • Xiaojie Jin, Yingzhen Yang, Ning Xu, Jianchao Yang, Jiashi Feng, Shuicheng Yan
We present a new approach and a novel architecture, termed WSNet, for learning compact and efficient deep neural networks.
no code implementations • ICML 2018 • Xiaojie Jin, Yingzhen Yang, Ning Xu, Jianchao Yang, Nebojsa Jojic, Jiashi Feng, Shuicheng Yan
We present a new approach and a novel architecture, termed WSNet, for learning compact and efficient deep neural networks.
no code implementations • 5 Sep 2017 • Yingzhen Yang, Jiashi Feng, Nebojsa Jojic, Jianchao Yang, Thomas S. Huang
In this paper, we study the proximal gradient descent (PGD) method for the $\ell^{0}$ sparse approximation problem, as well as its accelerated optimization with randomized algorithms.
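As a concrete reference point, the proximal step for an $\ell^{0}$-penalized least-squares objective reduces to hard thresholding, so plain (unaccelerated, non-randomized) PGD can be sketched in a few lines of NumPy; the randomized acceleration studied in the paper is not reproduced here.

```python
# Minimal sketch of proximal gradient descent for l0-penalized least squares, where the
# proximal step is hard thresholding; the randomized acceleration from the paper is omitted.
import numpy as np

def hard_threshold(x, t):
    """Proximal operator of t * ||x||_0: keep x_i only if x_i^2 > 2 * t."""
    out = x.copy()
    out[x ** 2 <= 2.0 * t] = 0.0
    return out

def l0_proximal_gradient(A, b, lam, num_iters=200):
    L = np.linalg.norm(A, 2) ** 2            # step size 1 / L
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)
        x = hard_threshold(x - grad / L, lam / L)
    return x
```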
no code implementations • 5 Sep 2017 • Yingzhen Yang, Feng Liang, Nebojsa Jojic, Shuicheng Yan, Jiashi Feng, Thomas S. Huang
Through generalization analysis via Rademacher complexity, the generalization error bound for the kernel classifier learned from a hypothetical labeling is expressed as the sum of pairwise similarities between data from different classes, parameterized by the weights of the kernel classifier.
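A toy NumPy sketch of the key quantity in such a bound, the sum of kernel similarities over pairs of points assigned to different classes under a hypothetical labeling; the constants and the classifier-weight parameterization of the actual bound are omitted.

```python
# Toy sketch of the bound's key quantity: the sum of Gaussian-kernel similarities over pairs
# of points that a hypothetical labeling places in different classes (smaller is better).
# Constants and the classifier-weight parameterization of the actual bound are omitted.
import numpy as np

def cross_class_similarity(X, labels, bandwidth=1.0):
    """X: (n, d) data; labels: (n,) integer class assignments."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2.0 * bandwidth ** 2))
    different = labels[:, None] != labels[None, :]
    return K[different].sum() / 2.0          # count each unordered pair once
```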
no code implementations • CVPR 2016 • Zhangyang Wang, Ding Liu, Shiyu Chang, Qing Ling, Yingzhen Yang, Thomas S. Huang
In this paper, we design a Deep Dual-Domain ($\mathbf{D^3}$) based fast restoration model to remove artifacts of JPEG compressed images.
no code implementations • 6 Apr 2016 • Zhangyang Wang, Yingzhen Yang, Shiyu Chang, Qing Ling, Thomas S. Huang
We investigate the $\ell_\infty$-constrained representation, which demonstrates robustness to quantization errors, using tools from deep learning.
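For intuition, the $\ell_\infty$ constraint admits a trivial projection, clipping the coefficients to $[-c, c]$, so a bare projected-gradient sketch looks as follows; the learned deep architecture proposed in the paper is not reproduced here.

```python
# Illustrative only: the l_inf constraint has a trivial projection (clipping), so a bare
# projected-gradient solver for min ||A x - b||^2 s.t. ||x||_inf <= c fits in a few lines.
# The learned deep encoder proposed in the paper is not reproduced here.
import numpy as np

def linf_constrained_coding(A, b, c, num_iters=200):
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        x = x - (A.T @ (A @ x - b)) / L
        x = np.clip(x, -c, c)                # projection onto the l_inf ball of radius c
    return x
```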
no code implementations • CVPR 2016 • Zhangyang Wang, Shiyu Chang, Yingzhen Yang, Ding Liu, Thomas S. Huang
Visual recognition research often assumes a sufficient resolution of the region of interest (ROI).
no code implementations • 16 Jan 2016 • Zhangyang Wang, Ding Liu, Shiyu Chang, Qing Ling, Yingzhen Yang, Thomas S. Huang
In this paper, we design a Deep Dual-Domain ($\mathbf{D^3}$) based fast restoration model to remove artifacts of JPEG compressed images.
no code implementations • 28 Oct 2015 • Yingzhen Yang, Jiashi Feng, Jianchao Yang, Thomas S. Huang
Sparse subspace clustering methods, such as Sparse Subspace Clustering (SSC) \cite{ElhamifarV13} and $\ell^{1}$-graph \cite{YanW09, ChengYYFH10}, are effective in partitioning the data that lie in a union of subspaces.
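A rough sketch of the $\ell^{1}$-graph construction, assuming each point is sparsely coded over the remaining points with an off-the-shelf Lasso solver and the coefficient magnitudes form the affinity matrix; the $\ell^{0}$-based variants studied in the paper differ in the penalty and are not shown here.

```python
# Rough sketch of an l1-graph: each point is sparsely coded over the remaining points
# (via scikit-learn's Lasso for brevity) and coefficient magnitudes form the affinity
# matrix used for spectral clustering; the l0-based variants differ in the penalty.
import numpy as np
from sklearn.linear_model import Lasso

def l1_graph(X, alpha=0.1):
    """X: (n_samples, n_features). Returns a symmetric (n, n) affinity matrix."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        dictionary = np.delete(X, i, axis=0).T                  # columns are the other points
        coef = Lasso(alpha=alpha, fit_intercept=False,
                     max_iter=5000).fit(dictionary, X[i]).coef_
        W[i, np.arange(n) != i] = np.abs(coef)
    return (W + W.T) / 2.0                                      # symmetrize
```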
no code implementations • 22 Apr 2015 • Zhangyang Wang, Yingzhen Yang, Zhaowen Wang, Shiyu Chang, Wei Han, Jianchao Yang, Thomas S. Huang
Deep learning has been successfully applied to image super-resolution (SR).
no code implementations • 12 Mar 2015 • Zhangyang Wang, Yingzhen Yang, Jianchao Yang, Thomas S. Huang
We study the complementary behaviors of external and internal examples in image restoration, and are motivated to formulate a composite dictionary design framework.
no code implementations • 3 Mar 2015 • Zhangyang Wang, Yingzhen Yang, Zhaowen Wang, Shiyu Chang, Jianchao Yang, Thomas S. Huang
Single image super-resolution (SR) aims to estimate a high-resolution (HR) image from a low-resolution (LR) input.
no code implementations • NeurIPS 2014 • Yingzhen Yang, Feng Liang, Shuicheng Yan, Zhangyang Wang, Thomas S. Huang
Modeling the underlying data distribution by nonparametric kernel density estimation, we show that the generalization error bounds for both unsupervised nonparametric classifiers are sums of nonparametric pairwise similarity terms between the data points, for the purpose of clustering.
no code implementations • 2 Oct 2012 • Yingzhen Yang, Thomas S. Huang
Unsupervised classification methods learn a discriminative classifier from unlabeled data, which has been proven to be an effective way of simultaneously clustering the data and training a classifier.