no code implementations • NeurIPS 2014 • Yingzhen Yang, Feng Liang, Shuicheng Yan, Zhangyang Wang, Thomas S. Huang
Modeling the underlying data distribution by nonparametric kernel density estimation, we show that the generalization error bounds for both unsupervised nonparametric classifiers are sums of nonparametric pairwise similarity terms between the data points, serving the purpose of clustering.
no code implementations • 18 Dec 2014 • Zhangyang Wang, Jianchao Yang, Hailin Jin, Eli Shechtman, Aseem Agarwala, Jonathan Brandt, Thomas S. Huang
We present a domain adaptation framework to address the domain mismatch between synthetic training and real-world testing data.
no code implementations • 3 Mar 2015 • Zhangyang Wang, Yingzhen Yang, Zhaowen Wang, Shiyu Chang, Jianchao Yang, Thomas S. Huang
Single image super-resolution (SR) aims to estimate a high-resolution (HR) image from a low-resolution (LR) input.
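A minimal sketch of why SR is ill-posed: many HR images map to the same LR observation under the degradation model. The box-blur-and-subsample operator below is a toy stand-in, not the specific degradation used in this paper.

```python
import numpy as np

def degrade(hr, scale=2):
    # Toy LR formation model: box-blur then subsample (average each
    # scale-by-scale block). Real SR degradations (bicubic kernels,
    # noise) differ, but the mapping is many-to-one either way, which
    # is what makes SR ill-posed.
    h, w = hr.shape
    hr = hr[:h - h % scale, :w - w % scale]
    h2, w2 = hr.shape
    return hr.reshape(h2 // scale, scale, w2 // scale, scale).mean(axis=(1, 3))

hr = np.arange(16.0).reshape(4, 4)
lr = degrade(hr)   # shape (2, 2); each LR pixel averages a 2x2 HR block
```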
no code implementations • 12 Mar 2015 • Zhangyang Wang, Yingzhen Yang, Jianchao Yang, Thomas S. Huang
We study the complementary behaviors of external and internal examples in image restoration, and are motivated to formulate a composite dictionary design framework.
no code implementations • 31 Mar 2015 • Zhangyang Wang, Jianchao Yang, Hailin Jin, Eli Shechtman, Aseem Agarwala, Jonathan Brandt, Thomas S. Huang
We address a challenging fine-grain classification problem: recognizing a font style from an image of text.
no code implementations • 22 Apr 2015 • Zhangyang Wang, Yingzhen Yang, Zhaowen Wang, Shiyu Chang, Wei Han, Jianchao Yang, Thomas S. Huang
Deep learning has been successfully applied to image super resolution (SR).
1 code implementation • 12 Jul 2015 • Zhangyang Wang, Jianchao Yang, Hailin Jin, Eli Shechtman, Aseem Agarwala, Jonathan Brandt, Thomas S. Huang
As font is one of the core design concepts, automatic font identification and similar font suggestion from an image or photo have been on the wish list of many designers.
Ranked #1 on Font Recognition on VFR-Wild
no code implementations • 1 Sep 2015 • Zhangyang Wang, Qing Ling, Thomas S. Huang
We study the $\ell_0$ sparse approximation problem with the tool of deep learning, by proposing Deep $\ell_0$ Encoders.
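For context, the classical iterative hard thresholding (IHT) baseline for $\ell_0$-constrained approximation is sketched below; unfolding-based models such as the Deep $\ell_0$ Encoder turn iterations of this flavor into feed-forward layers with trainable thresholding nonlinearities. This is a generic sketch, not the authors' trained network.

```python
import numpy as np

def hard_threshold(x, k):
    # Keep the k largest-magnitude entries; zero out the rest.
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def iht(A, b, k, n_iter=200):
    # Iterative hard thresholding for min ||Ax - b||^2 s.t. ||x||_0 <= k:
    # a gradient step followed by projection onto the k-sparse set.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + step * A.T @ (b - A @ x), k)
    return x

# Toy noiseless recovery of a 3-sparse signal from 50 measurements.
rng = np.random.default_rng(7)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[10, 42, 77]] = [1.0, -1.0, 0.5]
x_hat = iht(A, A @ x_true, k=3)
```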
no code implementations • 1 Sep 2015 • Zhangyang Wang, Shiyu Chang, Jiayu Zhou, Meng Wang, Thomas S. Huang
In this paper, we propose to emulate the sparse coding-based clustering pipeline in the context of deep learning, leading to a carefully crafted deep model benefiting from both.
no code implementations • 16 Jan 2016 • Zhangyang Wang, Ding Liu, Shiyu Chang, Qing Ling, Yingzhen Yang, Thomas S. Huang
In this paper, we design a Deep Dual-Domain ($\mathbf{D^3}$) based fast restoration model to remove artifacts of JPEG compressed images.
no code implementations • CVPR 2016 • Zhangyang Wang, Shiyu Chang, Yingzhen Yang, Ding Liu, Thomas S. Huang
Visual recognition research often assumes a sufficient resolution of the region of interest (ROI).
no code implementations • 16 Jan 2016 • Zhangyang Wang, Shiyu Chang, Florin Dolcos, Diane Beck, Ding Liu, Thomas S. Huang
Image aesthetics assessment has been challenging due to its subjective nature.
no code implementations • 6 Apr 2016 • Zhangyang Wang, Yingzhen Yang, Shiyu Chang, Qing Ling, Thomas S. Huang
We investigate the $\ell_\infty$-constrained representation which demonstrates robustness to quantization errors, utilizing the tool of deep learning.
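The robustness-to-quantization intuition can be shown in a few lines: projecting a code onto an $\ell_\infty$ ball is elementwise clipping, and a bounded code incurs at most half a quantization step of error per entry under uniform quantization. A generic sketch, not the paper's learned encoder.

```python
import numpy as np

def project_linf(x, c=1.0):
    # Projection onto the l-infinity ball {x : max|x_i| <= c} is just
    # elementwise clipping.
    return np.clip(x, -c, c)

def uniform_quantize(x, c=1.0, bits=4):
    # Uniform quantizer on [-c, c] with 2^bits levels.
    levels = 2 ** bits - 1
    step = 2.0 * c / levels
    return np.round((x + c) / step) * step - c

# With an l-infinity-bounded code, every entry's quantization error is
# at most half a quantization step -- the property motivating
# l-infinity-constrained representations.
rng = np.random.default_rng(8)
code = project_linf(2.0 * rng.standard_normal(1000))
err = np.abs(uniform_quantize(code) - code)
```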
no code implementations • CVPR 2016 • Zhangyang Wang, Ding Liu, Shiyu Chang, Qing Ling, Yingzhen Yang, Thomas S. Huang
In this paper, we design a Deep Dual-Domain (D3) based fast restoration model to remove artifacts of JPEG compressed images.
no code implementations • 4 Aug 2016 • Jiahui Yu, Yuning Jiang, Zhangyang Wang, Zhimin Cao, Thomas Huang
In present object detection systems, the deep convolutional neural networks (CNNs) are utilized to predict bounding boxes of object candidates, and have gained performance advantages over the traditional region proposal methods.
no code implementations • 14 Aug 2016 • Zhangyang Wang, Shiyu Chang, Qing Ling, Shuai Huang, Xia Hu, Honghui Shi, Thomas S. Huang
With the agreement of my coauthors, I Zhangyang Wang would like to withdraw the manuscript "Stacked Approximated Regression Machine: A Simple Deep Learning Approach".
no code implementations • 23 Aug 2016 • Zhangyang Wang, Thomas S. Huang
This paper emphasizes the significance to jointly exploit the problem structure and the parameter structure, in the context of deep modeling.
2 code implementations • 14 Jun 2017 • Ding Liu, Bihan Wen, Xianming Liu, Zhangyang Wang, Thomas S. Huang
Conventionally, image denoising and high-level vision tasks are handled separately in computer vision.
2 code implementations • 20 Jul 2017 • Boyi Li, Xiulian Peng, Zhangyang Wang, Jizheng Xu, Dan Feng
This paper proposes an image dehazing model built with a convolutional neural network (CNN), called All-in-One Dehazing Network (AOD-Net).
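AOD-Net is built around the standard atmospheric scattering model $I = Jt + A(1-t)$. The sketch below synthesizes haze with that model and inverts it with the *true* transmission, i.e., an oracle baseline; AOD-Net itself instead learns a single unified module $K(x)$ that absorbs both $t$ and $A$.

```python
import numpy as np

def hazy(J, t, A=0.9):
    # Atmospheric scattering model I = J*t + A*(1 - t): J is the clear
    # scene radiance, t the transmission map, A the atmospheric light.
    return J * t + A * (1.0 - t)

def dehaze_oracle(I, t, A=0.9, t0=0.1):
    # Inverting the model given the true t and A (an oracle, not
    # AOD-Net); t0 lower-bounds the transmission for stability.
    return (I - A) / np.maximum(t, t0) + A

J = np.array([0.2, 0.5, 0.8])  # clear pixel intensities
t = np.array([0.6, 0.4, 0.9])  # per-pixel transmission
restored = dehaze_oracle(hazy(J, t), t)
```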
no code implementations • 10 Sep 2017 • Bowen Cheng, Zhangyang Wang, Zhaobin Zhang, Zhu Li, Ding Liu, Jianchao Yang, Shuai Huang, Thomas S. Huang
Emotion recognition from facial expressions is tremendously useful, especially when coupled with smart devices and wireless multimedia applications.
no code implementations • 12 Sep 2017 • Boyi Li, Xiulian Peng, Zhangyang Wang, Jizheng Xu, Dan Feng
Furthermore, we build an End-to-End United Video Dehazing and Detection Network (EVDD-Net), which concatenates and jointly trains EVD-Net with a video object detection model.
1 code implementation • ICCV 2017 • Boyi Li, Xiulian Peng, Zhangyang Wang, Jizheng Xu, Dan Feng
This paper proposes an image dehazing model built with a convolutional neural network (CNN), called All-in-One Dehazing Network (AOD-Net).
Ranked #20 on Image Dehazing on SOTS Outdoor
no code implementations • ICCV 2017 • Ding Liu, Zhaowen Wang, Yuchen Fan, Xian-Ming Liu, Zhangyang Wang, Shiyu Chang, Thomas Huang
Second, we reduce the complexity of motion between neighboring frames using a spatial alignment network that is much more robust and efficient than competing alignment methods and can be jointly trained with the temporal adaptive network in an end-to-end manner.
no code implementations • 29 Nov 2017 • Aven Samareh, Yan Jin, Zhangyang Wang, Xiangyu Chang, Shuai Huang
We present our preliminary work to determine whether a patient's vocal acoustic, linguistic, and facial patterns could predict clinical ratings of depression severity, namely the Patient Health Questionnaire depression scale (PHQ-8).
1 code implementation • 12 Dec 2017 • Boyi Li, Wenqi Ren, Dengpan Fu, DaCheng Tao, Dan Feng, Wen-Jun Zeng, Zhangyang Wang
We present a comprehensive study and evaluation of existing single image dehazing algorithms, using a new large-scale benchmark consisting of both synthetic and real-world hazy images, called REalistic Single Image DEhazing (RESIDE).
no code implementations • 20 Dec 2017 • Ding Liu, Bowen Cheng, Zhangyang Wang, Haichao Zhang, Thomas S. Huang
Visual recognition under adverse conditions is a very important and challenging problem of high practical value, due to the ubiquitous existence of quality distortions during image acquisition, transmission, or storage.
1 code implementation • ICLR 2018 • Mengying Sun, Inci M. Baytas, Liang Zhan, Zhangyang Wang, Jiayu Zhou
Over the past decade, a wide spectrum of machine learning models has been developed to model neurodegenerative diseases, associating biomarkers, especially non-intrusive neuroimaging markers, with key clinical scores measuring the cognitive status of patients.
1 code implementation • 28 Feb 2018 • Dong Liu, Ke Sun, Zhangyang Wang, Runsheng Liu, Zheng-Jun Zha
We propose an interpretable deep structure namely Frank-Wolfe Network (F-W Net), whose architecture is inspired by unrolling and truncating the Frank-Wolfe algorithm for solving an $L_p$-norm constrained problem with $p\geq 1$.
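The iteration F-W Net unrolls is the classical Frank-Wolfe method. Below is a generic sketch for the $p = 1$ case (an $\ell_1$-ball constraint), where the linear minimization oracle returns a signed, scaled basis vector; it is the algorithm being unrolled, not the learned network itself.

```python
import numpy as np

def frank_wolfe_l1(A, b, radius, n_iter=1000):
    # Frank-Wolfe for min_x 0.5*||Ax - b||^2 s.t. ||x||_1 <= radius.
    x = np.zeros(A.shape[1])
    for t in range(n_iter):
        grad = A.T @ (A @ x - b)
        i = np.argmax(np.abs(grad))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(grad[i])  # LMO: vertex of the l1 ball
        gamma = 2.0 / (t + 2.0)            # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s  # stay inside the ball by convexity
    return x

# Toy problem whose solution sits at a vertex of the l1 ball.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 50)) / np.sqrt(30)
x_true = np.zeros(50)
x_true[7] = 0.8
x_hat = frank_wolfe_l1(A, A @ x_true, radius=0.8)
```

Every iterate is a convex combination of $\ell_1$-ball vertices, so feasibility holds by construction, which is what makes the scheme attractive to unroll for norm-constrained problems.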
no code implementations • 16 Apr 2018 • Hongyu Xu, Zhangyang Wang, Haichuan Yang, Ding Liu, Ji Liu
The thresholded feature has recently emerged as an extremely efficient, yet rough empirical approximation, of the time-consuming sparse coding inference process.
2 code implementations • 20 Jun 2018 • Mostafa Karimi, Di Wu, Zhangyang Wang, Yang Shen
Motivation: Drug discovery demands rapid quantification of compound-protein interaction (CPI).
Ranked #2 on Drug Discovery on BindingDB IC50
1 code implementation • 24 Jun 2018 • Junru Wu, Yue Wang, Zhen-Yu Wu, Zhangyang Wang, Ashok Veeraraghavan, Yingyan Lin
The current trend of pushing CNNs deeper with convolutions has created a pressing demand to achieve higher compression gains on CNNs where convolutions dominate the computation and parameter amount (e.g., GoogLeNet, ResNet and Wide ResNet).
1 code implementation • 30 Jun 2018 • Yu Liu, Guanlong Zhao, Boyuan Gong, Yang Li, Ritu Raj, Niraj Goel, Satya Kesav, Sandeep Gottimukkala, Zhangyang Wang, Wenqi Ren, DaCheng Tao
Here we explore two related but important tasks based on the recently released REalistic Single Image DEhazing (RESIDE) benchmark dataset: (i) single image dehazing as a low-level image restoration problem; and (ii) high-level visual understanding (e.g., object detection) of hazy images.
1 code implementation • ICML 2018 • Junru Wu, Yue Wang, Zhen-Yu Wu, Zhangyang Wang, Ashok Veeraraghavan, Yingyan Lin
The current trend of pushing CNNs deeper with convolutions has created a pressing demand to achieve higher compression gains on CNNs where convolutions dominate the computation and parameter amount (e.g., GoogLeNet, ResNet and Wide ResNet).
3 code implementations • ECCV 2018 • Zhen-Yu Wu, Zhangyang Wang, Zhaowen Wang, Hailin Jin
This paper aims to improve privacy-preserving visual recognition, an increasingly demanded feature in smart camera applications, by formulating a unique adversarial training framework.
1 code implementation • 29 Jul 2018 • Ramakrishna Prabhu, Xiaojing Yu, Zhangyang Wang, Ding Liu, Anxiao Jiang
This paper studies the challenging problem of fingerprint image denoising and inpainting.
3 code implementations • NeurIPS 2018 • Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin
In this work, we study unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery.
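The base iteration being unfolded is classical ISTA, sketched below in plain NumPy; LISTA-style networks replace the fixed matrices and threshold with learned, layer-wise parameters. This is the textbook algorithm, not the authors' trained model.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(A, b, lam, n_iter=2000):
    # Classical ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    # a gradient step on the smooth part, then soft thresholding.
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (b - A @ x) / L, lam / L)
    return x

# Toy sparse recovery: a 2-sparse signal from 40 noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 60]] = [1.5, -2.0]
x_hat = ista(A, A @ x_true, lam=0.01)
```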
2 code implementations • 29 Aug 2018 • Xiaofeng Zhang, Zhangyang Wang, Dong Liu, Qing Ling
While many techniques have been developed to combat overfitting given insufficient data, the challenge remains when training deep networks in the ill-posed, extremely low-data regime: only a small set of labeled data is available, and nothing else -- including unlabeled data.
1 code implementation • 6 Sep 2018 • Ding Liu, Bihan Wen, Jianbo Jiao, Xian-Ming Liu, Zhangyang Wang, Thomas S. Huang
Second, we propose a deep neural network solution that cascades two modules for image denoising and various high-level tasks, respectively, and use the joint loss for updating only the denoising network via back-propagation.
no code implementations • NIPS Workshop CDNNRIA 2018 • Yue Wang, Tan Nguyen, Yang Zhao, Zhangyang Wang, Yingyan Lin, Richard Baraniuk
The prohibitive energy cost of running high-performance Convolutional Neural Networks (CNNs) has been limiting their deployment on resource-constrained platforms including mobile and wearable devices.
1 code implementation • NeurIPS 2018 • Nitin Bansal, Xiaohan Chen, Zhangyang Wang
This paper seeks to answer the question: as the (near-) orthogonality of weights is found to be a favorable property for training deep convolutional neural networks, how can we enforce it in more effective and easy-to-use ways?
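One of the simple baseline regularizers this paper analyzes (before proposing stronger spectral variants) is the soft orthogonality penalty $\|W^\top W - I\|_F^2$, sketched below; the penalty vanishes exactly when the columns of $W$ are orthonormal.

```python
import numpy as np

def soft_orthogonality(W):
    # Soft orthogonality regularizer ||W^T W - I||_F^2: zero iff the
    # columns of W are orthonormal, added to the training loss as a
    # penalty term.
    k = W.shape[1]
    G = W.T @ W - np.eye(k)
    return np.sum(G ** 2)

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((8, 4)))  # orthonormal columns
W_bad = np.ones((8, 4))                           # rank-1, far from orthogonal
```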
no code implementations • 8 Jan 2019 • Randy Ardywibowo, Guang Zhao, Zhangyang Wang, Bobak Mortazavi, Shuai Huang, Xiaoning Qian
This power-efficient sensing scheme can be achieved by deciding which group of sensors to use at a given time, requiring an accurate characterization of the trade-off between sensor energy usage and the uncertainty in ignoring certain sensor signals while monitoring.
no code implementations • 28 Jan 2019 • Rosaura G. VidalMata, Sreya Banerjee, Brandon RichardWebster, Michael Albright, Pedro Davalos, Scott McCloskey, Ben Miller, Asong Tambo, Sushobhan Ghosh, Sudarshan Nagesh, Ye Yuan, Yueyu Hu, Junru Wu, Wenhan Yang, Xiaoshuai Zhang, Jiaying Liu, Zhangyang Wang, Hwann-Tzong Chen, Tzu-Wei Huang, Wen-Chi Chin, Yi-Chun Li, Mahmoud Lababidi, Charles Otto, Walter J. Scheirer
From the observed results, it is evident that we are in the early days of building a bridge between computational photography and visual recognition, leaving many opportunities for innovation in this area.
2 code implementations • NeurIPS 2019 • Shupeng Gui, Haotao Wang, Chen Yu, Haichuan Yang, Zhangyang Wang, Ji Liu
Deep model compression has been extensively studied, and state-of-the-art methods can now achieve high compression ratios with minimal accuracy loss.
no code implementations • 20 Mar 2019 • Peng Bao, Wenjun Xia, Kang Yang, Weiyan Chen, Mianyi Chen, Yan Xi, Shanzhou Niu, Jiliu Zhou, He Zhang, Huaiqiang Sun, Zhangyang Wang, Yi Zhang
Over the past few years, dictionary learning (DL)-based methods have been successfully used in various image reconstruction problems.
1 code implementation • CVPR 2019 • Siyuan Li, Iago Breno Araujo, Wenqi Ren, Zhangyang Wang, Eric K. Tokuda, Roberto Hirata Junior, Roberto Cesar-Junior, Jiawan Zhang, Xiaojie Guo, Xiaochun Cao
We present a comprehensive study and evaluation of existing single image deraining algorithms, using a new large-scale benchmark consisting of both synthetic and real-world rainy images. This dataset highlights diverse data sources and image contents, and is divided into three subsets (rain streak, rain drop, rain and mist), each serving different training or evaluation purposes.
no code implementations • 9 Apr 2019 • Ye Yuan, Wenhan Yang, Wenqi Ren, Jiaying Liu, Walter J. Scheirer, Zhangyang Wang
The UG$^{2+}$ challenge in IEEE CVPR 2019 aims to evoke a comprehensive discussion and exploration about how low-level vision techniques can benefit the high-level automatic visual recognition in various scenarios.
no code implementations • ICLR 2019 • Jialin Liu, Xiaohan Chen, Zhangyang Wang, Wotao Yin
In this work, we propose Analytic LISTA (ALISTA), where the weight matrix in LISTA is computed as the solution to a data-free optimization problem, leaving only the stepsize and threshold parameters to data-driven learning.
1 code implementation • ICCV 2019 • Shuai Yang, Zhangyang Wang, Zhaowen Wang, Ning Xu, Jiaying Liu, Zongming Guo
In this paper, we present the first text style transfer network that allows for real-time control of the crucial stylistic degree of the glyph through an adjustable parameter.
1 code implementation • 14 May 2019 • Ernest K. Ryu, Jialin Liu, Sicheng Wang, Xiaohan Chen, Zhangyang Wang, Wotao Yin
Plug-and-play (PnP) is a non-convex framework that integrates modern denoising priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms.
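The PnP-ADMM loop is short enough to sketch end to end. The moving-average filter below is only a toy stand-in for the denoiser slot that BM3D or a learned CNN would fill in real PnP; the problem instance (1-D denoising) is likewise hypothetical.

```python
import numpy as np

def box_denoiser(v, width=5):
    # Toy plug-in denoiser: moving-average smoothing. Real PnP plugs in
    # BM3D or a deep learning-based denoiser here.
    return np.convolve(v, np.ones(width) / width, mode="same")

def pnp_admm_denoise(y, rho=1.0, n_iter=10):
    # Plug-and-play ADMM for min_x 0.5*||x - y||^2 + prior(x), where the
    # prior's proximal step is replaced by an off-the-shelf denoiser.
    x = y.copy()
    z = y.copy()
    u = np.zeros_like(y)
    for _ in range(n_iter):
        x = (y + rho * (z - u)) / (1.0 + rho)  # data-fidelity prox
        z = box_denoiser(x + u)                # plugged-in denoiser step
        u = u + x - z                          # dual (running residual) update
    return z

rng = np.random.default_rng(4)
clean = np.repeat([0.0, 1.0, 0.0], 60)  # piecewise-constant 1-D signal
noisy = clean + 0.3 * rng.standard_normal(clean.size)
restored = pnp_admm_denoise(noisy)
```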
1 code implementation • CVPR 2019 • Wuyang Chen, Ziyu Jiang, Zhangyang Wang, Kexin Cui, Xiaoning Qian
In either way, the loss of local fine details or global contextual information results in limited segmentation accuracy.
Ranked #4 on Land Cover Classification on DeepGlobe
1 code implementation • 19 May 2019 • Sina Mohseni, Akshay Jagadeesh, Zhangyang Wang
While machine learning systems show high success rate in many complex tasks, research shows they can also fail in very unexpected situations.
2 code implementations • 22 May 2019 • Sicheng Wang, Bihan Wen, Junru Wu, DaCheng Tao, Zhangyang Wang
Several recent works discussed application-driven image restoration neural networks, which are capable of not only removing noise in images but also preserving their semantic-aware details, making them suitable for various high-level computer vision tasks as the pre-processing step.
1 code implementation • 30 May 2019 • Pritish Uplavikar, Zhen-Yu Wu, Zhangyang Wang
We train our model on a dataset consisting of images of 10 Jerlov water types.
1 code implementation • 1 Jun 2019 • Ziyu Jiang, Kate Von Ness, Julie Loisel, Zhangyang Wang
Arctic environments are rapidly changing under the warming climate.
5 code implementations • 12 Jun 2019 • Zhen-Yu Wu, Haotao Wang, Zhaowen Wang, Hailin Jin, Zhangyang Wang
We first discuss an innovative heuristic of cross-dataset training and evaluation, enabling the use of multiple single-task datasets (one with target task labels and the other with privacy labels) in our problem.
8 code implementations • 17 Jun 2019 • Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, Zhangyang Wang
Deep learning-based methods have achieved remarkable success in image restoration and enhancement, but are they still competitive when there is a lack of paired training data?
no code implementations • 10 Jul 2019 • Yue Wang, Jianghao Shen, Ting-Kuei Hu, Pengfei Xu, Tan Nguyen, Richard Baraniuk, Zhangyang Wang, Yingyan Lin
State-of-the-art convolutional neural networks (CNNs) yield record-breaking predictive performance, yet at the cost of high-energy-consumption inference, which prohibits their wide deployment in resource-constrained Internet of Things (IoT) applications.
no code implementations • 26 Jul 2019 • Sreya Banerjee, Rosaura G. VidalMata, Zhangyang Wang, Walter J. Scheirer
How can we effectively engineer a computer vision system that is able to interpret videos from unconstrained mobility platforms like UAVs?
5 code implementations • ICCV 2019 • Tianlong Chen, Shaojin Ding, Jingyi Xie, Ye Yuan, Wuyang Chen, Yang Yang, Zhou Ren, Zhangyang Wang
Attention mechanism has been shown to be effective for person re-identification (Re-ID).
Ranked #16 on Person Re-Identification on Market-1501-C
6 code implementations • ICCV 2019 • Orest Kupyn, Tetiana Martyniuk, Junru Wu, Zhangyang Wang
We present a new end-to-end generative adversarial network (GAN) for single image motion deblurring, named DeblurGAN-v2, which considerably boosts state-of-the-art deblurring efficiency, quality, and flexibility.
Ranked #3 on Blind Face Restoration on CelebA-Test
2 code implementations • ICCV 2019 • Xinyu Gong, Shiyu Chang, Yifan Jiang, Zhangyang Wang
Neural architecture search (NAS) has witnessed prevailing success in image classification and (very recently) segmentation tasks.
Ranked #16 on Image Generation on STL-10
2 code implementations • ICCV 2019 • Zhen-Yu Wu, Karthik Suresh, Priya Narayanan, Hongyu Xu, Heesung Kwon, Zhangyang Wang
Object detection from images captured by Unmanned Aerial Vehicles (UAVs) is becoming increasingly useful.
no code implementations • 25 Sep 2019 • Howard Heaton, Xiaohan Chen, Zhangyang Wang, Wotao Yin
Inferences by each network form solution estimates, and networks are trained to optimize these estimates for a particular distribution of data.
no code implementations • 25 Sep 2019 • Reza Oftadeh, Jiayi Shen, Zhangyang Wang, Dylan Shell
In this paper, we propose a new loss function for performing principal component analysis (PCA) using linear autoencoders (LAEs).
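The classical identifiability gap motivating a new loss can be verified numerically: a linear autoencoder trained with plain MSE recovers the principal *subspace* only up to an arbitrary invertible mixing $C$, so the individual, ordered principal directions are not identifiable from the standard loss alone. A small NumPy check of that fact (not the paper's proposed loss):

```python
import numpy as np

rng = np.random.default_rng(2)
# Centered data with two dominant directions.
X = rng.standard_normal((500, 10)) * np.array([5.0, 4.0] + [1.0] * 8)
X -= X.mean(axis=0)

# Projector onto the top-k principal subspace from the SVD.
k = 2
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P_pca = Vt[:k].T @ Vt[:k]

# For ANY invertible k-by-k C, encoder C @ V_k^T with decoder
# V_k @ inv(C) yields the same end-to-end projector, hence the same
# MSE reconstruction loss -- the rotation/mixing ambiguity of plain
# linear autoencoders.
C = rng.standard_normal((k, k))
P_ae = (Vt[:k].T @ np.linalg.inv(C)) @ (C @ Vt[:k])
```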
no code implementations • 25 Sep 2019 • Zhenyu Wu, Ye Yuan, Zhaowen Wang, Jianming Zhang, Zhangyang Wang, Hailin Jin
Generative adversarial networks (GANs) nowadays are capable of producing images of incredible realism.
2 code implementations • 26 Sep 2019 • Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Richard G. Baraniuk, Zhangyang Wang, Yingyan Lin
In this paper, we discover for the first time that winning tickets can be identified at a very early training stage, which we term early-bird (EB) tickets, via low-cost training schemes (e.g., early stopping and low-precision training) at large learning rates.
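Detecting when a ticket has "emerged" comes down to comparing pruning masks across epochs. The sketch below uses element-level magnitude pruning and a normalized Hamming distance; the paper operates on channel-level masks, so this is a simplified illustration.

```python
import numpy as np

def prune_mask(w, sparsity=0.5):
    # Binary mask that zeroes out the `sparsity` fraction of
    # smallest-magnitude weights (element-level here for brevity;
    # the paper prunes at channel granularity).
    k = int(w.size * sparsity)
    thresh = np.sort(np.abs(w).ravel())[k - 1]
    return (np.abs(w) > thresh).astype(int)

def mask_distance(m1, m2):
    # Normalized Hamming distance between two pruning masks; an EB
    # ticket is drawn once this distance between consecutive epochs
    # stops changing.
    return float(np.mean(m1 != m2))

rng = np.random.default_rng(6)
w_epoch1 = rng.standard_normal(1000)
w_epoch2 = w_epoch1 + 0.01 * rng.standard_normal(1000)  # weights barely moved
d = mask_distance(prune_mask(w_epoch1), prune_mask(w_epoch2))
```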
no code implementations • NeurIPS 2019 • Yue Wang, Ziyu Jiang, Xiaohan Chen, Pengfei Xu, Yang Zhao, Yingyan Lin, Zhangyang Wang
Extensive simulations and ablation studies, with real energy measurements from an FPGA board, confirm the superiority of our proposed strategies and demonstrate remarkable energy savings for training.
1 code implementation • NeurIPS 2019 • Yue Cao, Tianlong Chen, Zhangyang Wang, Yang Shen
Learning to optimize has emerged as a powerful framework for various optimization and machine learning tasks.
1 code implementation • 26 Nov 2019 • Ye Yuan, Wuyang Chen, Tianlong Chen, Yang Yang, Zhou Ren, Zhangyang Wang, Gang Hua
Many real-world applications, such as city-scale traffic monitoring and control, require large-scale re-identification.
no code implementations • 7 Dec 2019 • Junru Wu, Xiang Yu, Ding Liu, Manmohan Chandraker, Zhangyang Wang
To train and evaluate on more diverse blur severity levels, we propose a Challenging DVD dataset generated from the raw DVD video set by pooling frames with different temporal windows.
1 code implementation • 17 Dec 2019 • Ye Yuan, Wuyang Chen, Yang Yang, Zhangyang Wang
This work addresses the above two shortcomings of triplet loss, extending its effectiveness to large-scale ReID datasets with potentially noisy labels.
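For reference, the vanilla triplet loss being extended is $\max(0,\, d(a,p) - d(a,n) + m)$; the sketch below is this standard formulation, not the paper's large-scale, noise-robust variant.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    # Standard triplet loss: push the anchor-negative distance to exceed
    # the anchor-positive distance by at least `margin`.
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

a = np.array([0.0, 0.0])  # anchor embedding
p = np.array([0.1, 0.0])  # same identity (close)
n = np.array([2.0, 0.0])  # different identity (far)
loss = triplet_loss(a, p, n)  # 0.0: pair already separated by > margin
```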
no code implementations • 20 Dec 2019 • Sina Mohseni, Mandar Pitale, Vasu Singh, Zhangyang Wang
Autonomous vehicles rely on machine learning to solve challenging tasks in perception and motion planning.
2 code implementations • ICLR 2020 • Wuyang Chen, Xinyu Gong, Xian-Ming Liu, Qian Zhang, Yuan Li, Zhangyang Wang
We present FasterSeg, an automatically designed semantic segmentation network with not only state-of-the-art performance but also faster speed than current methods.
Ranked #1 on Semantic Segmentation on BDD
no code implementations • 29 Dec 2019 • Mostafa Karimi, Di Wu, Zhangyang Wang, Yang Shen
DeepRelations shows superior interpretability to the state of the art: without compromising affinity prediction, it boosts the AUPRC of contact prediction 9.5-, 16.9-, 19.3- and 5.7-fold for the test, compound-unique, protein-unique, and both-unique sets, respectively.
1 code implementation • 3 Jan 2020 • Jianghao Shen, Yonggan Fu, Yue Wang, Pengfei Xu, Zhangyang Wang, Yingyan Lin
The core idea of DFS is to hypothesize layer-wise quantization (to different bitwidths) as intermediate "soft" choices to be made between fully utilizing and skipping a layer.
1 code implementation • ECCV 2020 • Shuai Yang, Zhangyang Wang, Jiaying Liu, Zongming Guo
We present a sketch refinement strategy, as inspired by the coarse-to-fine drawing process of the artists, which we show can help our model well adapt to casual and varied sketches without the need for real sketch training data.
no code implementations • 6 Feb 2020 • Ting-Kuei Hu, Fernando Gama, Tianlong Chen, Zhangyang Wang, Alejandro Ribeiro, Brian M. Sadler
More specifically, we consider that each robot has access to a visual perception of the immediate surroundings, and communication capabilities to transmit and receive messages from other neighboring robots.
2 code implementations • ICLR 2020 • Ting-Kuei Hu, Tianlong Chen, Haotao Wang, Zhangyang Wang
Deep networks were recently suggested to face the odds between accuracy (on clean natural images) and robustness (on adversarially perturbed images) (Tsipras et al., 2019).
1 code implementation • ICLR 2020 • Haotao Wang, Tianlong Chen, Zhangyang Wang, Kede Ma
On the other hand, the trained classifiers have traditionally been evaluated on small and fixed sets of test images, which are deemed to be extremely sparsely distributed in the space of all natural images.
no code implementations • 3 Mar 2020 • Zepeng Huo, Arash Pakbin, Xiaohan Chen, Nathan Hurley, Ye Yuan, Xiaoning Qian, Zhangyang Wang, Shuai Huang, Bobak Mortazavi
Activity recognition in wearable computing faces two key challenges: i) activity characteristics may be context-dependent and change under different contexts or situations; ii) unknown contexts and activities may occur from time to time, requiring flexibility and adaptability of the algorithm.
no code implementations • 4 Mar 2020 • Howard Heaton, Xiaohan Chen, Zhangyang Wang, Wotao Yin
Our numerical examples show convergence of Safe-L2O algorithms, even when the provided data is not from the distribution of training data.
1 code implementation • CVPR 2020 • Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, Zhangyang Wang
We conduct extensive experiments to demonstrate that the proposed framework achieves large performance margins (e.g., 3.83% on robust accuracy and 1.3% on standard accuracy on the CIFAR-10 dataset), compared with the conventional end-to-end adversarial training baseline.
2 code implementations • CVPR 2020 • Yuning You, Tianlong Chen, Zhangyang Wang, Yang Shen
Graph convolution networks (GCN) are increasingly popular in many applications, yet remain notoriously hard to train over large graph datasets.
1 code implementation • ICLR 2020 • Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Richard G. Baraniuk, Zhangyang Wang, Yingyan Lin
Finally, we leverage the existence of EB tickets and the proposed mask distance to develop efficient training methods, which are achieved by first identifying EB tickets via low-cost schemes, and then continuing to train merely the EB tickets towards the target accuracy.
no code implementations • 7 May 2020 • Yang Zhao, Xiaohan Chen, Yue Wang, Chaojian Li, Haoran You, Yonggan Fu, Yuan Xie, Zhangyang Wang, Yingyan Lin
We present SmartExchange, an algorithm-hardware co-design framework to trade higher-cost memory storage/access for lower-cost computation, for energy-efficient inference of deep neural networks (DNNs).
3 code implementations • 7 May 2020 • Shaojin Ding, Tianlong Chen, Xinyu Gong, Weiwei Zha, Zhangyang Wang
Speaker recognition systems based on Convolutional Neural Networks (CNNs) are often built with off-the-shelf backbones such as VGG-Net or ResNet.
Ranked #6 on Speaker Identification on VoxCeleb1 (using extra training data)
1 code implementation • 22 May 2020 • Prateek Shroff, Tianlong Chen, Yunchao Wei, Zhangyang Wang
In this paper, we focus on these marginal differences to extract more representative features.
no code implementations • ICML 2020 • Randy Ardywibowo, Shahin Boluki, Xinyu Gong, Zhangyang Wang, Xiaoning Qian
Machine learning (ML) systems often encounter Out-of-Distribution (OoD) errors when dealing with testing data coming from a distribution different from training data.
3 code implementations • ICML 2020 • Yonggan Fu, Wuyang Chen, Haotao Wang, Haoran Li, Yingyan Lin, Zhangyang Wang
Inspired by the recent success of AutoML in deep compression, we introduce AutoML to GAN compression and develop an AutoGAN-Distiller (AGD) framework.
1 code implementation • ICML 2020 • Yuning You, Tianlong Chen, Zhangyang Wang, Yang Shen
We first elaborate three mechanisms to incorporate self-supervision into GCNs, analyze the limitations of pretraining & finetuning and self-training, and proceed to focus on multi-task learning.
1 code implementation • ICML 2020 • Xuxi Chen, Wuyang Chen, Tianlong Chen, Ye Yuan, Chen Gong, Kewei Chen, Zhangyang Wang
Many real-world applications have to tackle the Positive-Unlabeled (PU) learning problem, i.e., learning binary classifiers from a large amount of unlabeled data and a few labeled positive examples.
1 code implementation • 25 Jun 2020 • Yi Wang, Jingyang Zhou, Tianlong Chen, Sijia Liu, Shiyu Chang, Chandrajit Bajaj, Zhangyang Wang
Contrary to the traditional adversarial patch, this new form of attack is mapped into the 3D object world and back-propagates to the 2D image domain through differentiable rendering.
1 code implementation • ICML 2020 • Wuyang Chen, Zhiding Yu, Zhangyang Wang, Anima Anandkumar
Models trained on synthetic images often face degraded generalization to real data.
2 code implementations • NeurIPS 2020 • Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin
For a range of downstream tasks, we indeed find matching subnetworks at 40% to 90% sparsity.
no code implementations • 16 Aug 2020 • Xinyu Gong, Wuyang Chen, Yifan Jiang, Ye Yuan, Xian-Ming Liu, Qian Zhang, Yuan Li, Zhangyang Wang
Such simplification limits the fusion of information at different scales and fails to maintain high-resolution representations.
2 code implementations • ECCV 2020 • Haotao Wang, Shupeng Gui, Haichuan Yang, Ji Liu, Zhangyang Wang
Generative adversarial networks (GANs) have gained increasing popularity in various computer vision applications, and recently start to be deployed to resource-constrained mobile devices.
1 code implementation • 2 Oct 2020 • Zhenyu Wu, Duc Hoang, Shih-Yao Lin, Yusheng Xie, Liangjian Chen, Yen-Yu Lin, Zhangyang Wang, Wei Fan
Estimating the 3D hand pose from a monocular RGB image is important but challenging.
no code implementations • 6 Oct 2020 • Yuli Zheng, Zhenyu Wu, Ye Yuan, Tianlong Chen, Zhangyang Wang
While machine learning is increasingly used in this field, the resulting large-scale collection of user private information has reinvigorated the privacy debate, considering dozens of data breach incidents every year caused by unauthorized hackers, and (potentially even more) information misuse/abuse by authorized parties.
1 code implementation • NeurIPS 2020 • Tianlong Chen, Weiyi Zhang, Jingyang Zhou, Shiyu Chang, Sijia Liu, Lisa Amini, Zhangyang Wang
Learning to optimize (L2O) has gained increasing attention since classical optimizers require laborious problem-specific design and hyperparameter tuning.
4 code implementations • NeurIPS 2020 • Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen
In this paper, we propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
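The contrastive objective applied to the two augmented views of each graph is the NT-Xent loss, sketched below in NumPy on generic embedding vectors (the graph encoder and augmentations are omitted): matched views are pulled together while the rest of the batch serves as negatives.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    # NT-Xent contrastive loss over a batch: row i of z1 and row i of
    # z2 are two views of the same example (positives); every other
    # row in z2 acts as a negative for that anchor.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                                   # cosine / temperature
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_prob)))               # positives on the diagonal

rng = np.random.default_rng(5)
z = rng.standard_normal((8, 16))
aligned = nt_xent(z, z + 0.01 * rng.standard_normal((8, 16)))  # matched views
mismatched = nt_xent(z, np.roll(z, 1, axis=0))                 # shuffled views
```

A lower loss for the aligned pairing than for the shuffled one is exactly the signal the pre-training optimizes.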
1 code implementation • NeurIPS 2020 • Haotao Wang, Tianlong Chen, Shupeng Gui, Ting-Kuei Hu, Ji Liu, Zhangyang Wang
The trained model could be adjusted among different standard and robust accuracies "for free" at testing time.
1 code implementation • NeurIPS 2020 • Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin
Multiplication (e.g., convolution) is arguably a cornerstone of modern deep neural networks (DNNs).
1 code implementation • NeurIPS 2020 • Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness. In this work, we improve robustness-aware self-supervised pre-training by learning representations that are consistent under both data augmentations and adversarial perturbations.
1 code implementation • 9 Nov 2020 • Jake Lee, Junfeng Yang, Zhangyang Wang
We present the results of three experiments comparing representations of millions of images with exhaustively shifted objects, examining both local invariance (within a few pixels) and global invariance (across the image frame).
no code implementations • 28 Nov 2020 • Junru Wu, Xiang Yu, Buyu Liu, Zhangyang Wang, Manmohan Chandraker
Face anti-spoofing (FAS) seeks to discriminate genuine faces from fake ones arising from any type of spoofing attack.
1 code implementation • NeurIPS 2020 • Xiaohan Chen, Zhangyang Wang, Siyu Tang, Krikamol Muandet
Meta-learning improves generalization of machine learning models when faced with previously unseen tasks by leveraging experiences from different, yet related prior tasks.
1 code implementation • CVPR 2021 • Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Michael Carbin, Zhangyang Wang
We extend the scope of LTH and question whether matching subnetworks still exist in pre-trained computer vision models, that enjoy the same downstream transfer performance.
1 code implementation • NeurIPS 2020 • Yonggan Fu, Haoran You, Yang Zhao, Yue Wang, Chaojian Li, Kailash Gopalakrishnan, Zhangyang Wang, Yingyan Lin
Recent breakthroughs in deep neural networks (DNNs) have fueled a tremendous demand for intelligent edge devices featuring on-site learning, while the practical realization of such systems remains a challenge due to the limited resources available at the edge and the required massive training costs for state-of-the-art (SOTA) DNNs.
no code implementations • 29 Dec 2020 • Jianghao Shen, Sicheng Wang, Zhangyang Wang
For example, our model with only 1 layer of 15 trees can perform comparably with the model in [3] with 2 layers of 2000 trees each.
1 code implementation • ACL 2021 • Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Zhangyang Wang, Jingjing Liu
Heavily overparameterized language models such as BERT, XLNet and T5 have achieved impressive success in many NLP tasks.
no code implementations • 1 Jan 2021 • Haotao Wang, Tianlong Chen, Zhangyang Wang, Kede Ma
Image segmentation lays the foundation for many high-stakes vision applications such as autonomous driving and medical image analysis.
no code implementations • 1 Jan 2021 • Junru Wu, Xiyang Dai, Dongdong Chen, Yinpeng Chen, Mengchen Liu, Ye Yu, Zhangyang Wang, Zicheng Liu, Mei Chen, Lu Yuan
Rather than expecting a single strong predictor to model the whole space, we seek a progressive line of weak predictors that can connect a path to the best architecture, thus greatly simplifying the learning task of each predictor.
no code implementations • 1 Jan 2021 • Randy Ardywibowo, Shahin Boluki, Zhangyang Wang, Bobak J Mortazavi, Shuai Huang, Xiaoning Qian
In many machine learning tasks, input features with varying degrees of predictive capability are usually acquired at some cost.
no code implementations • ICLR 2021 • Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, Zhangyang Wang
A recent study (Rice et al., 2020) revealed overfitting to be a dominant phenomenon in adversarially robust training of deep networks, and that appropriate early-stopping of adversarial training (AT) could match the performance gains of most recent algorithmic improvements.
no code implementations • 1 Jan 2021 • Yue Cao, Tianlong Chen, Zhangyang Wang, Yang shen
Optimizing an objective function with uncertainty awareness is well-known to improve the accuracy and confidence of optimization solutions.
no code implementations • ICLR 2021 • Jiayi Shen, Haotao Wang, Shupeng Gui, Jianchao Tan, Zhangyang Wang, Ji Liu
Recommendation systems (RS) play an important role in content recommendation and retrieval scenarios.
no code implementations • 1 Jan 2021 • Junyuan Hong, Zhangyang Wang, Jiayu Zhou
In this paper, we provide comprehensive analysis of noise influence in dynamic privacy schedules to answer these critical questions.
no code implementations • 1 Jan 2021 • Tianlong Chen, Yu Cheng, Zhe Gan, Yu Hu, Zhangyang Wang, Jingjing Liu
Adversarial training is an effective method to combat adversarial attacks in order to create robust neural networks.
no code implementations • ICLR 2021 • Jiayi Shen, Xiaohan Chen, Howard Heaton, Tianlong Chen, Jialin Liu, Wotao Yin, Zhangyang Wang
We first present Twin L2O, the first dedicated minimax L2O framework consisting of two LSTMs for updating min and max variables, respectively.
no code implementations • ICLR 2021 • Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, Zhangyang Wang
In view of those, we introduce two pruning options, i.e., top-down and bottom-up, for finding lifelong tickets.
1 code implementation • 4 Jan 2021 • Xiaohan Chen, Yang Zhao, Yue Wang, Pengfei Xu, Haoran You, Chaojian Li, Yonggan Fu, Yingyan Lin, Zhangyang Wang
Results show that: 1) applied to inference, SD achieves up to 2.44x energy efficiency as evaluated via real hardware implementations; 2) applied to training, SD leads to 10.56x and 4.48x reduction in the storage and training energy, with negligible accuracy loss compared to state-of-the-art training baselines.
1 code implementation • 8 Jan 2021 • Ajay Kumar Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, Zhangyang Wang
In this paper, we demonstrate that it is unnecessary for sparse retraining to strictly inherit those properties from the dense network.
no code implementations • 19 Jan 2021 • Junyuan Hong, Zhangyang Wang, Jiayu Zhou
In this paper, we provide comprehensive analysis of noise influence in dynamic privacy schedules to answer these critical questions.
2 code implementations • 12 Feb 2021 • Tianlong Chen, Yongduo Sui, Xuxi Chen, Aston Zhang, Zhangyang Wang
With graphs rapidly growing in size and deeper graph neural networks (GNNs) emerging, the training and inference of GNNs become increasingly expensive.
10 code implementations • NeurIPS 2021 • Yifan Jiang, Shiyu Chang, Zhangyang Wang
Our vanilla GAN architecture, dubbed TransGAN, consists of a memory-friendly transformer-based generator that progressively increases feature resolution, and correspondingly a multi-scale discriminator to simultaneously capture semantic contexts and low-level textures.
Ranked #8 on Image Generation on STL-10
1 code implementation • NeurIPS 2021 • Junru Wu, Xiyang Dai, Dongdong Chen, Yinpeng Chen, Mengchen Liu, Ye Yu, Zhangyang Wang, Zicheng Liu, Mei Chen, Lu Yuan
We propose a paradigm shift from fitting the whole architecture space using one strong predictor, to progressively fitting a search path towards the high-performance sub-space through a set of weaker predictors.
1 code implementation • 22 Feb 2021 • Xinyu Gong, Wuyang Chen, Tianlong Chen, Zhangyang Wang
We present Sandwich Batch Normalization (SaBN), a frustratingly easy improvement of Batch Normalization (BN) with only a few lines of code changes.
Ranked #20 on Neural Architecture Search on NAS-Bench-201, CIFAR-100
4 code implementations • ICLR 2021 • Wuyang Chen, Xinyu Gong, Zhangyang Wang
Can we select the best neural architectures without involving any training and eliminate a drastic portion of the search cost?
1 code implementation • 27 Feb 2021 • Jiebin Yan, Yu Zhong, Yuming Fang, Zhangyang Wang, Kede Ma
A natural question then arises: Does the superior performance on the closed (and frequently re-used) test sets transfer to the open visual world with unconstrained variations?
1 code implementation • NeurIPS 2021 • Tianlong Chen, Yu Cheng, Zhe Gan, Jingjing Liu, Zhangyang Wang
Training generative adversarial networks (GANs) with limited real image data generally results in deteriorated performance and collapsed models.
1 code implementation • 22 Mar 2021 • Tianlong Chen, Yu Cheng, Zhe Gan, JianFeng Wang, Lijuan Wang, Zhangyang Wang, Jingjing Liu
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
1 code implementation • 23 Mar 2021 • Xingqian Xu, Zhangyang Wang, Humphrey Shi
In this work, we propose UltraSR, a simple yet effective new network design based on implicit image functions, in which we deeply integrate spatial coordinates and periodic encoding with the implicit neural representation.
1 code implementation • 23 Mar 2021 • Tianlong Chen, Xiaohan Chen, Wuyang Chen, Howard Heaton, Jialin Liu, Zhangyang Wang, Wotao Yin
It automates the design of an optimization method based on its performance on a set of training problems.
1 code implementation • NeurIPS 2021 • Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Jingjing Liu, Zhangyang Wang
Based on these results, we articulate the Elastic Lottery Ticket Hypothesis (E-LTH): by mindfully replicating (or dropping) and re-ordering layers for one network, its corresponding winning ticket could be stretched (or squeezed) into a subnetwork for another deeper (or shallower) network from the same family, whose performance is nearly as competitive as the latter's winning ticket directly found by IMP.
2 code implementations • ICLR 2021 • Wuyang Chen, Zhiding Yu, Shalini De Mello, Sifei Liu, Jose M. Alvarez, Zhangyang Wang, Anima Anandkumar
Training on synthetic data can be beneficial for label or data-scarce scenarios.
no code implementations • 7 Apr 2021 • Tingyi Wanyan, Jing Zhang, Ying Ding, Ariful Azad, Zhangyang Wang, Benjamin S Glicksberg
Electronic Health Record (EHR) data has been of tremendous utility in Artificial Intelligence (AI) for healthcare such as predicting future clinical events.
no code implementations • ICLR 2021 • Tianjian Meng, Xiaohan Chen, Yifan Jiang, Zhangyang Wang
Unrolling is believed to incorporate the model-based prior with the learning capacity of deep learning.
no code implementations • 11 Apr 2021 • Yan Han, Chongyan Chen, Ahmed Tewfik, Benjamin Glicksberg, Ying Ding, Yifan Peng, Zhangyang Wang
The key knob of our framework is a unique positive sampling approach tailored for medical images, seamlessly integrating radiomic features as a knowledge augmentation.
1 code implementation • 16 Apr 2021 • Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu, Zhiqiang Shen, Zhangyang Wang
However, the BN layer is costly to calculate and is typically implemented with non-binary parameters, leaving a hurdle for the efficient implementation of BNN training.
Ranked #167 on Image Classification on CIFAR-10
1 code implementation • 22 Apr 2021 • Yonggan Fu, Zhongzhi Yu, Yongan Zhang, Yifan Jiang, Chaojian Li, Yongyuan Liang, Mingchao Jiang, Zhangyang Wang, Yingyan Lin
The promise of Deep Neural Network (DNN) powered Internet of Things (IoT) devices has motivated a tremendous demand for automated solutions to enable fast development and deployment of efficient (1) DNNs equipped with instantaneous accuracy-efficiency trade-off capability to accommodate the time-varying resources at IoT devices and (2) dataflows to optimize DNNs' execution efficiency on different devices.
1 code implementation • 22 Apr 2021 • Arman Maesumi, Mingkang Zhu, Yi Wang, Tianlong Chen, Zhangyang Wang, Chandrajit Bajaj
This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes.
1 code implementation • 13 May 2021 • Aaditya Singh, Shreeshail Hingane, Xinyu Gong, Zhangyang Wang
We demonstrate that plugging SAFIN into the base network of another state-of-the-art method results in enhanced stylization.
no code implementations • CVPR 2021 • Zhihua Wang, Haotao Wang, Tianlong Chen, Zhangyang Wang, Kede Ma
Recently, the group maximum differentiation competition (gMAD) has been used to improve blind image quality assessment (BIQA) models, with the help of full-reference metrics.
1 code implementation • ICLR 2021 • Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Chenyu You, Xiaohui Xie, Zhangyang Wang
Knowledge Distillation (KD) is a widely used technique to transfer knowledge from pre-trained teacher models to (usually more lightweight) student models.
1 code implementation • NeurIPS 2021 • Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang
Contrastive learning approaches have achieved great success in learning visual representations with few labels of the target classes.
1 code implementation • 6 Jun 2021 • Zhenyu Zhang, Xuxi Chen, Tianlong Chen, Zhangyang Wang
We observe that a high-quality winning ticket can be found with training and pruning the dense network on the very compact PrAC set, which can substantially save training iterations for the ticket finding process.
1 code implementation • 6 Jun 2021 • Ziyu Jiang, Tianlong Chen, Bobak Mortazavi, Zhangyang Wang
Hence, the key innovation in SDCLR is to create a dynamic self-competitor model to contrast with the target model, which is a pruned version of the latter.
1 code implementation • NeurIPS 2021 • Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
For example, our sparsified DeiT-Small at (5%, 50%) sparsity for (data, architecture) improves top-1 accuracy by 0.28%, while enjoying 49.32% FLOPs and 4.40% running time savings.
Ranked #20 on Efficient ViTs on ImageNet-1K (with DeiT-T)
no code implementations • 9 Jun 2021 • Sina Mohseni, Haotao Wang, Zhiding Yu, Chaowei Xiao, Zhangyang Wang, Jay Yadawa
The open-world deployment of Machine Learning (ML) algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities such as interpretability, verifiability, and performance limitations.
1 code implementation • 10 Jun 2021 • Mingkang Zhu, Tianlong Chen, Zhangyang Wang
Compared to state-of-the-art methods, our homotopy attack leads to significantly fewer perturbations, e.g., reducing 42.91% on CIFAR-10 and 75.03% on ImageNet (average case, targeted attack), at similar maximal perturbation magnitudes, while still achieving 100% attack success rates.
2 code implementations • 10 Jun 2021 • Yuning You, Tianlong Chen, Yang shen, Zhangyang Wang
Unfortunately, unlike its counterpart on image data, the effectiveness of GraphCL hinges on ad-hoc data augmentations, which have to be manually picked per dataset, by either rules of thumb or trial-and-errors, owing to the diverse nature of graph data.
1 code implementation • 18 Jun 2021 • Junyuan Hong, Haotao Wang, Zhangyang Wang, Jiayu Zhou
In this paper, we study a novel FL strategy: propagating adversarial robustness from rich-resource users that can afford AT, to those with poor resources that cannot afford it, during federated learning.
2 code implementations • NeurIPS 2021 • Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
Works on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have recently drawn much attention to post-training pruning (iterative magnitude pruning) and before-training pruning (pruning at initialization).
Ranked #3 on Sparse Learning on ImageNet
no code implementations • NeurIPS 2021 • Bowen Pan, Rameswar Panda, Yifan Jiang, Zhangyang Wang, Rogerio Feris, Aude Oliva
The self-attention-based model, transformer, is recently becoming the leading backbone in the field of computer vision.
Ranked #29 on Efficient ViTs on ImageNet-1K (with DeiT-S)
1 code implementation • 24 Jun 2021 • Ting-Kuei Hu, Fernando Gama, Tianlong Chen, Wenqing Zheng, Zhangyang Wang, Alejandro Ribeiro, Brian M. Sadler
Our framework is implemented by a cascade of a convolutional and a graph neural network (CNN / GNN), addressing agent-level visual perception and feature learning, as well as swarm-level communication, local information aggregation and agent action inference, respectively.
2 code implementations • ICLR 2022 • Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
Our framework, FreeTickets, is defined as the ensemble of these relatively cheap sparse subnetworks.
2 code implementations • NeurIPS 2021 • Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, Xiaohan Chen, Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, Yanzhi Wang
Based on our analysis, we summarize a guideline for parameter settings with regard to specific architecture characteristics, which we hope will catalyze research progress on the topic of the lottery ticket hypothesis.
no code implementations • 16 Jul 2021 • Chaojian Li, Wuyang Chen, Yuchen Gu, Tianlong Chen, Yonggan Fu, Zhangyang Wang, Yingyan Lin
Semantic segmentation for scene understanding is now widely demanded, raising significant challenges for algorithm efficiency, especially for applications on resource-limited platforms.
1 code implementation • 23 Jul 2021 • Zhenyu Wu, Zhaowen Wang, Ye Yuan, Jianming Zhang, Zhangyang Wang, Hailin Jin
Existing diversity tests of samples from GANs are usually conducted qualitatively on a small scale, and/or depend on access to the original training data as well as the trained model parameters.
1 code implementation • 1 Aug 2021 • Zeyuan Chen, Yifan Jiang, Dong Liu, Zhangyang Wang
We present Coordinated Enhancement for Real-world Low-light Noisy Images (CERL), which seamlessly integrates light enhancement and noise suppression components into a unified and physics-grounded optimization framework.
1 code implementation • the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021 • Junyuan Hong, Zhuangdi Zhu, Shuyang Yu, Zhangyang Wang, Hiroko Dodge, Jiayu Zhou
While adversarial learning is commonly used in centralized learning for mitigating bias, there are significant barriers when extending it to the federated framework.
1 code implementation • ICCV 2021 • Yifan Jiang, He Zhang, Jianming Zhang, Yilin Wang, Zhe Lin, Kalyan Sunkavalli, Simon Chen, Sohrab Amirghodsi, Sarah Kong, Zhangyang Wang
Image harmonization aims to improve the quality of image compositing by matching the "appearance" (e.g., color tone, brightness and contrast) between foreground and background images.
1 code implementation • 24 Aug 2021 • Tianlong Chen, Kaixiong Zhou, Keyu Duan, Wenqing Zheng, Peihao Wang, Xia Hu, Zhangyang Wang
In view of those, we present the first fair and reproducible benchmark dedicated to assessing the "tricks" of training deep GNNs.
1 code implementation • 26 Aug 2021 • Wuyang Chen, Xinyu Gong, Junru Wu, Yunchao Wei, Humphrey Shi, Zhicheng Yan, Yi Yang, Zhangyang Wang
This work targets designing a principled and unified training-free framework for Neural Architecture Search (NAS), with high performance, low cost, and in-depth interpretation.
1 code implementation • 30 Aug 2021 • Ye Yuan, Wuyang Chen, Zhaowen Wang, Matthew Fisher, Zhifei Zhang, Zhangyang Wang, Hailin Jin
The novel graph constructor maps a glyph's latent code to its graph representation that matches expert knowledge, which is trained to help the translation task.
no code implementations • ICCV 2021 • Xinyu Gong, Heng Wang, Zheng Shou, Matt Feiszli, Zhangyang Wang, Zhicheng Yan
We design a multivariate search space, including 6 search variables to capture a wide variety of choices in designing two-stream models.
no code implementations • ICCV 2021 • Yi Guo, Huan Yuan, Jianchao Tan, Zhangyang Wang, Sen yang, Ji Liu
During the training process, the polarization effect will drive a subset of gates to smoothly decrease to exact zero, while other gates gradually stay away from zero by a large margin.
1 code implementation • 21 Sep 2021 • Abduallah Mohamed, Huancheng Chen, Zhangyang Wang, Christian Claudel
We propose Skeleton-Graph, a deep spatio-temporal graph CNN model that predicts the future 3D skeleton poses in a single pass from the 2D ones.
Ranked #1 on Trajectory Prediction on PROX
no code implementations • 29 Sep 2021 • Junjie Yang, Tianlong Chen, Mingkang Zhu, Fengxiang He, DaCheng Tao, Yingbin Liang, Zhangyang Wang
Learning to optimize (L2O) has gained increasing popularity in various optimization tasks, since classical optimizers usually require laborious, problem-specific design and hyperparameter tuning.
no code implementations • 29 Sep 2021 • Yan Han, Ying Ding, Ahmed Tewfik, Yifan Peng, Zhangyang Wang
During training, the image branch leverages its learned attention to estimate pathology localization, which is then utilized to extract radiomic features from images in the radiomics branch.
no code implementations • ICLR 2022 • Yuning You, Yue Cao, Tianlong Chen, Zhangyang Wang, Yang shen
Optimizing an objective function with uncertainty awareness is well-known to improve the accuracy and confidence of optimization solutions.
no code implementations • ICLR 2022 • Xiaohan Chen, Jason Zhang, Zhangyang Wang
In this work, we define an extended class of subnetworks in randomly initialized NNs called disguised subnetworks, which are not only "hidden" in the random networks but also "disguised" -- hence can only be "unmasked" with certain transformations on weights.
no code implementations • 29 Sep 2021 • Qiming Wu, Xiaohan Chen, Yifan Jiang, Pan Zhou, Zhangyang Wang
Drawing inspiration from the recently prosperous research on the lottery ticket hypothesis (LTH), we conjecture and study a novel "lottery image prior" (LIP), stated as: given an (untrained or trained) DNN-based image prior, it will have a sparse subnetwork that can be trained in isolation to match the original DNN's performance when applied as a prior to various image inverse problems.
no code implementations • 29 Sep 2021 • Haotao Wang, Junyuan Hong, Jiayu Zhou, Zhangyang Wang
In this paper, we first propose a new fairness goal, termed Equalized Robustness (ER), to impose fair model robustness against unseen distribution shifts across majority and minority groups.
no code implementations • ICLR 2022 • Peihao Wang, Wenqing Zheng, Tianlong Chen, Zhangyang Wang
The first technique, termed AttnScale, decomposes a self-attention block into low-pass and high-pass components, then rescales and combines these two filters to produce an all-pass self-attention matrix.
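As a rough numpy sketch of this decomposition idea (the function name and the exact rescaling form are illustrative assumptions, not the paper's implementation): because each row of a softmax attention matrix sums to one, its low-pass component is the uniform-averaging matrix, and the high-pass residual can be re-weighted by a scale parameter before recombining.

```python
import numpy as np

def attnscale(attn, omega=1.0):
    """Split a row-stochastic self-attention matrix into a low-pass
    (uniform-averaging) component and a high-pass residual, then
    amplify the high-pass part by a scale omega before recombining."""
    n = attn.shape[-1]
    low = np.full_like(attn, 1.0 / n)   # DC component of a row-stochastic matrix
    high = attn - low                   # high-pass residual (rows sum to 0)
    return low + (1.0 + omega) * high   # rows still sum to 1
```

Since the high-pass rows sum to zero, the rescaled matrix remains row-stochastic in the sense that each row still sums to one, which is why such a rescaling can be dropped into an attention block without renormalization.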
no code implementations • 29 Sep 2021 • Wenqing Zheng, S P Sharan, Zhiwen Fan, Zhangyang Wang
Deep vision models are nowadays widely integrated into visual reinforcement learning (RL) to parameterize the policy networks.
no code implementations • ICLR 2022 • Shaojin Ding, Tianlong Chen, Zhangyang Wang
In this paper, we investigate the tantalizing possibility of using the lottery ticket hypothesis to discover lightweight speech recognition models that are (1) robust to various noise existing in speech; (2) transferable to fit open-world personalization; and (3) compatible with structured sparsity.
no code implementations • 29 Sep 2021 • William T Redman, Tianlong Chen, Akshunna S. Dogra, Zhangyang Wang
Foundational work on the Lottery Ticket Hypothesis has suggested an exciting corollary: winning tickets found in the context of one task can be transferred to similar tasks, possibly even across different architectures.
no code implementations • 29 Sep 2021 • Duc N.M Hoang, Kaixiong Zhou, Tianlong Chen, Xia Hu, Zhangyang Wang
Despite the preliminary success, we argue that for GNNs, NAS has to be customized further, due to the topological complicacy of GNN input data (graph) as well as the notorious training instability.
no code implementations • 29 Sep 2021 • Haoyu Ma, Yifan Huang, Tianlong Chen, Hao Tang, Chenyu You, Zhangyang Wang, Xiaohui Xie
However, it is unclear why the distorted distribution of the logits is catastrophic to the student model.
no code implementations • 29 Sep 2021 • Tianlong Chen, Xuxi Chen, Xiaolong Ma, Yanzhi Wang, Zhangyang Wang
The lottery ticket hypothesis (LTH) has shown that dense models contain highly sparse subnetworks (i.e., winning tickets) that can be trained in isolation to match full accuracy.
1 code implementation • ICLR 2022 • Lu Miao, Xiaolong Luo, Tianlong Chen, Wuyang Chen, Dong Liu, Zhangyang Wang
Conventional methods often require (iterative) pruning followed by re-training, which not only incurs large overhead beyond the original DNN training but also can be sensitive to retraining hyperparameters.
no code implementations • 7 Oct 2021 • William T. Redman, Tianlong Chen, Zhangyang Wang, Akshunna S. Dogra
Foundational work on the Lottery Ticket Hypothesis has suggested an exciting corollary: winning tickets found in the context of one task can be transferred to similar tasks, possibly even across different architectures.
1 code implementation • 9 Oct 2021 • Mu Yang, Shaojin Ding, Tianlong Chen, Tong Wang, Zhangyang Wang
This work presents a lifelong learning approach to train a multilingual Text-To-Speech (TTS) system, where each language was seen as an individual task and was learned sequentially and continually.
1 code implementation • NeurIPS 2021 • Haotao Wang, Chaowei Xiao, Jean Kossaifi, Zhiding Yu, Anima Anandkumar, Zhangyang Wang
Diversity and hardness are two complementary dimensions of data augmentation to achieve robustness.
1 code implementation • NeurIPS 2021 • Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin
Learned Iterative Shrinkage-Thresholding Algorithm (LISTA) introduces the concept of unrolling an iterative algorithm and training it like a neural network.
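As a rough illustration of the unrolling idea (not the paper's trained model), here is a numpy sketch of a LISTA-style forward pass; in LISTA the matrices and thresholds below become learnable parameters trained end-to-end, whereas here they are hand-initialized from a dictionary A as in classical ISTA:

```python
import numpy as np

def soft_threshold(x, theta):
    """Elementwise shrinkage operator used by ISTA/LISTA."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lista_forward(y, W_e, W_s, thetas):
    """Unrolled forward pass: each 'layer' computes
    x <- soft_threshold(W_e @ y + W_s @ x, theta_k)."""
    x = np.zeros(W_s.shape[0])
    for theta in thetas:                  # one loop iteration = one network layer
        x = soft_threshold(W_e @ y + W_s @ x, theta)
    return x
```

With the classical initialization W_e = Aᵀ/L, W_s = I − AᵀA/L (L the largest eigenvalue of AᵀA), each unrolled layer is exactly one ISTA step; training these weights instead is what makes LISTA converge in far fewer layers.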
1 code implementation • NeurIPS 2021 • Wenqing Zheng, Qiangqiang Guo, Hao Yang, Peihao Wang, Zhangyang Wang
This paper presents the Delayed Propagation Transformer (DePT), a new transformer-based model that specializes in the global modeling of CPS while taking into account the immutable constraints from the physical world.
1 code implementation • 30 Oct 2021 • Xuxi Chen, Tianlong Chen, Weizhu Chen, Ahmed Hassan Awadallah, Zhangyang Wang, Yu Cheng
To address these pain points, we propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights.
1 code implementation • NeurIPS 2021 • Xuxi Chen, Tianlong Chen, Zhenyu Zhang, Zhangyang Wang
The lottery ticket hypothesis (LTH) emerges as a promising framework that leverages a special sparse subnetwork (i.e., a winning ticket) instead of a full model for both training and inference, lowering both costs without sacrificing performance.
2 code implementations • ICLR 2022 • Wenqing Zheng, Edward W Huang, Nikhil Rao, Sumeet Katariya, Zhangyang Wang, Karthik Subbian
We propose Cold Brew, a teacher-student distillation approach to address the SCS and noisy-neighbor challenges for GNNs.
no code implementations • 9 Dec 2021 • Yifan Jiang, Xinyu Gong, Junru Wu, Humphrey Shi, Zhicheng Yan, Zhangyang Wang
Efficient video architecture is the key to deploying video recognition systems on devices with limited computing resources.
3 code implementations • 17 Dec 2021 • Wuyang Chen, Xianzhi Du, Fan Yang, Lucas Beyer, Xiaohua Zhai, Tsung-Yi Lin, Huizhong Chen, Jing Li, Xiaodan Song, Zhangyang Wang, Denny Zhou
In this paper, we comprehensively study three architecture design choices on ViT -- spatial reduction, doubled channels, and multiscale features -- and demonstrate that a vanilla ViT architecture can fulfill this goal without handcrafting multiscale features, maintaining the original ViT design philosophy.
1 code implementation • 18 Dec 2021 • Sameer Bibikar, Haris Vikalo, Zhangyang Wang, Xiaohan Chen
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
1 code implementation • CVPR 2022 • Zhiwen Fan, Tianlong Chen, Peihao Wang, Zhangyang Wang
CADTransformer tokenizes directly from the set of graphical primitives in CAD drawings, and correspondingly optimizes line-grained semantic and instance symbol spotting altogether by a pair of prediction heads.
no code implementations • CVPR 2022 • Haoyu Ma, Handong Zhao, Zhe Lin, Ajinkya Kale, Zhangyang Wang, Tong Yu, Jiuxiang Gu, Sunav Choudhary, Xiaohui Xie
recommendation, and marketing services.
no code implementations • 2 Jan 2022 • Yifan Jiang, Bartlomiej Wronski, Ben Mildenhall, Jonathan T. Barron, Zhangyang Wang, Tianfan Xue
These spatially-varying kernels are produced by an efficient predictor network running on a downsampled input, making them much more efficient to compute than per-pixel kernels produced by a full-resolution image, and also enlarging the network's receptive field compared with static kernels.
1 code implementation • 4 Jan 2022 • Yuning You, Tianlong Chen, Zhangyang Wang, Yang shen
Accordingly, we have extended the prefabricated discrete prior in the augmentation set, to a learnable continuous prior in the parameter space of graph generators, assuming that graph priors per se, similar to the concept of image manifolds, can be learned by data generation.
no code implementations • 17 Jan 2022 • Mengshu Sun, Haoyu Ma, Guoliang Kang, Yifan Jiang, Tianlong Chen, Xiaolong Ma, Zhangyang Wang, Yanzhi Wang
To the best of our knowledge, this is the first time quantization has been incorporated into ViT acceleration on FPGAs with the help of a fully automatic framework to guide the quantization strategy on the software side and the accelerator implementations on the hardware side given the target frame rate.