Search Results for author: Huan Wang

Found 57 papers, 15 papers with code

Slow Learning and Fast Inference: Efficient Graph Similarity Computation via Knowledge Distillation

1 code implementation NeurIPS 2021 Can Qin, Handong Zhao, Lichen Wang, Huan Wang, Yulun Zhang, Yun Fu

For slow learning of graph similarity, this paper proposes a novel early-fusion approach by designing a co-attention-based feature fusion network on multilevel GNN features.

Anomaly Detection, Graph Similarity, +2

Aligned Structured Sparsity Learning for Efficient Image Super-Resolution

1 code implementation NeurIPS 2021 Yulun Zhang, Huan Wang, Can Qin, Yun Fu

To address the above issues, we propose aligned structured sparsity learning (ASSL), which introduces a weight normalization layer and applies $L_2$ regularization to the scale parameters for sparsity.

Image Super-Resolution, Knowledge Distillation, +3
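A rough sketch of the scale-regularization idea described in this entry (not the authors' released ASSL code; module and parameter names are invented for illustration): attach a learnable per-channel scale after a convolution and add an $L_2$ penalty on the scales, so channels whose scales collapse toward zero become candidates for structured pruning.

```python
import torch
import torch.nn as nn

class ScaledConv(nn.Module):
    """Convolution followed by a learnable per-channel scale, standing in for
    the scale parameters that this entry says are regularized toward sparsity."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        self.scale = nn.Parameter(torch.ones(out_ch))  # one scale per output channel

    def forward(self, x):
        return self.conv(x) * self.scale.view(1, -1, 1, 1)

def scale_l2_penalty(model, weight=1e-4):
    """L2 regularization applied to the scale parameters only."""
    return weight * sum((m.scale ** 2).sum()
                        for m in model.modules() if isinstance(m, ScaledConv))

# training loop (sketch): loss = task_loss + scale_l2_penalty(model)
```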

Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization

no code implementations 21 Oct 2021 Devansh Arpit, Huan Wang, Yingbo Zhou, Caiming Xiong

In Domain Generalization (DG) settings, models trained on a given set of training domains have notoriously chaotic performance on distribution-shifted test domains, and stochasticity in optimization (e.g., the random seed) plays a big role.

Domain Generalization, Model Selection

Learning Rich Nearest Neighbor Representations from Self-supervised Ensembles

no code implementations 19 Oct 2021 Bram Wallace, Devansh Arpit, Huan Wang, Caiming Xiong

Pretraining convolutional neural networks via self-supervision and applying them in transfer learning is a fast-growing area that is rapidly improving performance across practically all image domains.

Transfer Learning

Momentum Contrastive Autoencoder: Using Contrastive Learning for Latent Space Distribution Matching in WAE

no code implementations 19 Oct 2021 Devansh Arpit, Aadyot Bhatnagar, Huan Wang, Caiming Xiong

Wasserstein autoencoder (WAE) shows that matching two distributions is equivalent to minimizing a simple autoencoder (AE) loss under the constraint that the latent space of this AE matches a pre-specified prior distribution.

Contrastive Learning, Representation Learning

Continuous Conditional Random Field Convolution for Point Cloud Segmentation

1 code implementation 12 Oct 2021 Fei Yang, Franck Davoine, Huan Wang, Zhong Jin

Furthermore, we build an encoder-decoder network based on the proposed continuous CRF graph convolution (CRFConv), in which the CRFConv embedded in the decoding layers restores details of high-level features lost in the encoding stage, enhancing the localization ability of the network and thereby benefiting segmentation.

Point Cloud Segmentation, Semantic Segmentation

Multi-Tensor Network Representation for High-Order Tensor Completion

no code implementations 9 Sep 2021 Chang Nie, Huan Wang, Zhihui Lai

In particular, each component can be represented as multilinear connections over several latent factors and naturally mapped to a specific tensor network (TN) topology.

Tensor Decomposition

WarpDrive: Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning on a GPU

1 code implementation 31 Aug 2021 Tian Lan, Sunil Srinivasa, Huan Wang, Stephan Zheng

We present WarpDrive, a flexible, lightweight, and easy-to-use open-source RL framework that implements end-to-end deep multi-agent RL on a single GPU (Graphics Processing Unit), built on PyCUDA and PyTorch.

Decision Making, Multi-agent Reinforcement Learning
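WarpDrive ships its own API; purely as a hedged illustration of the PyCUDA pattern it builds on (one block per environment, one thread per agent, with state kept on the GPU), a toy environment step might look like the following. The kernel, array names, and dimensions are invented for this sketch.

```python
import numpy as np
import pycuda.autoinit                      # creates a CUDA context
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# Each thread advances one agent's 1-D position by its action; blockIdx picks
# the environment, threadIdx picks the agent, so the whole step runs on-GPU.
mod = SourceModule("""
__global__ void step(float *pos, const float *act, int n_agents)
{
    int i = blockIdx.x * n_agents + threadIdx.x;
    if (threadIdx.x < n_agents)
        pos[i] += act[i];
}
""")
step = mod.get_function("step")

n_envs, n_agents = 4, 32
pos = np.zeros((n_envs, n_agents), dtype=np.float32)
act = np.random.randn(n_envs, n_agents).astype(np.float32)

step(drv.InOut(pos), drv.In(act), np.int32(n_agents),
     block=(n_agents, 1, 1), grid=(n_envs, 1))
```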

Adapting Stepsizes by Momentumized Gradients Improves Optimization and Generalization

no code implementations 22 Jun 2021 Yizhou Wang, Yue Kang, Can Qin, Huan Wang, Yi Xu, Yulun Zhang, Yun Fu

Adaptive gradient methods, such as Adam, have achieved tremendous success in machine learning.
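For context, the textbook Adam update that this family of adaptive methods builds on is sketched below; the momentumized stepsize rule actually proposed in the paper is not reproduced here.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One standard Adam update (Kingma & Ba), shown only as the baseline."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias corrections
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```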

Understanding the Under-Coverage Bias in Uncertainty Estimation

no code implementations NeurIPS 2021 Yu Bai, Song Mei, Huan Wang, Caiming Xiong

Estimating the data uncertainty in regression tasks is often done by learning a quantile function or a prediction interval of the true label conditioned on the input.
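A minimal sketch of the standard quantile (pinball) loss with which such quantile functions are typically trained; the paper analyzes the under-coverage bias of estimators obtained this way, which the snippet itself does not address.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile loss at level tau; minimizing it fits the conditional tau-quantile."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# e.g. fit the 0.05 and 0.95 quantiles to form a nominal 90% prediction interval
```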

Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning

no code implementations NeurIPS 2021 Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, Yu Bai

This offline result is the first that matches the sample complexity lower bound in this setting, and resolves a recent open question in offline RL.

Offline RL

Evaluating State-of-the-Art Classification Models Against Bayes Optimality

1 code implementation NeurIPS 2021 Ryan Theisen, Huan Wang, Lav R. Varshney, Caiming Xiong, Richard Socher

Moreover, we show that by varying the temperature of the learned flow models, we can generate synthetic datasets that closely resemble standard benchmark datasets, but with almost any desired Bayes error.

Classification

Dynamical Isometry: The Missing Ingredient for Neural Network Pruning

no code implementations 12 May 2021 Huan Wang, Can Qin, Yue Bai, Yun Fu

This paper is meant to explain it through the lens of dynamical isometry [42].

Network Pruning

Emerging Paradigms of Neural Network Pruning

no code implementations 11 Mar 2021 Huan Wang, Can Qin, Yulun Zhang, Yun Fu

In spite of the encouraging progress, how to coordinate these new pruning paradigms with traditional pruning has not yet been explored.

Network Pruning

Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games

no code implementations NeurIPS 2021 Yu Bai, Chi Jin, Huan Wang, Caiming Xiong

Real world applications such as economics and policy making often involve solving multi-agent games with two unique features: (1) The agents are inherently asymmetric and partitioned into leaders and followers; (2) The agents have different reward functions, thus the game is general-sum.

Localized Calibration: Metrics and Recalibration

no code implementations 22 Feb 2021 Rachel Luo, Aadyot Bhatnagar, Huan Wang, Caiming Xiong, Silvio Savarese, Yu Bai, Shengjia Zhao, Stefano Ermon

Probabilistic classifiers output confidence scores along with their predictions, and these confidence scores must be well-calibrated (i.e., reflect the true probability of an event) to be meaningful and useful for downstream tasks.

Decision Making
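As background for the "well-calibrated" requirement, here is a minimal sketch of the standard global expected calibration error (ECE); the paper's localized metrics and recalibration method are not implemented here.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Bin predictions by confidence and compare each bin's mean confidence
    with its empirical accuracy, weighting bins by their size."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(confidences[in_bin].mean() - correct[in_bin].mean())
    return ece
```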

Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification

no code implementations 15 Feb 2021 Yu Bai, Song Mei, Huan Wang, Caiming Xiong

Modern machine learning models with high accuracy are often miscalibrated -- the predicted top probability does not reflect the actual accuracy, and tends to be over-confident.

Automatic Segmentation of Organs-at-Risk from Head-and-Neck CT using Separable Convolutional Neural Network with Hard-Region-Weighted Loss

1 code implementation 3 Feb 2021 Wenhui Lei, Haochen Mei, Zhengwentai Sun, Shan Ye, Ran Gu, Huan Wang, Rui Huang, Shichuan Zhang, Shaoting Zhang, Guotai Wang

Despite the state-of-the-art performance achieved by Convolutional Neural Networks (CNNs) for automatic segmentation of OARs, existing methods do not provide uncertainty estimation of the segmentation results for treatment planning, and their accuracy is still limited by several factors, including the low contrast of soft tissues in CT, highly imbalanced sizes of OARs, and large inter-slice spacing.

Computed Tomography (CT)

Use or Misuse of NMR to Test Molecular Mobility during Chemical Reaction

no code implementations 28 Jan 2021 Huan Wang, Tian Huang, Steve Granick

With raw NMR spectra available in a public depository, we confirm boosted mobility during the click chemical reaction (Science 2020, 369, 537) regardless of the order of magnetic field gradient (linearly-increasing, linearly-decreasing, random sequence).

Soft Condensed Matter

Neural Bayes: A Generic Parameterization Method for Unsupervised Learning

no code implementations 1 Jan 2021 Devansh Arpit, Huan Wang, Caiming Xiong, Richard Socher, Yoshua Bengio

Disjoint Manifold Separation: Neural Bayes allows us to formulate an objective which can optimally label samples from disjoint manifolds present in the support of a continuous distribution.

Unsupervised Representation Learning

Momentum Contrastive Autoencoder

no code implementations 1 Jan 2021 Devansh Arpit, Aadyot Bhatnagar, Huan Wang, Caiming Xiong

Quantitatively, we show that our algorithm achieves a new state-of-the-art FID of 54.36 on CIFAR-10, and performs competitively with existing models on CelebA in terms of FID score.

Contrastive Learning, Representation Learning

Improved Uncertainty Post-Calibration via Rank Preserving Transforms

no code implementations 1 Jan 2021 Yu Bai, Tengyu Ma, Huan Wang, Caiming Xiong

In this paper, we propose Neural Rank Preserving Transforms (NRPT), a new post-calibration method that adjusts the output probabilities of a trained classifier using a calibrator of higher capacity, while maintaining its prediction accuracy.

Text Classification

Context Reasoning Attention Network for Image Super-Resolution

no code implementations ICCV 2021 Yulun Zhang, Donglai Wei, Can Qin, Huan Wang, Hanspeter Pfister, Yun Fu

However, the basic convolutional layer in CNNs is designed to extract local patterns, lacking the ability to model global context.

Image Super-Resolution

Szegő kernel asymptotics on some non-compact complete CR manifolds

no code implementations 21 Dec 2020 Chin-Yu Hsiao, George Marinescu, Huan Wang

We establish Szegő kernel asymptotic expansions on non-compact strictly pseudoconvex complete CR manifolds with transversal CR $\mathbb{R}$-action under certain natural geometric conditions.

Complex Variables, Differential Geometry

An Event Correlation Filtering Method for Fake News Detection

no code implementations 10 Dec 2020 Hao Li, Huan Wang, Guanghua Liu

To improve the detection performance of fake news, we take advantage of the event correlations of news and propose an event correlation filtering method (ECFM) for fake news detection, mainly consisting of the news characterizer, the pseudo label annotator, the event credibility updater, and the news entropy selector.

Fake News Detection

Knowledge Distillation Thrives on Data Augmentation

no code implementations 5 Dec 2020 Huan Wang, Suhas Lohit, Michael Jones, Yun Fu

In addition, when our approaches are combined with more advanced distillation losses, we can advance the state-of-the-art performance even more.

Active Learning, Data Augmentation, +1
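For reference, the vanilla Hinton-style distillation loss that data augmentation is combined with in this line of work looks roughly as follows (a generic sketch, not the paper's specific augmentation scheme).

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """KL divergence between temperature-softened teacher and student
    distributions, mixed with the usual cross-entropy on hard labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```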

Multi-head Knowledge Distillation for Model Compression

no code implementations 5 Dec 2020 Huan Wang, Suhas Lohit, Michael Jones, Yun Fu

We add loss terms for training the student that measure the dissimilarity between student and teacher outputs of the auxiliary classifiers.

Knowledge Distillation, Neural Network Compression

Unsupervised Paraphrasing with Pretrained Language Models

no code implementations EMNLP 2021 Tong Niu, Semih Yavuz, Yingbo Zhou, Nitish Shirish Keskar, Huan Wang, Caiming Xiong

To enforce a surface form dissimilar from the input, whenever the language model emits a token contained in the source sequence, Dynamic Blocking (DB) prevents the model from outputting the subsequent source token for the next generation step.

Language Modelling, Paraphrase Generation, +1
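A hedged sketch of the blocking rule described in this entry (the function and variable names are invented, and the authors' actual implementation differs in its details): if the previously generated token occurs in the source, the source tokens that immediately follow it are masked out of the next step's logits.

```python
import numpy as np

def block_successors(logits, source_ids, last_token):
    """Forbid, for the next decoding step, every source token that directly
    follows an occurrence of the last generated token in the source."""
    logits = logits.copy()
    for i, tok in enumerate(source_ids[:-1]):
        if tok == last_token:
            logits[source_ids[i + 1]] = -np.inf
    return logits

# per decoding step: logits = block_successors(logits, source_ids, generated[-1])
```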

How Important is the Train-Validation Split in Meta-Learning?

no code implementations 12 Oct 2020 Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason D. Lee, Sham Kakade, Huan Wang, Caiming Xiong

A common practice in meta-learning is to perform a train-validation split (the "train-val method"), where the prior adapts to the task on one split of the data, and the resulting predictor is evaluated on another split.

Meta-Learning

Towards Understanding Hierarchical Learning: Benefits of Neural Representations

no code implementations NeurIPS 2020 Minshuo Chen, Yu Bai, Jason D. Lee, Tuo Zhao, Huan Wang, Caiming Xiong, Richard Socher

When the trainable network is the quadratic Taylor model of a wide two-layer network, we show that neural representation can achieve improved sample complexities compared with the raw input: For learning a low-rank degree-$p$ polynomial ($p \geq 4$) in $d$ dimension, neural representation requires only $\tilde{O}(d^{\lceil p/2 \rceil})$ samples, while the best-known sample complexity upper bound for the raw input is $\tilde{O}(d^{p-1})$.

Collaborative Distillation for Ultra-Resolution Universal Style Transfer

1 code implementation CVPR 2020 Huan Wang, Yijun Li, Yuehai Wang, Haoji Hu, Ming-Hsuan Yang

In this work, we present a new knowledge distillation method (named Collaborative Distillation) for encoder-decoder based neural style transfer to reduce the convolutional filters.

Knowledge Distillation, Style Transfer

MNN: A Universal and Efficient Inference Engine

1 code implementation 27 Feb 2020 Xiaotang Jiang, Huan Wang, Yiliu Chen, Ziqi Wu, Lichuan Wang, Bin Zou, Yafeng Yang, Zongyang Cui, Yu Cai, Tianhang Yu, Chengfei Lv, Zhihua Wu

Deploying deep learning models on mobile devices has drawn increasing attention recently.

Neural Bayes: A Generic Parameterization Method for Unsupervised Representation Learning

1 code implementation 20 Feb 2020 Devansh Arpit, Huan Wang, Caiming Xiong, Richard Socher, Yoshua Bengio

Disjoint Manifold Labeling: Neural Bayes allows us to formulate an objective which can optimally label samples from disjoint manifolds present in the support of a continuous distribution.

Unsupervised Representation Learning

Taylorized Training: Towards Better Approximation of Neural Network Training at Finite Width

no code implementations 10 Feb 2020 Yu Bai, Ben Krause, Huan Wang, Caiming Xiong, Richard Socher

We propose Taylorized training as an initiative towards better understanding neural network training at finite width.

Contradictory Structure Learning for Semi-supervised Domain Adaptation

1 code implementation 6 Feb 2020 Can Qin, Lichen Wang, Qianqian Ma, Yu Yin, Huan Wang, Yun Fu

Current adversarial adaptation methods attempt to align the cross-domain features, whereas two challenges remain unsolved: 1) the conditional distribution mismatch and 2) the bias of the decision boundary towards the source domain.

Domain Adaptation

Global Capacity Measures for Deep ReLU Networks via Path Sampling

no code implementations 22 Oct 2019 Ryan Theisen, Jason M. Klusowski, Huan Wang, Nitish Shirish Keskar, Caiming Xiong, Richard Socher

Classical results on the statistical complexity of linear models have commonly identified the norm of the weights $\|w\|$ as a fundamental capacity measure.

Generalization Bounds, Multi-class Classification

Miss Detection vs. False Alarm: Adversarial Learning for Small Object Segmentation in Infrared Images

no code implementations ICCV 2019 Huan Wang, Luping Zhou, Lei Wang

Second, the adversarial training of the two models naturally produces a delicate balance of MD and FA, and low rates for both MD and FA could be achieved at Nash equilibrium.

Semantic Segmentation

On the Generalization Gap in Reparameterizable Reinforcement Learning

no code implementations 29 May 2019 Huan Wang, Stephan Zheng, Caiming Xiong, Richard Socher

For this problem class, estimating the expected return is efficient and the trajectory can be computed deterministically given peripheral random variables, which enables us to study reparametrizable RL using supervised learning and transfer learning theory.

Learning Theory, Transfer Learning

Triplet Distillation for Deep Face Recognition

1 code implementation 11 May 2019 Yushu Feng, Huan Wang, Daniel T. Yi, Roland Hu

Convolutional neural networks (CNNs) have achieved a great success in face recognition, which unfortunately comes at the cost of massive computation and storage consumption.

Face Recognition

Multi-Task Learning for Semantic Parsing with Cross-Domain Sketch

no code implementations ICLR 2019 Huan Wang, Yuxiang Hu, Li Dong, Feijun Jiang, Zaiqing Nie

Semantic parsing, which maps a natural language sentence into a formal machine-readable representation of its meaning, is highly constrained by the limited annotated training data.

Multi-Task Learning, Semantic Parsing

Structured Pruning for Efficient ConvNets via Incremental Regularization

no code implementations NIPS Workshop CDNNRIA 2018 Huan Wang, Qiming Zhang, Yuehai Wang, Haoji Hu

Parameter pruning is a promising approach for CNN compression and acceleration by eliminating redundant model parameters with tolerable performance loss.

Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method

no code implementations NIPS Workshop CDNNRIA 2018 Yuxin Zhang, Huan Wang, Yang Luo, Lu Yu, Haoji Hu, Hangguan Shan, Tony Q. S. Quek

Despite enjoying extensive applications in video analysis, three-dimensional convolutional neural networks (3D CNNs) are restricted by their massive computation and storage consumption.

Model Compression, Network Pruning

Shubnikov-de Haas and de Haas-van Alphen oscillations in topological semimetal CaAl4

no code implementations 15 Nov 2018 Sheng Xu, Jian-Feng Zhang, Yi-Yan Wang, Lin-Lin Sun, Huan Wang, Yuan Su, Xiao-Yan Wang, Kai Liu, Tian-Long Xia

An electron-type quasi-2D Fermi surface is found by the angle-dependent Shubnikov-de Haas oscillations, de Haas-van Alphen oscillations and the first-principles calculations.

Materials Science, Mesoscale and Nanoscale Physics

Identifying Generalization Properties in Neural Networks

no code implementations ICLR 2019 Huan Wang, Nitish Shirish Keskar, Caiming Xiong, Richard Socher

In particular, we prove that model generalization ability is related to the Hessian, the higher-order "smoothness" terms characterized by the Lipschitz constant of the Hessian, and the scales of the parameters.

Structured Pruning for Efficient ConvNets via Incremental Regularization

1 code implementation 25 Apr 2018 Huan Wang, Qiming Zhang, Yuehai Wang, Yu Lu, Haoji Hu

Parameter pruning is a promising approach for CNN compression and acceleration by eliminating redundant model parameters with tolerable performance degradation.

Network Pruning

Adaptive Dropout with Rademacher Complexity Regularization

no code implementations ICLR 2018 Ke Zhai, Huan Wang

We propose a novel framework to adaptively adjust the dropout rates for the deep neural network based on a Rademacher complexity bound.

Document Classification

Structured Probabilistic Pruning for Convolutional Neural Network Acceleration

2 code implementations 20 Sep 2017 Huan Wang, Qiming Zhang, Yuehai Wang, Haoji Hu

Unlike existing deterministic pruning approaches, where unimportant weights are permanently eliminated, SPP introduces a pruning probability for each weight, and pruning is guided by sampling from the pruning probabilities.

Transfer Learning
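A minimal sketch of the sampling idea in the excerpt above (per-filter pruning probabilities; the probability-update rule from the paper is omitted and the names are illustrative).

```python
import numpy as np

def sample_pruning_mask(prune_probs, rng=None):
    """Draw a binary keep/prune mask from the current pruning probabilities
    instead of pruning deterministically; True means the filter is kept."""
    rng = rng or np.random.default_rng()
    return rng.random(prune_probs.shape) >= prune_probs

# example: 64 conv filters with per-filter pruning probabilities
probs = np.random.rand(64)
mask = sample_pruning_mask(probs)
# masked_weights = weights * mask[:, None, None, None]
```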

A Batchwise Monotone Algorithm for Dictionary Learning

no code implementations 31 Jan 2015 Huan Wang, John Wright, Daniel Spielman

Unlike the state-of-the-art dictionary learning algorithms, which impose sparsity constraints on a sample-by-sample basis, we instead treat the samples as a batch and impose the sparsity constraint on the batch as a whole.

Dictionary Learning
