Search Results for author: Chengyue Gong

Found 19 papers, 9 papers with code

Network Pruning by Greedy Subnetwork Selection

no code implementations ICML 2020 Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu

Theoretically, we show that the small networks pruned using our method achieve provably lower loss than small networks trained from scratch with the same size.

Network Pruning

Learning with Different Amounts of Annotation: From Zero to Many Labels

no code implementations 9 Sep 2021 Shujian Zhang, Chengyue Gong, Eunsol Choi

Introducing such multi-label examples at the cost of annotating fewer examples brings clear gains on natural language inference and entity typing tasks, even when we simply first train with single-label data and then fine-tune with multi-label examples.

Data Augmentation Entity Typing +1

MaxUp: Lightweight Adversarial Training With Data Augmentation Improves Neural Network Training

no code implementations CVPR 2021 Chengyue Gong, Tongzheng Ren, Mao Ye, Qiang Liu

The idea is to generate a set of augmented data with some random perturbations or transforms, and minimize the maximum, or worst case loss over the augmented data.

Data Augmentation Image Classification +1
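The worst-case objective that MaxUp describes above lends itself to a compact sketch; the `augment` callable and model below are placeholders rather than the paper's exact training setup.

```python
import torch
import torch.nn.functional as F

def maxup_loss(model, x, y, augment, m=4):
    """Sketch of a MaxUp-style objective: for each example, draw m random
    augmentations and keep only the worst-case (largest) loss among them.
    `augment` stands in for any stochastic transform (crop, noise, ...)."""
    per_draw = []
    for _ in range(m):
        logits = model(augment(x))                            # one random draw
        per_draw.append(F.cross_entropy(logits, y, reduction="none"))
    worst = torch.stack(per_draw).max(dim=0).values           # max over draws, per example
    return worst.mean()                                       # minimize the average worst case
```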

Knowing More About Questions Can Help: Improving Calibration in Question Answering

1 code implementation 2 Jun 2021 Shujian Zhang, Chengyue Gong, Eunsol Choi

We study calibration in question answering, estimating whether the model correctly predicts the answer for each question.

Data Augmentation Question Answering +1
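One common way to estimate whether a QA model's answer is correct is to fit a lightweight calibrator on confidence-related features; the two features below are illustrative assumptions, not the feature set used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-question features: the QA model's max softmax probability
# and the predicted answer length. Labels: 1 if the answer was correct, else 0.
X_train = np.array([[0.92, 3], [0.41, 7], [0.77, 2], [0.30, 9]])
y_train = np.array([1, 0, 1, 0])

calibrator = LogisticRegression().fit(X_train, y_train)
# Estimated probability that a prediction with confidence 0.65 and length 4 is correct.
print(calibrator.predict_proba([[0.65, 4]])[0, 1])
```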

Vision Transformers with Patch Diversification

1 code implementation 26 Apr 2021 Chengyue Gong, Dilin Wang, Meng Li, Vikas Chandra, Qiang Liu

To alleviate this problem, in this work, we introduce novel loss functions in vision transformer training to explicitly encourage diversity across patch representations for more discriminative feature extraction.

Image Classification Semantic Segmentation
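A diversity-encouraging loss over patch representations could, for example, penalize the pairwise cosine similarity between patch tokens; this is only one plausible form of such a loss, not necessarily the exact functions introduced in the paper.

```python
import torch
import torch.nn.functional as F

def patch_diversity_penalty(tokens):
    """tokens: (batch, num_patches, dim) patch representations from a ViT block.
    Returns the mean pairwise cosine similarity between distinct patches; adding
    it to the task loss discourages patch representations from collapsing."""
    t = F.normalize(tokens, dim=-1)                      # unit-norm patch vectors
    sim = torch.einsum("bnd,bmd->bnm", t, t)             # all-pairs cosine similarity
    n = tokens.shape[1]
    off_diag = sim - torch.eye(n, device=sim.device)     # zero out self-similarity
    return off_diag.sum() / (tokens.shape[0] * n * (n - 1))
```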

AlphaNet: Improved Training of Supernets with Alpha-Divergence

2 code implementations 16 Feb 2021 Dilin Wang, Chengyue Gong, Meng Li, Qiang Liu, Vikas Chandra

Weight-sharing NAS builds a supernet that assembles all the architectures as its sub-networks and jointly trains the supernet with the sub-networks.

Neural Architecture Search
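Weight-sharing supernet training can be sketched as sampling sub-networks, here channel slices of a shared layer, and accumulating their gradients into one joint update; the toy search space and sampling rule are illustrative and omit AlphaNet's alpha-divergence-based distillation.

```python
import random
import torch
import torch.nn.functional as F

class SlimmableLinear(torch.nn.Linear):
    """Shared linear layer whose sub-networks use only the first `width` output units."""
    def forward(self, x, width):
        return F.linear(x, self.weight[:width], self.bias[:width])

layer, head = SlimmableLinear(32, 64), torch.nn.Linear(64, 10)
opt = torch.optim.SGD(list(layer.parameters()) + list(head.parameters()), lr=0.1)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
opt.zero_grad()
for width in (64, random.choice([16, 32, 48]), 16):        # largest, random, smallest
    h = F.pad(layer(x, width), (0, 64 - width))            # shared weights, zero-padded
    F.cross_entropy(head(torch.relu(h)), y).backward()     # accumulate sub-network gradients
opt.step()                                                  # one joint update of the supernet
```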

Capturing Label Distribution: A Case Study in NLI

no code implementations 13 Feb 2021 Shujian Zhang, Chengyue Gong, Eunsol Choi

We depart from the standard practice of collecting a single reference per training example, and find that collecting multiple references can achieve better accuracy under a fixed annotation budget.

Natural Language Inference

Fast Training of Contrastive Learning with Intermediate Contrastive Loss

no code implementations 1 Jan 2021 Chengyue Gong, Xingchao Liu, Qiang Liu

We apply our method to the recently proposed MoCo, SimCLR, and SwAV, and observe that we can reduce the computational cost with little loss in performance on ImageNet linear classification and other downstream tasks.

Contrastive Learning
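One reading of an intermediate contrastive loss is to apply the standard InfoNCE objective to intermediate-layer features in addition to the final representation; the layer choice and weighting below are assumptions, not the paper's schedule.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE-style loss; rows of z1 and z2 are two augmented views of
    the same examples, so positives lie on the diagonal of the similarity matrix."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.shape[0], device=z1.device)
    return F.cross_entropy(logits, labels)

def combined_loss(views1, views2, weights=(0.5, 1.0)):
    """Sketch: sum contrastive losses over (intermediate, final) feature pairs
    with assumed per-layer weights."""
    return sum(w * info_nce(f1, f2) for w, f1, f2 in zip(weights, views1, views2))
```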

AlphaMatch: Improving Consistency for Semi-supervised Learning with Alpha-divergence

no code implementations CVPR 2021 Chengyue Gong, Dilin Wang, Qiang Liu

Semi-supervised learning (SSL) is a key approach toward more data-efficient machine learning, jointly leveraging both labeled and unlabeled data.
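Consistency-based SSL can be sketched as matching the model's predictions on two stochastic views of the same unlabeled example; the plain KL term below is a generic stand-in and does not reproduce the paper's alpha-divergence objective.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, augment):
    """Generic consistency regularization: the prediction on one random view
    serves as a detached target for another view of the same unlabeled batch."""
    with torch.no_grad():
        target = F.softmax(model(augment(x_unlabeled)), dim=-1)    # pseudo-target view
    log_pred = F.log_softmax(model(augment(x_unlabeled)), dim=-1)  # trainable view
    return F.kl_div(log_pred, target, reduction="batchmean")
```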

AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling

2 code implementations CVPR 2021 Dilin Wang, Meng Li, Chengyue Gong, Vikas Chandra

Our discovered model family, AttentiveNAS models, achieves top-1 accuracy from 77.3% to 80.7% on ImageNet, and outperforms SOTA models, including BigNAS and Once-for-All networks.

Neural Architecture Search

SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions

1 code implementation ACL 2020 Mao Ye, Chengyue Gong, Qiang Liu

For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction cannot be altered by any possible synonymous word substitution.

Text Classification
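Certification against word substitutions builds on randomized smoothing over synonym sets; a minimal sketch of the smoothed prediction (majority vote over randomly substituted copies) is below, with the synonym table and classifier as placeholders and the actual certificate computation omitted.

```python
import random
from collections import Counter

def smoothed_predict(classify, words, synonyms, num_samples=100, seed=0):
    """Sketch of smoothing over synonym substitutions: randomly replace each word
    by one of its synonyms and return the majority label over many such samples.
    `classify` maps a word list to a label; `synonyms` maps a word to a
    self-inclusive list of synonyms."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(num_samples):
        perturbed = [rng.choice(synonyms.get(w, [w])) for w in words]
        votes[classify(perturbed)] += 1
    return votes.most_common(1)[0][0]
```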

Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection

1 code implementation 3 Mar 2020 Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu

This differs from the existing methods based on backward elimination, which remove redundant neurons from the large network.

Network Pruning
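Greedy forward selection, in contrast to backward elimination, grows a subnetwork from the empty set by repeatedly adding whichever neuron most reduces the loss; this sketch works against an abstract loss oracle and omits the paper's finer details.

```python
def greedy_forward_select(neurons, loss_of_subset, budget):
    """Sketch of greedy forward selection for pruning: start from an empty
    subnetwork and, at each step, add the neuron of the large network that
    yields the lowest loss, until the size budget is reached.
    `loss_of_subset(subset)` is assumed to evaluate the pruned network."""
    selected, remaining = [], list(neurons)
    while len(selected) < budget and remaining:
        best = min(remaining, key=lambda n: loss_of_subset(selected + [n]))
        selected.append(best)
        remaining.remove(best)
    return selected
```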

Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework

no code implementations NeurIPS 2020 Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu

Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning.
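The randomized-classifier idea can be sketched as Gaussian-noise smoothing of a base classifier's prediction; this follows the standard smoothing setup rather than the functional-optimization framework developed in the paper, and no certified radius is computed.

```python
import torch

def smoothed_class(base_model, x, sigma=0.25, num_samples=100):
    """Sketch of a randomized (smoothed) classifier: add i.i.d. Gaussian noise to
    the input many times and return the majority-vote class. `base_model` is
    assumed to map a batch of inputs to class logits."""
    with torch.no_grad():
        noisy = x.unsqueeze(0) + sigma * torch.randn(num_samples, *x.shape)
        preds = base_model(noisy).argmax(dim=-1)          # one label per noisy copy
    return torch.mode(preds).values.item()                # majority vote
```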

MaxUp: A Simple Way to Improve Generalization of Neural Network Training

1 code implementation 20 Feb 2020 Chengyue Gong, Tongzheng Ren, Mao Ye, Qiang Liu

The idea is to generate a set of augmented data with some random perturbations or transforms and minimize the maximum, or worst case loss over the augmented data.

Few-Shot Image Classification General Classification +1

Improving Neural Language Modeling via Adversarial Training

1 code implementation 10 Jun 2019 Dilin Wang, Chengyue Gong, Qiang Liu

Theoretically, we show that our adversarial mechanism effectively encourages the diversity of the embedding vectors, helping to increase the robustness of models.

Language Modelling Machine Translation
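Adversarial perturbation of word embeddings can be sketched with a single gradient-sign step before recomputing the loss; this generic FGSM-style stand-in illustrates the flavor of adversarial training on embeddings, not the paper's exact mechanism on the output embedding matrix.

```python
import torch
import torch.nn.functional as F

def adversarial_embedding_loss(model, embed, token_ids, targets, epsilon=0.01):
    """Sketch: perturb word embeddings in the loss-ascent direction, then train on
    the perturbed embeddings. `model` maps embeddings to logits; in this sketch
    only the model parameters (not the embedding table) receive gradients."""
    emb = embed(token_ids).detach().requires_grad_(True)
    loss = F.cross_entropy(model(emb), targets)
    grad, = torch.autograd.grad(loss, emb)
    emb_adv = (emb + epsilon * grad.sign()).detach()      # worst-case perturbation
    return F.cross_entropy(model(emb_adv), targets)
```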

FRAGE: Frequency-Agnostic Word Representation

3 code implementations NeurIPS 2018 Chengyue Gong, Di He, Xu Tan, Tao Qin, Li-Wei Wang, Tie-Yan Liu

Continuous word representation (aka word embedding) is a basic building block in many neural network-based models used in natural language processing tasks.

Language Modelling Machine Translation +3

Deep Dynamic Poisson Factorization Model

no code implementations NeurIPS 2017 Chengyue Gong, Win-Bin Huang

A new model, named the deep dynamic Poisson factorization model, is proposed in this paper for analyzing sequential count vectors.

Variational Inference
