Search Results for author: Yu Cheng

Found 116 papers, 52 papers with code

Object Tracking using Spatio-Temporal Networks for Future Prediction Location

no code implementations ECCV 2020 Yuan Liu, Ruoteng Li, Yu Cheng, Robby T. Tan, Xiubao Sui

To facilitate the future prediction ability, we follow three key observations: 1) object motion trajectory is affected significantly by camera motion; 2) the past trajectory of an object can act as a salient cue to estimate the object motion in the spatial domain; 3) previous frames contain the surroundings and appearance of the target object, which is useful for predicting the target object’s future locations.

Future prediction Object Tracking

Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models

no code implementations 4 Nov 2021 Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li

In this paper, we present Adversarial GLUE (AdvGLUE), a new multi-task benchmark to quantitatively and thoroughly explore and evaluate the vulnerabilities of modern large-scale language models under various types of adversarial attacks.

Adversarial Attack Adversarial Robustness +2

DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models

1 code implementation 30 Oct 2021 Xuxi Chen, Tianlong Chen, Yu Cheng, Weizhu Chen, Zhangyang Wang, Ahmed Hassan Awadallah

To address these pain points, we propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights.

Fine-tuning
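
The sparsity prior on weight updates can be illustrated with a toy sketch (not DSEE's actual algorithm, which also decomposes updates into low-rank plus sparse parts): apply a gradient step only on the few entries with the largest gradient magnitude, so the weight delta itself stays sparse. `sparse_update` and the `density` knob are illustrative names.

```python
import numpy as np

def sparse_update(weight, grad, lr=0.1, density=0.05):
    """Gradient step restricted to the top-|density| fraction of entries
    by gradient magnitude, keeping the weight delta sparse."""
    k = max(1, int(density * grad.size))
    # threshold at the k-th largest gradient magnitude
    thresh = np.partition(np.abs(grad).ravel(), -k)[-k]
    mask = (np.abs(grad) >= thresh).astype(grad.dtype)
    return weight - lr * grad * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
g = rng.normal(size=(8, 8))
w_new = sparse_update(w, g)
# only k = 3 of the 64 entries are updated
n_changed = int(np.sum(w_new != w))
```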

A Good Prompt Is Worth Millions of Parameters? Low-resource Prompt-based Learning for Vision-Language Models

no code implementations 16 Oct 2021 Woojeong Jin, Yu Cheng, Yelong Shen, Weizhu Chen, Xiang Ren

Large pretrained vision-language (VL) models can learn a new task with a handful of examples or generalize to a new task without fine-tuning.

Fine-tuning Image Captioning +2

What do Compressed Large Language Models Forget? Robustness Challenges in Model Compression

no code implementations 16 Oct 2021 Mengnan Du, Subhabrata Mukherjee, Yu Cheng, Milad Shokouhi, Xia Hu, Ahmed Hassan Awadallah

Recent works have focused on compressing pre-trained language models (PLMs) like BERT where the major focus has been to improve the compressed model performance for downstream tasks.

Knowledge Distillation Language understanding +2

MA-CLIP: Towards Modality-Agnostic Contrastive Language-Image Pre-training

no code implementations 29 Sep 2021 Haoxuan You, Luowei Zhou, Bin Xiao, Noel C Codella, Yu Cheng, Ruochen Xu, Shih-Fu Chang, Lu Yuan

Large-scale multimodal contrastive pretraining has demonstrated great utility to support high performance in a range of downstream tasks by mapping multiple modalities into a shared embedding space.

Outlier-Robust Sparse Estimation via Non-Convex Optimization

no code implementations 23 Sep 2021 Yu Cheng, Ilias Diakonikolas, Daniel M. Kane, Rong Ge, Shivam Gupta, Mahdi Soltanolkotabi

We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints, with a focus on the fundamental tasks of robust sparse mean estimation and robust sparse PCA.

Few-Shot Object Detection via Classification Refinement and Distractor Retreatment

no code implementations CVPR 2021 Yiting Li, Haiyue Zhu, Yu Cheng, Wenxin Wang, Chek Sing Teo, Cheng Xiang, Prahlad Vadakkepat, Tong Heng Lee

We investigate the failure modes of FSOD and find that the performance degradation is mainly due to classification incapability (false positives), which motivates us to address it from the novel aspect of hard example mining.

Classification Few-Shot Object Detection

Chasing Sparsity in Vision Transformers: An End-to-End Exploration

1 code implementation NeurIPS 2021 Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang

For example, our sparsified DeiT-Small at (5%, 50%) sparsity for (data, architecture) improves top-1 accuracy by 0.28% while enjoying 49.32% FLOPs and 4.40% running time savings.

Robust Learning of Fixed-Structure Bayesian Networks in Nearly-Linear Time

1 code implementation ICLR 2021 Yu Cheng, Honghao Lin

We achieve this by establishing a direct connection between robust learning of Bayesian networks and robust mean estimation.

Playing Lottery Tickets with Vision and Language

no code implementations 23 Apr 2021 Zhe Gan, Yen-Chun Chen, Linjie Li, Tianlong Chen, Yu Cheng, Shuohang Wang, Jingjing Liu

In this work, we perform the first empirical study to assess whether such trainable subnetworks also exist in pre-trained V+L models.

Question Answering Referring Expression Comprehension +3

Automated Mechanism Design for Classification with Partial Verification

no code implementations 12 Apr 2021 Hanrui Zhang, Yu Cheng, Vincent Conitzer

We study the problem of automated mechanism design with partial verification, where each type can (mis)report only a restricted set of types (rather than any other type), induced by the principal's limited verification power.

Classification General Classification

Monocular 3D Multi-Person Pose Estimation by Integrating Top-Down and Bottom-Up Networks

1 code implementation CVPR 2021 Yu Cheng, Bo Wang, Bo Yang, Robby T. Tan

Besides the integration of top-down and bottom-up networks, unlike existing pose discriminators that are designed solely for single person, and consequently cannot assess natural inter-person interactions, we propose a two-person pose discriminator that enforces natural two-person interactions.

3D Multi-Person Pose Estimation (absolute) 3D Multi-Person Pose Estimation (root-relative) +2

The Elastic Lottery Ticket Hypothesis

1 code implementation NeurIPS 2021 Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Jingjing Liu, Zhangyang Wang

Based on these results, we articulate the Elastic Lottery Ticket Hypothesis (E-LTH): by mindfully replicating (or dropping) and re-ordering layers for one network, its corresponding winning ticket could be stretched (or squeezed) into a subnetwork for another deeper (or shallower) network from the same family, whose performance is nearly as competitive as the latter's winning ticket found directly by IMP.

Context-aware Biaffine Localizing Network for Temporal Sentence Grounding

1 code implementation CVPR 2021 Daizong Liu, Xiaoye Qu, Jianfeng Dong, Pan Zhou, Yu Cheng, Wei Wei, Zichuan Xu, Yulai Xie

This paper addresses the problem of temporal sentence grounding (TSG), which aims to identify the temporal boundary of a specific segment from an untrimmed video by a sentence query.

Adversarial Feature Augmentation and Normalization for Visual Recognition

1 code implementation 22 Mar 2021 Tianlong Chen, Yu Cheng, Zhe Gan, JianFeng Wang, Lijuan Wang, Zhangyang Wang, Jingjing Liu

Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.

Classification Data Augmentation +1

Few-Shot Text Classification with Triplet Networks, Data Augmentation, and Curriculum Learning

1 code implementation NAACL 2021 Jason Wei, Chengyu Huang, Soroush Vosoughi, Yu Cheng, Shiqi Xu

Few-shot text classification is a fundamental NLP task in which a model aims to classify text into a large number of categories, given only a few training examples per category.

Classification Curriculum Learning +4

Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective

1 code implementation NeurIPS 2021 Tianlong Chen, Yu Cheng, Zhe Gan, Jingjing Liu, Zhangyang Wang

Training generative adversarial networks (GANs) with limited real image data generally results in deteriorated performance and collapsed models.

Data Augmentation

Deep Co-Attention Network for Multi-View Subspace Learning

1 code implementation 15 Feb 2021 Lecheng Zheng, Yu Cheng, Hongxia Yang, Nan Cao, Jingrui He

For example, given the diagnostic result that a model provided based on the X-ray images of a patient at different poses, the doctor needs to know why the model made such a prediction.

Structure Of Flavor Changing Goldstone Boson Interactions

no code implementations 15 Jan 2021 Jin Sun, Yu Cheng, Xiao-Gang He

Or it may be the Majoron in models where lepton number violation generates seesaw Majorana neutrino masses, if the symmetry breaking scale is much higher than the electroweak scale.

High Energy Physics - Phenomenology

Adversarial Masking: Towards Understanding Robustness Trade-off for Generalization

no code implementations 1 Jan 2021 Minhao Cheng, Zhe Gan, Yu Cheng, Shuohang Wang, Cho-Jui Hsieh, Jingjing Liu

By incorporating different feature maps after the masking, we can distill better features to help model generalization.

ALFA: Adversarial Feature Augmentation for Enhanced Image Recognition

no code implementations 1 Jan 2021 Tianlong Chen, Yu Cheng, Zhe Gan, Yu Hu, Zhangyang Wang, Jingjing Liu

Adversarial training is an effective method to combat adversarial attacks in order to create robust neural networks.

EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets

1 code implementation ACL 2021 Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Zhangyang Wang, Jingjing Liu

Heavily overparameterized language models such as BERT, XLNet and T5 have achieved impressive success in many NLP tasks.

Fine-tuning Model Compression

Graph and Temporal Convolutional Networks for 3D Multi-person Pose Estimation in Monocular Videos

1 code implementation 22 Dec 2020 Yu Cheng, Bo Wang, Bo Yang, Robby T. Tan

To tackle this problem, we propose a novel framework integrating graph convolutional networks (GCNs) and temporal convolutional networks (TCNs) to robustly estimate camera-centric multi-person 3D poses that do not require camera parameters.

3D Absolute Human Pose Estimation 3D Multi-Person Pose Estimation (absolute) +5

Fair for All: Best-effort Fairness Guarantees for Classification

no code implementations 18 Dec 2020 Anilesh K. Krishnaswamy, Zhihao Jiang, Kangning Wang, Yu Cheng, Kamesh Munagala

Instead, we propose a fairness notion whose guarantee, on each group $g$ in a class $\mathcal{G}$, is relative to the performance of the best classifier on $g$.

Classification Fairness +1

Light dark matter from dark sector decay

no code implementations 3 Dec 2020 Yu Cheng, Wei Liao

We find that the mass of the dark sector singlet fermion can be GeV scale or MeV scale and the interaction of the dark sector singlet fermion is very weak.

High Energy Physics - Phenomenology

DSAM: A Distance Shrinking with Angular Marginalizing Loss for High Performance Vehicle Re-identification

no code implementations 12 Nov 2020 Jiangtao Kong, Yu Cheng, Benjia Zhou, Kai Li, Junliang Xing

To obtain a high-performance vehicle ReID model, we present a novel Distance Shrinking with Angular Marginalizing (DSAM) loss function to perform hybrid learning in both the Original Feature Space (OFS) and the Feature Angular Space (FAS) using the local verification and the global identification information.

Person Re-Identification Vehicle Re-Identification

Object Tracking Using Spatio-Temporal Future Prediction

no code implementations 15 Oct 2020 Yuan Liu, Ruoteng Li, Robby T. Tan, Yu Cheng, Xiubao Sui

Our trajectory prediction module predicts the target object's locations in the current and future frames based on the object's past trajectory.

Future prediction Object Tracking +1

Cross-Thought for Sentence Encoder Pre-training

1 code implementation EMNLP 2020 Shuohang Wang, Yuwei Fang, Siqi Sun, Zhe Gan, Yu Cheng, Jing Jiang, Jingjing Liu

In this paper, we propose Cross-Thought, a novel approach to pre-training sequence encoders, which is instrumental in building reusable sequence embeddings for large-scale NLP tasks such as question answering.

Information Retrieval Language Modelling +2

Multi-Fact Correction in Abstractive Text Summarization

no code implementations EMNLP 2020 Yue Dong, Shuohang Wang, Zhe Gan, Yu Cheng, Jackie Chi Kit Cheung, Jingjing Liu

Pre-trained neural abstractive summarization systems have dominated extractive strategies on news summarization performance, at least in terms of ROUGE.

Abstractive Text Summarization Question Answering

Efficient Robust Training via Backward Smoothing

1 code implementation 3 Oct 2020 Jinghui Chen, Yu Cheng, Zhe Gan, Quanquan Gu, Jingjing Liu

Adversarial training is so far the most effective strategy in defending against adversarial examples.

Contrastive Distillation on Intermediate Representations for Language Model Compression

1 code implementation EMNLP 2020 Siqi Sun, Zhe Gan, Yu Cheng, Yuwei Fang, Shuohang Wang, Jingjing Liu

Existing language model compression methods mostly use a simple L2 loss to distill knowledge in the intermediate representations of a large BERT model to a smaller one.

Knowledge Distillation Language Modelling +1
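
The contrastive alternative to a plain L2 distillation loss can be sketched as an InfoNCE-style objective over a batch: each student representation should match the teacher representation of the same input and differ from those of other inputs. A hand-rolled NumPy sketch, not the paper's implementation:

```python
import numpy as np

def contrastive_distill_loss(student_h, teacher_h, temperature=0.1):
    """InfoNCE-style distillation: positives sit on the diagonal of the
    student-teacher cosine-similarity matrix."""
    s = student_h / np.linalg.norm(student_h, axis=1, keepdims=True)
    t = teacher_h / np.linalg.norm(teacher_h, axis=1, keepdims=True)
    logits = s @ t.T / temperature               # (batch, batch)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))
aligned = contrastive_distill_loss(x, x)                    # matched pairs
mismatched = contrastive_distill_loss(x, np.roll(x, 1, axis=0))
```

Matched student/teacher pairs yield a much smaller loss than mismatched ones, which is the signal the distilled student is trained on.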

Fine-grained Iterative Attention Network for Temporal Language Localization in Videos

no code implementations 6 Aug 2020 Xiaoye Qu, Pengwei Tang, Zhikang Zhou, Yu Cheng, Jianfeng Dong, Pan Zhou

In this paper, we propose a Fine-grained Iterative Attention Network (FIAN) that consists of an iterative attention module for bilateral query-video information extraction.

Graph Optimal Transport for Cross-Domain Alignment

1 code implementation ICML 2020 Liqun Chen, Zhe Gan, Yu Cheng, Linjie Li, Lawrence Carin, Jingjing Liu

In GOT, cross-domain alignment is formulated as a graph matching problem, by representing entities into a dynamically-constructed graph.

Graph Matching Image Captioning +5
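
Graph-matching-as-transport objectives of this kind are typically solved with entropic-regularized optimal transport. The Sinkhorn sketch below is a generic solver under uniform marginals, not the paper's exact procedure:

```python
import numpy as np

def sinkhorn(cost, reg=0.5, iters=2000):
    """Entropic-OT transport plan: alternately rescale rows and columns
    of the Gibbs kernel K = exp(-cost/reg) to match uniform marginals."""
    n, m = cost.shape
    K = np.exp(-cost / reg)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    v = np.ones(m)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]   # transport plan

# toy cost between two sets of 4 nodes: cheap to match i with i
cost = np.abs(np.arange(4)[:, None] - np.arange(4)[None, :]).astype(float)
P = sinkhorn(cost)
```

The returned plan `P` is non-negative with (approximately) uniform row and column sums; its mass concentrates on low-cost pairings.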

MaxVA: Fast Adaptation of Step Sizes by Maximizing Observed Variance of Gradients

1 code implementation 21 Jun 2020 Chen Zhu, Yu Cheng, Zhe Gan, Furong Huang, Jingjing Liu, Tom Goldstein

Adaptive gradient methods such as RMSProp and Adam use exponential moving estimate of the squared gradient to compute adaptive step sizes, achieving better convergence than SGD in face of noisy objectives.

Image Classification Language understanding +4
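
The baseline MaxVA modifies, an exponential moving estimate of the squared gradient, looks like the RMSProp sketch below. (MaxVA instead adapts the averaging coefficient to maximize the observed gradient variance, which is not shown here.)

```python
import numpy as np

def rmsprop_step(w, grad, state, lr=0.05, beta=0.9, eps=1e-8):
    """One RMSProp step: track an exponential moving average of the
    squared gradient and divide the step by its square root."""
    state = beta * state + (1.0 - beta) * grad ** 2
    w = w - lr * grad / (np.sqrt(state) + eps)
    return w, state

# minimize f(w) = w^2 starting from w = 5
w, state = 5.0, 0.0
for _ in range(500):
    grad = 2.0 * w
    w, state = rmsprop_step(w, grad, state)
```

After a few hundred steps the iterate settles into a small oscillation around the minimum, with amplitude on the order of the learning rate.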

Behind the Scene: Revealing the Secrets of Pre-trained Vision-and-Language Models

no code implementations ECCV 2020 Jize Cao, Zhe Gan, Yu Cheng, Licheng Yu, Yen-Chun Chen, Jingjing Liu

To reveal the secrets behind the scene of these powerful models, we present VALUE (Vision-And-Language Understanding Evaluation), a set of meticulously designed probing tasks (e.g., Visual Coreference Resolution, Visual Relation Detection, Linguistic Probing Tasks) generalizable to standard pre-trained V+L models, aiming to decipher the inner workings of multimodal pre-training (e.g., the implicit knowledge garnered in individual attention heads, the inherent cross-modal alignment learned through contextualized multimodal embeddings).

Coreference Resolution Language understanding

Low-shot Object Detection via Classification Refinement

no code implementations 6 May 2020 Yiting Li, Yu Cheng, Lu Liu, Sichao Tian, Haiyue Zhu, Cheng Xiang, Prahlad Vadakkepat, Cheksing Teo, Tongheng Lee

Specifically, we sample false positive proposals from a base detector to train a separate classification correction network.

Classification General Classification +1

High-Dimensional Robust Mean Estimation via Gradient Descent

no code implementations ICML 2020 Yu Cheng, Ilias Diakonikolas, Rong Ge, Mahdi Soltanolkotabi

We study the problem of high-dimensional robust mean estimation in the presence of a constant fraction of adversarial outliers.

APo-VAE: Text Generation in Hyperbolic Space

no code implementations NAACL 2021 Shuyang Dai, Zhe Gan, Yu Cheng, Chenyang Tao, Lawrence Carin, Jingjing Liu

In this paper, we investigate text generation in a hyperbolic latent space to learn continuous hierarchical representations.

Hierarchical structure Language Modelling +1

Contextual Text Style Transfer

no code implementations Findings of the Association for Computational Linguistics 2020 Yu Cheng, Zhe Gan, Yizhe Zhang, Oussama Elachqar, Dianqi Li, Jingjing Liu

To realize high-quality style transfer with natural context preservation, we propose a Context-Aware Style Transfer (CAST) model, which uses two separate encoders for each input sentence and its surrounding context.

Style Transfer Text Style Transfer +1

Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning

1 code implementation CVPR 2020 Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, Zhangyang Wang

We conduct extensive experiments to demonstrate that the proposed framework achieves large performance margins (e.g., 3.83% on robust accuracy and 1.3% on standard accuracy, on the CIFAR-10 dataset), compared with the conventional end-to-end adversarial training baseline.

Adversarial Robustness Fine-tuning

BachGAN: High-Resolution Image Synthesis from Salient Object Layout

1 code implementation CVPR 2020 Yandong Li, Yu Cheng, Zhe Gan, Licheng Yu, Liqiang Wang, Jingjing Liu

We propose a new task towards more practical application for image generation - high-quality image synthesis from salient object layout.

Image Generation

VIOLIN: A Large-Scale Dataset for Video-and-Language Inference

1 code implementation CVPR 2020 Jingzhou Liu, Wenhu Chen, Yu Cheng, Zhe Gan, Licheng Yu, Yiming Yang, Jingjing Liu

We introduce a new task, Video-and-Language Inference, for joint multimodal understanding of video and text.

Constrained Deep Reinforcement Learning for Energy Sustainable Multi-UAV based Random Access IoT Networks with NOMA

no code implementations 31 Jan 2020 Sami Khairy, Prasanna Balaprakash, Lin X. Cai, Yu Cheng

In this paper, we apply the Non-Orthogonal Multiple Access (NOMA) technique to improve the massive channel access of a wireless IoT network where solar-powered Unmanned Aerial Vehicles (UAVs) relay data from IoT devices to remote servers.

Distinguishing Distributions When Samples Are Strategically Transformed

no code implementations NeurIPS 2019 Hanrui Zhang, Yu Cheng, Vincent Conitzer

In other settings, the principal may not even be able to observe samples directly; instead, she must rely on signals that the agent is able to send based on the samples that he obtains, and he will choose these signals strategically.

Towards Better Understanding of Disentangled Representations via Mutual Information

no code implementations 25 Nov 2019 Xiaojiang Yang, Wendong Bi, Yitong Sun, Yu Cheng, Junchi Yan

Most existing works on disentangled representation learning are solely built upon a marginal independence assumption: all factors in disentangled representations should be statistically independent.

Representation Learning

INSET: Sentence Infilling with INter-SEntential Transformer

1 code implementation ACL 2020 Yichen Huang, Yizhe Zhang, Oussama Elachqar, Yu Cheng

Missing sentence generation (or sentence infilling) fosters a wide range of applications in natural language generation, such as document auto-completion and meeting note expansion.

Natural Language Understanding Text Generation

Distilling Knowledge Learned in BERT for Text Generation

1 code implementation ACL 2020 Yen-Chun Chen, Zhe Gan, Yu Cheng, Jingzhou Liu, Jingjing Liu

Experiments show that the proposed approach significantly outperforms strong Transformer baselines on multiple language generation tasks such as machine translation and text summarization.

Language Modelling Language understanding +4

Discourse-Aware Neural Extractive Text Summarization

1 code implementation ACL 2020 Jiacheng Xu, Zhe Gan, Yu Cheng, Jingjing Liu

Recently BERT has been adopted for document encoding in state-of-the-art text summarization models.

Extractive Text Summarization

Tell-the-difference: Fine-grained Visual Descriptor via a Discriminating Referee

no code implementations 14 Oct 2019 Shuangjie Xu, Feng Xu, Yu Cheng, Pan Zhou

In this paper, we investigate a novel problem of telling the difference between image pairs in natural language.

Image Captioning

Meta Module Network for Compositional Visual Reasoning

1 code implementation 8 Oct 2019 Wenhu Chen, Zhe Gan, Linjie Li, Yu Cheng, William Wang, Jingjing Liu

To design a more powerful NMN architecture for practical use, we propose Meta Module Network (MMN) centered on a novel meta module, which can take in function recipes and morph into diverse instance modules dynamically.

Visual Reasoning

UNITER: Learning UNiversal Image-TExt Representations

no code implementations 25 Sep 2019 Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, Jingjing Liu

Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are jointly processed for visual and textual understanding.

Language Modelling Question Answering +5

FreeLB: Enhanced Adversarial Training for Natural Language Understanding

2 code implementations ICLR 2020 Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, Jingjing Liu

Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models.

Language understanding Natural Language Understanding +1
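
The inner maximization over label-preserving input perturbations can be sketched as projected gradient ascent in embedding space. (FreeLB additionally accumulates parameter gradients across the ascent steps, which is omitted here; `grad_fn` is a stand-in for the loss gradient.)

```python
import numpy as np

def pgd_perturb(emb, grad_fn, eps=0.1, alpha=0.03, steps=3):
    """Ascend the loss gradient in embedding space, projecting the
    perturbation back into an L2 ball of radius eps each step."""
    delta = np.zeros_like(emb)
    for _ in range(steps):
        g = grad_fn(emb + delta)
        delta = delta + alpha * g / (np.linalg.norm(g) + 1e-12)
        norm = np.linalg.norm(delta)
        if norm > eps:                  # project onto the eps-ball
            delta = delta * (eps / norm)
    return delta

# toy loss L(x) = ||x||^2 with gradient 2x: ascent should grow the norm
emb = np.ones(4)
delta = pgd_perturb(emb, lambda x: 2.0 * x)
```

The perturbation stays inside the budget while increasing the toy loss, which is exactly the adversary the outer training loop then minimizes against.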

UNITER: UNiversal Image-TExt Representation Learning

5 code implementations ECCV 2020 Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, Jingjing Liu

Different from previous work that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text).

Language Modelling Question Answering +6

Contrastively Smoothed Class Alignment for Unsupervised Domain Adaptation

no code implementations 11 Sep 2019 Shuyang Dai, Yu Cheng, Yizhe Zhang, Zhe Gan, Jingjing Liu, Lawrence Carin

Recent unsupervised approaches to domain adaptation primarily focus on minimizing the gap between the source and the target domains through refining the feature generator, in order to learn a better alignment between the two domains.

Unsupervised Domain Adaptation

What Makes A Good Story? Designing Composite Rewards for Visual Storytelling

1 code implementation 11 Sep 2019 Junjie Hu, Yu Cheng, Zhe Gan, Jingjing Liu, Jianfeng Gao, Graham Neubig

Previous storytelling approaches mostly focused on optimizing traditional metrics such as BLEU, ROUGE and CIDEr.

Visual Storytelling

Patient Knowledge Distillation for BERT Model Compression

2 code implementations IJCNLP 2019 Siqi Sun, Yu Cheng, Zhe Gan, Jingjing Liu

Pre-trained language models such as BERT have proven to be highly effective for natural language processing (NLP) tasks.

Knowledge Distillation Model Compression
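
The vanilla distillation term that Patient-KD builds on (it adds "patient" losses on intermediate layers, not sketched here) is the KL divergence between temperature-softened teacher and student output distributions:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in classic knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T ** 2) * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()

teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[1.0, 1.0, -0.5]])
loss = distillation_loss(student, teacher)
```

The loss is zero only when the two softened distributions coincide, so minimizing it pulls the student's full output distribution toward the teacher's.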

EnlightenGAN: Deep Light Enhancement without Paired Supervision

8 code implementations 17 Jun 2019 Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, Zhangyang Wang

Deep learning-based methods have achieved remarkable success in image restoration and enhancement, but are they still competitive when there is a lack of paired training data?

Image Restoration Low-Light Image Enhancement

Faster Algorithms for High-Dimensional Robust Covariance Estimation

no code implementations 11 Jun 2019 Yu Cheng, Ilias Diakonikolas, Rong Ge, David Woodruff

We study the problem of estimating the covariance matrix of a high-dimensional distribution when a small constant fraction of the samples can be arbitrarily corrupted.

Adversarial Category Alignment Network for Cross-domain Sentiment Classification

no code implementations NAACL 2019 Xiaoye Qu, Zhikang Zou, Yu Cheng, Yang Yang, Pan Zhou

Cross-domain sentiment classification aims to predict sentiment polarity on a target domain utilizing a classifier learned from a source domain.

Classification General Classification +1

A Hybrid Approach with Optimization and Metric-based Meta-Learner for Few-Shot Learning

no code implementations 4 Apr 2019 Duo Wang, Yu Cheng, Mo Yu, Xiaoxiao Guo, Tao Zhang

The task-specific classifiers are required to be homogeneous-structured to ease the parameter prediction, so the meta-learning approaches could only handle few-shot learning problems where the tasks share a uniform number of classes.

Few-Shot Learning General Classification +3

Relation-Aware Graph Attention Network for Visual Question Answering

1 code implementation ICCV 2019 Linjie Li, Zhe Gan, Yu Cheng, Jingjing Liu

In order to answer semantically-complicated questions about an image, a Visual Question Answering (VQA) model needs to fully understand the visual scene in the image, especially the interactive dynamics between different objects.

Graph Attention Question Answering +1

POP-CNN: Predicting Odor's Pleasantness with Convolutional Neural Network

no code implementations 19 Mar 2019 Danli Wu, Yu Cheng, Dehan Luo, Kin-Yeung Wong, Kevin Hung, Zhijing Yang

Predicting odor's pleasantness simplifies the evaluation of odors and has the potential to be applied in perfumes and environmental monitoring industry.

Measuring Patient Similarities via a Deep Architecture with Medical Concept Embedding

1 code implementation 9 Feb 2019 Zihao Zhu, Changchang Yin, Buyue Qian, Yu Cheng, Jishang Wei, Fei Wang

One major carrier for conducting patient similarity research is Electronic Health Records(EHRs), which are usually heterogeneous, longitudinal, and sparse.

Multi-step Reasoning via Recurrent Dual Attention for Visual Dialog

no code implementations ACL 2019 Zhe Gan, Yu Cheng, Ahmed El Kholy, Linjie Li, Jingjing Liu, Jianfeng Gao

This paper presents a new model for visual dialog, Recurrent Dual Attention Network (ReDAN), using multi-step reasoning to answer a series of questions about an image.

Question Answering Visual Dialog

Few-shot Learning with Meta Metric Learners

no code implementations 26 Jan 2019 Yu Cheng, Mo Yu, Xiaoxiao Guo, Bo-Wen Zhou

Our meta metric learning approach consists of task-specific learners, that exploit metric learning to handle flexible labels, and a meta learner, that discovers good parameters and gradient descent to specify the metrics in task-specific learners.

Few-Shot Learning Metric Learning

Deep Multimodality Model for Multi-task Multi-view Learning

1 code implementation 25 Jan 2019 Lecheng Zheng, Yu Cheng, Jingrui He

However, there is no existing deep learning algorithm that jointly models task and view dual heterogeneity, particularly for a data set with multiple modalities (text and image mixed data set or text and video mixed data set, etc.).

General Classification Image Classification +1

Sequential Attention GAN for Interactive Image Editing

no code implementations 20 Dec 2018 Yu Cheng, Zhe Gan, Yitong Li, Jingjing Liu, Jianfeng Gao

The main challenges in this sequential and interactive image generation task are two-fold: 1) contextual consistency between a generated image and the provided textual description; 2) step-by-step region-level modification to maintain visual consistency across the generated image sequence in each session.

Text-to-Image Generation

High-Dimensional Robust Mean Estimation in Nearly-Linear Time

no code implementations 23 Nov 2018 Yu Cheng, Ilias Diakonikolas, Rong Ge

We study the fundamental problem of high-dimensional mean estimation in a robust model where a constant fraction of the samples are adversarially corrupted.
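
The corruption model can be illustrated with a much simpler estimator than the paper's nearly-linear-time algorithm: a coordinate-wise trimmed mean already resists a constant fraction of outliers that completely breaks the empirical mean (though, unlike the paper's method, its error grows with the dimension).

```python
import numpy as np

def trimmed_mean(samples, trim=0.15):
    """Coordinate-wise trimmed mean: drop the most extreme values in
    each coordinate before averaging (a simple robust baseline, not
    the paper's algorithm)."""
    n = samples.shape[0]
    k = int(trim * n)
    s = np.sort(samples, axis=0)
    return s[k:n - k].mean(axis=0)

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, size=(900, 5))       # true mean is 0
outliers = np.full((100, 5), 50.0)                # 10% adversarial corruption
data = np.vstack([clean, outliers])
est = trimmed_mean(data)
naive = data.mean(axis=0)                         # dragged to ~5 per coordinate
```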

Bayesian Cycle-Consistent Generative Adversarial Networks via Marginalizing Latent Sampling

1 code implementation 19 Nov 2018 Haoran You, Yu Cheng, Tianheng Cheng, Chunliang Li, Pan Zhou

We evaluate the proposed Bayesian CycleGAN on multiple benchmark datasets, including Cityscapes, Maps, and Monet2photo.

Image-to-Image Translation Semantic Segmentation +1

Dialog-based Interactive Image Retrieval

1 code implementation NeurIPS 2018 Xiaoxiao Guo, Hui Wu, Yu Cheng, Steven Rennie, Gerald Tesauro, Rogerio Schmidt Feris

Experiments on both simulated and real-world data show that 1) our proposed learning framework achieves better accuracy than other supervised and reinforcement learning baselines and 2) user feedback based on natural language rather than pre-specified attributes leads to more effective retrieval results, and a more natural and expressive communication interface.

Image Retrieval Visual Dialog

Understanding Humans in Crowded Scenes: Deep Nested Adversarial Learning and A New Benchmark for Multi-Human Parsing

2 code implementations 10 Apr 2018 Jian Zhao, Jianshu Li, Yu Cheng, Li Zhou, Terence Sim, Shuicheng Yan, Jiashi Feng

Despite the noticeable progress in perceptual tasks like detection, instance segmentation and human parsing, computers still perform unsatisfactorily on visually understanding humans in crowded scenes, such as group behavior analysis, person re-identification and autonomous driving, etc.

Autonomous Driving Instance Segmentation +4

Pedestrian-Synthesis-GAN: Generating Pedestrian Data in Real Scene and Beyond

3 code implementations 5 Apr 2018 Xi Ouyang, Yu Cheng, Yifan Jiang, Chun-Liang Li, Pan Zhou

The results show that our framework can smoothly synthesize pedestrians on background images of variations and different levels of details.

Pedestrian Detection Scene Text Recognition

Non-Convex Matrix Completion Against a Semi-Random Adversary

no code implementations 28 Mar 2018 Yu Cheng, Rong Ge

Matrix completion is a well-studied problem with many machine learning applications.

Matrix Completion
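
A minimal non-convex approach of the kind studied here: factor M ≈ UVᵀ and run gradient descent on the squared error over the observed entries only. A toy NumPy sketch under a benign random mask, not the paper's semi-random-adversary setting:

```python
import numpy as np

def complete_matrix(M_obs, mask, rank=2, lr=0.02, iters=8000, seed=0):
    """Gradient descent on ||mask * (U V^T - M_obs)||_F^2 over the
    low-rank factors U and V."""
    rng = np.random.default_rng(seed)
    n, m = M_obs.shape
    U = 0.1 * rng.normal(size=(n, rank))
    V = 0.1 * rng.normal(size=(m, rank))
    for _ in range(iters):
        R = mask * (U @ V.T - M_obs)       # residual on observed entries
        U, V = U - lr * R @ V, V - lr * R.T @ U
    return U @ V.T

rng = np.random.default_rng(3)
A, B = rng.normal(size=(10, 2)), rng.normal(size=(8, 2))
M = A @ B.T                                # ground-truth rank-2 matrix
mask = rng.random(M.shape) < 0.7           # observe ~70% of entries
M_hat = complete_matrix(M * mask, mask)
rel_err = np.linalg.norm(mask * (M_hat - M)) / np.linalg.norm(mask * M)
```

On this well-conditioned random instance the observed-entry residual shrinks to a small fraction of the signal; the paper's point is that such non-convex methods can be fragile once an adversary perturbs which entries are revealed.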

Deep Nearest Class Mean Model for Incremental Odor Classification

no code implementations 8 Jan 2018 Yu Cheng, Angus Wong, Kevin Hung, Zhizhong Li, Weitong Li, Jun Zhang

That is, the odor datasets are dynamically growing while both training samples and number of classes are increasing over time.

Classification General Classification

On the Distortion of Voting with Multiple Representative Candidates

no code implementations 21 Nov 2017 Yu Cheng, Shaddin Dughmi, David Kempe

Our main result is a clean and tight characterization of positional voting rules that have constant expected distortion (independent of the number of candidates and the metric space).

Sobolev GAN

1 code implementation ICLR 2018 Youssef Mroueh, Chun-Liang Li, Tom Sercu, Anant Raj, Yu Cheng

We show that the Sobolev IPM compares two distributions in high dimensions based on weighted conditional Cumulative Distribution Functions (CDF) of each coordinate on a leave one out basis.

Text Generation

A Survey of Model Compression and Acceleration for Deep Neural Networks

no code implementations 23 Oct 2017 Yu Cheng, Duo Wang, Pan Zhou, Tao Zhang

Methods of parameter pruning and quantization are described first; after that, the other techniques are introduced.

Knowledge Distillation Model Compression +1

Catching Anomalous Distributed Photovoltaics: An Edge-based Multi-modal Anomaly Detection

no code implementations 26 Sep 2017 Devu Manikantan Shilay, Kin Gwn Lorey, Tianshu Weiz, Teems Lovetty, Yu Cheng

A significant challenge in energy system cyber security is the current inability to detect cyber-physical attacks targeting and originating from distributed grid-edge devices such as photovoltaics (PV) panels, smart flexible loads, and electric vehicles.

Anomaly Detection Time Series

Boosting Deep Learning Risk Prediction with Generative Adversarial Networks for Electronic Health Records

no code implementations 6 Sep 2017 Zhengping Che, Yu Cheng, Shuangfei Zhai, Zhaonan Sun, Yan Liu

We use this generative model together with a convolutional neural network (CNN) based prediction model to improve the onset prediction performance.

MMD GAN: Towards Deeper Understanding of Moment Matching Network

2 code implementations NeurIPS 2017 Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, Barnabás Póczos

In this paper, we propose to improve both the model expressiveness of GMMN and its computational efficiency by introducing adversarial kernel learning techniques, as the replacement of a fixed Gaussian kernel in the original GMMN.
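
The fixed-kernel objective being replaced can be written down directly: the (biased) squared MMD between real and generated samples under a Gaussian kernel. MMD GAN learns the kernel adversarially; this sketch keeps it fixed:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y under a
    fixed Gaussian kernel (the GMMN objective)."""
    return (gaussian_kernel(x, x, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2(rng.normal(size=(200, 2)), rng.normal(3.0, 1.0, size=(200, 2)))
```

Two samples from the same distribution give a near-zero statistic, while a mean-shifted sample gives a clearly positive one, which is what the generator is trained to drive down.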

Of the People: Voting Is More Effective with Representative Candidates

no code implementations4 May 2017 Yu Cheng, Shaddin Dughmi, David Kempe

However, we show that independence alone is not enough to achieve the upper bound: even when candidates are drawn independently, if the population of candidates can be different from the voters, then an upper bound of $2$ on the approximation is tight.

Exploiting Convolutional Neural Network for Risk Prediction with Medical Feature Embedding

no code implementations25 Jan 2017 Zhengping Che, Yu Cheng, Zhaonan Sun, Yan Liu

To account for high dimensionality, we use embedded medical features in the CNN model, which preserve the natural medical concepts.

On the Recursive Teaching Dimension of VC Classes

no code implementations NeurIPS 2016 Xi Chen, Yu Cheng, Bo Tang

This is the first upper bound for $RTD(C)$ that depends only on $VCD(C)$, independent of the size of the concept class $|C|$ and its domain size $n$.

S3Pool: Pooling with Stochastic Spatial Sampling

4 code implementations CVPR 2017 Shuangfei Zhai, Hui Wu, Abhishek Kumar, Yu Cheng, Yongxi Lu, Zhongfei Zhang, Rogerio Feris

We view the pooling operation in CNNs as a two-step procedure: first, a pooling window (e.g., $2\times 2$) slides over the feature map with stride one, which leaves the spatial resolution intact, and second, downsampling is performed by selecting one pixel from each non-overlapping pooling window in an often uniform and deterministic (e.g., top-left) manner.

Data Augmentation Image Classification
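The two-step view above can be sketched for a single-channel feature map. This is a simplified illustration, not the paper's implementation: it samples one random pixel per non-overlapping window, whereas S3Pool samples rows and columns grid-wise.

```python
import numpy as np

def s3pool(x, pool=2, rng=None):
    """Simplified S3Pool on a 2-D feature map x (H, W), with H, W divisible by pool."""
    rng = np.random.default_rng(rng)
    h, w = x.shape
    # Step 1: max pooling with window `pool` and stride one; edge padding
    # keeps the spatial resolution intact.
    padded = np.pad(x, ((0, pool - 1), (0, pool - 1)), mode="edge")
    stride1 = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            stride1[i, j] = padded[i:i + pool, j:j + pool].max()
    # Step 2: stochastic downsampling -- pick one random pixel per
    # non-overlapping pool x pool window instead of the fixed top-left one.
    out = np.empty((h // pool, w // pool), dtype=x.dtype)
    for i in range(h // pool):
        for j in range(w // pool):
            di, dj = rng.integers(pool, size=2)
            out[i, j] = stride1[i * pool + di, j * pool + dj]
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
y = s3pool(x, pool=2, rng=0)
```

Because step 2 is random, each forward pass sees a slightly different downsampling, which is why the method doubles as implicit data augmentation.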

Generative Adversarial Networks as Variational Training of Energy Based Models

1 code implementation6 Nov 2016 Shuangfei Zhai, Yu Cheng, Rogerio Feris, Zhongfei Zhang

We propose VGAN, which works by minimizing a variational lower bound of the negative log likelihood (NLL) of an energy based model (EBM), where the model density $p(\mathbf{x})$ is approximated by a variational distribution $q(\mathbf{x})$ that is easy to sample from.
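The bound in question can be reconstructed from the abstract's setup (a sketch, with $Z$ the partition function, $H(q)$ the entropy of $q$, and $p(\mathbf{x}) = e^{-E(\mathbf{x})}/Z$):

```latex
\begin{aligned}
\mathrm{NLL}(E) &= \mathbb{E}_{\mathbf{x}\sim p_{\text{data}}}[E(\mathbf{x})] + \log Z,\\
\log Z &= \log \mathbb{E}_{\mathbf{x}\sim q}\!\left[\frac{e^{-E(\mathbf{x})}}{q(\mathbf{x})}\right]
        \;\ge\; \mathbb{E}_{\mathbf{x}\sim q}[-E(\mathbf{x})] + H(q)
        \quad \text{(Jensen)},\\
\mathrm{NLL}(E) &\ge \mathbb{E}_{p_{\text{data}}}[E(\mathbf{x})] - \mathbb{E}_{q}[E(\mathbf{x})] + H(q).
\end{aligned}
```

Optimizing the energy model against the bound while $q$ tightens it yields a minimax game of the GAN form, which is the connection the title draws.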

Doubly Convolutional Neural Networks

no code implementations NeurIPS 2016 Shuangfei Zhai, Yu Cheng, Weining Lu, Zhongfei Zhang

Building large models with parameter sharing accounts for most of the success of deep convolutional neural networks (CNNs).

Image Classification

Robust Learning of Fixed-Structure Bayesian Networks

1 code implementation NeurIPS 2018 Yu Cheng, Ilias Diakonikolas, Daniel Kane, Alistair Stewart

We investigate the problem of learning Bayesian networks in a robust model where an $\epsilon$-fraction of the samples are adversarially corrupted.

Deep Structured Energy Based Models for Anomaly Detection

1 code implementation25 May 2016 Shuangfei Zhai, Yu Cheng, Weining Lu, Zhongfei Zhang

In this paper, we attack the anomaly detection problem by directly modeling the data distribution with deep architectures.

Anomaly Detection
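As a toy illustration of the underlying idea, scoring anomalies by energy, the sketch below fits a diagonal-Gaussian energy to normal data and flags high-energy points; `fit_quadratic_energy` is a hypothetical stand-in for the deep architectures the paper actually uses.

```python
import numpy as np

def fit_quadratic_energy(train):
    # Toy energy model: E(x) = ||(x - mu) / sigma||^2 under a diagonal
    # Gaussian fitted to normal (non-anomalous) training data.
    mu, sigma = train.mean(axis=0), train.std(axis=0) + 1e-8
    return lambda x: np.sum(((x - mu) / sigma) ** 2, axis=-1)

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, (500, 2))
energy = fit_quadratic_energy(normal)
inlier, outlier = np.zeros((1, 2)), np.full((1, 2), 6.0)
# Higher energy corresponds to lower model density, i.e. more anomalous.
```

Points far from the modeled data distribution receive high energy and can be thresholded as anomalies.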

Walk and Learn: Facial Attribute Representation Learning from Egocentric Video and Contextual Data

no code implementations CVPR 2016 Jing Wang, Yu Cheng, Rogerio Schmidt Feris

These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features.

Facial Attribute Classification Representation Learning

Spectral Sparsification of Random-Walk Matrix Polynomials

no code implementations12 Feb 2015 Dehua Cheng, Yu Cheng, Yan Liu, Richard Peng, Shang-Hua Teng

Our work is particularly motivated by the algorithmic problems for speeding up the classic Newton's method in applications such as computing the inverse square-root of the precision matrix of a Gaussian random field, as well as computing the $q$th-root transition (for $q\geq1$) in a time-reversible Markov model.

An exploration of parameter redundancy in deep networks with circulant projections

no code implementations ICCV 2015 Yu Cheng, Felix X. Yu, Rogerio S. Feris, Sanjiv Kumar, Alok Choudhary, Shih-Fu Chang

We explore the redundancy of parameters in deep neural networks by replacing the conventional linear projection in fully-connected layers with the circulant projection.
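The efficiency gain behind the circulant projection comes from the convolution theorem: multiplying by a $d \times d$ circulant matrix costs $O(d \log d)$ via the FFT instead of $O(d^2)$. A minimal NumPy sketch of this identity (illustrative, not the paper's implementation):

```python
import numpy as np

def circulant_projection(r, x):
    # Multiply by the circulant matrix with first column r in O(d log d):
    # circ(r) @ x == ifft(fft(r) * fft(x))  (circular convolution theorem).
    return np.real(np.fft.ifft(np.fft.fft(r) * np.fft.fft(x)))

def circulant_matrix(r):
    # Dense d x d circulant matrix, built column by column, for checking.
    return np.column_stack([np.roll(r, k) for k in range(len(r))])

rng = np.random.default_rng(0)
r, x = rng.normal(size=8), rng.normal(size=8)
y = circulant_projection(r, x)
```

The fully-connected layer's $d^2$ free parameters shrink to the $d$ entries of $r$, which is the parameter redundancy the paper exploits.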

Temporal Sequence Modeling for Video Event Detection

no code implementations CVPR 2014 Yu Cheng, Quanfu Fan, Sharath Pankanti, Alok Choudhary

Based on this idea, we represent a video by a sequence of visual words learnt from the video, and apply the Sequence Memoizer [21] to capture long-range dependencies in a temporal context in the visual sequence.

Event Detection General Classification
