Search Results for author: Kuan-Chuan Peng

Found 8 papers, 4 papers with code

ViewSynth: Learning Local Features from Depth using View Synthesis

1 code implementation • 22 Nov 2019 • Jisan Mahmud, Rajat Vikram Singh, Peri Akiva, Spondon Kundu, Kuan-Chuan Peng, Jan-Michael Frahm

By learning view synthesis, we explicitly encourage the feature extractor to encode information about not only the visible, but also the occluded parts of the scene.

Camera Localization • Keypoint Detection
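The abstract's idea of using depth for view synthesis rests on the standard pinhole warp: a pixel with known depth in one view can be reprojected into another view. A minimal sketch of that geometric step, not the paper's full feature-learning pipeline (camera intrinsics `K` and the pose `R`, `t` below are illustrative):

```python
import numpy as np

def reproject(u, v, depth, K, R, t):
    """Reproject pixel (u, v) with known depth from one camera view into
    another view, given shared intrinsics K and relative pose (R, t).
    This is the standard depth-based warp underlying view synthesis;
    the paper's learned synthesis builds on top of such geometry."""
    # Back-project the pixel to a 3D point in the first camera's frame.
    p = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # Transform into the second camera's frame and project back to pixels.
    q = K @ (R @ p + t)
    return q[:2] / q[2]

# Sanity check: with an identity pose, a pixel maps to itself.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
print(reproject(100, 50, 2.0, K, np.eye(3), np.zeros(3)))
```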

Attention Guided Anomaly Localization in Images

no code implementations • ECCV 2020 • Shashanka Venkataramanan, Kuan-Chuan Peng, Rajat Vikram Singh, Abhijit Mahalanobis

Without needing anomalous training images, we propose the Convolutional Adversarial Variational autoencoder with Guided Attention (CAVGA), which localizes anomalies with a convolutional latent variable that preserves spatial information.

Ranked #18 on Anomaly Detection on MVTec AD (Segmentation AUROC metric, using extra training data)

Anomaly Detection
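Since CAVGA localizes anomalies from a spatial attention map, the final localization step amounts to normalizing and binarizing that map into a segmentation mask. A minimal sketch of just that step, assuming an attention map is already computed (the threshold value is illustrative, and CAVGA's attention computation from the convolutional latent variable is considerably more involved):

```python
import numpy as np

def anomaly_mask(attention, threshold=0.5):
    """Min-max normalize a spatial attention map to [0, 1] and binarize
    it into a segmentation mask. Only the last localization step of an
    attention-guided anomaly localizer; `threshold` is an assumed value."""
    a = (attention - attention.min()) / (attention.max() - attention.min() + 1e-8)
    return (a >= threshold).astype(np.uint8)

# Toy map: the bottom row attends strongly and survives the threshold.
attn = np.array([[0.1, 0.2],
                 [0.9, 0.8]])
print(anomaly_mask(attn))
```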

Learning without Memorizing

1 code implementation • CVPR 2019 • Prithviraj Dhar, Rajat Vikram Singh, Kuan-Chuan Peng, Ziyan Wu, Rama Chellappa

Incremental learning (IL) is an important task aimed at increasing the capability of a trained model in terms of the number of classes it can recognize.

Incremental Learning
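Learning without Memorizing distills attention rather than stored data: the incrementally trained student is penalized for attending differently from the frozen teacher on old classes. A rough sketch of such an attention distillation term (L1 distance between L2-normalized, vectorized attention maps), as a stand-in for the paper's full objective:

```python
import numpy as np

def attention_distillation_loss(att_teacher, att_student, eps=1e-8):
    """L1 distance between the L2-normalized, vectorized attention maps
    of the frozen teacher and the student, encouraging the student to
    keep attending to the same regions for previously learned classes.
    A sketch of one loss term, not the complete training objective."""
    t = att_teacher.ravel()
    s = att_student.ravel()
    t = t / (np.linalg.norm(t) + eps)
    s = s / (np.linalg.norm(s) + eps)
    return np.abs(t - s).sum()

same = np.ones((7, 7))
print(attention_distillation_loss(same, same))  # identical maps -> ~0 loss
```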

Sharpen Focus: Learning with Attention Separability and Consistency

1 code implementation • ICCV 2019 • Lezi Wang, Ziyan Wu, Srikrishna Karanam, Kuan-Chuan Peng, Rajat Vikram Singh, Bo Liu, Dimitris N. Metaxas

Recent developments in gradient-based attention modeling have seen attention maps emerge as a powerful tool for interpreting convolutional neural networks.

General Classification • Image Classification

Tell Me Where to Look: Guided Attention Inference Network

2 code implementations • CVPR 2018 • Kunpeng Li, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst, Yun Fu

Weakly supervised learning with only coarse labels can obtain visual explanations of deep neural networks, such as attention maps, by back-propagating gradients.

Object Localization • Semantic Segmentation
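GAIN's guidance relies on attention mining: the attended regions are softly erased from the input, and the classifier's score for the true class on the masked image is then driven down, forcing attention to cover the whole object. A sketch of the soft-masking step alone, with illustrative sigmoid-threshold parameters (the paper's training losses and exact hyperparameters are not reproduced here):

```python
import numpy as np

def soft_mask(image, attention, omega=10.0, sigma=0.5):
    """Softly erase the attended regions from the input: pass the
    attention map through a steep sigmoid threshold T, then subtract
    T * image. During GAIN-style training, the classification score on
    the masked image would be minimized. omega/sigma are illustrative."""
    T = 1.0 / (1.0 + np.exp(-omega * (attention - sigma)))  # soft threshold
    if image.ndim == 3:
        T = T[..., None]  # broadcast over color channels
    return image - T * image

img = np.ones((4, 4))
# With zero attention everywhere, the image is left (almost) unchanged.
print(soft_mask(img, np.zeros((4, 4))))
```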

Learning Compositional Visual Concepts with Mutual Consistency

no code implementations • CVPR 2018 • Yunye Gong, Srikrishna Karanam, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst, Peter C. Doerschuk

Compositionality of semantic concepts in image synthesis and analysis is appealing as it can help in decomposing known and generatively recomposing unknown data.

Data Augmentation • Face Verification +1

Zero-Shot Deep Domain Adaptation

no code implementations • ECCV 2018 • Kuan-Chuan Peng, Ziyan Wu, Jan Ernst

Therefore, the source-domain solution for the task of interest (e.g., a classifier for classification tasks), which is jointly trained with the source-domain representation, can be applied to both the source and target representations.

Domain Adaptation • General Classification +1
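The snippet above hinges on the source and target representations being aligned: if a target-domain encoder is trained so its features match the source encoder's features, a classifier trained only on source features transfers unchanged. A simplified stand-in for that alignment objective (the paper's actual training setup, with task-irrelevant dual-domain pairs, is more elaborate):

```python
import numpy as np

def alignment_loss(feat_source, feat_target):
    """Mean squared distance between a (fixed) source-domain feature and
    the trainable target-domain feature for the same sample. Driving
    this to zero makes the source-trained classifier directly applicable
    to target features. A simplified proxy for the paper's losses."""
    return float(np.mean((feat_source - feat_target) ** 2))

f_src = np.array([1.0, 2.0, 3.0])
print(alignment_loss(f_src, f_src))  # perfectly aligned features -> 0.0
```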

A Mixed Bag of Emotions: Model, Predict, and Transfer Emotion Distributions

no code implementations • CVPR 2015 • Kuan-Chuan Peng, Tsuhan Chen, Amir Sadovnik, Andrew C. Gallagher

First, we show through psychovisual studies that different people have different emotional reactions to the same image, which is a strong and novel departure from previous work that only records and predicts a single dominant emotion for each image.
