Search Results for author: Kuan-Chuan Peng

Found 15 papers, 5 papers with code

Long-Tailed Anomaly Detection with Learnable Class Names

no code implementations29 Mar 2024 Chih-Hui Ho, Kuan-Chuan Peng, Nuno Vasconcelos

Phase 2 then learns the parameters of the reconstruction and classification modules of LTAD.

Anomaly Detection

Tensor Factorization for Leveraging Cross-Modal Knowledge in Data-Constrained Infrared Object Detection

no code implementations28 Sep 2023 Manish Sharma, Moitreya Chatterjee, Kuan-Chuan Peng, Suhas Lohit, Michael Jones

We first pretrain these factor matrices on the RGB modality, for which plenty of training data are assumed to exist, and then augment only a few trainable parameters for training on the IR modality to avoid over-fitting, while encouraging them to capture complementary cues from those trained only on the RGB modality.

Object Detection +1
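The excerpt above describes pretraining factorized weights on abundant RGB data and then training only a few extra parameters for the infrared (IR) modality. Below is a minimal PyTorch sketch of that general idea, using a low-rank weight factorization plus a small trainable residual for IR; the rank sizes, the residual form, and the layer shapes are illustrative assumptions rather than the paper's exact tensor factorization.

```python
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    """Weight W is factorized as U @ V (rank r). U and V are pretrained on RGB;
    for the IR modality only a small low-rank residual A @ B is trained."""
    def __init__(self, in_dim, out_dim, rank=16, adapter_rank=4):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_dim, rank) * 0.02)   # RGB-pretrained factor
        self.V = nn.Parameter(torch.randn(rank, in_dim) * 0.02)    # RGB-pretrained factor
        self.A = nn.Parameter(torch.zeros(out_dim, adapter_rank))  # few IR-specific params
        self.B = nn.Parameter(torch.randn(adapter_rank, in_dim) * 0.02)

    def forward(self, x):
        W = self.U @ self.V + self.A @ self.B
        return x @ W.t()

layer = FactorizedLinear(512, 256)
out = layer(torch.randn(8, 512))          # -> (8, 256)

# Phase 1: pretrain all factors on the RGB modality (training loop omitted).
# Phase 2: freeze the RGB factors and train only the small IR-specific residual,
# which limits over-fitting when IR training data are scarce.
layer.U.requires_grad_(False)
layer.V.requires_grad_(False)
ir_params = [p for p in layer.parameters() if p.requires_grad]  # just A and B
optimizer = torch.optim.Adam(ir_params, lr=1e-4)
```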

Are Deep Neural Networks SMARTer than Second Graders?

1 code implementation CVPR 2023 Anoop Cherian, Kuan-Chuan Peng, Suhas Lohit, Kevin A. Smith, Joshua B. Tenenbaum

To answer this question, we propose SMART: a Simple Multimodal Algorithmic Reasoning Task and the associated SMART-101 dataset, for evaluating the abstraction, deduction, and generalization abilities of neural networks in solving visuo-linguistic puzzles designed specifically for children in the 6-8 age group.

Language Modelling Meta-Learning +1

Cross-Domain Video Anomaly Detection without Target Domain Adaptation

no code implementations14 Dec 2022 Abhishek Aich, Kuan-Chuan Peng, Amit K. Roy-Chowdhury

Most cross-domain unsupervised Video Anomaly Detection (VAD) works assume that at least a few task-relevant target-domain training samples are available for adaptation from the source to the target domain.

Anomaly Detection Domain Adaptation +1

Cross-Modal Knowledge Transfer Without Task-Relevant Source Data

no code implementations8 Sep 2022 Sk Miraj Ahmed, Suhas Lohit, Kuan-Chuan Peng, Michael J. Jones, Amit K. Roy-Chowdhury

In such cases, transferring knowledge from a neural network trained on a well-labeled large dataset in the source modality (RGB) to a neural network that works on a target modality (depth, infrared, etc.) is of great value.

Autonomous Navigation Transfer Learning

Towards To-a-T Spatio-Temporal Focus for Skeleton-Based Action Recognition

no code implementations4 Feb 2022 Lipeng Ke, Kuan-Chuan Peng, Siwei Lyu

Graph Convolutional Networks (GCNs) have been widely used to model the high-order dynamic dependencies for skeleton-based action recognition.

Action Recognition Skeleton Based Action Recognition
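As a point of reference for the excerpt above, here is a minimal PyTorch sketch of a single spatial graph-convolution layer over skeleton joints, in the spirit of standard GCN baselines such as ST-GCN; the adjacency normalization and tensor shapes are assumptions, and the To-a-T spatio-temporal focus modules proposed in the paper are not shown.

```python
import torch
import torch.nn as nn

class SpatialGraphConv(nn.Module):
    """One spatial graph convolution over skeleton joints: features are mixed
    along the joint dimension by a row-normalized adjacency matrix and then
    projected with a 1x1 convolution."""
    def __init__(self, in_channels, out_channels, adjacency):
        super().__init__()
        A = adjacency + torch.eye(adjacency.size(0))        # add self-loops
        self.register_buffer("A_norm", A / A.sum(dim=1, keepdim=True))
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):                                   # x: (batch, C, frames, joints)
        x = torch.einsum("nctv,vw->nctw", x, self.A_norm)   # aggregate neighboring joints
        return self.proj(x)

# Toy 25-joint skeleton (random adjacency purely for illustration).
A = (torch.rand(25, 25) > 0.8).float()
layer = SpatialGraphConv(3, 64, A)
out = layer(torch.randn(8, 3, 32, 25))                      # -> (8, 64, 32, 25)
```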

Iterative Self Knowledge Distillation -- From Pothole Classification to Fine-Grained and COVID Recognition

no code implementations4 Feb 2022 Kuan-Chuan Peng

Pothole classification has become an important task for road inspection vehicles to save drivers from potential car accidents and repair bills.

Classification Self-Knowledge Distillation

ViewSynth: Learning Local Features from Depth using View Synthesis

1 code implementation22 Nov 2019 Jisan Mahmud, Rajat Vikram Singh, Peri Akiva, Spondon Kundu, Kuan-Chuan Peng, Jan-Michael Frahm

By learning view synthesis, we explicitly encourage the feature extractor to encode information about not only the visible, but also the occluded parts of the scene.

Camera Localization Keypoint Detection
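The excerpt above describes learning local depth features jointly with a view-synthesis objective. The PyTorch sketch below shows the general pattern of such an auxiliary loss; the network shapes are arbitrary, and the conditioning on the relative camera pose between the two views, which the actual method would require, is omitted for brevity.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(                 # dense local-feature extractor for depth maps
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
synth_head = nn.Sequential(              # predicts the depth map seen from another viewpoint
    nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

depth_v1 = torch.rand(4, 1, 128, 128)    # observed view
depth_v2 = torch.rand(4, 1, 128, 128)    # held-out view of the same scene

feats = encoder(depth_v1)
pred_v2 = synth_head(feats)
view_synth_loss = nn.functional.l1_loss(pred_v2, depth_v2)

# Added to the main keypoint/descriptor losses, this term pushes the features to
# carry information about scene content that is occluded in the observed view.
view_synth_loss.backward()
```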

Attention Guided Anomaly Localization in Images

no code implementations ECCV 2020 Shashanka Venkataramanan, Kuan-Chuan Peng, Rajat Vikram Singh, Abhijit Mahalanobis

Without the need for anomalous training images, we propose Convolutional Adversarial Variational autoencoder with Guided Attention (CAVGA), which localizes the anomaly with a convolutional latent variable to preserve the spatial information.

Ranked #74 on Anomaly Detection on MVTec AD (Segmentation AUROC metric)

Anomaly Detection
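Below is a minimal PyTorch sketch of the core component mentioned in the excerpt: a convolutional variational autoencoder whose latent variable is a spatial feature map rather than a flat vector, so reconstruction errors can be localized. The adversarial training and guided-attention terms of CAVGA are left out, and all layer sizes and loss weights are assumptions.

```python
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    """Convolutional VAE with a spatial (convolutional) latent variable."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.to_mu = nn.Conv2d(64, 16, 1)       # spatial latent mean
        self.to_logvar = nn.Conv2d(64, 16, 1)   # spatial latent log-variance
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(16, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recon = self.dec(z)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl

model = ConvVAE()
x = torch.rand(2, 3, 256, 256)               # normal (non-anomalous) training images
recon, kl = model(x)
loss = nn.functional.mse_loss(recon, x) + 1e-3 * kl
anomaly_map = (recon - x).abs().mean(dim=1)  # per-pixel error localizes anomalies at test time
```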

Learning without Memorizing

1 code implementation CVPR 2019 Prithviraj Dhar, Rajat Vikram Singh, Kuan-Chuan Peng, Ziyan Wu, Rama Chellappa

Incremental learning (IL) is an important task aimed at increasing the capability of a trained model, in terms of the number of classes recognizable by the model.

Incremental Learning

Sharpen Focus: Learning with Attention Separability and Consistency

1 code implementation ICCV 2019 Lezi Wang, Ziyan Wu, Srikrishna Karanam, Kuan-Chuan Peng, Rajat Vikram Singh, Bo Liu, Dimitris N. Metaxas

Recent developments in gradient-based attention modeling have seen attention maps emerge as a powerful tool for interpreting convolutional neural networks.

General Classification Image Classification
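For context on the gradient-based attention maps the excerpt refers to, here is a generic Grad-CAM-style computation in PyTorch; the backbone and layer choice are assumptions, and the attention separability and consistency losses proposed in the paper are not implemented here.

```python
import torch
import torchvision

# Weight the last convolutional feature maps by the gradient of a class score,
# giving a coarse map of the image regions that support that class.
model = torchvision.models.resnet18(weights=None).eval()

acts = {}
model.layer4.register_forward_hook(lambda mod, inp, out: acts.update(a=out))

x = torch.rand(1, 3, 224, 224)
score = model(x)[0].max()                           # score of the top class
grads = torch.autograd.grad(score, acts["a"])[0]    # d(score) / d(feature maps)

weights = grads.mean(dim=(2, 3), keepdim=True)      # per-channel importance
cam = torch.relu((weights * acts["a"]).sum(dim=1))  # (1, 7, 7) attention map
cam = cam / (cam.max() + 1e-8)                      # normalize to [0, 1]
```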

Tell Me Where to Look: Guided Attention Inference Network

2 code implementations CVPR 2018 Kunpeng Li, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst, Yun Fu

Weakly supervised learning with only coarse labels can obtain visual explanations of deep neural networks, such as attention maps, by back-propagating gradients.

Object Localization Semantic Segmentation +1

Learning Compositional Visual Concepts with Mutual Consistency

no code implementations CVPR 2018 Yunye Gong, Srikrishna Karanam, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst, Peter C. Doerschuk

Compositionality of semantic concepts in image synthesis and analysis is appealing as it can help in decomposing known and generatively recomposing unknown data.

Data Augmentation Face Verification +1

Zero-Shot Deep Domain Adaptation

no code implementations ECCV 2018 Kuan-Chuan Peng, Ziyan Wu, Jan Ernst

Therefore, the source-domain solution to the task of interest (e.g., a classifier for classification tasks), which is jointly trained with the source-domain representation, can be applied to both the source and target representations.

Classification Domain Adaptation +3
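The excerpt above states the key property of zero-shot deep domain adaptation: a classifier trained jointly with the source-modality representation can be reused on the target modality once a target encoder is aligned to the source encoder, using only task-irrelevant paired data. The PyTorch sketch below illustrates that alignment step; the architectures, loss, and data shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

src_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())  # e.g. RGB
tgt_enc = nn.Sequential(nn.Flatten(), nn.Linear(1 * 32 * 32, 128), nn.ReLU())  # e.g. depth
classifier = nn.Linear(128, 10)

# Step 1: train src_enc + classifier on labeled source-domain data (omitted).

# Step 2: freeze the source encoder and align the target encoder to it using
# task-irrelevant source/target image pairs (no task-relevant target labels needed).
for p in src_enc.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(tgt_enc.parameters(), lr=1e-4)

paired_src = torch.rand(16, 3, 32, 32)   # RGB images from an unrelated dataset
paired_tgt = torch.rand(16, 1, 32, 32)   # depth images of the same scenes
align_loss = nn.functional.mse_loss(tgt_enc(paired_tgt), src_enc(paired_src))
align_loss.backward()
opt.step()

# At test time the source-trained classifier is applied directly to target features.
logits = classifier(tgt_enc(torch.rand(4, 1, 32, 32)))
```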

A Mixed Bag of Emotions: Model, Predict, and Transfer Emotion Distributions

no code implementations CVPR 2015 Kuan-Chuan Peng, Tsuhan Chen, Amir Sadovnik, Andrew C. Gallagher

First, we show through psychovisual studies that different people have different emotional reactions to the same image, which is a strong and novel departure from previous work that only records and predicts a single dominant emotion for each image.
