1 code implementation • CVPR 2019 • Prithviraj Dhar, Rajat Vikram Singh, Kuan-Chuan Peng, Ziyan Wu, Rama Chellappa
Incremental learning (IL) is an important task aimed at increasing the capability of a trained model in terms of the number of classes it can recognize.
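As a concrete illustration of how IL methods commonly retain old-class knowledge while new classes are added, below is a minimal distillation-based sketch; the models, data loader, and hyperparameters are hypothetical placeholders, not the paper's exact formulation.

```python
# Minimal sketch of distillation-based class-incremental learning.
# `old_model`, `new_model`, and `loader` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def incremental_step(new_model, old_model, loader, optimizer,
                     num_old_classes, T=2.0, alpha=0.5):
    old_model.eval()  # frozen teacher trained on the old classes
    for images, labels in loader:  # labels cover old + new classes
        logits = new_model(images)
        # Standard cross-entropy on all classes currently known.
        ce = F.cross_entropy(logits, labels)
        # Distillation keeps old-class responses close to the teacher's.
        with torch.no_grad():
            old_logits = old_model(images)
        kd = F.kl_div(
            F.log_softmax(logits[:, :num_old_classes] / T, dim=1),
            F.softmax(old_logits / T, dim=1),
            reduction="batchmean",
        ) * T * T
        loss = ce + alpha * kd
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```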
2 code implementations • CVPR 2018 • Kunpeng Li, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst, Yun Fu
Weakly supervised learning with only coarse labels can obtain visual explanations of deep neural networks, such as attention maps, by back-propagating gradients.
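A minimal Grad-CAM-style sketch of this idea: an attention map is obtained by back-propagating a class score to an intermediate layer and weighting its activations by the resulting gradients. The backbone, layer name ("layer4"), and input are illustrative assumptions, not the paper's setup.

```python
# Grad-CAM-style attention map from back-propagated gradients,
# assuming a torchvision ResNet backbone.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats, grads = {}, {}

def fwd_hook(module, inputs, output):
    feats["x"] = output

def bwd_hook(module, grad_in, grad_out):
    grads["x"] = grad_out[0]

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

image = torch.randn(1, 3, 224, 224)           # stand-in input
score = model(image)[0].max()                 # top-class score
model.zero_grad()
score.backward()                              # back-propagate gradients

weights = grads["x"].mean(dim=(2, 3), keepdim=True)   # channel importance
cam = F.relu((weights * feats["x"]).sum(dim=1))       # weighted activation sum
cam = F.interpolate(cam[None], size=image.shape[2:],
                    mode="bilinear", align_corners=False)[0]
```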
1 code implementation • CVPR 2023 • Anoop Cherian, Kuan-Chuan Peng, Suhas Lohit, Kevin A. Smith, Joshua B. Tenenbaum
To answer this question, we propose SMART: a Simple Multimodal Algorithmic Reasoning Task and the associated SMART-101 dataset, for evaluating the abstraction, deduction, and generalization abilities of neural networks in solving visuo-linguistic puzzles designed specifically for children in the 6–8 age group.
1 code implementation • ICCV 2019 • Lezi Wang, Ziyan Wu, Srikrishna Karanam, Kuan-Chuan Peng, Rajat Vikram Singh, Bo Liu, Dimitris N. Metaxas
Recent developments in gradient-based attention modeling have seen attention maps emerge as a powerful tool for interpreting convolutional neural networks.
1 code implementation • 22 Nov 2019 • Jisan Mahmud, Rajat Vikram Singh, Peri Akiva, Spondon Kundu, Kuan-Chuan Peng, Jan-Michael Frahm
By learning view synthesis, we explicitly encourage the feature extractor to encode information about not only the visible, but also the occluded parts of the scene.
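A hedged sketch of view synthesis as a self-supervised objective: predicting a second view from the features of the first pushes the encoder to represent geometry beyond what is directly visible. The toy architecture below is illustrative, not the paper's network.

```python
# View-synthesis pretraining sketch: encode one view, condition on the
# relative camera pose, and reconstruct the other view.
import torch
import torch.nn as nn

class ViewSynthesisPretrainer(nn.Module):
    def __init__(self, feat_dim=128, pose_dim=6):
        super().__init__()
        self.encoder = nn.Sequential(           # toy feature extractor
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pose_fc = nn.Linear(pose_dim, feat_dim)
        self.decoder = nn.Sequential(           # toy view decoder
            nn.ConvTranspose2d(feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )

    def forward(self, src_view, rel_pose):
        f = self.encoder(src_view)
        # Condition the features on the relative camera pose.
        f = f + self.pose_fc(rel_pose)[:, :, None, None]
        return self.decoder(f)

model = ViewSynthesisPretrainer()
src = torch.randn(2, 3, 64, 64)
tgt = torch.randn(2, 3, 64, 64)                 # ground-truth novel view
pose = torch.randn(2, 6)
loss = nn.functional.l1_loss(model(src, pose), tgt)
loss.backward()
```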
no code implementations • ECCV 2018 • Kuan-Chuan Peng, Ziyan Wu, Jan Ernst
Therefore, the task-of-interest solution for the source domain (e.g., a classifier for classification tasks), which is jointly trained with the source-domain representation, is applicable to both the source and target representations.
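The shared-solution idea can be illustrated with a small sketch: a classifier trained jointly with the source representation remains usable on the target domain once the two feature distributions are aligned. The moment-matching penalty below is an illustrative stand-in for the paper's alignment mechanism, and all sizes are toy values.

```python
# One training step: supervised loss on source + feature alignment on target,
# with a single classifier shared by both domains.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
classifier = nn.Linear(128, 10)                 # the "task of interest" head
opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(classifier.parameters()), lr=1e-3)

src_x = torch.randn(32, 1, 28, 28)              # labeled source batch
src_y = torch.randint(0, 10, (32,))
tgt_x = torch.randn(32, 1, 28, 28)              # unlabeled target batch

src_f, tgt_f = encoder(src_x), encoder(tgt_x)
task_loss = F.cross_entropy(classifier(src_f), src_y)
align_loss = (src_f.mean(0) - tgt_f.mean(0)).pow(2).sum()  # match feature means
loss = task_loss + 0.1 * align_loss
opt.zero_grad(); loss.backward(); opt.step()

# At test time the *same* classifier is applied to target features:
tgt_pred = classifier(encoder(tgt_x)).argmax(1)
```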
no code implementations • CVPR 2018 • Yunye Gong, Srikrishna Karanam, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst, Peter C. Doerschuk
Compositionality of semantic concepts in image synthesis and analysis is appealing as it can help in decomposing known and generatively recomposing unknown data.
no code implementations • CVPR 2015 • Kuan-Chuan Peng, Tsuhan Chen, Amir Sadovnik, Andrew C. Gallagher
First, we show through psychovisual studies that different people have different emotional reactions to the same image, which is a strong and novel departure from previous work that only records and predicts a single dominant emotion for each image.
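A minimal sketch of the resulting learning problem: predicting a distribution over emotion categories per image and matching it to the annotators' empirical distribution with a KL-divergence loss. The category count, backbone, and data below are illustrative assumptions.

```python
# Predicting an emotion *distribution* per image instead of one dominant label.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_EMOTIONS = 8                                 # illustrative category count
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, NUM_EMOTIONS))

images = torch.randn(4, 3, 64, 64)
# Each row: fraction of annotators choosing each emotion for that image.
target_dist = torch.softmax(torch.randn(4, NUM_EMOTIONS), dim=1)

pred_log_dist = F.log_softmax(backbone(images), dim=1)
loss = F.kl_div(pred_log_dist, target_dist, reduction="batchmean")
loss.backward()
```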
no code implementations • ECCV 2020 • Shashanka Venkataramanan, Kuan-Chuan Peng, Rajat Vikram Singh, Abhijit Mahalanobis
Without the need for anomalous training images, we propose the Convolutional Adversarial Variational autoencoder with Guided Attention (CAVGA), which localizes the anomaly with a convolutional latent variable to preserve the spatial information.
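A hedged sketch of the convolutional latent variable: the mean and log-variance are kept as feature maps, so the latent code retains spatial layout that can be used for localization. Sizes, losses, and weightings are illustrative, not the CAVGA architecture.

```python
# VAE with a *spatial* (convolutional) latent code.
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    def __init__(self, z_ch=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.to_mu = nn.Conv2d(64, z_ch, 1)       # spatial mean map
        self.to_logvar = nn.Conv2d(64, z_ch, 1)   # spatial log-variance map
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(z_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

vae = ConvVAE()
x = torch.rand(2, 3, 64, 64)
recon, mu, logvar = vae(x)
rec = nn.functional.mse_loss(recon, x)
kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
(rec + 1e-3 * kld).backward()
# At test time, the per-pixel reconstruction error serves as a coarse
# anomaly localization map.
```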
Ranked #70 on Anomaly Detection on MVTec AD (Segmentation AUROC metric)
no code implementations • 4 Feb 2022 • Kuan-Chuan Peng
Pothole classification has become an important task for road inspection vehicles to save drivers from potential car accidents and repair bills.
no code implementations • 4 Feb 2022 • Lipeng Ke, Kuan-Chuan Peng, Siwei Lyu
Graph Convolutional Networks (GCNs) have been widely used to model the high-order dynamic dependencies for skeleton-based action recognition.
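For reference, one graph convolution over skeleton joints looks roughly as follows: joint features are mixed along a symmetrically normalized adjacency matrix, then linearly transformed. The 3-joint chain below is a toy skeleton, not a real skeleton graph.

```python
# One graph convolution layer over skeleton joints.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_ch, out_ch, adj):
        super().__init__()
        d_inv_sqrt = adj.sum(1).pow(-0.5)
        # Symmetrically normalized adjacency: D^-1/2 A D^-1/2.
        self.register_buffer("A", d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :])
        self.lin = nn.Linear(in_ch, out_ch)

    def forward(self, x):            # x: (batch, joints, channels)
        return torch.relu(self.lin(self.A @ x))

# Toy skeleton: 3 joints in a chain, self-loops included.
adj = torch.tensor([[1., 1., 0.],
                    [1., 1., 1.],
                    [0., 1., 1.]])
layer = GraphConv(3, 16, adj)
joints = torch.randn(8, 3, 3)        # (batch, joints, xyz coordinates)
out = layer(joints)                  # (8, 3, 16)
```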
no code implementations • 8 Sep 2022 • Sk Miraj Ahmed, Suhas Lohit, Kuan-Chuan Peng, Michael J. Jones, Amit K. Roy-Chowdhury
In such cases, transferring knowledge from a neural network trained on a well-labeled large dataset in the source modality (RGB) to a neural network that works on a target modality (depth, infrared, etc.) is highly desirable.
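A minimal sketch of the basic transfer mechanism, assuming paired RGB-depth data: a frozen RGB teacher supervises a depth student through a distillation loss. Note that the paper studies the harder setting without task-relevant source data; this only illustrates the underlying cross-modal idea with toy networks.

```python
# Cross-modal distillation: frozen RGB teacher -> trainable depth student.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
student = nn.Sequential(nn.Flatten(), nn.Linear(1 * 32 * 32, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

rgb = torch.randn(16, 3, 32, 32)     # source-modality input
depth = torch.randn(16, 1, 32, 32)   # paired target-modality input

with torch.no_grad():
    t_logits = teacher(rgb)          # teacher stays frozen
s_logits = student(depth)
loss = F.kl_div(F.log_softmax(s_logits, 1),
                F.softmax(t_logits, 1), reduction="batchmean")
opt.zero_grad(); loss.backward(); opt.step()
```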
no code implementations • 14 Dec 2022 • Abhishek Aich, Kuan-Chuan Peng, Amit K. Roy-Chowdhury
Most cross-domain unsupervised Video Anomaly Detection (VAD) works assume that at least a few task-relevant target-domain training samples are available for adaptation from the source to the target domain.
no code implementations • 28 Sep 2023 • Manish Sharma, Moitreya Chatterjee, Kuan-Chuan Peng, Suhas Lohit, Michael Jones
We first pretrain these factor matrices on the RGB modality, for which plenty of training data are assumed to exist, and then add only a few trainable parameters for training on the IR modality to avoid over-fitting, while encouraging them to capture cues complementary to those learned on the RGB modality.
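A hedged sketch of this parameter-efficient pattern: factor matrices pretrained on RGB are frozen, and only a small additional factor is trained on IR. Names, ranks, and sizes below are illustrative, not the paper's exact factorization.

```python
# Factorized layer: frozen RGB-pretrained factors + a few IR-trainable ones.
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    def __init__(self, d_in=256, d_out=256, rank=16, adapter_rank=4):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) * 0.02)   # RGB-pretrained
        self.V = nn.Parameter(torch.randn(rank, d_in) * 0.02)    # RGB-pretrained
        # Few extra parameters reserved for the IR modality.
        self.A = nn.Parameter(torch.zeros(d_out, adapter_rank))
        self.B = nn.Parameter(torch.randn(adapter_rank, d_in) * 0.02)

    def freeze_rgb_factors(self):
        self.U.requires_grad_(False)
        self.V.requires_grad_(False)

    def forward(self, x):
        return x @ (self.U @ self.V + self.A @ self.B).T

layer = FactorizedLinear()
# ... pretrain U, V on RGB data here ...
layer.freeze_rgb_factors()           # then train only A, B on IR data
out = layer(torch.randn(8, 256))
trainable = [n for n, p in layer.named_parameters() if p.requires_grad]
print(trainable)                     # ['A', 'B']
```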