Search Results for author: Gukyeong Kwon

Found 14 papers, 6 with code

Masked Vision and Language Modeling for Multi-modal Representation Learning

no code implementations • 3 Aug 2022 • Gukyeong Kwon, Zhaowei Cai, Avinash Ravichandran, Erhan Bas, Rahul Bhotika, Stefano Soatto

Instead of developing masked language modeling (MLM) and masked image modeling (MIM) independently, we propose to build joint masked vision and language modeling, where the masked signal of one modality is reconstructed with the help of the other modality.

Language Modelling · Masked Language Modeling · +1
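To make the cross-modal masking idea described in the abstract above concrete, here is a minimal PyTorch sketch in which masked tokens of one modality are reconstructed by attending to the other modality and the MLM and MIM losses are trained jointly. The dimensions, the shared single cross-attention layer, and the masking scheme are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch (not the paper's model): masked tokens of one modality are
# reconstructed with help from the other modality; MLM and MIM losses are
# optimized jointly. All sizes and module choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalMaskedModel(nn.Module):
    def __init__(self, dim=256, vocab_size=1000, patch_dim=768):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, dim)
        self.patch_proj = nn.Linear(patch_dim, dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.text_head = nn.Linear(dim, vocab_size)   # predicts masked word ids
        self.patch_head = nn.Linear(dim, patch_dim)   # regresses masked patches

    def forward(self, token_ids, patches, text_mask, patch_mask):
        # For brevity, masked positions are not replaced with [MASK] tokens here.
        t = self.text_embed(token_ids)                # (B, Lt, dim)
        v = self.patch_proj(patches)                  # (B, Lv, dim)
        t_rec, _ = self.cross_attn(t, v, v)           # text queries attend to image
        v_rec, _ = self.cross_attn(v, t, t)           # image queries attend to text
        mlm_loss = F.cross_entropy(self.text_head(t_rec)[text_mask],
                                   token_ids[text_mask])
        mim_loss = F.mse_loss(self.patch_head(v_rec)[patch_mask],
                              patches[patch_mask])
        return mlm_loss + mim_loss

# toy usage with random inputs
model = CrossModalMaskedModel()
tokens = torch.randint(0, 1000, (2, 16))
patches = torch.randn(2, 49, 768)
text_mask = torch.rand(2, 16) < 0.15
text_mask[:, 0] = True                                # ensure at least one masked word
patch_mask = torch.rand(2, 49) < 0.5
patch_mask[:, 0] = True                               # ensure at least one masked patch
loss = model(tokens, patches, text_mask, patch_mask)
loss.backward()
```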

Patient Aware Active Learning for Fine-Grained OCT Classification

no code implementations • 23 Jun 2022 • Yash-yee Logan, Ryan Benkert, Ahmad Mustafa, Gukyeong Kwon, Ghassan AlRegib

For this purpose, we propose a framework that incorporates clinical insights into the sample selection process of active learning and can be combined with existing algorithms.

Active Learning · Classification

X-DETR: A Versatile Architecture for Instance-wise Vision-Language Tasks

no code implementations • 12 Apr 2022 • Zhaowei Cai, Gukyeong Kwon, Avinash Ravichandran, Erhan Bas, Zhuowen Tu, Rahul Bhotika, Stefano Soatto

In this paper, we study challenging instance-wise vision-language tasks, where free-form language is required to align with individual objects rather than the whole image.

A Gating Model for Bias Calibration in Generalized Zero-shot Learning

1 code implementation • 8 Mar 2022 • Gukyeong Kwon, Ghassan AlRegib

Also, the two-stream autoencoder works as a unified framework for the gating model and the unseen expert, which makes the proposed method computationally efficient.

Attribute · Generalized Zero-Shot Learning

Novelty Detection Through Model-Based Characterization of Neural Networks

no code implementations • 13 Aug 2020 • Gukyeong Kwon, Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib

To articulate the significance of the model perspective in novelty detection, we utilize backpropagated gradients.

Novelty Detection

Contrastive Explanations in Neural Networks

3 code implementations • 1 Aug 2020 • Mohit Prabhushankar, Gukyeong Kwon, Dogancan Temel, Ghassan AlRegib

Current modes of visual explanations answer questions of the form "Why P?".

Image Quality Assessment

Characterizing Missing Information in Deep Networks Using Backpropagated Gradients

no code implementations • ICLR 2020 • Gukyeong Kwon, Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib

To complement the learned information from activation-based representation, we propose utilizing a gradient-based representation that explicitly focuses on missing information.

Anomaly Detection · Attribute · +1
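As a rough illustration of the gradient-based representation described in the abstract above, the reconstruction loss of a trained autoencoder can be backpropagated for a test input and the resulting weight gradients used as features. This is a sketch under assumptions, not the paper's exact formulation; the toy autoencoder and the simple gradient-norm anomaly score below are hypothetical.

```python
# Sketch: weight gradients from backpropagating a reconstruction loss act as a
# representation of what the model is "missing" for a given input. The
# architecture and scoring rule are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

autoencoder = nn.Sequential(          # stand-in for a trained autoencoder
    nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 784))

def gradient_representation(x):
    """Return the flattened weight gradients produced by reconstructing x."""
    autoencoder.zero_grad()
    loss = F.mse_loss(autoencoder(x), x)
    loss.backward()
    return torch.cat([p.grad.flatten() for p in autoencoder.parameters()])

x_in = torch.randn(1, 784)            # toy in-distribution-like sample
x_out = torch.randn(1, 784) * 5.0     # toy off-manifold sample
# Simple anomaly score: a larger gradient magnitude suggests more
# "missing information" relative to what the model has learned.
print(gradient_representation(x_in).norm(), gradient_representation(x_out).norm())
```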

Distorted Representation Space Characterization Through Backpropagated Gradients

2 code implementations • 27 Aug 2019 • Gukyeong Kwon, Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib

In this paper, we utilize weight gradients from backpropagation to characterize the representation space learned by deep learning algorithms.

General Classification · Image Quality Assessment

Semantically Interpretable and Controllable Filter Sets

no code implementations • 17 Feb 2019 • Mohit Prabhushankar, Gukyeong Kwon, Dogancan Temel, Ghassan AlRegib

In this paper, we generate and control semantically interpretable filters that are directly learned from natural images in an unsupervised fashion.

Image Quality Assessment

Power of Tempospatially Unified Spectral Density for Perceptual Video Quality Assessment

2 code implementations • 12 Dec 2018 • Mohammed A. Aabed, Gukyeong Kwon, Ghassan AlRegib

This is a full-reference tempospatial approach that considers both temporal and spatial power spectral density (PSD) characteristics.

Video Quality Assessment
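The following NumPy sketch illustrates the general idea in the abstract above of comparing spatial and temporal PSD characteristics between reference and distorted video. The specific pooling into a single score is an assumption for illustration only, not the paper's metric.

```python
# Rough sketch: compare power spectral density (PSD) of reference vs. distorted
# video both per frame (spatial) and on frame differences (temporal).
import numpy as np

def psd2(frame):
    """2-D power spectral density of a single grayscale frame."""
    return np.abs(np.fft.fft2(frame)) ** 2

def tempospatial_psd_distance(ref, dist):
    """ref, dist: (T, H, W) grayscale videos in [0, 1]; lower = closer."""
    spatial = np.mean([np.abs(psd2(r) - psd2(d)).mean()
                       for r, d in zip(ref, dist)])
    temporal = np.mean([np.abs(psd2(r2 - r1) - psd2(d2 - d1)).mean()
                        for r1, r2, d1, d2 in zip(ref[:-1], ref[1:],
                                                  dist[:-1], dist[1:])])
    return spatial + temporal   # hypothetical pooling of the two terms

# toy usage
ref = np.random.rand(8, 64, 64)
dist = np.clip(ref + 0.05 * np.random.randn(*ref.shape), 0, 1)
print(tempospatial_psd_distance(ref, dist))
```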

CURE-TSR: Challenging Unreal and Real Environments for Traffic Sign Recognition

1 code implementation • 7 Dec 2017 • Dogancan Temel, Gukyeong Kwon, Mohit Prabhushankar, Ghassan AlRegib

We benchmark the performance of existing solutions in real-world scenarios and analyze the performance variation with respect to challenging conditions.

Data Augmentation · Traffic Sign Recognition
