Search Results for author: Donghyun Kim

Found 40 papers, 15 papers with code

Exploring Consistency in Cross-Domain Transformer for Domain Adaptive Semantic Segmentation

no code implementations27 Nov 2022 Kaihong Wang, Donghyun Kim, Rogerio Feris, Kate Saenko, Margrit Betke

We propose to perform adaptation on attention maps with cross-domain attention layers that share features between the source and the target domains.
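
The excerpt does not spell out the layer design, so the following is a minimal, hypothetical sketch of a cross-domain attention layer in PyTorch: target-domain queries attend over source-domain keys and values, so attention features are shared across the domain gap. Class and parameter names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CrossDomainAttention(nn.Module):
    """Queries from one domain attend over keys/values of the other,
    so attention is computed across domains (hypothetical sketch)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tgt_tokens: torch.Tensor, src_tokens: torch.Tensor) -> torch.Tensor:
        # tgt_tokens: (B, N_t, dim) target-domain tokens, used as queries
        # src_tokens: (B, N_s, dim) source-domain tokens, used as keys/values
        out, _ = self.attn(tgt_tokens, src_tokens, src_tokens)
        return out

layer = CrossDomainAttention(dim=256)
fused = layer(torch.randn(2, 196, 256), torch.randn(2, 196, 256))
```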

Semantic Segmentation Unsupervised Domain Adaptation

ConStruct-VL: Data-Free Continual Structured VL Concepts Learning

no code implementations17 Nov 2022 James Seale Smith, Paola Cascante-Bonilla, Assaf Arbelle, Donghyun Kim, Rameswar Panda, David Cox, Diyi Yang, Zsolt Kira, Rogerio Feris, Leonid Karlinsky

We, therefore, propose a data-free method built around a new approach, Adversarial Pseudo-Replay (APR), which generates adversarial reminders of past tasks from past task models.
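
The abstract does not give APR's exact objective; as a loose sketch under stated assumptions, one could perturb current-task inputs by gradient descent on a frozen past-task model's prediction entropy, producing confident "reminders" of past classes without storing past data. The function below is hypothetical, not the paper's procedure.

```python
import torch
import torch.nn.functional as F

def adversarial_pseudo_replay(x, past_model, steps: int = 5, lr: float = 0.01):
    """Perturb inputs so a frozen past-task model responds confidently,
    yielding data-free reminders of past tasks (hypothetical sketch)."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        probs = F.softmax(past_model(x_adv), dim=-1)
        # maximize the past model's confidence by minimizing its entropy
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
        grad, = torch.autograd.grad(entropy, x_adv)
        x_adv = (x_adv - lr * grad.sign()).detach().requires_grad_(True)
    return x_adv.detach()

past_model = torch.nn.Linear(10, 5)  # stand-in for a frozen past-task model
reminders = adversarial_pseudo_replay(torch.randn(4, 10), past_model)
```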

Grafting Vision Transformers

no code implementations28 Oct 2022 Jongwoo Park, Kumara Kahatapitiya, Donghyun Kim, Shivchander Sudalairaj, Quanfu Fan, Michael S. Ryoo

In this paper, we present a simple and efficient add-on component (termed GrafT) that considers global dependencies and multi-scale information throughout the network, in both high- and low-resolution features alike.

Image Classification Instance Segmentation +3

System Configuration and Navigation of a Guide Dog Robot: Toward Animal Guide Dog-Level Guiding Work

no code implementations24 Oct 2022 Hochul Hwang, Tim Xia, Ibrahima Keita, Ken Suzuki, Joydeep Biswas, Sunghoon I. Lee, Donghyun Kim

A robot guide dog has compelling advantages over animal guide dogs in its cost-effectiveness, potential for mass production, and low maintenance burden.

Emp-RFT: Empathetic Response Generation via Recognizing Feature Transitions between Utterances

no code implementations NAACL 2022 Wongyu Kim, Youbin Ahn, Donghyun Kim, Kyong-Ho Lee

To solve the above issue, we propose a novel approach of recognizing feature transitions between utterances, which helps understand the dialogue flow and better grasp the features of utterances that need attention.

Empathetic Response Generation Response Generation

Temporal Relevance Analysis for Video Action Models

no code implementations25 Apr 2022 Quanfu Fan, Donghyun Kim, Chun-Fu Chen, Stan Sclaroff, Kate Saenko, Sarah Adel Bargal

In this paper, we provide a deep analysis of temporal modeling for action recognition, an important but underexplored problem in the literature.

Action Recognition

A Unified Framework for Domain Adaptive Pose Estimation

1 code implementation1 Apr 2022 Donghyun Kim, Kaihong Wang, Kate Saenko, Margrit Betke, Stan Sclaroff

In this paper, we investigate the problem of domain adaptive 2D pose estimation that transfers knowledge learned on a synthetic source domain to a target domain without supervision.

Animal Pose Estimation Hand Pose Estimation +1

A Broad Study of Pre-training for Domain Generalization and Adaptation

1 code implementation22 Mar 2022 Donghyun Kim, Kaihong Wang, Stan Sclaroff, Kate Saenko

In this paper, we provide a broad study and in-depth analysis of pre-training for domain adaptation and generalization, namely: network architectures, size, pre-training loss, and datasets.

Domain Generalization

Robust Convergence in Federated Learning through Label-wise Clustering

no code implementations28 Dec 2021 Hunmin Lee, Yueyang Liu, Donghyun Kim, Yingshu Li

Non-IID data and the heterogeneous environments of local clients are regarded as major issues in Federated Learning (FL), slowing convergence and preventing satisfactory performance.
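
The excerpt does not describe the clustering step itself; taking only the title's "label-wise clustering" as a cue, a hedged sketch is to group clients by the similarity of their local label distributions and aggregate models within each group. The helper below is an assumption about that idea, not the paper's algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_clients_by_labels(client_label_counts, n_clusters: int = 3):
    """Group clients whose local label histograms are similar, so that
    aggregation happens among near-IID peers (hypothetical sketch)."""
    counts = np.asarray(client_label_counts, dtype=float)
    dists = counts / counts.sum(axis=1, keepdims=True)  # per-client label distribution
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(dists)

# 6 clients, 4 classes: skewed local label counts
counts = [[90, 5, 3, 2], [88, 6, 4, 2], [2, 3, 90, 5],
          [1, 4, 92, 3], [25, 25, 25, 25], [24, 26, 25, 25]]
print(cluster_clients_by_labels(counts))  # clients with similar skew share a cluster
```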

Federated Learning

OpenMatch: Open-Set Semi-supervised Learning with Open-set Consistency Regularization

1 code implementation NeurIPS 2021 Kuniaki Saito, Donghyun Kim, Kate Saenko

OpenMatch achieves state-of-the-art performance on three datasets, and even outperforms a fully supervised model in detecting outliers unseen in unlabeled data on CIFAR10.

Outlier Detection

CogME: A Novel Evaluation Metric for Video Understanding Intelligence

no code implementations21 Jul 2021 Minjung Shin, Jeonghoon Kim, SeongHo Choi, Yu-Jung Heo, Donghyun Kim, Minsu Lee, Byoung-Tak Zhang, Jeh-Kwang Ryu

Then we propose a top-down evaluation system for VideoQA, based on the cognitive process of humans and story elements: Cognitive Modules for Evaluation (CogME).

Question Answering Video Question Answering +1

OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers

1 code implementation28 May 2021 Kuniaki Saito, Donghyun Kim, Kate Saenko

OpenMatch achieves state-of-the-art performance on three datasets, and even outperforms a fully supervised model in detecting outliers unseen in unlabeled data on CIFAR10.
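
The open-set consistency idea named in the title can be sketched as follows: each class has a one-vs-all head, and the per-class inlier/outlier probabilities are encouraged to agree across two augmentations of the same unlabeled image. This is a minimal illustration of that regularizer, with tensor shapes as assumptions, not the full training pipeline.

```python
import torch
import torch.nn.functional as F

def open_set_consistency(ova_logits_1, ova_logits_2):
    """Soft consistency between one-vs-all outputs of two augmentations.

    ova_logits_*: (B, 2, K) logits, where dim 1 holds the (inlier, outlier)
    scores for each of the K classes (shapes are an assumption).
    """
    p1 = F.softmax(ova_logits_1, dim=1)
    p2 = F.softmax(ova_logits_2, dim=1)
    # squared distance between the two augmentations' OVA probabilities
    return ((p1 - p2) ** 2).sum(dim=1).mean()

loss = open_set_consistency(torch.randn(8, 2, 10), torch.randn(8, 2, 10))
```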

Outlier Detection

Predicting Participation in Cancer Screening Programs with Machine Learning

no code implementations27 Jan 2021 Donghyun Kim

In this paper, we present machine learning models based on random forest classifiers, support vector machines, gradient boosted decision trees, and artificial neural networks to predict participation in cancer screening programs in South Korea.
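
The paper's features and preprocessing are not shown in this excerpt; as a hedged sketch, the four model families it names can be trained in scikit-learn as below, with synthetic data standing in for the screening records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# synthetic stand-in for screening-participation records
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": SVC(kernel="rbf", probability=True, random_state=0),
    "gbdt": GradientBoostingClassifier(random_state=0),
    "mlp": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, model.score(X_te, y_te))  # held-out accuracy per model family
```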

BIG-bench Machine Learning

BROS: A Pre-trained Language Model for Understanding Texts in Document

no code implementations1 Jan 2021 Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park

Although the recent advance in OCR enables the accurate extraction of text segments, it is still challenging to extract key information from documents due to the diversity of layouts.

Document Layout Analysis Language Modelling +1

CDS: Cross-Domain Self-Supervised Pre-Training

no code implementations ICCV 2021 Donghyun Kim, Kuniaki Saito, Tae-Hyun Oh, Bryan A. Plummer, Stan Sclaroff, Kate Saenko

We present a two-stage pre-training approach that improves the generalization ability of standard single-domain pre-training.
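
A two-stage scheme in this spirit can be sketched with contrastive losses: a within-domain instance-discrimination term, then a cross-domain term that matches each instance against the other domain's feature bank. The loss forms and names below are illustrative assumptions, not the paper's exact objectives.

```python
import torch
import torch.nn.functional as F

def info_nce(query, keys, temperature: float = 0.07):
    """Standard InfoNCE; each query's positive is the same-index key."""
    q = F.normalize(query, dim=-1)
    k = F.normalize(keys, dim=-1)
    logits = q @ k.t() / temperature
    return F.cross_entropy(logits, torch.arange(len(query)))

# stage 1 (illustrative): instance discrimination within one domain
feat_a, bank_a = torch.randn(32, 128), torch.randn(32, 128)
loss_in_domain = info_nce(feat_a, bank_a)

# stage 2 (illustrative): match instances against the *other* domain's
# bank, pulling cross-domain counterparts together
bank_b = torch.randn(32, 128)
loss_cross_domain = info_nce(feat_a, bank_b)
loss = loss_in_domain + loss_cross_domain
```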

Domain Adaptation

Self-supervised Visual Attribute Learning for Fashion Compatibility

no code implementations1 Aug 2020 Donghyun Kim, Kuniaki Saito, Samarth Mishra, Stan Sclaroff, Kate Saenko, Bryan A Plummer

Our approach consists of three self-supervised tasks, designed to capture different concepts neglected in prior work, which we can select from depending on the needs of our downstream tasks.

Object Recognition Retrieval +2

Unsupervised Differentiable Multi-aspect Network Embedding

1 code implementation7 Jun 2020 Chanyoung Park, Carl Yang, Qi Zhu, Donghyun Kim, Hwanjo Yu, Jiawei Han

To capture the multiple aspects of each node, existing studies mainly rely on offline graph clustering performed prior to the actual embedding, which results in the cluster membership of each node (i.e., its aspect distribution) being fixed throughout training of the embedding model.
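
To avoid fixing aspect memberships offline, aspect selection can be made differentiable and learned jointly with the embeddings; a common device for this, used here only as an assumption about the approach, is Gumbel-Softmax sampling over per-node aspect logits.

```python
import torch
import torch.nn.functional as F

num_nodes, num_aspects, dim = 1000, 4, 64
aspect_embeds = torch.nn.Parameter(torch.randn(num_nodes, num_aspects, dim))
aspect_logits = torch.nn.Parameter(torch.zeros(num_nodes, num_aspects))

def node_embedding(node_ids, tau: float = 0.5):
    """Differentiably pick one aspect embedding per node, so aspect
    assignments are trained with the embeddings rather than fixed offline."""
    sel = F.gumbel_softmax(aspect_logits[node_ids], tau=tau, hard=True)  # (B, A)
    return torch.einsum("ba,bad->bd", sel, aspect_embeds[node_ids])     # (B, dim)

emb = node_embedding(torch.tensor([0, 5, 42]))
```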

Graph Clustering Graph Mining +1

Learning to Scale Multilingual Representations for Vision-Language Tasks

no code implementations ECCV 2020 Andrea Burns, Donghyun Kim, Derry Wijaya, Kate Saenko, Bryan A. Plummer

Current multilingual vision-language models either require a large number of additional parameters for each supported language, or suffer performance degradation as languages are added.

Language Modelling Machine Translation +2

Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels

no code implementations18 Mar 2020 Donghyun Kim, Kuniaki Saito, Tae-Hyun Oh, Bryan A. Plummer, Stan Sclaroff, Kate Saenko

We show that when labeled source examples are limited, existing methods often fail to learn discriminative features applicable for both source and target domains.

Self-Supervised Learning Unsupervised Domain Adaptation

Universal Domain Adaptation through Self Supervision

1 code implementation NeurIPS 2020 Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Kate Saenko

While some methods address target settings with either partial or open-set categories, they assume that the particular setting is known a priori.

Partial Domain Adaptation Universal Domain Adaptation +1

MILA: Multi-Task Learning from Videos via Efficient Inter-Frame Attention

no code implementations18 Feb 2020 Donghyun Kim, Tian Lan, Chuhang Zou, Ning Xu, Bryan A. Plummer, Stan Sclaroff, Jayan Eledath, Gerard Medioni

We embed the attention module in a "slow-fast" architecture, where the slower network runs on sparsely sampled keyframes and the light-weight shallow network runs on non-keyframes at a high frame rate.
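
A hedged sketch of that scheme: a heavy backbone runs only on sparsely sampled keyframes, a light network runs on every frame, and an attention module lets per-frame features borrow from the cached keyframe features. Module choices and shapes below are placeholders.

```python
import torch
import torch.nn as nn

class InterFrameAttention(nn.Module):
    """Per-frame features (queries) attend to cached keyframe features
    computed by a heavier network (hypothetical sketch of the scheme)."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, fast_feats, key_feats):
        # fast_feats: (B, T, dim) features from the light network, all frames
        # key_feats:  (B, K, dim) features from the slow network, keyframes only
        fused, _ = self.attn(fast_feats, key_feats, key_feats)
        return fused

# keyframes every 8th frame: the heavy backbone sees 4 of 32 frames
fast = torch.randn(2, 32, 256)
slow = torch.randn(2, 4, 256)
out = InterFrameAttention()(fast, slow)
```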

Multi-Task Learning

Unsupervised Attributed Multiplex Network Embedding

1 code implementation15 Nov 2019 Chanyoung Park, Donghyun Kim, Jiawei Han, Hwanjo Yu

Even for those that consider the multiplexity of a network, they overlook node attributes, resort to node labels for training, and fail to model the global properties of a graph.

Network Embedding

MULE: Multimodal Universal Language Embedding

no code implementations8 Sep 2019 Donghyun Kim, Kuniaki Saito, Kate Saenko, Stan Sclaroff, Bryan A. Plummer

In this paper, we present a modular approach which can easily be incorporated into existing vision-language methods in order to support many languages.

Data Augmentation Machine Translation +1

Task-Guided Pair Embedding in Heterogeneous Network

1 code implementation4 Jun 2019 Chanyoung Park, Donghyun Kim, Qi Zhu, Jiawei Han, Hwanjo Yu

In this paper, we propose a novel task-guided pair embedding framework in heterogeneous networks, called TaPEm, that directly models the relationship between a pair of nodes that are related to a specific task (e.g., the paper-author relationship in author identification).

Network Embedding

Collaborative Translational Metric Learning

1 code implementation4 Jun 2019 Chanyoung Park, Donghyun Kim, Xing Xie, Hwanjo Yu

We also conduct extensive qualitative evaluations on the translation vectors learned by our proposed method to ascertain the benefit of adopting the translation mechanism for implicit feedback-based recommendations.
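
The translation mechanism referred to above scores a user-item pair by how close the user, shifted by a pair-specific translation vector, lands to the item in the embedding space. A minimal sketch follows; the names and the assumption that the translation vector is learned per pair are illustrative.

```python
import torch

def translation_score(user_emb, item_emb, rel_vec):
    """Smaller distance between (user + translation) and item = better match,
    so the negated distance serves as the recommendation score."""
    return -torch.norm(user_emb + rel_vec - item_emb, dim=-1)

u = torch.randn(16, 64)            # user embeddings
i = torch.randn(16, 64)            # item embeddings
r = torch.randn(16, 64)            # pair-specific translation vectors (assumed learned)
scores = translation_score(u, i, r)
```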

Knowledge Graph Embedding Metric Learning +1

Semi-supervised Domain Adaptation via Minimax Entropy

3 code implementations ICCV 2019 Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Trevor Darrell, Kate Saenko

Contemporary domain adaptation methods are very effective at aligning feature distributions of source and target domains without any target supervision.
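
The title's minimax entropy objective, in sketch form: the entropy of predictions on unlabeled target data is maximized with respect to the classifier and minimized with respect to the feature extractor, via a gradient-reversal layer between them. The helper names below are placeholders for a minimal illustration.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def minimax_entropy_loss(features, classifier, lamb: float = 0.1):
    # The loss is the *negative* entropy: minimizing it drives the classifier
    # to maximize entropy, while the reversed gradient drives the feature
    # extractor (upstream of `features`) to minimize it.
    logits = classifier(GradReverse.apply(features, lamb))
    probs = F.softmax(logits, dim=-1)
    return (probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()

feats = torch.randn(8, 64, requires_grad=True)
loss = minimax_entropy_loss(feats, torch.nn.Linear(64, 10))
loss.backward()
```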

Domain Adaptation

Conversation Model Fine-Tuning for Classifying Client Utterances in Counseling Dialogues

no code implementations NAACL 2019 Sungjoon Park, Donghyun Kim, Alice Oh

A dataset of those interactions can be used to learn to automatically classify the client utterances into categories that help counselors in diagnosing client status and predicting counseling outcome.

Language Modelling

Learning to Select: Problem, Solution, and Applications

no code implementations ICLR 2018 Heechang Ryu, Donghyun Kim, Hayong Shin

For example, job dispatching in a manufacturing factory is a typical "Learning to Select" problem.

Learning-To-Rank

Excitation Backprop for RNNs

1 code implementation CVPR 2018 Sarah Adel Bargal, Andrea Zunino, Donghyun Kim, Jianming Zhang, Vittorio Murino, Stan Sclaroff

Models are trained to caption or classify activity in videos, but little is known about the evidence used to make such decisions.

Action Recognition Video Captioning

Click-aware purchase prediction with push at the top

no code implementations21 Jun 2017 Chanyoung Park, Donghyun Kim, Min-Chul Yang, Jung-Tae Lee, Hwanjo Yu

We begin by formulating various model assumptions, each one assuming a different order of user preferences among purchased, clicked-but-not-purchased, and non-clicked items, to study the usefulness of leveraging click records.
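
One way to encode such an ordering (purchased > clicked-but-not-purchased > non-clicked) is a BPR-style pairwise loss applied to each adjacent preference tier. The sketch below illustrates that assumption, not the paper's specific models.

```python
import torch
import torch.nn.functional as F

def pairwise_rank_loss(score_pos, score_neg):
    """BPR-style: the higher-tier item should outscore the lower-tier item."""
    return -F.logsigmoid(score_pos - score_neg).mean()

# hypothetical scores for one user's items in the three preference tiers
s_purchased = torch.randn(8)
s_clicked = torch.randn(8)
s_nonclicked = torch.randn(8)
loss = (pairwise_rank_loss(s_purchased, s_clicked)
        + pairwise_rank_loss(s_clicked, s_nonclicked))
```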

Learning-To-Rank

Deep 3D Face Identification

no code implementations30 Mar 2017 Donghyun Kim, Matthias Hernandez, Jongmoo Choi, Gerard Medioni

We also propose a 3D face augmentation technique which synthesizes a number of different facial expressions from a single 3D face scan.
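
Synthesizing expressions from a single neutral scan is commonly done with blendshapes: the neutral geometry plus a weighted sum of per-expression vertex offsets. The sketch below shows that generic mechanism as an assumption about the augmentation, not the paper's exact pipeline.

```python
import numpy as np

def synthesize_expression(neutral_vertices, blendshape_offsets, weights):
    """neutral_vertices: (V, 3); blendshape_offsets: (E, V, 3) per-expression
    vertex deltas; weights: (E,) coefficients. Returns the deformed face."""
    weights = np.asarray(weights).reshape(-1, 1, 1)
    return neutral_vertices + (weights * blendshape_offsets).sum(axis=0)

neutral = np.random.rand(5000, 3)                # a single neutral 3D scan
offsets = np.random.randn(10, 5000, 3) * 0.01    # 10 hypothetical expressions
smiling = synthesize_expression(neutral, offsets, np.eye(10)[0])
```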

Face Identification Face Recognition +1
