Search Results for author: Jinhyung Kim

Found 11 papers, 2 papers with code

Achieving Synergy in Cognitive Behavior of Humanoids via Deep Learning of Dynamic Visuo-Motor-Attentional Coordination

no code implementations9 Jul 2015 Jungsik Hwang, Minju Jung, Naveen Madapana, Jinhyung Kim, Minkyu Choi, Jun Tani

The current study examines how adequate coordination among different cognitive processes, including visual recognition, attention switching, and action preparation and generation, can be developed through robot learning by introducing a novel model, the Visuo-Motor Deep Dynamic Neural Network (VMDNN).

Predictive Coding-based Deep Dynamic Neural Network for Visuomotor Learning

no code implementations8 Jun 2017 Jungsik Hwang, Jinhyung Kim, Ahmadreza Ahmadi, Minkyu Choi, Jun Tani

This study presents a dynamic neural network model based on the predictive coding framework for perceiving and predicting dynamic visuo-proprioceptive patterns.

Action Generation

Regularization on Spatio-Temporally Smoothed Feature for Action Recognition

no code implementations CVPR 2020 Jinhyung Kim, Seunghwan Cha, Dongyoon Wee, Soonmin Bae, Junmo Kim

We show that selective regularization on the locally smoothed feature makes the model handle low-frequency and high-frequency components distinctively, resulting in improved performance.

Action Recognition Temporal Action Localization
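
A minimal sketch of what regularization on a spatio-temporally smoothed feature could look like, assuming a 5D feature tensor and an average-pooling smoother; the function name, kernel size, and L2 penalty form are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def smoothed_feature_penalty(feat, kernel=(3, 3, 3), weight=1e-4):
        """Illustrative regularizer: penalize the high-frequency residual
        between a feature map and its spatio-temporally smoothed version.

        feat: tensor of shape (batch, channels, time, height, width).
        """
        # Local low-pass filter via 3D average pooling (stride 1, same spatial size).
        pad = tuple(k // 2 for k in kernel)
        smoothed = F.avg_pool3d(feat, kernel_size=kernel, stride=1, padding=pad)
        # High-frequency component = original feature minus its smoothed copy.
        residual = feat - smoothed
        # L2 penalty on the residual; added to the task loss during training.
        return weight * residual.pow(2).mean()

    # Usage: total_loss = task_loss + smoothed_feature_penalty(intermediate_feat)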

VideoMix: Rethinking Data Augmentation for Video Classification

2 code implementations7 Dec 2020 Sangdoo Yun, Seong Joon Oh, Byeongho Heo, Dongyoon Han, Jinhyung Kim

Recent data augmentation strategies have been reported to address the overfitting problem in static image classifiers.

Action Localization Action Recognition +5
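
A minimal sketch of a CutMix-style video mixing step in the spirit of VideoMix, assuming the spatial variant in which the same rectangular region is pasted across all frames; the box sampling and label mixing follow the usual CutMix recipe and are assumptions here, not the paper's exact procedure.

    import torch

    def videomix_spatial(videos, labels, alpha=1.0):
        """Mix pairs of clips by pasting a random spatial patch (shared across
        all frames) from a shuffled clip, and mix labels by the pasted area.

        videos: (batch, channels, time, height, width); labels: (batch, classes) one-hot.
        """
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        perm = torch.randperm(videos.size(0))
        _, _, _, H, W = videos.shape
        # Sample a rectangle whose area ratio is roughly (1 - lam).
        cut_h, cut_w = int(H * (1 - lam) ** 0.5), int(W * (1 - lam) ** 0.5)
        cy, cx = torch.randint(H, (1,)).item(), torch.randint(W, (1,)).item()
        y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, H)
        x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, W)
        mixed = videos.clone()
        mixed[:, :, :, y1:y2, x1:x2] = videos[perm][:, :, :, y1:y2, x1:x2]
        # Adjust lambda to the actual pasted area, then mix the labels.
        lam = 1 - (y2 - y1) * (x2 - x1) / (H * W)
        mixed_labels = lam * labels + (1 - lam) * labels[perm]
        return mixed, mixed_labels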

Frequency Selective Augmentation for Video Representation Learning

no code implementations8 Apr 2022 Jinhyung Kim, Taeoh Kim, Minho Shim, Dongyoon Han, Dongyoon Wee, Junmo Kim

FreqAug stochastically removes specific frequency components from the video so that the learned representation relies more on the remaining information to capture essential features for various downstream tasks.

Action Recognition Data Augmentation +3
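
A minimal sketch of stochastic frequency dropping in the spirit of FreqAug, assuming a 3D FFT over the temporal and spatial axes and a random low-/high-band choice; the band split, radius, and drop probability are illustrative assumptions.

    import torch

    def freq_drop(video, p=0.5, radius=0.25):
        """Randomly remove either the low- or high-frequency band of a clip.

        video: (channels, time, height, width), float tensor.
        """
        if torch.rand(1).item() > p:
            return video  # leave the clip untouched with probability 1 - p
        spec = torch.fft.fftn(video, dim=(-3, -2, -1))
        spec = torch.fft.fftshift(spec, dim=(-3, -2, -1))
        T, H, W = video.shape[-3:]
        t = torch.arange(T).view(-1, 1, 1) / T - 0.5
        h = torch.arange(H).view(1, -1, 1) / H - 0.5
        w = torch.arange(W).view(1, 1, -1) / W - 0.5
        dist = (t ** 2 + h ** 2 + w ** 2).sqrt()   # distance from spectrum center
        low_band = dist <= radius
        # Drop the low band or the high band with equal chance.
        drop = low_band if torch.rand(1).item() < 0.5 else ~low_band
        spec = spec * (~drop).to(torch.float32)
        spec = torch.fft.ifftshift(spec, dim=(-3, -2, -1))
        return torch.fft.ifftn(spec, dim=(-3, -2, -1)).real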

Exploring Temporally Dynamic Data Augmentation for Video Recognition

no code implementations30 Jun 2022 Taeoh Kim, Jinhyung Kim, Minho Shim, Sangdoo Yun, Myunggu Kang, Dongyoon Wee, Sangyoun Lee

The magnitude of the augmentation operations on each frame is varied by an effective mechanism, Fourier Sampling, which parameterizes diverse, smooth, and realistic temporal variations.

Action Segmentation Image Augmentation +3
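
A minimal sketch of how a Fourier-parameterized per-frame magnitude curve could be sampled, assuming a small number of random sinusoidal terms rescaled to [0, 1]; the basis size and normalization are illustrative assumptions, not the paper's exact Fourier Sampling.

    import numpy as np

    def sample_temporal_magnitudes(num_frames, num_terms=3, base_magnitude=0.5):
        """Draw a smooth per-frame augmentation magnitude curve from a few
        random sinusoids, so augmentation strength varies gradually over time."""
        t = np.linspace(0.0, 1.0, num_frames)
        curve = np.zeros(num_frames)
        for k in range(1, num_terms + 1):
            amp = np.random.uniform(0.0, 1.0) / k    # lower weight for higher frequencies
            phase = np.random.uniform(0.0, 2 * np.pi)
            curve += amp * np.sin(2 * np.pi * k * t + phase)
        # Rescale to [0, 1] and center around the base magnitude.
        curve = (curve - curve.min()) / (curve.max() - curve.min() + 1e-8)
        return np.clip(base_magnitude + (curve - 0.5), 0.0, 1.0)

    # Each frame i is then augmented with strength magnitudes[i] instead of a
    # single clip-level magnitude.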

Misalign, Contrast then Distill: Rethinking Misalignments in Language-Image Pre-training

no code implementations ICCV 2023 Bumsoo Kim, Yeonsik Jo, Jinhyung Kim, Seunghwan Kim

Contrastive Language-Image Pretraining has emerged as a prominent approach for training vision and text encoders with uncurated image-text pairs from the web.

Image Augmentation Metric Learning +1
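
Since this author's language-image papers build on the standard contrastive objective, here is a minimal sketch of the symmetric InfoNCE loss used in CLIP-style pretraining; the encoder interfaces and temperature value are assumptions.

    import torch
    import torch.nn.functional as F

    def clip_contrastive_loss(image_features, text_features, temperature=0.07):
        """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

        Matching pairs sit on the diagonal of the similarity matrix; every other
        entry in the same row/column is treated as a negative.
        """
        image_features = F.normalize(image_features, dim=-1)
        text_features = F.normalize(text_features, dim=-1)
        logits = image_features @ text_features.t() / temperature
        targets = torch.arange(logits.size(0), device=logits.device)
        loss_i2t = F.cross_entropy(logits, targets)        # image -> text direction
        loss_t2i = F.cross_entropy(logits.t(), targets)    # text -> image direction
        return (loss_i2t + loss_t2i) / 2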

Masked Autoencoder for Unsupervised Video Summarization

no code implementations2 Jun 2023 Minho Shim, Taeoh Kim, Jinhyung Kim, Dongyoon Wee

Summarizing a video requires a diverse understanding of the video, ranging from recognizing scenes to evaluating whether each frame is essential enough to be selected for the summary.

Self-Supervised Learning Unsupervised Video Summarization

Expediting Contrastive Language-Image Pretraining via Self-distilled Encoders

no code implementations19 Dec 2023 Bumsoo Kim, Jinhyung Kim, Yeonsik Jo, Seung Hwan Kim

Based on the unified text embedding space, ECLIPSE compensates for the additional computational cost of the momentum image encoder by expediting the online image encoder.

Knowledge Distillation
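
A minimal sketch of the exponential-moving-average update that a "momentum image encoder" usually implies, assuming the online/momentum pairing common in self-distillation; this is the generic EMA scheme, not necessarily ECLIPSE's exact design.

    import copy
    import torch

    @torch.no_grad()
    def update_momentum_encoder(online_encoder, momentum_encoder, momentum=0.999):
        """EMA update: the momentum encoder slowly tracks the online encoder and
        is never updated by gradients itself."""
        for p_online, p_momentum in zip(online_encoder.parameters(),
                                        momentum_encoder.parameters()):
            p_momentum.data.mul_(momentum).add_(p_online.data, alpha=1 - momentum)

    # Typical setup: momentum_encoder = copy.deepcopy(online_encoder), with
    # requires_grad disabled on its parameters.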

Misalign, Contrast then Distill: Rethinking Misalignments in Language-Image Pretraining

no code implementations19 Dec 2023 Bumsoo Kim, Yeonsik Jo, Jinhyung Kim, Seung Hwan Kim

Contrastive Language-Image Pretraining has emerged as a prominent approach for training vision and text encoders with uncurated image-text pairs from the web.

Image Augmentation Metric Learning +1
