Search Results for author: Jinhyung Kim

Found 7 papers, 2 papers with code

Exploring Temporally Dynamic Data Augmentation for Video Recognition

no code implementations • 30 Jun 2022 • Taeoh Kim, Jinhyung Kim, Minho Shim, Sangdoo Yun, Myunggu Kang, Dongyoon Wee, Sangyoun Lee

The magnitude of the augmentation operations applied to each frame is varied by an effective mechanism, Fourier Sampling, which parameterizes diverse, smooth, and realistic temporal variations.

Action Segmentation • Image Augmentation • +3
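The snippet above describes smoothly varying the augmentation magnitude over frames. A minimal sketch of that idea, assuming a magnitude curve drawn as a random sum of low-frequency sinusoids (the function name and parameters are illustrative, not the paper's actual implementation):

```python
import numpy as np

def fourier_sampling(num_frames, base_magnitude, num_terms=3, rng=None):
    """Hypothetical sketch: draw a smooth per-frame augmentation-magnitude
    curve as a random sum of low-frequency sinusoids."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(0.0, 1.0, num_frames)
    curve = np.zeros(num_frames)
    for k in range(1, num_terms + 1):
        amp = rng.uniform(0.0, 1.0) / k          # damp higher frequencies
        phase = rng.uniform(0.0, 2.0 * np.pi)
        curve += amp * np.sin(2.0 * np.pi * k * t + phase)
    # Rescale to [0, 1], then modulate the base magnitude per frame.
    curve = (curve - curve.min()) / (np.ptp(curve) + 1e-8)
    return base_magnitude * curve
```

Because only a few low-frequency terms are summed, the resulting per-frame magnitudes change smoothly rather than jumping randomly between frames.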

Frequency Selective Augmentation for Video Representation Learning

no code implementations • 8 Apr 2022 • Jinhyung Kim, Taeoh Kim, Minho Shim, Dongyoon Han, Dongyoon Wee, Junmo Kim

FreqAug stochastically removes specific frequency components from the video so that the learned representation captures essential features from the remaining information, benefiting various downstream tasks.

Action Recognition • Data Augmentation • +3
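The FreqAug snippet above describes stochastically removing frequency components from a video. A minimal sketch of one way to do that, assuming a grayscale clip and a radial low/high-frequency band split via the 2D FFT (the function name, band choice, and parameters are illustrative, not the paper's actual method):

```python
import numpy as np

def freq_augment(video, drop_prob=0.5, rng=None):
    """Hypothetical sketch: zero out one random spatial-frequency band
    of every frame in a (T, H, W) grayscale clip via the 2D FFT."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() > drop_prob:
        return video
    T, H, W = video.shape
    spectrum = np.fft.fftshift(np.fft.fft2(video, axes=(1, 2)), axes=(1, 2))
    yy, xx = np.mgrid[:H, :W]
    radius = np.hypot(yy - H / 2.0, xx - W / 2.0)
    cutoff = rng.uniform(0.1, 0.5) * min(H, W) / 2.0
    # Randomly drop either the low- or the high-frequency band.
    mask = (radius > cutoff) if rng.random() < 0.5 else (radius <= cutoff)
    spectrum = spectrum * mask  # same (H, W) mask applied to every frame
    out = np.fft.ifft2(np.fft.ifftshift(spectrum, axes=(1, 2)), axes=(1, 2))
    return out.real
```

The intuition matches the abstract: whichever band is removed, the model must recover the label from the frequencies that remain.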

VideoMix: Rethinking Data Augmentation for Video Classification

2 code implementations • 7 Dec 2020 • Sangdoo Yun, Seong Joon Oh, Byeongho Heo, Dongyoon Han, Jinhyung Kim

Recent data augmentation strategies have been reported to address overfitting problems in static image classifiers.

Action Localization • Action Recognition • +5

Regularization on Spatio-Temporally Smoothed Feature for Action Recognition

no code implementations • CVPR 2020 • Jinhyung Kim, Seunghwan Cha, Dongyoon Wee, Soonmin Bae, Junmo Kim

We show that selective regularization on this locally smoothed feature makes the model handle the low-frequency and high-frequency components distinctly, resulting in improved performance.

Action Recognition • Temporal Action Localization
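The snippet above describes regularizing a locally smoothed feature so the model treats low- and high-frequency components differently. A minimal sketch of that idea, assuming a temporal moving average as the smoothing and separate L2 weights on each part (the function name, smoothing choice, and weights are illustrative, not the paper's actual regularizer):

```python
import numpy as np

def smoothed_feature_penalty(feat, kernel=3, w_low=1e-4, w_high=1e-3):
    """Hypothetical sketch: split a (T, C) feature sequence into a locally
    smoothed (low-frequency) part and its residual, then apply a separately
    weighted L2 penalty to each part."""
    pad = kernel // 2
    padded = np.pad(feat, ((pad, pad), (0, 0)), mode="edge")
    # Temporal moving average = low-frequency component of the feature.
    low = np.stack([padded[i:i + kernel].mean(axis=0)
                    for i in range(feat.shape[0])])
    high = feat - low  # high-frequency residual
    return w_low * np.sum(low ** 2) + w_high * np.sum(high ** 2)
```

Weighting the two parts differently is one way a regularizer can make a model treat smooth temporal structure and fast fluctuations distinctly.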

Predictive Coding-based Deep Dynamic Neural Network for Visuomotor Learning

no code implementations • 8 Jun 2017 • Jungsik Hwang, Jinhyung Kim, Ahmadreza Ahmadi, Minkyu Choi, Jun Tani

This study presents a dynamic neural network model based on the predictive coding framework for perceiving and predicting dynamic visuo-proprioceptive patterns.

Action Generation

Achieving Synergy in Cognitive Behavior of Humanoids via Deep Learning of Dynamic Visuo-Motor-Attentional Coordination

no code implementations • 9 Jul 2015 • Jungsik Hwang, Minju Jung, Naveen Madapana, Jinhyung Kim, Minkyu Choi, Jun Tani

The current study examines how adequate coordination among different cognitive processes, including visual recognition, attention switching, and action preparation and generation, can be developed through robot learning, by introducing a novel model, the Visuo-Motor Deep Dynamic Neural Network (VMDNN).
