Search Results for author: Jungseul Ok

Found 21 papers, 5 papers with code

CLIPtone: Unsupervised Learning for Text-based Image Tone Adjustment

no code implementations • 1 Apr 2024 • Hyeongmin Lee, Kyoungkook Kang, Jungseul Ok, Sunghyun Cho

Recent image tone adjustment (or enhancement) approaches have predominantly adopted supervised learning to learn human-centric perceptual assessment.

Image Enhancement

MedBN: Robust Test-Time Adaptation against Malicious Test Samples

no code implementations • 28 Mar 2024 • Hyejin Park, Jeongyeon Hwang, Sunung Mun, Sangdon Park, Jungseul Ok

In response to this emerging threat, we propose median batch normalization (MedBN), which leverages the robustness of the median for statistics estimation within the batch normalization layer during test-time inference.

Test-time Adaptation
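
To make the idea concrete, here is a minimal sketch of a batch-normalization layer that uses the batch median as its location statistic. It illustrates only the core intuition of MedBN; the use of the median absolute deviation as the scale statistic and all layer details are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: batch normalization with median-based statistics.
# The median absolute deviation (MAD) as scale is an assumption here.
import torch
import torch.nn as nn

class MedianBatchNorm2d(nn.Module):
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):  # x: (N, C, H, W)
        n, c, h, w = x.shape
        flat = x.permute(1, 0, 2, 3).reshape(c, -1)       # per-channel values
        med = flat.median(dim=1).values                   # robust location
        mad = (flat - med[:, None]).abs().median(dim=1).values  # robust scale
        x_hat = (x - med[None, :, None, None]) / (mad[None, :, None, None] + self.eps)
        return x_hat * self.weight[None, :, None, None] + self.bias[None, :, None, None]
```

Because the median is insensitive to a minority of extreme values, a few malicious samples injected into the test batch shift these statistics far less than they would shift the batch mean.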

Active Label Correction for Semantic Segmentation with Foundation Models

no code implementations • 16 Mar 2024 • Hoyoung Kim, Sehyun Hwang, Suha Kwak, Jungseul Ok

Training and validating models for semantic segmentation require datasets with pixel-wise annotations, which are notoriously labor-intensive.

Semantic Segmentation · Superpixels

Addressing Feature Imbalance in Sound Source Separation

no code implementations • 11 Sep 2023 • Jaechang Kim, Jeongyeon Hwang, Soheun Yi, Jaewoong Cho, Jungseul Ok

Neural networks often suffer from a feature preference problem, where they tend to overly rely on specific features to solve a task while disregarding other features, even if those neglected features are essential for the task.

Combating Label Distribution Shift for Active Domain Adaptation

no code implementations • 13 Aug 2022 • Sehyun Hwang, Sohyun Lee, Sungyeon Kim, Jungseul Ok, Suha Kwak

We consider the problem of active domain adaptation (ADA) to unlabeled target data, of which a subset is actively selected and labeled given a budget constraint.

Domain Adaptation
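
The selection step can be pictured with a generic budgeted active-querying loop, sketched below: score unlabeled target samples and query the most informative ones within the budget. The entropy criterion is a placeholder assumption; the paper's actual selection criterion, which combats label distribution shift, is not reproduced here.

```python
# Generic budgeted active selection sketch (entropy scoring is a placeholder).
import numpy as np

def entropy(probs):                      # probs: (N, K) predicted class probs
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def select_for_labeling(probs, budget):
    scores = entropy(probs)              # higher = more uncertain
    return np.argsort(-scores)[:budget]  # indices of samples to annotate

rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
queried = select_for_labeling(probs, budget=50)
```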

Towards Sequence-Level Training for Visual Tracking

2 code implementations • 11 Aug 2022 • Minji Kim, Seungkwan Lee, Jungseul Ok, Bohyung Han, Minsu Cho

Despite the extensive adoption of machine learning for visual object tracking, recent learning-based approaches have largely overlooked the fact that visual tracking is a sequence-level task by nature; they rely heavily on frame-level training, which inevitably induces inconsistency between training and testing in terms of both data distributions and task objectives.

Data Augmentation · Reinforcement Learning (RL) +1

Efficient Scheduling of Data Augmentation for Deep Reinforcement Learning

no code implementations • 1 Jun 2022 • Byungchan Ko, Jungseul Ok

In deep reinforcement learning (RL), data augmentation is widely considered a tool to induce a set of useful priors about semantic consistency and to improve sample efficiency and generalization performance.

Data Augmentation · reinforcement-learning +2
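
As a rough illustration of scheduling augmentation in pixel-based RL, the sketch below applies a standard random-shift augmentation to observations, but only after a warm-up period. The random shift is a common augmentation in this setting; the step-threshold schedule (`warmup_steps`) is a hypothetical stand-in, not the schedule proposed in the paper.

```python
# Random-shift augmentation gated by a hypothetical warm-up schedule.
import numpy as np

def random_shift(obs, pad=4, rng=None):
    """obs: (H, W, C) image; standard pad-and-crop random shift."""
    rng = rng or np.random.default_rng()
    h, w, _ = obs.shape
    padded = np.pad(obs, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]

def maybe_augment(obs, step, warmup_steps=10_000):
    # Assumed schedule: skip augmentation during early training.
    return obs if step < warmup_steps else random_shift(obs)
```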

Few-Shot Unlearning by Model Inversion

no code implementations • 31 May 2022 • Youngsik Yoon, Jinhwan Nam, Hyojeong Yun, Jaeho Lee, Dongwoo Kim, Jungseul Ok

We consider a practical scenario of machine unlearning in which the goal is to erase a target dataset that causes unexpected behavior in the trained model.

Machine Unlearning

MetaSSD: Meta-Learned Self-Supervised Detection

1 code implementation • 30 May 2022 • Moon Jeong Park, Jungseul Ok, Yo-Seb Jeon, Dongwoo Kim

There are two major limitations of the supervised approaches: a) a model needs to be retrained from scratch when new training symbols arrive, in order to adapt to a new channel status, and b) the training symbol sequence needs to be longer than a certain threshold for the model to generalize well to unseen symbols.

Meta-Learning · Self-Supervised Learning

Robust Deep Learning from Crowds with Belief Propagation

1 code implementation • 1 Nov 2021 • Hoyoung Kim, Seunghyuk Cho, Dongwoo Kim, Jungseul Ok

Crowdsourcing systems enable us to collect large-scale datasets, but they inherently suffer from noisy labels produced by low-paid workers.

Variational Inference

Learning Continuous Representation of Audio for Arbitrary Scale Super Resolution

1 code implementation • 30 Oct 2021 • Jaechang Kim, Yunjoo Lee, Seunghoon Hong, Jungseul Ok

To obtain a continuous representation of audio and enable super resolution for arbitrary scale factor, we propose a method of implicit neural representation, coined Local Implicit representation for Super resolution of Arbitrary scale (LISA).

Audio Super-Resolution · Self-Supervised Learning +1
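
A minimal sketch of a local implicit representation for audio: an MLP maps a local latent code plus a continuous time coordinate to an amplitude, so the waveform can be queried on an arbitrarily dense time grid. The layer sizes, nearest-latent lookup, and relative-coordinate encoding below are illustrative assumptions, not LISA's exact architecture.

```python
# Sketch of an implicit audio decoder: (local latent, time coord) -> amplitude.
import torch
import torch.nn as nn

class ImplicitAudio(nn.Module):
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, latents, t):
        """latents: (L, D) codes from an encoder; t: (N,) float coords in [0, 1)."""
        idx = (t * latents.shape[0]).long().clamp(max=latents.shape[0] - 1)
        z = latents[idx]                              # nearest local latent
        rel = t * latents.shape[0] - idx.float()      # position within the cell
        return self.mlp(torch.cat([z, rel[:, None]], dim=1)).squeeze(1)
```

Querying the decoder on a k-times denser grid of time coordinates then yields k-times super resolution, for any k.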

Gradient Inversion with Generative Image Prior

1 code implementation • NeurIPS 2021 • Jinwoo Jeon, Jaechang Kim, Kangwook Lee, Sewoong Oh, Jungseul Ok

Federated Learning (FL) is a distributed learning framework in which the local data never leaves clients' devices, in order to preserve privacy; the server trains models on the data by accessing only the gradients of that local data.

Federated Learning
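
The attack surface can be sketched as follows: search the latent space of a pretrained generator G for an input whose induced gradients match the gradients observed by the server. The L2 gradient-matching objective, the optimizer settings, and the `G.latent_dim` attribute are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch: gradient inversion constrained to a generative prior.
import torch

def invert(model, loss_fn, observed_grads, G, label, steps=500, lr=0.1):
    z = torch.randn(1, G.latent_dim, requires_grad=True)  # latent_dim assumed
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = G(z)                                   # candidate input image
        pred_loss = loss_fn(model(x), label)
        grads = torch.autograd.grad(pred_loss, model.parameters(),
                                    create_graph=True)
        match = sum(((g - og) ** 2).sum() for g, og in zip(grads, observed_grads))
        match.backward()                           # optimize z, not the model
        opt.step()
    return G(z).detach()                           # reconstructed input
```

Restricting the search to the generator's range is what makes the reconstruction plausible: arbitrary pixel-space optimization could match the gradients with a non-natural image.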

Multi-armed Bandit Algorithm against Strategic Replication

no code implementations • 23 Oct 2021 • Suho Shin, Seungjoon Lee, Jungseul Ok

We consider a multi-armed bandit problem in which each agent registers a set of arms and receives a reward when one of its arms is selected.
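
A toy simulation makes the incentive problem visible: under a standard UCB index, every registered copy of a suboptimal arm receives its own share of exploration pulls, so an agent can gain pulls simply by replicating its arm. The snippet below only demonstrates the vulnerability; the paper's replication-robust algorithm is not reproduced.

```python
# Toy UCB simulation showing how arm replication attracts extra pulls.
import numpy as np

def ucb_pulls(true_means, horizon=20_000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_means)
    counts, sums = np.zeros(k), np.zeros(k)
    for t in range(horizon):
        idx = t if t < k else np.argmax(
            sums / counts + np.sqrt(2 * np.log(t + 1) / counts))
        counts[idx] += 1
        sums[idx] += rng.normal(true_means[idx], 1.0)
    return counts

honest = ucb_pulls([0.9, 0.5])                 # one arm per agent
replicated = ucb_pulls([0.9] + [0.5] * 5)      # second agent registers 5 copies
# The replicating agent's total pull count grows with its number of copies.
```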

Transfer Learning in Bandits with Latent Continuity

no code implementations • 4 Feb 2021 • Hyejin Park, Seiyun Shin, Kwang-Sung Jun, Jungseul Ok

To cope with the latent structural parameter, we consider a transfer learning setting in which an agent must learn to transfer structural information from prior tasks to the next task, inspired by practical problems such as rate adaptation in wireless links.

Multi-Armed Bandits · Transfer Learning
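
One simple way to picture transferring latent continuity is a plug-in estimate of the Lipschitz constant from the empirical mean rewards of earlier tasks, which can then tighten confidence bounds on the next task. The estimator below is a naive illustration under that assumption, not the paper's estimator.

```python
# Naive plug-in Lipschitz estimate from prior tasks' empirical mean rewards.
import numpy as np

def estimate_lipschitz(arm_positions, mean_rewards_per_task):
    """arm_positions: (K,) arm coordinates; mean_rewards_per_task: list of (K,) arrays."""
    L = 0.0
    for mu in mean_rewards_per_task:
        for i in range(len(arm_positions)):
            for j in range(i + 1, len(arm_positions)):
                gap = abs(arm_positions[i] - arm_positions[j])
                if gap > 0:
                    L = max(L, abs(mu[i] - mu[j]) / gap)
    return L

# On the next task, arm i's upper bound can borrow neighbors' samples:
# min over j of mu_hat[j] + L_hat * |x_i - x_j| + conf[j].
```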

Exploration in Structured Reinforcement Learning

no code implementations • NeurIPS 2018 • Jungseul Ok, Alexandre Proutiere, Damianos Tranos

For Lipschitz MDPs, the bounds are shown not to scale with the sizes $S$ and $A$ of the state and action spaces, i.e., they are smaller than $c\log T$, where $T$ is the time horizon and the constant $c$ depends only on the Lipschitz structure, the span of the bias function, and the minimal action sub-optimality gap.

reinforcement-learning · Reinforcement Learning (RL)

Iterative Bayesian Learning for Crowdsourced Regression

no code implementations • 28 Feb 2017 • Jungseul Ok, Sewoong Oh, Yunhun Jang, Jinwoo Shin, Yung Yi

Crowdsourcing platforms have emerged as popular venues for purchasing human intelligence at low cost for a large volume of tasks.

regression
