no code implementations • 1 Apr 2024 • Hyeongmin Lee, Kyoungkook Kang, Jungseul Ok, Sunghyun Cho
Recent image tone adjustment (or enhancement) approaches have predominantly adopted supervised learning to learn human-centric perceptual assessment.
no code implementations • 28 Mar 2024 • Hyejin Park, Jeongyeon Hwang, Sunung Mun, Sangdon Park, Jungseul Ok
In response to the emerging threat, we propose median batch normalization (MedBN), leveraging the robustness of the median for statistics estimation within the batch normalization layer during test-time inference.
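A minimal sketch of the idea, assuming a PyTorch-style 2D layer; the spread estimate around the median is an assumption here, as the paper's exact estimator may differ.

```python
# Minimal sketch of median-based batch normalization (MedBN), assuming a
# PyTorch-style 2D layer; the variance estimate is an assumption.
import torch
import torch.nn as nn

class MedBN2d(nn.Module):
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):  # x: (N, C, H, W)
        # Replace the batch mean with the per-channel median, which is far
        # less sensitive to a few malicious samples in the test batch.
        flat = x.permute(1, 0, 2, 3).reshape(x.size(1), -1)  # (C, N*H*W)
        med = flat.median(dim=1).values.view(1, -1, 1, 1)
        # A simple spread estimate around the median (the paper's exact
        # estimator may differ).
        var = ((x - med) ** 2).mean(dim=(0, 2, 3), keepdim=True)
        x_hat = (x - med) / torch.sqrt(var + self.eps)
        return x_hat * self.weight.view(1, -1, 1, 1) + self.bias.view(1, -1, 1, 1)
```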
no code implementations • 16 Mar 2024 • Hoyoung Kim, Sehyun Hwang, Suha Kwak, Jungseul Ok
Training and validating models for semantic segmentation require datasets with pixel-wise annotations, which are notoriously labor-intensive.
no code implementations • 11 Sep 2023 • Jaechang Kim, Jeongyeon Hwang, Soheun Yi, Jaewoong Cho, Jungseul Ok
Neural networks often suffer from a feature preference problem, where they tend to overly rely on specific features to solve a task while disregarding other features, even if those neglected features are essential for the task.
no code implementations • ICCV 2023 • Hoyoung Kim, Minhyeon Oh, Sehyun Hwang, Suha Kwak, Jungseul Ok
Learning semantic segmentation requires pixel-wise annotations, which can be time-consuming and expensive.
no code implementations • 13 Aug 2022 • Sehyun Hwang, Sohyun Lee, Sungyeon Kim, Jungseul Ok, Suha Kwak
We consider the problem of active domain adaptation (ADA) to unlabeled target data, in which a subset of the target data is actively selected and labeled under a budget constraint.
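A generic sketch of what budgeted active selection can look like: score unlabeled target samples by predictive entropy and query the most uncertain ones up to the budget. The entropy criterion and the helper names are illustrative assumptions, not the selection strategy proposed in the paper.

```python
# Generic budgeted active selection: query labels for the top-B most
# uncertain target samples. A common baseline, not the paper's criterion.
import torch

def select_for_labeling(model, target_loader, budget):
    model.eval()
    scores, indices = [], []
    with torch.no_grad():
        for idx, x in target_loader:  # assumes the loader yields (index, image)
            probs = model(x).softmax(dim=1)
            entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
            scores.append(entropy)
            indices.append(idx)
    scores = torch.cat(scores)
    indices = torch.cat(indices)
    top = scores.topk(budget).indices  # most uncertain samples
    return indices[top]                # sample ids to send to annotators
```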
2 code implementations • 11 Aug 2022 • Minji Kim, Seungkwan Lee, Jungseul Ok, Bohyung Han, Minsu Cho
Despite the extensive adoption of machine learning for visual object tracking, recent learning-based approaches have largely overlooked the fact that visual tracking is a sequence-level task by nature; they rely heavily on frame-level training, which inevitably induces inconsistency between training and testing in terms of both data distributions and task objectives.
Ranked #16 on Visual Object Tracking on TrackingNet
no code implementations • 1 Jun 2022 • Byungchan Ko, Jungseul Ok
In deep reinforcement learning (RL), data augmentation is widely considered a tool to induce a set of useful priors about semantic consistency and to improve sample efficiency and generalization performance.
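As a concrete instance of such augmentation, the sketch below applies DrQ-style random shifts to image observations, encoding the prior that small translations should not change a state's meaning. The transform is illustrative; the paper concerns how and when such augmentation helps, not this specific operation.

```python
# DrQ-style random shift of image observations, a common semantic-consistency
# augmentation in vision-based RL. Illustrative, not the paper's contribution.
import torch
import torch.nn.functional as F

def random_shift(obs, pad=4):
    """Randomly shift each image in a batch by up to `pad` pixels."""
    n, c, h, w = obs.shape
    padded = F.pad(obs, (pad, pad, pad, pad), mode='replicate')
    out = torch.empty_like(obs)
    for i in range(n):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out
```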
no code implementations • 31 May 2022 • Youngsik Yoon, Jinhwan Nam, Hyojeong Yun, Jaeho Lee, Dongwoo Kim, Jungseul Ok
We consider a practical machine unlearning scenario in which the goal is to erase a target dataset that causes unexpected behavior in the trained model.
1 code implementation • 30 May 2022 • Moon Jeong Park, Jungseul Ok, Yo-Seb Jeon, Dongwoo Kim
There are two major limitations in the supervised approaches: a) the model needs to be retrained from scratch whenever new training symbols arrive in order to adapt to a new channel state, and b) the training symbols need to be longer than a certain threshold for the model to generalize well to unseen symbols.
1 code implementation • 1 Nov 2021 • Hoyoung Kim, Seunghyuk Cho, Dongwoo Kim, Jungseul Ok
Crowdsourcing systems enable us to collect large-scale datasets, but inherently suffer from noisy labels from low-paid workers.
1 code implementation • 30 Oct 2021 • Jaechang Kim, Yunjoo Lee, Seunghoon Hong, Jungseul Ok
To obtain a continuous representation of audio and enable super resolution at arbitrary scale factors, we propose an implicit neural representation method, coined Local Implicit representation for Super resolution of Arbitrary scale (LISA).
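A minimal sketch of the local-implicit idea: a decoder predicts a waveform sample at any continuous time coordinate from a nearby latent code and the local coordinate, so any output sample rate can be queried. The architecture and names below are illustrative assumptions, not the paper's exact model.

```python
# Local implicit decoder: amplitude at a continuous time coordinate is
# predicted from the latent code of the enclosing chunk plus the position
# within that chunk. Illustrative architecture, not the paper's exact model.
import torch
import torch.nn as nn

class LocalImplicitDecoder(nn.Module):
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, latents, t):
        # latents: (B, T_chunks, latent_dim); t: (B, Q) times in [0, 1)
        chunk = (t * latents.size(1)).long().clamp(max=latents.size(1) - 1)
        z = torch.gather(latents, 1,
                         chunk.unsqueeze(-1).expand(-1, -1, latents.size(-1)))
        local_t = t * latents.size(1) - chunk.float()  # position within chunk
        return self.mlp(torch.cat([z, local_t.unsqueeze(-1)], dim=-1)).squeeze(-1)
```

Because the decoder is queried at arbitrary coordinates rather than fixed grid points, the same trained model can upsample to any target rate.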
1 code implementation • NeurIPS 2021 • Jinwoo Jeon, Jaechang Kim, Kangwook Lee, Sewoong Oh, Jungseul Ok
Federated Learning (FL) is a distributed learning framework in which the local data never leaves clients' devices, to preserve privacy, and the server trains models by accessing only the gradients computed from that local data.
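A minimal sketch of the gradient-sharing round described above, assuming FedSGD-style aggregation; the helper names are illustrative. The server updates the global model from averaged client gradients and never touches the raw data.

```python
# One FedSGD-style round: clients compute gradients on private data, the
# server sees only those gradients. Names here are illustrative.
import torch

def client_gradients(model, loss_fn, x, y):
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return [p.grad.detach().clone() for p in model.parameters()]

def server_step(model, all_client_grads, lr=0.1):
    with torch.no_grad():
        for i, p in enumerate(model.parameters()):
            avg = torch.stack([g[i] for g in all_client_grads]).mean(dim=0)
            p -= lr * avg  # update the global model from averaged gradients only
```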
no code implementations • 23 Oct 2021 • Suho Shin, Seungjoon Lee, Jungseul Ok
We consider a multi-armed bandit problem in which each agent registers a set of arms and receives a reward when one of its arms is selected.
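For reference, a minimal UCB sketch of the underlying arm-selection loop on Bernoulli arms; the strategic aspect of agents registering arms, which is the paper's focus, is deliberately omitted.

```python
# Vanilla UCB on Bernoulli arms: the selection loop underlying the setting.
# The strategic arm-registration aspect studied in the paper is omitted.
import math
import random

def ucb(means, horizon):
    counts = [0] * len(means)
    sums = [0.0] * len(means)
    for t in range(1, horizon + 1):
        if t <= len(means):
            arm = t - 1  # pull each arm once first
        else:
            arm = max(range(len(means)),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if random.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts  # pull counts per arm

# e.g. ucb([0.2, 0.5, 0.8], horizon=10_000) concentrates pulls on the best arm.
```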
no code implementations • 17 Feb 2021 • Byungchan Ko, Jungseul Ok
In deep reinforcement learning (RL), data augmentation is widely considered a tool to induce a set of useful priors about semantic consistency and to improve sample efficiency and generalization performance.
no code implementations • 4 Feb 2021 • Hyejin Park, Seiyun Shin, Kwang-Sung Jun, Jungseul Ok
To cope with the latent structural parameter, we consider a transfer learning setting in which an agent must learn to transfer the structural information from prior tasks to the next task, inspired by practical problems such as rate adaptation in wireless links.
no code implementations • 14 Oct 2019 • Kaito Ariu, Jungseul Ok, Alexandre Proutiere, Se-Young Yun
The objective is to devise an algorithm with a minimal cluster recovery error rate.
no code implementations • NeurIPS 2018 • Jungseul Ok, Alexandre Proutiere, Damianos Tranos
For Lipschitz MDPs, the bounds are shown not to scale with the sizes $S$ and $A$ of the state and action spaces, i.e., they are smaller than $c\log T$ where $T$ is the time horizon and the constant $c$ only depends on the Lipschitz structure, the span of the bias function, and the minimal action sub-optimality gap.
no code implementations • 4 May 2018 • Weiran Huang, Jungseul Ok, Liang Li, Wei Chen
Each decision yields a reward according to the distributions of the arms.
no code implementations • 28 Feb 2017 • Jungseul Ok, Sewoong Oh, Yunhun Jang, Jinwoo Shin, Yung Yi
Crowdsourcing platforms have emerged as popular venues for purchasing human intelligence at low cost for large volumes of tasks.
no code implementations • 11 Feb 2016 • Jungseul Ok, Sewoong Oh, Jinwoo Shin, Yung Yi
Crowdsourcing systems are popular for solving large-scale labelling tasks with low-paid workers.