1 code implementation • 31 May 2023 • Elisa Nguyen, Minjoon Seo, Seong Joon Oh
We recommend that future researchers and practitioners trust TDA estimates only in such cases.
no code implementations • 26 May 2023 • Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, Eric Schulz
In a large set of two-player, two-strategy games, we find that LLMs are particularly good at games where valuing their own self-interest pays off, like the iterated Prisoner's Dilemma family.
1 code implementation • 30 Mar 2023 • Dongyoon Han, Junsuk Choe, Seonghyeok Chun, John Joon Young Chung, Minsuk Chang, Sangdoo Yun, Jean Y. Song, Seong Joon Oh
We refer to the new paradigm of training models with annotation byproducts as learning using annotation byproducts (LUAB).
1 code implementation • 6 Feb 2023 • Michael Kirchhof, Enkelejda Kasneci, Seong Joon Oh
We prove that these distributions recover the correct posteriors of the data-generating process, including its level of aleatoric uncertainty, up to a rotation of the latent space.
1 code implementation • 4 Nov 2022 • Inwoo Hwang, Sangjun Lee, Yunhyeok Kwak, Seong Joon Oh, Damien Teney, Jin-Hwa Kim, Byoung-Tak Zhang
Experiments on standard benchmarks demonstrate the effectiveness of the method, in particular when label noise complicates the identification of bias-conflicting examples.
no code implementations • 16 Oct 2022 • Nam Hyeon-Woo, Kim Yu-Ji, Byeongho Heo, Dongyoon Han, Seong Joon Oh, Tae-Hyun Oh
We observe that the inclusion of CB reduces the degree of density in the original attention maps and increases both the capacity and generalizability of the ViT models.
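If CB here denotes a context-broadcasting-style module that adds the average token back to every token (as the reduced density of the learned attention suggests), a minimal sketch might look like the following; the module name, its placement inside the ViT block, and the absence of any learnable scaling are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ContextBroadcast(nn.Module):
    """Adds the average (global-context) token back to every token.

    Sketch of a context-broadcasting-style layer; the paper's exact
    formulation and placement inside the ViT block may differ.
    """
    def forward(self, x):                        # x: (batch, tokens, dim)
        context = x.mean(dim=1, keepdim=True)    # (batch, 1, dim)
        return x + context                       # broadcast over all tokens
```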
no code implementations • 1 Sep 2022 • Damien Teney, Yong Lin, Seong Joon Oh, Ehsan Abbasnejad
In these cases, studies on OOD generalization that use ID performance for model selection (a commonly recommended practice) will necessarily miss the best-performing models, making these studies blind to a whole range of phenomena.
2 code implementations • 30 May 2022 • Jang-Hyun Kim, Jinuk Kim, Seong Joon Oh, Sangdoo Yun, Hwanjun Song, JoonHyun Jeong, Jung-Woo Ha, Hyun Oh Song
The great success of machine learning with massive amounts of data comes at a price of huge computation costs and storage for training and tuning.
2 code implementations • 7 Apr 2022 • Sanghyuk Chun, Wonjae Kim, Song Park, Minsuk Chang, Seong Joon Oh
Image-Text matching (ITM) is a common task for evaluating the quality of Vision and Language (VL) models.
1 code implementation • CVPR 2022 • Jungbeom Lee, Seong Joon Oh, Sangdoo Yun, Junsuk Choe, Eunji Kim, Sungroh Yoon
However, when trained on class labels only, classifiers suffer from spurious correlations between foreground and background cues (e.g., train and rail), fundamentally bounding the performance of WSSS.
Weakly-Supervised Semantic Segmentation
1 code implementation • 16 Dec 2021 • Hazel Kim, Daecheol Woo, Seong Joon Oh, Jeong-Won Cha, Yo-Sub Han
Taken together, our contributions on the data augmentation strategies yield a strong training recipe for few-shot text classification tasks.
no code implementations • ICLR 2022 • Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Michael Poli, Sangdoo Yun
This phenomenon, also known as shortcut learning, is emerging as a key limitation of the current generation of machine learning models.
1 code implementation • ICCV 2021 • Jae Myung Kim, Junsuk Choe, Zeynep Akata, Seong Joon Oh
The class activation mapping, or CAM, has been the cornerstone of feature attribution methods for multiple vision tasks.
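For reference, CAM for a network ending in global average pooling followed by a linear classifier is simply a class-weighted sum of the last convolutional feature maps. A minimal sketch, assuming the feature maps and the classifier weight matrix have already been extracted (variable names are illustrative):

```python
import torch

def class_activation_map(features, fc_weight, class_idx):
    """Compute an (unnormalized) CAM heatmap.

    features:  (C, H, W) feature maps from the last conv layer
    fc_weight: (num_classes, C) linear classifier weights after GAP
    class_idx: target class index
    """
    cam = torch.einsum('c,chw->hw', fc_weight[class_idx], features)
    return torch.relu(cam)   # keep only positive class evidence
```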
no code implementations • NeurIPS 2021 • Michael Poli, Stefano Massaroli, Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Atsushi Yamashita, Hajime Asama, Jinkyoo Park, Animesh Garg
Effective control and prediction of dynamical systems often require appropriate handling of continuous-time and discrete, event-triggered processes.
9 code implementations • ICCV 2021 • Byeongho Heo, Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Junsuk Choe, Seong Joon Oh
We empirically show that such a spatial dimension reduction is beneficial to a transformer architecture as well, and propose a novel Pooling-based Vision Transformer (PiT) built upon the original ViT model; a rough sketch of the token pooling follows below.
Ranked #303 on Image Classification on ImageNet
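A minimal sketch of the spatial token pooling referred to above, assuming tokens are reshaped to their 2D grid and reduced with a strided depthwise convolution while widening channels; the handling of the class token and the exact pooling layer used in PiT are omitted, and the class/argument names are illustrative.

```python
import torch.nn as nn

class TokenPooling(nn.Module):
    """Halves the spatial token grid and widens channels between stages.

    Sketch only: dim_out should be a multiple of dim_in (e.g. 2x), and the
    class token (handled separately in PiT) is not covered here.
    """
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.pool = nn.Conv2d(dim_in, dim_out, kernel_size=3, stride=2,
                              padding=1, groups=dim_in)  # depthwise, stride 2

    def forward(self, tokens, grid_hw):
        h, w = grid_hw
        b, n, c = tokens.shape                    # expects n == h * w
        x = tokens.transpose(1, 2).reshape(b, c, h, w)
        x = self.pool(x)                          # (b, dim_out, ceil(h/2), ceil(w/2))
        return x.flatten(2).transpose(1, 2), (x.shape[2], x.shape[3])
```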
2 code implementations • CVPR 2021 • Sangdoo Yun, Seong Joon Oh, Byeongho Heo, Dongyoon Han, Junsuk Choe, Sanghyuk Chun
However, they have not fixed the training set, presumably because of a formidable annotation cost.
Ranked #21 on Image Classification on OmniBenchmark
1 code implementation • CVPR 2021 • Sanghyuk Chun, Seong Joon Oh, Rafael Sampaio de Rezende, Yannis Kalantidis, Diane Larlus
Instead, we propose to use Probabilistic Cross-Modal Embedding (PCME), where samples from the different modalities are represented as probabilistic distributions in the common embedding space.
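A minimal sketch of the probabilistic-embedding idea, assuming each modality's encoder ends in mean and log-variance heads that define a diagonal Gaussian from which embeddings are sampled; the head names, sample count, and the matching objective used by PCME itself are assumptions and are not shown here.

```python
import torch
import torch.nn as nn

class ProbabilisticHead(nn.Module):
    """Maps a deterministic feature to a diagonal Gaussian in embedding space."""
    def __init__(self, feat_dim, embed_dim):
        super().__init__()
        self.mu = nn.Linear(feat_dim, embed_dim)
        self.logvar = nn.Linear(feat_dim, embed_dim)

    def forward(self, feat, n_samples=8):
        mu, logvar = self.mu(feat), self.logvar(feat)
        std = (0.5 * logvar).exp()
        eps = torch.randn(n_samples, *mu.shape, device=feat.device)
        samples = mu.unsqueeze(0) + eps * std.unsqueeze(0)  # (n_samples, batch, dim)
        return mu, logvar, samples
```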
2 code implementations • 7 Dec 2020 • Sangdoo Yun, Seong Joon Oh, Byeongho Heo, Dongyoon Han, Jinhyung Kim
Recent data augmentation strategies have been reported to address the overfitting problems in static image classifiers.
2 code implementations • 8 Jul 2020 • Junsuk Choe, Seong Joon Oh, Sanghyuk Chun, Seungho Lee, Zeynep Akata, Hyunjung Shim
In this paper, we argue that the WSOL task is ill-posed with only image-level labels, and propose a new evaluation protocol where full supervision is limited to only a small held-out set not overlapping with the test set.
4 code implementations • ICLR 2021 • Byeongho Heo, Sanghyuk Chun, Seong Joon Oh, Dongyoon Han, Sangdoo Yun, Gyuwan Kim, Youngjung Uh, Jung-Woo Ha
Because of the scale invariance, this modification only alters the effective step sizes without changing the effective update directions, thus enjoying the original convergence properties of GD optimizers.
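If the modification in question projects the update onto the tangent space of a scale-invariant weight (removing its norm-growing radial component), a minimal sketch of that projection is below; this is illustrative rather than the optimizer's full update rule, and the function name is hypothetical.

```python
import torch

def project_out_radial(weight, update, eps=1e-8):
    """Remove the component of `update` parallel to `weight`.

    For scale-invariant weights (e.g. those followed by normalization),
    this keeps the effective update direction while preventing needless
    growth of the weight norm. Sketch only.
    """
    w = weight.flatten()
    u = update.flatten()
    w_unit = w / (w.norm() + eps)
    radial = torch.dot(u, w_unit) * w_unit
    return (u - radial).view_as(update)
```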
no code implementations • 9 Mar 2020 • Sanghyuk Chun, Seong Joon Oh, Sangdoo Yun, Dongyoon Han, Junsuk Choe, Youngjoon Yoo
Despite the apparently human-level performance of deep neural networks (DNNs), they behave fundamentally differently from humans.
2 code implementations • ICML 2020 • Muhammad Ferjad Naeem, Seong Joon Oh, Youngjung Uh, Yunjey Choi, Jaejun Yoo
In this paper, we show that even the latest version of the precision and recall metrics are not reliable yet.
2 code implementations • CVPR 2020 • Junsuk Choe, Seong Joon Oh, Seungho Lee, Sanghyuk Chun, Zeynep Akata, Hyunjung Shim
In this paper, we argue that the WSOL task is ill-posed with only image-level labels, and propose a new evaluation protocol where full supervision is limited to only a small held-out set not overlapping with the test set.
2 code implementations • 10 Oct 2019 • Junyeop Lee, Sungrae Park, Jeonghun Baek, Seong Joon Oh, Seonghyeon Kim, Hwalsuk Lee
Scene text recognition (STR) is the task of recognizing character sequences in natural scenes.
Ranked #2 on Scene Text Recognition on ICDAR 2003
3 code implementations • ICML 2020 • Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, Seong Joon Oh
This tactic is feasible in many scenarios where it is much easier to define a set of biased representations than to define and quantify bias.
29 code implementations • ICCV 2019 • Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo
Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers; a rough sketch of a regional mixing variant follows below.
Ranked #1 on Out-of-Distribution Generalization on ImageNet-W
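A minimal sketch of a CutMix-style regional mix, assuming a patch from a shuffled copy of the batch is pasted in and labels are mixed in proportion to the pasted area; the Beta prior and all names are illustrative defaults rather than the paper's exact recipe.

```python
import numpy as np
import torch

def cutmix(images, labels_onehot, alpha=1.0):
    """Paste a random patch from a shuffled batch and mix labels by area.

    images:        (B, C, H, W) batch of images (modified in place)
    labels_onehot: (B, num_classes) one-hot (or soft) labels
    """
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(images.size(0))
    h, w = images.shape[2:]
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip([cy - cut_h // 2, cy + cut_h // 2], 0, h)
    x1, x2 = np.clip([cx - cut_w // 2, cx + cut_w // 2], 0, w)
    images[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
    lam_adj = 1 - (y2 - y1) * (x2 - x1) / (h * w)   # actual pasted area ratio
    mixed_labels = lam_adj * labels_onehot + (1 - lam_adj) * labels_onehot[perm]
    return images, mixed_labels
```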
no code implementations • ICLR 2019 • Seong Joon Oh, Kevin P. Murphy, Jiyan Pan, Joseph Roth, Florian Schroff, Andrew C. Gallagher
Instance embeddings are an efficient and versatile image representation that facilitates applications like recognition, verification, retrieval, and clustering.
11 code implementations • ICCV 2019 • Jeonghun Baek, Geewook Kim, Junyeop Lee, Sungrae Park, Dongyoon Han, Sangdoo Yun, Seong Joon Oh, Hwalsuk Lee
Many new proposals for scene text recognition (STR) models have been introduced in recent years.
Ranked #6 on Scene Text Recognition on ICDAR 2003
1 code implementation • 30 Sep 2018 • Seong Joon Oh, Kevin Murphy, Jiyan Pan, Joseph Roth, Florian Schroff, Andrew Gallagher
Instance embeddings are an efficient and versatile image representation that facilitates applications like recognition, verification, retrieval, and clustering.
no code implementations • 31 May 2018 • Edgar Tretschk, Seong Joon Oh, Mario Fritz
As a result of our attack, the victim agent is misguided to optimise for the adversarial reward over time.
no code implementations • 15 May 2018 • Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele, Mario Fritz
At the core of FL is a network of anonymous user devices sharing training information (model parameter updates) computed locally on personal data.
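For context, here is a minimal sketch of a generic FedAvg-style round, i.e. the kind of locally computed parameter-update sharing the sentence describes; it is not the attack or defence studied in the paper, and all names and hyperparameters are illustrative. It assumes a simple model whose state tensors are all floating point apart from integer buffers, which are cast back after averaging.

```python
import copy
import torch

def federated_round(global_model, client_loaders, local_steps=1, lr=0.01):
    """One generic FedAvg-style round: each client trains locally on its own
    data, then the server averages the resulting parameters. Sketch only."""
    client_states = []
    for loader in client_loaders:
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _, (x, y) in zip(range(local_steps), loader):
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(model(x), y)
            loss.backward()
            opt.step()
        client_states.append(model.state_dict())
    ref = client_states[0]
    avg_state = {k: torch.stack([s[k].float() for s in client_states])
                        .mean(0).to(ref[k].dtype)
                 for k in ref}
    global_model.load_state_dict(avg_state)
    return global_model
```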
no code implementations • CVPR 2018 • Qianru Sun, Liqian Ma, Seong Joon Oh, Luc van Gool, Bernt Schiele, Mario Fritz
As more and more personal photos are shared online, being able to obfuscate identities in such photos is becoming a necessity for privacy protection.
2 code implementations • ICLR 2018 • Seong Joon Oh, Max Augustin, Bernt Schiele, Mario Fritz
On the one hand, our work exposes the vulnerability of black-box neural networks to different types of attacks -- we show that the revealed internal information helps generate more effective adversarial examples against the black-box model.
no code implementations • 9 Oct 2017 • Seong Joon Oh, Rodrigo Benenson, Mario Fritz, Bernt Schiele
Person recognition in social media photos sets new challenges for computer vision, including non-cooperative subjects (e.g., backward viewpoints, unusual poses) and great changes in appearance.
no code implementations • CVPR 2017 • Anna Rohrbach, Marcus Rohrbach, Siyu Tang, Seong Joon Oh, Bernt Schiele
At training time, we first learn how to localize characters by relating their visual appearance to mentions in the descriptions via a semi-supervised approach.
no code implementations • ICCV 2017 • Seong Joon Oh, Mario Fritz, Bernt Schiele
We derive the optimal strategy for the user that assures an upper bound on the recognition rate independent of the recogniser's counter measure.
no code implementations • CVPR 2017 • Seong Joon Oh, Rodrigo Benenson, Anna Khoreva, Zeynep Akata, Mario Fritz, Bernt Schiele
We show how to combine both information sources in order to recover 80% of the fully supervised performance, which is the new state of the art in weakly supervised training for pixel-wise semantic labelling.
Ranked #25 on Semantic Segmentation on PASCAL VOC 2012 val
no code implementations • 28 Jul 2016 • Seong Joon Oh, Rodrigo Benenson, Mario Fritz, Bernt Schiele
As we shift more of our lives into the virtual domain, the volume of data shared on the web keeps increasing and presents a threat to our privacy.
no code implementations • ICCV 2015 • Seong Joon Oh, Rodrigo Benenson, Mario Fritz, Bernt Schiele
Recognising persons in everyday photos presents major challenges (occluded faces, different clothing, locations, etc.)