1 code implementation • 13 Nov 2023 • Tien Dat Nguyen, Jinwoo Kim, Hongseok Yang, Seunghoon Hong
We present a general framework for symmetrizing an arbitrary neural-network architecture and making it equivariant with respect to a given group.
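The core idea, symmetrizing an arbitrary function by averaging its outputs over a group's actions, can be sketched in a toy example. All names here are illustrative, not the paper's code, and a scalar function over permutations of three inputs stands in for a neural network and its symmetry group:

```python
import itertools

def base_f(x):
    # An arbitrary, non-symmetric base function (stand-in for an MLP).
    return x[0] + 2 * x[1] + 3 * x[2]

def symmetrize(f, x):
    # Average f over every permutation of the input. The averaged
    # function is invariant: reordering x cannot change the result,
    # because the average ranges over all orderings anyway.
    perms = list(itertools.permutations(x))
    return sum(f(list(p)) for p in perms) / len(perms)
```

Averaging over the full group is exact but costs one evaluation per group element, which is why scalable variants sample group elements instead.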
no code implementations • 8 Sep 2023 • Sungjun Cho, Dae-Woong Jeong, Sung Moon Ko, Jinwoo Kim, Sehui Han, Seunghoon Hong, Honglak Lee, Moontae Lee
Pretraining molecular representations from large unlabeled data is essential for molecular property prediction due to the high cost of obtaining ground-truth labels.
no code implementations • 28 Aug 2023 • Youngrae Kim, Younggeol Cho, Thanh-Tung Nguyen, Seunghoon Hong, Dongman Lee
Real-world weather conditions are intricate and often occur concurrently.
1 code implementation • NeurIPS 2023 • Jinwoo Kim, Tien Dat Nguyen, Ayhan Suleymanzade, Hyeokjun An, Seunghoon Hong
In contrast to equivariant architectures, we use an arbitrary base model, such as an MLP or a transformer, and symmetrize it to be equivariant to the given group by employing a small equivariant network that parameterizes the probabilistic distribution underlying the symmetrization.

Ranked #1 on Link Prediction on PCQM-Contact (using extra training data)
1 code implementation • 27 Mar 2023 • Donggyun Kim, Jinwoo Kim, Seongwoong Cho, Chong Luo, Seunghoon Hong
We propose Visual Token Matching (VTM), a universal few-shot learner for arbitrary dense prediction tasks.
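A matching-based few-shot learner of this kind can be sketched with scalar features: each query token's prediction is a similarity-weighted combination of support-token labels. This is a hypothetical toy (the actual method uses learned embeddings and attention, not raw scalars):

```python
import math

def token_matching(query_feats, support_feats, support_labels):
    # For each query token, weight every support label by a Gaussian
    # similarity between query and support features, then normalize.
    preds = []
    for q in query_feats:
        w = [math.exp(-(q - s) ** 2) for s in support_feats]
        z = sum(w)
        preds.append(sum(wi * yi for wi, yi in zip(w, support_labels)) / z)
    return preds
```

Because the label space enters only through the support labels, the same matching rule transfers to arbitrary dense prediction tasks.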
1 code implementation • CVPR 2023 • Jaehoon Yoo, Semin Kim, Doyup Lee, Chiheon Kim, Seunghoon Hong
However, transformers cannot directly learn long-term dependencies in videos due to the quadratic complexity of self-attention, and they inherently suffer from slow inference and error propagation due to the autoregressive process.
Ranked #24 on Video Generation on UCF-101
1 code implementation • 27 Oct 2022 • Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, Seunghoon Hong
The forward and backward costs are thus linear in the number of edges, which each attention head can also choose flexibly based on the input.
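Restricting attention to a given edge list is what makes the cost linear in the number of edges: each edge is scored once, and each node normalizes only over its listed neighbours. A toy scalar sketch (illustrative names, not the paper's parameterization):

```python
import math
from collections import defaultdict

def sparse_attention(x, edges):
    # Score only the edges that exist: O(|E|) work, not O(n^2).
    scores = defaultdict(dict)
    for i, j in edges:
        scores[i][j] = x[i] * x[j]      # toy scalar "query . key"
    out = []
    for i in range(len(x)):
        nbrs = scores.get(i, {})
        if not nbrs:                    # isolated node: nothing to attend to
            out.append(0.0)
            continue
        m = max(nbrs.values())          # subtract max for numerical stability
        w = {j: math.exp(s - m) for j, s in nbrs.items()}
        z = sum(w.values())
        out.append(sum(w[j] * x[j] for j in w) / z)
    return out
```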
1 code implementation • 22 Aug 2022 • Jinwoo Kim, Saeyoon Oh, Sungjun Cho, Seunghoon Hong
Many problems in computer vision and machine learning can be cast as learning on hypergraphs that represent higher-order relations.
1 code implementation • 11 Aug 2022 • Woo Jae Kim, Seunghoon Hong, Sung-Eui Yoon
Adversarial attacks with improved transferability - the ability of an adversarial example crafted on a known model to also fool unknown models - have recently received much attention due to their practicality.
1 code implementation • 6 Jul 2022 • Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong
We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice.
Ranked #15 on Graph Regression on PCQM4Mv2-LSC
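Feeding a graph to a standard Transformer starts with treating both nodes and edges as plain tokens in one sequence. A minimal sketch of that tokenization (hypothetical helper; the actual method additionally attaches node identifiers and type embeddings to each token):

```python
def graph_to_tokens(num_nodes, edges):
    # One token per node, one token per edge: the graph becomes an
    # ordinary sequence a vanilla Transformer can consume.
    tokens = [("node", i) for i in range(num_nodes)]
    tokens += [("edge", i, j) for i, j in edges]
    return tokens
```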
1 code implementation • CVPR 2022 • Yoonki Cho, Woo Jae Kim, Seunghoon Hong, Sung-Eui Yoon
In this paper, we propose a novel Part-based Pseudo Label Refinement (PPLR) framework that reduces the label noise by employing the complementary relationship between global and part features.
Ranked #3 on Unsupervised Vehicle Re-Identification on VeRi-776
no code implementations • 4 Jan 2022 • Kyungmoon Lee, Sungyeon Kim, Seunghoon Hong, Suha Kwak
Motivated by this, we introduce a new data augmentation approach that synthesizes novel classes and their embedding vectors.
no code implementations • NeurIPS 2021 • HyeongJoo Hwang, Geon-Hyeong Kim, Seunghoon Hong, Kee-Eung Kim
Multi-View Representation Learning (MVRL) aims to discover a shared representation of observations from different views with complex underlying correlations.
1 code implementation • 30 Oct 2021 • Jaechang Kim, Yunjoo Lee, Seunghoon Hong, Jungseul Ok
To obtain a continuous representation of audio and enable super resolution for arbitrary scale factors, we propose an implicit neural representation method, coined Local Implicit representation for Super resolution of Arbitrary scale (LISA).
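What makes arbitrary-scale super resolution possible is that the signal is represented as a function of a continuous coordinate, queryable anywhere, not just at the stored sample points. A toy stand-in using linear interpolation (an implicit model replaces this with a learned network conditioned on local latents; names here are ours):

```python
import math

def query_implicit(samples, t):
    # Evaluate a discrete signal at any continuous coordinate
    # t in [0, len(samples) - 1] by interpolating its neighbours.
    i = min(int(math.floor(t)), len(samples) - 2)
    frac = t - i
    return (1 - frac) * samples[i] + frac * samples[i + 1]
```

Querying on a finer coordinate grid than the original sampling rate is then exactly super resolution at any scale factor.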
1 code implementation • 28 Oct 2021 • Donggyun Kim, Seongwoong Cho, Wonkwang Lee, Seunghoon Hong
To this end, we propose Multi-Task Neural Processes (MTNPs), an extension of NPs designed to jointly infer tasks realized from multiple stochastic processes.
2 code implementations • NeurIPS 2021 • Jinwoo Kim, Saeyoon Oh, Seunghoon Hong
We present a generalization of Transformers to any-order permutation invariant data (sets, graphs, and hypergraphs).
Ranked #5 on Graph Regression on PCQM4M-LSC (Validation MAE metric)
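The first-order building block behind permutation-invariant architectures is a linear layer that mixes each element with a set-wide aggregate, so that permuting the input permutes the output identically. A minimal sketch (DeepSets-style basis, illustrative coefficients):

```python
def equivariant_linear(x, a=1.0, b=0.5):
    # Permutation-equivariant linear map on a set: each output element
    # combines the element itself with the sum over the whole set.
    s = sum(x)
    return [a * xi + b * s for xi in x]
```

Higher-order generalizations apply the same principle to pairs and tuples of elements, which is what extends the construction from sets to graphs and hypergraphs.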
no code implementations • ICLR 2022 • Donggyun Kim, Seongwoong Cho, Wonkwang Lee, Seunghoon Hong
Neural Processes (NPs) consider a task as a function realized from a stochastic process and flexibly adapt to unseen tasks through inference on functions.
1 code implementation • ICLR 2021 • Wonkwang Lee, Whie Jung, Han Zhang, Ting Chen, Jing Yu Koh, Thomas Huang, Hyungsuk Yoon, Honglak Lee, Seunghoon Hong
Despite recent advances in the literature, existing approaches are limited to moderately short-term prediction (less than a few seconds); extrapolating further into the future quickly degrades both structure and content.
2 code implementations • CVPR 2021 • Jinwoo Kim, Jaehoon Yoo, Juho Lee, Seunghoon Hong
Generative modeling of set-structured data, such as point clouds, requires reasoning over local and global structures at various scales.
Ranked #3 on Point Cloud Generation on ShapeNet Car
1 code implementation • CVPR 2021 • Sungwon Park, Sungwon Han, Sundong Kim, Danu Kim, Sungkyu Park, Seunghoon Hong, Meeyoung Cha
Unsupervised image clustering methods often introduce alternative objectives to indirectly train the model and are subject to faulty predictions and overconfident results.
Ranked #1 on Image Clustering on CIFAR-100 (Train Set metric, using extra training data)
2 code implementations • NeurIPS 2020 • HyeongJoo Hwang, Geon-Hyeong Kim, Seunghoon Hong, Kee-Eung Kim
Grounded in information theory, we cast the simultaneous learning of domain-invariant and domain-specific representations as a joint objective of multiple information constraints, which does not require adversarial training or gradient reversal layers.
2 code implementations • ECCV 2020 • Wonkwang Lee, Donggyun Kim, Seunghoon Hong, Honglak Lee
Despite its simplicity, we show that the proposed method is highly effective, achieving image generation quality comparable to state-of-the-art methods using disentangled representations.
no code implementations • ICLR 2019 • Dingdong Yang, Seunghoon Hong, Yunseok Jang, Tianchen Zhao, Honglak Lee
We propose a simple yet highly effective method that addresses the mode-collapse problem in the Conditional Generative Adversarial Network (cGAN).
1 code implementation • NeurIPS 2018 • Seunghoon Hong, Xinchen Yan, Thomas Huang, Honglak Lee
In this work, we present a novel hierarchical framework for semantic image manipulation.
no code implementations • CVPR 2018 • Seunghoon Hong, Dingdong Yang, Jongwook Choi, Honglak Lee
We propose a novel hierarchical approach for text-to-image synthesis by inferring semantic layout.
1 code implementation • 25 Jun 2017 • Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee
To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatiotemporal dynamics for pixel-level future prediction in natural videos.
Ranked #1 on Video Prediction on KTH (Cond metric)
no code implementations • CVPR 2017 • Seunghoon Hong, Donghun Yeo, Suha Kwak, Honglak Lee, Bohyung Han
Our goal is to overcome this limitation without additional human intervention by retrieving videos relevant to target class labels from a web repository and generating segmentation labels from the retrieved videos to simulate strong supervision for semantic segmentation.
no code implementations • CVPR 2016 • Seunghoon Hong, Junhyuk Oh, Bohyung Han, Honglak Lee
We propose a novel weakly-supervised semantic segmentation algorithm based on Deep Convolutional Neural Network (DCNN).
3 code implementations • NeurIPS 2015 • Seunghoon Hong, Hyeonwoo Noh, Bohyung Han
We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations.
5 code implementations • ICCV 2015 • Hyeonwoo Noh, Seunghoon Hong, Bohyung Han
We propose a novel semantic segmentation algorithm by learning a deconvolution network.
Ranked #3 on Curved Text Detection on SCUT-CTW1500
no code implementations • 24 Feb 2015 • Seunghoon Hong, Tackgeun You, Suha Kwak, Bohyung Han
We propose an online visual tracking algorithm by learning discriminative saliency map using Convolutional Neural Network (CNN).