1 code implementation • 20 Mar 2023 • Junsu Kim, Younggyo Seo, Sungsoo Ahn, Kyunghwan Son, Jinwoo Shin
Recently, graph-based planning algorithms have gained much attention for solving goal-conditioned reinforcement learning (RL) tasks: they provide a sequence of subgoals to reach the target goal, and the agents learn to execute subgoal-conditioned policies.
1 code implementation • 28 Feb 2023 • Sungbin Shin, Yohan Jo, Sungsoo Ahn, Namhoon Lee
Concept bottleneck models (CBMs) are a class of interpretable neural network models that predict the target response of a given input based on its high-level concepts.
1 code implementation • 21 Feb 2023 • Hyosoon Jang, Sangwoo Mo, Sungsoo Ahn
We also propose a variational expectation maximization algorithm to train our DPM in the semi-supervised setting.
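The snippet above only names the algorithm. As a loose, generic illustration of semi-supervised expectation maximization (not the paper's DPM, whose model and notation are not given here), the sketch below runs EM on a two-component 1-D Gaussian mixture in which labeled points keep fixed one-hot responsibilities while unlabeled points receive soft E-step responsibilities; all names are illustrative:

```python
import numpy as np

def semi_supervised_em(x_lab, y_lab, x_unlab, n_steps=50):
    """EM for a 2-component 1-D Gaussian mixture with partial labels.

    Labeled data fixes responsibilities to one-hot; unlabeled data
    receives soft responsibilities in the E-step (illustrative sketch).
    """
    x = np.concatenate([x_lab, x_unlab])
    # Initialize parameters from the labeled subset.
    mu = np.array([x_lab[y_lab == k].mean() for k in (0, 1)])
    var = np.array([x_lab[y_lab == k].var() + 1e-3 for k in (0, 1)])
    pi = np.array([0.5, 0.5])
    onehot = np.eye(2)[y_lab]
    for _ in range(n_steps):
        # E-step: soft responsibilities for unlabeled points only.
        log_p = (-0.5 * (x_unlab[:, None] - mu) ** 2 / var
                 - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        r_unlab = np.exp(log_p - log_p.max(1, keepdims=True))
        r_unlab /= r_unlab.sum(1, keepdims=True)
        r = np.vstack([onehot, r_unlab])  # labeled responsibilities stay fixed
        # M-step: weighted maximum-likelihood updates.
        nk = r.sum(0)
        mu = (r * x[:, None]).sum(0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(0) / nk + 1e-6
        pi = nk / nk.sum()
    return mu, var, pi
```

The labeled subset anchors the component identities, which is what keeps the EM updates from swapping clusters during training.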
no code implementations • 15 Oct 2022 • Jiye Kim, Seungbeom Lee, Dongwoo Kim, Sungsoo Ahn, Jaesik Park
Designing a neural network architecture for molecular representation is crucial for AI-driven drug discovery and molecule design.
1 code implementation • 22 Jun 2022 • Nayeong Kim, Sehyun Hwang, Sungsoo Ahn, Jaesik Park, Suha Kwak
We propose a new method for training debiased classifiers without spurious attribute labels.
no code implementations • NeurIPS 2021 • Sihyun Yu, Sungsoo Ahn, Le Song, Jinwoo Shin
We consider the problem of searching for an input that maximizes a black-box objective function, given a static dataset of input-output queries.
no code implementations • ICLR 2022 • Jaehyung Kim, Dongyeop Kang, Sungsoo Ahn, Jinwoo Shin
Remarkably, our method is more effective on the challenging low-data and class-imbalanced regimes, and the learned augmentation policy is well-transferable to the different tasks and models.
no code implementations • ICLR 2022 • Sungsoo Ahn, Binghong Chen, Tianzhe Wang, Le Song
In this paper, we explore the problem of generating molecules using deep neural networks, which has recently gained much interest in chemistry.
no code implementations • 22 Jul 2021 • Sihyun Yu, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
Abstract reasoning, i.e., inferring complicated patterns from given observations, is a central building block of artificial general intelligence.
1 code implementation • 9 Jun 2021 • Junsu Kim, Sungsoo Ahn, Hankook Lee, Jinwoo Shin
Our main idea is based on a self-improving procedure that trains the model to imitate successful trajectories found by itself.
Ranked #3 on Multi-step retrosynthesis on USPTO-190
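The self-improving procedure described above can be caricatured in a toy setting: sample rollouts from the current stochastic policy, keep only the trajectories that reach the goal, and refit the policy to imitate them. The environment, tabular policy, and names below are purely illustrative, not the paper's retrosynthesis model:

```python
import random

def self_imitation(n_rounds=30, horizon=12, goal=5):
    """Toy self-imitation loop on a 1-D chain: sample rollouts, keep only
    trajectories that reach the goal, and fit the policy to imitate them."""
    # policy[state] -> probability of moving right (+1); otherwise move left.
    policy = {s: 0.5 for s in range(-horizon, goal + 1)}
    counts = {s: [1, 1] for s in policy}  # [left, right] pseudo-counts
    for _ in range(n_rounds):
        state, traj = 0, []
        for _ in range(horizon):
            right = random.random() < policy[state]
            traj.append((state, right))
            state += 1 if right else -1
            if state == goal:
                break
        if state == goal:  # successful trajectory: imitate its actions
            for s, right in traj:
                counts[s][int(right)] += 1
            for s in policy:
                left, rt = counts[s]
                policy[s] = rt / (left + rt)
    return policy
```

Failed rollouts are simply discarded, so the policy only ever moves toward behavior that has already succeeded, which is the essence of the self-improving loop.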
no code implementations • 3 May 2021 • Hankook Lee, Sungsoo Ahn, Seung-Woo Seo, You Young Song, Eunho Yang, Sung-Ju Hwang, Jinwoo Shin
Retrosynthesis, whose goal is to find a set of reactants for synthesizing a target product, is an emerging research area in deep learning.
1 code implementation • ICLR 2021 • Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
Recent discoveries on neural network pruning reveal that, with a carefully chosen layerwise sparsity, simple magnitude-based pruning achieves a state-of-the-art tradeoff between sparsity and performance.
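For context on the snippet above, here is a minimal sketch of plain magnitude-based pruning given per-layer sparsity levels (the function and argument names are illustrative; the paper's contribution concerns how to *choose* the layerwise sparsity, which is not reimplemented here):

```python
import numpy as np

def magnitude_prune(layers, sparsities):
    """Zero out the smallest-magnitude weights in each layer.

    `layers` is a list of weight arrays and `sparsities` the per-layer
    fraction of weights to remove. With continuous weights (no magnitude
    ties) exactly that fraction is zeroed.
    """
    pruned = []
    for w, s in zip(layers, sparsities):
        k = int(round(s * w.size))  # number of weights to drop
        if k == 0:
            pruned.append(w.copy())
            continue
        # Threshold at the k-th smallest magnitude; keep strictly larger ones.
        thresh = np.sort(np.abs(w), axis=None)[k - 1]
        mask = np.abs(w) > thresh
        pruned.append(w * mask)
    return pruned
```

A uniform sparsity across layers is the naive choice; the finding quoted above is that the layerwise allocation of sparsity is what makes this simple criterion competitive.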
2 code implementations • 6 Jul 2020 • Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, Jinwoo Shin
Neural networks often learn to make predictions that rely too heavily on spurious correlations in the dataset, which biases the model.
Ranked #1 on Out-of-Distribution Generalization on ImageNet-W
2 code implementations • NeurIPS 2020 • Sungsoo Ahn, Junsu Kim, Hankook Lee, Jinwoo Shin
De novo molecular design attempts to search over the chemical space for molecules with the desired property.
1 code implementation • ICML 2020 • Sungsoo Ahn, Younggyo Seo, Jinwoo Shin
Designing efficient algorithms for combinatorial optimization appears ubiquitously in various scientific fields.
no code implementations • 25 Sep 2019 • Sungsoo Ahn, Younggyo Seo, Jinwoo Shin
Designing efficient algorithms for combinatorial optimization appears ubiquitously in various scientific fields.
2 code implementations • CVPR 2019 • Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D. Lawrence, Zhenwen Dai
We further demonstrate the strength of our method on knowledge transfer across heterogeneous network architectures by transferring knowledge from a convolutional neural network (CNN) to a multi-layer perceptron (MLP) on CIFAR-10.
no code implementations • ICML 2018 • Sungsoo Ahn, Michael Chertkov, Adrian Weller, Jinwoo Shin
Probabilistic graphical models are a key tool in machine learning applications.
no code implementations • 5 Jan 2018 • Sungsoo Ahn, Michael Chertkov, Jinwoo Shin, Adrian Weller
Recently, so-called gauge transformations were used to improve variational lower bounds on $Z$.
no code implementations • NeurIPS 2017 • Sungsoo Ahn, Michael Chertkov, Jinwoo Shin
Computing the partition function is the most important statistical inference task arising in applications of Graphical Models (GMs).
no code implementations • 29 May 2016 • Sungsoo Ahn, Michael Chertkov, Jinwoo Shin
Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series.
no code implementations • NeurIPS 2015 • Sungsoo Ahn, Sejun Park, Michael Chertkov, Jinwoo Shin
Max-product Belief Propagation (BP) is a popular message-passing algorithm for computing a Maximum-A-Posteriori (MAP) assignment over a distribution represented by a Graphical Model (GM).
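As a concrete special case of the algorithm named above, here is max-product BP on a chain MRF with a single shared pairwise potential, where message passing is exact and reduces to Viterbi decoding (the array shapes and names are illustrative):

```python
import numpy as np

def max_product_chain(unary, pairwise):
    """Max-product BP on a chain MRF: returns a MAP assignment.

    `unary` is an (n, k) array of node potentials and `pairwise` a (k, k)
    potential shared by every edge. On chains max-product BP is exact and
    coincides with the Viterbi algorithm.
    """
    n, k = unary.shape
    msg = np.zeros((n, k))               # best log-score ending in each state
    back = np.zeros((n, k), dtype=int)   # argmax backpointers
    msg[0] = np.log(unary[0])
    logp = np.log(pairwise)
    for t in range(1, n):
        scores = msg[t - 1][:, None] + logp  # (k, k): prev state x cur state
        back[t] = scores.argmax(0)
        msg[t] = scores.max(0) + np.log(unary[t])
    # Backtrack the MAP assignment from the best final state.
    states = [int(msg[-1].argmax())]
    for t in range(n - 1, 0, -1):
        states.append(int(back[t][states[-1]]))
    return states[::-1]
```

For example, with node potentials favoring state 0 at the first two nodes and state 1 at the last, and a pairwise potential favoring agreement, the MAP assignment is `[0, 0, 1]`.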