Search Results for author: Jinwoo Shin

Found 96 papers, 49 papers with code

Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing

no code implementations 16 Dec 2021 Joonhyung Park, June Yong Yang, Jinwoo Shin, Sung Ju Hwang, Eunho Yang

However, they now suffer from a lack of sample diversification, as they always deterministically select regions with maximum saliency, injecting bias into the augmented data.

DAPPER: Performance Estimation of Domain Adaptation in Mobile Sensing

no code implementations 22 Nov 2021 Taesik Gong, Yewon Kim, Adiba Orzikulova, Yunxin Liu, Sung Ju Hwang, Jinwoo Shin, Sung-Ju Lee

We present DAPPER (Domain AdaPtation Performance EstimatoR) that estimates the adaptation performance in a target domain with only unlabeled target data.

Domain Adaptation

Improving Transferability of Representations via Augmentation-Aware Self-Supervision

1 code implementation NeurIPS 2021 Hankook Lee, Kibok Lee, Kimin Lee, Honglak Lee, Jinwoo Shin

Recent unsupervised representation learning methods have been shown to be effective in a range of vision tasks by learning representations invariant to data augmentations such as random cropping and color jittering.

Representation Learning Transfer Learning

SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness

1 code implementation NeurIPS 2021 Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin

Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.

RoMA: Robust Model Adaptation for Offline Model-based Optimization

no code implementations NeurIPS 2021 Sihyun Yu, Sungsoo Ahn, Le Song, Jinwoo Shin

We consider the problem of searching an input maximizing a black-box objective function given a static dataset of input-output queries.

Meta-Learning Sparse Implicit Neural Representations

1 code implementation NeurIPS 2021 Jaeho Lee, Jihoon Tack, Namhoon Lee, Jinwoo Shin

Implicit neural representations are a promising new avenue for representing general signals by learning a continuous function that, parameterized as a neural network, maps the domain of a signal to its codomain; for example, the mapping from spatial coordinates of an image to its pixel values.

Meta-Learning
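
For intuition, here is a minimal sketch of the implicit-representation idea described above, not of the paper's meta-learning or sparsification method: a small MLP is fit to map normalized (x, y) pixel coordinates of a single image to its RGB values. All names, shapes, and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

H, W = 64, 64
image = torch.rand(H, W, 3)          # stand-in for a real image

# Normalized coordinate grid in [-1, 1]^2, one row per pixel.
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
targets = image.reshape(-1, 3)

# The continuous function f: R^2 -> R^3, parameterized as a neural network.
inr = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3), nn.Sigmoid())

opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for step in range(1000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(inr(coords), targets)
    loss.backward()
    opt.step()
```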

Landmark-Guided Subgoal Generation in Hierarchical Reinforcement Learning

1 code implementation NeurIPS 2021 Junsu Kim, Younggyo Seo, Jinwoo Shin

In this paper, we present HIerarchical reinforcement learning Guided by Landmarks (HIGL), a novel framework for training a high-level policy with a reduced action space guided by landmarks, i.e., promising states to explore.

Efficient Exploration Hierarchical Reinforcement Learning

PASS: Patch-Aware Self-Supervision for Vision Transformer

no code implementations 29 Sep 2021 Sukmin Yun, Hankook Lee, Jaehyung Kim, Jinwoo Shin

This paper aims to improve their performance further by utilizing the architectural advantages of the underlying neural network, as the current state-of-the-art visual pretext tasks for self-supervised learning do not enjoy this benefit, i.e., they are architecture-agnostic.

Object Detection Representation Learning +2

Object-aware Contrastive Learning for Debiased Scene Representation

1 code implementation NeurIPS 2021 Sangwoo Mo, Hyunwoo Kang, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin

Contrastive self-supervised learning has shown impressive results in learning visual representations from unlabeled images by enforcing invariance against different data augmentations.

Contrastive Learning Representation Learning +1

Abstract Reasoning via Logic-guided Generation

no code implementations 22 Jul 2021 Sihyun Yu, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin

Abstract reasoning, i.e., inferring complicated patterns from given observations, is a central building block of artificial general intelligence.

Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble

1 code implementation 1 Jul 2021 SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin

Recent advance in deep offline reinforcement learning (RL) has made it possible to train strong robotic agents from offline datasets.

Offline RL

OpenCoS: Contrastive Semi-supervised Learning for Handling Open-set Unlabeled Data

no code implementations 29 Jun 2021 Jongjin Park, Sukmin Yun, Jongheon Jeong, Jinwoo Shin

Modern semi-supervised learning methods conventionally assume both labeled and unlabeled data have the same class distribution.

Contrastive Learning

Co$^2$L: Contrastive Continual Learning

1 code implementation 28 Jun 2021 Hyuntak Cha, Jaeho Lee, Jinwoo Shin

Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks than joint-training methods relying on task-specific supervision.

Continual Learning Contrastive Learning +2

Quality-Agnostic Image Recognition via Invertible Decoder

no code implementations CVPR 2021 Insoo Kim, Seungju Han, Ji-won Baek, Seong-Jin Park, Jae-Joon Han, Jinwoo Shin

Our two-stage scheme allows the network to produce clean-like and robust features from images of any quality, by reconstructing their clean images via the invertible decoder.

Data Augmentation Domain Generalization +2

SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Adversarial Robustness

no code implementations ICML Workshop AML 2021 Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin

Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.

Adversarial Robustness

Entropy Weighted Adversarial Training

no code implementations ICML Workshop AML 2021 Minseon Kim, Jihoon Tack, Jinwoo Shin, Sung Ju Hwang

Adversarial training methods, which minimize the loss of adversarially perturbed training examples, have been extensively studied as a way to improve the robustness of deep neural networks.

Scaling Neural Tangent Kernels via Sketching and Random Features

1 code implementation NeurIPS 2021 Amir Zandieh, Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, Jinwoo Shin

To accelerate learning with NTK, we design a near input-sparsity time approximation algorithm for NTK by sketching the polynomial expansions of arc-cosine kernels: our sketch for the convolutional counterpart of NTK (CNTK) can transform any image in time linear in the number of pixels.

Self-Improved Retrosynthetic Planning

1 code implementation 9 Jun 2021 Junsu Kim, Sungsoo Ahn, Hankook Lee, Jinwoo Shin

Our main idea is based on a self-improving procedure that trains the model to imitate successful trajectories found by itself.

RetCL: A Selection-based Approach for Retrosynthesis via Contrastive Learning

no code implementations 3 May 2021 Hankook Lee, Sungsoo Ahn, Seung-Woo Seo, You Young Song, Eunho Yang, Sung-Ju Hwang, Jinwoo Shin

Retrosynthesis, whose goal is to find a set of reactants for synthesizing a target product, is an emerging research area in deep learning.

Contrastive Learning

Consistency and Monotonicity Regularization for Neural Knowledge Tracing

no code implementations 3 May 2021 Seewoo Lee, Youngduck Choi, Juneyoung Park, Byungsoo Kim, Jinwoo Shin

Knowledge Tracing (KT), tracking a human's knowledge acquisition, is a central component in online learning and AI in Education.

Data Augmentation Knowledge Tracing

Random Features for the Neural Tangent Kernel

1 code implementation 3 Apr 2021 Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, Jinwoo Shin

We combine random features of the arc-cosine kernels with a sketching-based algorithm that runs in time linear in both the number of data points and the input dimension.
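
As a rough illustration of the random-features idea mentioned above (a hedged sketch, not the paper's algorithm): for the first-order arc-cosine kernel, $k_1(x, y) = 2\,\mathbb{E}_{w \sim N(0, I)}[\mathrm{relu}(w^\top x)\,\mathrm{relu}(w^\top y)]$, so ReLU features of random Gaussian projections approximate the kernel. Dimensions and the feature count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 32, 4096                     # input dimension, number of random features
W = rng.standard_normal((m, d))     # rows w_i ~ N(0, I)

def features(X):
    """Map rows of X (n, d) to random features (n, m) whose inner
    products approximate the first-order arc-cosine kernel."""
    return np.sqrt(2.0 / m) * np.maximum(X @ W.T, 0.0)

x, y = rng.standard_normal(d), rng.standard_normal(d)
approx_k = float(features(x[None]) @ features(y[None]).T)  # ~ k1(x, y)
```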

Training GANs with Stronger Augmentations via Contrastive Discriminator

1 code implementation ICLR 2021 Jongheon Jeong, Jinwoo Shin

Recent works in Generative Adversarial Networks (GANs) are actively revisiting various data augmentation techniques as an effective way to prevent discriminator overfitting.

Contrastive Learning Data Augmentation +1

Consistency Regularization for Adversarial Robustness

1 code implementation ICML Workshop AML 2021 Jihoon Tack, Sihyun Yu, Jongheon Jeong, Minseon Kim, Sung Ju Hwang, Jinwoo Shin

Adversarial training (AT) is currently one of the most successful methods to obtain adversarial robustness of deep neural networks.

Adversarial Robustness Data Augmentation

Model-Augmented Q-learning

no code implementations 7 Feb 2021 Youngmin Oh, Jinwoo Shin, Eunho Yang, Sung Ju Hwang

We show that the proposed scheme, called Model-augmented $Q$-learning (MQL), obtains a policy-invariant solution which is identical to the solution obtained by learning with the true reward.

Q-Learning

Co2L: Contrastive Continual Learning

1 code implementation ICCV 2021 Hyuntak Cha, Jaeho Lee, Jinwoo Shin

Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks than cross-entropy based methods which rely on task-specific supervision.

Continual Learning Contrastive Learning +2

Addressing Distribution Shift in Online Reinforcement Learning with Offline Datasets

no code implementations 1 Jan 2021 SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin

As it turns out, fine-tuning offline RL agents is a non-trivial challenge, due to distribution shift – the agent encounters out-of-distribution samples during online interaction, which may cause bootstrapping error in Q-learning and instability during fine-tuning.

Offline RL Q-Learning

MASKER: Masked Keyword Regularization for Reliable Text Classification

1 code implementation 17 Dec 2020 Seung Jun Moon, Sangwoo Mo, Kimin Lee, Jaeho Lee, Jinwoo Shin

We claim that one central obstacle to the reliability is the over-reliance of the model on a limited number of keywords, instead of looking at the whole context.

Domain Generalization General Classification +4

Learning from Failure: De-biasing Classifier from Biased Classifier

no code implementations NeurIPS 2020 Junhyun Nam, Hyuntak Cha, Sung-Soo Ahn, Jaeho Lee, Jinwoo Shin

Neural networks often learn to make predictions that overly rely on spurious correlation existing in the dataset, which causes the model to be biased.

Provable Memorization via Deep Neural Networks using Sub-linear Parameters

no code implementations 26 Oct 2020 Sejun Park, Jaeho Lee, Chulhee Yun, Jinwoo Shin

It is known that $O(N)$ parameters are sufficient for neural networks to memorize arbitrary $N$ input-label pairs.

Layer-adaptive sparsity for the Magnitude-based Pruning

1 code implementation ICLR 2021 Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin

Recent discoveries on neural network pruning reveal that, with a carefully chosen layerwise sparsity, a simple magnitude-based pruning achieves state-of-the-art tradeoff between sparsity and performance.

Image Classification Network Pruning
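
A minimal sketch of layerwise magnitude-based pruning as described above (illustrative only; the paper's contribution is how to choose the per-layer sparsities, which is not reproduced here):

```python
import numpy as np

def prune_layer(weight, sparsity):
    """Zero out (approximately) the `sparsity` fraction of the
    smallest-magnitude entries of one layer's weight matrix."""
    k = int(sparsity * weight.size)
    if k == 0:
        return weight.copy()
    threshold = np.partition(np.abs(weight).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weight) <= threshold, 0.0, weight)

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 32)), rng.standard_normal((10, 64))]
sparsities = [0.9, 0.5]             # hypothetical per-layer sparsity targets
pruned = [prune_layer(w, s) for w, s in zip(layers, sparsities)]
```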

Few-shot Visual Reasoning with Meta-analogical Contrastive Learning

no code implementations NeurIPS 2020 Youngsung Kim, Jinwoo Shin, Eunho Yang, Sung Ju Hwang

While humans can solve a visual puzzle that requires logical reasoning by observing only a few samples, state-of-the-art deep reasoning models would require training over a large amount of data to obtain similar performance on the same task.

Contrastive Learning Visual Reasoning

Time-Reversal Symmetric ODE Network

1 code implementation NeurIPS 2020 In Huh, Eunho Yang, Sung Ju Hwang, Jinwoo Shin

Time-reversal symmetry, which requires that the dynamics of a system should not change with the reversal of time axis, is a fundamental property that frequently holds in classical and quantum mechanics.

Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning

1 code implementation NeurIPS 2020 Jaehyung Kim, Youngbum Hur, Sejun Park, Eunho Yang, Sung Ju Hwang, Jinwoo Shin

While semi-supervised learning (SSL) has proven to be a promising way for leveraging unlabeled data when labeled data is scarce, the existing SSL algorithms typically assume that training class distributions are balanced.

Learning to Sample with Local and Global Contexts in Experience Replay Buffer

no code implementations ICLR 2021 Youngmin Oh, Kimin Lee, Jinwoo Shin, Eunho Yang, Sung Ju Hwang

Experience replay, which enables the agents to remember and reuse experience from the past, has played a significant role in the success of off-policy reinforcement learning (RL).

Learning from Failure: Training Debiased Classifier from Biased Classifier

2 code implementations 6 Jul 2020 Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, Jinwoo Shin

Neural networks often learn to make predictions that overly rely on spurious correlation existing in the dataset, which causes the model to be biased.

Guiding Deep Molecular Optimization with Genetic Exploration

2 code implementations NeurIPS 2020 Sungsoo Ahn, Junsu Kim, Hankook Lee, Jinwoo Shin

De novo molecular design attempts to search over the chemical space for molecules with the desired property.

Imitation Learning

Learning to Generate Noise for Multi-Attack Robustness

1 code implementation 22 Jun 2020 Divyam Madaan, Jinwoo Shin, Sung Ju Hwang

Adversarial learning has emerged as one of the successful techniques to circumvent the susceptibility of existing methods against adversarial perturbations.

Meta-Learning

QTRAN++: Improved Value Transformation for Cooperative Multi-Agent Reinforcement Learning

no code implementations 22 Jun 2020 Kyunghwan Son, Sung-Soo Ahn, Roben Delos Reyes, Jinwoo Shin, Yung Yi

QTRAN is a multi-agent reinforcement learning (MARL) algorithm capable of learning the largest class of joint-action value functions to date.

SMAC Starcraft

Learning What to Defer for Maximum Independent Sets

no code implementations ICML 2020 Sungsoo Ahn, Younggyo Seo, Jinwoo Shin

Designing efficient algorithms for combinatorial optimization appears ubiquitously in various scientific fields.

Combinatorial Optimization

Minimum Width for Universal Approximation

no code implementations ICLR 2021 Sejun Park, Chulhee Yun, Jaeho Lee, Jinwoo Shin

In this work, we provide the first definitive result in this direction for networks using the ReLU activation functions: The minimum width required for the universal approximation of the $L^p$ functions is exactly $\max\{d_x+1, d_y\}$.

Learning Bounds for Risk-sensitive Learning

1 code implementation NeurIPS 2020 Jaeho Lee, Sejun Park, Jinwoo Shin

The second result, based on a novel variance-based characterization of OCE, gives an expected loss guarantee with a suppressed dependence on the smoothness of the selected OCE.

Consistency Regularization for Certified Robustness of Smoothed Classifiers

1 code implementation NeurIPS 2020 Jongheon Jeong, Jinwoo Shin

A recent technique of randomized smoothing has shown that the worst-case (adversarial) $\ell_2$-robustness can be transformed into the average-case Gaussian-robustness by "smoothing" a classifier, i.e., by considering the averaged prediction over Gaussian noise.

Adversarial Robustness
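
For reference, a minimal Monte Carlo sketch of the smoothed classifier described above (prediction only; the certification procedure and the paper's consistency regularization are omitted, and all names and values are illustrative):

```python
import torch

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000):
    """Majority vote of base_classifier over Gaussian perturbations
    x + eps, eps ~ N(0, sigma^2 I), i.e., the smoothed prediction."""
    with torch.no_grad():
        noise = sigma * torch.randn(n_samples, *x.shape)
        logits = base_classifier(x.unsqueeze(0) + noise)   # (n, num_classes)
        votes = torch.bincount(logits.argmax(dim=1))
    return int(votes.argmax())
```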

Context-aware Dynamics Model for Generalization in Model-Based Reinforcement Learning

2 code implementations ICML 2020 Kimin Lee, Younggyo Seo, Seung-Hyun Lee, Honglak Lee, Jinwoo Shin

Model-based reinforcement learning (RL) enjoys several benefits, such as data-efficiency and planning, by learning a model of the environment's dynamics.

Model-based Reinforcement Learning

M2m: Imbalanced Classification via Major-to-minor Translation

1 code implementation CVPR 2020 Jaehyung Kim, Jongheon Jeong, Jinwoo Shin

In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks struggle to generalize to a balanced testing criterion.

General Classification imbalanced classification +1

Freeze the Discriminator: a Simple Baseline for Fine-Tuning GANs

4 code implementations 25 Feb 2020 Sangwoo Mo, Minsu Cho, Jinwoo Shin

Generative adversarial networks (GANs) have shown outstanding performance on a wide range of problems in computer vision, graphics, and machine learning, but often require numerous training data and heavy computational resources.

Image Generation Transfer Learning

Lookahead: a Far-Sighted Alternative of Magnitude-based Pruning

1 code implementation ICLR 2020 Sejun Park, Jaeho Lee, Sangwoo Mo, Jinwoo Shin

Magnitude-based pruning is one of the simplest methods for pruning neural networks.

Mining GOLD Samples for Conditional GANs

1 code implementation NeurIPS 2019 Sangwoo Mo, Chiheon Kim, Sungwoong Kim, Minsu Cho, Jinwoo Shin

Conditional generative adversarial networks (cGANs) have gained considerable attention in recent years due to their class-wise controllability and superior quality for complex generation tasks.

Active Learning

Self-supervised Label Augmentation via Input Transformations

1 code implementation ICML 2020 Hankook Lee, Sung Ju Hwang, Jinwoo Shin

Our main idea is to learn a single unified task with respect to the joint distribution of the original and self-supervised labels, i.e., we augment original labels via self-supervision of input transformation.

Data Augmentation imbalanced classification +2
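
A minimal sketch of the joint-label idea described above, using rotation as the input transformation (illustrative; the paper's aggregation and self-distillation details are omitted): each image is rotated by 0/90/180/270 degrees, and the model is trained with ordinary cross-entropy over the 4 * num_classes joint labels rather than the original ones.

```python
import torch

def augment_with_rotations(images, labels):
    """images: (B, C, H, W), labels: (B,) -> a 4x batch whose targets
    are joint (class, rotation) labels in [0, 4 * num_classes)."""
    rotated, joint = [], []
    for r in range(4):
        rotated.append(torch.rot90(images, k=r, dims=(2, 3)))
        joint.append(labels * 4 + r)
    return torch.cat(rotated), torch.cat(joint)
```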

Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning

2 code implementations ICLR 2020 Kimin Lee, Kibok Lee, Jinwoo Shin, Honglak Lee

Deep reinforcement learning (RL) agents often fail to generalize to unseen environments (even those semantically similar to the training environments), particularly when they are trained on high-dimensional state spaces, such as images.

Data Augmentation

Deep Auto-Deferring Policy for Combinatorial Optimization

no code implementations 25 Sep 2019 Sungsoo Ahn, Younggyo Seo, Jinwoo Shin

Designing efficient algorithms for combinatorial optimization appears ubiquitously in various scientific fields.

Combinatorial Optimization

Adversarial Neural Pruning with Latent Vulnerability Suppression

1 code implementation ICML 2020 Divyam Madaan, Jinwoo Shin, Sung Ju Hwang

Despite the remarkable performance of deep neural networks on various computer vision tasks, they are known to be susceptible to adversarial perturbations, which makes it challenging to deploy them in real-world safety-critical applications.

Adversarial Robustness

Learning What and Where to Transfer

4 code implementations 15 May 2019 Yunhun Jang, Hankook Lee, Sung Ju Hwang, Jinwoo Shin

To address the issue, we propose a novel transfer learning approach based on meta-learning that can automatically learn what knowledge to transfer from the source network to where in the target network.

Meta-Learning Small Data Image Classification +1

Spectral Approximate Inference

no code implementations 14 May 2019 Sejun Park, Eunho Yang, Se-Young Yun, Jinwoo Shin

Our contribution is two-fold: (a) we first propose a fully polynomial-time approximation scheme (FPTAS) for approximating the partition function of a GM associated with a low-rank coupling matrix; (b) for general high-rank GMs, we design a spectral mean-field scheme utilizing (a) as a subroutine, where it approximates a high-rank GM by a product of rank-1 GMs for an efficient approximation of the partition function.

Training CNNs with Selective Allocation of Channels

no code implementations 11 May 2019 Jongheon Jeong, Jinwoo Shin

Recent progress in deep convolutional neural networks (CNNs) has enabled a simple paradigm of architecture design: larger models typically achieve better accuracy.

Selective Convolutional Units: Improving CNNs via Channel Selectivity

no code implementations ICLR 2019 Jongheon Jeong, Jinwoo Shin

Bottleneck structures with identity (e.g., residual) connections are now emerging as popular paradigms for designing deep convolutional neural networks (CNNs), for processing large-scale features efficiently.

Model Compression

Instance-aware Image-to-Image Translation

1 code implementation ICLR 2019 Sangwoo Mo, Minsu Cho, Jinwoo Shin

Unsupervised image-to-image translation has gained considerable attention due to the recent impressive progress based on generative adversarial networks (GANs).

Semantic Segmentation Translation +1

Robust Determinantal Generative Classifier for Noisy Labels and Adversarial Attacks

no code implementations ICLR 2019 Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin

For instance, on the CIFAR-10 dataset containing 45% noisy training labels, we improve the test accuracy of a deep model optimized by the state-of-the-art noise-handling training method from 33.34% to 43.02%.

Overcoming Catastrophic Forgetting with Unlabeled Data in the Wild

1 code implementation ICCV 2019 Kibok Lee, Kimin Lee, Jinwoo Shin, Honglak Lee

Lifelong learning with deep neural networks is well-known to suffer from catastrophic forgetting: the performance on previous tasks drastically degrades when learning a new task.

class-incremental learning Incremental Learning

Robust Inference via Generative Classifiers for Handling Noisy Labels

1 code implementation 31 Jan 2019 Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin

Large-scale datasets may contain significant proportions of noisy (incorrect) class labels, and it is well-known that modern deep neural networks (DNNs) poorly generalize from such noisy training datasets.

InstaGAN: Instance-aware Image-to-Image Translation

1 code implementation 28 Dec 2018 Sangwoo Mo, Minsu Cho, Jinwoo Shin

Our comparative evaluation demonstrates the effectiveness of the proposed method on different image datasets, in particular, in the aforementioned challenging cases.

Semantic Segmentation Translation +1

Learning to Specialize with Knowledge Distillation for Visual Question Answering

no code implementations NeurIPS 2018 Jonghwan Mun, Kimin Lee, Jinwoo Shin, Bohyung Han

The proposed framework is model-agnostic and applicable to any task other than VQA, e.g., image classification with a large number of labels but few per-class examples, which is known to be difficult under existing MCL schemes.

General Classification Knowledge Distillation +2

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

4 code implementations NeurIPS 2018 Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin

Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.

class-incremental learning Incremental Learning +1
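
A minimal sketch of the Mahalanobis-distance confidence score at the core of this framework (hedged: the paper's feature ensembling across layers and input pre-processing are omitted; shapes are illustrative). Class-conditional Gaussians with a shared covariance are fit on penultimate-layer features, and a test sample is scored by its distance to the closest class mean.

```python
import numpy as np

def fit_gaussians(feats, labels, num_classes):
    """feats: (N, d) features, labels: (N,) -> class means and the
    inverse of a shared (tied) covariance matrix."""
    means = np.stack([feats[labels == c].mean(0) for c in range(num_classes)])
    centered = feats - means[labels]
    cov = centered.T @ centered / len(feats)
    return means, np.linalg.pinv(cov)

def confidence(x_feat, means, precision):
    """Negative minimum Mahalanobis distance to any class mean;
    lower values suggest out-of-distribution inputs."""
    diffs = means - x_feat                        # (num_classes, d)
    d2 = np.einsum("cd,de,ce->c", diffs, precision, diffs)
    return -d2.min()
```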

Anytime Neural Prediction via Slicing Networks Vertically

1 code implementation 7 Jul 2018 Hankook Lee, Jinwoo Shin

This is remarkable due to their simplicity and effectiveness, but jointly training many thin sub-networks poses a new challenge in training complexity.

Hierarchical Novelty Detection for Visual Object Recognition

no code implementations CVPR 2018 Kibok Lee, Kimin Lee, Kyle Min, Yuting Zhang, Jinwoo Shin, Honglak Lee

The essential ingredients of our methods are confidence-calibrated classifiers, data relabeling, and the leave-one-out strategy for modeling novel classes under the hierarchical taxonomy.

Generalized Zero-Shot Learning Object Recognition

Stochastic Chebyshev Gradient Descent for Spectral Optimization

1 code implementation NeurIPS 2018 Insu Han, Haim Avron, Jinwoo Shin

A large class of machine learning techniques requires the solution of optimization problems involving spectral functions of parametric matrices, e.g., log-determinant and nuclear norm.

Gauged Mini-Bucket Elimination for Approximate Inference

no code implementations 5 Jan 2018 Sungsoo Ahn, Michael Chertkov, Jinwoo Shin, Adrian Weller

Recently, so-called gauge transformations were used to improve variational lower bounds on $Z$.

Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples

3 code implementations ICLR 2018 Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin

The problem of detecting whether a test sample is from the in-distribution (i.e., the training distribution of a classifier) or from an out-of-distribution sufficiently different from it arises in many real-world machine learning applications.

Confident Multiple Choice Learning

2 code implementations ICML 2017 Kimin Lee, Changho Hwang, KyoungSoo Park, Jinwoo Shin

Ensemble methods are arguably the most trustworthy techniques for boosting the performance of machine learning models.

General Classification Image Classification

Simplified Stochastic Feedforward Neural Networks

no code implementations 11 Apr 2017 Kimin Lee, Jaehyung Kim, Song Chong, Jinwoo Shin

In this paper, we aim to develop efficient training methods for SFNNs, in particular using known architectures and pre-trained parameters of DNNs.

Rapid Mixing Swendsen-Wang Sampler for Stochastic Partitioned Attractive Models

no code implementations 6 Apr 2017 Sejun Park, Yunhun Jang, Andreas Galanis, Jinwoo Shin, Daniel Stefankovic, Eric Vigoda

The Gibbs sampler is a particularly popular Markov chain used for learning and inference problems in Graphical Models (GMs).

Sequential Local Learning for Latent Graphical Models

no code implementations 12 Mar 2017 Sejun Park, Eunho Yang, Jinwoo Shin

Learning the parameters of latent graphical models (GMs) is inherently much harder than that of non-latent ones, since the latent variables make the corresponding log-likelihood non-concave.

Faster Greedy MAP Inference for Determinantal Point Processes

1 code implementation ICML 2017 Insu Han, Prabhanjan Kambadur, KyoungSoo Park, Jinwoo Shin

Determinantal point processes (DPPs) are popular probabilistic models that arise in many machine learning tasks, where distributions of diverse sets are characterized by matrix determinants.

Point Processes
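
For context, a naive greedy MAP baseline for a DPP with kernel matrix L (illustrative; the paper's point is making exactly this greedy loop faster, which is not reproduced here): repeatedly add the item that yields the largest log-determinant of the selected submatrix.

```python
import numpy as np

def greedy_dpp_map(L, k):
    """Greedily pick k items (approximately) maximizing log det(L[S, S])."""
    n = L.shape[0]
    selected = []
    for _ in range(k):
        best_j, best_val = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            idx = selected + [j]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_val:
                best_j, best_val = j, logdet
        if best_j is None:
            break
        selected.append(best_j)
    return selected
```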

Gauging Variational Inference

no code implementations NeurIPS 2017 Sungsoo Ahn, Michael Chertkov, Jinwoo Shin

Computing the partition function is the most important statistical inference task arising in applications of Graphical Models (GMs).

Variational Inference

Contextual Multi-armed Bandits under Feature Uncertainty

no code implementations 3 Mar 2017 Se-Young Yun, Jun Hyun Nam, Sangwoo Mo, Jinwoo Shin

We study contextual multi-armed bandit problems under linear realizability on rewards and uncertainty (or noise) on features.

Multi-Armed Bandits

Iterative Bayesian Learning for Crowdsourced Regression

no code implementations 28 Feb 2017 Jungseul Ok, Sewoong Oh, Yunhun Jang, Jinwoo Shin, Yung Yi

Crowdsourcing platforms have emerged as popular venues for purchasing human intelligence at low cost for a large volume of tasks.

Synthesis of MCMC and Belief Propagation

no code implementations NeurIPS 2016 Sung-Soo Ahn, Michael Chertkov, Jinwoo Shin

In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC.

Approximating the Spectral Sums of Large-scale Matrices using Chebyshev Approximations

1 code implementation 3 Jun 2016 Insu Han, Dmitry Malioutov, Haim Avron, Jinwoo Shin

Computation of the trace of a matrix function plays an important role in many scientific computing applications, including machine learning, computational physics (e.g., lattice quantum chromodynamics), network analysis, and computational biology (e.g., protein folding), to name a few.

Data Structures and Algorithms

MCMC assisted by Belief Propagation

no code implementations 29 May 2016 Sungsoo Ahn, Michael Chertkov, Jinwoo Shin

Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series.

Adiabatic Persistent Contrastive Divergence Learning

no code implementations 26 May 2016 Hyeryung Jang, Hyungwon Choi, Yung Yi, Jinwoo Shin

This paper studies the problem of parameter learning in probabilistic graphical models having latent variables, where the standard approach is the expectation maximization algorithm alternating expectation (E) and maximization (M) steps.

Optimal Inference in Crowdsourced Classification via Belief Propagation

no code implementations 11 Feb 2016 Jungseul Ok, Sewoong Oh, Jinwoo Shin, Yung Yi

Crowdsourcing systems are popular for solving large-scale labelling tasks with low-paid workers.

General Classification

Minimum Weight Perfect Matching via Blossom Belief Propagation

no code implementations NeurIPS 2015 Sungsoo Ahn, Sejun Park, Michael Chertkov, Jinwoo Shin

Max-product Belief Propagation (BP) is a popular message-passing algorithm for computing a Maximum-A-Posteriori (MAP) assignment over a distribution represented by a Graphical Model (GM).

Combinatorial Optimization

Large-scale Log-determinant Computation through Stochastic Chebyshev Expansions

1 code implementation 22 Mar 2015 Insu Han, Dmitry Malioutov, Jinwoo Shin

Logarithms of determinants of large positive definite matrices appear ubiquitously in machine learning applications including Gaussian graphical and Gaussian process models, partition functions of discrete graphical models, minimum-volume ellipsoids, metric learning and kernel learning.

Metric Learning
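
A minimal sketch of the stochastic Chebyshev estimator behind this line of work (hedged: it assumes eigenvalue bounds [a, b] of the symmetric positive definite matrix A are known, and omits the paper's error analysis): log det A = tr(log A) is estimated by Hutchinson trace estimation with log approximated by a Chebyshev expansion, so only matrix-vector products with A are needed.

```python
import numpy as np

def logdet_chebyshev(A, a, b, degree=30, num_probes=50, seed=0):
    """Estimate log det(A) for SPD A with spectrum inside [a, b]."""
    n = A.shape[0]
    # Chebyshev coefficients of x -> log(((b - a) x + (a + b)) / 2) on [-1, 1].
    k = np.arange(degree + 1)
    theta = np.pi * (k + 0.5) / (degree + 1)
    fvals = np.log(((b - a) * np.cos(theta) + (a + b)) / 2.0)
    c = (2.0 / (degree + 1)) * np.cos(np.outer(k, theta)) @ fvals
    c[0] /= 2.0
    # Affinely map A so that its spectrum lies in [-1, 1].
    B = (2.0 * A - (a + b) * np.eye(n)) / (b - a)
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)       # Rademacher probe vector
        w_prev, w = v, B @ v                      # T_0(B) v and T_1(B) v
        acc = c[0] * (v @ v) + c[1] * (v @ w)
        for deg in range(2, degree + 1):
            w_prev, w = w, 2.0 * B @ w - w_prev   # three-term recurrence
            acc += c[deg] * (v @ w)
        total += acc
    return total / num_probes
```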

Scalable Iterative Algorithm for Robust Subspace Clustering

no code implementations 5 Mar 2015 Sanghyuk Chun, Yung-Kyun Noh, Jinwoo Shin

Subspace clustering (SC) is a popular method for dimensionality reduction of high-dimensional data, which generalizes Principal Component Analysis (PCA).

Dimensionality Reduction

Max-Product Belief Propagation for Linear Programming: Applications to Combinatorial Optimization

no code implementations 16 Dec 2014 Sejun Park, Jinwoo Shin

The max-product belief propagation (BP) is a popular message-passing heuristic for approximating a maximum-a-posteriori (MAP) assignment in a joint distribution represented by a graphical model (GM).

Combinatorial Optimization

A Graphical Transformation for Belief Propagation: Maximum Weight Matchings and Odd-Sized Cycles

no code implementations NeurIPS 2013 Jinwoo Shin, Andrew E. Gelfand, Misha Chertkov

It was recently shown that BP converges to the correct MAP assignment for a class of loopy GMs with the following common feature: the Linear Programming (LP) relaxation to the MAP problem is tight (has no integrality gap).

Loop Calculus and Bootstrap-Belief Propagation for Perfect Matchings on Arbitrary Graphs

no code implementations 5 Jun 2013 Michael Chertkov, Andrew Gelfand, Jinwoo Shin

This manuscript discusses computation of the Partition Function (PF) and the Minimum Weight Perfect Matching (MWPM) on arbitrary, non-bipartite graphs.

Belief Propagation for Linear Programming

no code implementations 17 May 2013 Andrew Gelfand, Jinwoo Shin, Michael Chertkov

For this class of problems, MAP inference can be stated as an integer LP with an LP relaxation that coincides with minimization of the BFE at "zero temperature".
