Search Results for author: Sejun Park

Found 13 papers, 5 papers with code

SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness

1 code implementation NeurIPS 2021 Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin

Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.

SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Adversarial Robustness

no code implementations ICML Workshop AML 2021 Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin

Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.

Adversarial Robustness
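Both SmoothMix entries build on randomized smoothing: classify many Gaussian-noised copies of the input and take the majority vote. A minimal prediction-side sketch (the base classifier, noise level, and sample count here are illustrative assumptions, not the papers' training procedure):

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, n_classes=10):
    """Majority vote of base_classifier under Gaussian input noise.

    The smoothed classifier g(x) = argmax_c P(f(x + eps) = c), eps ~ N(0, sigma^2 I),
    admits an l2 robustness certificate around x (Cohen et al., 2019).
    base_classifier is assumed to map an array to an integer class index.
    """
    counts = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + sigma * np.random.randn(*x.shape)
        counts[base_classifier(noisy)] += 1
    return int(np.argmax(counts))
```

SmoothMix itself targets the training side, calibrating the confidence of the resulting smoothed classifier rather than changing this voting rule.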

Provable Memorization via Deep Neural Networks using Sub-linear Parameters

no code implementations 26 Oct 2020 Sejun Park, Jaeho Lee, Chulhee Yun, Jinwoo Shin

It is known that $O(N)$ parameters are sufficient for neural networks to memorize arbitrary $N$ input-label pairs.
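One standard construction behind that classical $O(N)$ sufficiency (a textbook interpolation argument, not this paper's sub-linear construction):

```latex
% Choose w so that z_i = w^\top x_i are distinct; sort z_1 < \dots < z_N,
% and fix 0 < \delta < \min_i (z_{i+1} - z_i), with biases b_j = z_j - \delta. Then
f(x) = \sum_{j=1}^{N} a_j \,\mathrm{ReLU}(w^\top x - b_j)
\quad\Longrightarrow\quad
f(x_i) = \sum_{j \le i} a_j \,(z_i - b_j),
% a lower-triangular linear system in a with nonzero diagonal entries a_i \delta,
% hence solvable for any N target labels with a width-N one-hidden-layer network.
```

The paper's point is that deeper networks can beat this bound, memorizing $N$ pairs with sub-linear parameter counts.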

Layer-adaptive sparsity for the Magnitude-based Pruning

1 code implementation ICLR 2021 Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin

Recent discoveries on neural network pruning reveal that, with a carefully chosen layerwise sparsity, a simple magnitude-based pruning achieves a state-of-the-art tradeoff between sparsity and performance.

Image Classification
Network Pruning
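A hedged sketch of the layer-adaptive score, as I recall it from the LAMP paper (treat the exact normalization as an assumption): within each layer, score each weight by its squared magnitude normalized by the total squared magnitude of the weights at least as large,

```latex
s(u) \;=\; \frac{W[u]^2}{\sum_{v:\; |W[v]| \,\ge\, |W[u]|} W[v]^2},
```

then prune the lowest-scoring weights globally across layers; the global threshold induces a layerwise sparsity pattern without per-layer tuning.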

Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning

1 code implementation NeurIPS 2020 Jaehyung Kim, Youngbum Hur, Sejun Park, Eunho Yang, Sung Ju Hwang, Jinwoo Shin

While semi-supervised learning (SSL) has proven to be a promising way for leveraging unlabeled data when labeled data is scarce, the existing SSL algorithms typically assume that training class distributions are balanced.
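DARP refines pseudo-labels so their class marginal matches a target distribution. A much-simplified ratio-reweighting sketch (an illustrative stand-in; as I recall, the paper formulates the refinement as a convex optimization rather than this alternating update):

```python
import numpy as np

def align_pseudo_labels(probs, target_dist, n_iters=10):
    """Nudge soft pseudo-labels so their average class marginal
    approaches target_dist, renormalizing each sample.

    probs: (n_samples, n_classes) array of soft pseudo-labels.
    target_dist: (n_classes,) desired class marginal, sums to 1.
    """
    probs = np.asarray(probs, dtype=float).copy()
    for _ in range(n_iters):
        marginal = probs.mean(axis=0)               # current class marginal
        probs *= target_dist / (marginal + 1e-12)   # class-wise reweighting
        probs /= probs.sum(axis=1, keepdims=True)   # per-sample renormalization
    return probs
```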

Minimum Width for Universal Approximation

no code implementations ICLR 2021 Sejun Park, Chulhee Yun, Jaeho Lee, Jinwoo Shin

In this work, we provide the first definitive result in this direction for networks using the ReLU activation function: The minimum width required for the universal approximation of the $L^p$ functions is exactly $\max\{d_x+1, d_y\}$.
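A worked instance of the formula: for $L^p$ functions from $\mathbb{R}^3$ to $\mathbb{R}^2$,

```latex
w_{\min} \;=\; \max\{d_x + 1,\; d_y\} \;=\; \max\{3 + 1,\; 2\} \;=\; 4,
```

so width-4 ReLU networks suffice for universal approximation while width-3 networks provably do not.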

Learning Bounds for Risk-sensitive Learning

1 code implementation NeurIPS 2020 Jaeho Lee, Sejun Park, Jinwoo Shin

The second result, based on a novel variance-based characterization of the optimized certainty equivalent (OCE), gives an expected loss guarantee with a suppressed dependence on the smoothness of the selected OCE.
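For reference, the optimized certainty equivalent in its standard Ben-Tal–Teboulle form (quoted from the general risk-measure literature, not from this paper):

```latex
\mathrm{OCE}_\phi(Z) \;=\; \inf_{\lambda \in \mathbb{R}} \Big\{ \lambda + \mathbb{E}\big[\phi(Z - \lambda)\big] \Big\},
```

where $\phi$ is a convex, nondecreasing disutility; choosing $\phi(t) = (1-\alpha)^{-1}\max(t, 0)$ recovers $\mathrm{CVaR}_\alpha$ (Rockafellar–Uryasev).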

Lookahead: a Far-Sighted Alternative of Magnitude-based Pruning

1 code implementation ICLR 2020 Sejun Park, Jaeho Lee, Sangwoo Mo, Jinwoo Shin

Magnitude-based pruning is one of the simplest methods for pruning neural networks.
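A minimal sketch of that baseline (the per-layer sparsity level is an illustrative parameter); as its "far-sighted" name suggests, the lookahead criterion then replaces the plain per-weight magnitude score with one that also accounts for neighboring layers:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out (roughly) the smallest-magnitude fraction `sparsity` of the
    entries of a weight tensor; ties at the threshold are pruned as well."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return weights * (np.abs(weights) > threshold)
```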

Spectral Approximate Inference

no code implementations 14 May 2019 Sejun Park, Eunho Yang, Se-Young Yun, Jinwoo Shin

Our contribution is two-fold: (a) we first propose a fully polynomial-time approximation scheme (FPTAS) for approximating the partition function of a GM associated with a low-rank coupling matrix; (b) for general high-rank GMs, we design a spectral mean-field scheme that uses (a) as a subroutine, approximating a high-rank GM by a product of rank-1 GMs for an efficient approximation of the partition function.
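For context, the partition function in question, written for a standard pairwise binary (Ising-type) GM with coupling matrix $J$ and field $h$ (an illustrative canonical form):

```latex
Z \;=\; \sum_{x \in \{-1,+1\}^{n}} \exp\!\Big( \sum_{i<j} J_{ij}\, x_i x_j \;+\; \sum_{i} h_i x_i \Big),
```

where the rank in (a) and (b) refers to the rank of $J$.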

Rapid Mixing Swendsen-Wang Sampler for Stochastic Partitioned Attractive Models

no code implementations 6 Apr 2017 Sejun Park, Yunhun Jang, Andreas Galanis, Jinwoo Shin, Daniel Stefankovic, Eric Vigoda

The Gibbs sampler is a particularly popular Markov chain used for learning and inference problems in Graphical Models (GMs).
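A minimal single-site Gibbs sampler for an Ising-type GM (the model form and sweep schedule are illustrative assumptions):

```python
import numpy as np

def gibbs_sample(J, h, n_sweeps=100, seed=None):
    """Single-site Gibbs sampler for p(x) ∝ exp(x^T J x / 2 + h^T x),
    x in {-1,+1}^n, with J symmetric and zero-diagonal."""
    rng = np.random.default_rng(seed)
    n = len(h)
    x = rng.choice(np.array([-1, 1]), size=n)
    for _ in range(n_sweeps):
        for i in range(n):
            field = J[i] @ x + h[i]                    # local field at site i
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))  # P(x_i = +1 | rest)
            x[i] = 1 if rng.random() < p_plus else -1
    return x
```

The Swendsen-Wang sampler studied in the paper is a cluster-based alternative whose mixing can be much faster than such single-site updates on attractive models.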

Sequential Local Learning for Latent Graphical Models

no code implementations 12 Mar 2017 Sejun Park, Eunho Yang, Jinwoo Shin

Learning the parameters of latent graphical models (GMs) is inherently much harder than learning those of non-latent ones, since the latent variables make the corresponding log-likelihood non-concave.
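The difficulty comes from marginalizing the latent variables $z$: the observed-data objective

```latex
\ell(\theta) \;=\; \log \sum_{z} p_\theta(x, z)
```

places a sum inside the logarithm, so $\ell$ is non-concave in $\theta$ in general even when the complete-data log-likelihood $\log p_\theta(x, z)$ is concave.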

Minimum Weight Perfect Matching via Blossom Belief Propagation

no code implementations NeurIPS 2015 Sungsoo Ahn, Sejun Park, Michael Chertkov, Jinwoo Shin

Max-product Belief Propagation (BP) is a popular message-passing algorithm for computing a Maximum-A-Posteriori (MAP) assignment over a distribution represented by a Graphical Model (GM).

Combinatorial Optimization
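For reference, the textbook max-product message update on a pairwise GM (the general form, not the paper's blossom-specific schedule):

```latex
m_{i \to j}^{t+1}(x_j) \;=\; \max_{x_i}\; \psi_i(x_i)\, \psi_{ij}(x_i, x_j) \prod_{k \in N(i) \setminus \{j\}} m_{k \to i}^{t}(x_i),
```

with the MAP estimate read off at each variable from the product of its incoming messages.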

Max-Product Belief Propagation for Linear Programming: Applications to Combinatorial Optimization

no code implementations 16 Dec 2014 Sejun Park, Jinwoo Shin

The max-product belief propagation (BP) is a popular message-passing heuristic for approximating a maximum-a-posteriori (MAP) assignment in a joint distribution represented by a graphical model (GM).

Combinatorial Optimization
