Search Results for author: Namuk Park

Found 5 papers, 4 papers with code

What Do Self-Supervised Vision Transformers Learn?

1 code implementation • 1 May 2023 • Namuk Park, Wonjae Kim, Byeongho Heo, Taekyung Kim, Sangdoo Yun

We present a comparative study on how and why contrastive learning (CL) and masked image modeling (MIM) differ in their representations and in their performance on downstream tasks.

Contrastive Learning

How Do Vision Transformers Work?

3 code implementations • ICLR 2022 • Namuk Park, Songkuk Kim

In particular, we demonstrate the following properties of MSAs and Vision Transformers (ViTs): (1) MSAs improve not only accuracy but also generalization by flattening the loss landscapes.
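The snippet above credits multi-head self-attentions (MSAs) with the accuracy and generalization gains; as a rough illustration of the mechanism being discussed, here is a minimal NumPy sketch of multi-head self-attention (random weights stand in for learned parameters; all names are illustrative, not from the paper's code):

```python
import numpy as np

def multi_head_self_attention(x, num_heads=2, seed=0):
    """Minimal multi-head self-attention (MSA) sketch.

    x: (seq_len, dim) token embeddings. Projection weights are random
    here purely for illustration; a real ViT learns them.
    """
    seq_len, dim = x.shape
    assert dim % num_heads == 0
    head_dim = dim // num_heads
    rng = np.random.default_rng(seed)
    # Random projections standing in for learned query/key/value weights.
    w_q, w_k, w_v = (rng.standard_normal((dim, dim)) / np.sqrt(dim)
                     for _ in range(3))
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    out = np.empty_like(x)
    for h in range(num_heads):
        s = slice(h * head_dim, (h + 1) * head_dim)
        # Scaled dot-product attention within this head.
        scores = q[:, s] @ k[:, s].T / np.sqrt(head_dim)
        # Softmax over keys: each token is a weighted average of all
        # tokens, the data-dependent averaging the paper analyzes.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[:, s] = weights @ v[:, s]
    return out
```

The weighted averaging over all tokens is the low-pass-like behavior the paper connects to flatter loss landscapes.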

Blurs Behave Like Ensembles: Spatial Smoothings to Improve Accuracy, Uncertainty, and Robustness

2 code implementations • 26 May 2021 • Namuk Park, Songkuk Kim

Neural network ensembles, such as Bayesian neural networks (BNNs), have shown success in the areas of uncertainty estimation and robustness.
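The paper's title frames spatial smoothing as a blur over feature maps; as a hypothetical simplification (a plain box blur, not the paper's actual layer), the idea can be sketched as:

```python
import numpy as np

def spatial_smooth(feature_map, kernel_size=2):
    """Box-blur smoothing of a 2-D feature map.

    A simplified stand-in for a spatial smoothing layer: each output
    value is the mean of a small neighborhood, which averages nearby
    activations much like an ensemble averages predictions.
    """
    h, w = feature_map.shape
    pad = kernel_size // 2
    # Edge padding keeps the output the same size as the input.
    padded = np.pad(feature_map, pad, mode="edge")
    out = np.empty_like(feature_map, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + kernel_size, j:j + kernel_size].mean()
    return out
```

In this toy version, smoothing a constant map leaves it unchanged, while noisy neighboring activations are averaged toward each other.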

Differentiable Bayesian Neural Network Inference for Data Streams

no code implementations • 25 Sep 2019 • Namuk Park, Taekyu Lee, Songkuk Kim

Instead of generating a separate prediction for each data sample independently, this model estimates the increment of the prediction for a new data sample from the previous predictions.
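The incremental idea described above can be caricatured by an exponential-moving-average update (a hypothetical sketch only; the paper's actual Bayesian inference rule is not reproduced here, and `alpha` is an assumed smoothing parameter):

```python
def stream_update(prev_pred, new_obs, alpha=0.1):
    """EMA-style incremental update for a data stream.

    Rather than recomputing a prediction from scratch for every sample,
    the new prediction is the previous one plus a small increment driven
    by the newest observation. Purely illustrative.
    """
    increment = alpha * (new_obs - prev_pred)
    return prev_pred + increment
```

Each step reuses the previous prediction, so the per-sample cost stays constant regardless of how much of the stream has been seen.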

Semantic Segmentation

Vector Quantized Bayesian Neural Network Inference for Data Streams

1 code implementation • 12 Jul 2019 • Namuk Park, Taekyu Lee, Songkuk Kim

The computational cost of this model is almost the same as that of non-Bayesian NNs.

Semantic Segmentation
