Search Results for author: Se-Young Yun

Found 26 papers, 6 papers with code

Self-Contrastive Learning

no code implementations 29 Jun 2021 Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun

This paper proposes a novel contrastive learning framework, coined Self-Contrastive (SelfCon) Learning, which self-contrasts among multiple outputs from different levels of a network.
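
As a rough illustration of this idea (not the authors' implementation), the sketch below contrasts a projected intermediate-level output against a projected final output of the same toy network with an InfoNCE-style loss; all module and head names are hypothetical.

```python
# A rough sketch of the SelfCon idea, not the authors' code: project an
# intermediate-level output and the final output of the same network, then
# contrast them with an InfoNCE-style loss. All names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySelfConNet(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head_mid = nn.Linear(32, dim)    # sub-head at an earlier level
        self.head_final = nn.Linear(64, dim)  # head on the final features

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        z_mid = F.normalize(self.head_mid(self.pool(h1).flatten(1)), dim=1)
        z_fin = F.normalize(self.head_final(self.pool(h2).flatten(1)), dim=1)
        return z_mid, z_fin

def selfcon_loss(z_mid, z_fin, tau=0.1):
    # For each image, the other level's output of the same image is the
    # positive; the other images in the batch act as negatives.
    logits = z_mid @ z_fin.t() / tau
    return F.cross_entropy(logits, torch.arange(z_mid.size(0)))

loss = selfcon_loss(*ToySelfConNet()(torch.randn(8, 3, 32, 32)))
```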

Contrastive Learning

Preservation of the Global Knowledge by Not-True Self Knowledge Distillation in Federated Learning

no code implementations 6 Jun 2021 Gihun Lee, Yongjin Shin, Minchan Jeong, Se-Young Yun

In Federated Learning (FL), a strong global model is collaboratively learned by aggregating the clients' locally trained models.
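
For context, the aggregation step mentioned here is typically FedAvg-style weighted averaging (McMahan et al.); below is a minimal sketch of that baseline step, not this paper's not-true self-distillation method.

```python
# Minimal sketch of the standard FedAvg-style aggregation step referenced
# above (McMahan et al.), not this paper's not-true self-distillation:
# average client state dicts, weighted by local dataset sizes.
import torch

def aggregate(client_states, client_sizes):
    total = sum(client_sizes)
    return {
        key: sum((n / total) * state[key].float()
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }

# toy usage with two "clients" holding the same 1-parameter model
a = {"w": torch.tensor([1.0])}
b = {"w": torch.tensor([3.0])}
print(aggregate([a, b], [10, 30]))  # {'w': tensor([2.5])}
```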

Continual Learning Federated Learning +1

FedBABU: Towards Enhanced Representation for Federated Image Classification

no code implementations 4 Jun 2021 Jaehoon Oh, Sangmook Kim, Se-Young Yun

To elucidate the cause of this personalization performance degradation, we decompose the entire network into the body (i.e., the extractor), related to universality, and the head (i.e., the classifier), related to personalization.
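
A minimal sketch of this body/head decomposition on a toy model, where the last linear layer plays the role of the head: freeze the head and update only the body, as FedBABU does during federated training.

```python
# Sketch of the body/head decomposition described above, on a toy model:
# the last linear layer is treated as the head (classifier) and frozen,
# and only the body (extractor) is updated, as FedBABU does during
# federated training.
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),  # body (extractor)
    nn.Linear(256, 10),                            # head (classifier)
)
for p in model[-1].parameters():
    p.requires_grad = False  # keep the head fixed
body_params = [p for p in model.parameters() if p.requires_grad]
```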

Federated Learning Image Classification

Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation

1 code implementation 19 May 2021 Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun

From this observation, we consider an intuitive KD loss function, the mean squared error (MSE) between the logit vectors, so that the student model can directly learn the logits of the teacher model.
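
A minimal sketch of the two losses being compared, with an illustrative temperature: the standard temperature-scaled KL objective versus a direct MSE between raw logit vectors.

```python
# Sketch of the two KD losses compared above: temperature-scaled KL on
# softened probabilities vs. a direct MSE between the raw logit vectors
# of student and teacher. The temperature value is illustrative.
import torch
import torch.nn.functional as F

def kd_kl_loss(student_logits, teacher_logits, tau=4.0):
    log_p = F.log_softmax(student_logits / tau, dim=1)
    q = F.softmax(teacher_logits / tau, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean") * tau ** 2

def kd_mse_loss(student_logits, teacher_logits):
    return F.mse_loss(student_logits, teacher_logits)

s, t = torch.randn(8, 10), torch.randn(8, 10)
print(kd_kl_loss(s, t).item(), kd_mse_loss(s, t).item())
```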

Knowledge Distillation Learning with noisy labels

Winning Ticket in Noisy Image Classification

no code implementations 23 Feb 2021 Taehyeon Kim, Jongwoo Ko, Jinhwan Choi, Sangwook Cho, Se-Young Yun

Modern deep neural networks (DNNs) become brittle when the datasets contain noisy (incorrect) class labels.

General Classification Image Classification

Understanding Knowledge Distillation

no code implementations 1 Jan 2021 Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun

To verify this conjecture, we test an extreme logit learning model, where the KD is implemented with the mean squared error (MSE) between the student's logits and the teacher's logits.

Knowledge Distillation

Task Calibration for Distributional Uncertainty in Few-Shot Classification

no code implementations 1 Jan 2021 Sungnyun Kim, Se-Young Yun

As numerous meta-learning algorithms improve performance when solving few-shot classification problems for practical applications, accurate prediction of uncertainty, though challenging, has been considered essential.

General Classification Meta-Learning

Test Score Algorithms for Budgeted Stochastic Utility Maximization

1 code implementation 30 Dec 2020 Dabeen Lee, Milan Vojnovic, Se-Young Yun

Motivated by recent developments in designing algorithms based on individual item scores for solving utility maximization problems, we study the framework of using test scores, defined as a statistic of observed individual item performance data, for solving the budgeted stochastic utility maximization problem.
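
As a toy illustration of this framework (not the paper's scores or guarantees), one can compute a simple test score per item from observed samples and fill the budget greedily by score per unit cost:

```python
# Toy sketch of the framework described above (not the paper's algorithm
# or guarantees): compute a test score for each item from observed
# performance samples, then greedily fill the budget by score per cost.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.random((5, 20))           # observed performance data, 5 items
costs = np.array([2.0, 1.0, 3.0, 1.5, 2.5])
scores = samples.mean(axis=1)           # a simple test score: the sample mean
budget, chosen = 4.0, []
for i in np.argsort(-scores / costs):   # best score-per-cost first
    if costs[i] <= budget:
        chosen.append(int(i))
        budget -= costs[i]
```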

TornadoAggregate: Accurate and Scalable Federated Learning via the Ring-Based Architecture

no code implementations 6 Dec 2020 Jin-woo Lee, Jaehoon Oh, Sungsu Lim, Se-Young Yun, Jae-Gil Lee

Federated learning has emerged as a new paradigm of collaborative machine learning; however, many prior studies have used global aggregation along a star topology without much consideration of communication scalability or the diurnal property arising from variation in clients' local time.

Federated Learning

Regret in Online Recommendation Systems

no code implementations NeurIPS 2020 Kaito Ariu, Narae Ryu, Se-Young Yun, Alexandre Proutière

Interestingly, our analysis reveals the relative weights of the different components of regret: the component due to the constraint of not presenting the same item twice to the same user, the component due to learning the probabilities with which users like items, and finally the component arising when learning the underlying structure.

Recommendation Systems

MixCo: Mix-up Contrastive Learning for Visual Representation

1 code implementation 13 Oct 2020 Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun
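
A hedged sketch of how mix-up can enter a contrastive objective (the exact MixCo formulation may differ): mix two inputs and give the mixed embedding soft positive targets on both sources, weighted by the mixing ratio.

```python
# A hedged sketch of mix-up in a contrastive objective (the exact MixCo
# formulation may differ): mix two inputs and give the mixed embedding
# soft positive targets on both sources, weighted by the mixing ratio.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mixco_loss(encoder, x, tau=0.1, alpha=1.0):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    z = F.normalize(encoder(x), dim=1)          # embeddings of originals
    z_mix = F.normalize(encoder(x_mix), dim=1)  # embeddings of mixtures
    log_prob = F.log_softmax(z_mix @ z.t() / tau, dim=1)
    idx = torch.arange(x.size(0))
    # soft targets: lam on the first source, (1 - lam) on the second
    return -(lam * log_prob[idx, idx] + (1 - lam) * log_prob[idx, perm]).mean()

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))  # toy encoder
loss = mixco_loss(encoder, torch.randn(8, 3, 32, 32))
```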

Contrastive learning has shown remarkable results in recent self-supervised approaches for visual representation.

Contrastive Learning Self-Supervised Learning

BOIL: Towards Representation Change for Few-shot Learning

no code implementations ICLR 2021 Jaehoon Oh, Hyungjun Yoo, ChangHwan Kim, Se-Young Yun

It has recently been hypothesized that representation reuse, in which the meta-initialized representations change little during adaptation, is the dominant factor in the performance of models meta-initialized via MAML, in contrast to representation change, which alters the representations significantly.

Few-Shot Learning

SIPA: A Simple Framework for Efficient Networks

1 code implementation 24 Apr 2020 Gihun Lee, Sangmin Bae, Jaehoon Oh, Se-Young Yun

With the success of deep learning in various fields and the advent of numerous Internet of Things (IoT) devices, it is essential to make models lightweight enough for low-power devices.

Optimal Clustering from Noisy Binary Feedback

no code implementations 14 Oct 2019 Kaito Ariu, Jungseul Ok, Alexandre Proutiere, Se-Young Yun

The objective is to devise an algorithm with a minimal cluster recovery error rate.

Reinforcement with Fading Memories

no code implementations 29 Jul 2019 Kuang Xu, Se-Young Yun

We focus on a family of decision rules where the agent makes a new choice by randomly selecting an action with a probability approximately proportional to the amount of past rewards associated with each action in her memory.
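
A minimal sketch of this decision rule, with the fading-memory mechanism itself simplified away: sample an action with probability proportional to the rewards remembered for each action.

```python
# Minimal sketch of the decision rule described above, with the fading-
# memory mechanism simplified away: sample an action with probability
# proportional to the past rewards remembered for each action.
import numpy as np

rng = np.random.default_rng(0)
remembered = np.array([3.0, 1.0, 6.0])    # per-action reward totals in memory
probs = remembered / remembered.sum()
action = rng.choice(len(probs), p=probs)  # action 2 is chosen most often
```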

Decision Making

Spectral Approximate Inference

no code implementations 14 May 2019 Sejun Park, Eunho Yang, Se-Young Yun, Jinwoo Shin

Our contribution is two-fold: (a) we first propose a fully polynomial-time approximation scheme (FPTAS) for approximating the partition function of GM associating with a low-rank coupling matrix; (b) for general high-rank GMs, we design a spectral mean-field scheme utilizing (a) as a subroutine, where it approximates a high-rank GM into a product of rank-1 GMs for an efficient approximation of the partition function.

Non-Stationary Streaming PCA

no code implementations 8 Feb 2019 Daniel Bienstock, Apurv Shukla, Se-Young Yun

We consider the problem of streaming principal component analysis (PCA) when the observations are noisy and generated in a non-stationary environment.
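
For context, the classical streaming-PCA baseline this setting departs from is Oja's rule; a minimal sketch (not the paper's non-stationary algorithm) follows.

```python
# Sketch of the classical Oja update for streaming PCA, the stationary
# baseline this work departs from (not the paper's algorithm): nudge the
# estimate toward each noisy observation, then renormalize.
import numpy as np

rng = np.random.default_rng(0)
d, eta = 5, 0.01
w = rng.standard_normal(d)
w /= np.linalg.norm(w)
for _ in range(10000):
    x = rng.standard_normal(d) * np.array([3.0, 1, 1, 1, 1])  # spiked stream
    w += eta * x * (x @ w)   # Oja's rule: step along x (x^T w)
    w /= np.linalg.norm(w)   # stay on the unit sphere
# w now aligns (up to sign) with the leading direction e_1
```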

Accelerated MM Algorithms for Ranking Scores Inference from Comparison Data

1 code implementation 1 Jan 2019 Milan Vojnovic, Se-Young Yun, Kaifang Zhou

In this paper, we study a popular method for inference of the Bradley-Terry model parameters, namely the MM algorithm, for maximum likelihood estimation and maximum a posteriori probability estimation.
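
The baseline being accelerated is the standard MM update for Bradley-Terry maximum likelihood (Hunter, 2004); here is a minimal sketch with made-up comparison data.

```python
# Sketch of the standard MM update for Bradley-Terry maximum likelihood
# (Hunter, 2004), the baseline this paper accelerates. w[i] is item i's
# skill, wins[i] its win count, n[i, j] how often i and j were compared.
import numpy as np

def mm_step(w, wins, n):
    denom = (n / (w[:, None] + w[None, :])).sum(axis=1)
    w_new = wins / denom
    return w_new / w_new.sum()  # fix the scale invariance

n = np.array([[0, 5, 2], [5, 0, 4], [2, 4, 0]], dtype=float)  # comparison counts
wins = np.array([4.0, 5.0, 2.0])                               # wins per item
w = np.ones(3) / 3
for _ in range(200):
    w = mm_step(w, wins, n)
```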

Bayesian Inference

Spectrogram-channels u-net: a source separation model viewing each channel as the spectrogram of each source

no code implementations 26 Oct 2018 Jaehoon Oh, Duyeon Kim, Se-Young Yun

The proposed model can be used not only for singing voice separation but also for multi-instrument separation by changing only the number of output channels.
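
A toy sketch of this output convention (the real model is a U-Net; the layers below are placeholders): the network emits one spectrogram channel per source, so switching tasks only changes the channel count.

```python
# Toy sketch of the output convention described above (the real model is a
# U-Net; these layers are placeholders): the network emits one spectrogram
# channel per source, so changing tasks only changes n_sources.
import torch
import torch.nn as nn

n_sources = 4  # e.g. vocals / drums / bass / other
toy_separator = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, n_sources, 3, padding=1),  # one output channel per source
)
mix = torch.randn(1, 1, 512, 128)  # (batch, channel, freq, time)
sources = toy_separator(mix)       # shape: (1, n_sources, 512, 128)
```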

Information Retrieval Music Information Retrieval +1

Contextual Multi-armed Bandits under Feature Uncertainty

no code implementations 3 Mar 2017 Se-Young Yun, Jun Hyun Nam, Sangwoo Mo, Jinwoo Shin

We study contextual multi-armed bandit problems under linear realizability on rewards and uncertainty (or noise) on features.

Multi-Armed Bandits

Fast and Memory Optimal Low-Rank Matrix Approximation

no code implementations NeurIPS 2015 Se-Young Yun, Marc Lelarge, Alexandre Proutiere

This means that its average mean-square error converges to 0 as $m$ and $n$ grow large (i.e., $\|\hat{M}^{(k)}-M^{(k)}\|_F^2 = o(mn)$ with high probability, where $\hat{M}^{(k)}$ and $M^{(k)}$ denote the output of SLA and the optimal rank-$k$ approximation of $M$, respectively).

Optimal Cluster Recovery in the Labeled Stochastic Block Model

no code implementations NeurIPS 2016 Se-Young Yun, Alexandre Proutiere

We find the set of parameters such that there exists a clustering algorithm with at most $s$ misclassified items on average under the general LSBM and for any $s=o(n)$, which solves an open problem raised in \cite{abbe2015community}.

Community Detection Stochastic Block Model

Streaming, Memory Limited Matrix Completion with Noise

no code implementations 13 Apr 2015 Se-Young Yun, Marc Lelarge, Alexandre Proutiere

We propose a streaming algorithm which produces an estimate of the original matrix with a vanishing mean square error, uses memory space scaling linearly with the ambient dimension of the matrix (i.e., the memory required to store the output alone), and requires computation proportional to the number of non-zero entries of the input matrix.

Matrix Completion

Accurate Community Detection in the Stochastic Block Model via Spectral Algorithms

no code implementations 23 Dec 2014 Se-Young Yun, Alexandre Proutiere

We consider the problem of community detection in the Stochastic Block Model with a finite number $K$ of communities of sizes linearly growing with the network size $n$.
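
Below is a sketch of a generic spectral pipeline of the kind analyzed in this line of work (not the paper's exact algorithm): embed nodes via the leading adjacency eigenvectors, then run k-means.

```python
# Sketch of a generic spectral clustering pipeline of the kind analyzed in
# this line of work (not the paper's exact algorithm): embed nodes via the
# K leading eigenvectors of the adjacency matrix, then run k-means.
import numpy as np
from sklearn.cluster import KMeans

def spectral_communities(A, K):
    vals, vecs = np.linalg.eigh(A)                 # A: symmetric adjacency
    top = vecs[:, np.argsort(np.abs(vals))[-K:]]   # K leading eigenvectors
    return KMeans(n_clusters=K, n_init=10).fit_predict(top)

# toy usage on a random symmetric graph
rng = np.random.default_rng(0)
A = np.triu((rng.random((30, 30)) < 0.1).astype(float), 1)
labels = spectral_communities(A + A.T, K=2)
```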

Social and Information Networks Data Structures and Algorithms

Streaming, Memory Limited Algorithms for Community Detection

no code implementations NeurIPS 2014 Se-Young Yun, Marc Lelarge, Alexandre Proutiere

The first algorithm is offline, as it needs to store and keep the assignments of nodes to clusters, and requires a memory that scales linearly with the network size.

Community Detection
