Search Results for author: Sangho Lee

Found 7 papers, 3 papers with code

Boundary-aware Self-supervised Learning for Video Scene Segmentation

1 code implementation · 14 Jan 2022 · Jonghwan Mun, Minchul Shin, Gunsoo Han, Sangho Lee, Seongsu Ha, Joonseok Lee, Eun-Sol Kim

Inspired by this, we tackle video scene segmentation, the task of temporally localizing scene boundaries in a video, with a self-supervised learning framework in which we focus on designing effective pretext tasks.

Scene Segmentation · Self-Supervised Learning
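The paper's actual pretext tasks are its own contribution; purely to illustrate the general idea of scoring scene boundaries from shot-level features, here is a minimal sketch that flags a boundary where adjacent shot embeddings are most dissimilar. The function name and the toy embeddings are hypothetical, not from the paper.

```python
import numpy as np

def boundary_scores(shot_embs: np.ndarray) -> np.ndarray:
    """Cosine dissimilarity between each pair of adjacent shot embeddings.

    shot_embs: (num_shots, dim) array of per-shot feature vectors.
    Returns num_shots - 1 scores; a high score between shots i and i + 1
    suggests a scene boundary there.
    """
    normed = shot_embs / np.linalg.norm(shot_embs, axis=1, keepdims=True)
    cos_sim = np.sum(normed[:-1] * normed[1:], axis=1)
    return 1.0 - cos_sim

# Toy example: shots 0-2 point in one direction, shots 3-4 in another.
shots = np.array([
    [1.0, 0.0], [0.9, 0.1], [1.0, 0.1],  # "scene A"
    [0.0, 1.0], [0.1, 1.0],              # "scene B"
])
scores = boundary_scores(shots)
predicted = int(np.argmax(scores))  # boundary after shot index 2
```

A self-supervised setup would learn the shot encoder so that such a dissimilarity signal becomes reliable, rather than assuming good embeddings up front.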

Unsupervised Representation Learning via Neural Activation Coding

1 code implementation · 7 Dec 2021 · Yookoon Park, Sangho Lee, Gunhee Kim, David M. Blei

We argue that the deep encoder should maximize its nonlinear expressivity on the data for downstream predictors to take full advantage of its representation power.

Representation Learning

Boundary-aware Pre-training for Video Scene Segmentation

no code implementations · 29 Sep 2021 · Jonghwan Mun, Minchul Shin, Gunsoo Han, Sangho Lee, Seongsu Ha, Joonseok Lee, Eun-Sol Kim

Inspired by this, we tackle video scene segmentation, the task of temporally localizing scene boundaries in a video, with a self-supervised learning framework in which we focus on designing effective pretext tasks.

Scene Segmentation · Self-Supervised Learning

ACAV100M: Automatic Curation of Large-Scale Datasets for Audio-Visual Video Representation Learning

1 code implementation · ICCV 2021 · Sangho Lee, Jiwan Chung, Youngjae Yu, Gunhee Kim, Thomas Breuel, Gal Chechik, Yale Song

We demonstrate that our approach finds videos with high audio-visual correspondence, and show that self-supervised models trained on our data achieve competitive performance compared to models trained on existing manually curated datasets.

Representation Learning
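ACAV100M's actual curation pipeline is more involved than a pairwise similarity filter; as a hedged illustration of the core notion of "audio-visual correspondence," here is a sketch that keeps only clips whose paired audio and visual embeddings are similar. All names, the threshold, and the toy embeddings are hypothetical.

```python
import numpy as np

def correspondence_scores(audio_embs: np.ndarray, visual_embs: np.ndarray) -> np.ndarray:
    """Cosine similarity between paired audio and visual clip embeddings."""
    a = audio_embs / np.linalg.norm(audio_embs, axis=1, keepdims=True)
    v = visual_embs / np.linalg.norm(visual_embs, axis=1, keepdims=True)
    return np.sum(a * v, axis=1)

def curate(audio_embs: np.ndarray, visual_embs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Indices of clips whose audio-visual similarity clears the threshold."""
    return np.flatnonzero(correspondence_scores(audio_embs, visual_embs) >= threshold)

# Three toy clips: 0 and 2 have aligned modalities, 1 does not.
audio = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
visual = np.array([[1.0, 0.1], [1.0, 0.0], [1.0, 0.9]])
kept = curate(audio, visual)
```

At the scale the paper targets, the interesting part is doing this kind of selection automatically over 100M+ clips without manual labels.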

Self-Supervised Learning of Compressed Video Representations

no code implementations · ICLR 2021 · Youngjae Yu, Sangho Lee, Gunhee Kim, Yale Song

We show that our approach achieves competitive performance on self-supervised learning of video representations, with a considerable speedup over traditional methods.

Self-Supervised Learning

Parameter Efficient Multimodal Transformers for Video Representation Learning

no code implementations · ICLR 2021 · Sangho Lee, Youngjae Yu, Gunhee Kim, Thomas Breuel, Jan Kautz, Yale Song

The recent success of Transformers in the language domain has motivated adapting them to a multimodal setting, where a new visual model is trained in tandem with an already pretrained language model.

Language Modelling · Representation Learning

Edge Bias in Federated Learning and its Solution by Buffered Knowledge Distillation

no code implementations · 20 Oct 2020 · Sangho Lee, KiYoon Yoo, Nojun Kwak

Federated learning (FL), which utilizes communication between the server (core) and local devices (edges) to indirectly learn from more data, is an emerging field in deep learning research.

Federated Learning · Knowledge Distillation
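The paper's buffered knowledge distillation is its own contribution; as background for the server-edge communication pattern the snippet describes, here is a sketch of FedAvg, the standard federated-averaging baseline (not the paper's method), with a toy single-parameter model.

```python
import numpy as np

def fed_avg(client_weights: list, client_sizes: list) -> np.ndarray:
    """Standard FedAvg aggregation: the server averages client model
    parameters, weighting each client by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two edge devices, each holding a single-parameter "model".
clients = [np.array([1.0]), np.array([3.0])]
sizes = [10, 30]  # edge 1 has 3x more local data
global_model = fed_avg(clients, sizes)
```

Edge bias of the kind the paper studies arises when the local data distributions on such devices differ, so a plain weighted average can pull the global model toward over-represented edges.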
