Search Results for author: Jongpil Kim

Found 3 papers, 1 paper with code

A Study on Knowledge Distillation from Weak Teacher for Scaling Up Pre-trained Language Models

1 code implementation • 26 May 2023 • Hayeon Lee, Rui Hou, Jongpil Kim, Davis Liang, Sung Ju Hwang, Alexander Min

Distillation from Weak Teacher (DWT) is a method of transferring knowledge from a smaller, weaker teacher model to a larger student model in order to improve the student's performance; a generic distillation-loss sketch follows this entry.

Knowledge Distillation
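
For orientation, the snippet below is a minimal sketch of a generic (Hinton-style) knowledge-distillation objective; it does not reproduce DWT itself, whose distinguishing feature is the reversed size relationship (small, weak teacher distilled into a larger student). The function name distillation_loss and the defaults T=2.0 and alpha=0.5 are illustrative assumptions, not taken from the paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic distillation loss: soft teacher targets plus hard-label cross-entropy.
    T (temperature) and alpha (mixing weight) are illustrative defaults."""
    # Soft-target term: KL divergence between temperature-scaled distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label term: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In the DWT setting sketched here, student_logits would come from the larger student being pre-trained and teacher_logits from the smaller, weaker teacher.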

Discovering Characteristic Landmarks on Ancient Coins using Convolutional Networks

no code implementations • 30 Jun 2015 • Jongpil Kim, Vladimir Pavlovic

We also propose a new framework for recognizing Roman coins that exploits the hierarchical structure of ancient Roman coins, adapting the state-of-the-art classification power of CNNs to the new task of coin classification; a rough sketch of such a hierarchical classifier follows this entry.

General Classification
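
As a rough illustration of hierarchical CNN classification of the kind described above, the sketch below feeds a shared CNN backbone into a coarse head and a fine head conditioned on the coarse prediction. This is an assumption about the general pattern only, not the paper's actual architecture; HierarchicalCoinClassifier, feat_dim, and the conditioning scheme are hypothetical.

```python
import torch
import torch.nn as nn

class HierarchicalCoinClassifier(nn.Module):
    """Illustrative two-level classifier: a shared CNN backbone feeds a coarse head
    and a fine head conditioned on the coarse prediction. Names are hypothetical,
    not taken from the paper."""
    def __init__(self, backbone, feat_dim, n_coarse, n_fine):
        super().__init__()
        self.backbone = backbone  # any CNN producing feat_dim features per image
        self.coarse_head = nn.Linear(feat_dim, n_coarse)
        self.fine_head = nn.Linear(feat_dim + n_coarse, n_fine)

    def forward(self, images):
        feats = self.backbone(images)
        coarse_logits = self.coarse_head(feats)
        # Condition the fine-grained decision on the coarse class distribution.
        fine_logits = self.fine_head(
            torch.cat([feats, coarse_logits.softmax(dim=-1)], dim=-1)
        )
        return coarse_logits, fine_logits
```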
