2 code implementations • 22 Apr 2019 • Sungrae Park, Kyungwoo Song, Mingi Ji, Wonsung Lee, Il-Chul Moon
Successful applications processing sequential data, such as text and speech, require improved generalization performance from recurrent neural networks (RNNs).
1 code implementation • 26 Apr 2019 • Kyungwoo Song, Mingi Ji, Sungrae Park, Il-Chul Moon
Analyses of user history require a robust sequential model that anticipates the transitions and decays of user interests.
no code implementations • 15 Nov 2019 • Mingi Ji, Weonyoung Joo, Kyungwoo Song, Yoon-Yeong Kim, Il-Chul Moon
This work merges the Transformer's self-attention with sequential recommendation by adding a probabilistic model of the recommendation task's specifics.
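For context on the self-attention mechanism this entry builds on, a minimal numpy sketch of scaled dot-product attention, the Transformer building block that weights items in a user's history by relevance to each position. This is the generic mechanism only, not the paper's probabilistic recommendation model; all names here are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic Transformer attention: softmax(QK^T / sqrt(d)) V.
    Each output row is a relevance-weighted mix of the value rows."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # numerically stable softmax over each row of scores
    scores = scores - scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ V
```

In a sequential-recommendation setting, the rows of `Q`, `K`, and `V` would be embeddings of the items in a user's interaction history.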
no code implementations • 1 Jan 2021 • Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park
Although the recent advance in OCR enables the accurate extraction of text segments, it is still challenging to extract key information from documents due to the diversity of layouts.
1 code implementation • 5 Feb 2021 • Mingi Ji, Byeongho Heo, Sungrae Park
Knowledge distillation extracts general knowledge from a pre-trained teacher network and provides guidance to a target student network.
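As background for the distillation entries here, a minimal sketch of the standard soft-target distillation loss (the temperature-softened KL term from Hinton-style knowledge distillation). The function names and the temperature value are illustrative assumptions, not details of this paper's method.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) over temperature-softened distributions,
    scaled by T^2 as in the classic distillation formulation."""
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student predictions
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)
```

The student is typically trained on a weighted sum of this term and the ordinary cross-entropy with the ground-truth labels.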
Ranked #36 on Knowledge Distillation on ImageNet
1 code implementation • CVPR 2021 • Mingi Ji, Seungjae Shin, Seunghyun Hwang, Gibeom Park, Il-Chul Moon
Knowledge distillation is a method of transferring knowledge from a pretrained, complex teacher model to a student model, so that a smaller network can replace the large teacher network at the deployment stage.
1 code implementation • 10 Aug 2021 • Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park
On the other hand, this paper tackles the problem by going back to the basics: an effective combination of text and layout.
Ranked #5 on Relation Extraction on FUNSD
1 code implementation • 15 Jun 2022 • JoonHo Jang, Byeonghu Na, DongHyeok Shin, Mingi Ji, Kyungwoo Song, Il-Chul Moon
Therefore, we propose Unknown-Aware Domain Adversarial Learning (UADAL), which $\textit{aligns}$ the source and the target-$\textit{known}$ distribution while simultaneously $\textit{segregating}$ the target-$\textit{unknown}$ distribution in the feature alignment procedure.