Search Results for author: Mingi Ji

Found 8 papers, 6 papers with code

Adversarial Dropout for Recurrent Neural Networks

2 code implementations • 22 Apr 2019 • Sungrae Park, Kyungwoo Song, Mingi Ji, Wonsung Lee, Il-Chul Moon

Successful applications that process sequential data, such as text and speech, require improved generalization performance from recurrent neural networks (RNNs).

Language Modelling • Semi-Supervised Text Classification

Hierarchical Context enabled Recurrent Neural Network for Recommendation

1 code implementation • 26 Apr 2019 • Kyungwoo Song, Mingi Ji, Sungrae Park, Il-Chul Moon

Analyses of user history require a robust sequential model that anticipates the transitions and decays of user interests.

Sequential Recommendation

Sequential Recommendation with Relation-Aware Kernelized Self-Attention

no code implementations • 15 Nov 2019 • Mingi Ji, Weonyoung Joo, Kyungwoo Song, Yoon-Yeong Kim, Il-Chul Moon

This work merges the self-attention of the Transformer with sequential recommendation by adding a probabilistic model of the recommendation task specifics.

Relation • Sequential Recommendation

BROS: A Pre-trained Language Model for Understanding Texts in Document

no code implementations • 1 Jan 2021 • Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park

Although recent advances in OCR enable the accurate extraction of text segments, it is still challenging to extract key information from documents due to the diversity of layouts.

Document Layout Analysis • Document Understanding +2

Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching

1 code implementation • 5 Feb 2021 • Mingi Ji, Byeongho Heo, Sungrae Park

Knowledge distillation extracts general knowledge from a pre-trained teacher network and provides guidance to a target student network.

General Knowledge • Knowledge Distillation +2
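The entry above describes knowledge distillation only at a high level. As a hedged illustration, the sketch below shows the classic logit-based distillation loss (Hinton et al.) in PyTorch; it is not the attention-based feature matching proposed in this paper, and the function name `distillation_loss` and the hyperparameters `T` and `alpha` are illustrative assumptions, not values from the work.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Generic logit-based knowledge distillation sketch (not this paper's
    attention-based feature matching): KL divergence between temperature-
    softened teacher and student distributions, blended with cross-entropy."""
    # Soften both distributions; detach the teacher so no gradient flows into it.
    soft_teacher = F.softmax(teacher_logits.detach() / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    # T^2 rescales the KD gradient back to the usual magnitude (standard practice).
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

In this generic setup the student is trained on a weighted sum of the two terms, with the teacher held fixed; the paper itself instead matches intermediate features via attention, which is not reproduced here.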

Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation

1 code implementation • CVPR 2021 • Mingi Ji, Seungjae Shin, Seunghyun Hwang, Gibeom Park, Il-Chul Moon

Knowledge distillation is a method of transferring knowledge from a pre-trained complex teacher model to a student model, so that a smaller network can replace a large teacher network at the deployment stage.

Data Augmentation • Object Detection +4

Unknown-Aware Domain Adversarial Learning for Open-Set Domain Adaptation

1 code implementation • 15 Jun 2022 • JoonHo Jang, Byeonghu Na, DongHyeok Shin, Mingi Ji, Kyungwoo Song, Il-Chul Moon

Therefore, we propose Unknown-Aware Domain Adversarial Learning (UADAL), which $\textit{aligns}$ the source and the target-$\textit{known}$ distribution while simultaneously $\textit{segregating}$ the target-$\textit{unknown}$ distribution in the feature alignment procedure.

Domain Adaptation
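The UADAL entry above relies on domain adversarial feature alignment. As a hedged sketch of that generic building block only, the PyTorch code below shows a standard gradient-reversal layer and a small domain discriminator (the usual DANN components); the class names `GradReverse` and `DomainDiscriminator` and the dimensions `feat_dim` and `hidden` are illustrative assumptions, and the unknown-aware posterior weighting that segregates target-unknown samples in UADAL is not reproduced.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward
    pass. Standard domain-adversarial building block, shown only to illustrate
    the 'alignment' part of the entry above, not UADAL itself."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing back into the feature extractor.
        return -ctx.lamb * grad_output, None

class DomainDiscriminator(nn.Module):
    """Small MLP that predicts whether a feature comes from source or target."""
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, features, lamb=1.0):
        # Gradient reversal makes the feature extractor fool the discriminator,
        # encouraging source/target feature alignment.
        reversed_feats = GradReverse.apply(features, lamb)
        return self.net(reversed_feats)
```

Training the discriminator with a binary domain label on such reversed features aligns source and target representations; UADAL additionally down-weights or segregates target-unknown samples during this alignment, which this sketch omits.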
