Search Results for author: Jung-Woo Ha

Found 58 papers, 31 papers with code

Continuous Decomposition of Granularity for Neural Paraphrase Generation

1 code implementation 5 Sep 2022 Xiaodong Gu, Zhaowei Zhang, Sang-Woo Lee, Kang Min Yoo, Jung-Woo Ha

While Transformers have had significant success in paragraph generation, they treat sentences as linear sequences of tokens and often neglect their hierarchical information.

Paraphrase Generation

Generator Knows What Discriminator Should Learn in Unconditional GANs

1 code implementation 27 Jul 2022 Gayoung Lee, Hyunsu Kim, Junho Kim, Seonghyeon Kim, Jung-Woo Ha, Yunjey Choi

Here we explore the efficacy of dense supervision in unconditional generation and find that generator feature maps can serve as an alternative to costly semantic label maps.

Conditional Image Generation Unconditional Image Generation

Time Is MattEr: Temporal Self-supervision for Video Transformers

1 code implementation 19 Jul 2022 Sukmin Yun, Jaehyung Kim, Dongyoon Han, Hwanjun Song, Jung-Woo Ha, Jinwoo Shin

Understanding temporal dynamics of video is an essential aspect of learning better video representations.

Action Recognition

Deformable Graph Transformer

no code implementations 29 Jun 2022 Jinyoung Park, Seongjun Yun, Hyeonjin Park, Jaewoo Kang, Jisu Jeong, Kyung-Min Kim, Jung-Woo Ha, Hyunwoo J. Kim

Then, the sparse attention is applied to the node sequences for learning node representations with a reduced computational cost.

Rarity Score : A New Metric to Evaluate the Uncommonness of Synthesized Images

no code implementations 17 Jun 2022 Jiyeon Han, Hwanil Choi, Yunjey Choi, Junho Kim, Jung-Woo Ha, Jaesik Choi

In this work, we propose a new evaluation metric, called 'rarity score', to measure the individual rarity of each image synthesized by generative models.

Image Generation

Dataset Condensation via Efficient Synthetic-Data Parameterization

2 code implementations 30 May 2022 Jang-Hyun Kim, Jinuk Kim, Seong Joon Oh, Sangdoo Yun, Hwanjun Song, JoonHyun Jeong, Jung-Woo Ha, Hyun Oh Song

The great success of machine learning with massive amounts of data comes at a price of huge computation costs and storage for training and tuning.

Dataset Condensation

Two-Step Question Retrieval for Open-Domain QA

no code implementations Findings (ACL) 2022 Yeon Seonwoo, Juhee Son, Jiho Jin, Sang-Woo Lee, Ji-Hoon Kim, Jung-Woo Ha, Alice Oh

These models have shown a significant increase in inference speed, but at the cost of lower QA performance compared to the retriever-reader models.

Online Continual Learning on a Contaminated Data Stream with Blurry Task Boundaries

1 code implementation CVPR 2022 Jihwan Bang, Hyunseo Koh, Seulki Park, Hwanjun Song, Jung-Woo Ha, Jonghyun Choi

A large body of continual learning (CL) methods, however, assumes data streams with clean labels, and online learning scenarios under noisy data streams are yet underexplored.

Continual Learning online learning

Metropolis-Hastings Data Augmentation for Graph Neural Networks

no code implementations NeurIPS 2021 Hyeonjin Park, Seunghun Lee, Sihyeon Kim, Jinyoung Park, Jisu Jeong, Kyung-Min Kim, Jung-Woo Ha, Hyunwoo J. Kim

We also propose a simple and effective semi-supervised learning strategy with generated samples from MH-Aug. Our extensive experiments demonstrate that MH-Aug can generate a sequence of samples according to the target distribution to significantly improve the performance of GNNs.

Data Augmentation
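For context on the sampling step mentioned above: Metropolis-Hastings draws a chain of samples from a target distribution by accepting or rejecting proposals. The sketch below is only the generic acceptance rule (a toy NumPy example with a Gaussian target and random-walk proposal), not the paper's MH-Aug graph-augmentation procedure.

```python
import numpy as np

def metropolis_hastings(log_target, propose, x0, n_steps, rng=None):
    """Generic Metropolis-Hastings sampler with a symmetric proposal.

    log_target: unnormalized log-density of the target distribution.
    propose:    draws a symmetric proposal given the current state.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = propose(x, rng)
        # Accept with probability min(1, p(x_new) / p(x)); the symmetric
        # proposal densities cancel in the acceptance ratio.
        log_alpha = log_target(x_new) - log_target(x)
        if np.log(rng.uniform()) < log_alpha:
            x = x_new
        samples.append(x)
    return np.array(samples)

# Toy example: sample a standard normal with Gaussian random-walk proposals.
chain = metropolis_hastings(
    log_target=lambda x: -0.5 * x**2,
    propose=lambda x, rng: x + rng.normal(scale=0.5),
    x0=0.0,
    n_steps=5_000,
)
print(chain.mean(), chain.std())  # roughly 0 and 1
```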

Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks

1 code implementation ICLR 2022 Sihyun Yu, Jihoon Tack, Sangwoo Mo, Hyunsu Kim, Junho Kim, Jung-Woo Ha, Jinwoo Shin

In this paper, we find that the recently emerging paradigm of implicit neural representations (INRs), which encodes a continuous signal into a parameterized neural network, effectively mitigates the issue.

Video Generation
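For readers unfamiliar with implicit neural representations, the minimal PyTorch sketch below shows the core idea: a small MLP maps continuous (x, y, t) coordinates to pixel values, so a video lives in network weights rather than a discrete frame grid. This is a generic illustration (real INRs typically add sinusoidal activations or positional encodings), not DIGAN's actual generator.

```python
import torch
import torch.nn as nn

class VideoINR(nn.Module):
    """Toy implicit neural representation: coordinates (x, y, t) -> RGB."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB value at the queried coordinate
        )

    def forward(self, coords):           # coords: (N, 3), roughly in [-1, 1]
        return self.net(coords)

# Query the representation at arbitrary, continuous space-time coordinates.
inr = VideoINR()
coords = torch.rand(1024, 3) * 2 - 1     # random (x, y, t) locations
rgb = inr(coords)                        # (1024, 3)
print(rgb.shape)
```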

Contrastive Fine-grained Class Clustering via Generative Adversarial Networks

1 code implementation ICLR 2022 Yunji Kim, Jung-Woo Ha

Specifically, we map the input of the generator, which is sampled from a categorical distribution, to the embedding space of the discriminator and let it act as a cluster centroid.

Contrastive Learning

Online Continual Learning on Class Incremental Blurry Task Configuration with Anytime Inference

1 code implementation ICLR 2022 Hyunseo Koh, Dahyun Kim, Jung-Woo Ha, Jonghyun Choi

For better practicality, we first propose a novel continual learning setup that is online, task-free, class-incremental, of blurry task boundaries and subject to inference queries at any moment.

Continual Learning Management

Weakly Supervised Pre-Training for Multi-Hop Retriever

1 code implementation Findings (ACL) 2021 Yeon Seonwoo, Sang-Woo Lee, Ji-Hoon Kim, Jung-Woo Ha, Alice Oh

In multi-hop QA, answering complex questions entails iterative document retrieval for finding the missing entity of the question.
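The iterative retrieval described above can be pictured as a simple loop: retrieve with the current query, pull a bridging entity from the retrieved document, and expand the query with it. The toy sketch below uses stub retrieval and entity-extraction functions and only illustrates the general multi-hop pattern, not the paper's weakly supervised pre-training.

```python
def multi_hop_retrieve(question, retrieve, extract_bridge_entity, hops=2):
    """Toy iterative retrieval: each hop expands the query with a bridging entity."""
    query, documents = question, []
    for _ in range(hops):
        doc = retrieve(query)                  # stub: return the best-matching document
        documents.append(doc)
        bridge = extract_bridge_entity(doc)    # stub: entity missing from the question
        query = f"{question} {bridge}"         # expanded query for the next hop
    return documents

# Toy stubs standing in for a real retriever and entity extractor.
corpus = {"Alice": "Alice was born in Paris.", "Paris": "Paris is the capital of France."}
docs = multi_hop_retrieve(
    "Which country was Alice born in?",
    retrieve=lambda q: corpus["Paris" if "Paris" in q else "Alice"],
    extract_bridge_entity=lambda d: d.split()[-1].rstrip("."),
)
print(docs)
```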

Reward Optimization for Neural Machine Translation with Learned Metrics

1 code implementation 15 Apr 2021 Raphael Shu, Kang Min Yoo, Jung-Woo Ha

Results show that the reward optimization with BLEURT is able to increase the metric scores by a large margin, in contrast to limited gain when training with smoothed BLEU.

Machine Translation Translation
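Because a learned metric such as BLEURT is not differentiable with respect to the sampled tokens, reward optimization of this kind is typically driven by a policy-gradient objective. The sketch below shows that generic REINFORCE-style loss with toy numbers; it is a heavily simplified, assumption-laden illustration, not the paper's exact training recipe.

```python
import torch

def policy_gradient_loss(log_probs, reward, baseline=0.0):
    """REINFORCE-style loss: scale the sequence log-likelihood by how much
    the sequence-level reward exceeds a baseline (the advantage)."""
    return -(reward - baseline) * log_probs.sum()

# Toy values standing in for a sampled translation scored by a learned metric.
token_probs = torch.tensor([0.4, 0.7, 0.6], requires_grad=True)
log_probs = torch.log(token_probs)
reward = 0.82        # e.g. a BLEURT-like score for the sampled hypothesis
loss = policy_gradient_loss(log_probs, reward, baseline=0.5)
loss.backward()      # gradients flow back into whatever produced the probabilities
print(loss.item())
```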

Rainbow Memory: Continual Learning with a Memory of Diverse Samples

1 code implementation CVPR 2021 Jihwan Bang, Heesu Kim, Youngjoon Yoo, Jung-Woo Ha, Jonghyun Choi

The prevalent scenario of continual learning, however, assumes disjoint sets of classes as tasks, which is less realistic and rather artificial.

Continual Learning Data Augmentation +1

M2FN: Multi-step Modality Fusion for Advertisement Image Assessment

no code implementations 31 Jan 2021 Kyung-Wha Park, Jung-Woo Ha, Junghoon Lee, Sunyoung Kwon, Kyung-Min Kim, Byoung-Tak Zhang

Assessing advertisements, specifically on the basis of user preferences and ad quality, is crucial to the marketing industry.

Marketing

Context-Aware Answer Extraction in Question Answering

1 code implementation EMNLP 2020 Yeon Seonwoo, Ji-Hoon Kim, Jung-Woo Ha, Alice Oh

With experiments on reading comprehension, we show that BLANC outperforms the state-of-the-art QA models, and the performance gap increases as the number of answer text occurrences increases.

Multi-Task Learning Question Answering +1

Which Strategies Matter for Noisy Label Classification? Insight into Loss and Uncertainty

no code implementations 14 Aug 2020 Wonyoung Shin, Jung-Woo Ha, Shengzhe Li, Yongwoo Cho, Hoyean Song, Sunyoung Kwon

Label noise is a critical factor that degrades the generalization performance of deep neural networks, thus leading to severe issues in real-world problems.

Ranked #15 on Image Classification on Clothing1M (using extra training data)

General Classification Image Classification

Boosting Active Learning for Speech Recognition with Noisy Pseudo-labeled Samples

no code implementations 19 Jun 2020 Jihwan Bang, Heesu Kim, Youngjoon Yoo, Jung-Woo Ha

The cost of annotating transcriptions for large speech corpora is a bottleneck to fully exploiting the capacity of deep neural network-based automatic speech recognition models.

Active Learning Automatic Speech Recognition +1

AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights

4 code implementations ICLR 2021 Byeongho Heo, Sanghyuk Chun, Seong Joon Oh, Dongyoon Han, Sangdoo Yun, Gyuwan Kim, Youngjung Uh, Jung-Woo Ha

Because of the scale invariance, this modification only alters the effective step sizes without changing the effective update directions, thus enjoying the original convergence properties of GD optimizers.

Audio Classification Image Classification +2
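The mechanism this snippet alludes to is removing the radial component of the update for scale-invariant weights, i.e. the part parallel to the weight vector, which only grows the norm and thereby shrinks the effective step size. A minimal NumPy sketch of that projection for a single weight vector follows; the linked implementations contain the full optimizer.

```python
import numpy as np

def remove_radial_component(weight, update, eps=1e-8):
    """Project an optimizer update onto the tangent space of the weight vector.

    For scale-invariant weights (e.g. those followed by normalization layers),
    the radial component only changes the weight norm, which rescales the
    effective step size; dropping it keeps the update direction intact.
    """
    w = weight.ravel()
    u = update.ravel()
    radial = (w @ u) / (w @ w + eps) * w   # component of u along w
    return (u - radial).reshape(update.shape)

w = np.array([3.0, 4.0])
g = np.array([1.0, 1.0])
print(remove_radial_component(w, g))       # orthogonal to w: dot product ~ 0
```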

Graphs, Entities, and Step Mixture

no code implementations 18 May 2020 Kyuyong Shin, Wonyoung Shin, Jung-Woo Ha, Sunyoung Kwon

Existing approaches for graph neural networks commonly suffer from the oversmoothing issue, regardless of how neighborhoods are aggregated.

Modeling Musical Onset Probabilities via Neural Distribution Learning

no code implementations 10 Feb 2020 Jaesung Huh, Egil Martinsson, Adrian Kim, Jung-Woo Ha

Musical onset detection can be formulated as a time-to-event (TTE) or time-since-event (TSE) prediction task by defining music as a sequence of onset events.

StarGAN v2: Diverse Image Synthesis for Multiple Domains

13 code implementations CVPR 2020 Yunjey Choi, Youngjung Uh, Jaejun Yoo, Jung-Woo Ha

A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains.

Fundus to Angiography Generation Multimodal Unsupervised Image-To-Image Translation +1

NL2pSQL: Generating Pseudo-SQL Queries from Under-Specified Natural Language Questions

no code implementations IJCNLP 2019 Fuxiang Chen, Seung-won Hwang, Jaegul Choo, Jung-Woo Ha, Sunghun Kim

Here we describe a new task, NL2pSQL, which generates pseudo-SQL (pSQL) code from natural language questions on under-specified database issues.

Denoising

Neural Approximation of an Auto-Regressive Process through Confidence Guided Sampling

no code implementations 15 Oct 2019 YoungJoon Yoo, Sanghyuk Chun, Sangdoo Yun, Jung-Woo Ha, Jaejun Yoo

We first assume that the priors of future samples can be generated in an independently and identically distributed (i.i.d.) manner.

Which Ads to Show? Advertisement Image Assessment with Auxiliary Information via Multi-step Modality Fusion

no code implementations 6 Oct 2019 Kyung-Wha Park, Junghoon Lee, Sunyoung Kwon, Jung-Woo Ha, Kyung-Min Kim, Byoung-Tak Zhang

Although image quality has a crucial influence, auxiliary information about ad images, such as tags and target subjects, can also determine image preference.

Phase-aware Speech Enhancement with Deep Complex U-Net

8 code implementations ICLR 2019 Hyeong-Seok Choi, Jang-Hyun Kim, Jaesung Huh, Adrian Kim, Jung-Woo Ha, Kyogu Lee

Most deep learning-based models for speech enhancement have mainly focused on estimating the magnitude of spectrogram while reusing the phase from noisy speech for reconstruction.

Speech Enhancement
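To make "reusing the phase from noisy speech" concrete, the sketch below shows the conventional baseline the abstract contrasts against: estimate a real-valued mask on the magnitude spectrogram and reconstruct with the unchanged noisy phase. This is generic STFT-domain processing, not the paper's Deep Complex U-Net.

```python
import numpy as np

def enhance_with_noisy_phase(noisy_stft, magnitude_mask):
    """Baseline enhancement: mask the magnitude, keep the noisy phase."""
    magnitude = np.abs(noisy_stft)
    phase = np.angle(noisy_stft)
    enhanced_magnitude = magnitude_mask * magnitude      # mask values in [0, 1]
    return enhanced_magnitude * np.exp(1j * phase)       # complex spectrogram

# Toy example on a random complex "spectrogram"; a real mask would come from a network.
rng = np.random.default_rng(0)
noisy = rng.normal(size=(257, 100)) + 1j * rng.normal(size=(257, 100))
mask = rng.uniform(size=noisy.shape)
enhanced = enhance_with_noisy_phase(noisy, mask)
print(enhanced.shape, enhanced.dtype)
```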

Large-Scale Answerer in Questioner's Mind for Visual Dialog Question Generation

1 code implementation ICLR 2019 Sang-Woo Lee, Tong Gao, Sohee Yang, Jaejun Yoo, Jung-Woo Ha

Answerer in Questioner's Mind (AQM) is an information-theoretic framework that has been recently proposed for task-oriented dialog systems.

Question Generation Visual Dialog

Multi-Domain Processing via Hybrid Denoising Networks for Speech Enhancement

1 code implementation 21 Dec 2018 Jang-Hyun Kim, Jaejun Yoo, Sanghyuk Chun, Adrian Kim, Jung-Woo Ha

We present a hybrid framework that leverages the trade-off between temporal and frequency precision in audio representations to improve performance on the speech enhancement task.

Audio and Speech Processing Sound

NSML: Meet the MLaaS platform with a real-world case study

no code implementations 8 Oct 2018 Hanjoo Kim, Minkyu Kim, Dongjoo Seo, Jinwoong Kim, Heungseok Park, Soeun Park, Hyunwoo Jo, KyungHyun Kim, Youngil Yang, Youngkwan Kim, Nako Sung, Jung-Woo Ha

The boom of deep learning has driven many industries and academic institutions to competitively adopt machine learning-based approaches.

BIG-bench Machine Learning Management

NSML: A Machine Learning Platform That Enables You to Focus on Your Models

no code implementations 16 Dec 2017 Nako Sung, Minkyu Kim, Hyunwoo Jo, Youngil Yang, Jingwoong Kim, Leonard Lausen, Youngkwan Kim, Gayoung Lee, Dong-Hyun Kwak, Jung-Woo Ha, Sunghun Kim

However, researchers are still required to perform a non-trivial amount of manual tasks such as GPU allocation, training status tracking, and comparison of models with different hyperparameter settings.

BIG-bench Machine Learning

StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation

33 code implementations CVPR 2018 Yunjey Choi, Min-Je Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, Jaegul Choo

To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model.

 Ranked #1 on Image-to-Image Translation on RaFD (using extra training data)

Image-to-Image Translation Translation

Representation Learning of Music Using Artist Labels

2 code implementations 18 Oct 2017 Jiyoung Park, Jongpil Lee, Jangyeon Park, Jung-Woo Ha, Juhan Nam

In this paper, we present a supervised feature learning approach that uses artist labels annotated in every single track as objective metadata.

Sound Audio and Speech Processing

Energy-Based Sequence GANs for Recommendation and Their Connection to Imitation Learning

no code implementations 28 Jun 2017 Jaeyoon Yoo, Heonseok Ha, Jihun Yi, Jongha Ryu, Chanju Kim, Jung-Woo Ha, Young-Han Kim, Sungroh Yoon

Recommender systems aim to find an accurate and efficient mapping from historical data of user-preferred items to a new item that the user is likely to prefer.

Imitation Learning Recommendation Systems +1

Overcoming Catastrophic Forgetting by Incremental Moment Matching

1 code implementation NeurIPS 2017 Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, Byoung-Tak Zhang

Catastrophic forgetting is a problem in which a neural network loses the information of the first task after being trained on the second task.

Transfer Learning
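As the title suggests, the moment matching here happens in weight space; in its simplest (mean-matching) form, models trained on successive tasks are merged by a weighted average of their parameters. The sketch below shows only that averaging step with made-up mixing weights; the full method also includes a Fisher-weighted variant and transfer techniques, so treat this as a rough illustration rather than the authors' implementation.

```python
import torch.nn as nn

class TinyNet(nn.Module):
    """Stand-in architecture; both task models share it."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

def mean_imm(models, alphas):
    """Merge task-specific models by a weighted average of their parameters."""
    assert abs(sum(alphas) - 1.0) < 1e-6
    state_dicts = [m.state_dict() for m in models]
    merged_state = {
        name: sum(a * sd[name] for a, sd in zip(alphas, state_dicts))
        for name in state_dicts[0]
    }
    merged = type(models[0])()            # fresh instance of the same architecture
    merged.load_state_dict(merged_state)
    return merged

# Pretend these were trained on task 1 and task 2, respectively.
model_task1, model_task2 = TinyNet(), TinyNet()
merged = mean_imm([model_task1, model_task2], alphas=[0.5, 0.5])
print(merged.fc.weight.shape)             # torch.Size([2, 4])
```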

Dual Attention Networks for Multimodal Reasoning and Matching

2 code implementations CVPR 2017 Hyeonseob Nam, Jung-Woo Ha, Jeonghee Kim

We propose Dual Attention Networks (DANs) which jointly leverage visual and textual attention mechanisms to capture fine-grained interplay between vision and language.

Question Answering Text Matching +2
