Search Results for author: Jaewon Lee

Found 15 papers, 6 papers with code

Implicit Neural Image Stitching

1 code implementation • 4 Sep 2023 • Minsu Kim, Jaewon Lee, Byeonghun Lee, Sunghoon Im, Kyong Hwan Jin

Existing frameworks for image stitching often provide visually reasonable stitching results.

Image Stitching • Super-Resolution

Semantic-Aware Implicit Template Learning via Part Deformation Consistency

1 code implementation • ICCV 2023 • Sihyeon Kim, Minseok Joo, Jaewon Lee, Juyeon Ko, Juhan Cha, Hyunwoo J. Kim

In this paper, we highlight the importance of part deformation consistency and propose a semantic-aware implicit template learning framework to enable semantically plausible deformation.

Chemical Property-Guided Neural Networks for Naphtha Composition Prediction

no code implementations • 2 Jun 2023 • Chonghyo Joo, Jeongdong Kim, Hyungtae Cho, Jaewon Lee, Sungho Suh, Junghwan Kim

In this paper, we propose a neural network framework that utilizes chemical property information to improve the performance of naphtha composition prediction.

Semantic-aware Occlusion Filtering Neural Radiance Fields in the Wild

no code implementations • 5 Mar 2023 • Jaewon Lee, Injae Kim, Hwan Heo, Hyunwoo J. Kim

We present a learning framework for reconstructing neural scene representations from a small number of unconstrained tourist photos.

Novel View Synthesis

Robust Camera Pose Refinement for Multi-Resolution Hash Encoding

no code implementations • 3 Feb 2023 • Hwan Heo, Taekyung Kim, Jiyoung Lee, Jaewon Lee, Soohyun Kim, Hyunwoo J. Kim, Jin-Hwa Kim

Multi-resolution hash encoding has recently been proposed to reduce the computational cost of neural renderings, such as NeRF.

Neural Rendering • Novel View Synthesis
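
The entry above refers to multi-resolution hash encoding. For readers unfamiliar with the idea, here is a hedged, single-level sketch of the spatial hash used by such encodings (in the style of Instant-NGP); the table size, feature width, and resolution below are illustrative assumptions, and this is not the paper's pose-refinement method.

```python
import torch

# Single-level spatial hash in the style of multi-resolution hash encoding
# (Instant-NGP); an illustration of the encoding only, not the paper's
# pose-refinement method. The full encoding hashes the 8 corners of the
# enclosing voxel at several resolutions, trilinearly interpolates their
# features, and concatenates the per-level results.
PRIMES = torch.tensor([1, 2654435761, 805459861], dtype=torch.int64)

def hashed_features(xyz, table, resolution):
    """xyz: (N, 3) coordinates in [0, 1); table: (T, F) learnable feature table."""
    T = table.shape[0]
    voxel = torch.floor(xyz * resolution).to(torch.int64)   # nearest grid vertex (no interpolation here)
    h = voxel * PRIMES
    idx = (h[:, 0] ^ h[:, 1] ^ h[:, 2]) % T                 # XOR-fold the coordinates, wrap into the table
    return table[idx]                                        # (N, F)

table = torch.randn(2**14, 2, requires_grad=True)            # hypothetical table size and feature width
feats = hashed_features(torch.rand(1024, 3), table, resolution=64)
```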

Domain Generalization Emerges from Dreaming

no code implementations • 2 Feb 2023 • Hwan Heo, Youngjin Oh, Jaewon Lee, Hyunwoo J. Kim

Recent studies have proven that DNNs, unlike human vision, tend to exploit texture information rather than shape.

Data Augmentation • Domain Generalization • +1

B-Spline Texture Coefficients Estimator for Screen Content Image Super-Resolution

1 code implementation • CVPR 2023 • Byeonghyun Pak, Jaewon Lee, Kyong Hwan Jin

Our network outperforms both a transformer-based reconstruction and an implicit Fourier representation method at almost every upscaling factor, thanks to the positive constraint and compact support of the B-spline basis.

Image Super-Resolution • Scene Text Recognition
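
The snippet above credits the gain to the positive constraint and compact support of the B-spline basis. As a small, self-contained illustration of those two properties only (not the paper's estimator), the uniform cubic B-spline kernel can be checked numerically:

```python
import numpy as np

def cubic_bspline(x):
    """Uniform cubic B-spline kernel: non-negative and supported on [-2, 2]."""
    ax = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(ax)
    inner = ax < 1.0
    outer = (ax >= 1.0) & (ax < 2.0)
    out[inner] = 2.0 / 3.0 - ax[inner] ** 2 + 0.5 * ax[inner] ** 3
    out[outer] = (2.0 - ax[outer]) ** 3 / 6.0
    return out

xs = np.linspace(-3, 3, 601)
vals = cubic_bspline(xs)
assert np.all(vals >= 0)                     # positivity: the basis never goes negative
assert np.all(vals[np.abs(xs) >= 2] == 0)    # compact support: exactly zero outside [-2, 2]
print(vals.max())                            # peak value 2/3 at x = 0
```

Non-negativity and vanishing outside a bounded interval are generic properties of B-spline basis functions; the sketch is only meant to make those two terms concrete.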

Learning Local Implicit Fourier Representation for Image Warping

1 code implementation • 5 Jul 2022 • Jaewon Lee, Kwang Pyo Choi, Kyong Hwan Jin

In this paper, we propose a local texture estimator for image warping (LTEW) followed by an implicit neural representation to deform images into continuous shapes.

ERP Super-Resolution
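
The LTEW entry above pairs coordinate warping with an implicit neural representation. The sketch below is a generic stand-in for that idea, assuming a random-Fourier-feature coordinate MLP queried at homography-warped coordinates; the class name, layer sizes, and identity homography are hypothetical, and this is not the paper's LTEW architecture.

```python
import torch
import torch.nn as nn

# Hypothetical coordinate network with random Fourier features; not the paper's LTEW.
class FourierMLP(nn.Module):
    def __init__(self, n_freqs=64, hidden=256, scale=10.0):
        super().__init__()
        self.register_buffer("B", torch.randn(2, n_freqs) * scale)  # random 2-D frequencies
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),                                    # RGB output
        )

    def forward(self, coords):                        # coords: (N, 2) in [-1, 1]
        proj = 2 * torch.pi * coords @ self.B         # (N, n_freqs)
        feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        return self.mlp(feats)

# Query the continuous representation at homography-warped pixel coordinates.
H = torch.eye(3)                                      # placeholder homography
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 128),
                        torch.linspace(-1, 1, 128), indexing="ij")
pts = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).reshape(-1, 3)
warped = pts @ H.T
warped = warped[:, :2] / warped[:, 2:3]               # perspective divide
rgb = FourierMLP()(warped).reshape(128, 128, 3)       # continuously warped RGB field
```

Because the representation is queried per coordinate, any differentiable warp (homography, optical flow, equirectangular projection) can be plugged in without resampling artifacts, which is the general appeal of implicit warping.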

Building a Performance Model for Deep Learning Recommendation Model Training on GPUs

no code implementations • 19 Jan 2022 • Zhongyi Lin, Louis Feng, Ehsan K. Ardestani, Jaewon Lee, John Lundell, Changkyu Kim, Arun Kejariwal, John D. Owens

We show that our general performance model not only achieves low prediction error on DLRM, which has highly customized configurations and is dominated by multiple factors, but also yields comparable accuracy on other compute-bound ML models targeted by most previous methods.

A Neural Pre-Conditioning Active Learning Algorithm to Reduce Label Complexity

no code implementations • 8 Apr 2021 • Seo Taek Kong, Soomin Jeon, Dongbin Na, Jaewon Lee, Hong-Seok Lee, Kyu-Hwan Jung

Although unlabeled data is readily available in pool-based AL, AL algorithms are usually evaluated by measuring the increase in supervised learning (SL) performance at consecutive acquisition steps.

Active Learning

Better Optimization can Reduce Sample Complexity: Active Semi-Supervised Learning via Convergence Rate Control

no code implementations • 1 Jan 2021 • Seo Taek Kong, Soomin Jeon, Jaewon Lee, Hong-Seok Lee, Kyu-Hwan Jung

We name this AL scheme convergence rate control (CRC), and our experiments show that a deep neural network trained using a combination of CRC and a recently proposed SSL algorithm can quickly achieve high performance using far fewer labeled samples than SL.

Active Learning

Applying GPGPU to Recurrent Neural Network Language Model based Fast Network Search in the Real-Time LVCSR

no code implementations • 23 Jul 2020 • Kyungmin Lee, Chiyoun Park, Ilhwan Kim, Namhoon Kim, Jaewon Lee

Recurrent Neural Network Language Models (RNNLMs) have started to be used in various fields of speech recognition due to their outstanding performance.

Language Modelling • speech-recognition • +1

Accelerating recurrent neural network language model based online speech recognition system

no code implementations • 30 Jan 2018 • Kyungmin Lee, Chiyoun Park, Namhoon Kim, Jaewon Lee

This paper presents methods to accelerate recurrent neural network based language models (RNNLMs) for online speech recognition systems.

Language Modelling • speech-recognition • +1
