Search Results for author: Seungyong Lee

Found 27 papers, 13 papers with code

HPU: High-Bandwidth Processing Unit for Scalable, Cost-effective LLM Inference via GPU Co-processing

no code implementations18 Apr 2025 Myunghyun Rhee, Joonseop Sim, Taeyoung Ahn, Seungyong Lee, Daegun Yoon, Euiseok Kim, Kyoung Park, Youngpyo Joo, Hosik Kim

The attention layer, a core component of Transformer-based LLMs, exposes inefficiencies in current GPU systems due to its low operational intensity and the substantial memory requirements of KV caches.
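
As a rough back-of-the-envelope illustration of the memory pressure described above (a hypothetical model configuration, not numbers from the paper), the sketch below estimates the KV-cache footprint and the arithmetic intensity of single-token attention decoding:

```python
# Back-of-the-envelope estimate of KV-cache size and attention arithmetic
# intensity during single-token decoding. Hypothetical model configuration;
# not numbers from the HPU paper.
num_layers = 32
num_kv_heads = 8          # grouped-query attention
head_dim = 128
seq_len = 8192            # tokens already cached
batch = 16
bytes_per_elem = 2        # fp16/bf16

# K and V are each [batch, seq_len, num_kv_heads, head_dim] per layer.
kv_cache_bytes = 2 * num_layers * batch * seq_len * num_kv_heads * head_dim * bytes_per_elem
print(f"KV cache: {kv_cache_bytes / 2**30:.1f} GiB")

# Decoding one token: each query attends over seq_len cached keys/values.
# FLOPs ~ 2 * 2 * seq_len * head_dim per head (QK^T and AV); bytes moved are
# dominated by reading the cached K/V once.
flops_per_head = 2 * 2 * seq_len * head_dim
bytes_per_head = 2 * seq_len * head_dim * bytes_per_elem
print(f"arithmetic intensity ~ {flops_per_head / bytes_per_head:.1f} FLOP/byte")
```

The resulting intensity of roughly 1 FLOP per byte is far below what modern GPUs need to stay compute-bound, which is the inefficiency the abstract refers to.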

Deep Polycuboid Fitting for Compact 3D Representation of Indoor Scenes

no code implementations19 Mar 2025 Gahye Lee, Hyejeong Yoon, Jungeon Kim, Seungyong Lee

This paper presents a novel framework for compactly representing a 3D indoor scene using a set of polycuboids through a deep learning-based fitting method.

Graph Neural Network
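
A purely hypothetical illustration of why a set of cuboids is a compact encoding (not the paper's actual representation): each axis-aligned cuboid needs only six floats, so an indoor object can be described with a few dozen numbers instead of a dense triangle mesh.

```python
from dataclasses import dataclass

# Hypothetical encoding of a polycuboid as a union of axis-aligned cuboids.
@dataclass
class Cuboid:
    xmin: float
    ymin: float
    zmin: float
    xmax: float
    ymax: float
    zmax: float

    def volume(self) -> float:
        return (self.xmax - self.xmin) * (self.ymax - self.ymin) * (self.zmax - self.zmin)

# An L-shaped desk approximated by two cuboids (made-up dimensions).
desk = [Cuboid(0, 0, 0, 1.6, 0.8, 0.75), Cuboid(1.6, 0, 0, 2.2, 1.6, 0.75)]
print("parameters stored:", 6 * len(desk))
print("total volume (no overlap here):", sum(c.volume() for c in desk))
```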

VisPath: Automated Visualization Code Synthesis via Multi-Path Reasoning and Feedback-Driven Optimization

no code implementations16 Feb 2025 Wonduk Seo, Seungyong Lee, Daye Kang, Zonghao Yuan, SeungHyun Lee

Unprecedented breakthroughs in Large Language Models (LLMs) have amplified their penetration into applications such as automated visualization code generation.

Code Generation Data Visualization +1

Deep Cost Ray Fusion for Sparse Depth Video Completion

no code implementations23 Sep 2024 Jungeon Kim, Soongjin Kim, Jaesik Park, Seungyong Lee

In this paper, we present a learning-based framework for sparse depth video completion.

Depth Completion

Discontinuity-preserving Normal Integration with Auxiliary Edges

no code implementations CVPR 2024 Hyomin Kim, Yucheol Jung, Seungyong Lee

Using the auxiliary edges, we design a novel algorithm to optimize the discontinuity and the depth map from the input normal map.

Surface Reconstruction
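
For background, plain normal integration (without the paper's auxiliary edges or discontinuity handling) recovers a depth map whose finite-difference gradients match p = -nx/nz and q = -ny/nz in a least-squares sense; a minimal NumPy sketch:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def integrate_normals(normals):
    """Least-squares integration of a unit normal map (H, W, 3) into depth.

    Plain orthographic integration with forward differences; no discontinuity
    handling, so depth gets smeared across depth edges.
    """
    h, w = normals.shape[:2]
    nz = np.clip(normals[..., 2], 1e-6, None)
    p = -normals[..., 0] / nz          # target dz/dx
    q = -normals[..., 1] / nz          # target dz/dy

    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals, rhs = [], [], [], []

    def add_eq(i_from, i_to, target):
        k = len(rhs)                   # row index of the new equation
        rows += [k, k]; cols += [i_to, i_from]; vals += [1.0, -1.0]
        rhs.append(target)

    for y in range(h):
        for x in range(w - 1):         # horizontal gradient equations
            add_eq(idx[y, x], idx[y, x + 1], p[y, x])
    for y in range(h - 1):
        for x in range(w):             # vertical gradient equations
            add_eq(idx[y, x], idx[y + 1, x], q[y, x])

    A = sp.coo_matrix((vals, (rows, cols)), shape=(len(rhs), h * w)).tocsr()
    z = lsqr(A, np.asarray(rhs))[0]    # depth up to an additive constant
    return z.reshape(h, w)

# Example: the normals of a slanted plane integrate back to a ramp.
n = np.zeros((16, 16, 3))
n[..., 0] = -0.3
n[..., 2] = 1.0
n /= np.linalg.norm(n, axis=-1, keepdims=True)
depth = integrate_normals(n)
```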

Fashion Style Editing with Generative Human Prior

no code implementations2 Apr 2024 Chaerin Kong, Seungyong Lee, Soohyeok Im, Wonsuk Yang

Image editing has been a long-standing challenge in the research community, with far-reaching impact on numerous applications.

Gyro-based Neural Single Image Deblurring

no code implementations1 Apr 2024 Heemin Yang, Jaesung Rim, Seungyong Lee, Seung-Hwan Baek, Sunghyun Cho

To handle gyro error, GyroDeblurNet is equipped with two novel neural network blocks: a gyro refinement block and a gyro deblurring block.

Deblurring Image Deblurring +1

ParamISP: Learned Forward and Inverse ISPs using Camera Parameters

1 code implementation CVPR 2024 Woohyeok Kim, GeonU Kim, Junyong Lee, Seungyong Lee, Seung-Hwan Baek, Sunghyun Cho

RAW images are rarely shared, mainly due to their excessive data size compared to their sRGB counterparts obtained by camera ISPs.

Deblurring HDR Reconstruction
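
A quick, illustrative size comparison (made-up camera settings, not figures from the paper) of why RAW files are rarely shared:

```python
# Rough size comparison of a RAW capture vs. its sRGB JPEG for a hypothetical
# 24-megapixel camera; illustrative numbers only.
width, height = 6000, 4000
bits_per_photosite = 14                              # typical RAW bit depth
raw_mb = width * height * bits_per_photosite / 8 / 1e6
jpeg_mb = width * height * 3 / 1e6 * 0.1             # assume ~10:1 JPEG compression
print(f"uncompressed RAW ~ {raw_mb:.0f} MB, sRGB JPEG ~ {jpeg_mb:.0f} MB")
```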

Mesh Density Adaptation for Template-based Shape Reconstruction

1 code implementation30 Jul 2023 Yucheol Jung, Hyomin Kim, Gyeongha Hwang, Seung-Hwan Baek, Seungyong Lee

In 3D shape reconstruction based on template mesh deformation, a regularization, such as smoothness energy, is employed to guide the reconstruction in a desirable direction.

3D Shape Reconstruction Inverse Rendering
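
The smoothness energy mentioned here is often realized as a uniform Laplacian regularizer on the deforming vertices; below is a generic PyTorch sketch of that baseline term, not the paper's density-adaptive variant:

```python
import torch

def uniform_laplacian_energy(verts, faces):
    """Generic smoothness energy: mean squared norm of the uniform Laplacian.

    verts: (V, 3) float tensor, faces: (F, 3) long tensor.
    A common baseline regularizer, not the paper's density-adaptive one.
    """
    V = verts.shape[0]
    # Build an undirected edge list from the triangle faces.
    e = torch.cat([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]], dim=0)
    e = torch.cat([e, e.flip(1)], dim=0)     # both directions
    e = torch.unique(e, dim=0)               # drop duplicate edges

    neighbor_sum = torch.zeros_like(verts).index_add_(0, e[:, 0], verts[e[:, 1]])
    degree = torch.zeros(V, device=verts.device).index_add_(
        0, e[:, 0], torch.ones(e.shape[0], device=verts.device))
    lap = neighbor_sum / degree.clamp(min=1).unsqueeze(1) - verts
    return (lap ** 2).sum(dim=1).mean()
```

In practice such a term is added to a data-fitting loss while optimizing the template vertex positions.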

Differentiable Display Photometric Stereo

no code implementations CVPR 2024 Seokjun Choi, Seungwoo Yoon, Giljoo Nam, Seungyong Lee, Seung-Hwan Baek

In this paper, we present differentiable display photometric stereo (DDPS), addressing an often overlooked challenge in display photometric stereo: the design of display patterns.
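
For background, classical Lambertian photometric stereo recovers per-pixel normals by least squares from images captured under known lighting directions; the sketch below shows only that baseline and none of DDPS's display-pattern optimization:

```python
import numpy as np

def lambertian_photometric_stereo(images, light_dirs):
    """Classical least-squares photometric stereo (Lambertian, known lights).

    images: (K, H, W) grayscale observations, light_dirs: (K, 3) unit vectors.
    Returns unit normals (H, W, 3) and albedo (H, W).
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                             # (K, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)    # solves L @ G ~ I
    G = G.T.reshape(H, W, 3)                              # G = albedo * normal
    albedo = np.linalg.norm(G, axis=-1)
    normals = G / np.clip(albedo[..., None], 1e-8, None)
    return normals, albedo
```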

Deep Deformable 3D Caricatures with Learned Shape Control

1 code implementation29 Jul 2022 Yucheol Jung, Wonjong Jang, Soongjin Kim, Jiaolong Yang, Xin Tong, Seungyong Lee

To achieve the goal, we propose an MLP-based framework for building a deformable surface model, which takes a latent code and produces a 3D surface.

Caricature Position
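
A minimal, hypothetical sketch of an MLP decoder that maps a latent code and a surface coordinate to a 3D position (layer widths and the concatenation-based conditioning are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class LatentSurfaceDecoder(nn.Module):
    """Hypothetical MLP decoder: (latent code, 2D surface coordinate) -> 3D point.

    Illustrates the latent-code-to-surface idea only; the widths and the
    conditioning scheme are assumptions, not the paper's design.
    """
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, latent, uv):
        # latent: (B, latent_dim), uv: (B, N, 2) surface parameterization.
        z = latent[:, None, :].expand(-1, uv.shape[1], -1)
        return self.net(torch.cat([z, uv], dim=-1))   # (B, N, 3) positions

decoder = LatentSurfaceDecoder()
points = decoder(torch.randn(4, 64), torch.rand(4, 2048, 2))
```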

Real-Time Video Deblurring via Lightweight Motion Compensation

1 code implementation25 May 2022 Hyeongseok Son, Junyong Lee, Sunghyun Cho, Seungyong Lee

While motion compensation greatly improves video deblurring quality, performing motion compensation and video deblurring separately incurs a huge computational overhead.

Deblurring Motion Compensation +1

MSSNet: Multi-Scale-Stage Network for Single Image Deblurring

1 code implementation19 Feb 2022 Kiyeon Kim, Seungyong Lee, Sunghyun Cho

Based on the analysis, we propose Multi-Scale-Stage Network (MSSNet), a novel deep learning-based approach to single image deblurring that adopts our remedies to the defects.

Deblurring Deep Learning +2

Realistic Blur Synthesis for Learning Image Deblurring

1 code implementation17 Feb 2022 Jaesung Rim, Geonung Kim, Jungeon Kim, Junyong Lee, Seungyong Lee, Sunghyun Cho

To this end, we present RSBlur, a novel dataset with real blurred images and the corresponding sharp image sequences to enable a detailed analysis of the difference between real and synthetic blur.

Deblurring Diversity +1
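
A common baseline for synthesizing blur from a sharp sequence, averaging consecutive frames in (approximately) linear RGB, is sketched below; it is generic and not the RSBlur synthesis pipeline itself:

```python
import numpy as np

def synthesize_blur(sharp_frames, gamma=2.2):
    """Naive synthetic blur: average consecutive sharp sRGB frames in linear RGB.

    sharp_frames: (T, H, W, 3) float array in [0, 1]. A generic baseline,
    not the RSBlur synthesis pipeline.
    """
    linear = np.power(sharp_frames, gamma)          # undo display gamma (approx.)
    blurred_linear = linear.mean(axis=0)            # temporal average ~ motion blur
    return np.power(blurred_linear, 1.0 / gamma)    # back to sRGB-like encoding

frames = np.random.rand(9, 64, 64, 3)               # stand-in for a sharp sequence
blurred = synthesize_blur(frames)
```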

Recurrent Video Deblurring with Blur-Invariant Motion Estimation and Pixel Volumes

2 code implementations23 Aug 2021 Hyeongseok Son, Junyong Lee, Jonghyeop Lee, Sunghyun Cho, Seungyong Lee

To alleviate this problem, we propose two novel approaches to deblur videos by effectively aggregating information from multiple video frames.

Deblurring Motion Compensation +2

Spatiotemporal Texture Reconstruction for Dynamic Objects Using a Single RGB-D Camera

no code implementations20 Aug 2021 Hyomin Kim, Jungeon Kim, Hyeonseo Nam, Jaesik Park, Seungyong Lee

This paper presents an effective method for generating a spatiotemporal (time-varying) texture map for a dynamic object using a single RGB-D camera.

Object

Deep Virtual Markers for Articulated 3D Shapes

1 code implementation ICCV 2021 Hyomin Kim, Jungeon Kim, Jaewon Kam, Jaesik Park, Seungyong Lee

We propose deep virtual markers, a framework for estimating dense and accurate positional information for various types of 3D data.

Test unseen

Single Image Defocus Deblurring Using Kernel-Sharing Parallel Atrous Convolutions

1 code implementation ICCV 2021 Hyeongseok Son, Junyong Lee, Sunghyun Cho, Seungyong Lee

To utilize the property with inverse kernels, we exploit the observation that when only the size of a defocus blur changes while keeping the shape, the shape of the corresponding inverse kernel remains the same and only the scale changes.

Deblurring Image Defocus Deblurring
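
The title's kernel-sharing parallel atrous convolutions can be illustrated by applying a single shared weight tensor at several dilation rates, reusing the filter shape while changing its spatial scale; a generic sketch, not the paper's exact block:

```python
import torch
import torch.nn.functional as F

class KernelSharingAtrous(torch.nn.Module):
    """One 3x3 weight tensor applied in parallel at several dilation rates.

    Mirrors the 'same shape, different scale' idea in the abstract; fusing
    the branches by simple summation is an assumption, not the paper's design.
    """
    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(channels, channels, 3, 3) * 0.01)
        self.dilations = dilations

    def forward(self, x):
        outs = [F.conv2d(x, self.weight, padding=d, dilation=d) for d in self.dilations]
        return torch.stack(outs, dim=0).sum(dim=0)

feat = torch.randn(1, 32, 64, 64)
out = KernelSharingAtrous(32)(feat)    # same spatial size, scale-swept filtering
```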

StyleCariGAN: Caricature Generation via StyleGAN Feature Map Modulation

1 code implementation9 Jul 2021 Wonjong Jang, Gwangjin Ju, Yucheol Jung, Jiaolong Yang, Xin Tong, Seungyong Lee

Our framework, dubbed StyleCariGAN, automatically creates a realistic and detailed caricature from an input photo with optional controls on shape exaggeration degree and color stylization type.

Caricature

NPRportrait 1.0: A Three-Level Benchmark for Non-Photorealistic Rendering of Portraits

no code implementations1 Sep 2020 Paul L. Rosin, Yu-Kun Lai, David Mould, Ran Yi, Itamar Berger, Lars Doyle, Seungyong Lee, Chuan Li, Yong-Jin Liu, Amir Semmo, Ariel Shamir, Minjung Son, Holger Winnemoller

Despite the recent upsurge of activity in image-based non-photorealistic rendering (NPR), and in particular portrait image stylisation, due to the advent of neural style transfer, the state of performance evaluation in this field is limited, especially compared to the norms in the computer vision and machine learning communities.

Style Transfer

SRFeat: Single Image Super-Resolution with Feature Discrimination

no code implementations ECCV 2018 Seong-Jin Park, Hyeongseok Son, Sunghyun Cho, Ki-Sang Hong, Seungyong Lee

Generative adversarial networks (GANs) have recently been adopted to single image super resolution (SISR) and showed impressive results with realistically synthesized high-frequency textures.

Image Super-Resolution
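
The feature-discrimination idea, applying an adversarial loss in a perceptual feature space rather than on raw pixels, can be sketched roughly as follows (the VGG cut point and the discriminator are assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

# Frozen VGG-19 feature extractor; cutting after conv5_4 (features[:35]) is an
# assumption for illustration, not necessarily SRFeat's choice.
vgg_features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:35].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

# A small discriminator operating on feature maps instead of pixels.
feature_discriminator = nn.Sequential(
    nn.Conv2d(512, 256, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
)

def feature_adversarial_loss(sr):
    """Generator-side GAN loss computed on VGG features of the SR image."""
    bce = nn.BCEWithLogitsLoss()
    fake_logits = feature_discriminator(vgg_features(sr))
    return bce(fake_logits, torch.ones_like(fake_logits))
```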

RDFNet: RGB-D Multi-Level Residual Feature Fusion for Indoor Semantic Segmentation

no code implementations ICCV 2017 Seong-Jin Park, Ki-Sang Hong, Seungyong Lee

Feature fusion blocks learn residual RGB and depth features and their combinations to fully exploit the complementary characteristics of RGB and depth data.

Ranked #35 on Semantic Segmentation on SUN-RGBD (using extra training data)

Segmentation Semantic Segmentation
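
A rough sketch of the residual fusion idea described here, refining RGB and depth feature maps with residual units and then combining them (channel counts and structure are assumptions, not RDFNet's exact block):

```python
import torch
import torch.nn as nn

class ResidualFusionBlock(nn.Module):
    """Illustrative RGB-D fusion: residual refinement of each modality, then sum.

    Captures the 'learn residual RGB and depth features and combine them'
    idea in spirit only; not RDFNet's exact block.
    """
    def __init__(self, channels=256):
        super().__init__()
        def residual_unit():
            return nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
        self.rgb_res, self.depth_res = residual_unit(), residual_unit()
        self.merge = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, rgb_feat, depth_feat):
        rgb = rgb_feat + self.rgb_res(rgb_feat)        # residual RGB features
        dep = depth_feat + self.depth_res(depth_feat)  # residual depth features
        return self.merge(rgb + dep)                   # fused multi-modal features

fused = ResidualFusionBlock()(torch.randn(1, 256, 30, 40), torch.randn(1, 256, 30, 40))
```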

Convergence Analysis of MAP based Blur Kernel Estimation

no code implementations ICCV 2017 Sunghyun Cho, Seungyong Lee

One popular approach for blind deconvolution is to formulate a maximum a posteriori (MAP) problem with sparsity priors on the gradients of the latent image, and then alternatingly estimate the blur kernel and the latent image.

Defocus Estimation
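
The alternating scheme described here can be sketched with FFT-based closed-form updates; for brevity the sketch uses quadratic (L2) regularizers in place of the sparsity priors the paper analyzes:

```python
import numpy as np

def alternating_map_deblur(blurred, ksize=15, iters=20, lam=0.01, gamma=1.0):
    """Alternating latent-image / kernel updates in the Fourier domain.

    Uses Tikhonov (L2) regularization on image gradients and on the kernel,
    a simplification of the sparsity-prior MAP formulation analyzed in the
    paper; only meant to show the alternating structure.
    """
    H, W = blurred.shape
    Y = np.fft.fft2(blurred)
    # Frequency responses of forward-difference gradient filters.
    Dx = np.fft.fft2(np.array([[1, -1]]), s=(H, W))
    Dy = np.fft.fft2(np.array([[1], [-1]]), s=(H, W))
    grad_penalty = np.abs(Dx) ** 2 + np.abs(Dy) ** 2

    k = np.zeros((H, W))
    k[:ksize, :ksize] = 1.0 / ksize**2              # flat initial kernel
    for _ in range(iters):
        K = np.fft.fft2(k)
        # x-step: argmin ||k*x - y||^2 + lam * ||grad x||^2 (closed form).
        X = np.conj(K) * Y / (np.abs(K) ** 2 + lam * grad_penalty + 1e-8)
        # k-step: argmin ||k*x - y||^2 + gamma * ||k||^2 (closed form).
        K = np.conj(X) * Y / (np.abs(X) ** 2 + gamma)
        k = np.real(np.fft.ifft2(K))
        # Project: limit support, enforce non-negativity, normalize to sum 1.
        k[ksize:, :] = 0; k[:, ksize:] = 0
        k = np.clip(k, 0, None)
        k /= max(k.sum(), 1e-8)
    latent = np.real(np.fft.ifft2(X))
    return latent, k[:ksize, :ksize]

blurred = np.random.rand(128, 128)                   # stand-in for a blurry photo
latent, kernel = alternating_map_deblur(blurred)
```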
