Search Results for author: Hao Ouyang

Found 21 papers, 12 papers with code

MangaNinja: Line Art Colorization with Precise Reference Following

no code implementations14 Jan 2025 Zhiheng Liu, Ka Leong Cheng, Xi Chen, Jie Xiao, Hao Ouyang, Kai Zhu, Yu Liu, Yujun Shen, Qifeng Chen, Ping Luo

Derived from diffusion models, MangaNinja specializes in the task of reference-guided line art colorization.

Line Art Colorization

Edicho: Consistent Image Editing in the Wild

1 code implementation30 Dec 2024 Qingyan Bai, Hao Ouyang, Yinghao Xu, Qiuyu Wang, Ceyuan Yang, Ka Leong Cheng, Yujun Shen, Qifeng Chen

Consistent editing across in-the-wild images remains a technical challenge, arising from various unmanageable factors such as object poses, lighting conditions, and photography environments.

Denoising

DepthLab: From Partial to Complete

no code implementations24 Dec 2024 Zhiheng Liu, Ka Leong Cheng, Qiuyu Wang, Shuzhe Wang, Hao Ouyang, Bin Tan, Kai Zhu, Yujun Shen, Qifeng Chen, Ping Luo

Missing values remain a common challenge for depth data across its wide range of applications, stemming from various causes like incomplete data acquisition and perspective alteration.

Depth Completion Missing Values +2

LeviTor: 3D Trajectory Oriented Image-to-Video Synthesis

1 code implementation19 Dec 2024 Hanlin Wang, Hao Ouyang, Qiuyu Wang, Wen Wang, Ka Leong Cheng, Qifeng Chen, Yujun Shen, LiMin Wang

The intuitive nature of drag-based interaction has led to its growing adoption for controlling object trajectories in image-to-video synthesis.

Object

AniDoc: Animation Creation Made Easier

no code implementations18 Dec 2024 Yihao Meng, Hao Ouyang, Hanlin Wang, Qiuyu Wang, Wen Wang, Ka Leong Cheng, Zhiheng Liu, Yujun Shen, Huamin Qu

The production of 2D animation follows an industry-standard workflow, encompassing four essential stages: character design, keyframe animation, in-betweening, and coloring.

Line Art Colorization

Framer: Interactive Frame Interpolation

no code implementations24 Oct 2024 Wen Wang, Qiuyu Wang, Kecheng Zheng, Hao Ouyang, Zhekai Chen, Biao Gong, Hao Chen, Yujun Shen, Chunhua Shen

We propose Framer for interactive frame interpolation, which targets producing smoothly transitioning frames between two images as per user creativity.

Image Morphing Video Generation
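As a point of reference for what frame interpolation produces, the sketch below is a naive linear cross-fade between two frames. It is a baseline I wrote for illustration, not Framer's learned, user-controllable interpolation; the function name and sampling scheme are my own.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n):
    """Return n intermediate frames between frame_a and frame_b
    via linear cross-fade (a simple baseline, not a learned model)."""
    # Sample n interior blend weights in (0, 1), excluding the endpoints.
    weights = np.linspace(0.0, 1.0, n + 2)[1:-1]
    return [(1 - t) * frame_a + t * frame_b for t in weights]

# Blend from an all-black to an all-white 2x2 frame.
a = np.zeros((2, 2))
b = np.ones((2, 2))
mids = interpolate_frames(a, b, 3)
```

Learned interpolators like Framer aim to replace this per-pixel blend with motion-aware synthesis, which avoids the ghosting a cross-fade produces on moving content.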

Dynamic Typography: Bringing Text to Life via Video Diffusion Prior

no code implementations17 Apr 2024 Zichen Liu, Yihao Meng, Hao Ouyang, Yue Yu, Bolin Zhao, Daniel Cohen-Or, Huamin Qu

Through quantitative and qualitative evaluations, we demonstrate the effectiveness of our framework in generating coherent text animations that faithfully interpret user prompts while maintaining readability.

Vector Graphics

Real-time 3D-aware Portrait Editing from a Single Image

1 code implementation21 Feb 2024 Qingyan Bai, Zifan Shi, Yinghao Xu, Hao Ouyang, Qiuyu Wang, Ceyuan Yang, Xuan Wang, Gordon Wetzstein, Yujun Shen, Qifeng Chen

Thanks to the powerful priors, our module can focus on learning editing-related variations, so that it handles various types of editing simultaneously during training and further supports fast adaptation to user-specified, customized types of editing at inference time (e.g., with ~5 min of fine-tuning per style).

Text2Immersion: Generative Immersive Scene with 3D Gaussians

no code implementations14 Dec 2023 Hao Ouyang, Kathryn Heal, Stephen Lombardi, Tiancheng Sun

We introduce Text2Immersion, an elegant method for producing high-quality 3D immersive scenes from text prompts.

Depth Estimation Diversity +1

Learning Naturally Aggregated Appearance for Efficient 3D Editing

1 code implementation11 Dec 2023 Ka Leong Cheng, Qiuyu Wang, Zifan Shi, Kecheng Zheng, Yinghao Xu, Hao Ouyang, Qifeng Chen, Yujun Shen

Neural radiance fields, which represent a 3D scene as a color field and a density field, have demonstrated great progress in novel view synthesis yet are unfavorable for editing due to the implicitness.

Novel View Synthesis

CoDeF: Content Deformation Fields for Temporally Consistent Video Processing

1 code implementation CVPR 2024 Hao Ouyang, Qiuyu Wang, Yuxi Xiao, Qingyan Bai, Juntao Zhang, Kecheng Zheng, Xiaowei Zhou, Qifeng Chen, Yujun Shen

With such a design, CoDeF naturally supports lifting image algorithms for video processing, in the sense that one can apply an image algorithm to the canonical image and effortlessly propagate the outcomes to the entire video with the aid of the temporal deformation field.

Image-to-Image Translation Keypoint Detection +1
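The lifting idea described above — edit once in a canonical image, then warp the result into every frame via a deformation field — can be sketched as follows. The function name and the nearest-neighbor sampling are my own simplifications for illustration, not CoDeF's actual implementation, which uses learned fields and differentiable interpolation.

```python
import numpy as np

def reconstruct_frame(canonical, deformation):
    """Warp a canonical image with a per-frame deformation field.

    canonical:   (H, W) grayscale canonical image
    deformation: (H, W, 2) per-pixel (row, col) lookup coordinates into
                 the canonical image (nearest-neighbor for brevity)
    """
    h, w = canonical.shape
    rows = np.clip(np.round(deformation[..., 0]).astype(int), 0, h - 1)
    cols = np.clip(np.round(deformation[..., 1]).astype(int), 0, w - 1)
    return canonical[rows, cols]

# An identity deformation field reproduces the canonical image exactly,
# so any edit applied to `canonical` propagates unchanged to the frame.
h, w = 4, 5
canonical = np.arange(h * w, dtype=float).reshape(h, w)
identity = np.stack(
    np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1
)
frame = reconstruct_frame(canonical, identity)
```

Because every frame is a lookup into the same canonical image, applying an image algorithm once to that canonical image is enough to propagate its result across the whole video.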

High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization

1 code implementation CVPR 2023 Jiaxin Xie, Hao Ouyang, Jingtan Piao, Chenyang Lei, Qifeng Chen

We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views while preserving specific details of the input image.

Attribute Generative Adversarial Network +2

Real-Time Neural Character Rendering with Pose-Guided Multiplane Images

1 code implementation25 Apr 2022 Hao Ouyang, Bo Zhang, Pan Zhang, Hao Yang, Jiaolong Yang, Dong Chen, Qifeng Chen, Fang Wen

We propose pose-guided multiplane image (MPI) synthesis which can render an animatable character in real scenes with photorealistic quality.

Image-to-Image Translation Neural Rendering +1

Deep Video Prior for Video Consistency and Propagation

1 code implementation27 Jan 2022 Chenyang Lei, Yazhou Xing, Hao Ouyang, Qifeng Chen

A progressive propagation strategy with pseudo labels is also proposed to enhance DVP's performance on video propagation.

Optical Flow Estimation Semantic Segmentation +2

Internal Video Inpainting by Implicit Long-range Propagation

1 code implementation ICCV 2021 Hao Ouyang, Tengfei Wang, Qifeng Chen

We propose a novel framework for video inpainting by adopting an internal learning strategy.

4k Object +2

Image Inpainting with External-internal Learning and Monochromic Bottleneck

1 code implementation CVPR 2021 Tengfei Wang, Hao Ouyang, Qifeng Chen

Although recent inpainting approaches have demonstrated significant improvements with deep neural networks, they still suffer from artifacts such as blunt structures and abrupt colors when filling in the missing regions.

Image Inpainting

Neural Camera Simulators

1 code implementation CVPR 2021 Hao Ouyang, Zifan Shi, Chenyang Lei, Ka Lung Law, Qifeng Chen

To facilitate the learning of a simulator model, we collect a dataset of 10,000 raw images of 450 scenes with different exposure settings.

Data Augmentation

Human Pose Estimation with Spatial Contextual Information

no code implementations7 Jan 2019 Hong Zhang, Hao Ouyang, Shu Liu, Xiaojuan Qi, Xiaoyong Shen, Ruigang Yang, Jiaya Jia

With this principle, we present two conceptually simple yet computationally efficient modules, namely Cascade Prediction Fusion (CPF) and Pose Graph Neural Network (PGNN), to exploit underlying contextual information.

Graph Neural Network Pose Estimation +1
