Search Results for author: Haonan Qiu

Found 17 papers, 7 papers with code

DreamRelation: Relation-Centric Video Customization

no code implementations • 10 Mar 2025 • Yujie Wei, Shiwei Zhang, Hangjie Yuan, Biao Gong, Longxiang Tang, Xiang Wang, Haonan Qiu, Hengjia Li, Shuai Tan, Yingya Zhang, Hongming Shan

First, in Relational Decoupling Learning, we disentangle relations from subject appearances using a relation LoRA triplet and a hybrid mask training strategy, ensuring better generalization across diverse relationships.

Tasks: Relation Triplet +1
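The snippet above mentions two ingredients: a relation LoRA triplet and hybrid mask training. As a rough illustration of both, here is a minimal sketch of a frozen linear layer with a trainable low-rank (LoRA) branch plus a mask-weighted reconstruction loss; the class names, rank, and loss form are assumptions for illustration, not taken from the DreamRelation code.

```python
# Hypothetical sketch: a LoRA adapter and a mask-weighted loss, illustrating
# decoupling via low-rank adapters with mask-guided training. Illustrative only.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (LoRA)."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # keep the pretrained weights frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)       # start as an identity update

    def forward(self, x):
        return self.base(x) + self.up(self.down(x))

def masked_mse(pred, target, mask):
    """Weight the reconstruction loss by a spatial mask (e.g., relation region)."""
    return ((pred - target) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)

# Toy usage: one LoRA per role (e.g., relation vs. subject) on a shared layer.
base = nn.Linear(64, 64)
relation_branch = LoRALinear(base, rank=4)
x, target = torch.randn(2, 16, 64), torch.randn(2, 16, 64)
mask = (torch.rand(2, 16, 1) > 0.5).float()  # stand-in for a relation mask
loss = masked_mse(relation_branch(x), target, mask)
loss.backward()
```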

FreeScale: Unleashing the Resolution of Diffusion Models via Tuning-Free Scale Fusion

no code implementations • 12 Dec 2024 • Haonan Qiu, Shiwei Zhang, Yujie Wei, Ruihang Chu, Hangjie Yuan, Xiang Wang, Yingya Zhang, Ziwei Liu

Visual diffusion models have achieved remarkable progress, yet they are typically trained at limited resolutions due to the lack of high-resolution data and constrained computational resources, which hampers their ability to generate high-fidelity images or videos at higher resolutions.

Tasks: 8k
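The title's "tuning-free scale fusion" suggests combining a training-resolution pass with a higher-resolution pass. Below is a minimal sketch of one plausible reading, assuming frequency-domain fusion: keep low frequencies from the upsampled native-scale output and high frequencies from the high-resolution output. The cutoff value and fusion rule are illustrative assumptions, not the paper's exact procedure.

```python
# A hedged sketch of frequency-domain scale fusion: trusted low-frequency
# structure from the native scale, high-frequency detail from the large scale.
import torch
import torch.nn.functional as F

def fuse_scales(low_res: torch.Tensor, high_res: torch.Tensor, cutoff: float = 0.25):
    """low_res: (C, h, w) output at the native scale; high_res: (C, H, W)."""
    C, H, W = high_res.shape
    up = F.interpolate(low_res[None], size=(H, W), mode="bilinear",
                       align_corners=False)[0]
    fft_up = torch.fft.fftshift(torch.fft.fft2(up))
    fft_hi = torch.fft.fftshift(torch.fft.fft2(high_res))
    # Circular low-pass mask around the spectrum center.
    yy, xx = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
                            indexing="ij")
    lowpass = ((xx ** 2 + yy ** 2).sqrt() < cutoff).to(fft_up.dtype)
    fused = fft_up * lowpass + fft_hi * (1 - lowpass)
    return torch.fft.ifft2(torch.fft.ifftshift(fused)).real

out = fuse_scales(torch.randn(3, 32, 32), torch.randn(3, 64, 64))
print(out.shape)  # torch.Size([3, 64, 64])
```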

Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model

no code implementations • 28 Nov 2024 • Feng Liu, Shiwei Zhang, XiaoFeng Wang, Yujie Wei, Haonan Qiu, Yuzhong Zhao, Yingya Zhang, Qixiang Ye, Fang Wan

As a fundamental backbone for video generation, diffusion models are challenged by low inference speed due to the sequential nature of denoising.

Tasks: Denoising, Video Generation
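Given the title, the method appears to key a cache on timestep embeddings so that redundant denoising steps can be skipped. Here is a hedged sketch of that general idea; the relative-change threshold, the placeholder update rule, and the stand-in model are all assumptions for illustration.

```python
# Sketch of embedding-similarity caching for iterative denoising: when
# consecutive timestep embeddings barely change, reuse the previous model
# output instead of recomputing the forward pass.
import torch

def denoise_with_cache(model, x, t_embeds, threshold: float = 2e-2):
    cached_out, cached_emb = None, None
    for emb in t_embeds:                     # per-step timestep embeddings
        if cached_emb is not None and \
           (emb - cached_emb).norm() / cached_emb.norm() < threshold:
            out = cached_out                 # cache hit: skip the forward pass
        else:
            out = model(x, emb)              # cache miss: run the network
            cached_out, cached_emb = out, emb
        x = x - 0.1 * out                    # placeholder update rule
    return x

# Toy usage with a stand-in "network".
model = lambda x, emb: x * emb.mean()
x = torch.randn(4, 8)
t_embeds = [torch.full((16,), 1.0 - 0.01 * i) for i in range(50)]
print(denoise_with_cache(model, x, t_embeds).shape)
```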

PersonalVideo: High ID-Fidelity Video Customization without Dynamic and Semantic Degradation

no code implementations • 26 Nov 2024 • Hengjia Li, Haonan Qiu, Shiwei Zhang, Xiang Wang, Yujie Wei, Zekun Li, Yingya Zhang, Boxi Wu, Deng Cai

The key challenge lies in consistently maintaining high ID fidelity while preserving the original motion dynamics and semantic following after identity injection.

Tasks: Video Generation

DreamVideo-2: Zero-Shot Subject-Driven Video Customization with Precise Motion Control

no code implementations • 17 Oct 2024 • Yujie Wei, Shiwei Zhang, Hangjie Yuan, Xiang Wang, Haonan Qiu, Rui Zhao, Yutong Feng, Feng Liu, Zhizhong Huang, Jiaxin Ye, Yingya Zhang, Hongming Shan

In this paper, we present DreamVideo-2, a zero-shot video customization framework that generates videos with a specific subject and motion trajectory, guided by a single image and a bounding box sequence, respectively, without the need for test-time fine-tuning.

Tasks: Video Generation
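One way to understand the bounding-box guidance is as a per-frame mask video derived from the box sequence. The sketch below shows only that conversion step; the box format (x0, y0, x1, y1) and resolution are illustrative assumptions, not the paper's interface.

```python
# Sketch: turn a bounding-box sequence into per-frame binary masks, one
# plausible carrier for trajectory guidance in a video model.
import torch

def boxes_to_masks(boxes, height: int, width: int) -> torch.Tensor:
    """boxes: list of (x0, y0, x1, y1) per frame -> (T, H, W) float masks."""
    masks = torch.zeros(len(boxes), height, width)
    for t, (x0, y0, x1, y1) in enumerate(boxes):
        masks[t, y0:y1, x0:x1] = 1.0
    return masks

# A subject moving left-to-right across 16 frames of a 64x64 canvas.
traj = [(4 + 3 * t, 24, 20 + 3 * t, 40) for t in range(16)]
masks = boxes_to_masks(traj, 64, 64)
print(masks.shape, masks[0].sum().item())  # torch.Size([16, 64, 64]) 256.0
```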

FreeTraj: Tuning-Free Trajectory Control in Video Diffusion Models

1 code implementation • 24 Jun 2024 • Haonan Qiu, Zhaoxi Chen, Zhouxia Wang, Yingqing He, Menghan Xia, Ziwei Liu

Diffusion models have demonstrated remarkable capabilities in video generation, which further sparks interest in introducing trajectory control into the generation process.

Tasks: Video Generation
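A tuning-free way to bias a sampler toward a trajectory is to manipulate the initial noise. The sketch below illustrates one such heuristic, repeating a shared noise patch along the target boxes so sampling tends to keep the subject there; the patch-copy rule is an assumption for illustration, not FreeTraj's exact procedure.

```python
# Sketch: trajectory guidance via the initial noise. Paste the same noise
# patch at the target box in every frame of the starting noise tensor.
import torch

def trajectory_noise(T: int, C: int, H: int, W: int, boxes) -> torch.Tensor:
    noise = torch.randn(T, C, H, W)
    x0, y0, x1, y1 = boxes[0]
    patch = noise[0, :, y0:y1, x0:x1].clone()  # shared patch from frame 0
    for t, (x0, y0, x1, y1) in enumerate(boxes):
        noise[t, :, y0:y1, x0:x1] = patch      # repeat it along the trajectory
    return noise

# Boxes of equal size sliding rightward over 16 frames.
boxes = [(8 + 2 * t, 16, 24 + 2 * t, 32) for t in range(16)]
print(trajectory_noise(16, 4, 64, 64, boxes).shape)
```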

FreeNoise: Tuning-Free Longer Video Diffusion via Noise Rescheduling

3 code implementations • 23 Oct 2023 • Haonan Qiu, Menghan Xia, Yong Zhang, Yingqing He, Xintao Wang, Ying Shan, Ziwei Liu

With the availability of large-scale video datasets and advances in diffusion models, text-driven video generation has achieved substantial progress.

Tasks: Video Generation
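The title's noise rescheduling points to reusing the training-length noise frames, with local shuffling, so that generation can extend beyond the training window while preserving long-range correlation. A minimal sketch under that assumption follows; the window size and shuffling rule are illustrative.

```python
# Sketch: extend a base noise sequence to more frames by repeating it with
# local (windowed) shuffling, keeping long-range noise correlation.
import torch

def reschedule_noise(base_noise: torch.Tensor, target_len: int,
                     window: int = 4) -> torch.Tensor:
    """base_noise: (T, C, H, W) -> (target_len, C, H, W)."""
    T = base_noise.shape[0]
    frames = []
    while len(frames) < target_len:
        idx = torch.arange(T)
        for s in range(0, T, window):          # shuffle within local windows
            perm = torch.randperm(min(window, T - s)) + s
            idx[s:s + len(perm)] = perm
        frames.extend(base_noise[idx])
    return torch.stack(frames[:target_len])

longer = reschedule_noise(torch.randn(16, 4, 32, 32), target_len=64)
print(longer.shape)  # torch.Size([64, 4, 32, 32])
```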

ReliTalk: Relightable Talking Portrait Generation from a Single Video

1 code implementation • 5 Sep 2023 • Haonan Qiu, Zhaoxi Chen, Yuming Jiang, Hang Zhou, Xiangyu Fan, Lei Yang, Wayne Wu, Ziwei Liu

Our key insight is to decompose the portrait's reflectance from implicitly learned audio-driven facial normals and images.

Tasks: Single-Image Portrait Relighting
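Relightable portrait methods of this kind typically bottom out in a Lambertian image formation model: albedo times max(0, n · l). The sketch below shows that classic decomposition with synthetic stand-in inputs; it is not ReliTalk's full pipeline, which learns the normals and reflectance from audio-driven video.

```python
# Sketch: Lambertian relighting. Given per-pixel albedo and unit normals,
# shading under a new light direction l is max(0, n . l).
import torch
import torch.nn.functional as F

def relight(albedo: torch.Tensor, normals: torch.Tensor,
            light_dir: torch.Tensor) -> torch.Tensor:
    """albedo: (3, H, W), normals: (3, H, W) unit vectors, light_dir: (3,)."""
    l = F.normalize(light_dir, dim=0)
    shading = (normals * l[:, None, None]).sum(0).clamp(min=0.0)  # n . l
    return albedo * shading[None]        # diffuse image under the new light

albedo = torch.rand(3, 64, 64)                        # stand-in reflectance
normals = F.normalize(torch.randn(3, 64, 64), dim=0)  # stand-in normals
print(relight(albedo, normals, torch.tensor([0.0, 0.0, 1.0])).shape)
```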

Temporal Contrastive Learning for Spiking Neural Networks

no code implementations • 23 May 2023 • Haonan Qiu, Zeyin Song, Yanqi Chen, Munan Ning, Wei Fang, Tao Sun, Zhengyu Ma, Li Yuan, Yonghong Tian

However, in this work, we find that the method above is not ideal for SNN training, as it omits the temporal dynamics of SNNs and its performance degrades quickly as the number of inference time steps decreases.

Tasks: Contrastive Learning
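A natural reading of "temporal contrastive learning" is to contrast the per-timestep features of two augmented views rather than only their time-averaged summaries, which would preserve the SNN's temporal dynamics in the objective. Here is a hedged sketch of an InfoNCE-style loss under that assumption; the feature shapes and temperature are illustrative.

```python
# Sketch: a temporal contrastive (InfoNCE-style) loss applied at every SNN
# time step, rather than on time-averaged features.
import torch
import torch.nn.functional as F

def temporal_info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1):
    """z1, z2: (T, B, D) per-timestep features of two views of the same batch."""
    T, B, D = z1.shape
    loss = 0.0
    for t in range(T):                        # contrast views at each time step
        a = F.normalize(z1[t], dim=1)
        b = F.normalize(z2[t], dim=1)
        logits = a @ b.t() / tau              # (B, B) similarity matrix
        labels = torch.arange(B)              # positives on the diagonal
        loss = loss + F.cross_entropy(logits, labels)
    return loss / T

loss = temporal_info_nce(torch.randn(4, 8, 32), torch.randn(4, 8, 32))
print(loss.item())
```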

StyleFaceV: Face Video Generation via Decomposing and Recomposing Pretrained StyleGAN3

1 code implementation • 16 Aug 2022 • Haonan Qiu, Yuming Jiang, Hang Zhou, Wayne Wu, Ziwei Liu

Notably, StyleFaceV is capable of generating realistic $1024\times1024$ face videos even without high-resolution training videos.

Tasks: Image Generation, Video Generation

Text2Human: Text-Driven Controllable Human Image Generation

2 code implementations • 31 May 2022 • Yuming Jiang, Shuai Yang, Haonan Qiu, Wayne Wu, Chen Change Loy, Ziwei Liu

In this work, we present a text-driven controllable framework, Text2Human, for high-quality and diverse human generation.

Tasks: Diversity, Human Parsing +1

Few-shot Forgery Detection via Guided Adversarial Interpolation

no code implementations • 12 Apr 2022 • Haonan Qiu, Siyu Chen, Bei Gan, Kun Wang, Huafeng Shi, Jing Shao, Ziwei Liu

Notably, our method is also validated to be robust to the choice of majority and minority forgery approaches.

SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing

1 code implementation • 19 Jun 2019 • Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, Bo Li

In this paper, we aim to explore the impact of semantic manipulation on DNN predictions by manipulating the semantic attributes of images to generate "unrestricted adversarial examples".

Tasks: Attribute, Face Recognition +1
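The snippet describes attacking a DNN by editing semantic attributes rather than adding norm-bounded pixel noise. The toy sketch below conveys the flavor: optimize an interpolation weight between an image and its attribute-edited version so the prediction flips. The pixel-space interpolation and the tiny classifier are simplifications; the paper performs the interpolation through an attribute-conditional editing model.

```python
# Sketch: an "unrestricted" adversarial example via semantic interpolation.
# Optimize one weight between an image and its attribute-edited version.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
for p in classifier.parameters():
    p.requires_grad_(False)           # the victim model stays fixed

x = torch.rand(1, 3, 32, 32)          # original image (stand-in)
x_edit = torch.rand(1, 3, 32, 32)     # attribute-edited image (e.g., new hairstyle)
target = torch.tensor([1])            # class the attacker wants

alpha = torch.zeros(1, requires_grad=True)   # interpolation weight to optimize
opt = torch.optim.Adam([alpha], lr=0.1)
for _ in range(100):
    a = torch.sigmoid(alpha)                 # keep the weight in (0, 1)
    x_adv = (1 - a) * x + a * x_edit         # semantic interpolation
    loss = nn.functional.cross_entropy(classifier(x_adv), target)
    opt.zero_grad(); loss.backward(); opt.step()
print("final weight:", torch.sigmoid(alpha).item())
```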

Two-phase Hair Image Synthesis by Self-Enhancing Generative Model

no code implementations • 28 Feb 2019 • Haonan Qiu, Chuan Wang, Hang Zhu, Xiangyu Zhu, Jinjin Gu, Xiaoguang Han

Generating plausible hair images given limited guidance, such as sparse sketches or low-resolution images, has been made possible with the rise of Generative Adversarial Networks (GANs).

Tasks: Image-to-Image Translation, Super-Resolution +2

Precise Temporal Action Localization by Evolving Temporal Proposals

no code implementations • 13 Apr 2018 • Haonan Qiu, Yingbin Zheng, Hao Ye, Yao Lu, Feng Wang, Liang He

The performance of existing action localization approaches remains unsatisfactory in precisely determining the beginning and the end of an action.

Tasks: Temporal Action Localization
