Search Results for author: Binxin Yang

Found 7 papers, 4 papers with code

Get In Video: Add Anything You Want to the Video

no code implementations • 8 Mar 2025 • Shaobin Zhuang, Zhipeng Huang, Binxin Yang, Ying Zhang, Fangyikang Wang, Canmiao Fu, Chong Sun, Zheng-Jun Zha, Chen Li, Yali Wang

Video editing increasingly demands the ability to incorporate specific real-world instances into existing footage, yet current approaches fundamentally fail to capture the unique visual characteristics of particular subjects and ensure natural instance/scene interactions.

Object Detection +2

WeGen: A Unified Model for Interactive Multimodal Generation as We Chat

no code implementations • 3 Mar 2025 • Zhipeng Huang, Shaobin Zhuang, Canmiao Fu, Binxin Yang, Ying Zhang, Chong Sun, Zhizheng Zhang, Yali Wang, Chen Li, Zheng-Jun Zha

In this work, we introduce WeGen, a model that unifies multimodal generation and understanding, and promotes their interplay in iterative generation.

Multimodal Generation

Semantics-Preserving Sketch Embedding for Face Generation

no code implementations • 23 Nov 2022 • Binxin Yang, Xuejin Chen, Chaoqun Wang, Chi Zhang, Zihan Chen, Xiaoyan Sun

With a semantic feature matching loss for effective semantic supervision, our sketch embedding precisely conveys the semantics in the input sketches to the synthesized images.

Face Generation Image-to-Image Translation
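
The semantic feature matching loss mentioned in the abstract above could be sketched roughly as follows. This is an illustrative, assumption-laden sketch (assumed PyTorch, assumed frozen semantic encoder returning a list of multi-layer feature maps), not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticFeatureMatchingLoss(nn.Module):
    """Hypothetical sketch: L1 distance between intermediate features of a
    frozen semantic encoder, computed for generated vs. reference images."""

    def __init__(self, semantic_encoder: nn.Module):
        super().__init__()
        # The encoder is assumed to be pretrained and frozen, and to return
        # a list of feature maps from several layers when called on an image.
        self.encoder = semantic_encoder.eval()
        for p in self.encoder.parameters():
            p.requires_grad = False

    def forward(self, generated: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        feats_gen = self.encoder(generated)
        feats_ref = self.encoder(reference)
        loss = torch.zeros((), device=generated.device)
        for f_g, f_r in zip(feats_gen, feats_ref):
            # Match features layer by layer; reference features are detached
            # so gradients only flow through the generated image.
            loss = loss + F.l1_loss(f_g, f_r.detach())
        return loss / len(feats_gen)
```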

3DFaceShop: Explicitly Controllable 3D-Aware Portrait Generation

1 code implementation • 12 Sep 2022 • Junshu Tang, Bo Zhang, Binxin Yang, Ting Zhang, Dong Chen, Lizhuang Ma, Fang Wen

In contrast to the traditional avatar creation pipeline, which is a costly process, contemporary generative approaches directly learn the data distribution from photographs.

3D Face Animation Disentanglement +3
