Search Results for author: Xingqian Xu

Found 20 papers, 15 papers with code

Deep Affinity Net: Instance Segmentation via Affinity

no code implementations 15 Mar 2020 Xingqian Xu, Mang Tik Chiu, Thomas S. Huang, Honghui Shi

Most modern instance segmentation approaches fall into two categories: region-based approaches, in which object bounding boxes are detected first and then used to crop and segment instances; and keypoint-based approaches, in which individual instances are represented by a set of keypoints followed by dense pixel clustering around those keypoints.

Clustering, graph partitioning, +2

Rethinking Text Segmentation: A Novel Dataset and A Text-Specific Refinement Approach

1 code implementation CVPR 2021 Xingqian Xu, Zhifei Zhang, Zhaowen Wang, Brian Price, Zhonghao Wang, Humphrey Shi

We also introduce Text Refinement Network (TexRNet), a novel text segmentation approach that adapts to the unique properties of text, e.g., non-convex boundary, diverse texture, etc., which often impose burdens on traditional segmentation models.

Segmentation, Style Transfer, +2

UltraSR: Spatial Encoding is a Missing Key for Implicit Image Function-based Arbitrary-Scale Super-Resolution

1 code implementation 23 Mar 2021 Xingqian Xu, Zhangyang Wang, Humphrey Shi

In this work, we propose UltraSR, a simple yet effective new network design based on implicit image functions, in which we deeply integrate spatial coordinates and periodic encoding with the implicit neural representation.

Super-Resolution
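As a rough illustration of the spatial/periodic encoding idea described in the UltraSR entry above (this is not the authors' released code; the class and parameter names such as PeriodicEncoding, ImplicitDecoder, and num_freqs are hypothetical), the following minimal PyTorch sketch decodes RGB from local features, raw relative coordinates, and a sinusoidal encoding of those coordinates:

```python
# Hedged sketch of an implicit image function with periodic coordinate encoding.
# Not the official UltraSR implementation; names and dimensions are illustrative.
import torch
import torch.nn as nn


class PeriodicEncoding(nn.Module):
    """Map 2-D relative coordinates to [sin(2^k * pi * x), cos(2^k * pi * x), ...]."""

    def __init__(self, num_freqs: int = 6):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(num_freqs) * torch.pi)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 2) relative offsets in [-1, 1]
        angles = coords.unsqueeze(-1) * self.freqs          # (N, 2, F)
        enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
        return enc.flatten(1)                               # (N, 4 * F)


class ImplicitDecoder(nn.Module):
    """MLP that predicts RGB from (local feature, coordinate, periodic-encoded coordinate)."""

    def __init__(self, feat_dim: int = 64, num_freqs: int = 6, hidden: int = 256):
        super().__init__()
        self.encode = PeriodicEncoding(num_freqs)
        in_dim = feat_dim + 2 + 4 * num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, feat: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # feat: (N, feat_dim) features sampled at the query locations
        # coords: (N, 2) offsets from the nearest feature-grid cell center
        x = torch.cat([feat, coords, self.encode(coords)], dim=-1)
        return self.mlp(x)


# Usage: query 4096 arbitrary sub-pixel locations against their local features.
decoder = ImplicitDecoder(feat_dim=64)
rgb = decoder(torch.randn(4096, 64), torch.rand(4096, 2) * 2 - 1)  # (4096, 3)
```

Because the decoder is queried per coordinate, the same trained weights can render any output resolution, which is the premise of arbitrary-scale super-resolution.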

Towards Layer-wise Image Vectorization

1 code implementation CVPR 2022 Xu Ma, Yuqian Zhou, Xingqian Xu, Bin Sun, Valerii Filev, Nikita Orlov, Yun Fu, Humphrey Shi

Image rasterization is a mature technique in computer graphics, while image vectorization, the reverse path of rasterization, remains a major challenge.

Image Completion with Heterogeneously Filtered Spectral Hints

1 code implementation 7 Nov 2022 Xingqian Xu, Shant Navasardyan, Vahram Tadevosyan, Andranik Sargsyan, Yadong Mu, Humphrey Shi

We also demonstrate the effectiveness of our design via ablation studies, which show that the aforementioned challenges, i.e., pattern unawareness, blurry textures, and structure distortion, are noticeably resolved.

Image Inpainting

StyleNAT: Giving Each Head a New Perspective

2 code implementations 10 Nov 2022 Steven Walton, Ali Hassani, Xingqian Xu, Zhangyang Wang, Humphrey Shi

Image generation has been a long sought-after but challenging task, and performing it efficiently is similarly difficult.

Face Generation

Versatile Diffusion: Text, Images and Variations All in One Diffusion Model

3 code implementations ICCV 2023 Xingqian Xu, Zhangyang Wang, Eric Zhang, Kai Wang, Humphrey Shi

In this work, we expand the existing single-flow diffusion pipeline into a multi-task multimodal network, dubbed Versatile Diffusion (VD), that handles multiple flows of text-to-image, image-to-text, and variations in one unified model.

Disentanglement, Image Captioning, +5

MI-GAN: A Simple Baseline for Image Inpainting on Mobile Devices

1 code implementation ICCV 2023 Andranik Sargsyan, Shant Navasardyan, Xingqian Xu, Humphrey Shi

In this paper we present a simple image inpainting baseline, Mobile Inpainting GAN (MI-GAN), which is approximately one order of magnitude computationally cheaper and smaller than existing state-of-the-art inpainting models, and can be efficiently deployed on mobile devices.

Efficient Neural Network, Image Inpainting, +1

PAIR-Diffusion: A Comprehensive Multimodal Object-Level Image Editor

1 code implementation 30 Mar 2023 Vidit Goel, Elia Peruzzo, Yifan Jiang, Dejia Xu, Xingqian Xu, Nicu Sebe, Trevor Darrell, Zhangyang Wang, Humphrey Shi

We propose PAIR Diffusion, a generic framework that can enable a diffusion model to control the structure and appearance properties of each object in the image.

Object

Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models

1 code implementation 30 Mar 2023 Eric Zhang, Kai Wang, Xingqian Xu, Zhangyang Wang, Humphrey Shi

The unlearning problem of deep learning models, once primarily an academic concern, has become a prevalent issue in the industry.

Disentanglement, Memorization, +1

Zero-shot Generative Model Adaptation via Image-specific Prompt Learning

1 code implementation CVPR 2023 Jiayi Guo, Chaofei Wang, You Wu, Eric Zhang, Kai Wang, Xingqian Xu, Shiji Song, Humphrey Shi, Gao Huang

Recently, CLIP-guided image synthesis has shown appealing performance on adapting a pre-trained source-domain generator to an unseen target domain.

Image Generation

Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models

1 code implementation 25 May 2023 Xingqian Xu, Jiayi Guo, Zhangyang Wang, Gao Huang, Irfan Essa, Humphrey Shi

Text-to-image (T2I) research has grown explosively in the past year, owing to the large-scale pre-trained diffusion models and many emerging personalization and editing approaches.

Conditional Text-to-Image Synthesis, Image Generation, +3

Reference-based Painterly Inpainting via Diffusion: Crossing the Wild Reference Domain Gap

no code implementations 20 Jul 2023 Dejia Xu, Xingqian Xu, Wenyan Cong, Humphrey Shi, Zhangyang Wang

We propose Reference-based Painterly Inpainting, a novel task that crosses the wild reference domain gap and implants novel objects into artworks.

Image Inpainting

Interactive Neural Painting

no code implementations 31 Jul 2023 Elia Peruzzo, Willi Menapace, Vidit Goel, Federica Arrigoni, Hao Tang, Xingqian Xu, Arman Chopikyan, Nikita Orlov, Yuxiao Hu, Humphrey Shi, Nicu Sebe, Elisa Ricci

This paper advances the state of the art in this emerging research domain by proposing the first approach for Interactive NP.

Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models

1 code implementation 7 Dec 2023 Jiayi Guo, Xingqian Xu, Yifan Pu, Zanlin Ni, Chaofei Wang, Manushree Vasu, Shiji Song, Gao Huang, Humphrey Shi

Specifically, we introduce Step-wise Variation Regularization to enforce that the ratio between the variation of an arbitrary input latent and that of the output image remains constant at any diffusion training step.
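To make the regularization idea above concrete, here is a hedged toy sketch. It is not the official Smooth Diffusion implementation; the function name, the placeholder denoiser, and the target constant of 1.0 are all assumptions. It perturbs the input latent slightly and penalizes deviation of the output-to-input variation ratio from a fixed constant at the sampled training step:

```python
# Hedged sketch of a step-wise variation regularizer (illustrative only).
import torch


def variation_regularizer(denoiser, latent, timestep, cond, eps: float = 1e-2):
    """denoiser(latent, timestep, cond) -> predicted clean image/latent."""
    # Small random perturbation of the input latent, normalized to length eps.
    delta = torch.randn_like(latent)
    delta = eps * delta / delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)

    out = denoiser(latent, timestep, cond)
    out_shift = denoiser(latent + delta, timestep, cond)

    out_var = (out_shift - out).flatten(1).norm(dim=1)   # output variation
    in_var = delta.flatten(1).norm(dim=1)                # input-latent variation
    # Penalize deviation of the output/input variation ratio from 1
    # (the target constant here is a modeling choice, not taken from the paper).
    return ((out_var / in_var - 1.0) ** 2).mean()


# Usage with a placeholder denoiser standing in for a diffusion model:
toy = lambda z, t, c: z * 0.9
loss_reg = variation_regularizer(toy, torch.randn(4, 4, 32, 32), None, None)
```

In training, a term like this would be added to the usual diffusion loss so that small moves in latent space produce proportionally small, smooth changes in the generated image.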

VASE: Object-Centric Appearance and Shape Manipulation of Real Videos

no code implementations 4 Jan 2024 Elia Peruzzo, Vidit Goel, Dejia Xu, Xingqian Xu, Yifan Jiang, Zhangyang Wang, Humphrey Shi, Nicu Sebe

Recently, several works have tackled the video editing task, fostered by the success of large-scale text-to-image generative models.

Video Editing

OpenBias: Open-set Bias Detection in Text-to-Image Generative Models

no code implementations 11 Apr 2024 Moreno D'Incà, Elia Peruzzo, Massimiliano Mancini, Dejia Xu, Vidit Goel, Xingqian Xu, Zhangyang Wang, Humphrey Shi, Nicu Sebe

In this paper, we tackle the challenge of open-set bias detection in text-to-image generative models, presenting OpenBias, a new pipeline that identifies and quantifies the severity of biases agnostically, without access to any precompiled set of biases.

Bias Detection, Fairness, +3
