Search Results for author: Zipeng Xu

Found 7 papers, 7 papers with code

Answer-Driven Visual State Estimator for Goal-Oriented Visual Dialogue

1 code implementation • 1 Oct 2020 • Zipeng Xu, Fangxiang Feng, Xiaojie Wang, Yushu Yang, Huixing Jiang, Zhongyuan Wang

In this paper, we propose an Answer-Driven Visual State Estimator (ADVSE) to impose the effects of different answers on visual states.

Question Generation +1

Modeling Explicit Concerning States for Reinforcement Learning in Visual Dialogue

1 code implementation • 12 Jul 2021 • Zipeng Xu, Fandong Meng, Xiaojie Wang, Duo Zheng, Chenxu Lv, Jie Zhou

In Reinforcement Learning, it is crucial to represent states and assign rewards based on the action-caused transitions of states.

Reinforcement Learning (RL)

Enhancing Visual Dialog Questioner with Entity-based Strategy Learning and Augmented Guesser

1 code implementation • Findings (EMNLP) 2021 • Duo Zheng, Zipeng Xu, Fandong Meng, Xiaojie Wang, Jiaan Wang, Jie Zhou

To enhance the VD Questioner: 1) we propose a Related entity enhanced Questioner (ReeQ) that generates questions under the guidance of related entities and learns an entity-based questioning strategy from human dialogs; 2) we propose an Augmented Guesser (AugG) that is strong and optimized especially for the VD setting.

Reinforcement Learning (RL) Visual Dialog

Predict, Prevent, and Evaluate: Disentangled Text-Driven Image Manipulation Empowered by Pre-Trained Vision-Language Model

1 code implementation • CVPR 2022 • Zipeng Xu, Tianwei Lin, Hao Tang, Fu Li, Dongliang He, Nicu Sebe, Radu Timofte, Luc van Gool, Errui Ding

We propose a novel framework, i.e., Predict, Prevent, and Evaluate (PPE), for disentangled text-driven image manipulation that requires little manual annotation while being applicable to a wide variety of manipulations.

Image Manipulation Language Modelling

Spot the Difference: A Cooperative Object-Referring Game in Non-Perfectly Co-Observable Scene

1 code implementation • 16 Mar 2022 • Duo Zheng, Fandong Meng, Qingyi Si, Hairun Fan, Zipeng Xu, Jie Zhou, Fangxiang Feng, Xiaojie Wang

Visual dialog has witnessed great progress since various vision-oriented goals were introduced into the conversation, notably GuessWhich and GuessWhat, where the single image is visible to only one of the two players or to both the questioner and the answerer, respectively.

Visual Dialog

SpectralCLIP: Preventing Artifacts in Text-Guided Style Transfer from a Spectral Perspective

1 code implementation • 16 Mar 2023 • Zipeng Xu, Songlong Xing, Enver Sangineto, Nicu Sebe

However, directly using CLIP to guide style transfer leads to undesirable artifacts (mainly written words and unrelated visual entities) spread over the image.

Image Generation Style Transfer

StylerDALLE: Language-Guided Style Transfer Using a Vector-Quantized Tokenizer of a Large-Scale Generative Model

1 code implementation • ICCV 2023 • Zipeng Xu, Enver Sangineto, Nicu Sebe

Despite the progress made in the style transfer task, most previous work focuses on transferring only relatively simple features, such as color or texture, while missing more abstract concepts such as overall art expression or painter-specific traits.

Style Transfer
