Search Results for author: Zhipeng Hu

Found 28 papers, 13 papers with code

XRL-Bench: A Benchmark for Evaluating and Comparing Explainable Reinforcement Learning Techniques

1 code implementation • 20 Feb 2024 • Yu Xiong, Zhipeng Hu, Ye Huang, Runze Wu, Kai Guan, Xingchen Fang, Ji Jiang, Tianze Zhou, Yujing Hu, Haoyu Liu, Tangjie Lyu, Changjie Fan

To address this, we introduce XRL-Bench, a unified standardized benchmark tailored for the evaluation and comparison of XRL methods, encompassing three main modules: standard RL environments, explainers based on state importance, and standard evaluators.

Decision Making · Reinforcement Learning (RL)

Towards Efficient Diffusion-Based Image Editing with Instant Attention Masks

1 code implementation • 15 Jan 2024 • Siyu Zou, Jiji Tang, Yiyi Zhou, Jing He, Chaoyi Zhao, Rongsheng Zhang, Zhipeng Hu, Xiaoshuai Sun

In particular, InstDiffEdit aims to employ the cross-modal attention ability of existing diffusion models to achieve instant mask guidance during the diffusion steps.

Text-Guided 3D Face Synthesis - From Generation to Editing

no code implementations • CVPR 2024 • Yunjie Wu, Yapeng Meng, Zhipeng Hu, Lincheng Li, Haoqian Wu, Kun Zhou, Weiwei Xu, Xin Yu

In the editing stage, we first employ a pre-trained diffusion model to update facial geometry or texture based on the texts.

Face Generation · Texture Synthesis

Text-Guided 3D Face Synthesis -- From Generation to Editing

no code implementations • 1 Dec 2023 • Yunjie Wu, Yapeng Meng, Zhipeng Hu, Lincheng Li, Haoqian Wu, Kun Zhou, Weiwei Xu, Xin Yu

In the editing stage, we first employ a pre-trained diffusion model to update facial geometry or texture based on the texts.

Face Generation · Texture Synthesis

AlignDiff: Aligning Diverse Human Preferences via Behavior-Customisable Diffusion Model

no code implementations • 3 Oct 2023 • Zibin Dong, Yifu Yuan, Jianye Hao, Fei Ni, Yao Mu, Yan Zheng, Yujing Hu, Tangjie Lv, Changjie Fan, Zhipeng Hu

Aligning agent behaviors with diverse human preferences remains a challenging problem in reinforcement learning (RL), owing to the inherent abstractness and mutability of human preferences.

Attribute · Reinforcement Learning (RL)

EfficientDreamer: High-Fidelity and Robust 3D Creation via Orthogonal-view Diffusion Prior

1 code implementation • 25 Aug 2023 • Zhipeng Hu, Minda Zhao, Chaoyi Zhao, Xinyue Liang, Lincheng Li, Zeng Zhao, Changjie Fan, Xiaowei Zhou, Xin Yu

This limitation leads to the Janus problem, where multi-faced 3D models are generated under the guidance of such diffusion models.

Text to 3D

Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations

2 code implementations • 6 May 2023 • Yufeng Huang, Jiji Tang, Zhuo Chen, Rongsheng Zhang, Xinfeng Zhang, WeiJie Chen, Zeng Zhao, Zhou Zhao, Tangjie Lv, Zhipeng Hu, Wen Zhang

In this paper, we present an end-to-end framework Structure-CLIP, which integrates Scene Graph Knowledge (SGK) to enhance multi-modal structured representations.

Image-text matching · Text Matching

TalkCLIP: Talking Head Generation with Text-Guided Expressive Speaking Styles

no code implementations • 1 Apr 2023 • Yifeng Ma, Suzhen Wang, Yu Ding, Bowen Ma, Tangjie Lv, Changjie Fan, Zhipeng Hu, Zhidong Deng, Xin Yu

In this work, we propose an expression-controllable one-shot talking head method, dubbed TalkCLIP, where the expression in a speech is specified by natural language.

2D Semantic Segmentation task 3 (25 classes) · Talking Head Generation

DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video

1 code implementation • 7 Mar 2023 • Zhimeng Zhang, Zhipeng Hu, Wenjin Deng, Changjie Fan, Tangjie Lv, Yu Ding

Different from previous works relying on multiple up-sample layers to directly generate pixels from latent embeddings, DINet performs spatial deformation on feature maps of reference images to better preserve high-frequency textural details.

Decoder · Face Dubbing

Zero-Shot Text-to-Parameter Translation for Game Character Auto-Creation

no code implementations • CVPR 2023 • Rui Zhao, Wei Li, Zhipeng Hu, Lincheng Li, Zhengxia Zou, Zhenwei Shi, Changjie Fan

In our method, taking the power of large-scale pre-trained multi-modal CLIP and neural rendering, T2P searches both continuous facial parameters and discrete facial parameters in a unified framework.

3D Generation · Face Model · +3

Tailoring Language Generation Models under Total Variation Distance

1 code implementation • 26 Feb 2023 • Haozhe Ji, Pei Ke, Zhipeng Hu, Rongsheng Zhang, Minlie Huang

The standard paradigm of neural language generation adopts maximum likelihood estimation (MLE) as the optimizing method.

Text Generation

StyleTalk: One-shot Talking Head Generation with Controllable Speaking Styles

1 code implementation • 3 Jan 2023 • Yifeng Ma, Suzhen Wang, Zhipeng Hu, Changjie Fan, Tangjie Lv, Yu Ding, Zhidong Deng, Xin Yu

In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio.

Decoder · Talking Face Generation · +1

Towards Unbiased Volume Rendering of Neural Implicit Surfaces With Geometry Priors

no code implementations • CVPR 2023 • Yongqiang Zhang, Zhipeng Hu, Haoqian Wu, Minda Zhao, Lincheng Li, Zhengxia Zou, Changjie Fan

In this paper, we argue that this limited accuracy is due to the bias of their volume rendering strategies, especially when the viewing direction is close to being tangent to the surface.

Surface Reconstruction

TCFimt: Temporal Counterfactual Forecasting from Individual Multiple Treatment Perspective

no code implementations • 17 Dec 2022 • Pengfei Xi, Guifeng Wang, Zhipeng Hu, Yu Xiong, Mingming Gong, Wei Huang, Runze Wu, Yu Ding, Tangjie Lv, Changjie Fan, Xiangnan Feng

TCFimt constructs adversarial tasks in a seq2seq framework to alleviate selection and time-varying bias, and designs a contrastive learning-based block to decouple a mixed treatment effect into separated main treatment effects and causal interactions, which further improves estimation accuracy.

Contrastive Learning · counterfactual · +3

Facial Action Unit Detection and Intensity Estimation from Self-supervised Representation

no code implementations • 28 Oct 2022 • Bowen Ma, Rudong An, Wei zhang, Yu Ding, Zeng Zhao, Rongsheng Zhang, Tangjie Lv, Changjie Fan, Zhipeng Hu

As a fine-grained and local expression behavior measurement, facial action unit (FAU) analysis (e.g., detection and intensity estimation) has been documented for its time-consuming, labor-intensive, and error-prone annotation.

Action Unit Detection · Facial Action Unit Detection

Facial Action Units Detection Aided by Global-Local Expression Embedding

no code implementations • 25 Oct 2022 • Zhipeng Hu, Wei zhang, Lincheng Li, Yu Ding, Wei Chen, Zhigang Deng, Xin Yu

We find that AUs and facial expressions are highly associated, and existing facial expression datasets often contain a large number of identities.

3D Face Reconstruction

Generating Coherent Narratives by Learning Dynamic and Discrete Entity States with a Contrastive Framework

1 code implementation • 8 Aug 2022 • Jian Guan, Zhenyu Yang, Rongsheng Zhang, Zhipeng Hu, Minlie Huang

Despite advances in generating fluent texts, existing pretraining models tend to attach incoherent event sequences to involved entities when generating narratives such as stories and news.

Decoder · Sentence

LaMemo: Language Modeling with Look-Ahead Memory

1 code implementation • NAACL 2022 • Haozhe Ji, Rongsheng Zhang, Zhenyu Yang, Zhipeng Hu, Minlie Huang

Although Transformers with fully connected self-attentions are powerful at modeling long-term dependencies, they struggle to scale to long texts with thousands of words in language modeling.

Language Modelling

I-Tuning: Tuning Frozen Language Models with Image for Lightweight Image Captioning

no code implementations • 14 Feb 2022 • Ziyang Luo, Zhipeng Hu, Yadong Xi, Rongsheng Zhang, Jing Ma

Different from these heavy-cost models, we introduce a lightweight image captioning framework (I-Tuning), which contains a small number of trainable parameters.

Decoder · Image Captioning · +1

Towards Unifying Behavioral and Response Diversity for Open-ended Learning in Zero-sum Games

1 code implementation • NeurIPS 2021 • Xiangyu Liu, Hangtian Jia, Ying Wen, Yaodong Yang, Yujing Hu, Yingfeng Chen, Changjie Fan, Zhipeng Hu

With this unified diversity measure, we design the corresponding diversity-promoting objective and population effectivity when seeking the best responses in open-ended learning.

Neural-to-Tree Policy Distillation with Policy Improvement Criterion

no code implementations • 16 Aug 2021 • Zhao-Hua Li, Yang Yu, Yingfeng Chen, Ke Chen, Zhipeng Hu, Changjie Fan

The empirical results show that the proposed method can preserve a higher cumulative reward than behavior cloning and learn a more consistent policy to the original one.

Decision Making · reinforcement-learning · +1

KuiLeiXi: a Chinese Open-Ended Text Adventure Game

no code implementations • ACL 2021 • Yadong Xi, Xiaoxi Mao, Le Li, Lei Lin, Yanjiang Chen, Shuhan Yang, Xuhan Chen, Kailun Tao, Zhi Li, Gongzheng li, Lin Jiang, Siyan Liu, Zeng Zhao, Minlie Huang, Changjie Fan, Zhipeng Hu

Equipped with GPT-2 and the latest GPT-3, AI Dungeon has been seen as a famous example of the powerful text generation capabilities of large-scale pre-trained language models, and a possibility for future games.

Story Generation

GLIB: Towards Automated Test Oracle for Graphically-Rich Applications

1 code implementation • 19 Jun 2021 • Ke Chen, Yufei Li, Yingfeng Chen, Changjie Fan, Zhipeng Hu, Wei Yang

We perform an evaluation of GLIB on 20 real-world game apps (with bug reports available) and the result shows that GLIB can achieve 100% precision and 99.5% recall in detecting non-crashing bugs such as game GUI glitches.

Data Augmentation

Unifying Behavioral and Response Diversity for Open-ended Learning in Zero-sum Games

no code implementations • 9 Jun 2021 • Xiangyu Liu, Hangtian Jia, Ying Wen, Yaodong Yang, Yujing Hu, Yingfeng Chen, Changjie Fan, Zhipeng Hu

With this unified diversity measure, we design the corresponding diversity-promoting objective and population effectivity when seeking the best responses in open-ended learning.
