Search Results for author: Zhaofeng He

Found 16 papers, 8 papers with code

Dynamic Generation of Personalities with Large Language Models

1 code implementation • 10 Apr 2024 • Jianzhi Liu, Hexiang Gu, Tianyu Zheng, Liuyu Xiang, Huijia Wu, Jie Fu, Zhaofeng He

We propose a new metric to assess personality generation capability based on this evaluation method.

Personality Generation

FakeNewsGPT4: Advancing Multimodal Fake News Detection through Knowledge-Augmented LVLMs

no code implementations • 4 Mar 2024 • Xuannan Liu, Peipei Li, Huaibo Huang, Zekun Li, Xing Cui, Jiahao Liang, Lixiong Qin, Weihong Deng, Zhaofeng He

In this paper, we propose FakeNewsGPT4, a novel framework that augments Large Vision-Language Models (LVLMs) with forgery-specific knowledge for manipulation reasoning while inheriting their extensive world knowledge as a complement.

Fake News Detection • Informativeness • +1

LLMArena: Assessing Capabilities of Large Language Models in Dynamic Multi-Agent Environments

no code implementations • 26 Feb 2024 • Junzhe Chen, Xuming Hu, Shuodi Liu, Shiyu Huang, Wei-Wei Tu, Zhaofeng He, Lijie Wen

Recent advancements in large language models (LLMs) have revealed their potential for building autonomous agents with human-level intelligence.

MORE-3S: Multimodal-based Offline Reinforcement Learning with Shared Semantic Spaces

1 code implementation • 20 Feb 2024 • Tianyu Zheng, Ge Zhang, Xingwei Qu, Ming Kuang, Stephen W. Huang, Zhaofeng He

Drawing upon the intuition that aligning different modalities to the same semantic embedding space would allow models to understand states and actions more easily, we propose a new perspective to the offline reinforcement learning (RL) challenge.

Decision Making • Offline RL • +3
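
The underlying intuition, projecting different modalities into one shared semantic space, can be illustrated with a small contrastive-alignment sketch. All module names and dimensions below are hypothetical and only convey the general idea, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical projection heads mapping state features and text features into a
# shared semantic embedding space, trained with a symmetric CLIP-style objective.
state_proj = nn.Linear(128, 64)   # e.g. encoded environment states
text_proj = nn.Linear(512, 64)    # e.g. encoded language descriptions of actions

def alignment_loss(state_feats, text_feats, temperature=0.07):
    s = F.normalize(state_proj(state_feats), dim=-1)
    t = F.normalize(text_proj(text_feats), dim=-1)
    logits = s @ t.T / temperature                 # pairwise similarities
    targets = torch.arange(len(s))                 # matched pairs lie on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

loss = alignment_loss(torch.randn(8, 128), torch.randn(8, 512))
loss.backward()
```

Once the two modalities share one space, a downstream policy can treat encoded states and encoded instructions as interchangeable tokens, which is the intuition the abstract points to.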

HyperMoE: Paying Attention to Unselected Experts in Mixture of Experts via Dynamic Transfer

1 code implementation • 20 Feb 2024 • Hao Zhao, Zihan Qiu, Huijia Wu, Zili Wang, Zhaofeng He, Jie Fu

The Mixture of Experts (MoE) for language models has been proven effective in augmenting the capacity of models by dynamically routing each input token to a specific subset of experts for processing.

Multi-Task Learning
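
For context, the token routing that MoE layers rely on can be sketched in a few lines. The following is a minimal, generic top-k router in PyTorch; the expert count, hidden size, and k value are illustrative assumptions, not HyperMoE's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal top-k mixture-of-experts layer (illustrative, not HyperMoE itself)."""
    def __init__(self, d_model=256, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, num_experts)          # token -> expert logits
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                                      # x: (tokens, d_model)
        gate_logits = self.router(x)
        weights, idx = gate_logits.topk(self.k, dim=-1)        # pick k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                       # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

tokens = torch.randn(16, 256)
print(TopKMoE()(tokens).shape)                                 # torch.Size([16, 256])
```

HyperMoE's contribution concerns what happens to the experts this router does *not* select; the sketch only shows the standard selection step that the abstract takes as a starting point.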

Read to Play (R2-Play): Decision Transformer with Multimodal Game Instruction

1 code implementation • 6 Feb 2024 • Yonggang Jin, Ge Zhang, Hao Zhao, Tianyu Zheng, Jiawei Guo, Liuyu Xiang, Shawn Yue, Stephen W. Huang, Zhaofeng He, Jie Fu

Drawing inspiration from the success of multimodal instruction tuning in visual tasks, we treat the visual-based RL task as a long-horizon vision task and construct a set of multimodal game instructions to incorporate instruction tuning into a decision transformer.
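
One way to picture this setup is a decision-transformer token stream with instruction embeddings prepended as a prefix. The sketch below is purely illustrative: the shapes, the interleaving order, and the use of an unmasked encoder stack are assumptions for brevity, not the paper's design.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: prefix multimodal game-instruction embeddings to the usual
# (return-to-go, state, action) token stream of a decision transformer.
d_model = 128
instruction_tokens = torch.randn(1, 10, d_model)   # encoded text+image instruction
rtg = torch.randn(1, 20, d_model)                  # return-to-go embeddings
states = torch.randn(1, 20, d_model)
actions = torch.randn(1, 20, d_model)

# Interleave (rtg, state, action) per timestep, then prepend the instruction.
trajectory = torch.stack([rtg, states, actions], dim=2).reshape(1, -1, d_model)
sequence = torch.cat([instruction_tokens, trajectory], dim=1)

# A causal decoder would be used in practice; an unmasked encoder keeps the sketch short.
layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)
hidden = backbone(sequence)                        # (1, 10 + 3*20, d_model)
print(hidden.shape)
```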

Exploring 3D-aware Lifespan Face Aging via Disentangled Shape-Texture Representations

no code implementations • 28 Dec 2023 • Qianrui Teng, Rui Wang, Xing Cui, Peipei Li, Zhaofeng He

Existing face aging methods often focus either on modeling texture aging or on using an entangled shape-texture representation to achieve face aging.

3D Face Reconstruction • Texture Synthesis

InstaStyle: Inversion Noise of a Stylized Image is Secretly a Style Adviser

no code implementations • 25 Nov 2023 • Xing Cui, Zekun Li, Peipei Li, Huaibo Huang, Zhaofeng He

We employ DDIM inversion to extract this noise from the reference image and leverage a diffusion model to generate new stylized images from the "style" noise.

Text-to-Image Generation
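
DDIM inversion runs the deterministic sampler backwards, mapping the reference image to a noise latent that can later be re-sampled. The sketch below is a generic, self-contained version with a dummy noise predictor standing in for a pretrained diffusion model; it is not InstaStyle's pipeline.

```python
import torch

# Generic DDIM inversion sketch (not InstaStyle's exact method): walk the
# deterministic DDIM update in reverse so the reference image yields a noise
# latent that can be re-sampled with a new prompt.
def ddim_invert(x0, eps_model, alphas_cumprod, num_steps=50):
    x = x0
    timesteps = torch.linspace(0, len(alphas_cumprod) - 1, num_steps).long()
    for t, t_next in zip(timesteps[:-1], timesteps[1:]):
        a_t, a_next = alphas_cumprod[t], alphas_cumprod[t_next]
        eps = eps_model(x, t)                              # predicted noise at step t
        x0_pred = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
    return x                                               # "style" noise latent

# Dummy noise predictor standing in for a real pretrained diffusion UNet.
eps_model = lambda x, t: torch.zeros_like(x)
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)
latent = ddim_invert(torch.randn(1, 4, 64, 64), eps_model, alphas_cumprod)
```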

JARVIS-1: Open-World Multi-task Agents with Memory-Augmented Multimodal Language Models

no code implementations • 10 Nov 2023 • ZiHao Wang, Shaofei Cai, Anji Liu, Yonggang Jin, Jinbing Hou, Bowei Zhang, Haowei Lin, Zhaofeng He, Zilong Zheng, Yaodong Yang, Xiaojian Ma, Yitao Liang

Achieving human-like planning and control with multimodal observations in an open world is a key milestone for more functional generalist agents.

Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning

1 code implementation • 18 Oct 2023 • Hao Zhao, Jie Fu, Zhaofeng He

Parameter-efficient fine-tuning (PEFT) has shown its effectiveness in adapting pre-trained language models to downstream tasks while updating only a small number of parameters.

Multi-Task Learning
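
The general mechanism, a hypernetwork that generates the weights of a small adapter from a task representation, can be sketched as follows. Layer sizes and the prototype input are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class HyperAdapter(nn.Module):
    """Illustrative sketch: a hypernetwork maps a task prototype to the weights
    of a small bottleneck adapter (not the paper's exact design)."""
    def __init__(self, d_model=768, bottleneck=16, proto_dim=64):
        super().__init__()
        n_params = 2 * d_model * bottleneck          # down- and up-projection weights
        self.hyper = nn.Linear(proto_dim, n_params)
        self.d_model, self.bottleneck = d_model, bottleneck

    def forward(self, hidden, prototype):
        w = self.hyper(prototype)                    # generate adapter parameters
        w_down, w_up = w.split(self.d_model * self.bottleneck)
        w_down = w_down.view(self.bottleneck, self.d_model)
        w_up = w_up.view(self.d_model, self.bottleneck)
        h = torch.relu(hidden @ w_down.T)            # down-project
        return hidden + h @ w_up.T                   # up-project + residual

adapter = HyperAdapter()
out = adapter(torch.randn(4, 768), torch.randn(64))
print(out.shape)                                     # torch.Size([4, 768])
```

Only the hypernetwork is trained, so the number of updated parameters stays small while different prototypes yield different adapters, which is the sample-efficiency angle the abstract refers to.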

Learning-to-Rank Meets Language: Boosting Language-Driven Ordering Alignment for Ordinal Classification

2 code implementations • NeurIPS 2023 • Rui Wang, Peipei Li, Huaibo Huang, Chunshui Cao, Ran He, Zhaofeng He

Consequently, we propose a cross-modal ordinal pairwise loss to refine the CLIP feature space, where texts and images maintain both semantic alignment and ordering alignment.

Age Estimation • Classification • +2
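
One plausible reading of such a loss is a margin constraint: an image's similarity to the text of its own rank should exceed its similarity to texts of more distant ranks. The sketch below is a simplified illustration of that idea, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Simplified illustration of an ordering-aware pairwise loss in a CLIP-like joint
# space: an image should be more similar to the text of its true rank than to
# texts of other ranks, with the required margin growing with rank distance.
def ordinal_pairwise_loss(img_emb, rank_text_emb, labels, margin=0.1):
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(rank_text_emb, dim=-1)
    sims = img @ txt.T                               # (batch, num_ranks)
    loss = 0.0
    for i, y in enumerate(labels):
        dist = (torch.arange(sims.size(1)) - y).abs().float()
        # similarity to the true rank must beat every other rank by a margin
        # proportional to how far that rank is from the true one
        violation = F.relu(sims[i] - sims[i, y] + margin * dist)
        loss = loss + violation.sum()
    return loss / len(labels)

loss = ordinal_pairwise_loss(torch.randn(4, 512), torch.randn(10, 512),
                             labels=torch.tensor([2, 5, 7, 0]))
```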

Deep Reinforcement Learning with Task-Adaptive Retrieval via Hypernetwork

1 code implementation • 19 Jun 2023 • Yonggang Jin, Chenxu Wang, Tianyu Zheng, Liuyu Xiang, Yaodong Yang, Junge Zhang, Jie Fu, Zhaofeng He

Deep reinforcement learning algorithms are usually impeded by sampling inefficiency, relying heavily on repeated interactions with the environment to acquire accurate decision-making capabilities.

Decision Making • Hippocampus • +2

Towards Spatio-temporal Sea Surface Temperature Forecasting via Static and Dynamic Learnable Personalized Graph Convolution Network

no code implementations • 12 Apr 2023 • Xiaohan Li, Gaowei Zhang, Kai Huang, Zhaofeng He

Sea surface temperature (SST) is uniquely important to the Earth's atmosphere since its dynamics are a major force in shaping local and global climate and profoundly affect our ecosystems.

Graph Learning
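
A minimal way to combine a learnable static adjacency with a dynamic, input-dependent one in a graph convolution looks like the sketch below; the node count, feature sizes, and the 50/50 mixing weight are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StaticDynamicGraphConv(nn.Module):
    """Illustrative sketch: mix a learnable static adjacency with a dynamic
    adjacency computed from the current node features."""
    def __init__(self, num_nodes=50, in_dim=8, out_dim=16):
        super().__init__()
        self.static_adj = nn.Parameter(torch.randn(num_nodes, num_nodes) * 0.01)
        self.key = nn.Linear(in_dim, 16)
        self.query = nn.Linear(in_dim, 16)
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):                                  # x: (batch, nodes, in_dim)
        dynamic = torch.softmax(self.query(x) @ self.key(x).transpose(1, 2), dim=-1)
        static = torch.softmax(self.static_adj, dim=-1)    # shared across the batch
        adj = 0.5 * static + 0.5 * dynamic                 # blend the two graphs
        return F.relu(adj @ self.proj(x))                  # aggregate neighbours

layer = StaticDynamicGraphConv()
out = layer(torch.randn(2, 50, 8))
print(out.shape)                                           # torch.Size([2, 50, 16])
```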

Pluralistic Aging Diffusion Autoencoder

no code implementations • ICCV 2023 • Peipei Li, Rui Wang, Huaibo Huang, Ran He, Zhaofeng He

Face aging is an ill-posed problem because multiple plausible aging patterns may correspond to a given input.

Denoising

ChatEdit: Towards Multi-turn Interactive Facial Image Editing via Dialogue

no code implementations • 20 Mar 2023 • Xing Cui, Zekun Li, Peipei Li, Yibo Hu, Hailin Shi, Zhaofeng He

This paper explores interactive facial image editing via dialogue and introduces the ChatEdit benchmark dataset for evaluating image editing and conversation abilities in this context.

Attribute • Facial Editing • +1

Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective

1 code implementation • 19 Feb 2023 • Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu

Adversarial machine learning (AML) studies the adversarial phenomenon of machine learning, in which models may make predictions that are inconsistent with, or unexpected by, humans.

Backdoor Attack
