Search Results for author: Zheyuan Liu

Found 30 papers, 20 papers with code

UPME: An Unsupervised Peer Review Framework for Multimodal Large Language Model Evaluation

no code implementations19 Mar 2025 Qihui Zhang, Munan Ning, Zheyuan Liu, Yanbo Wang, Jiayi Ye, Yue Huang, Shuo Yang, Xiao Chen, Yibing Song, Li Yuan

Multimodal Large Language Models (MLLMs) have emerged to tackle the challenges of Visual Question Answering (VQA), sparking a new research focus on conducting objective evaluations of these models.

Language Model Evaluation, Language Modeling, +5

Superficial Self-Improved Reasoners Benefit from Model Merging

no code implementations3 Mar 2025 Xiangchi Yuan, Chunhui Zhang, Zheyuan Liu, Dachuan Shi, Soroush Vosoughi, Wenke Lee

As scaled language models (LMs) approach human-level reasoning capabilities, self-improvement emerges as a solution for synthesizing high-quality data corpora.
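
A minimal, hedged sketch of what the model merging in the title commonly refers to: linear interpolation of the weights of two models with identical architectures. The averaging scheme below is a generic illustration, not necessarily the merging procedure used in the paper.

```python
# Generic weight-space model merging via parameter averaging (illustrative only).
import torch.nn as nn

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Linearly interpolate two state dicts with identical keys and shapes."""
    return {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k] for k in sd_a}

# Toy usage with two small MLPs of the same architecture.
make_mlp = lambda: nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model_a, model_b, merged = make_mlp(), make_mlp(), make_mlp()
merged.load_state_dict(merge_state_dicts(model_a.state_dict(), model_b.state_dict()))
```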

Memorization

Modality-Aware Neuron Pruning for Unlearning in Multimodal Large Language Models

1 code implementation21 Feb 2025 Zheyuan Liu, Guangyao Dou, Xiangchi Yuan, Chunhui Zhang, Zhaoxuan Tan, Meng Jiang

While some prior works have explored this issue in the context of LLMs, it presents a unique challenge for MLLMs due to the entangled nature of knowledge across modalities, making comprehensive unlearning more difficult.

Can Large Language Models Understand Preferences in Personalized Recommendation?

1 code implementation23 Jan 2025 Zhaoxuan Tan, Zinan Zeng, Qingkai Zeng, Zhenyu Wu, Zheyuan Liu, Fengran Mo, Meng Jiang

To address this, we introduce PerRecBench, which disassociates the evaluation from these two factors and assesses how well recommendation techniques capture personal preferences in a grouped ranking manner.
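
As a hedged illustration of the grouped ranking idea (not the actual PerRecBench protocol), the sketch below scores a recommender by the rank correlation between its predicted ordering and the ground-truth preference ordering within each user group, then averages across groups. The function and data names are assumptions.

```python
# Illustrative grouped-ranking evaluation: average per-group Kendall's tau.
from scipy.stats import kendalltau

def grouped_ranking_score(groups):
    """groups: list of (predicted_scores, true_ratings) pairs, one per user group."""
    taus = []
    for predicted, truth in groups:
        tau, _ = kendalltau(predicted, truth)   # rank agreement within this group
        taus.append(tau)
    return sum(taus) / len(taus)

# Toy example: two groups of three items each.
groups = [
    ([0.9, 0.2, 0.5], [5, 1, 3]),   # correctly ordered group -> tau = 1.0
    ([0.1, 0.8, 0.3], [4, 2, 5]),   # mostly mis-ordered group -> tau < 0
]
print(grouped_ranking_score(groups))
```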

regression

CLIPErase: Efficient Unlearning of Visual-Textual Associations in CLIP

no code implementations30 Oct 2024 Tianyu Yang, Lisen Dai, Zheyuan Liu, Xiangqi Wang, Meng Jiang, Yapeng Tian, Xiangliang Zhang

Machine unlearning (MU) has gained significant attention as a means to remove specific data from trained models without requiring a full retraining process.

Image Classification, Machine Unlearning

Protecting Privacy in Multimodal Large Language Models with MLLMU-Bench

1 code implementation29 Oct 2024 Zheyuan Liu, Guangyao Dou, Mengzhao Jia, Zhaoxuan Tan, Qingkai Zeng, Yongle Yuan, Meng Jiang

Generative models such as Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) trained on massive web corpora can memorize and disclose individuals' confidential and private data, raising legal and ethical concerns.

Language Modeling, Language Modelling, +3

OpenKD: Opening Prompt Diversity for Zero- and Few-shot Keypoint Detection

1 code implementation30 Sep 2024 Changsheng Lu, Zheyuan Liu, Piotr Koniusz

Further, to infer the keypoint location of unseen texts, we add the auxiliary keypoints and texts interpolated from visual and textual domains into training, which improves the spatial reasoning of our model and significantly enhances zero-shot novel keypoint detection.
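
A minimal sketch of the interpolation idea described above, assuming (hypothetically) that a keypoint is an (x, y) location paired with a text embedding: an auxiliary training sample is formed by mixing two existing samples with the same coefficient. This is an illustration, not the OpenKD implementation.

```python
# Illustrative interpolation of auxiliary keypoints and text embeddings.
import numpy as np

def interpolate_auxiliary(kp_a, kp_b, txt_a, txt_b, lam=0.5):
    """Mix two (x, y) keypoints and their text embeddings into one auxiliary pair."""
    aux_kp = lam * kp_a + (1.0 - lam) * kp_b      # interpolated keypoint location
    aux_txt = lam * txt_a + (1.0 - lam) * txt_b   # interpolated text embedding
    return aux_kp, aux_txt

kp_a, kp_b = np.array([32.0, 48.0]), np.array([80.0, 40.0])    # pixel coordinates
txt_a, txt_b = np.random.randn(512), np.random.randn(512)      # stand-in text features
aux_kp, aux_txt = interpolate_auxiliary(kp_a, kp_b, txt_a, txt_b, lam=0.3)
```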

Diversity, Keypoint Detection, +2

Machine Unlearning in Generative AI: A Survey

1 code implementation30 Jul 2024 Zheyuan Liu, Guangyao Dou, Zhaoxuan Tan, Yijun Tian, Meng Jiang

We offer a comprehensive survey of machine unlearning (MU) in Generative AI, covering a new problem formulation, evaluation methods, and a structured discussion of the advantages and limitations of different kinds of MU techniques.

Machine Unlearning, Survey

DiffStega: Towards Universal Training-Free Coverless Image Steganography with Diffusion Models

1 code implementation15 Jul 2024 Yiwei Yang, Zheyuan Liu, Jun Jia, Zhongpai Gao, Yunhao Li, Wei Sun, Xiaohong Liu, Guangtao Zhai

Traditional image steganography focuses on concealing one image within another, aiming to avoid steganalysis by unauthorized entities.

Diversity, Image Steganography, +1

Avoiding Copyright Infringement via Large Language Model Unlearning

1 code implementation16 Jun 2024 Guangyao Dou, Zheyuan Liu, Qing Lyu, Kaize Ding, Eric Wong

In real-world scenarios, model owners need to continuously address copyright infringement as new requests for content removal emerge at different time points.

General Knowledge, Language Modeling, +3

Personalized Pieces: Efficient Personalized Large Language Models through Collaborative Efforts

1 code implementation15 Jun 2024 Zhaoxuan Tan, Zheyuan Liu, Meng Jiang

Personalized large language models (LLMs) aim to tailor interactions, content, and recommendations to individual user preferences.

parameter-efficient fine-tuning

Graph Learning for Parameter Prediction of Quantum Approximate Optimization Algorithm

no code implementations5 Mar 2024 Zhiding Liang, Gang Liu, Zheyuan Liu, Jinglei Cheng, Tianyi Hao, Kecheng Liu, Hang Ren, Zhixin Song, Ji Liu, Fanny Ye, Yiyu Shi

In recent years, quantum computing has emerged as a transformative force in the field of combinatorial optimization, offering novel approaches to tackling complex problems that have long challenged classical computational methods.

Combinatorial Optimization, Graph Learning, +1

Towards Safer Large Language Models through Machine Unlearning

1 code implementation15 Feb 2024 Zheyuan Liu, Guangyao Dou, Zhaoxuan Tan, Yijun Tian, Meng Jiang

To address this gap, we introduce Selective Knowledge negation Unlearning (SKU), a novel unlearning framework for LLMs, designed to eliminate harmful knowledge while preserving utility on normal prompts.
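
As a generic, hedged sketch of the forget/retain trade-off mentioned above (an illustration of the common two-term unlearning objective, not the SKU method itself), the snippet below raises the loss on a harmful "forget" batch while keeping the ordinary loss low on a normal "retain" batch; a toy classifier stands in for an LLM.

```python
# Generic unlearning objective: retain loss minus a weighted forget loss.
import torch
import torch.nn as nn

def unlearning_loss(model, forget_batch, retain_batch, beta=1.0):
    """Minimizing this descends on retain data and ascends on forget data."""
    ce = nn.CrossEntropyLoss()
    loss_forget = ce(model(forget_batch["inputs"]), forget_batch["labels"])
    loss_retain = ce(model(retain_batch["inputs"]), retain_batch["labels"])
    return loss_retain - beta * loss_forget

# Toy usage with a stand-in linear classifier instead of a full LLM.
model = nn.Linear(16, 4)
forget_batch = {"inputs": torch.randn(8, 16), "labels": torch.randint(0, 4, (8,))}
retain_batch = {"inputs": torch.randn(8, 16), "labels": torch.randint(0, 4, (8,))}
unlearning_loss(model, forget_batch, retain_batch).backward()
```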

Machine Unlearning, Negation

Can we Soft Prompt LLMs for Graph Learning Tasks?

no code implementations15 Feb 2024 Zheyuan Liu, Xiaoxin He, Yijun Tian, Nitesh V. Chawla

Graphs play an important role in representing complex relationships in real-world applications such as social networks, biological data, and citation networks.

Graph Learning, Graph Neural Network, +2

UGMAE: A Unified Framework for Graph Masked Autoencoders

no code implementations12 Feb 2024 Yijun Tian, Chuxu Zhang, Ziyi Kou, Zheyuan Liu, Xiangliang Zhang, Nitesh V. Chawla

In light of this, we propose UGMAE, a unified framework for graph masked autoencoders to address these issues from the perspectives of adaptivity, integrity, complementarity, and consistency.

Self-Supervised Learning

Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning

2 code implementations6 Feb 2024 Zhaoxuan Tan, Qingkai Zeng, Yijun Tian, Zheyuan Liu, Bing Yin, Meng Jiang

OPPU integrates parametric user knowledge in the personal PEFT parameters with non-parametric knowledge from retrieval and profiles, adapting LLMs to user behavior shifts.
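
A hedged sketch of the non-parametric, retrieval side mentioned above: pick the entries of a user's history most similar to the current query and fold them into the prompt. The embeddings, prompt format, and helper names are illustrative assumptions, not the OPPU implementation (the parametric, per-user PEFT side is not shown).

```python
# Illustrative retrieval of user history for a personalized prompt.
import numpy as np

def retrieve_history(query_emb, history_embs, history_texts, k=2):
    """Return the k history entries whose embeddings are most similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    h = history_embs / np.linalg.norm(history_embs, axis=1, keepdims=True)
    top = np.argsort(-(h @ q))[:k]                  # cosine similarity ranking
    return [history_texts[i] for i in top]

history_texts = ["liked sci-fi novels", "disliked long reviews", "prefers concise summaries"]
history_embs = np.random.randn(3, 128)              # stand-in history embeddings
profile = retrieve_history(np.random.randn(128), history_embs, history_texts)
prompt = "User profile: " + "; ".join(profile) + "\nTask: recommend a book."
```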

parameter-efficient fine-tuning, Retrieval

Breaking the Trilemma of Privacy, Utility, Efficiency via Controllable Machine Unlearning

1 code implementation28 Oct 2023 Zheyuan Liu, Guangyao Dou, Yijun Tian, Chunhui Zhang, Eli Chien, Ziwei Zhu

Exploring the full spectrum of trade-offs between privacy, model utility, and runtime efficiency is critical for practical unlearning scenarios.

Machine Unlearning

A Generalized Physical-knowledge-guided Dynamic Model for Underwater Image Enhancement

1 code implementation10 Aug 2023 Pan Mu, Hanning Xu, Zheyuan Liu, Zheng Wang, Sixian Chan, Cong Bai

To tackle these challenges, we design a Generalized Underwater image enhancement method via a Physical-knowledge-guided Dynamic Model (short for GUPDM), consisting of three parts: Atmosphere-based Dynamic Structure (ADS), Transmission-guided Dynamic Structure (TDS), and Prior-based Multi-scale Structure (PMS).

Image Enhancement

Histogram-guided Video Colorization Structure with Spatial-Temporal Connection

no code implementations9 Aug 2023 Zheyuan Liu, Pan Mu, Hanning Xu, Cong Bai

Video colorization, which aims to obtain colorful and plausible results from grayscale frames, has attracted considerable interest recently.

Colorization

All-pairs Consistency Learning for Weakly Supervised Semantic Segmentation

1 code implementation8 Aug 2023 Weixuan Sun, Yanhao Zhang, Zhen Qin, Zheyuan Liu, Lin Cheng, Fanyi Wang, Yiran Zhong, Nick Barnes

Given a pair of augmented views, our approach regularizes the activation intensities between the two views, while also ensuring that the affinity across regions within each view remains consistent.
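
An illustrative (and deliberately simplified) sketch of the two regularizers described above: match activation intensities between two augmented views, and keep each view's region-to-region affinity consistent with the other's. Shapes and loss choices are assumptions, not the paper's exact formulation.

```python
# Illustrative view-consistency and affinity-consistency regularizers.
import torch
import torch.nn.functional as F

def consistency_losses(act_view1, act_view2):
    """act_view*: (B, C, H, W) class activation maps from two augmented views."""
    act_loss = F.mse_loss(act_view1, act_view2)        # activation consistency across views

    def affinity(act):
        flat = F.normalize(act.flatten(2), dim=1)      # (B, C, H*W), unit-norm per region
        return torch.bmm(flat.transpose(1, 2), flat)   # (B, H*W, H*W) region affinity

    aff_loss = F.mse_loss(affinity(act_view1), affinity(act_view2))
    return act_loss, aff_loss

a1, a2 = torch.rand(2, 20, 16, 16), torch.rand(2, 20, 16, 16)
act_loss, aff_loss = consistency_losses(a1, a2)
```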

All, Object Localization, +2

Candidate Set Re-ranking for Composed Image Retrieval with Dual Multi-modal Encoder

2 code implementations25 May 2023 Zheyuan Liu, Weixuan Sun, Damien Teney, Stephen Gould

An alternative approach is to allow interactions between the query and every possible candidate, i.e., reference-text-candidate triplets, and pick the best from the entire set.
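
A hedged sketch of the triplet re-ranking idea described above: a cheap first stage shortlists candidates, then a joint scorer looks at every reference-text-candidate triplet in the shortlist and picks the best. The scorer here is a placeholder; the paper's dual multi-modal encoder is not reproduced.

```python
# Illustrative two-stage retrieval: shortlist, then re-rank triplets.
import torch
import torch.nn.functional as F

def rerank(reference_feat, text_feat, candidate_feats, scorer, shortlist_size=10):
    """Re-rank the top candidates by jointly scoring each reference-text-candidate triplet."""
    query = F.normalize(reference_feat + text_feat, dim=-1)          # naive fused query
    cands = F.normalize(candidate_feats, dim=-1)
    shortlist = torch.topk(cands @ query, k=shortlist_size).indices  # stage 1: cheap shortlist
    scores = torch.stack([scorer(reference_feat, text_feat, candidate_feats[i])
                          for i in shortlist])                       # stage 2: joint scoring
    return shortlist[scores.argmax()]

dummy_scorer = lambda r, t, c: (r * c).sum() + (t * c).sum()         # placeholder joint scorer
best = rerank(torch.randn(256), torch.randn(256), torch.randn(100, 256), dummy_scorer)
```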

Composed Image Retrieval (CoIR), Reranking, +3

Bi-directional Training for Composed Image Retrieval via Text Prompt Learning

1 code implementation29 Mar 2023 Zheyuan Liu, Weixuan Sun, Yicong Hong, Damien Teney, Stephen Gould

Composed image retrieval searches for a target image based on a multi-modal user query comprised of a reference image and modification text describing the desired changes.

Composed Image Retrieval (CoIR), Prompt Learning, +1

Learning Audio-Visual Source Localization via False Negative Aware Contrastive Learning

1 code implementation CVPR 2023 Weixuan Sun, Jiayi Zhang, Jianyuan Wang, Zheyuan Liu, Yiran Zhong, Tianpeng Feng, Yandong Guo, Yanhao Zhang, Nick Barnes

Based on this observation, we propose a new learning strategy named False Negative Aware Contrastive (FNAC) to mitigate the problem of misleading the training with such false negative samples.
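
A simplified, hypothetical sketch of one way to act on the observation above: down-weight negatives that are already very similar to the anchor, since they may be false negatives. The thresholding scheme is an assumption for illustration and is not the exact FNAC losses.

```python
# Contrastive loss that suppresses suspected false negatives (illustrative only).
import torch
import torch.nn.functional as F

def fn_aware_contrastive(audio_emb, visual_emb, temperature=0.07, fn_threshold=0.8):
    """audio_emb, visual_emb: (B, D) embeddings; row i of each forms the positive pair."""
    a, v = F.normalize(audio_emb, dim=-1), F.normalize(visual_emb, dim=-1)
    sim = a @ v.t()                                                  # (B, B) cosine similarities
    with torch.no_grad():
        off_diag = ~torch.eye(len(a), dtype=torch.bool)
        weights = torch.where(off_diag & (sim > fn_threshold),
                              torch.zeros_like(sim), torch.ones_like(sim))
    logits = sim / temperature + torch.log(weights.clamp_min(1e-12)) # suppress suspected false negatives
    return F.cross_entropy(logits, torch.arange(len(a)))

loss = fn_aware_contrastive(torch.randn(8, 128), torch.randn(8, 128))
```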

Contrastive Learning

Image Retrieval on Real-life Images with Pre-trained Vision-and-Language Models

3 code implementations ICCV 2021 Zheyuan Liu, Cristian Rodriguez-Opazo, Damien Teney, Stephen Gould

We demonstrate that with a relatively simple architecture, CIRPLANT outperforms existing methods on open-domain images, while matching state-of-the-art accuracy on the existing narrow datasets, such as fashion.

Composed Image Retrieval (CoIR), Retrieval, +1
