Search Results for author: Haozhao Wang

Found 41 papers, 16 papers with code

Resource-Constrained Federated Continual Learning: What Does Matter?

no code implementations15 Jan 2025 Yichen Li, Yuying Wang, Jiahua Dong, Haozhao Wang, Yining Qi, Rui Zhang, Ruixuan Li

We revisit this problem with a large-scale benchmark and analyze the performance of state-of-the-art FCL approaches under different resource-constrained settings.

Continual Learning Incremental Learning +1

Multilevel Semantic-Aware Model for AI-Generated Video Quality Assessment

no code implementations6 Jan 2025 Jiaze Li, Haoran Xu, Shiding Zhu, Junwei He, Haozhao Wang

We propose a Prompt Semantic Supervision Module using the text encoder of CLIP to ensure semantic consistency between videos and conditional prompts.

Video Quality Assessment Visual Question Answering (VQA)

FedGIG: Graph Inversion from Gradient in Federated Learning

no code implementations24 Dec 2024 Tianzhe Xiao, Yichen Li, Yining Qi, Haozhao Wang, Ruixuan Li

Recent studies have shown that Federated learning (FL) is vulnerable to Gradient Inversion Attacks (GIA), which can recover private training data from shared gradients.

Federated Learning Graph Learning
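Gradient inversion is easiest to see on a single linear layer, where the shared gradients leak the input exactly. The sketch below illustrates this classic leakage; it is a generic toy showing why GIA is possible, not FedGIG's graph-specific method, and all names in it are made up.

```python
import numpy as np

def recover_input(grad_W, grad_b):
    """Classic single-sample leakage for a linear layer y = Wx + b:
    dL/dW = delta * x^T and dL/db = delta, so dividing the weight
    gradient by the bias gradient recovers the input x exactly."""
    return grad_W / grad_b[:, None]

# Simulate what a client would share after training on one sample.
rng = np.random.default_rng(1)
x = rng.standard_normal(3)                      # private input
W, b = rng.standard_normal((2, 3)), rng.standard_normal(2)
target = rng.standard_normal(2)
delta = 2 * (W @ x + b - target)                # gradient w.r.t. the output
grad_W, grad_b = np.outer(delta, x), delta      # the "shared gradients"
x_rec = recover_input(grad_W, grad_b)[0]        # attacker's reconstruction
```

Every row of the recovered matrix equals `x`, which is why defenses perturb, clip, or aggregate gradients before sharing them.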

Better Knowledge Enhancement for Privacy-Preserving Cross-Project Defect Prediction

no code implementations23 Dec 2024 Yuying Wang, Yichen Li, Haozhao Wang, Lei Zhao, Xiaofang Zhang

In this paper, we study the privacy-preserving cross-project defect prediction with data heterogeneity under the federated learning framework.

Federated Learning Knowledge Distillation +1

Deploying Foundation Model Powered Agent Services: A Survey

no code implementations18 Dec 2024 Wenchao Xu, Jinyu Chen, Peirong Zheng, Xiaoquan Yi, Tianyi Tian, Wenhui Zhu, Quan Wan, Haozhao Wang, Yunfeng Fan, Qinliang Su, Xuemin Shen

Foundation model (FM) powered agent services are regarded as a promising solution to develop intelligent and personalized applications for advancing toward Artificial General Intelligence (AGI).

Model Compression +2

Rehearsal-Free Continual Federated Learning with Synergistic Regularization

no code implementations18 Dec 2024 Yichen Li, Yuying Wang, Tianzhe Xiao, Haozhao Wang, Yining Qi, Ruixuan Li

Specifically, we first apply traditional regularization techniques to CFL and observe that existing techniques, especially synaptic intelligence, can achieve promising results under homogeneous data distributions but fail when the data is heterogeneous.

Federated Learning Novel Concepts
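The synaptic intelligence the abstract refers to regularizes training with a per-parameter, importance-weighted quadratic penalty toward the previous task's parameters. A minimal sketch, with made-up importance values and the importance-accumulation details of the original method simplified:

```python
import numpy as np

def si_penalty(theta, theta_star, omega, c=1.0):
    """Surrogate loss of Synaptic Intelligence (Zenke et al., 2017):
    a quadratic penalty pulling each parameter toward its value after
    the previous task, weighted by a per-parameter importance omega."""
    return c * np.sum(omega * (theta - theta_star) ** 2)

def update_omega(path_integral, theta, theta_star, xi=1e-3):
    """Importance estimate: each parameter's accumulated contribution to
    the loss decrease, normalized by its squared displacement over the
    task (xi avoids division by zero for parameters that did not move)."""
    return path_integral / ((theta - theta_star) ** 2 + xi)

theta_star = np.array([1.0, -0.5])    # parameters after the old task
omega = np.array([3.0, 0.1])          # first parameter deemed important
# Moving the important parameter is penalized far more than the other.
penalty_important = si_penalty(np.array([1.2, -0.5]), theta_star, omega)
penalty_unimportant = si_penalty(np.array([1.0, -0.3]), theta_star, omega)
```

Under heterogeneous data, clients accumulate conflicting importance estimates, which is consistent with the failure mode the abstract reports.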

Unleashing the Power of Continual Learning on Non-Centralized Devices: A Survey

no code implementations18 Dec 2024 Yichen Li, Haozhao Wang, Wenchao Xu, Tianzhe Xiao, Hong Liu, Minzhu Tu, Yuying Wang, Xin Yang, Rui Zhang, Shui Yu, Song Guo, Ruixuan Li

To achieve high reliability and scalability in deploying this paradigm in distributed systems, it is essential to conquer challenges stemming from both spatial and temporal dimensions, manifesting as distribution shifts, catastrophic forgetting, heterogeneity, and privacy issues.

Continual Learning

Is the MMI Criterion Necessary for Interpretability? Degenerating Non-causal Features to Plain Noise for Self-Rationalization

1 code implementation8 Oct 2024 Wei Liu, Zhiying Deng, Zhongyu Niu, Jun Wang, Haozhao Wang, Yuankai Zhang, Ruixuan Li

In the optimization objectives of these methods, spurious features are still distinguished from plain noise, which hinders the discovery of causal rationales.

Mixed-Precision Embeddings for Large-Scale Recommendation Models

1 code implementation30 Sep 2024 Shiwei Li, Zhuoqi Hu, Xing Tang, Haozhao Wang, Shijie Xu, Weihong Luo, Yuhua Li, Xiuqiang He, Ruixuan Li

Specifically, to reduce the size of the search space, we first group features by frequency and then search precision for each feature group.

Quantization Recommendation Systems
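The frequency-grouping step described above can be sketched as follows. The equal-sized grouping and the fixed bit assignment are simplifying assumptions: the paper searches the precision of each group rather than fixing it.

```python
import numpy as np

def assign_precisions(freqs, bit_choices=(2, 4, 8)):
    """Sort features by frequency and split them into equal-sized groups,
    giving more frequent groups higher precision."""
    order = np.argsort(freqs)[::-1]                 # most frequent first
    groups = np.array_split(order, len(bit_choices))
    bits = np.empty(len(freqs), dtype=int)
    for group, b in zip(groups, sorted(bit_choices, reverse=True)):
        bits[group] = b
    return bits

def quantize(emb, n_bits):
    """Uniform symmetric quantization of an embedding vector."""
    scale = np.max(np.abs(emb)) / (2 ** (n_bits - 1) - 1) + 1e-12
    return np.round(emb / scale) * scale

freqs = np.array([1000, 5, 300, 2, 50])     # per-feature access counts
bits = assign_precisions(freqs)             # hot features get more bits
emb = np.array([0.3, -0.7, 0.1])
err_8 = np.abs(quantize(emb, 8) - emb).max()   # high precision, low error
err_2 = np.abs(quantize(emb, 2) - emb).max()   # low precision, high error
```

The payoff is that rare features, which dominate embedding-table size, tolerate the coarse precision, while the few hot features keep most of their accuracy.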

FedBAT: Communication-Efficient Federated Learning via Learnable Binarization

1 code implementation6 Aug 2024 Shiwei Li, Wenchao Xu, Haozhao Wang, Xing Tang, Yining Qi, Shijie Xu, Weihong Luo, Yuhua Li, Xiuqiang He, Ruixuan Li

To this end, we propose Federated Binarization-Aware Training (FedBAT), a novel framework that directly learns binary model updates during the local training process, thus inherently reducing the approximation errors.

Binarization Federated Learning
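The core idea, learning updates that are already binarized during local training rather than binarizing after the fact, can be sketched with a straight-through-style toy. The objective, learning rate, and scale rule below are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def local_train_binary(w_global, grad_fn, lr=0.1, steps=20):
    """Toy binarization-aware local training: the forward pass always uses
    the binarized update alpha * sign(delta), so binarization error is
    seen (and absorbed) during training; the latent delta is updated as
    if sign() were the identity (straight-through estimator)."""
    delta = np.zeros_like(w_global)
    for _ in range(steps):
        alpha = np.mean(np.abs(delta)) + 1e-8       # illustrative scale rule
        w_eff = w_global + alpha * np.sign(delta)   # binarized forward pass
        delta -= lr * grad_fn(w_eff)                # straight-through update
    alpha = np.mean(np.abs(delta)) + 1e-8
    return np.sign(delta).astype(np.int8), alpha    # 1 bit/parameter + scale

# Toy objective: minimize ||w - target||^2 starting from w = 0.
target = np.array([1.0, -1.0, 0.5])
grad_fn = lambda w: 2.0 * (w - target)
sign_bits, scale = local_train_binary(np.zeros(3), grad_fn)
w_new = scale * sign_bits                           # server reconstruction
```

Because the loss is always evaluated on the binarized update, the client only ever has to communicate the sign bits and one scale per tensor.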

Masked Random Noise for Communication Efficient Federated Learning

1 code implementation6 Aug 2024 Shiwei Li, Yingyi Cheng, Haozhao Wang, Xing Tang, Shijie Xu, Weihong Luo, Yuhua Li, Dugang Liu, Xiuqiang He, Ruixuan Li

For this purpose, we propose Federated Masked Random Noise (FedMRN), a novel framework that enables clients to learn a 1-bit mask for each model parameter and apply masked random noise (i.e., the Hadamard product of random noise and masks) to represent model updates.

Federated Learning
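The 1-bit-mask idea can be sketched as follows. The greedy sign-matching "mask learning" is a stand-in for the actual training procedure, and the shared seed is an assumption about how client and server agree on the noise.

```python
import numpy as np

def fedmrn_compress(true_update, seed=0):
    """Represent a model update as masked random noise: generate fixed
    noise from a seed shared with the server and learn a 1-bit mask.
    Here 'learning' is a greedy stand-in: keep a noise entry only when
    it points in the same direction as the true update."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(true_update.shape)
    mask = (np.sign(noise) == np.sign(true_update)).astype(np.uint8)
    return mask                                  # 1 bit per parameter sent

def fedmrn_decompress(mask, shape, seed=0):
    """Server side: regenerate the same noise from the shared seed and
    apply the mask (the Hadamard product mentioned in the abstract)."""
    rng = np.random.default_rng(seed)
    return mask * rng.standard_normal(shape)

update = np.array([0.5, -1.2, 0.3, -0.7])
mask = fedmrn_compress(update)
recon = fedmrn_decompress(mask, update.shape)    # never opposes the update
```

Only the mask crosses the network; the noise itself is reproducible on both sides, which is what drives the communication savings.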

Self-Introspective Decoding: Alleviating Hallucinations for Large Vision-Language Models

1 code implementation4 Aug 2024 Fushuo Huo, Wenchao Xu, Zhong Zhang, Haozhao Wang, Zhicheng Chen, Peilin Zhao

While Large Vision-Language Models (LVLMs) have rapidly advanced in recent years, the prevalent issue known as the 'hallucination' problem has emerged as a significant bottleneck, hindering their real-world deployments.

Hallucination

Detached and Interactive Multimodal Learning

1 code implementation28 Jul 2024 Yunfeng Fan, Wenchao Xu, Haozhao Wang, Junhong Liu, Song Guo

Recently, Multimodal Learning (MML) has gained significant interest as it compensates for single-modality limitations through comprehensive complementary information within multimodal data.

Transfer Learning

Personalized Federated Domain-Incremental Learning based on Adaptive Knowledge Matching

no code implementations6 Jul 2024 Yichen Li, Wenchao Xu, Haozhao Wang, Ruixuan Li, Yining Qi, Jingcai Guo

Then, the client can choose to adopt a new initial model or a previous model with similar knowledge to train the new task and simultaneously migrate knowledge from previous tasks based on these correlations.

Incremental Learning

Dual Expert Distillation Network for Generalized Zero-Shot Learning

1 code implementation25 Apr 2024 Zhijie Rao, Jingcai Guo, Xiaocheng Lu, Jingming Liang, Jie Zhang, Haozhao Wang, Kang Wei, Xiaofeng Cao

Zero-shot learning has consistently yielded remarkable progress via modeling nuanced one-to-one visual-attribute correlation.

Attribute Generalized Zero-Shot Learning

Disentangle Estimation of Causal Effects from Cross-Silo Data

no code implementations4 Jan 2024 Yuxuan Liu, Haozhao Wang, Shuang Wang, Zhiming He, Wenchao Xu, Jialiang Zhu, Fan Yang

Estimating causal effects among different events is of great importance to critical fields such as drug development.

C2KD: Bridging the Modality Gap for Cross-Modal Knowledge Distillation

no code implementations CVPR 2024 Fushuo Huo, Wenchao Xu, Jingcai Guo, Haozhao Wang, Song Guo

We empirically reveal that the modality gap, i.e., modality imbalance and soft-label misalignment, causes the ineffectiveness of traditional KD in CMKD.

Knowledge Distillation Transfer Learning

Balanced Multi-modal Federated Learning via Cross-Modal Infiltration

no code implementations31 Dec 2023 Yunfeng Fan, Wenchao Xu, Haozhao Wang, Jiaqi Zhu, Song Guo

Federated learning (FL) underpins advancements in privacy-preserving distributed computing by collaboratively training neural networks without exposing clients' raw data.

Distributed Computing Federated Learning +2

Overcome Modal Bias in Multi-modal Federated Learning via Balanced Modality Selection

1 code implementation31 Dec 2023 Yunfeng Fan, Wenchao Xu, Haozhao Wang, Fushuo Huo, Jinyu Chen, Song Guo

On the other hand, we propose a modality selection scheme that selects subsets of local modalities with high diversity while simultaneously achieving global modal balance.

Diversity Federated Learning +1

Decoupling Representation and Knowledge for Few-Shot Intent Classification and Slot Filling

no code implementations21 Dec 2023 Jie Han, Yixiong Zou, Haozhao Wang, Jun Wang, Wei Liu, Yao Wu, Tao Zhang, Ruixuan Li

Therefore, current works first train a model on source domains with sufficient labeled data, and then transfer the model to target domains where labeled data is scarce.

Intent Classification +4

Enhancing the Rationale-Input Alignment for Self-explaining Rationalization

1 code implementation7 Dec 2023 Wei Liu, Haozhao Wang, Jun Wang, Zhiying Deng, Yuankai Zhang, Cheng Wang, Ruixuan Li

Rationalization empowers deep learning models with self-explaining capabilities through a cooperative game, where a generator selects a semantically consistent subset of the input as a rationale, and a subsequent predictor makes predictions based on the selected rationale.

Aligning Language Models with Human Preferences via a Bayesian Approach

1 code implementation NeurIPS 2023 Jiashuo Wang, Haozhao Wang, Shichao Sun, Wenjie Li

For this alignment, current popular methods leverage a reinforcement learning (RL) approach with a reward model trained on feedback from humans.

Contrastive Learning Reinforcement Learning (RL) +1

D-Separation for Causal Self-Explanation

1 code implementation NeurIPS 2023 Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Zhiying Deng, Yuankai Zhang, Yang Qiu

Instead of attempting to rectify the issues of the MMI criterion, we propose a novel criterion to uncover the causal rationale, termed the Minimum Conditional Dependence (MCD) criterion, which is grounded on our finding that the non-causal features and the target label are \emph{d-separated} by the causal rationale.

Decoupled Rationalization with Asymmetric Learning Rates: A Flexible Lipschitz Restraint

1 code implementation23 May 2023 Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Yang Qiu, Yuankai Zhang, Jie Han, Yixiong Zou

However, such a cooperative game may incur the degeneration problem, where the predictor overfits to the uninformative pieces generated by a not-yet-well-trained generator, which in turn leads the generator to converge to a sub-optimal model that tends to select senseless pieces.

MGR: Multi-generator Based Rationalization

1 code implementation8 May 2023 Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Xinyang Li, Yuankai Zhang, Yang Qiu

Rationalization employs a generator and a predictor to construct a self-explaining NLP model in which the generator selects a subset of human-intelligible pieces of the input text, which the predictor then uses to make predictions.

Non-Exemplar Online Class-incremental Continual Learning via Dual-prototype Self-augment and Refinement

no code implementations20 Mar 2023 Fushuo Huo, Wenchao Xu, Jingcai Guo, Haozhao Wang, Yunfeng Fan, Song Guo

In this paper, we propose a novel Dual-prototype Self-augment and Refinement method (DSR) for the NO-CL problem, which consists of two strategies: 1) Dual class prototypes: vanilla and high-dimensional prototypes are exploited to utilize the pre-trained information and obtain robust quasi-orthogonal representations rather than example buffers for both privacy preservation and memory reduction.

Continual Learning

DualMix: Unleashing the Potential of Data Augmentation for Online Class-Incremental Learning

no code implementations14 Mar 2023 Yunfeng Fan, Wenchao Xu, Haozhao Wang, Jiaqi Zhu, Junxiao Wang, Song Guo

Unfortunately, OCI learning can suffer from catastrophic forgetting (CF) as the decision boundaries for old classes can become inaccurate when perturbed by new ones.

Class Incremental Learning +2

DaFKD: Domain-Aware Federated Knowledge Distillation

no code implementations CVPR 2023 Haozhao Wang, Yichen Li, Wenchao Xu, Ruixuan Li, Yufeng Zhan, Zhigang Zeng

In this paper, we propose a new perspective that treats the local data in each client as a specific domain and design a novel domain knowledge aware federated distillation method, dubbed DaFKD, that can discern the importance of each model to the distillation sample, and thus is able to optimize the ensemble of soft predictions from diverse models.

Knowledge Distillation
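The ensemble step this abstract describes can be sketched as an importance-weighted average of client soft predictions. The importance scores are simply given here; DaFKD derives them per distillation sample from domain-aware models, which this toy does not reproduce.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def weighted_soft_labels(client_logits, importances):
    """Distillation target as an importance-weighted ensemble of client
    soft predictions, instead of a plain average."""
    w = np.asarray(importances, dtype=float)
    w = w / w.sum()                               # normalize the weights
    probs = softmax(np.asarray(client_logits))    # (clients, classes)
    return np.tensordot(w, probs, axes=1)         # (classes,)

client_logits = np.array([[2.0, 0.5, 0.1],        # client A's logits
                          [0.2, 1.5, 0.3]])       # client B's logits
soft = weighted_soft_labels(client_logits, [0.9, 0.1])
```

With uniform importances this reduces to the plain average ensemble; the weighting is what lets a client whose domain matches the sample dominate the distillation target.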

ProCC: Progressive Cross-primitive Compatibility for Open-World Compositional Zero-Shot Learning

no code implementations19 Nov 2022 Fushuo Huo, Wenchao Xu, Song Guo, Jingcai Guo, Haozhao Wang, Ziming Liu, Xiaocheng Lu

Open-World Compositional Zero-shot Learning (OW-CZSL) aims to recognize novel compositions of state and object primitives in images with no priors on the compositional space, which induces a tremendously large output space containing all possible state-object compositions.

Compositional Zero-Shot Learning Object

FedTune: A Deep Dive into Efficient Federated Fine-Tuning with Pre-trained Transformers

no code implementations15 Nov 2022 Jinyu Chen, Wenchao Xu, Song Guo, Junxiao Wang, Jie Zhang, Haozhao Wang

Federated Learning (FL) is an emerging paradigm that enables distributed users to collaboratively and iteratively train machine learning models without sharing their private data.

Federated Learning Language Modelling +1

PMR: Prototypical Modal Rebalance for Multimodal Learning

no code implementations CVPR 2023 Yunfeng Fan, Wenchao Xu, Haozhao Wang, Junxiao Wang, Song Guo

Multimodal learning (MML) aims to jointly exploit the common priors of different modalities to compensate for their inherent limitations.

FR: Folded Rationalization with a Unified Encoder

1 code implementation17 Sep 2022 Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Chao Yue, Yuankai Zhang

Conventional works generally employ a two-phase model in which a generator selects the most important pieces, followed by a predictor that makes predictions based on the selected pieces.

Sign Bit is Enough: A Learning Synchronization Framework for Multi-hop All-reduce with Ultimate Compression

no code implementations14 Apr 2022 Feijie Wu, Shiqi He, Song Guo, Zhihao Qu, Haozhao Wang, Weihua Zhuang, Jie Zhang

Traditional one-bit compressed stochastic gradient descent cannot be directly employed in multi-hop all-reduce, a widely adopted distributed training paradigm in network-intensive high-performance computing systems such as public clouds.
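The single-hop version of one-bit compression is easy to sketch: each worker sends only gradient signs and the aggregator takes an elementwise majority vote. The paper's contribution is making this style of compression compatible with multi-hop all-reduce, which this toy does not model.

```python
import numpy as np

def majority_vote_signs(worker_grads):
    """Each worker transmits only sign bits; aggregation is an
    elementwise majority vote over workers (signSGD-style)."""
    signs = np.sign(worker_grads)          # 1 bit per coordinate per worker
    return np.sign(signs.sum(axis=0))      # majority direction

grads = np.array([[0.9, -0.2,  0.4],
                  [0.5, -0.8, -0.1],
                  [0.7, -0.3,  0.2]])      # gradients from 3 workers
step_dir = majority_vote_signs(grads)      # descent direction per coordinate
```

Note the second worker disagrees on the last coordinate, but the vote still recovers the direction the majority of workers agree on.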

From Deterioration to Acceleration: A Calibration Approach to Rehabilitating Step Asynchronism in Federated Optimization

1 code implementation17 Dec 2021 Feijie Wu, Song Guo, Haozhao Wang, Zhihao Qu, Haobo Zhang, Jie Zhang, Ziming Liu

In the setting of federated optimization, where a global model is aggregated periodically, step asynchronism occurs when participants conduct model training by efficiently utilizing their computational resources.

Parameterized Knowledge Transfer for Personalized Federated Learning

1 code implementation NeurIPS 2021 Jie Zhang, Song Guo, Xiaosong Ma, Haozhao Wang, Wenchao Xu, Feijie Wu

To deal with such model constraints, we exploit the potentials of heterogeneous model settings and propose a novel training framework to employ personalized models for different clients.

Personalized Federated Learning Transfer Learning

Intermittent Pulling with Local Compensation for Communication-Efficient Federated Learning

no code implementations22 Jan 2020 Haozhao Wang, Zhihao Qu, Song Guo, Xin Gao, Ruixuan Li, Baoliu Ye

A major bottleneck in the performance of the distributed Stochastic Gradient Descent (SGD) algorithm for large-scale Federated Learning is the communication overhead of pushing local gradients and pulling the global model.

BIG-bench Machine Learning Federated Learning

Gradient Scheduling with Global Momentum for Non-IID Data Distributed Asynchronous Training

no code implementations21 Feb 2019 Chengjie Li, Ruixuan Li, Haozhao Wang, Yuhua Li, Pan Zhou, Song Guo, Keqin Li

Distributed asynchronous offline training has received widespread attention in recent years because of its high performance on large-scale data and complex models.

Scheduling
