Search Results for author: Haozhao Wang

Found 24 papers, 7 papers with code

Disentangle Estimation of Causal Effects from Cross-Silo Data

no code implementations 4 Jan 2024 Yuxuan Liu, Haozhao Wang, Shuang Wang, Zhiming He, Wenchao Xu, Jialiang Zhu, Fan Yang

Estimating causal effects among different events is of great importance to critical fields such as drug development.

Balanced Multi-modal Federated Learning via Cross-Modal Infiltration

no code implementations 31 Dec 2023 Yunfeng Fan, Wenchao Xu, Haozhao Wang, Jiaqi Zhu, Song Guo

Federated learning (FL) underpins advancements in privacy-preserving distributed computing by collaboratively training neural networks without exposing clients' raw data.

Distributed Computing Federated Learning +2
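
Since many of the entries below build on this setup, here is a minimal FedAvg-style sketch of the training loop the snippet alludes to. It is our own illustration, not any paper's method: a toy NumPy linear model stands in for the neural networks, and plain weight averaging stands in for richer aggregation rules.

```python
# Minimal FedAvg-style sketch: clients train locally on private data and share
# only model weights, never raw samples. Toy linear regression, NumPy only.
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    for _ in range(steps):                      # plain gradient descent on MSE
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w_global = np.zeros(3)

for _round in range(10):                        # one communication round each
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)        # server averages client weights
```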

Client-wise Modality Selection for Balanced Multi-modal Federated Learning

no code implementations 31 Dec 2023 Yunfeng Fan, Wenchao Xu, Haozhao Wang, Penghui Ruan, Song Guo

Selecting proper clients to participate in the iterative federated learning (FL) rounds is critical to effectively harness a broad range of distributed datasets.

Federated Learning Selection bias

Decoupling Representation and Knowledge for Few-Shot Intent Classification and Slot Filling

no code implementations 21 Dec 2023 Jie Han, Yixiong Zou, Haozhao Wang, Jun Wang, Wei Liu, Yao Wu, Tao Zhang, Ruixuan Li

Current works therefore first train a model on source domains with sufficient labeled data, and then transfer the model to target domains where labeled data is scarce.

Intent Classification +4

Enhancing the Rationale-Input Alignment for Self-explaining Rationalization

no code implementations 7 Dec 2023 Wei Liu, Haozhao Wang, Jun Wang, Zhiying Deng, Yuankai Zhang, Cheng Wang, Ruixuan Li

Rationalization empowers deep learning models with self-explaining capabilities through a cooperative game, where a generator selects a semantically consistent subset of the input as a rationale, and a subsequent predictor makes predictions based on the selected rationale.
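
This generator-predictor game recurs in several entries below (FR, MGR, DR, MCD). The following is our own minimal PyTorch sketch of the cooperative setup, with toy GRUs and a soft sigmoid mask standing in for whatever architectures and discrete selection schemes the papers actually use:

```python
# Illustrative rationalization sketch: the generator scores tokens and emits a
# (soft) rationale mask; the predictor sees only the masked input.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, emb_dim=64, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, seq, emb)
        h, _ = self.rnn(x)
        return torch.sigmoid(self.score(h).squeeze(-1))  # mask in [0, 1]

class Predictor(nn.Module):
    def __init__(self, emb_dim=64, hidden=64, n_classes=2):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x, mask):
        h, _ = self.rnn(x * mask.unsqueeze(-1))  # zero out non-rationale tokens
        return self.out(h[:, -1])

gen, pred = Generator(), Predictor()
x, y = torch.randn(8, 20, 64), torch.randint(0, 2, (8,))  # toy embedded batch
mask = gen(x)
loss = nn.functional.cross_entropy(pred(x, mask), y)
loss = loss + 0.1 * mask.mean()                  # sparsity pressure on the rationale
loss.backward()                                   # both players trained cooperatively
```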

Aligning Language Models with Human Preferences via a Bayesian Approach

1 code implementation NeurIPS 2023 Jiashuo Wang, Haozhao Wang, Shichao Sun, Wenjie Li

To achieve this alignment, currently popular methods leverage a reinforcement learning (RL) approach with a reward model trained on human feedback.

Contrastive Learning Reinforcement Learning (RL) +1
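
For context, here is a hedged sketch of the standard pairwise (Bradley-Terry) reward-model loss that common RLHF pipelines build on; the paper's Bayesian aggregation of disagreeing human preferences is not reproduced here:

```python
# Standard preference loss: push the reward of the chosen response above the
# rejected one. The linear reward head is a stand-in for a real model.
import torch
import torch.nn as nn

reward_head = nn.Linear(128, 1)
h_chosen = torch.randn(8, 128)                   # features of preferred responses
h_rejected = torch.randn(8, 128)                 # features of rejected responses

margin = reward_head(h_chosen) - reward_head(h_rejected)
loss = -nn.functional.logsigmoid(margin).mean()  # Bradley-Terry negative log-likelihood
loss.backward()
```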

D-Separation for Causal Self-Explanation

1 code implementation NeurIPS 2023 Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Zhiying Deng, Yuankai Zhang, Yang Qiu

Instead of attempting to rectify the issues of the MMI criterion, we propose a novel criterion to uncover the causal rationale, termed the Minimum Conditional Dependence (MCD) criterion, which is grounded on our finding that the non-causal features and the target label are d-separated by the causal rationale.
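
Read schematically (our paraphrase, not the paper's exact notation), the criterion picks the rationale that renders the leftover features conditionally independent of the label:

$$Z^{\star} = \arg\min_{Z \subseteq X} \; D\left(Y;\; X \setminus Z \,\middle|\, Z\right)$$

where $D$ is a conditional-dependence measure such as conditional mutual information; at the optimum, $Y$ and the non-causal features $X \setminus Z$ are d-separated by $Z$.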

Decoupled Rationalization with Asymmetric Learning Rates: A Flexible Lipschitz Restraint

1 code implementation 23 May 2023 Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Yang Qiu, Yuankai Zhang, Jie Han, Yixiong Zou

However, such a cooperative game may incur the degeneration problem, where the predictor overfits to the uninformative pieces generated by a not-yet-well-trained generator and, in turn, leads the generator to converge to a sub-optimal model that tends to select senseless pieces.
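
One hedged way to picture the proposed asymmetry is simply to give the two players different step sizes, so the predictor cannot race ahead of an immature generator. The modules and the 10x ratio below are our own illustrative choices, not the paper's tuned configuration:

```python
# Asymmetric learning rates for the two players of the cooperative game.
import torch
import torch.nn as nn

generator = nn.Linear(16, 16)   # stand-ins for the real generator/predictor
predictor = nn.Linear(16, 2)

gen_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
pred_opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)  # deliberately slower

x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(predictor(generator(x)), y)
loss.backward()
gen_opt.step()
pred_opt.step()
```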

MGR: Multi-generator Based Rationalization

1 code implementation 8 May 2023 Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Xinyang Li, Yuankai Zhang, Yang Qiu

Rationalization employs a generator and a predictor to construct a self-explaining NLP model, in which the generator selects a subset of human-intelligible pieces of the input text and passes it to the subsequent predictor.

Non-Exemplar Online Class-incremental Continual Learning via Dual-prototype Self-augment and Refinement

no code implementations 20 Mar 2023 Fushuo Huo, Wenchao Xu, Jingcai Guo, Haozhao Wang, Yunfeng Fan, Song Guo

In this paper, we propose a novel Dual-prototype Self-augment and Refinement method (DSR) for the NO-CL problem, which consists of two strategies: 1) Dual class prototypes: vanilla and high-dimensional prototypes are exploited to utilize the pre-trained information and obtain robust quasi-orthogonal representations, rather than example buffers, for both privacy preservation and memory reduction.

Continual Learning
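
On a loose reading (our sketch, not the paper's algorithm), a class prototype is just a feature mean, and projecting it into a much higher dimension yields representations that are quasi-orthogonal across classes with high probability, removing the need for an exemplar buffer:

```python
# Hypothetical dual-prototype sketch: a vanilla class-mean prototype plus a
# random high-dimensional projection of it.
import numpy as np

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 64))            # pre-trained features of one class
vanilla_proto = feats.mean(axis=0)            # low-dimensional class prototype

proj = rng.normal(size=(64, 1024)) / np.sqrt(1024)
high_proto = vanilla_proto @ proj             # quasi-orthogonal high-dim prototype
```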

DualMix: Unleashing the Potential of Data Augmentation for Online Class-Incremental Learning

no code implementations 14 Mar 2023 Yunfeng Fan, Wenchao Xu, Haozhao Wang, Jiaqi Zhu, Junxiao Wang, Song Guo

Unfortunately, OCI learning can suffer from catastrophic forgetting (CF), as the decision boundaries for old classes can become inaccurate when perturbed by new ones.

Class Incremental Learning Data Augmentation +1

DaFKD: Domain-Aware Federated Knowledge Distillation

no code implementations CVPR 2023 Haozhao Wang, Yichen Li, Wenchao Xu, Ruixuan Li, Yufeng Zhan, Zhigang Zeng

In this paper, we propose a new perspective that treats the local data in each client as a specific domain, and we design a novel domain-knowledge-aware federated distillation method, dubbed DaFKD, which can discern the importance of each model to a given distillation sample and is thus able to optimize the ensemble of soft predictions from diverse models.

Knowledge Distillation
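
A hedged sketch of the weighted-ensemble idea: each client model's soft prediction is weighted by a per-sample importance score (how the scores are produced, e.g. by a domain discriminator, is the paper's contribution and is only stubbed here):

```python
# Domain-aware ensemble of soft predictions, with hypothetical importance scores.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_soft_labels(client_logits, importance):
    """client_logits: (n_clients, n_classes); importance: (n_clients,)."""
    w = importance / importance.sum()             # normalized per-sample weights
    return (w[:, None] * softmax(client_logits)).sum(axis=0)

logits = np.random.randn(3, 10)                   # 3 client models, 10 classes
scores = np.array([0.7, 0.2, 0.1])                # stub for learned domain relevance
teacher = ensemble_soft_labels(logits, scores)    # target for distillation
```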

ProCC: Progressive Cross-primitive Compatibility for Open-World Compositional Zero-Shot Learning

no code implementations 19 Nov 2022 Fushuo Huo, Wenchao Xu, Song Guo, Jingcai Guo, Haozhao Wang, Ziming Liu, Xiaocheng Lu

Open-World Compositional Zero-shot Learning (OW-CZSL) aims to recognize novel compositions of state and object primitives in images with no priors on the compositional space, which induces a tremendously large output space containing all possible state-object compositions.

Compositional Zero-Shot Learning Object

FedTune: A Deep Dive into Efficient Federated Fine-Tuning with Pre-trained Transformers

no code implementations 15 Nov 2022 Jinyu Chen, Wenchao Xu, Song Guo, Junxiao Wang, Jie Zhang, Haozhao Wang

Federated Learning (FL) is an emerging paradigm that enables distributed users to collaboratively and iteratively train machine learning models without sharing their private data.

Federated Learning Language Modelling +1
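
As a hedged illustration of why fine-tuning efficiency matters here (our toy stand-in, not one of the paper's actual tuning schemes): freezing the pre-trained backbone means each client computes gradients for, and communicates, only a small head:

```python
# Freeze the backbone so only the task head is trained and transmitted.
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(768, 768), nn.ReLU())  # transformer stand-in
head = nn.Linear(768, 2)
for p in backbone.parameters():
    p.requires_grad = False                  # no backbone gradients or updates

trainable = sum(p.numel() for p in head.parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"fraction of parameters communicated per round: {trainable / total:.4f}")
```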

PMR: Prototypical Modal Rebalance for Multimodal Learning

no code implementations CVPR 2023 Yunfeng Fan, Wenchao Xu, Haozhao Wang, Junxiao Wang, Song Guo

Multimodal learning (MML) aims to jointly exploit the common priors of different modalities to compensate for their inherent limitations.

FR: Folded Rationalization with a Unified Encoder

1 code implementation 17 Sep 2022 Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Chao Yue, Yuankai Zhang

Conventional works generally employ a two-phase model in which a generator selects the most important pieces, followed by a predictor that makes predictions based on the selected pieces.

Sign Bit is Enough: A Learning Synchronization Framework for Multi-hop All-reduce with Ultimate Compression

no code implementations 14 Apr 2022 Feijie Wu, Shiqi He, Song Guo, Zhihao Qu, Haozhao Wang, Weihua Zhuang, Jie Zhang

Traditional one-bit compressed stochastic gradient descent cannot be directly employed in multi-hop all-reduce, a widely adopted distributed training paradigm in network-intensive high-performance computing systems such as public clouds.
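
For intuition, here is a minimal sketch of one-bit (sign) compression with local error feedback, the usual ingredient in such schemes; the multi-hop all-reduce synchronization itself, the paper's actual focus, is out of scope here:

```python
# Sign compression: transmit one bit per coordinate plus a single scale factor,
# keeping the compression error locally to fold into the next round.
import numpy as np

def compress(grad, residual):
    corrected = grad + residual              # error feedback from past rounds
    signs = np.sign(corrected)
    scale = np.abs(corrected).mean()         # one scalar restores magnitude
    residual = corrected - scale * signs     # error carried to the next round
    return signs, scale, residual

g = np.random.randn(5)
signs, scale, res = compress(g, np.zeros_like(g))
decoded = scale * signs                      # what a receiving peer reconstructs
```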

From Deterioration to Acceleration: A Calibration Approach to Rehabilitating Step Asynchronism in Federated Optimization

1 code implementation 17 Dec 2021 Feijie Wu, Song Guo, Haozhao Wang, Zhihao Qu, Haobo Zhang, Jie Zhang, Ziming Liu

In the setting of federated optimization, where a global model is aggregated periodically, step asynchronism occurs when participants, each fully utilizing their own computational resources, complete different numbers of local training steps between aggregations.
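
To see why this matters (our illustration; the paper's calibration rule may differ), suppose two clients run different numbers of local steps: naive averaging over-weights the fast client, while normalizing each update by its step count is one simple correction:

```python
# Step asynchronism in miniature: updates scale with local step counts.
import numpy as np

rng = np.random.default_rng(1)
deltas = {10: rng.normal(size=4) * 10, 2: rng.normal(size=4) * 2}  # steps -> update

naive = np.mean(list(deltas.values()), axis=0)                    # biased to fast client
calibrated = np.mean([d / s for s, d in deltas.items()], axis=0)  # per-step average
```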

Parameterized Knowledge Transfer for Personalized Federated Learning

1 code implementation NeurIPS 2021 Jie Zhang, Song Guo, Xiaosong Ma, Haozhao Wang, Wenchao Xu, Feijie Wu

To deal with such model constraints, we exploit the potential of heterogeneous model settings and propose a novel training framework to employ personalized models for different clients.

Personalized Federated Learning Transfer Learning

Intermittent Pulling with Local Compensation for Communication-Efficient Federated Learning

no code implementations 22 Jan 2020 Haozhao Wang, Zhihao Qu, Song Guo, Xin Gao, Ruixuan Li, Baoliu Ye

A major bottleneck in the performance of the distributed Stochastic Gradient Descent (SGD) algorithm for large-scale Federated Learning is the communication overhead of pushing local gradients and pulling the global model.

BIG-bench Machine Learning Federated Learning

Joint Power and Coverage Control of Massive UAVs in Post-Disaster Emergency Networks: An Aggregative Game-Theoretic Learning Approach

no code implementations 19 Jul 2019 Jing Wu, Qimei Chen, Hao Jiang, Haozhao Wang, Yulai Xie, Wenzheng Xu, Pan Zhou, Zichuan Xu, Lixing Chen, Beibei Li, Xiumin Wang, Dapeng Oliver Wu

In the context of fifth-generation (5G)/beyond-5G (B5G) wireless communications, post-disaster emergency networks have recently gained increasing attention and interest.

Gradient Scheduling with Global Momentum for Non-IID Data Distributed Asynchronous Training

no code implementations 21 Feb 2019 Chengjie Li, Ruixuan Li, Haozhao Wang, Yuhua Li, Pan Zhou, Song Guo, Keqin Li

Distributed asynchronous offline training has received widespread attention in recent years because of its high performance on large-scale data and complex models.

Scheduling
