Search Results for author: Zhe Qu

Found 16 papers, 6 papers with code

Unsupervised Collaborative Metric Learning with Mixed-Scale Groups for General Object Retrieval

1 code implementation • 16 Mar 2024 • Shichao Kan, Yuhai Deng, Yixiong Liang, Lihui Cen, Zhe Qu, Yigang Cen, Zhihai He

This paper presents a novel unsupervised deep metric learning approach, termed unsupervised collaborative metric learning with mixed-scale groups (MS-UGCML), devised to learn embeddings for objects of varying scales.

Metric Learning · Object +1

What Makes Good Collaborative Views? Contrastive Mutual Information Maximization for Multi-Agent Perception

1 code implementation • 15 Mar 2024 • Wanfang Su, Lixing Chen, Yang Bai, Xi Lin, Gaolei Li, Zhe Qu, Pan Zhou

The core philosophy of CMiMC is to preserve discriminative information of individual views in the collaborative view by maximizing mutual information between pre- and post-collaboration features while enhancing the efficacy of collaborative views by minimizing the loss function of downstream tasks.

Contrastive Learning · Philosophy
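The abstract above describes maximizing mutual information between pre- and post-collaboration features. A minimal sketch of that idea, using a generic InfoNCE-style lower bound (the function name, temperature, and pairing scheme are illustrative assumptions, not CMiMC's exact estimator):

```python
import numpy as np

def infonce_mi_lower_bound(pre_feats, post_feats, temperature=0.1):
    """InfoNCE-style lower bound on mutual information between
    pre-collaboration and post-collaboration feature vectors.
    Row i of each array is assumed to be a matched pair."""
    # Cosine-normalize so the critic is a scaled dot product.
    pre = pre_feats / np.linalg.norm(pre_feats, axis=1, keepdims=True)
    post = post_feats / np.linalg.norm(post_feats, axis=1, keepdims=True)
    logits = pre @ post.T / temperature          # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Matched pairs sit on the diagonal; their mean log-softmax plus log N
    # is the standard InfoNCE bound.
    return float(np.mean(np.diag(log_softmax)) + np.log(len(pre)))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
aligned = infonce_mi_lower_bound(x, x + 0.01 * rng.normal(size=(8, 16)))
shuffled = infonce_mi_lower_bound(x, rng.normal(size=(8, 16)))
```

Maximizing this bound (negating it as a loss) pushes each post-collaboration feature to stay identifiable with its own pre-collaboration view, which is the "preserve discriminative information" intuition; here `aligned` comes out larger than `shuffled` because the matched pairs are nearly identical.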

Parrot-Trained Adversarial Examples: Pushing the Practicality of Black-Box Audio Attacks against Speaker Recognition Models

no code implementations • 13 Nov 2023 • Rui Duan, Zhe Qu, Leah Ding, Yao Liu, Zhuo Lu

Motivated by recent advancements in voice conversion (VC), we propose to use the knowledge from one short sentence to generate more synthetic speech samples that sound like the target speaker, called parrot speech.

Sentence · Speaker Recognition +1

You Only Forward Once: Prediction and Rationalization in A Single Forward Pass

no code implementations • 4 Nov 2023 • Han Jiang, Junwen Duan, Zhe Qu, Jianxin Wang

In our framework, a pre-trained language model like BERT is deployed to simultaneously perform prediction and rationalization with less impact from interlocking or spurious correlations.

Language Modelling

Faster Stochastic Variance Reduction Methods for Compositional MiniMax Optimization

no code implementations • 18 Aug 2023 • Jin Liu, Xiaokang Pan, Junwen Duan, Hongdong Li, Youqi Li, Zhe Qu

All the derived complexities indicate that our proposed methods can match the lower bounds of existing minimax optimization methods, without requiring a large batch size in each iteration.

Stochastic Optimization

Inter-Instance Similarity Modeling for Contrastive Learning

1 code implementation • 21 Jun 2023 • Chengchao Shen, Dawei Liu, Hao Tang, Zhe Qu, Jianxin Wang

In this paper, we propose a novel image mix method, PatchMix, for contrastive learning in Vision Transformer (ViT), to model inter-instance similarities among images.

Contrastive Learning · Instance Segmentation +4
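The PatchMix idea above mixes patches from different images so one mixed image carries content from multiple instances. A toy sketch of patch-level mixing on an image batch (the function name, in-batch pairing by `np.roll`, and the mix ratio are illustrative assumptions, not the paper's exact recipe):

```python
import numpy as np

def patch_mix(images, patch=4, mix_ratio=0.5, seed=0):
    """Swap a random fraction of non-overlapping patches between each
    image and a partner image from the same batch (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, h, w, c = images.shape
    assert h % patch == 0 and w % patch == 0
    gh, gw = h // patch, w // patch
    # Partner for image i is image (i - 1) mod n: a simple in-batch pairing.
    partners = np.roll(images, shift=1, axis=0)
    mixed = images.copy()
    mask = rng.random((n, gh, gw)) < mix_ratio   # which patches to swap in
    for i in range(n):
        for y in range(gh):
            for x in range(gw):
                if mask[i, y, x]:
                    ys, xs = y * patch, x * patch
                    mixed[i, ys:ys + patch, xs:xs + patch] = \
                        partners[i, ys:ys + patch, xs:xs + patch]
    return mixed, mask

# Four constant-valued 8x8 RGB images make the mixing easy to inspect.
batch = np.stack([np.full((8, 8, 3), v, dtype=float) for v in range(4)])
mixed, mask = patch_mix(batch)
```

The returned `mask` records which patches came from the partner image; in a contrastive setup such a mask lets the loss assign each mixed image soft similarity to both source instances rather than a single positive.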

Modeling Global Distribution for Federated Learning with Label Distribution Skew

1 code implementation • 17 Dec 2022 • Tao Sheng, Chengchao Shen, Yuan Liu, Yeyu Ou, Zhe Qu, Jianxin Wang

It introduces a global Generative Adversarial Network to model the global data distribution without access to local datasets, so the global model can be trained using the global information of data distribution without privacy leakage.

Federated Learning · Generative Adversarial Network

Perception-Aware Attack: Creating Adversarial Music via Reverse-Engineering Human Perception

no code implementations • 26 Jul 2022 • Rui Duan, Zhe Qu, Shangqing Zhao, Leah Ding, Yao Liu, Zhuo Lu

In this work, we formulate the adversarial attack against music signals as a new perception-aware attack framework, which integrates human study into adversarial attack design.

Adversarial Attack · Speaker Recognition +2

Generalized Federated Learning via Sharpness Aware Minimization

no code implementations • 6 Jun 2022 • Zhe Qu, Xingyu Li, Rui Duan, Yao Liu, Bo Tang, Zhuo Lu

Therefore, in this paper, we revisit the solutions to the distribution shift problem in FL with a focus on local learning generality.

Federated Learning · Privacy Preserving
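The title above combines federated learning with sharpness-aware minimization (SAM). A minimal sketch of one generic SAM update, the building block such methods apply on each client (this is textbook SAM on a toy objective, not the paper's exact federated algorithm):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization step: ascend to the worst-case
    point inside an l2 ball of radius rho, then descend using the
    gradient evaluated at that perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed w
    return w - lr * g_sharp

# Toy quadratic objective f(w) = 0.5 * ||w||^2, so grad f(w) = w.
grad = lambda w: w
w = np.array([3.0, -4.0])
for _ in range(50):
    w = sam_step(w, grad)
```

Because the descent gradient is taken at a perturbed point, minima whose neighborhoods stay low-loss (flat minima) are favored; in federated settings this is the mechanism tied to better "local learning generality" under distribution shift.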

LoMar: A Local Defense Against Poisoning Attack on Federated Learning

no code implementations • 8 Jan 2022 • Xingyu Li, Zhe Qu, Shangqing Zhao, Bo Tang, Zhuo Lu, Yao Liu

Federated learning (FL) provides a highly efficient decentralized machine learning framework, where the training data remains distributed at remote clients in a network.

Density Estimation · Edge-computing +2

FedLGA: Towards System-Heterogeneity of Federated Learning via Local Gradient Approximation

no code implementations • 22 Dec 2021 • Xingyu Li, Zhe Qu, Bo Tang, Zhuo Lu

Federated Learning (FL) is a decentralized machine learning architecture, which leverages a large number of remote devices to learn a joint model with distributed training data.

Federated Learning

Context-Aware Online Client Selection for Hierarchical Federated Learning

no code implementations • 2 Dec 2021 • Zhe Qu, Rui Duan, Lixing Chen, Jie Xu, Zhuo Lu, Yao Liu

In addition, client selection for HFL faces more challenges than conventional FL, e.g., the time-varying connection of client-ES pairs and the limited budget of the Network Operator (NO).

Federated Learning

Stragglers Are Not Disaster: A Hybrid Federated Learning Algorithm with Delayed Gradients

no code implementations • 12 Feb 2021 • Xingyu Li, Zhe Qu, Bo Tang, Zhuo Lu

Federated learning (FL) is a new machine learning framework that trains a joint model across a large number of decentralized computing devices.

Federated Learning
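Training a joint model across decentralized devices, as described above, usually rests on a server-side aggregation step. A sketch of the standard FedAvg aggregation, which the hybrid delayed-gradient algorithm builds on (this is generic FedAvg, not the paper's delayed-gradient handling of stragglers):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client model parameters weighted by
    each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()                  # data-proportional weights
    stacked = np.stack(client_weights)            # (num_clients, dim)
    return np.tensordot(coeffs, stacked, axes=1)  # weighted average

# Two clients; the second holds three times as much data.
clients = [np.array([1.0, 1.0]), np.array([3.0, 5.0])]
global_model = fed_avg(clients, client_sizes=[1, 3])
```

Weighting by dataset size makes the aggregate approximate the model trained on the pooled data; straggler-tolerant variants change *when* each client's contribution enters this average, not the average itself.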

D-GCCA: Decomposition-based Generalized Canonical Correlation Analysis for Multi-view High-dimensional Data

1 code implementation • 9 Jan 2020 • Hai Shu, Zhe Qu, Hongtu Zhu

We propose a novel decomposition method for this model, called decomposition-based generalized canonical correlation analysis (D-GCCA).

CDPA: Common and Distinctive Pattern Analysis between High-dimensional Datasets

2 code implementations • 20 Dec 2019 • Hai Shu, Zhe Qu

A representative model in integrative analysis of two high-dimensional correlated datasets is to decompose each data matrix into a low-rank common matrix generated by latent factors shared across datasets, a low-rank distinctive matrix corresponding to each dataset, and an additive noise matrix.

Vocal Bursts Intensity Prediction
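The CDPA abstract above specifies a generative model: each data matrix is a low-rank common part driven by shared latent factors, plus a low-rank distinctive part, plus noise. A small numerical sketch of that model (the dimensions and ranks are illustrative; this simulates the decomposition, it is not the CDPA estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r_common, r_dist = 100, 20, 2, 1

# Latent factors shared across the two datasets drive the common parts.
shared = rng.normal(size=(n, r_common))
common_1 = shared @ rng.normal(size=(r_common, p))  # low-rank common matrix
common_2 = shared @ rng.normal(size=(r_common, p))

# Each dataset also has its own low-rank distinctive structure.
dist_1 = rng.normal(size=(n, r_dist)) @ rng.normal(size=(r_dist, p))
dist_2 = rng.normal(size=(n, r_dist)) @ rng.normal(size=(r_dist, p))

# Observed data = common + distinctive + additive noise.
Y1 = common_1 + dist_1 + 0.1 * rng.normal(size=(n, p))
Y2 = common_2 + dist_2 + 0.1 * rng.normal(size=(n, p))
```

Because `common_1` and `common_2` are generated from the same `shared` factors, methods like CDPA (and the related D-GCCA above) can separate the cross-dataset signal from each dataset's distinctive variation; the signal parts are genuinely low-rank by construction.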
