1 code implementation • 16 Mar 2024 • Shichao Kan, Yuhai Deng, Yixiong Liang, Lihui Cen, Zhe Qu, Yigang Cen, Zhihai He
This paper presents a novel unsupervised deep metric learning approach, termed unsupervised collaborative metric learning with mixed-scale groups (MS-UGCML), devised to learn embeddings for objects of varying scales.
1 code implementation • 15 Mar 2024 • Wanfang Su, Lixing Chen, Yang Bai, Xi Lin, Gaolei Li, Zhe Qu, Pan Zhou
The core philosophy of CMiMC is to preserve the discriminative information of individual views in the collaborative view by maximizing the mutual information between pre- and post-collaboration features, while enhancing the efficacy of collaborative views by minimizing the loss function of downstream tasks.
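A minimal PyTorch sketch of this objective, assuming an InfoNCE-style estimator as the mutual-information lower bound; the function names, the estimator choice, and the weighting `lam` are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def infonce_mi_bound(pre_feats, post_feats, temperature=0.1):
    """InfoNCE-style lower bound (up to an additive log N constant) on the
    mutual information between pre- and post-collaboration features.
    Matching (pre_i, post_i) pairs are positives; other batch pairs are negatives."""
    pre = F.normalize(pre_feats, dim=1)    # (N, d)
    post = F.normalize(post_feats, dim=1)  # (N, d)
    logits = pre @ post.t() / temperature  # (N, N) pairwise similarities
    labels = torch.arange(pre.size(0), device=pre.device)
    return -F.cross_entropy(logits, labels)  # larger value = tighter bound

def collaborative_loss(pre_feats, post_feats, task_logits, targets, lam=0.5):
    """Minimize the downstream task loss while maximizing the MI bound
    (hence the minus sign on the bound)."""
    task_loss = F.cross_entropy(task_logits, targets)
    return task_loss - lam * infonce_mi_bound(pre_feats, post_feats)
```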
no code implementations • 13 Nov 2023 • Rui Duan, Zhe Qu, Leah Ding, Yao Liu, Zhuo Lu
Motivated by recent advancements in voice conversion (VC), we propose to use the knowledge in one short sentence to generate additional synthetic speech samples that sound like the target speaker, called parrot speech.
no code implementations • 4 Nov 2023 • Han Jiang, Junwen Duan, Zhe Qu, Jianxin Wang
In our framework, a pre-trained language model like BERT is deployed to simultaneously perform prediction and rationalization with less impact from interlocking or spurious correlations.
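A minimal sketch of the single-encoder idea, assuming a soft token-level rationale head and rationale-weighted pooling; the class name, head designs, and pooling are illustrative assumptions:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class JointRationalePredictor(nn.Module):
    """One shared pre-trained encoder yields both a token-level rationale
    mask and a label prediction, instead of the classic two-module
    select-then-predict pipeline that suffers from interlocking."""
    def __init__(self, name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size
        self.rationale_head = nn.Linear(hidden, 1)      # per-token keep score
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        keep = torch.sigmoid(self.rationale_head(h))    # (B, T, 1) soft rationale
        pooled = (keep * h).sum(1) / keep.sum(1).clamp(min=1e-6)  # rationale-weighted pooling
        return self.classifier(pooled), keep.squeeze(-1)
```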
no code implementations • 18 Aug 2023 • Jin Liu, Xiaokang Pan, Junwen Duan, Hongdong Li, Youqi Li, Zhe Qu
All of the derived complexities indicate that our proposed methods match the lower bounds of existing minimax optimization methods, without requiring a large batch size in each iteration.
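For context, a toy worked example of the minimax setting min_x max_y f(x, y) with single-sample (batch size 1) stochastic gradients; this is plain stochastic gradient descent-ascent on a strongly-convex-strongly-concave objective, not the paper's variance-reduced estimators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective f(x, y) = 0.5*x**2 + x*y - 0.5*y**2, saddle point at (0, 0).
# Gradients carry additive noise to mimic single-sample stochastic estimates.
def noisy_grads(x, y, sigma=0.1):
    gx = x + y + sigma * rng.standard_normal()  # df/dx
    gy = x - y + sigma * rng.standard_normal()  # df/dy
    return gx, gy

x, y, eta = 2.0, -1.5, 0.05
for _ in range(2000):
    gx, gy = noisy_grads(x, y)
    x -= eta * gx  # descent on the min variable
    y += eta * gy  # ascent on the max variable
print(f"approximate saddle point: x={x:.3f}, y={y:.3f}")  # both near 0
```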
1 code implementation • 21 Jun 2023 • Chengchao Shen, Dawei Liu, Hao Tang, Zhe Qu, Jianxin Wang
In this paper, we propose a novel image mix method, PatchMix, for contrastive learning in Vision Transformer (ViT), to model inter-instance similarities among images.
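A minimal sketch of batch-level patch mixing, assuming image sides divisible by the patch size; the exact PatchMix construction and its contrastive label assignment follow the paper's released code:

```python
import torch

def patchmix(images, patch_size=16, mix_ratio=0.5):
    """Replace a random fraction of each image's non-overlapping patches with
    the corresponding patches of a randomly permuted partner in the batch,
    so a mixed image contains content from multiple instances."""
    B, C, H, W = images.shape
    ph, pw = H // patch_size, W // patch_size
    patches = images.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, ph * pw, C, patch_size, patch_size)
    perm = torch.randperm(B)                                   # partner assignment
    idx = torch.randperm(ph * pw)[:int(mix_ratio * ph * pw)]   # patches to swap
    mixed = patches.clone()
    mixed[:, idx] = patches[perm][:, idx]
    # Reassemble the patch grid back into images.
    mixed = mixed.reshape(B, ph, pw, C, patch_size, patch_size)
    return mixed.permute(0, 3, 1, 4, 2, 5).reshape(B, C, H, W), perm, idx
```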
no code implementations • CVPR 2023 • Zhe Qu, Xingyu Li, Xiao Han, Rui Duan, Chengchao Shen, Lixing Chen
Intuitively, the poor performance of these clients may stem from the biased universal information shared with the others.
1 code implementation • 17 Dec 2022 • Tao Sheng, Chengchao Shen, Yuan Liu, Yeyu Ou, Zhe Qu, Jianxin Wang
It introduces a global Generative Adversarial Network to model the global data distribution without access to local datasets, so the global model can be trained on this distributional information without privacy leakage.
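A sketch of one plausible communication round for such a federated GAN, assuming client discriminators (models, not data) may be shared with the server; the paper's actual training scheme may differ:

```python
import torch
import torch.nn as nn

def federated_gan_round(G, Ds, loaders, opt_G, opts_D, z_dim=64, g_steps=5):
    """Each client trains its local discriminator on (local real, generated fake)
    pairs; the server then updates the shared generator to fool the average of
    the client discriminators. Raw local data never leaves the clients."""
    bce = nn.BCEWithLogitsLoss()
    for D, loader, opt_D in zip(Ds, loaders, opts_D):  # client side
        for real, _ in loader:
            fake = G(torch.randn(real.size(0), z_dim)).detach()
            loss_D = (bce(D(real), torch.ones(real.size(0), 1))
                      + bce(D(fake), torch.zeros(real.size(0), 1)))
            opt_D.zero_grad()
            loss_D.backward()
            opt_D.step()
    for _ in range(g_steps):                           # server side
        fake = G(torch.randn(128, z_dim))
        loss_G = sum(bce(D(fake), torch.ones(128, 1)) for D in Ds) / len(Ds)
        opt_G.zero_grad()
        loss_G.backward()
        opt_G.step()
```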
no code implementations • 26 Jul 2022 • Rui Duan, Zhe Qu, Shangqing Zhao, Leah Ding, Yao Liu, Zhuo Lu
In this work, we formulate the adversarial attack against music signals as a new perception-aware attack framework, which integrates human study into adversarial attack design.
no code implementations • 6 Jun 2022 • Zhe Qu, Xingyu Li, Rui Duan, Yao Liu, Bo Tang, Zhuo Lu
Therefore, in this paper, we revisit the solutions to the distribution shift problem in FL with a focus on local learning generality.
no code implementations • 8 Jan 2022 • Xingyu Li, Zhe Qu, Shangqing Zhao, Bo Tang, Zhuo Lu, Yao Liu
Federated learning (FL) provides a highly efficient decentralized machine learning framework, where the training data remains distributed at remote clients in a network.
no code implementations • 22 Dec 2021 • Xingyu Li, Zhe Qu, Bo Tang, Zhuo Lu
Federated Learning (FL) is a decentralized machine learning architecture, which leverages a large number of remote devices to learn a joint model with distributed training data.
no code implementations • 2 Dec 2021 • Zhe Qu, Rui Duan, Lixing Chen, Jie Xu, Zhuo Lu, Yao Liu
In addition, client selection for HFL faces more challenges than conventional FL, e.g., the time-varying connections of client-ES pairs and the limited budget of the Network Operator (NO).
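To make the budget constraint concrete, a generic greedy heuristic over utility-to-cost ratios; the key structure, utility model, and greedy rule are illustrative assumptions, not the paper's selection algorithm:

```python
def select_client_es_pairs(utilities, costs, budget):
    """Greedily pick currently connectable (client, edge_server) pairs by
    utility-to-cost ratio until the Network Operator's budget is spent,
    selecting each client at most once. `utilities` and `costs` are dicts
    keyed by the (client, edge_server) pairs valid in this time slot."""
    ranked = sorted(utilities, key=lambda p: utilities[p] / costs[p], reverse=True)
    chosen, used_clients, spent = [], set(), 0.0
    for client, es in ranked:
        if client in used_clients or spent + costs[(client, es)] > budget:
            continue
        chosen.append((client, es))
        used_clients.add(client)
        spent += costs[(client, es)]
    return chosen
```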
no code implementations • 12 Feb 2021 • Xingyu Li, Zhe Qu, Bo Tang, Zhuo Lu
Federated learning (FL) is a new machine learning framework that trains a joint model across a large number of decentralized computing devices.
1 code implementation • 9 Jan 2020 • Hai Shu, Zhe Qu, Hongtu Zhu
We propose a novel decomposition method for this model, called decomposition-based generalized canonical correlation analysis (D-GCCA).
2 code implementations • 20 Dec 2019 • Hai Shu, Zhe Qu
A representative model in integrative analysis of two high-dimensional correlated datasets is to decompose each data matrix into a low-rank common matrix generated by latent factors shared across datasets, a low-rank distinctive matrix corresponding to each dataset, and an additive noise matrix.
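A toy simulation of this model together with the standard first step, recovering the low-rank signal by truncated SVD; separating the common from the distinctive part via canonical correlation analysis of the latent factors is the contribution of the two papers above and is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 40

# One dataset following Y = C + D + E (common + distinctive + noise);
# a second dataset Y2 would be built the same way from the same `shared`.
shared = rng.standard_normal((n, 2))            # factors shared across datasets
distinct = rng.standard_normal((n, 1))          # factors unique to this dataset
C = shared @ rng.standard_normal((2, p))        # low-rank common matrix
D = distinct @ rng.standard_normal((1, p))      # low-rank distinctive matrix
Y = C + D + 0.5 * rng.standard_normal((n, p))   # observed data matrix

def low_rank(Y, r):
    """Rank-r approximation via truncated SVD, estimating X = C + D."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

X_hat = low_rank(Y, 3)  # rank 2 (common) + rank 1 (distinctive)
print("relative recovery error:",
      np.linalg.norm(X_hat - (C + D)) / np.linalg.norm(C + D))
```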