Search Results for author: Ziqiao Wang

Found 10 papers, 3 papers with code

Cross-Modal and Uni-Modal Soft-Label Alignment for Image-Text Retrieval

1 code implementation • 8 Mar 2024 • Hailang Huang, Zhijie Nie, Ziqiao Wang, Ziyu Shang

Furthermore, our method can also boost the uni-modal retrieval performance of image-text retrieval models, enabling them to achieve universal retrieval.

Retrieval • Text Retrieval

On f-Divergence Principled Domain Adaptation: An Improved Framework

no code implementations • 2 Feb 2024 • Ziqiao Wang, Yongyi Mao

Unsupervised domain adaptation (UDA) plays a crucial role in addressing distribution shifts in machine learning.

Unsupervised Domain Adaptation

Over-training with Mixup May Hurt Generalization

no code implementations • 2 Mar 2023 • Zixuan Liu, Ziqiao Wang, Hongyu Guo, Yongyi Mao

Mixup, which creates synthetic training instances by linearly interpolating random sample pairs, is a simple yet effective regularization technique to boost the performance of deep models trained with SGD.
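
The interpolation itself is a one-liner. Below is a minimal sketch of the mixup operation, not the paper's experimental code; the Beta parameter `alpha=0.2` is an assumed value for illustration.

```python
import numpy as np

def mixup_pair(x1, y1, x2, y2, alpha=0.2):
    """Minimal mixup sketch: convexly combine one random pair of examples.

    `alpha` sets the Beta distribution of the mixing weight; 0.2 is an
    assumed value for illustration, not a setting from the paper.
    """
    lam = np.random.beta(alpha, alpha)      # mixing weight in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2         # interpolated input
    y = lam * y1 + (1.0 - lam) * y2         # interpolated (one-hot) label
    return x, y
```

Applied to random pairs drawn from each mini-batch, this yields the synthetic training instances the abstract refers to.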

Tighter Information-Theoretic Generalization Bounds from Supersamples

1 code implementation • 5 Feb 2023 • Ziqiao Wang, Yongyi Mao

In this work, we present a variety of novel information-theoretic generalization bounds for learning algorithms, derived from the supersample setting of Steinke & Zakynthinou (2020), i.e., the setting of the "conditional mutual information" framework.
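
For orientation, the flagship bound of that framework, for a loss bounded in [0, 1], is standardly stated as follows (this is the baseline statement, not one of the tighter bounds the paper derives):

```latex
\mathbb{E}\left[ L_\mu(W) - L_S(W) \right]
  \le \sqrt{\frac{2 \, I(W; U \mid \tilde{Z})}{n}}
```

Here \tilde{Z} is the supersample of 2n points, U \in \{0,1\}^n are the selection bits picking the n training points S, W is the algorithm's output, and I(W; U \mid \tilde{Z}) is the conditional mutual information that gives the framework its name.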

Generalization Bounds

Two Facets of SDE Under an Information-Theoretic Lens: Generalization of SGD via Training Trajectories and via Terminal States

no code implementations • 19 Nov 2022 • Ziqiao Wang, Yongyi Mao

Using this estimate, we apply the PAC-Bayes-like information-theoretic bounds developed in both Xu & Raginsky (2017) and Negrea et al. (2019) to obtain generalization upper bounds in terms of the KL divergence between the steady-state weight distribution of SGD and a prior distribution.
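
For reference, the Xu & Raginsky (2017) bound underlying this approach states that, for a \sigma-sub-Gaussian loss and n training samples,

```latex
\bigl| \mathbb{E}[\operatorname{gen}(S, W)] \bigr|
  \le \sqrt{\frac{2\sigma^2}{n} \, I(S; W)}
```

The PAC-Bayes-like variants replace the mutual information I(S; W) with a KL divergence D_{\mathrm{KL}}(Q_W \| P) between the posterior over weights and a prior P, which is the form in which the steady-state weight distribution of SGD enters the bound here.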

Generalization Bounds

Information-Theoretic Analysis of Unsupervised Domain Adaptation

no code implementations • 3 Oct 2022 • Ziqiao Wang, Yongyi Mao

This paper uses information-theoretic tools to analyze the generalization error in unsupervised domain adaptation (UDA).

Unsupervised Domain Adaptation

On the Generalization of Models Trained with SGD: Information-Theoretic Bounds and Implications

no code implementations • ICLR 2022 • Ziqiao Wang, Yongyi Mao

This paper follows up on recent work of Neu et al. (2021) and presents new information-theoretic upper bounds for the generalization error of machine learning models, such as neural networks, trained with SGD.

Cluster Attack: Query-based Adversarial Attacks on Graphs with Graph-Dependent Priors

1 code implementation • ICML Workshop AML 2021 • Zhengyi Wang, Zhongkai Hao, Ziqiao Wang, Hang Su, Jun Zhu

In this work, we propose Cluster Attack, a Graph Injection Attack (GIA) on node classification, which injects fake nodes into the original graph to degrade the performance of graph neural networks (GNNs) on certain victim nodes while affecting the other nodes as little as possible.
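
The injection step itself is easy to picture. The sketch below is a hypothetical illustration of appending fake nodes to an adjacency matrix and wiring them to victim nodes; the paper's Cluster Attack additionally optimizes the injected features under a query budget, which is omitted here.

```python
import numpy as np

def inject_fake_nodes(adj, feats, fake_feats, victim_ids):
    """Hypothetical graph-injection sketch (not the paper's attack).

    Appends one fake node per row of `fake_feats` and connects each
    to one victim node; Cluster Attack's query-based optimization of
    the fake features is not shown.
    """
    n, k = adj.shape[0], fake_feats.shape[0]
    new_adj = np.zeros((n + k, n + k), dtype=adj.dtype)
    new_adj[:n, :n] = adj                       # keep original edges
    for i, v in enumerate(victim_ids[:k]):
        new_adj[n + i, v] = new_adj[v, n + i] = 1   # undirected fake edge
    new_feats = np.vstack([feats, fake_feats])  # extend feature matrix
    return new_adj, new_feats
```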

Adversarial Attack • Clustering • +3

On the Generalization of Neural Networks Trained with SGD: Information-Theoretical Bounds and Implications

no code implementations • NeurIPS 2021 • Ziqiao Wang, Yongyi Mao

Understanding the generalization behaviour of deep neural networks is an important theme of modern research in machine learning.

On SkipGram Word Embedding Models with Negative Sampling: Unified Framework and Impact of Noise Distributions

no code implementations • 2 Sep 2020 • Ziqiao Wang, Yongyi Mao, Hongyu Guo, Richong Zhang

SkipGram word embedding models with negative sampling, or SGN for short, form an elegant family of word embedding models.
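
For reference, the standard per-pair SGN objective (Mikolov et al., 2013), which this paper's unified framework covers, maximizes for a center word w_I, an observed context word w_O, and k negatives drawn from a noise distribution P_n:

```latex
\log \sigma\!\left( v'^{\top}_{w_O} v_{w_I} \right)
  + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n}\!\left[ \log \sigma\!\left( -\, v'^{\top}_{w_i} v_{w_I} \right) \right]
```

The choice of the noise distribution P_n is precisely the knob whose impact the paper analyzes.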
