Search Results for author: Haozhe An

Found 10 papers, 3 papers with code

Learning Bias-reduced Word Embeddings Using Dictionary Definitions

1 code implementation • Findings (ACL) 2022 • Haozhe An, Xiaojiang Liu, Donald Zhang

Pre-trained word embeddings, such as GloVe, have shown undesirable gender, racial, and religious biases.

Word Embeddings

Nichelle and Nancy: The Influence of Demographic Attributes and Tokenization Length on First Name Biases

no code implementations • 26 May 2023 • Haozhe An, Rachel Rudinger

We find that demographic attributes of a name (race, ethnicity, and gender) and name tokenization length are both factors that systematically affect the behavior of social commonsense reasoning models.

SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models

1 code implementation • 13 Oct 2022 • Haozhe An, Zongxia Li, Jieyu Zhao, Rachel Rudinger

A common limitation of diagnostic tests for detecting social biases in NLP models is that they may only detect stereotypic associations that are pre-specified by the designer of the test.

Language Modelling · Question Answering

Investigating Information Inconsistency in Multilingual Open-Domain Question Answering

no code implementations • 25 May 2022 • Shramay Palta, Haozhe An, Yifan Yang, Shuaiyi Huang, Maharshi Gor

Retrieval-based open-domain QA systems retrieve documents and perform answer-span selection over them to find the best answer candidates.

Open-Domain Question Answering · Retrieval

Exploring the Common Principal Subspace of Deep Features in Neural Networks

no code implementations • 6 Oct 2021 • Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou

Specifically, we design a new metric $\mathcal{P}$-vector to represent the principal subspace of deep features learned in a DNN, and propose to measure angles between the principal subspaces using $\mathcal{P}$-vectors.
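The idea of comparing principal subspaces can be sketched with plain PCA machinery. This is a minimal sketch under assumptions: the paper's exact $\mathcal{P}$-vector construction is more involved, and the function names here are illustrative, not from the paper's code.

```python
import numpy as np

def principal_subspace(features, k=1):
    """Top-k principal directions of a feature matrix (n_samples x dim)."""
    centered = features - features.mean(axis=0)
    # Right singular vectors of the centered data are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T  # dim x k orthonormal basis

def subspace_angle(basis_a, basis_b):
    """Smallest principal angle (radians) between two subspaces,
    from the singular values of the basis cross-product."""
    s = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
    return float(np.arccos(np.clip(s.max(), -1.0, 1.0)))

rng = np.random.default_rng(0)
feats_a = rng.normal(size=(100, 8))
feats_b = feats_a + 0.01 * rng.normal(size=(100, 8))  # nearly identical features
angle = subspace_angle(principal_subspace(feats_a), principal_subspace(feats_b))
```

Features that are nearly identical yield a near-zero angle between their principal subspaces; features learned for unrelated tasks would generally not.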

Image Reconstruction · Self-Supervised Learning

Can We Use Gradient Norm as a Measure of Generalization Error for Model Selection in Practice?

no code implementations • 1 Jan 2021 • Haozhe An, Haoyi Xiong, Xuhong LI, Xingjian Li, Dejing Dou, Zhanxing Zhu

A recent theoretical investigation (Li et al., 2020) of the upper bound on the generalization error of deep neural networks (DNNs) demonstrates the potential of using the gradient norm as a measure that complements validation accuracy for model selection in practice.
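The measure itself is cheap to sketch: compute the norm of the training-loss gradient at the candidate model's parameters and compare it across candidates. The following is a toy linear-regression illustration, not the paper's DNN setup.

```python
import numpy as np

def gradient_norm(w, X, y):
    """L2 norm of the mean-squared-error gradient for a linear model y ~ Xw."""
    residual = X @ w - y
    grad = 2.0 * X.T @ residual / len(y)
    return float(np.linalg.norm(grad))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

# A well-fit model sits at a near-zero gradient norm; a poor fit does not.
norm_good = gradient_norm(w_true, X, y)
norm_bad = gradient_norm(np.zeros(4), X, y)
```

In practice one would rank trained models by such a norm alongside their validation accuracy rather than by either signal alone.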

Model Selection

Empirical Studies on the Convergence of Feature Spaces in Deep Learning

no code implementations • 1 Jan 2021 • Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou

While deep learning is effective to learn features/representations from data, the distributions of samples in feature spaces learned by various architectures for different training tasks (e.g., latent layers of AEs and feature vectors in CNN classifiers) have not been well-studied or compared.

Image Reconstruction · Self-Supervised Learning

XMixup: Efficient Transfer Learning with Auxiliary Samples by Cross-domain Mixup

no code implementations • 20 Jul 2020 • Xingjian Li, Haoyi Xiong, Haozhe An, Cheng-Zhong Xu, Dejing Dou

While existing multitask learning algorithms must run backpropagation over both the source and target datasets, usually incurring higher gradient complexity, XMixup transfers knowledge from source to target tasks more efficiently: for every class of the target task, XMixup selects auxiliary samples from the source dataset and augments training samples via the simple mixup strategy.
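The core augmentation step can be sketched as ordinary mixup applied across domains. This is a minimal sketch under assumptions: the paper's class-wise selection of auxiliary samples is replaced by a fixed pairing, and the mixing coefficient `lam` is illustrative.

```python
import numpy as np

def xmixup(target_x, target_y, source_x, source_y, lam=0.7):
    """Blend each target sample with an auxiliary source sample;
    labels are mixed with the same coefficient. (Pairing here is fixed,
    whereas XMixup selects source samples per target class.)"""
    mixed_x = lam * target_x + (1.0 - lam) * source_x
    mixed_y = lam * target_y + (1.0 - lam) * source_y
    return mixed_x, mixed_y

rng = np.random.default_rng(2)
tx, sx = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
ty = np.eye(3)[[0, 1, 2, 0]]  # one-hot target labels
sy = np.eye(3)[[1, 2, 0, 1]]  # one-hot auxiliary (source) labels
mx, my = xmixup(tx, ty, sx, sy)
```

Because only the augmented target batch is trained on, backpropagation runs once per step instead of once per dataset, which is where the efficiency claim comes from.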

Transfer Learning

RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr

1 code implementation • ICML 2020 • Xingjian Li, Haoyi Xiong, Haozhe An, Cheng-Zhong Xu, Dejing Dou

RIFLE brings meaningful updates to the weights of deep CNN layers and improves low-level feature learning, while the effects of the randomization are easily absorbed as the overall learning procedure converges.
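The re-initialization step itself is simple to sketch: reset only the final fully-connected layer while keeping the pre-trained backbone intact. Parameter names here are hypothetical; in a real setup this would target a framework's classifier head periodically during fine-tuning.

```python
import numpy as np

def reinit_fc(params, rng, scale=0.01):
    """RIFLE's core operation: re-initialize the final fully-connected
    layer, leaving pre-trained backbone weights untouched.
    (Dictionary keys are illustrative, not from the paper's code.)"""
    params = dict(params)
    params["fc_w"] = scale * rng.normal(size=params["fc_w"].shape)
    params["fc_b"] = np.zeros_like(params["fc_b"])
    return params

rng = np.random.default_rng(3)
params = {
    "backbone_w": rng.normal(size=(16, 8)),  # pre-trained feature extractor
    "fc_w": rng.normal(size=(8, 3)),         # classification head
    "fc_b": np.zeros(3),
}
refreshed = reinit_fc(params, rng)
```

Repeating this reset a few times during fine-tuning forces gradients to flow deeper into the backbone instead of letting the head absorb most of the adaptation.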

Transfer Learning

COLAM: Co-Learning of Deep Neural Networks and Soft Labels via Alternating Minimization

no code implementations • 26 Apr 2020 • Xingjian Li, Haoyi Xiong, Haozhe An, Dejing Dou, Chengzhong Xu

Softening labels of training datasets with respect to data representations has been frequently used to improve the training of deep neural networks (DNNs).
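One half of such an alternating scheme, the soft-label update, can be sketched as pulling the current labels toward the model's predictions and renormalizing. The mixing rule and `alpha` below are illustrative, not the paper's exact objective.

```python
import numpy as np

def update_soft_labels(soft_labels, predictions, alpha=0.9):
    """One alternating-minimization step for the labels: blend the
    current soft labels with the model's softmax outputs, then
    renormalize each row to a valid distribution."""
    mixed = alpha * soft_labels + (1.0 - alpha) * predictions
    return mixed / mixed.sum(axis=1, keepdims=True)

hard = np.eye(3)[[0, 1, 2]]          # start from one-hot labels
preds = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.2, 0.7]])  # model's current softmax outputs
soft = update_soft_labels(hard, preds)
```

The other half of the alternation, training the network weights against the refreshed soft labels, would run between successive label updates.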

General Classification
