Search Results for author: Jiaxu Zhao

Found 4 papers, 3 papers with code

You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets

1 code implementation • 28 Nov 2022 • Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu

Recent works have impressively demonstrated that there exists a subnetwork in randomly initialized convolutional neural networks (CNNs) that can match the performance of fully trained dense networks at initialization, without any optimization of the weights of the network (i.e., untrained networks).

Out-of-Distribution Detection

CHBias: Bias Evaluation and Mitigation of Chinese Conversational Language Models

1 code implementation • 18 May 2023 • Jiaxu Zhao, Meng Fang, Zijing Shi, Yitong Li, Ling Chen, Mykola Pechenizkiy

We evaluate two popular pretrained Chinese conversational models, CDial-GPT and EVA2.0, using CHBias.

Response Generation

REST: Enhancing Group Robustness in DNNs through Reweighted Sparse Training

1 code implementation • 5 Dec 2023 • Jiaxu Zhao, Lu Yin, Shiwei Liu, Meng Fang, Mykola Pechenizkiy

These bias attributes are strongly spuriously correlated with the target variable, causing the models to be biased towards spurious correlations (i.e., "bias-conflicting").

GPTBIAS: A Comprehensive Framework for Evaluating Bias in Large Language Models

no code implementations • 11 Dec 2023 • Jiaxu Zhao, Meng Fang, Shirui Pan, Wenpeng Yin, Mykola Pechenizkiy

In this work, we propose a bias evaluation framework named GPTBIAS that leverages the high performance of LLMs (e.g., GPT-4) to assess bias in models.
