Search Results for author: Miao Xiong

Found 8 papers, 6 papers with code

In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation

1 code implementation • 3 Mar 2024 • Shiqi Chen, Miao Xiong, Junteng Liu, Zhengxuan Wu, Teng Xiao, Siyang Gao, Junxian He

Large language models (LLMs) frequently hallucinate and produce factual errors, yet our understanding of why they make these errors remains limited.

Hallucination

Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News Detection

1 code implementation • 28 Sep 2023 • Jiaying Wu, Shen Li, Ailin Deng, Miao Xiong, Bryan Hooi

Despite considerable advances in automated fake news detection, the timely nature of news makes it a critical open question how to effectively predict the veracity of news articles based on limited fact-checks.

Fake News Detection

Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs

1 code implementation • 22 Jun 2023 • Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie Fu, Junxian He, Bryan Hooi

To better break down the problem, we define a systematic framework with three components: prompting strategies for eliciting verbalized confidence, sampling methods for generating multiple responses, and aggregation techniques for computing consistency.

Arithmetic Reasoning • Benchmarking +1
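
The three components above map naturally onto a small pipeline. The sketch below is a minimal illustration, not the paper's released code: `query_llm` is a hypothetical stand-in for a real model API, and the prompt template and canned answers are assumptions for demonstration only.

```python
import random
from collections import Counter

def query_llm(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical stand-in for a real LLM API; returns a canned answer."""
    return random.choice(["42", "42", "41"])  # simulate sampling variability

# 1) Prompting strategy: a template for eliciting an answer (the paper also
#    elicits verbalized confidence; the wording here is illustrative only).
PROMPT = "Question: {question}\nGive only the final answer:"

def self_consistency_confidence(question: str, n_samples: int = 5):
    # 2) Sampling: draw multiple responses at nonzero temperature.
    answers = [query_llm(PROMPT.format(question=question)) for _ in range(n_samples)]
    # 3) Aggregation: agreement among the samples serves as a consistency score.
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer, count / n_samples

print(self_consistency_confidence("What is 6 * 7?"))
```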

Proximity-Informed Calibration for Deep Neural Networks

1 code implementation • NeurIPS 2023 • Miao Xiong, Ailin Deng, Pang Wei Koh, Jiaying Wu, Shen Li, Jianqing Xu, Bryan Hooi

We examine the problem over 504 pretrained ImageNet models and observe that: 1) Proximity bias exists across a wide variety of model architectures and sizes; 2) Transformer-based models are relatively more susceptible to proximity bias than CNN-based models; 3) Proximity bias persists even after performing popular calibration algorithms like temperature scaling; 4) Models tend to overfit more heavily on low-proximity samples than on high-proximity samples.
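
For context, "proximity" in this line of work is commonly derived from a sample's distance to its nearest neighbors in the model's feature space, and the bias can then be surfaced by comparing the confidence-accuracy gap across proximity bins. The sketch below assumes that definition (the paper's exact formulation may differ) and uses synthetic arrays throughout.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 32))       # stand-in for penultimate-layer features
conf = rng.uniform(0.5, 1.0, size=1000)   # model confidence per sample
correct = rng.integers(0, 2, size=1000)   # 1 if the prediction was right

# Proximity: (negative) mean distance to the K nearest neighbors, so that
# higher values mean the sample sits in a denser region (assumed definition).
K = 10
dists, _ = NearestNeighbors(n_neighbors=K + 1).fit(feats).kneighbors(feats)
proximity = -dists[:, 1:].mean(axis=1)    # drop the zero self-distance

# Compare the confidence-accuracy gap across proximity quartiles.
edges = np.quantile(proximity, [0.0, 0.25, 0.5, 0.75, 1.0])
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (proximity >= lo) & (proximity <= hi)
    gap = conf[mask].mean() - correct[mask].mean()
    print(f"proximity [{lo:.2f}, {hi:.2f}]: confidence-accuracy gap {gap:+.3f}")
```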

GraphCleaner: Detecting Mislabelled Samples in Popular Graph Learning Benchmarks

1 code implementation • 30 May 2023 • Yuwen Li, Miao Xiong, Bryan Hooi

Label errors have been found to be prevalent in popular text, vision, and audio datasets, which heavily influence the safe development and evaluation of machine learning algorithms.

Graph Learning

Great Models Think Alike: Improving Model Reliability via Inter-Model Latent Agreement

no code implementations • 2 May 2023 • Ailin Deng, Miao Xiong, Bryan Hooi

To overcome this incoherence issue, we design a neighborhood agreement measure between latent spaces and find that this agreement is surprisingly well-correlated with the reliability of a model's predictions.
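
One plausible instantiation of such a measure, assuming it is based on k-nearest-neighbor overlap (the paper's exact definition may differ): embed the same inputs in both latent spaces, find each sample's k nearest neighbors in each space, and score the overlap of the two neighbor sets.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_agreement(z_a, z_b, k: int = 10):
    """Per-sample overlap of k-NN sets between two latent spaces.

    A plausible instantiation of a neighborhood agreement measure,
    not necessarily the paper's exact definition.
    """
    idx_a = NearestNeighbors(n_neighbors=k + 1).fit(z_a).kneighbors(z_a)[1][:, 1:]
    idx_b = NearestNeighbors(n_neighbors=k + 1).fit(z_b).kneighbors(z_b)[1][:, 1:]
    return np.array([len(set(a) & set(b)) / k for a, b in zip(idx_a, idx_b)])

# Example: embeddings of the same 500 inputs from two hypothetical models.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(500, 64))
z2 = z1 @ rng.normal(size=(64, 32))  # a correlated second latent space
print(neighborhood_agreement(z1, z2).mean())
```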

Trust, but Verify: Using Self-Supervised Probing to Improve Trustworthiness

1 code implementation • 6 Feb 2023 • Ailin Deng, Shen Li, Miao Xiong, Zhirui Chen, Bryan Hooi

Trustworthy machine learning is of primary importance to the practical deployment of deep learning models.

Out-of-Distribution Detection

Probabilistic Knowledge Distillation of Face Ensembles

no code implementations • CVPR 2023 • Jianqing Xu, Shen Li, Ailin Deng, Miao Xiong, Jiaying Wu, Jiaxiang Wu, Shouhong Ding, Bryan Hooi

Mean ensemble (i.e., averaging predictions from multiple models) is a commonly used technique in machine learning that improves on the performance of each individual model.

Face Image Quality • Face Recognition +2
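
For concreteness, mean ensembling simply averages the class-probability outputs of several models before taking the argmax; a minimal sketch with synthetic softmax outputs:

```python
import numpy as np

def mean_ensemble(prob_list):
    """Average per-model class probabilities, then predict the argmax class."""
    avg = np.mean(prob_list, axis=0)  # shape: (n_samples, n_classes)
    return avg.argmax(axis=1)

# Example: three models' softmax outputs for 4 samples over 3 classes.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(3), size=4) for _ in range(3)]
print(mean_ensemble(probs))
```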
