Search Results for author: Ailin Deng

Found 8 papers, 6 papers with code

Seeing is Believing: Mitigating Hallucination in Large Vision-Language Models via CLIP-Guided Decoding

1 code implementation • 23 Feb 2024 • Ailin Deng, Zhirui Chen, Bryan Hooi

Large Vision-Language Models (LVLMs) are susceptible to object hallucinations, an issue in which their generated text contains non-existent objects, greatly limiting their reliability and practicality.

Hallucination, Object, +3
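
As a rough illustration of the CLIP-guided idea in the title, the snippet below reranks candidate captions by CLIP image-text similarity using the Hugging Face CLIPModel. This is a minimal sketch of the underlying signal, not the paper's decoding algorithm; the model name, image path, and candidate captions are placeholders.

```python
# Illustrative sketch only: rank candidate captions by CLIP image-text
# similarity, the general signal behind CLIP-guided decoding. This is NOT
# the paper's exact algorithm; image path and captions are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")   # hypothetical input image
candidates = [                      # hypothetical LVLM outputs to verify
    "A dog sitting on a couch.",
    "A dog and a cat sitting on a couch.",
]

inputs = processor(text=candidates, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# logits_per_image: (1, num_candidates) image-text similarity scores.
scores = out.logits_per_image.squeeze(0)
best = candidates[scores.argmax().item()]
print(best)  # prefer the caption most grounded in the image
```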

Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News Detection

1 code implementation • 28 Sep 2023 • Jiaying Wu, Shen Li, Ailin Deng, Miao Xiong, Bryan Hooi

Despite considerable advances in automated fake news detection, the time-sensitive nature of news makes it a critical open question how to effectively predict the veracity of news articles from only limited fact-checks.

Fake News Detection

Proximity-Informed Calibration for Deep Neural Networks

1 code implementation • NeurIPS 2023 • Miao Xiong, Ailin Deng, Pang Wei Koh, Jiaying Wu, Shen Li, Jianqing Xu, Bryan Hooi

We examine the problem over 504 pretrained ImageNet models and observe that: 1) Proximity bias exists across a wide variety of model architectures and sizes; 2) Transformer-based models are relatively more susceptible to proximity bias than CNN-based models; 3) Proximity bias persists even after performing popular calibration algorithms like temperature scaling; 4) Models tend to overfit more heavily on low proximity samples than on high proximity samples.
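
To make the notion of proximity concrete, the sketch below scores each sample by its mean distance to its K nearest neighbors in feature space and compares confidence against accuracy across proximity bins. This is one plausible illustration of the concept, with placeholder inputs, not the paper's exact protocol.

```python
# Minimal sketch: check whether calibration error varies with "proximity",
# taken here as the mean distance to the K nearest neighbors in feature space.
# Illustration of the concept only, not the paper's exact protocol.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def proximity_bias_report(features, confidences, correct, k=10, n_bins=5):
    """features: (N, D) embeddings; confidences: (N,) max softmax scores;
    correct: (N,) 0/1 prediction correctness. All assumed precomputed."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    dists, _ = nn.kneighbors(features)           # first column is the point itself
    proximity = -dists[:, 1:].mean(axis=1)        # closer neighbors => higher proximity

    # Split samples into proximity bins and compare confidence vs. accuracy.
    order = np.argsort(proximity)
    for i, idx in enumerate(np.array_split(order, n_bins)):
        gap = confidences[idx].mean() - correct[idx].mean()
        print(f"proximity bin {i} (low -> high): conf-acc gap = {gap:+.3f}")
```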

Great Models Think Alike: Improving Model Reliability via Inter-Model Latent Agreement

no code implementations • 2 May 2023 • Ailin Deng, Miao Xiong, Bryan Hooi

To overcome this incoherence issue, we design a neighborhood agreement measure between latent spaces and find that this agreement is surprisingly well-correlated with the reliability of a model's predictions.
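
One simple way to instantiate such a neighborhood agreement measure is to compare each sample's k-nearest-neighbor sets across the two latent spaces. The sketch below is an illustrative assumption of that idea, not necessarily the paper's exact definition.

```python
# Illustrative sketch: per-sample agreement between two latent spaces,
# measured as the overlap of k-nearest-neighbor sets. The measure used in
# the paper may differ; this only conveys the idea.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_agreement(z_a, z_b, k=10):
    """z_a: (N, D_a) and z_b: (N, D_b) embeddings of the same N samples
    in two latent spaces. Returns an (N,) agreement score in [0, 1]."""
    knn_a = NearestNeighbors(n_neighbors=k + 1).fit(z_a)
    knn_b = NearestNeighbors(n_neighbors=k + 1).fit(z_b)
    _, idx_a = knn_a.kneighbors(z_a)  # first neighbor is the point itself
    _, idx_b = knn_b.kneighbors(z_b)
    agreement = np.array([
        len(set(a[1:]) & set(b[1:])) / k for a, b in zip(idx_a, idx_b)
    ])
    return agreement  # high overlap -> the two spaces "agree" on this sample
```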

Trust, but Verify: Using Self-Supervised Probing to Improve Trustworthiness

1 code implementation • 6 Feb 2023 • Ailin Deng, Shen Li, Miao Xiong, Zhirui Chen, Bryan Hooi

Trustworthy machine learning is of primary importance to the practical deployment of deep learning models.

Out-of-Distribution Detection

Probabilistic Knowledge Distillation of Face Ensembles

no code implementations • CVPR 2023 • Jianqing Xu, Shen Li, Ailin Deng, Miao Xiong, Jiaying Wu, Jiaxiang Wu, Shouhong Ding, Bryan Hooi

Mean ensemble (i.e., averaging the predictions of multiple models) is a commonly used technique in machine learning that improves on the performance of each individual model.

Face Image Quality, Face Recognition, +2
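
For reference, the mean ensemble described in the abstract is simply the average of the member models' predictions; a minimal sketch follows, where the model list and input are placeholders.

```python
# Minimal sketch of a mean ensemble: average the softmax outputs of several
# models on the same input. Models and input tensor are placeholders.
import torch

@torch.no_grad()
def mean_ensemble_predict(models, x):
    probs = [torch.softmax(m(x), dim=-1) for m in models]  # per-model predictions
    return torch.stack(probs).mean(dim=0)                  # averaged prediction
```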

Graph Neural Network-Based Anomaly Detection in Multivariate Time Series

2 code implementations • 13 Jun 2021 • Ailin Deng, Bryan Hooi

Given high-dimensional time series data (e.g., sensor data), how can we detect anomalous events, such as system faults and attacks?

Anomaly Detection, Time Series, +1
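
A common recipe for this setting is forecasting-based scoring: predict each timestep from a sliding window of past readings and treat large prediction errors as anomaly scores. The sketch below illustrates that recipe with a placeholder MLP forecaster, not the paper's graph neural network model.

```python
# Illustrative sketch of forecasting-based anomaly scoring for multivariate
# time series: predict the next reading of every sensor from a sliding window
# and flag large prediction errors. The forecaster here is a plain MLP
# placeholder, not the paper's graph-neural-network model.
import torch
import torch.nn as nn

class WindowForecaster(nn.Module):
    def __init__(self, n_sensors, window):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                      # (B, window, n_sensors) -> (B, window * n_sensors)
            nn.Linear(window * n_sensors, 64),
            nn.ReLU(),
            nn.Linear(64, n_sensors),          # predict the next reading of every sensor
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def anomaly_scores(model, windows, targets):
    """windows: (B, window, n_sensors); targets: (B, n_sensors).
    Returns one score per window: the largest per-sensor absolute error."""
    errors = (model(windows) - targets).abs()
    return errors.max(dim=1).values
```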
