1 code implementation • 23 Feb 2024 • Ailin Deng, Zhirui Chen, Bryan Hooi
Large Vision-Language Models (LVLMs) are susceptible to object hallucinations, an issue in which their generated text contains non-existent objects, greatly limiting their reliability and practicality.
1 code implementation • 28 Sep 2023 • Jiaying Wu, Shen Li, Ailin Deng, Miao Xiong, Bryan Hooi
Despite considerable advances in automated fake news detection, the timely nature of news makes it a critical open question how to effectively predict the veracity of news articles based on limited fact-checks.
1 code implementation • NeurIPS 2023 • Miao Xiong, Ailin Deng, Pang Wei Koh, Jiaying Wu, Shen Li, Jianqing Xu, Bryan Hooi
We examine the problem across 504 pretrained ImageNet models and observe that: 1) Proximity bias exists across a wide variety of model architectures and sizes; 2) Transformer-based models are relatively more susceptible to proximity bias than CNN-based models; 3) Proximity bias persists even after applying popular calibration algorithms such as temperature scaling; 4) Models tend to overfit more heavily on low-proximity samples than on high-proximity samples.
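To make the notion concrete, here is a minimal sketch of how proximity bias might be measured, assuming proximity is defined by a sample's average distance to its K nearest neighbors in feature space; the function names and the equal-mass binning scheme are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def proximity(features, k=10):
    """Proximity of each sample: (negated) average distance to its k
    nearest neighbors in feature space (illustrative definition)."""
    # pairwise Euclidean distances, shape (n, n)
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distance
    knn = np.sort(d, axis=1)[:, :k]      # k smallest distances per sample
    return -knn.mean(axis=1)             # higher value = denser neighborhood

def calibration_gap_by_proximity(conf, correct, prox, n_bins=5):
    """Average |confidence - accuracy| within equal-mass proximity bins."""
    order = np.argsort(prox)             # sort samples from low to high proximity
    gaps = []
    for chunk in np.array_split(order, n_bins):
        gaps.append(abs(conf[chunk].mean() - correct[chunk].mean()))
    return gaps  # larger gaps in the first (low-proximity) bins signal the bias
```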
no code implementations • 2 May 2023 • Ailin Deng, Miao Xiong, Bryan Hooi
To overcome this incoherence issue, we design a neighborhood agreement measure between latent spaces and find that this agreement is surprisingly well correlated with the reliability of a model's predictions.
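As an illustration of the idea (not the paper's exact formulation), one plausible agreement measure is the mean Jaccard overlap between each sample's k-nearest-neighbor sets computed in the two latent spaces; the helper names here are hypothetical:

```python
import numpy as np

def knn_sets(x, k):
    """Index sets of each point's k nearest neighbors (Euclidean)."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)         # a point is not its own neighbor
    return np.argsort(d, axis=1)[:, :k]

def neighborhood_agreement(z1, z2, k=10):
    """Mean Jaccard overlap between each sample's k-NN sets computed in
    two latent spaces z1 and z2 (illustrative stand-in for the paper's
    agreement measure)."""
    n1, n2 = knn_sets(z1, k), knn_sets(z2, k)
    scores = []
    for a, b in zip(n1, n2):
        inter = len(set(a) & set(b))
        scores.append(inter / (2 * k - inter))  # |A∩B| / |A∪B|, since |A|=|B|=k
    return float(np.mean(scores))
```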
1 code implementation • 6 Feb 2023 • Ailin Deng, Shen Li, Miao Xiong, Zhirui Chen, Bryan Hooi
Trustworthy machine learning is of primary importance to the practical deployment of deep learning models.
no code implementations • CVPR 2023 • Jianqing Xu, Shen Li, Ailin Deng, Miao Xiong, Jiaying Wu, Jiaxiang Wu, Shouhong Ding, Bryan Hooi
Mean ensemble (i.e., averaging predictions from multiple models) is a commonly used technique in machine learning that improves on the performance of each individual model.
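For concreteness, a minimal sketch of mean ensembling over class-probability outputs; the function name and array shapes are illustrative:

```python
import numpy as np

def mean_ensemble(prob_list):
    """Average class-probability predictions from several models.

    prob_list: list of arrays, each of shape (n_samples, n_classes).
    Returns the averaged probabilities, same shape as one input array.
    """
    return np.mean(np.stack(prob_list, axis=0), axis=0)

# Usage with three models' softmax outputs on the same batch:
#   combined = mean_ensemble([p1, p2, p3])
#   preds = combined.argmax(axis=1)
```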
1 code implementation • 29 Nov 2022 • Miao Xiong, Shen Li, Wenjie Feng, Ailin Deng, Jihai Zhang, Bryan Hooi
How do we know when the predictions made by a classifier can be trusted?
2 code implementations • 13 Jun 2021 • Ailin Deng, Bryan Hooi
Given high-dimensional time series data (e.g., sensor data), how can we detect anomalous events, such as system faults and attacks?
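The paper approaches this with forecasting-based detection: a model predicts each sensor's next value, and large deviations between prediction and observation are flagged. Below is a simplified scoring sketch under that assumption; the paper's forecaster is a graph neural network, and the robust normalization here follows the same spirit while the exact choices are illustrative:

```python
import numpy as np

def anomaly_scores(observed, predicted, eps=1e-8):
    """Score each timestep by its maximum normalized prediction error
    across sensors (simplified forecasting-based anomaly scoring).

    observed, predicted: arrays of shape (timesteps, n_sensors).
    """
    err = np.abs(observed - predicted)
    # normalize each sensor's errors by its median and IQR for robustness
    med = np.median(err, axis=0)
    iqr = np.percentile(err, 75, axis=0) - np.percentile(err, 25, axis=0)
    norm_err = (err - med) / (iqr + eps)
    # timesteps whose score exceeds a chosen threshold are flagged as anomalous
    return norm_err.max(axis=1)
```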