1 code implementation • 30 Sep 2024 • Yifei Ming, Senthil Purushwalkam, Shrey Pandit, Zixuan Ke, Xuan-Phi Nguyen, Caiming Xiong, Shafiq Joty
Ensuring faithfulness to context in large language models (LLMs) and retrieval-augmented generation (RAG) systems is crucial for reliable deployment in real-world applications, as incorrect or unsupported information can erode user trust.
1 code implementation • 25 Sep 2024 • Zhenmei Shi, Yifei Ming, Xuan-Phi Nguyen, Yingyu Liang, Shafiq Joty
Our research introduces a novel approach to the long-context bottleneck, accelerating LLM inference and reducing GPU memory consumption.
no code implementations • 16 Sep 2024 • Xuan-Phi Nguyen, Shrey Pandit, Senthil Purushwalkam, Austin Xu, Hailin Chen, Yifei Ming, Zixuan Ke, Silvio Savarese, Caiming Xiong, Shafiq Joty
Retrieval Augmented Generation (RAG), a paradigm that integrates external contextual information with large language models (LLMs) to enhance factual accuracy and relevance, has emerged as a pivotal area in generative AI.
1 code implementation • 21 Jun 2024 • Jiayu Wang, Yifei Ming, Zhenmei Shi, Vibhav Vineet, Xin Wang, Yixuan Li, Neel Joshi
Large language models (LLMs) and vision-language models (VLMs) have demonstrated remarkable performance across a wide range of tasks and domains.
no code implementations • 2 May 2024 • Yifei Ming, Yixuan Li
Pre-trained contrastive vision-language models have demonstrated remarkable performance across a wide range of tasks.
1 code implementation • 29 Mar 2024 • Atsuyuki Miyai, Jingkang Yang, Jingyang Zhang, Yifei Ming, Qing Yu, Go Irie, Yixuan Li, Hai Li, Ziwei Liu, Kiyoharu Aizawa
This paper introduces a novel and significant challenge for Vision Language Models (VLMs), termed Unsolvable Problem Detection (UPD).
1 code implementation • 12 Feb 2024 • Haoyue Bai, Yifei Ming, Julian Katz-Samuels, Yixuan Li
Out-of-distribution (OOD) generalization is critical for machine learning models deployed in the real world.
no code implementations • 9 Jun 2023 • Yifei Ming, Yixuan Li
Recent CLIP-based fine-tuning methods such as prompt learning have demonstrated significant improvements in in-distribution (ID) classification and OOD generalization when OOD labels are available.
Out-of-Distribution Detection
1 code implementation • 13 Mar 2023 • Zhenmei Shi, Yifei Ming, Ying Fan, Frederic Sala, Yingyu Liang
In this paper, we propose a simple and effective regularization method based on the nuclear norm of the learned features for domain generalization.
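A minimal sketch of how such a nuclear-norm feature regularizer could be added to a standard classification loss; the featurizer/classifier split and the coefficient `lambda_nuc` are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch: penalize the nuclear norm (sum of singular values) of the
# batch feature matrix to encourage low-rank, domain-invariant representations.
# `featurizer`, `classifier`, and `lambda_nuc` are placeholder names.

def nuclear_norm(features: torch.Tensor) -> torch.Tensor:
    # features: (batch_size, feature_dim)
    return torch.linalg.matrix_norm(features, ord="nuc")

def training_loss(featurizer, classifier, x, y, lambda_nuc=0.01):
    feats = featurizer(x)                       # (B, D) learned features
    logits = classifier(feats)                  # (B, num_classes)
    ce = nn.functional.cross_entropy(logits, y)
    reg = nuclear_norm(feats) / feats.shape[0]  # normalize by batch size
    return ce + lambda_nuc * reg
```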
2 code implementations • 24 Nov 2022 • Yifei Ming, Ziyang Cai, Jiuxiang Gu, Yiyou Sun, Wei Li, Yixuan Li
Recognizing out-of-distribution (OOD) samples is critical for machine learning systems deployed in the open world.
2 code implementations • 28 Jun 2022 • Yifei Ming, Ying Fan, Yixuan Li
In this work, we propose a novel posterior sampling-based outlier mining framework, POEM, which facilitates efficient use of outlier data and promotes learning a compact decision boundary between ID and OOD data for improved detection.
Out-of-Distribution Detection
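A heavily simplified sketch of the posterior-sampling idea behind outlier mining of this kind: sample a boundary model from a Gaussian posterior, score a large pool of auxiliary outliers, and keep the ones closest to the sampled ID/OOD boundary for training. The Gaussian posterior and all names below are illustrative assumptions, not the exact POEM formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_boundary(mean: np.ndarray, cov: np.ndarray) -> np.ndarray:
    """Thompson-sampling step: draw one plausible linear boundary model."""
    return rng.multivariate_normal(mean, cov)

def mine_outliers(pool_feats: np.ndarray, mean: np.ndarray, cov: np.ndarray, k: int):
    """Pick the k pooled outliers closest to the sampled decision boundary."""
    w = sample_boundary(mean, cov)
    scores = np.abs(pool_feats @ w)   # proxy for distance to the boundary w^T x = 0
    return np.argsort(scores)[:k]     # near-boundary outliers are most informative

# Usage sketch: indices = mine_outliers(pool_feats, post_mean, post_cov, k=256);
# the posterior (post_mean, post_cov) would be updated after each training round.
```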
1 code implementation • 23 May 2022 • Tuan Dinh, Jy-yong Sohn, Shashank Rajput, Timothy Ossowski, Yifei Ming, Junjie Hu, Dimitris Papailiopoulos, Kangwook Lee
Word translation without parallel corpora has become feasible, rivaling the performance of supervised methods.
2 code implementations • 13 Apr 2022 • Yiyou Sun, Yifei Ming, Xiaojin Zhu, Yixuan Li
In this paper, we explore the efficacy of non-parametric nearest-neighbor distance for OOD detection, which has been largely overlooked in the literature.
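A minimal sketch of non-parametric nearest-neighbor OOD scoring, assuming normalized penultimate-layer features: the distance to the k-th nearest training feature serves as the OOD score. The value of k and the brute-force distance computation are illustrative choices.

```python
import numpy as np

def normalize(feats: np.ndarray) -> np.ndarray:
    return feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)

def knn_ood_score(train_feats: np.ndarray, test_feats: np.ndarray, k: int = 50):
    """Distance to the k-th nearest ID training feature; larger => more OOD-like."""
    train_n = normalize(train_feats)   # (N, D) in-distribution training features
    test_n = normalize(test_feats)     # (M, D) features to score
    # Brute-force pairwise Euclidean distances (a k-d tree or FAISS index scales better)
    dists = np.linalg.norm(test_n[:, None, :] - train_n[None, :, :], axis=-1)
    return np.partition(dists, k - 1, axis=1)[:, k - 1]

# Usage sketch: is_ood = knn_ood_score(train_feats, test_feats) > threshold
```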
1 code implementation • 17 Mar 2022 • Soumya Suvra Ghosal, Yifei Ming, Yixuan Li
Deep neural networks may be susceptible to learning spurious correlations that hold on average but not in atypical test samples.
1 code implementation • 8 Mar 2022 • Yifei Ming, Yiyou Sun, Ousmane Dia, Yixuan Li
Out-of-distribution (OOD) detection is a critical task for reliable machine learning.
Out-of-Distribution Detection +1
1 code implementation • 12 Sep 2021 • Yifei Ming, Hang Yin, Yixuan Li
Modern neural networks can assign high confidence to inputs drawn from outside the training distribution, posing threats to models in real-world deployments.
Out-of-Distribution Detection
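For context, the overconfidence problem is usually illustrated against the maximum-softmax-probability (MSP) baseline sketched below, which flags an input as OOD when the top softmax score is low; a network that is confidently wrong on OOD inputs evades exactly this check. The baseline is shown for illustration only and is not the method proposed in this paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability; higher means the input 'looks' in-distribution."""
    logits = model(x)                                    # (B, num_classes)
    return F.softmax(logits, dim=-1).max(dim=-1).values

# Usage sketch: is_ood = msp_score(model, batch) < threshold
```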
1 code implementation • 20 Nov 2020 • Ying Fan, Yifei Ming
In this paper, we study model-based posterior sampling for reinforcement learning (PSRL) in continuous state-action spaces theoretically and empirically.
no code implementations • 28 Sep 2020 • Ying Fan, Yifei Ming
Our bound can be extended to nonlinear cases as well: using linear kernels on the feature representation $\phi$, the Bayesian regret bound becomes $\tilde{O}(H^{3/2}d_{\phi}\sqrt{T})$, where $d_\phi$ is the dimension of the representation space.
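A minimal sketch of the model-based posterior-sampling loop, assuming a Bayesian linear-Gaussian dynamics model on features phi(s, a): sample a model from the posterior, plan in it, act, then update the posterior with the observed transitions. The feature map, planner, environment interface, and prior parameters are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

class BayesLinearDynamics:
    """Posterior over W in s' ≈ phi(s, a) @ W + Gaussian noise (known noise variance)."""

    def __init__(self, feat_dim, state_dim, prior_var=1.0, noise_var=0.1):
        self.precision = np.eye(feat_dim) / prior_var   # posterior precision matrix
        self.xty = np.zeros((feat_dim, state_dim))      # accumulated X^T y / noise_var
        self.noise_var = noise_var

    def update(self, phi_sa, next_state):
        self.precision += np.outer(phi_sa, phi_sa) / self.noise_var
        self.xty += np.outer(phi_sa, next_state) / self.noise_var

    def sample_model(self):
        cov = np.linalg.inv(self.precision)
        mean = cov @ self.xty
        # Draw one plausible dynamics matrix, column by column
        return np.stack(
            [rng.multivariate_normal(mean[:, j], cov) for j in range(mean.shape[1])],
            axis=1,
        )

def psrl_episode(env, dynamics, phi, plan, horizon):
    """One PSRL episode: sample -> plan -> act -> update the posterior."""
    W = dynamics.sample_model()              # 1) sample a model from the posterior
    policy = plan(W)                         # 2) plan with the sampled model (assumed planner)
    s = env.reset()                          # assumed environment interface
    for _ in range(horizon):                 # 3) act with the planned policy
        a = policy(s)
        s_next, _reward, done = env.step(a)
        dynamics.update(phi(s, a), s_next)   # 4) update the posterior with the transition
        s = s_next
        if done:
            break
```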