Search Results for author: Yifei Ming

Found 18 papers, 14 papers with code

FaithEval: Can Your Language Model Stay Faithful to Context, Even If "The Moon is Made of Marshmallows"

1 code implementation • 30 Sep 2024 • Yifei Ming, Senthil Purushwalkam, Shrey Pandit, Zixuan Ke, Xuan-Phi Nguyen, Caiming Xiong, Shafiq Joty

Ensuring faithfulness to context in large language models (LLMs) and retrieval-augmented generation (RAG) systems is crucial for reliable deployment in real-world applications, as incorrect or unsupported information can erode user trust.

counterfactual • Hallucination • +3

Discovering the Gems in Early Layers: Accelerating Long-Context LLMs with 1000x Input Token Reduction

1 code implementation • 25 Sep 2024 • Zhenmei Shi, Yifei Ming, Xuan-Phi Nguyen, Yingyu Liang, Shafiq Joty

Our research introduces a novel approach to the long-context bottleneck that accelerates LLM inference and reduces GPU memory consumption.

Token Reduction
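
The abstract above does not spell out the mechanism, but the title points to early-layer signals being used to prune the input before full inference. Below is a minimal sketch of that general idea, assuming a HuggingFace-style model and attention-based scoring from a single early layer; the layer index, top-k budget, and two-pass structure are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def select_tokens_by_early_layer(model, input_ids, early_layer=13, keep=1024):
    """Hypothetical sketch: score each input token by the attention it receives
    in one early layer, keep the top-`keep` tokens, and return the pruned input
    for a second, much cheaper forward pass. Assumes a HuggingFace-style model
    that accepts `output_attentions=True`."""
    with torch.no_grad():
        out = model(input_ids, output_attentions=True)        # one full prefill pass
    # attentions[early_layer]: (batch, heads, seq, seq); average over heads
    attn = out.attentions[early_layer].mean(dim=1)             # (batch, seq, seq)
    scores = attn[:, -1, :]                                     # attention the last token pays to each position
    k = min(keep, input_ids.size(1))
    keep_idx = scores.topk(k, dim=-1).indices.sort(dim=-1).values  # restore original order
    return torch.gather(input_ids, 1, keep_idx)                 # pruned context
```

In this sketch the expensive full-attention pass happens only once to produce the scores; generation then runs on the much shorter pruned input.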

SFR-RAG: Towards Contextually Faithful LLMs

no code implementations • 16 Sep 2024 • Xuan-Phi Nguyen, Shrey Pandit, Senthil Purushwalkam, Austin Xu, Hailin Chen, Yifei Ming, Zixuan Ke, Silvio Savarese, Caiming Xiong, Shafiq Joty

Retrieval Augmented Generation (RAG), a paradigm that integrates external contextual information with large language models (LLMs) to enhance factual accuracy and relevance, has emerged as a pivotal area in generative AI.

counterfactual • Hallucination • +3

Is A Picture Worth A Thousand Words? Delving Into Spatial Reasoning for Vision Language Models

1 code implementation • 21 Jun 2024 • Jiayu Wang, Yifei Ming, Zhenmei Shi, Vibhav Vineet, Xin Wang, Yixuan Li, Neel Joshi

Large language models (LLMs) and vision-language models (VLMs) have demonstrated remarkable performance across a wide range of tasks and domains.

Spatial Reasoning

Understanding Retrieval-Augmented Task Adaptation for Vision-Language Models

no code implementations • 2 May 2024 • Yifei Ming, Yixuan Li

Pre-trained contrastive vision-language models have demonstrated remarkable performance across a wide range of tasks.

Cross-Modal Retrieval • Retrieval

HYPO: Hyperspherical Out-of-Distribution Generalization

1 code implementation • 12 Feb 2024 • Haoyue Bai, Yifei Ming, Julian Katz-Samuels, Yixuan Li

Out-of-distribution (OOD) generalization is critical for machine learning models deployed in the real world.

Out-of-Distribution Generalization

How Does Fine-Tuning Impact Out-of-Distribution Detection for Vision-Language Models?

no code implementations • 9 Jun 2023 • Yifei Ming, Yixuan Li

Recent CLIP-based fine-tuning methods such as prompt learning have demonstrated significant improvements in ID classification and OOD generalization where OOD labels are available.

Out-of-Distribution Detection • Out of Distribution (OOD) Detection

Domain Generalization via Nuclear Norm Regularization

1 code implementation • 13 Mar 2023 • Zhenmei Shi, Yifei Ming, Ying Fan, Frederic Sala, Yingyu Liang

In this paper, we propose a simple and effective regularization method based on the nuclear norm of the learned features for domain generalization.

Domain Generalization
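
The summary above names the method directly: regularize with the nuclear norm of the learned features. A minimal PyTorch sketch of that idea, assuming the penalty is simply added to the task loss on each batch (the weight `lam` and the cross-entropy task loss are placeholders, not the paper's configuration):

```python
import torch
import torch.nn.functional as F

def nuclear_norm_regularized_loss(features, logits, labels, lam=0.01):
    """Sketch: task loss plus the nuclear norm of the batch feature matrix.
    `features` has shape (batch, dim); `lam` is an illustrative weight."""
    task_loss = F.cross_entropy(logits, labels)
    # Nuclear norm = sum of singular values of the feature matrix.
    nuc = torch.linalg.svdvals(features).sum()
    return task_loss + lam * nuc
```

Penalizing the sum of singular values pushes the batch features toward low rank, which is the general intuition behind a nuclear-norm regularizer.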

POEM: Out-of-Distribution Detection with Posterior Sampling

2 code implementations • 28 Jun 2022 • Yifei Ming, Ying Fan, Yixuan Li

In this work, we propose a novel posterior sampling-based outlier mining framework, POEM, which facilitates efficient use of outlier data and promotes learning a compact decision boundary between ID and OOD data for improved detection.

Out-of-Distribution Detection • Out of Distribution (OOD) Detection
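
The summary above describes posterior sampling over a decision boundary between ID and auxiliary outlier data. As a loosely hedged sketch of how such outlier mining could look, the snippet below Thompson-samples a linear boundary from a Gaussian posterior and keeps the pool outliers closest to it; the Gaussian posterior, linear score, and selection size are all illustrative assumptions rather than POEM's actual formulation.

```python
import numpy as np

def thompson_select_outliers(pool_feats, post_mean, post_cov, n_select=128, rng=None):
    """Hedged sketch of posterior-sampling outlier mining: draw one boundary
    from a Gaussian posterior over linear weights, score every auxiliary
    outlier, and keep those whose sampled scores lie nearest the boundary."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.multivariate_normal(post_mean, post_cov)   # Thompson sample of boundary weights
    scores = pool_feats @ w                             # sampled boundary score per pool example
    return np.argsort(np.abs(scores))[:n_select]        # outliers nearest the sampled boundary
```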

Out-of-Distribution Detection with Deep Nearest Neighbors

2 code implementations • 13 Apr 2022 • Yiyou Sun, Yifei Ming, Xiaojin Zhu, Yixuan Li

In this paper, we explore the efficacy of non-parametric nearest-neighbor distance for OOD detection, which has been largely overlooked in the literature.

Out-of-Distribution Detection
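
The summary above names the technique: a non-parametric nearest-neighbor distance as the OOD score. A minimal NumPy sketch, assuming L2-normalized penultimate-layer features and a brute-force distance computation (real pipelines would typically use an ANN library such as FAISS):

```python
import numpy as np

def knn_ood_score(train_feats, test_feats, k=50):
    """Sketch of non-parametric OOD scoring: the distance from a test feature
    to its k-th nearest training feature is the score (larger = more likely OOD).
    L2-normalizing features first is a common choice, assumed here."""
    def normalize(x):
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
    train, test = normalize(train_feats), normalize(test_feats)
    # Brute-force pairwise distances: (n_test, n_train)
    dists = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=-1)
    return np.sort(dists, axis=1)[:, k - 1]   # distance to the k-th neighbor
```

A threshold on this score (e.g., chosen so that 95% of held-out ID data is accepted) turns it into a detector.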

Are Vision Transformers Robust to Spurious Correlations?

1 code implementation • 17 Mar 2022 • Soumya Suvra Ghosal, Yifei Ming, Yixuan Li

Deep neural networks may be susceptible to learning spurious correlations that hold on average but not in atypical test samples.

On the Impact of Spurious Correlation for Out-of-distribution Detection

1 code implementation • 12 Sep 2021 • Yifei Ming, Hang Yin, Yixuan Li

Modern neural networks can assign high confidence to inputs drawn from outside the training distribution, posing threats to models in real-world deployments.

Out-of-Distribution Detection • Out of Distribution (OOD) Detection

Model-based Reinforcement Learning for Continuous Control with Posterior Sampling

1 code implementation • 20 Nov 2020 • Ying Fan, Yifei Ming

In this paper, we study model-based posterior sampling for reinforcement learning (PSRL) in continuous state-action spaces theoretically and empirically.

continuous-control • Continuous Control • +8
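
The summary above describes model-based posterior sampling (PSRL) in continuous state-action spaces. A generic, heavily hedged sketch of the PSRL loop is below; `posterior`, `plan`, and the environment API are illustrative placeholders, and the paper's specific model class and planner are not assumed.

```python
def psrl_episode(posterior, env, plan, horizon=200):
    """Generic posterior-sampling RL loop (sketch): sample one dynamics model
    from the posterior, act with respect to it for an episode, then update
    the posterior with the observed transitions."""
    model = posterior.sample()                  # one Thompson sample of the dynamics
    obs, transitions = env.reset(), []
    for _ in range(horizon):
        action = plan(model, obs)               # plan against the sampled model
        next_obs, reward, done = env.step(action)
        transitions.append((obs, action, reward, next_obs))
        obs = next_obs
        if done:
            break
    posterior.update(transitions)               # refine beliefs for the next episode
    return transitions
```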

Efficient Exploration for Model-based Reinforcement Learning with Continuous States and Actions

no code implementations • 28 Sep 2020 • Ying Fan, Yifei Ming

Our bound can be extended to nonlinear cases as well: using linear kernels on the feature representation $\phi$, the Bayesian regret bound becomes $\tilde{O}(H^{3/2}d_{\phi}\sqrt{T})$, where $d_\phi$ is the dimension of the representation space.

Efficient Exploration • Gaussian Processes • +4
