Search Results for author: Nitesh Sekhar

Found 2 papers, 1 paper with code

Language Grounded QFormer for Efficient Vision Language Understanding

no code implementations • 13 Nov 2023 • Moulik Choraria, Nitesh Sekhar, Yue Wu, Xu Zhang, Prateek Singhal, Lav R. Varshney

Large-scale pretraining and instruction tuning have been successful in training general-purpose language models with broad competencies.

Representation Learning

Avoiding spurious correlations via logit correction

1 code implementation • 2 Dec 2022 • Sheng Liu, Xu Zhang, Nitesh Sekhar, Yue Wu, Prateek Singhal, Carlos Fernandez-Granda

Empirical studies suggest that machine learning models trained with empirical risk minimization (ERM) often rely on attributes that are spuriously correlated with the class labels.
