no code implementations • 13 Nov 2023 • Moulik Choraria, Nitesh Sekhar, Yue Wu, Xu Zhang, Prateek Singhal, Lav R. Varshney
Large-scale pretraining and instruction tuning have been successful for training general-purpose language models with broad competencies.
1 code implementation • 2 Dec 2022 • Sheng Liu, Xu Zhang, Nitesh Sekhar, Yue Wu, Prateek Singhal, Carlos Fernandez-Granda
Empirical studies suggest that machine learning models trained with empirical risk minimization (ERM) often rely on attributes that may be spuriously correlated with the class labels.
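The reliance on spurious attributes described above can be demonstrated with a small synthetic sketch (not the authors' setup): a classifier trained by plain empirical risk minimization on data where a nuisance attribute perfectly tracks the label will lean on that attribute, and its accuracy collapses when the correlation reverses at test time.

```python
import numpy as np

# Synthetic illustration of ERM picking up a spurious correlation.
# All data and choices here are hypothetical, for demonstration only.
rng = np.random.default_rng(0)
n = 2000

y_train = rng.integers(0, 2, n)
y_test = rng.integers(0, 2, n)

# "Core" feature: weakly predictive, identically distributed at train and test.
core_train = y_train + rng.normal(0.0, 1.0, n)
core_test = y_test + rng.normal(0.0, 1.0, n)

# Spurious attribute: perfectly matches the label in training,
# but the correlation is reversed at test time.
sp_train = y_train.astype(float)
sp_test = (1 - y_test).astype(float)

def features(core, sp):
    # Stack core feature, spurious attribute, and a bias column.
    return np.column_stack([core, sp, np.ones_like(core)])

X_train, X_test = features(core_train, sp_train), features(core_test, sp_test)

# ERM: full-batch gradient descent on the average logistic loss.
w = np.zeros(3)
for _ in range(3000):
    z = np.clip(X_train @ w, -30, 30)
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.5 * X_train.T @ (p - y_train) / n

def accuracy(X, y):
    return float(np.mean((X @ w > 0) == y))

train_acc = accuracy(X_train, y_train)  # near-perfect: the spurious attribute separates the training set
test_acc = accuracy(X_test, y_test)     # well below chance once the correlation flips
print(train_acc, test_acc)
```

Because the spurious attribute is noise-free on the training set while the core feature is noisy, minimizing average training loss drives the weight on the spurious attribute up, which is exactly the failure mode the abstract refers to.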