no code implementations • 4 Apr 2024 • Farnaz Kohankhaki, Jacob-Junqi Tian, David Emerson, Laleh Seyyed-Kalantari, Faiza Khan Khattak
This approach is widely used in bias quantification.
1 code implementation • 5 Dec 2023 • Matthew Choi, Muhammad Adil Asif, John Willes, David Emerson
As large language models have grown to incorporate billions of parameters, the hardware requirements for their training and deployment have risen correspondingly.
no code implementations • 24 Jul 2023 • Jacob-Junqi Tian, Omkar Dige, David Emerson, Faiza Khan Khattak
Because language models are trained on vast datasets that may contain inherent biases, they risk inadvertently perpetuating systemic discrimination.
no code implementations • 19 Jul 2023 • Omkar Dige, Jacob-Junqi Tian, David Emerson, Faiza Khan Khattak
As the breadth and depth of language model applications continue to expand rapidly, it is increasingly important to build efficient frameworks for measuring and mitigating the learned or inherited social biases of these models.
no code implementations • 7 Jun 2023 • Jacob-Junqi Tian, David Emerson, Sevil Zanjani Miyandoab, Deval Pandya, Laleh Seyyed-Kalantari, Faiza Khan Khattak
In this paper, we explore the use of soft-prompt tuning on a sentiment classification task to quantify the biases of large language models (LLMs) such as Open Pre-trained Transformers (OPT) and Galactica.
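The snippet below is a minimal sketch of the general soft-prompt tuning technique named in this abstract: a small set of trainable prompt embeddings is prepended to the input of a frozen causal LM and trained for sentiment classification. It assumes a Hugging Face-style OPT checkpoint (facebook/opt-125m); the prompt length, learning rate, and the " positive"/" negative" verbalizer tokens are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "facebook/opt-125m"   # small stand-in for the OPT family
N_PROMPT_TOKENS = 20               # soft-prompt length (illustrative)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.padding_side = "left"    # keep the last position a real token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.requires_grad_(False)        # freeze every base-model weight

embed = model.get_input_embeddings()

# Trainable soft prompt, initialized from random vocabulary embeddings.
init_ids = torch.randint(0, embed.num_embeddings, (N_PROMPT_TOKENS,))
soft_prompt = nn.Parameter(embed(init_ids).detach().clone())

def forward_with_prompt(input_ids, attention_mask):
    """Prepend the soft prompt to the token embeddings and run the frozen LM."""
    batch = input_ids.size(0)
    tok_embeds = embed(input_ids)
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)
    inputs_embeds = torch.cat([prompt, tok_embeds], dim=1)
    prompt_mask = torch.ones(batch, N_PROMPT_TOKENS, dtype=attention_mask.dtype)
    mask = torch.cat([prompt_mask, attention_mask], dim=1)
    return model(inputs_embeds=inputs_embeds, attention_mask=mask).logits

# Read sentiment off the next-token probabilities of two verbalizer tokens.
POS_ID = tokenizer(" positive", add_special_tokens=False).input_ids[0]
NEG_ID = tokenizer(" negative", add_special_tokens=False).input_ids[0]

optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)

def train_step(texts, labels):
    """One gradient step; labels are 0 (negative) or 1 (positive)."""
    enc = tokenizer(texts, return_tensors="pt", padding=True)
    logits = forward_with_prompt(enc.input_ids, enc.attention_mask)
    class_logits = logits[:, -1, [NEG_ID, POS_ID]]   # restrict to verbalizers
    loss = nn.functional.cross_entropy(class_logits, torch.tensor(labels))
    optimizer.zero_grad()
    loss.backward()
    loss_val = loss.item()
    optimizer.step()
    return loss_val
```

Only the soft prompt receives gradients, so the base model's weights and any biases encoded in them are left untouched; one common way to then quantify bias is to score counterfactual sentence pairs that differ only in a demographic term and check whether the predicted sentiment shifts systematically.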