Search Results for author: David Emerson

Found 5 papers, 1 paper with code

FlexModel: A Framework for Interpretability of Distributed Large Language Models

1 code implementation • 5 Dec 2023 • Matthew Choi, Muhammad Adil Asif, John Willes, David Emerson

With the growth of large language models, now incorporating billions of parameters, the hardware prerequisites for their training and deployment have seen a corresponding increase.

Distributed Computing

Interpretable Stereotype Identification through Reasoning

no code implementations • 24 Jul 2023 • Jacob-Junqi Tian, Omkar Dige, David Emerson, Faiza Khan Khattak

Given that language models are trained on vast datasets that may contain inherent biases, there is a potential danger of inadvertently perpetuating systemic discrimination.

Fairness

Can Instruction Fine-Tuned Language Models Identify Social Bias through Prompting?

no code implementations • 19 Jul 2023 • Omkar Dige, Jacob-Junqi Tian, David Emerson, Faiza Khan Khattak

As the breadth and depth of language model applications continue to expand rapidly, it is increasingly important to build efficient frameworks for measuring and mitigating the learned or inherited social biases of these models.

Language Modelling

Soft-prompt Tuning for Large Language Models to Evaluate Bias

no code implementations • 7 Jun 2023 • Jacob-Junqi Tian, David Emerson, Sevil Zanjani Miyandoab, Deval Pandya, Laleh Seyyed-Kalantari, Faiza Khan Khattak

In this paper, we explore the use of soft-prompt tuning on a sentiment classification task to quantify the biases of large language models (LLMs) such as Open Pre-trained Transformers (OPT) and the Galactica language model.
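The soft-prompt idea the abstract refers to can be illustrated with a minimal NumPy sketch (a hypothetical toy stand-in, not the paper's implementation): the backbone model's weights stay frozen, and only a small matrix of prompt embeddings prepended to the input is updated by gradient descent. The frozen linear "readout" here is an assumed placeholder for a real language model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only).
EMB_DIM, PROMPT_LEN, SEQ_LEN = 8, 4, 6

frozen_readout = rng.normal(size=EMB_DIM)          # stands in for frozen LLM weights
token_embs = rng.normal(size=(SEQ_LEN, EMB_DIM))   # embeddings of the input text
soft_prompt = np.zeros((PROMPT_LEN, EMB_DIM))      # the only trainable parameters

def score(prompt, embs):
    # Prepend the soft prompt to the token embeddings, mean-pool,
    # and apply the frozen readout to get a scalar sentiment score.
    full = np.concatenate([prompt, embs], axis=0)
    return full.mean(axis=0) @ frozen_readout

target = 1.0  # e.g. a "positive" sentiment label
lr = 0.5
for _ in range(100):
    err = score(soft_prompt, token_embs) - target
    # Gradient of the squared-error loss 0.5*err**2 with respect to
    # each prompt row is err * frozen_readout / (number of pooled rows).
    grad = err * frozen_readout / (PROMPT_LEN + SEQ_LEN)
    soft_prompt -= lr * grad  # same update broadcast to every prompt row
```

After training, the prompt steers the frozen model's score toward the target label; in the bias-evaluation setting, differences in such scores across demographic groups are what get quantified.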

Fairness • Language Modelling • +2
