Search Results for author: Rafal Kocielnik

Found 8 papers, 0 papers with code

ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs

no code implementations19 Feb 2024 Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar

Our results reveal that: (1) ChatGPT can efficiently produce high-quality training data for debiasing other LLMs; (2) data produced via our approach surpasses existing datasets in debiasing performance while also preserving internal knowledge of a pre-trained LLM; and (3) synthetic data exhibits generalizability across categories, effectively mitigating various biases, including intersectional ones.

Data Augmentation · Fairness

Deep Multimodal Fusion for Surgical Feedback Classification

no code implementations6 Dec 2023 Rafal Kocielnik, Elyssa Y. Wong, Timothy N. Chu, Lydia Lin, De-An Huang, Jiayun Wang, Anima Anandkumar, Andrew J. Hung

This work offers an important first look at the feasibility of automated classification of real-world live surgical feedback based on text, audio, and video modalities.

Classification

Exploring Social Bias in Downstream Applications of Text-to-Image Foundation Models

no code implementations5 Dec 2023 Adhithya Prakash Saravanan, Rafal Kocielnik, Roy Jiang, Pengrui Han, Anima Anandkumar

Text-to-image diffusion models have been adopted into key commercial workflows, such as art generation and image editing.

Image Generation

BiasTestGPT: Using ChatGPT for Social Bias Testing of Language Models

no code implementations14 Feb 2023 Rafal Kocielnik, Shrimai Prabhumoye, Vivian Zhang, Roy Jiang, R. Michael Alvarez, Anima Anandkumar

We thus enable seamless open-ended social bias testing of PLMs by domain experts through an automatic large-scale generation of diverse test sentences for any combination of social categories and attributes.

Sentence · Text Generation
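The BiasTestGPT abstract above describes automatically generating test sentences for any combination of social categories and attributes. A minimal sketch of what such a generation prompt might look like is below; the function name, template wording, and parameters are illustrative assumptions, not the paper's actual prompt.

```python
def make_generation_prompt(category: str, attribute: str, n: int = 5) -> str:
    """Build a hypothetical prompt asking a chat model (e.g. ChatGPT) to
    produce diverse test sentences pairing a social category with an
    attribute. The template is an assumption for illustration only."""
    return (
        f"Generate {n} diverse sentences that mention the social category "
        f"'{category}' together with the attribute '{attribute}'. "
        "Return one sentence per line."
    )

# Example usage with a hypothetical category/attribute pair:
p = make_generation_prompt("gender", "career")
```

In practice the returned sentences would then be used as bias test inputs for the PLM under evaluation.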

Can You Label Less by Using Out-of-Domain Data? Active & Transfer Learning with Few-shot Instructions

no code implementations21 Nov 2022 Rafal Kocielnik, Sara Kangaslahti, Shrimai Prabhumoye, Meena Hari, R. Michael Alvarez, Anima Anandkumar

Finally, we find that not all transfer scenarios yield a positive gain, which appears to be related to the PLM's initial performance on the target-domain task.

Active Learning · Transfer Learning

From Who You Know to What You Read: Augmenting Scientific Recommendations with Implicit Social Networks

no code implementations21 Apr 2022 Hyeonsu B. Kang, Rafal Kocielnik, Andrew Head, Jiangjiang Yang, Matt Latzke, Aniket Kittur, Daniel S. Weld, Doug Downey, Jonathan Bragg

To improve the discovery experience, we introduce multiple new methods for augmenting recommendations with textual relevance messages that highlight knowledge-graph connections between recommended papers and a user's publication and interaction history.

Few-shot Instruction Prompts for Pretrained Language Models to Detect Social Biases

no code implementations15 Dec 2021 Shrimai Prabhumoye, Rafal Kocielnik, Mohammad Shoeybi, Anima Anandkumar, Bryan Catanzaro

We then provide the LM with an instruction consisting of this subset of labeled exemplars, the query text to be classified, and a definition of bias, and prompt it to make a decision.
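The instruction described above (labeled exemplars, a bias definition, and the query text ending in a decision prompt) could be assembled as sketched below. This is a minimal illustration assuming a simple text template; the field names and exact wording are not taken from the paper.

```python
def build_instruction(exemplars, query, bias_definition):
    """Assemble a hypothetical few-shot instruction: a bias definition,
    labeled exemplars, and the query text, ending with a decision prompt.
    Template layout is an illustrative assumption."""
    parts = [f"Definition of bias: {bias_definition}"]
    for text, label in exemplars:
        parts.append(f'Text: "{text}"\nBiased: {label}')
    # The final block leaves the label blank for the LM to complete.
    parts.append(f'Text: "{query}"\nBiased:')
    return "\n\n".join(parts)

# Example usage with made-up exemplars:
prompt = build_instruction(
    exemplars=[("Example sentence one.", "yes"),
               ("Example sentence two.", "no")],
    query="Sentence to classify.",
    bias_definition="Text expressing prejudice toward a social group.",
)
```

The completed string would then be sent to the pretrained LM, whose continuation after the final "Biased:" serves as the classification decision.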
