Search Results for author: Stephen H. Bach

Found 15 papers, 8 papers with code

Tight Lower Bounds on Worst-Case Guarantees for Zero-Shot Learning with Attributes

no code implementations 25 May 2022 Alessio Mazzetto, Cristina Menghini, Andrew Yuan, Eli Upfal, Stephen H. Bach

We develop the first non-trivial lower bound on the worst-case error of the best map from attributes to classes for this setting, even with perfect attribute detectors.

Zero-Shot Learning

Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations

no code implementations 15 May 2022 Jessica Dai, Sohini Upadhyay, Ulrich Aivodji, Stephen H. Bach, Himabindu Lakkaraju

We then leverage these properties to propose a novel evaluation framework which can quantitatively measure disparities in the quality of explanations output by state-of-the-art methods.

Decision Making · Fairness

Language Models in the Loop: Incorporating Prompting into Weak Supervision

no code implementations 4 May 2022 Ryan Smith, Jason A. Fries, Braden Hancock, Stephen H. Bach

Our experimental evaluation shows that prompting large language models within a weak supervision framework can provide significant gains in accuracy.
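The snippet above reports the result without the mechanism, so here is a rough sketch of the underlying idea: a prompted language model serves as one noisy labeling source among several, and its votes are aggregated with the other labelers' outputs to produce training labels. Everything below (the query_llm stub, the prompt wording, and the majority-vote aggregation) is an illustrative assumption, not the paper's implementation.

```python
# Minimal sketch: a prompted language model as one weak labeling source,
# aggregated with a heuristic labeler. Illustrative only.
from collections import Counter

ABSTAIN = -1
LABELS = {"positive": 1, "negative": 0}

def query_llm(prompt: str) -> str:
    # Stand-in for a real LLM client; returns a canned answer so the
    # sketch runs end to end. This is an assumption, not an actual API.
    return "positive"

def prompted_labeler(text: str) -> int:
    """Weak labeler that asks the language model about the text."""
    answer = query_llm(
        "Is the sentiment of the following review positive or negative?\n"
        f"Review: {text}\nAnswer with one word:"
    ).strip().lower()
    return LABELS.get(answer, ABSTAIN)  # abstain if the answer is unparseable

def keyword_labeler(text: str) -> int:
    """A conventional heuristic labeler, included to show aggregation."""
    lowered = text.lower()
    if "excellent" in lowered or "great" in lowered:
        return 1
    if "terrible" in lowered or "awful" in lowered:
        return 0
    return ABSTAIN

def aggregate(votes: list[int]) -> int:
    """Majority vote over non-abstaining labelers; a learned label model
    would typically replace this simple rule."""
    counts = Counter(v for v in votes if v != ABSTAIN)
    return counts.most_common(1)[0][0] if counts else ABSTAIN

def weak_label(text: str) -> int:
    return aggregate([prompted_labeler(text), keyword_labeler(text)])

print(weak_label("The food was excellent and the service great."))
```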

Learning to Compose Soft Prompts for Compositional Zero-Shot Learning

1 code implementation 7 Apr 2022 Nihal V. Nayak, Peilin Yu, Stephen H. Bach

Further, we show that CSP improves generalization to higher-order attribute-attribute-object compositions and combinations of pretrained attributes and fine-tuned objects.

Compositional Zero-Shot Learning
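As a rough illustration of what composing soft prompts can mean here, the sketch below builds a prompt from one learnable attribute vector and one learnable object vector and scores attribute-object pairs against an image embedding. The encoder stand-ins, dimensions, and toy vocabulary are assumptions for illustration, not the paper's code.

```python
# Minimal sketch of composing learned attribute and object tokens into a
# prompt and scoring attribute-object pairs by similarity to an image
# embedding. Encoders are stand-ins so the sketch runs without weights.
import torch

EMB_DIM = 512
attributes = ["red", "sliced"]
objects = ["apple", "tomato"]

# One learnable vector per attribute and per object primitive.
attr_emb = torch.nn.Embedding(len(attributes), EMB_DIM)
obj_emb = torch.nn.Embedding(len(objects), EMB_DIM)

def encode_text(prompt_tokens: torch.Tensor) -> torch.Tensor:
    """Stand-in for a frozen text encoder; averages the token vectors."""
    return prompt_tokens.mean(dim=0)

def compose_prompt(attr_idx: int, obj_idx: int) -> torch.Tensor:
    """Compose the prompt [prefix, attribute token, object token]."""
    prefix = torch.zeros(EMB_DIM)  # placeholder for "a photo of" tokens
    tokens = torch.stack([
        prefix,
        attr_emb(torch.tensor(attr_idx)),
        obj_emb(torch.tensor(obj_idx)),
    ])
    return encode_text(tokens)

def score_pairs(image_feat: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between the image and every attribute-object pair."""
    scores = torch.zeros(len(attributes), len(objects))
    for a in range(len(attributes)):
        for o in range(len(objects)):
            text_feat = compose_prompt(a, o)
            scores[a, o] = torch.cosine_similarity(image_feat, text_feat, dim=0)
    return scores

image_feat = torch.randn(EMB_DIM)  # stand-in for an image encoder output
print(score_pairs(image_feat))
```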

TAGLETS: A System for Automatic Semi-Supervised Learning with Auxiliary Data

2 code implementations 8 Nov 2021 Wasu Piriyakulkij, Cristina Menghini, Ross Briden, Nihal V. Nayak, Jeffrey Zhu, Elaheh Raisi, Stephen H. Bach

Machine learning practitioners often have access to a spectrum of data: labeled data for the target task (which is often limited), unlabeled data, and auxiliary data, that is, the many labeled datasets available for other tasks.

Image Classification · Transfer Learning

What will it take to generate fairness-preserving explanations?

no code implementations 24 Jun 2021 Jessica Dai, Sohini Upadhyay, Stephen H. Bach, Himabindu Lakkaraju

In situations where explanations of black-box models may be useful, the fairness of the black-box is also often a relevant concern.

Fairness

Learning from Multiple Noisy Partial Labelers

1 code implementation 8 Jun 2021 Peilin Yu, Tiffany Ding, Stephen H. Bach

We evaluate our framework on three text classification and six object classification tasks.

Classification · Text Classification · +1

Extended Few-Shot Learning: Exploiting Existing Resources for Novel Tasks

2 code implementations 13 Dec 2020 Reza Esfandiarpoor, Amy Pu, Mohsen Hajabdollahi, Stephen H. Bach

In many practical few-shot learning problems, even though labeled examples are scarce, there are abundant auxiliary datasets that potentially contain useful information.

Few-Shot Image Classification · Semantic Similarity · +2

Zero-Shot Learning with Common Sense Knowledge Graphs

2 code implementations 18 Jun 2020 Nihal V. Nayak, Stephen H. Bach

Zero-shot learning relies on semantic class representations such as hand-engineered attributes or learned embeddings to predict classes without any labeled examples.

Generalized Zero-Shot Learning · Knowledge Graphs
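The abstract snippet above summarizes the mechanism; a minimal sketch of that style of zero-shot prediction follows, assuming an illustrative projection matrix W, toy attribute vectors, and dot-product scoring rather than anything from the paper itself.

```python
# Minimal sketch of zero-shot classification with semantic class
# representations: each class (seen or unseen) has a vector, e.g. attribute
# values or an embedding derived from a knowledge graph, and an input is
# assigned to the class whose vector best matches its projected features.
import numpy as np

# Class representations (rows); "tiger" has no training images.
class_names = ["zebra", "horse", "tiger"]
class_vectors = np.array([
    [1.0, 1.0, 0.0],   # striped, four-legged, not feline
    [0.0, 1.0, 0.0],   # plain,   four-legged, not feline
    [1.0, 1.0, 1.0],   # striped, four-legged, feline
])

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # stand-in for a mapping learned on seen classes

def predict(image_features: np.ndarray) -> str:
    projected = image_features @ W           # map into class-vector space
    sims = class_vectors @ projected         # dot-product similarity
    sims /= np.linalg.norm(class_vectors, axis=1) * (np.linalg.norm(projected) + 1e-8)
    return class_names[int(np.argmax(sims))]  # may be an unseen class

print(predict(rng.normal(size=4)))
```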

Snorkel: Rapid Training Data Creation with Weak Supervision

2 code implementations 28 Nov 2017 Alexander Ratner, Stephen H. Bach, Henry Ehrenberg, Jason Fries, Sen Wu, Christopher Ré

In a user study, subject matter experts build models 2.8x faster and increase predictive performance by an average of 45.5% versus seven hours of hand labeling.

Hinge-Loss Markov Random Fields and Probabilistic Soft Logic

no code implementations 17 May 2015 Stephen H. Bach, Matthias Broecheler, Bert Huang, Lise Getoor

In this paper, we introduce two new formalisms for modeling structured data, and show that they can both capture rich structure and scale to big data.

Knowledge Graphs · Probabilistic Programming
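For readers who want the form of the model behind these formalisms, a brief sketch of the hinge-loss Markov random field density is below (notation may differ slightly from the paper's):

```latex
% Density over continuous variables y in [0,1]^n given observations x,
% defined through weighted hinge-loss potentials.
P(\mathbf{y} \mid \mathbf{x}) \propto
  \exp\!\Big(-\sum_{j=1}^{m} \lambda_j \, \phi_j(\mathbf{y}, \mathbf{x})\Big),
\qquad
\phi_j(\mathbf{y}, \mathbf{x}) = \big[\max\{0,\ \ell_j(\mathbf{y}, \mathbf{x})\}\big]^{p_j},
```

where each $\ell_j$ is a linear function of the variables and $p_j \in \{1, 2\}$. Because every potential is convex in the continuous variables, MAP inference reduces to convex optimization, which is what allows these models to scale to large structured problems.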
