Search Results for author: James Foulds

Found 19 papers, 5 papers with code

Tell Me Something That Will Help Me Trust You: A Survey of Trust Calibration in Human-Agent Interaction

no code implementations • 6 May 2022 • George J. Cancro, Shimei Pan, James Foulds

In this paper we survey literature in the area of trust between a single human supervisor and a single agent subordinate to determine the nature and extent of this additional information and to characterize it into a taxonomy that can be leveraged by future researchers and intelligent agent practitioners.

User Acceptance of Gender Stereotypes in Automated Career Recommendations

no code implementations • 13 Jun 2021 • Clarice Wang, Kathryn Wang, Andrew Bian, Rashidul Islam, Kamrun Naher Keya, James Foulds, Shimei Pan

In other words, our results demonstrate we cannot fully address the gender bias issue in AI recommendations without addressing the gender bias in humans.

BIG-bench Machine Learning

Fair Representation Learning for Heterogeneous Information Networks

1 code implementation • 18 Apr 2021 • Ziqian Zeng, Rashidul Islam, Kamrun Naher Keya, James Foulds, Yangqiu Song, Shimei Pan

Recently, much attention has been paid to the societal impact of AI, especially concerns regarding its fairness.

Fairness • Representation Learning

Causal Feature Selection with Dimension Reduction for Interpretable Text Classification

no code implementations • 9 Oct 2020 • Guohou Shan, James Foulds, Shimei Pan

Text features that are correlated with class labels, but do not directly cause them, are sometimes useful for prediction, but they may not be insightful.

Causal Inference • Dimensionality Reduction +4

Neural Fair Collaborative Filtering

no code implementations • 2 Sep 2020 • Rashidul Islam, Kamrun Naher Keya, Ziqian Zeng, Shimei Pan, James Foulds

A growing proportion of human interactions are digitized on social media platforms and subjected to algorithmic decision-making, and it has become increasingly important to ensure fair treatment from these algorithms.

Collaborative Filtering • Decision Making +1

Scalable Collapsed Inference for High-Dimensional Topic Models

1 code implementation • NAACL 2019 • Rashidul Islam, James Foulds

In this paper, we develop an online inference algorithm for topic models which leverages stochasticity to scale well in the number of documents, sparsity to scale well in the number of topics, and which operates in the collapsed representation of the topic model for improved accuracy and run-time performance.

Topic Models • Vocal Bursts Intensity Prediction
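
The abstract above names three ingredients: stochastic updates over documents, sparsity over topics, and a collapsed representation. As a hedged illustration of the sparsity ingredient only, the Python fragment below scores a token against just the topics that currently have nonzero count in its document; the fallback rule and the decision to simply ignore the remaining topics (a full algorithm would add a cheap correction for their prior mass) are simplifying assumptions, not this paper's method.

```python
import numpy as np

def sparse_token_responsibilities(w, doc_topic, word_topic, topic_totals,
                                  alpha=0.1, eta=0.01):
    """Score one token against only the topics active in its document.

    doc_topic:    (K,) topic counts for the current document (mostly zero).
    word_topic:   (W, K) word-topic counts.
    topic_totals: (K,) total counts per topic.
    Returns the active topic ids and their normalised responsibilities.
    """
    W = word_topic.shape[0]
    active = np.nonzero(doc_topic)[0]
    if active.size == 0:
        # New or empty document: fall back to a dense update over all topics.
        active = np.arange(doc_topic.shape[0])
    scores = ((word_topic[w, active] + eta)
              * (doc_topic[active] + alpha)
              / (topic_totals[active] + W * eta))
    return active, scores / scores.sum()
```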

Estimating Buildings' Parameters over Time Including Prior Knowledge

1 code implementation • 9 Jan 2019 • Nilavra Pathak, James Foulds, Nirmalya Roy, Nilanjan Banerjee, Ryan Robucci

We perform extensive evaluations on two datasets to understand the generative process and show that the Bayesian approach is more interpretable.

Causal Inference • Transfer Learning +1

Bayesian Modeling of Intersectional Fairness: The Variance of Bias

no code implementations • 18 Nov 2018 • James Foulds, Rashidul Islam, Kamrun Keya, Shimei Pan

Intersectionality is a framework that analyzes how interlocking systems of power and oppression affect individuals along overlapping dimensions including race, gender, sexual orientation, class, and disability.

Fairness

An Intersectional Definition of Fairness

2 code implementations • 22 Jul 2018 • James Foulds, Rashidul Islam, Kamrun Naher Keya, Shimei Pan

We propose definitions of fairness in machine learning and artificial intelligence systems that are informed by the framework of intersectionality, a critical lens arising from the Humanities literature which analyzes how interlocking systems of power and oppression affect individuals along overlapping dimensions including gender, race, sexual orientation, class, and disability.

BIG-bench Machine Learning • Fairness
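
As a hedged illustration of the style of criterion proposed here, the sketch below computes an ε-differential-fairness-style score: the largest absolute log-ratio of smoothed positive-outcome rates over all pairs of intersectional groups (smaller is fairer). The group structure, smoothing constant, and function name are illustrative assumptions rather than the paper's reference implementation.

```python
import itertools
import math

def differential_fairness_epsilon(counts, smoothing=1.0):
    """Largest absolute log-ratio of smoothed positive-outcome rates across groups.

    counts: dict mapping an intersectional group (e.g. a (gender, age) tuple)
            to (num_positive_outcomes, num_individuals).
    Smaller returned values indicate more similar treatment across groups.
    """
    # Dirichlet-style smoothing so small or empty groups do not yield infinities.
    probs = {
        g: (pos + smoothing) / (total + 2.0 * smoothing)
        for g, (pos, total) in counts.items()
    }
    eps = 0.0
    for (_, p1), (_, p2) in itertools.combinations(probs.items(), 2):
        eps = max(eps, abs(math.log(p1) - math.log(p2)))
    return eps

# Toy example over two protected attributes (gender x age group).
outcomes = {
    ("female", "young"): (40, 100),
    ("female", "older"): (30, 100),
    ("male", "young"): (55, 100),
    ("male", "older"): (45, 100),
}
print(differential_fairness_epsilon(outcomes))
```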

Mixed Membership Word Embeddings for Computational Social Science

no code implementations • 20 May 2017 • James Foulds

Word embeddings improve the performance of NLP systems by revealing the hidden structural relationships between words.

Language Modelling • Topic Models +1

Variational Bayes In Private Settings (VIPS)

1 code implementation • 1 Nov 2016 • Mijung Park, James Foulds, Kamalika Chaudhuri, Max Welling

Many applications of Bayesian data analysis involve sensitive information, motivating methods which ensure that privacy is protected.

Bayesian Inference • Data Augmentation +1

Private Topic Modeling

no code implementations • 14 Sep 2016 • Mijung Park, James Foulds, Kamalika Chaudhuri, Max Welling

We develop a privatised stochastic variational inference method for Latent Dirichlet Allocation (LDA).

Variational Inference
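
As a rough sketch of how noise might be injected into stochastic variational inference for LDA, the fragment below clips each document's expected word-topic statistics, adds Gaussian noise to the minibatch aggregate, and then takes a standard SVI step. The clipping norm, noise scale, and update form are illustrative assumptions; the paper calibrates its mechanism to formal privacy guarantees, which this sketch does not do.

```python
import numpy as np

def private_svi_topic_update(lmbda, minibatch_stats, D,
                             eta=0.01, clip=1.0, noise_sigma=1.0, rho=0.01):
    """One privatised SVI-style update of LDA topic parameters (illustrative only).

    lmbda:           (K, W) variational Dirichlet parameters for the topics.
    minibatch_stats: list of (K, W) per-document expected word-topic statistics.
    D:               total number of documents in the corpus.
    """
    # Bound each document's contribution so the aggregate has known sensitivity.
    clipped = []
    for s in minibatch_stats:
        norm = np.linalg.norm(s)
        clipped.append(s if norm <= clip else s * (clip / norm))
    summed = np.sum(clipped, axis=0)

    # Gaussian-mechanism noise on the aggregated statistics; in practice sigma
    # would be calibrated to an (epsilon, delta) budget via a composition accountant.
    noisy = summed + np.random.normal(scale=noise_sigma * clip, size=summed.shape)

    # Standard SVI natural-gradient step toward the (noisy) full-corpus estimate.
    lambda_hat = eta + (D / len(minibatch_stats)) * noisy
    return (1.0 - rho) * lmbda + rho * lambda_hat
```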

On the Theory and Practice of Privacy-Preserving Bayesian Data Analysis

no code implementations • 23 Mar 2016 • James Foulds, Joseph Geumlek, Max Welling, Kamalika Chaudhuri

Bayesian inference has great promise for the privacy-preserving analysis of sensitive data, as posterior sampling automatically preserves differential privacy, an algorithmic notion of data privacy, under certain conditions (Dimitrakakis et al., 2014; Wang et al., 2015).

Bayesian Inference • Privacy Preserving +2
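
In the spirit of the sufficient-statistic perturbation approach this line of work discusses alongside private posterior sampling, here is a minimal sketch for a conjugate Beta-Bernoulli model: Laplace noise is added to the success count (sensitivity 1), and the posterior is formed from the noised statistic. The model choice, prior, and clipping of the noisy count to a valid range are illustrative assumptions.

```python
import numpy as np

def private_beta_posterior(data, epsilon, a0=1.0, b0=1.0):
    """Differentially private Beta posterior for a Bernoulli parameter.

    Laplace noise is added to the sufficient statistic (the count of 1s), whose
    sensitivity is 1 when neighbouring datasets differ in one record.
    """
    n = len(data)
    successes = float(np.sum(data))
    noisy = successes + np.random.laplace(scale=1.0 / epsilon)
    noisy = min(max(noisy, 0.0), float(n))   # keep the noised count in a valid range
    return a0 + noisy, b0 + n - noisy        # Beta(a, b) posterior parameters

# Toy usage: 100 coin flips, privacy parameter epsilon = 0.5.
rng = np.random.default_rng(0)
flips = rng.binomial(1, 0.3, size=100)
a, b = private_beta_posterior(flips, epsilon=0.5)
print(a, b, a / (a + b))                     # posterior mean estimate of the bias
```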

Stochastic Collapsed Variational Bayesian Inference for Latent Dirichlet Allocation

no code implementations • 10 May 2013 • James Foulds, Levi Boyles, Christopher DuBois, Padhraic Smyth, Max Welling

We propose a stochastic algorithm for collapsed variational Bayesian inference for LDA, which is simpler and more efficient than the state of the art method.

Bayesian Inference • Topic Models +1
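
A heavily simplified sketch of an SCVB0-style pass is given below: per-token collapsed responsibilities are turned into online averages of expected document-topic, word-topic, and per-topic counts. Treating each document as its own minibatch and the specific step-size schedule are simplifying assumptions, not the paper's exact algorithm.

```python
import numpy as np

def scvb0_pass(docs, n_theta, n_phi, n_z, alpha=0.1, eta=0.01, tau=10.0, kappa=0.7):
    """One simplified SCVB0-style pass over a corpus for LDA.

    docs:    list of word-id lists, one per document (each document is treated
             as its own minibatch in this sketch).
    n_theta: (D, K) expected document-topic counts.
    n_phi:   (W, K) expected word-topic counts.
    n_z:     (K,)   expected per-topic counts.
    """
    W, _ = n_phi.shape
    N = sum(len(d) for d in docs)              # total tokens in the corpus
    for j, doc in enumerate(docs):
        rho = (tau + j + 1) ** (-kappa)        # decaying step size (assumed schedule)
        y_phi = np.zeros_like(n_phi)           # minibatch word-topic statistics
        for w in doc:
            # Collapsed "zero-order" responsibilities for this token.
            gamma = (n_phi[w] + eta) * (n_theta[j] + alpha) / (n_z + W * eta)
            gamma /= gamma.sum()
            # Local statistics: online average toward the new document-topic counts.
            n_theta[j] = (1.0 - rho) * n_theta[j] + rho * len(doc) * gamma
            y_phi[w] += gamma
        # Global statistics: stochastic average, rescaling the minibatch to corpus size.
        scale = N / max(len(doc), 1)
        n_phi = (1.0 - rho) * n_phi + rho * scale * y_phi
        n_z = (1.0 - rho) * n_z + rho * scale * y_phi.sum(axis=0)
    return n_theta, n_phi, n_z
```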
