Search Results for author: Padhraic Smyth

Found 35 papers, 16 papers with code

Asynchronous Distributed Learning of Topic Models

no code implementations NeurIPS 2008 Padhraic Smyth, Max Welling, Arthur U. Asuncion

Distributed learning is a problem of fundamental interest in machine learning and cognitive science.

Topic Models

Particle-based Variational Inference for Continuous Systems

no code implementations NeurIPS 2009 Andrew Frank, Padhraic Smyth, Alexander T. Ihler

Since the development of loopy belief propagation, there has been considerable work on advancing the state of the art for approximate inference over distributions defined on discrete random variables.

Variational Inference

Learning concept graphs from text with stick-breaking priors

no code implementations NeurIPS 2010 America Chambers, Padhraic Smyth, Mark Steyvers

We describe a generative model that is based on a stick-breaking process for graphs, and a Markov Chain Monte Carlo inference procedure.

Topic Models

Continuous-Time Regression Models for Longitudinal Networks

no code implementations NeurIPS 2011 Duy Q. Vu, David Hunter, Padhraic Smyth, Arthur U. Asuncion

The development of statistical models for continuous-time longitudinal network data is of increasing interest in machine learning and social science.

regression

On Smoothing and Inference for Topic Models

1 code implementation 9 May 2012 Arthur Asuncion, Max Welling, Padhraic Smyth, Yee Whye Teh

Latent Dirichlet analysis, or topic modeling, is a flexible latent variable framework for modeling high-dimensional sparse count data.

Topic Models Variational Inference

The Author-Topic Model for Authors and Documents

1 code implementation 11 Jul 2012 Michal Rosen-Zvi, Thomas Griffiths, Mark Steyvers, Padhraic Smyth

A document with multiple authors is modeled as a distribution over topics that is a mixture of the distributions associated with the authors.
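
As a rough illustration of the generative process summarized above, the sketch below samples a document under an author-topic model; the dimensions, hyperparameters, and function names are illustrative and not taken from the paper's code.

```python
# A minimal sketch of the author-topic generative process: each word picks an
# author uniformly from the document's authors, then a topic from that author's
# topic distribution, then a word from the topic. Illustrative sizes only.
import numpy as np

rng = np.random.default_rng(0)
n_topics, vocab_size, n_authors = 5, 100, 3
alpha, beta = 0.1, 0.01

author_topic = rng.dirichlet(alpha * np.ones(n_topics), size=n_authors)  # theta_a
topic_word = rng.dirichlet(beta * np.ones(vocab_size), size=n_topics)    # phi_k

def generate_document(author_ids, n_words):
    """Generate one document as a list of word ids."""
    words = []
    for _ in range(n_words):
        a = rng.choice(author_ids)                    # pick an author uniformly
        z = rng.choice(n_topics, p=author_topic[a])   # topic from that author
        w = rng.choice(vocab_size, p=topic_word[z])   # word from that topic
        words.append(w)
    return words

doc = generate_document(author_ids=[0, 2], n_words=50)
```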

Stochastic Collapsed Variational Bayesian Inference for Latent Dirichlet Allocation

no code implementations 10 May 2013 James Foulds, Levi Boyles, Christopher DuBois, Padhraic Smyth, Max Welling

We propose a stochastic algorithm for collapsed variational Bayesian inference for LDA, which is simpler and more efficient than the state of the art method.

Bayesian Inference Topic Models +1

Hot Swapping for Online Adaptation of Optimization Hyperparameters

no code implementations 20 Dec 2014 Kevin Bache, Dennis Decoste, Padhraic Smyth

We describe a general framework for online adaptation of optimization hyperparameters by `hot swapping' their values during learning.
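
The sketch below illustrates the general idea of hot swapping a hyperparameter (here, the learning rate) during training by trialing candidate values from the current state and keeping the best; the trial-and-keep-best rule and all names are a simplification for illustration, not the paper's exact procedure.

```python
# Illustrative hot swapping of the learning rate during optimization.
import copy
import numpy as np

def sgd_step(w, grad_fn, lr):
    return w - lr * grad_fn(w)

def train_with_hot_swapping(w, grad_fn, loss_fn, candidate_lrs, n_rounds=10, window=20):
    lr = candidate_lrs[0]
    for _ in range(n_rounds):
        # Trial: run a short window from the current weights with each candidate lr.
        results = []
        for cand in candidate_lrs:
            w_trial = copy.deepcopy(w)
            for _ in range(window):
                w_trial = sgd_step(w_trial, grad_fn, cand)
            results.append((loss_fn(w_trial), cand, w_trial))
        # Hot swap: keep the value (and state) that made the most progress.
        _, lr, w = min(results, key=lambda r: r[0])
    return w, lr

# Toy quadratic objective.
loss_fn = lambda w: float(np.sum(w ** 2))
grad_fn = lambda w: 2 * w
w_final, lr_final = train_with_hot_swapping(np.ones(5), grad_fn, loss_fn, [0.3, 0.1, 0.03])
```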

A Scale Mixture Perspective of Multiplicative Noise in Neural Networks

no code implementations 10 Jun 2015 Eric Nalisnick, Anima Anandkumar, Padhraic Smyth

Corrupting the input and hidden layers of deep neural networks (DNNs) with multiplicative noise, often drawn from the Bernoulli distribution (or 'dropout'), provides regularization that has significantly contributed to deep learning's success.

Model Compression
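
For reference, a small sketch contrasting Bernoulli (dropout) and Gaussian multiplicative noise applied to a layer's activations; shapes and noise rates are illustrative.

```python
# Multiplicative noise on activations: Bernoulli (dropout) vs. Gaussian.
import numpy as np

rng = np.random.default_rng(0)

def multiplicative_noise(activations, kind="bernoulli", p=0.5):
    if kind == "bernoulli":
        # Standard dropout: zero units with probability p, rescale by 1/(1-p).
        mask = rng.binomial(1, 1.0 - p, size=activations.shape) / (1.0 - p)
    elif kind == "gaussian":
        # Gaussian dropout: mean-one noise with matched variance p/(1-p).
        mask = rng.normal(1.0, np.sqrt(p / (1.0 - p)), size=activations.shape)
    else:
        raise ValueError(kind)
    return activations * mask

h = rng.standard_normal((4, 8))            # a batch of hidden activations
h_dropout = multiplicative_noise(h, "bernoulli")
h_gauss = multiplicative_noise(h, "gaussian")
```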

Stick-Breaking Variational Autoencoders

2 code implementations 20 May 2016 Eric Nalisnick, Padhraic Smyth

We extend Stochastic Gradient Variational Bayes to perform posterior inference for the weights of Stick-Breaking processes.
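
The stick-breaking construction at the heart of this model can be sketched in a few lines; the snippet below only shows how stick fractions become mixture weights, and does not reproduce the paper's reparameterized SGVB treatment of those fractions.

```python
# Stick-breaking: fractions v_1..v_K in (0,1) become weights
# pi_k = v_k * prod_{j<k} (1 - v_j).
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(v):
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

v = rng.beta(1.0, 5.0, size=10)   # GEM(alpha=5) stick fractions (truncated at K=10)
pi = stick_breaking(v)
print(pi.sum())                    # < 1; leftover mass belongs to the unbroken stick
```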

Bayesian Non-Homogeneous Markov Models via Polya-Gamma Data Augmentation with Applications to Rainfall Modeling

no code implementations 11 Jan 2017 Tracy Holsclaw, Arthur M. Greene, Andrew W. Robertson, Padhraic Smyth

We extend such models to introduce additional non-homogeneity into the emission distribution using a generalized linear model (GLM), with data augmentation for sampling-based inference.

Data Augmentation speech-recognition +1

Learning Approximately Objective Priors

no code implementations 4 Apr 2017 Eric Nalisnick, Padhraic Smyth

Informative Bayesian priors are often difficult to elicit, and when this is the case, modelers usually turn to noninformative or objective priors.

Mondrian Processes for Flow Cytometry Analysis

no code implementations 21 Nov 2017 Disi Ji, Eric Nalisnick, Padhraic Smyth

Analysis of flow cytometry data is an essential tool for clinical diagnosis of hematological and immunological conditions.

Uncertainty Quantification

Dropout as a Structured Shrinkage Prior

1 code implementation 9 Oct 2018 Eric Nalisnick, José Miguel Hernández-Lobato, Padhraic Smyth

We propose a novel framework for understanding multiplicative noise in neural networks, considering continuous distributions as well as Bernoulli noise (i.e., dropout).

Bayesian Inference

Active Bayesian Assessment for Black-Box Classifiers

1 code implementation 16 Feb 2020 Disi Ji, Robert L. Logan IV, Padhraic Smyth, Mark Steyvers

Recent advances in machine learning have led to increased deployment of black-box classifiers across a wide variety of applications.

Text Classification

Can I Trust My Fairness Metric? Assessing Fairness with Unlabeled Data and Bayesian Inference

no code implementations NeurIPS 2020 Disi Ji, Padhraic Smyth, Mark Steyvers

We investigate the problem of reliably assessing group fairness when labeled examples are few but unlabeled examples are plentiful.

Bayesian Inference Fairness
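
A simplified sketch of the Bayesian-assessment idea: Beta posteriors over each group's accuracy from a small labeled sample, and a Monte Carlo posterior over the group gap. The paper's additional use of unlabeled data and model scores is omitted, and all counts below are made up.

```python
# Bayesian assessment of a group metric from a handful of labeled examples.
import numpy as np

rng = np.random.default_rng(0)

# Labeled counts per group: (number correct, number labeled). Illustrative numbers.
counts = {"group_a": (45, 60), "group_b": (30, 50)}

samples = {}
for group, (correct, n) in counts.items():
    # Beta(1, 1) prior on accuracy, updated with the observed labels.
    samples[group] = rng.beta(1 + correct, 1 + n - correct, size=10_000)

gap = samples["group_a"] - samples["group_b"]
print("posterior mean gap:", gap.mean())
print("95% credible interval:", np.percentile(gap, [2.5, 97.5]))
```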

User-Dependent Neural Sequence Models for Continuous-Time Event Data

1 code implementation NeurIPS 2020 Alex Boyd, Robert Bamler, Stephan Mandt, Padhraic Smyth

Modeling such data can be very challenging, in particular for applications with many different types of events, since it requires a model to predict the event types as well as the time of occurrence.

Variational Inference

Variational Beam Search for Novelty Detection

no code implementations AABI Symposium 2021 Aodong Li, Alex James Boyd, Padhraic Smyth, Stephan Mandt

We consider the problem of online learning in the presence of sudden distribution shifts, which may be hard to detect and can lead to a slow but steady degradation in model performance.

Novelty Detection

Detecting and Adapting to Irregular Distribution Shifts in Bayesian Online Learning

1 code implementation NeurIPS 2021 Aodong Li, Alex Boyd, Padhraic Smyth, Stephan Mandt

We consider the problem of online learning in the presence of distribution shifts that occur at an unknown rate and of unknown intensity.

Autonomous Navigation Change Point Detection

Automating Data Science: Prospects and Challenges

no code implementations 12 May 2021 Tijl De Bie, Luc De Raedt, José Hernández-Orallo, Holger H. Hoos, Padhraic Smyth, Christopher K. I. Williams

Given the complexity of typical data science projects and the associated demand for human expertise, automation has the potential to transform the data science process.

AutoML BIG-bench Machine Learning

Fair Generalized Linear Models with a Convex Penalty

1 code implementation 18 Jun 2022 Hyungrok Do, Preston Putzel, Axel Martin, Padhraic Smyth, Judy Zhong

In addition, we demonstrate that the fair GLM can generate fair predictions for a range of response variables, other than binary and continuous outcomes.

Binary Classification Fairness
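
A hedged sketch of the overall recipe: a GLM objective (logistic regression here) plus a convex penalty on the discrepancy between group-wise mean linear predictors. The exact penalty used in the paper may differ; the data and function name are illustrative.

```python
# Logistic-regression loss plus a convex group-discrepancy penalty.
import numpy as np

def fair_logistic_loss(w, X, y, group, lam=1.0):
    """Negative log-likelihood plus a squared difference of group-wise mean predictors."""
    z = X @ w
    nll = np.mean(np.log1p(np.exp(-(2 * y - 1) * z)))   # y in {0, 1}
    gap = z[group == 0].mean() - z[group == 1].mean()   # linear in w, so gap**2 is convex
    return nll + lam * gap ** 2

# Toy data with two groups (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = (X[:, 0] + 0.5 * rng.standard_normal(200) > 0).astype(int)
group = rng.integers(0, 2, size=200)
print(fair_logistic_loss(np.zeros(3), X, y, group))
```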

Variable-Based Calibration for Machine Learning Classifiers

1 code implementation 30 Sep 2022 Markelle Kelly, Padhraic Smyth

In this paper we introduce the notion of variable-based calibration to characterize calibration properties of a model with respect to a variable of interest, generalizing traditional score-based metrics such as expected calibration error (ECE).

Fairness
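
The sketch below computes a standard score-based ECE and, by swapping the binning variable, the variable-based variant described above; the binning scheme and variable names are illustrative.

```python
# ECE with either score-based binning (default) or binning by a variable of interest.
import numpy as np

def ece(confidences, correct, bin_by=None, n_bins=10):
    values = confidences if bin_by is None else bin_by
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1)
    total, n = 0.0, len(confidences)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            total += (mask.sum() / n) * gap
    return total

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
correct = rng.binomial(1, conf * 0.9)     # a slightly over-confident model
age = rng.uniform(20, 80, size=1000)      # a hypothetical variable of interest
print(ece(conf, correct), ece(conf, correct, bin_by=age))
```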

Predictive Querying for Autoregressive Neural Sequence Models

1 code implementation 12 Oct 2022 Alex Boyd, Sam Showalter, Stephan Mandt, Padhraic Smyth

In reasoning about sequential events it is natural to pose probabilistic queries such as "when will event A occur next" or "what is the probability of A occurring before B", with applications in areas such as user modeling, medicine, and finance.

Language Modelling
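
As a naive illustration of such a query, the sketch below estimates P(A occurs before B) by sampling continuations from a toy autoregressive model; the paper develops far more efficient estimators, and the bigram "model" here is entirely hypothetical.

```python
# Monte Carlo estimate of P(A before B) under a toy next-event model.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["A", "B", "C"]
# Hypothetical autoregressive next-event distribution p(x_t | x_{t-1}).
transition = {"A": [0.2, 0.3, 0.5], "B": [0.4, 0.2, 0.4], "C": [0.3, 0.3, 0.4]}

def prob_A_before_B(history, n_samples=5000, max_len=50):
    hits = 0
    for _ in range(n_samples):
        prev = history[-1]
        for _ in range(max_len):
            nxt = rng.choice(vocab, p=transition[prev])
            if nxt == "A":          # A occurred first
                hits += 1
                break
            if nxt == "B":          # B occurred first
                break
            prev = nxt
        # continuations where neither occurs within max_len count as "not A before B"
    return hits / n_samples

print(prob_A_before_B(["C", "C"]))
```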

Probabilistic Querying of Continuous-Time Event Sequences

no code implementations 15 Nov 2022 Alex Boyd, Yuxin Chang, Stephan Mandt, Padhraic Smyth

Continuous-time event sequences, i.e., sequences consisting of continuous time stamps and associated event types ("marks"), are an important type of sequential data with many applications, e.g., in clinical medicine or user behavior modeling.

Vocal Bursts Type Prediction

Diffusion Generative Models in Infinite Dimensions

1 code implementation 1 Dec 2022 Gavin Kerrigan, Justin Ley, Padhraic Smyth

We generalize diffusion models to operate directly in function space by developing the foundational theory for such models in terms of Gaussian measures on Hilbert spaces.

Time Series Time Series Analysis

Deep Anomaly Detection under Labeling Budget Constraints

1 code implementation 15 Feb 2023 Aodong Li, Chen Qiu, Marius Kloft, Padhraic Smyth, Stephan Mandt, Maja Rudolph

Selecting informative data points for expert feedback can significantly improve the performance of anomaly detection (AD) in various contexts, such as medical diagnostics or fraud detection.

Anomaly Detection Fraud Detection

Capturing Humans' Mental Models of AI: An Item Response Theory Approach

1 code implementation 15 May 2023 Markelle Kelly, Aakriti Kumar, Padhraic Smyth, Mark Steyvers

Improving our understanding of how humans perceive AI teammates is an important foundation for our general understanding of human-AI teams.

Question Answering

Functional Flow Matching

no code implementations 26 May 2023 Gavin Kerrigan, Giosue Migliorini, Padhraic Smyth

We propose Functional Flow Matching (FFM), a function-space generative model that generalizes the recently-introduced Flow Matching model to operate in infinite-dimensional spaces.
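
For intuition, here is a finite-dimensional sketch of the conditional flow matching objective that FFM lifts to function space: interpolate between noise and data, and regress a vector field onto the straight-line velocity. The tiny linear "vector field" is purely illustrative.

```python
# Conditional flow matching loss on a batch of vectors.
import numpy as np

rng = np.random.default_rng(0)
dim = 16

def flow_matching_loss(theta, x1_batch):
    """MSE between v_theta(x_t, t) and the target velocity x1 - x0."""
    x0 = rng.standard_normal(x1_batch.shape)        # reference (noise) samples
    t = rng.uniform(size=(x1_batch.shape[0], 1))    # random times in (0, 1)
    xt = (1 - t) * x0 + t * x1_batch                # linear interpolation path
    target = x1_batch - x0                          # velocity of that path
    inp = np.concatenate([xt, t], axis=1)
    pred = inp @ theta                              # toy linear vector field
    return np.mean((pred - target) ** 2)

x1 = rng.standard_normal((32, dim))                 # a batch of "data" samples
theta = np.zeros((dim + 1, dim))
print(flow_matching_loss(theta, x1))
```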

Bayesian Online Learning for Consensus Prediction

no code implementations 12 Dec 2023 Sam Showalter, Alex Boyd, Padhraic Smyth, Mark Steyvers

Given a pre-trained classifier and multiple human experts, we investigate the task of online classification where model predictions are provided for free but querying humans incurs a cost.

Probabilistic Modeling for Sequences of Sets in Continuous-Time

1 code implementation 22 Dec 2023 Yuxin Chang, Alex Boyd, Padhraic Smyth

In this work, we develop a general framework for modeling set-valued data in continuous-time, compatible with any intensity-based recurrent neural point process model.

Model Selection Point Processes

The Calibration Gap between Model and Human Confidence in Large Language Models

no code implementations 24 Jan 2024 Mark Steyvers, Heliodoro Tejeda, Aakriti Kumar, Catarina Belem, Sheer Karny, Xinyue Hu, Lukas Mayer, Padhraic Smyth

Recent work has focused on the quality of internal LLM confidence assessments, but the question remains of how well LLMs can communicate this internal model confidence to human users.

Multiple-choice
