Search Results for author: Weiwei Pan

Found 32 papers, 6 papers with code

Directly Optimizing Explanations for Desired Properties

no code implementations • 31 Oct 2024 • Hiwot Belay Tadesse, Alihan Hüyük, Weiwei Pan, Finale Doshi-Velez

When explaining black-box machine learning models, it's often important for explanations to have certain desirable properties.

Inverse Reinforcement Learning with Multiple Planning Horizons

no code implementations • 26 Sep 2024 • Jiayu Yao, Weiwei Pan, Finale Doshi-Velez, Barbara E Engelhardt

In this work, we study an inverse reinforcement learning (IRL) problem where the experts are planning under a shared reward function but with different, unknown planning horizons.

Reinforcement Learning
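
A toy illustration of the setup (the example, MDP, and names below are mine, not the paper's): under a shared reward function, agents that plan with different discount factors, a common stand-in for planning horizon, can prefer different actions.

    # Toy illustration: the same reward function with two different discount
    # factors can yield different optimal policies.
    import numpy as np

    n_states, n_actions = 3, 2
    P = np.zeros((n_states, n_actions, n_states))   # P[s, a, s']
    R = np.zeros((n_states, n_actions))             # R[s, a]
    P[0, 0, 0], R[0, 0] = 1.0, 1.0                  # "near": small reward, stay at s0
    P[0, 1, 1] = 1.0                                # "far": no reward, move toward s2
    P[1, :, 2] = 1.0
    P[2, :, 2], R[2, :] = 1.0, 2.0                  # s2 pays a larger reward forever

    def optimal_policy(gamma, iters=500):
        V = np.zeros(n_states)
        for _ in range(iters):
            Q = R + gamma * P @ V                   # Q[s, a]
            V = Q.max(axis=1)
        return Q.argmax(axis=1)

    print(optimal_policy(0.5))    # short horizon: takes the immediate reward in s0
    print(optimal_policy(0.95))   # long horizon: moves toward the larger delayed reward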

A Sim2Real Approach for Identifying Task-Relevant Properties in Interpretable Machine Learning

no code implementations • 31 May 2024 • Eura Nofshin, Esther Brown, Brian Lim, Weiwei Pan, Finale Doshi-Velez

Explanations of an AI's function can assist human decision-makers, but the most useful explanation depends on the decision's context, referred to as the downstream task.

Interpretable Machine Learning

Towards Model-Agnostic Posterior Approximation for Fast and Accurate Variational Autoencoders

no code implementations • 13 Mar 2024 • Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez

The proposed method approximates the posterior of the true model a priori; fixing this posterior approximation, we then maximize the lower bound with respect to the generative model only.

Density Estimation
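
One way to read the two-stage procedure above is sketched below; this is an illustrative interpretation with made-up module names, not the paper's actual algorithm: fix a posterior approximation q(z|x), then maximize the ELBO with respect to the decoder only.

    # Sketch: freeze a posterior approximation q(z|x) = N(mu(x), diag(sigma(x)^2)),
    # then maximize the ELBO with respect to the decoder (generative model) only.
    import torch
    import torch.nn as nn

    x = torch.randn(256, 5)                                            # toy data
    enc = nn.Sequential(nn.Linear(5, 16), nn.Tanh(), nn.Linear(16, 4)) # outputs [mu, log_var]
    for p in enc.parameters():
        p.requires_grad_(False)                                        # posterior approximation is fixed

    dec = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 5)) # mean of p(x|z)
    opt = torch.optim.Adam(dec.parameters(), lr=1e-2)

    for step in range(200):
        mu, log_var = enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()          # reparameterized sample
        nll = 0.5 * ((x - dec(z)) ** 2).sum(-1).mean()                 # Gaussian reconstruction term
        kl = 0.5 * (mu ** 2 + log_var.exp() - log_var - 1).sum(-1).mean()
        loss = nll + kl                                                # negative ELBO; KL is constant w.r.t. dec
        opt.zero_grad(); loss.backward(); opt.step()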

Reinforcement Learning Interventions on Boundedly Rational Human Agents in Frictionful Tasks

no code implementations • 26 Jan 2024 • Eura Nofshin, Siddharth Swaroop, Weiwei Pan, Susan Murphy, Finale Doshi-Velez

Many important behavior changes are frictionful; they require individuals to expend effort over a long period with little immediate gratification.

AI Agent Attribute

Why do universal adversarial attacks work on large language models?: Geometry might be the answer

no code implementations • 1 Sep 2023 • Varshini Subhash, Anna Bialas, Weiwei Pan, Finale Doshi-Velez

We believe this new geometric perspective on the underlying mechanism driving universal attacks could help us gain deeper insight into the internal workings and failure modes of LLMs, thus enabling the mitigation of such attacks.

Dimensionality Reduction

SAP-sLDA: An Interpretable Interface for Exploring Unstructured Text

no code implementations • 28 Jul 2023 • Charumathi Badrinath, Weiwei Pan, Finale Doshi-Velez

A common way to explore text corpora is through low-dimensional projections of the documents, where one hopes that thematically similar documents will be clustered together in the projected space.

Dimensionality Reduction
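
The generic workflow the abstract refers to, sketched with standard tools (this is the baseline pipeline on toy documents, not the SAP-sLDA method itself):

    # Fit a topic model to a small corpus, then project the per-document topic
    # mixtures to 2-D so that thematically similar documents can be plotted nearby.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation, PCA

    docs = ["the cat sat on the mat", "dogs and cats are friendly pets",
            "stock markets fell sharply today", "investors sold their shares"]
    counts = CountVectorizer(stop_words="english").fit_transform(docs)
    topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)
    xy = PCA(n_components=2).fit_transform(topics)   # low-dimensional projection
    print(xy)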

The Unintended Consequences of Discount Regularization: Improving Regularization in Certainty Equivalence Reinforcement Learning

no code implementations • 20 Jun 2023 • Sarah Rathnam, Sonali Parbhoo, Weiwei Pan, Susan A. Murphy, Finale Doshi-Velez

We demonstrate that planning under a lower discount factor produces an identical optimal policy to planning using any prior on the transition matrix that has the same distribution for all states and actions.
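
The equivalence can be seen directly from the Bellman backup. In the sketch below (my notation, not the paper's), \hat{P} is the estimated transition matrix, m(\cdot) is a distribution over next states shared by every state-action pair, and w \in [0,1] is the weight placed on it:

    V(s) = \max_a \Big[ r(s,a) + \gamma \sum_{s'} \big( (1-w)\,\hat{P}(s' \mid s,a) + w\, m(s') \big) V(s') \Big]
         = \max_a \Big[ r(s,a) + \gamma (1-w) \sum_{s'} \hat{P}(s' \mid s,a)\, V(s') \Big] + \gamma w \sum_{s'} m(s')\, V(s')

The final term does not depend on the action, so the greedy action in every state, and hence the optimal policy, matches planning with \hat{P} under the smaller discount factor \gamma(1-w).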

Modeling Mobile Health Users as Reinforcement Learning Agents

no code implementations • 1 Dec 2022 • Eura Shin, Siddharth Swaroop, Weiwei Pan, Susan Murphy, Finale Doshi-Velez

Mobile health (mHealth) technologies empower patients to adopt/maintain healthy behaviors in their daily lives, by providing interventions (e.g. push notifications) tailored to the user's needs.

Decision Making, Reinforcement Learning, +2

An Empirical Analysis of the Advantages of Finite- v.s. Infinite-Width Bayesian Neural Networks

no code implementations • 16 Nov 2022 • Jiayu Yao, Yaniv Yacoby, Beau Coker, Weiwei Pan, Finale Doshi-Velez

Comparing Bayesian neural networks (BNNs) with different widths is challenging because, as the width increases, multiple model properties change simultaneously, and inference in the finite-width case is intractable.

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations

no code implementations • 10 Nov 2022 • Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez

In this work, we survey properties defined in interpretable machine learning papers, synthesize them based on what they actually measure, and describe the trade-offs between different formulations of these properties.

Interpretable Machine Learning

Success of Uncertainty-Aware Deep Models Depends on Data Manifold Geometry

no code implementations • 2 Aug 2022 • Mark Penrod, Harrison Termotto, Varshini Reddy, Jiayu Yao, Finale Doshi-Velez, Weiwei Pan

For responsible decision making in safety-critical settings, machine learning models must effectively detect and process edge-case data.

Decision Making, Deep Learning

Policy Optimization with Sparse Global Contrastive Explanations

no code implementations • 13 Jul 2022 • Jiayu Yao, Sonali Parbhoo, Weiwei Pan, Finale Doshi-Velez

We develop a Reinforcement Learning (RL) framework for improving an existing behavior policy via sparse, user-interpretable changes.

Reinforcement Learning, +1

Wide Mean-Field Bayesian Neural Networks Ignore the Data

1 code implementation • 23 Feb 2022 • Beau Coker, Wessel P. Bruinsma, David R. Burt, Weiwei Pan, Finale Doshi-Velez

Finally, we show that the optimal approximate posterior need not tend to the prior if the activation function is not odd, which indicates that our statements cannot be generalized arbitrarily.

Variational Inference

Promises and Pitfalls of Black-Box Concept Learning Models

2 code implementations • 24 Jun 2021 • Anita Mahinpei, Justin Clark, Isaac Lage, Finale Doshi-Velez, Weiwei Pan

Machine learning models that incorporate concept learning as an intermediate step in their decision making process can match the performance of black-box predictive models while retaining the ability to explain outcomes in human understandable terms.

Decision Making
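
A minimal sketch of the "concepts as an intermediate step" idea (an illustrative pipeline on synthetic data, not one of the paper's models): predict human-interpretable concepts from the raw inputs, then predict the label from the predicted concepts alone.

    # Concept-bottleneck-style pipeline: inputs -> concepts -> label, so that the
    # final predictor only ever sees the interpretable concept vector.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))                       # raw inputs
    C = (X[:, :3] > 0).astype(int)                       # three binary "concepts"
    y = (C.sum(axis=1) >= 2).astype(int)                 # label depends only on concepts

    concept_models = [LogisticRegression().fit(X, C[:, j]) for j in range(C.shape[1])]
    C_hat = np.column_stack([m.predict(X) for m in concept_models])
    label_model = LogisticRegression().fit(C_hat, y)     # final predictor sees only concepts
    print(label_model.score(C_hat, y))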

Wide Mean-Field Variational Bayesian Neural Networks Ignore the Data

no code implementations • 13 Jun 2021 • Beau Coker, Weiwei Pan, Finale Doshi-Velez

Variational inference enables approximate posterior inference of the highly over-parameterized neural networks that are popular in modern machine learning.

BIG-bench Machine Learning, Variational Inference
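
For reference, the mean-field variational objective the abstract refers to, in my notation: a fully factorized Gaussian q_\phi over the weights is fit by maximizing the evidence lower bound,

    q_\phi(W) = \prod_i \mathcal{N}(w_i \mid \mu_i, \sigma_i^2), \qquad
    \mathcal{L}(\phi) = \mathbb{E}_{q_\phi(W)}\big[\log p(\mathcal{D} \mid W)\big] - \mathrm{KL}\big(q_\phi(W) \,\|\, p(W)\big).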

Failure Modes of Variational Autoencoders and Their Effects on Downstream Tasks

no code implementations • 14 Jul 2020 • Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez

Variational Auto-encoders (VAEs) are deep generative latent variable models that are widely used for a number of downstream tasks.

Adversarial Robustness

BaCOUn: Bayesian Classifers with Out-of-Distribution Uncertainty

no code implementations • 12 Jul 2020 • Théo Guénais, Dimitris Vamvourellis, Yaniv Yacoby, Finale Doshi-Velez, Weiwei Pan

Traditional training of deep classifiers yields overconfident models that are not reliable under dataset shift.

Bayesian Inference

Uncertainty-Aware (UNA) Bases for Deep Bayesian Regression Using Multi-Headed Auxiliary Networks

no code implementations • 21 Jun 2020 • Sujay Thakur, Cooper Lorsung, Yaniv Yacoby, Finale Doshi-Velez, Weiwei Pan

Neural Linear Models (NLM) are deep Bayesian models that produce predictive uncertainties by learning features from the data and then performing Bayesian linear regression over these features.

Regression
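
A minimal sketch of a neural linear model as described above (illustrative only, not the UNA method): features come from a network, here a fixed random one standing in for learned features, and conjugate Bayesian linear regression over those features gives predictive means and variances.

    # Neural linear model sketch: Phi(x) from a feature network, then closed-form
    # Bayesian linear regression on Phi(x) for predictive uncertainty.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(100, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

    W1 = rng.normal(size=(1, 50))                  # stand-in for learned features
    features = lambda x: np.tanh(x @ W1)           # Phi(x), 50-dimensional

    Phi = features(X)
    alpha, beta = 1.0, 100.0                       # prior precision, noise precision
    S = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)  # posterior covariance
    m = beta * S @ Phi.T @ y                       # posterior mean of last-layer weights

    x_test = np.linspace(-3, 3, 5).reshape(-1, 1)
    Phi_t = features(x_test)
    pred_mean = Phi_t @ m
    pred_var = 1.0 / beta + np.einsum("ij,jk,ik->i", Phi_t, S, Phi_t)
    print(pred_mean, np.sqrt(pred_var))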

Power Constrained Bandits

1 code implementation • 13 Apr 2020 • Jiayu Yao, Emma Brunskill, Weiwei Pan, Susan Murphy, Finale Doshi-Velez

However, when bandits are deployed in the context of a scientific study -- e.g. a clinical trial to test if a mobile health intervention is effective -- the aim is not only to personalize for an individual, but also to determine, with sufficient statistical power, whether or not the system's intervention is effective.

Decision Making, Multi-Armed Bandits

Characterizing and Avoiding Problematic Global Optima of Variational Autoencoders

no code implementations • Approximate Inference (AABI) Symposium 2019 • Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez

Recent work shows that traditional training methods tend to yield solutions that violate modeling desiderata: (1) the learned generative model captures the observed data distribution but does so while ignoring the latent codes, resulting in codes that do not represent the data (e.g. van den Oord et al. (2017); Kim et al. (2018)); (2) the aggregate of the learned latent codes does not match the prior p(z).
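
Written out (notation mine), the two desiderata concern the fitted marginal p_\theta(x) and the aggregate posterior

    q_{\mathrm{agg}}(z) = \int q_\phi(z \mid x)\, p_{\mathrm{data}}(x)\, dx,

namely: (1) p_\theta(x) should match p_{\mathrm{data}}(x) while q_\phi(z \mid x) remains informative about x, and (2) q_{\mathrm{agg}}(z) should match the prior p(z).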

Mitigating the Effects of Non-Identifiability on Inference for Bayesian Neural Networks with Latent Variables

no code implementations • 1 Nov 2019 • Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez

Bayesian Neural Networks with Latent Variables (BNN+LVs) capture predictive uncertainty by explicitly modeling model uncertainty (via priors on network weights) and environmental stochasticity (via a latent input noise variable).
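
One standard way to write this model class (my notation; the specific Gaussian forms below are illustrative assumptions):

    y_n = f(x_n, z_n; W) + \epsilon_n, \qquad W \sim p(W), \quad z_n \sim \mathcal{N}(0, \sigma_z^2), \quad \epsilon_n \sim \mathcal{N}(0, \sigma_\epsilon^2),

so the prior on the network weights W carries model uncertainty while the per-datum latent input z_n carries environmental stochasticity.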

A general method for regularizing tensor decomposition methods via pseudo-data

no code implementations • 24 May 2019 • Omer Gottesman, Weiwei Pan, Finale Doshi-Velez

Tensor decomposition methods allow us to learn the parameters of latent variable models through decomposition of low-order moments of data.

Tensor Decomposition, Transfer Learning
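
The kind of moment decomposition the abstract refers to, written for the simplest exchangeable latent variable models (e.g., a single-topic model with three conditionally independent views x_1, x_2, x_3; notation mine):

    M_2 = \mathbb{E}[x_1 \otimes x_2] = \sum_k w_k\, \mu_k \otimes \mu_k, \qquad
    M_3 = \mathbb{E}[x_1 \otimes x_2 \otimes x_3] = \sum_k w_k\, \mu_k \otimes \mu_k \otimes \mu_k,

so estimating the component means \mu_k and mixing weights w_k reduces to decomposing the estimated low-order moment tensors.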

Deep Variational Transfer: Transfer Learning through Semi-supervised Deep Generative Models

no code implementations • 7 Dec 2018 • Marouan Belhaj, Pavlos Protopapas, Weiwei Pan

Thanks to the combination of a semi-supervised ELBO and parameter sharing across domains, we are able to simultaneously: (i) align all supervised examples of the same class into the same latent Gaussian Mixture component, independently of their domain; (ii) predict the class of unsupervised examples from different domains and use them to better model the occurring shifts.

General Classification, Transfer Learning

Projected BNNs: Avoiding weight-space pathologies by learning latent representations of neural network weights

no code implementations • 16 Nov 2018 • Melanie F. Pradier, Weiwei Pan, Jiayu Yao, Soumya Ghosh, Finale Doshi-Velez

As machine learning systems get widely adopted for high-stake decisions, quantifying uncertainty over predictions becomes crucial.

Variational Inference

Weighted Tensor Decomposition for Learning Latent Variables with Partial Data

no code implementations • 18 Oct 2017 • Omer Gottesman, Weiwei Pan, Finale Doshi-Velez

Tensor decomposition methods are popular tools for learning latent variables given only lower-order moments of the data.

Tensor Decomposition
