Search Results for author: Andy Shih

Found 15 papers, 10 papers with code

Parallel Sampling of Diffusion Models

1 code implementation • 25 May 2023 • Andy Shih, Suneel Belkhale, Stefano Ermon, Dorsa Sadigh, Nima Anari

Instead of reducing the number of denoising steps (trading quality for speed), in this paper we explore an orthogonal approach: can we run the denoising steps in parallel (trading compute for speed)?

Denoising • Image Generation
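
The denoising-in-parallel idea can be pictured as a fixed-point (Picard) iteration over a whole sampling trajectory: within each sweep, every step's evaluation is independent of the others, so they can all be batched or run in parallel. A minimal sketch with a hypothetical closed-form drift standing in for the learned denoiser (this illustrates the trade, not the paper's actual algorithm):

```python
import numpy as np

def drift(x, t):
    # Hypothetical closed-form drift; in a real diffusion model this is
    # the learned denoising network.
    return -x * (1.0 - t)

def sequential_sample(x0, ts):
    # Standard Euler integration: each step waits for the previous one.
    x = x0
    for i in range(len(ts) - 1):
        x = x + (ts[i + 1] - ts[i]) * drift(x, ts[i])
    return x

def picard_parallel_sample(x0, ts, sweeps=25):
    # Keep the whole trajectory and update every point at once. Each sweep's
    # drift evaluations are independent, so they can run in parallel; the
    # iteration converges to the same trajectory the sequential solver makes.
    xs = np.full(len(ts), x0, dtype=float)
    for _ in range(sweeps):
        d = drift(xs, ts)  # all denoising-step evaluations batched together
        incr = np.concatenate(([0.0], np.cumsum(np.diff(ts) * d[:-1])))
        xs = x0 + incr
    return xs[-1]
```

After enough sweeps (at most one per step, often far fewer in practice) the parallel trajectory matches the sequential one, which is the compute-for-speed trade described above.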

Long Horizon Temperature Scaling

1 code implementation • 7 Feb 2023 • Andy Shih, Dorsa Sadigh, Stefano Ermon

LHTS is compatible with all likelihood-based models, and optimizes for the long horizon likelihood of samples.
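
The gap between per-step and long-horizon temperature scaling shows up already in a toy two-token model; the numbers below are hypothetical, chosen only to show that tempering each conditional separately (the myopic approach) is not the same as tempering the joint sequence likelihood:

```python
import numpy as np

p_first = np.array([0.5, 0.5])            # p(x1)
p_second = np.array([[0.9, 0.1],          # p(x2 | x1 = 0)
                     [0.5, 0.5]])         # p(x2 | x1 = 1)

def temper(p, T):
    # Standard temperature scaling of a single distribution.
    q = p ** (1.0 / T)
    return q / q.sum()

def myopic_joint(T):
    # Temper each next-token conditional independently, then multiply.
    q1 = temper(p_first, T)
    q2 = np.vstack([temper(row, T) for row in p_second])
    return q1[:, None] * q2

def long_horizon_joint(T):
    # Temper the joint likelihood of the whole sequence directly.
    joint = p_first[:, None] * p_second
    q = joint ** (1.0 / T)
    return q / q.sum()
```

At T = 1 the two coincide; at any other temperature they diverge, which is the myopia that long-horizon scaling targets.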


Training and Inference on Any-Order Autoregressive Models the Right Way

1 code implementation • 26 May 2022 • Andy Shih, Dorsa Sadigh, Stefano Ermon

Conditional inference on arbitrary subsets of variables is a core problem in probabilistic inference with important applications such as masked language modeling and image inpainting.

Image Inpainting • Language Modelling
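
Any-order training reduces to sampling a random ordering and masking a suffix of it, so the model eventually sees every possible conditioning subset. A minimal sketch of that masking step (names and the mask token are illustrative, not taken from the paper's code):

```python
import random

def any_order_mask(tokens, mask_token="<m>", rng=None):
    # Sample a uniformly random ordering and a cutoff; positions after the
    # cutoff in that ordering are masked, so the model learns to predict
    # p(masked | observed) for arbitrary subsets of observed variables.
    rng = rng or random.Random()
    order = list(range(len(tokens)))
    rng.shuffle(order)
    k = rng.randrange(len(tokens))          # number of observed positions
    observed = set(order[:k])
    inputs = [t if i in observed else mask_token
              for i, t in enumerate(tokens)]
    targets = {i: t for i, t in enumerate(tokens) if i not in observed}
    return inputs, targets
```

The same masking covers both applications named above: masked language modeling masks tokens, inpainting masks pixels or patches.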

Imitation Learning by Estimating Expertise of Demonstrators

1 code implementation • 2 Feb 2022 • Mark Beliaev, Andy Shih, Stefano Ermon, Dorsa Sadigh, Ramtin Pedarsani

In this work, we show that unsupervised learning over demonstrator expertise can lead to a consistent boost in the performance of imitation learning algorithms.

Continuous Control • Imitation Learning
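
One way estimated expertise can enter an imitation objective is as a per-demonstration weight on the log-likelihood, so noisy demonstrators contribute less. The weighting scheme below is a hypothetical sketch for illustration, not the paper's model:

```python
import numpy as np

def expertise_weighted_bc_loss(log_probs, demo_ids, expertise):
    # Weight each demonstration's log-likelihood by the (estimated) expertise
    # of its demonstrator; weights are normalized to sum to one.
    w = np.array([expertise[d] for d in demo_ids], dtype=float)
    w = w / w.sum()
    return -float(np.sum(w * np.asarray(log_probs)))
```

With uniform weights this reduces to plain behavioral cloning; skewing weight toward high-expertise demonstrators down-weights the poorly-fit demonstrations.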

Conditional Imitation Learning for Multi-Agent Games

no code implementations • 5 Jan 2022 • Andy Shih, Stefano Ermon, Dorsa Sadigh

In this work, we study the problem of conditional multi-agent imitation learning, where we have access to joint trajectory demonstrations at training time, and we must interact with and adapt to new partners at test time.

Imitation Learning • Tensor Decomposition

PantheonRL: A MARL Library for Dynamic Training Interactions

1 code implementation • 13 Dec 2021 • Bidipta Sarkar, Aditi Talati, Andy Shih, Dorsa Sadigh

We present PantheonRL, a multiagent reinforcement learning software package for dynamic training interactions such as round-robin, adaptive, and ad-hoc training.

Reinforcement Learning (RL)
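
A round-robin training interaction can be pictured as a matchup scheduler that pairs every agent with every other agent once per round. This toy sketch is only an illustration of the concept, not the PantheonRL API:

```python
from itertools import combinations

def round_robin_matchups(agents):
    # Every agent trains with every other agent exactly once per round;
    # adaptive or ad-hoc schemes would instead choose partners dynamically.
    return list(combinations(agents, 2))
```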

HyperSPNs: Compact and Expressive Probabilistic Circuits

1 code implementation • NeurIPS 2021 • Andy Shih, Dorsa Sadigh, Stefano Ermon

Probabilistic circuits (PCs) are a family of generative models that allow for the computation of exact likelihoods and marginals of their probability distributions.

Density Estimation
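
The exact-inference property of PCs is easy to see on a toy circuit: a sum (mixture) node over two product nodes of Bernoulli leaves. One feed-forward pass gives an exact likelihood, and marginalizing a variable amounts to setting its leaf to 1. The structure and parameters below are illustrative:

```python
def leaf(p, value):
    # Bernoulli leaf over one binary variable; value=None marginalizes it out.
    if value is None:
        return 1.0
    return p if value == 1 else 1.0 - p

def circuit(x1, x2):
    # Sum node with weights 0.5/0.5 over two product nodes, each a product
    # of independent Bernoulli leaves over X1 and X2.
    comp1 = leaf(0.9, x1) * leaf(0.2, x2)
    comp2 = leaf(0.1, x1) * leaf(0.7, x2)
    return 0.5 * comp1 + 0.5 * comp2

# Exact marginal p(X1 = 1) without enumerating assignments:
p_x1 = circuit(1, None)
```

The same pass that evaluates a likelihood computes any marginal, which is what makes PCs tractable where most deep generative models are not.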

Influencing Towards Stable Multi-Agent Interactions

no code implementations • 5 Oct 2021 • Woodrow Z. Wang, Andy Shih, Annie Xie, Dorsa Sadigh

Instead of reactively adapting to the other agent's (opponent or partner) behavior, we propose an algorithm that proactively influences the other agent's strategy to stabilize, which can restrain the non-stationarity caused by the other agent.

Autonomous Driving

On the Opportunities and Risks of Foundation Models

3 code implementations • 16 Aug 2021 • Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

On the Critical Role of Conventions in Adaptive Human-AI Collaboration

1 code implementation • ICLR 2021 • Andy Shih, Arjun Sawhney, Jovana Kondic, Stefano Ermon, Dorsa Sadigh

Humans can quickly adapt to new partners in collaborative tasks (e.g., playing basketball), because they understand which fundamental skills of the task (e.g., how to dribble, how to shoot) carry over across new partners.

Probabilistic Circuits for Variational Inference in Discrete Graphical Models

1 code implementation • NeurIPS 2020 • Andy Shih, Stefano Ermon

Inference in discrete graphical models with variational methods is difficult because of the inability to re-parameterize gradients of the Evidence Lower Bound (ELBO).

Variational Inference
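
When reparameterization is unavailable, the standard fallback is the score-function (REINFORCE) estimator, which uses the identity ∇θ E_qθ[f(z)] = E_qθ[f(z) ∇θ log qθ(z)]. This is the baseline such variational methods compete with; a minimal sketch for a Bernoulli q with a sigmoid-parameterized logit (not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

def score_function_grad(logit, f, n=200_000):
    # Monte Carlo estimate of d/dlogit E_q[f(z)] for z ~ Bernoulli(theta),
    # theta = sigmoid(logit). No reparameterization needed: only log q's
    # gradient is differentiated, and d/dlogit log q(z) = z - theta.
    theta = 1.0 / (1.0 + np.exp(-logit))
    z = (rng.random(n) < theta).astype(float)
    return float(np.mean(f(z) * (z - theta)))
```

For f(z) = z the true gradient is theta * (1 - theta), so at logit 0 the estimator should be near 0.25; the estimator's high variance compared to reparameterized gradients is exactly why it is a weak baseline.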

On Symbolically Encoding the Behavior of Random Forests

no code implementations • 3 Jul 2020 • Arthur Choi, Andy Shih, Anchal Goyanka, Adnan Darwiche

Recent work has shown that the input-output behavior of some machine learning systems can be captured symbolically using Boolean expressions or tractable Boolean circuits, which facilitates reasoning about the behavior of these systems.

BIG-bench Machine Learning
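
The symbolic-capture idea is easiest to see on a single decision tree over Boolean features: each root-to-leaf path ending in class 1 becomes a conjunction of literals, and the tree's decision function equals the disjunction of those conjunctions. A hypothetical two-feature tree for illustration:

```python
from itertools import product

def tree_predict(a, b):
    # A tiny decision tree: split on a, then on b in the right branch.
    if a:
        return 1 if b else 0
    return 1  # left branch always predicts class 1

def tree_as_boolean(a, b):
    # Boolean encoding: one conjunct per class-1 path, OR-ed together.
    # Paths ending in class 1: (a AND b) OR (NOT a).
    return (a and b) or (not a)

# The encoding agrees with the tree on every input:
all_agree = all(bool(tree_predict(a, b)) == tree_as_boolean(a, b)
                for a, b in product([False, True], repeat=2))
```

A random forest is then a majority vote over such expressions, which (after compilation to a tractable circuit) supports the kind of formal reasoning described above.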

On Tractable Representations of Binary Neural Networks

no code implementations • 5 Apr 2020 • Weijia Shi, Andy Shih, Adnan Darwiche, Arthur Choi

We consider the compilation of a binary neural network's decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs).

Smoothing Structured Decomposable Circuits

1 code implementation • NeurIPS 2019 • Andy Shih, Guy Van Den Broeck, Paul Beame, Antoine Amarilli

Further, for the important case of All-Marginals, we show a more efficient linear-time algorithm.

Density Estimation

A Symbolic Approach to Explaining Bayesian Network Classifiers

no code implementations • 9 May 2018 • Andy Shih, Arthur Choi, Adnan Darwiche

We propose an approach for explaining Bayesian network classifiers, which is based on compiling such classifiers into decision functions that have a tractable and symbolic form.

General Classification
