Search Results for author: Thomas Icard

Found 16 papers, 7 papers with code

A Reply to Makelov et al. (2023)'s "Interpretability Illusion" Arguments

1 code implementation • 23 Jan 2024 • Zhengxuan Wu, Atticus Geiger, Jing Huang, Aryaman Arora, Thomas Icard, Christopher Potts, Noah D. Goodman

We respond to the recent paper by Makelov et al. (2023), which reviews subspace interchange intervention methods like distributed alignment search (DAS; Geiger et al. 2023) and claims that these methods potentially cause "interpretability illusions".
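
To make the method under discussion concrete, here is a minimal sketch of an interchange intervention on a toy PyTorch model; the model, intervention site, and inputs are illustrative assumptions, and real interchange interventions target learned representations inside large networks.

import torch
import torch.nn as nn

# Toy two-layer network standing in for a real model (hypothetical).
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

def hidden(x):
    """Activations after the first layer: the site of the intervention."""
    return model[1](model[0](x))

def interchange_intervention(base, source, dims):
    """Run the model on `base`, but overwrite the chosen hidden
    dimensions with the values they take on `source`."""
    h_base, h_source = hidden(base), hidden(source)
    h_base[..., dims] = h_source[..., dims]  # the interchange step
    return model[2](h_base)

base, source = torch.randn(1, 4), torch.randn(1, 4)
print(interchange_intervention(base, source, dims=[0, 1]))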

Interpretability at Scale: Identifying Causal Mechanisms in Alpaca

1 code implementation • NeurIPS 2023 • Zhengxuan Wu, Atticus Geiger, Thomas Icard, Christopher Potts, Noah D. Goodman

With Boundless DAS, we discover that Alpaca solves a simple numerical reasoning task by implementing a causal model with two interpretable boolean variables.
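
As a hedged illustration of what a causal model with two interpretable boolean variables can look like, here is a toy high-level model for a numerical bracketing task of the kind studied in the paper; the variable names and task details are assumptions, not the paper's exact formulation.

def high_level_model(value, lower, upper):
    """Toy high-level causal model; Boundless DAS aligns boolean
    variables like these with subspaces of Alpaca's representations."""
    above_lower = value >= lower  # boolean variable 1 (hypothetical name)
    below_upper = value <= upper  # boolean variable 2 (hypothetical name)
    return above_lower and below_upper

print(high_level_model(5.0, 3.0, 8.0))  # True: 5.0 lies in [3.0, 8.0]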

Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations

no code implementations • 5 Mar 2023 • Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas Icard, Noah D. Goodman

In DAS, we find the alignment between high-level and low-level models using gradient descent rather than conducting a brute-force search, and we allow individual neurons to play multiple distinct roles by analyzing representations in non-standard bases, i.e., distributed representations.

Explainable Artificial Intelligence (XAI)
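
A minimal sketch of the idea described above: parameterize an orthogonal change of basis, perform the interchange in the rotated (distributed) coordinates, and fit the basis by gradient descent. The dimensions, the toy loss, and the subspace size are assumptions for illustration.

import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

torch.manual_seed(0)
d = 8  # hidden size of the (toy) low-level model

# Orthogonal matrix defining the non-standard basis; learned, not brute-force searched.
rotation = orthogonal(nn.Linear(d, d, bias=False))

def rotated_interchange(h_base, h_source, k):
    """Swap the first k coordinates in the rotated basis, then rotate back."""
    R = rotation.weight
    z_base, z_source = h_base @ R.T, h_source @ R.T
    z_swapped = torch.cat([z_source[:, :k], z_base[:, k:]], dim=1)
    return z_swapped @ R  # back to the original neuron basis

h_base, h_source = torch.randn(2, 1, d)
out = rotated_interchange(h_base, h_source, k=2)

# In DAS proper, `out` re-enters the network and a counterfactual loss is
# backpropagated into `rotation`; a placeholder objective stands in here.
loss = out.square().mean()  # assumption: not the paper's loss
loss.backward()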

Causal Abstraction for Faithful Model Interpretation

no code implementations • 11 Jan 2023 • Atticus Geiger, Chris Potts, Thomas Icard

A faithful and interpretable explanation of an AI model's behavior and internal structure is a high-level explanation that is human-intelligible but also consistent with the known, but often opaque, low-level causal details of the model.

Explainable Artificial Intelligence (XAI)
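
One way this consistency requirement is operationalized in the causal-abstraction literature is to intervene on an aligned high-level variable and its aligned low-level representation in parallel and check that the two models' outputs agree. A minimal sketch, in which the models and the claimed alignment are toy assumptions:

import torch
import torch.nn as nn

torch.manual_seed(0)

def high_level(x1, x2, v_override=None):
    """Toy high-level model: V = x1 + x2, output = [V >= 0]."""
    v = x1 + x2 if v_override is None else v_override
    return int(v >= 0)

# Toy low-level network whose hidden unit 0 is *claimed* to realize V.
net_in, net_out = nn.Linear(2, 4), nn.Linear(4, 1)

def low_level(x, v_override=None):
    h = net_in(x)
    if v_override is not None:
        h = h.clone()
        h[0] = v_override  # intervene on the aligned unit
    return int(net_out(h).item() >= 0)

base, source = torch.tensor([1.0, -2.0]), torch.tensor([3.0, 1.0])
hl = high_level(1.0, -2.0, v_override=3.0 + 1.0)    # set V to its source value
ll = low_level(base, v_override=net_in(source)[0])  # parallel low-level intervention
print("agree:", hl == ll)  # the explanation is faithful iff such checks pass broadly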

Causal Abstraction with Soft Interventions

no code implementations • 22 Nov 2022 • Riccardo Massidda, Atticus Geiger, Thomas Icard, Davide Bacciu

Causal abstraction provides a theory describing how several causal models can represent the same system at different levels of detail.
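
To make the contrast concrete: a hard intervention clamps a variable to a constant, while a soft intervention replaces its mechanism with a new function of its parents. A minimal sketch on a toy three-variable structural causal model (the variables and mechanisms are hypothetical):

def run_scm(intervene=None):
    """Toy SCM X -> Y -> Z; `intervene` maps a variable name to a
    replacement mechanism (a function of the values computed so far)."""
    intervene = intervene or {}
    mechanisms = {
        "X": lambda v: 1.0,
        "Y": lambda v: 2.0 * v["X"],
        "Z": lambda v: v["Y"] + 1.0,
    }
    vals = {}
    for var in ("X", "Y", "Z"):  # topological order
        f = intervene.get(var, mechanisms[var])
        vals[var] = f(vals)
    return vals

print(run_scm())                              # observational run
print(run_scm({"Y": lambda v: 5.0}))          # hard intervention: do(Y = 5)
print(run_scm({"Y": lambda v: v["X"] ** 2}))  # soft intervention: new mechanism for Y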

Inducing Causal Structure for Interpretable Neural Networks

2 code implementations • 1 Dec 2021 • Atticus Geiger, Zhengxuan Wu, Hanson Lu, Josh Rozner, Elisa Kreiss, Thomas Icard, Noah D. Goodman, Christopher Potts

In IIT, we (1) align variables in a causal model (e.g., a deterministic program or Bayesian network) with representations in a neural model and (2) train the neural model to match the counterfactual behavior of the causal model on a base input when the aligned representations in both models are set to the values they would take for a source input.

Counterfactual, Data Augmentation +1
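
A hedged sketch of the training objective described above: run the same interchange intervention in both models and push the neural model's counterfactual output toward the causal model's. The toy models, the alignment (hidden unit 0 realizes the single high-level variable), and the loss are assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)

def high_level_counterfactual(base, source):
    """Toy causal model whose one variable V is the sum of the inputs;
    setting V to its source value makes the output V(source)."""
    return source.sum(-1, keepdim=True)

enc, dec = nn.Linear(2, 4), nn.Linear(4, 1)
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-2)

for step in range(500):
    base, source = torch.randn(2, 32, 2)
    h = enc(base)
    h = torch.cat([enc(source)[:, :1], h[:, 1:]], dim=1)  # interchange unit 0
    loss = (dec(h) - high_level_counterfactual(base, source)).square().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final counterfactual matching loss: {loss.item():.4f}")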

On the Opportunities and Risks of Foundation Models

2 code implementations • 16 Aug 2021 • Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

A Topological Perspective on Causal Inference

no code implementations • NeurIPS 2021 • Duligur Ibeling, Thomas Icard

This paper presents a topological learning-theoretic perspective on causal inference by introducing a series of topologies defined on general spaces of structural causal models (SCMs).

Causal Inference

Causal Abstractions of Neural Networks

1 code implementation • NeurIPS 2021 • Atticus Geiger, Hanson Lu, Thomas Icard, Christopher Potts

Structural analysis methods (e.g., probing and feature attribution) are increasingly important tools for neural network analysis.

Natural Language Inference

Intention as Commitment toward Time

no code implementations • 17 Apr 2020 • Marc van Zee, Dragan Doder, Leendert van der Torre, Mehdi Dastani, Thomas Icard, Eric Pacuit

The first contribution is a logic for reasoning about intention, time and belief, in which assumptions of intentions are represented by preconditions of intended actions.
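
As a rough, informal illustration of that representational choice (not the paper's formal logic), one can model an intended action as carrying preconditions, with the agent's assumptions being the beliefs those preconditions induce at the intended time:

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    preconditions: frozenset

@dataclass
class Agent:
    beliefs: set = field(default_factory=set)
    intentions: list = field(default_factory=list)

    def intend(self, action, time):
        """Committing to `action` at `time` adds the assumption that its
        preconditions hold then (a hypothetical reading of the paper's idea)."""
        self.intentions.append((action, time))
        self.beliefs |= {(p, time) for p in action.preconditions}

agent = Agent()
agent.intend(Action("board_flight", frozenset({"has_ticket", "at_airport"})), time=9)
print(agent.beliefs)  # {('has_ticket', 9), ('at_airport', 9)}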

Probabilistic Reasoning across the Causal Hierarchy

no code implementations • 9 Jan 2020 • Duligur Ibeling, Thomas Icard

We propose a formalization of the three-tier causal hierarchy of association, intervention, and counterfactuals as a series of probabilistic logical languages.

Bayesian Inference, Counterfactual
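
For orientation, the three tiers correspond to three kinds of probabilistic statements; in the standard notation (a textbook example, not the paper's exact syntax):

\begin{align*}
\text{Association:}    &\quad P(y \mid x)              && \text{what does seeing } X = x \text{ tell us about } Y? \\
\text{Intervention:}   &\quad P(y \mid \mathrm{do}(x)) && \text{what happens to } Y \text{ if we set } X = x? \\
\text{Counterfactual:} &\quad P(y_x \mid x', y')       && \text{what would } Y \text{ have been had } X \text{ been } x?
\end{align*}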

On Open-Universe Causal Reasoning

no code implementations • 4 Jul 2019 • Duligur Ibeling, Thomas Icard

We extend two kinds of causal models, structural equation models and simulation models, to infinite variable spaces.
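
A hedged sketch of what an infinite variable space can mean operationally: a simulation model lazily defines a variable X_n for every natural number n, and any finite query unfolds only finitely many of them. The recurrence here is an arbitrary assumption.

import functools

@functools.lru_cache(maxsize=None)
def X(n):
    """Structural equation for the n-th variable; {X_0, X_1, ...} is an
    infinite variable space, but each query touches only finitely many."""
    if n == 0:
        return 1.0
    return 0.5 * X(n - 1) + 1.0  # X_n has a single parent, X_{n-1}

print([X(n) for n in range(5)])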

On the Conditional Logic of Simulation Models

no code implementations • 8 May 2018 • Duligur Ibeling, Thomas Icard

We propose analyzing conditional reasoning by appeal to a notion of intervention on a simulation program, formalizing and subsuming a number of approaches to conditional thinking in the recent AI literature.
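
A minimal sketch of conditional reasoning as intervention on a simulation program: to evaluate "if A were set to a, would C hold?", re-run the program with the assignment to A overridden. The program and variables are hypothetical.

def simulate(overrides=None):
    """A small simulation program; `overrides` intervenes on named assignments."""
    o = overrides or {}
    rain = o.get("rain", False)
    sprinkler = o.get("sprinkler", not rain)
    wet = rain or sprinkler
    return {"rain": rain, "sprinkler": sprinkler, "wet": wet}

# "If it were raining, would the grass be wet?" as an intervention on `rain`:
print(simulate({"rain": True})["wet"])  # True
# Interventions do not backtrack: overriding `sprinkler` leaves `rain` at its default.
print(simulate({"sprinkler": False}))   # {'rain': False, 'sprinkler': False, 'wet': False}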
